\begin{document} \title{Approximate the individually fair $k$-center with outliers} \titlerunning{Approximate the individually fair $k$-center with outliers} \author{Lu Han \and Dachuan Xu \and \\ Yicheng Xu\footnote{Corresponding author. Email:{[email protected]}} \and Ping Yang } \authorrunning{L. Han, D. Xu, Y. Xu, and P. Yang} \institute{L. Han \at School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China. \and D. Xu \at Beijing Institute for Scientific and Engineering Computing, Beijing University of Technology, Beijing 100124, P.R. China. \and Y. Xu \at Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, P.R. China. \and P. Yang \at College of Statistics and Data Science, Beijing University of Technology, Beijing 100124, P.R. China.} \date{Received: date / Accepted: date} \maketitle \begin{abstract} In this paper, we propose and investigate the individually fair $k$-center with outliers (IF$k$CO). In the IF$k$CO, we are given an $n$-sized vertex set in a metric space, as well as integers $k$ and $q$. At most $k$ vertices can be selected as the centers and at most $q$ vertices can be selected as the outliers. The centers are selected to serve all the not-an-outlier (i.e., served) vertices. The so-called individual fairness constraint requires that every served vertex has a selected center not too far away. More precisely, every served vertex is expected to have at least one center among its $\lceil (n-q) / k \rceil$ closest neighbors, since every center serves $(n - q) / k$ vertices on average. The objective is to select centers and outliers, and to assign every served vertex to some center, so as to minimize the maximum fairness ratio over all served vertices, where the fairness ratio of a vertex is defined as the ratio between its distance to the assigned center and its distance to its $\lceil (n - q )/k \rceil_{\rm th}$ closest neighbor. As our main contribution, a 4-approximation algorithm is presented, based on which we develop an improved algorithm from a practical perspective. Extensive experimental results on both synthetic and real-world datasets are presented to illustrate the effectiveness of the proposed algorithms. \keywords{$k$-center \and Individual fairness \and Outliers \and Approximation algorithm } \end{abstract} \section{Introduction} Clustering problems are widely studied due to their applications in operations research and machine learning \cite{bprst,cgts,g,hs85,hs86,l,sta}. As a consequence, some natural and significant variants have also attracted a lot of research interest \cite{c,kps,ks,kls,xxdw,xxzz}. The concept of fairness has been introduced into clustering problems only recently. Chierichetti et al. \cite{cklv} first study fairness in the sense that each cluster is required to have an approximately equal proportion of representatives. Many more notions of fairness for clustering problems have been proposed since then, differing in the objects to which fairness is applied. Some consider the balance between clusters \cite{aekm,biosvw,bcfn,bgkkrss,hjv,sss}, some consider the balance within the selected centers \cite{jnn,kam}, and others consider the balance of cost functions \cite{abv,gsv,mv}. All of these notions can be viewed as so-called group fairness, and very limited work concentrates on individual fairness. Individual fairness is proposed by Jung et al.
\cite{jkl} in the sense of population density. They study the individually fair $k$-center (IF$k$C), where an $n$-sized vertex set in a metric space and an integer $k$ are given. At most $k$ vertices can be selected as the centers to serve all the given vertices. A vertex expects that there exists a center among its $\lceil n / k \rceil$ closest neighbors, since each selected center serves $n/k$ vertices on average. Jung et al. \cite{jkl} show that it is sometimes impossible to find centers that satisfy the expectation of every vertex, so the IF$k$C focuses on how far the clustering is from this ideal expectation. Specifically, the objective of the IF$k$C is to select at most $k$ vertices as centers and assign each vertex to some center, so as to minimize the maximum fairness ratio over all vertices, where the fairness ratio of a vertex is defined as the ratio between its distance to the assigned center and its distance to its $\lceil n /k \rceil_{\rm th}$ closest neighbor. Jung et al. \cite{jkl} give a $4$-approximation algorithm for the IF$k$C. Soon afterwards, under the notion of individual fairness in \cite{jkl}, Mahabadi and Vakilian \cite{mvi} and Vakilian and Yal{\c{c}}{\i}ner \cite{vy} study $k$-clustering with $l_p$-norm cost functions. However, in the IF$k$C an isolated vertex may cause a huge loss in the overall clustering quality, and it is of significance to overcome this shortcoming. Towards this end, we introduce the individually fair $k$-center with outliers (IF$k$CO) in this paper, which allows some vertices, called outliers, to be discarded when clustering. Thus, an additional integer $q$ is given. At most $q$ vertices can be selected as outliers, which are not served. If a vertex is selected as an outlier, the distance between it and the centers is irrelevant. If a vertex is not an outlier, it expects that there exists a center among its $\lceil (n-q) / k \rceil$ closest neighbors, since ideally we wish each center to serve $(n -q) / k$ vertices. The goal is to select at most $k$ centers and at most $q$ outliers, and to assign each not-an-outlier vertex to some center, so as to minimize the maximum outlier-related fairness ratio over all not-an-outlier vertices, where the outlier-related fairness ratio of a vertex is defined as the ratio between its distance to the assigned center and its distance to its $\lceil (n-q) /k \rceil_{\rm th}$ closest neighbor. Our contributions are fourfold. \begin{itemize} \item Contribution 1: We first present a naive but natural algorithm for the IF$k$CO and prove that the algorithm may return a solution that is far from optimal. \item Contribution 2: After observing that the naive algorithm's rule for selecting centers is not well suited to the problem, we design a basic $4$-approximation algorithm for the IF$k$CO, which avoids the shortcoming of the naive algorithm. \item Contribution 3: Unfortunately, the basic algorithm has its own limitation: it may select very few vertices as outliers. We further propose a refined $4$-approximation algorithm to deal with this limitation. \item Contribution 4: We apply the refined algorithm to several instances and show that it performs well. \end{itemize} The remainder of this paper is structured as follows. In section 2, the mathematical description of the IF$k$CO is given, followed by a naive algorithm. In section 3, the main part of this paper, we present two algorithms for the IF$k$CO, a basic one and a refined one.
In section 4, we test the refined algorithm on a large number of synthetic and real-world instances. In section 5, we discuss the practical aspects of the proposed algorithms as well as some interesting directions. \section{Preliminaries} We start with the mathematical descriptions of the IF$k$CO and IF$k$C. A naive attempt shows that the algorithm for the IF$k$C easily yields a feasible solution for an IF$k$CO instance; however, this solution can be arbitrarily bad. \subsection{Problem descriptions} In any instance of the IF$k$CO, denoted by $\mathcal{I}_{{\rm IF}k{\rm CO}}$, we are given a vertex set $V$ with size $n$. Let $d_{ij}$ be the distance between a pair of vertices $(i, j)$ with $i, j \in V$. It is assumed that the distances are metric, i.e., obey the following assumptions. \begin{itemize} \item They are \emph{non-negative}, i.e., $d_{ij} \geq 0$ for any $i, j \in V$; \item They are \emph{symmetric} and vanish on the diagonal, i.e., $d_{ii}=0$ and $d_{ij} = d_{ji}$ for any $i, j \in V$; \item They satisfy the \emph{triangle inequality}, i.e., $d_{hi} + d_{ij} \geq d_{hj}$ for any $h, i, j \in V$. \end{itemize} Also, we are given the integers $k$ and $q$, the maximum numbers of vertices that can be selected as centers and as outliers, respectively. For each $i \in V$, let $NR_q(i)$ be the distance between $i$ and its $\lceil (n - q )/k \rceil_{\rm th}$ nearest neighbor. Note that each vertex is its own nearest neighbor. We call $NR_q(i)$ the outlier-related neighborhood radius of $i$. The aim is to select vertices $S \subseteq V$ as centers and $O \subseteq V$ as outliers, and to assign each vertex $i \in V \setminus O$ to some center $\sigma(i) \in S$, such that $|S| \leq k$, $|O|\leq q$, and the maximum ratio $d_{\sigma(i)i} / NR_q(i)$ over the vertices in $V \setminus O$ is minimized. We use $(S, O, \sigma)$ to denote a solution for the IF$k$CO instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$, in which $S \subseteq V$ is the set of selected centers, $O \subseteq V$ is the set of selected outliers and $\sigma: V \setminus O \rightarrow S $ is an assignment mapping each vertex in $V \setminus O$ to some center in $S$. A solution $(S, O, \sigma)$ is feasible if $|S| \leq k$ and $|O| \leq q$. For each vertex $i \in V \setminus O$, we call $d_{\sigma(i)i} / NR_q(i)$ its outlier-related fairness ratio. For the solution $(S, O, \sigma)$, we call $\alpha(S, O, \sigma)$ its outlier-related fairness ratio, which is the maximum outlier-related fairness ratio of a vertex in $V \setminus O$, i.e., $$\alpha(S, O, \sigma) = \max \limits_{i \in V \setminus O} \frac{d_{\sigma(i)i}}{NR_q(i)}.$$ Denote by $(S^*, O^*, \sigma^*)$ the optimal solution for $\mathcal{I}_{{\rm IF}k{\rm CO}}$, and by $OPT_{\mathcal{I}_{{\rm IF}k{\rm CO}}}$ the outlier-related fairness ratio of $(S^*, O^*, \sigma^*)$, i.e., $$(S^*, O^*, \sigma^*) = \arg\min \limits_{(S, O, \sigma) : |S| \leq k, |O| \leq q} \alpha (S, O, \sigma),$$ $${\rm and} ~ OPT_{\mathcal{I}_{{\rm IF}k{\rm CO}}} = \alpha(S^*, O^*, \sigma^*) = \max \limits_{i \in V \setminus O^*} \frac{d_{\sigma^*(i)i}}{NR_q(i)}.$$
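For concreteness, the following Python sketch illustrates the two quantities defined above; it is our own illustration (all function names are ours) and not part of the formal model. It computes the outlier-related neighborhood radii $NR_q(i)$ from the distance matrix and evaluates the outlier-related fairness ratio $\alpha(S, O, \sigma)$ of a given solution.
\begin{verbatim}
import math

def outlier_radii(d, n, k, q):
    # NR_q(i): distance from i to its ceil((n - q) / k)-th nearest neighbor,
    # counting each vertex as its own nearest neighbor (d[i][i] = 0).
    m = math.ceil((n - q) / k)
    return [sorted(d[i])[m - 1] for i in range(n)]

def fairness_ratio(d, NRq, S, O, sigma):
    # alpha(S, O, sigma): maximum of d(sigma(i), i) / NR_q(i) over served vertices.
    served = [i for i in range(len(d)) if i not in O]
    return max(d[sigma[i]][i] / NRq[i] for i in served)
\end{verbatim}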
By setting $q=0$, the instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$ reduces to an IF$k$C instance. More specifically, in an IF$k$C instance $\mathcal{I}_{{\rm IF}k{\rm C}}$, we are given a vertex set $V$. Each pair of vertices $(i, j)$, where $i, j \in V$, has a distance $d_{ij}$. We assume that the distances are non-negative, symmetric, and satisfy the triangle inequality. Also, we are given an integer $k$, the maximum number of vertices that can be selected as the centers. For each vertex $i \in V$, let $NR(i)$ be the distance between $i$ and its $\lceil {n }/{k} \rceil_{\rm th}$ nearest neighbor. We call $NR(i)$ the neighborhood radius of $i$. The goal is to select vertices $S \subseteq V$ as centers and assign each vertex $i \in V$ to some center $\sigma(i) \in S$, such that $|S| \leq k$ and the maximum ratio $d_{\sigma(i)i}/NR(i)$ over the vertices in $V$ is minimized. We use $(S, \sigma)$ to denote a solution for the IF$k$C instance $\mathcal{I}_{{\rm IF}k{\rm C}}$, in which $S \subseteq V$ is the set of selected centers and $\sigma: V \rightarrow S $ is an assignment mapping each vertex in $V$ to some center in $S$. A solution $(S, \sigma)$ is feasible if $|S| \leq k$. For each vertex $i \in V$, we call $d_{\sigma(i)i}/NR(i)$ its fairness ratio. For the solution $(S, \sigma)$, we call $\alpha(S, \sigma)$ its fairness ratio, which is the maximum fairness ratio of a vertex in $V$, i.e., $$\alpha(S, \sigma) = \max \limits_{i \in V} \frac{d_{\sigma(i)i}}{NR(i)}.$$ \subsection{An attempt} Herein, we present a naive but quite natural algorithm that gives a feasible solution for $\mathcal{I}_{{\rm IF}k{\rm CO}}$. For the instance, first remove its input $q$ in order to obtain an IF$k$C instance $\mathcal{I}_{{\rm IF}k{\rm C}}$. Then, use the algorithm for the IF$k$C to solve $\mathcal{I}_{{\rm IF}k{\rm C}}$ and obtain a feasible solution for $\mathcal{I}_{{\rm IF}k{\rm CO}}$. The naive algorithm is shown as Algorithm 1. It is worth mentioning that Step 2 of Algorithm 1 is a slightly modified version of the 2FAIRKCENTER algorithm for the IF$k$C that appeared in \cite{jkl}. \begin{algorithm} \label{alg1} \caption{: A Naive Algorithm for the IF$k$CO.} { {\bf Input:} An IF$k$CO instance $\mathcal{I}_{{\rm IF}k{\rm CO}}= (V, \{d_{ij}\}_{i, j \in V}, k, q)$.}\\ { {\bf Output:} A feasible solution $(S, O, \sigma)$ for the instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$.} \begin{description} \item[Step 1] For $\mathcal{I}_{{\rm IF}k{\rm CO}}$, get rid of $q$ to yield an IF$k$C instance $\mathcal{I}_{{\rm IF}k{\rm C}}= (V, \{d_{ij}\}_{i, j \in V}, k)$. \item[Step 2] {Initially, set $P:= V$, $S: = \emptyset$}. \begin{description} \item ~~~{\bf While} $P \not= \emptyset$ {\bf do} \begin{description} \item Find a vertex $s \in P$ such that \begin{description} \item $s := \arg \min \limits_{i \in P} NR(i).$ \end{description} \item Update $S:= S \cup \{s\}$, $P:= \{ i \in P: d_{is} > 2 \cdot NR(i)\}$. \end{description} \end{description} \item[Step 3] Set $O := \emptyset$, $\sigma(i):= \arg \min_{h \in S} d_{ih}$ for each $i \in V$. \item[Step 4] Output $(S, O, \sigma)$ as the solution for the instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$. \end{description} \end{algorithm} For any selected center $s \in S$, denote by $V(s, NR(s))$ the set of vertices within distance $NR(s)$ from $s$. We call $V(s, NR(s))$ the neighboring vertex set of $s$. Here are some observations about Algorithm 1. \begin{obs} \label{o1} For any selected center $s \in S$, there are at least $n/k$ vertices in its neighboring vertex set. \end{obs} This observation follows from the definition of $NR(s)$. \begin{obs} \label{o2} If a vertex $s$ is selected as a center, any other vertex in its neighboring vertex set cannot be a center.
\end{obs} \begin{proof} When a vertex $s$ is selected as a center, each vertex $i \in V(s, NR(s))$ has either already been removed from the current $P$ or satisfies $d_{is} \leq NR(s) \leq 2NR(s) \leq 2 NR(i)$. The last inequality follows from the principle of selecting centers in Step 2. In the second case, the vertex $i$ will be removed from $P$ and cannot become a center, because of the principle of updating $P$ in Step 2. \qed \end{proof} \begin{obs} \label{o3} For any two selected centers $s, s^\prime \in S$, their neighboring vertex sets are disjoint. \end{obs} \begin{proof} Assume that center $s$ is selected before $s^\prime$. If there exists a vertex $i \in V(s,NR(s)) \cap V(s^\prime,NR(s^\prime))$, we have that $d_{ss^\prime} \leq d_{si} + d_{is^\prime} \leq NR(s) + NR(s^\prime) \leq 2 NR(s^\prime)$. The last inequality follows from the principle of selecting centers in Step 2. In this case, because of the principle of updating $P$ in Step 2, once $s$ is selected, the vertex $s^\prime$ is removed from $P$ and cannot become a center, which is a contradiction. \qed \end{proof} The following lemma gives the feasibility of the solution $(S, O, \sigma)$ returned by Algorithm 1. \begin{lem} \label{lem1} Algorithm 1 outputs a feasible solution $(S, O, \sigma)$ for any {\rm IF}$k${\rm CO} instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$. \end{lem} \begin{proof} Recall that a solution $(S, O, \sigma)$ is feasible only if $|S| \leq k$ and $|O| \leq q$. Since $O = \emptyset$, the cardinality bound on $O$ obviously holds. We only need to prove $|S| \leq k$. From Observations \ref{o1}-\ref{o3}, each selected center accounts for at least $n/k$ vertices, namely those in its neighboring vertex set, all of which have been removed from $P$ by the end of the iteration in which that center is selected; moreover, these sets are pairwise disjoint. Once $P=\emptyset$, the iterations end. Therefore, there are at most $k$ iterations in Step 2, which implies $|S| \leq k$. This completes the proof of the lemma.
\qed \end{proof} Note that Algorithm 1 may return a feasible solution that is far from optimal for some IF$k$CO instances, as shown by Example 1. \\ {\bf Example 1.} Consider the IF$k$CO instance $(V, \{d_{ij}\}_{i, j \in V}, k, q)$ where $V=\{h, i, j\}$, $d_{hi}=M$, $d_{hj}=M$, $d_{ij}=1$, $k=1$ and $q=1$. Suppose that $M >1$. Recall that for any $v \in V$, its neighborhood radius $NR(v)$ is the distance between $v$ and its $\lceil n/k \rceil_{\rm th}$ nearest neighbor. Since $n=|\{h, i, j\}|=3$ and $\lceil n/k \rceil = \lceil 3/1 \rceil = 3$, the neighborhood radii used in Algorithm 1 are $NR(h)=NR(i)=NR(j)=M$. If we use Algorithm 1 to solve the instance, Algorithm 1 will arbitrarily select a vertex in $V$ as the center. It is possible that Algorithm 1 selects $h$ as the center and outputs $(\{h\}, \emptyset, \sigma_1)$ as the solution, where $\sigma_1(h)=\sigma_1(i)=\sigma_1(j)=h$. Recall that for any $v \in V$, its outlier-related neighborhood radius $NR_q(v)$ is the distance between $v$ and its $\lceil (n -q)/{k} \rceil_{\rm th}$ nearest neighbor. Since $\lceil (n-q)/k \rceil = \lceil (3-1)/ 1 \rceil = 2$, the outlier-related neighborhood radii are $NR_q(h)=M$ and $NR_q(i)=NR_q(j)=1$. Therefore, the outlier-related fairness ratio of the solution $(\{h\}, \emptyset, \sigma_1)$ is $M$, i.e., \begin{eqnarray} \alpha(\{h\}, \emptyset, \sigma_1) &=& \max \limits_{v \in \{h, i, j\} \setminus \emptyset} \frac{d_{\sigma_1(v)v}}{NR_q(v)} \nonumber \\ &=&\max\{\frac{d_{\sigma_1(h)h}}{NR_q(h)},\frac{d_{\sigma_1(i)i}}{NR_q(i)}, \frac{d_{\sigma_1(j)j}}{NR_q(j)}\} \nonumber\\ &=&\max\{\frac{d_{hh}}{NR_q(h)},\frac{d_{hi}}{NR_q(i)}, \frac{d_{hj}}{NR_q(j)}\} \nonumber\\ &=&\max\{\frac{0}{M},\frac{M}{1},\frac{M}{1}\} \nonumber \\ &=&M. \nonumber \end{eqnarray} The optimal solution is to select either $i$ or $j$ as the center and $h$ as the outlier. Assume that the selected center is $i$. Then, the optimal solution is $(\{i\}, \{h\}, \sigma^*)$, where $\sigma^*(i)=\sigma^*(j)=i$. The outlier-related fairness ratio of the solution $(\{i\}, \{h\}, \sigma^*)$ is $1$, i.e., \begin{eqnarray} \alpha(\{i\}, \{h\}, \sigma^*) &=& \max \limits_{v \in \{h, i, j\} \setminus \{h\}} \frac{d_{\sigma^*(v)v}}{NR_q(v)} \nonumber \\ &=&\max\{\frac{d_{\sigma^*(i)i}}{NR_q(i)},\frac{d_{\sigma^*(j)j}}{NR_q(j)}\} \nonumber\\ &=&\max\{\frac{d_{ii}}{NR_q(i)}, \frac{d_{ij}}{NR_q(j)}\} \nonumber\\ &=&\max\{\frac{0}{1},\frac{1}{1}\} \nonumber \\ &=&1. \nonumber \end{eqnarray} Therefore, we have $$\frac{\alpha(\{h\}, \emptyset, \sigma_1)}{\alpha(\{i\}, \{h\}, \sigma^*)} =\frac{M}{1},$$ which implies that Algorithm 1 may return a solution that is far from optimal. An illustration of Example 1 is given in Fig. \ref{fig1}. \begin{figure}[h] \centering { \label{fig1a} \includegraphics[height=4.5cm]{exp1.jpg}} \caption{An illustration of Example 1. In the top graph, the circles are the vertices and the distances are given alongside the vertex pairs. The red and blue circles represent the selected centers and the outlier, respectively. The dotted lines represent the assignments of the vertices. } \label{fig1} \end{figure}
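The gap exhibited in Example 1 can also be checked numerically. The following short sketch (our own illustration, with $M$ instantiated to a concrete value) evaluates both solutions with the definitions of Section 2 and prints the ratios $M$ and $1$.
\begin{verbatim}
import math

# Example 1: V = {h, i, j}, d(h,i) = d(h,j) = M, d(i,j) = 1, k = 1, q = 1.
M = 1000.0
d = {('h', 'h'): 0.0, ('i', 'i'): 0.0, ('j', 'j'): 0.0,
     ('h', 'i'): M, ('h', 'j'): M, ('i', 'j'): 1.0}
dist = lambda u, v: d[(u, v)] if (u, v) in d else d[(v, u)]

n, k, q = 3, 1, 1
m = math.ceil((n - q) / k)  # = 2
NRq = {v: sorted(dist(v, u) for u in 'hij')[m - 1] for v in 'hij'}  # h: M, i: 1, j: 1

alpha = lambda O, sigma: max(dist(sigma[v], v) / NRq[v] for v in 'hij' if v not in O)
print(alpha(set(), {'h': 'h', 'i': 'h', 'j': 'h'}))  # M     (Algorithm 1)
print(alpha({'h'}, {'i': 'i', 'j': 'i'}))            # 1.0   (optimal solution)
\end{verbatim}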
\section{Advisable algorithms for the IF$k$CO} In this section, we first propose a basic $4$-approximation algorithm for the IF$k$CO. Then, we give a refined algorithm, which overcomes the limitation of the basic one. \subsection{A basic algorithm} The main drawback of Algorithm 1 is that selecting a vertex $i$ with the minimum neighborhood radius $NR(i)$ may lead to a very bad outcome, and that no vertices are selected as outliers in the obtained solution. Therefore, we specifically design a basic algorithm for the IF$k$CO. Our basic algorithm keeps selecting a selectable vertex $i$ with the minimum outlier-related neighborhood radius $NR_q(i)$ as a center while the set of selectable vertices is not empty and the number of currently chosen centers is less than $k$. The basic algorithm is formally presented as Algorithm 2. \begin{algorithm} \label{alg2} \caption{: A Basic Algorithm for the IF$k$CO.} { {\bf Input:} An IF$k$CO instance $\mathcal{I}_{{\rm IF}k{\rm CO}}= (V, \{d_{ij}\}_{i, j \in V}, k, q)$.}\\ { {\bf Output:} A feasible solution $(S, O, \sigma)$ for the instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$.} \begin{description} \item[Step 1] {Initially, set $P:= V$, $S: = \emptyset$}. \begin{description} \item ~~~{\bf While} $P \not= \emptyset$, $|S| < k $ {\bf do} \begin{description} \item Find a vertex $s \in P$ such that \begin{description} \item $s := \arg \min \limits_{i \in P} NR_q(i).$ \end{description} \item Update $S:= S \cup \{s\}$, $P:= \{ i \in P: d_{is} > 2 \cdot NR_q(i)\}$. \end{description} \end{description} \item[Step 2] Set $O := P$, $\sigma(i):= \arg \min_{h \in S} d_{ih}$ for each $i \in V \setminus O$. \item[Step 3] Output $(S, O, \sigma)$ as the solution for the instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$. \end{description} \end{algorithm} For any selected center $s \in S$, denote by $V(s, NR_q(s))$ the set of vertices within distance $NR_q(s)$ from $s$. We call $V(s, NR_q(s))$ the outlier-related neighboring vertex set of $s$. Here are some observations about Algorithm 2. \begin{obs} \label{o4} For any selected center $s \in S$, there are at least $(n - q )/k$ vertices in its outlier-related neighboring vertex set. \end{obs} This observation follows from the definition of $NR_q(s)$. \begin{obs} \label{o5} If a vertex $s$ is selected as a center, any other vertex in its outlier-related neighboring vertex set cannot be a center. \end{obs} \begin{proof} When a vertex $s$ is selected as a center, each vertex $i \in V(s, NR_q(s))$ has either already been removed from the current $P$ or satisfies $d_{is} \leq NR_q(s) \leq 2NR_q(s) \leq 2 NR_q(i)$. The last inequality follows from the principle of selecting centers in Step 1. In the second case, the vertex $i$ will be removed from $P$ and cannot become a center, because of the principle of updating $P$ in Step 1. \qed \end{proof} \begin{obs} \label{o6} For any two selected centers $s, s^\prime \in S$, their outlier-related neighboring vertex sets are disjoint. \end{obs} \begin{proof} Assume that center $s$ is selected before $s^\prime$. If there exists a vertex $i \in V(s,NR_q(s)) \cap V(s^\prime,NR_q(s^\prime))$, we have that $d_{ss^\prime} \leq d_{si} + d_{is^\prime} \leq NR_q(s) + NR_q(s^\prime) \leq 2 NR_q(s^\prime)$. The last inequality follows from the principle of selecting centers in Step 1. In this case, because of the principle of updating $P$ in Step 1, once $s$ is selected, the vertex $s^\prime$ is removed from $P$ and cannot become a center, which is a contradiction. \qed \end{proof}
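Before turning to the analysis, we summarize Steps 1 and 2 of Algorithm 2 in executable form. The sketch below is our own illustration; \texttt{outlier\_radii} refers to the helper sketched in Section 2, and ties in the minimization are broken arbitrarily.
\begin{verbatim}
def basic_algorithm(d, n, k, q):
    # Algorithm 2 (sketch): repeatedly pick the selectable vertex with the
    # minimum NR_q as a center, then drop from P every vertex i with
    # d(i, s) <= 2 * NR_q(i); the vertices left in P become the outliers.
    NRq = outlier_radii(d, n, k, q)
    P, S = set(range(n)), []
    while P and len(S) < k:
        s = min(P, key=lambda i: NRq[i])
        S.append(s)
        P = {i for i in P if d[i][s] > 2 * NRq[i]}
    O = P
    sigma = {i: min(S, key=lambda h: d[i][h]) for i in range(n) if i not in O}
    return S, O, sigma
\end{verbatim}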
The following lemma gives the feasibility of the solution $(S, O, \sigma)$ obtained from Algorithm 2. \begin{lem} \label{lem2} Algorithm 2 outputs a feasible solution $(S, O, \sigma)$ for any {\rm IF}$k${\rm CO} instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$. \end{lem} \begin{proof} Recall that a solution $(S, O, \sigma)$ is feasible only if $|S| \leq k$ and $|O|\leq q.$ Note that Step 1 of Algorithm 2 guarantees that $|S| \leq k$. Thus we only need to prove $|O| \leq q$. We consider the two cases that may terminate Step 1. The simpler case is $P = \emptyset$, and the other case is $|S|=k$. If $P = \emptyset$, the cardinality bound on $O$ obviously holds, since $|O|=|P|=0 \leq q$. If $|S|=k$, there are $k$ iterations. We conclude from Observations \ref{o4}-\ref{o6} that each selected center accounts for at least $(n - q )/k$ vertices, namely those in its outlier-related neighboring vertex set, all of which have been removed from $P$ by the end of the iteration in which that center is selected; moreover, these sets are pairwise disjoint. Therefore, the number of vertices removed from the initial $P$ is at least $n-q$. We have that $|O| = |P| \leq n - (n-q)= q$. This completes the proof of the lemma. \qed \end{proof} \begin{lem} \label{l2a} The outlier-related fairness ratio of the solution $(S, O, \sigma)$ obtained from Algorithm 2 for any {\rm IF}$k${\rm CO} instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$ is at most $2$, i.e., $$\alpha(S, O, \sigma) \leq 2.$$ \end{lem} \begin{proof} For the solution $(S, O, \sigma)$ obtained from Algorithm 2, it can be seen from Step 1 of Algorithm 2 that for each vertex $i \in V \setminus O$, there must exist a center $s \in S$ such that $d_{is} \leq 2 NR_q(i)$. Recall that $\sigma(i):= \arg \min_{h \in S} d_{ih}$ for each $i \in V \setminus O$. Therefore, we have that $$d_{\sigma(i)i} \leq d_{is} \leq 2 NR_q(i).$$ That means, $$\frac{d_{\sigma(i)i}}{NR_q(i)} \leq 2 {\rm ~for~any}~ i \in V\setminus O.$$ Thus, we obtain that \begin{eqnarray} &&\alpha(S, O, \sigma)= \max \limits_{i \in V \setminus O} \frac{d_{\sigma(i)i}}{NR_q(i)} \leq 2.\nonumber \end{eqnarray} This completes the proof of this lemma. \qed \end{proof} \begin{lem} \label{l2b} The outlier-related fairness ratio of the optimal solution $(S^*, O^*, \sigma^*)$ for any {\rm IF}$k${\rm CO} instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$ is at least $1/2$, i.e., $$\alpha(S^*, O^*, \sigma^*)\geq \frac{1}{2}.$$ \end{lem} \begin{proof} For each center $s^* \in S^*$ in the optimal solution $(S^*, O^*, \sigma^*)$, denote by $D(s^*)$ the set of vertices assigned to $s^*$ under the assignment $\sigma^*$, i.e., $$D(s^*) = \{i \in V \setminus O^*: \sigma^*(i) = s^*\}.$$ Since $(S^*, O^*, \sigma^*)$ is a feasible solution, we have that $|S^*| \leq k$, that $|O^*|\leq q$, and that $$|\bigcup_{s^* \in S^*}D(s^*)| = \sum \limits_{s^* \in S^*}|D(s^*)| =|V \setminus O^*|=|V|-|O^*| \geq n-q.$$ Therefore, $$\frac{\sum \limits_{s^* \in S^*} |D(s^*)|}{|S^*|} \geq \frac{n-q}{k}.$$ There must exist some center $s^\prime \in S^*$ satisfying $|D(s^\prime)| \geq (n-q) / k.$ Let $s_f$ be the vertex farthest from $s^\prime$ in $D(s^\prime)$ under the assignment $\sigma^*$.
Note that for any vertex $i \in D(s^\prime)$, we have that $$d_{is_f} \leq d_{is^\prime}+d_{s^\prime s_f} \leq 2 \cdot d_{s^\prime s_f}.$$ Since each vertex $i$ in $D(s^\prime)$ is within distance $2d_{s^\prime s_f}$ from $s_f$ and $|D(s^\prime)|\geq (n-q) / k$, combining with the definition of $NR_q(s_f)$, we obtain that $$NR_q(s_f) \leq 2 \cdot d_{s^\prime s_f}.$$ Thus, for $s_f \in V \setminus O^*$ we have $$\frac{d_{\sigma^*(s_f)s_f}}{NR_q(s_f)} = \frac{d_{s^\prime s_f}}{NR_q(s_f)} \geq \frac{1}{2}.$$ Therefore, \begin{eqnarray} &&\alpha(S^*, O^*, \sigma^*)= \max \limits_{i \in V \setminus O^*} \frac{d_{\sigma^*(i)i}}{NR_q(i)} \geq \frac{d_{\sigma^*(s_f)s_f}}{NR_q(s_f)} \geq \frac{1}{2}.\nonumber \end{eqnarray} This completes the proof. \qed \end{proof} From Lemmas \ref{l2a} and \ref{l2b}, for the solution $(S, O, \sigma)$ obtained from Algorithm 2 and the optimal solution $(S^*, O^*, \sigma^*)$, we have that $$\alpha(S, O, \sigma) \leq 4 \cdot \alpha(S^*, O^*, \sigma^*) = 4 \cdot OPT_{\mathcal{I}_{{\rm IF}k{\rm CO}}},$$ which implies the following result for Algorithm 2. \begin{thm}\label{thm2} Algorithm 2 is a $4$-approximation algorithm for the {\rm IF}$k${\rm CO}. \end{thm} Suppose that Algorithm 2 is run on Example 1. It will arbitrarily select $i$ or $j$ as the center and leave $h$ as an outlier, since Algorithm 2 keeps searching for a selectable vertex $v$ with the minimum outlier-related neighborhood radius $NR_q(v)$ to select as a center and $NR_q(h)=M > 1=NR_q(i)=NR_q(j).$ Therefore, Algorithm 2 outputs an optimal solution for this instance. \subsection{A refined algorithm} A limitation of Algorithm 2 is that it may select very few vertices as outliers. To overcome this shortcoming, we present a refined algorithm which uses a binary search on a parameterized version of Algorithm 2. The refined algorithm is formally described in Algorithm 3. Compared with Algorithm 2, an additional parameter $l$ needs to be given as an input to Algorithm 3. The integer $l$ limits the number of iterations used to search for a solution with a better outlier-related fairness ratio: the more iterations, the more likely we are to obtain a smaller ratio. \begin{algorithm} \label{alg3} \caption{: A Refined Algorithm for the IF$k$CO.} { {\bf Input:} An IF$k$CO instance $\mathcal{I}_{{\rm IF}k{\rm CO}}= (V, \{d_{ij}\}_{i, j \in V}, k, q)$, an integer $l \geq 0$.}\\ { {\bf Output:} A feasible solution $(S, O, \sigma)$ for the instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$.} \begin{description} \item[Step 1] Use Algorithm 2 to solve $\mathcal{I}_{{\rm IF}k{\rm CO}}$ and obtain a solution $(S_b, O_b, \sigma_b)$. \item[Step 2] Initially set $t:=0$, $\beta_1:=1$, $\beta_2 := 2$, $\beta:=\beta_1$ and $(S, O, \sigma):=(S_b, O_b, \sigma_b)$. \item[Step 3] {\bf While} $t < l$ {\bf do} \begin{description} \item \begin{description} \item ~~~~Set $P_\beta:=V$, $S_\beta:=\emptyset$. \item {\bf While} $P_\beta \not= \emptyset$, $|S_\beta| < k $ {\bf do} \begin{description} \item Find a vertex $s \in P_\beta$ such that \begin{description} \item $s := \arg \min \limits_{i \in P_\beta} NR_q(i).$ \end{description} \item Update $S_\beta:= S_\beta \cup \{s\}$, $P_\beta:= \{ i \in P_\beta: d_{is} > \beta \cdot NR_q(i)\}$.
\end{description} \item Set $O_\beta := P_\beta$, $\sigma_\beta(i):= \arg \min_{h \in S_\beta} d_{ih}$ for each $i \in V \setminus O_\beta$. \item {\bf If} $|O_\beta| > q$ {\bf then} \begin{description} \item Update $\beta_1 := \beta$, $\beta := (\beta_1 + \beta_2) / 2$, $t:=t+1$. \end{description} \item {\bf If} $|O_\beta| \leq q$ {\bf then} \begin{description} \item Update $(S, O, \sigma) :=(S_\beta, O_\beta, \sigma_\beta)$, $\beta_2 := \beta$, $\beta := (\beta_1 + \beta_2) / 2$, $t:=t+1$. \end{description} \end{description} \end{description} \item[Step 4] Output $(S, O, \sigma)$ as the solution for the instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$. \end{description} \end{algorithm}
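The binary search of Algorithm 3 can be sketched as follows; this is again our own Python-style illustration, reusing \texttt{outlier\_radii} and \texttt{basic\_algorithm} from the sketches above, and it mirrors the update rules for $\beta_1$, $\beta_2$ and $\beta$ in Step 3.
\begin{verbatim}
def refined_algorithm(d, n, k, q, l):
    # Algorithm 3 (sketch): start from the solution of Algorithm 2, then
    # binary-search the threshold beta in [1, 2] for small values that
    # still leave at most q vertices unserved.
    NRq = outlier_radii(d, n, k, q)
    S, O, sigma = basic_algorithm(d, n, k, q)          # Step 1
    beta1, beta2, beta = 1.0, 2.0, 1.0                 # Step 2
    for _ in range(l):                                 # Step 3
        P, S_b = set(range(n)), []
        while P and len(S_b) < k:
            s = min(P, key=lambda i: NRq[i])
            S_b.append(s)
            P = {i for i in P if d[i][s] > beta * NRq[i]}
        O_b = P
        if len(O_b) > q:                               # infeasible: raise beta
            beta1 = beta
        else:                                          # feasible: keep it, lower beta
            S, O = S_b, O_b
            sigma = {i: min(S_b, key=lambda h: d[i][h])
                     for i in range(n) if i not in O_b}
            beta2 = beta
        beta = (beta1 + beta2) / 2
    return S, O, sigma
\end{verbatim}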
The following lemma gives the feasibility of the solution $(S, O, \sigma)$ obtained from Algorithm 3. \begin{lem} \label{lem3} Algorithm 3 outputs a feasible solution $(S, O, \sigma)$ for any {\rm IF}$k${\rm CO} instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$. \end{lem} \begin{proof} Recall that a solution $(S, O, \sigma)$ is feasible if $|S| \leq k$ and $|O| \leq q$. Initially, Algorithm 3 sets $(S, O, \sigma)$ to $(S_b, O_b, \sigma_b)$, which is a feasible solution obtained from Algorithm 2. Then, the principle of updating the solution $(S, O, \sigma)$ in Step 3 guarantees that every solution $(S_\beta, O_\beta, \sigma_\beta)$ that replaces the current one satisfies $|S_\beta| \leq k$ and $|O_\beta| \leq q$. Therefore, the solution obtained from Algorithm 3 is a feasible solution. \qed \end{proof} \begin{lem} \label{l3a} The outlier-related fairness ratio of the solution $(S, O, \sigma)$ obtained from Algorithm 3 for any {\rm IF}$k${\rm CO} instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$ is at most $2$, i.e., $$\alpha(S, O, \sigma) \leq 2.$$ \end{lem} \begin{proof} If Step 3 of Algorithm 3 never updates the solution $(S, O, \sigma)$, we output $(S_b, O_b, \sigma_b)$ as the final solution. Therefore, from Lemma \ref{l2a}, we have that \begin{eqnarray} \alpha(S, O, \sigma)&=&\alpha(S_b, O_b, \sigma_b) \leq 2.\label{th7a} \end{eqnarray} Now consider the case that Step 3 of Algorithm 3 updates the solution $(S, O, \sigma)$. Assume that the final updated solution is $(S_{\beta_f}, O_{\beta_f}, \sigma_{\beta_f})$. For any $i \in V \setminus O = V \setminus O_{\beta_f}$, from Step 3, we have that $$d_{\sigma(i)i}= d_{\sigma_{\beta_f}(i)i} \leq \beta_f NR_q(i).$$ That means, $$\frac{d_{\sigma(i)i}}{NR_q(i)} \leq \beta_f {\rm ~for~any}~ i \in V\setminus O.$$ Thus, we obtain that \begin{eqnarray} &&\alpha(S, O, \sigma)= \max \limits_{i \in V \setminus O} \frac{d_{\sigma(i)i}}{NR_q(i)} \leq \beta_f \leq 2.\label{th7b} \end{eqnarray} The last inequality follows from the update principle of $\beta$ in Step 3, which guarantees that $\beta_f \in [1, 2]$. Combining inequalities (\ref{th7a}) and (\ref{th7b}), we complete the proof of this lemma. \qed \end{proof} From Lemmas \ref{l2b} and \ref{l3a}, for the solution $(S, O, \sigma)$ obtained from Algorithm 3 and the optimal solution $(S^*, O^*, \sigma^*)$, we have that $$\alpha(S, O, \sigma) \leq 4 \cdot \alpha(S^*, O^*, \sigma^*) = 4 \cdot OPT_{\mathcal{I}_{{\rm IF}k{\rm CO}}},$$ which implies the following main result for Algorithm 3. \begin{thm}\label{thm3} Algorithm 3 is a $4$-approximation algorithm for the {\rm IF}$k${\rm CO}. \end{thm} Intuitively, employing a parameterized algorithm should also help the naive approach to obtain a solution with a better outlier-related fairness ratio. We therefore provide a parameterized version of Algorithm 1, given as Algorithm 4. In the next section, we compare the performance of Algorithms 3 and 4 on a real-world dataset. \begin{algorithm} \label{alg4} \caption{: A Parameterized Version of Algorithm 1.} { {\bf Input:} An IF$k$CO instance $\mathcal{I}_{{\rm IF}k{\rm CO}}= (V, \{d_{ij}\}_{i, j \in V}, k, q)$, an integer $l \geq 0$.}\\ { {\bf Output:} A feasible solution $(S, O, \sigma)$ for the instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$.} \begin{description} \item[Step 1] Use Algorithm 1 to solve $\mathcal{I}_{{\rm IF}k{\rm CO}}$ and obtain a solution $(S_a, O_a, \sigma_a)$. \item[Step 2] Initially set $t:=0$, $\beta_1:=1$, $\beta_2 := 2$, $\beta:=\beta_1$ and $(S, O, \sigma):=(S_a, O_a, \sigma_a)$.
\item[Step 3] {\bf While} $t < l$ {\bf do} \begin{description} \item \begin{description} \item ~~~~Set $P_\beta:=V$, $S_\beta:=\emptyset$. \item {\bf While} $P_\beta \not= \emptyset$ {\bf do} \begin{description} \item Find a vertex $s \in P_\beta$ such that \begin{description} \item $s := \arg \min \limits_{i \in P_\beta} NR(i).$ \end{description} \item Update $S_\beta:= S_\beta \cup \{s\}$, $P_\beta:= \{ i \in P_\beta: d_{is} > \beta \cdot NR(i)\}$. \end{description} \item Set $O_\beta := \emptyset$, $\sigma_\beta(i):= \arg \min_{h \in S_\beta} d_{ih}$ for each $i \in V$. \item {\bf If} $|S_\beta| > k$ {\bf then} \begin{description} \item Update $\beta_1 := \beta$, $\beta:=(\beta_1 + \beta_2) / 2$, $t:=t+1$. \end{description} \item {\bf If} $|S_\beta| \leq k$ {\bf then} \begin{description} \item Update $(S, O, \sigma) :=(S_\beta, O_\beta, \sigma_\beta)$, $\beta_2 := \beta$, $\beta:=(\beta_1+\beta_2) / 2$, $t:=t+1$. \end{description} \end{description} \end{description} \item[Step 4] Output $(S, O, \sigma)$ as the solution for the instance $\mathcal{I}_{{\rm IF}k{\rm CO}}$. \end{description} \end{algorithm} \section{Experiments} In this section, we provide the experimental results of Algorithm 3 on both synthetic and real-world datasets to illustrate its effectiveness. The environment for the experiments is an Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz with 8GB memory. \subsection{Synthetic Datasets} Theoretically, we prove that both Algorithm 2 and Algorithm 3 have the same approximation ratio of 4. However, Algorithm 3 performs much better in experiments. In this subsection, we mainly test the proposed algorithms on synthetic datasets. We randomly generate nine IF$k$CO instances with different settings of $n$, $k$ and $q$. The nine instances are divided into three groups. Each of the three groups aims to reveal the effect of one parameter on the outlier-related fairness ratios of the solutions of Algorithm 3. The details of the settings are: \begin{itemize} \item Group 1: Three randomly generated IF$k$CO instances, in which $k=20$, $q=50$ and $n=200, 1000, 5000$, respectively; \item Group 2: Three randomly generated IF$k$CO instances, in which $n=1000$, $q=50$ and $k=5, 20, 100$, respectively; \item Group 3: Three randomly generated IF$k$CO instances, in which $n=1000$, $k=20$, and $q=20, 50, 100$, respectively. \end{itemize} For the instances in Groups 1, 2 and 3, we plot the number of outliers selected by Algorithm 3 with respect to different values of $\beta$ in Fig. \ref{en}, \ref{ek} and \ref{eq}, respectively. The corresponding outlier-related fairness ratios of the solutions obtained from Algorithm 3 are also shown in Fig. 2. Intuitively, we expect all the curves to be decreasing. Fig. 2 shows that in general they are, although some of them are not strictly decreasing. Here are some specific observations for each group.
\begin{itemize} \item For Group 1: The positions of the three lines are consistent with our expectation. The larger $n$ is, the higher the position of the line, since for the same $\beta$ a larger $n$ causes Algorithm 3 to output more outliers. \item For Group 2: The positions of the three lines are counterintuitive. We intuitively expect that, for the same $\beta$, a larger $k$ would cause Algorithm 3 to output a smaller number of outliers. However, for the randomly generated instance with $k=5$, the number of outliers obtained from Algorithm 3 is always the smallest, regardless of $\beta$. \item For Group 3: The positions of the three lines are reasonable. The larger $q$ is, the smaller the outlier-related radii of all the vertices are, and smaller outlier-related radii cause Algorithm 3 to output more outliers. \end{itemize} More remarkably, we find that Algorithm 3 performs very well: for all the tested instances, the maximum outlier-related fairness ratio obtained from Algorithm 3 is only $1.31$, which is well below the theoretical bound of 2. \begin{figure}[h] \centering \subfigure[The effect of $n$. ] {\label{en} \includegraphics[height=4.2cm]{1en.png}} \subfigure[The effect of $k$.] {\label{ek} \includegraphics[height=4.2cm]{1ek.png}} \subfigure[The effect of $q$.] {\label{eq} \includegraphics[height=4.2cm]{1eq.png}} \caption{An illustration of the effect of the input of the IF$k$CO on the output of Algorithm 3. Each instance is represented by a colored line, which reflects the change in the number of outliers selected by Algorithm 3 with respect to different values of $\beta$. The colored dots represent the corresponding outlier-related fairness ratios of the solutions obtained from Algorithm 3.} \end{figure} \subsection{Real-world Datasets} In this subsection, we test Algorithms 3 and 4 on the Shenzhen POI (Point of Interest) dataset collected from the open API of Gaode Maps \cite{link}. The target POI type contains $2936$ points, and since the POI type does not affect the results in any way, we hide it throughout this paper. The distances between any two points are measured as Euclidean distances after mapping the latitudes and longitudes of all the points onto a plane. Recall that Algorithms 3 and 4 are the parameterized versions of Algorithms 2 and 1, respectively. It can be seen that Algorithm 2 performs much better than Algorithm 1 on Example 1. Thus, intuitively, we would expect the performance of Algorithm 3 to be better than that of Algorithm 4 on the same instance. We show the centers selected by Algorithms 3 and 4 in Fig. \ref{alg1t3} and \ref{alg2t4}, respectively. It turns out that Algorithm 3 is more likely to locate a center in dense areas than Algorithm 4. In other words, the points that are not likely to be outliers are of more importance in Algorithm 3 than in Algorithm 4. As a consequence, Algorithm 3 tends to obtain a smaller outlier-related fairness ratio than Algorithm 4. This phenomenon makes sense because Algorithms 3 and 4 keep searching for the vertex $i$ with the minimum radius $NR_q(i)$ and $NR(i)$, respectively, as a center, and a vertex $i$ in a dense area is more likely to attain the minimum $NR_q(i)$ than the minimum $NR(i)$.
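As an illustration of the preprocessing step described above, the following sketch maps latitude/longitude pairs to planar coordinates and builds the pairwise Euclidean distance matrix. The equirectangular projection used here is an assumption on our part, since the concrete projection is not specified in this paper.
\begin{verbatim}
import math

def distance_matrix(coords):
    # coords: list of (latitude, longitude) pairs in degrees.
    # Assumed projection: equirectangular mapping onto a plane (in km),
    # followed by pairwise Euclidean distances.
    R = 6371.0  # mean Earth radius in km
    lat0 = math.radians(sum(lat for lat, _ in coords) / len(coords))
    pts = [(R * math.radians(lon) * math.cos(lat0), R * math.radians(lat))
           for lat, lon in coords]
    return [[math.dist(p, q) for q in pts] for p in pts]
\end{verbatim}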
\begin{figure}[h] \centering \subfigure[The centers selected by Algorithm 3.] {\label{alg1t3} \includegraphics[height=5.8cm]{alg3_l.png}} \subfigure[The centers selected by Algorithm 4.] {\label{alg2t4} \includegraphics[height=5.8cm]{alg4_l.png}} \caption{An illustration of an actual instance in Shenzhen, China. The small green circles are the $2936$ points of interest. The big red circles are the selected centers.} \end{figure} \section{Discussions} In this paper, we propose and investigate the IF$k$CO, which overcomes the shortcoming of the IF$k$C that the overall clustering quality may be affected by a few isolated vertices. As our main contribution, several approximation algorithms for the proposed problem are presented, with a provable approximation ratio of $4$. Beyond this theoretical performance guarantee, the experiments on both synthetic and real-world datasets show that the refined algorithm usually outputs a feasible solution whose performance is significantly better than the approximation ratio of $4$. Introducing individual fairness and outlier detection into other clustering problems, such as the $k$-median and $k$-means, is a very interesting direction for future work. \end{document}
\begin{document} \title{Index realization for automorphisms of free groups} \author{Thierry Coulbois and Martin Lustig} \address{Institut de mathématiques de Marseille\\ Université d'Aix-Marseille\\ 39, rue Frédéric Joliot Curie \\ 13453 Marseille Cedex 13\\ France\\ \href{mailto:[email protected]}{\nolinkurl{[email protected]}}\\ \href{mailto:[email protected]}{\nolinkurl{[email protected]}} } \keywords{Free group automorphisms, Train tracks, Index realization, Gate structure} \subjclass{20E05, 20E08, 20F65, 57R30} \begin{abstract} For any surface $\Sigma$ of genus $g \geq 1$ and (essentially) any collection of positive integers $i_1, i_2, \ldots, i_\ell$ with $i_1+\cdots +i_\ell = 4g-4$ Masur and Smillie~\cite{MS} have shown that there exists a pseudo-Anosov homeomorphism $h:\Sigma \to \Sigma$ with precisely $\ell$ singularities $S_1, \ldots, S_\ell$ in its stable foliation $\mathcal L$, such that $\mathcal L$ has precisely $i_k+2$ separatrices emanating from each $S_k$. In this paper we prove the analogue of this result for automorphisms of a free group ${F}_N$, where ``pseudo-Anosov homeomorphism'' is replaced by ``fully irreducible automorphism'' and the Gauss-Bonnet equality $i_1+\cdots +i_\ell = 4g-4$ is replaced by the index inequality $i_1+\cdots +i_\ell \leq 2N-2$ from \cite{GJLL}. \end{abstract} \maketitle \section{Introduction} In \cite{GJLL}, for every automorphism $\Phi \in \text{Aut}({F}_N)$ of a non-abelian free group ${F}_N$ of finite rank $N \geq 2$ an {\em index} $\ind (\Phi)$ has been defined, which counts in a natural way the attracting fixed points at the Gromov boundary $\partial {F}_N$ and the rank of the fixed subgroup ${\rm Fix}(\Phi)$ of $\Phi$. If ${\rm Fix}(\Phi) = \{1\}$, then $2\ind(\Phi) + 2$ is simply the number of attractors of the homeomorphism $\partial \Phi:\partial{F}_N \to \partial {F}_N$ induced by $\Phi$. As the main result of \cite{GJLL} the index inequality \[ \ind (\Phi) \leq N-1 \] has been proved, which strengthens the celebrated Scott conjecture, proved in~\cite{BH}, and also extends some well known consequences of Nielsen-Thurston theory for surface homeomorphisms to free group automorphisms, in particular after passing to the {\em stable index} $\ind_{stab}(\varphi)$ of the associated outer automorphism $\varphi$, defined below in (\ref{eq:index-inequality-star}) as the sum of $\ind (\Phi_k)$ for suitable representatives $\Phi_k$ of a positive power of $\varphi$. The main difference to surface homeomorphisms, however, where the analogous indices always sum up, via Gauss-Bonnet, to the maximal possible value postulated in (\ref{eq:index-inequality-star}), is that $\ind_{stab} (\varphi)$ may well be strictly smaller than $N-1$. Ever since, it has been an open question which precise values of $\{\frac{1}{2}, 1, \frac{3}{2}, \ldots,$ $\frac{2N-3}{2}, N-1\}$ can be realized as stable index $\ind_{stab} (\varphi)$ by some $\varphi \in \text{Out}({F}_N)$, in particular if one restricts to automorphisms $\varphi$ of ${F}_N$ which are {\em irreducible with irreducible powers (iwip)}, also called {\em fully irreducible} (see Section~\ref{sec:prelim-CLP}). For any given $\varphi \in \text{Out}({F}_N)$ its representatives $\Phi_k \in \text{Aut}({F}_N)$ are partitioned into {\em isogredience classes}, where isogredient automorphisms are conjugate by inner automorphisms and hence have conjugate $\partial {F}_N$-dynamics and thus equal indices.
It follows from the results of \cite{GJLL} that any $\Phi_k$ has a positive power $\Phi_k^{m_k}$ for which (as well as for all of its powers) the fixed subgroup and the number of attracting fixed points on $\partial {F}_N$ are maximal; the index of $\Phi_k^{m_k}$ will be called the {\em stable index} of $\Phi_k$ and denoted by $\ind_{stab} \Phi_k$. The {\em stable index list} of $\varphi$ is defined as the longest sequence (up to permutation) of positive indices $\ind_{stab} (\Phi_1),\ind_{stab} (\Phi_2),\ldots,\ind_{stab} (\Phi_\ell)$, given by pairwise non-isogredient representatives $\Phi_k$ of some power $\varphi^t$, for any fixed $t\geq 1$. The inequalities \begin{equation}\label{eq:index-inequality-star} \frac{1}{2} \,\, \leq \,\, \ind_{stab} \varphi := \sum_{k=1}^\ell \ind_{stab} \Phi_k \,\, \leq \,\, N-1 \end{equation} have been shown in \cite{GJLL}. Handel and Mosher~\cite[Question~6 in \S1.5]{HM-axes} have asked explicitly which such values are realized as stable index list by iwip automorphisms of ${F}_N$. We denote such a (potential) index list by $[j_1, \ldots, j_\ell]$, where the $j_k$ are usually given in decreasing order. For the ``maximal'' case, i.e. $\ind_{stab} (\varphi) = N-1$, an almost complete answer to this question has been given by Masur and Smillie~\cite{MS}: For $N \geq 3$ any list $[j_1, j_2, \ldots, j_\ell]$ of positive $j_k \in \frac{1}{2} \mathbb Z$, with $\sum j_k = N-1$ (other than the single exceptional case $[\frac 32,\frac 32]$ for $N = 4$, see Section~\ref{sec:proof-discussion} below), can be realized as index list of an iwip automorphism which is {\em geometric}, i.e. $\varphi$ is induced by a pseudo-Anosov homeomorphism of a surface with one boundary component. On the other hand, if $\ind_{stab} (\varphi) \leq N - \frac{3}{2}$, then any iwip $\varphi$ is known not to be geometric, and in particular for any representative $\Phi_k \in \text{Aut}({F}_N)$ the fixed subgroup is trivial. The purpose of this paper is to show that the analogue of Masur and Smillie's result holds also in the non-maximal case: \begin{thm}\label{thm:main} Let $N \geq 3$, and let $j_1,j_2,\ldots,j_\ell$ be any list of positive numbers from $\frac{1}{2} \mathbb Z$ which satisfy: \[\frac{1}{2} \,\, \leq \,\,\sum_{k=1}^{\ell} j_k \,\,\leq \,\, N-\frac{3}{2}\] Then there exists (and we give an explicit construction) an iwip automorphism $\varphi \in \text{Out}({F}_N)$ which realizes the given list of values $j_k$ as its stable index list. \end{thm} For $N=3$ the statement of the theorem had already been proved by C.~Pfaff \cite{Pfaff}. Other special cases were also known, for example the single element list $[N-\frac{3}{2}]$ for any $N \geq 3$ (see~\cite{JL}). A further discussion, including some experimental data obtained by the first author, is given in section~\ref{sec:proof-discussion} below (compare also~\cite[Section VI]{GJLL}). \begin{rem} \label{branchin-index++} From Theorem \ref{thm:main} one deduces directly as a corollary an analogous existence statement for indecomposable $\mathbb R$-trees $T$ with free isometric ${F}_N$-action that have a prescribed branching index list given by the numbers $j_k$. This follows directly from the material assembled in Section 8 of \cite{CL}. The authors do not know whether such an existence statement was known previously. \end{rem} Already in~\cite{GJLL} the relationship between the index of $\varphi$ and the branching index of a forward limit $\mathbb R$-tree $T$ of $\varphi$ has been exploited (compare also~\cite{HM-axes}).
If $\varphi$ is iwip, then such a $T$ in the Thurston boundary $\partial \text{cv}_N$ of (unprojectivized) Outer space $\text{cv}_N$ is unique up to rescaling, and for non-geometric $\varphi$ the isometric ${F}_N$-action on $T$ is free and has dense orbits. For a suitable exponent $t\geq 1$ there is a natural 1-1 correspondence between isogredience classes of representatives $\Phi_k$ of $\varphi^t$ with $\ind_{} (\Phi_k) >0$ on one hand, and ${F}_N$-orbits of branch points $P_k$ of $T$ on the other, where $2 \ind_{stab} (\Phi_k) + 2$ is precisely equal to the number of directions at $P_k$. An exposition of this relationship is given in \S8 of \cite{CL}. This correspondence can be carried one step further by using the fact that $T$ is obtained as a projective limit (in $\overline{\text{cv}}_N = \text{cv}_N \cup \partial \text{cv}_N$) of simplicial metric trees $\widetilde \Gamma$ with free isometric ${F}_N$-action, which occur naturally as universal covers of a train track representative $f:\Gamma \to \Gamma$ of $\varphi$ (see \S\ref{sec:prelim-CLP}). Such train track representatives carry an {\em intrinsic gate structure} which allows one to define a gate index at every vertex of $\Gamma$ and a gate index list by considering all periodic vertices of $\Gamma$ with 3 or more gates. There is a natural relationship between the gates of $\widetilde \Gamma$ and the branching directions of $T$, which in the absence of so called {\em periodic INPs} (see \S\ref{sec:prelim-CLP} below) becomes a 1-1 correspondence. Again, see \S8 of \cite{CL} for more details. The problem of realizing a given list $[j_1,j_2,\ldots, j_\ell]$ as in Theorem~\ref{thm:main} as the stable index list of an iwip automorphism can hence be subdivided into the following subproblems: \begin{enumerate} \item\label{problem:graph} Construct a graph $\Gamma$ with vertices $v_1, \ldots, v_\ell$ and define a gate structure $\mathbf G$ on $\Gamma$ which realizes the given list of the values $j_k$ as gate indices at the vertices $v_k$. \item\label{problem:iwip-but-inp} Define a map $h: \Gamma \to \Gamma$ which respects the gate structure $\mathbf G$ and is ``iwip up to INPs''. \item\label{problem:control-inp} Control the periodic INPs of $h$. \end{enumerate} Subproblems~(\ref{problem:graph}) and (\ref{problem:iwip-but-inp}) are solved below in sections~\ref{sec:graph} and \ref{sec:map-iwip-but-inp}. Subproblem~(\ref{problem:control-inp}), which is the hardest and conceptually the most interesting, requires a new tool, called {\em long turns}, which has been provided and investigated by the authors in the ``companion paper''~\cite{CL}. In section~\ref{sec:legalizing-map} we give a brief summary of this method and provide the concrete tools that allow us in section~\ref{proofs} to apply the results of \cite{CL} in order to obtain a {\em legalizing} train track morphism $g: \Gamma \to \Gamma$. It is then shown how Theorem~1.1 of \cite{CL} (quoted in section~\ref{proofs} in an appropriate version) can be applied to solve the remaining Subproblem~(\ref{problem:control-inp}) for the resulting train track map $h\circ g$. \noindent{\em Acknowledgements:} This paper was intended by the authors to be joint with Catherine Pfaff: a large part of it is rooted in our weekly discussions with Catherine during the months before she left Marseille. We regret that, despite our insistence, she declined to be a coauthor of the paper.
We also would like to point the reader to the thesis work of Sonya Leibman~\cite{leibman}, which came only very recently to our attention. Some of her results seem to be very interesting to the context of the work presented here; in particular, there is an overlap of the results of her section 5.2 (Lemma 5.4) and our subsection 7.1 below. \section{Preliminaries} \label{sec:prelim-CLP} We will use in this paper the same terminology as set up in sections~2 and 3 of \cite{CL}: A {\em graph} $\Gamma$ is always connected, without vertices of valence $1$ or $2$, and moreover it is finite, unless it is the universal covering of a finite connected graph. The {\em edges} $E^\pm(\Gamma)$ of $\Gamma$ come in pairs $e, \overline e$ which differ only in their orientation, and $E^+(\Gamma)$ contains precisely one of the two elements in each pair. A {\em gate structure} $\mathbf G$ on $\Gamma$ is a partition of the edges $e \in E^\pm(\Gamma)$ into equivalence classes $\mathfrak g_i$ (called {\em gates}), where equivalent edges must have the same initial vertex $v$. Two edges $e, e' \in E^\pm(\Gamma)$ with same initial vertex form a {\em turn} $(e, e')$, which is called {\em legal} (with respect to $\mathbf G$) if $e$ and $e'$ belong to distinct gates, and {\em illegal} if they belong to the same gate. The turn $(e, e')$ is called {\em degenerate} if $e = e'$. A path $\gamma = e_1 e_2 \ldots e_q$ {\em crosses over} a {\em gate turn} $(\mathfrak g_i, \mathfrak g_j)$ if for some $k \in \{1, \ldots, q-1\}$ one has $\overline e_k \in \mathfrak g_i, e_{k+1} \in \mathfrak g_j$ or $\overline e_k \in \mathfrak g_j, e_{k+1} \in \mathfrak g_i$. The path $\gamma$ is {\em legal} if, for each $k \in \{1, \ldots, q-1\}$, the edges $\overline e_k$ and $e_{k+1}$ belong to different gates of $\mathbf G$ (i.e. $\gamma$ crosses only over legal turns). The {\em gate index} $\ind_{\mathbf G}(v)$ at a vertex $v$ is given by $\ind_{\mathbf G}(v) := \frac{g(v)}{2} - 1$, where $g(v)$ denotes the number of gates at $v$. A {\em graph map} $f: \Gamma \to \Gamma$ maps vertices to vertices and edges to (possibly unreduced) edge paths. The map $f$ has {\em no contracted edges} if for any edge $e$ of $\Gamma$ the combinatorial length (= number of edges traversed) of $f(e)$ satisfies $|f(e)|\geq 1$. In this case $f$ induces a well defined map $Df$ on $E^\pm(\Gamma)$ which maps the edge $e$ to the initial edge of the path $f(e)$. The {\em transition matrix} $M(f) = (m_{e', e})_{e', e \in E^+(\Gamma)}$ of $f$ is defined as non-negative matrix, where $m_{e', e}$ counts the number of times that $f(e)$ crosses over $e'$ or over $\overline e'$. The equality \[ M(f\circ g) = M(f)M(g) \] is a direct consequence of the definition of the transition matrix. Recall that a non-negative matrix $M$ is called {\em primitive} if some positive power $M^t$ is {\em positive}, i.e. all coefficients of $M^t$ are strictly positive. A graph map $f: \Gamma \to \Gamma$ is a {\em train track morphism}, with respect to a given gate structure $\mathbf G$ on $\Gamma$, if it has no contracted edges, and if $f$ maps every legal path to a legal path. It is shown in~\cite{CL} that a train track morphism has the additional property that at every periodic vertex $v$ of $\Gamma$ any illegal turn is mapped to an illegal turn, or equivalently: $f$ induces at every periodic vertex $v$ of $\Gamma$ a bijective map from the gates at $v$ to the gates at $f(v)$. 
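As a small illustration of the two notions just recalled (this is our own sketch, not part of \cite{CL} or of the construction below; the encoding of an edge pair $e, \overline e$ by a lower-case/upper-case letter pair is an assumption made only for this example), the following Python fragment computes the transition matrix of a graph map from its edge images and tests primitivity by raising the matrix to powers up to Wielandt's bound $(n-1)^2+1$:
\begin{verbatim}
import numpy as np

def transition_matrix(graph_map, edges):
    # graph_map: edge e in E^+  ->  list of edges crossed by f(e);
    # an upper-case letter denotes the inverse of the lower-case edge.
    # The entry M[e', e] counts crossings of e' or of its inverse.
    idx = {e: i for i, e in enumerate(edges)}
    M = np.zeros((len(edges), len(edges)), dtype=int)
    for e, image in graph_map.items():
        for x in image:
            M[idx[x.lower()], idx[e]] += 1
    return M

def is_primitive(M):
    # M is primitive iff some power M^t is strictly positive; for a
    # primitive n x n matrix, t <= (n-1)^2 + 1 suffices (Wielandt).
    n = M.shape[0]
    P, A = np.eye(n, dtype=object), M.astype(object)
    for _ in range((n - 1) ** 2 + 1):
        P = P @ A
        if (P > 0).all():
            return True
    return False

# toy example on a rose with two petals:  f(a) = ab,  f(b) = a
f = {"a": ["a", "b"], "b": ["a"]}
M = transition_matrix(f, ["a", "b"])
print(M)                # [[1 1]
                        #  [1 0]]
print(is_primitive(M))  # True
\end{verbatim}
On such examples the multiplicativity $M(f\circ g) = M(f)M(g)$ recalled above can be checked directly by composing the edge images and comparing with the matrix product.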
Note that in this paper all train track morphisms that occur have only periodic vertices; indeed, each vertex is a fixed point. For a graph $\Gamma$ without preassigned gate structure, a {\em train track map} $f:\Gamma\to\Gamma$ in the classical sense as defined by Bestvina and Handel~\cite{BH} (and hence in particular any {\em train track representative} of a given automorphism of ${F}_N$) is a graph map with no contracted edges with the property that for any $t>0$ and any edge $e$, $f^t(e)$ is a reduced path. As legal paths are reduced, any train track morphism $f:\Gamma \to \Gamma$ is always a classical train track map. Conversely, every classical train track map $f:\Gamma \to \Gamma$ is a train track morphism with respect to the {\em intrinsic} gate structure $\mathbf G(f)$ on $\Gamma$, defined by $f$ through declaring two edges $e, e'$ with same initial vertex to lie in the same gate if and only if for some $t \geq 1$ the edge paths $f^t(e)$ and $f^t(e')$ have non-trivial initial subpaths in common. Notice however that, for a train track morphism $f$ with respect to some gate structure $\mathbf G$, the intrinsic gate structure $\mathbf G(f)$ may be strictly finer than the given gate structure $\mathbf G$. A reduced path $\gamma \circ \gamma'$ in $\Gamma$ is a {\em periodic INP} for a train track morphism $f: \Gamma \to \Gamma$ if $\gamma$ and $\gamma'$ are legal and for some $t \geq 1$ the path $f^t(\gamma \circ \gamma')$ is homotopic relative endpoints to $\gamma \circ \gamma'$. The {\em gate-Whitehead-graph} $Wh_{\mathbf G}^v(f)$ of a train track morphism $f: \Gamma \to \Gamma$ at a vertex $v$ of $\Gamma$ has the gates $\mathfrak g_i$ of $\mathbf G$ at $v$ as vertices and a (non-oriented) edge connecting $\mathfrak g_i$ to $\mathfrak g_j$ if and only for some edge $e$ of $\Gamma$ and some integer $t \geq 1$ the edge path $f^t(e)$ crosses over the gate turn $(\mathfrak g_i, \mathfrak g_j)$. Recall also that an automorphism $\varphi \in \text{Out}({F}_N)$ is called {\em iwip} (or {\em fully irreducible}) if no positive power of $\varphi$ fixes the conjugacy class of any proper free factor of ${F}_N$. \section{The graph $\Gamma$ with prescribed index list}\label{sec:graph} In this and the following sections, let $N, \ell$ and $j_1,\ldots, j_\ell$ be given as in Theorem~\ref{thm:main}. In this section we will build a graph $\Gamma$ with $\pi_1\Gamma \cong {F}_N$ which has precisely $\ell$ vertices $v_1, \ldots, v_\ell$, and has at each vertex $v_k$ precisely $i_k := 2j_k + 2 \geq 3$ gates: the gate structure $\mathbf G$ on $\Gamma$ realizes the given list $j_1,\ldots,j_\ell$ as gate index list. Note that from the inequalities in Theorem~\ref{thm:main} we obtain \[1 \leq i_1+\cdots+i_\ell - 2 \ell\leq 2N-3\] as initial assumption on the number of gates in $\Gamma$. We divide the possible index lists in three different cases: \begin{enumerate} \item The {\em even case}: $i_1+\cdots+i_\ell$ is even (that is to say $j_1+\cdots+j_\ell$ is an integer $\leq N-2$). \item The {\em odd case} (non-maximal): $i_1+\cdots+i_\ell$ is odd and smaller than $2N-4 + 2 \ell$ (alternatively: $j_1+\cdots+j_\ell \leq N-2$). \item The {\em maximal odd case}: $i_1+\cdots+i_\ell=2N-3 + 2 \ell$ (i.e. $j_1+\cdots+j_\ell=N-\frac 32$). \end{enumerate} We consider a circle which is subdivided at vertices labeled $v_1,\ldots, v_\ell$, to obtain oriented edges labeled $c_1,\ldots,c_\ell$ such that $c_k$ starts at $v_k$ and ends at $v_{k+1}$ (for $k$ understood modulo $\ell$). 
Note that if $\ell = 1$ then $c_1 = c_\ell$ is a loop edge at the sole vertex $v_1$ of $\Gamma$. At each vertex $v_k$ we add $i_k-2$ germs of edges to this circle. In the odd and maximal odd cases we remove one of these germs at $v_1$, so that in all cases the number of germs is even. We group these germs arbitrarily into pairs to form $r$ oriented edges $b_1,\ldots,b_r$. Here $r$ is the largest integer $\leq j_1+\cdots+j_\ell$, with $r=0$ exactly if the index list is equal to $[\frac 12]$. In the even and odd cases let $s=N-r-1$, where we note that $s\geq 1$. We add $s$ oriented edges $a_1,\ldots,a_s$ which are loops at the vertex $v_1$. In the maximal odd case we set $s=1$ and add a single edge $a_1$ which is a loop at $v_1$. Finally, in the odd case we add an extra edge $d$ which is a loop at $v_1$. \begin{figure} \caption{The graph $\Gamma$ in the even case.} \label{fig:graph-even} \end{figure} \begin{figure} \caption{The graph $\Gamma$ in the maximal odd case.} \label{fig:graph-maximal} \end{figure} \begin{figure} \caption{The graph $\Gamma$ in the odd case.} \label{fig:graph-odd} \end{figure} The graphs $\Gamma$ defined above are connected, without vertices of valence $1$ or $2$, and have fundamental group ${F}_N$. The {\em oriented edge set} is given by $E^+(\Gamma) = \{c_1, \ldots, c_\ell, b_1, \ldots, b_r, a_1, \ldots, a_s\}$ in the even and maximal odd cases, and by $E^+(\Gamma) = \{c_1, \ldots, c_\ell, b_1, \ldots, b_r, a_1, \ldots, a_s, d\}$ in the odd case. In all cases we have $\ell,s\geq 1$ and $r \geq 0$, with $r = 0$ if and only if we are in the odd case with index list $[\frac 12]$. We define the gate structure $\mathbf G$ on $\Gamma$ in such a way that each gate consists of a single edge, except for the following gates, all based at $v_1$: \begin{itemize} \item in the even case: $\mathfrak g_1=\{c_1,a_1,\ldots, a_s\}$, $\mathfrak g_2=\{\overline{c}_\ell,\overline{a}_1,\ldots,\overline{a}_s\}$; \item in the odd case: $\mathfrak g_1=\{c_1,a_1,\ldots, a_s\}$, $\mathfrak g_2=\{\overline{c}_\ell,\overline{a}_1,\ldots,\overline{a}_s\}$ and $\mathfrak g_3=\{d,\overline{d}\}$; \item in the maximal odd case: $\mathfrak g_1=\{c_1,a_1\}$. \end{itemize} Notice that in the maximal odd case $\overline a_1$ and $\overline c_\ell$ belong to distinct gates. As a consequence, at every vertex $v_k$ there are precisely $i_k$ gates, so that we obtain: \begin{prop} \label{prop:graph-index-realization} The gate structure $\mathbf G$ on $\Gamma$ realizes the given list of values $j_1,\ldots, j_\ell$ as gate indices at the vertices $v_1, \ldots, v_\ell$ of $\Gamma$. \qed \end{prop} The following will be used crucially in the subsequent sections: \begin{lem}\label{lem:hamiltonian-circuit} Let $\Gamma$ be the graph equipped with the gate structure $\mathbf G$ as defined above. \begin{enumerate} \item\label{item:ueuprime} For each edge $e\neq a_1$ in $E^+(\Gamma)$ there exists a legal loop $ueu'$ in $\Gamma$ which starts in $\mathfrak g_1$, does not end in $\mathfrak g_1$, does not pass through $a_1, \overline a_1$ or $\overline e$, and passes exactly once through $e$ (we allow $u$ or $u'$ to be trivial). \item\label{item:v} For each gate turn $t=({\mathfrak g},{\mathfrak g}')$, except for gate turns involving $\{\overline{a}_1\}$ in the maximal odd case, there exists a legal loop $v$ which starts in ${\mathfrak g}_1$, does not end in ${\mathfrak g}_1$, does not pass through $a_1$ or $\overline a_1$, and crosses over the gate turn $({\mathfrak g},{\mathfrak g}')$.
\item\label{item:alphabeta} In the even and odd cases, for any edge $e$ in $\mathfrak g_1$ there exist legal paths $\alpha$ and $\beta$ that do not pass through any of the $a_i$ or through $e$ (and neither through their inverses). Furthermore, $e\alpha$ is a legal path which ends in $\mathfrak g_2$, and $\beta$ is a legal loop based at $v_1$ that does not start in $\mathfrak g_1$ or $\mathfrak g_2$, and does not end in $\mathfrak g_1$. We allow $\alpha$ to be trivial. \item\label{item:alphabetaprime} Symmetrically, in the even and odd cases, for each edge $\overline{e}$ in $\mathfrak g_2$ there exist legal paths $\alpha'$ and $\beta'$ that do not pass through any of the $a_i$ or through $e$ (nor through their inverses), such that $\alpha'e$ is legal and starts in $\mathfrak g_1$, while the legal loop $\beta'$ is based at $v_1$ but does not start in $\mathfrak g_2$ and does not end in $\mathfrak g_1$ or $\mathfrak g_2$. We allow $\alpha'$ to be trivial. \end{enumerate} \end{lem} \begin{proof} The above statements~(\ref{item:ueuprime}) and (\ref{item:v}) are easy to verify if one keeps in mind that at every vertex there are $\geq 3$ gates, and that every vertex $v_k$ can be reached from $v_1$ by any one of two disjoint paths on the circle $c_1\cdots c_\ell$, so that it is easy to avoid any given edge in $\Gamma$. Concerning statement~(\ref{item:alphabeta}), if $e\neq c_1$ or $\ell=1$, we let $\alpha$ be trivial. Otherwise we set $\alpha=c_2\cdots c_\ell$. In the odd case we let $\beta=d$. In the even case, there is at least one edge $b_k$ (or $\overline b_k$) exiting from $v_1$. Let $v_{k'}$ be the endpoint of $b_k$ (or of $\overline b_k$), and set $\beta=b_k$ if $k'=1$ and $\beta=b_{k}c_{k'}\cdots c_\ell$ otherwise. For statement~(\ref{item:alphabetaprime}) the paths $\alpha'$ and $\beta'$ can be chosen symmetrically to $\alpha$ and $\beta$ in the above case~(\ref{item:alphabeta}). \end{proof} \section{The train track morphism} \label{sec:map-iwip-but-inp} In this section we construct for the graph $\Gamma$ a train track morphism $h: \Gamma \to \Gamma$ with respect to the gate structure $\mathbf G$ specified in the last section. The morphism $h$ will be ``fully irreducible up to INPs'' in that it has primitive transition matrix $M(h)$ and connected gate-Whitehead-graphs at every vertex (compare \cite[Propositions~4.1 and~4.2]{CL}). Below we will consider graph maps $f: \Gamma\to\Gamma$ with the following properties: $(*)$\parbox{.8\textwidth}{\begin{enumerate} \item\label{*:homot-eq} $f$ is a homotopy equivalence, \item\label{*:train-track} $f$ is a train track morphism with respect to the gate structure $\mathbf G$, \item\label{*:fix-vertex} $f$ fixes each vertex of $\Gamma$, \item\label{*:fix-gate} $f$ fixes each gate of $\mathbf G$, and \item\label{*:e-image} the $f$-image of every edge $e$ crosses over $e$. \end{enumerate} } \begin{lem}\label{lem:property-star} Let $f_1$ and $f_2$ be graph maps which satisfy the above Properties~$(*)$. \begin{enumerate} \renewcommand{\alph{enumi}}{\alph{enumi}} \item\label{item:composition-**} Then the composition $f_1 \circ f_2$ satisfies $(*)$ as well. \item\label{item:composition-iwg} Moreover, for any vertex $v_k$ of $\Gamma$ the gate-Whitehead-graph $Wh_{\mathbf G}^{v_k}(f_1 \circ f_2)$ contains both gate-Whitehead-graphs $Wh_{\mathbf G}^{v_k}(f_1)$ and $Wh_{\mathbf G}^{v_k}(f_2)$ as subgraphs. \end{enumerate} \end{lem} \begin{proof} Statement~(\ref{item:composition-**}) is a direct consequence of the definition of properties $(*)$. 
Statement~(\ref{item:composition-iwg}) has been proved under slightly more general hypotheses as Proposition~3.10 in~\cite{CL}. \end{proof} The Properties~(\ref{*:fix-vertex}) and (\ref{*:fix-gate}) above imply that a map which satisfies $(*)$ acts as identity on the set of gate turns. As a consequence one derives easily that Statement~(\ref{item:composition-iwg}) of Lemma~\ref{lem:property-star} can actually improved to $Wh_{\mathbf G}^{v_k}(f_1 \circ f_2) = Wh_{\mathbf G}^{v_k}(f_1) \cup Wh_{\mathbf G}^{v_k}(f_2)$. We define below several graph maps on $\Gamma$ where we use the following: \begin{convention} \label{convention:no-untouched-edges} In this and the following sections, in the definition of a graph map $\Gamma \to \Gamma$ we always use the convention that any edge with no explicitly defined image is mapped identically to itself. \end{convention} For any edge $e\neq a_1 \in E^+(\Gamma)$ let $u$ and $u'$ be as in Lemma~\ref{lem:hamiltonian-circuit}~(\ref{item:ueuprime}). Define $h_e: \Gamma \to \Gamma$ by: \[ h_e:\begin{array}[t]{rcl}a_1&\mapsto&ueu'a_1\\ e&\mapsto&eu'a_1ue \end{array} \] Note that the $h_e$-image of $e$ passes through $a_1$ and that the $h_e$ image of $a_1$ passes through $e$. For any gate turn $t=({\mathfrak g},{\mathfrak g}')$ of $\Gamma$, except for gate turns involving $\{\overline{a}_1\}$ in the maximal odd case, let $v$ be as in Lemma~\ref{lem:hamiltonian-circuit} (\ref{item:v}) and define $h_t:\Gamma\to\Gamma$ by: \[ h_t: a_1\mapsto va_1 \] Let $h'$ be the composition of all these maps $h_e$ (with $e\neq a_1$) and $h_t$, where we do not care about the order of the composition. Define $h:\Gamma\to\Gamma$ through $h=h'\circ h'$. \begin{prop}\label{prop:h-positive} The map $h: \Gamma \to \Gamma$ is a train track morphism with respect to $\mathbf G$. Furthermore $h$ fixes every vertex $v_k$ of $\Gamma$, maps each gate of the gate structure $\mathbf G$ to itself, and is a homotopy equivalence. In addition, the transition matrix $M(h)$ is positive, and the gate-Whitehead-graph $Wh_{\mathbf G}^v(h)$ of $h$ at any vertex $v$ of $\Gamma$ is connected. \end{prop} \begin{proof} We first consider the maps $h_e$ with $e \neq a_1$ and $h_t$ as defined above. Properties~(\ref{*:train-track})-(\ref{*:e-image}) of $(*)$ above are easily verified directly. For Property~(\ref{*:homot-eq}) the reader can check directly that the map given by \[ \begin{array}[t]{rcl} a_1&\mapsto&\overline u' \overline e \, \overline u a_1^2\\ e&\mapsto& \overline u \, \overline a_1 u e \end{array} \] is a homotopy inverse of $h_e$. The fact that it is not a train track map with respect to $\mathbf G$ is irrelevant. For $h_t$ a homotopy inverse is given simply by $a_1 \mapsto \overline v a_1$. From Lemma~\ref{lem:property-star} it follows now that the maps $h'$ and $h$ have this Property~$(*)$, which is the statement of the first paragraph to be shown. In order to show that $M(h)$ is positive, we use the equality $M(fg) = M(f) M(g)$ from Section~\ref{sec:prelim-CLP} and condition (\ref{*:e-image}) of Property~$(*)$ to obtain that the image $h'(e)$ of any edge $e$ crosses over $a_1$, and that the image $h'(a_1)$ of $a_1$ crosses over every edge $e\in E^+(\Gamma)$. Hence the image $h(e)$ of any edge $e$ passes through all edges of $\Gamma$. This proves that the transition matrix $M(h)$ is positive. From Lemma~\ref{lem:property-star} we know that the gate-Whitehead-graph of $h$ contains that of $h_t$, for each gate turn $t=({\mathfrak g},{\mathfrak g}')$. 
It follows from the above definition of $h_t$ via $a_1 \mapsto v a_1$ and from the definition of $v$ in Lemma~\ref{lem:hamiltonian-circuit}~(\ref{item:v}) that in the even and the odd cases the gate-Whitehead-graph of $h$ at each vertex of $\Gamma$ is a complete graph and thus connected. In the maximal odd case there are no maps $h_t$ for the gate turns involving the gate $\{\overline{a}_1\}$. But in this case the gate turn $(\{\overline{a}_1\},\{a_1, c_1\})$ is crossed over by $h_{c_1}(c_1)$. This is enough to get that the gate-Whitehead-graph of $h$ at $v_1$ is connected. \end{proof} \section{Building the legalizing map} \label{sec:legalizing-map} The goal of this (and the following) section is to construct a train track morphism $g: \Gamma \to \Gamma$ with respect to $\mathbf G$ which is a homotopy equivalence and is ``legalizing''. This notion has been introduced in~\cite{CL}, and is now briefly summarized: A pair $(\gamma, \gamma')$ of non-trivial legal paths $\gamma$ and $\gamma'$ in $\Gamma$ is called in~\cite{CL} a {\em long turn} if the {\em branches} $\gamma$ and $\gamma'$ start at the same vertex but have distinct initial edges. The set of long turns in $\Gamma$, with both branches of length equal to some integer $C \geq 1$, is denoted by $LT_C(\Gamma)$. The long turn $(\gamma, \gamma')$ can be {\em legal} or {\em illegal}, according to whether its {\em starting turn} $s(\gamma, \gamma')$, formed by the initial edges of $\gamma$ and $\gamma'$ respectively, is legal or illegal (as defined for turns in the traditional sense, see \S\ref{sec:prelim-CLP}). If neither $g(\gamma)$ is a subpath of $g(\gamma')$ nor conversely, then the long turn is called {\em $g$-long}, and the long turn, obtained from $g(\gamma)$ and $g(\gamma')$ through erasing from both the maximal common initial subpath, is called the {\em $g$-image} of $(\gamma, \gamma')$ and denoted by $g^{LT}(\gamma, \gamma')$. A train track morphism $g$ is called {\em legalizing} if for some sufficiently large constant $C \geq 1$ every long turn $(\gamma, \gamma') \in LT_C(\Gamma)$ is $g$-long, and if $g^{LT}(\gamma, \gamma')$ is legal (or, equivalenty, if the starting turn of $g^{LT}(\gamma, \gamma')$ is legal). To avoid a misunderstanding, we point out that any non-degenerate turn $(e, e')$ in the classical sense can be considered alternatively as long turn with branches of length 1. In particular, if $(e, e')$ is $g$-long, then it has both, an image turn $(Dg(e), Dg(e'))$, as well as an image long turn $g^{LT}(e, e')$, which furthermore has a starting turn $sg^{LT}(e, e')$ that is again a turn in the classical sense. However, in general $(Dg(e), Dg(e'))$ and $sg^{LT}(e, e')$ will be quite different: for example, $(Dg(e), Dg(e'))$ may well be degenerate, while $sg^{LT}(e, e')$ is by definition always non-degenerate. To construct the desired train track morphism $g$ we define now train track morphisms $g_t$. In each of the cases considered below the ``variable'' $t$ denotes a non-degenerate illegal turn in $\Gamma$, interpreted here as long turn with branch length 1. The reader can verify directly that all of the maps $g_t$ defined below satisfy the statements~(\ref{*:train-track}), (\ref{*:fix-vertex}) and (\ref{*:fix-gate}) of Property~$(*)$ from Section~\ref{sec:map-iwip-but-inp}. We use again Convention~\ref{convention:no-untouched-edges}. 
We first deal with the \underline{even and odd cases}: \begin{enumerate} \item Let $t=(a_i,e)$ be an illegal turn in $\Gamma$ with $1\leq i\leq s$, and with $e=c_1$ or $e=a_j$ where $i\neq j$. Let $\alpha$ and $\beta$ be as in Lemma~\ref{lem:hamiltonian-circuit}. Define: \[ g_t: \begin{array}[t]{rcl}a_i&\mapsto& a_i e\alpha\\ e&\mapsto&a_i \beta a_i e \end{array} \] The illegal turn $t=(a_i,e)$ is $g_t$-long and mapped by $g_t^{LT}$ to the long turn $(e\alpha,\beta a_ie)$ which is legal. \item Symmetrically, let $t=(\overline{a}_i,\overline{e})$ be an illegal turn in $\Gamma$ with $1\leq i\leq s$, and with $e=c_\ell$ or $e=a_j$ where $i\neq j$. Let $\alpha'$ and $\beta'$ be as in Lemma~\ref{lem:hamiltonian-circuit}. Define: \[ g_t: \begin{array}[t]{rcl}a_i&\mapsto& \alpha' ea_i\\ e&\mapsto&ea_i \beta' a_i \end{array} \] The illegal turn $t=(\overline{a}_i,\overline{e})$ is $g_t$-long and mapped by $g_t^{LT}$ to the long turn $(\overline{e}\,\overline{\alpha}',\overline{\beta}'\overline{a}_i\overline{e})$ which is legal. \item In the \underline{odd case} we have one more illegal turn $t=(d,\overline{d})$. Define: \[ g_t:\begin{array}[t]{rcl} a_1&\mapsto&a_1\overline d c_1\cdots c_\ell\\ d&\mapsto&da_1\overline d \end{array} \] The illegal turn $t=(d,\overline{d})$ is $g_t$-long and mapped by $g_t^{LT}$ to the long turn $(a_1 \overline d,\overline{a}_1 \overline d)$ which is legal. \end{enumerate} In the \underline{maximal odd case} there is only one illegal turn $t=(a_1,c_1)$. As the rank $N$ is greater or equal to $3$, there is at least one edge $b_1$ which starts from some vertex $v_{k}$ and ends at some $v_{k'}$. We set $c_{[1,k]}=c_1\cdots c_\ell$ if $k=1$ and $c_{[1,k]}=c_1\cdots c_{k-1}$ if $2 \leq k\leq\ell$. We furthermore set $c_{[k',\ell]}=1$ if $k'=1$, and $c_{[k',\ell]}=c_{k'}\cdots c_\ell$ if $2 \leq k'\leq\ell$. We define: \[ g_t: \begin{array}[t]{rcl} a_1&\mapsto&c_1\cdots c_\ell c_{[1,k]}b_1c_{[k',\ell]}a_1\\ c_1&\mapsto&c_1\cdots c_\ell c_{[1,k]}b_1c_{[k',\ell]}a_1c_1\\ b_1&\mapsto&b_1c_{[k',\ell]}a_1c_{[1,k]}b_1 \end{array} \] \begin{lem} \label{lem:easy-check} In the maximal odd case, every long turn of length equal to $\ell+1$ with starting turn $(a_1, c_1)$ is $g_t$-long and mapped by $g_t^{LT}$ to a legal long turn. \end{lem} \begin{proof} Let $t^* = (a_1 e_2 \ldots e_{\ell +1}, c_1 e'_2 \ldots e'_{\ell +1})$ be the long turn under consideration. We first observe that, if $e_2 \neq c_1$ and $e_2 \neq a_1$\footnote{\,\, We'd like to thank C. Pfaff for having pointed out to us that the treatment of this case was missing in an earlier version of our paper.}, then $g_t^{LT}(t^*)$ has starting turn $(e_2, c_1)$ and hence is legal. In order to treat computationally the possible ``exceptional'' cases without too many subcases we introduce a variable $x$ which we set to $x = a_1 c_1$ if $e_{2} = c_1$ and $x = a_1$ if $e_{2} = a_1$. Similarly, a second variable $y$ will be used below which is set to $y = a_1 c_1$ if $e'_{\ell +1} = c_1$ and $y = a_1$ if $e'_{\ell + 1} = a_1$. We observe that in each case $g_t^{LT}(t^*)$ is legal unless $e'_2 \ldots e'_{\ell +1} = c_2 \ldots c_\ell c_1$ or $e'_2 \ldots e'_{\ell +1} = c_2 \ldots c_\ell a_1$. We compute \[ g_t^{LT}(t^*) = (b_1c_{[k',\ell]} x, c_k\cdots c_\ell c_{[1,k]}b_1c_{[k',\ell]} y) \,\,\, {\rm if} \,\,\, \ell \geq k \geq 2 , \] and \[ g_t^{LT}(t^*) = (b_1c_{[k',\ell]} x, c_1 \ldots c_\ell b_1c_{[k',\ell]} y) \,\,\, {\rm if} \,\,\, k = 1 \,\,\, {\rm and} \,\,\, \ell \geq 2. 
\] Finally we have \[ g_t^{LT}(t^*) = (b_1c_{[k',\ell]} x, c_1 b_1c_{[k',\ell]} y) \,\,\, {\rm if} \,\,\, k = \ell = 1, \] All three of those computed long turns are legal. \end{proof} We now verify: \begin{lem} \label{lem:homotopy-inverses} Each of the above defined maps $g_t$ is a homotopy equivalence. \end{lem} \begin{proof} For each case of the map $g_t$ we list below a map $g'_t$; the reader can verify directly that they are homotopy inverses of the maps $g_t$. Even and odd cases: \noindent (1) \[ g'_t: \begin{array}[t]{rcl} a_i&\mapsto& e \alpha \overline a_i \overline \beta \\ e&\mapsto& \beta a_i \overline \alpha \, \overline e a_i \overline \alpha \end{array} \] \noindent (2) \[ g'_t: \begin{array}[t]{rcl} a_i&\mapsto& \overline \beta' \overline a_i \alpha' e \\ e&\mapsto& \overline \alpha' a_i \overline e \, \overline \alpha' a_i \beta' \end{array} \] \noindent (3) \[ g'_t: \begin{array}[t]{rcl} a_1&\mapsto& a_1 \overline c_\ell \ldots \overline c_1 d c_1 \ldots c_\ell \overline a_1 \\ d&\mapsto& d c_1 \ldots c_\ell \overline a_1 \end{array} \] Maximal odd case: \[ g'_t: \begin{array}[t]{rcl} a_1&\mapsto& \overline c_{[k', \ell]} \overline b_1 \overline c_{[1, k]} a_1 \overline c_\ell \ldots \overline c_1 a_1^2 \overline c_\ell \ldots \overline c_1 a_1^2 \\ c_1&\mapsto& \overline a_1 c_1 \\ b_1&\mapsto& \overline c_{[1, k]} \overline a_1 c_1 \ldots c_\ell \overline a_1 c_{[1, k ]} b_1 \end{array} \] \end{proof} We thus have proved: \begin{prop} \label{prop:local-legalizing} Let $L=1$ in the even and odd cases and $L=\ell+1$ in the maximal odd case, and let $t$ be any illegal turn of $\Gamma$. For each long turn $t^*$ of $\Gamma$, with branch length $L$ and with starting turn $t$, the map $g_t$ is a train track morphism with the property that $t^*$ is $g_t$-long and mapped by $g_t^{LT}$ to a legal long turn. Furthermore, $g_t$ is a homotopy equivalence which fixes every vertex of $\Gamma$ and every gate of $\mathbf G$. \qed \end{prop} \section{Proof of the main result } \label{proofs} Proposition~\ref{prop:local-legalizing} is the main ingredient we need to obtain the desired legalizing map. This is done through an application of Proposition~7.1 of~\cite{CL} which we quote now, for the convenience of the reader in a slightly weakened form and with terminology adapted to the present paper: \begin{prop}[{\cite[Proposition~7.1]{CL}}] \label{prop:quote-1} Let $\Gamma$ be a graph with a gate structure $\mathbf G$. Assume that there exists a constant $L\geq 1$, and assume furthermore: \begin{enumerate} \item\label{item:hyp-legalizing} For each illegal long turn $t$ with branch length $L$ there exists a train track morphism $g_t:\Gamma\to\Gamma$ such that $t$ is $g_t$-long and mapped by $g_t^{LT}$ to a legal long turn. \item\label{item:hyp-positive} There exists a train track morphism $h:\Gamma\to\Gamma$ which satisfies $|h(e)| \geq 2$ for any edge $e$ of $\Gamma$. \item\label{item:hyp-hom-eq} All the above maps $g_t$ and $h$ are homotopy equivalences. \end{enumerate} Then there exists a legalizing train track morphism $g:\Gamma \to \Gamma$ which is obtained as a composition of the $g_t$ and $h$. 
\qed \end{prop} Before going back to the situation considered in the previous sections, we first quote the main result of~\cite{CL}, in a slightly strengthened version due to Remark~6.6 of~\cite{CL} and adapted to the circumstances here: \begin{thm}[{\cite[Theorem~1.1 and Remark~6.6]{CL}}] \label{thm:quote-2} Let $\Gamma$ be a graph with gate structure $\mathbf G$, let $f: \Gamma \to \Gamma$ be a train track morphism with positive transition matrix $M(f)$ and gate-Whitehead-graph at every vertex that is connected. Let $g: \Gamma \to \Gamma$ be a legalizing train track morphism with respect to the gate structure $\mathbf G$ which is a homotopy equivalence that fixes every vertex of $\Gamma$ and every gate of $\mathbf G$. Then: \begin{enumerate} \item\label{item:thm-quote-2-iwip} The map $f \circ g: \Gamma \to\Gamma$ is a train track representative of an iwip automorphism $\varphi \in \text{Out}({F}_N)$. \item\label{item:thm-quote-2-no-inp} There is no periodic INP in $\Gamma$ for the train track map $f \circ g$. In particular there are no non-trivial $(f \circ g)$-periodic conjugacy classes in ${F}_N$. \item\label{item:thm-quote-2-index} The stable index list for $\varphi$ is given by the gate index list for $\Gamma$ defined by $\mathbf G$ at the $f$-periodic vertices of $\Gamma$. \end{enumerate} \end{thm} We will now go back to the graph $\Gamma$ from the previous sections, i.e. with gate structure $\mathbf G$ that realizes the given index list from Theorem~\ref{thm:main} as gate indices. We will show below how to use the previously derived train track morphisms $h$ and $g_t$ on $\Gamma$ via the above quoted results from~\cite{CL} to finish the proof of Theorem~\ref{thm:main}. We first observe: \begin{cor} \label{cor:g-legalizing} Let $\Gamma$ be the graph with gate structure $\mathbf G$ defined in Section~\ref{sec:graph} for the given list of gate indices. Then there exists a legalizing train track morphism $g:\Gamma\to\Gamma$ which is a homotopy equivalence and fixes each vertex and each gate of $\Gamma$. \end{cor} \begin{proof} We use Proposition~\ref{prop:local-legalizing} to obtain the hypothesis~(\ref{item:hyp-legalizing}) of Proposition~\ref{prop:quote-1}, where we note that if $g_t$ legalizes a long turn $t'$ with branch length $C' \geq 1$, then it also legalizes any long turn $t$ with branch length $C \geq C'$ which contains $t'$ as ``subturn''. We note that hypothesis~(\ref{item:hyp-positive}) of Proposition~\ref{prop:quote-1} is satisfied by the train track morphism $h:\Gamma\to\Gamma$ from Proposition~\ref{prop:h-positive}. Hypothesis~(\ref{item:hyp-hom-eq}) is satisfied, as has been shown in Propositions~\ref{prop:h-positive} and \ref{prop:local-legalizing}. Thus Proposition~\ref{prop:quote-1} assures us the existence of the legalizing map $g$ as product of $h$ and the $g_t$. Since by Propositions~\ref{prop:h-positive} and \ref{prop:local-legalizing} the latter are all homotopy equivalences that fix every vertex of $\Gamma$ and every gate of $\mathbf G$, this proves the claim. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main}] We note that the map $h: \Gamma \to \Gamma$ from Section~\ref{sec:map-iwip-but-inp} satisfies by Proposition~\ref{prop:h-positive} all of the requirements of the map $f$ in Theorem~\ref{thm:quote-2}. By Corollary~\ref{cor:g-legalizing} the same is true for the map $g$ obtained in Corollary~\ref{cor:g-legalizing}. 
Hence part~(\ref{item:thm-quote-2-iwip}) of Theorem~\ref{thm:quote-2} shows that $h \circ g$ represents an iwip automorphism $\varphi$ of ${F}_N = \pi_1 \Gamma$, and part~(\ref{item:thm-quote-2-index}) assures that the stable index list of $\varphi$ is equal to the gate index list of $\Gamma$ with respect to $\mathbf G$, which was built in Section~\ref{sec:graph} to realize the given list of values $j_1, j_2,\ldots,j_\ell$, see Proposition~\ref{prop:graph-index-realization}. \end{proof} \begin{rem} \label{rem:intrinsic-disappeared} There is a subtlety in the above proof which we would like to point out to the reader, concerning the topic ``given gate structure'' versus ``intrinsic gate structure'' (as defined in Section~\ref{sec:prelim-CLP}). It shows up in relation to two aspects which are relevant in our context: \begin{enumerate} \item[(a)] The index of the automorphism represented by a train track morphism $f: \Gamma \to \Gamma$ with respect to a gate structure $\mathbf G$ depends on the intrinsic gate structure $\mathbf G(f)$, which may well be strictly finer than $\mathbf G$. \item[(b)] A map $g: \Gamma \to \Gamma$, which is a train track morphism with respect to two distinct gate structures $\mathbf G$ and $\mathbf G'$, may well be legalizing with respect to $\mathbf G$ but not with respect to $\mathbf G'$. (In this case, however, $\mathbf G$ must be strictly finer than $\mathbf G'$.) \end{enumerate} In the situation considered above, both potential problems are resolved as follows by the application of Theorem~\ref{thm:quote-2}: The train track morphism $h$ constructed in Section~\ref{sec:map-iwip-but-inp} may indeed well have an intrinsic gate structure $\mathbf G(h)$ that is strictly finer than the previously defined gate structure $\mathbf G$. However, in Lemma~5.9 of~\cite{CL} it has been shown that a train track morphism $g$ with respect to a gate structure $\mathbf G$ which is legalizing (with respect to $\mathbf G$) indeed satisfies $\mathbf G(g) = \mathbf G$. Now, since the composition of any train track morphism with a legalizing train track morphism is again legalizing (see~\cite[Proposition~5.8]{CL}), by the same argument as before we obtain automatically $\mathbf G(h \circ g) = \mathbf G$. \end{rem} \section{Discussion} \label{sec:proof-discussion} We will now discuss some further aspects of the index of free group automorphisms: \subsection{The index deficit} Handel and Mosher~\cite[\S1.5, Question~5]{HM-axes} ask which values of the {\em index deficit} $N- \ind_{stab} \varphi \, -1$ are possible for (iwip) automorphisms $\varphi \in \text{Out}({F}_N)$, and whether for $N \to \infty$ the maximal index deficit goes to $\infty$. From our Theorem~\ref{thm:main} it follows directly that for every $N \geq 3$ every value $j$ in $\frac{1}{2}\mathbb N$ with $0 \leq j \leq N - \frac{3}{2}$ is achieved as index deficit for some iwip $\varphi$. The maximal index deficit is hence equal to $N - \frac{3}{2}$, which indeed tends to $\infty$ for $N \to \infty$. This result has independently been obtained also by Sonya Leibman~\cite{leibman}. \subsection{The index of geometric iwips} We now consider in some detail the results of Masur and Smillie~\cite{MS}, in particular their Theorem~2 on p.~291: The translation of the terminology used there for quadratic differentials and pseudo-Anosov homeomorphisms to the usual terminology for free group automorphisms is not completely evident.
We give here a bit of translation help: In the absence of punctures on the surface $M$, a $p_k$-pronged singularity in~\cite{MS} translates into an isogredience class of automorphisms $\Phi_k$ with $\ind \Phi_k = j_k = \frac{p_k - 2}{2}$. In this case we would have to translate the genus $g$ of $M$, multiplied by 2, into the rank $N$ of the free group, except that without punctures $\pi_1 M$ is not free, which explains the summand $-4$ in the index equality in part (a) of Theorem~2 of \cite{MS}. Now, the $n$ punctures which Masur and Smillie admit in their Theorem~2 appear nowhere explicitly, but in fact they can be essentially anywhere: If a puncture lies outside of the singularities and outside the separatrices raying out from them, then it lies on some regular leaf of the stable foliation, and hence it becomes a ``$2$-prong singularity'', thus contributing a value $p_k = 2$ to the given list. The automorphisms $\Phi_k$ in the corresponding isogredience class have two attracting fixed points on $\partial {F}_N$ and ${\rm rank(Fix} \,\,\Phi_k) = 1$, which adds up to $\ind \Phi_k = 1.$ If a puncture coincides with a singularity, say with $p_k$ prongs, then, by the analogous reasoning, we obtain $\ind \Phi_k = \frac{p_k}{2}$. This explains also why a value $p_k = 1$, which they admit, does not lead to negative index of the corresponding isogredience class: any singularity with a single prong only must coincide with one of the punctures! However, in the context of the paper here we have to add a further restriction: A pseudo-Anosov surface homeomorphism induces an iwip automorphism if and only if the surface has only one puncture, so that we have in our context always the condition $n = 1$. We come now to the 4 exceptional cases listed in part~(c) of their Theorem~2: The first case $(6; -1)$ can not be realized by a pseudo-Anosov map with non-orientable stable foliation, but according to their Theorem~2 there must be a realization by a pseudo-Anosov with orientable stable lamination. The last case, $(\, \, \, ; -1)$ requires more than one puncture, so that it is ruled out by the previous paragraph. The third case, $(1, 3; -1)$, adds up to $g = 1$, so that for $n =1$ one has $N = 2$: In this case all automorphisms are geometric, and hence there is no chance to realize the corresponding index list $[\frac{1}{2}, \frac{1}{2}]$. However, in the paper here we always assume $N \geq 3$, so that this case does not occur. There remains the second exceptional case: $(3, 5; -1)$. In this case we have $g = 2$. From $n = 1$ we deduce the following two possibilities for the index list: $[\frac{5}{2}, \frac{1}{2}]$ or $[\frac{3}{2}, \frac{3}{2}]$, according to which of the two singularities coincides with the puncture. The former index list is alternatively realized by the case $(1, 7; -1)$ from~\cite{MS} (with the puncture at the singularity with only 1 prong), which satisfies all conditions of Theorem~2 of~\cite{MS}. The other index list, $[\frac{3}{2}, \frac{3}{2}]$, however, leads always back to their second exceptional case $(3, 5; -1)$, and hence can not be realized by a geometric automorphism. There remains as a last ``left-over mystery'' of the index realization problem for $N \geq 3$ the question whether the index list $[\frac{3}{2}, \frac{3}{2}]$ can perhaps be realized by a parageometric automorphism of $F_4$. \subsection{Some numerical experiments regarding the stable index} Our realization result naturally leads to the question of frequency of the different index lists. 
Thanks to the program developed by the first author in Python and Sage we were able to carry out the following numerical experiments. Fix a finite alphabet $A$ with $N$ letters and let $F_A$ be the free group on $A$. With Convention~\ref{convention:no-untouched-edges}, an elementary Nielsen automorphism is given by $a\mapsto ab$ with $a,b\in A^{\pm 1}$, $a\neq b$ and $a\neq b^{-1}$. Recall that elementary Nielsen automorphisms form a generating set of $\text{Aut}(F_A)$ and $\text{Out}(F_A)$. Our program tries random products of $L$ elementary Nielsen automorphisms, that is to say, these products approximate the random walk on $\text{Out}(F_A)$ for this generating set. Each line of the table below corresponds to a sample size of $100$ computed random automorphisms. Computations were made at the math department in Marseille, without compiling the Sage code or looking for serious optimization. Note that those automorphisms commonly involve words with several thousand letters. Note also that those frequencies are not completely significant. First, Rivin~\cite{rivin} (see also \cite{sisto}) proved that the frequency of iwips goes to $100\%$ when the number of elementary Nielsen automorphisms in the product goes to infinity (but $26$ or even $41$ are quite small compared to infinity). Different tests with the above data may lead to slightly different results. However, what seems to be significant is that: \begin{enumerate} \item Automorphisms with small indices are much more frequent than automorphisms with high indices. \item Automorphisms with index greater than half the theoretical maximum ($\frac{N-1}{2}$) almost never occur. In particular the maximal index $N-1$ never occurred in our tests out of thousands of tries. \item Several index lists seem to occur with positive frequency. \item The smallest index list, $[\frac 12]$, is not always the most frequent. \end{enumerate} We have no clue how to prove or disprove such experimental observations. \noindent \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline &&\multicolumn{6}{c|}{Frequency}&\multirow{3}{*}{\parbox{2cm}{\begin{center}Computation\\ time\end{center}} }\\ \cline{3-8} N&L&\multirow{2}{*}{iwips}&\multicolumn{5}{c|}{most frequent index lists among iwips}&\\ \cline{4-8} &&&$[\frac 12]$&$[\frac 12,\frac 12]$&$[1]$&$[\frac 12,\frac 12,\frac 12]$&$[\frac 12,\frac 12,\frac 12,\frac 12]$&\\ \hline 3&26&100\%&64\%&34\%&1\%&0\%&0\%&2 min\\ \hline 4&26&97\%&47\%&34\%&3\%&14\%&1\%&4 min\\ \hline 5&26&93\%&29\%&32\%&3\%&28\%&5\%&7 min\\ \hline 6&29&95\%&21\%&29\%&6\%&20\%&9\%&15 min\\ \hline 7&34&91\%&17\%&26\%&9\%&25\%&7\%&23 min\\ \hline 8&36&84\%&13\%&19\%&7\%&17\%&18\%&31 min\\ \hline 9&39&78\%&11\%&6\%&11\%&18\%&10\%&46 min\\ \hline 10&41&76\%&3\%&8\%&8\%&5\%&8\%&1h5min\\ \hline \end{tabular} \end{center} \noindent Institut de mathématiques de Marseille\\ Université d'Aix-Marseille\\ 39, rue Frédéric Joliot Curie \\ 13453 Marseille Cedex 13\\ France\\ \href{mailto:[email protected]}{\nolinkurl{[email protected]}}\\ \href{mailto:[email protected]}{\nolinkurl{[email protected]}} \end{document}
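The sampling scheme used in the numerical experiments above can be sketched as follows (a minimal plain-Python illustration, not the authors' Sage program; the upper-case encoding of inverse letters and the parameter values are assumptions made only for this sketch):
\begin{verbatim}
import random

def random_nielsen(A):
    # Pick a, b in A^{+-1} with a != b and a != b^{-1};
    # the elementary Nielsen automorphism maps a -> ab.
    letters = A + [x.upper() for x in A]   # upper case = formal inverse
    while True:
        a, b = random.choice(letters), random.choice(letters)
        if a != b and a.swapcase() != b:
            return a, b

def apply_nielsen(word, a, b):
    # Apply a -> ab (hence a^{-1} -> b^{-1} a^{-1}) letter by letter,
    # then freely reduce the resulting word.
    out = []
    for x in word:
        if x == a:
            out.extend([a, b])
        elif x == a.swapcase():
            out.extend([b.swapcase(), a.swapcase()])
        else:
            out.append(x)
    reduced = []
    for x in out:
        if reduced and reduced[-1] == x.swapcase():
            reduced.pop()
        else:
            reduced.append(x)
    return reduced

A = ["a", "b", "c"]                        # rank N = 3 (illustrative)
L = 26                                     # length of the random product
images = {x: [x] for x in A}
for _ in range(L):
    p, q = random_nielsen(A)
    images = {x: apply_nielsen(w, p, q) for x, w in images.items()}
print({x: "".join(w) for x, w in images.items()})
\end{verbatim}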
\begin{document} \title{Squeezed light at sideband frequencies below 100 kHz from a single OPA} \author{R. Schnabel} \affiliation{Max-Planck-Institut f\"ur Gravitationsphysik (Albert-Einstein-Institut),\\ Institut f\"ur Atom- und Molek\"ulphysik, Universit\"at Hannover, 30167 Hannover, Germany} \author{H. Vahlbruch} \affiliation{Max-Planck-Institut f\"ur Gravitationsphysik (Albert-Einstein-Institut),\\ Institut f\"ur Atom- und Molek\"ulphysik, Universit\"at Hannover, 30167 Hannover, Germany} \author{A. Franzen} \affiliation{Max-Planck-Institut f\"ur Gravitationsphysik (Albert-Einstein-Institut),\\ Institut f\"ur Atom- und Molek\"ulphysik, Universit\"at Hannover, 30167 Hannover, Germany} \author{S. Chelkowski} \affiliation{Max-Planck-Institut f\"ur Gravitationsphysik (Albert-Einstein-Institut),\\ Institut f\"ur Atom- und Molek\"ulphysik, Universit\"at Hannover, 30167 Hannover, Germany} \author{N. Grosse} \affiliation{Max-Planck-Institut f\"ur Gravitationsphysik (Albert-Einstein-Institut),\\ Institut f\"ur Atom- und Molek\"ulphysik, Universit\"at Hannover, 30167 Hannover, Germany} \author{H.-A. Bachor} \affiliation{Department of Physics, Faculty of Science, The Australian National University, A.C.T. 0200, Australia.} \author{W. P. Bowen} \affiliation{Department of Physics, Faculty of Science, The Australian National University, A.C.T. 0200, Australia.} \author{P. K. Lam} \affiliation{Department of Physics, Faculty of Science, The Australian National University, A.C.T. 0200, Australia.} \author{K. Danzmann} \affiliation{Max-Planck-Institut f\"ur Gravitationsphysik (Albert-Einstein-Institut),\\ Institut f\"ur Atom- und Molek\"ulphysik, Universit\"at Hannover, 30167 Hannover, Germany} \email{[email protected]} \begin{abstract} Quantum noise of the electromagnetic field is one of the limiting noise sources in interferometric gravitational wave detectors. Shifting the spectrum of squeezed vacuum states downwards into the acoustic band of gravitational wave detectors is therefore of challenging demand to quantum optics experiments. We demonstrate a system that produces nonclassical continuous variable states of light that are squeezed at sideband frequencies below 100~kHz. A single optical parametric amplifier (OPA) is used in an optical noise cancellation scheme providing squeezed vacuum states with coherent bright phase modulation sidebands at higher frequencies. The system has been stably locked for half an hour limited by thermal stability of our laboratory. \end{abstract} \maketitle Currently, an international array of first-generation, kilometer-scale laser interferometric gravitational-wave detectors, consisting of GEO\,600~\cite{geo02}, LIGO~\cite{LIGO}, TAMA\,300~\cite{TAMA} and VIRGO~\cite{VIRGO}, targeted at gravitational-waves (GW) in the acoustic band from 10~Hz to 10~kHz, is going into operation. These detectors are all Michelson interferometers. Intense laser light is injected from the bright port, whereas the output port is locked to a dark fringe. The anti-symmetric mode of arm-length oscillations (e.g.\ excited by a gravitational wave) yields a sideband modulation field in the anti-symmetric (optical) mode which is detected at the dark output port. In general, several technical noise sources, and more fundamentally, thermal noise and quantum noise (radiation pressure and shot-noise \cite{KLMTV01}, \cite{HCCFVDS03}) contribute to the signal noise floor. 
Recently it has been shown that thermal noise and radiation pressure noise might be sensed by additional short high-finesse cavities and subsequently reduced \cite{CHP99}, \cite{CHP03}. Shot-noise on the other hand can be reduced by squeezed vacuum states of light injected into the dark port of the interferometer (\cite{SHSD04} and references therein). Experimental progress has been reported in \cite{KSMBL02} where the shot-noise of a power-recycled table-top interferometer has been reduced by squeezed states at about 5~MHz. GW interferometers require squeezed states at frequencies of the acoustic detection band. Such states have not been demonstrated so far since laser sources are classically noisy in the kHz-regime and below. Noise cancellation schemes have been proposed to enhance squeezing utilizing Kerr non-linearity in fibers \cite{Shirasaki90}, laser diodes \cite{LHY91} and second harmonic generation \cite{Ralph95}. In a recent experiment, continuous wave squeezing at frequencies as low as 220 kHz was demonstrated \cite{BSTBL02} utilizing two squeezed beams from two independent optical parametric amplifiers (OPAs). \begin{figure} \caption{Schematic of our experiment to produce squeezed vacuum states of light below 100 kHz utilizing a single OPA. } \label{TheoryFig} \end{figure} In this paper we report the observation of a squeezed vacuum state at a sideband frequency of 80~kHz utilizing a single dim squeezed beam and a classically correlated bright coherent laser beam in a Mach-Zehnder configuration. Broadband vacuum squeezing from 130~kHz up to several MHz was also measured. Consider a Mach-Zehnder interferometer with independently adjustable beamsplitter reflectivities $(\varepsilon_{1},\varepsilon_{2})$ and an OPA in one arm of the interferometer (Fig.~\ref{TheoryFig}). We theoretically and experimentally show that this configuration can be operated as a common mode noise cancellation experiment providing a squeezed vacuum with phase modulation sidebands at the interferometer output. First let us derive the expression for the amplitude quadrature operator in frequency space $\hat X^{+}_{{\rm out}}$. We consider amplitude noise from our laser source $(\hat X^{+}_{{\rm src}})$ and vacuum fluctuations entering lossy ports of our set-up $(\hat X^{+}_{{\rm vac}}$, $\hat X^{+}_{{\rm loss}}$, $\hat X^{+}_{{\rm oc}})$. A bright coherent state represented by the quadrature operator $\hat{X}^{+}_{\mathrm{src}}$ encounters the first beamsplitter $\varepsilon_{1}$ that couples in the vacuum state entering through the unused input port, thereby producing two fields $\hat{X}^{+}_{\mathrm{ic}}\!=\! \sqrt{\varepsilon_{1}}\hat{X}^{+}_{\mathrm{vac}}\!+\!\sqrt{1\!-\!\varepsilon_{1}}\hat{X}^{+}_{\mathrm{src}}$ and $\hat{X}^{+}_{\mathrm{ref}}\!=\!\sqrt{1\!-\!\varepsilon_{1}}\hat{X}^{+}_{\mathrm{vac}}\!-\!\sqrt{\varepsilon_{1}}\hat{X}^{+}_{\mathrm{src}}$. The field $\hat{X}^{+}_{\mathrm{ic}}$ is used to seed a single OPA consisting of a $\chi^{(2)}$ non-linear medium inside a highly over-coupled optical resonator. In our scheme the OPA acts as a de-amplifier resulting in a dim amplitude squeezed transmitted output field $\hat X^{+}_{{\rm sqz}}$. In this configuration the overall non-linearity $g$ of the OPA resonator, which is dependent on the crystal non-linearity, second harmonic pump power and mode matching of fundamental and pump beams, becomes real and negative. 
We follow the treatment of optical parametric oscillation and amplification which uses the linearized formalism of quantum mechanics and the mean field approximation as given in \cite{Lam99} yielding the OPA transfer function \begin{eqnarray} \hat{X}^{+}_{\mathrm{sqz}}&\!=\!&\Big\{\sqrt{4\kappa_{\mathrm{ic}}\kappa_{\mathrm{oc}}}\hat{X}^{+}_{\mathrm{ic}}\!+\!\sqrt{4\kappa_{\mathrm{loss}}\kappa_{\mathrm{oc}}}\hat{X}^{+}_{\mathrm{loss}}\nonumber\\ &&+(2\kappa_{\mathrm{oc}}\!-\!i\Omega\!-\!\kappa\!+ \!g)\hat{X}^{+}_{\mathrm{oc}}\Big\}/(i\Omega\!+\!\kappa\!-\! g) \end{eqnarray} where $\kappa_{\mathrm{ic}}$, $\kappa_{\mathrm{oc}}$, $\kappa_{\mathrm{loss}}$ are the input, output and loss coupling rates respectively for the OPA resonator with total decay rate $\kappa\!=\!\kappa_{\mathrm{ic}}\!+\!\kappa_{\mathrm{oc}}\!+\!\kappa_{\mathrm{loss}}$. The sideband frequency of detection is set by $\Omega/2\pi$. The reference beam $\hat{X}^{+}_{\mathrm{ref}}$ is given a phase shift $\phi$ before being interfered with the squeezed beam $\hat{X}^{+}_{\mathrm{sqz}}$ on the second beamsplitter $\varepsilon_{2}$ to give $\hat{X}^{+}_{\mathrm{out}}\!=\!\sqrt{\varepsilon_{2}}\hat{X}^{+}_{\mathrm{sqz}}\!+\!e^{-i\phi}\sqrt{1\!-\!\varepsilon_{2}}\hat{X}^{+}_{\mathrm{ref}}$ on the chosen output port. Fully expanding this expression and collecting terms it becomes clear that \begin{eqnarray} \hat{X}^{+}_{\mathrm{out}}&\!=\!&\biggl\{ \hat{X}^{+}_{\mathrm{src}}\Bigl[\sqrt{(1\!-\!\varepsilon_{1})\varepsilon_{2}}\sqrt{4\kappa_{\mathrm{ic}}\kappa_{\mathrm{oc}}}\nonumber\\ &&-\sqrt{\varepsilon_{1}(1\!-\!\varepsilon_{2})}(i\Omega\!+\!\kappa\!-\! g)e^{-i \phi}\Big]\nonumber\\ &&+\hat{X}^{+}_{\mathrm{vac}}\Bigl[\sqrt{\varepsilon_{1}\varepsilon_{2}}\sqrt{4\kappa_{\mathrm{ic}}\kappa_{\mathrm{oc}}}\nonumber\\ &&+\sqrt{(1\!-\!\varepsilon_{1})(1\!-\!\varepsilon_{2}\!)}(i\Omega\!+\!\kappa\!-\! g)e^{-i \phi}\Big]\nonumber\\ &&+\hat{X}^{+}_{\mathrm{oc}}\Bigl[\sqrt{\varepsilon_{2}}(2\kappa_{\mathrm{oc}}\!-\!i\Omega\!-\!\kappa\!+\! g)\Big]\nonumber\\ &&+\hat{X}^{+}_{\mathrm{loss}}\Bigl[\sqrt{\varepsilon_{2}}\sqrt{4\kappa_{\mathrm{loss}}\kappa_{\mathrm{oc}}}\Big]\bigg\}/(i\Omega\!+\!\kappa\!-\! g) \end{eqnarray} The value of $\phi$ is set to zero to ensure that complete cancellation of the coherent amplitude at the chosen output port is possible. The sideband detection frequencies are assumed to be within the linewidth of the OPA resonator such that the approximation $\Omega\ll\kappa$ is valid. From the above equation the noise spectrum of the output field as measured by a homodyne detector may be calculated using $V^{+}_{\mathrm{out}}\!=\!\langle (\hat{X}^{+}_{\mathrm{out}})^{2} \rangle \!- \!\langle \hat{X}^{+}_{\mathrm{out}} \rangle^{2}$. The input vacuum states are assumed to be uncorrelated amongst themselves and with variances set to $V^{+}_{\mathrm{vac}}\!=\!V^{+}_{\mathrm{oc}}\!=\!V^{+}_{\mathrm{loss}}\!=\!1$. The laser source contribution $(\hat{X}^{+}_{\mathrm{src}})$ to the output field may be completely removed provided that the following condition, consisting of a relation between both beamsplitter reflectivities and OPA resonator properties, is satisfied: \begin{equation} \varepsilon_{1}^{+}=1\!-\!\left[1\!+\!\frac{\varepsilon_{2}}{(1\!-\!\varepsilon_{2})}\frac{4\kappa_{\mathrm{ic}}\kappa_{\mathrm{oc}}/\kappa^{2}}{(1\!-\!g/\kappa)^{2}} \right]^{-1} \end{equation} This condition can be interpreted as $\varepsilon_{1}$ compensating for the classical OPA gain in order to achieve perfect interference visibility. 
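As a quick numerical illustration of this balance condition, the following sketch evaluates $\varepsilon_1^+$ from the equation above; all parameter values below are purely illustrative assumptions and are not the parameters of the experiment described in the following:
\begin{verbatim}
def eps1_plus(eps2, k_ic, k_oc, k_loss, g):
    # Reflectivity eps_1 that removes the laser-source term, following the
    # condition above; the OPA gain g is real and negative (de-amplification).
    kappa = k_ic + k_oc + k_loss
    ratio = (eps2 / (1.0 - eps2)) \
            * (4.0 * k_ic * k_oc / kappa**2) / (1.0 - g / kappa)**2
    return 1.0 - 1.0 / (1.0 + ratio)

# assumed, illustrative rates (weak input coupling, dominant output coupling):
print(eps1_plus(eps2=0.99, k_ic=0.02, k_oc=0.5, k_loss=0.01, g=-0.2))
\end{verbatim}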
With $\varepsilon_{1}$ suitably adjusted, the output noise spectrum becomes a squeezed vacuum of variance $V^{+}_{\mathrm{sqzvac}}$. Here the superscript $^+$ describes the fact that a phase reference might still be given by a modulation field at higher frequencies. \begin{equation} V^{+}_{\mathrm{out}}(\varepsilon_{1}=\varepsilon_{1}^{+})=V^{+}_{\mathrm{sqzvac}} =1\!+\!\varepsilon_{2}\frac{4\kappa_{\mathrm{oc}} g}{(\kappa-g)^{2}} \label{Vout} \end{equation} It can be seen that $\varepsilon_2$ should be close to one to keep losses on the squeezing as small as possible. In the experiment described below we chose $1 - \varepsilon_2 = 1 \%$, which is already a small value in comparison with typical losses in current squeezing experiments. Note that the noise variance in Eq.~(\ref{Vout}) is identical to that of a single OPA seeded with a quantum noise limited input and detected with $1 - \varepsilon_{2}$ intensity loss. However, here a modulation performed in only one interferometer arm will endow the squeezed vacuum with bright modulation sidebands outside the squeezing band of interest, thereby facilitating phase locking of any down-stream applications. Figs. \ref{OverviewSpectrum} and \ref{80kHzSpectrum} show experimental results from the single OPA noise cancellation scheme according to Fig.~\ref{TheoryFig}. The OPA was constructed from a non-critically phasematched MgO:LiNbO$_{3}$ crystal inside a hemilithic resonator of input and output power reflectivities of 0.9997 and 0.95, respectively. The laser source of the experiment was a monolithic non-planar Nd:YAG ring laser of up to 2~Watt single mode output power at 1064 nm. Intensity noise below 2~MHz was reduced by a servo loop acting on the pump diode current. The OPA was seeded with a coherent beam of 30~mW at the fundamental wavelength and pumped with 300~mW of the second harmonic (532~nm). The second harmonic beam was generated in a cavity which was similar to the OPA cavity. Conversion efficiency of 65\% was achieved. The phase difference of fundamental and second harmonic waves inside the OPA were stably locked to deamplification generating a dim amplitude quadrature squeezed beam of about 200~$\mu$W at 1064~nm. The noise power spectrum was measured in a homodyne detector constructed from two optically and electronically matched ETX~500 photodiodes. Amplitude quadrature squeezing was observed at sideband frequencies from 4 MHz up to the 29~MHz OPA cavity linewidth. Curve (d) in Fig.~\ref{OverviewSpectrum} shows the noise power spectrum of the amplitude quadrature stably locked to the homodyning local oscillator. The locking loop error signal was extracted from 20~MHz phase-modulation sidebands generated by application of a RF electric field along the optical axis of the non-linear OPA crystal. The shot-noise reference given by curve (a) was measured by blocking the squeezed beam before the homodyne detector. The apparent increase in shot-noise level at lower frequencies is due to higher homodyne detector dark noise, cf. curve (c). In a second step the dim squeezed beam was overlapped on a beamsplitter of reflectivity $\varepsilon_2 = 99 \%$ with a coherent beam from the same laser source. The intensities and the relative phase of both inputs were adjusted to provide a dark output of less than 6~$\mu$W. The amplitude noise spectrum of the dark port is shown in Fig.~\ref{OverviewSpectrum} curve (b). 
The spectra in Fig.~\ref{OverviewSpectrum} were recorded on a spectrum analyzer with resolution bandwidth (RBW) set to 100 kHz and video bandwidth (VBW) set to 100~Hz. In Fig.~\ref{80kHzSpectrum} the RBW was reduced to 3 kHz and the VBW to 10~Hz. Here, measured shot-noise (a) and squeezed vacuum noise (b) are shown after Gaussian weighted averaging within the RBW. The detector dark noise was at least 2.5 dB below the squeezed trace before being subtracted. Squeezing at 80~kHz and broadband squeezing from 130~kHz upwards were observed. The squeezed beam still carried the phase-modulation sidebands at 20~MHz and was stably locked to the homodyning local oscillator phase. \begin{figure} \caption{Measured noise power spectra at sideband frequencies $\Omega/2\pi$. The distances between the shot-noise curve (a) and curves (b) and (d), respectively, represent the directly observed squeezing. The distance between curves (b) and (d) represents the optical cancellation of technical noise achieved. (c) represents the detector dark noise.} \label{OverviewSpectrum} \end{figure} \begin{figure} \caption{Measured noise power spectra. Squeezing was observed where curve (b) was below the shot-noise curve (a). } \label{80kHzSpectrum} \end{figure} Within the assumptions of the presented theory, perfect cancellation of technical laser source noise is possible, provided that anti-correlated noise contributions are kept to zero, and matching of the coherent amplitudes and the spatial modes at the combining beam splitter are perfect. A flat spectrum of a constant level of squeezing inside the cavity linewidth is then expected. In our experiment, residual classical noise at some frequencies still limited the observation of squeezing to the lowest frequency of 80~kHz. Our results, however, were not limited by the strength of optical noise cancellation. Classical noise was suppressed by 25~dB at the 1.5~MHz laser relaxation oscillation (Fig.~\ref{OverviewSpectrum}). The strength of the optical noise cancellation was limited by 94.4\% visibility at the 99/1 combining beamsplitter. This limitation also led to the residual power of 6~$\mu$W at the dark output port. Classical noise from the homodyne local oscillator was electronically suppressed by $60$~dB and was not significant for our measurements. Our results were limited by anti-correlated classical noise, possibly arising from acoustic noise coupling into the optical scheme, locking noise of the OPA, electronic pickup of stray RF fields in the electrodes applied to the OPA crystal, or even noise coupled into the system via the second harmonic pump. The signal at 4.8~MHz was identified to be the beat of two modulation frequencies and indeed picked up by the OPA. In both figures the lower boundary of noise power was set by the squeezing achieved in the OPA and subsequent losses. Photodiode efficiencies of $(92\pm 2)\%$, homodyne detector visibility of $0.975 \pm 0.003$, propagation losses of $(5 \pm 0.5) \%$ and OPA escape efficiency of $(88 \pm 2)\%$ give an overall loss of 27 \%. In conclusion, we have reported the observation of squeezing at low sideband frequencies down to 80~kHz from a single OPA. The wavelength of the carrier laser field was 1064~nm which is compatible with current GW detectors. In total, just 600~mW laser power was necessary to generate the squeezed states. We note that power requirements linearly scale up with the number of OPAs employed in the scheme. 
One goal of further investigations will be the reduction of losses and the effect of green-induced infrared absorption \cite{FKAF01}. Both will enable higher green pump power and therefore increased squeezing strength. Residual classical noise contributions at low frequencies will also be identified in further investigations that aim to reach the acoustic band of gravitational wave detectors. This work has been supported by the Deutsche Forschungsgemeinschaft and is part of Sonderforschungsbereich 407. \end{document}
\begin{document} \maketitle \begin{abstract} In this paper, we study ``demi-caract\'eristique'' and (Poisson) stability in the sense of Poincar\'e. Using the definitions \`a la Poincar\'e for $\mathbb{R} $-actions $v$ on compact connected surfaces, we show that ``$R$-closed'' $\Rightarrow$ ``pointwise almost periodicity (p.a.p.)'' $\Rightarrow$ ``recurrence'' $\Rightarrow$ non-wandering. Moreover, we show that the action $v$ is ``recurrent'' with $|\mathrm{Sing}(v)| < \infty$ iff $v$ is regular non-wandering. If there are no locally dense orbits, then $v$ is ``p.a.p.'' iff $v$ is ``recurrent'' without ``orbits'' containing infinitely many singularities. If $|\mathrm{Sing}(v)| < \infty$, then $v$ is ``$R$-closed'' iff $v$ is ``p.a.p.''. \end{abstract} \section{Introduction and preliminaries} In his celebrated paper \cite{P}, which is an origin of dynamical systems, Poincar\'e studied surface flows. In the series of related works, he used definitions slightly different from the present notions (e.g. semi-characteristics, limit cycles). On the other hand, the following fact for topological dynamics on compact metrizable spaces is known: $ \text{$R$-closed} \subsetneq \text{p.a.p.} \subsetneq \text{recurrent} \subsetneq \text{non-wandering}. $ In this paper, we study a surface flow using the notions of Poincar\'e. In particular, we study ``demi-caract\'eristique'' and (Poisson) stability in the sense of Poincar\'e (we call these extended positive orbits and extended recurrence). Precisely, we show the following relation for $\mathbb{R} $-actions on compact surfaces: $$ \text{extended $R$-closed} \subsetneq \text{extended p.a.p.} \subsetneq \text{extended recurrent} \subsetneq \text{non-wandering}. $$ Moreover, we show that the $\mathbb{R} $-action $v$ on a compact surface $S$ is extended recurrent with at most finitely many singularities if and only if $v$ is regular non-wandering. If $v$ has no locally dense orbits, then $v$ is extended recurrent with $| \mathrm{Sing}(v) \cap O_\mathrm{{ex}}(x)| < \infty$ for each point $x \in S$ if and only if $v$ is extended p.a.p.. If $|\mathrm{Sing}(v)| < \infty$, then $v$ is extended $R$-closed if and only if $v$ is extended p.a.p.. Recall ``demi-caract\'eristique'' in the sense of Poincar\'e. Let $v$ be an $\mathbb{R} $-action on a surface $S$. For a singular point $x$ of $S$, we say that $x$ is a (topological) saddle for a continuous $\mathbb{R} $-action if there is a neighborhood of $x$ which is locally homeomorphic to a neighborhood of a saddle for a $C^1$ $\mathbb{R} $-action. For a point $x$ of $S$, define $O^+_i(x)$ as follows: $O^+_0(x) := O^+(x)$, $$O^+_{i + 1}(x) := O^+_i(x) \cup \bigcup \{ W^u(\omega(x')) \mid x' \in O^+_i(x), \ \omega(x') : \text{ saddle } \}$$ for each ordinal $i$, and $O^+_{\mu}(x) := \bigcup_{\nu < \mu}O^+_{\nu}(x)$ for any limit ordinal $\mu$. Here $W^u(y) := \{ z \in S \mid \alpha(z) = \{ y \} \}$. The set $O^+_\mathrm{{ex}}(x) := \bigcup \{ O^+_{\nu}(x) \mid {\nu : \text{ordinal}} \}$ is called the extended positive orbit of $x$. Note that Poincar\'e called this a demi-caract\'eristique. Similarly, we define the extended negative orbit $O^-_\mathrm{{ex}}(x)$, and we define the extended orbit $O_\mathrm{{ex}}(x) := O^+_\mathrm{{ex}}(x) \cup O^-_\mathrm{{ex}}(x)$. Note that, in general, $O_\mathrm{{ex}}(x) \neq O_\mathrm{{ex}}(y)$ for a point $x \in S$ and a point $y \in O_\mathrm{{ex}}(x)$.
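As a minimal illustration of the transfinite recursion above (added for the reader; it is not part of Poincar\'e's or the original definitions), suppose that $x$ is a regular point with $\omega(x) = \{ s \}$ for a single saddle $s$. Then $O^+_1(x) = O^+(x) \cup W^u(s)$, and if no point $x' \in W^u(s)$ has a saddle as its omega limit set, the recursion stabilizes at this stage, so $O^+_\mathrm{{ex}}(x) = O^+(x) \cup W^u(s)$.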
In fact, the binary relation $\{ (x, y) \mid y \in O_\mathrm{{ex}}(x) \}$ is reflexive and symmetric but need not be transitive. If the extended orbit $O_\mathrm{{ex}}(x)$ is not a single point but a compact subset, then it is called an extended periodic orbit. Notice that the positive prolongations and the extended positive orbits are independent. A closed subset $\gamma$ of an extended orbit is called a limit cycle in the sense of Poincar\'e or an extended limit cycle if $\gamma$ is not a singleton but a union of simple closed curves and there is a point $x$ of $S - \gamma$ such that $\gamma$ is either the omega limit set $\omega(x)$ or the alpha limit set $\alpha(x)$ of $x$. A point $x$ of $S$ is extended positive recurrent if either $x$ is positive recurrent or $x \in \overline{O^+_\mathrm{{ex}}(x) - O^+(x)}$. Similarly, we define ``extended negative recurrent''. A point $x$ of $S$ is extended recurrent if $x$ is extended positive recurrent and extended negative recurrent. The $\mathbb{R} $-action $v$ is said to be extended recurrent if so is each point of $S$. By the definitions, recurrence implies extended recurrence. Moreover, each flow on a compact surface which consists of closed orbits and at least one saddle connection is not recurrent but extended recurrent. In addition, there is a flow which is not extended recurrent but non-wandering. Indeed, consider the unit sphere $\mathbb{S} ^2 \subset \mathbb{R} ^3$ and define $v_t: \mathbb{S} ^2 \to \mathbb{S} ^2$ with $\mathrm{Fix}(v) = \{(1, 0, 0), (0, 0, 1), (0, 0, -1) \}$ such that the regular orbits consist of $(\mathbb{S} ^1 \times \{0 \}) - \{(1, 0, 0) \}$ and of circles each of which is the intersection $\mathbb{S} ^2 \cap (\mathbb{R} ^2 \times \{z\})$ for some $z \in (-1,1) - \{ 0 \}$. \section{Extended recurrence and non-wandering property} From now on, let $v$ be an $\mathbb{R} $-action on a compact surface $S$. Denote by $\mathrm{LD}$ (resp. $\mathrm{E}$) the union of locally dense (resp. exceptional) orbits of $v$. Recall that an orbit $O$ is proper if $\overline{O} - O$ is closed, is locally dense if $\mathrm{int}\overline{O} \neq \emptyset$, and is exceptional if $O$ is neither proper nor locally dense, and that a point is proper (resp. locally dense, exceptional) if so is its orbit. Let $\mathrm{P}$ be the union of points whose orbits are not closed but proper. Recall the following fundamental fact. \begin{lemma}\label{lem0a} The set of saddles is countable. \end{lemma} \begin{proof} By the definition of saddles, each saddle has a neighborhood which contains no other saddles, and so the set of saddles is discrete. Since $S$ is second countable, every discrete subset of $S$ is countable, and so the set of saddles is countable. \end{proof} The above proof also shows that an extended orbit $O_\mathrm{{ex}}(x)$ for a point $x$ of $S$ contains at most countably many saddles and equals $O_{\aleph_1}(x)$. We now establish some useful tools. \begin{lemma}\label{lem0} Each extended periodic orbit $O$ consists of finitely many proper orbits and saddles. \end{lemma} \begin{proof} First, we show that $O$ contains at most finitely many saddles. Otherwise $O$ contains infinitely many saddles, which accumulate at some point of the compact set $O$; since the set of singularities is closed and saddles are isolated, this accumulation point is a singularity which is not a saddle, which contradicts the definition of extended orbits. Then, since each saddle has only finitely many separatrices, $O$ contains at most finitely many distinct orbits. If $O$ contains either locally dense orbits or exceptional orbits, then the closedness of $O$ implies that $O$ contains at least uncountably many orbits, which contradicts the finiteness.
Thus $O$ consists of finitely many proper orbits and saddles. \end{proof} \begin{lemma}\label{lem00} If there are extended limit cycles, then there is a wandering point $x \in \mathrm{P}$ such that $O(x) = O_\mathrm{{ex}}(x)$. \end{lemma} \begin{proof} Suppose that there is an extended limit cycle $C$. We may assume that there is a point whose omega limit set is $C$. Then there are uncountably many proper orbits each of whose omega limit set is $C$. Since the set of saddles is countable, there is a proper orbit $O$ whose extended orbit coincides with itself and which satisfies $\omega(O) = C$. This implies that each point of $O$ is wandering. \end{proof} \begin{lemma}\label{lem1} If $\mathrm{P}$ consists of at most finitely many orbits, then $v$ is non-wandering. \end{lemma} \begin{proof} It suffices to show that each point $x \in \mathrm{P}$ is non-wandering. Indeed, by the flow box theorem, we have $x \in \overline{\mathrm{E} \sqcup \mathrm{LD} \sqcup \mathrm{Per}(v)}$. Since each point of $\mathrm{E} \sqcup \mathrm{LD} \sqcup \mathrm{Per}(v)$ is either positive or negative recurrent, we have that $x$ is non-wandering and so $v$ is non-wandering. \end{proof} \begin{lemma}\label{lem023} Suppose that $v$ is extended recurrent. For any point $x$ which is regular or is a saddle, there is a neighborhood $U$ of $O_\mathrm{{ex}}(x)$ such that $U - O_\mathrm{{ex}}(x)$ contains no singularities. \end{lemma} \begin{proof} If $O_\mathrm{{ex}}(x)$ contains no saddles, then it contains no singularities and so the flow box theorem implies the assertion. Thus we may assume that $O_\mathrm{{ex}}(x)$ contains saddle points. By the definition of extended orbits, we obtain that $O_\mathrm{{ex}}(x) \cap \mathrm{Sing}(v)$ consists of saddle points. Moreover, each saddle $p$ has a neighborhood $U_p$ such that $U_p - \{ p \}$ consists of regular points. Hence, by the flow box theorem, there is a neighborhood $U$ of $O_\mathrm{{ex}}(x)$ such that $U - O_\mathrm{{ex}}(x)$ contains no singularities. \end{proof} The extended recurrence implies the (usual) non-wandering property. \begin{lemma}\label{lem2} If $v$ is extended recurrent, then $v$ is non-wandering. \end{lemma} \begin{proof} Note that $S - \mathrm{P} = \mathrm{E} \sqcup \mathrm{LD} \sqcup \mathrm{Per}(v) \sqcup \mathrm{Sing}(v)$. If $\mathrm{int}\mathrm{P} = \emptyset$, then the closedness of $\mathrm{Sing}(v)$ implies that $\overline{\mathrm{E} \sqcup \mathrm{LD} \sqcup \mathrm{Per}(v)} \supset \mathrm{P}$ and so $v$ is non-wandering. Thus it suffices to show $\mathrm{int}\mathrm{P} = \emptyset$. Indeed, recall that the set of saddles is countable. For any $x \in \mathrm{P}$, the extended recurrence implies that the omega (resp. alpha) limit set of $x$ is a saddle. Therefore $\mathrm{P}$ consists of at most countably many orbits. Since $S$ is a Baire space, we have that $\mathrm{int}\mathrm{P} = \emptyset$. \end{proof} Recall that a continuous $\mathbb{R} $-action $v$ is regular if each singularity of $v$ is locally homeomorphic to a non-degenerate singularity of a $C^1$ vector field. Note that, by Lemma 2.1 of \cite{Y}, the non-wandering flow $v$ has no exceptional orbits and satisfies $\overline{\mathrm{LD} \sqcup \mathrm{Per}(v)} \supseteq S- \mathrm{Sing}(v)$. \begin{proposition}\label{lem23} Suppose that $v$ is non-wandering. Then $v$ is regular if and only if $v$ is extended recurrent and has finitely many singularities. Moreover, if $v$ is regular, then either $O_\mathrm{{ex}}(x)$ is closed or both $O^+_\mathrm{{ex}}(x)$ and $O^-_\mathrm{{ex}}(x)$ are locally dense for any $x \in S$.
\end{proposition} \begin{proof} Suppose that $v$ is regular. The regularity implies that each singularity is either a center, a saddle, a sink, or a source. By the non-wandering property, we have that there are no limit cycles and that each singularity is either a center or a saddle. By the regularity, the set of saddles is finite. Since the omega (resp. alpha) limit set of each non-closed proper orbit is a saddle, we have that the set of non-closed proper orbits is finite. It suffices to show that each point $x \in S$ whose extended orbit is not closed but proper is extended recurrent. Indeed, we may assume that there are no $y \in O^+_\mathrm{{ex}}(x) - O^+(x)$ such that $x \notin {O^+_\mathrm{{ex}}(y)}$. Since each saddle has two local (un)stable manifolds, neither $O^+_\mathrm{{ex}}(x)$ nor $O^-_\mathrm{{ex}}(x)$ is closed. Since the set of non-closed proper orbits is finite and since each non-closed proper orbit is a saddle connection, we have that $O^+_\mathrm{{ex}}(x)$ (resp. $O^-_\mathrm{{ex}}(x)$) contains a locally dense orbit. Let $U$ be a neighborhood of $O_\mathrm{{ex}}(x)$ such that $U - O_\mathrm{{ex}}(x)$ contains no singularities. By the finiteness of saddles, there is an arbitrarily thin connected open subset $U_x \subseteq U$ which is disjoint from the union of heteroclinic connections in $O^+_\mathrm{{ex}}(x)$ and whose closure contains a curve $C^+$ in $O^+_\mathrm{{ex}}(x)$ from $x$ to a point in a locally dense orbit $O^+$ such that the orientations of $C^+$ and $O^+_\mathrm{{ex}}(x)$ are the same. By local density, we have $C^+ \subset \overline{U_x \cap \overline{O^+}}$ and so $x \in \overline{O^+} \subseteq \overline{O^+_\mathrm{{ex}}(x) - O^+(x)}$. By symmetry, this implies that $x$ is extended recurrent and so $v$ is extended recurrent. Conversely, suppose that $v$ is extended recurrent and has finitely many singularities. By the finiteness of singularities, we have $\overline{\mathrm{Per}(v) \sqcup \mathrm{LD}} = S$. Since each connected component $C$ of the boundary of $\mathrm{Per}(v)$ consists of proper orbits and finitely many singularities, by the extended recurrence, we have that $C$ is either a center or a closed extended orbit and so each singularity contained in $C$ is a center or a saddle. On the other hand, the boundary of $\mathrm{LD}$ consists of proper orbits and finitely many singularities. The extended recurrence implies that each singularity in the boundary is a saddle. Thus $v$ is regular. \end{proof} Now we describe an $\mathbb{R} $-action which has non-closed extended orbits and which is not recurrent but extended recurrent. Consider an irrational rotation on $\mathbb{T} ^2$ and a rational rotation on $\mathbb{T} ^2$. Removing a point from each torus, paste their metric completions so that the intersection is a circle which consists of two saddles and two heteroclinic connections. Then we obtain an extended recurrent $\mathbb{R} $-action on a closed oriented surface with genus $2$ which is not recurrent and has non-closed extended orbits. Notice that this example also shows that the extended orbits are different from chain recurrent components. We obtain the following dichotomy for extended recurrent $\mathbb{R} $-actions. \begin{lemma}\label{prop11} Suppose that $v$ is extended recurrent.
For any point $x$ whose extended orbit is not closed, either there is a singular point in $\overline{O_\mathrm{{ex}}(x)}$ which is not a saddle or there is a locally dense orbit $O$ such that $O_\mathrm{{ex}}(x) \cap \overline{O} \neq \emptyset$. \end{lemma} \begin{proof} Since extended recurrence implies the non-wandering property, there are no exceptional orbits and $\overline{\mathrm{LD} \sqcup \mathrm{Per}(v)} \supseteq S- \mathrm{Sing}(v)$. Suppose that $O_\mathrm{{ex}}(x) \cap \overline{\mathrm{LD}} = \emptyset$. Then $O_\mathrm{{ex}}(x)$ consists of proper orbits and saddles. The extended recurrence implies that the omega (resp. alpha) limit set of each proper orbit in $O_\mathrm{{ex}}(x)$ is a saddle. The non-closedness of $O_\mathrm{{ex}}(x)$ implies that $O_\mathrm{{ex}}(x)$ contains infinitely many saddles. Since saddles are isolated, an accumulation point of saddles is a singular point which is not a saddle. This singularity is the desired one. Suppose that $O_\mathrm{{ex}}(x) \cap \overline{\mathrm{LD}} \neq \emptyset$. By the Ma\v \i er Theorem \cite{M}, the set of closures of locally dense orbits is finite and so there is a locally dense orbit $O$ such that $O_\mathrm{{ex}}(x) \cap \overline{O} \neq \emptyset$. \end{proof} Note that there is an extended recurrent $\mathbb{R} $-action with a non-closed proper extended orbit with infinitely many saddles on a disk. Indeed, let $S := \{ (x, y) \in \mathbb{R} ^2 \mid x^2 + y^2 \leq 2 \}$. Consider circles $S_n := \{ (x, y) \in \mathbb{R} ^2 \mid (x -3/2^n)^2 + y^2 = 2^{-n} \}$ for each integer $n \geq 2$. Let $O := \cup_{n \geq 2} S_n$. Define an $\mathbb{R} $-action $v$ with an extended orbit $O$ such that the origin is a fixed point which is not a saddle, the outside of $\overline{O}$ consists of periodic orbits, and each open disk bounded by $S_n$ is a center disk. Then $v$ is extended recurrent and has one non-closed proper extended orbit $O$ with infinitely many saddles. \section{Pointwise almost periodicity} We define extended versions of pointwise almost periodicity. An $\mathbb{R} $-action $v$ on a topological space $X$ is said to be extended pointwise almost periodic (extended p.a.p.) if the set $\{\overline{O_\mathrm{{ex}}(x)} \mid x \in X \}$ of closures of extended orbits is a decomposition of $X$. \begin{lemma}\label{lem21} If $v$ is an extended p.a.p. $\mathbb{R} $-action on a compact surface, then $v$ is extended recurrent and $| \mathrm{Sing}(v) \cap \overline{O_\mathrm{{ex}}(x)}| < \infty$ for each $x \in S$. \end{lemma} \begin{proof} Fix any regular point $y \in S$ such that $O_\mathrm{{ex}}(y)$ contains singularities. By definition of extended orbits, the singularities in $O_\mathrm{{ex}}(y)$ are saddles. If $\overline{O_\mathrm{{ex}}(y)}$ contains a singular point $p$ which is not a saddle, then $\overline{O_\mathrm{{ex}}(p)} = \{ p \} \subsetneq \overline{O_\mathrm{{ex}}(y)}$, which contradicts the extended p.a.p. property. Thus $\overline{O_\mathrm{{ex}}(y)} \cap \mathrm{Sing}(v)$ consists of finitely many saddles. Since the set of saddles is countable, the extended p.a.p. property implies that $\mathrm{P}$ consists of countably many orbits. The flow box theorem implies that $\mathrm{P} \subset \overline{\mathrm{E} \sqcup \mathrm{LD} \sqcup \mathrm{Per}(v)}$ and so that $v$ is non-wandering. Fix any point $x \in \mathrm{P}$ whose extended orbit is not closed. We show that either the omega or the alpha limit set of $x$ is a saddle. Otherwise there is a point $z \in \overline{O_\mathrm{{ex}}(x)}$ whose omega (resp.
alpha) limit set is not a saddle. Then $O(z) = O_\mathrm{{ex}}(z)$ and $O(z) \cap \omega(z) = \emptyset$. On the other hand, $O_\mathrm{{ex}}(x) \subseteq \omega(z)$ and so $\overline{O_\mathrm{{ex}}(x)} \subseteq \omega(z)$. Since $z \notin \omega(z)$, we have $z \notin \overline{O_\mathrm{{ex}}(x)}$, which contradicts the choice of $z$. Since $| \mathrm{Sing}(v) \cap \overline{O_\mathrm{{ex}}(x)}| < \infty$, each of $O^+_\mathrm{{ex}}(x)$ and $O^-_\mathrm{{ex}}(x)$ contains locally dense orbits. By symmetry, it suffices to show that $x \in \overline{O^+_\mathrm{{ex}}(x) - O^+(x)}$. Indeed, we may assume that there is no point $y \in O^+_\mathrm{{ex}}(x) - O^+(x)$ with $x \in O^+_\mathrm{{ex}}(y)$. Since $\mathrm{Sing}(v) \cap \overline{O_\mathrm{{ex}}(x)}$ consists of finitely many saddles, there is a thin connected open subset $U_x$ without singularities whose closure contains a curve in $O^+_\mathrm{{ex}}(x)$ from $x$ to a point $w \in \mathrm{LD}$ such that the orientations of the curve and $O^+_\mathrm{{ex}}(x)$ are compatible. Then $x \in \overline{O^+(w)} \subseteq \overline{O^+_\mathrm{{ex}}(x) - O^+(x)}$. \end{proof} In the case without locally dense orbits, the following equivalence holds. \begin{proposition}\label{prop32} Suppose that $v$ is a non-identical $\mathbb{R} $-action without locally dense orbits on a compact surface $S$. The following are equivalent: \\ 1) $v$ is extended p.a.p.. \\ 2) $v$ is extended recurrent and $| \mathrm{Sing}(v) \cap \overline{O_\mathrm{{ex}}(x)}| < \infty$ for each $x \in S$. \\ 3) $v$ consists of closed extended orbits. \end{proposition} \begin{proof} Obviously $3) \Rightarrow 1)$. By Lemma \ref{lem21}, we have that $1) \Rightarrow 2)$. Suppose that $2)$ holds. Moreover, suppose that there is a non-closed extended orbit $O_\mathrm{{ex}}(x)$. By Lemma \ref{prop11}, there is a singularity $z$ in $\overline{O_\mathrm{{ex}}(x)}$ which is not a saddle. The extended recurrence implies that $\{ z \} \neq \alpha(y)$ and $\{ z \} \neq \omega(y)$ for any point $y \in S$ with $y \neq z$. This contradicts the Ura-Kimura-Bhatia theorem (cf. Theorem 1.6 of \cite{B}). Thus $v$ consists of closed extended orbits. \end{proof} Note that there is an $\mathbb{R} $-action on a connected closed surface which is not extended p.a.p. but extended recurrent and whose set of singularities consists of two saddles. Indeed, consider two irrational rotations on $\mathbb{T} ^2$. Let $T_1, T_2$ be the metric completions of the surfaces obtained by removing one point from each torus. Then $T_i$ is homeomorphic to a torus minus an open disk. Paste them such that the resulting surface $S = T_1 \cup T_2$ is a closed orientable surface with genus $2$ and that the intersection $T_1 \cap T_2$ is a circle which consists of two saddles and two heteroclinic connections. Let $v$ be the resulting $\mathbb{R} $-action on $S$. The extended orbit closure of each point of $(\mathrm{int} T_i) \setminus O_\mathrm{{ex}}(x)$ for a point $x \in T_1 \cap T_2$ is $T_i$, and the extended orbit closure of each point $x \in O_\mathrm{{ex}}(x_1) \cup O_\mathrm{{ex}}(x_2)$ is $S$, for any $x_i \in T_i$. Then $v$ is not extended p.a.p.. The extended recurrence is obvious. \section{Extended $R$-closedness} We define extended versions of $R$-closedness. An $\mathbb{R} $-action $v$ on a compact surface $S$ is said to be extended $R$-closed if $R_{\text{ex}} := \{ (x, y ) \mid y \in \overline{O_\mathrm{{ex}}(x)} \}$ is closed.
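As a simple sanity check of this definition (our own remark, not taken from the original text): for a minimal flow, such as an irrational linear flow on $\mathbb{T} ^2$, there are no saddles, every extended orbit is an ordinary orbit, and every orbit closure is the whole surface; hence $R_{\text{ex}} = S \times S$ is closed and such an action is trivially extended $R$-closed.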
\begin{lemma}\label{lem24} If $v$ is extended $R$-closed, then $v$ is extended p.a.p.. \end{lemma} \begin{proof} First we show that $R_{\text{ex}}$ is symmetric. Indeed, the definition of extended orbits implies that $\{ (x, y ) \mid y \in {O_\mathrm{{ex}}(x)} \}$ is symmetric. For any $y \in \overline{O_\mathrm{{ex}}(x)}$, let $( y_n)$ be a sequence of points in ${O_\mathrm{{ex}}(x)}$ converging to $y$. Since $x \in {O_\mathrm{{ex}}(y_n)}$, we have $(y_n, x) \in R_{\text{ex}}$. The extended $R$-closedness implies $(y, x) \in R_{\text{ex}}$ and so $x \in \overline{O_\mathrm{{ex}}(y)}$. The closure of each extended orbit contains at most finitely many singularities and either $\omega(x)$ or $\alpha(x)$ is a saddle for any $x \in \mathrm{P}$. Hence $\mathrm{P}$ consists of at most countably many orbits. By the flow box theorem, we obtain that $\mathrm{P} \subset \overline{\mathrm{LD} \sqcup \mathrm{Per}(v) \sqcup \mathrm{E}}$. This implies that $v$ is non-wandering. Fix any point $x \in S$. By symmetry, it suffices to show that $\overline{O_\mathrm{ex}(y)} \subseteq \overline{O_\mathrm{{ex}}(x)}$ for any $y \in O^+_\mathrm{{ex}}(x)$. We may assume that ${O_\mathrm{{ex}}(x)}$ is not closed. Then there is a point $z \in O^+_\mathrm{{ex}}(y)$ whose orbit is locally dense. Since the set of recurrent points is dense, there is a recurrent point $w \in \mathrm{int} \overline{O^+(z)}$ whose orbit is locally dense. For any $z' \in O^-_\mathrm{{ex}}(z)$, we have $w \in \overline{O_\mathrm{{ex}}(z')}$ and so $z' \in \overline{O_\mathrm{{ex}}(w)} = \overline{O^+(w)} \subseteq \overline{O^+(z)} \subseteq \overline{O_\mathrm{{ex}}(x)}$. Then $\overline{O^-_\mathrm{{ex}}(y)} \subseteq \overline{O^-_\mathrm{{ex}}(z)} \subseteq \overline{O_\mathrm{{ex}}(x)}$ and so $\overline{O_\mathrm{{ex}}(y)} \subseteq \overline{O_\mathrm{{ex}}(x)}$. \end{proof} For a singular point $x$, we say that $x$ is an extended center if there is a neighborhood $U$ of $x$ such that $U - \{ x \}$ consists of extended periodic orbits and centers. \begin{lemma}\label{lem034} Suppose that $v$ is non-identical and extended $R$-closed and that $S$ is connected. Then $\overline{\mathrm{LD}} \cap \mathrm{Sing}(v)$ is finite and each singularity is either a saddle or an extended center. \end{lemma} \begin{proof} Since $v$ is non-wandering, there are no exceptional orbits and $\overline{\mathrm{Per}(v) \cup \mathrm{LD}} \supseteq S -\mathrm{Sing}(v)$. By the extended $R$-closedness, we have that each connected component of the boundary of ${\mathrm{Per}(v)}$ (resp. $\mathrm{LD}$) is contained in one extended orbit, and so $\overline{\mathrm{Per}(v)} \cap \mathrm{Sing}(v)$ consists of saddles and extended centers. The extended recurrence also implies that $\overline{\mathrm{LD}} \cap \mathrm{Sing}(v)$ consists of saddles. Since each saddle is isolated, we have that $\overline{\mathrm{LD}} \cap \mathrm{Sing}(v)$ is finite. By Lemma \ref{lem023}, $\overline{\mathrm{Per}(v) \cup \mathrm{LD}}$ is clopen, and so the connectedness of $S$ implies $S = \overline{\mathrm{Per}(v) \cup \mathrm{LD}}$. Thus each singularity is either a saddle or an extended center. \end{proof} There is an extended $R$-closed flow with infinitely many saddles. Indeed, consider a center disk and a sequence of periodic orbits converging to the center. Replacing the periodic orbits by homoclinic saddle connections with center disks, we obtain an extended center disk with infinitely many saddles.
By doubling this disk, we obtain an extended $R$-closed flow on $\mathbb{S} ^2$ with two extended centers and with infinitely many saddles. Consider the case with finitely many singularities. \begin{proposition}\label{lem42} Suppose $|\mathrm{Sing}(v)|< \infty$. Then $v$ is extended $R$-closed if and only if $v$ is extended p.a.p.. \end{proposition} \begin{proof} It suffices to show the ``if'' part. Suppose that $v$ is extended p.a.p.. By Proposition \ref{lem23}, we have that $v$ is regular and so each singularity is a center or a saddle. By the Ma\v \i er Theorem \cite{M}, the set of closures of locally dense orbits is finite. Then $S - \overline{\mathrm{LD}} \subseteq \mathrm{int}\overline{\mathrm{Per}(v)}$ consists of periodic orbits, finitely many centers, and finitely many closed extended orbits. By the extended p.a.p. property, we obtain that $\overline{\mathrm{LD}}$ consists of finitely many minimal sets with respect to extended orbits. For any connected component $C$ of $\overline{\mathrm{LD}}$, there is a neighborhood $U$ of $C$ with $U - C \subseteq \mathrm{Per}(v)$. Consider the quotient map $\pi: S \to S/\overline{O_\mathrm{ex}}$ onto the space of closures of extended orbits. Then $\overline{\mathrm{LD}}$ is the inverse image of a finite subset of $S/\overline{O_\mathrm{ex}}$ and $\pi(S - \overline{\mathrm{LD}})$ is a forest (i.e. a disjoint union of trees). Then $S/\overline{O_\mathrm{ex}}$ is Hausdorff. By Lemma 2.3 of \cite{Y2}, we have that $v$ is extended $R$-closed. \end{proof} The finiteness of singularities and the non-existence of locally dense orbits on surfaces of genus $0$ imply the following corollary. \begin{corollary} Suppose that $v$ is a non-identical $\mathbb{R} $-action with finitely many singularities on a compact surface with genus $0$. The following are equivalent: \\ 1) $v$ is extended $R$-closed. \\ 2) $v$ is extended p.a.p.. \\ 3) $v$ is extended recurrent. \\ 4) $v$ is regular non-wandering. \\ 5) $v$ consists of closed extended orbits. \end{corollary} \section{An extended non-wandering property} Naturally, we can define an extended non-wandering property in the same way as the other extended notions. It is easy to see that the extended non-wandering property and the (usual) non-wandering property are equivalent if the set of singularities is finite. The author does not know whether these notions are the same in general. \section{A note on a further generalization of orbits} Let $F$ be a compact invariant set of $v$. Then $F$ is said to be isolated (from minimal sets) if there exists a neighborhood $U$ of $F$ such that any minimal set contained in $U$ is a subset of $F$. $F$ is called a saddle set if there exists a neighborhood $U$ of $F$ such that $\overline{G}_{U}\cap F\neq\emptyset$, where $G_{U} := \{ x \in \overline{U} - F \mid O^{+}(x) \not\subseteq \overline{U}, O^{-}(x) \not\subseteq \overline{U}\}$. In the definition of extended orbits of $x$, if we replace saddles with isolated saddle sets, then we call the resulting sets generalized extended orbits, denoted by $O^-_\mathrm{{ge}}(x)$, $O^+_\mathrm{{ge}}(x)$, $O_\mathrm{{ge}}(x)$. We also define the corresponding ``generalized'' notions by replacing saddles with isolated saddle sets. By the definitions, we notice that extended recurrence implies generalized recurrence. Then one can show the generalized version of Lemma \ref{lem023} in a similar fashion if one replaces saddles (resp. ex) with isolated saddle sets (resp. ge). However, this generalization does not imply the generalized versions of Lemmas \ref{lem2} and \ref{prop11}. Moreover, the non-wandering property and generalized recurrence are independent.
In fact, the following example is a vector field on $\mathbb{S} ^2$ which is not non-wandering but generalized recurrent. Let $D = \{ (x, y) \in \mathbb{R} ^2 \mid x^2 + y^2 \leq 1 \}$, $D_+ = \{ (x, y) \in D \mid x > 0 \}$, $D_- = \{ (x, y) \in D \mid x < 0 \}$, $p_+ := (0, 1)$, $p_- := (0, -1)$, and let $v$ be a flow on $D$ such that $\mathrm{Fix}(v) = \{ (0, y) \in D \}$ and that $\alpha(p) = p_{\mp}$ and $\omega(p) = p_{\pm}$ for each point $p \in D_{\pm}$. Pasting an open center disk, we obtain a flow $v'$ on $\mathbb{S} ^2$ whose fixed point set consists of $\mathrm{Fix}(v)$ and the center. Then $p_-, p_+$ are isolated saddle sets and so $D - \mathrm{Fix}(v) \subset O^{-}_{\mathrm{ge}}(y) = O^{+}_{\mathrm{ge}}(y)$ for any $y \in D - \mathrm{Fix}(v)$. This implies that $v'$ is generalized recurrent. On the other hand, the following example is a vector field on $\mathbb{T} ^2$ which is not generalized recurrent but non-wandering. Define a vector field whose orbits consist of $\{ (1/n, 0) \}$, $\{ 1/n \} \times (\mathbb{T} ^1 - \{ 0 \})$, and $\{ x \} \times \mathbb{T} ^1$ for $n \in \mathbb{Z}_{>0}$ and for $x \in \mathbb{T} ^1 - \{ 1/m \mid m \in \mathbb{Z}_{>0} \}$. Indeed, $\alpha (p) = \omega (p) = \{(0, 0) \}$ is a saddle set but is not isolated for any $p \in \{ 0 \} \times (\mathbb{T} ^1 - \{ 0 \})$, and so $O_\mathrm{{ex}}(p) = \{ 0 \} \times (\mathbb{T} ^1 - \{ 0 \})$ and $p$ is not generalized recurrent. \end{document}
\begin{document} \title{On the decomposition of random hypergraphs} \begin{abstract} For an $r$-uniform hypergraph $H$, let $f(H)$ be the minimum number of complete $r$-partite $r$-uniform subhypergraphs of $H$ whose edge sets partition the edge set of $H$. For a graph $G$, $f(G)$ is the bipartition number of $G$ which was introduced by Graham and Pollak in 1971. In 1988, Erd\H{o}s conjectured that if $G \in G(n,1/2)$, then with high probability $f(G)=n-\alpha(G)$, where $\alpha(G)$ is the independence number of $G$. This conjecture and related problems have received a lot of attention recently. In this paper, we study the value of $f(H)$ for a typical $r$-uniform hypergraph $H$. More precisely, we prove that if $(\log n)^{2.001}/n \leq p \leq 1/2$ and $H \in H^{(r)}(n,p)$, then with high probability $f(H)=(1-\pi(K^{(r-1)}_r)+o(1))\binom{n}{r-1}$, where $\pi(K^{(r-1)}_r)$ is the Tur\'an density of $K^{(r-1)}_r$. \end{abstract} \section{Introduction} For a graph $G$, the {\it bipartition number} $\tau(G)$ is the minimum number of complete bipartite subgraphs of $G$ so that each edge of $G$ belongs to exactly one of them. This parameter of a graph was introduced by Graham and Pollak \cite{gp1} in 1971. The famous Graham--Pollak theorem \cite{gp1} asserts $\tau(K_n)=n-1$. Since its original proof using Sylvester's Law of Inertia, many other proofs have been discovered; see \cite{lo}, \cite{peck}, \cite{tv}, \cite{v1}, \cite{v2}, \cite{yan}. Let $\alpha(G)$ be the independence number of $G$. It is easy to observe $\tau(G) \leq |V(G)|-\alpha(G)$. Erd\H{o}s (see \cite{krw}) conjectured that the equality holds for almost all graphs. Namely, if $G \in G(n,1/2)$, then $\tau(G)=n-\alpha(G)$ with high probability. Alon \cite{alon2} disproved this conjecture by showing $\tau(G) \leq n-\alpha(G)-1$ with high probability for most values of $n$. Alon's upper bound on the bipartition number of random graphs $G \in G(n,1/2)$ was recently improved by Alon, Bohman, and Huang \cite{alon3}. Chung and the author proved that if $G \in G(n,p)$, $p$ is a constant, and $p \leq 1/2$, then with high probability we have $\tau(G) \geq n- \delta(\log_{1/p} n)^{3+\epsilon}$ for any constants $\delta$ and $\epsilon$. When $p$ satisfies $\tfrac{2}{n} \leq p \leq c$ for some absolute (small) constant $c$, Alon \cite{alon1} showed that if $G \in G(n,p)$, then $\tau(G)=n-\Theta\left( \tfrac{\log(np)}{p}\right)$ with high probability. The bipartition number has a natural hypergraph analogue. For $r\geq 3$ and an $r$-uniform hypergraph $H$, let $f(H)$ be the minimum number of complete $r$-partite $r$-uniform subhypergraphs of $H$ whose edge sets partition the edge set of $H$. Aharoni and Linial (see \cite{alon1}) first asked to determine the value of $f(K_n^{(r)})$ for $r \geq 3$, where $K_n^{(r)}$ is the complete $r$-uniform hypergraph with $n$ vertices. The value of $f(K_n^{(r)})$ is related to a perfect hashing problem from computer science. Alon \cite{alon1} proved $f(K^{(3)}_n)=n-2$ and $c_1(r) n^{\lfloor \tfrac{r}{2} \rfloor} \leq f(K^{(r)}_n) \leq c_2(r) n^{\lfloor \tfrac{r}{2} \rfloor}$ for $r \geq 4$. For improvements and variations, readers are referred to \cite{ck}, \cite{cktv}, \cite{ckv}, and \cite{ct}. In this paper, we examine the value of $f(H)$ for the random hypergraph $H \in H^{(r)}(n,p)$. To state our main theorem, we need a few more definitions.
For an $r$-uniform hypergraph $H$, the {\it Tur\'an number} $\textrm{ex}(n,H)$ is the maximum number of edges in an $n$-vertex $r$-uniform hypergraph which does not contain $H$ as a subhypergraph. We define the {\it Tur\'an density} of $H$ as \[ \pi(H)=\lim_{n \to \infty} \frac{\textrm{ex}(n,H)}{\binom{n}{r}}. \] For each $r \geq 3$, we use $K^{(r-1)}_{r}$ to denote the complete $(r-1)$-uniform hypergraph with $r$ vertices. By extending techniques from \cite{alon2} and \cite{cp}, we are able to prove the following theorem. \begin{theorem} \label{main} For $r\geq 3$, if $(\log n)^{2.001}/n \leq p \leq 1/2$ and $H \in H^{(r)}(n,p)$, then with high probability we have \[ f(H)=(1-\pi(K^{(r-1)}_r)+o(1)) \binom{n}{r-1}. \] \end{theorem} From this theorem, we can see the typical value of $f(H)$ has the order of magnitude $n^{r-1}$ while $f(K^{(r)}_n)$ has the order of magnitude $n^{{\lfloor \tfrac{r}{2} \rfloor}}$. We note $\pi(K_3^{(2)})=\tfrac{1}{2}$ while the value of $\pi(K^{(r-1)}_r)$ is not known for $r \geq 4$. We remark here that our techniques also work for $p \leq 1-c$ for any small positive constant $c$. However, we restrict our attention to the case where $p \leq 1/2$ in this paper. We will use the following notation throughout this paper. For each $r \geq 3$, we will use $[n]$ to denote the set $\{1,2,\ldots,n\}$ and $\binom{[n]}{r}$ to denote the collection of all $r$-subsets of $[n]$. If $A_1,A_2,\ldots,A_r$ are pairwise disjoint subsets of $[n]$, then we use $\prod_{i=1}^r A_i$ to denote those $r$-subsets $F$ of $[n]$ such that $|F \cap A_i|=1$ for each $1 \leq i \leq r$. We may also write $A_1 \times A_2 \times \cdots \times A_r$ for $\prod_{i=1}^r A_i$ on some occasions. The complete $r$-partite $r$-uniform hypergraph whose vertex parts are $A_1,A_2,\ldots,A_r$ is the $r$-uniform hypergraph with the edge set $\prod_{i=1}^r A_i$. Let $H$ be an $r$-uniform hypergraph with vertex set $[n]$ and edge set $E$. For pairwise disjoint subsets $A_1,A_2,\ldots,A_r \subset [n]$, we say $A_1,A_2,\ldots,A_r$ form a complete $r$-partite $r$-uniform hypergraph if $\prod_{i=1}^r A_i \subseteq E(H)$. For an $r$-uniform hypergraph $H$, suppose $E(H)=\sqcup_{i=1}^{q} \prod_{j=1}^r A_j^i$ is a partition of the edge set of $H$. For each $1 \leq i \leq q$, the $i$-th complete $r$-partite $r$-uniform hypergraph $H_i$ has vertex parts $A_1^i,\ldots,A_r^i$. We always assume $|A_1^i|\leq \cdots \leq |A_r^i|$. We say $H_i$ is a trivial complete $r$-partite $r$-uniform hypergraph if $|A_1^i|=\cdots=|A_{r-1}^i|=1$. Otherwise, we say $H_i$ is a nontrivial one. The {\it prefix} $P_i$ of $H_i$ is the set $\{A_1^i,\ldots,A_{r-1}^i\}$ and the prefix set ${\cal P}$ of the partition is $\{P_1,\ldots,P_q\}$. We will use $H^{(r)}(n,p)$ to denote the random $r$-uniform hypergraph in which each $r$-set $F \in \binom{[n]}{r}$ is selected as an edge with probability $p$ independently. We say an event ${\cal X}_n$ occurs with high probability if the probability that ${\cal X}_n$ holds goes to one as $n$ approaches infinity. All logarithms are in base 2, unless otherwise specified. The outline of the proof is the following. We will prove that $f(H) \leq (1-\pi(K^{(r-1)}_r)-\epsilon)\binom{n}{r-1}$ holds with small probability for any positive constant $\epsilon$. To do so, for a given prefix set ${\cal P}=\{P_1,\ldots,P_q\}$ with $q \leq (1-\pi(K^{(r-1)}_r)-\epsilon)\binom{n}{r-1}$, let ${\cal P}_1=\{P_i \in {\cal P}: |A_j^i|=1 \text{ for each } 1 \leq j \leq r-1\}$ and ${\cal P}_2={\cal P} \setminus {\cal P}_1$.
We will show that there are at least $c(\epsilon)n^r$ edges of $H \in H^{(r)}(n,p)$ which must be covered by some nontrivial complete $r$-partite $r$-uniform hypergraph with the prefix from ${\cal P}_2$. Theorem \ref{t:submain} will tell us this probability is sufficiently small. We will prove an upper bound for the number of possible choices for ${\cal P}$ and apply the union bound to complete the proof. The rest of the paper is organized as follows. In Section 2, we will prove several necessary lemmas. In Section 3, we will present the proof of an auxiliary theorem which is the key ingredient in the proof of the main result. Theorem \ref{main} will be proved in Section 4. Few concluding remarks will be mentioned in Section 5. \section{Lemmas} In this section, we will collect some necessary lemmas which are needed to prove the main theorem. We will use the following versions of Chernoff's inequality and Azuma's inequality. \begin{theorem}{\cite{chernoff}} \label{t:chernoff} Let $X_1,\ldots,X_n$ be independent random variables with $${\rm Pr}(X_i=1)=p_i, \qquad {\rm Pr}(X_i=0)=1-p_i.$$ We consider the sum $X=\sum_{i=1}^n X_i$ with expectation ${\rm E}(X)=\sum_{i=1}^n p_i$. Then we have \begin{eqnarray*} \mbox{(Lower tail)~~~~~~~~~~~~~~~~~} \qquad \qquad {\rm Pr}(X \leq {\rm E}(X)-\lambda)&\leq& e^{-\lambda^2/2{\rm E}(X)},\\ \mbox{(Upper tail)~~~~~~~~~~~~~~~~~} \qquad \qquad {\rm Pr}(X \geq {\rm E}(X)+\lambda)&\leq& e^{-\frac{\lambda^2}{2({\rm E}(X) + \lambda/3)}}. \end{eqnarray*} \end{theorem} \begin{theorem} \cite{azuma} \label{t:azuma} Let $X$ be a random variable determined by $m$ trials $T_1,\ldots,T_m$, such that for each $i$, and any two possible sequences of outcomes $t_1,\ldots,t_{i-1},t_i$ and $t_1,\ldots,t_{i-1}, t_i'$: \[ |{\rm E}\left(X|T_1=t_1,\ldots,T_i=t_i\right) -{\rm E}\left(X|T_1=t_1,\ldots, T_{i-1}=t_{i-1},T_i=t_i'\right) | \leq c_i \] then \[ {\rm Pr}\left(|X-{\rm E}(X)| \geq \lambda \right) \leq 2 {\rm exp}\left(-{\lambda}^2/2\sum_{i=1}^m c_i^2\right). \] \end{theorem} Recall that if $A_1,\ldots,A_r$ form a complete $r$-partite $r$-uniform hypergraph, then we assume $|A_1| \leq |A_2| \leq \cdots \leq |A_r|$. We have the following lemma. \begin{lemma} \label{l:lm1} For $H \in H^{(r)}(n,p)$ with $p \leq 1/2$, with high probability the vertex parts $A_1,A_2, \cdots, A_r$ of each complete $r$-partite $r$-uniform hypergraphs in $H$ satisfy $\prod_{i=1}^{r-1} |A_i| < (r+1) \log n$. \end{lemma} \noindent {\bf Proof:} We need only to prove the lemma for $p=1/2$. For a collection of pairwise disjoint sets $A_1, A_2, \ldots, A_r \subset [n]$, we assume $|A_i|=k_i$ for each $1 \leq i \leq r$ and $k_1 \leq k_2 \leq \cdots \leq k_r$. Fix a selection of $A_1, \ldots, A_r$, the probability that they form a complete $r$-partite $r$-uniform hypergraph in $H^{(r)}(n,1/2)$ is $2^{-\prod_{i=1}^r k_i}.$ For fixed $k_1,\ldots,k_r$, there are at most $\prod_{i=1}^r \binom{n}{k_i}$ choices for $A_1,A_2,\ldots,A_r$ such that $|A_i|=k_i$ for each $1 \leq i \leq r$. 
Therefore, for fixed $k_1,\ldots,k_r$ satisfying $ \prod_{i=1}^{r-1} k_i \geq (r+1) \log n$ and $k_1 \leq \cdots \leq k_r$, the probability that there are pairwise disjoint sets $A_1,A_2,\ldots,A_r$ such that $|A_i|=k_i$ and they form a complete $r$-partite $r$-uniform hypergraph is at most \begin{align*} \prod_{i=1}^r \binom{n}{k_i} 2^{-\prod_{i=1}^r k_i} &< 2^{(\sum_{i=1}^r k_i) \log n -\prod_{i=1}^r k_i}\\ &=2^{k_r\left((\sum_{i=1}^{r-1}k_i/k_r+1)\log n-\prod_{i=1}^{r-1} k_i\right)} \\ & \leq 2^{k_r( r \log n-\prod_{i=1}^{r-1} k_i)}\\ &< 2^{-k_r \log n}. \end{align*} Put $s=\prod_{i=1}^{r-1} k_i$. We next estimate the number of choices of $k_1,\ldots,k_r$ such that $\prod_{i=1}^{r-1} k_i=s$ and $k_1 \leq \cdots \leq k_r$. Let $t=\sum_{i=1}^{r-1} k_i$. If $s \geq \log n $, then $t \leq s+r< 2s$ and $k_r \geq k_{r-1} \geq s^{1/{(r-1)}}$. Thus the number of choices for $k_1,\ldots,k_{r-1}$ satisfying $\prod_{i=1}^{r-1} k_i=s$ and $k_1 \leq \cdots \leq k_{r-1}$ is less than the number of positive solutions to the equation $\sum_{i=1}^{r-1} k_i=t$, which is less than $\binom{2s}{r-2}$ as $t \leq 2s$. We have at most $n$ choices for $k_r$ regardless of the choices of $k_1,\ldots,k_{r-1}$. Therefore, the probability that there are $A_1,A_2,\ldots,A_r$ which satisfy $s=\prod_{i=1}^{r-1} |A_i| \geq (r+1) \log n$ and form a complete $r$-partite $r$-uniform hypergraph in $H^{(r)}(n,1/2)$ is at most \[ \sum_{s=(r+1) \log n} ^n n \binom{2s}{r-2} 2^{-k_r \log n} \leq \sum_{s=(r+1) \log n} ^n n \binom{2s}{r-2} 2^{-s^{1/(r-1)} \log n} =o(1). \] Then the lemma follows from Markov's inequality. \hfill $\square$ For an $r$-uniform hypergraph $H=(V,E)$ and a prefix $P=\{A_1, A_2, \ldots, A_{r-1}\}$, we define \[ V(H,P)=\{v: v \in V(H) \setminus ( \cup_{i=1}^{r-1} A_i) \textrm{ and } F \in E(H) \textrm{ for each } F \in A_1 \times \cdots \times A_{r-1} \times \{v\} \}. \] \begin{figure} \caption{An example with $r=3$, $P=\{A_1,A_2\}$.} \label{fg:fg1} \end{figure} Figure \ref{fg:fg1} is an illustrative example for $v \in V(H,P)$. It follows that $A_1, A_2, \ldots,A_r$ form a complete $r$-partite $r$-uniform hypergraph if $A_r$ is contained in $V(H,P)$, namely, $A_r \subseteq V(H,P)$. We say an edge $F \in E(H)$ is covered by a complete $r$-partite $r$-uniform hypergraph with the prefix $P$ if $F \in A_1 \times \cdots \times A_{r-1} \times V(H,P)$. Let ${\cal P}= \{P_1,\ldots,P_q\}$ be a prefix set, where $P_i=\{A_1^i,\ldots,A_{r-1}^i\}$. We define $g(H,{\cal P})$ as the number of edges of $H$ which are covered by exactly one complete $r$-partite $r$-uniform hypergraph whose prefix is from ${\cal P}$. It is easy to see \[ g(H,{\cal P}) \leq \sum_{i=1}^q g(H,P_i)=\sum_{i=1}^q |V(H,P_i)| \prod_{j=1}^{r-1} |A_j^i|. \] We have the following lemma on $g(H,{\cal P})$ for $H \in H^{(r)}(n,p)$. \begin{lemma} \label{l:lm2} Assume $p \leq 1/2$ and $H \in H^{(r)}(n,p)$. Let $c(n)$ be a fixed function. Suppose we are given a prefix set ${\cal P}=\{P_1,\ldots,P_q\}$, where $P_i=\{A_1^i,A_2^i,\ldots,A_{r-1}^i\}$ and $c(n) \leq \prod_{j=1}^{r-1} |A_j^i| < (r+1)\log n$ for each $1 \leq i \leq q$. Then we have \[ {\rm Pr}\left( g(H,{\cal P}) \geq q c(n) p^{c(n)} n + 2n^{r-0.3} \right) \leq 2 {\rm exp}(-n^{r-0.8}). \] \end{lemma} \noindent {\bf Proof:} We shall use Theorem \ref{t:azuma} to prove this lemma. Let $m=\binom{n}{r}$ and list all $r$-sets of $[n]$ as $F_1,F_2,\ldots,F_m$.
For each $1 \leq i \leq m$, we consider $T_i \in \{\textrm{H}, \textrm{T}\}$, where $T_i=\textrm{H}$ means $F_i$ is an edge and $T_i=\textrm{T}$ means $F_i$ is a nonedge. To simplify the notation, we use $X$ to denote the random variable $g(H,{\cal P})$. We observe that $X$ is determined by $T_1,\ldots,T_m$. Fixing the outcome $t_j$ of $T_j$ for each $1 \leq j \leq i-1$, we wish to show an upper bound for \begin{equation} \label{eq:lm1} \left|{\rm E}(X|T_1=t_1,\ldots,T_{i-1}=t_{i-1}, T_i=\textrm{H})-{\rm E}(X|T_1=t_1,\ldots,T_{i-1}=t_{i-1},T_i=\textrm{T})\right| \end{equation} If $T_i=\textrm{H}$, then we can assume $F_i$ is covered by a complete $r$-partite $r$-uniform hypergraph with the prefix $P_s$ for some $1 \leq s \leq q$. Otherwise the change of the outcome of $T_i$ will not affect the value of $X$. Suppose $F_i=\{v_1,\ldots,v_r\}$, where $A_j^s \cap F_i=\{v_j\}$ for each $1 \leq j \leq r-1$ and $v_r \not \in \cup_{j=1}^{r-1} A_j^s$. We next examine the other edges which become covered when we turn $F_i$ into an edge. These edges are from the family $A_1^s \times \cdots \times A_{r-1}^s \times \{v_r\}$. Therefore $\prod_{j=1}^{r-1} |A_j^s|$ is an upper bound for \eqref{eq:lm1}. Recalling the assumption $\prod_{j=1}^{r-1} |A_j^s| < (r+1)\log n$, we see that \eqref{eq:lm1} can be bounded from above by $(r+1)\log n$. We note ${\rm E}(g(H,P_i)) \leq c(n)p^{c(n)} n$ for each $i$, as we assume $\prod_{j=1}^{r-1} |A_j^i| \geq c(n)$. We get \[ {\rm E}(X) \leq \sum_{i=1}^q {\rm E}(g(H,P_i)) \leq c(n)qp^{c(n)}n. \] Applying Theorem \ref{t:azuma} with $\lambda=2n^{r-0.3}$ and $c_i=(r+1) \log n$, we obtain \begin{align*} {\rm Pr}\left(X \geq c(n)qp^{c(n)}n+2n^{r-0.3}\right) &\leq {\rm Pr}\left(X \geq {\rm E}(X) +2n^{r-0.3}\right)\\ &\leq 2{\rm exp}\left(-4n^{2r-0.6}/(2m((r+1)\log n)^2)\right) \\ &\leq 2{\rm exp}(-n^{r-0.8}) \end{align*} where we used the fact that $m<n^r$. \hfill $\square$ We need a theorem which provides a lower bound for the number of uncovered edges. Let $k(n)$ and $l(n)$ be given functions. Suppose ${\cal F} \subset \binom{[n]}{r}$ and let ${\cal Q}$ be the power set of $\binom{[n]}{r} \setminus {\cal F}$. Consider a function ${\cal C}: {\cal F} \to {\cal Q}$ such that for each $F \in {\cal F}$ and each $R \in {\cal C}(F)$, we have $|R \cap F|=r-1$. Let $h(H,{\cal F},{\cal C})$ be the number of $F \in {\cal F}$ such that $F$ is an edge of $H \in H^{(r)}(n,p)$ and no $R \in {\cal C}(F)$ is an edge of $H$. We have the following lemma. \begin{lemma} \label{l:lm3} Suppose $p \leq 1/2$ and $ {\cal F} \subset \binom{[n]}{r}$. Assume $H \in H^{(r)}(n,p)$, $|{\cal C}(F)| \leq k(n)$ for each $F \in {\cal F}$, and for each $R \in \cup_{F \in {\cal F}} {\cal C}(F) $, the number of $F \in {\cal F}$ satisfying $R \in {\cal C}(F)$ is at most $l(n)$, where $l(n)$ and $k(n)$ are some given functions. Then we have \[ {\rm Pr}\left(h(H,{\cal F},{\cal C}) \leq |{\cal F}|p(1-p)^{k(n)} - 2n^{r-0.01} \right) \leq 2{\rm exp}(-n^{r-0.02}/l(n)^2). \] \end{lemma} \noindent {\bf Proof:} To simplify notation, we use $X$ to denote the random variable $h(H,{\cal F},{\cal C})$ again. We list all $r$-sets from ${\cal F} \cup \bigcup_{F \in {\cal F}} {\cal C}(F)$ as $F_1,F_2,\ldots,F_m$, where $m \leq \binom{n}{r}$. For each $F_i$, we consider $T_i \in \{\textrm{H},\textrm{T}\}$, where $T_i=\textrm{H}$ means $F_i$ is an edge and $T_i=\textrm{T}$ means $F_i$ is not an edge.
Given the outcome $t_j$ of $T_j$ for each $1 \leq j \leq i-1$, we wish to establish an upper bound for \begin{equation} \label{eq:lb} \left|{\rm E}(X|T_1=t_1,\ldots,T_{i-1}=t_{i-1}, T_i=\textrm{H})-{\rm E}(X|T_1=t_1,\ldots,T_{i-1}=t_{i-1},T_i=\textrm{T})\right|. \end{equation} If $F_i \in {\cal F}$, then changing the outcome of $T_i$ can only affect \eqref{eq:lb} by one. If $F_i \in \cup_{F \in {\cal F}} {\cal C}(F)$, then changing the outcome of $T_i$ can affect \eqref{eq:lb} by at most $l(n)$ since $F_i \in {\cal C}(F)$ for at most $l(n)$ $r$-sets $F$. Therefore, the expression \eqref{eq:lb} can be bounded from above by $l(n)$. Applying Theorem \ref{t:azuma} with $\lambda=2n^{r-0.01}$ and $c_i=l(n)$, we get \[ {\rm Pr}\left( |X-{\rm E}(X)| \geq 2n^{r-0.01} \right) \leq 2{\rm exp}\left(-4n^{2r-0.02}/\left(2\sum_{i=1}^m c_i^2\right)\right) \leq 2{\rm exp}(-n^{r-0.02}/ l(n)^2), \] where we used $m \leq \binom{n}{r}$. We note ${\rm E}(X)= \sum_{F \in {\cal F}} p(1-p)^{|{\cal C}(F)|} \geq |{\cal F}| p (1-p)^{k(n)}$ as $|{\cal C}(F)| \leq k(n)$. Therefore \begin{align*} {\rm Pr}\left(h(H,{\cal F},{\cal C}) \leq |{\cal F}|p(1-p)^{k(n)} - 2n^{r-0.01} \right) & \leq {\rm Pr}\left( |X-{\rm E}(X)| \geq 2n^{r-0.01} \right)\\ &\leq 2{\rm exp}(-n^{r-0.02}/ l(n)^2). \end{align*} This proves the lemma. \hfill $\square$ When $p \leq 1/\log\log \log \log n$, we adapt the approach in \cite{alon2}. The following two lemmas are the hypergraph versions of Lemma 3.1 and Lemma 3.2 in \cite{alon2}. Before we state them, we need one additional definition. For positive integers $m \geq \log n$ and $r \geq 3$, let ${\cal T}_m$ be the set of tuples $(a_1,a_2,\ldots,a_r)$ satisfying the following properties. \begin{enumerate} \item[1:] $a_i$ is a positive integer for each $1 \leq i \leq r$; \item[2:] $1 \leq a_1 \leq a_2 \leq \cdots \leq a_r$; \item[3:] $a_1 \cdots a_r=m$; \item[4:] $a_{r-1} \geq 2$. \end{enumerate} \begin{lemma} \label{l:lm8} For any constant $c$, if $p$ satisfies $(\log n)^{2.001}/n \leq p \leq 1/\log\log\log n$, then the following holds for $n$ large enough. For every integer $m$ satisfying \[ \frac{ pcn}{16} \leq m \leq \frac{pcn}{4}, \] we have \[ \sum_{(a_1,\ldots,a_r) \in {\cal T}_m} \binom{n}{a_1} \binom{n-a_1}{a_2} \cdots \binom{n-\sum_{i=1}^{r-1}a_i}{a_r} p^m \leq 2^{-0.3 \log(1/p)m}. \] \end{lemma} Recall that a complete $r$-partite $r$-uniform hypergraph whose vertex parts $A_1,\ldots,A_r$ satisfy $|A_1|\leq |A_2| \leq \cdots \leq |A_r|$ is nontrivial if $\prod_{i=1}^{r-1} |A_i| \geq 2$. \begin{lemma}\label{l:lm9} For any constant $c$, if $p$ satisfies $(\log n)^{2.001}/n \leq p \leq 1/\log\log\log n$, then the probability that $H \in H^{(r)}(n,p)$ contains a set of at most $2n^{r-1}$ nontrivial complete $r$-partite $r$-uniform hypergraphs which cover at least $pcn^r/4$ edges is at most $2^{-0.05pc \log(1/p)n^r}$. \end{lemma} As the proofs of the two lemmas above go along the same lines as those of Lemma 3.1 and Lemma 3.2 in \cite{alon2}, they are omitted here. \section{An auxiliary theorem} Let ${\cal F} \subset \binom{[n]}{r}$ with $|{\cal F}| \geq cn^r$ for some positive constant $c$. Suppose the probability $p$ satisfies $1/\log\log \log \log n \leq p \leq 1/2$. We shall prove that for $H \in H^{(r)}(n,p)$, the probability that there are few nontrivial complete $r$-partite $r$-uniform hypergraphs such that each edge $F \in E(H) \cap {\cal F}$ is in exactly one of them is small.
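To fix the terminology with a small example (ours, added for illustration and not part of the original argument): for $r=3$, a nontrivial complete $3$-partite $3$-uniform hypergraph is one whose two smaller parts have sizes multiplying to at least $2$, e.g. $A_1=\{a\}$, $A_2=\{b_1,b_2\}$ and $A_3=\{c_1,\ldots,c_k\}$; it covers exactly the $2k$ triples $\{a,b_i,c_j\}$, and its prefix is $\{A_1,A_2\}$.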
\begin{theorem} \label{t:submain} Assume ${\cal F} \subset \binom{[n]}{r}$ with $|{\cal F}| \geq c n^r$ for some positive constant $c$. Let ${\cal P}=\{P_1,\ldots,P_t\}$ be a given prefix set, where $t=|{\cal P}| \leq n^{r-1}$ and $P_i=\{A_1^i,\ldots,A_{r-1}^i\}$ satisfies $2 \leq \prod_{j=1}^{r-1} |A_j^{i}| < (r+1) \log n$ for each $1 \leq i \leq t$. If $1/\log\log \log \log n \leq p \leq 1/2$ and $H \in H^{(r)}(n,p)$, then with probability at most $3{\rm exp}(-n^{r-0.92})$ there are $t$ nontrivial complete $r$-partite $r$-uniform hypergraphs such that their prefix set is ${\cal P}$ and each edge $F \in E(H) \cap {\cal F}$ is in exactly one of these hypergraphs. \end{theorem} Suppose $H \in H^{(r)}(n,p)$ and \[ E(H) \cap {\cal F} \subseteq \bigsqcup_{i=1}^{t} \prod_{j=1}^r A_j^i, \] where `$\sqcup$' denotes the disjoint union. For each $1 \leq i \leq t$, we assume $A_1^i,A_2^i,\ldots,A_r^i$ form a nontrivial complete $r$-partite $r$-uniform hypergraph. We fix a constant $K=\tfrac{4}{c} $ and a function $q(n)=\log\log \log \log n$. For each $0 \leq i \leq q(n)-1$, we define $f_i=K^i2^{q(n)}$ and \[ {\cal P}_i=\left\{ P_s \in {\cal P} : f_i \leq \prod_{j=1}^{r-1}|A_j^s| < f_{i+1} \right\}. \] \begin{lemma} \label{l:lm4} There is some $0 \leq i \leq q(n) -1$ such that $|{\cal P}_i| \leq \tfrac{t}{q(n)}$. \end{lemma} \noindent {\bf Proof:} Suppose the lemma is not true. Since the ${\cal P}_i$'s are pairwise disjoint, we have \[ |{\cal P} | \geq \sum_{i=0}^{q(n)-1} |{\cal P}_i| > t, \] which contradicts the assumption on the size of ${\cal P}$. \hfill $\square$ Let $0 \leq i_0 \leq q(n)-1$ be the smallest integer satisfying the statement of Lemma \ref{l:lm4}. We consider \[ {\cal P}'=\left\{ P_i \in {\cal P} : \prod_{j=1}^{r-1}|A_j^i|<f_{i_0+1} \right\}. \] For an $r$-set $F=\{v_1,v_2,\ldots,v_r\} \in {\cal F}$ and each $v_j \in F$, we define \[ N_{{\cal P}',F}(v_j) =\left\{ P_i \in {\cal P}' : v_j \not \in \cup_{s=1}^{r-1} A_s^i, \textrm{ and } |F \cap A_s^i|=1 \textrm{ for each } 1 \leq s \leq r-1 \right\}. \] Figure \ref{fg:fg2} gives an example of $P \in N_{{\cal P}',F}(v_j)$. Roughly speaking, each $P_i \in N_{{\cal P}',F}(v_j)$ could be the prefix of a possible nontrivial complete $r$-partite $r$-uniform hypergraph which contains $F$. \begin{figure} \caption{An example with $r=3$, $F=\{a,b,c\}$.} \label{fg:fg2} \end{figure} We note that $N_{{\cal P}',F}(v_j)$ and $N_{{\cal P}',F}(v_k)$ are disjoint if $j \not = k$. Let $N_{{\cal P}'}(F)=\cup_{j=1}^r N_{{\cal P}',F}(v_j)$ and $d_{{\cal P}'}(F)=|N_{{\cal P}'}(F)|= \sum_{j=1}^{r}|N_{{\cal P}',F}(v_j)|$. \begin{lemma} \label{l:lm5} Assume $|{\cal P}' \setminus {\cal P}_{i_0}|=xn^{r-1}$ with $x \geq 0.01c$. Let ${\cal F}'=\{F \in {\cal F} : d_{{\cal P}'}(F) \leq \tfrac{3}{c} x f_{i_0}\}$. We have \[ |{\cal F}'| \geq \frac{cn^r}{3}. \] \end{lemma} \noindent {\bf Proof:} We observe that each $P_i=\{A_1^i,\ldots,A_{r-1}^i\} \in {\cal P}'$ contributes one to $d_{{\cal P}'}(F)$ for at most $n \prod_{j=1}^{r-1}|A_j^i|$ $r$-sets $F \in {\cal F}$. Recall the definition of ${\cal P}_i$ and Lemma \ref{l:lm4}.
For $n$ large enough, we have \begin{align*} \sum_{F \in {\cal F}} d_{{\cal P}'}(F) & \leq \sum_{P_i \in {\cal P}'} n \prod_{j=1}^{r-1} |A_j^i|\\ &= \sum_{P_i \in {\cal P}' \setminus {\cal P}_{i_0}} n \prod_{j=1}^{r-1} |A_j^i| + \sum_{P_i \in {\cal P}_{i_0}} n \prod_{j=1}^{r-1} |A_j^i|\\ & \leq nf_{i_0}|{\cal P}' \setminus {\cal P}_{i_0}|+ nf_{i_0+1}|{\cal P}_{i_0}| \\ &\leq xf_{i_0}n^r + \frac{tn f_{i_0+1} }{q(n)} \\ & \leq 2x f_{i_0} n^r, \end{align*} where we applied the facts $t \leq n^{r-1}$ and $x \geq 0.01c$ as well as the definition of $i_0$. We get the following inequality: \[ \frac{3x}{c} f_{i_0} |{\cal F} \setminus {\cal F}'| \leq \sum_{F \in {\cal F} \setminus {\cal F}'} d_{{\cal P}'}(F) \leq \sum_{F \in {\cal F}} d_{{\cal P}'}(F) \leq 2xf_{i_0} n^r . \] Clearly, the inequality above implies $|{\cal F} \setminus {\cal F}'| \leq \tfrac{2cn^r}{3}$. Equivalently, $|{\cal F}'| \geq \tfrac{cn^r}{3}$. \hfill $\square$ Before proving a lower bound on the number of uncovered edges, we need one more lemma. \begin{lemma} \label{l:lm51} Let ${\cal F}'$ be the subfamily of ${\cal F}$ given by Lemma \ref{l:lm5}. There is a subset ${\cal W} \subseteq {\cal F}'$ and a collection of $r$-sets ${\cal C}(F) \subset \binom{[n]}{r} \setminus {\cal W}$ associated with each $F \in {\cal W}$ which satisfy the following: \noindent 1: $|{\cal W}| \geq \tfrac{c^2n^r}{10xf_{i_0}}.$ \noindent 2: $|{\cal C}(F)| \leq \tfrac{3}{c}xf_{i_0}$ for each $F \in {\cal W}$. \noindent 3: For each $F=\{v_1,\ldots,v_r\} \in {\cal W}$ and each $1 \leq i \leq r$, if $P=\{A_1,\ldots,A_{r-1}\} \in {\cal P}' $ and $P \in N_{{\cal P}',F}(v_i)$, then there is $w \in A_{r-1} \setminus F$ such that $(F\setminus v_i) \cup w \in {\cal C}(F)$. \end{lemma} \noindent {\bf Proof:} To define ${\cal W}$, we first give a linear ordering of $r$-sets in ${\cal F}'$ and consider the following algorithm. We will define sets ${\cal F}_i$ recursively and build the set ${\cal W}$ step by step. Initially, let ${\cal F}_0={\cal F}'$ and ${\cal W}=\emptyset$. For each $i \geq 1$, if ${\cal F}_{i-1} \not = \emptyset$, then let $F_i=\{v_1,v_2,\ldots,v_r\}$ be the first $r$-set in ${\cal F}_{i-1}$. For each $1 \leq j \leq r$ and each $P=\{A_1,\ldots, A_{r-1}\} \in N_{{\cal P}',F_i}(v_j)$, we note $|F_i \cap A_s|=1$ for each $1 \leq s \leq r-1$ and $v_j \not \in \cup_{s=1}^{r-1} A_s$ by the definition of $P \in N_{{\cal P}', F_i}(v_j)$. Suppose $F_i \cap A_{r-1}=\{u\}$. We notice $|A_{r-1}| \geq 2$ as $P$ is the prefix of a nontrivial complete $r$-partite $r$-uniform hypergraph. If $\left( F_i \setminus u \right) \cup v \not \in {\cal F}_{i-1} \cup {\cal W} $ for some $v \in A_{r-1}$, then for each $w \in A_{r-1} \setminus v$, we move $(F_i \setminus u) \cup w$ from $ {\cal F}_{i-1}$ to ${\cal W}$ provided $(F_i \setminus u) \cup w \in {\cal F}_{i-1}$. Otherwise, $\left( F_i \setminus u \right) \cup v \in {\cal F}_{i-1} \cup {\cal W} $ for each $v \in A_{r-1}$. We claim that actually $\left( F_i \setminus u \right) \cup v \in {\cal F}_{i-1} $ for each $v \in A_{r-1}$. We proceed with the algorithm by assuming this claim. We choose an arbitrary $w \in A_{r-1} \setminus u$ and delete $\left( F_i \setminus u \right) \cup w $ from ${\cal F}_{i-1}$. Moreover, for each $v \in A_{r-1} \setminus w$, we move $\left( F_i \setminus u \right) \cup v $ from ${\cal F}_{i-1}$ to ${\cal W}$. In each case, we define ${\cal F}_i$ as the resulting subset of ${\cal F}_{i-1}$. Now we prove the claim. Suppose there is some $z \in A_{r-1} \setminus u$ such that $(F_i \setminus u) \cup z \in {\cal W}$.
We pick such a vertex $z$ so that the $r$-set $F'=(F_i \setminus u) \cup z $ is the smallest one in ${\cal W}$ under the linear ordering. Suppose $F'$ was added to ${\cal W}$ at step $j$ with $j<i$. We examine the moment that $F'$ was moved to ${\cal W}$. If there is some $s \in A_{r-1} \setminus z$ such that $(F' \setminus z) \cup s \not \in {\cal F}_{j-1} \cup {\cal W}$, then $s \not = u$. Otherwise, $F_i \not \in {\cal F}_{i-1}$. We notice $({\cal F}_{i-1} \cup {\cal W}) \subseteq ({\cal F}_{j-1} \cup {\cal W})$ as $j<i$. Therefore, $(F_i \setminus u) \cup s \not \in {\cal F}_{i-1} \cup {\cal W}$ and we are in the first case, which is a contradiction. We obtain that $F'$ must satisfy the statement of the claim (because $F'$ is the first one of the form $(F_i \setminus u) \cup z$). Thus $F_i$ was moved to ${\cal W}$ when we were moving $F'$ to ${\cal W}$, which leads to a contradiction. Figure \ref{fg:fg3} is an illustrative example. If ${\cal F}_{i-1} = \emptyset$, then we halt the process and output ${\cal W}$. Recall the definition of ${\cal F}'$, i.e., $d_{{\cal P}'}(F) \leq \tfrac{3}{c} x f_{i_0}$ for each $F \in {\cal F}'$. We get that each $F \in {\cal F}'$ can cause at most $\tfrac{3}{c}xf_{i_0}$ other $r$-sets in ${\cal F}_{i-1}$ to be deleted from ${\cal F}_{i-1}$ if $F$ is added to ${\cal W}$ at time $i$. Recall $|{\cal F}'| \geq \tfrac{cn^r}{3}$. Thus, \[ |{\cal W}| \geq \frac{|{\cal F}'|}{\tfrac{3}{c}xf_{i_0}+1} \geq \frac{c^2n^r}{10xf_{i_0}}. \] For each $F \in {\cal W}$, we next associate with $F$ a set of $r$-sets ${\cal C}(F) \subset \binom{[n]}{r} \setminus {\cal W}$. Assume $F=\{v_1,\ldots,v_r\}$. For each $1 \leq i \leq r$ and each $\{A_1,\ldots,A_{r-1}\} \in N_{{\cal P}',F}(v_i)$, by the construction of ${\cal W}$, there is some $v \in A_{r-1} \setminus F$ such that $\left( F \setminus v_i \right) \cup v \not \in {\cal W}$. The desired vertex $v$ exists by considering when $F$ is moved to ${\cal W}$. If $\left( F \setminus v_i \right) \cup v $ is not an edge, then it excludes the possibility that $F$ gets covered by the complete $r$-partite $r$-uniform hypergraph with the prefix $\{A_1,\ldots,A_{r-1}\}$. We put the $r$-set $\left( F \setminus v_i \right) \cup v $ in ${\cal C}(F)$. For an example, see Figure \ref{fg:fg3}. We will call each $R \in {\cal C}(F)$ a {\it certificate} for $F$. We note that if $R \in {\cal C}(F)$, then $|F \cap R|=r-1$ and the symmetric difference $F \bigtriangleup R$ is in $A_{r-1}$. \begin{figure} \caption{An example with $r=3$, $F=\{u,v,w\}$.} \label{fg:fg3} \end{figure} We have $|{\cal C}(F)| \leq \frac{3}{c}xf_{i_0}$ by the assumption on $d_{{\cal P}'}(F)=|N_{{\cal P}'}(F)|$ for each $F \in {\cal F}'$. \hfill $\square$ The next lemma will tell us that with high probability the number of $r$-sets $F \in {\cal W}$ such that $F$ is an edge in $H \in H^{(r)}(n,p)$ and $F$ is not contained in any nontrivial complete $r$-partite $r$-uniform hypergraph with the prefix from ${\cal P}'$ is large. \begin{lemma} \label{l:lm6} Assume $1/\log\log \log \log n \leq p \leq 1/2$, $|{\cal P}' \setminus {\cal P}_{i_0}|=xn^{r-1}$ with $x \geq 0.01c$, $H \in H^{(r)}(n,p)$, and Lemma \ref{l:lm1} holds. With probability at most $2{\rm exp}(-n^{r-0.92})$, the number of edges in $E(H) \cap {\cal F} $ which are not contained in any complete $r$-partite $r$-uniform hypergraph with the prefix from ${\cal P}'$ is less than \[ \frac{c^2 n^r p(1-p)^{\tfrac{3}{c}xf_{i_0} }}{12 xf_{i_0}}. \] \end{lemma} \noindent {\bf Proof:} We will work on the collection of $r$-sets ${\cal W}$ given by Lemma \ref{l:lm51}.
Let $Y$ be the number of $r$-sets from ${\cal W}$ which is an edge in $H \in H^{(r)}(n,p)$ and is not contained by any complete $r$-partite $r$-uniform hypergraph with the prefix from ${\cal P}'$. For each $F \in {\cal W}$ and $R \in {\cal C}(F)$, we recall that $R$ is a certificate for $F$. We remark that an $r$-set $R$ could be a certificate for more than one $r$-set $F \in {\cal W}$. Let ${\cal C}=\cup_{F \in {\cal W}} {\cal C}(F)$. For an $r$-set $R \in {\cal C}$, if $R \in {\cal C}(F)$ for more than $n^{0.45}$ sets $F \in {\cal W}$, then we call $R$ a {\it bad certificate}. Let ${\cal C}_1$ be the collection of bad certificates. For each $F \in {\cal W}$, we set ${\cal C}'(F)={\cal C}(F) \setminus {\cal C}_1$. We fix the selection of ${\cal W}$, ${\cal C}'(F)$ for each $F \in {\cal W}$, and the collection of bad certificates ${\cal C}_1$. We sample all possible edges and let $X_F$ be the indicator random variable such that $F$ is an edge in $H \in H^{(r)}(n,p)$ and $R$ is not a edge in $H \in H^{(r)}(n,p)$ for each $R \in {\cal C}'(F)$. We observe that if $X_F=1$, then $F$ is not covered by any nontrivial complete $r$-partite $r$-uniform hypergraphs with the prefix from ${\cal P}'$ and containing no bad certificate. To see this, suppose $F$ is covered by some $H$ with vertex parts $A_1,\ldots,A_{r-1},A_r$ and $H$ dose not contain any bad certificate. Since the definition of ${\cal C}'(F)$, there is some $F' \in {\cal C}'(F) \cap \prod_{i=1}^{r} A_i $. As the definition of $X_F=1$, we get $F'$ is not an edge. Thus, $A_1,\ldots, A_r$ do not form a complete $r$-partite $r$-uniform hypergraph, which is a contradiction. We define $X=\sum_{F \in {\cal W}} X_F$. Applying Lemma \ref{l:lm3} with ${\cal F}={\cal W}$, $k(n)=\tfrac{3}{c}xf_{i_0}$, and $l(n)=n^{0.45}$, we obtain with probability at least $1-2{\rm exp}(-n^{r-0.92})$, we have \[ X \geq \frac{c^2n^r p(1-p)^{\tfrac{3}{c}xf_{i_0}}}{11xf_{i_0}}. \] We note $n^{r-0.01}$ is a lower term as the definition of $f_{i_0}$ and the assumption for $p$. We use ${\cal F}''$ to denote those $r$-sets $F \in {\cal W}$ such that $X_F=1$. The argument above gives that with probability at least $1-2{\rm exp}(-n^{r-0.92})$, we have \[ |{\cal F}''| \geq \frac{c^2n^r p(1-p)^{\tfrac{3}{c}xf_{i_0}}}{11xf_{i_0}}. \] Let us condition on this. We note that an edge in ${\cal F}''$ could be covered by some complete $r$-partite $r$-uniform hypergraph which contains a bad certificate. We next prove an upper bound on the number of such edges. This upper bound works for all samplings of edges. Let $A_1,\ldots,A_r$ be the vertex parts of such a complete $r$-partite $r$-uniform hypergraph $H$. Suppose $\{A_1,\ldots,A_{r-1}\} \in {\cal P}'$. We define \[ A_r'=\{v_r \in A_r: \textrm{ there are } v_1 \in A_1, \ldots,v_{r-1} \in A_{r-1} \textrm{ such that } \{v_1,\ldots,v_r\} \in {\cal F}'' \}. \] The number of edges from ${\cal F}''$ covered by $H$ is at most $|A_i'| \prod_{i=1}^{r-1}|A_i|$. We next relate the number of bad certificates contained in $H$ to the size of $A_r'$. For each $w \in A_r'$, by the definition of $A_r'$, there is some $F=\{v_1,\ldots,v_{r-1},w\} \in A_1 \times \cdots \times A_{r-1} \times \{w\}$ such that $F \in {\cal F}''$. We observe $\{A_1,\ldots,A_{r-1}\} \in N_{{\cal P}'}(F)$. Let $\{v_1,\ldots,v_{r-2},z,w\}$ be the certificate of $F$ associated with $\{A_1,\ldots,A_{r-1}\}$, where $z \in A_{r-1}$. We notice $\{v_1,\ldots,v_{r-2},z,w\}$ must be a bad certificate. Otherwise, as $F \in {\cal F}''$, we get $\{v_1,\ldots,v_{r-2},z,w\}$ is a nonedge. 
Then $A_1,\ldots,A_r$ do not form a complete $r$-partite $r$-uniform hypergraph which is a contradiction. Therefore, each $w \in A_r'$ gives at least one bad certificate from $A_1 \times \cdots \times A_{r-1} \times \{w\}$ and these bad certificates are distinct for different $w$ from $A_r'$. We obtain that the number of bad certificate in $H$ is at least $|A_r'|$. We divide those hypergraphs which contains a bad certificate into two subsets ${\cal H}_1$ and ${\cal H}_2$, where ${\cal H}_1=\{H: |A_r'| \leq n^{0.9}\}$ and ${\cal H}_2=\{H: |A_r'| \geq n^{0.9}\}$. We note that each $H \in {\cal H}_2$ contains at least $n^{0.9}$ bad certificates as the analysis above. We next prove absolute upper bounds for the number of edges from ${\cal F}''$ which are covered by ${\cal H}_1$ and ${\cal H}_2$. We observe that each $H \in {\cal H}_1$ can cover at most $ |A_r'| \prod_{j=1}^{r-1} |A_j| \leq (r+1) n^{0.9}\log n $ edges from ${\cal F}''$ as we assume Lemma \ref{l:lm1} holds. There are at most $t<n^{r-1}$ of them as assumptions in Theorem \ref{t:submain}. Therefore, ${\cal H}_1$ covers at most $(r+1)n^{r-0.1}\log n$ edges from ${\cal F}''$. We need an upper bound for the number of bad certificates in total. We consider pairs $(F,R)$ such that $F \in {\cal W}$ and $R \in {\cal C}(F)$. As $|{\cal C}(F)| \leq \tfrac{3}{c} x f_{i_0}$ for each $F \in {\cal W}$, the number of such pairs is less than \[ |{\cal W}| \frac{3}{c} x f_{i_0}< \frac{3}{c}xf_{i_0} n^r < n^{r} \log n, \] here we used the fact $|{\cal W}| < n^r$ and the definition of $f_{i_0}$. As the definition of a bad certificate, a simple double counting method yields that the number of bad certificates is at most $n^{r-0.55} \log n$. Since each bad certificate (viewed as an edge) is contained in at most one $H \in {\cal H}_2$ (we are considering the partition of edges) and each $H \in {\cal H}_2$ contains at least $n^{0.9}$ bad certificates, we have $|{\cal H}_2| \leq n^{r-1.45} \log n$. The number of edges contained in each $H \in {\cal H}_2$ has an absolute upper bound $(r+1)n\log n$. Therefore, the number of edges from ${\cal F}''$ which are covered by ${\cal H}_2$ is at most $(r+1)n^{r-0.45} \log^2 n$. Thus, those complete $r$-partite $r$-uniform hypergraphs containing a bad certificate cover at most $(r+1)n^{r-0.1}\log n+(r+1)n^{r-0.45} \log^2 n$ edges from ${\cal F}''$. Therefore, we have \begin{align*} Y & \geq \frac{c^2n^r p(1-p)^{\tfrac{3}{c}xf_{i_0}}}{11xf_{i_0}}-(r+1)n^{r-0.1}\log n-(r+1)n^{r-0.45} \log^2 n\\ & > \frac{c^2n^r p(1-p)^{\tfrac{3}{c}xf_{i_0}}} {12xf_{i_0}} \end{align*} the proof of this lemma is complete. {\cal H}fill $\square$ We are now ready to prove Theorem \ref{t:submain}. \noindent {\bf Proof of Theorem \ref{t:submain}:} To simplify the notation, we define the following prefix sets: \begin{align*} {\cal Q}_1={\cal P}' \setminus {\cal P}_{i_0} &=\left\{P_i \in {\cal P}: \prod_{j=1}^{r-1} |A_j^i| < f_{i_0} \right\} \\ {\cal Q}_2=({\cal P}\setminus {\cal P}') \cup {\cal P}_{i_0} &=\left\{P_i \in {\cal P}: \prod_{j=1}^{r-1} |A_j^i| \geq f_{i_0}\right\} \\ {\cal Q}_3={\cal P} \setminus {\cal P}' &=\left\{P_i \in {\cal P}: \prod_{j=1}^{r-1} |A_j^i| \geq f_{i_0+1} \right\} \\ \end{align*} Let $c_1(n)=2, c_2(n)=f_{i_0},$ and $c_3(n)=f_{i_0+1}$. For $H \in H^{(r)}(n,p)$ and each $i \in \{1,2,3\}$, let ${\cal Z}_i$ be the event that $g(H,{\cal Q}_i) \leq |{\cal Q}_i|c_i(n)p^{c_i(n)}n+2n^{r-0.3}$. 
Lemma \ref{l:lm2} implies that with probability at least $1-6{\rm exp}(-n^{r-0.8})$ all events ${\cal Z}_1,{\cal Z}_2,{\cal Z}_3$ hold simultaneously. We condition on these three events. We note that for each $i \in \{1,2,3\}$, the number of edges from ${\cal F}$ which is covered by complete $r$-partite $r$-uniform hypergraphs with the prefix from ${\cal Q}_i$ is bounded above by the function $g(H,{\cal Q}_i)$. We proceed to prove $|{\cal Q}_1| \geq 0.01c$. Suppose not. Because the event ${\cal Z}_1$ occurs, the number of edges from ${\cal F}$ covered by those complete $r$-partite $r$-uniform hypergraphs with the prefix $P \in {\cal Q}_1$ is at most $(2+o(1))p^2 n|{\cal Q}_1| \leq \left(0.02cp^2+o(1) \right) n^r $, here $2n^{r-0.2}$ is a lower term as we assume $p \geq 1/\log\log \log \log n$. A simple application of Theorem \ref{t:chernoff} yields that with probability at least $1-{\rm exp}(-cpn^r/8)$ the number of $r$-sets in ${\cal F}$ being an edge in $H \in H^{(r)}(n,p)$ is at least $\tfrac{cpn^r}{2}$. Therefore, the number of edges covered by those complete $r$-partite $r$-uniform hypergraphs with the prefix ${\cal Q}_2$ is at least $\tfrac{cp n^r}{4} $. As the event ${\cal Z}_2$, we get \[ |{\cal Q}_2| \geq \frac{(\tfrac{pc}{4}+o(1))n^r}{ f_{i_0} p^{f_{i_0}} n} > n^{r-1} \] when $n$ is large enough. This is a contradiction to the assumption $|{\cal P}| \leq n^{r-1}$. Therefore, as long as events ${\cal Z}_1$ and ${\cal Z}_2$ as well as the lower bound for the number of edges from ${\cal F}$ hold, we have $|{\cal Q}_1| \geq 0.01c$ which is one of the assumptions in Lemma \ref{l:lm6}. Recall Lemma \ref{l:lm6}. Those uncovered edges given by Lemma \ref{l:lm6} must be covered by complete $r$-partite $r$-uniform hypergraphs with the prefix from ${\cal Q}_3$. As the event ${\cal Z}_3$, we get \[ |{\cal Q}_3| \geq \frac{c^2 n^{r-1} p(1-p)^{\tfrac{3}{c}xf_{i_0}}}{13 x f_{i_0} f_{i_0+1} p^{f_{i_0+1}}}, \] we note $n^{r-0.3}$ is a lower order term. Recall $1/\log\log \log \log n \leq p \leq 1/2$ and $f_i=K^i 2^{q(n)}$. We get \begin{align} |{\cal P}|=|{\cal P}'|+|{\cal Q}_3| & \geq xn^{r-1}+\frac{c^2 n^{r-1} p(1-p)^{\tfrac{3}{c}xf_{i_0}}}{13 x f_{i_0} f_{i_0+1} p^{f_{i_0+1}}} \nonumber\\ &\geq \frac{c^2 n^{r-1} p(1-p)^{\tfrac{3}{c}xf_{i_0}}}{13 x f_{i_0} f_{i_0+1} p^{f_{i_0+1}}} \nonumber \\ & \geq \frac{c^2 n^{r-1} p 2^{f_{i_0+1}-\tfrac{3}{c}xf_{i_0}}}{13xf_{i_0}f_{i_0+1}} \label{lb1}\\ & \geq \frac{c^2 n^{r-1} p 2^{f_{i_0+1}-\tfrac{3}{c}f_{i_0}}}{13f_{i_0}f_{i_0+1}} \label{lb2}\\ &= \frac{c^2 n^{r-1} p 2^{K^{i_0+1}2^{q(n)}-\tfrac{3}{c}K^{i_0}2^{q(n)}}}{13K^{2i_0+1}2^{2q(n)}} \nonumber\\ &=\frac{c^2 n^{r-1} p 2^{\tfrac{1}{c}K^{i_0}2^{q(n)}}}{13K^{2i_0+1}2^{2q(n)}} \label{lb3}\\ &> n^{r-1} \nonumber \end{align} when $n$ is large enough. We used $p \leq 1/2$ to get inequality \eqref{lb1}, $x \leq 1$ to get inequality \eqref{lb2}, and $K=\tfrac{4}{c}$ to get inequality \eqref{lb3}. Therefore, as long as events ${\cal Z}_1, {\cal Z}_2, {\cal Z}_3$ occur, the lower bound for the number of edges in ${\cal F} \cap E(H)$ holds, and Lemma \ref{l:lm6} holds, we get a contradiction. With probability at most $2{\rm exp}(-n^{r-0.92})+6{\rm exp}(-n^{r-0.8})+{\rm exp}(-cpn^r/8) \leq 3{\rm exp}(-n^{r-0.92})$, one of them does not hold, this completes the proof of the theorem. {\cal H}fill $\square$ \section{Proof of Theorem \ref{main}} Before we prove the main theorem, we need to show an upper bound on the number of choices for the prefix set ${\cal P}$. 
\begin{lemma} \label{l:lm10} Suppose ${\cal P}=\{P_1,\ldots,P_q\}$, where $P_i=\{A_1^i,\ldots,A_{r-1}^i\}$ and $1 \leq \prod_{j=1}^{r-1} |A_j^i| < (r+1)\log n$ for each $1 \leq i \leq q$. The number of choices for ${\cal P}$ with $|{\cal P}| \leq n^{r-1}$ is bounded from above by $n^{(r+3)n^{r-1}\log n}$ when $n$ is large enough. \end{lemma} \noindent {\bf Proof:} We shall show the desired upper bound step by step. We have at most $n^{r-1}$ choices for the size of ${\cal P}$. First, we fix the size of ${\cal P}$. We will establish an absolute upper bound on the number of choices for each element $P_i$ of ${\cal P}$. For each $P_{i}=\{A_1^i,\ldots,A_{r-1}^i\} \in {\cal P}$, we have $t_i=|\cup_{j=1}^{r-1} A_j^i| \leq (r+1)\log n+r-2 < (r+2)\log n$ as $\prod_{j=1}^{r-1} |A_j^i| < (r+1)\log n$. Therefore, $\cup_{j=1}^{r-1} A_j^i \in \binom{[n]}{\leq (r+2) \log n}$, which implies that the number of choices for $\cup_{j=1}^{r-1} A_j^i$ is at most $n^{(r+2)\log n}$. We fix the selection of $\cup_{j=1}^{r-1} A_j^i$ and wish to partition it into $r-1$ disjoint parts $A_j^i$. Let $a_j=|A_j^i|$ for $1 \leq j \leq r-1$. Then we have $a_1+\ldots+a_{r-1}=t_i$. The number of choices for the sizes $a_1,\ldots,a_{r-1}$ equals the number of solutions to the equation $a_1+\ldots+a_{r-1}=t_i$. Since $a_j \geq 1$, we have at most $\binom{t_i}{r-1}$ choices for $a_1,\ldots,a_{r-1}$, which can be bounded from above by $((r+2)\log n)^{r-1}$ as $t_i \leq (r+2)\log n$. If we fix the size of each $A_j^i$, then the number of ways to partition $\cup_{j=1}^{r-1} A_j^i$ into $A_1^i,\ldots,A_{r-1}^i$ equals $\binom{t_i}{a_1,\ldots,a_{r-1}}$, which is at most $t_i! \leq ((r+2)\log n)^{(r+2)\log n}$. Therefore, the number of choices for $P_i$ is at most \[ n^{(r+2)\log n}\left((r+2)\log n \right)^{(r+2)\log n+r-1}. \] Recall the assumption $|{\cal P}| \leq n^{r-1}$. We get that the number of choices for ${\cal P}$ is at most \[ n^{r-1} \left(n^{(r+2)\log n}\left((r+2)\log n \right)^{(r+2)\log n+r-1} \right)^{|{\cal P}|} <n^{(r+3)n^{r-1}\log n}, \] provided $n$ is sufficiently large. \hfill $\square$ We are ready to prove Theorem \ref{main}. \noindent {\bf Proof of the upper bound:} We shall exhibit an explicit decomposition of each $r$-uniform hypergraph with $n$ vertices using at most $(1-\pi(K_r^{(r-1)})+o(1))\binom{n}{r-1}$ trivial complete $r$-partite $r$-uniform hypergraphs. For each $r \geq 3$, let $G=([n],E)$ be an $(r-1)$-uniform hypergraph which has $\textrm{ex}(n,K_{r}^{(r-1)})$ edges and does not contain $K^{(r-1)}_{r}$ as a subhypergraph. Obviously, $G$ is well-defined. Let $G'$ be the complement of $G$. Therefore, $E(G')=\binom{[n]}{r-1} \setminus E(G)$. We observe that an independent set of size $r$ in $G'$ will be a $K_{r}^{(r-1)}$ in $G$. As $G$ does not contain $K_{r}^{(r-1)}$, we get that each $F \in \binom{[n]}{r}$ contains at least one edge of $G'$. Set $q=|E(G')|$ and list the edges of $G'$ as $e_1,\ldots,e_q$. For each $r$-uniform hypergraph $H$ with $n$ vertices, we will show that $H$ can be decomposed into at most $q$ complete $r$-partite $r$-uniform hypergraphs as follows. Let $H_0=H$ and we will define a sequence of complete $r$-partite $r$-uniform hypergraphs recursively. For each $1 \leq i \leq q$, we assume the edge $e_i$ in $G'$ is $\{v_1,v_2,\ldots,v_{r-1}\}$. The key observation is the following.
For an edge $F \in E(H)$, if $F$ is contained in a trivial complete $r$-partite $r$-uniform hypergraph with vertex parts $\{v_1\},\ldots,\{v_{r-1}\},V_r$, then the set $\{v_1,\ldots,v_{r-1}\}$ must be a subset of $F$. We define ${\cal F}_i=\{F \in E(H_{i-1}): e_i \subset F\}$ and $A_{r}=\cup_{F \in {\cal F}_i} F \setminus e_i$. If $A_{r}\not = \emptyset$, then the $i$-th complete $r$-partite $r$-uniform hypergraph $H_i'$ will have vertex parts $\{v_1\},\ldots, \{v_{r-1}\}, A_r$. If the set ${\cal F}_i$ is empty, then we do not define $H_i'$ and simply set $H_i=H_{i-1}$; otherwise we set $E(H_i)=E(H_{i-1}) \setminus E(H_i')$. This is done for each $1 \leq i \leq q-1$. We note that each $F \in E(H)$ contains at least one of the edges $e_1,\ldots,e_q$. The definition of the $H_i$'s ensures that each edge in $H$ is in exactly one of these trivial complete $r$-partite $r$-uniform hypergraphs. Clearly, for sufficiently large $n$, we have $q=(1-\pi(K^{(r-1)}_{r})+o(1)) \binom{n}{r-1}$. Since the decomposition above applies to all $H$, it also works for the random hypergraph $H \in H^{(r)}(n,p)$. \noindent {\bf Proof of the lower bound:} We assume Lemma \ref{l:lm1} holds. Thus each complete $r$-partite $r$-uniform subhypergraph of $H$ with vertex parts $A_1,A_2,\ldots,A_{r}$, where $|A_1| \leq |A_2| \leq \ldots \leq |A_{r}|$, satisfies $\prod_{i=1}^{r-1} |A_i| < (r+1) \log n$. For any fixed small positive constant $\epsilon$, we shall show that the probability that $f(H) \leq (1-\pi(K_r^{(r-1)})-\epsilon) \binom{n}{r-1}$ is small, where $H \in H^{(r)}(n,p)$. Fix a prefix set ${\cal P}=\{P_1,\ldots,P_t\}$, where $P_i=\{A_1^i,\ldots,A_{r-1}^i\}$ and $\prod_{j=1}^{r-1} |A_j^i| < (r+1) \log n$ for each $1 \leq i \leq t$, and $t \leq (1-\pi(K_{r}^{(r-1)})-\epsilon)\binom{n}{r-1}$. Let ${\cal X}$ denote the event that there are $t$ sets $A_r^1,\ldots,A_r^t$ such that \[ E(H)=\bigsqcup_{i=1}^t \prod_{j=1}^{r} A_j^i, \] provided $H \in H^{(r)}(n,p)$. Here $A_1^i,\ldots,A_{r}^i$ form a complete $r$-partite $r$-uniform hypergraph for each $1 \leq i \leq t$. We assume the first $s$ of them are trivial complete $r$-partite $r$-uniform hypergraphs, i.e., $|A_1^i|=\cdots=|A_{r-1}^i|=1$ for each $1 \leq i \leq s$. As we did for proving the upper bound, we define an $(r-1)$-uniform hypergraph $G$ such that $V(G)=[n]$ and $E(G)= \binom{[n]}{r-1} \setminus (\cup_{i=1}^s \prod_{j=1}^{r-1} A_j^i)$. We note $|A_j^i|=1$ for each $1 \leq i \leq s$ and $1 \leq j \leq r-1$. We get $|E(G)| \geq (\pi(K_{r}^{(r-1)})+\epsilon) \binom{n}{r-1}$. By the supersaturation result for hypergraphs (see Theorem 1 in \cite{es}), we get that there are at least $c(\epsilon) n^{r}$ copies of $K_{r}^{(r-1)}$ in $G$. Let $G'$ be the complement of $G$ and ${\cal F}$ be the collection of independent sets with size $r$ in $G'$. We have $|{\cal F}| \geq c(\epsilon)n^{r}$. We observe that if $H \in H^{(r)}(n,p)$, then edges in ${\cal F} \cap E(H)$ must be covered by those nontrivial complete $r$-partite $r$-uniform hypergraphs in the partition. Let $\cal Y$ be the event that each $F \in {\cal F} \cap E(H)$ is contained in exactly one of the last $t-s$ nontrivial complete $r$-partite $r$-uniform hypergraphs. We have two cases depending on the range of the probability $p$. \begin{description} \item[Case 1:] $1/\log\log \log \log n \leq p \leq 1/2$. Applying Theorem \ref{t:submain} with ${\cal P}'=\{P_{s+1},\ldots,P_{t}\}$, we get that $\cal Y$ holds with probability at most $2{\rm exp}(-n^{r-0.92})$. This implies that the event ${\cal X}$ occurs with probability at most $2{\rm exp}(-n^{r-0.92})$.
By Lemma \ref{l:lm10}, there are at most $n^{(r+3)n^{r-1} \log n}$ choices for ${\cal P}$ satisfying the desired properties. Applying the union bound, we get that the probability that $f(H) \leq (1-\pi(K_r^{(r-1)})-\epsilon) \binom{n}{r-1}$ is at most $2{\rm exp}(-n^{r-0.92}) n^{(r+3)n^{r-1} \log n}< {\rm exp}(-n^{r-0.94})$ for any positive $\epsilon$. \item[Case 2:] $(\log n)^{2.001}/n \leq p \leq 1/\log\log \log \log n$. We observe that the set ${\cal F}$ is determined by the prefix set ${\cal P}$. Therefore, Lemma \ref{l:lm10} also gives an upper bound on the number of possible choices of ${\cal F}$. A simple application of Theorem \ref{t:chernoff} yields that with high probability $|{\cal F} \cap E(H)| \geq \tfrac{pcn^r}{4}$ for all ${\cal F}$ with $|{\cal F}| \geq n^r/\log \log n$. Edges in ${\cal F} \cap E(H)$ must be covered by the last $t-s$ nontrivial complete $r$-partite $r$-uniform hypergraphs. Since $t-s \leq \binom{n}{r-1}$, Lemma \ref{l:lm9} tells us that the event $\cal Y$ occurs with probability at most $2^{-0.05pc\log(1/p)n^r}$. This also implies that the event ${\cal X}$ occurs with probability at most $2^{-0.05pc\log(1/p)n^r}$. By Lemma \ref{l:lm9} and the union bound, we get that the probability that $f(H) \leq (1-\pi(K_r^{(r-1)})-\epsilon) \binom{n}{r-1}$ is at most $2^{-0.05pc\log(1/p)n^r} n^{(r+3)n^{r-1} \log n} \leq 2^{-0.04pc\log(1/p)n^r}$ as $np \geq (\log n)^{2.001}$ and $c$ is a constant. \end{description} The proof of the theorem is finished. \hfill $\square$ \section{Concluding remarks} In this paper, we studied the problem of partitioning the edge set of a random $r$-uniform hypergraph into edge sets of complete $r$-partite $r$-uniform hypergraphs. We were able to show that if $(\log n)^{2.001}/n \leq p \leq 1/2$ and $H \in H^{(r)}(n,p)$, then with high probability $f(H)=(1-\pi(K^{(r-1)}_r)+o(1))\binom{n}{r-1}$. For the case of $r=2$, results from \cite{alon2} and \cite{cp} assert that if $p$ is a constant, $p \leq 1/2$, and $G \in G(n,p)$, then with high probability $n-o((\log n)^{3+\epsilon}) \leq f(G) \leq n-(2+o(1))\log n$ for any positive constant $\epsilon$. For sparse random graphs, Alon \cite{alon2} determined the order of magnitude of the second term of $f(G)$. However, we do not have any information on the second term of $f(H)$ for $r \geq 3$. This leads to the following question. \noindent {\bf Problem 1:} Determine the order of the second term of $f(H)$ for $H \in H^{(r)}(n,p)$ and $r \geq 3$. We note that we were only able to determine the leading coefficient of $f(H)$ for $p \geq (\log n)^{2.001}/n$ and $H \in H^{(r)}(n,p)$. A natural question is to prove similar results for other ranges of the probability $p$. We recall that for a graph $G$, the {\it strong bipartition number} ${\rm bp}'(G)$ of $G$ is the minimum number of nontrivial complete bipartite subgraphs (which are not stars) of $G$ such that each edge of $G$ is in exactly one of them. This parameter was introduced by Chung and the author in \cite{cp} when they were studying the bipartition number of random graphs. In particular, they proved that if $p$ is a constant, $p \leq 1/2$, and $G \in G(n,p)$, then ${\rm bp}'(G) \geq 1.0001 n$ with high probability. For sparse random graphs, Alon \cite{alon2} proved a better lower bound. Namely, he showed that, with high probability, ${\rm bp}'(G) \geq 2n$ if $G \in G(n,p)$. We remark here that our methods for proving Theorem \ref{t:submain} implicitly yield the following theorem.
\begin{theorem} If $p$ is a constant, $p \leq 1/2$, and $G \in G(n,p)$, then with high probability \[ \frac{{\rm bp}'(G)}{n} \to \infty \textrm{ as } n \to \infty. \] \end{theorem} \end{document}
\begin{document} \title[A new minimizing-movements scheme]{A new minimizing-movements scheme \\ for curves of maximal slope} \author[U. Stefanelli]{Ulisse Stefanelli} \address[Ulisse Stefanelli]{Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria, Vienna Research Platform on Accelerating Photoreaction Discovery, University of Vienna, W\"ahringerstra\ss e 17, 1090 Wien, Austria, \& Istituto di Matematica Applicata e Tecnologie Informatiche {\it E. Magenes}, via Ferrata 1, I-27100 Pavia, Italy } \email{[email protected]} \urladdr{http://www.mat.univie.ac.at/$\sim$stefanelli} \subjclass[2010]{35K55} \keywords{Curves of maximal slope, minimizing movements, generalized geodesic convexity, nonlinear diffusion, Wasserstein spaces.} \begin{abstract} Curves of maximal slope are a reference gradient-evolution notion in metric spaces and arise as variational formulation of a vast class of nonlinear diffusion equations. Existence theories for curves of maximal slope are often based on minimizing-movements schemes, most notably on the Euler scheme. We present here an alternative minimizing-movements approach, yielding more regular discretizations, serving as a-posteriori convergence estimator, and allowing for a simple convergence proof. \end{abstract} \maketitle \section{Introduction} Gradient-flow evolution in metric spaces has been the subject of intense research in the last years. Starting from the pioneering remarks in \cite{DeGiorgi80}, the theory has been boosted by the monograph by Ambrosio, Gigli, \& Savar\'e \cite{Ambrosio08} and now encompasses existence and approximation results, as well as long-time behavior, decay to equilibrium, and regularity \cite{Santambrogio}. The applicative interest in evolution equations in metric spaces has been revived by the seminal observations in \cite{JKO} and the work by Otto \cite{Otto01} that a remarkably large class of diffusion equations can be variationally reinterpreted as gradient flows in Wasserstein spaces. More precisely, consider the nonlinear diffusion equation \begin{equation} \label{eq:introd2} \partial_t \rho - {\rm div}\big( \rho\nabla ( V + F'(\rho) + W \ast \rho) \big) = 0 \quad \text{in}\ \mathbb{R}^d \times (0,T). \end{equation} Here, $\rho=\rho(x,t)\geq 0$ is a time-dependent density with fixed total mass $\int_{\mathbb{R}^d}\rho(x,t)\, {\rm d} x=1$ and finite second moment $\int_{\mathbb{R}^d}|x|^2\rho(x,t)\, {\rm d} x<\infty$. Finally, $V : \mathbb{R}^d \to \mathbb{R}$ is a {\it confinement} potential, $F:[0,\infty) \to \mathbb{R}$ is an {\it internal-energy density}, $W:\mathbb{R}^d \to \mathbb{R}$ is an {\it interaction} potential, and $\ast$ stands for the standard convolution in $\mathbb{R}^d$. Equation \eqref{eq:introd2} can be variationally reformulated in terms of the gradient flow in the metric space $(\mathcal P_2(\mathbb{R}^d), W_2)$ of probability measures with finite second moment endowed with the $2$-Wasserstein distance $W_2$ of the functional $\phi$ defined as \begin{equation} \phi(u)= \int_{\mathbb{R}^d}V(x) \, {\rm d} u(x) + \int_{\mathbb{R}^d}F(\rho(x)) \,{\rm d} x + \frac12\int_{\mathbb{R}^d\times \mathbb{R}^d} W(x{-}y) \, {\rm d} (u \otimes u)(x,y)\label{eq:f} \end{equation} if $u = \rho {\mathcal L}^d$ and $\phi(u)=\infty$ if $u$ is not absolutely continuous with respect to the Lebesgue measure ${\mathcal L}^d$ in $\mathbb{R}^d$ see \cite{Ambrosio08} and Section \ref{sec:Wass}. 
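As an elementary illustration of \eqref{eq:introd2}-\eqref{eq:f}, recorded here for orientation only, the choice $V=W=0$ and $F(\rho)=\rho\log\rho$ gives $\rho\nabla F'(\rho)=\nabla\rho$, so that \eqref{eq:introd2} reduces to the heat equation and $\phi$ to the entropy, namely
\begin{equation*}
\partial_t \rho - \Delta\rho = 0 \quad \text{in}\ \mathbb{R}^d \times (0,T) \qquad \text{and} \qquad \phi(u)=\int_{\mathbb{R}^d}\rho(x)\log\rho(x)\,{\rm d} x \quad \text{for}\ u = \rho {\mathcal L}^d,
\end{equation*}
recovering the classical interpretation of the heat flow as the Wasserstein gradient flow of the entropy \cite{JKO,Otto01}.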
The reference notion of solution to gradient flows in metric spaces is that of {\it curves of maximal slope} \cite{DeGiorgi80}, see Definition \ref{def:curve} below. This is based on a specific reformulation of \eqref{eq:introd2} in the form of a single scalar relation, featuring specific scalar quantities playing the role of the norm of the time derivative of the trajectory and of the gradient of the energy, in the spirit of \eqref{eq:metric} below. Existence and decay to equilibrium of curves of maximal slope for $\phi$ in $(\mathcal P_2(\mathbb{R}^d), W_2)$ are available, see \cite{Ambrosio08,CMV,CMV2}, for instance. In this paper, we focus on a novel time-discretization scheme for gradient flows in metric spaces, falling within the class of {\it Minimizing Movements} in the sense of De Giorgi \cite{Ambrosio95,DeGiorgi93}. Our theory is framed in abstract metric spaces, see Sections \ref{sec:prem}-\ref{sec:p2}, and applied in linear and Wasserstein spaces in Sections \ref{sec:appl0} and \ref{sec:Wass}, respectively. To keep this introductory discussion as simple as possible, we present here the idea in the case of the doubly nonlinear ODE system driven by a smooth potential $\phi$ on $\mathbb{R}^d$, namely \begin{equation} |u'|^{p-2}u'+ \nabla \phi(u)=0 \quad \text{in} \ (0,T)\label{ode} \end{equation} for $p>1$, where the prime denotes time differentiation. This equation can be equivalently rewritten as \begin{equation} \phi(u(t)) + \frac1p\int_0^t |u'(r)|^p{\rm d} r +\frac1q\int_0^t |\nabla\phi(u(r))|^q{\rm d} r - \phi(u(0))=0 \quad \forall t \in (0,T) \label{eq:metric} \end{equation} where now $q=p/(p-1)$ is conjugate to $p$. Note that the left-hand side above is always nonnegative, so that \eqref{eq:metric} corresponds indeed to a so-called {\it null-minimization} principle: the left-hand side is minimized and one checks that the minimum value is $0$. This approach has been lately referred to as {\it De Giorgi's Energy-Dissipation} principle and has already been applied in a variety of different contexts, including generalized gradient flows \cite{Bacho,Rossi08}, rate-independent \cite{Mielke12,Roche} and GENERIC systems \cite{Peletier,generic_euler}, and optimal control \cite{portinale}. We complement equation \eqref{ode} by specifying the initial condition $u(0)=u^0$ for some $u^0 \in \mathbb{R}^d$. By introducing a time partition of $(0,T)$ with uniform steps $\tau=T/N>0$, $N\in \mathbb{N}$ (note however that we consider nonuniform partitions below), and letting $ u_0=u^0$, the new minimizing-movements scheme reads \begin{equation} \label{eq:2} u_i \in {\rm arg\,min}_u \left(\phi(u) + \frac{\tau^{1-p}}{p } \left|{u-u_{i-1}} \right|^p +\frac{\tau}{q} |\nabla \phi(u)|^q - \phi(u_{i-1}) \right) \end{equation} for $i=1,\dots,N$. With respect to the classical implicit Euler method, scheme~\eqref{eq:2} includes an extra term featuring the norm of the gradient. This modification with respect to Euler makes the function to be minimized in \eqref{eq:2} a discrete and localized version of the left-hand side in \eqref{eq:metric}. As such, scheme~\eqref{eq:2} is nothing but the canonical {\it variational integrator} scheme \cite{Hairer} associated with De Giorgi's Energy-Dissipation principle. Compared to Euler, the new minimizing-movements scheme \eqref{eq:2} shows some distinguishing features. First of all, the direct occurrence of the gradient in \eqref{eq:2} entails additional regularity of discrete solutions, see \eqref{eq:exreg}.
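To fix ideas, assume for a moment that $\phi$ is smooth and that the minimum in \eqref{eq:2} is attained at a point where the function under minimization is differentiable (this is meant as an orientation only and is not used in the analysis). Then the corresponding first-order optimality condition formally reads
\begin{equation*}
\tau^{1-p}|u_i-u_{i-1}|^{p-2}(u_i-u_{i-1}) + \nabla\phi(u_i) + \tau\,|\nabla\phi(u_i)|^{q-2}\,\nabla^2\phi(u_i)\,\nabla\phi(u_i)=0,
\end{equation*}
namely, the optimality condition of the implicit Euler scheme for \eqref{ode} perturbed by an extra term of order $\tau$.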
As a matter of illustration, in the case of the {\it linear} heat equation ($p=2$) with homogeneous Dirichlet boundary conditions, scheme \eqref{eq:2} corresponds to solving the problem $$\frac{u_i-u_{i-1}}{\tau}-\Delta u_i +\tau \Delta^2 u_i =0,$$ which is reminiscent of a {\it singular perturbation} of the Euler scheme, see Section \ref{sec:illu}. Secondly, the exact correspondence of \eqref{eq:2} to the left-hand side of \eqref{eq:metric} allows one to check convergence of discrete solutions without the need of introducing the so-called {\it De Giorgi's variational interpolation} function \cite[Def. 3.2.1]{Ambrosio08}. Thirdly, in using a time discretization to detect a minimum point of $\phi$ by iterating on the time steps, the new scheme shows enhanced performance with respect to Euler for large time steps, see \cite{generic_euler} and \eqref{prox} below. Finally, the functional under minimization in \eqref{eq:2} may serve as an a-posteriori estimator for the convergence of {\it any} discrete solution, regardless of the specific method used to obtain it. In particular, one can resort to approximate minimizers instead of true minimizers. The minimizing-movements scheme \eqref{eq:2} was already analyzed in \cite{generic_euler} in the case of gradient flows in Hilbert spaces. In particular, convergence of the scheme for $\phi$ being a $C^{1,\alpha}$ perturbation of a convex function and sharp, order-one error estimates in finite dimensions can be found there. The case of curves of maximal slope in metric spaces is also mentioned in \cite{generic_euler}, where nevertheless the analysis is limited to $p=2$ and geodesically convex potentials. In this note, we extend the analysis of \cite{generic_euler} to the case $p>1$ and to potentials $\phi$ being $(\lambda,p)$-generalized-geodesically convex for $\lambda\in \mathbb{R}$. More precisely, the combination of our main results, Theorems \ref{thm:main1}-\ref{thm:main2}, entails that solutions to the new minimizing-movements scheme \eqref{eq:2} in metric spaces, see \eqref{eq:min}, converge to curves of maximal slope for all $p>1$, if $\lambda \geq 0$, and for $p>2$, if $\lambda <0$. In addition, in Theorem \ref{thm:main3} we are able to provide a convergence result for functionals which are not geodesically convex, provided that some weak differentiability of their slope, in the form of a generalized one-sided Taylor-expansion condition, holds, see~\eqref{eq:taylor}. Before closing this introduction let us mention that alternative time-discrete schemes with respect to Euler are available, also in the nonlinear setting of metric spaces \cite{Clement2,Matthes,Turinici,Tribuzio}. We postpone an account on the literature to Subsection \ref{sec:literature}, for some preliminary material is needed to put these contributions in perspective. This is the plan of the paper. We introduce some notation and preliminaries in Section \ref{sec:prem} and present our main convergence results in Section \ref{sec:main}. In particular, assumptions are collected in Subsection \ref{sec:assumptions} and statements are given in Subsection \ref{sec:conv}. Some illustration of the theory on two linear equations, both in finite and infinite dimensions, is in Subsection \ref{sec:illu}. The convergence results are then proved in Sections \ref{sec:p1}-\ref{sec:p3}. Eventually, we comment on the application of the abstract theory in linear spaces in Section \ref{sec:appl0} and in Wasserstein spaces in Section \ref{sec:Wass}.
\section{Preliminaries}\label{sec:prem} \setcounter{equation}{0} We briefly collect here some classical notation and preliminaries on evolution in metric spaces, for completeness. The reader familiar with the classical reference \cite{Ambrosio08} may consider moving directly to Section \ref{sec:main}. In all of the following, $(U,d)$ denotes a complete metric space and $\phi:U \to (-\infty,\infty]$ is a proper functional, i.e., the {\it effective domain} $D(\phi):=\{u \in U \ : \ \phi(u) <\infty \}$ is assumed to be nonempty. Let $p,\,q>1$ be given with $1/p+1/q=1$. A curve $u: [0,T]\to U$ is said to belong to $AC^p([0,T];U)$ if there exists $m\in L^p(0,T)$ with \begin{equation}\label{metric_dev} d(u(s),u(t))\leq \int_s^t m(r)\,{\rm d} r \quad \text{for all \ $0 \leq s\leq t < T.$} \end{equation} If $u \in AC^p([0,T];U) $, the limit $$|u'|(t) := \lim_{s\to t}\frac{d(u(s),u(t))}{|t-s|}$$ exists for almost everywhere $t\in (0,T)$, see \cite[Thm. 1.1.2]{Ambrosio08}, and is referred to as {\it metric derivative} of $u$ at $t$. Moreover, the map $t \mapsto|u'|(t)$ is in $ L^p(0,T) $ and is minimal within the class of functions $m\in L^p(0,T)$ fulfilling \eqref{metric_dev}. The {\it local slope} \cite{Ambrosio08,Cheeger99,DeGiorgi80} of $ \phi $ at $ u \in D(\phi) $ is defined via $$ |\partial \phi|(u) := \limsup_{v \to u} \frac{(\phi(u) - \phi(v))^+}{d(u,v)}.$$ If $U$ is a Banach space and $\phi$ is Fr\'echet differentiable, we have that $|\partial \phi|(u) = \| {\rm D} \phi (u)\|_*$ (dual norm). In the following, we will make use of the notion of {\it geodesic convexity} for $\phi$. More precisely, we call {\it (constant-speed) geodesic} any curve $\gamma : [0,1]\to U$ such that $d(\gamma(t),\gamma(s)) = (t-s)d(\gamma(0),\gamma(1))$ for all $0\leq s \leq t \leq T$ and we say that $\phi$ is {\it $(\kappa,p)$-geodesically convex} for $\kappa\in \mathbb{R}$ if for all $ v_0,\, v_1 \in D(\phi)$ there exists a geodesic with $\gamma(0)=v_0 $ and $\gamma(1)=v_1$ such that \begin{align} \phi(\gamma(\theta)) \leq \theta \phi(v_1) + (1-\theta) \phi(v_0) -\frac{\kappa}{p}\theta(1-\theta) d^p(v_0,v_1) \ \ \forall \theta \in [0,1]\label{def:l-p-geod-convex} \end{align} The definition is classical for $p=2$. For this $p$-extension see \cite[Remark.~2.4.7]{Ambrosio08} or~\cite{Agueh2003}. Note that geodesic convexity in particular implies that $U$ is a {\it geodesic space}, for each pair $v_0$, $v_1$ is connected by a geodesic. More generally, we say that $\phi$ is {\it $(\kappa,p)$-generalized-geodesically convex} if \eqref{def:l-p-geod-convex} holds for some curve $\gamma$ connecting $v_0$ and $v_1$, not necessarily being a geodesic. In this case, $U$ is implicitly assumed to be path-connected. From \cite[Prop. 2.7]{rsss} we have that if $\phi$ is $(\kappa,p)$-geodesically convex and $d$-lower semicontinuous, the local slope $|\partial \phi|$ is $d$-lower semicontinuous as well. In addition, $|\partial \phi|$ admits the representation \begin{equation} |\partial \phi|(u)= \sup_{v \neq u} \left( \frac{\phi(u) - \phi(v)}{d(u,v)} + \frac{\kappa}p d^{p-1}(u,v)\right)^+\quad \forall u \in D(\phi).\label{repr-loc-slope} \end{equation} We denote by $D(|\partial \phi|)$ the effective domain of $|\partial \phi|$, namely, $D(|\partial \phi|)=\{u \in D(\phi) \ : \ |\partial \phi|(u)<\infty\}$. Under the above-mentioned geodesic convexity assumption, the local slope $|\partial \phi|$ is a {\it strong upper gradient} \cite[Def.~1.3.2]{Ambrosio08}. 
Namely, for all $u\in AC^p([0,T];U)$, the map $r \mapsto |\partial \phi|(u(r))$ is Borel and $$|\phi(u(t)) - \phi(u(s))| \leq \int_s^t |\partial \phi| (u(r))\,|u'|(r) \, {\rm d} r \quad \forall 0\leq s \leq t \leq T.$$ Note that, if $r \mapsto |\partial \phi| (u(r))\,|u'|(r)$ belongs to $ L^1(0,T) $, the latter entails that $\phi \circ u \in W^{1,1}(0,T) $ and $|(\phi \circ u)'| \leq |\partial \phi| (u) |u'|$ almost everywhere in $(0,T)$. Along with the above provisions, we specify the notion of gradient-driven evolution as follows. \begin{definition}[Curve of maximal slope]\label{def:curve} The trajectory $u\in AC^p([0,T];U)$ is said to be a \emph{curve of maximal slope} if $\phi\circ u \in W^{1,1}(0,T)$ and \begin{equation} \label{eq:curve} \phi(u(t)) + \frac1p\int_0^t |u'|^p(r) \, {\rm d} r + \frac1q \int_0^t |\partial \phi|^q(u(r))\, {\rm d} r = \phi(u(0)) \quad \forall t \in [0,T]. \end{equation} \end{definition} \section{Main results}\label{sec:main} \setcounter{equation}{0} To each time partition $0=t_0<t_1<\dots< t_{N}=T$ we associate the time steps $\tau_i = t_i - t_{i-1}$ and the diameter $\tau = \max \tau_i$. Given the vector $\{u_i\}_{i=0}^N \in U^{N+1}$ we define its backward piecewise constant interpolant $\overline u:[0,T] \to U$ on the time partition to be \begin{align*} &\overline u(0)= u_0\quad \text{and} \quad \overline u(t)=u_i \quad \forall t \in (t_{i-1},t_i], \ i =1, \dots, N. \end{align*} Moreover, we define the piecewise constant function $|\widehat u'|:[0,T]\setminus\{t_0,\dots, t_{N}\} \to [0,\infty)$ as $$|\widehat u'|(t) := \frac{d(u_{i-1},u_i)}{\tau_i} \quad \forall t \in (t_{i-1},t_i), \ i =1, \dots, N.$$ The notation $|\widehat u'|(t)$ alludes to the fact that in the Hilbert-space case the latter is nothing but the norm of the time derivative of the piecewise affine interpolant of the values $\{u_i\}_{i=0}^N$ on the time partition. Our new minimizing-movements scheme is specified by means of the {\it incremental functional} $G: (0,\infty) \times D(\phi) \times D(|\partial \phi|) \to \mathbb{R} $ given by \begin{equation} \label{eq:G} \boxed{G(\tau,v,u):= \phi(u) + \frac{\tau^{1-p}}{p}d^p(v,u) + \frac{\tau}{q}|\partial \phi|^q(u)- \phi(v).} \end{equation} In the setting of the assumptions specified later in Subsection \ref{sec:assumptions}, for all $(\tau,v) \in (0,\infty) \times D(\phi) $ the functional $u\in D(|\partial \phi|)\mapsto G(\tau,v,u)$ admits a minimizer, which is possibly not unique. We indicate the set of such minimizers by $M_G(\tau,v)$ and the minimum value of $ G(\tau,v,\cdot)$ by $\widehat G(\tau,v)$, namely, $$M_G(\tau,v):={\rm arg\,min}_{u \in D(|\partial \phi|)} G(\tau,v,u), \quad \widehat G(\tau,v) := \min_{u \in D(|\partial \phi|)}G(\tau,v,u).$$ With this notation, the new minimizing-movements scheme reads \begin{equation} \boxed{u_0=u^0 \ \ \text{and} \ \ u_i \in M_G(\tau_i,u_{i-1})\quad \text{for} \ i =1,\dots,N,}\label{eq:min} \end{equation} for some given initial datum $u^0\in D(\phi)$.
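Note, in passing, that whenever $v \in D(|\partial \phi|)$ one may compare with the competitor $u=v$ in the minimization above and obtain
\begin{equation*}
\widehat G(\tau,v) \leq G(\tau,v,v) = \frac{\tau}{q}\,|\partial \phi|^q(v),
\end{equation*}
so that the minimal incremental value is controlled by the slope at the previous point. We record this elementary observation here for orientation purposes only.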
For later purposes, we introduce also the incremental functional $E: (0,\infty) \times D(\phi) \times D(\phi)$ associated with the classical backward Euler method \begin{equation} \label{eq:F} E(\tau,v,u):= \phi(u) + \frac{\tau^{1-p}}{p}d^p(v,u) - \phi(v), \end{equation} as well as the corresponding notation $$M_E(\tau,v):={\rm arg\,min}_{u \in D(\phi)} E(\tau,v,u), \quad \widehat E(\tau,v) := \min_{u \in D(\phi)} E(\tau,v,u).$$ In particular, the Euler method corresponds to the incremental problem \begin{equation} u_0=u^0 \ \ \text{and} \ \ u_i \in M_E(\tau_i,u_{i-1})\quad \text{for} \ i =1,\dots,N.\label{eq:mine} \end{equation} In the context of Wasserstein spaces, see Section \ref{sec:Wass}, the latter is often referred to as Jordan-Kinderlehrer-Otto scheme \cite{JKO}. \subsection{Assumptions}\label{sec:assumptions} In this subsection, we fix our assumptions and collect some comment. We start by asking that \begin{equation} \label{eq:X} (U,d) \ \ \text{is a complete metric space}. \end{equation} In addition to the metric topology, $(U,d)$ is assumed to be endowed with \begin{equation} \label{eq:sigma} \text{a Hausdorff topology} \ \sigma, \ \text{compatible with the metric $d$}. \end{equation} The latter compatibility is intended in the following sense \begin{equation} \label{eq:compat} u_n \sto u, \ v_n\sto v \ \ \Rightarrow \ \ d(u,v) \leq \liminf_{n \to \infty} d(u_n,v_n) \end{equation} and, in essence, means that $\sigma$ is weaker than the topology induced by $d$. An early example for $\sigma$ complying with \eqref{eq:sigma} is the topology induced by $d$. In applications it may however be useful to keep the two topologies separate. In particular, if $U$ is a Banach space $\sigma$ is often chosen to be some weak topology whereas $d$ usually corresponds to the strong one. The initial datum is assumed to satisfy \begin{equation} \label{eq:initial} u^0 \in D(\phi). \end{equation} We assume the proper potential $\phi:U \to (-\infty,\infty]$ to be such that \begin{equation} \label{eq:phicomp} \text{the sublevels of $\phi$ are sequentially $\sigma$-compact}. \end{equation} The latter in particular entails that $\phi$ is sequentially $\sigma$-lower semicontinuous and bounded from below. In the following, we hence assume with no loss of generality that $\phi$ is nonnegative. Note however that assumption \eqref{eq:phicomp} could be weakened by asking compactness on $d$-bounded sublevels of $\phi$ only. In addition, we ask that \begin{align} &|\partial \phi| \ \text{is a strong upper gradient for $\phi$ and it is sequentially}\nonumber\\ & \quad \text{$\sigma$-lower semicontinuous on $d$-bounded sublevels of $\phi$.} \label{eq:phisl} \end{align} The latter assumption could be weakened by developing the theory for some relaxation of $|\partial \phi|$. Still, \cite[Prop.~2.7]{rsss} ensures that \eqref{eq:phisl} hold, as soon as $\phi$ is $(\lambda,p)$-geodesically convex and $\sigma$ is the metric topology induced by $d$. In the setting of assumptions \eqref{eq:X}-\eqref{eq:phisl}, the solvability of the incremental minimization problem \eqref{eq:min} follows from the Direct Method. Indeed, for all $\tau>0$ and $v\in D(\phi)$ the incremental functional $u \in D(|\partial \phi|) \mapsto G(\tau,v,u)$ is coercive and lower semicontinuous by \eqref{eq:phicomp}-\eqref{eq:phisl}. We will later check in \eqref{eq:slopeestimate2} that indeed \begin{equation} u\in M_G(\tau,v) \ \ \Rightarrow \ \ |\partial (\phi +\tau |\partial \phi|^q/q)|(u)<\infty. 
\label{eq:exreg} \end{equation} In particular, minimizers of $ G(\tau,v,\cdot)$ show additional regularity. This extra regularity may not be preserved in the time-continuous limit. Under the sole assumption \eqref{eq:phicomp}, the incremental Euler minimization problem \eqref{eq:mine} is solvable as well. In particular, for all $\tau>0$ and $v\in D(\phi)$ the functional $u \in D(\phi) \mapsto E(\tau,v,u)$ admits a minimizer. Along the analysis, we will make reference to specific generalized geodesically convex cases. In particular, we may ask for \begin{align} &\exists \tau_*>0, \ \lambda \in \mathbb{R} \ \ \text{such that} \ \ \forall \tau \in (0,\tau_*), \ \forall v \in D(\phi)\nonumber\\ &u \mapsto E(\tau,v,u) \ \ \text{is $(\kappa,p)$-generalized-geodesically convex}\nonumber\\ & \text{with $\kappa=(p-1)\tau^{1-p}+\lambda$}. \label{eq:F2} \end{align} Note that \eqref{eq:F2} holds if $\phi$ is $(\lambda,p)$-geodesically convex and the $p$-power of the distance is $(p-1,p)$-geodesically convex. In case $p=2$, the $(1,2)$-geodesic convexity of $u\mapsto d^2(u,v)/2$ qualifies nonpositively curved spaces in the Alexandrov sense \cite{Alexandrov,Jost}. In particular, Euclidean and Hilbert spaces, as well as Riemannian manifolds of nonpositive sectional curvature \cite[Rem. 4.0.2]{Ambrosio08} fall into this class. Condition \eqref{eq:F2} is more demanding for $p\not =2$. In fact, by letting $\tau \to 0$ it implies that the $p$-power of the distance is $(p-1,p)$-geodesically convex. This is actually not the case in linear spaces, as one can check already in $\mathbb{R}$, but see also \cite[Lem. 3.1]{AguehE}. Indeed, letting $\theta=1/2$ and choosing $v_0=-1$, $v_1=1$ for $p>2$, or $v_0=1$, $v_1=1+\varepsilon$ with $\varepsilon>0$ small enough for $1<p<2$, one gets $$ \frac1p |\theta v_1 + (1-\theta) v_0|^p >\frac{\theta}{p}|v_1|^p + \frac{1-\theta}{p}|v_0|^p - \theta(1-\theta)\frac{p-1}{p} |v_1-v_0|^p$$ contradicting $(p-1,p)$-geodesic convexity. See \cite[Ex.~1, p.~55]{Jost} for some similar argument, proving the failure of $(1,2)$-geodesic convexity of $(x_1,x_2) \in \mathbb{R}^2\mapsto (x_1^p + x_2^p)^{1/p}$. In fact, condition \eqref{eq:F2} for $p\not =2$ is actually meaningful only in spaces of qualified negative curvature. This is not the case for the Wasserstein space $(\mathcal P_2(\mathbb{R}^d) , W_2)$, which is actually of positive curvature, see Section \ref{sec:Wass}. As we deal in Sections \ref{sec:appl0}-\ref{sec:Wass} with applications in linear and Wasserstein spaces, condition~\eqref{eq:F2} is used there only for $p=2$. In the case of not geodesically convex potentials, we are still in the position of providing a convergence result under the following generalized one-sided Taylor-expansion condition on $|\partial \phi|$: \begin{align} &\exists \tau_*>0, \, \forall C>0, \, \exists g:(0,\tau_*) \to [0,\infty] \ \text{with} \ \frac{1}{\tau}\int_0^\tau g(r)\, {\rm d} r \searrow 0 \ \text{as} \ \tau \to 0 \ \text{such that}\nonumber\\ &\forall \tau \in (0,\tau_*), \ \forall v \in D(|\partial \phi|) \ \text{with} \ \max\{\phi(v),\tau |\partial \phi|^q(v)\}\leq C, \ \forall u \in M_G(\tau,v) \nonumber\\[2mm] & \text{we have that} \ \ |\partial \phi|^q(u) - |\partial (\phi +\tau|\partial \phi|^q/q)|^q(u) \leq g(\tau). \label{eq:taylor} \end{align} Notice that the last inequality makes sense, for we have the additional regularity \eqref{eq:exreg}. We discuss some applications fulfilling condition \eqref{eq:taylor} in Sections \ref{sec:appl0} and \ref{sec:Wass}.
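For the sake of clarity, let us also record the elementary identity behind the $p=2$ case above: in a Hilbert space, for all $v$, $v_0$, $v_1$, and $\theta \in [0,1]$ one has
\begin{equation*}
\frac12\big|(1-\theta)v_0+\theta v_1 - v\big|^2 = (1-\theta)\,\frac12|v_0-v|^2 + \theta\,\frac12|v_1-v|^2 - \frac12\,\theta(1-\theta)\,|v_1-v_0|^2,
\end{equation*}
so that $u \mapsto d^2(u,v)/2$ is indeed $(1,2)$-convex along segments, actually with equality. This classical computation is included for illustration only.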
A caveat on notation: In the following we use the same symbol $C$ in order to indicate a generic positive constant, possibly depending on data and changing from line to line. Where needed, dependencies are indicated by subscripts. \subsection{Convergence results}\label{sec:conv} We are now ready to state our main results. \begin{theorem}[Conditional convergence]\label{thm:main1} Under \eqref{eq:X}-\eqref{eq:phisl} let $\{0=t_0^n<t_1^n<\dots<t^{n}_{N^n}=T\}$ be a sequence of partitions with $\tau^n:=\max (t^n_i-t^n_{i-1}) \to 0$ as $n\to \infty$. Moreover, let $\{u_i^n\}_{i=0}^{N^n}$ be such that $\{u_0^n\}$ is $d$-bounded, $u_0^n \stackrel{\sigma}{\to} u^0$, $\phi(u_0^n) \to \phi(u^0)$, and \begin{equation} \sum_{i=1}^{N^n} (G(\tau_i^n,u_{i-1}^n,u_i^n))^+ \to 0 \ \ \text{as} \ \ n \to \infty.\label{eq:cond} \end{equation} Then, up to a not relabeled subsequence, we have that $\overline u^n(t) \sto u(t)$, where $u$ is a curve of maximal slope with $u(0)=u^0$. \end{theorem} Note that the statement of Theorem \ref{thm:main1} does not require that $u^n_i\in M_G(\tau^n_i,u^n_{i-1})$, namely that $\{u^n_i\}_{i=0}^{N^n}$ is a solution of the new minimizing-movements scheme \eqref{eq:min}. In particular, Theorem \ref{thm:main1} can serve as an a-posteriori tool to check the convergence of time-discrete approximations, regardless of the method used to generate them. In particular, the above conditional convergence result directly applies to {\it approximate} minimizers, namely solutions of $$u_0^n=u^0 \quad \text{and} \quad G(\tau_i^n,u^n_{i-1},u^n_i) \leq \inf G(\tau_i^n,u^n_{i-1},\cdot) +g^n_i\quad \text{for} \ i=1,\dots,N^n$$ (compare with \eqref{eq:min}) as long as $\sum_{i=1}^{N^n}g^n_i\to 0$ as $n\to \infty$. See \cite{Fleissner} for a result on approximate minimizers of $E(\tau_i^n,u^n_{i-1},\cdot)$ instead. The conditional convergence result of Theorem \ref{thm:main1} thus relies on the possibility of solving the inequality $G(\tau_i^n , u_{i-1}^n,u_i^n)\leq 0$ up to a small, controllable error, and establishing some a priori bounds on the discrete solution. The validity of condition \eqref{eq:cond} is to be checked on the specific problem at hand. In the specific case of $(\lambda,p)$-generalized-geodesically convex functionals $\phi$ on a properly nonpositively curved space, condition \eqref{eq:cond} actually holds for solutions of the new minimizing-movements scheme \eqref{eq:min}. This is the content of our second main result. \begin{theorem}[Convergence in the geodesically convex case]\label{thm:main2} Under assumptions \eqref{eq:X}-\eqref{eq:phisl} and \eqref{eq:F2}, let $\{0=t_0^n<t_1^n<\dots<t^{n}_{N^n}=T\}$ be a sequence of partitions with $\tau^n:=\max (t^n_i-t^n_{i-1})<\tau_*$ and $\tau^n\to 0$ as $n\to \infty$. Moreover, assume that either $\lambda \geq 0$ or $p> 2$ in \eqref{eq:F2}. Then, solutions $\{u_i^n\}_{i=0}^{N^n}$ of \eqref{eq:min} fulfill condition \eqref{eq:cond}. Hence, $\overline u^n$ converges pointwise to a curve of maximal slope up to subsequences. \end{theorem} We now turn to a convergence result in the not geodesically convex case. Here, some stronger topological assumption, an approximation of the initial datum, and the generalized one-sided Taylor-expansion assumption \eqref{eq:taylor} for $|\partial \phi|$ are necessary.
\begin{theorem}[Convergence without geodesic convexity]\label{thm:main3} Under assumptions \eqref{eq:X}-\eqref{eq:phisl}, let $\sigma$ be the metric topology induced by $d$, $U$ be separable, and $\phi$ fulfill \eqref{eq:taylor}. Moreover, let $\{0=t_0^n<t_1^n<\dots<t^{n}_{N^n}=T\}$ be a sequence of partitions with $\tau^n:=\max (t^n_i-t^n_{i-1})<\tau_*$, $(\tau_i^n - \tau_{i-1}^n)^+/\tau^n_{i-1} \leq C\tau^n$ for $i=2,\dots,N^n$, and $\tau^n\to 0$ as $n\to \infty$. Choose $u^{0n} \in M_E(\tau^n,u^0)$. Then, solutions $\{u_i^n\}_{i=0}^{N^n}$ of \eqref{eq:min} with $u_0^n = u^{0n}$ fulfill condition \eqref{eq:cond}. Hence, $\overline u^n$ converges pointwise to a curve of maximal slope up to subsequences. \end{theorem} Note that the one-sided nondegeneracy condition $(\tau_i^n - \tau_{i-1}^n)^+/\tau^n_{i-1} \leq C\tau^n$ in the statement is fulfilled if $i \mapsto \tau^n_i$ is nonincreasing. In particular, it holds for uniform partitions. In case $u^0\in D(|\partial \phi|)$ no approximation of the initial datum as in Theorem \ref{thm:main3} is actually needed. Theorems \ref{thm:main1}, \ref{thm:main2}, and \ref{thm:main3} are proved in Sections \ref{sec:p1}, \ref{sec:p2}, and \ref{sec:p3}, respectively. \subsection{An illustration on linear equations}\label{sec:illu} The focus of our theory is on nonlinear problems. Still, as a way of illustrating the results, we present here two linear ODE and PDE examples. Nonlinear applications are then discussed in Sections \ref{sec:appl0}-\ref{sec:Wass} below. Let us start from the finite-dimensional example of the gradient flow in $(\mathbb{R}^d,|\cdot|)$ of $\phi(u)=\lambda |u|^2/2$ with $\lambda \in \mathbb{R}$ and take $p=2$. In this case, the incremental functional $G$ reads $$G(\tau,v,u) = \frac{\lambda}{2}|u|^2 + \frac{1}{2\tau}|u-v|^2 + \frac{\tau \lambda^2}{2}|u|^2 - \frac{\lambda}{2}|v|^2.$$ For all $v\in \mathbb{R}^d$ given, the latter can be readily minimized, giving the only minimum point $u = v/(1+\lambda \tau + \lambda^2 \tau^2)$. Correspondingly, the minimal value $\widehat G(\tau,v)$ can be checked to be \begin{equation}\widehat G(\tau,v) ={} -\frac{|v|^2\lambda^3\tau^2}{2(1+\lambda \tau + \lambda^2 \tau^2)}.\label{eq:fin} \end{equation} If $\lambda \geq 0$ the minimal value is nonpositive and condition \eqref{eq:cond} trivially holds. If $\lambda <0$, the minimal value scales as $\tau^2$ and condition \eqref{eq:cond} still holds. Indeed, by letting \begin{equation} \label{r} r^n :=\sum_{i=1}^{N^n} \big( G(\tau_i^n,u_{i-1}^n,u_i^n)\big)^+ \end{equation} we have that \begin{equation} r^n = \sum_{i=1}^{N^n} \frac{|u_{i-1}^n|^2(\lambda^-)^3(\tau_i^n)^2}{2(1+\lambda \tau_i^n + \lambda^2 (\tau_i^n)^2)}\leq C \max_i|u_i^n|^2 \tau^n\label{lin} \end{equation} where we tacitly assumed that $\lambda^-\tau^n\leq \lambda^-\tau_*<1$ and we used the standard notation for the negative part $\lambda^- = \max\{0,-\lambda\}$. Condition \eqref{eq:cond} hence follows as soon as $ \max_i|u_i^n|$ stays bounded with respect to $n$, which happens to be the case as the evolution takes place in the finite time interval $[0,T]$. In fact, the order of convergence in \eqref{lin} is sharp, as illustrated in Figure \ref{figure1} for the choice $d=1$, $\lambda=-1$, $ u^0=1$, $T=1$. Here, $r^n$ is computed for the uniform partition $\tau^n_i=\tau^n=2^{-n}$, $n=1, \dots, 12$ or, equivalently, for $N^n=2^n$.
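For the reader's convenience, we also record a minimal script reproducing this computation (the implementation is just one possible choice, and variable names play no role in the analysis):
\begin{verbatim}
# r^n for the uniform partition tau = 2^(-n), with d = 1,
# lambda = -1, u^0 = 1, T = 1, cf. Figure 1.
lam, u0, T = -1.0, 1.0, 1.0
for n in range(1, 13):
    tau = 2.0 ** (-n)                 # uniform step, N^n = 2^n
    u, r = u0, 0.0
    for _ in range(2 ** n):
        D = 1.0 + lam * tau + (lam * tau) ** 2
        G_min = -u ** 2 * lam ** 3 * tau ** 2 / (2.0 * D)
        r += max(G_min, 0.0)          # positive part of the minimal value
        u = u / D                     # unique minimizer of G(tau, u, .)
    print(n, tau, r)
\end{verbatim}
In accordance with \eqref{lin}, the values produced in this way scale linearly in $\tau^n$.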
\begin{figure} \caption{Values $r^n$ from \eqref{r}.} \label{figure1} \end{figure} On a uniform partition of time step $\tau>0$, the solution of the new minimizing movement scheme $\{u_i\}$ and the solution $\{u_i^e\}$ of the Euler scheme read \begin{equation} u_i = \frac{u_0}{(1+\lambda \tau+ \lambda^2 \tau^2)^i}\quad \text{and} \quad u_i^e = \frac{u_0}{(1+\lambda \tau)^i},\label{eq:veresol} \end{equation} respectively. It is hence a standard matter to compute \begin{equation} |u_i - u^e_i| = |u_0|\left| \frac{(1+\lambda \tau)^i - (1+\lambda \tau+ \lambda^2 \tau^2)^i}{ (1+\lambda \tau+ \lambda^2 \tau^2)^i (1+\lambda \tau)^i}\right | \label{eq:standard} \end{equation} which scales like $\tau^2$ as $\tau \to 0$. As the Euler scheme is of first order, the same holds true for the new minimizing-movements scheme, see Figure \ref{figure2} for $d=1$, $\lambda=-1$, $u_0=1$. Indeed, Figure \ref{figure2} shows that this order is sharp. Note in fact that the new minimizing-movements scheme is proved in \cite[Prop. 4.3]{generic_euler} to be of first order for all nonnegative potentials $\phi$ in $C^2$ in finite dimensions. \begin{figure} \caption{$L^\infty$ error with respect to $\tau^n$ for the new minimizing-movements scheme (stars) and the Euler scheme (dots) in log-log scale. The solid line represents order $1$.} \label{figure2} \end{figure} Assume now that we are interested in computing the minimum of $\phi$ by following the discrete scheme for a fixed number $m$ of iterations, a classical strategy in optimization \cite{Combettes,Rockafellar}. In the specific case of our ODE example we compute from \eqref{eq:veresol} \begin{equation} \phi(u_m) = \frac{\lambda |u_0|^2}{2(1+\lambda\tau +\lambda^2 \tau^2)^{2m}} \quad \text{and} \quad \phi(u_m^e) = \frac{\lambda |u_0|^2}{2(1+\lambda\tau)^{2m}}.\label{prox} \end{equation} Due to the presence of the extra term $\lambda^2 \tau^2$ in the denominator, the new scheme is advantageous with respect to Euler as regards the reduction of the potential after a fixed number of iterations. Note that this effect is enhanced by choosing {\it large} time steps. Let us move to an infinite-dimensional example by considering the standard heat equation on the space-time cylinder $\Omega \times (0,T)$ where $\Omega \subset \mathbb{R}^d$ is a smooth, open, and bounded set and homogeneous Dirichlet conditions are imposed (other choices being of course possible). We classically reformulate this as the gradient flow in $(L^2(\Omega), \| \cdot \|)$ of the {\it Dirichlet} energy $$\phi(u) = \left\{ \begin{array}{ll}\displaystyle\frac12 \int_\Omega |\nabla u(x)|^2 \, {\rm d} x \quad& \text{for} \ \ u \in H^1_0(\Omega)\\ \infty&\text{elsewhere in} \ L^2(\Omega), \end{array} \right. $$ where $\| \cdot \|$ is the norm corresponding to the natural $L^2$ scalar product $(\cdot, \cdot)$. In this case, we have that $\partial \phi (u) = - \Delta u$ with $D(\partial \phi) = H^2(\Omega) \cap H^1_0(\Omega)$. The symbol $\partial$ indicates the {\it subdifferential} in the sense of convex analysis \cite{Brezis73}. In particular, $\partial \phi$ is single-valued and $|\partial \phi|(u) = \| \Delta u \|$ for all $u\in D(\partial \phi)$. The incremental functional $G: (0,\infty) \times H^1_0(\Omega) \times (H^2(\Omega) \cap H^1_0(\Omega)) \to \mathbb{R}$ hence reads $$G(\tau,v,u) = \int_\Omega \left( \frac12 |\nabla u|^2 + \frac{1}{2\tau}|u-v|^2+ \frac{\tau}{2} |\Delta u|^2 - \frac12 |\nabla v|^2\right) {\rm d} x .$$ For all $v\in H^1_0(\Omega)$ given, the latter can be readily minimized in $H^2(\Omega) \cap H^1_0(\Omega)$.
Owing to linearity, one can easily identify the subgradient of $ u \mapsto G(\tau,v,u)$ as $$(\partial G(\tau,v,\cdot))(u) = -\Delta u + \frac{u-v}{\tau} +\tau \Delta^2 u $$ and $D(\partial G(\tau,v,\cdot)) = \{ u \in H^4(\Omega)\cap H^1_0(\Omega) \, : \, \Delta u =0 \ \text{on} \ \partial \Omega\}$. Hence, the minimizer $u$ of $G(\tau,v,\cdot)$ solves \begin{align} u -\tau\Delta u +\tau^2 \Delta^2 u =v \ \ \text{a.e. in} \ \ \Omega,\quad u=\Delta u = 0 \ \ \text{on} \ \ \partial \Omega. \label{eq:heat} \end{align} The latter is reminiscent of a singular perturbation of \begin{align} u^e -\tau\Delta u^e =v \ \ \text{a.e. in} \ \ \Omega,\quad u^e = 0 \ \ \text{on} \ \ \partial \Omega,\label{eq:heat2} \end{align} corresponding instead to the incremental step of the Euler scheme. Let now $\{w^k\}$ be a complete orthonormal basis of $L^2$ of eigenfunctions of $-\Delta$ with homogeneous Dirichlet boundary conditions, namely, $w^k\in H^2(\Omega) \cap H^1_0(\Omega)$ with $w^k \not = 0$ and $-\Delta w^k = \lambda^k w^k$ for some $\lambda^k>0$. By inserting in \eqref{eq:heat}-\eqref{eq:heat2} $u = \sum_ku^kw^k$, $u^e = \sum_k(u^e)^kw^k$, and $v = \sum_kv^kw^k$ for $u^k := (u,w^k)$, $(u^e)^k := (u^e,w^k)$, and $v^k:=(v,w^k)$, respectively, we get that $$ u^k = \frac{v^k}{1+ \tau \lambda^k + (\tau\lambda^k)^2} \quad \text{and} \quad (u^e)^k = \frac{v^k}{1+ \tau \lambda^k}.$$ In particular, by arguing as in \eqref{eq:fin} one readily checks that $$\widehat G(\tau,v) = -\sum_k \frac{|v^k|^2(\lambda^k)^3\tau^2}{2(1+ \tau \lambda^k + (\tau\lambda^k)^2)} \leq 0$$ and condition \eqref{eq:cond} holds. By iterating on the time steps, the solution $\{u_i\}$ of the new minimizing movement scheme and that $\{u^e_i\}$ of the Euler scheme read $u_i = \sum_k u^k_i w^k$ and $u^e_i = \sum_k (u^e)^k_i w^k$ where $$ u^k_i = \frac{(u^0)^k}{(1+ \tau \lambda^k + (\tau\lambda^k)^2)^i} \quad \text{and} \quad (u^e_i)^k = \frac{(u^0)^k}{(1+ \tau \lambda^k )^i}$$ and $(u^0)^k:=(u^0,w^k)$. Proceeding as in \eqref{prox} one computes \begin{align*} & \phi(u_m) = \frac12 \sum_k \lambda^k (u^k_m)^2 = \frac12 \sum_k \frac{\lambda^k ((u^0)^k)^2}{(1+\tau \lambda^k + (\tau \lambda^k)^2)^{2m}}, \\ &\phi(u^e_m) = \frac12 \sum_k \lambda^k ((u^e_m)^k)^2 = \frac12 \sum_k \frac{\lambda^k ((u^0)^k)^2}{(1+\tau \lambda^k)^{2m}} \end{align*} and the same observations as in the ODE case on the effectiveness of the reduction of the potential for a fixed number of iterations apply. \subsection{Literature}\label{sec:literature} Before moving on, let us record here some other alternatives to the Euler scheme, specifically focusing on the case $p=2$. Legendre and Turinici advance in \cite{Turinici} the {\it midpoint} scheme \begin{align*} u_i \in {\rm arg\,min}_{u} \Bigg( \inf \Bigg( 2\phi(w) + \frac{1}{2\tau} d(u,u_{i-1})\ : \ w \in \Gamma(u,u_{i-1}) \Bigg) \Bigg) \end{align*} where $$\Gamma(u,u_{i-1}) = \{\gamma (1/2) \ : \ \gamma:[0,1]\to U \ \text{geodesic with} \ \gamma(0)=u_{i-1} \ \text{and} \ \gamma(1)=u\}.$$ By assuming \eqref{eq:phicomp}-\eqref{eq:phisl}, as well as some additional closure property relating to the specific structure of the set $\Gamma$, they prove that this midpoint scheme is solvable and convergent. A variant of this scheme is also proposed in \cite{Turinici} in the specific case of nonbranching geodesic spaces, namely, spaces where any two points are connected by a unique geodesic. In these spaces, for all $w$ and $u_{i-1}$ there exists a unique $u$ such that $w\in \Gamma(u,u_{i-1})$.
An {\it extrapolated} version of the Euler scheme is hence defined by the relations $$ u^e_{1/2} \in \Gamma(u_i,u_{i-1}) \ \ \text{where} \ \ u^e_{1/2} \in M_E(\tau/2,u_{i-1}).$$ Albeit not purely variational, this scheme is based on the solution of the Euler scheme with halved time step. Matthes and Plazotta \cite{Matthes} address a variational version of the Backward Differentiation Formula (BDF2) method, namely, $$u_i \in {\rm arg\,min}_{u\in D(\phi)}\left(\frac{1}{\tau}d^2(u,u_{i-1})-\frac{1}{4\tau} d^2(u,u_{i-2})+\phi(u) \right) \quad \text{for} \ i=2,\dots,N$$ where now both $u_0$ and $u_1$ are given. Under some lower semicontinuity and convexity conditions, it is proved in \cite{Matthes} that the scheme admits a solution, whose piecewise-in-time interpolant converges to a curve of maximal slope with rate $\tau^{1/2}$. It also shown that under natural regularity assumptions on the limiting time-continuous curve of maximal slope, the convergence rate can be $\tau$ at best. Perturbations of the Euler method of the form $$u_i \in {\rm arg\,min}_{u\in D(\phi)}\left(\frac{a_i^\tau}{2\tau}d^2(u,u_{i-1}) +\phi(u) \right) \quad \text{for} \ i=2,\dots,N,$$ are considered by Tribuzio in \cite{Tribuzio}. Here, one is given the sequence of positive weights defined as $a_i^\tau=a^\tau(i\tau)$ for some functions $a^\tau: (0,\infty) \to (0,\infty) $. This generalization with respect to the classical Euler scheme yields a modification of the metric as time evolves. By asking $1/a^\tau$ to be locally equiintegrable with respect to $\tau$, one can prove that minimizers converge to curves of maximal slope according to a specific time-dependent limiting metric. Under some more general assumptions on $a^\tau$, discontinuous evolutions can also be obtained. These can be proved to be capable of exploring the different wells of a multiwell potential $\phi$. Let us also mention the approach \`a la Crandall-Liggett by Cl\'ement and Desch~\cite{Clement1,Clement2}, see also~\cite{Clement3}, who recursively define $u^n_0=u^0$ and $u^n_i = J(u^n_{i-1})$ for $i=1,\dots,N = T/\tau$, where $J(u^n_{i-1})$ is the set of points $u\in D(\phi)$ fulfilling the inequality $$\frac{1}{2\tau} d^2(u,w) - \frac{1}{2\tau}d^2(u_{i-1}^n,w) +\frac{1}{2\tau}d^2(u,u_{i-1}^n) + \phi(u) \leq \phi(w) \quad \forall w \in D(\phi).$$ Such points exist for $\phi$ geodesically convex and the corresponding interpolants $\overline u^n$ converge to {\it evolutionary variational inequality} solutions~\cite{Muratori}, a specific class of curves of maximal slope. \section{Conditional convergence} \label{sec:p1} This section is devoted to the proof of Theorem \ref{thm:main1}. The ingredients of the argument are quite classical. Still, as already mentioned, the current minimizing-movement setting of \eqref{eq:min} expedites the proof, for there is no need to resort to the De~Giorgi variational interpolant \cite[Def. 3.2.1]{Ambrosio08}. Let $\{u_0^n\}$ be $d$-bounded with $u_0^n \stackrel{\sigma}{\to} u^0$ and $\phi(u_0^n) \to \phi(u^0)$ fulfill \eqref{eq:cond}. 
We have that \begin{align} &\phi(\overline u^n(t_m^n)) + \frac1p\int_0^{t^n_m} |(\widehat u^n)'| ^p(r)\, {\rm d} r +\frac1q \int_0^{t^n_m} |\partial \phi|^q(\overline u^n(r))\, {\rm d} r \nonumber\\ &\quad = \phi(u_m^n) + \frac1p\sum_{i=1}^m (\tau_i^n)^{1-p} d^p(u_{i-1}^n,u_{i}^n) + \frac1q\sum_{i=1}^m \tau_i^n |\partial \phi|^q(u_i^n)\nonumber\\ & \quad= \sum_{i=1}^m G(\tau_i^n,u_{i-1}^n,u_{i}^n) + \phi(u_0^n).\label{eq:pass} \end{align} Condition \eqref{eq:cond} ensures that the above right-hand side is bounded independently of $m=1,\dots,N^n$ and $n$. A first consequence of estimate \eqref{eq:pass} is that $\{u_m^n\}$ is $d$-bounded independently of $m=1,\dots,N^n$ and $n$. Indeed, by the triangle and the discrete H\"older inequalities one has that \begin{align*} & d^p(u^n_0,u^n_m) \leq \left(\sum_{i=1}^m d(u^n_{i-1},u^n_{i})\right)^p \leq (t^n_m)^{p-1} \sum_{i=1}^m (\tau_i^n)^{1-p}d^p(u^n_{i-1},u^n_{i})\\ &\quad\leq T^{p-1}p \left(\sum_{i=1}^m G(\tau_i^n,u_{i-1}^n,u_{i}^n) + \phi(u_0^n)\right). \end{align*} The right-hand side is bounded independently of $m=1,\dots,N^n$ and $n$. Since $\{u_0^n\}$ are $d$-bounded, the $d$-boundedness of $\{u_m^n\}$ follows. As the sublevels of $\phi$ are sequentially $\sigma$-compact, one can apply the extended Ascoli-Arzel\`a Theorem from \cite[Prop. 3.3.1]{Ambrosio08} and find a not relabeled subsequence $\{\overline u^n\}$ such that $\overline u^n\stackrel{\sigma}{\to} u$ pointwise, where $u:[0,T] \to U$, and $|(\widehat u^n)'| \to m$ weakly in $L^p(0,T)$. In particular, we have that $u(0)=\lim_{n\to \infty} u^n(0) = \lim_{n\to \infty}u^n_0=u^0$. For all $0 < s \leq t < T$, define $s^n=\max\{t^n_i \, : \, t_i^n<s\}$ and $t^n=\min\{t^n_i\, : \, t < t^n_i\}$. Then, $$d(u(s),u(t)) \stackrel{\eqref{eq:compat}}{\leq} \liminf_{n \to \infty} d(\overline u^n(s), \overline u^n(t)) \leq \liminf_{n \to \infty} \int_{s^n}^{t^n} |(\widehat u^n)'|(r)\, {\rm d} r = \int_s^t m(r) \,{\rm d} r.$$ This entails that $u \in AC^p([0,T];U)$ since we just checked that the function $m \in L^p(0,T)$ fulfills \eqref{metric_dev}. As $|u'|$ is the minimal function in $L^p(0,T)$ fulfilling \eqref{metric_dev}, we also have that $|u'|\leq m$ almost everywhere and $$ \int_0^t |u'|^p(r)\,{\rm d} r \leq \int_0^t m^p(r) \,{\rm d} r \leq \liminf_{n \to \infty} \int_0^t |(\widehat u^n)'|^p(r)\,{\rm d} r \quad \forall t>0. $$ For all fixed $t\in (0,T]$, choose $t^n_m = t^n$ in \eqref{eq:pass} in order to get that \begin{align*} &\phi(\overline u^n(t)) + \frac1p\int_0^{\overline t^n(t)} |(\widehat u^n)'| ^p(r)\, {\rm d} r +\frac1q \int_0^{\overline t^n(t)} |\partial \phi|^q(\overline u^n(r))\, {\rm d} r \\ &\quad \stackrel{\eqref{eq:pass}}{\leq} \sum_{i=1}^{N^n} (G(\tau_i^n,u_{i-1}^n,u_{i}^n))^+ + \phi(u_0^n).
\end{align*} Owing to the sequential $\sigma$-lower semicontinuity of $\phi$ and $|\partial \phi|$, see \eqref{eq:phicomp}-\eqref{eq:phisl}, we can pass to the $\liminf$ in the latter and, using again condition \eqref{eq:cond} and the fact that $\phi(u^n_0) \to \phi(u^0)$, we obtain \begin{equation}\phi(u(t)) + \frac1p \int_0^t |u'|^p(r)\, {\rm d} r + \frac1q \int_0^t |\partial \phi|^q(u(r))\, {\rm d} r \leq \phi(u(0)) \quad \forall t \in [0,T].\label{eq:actually} \end{equation} As $|\partial \phi|$ is a strong upper gradient for $\phi$ by \eqref{eq:phisl}, we have that \begin{align*} &\phi(u(0)) \leq \phi(u(t)) + \int_0^t |\partial \phi|(u(r))\,|u'|(r)\, {\rm d} r \\ &\quad \leq \phi(u(t)) + \frac1p \int_0^t |u'|^p(r)\, {\rm d} r + \frac1q \int_0^t |\partial \phi|^q(u(r))\, {\rm d} r \end{align*} so that \eqref{eq:actually} is actually an equality and $u$ is a curve of maximal slope in the sense of Definition \ref{def:curve}. \section{Convergence in the geodesically convex case} \label{sec:p2} \setcounter{equation}{0} We now turn to the proof of Theorem \ref{thm:main2}. Recall that for all $\tau^n_i>0$ and $v\in D(\phi)$ the functional $u \in D(\phi) \mapsto E(\tau_i^n,v,u)$ admits a minimizer. We first prove a $p$-variant for $p>1$ of the slope estimate \cite[Lem. 3.1.3, p. 61]{Ambrosio08}, which was originally proved for $p=2$. In particular, we aim at the following \begin{equation} \label{eq:slopeestimate} |\partial \phi|(u) \leq (\tau_i^n)^{1-p} d^{p-1}(v,u) \quad \forall u \in M_E(\tau_i^n,v). \end{equation} Note that this estimate is already mentioned in \cite[Rem. 3.1.7]{Ambrosio08} without proof. We give an argument here. Let $w\in D(\phi)$ be given. From the minimality $E(\tau_i^n,v,u) \leq E(\tau_i^n,v,w) $ we deduce that \begin{align*} &\phi(u) - \phi(w) \leq \frac{(\tau^n_i)^{1-p}}{p}\Big(d^p(v,w) -d^p(v,u) \Big) \\ &\quad\leq \frac{(\tau^n_i)^{1-p}}{p}\Big( \big( d(u,w)+d(v,u) \big)^p -d^p(v,u) \Big) \\ &\quad= \frac{(\tau^n_i)^{1-p}}{p} \left( \sum_{k=0}^\infty \binom{p}{k} d^k(u,w)\,d^{p-k}(v,u) - d^p(v,u)\right)\\ &\quad = d(u,w) \frac{(\tau^n_i)^{1-p}}{p} \sum_{k=1}^\infty \binom{p}{k} d^{k-1}(u,w)\,d^{p-k}(v,u) \end{align*} where we have made use of the generalized binomial formula and the generalized binomial coefficients $$\binom{p}{k} = \frac{p(p-1)\dots (p-k+1)}{k!}.$$ Assume now that $w\not = u$, divide by $d(u,w)$, and compute the $\limsup$ as $w \to u$ in order to get \begin{align*} &|\partial \phi|(u) = \limsup_{w \to u}\frac{\big(\phi(u) - \phi(w)\big)^+}{d(u,w)} \leq \limsup_{w \to u}\frac{(\tau^n_i)^{1-p}}{p} \sum_{k=1}^\infty \binom{p}{k} d^{k-1}(u,w)\,d^{p-k}(v,u) \\ &\quad = \frac{(\tau^n_i)^{1-p}}{p} \binom{p}{1} d^{p-1}(v,u) = {(\tau^n_i)^{1-p}} d^{p-1}(v,u) \end{align*} so that \eqref{eq:slopeestimate} holds. Above, we have used the fact that \begin{align*} &0\leq \lim_{w\to u} \sum_{k=2}^\infty \binom{p}{k} d^{k-1}(u,w)\,d^{p-k}(v,u) \leq \lim_{w\to u}d(u,w) \sum_{k=2}^\infty \binom{p}{k}d^{p-k}(v,u)\\ &\quad = \lim_{w\to u} d(u,w) \left( (1+d(v,u))^p - \binom{p}{1}d^{p-1}(v,u) - \binom{p}{0}d^p(v,u)\right) = 0. \end{align*} Let now $u^e \in D(\phi)$ be a minimizer of $u\mapsto E(\tau_i^n,u_{i-1}^n,u)$. 
Taking into account the convexity assumption \eqref{eq:F2}, let $\gamma:[0,1]\to U$ be a curve with $\gamma(0)=u_{i-1}^n$ and $\gamma(1) = u^e$, so that \begin{align*} &E(\tau_i^n,u_{i-1}^n,u^e) \leq E(\tau_i^n,u_{i-1}^n,\gamma(\theta)) \\ &\quad \stackrel{\eqref{eq:F2}}{\leq} \theta E(\tau_i^n,u_{i-1}^n,u^e) + (1-\theta) E(\tau_i^n,u_{i-1}^n, u_{i-1}^n) \\ &\quad- \theta (1-\theta)\frac{(p-1)(\tau_i^n)^{1-p}+\lambda}{p}d^p(u_{i-1}^n,u^e) \end{align*} where in the first inequality we have again used minimality. Let $\theta\in [0,1)$, divide by $1-\theta$, and take $\theta \to 1$ in order to get \begin{align} & E(\tau_i^n,u_{i-1}^n,u^e) + \frac{ (\tau_i^n)^{1-p}}{q}d^p(u_{i-1}^n,u^e)\nonumber\\ &\quad\leq E(\tau_i^n,u_{i-1}^n, u_{i-1}^n)-\frac{\lambda}{p}d^p(u_{i-1}^n,u^e).\label{eq:qpo} \end{align} By taking the $q$-power of the slope estimate \eqref{eq:slopeestimate} with $v=u^n_{i-1}$ we get $$ |\partial \phi|^q(u^e) \leq (\tau_i^n)^{-p}d^p(u_{i-1}^n,u^e).$$ We use this to estimate from below the second term on the left-hand side of \eqref{eq:qpo} obtaining \begin{align*} & E(\tau_i^n,u_{i-1}^n,u^e) +\frac{\tau_i^n}{q}|\partial \phi|^q(u^e)\leq E(\tau_i^n,u_{i-1}^n, u_{i-1}^n) - \frac{\lambda}{p}d^p(u_{i-1}^n,u^e). \end{align*} As $E(\tau_i^n,u_{i-1}^n, u_{i-1}^n)=0$, given any $u_i^n\in M_G(\tau_i^n,u^n_{i-1})$ the latter entails that \begin{align} &G(\tau_i^n,u_{i-1}^n,u_i^n) \leq G(\tau_i^n,u_{i-1}^n,u^e) \nonumber\\ &\quad = E(\tau_i^n,u_{i-1}^n,u^e) +\frac{\tau_i^n}{q}|\partial \phi|^q(u^e) \leq - \frac{\lambda}{p}d^p(u_{i-1}^n,u^e). \label{eq:han} \end{align} Recall now that the minimality $u^e\in M_E(\tau_i^n,u^n_{i-1})$ and the nonnegativity of $\phi$ ensure that $$\frac{(\tau_i^n)^{1-p}}{p}d^p(u_{i-1}^n,u^e) \leq \phi(u_{i-1}^n).$$ Hence, inequality \eqref{eq:han} yields \begin{equation} G(\tau_i^n,u_{i-1}^n,u_i^n)\leq \lambda^- (\tau_i^n)^{p-1}\phi(u_{i-1}^n).\label{eq:usero} \end{equation} Taking the sum on $i=1,\dots,m$ for $m\leq N^n$ we get \begin{align*} &\phi(u_m^n) + \frac1p \sum_{i=1}^m (\tau_i^n)^{1-p}d^p(u_{i-1}^n,u_i^n) + \frac1q \sum_{i=1}^m\tau^n_i |\partial \phi|^q(u_i^n) -\phi(u^0) \\ &\quad = \sum_{i=1}^mG(\tau^n_i,u^n_{i-1},u^n_i) \leq \lambda^- (\tau^n)^{p-2}\sum_{i=0}^{m-1} \tau^n_i\phi(u_i^n). \end{align*} We can hence use the discrete Gronwall Lemma and deduce that \begin{align*} &\phi(u_m^n) + \frac1p \sum_{i=1}^m (\tau_i^n)^{1-p}d^p(u_{i-1}^n,u_i^n) + \frac1q \sum_{i=1}^m\tau^n_i |\partial \phi|^q(u_i^n) \\ &\quad \leq \phi(u^0) \,{\rm exp}\left( \lambda^- (\tau^n)^{p-2} t^n_m\right). \end{align*} Going back to \eqref{eq:usero}, this entails that $$(G(\tau_i^n,u_{i-1}^n,u_i^n))^+ \leq \lambda^- (\tau_i^n)^{p-1}\phi(u^0) \,{\rm exp}\left( \lambda^- (\tau^n)^{p-2} T\right). $$ Adding up for $i=1,\dots,N^n $ we get \begin{align*} & \sum_{i=1}^{N^n} \big(G(\tau_i^n,u_{i-1}^n,u_i^n) \big)^+ \leq \lambda^- (\tau^n)^{p-2}T\,\phi(u^0) \,{\rm exp}\left( \lambda^- (\tau^n)^{p-2}T\right)=:R^n. \end{align*} If $\lambda\geq 0$, we have that $R^n=0$ and condition \eqref{eq:cond} trivially holds. If $\lambda<0$ and $p> 2$, one can readily check that $R^n \to 0$ as $n\to \infty$ and \eqref{eq:cond} again holds. \section{Convergence without geodesic convexity}\label{sec:p3} \setcounter{equation}{0} We now turn to the proof of Theorem \ref{thm:main3}, where the convexity assumption is replaced by the generalized one-sided Taylor-expansion assumption \eqref{eq:taylor}. The argument follows the general strategy of \cite[Chap. 
3]{Ambrosio08}, by revisiting the theory and adapting it to the incremental functional $G$ and to the case $p>1$. In particular, it is fairly different with respect to that of Section \ref{sec:p2} and does not rely on the existence of solutions of the Euler scheme. We prepare some preliminary arguments in Subsections \ref{sec:meas}-\ref{sec:slope}, deduce an a priori estimate in Subsection \ref{sec:apriori} and eventually present the proof of Theorem \ref{thm:main3} in Subsection \ref{sec:conclude}. \subsection{A measurable selection in $\tau \mapsto M_G(\tau,v)$}\label{sec:meas} Let us recall that for all $\tau\in (0,\tau_*]$ and $v\in D(\phi)$ the set of minimizers $M_G(\tau,v)$ is not empty. By additionally defining $M_G(0,v) = \{v\}$, the set-valued function $\tau \in [0,\tau_*] \mapsto M_G(\tau,v)$ has nonempty values. The aim of this section is to check that it admits a measurable selection, namely, \begin{equation} \label{eq:sel} \exists\, \tau \in [0,\tau_*] \mapsto u_\tau \in M_G(\tau,v) \ \ \text{measurable}. \end{equation} To this aim, we firstly check that $M_G(\tau,v)$ is closed for all $\tau \in [0,\tau_*]$. Indeed, assume $\tau >0$ (the case $\tau=0$ being trivial) and let $u_k\in M_G(\tau,v)$ with $u_k \to u_\infty$. In particular, we have that $$\phi(u_k) + \frac{\tau^{1-p}}{p}d^p(v,u_k) + \frac{\tau}{q}|\partial \phi|^q(u_k) -\phi(v)= G(\tau,v,u_k) \leq G(\tau,v,w)$$ for any $w\in D(|\partial \phi|).$ Owing to the lower semicontinuity \eqref{eq:phicomp}-\eqref{eq:phisl} we can pass to the lower limit and check that $G(\tau,v,u_\infty) \leq G(\tau,v,w)$, so that $ u_\infty \in M_G(\tau,v)$ as well. Secondly, we check that $\tau \mapsto M_G(\tau,v)$ is measurable in the sense of set-valued functions \cite{Wagner}. In particular, we have to check that, for all $C \subset U$ closed, the set $$A=\{ \tau \in [0,\tau_*] \ : \ M_G(\tau,v)\cap C \not = \emptyset\}$$ is measurable. Indeed, one can prove that $A$ is closed: Take $\tau_k \in A$ such that $\tau_k \to \tau_\infty$ and let $u_k \in M_G(\tau_k,v)\cap C $. We have that \begin{align} &\phi(u_k) + \frac{\tau^{1-p}_k}{p}d^p(v,u_k) + \frac{\tau_k}{q}|\partial \phi|^q(u_k) -\phi(v)= G(\tau_k,v,u_k) \nonumber\\ &\quad \leq G(\tau_k,v,v) = \frac{\tau_k}{q}|\partial \phi|^q(v)<\infty.\label{eq:ttau} \end{align} One can hence deduce uniform estimates for $u_k$ and from compactness \eqref{eq:phicomp} one extracts a not relabeled subsequence such that $u_k \to u_\infty$. If $\tau_\infty>0$, by passing to the liminf in the minimality condition for $u_k$ one gets $$G(\tau_\infty,v,u_\infty)\leq \liminf_{k \to \infty}G(\tau_k,v,u_k) \leq \liminf_{k \to \infty}G(\tau_k,v,w) = G(\tau_\infty,v,w) $$ for any $w\in D(|\partial \phi|)$. This implies that $u_\infty\in M_G(\tau_\infty,v)$. On the other hand, if $\tau_\infty=0$ we obtain from \eqref{eq:ttau} that $$ d^p(v,u_k) \leq p\tau_k^{p-1}\phi(v)+\frac{p\tau_k^p}{q} |\partial \phi|^q(v)\to 0,$$ so that $u_\infty = v \in M_G(0,v)$. Since $C$ is closed, $u_\infty\in C$ as well and we have proved that $M_G(\tau_\infty,v) \cap C$ is not empty. In particular, $\tau_\infty\in A$ which is hence closed. As the metric space $(U,d)$ is complete and separable and $\tau \mapsto M_G(\tau,v)$ has nonempty and closed values, the Ryll-Nardzewski Theorem \cite{Ryll} applies and \eqref{eq:sel} holds. 
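In one space dimension the selection \eqref{eq:sel} can be computed explicitly by brute force. The following Python sketch is only an illustration (the double-well potential, the base point $v$, and the grid are arbitrary choices, and $p=q=2$): for each $\tau$ it picks one minimizer of $G(\tau,v,\cdot)$ on a fine grid, thus producing one possible selection $\tau\mapsto u_\tau$.
\begin{verbatim}
import numpy as np

# Brute-force selection tau -> u_tau in M_G(tau, v) for a 1-D double well,
# with p = q = 2, |dphi|(u) = |phi'(u)|, d(v,u) = |v - u|.  Illustration only.
phi = lambda u: (u**2 - 1.0)**2
dphi = lambda u: 4.0 * u * (u**2 - 1.0)

def G(tau, v, u):
    return phi(u) + (u - v)**2 / (2.0 * tau) + tau * dphi(u)**2 / 2.0 - phi(v)

v = 0.1
grid = np.linspace(-2.0, 2.0, 40001)
for tau in [1e-3, 1e-2, 1e-1, 1.0]:
    vals = G(tau, v, grid)
    u_tau = grid[np.argmin(vals)]        # one (measurable) selection
    print(f"tau={tau:.0e}  u_tau={u_tau:+.4f}  Ghat={vals.min():+.4e}")
\end{verbatim}
For small $\tau$ the selected minimizer stays close to $v$, while for larger $\tau$ it may jump towards one of the wells; such jumps are compatible with measurability of the selection, which is all that the argument above requires.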
\subsection{Continuity of $\tau \mapsto \widehat G(\tau,v)$}\label{sec:diff} We now turn our attention to the real map $\tau \in [0,\tau_*] \mapsto \widehat G(\tau,v)$ for some given $v\in D(\phi)$, where we define $\widehat G(0,v) = 0$. In order to check that this function is continuous on $[0,\tau_*]$, take $\tau_k \in [0,\tau_*] \to \tau_\infty$ and $u_k \in M_G(\tau_k,v)$. Following the argument of Subsection \ref{sec:meas}, we can extract a not relabeled subsequence such that $u_k\to u_\infty \in M_G(\tau_\infty,v)$. If $\tau_\infty>0$ the lower semicontinuity \eqref{eq:phicomp}-\eqref{eq:phisl} implies that \begin{align*} &G(\tau_\infty,v,u_\infty) \leq \liminf_{k\to \infty} G(\tau_k,v,u_k) \leq \limsup_{k\to \infty} G(\tau_k,v,u_k)\\ &\quad \leq \limsup_{k\to \infty} G(\tau_k,v,u_\infty) = G(\tau_\infty,v,u_\infty). \end{align*} The case $\tau_\infty=0$ is even simpler as $u_\infty=v$ and we can compute \begin{align} & 0=\widehat G(0,v) = \phi(u_\infty) - \phi(v) \leq \liminf_{k \to \infty} \phi(u_k) - \phi(v) \leq \liminf_{k\to \infty} G(\tau_k,v,u_k) \nonumber\\ &\quad \leq \limsup_{k\to \infty} G(\tau_k,v,u_k)\leq \limsup_{k\to \infty} G(\tau_k,v,v) = \lim_{k\to \infty} \frac{\tau_k}{q} |\partial \phi|^q(v) = 0. \label{eq:zero} \end{align} In both cases, we have proved that $\widehat G(\tau_k,v) \to \widehat G(\tau_\infty,v)$. \subsection{Differentiability of $\tau \mapsto \widehat G(\tau,v)$}\label{sec:diff} The aim of the subsection is to show that $\tau \mapsto \widehat G(\tau,v)$ is even locally Lipschitz continuous and to compute its almost-everywhere derivative, see equation \eqref{eq:fallo3} below. Take $0<\tau_0 <\tau_1 <\tau_*$, $u_0\in M_G(\tau_0,v)$, and $u_1\in M_G(\tau_1,v)$ where $v\in D(\phi)$ is fixed. From minimality we deduce \begin{align*} \widehat G(\tau_1,v) \leq G(\tau_1,v,u_0) = \widehat G(\tau_0,v) + \frac{\tau_1^{1-p} - \tau_0^{1-p}}{p} d^p(v,u_0) +\frac{\tau_1 - \tau_0}{q}|\partial \phi|^q(u_0) \end{align*} so that one has $$ \widehat G(\tau_1,v) - \widehat G(\tau_0,v) \leq \frac{\tau_1^{1-p} - \tau_0^{1-p}}{p} d^p(v,u_0) +\frac{\tau_1 - \tau_0}{q}|\partial \phi|^q(u_0).$$ By exchanging the roles of $\tau_0$ and $\tau_1$ we also get $$ \widehat G(\tau_0,v) - \widehat G(\tau_1,v) \leq \frac{\tau_0^{1-p} - \tau_1^{1-p}}{p} d^p(v,u_1) +\frac{\tau_0 - \tau_1}{q}|\partial \phi|^q(u_1).$$ By dividing by $\tau_1 - \tau_0$ we hence obtain \begin{align} &\frac{\tau_1^{1-p} - \tau_0^{1-p}}{p(\tau_1 - \tau_0)} d^p(v,u_1)\leq \frac{\tau_1^{1-p} - \tau_0^{1-p}}{p(\tau_1 - \tau_0)} d^p(v,u_1) +\frac{1}{q}|\partial \phi|^q(u_1)\nonumber\\ &\quad\leq \frac{\widehat G(\tau_1,v) - \widehat G(\tau_0,v) }{\tau_1- \tau_0}\nonumber\\ &\quad \leq \frac{\tau_1^{1-p} - \tau_0^{1-p}}{p(\tau_1 - \tau_0)} d^p(v,u_0) +\frac{1}{q}|\partial \phi|^q(u_0)\leq \frac{1}{q}|\partial \phi|^q(u_0).\label{eq:fallo} \end{align} The latter implies that $\tau \mapsto \widehat G(\tau,v)$ is locally Lipschitz continuous on $(0,\tau_*]$. Indeed, take $0<\underline \tau< \tau_*$ and $\tau \in [\underline \tau,\tau_*]$. Given $u_\tau \in M_G(\tau,v)$, we readily deduce that \begin{align*} &d^p(v,u_\tau) \leq p \tau^{p-1}_*\phi(v) + \frac{p \tau^p_*}{q}|\partial \phi|^q(v), \\ &\frac{1}{q}|\partial \phi|^q(u_\tau) \leq \frac{1}{\underline \tau}\phi(v) + \frac{1}{q}|\partial \phi|^q(v),\\ &{}-\frac{\tau_1^{1-p} - \tau_0^{1-p}}{p(\tau_1 - \tau_0)} \leq \frac{1}{q\tau_0^p} \leq \frac{1}{q\underline \tau^p}. 
\end{align*} In particular, moving from \eqref{eq:fallo}, for all $\underline \tau\in(0,\tau_*]$ we find $C_{\underline \tau}$ depending on $\underline \tau$, $\phi(v)$, and $|\partial \phi|(v)$ such that $$\left| \frac{\widehat G(\tau_1,v) - \widehat G(\tau_0,v) }{\tau_1- \tau_0} \right|\leq C_{\underline \tau} \quad \forall \underline \tau< \tau_0 < \tau_1 < \tau_*.$$ Hence, $\tau \in (0,\tau_*]\mapsto \widehat G(\tau,v)$ is locally Lipschitz continuous and therefore almost everywhere differentiable in $(0,\tau_*)$. Define now \begin{align*} &\overlinerline f(\tau_0,\tau_1) = \sup\left\{ \frac{\tau_1^{1-p} - \tau_0^{1-p}}{p(\tau_1 - \tau_0)} d^p(v,u_1) +\frac{1}{q}|\partial \phi|^q(u_1) \ : \ u_1\in M_G(\tau_1,v) \right\},\\ &\underline f(\tau_0,\tau_1) = \inf\left\{ \frac{\tau_1^{1-p} - \tau_0^{1-p}}{p(\tau_1 - \tau_0)} d^p(v,u_0) +\frac{1}{q}|\partial \phi|^q(u_0) \ : \ u_0\in M_G(\tau_0,v) \right\}. \end{align*} By using again relation \eqref{eq:fallo} one has \begin{equation} \label{eq:fallo2} \overline f(\tau_0,\tau_1) \leq \frac{\widehat G(\tau_1,v) - \widehat G(\tau_0,v) }{\tau_1- \tau_0}\leq \underline f(\tau_0,\tau_1). \end{equation} Let $\tau \in (0,\tau_*)$ be such that $\tau \mapsto \widehat G(\tau,v)$ is differentiable at $\tau$, take $h \in (0,\tau_*-\tau)$ and any $u_{\tau+h} \in M_G(\tau+h,v)$. By arguing as in Subsection \ref{sec:meas}, one can extract a not relabeled subsequence $u_{\tau+h} \to u_\tau$ as $h \to 0$ and check that $u_\tau\in M_G(\tau,v)$. Moreover, going back to \eqref{eq:fallo2} and choosing $\tau_0=\tau$ and $\tau_1=\tau+h$ we deduce that \begin{align*} & -\frac{\tau^{-p}}{q}d^p(v,u_\tau)+\frac{1}{q}|\partial \phi|^q(u_\tau)\leq \liminf_{h\to 0}\overline f(\tau,\tau+h) \\ &\quad= \frac{\rm d}{{\rm d} \tau} \widehat G(\tau,v) \leq \liminf_{h\to 0} \underline f(\tau,\tau+h) \leq -\frac{\tau^{-p}}{q}d^p(v,\tilde u_\tau)+\frac{1}{q}|\partial \phi|^q(\tilde u_\tau) \end{align*} where $\tilde u_\tau$ is any element of $M_G(\tau,v)$. Passing to the infimum in $M_G(\tau,v)$ left and right we get \begin{equation}\label{eq:fallo3} \frac{\rm d}{{\rm d} \tau} \widehat G(\tau,v) = \inf \left\{ -\frac{\tau^{-p}}{q}d^p(v,u_\tau)+\frac{1}{q}|\partial \phi|^q(u_\tau) \ : \ u_\tau \in M_G(\tau,v) \right\} \end{equation} almost everywhere in $(0,\tau_*)$. \subsection{Slope estimate}\label{sec:slope} Let us prepare a version of the slope estimate \eqref{eq:slopeestimate} adapted to our setting, namely for points in $u \in M_G(\tau,v)$ for $v \in D(\phi)$ instead of $M_E(\tau,v)$. Let $w\in D(|\partial \phi|)$ be given. From minimality we deduce that \begin{align*} & \phi(u) - \phi(w) + \frac{\tau}{q}|\partial \phi|^q(u) - \frac{\tau}{q}|\partial \phi|^q(w) \leq \frac{\tau^{1-p}}{p}\left(d^p( v,w)-d^p(v,u)\right)\\ &\leq d(u,w) \frac{\tau^{1-p}}{p}\sum_{k=1}^\infty \binom{p}{k}d^{k-1}( u,w)d^{p-k}(v,u). \end{align*} By assuming that $w\not = u$, dividing by $d(u,w) $, and taking $w \to u$ we get \begin{equation} \label{eq:slopeestimate2} |\partial ( \phi + \tau|\partial \phi|^q/q)|(u) \leq \tau^{1-p} d^{p-1}(v,u)\quad \forall u \in M_G(\tau,v). \end{equation} This proves in particular the additional regularity $$ M_G(\tau,v) \subset D(\partial ( \phi + \tau|\partial \phi|^q/q))$$ for minimizers of $G$. \subsection{A priori estimate}\label{sec:apriori} Let now $\{u_i^n\}_{i=0}^{N^n}$ solve the incremental minimization problem \eqref{eq:min} with $u^0$ replaced by the approximating $u^{0n}\in M_E(\tau^n,u^0)$. 
From minimality we obtain that \begin{equation} \phi(u^n_i) + \frac{\tau^n_i}{q}|\partial \phi|^q(u^n_i) + \frac1p (\tau^n_i)^{1-p} d^p(u^n_{i-1},u^n_i) \leq \phi(u^n_{i-1}) + \frac{\tau^n_i}{q}|\partial \phi|^q(u^n_{i-1}).\label{eq:3} \end{equation} Taking into account the one-sided nondegeneracy of the time partition $$(\tau^n_i -\tau^n_{i-1})^+/\tau^n_{i-1} \leq C \tau^n$$ we can control the above right-hand side of \eqref{eq:3} as follows \begin{align*} &\phi(u^n_{i-1}) + \frac{\tau^n_i}{q}|\partial \phi|^q(u^n_{i-1}) = \phi(u^n_{i-1}) + \frac{\tau^n_{i-1}}{q}|\partial \phi|^q(u^n_{i-1}) + \frac{\tau_i^n - \tau^n_{i-1}}{q}|\partial \phi|^q(u^n_{i-1}) \\ &\quad \leq \phi(u^n_{i-1}) + \frac{\tau^n_{i-1}}{q}|\partial \phi|^q(u^n_{i-1}) + C\tau^n \frac{\tau^n_{i-1}}{q}|\partial \phi|^q(u^n_{i-1}) . \end{align*} Owing to this bound, we can take the sum in \eqref{eq:3} for $i=2,\dots,m$ and get \begin{align*} &\phi(u_m^n) + \frac{\tau^n_m}{q}|\partial \phi|^q(u^n_m) + \frac1p\sum_{i=1}^{m}(\tau^n_i)^{1-p}d^p(u_{i-1}^n, u_i^n) \nonumber\\ &\quad \leq \phi(u^n_{1}) + \frac{\tau^n_{1}}{q}|\partial \phi|^q(u^n_{1}) + C\sum_{i=2}^{m}\tau^n \frac{\tau^n_{i-1}}{q}|\partial \phi|^q(u^n_{i-1}) \nonumber\\ &\quad \stackrel{\eqref{eq:3}}{\leq} \phi(u^{0n}) + \frac{\tau^n }{q}|\partial \phi|^q(u^{0n}) + C\sum_{j=1}^{m-1}\tau^n \frac{\tau^n_{j}}{q}|\partial \phi|^q(u^n_{j}). \end{align*} By applying the discrete Gronwall Lemma we hence obtain \begin{align} &\phi(u_m^n) + \frac{\tau^n_m}{q}|\partial \phi|^q(u^n_m) + \frac1p\sum_{i=1}^{m}(\tau^n_{i})^{1-p}d^p(u_{i-1}^n, u_i^n) \nonumber\\ &\quad \leq C\left(\phi(u^{0n}) + \frac{\tau^n }{q}|\partial \phi|^q(u^{0n}) \right). \label{eq:4} \end{align} Recall now that $u^{0n} \in M_E(\tau^n,u^0)$ and use the slope estimate \eqref{eq:slopeestimate} to get that $$\frac{\tau^n }{q}|\partial \phi|^q(u^{0n}) \leq \frac1q (\tau^n)^{1-p} d^p(u^0,u^{0n}) \leq \frac{p}{q}\phi(u^0).$$ Hence, $\{u^{0n}\}$ are in particular $d$-bounded and the bound \eqref{eq:4} entails the estimate \begin{align} &\phi(u_m^n) + \frac{\tau^n_m}{q}|\partial \phi|^q(u^n_m) + \frac1p\sum_{i=1}^{m}\tau^{1-p}d^p(u_{i-1}^n, u_i^n) \leq C\left(1+\frac{p}{q}\right) \phi(u^0) \nonumber\\ &\quad \forall m =1, \dots, N^n, \ \forall n. \label{eq:estimate} \end{align} \subsection{Conclusion of the proof}\label{sec:conclude} For all $i =1,\dots,N^n$ and $\tau_0 \in (0,\tau^n_i]$ we use the Lipschitz continuity of $\tau \in (0,\tau^n_i]\mapsto \widehat G(\tau,u^n_{i-1})$ and write \begin{align} \widehat G(\tau_i^n,u_{i-1}^n) = \widehat G(\tau_0,u_{i-1}^n) +\int_{\tau_0}^{\tau_i^n}\frac{\rm d }{\rm d \tau} \widehat G(\tau,u^n_{i-1})\, {\rm d} \tau . \label{eq:eatme} \end{align} Let now $\tau\in [\tau_0,\tau^n_i] \mapsto u_\tau $ be a measurable selection in $M_G(\tau,u^n_{i-1})$. The existence of such a selection is ascertained in Subsection \ref{sec:meas}. Take $\tau_0 \to 0$ in \eqref{eq:eatme}, use $\widehat G(\tau_0,u^n_{i-1}) \to 0$ from \eqref{eq:zero} and \eqref{eq:fallo3} to get \begin{equation} G(\tau^n_i,u^n_{i-1},u^n_i) \leq \int_0^{\tau^n_i} \left( - \frac{\tau^{-p}}{q} d^p(u^n_{i-1},u_\tau)+\frac1q|\partial \phi|^q(u_\tau)\right){\rm d} \tau.\label{eq:controlG} \end{equation} In order to conclude the proof of Theorem \ref{thm:main2}, one has to check that condition \eqref{eq:cond} holds, so that Theorem \ref{thm:main1} applies. This calls for controlling the right-hand side of \eqref{eq:controlG}. 
By means of the slope estimate \eqref{eq:slopeestimate2} for $v=u^n_{i-1}$ and $u=u_\tau$ we can control the right-hand of \eqref{eq:controlG} as $$ G(\tau^n_i,u^n_{i-1},u^n_i) \leq \int_0^{\tau^n_i} \left(\frac1q|\partial \phi|^q(u_\tau) - \frac1q |\partial ( \phi + \tau|\partial \phi|^q/q)|^q(u_\tau) \right){\rm d} \tau. $$ We now use estimate \eqref{eq:estimate} and the generalized one-sided Taylor expansion condition \eqref{eq:taylor} in order to conclude that \begin{align*} &\sum_{i=1}^{N^n} \left( G(\tau^n_i,u^n_{i-1},u^n_i)\right)^+ \leq \frac1q \sum_{i=1}^{N^n} \int_0^{\tau^n_i}g(\tau)\,{\rm d} \tau \\ &\quad = \frac1q \sum_{i=1}^{N^n} \tau^n_i \left(\frac{1}{\tau^n_i}\int_0^{\tau^n_i}g(\tau)\,{\rm d} \tau \right) \leq \frac{T}{q} \frac{1}{\tau^n}\int_0^{\tau^n}g(\tau)\,{\rm d} \tau. \end{align*} As $(1/\tau^n)\int_0^r g(\tau^n)\, {\rm d} \tau \searrow 0$ as $\tau^n\to 0$ condition \eqref{eq:cond} holds. The statement hence follows from Theorem \ref{thm:main1}. \section{Applications in linear spaces}\label{sec:appl0} We collect in this section some comments on the application of the abstract convergence results of Theorem \ref{thm:main1}-\ref{thm:main3} in linear finite and infinite-dimensional spaces. Let us start from the convex case of Theorem \ref{thm:main2}. We hence restrict to $p=2$, for assumption \eqref{eq:F2} cannot hold for $p\not =2$ in linear spaces, as commented in Subsection \ref{sec:assumptions}. Correspondingly, the potential $\phi$ is requires to be convex ($\lambda \geq 0$). In the finite-dimensional ODE case, let the proper, convex potential $\phi:\mathbb{R}^d \to [0,\infty]$ and the initial datum $u^0 \in D(\phi)$ be given. In this case, we have that $|\partial \phi|(u) = |(\partial \phi (u))^\circ|$, where $(\partial \phi (u))^\circ$ is the element of minimal norm in the convex and closed set $\partial \phi (u)$. In particular, $|\partial \phi|(u)$ is lower semicontinuous. As such, the new minimizing-movements scheme \eqref{eq:min} has a solution $\{u_i^n\}$ for any partition and the corresponding interpolants converge to a solution of $u' +\partial \phi(u) \ni 0$, up to subsequences. In order to give an application of Theorem \ref{thm:main2} in infinite dimensions, we consider \begin{equation} \label{eq:pde} \partial_t u - \nabla {\cdot} \beta (\nabla u) + \alpha (u)\ni 0 \quad \text{in}\ \ \Omega \times (0,T). \end{equation} Here, $\Omega \subset \mathbb{R}^d$ is open, bounded, and smooth, $u: \Omega \times (0,T) \to \mathbb{R}$ is scalar-valued, and $\partial_t$ and $\nabla$ indicate partial derivatives in time and space, respectively. We assume that $\beta = \partial \widehat \beta$ and $\alpha = \partial \alpha$ where the potentials $\widehat \beta: \mathbb{R}^d\to [0,\infty]$ and $\widehat \alpha:\mathbb{R} \to [0,\infty]$ are proper and convex. In addition, we assume $\widehat \beta$ to be coercive in the following sense \begin{equation} \label{beta} \exists \, c_\beta >0, \ m>\frac{2d}{d+2} : \quad \widehat \beta (\xi) \geq c_\beta |\xi|^m - \frac{1}{c_\beta} \quad \forall \xi \in \mathbb{R}^d. 
\end{equation} Equation \eqref{eq:pde} is intended to be complemented with homogeneous Dirichlet boundary conditions (other choices being of course possible) hence corresponding to the gradient flow in $U=L^2(\Omega)$ of the functional \begin{equation} \phi(u)= \left\{ \begin{array}{ll} \displaystyle\int_\Omega \big(\widehat \beta (\nabla u ) + \widehat \alpha (u) \big) \, {\rm d} x &\text{for} \ \ u \in W^{1,m}_0(\Omega), \ \\ &\quad\text{with} \ \widehat \beta (\nabla u ) + \widehat \alpha (u)\in L^1(\Omega) \\[2mm] \infty &\text{elsewhere in} \ \ L^2(\Omega). \end{array} \right.\label{def:phi} \end{equation} As $\phi:U \to [0,\infty]$ is convex, proper, and lower semicontinuous, we have that $u \mapsto \partial \phi(u)$ is strongly-weakly closed and $|\partial \phi|(u) = \| (\partial \phi(u))^\circ\|$ (norm in $L^2(\Omega)$) is lower semicontinuous. Moreover, $\partial \phi$ fulfills the chain rule \cite[Lem. 3.3]{Brezis73}, so that $|\partial \phi|$ is a strong upper gradient. Note that the sublevels of $\phi$ are bounded in $W^{1,m}_0 (\Omega)$, which embeds compactly into $L^2(\Omega)$. We can hence apply Theorem \ref{thm:main2}. In particular, the new minimizing-movements scheme \eqref{eq:min} has a solution, which converges to a solution of \eqref{eq:pde}, up to subsequences. Let us now turn to some application of Theorem \ref{thm:main3} to nonconvex problems. In the finite-dimensional case, assume $\phi$ to be twice differentiable and coercive with $\nabla \phi$ and ${\rm D}^2\phi$ locally bounded. Then, one computes \begin{align*} &|\nabla \phi(u)|^q - |\nabla (\phi(u) + \tau |\nabla \phi(u)|^q/q)|^q \nonumber\\ &\quad = |\nabla \phi(u)|^q - |\nabla \phi(u) + \tau |\nabla \phi(u)|^{q-2} {\rm D}^2\phi(u) \, \nabla \phi(u)|^q \\ &\quad \leq \left(\big|\nabla \phi(u)+\tau |\nabla \phi(u)|^{q-2} {\rm D}^2\phi(u) \, \nabla \phi(u)\big| + \big| \tau |\nabla \phi(u)|^{q-2} {\rm D}^2\phi(u) \, \nabla \phi(u)\big|\right)^q \nonumber\\ &\qquad - \big|\nabla \phi(u) + \tau |\nabla \phi(u)|^{q-2} {\rm D}^2\phi(u) \, \nabla \phi(u)\big|^q \nonumber\\ &\quad \leq \tau \sum_{k=1}^\infty\binom{q}{k}\big|\nabla \phi(u)+\tau |\nabla \phi(u)|^{q-2} {\rm D}^2\phi(u) \, \nabla \phi(u)\big|^{q-k} \big|\nabla \phi(u)|^{q-2} {\rm D}^2\phi(u) \, \nabla \phi(u)\big|^k\nonumber\\ &\quad \leq \tau \left( \big|\nabla \phi(u)+\tau |\nabla \phi(u)|^{q-2} {\rm D}^2\phi(u) \, \nabla \phi(u)\big| + \big||\nabla \phi(u)|^{q-2} {\rm D}^2\phi(u) \, \nabla \phi(u)\big|\right)^q. \end{align*} Hence, the one-sided Taylor-expansion condition \eqref{eq:taylor} holds for the choice $$g(\tau) = \tau \sup_{\phi(v)\leq C}\left( |\nabla \phi(v)| + 2 |\nabla \phi(v)|^{q-1} |{\rm D}^2 \phi(v)|\right)^q.$$ Note that the above computation simplifies in case $p=2$, for we have \begin{align*} & |\nabla \phi(u)|^2 - |\nabla (\phi(u) + \tau |\nabla \phi(u)|^2/2)|^2 = |\nabla \phi(u)|^2 - |\nabla \phi(u) + \tau {\rm D}^2\phi(u) \, \nabla \phi(u)|^2 \nonumber\\ &\quad - \tau^2 |{\rm D}^2\phi(u) \, \nabla \phi(u)|^2 - 2\tau \nabla \phi(u) {\cdot} ({\rm D}^2\phi(u) \, \nabla \phi(u)). \end{align*} In particular, if $\phi$ is convex condition \eqref{eq:taylor} holds with the trivial choice $g(\tau)=0$. 
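The above finite-dimensional computation can be checked numerically. The following Python sketch is an illustration only (the potential $\phi(x)=(|x|^2-1)^2/4$ on $\mathbb{R}^2$, the sampling box, and the range of $\tau$ are arbitrary choices, and $p=q=2$): it verifies at random points the elementary pointwise bound $|\nabla \phi(u)|^2 - |\nabla(\phi+\tau|\nabla\phi|^2/2)(u)|^2 \leq 2\tau\lambda^-(u)|\nabla\phi(u)|^2$, where $\lambda^-(u)$ is the negative part of the smallest eigenvalue of ${\rm D}^2\phi(u)$, which underlies the admissible choices of $g$.
\begin{verbatim}
import numpy as np

# Pointwise check of the one-sided Taylor-expansion estimate for p = q = 2
# and the nonconvex potential phi(x) = (|x|^2 - 1)^2 / 4 on R^2 (illustration).
rng = np.random.default_rng(0)
grad = lambda x: (x @ x - 1.0) * x
hess = lambda x: (x @ x - 1.0) * np.eye(2) + 2.0 * np.outer(x, x)

worst = -np.inf
for _ in range(10000):
    x = rng.uniform(-1.5, 1.5, size=2)
    tau = rng.uniform(0.0, 0.5)
    g, H = grad(x), hess(x)
    lhs = g @ g - np.sum((g + tau * H @ g)**2)
    lam_minus = max(0.0, -np.linalg.eigvalsh(H)[0])  # negative part of min eigenvalue
    worst = max(worst, lhs - 2.0 * tau * lam_minus * (g @ g))
print("max violation (should be <= 0):", worst)
\end{verbatim}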
In all cases, if ${\rm D}^2\phi(u) $ is bounded below on sublevels of $\phi$ in the following sense \begin{equation} \forall C>0, \, \exists c>0, \, \forall v,\, \xi \in \mathbb{R}^d \ \text{with} \ \phi(v)\leq C: \quad \xi {\cdot} {\rm D}^2\phi(v)\xi \geq - \frac{c}{2} | \xi|^2\label{anche} \end{equation} and $\nabla \phi(u)$ is bounded on the sublevels of $\phi$, namely, $|\nabla \phi(u)| \leq \ell (\phi(u))$ for some $\ell$ increasing, we can choose $g(\tau)= 2\tau c (\ell(C))^2$ in order to get again condition \eqref{eq:taylor}. This in particular applies to $\phi\in C^2$ and coercive. In all cases, we can apply Theorem \ref{thm:main3} and deduce that the solution of the new minimizing-movements scheme converges up to subsequences to a solution of \eqref{ode}. Let us now turn to the infinite-dimensional case. To simplify notation, let again $p=2$ and $U=L^2(\Omega)$ (the case $p\not =2$ and $U=L^p(\Omega) $ can also be treated) and define $\phi$ as in \eqref{def:phi} by dropping the convexity requirement on $\widehat \alpha$. More precisely, we ask $\beta = {\rm D} \widehat \beta \in C^2(\mathbb{R}^d;\mathbb{R}^d)$ and $\alpha = \widehat \alpha' \in C^2(\mathbb{R})$ and $\widehat \beta$ fulfill the coercivity \eqref{beta}. In this case, we have that \begin{align*}&\partial \phi(u)= - \nabla {\cdot} \beta(\nabla u) + \alpha(u),\\ &\quad \text{with} \ \ D(\partial \phi)= \{u \in L^2(\Omega) \ : \ - \nabla{\cdot} \beta(\nabla u) + \alpha(u) \in L^2(\Omega) \}. \end{align*} Recall that the {\it Fr\'echet subdifferential} \cite{Rockafellar} of $\psi:U \to [0,\infty]$ at $u\in D(\psi)$ is the set $$ \partial \psi(u)=\left\{ \xi \in U \ : \ \liminf_{v \to u} \frac{\psi(v)- \psi(u)- (\xi,v-u)}{\|v-u\|} \geq 0\right\} $$ and $D(\partial \psi) = \{u \in D(\psi) \ : \ \partial \psi(u)\not = \emptyset\}$. In case of $\psi(u)= \| \partial \phi(u)\|^2/2$ we obtain that the Fr\'echet subdifferential is single-valued and \begin{align*}& \partial \frac12 \| \partial \phi(u)\|^2 = \nabla {\cdot} \left( {\rm D} \beta(\nabla u) \nabla \left( \nabla {\cdot} \beta(\nabla u) - \alpha(u)\right)\right) - \left(\nabla {\cdot} \beta(\nabla u) - \alpha(u) \right) \alpha'(u) \end{align*} with domain given by \begin{align} D\left( \partial \frac12 \| \partial \phi(u)\|^2 \right) &= \Big\{u \in D(\partial \phi)\ : \ \partial \| \partial \phi(u)\|^2 \in L^2(\Omega), \nonumber\\ &\quad \text{with} \ \ \left( \nabla {\cdot} \beta(\nabla u) - \alpha(u)\right) {\rm D} \beta(\nabla u) \nu=0 \ \ \text{on} \ \ \partial \Omega \Big\}.\label{eq:domain} \end{align} In particular, an extra natural boundary condition arises, where $\nu$ denotes the outer normal vector to $\partial \Omega$. In the linear case of $\beta(\xi)=\xi$ (see Subsection \ref{sec:illu}), we have that ${\rm D} \beta = I$ (identity matrix) and we deduce again \begin{align*} & \partial \phi(u) = -\Delta u, \quad D(\partial \phi) = \{ u \in H^1_0(\Omega) \ : \ -\Delta u \in L^2(\Omega)\}=H^2(\Omega) \cap H^1_0(\Omega), \\ &\partial \frac12\| \Delta u\|^2 = \Delta^2 u, \\ &D\left( \partial \frac12\| \Delta u\|^2\right) = \{ u \in H^2(\Omega) \cap H^1_0(\Omega) \ : \ \Delta^2 u \in L^2(\Omega) \ \text{and} \ \Delta u =0 \ \text{on} \ \partial \Omega\}\\ &\quad =\{ u \in H^4(\Omega) \cap H^1_0(\Omega) \ : \ \Delta u \in H^2(\Omega) \cap H^1_0(\Omega)\}. 
\end{align*} In order to assess the one-sided Taylor-expansion condition \ref{eq:taylor} we argue as follows \begin{align} & |\partial \phi|^2(u) - |\partial (\phi + \tau \partial |\partial \phi|^2/2)|^2(u) \nonumber\\ & \quad=\|\partial \phi(u)\|^2 - \| \partial \phi(u) + \tau \partial \| \partial \phi(u)\|^2/2\|^2 \nonumber\\ &\quad = - \tau^2 \| \nabla {\cdot} \left( {\rm D} \beta(\nabla u) \nabla \left( \nabla {\cdot} \beta(\nabla u) - \alpha(u)\right)\right) - \left(\nabla {\cdot} \beta(\nabla u) - \alpha(u) \right) \alpha'(u) \|^2 \nonumber\\ &\qquad + 2\tau \int_\Omega \Bigg(\nabla {\cdot} \left( {\rm D} \beta(\nabla u) \nabla \left( \nabla {\cdot} \beta(\nabla u) - \alpha(u)\right)\right) - \left(\nabla {\cdot} \beta(\nabla u) - \alpha(u) \right) \alpha'(u) \Bigg){\cdot} \nonumber\\ &\qquad \qquad \qquad \qquad {\cdot} \left( \nabla {\cdot} \beta(\nabla u) - \alpha(u)\right)\, {\rm d} x \nonumber\\ &\quad \leq -2\tau \int_\Omega \nabla\left( \nabla {\cdot} \beta(\nabla u) - \alpha(u)\right){\cdot}{\rm D} \beta(\nabla u) \nabla \left( \nabla {\cdot} \beta(\nabla u) - \alpha(u)\right)\, {\rm d} x \nonumber\\ &\qquad -2\tau \int_\Omega \alpha'(u) \left( \nabla {\cdot} \beta(\nabla u) - \alpha(u)\right)^2\, {\rm d} x\label{eq:local} \end{align} where we have used also the additional natural condition from \eqref{eq:domain} in the last inequality. The one-sided Taylor-expansion condition \eqref{eq:taylor} then holds if $\widehat \beta$ and $\widehat \alpha$ are convex. In addition, some nonconvex $\widehat \alpha$ can be considered as well. Assume $m>d$. Due to the coercivity of $\beta$, one has that the sublevels of $\phi$ are bounded in $W^{1,m}$ hence in $L^\infty$. In particular, $\phi(u) \leq c \ \Rightarrow \ \|u \|_{L^\infty} \leq\ell (c)$ for some $\ell: (0,\infty)\to (0,\infty)$ increasing. Assume $u^0$ to be given and use \eqref{eq:estimate} to bound $\phi(u)$. Owing to the above discussion we hence have that $\| u \|_{L^\infty} \leq \ell (2C\phi(u^0))$ along the discrete evolution, where $C$ is the constant in \eqref{eq:estimate}. Let now $C_{\rm P}>0$ be the Poincar\'e constant giving $\| w \|_{L^2}^2 \leq C_{\rm P}\| \nabla w\|_{L^2}^2$ for all $w \in H^1_0(\Omega)$. Assume $\widehat \alpha$ to be such that $\alpha'$ locally bounded from below. Under the following smallness assumption $$\inf\big\{\alpha'(r) \ : \ |r| \leq \ell (2C\phi(u^0)) \big\}\geq - \frac{c_\beta}{C_{\rm P}}$$ one has that the right hand side of \eqref{eq:local} can be controlled from above as \begin{align*} & -2\tau c_\beta \| \nabla \left( \nabla {\cdot} \beta(\nabla u) - \alpha(u)\right)\|^2_{L^2}+2\tau c_\beta |\Omega|+2\tau \frac{c_\beta}{C_{\rm P}} \|\nabla {\cdot} \beta(\nabla u) - \alpha(u)\|^2_{L^2} \end{align*} and the one-sided Taylor-expansion condition \eqref{eq:taylor} follows with $g(\tau)=2\tau c_\beta |\Omega|$, at least on the relevant energy sublevel. In this case, Theorem \ref{thm:main3} again ensures that the solution of the new minimizing-movement scheme converges to a solution of \eqref{eq:pde}, up to subsequences. \section{Applications in Wasserstein spaces}\label{sec:Wass} Let us now give some detail in the direction of the application of the above theory to the case of the nonlinear diffusion equation \eqref{eq:introd2}. 
To start with, let us specify the space of probability measures of finite $p$-moment as $$U = \mathcal P_p(\mathbb{R}^d) = \left\{ u \in \mathcal P(\mathbb{R}^d) \ : \ \int_{\mathbb{R}^d}|x|^p{\rm d}u(x)<+\infty \right\}$$ where $\mathcal P(\mathbb{R}^d)$ denotes probability measures on $\mathbb{R}^d$, and endow it with the $p$-Wasserstein distance $$W_p^p(u_1,u_2) = \inf\left\{ \int_{\mathbb{R}^d \times \mathbb{R}^d}|x-y|^p{\rm d} \mu (x,y) \ : \ \mu \in \mathcal P(\mathbb{R}^d {\times} \mathbb{R}^d), \ \pi^1_\# \mu = u_1, \pi^2_\# \mu = u_2\right\}$$ where $u_1, \, u_2 \in \mathcal P_p(\mathbb{R}^d)$ and $\pi^i_\# $ denotes the push-forward of the projection $\pi^i$ on the $i$-th component. Let $\sigma$ indicate the {\it narrow} topology, namely, $u_n \stackrel{\sigma}{\to} u$ iff $$\lim_{n \to \infty}\int_{\mathbb{R}^d} f(x) \, {\rm d} u_n(x) \to \int_{\mathbb{R}^d} f(x) \, {\rm d} u(x)\quad \forall f: \mathbb{R}^d \to \mathbb{R} \ \text{continuous and bounded}.$$ Note that $(\mathcal P_p(\mathbb{R}^d) , W_p)$ is a complete metric space \cite[Prop.~7.1.5]{Ambrosio08} and that $\sigma$ is compatible with $W_p$ \cite[Lemma~7.1.4]{Ambrosio08}, namely, assumptions \eqref{eq:X}-\eqref{eq:compat} hold. Let su now fix some assumptions on potentials $V$, $F$, and $W$. We follow the setting of \cite[Sec.~10.4.7]{Ambrosio08}, also referring to \cite[Sec.~7]{rsss} for some additional discussion. In particular, we assume \begin{align} &V:\mathbb{R}^d \to [0,\infty) \ \ \text{$(\lambda,2)$-convex with} \ \limsup_{|x| \to \infty}\frac{V(x)}{|x|^2}=\infty, \label{eq:V}\\[1mm] &F:[0,\infty) \to \mathbb{R} \ \ \text{convex, differentiable, superlinear for $|x|\to \infty$, $F(0)=0$, and} \ \nonumber\\ &\quad \exists C_F>0: \quad F(x+y) \leq C_F (1+F(x)+F(y)) \quad \forall x,\, y \in \mathbb{R}^d,\nonumber\\ &\quad r \in (0,\infty)\mapsto r^d F(r^{-d}) \ \ \text{is convex and nonincreasing}, \label{eq:F}\\[3mm] & W: \mathbb{R}^d \to [0,\infty) \ \ \text{convex, differentiable, even, such that} \ \nonumber\\ &\quad \exists C_W>0:\quad W(x+y) \leq C_W (1+W(x)+W(y)) \quad \forall x,\, y \in \mathbb{R}^d.\label{eq:W} \end{align} Note that the assumptions on $F$ cover the classical cases $F(r) = r \ln r$ and $F(r) = r^m$ for $m>1$, respectively related to Fokker-Planck and porous media equations. Under assumptions \eqref{eq:V}-\eqref{eq:W} we have that the potential $\phi$ from \eqref{eq:f} is $(\lambda,2)$-geodesically convex. Combining this with the $(1,2)$-generalized-geodesic convexity of $u \mapsto W_2^2(v,u)$ \cite[Lemma 9.2.1]{Ambrosio08} one has that condition \eqref{eq:F2} holds. Note that resorting to generalized-geodesic convexity is here crucial, for the Wasserstein space $({\mathcal P}_2(\mathbb{R}^d), W_2)$ is positively curved \cite[Prop.~3.1]{Ohta}, namely, $u \mapsto W_2^2(v,u)$ is actually $(1,2)$-geodesically {\it concave}. In addition, $\phi$ has $\sigma$-sequentially compact sublevels and its local slope $|\partial \phi|$ is a strong upper gradient and is $\sigma$-sequentially lower semicontinuous \cite[Prop.~10.4.14]{Ambrosio08}. In particular, \eqref{eq:phicomp}-\eqref{eq:phisl} holds and we have the following. \begin{proposition}\label{prop:prop2} Assume \eqref{eq:V}-\eqref{eq:W} and $u^0\in \mathcal P_2(\mathbb{R}^d)$ with $\phi(u^0)<\infty$. Let $\{0=t_0^n<t_1^n<\dots<t^{n}_{N^n}=T\}$ be a sequence of partitions with $\tau^n:=\max (t^n_i-t^n_{i-1}) \to 0$ as $n\to \infty$. Moreover, let $u^n_i \in M_G(\tau^n_i,u^n_{i-1})$ for $i=1, \dots, N^n$. 
Then, up to a not relabeled subsequence, we have that $\overline u^n(t) \sto u(t)$, where $u \in AC^2([0,T];\mathcal P_2(\mathbb{R}^d))$ and there exists a density $\rho: t \in [0,T] \to L^1(\mathbb{R}^d)$ such that $u(t) = \rho(t) \mathcal L^d$, $ \int_{\mathbb{R}^d}\rho(x,t)\, {\rm d} \mathcal L^d(x)=1$, and $ \int_{\mathbb{R}^d}|x|^2\rho(x,t)\, {\rm d} \mathcal L^d(x)<\infty$ for all $t \in [0,T]$, satisfying $u^0=\rho(\cdot,0)\mathcal L^d$ and the nonlinear diffusion equation $$ \partial_t \rho - {\rm div}\big( \rho\nabla ( V + F'(\rho) + W \ast \rho) \big)=0\quad \text{in} \ \ \mathcal D'(\mathbb{R}^d \times (0,T)).$$ \end{proposition} Let us now turn to an application of the one-sided Taylor-expansion condition \eqref{eq:taylor} for general $p$. In the metric situation of \eqref{eq:f}, one can use such condition in the purely trasport case $F=0$ and $W=0$. By assuming periodic boundary conditions, we formulate the problem on the torus $\mathbb{T}^d=\mathbb{R}^d/\mathbb{Z}^d$. Let $V \in C^3(\mathbb{T}^d)$ and define \begin{equation} \label{eq:ff} \phi(u) = \int_{\mathbb{T}^d} V(x)\, {\rm d} u(x) \quad \forall u \in \mathcal P(\mathbb{T}^d). \end{equation} From \cite[Prop.~10.4.2]{Ambrosio08} we have that $$|\partial \phi|^q(u) = \int_{\mathbb{T}^d} |\nabla V(x)|^q {\rm d} u(x).$$ In case $p=2$ we obtain $$ \phi(u) + \frac{\tau}{2}|\partial \phi|^2(u) =\int_{\mathbb{T}^d}\left(V(x)+\frac{\tau}{2}|\nabla V(x)|^2 \right) {\rm d} u(x) =: \int_{\mathbb{T}^d} \tilde V(x) \, {\rm d} u(x).$$ One readily checks that ${\rm D}^2\tilde V = {\rm D}^2 V +\tau {\rm D}^3 V \,\nabla V + \tau {\rm D}^2 V \,{\rm D}^2 V $ is bounded below. We can hence apply \cite[Prop.~10.4.2]{Ambrosio08} once more and deduce that \begin{align*} &\quad |\partial \phi|^2(u) - |\partial(\phi + \tau |\partial \phi|^2/2)|^2(u) = \int_{\mathbb{T}^d} \left(|\nabla V(x)|^2 - |\nabla \tilde V(x)|^2 \right) {\rm d} u(x) \\ &\quad =\int_{\mathbb{T}^d} \left(|\nabla V(x)|^2 - |\nabla V(x) + \tau {\rm D}^2V(x) \nabla V(x)|^2\right) {\rm d} u(x)\\ &\quad = - \int_{\mathbb{T}^d} \tau^2| {\rm D}^2V(x) \nabla V(x)|^2\, {\rm d} u(x) - 2\int_{\mathbb{T}^d} \tau \nabla V(x) {\cdot} {\rm D}^2V(x) \nabla V(x) \,{\rm d} u(x) \\ &\quad \leq 2\tau \lambda^- \int_{\mathbb{T}^d}|\nabla V(x)|^2\, {\rm d}u(x) \leq 2\tau \lambda^- \| \nabla V\|_{L^\infty(\mathbb{T}^d)}^2 \end{align*} where we have defined $\lambda = \min\{ \xi {\cdot} {\rm D}^2V(x) \xi \ : \ x \in \mathbb{T}^d,\, \xi \in \mathbb{R}^d, \, |\xi|=1\}$. Hence, condition \eqref{eq:taylor} holds with $g(\tau):=2\tau \lambda^- \| \nabla V\|_{L^\infty(\mathbb{T}^d)}^2$ (and, in particular, $g(\tau)=0$ if $V$ is convex). In fact, the above computation can be adapted to the case $p\not = 2$ by letting $\tilde V = V +\tau |\nabla V|^q/q$. Let us shorten notation by denoting by $\xi(x) = \nabla V (x)$ and by $A(x) = {\rm D}^2 V(x)$. Then, $\nabla \tilde V(x) = \xi(x) + \tau |\xi(x)|^{q-2}A(x)\xi(x)$. 
We compute \begin{align*} &\quad |\partial \phi|^q(u) - |\partial(\phi + \tau |\partial \phi|^q/q)|^q(u) = \int_{\mathbb{T}^d} \left(|\nabla V|^q - |\nabla \tilde V|^q \right) {\rm d} u \\ &\quad =\int_{\mathbb{T}^d} \left(|\xi|^q - |\xi+ \tau |\xi|^{q-2}A\xi|^q\right) {\rm d} u\\ &\quad \leq \int_{\mathbb{T}^d} \left(\left(\big|\xi+ \tau |\xi|^{q-2}A\xi\big|+\big| \tau |\xi |^{q-2}A\xi \big| \right)^q -|\xi+ \tau |\xi|^{q-2}A\xi|^q \right) {\rm d} u \\ &\quad = \int_{\mathbb{T}^d} \sum_{k=1}^{\infty}\binom{q}{k}\big|\xi+ \tau |\xi|^{q-2}A\xi \big|^{q-k} \big| \tau |\xi|^{q-2}A\xi \big|^k {\rm d} u \\ &\quad \leq \tau \sum_{k=1}^{\infty}\binom{q}{k}\|\xi+ \tau |\xi|^{q-2}A\xi \|_{L^{\infty}(\mathbb{T}^d)}^{q-k} \| |\xi|^{q-2}A\xi \|^k_{L^\infty(\mathbb{T}^d)} \\ &\quad \leq \tau \left(\|\xi+ \tau |\xi|^{q-2}A\xi \|_{L^{\infty}(\mathbb{T}^d)} + \| |\xi|^{q-2}A\xi \|_{L^\infty(\mathbb{T}^d)} \right)^q. \end{align*} The one-sided Taylor-expansion condition~\eqref{eq:taylor} hence follows with the choice $$g(\tau) = \tau \left(\|\nabla V\|_{L^{\infty}(\mathbb{T}^d)} +2 \|\nabla V\|^{q-1}_{L^{\infty}(\mathbb{T}^d)} \| {\rm D}^2V\|_{L^{\infty}(\mathbb{T}^d)} \right)^q.$$ By applying Theorem \ref{thm:main3} we obtain the following. \begin{proposition} Assume $V\in C^3(\mathbb{T}^d)$ and $u^0\in \mathcal P(\mathbb{T}^d)$. Let $\{0=t_0^n<t_1^n<\dots<t^{n}_{N^n}=T\}$ be a sequence of partitions with $\tau^n:=\max (t^n_i-t^n_{i-1}) \to 0$ as $n\to \infty$ and $(\tau^n_i - \tau^n_{i-1})^+/\tau^n_{i-1}\leq \widehat C \tau^n$ for $i=1,\dots,N^n$ . Moreover, let $u^n_i \in M_G(\tau^n_i,u^n_{i-1})$ for $i=1, \dots, N^n$ and $\phi$ defined in~\eqref{eq:ff}. Then, up to a not relabeled subsequence, we have that $\overline u^n(t) \sto u(t)$, where $u \in AC^p([0,T];\mathcal P(\mathbb{T}^d))$ satisfies $u(0)=u_0$ and the nonlinear transport equation $$ \partial_t u - {\rm div}\left( u | \nabla V|^{q-2} \nabla V \right)=0\quad \text{in} \ \ \mathcal D'(\mathbb{T}^d \times (0,T)).$$ \end{proposition} \end{document}
\begin{document} \title[Gamma II from HMS]{Gamma II for toric varieties from integrals on T-dual branes and homological mirror symmetry} \address{Bohan Fang, Beijing International Center for Mathematical Research, Peking University, 5 Yiheyuan Road, Beijing 100871, China} \email{[email protected]} \author{Bohan Fang} \address{Peng Zhou, Institut des Hautes \'Etudes Scientifiques. Le Bois-Marie, 35 route de Chartres, 91440 Bures-sur-Yvette France} \email{[email protected]} \author{Peng Zhou} \begin{abstract} In this paper we consider the oscillatory integrals on Lefschetz thimbles in the Landau-Ginzburg model mirror to a toric Fano manifold. We show that these thimbles represent the same relative homology classes as the characteristic cycles of the corresponding constructible sheaves under the equivalence of \cite{GPS18-2}. The oscillatory integrals on such thimbles therefore coincide with the integrals on the characteristic cycles and are related to the genus $0$ Gromov-Witten descendant potential of $X$, and this leads to a proof of the Gamma II conjecture for toric Fano manifolds. \end{abstract} \maketitle \section{Introduction} The mirror of a toric Fano variety $X$ is a Landau-Ginzburg model $W:(\mathbb{C}^*)^{\dim X} \to \mathbb{C}$, where the \emph{superpotential} $W$ is a Laurent polynomial. The \emph{closed} A-model of $X$, mathematically, is about its Gromov-Witten theory. By mirror symmetry it can be read off from the \emph{closed} B-model on the mirror Landau-Ginzburg model, usually in the form of period integrals. In principle, the genus $0$ Gromov-Witten descendant potential function of $X$ is equal to oscillatory integrals on its mirror \cites{Gi94} \[ \int e^{-\frac{W}{z}}\frac{dX_1\dots dX_n}{X_1\dots X_n}. \] A natural question to ask is what, on the Gromov-Witten side, mirrors the choice of the cycle one integrates over. The answer is given by the identification of the K-group $K(X)$ with the lattice of relative homology cycles $H_n((\mathbb{C}^*)^n, \mathrm{Re}(W/z)\gg 0)$ in \cites{Iritani09}. By inserting a certain Gamma-function-related characteristic class of such a $K$-group element in the Gromov-Witten correlator function, one can show that the resulting correlator is equal to the oscillatory integral over the corresponding relative cycle. Moreover, in \cites{Fang16} this identification is shown to agree with homological mirror symmetry: a categorical identification of coherent sheaves on $X$ with a Fukaya-type category of Lagrangian cycles in $(\mathbb{C}^*)^n$, which relates the \emph{open} B-model on $X$ to the \emph{open} A-model on its mirror Landau-Ginzburg model. There are various consequences of such mirror symmetry. Since genus $0$ descendant correlator functions are solutions to quantum differential equations, one can investigate the properties of such solutions by analytic methods applied to the oscillatory integrals. A particularly interesting type of relative cycles for the Landau-Ginzburg model are Lefschetz thimbles. On one hand, they correspond to a full exceptional collection in any reasonably defined Fukaya-type category associated to $((\mathbb{C}^*)^n,W)$. On the other hand, the oscillatory integrals over them have nice asymptotic properties. The Gamma conjectures \cites{GGI16} for Fano varieties, especially the Gamma II conjecture, are related to these features. \subsection{Gamma conjectures for Fano varieties} The Gamma conjecture \cites{GGI16} for a Fano variety concerns its quantum cohomology and certain characteristic classes.
Let $X$ be a Fano variety. Under a certain assumption, namely \emph{Property $\mathcal O$}, one defines the limit of the $J$-function (assembled from genus $0$ Gromov-Witten descendant invariants) along a ray with coordinate $t>0$ in its K\"ahler cone \[ A_X=\lim_{t\to +\infty} \frac{J_X(t)}{\langle [\mathrm{pt}],J_X(t)\rangle} \in H^*(X). \] The \emph{Gamma I conjecture} says that this class is the \emph{Gamma class} $\hat \Gamma_X$ of its tangent bundle (see Equation \eqref{eqn:Gamma}) \cites{Iritani09,KKP08}. Genus $0$ Gromov-Witten theory defines the (small) quantum connection on the trivial $H^*(X)$ bundle over $H^2(X) \times \mathbb{C}^*$ (Equation \eqref{eqn:qde-tau}), which can be solved asymptotically \cites{Du93,Gi01a} (Theorem \ref{thm:Dubrovin-Givental-decomposition}). These asymptotic solutions $y_1,\dots,y_\mathfrak{s}$, where $\mathfrak{s}=\dim H^*(X)$, form a basis of solutions. On the other hand, any solution is a linear combination of certain genus $0$ Gromov-Witten correlator functions $\mathcal{Z}(E)$ for $E\in K(X)$, whose definition involves inserting the class ${\boldsymbol{A}}_E=\mathrm{ch}(E)\cdot \hat\Gamma_X$. The Gamma II conjecture says the following. \begin{conjecture}[Gamma II, see Conjecture \ref{conj:gamma-ii} for its precise form, and Theorem \ref{thm:gamma-ii} for complete toric Fano manifolds] There exists a full exceptional collection $E_1,\dots, E_\mathfrak{s}\in \mathrm{Coh}(X)$ such that \[ \mathcal{Z}([E_i])=y_i. \] \end{conjecture} These $A_i:={\boldsymbol{A}}_{[E_i]}$ are called \emph{higher asymptotic classes}. The Gamma I conjecture has been proved for complex Grassmannians \cite{GGI16}, Fano 3-folds of Picard rank one \cite{GoZa16}, Fano complete intersections in projective spaces \cites{GaIr15, SaSh17, Ke18}, toric Fano manifolds that satisfy a B-model analogue of Property $\mathcal{O}$ \cites{GaIr15}, and del Pezzo surfaces \cites{HuKeLiYang19}. The Gamma II conjecture is an extension of Dubrovin's conjecture \cites{Dubrovin1998} and was formulated in \cites{Dubrovin13, GGI16}. It is known for projective spaces \cites{GaIr15}. A $K$-theoretic version of the Gamma II conjecture is also shown for complete toric Fano manifolds \cites{GaIr15}. We would like to remark that the Gamma conjecture is motivated by mirror symmetry, and has a version more directly related to the Strominger-Yau-Zaslow conjecture for Calabi-Yaus \cites{AGIS18}. In particular, the Gamma class arises naturally in B-model period integral computations \cites{HLY96,Ho06}. \subsection{Oscillatory integrals and mirror symmetry} In some situations, the Gamma conjecture can be mathematically proved by mirror symmetry (see \cite{GaIr15} for various cases). This paper considers a smooth toric Fano manifold $X$ with Landau-Ginzburg mirror $W:(\mathbb C^*)^n\to \mathbb{C}$. In \cites{Iritani09}, it is shown that the oscillatory integrals over integral cycles in $H_n((\mathbb C^*)^n, \mathrm{Re}(W/z)\gg 0)$ are given by the genus $0$ descendant potential with the classes ${\boldsymbol{A}}_{[E]}$ inserted -- the lattice of such relative homology cycles is isomorphic to the $K$-group lattice of holomorphic vector bundles $E$ on $X$. In \cite{Fang16} this isomorphism is further shown to agree with homological mirror symmetry in the sense of \cites{FLTZ11, FLTZ12,Kuwagaki17,ZhouPeng17b,Vaintrob16}.
Such a homological mirror symmetry (HMS) statement (more precisely, the \emph{coherent-constructible correspondence}) says that to any coherent sheaf $E$ on $X$ one can associate a constructible sheaf on $(S^1)^n$, and that the category of coherent sheaves $\mathrm{Coh}(X)$ is derived equivalent to the category of such constructible sheaves on $(S^1)^n$, denoted by $\mathrm{Sh}^w_\Lambda((S^1)^n)$. Iritani's isomorphism between the $K$-group and the relative homology for the LG model is obtained by taking the constructible sheaf on $(S^1)^n$ corresponding to a coherent sheaf $E$ and then taking its characteristic cycle, which is a Lagrangian cycle in $(\mathbb C^*)^n\cong T^* (S^1)^n$ and represents a class in $H_n((\mathbb C^*)^n,\mathrm{Re}(W/z)\gg 0)$. In this paper we want to consider Lagrangian thimbles associated to the Landau-Ginzburg mirror $W:(\mathbb C^*)^n\to \mathbb{C}$. Their images under $W$ are right-pointing rays, and thus they represent classes in $H_n((\mathbb C^*)^n,\mathrm{Re}(W/z)\gg 0)$ (for $\mathrm{Re}(z)>0$). They are naturally objects in the Fukaya-Seidel category of the Landau-Ginzburg model \cite{Seidelbook}. We use the recently developed wrapped Fukaya category of Ganatra-Pardon-Shende \cite{GPS17,GPS18-1}, denoted $\mathcal{W}FS$, instead of the original version \cite{Seidelbook} as the Fukaya-Seidel category of the Landau-Ginzburg model (see Section \ref{sec:HMS-Lag} for the notion). Then we have the following. \begin{itemize} \item These thimbles form a full exceptional collection. \item By the recent result of \cite{GPS18-2}, they correspond to constructible sheaves on $(S^1)^n$. \item By \cite{ZhouPeng18, GammageShende17}, such constructible sheaves are in $\mathrm{Sh}^w_\Lambda((S^1)^n)$. \end{itemize} We further show that such thimbles represent the same relative homology classes as the characteristic cycles of their corresponding constructible sheaves. Then we have a natural pathway to the Gamma II conjecture: passing to constructible sheaves and then to coherent sheaves on $X$, we obtain a full exceptional collection $E_1,\dots, E_\mathfrak{s}$. On one hand, we analyze the asymptotic behavior of the oscillatory integrals on these thimbles, which are precisely the asymptotic solutions $y_i$. On the other hand, they are integrals over the characteristic cycles of the corresponding constructible sheaves and thus are equal to $1$-point Gromov-Witten descendant potentials with ${\boldsymbol{A}}_{[E_i]}$ inserted, and thus $y_i=\mathcal Z([E_i])$. \begin{remark} We show the Gamma II conjecture in a neighborhood of the large radius limit (complex parameter $|q|\ll 1$). Actually, the validity of Gamma II at any semisimple point implies the rest (see the proof of Theorem 6.4 in \cite{GaIr15}). \end{remark} \subsection{Outline} We recall the notion of a smooth toric Fano variety $X$ and its mirror in Section \ref{sec:mirror}. In particular, we very carefully define the mirror $(\mathbb C^*)^n$ as a complex manifold (in Section \ref{sec:LG-B-model}) and as a symplectic manifold $T^*(S^1)^n$ (in Section \ref{sec:lg-a}). A key ingredient is the identification of the two, which is explained in Section \ref{sec:lg-a}. Then we define the closed-sector theories in Section \ref{sec:thimbles} for both the B-side (oscillatory integrals) and the A-side (descendant Gromov-Witten invariants and the quantum connection on $X$). In Section \ref{sec:HMS-sheaf} we first recall the coherent-constructible correspondence of \cites{FLTZ11,Kuwagaki17,ZhouPeng17b,Vaintrob16} and show the convergence of oscillatory integration on characteristic cycles.
In Section \ref{sec:HMS-Lag} we show that in Ganatra-Pardon-Shende's wrapped Fukaya category \cites{GPS17,GPS18-1}, a Lefschetz thimble or a ``standard'' Lagrangian represents the same relative homology class as the characteristic cycle of its corresponding constructible sheaf (under \cites{GPS18-2}). Then in Section \ref{sec:gamma-ii} we show the Gamma II conjecture for a toric Fano manifold by looking at the oscillatory integrals on Lefschetz thimbles and their mirror counterparts: genus $0$ descendant GW potentials with the asymptotic classes of the mirror coherent sheaves in an exceptional collection inserted.
\subsection{Acknowledgements}
BF would like to thank Hiroshi Iritani for bringing this problem to his attention. He is also grateful for very helpful discussions with Chiu-Chu Melissa Liu, David Nadler, Vivek Shende and Eric Zaslow. The work of BF is partially supported by an NSFC grant 11831017. The work of PZ is supported by an IHES Simons Postdoctoral Fellowship as part of the Simons Collaboration on HMS.
\section{Mirror symmetry for toric manifolds}
\label{sec:mirror}
In this section, we fix the notation for toric manifolds and discuss their mirror Landau-Ginzburg A- and B-models.
\subsection{Definition of a toric manifold}
Let $N\cong \mathbb{Z}^n$ be a finitely generated free abelian group, and let $N_\mathbb{R}=N\otimes_\mathbb{Z}\mathbb{R}$. We consider complete smooth toric manifolds given by a simplicial fan $\Sigma$ in $N_\mathbb{R}$ such that the set of $1$-cones is
$$
\{\rho_1,\dots,\rho_{\mathfrak{r}}\},
$$
where $\rho_i\cap N=\mathbb{Z}_{\ge 0} b_i$, $i=1,\dots, \mathfrak{r}$. We require
\begin{itemize}
\item $\Sigma$ is complete: $|\Sigma|=N_\mathbb{R}$;
\item $\Sigma$ is smooth: for every top dimensional cone $\sigma$, the lattice $\oplus_{b_i\in\sigma}\mathbb{Z} b_i\cong N$.
\end{itemize}
There is a surjective group homomorphism
\begin{eqnarray*}
\phi: & {\tilde{N}} :=\oplus_{i=1}^{\mathfrak{r}} \mathbb{Z}\tilde{b}_i & \longrightarrow N,\\
& \tilde{b}_i & \mapsto b_i.
\end{eqnarray*}
Define $\mathbb{L} :=\mathrm{Ker}(\phi) \cong \mathbb{Z}^\mathfrak{p}$ where $\mathfrak{p}:=\mathfrak{r}-n$. Then we have the following short exact sequence of finitely generated abelian groups:
\begin{equation}\label{eqn:NtN}
0\to \mathbb{L} \stackrel{\psi }{\longrightarrow} {\tilde{N}} \stackrel{\phi}{\longrightarrow} N\to 0.
\end{equation}
Applying $ - \otimes_\mathbb{Z} \mathbb{C}^*$ and $\mathrm{Hom}(-,\mathbb{Z})$ to \eqref{eqn:NtN}, we obtain two exact sequences of abelian groups:
\begin{align}
\label{eqn:bT}
&1 \to G \to \widetilde{\mathbb{T}} \to \mathbb{T} \to 1,\\
&\label{eqn:MtM}
0 \to M \stackrel{\phi^\vee}{\to} {\tilde{M}} \stackrel{\psi^\vee}{\to} \mathbb{L}^\vee \to 0,
\end{align}
where
\begin{align*}
&\mathbb{T}= {N}\otimes_\mathbb{Z} \mathbb{C}^* \cong (\mathbb{C}^*)^n,\ \widetilde{\mathbb{T}} = {\tilde{N}}\otimes_\mathbb{Z} \mathbb{C}^* \cong (\mathbb{C}^*)^\mathfrak{r},\ G = \mathbb{L}\otimes_\mathbb{Z} \mathbb{C}^* \cong (\mathbb{C}^*)^\mathfrak{p},\\
&M = \mathrm{Hom}(N,\mathbb{Z}) = \mathrm{Hom}(\mathbb{T},\mathbb{C}^*), \ {\tilde{M}} = \mathrm{Hom}({\tilde{N}},\mathbb{Z})= \mathrm{Hom}( \widetilde{\mathbb{T}} ,\mathbb{C}^*),\ \mathbb{L}^\vee = \mathrm{Hom}(\mathbb{L},\mathbb{Z}) =\mathrm{Hom}(G,\mathbb{C}^*).
\end{align*}
The action of $\widetilde{\mathbb{T}}$ on itself extends to a $\widetilde{\mathbb{T}}$-action on $\mathbb{C}^\mathfrak{r} = \mathrm{Spec}\,\mathbb{C}[Z_1,\dots, Z_\mathfrak{r}]$. The group $G$ acts on $\mathbb{C}^\mathfrak{r}$ via the group homomorphism $G\to \widetilde{\mathbb{T}}$ in \eqref{eqn:bT}. Define the set of ``anti-cones''
$$
\mathcal{A}=\{I\subset \{1,\dots, \mathfrak{r}\}: \text{$\sum_{i\notin I} \mathbb{R}_{\ge 0} b_i$ is a cone of $\Sigma$}\}.
$$
Given $I\in \mathcal{A}$, let $\mathbb{C}^I$ be the subvariety of $\mathbb{C}^\mathfrak{r}$ defined by the ideal in $\mathbb{C}[Z_1,\ldots, Z_\mathfrak{r}]$ generated by $\{ Z_i \mid i\in I\}$. Define the toric variety $X$ as the quotient
$$
X:=U_\mathcal{A}/ G,
$$
where
$$
U_\mathcal{A}:=\mathbb{C}^\mathfrak{r} \backslash \bigcup_{I \in \mathcal A} \mathbb{C}^I.
$$
The smooth compact variety $X$ contains the torus $\mathbb{T}= \widetilde{\mathbb{T}} /G$ as a dense open subset, and the $\widetilde{\mathbb{T}}$-action on $U_\mathcal{A}$ descends to a $\mathbb{T}$-action on $X$. Let $\tilde{\mathcal{D}}_i$ be the $\widetilde{\mathbb{T}}$-divisor in $\mathbb{C}^\mathfrak{r}$ defined by $Z_i=0$. Then $\tilde{\mathcal{D}}_i \cap U_\mathcal{A}$ descends to a $\mathbb{T}$-divisor $\mathcal{D}_i$ in $X$. We have
$$
{\tilde{M}} \cong {\mathrm{Pic}} _{\widetilde{\mathbb{T}}}(\mathbb{C}^\mathfrak{r}) \cong H^2_{\widetilde{\mathbb{T}}}(\mathbb{C}^\mathfrak{r};\mathbb{Z}),
$$
where the second isomorphism is given by the $\widetilde{\mathbb{T}}$-equivariant first Chern class $(c_1)_{\widetilde{\mathbb{T}}}$. Define
\begin{gather*}
D_i=(c_1)_G(\mathcal{O}_{\mathbb{C}^\mathfrak{r}}(\tilde{\mathcal{D}}_i)) \in H^2_G(\mathbb{C}^\mathfrak{r};\mathbb{Z})\cong \mathbb{L}^\vee.
\end{gather*}
We have
$$
{\mathrm{Pic}} (X)\cong H^2(X;\mathbb{Z}) \cong \mathbb{L}^\vee.
$$
\subsection{The nef and Mori cones}
\label{sec:nef-NE}
In this subsection, $\mathbb{F}=\mathbb{Q}$, $\mathbb{R}$, or $\mathbb{C}$. Given a finitely generated free abelian group $\Gamma \cong \mathbb{Z}^m$, define $\Gamma_\mathbb{F}= \Gamma\otimes_\mathbb{Z} \mathbb{F} \cong \mathbb{F}^m$. Tensoring Equations \eqref{eqn:NtN} and \eqref{eqn:MtM} with $\mathbb{F}$, we have the following short exact sequences of vector spaces:
\begin{eqnarray*}
&& 0\to \mathbb{L}_\mathbb{F}\to {\tilde{N}}_\mathbb{F} \to N_\mathbb{F} \to 0,\\
&& 0\to M_\mathbb{F}\to {\tilde{M}}_\mathbb{F}\to \mathbb{L}^\vee_\mathbb{F}\to 0.
\end{eqnarray*}
Let $\Sigma(d)$ be the set of $d$-dimensional cones of $\Sigma$. Given a maximal cone $\sigma\in \Sigma(n)$, let $I_\sigma=\{i\in\{1,\dots,\mathfrak{r}\}\mid \rho_i\not\subset\sigma\}\in\mathcal{A}$ be its anti-cone, and define the nef cones
$$
{\mathrm{Nef}}_\sigma = \sum_{i\in I_\sigma}\mathbb{R}_{\geq 0} D_i,\quad {\mathrm{Nef}}_{X}:=\bigcap_{\sigma\in \Sigma(n)} {\mathrm{Nef}}_\sigma.
$$
The $\sigma$-K\"{a}hler cone $C_\sigma$ is defined to be the interior of ${\mathrm{Nef}}_\sigma$; the K\"{a}hler cone of $X$, $C_{X}$, is defined to be the interior of the nef cone ${\mathrm{Nef}}_{X}$. Let $\langle-, -\rangle$ be the natural pairing between $\mathbb{L}^\vee_\mathbb{Q}$ and $\mathbb{L}_\mathbb{Q}$.
We define the Mori cones ${\mathrm{NE}}_\sigma, {\mathrm{NE}}_X\subset \mathbb{L}_\mathbb{R}$ to be
$$
{\mathrm{NE}}_\sigma=\{ \beta \in \mathbb{L}_\mathbb{R}\mid \langle D,\beta\rangle \geq 0 \ \forall D\in {\mathrm{Nef}}_\sigma\},\quad {\mathrm{NE}}_{X}:= \bigcup_{\sigma\in \Sigma(n)} {\mathrm{NE}}_\sigma.
$$
Finally, we define the effective curve classes
$$
\mathbb{K}_{{\mathrm{eff}},\sigma}:= \mathbb{L} \cap {\mathrm{NE}}_\sigma,\quad \mathbb{K}_{{\mathrm{eff}}}:= \mathbb{L}\cap {\mathrm{NE}}_X.
$$
\begin{assumption}[Fano condition] \label{semi-positive}
From now on, we assume $D_1+\dots + D_\mathfrak{r}$ is contained in the K\"ahler cone $C_X$, which is equivalent to $c_1(X)>0$, i.e. $X$ is a Fano variety.
\end{assumption}
\subsection{Landau-Ginzburg as the B-model}
\label{sec:LG-B-model}
In this subsection, we define the mirror Landau-Ginzburg model from the viewpoint of complex geometry, and identify it with $T^*{M_\mathbb{R}}=N_\mathbb{R} \times M_\mathbb{R}$. We fix an integral basis $e_1,\dots,e_\mathfrak{p} \in \mathbb{L}$ and its dual basis $e_1^\vee,\dots,e_\mathfrak{p}^\vee$ in $\mathbb{L}^\vee$. We require that each $e_a^\vee$ is in ${\mathrm{Nef}}_X$. As discussed in \cite[p1037]{Iritani09}, this choice is always possible. We let $H_1,\dots, H_\mathfrak{s}$ be a $\mathbb{Z}$-basis of $H^*(X;\mathbb{Z})$, with $H_a=e_a^\vee$ for $a=1,\dots,\mathfrak{p}$. Here $\mathfrak{s}=\dim H^*(X)$. Define the \emph{charge vectors}
\[
l^{(a)}=(l_1^{(a)},\dots, l_\mathfrak{r}^{(a)})\in \mathbb{Z}^\mathfrak{r},\quad \psi(e_a)=\sum_{i=1}^\mathfrak{r} l_i^{(a)} \tilde{b}_i.
\]
So
\[
D_i=\psi^\vee(D_i^\mathbb{T})=\sum_{a=1}^\mathfrak{p} l_i^{(a)}e_a^\vee,\quad i=1,\dots,\mathfrak{r}.
\]
Define the Landau-Ginzburg B-model as follows:
$$
\mathcal{Y}_q=\{(\tilde{X}_1,\dots,\tilde{X}_\mathfrak{r})\in (\mathbb{C}^*)^\mathfrak{r}\mid\prod_{i=1}^\mathfrak{r} \tilde{X}_i^{l^{(a)}_i}=q_a,\ a=1,\dots,\mathfrak{p}\}.
$$
Here $q_1,\dots,q_\mathfrak{p}$ are \emph{complex parameters}. Applying the exact functor $\mathrm{Hom}(-,\mathbb{C}^*)$ to the short exact sequence \eqref{eqn:NtN}, we get
\[
1\to \mathrm{Hom}(N,\mathbb{C}^*) \to (\mathbb{C}^*)^\mathfrak{r} \stackrel{\mathfrak{q}}{\longrightarrow} \mathcal{M}=\mathrm{Hom}(\mathbb{L},\mathbb{C}^*)\to 1.
\]
We see that $\mathcal{Y}_q=\mathfrak{q}^{-1}(q)\cong (\mathbb{C}^*)^n$ is a coset of the subtorus $\mathrm{Hom}(N,\mathbb{C}^*)$ in $(\mathbb{C}^*)^\mathfrak{r}$. Here $q=(q_1,\dots, q_\mathfrak{p})$ are coordinates on $\mathcal{M}$. For any $\beta\in \mathbb{L}$, denote $q^\beta=\prod_{a=1}^\mathfrak{p} q_a^{\langle \beta, e_a^\vee \rangle}$. Let $u_1,\dots,u_n$ and $v_1,\dots,v_n$ be the coordinates on the two factors of $M_\mathbb{R}\times M_\mathbb{R}$. Let $y_i=-v_i+2\pi {\bold{i}} u_i$ and $Y_i=e^{y_i}$. Then $y_1,\dots,y_n$ are complex coordinates on $\tilde{\mathcal{Y}}=M_\mathbb{R} \times M_\mathbb{R}\cong \mathbb{C}^n$, while $Y_1,\dots,Y_n$ are complex coordinates on $\mathcal{Y}= M_{\mathbb{C}^*}=M_\mathbb{R}/M \times M_\mathbb{R}\cong T(M_\mathbb{R}/M)\cong (\mathbb{C}^*)^n$. We fix a splitting of the exact sequence \eqref{eqn:NtN}, i.e. we choose a surjective map $\eta: {\tilde{N}} \to \mathbb{L}$ such that $\eta(\tilde{b}_i)=\sum_{a=1}^\mathfrak{p} \eta_{ia} e_a$ (so $e^\vee_a=\sum_{i=1}^\mathfrak{r} \eta_{ia}D_i$) and $\eta\circ\psi=\mathrm{id}_{\mathbb{L}}$, where $\eta_{ia}\in \mathbb{Z}$.
This splitting identifies $\mathcal{Y}_q$ with $\mathcal{Y}=M_{\mathbb{C}^*}=\mathrm{Hom}(N,\mathbb{C}^*)$ via
\begin{equation}
\label{eqn:B-model-identification}
\tilde{X}_i=q'_iY^{b_i},\quad Y^{b_i}=\prod_{j=1}^n Y_j^{b_{ij}},\quad q_i'=\prod_{a=1}^\mathfrak{p} q_a^{\eta_{ia}}.
\end{equation}
Here $(b_{i1},\dots,b_{in})$ are the coordinates of $b_i$ in $N$. We also identify $\tilde{\mathcal{Y}}_q$ with $\tilde{\mathcal{Y}}=M_\mathbb{R} \times M_\mathbb{R}$. The splitting $\eta$ specifies isomorphisms ${\tilde{N}}=\mathbb{L}\oplus N$ and ${\tilde{M}}=\mathbb{L}^\vee \oplus M$. The superpotential on $\mathcal{Y}_q$ is
$$
W=\sum_{i=1}^\mathfrak{r} \tilde{X}_i.
$$
We take the following holomorphic volume form on $\mathcal{Y}=M_\mathbb{R}/M \times M_\mathbb{R}=TM_T$:
$$
\Omega=\frac{dY_1\dots dY_n}{Y_1\dots Y_n}.
$$
Here we denote $M_T=M_\mathbb{R}/M\cong (S^1)^n$. Let $W_\eta$ be the function $W$ on $\mathcal{Y}$ once we identify $\mathcal{Y}_q$ with $\mathcal{Y}$ by $\eta$ via Equation \eqref{eqn:B-model-identification}. We still denote $W=W_\eta$ as a holomorphic function on $\mathcal{Y}$, keeping in mind the choice of $\eta$ we have made. We will see later that this choice does not play any role in the computation of the integrals, and in fact this is not directly needed in the proof of the main theorem of this paper (Theorem \ref{thm:gamma-ii}). The superpotential $W$ on $\mathcal{Y}$ is a Laurent polynomial in the coordinates $Y_1,\dots, Y_n$ (also written $X_1,\dots,X_n$ below) and $q_1,\dots, q_\mathfrak{p}$, and we denote it by $W_q$ when fixing $q_1,\dots, q_\mathfrak{p}$.
\subsection{Setup of the LG A-model}
\label{sec:lg-a}
Recall that in the fan $\Sigma \subset N_\mathbb{R}$, the rays are $\rho_i=\mathbb{R}_{\geq 0} b_i\in \Sigma(1)$ for $i=1,\dots,\mathfrak{r}$, where $b_i$ is the primitive generator of $\rho_i \cap N$. By the smoothness condition on our toric variety $X$, each top dimensional cone $\sigma \in \Sigma$ is simplicial, and the ray generators $b_i\in\sigma$ form a $\mathbb{Z}$-basis of $N$. Let $A = \{b_1, \cdots, b_\mathfrak{r}\}$, and let $Q = \text{ConvHull}(A)$ be the convex hull of $A$ in $N_\mathbb{R}$. By Equation \eqref{eqn:B-model-identification} the superpotential $W_q$ can thus be written as
\[
W_q = \sum_{\alpha \in A} c_\alpha(q) X^{\alpha},
\]
where $c_\alpha(q) \in \mathbb{C}^*$ depends on the choice of $q \in \mathcal{M}$. If we choose $q=1 \in \mathcal{M}$, then $c_\alpha(q) = 1$. We have a canonical Log map, $\Log: M_{\mathbb{C}^*} \to M_\mathbb{R}$, induced by $\log|\cdot |: \mathbb{C}^* \to \mathbb{R}$. Mikhalkin shows \cites{Mikhalkin04} that the image $\mathcal{A}_t:= \frac{\Log(W_q^{-1}(t ))}{\log |t|}$ of a fiber of $W_q$ over $t \in \mathbb{C}$ under the rescaled $\Log$ map converges, as $|t| \to \infty$, to a polyhedral complex $\Pi_A$ in $M_\mathbb{R}$, which only depends on $A \subset N$ and is independent of $q$. The components of the complement of $\Pi_A$ are in one-to-one correspondence with the elements of $A \sqcup \{0\}$, and $0\in N$ corresponds to the compact polytope
\[
P = \{ x \in M_\mathbb{R} \mid \langle x, \alpha \rangle \leq 1,\ \forall \alpha\in A \} \subset M_\mathbb{R}.
\]
$P$ is also the dual polytope to $Q \subset N_\mathbb{R}$. By the smoothness and Fano conditions, $P$ is also a lattice polytope.
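As a concrete illustration (a standard example added here only to fix ideas; it is not used in the arguments below), take $X=\mathbb{P}^2$. Then $N=\mathbb{Z}^2$, $b_1=(1,0)$, $b_2=(0,1)$, $b_3=(-1,-1)$, so $\mathfrak{r}=3$, $\mathfrak{p}=1$, and the single charge vector is $l^{(1)}=(1,1,1)$. Hence
\[
\mathcal{Y}_q=\{\tilde X_1\tilde X_2\tilde X_3=q\}\subset(\mathbb{C}^*)^3,\qquad W_q=Y_1+Y_2+\frac{q}{Y_1Y_2}
\]
for a suitable choice of the splitting $\eta$ in \eqref{eqn:B-model-identification}, and
\[
Q=\mathrm{ConvHull}\{(1,0),(0,1),(-1,-1)\},\qquad P=\{x\in M_\mathbb{R}\mid x_1\le 1,\ x_2\le 1,\ -x_1-x_2\le 1\}.
\]
Similarly, for $X=\mathbb{P}^1$ one gets $W_q=Y+\frac{q}{Y}$; we will refer back to these two cases for illustration below.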
We choose a continuous convex function $\varphi_\mathbb{R}: M_\mathbb{R} \to \mathbb{R}$, homogeneous of degree two and smooth on $M_\mathbb{R} \setminus \{0\}$, such that for each positive dimensional face $F$ of $P$ the restriction of $\varphi_\mathbb{R}$ to $F$ attains its minimum in the interior of $F$.\footnote{We may smooth $\varphi_\mathbb{R}$ near $0$, but it does not matter for the statement on the skeleton of a fiber of $W_q$ near $\infty$. Furthermore, our identification of $M_\mathbb{R}$ and $N_\mathbb{R}$ is later used to translate Lagrangian cycles in $T^*M_T$ to $\mathcal{Y}$ to perform oscillatory integrals on, which does not require the smoothness of this identification.} In \cites{ZhouPeng18}, the second-named author shows that such a function exists and that the space of choices is contractible. Let $\varphi = \Log^{*}(\varphi_\mathbb{R})$ be a K\"ahler potential on $M_{\mathbb{C}^*}$, and let
\[
\lambda_\varphi = - d^c \varphi, \quad \omega_\varphi = -dd^c\varphi
\]
be the Liouville one-form and the symplectic two-form on $M_{\mathbb{C}^*}$. If we fix an identification $M_{\mathbb{C}^*} \cong (\mathbb{C}^*)^n$, with complex coordinates $z_i$ and polar coordinates $(\rho_i, \theta_i) \in \mathbb{R} \times S^1$ such that $z_i = e^{\rho_i + i \theta_i}$, then we have
\[
\lambda_\varphi = \sum_i \partial_i \varphi_\mathbb{R}(\rho) d \theta_i , \quad \omega_\varphi = \sum_{i,j} \partial_{ij} \varphi_\mathbb{R}(\rho) d\rho_i \wedge d \theta_j.
\]
The Riemannian metric defined by $g_\varphi (X, Y) = \omega_\varphi(X, JY)$ is then
\[
g_\varphi = \sum_{i,j} \partial_{ij} \varphi_\mathbb{R}(\rho)( d\rho_i \otimes d\rho_j + d\theta_i \otimes d\theta_j).
\]
The above choice of $\varphi_\mathbb{R}: M_\mathbb{R} \to \mathbb{R}$ induces a Legendre transformation
\[
\Psi_\mathbb{R}: M_\mathbb{R} \xrightarrow{\sim} N_\mathbb{R}, \quad \rho \mapsto - d \varphi_\mathbb{R}|_\rho \in T^*_\rho M_\mathbb{R} \cong N_\mathbb{R}.
\]
The extra minus sign here is added for the purpose of integration, which we will discuss in the next section. Since $\varphi_\mathbb{R}$ is convex and homogeneous of degree $2$, $\Psi_\mathbb{R}$ sends rays in $M_\mathbb{R}$ to rays in $N_\mathbb{R}$. Thus, using the canonical isomorphisms $M_{\mathbb{C}^*} \cong M_T \times M_\mathbb{R}$ and $T^*M_T \cong M_T \times N_\mathbb{R}$, we have
\begin{equation}
\Psi: M_{\mathbb{C}^*} \xrightarrow{\sim} T^*M_T.
\label{legendre}
\end{equation}
If we equip $T^*M_T \cong \{(\theta_i, p_i) \in T^n \times \mathbb{R}^n\} $ with the exact symplectic structure
\[
\lambda_{\mathrm{std}} = -\sum_i p_i d \theta_i, \quad \omega_{\mathrm{std}} = \sum_{i} d \theta_i \wedge dp_i,
\]
then one easily checks that $\Psi$ preserves the Liouville structure:
\[
\Psi^*(\lambda_{\mathrm{std}}) = \lambda_{\varphi}, \quad \Psi^*(\omega_{\mathrm{std}}) = \omega_{\varphi}.
\]
\section{Oscillatory integrals, GW invariants and Gamma class}
\subsection{Lattice generated by Lefschetz thimbles}
\label{sec:thimbles}
We define regions ${U_{\epsilon}}$ and ${U_{\epsilon}}e$ for the complex parameters $q_1,\dots, q_\mathfrak{p}$ as below:
\begin{align*}
&{U_{\epsilon}}=\{|q_i|<\epsilon, i=1,\dots,\mathfrak{p}\},\\
&{U_{\epsilon}}e=\{|q_i|<\epsilon, -\epsilon'< \arg q_i<\epsilon', i=1,\dots,\mathfrak{p}\}.
\end{align*}
For any $q\in \mathcal{M}$ we define
\[
{\mathbb{H}}_q=H_n(\mathcal{Y}, \mathrm{Re}(W_q)\gg 0;\mathbb{Z}).
\]
By \cite[Lemma 3.8 and Proposition 3.12]{Iritani09}, when $q\in {U_{\epsilon}}$ for some sufficiently small $\epsilon$, we have ${\mathbb{H}}_q\cong \mathbb{Z}^\mathfrak{s}$, where $\mathfrak{s}=\dim H^*(X)$. We choose such an $\epsilon$ and require $q\in {U_{\epsilon}}$. We denote by ${U_{\epsilon}}c$ and ${U_{\epsilon}}ec$ the subsets of ${U_{\epsilon}}$ and ${U_{\epsilon}}e$, respectively, on which $W_q$ is holomorphic Morse with distinct critical values. They are open and dense subsets. Let ${\mathsf{cr}}_1,\dots,{\mathsf{cr}}_\mathfrak{s}$ be these critical values.
\begin{definition}
An \emph{admissible phase $\theta$ of $W_q$} satisfies
\[
{\mathrm{Im}}({\mathsf{cr}}_i e^{-{\bold{i}} \theta})\neq {\mathrm{Im}}({\mathsf{cr}}_j e^{-{\bold{i}} \theta})\quad \text{for } i\neq j,
\]
i.e. the segment between ${\mathsf{cr}}_i$ and ${\mathsf{cr}}_j$ is not parallel to $e^{{\bold{i}}\theta}$. Here we denote ${\bold{i}}=\sqrt{-1}$.
\end{definition}
\begin{definition}
\label{def:thimbles-cycle}
We define the Lefschetz thimbles $\Gamma_i$ in phase $\theta$ to be the Lefschetz thimbles associated to the vanishing paths $\gamma_i$ illustrated in the following figure.
\begin{figure}[h]
\centering{
\tikz{
\draw (0,0) to +(-2,1) to[out=150, in =180] (3,3);
\draw [dashed, opacity=0.3] (0, 0) to +(1,0);
\node [below] at (0,0) {$\mathsf{cr}_{i}$};
\draw (0.5, 0.5) to +(-1.5, 0.75) to[out=150, in =180] (3,2.5);
\draw [dashed, opacity=0.3] (0.5, 0.5) to +(1,0);
\node [right, above] at (0.5,0.5) {$\mathsf{cr}_{i-1}$};
\draw (-2,0.5) to +(-1.5,0.75) to[out=150, in =180] (3, 3.5);
\draw [dashed, opacity=0.3] (-2, 0.5) to +(1,0);
\node [below] at (-2,0.5) {$\mathsf{cr}_{i+1}$};
\draw (0.2, 0) arc (0:150:0.2);
\node at (0,0.4) {$\theta$};
}
\caption{Vanishing paths and the Fukaya-Seidel category}
\label{fig:topView}
}
\end{figure}
Each $\gamma_i$ starts from the critical value ${\mathsf{cr}}_i$ in a straight line of direction $\theta$, and then turns to the direction of a very small positive argument $\delta$ (clockwise if $\theta>\delta$, counterclockwise if $\theta<\delta$). It then becomes a ray in the direction of $e^{{\bold{i}} \delta}$ and goes to $e^{{\bold{i}}\delta}\infty$. Since $\theta$ is admissible, these $\gamma_i$ do not intersect each other.
\end{definition}
These Lefschetz thimbles $\Gamma_1,\dots, \Gamma_\mathfrak{s}$ are generators of ${\mathbb{H}}_q$. For any cycle $\Gamma\in {\mathbb{H}}_q$, we use
\[
\int_{\Gamma} e^{-\frac{W}{z}}\Omega
\]
to denote the integral of the holomorphic form $e^{-\frac{W}{z}}\Omega$ over $\Gamma$, for any $\mathrm{Re} (z)>0$. There is a pairing on ${\mathbb{H}}_q$. We may parallel transport $\Gamma\in H_n(\mathcal{Y}, \mathrm{Re} (W_q) \gg 0;\mathbb{Z})$ to $e^{{\bold{i}} \pi} \Gamma\in H_n(\mathcal{Y}, \mathrm{Re} (W_q) \ll 0;\mathbb{Z})$ by isotoping the class through $H_n(\mathcal{Y}, \mathrm{Re} ( e^{-{\bold{i}}\theta}W_q) \gg 0;\mathbb{Z})$ for $\theta$ from $0$ to $\pi$, i.e. rotating the tails of the vanishing paths counter-clockwise by 180 degrees. Then we define the pairing of $\Gamma,\Gamma'\in {\mathbb{H}}_q$ by the signed count of intersection points
\[
S(-,-): {\mathbb{H}}_q \times {\mathbb{H}}_q \to \mathbb{Z}, \quad S(\Gamma, \Gamma') := \sharp(e^{{\bold{i}} \pi} \Gamma \cap \Gamma').
\]
This is also the perfect pairing between $H_n(\mathcal{Y},\mathrm{Re}(W_q)\gg 0;\mathbb{Z}) $ and $H_n(\mathcal{Y},\mathrm{Re}(W_q)\ll 0;\mathbb{Z})$.
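Continuing the illustration for $X=\mathbb{P}^1$ (a standard computation included only as a consistency check), the superpotential $W_q=Y+\frac{q}{Y}$ has critical points $Y=\pm\sqrt q$ and critical values ${\mathsf{cr}}_{1,2}=\pm 2\sqrt q$, so $\mathfrak{s}=2$ and, for $q>0$, every phase $\theta\notin\pi\mathbb{Z}$ is admissible. The integral over the positive real cycle $\{Y>0\}$, which passes through the critical point $Y=\sqrt q$, is a modified Bessel function:
\[
\int_{0}^{\infty} e^{-\frac{1}{z}\left(Y+\frac{q}{Y}\right)}\frac{dY}{Y}=2K_0\!\left(\frac{2\sqrt q}{z}\right)\sim \sqrt{\pi z}\, q^{-1/4}\, e^{-\frac{2\sqrt q}{z}},\qquad z\to 0^+,
\]
which agrees with the leading stationary phase approximation $(2\pi z)^{1/2}\det({\mathrm{Hess}}\,W_q)^{-1/2}e^{-{\mathsf{cr}}_1/z}$, since $\det{\mathrm{Hess}}=2\sqrt q$ in the coordinate $\log Y$. This function is annihilated by $(zq\partial_q)^2-q$, which is the quantum differential equation of $\mathbb{P}^1$ (cf. Equation \eqref{eqn:qde-tau} below, with $q_1=e^{\tau_1}$).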
Define the \emph{Stokes matrix} of $(\mathcal{Y},W)$ at phase $\theta$ by
\[
S_{ij}=S(\Gamma_i,\Gamma_j).
\]
In particular this is an upper triangular integral matrix (\cite[Corollary 4.12]{Iritani09})\footnote{Our sign convention slightly differs from Iritani's, but $S_{ij}$ remains an upper triangular matrix as there. }:
\[
S_{ii}=1, \quad \text{and}\;\; S_{ij}=0 \quad \forall i > j.
\]
\subsection{Genus-0 descendant potential and Iritani's theorem}
We fix some notation for Gromov-Witten theory.
\begin{definition}
\label{def:gw-potential}
Let $X$ be a complete toric manifold. We define the genus $g$, degree $d\in H_2(X;\mathbb{Z})$ descendant Gromov-Witten invariants of $X$ as
\[
\langle \tau_{a_1}(\gamma_1)\dots \tau_{a_n}(\gamma_n)\rangle_{g,n,d}^{X}=\langle \gamma_1 \psi_1^{a_1}\dots \gamma_n\psi_n^{a_n}\rangle_{g,n,d}^{X}=\int_{[\overline{\mathcal{M}}_{g,n}(X;d)]^{\mathrm{vir}}}\prod_{j=1}^n \psi_j^{a_j}\mathrm{ev}^*_j(\gamma_j)\in \mathbb{Q},
\]
where $\gamma_i\in H^*(X)$ and $\mathrm{ev}_j:\overline{\mathcal{M}}_{g,n}(X;d)\to X$ is the $j$-th evaluation map. Let $\boldsymbol{\tau}\in H^{2}(X;\mathbb{C})$. We also define
\begin{align}
\label{eqn:correlator}
\llangle \tau_{a_1}(\gamma_1),\dots,\tau_{a_n}(\gamma_n)\rrangle_{g,n}^{X}=\llangle \gamma_1 \psi_1^{a_1},\dots,\gamma_n\psi_n^{a_n}\rrangle_{g,n}^{X}\\\nonumber
=\sum_{d\in \mathbb{K}_{\mathrm{eff}}} \sum_{\ell=0}^\infty \frac{1}{\ell!}\langle \tau_{a_1}(\gamma_1),\dots, \tau_{a_n}(\gamma_n),\underbrace{\tau_0(\boldsymbol{\tau}),\dots,\tau_0(\boldsymbol{\tau})}_{\text{$\ell$ times}}\rangle_{g,n+\ell,d}^{X}.
\end{align}
\end{definition}
We do not use Novikov variables since the convergence issue regarding $\boldsymbol{\tau}$ is resolved in \cites{Iritani07,CoIr15}, or after invoking the mirror theorem and the oscillatory integral expression of the $I$-function. Equation \eqref{eqn:correlator} defines a complex analytic function of $e^{\boldsymbol{\tau}}$ on ${U_{\epsilon}}e$ for small $\epsilon$. We further define
\[
\llangle f(\psi/z),\dots \rrangle_{g,h}^X=\sum_{i\geq 0} z^{-i}\llangle a_i \psi^i,\dots\rrangle_{g,h}^X,
\]
where $f=\sum_{i\geq 0} a_i z^i$ is an analytic function at $0$. Define a degree operator $\widetilde{\deg}: H^*(X)\to H^*(X)$ such that
\[\widetilde{\mathrm{deg}}\vert_{H^{2p}(X)}=(p-\frac{\dim(X)}{2})\mathrm{id}_{H^{2p}(X)}.\]
We cite the main theorem from Iritani's paper \cite[Theorem 4.11]{Iritani09}.
\begin{theorem}
\label{thm:iritani}
There is an isomorphism $\sigma:K(X)\to {\mathbb{H}}_q$ such that
\begin{itemize}
\item $\chi(V,W)=S(\sigma(V),\sigma(W)),\ \forall V,W\in K(X)$.
\item We have the following identity:
$$\llangle1, \frac{z^{-\widetilde \deg}z^{c_1(X)}{\boldsymbol{A}}_V}{z+\psi}\rrangle_{0,2}^X=(-2\pi z)^{-n/2}\int_{\sigma(V)}e^{-\frac{W}{z}}\Omega.$$
\end{itemize}
Here $z>0$ and the \emph{higher asymptotic classes} are ${\boldsymbol{A}}_V=\mathbb{C}h(V)\hat\Gamma_X$, where $\mathbb{C}h(V)$ is the modified Chern character
\[
\mathbb{C}h(V)=\sum_{p\geq 0}(2\pi{\bold{i}})^p \mathrm{ch}_p(V).
\]
We understand $z^s$ as $\exp(s\log z)$ for $\log z\in \mathbb{R}$. The \emph{Gamma class} is
\begin{equation}
\label{eqn:Gamma}
\hat\Gamma_X=\prod_{i=1}^{n}\Gamma(1+\delta_i),
\end{equation}
where $\delta_1,\dots,\delta_n$ are the Chern roots of $TX$.
\end{theorem}
\subsection{Quantum cohomology and Frobenius algebra}
The dimension of $H^*(X;\mathbb{C})$ is $\mathfrak{s}$ -- recall that in Section \ref{sec:LG-B-model} we have chosen $e^\vee_1,\dots,e^\vee_\mathfrak{p}\in \mathbb{L}^\vee\cong H^2(X;\mathbb{Z})$ -- we let $H_a=e^\vee_a$ for $a=1,\dots,\mathfrak{p}$ and let $H_1,\dots,H_\mathfrak{s}$ be a homogeneous basis of $H^*(X;\mathbb{C})$. Its dual basis in $H^*(X;\mathbb{C})$ is $H^1,\dots, H^\mathfrak{s}$. The \emph{small quantum product} of the toric Fano variety $X$ is defined as follows:
\begin{align*}
\alpha\star_{\boldsymbol{\tau}}\beta&=\sum_{\ell\geq 0,d\in \mathbb{K}_{\mathrm{eff}}}\sum_{a=1}^\mathfrak{s} \frac{1}{\ell!}\langle\alpha,\beta,H_a,\boldsymbol{\tau},\dots,\boldsymbol{\tau}\rangle_{0,3+\ell,d}^X H^a\\
&=\sum_{d\in \mathbb{K}_{\mathrm{eff}}} \sum_{a=1}^\mathfrak{s} \langle \alpha,\beta,H_a\rangle_{0,3,d}^X e^{\langle\boldsymbol{\tau},d\rangle} H^a,
\end{align*}
where $\boldsymbol{\tau}=\tau_1 H_1+\dots +\tau_\mathfrak{p} H_\mathfrak{p}\in H^2(X)$, $e^{\langle\boldsymbol{\tau},d\rangle}=e^{\tau_1\langle d,H_1\rangle }\dots e^{\tau_\mathfrak{p}\langle d,H_\mathfrak{p}\rangle}$, and the second equality follows from the divisor equation. Since $X$ is Fano, there are only finitely many curve classes $d$ such that $\langle \alpha,\beta, H_a\rangle_{0,3,d}^X\neq 0$, for dimension reasons, so the above definition is well-defined for all $\boldsymbol{\tau}$.\footnote{The convergence of the big quantum cohomology, when $\boldsymbol{\tau}$ is not necessarily of degree $2$, is unknown in general.} The quantum product $\star_{\boldsymbol{\tau}}$ together with the usual non-degenerate pairing
\begin{align}
\label{eqn:cohomology-pairing}
(\alpha,\beta)=\int_X \alpha\cup\beta,\ \alpha,\beta\in H^*(X),
\end{align}
gives the cohomology group $H^*(X)$ a Frobenius algebra structure. We denote this algebra by $QH^*_{\boldsymbol{\tau}}(X)$. We say the quantum cohomology $QH^*(X)$ is \emph{semisimple} at $\boldsymbol{\tau}$ if there is a basis $\phi_1,\dots,\phi_\mathfrak{s}\in H^*(X)$ such that
\[
\phi_i\star_{\boldsymbol{\tau}}\phi_j=\delta_{ij} \phi_i.
\]
This basis $\{\phi_i\}_{i=1}^\mathfrak{s}$ is called the \emph{idempotent basis} or \emph{canonical basis}. It is unique up to permutation. Being semisimple is an open condition on the parameter $\boldsymbol{\tau}$, and for a toric Fano variety $X$ the quantum cohomology is generically semisimple for $e^{\boldsymbol{\tau}} \in {U_{\epsilon}}e$ with sufficiently small $\epsilon$ \cite[Corollary 4.9]{Iritani09}. The canonical basis is then an $H^*(X)$-valued function of $\boldsymbol{\tau}$, and we denote it by $\phi_i(\boldsymbol{\tau})$ in case we want to emphasize its dependence on $\boldsymbol{\tau}$.
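For instance (again purely as an illustration), for $X=\mathbb{P}^2$ with hyperplane class $H$ and $q=e^{\tau_1}$ one has
\[
H\star_{\boldsymbol{\tau}} H=H^2,\qquad H\star_{\boldsymbol{\tau}} H^2=q\cdot 1,
\]
so $QH^*_{\boldsymbol{\tau}}(\mathbb{P}^2)\cong \mathbb{C}[H]/(H^3-q)$. This algebra is semisimple for $q\neq 0$, with idempotent basis supported at the three roots $H=\zeta q^{1/3}$, $\zeta^3=1$; the eigenvalues of $c_1(X)\star_{\boldsymbol{\tau}}=3H\star_{\boldsymbol{\tau}}$ are $3\zeta q^{1/3}$, matching the critical values of the mirror superpotential $W_q=Y_1+Y_2+q/(Y_1Y_2)$ from the example above, as one expects from the mirror isomorphism recalled below.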
We define the following quantum connection as a meromorphic connection on the trivial bundle $\mathcal{F}: H^*(X) \times (H^2(X) \times \mathbb{P}^1)\to H^2(X) \times \mathbb{P}^1$:
\begin{align}
\nabla_{\alpha}&=\partial_\alpha+\frac{1}{z}\alpha\star_{\boldsymbol{\tau}}, \label{eqn:qde-tau} \\
\nabla^{\boldsymbol{\tau}}_{z\partial z}&=z\frac{\partial}{\partial z}-\frac{1}{z}(E\star_{\boldsymbol{\tau}})+\widetilde{\mathrm{deg}}.\label{eqn:qde-tau-off}
\end{align}
Here the \emph{Euler vector field} is
\[E=c_1(X)+\sum_{a=1}^{\mathfrak{p}}\left( 1-\frac{1}{2}\deg H_a\right)\tau_a H_a.\]
Note that $\nabla_{z\partial z}^{\boldsymbol{\tau}}$ does not involve taking derivatives in the $\boldsymbol{\tau}$-direction. The \emph{mirror map} $q=q(\boldsymbol{\tau})$ relates the B-model parameters $q_1,\dots,q_\mathfrak{p}$ with the A-model parameters $\tau_1,\dots, \tau_\mathfrak{p}$:\footnote{Since $X$ is Fano, the mirror map takes this simple form.}
\begin{equation}
q_a=\exp(\tau_a),\ a=1,\dots,\mathfrak{p}.
\label{eqn:mirror-map}
\end{equation}
We denote $\tilde{U}e=q^{-1}({U_{\epsilon}})$, $\tilde{U}ec=q^{-1}({U_{\epsilon}}c)$, $\tilde{U}ee=q^{-1}({U_{\epsilon}}e)$, and $\tilde{U}eec=q^{-1}({U_{\epsilon}}ec)$.
\subsection{Flat sections from Gromov-Witten potentials}
We define the following descendant operator for $\alpha\in H^*(X)$:
\[
L(\boldsymbol{\tau},z)\alpha:=e^{-\frac{\boldsymbol{\tau}}{z}}\alpha -\sum_{d\in {\mathrm{NE}}_X}\sum_{a=1}^\mathfrak{s} H^a\langle H_a,\frac{e^{-\frac{\boldsymbol{\tau}}{z}}\alpha}{z+\psi}\rangle_{0,2,d}e^{\langle\boldsymbol{\tau},d\rangle}.
\]
The convergence of this operator $L(\boldsymbol{\tau},z)$ on $\tilde{U}e \times \mathbb{C}^*$ for sufficiently small $\epsilon$ follows from the convergence of $\star_{\boldsymbol{\tau}}$ and the fact that $L(\boldsymbol{\tau},z)\alpha$ satisfies Equation \eqref{eqn:qde-tau} \cite[Proposition 2.4]{Iritani09}.
\begin{proposition}[Definition 2.5, Equation (19) and Definition 2.9 of \cites{Iritani09}]
For any $K$-group element $E\in K(X)$, the generating function
\begin{equation}
\label{eqn:S}
\mathcal{Z}(E)=L(\boldsymbol{\tau},z)z^{-\widetilde{\deg}}z^{c_1(X)} \mathbb{C}h(E)\hat\Gamma_X=\sum_{i=1}^\mathfrak{s} \llangle H_i,\frac{z^{-\widetilde{\deg}}z^{c_1(X)}{\boldsymbol{A}}_{[E]}}{z+\psi}\rrangle_{0,2}^X H^i
\end{equation}
is a flat section of the quantum connection $\nabla$.
\end{proposition}
\begin{remark}
Set $z>0$ and choose $\log z\in \mathbb{R}$ as in Theorem \ref{thm:iritani}. We get a single-valued section on ${U_{\epsilon}}e \times (0,\infty)$. When we fix $\boldsymbol{\tau}$ (with $e^{\boldsymbol{\tau}}\in {U_{\epsilon}}e$) as a constant, this provides a solution to \eqref{eqn:qde-tau-off}.
\end{remark}
\subsection{Mirror symmetry for quantum cohomology}
Here we introduce a related B-model description of the D-module in \cite{Iritani09}. We do not explicitly describe a B-model D-module but just state facts from \cites{Iritani09} for our purposes here. Let $\mathcal{M}=(\mathbb{C}^*)^\mathfrak{p}$. We exclude bad points and denote by $\mathcal{M}^\circ$ the remaining Zariski open and dense subset, such that for $q\in \mathcal{M}^\circ$, $W_q$ is \emph{non-degenerate at infinity} \cite[Definition 3.6]{Iritani09}. For sufficiently small $\epsilon$, ${U_{\epsilon}}\subset \mathcal{M}^\circ$ \cite[Lemma 3.8]{Iritani09}.
Following \cite[p1043]{Iritani09}, we define
\[
R^\vee_{\mathbb{Z},(q,z)}=H_n(\mathcal{Y}_q,\{\mathrm{Re}(W_q/z)\gg 0\};\mathbb{Z}),
\]
where $(q,z)\in \mathcal{M}^\circ \times \mathbb{C}^*$. These relative homology groups form a local system $R^\vee_\mathbb{Z}$ of rank $\mathfrak{s}$ over $\mathcal{M}^\circ \times \mathbb{C}^*$. When $z>0$, $R^\vee_{\mathbb{Z},(q,z)}={\mathbb{H}}_q$ in our notation. For $q\in {U_{\epsilon}}e$ with small $\epsilon'$, any $\Gamma \in {\mathbb{H}}_q$ extends to a flat section of $R^\vee_\mathbb{Z}$ over ${U_{\epsilon}}e \times \widetilde{\mathbb{C}^*}$, where $\widetilde{\mathbb{C}^*}=\mathbb{C}$ is the universal cover of $\mathbb{C}^*$. Let $\mathsf{f}_a=\frac{\partial W}{\partial \tau_a}\in \mathcal{O}_{\mathcal{M}^\circ \times \mathbb{C}^*}[X_1^\pm,\dots,X_n^\pm]$ for $a=1,\dots,\mathfrak{p}$. We define the operators $D_a=-z\frac{\partial}{\partial\tau_a}+\mathsf{f}_a$. Since $\{H_a\}_{a=1}^{\mathfrak{p}}$ multiplicatively generate $QH_{\boldsymbol{\tau}}^*(X)$, one may write $H_i,\ i=\mathfrak{p}+1,\dots,\mathfrak{s}$, in the following form:
\begin{align*}
H_i&=\sum_{j=1}^{k_i} A_{ij}(e^{\boldsymbol{\tau}})H_{s_{ij_1}}\star_{\boldsymbol{\tau}} \cdots \star_{\boldsymbol{\tau}} H_{s_{ij_{r_{ij}}}},\quad s_{ij_l}\in\{1,\dots,\mathfrak{p}\},\\
H^i&=\sum_{j=1}^{k^i} A^i_j(e^{\boldsymbol{\tau}})H_{s^i_{j_1}}\star_{\boldsymbol{\tau}} \cdots \star_{\boldsymbol{\tau}} H_{s^i_{j_{r^i_j}}},\quad s^i_{j_l}\in\{1,\dots,\mathfrak{p}\}.
\end{align*}
Then for $i=\mathfrak{p}+1,\dots,\mathfrak{s}$ we define
\begin{align*}
&\mathsf{f}_i=\sum_{j=1}^{k_i} A_{ij}(q)D_{s_{ij_1}}\cdots D_{s_{ijr_{ij}}}1,\\
&\mathsf{f}^i=\sum_{j=1}^{k^i} A^i_{j}(q)D_{s^i_{j_1}}\cdots D_{s^i_{jr^i_{j}}}1.
\end{align*}
By definition, $\mathsf{f}_i$ and $\mathsf{f}^i$ are in $\mathcal{O}_{\mathcal{M}^\circ \times \mathbb{C}^*}[X_1^\pm,\dots,X_n^\pm]$, and they are in fact polynomials in $z$. We cite Iritani's identification of these two quantum D-modules \cite[Proposition 4.8]{Iritani09} in the following way.
\begin{theorem}[Iritani]
\label{thm:D-mod-isomorphism}
Over $q\in {U_{\epsilon}}$ for some small $\epsilon>0$, there is a morphism
\[
{\mathsf{Mir}}: (\boldsymbol{\tau} \times \mathrm{id})^* (\mathcal{F}/H^2(X;\mathbb{Z})) \to \mathcal{O}_{\mathcal{M}^\circ \times \mathbb{C}^*}
\]
such that
\begin{itemize}
\item The quotient by $H^2(X;\mathbb{Z})$ is taken on the base $H^2(X) \times \mathbb{P}^1$ of this D-module. Indeed the quantum cohomology at $\boldsymbol{\tau}$ is the same as that at $\boldsymbol{\tau}+2\pi{\bold{i}}\boldsymbol{\tau}'$ for any $\boldsymbol{\tau}'\in H^2(X;\mathbb{Z})$.
\item ${\mathsf{Mir}}(H_{a_1}\star_{\boldsymbol{\tau}} \dots\star_{\boldsymbol{\tau}} H_{a_s})=(D_{a_1}\dots D_{a_s} 1),$ and hence ${\mathsf{Mir}}(H_i)=\mathsf{f}_i, {\mathsf{Mir}}(H^i)=\mathsf{f}^i$.
\item The image of ${\mathsf{Mir}}$ is also equipped with a D-module structure. In particular, a flat section $\mathsf{f}$ is characterized by the property that its pairing
\[
\int_\Gamma \mathsf{f} e^{-\frac{W}{z}}\Omega
\]
with any flat section $\Gamma$ of $R^\vee_{\mathbb{Z}}$ is constant. The map ${\mathsf{Mir}}$ preserves this flat structure.
\end{itemize}
\end{theorem}
This theorem implies the following.
\begin{corollary}
\label{cor:B-S-flat}
For a flat section $\Gamma$ of $R^\vee_\mathbb{Z}$,
\[
s_\Gamma:=(-2\pi z)^{-n/2}\sum_{i=1}^\mathfrak{s}\left( \int_\Gamma \mathsf{f}_i e^{-\frac{W}{z}}\Omega \right) H^i
\]
is a flat section of $\mathcal{F}$. Moreover, given $\Gamma \in {\mathbb{H}}_q$ with $q\in {U_{\epsilon}}e$ with small $\epsilon'$, by extending $\Gamma$ to a multi-valued section over ${U_{\epsilon}}e \times \mathbb{C}^*$,
\[
(-2\pi z)^{-n/2}\sum_{i=1}^\mathfrak{s}\left( \int_\Gamma \mathsf{f}_i e^{-\frac{W}{z}}\Omega \right) H^i
\]
is a multi-valued flat section of $\mathcal{F}$ over ${U_{\epsilon}}e \times \mathbb{C}^*$.
\end{corollary}
Following \cite[Section 3.2.3]{Iritani09}, we introduce the Jacobian ring
\[
{\mathrm{Jac}}(W)=\frac{\mathbb{C}[X_1^\pm,\dots,X_n^\pm]}{\langle \frac{\partial W}{\partial X_1},\dots,\frac{\partial W}{\partial X_n}\rangle},\quad {\mathrm{Jac}}(W_q)={\mathrm{Jac}}(W)\otimes_{\mathbb{C}[q^\pm]}\mathbb{C}_q.
\]
We notice that ${\mathrm{Jac}}(W)$ is a $\mathbb{C}[q^\pm]=\mathbb{C}[q_1^\pm,\dots,q_\mathfrak{p}^\pm]$-algebra. For any $f\in \mathbb{C}[X_1^\pm,\dots,X_n^\pm]$, we denote its class in ${\mathrm{Jac}}(W_q)$ by $[f]$. The residue pairing on ${\mathrm{Jac}}(W_q)$ is given by
\begin{equation}
\label{eqn:residue-pairing}
([f],[g]):=\frac{1}{(2\pi{\bold{i}})^n}\int_{|dW|=\delta} \frac{fg\Omega}{\prod_{i=1}^{n} X_i\frac{\partial W}{\partial X_i}}.
\end{equation}
The mirror theorem of Givental/Lian-Liu-Yau \cites{Gi96a,Gi96b,LLY97}, or the direct proof of Fukaya-Oh-Ohta-Ono \cites{FOOO10}, gives the following ring isomorphism under the mirror map \eqref{eqn:mirror-map}:
\[
{\mathsf{mir}}: QH^*_{\boldsymbol{\tau}}(X)\xrightarrow{\sim} {\mathrm{Jac}}(W_{q(\boldsymbol{\tau})}).
\]
Moreover, under ${\mathsf{mir}}$ the cohomology pairing \eqref{eqn:cohomology-pairing} is identified with the residue pairing \eqref{eqn:residue-pairing}.
\begin{remark}
The isomorphism ${\mathsf{mir}}$ is the map ${\mathsf{Mir}}$ restricted to $z=0$,
\[
{\mathsf{Mir}}\vert_{z=0}: QH^*_{\boldsymbol{\tau}}(X)\xrightarrow{\sim} {\mathrm{Jac}}(W_{q(\boldsymbol{\tau})}),
\]
after one regards the elements of the Jacobian ring as elements of $\mathcal{O}_{\mathcal{M}^\circ \times \mathbb{C}^*}[X^\pm]$.
\end{remark}
The Jacobian ring ${\mathrm{Jac}}(W_q)$ is semisimple if and only if $W_q$ is holomorphic Morse. We denote the critical points of $W_q$ by $p_1,\dots,p_\mathfrak{s}$, so that $W(p_i)=\mathsf{cr}_i$. Recall that ${U_{\epsilon}}ec$ is the dense and open subset of ${U_{\epsilon}}e$ where $W_q$ is holomorphic Morse. We list some properties of ${\mathrm{Jac}}(W_q)$ and ${\mathsf{mir}}$ which follow directly from the definitions.
\begin{proposition}
\label{prop:identification-mir}
When $W_q$ is holomorphic Morse we have the following.
\begin{itemize}
\item $[f]=[g]\in {\mathrm{Jac}}(W_q)\Leftrightarrow f(p_i)=g(p_i),\ i=1,\dots,\mathfrak{s}.$
\item The canonical basis of ${\mathrm{Jac}}(W_q)$ is $[\varphi_i]$ such that $\varphi_i(p_j)=\delta_{ij}$.
\item The length of $[\varphi_i]$ is $1/\sqrt{\det({\mathrm{Hess}}_{p_i}(W_q))}$.
\item The map ${\mathsf{mir}}$ identifies
\begin{equation}
\label{eqn:QH=Jac-1}
{\mathsf{mir}}(H_a)=[\frac{\partial W_q}{\partial \tau_a}],\quad {\mathsf{mir}}(D_i)=[\tilde{X}_i],\quad {\mathsf{mir}}(c_1(X))=[W_q],
\end{equation}
where $a=1,\dots,\mathfrak{p}$, $i=1,\dots,\mathfrak{r}$.
\item The map ${\mathsf{mir}}$ relates the canonical bases:
\begin{equation}
\label{eqn:QH=Jac-2}
{\mathsf{mir}}(\phi_i)=[\varphi_i]\in {\mathrm{Jac}}(W_q),\quad \Delta_i:=1/(\phi_i,\phi_i)=-\det({\mathrm{Hess}}_{p_i}(W_q)).
\end{equation}
\end{itemize}
\end{proposition}
\section{HMS with sheaves: Coherent-constructible correspondence}
\label{sec:HMS-sheaf}
\subsection{Categorical notions}
In this paper, we work in the dg or $A_\infty$ setting, although essentially most of our arguments are only needed at the level of K-theory. We also omit ``quasi-'' when talking about equivalences of categories. We use $\mathbb{C}oh(X)$ to denote the dg category of coherent sheaves on $X$ (a dg enhancement of the usual derived category), which is smooth and proper. We follow the notation of \cite{Nadler16}. For a real analytic manifold $B$, let $\mathrm{Sh}^\diamond(B)$ be the big dg category of sheaves of $\mathbb{C}$-modules (possibly with unbounded cohomology sheaves), and let $\mathrm{Sh}^\diamond_\Lambda(B)$ be the full subcategory spanned by objects with singular support in $\Lambda \subset T^*B$. Let $\mathrm{Sh}^w_\Lambda(B)$ be the full subcategory of compact objects in $\mathrm{Sh}^\diamond_\Lambda(B)$, called {\em wrapped microlocal sheaves} \cite[Definition 1.3]{Nadler16}, and let $\mathrm{Sh}_\Lambda(B)$ denote the traditional dg category of constructible sheaves with bounded and constructible cohomology sheaves.\footnote{In the literature, the category of wrapped microlocal sheaves is sometimes denoted by $\mathrm{Sh}_\Lambda(B)^\mathrm{c}$, where $\mathrm{c}$ denotes compact objects as in \cite{GPS18-2}. Note however that the traditional constructible sheaf category is also denoted by $\mathrm{Sh}^\mathrm{c}$, as in \cite{Kuw16}. To avoid confusion, we use Nadler's original notation $\mathrm{Sh}^w$ for wrapped sheaves.}
\begin{remark}
In the case of a smooth projective toric manifold (even a smooth toric DM stack), the FLTZ skeleton $\Lambda$ ensures that
\[
\mathrm{Sh}^w_\Lambda(T^n) \cong \mathrm{Sh}_\Lambda(T^n).
\]
\end{remark}
As a variant of this notion, when $\Lambda^\infty \subset T^\infty B \cong S^*B$, $\mathrm{Sh}_{\Lambda^\infty}(B)$ denotes the full subcategory of objects whose singular support at infinity lies in $\Lambda^\infty$. For any object $F$ in a triangulated (or $A_\infty$, dg) category $\mathcal{C}$, we use $[F]_K$ to denote its class in the $K$-group.
\subsection{Coherent-constructible correspondence (CCC)}
Let $X$ be the complete simplicial toric variety defined by the fan $\Sigma$. We define a conical Lagrangian in $T^*M_\mathbb{R}=M_\mathbb{R} \times N_\mathbb{R}$:
\[
\tilde\Lambda=\bigcup_{\sigma\in\Sigma} (M+\sigma^\perp) \times (-\sigma),
\]
where
\[
\sigma^\perp=\{u\in M_\mathbb{R}\vert \langle u, v\rangle =0,\forall v\in \sigma \}.
\]
The coherent-constructible correspondence, observed by \cites{Bondal06}, relates coherent sheaves to constructible sheaves of certain polyhedral types.
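To illustrate the shape of $\tilde\Lambda$ (a standard example, included only for the reader's convenience), take $X=\mathbb{P}^1$: here $N=\mathbb{Z}$, $b_1=1$, $b_2=-1$, and the cones are $\{0\}$, $\mathbb{R}_{\geq 0}$ and $\mathbb{R}_{\leq 0}$, so
\[
\tilde\Lambda=\big(M_\mathbb{R}\times\{0\}\big)\ \cup\ \big(M\times \mathbb{R}_{\leq 0}\big)\ \cup\ \big(M\times \mathbb{R}_{\geq 0}\big)\ \subset\ T^*M_\mathbb{R}=M_\mathbb{R}\times N_\mathbb{R}.
\]
Its quotient by the $M$-translations (used in the non-equivariant statement below) is the union of the zero section of $T^*S^1$ and the cotangent fiber over the point $0\in S^1=M_\mathbb{R}/M$.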
For any equivariant anti-ample line bundle $E=\mathcal{O}(\sum_{i=1}^\mathfrak{r} c_i D_i)\in \mathbb{C}oh_\mathbb{T}(X)$, the set
\[
\Delta_E=\{x\in M_\mathbb{R}\mid \langle b_i, x\rangle < c_i,\ i=1,\dots,\mathfrak{r}\}
\]
is an open polytope in $M_\mathbb{R}$ such that each vertex corresponds to a top dimensional cone in $\Sigma$. We call polytopes of this form \emph{toric polytopes}. The equivariant coherent-constructible correspondence \cites{FLTZ11} says the following.
\begin{theorem}[Fang-Liu-Treumann-Zaslow \cites{FLTZ11}]
\label{thm:equivariant-ccc}
There is an equivalence
\[
\iota: \mathbb{C}oh_\mathbb{T}(X)\to \mathrm{Sh}tL,
\]
which sends $E=\mathcal{O}(\sum_{i=1}^\mathfrak{r} c_i D_i)$ to $i_{*}\mathbb{C}_{\Delta_E}$, where $i: \Delta_E\hookrightarrow M_\mathbb{R}$ is the embedding and $\mathbb{C}_{U}$ is the constant sheaf on an open set $U$.
\end{theorem}
This theorem has a non-equivariant version as follows. Since $\tilde \Lambda$ is invariant under the translations by $M$, we denote $\Lambda=\tilde\Lambda/M\subset T^*M_T=M_T \times N_\mathbb{R}$ and $M_{T,\sigma}=\sigma^\perp/M\subset M_T$. Define $\Lambda_\sigma=M_{T,\sigma} \times (-\sigma)\subset \Lambda$.
\begin{theorem}[\cites{Kuw16, Vaintrob16, ZhouPeng17b}]
\label{thm:ccc}
There is an equivalence
\[
\iota: \mathbb{C}oh(X)\to \mathrm{Sh}L.
\]
\end{theorem}
\begin{remark}
The functor $\iota$ in Theorem \ref{thm:equivariant-ccc} passes to a fully faithful exact functor \cites{Treumann10} in the non-equivariant setting -- one then needs to show that this is an equivalence. By a slight abuse of notation we use $\iota$ to denote both the equivariant and the non-equivariant CCC.
\end{remark}
We denote by $\mathcal S$ a (Whitney) stratification of $M_T$ such that each linear Lagrangian in $\Lambda$ is contained in the conormal $T^*_{S}M_T$ of some stratum $S\in \mathcal S$. Let $\tilde{\mathcal S}$ be the lift of $\mathcal S$ to $M_\mathbb{R}$.
\subsection{Oscillatory integrals on characteristic cycles}
We prove the following proposition, which allows us to integrate over characteristic cycles after identifying $M_\mathbb{R}$ with $N_\mathbb{R}$ by $\Psi$ as in Section \ref{sec:lg-a}.
\begin{proposition}
\label{prop:La-integrable}
For each small $\epsilon'' > 0$ and $\epsilon>0$, we can find a sufficiently small $\epsilon'$ such that for $q\in {U_{\epsilon}}e$ there exist a conical neighborhood $V \subset T^*M_T \setminus M_T$ containing $\Lambda \setminus M_T$ and a compact set $K \subset T^*M_T$, such that the image $W_q(\Psi^{-1}(V \setminus K))$ lies in the sector $\{ | \arg z | < \epsilon'' \} \subset \mathbb{C}$.
\end{proposition}
\begin{proof}
For each positive dimensional cone $\sigma \in \Sigma$, we consider the corresponding component $\Lambda_{\sigma}$ and the corresponding rays in $\sigma$. Recall that the monomials in $W_q$ are labelled by $\Sigma(1)$. Let $W_{q,\sigma}$ be the partial sum consisting of the terms labelled by $\sigma\cap \Sigma(1)$. We claim the other terms are small in the following sense. Fix a norm $\| - \|$ on $N_\mathbb{R}$.

{\bf Claim:} There exist $\delta>0$ and $C>0$ such that for all $(\theta, p) \in \Lambda_\sigma$,
\[
\frac{| W_q - W_{q,\sigma}| }{| W_{q,\sigma}|} \Big|_{\Psi^{-1} (\theta, p)} < C e^{ - \delta \| p \| }.
\]
Furthermore, there is an open neighborhood $V_\sigma$ of $\Lambda_\sigma$ on which the above estimate holds and $|\arg(W_{q,\sigma})| < \epsilon''/2$.

Assuming this claim, it is easy to check that if we take
$$V = \bigcup_{0 \neq \sigma \subset \Sigma} V_\sigma,$$
then the condition is satisfied by taking $K = \{ \| p \| \leq R\}$ for large enough $R$.

Now we prove the claim. It is easy to check that if $q$ is real (meaning all $q_1,\dots,q_\mathfrak{p}$ are real) and $(\theta, p) \in \Lambda_{\sigma} \subset T^*M_T$, then $W_{q,\sigma}(\Psi^{-1}((\theta, p))) > 0$. Recall that $A$ is the set of vertices of the Newton polytope $Q$ of $W$, which is also the set of primitive generators of the rays in $\Sigma$. Let $A_\sigma = \sigma \cap A$, and let $F_\sigma = \partial Q \cap {\sigma}$ be the corresponding closed face. Let $\psi$ be the Legendre transform of $\varphi$; then, as discussed in \cites{ZhouPeng18} (Corollary 2.14), $\psi$ is ``adapted to $Q$'', i.e. each face of $Q$ has a minimum of $\psi$ in its interior. We define
\[
f_\sigma(p) := \max_{\alpha \in A \setminus A_\sigma} \langle \alpha, \| p \|^{-1} d \psi(p) \rangle - \max_{\beta \in A_\sigma} \langle \beta, \| p \|^{-1} d \psi(p) \rangle.
\]
Then since $\psi$ is ``adapted to $Q$'',
\[
f_\sigma(p) > 0, \quad \forall p \in F_\sigma.
\]
Since $F_\sigma$ is compact, we have
\[
\delta_\sigma = \min_{p \in F_\sigma} f_\sigma(p) > 0.
\]
Thus
\[
\frac{| W - W_\sigma| }{| W_\sigma|} \Big|_{\Psi^{-1} (\theta, p)} \leq B|A| e^{- \delta_\sigma \|p\|}
\]
for $(\theta, p) \in \Lambda_\sigma = M_{T, \sigma} \times (- \sigma)$, where $B$ is the largest ratio among the $|c_\alpha(q)|$, which has an upper bound for $q\in {U_{\epsilon}}e$ with a fixed $\epsilon$. There is a neighborhood $U_\sigma$ of $F_\sigma$ in $\partial Q$ such that
\[
\min_{p \in U_\sigma} f_\sigma(p) > \delta_\sigma/2.
\]
Thus
\[
\frac{| W - W_\sigma| }{| W_\sigma|} \Big|_{\Psi^{-1} (\theta, p)} \leq B|A| e^{- \frac{\delta_\sigma}{2} \|p\|}
\]
for $(\theta, p) \in M_{T, \sigma} \times (\mathbb{R}_{<0} U_\sigma)\subset T^*M_T$. Finally, let $k = \dim \sigma$, so that $W_\sigma$ has $k$ terms. On $M_{T, \sigma} \times M_\mathbb{R}$ the arguments of these $k$ terms are controlled by $\epsilon'$ -- we choose $\epsilon'$ sufficiently small so that these arguments are less than $\epsilon''/(4k)$. Let $\widetilde M_{T, \sigma}$ be a neighborhood of $M_{T, \sigma}$ in $M_T$ where these $k$ terms have arguments in $(-\epsilon''/(2k), \epsilon''/(2k))$. Then one may verify that $\widetilde M_{T, \sigma} \times (\mathbb{R}_{<0} U_\sigma)$ is the desired neighborhood $V_\sigma$ of the claim. This finishes the proof of the claim, and of the proposition.
\end{proof}
\begin{corollary}
Let $F\in \mathrm{Sh}Lc$. Then for $|\arg z|<\frac{\pi}{2}$ the integral
\[
\int_{\mathbb{C}C(F)}e^{-\frac{W}{z}}\Omega:=\int_{\Psi^{-1}(\mathbb{C}C(F))} -e^{-\frac{W}{z}}{\Omega}
\]
is well-defined on some ${U_{\epsilon}}e$.
\end{corollary}
\begin{proof}
The characteristic cycles are supported in the singular support $\Lambda$. We choose $\epsilon,\epsilon'$ for $\epsilon''=\frac{\pi}{2}-|\arg z|$ by Proposition \ref{prop:La-integrable}.
\end{proof}
\subsection{Iritani's isomorphism and oscillatory integrals}
Recall that Iritani's result, Theorem \ref{thm:iritani} \cite[Theorem 4.11]{Iritani09}, gives an identification $\sigma: K(X)\xrightarrow{\sim} {\mathbb{H}}_q$. Let $F\in \mathrm{Sh}Lc$. The characteristic cycle of $F$ is a Lagrangian cycle in $T^*M_T$, and under the identification $\Psi$ it represents a class in ${\mathbb{H}}_q$ by Proposition \ref{prop:La-integrable}. We denote this class by $[\Psi^{-1}(\mathbb{C}C(F))]\in {\mathbb{H}}_q$. As discussed in Corollary \ref{cor:B-S-flat},
\[
s_\Gamma=(-2\pi z)^{-n/2}\sum_{i=1}^\mathfrak{s}\left ( \int_\Gamma \mathsf{f}_i e^{-\frac{W}{z}}\Omega\right) H^i
\]
is a flat section of $\mathcal{F}$. The main theorem of \cite{Fang16} tells us that for $z>0$ and $q\in {U_{\epsilon}}e$ with small $\epsilon$ and $\epsilon'$,
\[
\llangle1, \frac{z^{-\widetilde \deg}z^{c_1(X)}{\boldsymbol{A}}_{[E]}}{z+\psi}\rrangle_{0,2}^X=(-2\pi z)^{-n/2}\int_{\mathbb{C}C(\iota(E))}e^{-\frac{W}{z}}\Omega
\]
for any $E\in \mathbb{C}oh(X)$. The following fact is a direct application of the divisor equation for Gromov-Witten invariants:
\begin{equation}
\label{eqn:take-derivative}
z\frac{\partial}{\partial \tau_a}\llangle \alpha, \frac{\beta}{z+\psi}\rrangle_{0,2}^X=\llangle H_a\star_{\boldsymbol{\tau}}\alpha,\frac{\beta}{z+\psi}\rrangle_{0,2}^X.
\end{equation}
Then we have the following proposition.
\begin{proposition}
\label{prop:any-primary-insertion}
For $z>0$ and $q\in {U_{\epsilon}}e$ with small $\epsilon$ and $\epsilon'$,
\[
\llangle H_i,\frac{z^{-\widetilde \deg}z^{c_1(X)}{\boldsymbol{A}}_{[E]}}{z+\psi}\rrangle_{0,2}^X=(-2\pi z)^{-n/2}\int_{\mathbb{C}C(\iota(E))} e^{-\frac{W}{z}}\mathsf{f}_i\Omega.
\]
\end{proposition}
\begin{proof}
For $s_i\in\{1,\dots,\mathfrak{p}\}$, Equation \eqref{eqn:take-derivative} gives
\begin{align*}
&\llangle H_{s_r}\star_{\boldsymbol{\tau}}\dots \star_{\boldsymbol{\tau}} H_{s_1},\frac{z^{-\widetilde \deg}z^{c_1(X)}{\boldsymbol{A}}_{[E]}}{z+\psi}\rrangle_{0,2}^X=(-2 \pi z)^{-n/2}\int_{\mathbb{C}C(\iota(E))} e^{-\frac{W}{z}}(D_{s_r}\dots D_{s_1}1)\Omega.
\end{align*}
By the definition of $\mathsf{f}_i$,
\begin{align*}
& \llangle H_i,\frac{z^{-\widetilde \deg}z^{c_1(X)}{\boldsymbol{A}}_{[E]}}{z+\psi}\rrangle_{0,2}^X=(-2\pi z)^{-n/2}\int_{\mathbb{C}C(\iota (E))} e^{-\frac{W}{z}}\mathsf{f}_i \Omega,\quad i=1,\dots,\mathfrak{s}.
\end{align*}
\end{proof}
So as a flat section of $\mathcal{F}\vert_{{U_{\epsilon}}e \times \{z>0\}}$, $\mathcal{Z}(E)=s_{\Psi^{-1}(\mathbb{C}C(\iota(E)))}$ for any $E\in \mathbb{C}oh(X)$; alternatively, one can extend both to sections over ${U_{\epsilon}}e \times \mathbb{C}^*$, and they are equal as multi-valued sections. In particular this matches Iritani's isomorphism $\sigma$ with the coherent-constructible correspondence functor $\iota$ at the level of $K$-theory.
\begin{theorem}
\label{thm:central}
\[\sigma([E]_K)=[\Psi^{-1}(\mathbb{C}C(\iota(E)))].\]
\end{theorem}
\section{HMS with Lagrangians}
\label{sec:HMS-Lag}
\subsection{Wrapped Fukaya categories $\mathcal{W}L$ and $\mathcal{W}FS$.}
We consider the following partially wrapped Fukaya categories, defined for a Liouville manifold with a stop \cites{GPS18-1}:
\begin{itemize}
\item The partially wrapped Fukaya category $\mathcal{W}L$ of $T^*M_T$ with stop at $\Lambda^\infty$. We refer to \cites{GPS17,GPS18-1} for the definition of partially wrapped Fukaya categories. Notice that we have a Liouville manifold $T^*M_T$ with stop $\Lambda^\infty$, which is equivalent to a \emph{Liouville sector} in \cites{GPS17}. We only remark that the admissible Lagrangians serving as objects are cylindrical (conical) outside a compact set and do not intersect $\Lambda^\infty$ at infinity.
\item The Fukaya-Seidel category $\mathcal{W}FS$. We adopt the definition of Ganatra-Pardon-Shende \cite[last example on p2]{GPS17}, where the superpotential is $W:M_{\mathbb{C}^*}\to \mathbb{C}$ and we identify $M_{\mathbb{C}^*}$ with $T^* M_T$ by $\Psi$. We consider the partially wrapped Fukaya category $\mathcal{W}FS:=\mathcal{W}(T^*M_T; W^{-1}(+\infty))$, which is defined as follows. The superpotential $W$ gives an embedding of the total space of an $F_0$-bundle over $S^1$ into $\partial_\infty (T^*M_T)$ with the contact form $dt-\lambda$, where the Liouville domain $F_0$ has completion the generic smooth fiber $F$ of $W$ at a very large positive real number, $\lambda$ is the Liouville form on $F_0$, and $t\in S^1$. The circle $S^1$ captures the argument of the superpotential $W$ at infinity. The category $\mathcal{W}(T^*M_T; W^{-1}(+\infty))$ is simply defined to be the partially wrapped Fukaya category of $T^*M_T$ with stop $\{1\} \times F_0$. It is shown in \cite[Corollary 2.9]{GPS18-1} that
\[
\mathcal{W}FS=\mathcal{W}(T^*M_T;W^{-1}(+\infty))\cong \mathcal{W}(T^*M_T,\mathfrak c_{F}).
\]
Here $\mathfrak c_F$ is the \emph{core} (the part that does not escape to infinity under the Liouville flow) of the generic fiber $F$ or $F_0$. In particular, when $q$ is real, by \cites{ZhouPeng18,GammageShende17} we know that $\Lambda^\infty$ is the core of $F_0$ and then
\[
\mathcal{W}FS\cong \mathcal{W}L.
\]
Since $\mathcal{W}FS$ is invariant under small perturbations of $q$, we know $\mathcal{W}FS\cong \mathcal{W}L$ for $q\in {U_{\epsilon}}ec$ for small $\epsilon$ and $\epsilon'$.
\end{itemize}
\subsection{$\mathcal{W}L$ and the microlocalization functor}
We use the result of \cites{GPS18-2} to describe the equivalence between the partially wrapped Fukaya category of a cotangent bundle and the category of constructible sheaves on its base manifold. An earlier result of \cites{Nadler09, NaZa09} equates the infinitesimally wrapped Fukaya category with constructible sheaves. We use partially wrapped categories since the objects we consider (thimbles) fit into them more naturally.\footnote{It would be interesting to see if thimbles are objects in the infinitesimally wrapped Fukaya-Nadler-Zaslow category.} We cite the main theorem from \cites{GPS18-2}.
\begin{theorem}[Ganatra-Pardon-Shende]
\label{thm:GPS}
There is a microlocalization functor
\[
\mu: \mathrm{Sh}_\Lambda^w(M_T) \xrightarrow{\sim} {\mathrm{Perf}}\mathcal{W}L,
\]
which is a quasi-equivalence of $A_\infty$-categories.
\end{theorem}
Note that since we use the Liouville one-form $\lambda = -p\, dx$ on the cotangent bundle, we have ${\mathrm{Perf}}\mathcal{W}L$ rather than ${\mathrm{Perf}}\mathcal{W}Lo$ in the above equivalence. By a slight abuse of notation we also use $\mu$ to denote the quasi-equivalence between $\mathrm{Sh}tLc$ and $\mathcal{W}tL$. Let $U$ be an (open) toric polytope in $M_\mathbb{R}$, let ${\tilde{F}}$ be the associated costandard sheaf in $\mathrm{Sh}tLc$, and let $F$ be the associated costandard sheaf in $\mathrm{Sh}Lc$. Let ${\tilde{L}}'$ be the Lagrangian brane graph $\Gamma_{-d\log f}$, where $f: \bar U\to \mathbb{R}$ is a smooth function with
\[
f|_{U}>0,\ f(\partial U)=0,
\]
and $\Gamma_{-d\log f}$ is the Lagrangian graph of $-d\log f$ in $T^*M_\mathbb{R}$. Define $L'=\pi({\tilde{L}}')$, where $\pi$ is the universal covering $T^*M_\mathbb{R} \to T^*M_T$. After a finite but small positive push of $L'$ (resp. ${\tilde{L}}'$) at infinity, we obtain $L\in \mathcal{W}L$ (resp. ${\tilde{L}} \in \mathcal{W}tL$). We call these Lagrangians $L$ and ${\tilde{L}}$ the costandard Lagrangians associated to $U\subset M_\mathbb{R}$.\footnote{The terminology of a costandard Lagrangian is from \cites{NaZa09}; the definition is slightly different since in Nadler-Zaslow's infinitesimally wrapped Fukaya category $L'$ is the standard Lagrangian.}
\begin{proposition}
\[
\mathbb{C}C({\tilde{F}})=\mathbb{C}C(\mu^{-1}({\tilde{L}})).
\]
\label{prop:cc-standard}
\end{proposition}
\begin{proof}
Let ${\tilde{F}}'=\mu^{-1}({\tilde{L}})\in \mathrm{Sh}tLc$. Then
\[
\mathbb{C}C({\tilde{F}})=\sum_{i=1}^k c_i \Lambda_i,\quad \mathbb{C}C({\tilde{F}}')=\sum_{i=1}^k c'_i \Lambda'_i,
\]
where $\Lambda_i$ (resp. $\Lambda'_i$) is a linear (not necessarily complete) Lagrangian $\Delta_i \times (-\sigma_i)$ (resp. $\Delta'_i \times (-\sigma'_i)$), where $\Delta_i$ and $\Delta'_i$ are strata in $\tilde{\mathcal S}$, while $\sigma_i$ (resp. $\sigma'_i$) are cones in $\Sigma$. In particular, if $\sigma_i=\{0\}$ then $\Lambda_i=\Delta_i$ is an open set in $M_\mathbb{R}$. Notice that $\mathbb{C}C({\tilde{F}})$ and $\mathbb{C}C({\tilde{F}}')$ are determined by the coefficients $c_i$ (resp. $c'_i$) with $\sigma_i\neq \{0\}$, since $M_\mathbb{R}$ is contractible. Pick an interior point $(x,p)$ of any $\Lambda_i^\infty$, where $x\in M_\mathbb{R}$ and $p\in S^\infty_x M_\mathbb{R}$. Then
\[
\chi_{\mathrm{Sh}tLc}(\mathbb{C}_p, {\tilde{F}})=c_i=\begin{cases} 1,\text{ if $x\in \partial U$},\\ 0,\text{ if $x\notin \partial U$}, \end{cases}
\]
while
\[
\chi_{\mathrm{Sh}tLc}(\mathbb{C}_p, {\tilde{F}}')=\chi_{\mathcal{W}tL}(L_p,{\tilde{L}})=c'_i=\begin{cases} 1,\text{ if $x\in \partial U$},\\ 0,\text{ if $x\notin \partial U$}. \end{cases}
\]
Here $L_p$ is the Legendrian linking disk at $p$, which intersects $\Lambda_i$ transversally at a single point.
\end{proof}
If a (not necessarily closed) submanifold $V$ of $M_{\mathbb{C}^*}$ represents a class in ${\mathbb{H}}_q$, we denote this class by $[V]$. For example, if $\Gamma_i$ is a thimble over a path pointing rightwards, as in Figure \ref{fig:topView}, then it represents a class $[\Gamma_i]\in {\mathbb{H}}_q$. By Proposition \ref{prop:La-integrable} a costandard Lagrangian $L$ also represents a class $[\Psi^{-1}(L)]\in {\mathbb{H}}_q$.
We sometimes just write $[L]$ for $[\Psi^{-1}(L)]$ for $L\subset T^*M_T$. Passing from $M_\mathbb{R}$ to $M_T$, from Proposition \ref{prop:cc-standard} and the fact that $[\Psi^{-1}(L)]=[\Psi^{-1}(\mathrm{CC}(F))]$ in ${\mathbb{H}}_q$, we know the following.
\begin{corollary}
For any costandard or standard Lagrangian $L\in \mathcal{W}L$,
\[
[\Psi^{-1}(\mathrm{CC}(\mu^{-1} (L)))]=[\Psi^{-1}(L)].
\]
\label{cor:cycle-standard}
\end{corollary}

\subsection{Thimbles as objects in $\mathcal{W}L$.}
\label{sec:thimbles-obj}
We require that $q\in U_{\epsilon,\epsilon'}$. Then the superpotential $W_q$ has distinct Morse critical points with distinct critical values. Let $\theta$ be an admissible phase and $\Gamma_1,\dots,\Gamma_\mathfrak{s}$ be the corresponding thimbles. We cite \cite[Corollary 1.14]{GPS18-1}. As in \cites{GPS17, GPS18-1}, $\Gamma_i$ can be made into a Lagrangian brane object $L_i$ in $\mathcal{W}L$. Notice that when we say $[L]$ we always mean the class represented by a particular form of $L$ -- in our paper they are always (co)standard Lagrangians or thimbles.
\begin{proposition}
\label{prop:exceptional}
The thimbles $L_1,\dots,L_\mathfrak{s}$ form an exceptional collection, and they generate $\mathcal{W}L$.
\end{proposition}
\begin{lemma}
\[
\chi_{\mathcal{W}L}([L]_K,[L']_K)=S([L],[L']),
\]
where $L$ and $L'$ are each either a thimble object or a standard object in $\mathcal{W}L$.
\end{lemma}
\begin{proof}
Since $\mathcal{W}L$ is generated by thimbles, without loss of generality we may assume $L, L'$ are thimbles, in particular lying over vanishing paths $\gamma, \gamma'$ in $\mathbb{C}$ tending to $+\infty$. By definition, $\mathrm{Hom}_{\mathcal{W}L}(L, L')$ is generated by the intersection points of the thimble $\widetilde L$ with $L'$, where $\widetilde L$ is the thimble over the counterclockwise perturbation $\widetilde \gamma$ of the vanishing path $\gamma$, chosen such that $\widetilde \gamma$ intersects $\gamma$ transversely. Hence both sides boil down to counting intersection points of $\widetilde L$ with $L'$ with signs, and the equality can be verified.
\end{proof}
A thimble object $L_i$ corresponds to a class in ${\mathbb{H}}_q$, denoted by $[L_i]$. By our construction $[L_i]=[\Gamma_i]$.
\begin{proposition}
Assuming
\[
[L_i]_K=\sum_{j=1}^k c_{ij} [G_j]_K,
\]
where $G_j$ are standard Lagrangians, we have
\[
[L_i]=\sum_{j=1}^k c_{ij} [G_j].
\]
\label{prop:thimble-in-standards}
\end{proposition}
\begin{proof}
Let $F'$ be any standard Lagrangian in $\mathcal{W}L$. Then
\[
\chi_{\mathcal{W}L}([L_i]_K,[F']_K)=\chi_{\mathcal{W}L}\Big(\sum_j c_{ij}[G_j]_K, [F']_K\Big)= \sum_j c_{ij}S([G_j],[F']).
\]
On the other hand
\[
\chi_{\mathcal{W}L}([L_i]_K,[F']_K)=S([L_i],[F']).
\]
Since $[F']$ can be chosen to be any of the standard branes, whose classes span ${\mathbb{H}}$, the fact that $S$ is a perfect pairing implies
\[
[L_i]=\sum_{j=1}^k c_{ij}[G_j].
\]
\end{proof}
Since standard Lagrangians generate $\mathcal{W}L$, by Corollary \ref{cor:cycle-standard} and Proposition \ref{prop:thimble-in-standards} we have the following.
\begin{corollary}
If $L$ is a thimble object in $\mathcal{W}L$, then
\[
[L]=[\Psi^{-1}\mathrm{CC}(\mu^{-1}(L))].
\]
\label{cor:cycle-thimble}
\end{corollary}

\section{Gamma II conjecture}
\label{sec:gamma-ii}
In this section we prove the Gamma II conjecture for a complete smooth toric Fano variety $X$.

\subsection{Asymptotic flat sections and the Gamma II conjecture}
Recall that the quantum cohomology $QH_\tau^*(X)$ is semisimple for $\tau\in \tilde{U}_{\epsilon,\epsilon'}$ for sufficiently small $\epsilon$, with canonical basis $\{\phi_i\}$. The quantum connection admits a set of asymptotic fundamental solutions. Let $\hat\phi_i=\phi_i/\sqrt{(\phi_i,\phi_i)}$ be the normalized idempotent basis.
We define the map $\mathbf{\Psi}:\mathbb{C}^\mathfrak{s}\to H^*(X)$ to be
\[
\mathbf{\Psi}\begin{pmatrix}a_1\\\vdots\\a_\mathfrak{s}\end{pmatrix}=a_1 \hat\phi_1+\dots+a_\mathfrak{s} \hat\phi_\mathfrak{s}.
\]
Recall that $\{H^i\}_{i=1}^\mathfrak{s}$ is a basis of $H^*(X)$, and one writes $\mathbf{\Psi}$ as a matrix left multiplication
\begin{equation}
\label{eqn:Psi}
\mathbf{\Psi}=\begin{pmatrix}
H^1,\dots,H^\mathfrak{s}
\end{pmatrix}\Psi\cdot
\end{equation}
where $\Psi$'s $j$-th column vector consists of $\hat\phi_j$'s coordinates in $H^1,\dots,H^\mathfrak{s}$, i.e. the $(i,j)$-th element is $(H_i,\hat\phi_j)$. The quantum multiplication $c_1(X)\star_{\tau}$ acts on $H^*(X)$ and its eigenvectors are $\phi_1,\dots,\phi_\mathfrak{s}$ with eigenvalues $u_1,\dots,u_\mathfrak{s}$, i.e. $c_1(X)\star_{\tau} \phi_i=u_i \phi_i$. Let $U$ be the diagonal matrix $\mathrm{diag}(u_1,\dots,u_\mathfrak{s})$. Then the following well-known theorem provides asymptotic fundamental solutions.
\begin{theorem}[\cites{Du93,Gi01a}]
\label{thm:Dubrovin-Givental-decomposition}
When the small quantum cohomology $QH^*_{\tau}(X)$ is semisimple, the quantum connection \eqref{eqn:qde-tau-off} has the following fundamental solutions
\begin{equation}
\label{eqn:Dubrovin-Givental-decomposition}
\mathbf{\Psi} R(z) e^{-\frac{U}{z}},
\end{equation}
where $R(z)=\mathrm{id}+R_1z+R_2z^2+\dots \in \mathrm{End}(\mathbb{C}^\mathfrak{s})\llbracket z\rrbracket$ is a matrix-valued formal power series in $z$. This formal solution is unique up to a signed permutation matrix multiplied from the right, which corresponds to the ambiguity of the order of the columns $\Psi_a$.
\end{theorem}
Similarly to Section \ref{sec:thimbles}, we say a phase $\theta\in \mathbb{R}$ is \emph{admissible} if $\mathrm{Im}(u_i e^{-\mathbf{i}\theta})\neq \mathrm{Im}(u_j e^{-\mathbf{i}\theta})$ for $u_i\neq u_j$, i.e. the segment between $u_i$ and $u_j$ is not parallel to $e^{\mathbf{i}\theta}$. We do not require that $u_1,\dots, u_\mathfrak{s}$ be distinct, but when $\tau\in \tilde{U}_{\epsilon,\epsilon'}$ they are. These fundamental solutions have analytic lifts, as given in the following theorem.
\begin{theorem}\textup{\cite[Theorem 12.2]{Wasow1987}, \cite[Theorem A]{BJL1979}, \cite[Lectures 4,5]{Dubrovin1999}, \cite[Section 8]{BrTo2013}, \cite[Proposition 2.5.1]{GGI16}}
At a semisimple point $\tau\in U_{\epsilon,\epsilon'}$, for an admissible phase $\theta\in \mathbb{R}$, there exist $\delta >0$ and analytic fundamental solutions $Y_\theta(z)=(y_1^\theta(z),\dots, y_\mathfrak{s}^\theta(z))$ to the quantum connection \eqref{eqn:qde-tau-off} in the region $|\arg(z)-\theta|<\frac{\pi}{2}+\delta$ around $z=0$ such that
\[
Y_\theta(z) e^{\frac{U}{z}}\sim \mathbf{\Psi} R(z).
\]
\end{theorem}
These analytic solutions $y_i^\theta$ can be analytically continued, along a path in $\mathbb{C}^*$, to $\arg(z)=0$, denoted by $\bar y_i^\theta$. The Gamma II conjecture concerns how to express these solutions as $\mathcal Z([E])$ for some $E\in D^b\mathrm{Coh}(X)$.
\begin{conjecture}[Gamma II conjecture]
\label{conj:gamma-ii}
Assume the quantum cohomology of a Fano variety $X$ is semisimple at $\tau\in H^2(X;\mathbb{C})$. Then for any admissible phase $\theta$, there exists a full exceptional collection $\{E^\theta_1,\dots,E^\theta_\mathfrak{s}\}$ in $D^b\mathrm{Coh}(X)$ such that $\bar y_i^\theta(z)=\mathcal{Z}([E^\theta_i])$ for $i=1,\dots,\mathfrak{s}$.
\end{conjecture}
We will prove this conjecture when $X$ is a complete smooth toric Fano manifold near the large radius limit point ($q\in U_{\epsilon,\epsilon'}$), using inputs from enumerative and homological mirror symmetry.

\subsection{Proof of Gamma II conjecture}
\label{sec:proof}
We fix $q_0\in U_{\epsilon,\epsilon'}$ and $\tau_0$ such that $q_0=q(\tau_0)$. Then $QH^*_{\tau_0}(X)$ is semisimple and $W_{q_0}$ is holomorphic Morse. Let $\theta$ be an admissible phase. For each critical value ${\mathsf{cr}}_i=W_{q_0}(p_i)$ one defines the ray $\gamma^0_i={\mathsf{cr}}_i+\mathbb{R}_{\geq 0} e^{\mathbf{i}\theta}$, i.e. the $\gamma^0_i$ are rays starting from ${\mathsf{cr}}_i$ in the direction of $e^{\mathbf{i}\theta}$. Let $\Gamma^0_i$ be the Lefschetz thimbles associated to each $\gamma^0_i$, such that $W_{q_0}(\Gamma^0_i)=\gamma^0_i$. Then by the stationary phase expansion
\begin{align}
\nonumber
\int_{\Gamma^0_i} e^{-W_{q_0}/z} \mathsf{f}_j \Omega &\sim (-2\pi z)^{\frac{n}{2}} \frac{e^{-W_{q_0}(p_i)/z}}{\sqrt{-\det{\mathrm{Hess}}_{p_i}(W_{q_0})}}(\mathsf{f}_j(p_i)\vert_{z=0}+O(z))\\
\label{eqn:asymptotic-expansion}
&=(-2\pi z)^{\frac{n}{2}}e^{-u_i/z}(\hat\phi_i,H_j)(1+O(z)).
\end{align}
Here we use the fact that $\mathsf{f}_j(p_i)\vert_{z=0}=(H_j,\hat \phi_i)$, $\hat \phi_i=\sqrt{\Delta_i}\phi_i$ and $\Delta_i=-\det{\mathrm{Hess}}_{p_i}(W_{q_0})$ (c.f. Proposition \ref{prop:identification-mir}). By Corollary \ref{cor:B-S-flat}, consider the flat section of the quantum D-module $\mathcal{F}$
\[
y^\theta_i:= (-2\pi z)^{-n/2}\sum_{j=1}^\mathfrak{s} \left(\int_{\Gamma_i^0}\mathsf{f}_je^{-\frac{W_{q_0}}{z}}\right) H^j.
\]
By Equation \eqref{eqn:asymptotic-expansion}, $y_i^\theta$ has an asymptotic expansion
\[
y_i^\theta e^{u_i/z}\sim(H^1,\dots,H^\mathfrak{s})\begin{pmatrix}(\hat\phi_i,H_1)\\\vdots\\(\hat\phi_i,H_\mathfrak{s})\end{pmatrix}(\mathrm{id}_{\mathbb{C}^\mathfrak{s}}+O(z)),
\]
which precisely matches Equation \eqref{eqn:Dubrovin-Givental-decomposition} (c.f. Equation \eqref{eqn:Psi}). So these flat sections $y_i^\theta$ are indeed analytic lifts of the asymptotic solutions given by Theorem \ref{thm:Dubrovin-Givental-decomposition}.

We rotate $\gamma^0_i$ through a family $\gamma_i^t$, $0\leq t\leq 1$, as in the following figure.
\[
\tikz{
\node at (-2,2.5) {$\gamma_i^0$};
\node at (5,2.5) {$\gamma_i^1$};
\draw [dashed, ->] (-1,2.5) to (4,2.5);
\begin{scope}[scale=0.7]
\draw (0,0) to +(-4,2);
\draw (0.5, 0.5) to +(-3, 1.5);
\draw (-2,0.5) to +(-3, 1.5);
\end{scope}
\begin{scope}[shift={(5,0)}, scale=0.5]
\draw (0,0) to +(-2,1) to[out=150, in =180] (3,3);
\draw (0.5, 0.5) to +(-1.5, 0.75) to[out=150, in =180] (3,2.5);
\draw (-2,0.5) to +(-1.5,0.75) to[out=150, in =180] (3, 3.5);
\end{scope}
}
\]
Let $\Gamma_i^t$ be the thimbles over $\gamma_i^t$. Effectively $\Gamma_i^t=e^{-\mathbf{i} t \theta}\Gamma_i^0$. Then the resulting section
\[
\bar y^\theta_i(z):=s_{[\Gamma_i^1]}
\]
is an analytic continuation of $y_i^\theta(z)$, well defined for $z>0$. Here $\Gamma_i^1=\Gamma_i$ as in Definition \ref{def:thimbles-cycle}. By the discussion in Section \ref{sec:thimbles-obj}, there exist $L_1,\dots, L_\mathfrak{s}\in \mathcal{W}L$ with $[L_i]=[\Gamma_i]$ for all $i$. By Proposition \ref{prop:exceptional} they form an exceptional collection.
By the HMS Theorems \ref{thm:ccc} and \ref{thm:GPS}, there exists a full exceptional collection $E^\theta_1,\dots,E^\theta_\mathfrak{s}\in \mathrm{Coh}(X)$ such that each $L_i\cong \mu\circ\iota(E^\theta_i)$. Therefore by Proposition \ref{prop:any-primary-insertion}
\begin{align*}
\bar y^\theta_i&=(-2\pi z)^{-n/2}\sum_{j=1}^\mathfrak{s} \left(\int_{\Gamma_i}\mathsf{f}_je^{-\frac{W_{q_0}}{z}}\right) H^j=\sum_{j=1}^\mathfrak{s}\llangle H_j,\frac{z^{\widetilde{-\deg}}z^{c_1(X)} A_{E^\theta_i}}{z+\psi}\rrangle_{0,2}^X\vert_{\tau=\tau_0} H^j\\
&=\mathcal{Z}([E_i^\theta])\vert_{\tau=\tau_0}.
\end{align*}
We have thus reached our conclusion.
\begin{theorem}
\label{thm:gamma-ii}
The Gamma II conjecture (Conjecture \ref{conj:gamma-ii}) is true for a complete smooth toric Fano manifold at any point $\tau=\tau_0\in \tilde{U}_{\epsilon,\epsilon'}$ for sufficiently small $\epsilon$ and $\epsilon'$, i.e. near the large radius limit.
\end{theorem}

\begin{bibdiv}
\begin{biblist}
\bibselect{mybib}
\end{biblist}
\end{bibdiv}

\end{document}
\begin{document} \preprint{APS/123-QED} \title{Entanglement properties of a quantum-dot biexciton cascade in a chiral nanophotonic waveguide} \author{Eva M. González-Ruiz} \email{[email protected]} \affiliation{ Center for Hybrid Quantum Networks (Hy-Q), Niels Bohr Institute\\ University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark } \author{Freja T. {\O}stfeldt} \affiliation{ Center for Hybrid Quantum Networks (Hy-Q), Niels Bohr Institute\\ University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark } \author{Ravitej Uppu} \affiliation{ Department of Physics \& Astronomy, University of Iowa, Iowa City, IA 52242 United States } \affiliation{ Center for Hybrid Quantum Networks (Hy-Q), Niels Bohr Institute\\ University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark } \author{Peter Lodahl} \affiliation{ Center for Hybrid Quantum Networks (Hy-Q), Niels Bohr Institute\\ University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark } \author{Anders S. Sørensen} \affiliation{ Center for Hybrid Quantum Networks (Hy-Q), Niels Bohr Institute\\ University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark } \date{\today} \begin{abstract} We analyse the entanglement properties of deterministic path-entangled photonic states generated by coupling the emission of a quantum-dot biexciton cascade to a chiral nanophotonic waveguide, as implemented by \O{}stfeldt et al. [PRX Quantum \textbf{3}, 020363 (2022)]. We model the degree of entanglement through the concurrence of the two-photon entangled state in the presence of realistic experimental imperfections. The model accounts for imperfect chiral emitter-photon interactions in the waveguide and the asymmetric coupling of the exciton levels introduced by fine-structure splitting along with time-jitter in the detection of photons. The analysis shows that the approach offers a promising platform for deterministically generating entanglement in integrated nanophotonic systems in the presence of realistic experimental imperfections. \end{abstract} \maketitle \section{Introduction} \begin{figure*} \caption{\label{fig:scheme} \label{fig:scheme} \end{figure*} The generation of high-fidelity entanglement is key for the development of modern quantum technologies \cite{horodecki2009,jozsa2003}. Entangled states of photons have been widely generated probabilistically by employing spontaneous parametric down-conversion (SPDC) \cite{zhong2018}, but the probabilistic nature of this process is a major obstacle for scaling up to high photon numbers. The possibility of entanglement generation on demand is of utmost importance for a wide range of quantum information applications, such as measurement-based quantum computing \cite{bartolucci2021,briegel2009}. The biexciton cascade from quantum-dot (QD) photon sources has been investigated as an on-demand entanglement generator \cite{benson2000,akopian2006,liu2019,huber2018}. The emitted states are, however, entangled in the polarisation degree of freedom, which is incompatible with implementations in integrated photonic circuits \cite{politi2009} that typically support only a single polarisation mode. This poses a challenge for future integration and scalability of quantum technologies \cite{wang2020a} relying on biexciton-cascade entanglement sources. A solution to the integration of the biexciton source into nanophotonic devices was presented in Ref.~\cite{freja}. 
Here the photon emission from a cascaded-biexciton decay from InGaAs quantum dots was coupled to a chiral nanophotonic waveguide \cite{sollner2015} (see Fig.~\ref{fig:scheme}). The polarisation-dependent directional emission enabled by chiral coupling of dipoles in these waveguides enable a promising route for on-chip, path-entangled photon generation. Two-photon excitation of the quantum dot prepares the system in the biexciton state $\ket{XX}$ with energy $\omega_{XX}+\omega_{X}$, which decays through two possible channels to the exciton levels $\ket{X_{\pm}}$ (see Fig.~\ref{fig:scheme}(a)). In a homogenous medium, the biexciton decays radiatively to one of the exciton levels, emitting a photon with either right ($\sigma_+$) or left ($\sigma_-$) circular polarisation. The two exciton levels, $|X_+\rangle$ and $|X_-\rangle$, are degenerate with energy $\omega_{X}$ and decay to the ground state $\ket{g}$ emitting photons with opposite circular polarisation to that emitted during the biexciton decay due to angular momentum conservation. The two emitted photons are thus entangled in polarisation as there is no information regarding which decay path the system followed. To turn this into a chip-compatible, path-entangled photon source, the QD is placed in a single-mode chiral photonic crystal waveguide which allows converting the polarisation of the transition dipole moment to the emission direction of the photon, i.e. $\sigma_-$ dipoles emit to the left (path A) and $\sigma_+$ dipoles emit to the right (path B). The polarisation entangled state created by the biexciton cascade is thus translated into path encoding that can be used in integrated photonic circuits. Ref.~\cite{freja} reported on experimental measurements of the dynamics by out-coupling the photons from the waveguide and frequency-filtering them in order to separate photons emitted on the biexciton and exciton transitions. The desired correlations were then measured through a Hanbury-Brown-Twiss (HBT) experiment \cite{HBT}, as shown in Fig.~\ref{fig:scheme}. \enlargethispage{-7cm} While an ideal QD that is precisely positioned at a chiral point could generate maximally entangled, path-encoded photon pairs, imperfections in the QD as well as in the chiral coupling could impact the degree of entanglement. In particular, intrinsic asymmetry of the QD could lead to coupling between the exciton states $\ket{X_{\pm}}$ through a spin-flip oscillation with a frequency $S$ that is known as the fine-structure splitting (FSS) of the QD. In this work we provide a full theoretical analysis of the entanglement properties of the path-entangled state accounting for all these imperfections \footnote{The codes used in this study are available at the Electronic Research Data Archive (ERDA) of the University of Copenhagen. DOI: \url{https://doi.org/10.17894/ucph.a68d50d5-9f9b-4c3b-befd-ef99fcfe2959}}. This analysis already successfully described the experimental findings in Ref.~\cite{freja}, but here we provide the full details of the theory and apply it to systematically analyse the impact of various errors on the degree of entanglement. In particular, the aforementioned FSS induces a frequency splitting of the exciton levels (see Fig.~\ref{fig:scheme}), which effectively creates a time dependence of the entangled polarisation states. This can reduce the quality of entanglement when imperfect time detection of photons is taken into account. 
Moreover, since the photons emitted in the two different decay paths in Fig.~\ref{fig:scheme}(b) have different polarisations, the two paths may occur with different probabilities in photonic nanostructures, given by the polarisation dependent local density of states. These effects, together with imperfect chiral coupling to the waveguide, can reduce the amount of entanglement. The analysis and understanding of these effects will be important for further explorations of the biexciton cascade as an on-demand source of path-entangled photons in integrated quantum information platforms. \enlargethispage{-3cm} \section{Analysis} We start our analysis by introducing the Hamiltonian of the system and a wavefunction ansatz for the state generated by means of the light-mater interaction with the QD. The state is then fully characterised through studying its evolution by solving Schr\"odinger's equation. \subsection{The Hamiltonian and wavefunction ansatz} The biexciton level structure can be expressed in two different bases. In the linear polarisation basis (Fig.~\ref{fig:scheme}(b)), the emitted photons are linearly polarised (either horizontally or vertically, with $\gamma_x$ and $\gamma_y$ decay rates, respectively), while in the circular basis (Fig.~\ref{fig:scheme}(a)) the photons are circularly polarised (with right- and left-circularly polarised photons, and $\gamma_+$ and $\gamma_-$ decay rates, respectively). In the linear basis the two exciton levels have different energies, split by the FSS $S$, while in the circular basis the levels are degenerate. In the latter basis, there is a time-dependent oscillation between the two exciton levels at a frequency $S$. The full system is described by the total Hamiltonian $\hat{H} = \hat{H}_0 + \hat{H}_\textrm{f} + \hat{H}_\textrm{int}$, which can be decomposed into the free energy of the emitter- $\hat{H}_0$, the free field- $\hat{H}_\textrm{f}$ and the interaction $\hat{H}_\textrm{int}$ Hamiltonians. These are given by \begin{align} \begin{split} \hat{H}_{0} &= \hbar\left( \omega_{XX} + \omega_{X} \right) \ket{XX}\bra{XX} \\ & + \hbar\left( \omega_{X} + \frac{S}{2} \right) \ket{X_x}\bra{X_x} + \hbar\left( \omega_{X} - \frac{S}{2} \right) \ket{X_y}\bra{X_y} \\ \hat{H}_\textrm{f} &= \hbar\int\bigg(\omega_{\mathbf{k}}\hat{a}^{\dagger}_{\mathbf{k}}\hat{a}_{\mathbf{k}} + \omega'_{\mathbf{k}}\hat{a}'^{\dagger}_{\mathbf{k}}\hat{a}'_{\mathbf{k}}\bigg)d\mathbf{k} \\ \hat{H}_\textrm{int} &= -\frac{q}{m_0} \hat{\boldsymbol{A}}\cdot\hat{\boldsymbol{p}}\,, \label{eq:Hamiltonian} \end{split} \end{align} where we have chosen the Coulomb gauge with vector potential $\mathbf{A}$. The QD is described by the coordinate $\mathbf{r}$ with the conjugate variable or generalised momentum $\mathbf{p}$, charge $q$ and mass $m_0$ \cite{lodahl_review}. Ideally, the energy of the biexciton ($\ket{XX}$) and exciton ($\ket{X_{\alpha}}$ with $\alpha=x,y$) levels is given by $\hbar\omega_{XX}$ and $\hbar\omega_{X}$, respectively. The FSS $S$, however, splits the exciton levels into $\hbar(\omega_{X}\pm S/2)$ in the linear polarisation basis. Note that we express the total Hamiltonian in a linear polarisation basis as it simplifies the temporal dynamics of the system. 
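For concreteness, the relation between the two bases can be made explicit. Writing the circular exciton states as $\ket{X_\pm}=(\ket{X_x}\pm i\ket{X_y})/\sqrt{2}$ (the convention also used below), a short computation shows that the exciton part of $\hat{H}_0$ in Eq.~\eqref{eq:Hamiltonian} takes the form
\begin{equation}
\hbar\omega_X\left(\ket{X_+}\bra{X_+}+\ket{X_-}\bra{X_-}\right)+\frac{\hbar S}{2}\left(\ket{X_+}\bra{X_-}+\ket{X_-}\bra{X_+}\right)
\end{equation}
in the circular basis: the two circular states are degenerate, and the FSS instead appears as a coherent coupling that flips $\ket{X_+}\leftrightarrow\ket{X_-}$ at the frequency $S$, in agreement with the discussion above.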
The field annihilation operators $\hat{a}_{\mathbf{k}}$ are momentum dependent, where $\mathbf{k}$ expresses the corresponding wavevector, and the prime indicates whether it annihilates a biexciton ($\hat{a}_{\mathbf{k}}$) or an exciton ($\hat{a}'_{\mathbf{k}}$) photon with frequency $\omega^{(\prime)}_{\mathbf{k}}$, correspondingly. The biexciton and exciton binding energies are assumed to be sufficiently different to treat them as two independent reservoirs. This assumption is motivated by the 2 -- 3 meV energy splitting between the exciton and biexciton binding energies observed in QDs, which is over three orders of magnitude larger than the natural linewidths of these transitions \cite{pedersen2020}. To put the interaction Hamiltonian into a simpler form, the conjugate variable $\mathbf{p}$ (proportional to the dipole operator) can be expressed in terms of the transition matrix elements $\mathbf{\hat{p}} = \sum_{l,m} \bra{l}\mathbf{\hat{p}}\ket{m}\ket{l}\bra{m}$, where the indexes $l$ and $m$ represent the excited and ground states of the transition, respectively. This allows us to express the interaction Hamiltonian as \begin{equation} \hat{H}_\textrm{int} = \sum_{l,m,\mathbf{k}} \bra{l}\mathbf{\hat{p}}\ket{m} \cdot \mathbf{U}_{\mathbf{k}(\mathbf{r})}\hat{a}_k\ket{l}\bra{m}+{\rm H.c.}\,, \end{equation} where $\mathbf{U}_{\mathbf{k}(\mathbf{r})}$ is the mode-function of the field. We consider that the field propagates in the waveguide along the $x$ direction. Following Bloch's theorem we thus have $ \mathbf{U}_{k}(\mathbf{r}) = \mathbf{e}_{k}(\mathbf{r}) e^{ikx}$, where $\mathbf{e}_{k}(\mathbf{r})$ is the Bloch function describing the electric field with wavenumber $k$ at the QD position $\mathbf{r}$, and the field only propagates in the $x$ direction. Moreover we assume that the QD only interacts within a narrow frequency range around the resonance frequency with wavenumbers $\pm k_0$ yielding \begin{equation} \hat{H}_\textrm{int} = \sum_{\substack{l,m\\k\approx \pm k_0}} \bra{l}\mathbf{\hat{p}}\ket{m} \cdot\mathbf{e}_{k}(\mathbf{r})e^{ikx}\hat{a}_{k}\ket{l}\bra{m}+{\rm H.c.}\,, \label{eq:int_ham} \end{equation} where for brevity we have taken only the non primed annihilation operators, with the sign of $k$ indicating whether the field propagates to the right ($+k_0$) or to the left ($-k_0$). By assuming the same wavenumber in both directions, we implicitly assume time-reversal symmetry for the propagation of the field in the waveguide (i.e. without the QDs). This is valid as long as we can e.g. neglect the intrinsic Faraday effect of the waveguide. Since waveguides are very broad band this is typically an excellent approximation and does not exclude any possible violation of time-reversal symmetry of the QD if an external magnetic field was applied. The polarisation of the emitted light is determined by the symmetry of the states, which results in the following matrix elements for the dipole forbidden transitions in the linear polarisation basis \begin{equation} \begin{split} \bra{XX}\hat{p}_x\ket{X_y} &= \bra{XX}\hat{p}_y\ket{X_x} \\ &=\bra{X_x}\hat{p}_y\ket{g} = \bra{X_y}\hat{p}_x\ket{g} = 0\,, \end{split} \end{equation} as the $x$ ($y$) component of the dipole only couples to the horizontally (vertically) polarised light. 
Moreover, the allowed transitions from the exciton levels have a dipole moment defined as $P$, \begin{equation} \bra{X_x}\hat{p}_x\ket{g} = \bra{X_y}\hat{p}_y\ket{g} = P \,, \end{equation} whereas the two possible biexciton decay transitions are given by \cite{lodahl_review} \begin{equation} \bra{XX}\hat{p}_x\ket{X_x} = \bra{XX}\hat{p}_y\ket{X_y} = \sqrt{2}P\,. \label{eq:biexciton_decay} \end{equation} We now insert these dipole transitions in the interaction Hamiltonian from Eq.~\eqref{eq:int_ham} and calculate its Fourier transform. For now we only consider the modes propagating to the right (path B), yielding \begin{widetext} \begin{equation} \begin{aligned} \hat{H}_\textrm{int} = -P\cdot \bigg[\sqrt{2} &\bigg( \epsilon_{k_0,x}(\mathbf{r})\ket{XX}\bra{X_x}+\epsilon_{k_0,y}(\mathbf{r})\ket{XX}\bra{X_y}\bigg)e^{ik_0x_0}\hat{a}_B(x_0) \\ +&\bigg( \epsilon_{k_0',x}(\mathbf{r})\ket{X_x}\bra{g}+ \epsilon_{k_0',y}(\mathbf{r})\ket{X_y}\bra{g}\bigg)e^{ik_0x_0}\hat{a}_B'(x_0) \bigg] + \text{H.c.}, \end{aligned} \label{eq:ham_int_fourier} \end{equation} \end{widetext} where the position-dependent annihilation operator $\hat{a}_n(x)$ is defined as \begin{equation} \hat{a}_n(x) = \frac{1}{\sqrt{2\pi}}\int_0^\infty \hat{a}_{n,\pm k}e^{i(k-k_0)x}dk\,, \label{eq:a_pos} \end{equation} with $n=B(A)$ denoting fields propagating to the right (left) and the sign being positive (negative) for path $B$ ($A$) and $x_0$ is the position of the emitter. We note that since we separate the annihilation operator into left and right propagating modes ($A$ and $B$) the limit of the integration is $k=0$. In practice, however, we only expect the annihilation operator to give a contribution for $k\approx \pm k_0 $. We can therefore extend the limit of integration to $-\infty$ yielding the commutator \begin{equation} [ \hat{a}_n(x) , \hat{a}^{\dagger}_{n'}(x') ]=\delta_{n,n'}\delta(x-x')\,. \end{equation} We further note that with the definition in Eq. \eqref{eq:a_pos} we make the convention that both left and right propagating fields are traveling towards positive $x$, i.e. the direction of the $x$-axis is reversed for the left propagating modes. To relate the coupling of the right propagating modes with the left propagating modes we again invoke time-reversal symmetry of the waveguide modes. If the local electric field $\mathbf{\epsilon}_{k_0,x}(\mathbf{r})$ is a solution for the waveguide, then by time-reversal symmetry the solution for a wave propagating in the opposite direction is given by $\mathbf{\epsilon}_{-k_0}(\mathbf{r}) =\mathbf{\epsilon}^{*}_{k_0}(\mathbf{r})$. This allows us to obtain the full interaction Hamiltonian by combining Eq.~\eqref{eq:ham_int_fourier} with the corresponding expression for back-propagating waves. 
This results in
\begin{equation}
\begin{aligned}
\hat{H}_\textrm{int} &= -\hbar\sum_{\alpha} \bigg[\bigg(g_{ A,\alpha} \hat{a}_{A}(0) +g_{ B,\alpha} \hat{a}_{B}(0)\bigg) \ket{XX}\bra{X_ \alpha} \\
& \quad + \bigg(g'_{ A,\alpha}\hat{a}'_{A}(0) +g'_{ B,\alpha}\hat{a}'_{B}(0)\bigg) \ket{X_ \alpha}\bra{g} +\text{H.c.} \bigg]\,,
\label{eq:int_ham_2}
\end{aligned}
\end{equation}
where we have set $x_0=0$ for simplicity and defined the complex coupling constants $g_{n,\alpha}=|g_{n,\alpha}| e^{i\phi_{n,\alpha}}$ and their phases in relation to the local electric field components $\epsilon_{\pm k_0,i}$ as
\begin{equation}
\begin{aligned}[t]
g_{A,x} &= \sqrt{2}P\epsilon^{*}_{k_0,x}(\mathbf{r}), \\
g_{B,x} &= \sqrt{2}P\epsilon_{k_0,x}(\mathbf{r}), \\
g'_{A,x} &= P\epsilon'^{*}_{k_0,x}(\mathbf{r}),\\
g'_{B,x} &= P\epsilon'_{k_0,x}(\mathbf{r})\,.
\end{aligned}
\qquad
\begin{aligned}[t]
g_{A,y} &=\sqrt{2}P\epsilon^{*}_{k_0,y}(\mathbf{r}), \\
g_{B,y} &= \sqrt{2}P\epsilon_{k_0,y}(\mathbf{r}), \\
g'_{A,y} &= P\epsilon'^{*}_{k_0,y}(\mathbf{r}), \\
g'_{B,y} &=P\epsilon'_{k_0,y}(\mathbf{r})\,.
\end{aligned}
\label{eq:map_g_e}
\end{equation}
The coupling constants in Eq.~\eqref{eq:map_g_e} describe the light-matter interaction between the QD dipole and the waveguide field, including the chirality. In particular, their magnitude describes the coupling of a horizontally or vertically polarised photon (through the $x$ and $y$ components of the dipole, respectively) to the left or right paths. From Eq.~\eqref{eq:map_g_e} we note that $|g_{A,\alpha}|=|g_{B,\alpha}|$ for $\alpha=x,y$, so that linearly polarised dipoles always have the same coupling constant and hence the same decay rate in both directions $A$ and $B$. This does not, however, exclude that circular dipoles can have a chiral interaction and predominantly decay in one direction. The existence of such chiral interactions is encoded in the relative phase of the coupling constants. From Eq.~\eqref{eq:map_g_e} we find that the phase difference $\Phi$ between the phases of the $x$ and $y$ components of the electric field is
\begin{equation}
\Phi \equiv \phi_{x} - \phi_{y} = \phi_{A,x} - \phi_{A,y} = - \left( \phi_{B,x} - \phi_{B,y} \right)\,,
\label{eq:phase_eq}
\end{equation}
and similarly for the exciton phase difference $\Phi'$. Consider now a circularly polarised state $\ket{X_\pm}=(\ket{X_x}\pm i\ket{X_y})/\sqrt{2}$. We can calculate the coupling constants $g'_{n,\pm}$ for the decay of these states into the $n=A,B$ directions from the interaction Hamiltonian~\eqref{eq:int_ham_2}, yielding
\begin{equation}
g'_{n,\pm} = \frac{1}{\sqrt{2}}(g'_{n,x}\mp ig'_{n,y})\,.
\end{equation}
If $|g'_{n,x}|=|g'_{n,y}|=g'$, the decay rates of the circular states into the two directions will thus fulfill
\begin{equation}
\begin{aligned}
\gamma'_{A,\pm}\propto |g'_{A,\pm}|^2 = {g'}^2(1\pm \sin\Phi')\\
\gamma'_{B,\pm}\propto |g'_{B,\pm}|^2 = {g'}^2(1\mp \sin\Phi')\,.
\end{aligned}
\end{equation}
For $\Phi' = \pi/2$ the $x$ and $y$ components of the field in the waveguide are phase-shifted by $\pi/2$, corresponding to circular polarisation. Furthermore, whether the waveguide mode is left- or right-hand circularly polarised is linked to the propagation direction of the light. As a consequence, the system exhibits perfect chiral coupling, with the circularly polarised states coupling only to a single propagation direction, i.e. $\gamma'_{A,+}\neq 0$ and $\gamma'_{B,+}=0$, with the directions reversed for the opposite circular state.
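To illustrate an intermediate degree of chirality, consider as an example $\Phi'=\pi/3$ (a value chosen purely for illustration). The expressions above then give $\gamma'_{A,+}\propto {g'}^2(1+\sin\Phi')\approx 1.87\,{g'}^2$ and $\gamma'_{B,+}\propto {g'}^2(1-\sin\Phi')\approx 0.13\,{g'}^2$, i.e. the $\ket{X_+}$ exciton emits into path $A$ with probability $(1+\sin\Phi')/2\approx 93\%$, with the corresponding directional preference of $\ket{X_-}$ for path $B$.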
Complete absence of chirality occurs when $\Phi' = 0$, where the field in the waveguide is linearly polarised. Thus, the parameters $\Phi$ and $\Phi'$ represent the degree of chirality of the system, which we employ in the subsequent sections of this article.

To describe the emission into the waveguide, it is convenient to change the Hamiltonian into the position basis. While the free-energy term of the total Hamiltonian~\eqref{eq:Hamiltonian} is unchanged by the Fourier transform, Fourier transforming the free field term yields
\begin{align}
\begin{split}
&\hat{H}_\textrm{f} = \sum_{n}\bigg[i\hbar\int \bigg(v_{gXX}\frac{\partial \hat{a}^{\dagger}_{n}(x)}{\partial x}\hat{a}_{n}(x) \\
&\quad \quad \quad \quad \quad \quad \quad + v_{gX} \frac{\partial \hat{a}'^{\dagger}_{n}(x)}{\partial x}\hat{a}'_{n}(x)\bigg)dx\\
&\quad + \hbar\int\bigg(\omega_{XX}\hat{a}^{\dagger}_{n,k}\hat{a}_{n,k} + \omega_{X}\hat{a}'^{\dagger}_{n,k}\hat{a}'_{n,k}\bigg)dk\bigg]\,,
\label{eq:ham_field_fourier}
\end{split}
\end{align}
where the group velocities associated with the biexciton and exciton energy levels are given by $v_{g,XX} = \partial \omega_{k}/\partial k$ and $v_{g,X} = \partial \omega'_{k}/\partial k$ respectively. Note that these two group velocities could be different due to the dispersion of the waveguide and the different emission wavelengths of the exciton and biexciton levels. Here, we approximate them to lowest order around the biexciton and exciton frequencies, that is $\omega_{k} \approx \omega_{XX} + v_{g,XX}(k-k_0)$ and $\omega'_{k} \approx \omega_{X} + v_{g,X}(k-k_0)$.

We can now write a wavefunction ansatz for the total state of the system in the real space domain. The state should describe that up to two photons can be emitted by the biexciton decay and that they couple into the left- or right-propagating waveguide modes. Based on the methods from Ref. \cite{sumanta} (with similar methods being developed in Refs.~\cite{fischer2018,trivedi2018,heuck2020}) we use the following ansatz:
\begin{widetext}
\begin{align}
\begin{split}
\ket{\psi(t)} = e^{-i(\omega_{XX} + \omega_{X})t}&(c_{XX}(t)\ket{XX}\ket{\emptyset} +\sqrt{v_{gXX}}\sum_{\alpha,n}\int dt_{XX}\psi_{\alpha,n}(t,t_{XX})\hat{a}^{\dagger}_{n}(v_{gXX}(t-t_{XX}))\ket{X_\alpha}\ket{\emptyset} \\
&+\sqrt{v_{gXX}v_{gX}}\sum_{n,m}\iint dt_{XX}dt_{X}\psi_{n,m}(t,t_{XX},t_{X})\hat{a}^{\dagger}_{n}(v_{gXX}(t-t_{XX}))\hat{a}'^{\dagger}_{m}(v_{gX}(t-t_{X}))\ket{g}\ket{\emptyset})\,,
\label{eq:ansatz}
\end{split}
\end{align}
\end{widetext}
where $t_{X}$ and $t_{XX}$ are the two emission times with $t_{XX}<t_{X}$. This state describes that with an amplitude $c_{XX}(t)$ the system is in the biexciton state with the field being in the vacuum state $\ket{\emptyset}$. Since the system is initially excited to this state we have $c_{XX}(t=0)=1$. The amplitude $\psi_{\alpha,n}(t,t_{XX})$ describes the state after the emission of a photon in the direction $n=A,B$ at time $t_{XX}$ by the decay into the exciton state $\ket{X_\alpha}$. Since the photon propagates in the waveguide, this is associated with a photon at position $x=v_{gXX}(t-t_{XX})$. As this state still evolves in time the amplitude has an explicit dependence on time $t$, with the amplitude vanishing before the emission, $\psi_{\alpha,n}(t,t_{XX})=0$ if $t\leq t_{XX}$. Finally, after the emission of both photons, the system is in the ground state $\ket{g}$ and the two photons are emitted in directions $n,m$ with amplitude $\psi_{n,m}(t,t_{XX},t_{X})$.
This amplitude vanishes unless $t\geq t_{X}\geq t_{XX}$. It should be noted that both for the left and right propagation directions in the waveguide $x\in[0,\infty]$, i.e. the reference frame is placed such that in both directions $x$ is positive after the QD. \subsection{Solving the Schr\"odinger equation} The wavefunctions $\ket{\psi(t)} $ from Eq.~\eqref{eq:ansatz} should be calculated to describe the state. We thus apply Schr\"odinger's equation $i\hbar\partial \ket{\psi}/\partial t = \hat{H}\ket{\psi}$ to the wavefunction ansatz using the space-domain Hamiltonian. Following the procedure from Ref.~\cite{sumanta}, we obtain the set of coupled differential equations: \begin{multline} \dot{c}_{XX}(t) = -\frac{i}{\sqrt{v_{gXX}}\hbar}\sum_{\alpha,n}g_{\alpha,n}\psi_{\alpha,n}(t,t), \\ \dot{\psi}_{x,n}(t,t_{XX}) = \frac{iS}{2\hbar}\psi_{x,n}(t,t_{XX}) - \frac{ig^{*}_{x,n}c_{XX}(t)}{\sqrt{v_{gXX}}\hbar}\delta(t-t_{XX}), \\ - \frac{i}{\sqrt{v_{gX}}\hbar}\sum_{m}g'_{x,n}\psi_{n,m}(t,t_{XX},t), \\ \dot{\psi}_{y,n}(t,t_{XX}) = -\frac{iS}{2\hbar}\psi_{y,n}(t,t_{XX}) - \frac{ig^{*}_{y,n}c_{XX}(t)}{\sqrt{v_{gXX}}\hbar}\delta(t-t_{XX}) \\ - \frac{i}{\sqrt{v_{gX}}\hbar}\sum_{m}g'_{y,n}\psi_{n,m}(t,t_{XX},t), \\ \dot{\psi}_{n,m}(t,t_{XX},t_{X}) = - \frac{i}{\sqrt{v_{gX}}\hbar}\sum_{\alpha}g^{\prime *}_{\alpha,n}\psi_{\alpha,n}(t,t_{XX})\delta(t-t_{X}) \,. \label{eq:dif_equations} \end{multline} We then apply the Laplace transform to the nine equations in Eq.~\eqref{eq:dif_equations}, with the system initially prepared in the biexciton state ($c_{XX}(t=0)=1$). The Laplace transform simplifies solving the coupled differential equations to solving an algebraic problem, where the initial conditions of the system are already specified in the Laplace space instead of in the solution of the differential equations. Inverting the Laplace transformation now yields \begin{align} \begin{split} &\dot{\psi}_{x,n}(t,t_{XX}) = - \frac{ig^{*}_{x,n}c_{XX}(t)}{\sqrt{v_{gXX}}\hbar}\delta(t-t_{XX}) \\ &\quad- \left(\frac{-iS + \gamma'_{x}}{2\hbar}\right) \psi_{x,n}(t,t_{XX}) -\frac{\Gamma}{2\hbar}\psi_{y,n}(t,t_{XX}) \\ &\dot{\psi}_{y,n}(t,t_{XX}) = - \frac{ig^{*}_{y,n}c_{XX}(t)}{\sqrt{v_{gXX}}\hbar}\delta(t-t_{XX}) \\ &\quad- \left(\frac{iS + \gamma'_{y}}{2\hbar}\right) \psi_{y,n}(t,t_{XX}) -\frac{\Gamma^{*}}{2\hbar}\psi_{x,n}(t,t_{XX}) \,, \end{split} \label{eq:diff2} \end{align} with the spontaneous emission rates given by \begin{align} \begin{split} \gamma^{ ( \prime )}_{\alpha} &= \sum_n\gamma^{ ( \prime )}_{\alpha,n} \equiv \sum_n\frac{|g^{ ( \prime )}_{\alpha,n}|^2}{v_{gX}}\,. \end{split} \end{align} A coupling between the $\ket{X_x}$ and $\ket{X_y}$ states mediated by the local electric field of the waveguide is captured by the cross terms with coupling coefficient \begin{align} \Gamma = \frac{g'_{A,x}g'^{*}_{A,y}+g'_{B,x}g'^{*}_{B,y}}{v_{gX}}\,, \end{align} which is real due to time-reversal symmetry~\eqref{eq:map_g_e}. This coupling is important if e.g. the local electric field in the waveguide is diagonally polarized, which leads to $\Gamma=\gamma'_x=\gamma'_y$. When solving the coupled set of differential equations~\eqref{eq:diff2} it is convenient to work in a basis that diagonalizes the dynamics, i.e. where the equations decouple. For a rotationally symmetric system, this is the case for any basis, but it is no longer the case once the symmetry is broken. The FSS is induced by the asymmetry of the QD and is assumed to be in the $x$ and $y$-directions such that Eqs.~\eqref{eq:diff2} decouple in that basis. 
On the other hand, the local waveguide field may have a different orientation, which also breaks the symmetry and thus leads to a coupling between the equations, i.e. $\Gamma\neq 0$. In practice, however, we typically have $S\gg\Gamma$, e.g. in the experimental implementation in Ref. \cite{freja} the fine-structure splitting $S$ was an order of magnitude larger than the exciton emission rate $(\gamma'_x+\gamma'_y)/2$. The coupling between the exciton levels ($\ket{X_x}$ and $\ket{X_y}$) can therefore be neglected and we set $\Gamma=0$. We note that this assumption may lead to inconsistencies in the obtained results due to incorrect normalization of the state in QDs with small FSS, i.e., $S$ comparable to $(\gamma'_x+\gamma'_y)/2$. In the subsequent sections, we use $S=4(\gamma'_x+\gamma'_y)/2$, for which we find that the norm of the state differs from unity by $<$6\%.

We now solve the two coupled differential equations from Eq.~\eqref{eq:diff2} by taking the aforementioned limit $\Gamma=0$, such that the equations decouple. We can then straightforwardly solve them by again applying the Laplace transform, obtaining
\begin{widetext}
\begin{align}
\begin{split}
c_{XX}(t) &= e^{-\frac{1}{2\hbar}(\gamma_{x}+\gamma_y)t} \\
\psi_{x,n}(t,t_{XX}) &= -i\sqrt{\gamma_{x,n}} e^{-\frac{1}{2\hbar}(\gamma_{x}+\gamma_y)t_{XX}-\frac{1}{2\hbar}\left(\gamma'_{x}+iS\right)\left(t-t_{XX}\right)-i\phi_{x,n}}\theta(t-t_{XX}) \\
\psi_{y,n}(t,t_{XX}) &= -i\sqrt{\gamma_{y,n}} e^{-\frac{1}{2\hbar}(\gamma_{x}+\gamma_y)t_{XX}-\frac{1}{2\hbar}\left(\gamma'_{y}-iS\right)\left(t-t_{XX}\right)-i\phi_{y,n}}\theta(t-t_{XX}) \\
\psi_{n,m}(t,t_{XX},t_{X}) &= - e^{-\frac{1}{2\hbar}(\gamma_{x}+\gamma_y)t_{XX}}\Bigg(\sqrt{\gamma_{x,n}\gamma'_{x,m}}e^{-\frac{1}{2\hbar}\left(\gamma'_{x}+iS\right)\left(t_{X}-t_{XX}\right)-i\left(\phi_{x,n}+\phi'_{x,m}\right)}\\
&\quad \quad \quad \quad \quad \quad \quad \quad+\sqrt{\gamma_{y,n}\gamma'_{y,m}}e^{-\frac{1}{2\hbar}\left(\gamma'_{y}-iS\right)\left(t_{X}-t_{XX}\right)-i\left(\phi_{y,n}+\phi'_{y,m}\right)}\Bigg)\theta(t-t_{X}) \theta(t_{X}-t_{XX}) \,,
\end{split}
\label{eq:results1}
\end{align}
\end{widetext}
where $\theta(x)$ is the Heaviside step function, i.e. $\theta(x) = 1$ if $x>0$, and $\theta(x) = 0$ otherwise.

We now calculate the probability of detecting two photons simultaneously at the output of the waveguide in order to analyse the quality of the entanglement. To do so we correlate the biexciton and exciton photons with a time delay $\tau$ in two different settings: when both are coupled to the forward or back-propagating direction (noted as $A_{X}A_{XX}$ and $B_{X}B_{XX}$ respectively) and when they couple to opposite directions ($A_{X}B_{XX}$ and $B_{X}A_{XX}$):
\begin{align}
\begin{split}
&P_{n,m}(t,t_{XX},t_{XX}+\tau) = \\
&|v_{gXX}||v_{gX}|\bra{\psi(t)}\hat{a}_{n}^{\dagger}(v_{gXX}t)\hat{a}_{n}(v_{gXX}t) \\
&\quad \quad \quad \quad \cdot \hat{a}'_{m}{}^{\dagger}\left(v_{gX}(t-\tau)\right)\hat{a}'_{m}\left(v_{gX}(t-\tau)\right)\ket{\psi(t)} \,.
\end{split} \end{align} With the wavefunction ansatz Eq.~\eqref{eq:ansatz} and the results from Eq.~\eqref{eq:results1} we obtain \begin{multline} P_{n,m} = |\psi_{n,m}(t,t_{XX},t_{XX}+\tau)|^2 \\ = e^{-(\gamma_{x}+\gamma_y)t_{XX}}\bigg[\gamma_{x,n}\gamma'_{x,n}e^{-\gamma'_{x}\tau}+\gamma_{y,n}\gamma'_{y,n}e^{-\gamma'_{y}\tau}\\ +2\sqrt{\gamma_{x,n}\gamma_{y,n}\gamma'_{x,m}\gamma'_{y,m}}e^{-\frac{1}{2}(\gamma'_{x}+\gamma'_{y})\tau} \\ \cdot\cos\left(S\tau + (\phi_{x,n}-\phi_{y,n})+(\phi'_{x,m}-\phi'_{y,m})\right)\bigg]\\ \cdot\theta(t-t_{XX}-\tau)\,. \label{eq:probabiliti} \end{multline} \subsection{Entanglement generation} The state produced by the biexciton cascade coupled to the chiral waveguide has two different degrees of freedom: the path followed (to the left, $A$, or to the right, $B$) and the respective times of emission of the biexciton ($t_{XX}$) and exciton ($t_{X}$) photons. We project this state in time space by fixing the two times of detection $t_{X}-t_{XX}\equiv \tau > 0$. Note that the characteristics of the state produced depends only on the time difference $\tau$. From our wavefunction ansatz in Eq.~\eqref{eq:ansatz}, we post-select the two-photon emission terms by conditioning on detecting photons at times $t=t_{XX}$ and $t=t_{X}$, thus obtaining the state \begin{multline} \ket{\psi(\tau)} = \frac{1}{\sqrt{N}}( \psi_{AA}(\tau)\ket{AA} + \psi_{AB}(\tau)\ket{AB} \\ + \psi_{BA}(\tau)\ket{BA} + \psi_{BB}(\tau)\ket{BB}) \,, \end{multline} where, \begin{equation} \begin{split} N = &\abs{\psi_{AA}(\tau)}^2 + \abs{\psi_{AB}(\tau)}^2 + \abs{\psi_{BA}(\tau)}^2 + \abs{\psi_{BB}(\tau)}^2\,, \end{split} \label{eq:norm} \end{equation} is the normalisation factor. Note that we dropped the explicit subscripts for exciton $X$ and biexciton $XX$ photons on the direction index. Instead, we utilize time-ordered emission in the simplified notation, i.e. the subscript $AB$ should be read as $A_{XX} B_{X}$. In general the two possible decay channels do not have the same spontaneous emission rates, i.e., $\gamma_{x}\neq\gamma_{y}$ due to differences of the local electric field components in the waveguide. However, to achieve a high degree of chirality in the waveguide, the two exciton decay rates have to be similar $\gamma_{x}\approx\gamma_{y}$. This was also the case in the recent experiment in Ref. \cite{freja}. For most of the article we therefore set $\gamma_x=\gamma_y$ and $\gamma'_x=\gamma'_y$, but investigate the influence of differences in the rates in Sec. \ref{sec:asym}. Moreover, the biexciton and exciton spontaneous emission rates are given by \begin{equation} \gamma_{x}+\gamma_{y}\equiv \gamma_{XX}, \quad \gamma'_{x}=\gamma'_{y}\equiv \gamma_X\,. \label{eq:rates_relation} \end{equation} Since the biexciton decays twice as fast according to Eq.~\eqref{eq:biexciton_decay}, if we assume identical group velocities we have that $\gamma_X=\gamma_{XX}/2$. In the rest of the article, we assume this relation between the spontaneous emission rates. The difference in the phase of the transition dipoles for biexciton and exciton decays, $\Phi$ and $\Phi'$ respectively, satisfies Eq.~\eqref{eq:phase_eq}. Moreover as the optical wavelengths of the photons emitted from biexciton and the exciton decay channels are comparable, we can approximate the phase differences to be equal, i.e. $\Phi = \Phi'$. 
Under these assumptions, the total probability of detecting the first photon at time $t=t_{XX}$ is \begin{multline} P(t=t_{XX}) = (\gamma_x + \gamma_y)e^{-(\gamma_x + \gamma_y)t_{XX}/\hbar} \\ = 2\gamma_Xe^{-2\gamma_Xt_{XX}/\hbar}\,. \end{multline} We can thus calculate the path-dependent, two-photon emission probabilities to be \begin{align} \begin{split} P_{AA} &= \frac{\gamma_X}{4}e^{-\gamma_X\tau/\hbar}\left(1 + \cos\left(S\tau+2\Phi\right)\right) \\ P_{BB} &= \frac{\gamma_X}{4}e^{-\gamma_X\tau/\hbar}\left(1 + \cos\left(S\tau-2\Phi\right)\right)\\ P_{AB} &= P_{BA} = \frac{\gamma_X}{4}e^{-\gamma_X\tau/\hbar}\left(1 + \cos\left(S\tau\right)\right) \,. \end{split} \label{eq:prob_appr2} \end{align} A QD with $S=0$ that is perfectly chiral coupled to the waveguide, i.e., $\Phi = \Phi' =\pi/2$, results in $P_{AA}=P_{BB}=0$. In this case we thus have the ideal entangled state $(\ket{AB}+\ket{BA})/\sqrt{2}$, where the emission direction of the two photons is perfectly anticorrelated, as shown with the dashed and dotted lines in Fig.~\ref{fig:S_pi_2}. Note that, for $S=0$, our model can only accurately represent the perfect chiral coupling case and will lead to erroneous conclusions if $\Phi \neq \pi/2$ since this leads to $\Gamma\neq 0$. For the general case of $S > 0$, we can calculate the resulting entangled two-photon state by conditioning the solution in Eq.~\eqref{eq:results1} on the detection of a photon at time $t=t_{XX}$. For perfect chiral coupling the state is \begin{align} \begin{split} \ket{\psi(\tau)}_{\Phi=\pi/2} = \frac{1}{2}( &\cos\left(\frac{S \tau}{2}\right)\left( \ket{AB} + \ket{BA} \right) \\ + i &\sin\left(\frac{S \tau}{2}\right)\left( \ket{AA} + \ket{BB} \right) )\,, \label{eq:pi2_state} \end{split} \end{align} To understand the entanglement in this state we rewrite it as \begin{equation} \ket{\psi(\tau)}_{\Phi=\pi/2} = \frac{1}{2}(\ket{A}\ket{\xi} + \ket{B}\ket{\xi'})\,, \label{eq:entanglement_oscillation} \end{equation} which is in fact a maximally entangled state, with $\ket{\xi}=\cos\left(S \tau/2\right)\ket{B}+i\sin\left(S \tau/2\right)\ket{A}$ and $\ket{\xi'}=\cos\left(S \tau/2\right)\ket{A}+i\sin\left(S \tau/2\right)\ket{B}$. For perfect chirality the entanglement is thus maximal regardless of the detection time, although the specific entangled state varies with the emission time, resulting in a time varying detection pattern in Fig.~\ref{fig:S_pi_2}. In practice, this means that the corresponding measurement protocol must compensate for the time dependence. This task may be non-trivial depending on the specific application. Here, for simplicity we chose to characterise the state by its intrinsic entanglement that could be obtained in such an idealized setup. In contrast, if the waveguide interaction is not chiral ($\Phi=0,\pi$) the state is given by \begin{align} \begin{split} \ket{\psi(\tau)}_{\Phi=0,\pi} &= \frac{1}{2}\left( \ket{AB} + \ket{BA} + \ket{AA} + \ket{BB} \right) \\ &= \frac{1}{2}\left( \ket{A} + \ket{B} \right)_X\left( \ket{A} + \ket{B} \right)_{XX}\,, \label{eq:phi0_state} \end{split} \end{align} which is a separable state. As a consequence all detection patterns of two photons are equally probable. In real experimental settings, the directional (chiral) coupling could lie in between these two extreme cases depending on the local electric field at the location of the QD within the waveguide. This imperfect chirality will lower the entanglement quality of the source, which is quantified in the next section. 
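As a consistency check, setting $\Phi=\pi/2$ in Eq.~\eqref{eq:prob_appr2} gives
\begin{align}
\begin{split}
P_{AA} &= P_{BB} = \frac{\gamma_X}{2}e^{-\gamma_X\tau/\hbar}\sin^2\left(\frac{S\tau}{2}\right), \\
P_{AB} &= P_{BA} = \frac{\gamma_X}{2}e^{-\gamma_X\tau/\hbar}\cos^2\left(\frac{S\tau}{2}\right),
\end{split}
\end{align}
whose relative weights reproduce the amplitudes of Eq.~\eqref{eq:pi2_state}. In particular, when $S\tau$ is an even (odd) multiple of $\pi$ only the anticorrelated (correlated) detection patterns occur, even though the state remains maximally entangled at all delays.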
\section{Results} \begin{figure} \caption{\label{fig:S_pi_2}} \end{figure} \begin{figure*} \caption{\label{fig:3ab}} \end{figure*} As we have seen in the previous section, the emitted two-photon entangled state depends on the time difference $\tau$ between the biexciton and exciton emission times. We thus expect that any uncertainty in the emission times will affect the entanglement quality of the state. Moreover, imperfect chirality of the waveguide reduces the directionality of emission, thereby leading to non-perfect conversion into path encoding of the entangled state. In this section, we quantify the effect of imperfections on the entanglement quality of the state. To this end, we employ the concurrence $C$ as the entanglement measure to characterise the quality of the state. The concurrence of any quantum state with a density matrix $\rho$ is given by \cite{concurrence} \begin{align} C(\rho) = \max\{ 0,\lambda_1-\lambda_2-\lambda_3-\lambda_4\}\,, \label{eq:concurrence} \end{align} where $\{\lambda_i\}$ are the square roots of the eigenvalues of $\rho\Tilde{\rho}$ in descending order and $\Tilde{\rho}=(\hat{\sigma}_y\otimes\hat{\sigma}_y)\rho^{*}(\hat{\sigma}_y\otimes\hat{\sigma}_y)$. We calculate the density matrix $\rho$ that represents the path-encoded state obtained from the biexciton cascade to be \begin{equation} \rho(\tau) = \sum_{\substack{n,n'\\m,m'}}\psi_{n,m}(\tau)\psi^{*}_{n',m'}(\tau) \ket{n,m}\bra{n',m'}\,. \label{eq:density_matrix} \end{equation} By calculating the resulting eigenvalues $\{\lambda_i\}$, we obtain the concurrence using Eq.~\eqref{eq:concurrence}: \begin{align} C(\tau) = \frac{2}{N}\abs{\psi_{AA}(\tau)\psi_{BB}(\tau) - \psi_{AB}(\tau)\psi_{BA}(\tau)}\,. \end{align} Inserting the wavefunctions from Eq.~\eqref{eq:results1} and approximating $\gamma_x=\gamma_y$ as discussed earlier (cf. Eq.~\eqref{eq:rates_relation}), the dependence of $C$ on the chiral phase $\Phi$ and the time delay $\tau$ between the biexciton and exciton emissions is found to be \begin{equation} C(\Phi,\tau) = \frac{\sin^2\left(\Phi\right)}{1+\cos\left(S\tau\right)\cos^2\left(\Phi\right)}\,. \label{eq:concurrence_0} \end{equation} We obtain perfect concurrence $C=1$ when the waveguide is perfectly chiral ($\Phi = \pi/2$), as discussed above. Furthermore, if the waveguide is completely non-chiral ($\Phi = 0,\pi$) the concurrence vanishes, $C=0$, in agreement with the separable state obtained in Eq.~\eqref{eq:phi0_state}. In the following subsections we analyse the effect of each of the imperfections independently and in more detail. \subsection{Fine-structure splitting} In this subsection, we analyse the effect of the FSS on the entanglement quality of the path-entangled state. Non-zero FSS leads to a spin-flip between the exciton levels ($\ket{X_\pm}$), and it is therefore convenient to describe the decay in the linear polarisation basis with $x$- and $y$-polarized states, $\ket{X_x}$ and $\ket{X_y}$ respectively (cf. Fig.~\ref{fig:scheme}(b)). In this basis, the states are decoupled and the FSS-induced spin-flip frequency $S$ corresponds to an energy splitting between the exciton levels. The splitting makes the emitted photons distinguishable in energy, and crucially their frequencies are correlated with their polarisations. This leads to ``which-way'' information about the polarisation state, which reduces the degree of entanglement.
To overcome this issue Ref.~\cite{fognini2018} has proposed using electro-optical modulators that rotate the polarisation of the biexciton and exciton photons separately to effectively erase the information gained from the splitting in the polarisation-encoded state. A phase modulator could similarly be applied to improve path-entangled states. Alternatively, narrow spectral filtering in between the two frequency components of either the exciton or biexciton emission can be implemented to erase the ``which-path'' information, however, at the expense of significantly reducing the entanglement generation rate \cite{akopian2006}. Another approach is to implement QDs with improved symmetry in order to obtain a smaller splitting $S$ \cite{huo2013}. The reference situation corresponds to an ideal system without fine structure splitting and perfect directional (chiral) coupling ($S=0$ and $\Phi=\pi/2$). This situation is easily understood from the level structure in Fig.~\ref{fig:scheme}(a), where emission occurs with two oppositely polarized circular dipoles ($\sigma_-$ and $\sigma_+$). With perfect chiral coupling these decay in opposite directions creating that maximally entangled state $(\ket{AB}+\ket{BA})/\sqrt{2}$. As a consequence, the probability of detecting both photons on the same side of the waveguide vanishes (dashed line in Fig.~\ref{fig:S_pi_2}). The probability of detecting one photon at each of the opposite ends of the waveguide decays exponentially with the exciton spontaneous emission rate $(\gamma'_x+\gamma'_y)/2$ (dotted line in Fig.~\ref{fig:S_pi_2}) as expected from the lifetime of the exciton states. We now consider a scenario where the FSS creates an asymmetry between the exciton levels ($S\neq 0$), while the chiral coupling is still ideal ($\Phi = \pi/2$). This generates an oscillation between two maximally entangled states as discussed below Eq.~\eqref{eq:entanglement_oscillation}. The corresponding probabilities of the various detection patterns is shown with the dash-dotted and solid lines in Fig.~\ref{fig:S_pi_2}. The amplitude of oscillations decays exponentially with the time constant set by the exciton spontaneous emission rate. As discussed in the previous section, although the emitted state changes over time, it remains maximally entangled, i.e., $C(\tau\geq0)=1$, and it is a superposition of standard Bell states. \subsection{Imperfect chirality} We now analyse the joint effect of both imperfect chirality ($\Phi\neq\pi/2$) and non-zero FSS ($S\neq 0$). An example of the detection probability for this situation is shown in Fig.~\ref{fig:3ab}(a). Curiously, the probabilities $P_{AA}$ and $P_{BB}$ are out of phase, meaning that with a given time delay there is a difference in the probabilities of detecting two photons at the two ends of the waveguide. This effect happens due to an interplay of the imperfect chirality and the FSS. A decay from the biexciton state and subsequent detection of the photon at one end creates a coherent superposition between the two exciton states $\ket{X_x}$ and $\ket{X_y}$ with a phase $\mp\Phi$ depending on where the photon was detected. The subsequent dynamics induced by the FSS $S$ may then evolve the state towards or away from the relative phase $\pm \Phi$, which gives the maximal emission in the same direction. With non-perfect chirality, the concurrence $C$ of the path-entangled, bi-photon state emitted by the biexciton cascade is reduced since the imperfect chirality limits the directional coupling of the QD emission. 
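To make this concrete, the following Python sketch (an illustration only, assuming symmetric decay rates $\gamma_x=\gamma_y$) evaluates the concurrence of a pure two-qubit state via the Wootters formula \eqref{eq:concurrence}, using a simple parametrisation of the post-selected amplitudes that reproduces the detection probabilities of Eq.~\eqref{eq:prob_appr2}, and compares it with the closed-form expression \eqref{eq:concurrence_0}.
\begin{verbatim}
import numpy as np

def concurrence_pure(psi):
    """Wootters concurrence of a pure two-qubit state given by the amplitudes
    (psi_AA, psi_AB, psi_BA, psi_BB), not necessarily normalised."""
    psi = np.asarray(psi, dtype=complex)
    norm = np.vdot(psi, psi).real
    return 2 * abs(psi[0] * psi[3] - psi[1] * psi[2]) / norm

def concurrence_closed_form(Phi, S, tau):
    """Closed-form result of Eq. (concurrence_0)."""
    return np.sin(Phi) ** 2 / (1 + np.cos(S * tau) * np.cos(Phi) ** 2)

S = 4.0   # fine-structure splitting (placeholder value, in units of gamma_X)
for Phi in (np.pi / 2, np.pi / 3, np.pi / 8):
    for tau in (0.2, 1.0, 2.5):
        A = S * tau / 2
        # Illustrative amplitudes reproducing the probabilities of Eq. (prob_appr2):
        psi = (np.cos(A + Phi), np.cos(A), np.cos(A), np.cos(A - Phi))
        assert np.isclose(concurrence_pure(psi),
                          concurrence_closed_form(Phi, S, tau))
print("Wootters concurrence agrees with Eq. (concurrence_0) for the sampled parameters.")
\end{verbatim}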
The dependence of $C(\tau)$ on $\Phi$ is shown in Fig.~\ref{fig:3ab}(b). We observe that $C$ is independent of $\tau$ only if $\Phi = n\pi/2$, where $n$ is a non-zero integer. If $n$ is even, $C(\tau\geq0) = 0$, corresponding to the completely non-chiral case. If $n$ is odd, we reproduce the results of the perfectly chiral case, which yields a maximally entangled state with $C(\tau\geq0) = 1$ as discussed in the previous subsection. For partial chirality $\Phi \neq \pi/2$, the FSS induces oscillations between non-maximally entangled states and $C$ oscillates as a function of the detection time $\tau$. In general, $C$ is below unity except for $S\tau=\pi$, where the concurrence is unity for all $\Phi\neq0,\pi$. \subsection{Timing jitter}\label{sec:TimeJitter} \begin{figure} \caption{\label{fig:time_jitter}} \end{figure} In this subsection we analyse the effect of uncertainty in the timing of photodetection events on the entanglement quality. We model the uncertainty in detection time by averaging the density matrix \eqref{eq:density_matrix} elements $\rho_{n,n',m,m'}$ with a Gaussian probability distribution with standard deviation $\sigma$, \begin{equation} \begin{split} \bar{\rho}_{n,n',m,m'}(\tau) = \int_{0}^{\infty}d\tau' \exp\left[{-\frac{(\tau'-\tau)^2}{2\sigma^2}}\right] \\ \times \psi_{n,m}(\tau')\psi^{*}_{n',m'}(\tau')\,. \label{eq:concurrence_1} \end{split} \end{equation} The time-averaged density matrix $\bar{\rho}(\tau)$ is then given by \begin{equation} \bar{\rho}(\tau) = \frac{1}{\bar{N}(\tau)}\begin{pmatrix} \bar{\rho}_{AAAA}(\tau) & \bar{\rho}_{AAAB}(\tau) & \dots & \bar{\rho}_{AABB}(\tau) \\ \bar{\rho}_{ABAA}(\tau) & \ddots & & \vdots\\ \vdots & &\ddots& \vdots \\ \bar{\rho}_{BBAA}(\tau) & \dots & \dots & \bar{\rho}_{BBBB}(\tau) \end{pmatrix}\,,\label{eq:rho_avg} \end{equation} where $\bar{N} = \int_{-\infty}^{\infty}d\tau' \exp[-(\tau'-\tau)^2/(2\sigma^2)]N$ is a normalisation constant equal to the probability density of the detection time and $N$ is given by Eq.~\eqref{eq:norm}. From this density matrix we can then calculate the concurrence $C$. Figure~\ref{fig:time_jitter}(a) shows the dependence of the concurrence $C$ on the detection timing jitter $\sigma$ for different combinations of chirality and time delay. As seen in the figure, the concurrence drops when the uncertainty in detection time becomes comparable to the oscillation period $1/S$. This highlights the importance of keeping track of the time dependence for the quality of the final path-entangled state. Unlike in the jitter-free case, even systems with perfect chirality ($\Phi = \pi/2$) exhibit $C<1$ for non-zero values of $\sigma$, since we do not know precisely which state we have. The asymptotic value of $C$ at large time jitter is observed to depend only on the phase $\Phi$: once the time jitter is comparable to or larger than the spread in emission times, the precise detection time no longer matters. Figure~\ref{fig:time_jitter}(b) shows the time evolution of $C$ for a fixed timing jitter $\sigma = 0.3/\gamma_X$ for different values of the chiral phase and $S=4\gamma_X$. Note that a peculiar effect occurs for $\Phi=\pi/2$ (Fig.~\ref{fig:time_jitter}(b)), where we observe that $C$ increases at negative time delays (grey shaded region). Since the emission of the exciton always occurs after the biexciton emission ($t_{X}-t_{XX}=\tau>0$), negative detection intervals ($\tau<0$) correspond to the case where the emission of the photon must have occurred close to $\tau=0$, i.e.
with minimal time delay, but was measured to be at a negative value due to the time jitter. Therefore the uncertainty in the emission time, which is otherwise given by the detection time jitter, is effectively reduced for negative detection times, leading to a higher concurrence. The probability of measuring the state at negative time intervals, however, decays very rapidly as $\tau$ decreases, as shown in Fig.~\ref{fig:time_jitter}(c). The larger concurrence at small positive time delays ($0<\tau\lesssim\sigma$) compared to later times can be understood with similar arguments. On the other hand, for $\Phi=\pi/4$ and $\Phi=\pi/8$ the fidelity in the absence of time jitter is lower around $\tau=0$ than at later times, cf. Fig.~\ref{fig:3ab}(b). As a consequence the peak concurrence still occurs around $S\tau=\pi$. The probability density in Fig.~\ref{fig:time_jitter}(c) decays with the decay rate $\gamma_X$ of the exciton states. On top of this it oscillates, with increasing amplitude as the system becomes less chiral ($\Phi\rightarrow 0$). The reason is that the polarisation of the waveguide modes becomes linear as the system loses chirality. After the decay of the biexciton, the polarisation of the exciton state rotates due to the FSS $S$ and may thus be more or less aligned with the waveguide polarisation. In contrast, for the chiral case the waveguide polarisation is circular and the rotation of the polarisation does not affect the decay rate. \subsection{Asymmetric exciton decay} \label{sec:asym} \begin{figure*} \caption{\label{fig:4abc}} \end{figure*} In the experiments presented in Ref.~\cite{freja}, the decay rates of the $x$- and $y$-polarized exciton levels were nearly identical (i.e. $\gamma_x\approx\gamma_y$). However, in general these two decay rates may differ depending on the position of the QD in the waveguide, with the asymmetry more dominant at locations with a low degree of directional emission, i.e., far from perfect chirality. In this subsection we analyse how this asymmetry can affect the quality of entanglement. Figure~\ref{fig:4abc}(a,b) shows the impact of the asymmetry $\epsilon\equiv(\gamma_x-\gamma_y)/(\gamma_x+\gamma_y)$ on the concurrence for the case of $\epsilon = -0.4$. As the decay rates of the $x$- and the $y$-polarized exciton levels are different, one can gain ``which-path'' information about the photon decay from the photodetection time, i.e., the faster-decaying level results in an increased likelihood of early photon detection, and vice versa. This extra information about the emission process reduces the entanglement. Furthermore, the difference between the decay rates of the two biexciton decay channels creates a population imbalance between the $\ket{X_x}$ and $\ket{X_y}$ states. However, if the difference in the emission times is comparable to the difference in decay rates, the ``which-path'' information arising from the asymmetric decay rates is erased and the entanglement is recovered. This interplay between the difference in emission times and the asymmetry $\epsilon$ leads to an optimal time delay $\tau$ that maximizes the concurrence, as observed in Fig.~\ref{fig:4abc}(a,b). In addition to this optimality, we still observe that $C$ oscillates with the emission time delay due to the non-zero $S$, as discussed in Sec. III.A.
For a systematic study of the effect of asymmetry, we calculate the average concurrence $\Bar{C}$ over all detection times $\tau$, defined as \begin{equation} \Bar{C} = \int_{-\infty}^{\infty}P(\tau)C(\tau)d\tau\,, \end{equation} where $P(\tau)$ is the corresponding probability density of the state at time $\tau$. The dependence of $\Bar{C}$ on the asymmetry parameter $\epsilon$ and the phase difference $\Phi$ is shown in Fig.~\ref{fig:4abc}(c), which highlights that $\Bar{C}$ is maximized for symmetric decay of the exciton dipoles, i.e. $\epsilon = 0$. \subsection{Dephasing noise} Electron-phonon interactions can induce dephasing processes that will degrade the indistinguishability of the photons. These processes are nevertheless not expected to affect the entanglement quality of the state. This is due to the two exciton levels being symmetrically perturbed by the phononic interaction: the dephasing of the exciton level is expected to be induced solely by the deformation potential of the quantum dot, which is independent of its spin properties \cite{muljarov2004a,tighineanu2018}, so that the two levels are dephased in an identical manner. Therefore the indistinguishability of the photons emitted at the exciton level is reduced by this effect, but it is not expected to degrade the entanglement quality, as witnessed experimentally in Ref.~\cite{coste2023}. \section{Conclusion} We have provided an in-depth analysis of the entanglement properties of a QD biexciton cascade embedded in a chiral nanophotonic waveguide, as experimentally realised in Ref.~\cite{freja}. We have calculated how the biexciton cascade can deterministically prepare a path-encoded state mediated by the chiral coupling of the waveguide. The entanglement of the state is, however, affected by errors unavoidably present in the experimental implementation of the system. In particular, we have shown how the time dependence of the state induced by the FSS plays a crucial role in determining the generated entanglement. The amount of path-entanglement generated by the biexciton cascade can strongly depend on the emission time, while the presence of detection time jitter reduces the concurrence of the state. Finally, imperfect directional coupling in the waveguide also reduces the concurrence of the path-encoded entangled state. Our work quantifies the role of such imperfections and lays out a route to a deterministic source of path-encoded entangled photons of high entanglement quality. We hope our work will motivate further experimental improvements of this novel entanglement source. \section{Acknowledgments} We acknowledge the support of Danmarks Grundforskningsfond (DNRF 139, Hy-Q Center for Hybrid Quantum Networks). \end{document}
\begin{document} \begin{abstract} We compute the image of Enriquez' elliptic KZB associator in the (maximal) meta-abelian quotient of the fundamental Lie algebra of a once-punctured elliptic curve. Our main result is an explicit formula for this image in terms of Eichler integrals of Eisenstein series, and is analogous to Deligne's computation of the depth one quotient of the Drinfeld associator. We also show how to retrieve Zagier's extended period polynomials of Eisenstein series, as well as the values at zero of Beilinson--Levin's elliptic polylogarithms from the meta-abelian elliptic KZB associator. \end{abstract} \maketitle \section{Introduction} \label{sec:1} This paper deals with the computation of some of the coefficients of the elliptic KZB associator defined by Enriquez \cite{Enriquez:EllAss}. In order to put things into context, we first recall the analogous picture in genus zero, due to Deligne, Drinfeld and Ihara. Let $\mathfrak{p}(U):=\mathbb L(\mathsf{x}_0,\mathsf{x}_1)^{\wedge}$ be the lower central series completion of the free Lie algebra in variables $\mathsf{x}_0,\mathsf{x}_1$, and denote by $\exp \mathfrak{p}(U)$ the associated pro-unipotent algebraic group. The Drinfeld associator $\Phi(\mathsf{x}_0,\mathsf{x}_1)$ is an element of $\exp \mathfrak{p}(U)_{\mathbb R}:=\exp (\mathfrak{p}(U) \widehat{\otimes} \mathbb R)$, which is constructed from the monodromy of the universal Knizhnik--Zamolodchikov (KZ) connection on $\mathbb P^1_{\mathbb C} \setminus \{0,1,\infty\}$ (for this reason, $\Phi$ is sometimes called KZ-associator). First introduced in \cite{Drinfeld:Gal}, the Drinfeld associator plays a pivotal role in the context of quantum groups and Grothendieck--Teichm\"uller theory. We are interested in arithmetic properties of $\Phi(\mathsf{x}_0,\mathsf{x}_1)$. The following two aspects, which are in fact closely related to each other, are of particular relevance. \begin{enumerate} \item[(i)] The coefficients of $\Phi(\mathsf{x}_0,\mathsf{x}_1)$ are expressible as $\mathbb Q$-linear combinations of multiple zeta values \begin{equation} \label{eqn:mzv} \zeta(k_1,\ldots,k_n)=\sum_{m_1>\ldots>m_n>0}\mathfrak{r}ac{1}{m_1^{k_1}\ldots m_n^{k_n}}, \quad k_1 \geq 2, \, k_2,\ldots,k_n \geq 1, \end{equation} which are generalizations of the special values of the Riemann zeta function at positive integers. These numbers have (at least conjecturally) a rich algebraic structure \cite{Goncharov:MTM,IKZ}. \item[(ii)] The Lie algebra $\mathfrak{p}(U)$ is the de Rham realization of an element of the category $\mathsf{MTM}$ of mixed Tate motives over $\mathbb Z$ (\cite{DG}, \S 5). As a consequence, the unipotent fundamental group $\mathcal U_{\mathsf{MTM}}$ of $\mathsf{MTM}$ acts on $\exp\mathfrak{p}(U)$ (Ihara action), and in particular on $\Phi(\mathsf{x}_0,\mathsf{x}_1)$.\footnote{In this context, $\Phi(\mathsf{x}_0,\mathsf{x}_1)$ is usually denoted $dch$ (for `droit chemin').} The Deligne--Ihara conjecture (proved by Brown in \cite{Brown:MTM}) states that this action is faithful, thus elements of $\mathcal U_{\mathsf{MTM}}$ are completely determined by their action on $\Phi(\mathsf{x}_0,\mathsf{x}_1)$, which can be computed very explicitly \cite{Brown:Decomposition}. 
\end{enumerate} For both (i) and (ii), the archetypal result is due to Deligne (\cite{Deligne:P1}, \S 19), who inspired by unpublished work of Wojtkowiak essentially showed that \begin{equation} \label{eqn:depthone} \log(\Phi(\mathsf{x}_0,\mathsf{x}_1)) \equiv -\sum_{k=2}^{\infty}\zeta(k)\ad^{k-1}(\mathsf{x}_0)(\mathsf{x}_1) \mod [D^1\mathfrak{p}(U),D^1\mathfrak{p}(U)], \end{equation} where $D^1\mathfrak{p}(U) \subset \mathfrak{p}(U)$ denotes the ideal generated by $\mathsf{x}_1$. On the one hand, this exhibits the Riemann zeta values $\zeta(k)$ as coefficients of $\log(\Phi(\mathsf{x}_0,\mathsf{x}_1))$. On the other hand, since $\zeta(k) \neq 0$, one deduces from \eqref{eqn:depthone} that the generators $\exp(\sigma_{2n+1})$ of $\mathcal U_{\mathsf{MTM}}$ act non-trivially on $\exp \mathfrak{p}(U)$ (\cite{DG}, \S 6.8), which was a first step towards establishing the Deligne--Ihara conjecture. In this paper, we consider an elliptic analog of the above situation. Let $\mathfrak{H}$ be the Poincar\'e upper half-plane, and consider for $\tau \in \mathfrak{H}$ the once-punctured, complex elliptic curve $E_{\tau}^{\times}:=\mathbb C/(\mathbb Z+\mathbb Z\tau) \setminus \{0\}$. Following Hain--Matsumoto \cite{HM}, we denote its de Rham fundamental group by $\mathfrak{p}(E_{\tau}^{\times}) \cong \mathbb L(\mathsf{a},\mathsf{b})^{\wedge}$. In \cite{Enriquez:EllAss}, Enriquez constructs the elliptic KZB associator $(A(\tau),B(\tau)) \in \exp \mathfrak{p}(E_{\tau}^{\times})_{\mathbb C} \times \exp \mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$ from the monodromy of the universal elliptic Knizhnik--Zamolodchikov--Bernard (KZB) connection \cite{CEE:KZB,LR}. It is an elliptic version of the Drinfeld associator, and the analogs of (i) and (ii) above are the following. \begin{enumerate} \item[(i)] The coefficients of the elliptic KZB associator are the elliptic multiple zeta values, first introduced in \cite{Enriquez:Emzv} and studied in more detail in \cite{BMS,LMS,Matthes:Thesis,Matthes:Edzv}. They are closely related to both multiple zeta values and to iterated integrals of Eisenstein series \cite{Brown:MMV,Manin:Iterated}. \item[(ii)] The Lie algebra $\mathfrak{p}(E_{\tau}^{\times})$, viewed as a local system over the moduli space $\mathcal M_{1,\overrightarrow{1}}$ of elliptic curves with a non-zero tangent vector at the origin, is the de Rham realization of an element of the category $\mathsf{MEM}_{\overrightarrow{1}}$ of universal mixed elliptic motives (over $\mathcal M_{1,\overrightarrow{1}}$). This category can be seen as an elliptic enhancement of the category of mixed Tate motives over $\mathbb Z$. The corresponding Galois group $\mathcal G_{\mathsf{MEM}_{\overrightarrow{1}}}$ acts on $\mathfrak{p}(E_{\tau}^{\times})$ \cite{HM}, and therefore also on the elliptic KZB associator. In analogy to the Deligne--Ihara conjecture, it is asked in \cite{HM}, \S 24.2 whether the action of $\mathcal G_{\mathsf{MEM}_{\overrightarrow{1}}}$ on $\mathfrak{p}(E_{\tau}^{\times})$ is faithful. \end{enumerate} The main goal of this article is to establish an analog of \eqref{eqn:depthone} for the elliptic KZB associator, i.e. the explicit computation of the images of the formal logarithms $\mathfrak{A}(\tau):=\log(A(\tau))$ and $\mathfrak{B}(\tau):=\log(B(\tau))$ in a certain quotient of $\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$. More precisely, let $D^1\mathfrak{p}(E_{\tau}^{\times}) \subset \mathfrak{p}(E_{\tau}^{\times}) \cong \mathbb L(\mathsf{a},\mathsf{b})^{\wedge}$ be the commutator. 
Taking its lower central series defines a filtration $D^{\bullet}\mathfrak{p}(E_{\tau}^{\times})$, the elliptic depth filtration (\cite{HM}, \S 27). In particular, $D^2\mathfrak{p}(E_{\tau}^{\times})$ is the double commutator, and our goal is to compute the images $\mathfrak{A}(\tau)^{\rm met-ab}$ and $\mathfrak{B}(\tau)^{\rm met-ab}$ of the elliptic KZB associator in the meta-abelian quotient \begin{equation} \mathfrak{p}(E_{\tau}^{\times})^{\rm met-ab}_{\mathbb C}:=\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}/D^2\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C} \cong (\mathbb C\cdot \mathsf{a}\oplus \mathbb C\cdot \mathsf{b}) \oplus \mathbb C[\![\mathsf{U},\mathsf{V}]\!], \end{equation} where $\mathsf{U}^k\mathsf{V}^l:=\ad^k(\mathsf{a})\ad^l(\mathsf{b})([\mathsf{a},\mathsf{b}])$. Our main result can then be stated as follows. \begin{intthm}[Theorem \ref{thm:arithgeo} below] Let $\overline{\mathsf{U}}:=\frac{\mathsf{U}}{2\pi i}$ and $\mathsf{W}:=\overline{\mathsf{U}}+\tau \mathsf{V}$. We have \begin{align} \mathfrak{A}(\tau)^{\rm met-ab}&=2\pi i\mathsf{b}+\exp\left(\tau\frac{\partial}{\partial \overline{\mathsf{U}}}\mathsf{V} \right)\mathfrak{A}^{(1)}_{\infty}-2\pi i\mathsf{V}\sum_{k=1}^{\infty}\frac{2}{(2k-2)!}\int_{\tau}^{\overrightarrow{1}_{\infty}}\underline{G}_{2k}, \end{align} and \begin{align} \mathfrak{B}(\tau)^{\rm met-ab}&=\mathsf{a}+2\pi i\tau \mathsf{b}+\exp\left(\tau\frac{\partial}{\partial \overline{\mathsf{U}}}\mathsf{V} \right)\mathfrak{B}^{(1)}_{\infty}-2\pi i\mathsf{W}\sum_{k=1}^{\infty}\frac{2}{(2k-2)!}\int_{\tau}^{\overrightarrow{1}_{\infty}}\underline{G}_{2k}. \end{align} Here, $\int_{\tau}^{\overrightarrow{1}_{\infty}}\underline{G}_{2k}:=(2\pi i)^{2k-1}\int_{\tau}^{\overrightarrow{1}_{\infty}}G_{2k}(z)(\mathsf{W}-z\mathsf{V})^{2k-2}\mathrm{d} z$ is the regularized Eichler integral of $G_{2k}$ (\cite{Brown:MMV}, \S 4), and the series $\mathfrak{A}_{\infty}^{(1)}$, $\mathfrak{B}_{\infty}^{(1)}$ are given by \begin{align} \mathfrak{A}_{\infty}^{(1)}&=2\pi i\left(c(\mathsf{U})-\frac{2\pi i}{4} \mathsf{V}+\sum_{n \geq 3, {\rm odd}}\zeta(n)\mathsf{V}^n\right),\\ \mathfrak{B}_{\infty}^{(1)}&=-2\pi i\left(c(2\pi i\mathsf{V})-\mathsf{U} c(\mathsf{U})c(2\pi i\mathsf{V})\right)+\sum_{n \geq 3, \,{\rm odd}}\zeta(n)\mathsf{U}\mathsf{V}^{n-1}, \end{align} where $c(x)=\frac{1}{e^x-1}+\frac 12-\frac 1x=\sum_{k=2}^{\infty}\frac{B_k}{k!}x^{k-1}$. \end{intthm} Similar considerations have been made by Hain to prove that the generators $\exp(\mathbf{e}_{2k})$ of the geometric fundamental group $\mathcal G^{\rm geom}_{\mathsf{MEM}_{\overrightarrow{1}}}$ act non-trivially on $\mathfrak{p}(E_{\tau}^{\times})$ (\cite{Hain:HodgeDeRham}, Theorem 15.7). Moreover, our theorem gives a closed expression for elliptic multiple zeta values of depth one, explicitly in terms of Riemann zeta values and Eichler integrals of Eisenstein series. The proof of Theorem \ref{thm:arithgeo} uses a result of Enriquez \cite{Enriquez:EllAss} to the effect that \begin{equation} \label{eqn:crucial} \mathfrak{A}(\tau)=g(\tau)(\mathfrak{A}_{\infty}), \quad \mathfrak{B}(\tau)=g(\tau)(\mathfrak{B}_{\infty}), \end{equation} for certain explicit elements $\mathfrak{A}_{\infty},\mathfrak{B}_{\infty} \in \mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$ and an automorphism $g(\tau) \in \Aut(\exp(\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}))$.
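As a quick sanity check on the generating series $c(x)$ appearing in the theorem (purely illustrative; \texttt{sympy} is used for the series expansion), the following Python sketch verifies that the Taylor coefficients of $x\,c(x)=\frac{x}{e^x-1}+\frac{x}{2}-1$ are indeed $\frac{B_k}{k!}$ for $k\geq 2$.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')

# x*c(x) = x/(e^x - 1) + x/2 - 1 is analytic at x = 0; its Taylor coefficients
# should be the Bernoulli numbers B_k/k! for k >= 2 (and vanish for k = 0, 1).
g = x / (sp.exp(x) - 1) + x / 2 - 1
ser = sp.series(g, x, 0, 11).removeO()

for k in range(2, 10):
    assert sp.simplify(ser.coeff(x, k) - sp.bernoulli(k) / sp.factorial(k)) == 0
print("Taylor coefficients of x*c(x) match B_k/k! for k = 2,...,9.")
\end{verbatim}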
Then, we separately compute the images of $\mathfrak{A}_{\infty}$ and $\mathfrak{B}_{\infty}$ in $\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}^{\rm met-ab}$ and of $g(\tau)$ in $\Aut(\exp(\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}^{\rm met-ab}))$, and from this, we are able to deduce Theorem \ref{thm:arithgeo}. The series $\mathfrak{A}_{\infty}$ and $\mathfrak{B}_{\infty}$ are arithmetic: they can be expressed in terms of the Drinfeld associator and therefore come from genus zero. On the other hand, the automorphism $g(\tau)$ is geometric: it describes the action of $\mathcal G^{\rm geom}_{\mathsf{MEM}_{\overrightarrow{1}}}$ on $\exp \mathfrak{p}(E_{\tau}^{\times})$. As a byproduct of our proof, we see that already their images in the meta-abelian quotient are interesting objects in their own right. Namely, the automorphism $g(\tau)^{\rm met-ab}$ is essentially the generating series of the special values of elliptic polylogarithms at the zero section of the elliptic curve \cite{BeiLev,Levin:Compositio} (cf. Theorem \ref{thm:geometric} and Corollary \ref{cor:geometric}), while $\mathfrak{A}_{\infty}^{\rm met-ab}$, $\mathfrak{B}_{\infty}^{\rm met-ab}$ turn out to be generating series of the extended period polynomials of Eisenstein series \cite{Zagier:Periods} (cf. Theorem \ref{thm:arithmetic} and Corollary \ref{cor:arithmetic}). Finally, we note that Nakamura \cite{Nakamura:Galoisrep,Nakamura:Eisenrevisited} has studied an $\ell$-adic analog of the meta-abelian image of the elliptic KZB associator (called ``universal power series for Dedekind sums''), which is a genus one analog of Ihara's universal power series for Jacobi sums \cite{Ihara:Annals}. It would be very interesting to compare his results to ours. The plan of the paper is as follows. In Sections \ref{sec:2} and \ref{sec:3}, we collect some background in order to make the paper self-contained. Then, in Section \ref{sec:4}, we recall the definition of the elliptic KZB associator \cite{Enriquez:EllAss}, but from the point of view of the mixed Hodge structure on the unipotent fundamental group of $E_{\tau}^{\times}$ \cite{BL:MEP}. Finally, in Section \ref{sec:5}, the main results of this paper are proved. {\bf Acknowledgments:} Very many thanks to B. Enriquez and H. Nakamura for very inspiring discussions at the conference ``GRT, MZVs and associators'' in Les Diablerets in 2015, which formed the starting point of this project. Thanks are also due to A. Alekseev for the invitation to that conference. Also, many thanks to F. Brown, B. Enriquez, H. Furusho and F. Zerbini for helpful comments on an earlier version of this paper. This paper was written while the author was a Ph.D. student at Universit\"at Hamburg under the supervision of U. K\"uhn. \section{Preliminaries} \label{sec:2} \subsection{Notation and conventions} We start by introducing some general notation, to be used throughout the text. We denote by $\mathfrak{H}:=\{ z \in \mathbb C \, \vert \, \im(z)>0 \}$ the upper half-plane, with canonical coordinate $\tau$. For $\tau \in \mathfrak{H}$, we let $E_{\tau}^{\times}:=\mathbb C/(\mathbb Z+\mathbb Z\tau) \setminus \{0\}$ be the associated once-punctured complex elliptic curve. For any finite set $\{\mathsf{x}_1,\ldots,\mathsf{x}_n\}$ and a field $K$, we denote by $\mathbb L(\mathsf{x}_1,\ldots,\mathsf{x}_n)_K$ the free Lie algebra on $X$ over $K$ (we omit $K$ if $K=\mathbb Q$), and by $\mathbb L(\mathsf{x}_1,\ldots,\mathsf{x}_n)^{\wedge}_K$ the completion for its lower central series. 
It is a topological Lie algebra over $K$, whose topology is induced from the lower central series. Its topological universal enveloping algebra is given by $K\langle\!\langle \mathsf{x}_1,\ldots,\mathsf{x}_n\rangle\!\rangle$, the $K$-algebra of formal power series in the non-commuting variables $\mathsf{x}_1,\ldots,\mathsf{x}_n$, and the exponential map $\exp: \mathbb L(\mathsf{x}_1,\ldots,\mathsf{x}_n)^{\wedge}_K \rightarrow K\langle\!\langle \mathsf{x}_1,\ldots,\mathsf{x}_n\rangle\!\rangle$ defines an isomorphism onto the subspace of $K\langle\!\langle \mathsf{x}_1,\ldots,\mathsf{x}_n\rangle\!\rangle$ of group-like elements, denoted by $\exp \mathbb L(\mathsf{x}_1,\ldots,\mathsf{x}_n)^{\wedge}_K$. For more background, we refer to \cite{Reutenauer,Serre:Lie}. \subsection{Derivations on the fundamental Lie algebra of a once-punctured elliptic curve} Following \cite{HM}, we will denote by $\mathfrak{p}(E^{\times}_{\tau})$ the (de Rham) fundamental Lie algebra of the once-punctured elliptic curve $E_{\tau}^{\times}$. With notation as above, one has \begin{equation} \mathfrak{p}(E^{\times}_{\tau})\cong\mathbb L(\mathsf{a},\mathsf{b})^{\wedge} \end{equation} where the generators $\mathsf{a},\mathsf{b}$ correspond to the natural homology cycles on $E_{\tau}^{\times}$. We will need to consider a special family of derivations on $\mathfrak{p}(E_{\tau}^{\times})$. Denote by $\Der^0(\mathfrak{p}(E_{\tau}^{\times}))$ the Lie algebra of continuous derivations $D$, which satisfy $D([\mathsf{a},\mathsf{b}])=0$ and such that $D(\mathsf{b})$ has no linear term in $\mathsf{a}$. From these two conditions, it follows easily that every $D \in \Der^0(\mathfrak{p}(E_{\tau}^{\times}))$ is uniquely determined by its value on $\mathsf{a}$. \begin{dfn}[Tsunogai] \label{dfn:derivations} For every $k \geq 0$, define $\varepsilon_{2k} \in \Der^0(\mathfrak{p}(E_{\tau}^{\times}))$ by its value on $\mathsf{a}$: \begin{equation} \label{eqn:valueonx} \varepsilon_{2k}(\mathsf{a})=\begin{cases}-\mathsf{b} & k=0\\ \mathfrak{r}ac{2}{(2k-2)!}\ad^{2k}(\mathsf{a})(\mathsf{b}) & k>0.\end{cases} \end{equation} We also let $\mathfrak{u} \subset \Der^0(\mathfrak{p}(E_{\tau}^{\times}))$ be the Lie subalgebra generated by the $\varepsilon_{2k}$. \end{dfn} The derivations $\varepsilon_{2k}$ have first been introduced by Tsunogai (\cite{Tsunogai:Derivations}, \S 3) in the context of Galois actions on fundamental groups of punctured elliptic curves. They also play an important role in the theory of universal mixed elliptic motives, as the relative unipotent completion of $\SL_2(\mathbb Z)$ acts on $\mathfrak{p}(E_{\tau}^{\times})$ through them (\cite{HM}, \S 20). \begin{rmk} The value of $\varepsilon_{2k}$ on $\mathsf{b}$ is given by \begin{equation} \varepsilon_{2k}(\mathsf{b})=\mathfrak{r}ac{2}{(2k-2)!}\sum_{0 \leq j <k}(-1)^j[\ad^j(\mathsf{a})(\mathsf{b}),\ad^{2k-1-j}(\mathsf{a})(\mathsf{b})]. \end{equation} In particular, $\varepsilon_0(\mathsf{b})=0$. \end{rmk} \subsection{Eichler integrals of Eisenstein series} \label{ssec:2.2} Consider the Hecke-normalized Eisenstein series for $\SL_2(\mathbb Z)$ of weight $2k$: \begin{equation} \label{eqn:Eis} G_{2k}(q):=\begin{cases}-\mathfrak{r}ac{B_{2k}}{4k}+\sum_{n=1}^{\infty}\left( \sum_{d\vert n}d^{2k-1} \right)q^n & k \geq 1 \\ -1 & k=0,\end{cases} \end{equation} where $B_{2k}$ denotes the $2k$-th Bernoulli number and $q=e^{2\pi i\tau}$. 
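For concreteness (and purely as an illustration), the first few $q$-expansion coefficients of \eqref{eqn:Eis} can be generated as in the following Python sketch, which uses \texttt{sympy} for Bernoulli numbers and divisor sums.
\begin{verbatim}
import sympy as sp

def eisenstein_coefficients(k, n_max):
    """q-expansion coefficients a_0, a_1, ..., a_{n_max} of the Hecke-normalised
    Eisenstein series G_{2k} of Eq. (eqn:Eis), for k >= 1."""
    a0 = -sp.bernoulli(2 * k) / (4 * k)
    sigma = lambda n: sum(d ** (2 * k - 1) for d in sp.divisors(n))
    return [a0] + [sigma(n) for n in range(1, n_max + 1)]

# G_4 (weight 4): constant term -B_4/8 = 1/240, followed by sigma_3(n).
print(eisenstein_coefficients(2, 5))   # [1/240, 1, 9, 28, 73, 126]
\end{verbatim}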
Extending earlier work of Manin \cite{Manin:Iterated}, Brown \cite{Brown:MMV} introduced (regularized) iterated integrals of \eqref{eqn:Eis} (or \textit{iterated Eisenstein integrals} for short) \begin{equation} \label{eqn:IterEis} \mathcal G(2k_1,\ldots,2k_n;\tau):=\int_{\tau}^{\overrightarrow{1}_{\infty}}G_{2k_1}(\tau_1)\ldots G_{2k_n}(\tau_n) \mathrm{d}\tau_1\ldots\mathrm{d}\tau_n, \end{equation} where $\overrightarrow{1}_{\infty}$ denotes the tangential base point $1$ at $i\infty$. We refer to \cite{Brown:MMV}, \S 4, for the general definition, and only note the special case \begin{align} \mathcal G(\{0\}_n,2k;\tau)&=(-1)^n\idotsint\limits_{\tau \leq \tau_1\leq \ldots \tau_{n+1} \leq i\infty}G_{2k}(\tau_{n+1})-a_0(G_{2k})\mathrm{d} \tau_1\ldots\mathrm{d}\tau_{n+1}\notag\\ &-a_0(G_{2k})\mathfrak{r}ac{\tau^{n+1}}{(n+1)!},\label{eqn:IEIspecial1} \end{align} where $\{0\}_n$ denotes an $n$-tuple of zeros, and $a_0(G_{2k})=-\mathfrak{r}ac{B_{2k}}{4k}$ is the constant term in the Fourier expansion \eqref{eqn:Eis} of $G_{2k}$. From the shuffle product formula for (regularized) iterated integrals (\cite{Brown:MMV}, Proposition 4.7), we further deduce \begin{align} \mathcal G(\{0\}_{n-1},2k,0;\tau)&=\mathcal G(0;\tau)\mathcal G(\{0\}_{n-1},2k;\tau)-n\mathcal G(\{0\}_n,2k;\tau) \label{eqn:IEIspecial2}. \end{align} Both $\mathcal G(\{0\}_n,2k;\tau)$ and $\mathcal G(\{0\}_{n-1},2k,0;\tau)$ can be expressed in terms of generalized Eichler integrals \begin{equation} \label{eqn:EichlerIntegral} I_n(G_{2k};\tau):=\int_{\tau}^{i\infty}\Big[G_{2k}(z)-a_0(G_{2k})\Big](\tau-z)^n\mathrm{d} z-\int_0^{\tau}a_0(G_{2k})(\tau-z)^n\mathrm{d} z, \end{equation} with the classical Eichler integral of $G_{2k}$ being the special case $n=2k-2$ and $k \geq 2$ (cf. e.g. \cite{Zagier:Traces}, \S 1). \begin{prop} \label{prop:IterEisEichler} We have \begin{align} \mathcal G(\{0\}_n;\tau)&=\mathfrak{r}ac{\tau^n}{n!} \label{eqn:1}\\ \mathcal G(\{0\}_n,2k;\tau)&=\mathfrak{r}ac{1}{n!}I_n(G_{2k};\tau) \label{eqn:2}, \end{align} and for $k,n\geq 1$: \begin{equation} \label{eqn:3} \mathcal G(\{0\}_{n-1},2k,0;\tau)=\mathfrak{r}ac{1}{(n-1)!}\left(\tau I_{n-1}(G_{2k};\tau)-I_n(G_{2k};\tau)\right). \end{equation} \end{prop} \begin{prf} The first equality is immediate from the definition \eqref{eqn:IEIspecial1}. The second equality \eqref{eqn:2} is trivial for $n=0$, and the general case is easy to prove from \eqref{eqn:1} by induction on $n$. Finally, \eqref{eqn:3} follows directly from \eqref{eqn:1}, \eqref{eqn:2} and the definition \eqref{eqn:IEIspecial2}. \end{prf} \subsection{The elliptic KZB connection and the associated transport map} \label{ssec:2.3} We recall the definition of the elliptic KZB (Knizhnik--Zamolodchikov--Bernard) connection $\nabla_{\rm KZB}$ on $E_{\tau}^{\times}$, whose monodromy will give rise to the elliptic KZB associator. Originally, $\nabla_{\rm KZB}$ was defined as a meromorphic connection on $\mathbb C$ (cf. \cite{CEE:KZB,Hain:KZB,LR}). Here, we will instead follow \cite{BL:MEP}, which consider a certain $C^{\infty}$-trivialization of $\nabla_{\rm KZB}$, which is defined on the quotient $\mathbb C/(\mathbb Z+\mathbb Z\tau) \setminus \{0\}$. Let $\xi=r\tau+s$ be the canonical coordinate on $E_{\tau}^{\times}$, with $(r,s) \in \mathbb R^2 \setminus \mathbb Z^2$. Also, let \begin{equation} \theta_{\tau}(\xi)=\sum_{n \in \mathbb Z}(-1)^nq^{\mathfrak{r}ac{1}{2}(n+\mathfrak{r}ac 12)^2}e^{(n+\mathfrak{r}ac 12)\xi}, \quad q=e^{2\pi i\tau}, \end{equation} be the classical Jacobi theta function. 
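As a purely numerical illustration of the generalised Eichler integrals \eqref{eqn:EichlerIntegral} (it plays no role in the proofs), the following Python sketch evaluates $I_n(G_{2k};\tau)$ by quadrature along the vertical line from $\tau$ to $i\infty$, using a truncated $q$-expansion of $G_{2k}$; the sample point $\tau=i$ and all truncation parameters are arbitrary choices.
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 25

def G(z, k, n_terms=30):
    """Hecke-normalised Eisenstein series G_{2k} of Eq. (eqn:Eis), truncated q-expansion."""
    q = mp.exp(2j * mp.pi * z)
    val = -mp.bernoulli(2 * k) / (4 * k)
    for m in range(1, n_terms + 1):
        sigma = sum(d ** (2 * k - 1) for d in range(1, m + 1) if m % d == 0)
        val += sigma * q ** m
    return val

def eichler_I(n, k, tau, T=30):
    """Generalised Eichler integral I_n(G_{2k}; tau) of Eq. (eqn:EichlerIntegral),
    with the first integral taken along z = tau + i*t, 0 <= t <= T (the tail is negligible)."""
    a0 = -mp.bernoulli(2 * k) / (4 * k)
    first = mp.quad(lambda t: (G(tau + 1j * t, k) - a0) * (-1j * t) ** n * 1j, [0, T])
    second = -a0 * tau ** (n + 1) / (n + 1)   # elementary second integral in (eqn:EichlerIntegral)
    return first + second

tau = mp.mpc(0, 1)              # sample point tau = i
print(eichler_I(2, 2, tau))     # classical Eichler integral of G_4 (n = 2k - 2)
\end{verbatim}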
\begin{dfn}[Brown--Levin,Calaque--Enriquez--Etingof,Levin--Racinet] Define a connection $\nabla_{\rm KZB}$ on the trivial bundle\footnote{Note that the normalization of the variables $\mathsf{a},\mathsf{b}$ differs from \cite{BL:MEP}, Example 5.3.1, by $\mathsf{a}=-2\pi i\mathsf{x}_0$ and $\mathsf{b}=-(2\pi i)^{-1}\mathsf{x}_1$. Our conventions are compatible with \cite{Hain:KZB}, \S 11.1.} \begin{equation} E_{\tau}^{\times} \times \mathbb C\langle\!\langle \mathsf{a},\mathsf{b}\rangle\!\rangle \rightarrow E_{\tau}^{\times} \end{equation} by setting $\nabla_{\rm KZB}(f):=\mathrm{d} f-\omega_{\rm KZB}\cdot f$ for a local section $f$, where \begin{equation} \label{eqn:KZBform} \omega_{\rm KZB}=\mathrm{d} r\cdot \mathsf{a}+2\pi i\ad(\mathsf{a})e^{r\ad(\mathsf{a})}F_{\tau}(2\pi i\xi,\ad(\mathsf{a}))(\mathsf{b})\mathrm{d} \xi, \end{equation} where \begin{equation} F_{\tau}(\xi,\eta):=\mathfrak{r}ac{\theta'_{\tau}(0)\theta_{\tau}(\xi+\eta)}{\theta_{\tau}(\xi)\theta_{\tau}(\eta)}. \end{equation} \end{dfn} \begin{prop} \label{prop:connprop} The connection $\nabla_{\rm KZB}$ satisfies the following properties. \begin{itemize} \item[\rm (i)] We have $\nabla_{\rm KZB}^2=0$; in other words, $\nabla_{\rm KZB}$ is integrable. \item[\rm (ii)] The connection $\nabla_{\rm KZB}$ has a simple pole at $\xi=0$ with residue \begin{equation} \operatorname{Res}_0(\nabla_{\rm KZB})=[\mathsf{a},\mathsf{b}]. \end{equation} \end{itemize} \end{prop} \begin{prf} \begin{itemize} \item[\rm (i)] The condition $\nabla_{\rm KZB}^2=0$ is equivalent to \begin{equation} \mathrm{d}\omega_{\rm KZB}-\omega_{\rm KZB}\wedge \omega_{\rm KZB}=0, \end{equation} which in turn follows from a direct computation: \begin{align} \mathrm{d}\omega_{\rm KZB}&=2\pi i\mathrm{d} r \cdot \ad(\mathsf{a}) \wedge \ad(\mathsf{a})e^{r\ad(\mathsf{a})}F_{\tau}(2\pi i\xi,\ad(\mathsf{a}))(\mathsf{b})\mathrm{d} \xi\\ &=\omega_{\rm KZB}\wedge \omega_{\rm KZB}. \end{align} \item[\rm (ii)] The residue of the connection $\nabla_{\rm KZB}$ is just the residue of the one-form $\omega_{\rm KZB}$. But the computation of the latter is easy from the definition, using the fact that the residue of $2\pi iF_{\tau}(2\pi i\xi,\eta)$ at $\xi=0$ is equal to one (cf. \cite{Hain:KZB}, eqn.(8)). \end{itemize} \end{prf} Now for any two base points $\rho_1,\rho_2$, let $\pi_1(E_{\tau}^{\times};\rho_2,\rho_1)$ be the fundamental torsor of paths from $\rho_1$ to $\rho_2$. The integrability of $\nabla_{\rm KZB}$ implies that the transport function \begin{align} T^{\rm KZB}_{\rho_2,\rho_1}: \pi_1(E_{\tau}^{\times};\rho_2,\rho_1) &\rightarrow \mathbb C\langle\!\langle \mathsf{a},\mathsf{b}\rangle\!\rangle\\ \gamma &\mapsto \sum_{k=0}^{\infty}\int_{\gamma}\omega^k_{\rm KZB}, \end{align} is well-defined, where $\int_{\gamma}\omega_{\rm KZB}^k$ denotes the iterated integral in the sense of Chen \cite{Chen:PathIntegrals} \begin{equation} \int_{\gamma}\omega_{\rm KZB}^k:=\int\limits_{1\geq t_1\geq \ldots \geq t_k \geq 1}\gamma^*(\omega_{\rm KZB})(t_1)\ldots \gamma^*(\omega_{\rm KZB})(t_k). \end{equation} In other words, $\int_{\gamma}\omega_{\rm KZB}^k$ depends only on the homotopy class of $\gamma$. Rather than choosing points $\rho_1,\rho_2 \in E_{\tau}^{\times}$, which is not canonical, we work with tangential base points, in the sense of \cite{Deligne:P1}, \S 15, at the puncture $0$. Since $\nabla_{\rm KZB}$ has only a simple pole at $\xi=0$, one can extend the definition of the transport function to the case of tangential base points as in \cite{Deligne:P1}, Proposition 15.45. 
More precisely, for any two non-zero tangent vectors $\overrightarrow{v}_0=\lambda \mathfrak{r}ac{\partial}{\partial \xi}$ and $\overrightarrow{w}_0=\mu\mathfrak{r}ac{\partial}{\partial \xi}$ at $0$, there is a well-defined function \begin{align} T^{\rm KZB}_{\overrightarrow{w}_0,\overrightarrow{v}_0}: \pi_1(E_{\tau}^{\times};\overrightarrow{w}_0,\overrightarrow{v}_0) &\rightarrow \mathbb C\langle\!\langle \mathsf{a},\mathsf{b}\rangle\!\rangle, \end{align} given by \begin{align} T^{\rm KZB}_{\overrightarrow{w}_0,\overrightarrow{v}_0}(\gamma)=\lim_{t\to 0}e^{\log(\mu^{-1}t)\operatorname{Res}_0(\nabla_{\rm KZB})}\Bigg[\sum_{k=0}^{\infty}\int_{\gamma_t^{1-t}}\omega^k_{\rm KZB}\Bigg] e^{-\log(\lambda^{-1}t)\operatorname{Res}_0(\nabla_{\rm KZB})}, \end{align} where $\operatorname{Res}_0(\nabla_{\rm KZB})=[\mathsf{a},\mathsf{b}]$ is the residue of the connection at $\xi=0$ (cf. Proposition \ref{prop:connprop}.(i)), $\gamma_t^{1-t}$ denotes the restriction of $\gamma$ to the interval $[t,1-t]$ (for $0<t<\mathfrak{r}ac 12$) and the branches of the logarithms are determined by the path $\gamma$. For arithmetic applications, it will be important that the tangent vectors are integral on the Tate curve $\mathbb C^{\times}/q^{\mathbb Z}$ and moreover non-zero modulo every prime number $p$, which fixes them uniquely (up to a sign): $\overrightarrow{v}_0=\pm \mathfrak{r}ac{\partial}{\partial z}=\pm (2\pi i)^{-1}\mathfrak{r}ac{\partial}{\partial \xi}$, where $z=e^{2\pi i\xi}$. \section{The elliptic depth filtration} \label{sec:3} We recall the definition of the elliptic depth filtration on the fundamental Lie algebra of $E_{\tau}^{\times}$ (cf. \cite{HM}, \S 27). This filtration is the elliptic analog of the depth filtration on the fundamental Lie algebra of $\mathbb P^1 \setminus \{0,1,\infty\}$ (\cite{DG}, \S 6 or \cite{Brown:Depth}, \S 4). \subsection{The elliptic depth filtration} Consider the canonical embedding \begin{equation} \label{eqn:embedding} E_{\tau}^{\times} \hookrightarrow E_{\tau} \end{equation} of the once-punctured elliptic curve $E_{\tau}^{\times}$ into the (complete) elliptic curve $E_{\tau}$. On fundamental Lie algebras, it induces the abelianization map \begin{equation} \pi: \mathfrak{p}(E_{\tau}^{\times}) \rightarrow \mathfrak{p}(E_{\tau}^{\times})^{\rm ab} \cong \mathfrak{p}(E_{\tau}). \end{equation} \begin{dfn}[Hain--Matsumoto] The \textit{elliptic depth filtration} $D^{\bullet}\mathfrak{p}(E_{\tau}^{\times})$ is the descending filtration on $\mathfrak{p}(E_{\tau}^{\times})$, defined by \begin{equation} D^n\mathfrak{p}(E_{\tau}^{\times})=\begin{cases}\mathfrak{p}(E_{\tau}^{\times}) & n=0\\ \ker(\pi) & n=1 \\ [D^1\mathfrak{p}(E_{\tau}^{\times}),D^{n-1}\mathfrak{p}(E_{\tau}^{\times})] & n \geq 2 \end{cases}. \end{equation} Also, let $\gr^{\bullet}_D\mathfrak{p}(E_{\tau}^{\times})$ be the associated graded Lie algebra. \end{dfn} It is clear from the definition that the elliptic depth filtration is the lower central series on the commutator of $\mathfrak{p}(E_{\tau}^{\times})$. Therefore, the quotient Lie algebra \begin{equation} \mathfrak{p}(E_{\tau}^{\times})^{\rm met-ab}:=\mathfrak{p}(E_{\tau}^{\times})/D^2\mathfrak{p}(E_{\tau}^{\times}) \end{equation} is the \textit{(maximal) meta-abelian quotient} of $\mathfrak{p}(E_{\tau}^{\times})$. The following proposition is well-known. 
\begin{prop} \label{prop:metablie} We have isomorphisms of (abelian) Lie algebras \begin{equation} \label{eqn:isom1} \gr^0_D\mathfrak{p}(E_{\tau}^{\times}) \cong \mathbb Q \mathsf{a}\oplus \mathbb Q \mathsf{b} \end{equation} and \begin{align} \gr^1_D\mathfrak{p}(E_{\tau}^{\times}) &\stackrel{\cong}\longrightarrow \mathbb Q[\![\mathsf{U},\mathsf{V}]\!] \notag\\ \ad^k(\mathsf{a})\ad^l(\mathsf{b})([\mathsf{a},\mathsf{b}]) &\mapsto \mathsf{U}^k\mathsf{V}^l. \label{eqn:isom2} \end{align} Moreover, \begin{equation} \mathfrak{p}(E_{\tau}^{\times})^{\rm met-ab} \cong \gr^0_D\mathfrak{p}(E_{\tau}^{\times}) \ltimes \gr^1_D\mathfrak{p}(E_{\tau}^{\times}) \end{equation} as Lie algebras, where $\mathbb Q \mathsf{a}\oplus \mathbb Q \mathsf{b}$ acts on $\gr^1_D\mathfrak{p}(E_{\tau}^{\times})\cong \mathbb Q[\![\mathsf{U},\mathsf{V}]\!]$ by the adjoint action. \end{prop} \begin{prf} The first isomorphism is clear, since the right hand side of \eqref{eqn:isom1} is just the abelianization of $\mathfrak{p}(E_{\tau}^{\times})$. It follows from the Jacobi identity that every element of $\gr^1_D\mathfrak{p}(E_{\tau}^{\times})$ is a series in the elements $\ad^k(\mathsf{a})\ad^l(\mathsf{b})([\mathsf{a},\mathsf{b}])$, and then the isomorphism \eqref{eqn:isom2} is a consequence of the universal property of free Lie algebras. Finally, the last statement of the proposition follows from the fact that the adjoint action splits the short exact sequence of Lie algebras \begin{equation} 0 \longrightarrow \gr^1_D\mathfrak{p}(E_{\tau}^{\times}) \longrightarrow \mathfrak{p}(E_{\tau}^{\times})/D^2\mathfrak{p}(E_{\tau}^{\times}) \longrightarrow \gr^0_D\mathfrak{p}(E_{\tau}^{\times}) \longrightarrow 0. \end{equation} \end{prf} \begin{rmk} \label{rmk:elldepth} The relation between the elliptic depth filtration and the depth filtration on the fundamental Lie algebra of $\mathbb P^1 \setminus \{0,1,\infty\}$ can be explained as follows. First, recall (cf. \cite{DG}, \S 5) that the (de Rham) fundamental Lie algebra $\mathfrak{p}(U)$ of $U:=\mathbb P^1 \setminus \{0,1,\infty\}$ is isomorphic to $\mathbb L(\mathsf{x}_0,\mathsf{x}_1)^{\wedge}$. The depth filtration $D^n\mathfrak{p}(U)$ on $\mathfrak{p}(U)$ is then the lower central series on the kernel of the natural map between fundamental Lie algebras \begin{align} \mathfrak{p}(U) &\rightarrow \mathbb L(\mathsf{x}_0)^{\wedge} \cong \mathbb Q \mathsf{x}_0\\ \mathsf{x}_i &\mapsto \delta_{i,0}\mathsf{x}_0, \end{align} which is induced from the embedding $\mathbb P^1 \setminus \{0,1,\infty\} \hookrightarrow \mathbb P^1 \setminus \{0,\infty\}$ (cf. \cite{Brown:Depth,DG}). Interpreting $\mathbb P^1 \setminus \{0,1,\infty\}$ as the fiber over $q=0$ of the universal once-punctured Tate curve $(\mathbb C^{\times}/q^{\mathbb Z}) \setminus \{1\}$, one obtains a morphism of Lie algebras \cite{Brown:Depth3,Enriquez:EllAss,Hain:KZB} \begin{align} \iota:\mathfrak{p}(U) &\rightarrow \mathfrak{p}(E_{\tau}^{\times}) \label{eqn:Hainmorphism}\\ \mathsf{x}_0 &\mapsto \mathfrak{r}ac{\ad(\mathsf{a})}{e^{\ad(\mathsf{a})}-1}(\mathsf{b})=\sum_{k=0}^{\infty}\mathfrak{r}ac{B_k}{k!}\ad^k(\mathsf{a})(\mathsf{b})\\ \mathsf{x}_1 &\mapsto [\mathsf{a},\mathsf{b}], \end{align} which clearly respects the depth filtrations on both sides, i.e. \begin{equation} \iota(D^n\mathfrak{p}(U))=\iota(\mathfrak{p}(U))\cap D^n\mathfrak{p}(E_{\tau}^{\times}), \quad \mbox{for all $n \geq 0$}. \end{equation} For more details, see \cite{HM}, \S 27. 
\end{rmk} \subsection{Action of special derivations in depths zero and one} \label{ssec:3.2} We now compute the action of the derivations $\varepsilon_{2k}$ on the meta-abelian quotient $\mathfrak{p}(E_{\tau}^{\times})^{\rm met-ab}$. \begin{prop} \label{prop:metabaction} \begin{enumerate} \item[\rm (i)] The derivation $\varepsilon_0$ acts on $\gr^0_D\mathfrak{p}(E_{\tau}^{\times}) \cong \mathbb Q \mathsf{a}\oplus \mathbb Q \mathsf{b}$ as the linear map $\left(\begin{smallmatrix}0&-1\\0&0\end{smallmatrix}\right)$, and on $\gr^1_D\mathfrak{p}(E_{\tau}^{\times}) \cong \mathbb Q[\![\mathsf{U},\mathsf{V}]\!]$ as the derivation $ -\mathsf{V}\mathfrak{r}ac{\partial }{\partial \mathsf{U}}. $ \item[\rm (ii)] The derivations $\varepsilon_{2k}$, for $k>0$, act trivially on $\gr^i_D\mathfrak{p}(E_{\tau}^{\times})$, for every $i\geq 0$. \item[\rm (iii)] Let $\underline{2k}=(2k_1,\ldots,2k_n)$ be a multi-index, where $k_i \geq 0$. Then $\varepsilon_{\underline{2k}}=\varepsilon_{2k_1} \circ\ldots\circ \varepsilon_{2k_n}$ acts non-trivially on $\mathfrak{p}(E_{\tau}^{\times})^{\rm met-ab}\cong\gr^0_D\mathfrak{p}(E_{\tau}^{\times}) \ltimes \gr^1_D\mathfrak{p}(E_{\tau}^{\times})$, only if either $\underline{2k}=(0,\ldots,0,2k_n)$ or $\underline{2k}=(0,\ldots,0,2k_{n-1},0)$. \end{enumerate} \end{prop} \begin{prf} The action of $\varepsilon_0$ on $\gr^0_D\mathfrak{p}(E_{\tau}^{\times})$ is clear from the definition (cf. Definition \ref{dfn:derivations}). For the action on $\gr^1_D\mathfrak{p}(E_{\tau}^{\times})$, by the Jacobi identity, the linear operators $\ad(\mathsf{a}),\ad(\mathsf{b}) \in \End(\gr^1_D\mathfrak{p}(E_{\tau}^{\times}))$ commute with each other. Consequently, we have \begin{align} \varepsilon_0(\ad^k(\mathsf{a})\ad^l(\mathsf{b})([\mathsf{a},\mathsf{b}])) &\equiv \sum_{i=0}^{k-1} -\ad^i(\mathsf{a})\ad(\mathsf{b})\ad^{k-1-i}(\mathsf{a})\ad^l(\mathsf{b})([\mathsf{a},\mathsf{b}]) \notag\\ &\equiv -k\ad^{k-1}(\mathsf{a})\ad^{l+1}(\mathsf{b})([\mathsf{a},\mathsf{b}]) \mod D^2\mathfrak{p}(E_{\tau}^{\times}). \end{align} Therefore, under the isomorphism $\gr^1_D\mathfrak{p}(E_{\tau}^{\times}) \cong \mathbb Q[\![\mathsf{U},\mathsf{V}]\!]$ of Proposition \ref{prop:metablie}, the derivation $\varepsilon_0$ corresponds to $ -\mathsf{V}\mathfrak{r}ac{\partial }{\partial \mathsf{U}}$. As for (ii), the triviality of $\varepsilon_{2k}$, for $k>0$, on $\gr^0_D\mathfrak{p}(E_{\tau}^{\times})$ is clear from Definition \ref{dfn:derivations}, and triviality on $\gr^i_D\mathfrak{p}(E_{\tau}^{\times})$ follows by induction on $i$. Finally, (iii) follows easily from (i) and (ii). \end{prf} \section{The elliptic KZB associator} \label{sec:4} In this section, we define Enriquez's elliptic KZB associator \cite{Enriquez:EllAss}, which is an elliptic analogue of the Drinfeld associator \cite{Drinfeld:Gal}. Our approach differs slightly from \cite{Enriquez:EllAss} in that we define the elliptic KZB associator using the ``elliptic transport isomorphism'' of Brown--Levin. This definition is analogous to the definition of the Drinfeld associator using parallel transport along the KZ-connection \cite{DG}. We also recall an important result of Enriquez (cf. \cite{Enriquez:EllAss}, \S 6) which describes the variation of the elliptic KZB associator in the modulus of the once-punctured elliptic curve. 
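Before turning to the elliptic KZB associator, we record a small illustration of Proposition \ref{prop:metabaction}, closely related to the operator $\exp\left(\tau\frac{\partial}{\partial \overline{\mathsf{U}}}\mathsf{V}\right)$ appearing in the main theorem: the following Python sketch (using \texttt{sympy}, purely for illustration) checks on a sample polynomial that the exponential of the derivation $-\mathsf{V}\frac{\partial}{\partial \mathsf{U}}$ acts as the substitution $\mathsf{U}\mapsto \mathsf{U}-t\mathsf{V}$.
\begin{verbatim}
import sympy as sp

U, V, t = sp.symbols('U V t')

def eps0(f):
    """Action of eps_0 = -V * d/dU on polynomials in U, V (cf. Proposition prop:metabaction)."""
    return -V * sp.diff(f, U)

def exp_eps0(f, t):
    """exp(t * eps_0) applied to a polynomial f; the series terminates for polynomials."""
    result, term, n = sp.Integer(0), f, 0
    while term != 0:
        result += t ** n / sp.factorial(n) * term
        term, n = eps0(term), n + 1
    return sp.expand(result)

# exp(t * eps_0) should act as the substitution U -> U - t*V.
f = U ** 3 * V + 2 * U ** 2 - 5 * V ** 4
assert sp.simplify(exp_eps0(f, t) - sp.expand(f.subs(U, U - t * V))) == 0
print("exp(t*eps_0) acts as U -> U - t*V on the sample polynomial.")
\end{verbatim}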
\subsection{Definition via the transport function} \label{ssec:4.1} In Section \ref{ssec:2.3}, we have defined a transport function $T^{\rm KZB}_{\rho_2,\rho_1}$ on a once-punctured elliptic curve for any choice of base points $\rho_1,\rho_2$ (possibly tangential), using the elliptic KZB connection. We now specialize these base points to be $\pm \overrightarrow{v}_0$, where $\overrightarrow{v}_0$ is the tangent vector $-(2\pi i)^{-1}\mathfrak{r}ac{\partial}{\partial \xi}$ at $0 \in E_{\tau}$. Note that under the isomorphism $E_{\tau} \cong \mathbb C^{\times}/q^\mathbb Z$, we have $\overrightarrow{v}_0=-\mathfrak{r}ac{\partial}{\partial z}$, where $z=e^{2\pi i\xi}$. In particular, $\overrightarrow{v}_0$ is defined over $\mathbb Z$ on the Tate curve. Consider now the paths $\alpha,\beta \in \pi_1(E_{\tau}^{\times};-\overrightarrow{v}_0,\overrightarrow{v}_0)$ which are the images of, respectively, the (open) straight-line paths $(0,1)$ and $(0,\tau)$ under the projection $\mathbb C \setminus (\mathbb Z+ \mathbb Z\tau) \rightarrow E_{\tau}^{\times}$, where the path $(0,\tau)$ is additionally composed with a half-circle in the positive direction around $\tau$. Therefore (after ignoring the $-(2\pi i)^{-1}$-prefactor), the paths $\alpha,\beta$ look like in Figure 1 below (cf. \cite{Enriquez:EllAss}, p.550). \begin{figure} \caption{The paths $\alpha$ and $\beta$.} \end{figure} \begin{dfn}[\cite{Enriquez:EllAss}, \S 6.2] The \textit{elliptic KZB associator} is the tuple $(A(\tau),B(\tau))$, where \begin{equation} \label{eqn:AB} A(\tau):=T^{\rm KZB}_{-\overrightarrow{v}_0,\overrightarrow{v}_0}(\alpha), \quad B(\tau):=T^{\rm KZB}_{-\overrightarrow{v}_0,\overrightarrow{v}_0}(\beta) \end{equation} are the images of the paths $\alpha$ and $\beta$ under the transport map $T^{\rm KZB}_{-\overrightarrow{v}_0,\overrightarrow{v}_0}$. \end{dfn} \begin{rmk} The definition of the elliptic KZB associator given here is not exactly the same as the one given in \cite{Enriquez:EllAss}, but equivalent. Using the elliptic transport map, Enriquez definition is \begin{equation} A^{\rm Enr}(\tau):=T^{\rm KZB}_{\overrightarrow{v}_0}(\alpha), \quad B^{\rm Enr}(\tau):=T^{\rm KZB}_{\overrightarrow{v}_0}(\beta). \end{equation} Explicitly, the relation between the two versions is given by \begin{equation} A(\tau)=e^{-\pi i[\mathsf{a},\mathsf{b}]}A^{\rm Enr}(\tau), \quad B(\tau)=e^{\pi i[\mathsf{a},\mathsf{b}]}B^{\rm Enr}(\tau). \end{equation} \end{rmk} \subsection{Variation in the modulus} \label{ssec:4.2} An important property of the elliptic KZB associator is that it satisfies a linear differential equation, which relates it to iterated Eisenstein integrals and the special derivations $\varepsilon_{2k}$ reviewed in Section \ref{sec:2}. The boundary condition of this differential equation establishes a relation between the series $A(\tau)$, $B(\tau)$ and the Drinfeld associator $\Phi$. More precisely, we have the following theorem, due to Enriquez. 
\begin{thm}[\cite{Enriquez:Emzv}, \S 5.2] \label{thm:Diffeq} We have \begin{equation} A(\tau)=g(\tau)(A_{\infty}), \quad B(\tau)=g(\tau)(B_{\infty}), \end{equation} where \begin{equation} g(\tau)=\sum (-2\pi i)^n\mathcal G(2k_1,\ldots,2k_n;\tau)\cdot (\varepsilon_{2k_1}\circ \ldots \circ \varepsilon_{2k_n}), \end{equation} the sum being over all multi-indices $(k_1,\ldots,k_n) \in \mathbb Z_{\geq 0}^n$, for $n \geq 0$, and \begin{align} A_{\infty}&=e^{\pi i\iota(\mathsf{x}_1)}\Phi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))e^{2\pi i\iota(\mathsf{x}_0)}\Phi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))^{-1},\\ B_{\infty}&=\Phi(\iota(\mathsf{x}_{\infty}),\iota(\mathsf{x}_1))e^{\mathsf{a}}\Phi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))^{-1}, \end{align} where $\iota: \mathfrak{p}(U) \rightarrow \mathfrak{p}(E_{\tau}^{\times})$ is the morphism of Remark \ref{rmk:elldepth}. \end{thm} The element $g(\tau)$ defines an automorphism of $\exp \mathfrak{p}(E_{\tau}^{\times})$. Letting \begin{alignat}{3} &\mathfrak{A}(\tau)&&:=\log(A(\tau)), \quad \mathfrak{B}(\tau)&&:=\log(B(\tau)),\\ &\mathfrak{A}_{\infty}&&:=\log(A_{\infty}), \quad \mathfrak{B}_{\infty}&&:=\log(B_{\infty}), \end{alignat} we also have \begin{equation} \mathfrak{A}(\tau)=g(\tau)(\mathfrak{A}_{\infty}), \quad \mathfrak{B}(\tau)=g(\tau)(\mathfrak{B}_{\infty}), \end{equation} since $g(\tau)$ commutes with the exponential and logarithm maps. The next corollary follows immediately from Proposition \ref{prop:metabaction}. \begin{cor} \label{cor:gmetab} Let $g(\tau)^{\rm met-ab}$ be the image of $g(\tau)$ in $\End(\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}^{\rm met-ab})$. We have \begin{align} g(\tau)^{\rm met-ab}= &\sum_{n \geq 0}(-2\pi i)^n\mathcal G(\{0\}_n;\tau)\cdot\varepsilon_0^n\\ &+\sum_{n\geq 0, \, k \geq 1}(-2\pi i)^{n+1}\mathcal G(\{0\}_n,2k;\tau)\cdot\Big(\varepsilon_0^n \circ \varepsilon_{2k}\Big)\\ &+\sum_{k,n\geq 1}(-2\pi i)^{n+1}\mathcal G(\{0\}_{n-1},2k,0;\tau)\cdot\Big(\varepsilon_0^{n-1}\circ \varepsilon_{2k}\circ \varepsilon_0\Big). \end{align} \end{cor} \begin{rmk} The pair $(A_{\infty},B_{\infty})$ is the image of the Drinfeld associator under the natural map (\cite{Enriquez:EllAss}, \S 4.5) \begin{equation} \underline{M}(\mathbb C)\rightarrow \underline{Ell}(\mathbb C), \end{equation} where $\underline{M}$ is the scheme of classical associators in the sense of \cite{Drinfeld:Gal}, and $\underline{Ell}$ is its elliptic counterpart \cite{Enriquez:EllAss}. A geometric way of interpreting this morphism is via the degeneration of the once-punctured Tate curve to $\mathbb P^1 \setminus \{0,1,\infty\}$ (cf. Remark \ref{rmk:elldepth}). \end{rmk} \subsection{Elliptic KZB associator in depth zero} \label{ssec:4.3} Let $\mathfrak{A}(\tau)^{(0)}$ be the image of $\mathfrak{A}(\tau)$ in $\gr^0_D\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}=\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}/[\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C},\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}]$, and likewise let $\mathfrak{B}(\tau)^{(0)}$ be the image of $\mathfrak{B}(\tau)$ in $\gr^0_D\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$. The following proposition shows that $\mathfrak{A}(\tau)^{(0)}$ and $\mathfrak{B}(\tau)^{(0)}$ precisely retrieve the periods of $H^1(E_{\tau}^{\times})$. \begin{prop} \label{prop:ab} We have \begin{equation} \mathfrak{A}(\tau)^{(0)}=2\pi i\mathsf{b}, \quad \mathfrak{B}(\tau)^{(0)}=\mathsf{a}+2\pi i\tau \mathsf{b}. \end{equation} \end{prop} \begin{prf} We only prove the result for $\mathfrak{A}(\tau)^{(0)}$; the formula for $\mathfrak{B}(\tau)^{(0)}$ is proved analogously.
The element $g(\tau)$ defines an automorphism of $\exp \mathfrak{p}(E_{\tau}^{\times})$. Letting
\begin{align}
\mathfrak{A}(\tau)&:=\log(A(\tau)), \quad \mathfrak{B}(\tau):=\log(B(\tau)),\\
\mathfrak{A}_{\infty}&:=\log(A_{\infty}), \quad \mathfrak{B}_{\infty}:=\log(B_{\infty}),
\end{align}
we also have
\begin{equation}
\mathfrak{A}(\tau)=g(\tau)(\mathfrak{A}_{\infty}), \quad \mathfrak{B}(\tau)=g(\tau)(\mathfrak{B}_{\infty}),
\end{equation}
since $g(\tau)$ commutes with the exponential and logarithm maps. The next corollary follows immediately from Proposition \ref{prop:metabaction}.

\begin{cor} \label{cor:gmetab}
Let $g(\tau)^{\rm met-ab}$ be the image of $g(\tau)$ in $\End(\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}^{\rm met-ab})$. We have
\begin{align}
g(\tau)^{\rm met-ab}= &\sum_{n \geq 0}(-2\pi i)^n\mathcal G(\{0\}_n;\tau)\cdot\varepsilon_0^n\\
&+\sum_{n\geq 0, \, k \geq 1}(-2\pi i)^{n+1}\mathcal G(\{0\}_n,2k;\tau)\cdot\Big(\varepsilon_0^n \circ \varepsilon_{2k}\Big)\\
&+\sum_{k,n\geq 1}(-2\pi i)^{n+1}\mathcal G(\{0\}_{n-1},2k,0;\tau)\cdot\Big(\varepsilon_0^{n-1}\circ \varepsilon_{2k}\circ \varepsilon_0\Big).
\end{align}
\end{cor}

\begin{rmk}
The pair $(A_{\infty},B_{\infty})$ is the image of the Drinfeld associator under the natural map (\cite{Enriquez:EllAss}, \S 4.5)
\begin{equation}
\underline{M}(\mathbb C)\rightarrow \underline{Ell}(\mathbb C),
\end{equation}
where $\underline{M}$ is the scheme of classical associators in the sense of \cite{Drinfeld:Gal}, and $\underline{Ell}$ is its elliptic counterpart \cite{Enriquez:EllAss}. A geometric way of interpreting this morphism is via the degeneration of the once-punctured Tate curve to $\mathbb P^1 \setminus \{0,1,\infty\}$ (cf. Remark \ref{rmk:elldepth}).
\end{rmk}

\subsection{Elliptic KZB associator in depth zero} \label{ssec:4.3}

Let $\mathfrak{A}(\tau)^{(0)}$ be the image of $\mathfrak{A}(\tau)$ in $\gr^0_D\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}=\mathfrak{p}(E_{\tau}^{\times})/[\mathfrak{p}(E_{\tau}^{\times}),\mathfrak{p}(E_{\tau}^{\times})]$, and likewise let $\mathfrak{B}(\tau)^{(0)}$ be the image of $\mathfrak{B}(\tau)$ in $\gr^0_D\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$. The following proposition shows that $\mathfrak{A}(\tau)^{(0)}$ and $\mathfrak{B}(\tau)^{(0)}$ precisely retrieve the periods of $H^1(E_{\tau}^{\times})$.

\begin{prop} \label{prop:ab}
We have
\begin{equation}
\mathfrak{A}(\tau)^{(0)}=2\pi i\mathsf{b}, \quad \mathfrak{B}(\tau)^{(0)}=\mathsf{a}+2\pi i\tau \mathsf{b}.
\end{equation}
\end{prop}

\begin{prf}
We only prove the result for $\mathfrak{A}(\tau)^{(0)}$; the formula for $\mathfrak{B}(\tau)^{(0)}$ is proved analogously.
By Theorem \ref{thm:Diffeq}, we know that $A(\tau)=g(\tau)(A_{\infty})$, and since $g(\tau)$ is an automorphism, we also have
\begin{equation}
\mathfrak{A}(\tau)=\log(A(\tau))=g(\tau)(\log(A_{\infty}))=g(\tau)(\mathfrak{A}_{\infty}).
\end{equation}
On the other hand, it follows directly from the explicit formula for $A_{\infty}$ given in Theorem \ref{thm:Diffeq} that
\begin{equation}
\mathfrak{A}_{\infty} \equiv 2\pi i\mathsf{b} \mod D^1\mathfrak{p}(E_{\tau}^{\times}),
\end{equation}
since $\iota(\mathsf{x}_0) \equiv \mathsf{b} \mod D^1\mathfrak{p}(E_{\tau}^{\times})$ and $\iota(\mathsf{x}_1) \equiv 0 \mod D^1\mathfrak{p}(E_{\tau}^{\times})$. But as every derivation $\varepsilon_{2k}$ annihilates $\mathsf{b}$, we finally get $\mathfrak{A}(\tau)^{(0)}=g(\tau)(2\pi i\mathsf{b})=2\pi i\mathsf{b}$.
\end{prf}

\begin{rmk} \label{rmk:alt}
Proposition \ref{prop:ab} could have also been proved directly, without recourse to Enriquez' Theorem \ref{thm:Diffeq}, using that $\omega_{\rm KZB} \equiv \mathrm{d} r\cdot \mathsf{a}+2\pi i\mathrm{d}\xi \cdot \mathsf{b} \mod D^1\mathfrak{p}(E_{\tau}^{\times})$.
\end{rmk}

\section{The meta-abelian elliptic KZB associator} \label{sec:5}

In this section, we compute the image of $\mathfrak{A}(\tau)$ and $\mathfrak{B}(\tau)$ in the meta-abelian quotient $\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}^{\rm met-ab}$ of $\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$. The strategy is to use Theorem \ref{thm:Diffeq}, which yields that
\begin{equation} \label{eqn:crucial2}
\mathfrak{A}(\tau)=g(\tau)(\mathfrak{A}_{\infty}), \quad \mathfrak{B}(\tau)=g(\tau)(\mathfrak{B}_{\infty}),
\end{equation}
and then to compute the images of $\mathfrak{A}_{\infty}$ and $\mathfrak{B}_{\infty}$ in the meta-abelian quotient separately. This is done in Section \ref{ssec:5.1}. In Section \ref{ssec:5.2}, we then compute the action of $g(\tau)$ on the meta-abelian quotient. The two computations are then combined in Section \ref{ssec:5.3} to yield our formula for $\mathfrak{A}(\tau)^{\rm met-ab}$ and $\mathfrak{B}(\tau)^{\rm met-ab}$.

\subsection{The arithmetic piece: periods of Eisenstein series} \label{ssec:5.1}

Let $\mathfrak{A}^{\rm met-ab}_{\infty}$ (resp. $\mathfrak{B}^{\rm met-ab}_{\infty}$) be the image of $\mathfrak{A}_{\infty}$ (resp. the image of $\mathfrak{B}_{\infty}$) in the meta-abelian quotient $\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}^{\rm met-ab} \cong \gr^0_D\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C} \ltimes \gr^1_D\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$, so that we can write
\begin{equation}
\mathfrak{A}_{\infty}^{\rm met-ab}=\mathfrak{A}_{\infty}^{(0)}+\mathfrak{A}_{\infty}^{(1)}, \quad \mathfrak{B}_{\infty}^{\rm met-ab}=\mathfrak{B}_{\infty}^{(0)}+\mathfrak{B}_{\infty}^{(1)}.
\end{equation}
The computation of the depth zero component was already carried out in Proposition \ref{prop:ab}, so that it remains to compute the depth one contribution. For this, we need a short lemma about the Drinfeld associator.

\begin{lem} \label{lem:Drinf}
Let $\varphi(\mathsf{x}_0,\mathsf{x}_1):=\log(\Phi(\mathsf{x}_0,\mathsf{x}_1))$. Then
\begin{equation}
\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1)) \equiv -\sum_{n \geq 2}\zeta(n)\ad^{n-1}(\mathsf{b})([\mathsf{a},\mathsf{b}]) \mod D^2\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C},
\end{equation}
where $\iota(\mathsf{x}_0)=\frac{\ad(\mathsf{a})}{e^{\ad(\mathsf{a})}-1}(\mathsf{b})$ and $\iota(\mathsf{x}_1)=[\mathsf{a},\mathsf{b}]$ (cf. Remark \ref{rmk:elldepth}).
In particular, we have $\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1)) \in D^1\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$.
\end{lem}

\begin{prf}
It is well-known (cf. \cite{DG}, \S 6.7) that
\begin{equation}
\varphi(\mathsf{x}_0,\mathsf{x}_1) \equiv -\sum_{n=2}^{\infty}\zeta(n)\ad^{n-1}(\mathsf{x}_0)(\mathsf{x}_1)
\end{equation}
modulo terms of degree at least two in $\mathsf{x}_1$. Applying $\iota$ to both sides, we get the result.
\end{prf}

\begin{thm} \label{thm:arithmetic}
We have
\begin{align}
\mathfrak{A}_{\infty}^{(1)}&=2\pi i\left(c(\mathsf{U})-\frac{2\pi i}{4} \mathsf{V}+\sum_{n \geq 3, \,{\rm odd}}\zeta(n)\mathsf{V}^n\right), \label{eqn:constA}\\
\mathfrak{B}_{\infty}^{(1)}&=-2\pi i\left(c(2\pi i\mathsf{V})-\mathsf{U} c(\mathsf{U})c(2\pi i\mathsf{V})\right)+\sum_{n \geq 3, \,{\rm odd}}\zeta(n)\mathsf{U}\mathsf{V}^{n-1},\label{eqn:constB}
\end{align}
where $c(x)=\frac{1}{e^x-1}+\frac 12-\frac 1x=\sum_{k=2}^{\infty}\frac{B_k}{k!}x^{k-1}$.
\end{thm}
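Before turning to the proof, we record the first terms of these series; this expansion is included only for illustration and uses nothing beyond the values $B_2=\frac16$, $B_4=-\frac1{30}$ and the vanishing of the odd Bernoulli numbers $B_3,B_5,\ldots$. One has $c(x)=\frac{x}{12}-\frac{x^3}{720}+\ldots$, so that the terms of total degree at most three in \eqref{eqn:constA} read
\begin{equation}
\mathfrak{A}_{\infty}^{(1)}=2\pi i\left(\frac{\mathsf{U}}{12}-\frac{\pi i}{2}\mathsf{V}-\frac{\mathsf{U}^3}{720}+\zeta(3)\mathsf{V}^3+\ldots\right).
\end{equation}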
\begin{prf}
By Theorem \ref{thm:Diffeq}, we know that
\begin{equation}
\mathfrak{A}_{\infty}=\log(e^{\pi i\iota(\mathsf{x}_1)}\Phi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))e^{2\pi i\iota(\mathsf{x}_0)}\Phi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))^{-1}).
\end{equation}
Using a ``truncated'' version of the Baker--Campbell--Hausdorff formula (cf. \cite{Reutenauer}, Corollary 3.24) and Lemma \ref{lem:Drinf}, we get
\begin{align}
\mathfrak{S}&:=\log(e^{\pi i\iota(\mathsf{x}_1)}\Phi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1)))\\
&\equiv \varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))+\sum_{k \geq 0}\frac{B_k}{k!}\ad^k(\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1)))(\pi i\iota(\mathsf{x}_1))\\
&\equiv \varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))+\pi i\iota(\mathsf{x}_1) \mod D^2\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}.\label{eqn:S}
\end{align}
Similarly, since $\iota(\mathsf{x}_0) \equiv \mathsf{b} \mod D^1\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$, we get
\begin{align}
\mathfrak{T}&:=\log(e^{2\pi i\iota(\mathsf{x}_0)}\Phi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))^{-1})\\
&\equiv -\log(\Phi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))e^{-2\pi i\iota(\mathsf{x}_0)})\\
&\equiv 2\pi i\iota(\mathsf{x}_0)-\sum_{k \geq 0}\frac{B_k}{k!}\ad^k(-\mathsf{b})(\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))) \mod D^2\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}.\label{eqn:T}
\end{align}
Combining \eqref{eqn:S} and \eqref{eqn:T} and again applying \cite{Reutenauer}, Corollary 3.24, we get
\begin{align}
\mathfrak{A}_{\infty}&\equiv \mathfrak{T}+\sum_{n \geq 0}\frac{B_n}{n!}\ad^n(\mathfrak{T})(\mathfrak{S})\notag\\
&\begin{aligned}\equiv 2\pi i\iota(\mathsf{x}_0)&-\sum_{k \geq 0}\frac{B_k}{k!}\ad^k(-\mathsf{b})(\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1)))+\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))+\pi i\iota(\mathsf{x}_1)\\
&+\sum_{n \geq 1}\frac{B_n}{n!}\ad^n(2\pi i\iota(\mathsf{x}_0))(\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))+\pi i\iota(\mathsf{x}_1)) \end{aligned} \\
&\begin{aligned}\equiv 2\pi i\iota(\mathsf{x}_0)+\pi i\iota(\mathsf{x}_1)&-\sum_{k \geq 1}\frac{B_k}{k!}\Big((-1)^k-1 \Big)\ad^k(\mathsf{b})(\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1)))\\
&+\sum_{n \geq 1}\frac{B_n}{n!}\ad^n(\mathsf{b})(\pi i\iota(\mathsf{x}_1))\end{aligned}\\
&\begin{aligned}\equiv 2\pi i\mathsf{b}&+2\pi i\sum_{k \geq 2}\frac{B_k}{k!}\ad^{k-1}(\mathsf{a})([\mathsf{a},\mathsf{b}])-\ad(\mathsf{b})(\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1)))\\
&+\frac{2\pi i}{2}\sum_{n \geq 1}\frac{B_n(2\pi i)^n}{n!}\ad^n(\mathsf{b})([\mathsf{a},\mathsf{b}])\mod D^2\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C},\label{eqn:Aeq1} \end{aligned}
\end{align}
where in the last line, we have used that $B_1=-\frac 12$ and that $B_{2n+1}=0$ for all $n \geq 1$. Using Lemma \ref{lem:Drinf} together with Euler's formula $-\frac{\zeta(k)}{(-2\pi i)^{k}}=\frac{B_k}{2\cdot k!}$ for $k \geq 2$ even, it follows that \eqref{eqn:Aeq1} equals
\begin{equation} \label{eqn:Aeq2}
2\pi i\left(\mathsf{b}+\sum_{k \geq 2}\frac{B_k}{k!}\ad^{k-1}(\mathsf{a})([\mathsf{a},\mathsf{b}])-\frac{2\pi i}{4}\ad(\mathsf{b})([\mathsf{a},\mathsf{b}])+\sum_{n \geq 3, {\rm odd}}\zeta(n)\ad^n(\mathsf{b})([\mathsf{a},\mathsf{b}])\right).
\end{equation}
Under the substitution $\ad^k(\mathsf{a})\ad^l(\mathsf{b})([\mathsf{a},\mathsf{b}]) \mapsto \mathsf{U}^k\mathsf{V}^l$ (cf. \eqref{eqn:isom2}), \eqref{eqn:constA} now follows immediately from \eqref{eqn:Aeq2} (the $2\pi i\mathsf{b}$-term belongs to $\mathfrak{A}_{\infty}^{(0)}$ and does not contribute to $\mathfrak{A}_{\infty}^{(1)}$).

The calculation of $\mathfrak{B}_{\infty}^{(1)}$ is very similar, so we will omit some details. First, by definition
\begin{equation}
\mathfrak{B}_{\infty}=\log(\Phi(\iota(\mathsf{x}_{\infty}),\iota(\mathsf{x}_1))e^{\mathsf{a}}\Phi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))^{-1}),
\end{equation}
where $\mathsf{x}_{\infty}:=-\mathsf{x}_0-\mathsf{x}_1$. Furthermore,
\begin{align}
\mathfrak{T}&:=\log(e^{\mathsf{a}}\Phi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))^{-1})\\
&\equiv -\log(\Phi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))e^{-\mathsf{a}})\\
&\equiv \mathsf{a}-\sum_{k \geq 0}\frac{B_k}{k!}\ad^k(-\mathsf{a})(\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1))) \mod D^2\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}.\label{eqn:B2}
\end{align}
We obtain
\begin{align}
\mathfrak{B}_{\infty} &\equiv \log(\Phi(\iota(\mathsf{x}_{\infty}),\iota(\mathsf{x}_1))e^{\mathfrak{T}})\\
&\equiv \mathfrak{T}+\sum_{k \geq 0}\frac{B_k}{k!}\ad^k(\mathsf{a})(\varphi(\iota(\mathsf{x}_{\infty}),\iota(\mathsf{x}_1))) \mod D^2\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C},
\end{align}
where the last equality follows from the fact that $\mathfrak{T} \equiv \mathsf{a} \mod D^1\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$.
A short calculation shows that
\begin{align}
&\mathfrak{T}+\sum_{k \geq 0}\frac{B_k}{k!}\ad^k(\mathsf{a})(\varphi(\iota(\mathsf{x}_{\infty}),\iota(\mathsf{x}_1)))\\
&\equiv \mathsf{a}-\sum_{k \geq 0}\frac{B_k}{k!}(-1)^k\ad^k(\mathsf{a})(\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1)))+\sum_{k \geq 0}\frac{B_k}{k!}\ad^k(\mathsf{a})(\varphi(\iota(\mathsf{x}_{\infty}),\iota(\mathsf{x}_1)))\\
&\equiv \mathsf{a}+\sum_{k\geq 0}\frac{B_k}{k!}\ad^k(\mathsf{a})\Big( \varphi(\iota(\mathsf{x}_{\infty}),\iota(\mathsf{x}_1))-(-1)^k\varphi(\iota(\mathsf{x}_0),\iota(\mathsf{x}_1)) \Big) \mod D^2\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}.\label{eqn:B3}
\end{align}
The term in brackets is equal to
\begin{equation}
\begin{cases} \displaystyle 2\sum_{n\geq 2, \, \rm{even}}\zeta(n)\ad^{n-1}(\mathsf{b})([\mathsf{a},\mathsf{b}]) &\mbox{if $k$ is even,}\\ \displaystyle-2\sum_{n\geq 3, \, \rm{odd}}\zeta(n)\ad^{n-1}(\mathsf{b})([\mathsf{a},\mathsf{b}]) & \mbox{if $k$ is odd.} \end{cases}
\end{equation}
Again using that $\zeta(k)=-\frac{B_k(2\pi i)^k}{2\cdot k!}$ for $k \geq 2$ even, we obtain that \eqref{eqn:B3} equals
\begin{align}
\mathsf{a}-\sum_{n \geq 2}\frac{B_n(2\pi i)^n}{n!}\ad^{n-1}(\mathsf{b})([\mathsf{a},\mathsf{b}])&-\sum_{k,n \geq 2}\frac{B_kB_n(2\pi i)^n}{k!\,n!}\ad^k(\mathsf{a})\ad^{n-1}(\mathsf{b})([\mathsf{a},\mathsf{b}])\notag\\
&+\sum_{n\geq 3, \, \rm{odd}}\zeta(n)\ad(\mathsf{a})\ad^{n-1}(\mathsf{b})([\mathsf{a},\mathsf{b}]) \mod D^2\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}.\label{eqn:B4}
\end{align}
The first term $\mathsf{a}$ belongs to $\mathfrak{B}_{\infty}^{(0)}$, and does not contribute to $\mathfrak{B}^{(1)}_{\infty}$. Applying the isomorphism \eqref{eqn:isom2} to the remaining terms in \eqref{eqn:B4}, we obtain the desired result \eqref{eqn:constB}.
\end{prf}

The series $\mathfrak{A}_{\infty}^{(1)}$ and $\mathfrak{B}_{\infty}^{(1)}$ are closely related to the extended period polynomials $r_{G_{2k}}(\mathsf{X},\mathsf{Y})$ of Eisenstein series \cite{Zagier:Periods}. Precisely, for $k \geq 2$, one has
\begin{equation} \label{eqn:periodpolynomial}
r_{G_{2k}}(\mathsf{X},\mathsf{Y})=\omega_{G_{2k}}^+P_{G_{2k}}(\mathsf{X},\mathsf{Y})^++\omega_{G_{2k}}^-P_{G_{2k}}(\mathsf{X},\mathsf{Y})^-,
\end{equation}
where
\begin{align}
P_{G_{2k}}(\mathsf{X},\mathsf{Y})^+&=\mathsf{X}^{2k-2}-\mathsf{Y}^{2k-2},\\
P_{G_{2k}}(\mathsf{X},\mathsf{Y})^-&=\sum_{-1\leq n \leq 2k-1}\frac{B_{n+1}B_{2k-n-1}}{(n+1)!(2k-1-n)!}\mathsf{X}^n\mathsf{Y}^{2k-2-n},
\end{align}
and $\omega_{G_{2k}}^-=-\frac{(2k-2)!}{2}$, $\omega_{G_{2k}}^+=\frac{\zeta(2k-1)}{(2\pi i)^{2k-1}}\omega_{G_{2k}}^-$ (the ``periods'' of $G_{2k}$). Now let
\begin{equation}
\widetilde{\mathfrak{A}}(\mathsf{U},\mathsf{V})=\frac{1}{\mathsf{V}}\mathfrak{A}_{\infty}^{(1)}(\mathsf{U},\mathsf{V}), \quad \widetilde{\mathfrak{B}}(\mathsf{U},\mathsf{V})=\frac{1}{\mathsf{U}}\mathfrak{B}_{\infty}^{(1)}(\mathsf{U},\mathsf{V}).
\end{equation}
These are formal Laurent series in the variables $\mathsf{U}$ and $\mathsf{V}$. In general, if $f(\mathsf{U},\mathsf{V})$ is a formal Laurent series, we denote by $f(\mathsf{U},\mathsf{V})_k$ its homogeneous component of degree $k$ and set $f(\mathsf{U},\mathsf{V})^\pm:=\frac{f(\mathsf{U},\mathsf{V})\pm f(-\mathsf{U},\mathsf{V})}{2}$.
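As a concrete illustration of \eqref{eqn:periodpolynomial}, take the lowest weight case $2k=4$; the following evaluation, obtained by simply substituting $B_0=1$, $B_1=-\frac12$, $B_2=\frac16$, $B_3=0$, $B_4=-\frac1{30}$ into the displayed formulas, is included only as a worked example. One finds
\begin{equation}
P_{G_4}(\mathsf{X},\mathsf{Y})^+=\mathsf{X}^2-\mathsf{Y}^2, \quad P_{G_4}(\mathsf{X},\mathsf{Y})^-=-\frac{\mathsf{Y}^3}{720\,\mathsf{X}}+\frac{\mathsf{X}\mathsf{Y}}{144}-\frac{\mathsf{X}^3}{720\,\mathsf{Y}},
\end{equation}
together with $\omega_{G_4}^-=-1$ and $\omega_{G_4}^+=-\frac{\zeta(3)}{(2\pi i)^3}$, so that
\begin{equation}
r_{G_4}(\mathsf{X},\mathsf{Y})=-\frac{\zeta(3)}{(2\pi i)^3}\left(\mathsf{X}^2-\mathsf{Y}^2\right)+\frac{\mathsf{Y}^3}{720\,\mathsf{X}}-\frac{\mathsf{X}\mathsf{Y}}{144}+\frac{\mathsf{X}^3}{720\,\mathsf{Y}}.
\end{equation}
In particular, $r_{G_4}$ is a Laurent polynomial rather than an ordinary polynomial, in line with the use of \emph{extended} period polynomials above.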
Comparing now \eqref{eqn:periodpolynomial} with Theorem \ref{thm:arithmetic}, we get

\begin{cor} \label{cor:arithmetic}
We have
\begin{align}
r_{G_{2k}}(\mathsf{U},\mathsf{V})= \frac{\omega_{G_{2k}}^-}{2\pi i}\Bigg[\widetilde{\mathfrak{A}}(\overline{\mathsf{U}},\mathsf{V})_{2k-2}^++\widetilde{\mathfrak{B}}(\mathsf{V},\overline{\mathsf{U}})_{2k-2}^+ -\widetilde{\mathfrak{A}}(\overline{\mathsf{U}},\mathsf{V})_{2k-2}^--\widetilde{\mathfrak{B}}(\overline{\mathsf{U}},\mathsf{V})_{2k-2}^-\Bigg],
\end{align}
where $\overline{\mathsf{U}}=\frac{\mathsf{U}}{2\pi i}$.
\end{cor}

\subsection{The geometric piece: special values of elliptic polylogarithms} \label{ssec:5.2}

Recall from Section \ref{ssec:4.2} the definition of the automorphism $g(\tau): \exp \mathfrak{p}(E_{\tau}^{\times})_{\mathbb C} \rightarrow \exp \mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$. It naturally extends to the topological enveloping algebra $\mathbb Q\langle\!\langle \mathsf{a},\mathsf{b}\rangle\!\rangle$ of $\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$. In this section, we compute the images of $g(\tau)(\mathsf{a})$, $g(\tau)(\mathsf{b})$ in the meta-abelian quotient $\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}^{\rm met-ab}$ of $\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$, and relate the result to special values of Beilinson--Levin's elliptic polylogarithms \cite{BeiLev,Levin:Compositio}.

\begin{thm} \label{thm:geometric}
Let $\mathsf{W}=\frac{\mathsf{U}}{2\pi i}+\tau \mathsf{V}$. We have
\begin{equation} \label{eqn:geom1}
g(\tau)(\mathsf{a})^{\rm met-ab}=\mathsf{a}+2\pi i\tau \mathsf{b}-2\pi i\mathsf{W}\sum_{k=1}^{\infty}\frac{2}{(2k-2)!} \int_{\tau}^{\overrightarrow{1}_{\infty}}\underline{G}_{2k},
\end{equation}
and
\begin{equation} \label{eqn:geom2}
g(\tau)(\mathsf{b})^{\rm met-ab}=2\pi i\mathsf{b}-2\pi i\mathsf{V}\sum_{k=1}^{\infty}\frac{2}{(2k-2)!} \int_{\tau}^{\overrightarrow{1}_{\infty}}\underline{G}_{2k},
\end{equation}
where $\underline{G}_{2k}=(2\pi i)^{2k-1}G_{2k}(z)(\mathsf{W}-z\mathsf{V})^{2k-2}\mathrm{d} z$.
\end{thm}

\begin{prf}
By Corollary \ref{cor:gmetab}, we have
\begin{align}
g(\tau)(\mathsf{a})^{\rm met-ab}& =\mathsf{a}+2\pi i\tau \mathsf{b}+\sum_{n\geq 0, \, k\geq 1}(-2\pi i)^{n+1}\mathcal G(\{0\}_n,2k;\tau)\Big(\varepsilon_0^n \circ \varepsilon_{2k}\Big)(\mathsf{a})\\
&+\sum_{k,n \geq 1}(-2\pi i)^{n+1}\mathcal G(\{0\}_{n-1},2k,0;\tau)\Big(\varepsilon_0^{n-1} \circ \varepsilon_{2k} \circ \varepsilon_0\Big)(\mathsf{a})\\
&= \mathsf{a}+2\pi i\tau \mathsf{b}+\sum_{n\geq 0, \, k\geq 1}\frac{2(-2\pi i)^{n+1}}{(2k-2)!}\mathcal G(\{0\}_n,2k;\tau)\varepsilon_0^n(\ad^{2k-1}(\mathsf{a})([\mathsf{a},\mathsf{b}]))\\
&-\sum_{k,n \geq 1}\frac{2(-2\pi i)^{n+1}}{(2k-2)!}\mathcal G(\{0\}_{n-1},2k,0;\tau)\varepsilon_0^{n-1}(\ad^{2k-2}(\mathsf{a})\ad(\mathsf{b})([\mathsf{a},\mathsf{b}])). \label{eqn:geom3}
\end{align}
Using the isomorphism of Proposition \ref{prop:metablie} together with Proposition \ref{prop:metabaction} and Proposition \ref{prop:IterEisEichler}, we see that \eqref{eqn:geom3} equals
\begin{align}
&\mathsf{a}+2\pi i\tau\mathsf{b}-\sum_{n\geq 0, \, k\geq 1}\frac{2(2\pi i)^{n+1}}{(2k-2)!\,n!}I_n(G_{2k};\tau)\left(\mathsf{V}\frac{\partial}{\partial \mathsf{U}} \right)^n\mathsf{U}^{2k-1}\\
&-\sum_{k,n \geq 1}\frac{2(2\pi i)^{n+1}}{(2k-2)!\,(n-1)!}\Big(\tau I_{n-1}(G_{2k};\tau)-I_n(G_{2k};\tau) \Big) \left(\mathsf{V}\frac{\partial}{\partial \mathsf{U}} \right)^{n-1} \mathsf{U}^{2k-2}\mathsf{V}.
\end{align}
Now we apply the differential operator $\mathsf{V}\frac{\partial }{\partial \mathsf{U}}$ and split the first and the last sum to obtain
\begin{align}
g(\tau)(\mathsf{a})^{\rm met-ab}&=\mathsf{a}+2\pi i\tau \mathsf{b}-\sum_{k\geq 1}\frac{2(2\pi i)}{(2k-2)!}I_0(G_{2k};\tau)\mathsf{U}^{2k-1}\\
&-\sum_{k,n\geq 1}\frac{2(2k-1)(2\pi i)^{n+1}}{(2k-1-n)!\,n!}I_n(G_{2k};\tau)\mathsf{U}^{2k-1-n}\mathsf{V}^n\\
&-2\pi i\tau\sum_{k,n \geq 1}\frac{2(2\pi i)^n}{(2k-1-n)!\,(n-1)!} I_{n-1}(G_{2k};\tau)\mathsf{U}^{2k-1-n}\mathsf{V}^{n-1}\\
&+\sum_{k,n \geq 1}\frac{2(2\pi i)^{n+1}}{(2k-1-n)!\,(n-1)!}I_n(G_{2k};\tau)\mathsf{U}^{2k-1-n}\mathsf{V}^{n-1}.
\end{align}
From the definition of $I_n(G_{2k};\tau)$, it is easy to see that the third sum equals
\begin{equation}
-2\pi i\tau \mathsf{V}\sum_{k=1}^{\infty}\frac{2(2\pi i)}{(2k-2)!}\int_{\tau}^{\overrightarrow{1}_{\infty}}G_{2k}(z)\Big(\mathsf{U}+2\pi i(\tau-z)\mathsf{V}\Big)^{2k-2}\mathrm{d} z.
\end{equation}
On the other hand, the first, second and fourth sum give
\begin{equation}
-\mathsf{U}\sum_{k=1}^{\infty}\frac{2(2\pi i)}{(2k-2)!}\int_{\tau}^{\overrightarrow{1}_{\infty}}G_{2k}(z)\Big(\mathsf{U}+2\pi i(\tau-z)\mathsf{V}\Big)^{2k-2}\mathrm{d} z.
\end{equation}
Combining the two equations and setting $\mathsf{W}=\frac{\mathsf{U}}{2\pi i}+\tau \mathsf{V}$, the first equality \eqref{eqn:geom1} follows. Since $g(\tau)$ is uniquely determined by its value on $e^{\mathsf{a}}$, the second statement \eqref{eqn:geom2} follows from the first, but can also be proved directly along similar lines.
\end{prf}

We now give the relation to special values of elliptic polylogarithms. Following the notation of \cite{Levin:Compositio}, we let $\Xi(\xi,\tau;\mathsf{X},\mathsf{Y})$ be the (modified) generating series of the elliptic polylogarithms $\Lambda_{m,n}(\xi,\tau)$. These are holomorphic functions on the universal covering of the once-punctured elliptic curve $E_{\tau}^{\times}$, which are obtained by averaging the (Debye) polylogarithms along the spiral $q^{\mathbb Z}$. Let
\begin{equation}
\Xi^*(0,\tau;\mathsf{X},\mathsf{Y}):=\big(\Xi(\xi,\tau;\mathsf{X},\mathsf{Y})-\tfrac{1}{2\pi i}\log(2\pi i\xi)\big)\big\vert_{\xi=0}
\end{equation}
be its (regularized) special value at the zero section of the elliptic curve. It has been shown in \cite{Levin:Compositio}, Theorem 4.1, that
\begin{equation} \label{eqn:Levin}
\Xi^*(0,\tau;\mathsf{X},\mathsf{Y})=\frac{-\tau}{\mathsf{X}(\mathsf{X}-\tau \mathsf{Y})}+\sum_{k=2}^{\infty}(-1)^{k-1}(k-1)\mathcal E_k,
\end{equation}
where for $k\geq 2$, $\mathcal E_k$ is the indefinite integral of $E_k(\tau)(\mathsf{X}-\tau \mathsf{Y})^{k-2}\mathrm{d}\tau$, with $E_k(\tau)=\frac{2(2\pi i)^k}{(k-1)!}G_k(\tau)=\sum_{(m,n) \in \mathbb Z^2 \setminus \{(0,0)\}}\frac{1}{(m\tau+n)^k}$ the classical Eisenstein series of weight $k$. The constants of integration in the indefinite integrals can be retrieved uniformly as the (regularized) special value of $\Xi^*(0,\tau;\mathsf{X},\mathsf{Y})$ at $\tau=i\infty$, which is straightforwardly computed from the definitions and is given explicitly by
\begin{align} \label{eqn:constellpol}
\Xi^*(0,i\infty;\mathsf{X},\mathsf{Y})=-\sum_{n\geq 2}\frac{\zeta(n)}{(2\pi i)^n}\mathsf{Y}^{n-1}+\frac{1}{e^{\mathsf{X}}-1}\left( \frac{1}{e^{\mathsf{Y}}-1}-\frac 1{\mathsf{Y}} \right).
\end{align}
Now comparing \eqref{eqn:Levin} with Theorem \ref{thm:geometric}, we obtain

\begin{cor} \label{cor:geometric}
In $g(\tau)(\mathsf{a})^{\rm met-ab}-\mathsf{a}$, replace $2\pi i\mathsf{b}$ by $(\mathsf{W}-\tau \mathsf{V})^{-1}$.
Then
\begin{align}
\frac{g(\tau)(\mathsf{a})^{\rm met-ab}-\mathsf{a}}{-(2\pi i)^2\mathsf{W}}=\Xi^*(0,\tau;2\pi i\mathsf{W},2\pi i\mathsf{V})-\Xi^*(0,i\infty;2\pi i\mathsf{W},2\pi i\mathsf{V}),
\end{align}
where $\Xi^*(0,i\infty;\mathsf{X},\mathsf{Y})$ is given in \eqref{eqn:constellpol} above.
\end{cor}

\subsection{Putting the pieces together} \label{ssec:5.3}

We can now complete the computation of $\mathfrak{A}(\tau)^{\rm met-ab}$ and $\mathfrak{B}(\tau)^{\rm met-ab}$ by combining the results of the previous sections.

\begin{thm} \label{thm:arithgeo}
We have
\begin{align}
\mathfrak{A}(\tau)^{\rm met-ab}&=2\pi i\mathsf{b}+\exp\left(\tau\frac{\partial}{\partial \overline{\mathsf{U}}}\mathsf{V} \right)\mathfrak{A}^{(1)}_{\infty}-2\pi i\mathsf{V}\sum_{k=1}^{\infty}\frac{2}{(2k-2)!}\int_{\tau}^{\overrightarrow{1}_{\infty}}\underline{G}_{2k},
\end{align}
and
\begin{align}
\mathfrak{B}(\tau)^{\rm met-ab}&=\mathsf{a}+2\pi i\tau \mathsf{b}+\exp\left(\tau\frac{\partial}{\partial \overline{\mathsf{U}}}\mathsf{V} \right)\mathfrak{B}^{(1)}_{\infty}-2\pi i\mathsf{W}\sum_{k=1}^{\infty}\frac{2}{(2k-2)!}\int_{\tau}^{\overrightarrow{1}_{\infty}}\underline{G}_{2k},
\end{align}
where $\overline{\mathsf{U}}=\frac{\mathsf{U}}{2\pi i}$, $\mathsf{W}=\overline{\mathsf{U}}+\tau \mathsf{V}$, and $\mathfrak{A}^{(1)}_{\infty}$ and $\mathfrak{B}^{(1)}_{\infty}$ are as given in Theorem \ref{thm:arithmetic}.
\end{thm}

\begin{prf}
We only prove the first equality; the second one is shown analogously. By Theorem \ref{thm:Diffeq}, we have $\mathfrak{A}(\tau)=g(\tau)(\mathfrak{A}_{\infty})$, hence
\begin{equation}
\mathfrak{A}(\tau)^{\rm met-ab} \equiv g(\tau)(\mathfrak{A}_{\infty}) \mod D^2\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C},
\end{equation}
and from Proposition \ref{prop:metabaction}, we get
\begin{equation}
\mathfrak{A}(\tau)^{\rm met-ab}=g(\tau)(\mathfrak{A}^{(1)}_{\infty})+2\pi ig(\tau)(\mathsf{b})^{\rm met-ab}.
\end{equation}
The only derivation which acts non-trivially on $\gr^1_D\mathfrak{p}(E_{\tau}^{\times})_{\mathbb C}$ is $\varepsilon_0$, which itself acts as $-\frac{\partial}{\partial \mathsf{U}}\mathsf{V}=\frac{1}{2\pi i}\frac{\partial}{\partial \overline{\mathsf{U}}}\mathsf{V}$. Combining this with Theorem \ref{thm:geometric}, we get the result:
\begin{align}
\mathfrak{A}(\tau)^{\rm met-ab}=2\pi i\mathsf{b}+\exp\left(\tau\frac{\partial}{\partial \overline{\mathsf{U}}}\mathsf{V}\right)\mathfrak{A}^{(1)}_{\infty}-2\pi i\mathsf{V}\sum_{k=1}^{\infty}\frac{2}{(2k-2)!}\int_{\tau}^{\overrightarrow{1}_{\infty}}\underline{G}_{2k}.
\end{align}
\end{prf}

\begin{rmk}
The value for $\mathfrak{A}(\tau)^{\rm met-ab}$ given in Theorem \ref{thm:arithgeo} can be further simplified. To this end, recall from Theorem \ref{thm:arithmetic} that
\begin{equation}
\mathfrak{A}_{\infty}^{(1)}=2\pi i\left(\sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}\mathsf{U}^{2k-1}-\frac{2\pi i}{4} \mathsf{V}+\sum_{n\geq 3, \, \rm{odd}}\zeta(n)\mathsf{V}^n\right).
\end{equation}
Therefore
\begin{align}
\exp\left( \tau\frac{\partial}{\partial \overline{\mathsf{U}}}\mathsf{V} \right)\mathfrak{A}_{\infty}^{(1)}&=\mathfrak{A}_{\infty}^{(1)}+2\pi i\sum_{k,n\geq 1}\frac{\tau^n}{n!}\frac{B_{2k}}{(2k)!}\left(\frac{\partial}{\partial \overline{\mathsf{U}}}\mathsf{V} \right)^n\mathsf{U}^{2k-1}\\
&=\mathfrak{A}_{\infty}^{(1)}+2\pi i\mathsf{V}\sum_{k,n \geq 1}\frac{2(2\pi i)^{2k-1}}{(2k-2)!}\Bigg[\frac{\tau^n}{n!}\frac{B_{2k}}{4k}\left(\frac{\partial}{\partial \overline{\mathsf{U}}}\mathsf{V} \right)^{n-1}\overline{\mathsf{U}}^{2k-2}\Bigg]\\
&=\mathfrak{A}_{\infty}^{(1)}+2\pi i\mathsf{V}\sum_{k,n \geq 1}\frac{2(2\pi i)^{2k-1}}{(2k-1-n)!}\Bigg[\frac{\tau^n}{n!}\frac{B_{2k}}{4k}\overline{\mathsf{U}}^{2k-1-n}\mathsf{V}^{n-1}\Bigg]\\
&=\mathfrak{A}_{\infty}^{(1)}+2\pi i\mathsf{V}\sum_{k=1}^{\infty}\frac{2(2\pi i)^{2k-1}}{(2k-2)!}\frac{B_{2k}}{4k}\int_0^{\tau}(\overline{\mathsf{U}}+(\tau-z)\mathsf{V})^{2k-2}\mathrm{d} z.
\end{align}
Note that $-\frac{B_{2k}}{4k}=a_0(G_{2k})$, the zeroth Fourier coefficient of $G_{2k}$. Consequently, we obtain
\begin{align}
\mathfrak{A}(\tau)^{\rm met-ab}&=2\pi i\mathsf{b}+\mathfrak{A}_{\infty}^{(1)}-2\pi i\mathsf{V}\sum_{k=1}^{\infty}\frac{2}{(2k-2)!}\int_{\tau}^{i\infty}\underline{G}^0_{2k},
\end{align}
where $\underline{G}^0_{2k}=\underline{G}_{2k}-a_0(\underline{G}_{2k})=\underline{G}_{2k}-(2\pi i)^{2k-1}a_0(G_{2k})(\mathsf{W}-z\mathsf{V})^{2k-2}\mathrm{d} z$, since
\begin{align}
\int_{\tau}^{\overrightarrow{1}_{\infty}}\underline{G}_{2k}=\int_{\tau}^{i\infty}\underline{G}^0_{2k}-\int_0^{\tau}a_0(\underline{G}_{2k}).
\end{align}
\end{rmk}

\begin{bibtex}[\jobname]
@article {And, AUTHOR = {Anderson, Greg W.}, TITLE = {The hyperadelic gamma function}, JOURNAL = {Invent. Math.}, FJOURNAL = {Inventiones Mathematicae}, VOLUME = {95}, YEAR = {1989}, NUMBER = {1}, PAGES = {63--131}, ISSN = {0020-9910}, CODEN = {INVMBH}, MRCLASS = {11S80 (11G20)}, MRNUMBER = {969414}, MRREVIEWER = {Gerd Faltings}, DOI = {10.1007/BF01394145}, URL = {http://dx.doi.org/10.1007/BF01394145}, }
@book {Andre:Motifs, AUTHOR = {Andr{\'e}, Yves}, TITLE = {Une introduction aux motifs (motifs purs, motifs mixtes, p\'eriodes)}, SERIES = {Panoramas et Synth\`eses [Panoramas and Syntheses]}, VOLUME = {17}, PUBLISHER = {Soci\'et\'e Math\'ematique de France, Paris}, YEAR = {2004}, PAGES = {xii+261}, ISBN = {2-85629-164-3}, MRCLASS = {14F42 (11J91 14C25 19E15)}, MRNUMBER = {2115000 (2005k:14041)}, MRREVIEWER = {Luca Barbieri Viale}, }
@article {Apery:Irrationalite, AUTHOR = {Ap{\'e}ry, Roger}, TITLE = {Irrationalit{\'e} de {$\zeta(2)$} et {$\zeta(3)$}}, JOURNAL = {Ast{\'e}risque}, VOLUME = {61}, YEAR = {1979}, PAGES = {11--13}, }
@book {Bailey:Hypergeometric, AUTHOR = {Bailey, W. N.}, TITLE = {Generalized hypergeometric series}, SERIES = {Cambridge Tracts in Mathematics and Mathematical Physics, No. 32}, PUBLISHER = {Stechert-Hafner, Inc., New York}, YEAR = {1964}, PAGES = {v+108}, MRCLASS = {33.20 (40.00)}, MRNUMBER = {0185155 (32 \#2625)}, }
@article {BR:Irrationalite, AUTHOR = {Ball, Keith and Rivoal, Tanguy}, TITLE = {Irrationalit\'e d'une infinit\'e de valeurs de la fonction z\^eta aux entiers impairs}, JOURNAL = {Invent. Math.}, FJOURNAL = {Inventiones Mathematicae}, VOLUME = {146}, YEAR = {2001}, NUMBER = {1}, PAGES = {193--207}, ISSN = {0020-9910}, MRCLASS = {11J72 (11M06)}, MRNUMBER = {1859021}, MRREVIEWER = {F.
Beukers}, DOI = {10.1007/s002220100168}, URL = {http://dx.doi.org/10.1007/s002220100168}, } @article {BKT:EllipticPol, AUTHOR = {Bannai, Kenichi and Kobayashi, Shinichi and Tsuji, Takeshi}, TITLE = {On the de {R}ham and {$p$}-adic realizations of the elliptic polylogarithm for {CM} elliptic curves}, JOURNAL = {Ann. Sci. \'Ec. Norm. Sup\'er. (4)}, FJOURNAL = {Annales Scientifiques de l'\'Ecole Normale Sup\'erieure. Quatri\`eme S\'erie}, VOLUME = {43}, YEAR = {2010}, NUMBER = {2}, PAGES = {185--234}, ISSN = {0012-9593}, MRCLASS = {11G55 (11G15 14F30 14G10)}, MRNUMBER = {2662664 (2011g:11125)}, MRREVIEWER = {Jan Nekov{\'a}{\v{r}}}, } @ARTICLE{BS:fundamentalLie, author = {{Baumard}, S. and {Schneps}, L.}, title = "{Relations dans l'alg\`ebre de Lie fondamentale des motifs elliptiques mixtes}", journal = {ArXiv e-prints}, volume = {math.NT/1310.5833}, primaryClass = "math.AG", keywords = {Mathematics - Algebraic Geometry}, year = 2013, adsurl = {http://adsabs.harvard.edu/abs/2013arXiv1310.5833B}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @incollection {BeiLev, AUTHOR = {Be{\u\i}linson, A. and Levin, A.}, TITLE = {The elliptic polylogarithm}, BOOKTITLE = {Motives ({S}eattle, {WA}, 1991)}, SERIES = {Proc. Sympos. Pure Math.}, VOLUME = {55}, PAGES = {123--190}, PUBLISHER = {Amer. Math. Soc., Providence, RI}, YEAR = {1994}, MRCLASS = {11G05 (11G09 11G40 14H52 19F27)}, MRNUMBER = {1265553}, MRREVIEWER = {J. Browkin}, } @book {Bloch:Regulators, AUTHOR = {Bloch, Spencer J.}, TITLE = {Higher regulators, algebraic {$K$}-theory, and zeta functions of elliptic curves}, SERIES = {CRM Monograph Series}, VOLUME = {11}, PUBLISHER = {American Mathematical Society, Providence, RI}, YEAR = {2000}, PAGES = {x+97}, ISBN = {0-8218-2114-8}, MRCLASS = {11G55 (11G40 11R70 14G10 19F27)}, MRNUMBER = {1760901 (2001i:11082)}, MRREVIEWER = {Jan Nekov{\'a}{\v{r}}}, } @article {MZVmine, AUTHOR = {Bl{\"u}mlein, J. and Broadhurst, D. J. and Vermaseren, J. A. M.}, TITLE = {The multiple zeta value data mine}, JOURNAL = {Comput. Phys. Comm.}, FJOURNAL = {Computer Physics Communications. An International Journal and Program Library for Computational Physics and Physical Chemistry}, VOLUME = {181}, YEAR = {2010}, NUMBER = {3}, PAGES = {582--625}, ISSN = {0010-4655}, CODEN = {CPHCBZ}, MRCLASS = {11M32 (11Y70)}, MRNUMBER = {2578167 (2011a:11163)}, MRREVIEWER = {Zhonghua Li}, DOI = {10.1016/j.cpc.2009.11.007}, URL = {http://dx.doi.org/10.1016/j.cpc.2009.11.007}, } @book {Bou, AUTHOR = {Bourbaki, N.}, TITLE = {\'{E}l\'ements de math\'ematique. {F}asc. {XXXIV}. {G}roupes et alg\`ebres de {L}ie. {C}hapitre {IV}: {G}roupes de {C}oxeter et syst\`emes de {T}its. {C}hapitre {V}: {G}roupes engendr\'es par des r\'eflexions. {C}hapitre {VI}: syst\`emes de racines}, SERIES = {Actualit\'es Scientifiques et Industrielles, No. 1337}, PUBLISHER = {Hermann, Paris}, YEAR = {1968}, PAGES = {288 pp. (loose errata)}, MRCLASS = {22.50 (17.00)}, MRNUMBER = {0240238 (39 \#1590)}, MRREVIEWER = {G. B. Seligman}, } @article {BK, AUTHOR = {Broadhurst, D. J. and Kreimer, D.}, TITLE = {Association of multiple zeta values with positive knots via {F}eynman diagrams up to {$9$} loops}, JOURNAL = {Phys. Lett. B}, FJOURNAL = {Physics Letters. B}, VOLUME = {393}, YEAR = {1997}, NUMBER = {3-4}, PAGES = {403--412}, ISSN = {0370-2693}, CODEN = {PYLBAJ}, MRCLASS = {11M41 (11Z05 57M25 81T18)}, MRNUMBER = {1435933 (98g:11101)}, MRREVIEWER = {Louis H. 
Kauffman}, DOI = {10.1016/S0370-2693(96)01623-1}, URL = {http://dx.doi.org/10.1016/S0370-2693(96)01623-1}, } @article {BMS, AUTHOR = {Broedel, Johannes and Matthes, Nils and Schlotterer, Oliver}, TITLE = {Relations between elliptic multiple zeta values and a special derivation algebra}, JOURNAL = {J. Phys. A}, FJOURNAL = {Journal of Physics. A. Mathematical and Theoretical}, VOLUME = {49}, YEAR = {2016}, NUMBER = {15}, PAGES = {155--203}, ISSN = {1751-8113}, MRCLASS = {11M32 (33E05)}, MRNUMBER = {3479125}, DOI = {10.1088/1751-8113/49/15/155203}, URL = {http://dx.doi.org/10.1088/1751-8113/49/15/155203}, } @article {BMMS, AUTHOR = {Broedel, Johannes and Mafra, Carlos R. and Matthes, Nils and Schlotterer, Oliver}, TITLE = {Elliptic multiple zeta values and one-loop superstring amplitudes}, JOURNAL = {J. High Energy Phys.}, FJOURNAL = {Journal of High Energy Physics}, YEAR = {2015}, NUMBER = {7}, PAGES = {112, front matter+41}, ISSN = {1126-6708}, MRCLASS = {83C47 (83E30)}, MRNUMBER = {3383100}, MRREVIEWER = {Farhang Loran}, } @article {BSS, AUTHOR = {Broedel, Johannes and Schlotterer, Oliver and Stieberger, Stephan}, TITLE = {Polylogarithms, multiple zeta values and superstring amplitudes}, JOURNAL = {Fortschr. Phys.}, FJOURNAL = {Fortschritte der Physik. Progress of Physics}, VOLUME = {61}, YEAR = {2013}, NUMBER = {9}, PAGES = {812--870}, ISSN = {0015-8208}, MRCLASS = {81T30 (11M32 33B30)}, MRNUMBER = {3104459}, MRREVIEWER = {Giuseppe Nardelli}, DOI = {10.1002/prop.201300019}, URL = {http://dx.doi.org/10.1002/prop.201300019}, } @article{BSST, author = "Broedel, Johannes and Schlotterer, Oliver and Stieberger, Stephan and Terasoma, Tomohide", title = "{All order $\alpha^{\prime}$-expansion of superstring trees from the Drinfeld associator}", journal = "Phys. Rev.", volume = "D89", year = "2014", number = "6", pages = "066014", doi = "10.1103/PhysRevD.89.066014", eprint = "1304.7304", archivePrefix = "arXiv", primaryClass = "hep-th", reportNumber = "DAMTP-2013-23, AEI-2013-195, MPP-2013-120", SLACcitation = " } @incollection {Brown:Colombia, AUTHOR = {Brown, Francis}, TITLE = {Iterated integrals in quantum field theory}, BOOKTITLE = {Geometric and topological methods for quantum field theory}, PAGES = {188--240}, PUBLISHER = {Cambridge Univ. Press, Cambridge}, YEAR = {2013}, MRCLASS = {81S40 (81T18 81T40)}, MRNUMBER = {3098088}, MRREVIEWER = {Roberto Quezada}, } @article {Brown:MTM, AUTHOR = {Brown, Francis}, TITLE = {Mixed {T}ate motives over {$\mathbb Z$}}, JOURNAL = {Ann. of Math. (2)}, FJOURNAL = {Annals of Mathematics. Second Series}, VOLUME = {175}, YEAR = {2012}, NUMBER = {2}, PAGES = {949--976}, ISSN = {0003-486X}, MRCLASS = {11S20 (11M32 14F42)}, MRNUMBER = {2993755}, MRREVIEWER = {Pierre A. Lochak}, DOI = {10.4007/annals.2012.175.2.10}, URL = {http://dx.doi.org/10.4007/annals.2012.175.2.10}, } @unpublished{Brown:MMV, author = {Brown, Francis}, title = {Multiple modular values and the relative completion of the fundamental group of $\mathcal{M}_{1,1}$}, year = {2016}, note = {arXiv:1407.5167v3}, } @unpublished{Brown:ICM, author = {Brown, Francis}, title = {Motivic periods and $\mathbb{P}^1$ minus three points}, year = {2014}, note = {proceedings of the ICM (2014)}, } @article {Brown:depth3, AUTHOR = {Brown, Francis}, TITLE = {ZETA ELEMENTS IN DEPTH 3 AND THE FUNDAMENTAL {L}IE ALGEBRA OF THE INFINITESIMAL {T}ATE CURVE}, JOURNAL = {Forum Math. Sigma}, FJOURNAL = {Forum of Mathematics. 
Sigma}, VOLUME = {5, e1}, YEAR = {2017}, PAGES = {56pp}, ISSN = {2050-5094}, MRCLASS = {11M32 (11F67)}, MRNUMBER = {3593496}, DOI = {10.1017/fms.2016.29}, URL = {http://dx.doi.org/10.1017/fms.2016.29}, } @unpublished{Brown:depth, author = {Brown, Francis}, title = {Depth-graded motivic multiple zeta values}, year = {2013}, note = {arXiv:1301.3053}, } @incollection {Brown:Decomposition, AUTHOR = {Brown, Francis}, TITLE = {On the decomposition of motivic multiple zeta values}, BOOKTITLE = {Galois-{T}eichm\"uller theory and arithmetic geometry}, SERIES = {Adv. Stud. Pure Math.}, VOLUME = {63}, PAGES = {31--58}, PUBLISHER = {Math. Soc. Japan, Tokyo}, YEAR = {2012}, MRCLASS = {11M32 (13B05 16T15)}, MRNUMBER = {3051238}, MRREVIEWER = {Antanas Laurin{\v{c}}ikas}, } @unpublished{BL:MEP, author = {Brown, Francis. and Levin, Andrey}, title = {Multiple elliptic polylogarithms}, note = {arXiv:1110.6917}, year = {2011}, } @incollection {Car, AUTHOR = {Cartier, Pierre}, TITLE = {A primer of {H}opf algebras}, BOOKTITLE = {Frontiers in number theory, physics, and geometry. {II}}, PAGES = {537--615}, PUBLISHER = {Springer, Berlin}, YEAR = {2007}, MRCLASS = {16W30 (01A60 05E05)}, MRNUMBER = {2290769 (2008b:16059)}, MRREVIEWER = {Ralf Holtkamp}, DOI = {10.1007/978-3-540-30308-4_12}, URL = {http://dx.doi.org/10.1007/978-3-540-30308-4_12}, } @incollection {CEE:KZB, AUTHOR = {Calaque, Damien and Enriquez, Benjamin and Etingof, Pavel}, TITLE = {Universal {KZB} equations: the elliptic case}, BOOKTITLE = {Algebra, arithmetic, and geometry: in honor of {Y}u. {I}. {M}anin. {V}ol. {I}}, SERIES = {Progr. Math.}, VOLUME = {269}, PAGES = {165--266}, PUBLISHER = {Birkh\"auser Boston, Inc., Boston, MA}, YEAR = {2009}, MRCLASS = {32G34 (11F55 17B37 20C08 32C38)}, MRNUMBER = {2641173 (2011k:32018)}, MRREVIEWER = {Gwyn Bellamy}, DOI = {10.1007/978-0-8176-4745-2_5}, URL = {http://dx.doi.org/10.1007/978-0-8176-4745-2\_5}, } @article {Chen:PathIntegrals, AUTHOR = {Chen, Kuo Tsai}, TITLE = {Iterated path integrals}, JOURNAL = {Bull. Amer. Math. Soc.}, FJOURNAL = {Bulletin of the American Mathematical Society}, VOLUME = {83}, YEAR = {1977}, NUMBER = {5}, PAGES = {831--879}, ISSN = {0002-9904}, MRCLASS = {55D35 (58A99)}, MRNUMBER = {0454968 (56 \#13210)}, MRREVIEWER = {Jean-Michel Lemaire}, } @book {Del2, AUTHOR = {Deligne, Pierre}, TITLE = {\'{E}quations diff\'erentielles \`a points singuliers r\'eguliers}, SERIES = {Lecture Notes in Mathematics, Vol. 163}, PUBLISHER = {Springer-Verlag, Berlin-New York}, YEAR = {1970}, PAGES = {iii+133}, MRCLASS = {14D05 (14C30)}, MRNUMBER = {0417174}, MRREVIEWER = {Helmut Hamm}, } @incollection {Deligne:P1, AUTHOR = {Deligne, P.}, TITLE = {Le groupe fondamental de la droite projective moins trois points}, BOOKTITLE = {Galois groups over ${\bf Q}$ ({B}erkeley, {CA}, 1987)}, SERIES = {Math. Sci. Res. Inst. Publ.}, VOLUME = {16}, PAGES = {79--297}, PUBLISHER = {Springer, New York}, YEAR = {1989}, MRCLASS = {14G25 (11G35 11M06 11R70 14F35 19E99 19F27)}, MRNUMBER = {1012168 (90m:14016)}, MRREVIEWER = {James Milne}, DOI = {10.1007/978-1-4613-9649-9_3}, URL = {http://dx.doi.org/10.1007/978-1-4613-9649-9\_3}, } @article {Deligne:Multizetas, AUTHOR = {Deligne, Pierre}, TITLE = {Multiz\^etas, d'apr\`es {F}rancis {B}rown}, NOTE = {S\'eminaire Bourbaki. Vol. 2011/2012. Expos\'es 1043--1058}, JOURNAL = {Ast\'erisque}, FJOURNAL = {Ast\'erisque}, NUMBER = {352}, YEAR = {2013}, PAGES = {Exp. No. 
1048, viii, 161--185}, ISSN = {0303-1179}, ISBN = {978-2-85629-371-3}, MRCLASS = {11S40 (11G09 14C15 14F35)}, MRNUMBER = {3087346}, MRREVIEWER = {Damian R\~A\P ssler}, } @article {DG, AUTHOR = {Deligne, Pierre and Goncharov, Alexander B.}, TITLE = {Groupes fondamentaux motiviques de {T}ate mixte}, JOURNAL = {Ann. Sci. \'Ecole Norm. Sup. (4)}, FJOURNAL = {Annales Scientifiques de l'\'Ecole Normale Sup\'erieure. Quatri\`eme S\'erie}, VOLUME = {38}, YEAR = {2005}, NUMBER = {1}, PAGES = {1--56}, ISSN = {0012-9593}, CODEN = {ASENAH}, MRCLASS = {11G55 (14F42 14G10 19F27)}, MRNUMBER = {2136480 (2006b:11066)}, MRREVIEWER = {Tam{\'a}s Szamuely}, DOI = {10.1016/j.ansens.2004.11.001}, URL = {http://dx.doi.org/10.1016/j.ansens.2004.11.001}, } @book {DM, AUTHOR = {Deligne, Pierre and Milne, James S. and Ogus, Arthur and Shih, Kuang-yen}, TITLE = {Hodge cycles, motives, and {S}himura varieties}, SERIES = {Lecture Notes in Mathematics}, VOLUME = {900}, PUBLISHER = {Springer-Verlag, Berlin-New York}, YEAR = {1982}, PAGES = {ii+414}, ISBN = {3-540-11174-3}, MRCLASS = {14Kxx (10D25 12A67 14A20 14F30 14K22)}, MRNUMBER = {654325}, } @preamble{ "\def$'${$'$} " } @article {Drinfeld:Gal, AUTHOR = {Drinfel{$'$}d, V. G.}, TITLE = {On quasitriangular quasi-{H}opf algebras and on a group that is closely connected with {${\rm Gal}(\overline{\bf Q}/{\bf Q})$}}, JOURNAL = {Algebra i Analiz}, FJOURNAL = {Algebra i Analiz}, VOLUME = {2}, YEAR = {1990}, NUMBER = {4}, PAGES = {149--181}, ISSN = {0234-0852}, MRCLASS = {16W30 (17B37)}, MRNUMBER = {1080203 (92f:16047)}, MRREVIEWER = {Ivan Penkov}, } @preamble{ "\def$'${$'$} " } @article {Drinfeld:QuasiHopf, AUTHOR = {Drinfel{$'$}d, V. G.}, TITLE = {Quasi-{H}opf algebras}, JOURNAL = {Algebra i Analiz}, FJOURNAL = {Algebra i Analiz}, VOLUME = {1}, YEAR = {1989}, NUMBER = {6}, PAGES = {114--148}, ISSN = {0234-0852}, MRCLASS = {17B37 (16W30 57M25 81T40)}, MRNUMBER = {1047964}, MRREVIEWER = {Ya. S. So{\u\i}bel{$'$}man}, } @article {Ecalle, AUTHOR = {Ecalle, Jean}, TITLE = {A{RI}/{GARI}, la dimorphie et l'arithm\'etique des multiz\^etas: un premier bilan}, JOURNAL = {J. Th\'eor. Nombres Bordeaux}, FJOURNAL = {Journal de Th\'eorie des Nombres de Bordeaux}, VOLUME = {15}, YEAR = {2003}, NUMBER = {2}, PAGES = {411--478}, ISSN = {1246-7405}, MRCLASS = {11M41 (11G55 19F27 33B30)}, MRNUMBER = {2140864}, MRREVIEWER = {Alexey A. Panchishkin}, URL = {http://jtnb.cedram.org/item?id=JTNB_2003__15_2_411_0}, } @article {Enriquez:EllAss, AUTHOR = {Enriquez, Benjamin}, TITLE = {Elliptic associators}, JOURNAL = {Selecta Math. (N.S.)}, FJOURNAL = {Selecta Mathematica. New Series}, VOLUME = {20}, YEAR = {2014}, NUMBER = {2}, PAGES = {491--584}, ISSN = {1022-1824}, MRCLASS = {17B35 (11M32 14H10 16S30 20F36)}, MRNUMBER = {3177926}, DOI = {10.1007/s00029-013-0137-3}, URL = {http://dx.doi.org/10.1007/s00029-013-0137-3}, } @article {Enriquez:Emzv, AUTHOR = {Enriquez, Benjamin}, TITLE = {Analogues elliptiques des nombres multiz\'etas}, JOURNAL = {Bull. Soc. Math. France}, FJOURNAL = {Bulletin de la Soci\'et\'e Math\'ematique de France}, VOLUME = {144}, YEAR = {2016}, NUMBER = {3}, PAGES = {395--427}, ISSN = {0037-9484}, MRCLASS = {11M32 (17B01 17B35 17B40 33E30)}, MRNUMBER = {3558428}, MRREVIEWER = {A. Perelli}, } @article {Eichler, AUTHOR = {Eichler, M.}, TITLE = {Eine {V}erallgemeinerung der {A}belschen {I}ntegrale}, JOURNAL = {Math. Z.}, FJOURNAL = {Mathematische Zeitschrift}, VOLUME = {67}, YEAR = {1957}, PAGES = {267--298}, ISSN = {0025-5874}, MRCLASS = {33.0X}, MRNUMBER = {0089928}, MRREVIEWER = {H. 
Cohn}, } @incollection {Fal, AUTHOR = {Faltings, Gerd}, TITLE = {Mathematics around {K}im's new proof of {S}iegel's theorem}, BOOKTITLE = {Diophantine geometry}, SERIES = {CRM Series}, VOLUME = {4}, PAGES = {173--188}, PUBLISHER = {Ed. Norm., Pisa}, YEAR = {2007}, MRCLASS = {14G99 (11G30 14F20)}, MRNUMBER = {2349654 (2009i:14029)}, } @article {FurStab, AUTHOR = {Furusho, Hidekazu}, TITLE = {The multiple zeta value algebra and the stable derivation algebra}, JOURNAL = {Publ. Res. Inst. Math. Sci.}, FJOURNAL = {Kyoto University. Research Institute for Mathematical Sciences. Publications}, VOLUME = {39}, YEAR = {2003}, NUMBER = {4}, PAGES = {695--720}, ISSN = {0034-5318}, CODEN = {KRMPBV}, MRCLASS = {11M41 (14G32)}, MRNUMBER = {2025460}, URL = {http://projecteuclid.org/euclid.prims/1145476044}, } @incollection {FurMZVGT, AUTHOR = {Furusho, Hidekazu}, TITLE = {Multiple zeta values and {G}rothendieck-{T}eichm\"uller groups}, BOOKTITLE = {Primes and knots}, SERIES = {Contemp. Math.}, VOLUME = {416}, PAGES = {49--82}, PUBLISHER = {Amer. Math. Soc., Providence, RI}, YEAR = {2006}, MRCLASS = {14G32 (11M41)}, MRNUMBER = {2276136}, MRREVIEWER = {Alexey A. Panchishkin}, DOI = {10.1090/conm/416/07887}, URL = {http://dx.doi.org/10.1090/conm/416/07887}, } @article {Fur, AUTHOR = {Furusho, Hidekazu}, TITLE = {Double shuffle relation for associators}, JOURNAL = {Ann. of Math. (2)}, FJOURNAL = {Annals of Mathematics. Second Series}, VOLUME = {174}, YEAR = {2011}, NUMBER = {1}, PAGES = {341--360}, ISSN = {0003-486X}, CODEN = {ANMAAH}, MRCLASS = {14G32 (11G55 11M32 16W60)}, MRNUMBER = {2811601 (2012i:14031)}, MRREVIEWER = {Pierre A. Lochak}, DOI = {10.4007/annals.2011.174.1.9}, URL = {http://dx.doi.org/10.4007/annals.2011.174.1.9}, } @incollection {GKZ, AUTHOR = {Gangl, Herbert and Kaneko, Masanobu and Zagier, Don}, TITLE = {Double zeta values and modular forms}, BOOKTITLE = {Automorphic forms and zeta functions}, PAGES = {71--106}, PUBLISHER = {World Sci. Publ., Hackensack, NJ}, YEAR = {2006}, MRCLASS = {11M41 (11F11)}, MRNUMBER = {2208210 (2006m:11138)}, MRREVIEWER = {Hirofumi Tsumura}, DOI = {10.1142/9789812774415_0004}, URL = {http://dx.doi.org/10.1142/9789812774415_0004}, } @article {GonMod, AUTHOR = {Goncharov, A. B.}, TITLE = {Multiple polylogarithms, cyclotomy and modular complexes}, JOURNAL = {Math. Res. Lett.}, FJOURNAL = {Mathematical Research Letters}, VOLUME = {5}, YEAR = {1998}, NUMBER = {4}, PAGES = {497--516}, ISSN = {1073-2780}, MRCLASS = {11G55 (11F67 11R42 19E20 19F15 19F27)}, MRNUMBER = {1653320 (2000c:11108)}, MRREVIEWER = {Alexey A. Panchishkin}, DOI = {10.4310/MRL.1998.v5.n4.a7}, URL = {http://dx.doi.org/10.4310/MRL.1998.v5.n4.a7}, } @incollection {GonMZV, AUTHOR = {Goncharov, Alexander B.}, TITLE = {Multiple {$\zeta$}-values, {G}alois groups, and geometry of modular varieties}, BOOKTITLE = {European {C}ongress of {M}athematics, {V}ol. {I} ({B}arcelona, 2000)}, SERIES = {Progr. Math.}, VOLUME = {201}, PAGES = {361--392}, PUBLISHER = {Birkh\"auser, Basel}, YEAR = {2001}, MRCLASS = {11G55 (11G40 11M41 14G35 33B30 81Q30)}, MRNUMBER = {1905330}, MRREVIEWER = {Jan Nekov{\'a}{\v{r}}}, } @unpublished{Goncharov:MTM, author = {{Goncharov}, A.~B.}, title = "{Multiple polylogarithms and mixed Tate motives}", note = {arXiv:math/0103059}, year = 2001, } @article {GM, AUTHOR = {Goncharov, A. B. and Manin, Yu. I.}, TITLE = {Multiple {$\zeta$}-motives and moduli spaces {$\overline{\mathscr M}_{0,n}$}}, JOURNAL = {Compos. 
Math.}, FJOURNAL = {Compositio Mathematica}, VOLUME = {140}, YEAR = {2004}, NUMBER = {1}, PAGES = {1--14}, ISSN = {0010-437X}, MRCLASS = {11G55 (11M41 14H10)}, MRNUMBER = {2004120 (2005c:11090)}, MRREVIEWER = {Gilberto Bini}, DOI = {10.1112/S0010437X03000125}, URL = {http://dx.doi.org/10.1112/S0010437X03000125}, } @incollection {Hai, AUTHOR = {Hain, Richard M.}, TITLE = {The geometry of the mixed {H}odge structure on the fundamental group}, BOOKTITLE = {Algebraic geometry, {B}owdoin, 1985 ({B}runswick, {M}aine, 1985)}, SERIES = {Proc. Sympos. Pure Math.}, VOLUME = {46}, PAGES = {247--282}, PUBLISHER = {Amer. Math. Soc., Providence, RI}, YEAR = {1987}, MRCLASS = {14F40 (14C30 32C40 32G20 55P62 58A14)}, MRNUMBER = {927984 (89g:14010)}, MRREVIEWER = {Toshitake Kohno}, } @incollection {HaiPolylog, AUTHOR = {Hain, Richard M.}, TITLE = {Classical polylogarithms}, BOOKTITLE = {Motives ({S}eattle, {WA}, 1991)}, SERIES = {Proc. Sympos. Pure Math.}, VOLUME = {55}, PAGES = {3--42}, PUBLISHER = {Amer. Math. Soc., Providence, RI}, YEAR = {1994}, MRCLASS = {19F99 (11G99 11R70 19D55)}, MRNUMBER = {1265550 (94k:19002)}, MRREVIEWER = {Philippe Blanc}, } @unpublished{Hain:KZB, author = {{Hain}, R.}, title = "{Notes on the Universal Elliptic KZB Equation}", note = {arXiv:1309.0580}, year = 2013, } @incollection {Hain:HodgeDeRham, AUTHOR = {Hain, Richard}, TITLE = {The {H}odge--de {R}ham theory of modular groups}, BOOKTITLE = {Recent advances in {H}odge theory}, SERIES = {London Math. Soc. Lecture Note Ser.}, VOLUME = {427}, PAGES = {422--514}, PUBLISHER = {Cambridge Univ. Press, Cambridge}, YEAR = {2016}, MRCLASS = {14D07 (14Gxx 32S35 58A14)}, MRNUMBER = {3409885}, } @unpublished{HM, author = {{Hain}, R. and {Matsumoto}, M.}, title = "{Universal Mixed Elliptic Motives}", note = {arXiv:1512.03975}, year = 2015 } @article {Ihara:Annals, AUTHOR = {Ihara, Yasutaka}, TITLE = {Profinite braid groups, {G}alois representations and complex multiplications}, JOURNAL = {Ann. of Math. (2)}, FJOURNAL = {Annals of Mathematics. Second Series}, VOLUME = {123}, YEAR = {1986}, NUMBER = {1}, PAGES = {43--106}, ISSN = {0003-486X}, CODEN = {ANMAAH}, MRCLASS = {11G25 (14K22 20E18)}, MRNUMBER = {825839}, MRREVIEWER = {David Goss}, DOI = {10.2307/1971352}, URL = {http://dx.doi.org/10.2307/1971352}, } @incollection {Iha, AUTHOR = {Ihara, Yasutaka}, TITLE = {The {G}alois representation arising from {${\bf P}^1-\{0,1,\infty\}$} and {T}ate twists of even degree}, BOOKTITLE = {Galois groups over {${\bf Q}$} ({B}erkeley, {CA}, 1987)}, SERIES = {Math. Sci. Res. Inst. Publ.}, VOLUME = {16}, PAGES = {299--313}, PUBLISHER = {Springer, New York}, YEAR = {1989}, MRCLASS = {11F80 (11G20 11R23 11R58 14E22 14G25)}, MRNUMBER = {1012169}, MRREVIEWER = {Sheldon Kamienny}, DOI = {10.1007/978-1-4613-9649-9_4}, URL = {http://dx.doi.org/10.1007/978-1-4613-9649-9_4}, } @inproceedings {IhICM, AUTHOR = {Ihara, Yasutaka}, TITLE = {Braids, {G}alois groups, and some arithmetic functions}, BOOKTITLE = {Proceedings of the {I}nternational {C}ongress of {M}athematicians, {V}ol.\ {I}, {II} ({K}yoto, 1990)}, PAGES = {99--120}, PUBLISHER = {Math. Soc. Japan, Tokyo}, YEAR = {1991}, MRCLASS = {11G09 (11R32 14E20 16W30 20F34)}, MRNUMBER = {1159208}, MRREVIEWER = {J. Browkin}, } @incollection {IKY, AUTHOR = {Ihara, Yasutaka and Kaneko, Masanobu and Yukinari, Atsushi}, TITLE = {On some properties of the universal power series for {J}acobi sums}, BOOKTITLE = {Galois representations and arithmetic algebraic geometry ({K}yoto, 1985/{T}okyo, 1986)}, SERIES = {Adv. Stud. 
Pure Math.}, VOLUME = {12}, PAGES = {65--86}, PUBLISHER = {North-Holland, Amsterdam}, YEAR = {1987}, MRCLASS = {11R23 (11S80)}, MRNUMBER = {948237}, MRREVIEWER = {J. Browkin}, } @article {IKZ, AUTHOR = {Ihara, Kentaro and Kaneko, Masanobu and Zagier, Don}, TITLE = {Derivation and double shuffle relations for multiple zeta values}, JOURNAL = {Compos. Math.}, FJOURNAL = {Compositio Mathematica}, VOLUME = {142}, YEAR = {2006}, NUMBER = {2}, PAGES = {307--338}, ISSN = {0010-437X}, MRCLASS = {11M41}, MRNUMBER = {2218898}, MRREVIEWER = {David Bradley}, DOI = {10.1112/S0010437X0500182X}, URL = {http://dx.doi.org/10.1112/S0010437X0500182X}, } @article {IO, AUTHOR = {Ihara, Kentaro and Ochiai, Hiroyuki}, TITLE = {Symmetry on linear relations for multiple zeta values}, JOURNAL = {Nagoya Math. J.}, FJOURNAL = {Nagoya Mathematical Journal}, VOLUME = {189}, YEAR = {2008}, PAGES = {49--62}, ISSN = {0027-7630}, CODEN = {NGMJA2}, MRCLASS = {11M41}, MRNUMBER = {2396583}, MRREVIEWER = {David Bradley}, URL = {http://projecteuclid.org/euclid.nmj/1205156910}, } @book {Kas, AUTHOR = {Kassel, Christian}, TITLE = {Quantum groups}, SERIES = {Graduate Texts in Mathematics}, VOLUME = {155}, PUBLISHER = {Springer-Verlag, New York}, YEAR = {1995}, PAGES = {xii+531}, ISBN = {0-387-94370-6}, MRCLASS = {17B37 (16W30 18D10 20F36 57M25 81R50)}, MRNUMBER = {1321145 (96e:17041)}, MRREVIEWER = {Yu. N. Bespalov}, DOI = {10.1007/978-1-4612-0783-2}, URL = {http://dx.doi.org/10.1007/978-1-4612-0783-2}, } @article {Kim:Annals, AUTHOR = {Kim, Minhyong}, TITLE = {{$p$}-adic {$L$}-functions and {S}elmer varieties associated to elliptic curves with complex multiplication}, JOURNAL = {Ann. of Math. (2)}, FJOURNAL = {Annals of Mathematics. Second Series}, VOLUME = {172}, YEAR = {2010}, NUMBER = {1}, PAGES = {751--759}, ISSN = {0003-486X}, CODEN = {ANMAAH}, MRCLASS = {11G15 (11G05 11G40)}, MRNUMBER = {2680431 (2011i:11089)}, MRREVIEWER = {Francesc C. Castell{\`a}}, DOI = {10.4007/annals.2010.172.751}, URL = {http://dx.doi.org/10.4007/annals.2010.172.751}, } @article {KZ, AUTHOR = {Knizhnik, V. G. and Zamolodchikov, A. B.}, TITLE = {Current algebra and {W}ess-{Z}umino model in two dimensions}, JOURNAL = {Nuclear Phys. B}, FJOURNAL = {Nuclear Physics. B}, VOLUME = {247}, YEAR = {1984}, NUMBER = {1}, PAGES = {83--103}, ISSN = {0550-3213}, CODEN = {NUPBBO}, MRCLASS = {81E13 (81D15)}, MRNUMBER = {853258}, DOI = {10.1016/0550-3213(84)90374-2}, URL = {http://dx.doi.org/10.1016/0550-3213(84)90374-2}, } @book {Lang, AUTHOR = {Lang, Serge}, TITLE = {Introduction to modular forms}, NOTE = {Grundlehren der mathematischen Wissenschaften, No. 222}, PUBLISHER = {Springer-Verlag, Berlin-New York}, YEAR = {1976}, PAGES = {ix+261}, MRCLASS = {10DXX}, MRNUMBER = {0429740}, MRREVIEWER = {Neal Koblitz}, } @article {Landen, AUTHOR = {Landen, John}, TITLE = {Mathematical memoirs respecting a variety of subjects}, JOURNAL = {Nourse}, YEAR = {1780}, } @article {LM, AUTHOR = {Le, Thang Tu Quoc and Murakami, Jun}, TITLE = {Kontsevich's integral for the {K}auffman polynomial}, JOURNAL = {Nagoya Math. J.}, FJOURNAL = {Nagoya Mathematical Journal}, VOLUME = {142}, YEAR = {1996}, PAGES = {39--65}, ISSN = {0027-7630}, CODEN = {NGMJA2}, MRCLASS = {57M25 (11M99)}, MRNUMBER = {1399467 (97d:57009)}, MRREVIEWER = {Sergei K. 
\end{document}
\begin{document} \title{Bounding the $k$-rainbow total domination number}

\begin{abstract} Recently the notion of $k$-rainbow total domination was introduced for a graph $G$, motivated by a desire to reduce the problem of computing the total domination number of the generalized prism $G \, \Box \, K_k$ to an integer labeling problem on $G$. In this paper we further demonstrate the usefulness of the labeling approach, presenting bounds relating the $k$-rainbow total domination number to the total domination number, the $k$-rainbow domination number and the usual domination number, where the latter bound presents a generalization of a result by Goddard and Henning (2018). We establish Vizing-like results for rainbow domination and rainbow total domination. By stating a Vizing-like conjecture for rainbow total domination we present a different viewpoint on Vizing's original conjecture in the case of bipartite graphs. \end{abstract}

\noindent {\bf Keywords:} graph theory, domination, total domination, rainbow domination, Vizing's Conjecture

\section{Introduction and preliminaries} \newcommand{\new}[1]{{\textcolor{red}{#1}}}

Domination is a topic in graph theory with extensive research activity. Already in 1998, a classic book \cite{fund-1998} by Haynes et al. surveyed over 1200 papers. It is not surprising that this number is increasing rapidly, as domination presents one of the most applicable branches of graph theory. For example, a graph can be used to model locations which exchange some resource along its edges. In such an application, ordinary domination is an optimization problem to determine the minimum number of locations necessary in order for each location to contain the resource or be adjacent to a location containing the resource. However, there are situations where certain additional requirements must be fulfilled, and thus numerous variations of domination, motivated by real-life as well as theoretical applications, have developed over time. In this paper we study the recently introduced concept of $k$-rainbow total domination, compare it with known domination parameters, and show that it leads to an interesting viewpoint on the famous Vizing conjecture.

We begin with some general definitions and notation for graphs, followed by various definitions of domination in graphs. Graphs considered in this paper are finite, simple and undirected. For a graph $G$, we let $V(G)$ denote its set of vertices and $E(G)$ denote its set of edges. For a graph $G$, let $N_G(v)$ be the open neighborhood of vertex $v$ in graph $G$, that is, the vertices in $G$ which are adjacent to $v$. When $G$ is apparent from context, we may just write $N(v)$. The closed neighborhood $N_G[v]$ is $N_G(v)\cup\{v\}$. For graphs $G$ and $H$, the Cartesian product $G\, \Box \, H$ is the graph with vertex set $V(G)\times V(H)$. Vertices $(g, h)$ and $(g',h')$ are adjacent in $G\, \Box \, H$ if and only if either $g = g'$ and $hh'\in E(H)$, or $h = h'$ and $gg'\in E(G)$. For an integer $k$, we let $[k]$ refer to the set $\{1,2,\ldots,k\}$, and sometimes refer to the elements of $[k]$ as colors. We denote the power set of $[k]$ by $2^{[k]}$.

A dominating set of a graph $G$ is a subset $D$ of $V(G)$ such that every vertex not in $D$ is adjacent to some vertex in $D$. The \textit{domination number}, $\gamma(G)$, is the minimum cardinality of a dominating set of $G$.
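For concreteness (and only as an illustration, not something used in the proofs below), these two notions can be checked mechanically on small graphs. The following brute-force sketch is written in Python; the graph encoding and the helper names are our own.

\begin{verbatim}
# Brute-force computation of the domination number of a small graph
# (illustrative sketch only, not part of the paper's results).
# Vertices are 0..n-1 and adj[v] lists the neighbours of v.
from itertools import combinations

def is_dominating(adj, D):
    # every vertex is in D or adjacent to a vertex in D
    return all(v in D or any(u in D for u in adj[v])
               for v in range(len(adj)))

def domination_number(adj):
    n = len(adj)
    for size in range(1, n + 1):              # smallest size first
        for D in combinations(range(n), size):
            if is_dominating(adj, set(D)):
                return size
    return n

C4 = [[1, 3], [0, 2], [1, 3], [0, 2]]         # the 4-cycle
print(domination_number(C4))                  # prints 2
\end{verbatim}

On the $4$-cycle the sketch returns $2$: a single vertex leaves its antipodal vertex undominated, while any two vertices dominate all four.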
A set $D\subseteq V(G)$ such that every vertex of $G$ is adjacent to a vertex in $D$ is called a \textit{total dominating set} of $G$, and the minimum cardinality of a total dominating set of $G$ is the \textit{total domination number}, $\gamma_t(G)$ \cite{HenSur09}.

One of the well studied domination invariants is $k$-rainbow domination, introduced by Bre\v{s}ar, Henning and Rall~\cite{bhr-2008}. For a positive integer $k$, a \textit{$k$-rainbow dominating function} (or $k$RDF, for short) of a graph $G$ is a function $f : V(G) \rightarrow 2^{[k]}$ such that for any vertex $v$ with $f(v)=\emptyset$ we have $\bigcup_{u\in N(v)} f(u) = [k]$. Let $||f||=\sum_{v\in V(G)}|f(v)|$; we refer to $||f||$ as the \emph{weight} of $f$. The \textit{$k$-rainbow domination number}, $\gamma_{{\rm r}k}(G)$, of a graph $G$ is the minimum value of $||f||$ over all $k$-rainbow dominating functions of $G$. A $k$RDF of weight $\gamma_{{\rm r}k}(G)$ is called a $\gamma_{{\rm r}k}$-function. For example, $\gamma_{{\rm r}2}(C_4) = 2$, since an optimal choice is to assign $\{1\}$ to one vertex $v$ and assign $\{ 2 \}$ to the vertex not adjacent to $v$ in $C_4$. It was observed in \cite{bhr-2008} that for all $k\geq 1$, $$\gamma_{{\rm r}k}(G)=\gamma (G \, \Box \, K_k).$$ Since the seminal paper \cite{bhr-2008} introduced this invariant, it has been studied extensively, for example: its algorithmic properties \cite{chang10,greedy}, relevant graph operations and families \cite{btks-2007,TKSRT-lex}, and general properties \cite{philip,filo,wx-2010}.

A \textit{$k$-rainbow total dominating function} $f$ of a graph $G$ (a $k$RTDF for short) was introduced in \cite{San19} as a $k$-rainbow dominating function satisfying the additional condition that for every $v\in V(G)$ such that $f(v)=\{i\}$ for some $i\in [k]$, there exists some $u\in N(v)$ such that $i\in f(u)$. The weight of a $k$RTDF is as in the case of a $k$RDF, $||f||=\sum_{v\in V(G)}|f(v)|$. The minimum weight of a $k$RTDF is called the {\em $k$-rainbow total domination number of $G$}, $\gamma_{k{\rm rt}}(G)$. A $k$RTDF of weight $\gamma_{k{\rm rt}}(G)$ is called a \textit{$\gamma_{k{\rm rt}}$-function}. For example, $\gamma_{2{\rm rt}}(C_4) = 4$, since an optimal choice is to pick two vertices and assign each the label $\{1,2\}$; another optimal choice simply assigns $\{ 1 \}$ to every vertex. The definition of the $k$-rainbow total domination number is motivated by wanting to understand total domination in the generalized prism; in \cite{San19} it was observed that for all $k \ge 1$, \begin{equation} \label{cpt} \gamma_{k{\rm rt}}(G)=\gamma_t(G \, \Box \, K_k). \end{equation}

The main point of this paper is to compare the $k$-rainbow total domination number to other notions of domination, in particular to usual domination (i.e.~$\gamma$), to total domination (i.e.~$\gamma_t$), and to $k$-rainbow domination (i.e.~$\gamma_{{\rm r}k}$). In Section~\ref{sec_itself} we calculate some domination numbers of the complete bipartite graph $K_{a,b}$ and a variant of $K_{a,b}$. The reason for these calculations is mostly to show the tightness of bounds in subsequent sections. In Section~\ref{sec_k_total} we bound the $k$-rainbow total domination number in terms of the other kinds of domination numbers. In Section~\ref{sec_dom} we obtain a lower bound on the $k$-rainbow total domination number in terms of the usual domination number, a generalization of the Goddard and Henning~\cite{GodHen18} result which applied to the case of $k = 2$.
We also investigate the tightness of this lower bound and consider what happens when the graph is bipartite. In Section~\ref{sec_vizing} we consider Vizing's 1968 conjecture that for any graphs $G$ and $H$, \begin{equation} \gamma(G) \, \cdot \, \gamma(H) \le \gamma(G\,\square\, H). \end{equation} Variations on the Vizing conjecture have been considered by other authors, for example, Ho~\cite{Ho08} in the case of total domination, and Bre\v{s}ar et al.~\cite{viz} and Pilipczuk et al.~\cite{philip} in the case of $k$-rainbow domination. We investigate what happens in the case of $k$-rainbow total domination, proving a simple Vizing-like result, and discussing a stronger Vizing-like conjecture, which in the case of bipartite graphs coincides with the original one.

\section{Special graph classes} \label{sec_itself}

In this section we compute some domination invariants for $K_{a,b}$ and a related graph we call $K^+_{a,b}$. Some of the results of this section are interesting in their own right; however, they will mainly be used in later sections to demonstrate that certain bounds are tight. In many calculations we use an alternative way to measure the weight of a $k$-rainbow dominating function $f$: $$||f|| = \sum_{S \subseteq [k]} |S|\,|V_S|,$$ where $V_S=\{x \in V(G)\,:\,f(x)=S\}$. If a vertex $v$ is assigned a set $S$ by the function $f$, we may refer to $S$ as the \emph{label} of $v$, and when clear from context we may abbreviate labels by removing the set braces, e.g.\ we would typically abbreviate the label $\{1,2\}$ by just $12$. A vertex $v$ with $f(v)=\emptyset$ will be referred to as an \emph{empty vertex}.

For integers $a$ and $b$ such that $a,b \ge 1$, let $K_{a,b}$ be the complete bipartite graph where one of the partite sets, say $A$, consists of vertices $x_1,x_2,\ldots, x_a$, and $B$ is the other partite set. We define the graph $K^+_{a,b}$ to be $K_{a,b}$ with new vertices $y_1,y_2,\ldots, y_a$ and new edges $x_iy_i$ for every $i\in [a]$; see Figure~\ref{Kab+}. \begin{figure} \caption{The graph $K^+_{a,b}$.} \label{Kab+} \end{figure} \begin{lemma} \label{thm_Kplus} $\gamma(K^+_{a,b}) = a$ and $\gamma_{k{\rm rt}}(K^+_{a,b}) = 2a$ for $a,b\ge 1$ and $2 \le k \le a$. \end{lemma} \begin{proof} The set $A$ is clearly a dominating set of $K^+_{a,b}$, thus $\gamma(K^+_{a,b}) \leq a$. On the other hand, every dominating set of $K^+_{a,b}$ must contain either $y_i$ or $x_i$ for every $i\in [a]$, thus $\gamma(K^+_{a,b}) \geq a$, which proves the first equality. Now let $f:V(K^+_{a,b})\rightarrow 2^{[k]}$ be a function defined by $f(v)=\emptyset$ for every $v\in B$, and for every $i \in [a]$, $f(x_i)=f(y_i)=\{j\}$ for some $j\in [k]$, so that for every $j\in [k]$ there exists $i$ with $f(x_i)=\{j\}$. Clearly $f$ is a $k$RTDF, thus $\gamma_{k{\rm rt}}(K^+_{a,b})\leq 2a$. Finally, note that $\gamma_{k{\rm rt}}(K^+_{a,b})$ cannot be less than $2a$, since each pair of vertices $x_i,y_i$ contributes at least $2$ to the weight of any $k$RTDF of $K^+_{a,b}$. \end{proof} \begin{lemma} \label{lem-Kab-2k} Suppose $a, b,$ and $k$ are positive integers, $k\geq 1$ and $a\leq b$. If $a + b \le k$, then $\gamma_{{\rm r}k}(K_{a,b}) = a + b$.
If $a + b > k$, then we have $$\gamma_{{\rm r}k}(K_{a,b}) = \begin{cases} 2k, & \hbox{if $a \gammae 2k$} \\ \hbox{$\max\{a,k\}$,} & \hbox{if $a < 2k$.} \\ \end{cases} $$ \end{lemma} \begin{proof} Let $\{A,B\}$ be the bipartition of $V(K_{a, b})$ with $A=\{x_1,x_2, \ldots, x_a\}$ and $B=\{y_1,y_2, \ldots, y_b\}$, where all the edges are between $A$ and $B$. We consider the following cases. \begin{itemize} \item If $a + b \le k$, the assertion holds by a simple property for a general graph $G$, that if $|V(G)|\leq k$, then $\gamma_{{\rm r}k}(G)=|V(G)|$. \item Let $a + b > k$, $a \gammae 2k$ and let $f:V(K_{a, b})\gamma_{{\rm ri}2}ghtarrow 2^{[k]}$ be defined with $f(x_1)=f(y_1)=[k]$ and $f(v)=\emptyset$ for every $v\in V(K_{a, b})\subseteqtminus \{x_1,y_1\}$. Since $f$ is clearly a $k$RDF we have $\gamma_{{\rm r}k}(K_{a,b})\leq 2k$. Now let $g$ be an arbitrary $\gamma_{{\rm r}k}$-function of $K_{a, b}$. If both $A$ and $B$ contain an empty vertex, or neither of them contains such a vertex, we clearly have $||g||\gammaeq 2k$. If exactly one of $A$ or $B$ contains an empty vertex, then $||g|| \gammae a \gammae 2k$, so $\gamma_{{\rm r}k}(K_{a,b})\gammaeq 2k$. \item Let $a + b > k$ and $a < 2k$. Then obviously $\gamma_{{\rm r}k}(K_{a, b}) \gammae k$. If $a\leq k$, $f:V(K_{a, b})\gamma_{{\rm ri}2}ghtarrow 2^{[k]}$ defined with $f(x_i)=\{i\}$ for $i\in [a-1]$, $f(x_a)=\{a,a+1,\ldots,k\}$ and $f(v)=\emptyset$ for every $v\in B$, is a $k$RDF. Therefore in this case we infer $\gamma_{{\rm r}k}(K_{a,b})=k$. If $a>k$, then $f:V(K_{a, b})\gamma_{{\rm ri}2}ghtarrow 2^{[k]}$ defined with $f(x_i)=\{i\}$ for every $i\in [k]$, $f(x_i)=\{1\}$ for every $i\in \{k+1,k+2,\ldots,a\}$, and $f(v)=\emptyset$ for every $v\in B$, is a $k$RDF, thus $\gamma_{{\rm r}k}(K_{a, b}) \leq a$. Assume for contradiction that $\gamma_{{\rm r}k}(K_{a, b}) < a$, and let $g$ be an arbitrary $\gamma_{{\rm r}k}$-function. Then there is an empty vertex in $A$ and in $B$ since $b\gammaeq a>k$. Thus $||g||\gammaeq 2k$, contradicting $||g|| < a < 2k$. Therefore $\gamma_{{\rm r}k}(K_{a,b})=a$ if $a>k$. \end{itemize} \end{proof} \begin{proposition} \label{thm3} Suppose $a, b,$ and $k$ are positive integers, $k\gammaeq 2$ and $a\leq b$. If $a + b \le k$, then $\gamma_{k{\rm rt}}(K_{a,b}) = a + b$. If $a + b > k$, then we have $$\gamma_{k{\rm rt}}(K_{a,b}) = \begin{cases} k, & \hbox{if $a \le \bigl \lfloor \frac{k}{2} \bigr \rfloor$} \\ a+\bigl \lceil \frac{k+1}{2}\bigr \rceil, & \hbox{if $\bigl \lfloor \frac{k}{2} \bigr \rfloor < a < \bigl \lceil \frac{3k-2}{2} \rceil$} \\ 2k, & \hbox{if $a \gammae \bigl \lceil \frac{3k-2}{2} \bigr \rceil$.} \\ \end{cases} $$ \end{proposition} \begin{proof} If $a + b \le k$ the assertion is immediate (see Observation 4 in \cite{San19}). Now let $a + b > k$ and let $\{A,B\}$ be the bipartition of $V(K_{a, b})$ with $A=\{x_1,x_2, \ldots, x_a\}$ and $B=\{y_1,y_2, \ldots, y_b\}$, where $a\leq b$, and all the edges are between $A$ and $B$. First we consider the case when $a \le \bigl \lfloor \frac{k}{2} \bigr \rfloor$. Since $a + b > k$, we clearly have $\gamma_{k{\rm rt}}(K_{a, b})\gammaeq k$. Let $f:V(K_{a, b})\gamma_{{\rm ri}2}ghtarrow 2^{[k]}$ be defined with $f(x_i)=\{2i-1,2i\}$ for every $i\in [a-1]$, $f(x_a)=\{2a-1,2a,\ldots, k\}$, and $f(y_i)=\emptyset$ for every $i\in [b]$. Since $f$ is a $k$RTDF and $||f||=k$ we also have $\gamma_{k{\rm rt}}(K_{a,b})\leq k$. Thus $\gamma_{k{\rm rt}}(K_{a,b})= k$ if $a \le \bigl \lfloor \frac{k}{2} \bigr \rfloor$. 
\emph{For the remainder of the proof we assume that $a \gammae \bigl \lfloor \frac{k}{2} \bigr \rfloor + 1 = \bigl \lceil \frac{k+1}{2} \bigr \rceil$.} Before we consider the remaining two cases for the value of $a$, we prove the following three claims. \textit{Claim 1. \ $\gamma_{k{\rm rt}}(K_{a,b})\leq 2k$}. Let $f:V(K_{a, b})\gamma_{{\rm ri}2}ghtarrow 2^{[k]}$ be defined with $f(x_1)=f(y_1)=[k]$ and $f(v)=\emptyset$ for every $v \in V(K_{a,b})\subseteqtminus \{x_1,y_1\}$. It is straightforward to see that $f$ is a $k$RTDF, proving Claim 1. \textit{Claim 2. \ $\gamma_{k{\rm rt}}(K_{a,b})\leq a+ \bigl \lceil \frac{k+1}{2} \bigr \rceil$}. Let $f:V(K_{a, b})\gamma_{{\rm ri}2}ghtarrow 2^{[k]}$ be defined with $f(x_i)=\{2i-1,2i\}$ if $i\in \{1,2,\ldots, \bigl \lfloor \frac{k}{2} \bigr \rfloor\}$, $f(x_i)=\{k\}$ if $i\in \{ \bigl \lfloor \frac{k}{2} \bigr \rfloor +1, \ldots, a\}$, $f(y_1)=\{k\}$ and $f(y_i)=\emptyset$ for $i \in \{2,3,\ldots,b\}$. It is easy to see that $f$ is a $k$RTDF. Thus $$ \gamma_{k{\rm rt}}(K_{a, b}) \le ||f|| = 2\cdot \Bigl\lfloor \frac{k}{2} \Bigr\rfloor+ 1 \cdot \left(a-\Bigl\lfloor \frac{k}{2} \Bigr\rfloor \gamma_{{\rm ri}2}ght)+1= a + \Bigl \lceil \frac{k+1}{2} \Bigr \rceil, $$ proving Claim 2. \textit{Claim 3. \ If $f$ is a $k$RTDF on $K_{a,b}$ such that all of $A$ consists of non-empty vertices or all of $B$ consists of non-empty vertices, then $||f|| \gammae a+ \bigl \lceil \frac{k+1}{2} \bigr \rceil$. } Let $f$ be a $\gamma_{k{\rm rt}}$-function on $K_{a,b}$ such that $|f(x_i)|\gammaeq 1$ for every $i\in [a]$; the argument is similar if all the vertices in $B$ are non-empty. Suppose that in $A$ there are exactly $s$ vertices which are singleton sets (i.e. suppose $|f(x_1)| = \cdots = |f(x_s)| = 1$ and $|f(x_i)| \gammae 2$ for $s < i \le a$). Without loss of generality, assume that the colors appearing among the singletons are $1,2, \ldots, r$, for some $r \le k$. If $s = 0$, then $||f|| \gammae 2a \gammae a+ \bigl \lceil \frac{k+1}{2} \bigr \rceil$, so we are done. Thus we assume that $s \gammae 1$; also recall that $f$ is a minimum weight $k$RTDF. In $B$ we must have exactly one occurrence of each of the colors $1,2, \ldots, r$, and no other colors. Among the labels in $A$ which are not singletons we can assume that collectively they include each color from $\{r+1, \ldots, k \}$ exactly once, and contain no other colors. To see why we can make that assumption, consider a vertex in $A$ with a non-singleton label $L$, such that $x, y \in L$ where $x$ and $y$ are distinct and $y$ appears on some other vertex in $A$. We can remove $y$ from $L$ and add the color $x$ to any label in $B$ to arrive at a new $\gamma_{k{\rm rt}}$-function. So we will assume that in $A$, $f$ consists of the $s$ singletons, and then the rest of $A$ contains each color from $\{r+1, \ldots, k \}$ exactly once. Thus $||f|| = s + k$, since we have the $s$ singletons in $A$, the colors $1,2, \ldots,r$ appearing exactly once in $B$, and the colors $r+1, \ldots, k$ appearing exactly once in $A$. We have at least one singleton, so $r \gammae 1$, and thus the non-singletons in $A$ use at most $\bigl \lfloor \frac{k-1}{2} \bigr \rfloor$ vertices of $A$, thus $s \gammae a - \bigl \lfloor \frac{k-1}{2} \bigr \rfloor$. We can compute as follows $$||f|| = s + k \gammae a - \Bigl \lfloor \frac{k-1}{2} \Bigr \rfloor + k = a + \Bigl \lceil \frac{k+1}{2} \Bigr \rceil, $$ completing the proof of Claim 3. We turn back to the proof of the theorem, considering the last two cases for the value of $a$. 
By Claims 1 and 2, we have the upper bounds on $\gamma_{k{\rm rt}}(K_{a, b})$ in both cases. For the lower bounds, we first observe the following lower bounds on a $k$RTDF $g$ depending on how it labels $A$ and $B$ (the first three lower bounds are simple observations, and the fourth is just a statement of Claim 3). \begin{enumerate} \item If $g$ has empty vertices in both $A$ and $B$, then $||g|| \ge 2k$. \item If $g$ has empty vertices in all of $A$ or all of $B$, then $||g|| \ge 2a$. \item If $g$ has no empty vertices, then $||g|| \ge 2a$. \item If $g$ has no empty vertices in $A$ or no empty vertices in $B$, then $||g|| \ge a+\bigl \lceil \frac{k+1}{2} \bigr \rceil$. \end{enumerate} Notice that the four conditions above are not mutually exclusive, but they do cover all possible labellings of $A$ and $B$; thus to show a lower bound, it suffices to cover the above cases. If $\bigl \lfloor \frac{k}{2} \bigr \rfloor < a < \bigl \lceil \frac{3k-2}{2} \bigr \rceil$, then we refer to the above four conditions and note that $2a = a + a \ge a +\bigl \lceil \frac{k+1}{2} \bigr \rceil$, thus covering Conditions 2 and 3. Condition 4 already matches this case. To deal with Condition 1, we note its impossibility for a $\gamma_{k{\rm rt}}$-function $g$, because (using Claim 2) $|| g || \le a + \bigl \lceil \frac{k+1}{2} \bigr \rceil < \bigl \lceil \frac{3k-2}{2} \bigr \rceil + \bigl \lceil \frac{k+1}{2} \bigr \rceil = 2k$. If $a \geq \bigl \lceil \frac{3k-2}{2} \bigr \rceil$, then we refer to the above four conditions and note that the following calculations suffice in order to cover all the conditions (recall that $k \ge 2$): $$2a \ge 2 \cdot \Bigl \lceil \frac{3k-2}{2} \Bigr \rceil \geq 2k \ \ \hbox{ and } \ \ a + \Bigl \lceil \frac{k+1}{2} \Bigr \rceil \geq \Bigl \lceil \frac{3k-2}{2} \Bigr \rceil + \Bigl \lceil \frac{k+1}{2} \Bigr \rceil =2k.$$ \end{proof}

\section{Bounds relating to $k$-rainbow total domination} \label{sec_k_total}

In this section we study bounds on the $k$-rainbow total domination number in terms of other domination numbers. Shao et al.~\cite{shao14} proved that $\gamma_{{\rm r}k'}(G) \leq \gamma_{{\rm r}k}(G)+(k'-k)\lfloor \frac{\gamma_{{\rm r}k}(G)}{k} \rfloor$ and consequently $\gamma_{{\rm r}k'}(G) \leq \frac{k'}{k}\gamma_{{\rm r}k}(G)$, for a connected graph $G$ and positive integers $k$ and $k'$ such that $k'\geq k$ (see Theorem 1 and Corollary 1 in \cite{shao14}). The same proof shows that these relations also hold for $k$-rainbow total domination. For completeness, we briefly recall their proof. Let $f$ be a $\gamma_{k{\rm rt}}$-function of $G$ and let $a_i$ be the number of vertices $u$ for which $i\in f(u)$. We may without loss of generality assume that $a_1\geq a_2\geq \cdots \geq a_k$. Since $\gamma_{k{\rm rt}}(G)=a_1+a_2+ \cdots + a_k$, we conclude that $ka_k\leq \gamma_{k{\rm rt}}(G)$. Now, by adding the colors $k+1,k+2,\ldots,k'$ to the label of any vertex $u$ such that $k \in f(u)$, we clearly get a $k'$RTDF whose weight is $\gamma_{k{\rm rt}}(G)+(k'-k) a_k$, therefore $\gamma_{k'{\rm rt}}(G) \leq \gamma_{k{\rm rt}}(G)+(k'-k) \frac{\gamma_{k{\rm rt}}(G)}{k}=\frac{k'}{k}\gamma_{k{\rm rt}}(G)$. Summarizing this result and adding an observation, we conclude the following. \begin{proposition} \label{prop_frac_bounds} Let $k$ and $k'$ be integers such that $1 \le k \le k'$.
Then the following bounds hold: $$\gammaamma_{k'rt}(G) \le \dfrac{k'}{k} \gammaamma_{krt}(G) \quad \textit{ and } \quad \gammaamma_{rk'}(G) \le \dfrac{k'}{k} \gammaamma_{rk}(G).$$ Furthermore, the bounds are tight when $G$ is $K_{a,b}$ with $a,b \gammae 2k'$. \end{proposition} \begin{proof} The upper bounds follow from the Shao et al. proof sketched above. To prove the tightness of the bounds we provide a family of graphs for which the equalities are attained. In both cases, we take the graphs $G$ to be $K_{a,b}$, where $a,b \gammae 2k'$. The tightness follows from Lemma~\ref{lem-Kab-2k} and Proposition~\ref{thm3}, which imply $\gammaamma_{rk}(G) = \gammaamma_{krt}(G) = 2k$ and $\gammaamma_{rk'}(G) = \gammaamma_{k'rt}(G) = 2k'$. \end{proof} Bringing the above results together with another observation, we can give a full description of how the rainbow domination numbers compare when the parameter is increased by 1. \begin{theorem} \label{thm_k_to_kminus} For any graph $G$, and integer $k$ such that $k \gammae 2$, we have the following: \begin{enumerate} \item $\gammaamma_{(k-1) {\rm rt}}(G) \le \gamma_{k{\rm rt}}(G)\le \frac{k}{k-1}\gammaamma_{(k-1) {\rm rt}}(G)$, \item $\gammaamma_{{\rm r}(k-1)}(G) \le \gamma_{{\rm r}k}(G) \le \frac{k}{k-1}\gammaamma_{{\rm r}(k-1)}(G)$. \end{enumerate} Moreover, both upper bounds are tight for $K_{a,b}$ where $a,b\gammaeq 2k$, and the first lower bound is tight for $k \gammae 3$, when the graph is $K^+_{k,b}$ and $b \gammae 1$. \end{theorem} \begin{proof} The upper bounds and their tightness follow from Proposition~\ref{prop_frac_bounds}. Now we discuss the lower bounds. We prove that $\gamma_{k{\rm rt}}(G) \gammaeq \gammaamma_{(k-1) {\rm rt}}(G)$. Let $f$ be a $\gamma_{k{\rm rt}}$-function on $G$. We use $f$ to define $g:V(G)\gamma_{{\rm ri}2}ghtarrow2^{[k-1]}$ by letting $g(v) = f(v)$ as long as $k \not \in f(v)$; if $k \in f(v)$ then $$g(v) = \begin{cases} (f(v) \subseteqtminus \{ k \}) \cup \{ 1 \}, & \hbox{if $k-1 \in f(v)$} \\ (f(v) \subseteqtminus \{ k \}) \cup \{ k-1 \}, & \hbox{otherwise.} \end{cases} $$ If $g(v)=\emptyset$, clearly all the colors from $[k-1]$ appear in the neighborhood of $v$. If $g(v)=\{i\}$ and $i \in [k-2]$, then by the definition of $g$ there exists a neighbor $u$ of $v$ such that $i\in g(u)$. If $g(v)=\{k-1\}$, then we have one of the following situations: either $f(v)=\{k-1\}$, so there exists $u\in N_G(v)$ such that $k-1\in f(u)$ and therefore also $k-1\in g(u)$, or $f(v)=\{k\}$ so there exists $u\in N_G(v)$ such that $k\in f(u)$, and thus $k-1\in g(u)$ again. Thus $g$ is a $(k-1)$RTDF and we have $\gammaamma_{(k-1) {\rm rt}}(G) \le ||g||\leq ||f|| =\gamma_{k{\rm rt}}(G)$, which shows the first upper bound. To prove the second upper bound, that $\gammaamma_{{\rm r}(k-1)}(G) \le \gamma_{{\rm r}k}(G)$, is even simpler, since by taking a $\gamma_{{\rm r}k}$-function $f$ on $G$ we can turn it to a $(k-1)$RDF simply by replacing every occurrence of the element $k$ by $k-1$. Now we discuss the tightness of the lower bounds. For the first inequality, when $k \gammae 3$, we obtain $\gammaamma_{(k-1) {\rm rt}}(G) = \gamma_{k{\rm rt}}(G)$ when $G$ is $K^+_{k,b}$ (recall Lemma~\ref{thm_Kplus}). When $k = 2$ the equality states that $\gammaamma_t(G)=\gamma_{2{\rm rt}}(G)$; there exist graphs satisfying the equality, which were in fact characterized in Theorem~$3$ of \cite{LuHou}. 
For the second inequality, when $k\gammaeq 3$ we take integers $a, b$ such that $k < a < 2k \le b$, so by Lemma~\ref{lem-Kab-2k} $\gammaamma_{{\rm r}(k-1)}(K_{a,b}) = \gamma_{{\rm r}k}(K_{a,b})=a$. When $k = 2$, we are interested in graphs $G$ that satisfy $\gammaamma(G)=\gammaamma_{r2}(G)$. Such graphs exist and were characterized by Hartnell and Rall, see Theorem~$4$ in~\cite{hart04}. \end{proof} The following simple corollary will lead to an interesting question. \begin{cor} \label{thm_rainbow_to_total} For any graph $G$ without isolated vertices, $\gammaamma_t(G) \le \gammaamma_{krt}(G) \le k \gammaamma_t(G)$, and the upper bound is tight. \end{cor} \begin{proof} Recall that if $G$ does not contain isolated vertices, then $\gammaamma_{1rt}(G)=\gammaamma_t(G)$. Therefore Theorem~\ref{thm_k_to_kminus} implies the lower bound. It is easy to observe the upper bound by assigning the entire set $[k]$ to each vertex in a total dominating set of $G$. The upper bound is tight when $G$ is $K_{a,b}$ ($a, b \gammae 2k$), since $\gammaamma_t(G) = 2$ is immediate, while $\gammaamma_{krt}(G) = 2k$ follows from Proposition~\ref{thm3}. \end{proof} \noindent Notice that in the last corollary there is no claim about the tightness of the lower bound. As we have already mentioned in the proof of Theorem~\ref{thm_k_to_kminus}, the lower bound in the case when $k=2$ can be attained and all such graphs were characterized in \cite{LuHou}. For $k \gammae 3$, the lower bound of Corollary~\ref{thm_rainbow_to_total} does not appear to be tight, which leads to the following question (where we conjecture that $b(k) > 1$). \begin{question} Find a function $b(k)$ such that for $k \gammae 3$, the following bound is true and tight for connected graphs $G$: $$b(k) \cdot \gammaamma_t(G) \le \gammaamma_{krt}(G).$$ \end{question} \noindent Note that $\gammaamma_t(C_n) \gammaeq \frac{n}{2}$ and $\gammaamma_{krt}(C_n) \le n$, so $b(k) \le 2$. In the next proposition and question we consider how the $k$-rainbow domination number compares to the $k$-rainbow total domination number. \begin{proposition} \label{rk-krt} For any graph $G$ and any integer $k$ such that $k \gammae 2$ the following holds: $$\gamma_{{\rm r}k}(G) \le \gamma_{k{\rm rt}}(G) \le 2\gamma_{{\rm r}k}(G).$$ Furthermore, the lower bound is tight for every $k$, and the upper bound is tight only for $k = 2$, that is, for $k \gammae 3$, $\gamma_{k{\rm rt}}(G) < 2\gamma_{{\rm r}k}(G)$. \end{proposition} \begin{proof} As we have already explained, the lower bound follows by definitions of both invariants. To see that the upper bound holds, let $f$ be a $\gamma_{{\rm r}k}$-function on $G$. Then a $k$-rainbow total dominating function $g$ can be constructed from $f$ by adding an element to every singleton label of $f$, so $g$ has no singleton labels. Then $\gamma_{k{\rm rt}}(G)\leq ||g||\leq 2||f||=2\gamma_{{\rm r}k}(G)$. The tightness of the lower bound holds when $G$ is $K_{a,b}$ for $b \gammae a \gammae 2k$, since then we have $\gamma_{{\rm r}k}(G) = \gamma_{k{\rm rt}}(G) = 2k$, using Lemma~\ref{lem-Kab-2k} and Proposition~\ref{thm3}. The tightness of the upper bound, when $k=2$, follows by taking $G$ to be $K_{2,b}$, where $b \gammae 2$, since by Proposition~\ref{thm3} and Lemma~\ref{lem-Kab-2k} we get $\gammaamma_{2rt}(G) = 4 = 2\gammaamma_{r2}(G)$. Now we show that the upper bound is strict, when $k \gammae 3$, i.e.~$\gammaamma_{krt}(G) < 2\gammaamma_{rk}(G)$ if $k \gammae 3$. 
Let $f$ be a $\gamma_{{\rm r}k}$-function on $G$, i.e.~$\gamma_{{\rm r}k}(G)=||f||=|V_1| + \cdots + |V_k| + \sum_{S \subseteq [k], |S| \ge 2}|S|\cdot|V_S|$. As pointed out above, from $f$ we can construct a $k$RTDF $g$ by adding one label for each singleton, so the weight of $g$ is at most $2|V_1| + \cdots + 2|V_k| + \sum_{S \subseteq [k], |S| \ge 2}|S|\cdot|V_S|$. If there exists at least one vertex $v$ with $|f(v)|\geq 2$, we derive $$2\gamma_{{\rm r}k}(G)=2||f||>2|V_1| + \cdots + 2|V_k| + \sum_{S \subseteq [k], |S| \ge 2}|S|\cdot|V_S|\geq ||g||\geq \gamma_{k{\rm rt}}(G),$$ and we are done. Thus now assume that $f$ assigns only singletons and empty sets to vertices of $G$. We can assume that $f$ has at least one empty vertex, since otherwise we could change all the labels to $\{ 1 \}$ to obtain a $k$RTDF of the same weight, and we would be done. Let $f_1:V(G)\rightarrow 2^{[k]}$ be defined from $f$ as follows: $$f_1(v) = \begin{cases} \{ 1 \}, & \hbox{if $f(v) = \{ k -1 \}$} \\ \{ k-1, k \}, & \hbox{if $f(v) = \{ k \}$} \\ f(v), & \hbox{otherwise.} \\ \end{cases} $$ Now construct $f_2$ from $f_1$ according to the following: \begin{itemize} \item Let $v$ be a vertex such that $f(v)=f_1(v)=\emptyset$, and define $f_2(v)=\{1\}$. Note that since $f$ is a $k$RDF there exist $x\in V_1$ and $y\in V_{k-1}$ such that $v$ is adjacent to both of them. Recall that $f_1(x)=f_1(y)=f_2(v)=\{1\}$. \item For every non-empty vertex except $x, y,$ and $v$, add an element to its label, so that the label is no longer a singleton. \end{itemize} It is easy to see that $f_2$ is a $k$RTDF, thus we derive $$\gamma_{k{\rm rt}}(G)\leq ||f_2|| \leq 2|V_1| + 2|V_2| + \cdots + 2|V_k| -2 +1 = 2||f||-1<2\gamma_{{\rm r}k}(G),$$ which concludes the proof. \end{proof} \noindent The strict inequality in Proposition~\ref{rk-krt} leads to the next question. \begin{question} Find a function $a(k)$ such that for every $k$ we have the tight bound $$\gamma_{k{\rm rt}}(G) \le a(k) \cdot \gamma_{{\rm r}k}(G).$$ \end{question} From Proposition~\ref{rk-krt} we know $a(k) < 2$ for $k \ge 3$. By taking $G$ to be $K_{k, b}$, for $b \ge k$, we get $\gamma_{{\rm r}k}(G) = k$ and $\gamma_{k{\rm rt}}(G) > (3/2)k$, by Lemma~\ref{lem-Kab-2k} and Proposition~\ref{thm3}, respectively; thus $a(k) > 3/2$. In summary, we know that for $k \ge 3$, we must have $3/2 < a(k) < 2$.

\section{Lower bounding $k$-rainbow total domination} \label{sec_dom}

The main point of this section is to lower bound $\gamma_{k{\rm rt}}(G)$ in terms of $\gamma(G)$. In \cite{San19} it was observed that for a graph $G$ of order $n$ where $n > k > 1$, it is always the case that $\max\{k,\gamma(G)\}\leq \gamma_{k{\rm rt}}(G)$. While the lower bound of $k$ can be achieved, we will show that the lower bound of $\gamma(G)$ cannot. Goddard and Henning \cite{GodHen18} show that for any graph $G$, $\frac{4}{3}\gamma (G)$ is a tight lower bound for $\gamma_{2{\rm rt}}(G)$. We generalize their lower bound to all $k$ in the following theorem, where the interesting issue of tightness will be discussed after the theorem. \begin{theorem} \label{thm_rainbow_vs_domination} For a graph $G$ and $k \ge 2$ we have $$\gamma_{k{\rm rt}}(G) \geq \dfrac{2k}{k+1} \gamma(G).$$ \end{theorem} \begin{proof} We use the notation $G[S]$ for the subgraph of $G$ induced by the set $S\subseteq V(G)$. Let $f:V(G)\rightarrow 2^{[k]}$ be a $\gamma_{k{\rm rt}}$-function.
Let $V_i' \subseteq V_i$ be the set of vertices in $V_i$ having a neighbor in $V_i$, and let $D_i$ be a minimum dominating set for $G[V_i']$. Since $G[V_i']$ contains no isolated vertices, we have (due to a result of Ore in \cite{Ore}) that $| D_i | \le \frac{|V_i'|}{2} \leq \frac{|V_i|}{2}$. For $i\in [k]$ we define a set $U_i$ as follows: $$U_i = V_i \ \cup \bigcup_{\substack{S \subseteq [k],\\ |S| \gammae 2}} V_S \ \ \cup \ \ \bigcup_{j \neq i} D_j.$$ In other words, $U_i$ consists of the following set of vertices in $G$: those that are labeled by just $\{i\}$, those that are labeled by a subset of $[k]$ of size at least $2$, and from among those labeled by $\{j\}$, where $j \neq i$, take just the ones in $D_j$. Now we show that each $U_i$ is a dominating set of $G$. First, consider a vertex $v$ with $f(v)=\emptyset$. Since $f$ is a $k$RTDF, there is a neighbor $x$ of $v$ such that $i\in f(x)$. This means that $v$ is adjacent to a vertex in $V_i$ or a vertex labeled by a set of size $2$ or greater. Therefore $v$ is adjacent to a vertex in $U_i$. Second, consider a vertex $u\in V_j\subseteqtminus D_j$ where $j \neq i$. If $u$ is in $V_j'$ then it is adjacent to a vertex from $D_j$, otherwise, $u$ is not adjacent to a vertex labeled by $\{j\}$, so it must be adjacent to a vertex labeled by $S$ where $j \in S$ and $|S|\gammaeq 2$. Thus $u$ is adjacent to a vertex in $U_i$. Finally, the rest of the vertices are actually in $U_i$. Thus $U_i$ is a dominating set. Now we can estimate as follows: \begin{alignat*}{2} k \cdot \gammaamma(G) & \le |U_1| + \ldots + |U_k| & \hspace{2cm} & \\ & = |V_1| + \cdots + |V_k| + k \cdot \sum_{\substack{S \subseteq [k],\\ |S| \gammae 2}} |V_S| + (k-1)(|D_1| + \ldots + |D_k|)\\ & \le |V_1| + \cdots + |V_k| + \sum_{\substack{S \subseteq [k],\\ |S| \gammae 2}} k \cdot|V_S|+ \frac{k-1}{2}(|V_1| + \ldots + |V_k|) & \hspace{2cm} & \\ & = \sum_{S \subseteq [k]} |S| \cdot |V_S| + \sum_{\substack{S \subseteq [k],\\ |S| \gammae 2}} (k - |S|) |V_S| + \frac{k-1}{2}(|V_1| + \ldots + |V_k|)\\ & \le \gammaamma_{krt}(G) + (k-1)\sum_{\substack{S \subseteq [k],\\ |S| \gammae 2}} |V_S| + \frac{k-1}{2}(|V_1| + \ldots + |V_k|)& \hspace{2cm} & \\ & = \gammaamma_{krt}(G) +\frac{k-1}{2}\left( \ \left(\sum_{\substack{S \subseteq [k],\\ |S| \gammae 2}} 2 \cdot |V_S| \gamma_{{\rm ri}2}ght)+ \ |V_1| + \ldots + |V_k| \ \gamma_{{\rm ri}2}ght)\\ & \le \gammaamma_{krt}(G) + \frac{k-1}{2} \gammaamma_{krt}(G) & \hspace{2cm} & \\ & = \frac{k+1}{2} \gammaamma_{krt}(G). \end{alignat*} Summarizing, we have $k \gammaamma(G) \le \frac{k+1}{2} \gammaamma_{krt}(G)$, thus $\frac{2k}{k+1}\gammaamma(G) \le \gammaamma_{krt}(G)$. \end{proof} \noindent In what follows we consider the tightness of the bound in the above theorem. The tightness for $k=2$ follows from the mentioned result by Goddard and Henning; see Theorem 3.1 in~\cite{GodHen18}. To see that the lower bound is tight for $k=3$, we modify a construction from \cite{GodHen18} in order to define a family of graphs $\mathcal{T}_{m}$, where $m \gammae 2$ is an integer parameter. A graph $G$ is in the family $\mathcal{T}_{m}$ if we can construct it as follows. There are two parts to $G$: $H$ and $I$. $H$ consists of the three graphs $H_1, H_2$, and $H_3$, where each $H_i$ is $m$ disjoint edges. To construct $I$, for each $3$-tuple of vertices $\bar{x} = (x_1, x_2, x_3$), where $x_i \in H_i$, add an independent set of size at least $m$ to $I$, calling it $I_{\bar{x}}$. Attach each $x_i$ to all vertices in $I_{\bar{x}}$. 
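Before turning to the calculation, the construction just described can be made concrete with a short sketch. The following Python snippet is our own illustration (the name \texttt{build\_T} and the vertex encoding are hypothetical and not part of the paper); it builds the adjacency structure of a graph in $\mathcal{T}_{m}$ exactly as described above.

\begin{verbatim}
# Illustrative sketch of the construction of a graph in the family T_m:
# H_1, H_2, H_3 are m disjoint edges each, and an independent set of
# size m is attached to every 3-tuple (x_1, x_2, x_3) with x_i in H_i.
from itertools import product

def build_T(m, indep_size=None):
    indep_size = indep_size or m          # independent sets of size >= m
    adj, H = {}, []
    def add_vertex(v):
        adj[v] = set()
        return v
    def add_edge(u, v):
        adj[u].add(v); adj[v].add(u)
    for i in range(3):                    # the three matchings H_1,H_2,H_3
        part = [add_vertex(('H', i, j)) for j in range(2 * m)]
        for j in range(m):                # m disjoint edges inside H_i
            add_edge(part[2 * j], part[2 * j + 1])
        H.append(part)
    for xbar in product(H[0], H[1], H[2]):  # one independent set per tuple
        for t in range(indep_size):
            w = add_vertex(('I', xbar, t))
            for x in xbar:                # attach x_1, x_2, x_3 to I_xbar
                add_edge(x, w)
    return adj
\end{verbatim}

For $m=3$ this produces the $6^3$ attached independent sets mentioned next.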
See Figure~\ref{t3} for an illustration of a graph from $\mathcal{T}_{3}$; for clarity, only $3$ of the $6^3$ independent sets are depicted. We calculate $\gamma_{3{\rm rt}}(G)$ for $G$ in $\mathcal{T}_{m}$, and then state the immediate and interesting corollary. \begin{figure} \caption{A graph from $\mathcal{T}_{3}$.} \label{t3} \end{figure} \begin{proposition} \label{thm_special_graph} Suppose $G$ is in the family $\mathcal{T}_{m}$. Then $$ \gamma_{3{\rm rt}}(G) = \dfrac{3}{2}\gamma(G).$$ \end{proposition} \begin{proof} We obtain $\gamma_{3{\rm rt}}(G) \ge \dfrac{3}{2}\gamma(G)$ from Theorem~\ref{thm_rainbow_vs_domination}. The rest of the proof shows that $\gamma_{3{\rm rt}}(G) \le \dfrac{3}{2}\gamma(G)$; it suffices to show that $\gamma_{3{\rm rt}}(G) \le 6m$ and $\gamma(G) \ge 4m$. For the first inequality, let $f: V(G)\rightarrow 2^{[3]}$ be defined by $f(v)=\{i\}$ if $v\in V(H_i)$ and $f(v)=\emptyset$ for every vertex $v$ in $I$. It is easy to see that $f$ is a $3$RTDF of $G$, thus $\gamma_{3{\rm rt}}(G)\leq ||f|| \le 6m$. It remains to show that $\gamma(G) \ge 4m$. Let $D$ be a minimum size dominating set of $G$, where $D$ is chosen to have the following property: $D$ contains as many vertices from $H_1 \cup H_2 \cup H_3$ as possible. We claim that such a $D$ must contain every vertex of $H_i$ for some $i =1, 2,$ or $3$. Assume for contradiction that there is no such $H_i$; then there must be a tuple $(u_1, u_2, u_3)$ such that $u_i \in H_i$ and no $u_i$ is in $D$. The structure of $G$ implies that every vertex of $I_{(u_1, u_2, u_3)}$ must be in $D$. However, the closed neighborhood of $I_{(u_1, u_2, u_3)}$ contains the vertices of $I_{(u_1, u_2, u_3)}$ along with $\{ u_1, u_2, u_3\}$, while the closed neighborhood of $\{ u_1, u_2, u_3 \}$ contains the same vertices as well as others. Since $I_{(u_1, u_2, u_3)}$ contains at least $3$ vertices, we could remove $I_{(u_1, u_2, u_3)}$ from $D$ and replace it by $\{ u_1, u_2, u_3\}$ to arrive at a different minimum dominating set that contains more vertices of $H_1 \cup H_2 \cup H_3$, contradicting the property of $D$ having as many vertices from $H_1 \cup H_2 \cup H_3$ as possible. Thus we may assume, without loss of generality, that $D$ contains all of $H_1$. The vertices of $H_1$ are not adjacent to the vertices of $H_2 \cup H_3$, so we still must dominate all of them. Note that any vertex in the graph dominates at most two vertices in $H_2 \cup H_3$; thus to dominate the $4m$ vertices of $H_2 \cup H_3$ we need at least $2m$ more vertices in addition to the $2m$ vertices of $H_1$. Thus $\gamma(G) \ge 4m$. \end{proof} \begin{cor} \label{cor_3rt_bound} For a graph $G$ we have the following tight inequality: $$\gamma_{3{\rm rt}}(G) \ge \dfrac{3}{2}\gamma(G).$$ \end{cor} The construction of $\mathcal{T}_{m}$, along with the last corollary, does not obviously generalize to $k \ge 4$. Summarizing, we have the following tight lower bounds for the cases of $k = 2$ and $k = 3$: $\gamma_{2{\rm rt}}(G) \ge \frac{4}{3}\gamma(G)$ and $\gamma_{3{\rm rt}}(G) \ge \frac{3}{2}\gamma(G)$. We make the following conjecture for the other values of $k$. \begin{conjecture} \label{conj_2gammaLower} For a graph $G$ and $k \ge 4$, we have the tight bound $\gamma_{k{\rm rt}}(G)\geq 2\gamma(G)$. \end{conjecture} \noindent In Conjecture~\ref{conj_2gammaLower} the correct coefficient $c$ in front of $\gamma(G)$ (which we conjecture to be 2) satisfies $\frac{2k}{k+1} \le c \le 2$.
The first inequality holds by Theorem~\ref{thm_rainbow_vs_domination}. The second inequality holds because $\gamma_{k{\rm rt}}(K^+_{k, b}) = 2k = 2\cdot \gamma(K^+_{k, b})$, by Lemma~\ref{thm_Kplus} (for $k \ge 2, b \ge 1$). In the case of bipartite graphs, when $k=2$, the bound in Theorem~\ref{thm_rainbow_vs_domination} can be improved. Using hypergraphs, Azarija et al.~showed that for a bipartite graph $G$, $\gamma_{2{\rm rt}}(G) = 2\gamma(G)$ (see Theorem 1 in~\cite{azar}). Goddard and Henning~\cite{GodHen18} (see Theorem 2.1) presented a shorter proof using paired domination. Using $2$-rainbow total domination, the proof of the mentioned result is even simpler. \begin{theorem}[\cite{azar,GodHen18}] \label{azar} If $G$ is a bipartite graph then $\gamma_{2{\rm rt}}(G) = 2\gamma(G)$. \end{theorem} \begin{proof} Let $G$ be a bipartite graph with bipartition $(L,R)$, and let $f$ be a $\gamma_{2{\rm rt}}$-function for $G$. We define a set $A$ as the union of the following three subsets of $V(G)$: all vertices of $G$ with label $\{1,2\}$, all vertices from $L$ with label $\{2\}$, and all vertices from $R$ with label $\{1\}$. Obviously, $A$ is a dominating set in $G$. By interchanging $L$ and $R$ in the above definition, a set $B$ is defined, which is also a dominating set in $G$. Therefore $$2\gamma(G) \leq |A|+|B|=2 |V_{12}|+|V_1|+|V_2|=||f||.$$ The opposite inequality $||f|| \le 2 \gamma(G)$ is immediate (see Observation 2.3 in \cite{San19}). \end{proof}

\section{Vizing-like conjectures} \label{sec_vizing}

Vizing's well known conjecture states that for any graphs $G$ and $H$, \begin{equation}\label{in.v} \gamma(G) \, \cdot \, \gamma(H) \le \gamma(G\,\square\, H). \end{equation} By a result of Clark and Suen \cite{CS2000} it is known that for any graphs $G$ and $H$, $$\gamma(G) \, \cdot \, \gamma(H) \le 2 \gamma(G\,\square\, H).$$ Ho~\cite{Ho08} proved that for any graphs $G$ and $H$ without isolated vertices, $$ \gamma_t(G) \, \cdot \, \gamma_t(H) \le 2 \cdot \gamma_t(G\,\square\, H), $$ where the inequality is tight. We use the above results to provide a simple proof of the following Vizing-like inequalities for $k$-rainbow and $k$-rainbow total domination. \begin{proposition} Let $G$ and $H$ be graphs and $k\geq 2$. Then $$ \gamma_{k{\rm rt}}(G) \, \cdot \, \gamma_{k{\rm rt}}(H) \le 2k \cdot \gamma_{k{\rm rt}}(G\,\square\, H) $$ and $$ \gamma_{{\rm r}k}(G) \, \cdot \, \gamma_{{\rm r}k}(H) \le 2k \cdot \gamma_{{\rm r}k}(G\,\square\, H). $$\end{proposition} \begin{proof} Using Ho's result, the equality in equation~(\ref{cpt}), and Corollary~\ref{thm_rainbow_to_total}, we derive \begin{align*} \gamma_{k{\rm rt}}(G) \, \cdot \, \gamma_{k{\rm rt}}(H) & = \gamma_t(G \,\square\, K_k) \, \cdot \, \gamma_t(H \,\square\, K_k) \\ & \le 2\gamma_t(G \,\square\, K_k \,\square\, H \,\square\, K_k) \\ & = 2\gamma_t(G \,\square\, H \,\square\, K_k \,\square\, K_k) \\ & = 2\gamma_{k{\rm rt}}(G \,\square\, H \,\square\, K_k) \\ & \le 2k\gamma_t(G \,\square\, H \,\square\, K_k) \\ & = 2k\gamma_{k{\rm rt}}(G \,\square\, H). \end{align*} Following the same lines, using the above result of Clark and Suen and the known bound $\gamma_{{\rm r}k}(G)\leq k\gamma(G)$ from \cite{hart04}, the Vizing-like inequality for $k$-rainbow domination can be derived. \end{proof} \noindent We believe that the following stronger inequalities might hold.
\begin{conjecture}\label{conB} Let $G$ and $H$ be graphs and $k\gammaeq 2$. Then $$\gamma_{{\rm r}k}(G) \, \cdot \, \gamma_{{\rm r}k}(H) \le 2 \cdot \gamma_{{\rm r}k}(G\,\square\, H).$$ \end{conjecture} \begin{conjecture}\label{conA} Let $G$ and $H$ be graphs and $k\gammaeq 2$. Then \begin{equation}\label{in.a} \gamma_{k{\rm rt}}(G) \, \cdot \, \gamma_{k{\rm rt}}(H) \le 2 \cdot \gamma_{k{\rm rt}}(G\,\square\, H). \end{equation} \end{conjecture} For the first conjecture the constant 2 is attained for $G=H=C_4$ and $k=4$. It is easy to see that $\gammaamma_{{\rm r}4}(C_4)=4$. To see that $\gammaamma_{{\rm r}4}(C_4\,\square\, C_4) = 8$, observe that the upper bound $\gammaamma_{{\rm r}4}(C_4\,\square\, C_4)\leq 8$ follows from the construction in Figure \ref{c4c4}, while the lower bound $\gammaamma_{{\rm r}4}(C_4\,\square\, C_4)\gammaeq 8$ follows from Theorem~3 in \cite{shao14} (that theorem states that for a connected graph $G$, $\gamma_{{\rm r}k}(G)\gammaeq \Bigl\lceil \frac{|V(G)|k}{\Delta(G)+k}\Bigr\rceil$, where $\Delta(G)$ stands for the maximum degree of $G$). \begin{figure} \caption{$4$-rainbow domination of $C_4 \,\square\, C_4$.} \label{c4c4} \end{figure} Regarding Conjecture~\ref{conA}, if we exclude graphs containing isolated vertices and take $k=1$, then inequality~(\ref{in.a}) is just a restatement of Ho's result. Furthermore, when $k=1$, Ho observes that the inequality is sharp. When $k=2$ we obtain an interesting observation relating Vizing's conjecture to Conjecture~\ref{conA}. Due to Theorem~\ref{azar}, if $G$ and $H$ are bipartite graphs, then $$4 \gammaamma(G) \gammaamma(H) = \gamma_{2{\rm rt}}(G) \gamma_{2{\rm rt}}(H) \quad\hbox{ and }\quad 4\gammaamma(G\,\square\, H) = 2 \gamma_{2{\rm rt}} (G\,\square\, H),$$ which implies that for bipartite graphs, inequality (\ref{in.v}) holds if and only if for $k = 2$ inequality (\ref{in.a}) holds. Furthermore, one inequality is tight if and only if the other is. The next observation summarizes this discussion. \begin{observation} Restricting to the case $k=2$ and to the class of bipartite graphs, Conjecture~\ref{conA} and Vizing's conjecture are equivalent. \end{observation} \noindent The relevance of the last observation is highlighted by the fact that there are many pairs of graphs for which equality is achieved in Vizing's conjecture (see \cite{hart98}). \vskip 1pc \noindent{\bf Acknowledgments.} Slovenian researchers were partially supported by Slovenian research agency ARRS, program no.\ P1--0383, project no.\ J1--1692 and bilateral projects between Slovenia and United states of America BI-US/18-20-061 and BI-US/18-20-052. \end{document}
\begin{document} \title{On Efficient Noncommutative Polynomial Factorization via Higman Linearization} \begin{abstract} In this paper we study the problem of efficiently factorizing polynomials in the free noncommutative ring $\mathbb{F}\angle{x_1,x_2,\ldots,x_n}$ of polynomials in noncommuting variables $x_1,x_2,\ldots,x_n$ over the field $\mathbb{F}$. We obtain the following result: \begin{itemize} \item[] Given a noncommutative arithmetic formula of size $s$ computing a noncommutative polynomial $f\in\mathbb{F}\angle{x_1,x_2,\ldots,x_n}$ as input, where $\mathbb{F}=\mathbb{F}_q$ is a finite field, we give a randomized algorithm that runs in time polynomial in $s, n$ and $\log_2q$ that computes a factorization of $f$ as a product $f=f_1f_2\cdots f_r$, where each $f_i$ is an irreducible polynomial that is output as a noncommutative algebraic branching program. \item[] The algorithm works by first transforming $f$ into a linear matrix $L$ using Higman's linearization of polynomials. We then factorize the linear matrix $L$ and recover the factorization of $f$. We use basic elements from Cohn's theory of free ideals rings combined with Ronyai's randomized polynomial-time algorithm for computing invariant subspaces of a collection of matrices over finite fields. \end{itemize} \noindent\textbf{Keywords: Noncommutative Polynomials, Arithmetic Circuits, Factorization, Identity testing.} \end{abstract} \setcounter{page}{0} \tableofcontents \thispagestyle{empty} \section{Introduction}\label{intro} Let $\mathbb{F}$ be any field and $X=\{x_1,x_2,\ldots,x_n\}$ be a set of $n$ free noncommuting variables. Let $X^*$ denote the set of all free words (which are monomials) over the alphabet $X$ with concatenation of words as the monoid operation and the empty word $\epsilon$ as identity element. The \emph{free noncommutative ring} $\mathbb{F}X$ consists of all finite $\mathbb{F}$-linear combinations of monomials in $X^*$, where the ring addition $+$ is coefficient-wise addition and the ring multiplication $*$ is the usual convolution product. More precisely, let $f,g\in\mathbb{F}X$ and let $f(m)\in\mathbb{F}$ denote the coefficient of monomial $m$ in polynomial $f$. Then we can write $f=\sum_m f(m) m$ and $g=\sum_m g(m) m$, and in the product polynomial $fg$ for each monomial $m$ we have \[ fg(m)=\sum_{m_1m_2=m} f(m_1)g(m_2). \] The \emph{degree} of a monomial $m\in X^*$ is the length of the monomial $m$, and the degree $\deg f$ of a polynomial $f\in\mathbb{F}X$ is the degree of a largest degree monomial in $f$ with nonzero coefficient. For polynomials $f,g\in\mathbb{F}X$ we clearly have $\deg (fg) = \deg f + \deg g$. A \emph{nontrivial factorization} of a polynomial $f\in\mathbb{F}X$ is an expression of $f$ as a product $f=gh$ of polynomials $g,h\in\mathbb{F}X$ such that $\deg g > 0$ and $\deg h > 0$. A polynomial $f\in\mathbb{F}X$ is \emph{irreducible} if it has no nontrivial factorization and is \emph{reducible} otherwise. For instance, all degree $1$ polynomials in $\mathbb{F}X$ are irreducible. Clearly, by repeated factorization every polynomial in $\mathbb{F}X$ can be expressed as a product of irreducibles. In this paper we study the algorithmic complexity of polynomial factorization in the free ring $\mathbb{F}X$. The factorization algorithm is by an application of Higman's linearization process followed by factorization of a matrix with linear entries (under some technical conditions) using Cohn's factorization theory. 
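Before outlining the results, it may help to see the linearization step on a tiny example of our own (chosen purely for illustration; it is not a step of the algorithm described below). For the polynomial $f = z + xy$, Higman's trick gives
\[
\begin{pmatrix} z+xy & 0\\ 0 & 1 \end{pmatrix}
=
\begin{pmatrix} 1 & -x\\ 0 & 1 \end{pmatrix}
\begin{pmatrix} z & x\\ -y & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0\\ y & 1 \end{pmatrix},
\]
an identity that can be checked directly, keeping the order of the noncommuting factors in mind (here $(-x)(-y)=xy$). The middle factor has entries of degree at most one, while the outer factors are upper and lower triangular with all-$1$'s diagonal; repeatedly applying this step to eliminate products of variables is what produces the linear matrix referred to above.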
It is interesting to note that Higman's linearization process \cite{Hig} has been used to obtain a deterministic polynomial-time algorithm for the RIT problem, that is, the problem of testing whether a noncommutative rational formula (which computes an element of the free skew field $\mathbb{F}\newbrak{X}$) is zero on its domain of definition \cite{GGOW20,IQS17, IQS18,HW15}.

\subsection{Overview of the results}

The main result of the paper is the following. \begin{theorem*}[Main Theorem] Given a multivariate noncommutative polynomial $f\in\mathbb{F}_q\angle{X}$ for a finite field\footnote{We present the detailed randomized algorithm over large finite fields. In the case of small finite fields we obtain a deterministic $\mathrm{poly}(s,q,|X|)$ time algorithm with minor modifications.} $\mathbb{F}_q$ by a noncommutative arithmetic formula of size $s$ as input, a factorization of $f$ as a product $f=f_1f_2\cdots f_r$ can be computed in randomized time $\mathrm{poly}(s,\log_2q,|X|)$, where each $f_i\in\mathbb{F}_q\angle{X}$ is an irreducible polynomial that is output as an algebraic branching program. \end{theorem*} The proof has three broad steps described below. \begin{itemize} \item \textbf{Higman linearization and Cohn's factorization theory}~~ Briefly, given a noncommutative polynomial $f\in\mathbb{F}X$ by a formula, we can transform it into a linear matrix $L$ such that $f\oplus I = PLQ$, where $P$ is an upper triangular matrix with polynomial entries and all $1$'s diagonal, and $Q$ is a lower triangular matrix with polynomial entries and all $1$'s diagonal; $P$ and $Q$ are the matrices implementing the sequence of row and column operations required for the Higman linearization process. Cohn's theory of factorization of noncommutative linear matrices gives us sufficient information about the structure of irreducible linear matrices.\\ \item \textbf{Ronyai's common invariant subspace algorithm}~~ Next, the most important algorithmic tool is Ronyai's algorithm for computing common invariant subspaces of a collection of matrices over finite fields \cite{Ronyai2}. We show that Ronyai's common invariant subspace algorithm can be repeatedly applied to factorize a linear matrix $L=A_0+\sum_{i=1}^n A_i x_i$ into a product of irreducible linear matrices, provided $A_0$ is invertible and $[A_1 A_2\cdots A_n]$ has full row rank or $[A_1^T A_2^T\cdots A_n^T]^T$ has full column rank. The latter conditions are called right and left monicity of the linear matrix $L$, respectively. With some technical work we can ensure these conditions for a linear matrix $L$ that is produced from a polynomial $f$ by Higman linearization. Then Ronyai's algorithm yields the factorization of $L$ into a product of irreducible linear matrices (up to multiplication by units).\\ \item \textbf{Recovering the factors of $f$}~~ Finally, we design a simple linear algebraic algorithm for trivializing a matrix product $AB=0$, where $A$ is a linear matrix and $B$ is a column vector of polynomials from $\mathbb{F}X$, using which we are able to extract the irreducible factors of $f$ from the factors of $L$. An invertible matrix $M$ with polynomial entries \emph{trivializes} the relation $AB=0$ if the modified relation $(AM)(M^{-1}B)=0$ has the property that for every index $i$ either the $i^{th}$ column of $AM$ is zero or the $i^{th}$ row of $M^{-1}B$ is zero.
While such matrices $M$ exist for any matrix product $AB=0$ with entries from $\mathbb{F}X$, we obtain an efficient algorithm in the special case when $A$ is linear and $B$'s entries are polynomials computed by small arithmetic circuits. This special case is sufficient for our application. \end{itemize}
There are some additional technical aspects we need to deal with. Let $L=A_0+\sum_{i=1}^n A_i x_i$ be the linear matrix obtained from $f\in\mathbb{F}_q\angle{X}$ by Higman linearization, where $X=\{x_1,x_2,\ldots,x_n\}$ and $A_i\in \mathbb{F}_q^{d\times d}, 0\le i\le n$. If $A_0$ is an invertible matrix then it turns out that the problem of factorizing $L$ can be directly reduced to the problem of finding a common invariant subspace for the matrices $A_0^{-1}A_i, 1\le i\le n$. In general, however, $A_0$ is not invertible. Two cases arise:
\begin{itemize} \item[(a)] The polynomial $f$ is \emph{commutatively nonzero}. That is, it does not vanish identically on $\mathbb{F}_q^n$ (or on $\mathbb{F}^n$ for a small extension field $\mathbb{F}$). In this case, by the DeMillo-Lipton-Schwartz-Zippel Lemma~\cite{DL78,Sch80, Zip79}, we can do a linear shift of the variables $x_i\leftarrow x_i+\alpha_i$ in the polynomial $f$, for $\alpha_i$ randomly picked from $\mathbb{F}_q$ (or $\mathbb{F}$). Let the resulting polynomial be $f'$ and let its Higman linearization be $L_{f'}$. In $L_{f'}$ the constant matrix term $A'_0$ will be invertible with high probability, and the reduction steps outlined above will work for $L_{f'}$. Furthermore, from the factorization of $f'$ we can efficiently recover the factorization of $f$. Section~\ref{cnz-sec} deals with Case~(a), with Theorem~\ref{cnzthm} summarizing the algorithm for factorizing $f$. Theorem~\ref{lfact1} describes the algorithm for factorization of the linear matrix $L_{f'}$, and the factor extraction lemma (Lemma~\ref{extract}) allows us to efficiently recover the factorization of $f'$ from the factorization of $L_{f'}$.
\item[(b)] In the second case, suppose $f$ is zero on all scalars. Then, for example by Amitsur's theorem \cite{Ami66}, for a random matrix substitution $x_i\leftarrow M_i\in \mathbb{F}^{2s\times 2s}$ the matrix $f(M_1,M_2,\ldots,M_n)$ is \emph{invertible} with high probability, where $s$ is the formula size of $f$.\footnote{Amitsur's theorem strengthens the Amitsur-Levitski theorem \cite{AL50} often used in noncommutative PIT algorithms \cite{BW05}.} \footnote{In the actual algorithm we pick the matrices $M_i$ using a result from \cite{DM17}.} Accordingly, we can consider the factorization problem for the shifted and dilated linear matrix $L'=A_0\otimes I_{\ell} + \sum_{i=1}^n A_i\otimes (Y_i+M_i)$, which will have the constant matrix term invertible, where each $Y_i$ is an $\ell \times \ell$ matrix of distinct noncommuting variables and $\ell=2s$. Recovering the factorization of $L$ from the factorization of $L'$ requires some additional algorithmic work based on linear algebra. A lemma from \cite{HKV20} (see Section \ref{cz-sec} and the Appendix for the details) turns out to be crucial here. The algorithm handling Case~(b) is described in Section~\ref{cz-sec}. Indeed, the new aspect of the algorithm is the factorization of the dilated matrix $L'$, from which we recover the factorization of the Higman linearization $L_f$ of $f$. The remaining algorithm steps are exactly as in Section~\ref{cnz-sec}.
\end{itemize}
\subsection{Small finite fields}\label{small-field}
We now briefly explain the deterministic $\mathrm{poly}(s,q,|X|)$ time factorization algorithm (when $\mathbb{F}_q$ is small). There are two places in the factorization algorithm outlined above where randomization is used: first, to obtain a matrix tuple $(M_1,M_2,\ldots,M_n)$ such that $f(M_1,M_2,\ldots,M_n)$ is invertible, which ensures that the constant matrix term of the linear matrix $L'$ is invertible. When $q=\Omega(d)$, where $d= \deg f$, it suffices to randomly pick $M_i\in\mathbb{F}_q^{2s\times 2s}$. However, if $q<d$ we can choose entries of the matrices $M_i$ from a small extension field $\mathbb{F}_{q^k}$ such that $q^k=\Omega(d)$. Thereby, we will obtain a factorization of $L'$ and subsequently that of the polynomial $f$ over the extension field $\mathbb{F}_{q^k}$. However, we can use the fact that the finite field $\mathbb{F}_{q^k}$ can be embedded, via the regular representation of its elements, in the matrix algebra $\mathbb{F}_q^{k\times k}$. Thus, we can obtain from $(M_1,M_2,\ldots,M_n)$ a matrix tuple $(M'_1,M'_2,\ldots,M'_n)$ with $M'_i\in\mathbb{F}_q^{2sk\times 2sk}$ such that $f(M'_1,M'_2,\ldots,M'_n)$ is invertible. This will ensure that the linear matrix $L'$ can be factorized over the field $\mathbb{F}_q$, which will allow us to obtain a complete factorization of $f$ into irreducible factors over $\mathbb{F}_q$. In order to get a deterministic polynomial-time algorithm for finding such matrices $M'_i, 1\le i\le n$ we will use the fact that the polynomial $f$ is given by a small noncommutative formula and hence has a small algebraic branching program. Then, using ideas from \cite{RS05,F14,ACDM20} we can easily find such matrices $M'_i$ in deterministic polynomial time. The second place where randomization is used is Ronyai's algorithm for finding common invariant subspaces of matrices over $\mathbb{F}_q$: it is essentially a polynomial-time reduction to univariate polynomial factorization over $\mathbb{F}_q$. We can use Berlekamp's deterministic $\mathrm{poly}(q,D)$ algorithm for the factorization of univariate degree $D$ polynomials over $\mathbb{F}_q$. Putting it together, we can obtain a deterministic $\mathrm{poly}(s,q,|X|)$ time algorithm for factorization of $f\in\mathbb{F}_q\angle{X}$ as a product of irreducible factors over $\mathbb{F}_q$.
\subsection{Finite fields versus Rationals}\label{finite-vs-rationals}
Unfortunately, the algorithm outlined above does not yield an efficient algorithm for noncommutative polynomial factorization over the rationals. The bottleneck is the problem of computing common invariant subspaces for a collection of matrices over $\mathbb{Q}$. Ronyai's algorithm for the problem over finite fields \cite{Ronyai2} builds on the decomposition of finite-dimensional associative algebras over fields. Given an algebra $\mathcal{A}$ over a finite field $\mathbb{F}_q$, the algorithm decomposes $\mathcal{A}$ as a direct sum of minimal left ideals of $\mathcal{A}$, which is then used to find nontrivial common invariant subspaces. However, as shown by Friedl and Ronyai \cite{FR85}, over the rationals the problem of decomposing a \emph{simple} algebra as a direct sum of minimal left ideals is at least as hard as factoring square-free integers.
\subsection{Related research}
The study of factorization in noncommutative rings has been systematically investigated as part of Cohn's general theory of noncommutative free ideal rings \cite{Cohnfir,Cohnintro}, which is based on the notion of the weak algorithm. In fact, there is a hierarchy of weak algorithms generalizing the division algorithm for commutative integral domains \cite{Cohnfir}.
\noindent\textit{Algorithmic:}~~To the best of our knowledge, the complexity of noncommutative polynomial factorization has not been studied much, unlike the problem of commutative polynomial factorization \cite{vzg-book,K89,KT90}. The prior work on the complexity of noncommutative polynomial factorization that we are aware of is \cite{AJR18}, where efficient algorithms are described for the problem of factoring \emph{homogeneous} noncommutative polynomials (which enjoy the unique factorization property; indeed, the algorithms in \cite{AJR18} crucially use this property). When the input homogeneous noncommutative polynomial has a small noncommutative arithmetic circuit (even given by a black-box as in Kaltofen's algorithms \cite{K89, KT90}) it turns out that the problem is efficiently reducible to commutative factorization by set-multilinearizing the given noncommutative polynomial with new commuting variables. This also works in the black-box setting and yields a randomized polynomial-time algorithm which will produce as output black-boxes for the irreducible factors (which will all be homogeneous). When the input homogeneous polynomial is given by an algebraic branching program there is even a deterministic polynomial-time factorization algorithm. Indeed, the noncommutative factorization problem for homogeneous polynomials efficiently reduces to the noncommutative PIT problem \cite{AJR18}, analogous to the commutative case \cite{KSS14}, modulo the randomness required for univariate polynomial factorization in the case of finite fields of large characteristic. The motivation of the present paper is to extend the above results to the inhomogeneous case.
\noindent\textit{Mathematical:}~~From a mathematical perspective, building on Cohn's work, there is a considerable body of research on noncommutative factorization. For example, \cite{BS15,BHL17} focus on the lack of unique factorization in noncommutative rings and study the structure of multiple factorizations. The research most relevant to our work is the study of noncommutative analogues of the Nullstellensatz by Helton, Klep and Volcic \cite{HKV18,HKV20}.
In these papers the authors study the free singularity locus of a noncommutative polynomial $f\in\mathbb{F}X$ where $\mathbb{F}$ is an algebraically closed field of characteristic zero (in \cite{HKV20} they mostly consider the complex numbers). This is the union, over all matrix dimensions $d$, of the sets $\mathcal{L}_d(f)=\{\bar{M}\mid \bar{M}$ is an $n$-tuple of $d\times d$ matrices with $\det f(\bar{M}) = 0\}$. It turns out that $f\in\mathbb{F}X$ is irreducible if and only if, for some $d_0$ and all $d\ge d_0$, the hypersurface $\mathcal{L}_d(f)$ is irreducible, which in turn holds iff $\det f(\bar{X})$ is an irreducible commutative polynomial, where $\bar{X}$ is a tuple of generic $d\times d$ matrices in commuting variables with $d\ge d_0$. However, $d_0$ turns out to be exponentially large.
\noindent\textbf{Plan of the paper.}~ In Section~\ref{prelim} we present basic definitions and the background results from Cohn's work on factorization. In Section~\ref{bas-res-sec} we further present some results from Cohn's work relevant to the paper. In Section~\ref{cnz-sec} we present the factorization algorithm for polynomials $f$ that do not vanish on scalars, and in Section~\ref{cz-sec} we present the algorithm for the general case.
\section{Preliminaries}\label{prelim}
In this section we give some basic definitions and results relevant to the paper, mainly from Cohn's theory of factorization. Analogous to integral domains and unique factorization domains in commutative ring theory, P.M.~Cohn \cite{Cohnfir,Cohnintro} has developed a theory for noncommutative rings based on the weak algorithm (a noncommutative generalization of the Euclidean division algorithm) and the notion of free ideal rings. We present the relevant basic definitions and results, specialized to the ring $\mathbb{F}X$ of noncommutative polynomials with coefficients in a (commutative) field $\mathbb{F}$, and also for matrix rings with entries from $\mathbb{F}X$. The results about $\mathbb{F}X$ in Cohn's text \cite[Chapter 5]{Cohnfir} are stated uniformly for algebraically closed fields $\mathbb{F}$. However, those we discuss hold for any field $\mathbb{F}$ (in particular for $\mathbb{F}_q$ or a small degree extension of it). The proofs are essentially based on linear algebra. Since we will be using Higman's linearization \cite{Hig} to factorize noncommutative polynomials, we are naturally led to studying the factorization of linear matrices in $\FX^{d\times d}$ using Cohn's theory.
\begin{definition}{\rm\cite{Cohnfir}} A matrix $M$ in $\FX^{d\times d}$ is called \emph{full} if it has (noncommutative) rank $d$. That is, it cannot be decomposed as a matrix product $M= M_1\cdot M_2$, for matrices $M_1 \in \skewf^{d \times e}$ and $M_2 \in \skewf^{e \times d}$ with $e <d$. \end{definition}
\begin{remark} Based on the notion of noncommutative matrix rank \cite{Cohnfir}, the square matrix $M\in\FX^{d\times d}$ is full precisely when it is invertible in the skew field $\skewf$. That is, $M$ is full if and only if there is a matrix $N\in\skewf^{d\times d}$ such that $MN=NM=I_d$, where $I_d$ is the $d \times d$ identity matrix. \end{remark}
We note the distinction between full matrices and units in the matrix ring $\FX^{d\times d}$.
\begin{definition} A matrix $U \in \FX^{d\times d}$ is a \emph{unit} if there is a matrix $V\in \FX^{d\times d}$ such that $UV=VU=I_d$, where $I_d$ is the $d \times d$ identity matrix. \end{definition}
Clearly, units in $\FX^{d \times d}$ are full.
Examples of units in $\FX^{d\times d}$, which have an important role in our factorization algorithm, are upper (or lower) triangular matrices in $\FX^{d \times d}$ whose diagonal entries are all \emph{nonzero scalars}. Full matrices, in general, need not be units: for example, the $1\times 1$ matrix $x$, where $x$ is a variable, is full but it is not a unit in the ring $\FX^{1\times 1}=\FX$.
\begin{remark} Full non-unit matrices are essentially non-unit non-zero-divisors. For the factorization of elements in $\FX^{d \times d}$, units play the role that nonzero scalars play in the factorization of polynomials in commutative polynomial rings. Cohn's theory \cite{Cohnfir} considers factorizations of full non-unit elements in $\FX^{d \times d}$. \end{remark}
We next define \emph{atoms} in $\FX^{d\times d}$, which are essentially the irreducible elements in it.
\begin{definition} A full non-unit element $A$ in $\FX^{d \times d}$ is an \emph{atom} if $A$ cannot be factorized as $A=A_1A_2$ for full non-unit matrices $A_1, A_2$ in $\FX^{d \times d}$. \end{definition}
Noncommutative polynomials do not have unique factorization in the usual sense of commutative polynomial factorization.\footnote{However, as shown by Cohn, using the notion of stable associates there is a more general sense in which noncommutative polynomials have ``unique'' factorization \cite{Cohnfir}.} A classic example \cite{Cohnfir} is the polynomial $x+xyx$ with its two different factorizations \[ x+xyx = x(1+yx) = (1+xy)x, \] where $1+xy$ and $1+yx$ are distinct irreducible polynomials.
\begin{definition} Elements $A \in \FX^{d \times d}$ and $B \in \FX^{d' \times d'}$ are called \emph{stable associates} if there are positive integers $t$ and $t'$ such that $d+t=d'+t'$ and units $P, Q\in\FX^{(d+t)\times (d+t)}$ such that $A\oplus I_t=P(B \oplus I_{t'})Q$. \end{definition}
It is easy to check that the polynomials $1+xy$ and $1+yx$ are stable associates. Notice that if $A$ and $B$ are full non-unit matrices that are stable associates then $A$ is an atom if and only if $B$ is an atom. Furthermore, we note that being stable associates defines an equivalence relation on full matrices over the ring $\FX$. We observe that the problem of checking if two polynomials in $\FX$ given as arithmetic formulas are stable associates or not has an efficient randomized algorithm (Lemma~\ref{stable-algo}). Now we turn to the problem of noncommutative polynomial factorization. By Higman's linearization \cite{Hig,Cohnfir}, given a polynomial $f \in \FX$ there is a positive integer $\ell$ such that $f$ is stably associated with a \emph{linear matrix} $L \in \FX^{\ell \times \ell}$, that is to say, the entries of $L$ are affine linear forms.\footnote{More generally, by Higman's linearization any matrix of polynomials $M$ is stably associated with a linear matrix $L \in \FX^{\ell \times \ell}$ for some $\ell$.} Higman's linearization process is a simple algorithm for obtaining the linear matrix $L$ from a given $f$, and it plays a crucial role in our factorization algorithm. We describe it and state an effective version \cite{GGOW20}, which gives a simple polynomial-time algorithm to compute $L$ when $f$ is given as a non-commutative arithmetic formula.
\subsection*{Higman's linearization process}
We describe a single step of the linearization process.
Given an $m \times m$ matrix $M$ over $\FX$ such that $M[m,m]=f+gh$, apply the following: \begin{itemize} \item Expand $M$ to an $(m+1)\times (m+1)$ matrix by adding a new last row and last column with diagonal entry $1$ and remaining new entries zero: \[ \left[ \begin{array}{c|c} M & 0 \\ \hline 0 & 1 \end{array} \right]. \] \item Then the bottom right $2\times 2$ submatrix is transformed as follows by elementary row and column operations \[ \left( \begin{array}{cc} f+gh & 0 \\ 0 & 1 \end{array} \right)\rightarrow \left( \begin{array}{cc} f+gh & g \\ 0 & 1 \end{array} \right)\rightarrow \left( \begin{array}{cc} f & g \\ -h & 1 \end{array} \right) \] \end{itemize}
Given a polynomial $f\in\mathbb{F}X$, by repeated application of the above step we finally obtain a \emph{linear matrix} $L=A_0+\sum_{i=1}^n A_i x_i$, where each $A_i, 0\le i\le n$ is an $\ell\times \ell$ matrix over $\mathbb{F}$, for some $\ell$. The following theorem summarizes its properties.
\begin{theorem}[Higman Linearization]{\rm\cite{Cohnfir}}\label{higthm} Given a polynomial $f\in\FX$, there are matrices $P, Q\in\FX^{\ell \times \ell}$ and a linear matrix $L\in\FX^{\ell\times \ell}$ such that \begin{equation}\label{higeq} \left( \begin{array}{c|c} f & 0 \\ \hline 0 & I_{\ell-1} \end{array} \right) ~=~PLQ \end{equation} with $P$ upper triangular, $Q$ lower triangular, and the diagonal entries of both $P$ and $Q$ are all $1$'s (hence, $P$ and $Q$ are both units in $\FX^{\ell \times \ell}$). \end{theorem}
Instead of a single $f$, we can apply Higman linearization to a matrix of polynomials $M\in\FX^{m\times m}$ to obtain a linear matrix $L$ that is stably associated to $M$. We state the algorithmic version of Garg et al.~\cite{GGOW20} in this general form.
\begin{theorem}\label{ehigman}{\rm\cite[Proposition A.2]{GGOW20}} Let $M \in \FX^{m \times m}$ such that $M_{i,j}$ is computed by a non-commutative arithmetic formula of size at most $s$ and bit complexity at most $b$. Then, for $k=O(s)$, in time $\mathrm{poly}(s,b)$ we can compute the matrices $P, Q$ and $L$ in $\FX^{\ell \times \ell }$ of Higman's linearization such that \[ \left( \begin{array}{c|c} M & 0 \\ \hline 0 & I_{k} \end{array} \right) ~=~PLQ, \] where $\ell= m+k$. Moreover, the entries of the matrices $P$ and $Q$ as well as $P^{-1}$ and $Q^{-1}$ are given by polynomial-size algebraic branching programs, which can also be obtained in polynomial time. \end{theorem}
We will sometimes denote the block diagonal matrix $\left( \begin{array}{c|c} M & 0 \\ \hline 0 & I_{k} \end{array} \right)$ by $M\oplus I_k$. As $P$ and $Q$ are units with diagonal entries all $1$'s, the matrix $M$ is full iff the linear matrix $L$ is full. Also, the scalar matrix $M(\overline{0})$ (obtained by setting all variables to zero) is invertible iff the scalar matrix $L(\overline{0})$, similarly obtained, is invertible.\\
\subsection*{Invariant Subspaces and Ronyai's Algorithm}
\begin{definition} Let $A_1, \ldots, A_n \in \mathbb{F}^{d \times d}$. A subspace $V\subseteq \mathbb{F}^d$ is called a common invariant subspace of $A_1, \ldots, A_n$ if $A_i v \in V$ for all $i\in [n]$ and $v\in V$. \end{definition}
Clearly $0$ and $\mathbb{F}^d$ are, trivially, common invariant subspaces for any collection of matrices. The algorithmic problem is to find a \emph{non-trivial} common invariant subspace if one exists. Ronyai \cite{Ronyai2} gives a randomized polynomial-time algorithm for this problem when $\mathbb{F}$ is a finite field.
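As a simple illustration of the definition (standard linear algebra, included only as an example): if all of $A_1,\ldots,A_n$ are upper triangular, then the span of the first standard basis vector $e_1$ is a nontrivial common invariant subspace. More generally, having a common invariant subspace of dimension $r$ with $0<r<d$ is equivalent to the existence of a single change of basis that brings all of $A_1,\ldots,A_n$ simultaneously into a common block triangular form with an $r\times r$ diagonal block. It is precisely this connection with simultaneous block triangularization that the factorization algorithm in Section~\ref{cnz-sec} exploits.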
\begin{theorem}\label{thm-ronyai}{\rm\cite{Ronyai2}} Given $A_1, \ldots, A_n \in \mathbb{F}_q^{d \times d}$ there is a randomized algorithm running in time polynomial in $n,d, \log q$ that computes with high probability a non-trivial common invariant subspace of $A_1, \ldots, A_n$ if such a subspace exists, and outputs ``no'' otherwise. \end{theorem}
\begin{remark} We note here the classical theorem of Burnside \cite{Bur05} for matrix algebras over algebraically closed fields. It essentially shows that the algebra generated by $A_1, A_2, \ldots, A_n$ is the full matrix algebra iff there is no nontrivial common invariant subspace. \end{remark}
\begin{remark} As already mentioned in the introduction, Friedl and Ronyai \cite{FR85} have shown that over the rationals the problem is at least as hard as factoring square-free integers, and hence likely to be intractable. \end{remark}
\subsection*{Noncommutative Formulas, Algebraic branching programs}
Next we recall the standard definitions of noncommutative formulas and noncommutative algebraic branching programs (ABPs). More details about noncommutative arithmetic computation can be found in Nisan's work \cite{N91}. A \emph{noncommutative arithmetic circuit} $C$ over a field $\mathbb{F}$ and indeterminates $x_1,x_2,\ldots,x_n$ is a directed acyclic graph (DAG) with each node of indegree zero labeled by a variable or a scalar constant from $\mathbb{F}$: the indegree $0$ nodes are the input nodes of the circuit. Internal nodes, representing gates of the circuit, are of indegree two and are labeled by either a $+$ or a $\times$ (indicating the gate type). Furthermore, the two inputs to each $\times$ gate are designated as left and right inputs prescribing the order of multiplication. Each internal gate computes a polynomial (by adding or multiplying its input polynomials), where the polynomial computed at an input node is just its label. A special gate of $C$ is the \emph{output}, and the polynomial computed by the circuit $C$ is the polynomial computed at its output gate. An arithmetic circuit is a \emph{formula} if the fan-out of every gate is at most one. A noncommutative \emph{algebraic branching program} (ABP) is a layered directed acyclic graph with one source and one sink. The vertices of the graph are partitioned into layers numbered from $0$ to $d$, where edges may only go from layer $i$ to layer $i+1$. The source is the only vertex at layer $0$ and the sink is the only vertex at layer $d$. Each edge is labeled with a homogeneous linear form in the noncommuting variables $x_1, x_2, \ldots, x_n$. The size of the ABP is the number of vertices. The polynomial in $\mathbb{F}\angle{X}$ computed by the ABP is defined as follows: the sum over all source-to-sink paths of the product of the linear forms by which the edges of the path are labeled. If, instead of homogeneous linear forms, affine linear forms are used as edge labels, the branching program computes a possibly inhomogeneous polynomial.
\section{Some Basic Results}\label{bas-res-sec}
In this section we present some basic results required for our factorization algorithm.
\subsection*{Monic linear matrices}
\begin{definition}\label{def-monic}{\rm\cite{Cohnfir}} Let $L=A_0+A_1x_1+ \ldots + A_nx_n\in\FX^{d\times d}$ be a linear matrix, where each $A_i$ is a $d\times d$ scalar matrix over $\mathbb{F}$.
Then $L$ is called \emph{right monic} if the $d\times nd$ scalar matrix $[A_1~A_2~\ldots~A_n]$ has full row rank. Equivalently, if there are matrices $B_1, \ldots, B_n \in \mathbb{F}^{d \times d}$ such that $\sum_{i=1}^n A_iB_i = I_d$ (i.e., the matrix $[A_1~A_2~\ldots~A_n]$ has a right inverse). Similarly, $L$ is \emph{left monic} if the $nd\times d$ matrix $[A_1^T~A_2^T~\ldots~A_n^T]^T$ has full column rank. $L$ is called \emph{monic} if it is both left and right monic. \end{definition}
The next two results from Cohn \cite{Cohnfir} are important properties of monic linear matrices.
\begin{lemma}\label{monic-nonunit}{\rm\cite{Cohnfir}} A right (or left) monic linear matrix in $\FX^{d\times d}$ is not a unit in $\FX^{d\times d}$. \end{lemma}
\begin{proof} Let $L=A_0 + \sum_{i=1}^n A_i x_i$ be right monic, where each $A_i\in \mathbb{F}^{d\times d}$. By definition, there are matrices $B_i\in\mathbb{F}^{d\times d}, 1\le i\le n$ such that $\sum_{i=1}^n A_iB_i=I_d$. Now, suppose $L$ is a unit. Then there is a matrix $C\in\FX^{d\times d}$ such that $CL=I_d$. Let the maximum degree of polynomials occurring in $C$ be $k$, and let $\hat{C}\in\FX^{d\times d}$ denote the degree $k$ component of $C$ (so each nonzero entry of $\hat{C}$ is a homogeneous polynomial of degree $k$). Clearly, $\hat{C}\cdot (\sum_{i=1}^n A_i x_i)=0$. The homogeneity of $\hat{C}$'s entries implies that $\hat{C} A_i=0$ for each $i$. Hence, $\sum_{i=1}^n \hat{C}A_i B_i =0$, which implies $\hat{C}=0$, contradicting the assumption that $C\in\FX^{d\times d}$ is the inverse of $L$. The case when $L$ is left monic is symmetric. \end{proof}
Let $f\in\FX$ be a nonzero polynomial and $L$ be a linear matrix obtained from $f$ by Higman linearization as in Equation~\ref{higeq}. Clearly, $L$ is a full linear matrix. We show that we can transform $L$ to obtain a full and right (or left) monic linear matrix $L'$ that is stably associated to $f$. Furthermore, we can efficiently compute $L'$ and the related transformation matrices.
\begin{theorem}{\rm\cite{Cohnfir}}\label{full-monic} Let $L = A_0 + \sum_{i=1}^n A_i x_i$ be a full linear matrix in $\FX^{d\times d}$ obtained by Higman linearization from a nonconstant polynomial $f\in\FX$. Then there are deterministic $\mathrm{poly}(n,d,\log_2 q)$ time algorithms that compute units $U, U'\in \FX^{d\times d}$ and invertible scalar matrices $S, S'\in\mathbb{F}_q^{d\times d}$ such that: \begin{enumerate} \item $ULS= L'\oplus I_r$, and $L'$ is right monic. Moreover, if $L$ is not right monic then $r>0$. \item $S'LU'=L'\oplus I_{r'}$, and $L'$ is left monic. Moreover, if $L$ is not left monic then $r'>0$. \end{enumerate} \end{theorem}
\begin{proof} We prove only the first part. The second part has an essentially identical proof. We present a proof with a polynomial-time algorithm for computing $L'$. If $L$ is already right monic there is nothing to show. Otherwise, the row rank of the matrix $B = [A_1~A_2~\cdots~A_n]$ is strictly less than $d$. By row operations we can drive at least one row of $B$ to zero. So, there is an invertible scalar matrix $U_1\in \mathbb{F}^{d\times d}$ such that the last row of $U_1B$ is all zeros. Now $U_1A_0$ must have its last row non-zero since $L$ is a full linear matrix. So the last row of $U_1L$ has only scalar entries and at least one of these is non-zero. By a column swap applied to $U_1L$ we can bring this non-zero scalar $\alpha$ to the $(d,d)^{th}$ position. Hence, the $(d,d)^{th}$ entry of $U_1LS_1$ is nonzero, where $S_1$ is the matrix implementing the column swap.
Now, with suitable row operations using the last row, we can make all entries above the $(d,d)^{th}$ entry of the $d^{th}$ column zero. Applying column operations we can make all entries of the $d^{th}$ row to the left of the $(d,d)^{th}$ entry zero. The resulting matrix is of the form $RU_1LS_1S' = \tilde{L}\oplus 1$, where the unit $R$ is a linear matrix and $S'$ is an invertible scalar matrix implementing the row and column operations. If $\tilde{L}$ is not right monic, we recursively apply the above procedure to $\tilde{L}$, obtaining a unit $\tilde{U}$ and an invertible scalar matrix $\tilde{S}$ (of the appropriate dimension) such that $\tilde{U}\tilde{L} \tilde{S}= L' \oplus I_{r-1}$, for some positive integer $r<d$, where $L'$ is right monic. To see why this recursive procedure terminates with $r<d$, note that the dimension of the matrix $\tilde{L}$ decreases by $1$ in each recursive step and the matrix $\tilde{L}$ obtained is a stable associate of $L$. So, if $r=d$ this would imply that $L$ is a unit, which is a contradiction: $L$ is obtained via Higman linearization from a nonconstant polynomial $f$, and hence $L$ is not a unit. Putting $U=(\tilde{U}\oplus 1)RU_1$ and $S=S_1S'(\tilde{S}\oplus 1)$ we have $ULS=L'\oplus I_r$, where $L'$ is right monic as desired. It is clear that the entire construction is polynomial time bounded, and that we have small ABPs for the entries of $U$. \end{proof}
\begin{remark}\label{remark-full-monic} By repeated application of the algorithm in Theorem \ref{full-monic} we can compute units $U_1,U_2\in\FX^{d\times d}$ such that $U_1 L U_2 = L' \oplus I_r$, where $L'$ is \emph{both left and right monic}. Such a two-sided monic $L'$ is called \emph{monic} in \cite{Cohnfir}. For our factorization algorithm, it suffices to compute an $L'$ that is either left or right monic and is associated to $L$ as in Theorem \ref{full-monic}. It turns out that either a left monic or a right monic $L'$ suffices to use Ronyai's common invariant subspace algorithm to factorize $L'$ (and hence also $L$), as we show in Theorem \ref{lfact1}. Moreover, the fact that the matrices $S$ and $S'$ in Theorem~\ref{full-monic} are scalar is important for the factor extraction algorithm, as discussed in Theorem \ref{cnzthm}. \end{remark}
\begin{lemma}\label{stable-algo} Given polynomials $f,g\in\FX$ as input by noncommutative arithmetic formulas, we can check in randomized polynomial time if $f$ and $g$ are stable associates. \end{lemma}
\begin{proof} Given $f$ and $g$, using Higman linearization we first compute in polynomial time full and monic linear matrices $A$ and $B$ such that $f$ and $A$ are stable associates and $g$ and $B$ are stable associates (see Theorem \ref{full-monic} and Remark \ref{remark-full-monic}). Now, $f$ and $g$ are stable associates iff $A$ and $B$ are stable associates. As both $A$ and $B$ are full and monic linear matrices, they are stable associates iff both $A$ and $B$ are matrices of the same dimension, say $d$, and there are scalar invertible matrices $P$ and $Q$ in $\mathbb{F}^{d\times d}$ such that $PA=BQ$ \cite[Theorem 5.8.3]{Cohnfir}, where $\mathbb{F}=\mathbb{F}_q$ or a small field extension. Letting the $2d^2$ entries of $P$ and $Q$ be variables, we can compute a basis of the solution space of the linear system $PA=BQ$ in polynomial time. Now, there exist invertible $P$ and $Q$ in the solution space iff the degree-$2d$ polynomial $\det P \times \det Q$ is nonzero on the solutions to $PA=BQ$.
We can check this by the DeMillo-Lipton-Schwartz-Zippel Lemma \cite{DL78,Sch80,Zip79} by evaluating $\det P$ and $\det Q$ on a random linear combination of the basis of solutions to $PA=BQ$. This will be correct with high probability. \end{proof}
The next result shows how irreducibility (more generally, the property of being an atom) is preserved by Higman linearization.
\begin{theorem}\label{associates-preseve-atoms} Let $f\in\FX$ be a nonconstant polynomial and $L$ be a full linear matrix stably associated with $f$ (obtained via Higman linearization). Then the polynomial $f$ is irreducible iff $L$ is an atom. \end{theorem}
We give a self-contained proof of the above theorem, using the following (suitably paraphrased) result of Cohn.
\begin{lemma}[Matrix Product Trivialization]{\rm\cite[pp.~198]{Cohnintro}}\label{triv-lemma} Let $A\in\FX^{m\times n}$ and $B\in\FX^{n\times s}$ be polynomial matrices such that their product $AB=0$. Then there exists a unit $P\in \FX^{n\times n}$ such that for every index $i\in [n]$ either the $i^{th}$ column of the matrix product $AP$ is all zeros or the $i^{th}$ row of the matrix product $P^{-1}B$ is all zeros. \end{lemma}
\begin{proofof}{Theorem~\ref{associates-preseve-atoms}} By Higman linearization, we have upper and lower triangular matrices $P$ and $Q$, respectively, such that \[ f\oplus I_s = PLQ, \] for some positive integer $s$. Now, if $f$ is not irreducible then we can write $f=f_1f_2$, where $f_1$ and $f_2$ are both nonconstant polynomials in $\FX$. Hence $f\oplus I_s$ factorizes as the product of non-units $(f_1\oplus I_s)\cdot (f_2\oplus I_s)$, which implies the factorization \[ L = P^{-1}(f_1\oplus I_s)(f_2\oplus I_s)Q^{-1}. \] Now, we claim $P^{-1}(f_1\oplus I_s)$ and $(f_2\oplus I_s)Q^{-1}$ are non-units. Suppose $P^{-1}(f_1\oplus I_s)$ is a unit. Then $f_1\oplus I_s$ is a unit, which would imply there is an invertible matrix $M\in\FX^{(s+1)\times (s+1)}$ such that $(f_1\oplus I_s)M=I_{s+1}$. But that implies $f_1\cdot M_{1,1}=1$, which is impossible since $f_1$ is a nonconstant polynomial. Similarly, $(f_2\oplus I_s)Q^{-1}$ cannot be a unit. Hence $L$ is not an atom. Conversely, suppose $L$ is not an atom. Then we can factorize it as $L=M_1M_2$, where $M_1,M_2\in \FX^{d\times d}$ are full non-units. Therefore, we have the factorization \[ f\oplus I_s = (PM_1)(M_2Q). \] Writing the matrices $PM_1$ and $M_2Q$ as $2\times 2$ block matrices, we have: \[\left( \begin{array}{c|c} f & 0 \\ \hline 0 & I_s \end{array} \right) = \left( \begin{array}{c|c} c_1 & c_3 \\ \hline c_2 & c_4 \end{array} \right)\cdot \left( \begin{array}{c|c} d_1 & d_3 \\ \hline d_2 & d_4 \end{array} \right). \] {From} the $(2,1)^{th}$ matrix block on the left hand side of the above equation, we obtain the following matrix identity: \[ 0 = \left( \begin{array}{c c} c_2 & c_4 \end{array} \right)\cdot \left( \begin{array}{c} d_1 \\ d_2 \end{array} \right), \] where $C=(c_2~c_4)$ is in $\FX^{s\times (s+1)}$ and $D=\left( \begin{array}{c} d_1 \\ d_2 \end{array} \right)$ is in $\FX^{(s+1)\times 1}$. By Lemma~\ref{triv-lemma} there is a unit $U\in \FX^{(s+1)\times (s+1)}$ such that for every $1\le i\le s+1$ either the $i^{th}$ column of $C''=C\cdot U$ is all zeros or the $i^{th}$ row of $D''=U^{-1} D$ is all zeros. Note that $D''$, and hence $D$, cannot be the all zeros column as $M_2Q$ is full. So, at least one entry of $D''$ is nonzero. Hence, at least one column of $C''$ is all zeros.
By a suitable column permutation matrix $\Pi$ we can ensure that the first column of $C \cdot U \Pi$ is all zeros. Clearly, the first entry of $\Pi^{-1}U^{-1} D$ is nonzero. Writing $f \oplus I_s$ as a product of $C'= PM_1U\Pi$ and $D'=\Pi^{-1}U^{-1}M_2Q$ we have \[\left( \begin{array}{c|c} f & 0 \\ \hline 0 & I_s \end{array} \right) = \left( \begin{array}{c|c} c'_1 & c'_3 \\ \hline c'_2 & c'_4 \end{array} \right)\cdot \left( \begin{array}{c|c} d'_1 & d'_3 \\ \hline d'_2 & d'_4 \end{array} \right), \] where $c_2'$ is an all zeros column and $d_1'$ is nonzero. {From} the $(2,2)^{th}$ matrix block of the above equation, we obtain $c_4' d_4' = I_s$, so $c_4'$ and $d_4'$ are units. By observing the $(2,1)^{th}$ matrix block of the above equation we get $c_4' d_2' =0$, which implies $d'_2$ is an all zeros column as $c_4'$ is a unit. It follows that $f=c'_1\cdot d'_1$. Furthermore, it is a nontrivial factorization because both $c'_1$ and $d'_1$ are non-units (because $C'$ and $D'$ are non-units, and $c'_4$ and $d'_4$ are units). \end{proofof}
Let $L \in\FX^{d\times d}$ be a full and right (or left) monic linear matrix. Let $L=A_0+\sum_{i=1}^n A_i x_i$. For a positive integer $\ell$ let $M_i, i\in [n]$ be $\ell\times \ell$ scalar matrices with entries from $\mathbb{F}$ (or a small degree extension of $\mathbb{F}$). Let $Y_i, i\in [n]$ be $\ell\times \ell$ matrices whose entries are distinct noncommuting variables $y_{ijk}, 1\le j, k\le \ell$. Then the evaluation of the linear matrix $L$ at $x_i\leftarrow Y_i+M_i, 1\le i\le n$ is the $d\ell\times d\ell$ linear matrix in the $y_{ijk}$ variables: \[ L' = A_0\otimes I_\ell + \sum_{i=1}^n A_i\otimes M_i + \sum_{i=1}^n\sum_{j,k=1}^\ell (A_i\otimes E_{jk})\cdot y_{ijk}. \]
\begin{lemma}\label{shift-inv-lemma} There is a positive integer $\ell\le 2d$ such that for randomly picked $\ell \times \ell$ matrices $M_i, i\in[n]$ (with entries from $\mathbb{F}$ or a small degree extension field) the matrix $A_0\otimes I_\ell + \sum_{i=1}^n A_i\otimes M_i$ is an invertible matrix. \end{lemma}
\begin{proof} Since $L\in\FX^{d\times d}$ is a full linear matrix, it has noncommutative rank $d$. Hence, by the result of \cite{DM17} for the generic $2d\times 2d$ matrix substitution $x_i \leftarrow X_i, i\in[n]$, where $X_i$ is a matrix of distinct commuting variables, the commutative rank of $L(X_1,X_2,\ldots,X_n)$ is $2d^2$ (which means it is invertible). Hence there is a least $\ell\le 2d$ such that the commutative rank of $L(X_1,X_2,\ldots,X_n)$ is $d\ell$, where $X_i$ are generic $\ell\times \ell$ matrices with commuting variables. Hence, by the DeMillo-Lipton-Schwartz-Zippel lemma \cite{DL78,Sch80,Zip79} the rank of the scalar matrix $L(M_1,M_2,\ldots,M_n)$ is $d\ell$ with high probability, where each $M_i$ is a random scalar matrix with entries from $\mathbb{F}$ or a small extension. \end{proof}
Finally, we state and prove a \emph{modified version} of a result due to Cohn that allows us to relate the factorization of a polynomial $f\in\FX$ to the factorization of its Higman linearization $L$. The proof is given in the appendix.
\begin{theorem}{\rm\cite[Theorem 5.8.8]{Cohnfir}}\label{cohnthm} Let $C\in\FX^{d\times d}$ be a full and right monic (or left monic) linear matrix for $d>1$. Then $C$ is not an atom if and only if there are $d\times d$ invertible scalar matrices $S$ and $S'$ such that \begin{equation}\label{eq-cohnthm} SCS' = \left( \begin{array}{cc} A & 0 \\ D & B \end{array} \right) \end{equation} where $A$ is an $r\times r$ full right (resp.
left) monic linear matrix and $B$ is an $s\times s$ full right (resp. left) monic linear matrix such that $r+s=d$. \end{theorem}
\begin{remark} In \cite{Cohnfir} the theorem is proved under the stronger assumption that $C$ is monic. However, as we show, it holds even for $C$ that is right monic or left monic with minor changes to Cohn's proof. We require the above version for our factorization algorithm. \end{remark}
\section{Polynomial factorization: commutatively non-zero case}\label{cnz-sec}
Recall that $\FX$ denotes the free noncommutative polynomial ring $\mathbb{F}\angle{x_1,x_2,\ldots,x_n}$ and our goal is to give a randomized polynomial-time factorization algorithm for input polynomials in $\FX$ given as arithmetic formulas when $\mathbb{F}=\mathbb{F}_q$ is a finite field of size $q$. A polynomial $f\in\FX$ is \emph{commutatively nonzero} if $f(\alpha_1,\alpha_2,\ldots,\alpha_n)\ne 0$ for some scalars $\alpha_i\in \mathbb{F}$ (or a small extension field of $\mathbb{F}$). In this section we will present the factorization algorithm for commutatively nonzero polynomials.\footnote{In the next section we will deal with the general case. The algorithm is more technical in detail, although in essence the same.} It has three broad steps:
\begin{itemize} \item[(i)] We transform the given polynomial $f$ to a full and \emph{right (or left) monic} linear matrix $L$ by first applying the Higman linearization of $f$, followed by the algorithm in the proof of Theorem~\ref{full-monic}. \item[(ii)] Next, we factorize the full and right (or left) monic linear matrix $L$ into atoms. \item[(iii)] Finally, we recover the irreducible factors of $f$ from the atomic factors of $L$. \end{itemize}
We formally state the three problems of interest in this paper.
\begin{problem}[$\prob{FACT}(\mathbb{F})$] {~}\\ \textbf{Input}~A noncommutative polynomial $f\in\mathbb{F}X$ given by an arithmetic formula.\\ \textbf{Output}~Compute a factorization $f=f_1f_2\cdots f_r$, where each $f_i$ is irreducible, and each $f_i$ is output as an algebraic branching program. \end{problem}
\begin{problem}[$\prob{LIN{-}FACT}(\mathbb{F})$] {~}\\ \textbf{Input}~A full and right (or left) monic linear matrix $L\in\FX^{d\times d}$.\\ \textbf{Output}~Compute a factorization $L=F_1F_2\cdots F_r$, where each $F_i$ is a full linear matrix that is an atom. \end{problem}
\begin{problem}[$\prob{INV}(\mathbb{F})$] {~}\\ \textbf{Input}~A list of scalar matrices $A_1,A_2,\ldots,A_n\in \mathbb{F}^{d\times d}$.\\ \textbf{Output}~Compute a nontrivial common invariant subspace $V\subset \mathbb{F}^d$ of $A_1,A_2,\ldots,A_n$ or report that the only common invariant subspaces are $0$ and $\mathbb{F}^d$. \end{problem}
In the three-step outline of the algorithm, for the second step we will show that factoring a full and right (or left) monic linear matrix is randomized polynomial-time reducible to the problem of computing a common invariant subspace for a collection of scalar matrices. For the third step, we will give a polynomial-time algorithm (based on Lemma~\ref{triv-lemma}) to recover the irreducible factors of $f$ from the atomic factors of $L$.
\begin{remark} We use Ronyai's randomized polynomial-time algorithm \cite{Ronyai2} to solve the problem of computing a common invariant subspace for a collection of matrices over $\mathbb{F}_q$. Over the rational numbers $\mathbb{Q}$, even a special case of the problem of computing a common invariant subspace turns out to be at least as hard as factoring square-free integers~\cite{FR85}.
Hence, our approach to noncommutative polynomial factorization does not yield an efficient algorithm over $\mathbb{Q}$. \end{remark}
Suppose $f\in\FX$ is given by a noncommutative arithmetic formula. Since $f$ has small degree we can check if it is commutatively nonzero in randomized polynomial-time by the DeMillo-Lipton-Schwartz-Zippel test \cite{DL78,Sch80,Zip79} and, if so, find $\alpha_i\in\mathbb{F}, i\in[n]$ such that $f(\alpha_1,\alpha_2,\ldots,\alpha_n)\ne 0$ (if $\mathbb{F}$ is small, we pick $\alpha_i$ from a small extension field). Furthermore, by a linear shift of the variables $x_i\leftarrow x_i + \alpha_i, i\in[n]$ followed by scaling we can assume $f(\overline{0}) = 1$. Note that from the factorization of the linear shift of $f$ we can recover the factors of $f$ by shifting the variables back, and irreducibility is preserved by linear shift. For the rest of this section we will assume $f(\overline{0})=1$. Let $L=A_0+\sum_{i=1}^n A_i x_i$ be the full and right (or left) monic linear matrix obtained from $f$ as in Step~(i) above. As $f(\overline{0})=1$, the matrix $L(\overline{0})=A_0$ is invertible. We now present an efficient algorithm for factoring $L$ as a product of linear matrices $L_1L_2\cdots L_r$, where each $L_i$ is an atom.
\begin{remark} The factorization algorithm for arbitrary full and right (or left) monic linear matrices (in which $A_0$ need not be invertible) is similar but more involved. It is based on Lemma~\ref{shift-inv-lemma} and is dealt with in the next section. \end{remark}
\subsection{Algorithm for a special case of $\prob{LIN{-}FACT}(\mathbb{F}_q)$}
\begin{theorem}\label{lfact1} There is a randomized polynomial-time algorithm for the following two special cases of the $\prob{LIN{-}FACT}(\mathbb{F}_q)$ problem: \begin{enumerate} \item Given a full right monic matrix $L$ as input such that $L(\overline{0})$ is an invertible matrix, the algorithm outputs a factorization of $L$ as a product of linear matrices that are atoms. \item Given a full left monic matrix $L$ as input such that $L(\overline{0})$ is an invertible matrix, the algorithm outputs a factorization of $L$ as a product of linear matrices that are atoms. \end{enumerate} \end{theorem}
\begin{proof} We present the algorithm only for the first part, as the second part has essentially the same solution. Let $L=A_0+\sum_{i=1}^n A_i x_i$ in $\FX^{d\times d}$ be such an instance of $\prob{LIN{-}FACT}(\mathbb{F}_q)$. We can write $L=A_0\cdot L'$ where $L'$ is the full and right monic linear matrix \[ L'= I_d + \sum_{i=1}^n A_0^{-1}A_i x_i. \] Clearly, it suffices to factorize the linear matrix $L'$ into atoms. First we show that $L'$ is not an atom iff the matrices $A_0^{-1}A_i$, $1\leq i \leq n$, have a nontrivial common invariant subspace. By Theorem~\ref{cohnthm}, $L'$ is not an atom if and only if we can write $S_1L'S_2 = \left( \begin{array}{cc} B & 0 \\ D & C \end{array} \right)$ for invertible scalar matrices $S_1$ and $S_2$, where $B$ and $C$ are full and right monic linear matrices, and $D$ is some linear matrix. Equating the constant terms on both sides of the above equation we have $S_1S_2= \left( \begin{array}{cc} B_0 & 0 \\ D_0 & C_0 \end{array} \right)$ as the constant term of $L'$ is $I_d$. Thus the matrix $S_1S_2$ and its inverse also have the same block form, which implies that $S_1L'S_1^{-1}=S_1L'S_2(S_1S_2)^{-1}$ also has the same block form. It follows that the $n$ matrices $A_0^{-1}A_i, 1\le i\le n$ have a nontrivial common invariant subspace.
Conversely, if the matrices $A_0^{-1}A_i, 1\le i\le n$ have a nontrivial common invariant subspace then we have a basis change scalar matrix $S$ such that $SL'S^{-1}$ has the block form $\left( \begin{array}{cc} L_1 & 0 \\ * & L_2 \end{array} \right)$, where $L_1$ and $L_2$ are full and right monic linear matrices. So by Theorem~\ref{cohnthm} $L'$ is not an atom. So we have established that $L'$ (and hence $L$) is not an atom iff the matrices $A_0^{-1}A_i$, $1\leq i \leq n$, have a nontrivial common invariant subspace. We will use Ronyai's randomized polynomial-time algorithm for finding a nontrivial common invariant subspace for the matrices $A_0^{-1}A_i, 1\le i\le n$, over the finite field $\mathbb{F}_q$. If there is no nontrivial common invariant subspace then the linear matrix $L'$ (and hence $L$) is an atom. Otherwise, by repeated application of Ronyai's algorithm we will obtain a basis change scalar matrix $T$ which when applied to $L'$ yields a linear matrix in the following \emph{atomic block diagonal form}: \begin{equation}\label{eq3} TL' T^{-1} = \left( \begin{array}{ccccc} L_1 & 0 &0 &\ldots & 0\\ * & L_2 &0 &\ldots & 0\\ * & * &L_3 &\ldots & 0\\ & & &\ddots &\\ * & * &* &\ldots & L_r\\ \end{array} \right), \end{equation} where for each $j\in[r]$, the full right monic linear matrix $L_j \in \FX^{d_j \times d_j}$ is an atom, and each $*$ stands for some unspecified linear matrix. It is now easy to factorize $TL'T^{-1}$ as a product of atoms by noting one step of the factorization of $TL'T^{-1}$ from its form: \[ TL'T^{-1}= \left( \begin{array}{cc} A & 0 \\ D & L_r \end{array} \right) = \left( \begin{array}{cc} A & 0 \\ 0 & I \end{array} \right)\cdot \left( \begin{array}{cc} I & 0 \\ D & I \end{array} \right)\cdot \left( \begin{array}{cc} I & 0 \\ 0 & L_r \end{array} \right). \] We note that $\left( \begin{array}{cc} I & 0 \\ D & I \end{array} \right)$ is a unit. Since $L_r$ is an atom, the product $\left( \begin{array}{cc} I & 0 \\ D & I \end{array} \right)\cdot \left( \begin{array}{cc} I & 0 \\ 0 & L_r \end{array} \right)$ is also an atom and a linear matrix, and it is the rightmost factor of $TL'T^{-1}$. Continuing thus with $A$ now, we can factorize $TL'T^{-1}$ as a product $F'_1F'_2\cdots F'_r$ of $r$ atoms, each of which is a linear matrix. It follows that $L=A_0T^{-1}F'_1F'_2\cdots F'_rT$ is a complete factorization of $L$ as a product of atomic linear matrices (both $A_0$ and $T$ are scalar invertible matrices). \end{proof}
\begin{remark} We note that Ronyai's algorithm \cite{Ronyai2} for $\prob{INV}(\mathbb{F}_q)$ is actually a deterministic polynomial-time \emph{reduction} from $\prob{INV}(\mathbb{F}_q)$ to univariate polynomial factorization over $\mathbb{F}_q$. \end{remark}
Based on whether we want to work with the right monic or the left monic case, we will express $f \oplus I_s$ in an appropriate form using Higman linearization and Theorem~\ref{full-monic} as described in the equation below: \begin{equation}\label{eq2} f\oplus I_s = \begin{cases} ~~PU(L'\oplus I_t) SQ,~ \text{in the right monic case}\\ ~~PS(L'\oplus I_{t}) UQ,~ \text{in the left monic case} \end{cases} \end{equation} where $d+t=s+1$, $L'\in\FX^{d\times d}$ is a full and right (or left) monic linear matrix, $P$ is upper triangular with all $1$'s diagonal, $Q$ is lower triangular with all $1$'s diagonal, $U\in\FX^{(d+t)\times (d+t)}$ is a unit, and $S\in\mathbb{F}^{(d+t)\times (d+t)}$ is an invertible scalar matrix.
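Before turning to the factorization of $f$ itself, here is a very small illustration of the reduction in the proof of Theorem~\ref{lfact1}; it is included only as an example and is not used in what follows. The polynomial $1+xy$ is stably associated with the linear matrix $L= I_2 + A_1x + A_2y$, where
\[
A_1=\left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right), \qquad A_2=\left( \begin{array}{cc} 0 & 0 \\ -1 & 0 \end{array} \right),
\]
and $L$ is full and right monic with invertible constant term $A_0=I_2$. The inputs to the invariant subspace computation are $A_0^{-1}A_1=A_1$ and $A_0^{-1}A_2=A_2$. Since $d=2$, a nontrivial common invariant subspace would be spanned by a single nonzero vector $v=(a,b)^T$; invariance under $A_1$ forces $b=0$, while invariance under $A_2$ forces $a=0$, so no such subspace exists. Hence $L$ is an atom and, by Theorem~\ref{associates-preseve-atoms}, the polynomial $1+xy$ is irreducible.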
\subsection*{Algorithm for $\prob{FACT}(\mathbb{F}_q)$}
We are now ready to describe the polynomial factorization algorithm for commutatively nonzero polynomials in $\FX$. Starting with the Higman linearization of the input polynomial $f\in\FX$ as in Equation~\ref{eq2}, by an application of the first parts of Theorems \ref{full-monic} and \ref{lfact1} we obtain the factorization $f\oplus I_s = PUF'_1F'_2\cdots F'_r SQ$ using the structure in Equation~\ref{eq3}. Alternatively, by applying the second part of Theorem \ref{full-monic} we can compute a left monic linear matrix $L'$ that is a stable associate of $f$, and, applying the second part of Theorem~\ref{lfact1}, we can compute the factorization \begin{equation}\label{eq4} f\oplus I_s = PS'F'_1F'_2\cdots F'_r U'Q, \end{equation} where each linear matrix $F'_i$ is an atom, $P$ is upper triangular with all $1$'s diagonal, $Q$ is lower triangular with all $1$'s diagonal, $U'$ is a unit and $S'$ is a scalar invertible matrix. Equation~\ref{eq4} is the form we will use for the algorithm (we could equally well use the other factorization). {From} the structure of the atomic block diagonal matrix $TL'T^{-1}$ in Equation~\ref{eq3} notice that the product $S'F'_1F'_2\cdots F'_i $ is a linear matrix for each $1\le i<r$. The next lemma presents an algorithm that is crucial for extracting the factors of $f$.
\begin{lemma}\label{triv-lem} Let $C\in\FX^{u\times d}$ be a linear matrix and $v\in\FX^{d\times 1}$ be a column of polynomials such that $Cv=0$. Each entry $v_i$ of $v$ is given by an algebraic branching program as input. Then, in polynomial time we can compute an invertible matrix $N\in\FX^{d\times d}$ such that \begin{itemize} \item For $1\le i\le d$ either the $i^{th}$ column of $CN$ is all zeros or the $i^{th}$ row of $N^{-1}v$ is zero. \item Each entry of $N$ is a polynomial of degree at most $d^2$ and is computed by a polynomial size ABP, and also each entry of $N^{-1}$ is computed by a polynomial size ABP. \end{itemize} \end{lemma}
\begin{proof} We will describe the algorithm as a recursive procedure Trivialize that takes matrix $C$ and column vector $v$ as parameters and returns a matrix $N$ as claimed in the statement. \begin{enumerate} \item[] Procedure Trivialize$(C\in\FX^{u\times d},v\in\FX^{d\times 1})$ \item If $d=1$ then (since $Cv=0$ iff either $C=0$ or $v=0$) \textbf{return} the identity matrix. \item If $d>1$ then \item Write $C=C_0+C_1$, where $C_0$ is a scalar matrix and $C_1$ is the degree $1$ homogeneous part of $C$. Let $k$ be the highest degree of a monomial occurring in the polynomial vector $v$, and let $m$ be a degree $k$ monomial occurring in $v$. Let $v(m)\in\mathbb{F}_q^{d\times 1}$ denote its (nonzero) coefficient vector in $v$. Then $Cv=0$ implies $C_1 v(m)=0$. Let $T_0\in\mathbb{F}_q^{d\times d}$ be a scalar invertible matrix with first column $v(m)$ obtained by completing $v(m)$ to a basis. \begin{enumerate} \item If $C_0v(m)=0$ then the first column of $CT_0$ is zero. \item Otherwise, $CT_0$ has first column as the nonzero scalar vector $Cv(m)=C_0v(m)$. Suppose the $i^{th}$ entry of $Cv(m)$ is a nonzero scalar $\alpha$. With column operations we can drive the $i^{th}$ entry in all other columns of $CT_0$ to zero. Let the resulting matrix be $CT_0T_1$ (where the matrix $T_1$ is invertible as it is a product of elementary matrices corresponding to these column operations, each of which is of the form $\col_j\leftarrow \col_j + \col_1\cdot (\beta_0+\sum_k \beta_k x_k)$ for suitable scalars $\beta_0,\beta_k$). Notice that $CT_0T_1$ is still linear.
\item As $Cv=(CT_0T_1)(T_1^{-1}T_0^{-1}v)$, and in the $i^{th}$ row of $CT_0T_1$ the only nonzero entry is $\alpha$ which is in its first column, we have that the first entry of $T_1^{-1}T_0^{-1}v$ is zero. \end{enumerate} \item Let $C'\in\FX^{u\times (d-1)}$ be obtained by dropping the first column of $CT_0T_1$. Let $v'\in\FX^{(d-1)\times 1}$ be obtained by dropping the first entry of $T_1^{-1}T_0^{-1}v$. Note that $C'$ is still linear. \item Recursively call Trivialize$(C'\in\FX^{u\times (d-1)},v'\in\FX^{(d-1)\times 1})$, and let the matrix returned by the call be $T_2\in\FX^{(d-1)\times (d-1)}$. \item Putting it together, return the matrix $T_0T_1(I_1\oplus T_2)$. \end{enumerate}
To complete the proof, we note that a highest degree monomial $m$ such that $v(m)\ne 0$ is easy to compute in deterministic polynomial time, using the PIT algorithm of Raz and Shpilka \cite{RS05}, if each $v_i$ is given by an algebraic branching program. Notice that for the recursive call we need $C'$ to also be a linear matrix and each entry of $v'$ to have a small ABP. $C'$ is linear because $CT_0T_1$ is a linear matrix since $CT_0$ is linear, its first column is scalar, and each column operation performed by $T_1$ is to scale the first column of $CT_0$ by a linear form and subtract it from another column of $CT_0$. Each entry of $v'$ has a small ABP because $T_0^{-1}$ is scalar and it is easy to see that the entries of $T_1^{-1}$ have ABPs of polynomial size. Finally, we note that $T_1$ is a product of at most $d-1$ linear matrices (each corresponding to a column operation), and $N$ is an iterated product of $d$ such matrices (one for each level of the recursion). Hence, each entry of $N$ as well as $N^{-1}$ is a polynomial of degree at most $d^2$ and is computable by a small ABP. \end{proof}
Turning back to our algorithm for $\prob{FACT}(\mathbb{F}_q)$, in the next lemma we design an efficient algorithm that will allow us to extract all the irreducible factors of $f$ (given Equation~\ref{eq4}).
\begin{lemma}[Factor Extraction]\label{extract} Let $f\in\FX$ be a polynomial and $G\in\FX^{(d-1)\times (d-1)}$ be a unit such that \begin{equation}\label{eq5} \left( \begin{array}{cc} f & u \\ 0 & G \end{array} \right) = P CD, \end{equation} where: \begin{itemize} \item $C$ is a full linear matrix that is a non-unit, $P$ is upper triangular with all $1$'s diagonal, and $D\in\FX^{d\times d}$ is a full non-unit matrix which is also an atom. \item The polynomial $f$, and the entries of $u, G, P, D$ are all given as input by algebraic branching programs. \end{itemize} Then we can compute in deterministic polynomial time a nontrivial factorization $f=g\cdot h$ of the polynomial $f$ such that $h$ is an irreducible polynomial. \end{lemma}
\begin{proof} Let \[C=\left( \begin{array}{cc} c_1 & c_3 \\ c_2 & c_4 \end{array} \right) \text{ and } D = \left( \begin{array}{cc} d_1 & d_3\\ d_2 & d_4 \end{array} \right), \] written as $2\times 2$ block matrices where $c_1$ and $d_1$ are $1\times 1$ blocks. By dropping the first row of the matrix in the left hand side of Equation~\ref{eq5} and the first row of $P$ we get \[ (0~G) = (0~P') C D, \] where $P'$ is also an upper triangular matrix with all $1$'s diagonal.
Equating the first columns on both sides we have \begin{eqnarray*} 0 & =& (0~P')\left( \begin{array}{cc} c_1 & c_3 \\ c_2 & c_4 \end{array} \right)\left(\begin{array}{c} d_1\\ d_2 \end{array} \right), \text{ which implies that}\\ 0 & = & P'(c_2~c_4)\left(\begin{array}{c} d_1\\ d_2 \end{array} \right), \text{ and hence}\\ 0 & = & (c_2~c_4)\left(\begin{array}{c} d_1\\ d_2 \end{array} \right), \text{ since $P'$ is invertible.} \end{eqnarray*} Since $(c_2~c_4)\in\FX^{(d-1)\times d}$ is a matrix with linear entries and $\left(\begin{array}{c} d_1\\ d_2 \end{array} \right)\in\FX^{d\times 1}$ is a column vector of polynomials which are given by ABPs as input, we can apply the algorithm of Lemma~\ref{triv-lem} to compute a unit $N$, with all entries given by ABPs, such that for $1\le i\le d$ either the $i^{th}$ column of $(c'_2~c'_4)=(c_2~c_4)N$ is zero or the $i^{th}$ row of $\left(\begin{array}{c} d_1'\\ d_2' \end{array} \right)=N^{-1}\left(\begin{array}{c} d_1\\ d_2 \end{array} \right)$ is zero. Now the following argument is almost identical to the argument towards the end of the proof of Theorem \ref{associates-preseve-atoms}. We give it below for completeness. Since $D$ is a full matrix, the matrix $N^{-1}D$ is also full, which implies its first column $\left(\begin{array}{c} d_1'\\ d_2' \end{array} \right)$ cannot be all zeros. So there is at least one nonzero entry in $\left(\begin{array}{c} d_1'\\ d_2' \end{array} \right)$ and the corresponding column in $(c'_2~c'_4)$ is all zeros. This implies that there exists a permutation matrix $\Pi$ such that the first column of $(c'_2~c'_4)\Pi$ is all zeros and the first entry of $\Pi^{-1} \left(\begin{array}{c} d_1'\\ d_2' \end{array} \right)$ is nonzero. Consider the matrices $C''=CN\Pi=\left( \begin{array}{cc} c''_1 & c''_3 \\ c_2'' & c''_4 \end{array} \right) $ and $D''=\Pi^{-1}N^{-1}D=\left( \begin{array}{cc} d''_1 & d''_3 \\ d_2'' & d''_4 \end{array} \right) $. We have \[ \left( \begin{array}{cc} f & * \\ 0 & G' \end{array} \right) =P^{-1} \left( \begin{array}{cc} f & u \\ 0 & G \end{array} \right) = \left( \begin{array}{cc} c''_1 & c''_3 \\ c_2'' & c''_4 \end{array} \right) \left( \begin{array}{cc} d''_1 & d''_3\\ d_2'' & d''_4 \end{array} \right), \] where $G'=(P')^{-1}G$ is a unit, $c_2''$ is an all-zeros column, and $d_1''$ is nonzero. Looking at the $(2,2)^{th}$ block of the above equation, we see that $c_4'' d_4'' = G'$, and hence $c_4''$ and $d_4''$ are units as $G'$ is a unit. Now, observing the $(2,1)^{th}$ matrix block, we get $c_4'' d_2'' = 0$, which implies that $d_2''$ is an all-zeros column since $c_4''$ is a unit. Clearly, we have $f= c_1'' \cdot d_1''$. Now, since $C$ and $D$ are non-units (by assumption), the matrices $C''$ and $D''$ are also non-units. Therefore, $c''_1$ is not a scalar, for otherwise $C''$ would be a unit. Similarly, $d''_1$ is not a scalar. It follows that $f=c''_1d''_1$ is a nontrivial factorization of $f$. Furthermore, since $D$ is an atom by assumption and $D''$ is a stable associate of $D$, $D''$ is an atom. As $D'' = \left( \begin{array}{cc} d''_1 & d''_3\\ 0 & d''_4 \end{array} \right)$ and $d_4''$ is invertible, we get $\left( \begin{array}{cc} 1 & 0\\ 0 & (d''_4)^{-1} \end{array} \right) \cdot D'' = \left( \begin{array}{cc} d''_1 & d''_3\\ 0 & I_{s} \end{array} \right)$. Now applying suitable row operations to the matrix $(1 \oplus (d''_4)^{-1})D''$ we can drive $d_3''$ to zero. So we have $U(1 \oplus (d''_4)^{-1})D''=(d_1'' \oplus I_s)$ for a unit $U$. Hence $d_1''$ is a stable associate of $D''$ and therefore $d_1''$ is irreducible as $D''$ is an atom.
\end{proof} Finally, we describe the factorization algorithm for commutatively nonzero polynomials $f\in\FX$ over finite fields $\mathbb{F}_q$. \begin{theorem}\label{cnzthm} Let $\FX=\mathbb{F}_q\angle{X}$ and $f\in \FX$ be a commutatively nonzero polynomial given by an arithmetic formula of size $s$ as input instance of $\prob{FACT}(\mathbb{F}_q)$. Then there is a $\mathrm{poly}(s, \log q)$ time randomized algorithm that outputs a factorization $f=f_1f_2\cdots f_r$ such that each $f_i$ is irreducible and is output as an algebraic branching program. \end{theorem} \begin{proof} Given $f$ as input, we apply Higman linearization followed by the algorithm for $\prob{LIN{-}FACT}(\mathbb{F}_q)$ described in Theorem~\ref{lfact1} to obtain the factorization $f\oplus I_s=PSS_1F_1F_2 \ldots F_rS_2UQ$, where each linear matrix $F_i$ is an atom, $P$ is upper triangular with all $1$'s diagonal, $Q$ is lower triangular with all $1$'s diagonal, $U$ is a unit, and $S$ is a scalar invertible matrix, as given in Equation~\ref{eq4}. We can now apply Lemma~\ref{extract} to extract irreducible factors of $f$ (one by one from the right). For the first step, let $C=S S_1F_1F_2\cdots F_{r-1}$ and $D=F_rS_2UQ$ in Lemma~\ref{extract}. The proof of Lemma~\ref{extract} yields the matrix $N_r=N\Pi$ such that both matrices $C''=PS S_1F_1F_2\cdots F_{r-1}N_r$ and $D''=N_r^{-1}F_rS_2UQ$ have their first columns all zeros except for the $(1,1)^{th}$ entries $c''_1$ and $d''_1$, which yields the nontrivial factorization $f=c''_1d''_1$, where $d''_1=f_r$ is irreducible. Renaming $c''_1$ as $g_r$, we have from the structure of $C''$: \[ \left( \begin{array}{cc} g_r & * \\ 0 & G_r \end{array} \right) = P(SS_1F_1F_2\cdots F_{r-2}) (F_{r-1}N_r). \] Setting $C= SS_1F_1F_2\cdots F_{r-2}$ and $D=F_{r-1}N_r$ in Lemma~\ref{extract}, we can compute the matrix $N_{r-1}$, using which we obtain the next factorization $g_r=g_{r-1}f_{r-1}$, where $f_{r-1}$ is irreducible because the linear matrix $F_{r-1}$ is an atom. Lemma~\ref{extract} is applicable as all conditions are met by the matrices in the above equation (note that $G_r$ will be a unit). Continuing thus, at the $i^{th}$ stage we will have $f=g_{r-i+1}f_{r-i+1}f_{r-i+2}\cdots f_r$ after obtaining the rightmost $i$ irreducible factors by the above process. At this stage we will have \[ \left( \begin{array}{cc} g_{r-i+1} & * \\ 0 & G_{r-i+1} \end{array} \right) = P(SS_1F_1F_2\cdots F_{r-i-1}) (F_{r-i}N_{r-i+1}), \] where $G_{r-i+1}$ is a unit and all other conditions are met to apply Lemma~\ref{extract}. Thus, after $r$ stages we will obtain the complete factorization $f=f_1f_2\cdots f_r$. For the running time, it suffices to note that the matrix $N$ computed in Lemma~\ref{extract} is a product of at most $d^2$ linear matrices (corresponding to the column operations). Thus, at the $i^{th}$ stage of the above iteration, the sizes of the ABPs for the entries of $N_{r-i+1}$ are independent of the stage. Hence, the overall running time is easily seen to be polynomial in $s$ and $\log q$. \end{proof} \begin{corollary}\label{sparse-cor1} If $f\in\FX$ is a commutatively nonzero polynomial given as input in sparse representation (that is, as an $\mathbb{F}_q$-linear combination of its monomials), then in randomized polynomial time we can compute a factorization into irreducible factors in sparse representation. \end{corollary} \begin{proof} Let $f$ be given as input in sparse representation. Suppose $\deg f = d$ and it is $t$-sparse. 
Then there are at most $td^2$ many monomials that can occur as a substring of the monomials of $f$. We can apply the randomized algorithm of Theorem~\ref{cnzthm} to obtain the factorization $f=f_1f_2\cdots f_r$, where each $f_i$ is given by an ABP. Now, for each of the $td^2$ many candidate monomials of $f_i$ we can find its coefficient in $f_i$ in polynomial time (using the Raz-Shpilka algorithm \cite{RS05}). Hence we can obtain the factorization $f=f'_1f'_2\cdots f'_r$, where each $f'_i$ is a $t$-sparse polynomial. \end{proof} \section{Factorization of commutatively zero polynomials}\label{cz-sec} In this section we describe the general case of the factorization algorithm, when the input polynomial $f\in\FX$ is a commutatively zero polynomial. That is, $f$ evaluates to zero on all scalar substitutions from $\mathbb{F}_q$ or any (commutative) extension field. The factorization algorithm will follow the three broad steps described in Section \ref{cnz-sec} for the commutatively nonzero case: first, using Higman linearization and Theorem \ref{full-monic}, transform the polynomial $f$ to a stably associated linear matrix $L$ that is full and left (or right) monic. Next, factorize the linear matrix $L$ into atoms. Finally, recover the irreducible factors of $f$ from the atomic factors of the linear matrix $L$ using the factor extraction procedure described in Lemma~\ref{extract}. The step that requires a new algorithm is factorizing a full and right (or left) monic linear matrix $L$ over $\FX$ into atoms when $f$ is commutatively zero, which means there is no scalar substitution $x_i\leftarrow \alpha_i, i\in[n]$, such that $L(\alpha_1,\alpha_2,\ldots,\alpha_n)$ is invertible. Note that in this case we cannot apply the algorithm for factorizing a linear matrix as discussed in the proof of Theorem \ref{lfact1}. \subsection{Factorization of full and monic linear matrices} Let $f\in \FX$ be the input polynomial given by a size $s$ formula and let $L \in\FX^{d\times d}$ be a full, right monic linear matrix stably associated with $f$ obtained via Higman linearization and an application of Theorem \ref{full-monic}. Recall that, by Equation~\ref{eq2}, we have $f \oplus I_s= PU(L \oplus I_t)SQ$, where $P$ and $Q$ are upper triangular and lower triangular units, respectively, with diagonal entries $1$, $U$ is a unit, and $S$ is scalar invertible. Let $L=A_0+\sum_{i=1}^n A_i x_i\in\FX^{d\times d}$ be the given full and right monic linear matrix. First, by Lemma~\ref{shift-inv-lemma}, we will find a suitable scalar matrix $n$-tuple $\bar{M}=(M_1,M_2,\ldots,M_n)$, each $M_i\in \mathbb{F}_q^{\ell\times \ell}$ for $\ell\le 2d$, such that under the substitution $x_i\leftarrow M_i$ the matrix $L(\bar{M})$ is invertible. For $1\le i\le n$ let $Y_i$ be an $\ell\times \ell$ matrix of distinct noncommuting variables $y_{ijk}$. We consider the dilated linear matrix \begin{equation}\label{dilate-eq1} L' = A_0\otimes I_\ell + \sum_{i=1}^n A_i\otimes (Y_i+M_i). \end{equation} It is not hard to see that $L'$ is full, and that $L'$ is right monic as $L$ is right monic. Additionally, its constant term is invertible. So, we can apply Theorem~\ref{lfact1} to factorize $L'$ as a product of two linear matrices, both non-units. The following lemma from \cite{HKV20} plays an important role in our algorithm for recovering a factorization of $L$ from a factorization of $L'$. 
\begin{lemma}\cite{HKV20}\label{hkv20-main-lemma} Let $L \in \FX^{d \times d}$ be a full linear matrix with $L= A_0+ A_1x_1+\ldots +A_n x_n$ such that $A_i \neq 0$ for at least one $i$, $1\leq i \leq n$, and let $L'\in R^{d\ell \times d\ell}$ be the matrix obtained from $L$ by substituting the variable $x_i$ by $Y_i$ for $i \in [n]$, where $Y_i$ is an $\ell \times \ell$ matrix whose $(j,k)^{th}$ entry is a fresh noncommuting variable $y_{i,j,k}$ for $1 \leq j,k \leq \ell$. Then \begin{enumerate} \item If $G L' H = \left( \begin{array}{c|c} A' & 0 \\ \hline D' & B' \end{array} \right)$ for $d\ell \times d\ell$ invertible scalar matrices $G,H$, where $A'$ is a $d' \times d'$ matrix and $B'$ is a $d'' \times d''$ matrix with $0< d', d''$ and $d'+d''=d\ell$, then there exist $d \times d$ invertible scalar matrices $U, V$ such that $U L V = \left( \begin{array}{c|c} A & 0 \\ \hline D & B \end{array} \right)$, where $A$ is an $e' \times e'$ matrix and $B$ is an $e'' \times e''$ matrix with $0<e', e''$ and $e' +e''= d$. \item Moreover, given $L'$ explicitly along with its representation mentioned above, we can find the matrices $U, V$ in deterministic polynomial time (in $n,\ell,d$). \end{enumerate} \end{lemma} \begin{remark} We give a self-contained complete proof of the above linear-algebraic lemma over $\mathbb{F}_q$ in the appendix, because the proof given in \cite{HKV20} is sketchy in parts with some details missing; moreover, their lemma is stated only for complex numbers, and they are not concerned with computing the matrices $U$ and $V$. \end{remark} Now, we can apply Lemma \ref{hkv20-main-lemma} to transform the factorization of $L'$ into a factorization of $L$ as a product of two linear matrices, both non-units. Repeating the above on both factors of $L$, we get a complete atomic factorization of $L$. Formally, we prove the following. \begin{theorem}\label{lfact2} On input a full and right (or left) monic linear matrix $L= A_0 + \sum_{i=1}^n A_i x_i$, where $A_i \in \mathbb{F}^{d \times d}$ for $0\leq i\leq n$, there is a randomized polynomial time ($\mathrm{poly}(n,d)$) algorithm to compute scalar invertible matrices $S,S'$ such that $SLS'$ has atomic block diagonal form. \end{theorem} \begin{proof} We present the algorithm only for right monic $L$; the left monic case has essentially the same solution. If the input $L$ is not full or not right monic, the algorithm can efficiently detect that and output ``failure''. If $L$ is an atom, the algorithm will output that $L$ is an atom and set the matrices $S$ and $S'$ to $I_d$. Otherwise, the algorithm will compute invertible scalar matrices $S$ and $S'$ such that \begin{equation}\label{eq-block-atom} S L S' = \left( \begin{array}{ccccc} L_1 & 0 &0 &\ldots & 0\\ * & L_2 &0 &\ldots & 0\\ * & * &L_3 &\ldots & 0\\ & & &\ddots &\\ * & * &* &\ldots & L_r\\ \end{array} \right), \end{equation} where the matrix on the right is in atomic block diagonal form, that is, each linear matrix $L_i$ is an atom. \begin{enumerate} \item[] {\bf Procedure Factor(L)}. \item Test if $L$ has full noncommutative rank using the algorithm in \cite{IQS17} or \cite{GGOW20}. Test if $L$ is right monic by checking if the matrix $[A_1 A_2 \ldots A_n]$ has full row rank (which is $d$). If $L$ is not full or not right monic, the algorithm outputs ``failure''. \item Assume $L$ is full and right monic. 
Using Lemma \ref{shift-inv-lemma}, find the smallest positive integer $\ell \leq 2d$ and $\ell\times \ell$ scalar matrices $M_i, i\in [n]$, with entries from $\mathbb{F}$ (or a small degree extension of $\mathbb{F}$) such that $W=L(\bar{M})$ is a $d\ell \times d\ell$ invertible scalar matrix. Compute the dilated linear matrix $L'$ in the $y_{ijk}$ variables as in Equation~\ref{dilate-eq1}, which can be rewritten as \[ L' = A_0\otimes I_\ell + \sum_{i=1}^n A_i\otimes M_i + \sum_{i=1}^n\sum_{j,k=1}^\ell (A_i\otimes E_{jk})\cdot y_{ijk}. \] Let $L''= W^{-1} L'$. Clearly $L''(\overline{0})= I_{d\ell}$. Hence, by the algorithm of Theorem \ref{lfact1} we can either detect that $L''$ is an atom or factorize $L''$. If $L''$ is an atom, then $L$ is also an atom, and the algorithm can output that and stop. Otherwise, $L'$ is not an atom and by Theorem \ref{lfact1} we will obtain a basis change matrix $T$ such that $T W^{-1} L' T^{-1}= T L'' T^{-1} = \left( \begin{array}{cc} C'' & 0 \\ * & D'' \end{array} \right)$, where $C''$ and $D''$ are linear matrices of dimension $c'' \times c''$ and $d'' \times d''$ respectively, such that $c''+ d'' = d \ell$. \item By the linear shift of variables $y_{ijk} \leftarrow y_{ijk} - M_i(j,k)$ we obtain $\tilde{T} \tilde{L} \tilde{T'} = \left( \begin{array}{cc} C' & 0 \\ * & D' \end{array} \right)$ for some scalar invertible matrices $\tilde{T}, \tilde{T'}$, where $\tilde{L} = L( Y_1, \ldots, Y_n)$. \item Applying the algorithm of Lemma \ref{hkv20-main-lemma} to $\tilde{L}$, $\tilde{T}$, and $\tilde{T'}$, in deterministic polynomial time we obtain scalar invertible matrices $\tilde{S}, \tilde{S'}$ such that $\tilde{S}L\tilde{S'} = \left( \begin{array}{cc} C & 0 \\ * & D \end{array} \right) $, where $C$, $D$ are square matrices of dimensions $e \times e$ and $g \times g$, respectively, such that $e+g = d$. \item Recursively call Factor$(C)$ and Factor$(D)$. Let $S_1, S'_1$ be the matrices returned by Factor$(C)$ and $S_2, S'_2$ be the matrices returned by Factor$(D)$. \item Let $S =(S_1 \oplus S_2)\tilde{S}$ and $S' = \tilde{S'}(S'_1 \oplus S'_2)$. Return the invertible scalar matrices $S$ and $S'$. Note that at this stage $SLS'$ has the desired atomic block diagonal form. \end{enumerate} Next we give a brief argument proving the correctness of the above algorithm. First, the algorithm declares $L$ to be an atom iff $L$ is indeed an atom. To see this, we will prove that $L$ is not an atom iff $L''$ is not an atom. The forward direction is obvious. To prove the reverse implication, suppose $L''$ is not an atom. This implies that $L' = W L''$ is not an atom. The matrix $\tilde{L}$ is the linear matrix obtained from $L'$ by setting $M_i(j,k)=0$ for all $i,j,k$. Clearly, $\tilde{L}$ is not an atom as $L'$ is not an atom (the two are related by the invertible linear shift of variables above). Using Lemma \ref{hkv20-main-lemma} it follows that $L$ is not an atom. So we have established that $L$ is not an atom iff $L''$ is not an atom. So if the input linear matrix $L$ is an atom, the algorithm will correctly declare it to be an atom in step 2. Now we argue that we get the correct atomic block diagonal form in the last step of the algorithm. First, in order to make the recursive calls to the Factor procedure on the matrices $C$ and $D$, we need $C$ and $D$ to be right monic, as stated in the claim below. This is proved by the same argument as in the proof of Theorem~\ref{cohnthm}. 
\begin{claim} \label{monic-monic} Let $L \in \FX^{d \times d}$ be a full and right monic linear matrix such that $P'LQ' = \left( \begin{array}{cc} C & 0 \\ E & D \end{array} \right)$ for invertible scalar matrices $P', Q'$, where $C$ and $D$ are linear matrices of dimensions $e \times e$ and $g \times g$, respectively, such that $e+g = d$. Then both $C$ and $D$ are right monic. \end{claim} \begin{comment} \begin{proof} Let $L=A_0 + \sum_{i=1}^n A_i x_i$ where $A_i \in \mathbb{F}^{d \times d}$ for each $i$. Similarly, let $C=C_0 + \sum_{i=1}^n C_i x_i$, $D=D_0 + \sum_{i=1}^n D_i x_i$, and $E=E_0 + \sum_{i=1}^n E_i x_i$ where $C_i \in \mathbb{F}^{e \times e}$, $D_i \in \mathbb{F}^{g \times g}$ and $E_i \in \mathbb{F}^{g \times e}$ for each $i$. As $L$ is right monic, the matrix $[A_1 A_2 \ldots A_n]$ has full row rank which implies $[A'_1 A'_2 \ldots A'_n]$ has full row rank, where $A'_i=PA_iQ$ for each $i$. Since $PA_iQ = A'_i = \left( \begin{array}{cc} C_i & 0 \\ E_i & D_i \end{array} \right)$, it follows that $[C_1 0 C_2 0 \ldots C_n ~0]$ has full row rank, where each $0$ denotes an $e \times g$ sized block of zeros. So $[C_1~C_2~\ldots ~C_n]$ is full row rank and hence $C$ monic. Now by elementary row operations applied to matrix $[A_1 A_2 \ldots A_n]$, we can use the full row rank matrix $[C_1~C_2~\ldots ~C_n]$ (occupying a subset of columns in the top $e$ rows) to drive the matrix $[E_1 E_2 \ldots E_n]$ in the corresponding columns of the bottom $g$ rows to zero without changing the $D_i$ matrices. That means, for an invertible scalar matrix $R$ corresponding to row operations and suitable column permutation matrix $S$ we have $R [A_1 A_2 \ldots A_n] S = \left( \begin{array}{c|c} C_1~C_2~\ldots ~ C_n & 0 \\ \hline ~~~~~~0~~~~~ & D_1~D_2~\ldots ~ D_n \end{array} \right)$ Hence $[D_1 D_2 \ldots D_n]$ has full row rank implying that $D$ is right monic. Similarly, using left monicity of $L$ we can prove that $C$ and $D$ are left monic. \end{proof} \end{comment} The recursive calls Factor($C$) and Factor($D$) obtain matrices $S_1, S'_1, S_2, S'_2$ such that $S_1C S'_1=C'$ and $S_2 D S'_2=D'$ are in atomic block diagonal form. We can write \begin{eqnarray*} \tilde{S}L\tilde{S'} &=& \left( \begin{array}{cc} C & 0 \\ E & D \end{array} \right)\\ &=& \left( \begin{array}{cc} C & 0 \\ 0 & I_g \end{array} \right)\left( \begin{array}{cc} I_e & 0 \\ E & I_g \end{array} \right)\left( \begin{array}{cc} I_e & 0 \\ 0 & D \end{array} \right)\\ &=& (S_1^{-1} \oplus I_g)(C' \oplus I_g)({S'}_1^{-1} \oplus I_g)\left( \begin{array}{cc} I_e & 0 \\ E & I_g \end{array} \right)(I_e \oplus S_2^{-1})(I_e \oplus D')(I_e \oplus {S'}_2^{-1})\\ &=& (S_1^{-1} \oplus I_g)(C' \oplus I_g)(I_e \oplus S_2^{-1})\left( \begin{array}{cc} I_e & 0 \\ S_2ES'_1 & I_g \end{array} \right)({S'}_1^{-1}\oplus I_g)(I_e \oplus D')(I_e \oplus {S'}_2^{-1})\\ &=& (S_1^{-1} \oplus I_g)(I_e \oplus S_2^{-1})(C' \oplus I_g)\left( \begin{array}{cc} I_e & 0 \\ S_2ES'_1 & I_g \end{array} \right)(I_e \oplus D')({S'}_1^{-1}\oplus I_g)(I_e \oplus {S'}_2^{-1})\\ &=& (S_1^{-1} \oplus I_g)(I_e \oplus S_2^{-1})\left( \begin{array}{cc} C' & 0 \\ S_2ES'_1 & D' \end{array} \right)({S'}_1^{-1}\oplus I_g)(I_e \oplus {S'}_2^{-1})\\ &=& (S_1^{-1} \oplus S_2^{-1})\left( \begin{array}{cc} C' & 0 \\ S_2ES'_1 & D' \end{array} \right)({S'}_1^{-1}\oplus {S'}_2^{-1}). \end{eqnarray*} Thus we have \[ (S_1 \oplus S_2) \tilde{S}L\tilde{S'}(S'_1 \oplus S'_2) = \left( \begin{array}{cc} C' & 0 \\ S_2ES'_1 & D' \end{array} \right). 
\] As $C'$ and $D'$ are in atomic block diagonal form, it follows that $\left( \begin{array}{cc} C' & 0 \\ S_2ES'_1 & D' \end{array} \right)$ is also in atomic block diagonal form. Letting $S=(S_1 \oplus S_2) \tilde{S}$ and $S'=\tilde{S'}(S'_1 \oplus S'_2)$, it follows that $SLS'$ is in the desired atomic block diagonal form, which proves the correctness of the Factor procedure. In each call to the procedure (excluding the recursive calls), the algorithm takes $\mathrm{poly}(n,d,\log_2q)$ time. The total number of recursive calls overall is bounded by $d$. Hence, the overall running time is $\mathrm{poly}(n,d,\log_2q)$. This completes the proof of the theorem. \end{proof} For the factorization of $f$, we assume the stably associated full linear matrix $L$ is left monic. After we obtain the atomic block diagonal form as in Equation \ref{eq-block-atom}, we can factorize $L$ into atomic factors by Theorem~\ref{lfact1}. Combined with Equation~\ref{eq4}, we have \begin{equation*} f\oplus I_s = PS'F'_1F'_2\cdots F'_r U'Q, \end{equation*} where each linear matrix $F'_i$ is an atom, $P$ is upper triangular with all $1$'s diagonal, $Q$ is lower triangular with all $1$'s diagonal, $S'$ is scalar invertible, and $U'$ is a unit. Now, applying Lemma \ref{extract} and Theorem \ref{cnzthm}, we obtain the complete factorization of $f$ into irreducible factors. This is summarized in the following. \begin{theorem}\label{comzthm} Let $f\in \FX$ be a polynomial given by an arithmetic formula of size $s$ as input instance of $\prob{FACT}(\mathbb{F}_q)$. Then there is a $\mathrm{poly}(s, \log q,|X|)$ time randomized algorithm that outputs a factorization $f=f_1f_2\cdots f_r$ such that each $f_i$ is irreducible and is output as an algebraic branching program. \end{theorem} Analogous to Corollary \ref{sparse-cor1}, when the polynomial is given in a sparse representation, we have \begin{corollary} \label{sparse-comnz} If $f\in\FX$ is a polynomial given as input in sparse representation (that is, an $\mathbb{F}_q$-linear combination of its monomials), then in randomized polynomial time we can compute a factorization into irreducible factors in sparse representation. \end{corollary} \subsection{Factorization over small finite fields} Finally, we briefly discuss the factorization problem over small finite fields. As explained in Section \ref{small-field}, the two steps in our factoring algorithm requiring randomization can be replaced with deterministic $\mathrm{poly}(s, q, |X|)$ time computation. Furthermore, the matrix shift $(M_1, M_2, \ldots, M_n)$ required for Theorem \ref{lfact2} can also be obtained in deterministic polynomial time, with the entries of the matrices $M_i$ from $\mathbb{F}_q$ for each $i$, as explained in Section~\ref{small-field}. Putting it together, this gives a deterministic factorization algorithm for noncommutative polynomials that are input as arithmetic formulas over $\mathbb{F}_q$. In summary, we have the following. \begin{theorem} Given as input a multivariate polynomial $f\in\mathbb{F}_q\angle{X}$ for a finite field $\mathbb{F}_q$ by a noncommutative arithmetic formula of size $s$, a factorization of $f$ as a product $f=f_1f_2\cdots f_r$ can be computed in deterministic time $\mathrm{poly}(s,q,|X|)$, where each $f_i\in\mathbb{F}_q\angle{X}$ is an irreducible polynomial that is output as an algebraic branching program. 
\end{theorem} \section{Concluding Remarks} In this paper we present a randomized polynomial-time algorithm for the factorization of noncommutative polynomials \emph{over finite fields} that are input as \emph{arithmetic formulas}. The irreducible factors are output as algebraic branching programs. Several open questions arise from our work. We mention two of them. The first question is the complexity of factorization over the rationals of noncommutative polynomials given as arithmetic formulas. Our approach involves the crucial use of Ronyai's algorithm for invariant subspace computation, which turns out to be a hard problem over the rationals. We believe a different approach may be required for the rational case. The use of Higman linearization prevents us from generalizing this approach to noncommutative polynomials given as arithmetic circuits. We do not know any nontrivial complexity upper bound for the factorization problem for noncommutative polynomials given as arithmetic circuits. \appendix \section{Appendix} \subsection{Missing proofs from Section~\ref{prelim}} \begin{proofof}{Theorem~\ref{cohnthm}} Let $C\in\FX^{d\times d}$ be a full and right monic linear matrix. Suppose Equation~\ref{eq-cohnthm} holds for some invertible scalar matrices $S,S'$. Then we can write \[ SCS'= \left( \begin{array}{cc} A & 0 \\ D & B \end{array} \right) = \left( \begin{array}{cc} A & 0 \\ 0 & I \end{array} \right)\cdot \left( \begin{array}{cc} I & 0 \\ D & I \end{array} \right)\cdot \left( \begin{array}{cc} I & 0 \\ 0 & B \end{array} \right). \] Since $C$ is right monic and $S, S'$ are invertible scalar matrices, the linear matrix $SCS'=\left(\begin{array}{cc} A & 0 \\ D & B \end{array}\right)$ is also full and right monic. Writing it as \[ \left( \begin{array}{cc} A & 0 \\ D & B \end{array} \right) = \left( \begin{array}{cc} A_0 & 0 \\ D_0 & B_0 \end{array} \right) + \sum_{i=1}^n \left( \begin{array}{cc} A_i & 0 \\ D_i & B_i \end{array} \right)\cdot x_i, \] it means the matrix \[ \left[ \begin{array}{cc|cc|c|cc} A_1 & 0 & A_2 & 0 & \cdots & A_n & 0 \\ D_1 & B_1 & D_2 & B_2 & \cdots & D_n & B_n \end{array} \right] \] is full row rank. With suitable row operations applied to the above, we can see that both $[A_1 A_2\ldots A_n]$ and $[B_1 B_2 \ldots B_n]$ are full row rank. Therefore, both $A$ and $B$ are full right monic matrices, and hence they are non-units by Lemma~\ref{monic-nonunit}. Hence $A\oplus I$ and $B\oplus I$ are both non-units, which implies that the factorization of $SCS'$ is nontrivial and hence $C$ is not an atom. Conversely, suppose $C$ is not an atom and $C=F\cdot G$ is a nontrivial factorization. That means both $F$ and $G$ are full and non-units. As $C$ is a linear matrix, applying \cite[Lemma 5.8.7]{Cohnfir} we can assume that both $F$ and $G$ are linear matrices. Now, since $F$ is a full linear matrix, by Theorem~\ref{full-monic} (and Remark~\ref{remark-full-monic}) there are a scalar invertible matrix $S_1$ and a polynomial matrix $U_1$, which is a unit, such that $S_1FU_1 = A\oplus I$, where $A$ is \emph{left} monic. Therefore, we have \[ S_1 C = S_1 F U_1 U_1^{-1} G = \left( \begin{array}{cc} A & 0 \\ 0 & I \end{array} \right) \cdot \left( \begin{array}{cc} G'_1 & G'_3 \\ G'_2 & G'_4 \end{array} \right) = \left( \begin{array}{cc} AG'_1 & AG'_3 \\ G'_2 & G'_4 \end{array} \right). \] As $S_1 C$ is a linear matrix and $A$ is a left monic linear matrix, we can assume that $G'_1$ and $G'_3$ are scalar matrices. 
Since $S_1 C$ is full rank, it forces the matrix $[G'_1 G'_3]$ to be full row rank (say $r$, where $A$ is $r\times r$). Therefore, there is an invertible scalar matrix $S'$ such that $[G'_1 G'_3]S' = [I_r 0]$. Putting it together, with $S=S_1$ we get the factorization \[ SCS' = \left( \begin{array}{cc} A & 0 \\ 0 & I \end{array} \right) \cdot \left( \begin{array}{cc} I_r & 0 \\ G''_2 & G''_4 \end{array} \right) = \left( \begin{array}{cc} A & 0 \\ G''_2 & G''_4 \end{array} \right) \] as claimed by the theorem. \end{proofof} \subsection{Proof of Lemma~\ref{hkv20-main-lemma}} We present a self-contained proof of Lemma~\ref{hkv20-main-lemma} of \cite{HKV20}. \begin{definition}\label{def-setminus} Let $U, V \subseteq \mathbb{F}^D$ be subspaces and $d= \dim U$. Fix a basis $u_1, u_2, \ldots, u_\ell \in \mathbb{F}^D$ for $U \cap V$ and extend it to a basis $u_1, u_2, \ldots, u_\ell, u_{\ell+1}, \ldots, u_d$ for $U$. Further, let $u_1, u_2, \ldots, u_D$ be a basis for $\mathbb{F}^D$ obtained by extending the above basis for $U$. Then $U\setminus V$ is defined as $\mathrm{span}(u_{\ell+1}, u_{\ell+2}, \ldots, u_d)$, that is, \[ U\setminus V = \{ \sum_{i= \ell+1}^{d} \alpha_i u_i \mid \alpha_i \in \mathbb{F} \textrm{ for } \ell < i \leq d \}. \] \end{definition} Clearly $\dim (U\setminus V) = \dim U - \dim (U \cap V)$. Notice that although the subspace $U\setminus V$ is basis dependent, the number $\dim (U\setminus V)$ is independent of the construction of $U\setminus V$. \begin{definition}\label{projection} Let $\mathcal{U}=\{U_1, U_2, \ldots, U_d \}$ be a collection of subspaces of $\mathbb{F}^{D}$. For each $i \in [d]$ define $\hat{U_i}^{(\mathcal{U})} = U_i \setminus (\sum_{k \neq i} U_k)$ as above with respect to fixed bases for the subspaces. \end{definition} We first prove a technical lemma, essentially using the inclusion-exclusion principle. \begin{lemma}\label{hkv-rank-lemma} Let $\mathcal{U}= \{U_1, U_2, \ldots, U_d\}$ be a collection of subspaces of $\mathbb{F}^D$ for $d\geq 1$. Then \[ \sum_{i=1}^{d}~ \left[\dim U_i + \dim \hat{U}_i^{(\mathcal{U})} \right]~~ \geq ~~2~\cdot~\dim \sum_{i=1}^d U_i. \] \end{lemma} \begin{proof} The proof will be by induction on $d$. The base case, $d=1$, is obvious. Suppose it is true for all $t <d$; that is, for any subspace collection $\mathcal{V} = \{V_1, V_2, \ldots, V_t\}$ we have \[ \sum_{i=1}^{t}~ \left[\dim V_i + \dim \hat{V}_i^{(\mathcal{V})} \right]~~ \geq ~~2~\cdot ~\dim\sum_{i=1}^t V_i. \] Letting $V_i= U_i$ for $1\leq i \leq d-2$ and $V_{d-1}= U_{d-1}+U_d$ in the above, we have \[ \sum_{i=1}^{d-1}~ \left[\dim V_i + \dim \hat{V}_i^{(\mathcal{V})} \right] ~~\geq ~~2~\cdot~ \dim\sum_{i=1}^{d-1} V_i = 2\cdot\dim\sum_{i=1}^d U_i. \] For the induction we need to show that $\sum_{i=1}^{d-1} (\dim V_i + \dim \hat{V}_i^{(\mathcal{V})})\leq \sum_{i=1}^{d} (\dim U_i + \dim \hat{U}_i^{(\mathcal{U})})$. 
Now, \begin{eqnarray*} \sum_{i=1}^{d-1}~ (\dim V_i + \dim \hat{V}_i^{(\mathcal{V})}) &=& \dim V_{d-1} + \dim \hat{V}_{d-1}^{(\mathcal{V})} + \sum_{i=1}^{d-2}~ \left[ ~ \dim U_i + \dim(U_i \setminus ( U_{d-1}+U_{d} + \sum_{k \neq i, k<d-1}U_k ))~ \right] \\ &=& \dim V_{d-1} + \dim \hat{V}_{d-1}^{(\mathcal{V})} + \sum_{i=1}^{d-2}~ \left[ ~\dim U_i + \dim( U_i \setminus \sum_{k \neq i, k\leq d} U_k) \right] \\ &=& \dim V_{d-1} + \dim \hat{V}_{d-1}^{(\mathcal{V})}+ \sum_{i=1}^{d-2} ~\left[ \dim U_i + \dim \hat{U}_i^{(\mathcal{U})} \right]\\ &=& \dim U_{d-1} + \dim U_d -\dim( U_{d-1} \cap U_d ) + \dim \hat{V}_{d-1}^{(\mathcal{V})}+ \sum_{i=1}^{d-2}~\left [ \dim U_i + \dim \hat{U}_i^{(\mathcal{U})} \right] \\ &=& \left[ \sum_{i=1}^{d} \dim U_i \right] + \left[ \sum_{i=1}^{d-2}~ \dim \hat{U}_i^{(\mathcal{U})} \right] + \dim \hat{V}_{d-1}^{(\mathcal{V})}- \dim (U_{d-1} \cap U_d). \end{eqnarray*} Hence, to complete the proof it suffices to show the following claim. \begin{claim} \[ \dim \hat{V}_{d-1}^{(\mathcal{V})}~ \leq~ \dim \hat{U}_{d-1}^{(\mathcal{U})} + \dim \hat{U}_d^{(\mathcal{U})} + \dim (U_{d-1} \cap U_d) \] \end{claim} \begin{proofof}{Claim} Let $T= U_1 + U_2 + \ldots + U_{d-2}$. Let $D, D_1, D_2$, and $D_3$ denote the dimensions of $T+U_{d-1}+U_d, U_{d-1}, U_d$, and $T$, respectively. We have \begin{eqnarray*} \dim \hat{V}_{d-1}^{(\mathcal{V})} &=& \dim (U_{d-1}+U_d)-\dim ((U_{d-1}+U_d) \cap T)\\ &=&\dim (U_{d-1}+U_d)-\dim (U_{d-1}+U_d)-\dim (T) + \dim (U_{d-1}+U_d+T)\\ &=& D-D_3. \end{eqnarray*} Similarly, \begin{eqnarray*} \dim \hat{U}_{d-1}^{(\mathcal{U})} &=& \dim (U_{d-1}) - \dim (U_{d-1} \cap (T+ U_d))\\ &=&D_1 + \dim (T+U_{d-1}+U_d)- \dim (U_{d-1}) - \dim (T+U_d)\\ &=&D_1 +D -D_1- \dim (T)-\dim (U_d)+\dim (T \cap U_d)\\ &=& D-D_3-D_2 + \dim(T \cap U_d). \end{eqnarray*} Likewise, we also have \[ \dim ( \hat{U}_{d}^{(\mathcal{U})} )=D-D_3-D_1+ \dim (T \cap U_{d-1}). \] It is clear that the claim is equivalent to \[ D \geq D_1 + D_2 + D_3 - \dim (T \cap U_d) - \dim (T \cap U_{d-1})- \dim (U_{d-1} \cap U_d), \] which follows immediately from the Inclusion-Exclusion Principle. \end{proofof} \end{proof} Now we present a complete proof of Lemma \ref{hkv20-main-lemma}, whose proof in \cite{HKV20} is sketchy. {\bf Proof of Lemma \ref{hkv20-main-lemma}.} Let $L=A_0 + \sum_{i=1}^n A_i x_i$, where $A_i \in \mathbb{F}_q^{d \times d}$ for $i \in [n]$. So, $L'= A_0 \otimes I_\ell + \sum_{i=1}^{n} A_i \otimes Y_i$. From standard properties of the Kronecker product of matrices, there are a row permutation matrix $R$ and a column permutation matrix $C$ such that $L''= RL'C = I_\ell \otimes A_0 + \sum_{i=1}^n Y_i \otimes A_i$. So $L'' = I_\ell \otimes A_0 + \sum_{i=1}^n \sum_{1 \leq j, k \leq \ell} (E_{j,k} \otimes A_i)~ y_{i,j,k}$, where $E_{j,k}$ is the matrix with $(j,k)^{th}$ entry one and all other entries equal to zero. We have $G L' H = \left( \begin{array}{c|c} A' & 0 \\ \hline D' & B' \end{array} \right)$, where $A'$ is a $d' \times d'$ linear matrix for $d'>0$ and $B'$ is a $d'' \times d''$ linear matrix with $d' + d'' = d \ell$. Hence, $GR^{-1} L''C^{-1} H = \left( \begin{array}{c|c} A' & 0 \\ \hline D' & B' \end{array} \right)$. Let $GR^{-1}=P_0$ and $C^{-1}H=Q_0$. Let $[ P_1 P_2 \ldots P_\ell]$ be the (full row rank) matrix obtained by picking the top $d'$ rows of $P_0$, where each $P_i$ is a $d' \times d$ scalar matrix. 
Similarly, let $[ Q_1^T Q_2^T \ldots Q_\ell^T]^T$ be the (full column rank) matrix obtained by picking the rightmost $d''$ columns of $Q_0$, where each $Q_i$ is a $d \times d''$ scalar matrix. Clearly, \[ [ P_1 P_2 \ldots P_\ell] L'' [ Q_1^T Q_2^T \ldots Q_\ell^T]^T = 0, \] which implies \[ [ P_1 P_2 \ldots P_\ell]\left[ I_\ell \otimes A_0 + \sum_{i=1}^n \sum_{1 \leq j, k \leq \ell} (E_{j,k} \otimes A_i)~ y_{i,j,k} \right] [ Q_1^T Q_2^T \ldots Q_\ell^T]^T = 0. \] Equating the coefficients of each $y_{i,j,k}$ to zero, we get the following. \begin{equation} \label{eqn-A0} \sum _{i=1}^\ell P_i A_0 Q_i = 0. \end{equation} \begin{equation} \label{eqn-An0} P_j A_i Q_k =0 \textrm{ for each } i>0 \textrm { and } 1\leq j,k \leq \ell. \end{equation} For each $i\in [\ell]$ the matrix $P_i$ is a linear transformation from $\mathbb{F}^d$ to $\mathbb{F}^{d'}$. Let $U_i= \range(P_i)=\{P_i u \mid u \in \mathbb{F}^{d}\}$ for each $i$, and $\mathcal{U}= \{U_1, U_2, \ldots, U_{\ell} \}$. Let $T_i = \sum _{j \neq i} U_j$ for $i\in [\ell]$. Clearly, $U_1+ U_2 + \ldots + U_\ell= \mathbb{F}_q^{d'}$ as $[ P_1 P_2\ldots P_\ell]$ is a full row rank matrix. For $i\in[\ell]$, let $\hat{P}_i$ be a linear transformation from $\mathbb{F}^{d'}$ to $\mathbb{F}^{d'}$ defined as follows. Fix a basis $u_{i,1}, u_{i,2}, \ldots, u_{i,r_i}$ of the subspace $U_i \cap T_i$. Extend it to a basis $u_{i,1}, u_{i,2}, \ldots, u_{i,r_i}, u_{i,r_i+1}, \ldots, u_{i,k_i}$, $k_i \geq r_i$, for $U_i$. Further, extend this basis of $U_i$ to a complete basis $u_{i,1}, u_{i,2}, \ldots, u_{i,d'}$, for $\mathbb{F}^{d'}$, where $d' \geq k_i$. For any vector $u = \sum_{j=1}^{d'} \alpha_j u_{i,j}$ in $\mathbb{F}^{d'}$ let $\hat{P}_i (u) = \sum_{j=r_i+1}^{k_i} \alpha_j u_{i,j}$. So, $\hat{P}_i(u)$ is the vector obtained by projecting to the subspace $U_i\setminus T_i$ (which is defined w.r.t.\ the above basis). Hence, $\hat{P}_i(u_{i,t})= u_{i,t}$ for $r_i < t \leq k_i$ and $\hat{P}_i(u_{i,t})= 0$ otherwise. This defines a $d' \times d'$ matrix for each $\hat{P}_i$ for $i \in [\ell]$, which we also refer to as $\hat{P}_i$ by abuse of notation. From Definition \ref{projection}, it follows that $\range(\hat{P}_i) = \hat{U}_i^{(\mathcal{U})}$, so $\op{rank}(\hat{P}_i)=\dim \hat{U}_i^{(\mathcal{U})}$. Clearly, $\op{rank}(\hat{P}_i P_i) = \op{rank}(\hat{P}_i)$ for $i \in [\ell]$. Now, by Lemma \ref{hkv-rank-lemma} applied to the collection $\mathcal{U}= \{U_1, U_2, \ldots, U_{\ell} \}$ we get \[ \sum_{i=1}^{\ell}~ \left[ \op{rank} P_i ~+~ \op{rank} \hat{P}_i \right]~~ \geq ~~ 2\cdot ~ \dim \sum_{i=1}^{\ell} \range(P_i) = 2d'. \] Similarly, each $Q_i:\mathbb{F}^{d''}\to \mathbb{F}^d$ is a linear map. We can define the corresponding linear maps $\hat{Q}_i : \mathbb{F}^{d''} \to \mathbb{F}^{d''}$ and the associated $d'' \times d''$ matrices, and we will have $\op{rank} Q_i\hat{Q}_i = \op{rank} \hat{Q_i}$ for each $i$. Applying the above argument, we get \[ \sum_{i=1}^{\ell}~ \left[ \op{rank} Q_i ~+~ \op{rank} \hat{Q}_i \right]~~ \geq ~~ 2\cdot ~ \dim \sum_{i=1}^{\ell} \range(Q_i) = 2d''. \] Adding the two inequalities yields \begin{equation}\label{eqn-php} \sum_{i=1}^{\ell}~ \left[ \left( \op{rank} \hat{P}_i ~+~ \op{rank} Q_i \right)~ + \left( \op{rank} {P}_i ~+~ \op{rank} \hat{Q}_i \right) \right] ~~\geq ~~ 2\cdot ~ (d'+d'')= 2d\ell. \end{equation} Using Equation \ref{eqn-php}, we would like to prove the following claim. 
\begin{claim} \label{php-simplify} There exists an index $i \in[\ell]$ such that either $\op{rank} \hat{P}_i ~+~ \op{rank} Q_i \geq d$ with $\op{rank} \hat{P}_i$, $\op{rank} Q_i>0$, or $\op{rank} {P}_i ~+~ \op{rank} \hat{Q}_i \geq d$ with $\op{rank} {P}_i$, $\op{rank} \hat{Q}_i>0$. \end{claim} First we complete the proof of Lemma \ref{hkv20-main-lemma} assuming Claim \ref{php-simplify}. Without loss of generality, assume that the index $i=1$ satisfies Claim \ref{php-simplify} and, further, that $\op{rank} \hat{P}_1 + \op{rank} Q_1 \geq d$ with $\op{rank} \hat{P}_1, \op{rank} Q_1 >0$ (the other case is handled similarly). Equation \ref{eqn-A0} implies that $\range(P_1 A_0 Q_1) \subseteq T_1$; also, clearly, $\range(P_1 A_0 Q_1) \subseteq \range(P_1)= U_1$. This implies that $\range(P_1 A_0 Q_1)\subseteq U_1 \cap T_1$. Hence $\hat{P}_1 P_1 A_0 Q_1 =0$. Equation \ref{eqn-An0} implies that $\hat{P}_1 P_j A_i Q_k= 0$ for all $i \geq 1$ and $1\leq j, k \leq \ell$. So we get $\hat{P}_1 P_1 L Q_1 = 0$. Now $\op{rank}(\hat{P}_1P_1) = \op{rank}(\hat{P}_1)\geq 1$, $\op{rank}(Q_1) \geq 1$ and $\op{rank}(\hat{P}_1 P_1)+\op{rank}(Q_1) \geq d$. It follows that there exist $0 < e' \leq \op{rank}(\hat{P}_1 P_1)$ and $0 < e'' \leq \op{rank}(Q_1)$ with $e'+e'' = d$. By choosing $e'$ linearly independent rows of $\hat{P}_1P_1$ and $e''$ linearly independent columns of $Q_1$ we obtain a full row rank matrix $U' \in \mathbb{F}_q^{e' \times d}$ and a full column rank matrix $V' \in \mathbb{F}_q^{d \times e''}$, respectively. Now we extend $U'$ to a $d \times d$ matrix $U$ by adding any $d-e'$ linearly independent rows such that $U$ is invertible. Similarly we extend $V'$ to a $d \times d$ matrix $V$ by adding any $d-e''$ linearly independent columns such that $V$ is invertible. We clearly have $ULV =\left( \begin{array}{c|c} A & 0 \\ \hline D & B \end{array} \right)$ for some linear matrices $A, D, B$ such that $A \in \FX^{e' \times e'}$ and $B \in \FX^{e'' \times e''}$ with $0< e', e''$ and $e' + e'' =d$, as required. This completes the proof of Lemma \ref{hkv20-main-lemma}. \\ \begin{proofof}{Claim~\ref{php-simplify}} Each $P_i$ has $d$ columns and each $Q_i$ has $d$ rows. Thus, $\op{rank} P_i\le d$ and $\op{rank} Q_i\le d$. Also, $\op{rank} \hat{P}_i \leq \op{rank} P_i$ and $\op{rank} \hat{Q}_i \leq \op{rank} Q_i$. Hence, $\op{rank} \hat{P}_i + \op{rank} Q_i\le 2d$ and $\op{rank} P_i + \op{rank} \hat{Q}_i\le 2d$ for each $i$. It follows from Inequality \ref{eqn-php} that if there is an $i$ for which either $\op{rank} \hat{P}_i + \op{rank} Q_i <d$ or $\op{rank} {P}_i + \op{rank} \hat{Q}_i< d$, then there must be an index $j$ such that either $\op{rank} \hat{P}_j + \op{rank} Q_j > d$ or $\op{rank} {P}_j + \op{rank} \hat{Q}_j > d$. Two cases arise: \begin{enumerate} \item for all $j\in [\ell]$, $\op{rank} \hat{P}_j + \op{rank} Q_j = d$ and $\op{rank} {P}_j + \op{rank} \hat{Q}_j = d$. \item there is $j \in [\ell]$ with either $\op{rank} \hat{P}_j + \op{rank} Q_j > d$ or $\op{rank} {P}_j + \op{rank} \hat{Q}_j > d$. \end{enumerate} Suppose the first case occurs. It has the following two subcases. \begin{enumerate} \item[(a)] for all $j \in[\ell]$, $\op{rank} \hat{P}_j=0$ or $\op{rank} Q_j = 0$, and $\op{rank} {P}_j=0$ or $\op{rank} \hat{Q}_j=0$. \item[(b)] there is $j\in [\ell]$ such that $\op{rank} \hat{P}_j, \op{rank} Q_j >0$ or $\op{rank} {P}_j, \op{rank} \hat{Q}_j> 0$. \end{enumerate} First, consider Case 1(a). Note that $\op{rank} \hat{P}_j=0$ implies $\op{rank} Q_j =d$. 
Similarly, $\op{rank} Q_j=0$ implies $\op{rank} \hat{P}_j=d$, which implies $\op{rank} P_j =d$. Thus, either $\op{rank} P_j =d$ or $\op{rank} Q_j =d$ for every $j$. Moreover, Case 1(a) also implies $\op{rank} P_j, \op{rank} Q_j\in \{0,d\}$ for each $j$. Now, as $[P_1 P_2 \ldots P_\ell]$ has full row rank and $[Q_1^T Q_2^T~\ldots Q_\ell^T]^T$ has full column rank, Case 1(a) implies that there are indices $j,k \in [\ell]$ such that $P_j$ and $Q_k$ are both rank $d$ matrices. As $P_j$ is a full column rank matrix, there is a $d \times d'$ matrix $P_j'$ such that $P_j' P_j = I_d$. Similarly, there is a $d'' \times d$ matrix $Q_k'$ such that $Q_k Q_k'=I_d$. Now from Equation \ref{eqn-An0} we know that $P_j A_i Q_k = 0$ for all $i$, $1\leq i \leq n$. Hence $P_j' P_j A_i Q_k Q_k' = 0$ for all $i$, $1\leq i \leq n$. Consequently, $A_i=0$ for $1\le i\le n$, which contradicts the hypothesis of the lemma. Hence Case 1(a) cannot occur. If Case 1(b) or Case 2 holds, then for some index $j\in [\ell]$ either $\op{rank} \hat{P}_j + \op{rank} Q_j \geq d$ with $\op{rank} \hat{P}_j$, $\op{rank} Q_j >0$, or $\op{rank} {P}_j + \op{rank} \hat{Q}_j \geq d$ with $\op{rank} {P}_j$, $\op{rank} \hat{Q}_j>0$, which proves the claim. \end{proofof} \end{document}
\begin{document} \title{Book crossing numbers of the complete graph\ and small local convex crossing numbers} \begin{abstract} A \emph{$ k $-page book drawing} of a graph $ G $ is a drawing of $ G $ on $ k $ halfplanes with common boundary $ l $, a line, where the vertices are on $ l $ and the edges cannot cross $ l $. The \emph{$ k $-page book crossing number} of the graph $ G $, denoted by $ \nu_k(G) $, is the minimum number of edge-crossings over all $ k $-page book drawings of $ G $. Let $G=K_n$ be the complete graph on $n$ vertices. We improve the lower bounds on $ \nu_k(K_n) $ for all $ k\geq 14 $ and determine $ \nu_k(K_n) $ whenever $ 2 < n/k \leq 3 $. Our proofs rely on bounding the number of edges in convex graphs with small local crossing numbers. In particular, we determine the maximum number of edges that a graph with local crossing number at most $ \ell $ can have for $ \ell\leq 4 $. \end{abstract} \section{Introduction} In a \emph{$ k $-page book drawing} of a graph $ G $, the vertices of $ G $ are placed on a line $ l $ and each edge is completely contained in one of $ k $ fixed halfplanes whose boundary is $ l $. The line $ l $ is called the \emph{spine} and the halfplanes are called \emph{pages}. The \emph{$ k $-page book crossing number} of the graph $ G $, denoted by $ \nu_k(G) $, is the minimum number of edge-crossings over all $ k $-page book drawings of $ G $. Book crossing numbers have been studied in relation to their applications in VLSI designs \cite{CLR87, L83}. We are concerned with the $ k $-page book crossing number of the complete graph $ K_n $. In 1964, Bla\v{z}ek and Koman \cite{BC64} described $ k $-page book drawings of $ K_n $ with few crossings and proposed the problem of determining $ \nu_k(K_n) $. They only described their construction in detail for $ k=2 $, explicitly gave the exact number of crossings in their construction for $ k=2 $ and $ 3 $, and indicated that their construction could be generalized to larger values of $ k $, implicitly conjecturing that their construction achieves $ \nu_k(K_n) $. In 1994, Damiani, D'Antona, and Salemi \cite{DaDASa} described constructions in detail using adjacency matrices, but did not explicitly compute their exact crossing numbers. Two years later, Shahrokhi, S\'{y}kora, Sz\'{e}kely, and Vrt'o \cite{SSSV96} provided a geometric description of $ k $-page book drawings of $ K_n $ and bounded their number of crossings above, showing that \[ \nu_k(K_n)\leq \frac{2}{k^2}\left(1-\frac{1}{2k}\right)\binom{n}{4}+\frac{n^3}{2k}. \] In 2013, De Klerk, Pasechnik, and Salazar \cite{DPS13} (see Proposition 5.1) gave another construction and computed its exact number of crossings using the geometric approach in \cite{SSSV96}. It can be expressed as \begin{equation}\label{eq:zk1} Z_k(n):=(n \bmod k)\cdot F\left( \left\lfloor {\dfrac{n}{k}}\right\rfloor+1,n \right)+(k-(n \bmod k))\cdot F\left( \left\lfloor {\dfrac{n}{k}}\right\rfloor,n \right), \end{equation} where \begin{equation}\label{eq:zk2} F(r,n):=\frac{r}{24}(r^2-3r+2)(2n-3-r). \end{equation} Then $ \nu_k (K_n)\leq Z_k(n). $ All the constructions in \cite{DaDASa}, \cite{SSSV96}, and \cite{DPS13} generalize the original Bla\v{z}ek-Koman construction. They coincide when $ k $ divides $ n $ but are slightly different otherwise. They are widely believed to be asymptotically correct. 
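To illustrate the formula, consider for instance $ n=14 $ and $ k=4 $, an instance we return to in Section \ref{sect:const}. Here $ n \bmod k = 2 $ and $ \lfloor n/k \rfloor = 3 $, so by (\ref{eq:zk1}) and (\ref{eq:zk2}), \[ Z_4(14)= 2\cdot F(4,14)+2\cdot F(3,14) = 2\cdot\frac{4}{24}\cdot 6\cdot 21 + 2\cdot\frac{3}{24}\cdot 2\cdot 22 = 42+11 = 53. \]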
In fact, the constructions in \cite{DaDASa} and \cite{DPS13} have the same number of crossings (which is in some cases smaller than that in \cite{SSSV96}), giving rise to the following conjecture on the $ k $-page book crossing number of $ K_n $ (as presented in \cite{DPS13}). \begin{conj}\label{conj:k-page} For any positive integers $ k $ and $ n $, $$ \nu_k (K_n)=Z_k(n).$$ \end{conj} In Section \ref{sect:const}, we provide several other constructions achieving $ Z_k(n) $ crossings. \'Abrego et al. \cite{AAFRS2} proved Conjecture \ref{conj:k-page} for $ k=2 $, which can be rewritten as \[ \nu_2 (K_n)=Z_2(n)=\frac{1}{4} \left\lfloor\frac{\mathstrut n}{\mathstrut 2}\right\rfloor \left\lfloor\frac{n-1}{2}\right\rfloor \left\lfloor\frac{n-2}{2}\right\rfloor \left\lfloor\frac{n-3}{2}\right\rfloor. \] The only other previously known exact values of $ \nu_k(K_n) $ are $ \nu_k (K_n)=0 $ for $ k>\ceil {n/2} $, as $ Z_k(n)=0 $ in this case, and those in clear cells in Table \ref{table:known_values} \cite{DPS13} (for which the conjecture holds). In Section \ref{sect:exact}, we prove the conjecture for an infinite family of values, namely for any $ k $ and $ n $ such that $ 2< n/k \leq 3$ (see Theorem \ref{theorem:range2to3}), and give improved lower bounds for $ n/k > 3$ (see Theorem \ref{theorem:rangeall}). \begin{table}[h] \begin{center} \scalebox{0.7}{ \begin{tabular}{ | c | c| c | c| c | c| c| c | c| c| c| c| c| c| c| c| c| c|c|c|c|} \hline $ n $ & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13& 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22& $ \cdots $& $ n $\\ \hline $\nu_2(K_n) $ & 1 & 3 & 9 & 18 & 36 & 60 & 100 & 150 & 225 & 315 & 441 & 588 & 784 & 1008 & 1296 & 1620 & 2025 & 2475 & & $ Z_2(n) $ \\ \hline $\nu_3(K_n) $ & 0 & 0 & 2 & 5 & 9 & 20 & 34 & 51 & 83 & 121 & 165 & - & - & - & - & - & - & - & & -\\ \hline $\nu_4(K_n) $ & 0 & 0 & 0 & 0 & 3 & 7 & 12 & 18 & 34 & - & - & - & - & - & - & - & - & - & & -\\ \hline $\nu_5(K_n) $ & 0 & 0 & 0 & 0 & 0 & 0 & 4 & 9 & \cellcolor[gray]{0.8}15 & \cellcolor[gray]{0.8}22 & \cellcolor[gray]{0.8}30& - & - & - & - & - & - & - & & -\\ \hline $\nu_6(K_n) $ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cellcolor[gray]{0.8}5 & \cellcolor[gray]{0.8}11 & \cellcolor[gray]{0.8}18 & \cellcolor[gray]{0.8}26 & \cellcolor[gray]{0.8}35 & \cellcolor[gray]{0.8}45 & - & - & -& - & & -\\ \hline $\nu_7(K_n) $ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cellcolor[gray]{0.8}6 & \cellcolor[gray]{0.8}13 & \cellcolor[gray]{0.8}21 & \cellcolor[gray]{0.8}30 & \cellcolor[gray]{0.8}40 & \cellcolor[gray]{0.8}51 & \cellcolor[gray]{0.8}63& - & & -\\ \hline \vdots& & & & & & & & & & & & & & \vdots& & & & & & \\ \hline $\nu_k(K_n) $ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & $ \cdots $ & 0 & 0 & 0 & 0 &\multicolumn{7}{|c|}{\cellcolor[gray]{0.8}$ \nu_k(K_n)=Z_k(n) $ for $ 2k<n\leq 3k $} & -\\ \hline \end{tabular} } \end{center} \caption{Known values of $ \nu_k(K_n). $ New values in this paper are shaded.} \label{table:known_values} \end{table} In terms of general lower bounds, Shahrokhi et al. \cite{SSSV96} proved a bound for $ \nu_k(G) $ for any graph $ G $. Using this bound for $ K_n $ gives \[ \nu_k(K_n)\geq \frac{n(n-1)^3}{296k^2}-\frac{27kn}{37}=\frac{3}{37k^2}\binom{n}{4}+O(n^3). \] This general bound was improved by De Klerk et al. 
\cite{DPS13} to \begin{equation}\label{eq:previous_lower_bound} \nu_k (K_n) \geq \left\{ \begin{array}{ll} \frac{3}{119}\binom{n}{4}+O(n^3) & \textup{if } k=4,\\ \frac{2}{(3k-2)^2}\binom{n}{4}& \textup{if } k \textup{ is even, and } n\geq k^2/2+3k-1,\\ \frac{2}{(3k+1)^2}\binom{n}{4}& \textup{if } k \textup{ is odd, and } n\geq k^2+2k-7/2. \end{array} \right. \end{equation} Using semidefinite programming, they further improved the lower bound for several values of $ k\leq 20 $. In Section \ref{sect:asympt}, we prove the following theorem that improves these lower bounds for $ k\geq 14 $ (see Table \ref{table:improvement}), but more importantly, it improves the asymptotic bound (\ref{eq:previous_lower_bound}) for every $ k $. \begin{thm}\label{theorem:asympt_improvement} For any integers $ k\geq 3 $ and $ n\geq \floor {111k/20} $, \begin{multline*} \nu_k(K_n)\geq \frac{8000(4107k^2-5416k+1309)}{37(111k-17)(111k-77)(37k-19)(3k-1)}\binom{n}{4} =\left(\frac{8000}{12321}\cdot\frac{1}{k^2}+\Theta\left(\frac{1}{k^3}\right)\right)\binom{n}{4}. \end{multline*} \end{thm} In contrast, $ Z_k(n) $ was asymptotically estimated in \cite{DPS13}, \[ \nu_k (K_n)\leq Z_k(n)=\left(\left(\frac{2}{k^2}\right)\left(1-\frac{1}{2k}\right)\right)\binom{n}{4}+O(n^3). \] This improves the ratio of the lower to the upper bound on $ \lim_{n\to \infty}\frac{\nu_k(K_n)}{\binom{n}{4}} $ from approximately $ \frac{1}{9} \approx 0.1111$ to $ \frac{4000}{12321} \approx 0.3246$. All our results (exact values and asymptotic bounds) heavily rely on a different problem for convex graphs that is interesting on its own right. \begin{table}[h] \begin{center} \scalebox{0.8}{ \begin{tabular}{ |D|C|C|C|C|} \hline $ k $ & Lower bound in \cite{DPS13} & New lower bound & Upper bound & \parbox[t]{3cm}{Ratio of lower to\\upper bound} \\ \hline $14 $ & $ 3.2930\times 10^{-3} $ & $ 3.4342\times 10^{-3} $ & $ 9.8396\times 10^{-3} $ & $ 0.3490 $ \\ \hline $15 $ & $ 2.5870\times 10^{-3} $ & $ 2.9852\times 10^{-3} $ & $ 8.5925\times 10^{-3} $ & $ 0.3474 $ \\ \hline $16 $ & $ 2.0348\times 10^{-3} $ & $ 2.6193\times 10^{-3} $ & $ 7.5683\times 10^{-3} $ & $ 0.3461 $ \\ \hline $17 $ & $ 1.6023\times 10^{-3} $ & $ 2.3166\times 10^{-3} $ & $ 6.7168\times 10^{-3} $ & $ 0.3449 $ \\ \hline $18 $ & $ 1.2562\times 10^{-3} $ & $ 2.0621\times 10^{-3} $ & $ 6.0013\times 10^{-3} $ & $ 0.3436 $ \\ \hline $19 $ & $ 9.8258\times 10^{-4} $ & $ 1.8490\times 10^{-3} $ & $ 5.3943\times 10^{-3} $ & $ 0.3428 $ \\ \hline $20 $ & $ 7.7482\times 10^{-4} $ & $ 1.6653\times 10^{-3} $ & $ 4.8750\times 10^{-3} $ & $ 0.3416 $ \\ \hline \end{tabular} } \end{center} \caption{ Bound comparison for $ \displaystyle\lim _{n\to \infty}\frac{\nu_k(K_n)}{\binom{n}{4}}. $} \label{table:improvement} \end{table} There are several models to study crossing numbers in $ k $-page book drawings. In the \emph{circular model}, a given $ k $-page book drawing of a graph $ G $ is drawn on the plane as follows. The spine is now a circle $ C $. The vertices of $ G $ are placed on $ C $, typically forming the set of vertices of a regular polygon inscribed in $ C $. The edges are diagonals or sides (straight line segments) of the polygon that are $ k $-colored in such a way that two edges get the same color if and only if they originally were on the same page. Using this model, the problem of determining $ \nu_k(K_n) $ is equivalent to finding the minimum number of monochromatic crossings in a $ k $-edge coloring of a circular drawing of $ K_n $. 
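In this model, crossings have a simple description: writing $ v_1,v_2,\ldots,v_n $ for the vertices in circular order, two edges $ v_iv_j $ and $ v_kv_l $ on the same page, with $ i<j $ and $ k<l $, cross if and only if their endpoints alternate around the circle, that is, $ i<k<j<l $ or $ k<i<l<j $.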
The subgraph induced by each of the colors is known as a \emph{convex} or \emph{outerplanar graph}. We denote by $ G_n $ the complete convex graph (that is, the convex drawing of $ K_n $). Since we are interested in crossings, it is often convenient to disregard the sides of the underlying polygon as edges. We denote by $ D_n $ the complete convex graph minus all the edges corresponding to the sides of the underlying polygon. Let $e_\ell(n)$ be the maximum number of edges over all convex subgraphs of $ D_n $ with \emph{local crossing number} at most $ \ell $, that is, such that each edge is crossed at most $ \ell $ times. Local crossing numbers of convex graphs were studied by Kainen \cite{Kai73,Kai90}. The problem of maximizing the number of edges over convex graphs satisfying certain crossing conditions was studied by Brass, K\'arolyi, and Valtr \cite{BKV03}. Functions equivalent to $ e_\ell(n) $ for general drawings of graphs in the plane were studied by Ackerman, Pach, Radoi\u{c}i\'c, Tardos, and T\'oth \cite{A15,PRTT06,PT97}. In Section \ref{sect:theoremL}, we prove the following theorem that relates the functions $ e_\ell(n) $ to the $k$-page book crossing numbers $\nu_k(K_n) $. Theorem \ref{theorem:RangeGeneralIntro} is used to prove Conjecture \ref{conj:k-page} for $2k<n\leq3k$, the asymptotic bound of Theorem \ref{theorem:asympt_improvement}, and the lower bound improvements in Table \ref{table:improvement}. \begin{thm}\label{theorem:RangeGeneralIntro} Let $ n \geq 3$ and $ k \geq 3 $ be fixed integers. Then, for all integers $ m\geq 0$, \[ \nu_{k}(K_n) \geq \frac{m}{2}n(n-3)-k\sum_{\ell=0}^{m-1}e_\ell(n). \] \end{thm} In Section \ref{sect:maxedges}, we bound $ e_\ell(n) $ above for every $ \ell $ and $ n $, and examine the behavior of the optimal sets for $ \ell $ fixed and $ n $ large enough. This gives rise to a conjecture on the value of $ e_\ell(n) $. Finally, we prove the following theorem that provides the exact values of $ e_\ell(n) $ for $ \ell\leq 4 $, which are fundamental for the results in Section \ref{sect:kbook}. \begin{thm}\label{theorem:epsilons} For $ 0\leq \ell \leq 4 $ and any $ n\geq \max(3,\ell) $, $$e_{\ell}(n) =C_{\ell}(n-3)+\delta_{\ell}(n),$$ where $ C_{\ell} $ and $ \delta_{\ell} $ are as given in the following table. \end{thm} \begin{table}[h] \begin{center} \scalebox{.8} { {\renewcommand{\arraystretch}{1.5} \begin{tabular}{c|c|c|c|c|c} $\ell $ & 0 & 1 & 2 & 3 & 4 \\ \hline $C_\ell$ & 1 & $3/2$ & 2 & $9/4$ & $5/2$ \\ \hline $ \delta_\ell(n) $ & 0 & $ \begin{array}{ll} 1/2 & \textup{if } n \equiv 0 \pmod 2,\\ 0 & \textup{otherwise.} \end{array} $ & $ \begin{array}{ll} 1 & \textup{if } n \equiv 2 \pmod 3,\\ 0 & \textup{otherwise.} \end{array} $ & $ \begin{array}{ll} -1/4 & \textup{if } n \equiv 0 \pmod 4,\\ 1/2 & \textup{if } n \equiv 1 \pmod 4,\\ 5/4 & \textup{if } n \equiv 2 \pmod 4,\\ 0 & \textup{if } n \equiv 3 \pmod 4. \end{array} $ & $ \begin{array}{ll} 1/2 & \textup{if } n \equiv 0 \pmod 4,\\ 0 & \textup{if } n \equiv 1 \pmod 4,\\ 3/2 & \textup{if } n \equiv 2 \pmod 4,\\ 1 & \textup{if } n \equiv 3 \pmod 4. \end{array} $ \\ \end{tabular}} \quad } \end{center} \end{table} \section{$ k $-page book crossing number of $ K_n $}\label{sect:kbook} \subsection{The constructions}\label{sect:const} Bla\v{z}ek and Koman \cite{BC64} described their construction in detail only for $ k=2 $, indicating that it could be generalized to larger values of $ k $, and gave the exact number of crossings for $ k=2 $ and $ 3 $. 
It is not clear what generalization they had in mind, as any of the constructions in \cite{DaDASa}, \cite{SSSV96}, and \cite{DPS13} could be such a generalization. These three constructions coincide when $ k $ divides $ n $, but handle the remainder in all other cases in slightly different ways. Using the geometric approach in \cite{SSSV96}, De Klerk et al. \cite{DPS12, DPS13} described a construction, which we call the \emph{DPS construction}, and computed its exact number of crossings, $ Z_k(n) $. They actually claimed to be counting the number of crossings in the construction found in \cite{DaDASa}, which they call the \emph{DDS construction}, indicating that they were only using a different model. These two constructions do have the same number of crossings, but they are slightly different when $ n $ is not a multiple of $ k $. For example, Figure \ref{fig:14_4comparison} shows the differences between the DDS and the DPS constructions when $ n=14 $ and $ k=4 $; both have $ Z_4(14)=53 $ crossings. \begin{figure} \caption{A comparison between the DDS construction \cite{DaDASa} and the DPS construction \cite{DPS12, DPS13} for $ n=14 $ and $ k=4 $.\label{figure:14_4comparison}\label{fig:14_4comparison}} \end{figure} In fact, we describe several other $ k $-page book drawings with $ Z_k(n) $ crossings. Using the circular model, each of these constructions is a $ k $-edge coloring of the complete convex graph on $ n $ vertices with exactly $ Z_k(n) $ crossings. Let $g_m$ be the set of edges in $ G_n $ whose endpoints $v_i$ and $v_j$ satisfy $m\equiv i+j \pmod n$. Then $ g_m $ is a matching (not necessarily a perfect matching) whose edges are all parallel to a side of the polygon or to one of its shortest diagonals. Note that the set of matchings $\{g_0,g_1,\ldots,g_{n-1}\}$ is a partition of the edges of $G_n$. For the pair $ (n,k) $, write $n = qk+r$, where $ q$ and $ r $ are integers and $0<r\leq k$. The construction in \cite{DPS12, DPS13} assigns $ q+1 $ matchings in $\{g_0,g_1,\ldots,g_{n-1}\}$ to the first $r$ pages and $q$ to the remaining $k-r$ pages \emph{in order}. More precisely, the color classes are \[ \left\lbrace\bigcup _{i=0}^q g_{c(q+1)+i} \colon 0 \leq c <r\right\rbrace \cup \left\lbrace\bigcup _{i=0}^{q-1} g_{r(q+1)+(c-r)q+i} \colon r \leq c <k\right\rbrace. \] This construction can be modified in two ways that preserve the number of crossings. First, the color assignment can be done in any order as long as the matchings in each color class are consecutive and there are exactly $ r $ classes consisting of $q+1$ matchings and $ k-r $ classes consisting of $q$ matchings. Second, if a color class with $q+1$ matchings, say $ \{g_{i},g_{i+1},\cdots,g_{i+q} \}$, is followed by a color class with $q$ matchings, say $ \{g_{i+q+1},g_{i+q+2},\cdots,g_{i+2q} \}$ (or vice versa), then any subset of the edges of $ g_{i+q} $ can be moved to the other class. It can be verified that any of these constructions has exactly $ Z_k(n) $ crossings. 
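For instance, for $ n=14 $ and $ k=4 $ we have $ q=3 $ and $ r=2 $, so the color classes above are $ \{g_0,g_1,g_2,g_3\} $, $ \{g_4,g_5,g_6,g_7\} $, $ \{g_8,g_9,g_{10}\} $, and $ \{g_{11},g_{12},g_{13}\} $. In general, the classes consist of $ r(q+1)+(k-r)q=n $ consecutive matchings in total, so they indeed partition the edges of $ G_n $.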
Inequality \ref{eq:CrossingsMaxEdges} is trivially true for $ m=0 $ as the right hand side of (\ref{eq:CrossingsMaxEdges}) is 0. Let $ m\geq 1 $ and let $ H $ be a subgraph of $ D_n $. We consider two cases. \paragraph{Case 1:} $ e(H)\leq e_{m-1}(n) $. By induction, \begin{eqnarray} \mathop{\rm cr} (H) &\geq &(m-1)e(H)-\sum_{\ell=0}^{m-2}e_\ell(n) \nonumber \\ & = & me(H)-\sum_{\ell=0}^{m-1}e_\ell(n)+(e_{m-1}(n)-e(H)) \geq me(H)-\sum_{\ell=0}^{m-1}e_\ell(n). \nonumber \end{eqnarray} \paragraph{Case 2:} $ e(H) > e_{m-1}(n) $. Let $ c= e(H) - e_{m-1}(n)$ and $ H_0=H $. For $ 1\leq i \leq c $, recursively define $ a_i $ and $ H_i $ as follows. The graph $ H_{i-1} $ has $ e(H)-(i-1)\geq e(H)-(c-1)>e_{m-1} (n)$ edges. Thus, by definition of $ e_{m-1} $, there exists an edge $ a_i $ that is crossed at least $ m $ times in $ H_{i-1} $. Let $ H_i $ be the graph obtained from $ H_{i-1} $ by deleting $ a_i $. We have the following. \begin{equation*} \mathop{\rm cr} (H) = \sum_{i=1}^{c}(\text{number of crossings of $ a_i $ in $ H_{i-1} $})+\mathop{\rm cr} (H_c) \geq mc+\mathop{\rm cr} (H_c). \end{equation*} By induction, $ \mathop{\rm cr} (H_c)\geq (m-1)e(H_c)-\sum_{\ell=0}^{m-2}e_\ell(n)$ and thus \begin{equation*} \mathop{\rm cr} (H) \geq mc+ (m-1)(e(H)-c)-\sum_{\ell=0}^{m-2}e_\ell(n) = me(H)-\sum_{\ell=0}^{m-1}e_\ell(n). \end{equation*} \end{proof} For any integers $ k\geq 1 $, $ n\geq 3 $, and $ m \geq 0 $, define \[ L_{k,n}(m)=\frac{m}{2}n(n-3)-k\sum_{\ell=0}^{m-1}e_\ell(n). \] Then Theorem \ref{theorem:RangeGeneralIntro} can be restated as follows, and we prove it using Theorem \ref{theorem:CrossingsMaxEdges}. \begin{thm}\label{theorem:RangeGeneral} Let $ n \geq 3$ and $ k \geq 3 $ be fixed integers. Then, for all integers $ m\geq 0$, \[ \nu_{k}(K_n) \geq L_{k,n}(m). \] \end{thm} \begin{proof} Consider any $ k $-coloring of the edges of $ D_n $ using colors $ 1,2,\ldots,k $. Let $ H_i $ be the graph on the same set of vertices as $ D_n $ whose edges are those of color $ i $. By Theorem \ref{theorem:CrossingsMaxEdges}, \begin{equation}\label{eq:twostepinequality} \mathop{\rm cr}(H_i)\geq me(H_i)-\sum_{\ell=0}^{m-1}e_\ell(n). \end{equation} Add (\ref{eq:twostepinequality}) over all colors, $ 1\leq i \leq k $, to show that the number of monochromatic crossings in the $ k $-edge coloring is at least \begin{eqnarray} \sum_{i=1}^{k} \left(me(H_i) -\sum_{\ell=0}^{m-1}e_\ell(n)\right) &=& m\left({n \choose 2} - n\right) -k\sum_{\ell=0}^{m-1}e_\ell(n)\nonumber\\ &=&\frac{m}{2}n(n-3)-k\sum_{\ell=0}^{m-1}e_\ell(n).\nonumber \end{eqnarray} \end{proof} \subsection{Exact values of $ \nu_k(K_n) $ and new lower bounds}\label{sect:exact} Since Theorem \ref{theorem:RangeGeneral} works for any nonnegative integer $ m $, we start by maximizing $ L_{k,n}(m) $ for fixed $ k $ and $ n $ to obtain the best possible lower bound provided by Theorem \ref{theorem:RangeGeneral}. \begin{prop}\label{prop:RangeGeneral} For fixed integers $ k\geq 1 $ and $ n\geq 3 $, the value of $ L_{k,n}(m) $, defined over all nonnegative integers $ m $, is maximized by the smallest $ m $ such that $ e_{m}(n) \geq \frac{n(n-3)}{2k}$. \end{prop} \begin{proof} Note that $ L_{k,n}(m)=\sum_{\ell=0}^{m-1} \left(\frac{n}{2}(n-3)-ke_\ell(n)\right) =k\sum_{\ell=0}^{m-1} \left(\frac{n(n-3)}{2k}-e_\ell(n)\right) $. So $ L_{k,n} $ is increasing as long as $ e_{m-1}(n)<\frac{n(n-3)}{2k} $ and nonincreasing afterwards. Thus the maximum is achieved when $ m $ is the smallest integer such that $e_{m}(n) \geq \frac{n(n-3)}{2k}$. 
Note that this value in fact exists because $ e_\ell(n)=\binom{n}{2}-n> \frac{n(n-3)}{2k} $ whenever $ \ell > \floor{\frac{n-2}{2}}\ceil{\frac{n-2}{2}} $. \end{proof} We now use Proposition \ref{prop:RangeGeneral} to explicitly state the best lower bounds guaranteed by Theorem \ref{theorem:RangeGeneral} using the values of $ e_\ell(n) $ obtained in Section \ref{sect:maxedges}. In what follows, $ \delta_1(n) $, $ \delta_2(n) $, $ \delta_3(n) $, and $ \delta_4(n) $ are defined as in Theorem \ref{theorem:epsilons}. \begin{thm}\label{theorem:rangeall} For any integers $ k\geq 3 $ and $ n> 2k $, \begin{equation}\label{ineq:rangeall} \nu_{k}(K_n) \geq \left\{ \begin{array}{ll} \mathstrut \frac{1}{2}(n-3)(n-2k) & \textup{if } 2k <n\leq 3k, \\ (n-3)(n-\frac{5}{2}k)-k\delta_1(n) & \textup{if } 3k <n\leq 4k,\\ \frac{3}{2}(n-3)(n-3k)-k(\delta_1+\delta_2)(n) & \textup{if } 4k <n\leq \floor{4.5k} +\beta\\ 2(n-3)(n-\frac{27}{8}k)-k(\delta_1+\delta_2+\delta_3)(n) & \textup{if } \floor{4.5k} +\beta <n\leq 5k\\ \frac{5}{2}(n-3)(n-\frac{37}{10}k)-k(\delta_1+\delta_2+\delta_3+\delta_4)(n) & \textup{otherwise.} \end{array} \right. \end{equation} where $ \beta= \left\{ \begin{array}{ll} -1 & \hspace{-.1in}\textup{if } k \textup{ even and } 4|n,\\ 1 & \hspace{-.1in}\textup{if } k \textup{ odd and } 4|n-2,\\ 0 & \hspace{-.1in}\textup{otherwise,} \end{array} \right. $ \end{thm} \begin{proof} The $ m ^{th}$ row on the right hand side of Inequality (\ref{ineq:rangeall}) is equal to $ L_{k,n}(m) $, using the values of $ e_0(n) $, $ e_1(n) $, $ e_2(n) $, $ e_3(n)$, and $ e_4(n) $, stated in Theorem \ref{theorem:epsilons}. In each case, the range for $ n $ corresponds to the values of $ n $ for which $ e_{m-1}(n)<\frac{n(n-3)}{2k} \leq e_m(n)$, thus guaranteeing the best possible bound obtained from Theorem \ref{theorem:RangeGeneral} by Proposition \ref{prop:RangeGeneral}. For example, by Theorems \ref{theorem:epsilons} and \ref{theorem:RangeGeneral}, \[ \nu_k(K_n) \geq L_{k,n}(1)=\frac{1}{2}n(n-3)-ke_0(n)= \frac{1}{2}n(n-3)-k(n-3)=\frac{1}{2}(n-3)(n-2k). \] Although this bound holds for any $ n $ and is tight for $2 < \frac{n}{k} \leq 3$ (as stated in the next result), it can actually be improved for larger values of $ n $. For instance, using $ m=2 $ in Theorem \ref{theorem:RangeGeneral} together with Theorem \ref{theorem:epsilons} yields \[ \nu_k(K_n) \geq L_{k,n}(2)=n(n-3)-k\left((n-3)+\frac{3}{2}(n-3)+\delta_1(n)\right)=(n-3)(n-\frac{5}{2}k)-\delta_1(n)k. \] By Proposition \ref{prop:RangeGeneral}, this is the best bound guaranteed by Theorem \ref{theorem:RangeGeneral} whenever $ 3<n/k\leq 4 $ because $ e_1(n)=\frac{3}{2}(n-3)+\delta_1(n)<\frac{n(n-3)}{2k} \leq 2(n-3)\leq e_2(n)$ in this range. \end{proof} The first part of Theorem \ref{theorem:rangeall} settles Conjecture \ref{conj:k-page} when $2 < \frac{n}{k} \leq 3$. \begin{thm}\label{theorem:range2to3} If $2 < \frac{n}{k} \leq 3$, then $$\nu_{k}(K_n) = \frac{1}{2}(n-3)(n-2k).$$ \end{thm} \begin{proof} Suppose $2 < \frac{n}{k} \leq 3$. Since $\nu_{k}(K_n)\leq Z_k(n) $, we just need to show that $ Z_k(n) $ is indeed equal to $ \frac{1}{2}(n-3)(n-2k) $ for these values of $ n $ and $ k $. By (\ref{eq:zk2}), $ F(2,n)=0 $ and $ F(3,n)=\frac{1}{2}(n-3) $. If $ n=3k $, then by (\ref{eq:zk1}), \[ Z_k(n)= k\cdot F (3,n)=\frac{n}{3}\cdot\frac{1}{2}(n-3)=\frac{1}{2}(n-3)(n-2k). \] If $2 < \dfrac{n}{k} < 3$, then $ n=2k+r $, where $ r=n \bmod k $. By (\ref{eq:zk1}), \[ Z_k(n)= r\cdot F (3,n)+(k-r) \cdot F (2,n)=(n-2k)\cdot\frac{1}{2}(n-3). 
\] \end{proof} \subsection{Improving the asymptotics for fixed $ k $}\label{sect:asympt} The bound in Theorem \ref{theorem:rangeall} becomes weaker as $ n/k $ grows. We now use a different approach to improve this bound when $ n $ is large with respect to $ k $. For fixed $ k $, it is known that $$ \frac{\nu_k(K_n)}{\binom{n}{4}} \geq \frac{\nu_k(K_{n'})}{\binom{n'}{4}} $$ for all $ n\geq n'\geq 4 $. By Theorem \ref{theorem:RangeGeneral}, it follows that for all $ n\geq 2k $, $$ \frac{\nu_k(K_n)}{\binom{n}{4}} \geq \max_{\substack{1\leq m \leq 5 \\ 2k\leq n'\leq n}}\frac{L_{k,n'}(m)}{\binom{n'}{4}}. $$ For fixed $ k $, it would be ideal to find the $ n' $ achieving the previous maximum. We use $ n'=\floor{\frac{111}{20}k} $, which gives the actual maximum when $ k \equiv 6 $, $ 9 $, $ 11 $, $ 19 $, $ 24 $, $ 32 $, $ 37 $, $ 45 $, $ 50 $, $ 58 $, $ 60 $, $ 63 $, $ 73 $, $ 76 \pmod {80}$ and is close to the maximum for all other values of $ k $. The universal bound given in Theorem \ref{theorem:asympt_improvement} is obtained when $ k\equiv 8 \pmod {80} $ and it is the minimum of the maxima over all classes mod $ 80 $. Finally, using $ n'=\floor{\frac{81}{16}k} $ for $ 14\leq k \leq 20$, we obtain the bounds in Table \ref{table:n_15_to_20}, which improve the previous bounds for these values of $ k $, as compared in Table \ref{table:improvement}. \begin{table}[h] \begin{center} \scalebox{1}{ \begin{tabular}{ DEE |DEE} $ k $ & $ \nu_k(K_n) \geq $ & for all $ n\geq $ & $ k $ & $ \nu_k(K_n) \geq $ & for all $ n\geq $ \\ \hline $ 14 $ & $\frac{4406}{1282975}\binom{n}{4} $ & $ 77 $ & $ 18 $ & $\frac{8086}{3921225}\binom{n}{4} $ & $ 99 $ \\ $ 15 $ & $\frac{640}{214389}\binom{n}{4} $ & $ 83 $ & $ 19 $ & $\frac{8839}{4780230}\binom{n}{4} $ & $ 105 $ \\ $ 16 $ & $ \frac{3054}{1165945}\binom{n}{4} $ & $ 88 $ & $ 20 $ & $ \frac{85}{51039}\binom{n}{4} $ & $ 111 $\\ $ 17 $ & $ \frac{6764}{2919735}\binom{n}{4}$ & $ 94 $ & & & \\ \end{tabular} } \end{center} \caption{New lower bounds for $ \nu_k(K_n)$ when $ 14\leq k \leq 20 $.} \label{table:n_15_to_20} \end{table} \section{Maximizing the number of edges}\label{sect:maxedges} In this section, we only consider convex graphs that do not use the sides of the polygon, that is, subgraphs of $ D_n $. Given such a graph $ G $ and two of its vertices $ x $ and $ y $, the segment $ xy $ is either an \emph{edge} of $ G $ (which must be a diagonal of the polygon), a \emph{side} of $ G $ (a side of the polygon, and thus not an edge of $ G $), or a \emph{nonedge} of $ G $ (a diagonal of the polygon that is not an edge of $ G $). Let $ e(G) $, $ cr(G) $, and $ \lc(G) $ denote the number of edges, number of crossings, and local crossing number of $ G $, respectively. We label the vertices of $ G$ by $ v_1 , v_2,\ldots, v_n$ in clockwise order. As defined before, $e_\ell(n)$ denotes the maximum number of edges over all subgraphs of $ D_n $ with local crossing number at most $ \ell $. In Section \ref{sect:bound_e}, we bound the function $e_{\ell}(n)$ for every $\ell$ and provide several conjectures on its general behavior and exact value. In Section \ref{sect:exact_e}, we determine the exact value of $e_{\ell}(n)$ for $\ell \leq 4$. \subsection{Bounding $e_{\ell}(n)$}\label{sect:bound_e} We start bounding $e_\ell(n)$ below by creating large convex graphs using copies of smaller convex graphs.
Given two subgraphs $ H_1 $ and $ H_2 $ of $ D_n $, a \emph{parallel composition} (by a side) of $ H_1 $ and $ H_2 $ is obtained from the disjoint union of $ H_1 $ and $ H_2 $ by merging one side of each graph and adding the merged side as an edge (Figure \ref{fig:FigCopiesD4}). If the original graphs have $ n_1 $ and $ n_2 $ vertices, respectively, a parallel composition is a subgraph of $ D_n $ with $ n=n_1+n_2-2 $ vertices and $ e(H_1) +e(H_2)+1$ edges. There are several parallel compositions of two graphs but all of them have the same number of vertices, number of edges, and local crossing number. Any parallel composition of $ H_1 $ and $ H_2 $ is denoted by $H_1 \oslash H_2$; and any parallel composition of $ q $ copies of $ H $ is denoted by $ qH $. \begin{figure} \caption{(a) A parallel composition $ 3D_4 \oslash D_3 $ (three copies of $D_4$ and one copy of $D_3$). The thick lines are the merged sides. (b) A second instance of $ 3D_4 \oslash D_3 $. Note that the merged edges do not have to be parallel. (c) An instance of $ 4D_5 \oslash D_4 $. } \label{fig:FigCopiesD4} \end{figure} \begin{lem}\label{lem:maxedgesupperboundgen} Suppose $ H $ and $ R $ are subgraphs of $ D_{n'} $ and $ D_r $, respectively, both with local crossing number at most $ \ell $ and such that $ n=q(n'-2)+r$ for some integers $ q\geq 0 $ and $ r\geq 2$. Then \begin{equation*}\label{Ineq:UpperMaxEdgesgen} e_\ell(n) \geq e(qH\oslash R)=(n-r)\frac{e(H)+1}{n'-2}+a(R), \end{equation*} where $ a(R)=e(R) $ if $ r>2 $, and $ a(R)=-1 $ if $ r=2 $. \end{lem} \begin{proof} Note that $ qH\oslash R $ has $ q(n'-2)+r=n $ vertices and local crossing number at most $ \ell $. Also $ qH\oslash R $ has $ qe(H)+e(R) $ edges coming from the $ q $ copies of $ H $ and the copy of $ R $, plus $ q $ merged edges if $ r>2 $ or $ q-1 $ merged edges if $ r=2 $. So \[ e_\ell(n)\geq e(qH\oslash R)=q(e(H)+1)+a(R)=(n-r)\frac{e(H)+1}{n'-2}+a(R). \] \end{proof} In order to obtain a lower bound for any $ n $ and $ \ell $, we use Lemma \ref{lem:maxedgesupperboundgen} in the particular case when $ H $ is a complete or almost complete graph. For any $ n' $ and $ 0\leq i \leq \max (0,\floor {\frac{n'-4}{2}}) $, let $ D_{n',i} $ be the graph obtained from $ D_{n'} $ by removing the set of $ i $ or $ i+1 $ main diagonals $ E_{n',i}$ defined as (see Figure \ref{fig:FigCopiesD6} and note that $ D_{n',0}=D_{n'} $) $$E_{n',i}= \left\{ \begin{array}{ll} \emptyset& \textup{if } i=0, \\ \{v_{2t}v_{2t+\frac{n'-1}{2}}:0\leq t \leq i\} & \textup{if } n' \textup{ is odd and } 0<i\leq \frac{n'-5}{2}, \\ \{v_{2t}v_{2t+\frac{n'}{2}}:0\leq t \leq i-1\} & \textup{if } n' \textup{ is even and } 0< i\leq \frac{n'}{4}, \\ \hspace{-.05in}\begin{array}{l} \{v_{2t}v_{2t+\frac{n'}{2}}:0\leq t \leq \frac{n'}{4}-1\}\\ \cup \{v_{2t+1}v_{2t+1+\frac{n'}{2}}:0\leq t \leq i-\floor{\frac{n'}{4}}\} \end{array} & \textup{if } n' \textup{ is even and } \frac{n'}{4}< i \leq \frac{n'-4}{2}. \end{array} \right. $$ \begin{thm}\label{th:maxedgesupperbound} For integers $ n \geq 3 $ and $ \ell\geq 0 $, let $ n',q $, and $ r $ be integers such that $ n'=2+\max(1,\ceil{2\sqrt{\ell}}) $, $ n=q(n'-2)+r $, and $ 2\leq r < n' $.
Then \begin{equation}\label{Ineq:UpperMaxEdges1} e_\ell(n) \geq e(qD_{n',i}\oslash D_r) = \frac{1}{2}\left(n'-1-\frac{2(i+\delta)}{n'-2}\right)(n-r)+\binom{r}{2}-r, \end{equation} where $ i=\left\lfloor \left(\frac{n'-2}{2}\right)^2\right \rfloor-\ell $ and $\delta= \left\{ \begin{array}{ll} 0 & \textup{if } n' \textup{ is even and } i\leq n'/4, \textup{ or } n' \textup{ is odd and } i=0,\\ 1 & \textup{otherwise.} \end{array} \right. $ \end{thm} \begin{figure} \caption{(a) $ D_{6,1} $.} \label{fig:FigCopiesD6} \end{figure} \begin{proof} First note that in fact $ 0\leq i \leq \max (0,\floor {\frac{n'-4}{2}}) $. It can be checked that $$ e(D_{n',i})= \binom{n'}{2}-n'-i-\delta \text{\hspace{.2in} and \hspace{.2in}} \lc(D_{n',i})=\left\lfloor \left(\frac{n'-2}{2}\right)^2\right \rfloor-i. $$ Also, \begin{eqnarray} \lc (D_r) &\leq & \lc (D_{n'-1}) =\left \lfloor\left(\frac{n'-3}{2}\right)^2\right \rfloor\nonumber\\ &\leq &\left \lfloor\left(\frac{n'-2}{2}\right)^2 \right \rfloor-\max \left( 0,\left \lfloor\frac{n'-4}{2} \right \rfloor \right)\leq \left \lfloor\left(\frac{n'-2}{2}\right)^2 \right \rfloor -i=\ell. \nonumber \end{eqnarray} Then Lemma \ref{lem:maxedgesupperboundgen} implies \begin{eqnarray} e_\ell(n)&\geq& e(qD_{n',i}\oslash D_r)=(n-r)\frac{\binom{n'}{2}-n'-i-\delta +1}{n'-2}+\binom{r}{2}-r\nonumber\\ &=& (n-r)\frac{e(D_{n',i})+1}{n'-2}+a(D_r) = \frac{1}{2}\left(n'-1-\frac{2(i+\delta)}{n'-2}\right)(n-r)+\binom{r}{2}-r.\nonumber \end{eqnarray} \end{proof} For fixed $ \ell $, Inequality (\ref{Ineq:UpperMaxEdges1}) can be expressed as \begin{equation}\label{Ineq:UpperMaxEdges2} e_\ell(n) \geq C_\ell\cdot n+\Theta(1), \end{equation} where $$C_\ell= \left\{ \begin{array}{ll} \frac{1}{2}+\frac{\ceil{2\sqrt{\ell}\:}}{4} +\frac{\ell}{\ceil{2\sqrt{\ell}\:}}& \textup{if } \ceil{2\sqrt{\ell}\:} \textup{ is even and } \ceil{2\sqrt{\ell}\:}^2-\ceil{2\sqrt{\ell}\:}\leq 4\ell+2,\\ \frac{1}{2}+\frac{\ceil{2\sqrt{\ell}\:}}{4} +\frac{\ell-1}{\ceil{2\sqrt{\ell}\:}}& \textup{if } \ceil{2\sqrt{\ell}\:} \textup{ is even and } \ceil{2\sqrt{\ell}\:}^2-\ceil{2\sqrt{\ell}\:}> 4\ell+2,\\ \frac{1}{2}+\frac{\ceil{2\sqrt{\ell}\:}}{2}& \textup{if } \ceil{2\sqrt{\ell}\:} \textup{ is odd and } \ceil{2\sqrt{\ell}\:}^2 < 4\ell+5,\\ \frac{1}{2}+\frac{\ceil{2\sqrt{\ell}\:}}{4} +\frac{\ell-3/4}{\ceil{2\sqrt{\ell}\:}}& \textup{if } \ceil{2\sqrt{\ell}\:} \textup{ is odd and } \ceil{2\sqrt{\ell}\:}^2\geq 4\ell+5. \end{array} \right. $$ Theorem \ref{th:maxedgesupperbound} is tight for $ \ell\leq 3 $ but not necessarily for $ \ell\geq 4 $, as shown by Theorem \ref{theorem:epsilons}. For example, for $ \ell=4 $ and $ n=7 $ or $ 8 $, the graphs $ S_7 , S_7',$ and $ S_8 $ in Figure \ref{fig:FigGraphsC4} have one more edge than the lower bound in Theorem \ref{th:maxedgesupperbound}. However, this discrepancy does not improve the coefficient of $ n $ in Theorem \ref{th:maxedgesupperbound}, as the factor $ \frac{e(H)+1}{n'-2} $ in Lemma \ref{lem:maxedgesupperboundgen} is larger when $ H=D_6 $ than when $ H =S_7,S_7' $, or $ S_8 $. In other words, $ S_7 , S_7',$ and $ S_8 $ only slightly improve Theorem \ref{th:maxedgesupperbound} when they are used as the ``remainder'' graph $ R $, as shown in Theorem \ref{theorem:epsilons}. We believe that for $ \ell $ fixed and any $ n $ large enough, there are optimal constructions that are parallel compositions of smaller graphs.
More precisely, \begin{conj}\label{conjecture:maxedge1} For each integer $\ell\geq 0 $ there is a positive integer $ N_\ell $ such that for all integers $ n\geq N_\ell$ there is a subgraph $ G $ of $ D_n $ with local crossing number $ \ell $ such that $ G $ has a crossing-free edge and $ e(G)=e_\ell(n) $. \end{conj} \begin{figure} \caption{(a-b) The graphs $ S_7 $ and $ S_7' $, the unique subgraphs of $ D_7 $ with $ 11 $ edges and local crossing number $ 4 $. (c) The graph $ S_8 $, a subgraph of $ D_8 $ with $ 13 $ edges and local crossing number $ 4 $. } \label{fig:FigGraphsC4} \end{figure} The validity of this conjecture would imply the following statement. \begin{conj}\label{conjecture:maxedge2} For an integer $ \ell\geq 0 $ and $ N_\ell $ defined as in the previous conjecture, let $$ M_\ell=\max\left\{\frac{e_\ell(n)+1}{n-2}: 3\leq n\leq N_\ell-1\right\}. $$ Then \begin{equation*} e_\ell(n) = M_\ell(n-2)+\Theta (1). \end{equation*} \end{conj} \begin{proof}(Assuming Conjecture \ref{conjecture:maxedge1}) We first prove that $ e_\ell(n)\leq M_\ell(n-2)-1 $ by induction on $ n $. If $ n\leq N_\ell-1 $, then $ \frac{e_\ell(n)+1}{n-2}\leq M_\ell $ and so $ e_\ell(n)\leq M_\ell(n-2)-1 $. If $ n\geq N_\ell $, then by Conjecture \ref{conjecture:maxedge1}, there is a subgraph $ G $ of $ D_n $ with local crossing number $ \ell $ such that $ G $ has a crossing-free edge, say $ v_1v_t $, and $ e(G)=e_\ell(n) $. Consider the subgraphs $ G_1 $ and $ G_2 $ of $ G $ induced by the sets of vertices $ \{v_1,v_2,\cdots,v_t\} $ and $ \{v_t,v_{t+1},v_{t+2},\cdots,v_n,v_1\} $, respectively, and removing the edge $ v_1v_t $. Then $ G_1 $ and $ G_2 $ are subgraphs of $ D_t $ and $ D_{n-t+2} $, respectively. Since $ t\leq n-1 $ and $ n-t+2\leq n-1 $, then by induction, $$ e_\ell(n)= e(G)=e(G_1)+e(G_2)+1\leq (M_\ell(t-2)-1)+(M_\ell(n-t)-1)+1=M_\ell(n-2)-1. $$ Finally, by Lemma \ref{lem:maxedgesupperboundgen}, $ e_\ell(n)\geq M_\ell(n-2)-M_\ell(N_\ell-2)-1 $. \end{proof} On the other hand, $ \mathop{\rm cr} (G)\geq \frac{m^3}{27n^2} $ for any convex graph with $ n $ vertices and $ m $ edges \cite{SSSV04}. So if $ G $ is a subgraph of $ D_n $ such that $ \lc (G)=\ell $ and $ e(G)=e_\ell(n) $, then \begin{equation}\label{Ineq:CrossingLemma1} \mathop{\rm cr}(G)\geq \frac{e_\ell(n)^3}{27n^2}. \end{equation} Also, because each edge of $ G $ is crossed at most $ \ell $ times, \begin{eqnarray}\label{Ineq:CrossingLemma2} \mathop{\rm cr}(G)\leq \frac{e_\ell(n)\ell}{2}. \end{eqnarray} Inequalities (\ref{Ineq:UpperMaxEdges2}), (\ref{Ineq:CrossingLemma1}), and (\ref{Ineq:CrossingLemma2}) together imply $$ \sqrt{\ell}n +\Theta (1)\leq C_\ell\cdot n +\Theta (1)\leq e_\ell(n) \leq \sqrt{\frac{27\ell}{2}}n<3.675\sqrt{\ell}n. $$ We believe that the lower bound is correct, leading to the following conjecture. \begin{conj}\label{conjecture:maxedge3} For fixed $ \ell \geq 0 $ and any $ n\geq 3 $, $ e_\ell(n)=C_\ell\cdot n+O(1) $. \end{conj} If Conjectures \ref{conjecture:maxedge1}, \ref{conjecture:maxedge2}, and \ref{conjecture:maxedge3} were true, then $ M_\ell=C_\ell $. \subsection{Exact values of $e_{\ell}(n)$ for $\ell \leq 4$}\label{sect:exact_e} In this section, we prove Theorem \ref{theorem:epsilons}, which verifies Conjectures \ref{conjecture:maxedge1}, \ref{conjecture:maxedge2} and \ref{conjecture:maxedge3} for $ \ell\leq 4 $. In these cases, $ N_\ell=3$, $4$, $ 6$, $ 7$, and $ 8$; and $C_\ell=M_\ell=1$, $3/2$, $2$, $9/4$, and $5/2$ for $\ell=0$, $1$, $2$, $3$, and $4$, respectively.
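As a quick consistency check, these constants agree with the expression for $ C_\ell $ displayed after Theorem \ref{th:maxedgesupperbound}: for $ \ell=1,2,3,4 $ we have $ \ceil{2\sqrt{\ell}\:}=2,3,4,4 $, and the corresponding case of that expression gives \[ C_1=\tfrac{1}{2}+\tfrac{2}{4}+\tfrac{1}{2}=\tfrac{3}{2},\qquad C_2=\tfrac{1}{2}+\tfrac{3}{2}=2,\qquad C_3=\tfrac{1}{2}+\tfrac{4}{4}+\tfrac{3}{4}=\tfrac{9}{4},\qquad C_4=\tfrac{1}{2}+\tfrac{4}{4}+\tfrac{4}{4}=\tfrac{5}{2}, \] in agreement with the values of $ M_\ell $ listed above.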
In fact, Inequality \ref{Ineq:UpperMaxEdges1} is tight for $ \ell=0 $ and $\ell=1$. Similar results for $ \ell=0 $ and 1 were proved in \cite{BKV03}, Theorems 3 and 8. Theorem 3 in \cite{BKV03} proves that the maximum number of edges in a plane subgraph of $ G_n $ (instead of $ D_n $) is $ 2n-3 $. This is equivalent to $ e_0(n)=n-3 $ in Theorem \ref{theorem:epsilons}, as the sides of the polygon are included as possible edges, but their proof is different. Theorem 8 in \cite{BKV03}, however, is somewhat different from our result for $ e_1(n)=\frac{3}{2}(n-3)+\delta_1(n) $. It shows that the largest number of edges in a subgraph of $ G_n $ such that any two edges sharing a vertex cannot both be crossed by a common edge is $\floor*{\frac{5}{2}n-4}=n+\ceil*{\frac{3}{2}(n-3)}$. Recall that this result uses the sides of the polygon. Let $ G $ be a convex graph. The \emph{crossing graph} of $ G $, denoted by $ G^{\otimes} $, is the graph whose vertices are the edges of $ G $ and two vertices of $ G^{\otimes} $ are adjacent in $ G^{\otimes} $ if the corresponding edges in $ G $ cross. We start by proving the following lemma. \begin{lem}\label{lemma:nocycles3} Let $ G $ be a subgraph of $ D_n $, $ n\geq 3 $. If $ G^{\otimes} $ has no cycles, then $ e(G)\leq 2n-6 $. \end{lem} \begin{proof} Let $ e^*(n) $ be the maximum number of edges in a subgraph $ G $ of $ D_n $ such that there are no cycles in $ G^{\otimes} $. We actually prove the identity $ e^*(n) =2n-6$. The graph whose edges are $ v_1v_i $ and $ v_{i-1}v_{i+1}$ for $ 3\leq i \leq n-1 $ (Figure \ref{fig:best_e3_nocycles}a) has $ 2n-6 $ edges and satisfies the conditions of the lemma, showing that $ e^*(n) \geq 2n-6$. We prove that $ e^*(n) \leq 2n-6$ by induction on $ n $. The result is clearly true if $ n=3 $ or $ 4 $. Let $ G $ be a graph with the required properties on $ n\geq 5 $ vertices and with the maximum number of edges. Because there are no cycles in $ G^{\otimes} $, it follows that there must be an edge $ uv $ in $ G $ crossed at most once. Consider the subgraphs $ G_1 $ and $ G_2 $ of $ G $ on each side of $ uv $, each including the vertices $ u $ and $ v $ but not the edge $ uv $. Let $ n_1 $ and $ n_2 $ be the number of vertices of $ G_1 $ and $ G_2 $, respectively. Note that the graphs $ G_1 $ and $ G_2 $ inherit the conditions of the lemma from $ G $ and so they have at most $ 2n_1-6 $ and $ 2n_2-6 $ edges, respectively. By induction, and since $ n_1+n_2=n+2 $ and the only edges that are not part of either $ G_1 $ or $ G_2 $ are $ uv $ and the edge (if any) crossing $ uv $, $$ e(G)\leq e(G_1)+e(G_2)+2\leq 2n_1-6 +2n_2-6 +2=2n-6. $$ \end{proof} \begin{figure} \caption{(a) Optimal graph for $ e^*(n). $ (b) The corners (dark shade) and the polygon (light shade) whose vertices are the crossings in the cycle $ C $.} \label{fig:best_e3_nocycles} \end{figure} \subsubsection{Proof of Theorem \ref{theorem:epsilons}.} \begin{proof} The inequality $ e_{\ell}(n) \geq C_{\ell}(n-3)+\delta_{\ell}(n) $ holds by Theorem \ref{th:maxedgesupperbound} for $ \ell\leq 3 $ and by Lemma \ref{lem:maxedgesupperboundgen} for $ \ell =4 $ with $ H=D_6 $ and $ R=S_8,D_5,D_6,$ or $S_7 $ for $ n \equiv 0,1,2,$ or $3 \pmod 4 $, respectively. We prove the inequality $e_\ell(n) \leq C_\ell(n-3)+\delta_\ell(n)$ by induction on $ n $. This can be easily verified for $ \ell\leq n \leq \ell+3 $. Assume $ n\geq\ell+4 $ and let $G$ be a subgraph of $D_n$ with $n$ vertices and local crossing number at most $ \ell $. We consider three main cases. 
\paragraph{Case 1.} Suppose $ G $ has a crossing-free diagonal, say $ v_1v_t $ with $ 3\leq t \leq n-1 $. Consider the two subgraphs $ G_1 $ and $ G_2 $ induced by the sets of vertices $ \{v_1,v_2,\ldots,v_t\} $ and $ \{v_t,v_{t+1},\ldots,v_{n-1},v_n,v_1\} $, respectively, with the edge $ v_1v_t $ removed from each. Then $ G_1 $ and $ G_2 $ are subgraphs of $ D_t $ and $ D_{n+2-t} $, respectively, with at most $ \ell $ crossings per edge. Therefore, \begin{eqnarray} \nonumber e(G)& \leq & e(G_1)+e(G_2)+1 \leq e_\ell(t) + e_\ell(n-t+2) + 1\\ \nonumber &= & C_\ell(t-3)+\delta_\ell(t)+C_\ell(n-t+2-3)+\delta_\ell(n-t+2)+1\\ \nonumber &= & \big(C_\ell(n-3)+\delta_\ell(n)\big)+\big(\delta_\ell(t)+\delta_\ell(n-t+2)-\delta_\ell(n)-C_\ell+1\big). \end{eqnarray} It can be verified that the inequality $ \delta_\ell(t)+\delta_\ell(n-t+2)\leq \delta_\ell(n)+C_\ell-1 $ holds for all $ \ell \leq 4 $ and any integers $ n $ and $ t $. Case 1 always holds for $ \ell=0 $ and $ n\geq 4 $. We claim that Case 1 also always holds for $ \ell=1$ and $ n\geq 5$. Indeed, we can assume without loss of generality that for some indices $ 1<i_1<i_2<i_3<i_4\leq n $ the edges $ v_{i_1}v_{i_3} $ and $ v_{i_2}v_{i_4} $ are in $ G $, that is, vertex $ v_1 $ does not participate in this crossing. In this case, the diagonal $ v_{i_4}v_{i_1} $ cannot be crossed by any edge in $ G $. This is because such an edge would also intersect $ v_{i_1}v_{i_3} $ or $ v_{i_2}v_{i_4} $, but $ v_{i_1}v_{i_3} $ and $ v_{i_2}v_{i_4} $ already cross each other and so they cannot cross any other edge. This concludes the proof for $ \ell \leq 1 $. We now assume that $ 2\leq \ell \leq 4 $ and that $ G $ has no crossing-free diagonals. \paragraph{Case 2.} Suppose $ G^{\otimes} $ has no cycles. Then Lemma \ref{lemma:nocycles3} implies that $ e(G) \leq 2n-6 \leq C_\ell(n-3)+\delta_\ell(n) $ for $ \ell\geq 2$. \paragraph{Case 3.} Suppose that $ G^{\otimes} $ has a cycle. In this case, we modify $ G $ to obtain a graph $ G' $ on the same vertex-set, with at least as many edges as $ G $, with local crossing number at most $ \ell $, and with a crossing-free diagonal, so that $ e(G)\leq e(G') \leq C_\ell(n-3)+\delta_\ell(n) $ by Case 1. Given a set of vertices $ U $, we denote by $ \conv(U) $ the convex hull of $ U $ and by $ \bd(U) $ the boundary of $ \conv(U) $. We say that $ G $ has a \emph{valid replacement} if there is a proper subset $ V_0 $ of the vertices such that the set $ E_0 $ of edges intersecting the interior of $ \conv(V_0) $ satisfies that $ |E_0|\leq C_\ell(|V_0|-3)+\delta_\ell(|V_0|)+b_0 $, where $ b_0 $ is the number of nonedges on $ \partial\conv(V_0) $. In this case, we say that $V_0$ \emph{generates} a valid replacement and write $ \gen(V_0)=E_0 $. If $ G $ has a valid replacement generated by $ V_0 $, then we obtain $ G' $ by removing $ E_0 $ (leaving the interior of $ \conv(V_0)$ empty), adding any nonedges of $ G $ on $ \partial \conv(V_0) $, and adding a copy $ H_0 $ of a graph with vertex-set $ V_0 $, local crossing number at most $ \ell $, and $ C_\ell(|V_0|-3)+\delta_\ell(|V_0|) $ edges (which we have proved exists). Note that $ G' $ has at least as many edges as $ G $, has local crossing number at most $ \ell $ (because the added edges are contained in $ \conv(V_0) $ and so they do not cross any edges of $ G $ outside $ E_0 $), and has a crossing-free diagonal (namely, any diagonal on $ \partial\conv(V_0) $ that is not a side; the inequality $ |V_0|<n $ is guaranteed if there is an edge in $ E_0 $ crossing $ \partial\conv(V_0) $).
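For instance, when $ \ell=4 $ and $ |V_0|=6 $ one can take $ H_0=D_6 $ itself: it has $ \binom{6}{2}-6=9 $ edges and $ \lc(D_6)=4 $, and $ 9=C_4(6-3)+\delta_4(6)=e_4(6) $ (one of the base cases verified above), so any set $ V_0 $ of six vertices with $ |\gen(V_0)|\leq 9+b_0 $ generates a valid replacement. This is the replacement used repeatedly in the appendix.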
Let $ C $ be the smallest cycle in $ G^{\otimes} $ and $ j $ its size. $ C $ corresponds to a sequence of edges $ a_1,a_2,\ldots,a_j $ in $ G $ such that edge $ a_i $, with endpoints $ p_i $ and $ q_i $, crosses the edges $ a_{i-1} $ and $ a_{i+1} $ (the subindices are taken mod $ j $). The edges $ a_{i-1} $ and $ a_{i+1} $ can be incident to the same vertex or disjoint. In the first case, we say that the triangle formed by $ a_{i-1} $, $ a_i $, and $ a_{i+1} $ is a \emph{corner} of $ C $ (see Figure \ref{fig:best_e3_nocycles}b). Note that, by minimality of $ C $, the edges $ a_{i-1} $ and $ a_{i+1} $ cross only if $ j=3 $, in which case $ C $ has no corners. Let $ V= \{p_1,p_2,\ldots,p_j,q_1,q_2,\ldots,q_j \}$, $ c $ be the number of corners of $ C $, $ E=\{a_1,a_2,\ldots,a_j\} $, $ D $ be the set of edges of $ G $ not in $ E $ that cross at least one edge in $ E $, $ n' =|V|$, and $ e'=|E\cup D| $. Note that the edges of $ G $ that are not in $ E\cup D $ cannot cross the convex hull of $ V $. If $ e'\leq C_\ell(n'-3)+\delta_\ell(n') $ and $ n'<n $, then $ V $ generates a valid replacement with $ |\gen(V)|=|E\cup D|=e' $. We now analyze under which circumstances we can guarantee that \begin{equation}\label{eq:need} e'\leq C_\ell(n'-3)+\delta_\ell(n') \end{equation} and therefore a valid replacement. Note that $ n'=2j-c $ and $ |D|\leq j(\ell-2) $ because each edge in $ E $ already crosses two other edges in $ E $ and so it can only cross at most $ \ell -2 $ edges in $ D $. Then $ e'\leq j(\ell-1) $. \paragraph{Case 3.1} Suppose $ j=3 $. Then $ c=0 $ and thus $ n'=6 $ and $ e'\leq 3(\ell-1)\leq 3C_\ell+\delta_\ell (6) = C_\ell(n'-3)+\delta_\ell (n')$. \paragraph{Case 3.2} Suppose $ j=4 $. Figure \ref{fig:Casej4} shows all possible cycles $ C $ formed by 4 edges in $ G $. They satisfy that $ 6\leq n'\leq 8 $ and $ e'\leq 4(\ell-1) $. It can be verified that Inequality (\ref{eq:need}) holds for $ 2\leq \ell\leq 4 $ and $ 6 \leq n' \leq 8 $ except when $ \ell=4 $, $ n'=7 $, and $ e'=12 $ (Figure \ref{fig:Casej4N}b); or $ \ell=4 $, $ n'=8 $, and $ e'=11 $ or $ 12 $ (Figure \ref{fig:Casej4N}c). (In all other cases $ C_\ell(n'-3)+\delta_\ell (n')\geq 4(\ell-1) $.) These remaining cases involve a long and careful case analysis. All details are included in Appendix \ref{sect:case3.3}. \begin{figure} \caption{Possible cycles $ C $ with $ j=4 $.} \label{fig:Casej4N} \end{figure} \paragraph{Case 3.3} Suppose $ j\geq 5 $. The subgraph $ G $ whose edge set is $ E $ has vertices of degree 2, the \emph{corners}, and vertices of degree 1, the \emph{leaves}. The edges of $ E $ are of three types: type 2 edges join two leaves, type 1 edges join a leaf and a corner, and type 0 edges join two corners. Let $ t_2, t_1, $ and $ t_0 $ be the number of edges in $ E $ of type 2, 1, and 0, respectively. By minimality of $ C $, any edge of $ C $ joining two corners cannot be crossed by edges not in $ E $. Then edges of type 0 are not crossed by edges in $ D $. The edges of types 2 and 1 are crossed by at most $ \ell-2 $ edges in $ D $. Then $ |D|\leq (\ell-2)(t_2+t_1)$. Also $2t_2+t_1=n'-c=(2j-c)-c=2(j-c) $. Then \begin{eqnarray} e'&=&|D\cup E|=|D|+j\leq (\ell-2)(2t_2+t_1)-(\ell-2)t_2+j\nonumber\\ &=&2(\ell-2)(j-c)-(\ell-2)t_2+j=(2\ell-3)j-2(\ell-2)c-(\ell-2)t_2. \end{eqnarray} Thus $ (2\ell-3)j-2(\ell-2)c-(\ell-2)t_2\leq C_\ell(n'-3)+\delta_\ell(n')=C_\ell(2j-c-3)+\delta_\ell(2j-c) $ guarantees (\ref{eq:need}). 
This is equivalent to \begin{equation}\label{eq:need2all} (2C_\ell-2\ell+3)j+(2\ell-4-C_\ell)c-3C_\ell+(\ell-2)t_2+\delta_\ell(2j-c) \geq 0. \end{equation} If $ \ell=2 $, Inequality (\ref{eq:need2all}) becomes \begin{equation}\label{eq:need_ell2} 3j-2c-6+\delta_2(2j-c)\geq 0. \end{equation} Since $ j\geq c $ and $ \delta_2(2j-c)\geq 0 $, then $3j-2c-6+\delta_2(2j-c)\geq 3j-2j-6=j-6\geq 0$ for $ j\geq 6 $. If $ j=5 $, Inequality (\ref{eq:need_ell2}) becomes $ 9-2c+\delta_2(10-c)\geq 0$, which holds for any $ 0\leq c \leq 5. $ If $ \ell=3 $, Inequality (\ref{eq:need2all}) becomes \begin{equation}\label{eq:need_ell3} \frac{1}{4}(6j-c-27)+t_2+\delta_3(2j-c)\geq 0. \end{equation} Since $ j\geq c $, $ t_2\geq 0 $, and $ \delta_3(2j-c)\geq -\frac{1}{4} $, then $\frac{1}{4}(6j-c-27)+t_2+\delta_3(2j-c)\geq \frac{1}{4}(6j-j-27-1)=\frac{1}{4}(5j-28)\geq 0$ for $ j\geq 6 $. If $ j=5 $, Inequality (\ref{eq:need_ell3}) reduces (using $ t_2\geq 0 $) to $ \frac{1}{4}(3-c)+\delta_3(10-c)\geq 0$, which holds for any $ 0\leq c \leq 5. $ This concludes the proof for $ \ell=2 $ and $ \ell=3 $. If $ \ell=4 $, Inequality (\ref{eq:need2all}) becomes \begin{equation}\label{eq:need2} \frac{3}{2}(c-5)+2t_2+\delta_4(2j-c)\geq 0. \end{equation} \begin{itemize} \item Inequality (\ref{eq:need2}) holds for $ c\geq 5 $ as $ t_2 $ and $ \delta_4 (2j-c) $ are nonnegative. \item If $ c=4 $, Inequality (\ref{eq:need2}) is equivalent to $ 2t_2-\frac{3}{2}+\delta_4(2j-4) \geq 0 $. Note that $ \delta_4(2j-4)=\frac{3}{2} $ if $ j $ is odd and $ \delta_4(2j-4)=\frac{1}{2} $ if $ j $ is even. So the only case in which (\ref{eq:need2}) does not hold is when $ j $ is even and $ t_2=0 $. In this case, $ 5\leq j=\frac{1}{2}(2c+t_1+2t_2)\leq \frac{1}{2}(2c+2c+2t_2)=2c+t_2=8 $. So $ j=6 $ or $ j=8 $. The only possibilities for $ C $ are shown in Figure \ref{fig:CaseC4}. \begin{figure} \caption{Remaining possibilities in Case 3.3 with $ c=4. $} \label{fig:CaseC4} \end{figure} \item If $ c=3 $, Inequality (\ref{eq:need2}) is equivalent to $ 2t_2-3+\delta_4(2j-3) \geq 0 $. Note that $ \delta_4(2j-3)=1 $ if $ j $ is odd and $ \delta_4(2j-3)=0 $ if $ j $ is even. So the only cases in which (\ref{eq:need2}) does not hold are when $ j $ is odd and $ t_2=0 $, or when $ j $ is even and $ t_2=0 $ or 1. In this case, $ 5\leq j=\frac{1}{2}(2c+t_1+2t_2)\leq \frac{1}{2}(2c+2c+2t_2)=2c+t_2=6+t_2 $. So $ j=5 $ or $ j=6 $. The only possibilities for $ C $ are shown in Figure \ref{fig:CaseC3}. \begin{figure} \caption{Remaining possibilities in Case 3.3 with $ c=3. $} \label{fig:CaseC3} \end{figure} \item If $ c=2 $, Inequality (\ref{eq:need2}) is equivalent to $ 2t_2-\frac{9}{2}+\delta_4(2j-2) \geq 0 $. Note that $ \delta_4(2j-2)=\frac{1}{2} $ if $ j $ is odd and $ \delta_4(2j-2)=\frac{3}{2} $ if $ j $ is even. So the only cases in which (\ref{eq:need2}) does not hold are when $ t_2=0 $ or 1 (any $ j $). In this case, $ 5\leq j=\frac{1}{2}(2c+t_1+2t_2)\leq \frac{1}{2}(2c+2c+2t_2)=2c+t_2=4+t_2 $. So $ t_2=1 $ and $ j=5 $. The only possibility for $ C $ is shown in Figure \ref{fig:CaseC2}. \begin{figure} \caption{Remaining possibilities in Case 3.3 with $ c=2. $} \label{fig:CaseC2} \end{figure} \item If $ c=1 $, then $ t_2=j-2\geq 3 $. Inequality (\ref{eq:need2}) is equivalent to $ 2t_2-6+\delta_4(2j-1) \geq 0 $, which holds since $ 2t_2-6+\delta_4(2j-1)\geq 6-6+0=0 $. \item If $ c=0 $, then $ t_2=j\geq 5 $. Inequality (\ref{eq:need2}) is equivalent to $ 2t_2-\frac{15}{2}+\delta_4(2j) \geq 0 $, which holds since $ 2t_2-\frac{15}{2}+\delta_4(2j)\geq 10-\frac{15}{2}+0=\frac{5}{2}>0 $.
\end{itemize} Table \ref{table:cases} lists the remaining cases (i.e., the cases where $ j\geq 5 $ and Inequality (\ref{eq:need}) is not necessarily satisfied), including the values of $ n'$ and $e' $. Note that, by minimality of $ C $, each edge in $ D $ crosses exactly one edge in $ E $ and can be crossed by up to three other edges of $ G $. The tick marks in each figure indicate the possible crossings of edges in $ D $ with edges in $ E $. Note that all such crossings, except at most one of them in Figures \ref{fig:CaseC3}a and \ref{fig:CaseC2}, must happen in order for Inequality (\ref{eq:need}) to fail. All crossings with type 1 edges must happen exactly at the indicated place. However, the two crossings with type 2 edges could happen on the same side of the edge (although Table \ref{table:cases} shows one tick mark per side). Consider an edge $ xy\in D $. If $ x\notin V $ and $ y\notin V $, let $ V_1=V\cup \{x,y\} $ and $ E_1=\gen(V_1) $, that is, $ E_1 $ is the union of $ E'=E\cup D $ and any edges crossing $ xy $. Let $ n_1=|V_1| =n'+2$ and $ e_1=|E_1|\leq e'+3 $. Whenever $ e_1\leq \frac{5}{2}(n_1-3)+\delta_4(n_1) $, $ V_1 $ generates a valid replacement. This happens in all cases except for Figure \ref{fig:CaseC3}a when $ e'=13 $. Thus, in all these cases we assume that each edge in $ D $ is incident to at least one vertex in $ V $. We treat Figure \ref{fig:CaseC3}a with $ e'=13 $ separately afterwards. \begin{table}[h] \begin{center} \scalebox{0.7}{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \includegraphics[width=0.14\linewidth]{./CaseC4a} & \includegraphics[width=0.14\linewidth]{./CaseC4b} & \includegraphics[width=0.14\linewidth]{./CaseC4c} & \includegraphics[width=0.14\linewidth]{./CaseC3a} & \includegraphics[width=0.14\linewidth]{./CaseC3b} & \includegraphics[width=0.14\linewidth]{./CaseC3c} & \includegraphics[width=0.14\linewidth]{./CaseC2} \\ \hline Case & Figure \ref{fig:CaseC4}a & Figure \ref{fig:CaseC4}b & Figure \ref{fig:CaseC4}c & Figure \ref{fig:CaseC3}a & Figure \ref{fig:CaseC3}b & Figure \ref{fig:CaseC3}c & Figure \ref{fig:CaseC2} \\ \hline $ n' $ & 8 & 8 & 12 & 7 & 9 & 9 & 8 \\ \hline $ e' $ & $ 6+8=14 $ & $ 6+8=14 $ & $ 8+16=24 $ & \begin{tabular}{@{}c@{}c@{}}$ 5+7=12 $\\or \\ $ 5+8=13 $ \end{tabular}& $ 6+10=16 $ & $ 6+10=16 $ & \begin{tabular}{@{}c@{}c@{}}$ 5+9=14 $ \\or\\ $ 5+10=15 $ \end{tabular} \\ \hline $ \frac{5}{2}(n'-3)+\delta_4(n') $ & 13 & 13 & 23 & 11 & 15 & 15 & 13 \\ \hline $ n_1 $ & 10 & 10 & 14 & 9 & 11 & 11 & 10 \\ \hline $ e_1 =e'+3 $ & 17 & 17 & 27 & 15 or 16& 19 & 19 & 17 or 18 \\ \hline $ \frac{5}{2}(n_1-3)+\delta_4(n_1) $ & 19 & 19 & 29 & 15 & 21 & 21 & 19 \\ \hline \end{tabular} } \end{center} \caption{ Remaining cases for $ j\geq 5 $.} \label{table:cases} \end{table} Assume that three consecutive edges $ a_{i-1} $, $ a_i $, and $ a_{i+1} $ in $ C $ satisfy the following (Figure \ref{fig:LastPart1}a): (1) $ a_{i-1} $ and $ a_{i+1} $ are of type 1, (2) $ a_{i-1} $ and $ a_{i+1} $ do not have vertices in common (in fact we only need $ q_{i-1} \neq p_{i+1} $), (3) $ a_{i-1} $ is crossed by two edges in $ D $, call them $ b_1 $ and $ b_2 $, and (4) $ a_{i+1} $ is crossed by two edges in $ D $, call them $ c_1 $ and $ c_2 $. For each case in Table \ref{table:cases}, we have highlighted a possible choice of such three edges. In Figure \ref{fig:CaseC3}a, we assume for now that $ e'=12 $ and thus the highlighted choice of edges or its vertical mirror image (at most one tick mark is missing) satisfies the previous conditions.
By minimality of $ C $, there are no quadrilaterals in $G^{\otimes} $. Since all edges in $ D $ are incident to at least one vertex in $ V $, $ b_1 $ must be incident either to $ p_i $ or $ p_{i+1} $, and the same holds for $ b_2 $. But $ b_1 $ and $ b_2 $ are not incident to $ p_{i+1} $ (Figure \ref{fig:LastPart1}b), because such an edge, $ c_1, c_2 $, and $ a_{i+1} $ would form a quadrilateral in $ G^{\otimes} $. So both edges are incident to $ p_i $ (and not incident to $ p_{i+1} $). This means that both cross the diagonal $ q_{i-1}p_{i+1} $, which must be a nonedge because otherwise it would form a quadrilateral with $ a_{i-1},b_1$, and $ b_2 $ in $ G^{\otimes} $ (Figure \ref{fig:LastPart1}c). But then $ V $ generates a valid replacement. Only in Figure \ref{fig:CaseC2} can at most one of the edges $ b_1, b_2,c_1 $, or $ c_2 $ be missing. Assume by symmetry that $ c_2 $ is missing (then $ e'=14 $). Then $ b_1 $ and $ b_2 $ are not both incident to $ p_{i+1} $ (Figure \ref{fig:LastPart1}d) because $ b_1,b_2,c_1 $, and $ a_{i-1} $ would form a quadrilateral, and the same argument as before works if $ b_1 $ and $ b_2 $ are both incident to $ p_i $ but not to $ p_{i+1} $. The only remaining possibility is that $ b_1 $ is incident to both $ p_i $ and $ p_{i+1} $, and $ b_2 $ is incident to $ p_i $ but not to $ p_{i+1} $. But then $ c_1 $ is not incident to $ q_{i-1} $ (Figure \ref{fig:LastPart1}e) because $ b_1,b_2,c_1 $, and $ a_{i-1} $ would form a quadrilateral; and $ c_1 $ is not incident to $ q_i $ (Figure \ref{fig:LastPart1}f) because any edge in $ D $ crossing $ a_i $ would form a quadrilateral with either $ a_{i-1}, a_i,b_1 $ or with $a_i,a_{i+1}, c_1 $. \begin{figure} \caption{Three consecutive edges in $ E $. Tick marks represent edges in $ D $.} \label{fig:LastPart1} \end{figure} Finally, in Figure \ref{fig:CaseC3}a with $ e'=13 $ all edges in $ D $ shown by tick marks in Table \ref{table:cases} are present. If two edges $ wx $ and $ yz $ in $ D $ are not incident to vertices in $ V $, then let $ V_2=V\cup \{w,x,y,z\}$ and $ E_2 $ be the union of $ E $, $ D $ and any edges crossing $ wx $ or $ yz $. Then $ |V_2|=10 $ or $11 $ and $ |E_2|\leq e' +6=19$, and thus $ V_2 $ generates a valid replacement. Lastly, the two edges crossing the edge $ a_i $ in Figure \ref{fig:specialcasej5}a cannot be incident to vertices in $ V $, because a quadrilateral would be formed in $ G^{\otimes} $ (Figures \ref{fig:specialcasej5}b-c). \begin{figure} \caption{Two edges in Figure \ref{fig:CaseC3}a that cross the edge $ a_i $.} \label{fig:specialcasej5} \end{figure} \appendix \section{Proof of remaining cases for $ j=4 $ and $ \ell=4 $ in Theorem \ref{theorem:epsilons}}\label{sect:case3.3} Figure \ref{fig:Casej4} shows all remaining cases for $ j=4 $ and $ \ell=4 $: Figure \ref{fig:Casej4}a with $ e'=12 $ (all tick marks represent edges in $ D $) and Figure \ref{fig:Casej4}b with $ 10\leq e'\leq 12 $ (one or two tick marks could be missing). \begin{figure} \caption{Remaining cases for $ j=4 $ and $ \ell=4 $. (a) $ e'=12 $. (b) $ 10\leq e' \leq 12 $.} \label{fig:Casej4} \end{figure} We use the following observation several times throughout the proof. \begin{lem}\label{obs:boundary} If there are at least $ e'-9 $ nonedges in $ \bd (V) $, then $ V $ generates a valid replacement. \end{lem} \begin{proof} $ V $ generates a valid replacement because the replacement graph $ H_0 $ on the six vertices of $ V $ can be taken to be $ D_6 $, which has 9 edges, and $ e'\leq 9+(e'-9) $. \end{proof} For the rest of the proof we assume that there are at most $ e'-10 $ nonedges in $ \bd (V) $. Consider Figure \ref{fig:Casej4}a with $ e'=12 $.
If there is an edge $ xy\in D $ with $ x\notin V $ and $ y\notin V $, let $ V_1=V\cup \{x,y\} $ and $ E_1=\gen(V_1) $, that is, $ E_1 $ is the union of $ E' $ and any edges crossing $ xy $. Then $ |V_1| =9$ and $ |E_1|\leq e'+3=15$. So $ V_1 $ generates a valid replacement. Similarly, if there is an edge $ xy\in D $ with $ x\in V $ and $ y\notin V $ that crosses other two edges in $ D $, then let $ V_1=V\cup \{x\} $ and $ E_1 =\gen(V_1)$, the union of $ E' $ and any edges crossing $ xy $. Then $ |V_1| =8$ and $ |E_1|\leq e'+1=13$. So $ V_1 $ generates a valid replacement. So assume that any edge in $ D $ is incident to at least one vertex in $ V $ and that any edge incident to exactly one vertex in $ V $ crosses at most one edge in $ D $. Because $ e'=12 $, then each edge in $ D $ crosses exactly one edge in $ E $ and so it does not cross the shaded region in Figure \ref{fig:Casej4}a. \texttt{Add another figure.}Let $ b_i $ and $ c_i $ be the two edges in $ D $ crossing $ e_i $, $ 1\leq i \leq 4 $. Then each of $ b_1 $ and $ c_1 $ must be incident to $ p_2 $ or $ p_3 $ (or both); and each of $ b_3 $ and $ c_3 $ must be incident to $ q_1 $ or $ q_2 $ (or both). Suppose first that neither $ b_3 $ nor $ c_3 $ are incident to $ q_2 $. Then they are both incident to $ q_1 $. \texttt{Add figure.} Then $ b_1 $ and $ c_1 $ cross $ b_3 $ and $ c_3 $ and at least one of them is incident to exactly one vertex of $ V $ contradicting the assumptions. Then (by symmetry) we can assume that $ b_3 $ is incident to $ q_2 $ and not to $ q_1 $; and $ b_1 $ is incident to $ p_2 $ and not to $ p_3 $. Note that the diagonal $ q_1p_3 $ is not a side (it is crossed by $ b_1 $ and $ b_3 $). If $ q_1p_3 $ is a nonedge, then $ E' $ can be replaced by $ q_1p_3 $ and a copy of $ S_7 $ with vertex set $ V $ and continue as before. Then assume that $ q_1p_3 $ is an edge. Note that $ b_2 $ must cross $ b_1 $ or $ b_3 $ depending on where $ b_2 $ crosses $ e_2 $. Assume by symmetry that $ b_2 $ crosses $ b_1 $. So $ b_1 $ is crossed by at most one edge not in $ E' $ ($ b_1 $ already crosses $ b_2 $, $ e_1 $, and the edge $ q_1p_3 $). Let $ V_1 $ be the union of $ V $ and the vertex not in $ V $ incident on $ b_1 $; and let $ E_1 $ be the union of $ E' $ and any edges crossing $ b_1 $. Then $ |V_1| =8$ and $ |E_1|\leq e'+1=13$. So $ V_1 $ generates a valid replacement. Consider Figure \ref{fig:Casej4}b with $ 10\leq e'\leq 12, $. We assume that there is no set of four edges in $ G $ crossing like in Figures \ref{fig:Casej4}a-b. We can also assume that no edge in $ D $ crosses the shaded region in Figure \ref{fig:Casej4}b. Indeed, if such edges exist and they are incident to $ p_1 $ or $ p_2 $, then we replace $ C $ by the two edges in $ E' $ incident on $ p_1 $ closest to $ p_1p_2 $ and the two edges in $ E' $ incident on $ p_2 $ closest to $ p_1p_2 $. Any edge crossing the shaded region after this replacement would either create a triangle in $G^{\otimes} $ or 4 edges crossing as in Figures \ref{fig:Casej4}a-b. Suppose that there is at least one edge in $ D $ crossing more than one edge in $ E $. Then such an edge is incident on $ p_1 $ or $p_2$. Assume first that there are 2 (or more) such edges, say $ ux $ and $ vy $ with $ \{u,v\}\subseteq\{p_1,p_2\} $. Then $ e'=10 $ and $ p_3q_2 $ is an edge ($ p_3q_2 $ is not a side because it is crossed by $ ux $; and it is not a nonedge because it is on $\bd(V)$). So there is at most one edge not in $ E' \cup\{p_3q_2\}$ crossing $ ux $ and at most one crossing $ vy $. 
Let $ V_1=V\cup \{x,y\}$ and $ E_1 $ be the union of $ E' $ and any other edges crossing $ ux $ or $ vy $. Then $ |V_1|=7 $ or $ 8 $, $ |E_1|\leq 12 $ or $ 13 $, respectively. In all cases, except when $ |V_1|=7 $ and $ |E_1| =12$, $ V_1 $ generates a valid replacement. Assume that $ |V_1|=7 $ and $ |E_1| =12$. In this case $ x=y $ and there is one edge not in $ E' \cup\{p_3q_2\}$ simultaneously crossing $ ux $ and $ vy $. This edge is incident on $ p_3 $ or $ q_2 $ as otherwise it would form a triangle or a quadrilateral like those in Figures \ref{fig:Casej4}a-b. Suppose $ p_3w $ is such an edge. Since $ e'=10 $, there is an edge $ b_1 $ in $ D $ crossing $ e_1 $. But then $ b_1 $ also crosses $ p_3w $ and so there is at most one edge not in $ E_1 $ crossing $ p_3w $. Let $ V_2=V_1\cup \{w\} $ and $ E_2 $ be the union of $ E_1 $ and any other edges crossing $ p_3w $. Then $|V_2|=8 $ and $ |E_2|= 13 $. So $ V_2 $ generates a valid replacement. Assume now that $ p_1x $ is the only edge in $ D $ crossing more than one edge in $ E $. Then $ e'=10 $ or $ 11 $. So there is at least one edge in $ D $ crossing $ e_3 $. Note that no edge in $ D $ crosses $ p_1x $ because otherwise it would create a triangle in $ G^{\otimes} $ or a cycle like the one in Figure \ref{fig:Casej4}a. If $ yz$ is the only edge crossing $ e_3$, then $ e'=10 $. If $ y $ and $ z $ are not in $ V $, then let $ V_1=V\cup\{y,z\} $ and $ E_1 $ be the union of $ E' $ and any other edges crossing $ yz $. Then $ |V_1|=8 $ and $|E_1|\leq 10+3=13 $. So $ V_1 $ generates a valid replacement. Then we can assume that $ y $ is incident on $ q_1 $ and $ z\notin V $ (because $ yz $ does not cross $ p_1x $). In this case $ e_1 $ is crossed by two edges in $ D $ which also cross $ yz $. Then $ yz $ is crossed by at most one edge not in $ E' $. Let $ V_1=V\cup\{z\} $ and $ E_1 $ be the union of $ E' $ and any other edge crossing $ yz $. Then $ |V_1|=7 $ and $ |E_1|\leq 10+1=11 $. So $ V_1 $ generates a valid replacement. Assume then that there are two edges $ b_3 $ and $ c_3 $ in $ D $ crossing $ e_3 $. Let $ V_1 $ be the union of $ V $ and any vertices incident on $ b_3 $ or $ c_3 $; and let $ E_1 $ be the union of $ E' $ and any other edges crossing $ b_3 $ or $ c_3 $. Let $ n_1=|V_1| $ and $ e_1=|E_1| $. Then $ 8\leq n_1 \leq 10 $ and $ e_1\leq e'+6\leq 17 $. If $ n_1=10 $, and since $ e_1\leq 19 $, then $ V_1 $ generates a valid replacement. \begin{figure} \caption{Possible cases for Figure \ref{fig:Casej4}b when $ n_1=9 $.} \label{fig:Casej4_1_9} \end{figure} If $ n_1=9 $, then each of the diagonals $ q_1p_3 $ and $ p_3q_2 $ is crossed by $ b_3 $ or $ c_3 $ (Figure \ref{fig:Casej4_1_9}) and since $ e'\leq 11 $, then at least one of these diagonals is an edge. Also, if $ b_3 $ is incident on $ e_1 $, and since at least one edge $ b_1\in D $ crosses $ e_1 $, then $ b_1 $ also crosses $ b_3 $. In any case, $ e_1=|E_1|\leq e'+5=16 $. If $ e_1\leq 15 $, then $ V_1 $ generates a valid replacement. So assume that $ e_1=16 $. Then no other edges in $ D $ cross $ b_3 $ and $ c_3 $; and one of the diagonals $ q_1p_3 $ and $ p_3q_2 $ is not an edge and so $ e'=11 $. This is not possible in Figure \ref{fig:Casej4_1_9}a because then there must be a second edge $ c_1 $ crossing $ e_1 $ and thus crossing $ b_3 $. In Figures \ref{fig:Casej4_1_9}b and c, one of the two edges crossing $ e_1 $, say $ b_1 $, is incident on a vertex $ w \notin V_1 $. Let $ V_2=V_1\cup\{w\} $ and $ E_2 $ be the union of $ E_1 $ and any other edges crossing $ b_1 $.
Then $ |V_2|=9+1=10 $, $ |E_2|\leq e_1+3=16+3=19 $, and so $ V_2 $ generates a valid replacement. \begin{figure} \caption{Possible cases for Figure \ref{fig:Casej4}b when $ n_1=8 $.} \label{fig:Casej4_1_8} \end{figure} If $ n_1=8 $, then there are two vertices $ y $ and $z $ not in $ V $ such that either $ b_3=q_1y $ and $ c_3=q_1z $, or $ b_3=q_1y $ and $ c_3=zy $ (Figure \ref{fig:Casej4_1_8}). If only one edge $ b_1 \in D $ crosses $ e_1 $, then $ e'=10 $ and $ p_3q_2 $ is an edge. Then $ e_1\leq 10+3=13 $ and thus $ V_1 $ generates a valid replacement. If two edges $ b_1 $ and $ c_1 $ cross $ e_1 $, then at most one edge not in $ E' $ crosses $ b_3 $. In Figure \ref{fig:Casej4_1_8}a the same holds for $ c_3 $ and thus $e_1\leq 11+2=13 $. Then $ V_1 $ generates a valid replacement. In Figure \ref{fig:Casej4_1_8}b, the edge $ c_3 $ could be crossed by as many as three edges not in $ E' $. If $ c_3 $ is crossed by at most two edges not in $ E' $, then $ e_1\leq 11+2=13 $ and so $ V_1 $ generates a valid replacement. Then assume that $ c_3 $ is crossed by three edges not in $ E' $. This means that $ b_1 $ and $ c_1 $ do not cross $ c_3 $. If $ b_1=uv $ and both $ u $ and $ v $ are not in $ V $, then let $ V_2=V\cup\{u,v\} $ and let $ E_2 $ be the union of $ E' $ and any edges crossing $ b_1 $. Then $ |V_2|=8 $ and $ |E_2|\leq 11+2=13 $. So $ V_2 $ generates a valid replacement. Since $ b_1 $ cannot be incident on $ p_3 $ (as it would cross $ c_3 $), then $ b_1 $ is incident on $ p_2 $. The same applies to $ c_1 $. So we can assume that $ b_1=p_2u $ and $ c_1=p_2v $ with $ u \notin V_1 $. Let $ V_3=V_1\cup \{u\} $ and $ E_3 $ be the union of $ E_1 $ and any other edge crossing $ b_1 $. Then $ |V_3|=9 $ and $ |E_3|\leq 14+2=16 $. If $ |E_3| \leq 15$, then $ V_3 $ generates a valid replacement. If $ |E_3|=16 $, then $ q_1p_3 $ is not an edge and so $ p_3q_2 $ is an edge. If there is another edge $ b \in E_3 $ that is incident to a vertex $ w \notin V_3 $, then let $ V_4=V_3\cup\{w\} $ and $ E_4 $ be the union of $ E_3 $ and any other edges crossing $ b $. Then $ |V_4|=10 $ and $ |E_4|\leq 16+3=19 $. Then $ V_4 $ generates a valid replacement. But the only two vertices in $ V_3 $ that can be incident to the two edges not in $ E' $ crossing $ c_3 $ are $ u $ and $ p_3 $. So one of those edges is incident on a vertex not in $ V_3 $, concluding this case. Suppose now that every edge in $ D $ crosses exactly one edge in $ E $. We consider 9 subcases (see Table \ref{table:subcases}) according to the numbers $ (r_1,r_3,r_2,r_4) $ of edges in $ D $ crossing $ (e_1,e_3, e_2,e_4) $, respectively. \begin{table}[h] \begin{center} \begin{tabular}{|c|c||c|c||c|c|} \hline Subcase $ (r_1,r_3,r_2,r_4) $ & $ e' $& Subcase $ (r_1,r_3,r_2,r_4) $ & $ e' $ & Subcase $ (r_1,r_3,r_2,r_4) $ & $ e' $\\ \hline $ (2,2,2,2) $ & 12 & $ (2,2,2,0) $ & 10 & $ (2,1,2,1) $ & 10\\ \hline $ (2,2,2,1) $ & 11 & $ (2,2,1,1) $ & 10 & $ (2,2,0,2) $ & 10\\ \hline $ (2,2,1,2) $ & 11 & $ (1,2,2,1) $ & 10 & $ (2,1,1,2) $ & 10\\ \hline \end{tabular} \end{center} \caption{ Division into subcases according to how the edges in $ D $ cross the edges in $ E $.} \label{table:subcases} \end{table} \begin{figure} \caption{Classification of edges in $ E $ crossed by one edge in $ D $.} \label{fig:Casej4_types1} \end{figure} \paragraph{Edge-classification} We classify the edges of $ E $ according to how they are crossed by the edges of $ D $. Let $ e_i\in E $. \begin{itemize}
\item Figure \ref{fig:Casej4_types1} shows the possible types of $ e_i $ if it is crossed by exactly one edge $ b_i\in D $. $ e_i $ is of type B, L, R, or N according to whether $ b_i $ is incident on \emph{both} $ x $ and $ z $, only $ x $ (the vertex on its \emph{left}), only $ z $ (the vertex on its \emph{right}), or \emph{neither} $ x $ nor $ z $. \item Figure \ref{fig:Casej4_types2} shows the possible types of $ e_i $ if it is crossed by two edges $ b_i $ and $ c_i $ in $ D $: Types $ LB$ (left-both), $RB$ (right-both), $LL$ (left-left), $RR$ (right-right), $BN$ (both-none), $LNr$ (left-none with $ b_i $ and $ c_i $ sharing a vertex, , which in this case is on their right), $LN$ (left-none with $ b_i $ and $ c_i $ not sharing vertices), $RNl$ (right-none with $ b_i $ and $ c_i $ sharing a vertex, which in this case is on their left), $RN$ (right-none with $ b_i $ and $ c_i $ not sharing a vertices), $NNl$ (none-none sharing the left vertex), $ NNr$ (none-none sharing the right vertex), $NN $ (none-none sharing no vertices). Note that type $ BB $ is not possible because we would have a double edge, and type $ LR $ is not posible because $ b_i,c_i, $ and $ e_i $ would form a triangle in $ G^{\otimes} $. \item We group the types of edges into three big groups: B-, which includes any of the types B, LB, RB, and BN; R- including the types R, RB, RR, RN; L- including the types L, LB, LL, and LN; and N- including the types N, LN, RN, BN and NN. \end{itemize} \begin{figure} \caption{Classification of edges in $ E $ crossed by two edges in $ D $.} \label{fig:Casej4_types2} \end{figure} The following lemma identifies the incompatibility of certain types of edges. \begin{lem}\label{obs:gen1} The graph $ G $ satisfies the following two conditions. \begin{enumerate} \item \label{obs:2middle_edges} The edges $ (e_3,e_2) $ cannot be of types (R-,N-), (R-,R-), (B-,R-), (N-,L-), (L-,L-), or (L-,B-). \item \label{obs:2B2} Within the list $ e_1,e_3,e_2,e_4 $, an edge of type B- cannot be between two edges crossed by two edges in $ D $. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item Whenever the edges $ (e_3,e_2) $ are of types (R-,N-), (R-,R-), (B-,R-), (N-,L-), (L-,L-), or (L-,B-) a 4-cycle like the one in Figure \ref{fig:Casej4}a is formed as shown in Figure ???. \texttt{Add Figure.} \item Suppose $ e_i, e_j, $ and $ e_k $ are consecutive in the list, $ b_j $ is of type B-, and $ e_j $ and $ e_k $ are each crossed by two edges in $ D $. Then $ b_j $ is crossed by $ b_i,c_i,e_j,b_k, $ and $ c_k $ (Figure \ref{fig:Casej4_gen2}a), which is impossible since any edge in $ G $ is crossed at most 4 times. \end{enumerate} \end{proof} \begin{figure} \caption{Illustration for Lemmas \ref{obs:gen1} \label{fig:Casej4_gen2} \end{figure} The following lemma identifies conditions on the types of edges that guarantee the existence of a valid replacement. \begin{lem}\label{obs:gen2} $ G $ has a valid replacement if, within the list $ e_1,e_3,e_2,e_4 $, one of the following conditions is satisfied. \begin{enumerate} \item \label{obs:RR2} There is an edge of type RR followed by an edge crossed by two edges in $ D $. \item \label{obs:2LL} There is an edge of type LL preceded by an edge crossed by two edges in $ D $. \item \label{obs:N-N} There are two disjoint edges of $ D $ not incident to any vertices in $ V $. This happens, in particular, when there are two non-consecutive edges of type N- or when the two edges crossing an edge of type NN are disjoint. 
\end{enumerate} \end{lem} \begin{proof} Suppose $ e_i $ and $ e_j $ are consecutive in the list $ e_1,e_3,e_2,e_4 $. \begin{enumerate} \item If $ e_i $ is of type RR and $ e_j $ is crossed by two edges in $ D $ (Figure \ref{fig:Casej4_gen2}b), then there are three vertices $ x,y, $ and $ z $ not in $ V $ such that $ b_i,c_i, $ and $ b_j $ (or $ c_j $) are incident on $ x,y, $ and $ z $. Also $ b_i $ and $ c_i $ cross $ b_j $ and $ c_j $. This means that each of $ b_i, c_i, $ and $ b_j $ is crossed by at most one edge not in $ E' $. Let $ V''=V\cup \{x,y,z\} $ and $ E'' $ be the union of $ E' $ and any edges crossing $ b_i,c_i, $ or $ b_j $. Then $ V'' $ generates a valid replacement. \item This situation (Figure \ref{fig:Casej4_gen2}c) is vertically symmetric to the previous part. \item Suppose that there are two edges $ wx$ and $ yz$ in $ D $ not incident to any vertices in $ V $. Let $V''=V\cup \{w,x,y,z\} $ and $ E'' $ be the union of $ E' $ and any edges crossing $ wx $ or $ yz $. Then $ |V''|=6+4=10 $ and $ |E''|\leq 12+6=18 $. Then $ V'' $ generates a valid replacement. \end{enumerate} \end{proof} We now analyze each of the subcases in Table \ref{table:subcases}. \paragraph{Subcase (2,2,2,2).} In this subcase $ e'=12 $ and by Lemma \ref{obs:boundary} we assume that there are at most 2 nonedges on $ \partial\conv(V) $. We start with the following lemma, which implies Property 1 in Table \ref{table:compatibility12}. Property 1 shows conditions on $ G $ under which a valid replacement is guaranteed. \begin{lem}\label{lem:cond12} $ G $ has a valid replacement if one of the following conditions is satisfied. \begin{enumerate} \item \label{obs: unattached_12} There is an edge of $D $ not incident on vertices in $ V $ and crossed by at least two edges in $ D $. \item \label{obs: simultaneous12}There are three edges in $ D $ incident on at least four vertices not in $ V $ and such that \begin{enumerate} \item either they are simultaneously crossed by an edge not in $ E' $ or \item one is crossed by an edge in $ D $ and two are simultaneously crossed by an edge not in $ E' $. \end{enumerate} \end{enumerate} \end{lem} \begin{proof} Let $ xy $ be an edge in $ D $ not incident on any vertices in $ V $. \begin{enumerate} \item If $ xy $ is crossed by two edges in $ D $, then let $V''=V\cup \{x,y\} $ and $ E'' $ be the union of $ E' $ and any other edges crossing $ xy $ (at most one). Then $|V''|=6+2=8 $ and $|E''|\leq 12+1=13 $. So $ V'' $ generates a valid replacement. \item In this case there are two other edges $ uw $ and $ vz $ in $ D $ such that $ w $ and $ z $ are not in $ V $. Let $V''=V\cup \{x,y,w,z\} $ and $ E'' $ be the union of $ E' $ and any other edges crossing $ xy,uw, $ or $ vz $. If the edges $ xy,uw, $ and $ vz $ are simultaneously crossed by an edge not in $ E' $, or if at least two of them are simultaneously crossed by an edge not in $ E' $ and one is crossed by an edge in $ D $, then $ |E''|\leq 12+7=19 $. Since $ |V''|=6+4=10 $, then $ V'' $ generates a valid replacement.
\end{enumerate} \end{proof} \begin{table}[t] \begin{center} \begin{tabular}{|>{\centering\arraybackslash}m{.6in}|>{\centering\arraybackslash}m{.8in}|>{\centering\arraybackslash}m{.8in}|>{\centering\arraybackslash}m{1in}|>{\centering\arraybackslash}m{2.3in}|} \hline Property & Type of $ e_j $ & Type of $ e_i $ & Valid replacement guaranteed by& Figure\\ \hline 1&\begin{tabular}{@{}c@{}c@{}}RR or RB\\ \\ N-\end{tabular}&\begin{tabular}{@{}c@{}c@{}}N-\\ \\ LL or LB\end{tabular} &Lemma \ref{lem:cond12}.\ref{obs: unattached_12}&\includegraphics[width=.8\linewidth]{./Casej4_together11}\\ \hline \end{tabular} \end{center} \caption{ Property 1: Type compatibility when $ e'=12 $.} \label{table:compatibility12} \end{table} By Lemma \ref{obs:gen1}.\ref{obs:2B2}, \ref{obs:gen2}.\ref{obs:RR2} and \ref{obs:gen2}.\ref{obs:2LL}, there is a valid replacement when $ e_3 $ or $ e_2 $ is of type B-, RR, or LL, respectively. Then assume that $ e_3 $ and $ e_2 $ are of types LN, RN, or NN. By Lemma \ref{obs:gen1}.\ref{obs:2middle_edges}, only the cases when $ e_3 $ is of type LN or NN and $ e_2 $ is of type RN or NN are left. By Property 1 and Lemma \ref{obs:gen2}.\ref{obs:N-N}, the only cases left are when $ e_1 $ is of type LL or LB and $ e_4 $ is of type RR or RB. Since $ e_3 $ and $ e_2 $ are of type N-, then Lemma \ref{obs:gen2}.\ref{obs:N-N} guarantees a valid replacement unless $ c_3 =xy $ and $ c_2=yz $ for some vertices $ x,y, $ and $ z $ not in $ V $. Assuming this situation and in order to avoid triangles in $ G^{\otimes} $, any edges crossing $ e_3 $ or $ e_2 $ must be incident on $ y $. Because $ q_1p_3,p_3q_2, $ and $ q_2p_4 $ are not sides (they are crossed by $ c_3 $ or $ c_2 $), then at least one of them is an edge by Lemma \ref{obs:boundary}. Suppose first that $ p_3q_2 $ is an edge. If $ e_3 $ is of type NN, then $ b_3=wy$ for some vertex $ w\notin V\cup \{x,y,z\}$, then $ b_3,c_3, $ and $ c_2 $ are simultaneously crossed by $ p_3q_2 $ and thus there is a valid replacement by Lemma \ref{lem:cond12}.\ref{obs: simultaneous12}. So assume by symmetry that $ e_3 $ and $ e_2 $ are of types LN and RN. If $ b_1 $ is incident on a vertex $ w \notin V\cup \{x,y,z\} $, then there is a valid replacement because $ b_1,c_3, $ and $ c_2 $ satisfy the conditions of Lemma \ref{lem:cond12}.\ref{obs: simultaneous12}. So assume by symmetry that $ e_1 $ and $ e_4 $ are of types B-. Let $ V''=V\cup \{x,y,z\} $ and $ E''=\gen(V'')$, the union of $ E' $ and any edges crossing $ b_3 $ or $b_2$. Then $ |V''|=6+3=9 $, $ |E''|\leq 12+3=15, $ and thus $ V'' $ generates a valid replacement. Now assume that $ q_1p_3 $ is an edge. Suppose first that $ e_3 $ is of type NN. If $ b_3,c_3, $ or $ c_2 $ is crossed by an edge in $ D $, then there is a valid replacement by Lemma \ref{lem:cond12}.\ref{obs: simultaneous12}. Otherwise $ e_1 $ is of type LL and $ b_1 $ (or $ c_1 $) is incident on a vertex $ w\notin V $. Then $ b_1,b_3, $ and $ c_3 $ are simultaneously crossed by $ q_1p_3 $ and thus there is a valid replacement by Lemma \ref{lem:cond12}.\ref{obs: simultaneous12}. Suppose now that $ e_3 $ is of type LN. If $ b_1 $ (or $ c_1 $) is incident on a vertex $ w\notin V $, then $ b_1,c_3, $ and $ c_2 $ satisfy the conditions of Lemma \ref{lem:cond12}.\ref{obs: simultaneous12} and thus there is a valid replacement. Otherwise, we can assume that $ e_1 $ is of type LB and $ b_1=q_1x $. If $ q_1p_3 $ is the only edge not in $ D $ crossing $ xy $, then let $ V''=V\cup \{x,y\} $ and $ E''=E'\cup \{q_1p_3\} $. 
Then $ |V''|=6+2=8 $, $ |E''|\leq 12+1=13 $, and thus $ V'' $ generates a valid replacement. Otherwise, there is an edge $ vw $ with $ w\notin V $ crossing $ xy $. This edge must be incident on $ q_1 $ or $ p_3 $ too because otherwise $ vw,q_1p_3,b_1, $ and $ c_3 $ would form a quadrilateral like the one in Figure \ref{fig:Casej4}a. Let $V''=V\cup \{w,x,y,z\} $ and $ E''=\gen(V'') $, the union of $ E' $ and any other edges crossing $ xy,yz, $ or $ vw $. There are two edges not in $ D $ crossing $ xy $ (namely, $ q_1p_3 $ and $ vw $), at most three crossing $ yz $, and at most two crossing $ vw $. Then $ |V''|=6+4=10 $, $ |E''|\leq 12+7=19 $, and thus $ V'' $ generates a valid replacement. \paragraph{Subcases (2,2,2,1) and (2,2,1,2).} In these subcases $ e'=11 $ and by Lemma \ref{obs:boundary} we assume that there is at most 1 nonedge on $ \partial\conv(V) $. We start with the following lemma. \begin{lem}\label{lem:cond11} $ G $ has a valid replacement whenever one of the following conditions is satisfied. \begin{enumerate} \item \label{obs: unattached_11} There is an edge of $D $ not incident on any vertices in $ V $ and crossed by another edge in $ D $. \item \label{obs: 1attached_3cross_11} There is an edge of $ D $ crossed by three other edges in $ D $ and incident on a vertex not in $ V $. \item \label{obs: 2attached_2crosseach_11} There are two edges of $ D $, each crossed by at least two edges in $ D $ and incident on a vertex not in $ V $, and these two vertices are different. \item \label{obs:NN_11} There is an edge in $ E $ of type NN. \item \label{obs:new_vertex} There is an edge of type RR or LL preceded by an edge of type R- or followed by an edge of type L-; and there are at least four vertices incident on edges in $ D $. \end{enumerate} \end{lem} \begin{proof} Let $ xy $ and $ x'y' $ be edges in $ D $. \begin{enumerate} \item If $ xy $ is not incident on any vertices in $ V $ and is crossed by another edge in $ D $, then let $V''=V\cup \{x,y\} $ and $ E''=\gen(V'') $, the union of $ E' $ and any edges crossing $ xy $. Then $|V''|=6+2=8 $, $|E''|\leq 11+2=13 $, and so $ V'' $ generates a valid replacement. \item If $ x\notin V $ and $ xy $ is crossed by three other edges in $ D $, then no other edges cross $ xy $. Let $ V''=V\cup \{x\} $ and $ E''=E' $. Then $ |V''|=6+1=7 $, $ |E''|= 11 $, and so $ V'' $ generates a valid replacement. \item Suppose that $ xy$ and $ x'y'$ are each crossed by at least two edges in $ D $, $x\notin V , x'\notin V, $ and $ x\neq x' $. Let $V''=V\cup \{x,x'\} $ and $ E'' $ be the union of $ E' $ and any edges crossing $ xy $ or $x'y' $. Then $ |V''|=6+2=8 $, $ |E''|\leq 11+2=13 $, and so $ V'' $ generates a valid replacement. \item Suppose there is an edge $ e_i\in E $ of type NN. Then by Part \ref{obs: 2attached_12}, $ b_i=xy $ and $ c_i=xz $ for some vertices $ x,y, $ and $ z $ not in $ V $. Note that there is an edge $ b_j $ in $ D $ not crossing $ e_i $, not sharing any vertices with $ b_i $ and $ c_i $, and incident to a vertex $ w\notin V $. Let $ V''=V\cup \{w,x,y,z\} $ and $ E''$ be the union of $ E' $ and any edges crossing $ b_i,c_i, $ or $ b_j $. By Lemma \ref{obs:boundary}, one of the diagonals in $ \bd(V) $ crossed by $ b_i $ is an edge. This edge simultaneously crosses $ b_i $ and $ c_i $. Then $ |V''|=6+4=10 $, $ |E''|\leq 11+5+3=19 $, and so $ V'' $ generates a valid replacement. \item Suppose that $ e_i $ and $ e_j $ are consecutive edges.
If $ e_i $ is of type RR or LL and $ e_j $ is of type L-, then $ b_i $ and $ c_i $ cross $ b_j $ and these edges are incident to three vertices $ x,y, $ and $ z $ not in $ V $. Then there is another edge $ uw $ in $ D $ with $ w \notin V $ and such that $ w,x,y, $ and $ z $ are all different. Let $ V''=V\cup \{w,x,y,z\} $ and $ E''$ be the union of $ E' $ and any edges crossing $ b_i,c_i, b_j, $ or $ uw $. Then $ |V''|=6+4=10 $, $ |E''|\leq 12+6=18 $, and so $ V'' $ generates a valid replacement. \end{enumerate} \end{proof} Table \ref{table:compatibility11} shows Properties 2-4, conditions on $ G $ under which a valid replacement is guaranteed. These properties follow from Lemma \ref{lem:cond11}. We also assume that there are no edges of type NN; otherwise there would be a valid replacement by Lemma \ref{lem:cond11}.\ref{obs:NN_11}. \begin{table}[t] \begin{center} \begin{tabular}{|>{\centering\arraybackslash}m{.6in}|>{\centering\arraybackslash}m{.55in}|>{\centering\arraybackslash}m{.65in}|>{\centering\arraybackslash}m{.55in}|>{\centering\arraybackslash}m{.95in}|>{\centering\arraybackslash}m{2.2in}|} \hline Property & Type of $ e_j $ & Type of $ e_i $ & Type of $ e_k $ & Valid replacement guaranteed by& Figure\\ \hline 2&\begin{tabular}{@{}c@{}c@{}}R- or B-\\ \\ N-\end{tabular}&\begin{tabular}{@{}c@{}c@{}}N-\\ \\ L- or B-\end{tabular}& &Lemma \ref{lem:cond11}.\ref{obs: unattached_11}&\includegraphics[width=.8\linewidth]{./Casej4_together8}\\ \hline 3& \begin{tabular}{@{}c@{}c@{}}any in\\ Fig. \ref{fig:Casej4_types2}\\ \\ R- or B-\\ \end{tabular}& \begin{tabular}{@{}c@{}c@{}} \\L- \\ \\ R- \\ \end{tabular}& \begin{tabular}{@{}c@{}c@{}} \\ L- or B- \\ \\ any in \\ Fig. \ref{fig:Casej4_types2}\end{tabular} & Lemma \ref{lem:cond11}.\ref{obs: 1attached_3cross_11}& \includegraphics[width=.8\linewidth]{./Casej4_together9}\\ \hline 4 &R- or B-& RR or LL & L- or B- & Lemma \ref{lem:cond11}.\ref{obs: 2attached_2crosseach_11} & \includegraphics[width=.8\linewidth]{./Casej4_together10}\\ \hline \end{tabular} \end{center} \caption{ Properties 2-4: Type compatibility when $ e'=11 $.} \label{table:compatibility11} \end{table} In \textbf{Subcase (2,2,2,1)}, there is a valid replacement when $ e_3 $ is of type B-, RR, or LL by Lemma \ref{obs:gen1}.\ref{obs:2B2}, \ref{obs:gen2}.\ref{obs:RR2}, or \ref{obs:gen2}.\ref{obs:2LL}, respectively. So we assume that $ e_3 $ is of type RN or LN. If $ e_3 $ is of type RN, then there is a valid replacement when $ e_2 $ is of type N-, L-, or B- by Property 2, or of type R- by Lemma \ref{obs:gen1}.\ref{obs:2middle_edges}. If $ e_3 $ is of type LN, then there is a valid replacement when $ e_1 $ is of type N-, R-, or B- by Property 2; or when $ e_2 $ is of type L- or B- by Property 2. So we assume that $ e_1 $ is of type LL and $ e_2 $ is of type RR or RN. Then $ b_1,c_1, b_3, $ and $ c_2 $ (or $ b_2 $) are incident to four different vertices not in $ V $ and so there is a valid replacement by Lemma \ref{lem:cond11}.\ref{obs:new_vertex}. In \textbf{Subcase (2,2,1,2)}, we first assume that $ e_3 $ is of type L-. By Property 3, there is a valid replacement if $ e_2 $ is of type L or B. So assume that $ e_2 $ is of type R or N. Then there is a valid replacement when $ e_3 $ is of type LL by Lemma \ref{obs:gen2}.\ref{obs:2LL} or of type R- or B- by Properties 2 and 3 (if $ e_2 $ is of type N or R, respectively). So assume that $ e_3 $ is of type LN. By Property 2, there is a valid replacement unless $ e_1 $ is of type LL.
But in this case, one of the edges $ b_4 $ or $ c_4 $ is incident on a vertex not in $ V $ and both edges are disjoint from $ b_1,c_1, $ and $ b_3 $, so there is a valid replacement by Lemma \ref{lem:cond11}.\ref{obs:new_vertex}. Now assume that $ e_3 $ is of type R-. By Lemmas \ref{obs:gen1}.\ref{obs:2middle_edges} and \ref{obs:gen1}.\ref{obs:2B2}, a valid replacement is guaranteed unless $ e_2 $ is of type L. If $ e_2 $ is of type L, a valid replacement exists if $ e_4 $ is of type L- or B- by Property 3 or if $ e_3 $ is of type N- or L- by Lemma \ref{obs:gen1}.\ref{obs:2middle_edges}. So we assume that $ e_3 $ is of type RR or RB, $ e_2 $ is of type L, and $ e_4 $ is of type RR or RN. If $ e_3 $ is of type RR, then there is a valid replacement by Lemma \ref{lem:cond11}.\ref{obs:new_vertex} because $ b_3,c_3, b_2,$ and $ c_4 $ (or $ b_4 $) are incident to four different vertices not in $ V $. If $ e_3 $ is of type RB, then there is a valid replacement if $ e_1 $ is of type N- or RR by Property 2 and Lemma \ref{obs:gen2}.\ref{obs:RR2}, respectively. If $ e_1 $ is of type B-, then $ b_3 $ is incident to a vertex $ x $ not in $ V $ and it is crossed by $ c_1$ and $ b_2 $; and $ b_2 $ is incident to another vertex $ y $ not in $ V $ and it is crossed by $ b_3 $ and $ c_3 $. Then there is a valid replacement by Lemma \ref{lem:cond11}.\ref{obs: 2attached_2crosseach_11}. So assume that $ e_1 $ is of type LL. Then $ b_1 $ (or $ c_1 $), $ b_3,b_2, $ and $ c_4 $ (or $ b_4 $) are incident to four vertices $ w,x,y, $ and $ z $ not in $ V $ and are crossed by at most 2,2,1, and 3 edges not in $ E' $, respectively. Let $ V''=V\cup \{w,x,y,z\} $ and $ E''$ be the union of $ E' $ and any edges crossing $ b_1,b_3,b_2, $ or $ c_4 $. Then $ |V''|=6+4=10 $, $ |E''|\leq 12+6=18 $, and so $ V'' $ generates a valid replacement. \paragraph{All other Subcases.} In all the remaining subcases, $ e'=10 $ and by Lemma \ref{obs:boundary} we assume that $ \partial \conv(V) $ is formed by sides and edges only. We start with the following lemma that guarantees a valid replacement under certain circumstances. \begin{lem}\label{lem:cond10} $ G $ has a valid replacement whenever one of the following conditions is satisfied. \begin{enumerate} \item \label{obs: unattached_10} There is an edge of $D $ not incident on any vertices in $ V $. \item \label{obs: 1attached_2cross_10} There is an edge of $ D $ crossed by two other edges in $ D $ and incident on a vertex not in $ V $. \item \label{obs: 2attached_simulcross_1crosseach_10} There are two edges of $ D $, simultaneously crossed by an edge not in $ E' $, each crossed by at least one more edge in $ D $, each incident on a vertex not in $ V $, and these two vertices are different. \end{enumerate} \end{lem} \begin{proof} Let $ xy $ and $ x'y' $ be edges in $ D $. \begin{enumerate} \item If $ xy $ is not incident to any vertices in $ V $, let $V''=V\cup \{x,y\} $ and $ E'' $ be the union of $ E' $ and any other edges crossing $ xy $. Then $|V''|=6+2=8 $ and $|E''|\leq 10+3=13 $. So we can replace $ E'' $ by a copy of $ S_8 $ with vertex-set $ V'' $ and continue as usual. \item If $ xy $ is crossed by two other edges in $ D $ and $ x\notin V $, let $ V''=V\cup \{x\} $ and $ E'' $ be the union of $ E' $ and any other edges crossing $ xy $ (at most one). Then $ |V''|=6+1=7 $ and $ |E''|\leq 10+1=11 $. So we can replace $ E'' $ by a copy of $ S_7 $ with vertex-set $ V'' $ and continue as usual.
\item Suppose $ xy $ and $ x'y' $ are simultaneously crossed by an edge not in $ E' $, each is crossed by at least one edge in $ D $, $ x\notin V$, $ x'\notin V$, and $ x\neq x' $. Let $V''=V\cup \{x,x'\} $ and $ E'' $ be the union of $ E' $ and any edges crossing $ xy $ or $ x'y' $. Then $ |V''|=6+2=8 $ and $ |E''|\leq 10+3=13 $. Then $ E'' $ can be replaced by a copy of $ S_8 $ with vertex-set $ V'' $ and the proof continues as usual. \end{enumerate} \end{proof} \begin{table}[h] \begin{center} \begin{tabular}{|>{\centering\arraybackslash}m{.6in}|>{\centering\arraybackslash}m{.6in}|>{\centering\arraybackslash}m{.6in}|>{\centering\arraybackslash}m{1.2in}|>{\centering\arraybackslash}m{2.5in}|} \hline Property & Type of $ e_j $ & Type of $ e_i $ & Valid replacement guaranteed by & Figure\\ \hline 5 & L- or R-& BL or LL & Lemma \ref{lem:cond10}.\ref{obs: 1attached_2cross_10} & \includegraphics[width=.8\linewidth]{./Casej4_together1}\\ \hline 6 & BR or RR & L- or R- & Lemma \ref{lem:cond10}.\ref{obs: 1attached_2cross_10} & \includegraphics[width=.8\linewidth]{./Casej4_together2}\\ \hline 7 & B- or R- & LL or RR & Lemmas \ref{obs:boundary} and \ref{lem:cond10}.\ref{obs: 2attached_simulcross_1crosseach_10} & \includegraphics[width=1\linewidth]{./Casej4_together3}\\ \hline 8 & LL or RR & B- or L-& Lemmas \ref{obs:boundary} and \ref{lem:cond10}.\ref{obs: 2attached_simulcross_1crosseach_10} & \includegraphics[width=1\linewidth]{./Casej4_together4}\\ \hline 9 & BL, LL, BR, or RR & L- & Lemma \ref{lem:cond10}.\ref{obs: 1attached_2cross_10} & \includegraphics[width=.3\linewidth]{./Casej4_together5}\\ \hline 10 & R- & BL, LL, BR, or RR & Lemma \ref{lem:cond10}.\ref{obs: 1attached_2cross_10} & \includegraphics[width=.3\linewidth]{./Casej4_together6}\\ \hline \end{tabular} \end{center} \caption{ Properties 5-10: Type compatibility of two consecutive edges when $ e'=10 $.} \label{table:compatibility} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{|>{\centering\arraybackslash}m{.6in}|>{\centering\arraybackslash}m{.5in}|>{\centering\arraybackslash}m{.5in}|>{\centering\arraybackslash}m{.5in}|>{\centering\arraybackslash}m{1in}|>{\centering\arraybackslash}m{2.4in}|} \hline Property & Type of $ e_j $ & Type of $ e_i $ & Type of $ e_k $ & Valid replacement guaranteed by & Figure\\ \hline 11 & B- or R- & L- or R- & B- or L- & Lemma \ref{lem:cond10}.\ref{obs: 1attached_2cross_10} & \includegraphics[width=.8\linewidth]{./Casej4_together7}\\ \hline \end{tabular} \end{center} \caption{ Property 11: Type compatibility of three consecutive edges when $ e'=10 $.} \label{table:compatibility2} \end{table} By Lemma \ref{lem:cond10}.\ref{obs: unattached_10}, we can assume that the edges in $ E $ are of types O, B, L, R, BL, BR, LL, and RR. Tables \ref{table:compatibility} and \ref{table:compatibility2} show Properties 5-11 on $ G $ under which a valid replacement is guaranteed. All these properties follow from Lemmas \ref{obs:boundary} and \ref{lem:cond10}. The following corollary follows from Properties 5-11 and is used for Subcases (2,2,2,0), (2,2,1,1), and (1,2,2,1). \begin{cor}\label{cor:22} Let $ e_i $ and $ e_j $ be two consecutive edges in the list $ e_1,e_3,e_2,e_4 $, each crossed by two edges in $ D $. Then there is a valid replacement unless $ e_i $ and $ e_j $ are of types (in this order) BL-BR or LL-RR. \end{cor} \begin{figure} \caption{Remaining possibilities for Subcases (2,2,0,2) and (2,1,1,2).} \label{fig:Casej4_bd} \end{figure} In \textbf{Subcase (2,2,2,0)}, there are three consecutive twos.
If $ e_1 $ and $ e_3 $ are of types BL-BR or LL-RR, then $ e_3 $ and $ e_2 $ are not of those types and thus a valid replacement is guaranteed by Corollary \ref{cor:22}. In \textbf{Subcases (2,2,1,1) and (1,2,2,1)}, there is a 2,2,1 substring. By Corollary \ref{cor:22}, the only possible remaining cases are when the first two of these three edges are of types BL-BR or LL-RR. If the third edge is of type L or R, then Property 6 guarantees a valid replacement. If the third edge is of type B, then LL-RR-B is covered by Property 8 and BL-BR-B is covered by Property 11. In \textbf{Subcase (2,1,2,1)}, by Properties 9 and 10, the only remaining possibility is that $ e_3 $ is of type $ B $. But $ e_3 $ cannot be of type B by Lemma \ref{obs:gen1}.\ref{obs:2B2}. In \textbf{Subcase (2,2,0,2)}, the only remaining possibilities are the left-to-right paths in Figure \ref{fig:Casej4_bd}a. Figure \ref{fig:Casej4_b}a corresponds to LL-RR-O-R- (analogous to LL-RR-O-L-, Figure \ref{fig:Casej4_b}e), Figures \ref{fig:Casej4_b}b-c to BL-BR-O-RR (analogous to BL-BR-O-LL, Figure \ref{fig:Casej4_b}f-g), and Figure \ref{fig:Casej4_b}d to BL-BR-O-BR (analogous to BL-BR-O-BL, Figure \ref{fig:Casej4_b}h). We focus on Figures \ref{fig:Casej4_b}a-d. In all these figures, we can assume that $ q_1p_3 $ and $ q_2p_4 $ are edges by Lemma \ref{obs:boundary}. \begin{figure} \caption{Possible subgraphs for Subcase (2,2,0,2).} \label{fig:Casej4_b} \end{figure} In Figure \ref{fig:Casej4_b}a, the edges $ b_1,c_1,b_3, $ and $ c_3 $ do not cross because they would form a triangle with $ q_1p_3 $ in $ G^\otimes $. This means that at least one of the edges $ b_1 $ or $ c_1 $ does not share vertices with $ b_3 $ and $ c_3 $. Let $ b_1,b_3, c_3,$ and $ b_4 $ be incident on $ w,x,y $, and $ z $, respectively, and assume that $ w,x,y, $ and $ z $ are all different. Let $ V''=V\cup\{w,x,y,z\} $ and $ E'' $ be the union of $ E' $ and any edges crossing $ b_1,b_3, c_3,$ and $ b_4 $. Thus $ |V''|=10 $ and $ |E''|\leq 10+7=17 $. Then $ E'' $ can be replaced by a copy of $ H_{10} $ with vertex-set $ V'' $. In Figures \ref{fig:Casej4_b}b-c, $ b_1, b_3,b_4, $ and $ c_4 $ are incident on vertices $ w,x,y $ and $ z $ not in $ V $. Figure \ref{fig:Casej4_b}b shows the case when $ w\neq x $. Let $ V''=V\cup\{w,x,y,z\} $ and $ E'' $ be the union of $ E' $ and any other edges crossing $ b_1,b_3,b_4 $ and $ c_4 $. Then $ |V''|=10 $ and $ |E''|\leq 10+8=18 $. Thus $ E'' $ can be replaced by a copy of $ H_{10} $ with vertex-set $ V'' $. Figure \ref{fig:Casej4_b}c shows the situation when $ w=x $. Here $ c_4 $ can be replaced by $ p_1q_2 $ without adding new crossings, falling into the case BL-BR-O-BR of Figure \ref{fig:Casej4_b}d. Let $ V''=V\cup\{x,y\} $ and $ E'' $ be the union of $ E' $ and any edges crossing $b_1,b_3,$ or $ b_4$. Then $ |V''|=8 $ and $ |E''|\leq 10+4=14 $. As long as $ |E''|\leq 13 $, $ E'' $ can be replaced by a copy of $ S_8 $ with vertex-set $ V'' $. So assume that $ |E''|=14 $ and that $ \partial\conv(V'') $ does not contain any nonedges. Let $ u'u $ be the edge simultaneously crossing $ b_1 $ and $ b_3 $ (other than $ q_1p_3 $). If $ u' $ and $ u $ are not in $ V $, then let $ V_3=V\cup\{x,y,u',u\} $ and $ E_3 $ be the union of $ E'' $ and any edges crossing $u'u$. Then $ |V_3|=10 $ and $ |E_3|\leq 14+4=18 $ and thus $ E_3 $ can be replaced by a copy of $ H_{10} $ with vertex-set $ V_3$. So we assume that $ u'=q_1 $ and that the configuration is as shown in Figure \ref{fig:Casej4_left}a.
This is the only remaining possibility for which we have not yet found a valid replacement in this subcase. \begin{figure} \caption{Possible subgraphs for Subcase (2,1,1,2).} \label{fig:Casej4_e} \end{figure} In \textbf{Subcase (2,1,1,2)}, by Properties 5-11, the only remaining possibilities are the left-to-right paths in Figure \ref{fig:Casej4_bd}b. Figure \ref{fig:Casej4_e}a corresponds to LL-R-L-RR. Each of the corresponding six edges is incident on a vertex in $ V $ and on a vertex not in $ V $. Let $ x $ and $ y $ be the vertices not in $ V $ incident on $ b_3 $ and $ b_2 $, respectively. Then we can assume that $ b_1 $ is incident on a vertex $ w\neq x$ not in $ V $; and $ b_4 $ is incident on a vertex $ z\neq y$ not in $ V $. Let $ V''=V\cup\{w,x,y,z\} $ and $ E'' $ be the union of $ E' $ and any edges crossing $b_1,b_2,b_3,$ or $b_4$. Note that by Lemma \ref{obs:boundary}, the diagonals $ q_1p_3 $ and $ q_2p_4 $ are edges. Then $ b_1 $ and $ b_3 $ are simultaneously crossed by $ q_1p_3 $, $ b_1 $ is crossed by at most 2 other edges, and $ b_3 $ by at most 1 more. This is a contribution of at most 4 new edges to $ E'' $. The same reasoning applies to $ b_2 $ and $ b_4 $. Therefore $ |E''|\leq 10+8=18 $ and so $ E'' $ can be replaced by a copy of $ H_{10} $ with vertex-set $ V'' $, and the proof continues as usual. Figures \ref{fig:Casej4_e}b-e correspond to BL-B-B-BR, BR-B-B-BL, BR-B-B-BR, and BL-B-B-BL, respectively. The argument for all these cases is analogous; we focus on Figure \ref{fig:Casej4_e}b. The edges $ b_1 $ and $ b_4 $ are incident on some vertices $ x\notin V $ and $ y\notin V $, respectively. Let $ V_2=V\cup\{x,y\} $ and $ E_2 $ be the union of $ E' $ and any edges crossing $b_1$ or $ b_4 $. Then $ |V_2|=8 $ and $ |E_2|\leq 10+4=14 $. Then $ E_2 $ can be replaced by a copy of $ S_8 $ with vertex-set $ V_2 $ as long as $ |E_2|\leq 13 $ or there is a nonedge on $ \partial \conv(V_2) $. Assume that $ |E_2|=14 $ and that $ \partial \conv(V_2) $ is formed by sides and edges of $ G $ only. By Lemma \ref{obs:boundary}, we can assume that $ q_1p_3 $ and $ q_2p_4 $ are edges. Let $ uu' $ and $ vv' $ (Figure \ref{fig:Casej4_ext2112}a) be the fourth edges crossing $ b_1 $ and $ b_4 $, respectively. If $ u $ and $ u' $ are not in $ V $, then let $ V_3=V_2\cup\{u,u'\} $ and $ E_3$ be the union of $ E_2 $ and any edges crossing $ uu' $. Then $ |V_3|=10 $, $ |E_3|\leq 14+3=17 $, and thus $ E_3 $ can be replaced by a copy of $ H_{10} $ with vertex-set $ V_3 $. So assume that $ u'=q_1 $ and $ v'=p_4 $, Figure \ref{fig:Casej4_ext2112}b. (The argument is analogous if instead $ u=p_3 $ or $ v=q_2 $.) Let $ V_4=V\cup\{x,y,u,v\} $ and $ E_4$ be the set of edges crossing the interior of $ \conv(V_4) $, that is, the union of $ E_3 $ and any edges crossing $uq_1$ or $ vp_4 $. Then $ |V_4|=10 $ and $ |E_4|\leq 10+10=20 $. As long as $ |E_4|\leq 19 $ or there is a nonedge on $ \partial \conv(V_4) $, $ E_4 $ can be replaced by a copy of $ H_{10} $ with vertex-set $ V_4$ and any nonedges on $ \partial \conv(V_4) $. So assume that $ |E_4|=20 $ and that $ \partial \conv(V_4) $ is formed by sides and edges of $ G $ only. If one of the edges $ w'w $ crossing $ uq_1 $ or $ vp_4 $ is not incident on any vertices of $ V_4 $, let $ V_5=V_4\cup\{w,w'\} $ and $ E_5$ be the union of $ E_4 $ and any edges crossing $w'w$. Then $ |V_5|=12 $, $ |E_5|\leq 20+3=23 $, and thus $ E_5 $ can be replaced by a copy of $ H_{12} $ with vertex-set $ V_5$.
Then (in order to avoid triangles in $ G^{\otimes} $) we can assume that the two extra edges crossing $ uq_1 $ are $ xw_1 $ and $ xw_2 $, the two extra edges crossing $ vp_4 $ are $ yw_3 $ and $ yw_4 $, and $ up_3 $ and $ vq_2 $ are edges (Figure \ref{fig:Casej4_ext2112}c). Let $ V_6=V_4\cup \{w_1,w_2,w_3,w_4\} $ and $ E_6 $ be the union of $ E_4 $ and any edges crossing $ xw_1, xw_2, yw_3, $ and $ yw_4 $. Then $ |V_6|=14 $ and $ |E_6|\leq 20+10=30 $. As long as $ |E_6|\leq 29$, $ E_6$ can be replaced by a copy of $ H_{14} $ with vertex-set $ V_6$. So assume that $ |E_6|=30 $ as shown in Figure \ref{fig:Casej4_left}b. This is the only remaining possibility for which we have not yet found a valid replacement in this subcase. \begin{figure} \caption{Extending a cycle 2112.} \label{fig:Casej4_ext2112} \end{figure} Figures \ref{fig:Casej4_left}a-b are all remaining possibilities for which we have not found valid replacements. Figure \ref{fig:Casej4_left}a corresponds to Subcase (2,2,0,2) and we say that the quadrilateral formed by $ e_1,e_2,e_3, $ and $ e_4 $ is of type 2202. Figure \ref{fig:Casej4_left}b corresponds to Subcase (2,1,1,2) and we say that the quadrilateral formed by $ e_1,e_2,e_3, $ and $ e_4 $ is of type 2112. Note that the existence of a quadrilateral of type 2202 implies the existence of either a valid replacement or a quadrilateral of type 2112. This is because in Figure \ref{fig:Casej4_left}a the edges $ b_1,c_1, e_1,$ and $ c_3 $ (or $ e_3,c_1,c_3,$ and $b_3 $) form a quadrilateral in $ G^\otimes $ of type 2112. So assume that there is a quadrilateral in $ G^{\otimes} $ of type 2112, Figure \ref{fig:Casej4_left}b. Since the quadrilateral formed by $ xw_1,xw_2,up_3, $ and $ uq_1 $ falls under Subcase (2,2,0,2), either there is a valid replacement or this quadrilateral is of type 2202. So we assume that the quadrilateral formed by $ xw_1,xw_2,up_3, $ and $ uq_1 $ is of type 2202. (See Figure \ref{fig:Casej4_left}c, with $ x,u,w_1,x',w_2,p_3,p_2, $ and $ q_1 $ in Figure \ref{fig:Casej4_left}c corresponding to $ p_1,p_2,q_1,x,p_3,q_2,y,$ and $p_4 $ in Figure \ref{fig:Casej4_left}a.) Note that Figure \ref{fig:Casej4_left}c includes another quadrilateral falling under Subcase (2,1,1,2), which appears shaded in Figure \ref{fig:Casej4_left}d. So either there is a valid replacement or we can assume that this quadrilateral is of type 2112. So the existence of a 2112 quadrilateral implies either a valid replacement or the situation in Figure \ref{fig:Casej4_left}d, where two partial copies of Figure \ref{fig:Casej4_left}b (only the points $ y,p_4,p_1,p_2,q_1,p_3,$ and $q_2 $), namely the subgraphs induced by $ y,a,b,c,d,e, f$ and $ y',a',b',c',d',e',f' $ with $ y'=d $ and $ f'=e $, are put together. If we repeat the argument two more times, we can guarantee either a valid replacement or the subgraph in Figure \ref{fig:Final22_48} that consists of a set $ V'' $ of 22 vertices whose convex hull interior is crossed by at most 48 edges. Then $ V'' $ generates a valid replacement ($ H_{22} $ has $ 49 $ edges). \begin{figure} \caption{(a-b) The only remaining subgraphs in Subcases (2,2,0,2) and (2,1,1,2), respectively. (c-d) Putting together Subcases (2,2,0,2) and (2,1,1,2).} \label{fig:Casej4_left} \end{figure} \begin{figure} \caption{A valid replacement with 22 vertices and 48 edges; recall that the graph $ H_{22} $ has $ 49 $ edges.} \label{fig:Final22_48} \end{figure} \end{proof} \end{document}
\begin{document} \title{Mean Field Models to Regulate Carbon Emissions in Electricity Production} \begin{abstract} The most serious threat to ecosystems is global climate change, fueled by the uncontrolled increase in carbon emissions. In this project, we use mean field control and mean field game models to analyze and inform the decisions of electricity producers on the extent to which renewable sources of production ought to be used in the presence of a carbon tax. The trade-off between higher revenues from production and the negative externality of carbon emissions is quantified for each producer, who needs to balance in real time reliance on reliable but polluting (fossil fuel) thermal power stations versus investing in and depending upon clean production from uncertain wind and solar technologies. We compare the impacts of these decisions in two different scenarios: 1) the producers are competitive and hopefully reach a \textit{Nash Equilibrium}; 2) they cooperate and reach a \textit{Social Optimum}. We first prove that both problems have a unique solution using forward-backward systems of stochastic differential equations. We then illustrate with numerical experiments the producers' behavior in each scenario. We further introduce and analyze the impact of a regulator in control of the carbon tax policy, and we study the resulting Stackelberg equilibrium with the field of producers. \end{abstract} \vskip3mm \keywords{Mean field games; Mean field control; Carbon emission; Stackelberg equilibrium} \vskip 12pt\noindent \emph{\textbf{Acknowledgments.}} {The authors were partially supported by NSF DMS-1716673, ARO W911NF-17-1-0578 and AFOSR FA9550-19-1-0291.} \section{Introduction} \label{sec:intro} Nowadays, it is widely accepted that the most serious threat to ecosystems is global warming, fueled by the uncontrolled increase in carbon emissions, and for the last twenty-some years, starting with the Kyoto Protocol in 1997, international treaties have sprung up in the hope of addressing this negative externality. The most recent of these treaties is the \textit{Paris Agreement} with $196$ signatories aiming at keeping the increase in temperature below $2 ^\circ$C. Throughout the world, local and federal governments try to disincentivize reliance on polluting means of production by introducing carbon taxes or cap-and-trade programs. In the latter case, regulators put a limit on the allowable quantity of Greenhouse Gas (GHG) emissions, any quantity above this limit having to be covered by emission certificates (allowances) or the payment of a penalty. In the former case, whether they are levied \textit{upstream} or \textit{downstream}, carbon taxes aim at penalizing the use of fossil fuels for their carbon content. The interested reader is referred to \cite{carmona_SICON,carmona_SIREV,carmona_ASCONA} for a review of the state of affairs in the early days of the European Union Emission Trading System, and mathematical treatments of thorough partial equilibrium models for the comparison of realistic implementations of these policies in the electricity sector. According to the Environmental Protection Agency, electricity production claims the lion's share ($25\%$) of the total Greenhouse Gas emissions in the US.\footnote{\url{https://www.epa.gov/ghgemissions/global-greenhouse-gas-emissions-data}} So here, we concentrate on the electricity sector and we propose a model for the analysis of the impact of investments in clean means of production (e.g. solar and wind).
While a model of the electricity sector should comprise at least three types of agents: electricity producers, resellers / retailers, and the end-users, we shall concentrate our modeling effort on the producers. Until the challenging technological problem of electricity storage is resolved at a larger scale, the demand for this commodity remains inelastic, and we shall penalize the producers for not matching the demand, forcing the Independent System Operator (ISO) to rely on costly reserves. In the following, we shall use the term \textit{renewable} to mean electricity produced from wind turbines or solar panels. Alternatively, we shall use the term \textit{non-renewable} to mean electricity produced by burning fossil fuels like coal, crude oil or natural gas. We chose this convention for convenience, even if this literary license is not completely accurate. Individual producers control over time their usage of fossil fuels, and hence, the amount of $CO_2$ emissions they are responsible for. They also control their possible investment in solar or wind production, should they decide to go that route. Notice that while the decision to use fossil fuels changes over time, the investment in solar panels or wind turbines is a one-time decision made at the beginning of the time period under consideration. In our model, producing electricity from renewable sources involves an initial investment and no extra cost over time since the marginal cost of running these production assets is practically zero (except for maintenance costs and possible subsidies, which we ignore here). While the zero cost of production is an attractive feature, it comes with very high risks due to the difficulty of predicting the weather and the uncertainty associated with the high volatility of these predictions. On the other hand, production from traditional power plants is more predictable, the costs depending upon the prices of the fuels and the price put on the $CO_2$ emissions by the regulator. Each producer has to find the right balance between the pros and the cons of the two major means of production we single out in our stylized model. The overarching goal is to decarbonize so as to meet emission targets, harnessing demand-side policies through the establishment of a tax, as well as supply-side resources including wind and solar production technologies. Our economic model is based on the premise that the individual producers and the regulator only have access to aggregate quantities. Basically, they only have access to the statistical distributions of the productions, emissions, investments, etc., of the individual producers. As a result, we propose two separate frameworks for the individual producers to optimize the mix of renewable and nonrenewable production they should include in their portfolios. We compute and compare the optimal centralized strategies by solving mean field control problems, and the optimal decentralized strategies by solving mean field game problems. Our theoretical analysis relies on the probabilistic approach to construct forward-backward stochastic differential equation (FBSDE) systems for which we show, in both settings, existence and uniqueness of the solutions. Further, we propose a numerical approach to monitor the effect of a carbon tax on the optimal and equilibrium decisions in both cases. Quantifying the differences between the two approaches is reminiscent of what is known as the Price of Anarchy (PoA).
Among the conclusions drawn from the analysis of our model, we confirm that a carbon tax is an effective incentive for the use of renewables. Also intuitive is the fact that in the absence of a carbon tax, the overall pollution is greater when producers compete than when they cooperate. Less obvious is the fact that cooperating producers still pollute less than competing ones, even when the carbon tax is significant. We also show that stricter regulations tend to reduce the differences between competitive and cooperative equilibria. Further, we argue that the best way for the regulator to encourage producers to match the demand is to incentivize competition over cooperation among the producers. Mean Field Game (MFG) models appeared simultaneously and independently in the original works of \cite{caines_huang_malhame_2006} and \cite{lasry_lions_2007}. The thrust of these works was to propose a paradigm to overcome the challenges of the search for Nash equilibria in large games by considering models for which the interactions between the players were of a mean field type, and deriving effective equations in the limit when the number of players goes to infinity. Models in which a single player plays a different role from the field of remaining players were introduced and studied under the name of MFGs with major and minor players. In their Stackelberg version, they had a significant impact on problems in economic contract theory. See for example \cite{bensoussan_yam_2016}, \cite{salhab2016}, \cite{posamai_2019}, \cite{wang_2020}, \cite{elie_2020}, or \cite{aurell2020optimal}. Notice that in these models, the major player uses a time-dependent control, while in this paper, we shall assume that the regulator uses time-independent controls. Using mean field models for energy applications is very natural. Competition in the oil industry and the impact of competition from renewable energy were analyzed in \cite{gueant_2010} and \cite{sircar_2017}. The early work \cite{gueant_2010} was extended with the addition of a regulator in \cite{achdou_2020}. In \cite{aid_2020}, optimal entry and exit times for two types of agents, electricity producers using either renewable or nonrenewable energy resources, are analyzed using MFGs. Competition among electricity producers is analyzed in \cite{djehiche_2018} by using a mean field type game where the mean field interactions come through the conditional expectation of the electricity price and in \cite{alasseur_2020} by using a model where the interactions enter the electricity spot price. In \cite{huyen_2020} and \cite{elie_2020}, electricity consumers constitute the mean field population and a single electricity producer plays the role of the principal, in contrast to our model where we take the electricity producers as the mean field population and the regulator as the principal. Mean field models have also been used to model environmental impacts. In \cite{malhame_2017}, an MFG model is proposed to model climate change negotiations among countries interacting through a $CO_2$ emission permit market. Emission certificate markets are also studied in \cite{shrivats_2020} and \cite{zhang_2016}, again without the presence of a regulator. The paper is structured as follows. In Section~\ref{sec:minormodel}, we introduce the minor players' model and the various equilibrium notions used in the sequel. In Section~\ref{sec:minor_main_theor_res} (resp. \ref{sec:minor_main_numeric_res}), the main theoretical (resp. numerical) results for the minor players' model are given.
In Section~\ref{sec:regulator_model}, we introduce the regulator and define the relevant notions of equilibrium. Finally, we provide numerical results for the combined model with minor players and the regulator in Section~\ref{sec:reg_main_numeric_res} and we summarize our findings in a short Section~\ref{sec:conclusion}. \section{Mean Field Model for Electricity Producers}\label{sec:minormodel} \subsection{N-Player Model} Although we will focus on mean field limits involving an infinite number of players, we start with the description of what the \textbf{$\mathcal{N}$-player version} of the game would be. For symmetry reasons, we assume that the total electricity demand is split equally between all the agents, and each agent faces the same demand, say $D_t$ at time $t$. The state of producer $i$ is five-dimensional: instantaneous electricity production $Q^i_t \in \mathbb{R}_+$, instantaneous irradiance $S^i_t \in \mathbb{R}_+$, instantaneous emission level $E^i_t \in \mathbb{R}_+$, cumulative pollution $P^i_t \in \mathbb{R}_+$, and instantaneous nonrenewable energy production $\tilde{N}^i_t \in \mathbb{R}_+$. Producer $i$ controls their state by choosing, at time $t=0$, their initial investment $R^i_e \in \mathbb{R}$ in renewable production assets (e.g. the number of solar panels they purchase), and at each subsequent time $t$, by choosing the rate of change $N^i_t \in \mathbb{R}$ in nonrenewable energy production. Notice that $N^i_t$ is time dependent while $R^i_e$ is time independent. This will be a challenging feature of the mathematical analysis of our model. \begin{remark} For the sake of definiteness, we use the terminology of solar power production. However, other types of renewable energy can be modelled in a similar way. For example, for wind power, $S^i_t$ would stand for the instantaneous output of a wind farm and $R_e$ would be the corresponding units of initial investment. \end{remark} With these provisos out of the way, we define the time evolution of the state of producer $i$ as: \begin{equation*} \label{eq:nplayer_dynamics} \begin{alignedat}{2} dQ^i_t &= \underbracket{\kappa_{1} N^i_tdt}_{\text{Term~1}} + \underbracket{ \kappa_{2}R_e^i\left( \alpha \cos(\alpha t) dt + \textcolor{Bittersweet}{dS^i_t}\right)}_{\text{Term~2}}, &&\\ \textcolor{Bittersweet}{dS^i_t} &= (\theta - S^i_t) dt + \sigma_0 d\widecheck W^i_t, &&dE^i_t = \delta N^i_t dt + \sigma_1 d W^i_t,\\ dP^i_t &= E^i_t dt, &&d \tilde{N}^i_t = N^i_t dt. \end{alignedat} \end{equation*} The changes in the instantaneous electricity production depend on the instantaneous nonrenewable energy usage (given by term 1) and the instantaneous yield from the renewable energy investment (given by term 2). This second term includes a seasonality component (sinusoidal term) and a random shock for the variability of the sun irradiance. The form of the seasonality component was chosen for the sake of simplicity. It can easily be extended to several harmonics to include nightly and daily, monthly and yearly effects. In any case, we have $ Q_t^i= Q^i_0 + \kappa_1 \tilde{N}^i_t + \kappa_2 R^i_e (\sin(\alpha t) + S^i_t) $ where $\kappa_1, \kappa_2>0$ are constants that give the efficiency of the production from nonrenewable and renewable energy, respectively. The constant $\alpha>0$ gives the period of the seasonality of the renewable energy. We model the idiosyncratic noise terms $S^i_t$ in the renewable productions as independent stationary processes.
For the sake of definiteness, we assume that they are Ornstein-Uhlenbeck processes with the same mean $\theta>0$ and volatility $\sigma_0>0$, the $\widecheck W^i$ being independent Wiener processes. The dynamics of the instantaneous emissions $E^i_t$ have two components: the contribution from the production from nonrenewable energy power plants, and idiosyncratic random shocks with constant volatility $\sigma_1>0$ given by independent Wiener processes $W^i$, also independent of the $\widecheck W^i$'s. The choice of the constant $\delta$ could include the effects of some abatement measures such as carbon capture, sequestration and the use of filters. Using the notation $\tilde N^i_t$ for the instantaneous nonrenewable energy production given by $ \tilde N^i_t = \tilde N^i_0 + \int_0^t N^i_s ds $, the expected cost to producer $i$ over the whole period is: \begin{multline} \label{eq:nplayer_minorcost} C^{\mathcal{N}}(N^i, R^i_e; \bar Q) = \mathbb{E}\Big[\int_0^T \Big[\underbracket{c_{1} |N^i_t|^2}_{\text{Term~1}} + \underbracket{p_1 \tilde{N}^i_t}_{\text{Term~2}}+ \underbracket{c_2|Q^i_t-D_t|^2}_{\text{Term~3}} - \\ \underbracket{c_3\big(\rho_0 + \rho_1(D_t - \bar Q_t)\big)Q^i_t}_{\text{Term~4}}\Big] dt + \underbracket{\tau|P^i_T|^2}_{\text{Term~5}} + \underbracket{p(R_e^i)}_{\text{Term~6}}\Big], \end{multline} where $\bar Q = \sum_{j=1}^{\mathcal{N}} Q^j / \mathcal{N}$ and $p: \mathbb{R}_+ \mapsto \mathbb{R}_+$ is the price function for the investment in renewable energy. Term~1, with $c_1>0$, is a penalty (i.e., a delay cost) for attempting to ramp nonrenewable energy power plants up and down too quickly. Term~2 represents the costs of the fossil fuels used in nonrenewable power plants. The constant $p_1>0$ can be understood as the average cost of one unit of fossil fuel. In lieu of storage, which is not included in our models because of its scarcity, Term~3, with $c_2>0$, imposes a penalty on producers for not matching the demand and forcing the system operator to use costly reserves. Term~4 represents the revenues from electricity production, $\big(\rho_0 + \rho_1 (D_t - \bar Q_t)\big)$ being the inverse demand function which is assumed to be linear in excess demand or supply. Here $\rho_0$ and $\rho_1$ are strictly positive constants. It captures the fact that the price increases if there is excess demand, and it decreases if there is excess supply. We assume that the producers are selling what they produce. This term introduces the mean field interactions into the model. Term~5 gives the carbon tax levied by the regulator. We emphasize its role by assuming it is proportional to the square of the terminal pollution. Term~6 is the total cost related to the initial investment in renewable electricity production including the price of the solar panels and the cost of the land used. \subsection{The Mean Field Model} In discussing the mean field regime of the model, we focus on a \textit{representative} producer interacting with the field of the other producers, so we drop the superscript $i$ and the dynamics equations become: \begin{equation} \label{eq:minordynamics} \begin{aligned} dQ_t &= \kappa_{1} N_tdt + \kappa_{2}R_e\big( \alpha \cos(\alpha t) dt + \hskip-3mm &&\textcolor{Bittersweet}{(\theta- S_t) dt + \sigma_0 d\widecheck W_t}\big), \\ \textcolor{Bittersweet}{dS_t} &= (\theta - S_t) dt + \sigma_0 d\widecheck W_t, && dE_t = \delta N_t dt + \sigma_1 d W_t,\\ dP_t &= E_t dt, &&d \tilde{N}_t = N_t dt, \end{aligned} \end{equation} where $W$ and $\widecheck W$ are independent Wiener processes.
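To make the dynamics \eqref{eq:minordynamics} concrete, the short Python sketch below simulates one trajectory of the representative producer's state with a simple Euler--Maruyama scheme, for an arbitrary (here constant) ramping path $(N_t)_t$ and a given investment $R_e$. It is only meant to illustrate the state equations; the numerical values are placeholders and not the calibrated parameters used later for the numerical experiments.
{\small\begin{verbatim}
import numpy as np

# Placeholder parameters (for illustration only, not the calibrated values).
kappa1, kappa2, alpha = 0.13, 0.1, 40 * np.pi
theta, sigma0, sigma1, delta = 5.0, 0.01, 0.01, 0.15
T, n_steps = 1.0, 1000
dt = T / n_steps

def simulate_producer(N_path, R_e, rng=np.random.default_rng(0)):
    """Euler-Maruyama simulation of (Q, S, E, P, Ntilde) for one producer."""
    Q, S, E, P, Nt = 0.0, theta, 0.0, 0.0, 0.0
    traj = []
    for k in range(n_steps):
        t = k * dt
        dWc = rng.normal(0.0, np.sqrt(dt))   # irradiance noise (checked W)
        dW = rng.normal(0.0, np.sqrt(dt))    # emission noise (W)
        dS = (theta - S) * dt + sigma0 * dWc
        Q += (kappa1 * N_path[k] * dt
              + kappa2 * R_e * (alpha * np.cos(alpha * t) * dt + dS))
        S += dS
        E += delta * N_path[k] * dt + sigma1 * dW
        P += E * dt
        Nt += N_path[k] * dt
        traj.append((Q, S, E, P, Nt))
    return np.array(traj)

# Example: a constant ramping rate and one unit of renewable investment.
traj = simulate_producer(N_path=np.full(n_steps, 0.5), R_e=1.0)
print(traj[-1])   # terminal state (Q_T, S_T, E_T, P_T, Ntilde_T)
\end{verbatim}}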
Accordingly, the total expected cost becomes: \begin{multline} \label{eq:minorcost} C(N, R_e; \bar Q) = \mathbb{E}\Big[\int_0^T \Big[c_{1} |N_t|^2 + p_1 \tilde{N}_t+ c_2|Q_t-D_t|^2 - \\ c_3\big(\rho_0 + \rho_1(D_t- \bar Q_t)\big)Q_t\Big] dt + \tau|P_T|^2 + p(R_e)\Big], \end{multline} where $\bar Q_t=\mathbb{E}[Q_t]$. We shall sometimes use the notation $\bar Q_t(N,R_e)$ to emphasize the fact that the expectation is computed under the state dynamics controlled by the admissible control $(N,R_e)$. \subsection{Equilibrium Notions} We consider two different models: mean field game (MFG) and mean field control (MFC). In the mean field game model, producers behave competitively and minimize their total expected costs (search for their best responses) given the other players' decisions. A Nash equilibrium is then characterized as a fixed point of the best response map so defined. In the sequel, we restrict our attention to admissible strategies $(N,R_e)$ such that $\mathbb{E}[\int_0^T|N_t|^2 dt] <+\infty$ and $R_e \in \mathbb{R}_+$. \begin{definition}[\textit{MFG Nash Equilibrium}] An admissible strategy and mean field flow tuple, $(\hat N, \hat R_e, \bar Q)$, is called an \textit{MFG Nash equilibrium} if, for any admissible $(N, R_e)$, we have: $$ C\Big(\textcolor{Bittersweet}{(N, R_e)}; \bar Q\Big) \geq C\Big((\hat N, \hat R_e); \bar Q\Big), $$ and $\bar Q = \bar Q(\hat N, \hat R_e)$. \end{definition} In the mean field control case, we assume that the producers cooperate and leave the choice of the control to a social planner minimizing the total expected cost as defined in \eqref{eq:minorcost}. In a realistic setup, the producers can be thought of as the production facilities of a monopolistic electricity production firm, and the social planner's decisions refer to the decisions taken by the headquarters. In this case, if one player changes their behavior, every player changes in the same way, and the mean field is affected. The problem is now an optimal control problem. \begin{definition}[\textit{Social Planner's MFC Optimum}] An admissible strategy and mean field flow tuple, $(\hat N,\hat R_e, \bar Q)$, is called an \textit{MFC optimum} if, for any admissible $(N, R_e)$, we have: $$ C\Big(\textcolor{Bittersweet}{(N, R_e)}; \bar Q(\textcolor{Bittersweet}{N, R_e})\Big) \geq C\Big((\hat N, \hat R_e); \bar Q\Big), $$ and $\bar Q = \bar Q(\hat N, \hat R_e)$.
\end{definition} \section{Main Theoretical Results} \label{sec:minor_main_theor_res} In this section, the following forward backward stochastic differential equation system (FBSDE) is going to be of interest: \begin{align} \label{eq:fbsde} dQ_t &= -\frac{\kappa_{1}}{2c_1} (Y_t^1\kappa_1+Y_t^3\delta+Y_t^5)dt \nonumber\\ &\qquad+ \kappa_{2} (p^{\prime})^{-1} \Big(-\mathbb{E}\Big[\int_0^T \kappa_2Y_t^1\Big(\alpha \cos(\alpha t) + (\theta {-S_t}) \Big) dt \Big]\Big)\nonumber\\ &\pushright{\times \left( \alpha \cos(\alpha t) dt + (\theta- S_t) dt + \sigma_0 d\widecheck W_t\right),} &Q_0 &= q_0 \nonumber\\ dS_t &= (\theta - S_t) dt + \sigma_0 d\widecheck W_t, &S_0 &= \theta \nonumber\\ dE_t &= -\frac{\delta}{2c_1} (Y_t^1\kappa_1+Y_t^3\delta+Y_t^5) dt + \sigma_1 d W_t, &E_0 &= e_0\nonumber\\ dP_t &= E_t dt, &P_0 &= p_0 \nonumber\\ d \tilde{N}_t &= \frac{1}{2c_1} (Y_t^1\kappa_1+Y_t^3\delta+Y_t^5) dt, \qquad &\tilde{N}_0 &= \tilde{n}_0\nonumber\\ dY^1_t &= \Big(-{2c_2({Q}_t-D_t)} + {c_3\big(\rho_0 + \rho_1(D_t- \bar Q_t)\big)}\Big)dt + & &\nonumber\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad Z_t^{1,1}d\widecheck{W}_t + Z_t^{1,2}dW_t &Y_T^1 &= 0 \nonumber\\ dY^2_t &= \Big(\kappa_2 (p^{\prime})^{-1} \Big(-\mathbb{E}\Big[\int_0^T \kappa_2Y_t^1\Big(\alpha \cos(\alpha t) + (\theta {-S_t}) \Big) dt \Big]\Big) Y_t^1 + \nonumber\\ &\pushright{Y_t^2\Big)dt + Z_t^{2,1}d\widecheck{W}_t + Z_t^{2,2}dW_t, } &Y_T^2 &= 0\nonumber\\ dY^3_t &= -Y_t^4dt + Z_t^{3,1}d\widecheck{W}_t + Z_t^{3,2}dW_t, &Y_T^3 &= 0 \nonumber\\ dY^4_t &= Z_t^{4,1}d\widecheck{W}_t + Z_t^{4,2}dW_t, &Y_T^4 &= 2\tau\widehat{P}_T \nonumber\\ dY^5_t &= -p_1dt + Z_t^{5,1}d\widecheck{W}_t + Z_t^{5,2}dW_t, &Y_T^5 &= 0. \end{align} \begin{theorem} \label{theorem:fbsde_mfg} $(\hat N_t,\hat R_e, \bar Q)$ is a Nash equilibrium if and only if $(\hat N, \hat R_e)$ is given by: \begin{equation} \begin{aligned} \label{eq:fbsde_opt_cond} \hat N_t &= -\dfrac{Y_t^1\kappa_1+Y_t^3\delta+Y_t^5}{2c_1}, \quad t \in [0,T]\quad\text{and}\quad\\ \hat R_e&=(p^{\prime})^{-1} \Big(-\mathbb{E}\Big[\int_0^T \kappa_2Y_t^1\Big(\alpha \cos(\alpha t) + (\theta {-S_t}) \Big) dt \Big]\Big), \end{aligned} \end{equation} where $(Q,S,E,P,\tilde{N},Y^1,Y^2,Y^3,Y^4,Y^5)$ is a solution to the FBSDE given in \eqref{eq:fbsde}.\footnote{Here, $(p^\prime)^{-1}(\cdot)$ refers to the inverse of the first derivative of the function $p(\cdot)$.} \end{theorem} \begin{condition} \label{cond:cond_on_p} \begin{enumerate}[label={(\roman*)}] \item $p$ is convex. \item $(p^{\prime})^{-1}$ is bounded i.e. $(p^{\prime})^{-1}:\mathbb{R}\mapsto[0, R_e^{\text{max}}]$, continuous and monotone. \end{enumerate} \end{condition} \begin{theorem} \label{theorem:fbsde_existence_mfg} Assume Condition~\ref{cond:cond_on_p} holds, then there exists a unique Nash Equilibrium mean field flow $\bar Q$. \end{theorem} \begin{theorem} \label{theorem:fbsde_mfc} $(\hat N, \hat R_e)$ is an MFC optimum if and only if $(\hat N, \hat R_e)$ is given by \eqref{eq:fbsde_opt_cond} where $(Q,S,E,P,\tilde{N},$ $Y^1,Y^2,Y^3,Y^4,Y^5)$ is a solution to the FBSDE given in \eqref{eq:fbsde} where the equation for $(Y^1_t)_t$ is replaced by \begin{equation} \label{eq:mfc_y1} dY^1_t = \Big(-{2c_2({Q}_t-D_t)} + {c_3\big(\rho_0 + \rho_1(D_t- \textcolor{Bittersweet}{2\bar Q_t})\big)}\Big)dt + Z_t^{1,1}d\widecheck{W}_t + Z_t^{1,2}dW_t. \end{equation} \end{theorem} \begin{theorem} \label{theorem:fbsde_existence_mfc} Assume Condition~\ref{cond:cond_on_p} holds, then there exists a unique mean field control optimum flow $\bar Q$. 
\end{theorem} \section{Numerical Approach} \label{sec:minor_main_numeric_res} For numerical purposes, given the technical challenges posed by the solution of the large FBSDE in~\eqref{eq:fbsde}, which involves both time-dependent and time-independent controls, we implement an analytic approach whose details we give below. To this end, we first notice that: \begin{equation*} \inf_{(N_t)_t, R_e} C(N, R_e; \bar Q) = \inf_{R_e} \inf_{(N_t)_t} C(N, R_e; \bar Q), \end{equation*} and we first assume that $R_e$ is fixed. Next, we rewrite the model in matrix form using $X_t:=[Q_t\quad S_t\quad E_t\quad P_t\quad \tilde{N}_t]^{\top}$ as the $5$-dimensional state process at time $t$, and recast the optimization problem as: \begin{equation} \begin{aligned} \label{eq:minorcost_matrix} \inf_{(N_t)_t} \tilde{C}\Big(N; R_e, \bar X\Big) =& \inf_{(N_t)_t} \mathbb{E}\Bigg[\int_0^{T} \Big[\frac{R}{2} |N_t|^2 + H^{\top}_t X_t + \bar{X}_t^{\top}F X_t + X_t^{\top} G X_t + J_t\Big] dt\\&\pushright{+ X^{\top}_T S_T X_T + p(R_e)} \Bigg] \end{aligned} \end{equation} \begin{equation*} dX_t = \Big(A X_t + B \cdot N_t + C_t \Big) dt + \Sigma d\widetilde{W}_t \end{equation*} where $R$ and $J_t$ are the scalars given by $R = 2c_1$ and $J_t = c_2 D_t^2$ and: {\small\begin{equation*}\arraycolsep3pt H_t = \begin{bmatrix} -(2c_2+c_3\rho_1)D_t-c_3 \rho_0\\ 0\\ 0\\ 0\\ p_1 \end{bmatrix}, F = \begin{bmatrix} c_3 \rho_1& 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, G = \begin{bmatrix} c_2 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, S_T = \begin{bmatrix} 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & \tau & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \end{equation*} \vskip-2mm \begin{equation*}\arraycolsep3pt A = \begin{bmatrix} 0 & -\kappa_2 R_e & 0 & 0 & 0\\ 0 & -1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, B = \begin{bmatrix} \kappa_1\\ 0\\ \delta\\ 0\\ 1 \end{bmatrix} , C_t = \begin{bmatrix} \kappa_2 R_e\Big( \alpha \cos(\alpha t) + \theta \Big)\\ \theta\\ 0\\ 0\\ 0 \end{bmatrix}, \Sigma = \begin{bmatrix} \kappa_2 R_e \sigma_0 & 0\\ \sigma_0 & 0\\ 0 & \sigma_1\\ 0 & 0\\ 0 & 0 \end{bmatrix}. \end{equation*}} Furthermore, we define $\widetilde{W}_t$ and $a$ as: {\small\begin{equation*}\arraycolsep3pt \widetilde{W}_t = \begin{bmatrix} \widecheck W_t\\ W_t \end{bmatrix},\qquad a = \frac{1}{2}\Sigma \Sigma^{\top} = \frac{1}{2} \begin{bmatrix} (\kappa_2 R_e \sigma_0)^2 & \kappa_2 R_e \sigma_0^2 & 0 & 0 & 0\\ \kappa_2 R_e \sigma_0^2 & \sigma_0^2 & 0 & 0 & 0\\ 0 & 0 & \sigma_1^2 & 0& 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \end{equation*}} and the value function $u(t,X)$ as: \begin{equation} \begin{aligned} u(t, X)= \inf_{(N_s)_s} \mathbb{E}\Bigg[\int_t^{T} \Big[\frac{R}{2} |N_s|^2 + H^{\top}_s X_s + \bar{X}_s^{\top}F X_s + X_s^{\top} G X_s + J_s\Big] ds\\ \pushright{ + X^{\top}_T S_T X_T + p(R_e)\Big| X_t= X \Bigg]}.
\end{aligned} \end{equation} \begin{lemma}[ODE System for the MFG] \label{lem:ode_mfg} For $R_e$ fixed, if there exists a function $t\mapsto (\eta_t, r_t, \bar X_t)$ solving the following system of Ordinary Differential Equations (ODEs): \begin{subequations} \label{eq:mfg_ode} \begin{empheq}[left=\empheqlbrace]{align} &\dfrac{d{\eta_t}}{dt} -{\eta_t} BR^{-1}B^{\top} {\eta_t} + A^{\top} {\eta_t} +{\eta_t} A +2G = 0, &&{\eta_T} = 2S_T\label{eq:mfg_mfc_eta}\\ &-\dfrac{d{r_t}}{dt} = \left(A^{\top} -{\eta_t} B R^{-1} B^{\top}\right) {r_t} + {\eta_t} C_t + H_t + F^{\top} \bar{X}_t, &&{r_T} = 0\label{eq:mfg_r}\\ &\dfrac{d\bar{X}_t}{dt} = (A-B R^{-1} B^{\top}{\eta_t})\bar{X}_t - B R^{-1} B^{\top}{r_t}+C_t, &&\bar{X}_0 =\bar{x}_0\label{eq:mfg_mfc_xbar} \end{empheq} \end{subequations} and if $s_0$ is given by: \begin{equation} \label{eq:mfg_mfc_s} s_0 = p(R_e) + \int_0^{T} \Big(tr(a{\eta_t}) -\frac{1}{2} {r_t}^{T} B R^{-1} B^{\top} {r_t} +C_t^{\top} {r_t}+ J_t\Big)dt, \end{equation} then $\hat N_t(R_e) = -R^{-1}B^{\top}(\eta_t X_t + r_t)$ is the MFG equilibrium given $R_e$ fixed, and the expected cost to the representative producer in this equilibrium is: \begin{equation} \label{eq:mfg_cost_eq} \inf_{N=(N_t)_t} \tilde{C}^{MFG}\Big(N; R_e, \bar X \Big) = \frac{1}{2} \left( Var(\sqrt{\eta_0} X_0) + \mathbb{E}[\sqrt{\eta_0} X_0]^2 \right) +\bar X_0^{\top} r_0 + s_0. \end{equation} \end{lemma} \begin{theorem}\label{the:exist_uniq_mfg} For $R_e$ fixed, if $T$ is small enough, there exists a unique MFG equilibrium. \end{theorem} \begin{lemma}[MFC ODE System] \label{lem:ode_mfc} Given $R_e$, if there exists a function $t\mapsto (\eta_t, r_t, \bar X_t)$ solving the ODE system \eqref{eq:mfg_ode} with the equation \eqref{eq:mfg_r} replaced by: \begin{equation} \label{eq:mfc_ode} -\frac{d{r_t}}{dt} = \left(A^{\top} - {\eta_t} B R^{-1} B^{\top}\right) {r_t} + {\eta_t} C_t + H_t + F^{\top} {\bar{X}_t} + \textcolor{Bittersweet}{F {\bar{X}_t}}, \qquad {r_T} = 0 \end{equation} and the same $s_0$ given by \eqref{eq:mfg_mfc_s}, then $N^*_t(R_e) = -R^{-1}B^{\top}(\eta_t X_t + r_t)$ is an optimum for the MFC problem given $R_e$, and the minimal expected cost is \begin{equation} \begin{aligned} \label{eq:mfc_cost_eq} \inf_{(N_t)_t} \tilde{C}^{MFC}\Big(N; R_e \Big) &= \frac{1}{2} \left( Var(\sqrt{\eta_0} X_0) + \mathbb{E}[\sqrt{\eta_0} X_0]^2 \right) +\bar X_0^{\top} r_0 + s_0\\ &\pushright{\textcolor{Bittersweet}{-\int_0^T \bar X_t^{\top}F \bar X_t dt}}. \end{aligned} \end{equation} \end{lemma} \begin{theorem}\label{the:exist_uniq_mfc} For $R_e$ fixed, if $T$ is small enough, there exists a unique MFC optimum. \end{theorem} Numerically, we search for the $R_e$ and the corresponding equilibrium $N=(N_t)_{t}$ that minimizes the cost of the minor players by using the ODE systems given in \eqref{eq:mfg_ode} and \eqref{eq:mfc_ode}. As emphasized earlier, the main difference between MFC and MFG is whether the mean field is affected by the decision of the representative producer (MFC), or taken to be fixed (MFG). This difference translates into the addition of a fixed point argument in the MFG case. For pedagogical reasons, we first discuss the MFC case, then the MFG. After solving the Riccati equation which is the same in both cases, we solve the coupled ODE system directly in the MFC case in order to find the mean field; on the other hand, notice that in the MFG case, the ODEs are decoupled since the mean field is assumed to be fixed in each iteration of the fixed point algorithm. 
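As a quick sanity check on the feedback form appearing in Lemmas~\ref{lem:ode_mfg} and \ref{lem:ode_mfc}, note that under the standard linear-quadratic ansatz $u(t,X)=\frac12 X^{\top}\eta_t X + r_t^{\top}X + s_t$ for the value function, the pointwise minimization over $N$ in the HJB equation reads
\begin{equation*}
0 = \partial_N\Big[\frac{R}{2}|N|^2 + \big(A X + B N + C_t\big)^{\top}\nabla_X u(t,X)\Big] = R\,N + B^{\top}\big(\eta_t X + r_t\big),
\end{equation*}
so that $\hat N_t = -R^{-1}B^{\top}(\eta_t X_t + r_t)$, which is exactly the feedback used above. Matching the quadratic, linear, and constant terms in the HJB equation then formally yields the Riccati equation \eqref{eq:mfg_mfc_eta}, the linear ODE \eqref{eq:mfg_r} (or \eqref{eq:mfc_ode} in the MFC case), and the expression \eqref{eq:mfg_mfc_s} for $s_0$, respectively.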
\subsection{Mean Field Control Algorithm} In order to solve the coupled system of MFC ODEs for $(\bar{X}_t)_t$ and $(r_t)_t$ given by equations \eqref{eq:mfg_mfc_xbar} and \eqref{eq:mfc_ode}, we discretize time with a uniform step size $\Delta t$ and solve the following linear equation: {\small\begin{equation} \label{eq:mfc_linearsystem} \begin{bmatrix} \bar{X}\\ r \end{bmatrix} = M \begin{bmatrix} \bar{X}\\ r \end{bmatrix} + K, \end{equation}} where $\bar{X} = [\bar{X}_0, \bar{X}_{\Delta t} ,\bar{X}_{2\Delta t}, \dots, \bar{X}_T]^{\top}$, $r = [r_0, r_{\Delta t} ,r_{2\Delta t}, \dots, r_T]^{\top}$. \begin{algorithm}[H] \caption{\small Computation of the Mean Field Control Cost over $(N_t)_t$ given $R_e$} {\small \begin{algorithmic}[1] \Function{\texttt{Optim-MFC-N}}{$R_e$} \vskip2mm \State Calculate $(\eta_t)_t$ by solving the Riccati Equation in \eqref{eq:mfg_mfc_eta} \vskip1mm \State Solve the coupled $(\bar{X}_t)_t$ and $(r_t)_t$ linear system \eqref{eq:mfc_linearsystem} given $(\eta_t)_t$ and $R_e$ \vskip1mm \State Calculate $s$ given $R_e$, $(r_t)_t$ and $(\eta_t)_t$ using the equation in \eqref{eq:mfg_mfc_s} \vskip1mm \State Calculate the expected cost associated with $R_e$, $\hat c:=\inf \tilde{C}^{MFC}(N;R_e, \bar X)$ using \eqref{eq:mfc_cost_eq} \vskip2mm \State \Return ($\hat c, \bar X$) \vskip3mm \EndFunction \end{algorithmic}} \end{algorithm} \begin{algorithm}[H] \caption{\small Search for a Social Optimum} {\small \begin{algorithmic}[1] \Function {\texttt{SocialOpt}}{} \vskip2mm \State Search for the optimal $\hat R_e$ where the optimal cost $R_e \rightarrow c(R_e)$ and optimal mean field $R_e \rightarrow \bar X(R_e)$ are computed by \texttt{Optim-MFC-N} \vskip1mm \State Let $\hat c = c( \hat R_e )$ and $\hat{\bar X} = \bar X( \hat R_e )$ \vskip2mm \State \Return $(\hat c, \hat R_e, \hat{\bar{X}})$ \vskip3mm \EndFunction \end{algorithmic}} \end{algorithm} \subsection{Mean Field Game Algorithm} In the Mean Field Game case, since in each iteration $(\bar{X}_t)_t$ is assumed to be fixed, the ODE for $(r_t)_t$ in equation \eqref{eq:mfg_r} can be solved directly by using the following linear equation after discretizing time: {\small\begin{equation} \label{eq:mfg_linearsystem_r} r = M_r r+ K_r, \end{equation}} where $r = [r_0, r_{\Delta t} ,r_{2\Delta t}, \dots, r_T]^{\top}$. Then with this $(r_t)_t$, the time discretization of $(\bar{X}_t)_t$ with dynamics given by the equation \eqref{eq:mfg_mfc_xbar} can be written as: {\small\begin{equation} \label{eq:mfg_linearsystem_xbar} \bar{X} = M_{\bar{X}} \bar{X} + K_{\bar{X}}, \end{equation}} where $\bar{X} = [\bar{X}_0, \bar{X}_{\Delta t} ,\bar{X}_{2\Delta t}, \dots, \bar{X}_T]^{\top}$. The numerical algorithms used to find the Mean Field Control optimum and the Mean Field Game equilibrium are given in detail in the algorithms displayed in this section.
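Complementing the algorithm boxes of this section, the following Python sketch illustrates the structure of these computations: a backward explicit Euler pass for the Riccati equation \eqref{eq:mfg_mfc_eta}, a backward pass for $(r_t)_t$ with the mean-field path frozen as in \eqref{eq:mfg_r}, a forward pass for $(\bar X_t)_t$ as in \eqref{eq:mfg_mfc_xbar}, and the outer fixed-point loop over the mean-field path used in the \texttt{NashEq} routine below (the search over $R_e$ is omitted here). The matrices used are small generic placeholders, not the model matrices $A$, $B$, $C_t$, $H_t$, $F$, $G$, and $S_T$ assembled from the calibrated parameters, and no damping or step-size safeguards are included.
{\small\begin{verbatim}
import numpy as np

# Minimal sketch (illustrative only) of the MFG fixed-point iteration.
n, K, T = 2, 200, 1.0                 # state dimension, time steps, horizon
dt = T / K
A = np.array([[0.0, -0.5], [0.0, -1.0]])
B = np.array([[1.0], [0.0]])
G, S_T, F = np.diag([1.0, 0.0]), np.diag([0.0, 0.5]), np.diag([0.3, 0.0])
R = 0.2                               # scalar R = 2*c_1
H = np.tile(np.array([-1.0, 0.0]), (K + 1, 1))    # H_t (constant here)
C = np.tile(np.array([0.2, 0.5]), (K + 1, 1))     # C_t (constant here)
Xbar0 = np.array([0.0, 0.5])
BRB = (B @ B.T) / R                   # B R^{-1} B^T

# Backward explicit Euler for the Riccati equation (common to MFG and MFC).
eta = np.zeros((K + 1, n, n))
eta[K] = 2.0 * S_T
for k in range(K, 0, -1):
    d_eta = eta[k] @ BRB @ eta[k] - A.T @ eta[k] - eta[k] @ A - 2.0 * G
    eta[k - 1] = eta[k] - dt * d_eta

def solve_r(Xbar):
    # Backward Euler for r_t, with the mean-field path frozen (MFG case).
    r = np.zeros((K + 1, n))
    for k in range(K, 0, -1):
        rhs = (A.T - eta[k] @ BRB) @ r[k] + eta[k] @ C[k] + H[k] + F.T @ Xbar[k]
        r[k - 1] = r[k] + dt * rhs
    return r

def solve_Xbar(r):
    # Forward Euler for the mean-field path under the feedback control.
    Xbar = np.zeros((K + 1, n))
    Xbar[0] = Xbar0
    for k in range(K):
        drift = (A - BRB @ eta[k]) @ Xbar[k] - BRB @ r[k] + C[k]
        Xbar[k + 1] = Xbar[k] + dt * drift
    return Xbar

# Outer fixed-point loop over the mean-field path (structure of NashEq).
Xbar = np.tile(Xbar0, (K + 1, 1))
for it in range(100):
    Xbar_new = solve_Xbar(solve_r(Xbar))
    gap = np.max(np.abs(Xbar_new - Xbar))
    Xbar = Xbar_new
    if gap < 1e-8:
        break
print("iterations:", it + 1, "  gap:", gap)
\end{verbatim}}
In the MFC variant, the only changes are the additional $F\bar{X}_t$ term in the equation for $(r_t)_t$ (compare \eqref{eq:mfc_ode} with \eqref{eq:mfg_r}) and the fact that $(\bar X_t)_t$ and $(r_t)_t$ are then solved as one coupled linear system, as in \eqref{eq:mfc_linearsystem}, instead of through a fixed-point iteration.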
\begin{algorithm}[H] \caption{\small Computation of the Expected Cost over $(N_t)_t$ given $R_e$, $(\bar X_t)_t$} {\small \begin{algorithmic}[1] \Function {\texttt{Optim-MFG-N}}{$R_e, (\bar X_t)_t$} \vskip2mm \State Calculate $(\eta_t)_t$ by solving Riccati Equation in \eqref{eq:mfg_mfc_eta} \vskip1mm \State Solve the linear system for $(r_t)_t$ in \eqref{eq:mfg_linearsystem_r} \textcolor{Bittersweet}{given $(\bar{X}_t)_t$}, $R_e$ and $(\eta_t)_t$ \vskip1mm \State Calculate $s$ given $R_e$, $(r_t)_t$ and $(\eta_t)_t$ using the equation in \eqref{eq:mfg_mfc_s} \vskip1mm \State Calculate the cost associated with $R_e$ \textcolor{Bittersweet}{and $(\bar{X}_t)_t$}, $\hat c:=\inf \tilde{C}^{MFG}(N; R_e, \bar X)$ using \eqref{eq:mfg_cost_eq} \vskip2mm \State \Return $\hat c$ \vskip3mm \EndFunction \end{algorithmic}} \end{algorithm} \begin{algorithm}[H] \caption{\small Search for a Nash Equilibrium} {\small \begin{algorithmic}[1] \Function {\texttt{NashEq}}{} \vskip2mm \State Initialize $(\bar{X}^0_t)_t$ \vskip2mm \While{$||\bar{X}^k - \bar{X}^{k-1}|| > \epsilon$} \vskip2mm \State Search for the optimal $\hat R_e$ given $\bar{X}^k$ where the optimal cost $(R_e, \bar{X}^{\textcolor{Bittersweet}{k}}) \rightarrow c^{\textcolor{Bittersweet}{k}}(R_e, \bar{X}^{\textcolor{Bittersweet}{k}})$ is computed by \texttt{Optim-MFG-N} \vskip1mm \State Let $\hat R_e^{\textcolor{Bittersweet}{k}} = \argmin_{R_e} c^{\textcolor{Bittersweet}{k}}(R_e, \bar{X}^{\textcolor{Bittersweet}{k}})$ \vskip1mm \State Compute $(\bar{X}^{\textcolor{Bittersweet}{{k+1}}}_t)_t$ given $\hat R_e^k,(\bar{X_t}^{\textcolor{Bittersweet}{k}})_t$ by solving the linear equation \eqref{eq:mfg_linearsystem_xbar} \EndWhile \vskip2mm \State Let $\hat R_e = \hat R^{\textcolor{Bittersweet}{k}}_e$, $\hat c = c^{\textcolor{Bittersweet}{k}}( \hat R_e )$ and $\hat{\bar{X}} = \bar{X}^{\textcolor{Bittersweet}{k}}$ \vskip2mm \State \Return $(\hat c, \hat R_e, \hat{\bar{X}})$ \vskip3mm \EndFunction \end{algorithmic}} \end{algorithm} \subsection{Numerical Experiments} \label{subsec:minor_param} In the numerical experiments reported below, we use the following parameter values: \vskip5mm \begin{center} {\small {\renewcommand{1.2}{1.2} \begin{tabular}{ l|c|l } \hline $ p_1 = 7/\Delta_t$ (dollar/time)& $ \rho_0=40/\Delta_t$ & $\theta=5 $\\ $ p_2=10^4$, $p_3=10^{-10}$ &$ \rho_1=0.1/\Delta_t$ & $T =20$ years, $\Delta_t= 10$ days \\ $ c_1=10^{-4}$ (dollar/$10^3$ cu ft$^2$)& $\alpha=40\pi $& $R_e=[0,5\times10^3]$ (10,000 dollars)\\ $ c_3=1$ (dollar/$10^3$ cu ft)& $\delta=0.15$ & $ D_t =2\times 10^4 - 5\times10^2 \cos(80\pi \Delta_t)$\\ $\kappa_1 =0.13$ (MWh/$10^3$ cu ft)&$\sigma_0=0.01 $ &$\bar{X}_0 = [0,\theta,0,0,0]$\\ $\kappa_2=0.1$ (MWh/$10^3$ dollars)&$\sigma_1=0.01 $ &$Var[X_0] = [0,0.1,0,0,0]$\\ \hline \end{tabular}}} \end{center} \vskip5mm Furthermore, we assume that $p(R_e)=p_2 R_e-p_3\sqrt{R_e(R_e^{\max}-R_e)}+\epsilon$ where $p_2, p_3$ are positive constants and $\epsilon>0$ is a small constant that ensures the nonnegativity of the price of the units of the renewable energy investment. \footnote{We note that this function satisfies the assumptions that are necessary for existence and uniqueness. The fact that $p(\cdot)$ is convex can be justified by the increasing unit costs that comes from the search of land that is large enough to construct the solar panels on. However, in the numerical application, we take the $p_3$ and $\epsilon$ small to have a function that is nearly ``linear".} We focus on natural gas as the source of nonrenewable energy. 
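Before turning to the calibration details, we note that the fixed-point loop behind \texttt{NashEq} above can be sketched in a few lines of Python. The wrappers \texttt{optim\_mfg\_n} and \texttt{solve\_xbar} are hypothetical stand-ins for \texttt{Optim-MFG-N} and for the linear system \eqref{eq:mfg_linearsystem_xbar}, and the bounded one-dimensional search over $R_e$ is only one possible choice (a grid search works as well).
{\small\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def nash_eq(optim_mfg_n, solve_xbar, xbar_init, Re_bounds,
            tol=1e-6, max_iter=100):
    """Fixed-point iteration: freeze the mean field, optimize over R_e,
    update the mean field, and stop when it no longer moves."""
    xbar = xbar_init
    for _ in range(max_iter):
        res = minimize_scalar(lambda Re: optim_mfg_n(Re, xbar),
                              bounds=Re_bounds, method="bounded")
        Re_hat = res.x
        xbar_new = solve_xbar(Re_hat, xbar)   # solve the linear system for Xbar
        if np.linalg.norm(xbar_new - xbar) < tol:
            return res.fun, Re_hat, xbar_new
        xbar = xbar_new
    return res.fun, Re_hat, xbar
\end{verbatim}}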
In our numerical experiments, we ignore the effect of the COVID-19 pandemic, and we run simulations for $20$ years starting from March 2020. For the cost of solar power, we use the current assumption that a $1$MW solar farm needs a $1$M\$ investment\footnote{\url{https://news.energysage.com/solar-farms-start-one/}}, and use daily peak sun hours data to compute the daily average production from solar panels. We assume that on average, peak sun hours last approximately $5$ hours per day in the US, and we infer that a solar farm built with an initial investment of $\$ 10,000$ generates on average $0.5$MWh in $10$ days. We choose $\alpha$ to take into account the seasonality, since sun exposure levels are at their maximum during summer and at their minimum during winter. Therefore, we infer that one unit investment of $R_e$ corresponds to $\approx$\$10,000 and on average it generates $\kappa_2(\theta \pm 1)=0.1(5\pm1)$MWh of electricity in $10$ days. According to the data provided by the U.S. Energy Information Administration (EIA), in 2018, $1,365,822$ million kWh were produced by the electric sector by using 10,215 billion cubic feet of natural gas\footnote{\url{https://www.eia.gov/totalenergy/data/monthly/pdf/sec7.pdf}, (Table 7.2b \& 7.3b)}. Therefore, we assume that $1000$ cubic feet of natural gas produce approximately $0.13$MWh. Again, according to the data provided by EIA, typical natural gas power plants produce $0.91$ pound of carbon emission per kWh of electricity generated.\footnote{\url{https://www.eia.gov/tools/faqs/faq.php?id=74&t=11}} Since $1000$ cubic feet of natural gas produce around $0.13$ MWh, we conclude that around $122$ pounds of carbon are emitted per $1000$ cubic feet of natural gas used. Moreover, if coal or other fossil fuels were used, this carbon pollution would more than double. Taking this into account and converting to metric tons, we end up with $\delta\approx0.15$. We assume that the average electricity demand faced by each plant is around the capacity of the plant. According to EIA\footnote{\url{https://www.eia.gov/electricity/annual/archive/pdf/epa_2018.pdf}, (Table 4.3)}, the average daily capacity of a natural gas plant was around $2166.5$ MWh in 2018. Furthermore, the monthly seasonal component is found by using the monthly residential electricity consumption data for 2018 given by EIA\footnote{\url{https://www.eia.gov/totalenergy/data/monthly/pdf/sec7.pdf}, (Table 7.6)}. Therefore, the $10$-day demand is taken to be sinusoidal around $20,000$ MWh to reflect this seasonality. According to the data provided by EIA\footnote{\url{https://www.eia.gov/electricity/annual/html/epa_08_04.html}, \\ \url{https://www.eia.gov/dnav/ng/ng_pri_sum_a_EPG0_PEU_DMcf_a.htm}}, in 2018 the operation and maintenance costs of nonrenewable energy amounted to about 40\% of the fuel cost, on top of the fuel cost itself, and the price of $1000$ cu ft of natural gas can be assumed to be \$5. Therefore, we take $p_1=\$7$. Finally, by using the data given by EIA\footnote{\url{https://www.eia.gov/todayinenergy/detail.php?id=37912}}, we see that the average price of wholesale electricity is around \$40 per MWh; therefore we take $\rho_0=40$. \subsubsection{Price of Anarchy (PoA) Analysis} From the heat maps in Figure \ref{fig:minorcost_poa}, we see that the expected cost of the representative producer is increasing with the carbon tax $\tau$ and the penalty $c_2$. The second observation is that, as expected, for any given couple $(\tau, c_2)$, the expected cost is higher in the Nash equilibrium than for the Social Optimum.
Next, we quantify how inefficient the Nash equilibrium is, and the effect of $\tau$ and $c_2$ on this inefficiency. In other words, we quantify the adverse effect of the non-cooperative behavior of the producers by computing the Price of Anarchy (PoA) defined in \eqref{eq:poa_minor} for different values of $\tau$ and $c_2$. \begin{equation}\label{eq:poa_minor} PoA(\tau, c_2) = \frac{\inf_{N_t, R_e} C^{MFG}(N_t, R_e; \bar Q,\tau, c_2)}{\inf_{N_t, R_e} C^{MFC}(N_t, R_e; \bar Q, \tau, c_2)}. \end{equation} The results are given in the bottom subfigure in Figure \ref{fig:minorcost_poa}. Since for any given $(\tau,c_2)$ the expected cost in an MFG equilibrium is higher, the PoA is expected to be greater than $1$, and the higher it gets, the less efficient the Nash equilibrium. It can be seen that the PoA gets smaller as we increase $\tau$ and $c_2$. This means that for higher levels of $\tau$ and $c_2$, the expected costs to the producers in the two settings become closer. In other words, \textit{the impact of the social planner diminishes and the advantages of cooperation lessen as the regulator imposes stricter regulations.} \begin{figure} \caption{\textbf{Top:} \label{fig:minorcost_poa} \end{figure} \subsubsection{Electricity Production Decomposition Analysis} Here, we analyze the effect of the penalty $c_2$ for not matching the demand and of the carbon tax $\tau$ on the optimal energy production portfolio in both the MFC and MFG models. Figure \ref{fig:production} shows the total production and the decomposition of this production over a $20$-year period, together with a detailed zoom on the behavior between years $1$ and $3$. The left subfigure in Figure~\ref{fig:production} shows that the demand is not matched by the producers in the MFC case. This is because the penalty coefficient $c_2$ is low and the increased revenue from scarce supply is more advantageous. \textit{Here, we see that in the control setting, producers behave as a big monopoly when not matching the demand is inexpensive.} When the penalty is increased, the middle subfigure in Figure~\ref{fig:production} shows that producers try to match the demand and that their behaviors in the MFC and MFG cases are similar. In both of these figures there is no carbon tax; therefore the producers have no incentive to invest in renewable energy and, as a result, all the production comes exclusively from nonrenewable sources. The right subfigure in Figure~\ref{fig:production} shows that \textit{when the carbon tax is increased, the producers have an incentive to invest in renewable energy.} We also analyze the effect of the planning horizon, comparing the cases in which the producers plan for the next $2$ years vs. the next $20$ years. As can be seen in the left and middle subfigures in Figure~\ref{fig:time_poll}, when the planning horizon is short, the fixed costs of renewable energy outweigh its advantages. \textit{Short-sighted producers do not have an incentive to invest in renewable energy production.} \begin{figure} \caption{Production decomposition, \textbf{left:} \label{fig:production} \end{figure} \begin{figure} \caption{Planning time horizon effect in MFC (\textbf{left} \label{fig:time_poll} \end{figure} \subsubsection{Pollution Analysis} The right subfigure in Figure~\ref{fig:time_poll} shows that \textit{whatever the level of the carbon tax, the terminal pollution levels are higher when the producers are competitive} (MFG).
Further, \textit{in the absence of a carbon tax, producers can decrease the pollution levels further by cooperating and following a social planner instead of implementing a carbon tax.} \section{Models with a Regulator} \label{sec:regulator_model} In this section, we describe how the previous models can be extended to include a major player in charge of choosing the tax level $\tau$ on behalf of a policy maker, and the penalty $c_2$ for not matching the demand on behalf of a system operator. We shall treat this major player as a \textit{regulator}, and we shall often speak of minor players when we talk about the producers. We extend the ``minor player only'' model used previously by offering the producers the option to withdraw their entire production, de facto \textit{walking away} from the contract imposed by the regulator. This decision is made when the expected cost to the producer is higher than a fixed level above which producing at such a level of loss does not make sense. If we refer to the plots in Figure~\ref{fig:minorcost_poa}, we can see that the cost of the minor player increases with the tax and with the penalty for not matching the demand. Therefore, the regulator should be careful not to enact policies with very high values of $\tau$ and $c_2$. In the new model, the regulator does not have a private state per se. It only has two controls, the carbon tax level ($\tau \in \mathbb{R}_+$) and the penalty $(c_2\in \mathbb{R}_+)$. Both controls are assumed to be time independent. This assumption is especially realistic when the period $[0,T]$ is too short for changes in regulation to make sense. The \textit{cost function} of the regulator is given as: \begin{equation} \begin{aligned} \label{eq:regulatorcost} J\Big(\tau, c_2; \bar P_T, \bar Q\Big)=& \underbracket{\alpha_1 \big( \bar P_T - \bar P^*_T \big)_+}_\text{Term~1} \underbracket{ - \alpha_2 \tau \big(\bar P_T-\bar P_0\big)}_\text{Term~2} \underbracket{ + \alpha_3 \big|\tau \big|^2}_\text{Term~3} \underbracket{ + \int_0^T \alpha_4 \big|\bar{Q}_t - D_t \big|^2 dt}_\text{Term~4} \underbracket{+ \alpha_5c_2^2}_\text{Term~5}. \end{aligned} \end{equation} The first term is the cost for exceeding the pollution target $\bar P_T^*\geq\bar P_0$ announced at $t=0$. Since we use the notation $x_+=\max(0,x)$, there is no penalty if the terminal pollution level is below the target. The constant $\alpha_1>0$ quantifies the size of the penalty. The second term is the revenue from the carbon tax. To prevent the regulator from choosing an abusively high tax to increase its revenue, Term~3 is added to represent a reputation cost ($\alpha_2, \alpha_3>0$). The joint role of Term~4 and Term~5 is to ensure that the responsibility of matching the demand is not only incumbent on the producers, but also on the regulator, which influences the choice of $\alpha_4>0$. This is consistent with our characterization of the major player / regulator as a policy maker as well as a system operator bearing the brunt of managing the ancillary services to avoid disruptions like \textit{system black-outs}. \subsection{Equilibrium Notions} \label{subsec:regulator_equilibrium} We analyze two types of equilibria in the models with a regulator. In both cases, we consider that the regulator announces its policy first, and the producers react accordingly. This is in the realm of Stackelberg games. We call the first equilibrium the \textit{Stackelberg MFC equilibrium}.
In this case, the regulator assumes that a social planner chooses the controls used by the electricity producers. The latter behave like one big monopolistic firm. Therefore, the regulator chooses the tax level $\tau$ and the penalty coefficient $c_2$ assuming that the producers will settle in an MFC optimum. Note that in this interpretation the regulator and the social planner are two different entities. We define this equilibrium formally as: \begin{definition}[\textit{Stackelberg MFC equilibrium}] For every $(\tau, c_2)$, let\\ $\Big(N^*(\tau, c_2), R^*_e(\tau,c_2)\Big)$ be the social planner's \textit{MFC optimum} given the tax level $\tau$ and the penalty coefficient $c_2$. In other words, for every $\tau, c_2$ and any admissible $\big(N, R_e\big)$, we have: \begin{equation*} \begin{aligned} &C\Big(\textcolor{Bittersweet}{\big(N, R_e\big)}; \bar{X}\big(\textcolor{Bittersweet}{N, R_e}\big), (\tau, c_2) \Big) \ge \\ &\pushright{C\Big(\big(N^*(\tau, c_2), R^*_e(\tau, c_2)\big); \bar{X}\big(N^*(\tau, c_2), R^*_e(\tau, c_2)\big), (\tau, c_2)\Big)}, \end{aligned} \end{equation*} where we added the notation $\bar{X}\big(N, R_e\big)$ to emphasize the controls for which the mean field term $\bar X$ is computed. Then the strategy profile $(\tau^*, c_2^*)$ is a \textit{Stackelberg MFC equilibrium with a regulator} if, for any admissible $(\tau, c_2)$: $$ J\Big(\textcolor{Bittersweet}{(\tau, c_2)}; \bar{X}\big(N^*(\textcolor{Bittersweet}{\tau, c_2}),R^*_e(\textcolor{Bittersweet}{\tau, c_2}) \big)\Big) \ge J\Big((\tau^*, c_2^*); \bar{X}\big(N^*(\tau^*, c_2^*),R^*_e(\tau^*, c_2^*) \big)\Big). $$ \end{definition} The second equilibrium is called the \textit{Stackelberg MFG equilibrium}. In this one, the regulator assumes that the electricity producers are competitive and chooses the levels of $\tau$ and $c_2$ assuming that the minor player population is at a Nash equilibrium. We define this equilibrium formally as: \begin{definition}[\textit{Stackelberg MFG Equilibrium}] For every $(\tau, c_2)$, let \\ $\Big(\hat N(\tau, c_2), \hat R_e(\tau,c_2)\Big)$ be the producers' MFG \textit{Nash equilibrium} given the tax level $\tau$ and the demand satisfaction coefficient $c_2$. In other words, for any admissible $\big(N, R_e\big)$, we have: \begin{equation*} \begin{aligned} &C\Big(\textcolor{Bittersweet}{\big(N, R_e\big)}; {\bar{X}}\big(\hat N(\tau, c_2), \hat R_e(\tau, c_2)\big), (\tau, c_2) \Big) \ge \\ &\pushright{C\Big(\big(\hat N(\tau, c_2), \hat R_e(\tau, c_2)\big); {\bar{X}}\big(\hat N(\tau, c_2), \hat R_e(\tau, c_2)\big), (\tau, c_2)\Big).} \end{aligned} \end{equation*} Then the strategy profile $(\hat{\tau}, \hat{c}_2)$ is a \textit{Stackelberg MFG equilibrium with a regulator} if, for any admissible $(\tau, c_2)$, we have: $$ J\Big(\textcolor{Bittersweet}{(\tau, c_2)}; {\bar{X}}\big(\hat N(\textcolor{Bittersweet}{\tau, c_2}), \hat R_e(\textcolor{Bittersweet}{\tau, c_2}) \big)\Big) \ge J\Big((\hat{\tau}, \hat{c}_2); {\bar{X}}\big(\hat N(\hat{\tau},\hat{c}_2), \hat R_e(\hat{\tau},\hat{c}_2)\big)\Big). $$ \end{definition} \section{Numerical Results in the Presence of a Regulator} \label{sec:reg_main_numeric_res} \subsection{Algorithms}\label{subsec:stackelberg_alg} To implement the walk-away option of the producers, we modify the \texttt{SocialOpt} and \texttt{NashEq} algorithms. This is done by simply adding an \texttt{IF} condition to these algorithms that assigns \texttt{Accept=1} if the cost of the minor player is lower than the walk-away threshold and \texttt{Accept=0} otherwise.
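A minimal Python sketch of this walk-away modification, together with the outer search over the regulator's controls, could look as follows. The wrapper names (\texttt{producer\_solver}, \texttt{regulator\_J}) are hypothetical placeholders; the regulator's cost is the one defined in \eqref{eq:regulatorcost}, and passing the MFC or the MFG producers' solver selects the corresponding Stackelberg equilibrium.
{\small\begin{verbatim}
import numpy as np

def modified_producer_solver(producer_solver, tau, c2, threshold):
    """Run the producers' solver (SocialOpt for MFC or NashEq for MFG)
    for a policy (tau, c2) and flag whether they accept the contract."""
    cost, Re, xbar = producer_solver(tau, c2)
    accept = 1 if cost < threshold else 0
    return cost, Re, xbar, accept

def regulator_cost(regulator_J, tau, c2, xbar, accept):
    """Walk-away convention: infinite regulator cost if producers reject."""
    return regulator_J(tau, c2, xbar) if accept == 1 else np.inf

def stackelberg_search(producer_solver, regulator_J, tau_grid, c2_grid, threshold):
    """Grid search for the Stackelberg equilibrium."""
    best = (None, None, np.inf)
    for tau in tau_grid:
        for c2 in c2_grid:
            cost, Re, xbar, accept = modified_producer_solver(
                producer_solver, tau, c2, threshold)
            J_val = regulator_cost(regulator_J, tau, c2, xbar, accept)
            if J_val < best[2]:
                best = (tau, c2, J_val)
    return best
\end{verbatim}}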
After the algorithms for the producers are modified and called "\texttt{ModifiedSocialOpt}" and "\texttt{ModifiedNashEq}" respectively, we implement the Stackelberg Equilibrium Algorithm where we assume that if the producers reject the contract (\texttt{Accept=0}), the regulator cost is equal to infinity. \begin{algorithm}[H] \caption{\small Computation of Regulator's Cost} {\small\begin{algorithmic}[1] \Function {\texttt{RegulatorCost}}{$\tau, c_2, \bar X, \text{Accept}$} \vskip2mm \If{Accept = 1} \State Compute the regulator's cost $J=J(\tau, c_2; \bar X)$ by using \eqref{eq:regulatorcost} \vskip1mm \Else \State $J = \infty$ \EndIf \vskip2mm \State \Return $J$ \EndFunction \vskip3mm \end{algorithmic}} \end{algorithm} \begin{algorithm}[H] \caption{\small Search for a Stackelberg Equilibrium with MFC and MFG} {\small\begin{algorithmic}[1] \Function {\texttt{StackelbergEq}}{Type} \vskip2mm \If{Type = MFC} \State Search for optimal $(\hat \tau, \hat c_2)$ couple where optimal mean field $(\tau, c_2) \rightarrow \bar X(\tau, c_2)$, investment in renewable $(\tau, c_2) \rightarrow R_e(\tau, c_2)$ and minor cost $(\tau, c_2) \rightarrow c(\tau,c_2)$ are computed by \texttt{ModifiedSocialOpt} algorithm and optimal cost of regulator $(\tau, c_2) \rightarrow J(\tau, c_2; \bar X(\tau, c_2), \text{Accept})$ is found by using \texttt{RegulatorCost} \vskip1mm \ElsIf{Type = MFG} \State Search for optimal $(\hat \tau, \hat c_2)$ couple where optimal mean field $(\tau, c_2) \rightarrow \bar X(\tau, c_2)$, investment in renewable $(\tau, c_2) \rightarrow R_e(\tau, c_2)$ and minor cost $(\tau, c_2) \rightarrow c(\tau,c_2)$ are computed by \texttt{ModifiedNashEq} algorithm and optimal cost of regulator $(\tau, c_2) \rightarrow J(\tau, c_2; \bar X(\tau, c_2), \text{Accept})$ is found by using \texttt{RegulatorCost} \EndIf \vskip2mm \State Let $(\hat \tau, \hat c_2) = \argmin_{\tau, c_2} J(\tau, c_2; \bar X(\tau, c_2))$, $\hat{\bar X }= \bar X(\hat \tau, \hat c_2)$, $\hat{R_e}= R_e(\hat \tau, \hat c_2)$, $\hat{c}= c(\hat \tau, \hat c_2)$ and $\hat J = J(\hat \tau,\hat c_2; \hat{\bar X})$ \vskip2mm \State \Return $(\hat \tau, \hat c_2, \hat{\bar X}, \hat R_e, \hat c, \hat J)$ \EndFunction \vskip3mm \end{algorithmic}} \end{algorithm} \begin{remark} In the two Stackelberg equilibria, the numerical algorithms only differ in the solution of producers' problem. \end{remark} \subsection{Numerical Experiments} \subsubsection{Analysis of Regulator's Cost} \begin{figure} \caption{Regulator Cost in both MFC and MFG settings where penalty for not matching the demand, $c_2$, (\textbf{left} \label{fig:reg_2d} \end{figure} For the experiments of this section, we used the same parameters as for the producers' model in the previous section. For the regulator we used\footnote{For the minor player's problem, we have been able to choose realistic parameters by using real life data as explained in Subsection~\ref{subsec:minor_param}. However the parameters for the regulator's cost depend on the type of the regulator we focus on. For example, a regulator can care about minimizing the pollution relatively more than the other objectives or its main goal can be to maximize demand matching by the producers. 
Therefore, in the experiments, we focus on showing the effect of these different parameter choices on the decision of the regulator.}: \vskip 12pt {\small\begin{center} {\renewcommand{\arraystretch}{1.2} \begin{tabular}{c} \hline $\alpha_1 = 1$, $\alpha_2 = 3.3$, $\alpha_3 = 500$, $\alpha_4 = 0.01$, $\alpha_5 = 0.25$ \\ $\tau \in \{0, 10, 15, 20, 25, 30, 40, 50, 75, 100\}$ \\ $c_2 \in\{50, 100, 250, 500, 750, 1000, 1500, 2000, 2500, 3000, 4000, 5000\}$\\ \hline \end{tabular}} \end{center}} \vskip5mm First, we analyze the regulator's expected cost for different values of the carbon tax given a fixed penalty for not matching the demand. Then we switch the roles of the two controls of the regulator. The plots in Figure~\ref{fig:reg_2d} show that \textit{the cost of the regulator is convex as a function of the carbon tax or the penalty, when the other control is fixed.} We also analyze the effect of the coefficients in the regulator's cost. We start with the importance given to demand matching by the regulator, by tracking the effect of $\alpha_4$ in the regulator's cost. The left subfigure in Figure~\ref{fig:reg_2d_coeff} shows that, when the tax is fixed, the regulator's minimum cost is attained at higher $c_2$ values when $\alpha_4$ is higher. This shows that \textit{the regulator should impose higher penalties on the producers for not matching the demand when demand matching is more important to the regulator.} The middle subfigure shows that, when the penalty for not matching the demand is fixed, the optimal tax is lower when $\alpha_4$ is higher. The reasoning here is that, when the regulator cares about demand matching, \textit{since the production from renewable energy is more unpredictable, the regulator is not opposed to nonrenewable energy usage in order to have more stable demand matching.} Finally, the right subfigure shows the effect of the importance given by the regulator to minimizing the excess pollution, by tracking the effect of $\alpha_1$ in the regulator's cost. Here, it can be seen that, when the penalty for not matching the demand is fixed, the \textit{optimal carbon tax is higher when the regulator wants to keep pollution at a lower level.} These subfigures are for the MFC case, but similar results hold in the MFG case as well. \begin{figure} \caption{Effects of the coefficients of the regulator's cost and decisions: \textbf{left:} \label{fig:reg_2d_coeff} \end{figure} Finally, Figure \ref{fig:reg_3d} gives 3D plots of the regulator's cost as a function of its controls $\tau$ and $c_2$. Here, the minimum is attained at $(\tau^*, c_2^*) = (50, 1000)$ in the MFC case and at $(\hat \tau, \hat c_2) = (75, 1000)$ in the MFG case. Also, we see that, when the regulator gives more importance to demand matching, for any given tax and penalty the expected cost of the regulator is higher if the producers are cooperative instead of competitive. This is because, for any given couple $(\tau, c_2)$, in the cooperative setting the producers behave like a big monopolistic firm and care less about matching the demand than in the competitive setting, in order to maximize their revenues by keeping prices higher.
\textit{When demand matching is important for the regulator, the regulator benefits from the competition among the electricity producers even if this competition creates adverse effects for the producers themselves.} \begin{figure} \caption{Regulator cost given admissible $c_2$ and $\tau$ values when the regulator cares about matching the demand, in MFC (in both MFC and MFG settings) and in MFG (\textbf{middle} \label{fig:reg_3d} \end{figure} \section{Conclusion} \label{sec:conclusion} In this paper, we investigate the behavior of rational electricity producers in the presence of a carbon tax. We analyze how they manage the trade-off between reliance on traditional and predictable fossil fuel power production assets, which emit greenhouse gases and hence incur costs because of the carbon tax, and the temptation to invest in clean energy production assets, which are not a source of emissions but which make matching the demand problematic because of the volatility of their output. We study a large population of producers in two different models, a first one in which they compete and hopefully reach a Nash equilibrium, and a second one in which they cooperate and rely on the solution of a centralized optimization problem. In a second set of models, we introduce a regulator choosing the level of the carbon tax in the hope of controlling the overall emissions in the economy, and a penalty to be imposed on producers failing to meet the demand in the hope of avoiding power outages and reputation costs. Our models are based on recent progress in the theory and the numerical analysis of mean field games and mean field control problems. We showed that when the producers cooperate, they are better off by behaving like a single monopolistic firm. However, if the regulator raises the penalty for not matching the demand to a high enough level, it can take advantage of the competitive behavior of the producers. While our models remain stylized, they open the door to more complex models, e.g., involving time-dependent policies that the regulator could base on the response of the producers. Furthermore, our models could be used to include more features of the energy markets, such as storage, and the interactions between neighboring states or countries. \appendix \section{Proofs of Theorems} \begin{proof}[Proof of Theorem~\ref{theorem:fbsde_mfg}] Assume that the strategy couple $(\hat N, \hat R_e)$ is optimal in the mean field game and the corresponding mean field flow is given as $\bar Q = \bar Q(\hat N, \hat R_e)$. Now, assume that the representative player deviates from the optimal strategy and uses $(\widehat{N}+ \epsilon \widecheck{N}, \widehat{R}_e+ \epsilon \widecheck{R}_e)$. Then: \begin{equation} \begin{aligned} \label{eq:perturbedcostmfg} \dfrac{dC(\widehat{N}+ \epsilon \widecheck{N}, \widehat{R}_e+ \epsilon \widecheck{R}_e; \bar Q)}{d\epsilon}|_{\epsilon=0} =& \mathbb{E}\Big[\int_0^T \Big[2{c_{1} \widehat{N}_t\widecheck{N}_t} + {p_1 \widecheck{\tilde{N}}_t}+ {2c_2(\widehat{Q}_t-D_t)\widecheck{Q}_t}\\ &- {c_3\big(\rho_0 + \rho_1(D_t- \bar Q_t)\big)\widecheck{Q}_t}\Big] dt + {2\tau \widehat{P}_T\widecheck{P}_T} + {p^\prime (\widehat{R}_e)}\widecheck R_e\Big].
\end{aligned} \end{equation} Furthermore we have the following dynamics: \begin{equation*} \begin{alignedat}{2} d\widecheck{Q}_t &= \kappa_{1} \widecheck{N}_tdt + \kappa_{2}\widecheck{R}_e\left( \alpha \cos(\alpha t) dt + (\theta-S_t) dt + \sigma_0 d\widecheck W_t\right) \qquad, &&d\widecheck{E}_t = \delta \widecheck{N}_t dt, \\ d\widecheck{P}_t &= \widecheck{E}_t dt, &&d \widecheck{\tilde{N}}_t = \widecheck{N}_t dt, \end{alignedat} \end{equation*} with initial conditions: $\widecheck{Q}_0 = \widecheck{E}_0 = \widecheck{P}_0 = \widecheck{\tilde N}_0 = 0$. We can introduce the adjoint variables with the following dynamics: \begin{align*} dY^1_t &= \Big(-{2c_2(\widehat{Q}_t-D_t)} + {c_3\big(\rho_0 + \rho_1(D_t- \bar Q_t)\big)}\Big)dt + Z_t^{1,1}d\widetilde{W}_t + Z_t^{1,2}dW_t, \quad &&Y_T^1 = 0 \\ dY^2_t &= \Big(\kappa_2 \widehat{R}_e Y^1_t +Y^2_t\Big)dt + Z_t^{2,1}d\widetilde{W}_t + Z_t^{2,2}dW_t, &&Y_T^2 = 0 \\ dY^3_t &= -Y_t^4dt + Z_t^{3,1}d\widetilde{W}_t + Z_t^{3,2}dW_t, && Y_T^3 = 0 \\ dY^4_t &= Z_t^{4,1}d\widetilde{W}_t + Z_t^{4,2}dW_t, && Y_T^4 = 2\tau\widehat{P}_T \\ dY^5_t &= -p_1 dt + Z_t^{5,1}d\widetilde{W}_t + Z_t^{5,2}dW_t, && Y_T^5 = 0. \end{align*} Plugging these dynamics in the perturbed cost function \eqref{eq:perturbedcostmfg} and applying integration by parts: \begin{equation*} \begin{aligned} &\dfrac{dC(\widehat{N}+ \epsilon \widecheck{N}, \widehat{R}_e+ \epsilon \widecheck{R}_e; \bar Q)}{d\epsilon}|_{\epsilon=0}\\ &\qquad\qquad=\mathbb{E}\Big[\int_0^T \Big[2{c_{1} \widehat{N}_t\widecheck{N}_t}+ {\big(-dY_t^5 + Z_t^{5,1}d\widetilde{W}_t + Z_t^{5,2}dW_t\big) \widecheck{\tilde{N}}_t}\\ &\qquad\qquad\qquad\qquad +\big(-dY_t^1 +Z_t^{1,1}d\widetilde{W}_t + Z_t^{1,2}dW_t\big) \widecheck{Q}_t\Big] dt + {Y^4_T\widecheck{P}_T} + {p^\prime (\widehat{R}_e)}\widecheck R_e\Big]\\ &\qquad\qquad=\mathbb{E}\Big[\int_0^T \widecheck{N}_t \Big(2{c_{1} \widehat{N}_t} +Y_t^5+Y_t^1\kappa_1+Y_t^3\delta\Big)dt\Big] \\ &\qquad\qquad\qquad\qquad+ \mathbb{E}\Big[\int_0^T \left(\kappa_2 Y_t^1\Big(\alpha \cos(\alpha t) + (\theta {-S_t}) \Big) dt + {p^\prime (\widehat{R}_e)} \right)\widecheck R_e\Big]. \end{aligned} \end{equation*} By optimality, the above expression should be equal to 0 for any given $\widecheck N_t$ and $\widecheck R_e$; therefore: \begin{equation} \label{eq:optcond_fbsde} \widehat{N}_t = -\frac{Y_t^1\kappa_1+Y_t^3\delta+Y_t^5}{2c_1}\quad \text{and} \quad \hat R_e = (p^{\prime})^{-1} \Big(-\mathbb{E}\Big[\int_0^T \kappa_2Y_t^1\Big(\alpha \cos(\alpha t) + (\theta {-S_t}) \Big) dt \Big]\Big). \end{equation} For the proof of the FBSDE system in the mean field control setting, assume that strategy couple $(\widehat N, \widehat R_e)$ is optimal in the mean field control problem and the corresponding mean field is given as $\widehat{\bar Q}$. Now assume that the representative player deviates from optimal strategy and uses $(\widehat N + \epsilon \widecheck N, \widehat R_e + \epsilon \widecheck R_e)$. Since in the mean field control version every player is going to deviate, the mean field term also changes to $\widehat{\bar Q} +\epsilon \widecheck{\bar Q}$. 
Then: \begin{equation} \begin{aligned} \label{eq:perturbedcostmfc} \dfrac{dC({\widehat N}+ \epsilon \widecheck{N}, \widehat {R}_e+ \epsilon \widecheck{R}_e; \bar Q)}{d\epsilon}|_{\epsilon=0} &=\mathbb{E}\Big[\int_0^T \Big[2{c_{1} \widehat {N}_t\widecheck{N}_t} + {p_1 \widecheck{\tilde{N}}_t}+ {2c_2(\widehat {Q}_t-D_t)\widecheck{Q}_t}\\ &\pushright{- {c_3\big(\rho_0 + \rho_1(D_t- 2{\widehat{\bar Q}}_t)\big)\widecheck{Q}_t}\Big] dt + {2\tau \widehat {P}_T\widecheck{P}_T} + {p^\prime (\widehat {R}_e)}\widecheck R_e\Big].} \end{aligned} \end{equation} For the sufficiency, let us assume that $(N_t, R_e)$ is the Nash equilibrium. Then, we want to show by using FBSDE result, $\forall (\widecheck N_t, \widecheck R_e)$ we have: \begin{equation*} C(N, R_e; \bar Q)-C(\check N, \check R_e; \bar Q)\leq 0. \end{equation*} Therefore we have: \begin{align*} &C(N_t, R_e; \bar Q)-C(\check N_t, \check R_e; \bar Q)\\ &\qquad= \mathbb{E}\Big[\tau(P_T^2 - \check P_T^2) + p(R_e)-p(\check R_e)\\ &\qquad\qquad+\int_0^T \big(c_1(N_t^2 - \check N_t^2) + p_1(N_t -\check{\tilde N}_t)+ c_2 (Q_t^2-\widecheck Q_t^2)-2c_2(Q_t-\widecheck Q_t)D_t\\ &\pushright{ -c_3\rho_0(Q_t-\widecheck Q_t)-c_3\rho_1(D_t- \bar Q_t)(Q_t-\widecheck Q_t)\big)dt\Big]}\\ &\qquad\leq \mathbb{E}\Big[Y_T^4(P_T - \widecheck P_T) + p(R_e)-p(\check R_e)\\ &\qquad\qquad+\int_0^T \big(c_1(N_t^2 - \check N_t^2) + p_1(N_t -\check{\tilde N}_t)+ c_2 (Q_t^2-\widecheck Q_t^2)-2c_2(Q_t-\widecheck Q_t)D_t\\ &\pushright{ -c_3\rho_0(Q_t-\widecheck Q_t)-c_3\rho_1(D_t- \bar Q_t)(Q_t-\widecheck Q_t)\big)dt\Big]}\\ &\qquad= \mathbb{E}\Big[Y_T^4(P_T - \widecheck P_T) + p(R_e)-p(\check R_e)\\ &\qquad\qquad+\int_0^T -dY_t^1(Q_t-\widecheck Q_t) + \int_0^T [-2c_2Q_t(Q_t -\widecheck Q_t)+c_2(Q_t^2-\widecheck Q_t^2)+c_1(N_t^2-\widecheck N_t^2)\\ &\qquad\qquad\qquad\qquad\quad{+p_1(\tilde N_t -\widecheck{\tilde N}_t)]dt} + \int_0^T Z_t^{1,1}(Q_t- \widecheck Q_t)d\widecheck W_t + \int_0^T Z_t^{1,2}(Q_t- \widecheck Q_t)d W_t\Big]\\ &\qquad\leq \mathbb{E}\Big[Y_T^4(P_T - \widecheck P_T) + p(R_e)-p(\check R_e)\\ &\qquad\qquad+\int_0^T -dY_t^1(Q_t-\widecheck Q_t) + \int_0^T [-2c_2Q_t(Q_t -\widecheck Q_t)+2c_2Q_t(Q_t -\widecheck Q_t)\\ &\pushright{+2c_1N_t(N_t-\widecheck N_t)+p_1(\tilde N_t -\widecheck{\tilde N}_t)]dt\Big]}\\ &\qquad=\mathbb{E}\Big[Y_T^4(P_T - \widecheck P_T) + p(R_e)-p(\check R_e)\\ &\pushright{+\int_0^T -dY_t^1(Q_t-\widecheck Q_t)+\int_0^T 2c_1N_t(N_t-\widecheck N_t)dt+\int_0^T-dY_t^5(\tilde N_t -\widecheck{\tilde N}_t)\Big]}. \end{align*} Now we apply integration by parts: \begin{align*} &\mathbb{E}\Big[Y_T^4(P_T - \widecheck P_T) + p(R_e)-p(\check R_e)\\ &\pushright{+\int_0^T -dY_t^1(Q_t-\widecheck Q_t)+\int_0^T 2c_1N_t(N_t-\widecheck N_t)dt+\int_0^T-dY_t^5(\tilde N_t -\widecheck{\tilde N}_t)\Big]}\\ &=\mathbb{E}\Big[Y_T^4(P_T - \widecheck P_T) + p(R_e)-p(\check R_e) +\int_0^T (\kappa_1Y_t^1 + 2c_1 N_t +Y_t^5)(N_t-\widecheck N_t)dt \\ &\pushright{ + \int_0^T \kappa_2 (R_e -\widecheck R_e)Y_t^1(\alpha\cos(\alpha t)+(\theta-S_t))dt}\Big]\\ &=\mathbb{E}\Big[p(R_e)-p(\check R_e) +\int_0^T (\delta Y_t^3 + \kappa_1Y_t^1 + 2c_1 N_t +Y_t^5)(N_t-\widecheck N_t)dt \\ &\pushright{ + \int_0^T \kappa_2 (R_e -\widecheck R_e)Y_t^1(\alpha\cos(\alpha t)+(\theta-S_t))dt}\Big]. \end{align*} By using the optimality conditions we have: \begin{align*} &\delta Y_t^3 + \kappa_1Y_t^1 + 2c_1 N_t +Y_t^5=0,\\ &p^{\prime}(R_e)=-\mathbb{E}\Big[\int_0^T \kappa_2 Y_t^1(\alpha \cos(\alpha t)+\theta-S_t)dt\Big]. 
\end{align*} Therefore we have: \begin{equation*} \begin{aligned} &\mathbb{E}\Big[p(R_e)-p(\check R_e) +\int_0^T (\delta Y_t^3 + \kappa_1Y_t^1 + 2c_1 N_t +Y_t^5)(N_t-\widecheck N_t)dt \\ &\pushright{+ \int_0^T \kappa_2 (R_e -\widecheck R_e)Y_t^1(\alpha\cos(\alpha t)+(\theta-S_t))dt}\Big]\\ &\qquad=p(R_e)-p(\widecheck R_e) -(R_e -\widecheck R_e)p^{\prime}(R_e)\\ &\qquad \leq 0, \end{aligned} \end{equation*} by using the convexity of function $p(\cdot)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:fbsde_existence_mfg}] In order to show existence of the MFG equilibrium mean field flow system, we show the existence of the solution of the FBSDE given in \eqref{eq:fbsde}. We first fix an $R_e$, then solve the FBSDE and calculate a new $R_e$ by using the optimality condition in \eqref{eq:fbsde_opt_cond}. In other words, we need to show that there exists a fixed point for a function $f$, $f(R_e)= R_e$, by using Brouwer Fixed Point Theorem. Therefore, we need to show that $f:[0, R_e^{\text{max}}]\mapsto [0, R_e^{\text{max}}]$ is continuous in $R_e$. In order to simplify the notations we define $X_t := [Q_t, S_t, E_t, P_t, \tilde{N}_t]$, $Y_t := [Y^1_t, Y^2_t, Y^3_t, Y^4_t, Y^5_t]$ and $$ Z_t := \begin{bmatrix} Z_t^{1,1}\quad & Z_t^{2,1}\quad & Z_t^{3,1}\quad & Z_t^{4,1}\quad& Z_t^{5,1}\\ Z_t^{1,2}\quad & Z_t^{2,2}\quad & Z_t^{3,2}\quad & Z_t^{4,2}\quad& Z_t^{5,2}\\ \end{bmatrix}^{\top}. $$ Further, we define: \begin{equation*}\arraycolsep3pt K^x = -\frac{1}{2c_1}\begin{bmatrix} \kappa^2_1 & 0 & \kappa_1\delta & 0 & \kappa_1\\ 0 & 0 & 0 & 0 & 0\\ \kappa_1\delta & 0 & \delta^2 & 0 & \delta\\ 0 & 0 & 0 & 0 & 0\\ \kappa_1 & 0 & \delta & 0 & 1 \end{bmatrix}, L^x = -(K^y)^{\top} = \begin{bmatrix} 0 & -\kappa_2 R_e & 0 &0 & 0\\ 0 & -1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, O^y = \begin{bmatrix} -c_3 \rho_1& 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \end{equation*} \begin{equation*}\arraycolsep3pt L^y = \begin{bmatrix} -2c_2 &0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, M_t^y = \begin{bmatrix} (2c_2+c_3\rho_1)D_t\\ 0\\ 0\\ 0\\ -p_1 \end{bmatrix}, M^x_t = \begin{bmatrix} \kappa_2 R_e\Big( \alpha cos(\alpha t) + \theta \Big)\\ \theta\\ 0\\ 0\\ 0 \end{bmatrix}, \end{equation*} \begin{equation*}\arraycolsep3pt S_T = \begin{bmatrix} 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 2\tau & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \Sigma = \begin{bmatrix} \kappa_2 R_e \sigma_0 & 0\\ \sigma_0 & 0\\ 0 & \sigma_1\\ 0 & 0\\ 0 & 0 \end{bmatrix}, \widetilde{W}_t = \begin{bmatrix} \widecheck W_t\\ W_t \end{bmatrix}. \end{equation*} When $R_e$ is fixed, we can write the FBSDE system as \begin{equation} \begin{aligned} dX_t &= K^x Y_t + L^x X_t + M_t^x +\Sigma d\widetilde W_t, &X_0=x_0\\ dY_t &= K^y Y_t + L^y X_t + O^y \bar X_t + M_t^y + Z_t d\widetilde W_t, \qquad &Y_T= S_T X_T. \end{aligned} \end{equation} For the proof of continuity, we first focus on the mean processes: \begin{equation} \begin{aligned} d\bar X_t &= K^x \bar Y_t + L^x \bar X_t + M_t^x,\\ d\bar Y_t &= K^y \bar Y_t + (L^y + O^y) \bar X_t + M_t^y. 
\end{aligned} \end{equation} By introducing the ansatz $\bar Y_t = \bar A_t \bar X_t + \bar B_t$, we can decouple the forward-backward ODE system and end up with \begin{equation} d\bar X_t = (K^x \bar A_t + L^x) \bar X_t + (K^x\bar B_t+ M_t^x), \end{equation} where $\bar A_t$ is the solution of the following matrix Riccati differential equation and $\bar B_t$ is the solution of the linear ODE system: \begin{equation} \begin{aligned} \dot{\bar A}_t & = - \bar A_t K^x\bar A_t + K^y \bar A_t - \bar A_t L^x +L^y + O^y,\\ \dot{\bar B}_t & = (K^y - \bar A_t K^x)\bar B_t + M_t^y - \bar A_t M_t^x. \end{aligned} \end{equation} By using the general results on the continuity of ODE solutions with respect to parameters, we can analyze the continuity of $\bar A_t$ with respect to $R_e$. Since the solution of the matrix Riccati equation is bounded (conditions that give the boundedness of a matrix Riccati differential equation can be found in \cite{Jacobson_riccatiboundedness}), the necessary Lipschitz continuity assumption holds and we can conclude that $\bar A_t$ is continuous in $R_e$. Further, again by using the general ODE continuity results and the fact that $\bar A_t$ is continuous in $R_e$, we can conclude that $\bar B_t$ is continuous in $R_e$. In this way, we have shown that $\bar X_t$ is continuous in $R_e$. Now assume that $\bar X$ is exogenous and define $U_t:= \kappa_2 \big(\alpha \cos(\alpha t) + \theta - S_t \big)$, $\Delta Y_t^1 := Y_t^{1, R_e}- Y_t^{1, \widetilde R_e}$. We can write: \begin{equation} \begin{aligned} \hat{R}_e^{R_e} - \hat{R}_e^{\widetilde R_e} &= (p^{\prime})^{-1} \Big(\mathbb{E}\Big[\int_0^T - \big(Y_t^{1, R_e}- Y_t^{1, \widetilde R_e}\big)U_tdt\Big]\Big)\\ &\leq (p^{\prime})^{-1} \Big(\int_0^T \mathbb{E}\Big[\big|\Delta Y_t^1 U_t\big|\Big]dt\Big)\\ &\leq (p^{\prime})^{-1} \Big(\int_0^T \Big(\mathbb{E}\big[|\Delta Y_t^1|^2\big]\Big)^{1/2} \Big(\mathbb{E}\big[|U_t|^2\big]\Big)^{1/2}dt\Big)\\ &\leq (p^{\prime})^{-1} \Big(C_T \Big(\sup_{t \in [0, T]} \mathbb{E}\big[|\Delta Y_t^1|^2\big]\Big)^{1/2}\Big)\\ &\leq (p^{\prime})^{-1} \Bigg(\tilde C_T\mathbb{E}\Bigg[\Big(\int_0^T \Big|(L^x- \widetilde{L}^x) X_t + (M_t^x- \widetilde{M}_t^x)+ (K^x- \widetilde{K}^x)Y_t \\&\pushright{+O^y (\bar X_t-\widetilde{\bar X}_t )\Big|dt\Big)^2 + \int_0^T |\Sigma - \widetilde \Sigma|^2 dt \Bigg]^{1/2}}\Bigg), \end{aligned} \end{equation} where the first inequality comes from the convexity of $p$, the second inequality comes from the Cauchy-Schwarz inequality and the last inequality is the result of \cite[Theorem 5.4]{hu2019wellposedness}. By using the continuity of $\bar X$ in $R_e$ and the continuity of $(p^{\prime})^{-1}(\cdot)$, as $R_e-\tilde R_e$ goes to 0 the upper bound goes to 0. Therefore, we can infer that $\hat{R}_e^{R_e} - \hat{R}_e^{\widetilde R_e}$ also goes to 0, which gives continuity. Since we assume that $(p^{\prime})^{-1}:\mathbb{R}\mapsto [0, R_e^{\max}]$, we also have $f: [0, R_e^{\max}] \mapsto [0, R_e^{\max}]$. We conclude the existence proof by using the Brouwer Fixed Point Theorem. Uniqueness can be concluded as follows: Assume there exist two mean field game equilibria $({N}, R_e, {\bar Q}) =({N}_t, R_e, \bar Q_t)_{t\in[0,T]}$ and $({N}^{\prime}, R_e^{\prime}, \bar Q^{\prime}) = ({N}^{\prime}_t, R_e^{\prime}, {\bar Q}_t^{\prime})_{t\in[0,T]}$ such that $\bar Q \neq \bar Q'$. Then the control processes $({N}, R_e)$ and $({N}^{\prime}, R_e^{\prime})$ should differ, since if they were the same we would have the same state processes and the distributions would be the same.
By using the definition of ``minimizer" of a cost functional, we have: \begin{equation*} \begin{aligned} C(N, R_e; \bar Q)&\leq C(N^{\prime}, R^{\prime}_e; \bar Q), \qquad C( N^{\prime}, R^{\prime}_e; \bar Q^{\prime})&\leq C( N, R_e; \bar Q^{\prime}). \end{aligned} \end{equation*} By adding the two inequalities, we get: \begin{equation} \label{eq:uniqueness_ineq} \Big(C(N, R_e; \bar Q)- C( N, R_e; \bar Q^{\prime})\Big)-\Big(C(N^{\prime}, R^{\prime}_e; \bar Q)-C( N^{\prime}, R^{\prime}_e; \bar Q^{\prime})\Big)\leq 0. \end{equation} Now we use the fact that the drift and the volatility terms are independent of the state distribution, $\mathcal{L}(X)=\boldsymbol{ \mu}$. Therefore, in environment $\boldsymbol{ \mu}$ , the controlled path driven by $({N}^{\prime}, R^{\prime}_e)$ is ${\bar Q}^{\prime}$ and in environment $\boldsymbol{ \mu}^{\prime}$ , the controlled path driven by $( N, R_e)$ is ${\bar Q}$. By using this, we write: \begin{equation*} \begin{aligned} &C(N, R_e; \bar Q)- C( N, R_e; \bar Q^{\prime}) = c_3\rho_1\int_0^T \bar Q_t (\bar Q_t - \bar Q_t')dt. \end{aligned} \end{equation*} In the same way, we have: \begin{equation*} \begin{aligned} C(N^{\prime}, R^{\prime}_e; \bar Q)-C( N^{\prime}, R^{\prime}_e; \bar Q^{\prime})= c_3\rho_1\int_0^T \bar Q_t' (\bar Q_t - \bar Q_t')dt. \end{aligned} \end{equation*} Therefore the expression on the left of the inequality \eqref{eq:uniqueness_ineq} becomes: \begin{equation*} \begin{aligned} & \Big(C(N, R_e; \bar Q)- C( N, R_e; \bar Q^{\prime})\Big)-\Big(C(N^{\prime}, R^{\prime}_e; \bar Q)-C( N^{\prime}, R^{\prime}_e; \bar Q^{\prime})\Big) = c_3\rho_1\int_0^T (\bar Q_t - \bar Q'_t)^2 dt >0. \end{aligned} \end{equation*} By contradiction, we conclude the uniqueness. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:fbsde_mfc}] We introduce the same adjoint variables as for the MFG FBSDE except that for $Y_t^1$ we take: \begin{equation*} dY^1_t = \Big(-{2c_2(\widehat{Q}_t-D_t)} + {c_3\big(\rho_0 + \rho_1(D_t- 2\widehat{\bar Q}_t)\big)}\Big)dt + Z_t^{1,1}d\widecheck{W}_t + Z_t^{1,2}dW_t, \qquad Y_T^1 = 0. \end{equation*} By plugging the adjoint variable dynamics in the perturbed cost \eqref{eq:perturbedcostmfc} and applying integration by parts, we end up with the same optimality conditions as in \eqref{eq:optcond_fbsde}. The sufficiency condition is proved by following the same ideas as in the proof of Theorem~\ref{theorem:fbsde_mfg}: we have \begin{equation*} \begin{aligned} &C(N_t, R_e; \bar Q_t)-C(\check N_t, \check R_e; \check{\bar{Q}}_t)\\ &\qquad=p(R_e)-p(\check R_e)-p^{\prime}(R_e)(R_e-\check R_e)\\ &\pushright{+\mathbb{E}\Big[\int_0^T\big[c_3\rho_1(Q_t \bar Q_t- \check Q_t \check{\bar Q}_t)-2c_3\rho_1\bar Q_t(Q_t-\check Q_t)\big]dt\Big]}\\ &\qquad=p(R_e)-p(\check R_e)-p^{\prime}(R_e)(R_e-\check R_e)-\int_0^T \big[c_3\rho_1(\bar Q_t+ \check{\bar Q}_t)^2 \big]dt\\ &\qquad\leq 0. \end{aligned} \end{equation*} which is obtained by using the convexity of the function $p(\cdot)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:fbsde_existence_mfc}] The proof of the existence of the solution in the MFC case follows the same ideas of the proof of Theorem~\ref{theorem:fbsde_existence_mfg} and for the sake of space, it is omitted. To prove uniqueness of the MFC optimal mean field term, we introduce an auxiliary MFG which has the same FBSDE as the MFC problem and for which we prove uniqueness. 
To wit, we first focus on the mean field game problem that has the same dynamics as in \eqref{eq:minordynamics} and that has the following cost functional for an infinitesimal agent given a mean field flow $(\bar Q_t)_t$: \begin{equation*} \begin{aligned} \mathbb{E}\Big[\int_0^T \Big[c_{1} |N_t|^2 + p_1 \tilde{N}_t+ c_2|Q_t-D_t|^2 - c_3\big(\rho_0 + \rho_1(D_t- 2\bar Q_t)\big)Q_t\Big] dt + \tau|P_T|^2 + p(R_e)\Big]. \end{aligned} \end{equation*} Following the idea given in the proof of Theorem~\ref{theorem:fbsde_mfg}, the FBSDE system that characterizes the solution of this new game is found to be the same FBSDE that characterizes the solution of the mean field control. Uniqueness of the mean field flow of the new mean field game can be proved by using the approach given in the proof of Theorem~\ref{theorem:fbsde_existence_mfg} and it is omitted for the sake of space. This in turn concludes the uniqueness for the mean field control problem. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:ode_mfg}] Following \cite[ch. 3]{carmona_2018}, we write the Hamiltonian: \begin{equation} \label{Hamiltonian} H(t, N, X, \bar{X}, q) = (Ax)^{\top} q + (B \cdot N) ^{\top} q + C_t^{\top} q + \frac{R}{2} |N|^2 +H_t^{\top} X + \bar{X}^{\top} F X + X^{\top} G X +J_t, \end{equation} where $q$ is the adjoint process. Therefore, the optimal $N$ to optimize $H$ can be expressed as: \begin{equation} \label{MinorOptimalControl} \hat{N}(q) = - R^{-1} B^{\top} q. \end{equation} By plugging $\hat N(q)$ in the Hamiltonian \eqref{Hamiltonian}, the \textit{optimal Hamiltonian} can be written as: \begin{equation} \label{eq:OptimalHamiltonian} \hat H(t, X, \bar{X}, q) = -\frac{1}{2} q^{\top} B R^{-1}B^{\top} q + X^{\top} A^{\top} q + C_t^{\top} q + H_t^{\top} X + \bar{X}^{\top} F X+ X^{\top} G X +J_t. \end{equation} Then the Hamilton Jacobi Bellman (HJB) equation can be written as: \begin{equation} \label{eq:hjb_mfg} -\frac{\partial u(t,X)}{\partial t} - tr(aD^2u(t,X)) = \hat H(t, X, \bar{X}_t, Du(t,X)),\qquad u(T,X) = X^{\top} S_T X + p_2 R_e, \end{equation} where $\bar{X}_t = \int_{\mathbb{R}^5} X m(t,X)dX$. The Kolmogorov Fokker Planck (KFP) equation for our MFG problem can be written as: \begin{equation} \begin{aligned} \label{eq:kfp_mfg} &\dfrac{\partial m(t,X)}{\partial t} - tr(aD^2 m(t,X))+ \nabla_x\big(m(t,X)(AX-Br^{-1}B^{\top}Du(t,X)+C_t)\big)=0, \\ &\pushright{\quad m(0,X) = m_0(X).} \end{aligned} \end{equation} Introduce the following ansatz for the value function: \begin{equation} \label{eq:value_ansatz} u(t,X) = \frac{1}{2} X^{\top} {\eta_t} X + X^{\top} {r_t} +s_t. \end{equation} We have $Du(t,X) = \eta_t X + r_t$ and $D^2u(t,x)=\eta_t$. By plugging \eqref{eq:value_ansatz} into the HJB equation in \eqref{eq:hjb_mfg}, we obtain that $\eta_t$ is the solution of the following symmetric matrix Riccati equation: \begin{equation} \label{eq:riccati_mfg} \frac{d{\eta_t}}{dt} - {\eta_t} BR^{-1}B^{\top} {\eta_t} + A^{\top} {\eta_t} + {\eta_t} A +2G = 0, \qquad\qquad {\eta_T} = 2S_T. \end{equation} This Riccati equation has a unique positive symmetric solution, see~\cite[ch. 14.3]{kucera_2009}. By plugging \eqref{eq:value_ansatz} into the HJB equation~\eqref{eq:hjb_mfg}, we also obtain the differential equation for $r_t$ that is coupled with $\bar X_t$: \begin{equation} \label{eq:ode_r_mfg} -\frac{d{r_t}}{dt} = \left(A^{\top} - {\eta_t} B R^{-1} B^{\top}\right) {r_t} + {\eta_t} C_t + H_t + F^{\top}{\bar{X}_t}, \qquad\qquad {r_T} = 0. 
\end{equation} The differential equation for $\bar X_t$ can be found by plugging the ansatz in the KFP equation: \begin{equation} \begin{aligned} \label{eq:KFP_mfg_ansatz} \frac{d \bar X_t}{dt}&=\frac{d}{dt}\int_{\mathbb{R}^5} X \cdot m(t,X) dX= \int_{\mathbb{R}^5} X \cdot \frac{\partial m(t,X)}{\partial t} dX\\ &= \int_{\mathbb{R}^5} X \cdot \Big(tr(aD^2 m(t,X))- \nabla_x\big(m(t,X)(AX-BR^{-1}B^{\top}(\eta_tX +r_t)+C_t)\big)\Big) dX\\ &=-\int_{\mathbb{R}^5} X\cdot \frac{\partial m(t,X)}{\partial X} (AX-BR^{-1}B^{\top}(\eta_tX +r_t)+C_t) dX \\ &\pushright{- \int_{\mathbb{R}^5} X\cdot m(t,X) (A-BR^{-1}B^{\top}\eta_t)dX}\\ &=(A-BR^{-1}B^{\top}\eta_t) \bar X_t -BR^{-1}B^{\top}r_t+C_t. \end{aligned} \end{equation} Finally from the HJB equation where the ansatz is plugged in we find that: \begin{equation*} \frac{ds_t}{dt}= -tr(a\eta_t) + \frac{1}{2}{r_t}^{\top} B R^{-1} B^{\top} {r_t} -C_t^{\top} {r_t}- J_t, \qquad\qquad s_T=p_2R_e. \end{equation*} Therefore we have: \begin{equation} \label{eq:s_mfg} s_t = p_2 R_e + \int_t^{T} \Big(tr(a{\eta_s}) -\frac{1}{2} {r_s}^{T} B R^{-1} B^{\top} {r_s} +C_s^{\top} {r_s}+ J_s\Big)ds. \end{equation} The expected cost of the representative minor player given fixed mean field and $R_e$ can be calculated by using: \begin{align*} \begin{split} \inf_{(N_t)_t} \tilde{C}^{MFG}\Big(N; R_e, \bar X \Big) &= \mathbb{E}\left[u(0, X_0)\right]\\ &= \mathbb{E}\left[ \frac{1}{2} X_0^{\top}\eta_0 X_0 + X_0^{\top} r_0 +s_0\right]\\ &= \frac{1}{2} \left( Var(\sqrt{\eta_0} X_0) + \mathbb{E}[\sqrt{\eta_0} X_0]^2 \right) +\bar X_0^{\top} r_0 + s_0. \end{split} \end{align*} \end{proof} \begin{proof}[Proof of Theorem~\ref{the:exist_uniq_mfg}] For the existence and uniqueness proof, we make use of Banach Fixed Point theorem. We follow the line of proof used for a stochastic system in \cite[Thm 5.1]{ma_2007}. First we fix $(r_t^1)_t$ and $(r_t^2)_t$, then corresponding $(\bar X^i_t)_t$ can be found by solving the following ODE: \begin{equation*} d\bar X_t^i = [(A-BR^{-1}B^{\top}\eta_t) \bar X_t^i -BR^{-1}B^{\top} r_t^i + C_t] dt, \qquad \bar X_0^i=\bar x_0, \qquad i=\{1,2\}. \end{equation*} Further let $\widetilde{\bar X}_t = \bar X_t^1 - \bar X_t^2$ and $\tilde r_t = r_t^1 - r_t^2$, then we have: \begin{equation*} d\widetilde{\bar X}_t = [(A-BR^{-1}B^{\top}\eta_t) \widetilde{\bar X}_t -BR^{-1}B^{\top} \tilde r_t] dt, \qquad \widetilde{\bar X}_0=0. \end{equation*} Now we introduce $(r_t^{i'})_t$ that solves: \begin{equation*} dr^{i'}_t = [(\eta_t BR^{-1}B^{\top}-A^{\top})r^i_t -\eta_t C_t -H_t -F^{\top}\bar X_t^i]dt, \qquad r^{i'}_T = 0, \qquad\forall i \in \{1,2\} . \end{equation*} and let $\tilde r_t^{'}=r_t^{1'}-r_t^{2'}$. Then we have the following ODE: \begin{equation*} d \tilde r^{'}_t = [(\eta_t BR^{-1}B^{\top}-A^{\top})\tilde r_t -F^{\top}\widetilde{\bar X}_t]dt, \qquad \tilde r^{'}_T=0. \end{equation*} Therefore we have defined a mapping $r \mapsto r^{'}$. We now show that it is a contraction mapping to be able to use the Banach Fixed Point theorem. \noindent {\bf Step 1. } Using It\^o's formula, we write the dynamics for $||\widetilde{\bar X}_t||^2$: \begin{equation*} d||\widetilde{\bar X}_t||^2 = 2(\widetilde{\bar X}_t)^{\top} d \widetilde{\bar X}_t = 2(\widetilde{\bar X}_t)^{\top} [(A-BR^{-1}B^{\top}\eta_t) \widetilde{\bar X}_t -BR^{-1}B^{\top} \tilde r_t]dt. 
\end{equation*} By using these dynamics we can find a bound for $||\widetilde{\bar X}_t||^2$: \begin{align} \label{eq:mfg_ode_xbar_bound} ||\widetilde{\bar X}_t||^2 &= \int_0^t 2(\widetilde{\bar X}_s)^{\top} [(A-BR^{-1}B^{\top}\eta_s) \widetilde{\bar X}_s -BR^{-1}B^{\top} \tilde r_s]ds\nonumber \\ & \leq \int_0^t (|| 2(A-BR^{-1}B^{\top}\eta_s) ||) ||\widetilde{\bar X}_s||^2 ds + \int_0^t || BR^{-1}B^{\top}|| 2 <\widetilde{\bar X}_s, \tilde r_s> ds\nonumber\\ & \leq \int_0^t (|| 2(A-BR^{-1}B^{\top}\eta_s) ||) ||\widetilde{\bar X}_s||^2ds + \int_0^t || BR^{-1}B^{\top}||\big(|| \widetilde{\bar X}_s||^2 + ||\tilde r_s||^2\big)ds\nonumber\\ & \leq \exp\Big(\int_0^t(2||A-BR^{-1}B^{\top}\eta_s|| +||BR^{-1}B^{\top}||)ds\Big) \int_0^t (||BR^{-1}B^{\top}||) ||\tilde r_s||^2 ds\nonumber\\ & \leq C^{(1)} \int_0^T ||\tilde r_s||^2 ds, \end{align} where the third to last inequalities stem from the Gronwall's inequality, and we define $C^{(1)} = \exp\Big(T\big(2||A||+ 2(||BR^{-1}B^{\top}||)||\eta||_T +||BR^{-1}B^{\top}||\big)\Big) ||BR^{-1}B^{\top}|| $ with $||\eta||_T := \sup_{0\leq t \leq T} ||\eta_t||$. \noindent {\bf Step 2. } We write the dynamics for $||\tilde r_t^{'}||^2$: \begin{equation*} d||\tilde r^{'}_t||^2 = 2(\tilde r^{'}_t)^{\top} d \tilde r^{'}_t = 2(\tilde r^{'}_t)^{\top} [(\eta_t BR^{-1}B^{\top}-A^{\top})\tilde r_t -F^{\top}\widetilde{\bar X}_t]dt. \end{equation*} Now we find a bound for $||\tilde r_t^{'}||^2$ as follows by using Young's inequality: \begin{equation*} \begin{aligned} ||\tilde r^{'}_t||^2&= \int_t^T 2(\tilde r^{'}_s)^{\top} [(A^{\top}-\eta_sBR^{-1}B^{\top}) \tilde r_s +F^{\top} \widetilde{\bar X}_s]ds\\ & \leq \int_t^T ||A^{\top}-\eta_sBR^{-1}B^{\top} || 2<\tilde r^{'}_s, \tilde r_s> ds + \int_t^T ||F^{\top}|| 2 <\tilde r^{'}_s, \widetilde{\bar X}_s> ds\\ & = \int_t^T (||A^{\top}-\eta_sBR^{-1}B^{\top} || + ||F^{\top}|| ) ||\tilde r^{'}_s||^2 ds\\ &\pushright{+ \int_t^T (||A^{\top}-\eta_sBR^{-1}B^{\top} ||)||\tilde r_s||^2 ds + \int_t^T (||F^{\top}||) ||\widetilde{\bar X}_s||^2 ds.} \end{aligned} \end{equation*} Now the expression found in \eqref{eq:mfg_ode_xbar_bound} can be plugged in and by using Gronwall's inequality: \begin{align*} ||\tilde r^{'}_t||^2 &\leq \int_t^T (||A^{\top}-\eta_sBR^{-1}B^{\top} || + |F^{\top}|| ) ||\tilde r^{'}_s||^2 ds + \int_0^T (||A^{\top}-\eta_sBR^{-1}B^{\top} ||)||\tilde r_s||^2 ds +\\ & \pushright{ \int_0^T ||F^{\top}|| C^{(1)} \Big(\int_0^T ||\tilde r_s||^2 ds\Big) ds}\\ & \leq \int_t^T (||A^{\top}-\eta_sBR^{-1}B^{\top} || + |F^{\top}|| ) ||\tilde r^{'}_s||^2 ds + \int_0^T (||A^{\top}-\eta_sBR^{-1}B^{\top} ||)||\tilde r_s||^2 ds +\\ & \pushright{ T\Big[ ||F^{\top}|| C^{(1)} \Big(\int_0^T ||\tilde r_s||^2 ds\Big) \Big]}\\ & \leq \exp\Big({T(||A|| +(||BR^{-1}B^\top||)||\eta||_T +||F^\top||)}\Big)\\ & \pushright{\int_0^T \Big[||A^{\top}-\eta_sBR^{-1}B^{\top} || + T||F^{\top}|| C^{(1)} \Big]||\tilde r_s||^2 ds}. \end{align*} Now define $||r||_T := \sup_{0\leq t \leq T} ||r_t||$, then we have: $ ||\tilde r^{'}||_T^2 \leq c_T||\tilde r||_T^2 $, where $c_T$ is \begin{equation*} \begin{aligned} c_T = &T e^{{T(||A|| +(||BR^{-1}B^\top||)||\eta||_T +||F^\top||)}}\\ &\pushright{\times \big(||A^\top|| +(||\eta||_T + T||F^{\top}|| e^{T(2||A|| +(2||\eta||_T+1 )||BR^{-1}B^{\top}||)})||BR^{-1}B^\top||\big).} \end{aligned} \end{equation*} With small $T$ we have $c_T<1$, which concludes the proof. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:ode_mfc}] For Mean Field Control problems we have the following HJB and FP systems. 
The detailed derivation can be found in \cite[ch. 6]{frehse_2013}. \begin{equation*} \begin{aligned} &-\frac{\partial u(t,X)}{\partial t} - tr(aD^2u(t,X)) = \hat H(t, X, m(t), Du(t,X)) \\ &\pushright{+ \int_{\mathbb{R}^n} \frac{\partial \hat H}{\partial m}(t,\xi,m(t), Du(t,\xi))(x)m(t,\xi)d\xi,}\\ &\pushright{u(T,X) = X^{\top} S_T X + p_2 R_e} \\ &\dfrac{\partial m(t,X)}{\partial t} - tr(aD^2 m(t,X))+ \nabla_x\big(m(t,X)(AX-BR^{-1}B^{\top}Du(t,X)+C_t)\big)=0,\\ &\pushright{m(0,X)=m_0(X),} \end{aligned} \end{equation*} where $\frac{\partial \hat H}{\partial m}$ denotes the G\^ateaux differential of $\hat H$ on $L^2(\mathbb{R}^5)$. As can be seen, the KFP equation stays the same as in the MFG case, but the HJB equation changes. Rewriting \eqref{eq:OptimalHamiltonian} as: \begin{equation} \hat H(t, X, m, q) = -\frac{1}{2} q^{\top} B R^{-1}B^{\top} q + X^{\top} A^{\top} q + C_t^{\top} q + H_t^{\top} X + \Big(\int_{\mathbb{R}^n} \xi m(\xi)d\xi\Big)^{\top} F X+ X^{\top} G X +J_t, \end{equation} we find that \begin{equation} \label{eq:exp_hamiltonian_dens_der} \begin{aligned} \int_{\mathbb{R}^n}\frac{\partial \hat H(t, X, m, q)}{\partial m}(x)m(\xi)d\xi = \int_{\mathbb{R}^n}X^{\top} F \xi m(\xi)d\xi=X^{\top} F \bar X . \end{aligned} \end{equation} Therefore, the HJB equation becomes: \begin{equation} \label{eq:hjb_mfc} -\frac{\partial u}{\partial t} - tr(aD^2u) = \hat H(t, X, \bar{X}, Du) + X^{\top} F \bar X, \qquad u(T,X) = X^{\top} S_T X + p_2 R_e . \end{equation} We introduce the same ansatz as in \eqref{eq:value_ansatz}. By plugging this ansatz in the HJB equation given in \eqref{eq:hjb_mfc}, we end up with the same Riccati equation and the same equation for $s_0$. Only the differential equation of $r_t$ changes, as follows: \begin{equation*} -\frac{d{r_t}}{dt} = \left(A^{\top} - {\eta_t} B R^{-1} B^{\top}\right) {r_t} + {\eta_t} C_t + H_t + F^{\top} {\bar{X}_t} + \textcolor{Bittersweet}{F {\bar{X}_t}}, \qquad {r_T} = 0, \end{equation*} where $\bar{X}_t = \int_{\mathbb{R}^5} X m(t,X) dX$. Since the Kolmogorov-Fokker-Planck equation stayed the same, we have the same expression for the differential equation of $\bar X_t$ as in \eqref{eq:KFP_mfg_ansatz}. We obtain the MFC cost given any fixed $R_e$ by using the ansatz (see e.g.~\cite{AMSnotesLauriere} for more details): \begin{align*} \begin{split} \inf_{(N_t)_t} \tilde{C}^{MFC}\Big(N; R_e \Big) &= \mathbb{E}\left[u(0, X_0)\textcolor{Bittersweet}{ - \int_0^T X_t^{\top}F \bar X_t dt}\right]\\ &= \mathbb{E}\left[ \frac{1}{2} X_0^{\top}\eta_0 X_0 + X_0^{\top} r_0 +s_0\right]\textcolor{Bittersweet}{-\int_0^T \bar X_t^{\top}F \bar X_t dt}\\ &= \frac{1}{2} \left( Var(\sqrt{\eta_0} X_0) + \mathbb{E}[\sqrt{\eta_0} X_0]^2 \right) +\bar X_0^{\top} r_0 + s_0\textcolor{Bittersweet}{-\int_0^T \bar X_t^{\top}F \bar X_t dt}. \end{split} \end{align*} \end{proof} \begin{proof}[Proof of Theorem~\ref{the:exist_uniq_mfc}] For the sake of space, we omit the proof of existence and uniqueness, which follows the same steps as in the proof of Theorem~\ref{the:exist_uniq_mfg}. \end{proof} \end{document}
\begin{document} \title{A Fast Exact Algorithm for Airplane Refueling Problem \thanks{This work is supported by Key Laboratory of Management, Decision and Information Systems, CAS.}} \author{Jianshu Li \inst{1, 2 (}\Envelope\inst{)} \and Xiaoyin Hu\inst{1, 2} \and Junjie Luo\inst{1, 2} \and Jinchuan Cui \inst{1}} \authorrunning{J. Li et al.} \institute{Academy of Mathematics and Systems Science Chinese Academy of Sciences, Beijing 100190, China\\ \email{\{ljs, hxy, luojunjie, cjc\}@amss.ac.cn} \and School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China} \maketitle \begin{abstract} We consider the airplane refueling problem, where we have a fleet of airplanes that can refuel each other. Each airplane is characterized by a specific fuel tank volume and fuel consumption rate, and the goal is to find a drop out order of the airplanes such that the last airplane in the air can reach as far as possible. This problem is equivalent to the scheduling problem $1||\sum w_j (- \frac{1}{C_j})$. Based on the dominance properties among jobs, we reveal some structural properties of the problem and propose a recursive algorithm to solve the problem exactly. The running time of our algorithm is directly related to the number of schedules that do not violate the dominance properties. An experimental study shows our algorithm outperforms state-of-the-art exact algorithms and is efficient on larger instances. \keywords{Scheduling \and Dominance properties \and Branch and bound.} \end{abstract} \section{Introduction} \label{sec:introduction} The \emph{airplane refueling problem}, originally introduced by Gamow and Stern \cite{puzzle_math_1958}, is a special case of a single machine scheduling problem. Consider a fleet of several airplanes, each with a certain fuel tank volume and fuel consumption rate. Suppose all airplanes travel at the same speed and they can refuel each other in the air. An airplane will drop out of the fleet once it has given all its fuel to the other airplanes. The goal is to find a drop out order so that the last airplane can reach as far as possible. \subsubsection{Problem definition} \label{ssub:problem_definition} In the original description of the airplane refueling problem, all airplanes are assumed to be identical. Woeginger \cite{chrobaketal:DSP:2010:2536} generalized this problem so that each airplane $j$ can have an arbitrary tank volume $w_j$ and consumption rate $p_j$. Denoting the set of all airplanes by $J$, a solution is a drop out order $\sigma: \{1, 2, \dots, |J|\} \mapsto J$, where $\sigma(i) = j$ if airplane $j$ is the $i^{\text{\tiny th}}$ airplane to leave the fleet. The objective value of the drop out order $\sigma$ is: \[ \sum_{j = 1}^{n} \frac{w_{\sigma(j)}}{\sum_{k = j}^{n}p_{\sigma(k)}}. \] As pointed out by Vásquez \cite{vasquez_for_2015}, we can rephrase the problem as a single machine scheduling problem, which is equivalent to finding a permutation $\pi$ (the reverse of $\sigma$) that minimizes: \[ \sum_{j = 1}^{n} \Big( - \frac{w_{\pi(j)}}{\sum_{k = 1}^{j}p_{\pi(k)}} \Big) = \sum_{j=1}^{n} -w_j / C_j, \] where $C_j$ is the completion time of job $j$, and $p_j, w_j$ correspond to its processing time and weight, respectively. This scheduling problem is specified as $1||\sum w_j (- \frac{1}{C_j})$ using the classification scheme introduced by Graham et al. \cite{graham_optimization_1979}.
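To make this equivalence concrete, the following short Python sketch (with purely hypothetical tank volumes and consumption rates; the data and function names are for illustration only) evaluates the distance reached by a drop out order $\sigma$ and the scheduling objective of the reversed permutation $\pi$, and checks that the two values agree up to sign.
\begin{verbatim}
# Hypothetical instance: w = tank volumes, p = consumption rates.
w = [4.0, 1.0, 3.0]
p = [2.0, 1.0, 3.0]

def airplane_value(sigma):
    # Distance reached: at each drop out position j, the fuel w of the
    # leaving airplane is divided by the total consumption rate of the
    # airplanes still in the air.
    n = len(sigma)
    return sum(w[sigma[j]] / sum(p[sigma[k]] for k in range(j, n))
               for j in range(n))

def scheduling_cost(pi):
    # Scheduling objective sum_j -w_j / C_j, with C_j the completion time.
    t, cost = 0.0, 0.0
    for j in pi:
        t += p[j]
        cost -= w[j] / t
    return cost

sigma = [1, 0, 2]                # a drop out order
pi = list(reversed(sigma))       # the corresponding job permutation
assert abs(airplane_value(sigma) + scheduling_cost(pi)) < 1e-12
\end{verbatim}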
In the rest of this paper, we study the equivalent scheduling problem instead. \subsubsection{Dominance properties} \label{ssub:dominance_relation} Since the computational complexity status of the airplane refueling problem is still open \cite{chrobaketal:DSP:2010:2536}, existing algorithms find the optimal solution with branch and bound methods. When making branching decisions in a branch and bound search, it is very useful to know the dominance relations among jobs. For example, if we know job $i$ always precedes job $j$ in an optimal solution, we can speed up the searching process by pruning all the branches with job $i$ processed after job $j$. Let the start time of job $j$ be $t_j$. We refer to \cite{durr_order_2014} for the definitions of \emph{local dominance} and \emph{global dominance}: \begin{itemize} \item \textit{local dominance:} Suppose job $j$ starts at time $t$ and is followed directly by job $i$ in a schedule. If exchanging the positions of jobs $i,j$ strictly improves the cost, we say that \emph{job $i$ locally dominates job $j$ at time $t$} and denote this property by $i \prec_{l(t)} j$. If $i \prec_{l(t)} j$ holds for all $t \in [a, b]$, we denote it by $i \prec_{l[a, b]} j$. \item \textit{global dominance:} Suppose schedule $S$ satisfies $a \leq t_j \leq t_i - p_j \leq b$. If exchanging the positions of jobs $i,j$ strictly improves the cost, we say that \emph{job $i$ globally dominates job $j$ in time interval $[a, b]$} and denote this property by $i \prec_{g[a, b]} j$. If it holds that $i \prec_{g[0, \infty)} j$, we denote this property by $i \prec_{g} j$. \end{itemize} We call a schedule that does not violate the dominance properties a \emph{potential schedule}. The effect of dominance properties is to narrow the search space to the set of all potential schedules, whose cardinality is much smaller than $n!$, the number of all job permutations. \subsubsection{Related work} \label{ssub:related_work} The airplane refueling problem is a special case of the more general scheduling problem $1||\sum w_j C_j^\beta$. For most problems of this form, including the airplane refueling problem, no polynomial-time algorithm is known. Existing methods resort to approximation algorithms or branch and bound schemes. For approximations, several constant factor approximations and polynomial time approximation schemes (PTAS) have been devised for different cost functions \cite{bansal_geometry_2014,cheung_primal-dual_2017,hohn_how_2018,megow_dual_2013}. Recently, Gamzu and Segev \cite{gamzu_polynomial-time_2019} gave the first polynomial-time approximation scheme for the airplane refueling problem. The focus of exact methods is to find stronger dominance properties. Following a long series of improvements \cite{bagga_node_1980,croce_minimizing_1993,bader_experimental_2012,mondal_improved_2000-1,sen_minimizing_1990}, Dürr and Vásquez \cite{durr_order_2014} conjectured that for all cost functions of the form $f_j(t) = w_jt^\beta, \beta >0$ and all jobs $i, j$, $i \prec_l j$ implies $i \prec_g j$. Later, Bansal et al. \cite{bansal_localglobal_2017} confirmed this conjecture, and they also gave a counterexample to the generalized conjecture that $i \prec_{l[a, b]} j$ implies $i \prec_{g[a, b]} j$. For the airplane refueling problem, Vásquez \cite{vasquez_for_2015} proved that $i \prec_{l[a, b]} j$ implies $i \prec_{g[a, b]} j$.
The established dominance properties are commonly incorporated into a branch and bound scheme, such as the $A^*$ algorithm \cite{hart_formal_1968}, to speed up the searching process. \subsubsection{Our contribution} \label{ssub:our_contribution} Existing branch and bound algorithms search for potential schedules in a trial and error manner. Specifically, when making branching decisions, it is unknown whether the current branch contains any potential schedule unless we exhaust the entire branch. So if we can prune all the branches that do not contain any potential schedule, it will considerably improve the efficiency of the searching process. In this paper we give an exact algorithm with this merit for the airplane refueling problem. Specifically, every branch explored by our algorithm is guaranteed to contain at least one potential schedule, and the time to find each potential schedule is bounded by $\mathcal{O}(n^2)$. Numerical experiments exhibit the empirical advantage of our algorithm over the $A^*$ algorithms proposed in previous studies, and the advantage is more significant on instances with more jobs. The main difference between previous methods and our algorithm is that instead of branching on possible succeeding jobs, we branch on the possible start times of a certain job. To this end, we introduce a prefixing technique to determine the possible start times of a certain job in a potential schedule. In addition, the relative order of the other jobs with respect to that job is also determined. Thus, each branch divides the original problem into subproblems with fewer jobs and we can solve the problem recursively. \subsubsection{Organization} \label{ssub:organization} The rest of the paper is organized as follows. In section 2, we introduce a new auxiliary function and give a concise representation of the dominance properties. Section 3 establishes some useful lemmas. We present our algorithm in section 4 and experimental results in section 5. Finally, section 6 concludes the paper. \section{Preliminaries} \label{sec:preliminaries} A \emph{dominance rule} stipulates the local and global dominance properties among jobs. We can refer to a dominance rule by the set $R:= \cup_{i, j \in J} \{r_{ij}\}$, where $r_{ij}$ represents the dominance properties between jobs $i, j$ specified by the rule. Given a dominance rule $R$, we formally define the potential schedule as follows. \begin{definition}[Potential Schedule] We call $S$ a potential schedule with respect to dominance rule $R$ if for all $r_{ij} \in R$, the start times of jobs $i,j$ in $S$ do not violate the relation $r_{ij}$. \end{definition} \subsubsection{Auxiliary function} \label{ssub:auxiliary_function} For each job $j$, we introduce an auxiliary function $\varphi_j(t)$, where $t \geq 0$ represents the possible start time: \[ \varphi_j(t) = \frac{w_j}{p_j(p_j + t)}. \] We remark that $\varphi_j(t)$ is well defined since the processing time $p_j$ is positive. With the help of this auxiliary function, we can obtain a concise characterization of local precedence between two jobs. \begin{lemma} \label{lemma: local_new} For any two jobs $i, j$ and time $t \geq 0$, $i \prec_{l(t)} j$ if and only if $\varphi_{i}(t) > \varphi_{j}(t)$. \end{lemma} \begin{proof} Suppose $S = AijB$ is an arbitrary schedule, where job $i$ starts at time $t$ and job $j$ is processed directly after job $i$. We use $F(S)$ to represent the cost of the schedule $S$.
To explore the local dominance relation between job $i$ and job $j$, we need to check the impact of swapping the order of $i, j$, which leads to: \begin{align*} F(AijB) - F(AjiB) & = \frac{w_i}{p_i + t} + \frac{w_j}{p_j + p_i + t} - \frac{w_j}{p_j + t} - \frac{w_i}{p_i + p_j + t} \\ & = w_i \cdot \frac{p_j}{(p_i + t)(p_i + p_j + t)} - w_j \cdot \frac{p_i}{(p_j + t)(p_i + p_j + t)} \\ & = \frac{p_ip_j}{p_i + p_j + t}(\frac{w_i}{p_i(p_i + t)} - \frac{w_j}{p_j(p_j + t)}) \\ & = \frac{p_ip_j}{p_i + p_j + t}(\varphi_i(t) - \varphi_j(t)). \end{align*} Since $\frac{p_ip_j}{p_i + p_j + t} > 0$, the effect of exchanging jobs $i, j$ depends on the term $(\varphi_i(t) - \varphi_j(t))$, which completes the proof. \qed \end{proof} \subsubsection{Dominance rule} \label{ssub:modified_rule} Vásquez \cite{vasquez_for_2015} proved the following dominance properties of the airplane refueling problem. \begin{theorem}[Vásquez] \label{theorem: vas} For all jobs $i, j$ and time points $a, b$, the dominance property $i \prec_{l[a, b]} j$ implies $i \prec_{g[a, b]} j$. \end{theorem} Based on Theorem \ref{theorem: vas} and Lemma \ref{lemma: local_new} we obtain the following dominance rule for the airplane refueling problem. \begin{corollary} \label{rule: new_rule} For any two jobs $i, j$ with $w_i > w_j$: \begin{enumerate} \item If $\varphi_i(t) \geq \varphi_j(t)$ for $t \in [0, \infty)$, $i \prec_g j$; \item If $\varphi_j(t) \geq \varphi_i(t)$ for $t \in [0, \infty)$, $j \prec_g i$; \item else $\exists$ $t^*_{ij} = \frac{w_jp_i^2 - w_ip_j^2}{w_i p_j - w_j p_i} > 0$ with: \begin{itemize} \item $\varphi_i(t) > \varphi_j(t)$ for $t \in [0, t^*_{ij})$, $i \prec_{g[0, t^*_{ij})} j$; \item $\varphi_j(t) \geq \varphi_i(t)$ for $t \in [t^*_{ij}, \infty)$, $j \prec_{g[t^*_{ij}, \infty)}i$. \end{itemize} \end{enumerate} \end{corollary} This rule is equivalent to the rule given in \cite{vasquez_for_2015}, except that we use $\varphi$ to indicate the dominance relation between any two jobs $i, j$. Figure \ref{fig: auxilary_function} gives an illustrative example of the scenario where $t^*_{ij} = \frac{w_jp_i^2 - w_ip_j^2}{w_i p_j - w_j p_i} > 0$. In this paper, we care about potential schedules with respect to Corollary \ref{rule: new_rule}. We refer to these schedules as \emph{potential schedules} for short. \begin{figure} \caption{Illustration of the dominance property between jobs $i, j$: $\varphi_i(t) > \varphi_j(t)$ for $t \in [0, t^*_{ij})$ and $\varphi_j(t) \geq \varphi_i(t)$ for $t \in [t^*_{ij}, \infty)$.} \label{fig: auxilary_function} \end{figure} \section{Technical lemmas} \label{sec:technical_lemmas} In this section, we establish several important properties concerning potential schedules. \subsection{Relative order between two jobs} \label{sub:relative_order_between_two_jobs} To begin with, we show that the dominance rule makes it impossible for a job to start within some time intervals in a potential schedule. Let $T:= \sum_{j \in J} p_j$ be the total processing time. We have: \begin{lemma} \label{lemma: banned_interval} For two jobs $i, j \in J$ with $\varphi_i(0) > \varphi_j(0)$ and $t^*_{ij} \in (0, T)$, if in a complete schedule $S$ job $i$ starts in time interval $[t^*_{ij}, t^*_{ij} + p_j)$, then $S$ is not a potential schedule. \end{lemma} \begin{proof} By Corollary \ref{rule: new_rule} it follows that $i \prec_{g[0, t^*_{ij})} j$ and $j \prec_{g[t^*_{ij},\infty)} i$. If job $j$ precedes job $i$ in $S$, then $t_j < t_j + p_j \leq t_i$. Since $t_i \in [t^*_{ij}, t^*_{ij} + p_j)$, we have: \[ 0 \leq t_j \leq t_i - p_j < t^*_{ij}, \] which leads to a violation of the relation $i \prec_{g[0, t^*_{ij})} j$.
Otherwise job $j$ is processed after job $i$ in $S$. In a similar manner we can derive: \[ t^*_{ij} \leq t_i \leq t_j - p_i < \infty. \] Again, the relation $j \prec_{g[t^*_{ij}, \infty)} i$ is not satisfied. \qed \end{proof} According to Lemma \ref{lemma: banned_interval}, job $i$ starts either in $[0, t^*_{ij})$ or in $[t^*_{ij} + p_j, T - p_i)$ in a potential schedule. The next lemma shows that if we fix the start time of job $i$ to one of these intervals, the relative order of jobs $i, j$ in a potential schedule is already decided. \begin{lemma} \label{lemma: left_or_right} For two jobs $i, j \in J$ with $\varphi_i(0) > \varphi_j(0)$ and $t^*_{ij} \in (0, T)$, suppose $S$ is a schedule with $t_j \not \in [t^*_{ij}, t^*_{ij} + p_j)$. \begin{enumerate} \item If $t_i < t^*_{ij}$, job $i$ should be processed before job $j$, otherwise $S$ is not a potential schedule; \item If $t_i \geq t^*_{ij} + p_j$, job $i$ should be processed after job $j$, otherwise $S$ is not a potential schedule. \end{enumerate} \end{lemma} \begin{proof} According to Corollary \ref{rule: new_rule} we have $i \prec_{g[0, t^*_{ij})} j$ and $j \prec_{g[t^*_{ij}, \infty)} i$. For the first scenario, processing job $j$ before job $i$ in $S$ would imply $t_j + p_j \leq t_i$, which leads to the following inequalities: \[ 0 \leq t_j \leq t_i - p_j < t^*_{ij}. \] Thus we have reached a contradiction to $i \prec_{g[0, t^*_{ij})} j$. Similarly, for the second scenario, if job $i$ precedes job $j$ we will have: \[ t^*_{ij} \leq t_i \leq t_j - p_i < T. \] The relation $j \prec_{g[t^*_{ij}, \infty)} i$ does not hold, which concludes the proof. \qed \end{proof} Conditioning on the start time of job $i$, Lemma \ref{lemma: banned_interval} and Lemma \ref{lemma: left_or_right} provide a complete characterization of the relative order between jobs $i, j$ in a potential schedule. See Figure \ref{fig: relative_order} for an overview of these relations. \begin{figure} \caption{The relative order of job $i$ and job $j$ in a potential schedule, conditioned on the start time of job $i$, with $t^*_{ij} \in (0, T)$.} \label{fig: relative_order} \end{figure} \subsection{Possible start times in a potential schedule} \label{sub:possible_start_times_in_a_potential_schedule} In this subsection we consider a general \emph{sub-problem} scenario. Let $J' \subseteq J$ be a set of jobs to be processed consecutively from start time $t_o$, where $t_o \in [0, T - \sum_{j \in J'} p_j]$. For convenience we denote the completion time $t_o + \sum_{j \in J'} p_j$ by $t_e$. Notice that when $J' = J$, we have $t_o = 0$ by definition, and the setting above describes the original problem. We further denote the partial schedule of $J'$ by $S'$. If $S'$ does not violate any dominance rule, we call it a \emph{partial potential schedule}. Now consider job $\alpha \in J'$ such that: \[ \alpha = \operatorname*{arg\,max}_{i \in J'} \varphi_i(t_o). \] When there is more than one job with the maximum $\varphi(t_o)$, the tie breaking rule is to choose the job with the maximum $\varphi(t_e)$. Our goal is to study the possible start times of job $\alpha$ in a partial potential schedule $S'$. We start by analyzing the precedence relation of job $\alpha$ with the other jobs in a partial potential schedule. To that end, we divide the time interval $[t_o, t_e]$ into consecutive subintervals with the time set: \[ C_{J'}^{\alpha} := \{c_{\alpha j} | c_{\alpha j} = t^*_{\alpha j} + p_j, c_{\alpha j} \in (t_o, t_e), j \in J'\setminus \{\alpha\} \} \cup \{t_o, t_e\}.
\] We re-index the set $C_{J'}^{\alpha}$ by increasing value, that is, if a time point ranks $q^{\text{\tiny th}}$ in the set, it will be denoted by $c_q$. Besides, we use a mapping $M_{J'}^{\alpha}: J' \mapsto \mathbb{Z^+}$ to record, for each job, the index of its associated time point. The mapping is defined as: \[ M_{J'}^{\alpha}(j) = \begin{cases} 1, & \text{if $j = \alpha$}; \\ \text{the rank of $c_{\alpha j}$ in $C_{J'}^\alpha$}, & \text{if $t^*_{\alpha j} \in (t_o, t_e)$}; \\ |J'| + 1, & \text{if $t^*_{\alpha j} \notin (t_o, t_e)$}. \end{cases} \] We consider the start time of job $\alpha$ in each subinterval $[c_q, c_{q + 1})$. The next lemma shows that once we fix $t_\alpha$ to a subinterval, the positions of all remaining jobs in a potential schedule relative to $\alpha$ are determined. \begin{lemma} \label{lemma: J_l_J_r} Suppose $S'$ is a partial potential schedule in which job $\alpha$ starts within the time interval $[c_q, c_{q + 1})$. Then all jobs $j$ with $M_{J'}^{\alpha}(j) \leq q$ should precede $\alpha$, and the remaining jobs with $M_{J'}^{\alpha}(j) > q$ should come after $\alpha$. \end{lemma} \begin{proof} Suppose $j$ is an arbitrary job in the set $J'$ other than $\alpha$. By the definition of job $\alpha$ and Corollary \ref{rule: new_rule}, we know that either $j$ is dominated by job $\alpha$ in the time interval $[t_o, t_e]$, or there exists $t^*_{\alpha j} \in (t_o, t_e)$ such that $\alpha \prec_{g[t_o, t^*_{\alpha j})} j$ and $j \prec_{g[t^*_{\alpha j}, t_e)} \alpha$. For the first case, we always have $M_{J'}^{\alpha}(j) = |J'| + 1 > q$, so job $j$ comes after job $\alpha$. For the second case, Lemma \ref{lemma: banned_interval} and Lemma \ref{lemma: left_or_right} apply, and we have: \begin{itemize} \item If $M_{J'}^{\alpha}(j) \leq q$, it implies that $t_\alpha \geq t^*_{\alpha j} + p_j$. By Lemma \ref{lemma: left_or_right} job $j$ should precede job $\alpha$ in a potential schedule. \item Otherwise $M_{J'}^{\alpha}(j) > q$, and it follows that $t_\alpha < t^*_{\alpha j} + p_j$. Since $S'$ is a partial potential schedule, Lemma \ref{lemma: banned_interval} rules out the possibility that $t_\alpha \in [t^*_{\alpha j}, t^*_{\alpha j} + p_j)$, so we have $t_\alpha < t^*_{\alpha j}$. Again, by Lemma \ref{lemma: left_or_right} job $j$ should come after job $\alpha$. \end{itemize} \qed \end{proof} At last, for the possible start times of job $\alpha$ in a partial potential schedule, we have: \begin{lemma} \label{lemma: at_most_start_times} For every time interval $[c_q, c_{q + 1})$, there is at most one possible start time for job $\alpha$ in $[c_q, c_{q + 1})$. Thus, there are at most $|J'|$ possible start times of job $\alpha$ in a partial potential schedule $S'$. \end{lemma} \begin{proof} Suppose that $t_{\alpha} \in [c_q, c_{q + 1})$. According to Lemma \ref{lemma: left_or_right}, the positions of all remaining jobs in $J' \setminus \{\alpha\}$ relative to $\alpha$ are determined. We denote the sets of jobs before and after job $\alpha$ by $J_l, J_r$, respectively. The start time of job $\alpha$ is determined by $J_l$; more precisely, we have $t_\alpha = t_o + \sum_{j \in J_l} p_j.$ If $t_o + \sum_{j \in J_l} p_j \notin [c_q, c_{q + 1})$, we have reached a conflict, which means there is no partial potential schedule in which job $\alpha$ starts in the current subinterval. As a result, there is at most one possible start time for job $\alpha$ in each subinterval. \qed \end{proof} \section{Algorithm} \label{sec:algorithm} In this section we devise an exact algorithm for the airplane refueling problem and analyze its running time.
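As a warm-up, the following short sketch (with purely hypothetical weights and processing times; the variable names are illustrative only) computes the crossing times $t^*_{ij}$ of Corollary \ref{rule: new_rule} and the intervals $[t^*_{ij}, t^*_{ij} + p_j)$ in which, by Lemma \ref{lemma: banned_interval}, job $i$ cannot start in a potential schedule; these are exactly the quantities on which our algorithm branches.
\begin{verbatim}
# Hypothetical instance: w = weights (tank volumes), p = processing times.
w = [5.0, 2.0, 4.0]
p = [1.0, 2.0, 3.0]

def phi(j, t):
    return w[j] / (p[j] * (p[j] + t))

def crossing_time(i, j):
    # t*_{ij} from Corollary 1; returns None when one job dominates the
    # other on the whole interval [0, infinity).
    denom = w[i] * p[j] - w[j] * p[i]
    if denom == 0:
        return None
    t_star = (w[j] * p[i] ** 2 - w[i] * p[j] ** 2) / denom
    return t_star if t_star > 0 else None

for i in range(len(w)):
    for j in range(len(w)):
        if i != j and phi(i, 0) > phi(j, 0):
            t_star = crossing_time(i, j)
            if t_star is not None:
                print(f"job {i} cannot start in [{t_star}, {t_star + p[j]})")
\end{verbatim}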
\subsection{An exact algorithm for the airplane refueling problem} \label{sub:an_exact_algorithm_for_air_plane_refueling_problem} We start with some notation. For each job pair $i, j$ as in Lemma \ref{lemma: banned_interval}, we call the time interval $b_{ij} = [t^*_{ij}, t^*_{ij} + p_j)$ the \emph{banned interval} of job $i$ imposed by job $j$, and denote by $B_i := \cup_{j \in J \setminus \{i\}} b_{ij}$ the union of all banned intervals of job $i$. \begin{figure} \caption{Possible start times of job $\alpha$.} \label{fig: pos_start_time} \end{figure} Given an instance of the airplane refueling problem, we find the job $\alpha$ and try to start $\alpha$ in each interval induced by $C_{J'}^{\alpha}$. For an interval $[c_q, c_{q+1})$ and corresponding $J_l, J_r$, the possible start time of $\alpha$ will be $t_\alpha = \sum_{j \in J_l} p_j$. If $t_\alpha \in [c_q, c_{q+1})$ and $t_\alpha \notin B_\alpha$, then we have found a potential start time of job $\alpha$. Figure \ref{fig: pos_start_time} provides an illustrative example of the above procedure, where $J' = \{i, j, k, l, \alpha \}$. As shown in the figure, the set $C^{\alpha}_{J'}$ divides the time interval $[t_o, t_e)$ into 4 subintervals. Job $i$ should always come after job $\alpha$ since $t^*_{\alpha i} < t_o$. For the other jobs in $J'$, conditioning on the start time of job $\alpha$ we have: \begin{enumerate} \item $t_\alpha \in [c_1, c_2)$: Job $\alpha$ serves as the first job and $t_\alpha^1 = t_o$. \item $t_\alpha \in [c_2, c_3)$: Job $j$ precedes job $\alpha$ in a potential schedule. As $t_\alpha^2 = t_o + p_j > c_3$, job $\alpha$ cannot start in this interval. \item $t_\alpha \in [c_3, c_4)$: Jobs $j, k$ precede job $\alpha$ in a potential schedule. As $t_\alpha^3 = t_o + p_j + p_k$ is in a banned interval of job $\alpha$, job $\alpha$ cannot start in this interval. \item $t_\alpha \in [c_4, c_5)$: Jobs $j, k, l$ precede job $\alpha$, $t_\alpha^4 = t_o + p_j + p_k + p_l$. \end{enumerate} Once we find a possible start time of job $\alpha$, we can solve $J_l$ and $J_r$ recursively with the procedure above until the subproblem has only one job. See Algorithm \ref{algo: fast_schedule} for the pseudocode of our algorithm, where the initial inputs are the complete job set $J$ and the original start time $t=0$. \begin{algorithm}[H] \caption{Fast Schedule} \label{algo: fast_schedule} \begin{algorithmic}[1] \Require{Job set $J$ and start time $t$.} \Ensure{The optimal schedule and the optimal cost.} \Function{FastSchedule}{$J, t$} \If{$J$ is empty} \State \Return{$0$, $[]$} \ElsIf{$J$ contains only one job $j$} \State \Return{$\frac{w_j}{p_j + t}$, $[j]$} \EndIf \State Find job $\alpha$ \State $J_l, J_r, opt \gets [], J, 0$ \For{$c_{q} \in C_J^\alpha$} \State Find job $i$ with $M_{J}^{\alpha}(i) = q$ \Comment{$i = \alpha$ for $q=1$, i.e.
job $\alpha$ is the first job} \State $J_l \gets J_l \cup \{i\}$ \State $J_r \gets J_r \setminus \{i\}$ \If{$c_{q} \leq t + \sum_{i \in J_l} p_i <{c}_{q+1} \And \ (t + \sum_{i \in J_l} p_i) \notin B_\alpha$} \label{line: new_branch} \Comment{new branch} \State $opt_l, seq_l \gets $ \Call{FastSchedule}{$J_l \setminus \{\alpha\}, t$} \State $opt_r, seq_r \gets $ \Call{FastSchedule}{$J_r, t + \sum_{i \in J_l} p_i + p_\alpha$} \If{$opt_l + opt_r + \frac{w_\alpha}{p_\alpha + \sum_{i \in J_l} p_i} > opt$} \State $opt, seq \gets (opt_l + opt_r + \frac{w_\alpha}{p_\alpha + \sum_{i \in J_l} p_i})$, $[seq_l, \alpha, seq_r]$ \EndIf \EndIf \EndFor \State \Return{opt, seq} \EndFunction \end{algorithmic} \end{algorithm} Next, we prove the correctness of Algorithm \ref{algo: fast_schedule}. \begin{theorem} Algorithm \ref{algo: fast_schedule} returns the optimal solution of the airplane refueling problem. \end{theorem} \begin{proof} We only need to show that Algorithm \ref{algo: fast_schedule} does not eliminate any potential schedules. While locating possible start times of job $\alpha$, our algorithm drops schedules in which $t_\alpha$ lies in a banned interval or $t_\alpha$ falls outside the current interval. According to Lemma \ref{lemma: banned_interval} and Lemma \ref{lemma: at_most_start_times}, all the excluded schedules violate Corollary \ref{rule: new_rule}. Each recursive call also drops schedules that have jobs in $J_l$ processed after job $\alpha$ or jobs in $J_r$ processed before job $\alpha$. By Lemma \ref{lemma: left_or_right}, these schedules are not potential schedules either. \qed \end{proof} \subsection{Running time of Algorithm \ref{algo: fast_schedule}} \label{sub:running_time_of_algorithm_algo: fast_schedule} In this section we analyze the running time of Algorithm \ref{algo: fast_schedule} with respect to the number of potential schedules. Our main result can be stated as follows: \begin{theorem} \label{theorem: num_sol} Algorithm \ref{algo: fast_schedule} finds the optimal solution of the airplane refueling problem in $\mathcal{O}(n^2(\log n + K))$ time, where $K$ is the number of potential schedules with respect to Corollary \ref{rule: new_rule}. \end{theorem} Before proving this theorem, we need to establish some properties of Algorithm \ref{algo: fast_schedule}. We start by showing that for any job set $J'$ that starts at time $t$, there is at least one partial potential schedule. \begin{lemma} \label{lemma: greedy} Suppose $J' \subseteq J$ is a set of jobs that starts at time $t$. Then there exists a procedure that can find one partial potential schedule for $J'$. \end{lemma} \begin{proof} Consider the following procedure that constructs the schedule successively. At each stage, choose the job with the maximum $\varphi(C)$ from the unscheduled jobs as the next job, where $C$ is the total processing time of the scheduled jobs plus $t$. We claim this procedure returns a partial potential schedule of $J'$. Let $S'$ be the schedule returned by the procedure above. Assume by contradiction that there are two jobs $i,j$ that violate the dominance properties. Suppose $i$ precedes $j$ in $S'$; then we have $j \prec_{g[t_i, t_j - p_i)} i$, which implies $\varphi_j(t_i) > \varphi_i(t_i)$ by Lemma \ref{lemma: local_new}. In that case, we should have chosen job $j$ at time $t_i$ instead of $i$, a contradiction. \qed \end{proof} While Lemma \ref{lemma: greedy} concerns a single job set, the following lemma describes the relationship between two job sets $J_l$ and $J_r$.
\begin{lemma} \label{lemma: independent} Given start time $t_{\alpha} \in [c_{q}, c_{q + 1})$ of job $\alpha$ with $t_\alpha \in [t_o, t_e-p_\alpha]$ and $t_\alpha \notin B_\alpha$, for any two jobs $j \in J_l$ and $k \in J_r$ with $M_{J'}^{\alpha}(j) \leq q < M_{J'}^{\alpha}(k)$, if job $j$ is scheduled before job $\alpha$ and job $k$ is scheduled after job $\alpha$, then no matter where job $j$ and job $k$ start, the dominance properties among $\alpha,j$ and $k$ will not be violated. \end{lemma} \begin{proof} The dominance relations between jobs $\alpha,j$ and jobs $\alpha,k$ are satisfied, so we only need to consider the relation between job $j$ and job $k$. From the definition of $c_q$, it follows that \[ t^*_{\alpha j} + p_j \leq t_{\alpha} < t^*_{\alpha k} + p_k. \] Since $t_\alpha$ is not in the banned interval $b_{\alpha k} = [t^*_{\alpha k}, t^*_{\alpha k} + p_k)$, the inequality above leads to: \[ t^*_{\alpha j} < t_\alpha < t^*_{\alpha k}. \] We distinguish cases according to whether job $j$ and job $k$ have global precedence in the time interval $[t_o, t_e]$. \begin{case}[Global dominance]\\ Job $j$ and job $k$ have global precedence in time interval $[t_o, t_e]$. \begin{enumerate} \item $j \prec_{g[t_o, t_e]} k$: Since job $j$ is scheduled before job $k$, the precedence rule is satisfied. \item $k \prec_{g[t_o, t_e]} j$: We will show this scenario is impossible. At time $t_\alpha$, since $t^*_{\alpha j} < t_\alpha < t^*_{\alpha k}$ we have $j \prec_{l(t_\alpha)} \alpha$ and $\alpha \prec_{l(t_\alpha)} k$, which implies $j \prec_{l(t_\alpha)} k$. Thus we have constructed a contradiction to the global dominance $k \prec_{g[t_o, t_e]} j$. \end{enumerate} \end{case} \begin{case}[Partial dominance] \\ Job $j$ and job $k$ have no global precedence in time interval $[t_o, t_e]$. In this case there exists $t^*_{jk} \in (t_o, t_e)$. \begin{enumerate} \item $t^*_{jk} < t^*_{\alpha j}$: On one hand, by the definition of $\alpha$ we have $\varphi_k(t_o) > \varphi_j(t_o)$. On the other hand, for the start time of job $k$, $t_k > t_\alpha > t^*_{\alpha j} + p_j > t^*_{jk} + p_j$. According to Lemma \ref{lemma: left_or_right}, having job $j$ precede job $k$ does not violate the dominance properties between $j, k$. Figure \ref{fig: case_1} depicts the auxiliary functions of jobs $\alpha, j, k$ in this scenario. \item $t^*_{jk} = t^*_{\alpha j}$: We claim this scenario is impossible. If $t^*_{jk} = t^*_{\alpha j}$ we will have $t^*_{jk} = t^*_{\alpha j} = t^*_{\alpha k}$. By the choice of $j$ and $k$, this relation would lead to $t^*_{\alpha k} < t^*_{\alpha j} + p_j \leq t_{\alpha} < t^*_{\alpha k} + p_k$, that is, $t_\alpha \in [t^*_{\alpha k}, t^*_{\alpha k} + p_k)$. Since $t_\alpha \notin B_\alpha$ and $[t^*_{\alpha k}, t^*_{\alpha k} + p_k) \subseteq B_\alpha$, we have reached a contradiction. See Figure \ref{fig: case_3} for the auxiliary functions of jobs $\alpha, j, k$ in this scenario. \item $t^*_{jk} > t^*_{\alpha j}$: At first we show $t^*_{jk} > t^*_{\alpha k}$. Assume by contradiction that $t^*_{\alpha j} < t^*_{jk} < t^*_{\alpha k}$, which would imply $j \prec_{g(t^*_{\alpha j}, t_e)} \alpha$ and $\alpha \prec_{g(t_o, t^*_{\alpha k})} k$. Then for any time $t \in (t^*_{\alpha j}, t^*_{\alpha k})$ we have the relation $j \prec_{l(t)} \alpha \prec_{l(t)} k$. However, this is impossible since $t^*_{jk} \in (t^*_{\alpha j}, t^*_{\alpha k})$. Therefore, we have $t^*_{jk} > t^*_{\alpha k}$, and job $j$ starts before time $t^*_{jk}$.
In that case, having job $j$ precede job $k$ does not violate the dominance properties between $j, k$, so our claim holds. See Figure \ref{fig: case_2} for this scenario. \end{enumerate} \end{case} \qed \end{proof} \begin{figure} \caption{$t^*_{jk} < t^*_{\alpha j}$.} \label{fig: case_1} \caption{$t^*_{jk} = t^*_{\alpha j}$.} \label{fig: case_3} \caption{$t^*_{jk} > t^*_{\alpha j}$.} \label{fig: case_2} \end{figure} Combining Lemma \ref{lemma: greedy} and Lemma \ref{lemma: independent}, we can identify a strong connection between Algorithm \ref{algo: fast_schedule} and potential schedules. \begin{lemma} \label{lemma: least_one} Whenever Algorithm \ref{algo: fast_schedule} adds a new branch for an instance $(J, t)$, there is at least one potential schedule on that branch. \end{lemma} \begin{proof} Suppose that after we find a possible start time of job $\alpha$ the remaining jobs are divided into $J_l$ and $J_r$. By Lemma \ref{lemma: greedy} both job sets have at least one partial potential schedule. Denote the partial schedule of $J_l$ by $S_l$ and the partial schedule of $J_r$ by $S_r$; then, according to Lemma \ref{lemma: independent}, the joint schedule $[S_l, \alpha, S_r]$ is also a potential schedule. \qed \end{proof} Now we are ready to prove Theorem \ref{theorem: num_sol}. \begin{proof}[Proof of Theorem \ref{theorem: num_sol}] We only need to generate all $t^*_{ij}$ and sort them once, which takes $\mathcal{O}(n^2 \log n)$ time. While finding the possible start times of $\alpha$, with the already calculated $t^*_{ij}$, we can construct the set $C_J^\alpha$ and the mapping $M_{J}^{\alpha}$ in $\mathcal{O}(n)$ time. The iteration over each subinterval $[c_q, c_{q + 1})$ also needs at most $\mathcal{O}(n)$ time. Therefore, it takes at most $\mathcal{O}(n)$ time to find a potential start time of a job. By Lemma \ref{lemma: least_one} there exists at least one potential schedule with job $\alpha$ starting at that time. For any potential schedule there are $n$ start times to be decided; as a consequence, Algorithm \ref{algo: fast_schedule} finds each potential schedule in at most $\mathcal{O}(n^2)$ time. Finding all the potential schedules thus takes $\mathcal{O}(n^2 (\log n + K))$ time. \qed \end{proof} \section{Experimental Study} \label{sec:experimental_study} We code our algorithm in Python 3 and perform the experiments on a Linux machine with one Intel Core i7-9700K @ 3.6GHz $\times$ 8 processor and 16GB RAM. Notice that our implementation uses only one core at a time. For the experimental data sets we adopt the method introduced by Höhn and Jacobs \cite{bader_experimental_2012} to generate random instances. For an instance with $n$ jobs, the processing time $p_i$ of job $i$ is an integer drawn from the uniform distribution on $\{1, \dots, 100\}$, whereas the priority weight is $w_i = 2^{N(0, \sigma^2)} \cdot p_i$ with $N(0, \sigma^2)$ a normally distributed random variable. Therefore, a random instance is characterized by two parameters $(n, \sigma)$. According to previous results \cite{bader_experimental_2012,vasquez_for_2015}, instances generated with smaller $\sigma$ are more likely to be harder to solve, which means we can roughly tune the hardness of instances by adjusting the value of $\sigma$. First, we compare our algorithm with the $A^*$ algorithm given by Vásquez \cite{vasquez_for_2015}. This algorithm models the airplane refueling problem as a shortest path problem on a directed acyclic graph $G$.
The vertices of $G$ consist of all subsets of $J$, arcs are added according to Corollary \ref{rule: new_rule}, and a path from vertex $\emptyset$ to $J$ corresponds to a schedule. Since there will be $2^n$ vertices in $G$ and it takes $\mathcal{O}(n)$ time to process each vertex in the worst case, the complexity of this algorithm is $\mathcal{O}(n \cdot 2^n)$. This part of the experiment is conducted on data set $S_1$, which is generated with $\sigma=0.1$ and job sizes ranging over $\{10, 20, \dots, 140\}$, with 50 instances for each configuration. Secondly, we evaluate the empirical performance of Algorithm \ref{algo: fast_schedule} on data set $S_2$. In detail, for each job size $n$ in $\{100, 500, 1000, 2000, 3000 \}$ and for each $\sigma$ value in $\{0.1, 0.101, 0.102, \dots, 1 \}$, there are 5 instances; that is, $S_2$ has 4505 instances in total. At last, we generate data set $S_3$ to examine the relation of instance hardness to the number of potential schedules and the value of $\sigma$. This data set has 5 instances of 500 jobs for each $\sigma$ from $\{0.100, 0.101, 0.102, \dots, 1\}$. \subsubsection{Comparison with $A^*$} \label{ssub:comparison_with_} We consider the ratio between the average running time of $A^*$ and Algorithm \ref{algo: fast_schedule} on data set $S_1$. As shown in Figure \ref{fig: compare}, our algorithm outperforms $A^*$ on all sizes, and the speed-up is more significant on larger instances. For instances with 140 jobs, our algorithm is over 100 times faster. \begin{figure} \caption{Speed-up factor as a function of instance size, on data set $S_1$.} \label{fig: compare} \end{figure} \subsubsection{Empirical performance} \label{ssub:empirical_performance} Table \ref{tab: running_time} presents the running time of Algorithm \ref{algo: fast_schedule} on data set $S_2$. We set a 10000-second timeout while performing the experiment. When our algorithm does not solve all the instances within this time bound, we report the percentage of solved instances. For all instances with fewer than 2000 jobs, as well as most instances with 2000 and 3000 jobs, our algorithm returns the optimal solution within the time bound. For hard instances of 2000 and 3000 jobs generated with $\sigma$ less than $0.2$, our algorithm can solve $92\%$ and $79\%$ of the instances within the time bound, respectively.
\begin{table}[H] \centering \caption{Running time (in seconds) on data set $S_2$; columns are grouped by the number of jobs $|J|$ and rows by the range of $\sigma$.} \label{tab: running_time} \scriptsize \begin{tabularx}{\textwidth}{@{}p{.1\textwidth}<{\centering}YYYYYYYYYY@{}} \toprule \multirow{2}{*}{$\sigma$} & \multicolumn{2}{c}{100} & \multicolumn{2}{c}{500} & \multicolumn{2}{c}{1000} & \multicolumn{2}{c}{2000} & \multicolumn{2}{c}{3000} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11} & $\mathrm{avg.}$ & $\mathrm{std.}$ & $\mathrm{avg.}$ & $\mathrm{std.}$ & $\mathrm{avg.}$ & $\mathrm{std.}$ & $\mathrm{avg.}$ & $\mathrm{std.}$ & $\mathrm{avg.}$ & $\mathrm{std.}$ \\ \midrule $[0.1, 0.2)$ & 0.37 & 0.16 & 62.26 & 128 & 645.4 & 2223 & 92.40\% & - & 79.60\% & - \\ $[0.2, 0.3)$ & 0.30 & 0.05 & 13.38 & 8.5 & 65.08 & 43.23 & 328.6 & 210.2 & 955.4 & 871.5 \\ $[0.3, 0.4)$ & 0.27 & 0.03 & 7.11 & 9.27 & 41.91 & 12.46 & 222.7 & 128.0 & 538.6 & 339.4 \\ $[0.4, 0.5)$ & 0.26 & 0.02 & 8.16 & 0.91 & 35.74 & 5.00 & 172.8 & 38.70 & 421.5 & 87.15 \\ $[0.5, 0.6)$ & 0.25 & 0.01 & 7.64 & 0.59 & 33.27 & 3.24 & 159.9 & 28.54 & 392.0 & 53.67 \\ $[0.6, 0.7)$ & 0.25 & 0.01 & 7.37 & 0.38 & 31.84 & 2.74 & 151.0 & 14.37 & 367.3 & 41.37 \\ $[0.7, 0.8)$ & 0.25 & 0.01 & 7.18 & 0.35 & 30.52 & 1.52 & 142.1 & 12.97 & 349.4 & 22.09 \\ $[0.8, 0.9)$ & 0.24 & 0.01 & 7.07 & 0.32 & 30.10 & 1.62 & 144.8 & 9.07 & 341.4 & 22.79 \\ $[0.9, 1.0]$ & 0.24 & 0.01 & 6.94 & 0.23 & 29.28 & 1.00 & 141.5 & 7.41 & 327.3 & 16.93 \\ \bottomrule \end{tabularx} \end{table} \subsubsection{Instance hardness} \label{ssub:instance_hardness} Figure \ref{fig: hardness} depicts the relations between different hardness indicators. The chart on the left shows the number of potential schedules and the running time of Algorithm \ref{algo: fast_schedule} on data set $S_3$. Since both variables cover a large range, we present the figure as a log-log plot. As indicated by Theorem \ref{theorem: num_sol}, there is a strong correlation between the number of potential schedules and the running time of Algorithm \ref{algo: fast_schedule}. Therefore, the number of potential schedules can serve as a rough measure of instance hardness. Regarding the relation between the number of potential schedules and the $\sigma$ value, the second chart shows that instances generated with small $\sigma$ are more likely to have a large number of potential schedules. This relation is more pronounced for smaller $\sigma$, while for larger $\sigma$ the differences in the number of potential schedules are less significant. \begin{figure} \caption{Different indicators of instance hardness, on data set $S_3$.} \label{fig: hardness} \end{figure} \section{Conclusions} \label{sec:conclusions} We devise an efficient exact algorithm for the airplane refueling problem. Based on the dominance properties of the problem, we propose a method that can prefix some jobs' start times and determine the relative orders among jobs in a potential schedule. This technique enables us to solve the airplane refueling problem in a recursive manner. Our algorithm outperforms the state-of-the-art exact algorithm on randomly generated data sets. For large instances with hard configurations that cannot be tackled by previous algorithms, our algorithm can solve most of them in a reasonable time. The empirical efficiency of our algorithm can be attributed to two factors. First, our algorithm explores only the branches that contain potential schedules.
Second, at the root node of each branch, the problem is further divided into smaller subproblems, which also speeds up the searching process. Theoretically, we prove that the running time of our algorithm is upper bounded by the number of potential schedules times a polynomial overhead in the worst case. Another contribution of this work is that we give some new structural properties of the airplane refueling problem, which may be helpful in understanding the computational complexity of the problem. \end{document}
\begin{document} \begin{center} {\large \textbf{Automorphisms of large-type free-of-infinity Artin groups.}} \end{center} \begin{center} Nicolas Vaskou \end{center} \begin{abstract} \centering \justifying We compute explicitly the automorphism and outer automorphism group of all large-type free-of-infinity Artin groups. Our strategy involves reconstructing the associated Deligne complexes in a purely algebraic manner, i.e. in a way that is independent of the choice of standard generators for the groups. \end{abstract} \noindent \rule{7em}{.4pt}\par \small \noindent 2020 \textit{Mathematics subject classification.} 20F65, 20F36, 20F28, 20E36. \noindent \textit{Key words.} Artin groups, Automorphisms, Deligne complex. \normalsize \section{Introduction.} Artin groups form a large family of groups that have drawn increasing attention in the past few decades. They are defined as follows. Start from a simplicial graph $\Gamma$ with finite vertex set $V(\Gamma)$ and finite edge set $E(\Gamma)$. For every edge $e^{ab}\in E(\Gamma)$ with vertices $a, b \in V(\Gamma)$, associate an integer coefficient (or label) $m_{ab} \geq 2$. Then $\Gamma$ is the \textbf{presentation graph} of an \textbf{Artin group} $A_{\Gamma}$ whose presentation is the following: $$A_{\Gamma} \coloneqq \langle \ V(\Gamma) \ | \ \underbrace{aba \cdots}_{m_{ab}} = \underbrace{bab \cdots}_{m_{ab}} \text{ for every } e^{ab} \in E(\Gamma) \ \rangle.$$ We will say that $m_{ab} = \infty$ whenever $a$ and $b$ are not adjacent. The elements of $V(\Gamma)$ are called the \textbf{standard generators} of $A_{\Gamma}$, and their number $|V(\Gamma)|$ is called the \textbf{rank} of $A_{\Gamma}$. We suppose throughout the paper that $\Gamma$ is connected, a weak condition that ensures the group does not trivially decompose as a free product of infinite groups. Artin groups are cousins of Coxeter groups: whenever $\Gamma$ defines an Artin group $A_{\Gamma}$, it also defines a \textbf{Coxeter group} $W_{\Gamma}$ that can be obtained from the presentation of $A_{\Gamma}$ by adding the relation $a^2 = 1$ for every standard generator. While Coxeter groups are rather well-understood, much less is known about Artin groups in general. Although they are conjectured to have many properties with various flavours (torsion-freeness, solvable conjugacy problem, $K(\pi, 1)$-conjecture, biautomaticity, acylindrical hyperbolicity, CAT(0)-ness, etc.), proving any property in full generality has remained exceedingly complicated. In general, such conjectures are solved for more specific classes of Artin groups, assuming additional properties about their presentation graphs (\cite{charney1995k}, \cite{vaskou2021acylindrical}, \cite{huang2019metric}, \cite{haettel2019xxl}). In \cite{goldsborough2023random}, the authors introduced a notion of randomness for presentation graphs. In particular, they showed that for each of the aforementioned conjectures, there is a class for which it has been solved that has a “non-trivial” asymptotic size within the family of all Artin groups. However, there are two (intrinsically related) questions regarding Artin groups that have remained more mysterious: that of solving the isomorphism problem, and that of computing their automorphism groups. The \textbf{isomorphism problem} for Artin groups asks what can be said about two presentation graphs $\Gamma$ and $\Gamma'$ assuming their corresponding Artin groups $A_{\Gamma}$ and $A_{\Gamma'}$ are isomorphic.
In \cite{vaskou2023isomorphism}, the author solved this problem for \textbf{large-type} Artin groups (those with coefficients at least $3$). The study of isomorphisms between Artin groups is inherently related to the study of automorphisms of Artin groups. As for the isomorphism problem, the study of the automorphisms of Artin groups has turned out to be quite difficult. The most famous results are those for right-angled Artin groups (\cite{droms1987isomorphisms}, \cite{servatius1989automorphisms}, \cite{laurence1992automorphisms}). The situation becomes even more complicated when introducing non-commuting relations. The only results on Artin groups that are not right-angled concern the class of “connected large-type triangle-free” Artin groups introduced by Crisp (\cite{crisp2005automorphisms}, \cite{an2022automorphism}). The main goal of the present paper is to compute the automorphism group and outer automorphism group of a larger family of Artin groups. This provides a first example of a family that does not have trivial asymptotic size for which this problem has been solved. Before giving our main results, we introduce some terminology. An Artin group $A_{\Gamma}$ is said to be \textbf{free-of-infinity} if $m_{ab} < \infty$ for every pair of standard generators $a, b \in V(\Gamma)$. The class of free-of-infinity Artin groups is particularly interesting: in \cite{godelle2012basic}, Godelle and Paris proved that many important conjectures about Artin groups can be solved in full generality provided they are solved for free-of-infinity Artin groups. Our main result is the following: \begin{thmA} Let $A_{\Gamma}$ be a large-type free-of-infinity Artin group. Then $Aut(A_{\Gamma})$ is generated by the following automorphisms: \noindent \textbf{1. Conjugations.} Also known as inner automorphisms, these are the automorphisms of the form $\varphi_g : h \mapsto g h g^{-1}$ for some $g \in A_{\Gamma}$. \noindent \textbf{2. Graph automorphisms.} Every label-preserving graph automorphism $\phi \in Aut(\Gamma)$ induces a permutation of the standard generators, hence an automorphism of the group. \noindent \textbf{3. The global involution.} This is the order $2$ automorphism $\iota$ that sends every standard generator to its inverse. In particular, $Out(A_{\Gamma})$ is finite and isomorphic to $Aut(\Gamma) \times (\quotient{\mathbf{Z}}{2 \mathbf{Z}})$. \end{thmA} Note that it would not be possible to extend the previous theorem to all large-type Artin groups, as these contain a fourth type of automorphisms called “Dehn twist automorphisms” (see \cite{crisp2005automorphisms}). Before explaining our strategy we recall a few notions related to Artin groups. If $A_{\Gamma}$ is an Artin group, every induced subgraph $\Gamma' \subseteq \Gamma$ generates a subgroup $\langle V(\Gamma') \rangle \subseteq A_{\Gamma}$. It is a well-known result that this subgroup is isomorphic to the Artin group $A_{\Gamma'}$ itself (\cite{van1983homotopy}). Such subgroups are called \textbf{standard parabolic subgroups}, and their conjugates are called \textbf{parabolic subgroups}. If $\Gamma'$ is such that the associated Coxeter group $W_{\Gamma'}$ is finite, then $A_{\Gamma'}$ is called \textbf{spherical}. In large-type Artin groups, a parabolic subgroup $g A_{\Gamma'} g^{-1}$ is spherical if and only if $\Gamma'$ is a single vertex, a single edge, or empty.
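For instance, if $e^{ab}$ is an edge of $\Gamma$ with $m_{ab} = 3$, then $A_{ab} = \langle a, b \ | \ aba = bab \rangle$ and the associated Coxeter group $W_{ab}$ is the dihedral group of order $2 m_{ab} = 6$, so the standard parabolic subgroup $A_{ab}$ is spherical. On the other hand, if $\Gamma'$ is a triangle all of whose labels are equal to $3$, then $W_{\Gamma'}$ is the $(3,3,3)$ triangle Coxeter group, which is infinite, so $A_{\Gamma'}$ is not spherical.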
In \cite{charney1995k}, Charney and Davis introduced a combinatorial complex $X_{\Gamma}$ known as the \textbf{Deligne complex}, which has since then proved multiple times to be an incredibly efficient geometric tool in the study of Artin groups. This complex is constructed from the combinatorics of the spherical parabolic subgroups (see Definition \ref{DefiDeligne}). For large-type Artin groups, its geometry is generally better understood. Let us come back to our strategy for proving Theorem A. The Deligne complex $X_{\Gamma}$ is a priori very much dependent on the choice of presentation graph $\Gamma$ for the associated Artin group. That said, if we find a way to reconstruct $X_{\Gamma}$ with purely algebraic objects, then any automorphism of the Artin group will preserve the structure of these algebraic objects, and hence preserve the Deligne complex itself. This approach allows us to build an action of the automorphism group $Aut(A_{\Gamma})$ on the Deligne complex, from which we can recover a full description of $Aut(A_{\Gamma})$. This kind of technique was originally used by Ivanov (\cite{ivanov2002mapping}) to study the automorphisms of mapping class groups and has since then been extended to other groups like Higman's group (\cite{martin2017cubical}) or graph products of groups (\cite{genevois2018automorphism}). In our case, when the Artin groups considered are large-type and free-of-infinity, we find a way to “reconstruct” the associated Deligne complexes in a purely algebraic manner. We obtain the following: \begin{thmB} Let $A_{\Gamma}$ be a large-type free-of-infinity Artin group. Then $X_{\Gamma}$ can be reconstructed in a way that is invariant under isomorphisms (in particular, independent of $\Gamma$). Consequently, there is a natural combinatorial action of $Aut(A_{\Gamma})$ on $X_{\Gamma}$, and this action can be described explicitly. \end{thmB} For the precise description of the action, see Theorem \ref{ThmIsomorphicD}. Once again, this theorem cannot be extended to all large-type Artin groups. Indeed, in this class, two Artin groups may be isomorphic while their presentation graphs are not (see \cite{brady2002rigidity}). In particular, their associated Deligne complexes are not isomorphic either. Consider a large-type Artin group $A_{\Gamma}$. A first step towards reconstructing the associated Deligne complex $X_{\Gamma}$ is to reconstruct what are called the “type $2$” vertices of the complex. These vertices are in one-to-one correspondence with the spherical parabolic subgroups of $A_{\Gamma}$ on $2$ generators, which have been proved in \cite{vaskou2023isomorphism} to be invariant under automorphisms. A second step, which is the main technical result of this paper, is to reconstruct the “type $1$” vertices of the Deligne complex. This is where the hypothesis of being free-of-infinity comes into play. The stabilisers of these type $1$ vertices are parabolic subgroups of type $1$ of $A_{\Gamma}$, which have been proved not to be preserved under automorphisms for general large-type Artin groups (\cite{vaskou2023isomorphism}, Theorem H). However, they are preserved for large-type free-of-infinity Artin groups. That said, this correspondence between the type $1$ vertices and the parabolic subgroups of type $1$ of $A_{\Gamma}$ is far from being a bijection, as infinitely many type $1$ vertices may lie on a common “standard tree” and hence have the same stabiliser.
On such a standard tree, it can generally be quite hard to give a purely algebraic condition that captures when two type $1$ vertices should be “adjacent”. Such a condition actually cannot exist for large-type Artin groups in general. In this paper we provide such a condition, under the hypothesis that the groups are also free-of-infinity. At this point, we will have reconstructed (most of) the $1$-skeleton of the Deligne complex in a purely algebraic way. With a little bit more work, we will then be able to reconstruct the whole complex, proving Theorem B. \textbf{Organisation of the paper:} Section 2 serves as a preliminary section where we introduce various algebraic and geometric notions, such as the definition of the Deligne complex. In Section 3, we focus on large-type free-of-infinity Artin groups, and we reconstruct their Deligne complexes purely algebraically, proving Theorem B. Finally, in Section 4, we use this algebraic description of the Deligne complex to prove Theorem A. \section{The Deligne complex.} In this section we introduce the Deligne complex and various related geometric objects. We start by defining this complex: \begin{defi} \label{DefiDeligne} Let $A_{\Gamma}$ be any Artin group. The \textbf{Deligne complex} $X_{\Gamma}$ associated with $A_{\Gamma}$ is the simplicial complex constructed as follows: \begin{itemize} \itemsep0em \item The vertices of $X_{\Gamma}$ are the left-cosets $g A_{\Gamma'}$, where $g \in A_{\Gamma}$ and $A_{\Gamma'}$ is any spherical standard parabolic subgroup of $A_{\Gamma}$. \item Every chain of inclusions of the form $g_0 A_{\Gamma_0} \subsetneq \cdots \subsetneq g_n A_{\Gamma_n}$ spans an $n$-simplex. \end{itemize} The group $A_{\Gamma}$ acts on $X_{\Gamma}$ by left multiplication. \end{defi} The geometry of the Deligne complex for large-type (and more generally $2$-dimensional) Artin groups is better understood than in the general case. This is mostly due to the following result: \begin{thm} \label{ThmXCAT(0)} \textbf{(\cite{charney1995k}, Proposition 4.4.5)} Let $A_{\Gamma}$ be a large-type Artin group. Then the Deligne complex $X_{\Gamma}$ is $2$-dimensional and admits a CAT(0) piecewise Euclidean metric. \end{thm} \begin{rem} \label{RemGammaBar} The fundamental domain of the action of $A_{\Gamma}$ on $X_{\Gamma}$ is the subcomplex $K_{\Gamma}$ whose vertices are the spherical standard parabolic subgroups of $A_{\Gamma}$. Since $\{1\}$ is contained in every spherical parabolic subgroup, the corresponding vertex is attached to every spherical standard parabolic subgroup $A_{\Gamma'}$. In particular, $K_{\Gamma}$ is a cone whose apex is $\{1\}$. It is not hard to see that the boundary of $K_{\Gamma}$ is graph-isomorphic to the barycentric subdivision $\Gamma_{bar}$ of $\Gamma$. Hence we will often write $\Gamma_{bar}$ to denote the boundary of $K_{\Gamma}$. \end{rem} \begin{figure} \caption{The Deligne complex $X_{\Gamma}$ and the fundamental domain $K_{\Gamma}$.} \label{FigDeligneComplex} \end{figure} In Section 3, the first step towards reconstructing the Deligne complex purely algebraically will be to reconstruct the subcomplex corresponding to the points with non-trivial stabiliser.
This subcomplex, which we define below, plays an important role throughout the paper: \begin{defi} The \textbf{essential 1-skeleton} of $X_{\Gamma}$ is the subcomplex of the $1$-skeleton $X_{\Gamma}^{(1)}$ defined by $$X_{\Gamma}^{(1)-ess} \coloneqq \bigcup\limits_{g \in A_{\Gamma}} g \Gamma_{bar}.$$ \end{defi} \begin{rem} \label{RemX1essCone} Since $K_{\Gamma}$ is the cone-off of $\Gamma_{bar}$, the Deligne complex $X_{\Gamma}$ can be obtained from $X_{\Gamma}^{(1)-ess}$ by coning-off the translates $g \Gamma_{bar}$, for all $g \in A_{\Gamma}$. \end{rem} Finally, we discuss fixed-point sets and the types of elements: \begin{defi} The \textbf{fixed set} of a subset $S \subseteq A_{\Gamma}$ for the action on $X_{\Gamma}$ is the set $$Fix(S) \coloneqq \{p \in X_{\Gamma} \ | \ \forall g \in S, \ g \cdot p = p \}.$$ \end{defi} \begin{defi} \label{DefiType} The \textbf{type} of a parabolic subgroup $g A_{\Gamma'} g^{-1}$ is the integer $|V(\Gamma')|$. The \textbf{type} of a simplex $\sigma \subseteq X_{\Gamma}$ is the type of its stabiliser $G_{\sigma}$. \end{defi} \begin{lemma} \label{LemmaClassificationByType} \textbf{(\cite{crisp2005automorphisms}, Lemma 8)} Let $A_{\Gamma}$ be a large-type Artin group, and let $A_{\Gamma'} \subseteq A_{\Gamma}$ be a standard parabolic subgroup. Then: \\ $\bullet$ $type(A_{\Gamma'}) \geq 3$ or $\Gamma' = e^{ab}$ for some $a, b \in V(\Gamma)$ with $m_{ab} = \infty$ $\Longleftrightarrow$ $Fix(A_{\Gamma'}) = \emptyset$. \\ $\bullet$ $\Gamma' = e^{ab}$ for some $a, b \in V(\Gamma)$ with $m_{ab} < \infty$ $\Longleftrightarrow$ $Fix(A_{\Gamma'})$ is the type $2$ vertex $A_{ab}$. \\ $\bullet$ $\Gamma' = \{a\}$ $\Longleftrightarrow$ $Fix(A_{\Gamma'})$ is a tree called the \textbf{standard tree} of $a$. \\ The same applies to all parabolic subgroups, as $Fix(g A_{\Gamma'} g^{-1}) = g Fix(A_{\Gamma'})$. \end{lemma} \begin{rem} When $\Gamma$ is connected, the standard tree $Fix(\langle a \rangle)$ contains infinitely many type $1$ vertices, including the vertex $\langle a \rangle$ itself. It also contains infinitely many type $2$ vertices (unless $a$ lies at the tip of an even-labelled leaf - see (\cite{vaskou2023isomorphism}, Lemma 2.17)). \end{rem} \section{Reconstructing the Deligne complex algebraically.} Let $A_{\Gamma}$ be a large-type free-of-infinity Artin group. That is, every pair of distinct standard generators $a, b \in V(\Gamma)$ has a coefficient $3 \leq m_{ab} < \infty$. Note that the free-of-infinity condition forces $\Gamma$ to be a complete graph. This section is dedicated to reconstructing the Deligne complex of $A_{\Gamma}$ in a purely algebraic way. This will allow us to build a suitable action of $Aut(A_{\Gamma})$ on $X_{\Gamma}$, proving Theorem B. \noindent \textbf{Strategy and notation:} Our strategy can be divided into four steps. At each step, the goal will be to introduce a set of algebraic objects that “corresponds” to a set of geometric objects of $X_{\Gamma}$. These various correspondences will be made explicit through maps that will be bijections, graph isomorphisms or combinatorial isomorphisms, depending on the context. We sum up the various notations that will be used in the following table: \begin{figure} \caption{Notations used throughout Section 3.} \end{figure} Recall that $A_{\Gamma}$ is any large-type free-of-infinity Artin group of rank at least $3$. Our first goal will be to construct algebraic equivalents of the sets $V_2$ and $V_1$ of vertices of type $2$ and type $1$ respectively.
Then, we will describe when the algebraic objects corresponding to the elements of $V_2$ and $V_1$ should be “adjacent”, allowing us to reconstruct the type $1$ edges (i.e. $X_{\Gamma}^{(1)-ess}$). We start with the following definition: \begin{defi} \label{DefiDV2} Let $D_{V_2}$ be the set of type $2$ spherical parabolic subgroups of $A_{\Gamma}$. \end{defi} \noindent Note that while parabolic subgroups are in general not purely algebraically defined, in (\cite{vaskou2023isomorphism}, Corollary 4.17) the author proved that type $2$ spherical parabolic subgroups of large-type Artin groups are in fact invariant under automorphisms. In particular, the set $D_{V_2}$ is also invariant under automorphisms. \begin{lemma} \label{LemmaType2VertAlg} The map $f_{V_2} : D_{V_2} \rightarrow V_2$ defined as follows is a bijection: \\(1) For every subgroup $H \in D_{V_2}$, $f_{V_2}(H)$ is the fixed set $Fix(H)$; \\(2) For every vertex $v \in V_2$, $f_{V_2}^{-1}(v)$ is the local group $G_v$. \end{lemma} \noindent \textbf{Proof:} This directly follows from Lemma \ref{LemmaClassificationByType}. \(\Box\) \noindent Reconstructing the type $1$ vertices of $X_{\Gamma}$ algebraically will be harder, because even showing that parabolic subgroups of type $1$ are invariant under automorphisms would not imply a result similar to Lemma \ref{LemmaType2VertAlg} for the type $1$ vertices of $X_{\Gamma}$. We start by introducing the following property: \begin{defi} \label{DefiEdgeProp} A couple of subgroups $(H_1, H_2) \in D_{V_2} \times D_{V_2}$ is said to have the \textbf{adjacency property} if there exists a subgroup $H_3 \in D_{V_2}$ such that we have \begin{align*} &(A1) \ H_i \cap H_j \neq \{1\}, \ \forall i, j \in \{1,2,3\}; \\ &(A2) \ \bigcap\limits_{i=1}^3 H_i = \{1\}. \end{align*} \end{defi} \noindent Definition \ref{DefiEdgeProp} really is geometric in essence, as highlighted in the next lemma. \begin{lemma} \label{LemmaEdgePropGeom} A couple $(H_1, H_2)$ has the adjacency property relatively to a third subgroup $H_3$ if and only if the following hold: \\(1) The three $H_i$'s are distinct subgroups. \\(2) The three intersections $(H_i \cap H_j)$'s are parabolic subgroups of type $1$, and they are distinct. Equivalently, the sets $Fix(H_i \cap H_j)$ are distinct standard trees. \\(3) The standard trees $Fix(H_i \cap H_j)$'s intersect each other two-by-two, but the triple-intersection is trivial. \end{lemma} \noindent \textbf{Proof:} $(\Rightarrow)$ Suppose that $(H_1, H_2)$ has the adjacency property relatively to a third subgroup $H_3$. Let $i, j, k \in \{1,2,3\}$ be distinct, and suppose that $H_i = H_j$. Then $$\{1\} \overset{(A2)} = H_i \cap H_j \cap H_k = H_i \cap H_k \overset{(A1)} \neq \{1\},$$ a contradiction. This proves $(1)$. In particular, any intersection $H_i \cap H_j$ is a proper non-trivial intersection of parabolic subgroups of type $2$ of $A_{\Gamma}$, hence is a parabolic subgroup of type $1$ of $A_{\Gamma}$, by (\cite{vaskou2023isomorphism}, Proposition 2.22.(1)). It follows that each $Fix(H_i \cap H_j)$ is a standard tree. Moreover, if two of these intersections were equal, say $H_i \cap H_j = H_i \cap H_k$, then this subgroup would be contained in $H_1 \cap H_2 \cap H_3 = \{1\}$, contradicting $(A1)$; hence the three intersections are distinct. This proves $(2)$. Finally, on one hand the three standard trees intersect each other two-by-two, as for instance the intersection of $Fix(H_i\cap H_j)$ and $Fix(H_i \cap H_k)$ is the vertex $Fix(H_i)$. On the other hand, the intersection of the three standard trees is the intersection of all the two-by-two intersections. It is trivial because the three vertices $Fix(H_i)$, $Fix(H_j)$ and $Fix(H_k)$ are distinct, as their corresponding subgroups are. This proves $(3)$.
$(\Leftarrow)$ Suppose that the three subgroups $H_1, H_2, H_3 \in D_{V_2}$ satisfy the properties $(1)$, $(2)$ and $(3)$ of the lemma. The fact that all the intersections $(H_i \cap H_j)$'s are parabolic subgroups of type $1$ directly implies $(A1)$. The subgroups $H_i \cap H_j$ and $H_i \cap H_k$ are parabolic subgroups of type $1$ of $A_{\Gamma}$, so their intersection is a parabolic subgroup of $A_{\Gamma}$ as well, by (\cite{vaskou2023isomorphism}, Proposition 2.22.(1)). By (\cite{vaskou2023isomorphism}, Proposition 2.22.(4)), this intersection cannot be a parabolic subgroup of type $1$ of $A_{\Gamma}$, because $H_i \cap H_j$ and $H_i \cap H_k$ are distinct. So it must be trivial. This implies $(A2)$. \(\Box\) \begin{prop} \label{PropAdjEqAdj} Consider two subgroups $H_1, H_2 \in D_{V_2}$. The following are equivalent: \\(1) The two type $2$ vertices $v_1, v_2$ of $X_{\Gamma}$ defined by $v_i \coloneqq f_{V_2}(H_i)$ are at combinatorial distance $2$ in $X_{\Gamma}^{(1)-ess}$. \\(2) The couple $(H_1, H_2)$ satisfies the adjacency property. \end{prop} \noindent Note that the minimal combinatorial distance one can have between two type $2$ vertices of $X_{\Gamma}^{(1)-ess}$ is $2$, so the previous proposition gives an algebraic description of when two type $2$ vertices of $X_{\Gamma}$ are “as close as possible”. In order to prove the proposition, we will need the following theorem: \begin{thm} \label{ThmGB} \textbf{(\cite{mccammond2002fans}, Theorem 4.6, Combinatorial Gauss-Bonnet)} Let $M$ be a $2$-dimensional subcomplex of $X_{\Gamma}$ obtained as the union of finitely many polygons. Let $M_0$ denote the set of type $2$ vertices that belong to $M$, and let $M_2$ denote the set of polygons whose union is exactly $M$. A corner of a vertex $v \in M_0$ is a polygon of $M$ in which $v$ is contained, and a corner of a polygon $f$ is a vertex at which two edges of $f$ meet. Let us also define \begin{align*} &\forall v \in int(M_0), \ curv(v) \coloneqq 2 \pi - \left( \sum\limits_{c \in Corners(v)} \angle_v (c) \right), \\ &\forall v \in \partial M_0, \ curv(v) \coloneqq \pi - \left( \sum\limits_{c \in Corners(v)} \angle_v (c) \right), \\ &\forall f \in M_2, \ curv(f) \coloneqq 2 \pi - \left( \sum\limits_{c \in Corners(f)} (\pi - \angle_c (f)) \right). \end{align*} Then we have $$\sum\limits_{f \in M_2} curv(f) + \sum\limits_{v \in M_0} curv(v) = 2 \pi.$$ \end{thm} \begin{lemma} \label{LemmaNeighOfType1} Let $x$ be a vertex of type $1$ of $X_{\Gamma}$, i.e. $x = g \langle a \rangle$ for some $g \in A_{\Gamma}$ and $a \in V(\Gamma)$. We recall that $\Gamma_{bar}$ can be seen as the boundary of the fundamental domain $K_{\Gamma}$, as explained in Remark \ref{RemGammaBar}. Then the star $St_{X_{\Gamma}^{(1)-ess}}(x)$ of $x$ in $X_{\Gamma}^{(1)-ess}$ is the $g$-translate of the star $St_{\Gamma_{bar}}(\langle a \rangle)$ of $\langle a \rangle$ in $\Gamma_{bar}$, and takes the form of an $n$-pod for some $n \geq 1$. It is contained in the standard tree $Fix(G_x)$, and in any translate of the fundamental domain that contains $x$. \end{lemma} \noindent \textbf{Proof:} First notice that $St_{X_{\Gamma}^{(1)-ess}}(x) = St_{X_{\Gamma}}(x) \cap X_{\Gamma}^{(1)-ess}$. By (\cite{bridson2013metric}, II.12.24), the structure of $St_{X_{\Gamma}}(x)$ can be described as the development of a (sub)complex of groups that only depends on the local groups around $x$. Intersecting with $X_{\Gamma}^{(1)-ess}$ means further restricting to the local groups around $x$ that contain $G_x$.
These local groups are the $g$-conjugates of the local groups around $\langle a \rangle$, so $St_{X_{\Gamma}^{(1)-ess}}(x)$ is the $g$-translate of $St_{\Gamma_{bar}}(\langle a \rangle)$, which is easily seen to be an $n$-pod, where $n$ is the number of edges attached to $\langle a \rangle$ in $\Gamma_{bar}$. The inclusion $St_{X_{\Gamma}^{(1)-ess}}(x) \subseteq Fix(G_x)$ comes from the fact that every local group in the star contains $G_x$. Moreover, $St_{X_{\Gamma}^{(1)-ess}}(\langle a \rangle) \subseteq K_{\Gamma}$ and thus $St_{X_{\Gamma}^{(1)-ess}}(x) \subseteq h K_{\Gamma}$ for every $h \in A_{\Gamma}$ for which $x \in h K_{\Gamma}$. \(\Box\) \noindent \textbf{Proof of Proposition \ref{PropAdjEqAdj}:} [(1) $\Rightarrow$ (2)]: The vertices $v_1$ and $v_2$ are at combinatorial distance $2$ from each other, so there is a type $1$ vertex $x_{12}$ that is adjacent to both $v_1$ and $v_2$. Let us first suppose that $x_{12}$ belongs to $K_{\Gamma}$. By Lemma \ref{LemmaNeighOfType1}, $K_{\Gamma}$ contains the star $St_{X_{\Gamma}^{(1)-ess}}(x_{12})$, and this star is the simplicial neighbourhood of $x_{12}$ in $\Gamma_{bar}$. In particular then, $v_1$ and $v_2$ are distinct vertices of $\Gamma_{bar}$ that are adjacent to $x_{12}$. Because $\Gamma$ is complete, the path joining $v_1$, $x_{12}$ and $v_2$ can be completed into a cycle $\gamma \coloneqq (v_1, x_{12}, v_2, x_{23}, v_3, x_{31})$ of length $6$ in $\Gamma_{bar}$, where the $v_i$'s are type $2$ vertices and the $x_{ij}$'s are type $1$ vertices. Let now $H_3 \coloneqq f_{V_2}^{-1}(v_3)$. All that is left to do is to check that the couple $(H_1,H_2)$ satisfies the adjacency property, with respect to the third group $H_3$. This directly follows from Lemma \ref{LemmaEdgePropGeom}: the $H_i$'s are distinct subgroups, the sets $Fix(H_i \cap H_j)$'s are distinct standard trees as they contain the type $1$ vertex $x_{ij}$ and no other type $1$ vertex of $\gamma$, and the trees $Fix(H_i \cap H_j)$'s intersect two-by-two along distinct type $2$ vertices, hence the triple intersection is trivial. \noindent If $x_{12}$ does not belong to $K_{\Gamma}$, then $x_{12} = g \cdot \bar{x}_{12}$, where $\bar{x}_{12}$ is a type $1$ vertex of $K_{\Gamma}$. Proceeding as before on $\bar{x}_{12}$ yields groups $H_i$ for $i \in \{1,2,3\}$. Then one can run an analogous argument for $x_{12}$, using the groups $g H_i g^{-1}$ instead, for $i \in \{1,2,3\}$. \noindent [(2) $\Rightarrow$ (1)]: Let $(H_1, H_2)$ have the adjacency property relatively to a third subgroup $H_3$, and let $v_i \coloneqq f_{V_2}(H_i)$ for $i \in \{1, 2, 3\}$. We suppose that the following Claim holds: \noindent \underline{Claim:} Let $v_1$, $v_2$ and $v_3$ be three distinct type $2$ vertices of $X_{\Gamma}$, and suppose that the three geodesics connecting the vertices are contained in distinct standard trees that intersect two-by-two but whose triple intersection is empty. Then the triangle formed by these three geodesics is contained in a single fundamental domain $g K_{\Gamma}$. In particular, the vertices are at combinatorial distance $2$ from each other. \noindent The claim clearly gives us the desired result, but we still need to show that the hypotheses of the claim are satisfied. This is a direct consequence of Lemma \ref{LemmaEdgePropGeom}: the three $v_i$'s are distinct, and the three geodesics of the form $\gamma_{ij}$ connecting $v_i$ and $v_j$ are contained in the standard trees $Fix(H_i \cap H_j)$.
The three $\gamma_{ij}$'s intersect two-by-two, but the triple intersection is empty, by Lemma \ref{LemmaEdgePropGeom} again. We now check that the claim holds: \noindent \underline{Proof of the Claim:} Let $T$ be the geodesic triangle connecting $v_1$, $v_2$ and $v_3$ and let $M \coloneqq T \cup int(T)$. We want to prove that $M$ is contained in a single fundamental domain $g K_{\Gamma}$. To do so we suppose that this is not the case, and we will exhibit a contradiction. We want to apply the Gauss-Bonnet formula on $M$. By construction, $M$ is a combinatorial subcomplex of $X_{\Gamma}$ whose simplices are triangles corresponding to inclusions of the form $\{g\} \subsetneq g \langle a \rangle \subsetneq g A_{ab}$. To make the use of the Gauss-Bonnet formula easier, we decide to see $M$ with a coarser combinatorial structure: the one obtained by removing every edge of type $0$ and every vertex of type $0$ in $M$. Note that the boundary of $M$ is a union of edges of type $1$ of $X_{\Gamma}$, so $M$ is still a subcomplex of $X_{\Gamma}$ with this new combinatorial structure. It is a union of polygons of $X_{\Gamma}$ whose boundaries are contained in $X_{\Gamma}^{(1)-ess}$. By Theorem \ref{ThmGB}, we have $$\sum\limits_{\text{faces } f \text{ in } M} curv(f) \ \ + \sum\limits_{\text{type } 2 \text{ vertices in } M} curv(v) = 2 \pi. \ \ \ (*)$$ We rewrite this in a manner that is easier to deal with. Let $M_2^i$ be the set of polygons in $M$ that don't contain any element of $\{v_1, v_2, v_3 \}$, $M_2^c$ be the set of polygons in $M$ that contain at least one of $v_1$, $v_2$ or $v_3$, $M_0^i$ be the set of type $2$ vertices in $int(M)$, $M_0^b$ be the set of type $2$ vertices of $\partial M \backslash \{v_1,v_2,v_3\}$, and $M_0^c$ be the set $\{v_1,v_2,v_3\}$ of corners of $M$. Then: \\ $\bullet$ Let $C_2^i \coloneqq \sum\limits_{f \in M_2^i} curv(f)$. Consider a polygon $f \in M_2^i$, and let $m_c$ be the coefficient of the local group of a corner $c$ of $f$. Then $$curv(f) = 2 \pi - \left( \sum\limits_{c \in Corners(f)} (\pi - \frac{\pi}{m_c}) \right).$$ Note that $m_c \geq 3$ for all $c \in Corners(f)$, so $\pi - \frac{\pi}{m_c} \geq \frac{2 \pi}{3}$. Moreover, $f$ has at least $3$ corners, so we obtain $$curv(f) \leq 2 \pi - 3 \cdot (\frac{2 \pi}{3}) = 0.$$ It follows that $C_2^i \leq 0$ as well. Note that as soon as one polygon has at least $4$ edges, or as soon as the coefficient of one of the local groups is at least $4$, we have $curv(f) < 0$ and thus $C_2^i < 0$. \\ $\bullet$ Let $C_0^i \coloneqq \sum\limits_{v \in M_0^i} curv(v)$. Because $X_{\Gamma}$ is CAT(0), the systole of the link of any vertex $v$ in $X_{\Gamma}$ is at least $2 \pi$. In particular, if $v \in M_0^i$, the systole of the link of $v$ in $M$ is at least $2 \pi$. It follows that the sum of the angles around $v$ in $M$ is at least $2 \pi$. In particular, $curv(v) \leq 0$ and thus $C_0^i \leq 0$. \\ $\bullet$ Let $C_0^b \coloneqq \sum\limits_{v \in M_0^b} curv(v)$. Any $v \in M_0^b$ belongs to a side of $T$ that is a geodesic, so the angle $\angle_v M$ must satisfy $\angle_v M \geq \pi$. It follows that $curv(v) = \pi - \angle_v M \leq 0$, and thus $C_0^b \leq 0$ as well. \\ $\bullet$ Let $C_0^c \coloneqq \sum\limits_{v_i \in M_0^c} curv(v_i)$ and let $C_2^c = \sum\limits_{f \in M_2^c} curv(f)$. Any corner $v_i$ of $T = \partial M$ belongs to $\lambda_i \geq 1$ polygons of $M$.
By construction of the Deligne complex, the angle $\angle_{v_i} M$ is precisely $\lambda_i \cdot \frac{\pi}{m_i}$, where $m_i \geq 3$ is the coefficient of $H_i$. Each of the $\lambda_i$ polygons $f$ of $M$ containing $v_i$ is such that \begin{align*} curv(f) =\ &2 \pi - (\pi - \angle_{v_i} (f)) - \left( \sum\limits_{c \in Corners(f) \backslash \{v_i\} } (\pi - \frac{\pi}{m_c}) \right) \\ \overset{(**)} \leq &2 \pi - (\pi - \frac{\pi}{m_i}) - 2 \cdot (\pi - \frac{\pi}{3}) \\ \leq \ &\frac{\pi}{m_i} - \frac{\pi}{3}. \end{align*} The inequality $(**)$ comes from the fact that $f$ has at least $2$ other corners than $v_i$, and that the angle at any corner of $f$ is at most $\pi/3$, because every local group has coefficient at least $3$. Note that if $f$ has at least $4$ edges then we obtain a strict inequality $curv(f) < \frac{\pi}{m_i} - \frac{\pi}{3}$. Summing everything, we obtain \begin{align*} C_0^c + C_2^c = &\sum\limits_{v_i \in M_0^c} curv(v_i) + \sum\limits_{f \in M_2^c} curv(f) \\ \overset{(***)} \leq &\sum\limits_{ i \in \{1, 2, 3 \} } (\pi - \lambda_i \cdot \frac{\pi}{m_i}) + \sum\limits_{ i \in \{1, 2, 3 \} } \lambda_i \cdot (\frac{\pi}{m_i} - \frac{\pi}{3}) \\ = \ &3 \pi - \sum\limits_{ i \in \{1, 2, 3 \} } \lambda_i \cdot \frac{\pi}{3} \leq 2 \pi. \end{align*} Note that it is easy to check that the inequality $(***)$ holds no matter if the polygons containing the $v_i$'s are distinct or if there are polygons of $M$ that contain several of the $v_i$'s. We now notice two things. The first is that as soon as one of the $v_i$'s is contained inside two distinct polygons of $M$, then $\lambda_i \geq 2$ and $C_0^c + C_2^c < 2 \pi$. The second is that if a polygon containing one of the $v_i$'s has at least $4$ edges, then $curv(f) < \pi/m_i - \pi/3$ and thus $C_0^c +C_2^c < 2 \pi$ as well. \noindent With this setting, the equation $(*)$ becomes: $$C_2^i+C_0^i+C_0^b+(C_0^c+C_2^c) = 2\pi.$$ Note that this equation can hold only if the four terms on the left-hand side are maximal, i.e.: \\ $\bullet$ $C_2^i = 0$. In particular, every polygon in $M_2^i$ is a triangle, whose corners have local groups with coefficient exactly $3$. \\ $\bullet$ $C_0^i =0$. In particular, the sum of the angles around any vertex of $M_0^i$ is exactly $2 \pi$. \\ $\bullet$ $C_0^b = 0$, i.e. the angles along the sides of $T$ are exactly $\pi$. \\ $\bullet$ $C_0^c+C_2^c = 2 \pi$. In particular, each of the $v_i$'s is contained in a single polygon of $M$, which is always a triangle. By hypothesis $M$ does not contain a single polygon, and it is not hard to see that in that case there must be polygons in $M$ that do not contain any of the $v_i$'s (in other words, $M_2^i$ is non-trivial). The first of the above four points implies that every polygon in $M_2^i$ is a flat equilateral triangle. Since the angles along the sides of $T$ are exactly $\pi$ and since the sum of the angles around any vertex of $M_0^i$ is $2 \pi$, the whole subcomplex $M_2^i$ is actually flat. Let us now consider a triangle $f \in M_2^c$, and let $f'$ be the (unique) polygon in $M_2^i$ that is adjacent to $f$ in the sense that $f$ and $f'$ share an edge (see Figure \ref{FigNew}). Note that $f'$ is a flat triangle, whose corners have local groups with coefficient $3$. We can now easily determine the coefficients of the local groups of the corners of $f$. First, assign a different colour to each standard generator $a \in V(\Gamma)$, and push this colour onto all the edges of the form $g \langle a \rangle$ for $g \in A_{\Gamma}$. 
When two edges with different colours meet at a type $2$ vertex, one can easily determine the colour of all the edges containing that vertex (they are an alternating sequence of the colours associated with the two original edges; see Figure \ref{FigNew}). In particular, the coefficients of the local groups around $f$ must also be $3$, which forces $f$ to be an equilateral triangle as well. By applying the same argument to the other polygons of $M_2^c$, this shows that the whole of $M$ is actually flat, i.e. isometrically embedded into a flat plane. \begin{figure} \caption{Showing that triangles of $M_2^c$ are also equilateral and Euclidean. The simplices that belong to $M_2^i$ are highlighted in grey. They are already known to be equilateral and Euclidean. The edges of $f'$ are drawn with colours corresponding to the edges in $K_{\Gamma}$.} \label{FigNew} \end{figure} \noindent Our next argument relies on the following notion: \noindent \textbf{Definition. (\cite{vaskou2023isomorphism}, Definition 3.18)} Let $g \cdot f$ and $h \cdot f$ be two adjacent equilateral triangles of $M$. Then there exists a standard generator $a \in V(\Gamma)$ and some integer $k \neq 0$ such that $g^{-1} h = a^k$. From that we define a \textbf{system of arrows} on the triangles of $M$: \\(1) Draw a single arrow from $g \cdot f$ to $h \cdot f$ if $g^{-1} h = a$; \\(2) Draw a double arrow between $g \cdot f$ and $h \cdot f$ if $g^{-1} h = a^k$ with $|k| \geq 2$. We now put a system of arrows on $M$. Consider a side $\gamma$ of $M$. By hypothesis, $\gamma$ is contained in a standard tree $Fix(g \langle a \rangle g^{-1})$ for some $a \in V(\Gamma)$ and some $g \in A_{\Gamma}$, and $g \langle a \rangle g^{-1}$ acts transitively on the set of strips around $\gamma$. Thus we can assume that we have double arrows on $\gamma$, as drawn in Figure \ref{FigM}. As shown in (\cite{vaskou2023isomorphism}, Lemma 3.20), systems of arrows are rather rigid, and must (up to symmetries or rotations) take one of the following forms: \begin{figure}\label{FigurePolygon333} \end{figure} \noindent In our case, all the arrows along $\gamma$ must be simple arrows. We now proceed to determine all the arrows in $M$: \\ \underline{Step 1:} Put double arrows on the sides of $M$. \\ \underline{Step 2:} The arrow between the two topmost triangles of $M$ must be simple by ($\star$). We suppose without loss of generality that it is pointing down. \\ \underline{Step 3:} Use ($\star$) to complete the hexagons around this first arrow. We obtain two new arrows in $M$. \\ \underline{Step 4:} Use ($\star$) on these two arrows and complete a hexagon of $M$. \\ \underline{Step 5:} Proceed by induction using ($\star$) to determine every arrow in $M$. \begin{figure} \caption{Putting a system of arrows on $M$.} \label{FigM} \end{figure} Finally, we can see that the system of arrows of any of the hexagons along the bottommost side of $M$ contains two simple arrows pointing away from each other and pointing towards double arrows (see Figure \ref{FigM}). This gives a contradiction to ($\star$). It follows that $M$ consists of a single triangle. In particular, the vertices $v_1$, $v_2$ and $v_3$ are at combinatorial distance $2$ from each other. \(\Box\) \noindent We are now able to define explicitly the algebraic analogue of the type $1$ vertices of $X_{\Gamma}$: \begin{defi} \label{DefiDV1} Let us consider the poset $\mathcal{P}_f(D_{V_2})$ of finite sets of distinct elements of $D_{V_2}$, ordered by inclusion.
We now define $D_{V_1}$ to be the subset of $\mathcal{P}_f(D_{V_2})$ of sets $\{H_1, \cdots, H_k\}$ satisfying the following: \noindent \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (P1) \ Any subset $\{ H_i, H_j \} \subseteq \{H_1, \cdots, H_k\}$ is such that $(H_i, H_j)$ \noindent \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ satisfies the adjacency property; \noindent \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (P2) \ $\bigcap\limits_{i=1}^k H_i \neq \{1\}$; \noindent \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (P3) \ $\{H_1, \cdots, H_k\}$ is maximal in $\mathcal{P}_f(D_{V_2})$ with these properties. \end{defi} \noindent As is was the case for the adjacency property, there is also a geometric meaning behind Definition \ref{DefiDV1}. While we managed to reconstruct the type $2$ vertices of $X_{\Gamma}$ directly from the spherical parabolic subgroups of type $2$ of $A_{\Gamma}$, we reconstruct a type $1$ vertex $x$ of $X_{\Gamma}$ from the sets of type $2$ vertices of $X_{\Gamma}$ that are adjacent to $x$. This is made more precise thereafter: \begin{prop} \label{PropType1VertAlg1} The map $f_{V_1} : D_{V_1} \rightarrow V_1$ defined by the following is well-defined and is a bijection: \\(1) For every element $\{H_1, \cdots, H_k \} \in D_{V_1}$, $f_{V_1}(\{H_1, \cdots, H_k \})$ is the unique vertex $x \in V_1$ that is adjacent to $v_i \coloneqq f_{V_2}(H_i)$ for every $H_i \in \{H_1, \cdots, H_k \}$. \\(2) For every vertex $x \in V_1$, $f_{V_1}^{-1}(x)$ is the set $\{H_1, \cdots, H_k \} \in D_{V_1}$ of all the subgroups for which $v_i \coloneqq f_{V_2}(H_i)$ is adjacent to $x$. \end{prop} \noindent \textbf{Proof:} We first show that the two maps are well-defined. Then, checking that the composition of the two maps gives the identity is straightforward. \noindent \underline{$f_{V_1}$ is well-defined:} Let $\{H_1, \cdots, H_k \} \in D_{V_1}$. The intersection $H_1 \cap \cdots \cap H_k$ is an intersection of parabolic subgroups of type $2$ of $A_{\Gamma}$, hence is also a parabolic subgroup, by (\cite{vaskou2023isomorphism}, Proposition 2.22.(1)). It is proper in any $H_i$ and non-trivial by definition, so it is a parabolic subgroup of type $1$ of $A_{\Gamma}$. The corresponding fixed set $T \coloneqq Fix(H_1 \cap \cdots \cap H_k)$ is a standard tree on which all the vertices $v_i \coloneqq f_{V_2}(H_i)$ lie. The convex hull $C$ of all the $v_i$'s in $T$ is a subtree of $T$. By hypothesis, any couple $(H_i, H_j)$ satisfies the adjacency property. Using Proposition \ref{PropAdjEqAdj}, this means the combinatorial distance between any two of the vertices defining the boundary of $C$ is $2$, so $C$ has combinatorial diameter $2$. As a tree with diameter $2$, $C$ contains exactly one vertex that is not a leaf of $C$, and this vertex must have type $1$. \noindent \underline{$f_{V_1}^{-1}$ is well-defined:} Let now $x \in V_1$, let $\{v_1, \cdots, v_k \}$ be the set of all the type $2$ vertices that are adjacent to $x$, and set $H_i \coloneqq f_{V_2}^{-1}(v_i)$. We want to check that $\{H_1, \cdots, H_k \} \in D_{V_1}$, i.e. that the properties (P1), (P2) and (P3) of Definition \ref{DefiDV1} are satisfied. First of all, we know that the combinatorial neighbourhood of $x$ is an $n$-pod that belongs to $Fix(G_x)$, by Lemma \ref{LemmaNeighOfType1}. In particular, all the $v_i$'s lie on the standard tree $Fix(G_x)$, which means that $G_x$ is contained in every $H_i$. This proves (P2). 
Proving (P1) is straightforward if we use Proposition \ref{PropAdjEqAdj}: the $v_i$'s are distinct but they are all connected to a common vertex $x$, so the combinatorial distance between two distinct $v_i$'s is exactly $2$. At last, if $\{H_1, \cdots, H_k \}$ was not maximal, there would be some $H_{k+1}$ such that $\{H_1, \cdots, H_{k+1}\}$ satisfies (P1) and (P2) of Definition \ref{DefiDV1}. The vertex $v_{k+1} \coloneqq f_{V_2}(H_{k+1})$ lies on $Fix(G_x)$ (use (P2)) and is at distance $2$ from all the other $v_i$'s (use (P1)), but is not adjacent to $x$ by hypothesis. This means one can connect $v_1$ and $v_2$ through $Fix(G_x)$ but without going through the star of $x$ in $Fix(G_x)$. This contradicts $Fix(G_x)$ being a tree. Therefore $\{H_1, \cdots, H_k \}$ is maximal, proving (P3). \(\Box\) \begin{rem} \label{RemAdjacent12} Let $H \in D_{V_2}$, let $\{H_1, \cdots, H_k \} \in D_{V_1}$, let $v \coloneqq f_{V_2}(H)$, and let $x \coloneqq f_{V_1}(\{H_1, \cdots, H_k \})$. Then one can easily deduce from the proof of Proposition \ref{PropType1VertAlg1} that $v$ and $x$ are adjacent if and only if $H \in \{H_1, \cdots, H_k \}$. \end{rem} \noindent We have reconstructed the algebraic analogue of the type $2$ vertices and the type $1$ vertices of $X_{\Gamma}$ (Lemma \ref{LemmaType2VertAlg} and Proposition \ref{PropType1VertAlg1}). To reconstruct the whole of $X_{\Gamma}^{(1)-ess}$, we only have left to describe when an element of $D_{V_2}$ and an element of $D_{V_1}$ should be adjacent. This directly follows from Remark \ref{RemAdjacent12}: \begin{defi} \label{DefiD1} We define the graph $D_1$ by the following: \\(1) The vertex set of $D_1$ is the set $D_{V_2} \sqcup D_{V_1}$; \\(2) We draw an edge between $H \in D_{V_2}$ and $\{H_1, \cdots, H_k \} \in D_{V_1}$ if and only if $H \in \{H_1, \cdots, H_k \}$. \end{defi} \begin{prop} \label{PropType1VertAlg} The bijections $f_{V_2}$ and $f_{V_1}$ can be extended into a graph isomorphism $F_1 : D_1 \rightarrow X_{\Gamma}^{(1)-ess}$. \end{prop} \noindent \textbf{Proof:} Let $f_{V_2} \sqcup f_{V_1} : D_{V_2} \sqcup D_{V_1} \rightarrow V_2 \sqcup V_1$. Then $f_{V_2} \sqcup f_{V_1}$ is a bijection by Lemma \ref{LemmaType2VertAlg} and Proposition \ref{PropType1VertAlg1}. We only need to show that two elements of $D_{V_2} \sqcup D_{V_1}$ are adjacent if and only if their images through $f_{V_2} \sqcup f_{V_1}$ are adjacent. Notice that \begin{align*} & H \in D_{V_2} \text{ and } \{H_1, \cdots, H_k \} \in D_{V_1} \text{ are adjacent in } D_1 \\ \overset{(\ref{DefiD1})} \Longleftrightarrow & H \in \{H_1, \cdots, H_k \} \\ \overset{(\ref{RemAdjacent12})} \Longleftrightarrow & f_{V_2}(H) \text{ and } f_{V_1}(\{H_1, \cdots, H_k \}) \text{ are adjacent in } X_{\Gamma}^{(1)-ess}. \end{align*} \(\Box\) We have just reconstructed $X_{\Gamma}^{(1)-ess}$ purely algebraically. Our next goal is to reconstruct the whole of $X_{\Gamma}$. We start with the following definition: \begin{defi} \label{DefiDV0} A subgraph $G$ of $D_1$ or of $X_{\Gamma}^{(1)-ess}$ is called \textbf{characteristic} if: \noindent \phantom{mmm} (C1) $G$ is isomorphic to the barycentric subdivision of a complete graph \noindent \phantom{mmmmmm} on at least $3$ vertices; \noindent \phantom{mmm} (C2) $G$ is maximal with that property. \noindent We call $\mathcal{CS}$ the set of characteristic subgraphs of $D_1$. \end{defi} \begin{lemma} \label{LemmaGSingleFD} The set of characteristic subgraphs of $X_{\Gamma}^{(1)-ess}$ is precisely $\{ g \Gamma_{bar} \ | \ g \in A_{\Gamma} \}$. 
In particular, $\mathcal{CS} = \{ F_1^{-1}(g \Gamma_{bar}) \ | \ g \in A_{\Gamma} \}$. \end{lemma} \noindent \textbf{Proof:} We focus on proving the first statement, as the second then directly follows from Proposition \ref{PropType1VertAlg}. We first prove the following two claims: \noindent \underline{Claim 1:} Any (non-backtracking) cycle $\gamma \subseteq X_{\Gamma}^{(1)-ess}$ of length $6$ is contained in a single $g$-translate of the fundamental domain $K_{\Gamma}$. \noindent \underline{Proof of Claim 1:} Recall that $X_{\Gamma}^{(1)-ess}$ is a bipartite graph with partition sets $V_2$ and $V_1$. Consequently $\gamma = (x_1, v_{12}, x_2, v_{23}, x_3, v_{31})$, where the $x_i$'s are type $1$ vertices and the $v_{ij}$'s are type $2$ vertices of $X_{\Gamma}$. Consider now the three subgeodesics $c_1 \coloneqq (v_{31}, x_1, v_{12})$, $c_2 \coloneqq (v_{12}, x_2, v_{23})$ and $c_3 \coloneqq (v_{23}, x_3, v_{31})$, whose union is $\gamma$. Each geodesic $c_i$ is contained in the star $St_{X_{\Gamma}^{(1)-ess}}(x_i)$, which we know by Lemma \ref{LemmaNeighOfType1} is itself included in the standard tree $Fix(G_{x_i})$. Also note that the three corresponding standard trees are distinct: otherwise, the fact that $\gamma$ is a cycle of length $6$ would contradict either the convexity of the standard trees, or the fact that they are uniquely geodesic. The three geodesics intersect two-by-two, but their triple intersection is empty. We can now use the claim in the proof of Proposition \ref{PropAdjEqAdj}, and recover that $\gamma$ must be contained in a single translate $g K_{\Gamma}$. This finishes the proof of Claim 1. \noindent \underline{Claim 2:} For every subgraph $G$ of $X_{\Gamma}^{(1)-ess}$ that satisfies (C1) there exists an element $g \in A_{\Gamma}$ such that $G \subseteq g \Gamma_{bar}$. \noindent \underline{Proof of Claim 2:} $G$ is the barycentric subdivision of a complete graph $\widetilde{G}$ with at least $3$ vertices. In particular, $G$ contains at least one $6$-cycle, call it $\gamma_0$. This cycle corresponds to a $3$-cycle $\widetilde{\gamma_0}$ in $\widetilde{G}$. Because $\widetilde{G}$ is complete, every edge of $\widetilde{G}$ can be “reached” from $\widetilde{\gamma_0}$ by a string of $3$-cycles that consecutively intersect along an edge of $\widetilde{G}$. This means that for every edge $e$ of $G$, there exists a string of $6$-cycles $\gamma_0, \cdots, \gamma_n$ such that $e$ is contained in $\gamma_n$ and such that $\gamma_i, \gamma_{i+1}$ share exactly two edges. Using Claim 1, we know that each $\gamma_i$ is contained in a single translate $g_i K_{\Gamma}$. We want to show that all the $g_i$'s are the same element. To do so, we show that for every $0 \leq i < n$ we have $g_i = g_{i+1}$. Let $M_i \coloneqq \gamma_i \cup int(\gamma_i)$. We know by Claim 1 that $M_i \subseteq g_i K_{\Gamma}$ for some $g_i \in A_{\Gamma}$. The two cycles $\gamma_i$ and $\gamma_{i+1}$ share two edges, whose union corresponds to a single edge of $\Gamma$. This means $M_i$ and $M_{i+1}$ share two edges of $X_{\Gamma}^{(1)-ess}$ that are attached to a common type $2$ vertex (see Figure \ref{FigCommonFD}). The convex hull of these two edges belongs to a single translate $g K_{\Gamma}$, yet belongs to both $g_i K_{\Gamma}$ and $g_{i+1} K_{\Gamma}$. This forces $g_i = g_{i+1}$. In particular, all the $g_i$'s are equal to a common element $g \in A_{\Gamma}$, and $e$ belongs to $g K_{\Gamma}$. As this works for every edge $e$ of $G$, we obtain $G \subseteq g K_{\Gamma}$.
\begin{figure} \caption{The combinatorial subcomplexes $M_i$ (on the left) and $M_{i+1}$.} \label{FigCommonFD} \end{figure} \noindent Finally, $G$ is contained in the intersection $X_{\Gamma}^{(1)-ess} \cap g K_{\Gamma} = g \Gamma_{bar}$. This finishes the proof of Claim 2. We can now prove the main statement of the lemma: ($\supseteq$) Consider a subgraph of $X_{\Gamma}^{(1)-ess}$ of the form $g \Gamma_{bar}$ for some $g \in A_{\Gamma}$. Because $\Gamma$ is a complete graph on at least $3$ vertices, $g \Gamma_{bar}$ satisfies (C1). If $g \Gamma_{bar}$ did not satisfy (C2), there would be a graph $G$ that satisfies (C1) and strictly contains $g \Gamma_{bar}$. Using Claim 2, we obtain an element $g' \in A_{\Gamma}$ such that $$g \Gamma_{bar} \subsetneq G \subseteq g' \Gamma_{bar}.$$ By comparing the convex hulls as before, this forces $g = g'$, a contradiction. ($\subseteq$) Let $G$ be a characteristic subgraph of $X_{\Gamma}^{(1)-ess}$. By Claim 2, there is an element $g \in A_{\Gamma}$ such that $G \subseteq g \Gamma_{bar}$. Using (C2) and the fact that $g \Gamma_{bar}$ is a characteristic subgraph shows this inclusion is an equality. \(\Box\) \begin{defi} \label{DefiAlgebraic} Let $D_{\Gamma}$ be the $2$-dimensional combinatorial complex defined by starting with $D_1$, and then coning-off every characteristic graph of $D_1$. The complex $D_{\Gamma}$ is called the \textbf{algebraic Deligne complex} associated with $A_{\Gamma}$. \end{defi} \begin{prop} \label{PropDIsomDeligne} The graph isomorphism $F_1$ from Proposition \ref{PropType1VertAlg} can be extended to a combinatorial isomorphism $F : D_{\Gamma} \rightarrow X_{\Gamma}$. \end{prop} \noindent \textbf{Proof:} We already know that the map $F_1$ of Proposition \ref{PropType1VertAlg} gives a graph isomorphism between $D_1$ and $X_{\Gamma}^{(1)-ess}$. The result now follows from the fact that $D_{\Gamma}$ and $X_{\Gamma}$ can respectively be obtained from $D_1$ and $X_{\Gamma}^{(1)-ess}$ by coning-off their characteristic subgraphs: \\ $\bullet$ for $D_{\Gamma}$, this is the definition of the complex; \\ $\bullet$ for $X_{\Gamma}$, this follows from Lemma \ref{LemmaGSingleFD} and Remark \ref{RemX1essCone}. \(\Box\) \noindent \textbf{Notation:} Following Proposition \ref{PropDIsomDeligne}, and to make the notation lighter, we will from now on slightly abuse the notation and identify $X_{\Gamma}$ with $D_{\Gamma}$, without explicit mention of the combinatorial isomorphism $F$. Theorem B as formulated in the introduction follows from the next theorem and its corollary, along with Proposition \ref{PropDIsomDeligne}. \begin{thm} \label{ThmIsomorphicD} \textbf{(Theorem B)} Let $A_{\Gamma}$ and $A_{\Gamma'}$ be two large-type free-of-infinity Artin groups of rank at least $3$, with respective (algebraic) Deligne complexes $D_{\Gamma}$ and $D_{\Gamma'}$. Then any isomorphism $\varphi: A_{\Gamma} \rightarrow A_{\Gamma'}$ induces a natural combinatorial isomorphism $\varphi_* : D_{\Gamma} \rightarrow D_{\Gamma'}$, that can be described explicitly as follows: \\ $\bullet$ For an element $H \in D_{V_2}^{\Gamma}$, $\varphi_*(H)$ is the subgroup $\varphi(H)$. \\ $\bullet$ For a set $\{H_1, \cdots, H_k \} \in D_{V_1}^{\Gamma}$, $\varphi_*(\{H_1, \cdots, H_k \})$ is the set $\{ \varphi(H_1), \cdots, \varphi(H_k) \}$. \\ $\bullet$ For an edge $e$ of $D_1^{\Gamma}$ connecting $H$ to $\{H_1, \cdots, H_k \}$, $\varphi_*(e)$ is the edge of $D_1^{\Gamma'}$ connecting $\varphi_*(H)$ to $\varphi_*(\{H_1, \cdots, H_k \})$.
\\ $\bullet$ For a simplex $f$ of $D_1^{\Gamma}$ connecting $H$, $\{H_1, \cdots, H_k \}$ and a vertex of type $0$ corresponding to the apex of a cone over a characteristic graph $G$, $\varphi_*(f)$ is the simplex of $D_1^{\Gamma'}$ connecting $\varphi_*(H)$, $\varphi_*(\{H_1, \cdots, H_k \})$, and the vertex of type $0$ corresponding to the apex of the cone over the characteristic graph $\varphi_*(G)$. \end{thm} \noindent \textbf{Proof:} The fact that $\varphi_*$ is a combinatorial isomorphism directly follows from the definition of $D_{\Gamma}$, that was constructed using algebraic notions such as inclusions, intersections and maximality, that are all preserved under isomorphisms. The explicit description of $\varphi_*$ is clear from the way we constructed the algebraic Deligne complexes. \(\Box\) A direct consequence of the previous theorem is the following corollary: \begin{coro} \label{CoroCombiAction} There is a natural combinatorial action of $Aut(A_{\Gamma})$ onto $D_{\Gamma}$ (and $X_{\Gamma}$), and this action is explicitly described by the map $\varphi_*$ from Theorem \ref{ThmIsomorphicD}. \end{coro} \begin{rem} \label{RemAction} (1) The action of an automorphism $\varphi \in Aut(A_{\Gamma})$ on $D_{\Gamma}$ is entirely determined by its action on the set of type $2$ vertices of the complex. This is because every simplex of $D_{\Gamma}$, whether it is a type $1$ vertex, an edge, or a $2$-dimensional simplex, is defined algebraically from the set of type $2$ vertices of the complex. \\(2) An almost immediate consequence of Theorem \ref{ThmIsomorphicD} is that the class of large-type free-of-infinity Artin groups is rigid. We will not bother writing the proof out, as this is already a consequence of (\cite{vaskou2023isomorphism}, Theorem B). \end{rem} \section{Automorphism groups.} Let $A_{\Gamma}$ be a large-type free-of-infinity Artin group of rank at least $3$. In Section 3 we introduced various algebraic objects and proved that the Deligne complex $X_{\Gamma}$ associated with $A_{\Gamma}$ can be reconstructed in a purely algebraic way. This allowed to build a natural combinatorial action of the automorphism group $Aut(A_{\Gamma})$ onto the Deligne complex (Theorem B). In this section we will see that this action can be used to compute $Aut(A_{\Gamma})$ explicitly, proving Theorem A. \begin{lemma} \label{LemmaInnIsAction} The group $Inn(A_{\Gamma})$ of inner automorphisms of $A_{\Gamma}$ acts on $X_{\Gamma}$ in a natural way: every inner automorphism $\varphi_g : h \mapsto g h g^{-1}$ acts on $X_{\Gamma}$ like the element $g$. Moreover $Inn(A_{\Gamma}) \cong A_{\Gamma}$. \end{lemma} \noindent \textbf{Proof:} We begin by proving the first statement. By Remark \ref{RemAction}.(1), it is enough to check that this holds when we restrict the action to type $2$ vertices of $X_{\Gamma}$. Let $g \in A_{\Gamma}$, and let $v \in V_2$ be a type $2$ vertex of $X_{\Gamma}$. Then $$\varphi_g \cdot v \coloneqq (F \circ \varphi_g \circ F^{-1})(v) = F(\varphi_g (G_v)) = F(g G_v g^{-1}) = F(G_{g \cdot v}) = g \cdot v.$$ The fact that $Inn(A_{\Gamma}) \cong A_{\Gamma}$ is a consequence of $A_{\Gamma}$ having trivial centre (\cite{vaskou2021acylindrical}, Corollary C). \(\Box\) \begin{lemma} \label{LemmaHeightPreserving} Let $\iota$ be the automorphism of $A_{\Gamma}$ defined by $\iota(s) \coloneqq s^{-1}$ for every generator $s \in V(\Gamma)$, and let $\varphi \in Aut(A_{\Gamma})$ be any automorphism. Then one of $\varphi$ or $\varphi \circ \iota$ is height-preserving. 
\end{lemma} \noindent \textbf{Proof:} By Corollary \ref{CoroCombiAction} the automorphism $\varphi$ acts combinatorially on $X_{\Gamma}$. In particular, it sends the vertex $v_{\emptyset}$ onto the vertex $g v_{\emptyset}$ for some $g \in A_{\Gamma}$. Using Lemma \ref{LemmaInnIsAction}, the automorphism $\varphi_{g^{-1}} \circ \varphi $ fixes $v_{\emptyset}$. Since inner automorphisms preserve height, we may simply assume that $\varphi$ fixes $v_{\emptyset}$. In particular, $\varphi$ preserves $\Gamma_{bar}$ and thus sends the set of type $1$ vertices of $K_{\Gamma}$ onto itself. Looking at the action of $\varphi$ on $D_{\Gamma}$, this means $\varphi$ sends any standard parabolic subgroup of type $1$ of $A_{\Gamma}$ onto another such subgroup. Consequently, every standard generator must be sent by $\varphi$ onto an element that generates such a subgroup, i.e. that has height $1$ or $-1$. There are three possibilities: \noindent \underline{(1) $ht(\varphi(s)) = 1, \forall s \in V(\Gamma)$:} Then $\varphi$ is height-preserving. \noindent \underline{(2) $ht(\varphi(s)) = -1, \forall s \in V(\Gamma)$:} Then $\varphi \circ \iota$ is height-preserving. \noindent \underline{(3) $\exists s, t \in V(\Gamma): ht(\varphi(s)) = 1$ and $ht(\varphi(t)) = -1$:} This means there are generators $a, b \in V(\Gamma)$ such that $\varphi(s) = a$ and $\varphi(t) = b^{-1}$. Because $A_{\Gamma}$ is free-of-infinity, the generators $s$ and $t$, as well as the generators $a$ and $b$, generate dihedral Artin subgroups of $A_{\Gamma}$. Note that $\varphi(A_{st}) = \langle \varphi(s), \varphi(t) \rangle = \langle a, b^{-1} \rangle = A_{ab}$. Because $\varphi$ is an isomorphism we must have $m_{st} = m_{ab}$ (use \cite{paris2003artin}, Theorem 1.1). Applying $\varphi$ on both sides of the relation $sts \cdots = tst \cdots$ yields $$ab^{-1}a \cdots = b^{-1}ab^{-1} \cdots.$$ Note that if we put everything on the same side, we obtain a word with $2 m_{st} = 2 m_{ab}$ syllables, that is trivial in $A_{ab}$. The words of length $2 m_{ab}$ that are trivial in $A_{ab}$ have been classified in (\cite{martin2020abelian}, Lemma 3.1), and the word we obtained does not fit this classification, which yields a contradiction. \(\Box\) \begin{defi} Let $Aut(\Gamma)$ be the group of label-preserving graph automorphisms of $\Gamma$. We say that an automorphism $\varphi \in Aut(A_{\Gamma})$ is \textbf{graph-induced} if there exists a graph automorphism $\phi \in Aut(\Gamma)$ such that $\varphi_*(\Gamma_{bar}) = \phi(\Gamma_{bar})$. We denote by $Aut_{GI}(A_{\Gamma})$ the subgroup of $Aut(A_{\Gamma})$ consisting of the graph-induced automorphisms. \end{defi} \begin{lemma} \label{LemmaGI1to1Aut} The map $\mathcal{F} : Aut_{GI}(A_{\Gamma}) \rightarrow Aut(\Gamma) \times \{id, \iota\}$ defined by the following is a group isomorphism: \\Any $\varphi \in Aut_{GI}(A_{\Gamma})$ induces an automorphism of $\Gamma_{bar}$ and thus of $\Gamma$. This automorphism defines the first component of $\mathcal{F}(\varphi)$. The second component of $\mathcal{F}(\varphi)$ is $id$ if $\varphi$ is height-preserving, and $\iota$ otherwise. \end{lemma} \noindent \textbf{Proof:} It is easy to check that $\mathcal{F}$ defines a morphism, so we show that it defines a bijection by describing its inverse map. Let $\phi \in Aut(\Gamma)$.
Then for any standard generator $s \in V(\Gamma)$, the automorphism $\phi$ sends the vertex $\langle s \rangle$ corresponding to $s$ onto the vertex $\phi(\langle s \rangle)$ corresponding to a standard generator that we denote by $s_{\phi}$. Define $\varphi_{\phi}$ as the (unique) automorphism of $A_{\Gamma}$ that sends every standard generator $s$ onto the standard generator $s_{\phi}$. Note that when acting on $X_{\Gamma}$, $\varphi_{\phi}$ restricts to an automorphism of $\Gamma_{bar}$ that corresponds to the automorphism $\phi$ of $\Gamma$. For $\varepsilon \in \{0, 1 \}$ we let $\mathcal{F}^{-1}((\phi, \iota^{\varepsilon})) \coloneqq \varphi_{\phi} \circ \iota^{\varepsilon}$. It is clear that $\varphi_{\phi} \circ \iota^{\varepsilon}$ is graph-induced, and it is easy to check that composing $\mathcal{F}^{-1}$ with $\mathcal{F}$ on either side yields the identity. \(\Box\) We are now able to prove the main result of this paper: \begin{thm} \label{MainThm} \textbf{(Theorem A)} Let $A_{\Gamma}$ be a large-type free-of-infinity Artin group. Then we have $$Aut(A_{\Gamma}) \cong A_{\Gamma} \rtimes (Aut(\Gamma) \times (\quotient{\mathbf{Z}}{2 \mathbf{Z}})) \ \ \ \text{ and } \ \ \ Out(A_{\Gamma}) \cong Aut(\Gamma) \times (\quotient{\mathbf{Z}}{2 \mathbf{Z}}).$$ \end{thm} \noindent \textbf{Proof:} Let $\varphi \in Aut(A_{\Gamma})$. The same argument as the one in the proof of Lemma \ref{LemmaHeightPreserving} shows that up to post-composing with an inner automorphism, we may as well assume that $\varphi$ preserves $\Gamma_{bar}$, i.e. that $\varphi$ is graph-induced. This means $$Aut(A_{\Gamma}) \cong Inn(A_{\Gamma}) \rtimes Aut_{GI}(A_{\Gamma}).$$ Using Lemma \ref{LemmaInnIsAction} and Lemma \ref{LemmaGI1to1Aut}, we obtain $$Aut(A_{\Gamma}) \cong A_{\Gamma} \rtimes (Aut(\Gamma) \times \{id, \iota\}) \cong A_{\Gamma} \rtimes (Aut(\Gamma) \times (\quotient{\mathbf{Z}}{2 \mathbf{Z}})).$$ In particular, we have $$Out(A_{\Gamma}) \cong Aut(\Gamma) \times (\quotient{\mathbf{Z}}{2 \mathbf{Z}}).$$ \(\Box\) \noindent \textbf{Acknowledgments:} I thank Alexandre Martin for our many discussions. This work was partially supported by the EPSRC New Investigator Award EP/S010963/1. \text{E-mail address:} \texttt{\href{mailto: [email protected]}{[email protected]}} School of Mathematics, University of Bristol, Woodland Road Bristol BS8 1UG. \end{document}
\begin{document} \title[analytic properties of Ohno function]{Analytic properties of Ohno function} \author{Ken Kamano} \address[Ken Kamano]{Department of Robotics, Osaka Institute of Technology, 1-45 Chaya-machi, Kita-ku, Osaka 530-8568, Japan} \email{[email protected]} \author{Tomokazu Onozuka} \address[Tomokazu Onozuka]{Institute of Mathematics for Industry, Kyushu University 744, Motooka, Nishi-ku, Fukuoka, 819-0395, Japan} \email{[email protected]} \subjclass[2010]{Primary 11M32} \keywords{Multiple zeta functions, Ohno's relation, Ohno function} \begin{abstract} Ohno's relation is a well-known relation in the field of multiple zeta values, and it admits an interpolation to a complex function. In this paper, we call this complex function the Ohno function and study it. We determine its region of absolute convergence, give some new expressions, and prove new relations for this function. We also give a direct proof of the interpolation of Ohno's relation. \end{abstract} \maketitle \section{Introduction} For positive integers $k_1,\ldots,k_r$ with $k_r \ge 2$, the multiple zeta values (MZVs) are defined by \begin{align*} \zeta(k_1,\ldots, k_r) :=\sum_{1\le n_1<\cdots <n_r} \frac {1}{n_1^{k_1}\cdots n_r^{k_r}}. \end{align*} We say that an index $(k_1,\ldots,k_r)\in\mathbb{Z}_{\ge1}^r$ is admissible if $k_r\ge2$. For an index $\boldsymbol{k}=(k_1,\ldots,k_r)$, $\text{wt}(\boldsymbol{k}):=k_1+\cdots +k_r$ and $\text{dep}(\boldsymbol{k}):= r$ are called the weight and the depth of $\boldsymbol{k}$, respectively. It is known that MZVs satisfy many algebraic relations over $\mathbb{Q}$. Ohno's relation is a well-known relation among MZVs. \begin{defn} For an admissible index \[ \boldsymbol{k}:=(\underbrace{1,\ldots,1}_{a_1-1},b_1+1,\dots,\underbrace{1,\ldots,1}_{a_d-1},b_d+1) \quad (a_i, b_i\ge1), \] we define the dual index of $\boldsymbol{k}$ by \[ \boldsymbol{k}^\dagger :=(\underbrace{1,\ldots,1}_{b_d-1},a_d+1,\dots,\underbrace{1,\ldots,1}_{b_1-1},a_1+1). \] \end{defn} \begin{thm}[Ohno's relation; Ohno \cite{Oho99}] \label{ohno} For an admissible index $(k_1,\ldots,k_r)$ and $m\in\mathbb{Z}_{\ge 0}$, we have \begin{align*} \sum_{\substack{ e_1+\cdots+e_r=m \\ e_i\ge0\,(1\le i\le r) }} \zeta (k_1+e_1,\ldots,k_r+e_r) =\sum_{\substack{ e'_1+\cdots+e'_{r'}=m \\ e'_i\ge0\,(1\le i\le r') }} \zeta (k'_1+e'_1,\ldots,k'_{r'}+e'_{r'}), \end{align*} where the index $(k'_1,\ldots,k'_{r'})$ is the dual index of $(k_1,\ldots,k_r)$. \end{thm} For an admissible index $\boldsymbol{k}=(k_1,\ldots,k_r)$ and $s\in\mathbb{C}$ with $\Re(s)>-1$, Hirose-Murahara-Onozuka \cite{HMO20} defined the Ohno function $I_{\boldsymbol{k}}(s)$ by \begin{align}\label{ohnofunc} I_{\boldsymbol{k}}(s):=\sum_{i=1}^{r}\sum_{0<n_1<\cdots<n_r} \frac{1}{n_1^{k_1}\cdots n_r^{k_r}} \cdot \frac{1}{n_i^{s}} \prod_{j\ne i} \frac{n_j}{ n_j-n_i }. \end{align} This is a sum of special cases of $\zeta_{\mathfrak{sl}(r+1)}(\boldsymbol{s})$, the Witten multiple zeta function associated with $\mathfrak{sl}(r+1)$. The Witten multiple zeta function, which is also called the zeta function associated with the root system of type $A_r$ (for more details, see Komori-Matsumoto-Tsumura \cite{KMT10}), was first introduced in Matsumoto-Tsumura \cite{MaTs06} and can be continued meromorphically to the whole complex space $\mathbb{C}^{r(r+1)/2}$. Hence $I_{\boldsymbol{k}}(s)$ can be continued meromorphically to $\mathbb{C}$.
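As a simple illustration of these definitions, consider first the depth one case $r=1$: the product over $j\ne i$ in \eqref{ohnofunc} is then empty, so that
\begin{align*}
I_{(k)}(s)=\sum_{n=1}^{\infty}\frac{1}{n^{k+s}}=\zeta(k+s),
\end{align*}
whose meromorphic continuation is classical. As a depth two example of the duality underlying Theorem \ref{ohno}, the admissible index $\boldsymbol{k}=(1,2)$ has $(a_1,b_1)=(2,1)$ and hence $\boldsymbol{k}^\dagger=(3)$; the cases $m=0$ and $m=1$ of Theorem \ref{ohno} then recover Euler's identity $\zeta(1,2)=\zeta(3)$ and the weight $4$ sum formula $\zeta(2,2)+\zeta(1,3)=\zeta(4)$, respectively.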
When $s=m\in\mathbb{Z}_{\ge 0}$, the Ohno function is the Ohno sum, that is, \begin{align*} I_{\boldsymbol{k}}(m)=\sum_{\substack{ e_1+\cdots+e_r=m \\ e_i\ge0\,(1\le i\le r) }}\zeta (k_1+e_1,\ldots,k_r+e_r), \end{align*} and by Theorem \ref{ohno}, we have \begin{align*} I_{\boldsymbol{k}}(m)=I_{\boldsymbol{k}^\dagger}(m). \end{align*} In \cite{HMO20}, Hirose-Murahara-Onozuka gave an interpolation of Ohno's relation to complex function. \begin{thm}[an interpolation of Ohno's relation] \label{interpolation} For an admissible index $\boldsymbol{k}$ and $s\in\mathbb{C}$, we have \begin{align*} I_{\boldsymbol{k}}(s)=I_{\boldsymbol{k}^\dagger}(s). \end{align*} \end{thm} In this paper, we study the Ohno function $I_{\boldsymbol{k}}(s)$. In Section 2, we give a precise region of absolute convergence of the series \eqref{ohnofunc}. \begin{thm}\label{main1} The series \eqref{ohnofunc} converges absolutely only for \begin{align*} \max_{1\leq j\leq r}\{r-2j+2-(k_j+\cdots+k_r)\}<\Re(s). \end{align*} \end{thm} In Section 3, we give the following integral expression of the Ohno function; \begin{thm}\label{thm:integral_exp} For an admissible index \[ \boldsymbol{k}:=(\underbrace{1,\ldots,1}_{a_1-1},b_1+1,\dots,\underbrace{1,\ldots,1}_{a_d-1},b_d+1) \quad (a_i, b_i\ge1) \] and $s\in\mathbb{C}$ with $\Re(s)>-1$, we have \begin{align*} I_{\boldsymbol{k}}(s) &=\dfrac{1}{(a_1-1)!(b_1-1)! \cdots (a_d-1)! (b_d-1)! \Gamma (s+1) }\\ &\times \int_{0<t_1<\cdots < t_{2d}<1} \dfrac{dt_1 \cdots dt_{2d}}{(1-t_1)t_2 \cdots (1-t_{2d-1})t_{2d} }\\ &\times \left( \log \dfrac{1-t_1}{1-t_2} \right)^{a_1-1} \left( \log \dfrac{t_3}{t_2} \right)^{b_1-1} \cdots \left( \log \dfrac{1-t_{2d-1}}{1-t_{2d}} \right)^{a_{d}-1} \left( \log \dfrac{1}{t_{2d}} \right)^{b_{d}-1} \left( \log \dfrac{t_2 \cdots t_{2d} }{t_1 \cdots t_{2d-1}} \right)^{s}. \end{align*} \end{thm} In Section 4, we give a new proof of Theorem \ref{interpolation}. In the original proof, we assume Ohno's relation, but our new proof gives Theorem \ref{interpolation} directly. Our method is based on Theorem \ref{thm:integral_exp} and the proof given by Ulanskii \cite{U}. In Section 5, we consider an interpolation of $T$-interpolated sum formula. Finally, in Section 6, we show another expression of the Ohno function. \begin{thm}\label{main6} For an admissible index $\boldsymbol{k}=(k_1,\ldots,k_r)$ and $s\in\mathbb{C}$ with $\max_{1\leq j\leq r}\{r-2j+2-(k_j+\cdots+k_r)\}<\Re(s)<0$, we have \begin{align*} I_{\boldsymbol{k}}(s)=&-\frac{\sin(\pi s)}{\pi}\sum_{0<n_1<\cdots<n_r}\frac{1}{n_1^{k_1-1}\cdots n_r^{k_r-1}}\int_0^\infty\frac{w^{-s-1}}{(w+n_1)\cdots(w+n_r)}dw. \end{align*} \end{thm} By applying this theorem, we can deduce linear relations among Ohno functions. \begin{thm}\label{main7} Let $l$ be a positive integer with $l \le r$ and an admissible index $\boldsymbol{k}=(k_1,\ldots,k_r)$ satisfy $$\max_{\substack{1\leq j\leq r\\|\boldsymbol{e}|=m\\e_1,\ldots,e_{r}\leq1\\e_l=0}}\{r-2j+2-(k_j+e_j+\cdots+k_r+e_r)\}<-m$$ for all $m=0,\ldots,r-1$. For $s\in\mathbb{C}$, we have \begin{align*} \sum_{j=0}^{r-1}(-1)^{j}\sum_{\substack{|\boldsymbol{e}|=j\\e_1,\ldots,e_{r}\leq1\\e_l=0}}I_{\boldsymbol{k}+\boldsymbol{e}}(s-j) =\zeta(k_1,\ldots,k_{l-1},k_l+s,k_{l+1},\ldots,k_r). \end{align*} \end{thm} By theorem \ref{main6}, we can deduce the following corollary. \begin{cor}\label{cor} For an admissible index $\boldsymbol{k}=(k_1,\ldots,k_r)$ and a negative integer $n$ with $\max_{1\leq j\leq r}\{r-2j+2-(k_j+\cdots+k_r)\}<n<0$, we have \begin{align*} I_{\boldsymbol{k}}(n)=0. 
\end{align*} \end{cor} \section{Region of absolute convergence} We give a precise region of absolute convergence of the series \eqref{ohnofunc}. It is enough to consider a region of absolute convergence of the Dirichlet series \begin{align*} \begin{split} I_{\boldsymbol{k},i}(s):=&\sum_{0<n_1<\cdots<n_r} \frac{1}{n_1^{k_1}\cdots n_r^{k_r}} \cdot \frac{1}{n_i^{s}} \prod_{j\ne i} \frac{n_j}{ n_j-n_i }\\ =& (-1)^{i-1} \sum_{m_1,\ldots,m_r=1}^\infty \frac{1}{ m_1^{k_1-1}(m_1+m_2)^{k_2-1}\cdots(m_1+\cdots+m_r)^{k_r-1} } \\ &\,\,\,\,\qquad \times \frac{1}{ (m_1+\cdots+m_i)^{s+1} } \, \prod_{j<i} \frac{1}{m_{j+1}+\cdots+m_i } \, \prod_{j>i} \frac{1}{m_{i+1}+\cdots+m_j }, \end{split} \end{align*} for all $1\le i\le r$. Since the above series is a special case of the zeta function associated with the root system of type $A_r$ for each $i$, we can apply the result given by Zhao-Zhou \cite[Proposition 2.1]{ZhZh11}. \begin{thm}[Zhao-Zhou \cite{ZhZh11}] \label{zhzh} The series \begin{align}\label{ZZ} \sum_{m_1,\ldots,m_r=1}^\infty \prod_{\boldsymbol{i}\subseteq[r]}\left(\sum_{j=1}^{{\rm lg}(\boldsymbol{i})}m_{i_j}\right)^{-\sigma_{\boldsymbol{i}}} \end{align} converges if and only if for all $\ell=1,\ldots,r$ and $\boldsymbol{i}=(i_1,\ldots,i_\ell)\subseteq[r]$ \begin{align}\label{ZZ1} \sum_{\substack{\boldsymbol{j}{\rm \;contains\;at\;least\;one\;of}\\ i_1,\ldots,i_\ell}}\sigma_{\boldsymbol{j}}>\ell, \end{align} where the product $\prod_{\boldsymbol{i}\subseteq[r]}$ runs over all nonempty subsets of $[r]=(1,2,\ldots,r)$ as a poset and ${\rm lg}(\boldsymbol{i})$ is the length of $\boldsymbol{i}$. \end{thm} Note that if we put \begin{align}\label{sigma} \sigma_{\boldsymbol{i}}= \begin{cases} k_j-1&(\boldsymbol{i}=(1,2,\ldots,j)\mbox{ for } j\ne i),\\ k_i+s&(\boldsymbol{i}=(1,2,\ldots,i)),\\ 1&(\boldsymbol{i}=(j+1,\ldots,i)\mbox{ for } j< i\mbox{ or }\boldsymbol{i}=(i+1\ldots,j)\mbox{ for } j> i),\\ 0&(\mbox{otherwise}), \end{cases} \end{align} then the series \eqref{ZZ} is $(-1)^{i-1}I_{\boldsymbol{k},i}(s)$. \begin{proof}[Proof of Theorem \ref{main1}] In the case $\ell=r$ for Theorem \ref{zhzh}, since $\boldsymbol{i}=(1,\ldots,r)$, the left-hand side of \eqref{ZZ1} is the sum of all $\sigma_{\boldsymbol{i}}$'s in \eqref{sigma}, so we obtain the following condition of absolute convergence \begin{align}\label{cond1} k_1+\cdots +k_r+\Re(s)>r. \end{align} When $\ell=r-1$, let us consider the case $\boldsymbol{i}=(2,\ldots,r)$ first. In this case, the left-hand side of \eqref{ZZ1} is the sum of all $\sigma_{\boldsymbol{i}}$'s but $\sigma_{(1)}$ in \eqref{sigma}, so we obtain the following conditions for $I_{\boldsymbol{k},i}(s)$: \begin{align*} \begin{cases} k_2+\cdots +k_r+\Re(s)>r-2 &(i\neq1,\ \boldsymbol{i}=(2,\ldots,r)),\\ k_2+\cdots +k_r>r-1 &(i=1,\ \boldsymbol{i}=(2,\ldots,r)). \end{cases} \end{align*} In the case $\boldsymbol{i}=(1,\ldots,i-1,i+1,\ldots,r)$ for $i\neq1$ or $\boldsymbol{i}=(1,\ldots,i,i+2,\ldots,r)$, the left-hand side of \eqref{ZZ1} is the sum of all $\sigma_{\boldsymbol{i}}$'s but $\sigma_{(i)}$ or $\sigma_{(i+1)}$ in \eqref{sigma}, respectively, so we obtain the following condition: \begin{align*} k_1+\cdots +k_r+\Re(s)>r \quad(i\neq1,\ \boldsymbol{i}=(1,\ldots,i-1,i+1,\ldots,r))\ {\rm or}\ (\boldsymbol{i}=(1,\ldots,i,i+2,\ldots,r)). \end{align*} Otherwise, we obtain the following condition: \begin{align*} k_1+\cdots +k_r+\Re(s)>r-1\quad(\rm{otherwise}). 
\end{align*} The conditions obtained by the case $\boldsymbol{i}\neq(2,\ldots,r)$ are contained in \eqref{cond1}, hence only the case $\boldsymbol{i}=(2,\ldots,r)$ deduces the region of absolute convergence when $\ell=r-1$. (In general, it suffices to consider the case $\boldsymbol{i}=(j,j+1,\ldots,r)$ when $\ell=r-j+1$.) By the similar way, considering all $\ell=1,\ldots,r-2$ for Theorem \ref{zhzh}, the series $I_{\boldsymbol{k},i}(s)$ converges absolutely only for \begin{align*} &k_1+\cdots +k_r+\Re(s)>r,\\ &k_2+\cdots +k_r+\Re(s)>r-2,\\ &\cdots\\ &k_i+\cdots+k_r+\Re(s)>r-2i+2 \end{align*} and \begin{align*} &k_{i+1}+\cdots +k_r>r-i,\\ &k_{i+2}+\cdots +k_r>r-i-1,\\ &\cdots\\ &k_r>1. \end{align*} Hence, the series \eqref{ohnofunc} is absolutely convergent for \begin{align*} &k_1+\cdots +k_r+\Re(s)>r,\\ &k_2+\cdots +k_r+\Re(s)>r-2,\\ &\cdots\\ &k_r+\Re(s)>-r+2 \end{align*} and \begin{align*} &k_{2}+\cdots +k_r>r-1,\\ &k_{3}+\cdots +k_r>r-2,\\ &\cdots\\ &k_r>1. \end{align*} Since the index $\boldsymbol{k}$ is admissible, we obtain Theorem \ref{main1}. \end{proof} \section{New integral expression} We need the following lemma to prove Theorem \ref{thm:integral_exp}. \begin{lem}\label{lamme:integral} For $r\in \mathbb{Z}_{\ge 1}$, $c_i>0$ $(1\le i \le r)$ with $c_i\neq c_j$ $(i\neq j)$ and $s\in\mathbb{C}$ with $\Re (s) >-1$, we have \begin{equation}\label{eq:int_lemma} \begin{split} & \sum_{i=1}^r \left( \dfrac{1}{c_i^{s+1}} \prod_{j\neq i} \dfrac{1}{c_j-c_i} \right) \\ &= \dfrac{1}{\Gamma(s+1)} \int_{0}^{\infty} \cdots \int_{0}^{\infty} e^{-c_1 x_1- \cdots -c_r x_r} (x_1+\cdots +x_r)^{s} \, dx_1 \cdots dx_r . \end{split} \end{equation} \end{lem} \begin{proof} First we prove the integral in the right-hand side of \eqref{eq:int_lemma} converges for $\Re(s)>-1$. Let $\sigma:=\Re(s)$. When $-1<\sigma<0$, we have \begin{align*} &\left| \int_{0}^{\infty} \cdots \int_{0}^{\infty} e^{-c_1 x_1- \cdots -c_r x_r} (x_1+\cdots +x_r)^{s} \, dx_1 \cdots dx_r \right|\\ &\le \int_{0}^{\infty} \cdots \int_{0}^{\infty} e^{-c_1 x_1- \cdots -c_r x_r} (x_1+\cdots +x_r)^{\sigma} \, dx_1 \cdots dx_r \\ &\le \int_{0}^{\infty} \cdots \int_{0}^{\infty} e^{-c_1 x_1- \cdots -c_r x_r} x_r^{\sigma} \, dx_1 \cdots dx_r \\ &\le \dfrac{1}{c_1\cdots c_{r-1}} \int_0^{\infty} e^{-c_r x_r} x_r^{\sigma} \, dx_r \\ &= \dfrac{1}{c_1\cdots c_{r-1}} \dfrac{1}{c_r^{\sigma+1}} \Gamma(\sigma +1). \end{align*} Hence the integral in the right-hand side of \eqref{eq:int_lemma} converges. When $\sigma \ge 0$, we have \begin{align*} &\left| \int_{0}^{\infty} \cdots \int_{0}^{\infty} e^{-c_1 x_1- \cdots -c_r x_r} (x_1+\cdots +x_r)^{s} \, dx_1 \cdots dx_r \right|\\ &\le \int_{0}^{\infty} \cdots \int_{0}^{\infty} e^{-c_1 x_1- \cdots -c_r x_r} (x_1+1)^{\sigma}\cdots (x_r+1)^{\sigma} \, dx_1 \cdots dx_r \\ &= e^{c_1} \cdots e^{c_r} \int_{1}^{\infty} \cdots \int_{1}^{\infty} e^{-c_1 x_1- \cdots -c_r x_r} x_1^{\sigma}\cdots x_r^{\sigma} \, dx_1 \cdots dx_r \\ &\le e^{c_1} \cdots e^{c_r} \dfrac{1}{c_1^{\sigma+1}\cdots c_r^{\sigma+1}} \Gamma(\sigma+1)^r. \end{align*} Hence the integral in the right-hand side of \eqref{eq:int_lemma} also converges in this case. Next we prove the equation \eqref{eq:int_lemma} by induction on $r$. When $r=1$, the right-hand side of \eqref{eq:int_lemma} equals \begin{align*} \dfrac{1}{\Gamma(s+1)}\int_0^{\infty}e^{-c_1x} x^s\, dx = \dfrac{1}{c_1^{s+1}}\end{align*} and the equation \eqref{eq:int_lemma} holds. Suppose $r\ge 2$ and \eqref{eq:int_lemma} holds for $r-1$ variables. 
By the change of variables \begin{align} \begin{cases}\label{CoV} x_1+\cdots +x_r=y_1, \\ x_2+\cdots +x_r=y_2, \\ \vdots \\ x_r=y_r, \end{cases} \end{align} we have \begin{align*} & \int_{0}^{\infty} \cdots \int_{0}^{\infty} e^{-c_1 x_1- \cdots -c_r x_r} (x_1+\cdots +x_r)^{s} \, dx_1 \cdots dx_r \\ &= \int \cdots \int_{y_1>\cdots >y_r\ge0} e^{-c_1 y_1} e^{(c_1-c_2)y_2} \cdots e^{(c_{r-1}-c_r)y_r} y_1^{s} \, dy_1 \cdots dy_r \\ &= \int \cdots \int_{y_1>\cdots >y_{r-1}>0} e^{-c_1 y_1} e^{(c_1-c_2)y_2} \cdots e^{(c_{r-2}-c_{r-1})y_{r-1}} \dfrac{e^{(c_{r-1}-c_r)y_{r-1}}-1}{c_{r-1}-c_r} y_1^{s} \, dy_1 \cdots dy_{r-1} \\ &= \int \cdots \int_{y_1>\cdots >y_{r-1}>0} e^{-c_1 y_1} e^{(c_1-c_2)y_2} \cdots e^{(c_{r-3}-c_{r-2})y_{r-2}} \dfrac{e^{(c_{r-2}-c_r)y_{r-1}} - e^{(c_{r-2}-c_{r-1})y_{r-1}}}{c_{r-1}-c_r} y_1^{s} \, dy_1 \cdots dy_{r-1}. \end{align*} By the change of variables \eqref{CoV} with $x_r=y_r=0$, we have \begin{align*} &\int \cdots \int_{y_1>\cdots >y_{r-1}>0} e^{-c_1 y_1} e^{(c_1-c_2)y_2} \cdots e^{(c_{r-3}-c_{r-2})y_{r-2}} \dfrac{e^{(c_{r-2}-c_r)y_{r-1}} - e^{(c_{r-2}-c_{r-1})y_{r-1}}}{c_{r-1}-c_r} y_1^{s} \, dy_1 \cdots dy_{r-1}\\ &=\dfrac{1}{c_{r-1}-c_r} \int_{0}^{\infty} \cdots \int_{0}^{\infty} e^{-c_1 x_1- \cdots -c_{r-2} x_{r-2}-c_r x_{r-1}} (x_1+\cdots +x_{r-1})^{s} \, dx_1 \cdots dx_{r-1} \\ &\quad-\dfrac{1}{c_{r-1}-c_r} \int_{0}^{\infty} \cdots \int_{0}^{\infty} e^{-c_1 x_1- \cdots -c_{r-2} x_{r-2}-c_{r-1} x_{r-1}} (x_1+\cdots +x_{r-1})^{s} \, dx_1 \cdots dx_{r-1} . \end{align*} By the induction hypothesis, this equals \begin{align*} &\dfrac{\Gamma(s+1)}{c_{r-1}-c_r} \Bigg( \sum_{i=1}^{r-2} \left( \frac{1}{c_i^{s+1}} \frac{1}{c_r-c_i} \prod_{ \substack{j\neq i \\ 1\le j \le r-2} } \frac{1}{c_j-c_i} \right) + \frac{1}{c_r^{s+1}} \prod_{ \substack{ 1\le j \le r-2} } \frac{1}{c_j-c_r} \\ &- \sum_{i=1}^{r-2} \left( \frac{1}{c_i^{s+1}} \frac{1}{c_{r-1}-c_i} \prod_{ \substack{j\neq i \\ 1\le j \le r-2} } \frac{1}{c_j-c_i} \right) - \frac{1}{c_{r-1}^{s+1}} \prod_{ \substack{ 1\le j \le r-2} } \frac{1}{c_j-c_{r-1}} \Bigg) \\ &= \Gamma(s+1) \left( \sum_{i=1}^{r-2} \frac{1}{c_i^{s+1}} \prod_{ \substack{j\neq i \\ 1\le j \le r} } \frac{1}{c_j-c_i} + \frac{1}{c_r^{s+1}} \prod_{ \substack{j\neq r \\ 1\le j \le r-1} } \frac{1}{c_j-c_r} + \frac{1}{c_{r-1}^{s+1}} \prod_{ \substack{j\neq r-1 \\ 1\le j \le r} } \frac{1}{c_j-c_{r-1}} \right) \\ &= \Gamma(s+1) \sum_{i=1}^{r} \frac{1}{c_i^{s+1}} \prod_{ \substack{j\neq i \\ 1\le j \le r} } \frac{1}{c_j-c_i} \end{align*} and this proves that \eqref{eq:int_lemma} holds for $r$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:integral_exp}] We set \[ \boldsymbol{k}=(\underbrace{1,\ldots,1}_{a_1-1},b_1+1,\dots,\underbrace{1,\ldots,1}_{a_d-1},b_d+1) = (k_1,\ldots , k_r). \] Note that $a_1+\cdots +a_d=r$. By definition, we have \begin{align*} I_{\boldsymbol{k}}(s)= \sum_{0<n_1< \cdots <n_r} \dfrac{1}{n_1^{k_1-1} n_2^{k_2-1} \cdots n_r^{k_r-1} } \sum_{i=1}^r \left( \dfrac{1}{n_i^{s+1}} \prod_{ j\neq i }\dfrac{1}{n_j- n_i } \right). \end{align*} Because $(k_1-1, \ldots ,k_r-1)= ( \underbrace{0,\ldots ,0}_{a_1-1}, b_1, \ldots , \underbrace{0,\ldots ,0}_{a_d-1}, b_d )$, we have \begin{align*} I_{\boldsymbol{k}}(s)= \sum_{0<n_1< \cdots <n_r} \dfrac{1}{n_{a_1}^{b_1} n_{a_1+a_2}^{b_2} \cdots n_{a_1+\cdots + a_d}^{b_d} } \sum_{i=1}^r \left( \dfrac{1}{n_i^{s+1}} \prod_{ j\neq i }\dfrac{1}{n_j- n_i } \right). 
\end{align*} By using the well-known identity \[ \displaystyle \dfrac{1}{n^b} = \dfrac{1}{\Gamma(b)} \int_0^{\infty} e^{-ny} y^{b-1}dy\ \ \ (n, \, b>0)\] and Lemma \ref{lamme:integral}, we get \begin{align*} I_{\boldsymbol{k}}(s) &= \dfrac{1}{\Gamma(s+1) (b_1-1)! \cdots (b_d-1)!} \\ &\sum_{0<n_1< \cdots <n_r} \int_0^{\infty} \cdots \int_0^{\infty} e^{-n_{a_1}y_1} e^{-n_{a_1+a_2}y_2}\cdots e^{-n_{a_1+\cdots +a_d}y_d} y_1^{b_1-1} \cdots y_d^{b_d-1}\,dy_1\cdots dy_d\\ &\hspace{110pt}e^{-n_1x_1} e^{-n_2x_2}\cdots e^{-n_{r}x_r}(x_1+\cdots +x_r)^s \,dx_1\cdots dx_r. \end{align*} Set $n_i= m_1+\cdots +m_i$ $(1\le i \le r)$. Then each $m_i$ runs over all positive integers and \begin{align*} I_{\boldsymbol{k}}(s) =& \dfrac{1}{\Gamma(s+1) (b_1-1)! \cdots (b_d-1)!} \\ &\sum_{m_1\ge 1, \ldots , m_r\ge 1} \int_0^{\infty} \cdots \int_0^{\infty} \exp\left(-\sum_{i=1}^{r}m_i(X_i+Y_i)\right)\\ &\hspace{40pt} (x_1+\cdots +x_r)^s y_1^{b_1-1} \cdots y_d^{b_d-1} \,dx_1\cdots dx_r\,dy_1\cdots dy_d\\ =& \dfrac{1}{\Gamma(s+1) (b_1-1)! \cdots (b_d-1)!} \\ &\int_0^{\infty} \cdots \int_0^{\infty} \prod_{i=1}^r \dfrac{e^{-X_i-Y_i}}{1-e^{-X_i-Y_i}}\\ &\hspace{40pt} (x_1+\cdots +x_r)^sy_1^{b_1-1} \cdots y_d^{b_d-1} \,dx_1\cdots dx_r\,dy_1\cdots dy_d, \end{align*} where \begin{align*} X_i&=x_i+\cdots+x_r\ \ (1\le i \le r),\\ Y_i&=\begin{cases} y_1+\cdots +y_d &(1\le i \le a_1),\\ y_2+\cdots +y_d &(a_1< i \le a_1+a_2),\\ \vdots\\ y_d &(a_1+\cdots +a_{d-1}< i \le r). \end{cases} \end{align*} We apply the following change of variables: \[ \begin{cases} t^{(1)}_{1}&= \exp(-X_1-Y_{a_1}),\\ \vdots &\hspace{20pt}\vdots \\ t^{(1)}_{a_1}&= \exp(-X_{a_1}-Y_{a_1}),\\ u_1&= \exp(-X_{a_1+1}-Y_{a_1}),\end{cases}\ \ \ \ \ \begin{cases} t^{(2)}_{1}&= \exp(-X_{a_1+1}-Y_{a_1+a_2}),\\ \vdots &\hspace{20pt}\vdots \\ t^{(2)}_{a_2}&= \exp(-X_{a_1+a_2}-Y_{a_1+a_2}),\\ u_2&= \exp(-X_{a_1+a_2+1}-Y_{a_1+a_2}),\end{cases}\] \[ \hspace{20pt} \cdots \begin{cases} t^{(d)}_{1}&= \exp(-X_{a_1+\cdots +a_{d-1}+1}-Y_r),\\ \vdots &\hspace{20pt}\vdots \\ t^{(d)}_{a_d}&= \exp(-X_{r}-Y_r),\\ u_d&= \exp(-Y_r).\end{cases}\] Then it can be easily checked that \begin{itemize} \item $0<t^{(1)}_{1}<\cdots <t^{(1)}_{a_1}<u_1 < \cdots \cdots < t^{(d)}_{1} <\cdots < t^{(d)}_{a_d} < u_d<1$, \item $\displaystyle \prod_{i=1}^r \dfrac{e^{-X_i-Y_i}}{1-e^{-X_i-Y_i}} =\dfrac{t_1^{(1)}}{1-t_1^{(1)}} \cdots \dfrac{t_{a_d}^{(d)}}{1-t_{a_d}^{(d)}}$, \item $y_1=\log \dfrac{t_1^{(2)}}{u_1},\ \ \ldots ,\ \ y_{d-1}=\log \dfrac{t_1^{(d)}}{u_{d-1}},\ \ y_d=\log \dfrac{1}{u_d}$, \item $x_1+\cdots +x_r= \log \dfrac{u_1}{t_1^{(1)}} \dfrac{u_2}{t_1^{(2)}} \cdots \dfrac{u_d}{t_1^{(d)}}$, \item $ dx_1 \cdots dx_r dy_1\cdots dy_d = \dfrac{1}{t^{(1)}_{1}\cdots t^{(d)}_{a_d} u_1 \cdots u_d} dt^{(1)}_{1}\cdots dt^{(d)}_{a_d} du_1 \cdots du_d$. \end{itemize} Therefore \begin{align*} I_{\boldsymbol{k}}(s) =& \dfrac{1}{\Gamma(s+1) (b_1-1)! \cdots (b_d-1)!} \int_{0<t^{(1)}_{1}<\cdots <t^{(1)}_{a_1}<u_1 < \cdots < t^{(d)}_{1} <\cdots < t^{(d)}_{a_d} < u_d<1}\\ &\dfrac{dt^{(1)}_{1}\cdots dt^{(d)}_{a_d} du_1 \cdots du_d}{ (1-t^{(1)}_{1}) \cdots (1-t^{(1)}_{a_1}) u_1 \cdots (1-t^{(d)}_{1}) \cdots (1-t^{(1)}_{a_d})u_d }\\ &\left( \log \dfrac{t^{(2)}_{1}}{u_1} \right)^{b_1-1} \cdots \left( \log \dfrac{t^{(d)}_{1}}{u_{d-1}} \right)^{b_{d-1}-1} \left( \log \dfrac{1}{u_{d}} \right)^{b_{d}-1} \left( \log \dfrac{u_1}{t^{(1)}_{1}} \dfrac{u_2}{t^{(2)}_{1}} \cdots \dfrac{u_d}{t^{(d)}_{1}} \right)^{s} \\ =& \dfrac{1}{\Gamma(s+1) (a_1-1)! \cdots (a_d-1)! (b_1-1)! 
\cdots (b_d-1)!} \int_{0<t_1<u_1 < \cdots < t_{d}< u_d<1}\\ &\dfrac{dt_1\cdots dt_d du_1\cdots du_d}{ (1-t_{1}) u_1 \cdots (1-t_d) u_d }\\ & \left( \log \dfrac{1-t_{1}}{1-u_1} \right)^{a_1-1} \cdots \left( \log \dfrac{1-t_{d}}{1-u_d} \right)^{a_d-1} \\ &\left( \log \dfrac{t_{2}}{u_1} \right)^{b_1-1} \cdots \left( \log \dfrac{t_{d}}{u_{d-1}} \right)^{b_{d-1}-1} \left( \log \dfrac{1}{u_{d}} \right)^{b_{d}-1} \left( \log \dfrac{u_1}{t_{1}} \dfrac{u_2}{t_{2}} \cdots \dfrac{u_d}{t_{d}} \right)^{s}. \end{align*} Here $t^{(i)}_1$ is replaced by $t_i$ in the last equation. This completes the proof. \end{proof} \section{New Proof of Theorem \ref{interpolation}} In this section, we give another proof of $I_{\boldsymbol{k}}(s) =I_{\boldsymbol{k^{\dagger}}}(s)$ by using Theorem \ref{thm:integral_exp}. Let $d$ be a positive integer. We consider the following change of variables: \begin{align*} \dfrac{1-t_{2\ell-1}}{1-t_{2\ell}} = \dfrac{u_{2(d-\ell+1)+1}} {u_{2(d-\ell+1)}}, \ \ \ \dfrac{t_{2\ell}}{t_{2\ell+1}} = \dfrac{1-u_{2(d-\ell+1)}} {1-u_{2(d-\ell+1)-1}}\ \ \ \ \ (1\le \ell \le d). \end{align*} Here we set $u_{2d+1}=t_{2d+1}=1$. \begin{rem} The change of variables above is a special case of that of Ulanskii \cite{U}. By using this change of variables, he gave a direct proof of Ohno's relation for MZVs. \end{rem} This change of variables satisfies the following properties (cf. \cite[Sect.~2]{U}): \begin{enumerate} \item the region $0<t_1 < \cdots <t_{2d}<1$ corresponds to the region $0<u_1 < \cdots <u_{2d}<1$, \item $\dfrac{dt_1 \cdots dt_{2d}}{(1-t_1)t_2 \cdots (1-t_{2d-1})t_{2d} } = \dfrac{du_1 \cdots du_{2d}}{(1-u_1)u_2 \cdots (1-u_{2d-1})u_{2d} }$, \item $\dfrac{t_2}{t_1} \cdots \dfrac{t_{2d}}{t_{2d-1}} = \dfrac{u_2}{u_1} \cdots \dfrac{u_{2d}}{u_{2d-1}}$. \end{enumerate} By this change of variables, we have \begin{align*} &\int_{0<t_1<\cdots < t_{2d}<1} \dfrac{dt_1 \cdots dt_{2d}}{(1-t_1)t_2 \cdots (1-t_{2d-1})t_{2d} }\\ &\times \left( \log \dfrac{1-t_1}{1-t_2} \right)^{a_1-1} \left( \log \dfrac{t_3}{t_2} \right)^{b_1-1} \cdots \left( \log \dfrac{1-t_{2d-1}}{1-t_{2d}} \right)^{a_{d}-1} \left( \log \dfrac{1}{t_{2d}} \right)^{b_{d}-1} \left( \log \dfrac{t_2 \cdots t_{2d} }{t_1 \cdots t_{2d-1}} \right)^{s} \\ =& \int_{0<u_1<\cdots < u_{2d}<1} \dfrac{du_1 \cdots du_{2d}}{(1-u_1)u_2 \cdots (1-u_{2d-1})u_{2d} }\\ &\hspace{20pt}\times \left( \log \dfrac{1}{u_{2d}} \right)^{a_1-1} \left( \log \dfrac{1-u_{2d-1}}{1-u_{2d}} \right)^{b_1-1} \cdots \left( \log \dfrac{u_3}{u_2} \right)^{a_{d}-1} \left( \log \dfrac{1-u_1}{1-u_2} \right)^{b_{d}-1} \left( \log \dfrac{u_2 \cdots u_{2d} }{u_1 \cdots u_{2d-1}}\ \right)^{s}. \end{align*} By multiplying the gamma factors, we have $I_{\boldsymbol{k}}(s) =I_{\boldsymbol{k^{\dagger}}}(s)$. \section{$T$-interpolation of Ohno function} For an admissible index $\boldsymbol{k}=(k_1,\ldots ,k_r)$, Yamamoto \cite{Y} introduced an interpolated multiple zeta value $\zeta^T(\boldsymbol{k})$ as \begin{align*} \zeta^T(\boldsymbol{k}) = \sum_{\boldsymbol{p}} T^{r-\text{dep}(\boldsymbol{p})} \zeta(\boldsymbol{p}) \ \in \mathbb{R}[T]. \end{align*} Here the sum runs over all indices $\boldsymbol{p}$ such that \[\boldsymbol{p} =(k_1 \square k_2 \square \cdots \square k_r), \] where each $\square$ is filled by the comma $,$ or the plus $+$. 
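For instance, for the admissible index $\boldsymbol{k}=(1,2)$ the two choices of $\square$ give $\boldsymbol{p}=(1,2)$ and $\boldsymbol{p}=(3)$, so that \begin{align*} \zeta^T(1,2)=\zeta(1,2)+T\,\zeta(3). \end{align*}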
This polynomial in $T$ interpolates two kinds of multiple zeta values, i.e., $\zeta^{0}(\boldsymbol{k}) =\zeta(\boldsymbol{k})$ and $\zeta^{1}(\boldsymbol{k}) =\zeta^{\star}(\boldsymbol{k})$, where $\zeta^{\star}(\boldsymbol{k})$ is the multiple zeta-star value defined by \begin{align*} \zeta^{\star}(k_1,\ldots, k_r) :=\sum_{1\le n_1\le \cdots \le n_r} \frac {1}{n_1^{k_1}\cdots n_r^{k_r}}. \end{align*} Yamamoto proved the following sum formula for $\zeta^T(\boldsymbol{k})$, which is an interpolation of the sum formulas for multiple zeta and zeta-star values. \begin{thm}[{\cite[Theorem 1.1]{Y}}]\label{thm:t-sum} For integers $m$, $a \ge 1$, we have \begin{align*} \sum_{\substack{ {\rm wt}(\boldsymbol{k})=m+a+1 \\ \boldsymbol{k}:{\,\rm admissible},\,{\rm dep}(\boldsymbol{k})=a }} \zeta^T(\boldsymbol{k}) &= \sum_{j=0}^{a-1} \binom{m+a}{j} T^j (1-T)^{a-1-j} \zeta(m+a+1) \\ &= \sum_{i=0}^{a-1} \binom{m+i}{i} T^i \zeta(m+a+1). \end{align*} \end{thm} For $a\in \mathbb{Z}_{\ge 1}$, let $\boldsymbol{a} = ( \underbrace{1,\ldots ,1}_{a-1}, 2)$. We define an interpolated version of $I_{\boldsymbol{a}}(s)$ as \begin{align*} I^T_{\boldsymbol{a}}(s) &:=\dfrac{1}{(a-1)! \Gamma (s+1) }\\ &\times \int_{0<t_1<t_{2}<1} \dfrac{dt_1 dt_{2}}{(1-t_1)t_2 } \left( \log \dfrac{1-t_1}{1-t_2} + T \log \dfrac{t_2}{t_1} \right)^{a-1} \left( \log \dfrac{t_2 }{t_1 } \right)^{s}\ \ \ (\Re(s)>-1). \end{align*} When $T=0$, we have $I^0_{\boldsymbol{a}}(s) = I_{\boldsymbol{a}}(s)$. When $s=m \in \mathbb{Z}_{\ge 0}$, the value $I^T_{\boldsymbol{a}}(m)$ is the sum of all interpolated multiple zeta values for fixed weight and depth: \[ I^T_{\boldsymbol{a}}(m) = \sum_{|\boldsymbol{e}|=m} \zeta^{T}(\boldsymbol{a}\oplus \boldsymbol{e}) =\sum_{\substack{ {\rm wt}(\boldsymbol{k})=m+a+1 \\ \boldsymbol{k}: \text{admissible}, \text{dep} (\boldsymbol{k})=a }} \zeta^T(\boldsymbol{k}). \] We can give the following formula, which is an interpolation of the sum formula for $T$-interpolated multiple zeta values. In fact, this theorem recovers Theorem \ref{thm:t-sum} by setting $s=m$. Since the proof is the same as that in the last section, we omit it. \begin{thm} For $s\in \mathbb{C}$ and $a\in\mathbb{Z}_{\ge 1}$, we have \begin{align*} I^T_{\boldsymbol{a}}(s)= \left( \sum_{i=0}^{a-1} \binom{s+i}{i} T^i \right) \zeta(s+a+1). \end{align*} \end{thm} \section{New Relations} We first prove Theorem \ref{main6}. \begin{proof}[Proof of Theorem \ref{main6}] By the partial fraction decomposition, we have \begin{align*} \frac{1}{(w+n_1)\cdots(w+n_r)}=\sum_{i=1}^r \frac{1}{w+n_i}\cdot\prod_{j\ne i} \frac{1}{ n_j-n_i }. \end{align*} Let $B(x,y)$ be the beta function. Since $$ \int_0^\infty\frac{w^{-s}}{w+n}dw=n^{-s}B(s,1-s)=n^{-s}\frac{\pi}{\sin(\pi s)} $$ for $0<\Re(s)<1$, we have \begin{align*} \int_0^\infty\frac{w^{-s-1}}{(w+n_1)\cdots(w+n_r)}dw=-\frac{\pi}{\sin(\pi s)}\sum_{i=1}^r \frac{1}{n_i^{s+1}}\prod_{j\ne i} \frac{1}{ n_j-n_i }. \end{align*} Therefore, the statement of the theorem holds for $-1<\Re(s)<0$.
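To extend this to the full range of $s$ in the statement, we next verify the absolute convergence of the series involved, after which the identity theorem applies.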
For $-r<\Re(s)<0$ and $0\le i\le r$, we have \begin{align*} \int_{n_i}^{n_{i+1}}\frac{w^{-\sigma-1}}{(w+n_1)\cdots(w+n_r)}dw&\le \frac{1}{n_{i+1}n_{i+2}\cdots n_r}\int_{n_i}^{n_{i+1}}w^{-\sigma-i-1}dw\\ &\ll \begin{cases} (n_{i+1}n_{i+2}\cdots n_r)^{-1}n_{i+1}^{-\sigma-i}&(-\sigma>i),\\ (n_{i+1}n_{i+2}\cdots n_r)^{-1}\log n_{i+1}&(-\sigma=i),\\ (n_{i+1}n_{i+2}\cdots n_r)^{-1}n_{i}^{-\sigma-i}&(-\sigma<i), \end{cases}\\ &\ll \begin{cases} (n_{i+1}n_{i+2}\cdots n_r)^{-1}n_{i+1}^{-\sigma-i}\log (n_{i+1}+1)&(-\sigma\ge i),\\ (n_{i+1}n_{i+2}\cdots n_r)^{-1}n_{i}^{-\sigma-i}&(-\sigma<i), \end{cases} \end{align*} where $n_0=0$ and $n_{r+1}=\infty$. Hence we can estimate \begin{align}\label{bunkatsu} \begin{split} &\sum_{0<n_1<\cdots<n_r}\frac{1}{n_1^{k_1-1}\cdots n_r^{k_r-1}}\int_{0}^{\infty}\frac{w^{-\sigma-1}}{(w+n_1)\cdots(w+n_r)}dw\\ &\ll \sum_{0\le i\le-\sigma}\sum_{0<n_1<\cdots<n_r}(n_1^{-k_1+1}\cdots n_i^{-k_i+1})n_{i+1}^{-k_{i+1}-\sigma-i}(n_{i+2}^{-k_{i+2}}\cdots n_r^{-k_r})\log (n_{i+1}+1)\\ &\quad+\sum_{-\sigma<i\le r}\sum_{0<n_1<\cdots<n_r}(n_1^{-k_1+1}\cdots n_{i-1}^{-k_{i-1}+1})n_{i}^{-k_{i}-\sigma-i+1}(n_{i+1}^{-k_{i+1}}\cdots n_r^{-k_r}). \end{split} \end{align} Note that by Matsumoto \cite{Mat02}, the infinite series $\sum_{0< n_1<\cdots <n_r} n_1^{-\sigma_1}\cdots n_r^{-\sigma_r}$ converges for \begin{align*} &\sigma_{1}+\cdots +\sigma_r>r,\\ &\sigma_{2}+\cdots +\sigma_r>r-1,\\ &\cdots\\ &\sigma_r>1. \end{align*} Thus the first series in the right-hand side of \eqref{bunkatsu} converges when \begin{align*} &k_{1}+\cdots +k_r+\sigma>r,\\ &k_{2}+\cdots +k_r+\sigma>r-2,\\ &\cdots\\ &k_{i+1}+\cdots +k_r+\sigma>r-2i \end{align*} for $0\le i\le-\sigma$. These conditions can be simply written as \begin{align*} \max_{1\leq j\leq 1-\sigma}\{r-2j+2-(k_j+\cdots+k_r)\}<\sigma. \end{align*} Similarly the second series of the right-hand side in \eqref{bunkatsu} converges when \begin{align*} \max_{1\leq j\leq r}\{r-2j+2-(k_j+\cdots+k_r)\}<\sigma. \end{align*} Hence by the identity theorem, we obtain the theorem. \end{proof} \begin{proof}[Proof of Theorem \ref{main7}] Let $E_j$ be the elementary symmetric polynomial of degree $j$ in\\ $(n_1,\ldots,n_{\ell-1},n_{\ell+1},\ldots,n_{r})$. Then we have \begin{align}\label{symmetric} \begin{split} &-\frac{\sin(\pi s)}{\pi}\sum_{0<n_1<\cdots<n_r}\frac{n_\ell}{n_1^{k_1}\cdots n_r^{k_r}}\int_0^\infty\frac{w^{-s-1}(w+n_1)\cdots(w+n_{\ell-1})(w+n_{\ell+1})\cdots(w+n_{r})}{(w+n_1)\cdots(w+n_r)}dw\\ &=-\frac{\sin(\pi s)}{\pi}\sum_{0<n_1<\cdots<n_r}\frac{n_\ell}{n_1^{k_1}\cdots n_r^{k_r}}\int_0^\infty\frac{w^{-s-1}(w^{r-1}+E_1w^{r-2}+\cdots+E_{r-1})}{(w+n_1)\cdots(w+n_r)}dw. \end{split} \end{align} The left-hand side in \eqref{symmetric} is $\zeta(k_1,\ldots,k_{\ell-1},k_\ell+s,k_{\ell+1},\ldots,k_r)$ for $-1<\sigma<0$. On the other hand, the right-hand side in \eqref{symmetric} is \begin{align*} &(-1)^{r-1}\sum_{\substack{|\boldsymbol{e}|=r-1\\e_1,\ldots,e_{r}\leq1\\e_\ell=0}}I_{\boldsymbol{k}+\boldsymbol{e}}(s-r+1)+(-1)^{r-2}\sum_{\substack{|\boldsymbol{e}|=r-2\\e_1,\ldots,e_{r}\leq1\\e_\ell=0}}I_{\boldsymbol{k}+\boldsymbol{e}}(s-r+2)\\ &+(-1)^{r-3}\sum_{\substack{|\boldsymbol{e}|=r-3\\e_1,\ldots,e_{r}\leq1\\e_\ell=0}}I_{\boldsymbol{k}+\boldsymbol{e}}(s-r+3)\\ &+\cdots\\ &+\sum_{\substack{|\boldsymbol{e}|=0\\e_1,\ldots,e_{r}\leq1\\e_\ell=0}}I_{\boldsymbol{k}+\boldsymbol{e}}(s)\\ =&\sum_{m=0}^{r-1}(-1)^{m}\sum_{\substack{|\boldsymbol{e}|=m\\e_1,\ldots,e_{r}\leq1\\e_\ell=0}}I_{\boldsymbol{k}+\boldsymbol{e}}(s-m). 
\end{align*} for \begin{align}\label{last} \max_{\substack{1\leq j\leq r\\|\boldsymbol{e}|=m\\e_1,\ldots,e_{r}\leq1\\e_\ell=0}}\{r-2j+2-(k_j+e_j+\cdots+k_r+e_r)\}<\Re(s-m)<0\quad(m=0,\ldots,r-1). \end{align} If $\boldsymbol{k}$ satisfies $$\max_{\substack{1\leq j\leq r\\|\boldsymbol{e}|=m\\e_1,\ldots,e_{r}\leq1\\e_\ell=0}}\{r-2j+2-(k_j+e_j+\cdots+k_r+e_r)\}<-m$$ for all $m=0,\ldots,r-1$, the inequality \eqref{last} holds for $-1<\sigma<0$. Hence the theorem is valid for $-1<\sigma<0$. By meromorphic continuation, we obtain the result. \end{proof} \end{document}
\begin{document} \title{Computable copies of $\ell^p$} \author{Timothy H. McNicholl} \address{Department of Mathematics\\ Iowa State University\\ Ames, Iowa 50011} \email{[email protected]} \thanks{Subsection \ref{subsec:proof.thm.main.2} previously appeared in the conference proceedings of CiE 2015 \cite{McNicholl.2015}. The author's participation in CiE 2015 was supported by a Simons Foundation Collaboration Grant for Math\-e\-ma\-ti\-cians} \subjclass[2010]{Primary: 03D78, 03D45. Secondary: 46B25} \begin{abstract} Suppose $p$ is a computable real so that $p \geq 1$. It is shown that the halting set can compute a surjective linear isometry between any two computable copies of $\ell^p$. It is also shown that this result is optimal in that when $p \neq 2$ there are two computable copies of $\ell^p$ with the property that any oracle that computes a linear isometry of one onto the other must also compute the halting set. Thus, $\ell^p$ is $\Delta_2^0$-categorical and is computably categorical if and only if $p = 2$. It is also demonstrated that there is a computably categorical Banach space that is not a Hilbert space. These results hold in both the real and complex case. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} We start by considering the very general question ``Given two computable and linearly isometric Banach spaces, how hard is it to compute a linear isometry from one onto the other?'' (Roughly speaking, a Banach space is computable if there are algorithms that compute its scalar multiplication, vector addition, and norm.) We specialize this question to the case of Banach spaces that are linearly isometric to $\ell^p$ where $p \geq 1$ is a computable real (i.e. a real whose decimal expansion is computable). Our first result is that this is no harder than computing membership in the halting set. Namely, we show that when $p$ is a computable real so that $p \geq 1$, the halting set is capable of computing a surjective linear isometry between any two computable copies of $\ell^p$. Our second result is that this problem is not easier than the halting set. Namely, when $p$ is a computable real so that $p \geq 1$ and $p \neq 2$, there are two computable copies of $\ell^p$ so that any oracle that computes a surjective linear isometry from one onto the other must also compute the halting set. It is already known that any two computable copies of $\ell^2$ are computably linearly isometric \cite{Pour-El.Richards.1989}. This is essentially due to the fact that $\ell^2$ is a Hilbert space and mirrors the classical fact that any two infinite-dimensional separable Hilbert spaces are linearly isometric \cite{Halmos.1998}. The first of our two results is based on a sharpening of an inequality due to J. Lamperti which we prove in Section \ref{sec:classical}. In the main, our second result was previously shown for $p = 1$ by Pour-El and Richards \cite{Pour-El.Richards.1989}. Their proof rests on an observation about the extreme points of the closed unit ball of $\ell^1$ that does not generalize to $\ell^p$ when $p > 1$. The proof presented here uses the characterization of the linear isometries of $L^p$ spaces due to S. Banach and J. Lamperti \cite{Banach.1987}, \cite{Fleming.Jamison.2003}, \cite{Lamperti.1958}. Our findings can be recast in the setting of computable categoricity. A mathematical structure is \emph{computably categorical} if any two of its computable copies are isomorphic via a computable map.
A structure is \emph{$\Delta^0_2$-categorical} if the halting set can compute an isomorphism between any two of its computable copies \cite{Ash.Knight.2000}, \cite{Fokina.Harizanov.Melnikov.2014}. Our results can be interpreted in the setting of computable categoricity by replacing `isomorphism' with `surjective linear isometry'; i.e. when $p$ is a computable real so that $p \geq 1$, $\ell^p$ is $\Delta_2^0$-categorical, and $\ell^p$ is computably categorical if and only if $p = 2$. The latter resolves a question posed by A.G. Melnikov in 2013 \cite{Melnikov.2013}. Although our theorems are proven for the complex version of $\ell^p$, they also hold for the real version of $\ell^p$. The paper is organized as follows. Section \ref{sec:background} covers background and preliminaries from functional analysis and computable analysis. Section \ref{sec:overview} gives an overview of the proof that $\ell^p$ is $\Delta_2^0$-categorical. The remainder of the work is then divided into two parts each corresponding to a different mathematical universe: the classical world (Section \ref{sec:classical}), wherein we have full access to all the concepts, principles, and methods of classical mathematics, and the computable world (Section \ref{sec:computable}) wherein we can only see approximations of classical objects and can only access computable operations on these approximations. In Section \ref{sec:additional}, we use the methods developed in the previous sections to provide simple proofs that there is a computably categorical Banach space that is not a Hilbert space and that $\ell^p$ has a computable copy if and only if $p$ is computable. Section \ref{sec:conclusion} presents concluding remarks. \section{Background and preliminaries}\label{sec:background} \subsection{Background and preliminaries from functional analysis}\label{subsec:background.fa} Throughout this paper, it is assumed that all Banach spaces are Banach spaces over the field of complex numbers $\mathbb{C}$. We begin with some notation and terminology. Let $\mathcal{B} = (V, \cdot, +, \norm{\ })$ be a Banach space. By a \emph{subspace} of $\mathcal{B}$ we will always mean a linear subspace of $\mathcal{B}$ that is topologically closed. When $S \subseteq V$ and $F \subseteq \mathbb{C}$, we let $\mathcal{L}_F(S)$ denote the set of all linear combinations of vectors in $S$ whose coefficients lie in $F$; i.e. \[ \mathcal{L}_F(S) = \left\{ \sum_{j = 0}^M \alpha_j v_j\ :\ M \in \mathbb{N}\ \wedge\ \alpha_0, \ldots, \alpha_M \in F\ \wedge\ v_0, \ldots, v_M \in S\right\}. \] The \emph{subspace generated by $S$} is the closure of the linear span of $S$; we denote this by $\langle S \rangle$. We say that $G \subseteq V$ is a \emph{generating set} for $\mathcal{B}$ if it generates all of $\mathcal{B}$; i.e. $V = \langle G \rangle$. For example, let $e_n = \chi_{\{n\}}$ for all $n \in \mathbb{N}$ (where $\chi_A$ denotes the characteristic function of $A$). Then, $E := \{e_0, e_1, \ldots\}$ is a generating set for $\ell^p$ which we refer to as the \emph{standard generating set} for $\ell^p$. Also, the set of all $f_n(x) = x^n$ for $n \in \mathbb{N}$ is a generating set for $C[0,1]$. A map between two Banach spaces is \emph{linear} if it preserves scalar multiplication and vector addition; it is an \emph{isometry} (or is \emph{isometric}) if it preserves the metric induced by the norm; i.e. $\norm{T(x) - T(y)} = \norm{x - y}$. Thus, every isometry is injective.
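(Indeed, if $T(x) = T(y)$, then $\norm{x - y} = \norm{T(x) - T(y)} = 0$, and so $x = y$.) On the other hand, an isometry need not be surjective; for example, the forward shift on $\ell^p$, which sends $f$ to the function $g$ defined by $g(0) = 0$ and $g(n+1) = f(n)$, is a linear isometry of $\ell^p$ into itself that is not onto.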
An \emph{endomorphism} of a Banach space is a linear (but not necessarily isometric) map of the space into itself. When $p$ is a positive number, $\ell^p$ denotes the space of all functions $f : \mathbb{N} \rightarrow \mathbb{C}$ so that \[ \sum_{n = 0}^\infty |f(n)|^p < \infty. \] $\ell^p$ is a vector space over $\mathbb{C}$ with the usual scalar multiplication and vector addition. When $p \geq 1$ it is a Banach space under the norm defined by \[ \norm{f}_p = \left( \sum_{n = 0}^\infty |f(n)|^p \right)^{1/p}. \] It is often convenient to view $\ell^p$ as $L^p(\mu)$ where $\mu$ is the counting measure on $\mathbb{N}$. When $f \in \ell^p$, the \emph{support} of $f$ is the set of all $t \in \mathbb{N}$ so that $f(t) \neq 0$; we denote this set by $\operatorname{supp}(f)$. If $f_0, f_1, \ldots$ are vectors in $\ell^p$ so that $\operatorname{supp}(f_m) \cap \operatorname{supp}(f_n) = \emptyset$ whenever $m \neq n$, then we say that $f_0, f_1, \ldots$ are \emph{disjointly supported}. Note that if $f,g \in \ell^p$ are disjointly supported then $\norm{f +g }_p^p = \norm{f}_p^p + \norm{g}_p^p$. We now describe a simple numerical test for disjointness of support when $p \neq 2$. When $z, w \in \mathbb{C}$ let: \[ \sigma_1(z, w) = |2 (|z|^p + |w|^p) - (|z - w|^p + |z + w|^p)| \] In 1958, J. Lamperti proved that $\sigma_1(z,w) = 0$ iff $zw = 0$ and that the sign of $2 (|z|^p + |w|^p) - (|z - w|^p + |z + w|^p)$ depends only on $p$. Define $\sigma_1(f,g)$ to be $\sum_n \sigma_1(f(n), g(n))$. Thus, $\sigma_1(f,g) = |2(\norm{f}^p + \norm{g}^p) - (\norm{f - g}^p + \norm{f + g}^p)|$ and $\sigma_1(f,g) = 0$ if and only if $f,g$ are disjointly supported. Note also that $\sigma_1$ is invariant under linear isometries. Thus, every isometric endomorphism of $\ell^p$ preserves the `disjoint support' relation. That is, if $T : \ell^p \rightarrow \ell^p$ is a linear isometry, then $T(f)$ and $T(g)$ are disjointly supported whenever $f,g \in \ell^p$ are disjointly supported. When $f,g \in \ell^p$, write $f \preceq g$ if $f(n) = 0$ whenever $g(n) \neq f(n)$. It follows that $\preceq$ is a partial order of $\ell^p$. Note that the atoms of this partial order are the nonzero scalar multiples of the $e_n$'s. Note also that $f \preceq g$ if and only if $g - f$ and $f$ are disjointly supported. Thus, $\preceq$ is preserved by isometric endomorphisms of $\ell^p$. The proof that $\ell^p$ is not computably categorical when $p \neq 2$ is based on the following. \begin{theorem}[Banach-Lamperti]\label{thm:classification} Suppose $1 \leq p < \infty$ and $p \neq 2$. Suppose $T$ is an endomorphism of $\ell^p$. Then, $T$ is a surjective isometric endomorphism of $\ell^p$ if and only if there are unimodular constants $\lambda_0, \lambda_1, \ldots$ and a permutation of $\mathbb{N}$, $\phi$, so that $T(e_n) = \lambda_n e_{\phi(n)}$ for all $n$. \end{theorem} In his seminal text on linear operators, S. Banach stated Theorem \ref{thm:classification} for the case of $\ell^p$ spaces over the reals \cite{Banach.1987}. He also stated a classification of the linear isometries of $L^p[0,1]$ in the real case. Banach's proofs of these claims were sketchy and did not easily generalize to the complex case. In 1958, J. Lamperti rigorously proved a generalization of Banach's claims to real and complex $L^p$-spaces of $\sigma$-finite measures \cite{Lamperti.1958}. Theorem \ref{thm:classification} follows from J. Lamperti's work as it appears in Theorem 3.2.5 of \cite{Fleming.Jamison.2003}. 
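To illustrate the numerical test $\sigma_1$ in the simplest case, take $p = 1$: $\sigma_1(1,1) = |2(1 + 1) - (0 + 2)| = 2$, whereas $\sigma_1(1,0) = |2 - (1 + 1)| = 0$; accordingly, $\sigma_1(e_0 + e_1, e_1) = 2 \neq 0$ while $\sigma_1(e_0, e_1) = 0$. By contrast, when $p = 2$ the parallelogram law gives $2(|z|^2 + |w|^2) = |z + w|^2 + |z - w|^2$ for all $z, w \in \mathbb{C}$, so $\sigma_1$ vanishes identically and detects nothing about supports.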
Note that Theorem \ref{thm:classification} fails when $p = 2$. For, $\ell^2$ is a Hilbert space. So, if $\{f_0, f_1, \ldots\}$ is any orthonormal basis for $\ell^2$, then there is a unique surjective linear isometry $T$ of $\ell^2$ so that $T(e_n) = f_n$ for all $n$. \subsection{Background and preliminaries from computable analysis}\label{subsec:background.ca} We assume the reader is familiar with the basic notation and terminology of computability theory as expounded in \cite{Cooper.2004}. We cover here the basic notions from computable analysis necessary to understand the results herein. A more expansive treatment can be found in \cite{Pour-El.Richards.1989}, \cite{Weihrauch.2000}. Suppose $\mathcal{B}$ is a Banach space and $F = \{f_0, f_1, \ldots \} \subseteq \mathcal{B}$ is a generating set for $\mathcal{B}$. We say that $F$ is an \textit{effective generating set} for $\mathcal{B}$ if there is an algorithm that, given any $f \in \mathcal{L}_{\mathbb{Q}(i)}(F)$ and a nonnegative integer $k$ as input computes a rational number $q$ so that $| \|f\| - q| < 2^{-k}$; less formally, the map $f \in \mathcal{L}_{\mathbb{Q}(i)}(F) \mapsto \|f\|$ is computable. For example, $\{1, i\}$ is an effective generating set for $\mathbb{C}$, and the standard generating set of $\ell^p$ is an effective generating set for $\ell^p$. On the other hand, if $|\zeta| = 1$, then $\zeta E := \{\zeta e_0, \zeta e_1, \ldots \}$ is also an effective generating set for $\ell^p$, even if $\zeta$ is incomputable. We designate $\{1,i\}$ and $E$ as the default effective generating sets for $\mathbb{C}$ and $\ell^p$ respectively; i.e. when discussing computability on these spaces without mention of an effective generating set it is implicit that we are using the default generating set. Suppose $F$ is an effective generating set for a Banach space $\mathcal{B}$. We say that a vector $g \in \mathcal{B}$ is \textit{computable with respect to} $F$ if there is an algorithm that given any nonnegative integer $k$ as input computes a vector $f \in \mathcal{L}_{\mathbb{Q}(i)}(F)$ so that $\norm{g - f} < 2^{-k}$. Thus a point $z \in \mathbb{C}$ is computable (with respect to the default generating set) if there is an algorithm that given any $k \in \mathbb{N}$ as input, produces a rational point $q$ so that $|q - z| < 2^{-k}$. A vector $f \in \ell^p$ is computable (with respect to the default generating set $E$) if there is an algorithm that given any $n,k \in \mathbb{N}$ as input computes a rational point $q$ so that $|q - f(n)| < 2^{-k}$. On the other hand, if $\zeta$ is an incomputable unimodular point, then only the zero vector is computable with respect to both $E$ and $\zeta E$. Suppose $\mathcal{B}$ is a Banach space. When $f \in \mathcal{B}$, and $r > 0$, let $B(f; r)$ denote the open ball with center $f$ and radius $r$. Suppose $F$ is an effective generating set for $\mathcal{B}$. When $f \in \mathcal{L}_{\mathbb{Q}(i)}(F)$ and $r$ is a positive rational number, we call $B(f; r)$ a \textit{rational ball} (with respect to $F$). Suppose that for each $j \in \{1,2\}$, $F_j$ is an effective generating set for $\mathcal{B}_j$. We say that a map $T: \mathcal{B}_1 \rightarrow B_2$ is \textit{computable with respect to $(F_1, F_2$)} if there is an algorithm $P$ that meets the following three criteria. \begin{enumerate} \item \textbf{Approximation:} Given as input a rational ball with respect to $F_1$, $P$ either does not halt or produces a rational ball with respect to $F_2$. 
\item \textbf{Correctness:} If $B_2$ is the output of $P$ on input $B_1$, then $T(f) \in B_2$ whenever $f \in B_1$. \item \textbf{Convergence:} If $V$ is a neighborhood of $T(f)$, and if $U$ is a neighborhood of $f$, then $f$ belongs to a rational ball $B_1 \subseteq U$ with the property that $P$ halts on $B_1$ and produces a rational ball that is included in $V$. \end{enumerate} When we speak of an algorithm accepting a rational ball $B(\sum_{j = 0}^M \alpha_j f_j; r)$ as input, we mean that it accepts some representation of the ball such as a code of the sequence $(r, M, \alpha_0, \ldots, \alpha_M)$ and similarly when we say it produces a rational ball as output we mean that it produces codes of the center and radius. It is well-known that many familiar functions of a complex variable (such as $\sin$, $\exp$) are computable (with respect to the generating set $\{1,i\}$ used in the domain and range). Note also that when $|\zeta| = 1$ the `multiplication by $\zeta$' operator on $\ell^p$ is computable with respect to $E$ and $\zeta E$. A \emph{computable Banach space} consists of a pair $(\mathcal{B}, F)$ where $\mathcal{B}$ is a Banach space and $F$ is an effective generating set for $\mathcal{B}$. Unless the effective generating set truly requires explicit mention, for the sake of economy of expression we will just refer to `the computable Banach space $\mathcal{B}$'. If $(\mathcal{B}_1, F_1)$ and $(\mathcal{B}_2, F_2)$ are computable Banach spaces, then we say a map $T : \mathcal{B}_1 \rightarrow \mathcal{B}_2$ is computable if it is computable with respect to $(F_1, F_2)$. It easily follows that if $T : \mathcal{B}_1 \rightarrow \mathcal{B}_2$ is a computable surjective linear isometry, then $T^{-1}$ is also computable. If $(\mathcal{B}_1, F_1)$ and $(\mathcal{B}_2, F_2)$ are computable Banach spaces, then $(\mathcal{B}_1 \times \mathcal{B}_2, F_1 \times F_2)$ is a computable Banach space. Thus, if $\mathcal{B}$ is a computable Banach space, then vector addition is a computable map from $\mathcal{B} \times \mathcal{B}$ onto $\mathcal{B}$ and scalar multiplication is a computable map from $\mathbb{C} \times \mathcal{B}$ onto $\mathcal{B}$. In addition the norm of $\mathcal{B}$ is a computable map from $\mathcal{B}$ into $[0, \infty)$. Suppose $\mathcal{B}$ is a computable Banach space. A closed set $C \subseteq \mathcal{B}$ is \emph{c.e. closed} if the set of all rational balls that contain a point of $C$ is c.e.. An open set $U \subseteq \mathcal{B}$ is \emph{c.e. open} if the set of all rational balls included in $U$ is c.e.. Suppose $\mathcal{B}'$ is a computable Banach space and $f : \mathcal{B} \rightarrow \mathcal{B}'$ is computable. It is well-known that if $U \subseteq \mathcal{B}'$ is c.e. open, then $f^{-1}[U]$ is c.e. open. The following is `folklore'. \begin{proposition}\label{prop:bounding.principle} Suppose $\mathcal{B}$ is a computable Banach space and $f : \mathcal{B} \rightarrow \mathbb{R}$ is a computable function with the property that $f(v) \geq d(v, f^{-1}[\{0\}])$ for all $v\in \mathcal{B}$. Then, $f^{-1}[\{0\}]$ is c.e. closed. \end{proposition} The computability notions we have covered are all relativized in the usual way. We now formally state our two main theorems. \begin{theorem}\label{thm:main.1} Suppose $p$ is a computable real so that $p \geq 1$. Then, whenever $\mathcal{B}_0$ and $\mathcal{B}_1$ are computable Banach spaces that are linearly isometric to $\ell^p$, the halting set computes a surjective linear isometry of $\mathcal{B}_0$ onto $\mathcal{B}_1$.
\end{theorem} \begin{theorem}\label{thm:main.2} Suppose $p$ is a computable real so that $p \geq 1$ and $p \neq 2$. Suppose $C$ is a \emph{c.e.} set. Then, there is a computable Banach space $\mathcal{B}$ so that $C$ computes a surjective linear isometry of $\ell^p$ onto $\mathcal{B}$ and so that any oracle that computes a surjective linear isometry of $\ell^p$ onto $\mathcal{B}$ also computes $C$. \end{theorem} So if we take $C$ to be the halting set in Theorem \ref{thm:main.2}, then it follows that the problem of computing a linear isometric map of one computable copy of $\ell^p$ onto another is at least as hard as computing membership in the halting set. We close this section by mentioning some related work. A.G. Melnikov and K.M. Ng have investigated computable categoricity questions with regards to the space $C[0,1]$ of continuous functions on the unit interval with the supremum norm. In particular, they have shown that $C[0,1]$ is not computably categorical as a metric space nor as a Banach space \cite{Melnikov.2013}, \cite{Melnikov.Ng.2014}. The study of computable categoricity for countable structures goes back at least as far as the 1961 work of A.I. Malcev. The text of Ash and Knight has a thorough discussion of the main early results of this line of inquiry \cite{Ash.Knight.2000}. The survey by Fokina, Harizanov, and Melnikov contains a wealth of recent results on computable categoricity and other directions in the countable computable structures program \cite{Fokina.Harizanov.Melnikov.2014}. \section{Overview}\label{sec:overview} The proof of Theorem \ref{thm:main.2} is fairly straightforward. Here, we set forth the key concepts and supporting intermediate results for the proof of Theorem \ref{thm:main.1}. We first reduce the problem to the computation of surjective isometric endomorphisms. Fix a computable real so that $p \geq 1$. Suppose $(\mathcal{B}, G)$ is a computable Banach space, and suppose $T$ is a linear isometric mapping of $\mathcal{B}$ onto $\ell^p$. Then, $T[G]$ is an effective generating set for $\ell^p$, and $T$ is computable with respect to $(G, T[G])$. Thus, since inverses of computable surjective linear isometries are computable, the study of computable Banach spaces that are linearly isometric to $\ell^p$ can be reduced to the study of computability notions on $\ell^p$ with respect to different generating sets. In particular, to prove Theorem \ref{thm:main.1}, it suffices to show that whenever $F$ is an effective generating set for $\ell^p$, the halting set computes a surjective isometric endomorphism of $\ell^p$ with respect to $(E,F)$. Now, suppose $g_0, g_1, \ldots, $ are disjointly supported unit vectors in $\ell^p$. Then, there is a unique linear isometric map $T : \ell^p \rightarrow \ell^p$ so that $T(e_n) = g_n$ for all $n$; if the $g_n$'s generate $\ell^p$, then $T$ is also surjective. Furthermore, if an oracle $X$ computes $\{g_n\}_{n = 0}^\infty$ with respect to an effective generating set $F$, then $X$ also computes $T$ with respect to $(E,F)$. So, to prove Theorem \ref{thm:main.1}, it suffices to prove the following. \begin{theorem}\label{thm:comp.linear.ext} If $p$ is a computable real so that $p \geq 1$ and $p \neq 2$, and if $F$ is an effective generating set for $\ell^p$, then with respect to $F$ the halting problem computes a sequence of disjointly supported unit vectors that generate $\ell^p$. \end{theorem} Our main tool for producing such a sequence of unit vectors is the concept of a \emph{disintegration} which we define now. 
To begin, fix a real $p \geq 1$. Suppose $S \subseteq \omega^{<\omega}$ and $\phi : S \rightarrow \ell^p$. We say that $\phi$ is a \emph{reverse-order homomorphism} if $\phi(\tau) \preceq \phi(\rho)$ whenever $\tau, \rho \in S$ are such that $\tau \supset \rho$. (Recall from Subsection \ref{subsec:background.fa} that $f \preceq g$ if and only if $f(n) = 0$ whenever $f(n) \neq g(n)$.) We say that $\phi$ is a \emph{strong reverse-order homomorphism} if it is a reverse-order homomorphism with the additional feature that it maps incomparable nodes to disjointly supported vectors. Accordingly, an injective (strong) reverse-order homomorphism will be called a (strong) reverse-order monomorphism. Suppose $S$ is a subtree of $\omega^{<\omega}$. When $\nu$ is a nonterminal node of $S$, let $\nu^+_S$ denote the set of all children of $\nu$ in $S$. Call a map $\phi : S \rightarrow \ell^p$ \emph{summative} if $\phi(\nu) = \sum_{\nu' \in \nu^+_S} \phi(\nu')$ whenever $\nu$ is a nonterminal node of $S$. A \emph{disintegration} is a summative strong reverse-order monomorphism whose range generates $\ell^p$. That disintegrations exist is easy to see; e.g. set $\phi(\emptyset) = \sum_n 2^{-n} e_n$ and set $\phi((n)) = 2^{-n} e_n$. The challenge is to produce a disintegration that is computable with respect to an effective generating set $F$ (in the sense that there is an algorithm that given a $\nu \in S$ and a $k \in \mathbb{N}$ computes an $f \in \mathcal{L}_{\mathbb{Q}(i)}(F)$ so that $\norm{\phi(\nu) - f}_p < 2^{-k}$). Accordingly, in Section \ref{sec:computable}, we prove the following. \begin{theorem}\label{thm:comp.disintegration} If $p \geq 1$ is a computable real besides $2$, and if $F$ is an effective generating set for $\ell^p$, then there is a disintegration of $\ell^p$ that is computable with respect to $F$. \end{theorem} How does possession of a disintegration $\phi : S \rightarrow \ell^p$ that is computable with respect to an effective generating set $F$ help us to prove Theorem \ref{thm:comp.linear.ext}? Intuitively, to define $g_n$ we want to use the halting set to compute the limit of $\phi(\nu)$ as $\nu$ descends some carefully chosen branch of $S$. To see how we choose these branches, we now define the concept of an almost norm-maximizing chain. When $\nu$ is a non-root node of $\omega^{< \omega}$, let $\nu^-$ denote its parent. \begin{definition}\label{def:almost.norm.max} Suppose $\phi : S \rightarrow \ell^p$ is a disintegration. \begin{enumerate} \item If $\nu$ is a non-root node of $S$, then we say $\nu$ is an \emph{almost norm-maximizing child} of $\nu^-$ if $\norm{\phi(\mu)}_p^p \leq \norm{\phi(\nu)}_p^p + 2^{-(|\nu|+1)}$ whenever $\mu$ is a child of $\nu^-$ in $S$. \item A chain $\mathcal{C} \subseteq S$ is \emph{almost norm-maximizing} if every nonterminal node in $\mathcal{C}$ has an almost norm-maximizing child in $\mathcal{C}$. \end{enumerate} \end{definition} In Section \ref{sec:classical} we prove the following. \begin{theorem}\label{thm:an.chains} Suppose $\phi : S \rightarrow \ell^p$ is a disintegration. \begin{enumerate} \item If $\mathcal{C} \subseteq S$ is an almost norm-maximizing chain, then the $\preceq$-infimum of $\phi[\mathcal{C}]$ exists and is either $\mathbf{0}$ or an atom of $\preceq$. Furthermore, $\inf \phi[\mathcal{C}]$ is the limit in the $\ell^p$ norm of $\phi(\nu)$ as $\nu$ traverses the nodes in $\mathcal{C}$ in increasing order. 
\label{thm:an.chains::itm:infimum} \item If $\{\mathcal{C}_n\}_{n = 0}^\infty$ is a partition of $S$ into almost norm-maximizing chains, then $\inf \phi[\mathcal{C}_0]$, $\inf \phi[\mathcal{C}_1]$, $\ldots$ are disjointly supported vectors that generate $\ell^p$. \label{thm:an.chains::itm:partition} \end{enumerate} \end{theorem} And, in Section \ref{sec:computable}, we prove: \begin{theorem}\label{thm:comp.partition} Suppose $\phi : S \rightarrow \ell^p$ is a disintegration that is computable with respect to an effective generating set $F$. Then, there is a computable partition of $S$ into c.e. almost norm-maximizing chains. \end{theorem} So, to prove Theorem \ref{thm:comp.linear.ext} we first compute with respect to $F$ a disintegration $\phi : S \rightarrow \ell^p$, and then compute a partition of $S$ into c.e. almost norm-maximizing chains $\mathcal{C}_0$, $\mathcal{C}_1$, $\ldots$. Set $g_n = \inf \phi[\mathcal{C}_n]$. Note that $\norm{g_n}_p$ is a right-c.e. real. Thus, the halting set computes $\norm{g_n}_p$ from $n$. Since $g_n \preceq \phi(\nu)$ for all $\nu \in \mathcal{C}_n$, it follows that the halting set computes $g_n$ with respect to $F$ (since $\norm{\phi(\nu) - g_n}_p^p = \norm{\phi(\nu)}_p^p - \norm{g_n}_p^p$ for all $\nu \in \mathcal{C}_n$). We can also use the halting set to enumerate all values of $n$ for which $g_n \neq \mathbf{0}$; denote these $n_0 < n_1 < \ldots$. Then, $\{\norm{g_{n_j}}^{-1} g_{n_j}\}_{j = 0}^\infty$ is a disjointly supported sequence of unit vectors that generates $\ell^p$, and the halting set computes $\{g_{n_j}\}_{j = 0}^\infty$ with respect to $F$. Let us now return to the proof of Theorem \ref{thm:comp.disintegration}. The idea is to construct a disintegration $\phi$ via an ascending sequence of partial disintegrations that are computable uniformly with respect to $F$. Specifically, we define a \emph{partial disintegration} to be a strong reverse-order monomorphism $\psi : S- \{\emptyset\} \rightarrow \ell^p$ where $S$ is a finite non-empty subtree of $\omega^{< \omega}$. We say a partial disintegration $\psi_2$ \emph{extends} a partial disintegration $\psi_1$ if $\operatorname{dom}(\psi_1) \subseteq \operatorname{dom}(\psi_2)$ and if $\psi_2(\nu) = \psi_1(\nu)$ for all $\nu \in \operatorname{dom}(\psi_1)$. Let $F = \{f_0, f_1, \ldots\}$ be a generating set for $\ell^p$. If $F$ is an effective generating set, then it is quite easy to produce a partial disintegration that is computable with respect to $F$. Namely, set $S = \{\emptyset\}$ and $\psi = \emptyset$. Of course, this partial disintegration does not do much for us and is quite far from being a disintegration. Accordingly, when $\psi : S- \{\emptyset\} \rightarrow \ell^p$ is a partial disintegration, we define the \emph{success index of $\psi$} (with respect to $F$) to be the largest integer $N$ so that $d(f_j, \langle \operatorname{ran}(\psi) \rangle) < 2^{-N}$ whenever $0 \leq j < N$ and \[ \norm{ \psi(\nu) - \sum_{\nu' \in \nu^+_S} \psi(\nu')}_p < 2^{-N} \] whenever $\nu$ is a non-root nonterminal node of $S$. Here is how we can glue an ascending sequence of partial disintegrations into a disintegration. Suppose $\psi_0, \psi_1, \ldots$ is an ascending sequence of partial disintegrations (in the sense that $\psi_{n+1}$ extends $\psi_n$) so that the success index of $\psi_{n+1}$ is larger than $n$ for all $n$.
Set $S = \{\emptyset\} \cup \bigcup_n \operatorname{dom}(\psi_n)$, and set \[ \psi(\nu) = 2^{-\nu(0)} \lim_n (\norm{\psi_n(\nu(0))}_p + 1)^{-1}\psi_n(\nu) \] for all $\nu \in S - \{\emptyset\}$. Set $\psi(\emptyset) = \sum_{(a) \in S} \psi((a))$. Then, $\psi$ is a disintegration. Such a chain of partial disintegrations can be obtained by a fairly straightforward application of the following which is proven in Section \ref{sec:computable}. \begin{theorem}\label{thm:comp.partial.disintegration.ext} Suppose $F$ is an effective generating set for $\ell^p$ where $p \geq 1$ is a computable real besides $2$. If $N,k \in \mathbb{N}$, and if $\phi : S - \{\emptyset\} \rightarrow \ell^p$ is a partial disintegration that is computable with respect to $F$, then there exists a map $\phi' : S- \{\emptyset\} \rightarrow \ell^p$ so that \[ \max\{\norm{\phi(\nu) - \phi'(\nu)}_p\ : \nu \in S - \{\emptyset\} \} < 2^{-k} \] and so that $\phi'$ extends to a partial disintegration $\psi$ that is computable with respect to $F$ and whose success index with respect to $F$ is larger than $N$. Furthermore, an index of $\psi$ can be computed from $N$, $k$ and an index of $\phi$. \end{theorem} \section{Classical world}\label{sec:classical} \subsection{Results on disintegrations and partial disintegrations}\label{subec:disint} The following is a preliminary step to proving Theorem \ref{thm:an.chains}. \begin{proposition}\label{prop:limit.desc.chain} If $g_0 \succ g_1 \succ \ldots$ are vectors in $\ell^p$, then $\lim_n g_n$ exists pointwise and in the $\ell^p$-norm and is the $\preceq$-infimum of $\{g_0, g_1, \ldots \}$. \end{proposition} \begin{proof} Let \[ S = \bigcap_n \operatorname{supp}(g_n). \] Set $g = g_0 \cdot \chi_S$. Since $g_{n+1} \preceq g_n$, it follows that $g$ is the pointwise limit of $\{g_n\}_n$. We claim that $|g_n(t) - g(t)| \leq |g_0(t)|$ for all $t$. For, either $g_n(t) = g(t)$, or $g(t) = 0$ and $g_n(t) = g_0(t)$. By regarding summation as integration with respect to the counting measure, it now follows from the Dominated Convergence Theorem that $\lim_{n \rightarrow \infty} \norm{g_n - g}_p = 0$. Suppose $h \preceq g_n$ for all $n$. Thus, as discussed in Subsection \ref{subsec:background.fa}, $\sigma_1(g_n - h, h) = 0$ for all $n$. Since $\{g_n\}_{n = 0}^\infty$ converges to $g$ in the $\ell^p$-norm, $\sigma_1(g - h, h) = 0$; that is $h \preceq g$. Thus, $g = \inf \{g_0, g_1, \ldots\}$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:an.chains}:] (\ref{thm:an.chains::itm:infimum}): Suppose $\mathcal{C} \subseteq S$ is an almost norm-maximizing chain. It follows from Proposition \ref{prop:limit.desc.chain} that the $\preceq$-infimum of $\phi[\mathcal{C}]$ exists; let $g$ denote this infimum. By way of contradiction, suppose $j_0, j_1 \in \operatorname{supp}(g)$ and $j_0 \neq j_1$. Since $\phi$ maps incomparable nodes to disjointly supported vectors, whenever $\nu \in S$, the support of $\phi(\nu)$ contains both $j_0$ and $j_1$ if it contains either one of them. Since $\phi$ is reverse-order preserving, if $j_0$ and $j_1$ belong to the support of $\phi(\nu)$, then $\phi(\nu)(j_0) = g(j_0)$ and $\phi(\nu)(j_1) = g(j_1)$. But, since $\phi$ is a disintegration, the range of $\phi$ generates $\ell^p$- a contradiction. Thus, $g$ is either $\mathbf{0}$ or an atom.\\ (\ref{thm:an.chains::itm:partition}): Suppose $\mathcal{C}_0$, $\mathcal{C}_1$, $\ldots$ is a partition of $S$ into almost norm-maximizing chains.
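We must show that the vectors $\inf \phi[\mathcal{C}_0], \inf \phi[\mathcal{C}_1], \ldots$ are disjointly supported and generate $\ell^p$.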
By part (\ref{thm:an.chains::itm:infimum}), $\inf \phi[\mathcal{C}_n]$ exists for each $n$; let $h_n = \inf \phi [\mathcal{C}_n]$. We first claim that for every $j$, there is a $k$ so that $j$ belongs to the support of $h_k$. If there is an atom in $\operatorname{ran}(\phi)$ whose support contains $j$, then there is nothing to prove. So, suppose $j$ does not belong to the support of any atom in $\operatorname{ran}(\phi)$. We claim that there is a $\nu \in S$ so that $j \in \operatorname{supp}(\phi(\nu))$ and $|\nu| = 1$. For otherwise, $j \not \in \operatorname{supp}(g)$ for all $g \in \operatorname{ran}(\phi)$. But, since $\phi$ is a disintegration, $\operatorname{ran}(\phi)$ generates $\ell^p$- a contradiction. Since $\phi$ is a disintegration, if $\nu$ is a nonterminal node of $S$, then $\phi(\nu) = \sum_{\nu' \in \nu^+_S} \phi(\nu')$. It thus follows by induction that for each $s$, $j$ belongs to the support of a $\phi(\nu)$ so that $|\nu| = s$; since $\phi$ maps incomparable nodes to disjointly supported vectors, $\nu$ is unique and accordingly we denote it by $\nu_s$. Let $g_s = \phi(\nu_s)$. Again, since $\phi$ maps incomparable nodes to disjointly supported vectors, $\nu_{s+1} \supset \nu_s$ for all $s$. Since $\phi$ is a reverse-order monomorphism, $g_{s+1} \prec g_s$ for all $s$. Thus, $g_s(j) = g_0(j) \neq 0$ for all $s$. Now, for each $s$, let $k_s$ denote the $k$ so that $g_s \in \phi[\mathcal{C}_k]$. We claim that $\lim_s k_s$ exists. By way of contradiction suppose otherwise. Let $s_0 < s_1 < \ldots$ be the increasing enumeration of all values of $s$ for which $k_s \neq k_{s+1}$. Since $\nu_{s_m + 1} \supset \nu_{s_m}$, $\nu_{s_m}$ is a nonterminal node of $S$. Therefore, since $\mathcal{C}_{k_{s_m}}$ is almost norm-maximizing, it must contain a child of $\nu_{s_m}$ in $S$; let $\mu_m$ denote this child and let $\lambda_m = \phi(\mu_m)$. Thus, $\lambda_m \prec g_{s_m}$ and the supports of $\lambda_m$ and $g_{s_m + 1}$ are disjoint (since $\mu_m$ and $\nu_{s_m + 1}$ are distinct nodes at the same level of $S$). In addition, since $\mu_m$ is an almost norm-maximizing child of $\nu_{s_m}$, $|g_0(j)|^p = |g_{s_m+1}(j)|^p \leq \norm{\lambda_m}^p_p + 2^{-s_m}$. Since $\lambda_{m + r} \preceq g_{s_{m+r}} \preceq g_{s_m + 1}$, the supports of $\lambda_m$ and $\lambda_{m+r}$ are disjoint if $r > 0$. That is to say, $\operatorname{supp}(\lambda_m) \cap \operatorname{supp}(\lambda_{m'}) = \emptyset$ whenever $m \neq m'$. Thus, $\infty = \sum_m \norm{\lambda_m}^p_p \leq \norm{g_0}^p_p$- a contradiction. Thus, $k := \lim_s k_s$ exists, and so $j$ belongs to the support of $h_k$. Moreover, it follows from part (\ref{thm:an.chains::itm:infimum}) that $h_k$ is a nonzero scalar multiple of $e_j$. It then follows that $h_0, h_1, \ldots$ generate $\ell^p$. Finally, we claim that $h_0, h_1, \ldots$ are disjointly supported. For, suppose $k \neq k'$. It suffices to show that there are incomparable nodes $\nu, \nu'$ so that $\nu \in \mathcal{C}_k$ and $\nu' \in \mathcal{C}_{k'}$. If there is an integer $s$ so that $\mathcal{C}_k$ and $\mathcal{C}_{k'}$ both contain a node of length $s$, then we may choose $\nu$ and $\nu'$ to be these nodes (since $\mathcal{C}_k \cap \mathcal{C}_{k'} = \emptyset$). So, suppose there is no $s$ so that $\mathcal{C}_k$ and $\mathcal{C}_{k'}$ both contain a node of length $s$. Let $\mu$ be the $\subseteq$-minimal node in $\mathcal{C}_k$ and let $\mu'$ be the $\subseteq$-minimal node in $\mathcal{C}_{k'}$. Let $s = |\mu|$, and let $s' = |\mu'|$. Thus, $s \neq s'$. 
Without loss of generality, assume $s < s'$. This entails that $\mathcal{C}_k$ contains a terminal node of $S$; let $\tau$ denote this node and let $t = |\tau|$. Thus, $h_k = \phi(\tau)$. Furthermore, $t < s'$. Let $\mu''$ denote the length $t$ ancestor of $\mu'$. Since $\tau$ is a terminal node of $S$, $\tau$ and $\mu''$ are incomparable. Thus, $\tau$ and $\mu'$ are incomparable. \end{proof} For the sake of proving Theorem \ref{thm:comp.partition}, we prove the following existence result. \begin{proposition}\label{prop:max} If $\phi : S \rightarrow \ell^p$ is a disintegration, and if $\nu$ is a nonterminal node of $S$, then \[ \max\{ \norm{\phi(\nu')}^p_p\ :\ \nu' \in \nu^+_S\} \] exists. \end{proposition} \begin{proof} Since $\phi$ is a disintegration, $\phi(\nu) = \sum_{\nu' \in \nu^+_S} \phi(\nu')$. Since $\phi$ maps incomparable nodes to disjointly supported vectors it follows that \[ \sum_{\nu' \in \nu^+_S} \norm{\phi(\nu')}^p_p = \norm{\phi(\nu)}^p_p < \infty. \] Therefore, there is a finite set $\{\nu'_0, \ldots, \nu'_t\} \subseteq \nu^+_S$ so that \[ \norm{\phi(\nu')}^p_p \leq \max\{\norm{\phi(\nu'_0)}^p_p, \ldots, \norm{\phi(\nu'_t)}^p_p\} \] whenever $\nu' \in \nu^+_S - \{\nu_0', \ldots, \nu_t'\}$. Thus, \[ \sup\{\norm{\phi(\nu')}^p_p\ :\ \nu' \in \nu^+_S\} = \max\{\norm{\phi(\nu'_0)}^p_p, \ldots, \norm{\phi(\nu'_t)}^p_p\}. \] \end{proof} For the sake of proving Theorem \ref{thm:comp.partial.disintegration.ext}, we prove the following existence theorem; Theorem \ref{thm:comp.partial.disintegration.ext} will then be demonstrated by a search procedure. Recall that the success index of a partial disintegration, which was defined in Section \ref{sec:overview}, measures how close a partial disintegration is to being a disintegration. \begin{theorem}\label{thm:partial.disintegration.ext} Suppose $F = \{f_0, f_1, \ldots\}$ is a generating set for $\ell^p$. If $\phi : S- \{\emptyset\} \rightarrow \ell^p$ is a partial disintegration, and if $N_0 \in \mathbb{N}$, then $\phi$ extends to a partial disintegration $\psi$ whose success index (with respect to $F$) is larger than $N_0$. \end{theorem} \begin{proof} There is a nonnegative integer $N_1$ so that $d(f_j, \langle e_0, \ldots, e_{N_1}\rangle) < 2^{-N_0}$ whenever $0 \leq j < N_0$ and so that $d(\phi(\nu), \langle e_0, \ldots, e_{N_1}\rangle) < 2^{-N_0}$ whenever $\nu \in S - \{\emptyset\}$. When $0 \leq k \leq N_1$, let $\nu_k = \emptyset$ if $k \not \in \bigcup_{\nu \in S} \operatorname{supp}(\phi(\nu))$; otherwise let $\nu_k$ denote the $\subseteq$-maximal node $\nu \in S$ so that $k \in \operatorname{supp}(\phi(\nu))$. Intuitively, we define $\psi$ so that its range includes nonzero scalar multiples of each of $e_0, \ldots, e_{N_1}$. We first define the domain of $\psi$. Let \[ S' = S \cup \{\nu_k^\frown(k + \# S)\ :\ 0 \leq k \leq N_1\ \wedge\ (\nu_k = \emptyset\ \vee\ \#\operatorname{supp}(\phi(\nu_k)) \geq 2)\}. \] (Here, $\# A$ denotes the cardinality of $A$, and ${\ }^\frown$ denotes concatenation.) Let $\psi(\nu) = \phi(\nu)$ if $\nu \in S - \{\emptyset\}$. Suppose $\nu_k^\frown(k + \#S) \in S'$. Let \[ \psi(\nu_k^\frown(k + \#S)) = \left\{ \begin{array}{cc} \phi(\nu_k) \cdot e_k & \mbox{if $\nu_k \neq \emptyset$}\\ e_k & \mbox{otherwise}\\ \end{array} \right. \] Thus, by construction, $\psi$ is a partial disintegration, and $\psi$ extends $\phi$. We claim that the success index of $\psi$ is at least as large as $N_0$.
For, by construction, $e_0, \ldots, e_{N_1} \in \langle \operatorname{ran}(\psi) \rangle$. Thus, by choice of $N_1$, $d(f_j, \langle \operatorname{ran}(\psi) \rangle) < 2^{-N_0}$ whenever $0 \leq j < N_0$. Suppose $\nu$ is a nonterminal node of $S'$. Thus, by definition of $S'$, $\nu \in S$. We show that \begin{equation} \norm{\psi(\nu) - \sum_{\nu' \in \nu^+_{S'} } \psi(\nu')}_p < 2^{-N_0}.\label{ineqn:sum} \end{equation} Since $\nu \in S$, $\psi(\nu) = \phi(\nu)$. By choice of $N_1$, $\norm{\phi(\nu) - \phi(\nu)\cdot \chi_{\{0, \ldots, N_1\}}}_p < 2^{-N_0}$. By definition of $S'$, whenever $0 \leq k < N_1$ and $k \in \operatorname{supp}(\phi(\nu))$, $k$ belongs to the support of $\phi(\nu')$ for some child $\nu'$ of $\nu$ in $S'$; furthermore $\phi(\nu) \cdot e_k \preceq \psi(\nu')$. The inequality (\ref{ineqn:sum}) follows. \end{proof} \subsection{On the distance to the nearest strong reverse-order homomorphism}\label{subsec:distance.strong} Let $S$ be a finite nonempty subset of $\omega^{<\omega}$. Define $\ell^p_S$ to be the space of all functions that map $S$ into $\ell^p$. When $\phi \in \ell^p_S$, define $\norm{\phi}_S$ to be $\max \{\norm{\phi(\nu)}_p\ :\ \nu \in S\}$. Thus, $\norm{\ }_S$ is a norm on $\ell^p_S$ under which $\ell^p_S$ is a Banach space. Suppose $\phi \in \ell^p_{S - \{\emptyset\}}$ is a partial disintegration, and let $S' \supseteq S$ be a finite subtree of $\omega^{< \omega}$. Define $\mathcal{H}_{\phi, S'}$ to be the set of all strong reverse-order homomorphisms $\psi \in \ell^p_{S' - \{\emptyset\}}$ so that $\psi(\nu) =\phi(\nu)$ whenever $\nu$ is a non-root node of $S$. Thus, $\mathcal{H}_{\phi, S'}$ is a closed subset of $\ell^p_{S'- \{\emptyset\}}$. For the sake of a search procedure we will employ in the proof of Theorem \ref{thm:comp.partial.disintegration.ext}, we wish to find a reasonable upper bound on $d(\psi, \mathcal{H}_{\phi, S'})$ in terms of $\phi, \psi$. When $p \neq 2$, set $c_p = |4 - 2\sqrt{2}^p|^{-1}$. When $z,w \in \mathbb{C}$ set $\sigma(z,w) = c_p \sigma_1(z,w)$. As a first step toward bounding $d(\psi, \mathcal{H}_{\phi, S'})$ above, we prove the following sharpening of an inequality due to J. Lamperti \cite{Lamperti.1958}. \begin{theorem}\label{thm:lamperti} Suppose $p \geq 1$ and $p \neq 2$. Then, \begin{equation} \min\{|z|^p, |w|^p\} \leq \sigma(z,w) \label{ineq:lamperti} \end{equation} for all $z,w \in \mathbb{C}$. Furthermore, if $1 \leq p < 2$, then \[ 2|z|^p + 2|w|^p - |z+w|^p - |z-w|^p \geq 0 \] and if $2 < p$ then \[ 2|z|^p + 2|w|^p - |z+w|^p - |z-w|^p \leq 0. \] \end{theorem} \begin{proof} Without loss of generality, assume $0 < |z| \leq |w|$. Set $w/z = t e^{i \theta}$ where $t \geq 1$. Then, (\ref{ineq:lamperti}) reduces to \[ 1 \leq \frac{|2 + 2t^p - |1 + t e^{i \theta}|^p - |1 - te^{i\theta}|^p|}{|4 - 2\sqrt{2}^p|}. \] This leads to consideration of the function \[ f_p(\theta,t) := \left\{\begin{array}{cc} 2 + 2t^p - |1 + t e^{i \theta}|^p - |1 - t e^{i \theta}|^p & 1 \leq p < 2\\ |1 + t e^{i\theta}|^p + |1 - t e^{i \theta}|^p - 2t^p - 2 & p > 2\\ \end{array} \right. \] We show that \[ \min_{\theta \in \mathbb{R},\ t \geq 1} f_p(\theta, t) = |4 - 2 \sqrt{2}^p|. \] We note that $f_p(\theta + \pi,t) = f_p(\theta, t)$. So, we restrict attention to values of $\theta$ between $0$ and $\pi$. We use basic multivariable calculus to minimize $f_p(\theta,t)$ in the region $[0,\pi] \times [1, \infty)$.
To this end, we first note that \[ \frac{\partial}{\partial t} |1 \pm t e^{i \theta}| = \frac{t \pm \cos(\theta)}{|1 \pm t e^{i\theta}|} \] and that \[ \frac{\partial}{\partial \theta} |1 \pm t e^{i \theta}| = \frac{\mp t \sin(\theta)}{|1 \pm t e^{i \theta}|} \] It follows that when $1 \leq p < 2$: \begin{eqnarray*} \frac{\partial f_p}{\partial t}(\theta,t) & = & 2pt^{p-1} - p[(t - \cos(\theta))|1 - te^{i \theta}|^{p-2} + (t + \cos(\theta))|1 + t e^{i \theta}|^{p-2}]\\ \frac{\partial f_p}{\partial \theta}(\theta,t) & = & pt\sin(\theta)[ |1 + te^{i \theta}|^{p-2} - |1 - te^{i \theta}|^{p-2}]. \end{eqnarray*} The signs are reversed when $p > 2$. So, when $0 < \theta_0 < \pi$ and $t_0 \geq 1$, \begin{eqnarray*} \frac{\partial f_p}{\partial \theta}(\theta_0, t_0) = 0 & \Leftrightarrow & |1 + t_0 e^{i \theta_0}| = |1 - te^{i \theta_0}|\\ & \Leftrightarrow & \theta_0 = \pi/2. \end{eqnarray*} We now claim that $\frac{\partial f_p}{\partial t}(\pi/2, t_0) > 0$ when $t_0 \geq 1$. We first consider the case $1 \leq p < 2$. We have \[ \frac{\partial f_p}{\partial t}(\pi/2, t) = 2pt^{p-1} - 2pt|1 + ti|^{p-2}. \] Since $t < |1 + ti|$ and $p - 2 < 0$, $t^{p-2} > |1 + ti|^{p-2}$. Thus, $\frac{\partial f_p}{\partial t}(\pi/2, t_0) > 0$. The case $2 < p$ is symmetric; the signs are merely reversed and $p - 2 > 0$. We next claim that $\frac{\partial f_p}{\partial t}(0,t) \geq 0$ if $t \geq 1$. We first consider the case $1 \leq p < 2$. In this case the claim reduces to \[ 2 \geq \left( \frac{t-1}{t} \right)^{p-1} + \left( \frac{t+1}{t} \right)^{p-1}. \] Since $0 \leq p-1 < 1$, $x \mapsto x^{p-1}$ is concave. Thus, \[ 1 = \left( \frac{\frac{t-1}{t} + \frac{t+1}{t}}{2} \right)^{p-1} \geq \frac{1}{2} \left[\left(\frac{t-1}{t} \right)^{p-1} + \left( \frac{t + 1}{t} \right)^{p-1}\right]. \] This verifies the claim when $1 \leq p < 2$. The case $2 < p$ is symmetric: signs are reversed and the function $x \mapsto x^{p-1}$ is convex. Thus, $\frac{\partial f_p}{\partial t} (\pi, t) \geq 0$ if $1 \leq t$. So, let $t_0 > 1$, and let $R$ denote the rectangle $[0, \pi] \times [1, t_0]$. It follows from what has just been shown that the minimum of $f_p$ on $R$ is achieved on the lower line segment $[0, \pi] \times \{1\}$. Moreover, it is achieved at one of the points $(0,1)$, $(\pi/2, 1)$, $(\pi,1)$. $f_p(0,1) = f_p(0, \pi) = |2 - 2^p|$ and $f_p(0, \pi/2) = |4 - 2 \sqrt{2}^p|$. Since $p \neq 2$, it follows that $|4 - 2^p| > |4 - 2\sqrt{2}^p|$. Thus, the minimum of $f_p$ on $R$ is $|4 - 2 \sqrt{2}^p|$. Since $t_0$ can be any number larger than $1$, the minimum of $f_p$ on $[0,\pi] \times [1, \infty)$ is $|4 - 2\sqrt{2}^p|$. The theorem now follows. \end{proof} When $\psi \in \ell_S^p$, set \[ \sigma(\psi) = \sum_{\nu | \nu'} \sigma(\psi(\nu), \psi(\nu')) + \sum_{\nu' \supset \nu} \sigma(\psi(\nu') - \psi(\nu), \psi(\nu')) \] where $\nu, \nu'$ range over all nodes of $S$ and $\nu | \nu'$ denotes that $\nu$ and $\nu'$ are incomparable. Note that $\sigma : \ell^p_S \rightarrow [0, \infty)$ is continuous and $\sigma(\psi) = 0$ if and only if $\psi$ is a strong order homomorphism. The following is the main result of this subsection. \begin{theorem}\label{thm:distance.to.ext} Suppose $\phi \in \ell^p_{S - \{\emptyset\}}$ is a partial disintegration. Suppose $\psi \in \ell^p_{S' - \{\emptyset\}}$ where $S' \supseteq S$ is a finite subtree of $\omega^{<\omega}$ so that each node of $S'$ extends a node of $S$. 
Then, \[ d(\psi, \mathcal{H}_{\phi, S'})^p \leq \norm{\phi|_{S - \{\emptyset\}} - \psi|_{S - \{\emptyset\}} }^p_{S - \{ \emptyset\}} + 2^p\sigma(\phi \cup \psi|_{S' - S}). \] \end{theorem} \begin{proof} Let $\psi_0 = \phi \cup \psi|_{S' - S}$. Set \begin{eqnarray*} \hat{\sigma}(\psi_0)(n) & = & \sum_{\nu | \nu'} \min\{|\psi_0(\nu)(n)|^p, |\psi_0(\nu')(n)|^p\}\\ & & + \sum_{\nu' \supset \nu} \min\{|\psi_0(\nu')(n) - \psi_0(\nu)(n)|^p, |\psi_0(\nu')(n)|^p\} \end{eqnarray*} where $\nu, \nu'$ range over the nodes of $S' - \{\emptyset\}$. Thus, by Theorem \ref{thm:lamperti}, $\sum_n \hat{\sigma}(\psi_0)(n) \leq \sigma(\psi_0)$. We now construct $\psi_1$. If $\nu \in S - \{\emptyset\}$, set $\psi_1(\nu) = \phi(\nu)$. If $\nu \in S' - S$, and if $n \in \mathbb{N}$, set \[ \psi_1(\nu)(n) = \left\{ \begin{array}{cc} \psi_1(\nu^-)(n) & \mbox{if $|\psi_0(\nu)(n)|^p > \hat{\sigma}(\psi_0)(n)$ and $\nu^- \neq \emptyset$}\\ \psi(\nu)(n) & \mbox{if $|\psi_0(\nu)(n)|^p > \hat{\sigma}(\psi_0)(n)$ and $\nu^- = \emptyset$}\\ 0 & \mbox{otherwise.} \\ \end{array} \right. \] By construction $\psi_1$ is a reverse-order homomorphism. We show it is a strong reverse-order homomorphism. Suppose $\nu, \nu' \in S'$ are incomparable. Since $\psi_1$ is a reverse-order homomorphism, it suffices to consider the case where $\nu, \nu' \not \in S$. Suppose $\psi_1(\nu)(n) \neq 0$. Then, $|\psi_0(\nu)(n)|^p > \hat{\sigma}(\psi_0)(n)$. So, $|\psi_0(\nu)(n)|^p > |\psi_0(\nu')(n)|^p$. Thus, $\hat{\sigma}(\psi_0)(n) \geq |\psi_0(\nu')(n)|^p$. Hence, $\psi_1(\nu')(n) = 0$. Thus, $\psi_1(\nu)$ and $\psi_1(\nu')$ are disjointly supported. If $\nu \in S - \{\emptyset\}$, then $\norm{\psi(\nu) - \psi_1(\nu)}_p = \norm{\psi(\nu) - \phi(\nu)}_p$. Suppose $\nu \in S' - S$. Let $n \in \mathbb{N}$. It suffices to show that $|\psi(\nu)(n) - \psi_1(\nu)(n)|^p \leq 2^p \hat{\sigma}(\psi_0)(n)$. We first consider the case $\psi_1(\nu)(n) = 0$. We can assume $|\psi(\nu)(n)|^p > \hat {\sigma}(\psi_0)(n)$. Thus, there exists $\mu \subset \nu$ so that $\psi_1(\mu)(n) = 0$; take the least such $\mu$. Therefore $|\psi_0(\mu)(n)|^p \leq \hat{\sigma}(\psi_0)(n)$. On the other hand, $|\psi_0(\mu)(n) - \psi_0(\nu)(n)|^p \leq \hat{\sigma}(\psi_0)(n)$. Therefore, $|\psi(\nu)(n)|^p \leq 2^p \hat{\sigma}(\psi_0)(n)$. Now, suppose $\psi_1(\nu)(n) \neq 0$. Then, $|\psi(\nu)(n)|^p > \hat{\sigma}(\psi_0)(n)$. Let $\nu_0$ denote the maximum prefix of $\nu$ that belongs to $S$ or has length $1$. Then, $\psi_1(\nu)(n) = \psi_0(\nu_0)(n)$. However, $|\psi(\nu)(n) - \psi(\nu_0)(n)|^p \leq \hat{\sigma}(\psi_0)(n)$. \end{proof} We note that the hypothesis that each node of $S'$ extends a node in $S$ is not superfluous. For, let $p = 1$. Choose $x > 0$ so that $2\sigma(x, 1) < 1$. Let $S = \{(0)\}$, and let $\phi = \{(0), x e_0)\}$. Let $S' = \{(0), (1)\}$, and let $\psi = \phi \cup \{ ((1), e_0)\}$. If $\psi' : S' \rightarrow \ell^1$ is a strong reverse-order homomorphism that extends $\phi$, then $\norm{\psi' - \psi}_{S'} \geq 1 > 2\sigma(\psi)$. \section{Computable world}\label{sec:computable} \subsection{Proof of Theorem \ref{thm:comp.partition}} Suppose $\phi : S \rightarrow \ell^p$ is a disintegration that is computable with respect to $F$. Since $\phi$ is computable, $S$ is c.e.; fix a computable enumeration of $S$, $\{S_t\}_{t \in \mathbb{N}}$. We can choose this enumeration so that each $S_t$ is a finite subtree of $\omega^{<\omega}$. It suffices to show that from a nonterminal node $\mu \in S$ we can compute an almost-norm maximizing child of $\mu$ in $S$. 
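The search that accomplishes this is justified by the lemmas below; the following Python fragment is only a schematic illustration of it. It assumes access to rational approximations $q(\nu,t)$ of $\norm{\phi(\nu)}_p^p$ with error less than $2^{-(t+1)}$ (simulated here by rounding invented toy values) and to the stages at which the children of $\mu$ are enumerated into $S$; all function names and data are invented for illustration.
\begin{verbatim}
# Schematic sketch of the search for an "almost norm-maximizing child".
# Toy data only: children_norms[i] plays the role of ||phi(child_i)||_p^p,
# and stage[i] is the stage at which child_i is enumerated into S.

def q(value, t):
    # a rational within 2^-(t+1) of the true value (simulated by rounding)
    return round(value * 2 ** (t + 2)) / 2 ** (t + 2)

def m(value, t):   # lower bound on ||phi(nu)||_p^p, kept nonnegative
    return max(q(value, t) - 2 ** -(t + 1), 0.0)

def M(value, t):   # upper bound on ||phi(nu)||_p^p
    return q(value, t) + 2 ** -(t + 1)

def almost_norm_maximizing_child(mu_norm, children_norms, stage, depth_mu):
    t = depth_mu + 1
    while True:
        visible = [i for i, s in enumerate(stage) if s <= t]
        if visible:
            lower_sum = sum(m(children_norms[i], t) for i in visible)
            best = max(visible, key=lambda i: m(children_norms[i], t))
            # stop once the visible children account for almost all of
            # ||phi(mu)||_p^p, up to the largest lower bound seen so far
            if M(mu_norm, t) - lower_sum < m(children_norms[best], t):
                return best, t
        t += 1

# ||phi(mu)||_p^p = 1 split exactly among five children (a toy partition)
print(almost_norm_maximizing_child(
    1.0, [0.5, 0.25, 0.125, 0.0625, 0.0625], [1, 2, 3, 4, 5], depth_mu=1))
\end{verbatim}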
We base the proof of this claim on a sequence of lemmas as follows. When $\nu \in S_t$, let $\nu^+_t$ abbreviate $\nu^+_{S_t}$. \begin{lemma}\label{lm:max.norm.below} If $\nu$ is a non-root and nonterminal node of $S$, then there are infinitely many numbers $t$ so that \begin{equation} \norm{\phi(\nu)}_p^p - \sum_{\mu \in \nu^+_t} \norm{\phi(\mu)}_p^p < \max\{ \norm{\phi(\mu)}_p^p\ :\ \mu \in \nu^+_t\}. \label{ineq:max.norm.below} \end{equation} When $t$ is such a number, \[ \max\{\norm{\phi(\mu)}_p^p\ :\ \mu \in \nu_t^+\} = \max\{ \norm{\phi(\mu)}_p^p\ :\ \mu \in \nu^+_S\}. \] \end{lemma} \begin{proof} By Proposition \ref{prop:max}, there is a $\mu_0 \in \nu^+_S$ so that \[ \norm{\phi(\mu_0)}_p^p = \max\{\norm{\phi(\mu)}_p^p\ :\ \mu \in \nu^+_S\}. \] Since $\phi$ is a disintegration, $\norm{\phi(\mu_0)}_p^p \neq 0$ and \[ \lim_{t \rightarrow \infty} \norm{\phi(\nu)}^p_p - \sum_{\mu \in \nu^+_t} \norm{\phi(\mu)}^p_p = 0. \] Thus, there are infinitely many numbers $t$ so that (\ref{ineq:max.norm.below}) holds. Now, suppose $t$ is a number so that (\ref{ineq:max.norm.below}) holds. By way of contradiction, suppose $\norm{\phi(\mu_0)}^p_p > \max\{\norm{\phi(\mu)}^p_p\ :\ \mu \in \nu_t^+\}$. Therefore, \[ \norm{\phi(\nu)}^p_p < \sum_{\mu \in \nu_t^+} \norm{\phi(\mu)}^p_p + \norm{\phi(\mu_0)}^p_p. \] Since $\mu_0 \in \nu^+_S$ but $\mu_0 \not \in \nu^+_t$, it follows that $\mu_0$ is incomparable with every node in $\nu^+_t$. So, since $\phi$ maps incomparable nodes to disjointly supported vectors, \[ \sum_{\mu \in \nu_t^+} \norm{\phi(\mu)}^p_p + \norm{\phi(\mu_0)}^p_p \leq \sum_{\mu \in \nu^+_S} \norm{\phi(\mu)}^p_p = \norm{\phi(\nu)}^p_p. \] This is a contradiction. Therefore, $\norm{\phi(\mu_0)}^p_p = \max\{\norm{\phi(\mu)}^p_p\ :\ \mu \in \nu^+_t\}$. \end{proof} Since $\phi$ is computable with respect to $F$, $\nu \mapsto \norm{\phi(\nu)}_p$ is computable. So, there is a computable function $q : S \times \mathbb{N} \rightarrow \mathbb{Q}$ so that $|q(\nu,t) - \norm{\phi(\nu)}^p_p | < 2^{-(t+1)}$. Set: \begin{eqnarray*} m(\nu,t) & = & \max\{q(\nu,t) - 2^{-(t+1)}, 0\}\\ M(\nu,t) & = & q(\nu,t) + 2^{-(t+1)}\\ \Sigma^-(X,t) & = & \sum_{\nu \in X} m(\nu,t)\\ \overline{m}(X, t) & = & \max\{m(\mu,t)\ :\ \mu \in X\} \end{eqnarray*} Thus, $m(\nu,t)$ is a lower bound on $\norm{\phi(\nu)}^p_p$, and $M(\nu, t)$ is an upper bound on $\norm{\phi(\nu)}^p_p$. $\Sigma^-(\nu_t^+, t)$ is a lower bound on $\sum_{\mu \in \nu^+_S} \norm{\phi(\mu)}^p_p$, and $\overline{m}(\nu^+_t, t)$ is a lower bound on $\max\{\norm{\phi(\mu)}^p_p\ :\ \mu \in \nu^+_S\}$. Also $\overline{m}(\nu_t^+, t) + 2^{-t}$ is an upper bound on $\max\{\norm{\phi(\mu)}^p_p\ :\ \mu \in \nu^+_S\}$. \begin{lemma}\label{lm:max.norm.below.approx} Suppose $\nu$ is a non-root and nonterminal node of $S$. Then, there are infinitely many stages $t$ so that $M(\nu,t) -\Sigma^-(\nu^+_t,t) < \overline{m}(\nu_t^+, t)$. At such a stage $t$, \[ 0 \leq \max\{\norm{\phi(\mu)}^p_p\ :\ \mu \in \nu^+_S\} - \overline{m}(\nu^+_t, t) < 2^{-t}. \] \end{lemma} \begin{proof} Let $N \in \mathbb{N}$. By Lemma \ref{lm:max.norm.below}, there is a stage $t_0 > N$ so that \[ \norm{\phi(\nu)}^p_p - \sum_{\mu \in \nu_{t_0}^+} \norm{\phi(\mu)}^p_p < \max\{\norm{\phi(\mu)}^p_p\ :\ \mu \in \nu^+_{t_0}\}. \] Set $U = \nu_{t_0}^+$. Then, \[ \lim_{t \rightarrow \infty} M(\nu,t) - \Sigma^-(U,t) = \norm{\phi(\nu)}^p_p - \sum_{\mu \in U} \norm{\phi(\mu)}^p_p \] and, \[ \lim_{t \rightarrow \infty} \overline{m}(\nu^+_t, t) = \max\{\norm{\phi(\mu)}^p_p\ :\ \mu \in \nu^+_S\}.
\] So, there is a number $t_1 > t_0$ so that \[ M(\nu,t_1) - \Sigma^-(U, t_1) < \overline{m}(\nu_{t_1}^+, t_1). \] By definition, $m(\nu, t) \geq 0$. Since $t_1 > t_0$, $U \subseteq \nu^+_{t_1}$. Thus, $M(\nu,t_1) -\Sigma^-(\nu^+_{t_1},t_1) < \overline{m}(\nu_{t_1}^+, t_1)$. Now, suppose $M(\nu,t) -\Sigma^-(\nu^+_t,t) < \overline{m}(\nu_t^+, t)$. By definition of $M$, $\Sigma^-$, $m$, \[ \norm{\phi(\nu)}^p_p - \sum_{\mu \in \nu^+_t} \norm{\phi(\mu)}^p_p \leq M(\nu,t) -\Sigma^-(\nu^+_t,t), \] and \[ \overline{m}(\nu_t^+, t) \leq \max_{\mu \in \nu^+_t} \norm{\phi(\mu)}^p_p. \] So, by Lemma \ref{lm:max.norm.below}, \[ \max\{\norm{\phi(\mu)}^p_p\ :\ \mu \in \nu^+_S\} = \max\{ \norm{\phi(\mu)}^p_p\ :\ \mu \in \nu^+_t\}. \] Furthermore, \[ \max\{\norm{\phi(\mu)}^p_p\ :\ \mu \in \nu^+_t\} < \overline{m}(\nu_t^+,t) + 2^{-t}. \] This proves the lemma. \end{proof} Now, suppose $\mu$ is a nonterminal node of $S$. Search for $t> |\mu| $ so that \[ M(\mu,t) -\Sigma^-(\mu^+_t,t) < \overline{m}(\mu_t^+, t). \] Then, find $\tau \in \mu_t^+$ so that $m(\tau, t) = \overline{m}(\mu_t^+, t)$. Therefore, \begin{eqnarray*} \max\{\norm{\phi(\mu')}^p_p\ :\ \mu' \in \mu^+_S\} & = & \max\{\norm{\phi(\mu')}^p_p\ :\ \mu' \in \mu_t^+\}\\ & < & m(\tau,t) + 2^{-t}\\ & < & m(\tau,t) + 2^{-|\mu|}\\ & \leq & \norm{\phi(\tau)}^p_p + 2^{-|\mu|}. \end{eqnarray*} Thus, $\tau$ is an almost norm-maximizing child of $\mu$ in $S$. \subsection{Proof of Theorem \ref{thm:comp.partial.disintegration.ext}} Suppose $F = \{f_0, f_1, \ldots\}$ is an effective generating set for $\ell^p$. Let $F^S$ denote the set of all maps from $S$ into $F$. It follows that $F^S$ is an effective generating set for $\ell^p_S$; i.e. $(\ell^p_S, F^S)$ is a computable Banach space. Furthermore, $\mathcal{L}_{\mathbb{Q}(i)}(F^S)$ coincides with the set of all maps from $S$ into $\mathcal{L}_{\mathbb{Q}(i)}(F)$. We now introduce some notation. Suppose $S'$ is a finite subtree of $\omega^{< \omega}$ that includes $S$. Let: \begin{eqnarray*} \mathcal{M}_{S'} & = & \{\psi \in \ell^p_{S'- \{\emptyset\}}\ :\ \mbox{$\psi$ is injective}\}\\ \Delta_{S', N} & = & \{\psi \in \ell^p_{S' - \{\emptyset\}}\ :\ \forall 0 \leq j < N\ d(f_j, \langle \operatorname{ran}(\psi) \rangle) < 2^{-N}\} \end{eqnarray*} Let $\mathcal{S}_{S', N}$ denote the set of all $\psi \in \ell^p_{S' - \{\emptyset\}}$ so that \[ \norm{\psi(\nu) - \sum_{\nu' \in \nu^+_{S'}} \psi(\nu')}_p < 2^{-N} \] whenever $\nu$ is a non-root and nonterminal node of $S'$. \begin{lemma}\label{lm:ce.closed.open} \begin{enumerate} \item If each node of $S'$ extends a node of $S$, then $\mathcal{H}_{\phi, S'}$ is c.e. closed uniformly in $\phi, S'$.\label{lm:ce.closed.open::itm:closed} \item The sets $\mathcal{M}_{S'}$, $\Delta_{S',N}$, $\mathcal{S}_{S',N}$ are c.e. open uniformly in $S, S', N$. \label{lm:ce.closed.open::itm:open} \end{enumerate} \end{lemma} \begin{proof} (\ref{lm:ce.closed.open::itm:closed}): When $\psi \in \ell^p_{S' - \{\emptyset\}}$, set \[ \mathcal{E}(\psi) = \norm{\psi|_{S - \{\emptyset\}} - \phi|_{S - \{\emptyset\}} }_{S - \{\emptyset\}}^p + 2^p\sigma(\phi \cup \psi|_{S' - S}). \] Therefore, $\psi \in \mathcal{H}_{\phi, S'}$ if and only if $\mathcal{E}(\psi) = 0$. By Theorem \ref{thm:distance.to.ext}, $d(\psi, \mathcal{H}_{\phi, S'}) \leq \mathcal{E}(\psi)^{1/p}$. Since $\phi, p$ are computable, $\mathcal{E}$ is computable. Thus, by Proposition \ref{prop:bounding.principle}, $\mathcal{H}_{\phi, S'}$ is c.e.
closed.\\ (\ref{lm:ce.closed.open::itm:open}): When $\psi \in \ell^p_{S' - \{\emptyset\}}$, let \[ G_1(\psi) = \min\{\norm{\psi(\nu) - \psi(\nu')}_p\ :\ \nu, \nu' \in S' - \{\emptyset\}\ \wedge\ \nu \neq \nu'\}. \] Therefore, $G_1$ is computable with respect to $F^{S'}$. Since $\mathcal{M}_{S'} = G_1^{-1}[(0, \infty)]$, $\mathcal{M}_{S'}$ is c.e. open. When $\psi \in \ell^p_{S' - \{\emptyset\}}$, let $G_2(\psi)$ denote the minimum of \[ \norm{\psi(\nu) - \sum_{\nu' \in \nu^+_{S'}} \psi(\nu')}_p \] as $\nu$ ranges over the nonterminal non-root nodes of $S'$. It follows that $G_2$ is computable with respect to $F^{S'}$ and so $\mathcal{S}_{S', N} = G_2^{-1}[(-\infty, 2^{-N})]$ is c.e. open. Note that $\psi \in \Delta_{S', N}$ if and only if there exists $\beta : \{0, \ldots, N-1\} \times (S' - \{\emptyset\}) \rightarrow \mathbb{Q}(i)$ so that \[ \norm{f_j - \sum_{\nu \in S' - \{\emptyset\}} \beta(j,\nu) \psi(\nu)}_p < 2^{-N} \] whenever $0 \leq j < N$. When $\beta : \{0, \ldots, N-1\} \times (S' - \{\emptyset\}) \rightarrow \mathbb{Q}(i)$, set \[ \Delta_{S', N, \beta} = \left\{\psi \in \ell^p_{S' - \{\emptyset\}}\ :\ \forall 0 \leq j < N\ \norm{f_j - \sum_{\nu \in S' - \{\emptyset\}} \beta(j, \nu) \psi(\nu)}_p < 2^{-N} \right\}. \] Thus, $\Delta_{S', N} = \bigcup_\beta \Delta_{S', N, \beta}$. Set \[ G_{3, \beta}(\psi) = \max\left\{\norm{f_j - \sum_{\nu \in S' - \{\emptyset\}} \beta(j, \nu) \psi(\nu)}_p\ :\ 0 \leq j < N\right\}. \] Therefore, $\Delta_{S', N, \beta} = G_{3, \beta}^{-1}[(-\infty, 2^{-N})]$. Hence, $\Delta_{S', N, \beta}$ is c.e. open uniformly in $S', N, \beta$. Thus, $\Delta_{S', N}$ is c.e. open uniformly in $S', N$. \end{proof} We can now prove Theorems \ref{thm:comp.partial.disintegration.ext} and \ref{thm:comp.disintegration}. \begin{proof}[Proof of Theorem \ref{thm:comp.partial.disintegration.ext}:] When $S' \supseteq S$ is a finite subtree of $\omega^{<\omega}$, let $\pi_{S'}$ denote the canonical projection of $\ell^p_{S' - \{\emptyset\}}$ onto $\ell^p_{S - \{\emptyset\}}$, and let \[ C_{S'} = \mathcal{H}_{\emptyset, S'} \cap \mathcal{M}_{S'} \cap \Delta_{S',N} \cap \mathcal{S}_{S',N} \cap \pi^{-1}_{S'}[B(\phi; 2^{-k})]. \] By Theorem \ref{thm:partial.disintegration.ext}, there \emph{is} an $S'$ so that $C_{S'} \neq \emptyset$. Such an $S'$ can be found by an effective search procedure. Since $\mathcal{H}_{\emptyset, S'}$ is c.e. closed, it follows that $C_{S'}$ contains a vector $\psi$ that is computable with respect to $F^{S' - \{\emptyset\}}$ and an index of $\psi$ can be computed from $k$, $N$, and an index of $\phi$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:comp.disintegration}:] Let $F = \{f_0, f_1, \ldots\}$. Set $S_0 = \{(0)\}$. Compute $j_0$ so that $f_{j_0} \neq \mathbf{0}$. Set $\hat{\phi}_0((0)) = f_{j_0}$. By Lemma \ref{lm:ce.closed.open}, we can compute a $k_0 \in \mathbb{N}$ so that each map in $B(\hat{\phi}_0; 2^{-k_0})$ is injective and never zero. It now follows from Theorem \ref{thm:comp.partial.disintegration.ext} and Lemma \ref{lm:ce.closed.open} that there is a sequence $\{\hat{\phi}_n\}_n$ of computable partial disintegrations of $\ell^p$ and a sequence $\{k_n\}_n$ of nonnegative integers that have the following properties. \begin{enumerate} \item An index of $\hat{\phi}_n$ and a canonical index of $\operatorname{dom}(\hat{\phi}_n)$ can be computed from $n$. \item If $S_n =\operatorname{dom}(\hat{\phi}_n)$, then $S_{n+1} \supset S_n$ and $\norm{ \hat{\phi}_{n+1}|_{S_n} - \hat{\phi}_n}_{S_n} < 2^{-(k_n+1)}$.
\item Each map in $\overline{B(\hat{\phi}_n; 2^{-k_n})}$ is injective, never zero, and has a success index that is at least $n$. \end{enumerate} Let $\phi_{n,t} = \hat{\phi}_{t + n} | S_n$ for all $n,t$. It follows that $\{\phi_{n,t}\}_t$ is computable with respect to $F^{S_n - \{\emptyset\}}$; furthermore, an index of this sequence can be computed from $n$. It also follows that $\norm{\phi_{n,t+1} - \phi_{n,t}}_{S_n} < 2^{-(k_{n+t} + 1)}$. Thus, $\phi_n := \lim_t \phi_{n,t}$ is computable with respect to $F^{S_n - \{\emptyset\}}$; furthermore, an index of $\phi_n$ can be computed from $n$. Also, $\norm{\hat{\phi}_n - \phi_n}_{S_n} \leq 2^{-k_n}$. Thus, $\phi_n$ is a partial disintegration whose success index is at least $n$. By definition, $\phi_{n,t+1} \subseteq \phi_{n+1,t}$. Thus, $\phi_n \subseteq \phi_{n+1}$. Let $\phi = \bigcup_n \phi_n$. Let $S = \operatorname{dom}(\phi)$. For each $\nu \in S$, let \[ \psi(\nu) = 2^{-\nu(0)} \norm{\phi((\nu(0))}_p^{-1} \phi(\nu). \] Then, let \[ \psi(\emptyset) = \sum_{\nu \in \mathbb{N}^1 \cap S} \psi(\nu). \] Since $S$ is computable, it follows that $\psi(\emptyset)$ is a computable with respect to $F$. It then follows that $\psi$ is a disintegration and is computable with respect to $F$. \end{proof} \subsection{Proof of Theorem \ref{thm:main.2}}\label{subsec:proof.thm.main.2} Suppose $p$ is a computable real so that $p \geq 1$ and $p \neq 2$, and assume $C$ is a c.e. set. Again, we can reduce to the consideration of surjective linear endomorphisms of $\ell^p$. Specifically, it suffices to show there is an effective generating set $F$ for $\ell^p$ so that, with respect to $(E, F)$, $C$ computes a surjective linear endomorphism of $\ell^p$ and so that any oracle that with respect to $(E,F)$ computes a surjective linear endomorphism of $\ell^p$ also computes $C$. We demonstrate this as follows. We can assume $C$ is incomputable. Without loss of generality, we also assume $0 \not \in C$. Let $\{c_n\}_{n \in \mathbb{N}}$ be a one-to-one effective enumeration of $C$. Set \[ \gamma = \sum_{k \in C} 2^{-k}. \] Thus, $0 < \gamma < 1$, and $\gamma$ is an incomputable real. Set: \begin{eqnarray*} f_0 & = & (1 - \gamma)^{1/p} e_0 + \sum_{n = 0}^\infty 2^{- c_n / p} e_{n + 1}\\ f_{n + 1} & = & e_{n + 1}\\ F & = & \{f_0, f_1, f_2, \ldots \} \end{eqnarray*} Since $1 - \gamma > 0$, we can use the standard branch of $\sqrt[p]{\ }$. We divide the rest of the proof into a sequence of lemmas. \begin{lemma}\label{lm:EGS} $F$ is an effective generating set. \end{lemma} \begin{proof} Since \[ (1 - \gamma)^{1/p} e_0 = f_0 - \sum_{n =1}^\infty 2^{-c_{n-1} / p} f_n \] the closed linear span of $F$ includes $E$. Thus, $F$ is a generating set for $\ell^p$. Note that $\norm{f_0} = 1$. Suppose $\alpha_0, \ldots, \alpha_M$ are rational points. When $1 \leq j \leq M$, set \[ \mathcal{E}_j = |\alpha_0 2^{-c_{j-1} / p} + \alpha_j |^p - |\alpha_0|^p 2^{-c_{j-1}}. \] It follows that \begin{eqnarray*} \norm{\alpha_0 f_0 + \ldots +\alpha_M f_m}^p & = & |\alpha_0|^p \norm{f_0}^p + \mathcal{E}_1 + \ldots + \mathcal{E}_M\\ & = & |\alpha_0|^p + \mathcal{E}_1 + \ldots + \mathcal{E}_m. \end{eqnarray*} Since $\mathcal{E}_1$, $\ldots$, $\mathcal{E}_M$ can be computed from $\alpha_0, \ldots, \alpha_M$, $\norm{\alpha_0 f_0 + \ldots + \alpha_M f_M}$ can be computed from $\alpha_0, \ldots, \alpha_M$. Thus, $F$ is an effective generating set. 
\end{proof} \begin{lemma}\label{lm:X.computes.C.0} Every oracle that with respect to $F$ computes a unimodular scalar multiple of $e_0$ must also compute $C$. \end{lemma} \begin{proof} Suppose that with respect to $F$, $X$ computes a vector of the form $\lambda e_0$ where $|\lambda| = 1$. It suffices to show that $X$ computes $(1 - \gamma)^{-1/p}$. Fix a rational number $q_0$ so that $(1 - \gamma)^{-1/p} \leq q_0$. Let $k \in \mathbb{N}$ be given as input. Compute $k'$ so that $2^{-k'} \leq q_0 2^{-k}$. Since $X$ computes $\lambda e_0$ with respect to $F$, we can use oracle $X$ to compute rational points $\alpha_0, \ldots, \alpha_M$ so that \begin{equation} \norm{\lambda e_0 - \sum_{j = 0}^M \alpha_j f_j} < 2^{-k'}.\label{inq:1} \end{equation} We claim that $|(1 - \gamma)^{-1/p} - |\alpha_0| | < 2^{-k}$. For, it follows from (\ref{inq:1}) that $|\lambda - \alpha_0 (1 - \gamma)^{1/p}| < 2^{-k'}$. Thus, $|1 - |\alpha_0| (1 - \gamma)^{1/p}| < 2^{-k'}$. Hence, \[ |(1 - \gamma)^{-1/p} - |\alpha_0|| < 2^{-k'}(1 - \gamma)^{-1/p} \leq 2^{-k'}q_0 \leq 2^{-k}. \] Since $X$ computes $\alpha_0$ from $k$, $X$ computes $(1 - \gamma)^{-1/p}$. \end{proof} \begin{lemma}\label{lm:X.computes.C.1} If $X$ computes a surjective isometric endomorphism of $\ell^p$ with respect to $(E,F)$, then $X$ must also compute $C$. \end{lemma} \begin{proof} Let $T$ be a surjective endomorphism of $\ell^p$, and suppose $X$ computes $T$ with respect to $(E,F)$. By Theorem \ref{thm:classification}, there exists $j_0, \lambda$ so that $T(e_{j_0}) = \lambda e_0$ and $|\lambda| = 1$. So, by Lemma \ref{lm:X.computes.C.0}, $X$ computes $C$. \end{proof} \begin{lemma}\label{lm:C.computes.e_0} With respect to $F$, $C$ computes $e_0$. \end{lemma} \begin{proof} Fix an integer $M$ so that $(1 - \gamma)^{-1/p} < M$. Let $k \in \mathbb{N}$. Using oracle $C$, we can compute an integer $N_1$ so that $N_1 \geq 3$ and \[ \norm{ \sum_{n = N_1}^\infty 2^{-c_{n - 1}/p} e_n } \leq \frac{2^{-(kp + 1)/p}}{2^{-(kp + 1)/p} + M}. \] We can use oracle $C$ to compute a rational number $q_1$ so that $|q_1 - (1 - \gamma)^{-1/p}| \leq 2^{-(kp + 1)/p}$. Set \[ g = q_1 \left[ f_0 - \sum_{n = 1}^{N_1 - 1} 2^{-c_{n-1}/p} f_n \right]. \] It suffices to show that $\norm{e_0 - g} < 2^{-k}$. Note that since $1 - \gamma < 1$,\\ $|q_1(1 - \gamma)^{1/p} - 1| \leq 2^{-(kp + 1)/p}$. Note also that $|q_1| < M + 2^{-(kp +1)/p}$. Thus, \begin{eqnarray*} \norm{e_0 - g}^p & = & \norm{e_0 - q_1(1 - \gamma)^{1/p} e_0 - q_1 \sum_{n = N_1}^\infty 2^{-c_{n - 1/p}}e_n}^p\\ & \leq & |q_1 (1 - \gamma)^{1/p} - 1|^p + |q_1|^p \norm{\sum_{n = N_1}^\infty 2^{-c_{n-1}/p} e_n}^p\\ & < & 2^{-(kp + 1)} + 2^{-(kp + 1)} = 2^{-kp} \end{eqnarray*} Thus, $\norm{e_0 - g} < 2^{-k}$. This completes the proof of the lemma. \end{proof} \begin{lemma}\label{lm:C.computes.identity} With respect to $(E,F)$, $C$ computes a surjective linear isometry of $\ell^p$. \end{lemma} \begin{proof} By Lemma \ref{lm:C.computes.e_0}, $C$ computes $e_0$ with respect to $F$. Thus, $C$ computes $\{e_n\}_{n = 0}^\infty$ with respect to $F$, and it follows that $C$ computes the identity map with respect to $(E, F)$. \end{proof} \section{Additional results}\label{sec:additional} Suppose $n$ is a positive integer and $1 \leq p < \infty$. Define $\ell^p_n$ to be the set of all $f \in \ell^p_n$ so that $f(j) = 0$ whenever $j \geq n$; i.e. $\operatorname{supp}(f) \subseteq \{0, \ldots, n-1\}$. Thus, $\ell^p_n$ is a subspace of $\ell^p$, and $\{e_0, \ldots, e_{n-1}\}$ is an effective generating set for $\ell^p_n$. 
Now, suppose $p$ is computable and $p \neq 2$. Let $F$ be an effective generating set for $\ell^p_n$. Via the methods of the previous section, we can show that there are disjointly supported unit vectors $f_1, \ldots, f_n \in \ell^p_n$ so that each $f_j$ is computable with respect to $F$. Thus, $f_1, \ldots, f_n$ generate $\ell^p_n$. It then follows that $\ell^p_n$ is computably categorical. However, since $p \neq 2$, $\ell^p_n$ is not a Hilbert space. Thus, \it there is a computably categorical Banach space that is not a Hilbert space.\rm \section{Conclusion}\label{sec:conclusion} To summarize, we have investigated the complexity of isometries between computable copies of $\ell^p$. We have shown that the halting set bounds the complexity of computing these isometries and that this bound is optimal. Along the way we have strengthened an important inequality due to J. Lamperti. These results stand as a contribution to the emergent program of grafting computable structure theory onto computable analysis. It is anticipated that there will be many other interesting discoveries in this area and that the proofs will present opportunities to blend methods from classical analysis and computability theory. \end{document}
\begin{document} \title[A Cannon-Thurston map for surviving complexes]{A Universal Cannon-Thurston map and the surviving curve complex.} \author{Funda G\"ultepe} \thanks{The first author was partially supported by a University of Toledo startup grant.} \author{Christopher J. Leininger} \thanks{The second author was partially supported by NSF grant DMS-1510034, 1811518, and 2106419.} \author{Witsarut Pho-on} \address{Department of Mathematics and Statistics \\ University of Toledo\\ Toledo, OH, 43606} \email{[email protected]} \urladdr{http://www.math.utoledo.edu/~fgultepe/} \address{Department of Mathematics\\ Rice University\\ Houston, TX 77005} \email{[email protected]} \urladdr{https://sites.google.com/view/chris-leiningers-webpage/home} \address{Department of Mathematics\\Faculty of Science\\Srinakharinwirot University\\ Bangkok 10110, Thailand} \email{[email protected]} \urladdr{https://sites.google.com/g.swu.ac.th/witsarut/} \begin{abstract} Using the Birman exact sequence for pure mapping class groups, we construct a universal Cannon-Thurston map onto the boundary of a curve complex for a surface with punctures that we call the {\em surviving curve complex}. Along the way we prove hyperbolicity of this complex and identify its boundary as a space of laminations. As a corollary we obtain a universal Cannon-Thurston map to the boundary of the ordinary curve complex, extending earlier work of the second author with Mj and Schleimer. \end{abstract} \maketitle \section{Introduction} \label{S:intro} Given a closed hyperbolic $3$--manifold $M$ that fibers over the circle with fiber a surface $S$, Cannon and Thurston \cite{CT} proved that the lift to the universal covers $\mathbb{H}^2 \to \mathbb{H}^3$ of the inclusion $S \to M$ extends to a continuous $\pi_1(S)$-equivariant map of the compactifications. This is quite remarkable as the ideal boundary map $\mathbb S^1_\infty \to \mathbb S^2_\infty$ is a $\pi_1S$--equivariant, sphere--filling Peano curve. A {\em Cannon-Thurston map}, $\mathbb S^1_\infty \to \mathbb S^2_\infty$, for a type-preserving, properly discontinuous action of the fundamental group $\pi_1S$ of a hyperbolic surface (closed or punctured) on hyperbolic $3$--space was shown to exist in various situations (see \cite{Minsky-rigidity,ADP-CT,McMullen,BowCT1}), with Mj \cite{Mj1} proving the existence in general (see Section~\ref{S:historical} for a discussion of even more general Cannon-Thurston maps). Suppose that $S$ is a hyperbolic surface with basepoint $z \in S$, and write $\dot S = S \smallsetminus\{z\}$. The curve complex of $\dot S$ is a $\delta$--hyperbolic space on which $\pi_1S = \pi_1(S,z)$ acts via the Birman exact sequence. In \cite{LeinMjSch}, the second author, Mj, and Schleimer constructed a {\em universal Cannon-Thurston map} when $S$ is a closed surface of genus at least $2$. Here we complete this picture, extending this to all surfaces $S$ with complexity $\xi(S) \geq 2$. \begin{theorem}[Universal Cannon-Thurston Map] \label{T:UCT C short} Let $S$ be a connected, orientable surface with $\xi(S) \geq 2$. Then there exists a subset $\BS^1_{\hspace{-.1cm}\calA}C \subset \mathbb S^1_\infty$ and a continuous, $\pi_1S$--equivariant, finite-to-one surjective map $\partial \Phi_0 \colon \BS^1_{\hspace{-.1cm}\calA}C \to \partial {\mathcal C}(\dot S)$.
Moreover, if $\partial i \colon \mathbb S^1_\infty \to \mathbb S^2_\infty$ is any Cannon-Thurston map for a proper, type-preserving, isometric action on $\mathbb H^3$ without accidental parabolics, then there exists a map $q \colon \partial i(\BS^1_{\hspace{-.1cm}\calA}C) \to \partial {\mathcal C}(\dot S)$ so that $\partial \Phi_0$ factors as \[ \xymatrix{ \BS^1_{\hspace{-.1cm}\calA}C \ar[r]_{\partial i \quad} \ar@/^1pc/[rr]^{\partial \Phi_0} & \partial i(\BS^1_{\hspace{-.1cm}\calA}C) \ar[r]_{q} & \partial {\mathcal C}(\dot S). }\] \end{theorem} For the reader familiar with Cannon-Thurston maps in the setting of cusped hyperbolic surfaces, the finite-to-one condition may seem unnatural. We address this below in the process of describing the subset $\BS^1_{\hspace{-.1cm}\calA}C \subset \mathbb S^1_\infty$. First, we elaborate on the universal property of the theorem (that is, the ``moreover'' part). Let $p \colon \mathbb{H} = \mathbb{H}^2 \to S$ denote the universal cover \footnote{We will mostly be interested in real hyperbolic space in dimension $2$, so will simply write $\mathbb{H} = \mathbb{H}^2$.}. A proper, type-preserving, isometric action of $\pi_1S$ on $\mathbb H^3$ has quotient a hyperbolic $3$--manifold homeomorphic to $S \times \mathbb R$. Each of the two ends (after removing cusp neighborhoods) is either geometrically finite or simply degenerate. In the latter case, there is an associated {\em ending lamination} that records the asymptotic geometry of the end; see \cite{TNotes,bonahon,Minsky-endinglam,BCM-endinglam}. The Cannon-Thurston map $\mathbb S^1_\infty \to \mathbb S^2_\infty$ for such an action is an embedding if both ends are geometrically finite; see \cite{Floyd}. If there are one or two degenerate ends, the Cannon-Thurston map is a quotient map onto a dendrite or the entire sphere $\mathbb S^2_\infty$, respectively, where a pair of points $x,y \in \mathbb S^1_\infty$ are identified if and only if $x$ and $y$ are ideal endpoints of a leaf or complementary region of $p^{-1}(\mathcal{L})$ for (one of) the ending lamination(s) $\mathcal{L}$; see \cite{CT,Minsky-rigidity,BowCT1,Mj2}. A more precise version of the universal property is thus given by the following. Here $\ensuremath{\mathcal{EL}}\xspace(S)$ is the space of ending laminations of $S$, which are all possible ending laminations of ends of hyperbolic $3$--manifolds as above; see Section~\ref{S:laminations} for definitions. \begin{theorem} \label{T:Phi0 identified} Given two distinct points $x,y\in \BS^1_{\hspace{-.1cm}\calA}C$, $\partial\Phi_0(x) = \partial \Phi_0(y)$ if and only if $x$ and $y$ are the ideal endpoints of a leaf or complementary region of $p^{-1}(\mathcal{L})$ for some $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$. \end{theorem} When $S$ has punctures, $\partial {\mathcal C}(\dot S)$ is not the most natural ``receptacle'' for a universal Cannon-Thurston map. Indeed, there is another hyperbolic space whose boundary naturally properly contains $\partial {\mathcal C}(\dot S)$. The {\em surviving curve complex} of $\dot S$, denoted ${\mathcal C}^s(\dot S)$, is the subcomplex of ${\mathcal C}(\dot S)$ spanned by curves that ``survive'' upon filling $z$ back in. In Section~\ref{S:hyperbolicity}, we prove that ${\mathcal C}^s(\dot S)$ is hyperbolic.
One could alternatively verify the axioms due to Masur and Schleimer \cite{MSch1}, or try to relax the conditions of Vokes \cite{Vokes} to prove hyperbolicity; see Section~\ref{S:hyperbolicity}. The projection $\Pi \colon {\mathcal C}^s(\dot S) \to {\mathcal C}(S)$ was studied by the second author with Kent and Schleimer in \cite{LeinKentSch} where it was shown that for any vertex $v \in {\mathcal C}(S)$, the fiber $\Pi^{-1}(v)$ is $\pi_1S$--equivariantly isomorphic to the Bass-Serre tree dual to the splitting of $\pi_1S$ defined by the curve determined by $v$; see also \cite{Harer},\cite{Hat-Vogt}. As such, there is a $\pi_1S$--equivariant map $\Phi_v \colon \mathbb{H} \to \Pi^{-1}(v) \subset {\mathcal C}^s(\dot S)$; see \S\ref{S:tree map construction}. As we will see, the first part of Theorem~\ref{T:UCT C short} is a consequence of the following; see Section~\ref{S:UCT maps}. \newcommand{\CTstatement}{ For any vertex $v\in {\mathcal C}$, the map $\Phi_v: \mathbb{H} \rightarrow {\mathcal C}^s(\dot S)$ has a continuous $\pi_1(S)$--equivariant extension \[\overline\Phi_v: \mathbb{H} \cup \BS^1_{\hspace{-.1cm}\calA} \rightarrow \overline{\mathcal C}^s(\dot S)\] and the induced map \[ \partial\Phi = \overline\Phi_v|_{\BS^1_{\hspace{-.1cm}\calA}}: \BS^1_{\hspace{-.1cm}\calA} \rightarrow \partial {\mathcal C}^s(\dot S)\] is surjective and does not depend on $v$. Moreover, $\partial \Phi$ is equivariant with respect to the action of the pure mapping class group $\PMod(\dot S)$.} \begin{theorem} \label{CT} \CTstatement \end{theorem} The subset $\BS^1_{\hspace{-.1cm}\calA} \subset \mathbb S^1_\infty$ is defined analogously to the set $\mathbb A \subset \mathbb S^1_\infty$ in \cite{LeinMjSch}. Specifically, $x \in \BS^1_{\hspace{-.1cm}\calA}$ if and only if any geodesic ray $r \subset \mathbb{H}$ starting at any point and limiting to $x$ at infinity has the property that every essential simple closed curve $\alpha \subset S$ has nonempty intersection with $p(r)$; see Section~\ref{S:UCT maps}. It is straightforward to see that $\BS^1_{\hspace{-.1cm}\calA}$ is the largest set on which a Cannon-Thurston map can be defined to $\partial {\mathcal C}^s(\dot S)$. As we explain below, $\BS^1_{\hspace{-.1cm}\calA}C \subsetneq \BS^1_{\hspace{-.1cm}\calA}$ and a pair of points in $\BS^1_{\hspace{-.1cm}\calA}C$ are identified by $\partial \Phi_0$ if and only if they are identified by $\partial \Phi$, and thus $\partial \Phi$ is also finite-to-one on $\BS^1_{\hspace{-.1cm}\calA}C$. It turns out that this precisely describes the difference between $\BS^1_{\hspace{-.1cm}\calA}$ and $\BS^1_{\hspace{-.1cm}\calA}C$. Let $Z \subset \partial {\mathcal C}^s(\dot S)$ be the set of points $x$ for which $\partial \Phi^{-1}(x)$ is infinite. \begin{proposition} \label{P:CT difference} We have $\displaystyle \BS^1_{\hspace{-.1cm}\calA} {\smallsetminus} \BS^1_{\hspace{-.1cm}\calA}C = \partial \Phi^{-1}(Z)$. \end{proposition} The analogue of Theorem~\ref{T:Phi0 identified} is also valid for $\Phi$. \begin{theorem}\label{CTU} Given two distinct points $x,y\in \BS^1_{\hspace{-.1cm}\calA}$, $\partial\Phi(x) = \partial \Phi(y)$ if and only if $x$ and $y$ are the ideal endpoints of a leaf or complementary region of $p^{-1}(\mathcal{L})$ for some $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$.
\end{theorem} It is easy to see that for any ending lamination $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$, the endpoints at infinity of any leaf of $p^{-1}(\mathcal{L})$ (and hence also the non-parabolic fixed points of complementary regions) are contained in $\BS^1_{\hspace{-.1cm}\calA}$, though this is a fairly small subset; for example, almost every point $x \in \mathbb S^1_\infty$ has the property that any geodesic ray limiting to $x$ has dense projection to $S$. The complementary regions that contain parabolic fixed points are precisely the regions with infinitely many ideal endpoints. Together with Proposition~\ref{P:CT difference}, this provides another description of the difference $\BS^1_{\hspace{-.1cm}\calA} {\smallsetminus} \BS^1_{\hspace{-.1cm}\calA}C$; see Corollary~\ref{C:CT difference 2}. An important ingredient in the proofs of the above theorems is an identification of the Gromov boundary $\partial {\mathcal C}^s(\dot S)$, analogous to Klarreich's Theorem \cite{Klarreich}; see Theorem~\ref{T:Klarreich}. Specifically, we let $\mathcal{E}\mathcal{L}^s(\dot S)$ denote the space of ending laminations on $\dot S$ together with ending laminations on all proper {\em witnesses} of $\dot S$; see Section~\ref{S:witnesses defined}. We call $\ensuremath{\mathcal{EL}}\xspace^s(\dot S)$ the space of {\em surviving ending laminations}. A more precise statement of the following is proved in Section~\ref{S:boundary}; see Theorem~\ref{T:boundary ending precise}. \begin{theorem}\label{survivalendinglam} There is a $\PMod(\dot S)$--equivariant homeomorphism $\partial {\mathcal C}^s(\dot S) \to \ensuremath{\mathcal{EL}}\xspace^s(\dot S)$. \end{theorem} To describe the map $\partial \Phi_0$ in Theorem~\ref{T:UCT C short}, we consider the map $\partial \Phi \colon \BS^1_{\hspace{-.1cm}\calA} \to \partial {\mathcal C}^s(\dot S)$ from Theorem~\ref{CT}, composed with the homeomorphism $\partial {\mathcal C}^s(\dot S) \to \ensuremath{\mathcal{EL}}\xspace^s(\dot S)$ from Theorem~\ref{survivalendinglam}. Since $\ensuremath{\mathcal{EL}}\xspace(\dot S)$ is a subset of $\ensuremath{\mathcal{EL}}\xspace^s(\dot S)$, we can simply take $\BS^1_{\hspace{-.1cm}\calA}C \subset \BS^1_{\hspace{-.1cm}\calA}$ to be the subset that maps onto $\ensuremath{\mathcal{EL}}\xspace(\dot S)$, and compose the restriction of $\partial \Phi$ to this subset with the homeomorphism $\ensuremath{\mathcal{EL}}\xspace(\dot S) \to \partial {\mathcal C}(\dot S)$ from Klarreich's Theorem. The more geometric description of $\BS^1_{\hspace{-.1cm}\calA}C$ is obtained by a more detailed analysis of the map $\partial \Phi$ carried out in Section~\ref{S:UCT maps}. \subsection{Historical discussion} \label{S:historical} Existence of the Cannon-Thurston map in the context of Kleinian groups was proved by several authors, starting with Floyd \cite{Floyd} for geometrically finite Kleinian groups and then by Cannon and Thurston for fibers of closed hyperbolic 3-manifolds fibering over the circle. Cannon and Thurston's work was circulated as a preprint around 1984 and inspired works of many others before it was published in 2007 \cite{CT}. The existence of the Cannon-Thurston map was proven by Minsky \cite{Minsky-rigidity} for closed surface groups of bounded geometry and by Mitra and Klarreich \cite{Mitra2, Klarreich2} for hyperbolic 3-manifolds of bounded geometry with an incompressible core and without parabolics.
Alperin-Dicks-Porti \cite{ADP-CT} proved the existence of the Cannon-Thurston map for the figure eight knot complement, McMullen \cite{McMullen} for punctured torus groups, and then Bowditch \cite{BowCT1,BowCT2} for more general punctured surface groups of bounded geometry. Mj completed the investigation for all finitely generated Kleinian surface groups without accidental parabolics, first for closed and then for punctured surfaces, in a series of papers that culminated in the two papers \cite{Mj1} and \cite{Mj2}, the latter with an appendix by S.~Das. For general Kleinian groups, see Das-Mj \cite{DasMj} and Mj \cite{MjKleinian}, and the survey \cite{Mj-survey}. Moving beyond real hyperbolic spaces, it is now classical that a quasi-isometric embedding of one Gromov hyperbolic space into another extends to an embedding of the Gromov boundaries. One of the first important generalizations of Cannon and Thurston's work outside the setting of Kleinian groups is due to Mitra in \cite{Mitra1}, who proved that given a short exact sequence \[1\rightarrow H \rightarrow \Gamma \rightarrow G \rightarrow 1\] of infinite word hyperbolic groups, the Cannon--Thurston map exists and it is surjective. In this case the Cannon-Thurston map $\partial H \rightarrow \partial \Gamma$ is defined between the Gromov boundary $\partial H$ of the fiber group $H$ and the Gromov boundary $\partial \Gamma$ of its extension $\Gamma$. Mitra defined an algebraic ending lamination associated to points in the Gromov boundary of the base group $G$ in \cite{Mitra-ending}, and recent work of Field \cite{field} proves that the quotient of $\partial H$ in terms of such an ending lamination is a dendrite (compare the Kleinian discussion above). In a different direction, Mitra later extended his existence result to trees of hyperbolic spaces; see \cite{Mitra2}. In 2013 Baker and Riley gave the first example of a hyperbolic subgroup of a hyperbolic group with no continuous Cannon-Thurston map (\cite{BakerRiley1}); see also Matsuda \cite{Matsuda}. On the other hand, Baker and Riley (\cite{BakerRiley2}) proved existence of Cannon-Thurston maps even under \emph{arbitrarily heavy distortion} of a free subgroup of a hyperbolic group. For free groups and their hyperbolic extensions, Cannon-Thurston maps are better understood than arbitrary hyperbolic extensions. Kapovich and Lustig characterized the Cannon-Thurston maps for hyperbolic free-by-cyclic groups with fully irreducible monodromy \cite{CTLustKapo}. Later Dowdall, Kapovich and Taylor characterized Cannon-Thurston maps for hyperbolic extensions of free groups coming from convex cocompact subgroups of the outer automorphism group of the free group \cite{CTDowKapo}. Finally we note that we have only discussed a few of the many results on the existence or structure of Cannon-Thurston maps in various settings. For more see e.g.~\cite{Mj2,MjRa,MjRel,CTLeinKapo,GUER,Fenley-SC,Frankel,Fenley-QG,Mousley}. \subsection{Outline} \label{S:outline} In Section~\ref{S:preliminaries}, we give preliminaries on curve complexes, witnesses and the Gromov boundary of a hyperbolic space, along with basics of spaces of laminations. In particular, subsection~\ref{S:tree map construction} is devoted to the construction of the survival map and in subsection~\ref{S:cusps and witnesses} the relation between cusps and witnesses via the survival map is given.
In Section~\ref{S:survival paths}, we define survival paths in ${\mathcal C^s(\dot S)}$ and give an upper bound on the survival distance $d^s$ in terms of projection distances into curve complexes of witnesses. In Section~\ref{S:hyperbolicity} we prove the hyperbolicity of ${\mathcal C^s(\dot S)}$. Section~\ref{S:distance formula} is devoted to the distance formula for ${\mathcal C^s(\dot S)}$, \`a la Masur-Minsky, and as a result we prove that survival paths are uniform quasi-geodesics in ${\mathcal C^s(\dot S)}$. In Section~\ref{S:boundary} we explore the boundary of the survival curve complex ${\mathcal C^s(\dot S)}$ and prove that it is homeomorphic to the space of survival ending laminations on $\dot S$, a result analogous to that of Klarreich \cite{Klarreich}. In Section~\ref{S:extended survival} we extend the definition of the survival map to the closures of curve complexes. Finally, in Section~\ref{S:UCT maps} we prove Theorem~\ref{CT} and the rest of the theorems from the introduction. Specifically, we prove the existence and continuity of the map $\partial \Phi$ in Section~\ref{S:existence} and its surjectivity in Section~\ref{S:surjectivity}. In Section~\ref{S:Universal} we prove the universal property of $\partial \Phi$, as well as constructing the map $\partial \Phi_0$. \section{Preliminaries} \label{S:preliminaries} Throughout what follows, we assume $S$ is a surface of genus $g \geq 0$ with $n \neq 0$ punctures, and complexity $\xi(S) = 3g-3+n \geq 2$. We fix a complete hyperbolic metric of finite area on $S$ and a locally isometric universal covering $p \colon \mathbb{H} \to S$. We also fix a point $z \in S$, and write $\dot S$ to denote either the punctured surface $S {\smallsetminus} \{z\}$ or the surface with an additional marked point $(S,z)$, with the situation dictating the intended meaning when it makes a difference. We sometimes refer to the puncture produced by removing $z$ as the {\em $z$--puncture}. We further choose $\tilde z \in p^{-1}(z) \subset \mathbb{H}$ and use this to identify $\pi_1S = \pi_1(S,z)$ with the covering group of $p \colon \mathbb{H} \to S$, acting by isometries. \subsection{Notation and conventions} Let $x,y,C,K \geq 0$ with $K \geq 1$. We write $x \stackrel{K,C}{\preceq} y$ to mean $x \leq Ky + C$. We also write \[ x \stackrel{K,C}{\asymp} y \qquad \Longleftrightarrow \qquad x \stackrel{K,C}{\preceq} y \, \, \mbox{ and } x \, \, \stackrel{K,C}{\succeq} y.\] When the constants are clear from the context or independent of any varying quantities and unimportant, we also write $x \preceq y$ as well as $x \asymp y$. In addition, we will use the shorthand notation $\lcut x\rcut_C$ to denote the cut-off function giving value $x$ if $x \geq C$ and $0$ otherwise. Any connected simplicial complex will be endowed with a path metric obtained by declaring each simplex to be a regular Euclidean simplex with side lengths equal to $1$. The vertices of a connected simplicial complex will be denoted with a subscript $0$, and the distance between vertices will be an integer computed as the minimal length of a path in the $1$--skeleton. By a {\em geodesic} between a pair of vertices $v,w$ in a simplicial complex, we mean either an isometric embedding of an interval into the $1$--skeleton with endpoints $v$ and $w$ or the vertices encountered along such an isometric embedding, with the situation dictating the intended meaning.
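To make the conventions concrete, here is a minimal numerical sketch of the coarse inequality $\preceq$ and the cut-off $\lcut \, \cdot \, \rcut_C$; the function names and sample constants are ad hoc choices of our own and play no role in what follows.
\begin{verbatim}
# Minimal numerical sketch of the coarse-comparison conventions;
# K >= 1 and C >= 0 are the fixed constants from the text.

def coarse_leq(x, y, K=2.0, C=10.0):
    # x is "coarsely at most" y:  x <= K*y + C
    return x <= K * y + C

def coarse_eq(x, y, K=2.0, C=10.0):
    # x and y agree up to the multiplicative/additive error (K, C)
    return coarse_leq(x, y, K, C) and coarse_leq(y, x, K, C)

def cutoff(x, C=10.0):
    # the cut-off [x]_C: x itself if x >= C, and 0 otherwise
    return x if x >= C else 0.0

assert coarse_leq(25.0, 8.0) and coarse_eq(25.0, 8.0)  # 25 <= 2*8+10 and 8 <= 2*25+10
assert cutoff(7.0) == 0.0 and cutoff(12.0) == 12.0
\end{verbatim}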
\subsection{Curve complexes} \label{S:curve complexes defined} By a {\em curve} on a surface $Y$, we mean an essential (homotopically nontrivial and nonperipheral) simple closed curve. We often confuse a curve with its isotopy class. When convenient, we take the geodesic representative with respect to a complete finite area hyperbolic metric on the surface with geodesic boundary components (if any) and realize an isotopy class by its unique geodesic representative. A {\em multi-curve} is a disjoint union of pairwise non-isotopic curves, which we also confuse with its isotopy class and geodesic representative when convenient. The curve complex of a surface $Y$ with $2 \leq \xi(Y) < \infty$ is the complex ${\mathcal C}(Y)$ whose vertices are curves (up to isotopy) and whose $k$--simplices are multi-curves with $k+1$ components. According to work of Masur-Minsky \cite{MM1}, curve complexes are Gromov hyperbolic. For other proofs, see \cite{BOWhyp,Ham-hyp} as well as \cite{Aougab,Bowhyp2,ClayRafiSch,HPW} which prove uniform bounds on $\delta$. \begin{theorem} \label{T:C hyperbolic} For any surface $Y$, ${\mathcal C}(Y)$ is $\delta$--hyperbolic, for some $\delta > 0$. \end{theorem} The {\em surviving complex} ${\mathcal C}^s(\dot S)$ is defined to be the subcomplex of the curve complex ${\mathcal C}(\dot S)$ spanned by those curves that do not bound a twice-punctured disk, where one of the punctures is the $z$--puncture. Given curves $\alpha,\beta \in {\mathcal C}^s_0(\dot S)$, we write $d^s(\alpha,\beta)$ for the distance between $\alpha$ and $\beta$ (in the $1$--skeleton). \subsection{Witnesses for ${\mathcal C^s(\dot S)}$ and subsurface projection to witnesses} \label{S:witnesses defined} A {\em subsurface} of $\dot S$ is either $\dot S$ itself or a component $Y \subset \dot S$ of the complement of a small, open, regular neighborhood of a (representative of a) multi-curve $A$; we assume $Y$ is not a pair of pants (a sphere with three boundary components/punctures). The boundary of $Y$, denoted $\partial Y$, is the sub-multi-curve of $A$ consisting of those components that are isotopic into $Y$. As with (multi-)curves, subsurfaces are considered up to isotopy, in general, but when convenient we will choose a representative of the isotopy class without comment. \begin{definition} A {\em witness} for ${\mathcal C^s(\dot S)}$ is a subsurface $W \subset \dot S$ such that for every curve $\alpha$ in ${\mathcal C^s(\dot S)}$, no representative of the isotopy class of $\alpha$ can be made disjoint from $W$. \end{definition} \begin{remark} Witnesses were introduced in a more general setting by Masur and Schleimer in \cite{MSch1} where they were called {\em holes}. \end{remark} Clearly, $\dot S$ is a witness. Note that if $\beta$ is the boundary of a twice-punctured disk $D$, one of whose punctures is the $z$--puncture, then the complementary component $W \subset \dot S$ with $\partial W = \beta$ is a witness. To see this, we observe that any curve $\alpha$ in ${\mathcal C^s(\dot S)}$ that can be isotoped disjoint from $W$ must be contained in $D$, but the only such curve in $\dot S$ is $\beta$. It is clear that these two types of subsurfaces account for all witnesses. We let $\Omega(\dot S)$ denote the set of witnesses and $\Omega_0(\dot S) = \Omega(\dot S) \smallsetminus \{\dot S\}$ the set of proper witnesses.
We note that any proper witness $W$ is determined by its boundary curve, $\partial W$: if $W \neq \dot S$, then $W$ is the closure of the component of $\dot S {\smallsetminus} \partial W$ not containing the $z$--puncture. An important tool in what follows is the {\em subsurface projection} of curves in ${\mathcal C^s(\dot S)}$ to witnesses; see \cite{MM2}. \begin{definition} (Projection to witnesses) Let $W\subseteq \dot S$ be a witness for ${\mathcal C^s(\dot S)}$ and $\alpha \in {\mathcal C}^s_0(\dot S)$ a curve. We define the {\em projection of $\alpha$ to $W$}, $\pi_W(\alpha)$, as follows. If $W = \dot S$, then $\pi_W(\alpha) = \alpha$. If $W \neq \dot S$, then $\pi_W(\alpha)$ is the {\em set} of curves \[ \pi_W(\alpha)= \bigcup \partial(\mathcal{N}(\alpha_0 \cup \partial W))\] where (1) we have taken representatives of $\alpha$ and $W$ so that $\alpha$ and $\partial W$ intersect transversely and minimally, (2) the union is over all complementary arcs $\alpha_0$ of $\alpha {\smallsetminus} \partial W$ that meet $W$, (3) $\mathcal{N}(\alpha_0 \cup \partial W)$ is a small regular neighborhood of the union, and (4) we have discarded any components of $\partial (\mathcal{N} (\alpha_0 \cup \partial W))$ that are not essential curves in $W$. The projection $\pi_W(\alpha)$ is always a subset of ${\mathcal C}(W)$ with diameter at most $2$; see \cite{MM2}. We note that $\pi_W(\alpha)$ is never empty by definition of a witness. \end{definition} Given $\alpha,\beta \in {\mathcal C}^s_0(\dot S)$ and a witness $W$, we define the {\em distance between $\alpha$ and $\beta$ in $W$} by \[ d_W(\alpha,\beta) = \diam\{\pi_W(\alpha) \cup \pi_W(\beta)\}.\] Note that if $W = \dot S$, then $d_{\dot S}(\alpha,\beta)$ is simply the usual distance between $\alpha$ and $\beta$ in ${\mathcal C}(\dot S)$. According to \cite[Lemma~2.3]{MM2}, projections satisfy a $2$--Lipschitz projection bound. \begin{proposition} \label{P:2 Lipschitz} For any two distinct curves $\alpha,\beta \in {\mathcal C^s(\dot S)}$, $d_W(\alpha,\beta) \leq 2 d^s(\alpha,\beta)$. In fact, for any path $v_0,\ldots,v_n$ in ${\mathcal C}(\dot S)$ connecting $\alpha$ to $\beta$, such that $\pi_W(v_j) \neq \emptyset$ for all $j$, $d_W(\alpha,\beta) \leq 2n$. \end{proposition} We should mention that in \cite{MM2} Masur and Minsky consider the map from ${\mathcal C}(\dot S)$ and prove the second statement. Since ${\mathcal C}^s(\dot S)$ is a subcomplex of ${\mathcal C}(\dot S)$ for which every curve has non-empty projection, the first statement follows from the second. We will also need the following important fact about projections from \cite{MM2}. \begin{theorem}[Bounded Geodesic Image Theorem] \label{BGIT} Let $Y$ be a surface and $W \subset Y$ a subsurface. Then there exists a number $M>0$ such that for any geodesic $\mathcal{G}$ in ${\mathcal C}(Y)$, all of whose vertices intersect $W$, \[\diam_{W}(\pi_W(\mathcal{G})) < M,\] where $\pi_W(\mathcal{G}) $ denotes the image of the geodesic $\mathcal{G}$ in $W$. \end{theorem} We assume (as we may) that $M \geq 8$, as this makes some of our estimates cleaner. In fact, there is a uniform $M$ that is independent of $Y$ in this theorem, given by Webb~\cite{Webb-BGIT}.
\subsection{Construction of the survival map} \label{S:tree map construction} Consider the forgetful map \[\Pi:{{\mathcal C^s(\dot S)}}\rightarrow {\mathcal C}(S)\] induced from the inclusion $\dot S \rightarrow S$. By definition of ${\mathcal C^s(\dot S)}$, $\Pi$ is well defined since every curve in ${\mathcal C^s(\dot S)}$ determines a curve in ${\mathcal C}(S)$. Each point $v \in {\mathcal C}(S)$ determines a weighted multi-curve: $v$ is contained in the interior of a unique simplex, which corresponds to a multi-curve on $S$, and the barycentric coordinates determine weights on the components of the multi-curve. According to \cite{LeinKentSch}, the fiber of the map $\Pi$ is naturally identified with the Bass-Serre tree associated to the corresponding weighted multi-curve: $\Pi^{-1}(v)= T_v$. An important tool in our analysis is the \textit{survival map} \[\Phi: {\mathcal C}(S)\times \mathbb{H}\rightarrow {\mathcal C^s(\dot S)}.\] The construction of the analogous map when $S$ is closed is described in \cite{LeinMjSch}. Since there are no real subtleties that arise, we describe enough of the details of the construction for our purposes, and refer the reader to that paper for details. Before getting to the precise definition of $\Phi$, we note that for every $v \in {\mathcal C}(S)$, the restriction of $\Phi$ to $\mathbb{H} \cong \{v\} \times \mathbb{H}$ will be denoted $\Phi_v \colon \mathbb{H} \to {\mathcal C}^s(\dot S)$, and this is simpler to describe: $\Phi_v$ factors $\pi_1S$--equivariantly as $\mathbb{H} \to T_v \to \Pi^{-1}(v)$, where the action of $\pi_1S$ on $\mathbb{H}$ comes from our reference hyperbolic structure on $S$, the associated covering map $p \colon \mathbb{H} \to S$, and the choice of basepoint $\widetilde z \in p^{-1}(z)$. To describe $\Phi$ in general, it is convenient to construct a more natural map from which $\Phi$ is defined as the descent to a quotient. Specifically, we will define a map \[ \widetilde \Phi \colon {\mathcal C}(S) \times \Diff_0(S) \to {\mathcal C^s(\dot S)} \] where $\Diff_0(S)$ is the component of the group of diffeomorphisms of $S$ containing the identity (all diffeomorphisms of $S$ are assumed to extend to diffeomorphisms of the closed surface obtained by filling in the punctures). To define $\widetilde \Phi$, first for each curve $\alpha \in {\mathcal C}_0(S)$, we let $\alpha$ denote the geodesic representative in our fixed hyperbolic metric on $S$, and choose once and for all $\epsilon(\alpha) >0$ so that for any two vertices $\alpha,\alpha'$, $i(\alpha,\alpha')$ is equal to the number of components of $N_{\epsilon(\alpha)}(\alpha) \cap N_{\epsilon(\alpha')}(\alpha')$. If $f(z)$ is disjoint from the interior of $N_{\epsilon(\alpha)}(\alpha)$, then $\widetilde \Phi(\alpha,f) = f^{-1}(\alpha)$, viewed as a curve on $\dot S$. If $f(z)$ is contained in the interior of $N_{\epsilon(\alpha)}(\alpha)$, then we let $\alpha_\pm$ denote the two boundary components of this neighborhood, and define $\widetilde \Phi(\alpha,f)$ to be the point of the edge between the curves $f^{-1}(\alpha_-)$ and $f^{-1}(\alpha_+)$ determined by the relative distance to $\alpha_+$ and $\alpha_-$.
For a point $s \in {\mathcal C}(S)$ inside a simplex $\Delta$ of dimension greater than $0$, we use the neighborhoods as well as the barycentric coordinates of $s$ inside $\Delta$ to define $\widetilde \Phi(s,f) \in \Pi^{-1}(s) \subset \Pi^{-1}(\Delta)$; see \cite[Section~2.2]{LeinMjSch} for details. Next we note that the isotopy from the identity to $f$ lifts to an isotopy from the identity to a {\em canonical lift} $\widetilde f$ of $f$. The map $\Phi$ is then defined from our choice $\widetilde z \in p^{-1}(z)$ and the canonical lift by the equation \[ \Phi(\alpha,\widetilde f(\widetilde z)) = \widetilde \Phi(\alpha,f).\] Alternatively, we have the evaluation map ${\rm{ev}} \colon \Diff_0(S) \to S$, ${\rm{ev}}(f) = f(z)$, which lifts to a map $\widetilde{{\rm{ev}}} \colon \Diff_0(S) \to \mathbb H$ (given by $\widetilde{{\rm{ev}}}(f)= \widetilde f(\widetilde z)$, where again $\widetilde f$ is the canonical lift), and then $\Phi$ is defined as the descent by ${\rm{id}}_{{\mathcal C}(S)} \times \widetilde{{\rm{ev}}}$: \[ \xymatrixcolsep{4pc}\xymatrixrowsep{2pc}\xymatrix{ {\mathcal C}(S) \times \Diff_0(S) \ar[rd]^{\widetilde \Phi} \ar[d]_{{\rm{id}}_{{\mathcal C}(S)} \times \widetilde{{\rm{ev}}}} \\ {\mathcal C}(S) \times \mathbb{H} \ar[r]_\Phi & {\mathcal C}^s(\dot S).}\] Note that every $\widetilde w \in \mathbb H$ is $\widetilde w = \widetilde f(\widetilde z)$ for some $f \in \Diff_0(S)$ (indeed, $\widetilde{{\rm{ev}}}$ defines a locally trivial fiber bundle). As is shown in \cite{LeinMjSch}, $\Phi(\alpha,\widetilde w)$ is well-defined independent of the choice of such a diffeomorphism $f$ with $\widetilde f(\widetilde z) = \widetilde w$ since any two differ by an isotopy fixing $z$, and $\Phi$ is $\pi_1S$--equivariant (where the point $\widetilde z$ is used to identify the fundamental group with the group of covering transformations). It is straightforward to see that $\Phi(\alpha, \cdot)$ is constant on components of $\mathbb H {\smallsetminus} p^{-1}(N_{\epsilon(\alpha)}(\alpha))$: two points $\widetilde w,\widetilde w'$ in such a component are given by $\widetilde w = \widetilde f(\widetilde z)$ and $\widetilde w' = \widetilde f'(\widetilde z)$ where $f$ and $f'$ are isotopic by an isotopy $f_t$, so that $f_t(z)$ remains outside $N_{\epsilon(\alpha)}(\alpha)$ for all $t$. \subsection{Cusps and witnesses} \label{S:cusps and witnesses} The following lemma relates $\Phi$ to the proper witnesses. Let $\mathcal P \subset \partial \mathbb H$ denote the set of parabolic fixed points. Assume that for each $x \in \mathcal P$, we choose a horoball $H_x$ invariant under the parabolic subgroup ${\rm{Stab}}_{\pi_1S}(x)$, the stabilizer of $x$ in $\pi_1S$. We further assume, as we may, that (1) the union of the horoballs is $\pi_1S$--invariant, (2) the horoballs are pairwise disjoint (so they all project to pairwise disjoint cusp neighborhoods of the punctures), and (3) the projections of the horoballs are disjoint from $N_{\epsilon(\alpha)}(\alpha)$ for all curves $\alpha$. Recall that any proper witness is determined by its boundary.
\betaegin{lemma} \langlebel{L:W parabolics} There is a $\pi_1S$--equivariant bijection $\mathcal W \colon\tauhinspacelon \mathcal P \tauo \Omega_0({\sf d}ot S)$ determined by \betaegin{equation} \langlebel{E:definition of W} \partial \mathcal W(x) = f^{-1}(\partial p(H_x)), \end{equation} for any $f \in \Diff_0(S)$ with $\widetilde f(\widetilde z)$ in the interior of the horoball $H_x$. Moreover, $\Phi({\mathcal C}(S) \tauimes H_x) = {\mathcal C}(\mathcal W(x))$, we have $\Phi(\Pi(u) \tauimes H_x) = u$ for all $u \in {\mathcal C}(\mathcal W(x))$, and ${\rm{Stab}}_{\pi_1S}(x)$ acts trivially on ${\mathcal C}(\mathcal W(x))$. \end{lemma} From the lemma (and as illustrated in the proof) $\Phi|_{{\mathcal C}(S) \tauimes H_x}$ defines an isomorphism ${\mathcal C}(S) \tauo {\mathcal C}(\mathcal W(x))$ inverting the isomorphism $\Pi|_{{\mathcal C}(\mathcal W(x))} \colon\tauhinspacelon {\mathcal C}(\mathcal W(x)) \tauo {\mathcal C}(S)$. \betaegin{proof} For any $f \in \Diff_0(S)$ with $\widetilde w = \widetilde f(\widetilde z) \in H_x$ and any curve $v \in {\mathcal C}_0(S)$, we have $\Phi(v,\widetilde w) = \widetilde \Phi(v,f) = f^{-1}(v)$. On the other hand, $f^{-1}(\partial p(H_x))$ is the boundary of a twice punctured disk containing the $z$ puncture, and hence $f^{-1}(\partial p(H_x))$ is the boundary of a witness we denote $\mathcal W(x)$. Since $v$ and $\partial p(H_x)$ are disjoint, \[ \Phi(v,\widetilde w) \in {\mathcal C}(\mathcal W(x)) \subset {\mathcal C^s({\sf d}ot S)}.\] The same proof that $\Phi(v,\widetilde w)$ is well-defined (independent of the choice of $f \in \Diff_0(S)$ with $\widetilde f(\widetilde z) = \widetilde w$) shows that $f^{-1}(\partial p(H_x))$ is independent of such a choice of $f$ (up to isotopy). Therefore, $\mathcal W$ is well defined by \eqref{E:definition of W}. Since $v \in {\mathcal C}_0(S)$ was arbitrary and $\Phi(v,\cdot)$ is constant on components of the complement of $p^{-1}(N_{\epsilonsilon(v)}(v))$, we have \[ \Phi({\mathcal C}(S) \tauimes H_x) \subset {\mathcal C}(\mathcal W(x)).\] Given $u \in {\mathcal C}(\mathcal W(x))$, let $v = \Pi(u) \in {\mathcal C}_0(S)$ denote the corresponding curve on $S$. We view $u$ as a curve disjoint from $f^{-1}(\partial p(H_x))$, and hence $f(u)$ is disjoint from $p(H_x)$. There is an isotopy of $f(u)$ to $v$ fixing $p(H_x)$ (since this is just a neighborhood of the cusp) and hence an isotopy of $u$ to $f^{-1}(v)$ disjoint from $f^{-1}(\partial p(H_x))$. This implies $\Phi(\{v\} \tauimes H_x) = u$, proving that $\Phi({\mathcal C}(S) \tauimes H_x) = {\mathcal C}(\mathcal W(x))$, as well as the formula $\Phi(\Pi(u)\tauimes H_x) = u$ for all $u \in {\mathcal C}(\mathcal W(x))$. Next observe that for any proper witness $W$, the subcomplex ${\mathcal C}(W) \subset {\mathcal C}^s({\sf d}ot S)$ uniquely determines $W$. Therefore, the property that $\Phi({\mathcal C}(S) \tauimes H_x) = {\mathcal C}(\mathcal{W}(x))$, together with the $\pi_1S$--equivariance of $\Phi$, implies that $\mathcal{W}$ is $\pi_1S$--equivariant. All that remains is to show that $\mathcal{W}$ is a bijection. Let $C_1,\ldots,C_n$ be the pairwise disjoint horoball cusp neighborhoods of the punctures obtained by projecting the horoballs $H_x$ for all $x \in \mathcal{P}$. For any proper witness $W$, there is a diffeomorphism $f \colon\tauhinspacelon S \tauo S$, isotopic to the identity by an isotopy $f_t$ which is the identity on $W$ for all $t$, and so that $f(z) \in C_i$, for some $i$. Note that there is an arc connecting $z$ to the $i^{th}$ puncture which is disjoint from both $\partial W$ and $\partial C_i$.
It follows that $\partial W$ and $\partial C_i$ are isotopic, and thus by further isotopy (no longer the identity on $W$) we may assume that $f(\partial W) = \partial C_i$. Therefore, $f^{-1}(\partial C_i) = \partial W$. Observe that the canonical lift $\tauilde f$ has $\tauilde f(\tauilde z) \in H_x$ for some $x \in \mathcal{P}$ with $p(H_x) = C_i$. Therefore, $f^{-1}(\partial p(H_x)) = \partial W$, and so $\mathcal W(x) = W$; hence $\mathcal{W}$ is surjective. To see that $\mathcal{W}$ is injective, suppose $x,y \in \mathcal{P}$ are such that $\mathcal{W}(x) = \mathcal{W}(y)$. The two punctures surrounded by $\partial \mathcal{W}(x)$ and by $\partial \mathcal{W}(y)$ are therefore the same, hence there exists an element ${{\gothic o}thic a}mma \in \pi_1S$ so that ${{\gothic o}thic a}mma \cdot x= y$. By $\pi_1S$--equivariance, we must have \[ {{\gothic o}thic a}mma \cdot \partial \mathcal{W}(x) = \partial \mathcal{W}({{\gothic o}thic a}mma \cdot x) = \partial \mathcal{W}(y) = \partial \mathcal{W}(x).\] Choose a representative loop for ${{\gothic o}thic a}mma$ with minimal self-intersection and denote this ${{\gothic o}thic a}mma_0$. If ${{\gothic o}thic a}mma_0$ is a simple closed curve, then the mapping class associated to ${{\gothic o}thic a}mma$ is the product of Dehn twists (with opposite signs) in the boundary curves of a regular neighborhood of ${{\gothic o}thic a}mma_0$. Otherwise, ${{\gothic o}thic a}mma_0$ fills a subsurface $Y \subset {\sf d}ot S$ and is pseudo-Anosov on this subsurface by a result of Kra \cite{Kra} (see also \cite{LeinKentSch}). It follows that ${{\gothic o}thic a}mma \cdot \partial \mathcal{W}(x) = \partial \mathcal{W}(x)$ if and only if ${{\gothic o}thic a}mma_0$ is disjoint from $\partial \mathcal{W}(x)$, which happens if and only if $f({{\gothic o}thic a}mma_0) \subset p(H_x)$ (up to isotopy relative to $f(z)$). In the action of $\pi_1S$ on $\mathbb{H}$, the element ${{\gothic o}thic a}mma$ sends $\tauilde f(\tauilde z)$ to ${{\gothic o}thic a}mma \cdot \tauilde f(\tauilde z)$, and these are the initial and terminal endpoints of the $\tauilde f$--image of the lift of ${{\gothic o}thic a}mma_0$ with initial point $\tauilde z$. On the other hand, $\tauilde f(\tauilde z) \in H_x$, and hence so is ${{\gothic o}thic a}mma \cdot \tauilde f(\tauilde z)$, which means that ${{\gothic o}thic a}mma$ fixes $x$. Therefore, $y = {{\gothic o}thic a}mma \cdot x = x$, and thus $\mathcal{W}$ is injective. \end{proof} \subsection{Spaces of laminations} \langlebel{S:laminations} We refer the reader to \cite{Thurston1}, \cite{CEG}, \cite{FLP}, and \cite{CB} for details about the topics discussed here. By a {\em lamination} on a surface $Y$ we mean a {\em compact} subset of the interior of $Y$ foliated by complete geodesics with respect to some complete, hyperbolic metric of finite area, with (possibly empty) geodesic boundary; the geodesics in the foliation are uniquely determined by the lamination and are called the {\em leaves}. For example, any simple closed geodesic $\alphalpha$ is a lamination with exactly one leaf. For a fixed complete, finite area, hyperbolic metric on $Y$, all geodesic laminations are contained in a compact subset of the interior of $Y$. For any two complete, hyperbolic metrics of finite area, laminations that are geodesic with respect to the first are isotopic to laminations that are geodesic with respect to the second. In fact, we can remove any geodesic boundary components, and replace the resulting ends with cusps, and this remains true.
We therefore sometimes view laminations as well-defined up to isotopy, unless a hyperbolic metric is specified, in which case we assume they are geodesic. A {\em complementary region} of a lamination $\mathcal{L} \subset Y$ is the image in $Y$ of the closure of a component of the complement of the preimage in the universal covering; intuitively, it is the union of a complementary component together with the ``leaves bounding this component''. We view the complementary regions as immersed subsurfaces with (not necessarily compact) boundary consisting of arcs and circles (for a generic lamination, the immersion is injective, though in general it is only injective on the interior of the subsurface). We will also refer to the closure of a complementary component in the universal cover of $Y$ as a complementary region (of the preimage of a lamination). We write $\mathcal{G}\mathcal{L}(Y)$ for the set of laminations on the surface $Y$, dropping the reference to $Y$ when it is clear from the context. The set of essential simple closed curves, up to isotopy (i.e.~the vertex set of ${\mathcal C}(Y)$), is thus naturally a subset of $\mathcal{G}\mathcal{L}(Y)$. A lamination is {\em minimal} if every leaf is dense in it, and it is \tauextit{filling} if its complementary regions are ideal polygons, or one-holed ideal polygons where the hole is either a boundary component or cusp of $Y$. A sublamination of a lamination is a subset which is also a lamination. Every lamination decomposes as a finite disjoint union of simple closed curves, minimal sublaminations without closed leaves (called the {\em minimal components}), and biinfinite isolated leaves (leaves with a neighborhood disjoint from the rest of the lamination). There are several topologies on $\mathcal{G}\mathcal{L}$ that will be important for us (in what follows, and whenever discussing convergence in the topologies, we view laminations as geodesic laminations with respect to a fixed complete hyperbolic metric of finite area; the resulting topology and convergence are independent of the choice of metric). The first is a metric topology called the {\em Hausdorff topology} (also known as the {\em Chabauty topology}), induced by the {\em Hausdorff metric} on the set of all compact subsets of a compact space (in our case, the compact subset of the surface that contains all geodesic laminations) defined by \[ d_H(\mathcal{L},\mathcal{L}') = \inf \{\epsilonsilon >0 \mid \mathcal{L} \subset \mathcal{N}_{\epsilonsilon}(\mathcal{L}') \mbox{ and } \mathcal{L}' \subset \mathcal{N}_{\epsilonsilon}(\mathcal{L}) \}.\] If a sequence $\{\mathcal{L}_i\}$ converges to $\mathcal{L}$ in this topology, we write $\mathcal{L}_i \xrightarrow{\tauext{H}} \mathcal{L}$. The following provides a useful characterization of convergence in this topology; see \cite{CEG}. \betaegin{lemma} \langlebel{L:Hausdorff convergence} We have $\mathcal{L}_i \xrightarrow{\tauext{H}} \mathcal{L}$ if and only if \betaegin{enumerate} \item for all $x \in \mathcal{L}$ there is a sequence of points $x_i \in \mathcal{L}_i$ so that $x_i \tauo x$, and \item for every subsequence $\{\mathcal{L}_{i_k}\}_{k=1}^\infty$, if $x_{i_k} \in \mathcal{L}_{i_k}$ and $x_{i_k} \tauo x$, then $x \in \mathcal{L}$. \end{enumerate} \end{lemma} This lemma holds not just for Hausdorff convergence of laminations, but for any sequence of compact subsets of a compact metric space with respect to the Hausdorff metric.
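By way of illustration of Lemma~\ref{L:Hausdorff convergence} in this generality (an elementary example, not needed in the sequel): in the compact metric space $[0,1]$, the finite sets $A_n = \{ j/n \mid j = 0,1,\ldots,n\}$ satisfy $d_H(A_n,[0,1]) = 1/(2n)$, so $A_n \xrightarrow{\tauext{H}} [0,1]$; condition (1) holds since every point of $[0,1]$ is within $1/(2n)$ of $A_n$, and condition (2) holds since every limit of points $x_{i_k} \in A_{i_k}$ lies in the closed set $[0,1]$.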
The set $\mathcal{G}\mathcal{L}$ can also be equipped with a weaker topology called the \emph{coarse Hausdorff topology}, \cite{Ham-CH}, introduced by Thurston in \cite{TNotes} where it was called the {\em geometric topology} (see also \cite{CEG}, where it was called the {\em Thurston topology}). If a sequence $\{\mathcal{L}_i\}$ converges to $\mathcal{L}$ in the coarse Hausdorff topology, then we write $\mathcal{L}_i \xrightarrow{\tauext{CH}} \mathcal{L}$. The following describes convergence in this topology; see \cite{CEG}. \betaegin{lemma} \langlebel{L:CH} We have $\mathcal{L}_i \xrightarrow{\tauext{CH}} \mathcal{L}$ if and only if condition (1) holds from Lemma~\ref{L:Hausdorff convergence}. \end{lemma} The next corollary gives a useful way of understanding coarse Hausdorff convergence. \betaegin{corollary} \langlebel{C:CH sublamination} We have $\mathcal{L}_i \xrightarrow{\tauext{CH}} \mathcal{L}$ if and only if every Hausdorff convergent subsequence converges to a lamination $\mathcal{L}'$ containing $\mathcal{L}$. \end{corollary} Note that limits in the coarse Hausdorff topology are not necessarily unique: if condition (1) of Lemma~\ref{L:Hausdorff convergence} holds for $\mathcal{L}$, then it holds for every sublamination of $\mathcal{L}$, so any sublamination of a coarse Hausdorff limit is again a coarse Hausdorff limit. On the other hand, since any lamination has only finitely many sublaminations, from the corollary we see that a sequence can have only finitely many limits. We let $\ensuremath{\mathcal{EL}}\xspace=\ensuremath{\mathcal{EL}}\xspace(Y)$ denote the space of {\em ending laminations} on $Y$, which are minimal, filling laminations, equipped with the coarse Hausdorff topology. As suggested by the name, these are precisely the laminations that occur as the ending laminations of a type preserving, proper, isometric action on hyperbolic $3$--space without accidental parabolics as discussed in the introduction. A \tauextit{measured lamination} is a lamination $\mathcal{L}$ together with an invariant transverse measure $\mu$; that is, an assignment of a measure on all arcs transverse to the lamination that satisfies natural subdivision properties and is invariant under isotopy of arcs preserving transversality with the lamination. The support of a measured lamination $(\mathcal{L},\mu)$ is the sublamination $|\mu| \subseteq \mathcal{L}$ with the property that a transverse arc has positive measure if and only if the intersection with $|\mu|$ is nonempty; the support is a union of minimal components and simple closed geodesics. We often assume that $(\mathcal{L},\mu)$ has {\em full support}, meaning $\mathcal{L} = |\mu|$. In this case, we sometimes write $\mu$ instead of $(\mathcal L,\mu)$. The space $\ensuremath{\mathcal{ML}}\xspace = \ensuremath{\mathcal{ML}}\xspace(Y)$ of {\em measured laminations} on $Y$ is the set of all measured laminations of full support equipped with the weak* topology on measures on an appropriate family of arcs transverse to all laminations. Given an arbitrary measured lamination $(\mathcal{L},\mu)$, the pair $(|\mu|,\mu)$ is an element of $\ensuremath{\mathcal{ML}}\xspace$, and so every measured lamination determines a unique point of $\ensuremath{\mathcal{ML}}\xspace$. We let $\ensuremath{\mathcal{FL}}\xspace \subset \ensuremath{\mathcal{ML}}\xspace$ denote the subspace of measured laminations whose support is an ending lamination (i.e.~it is in $\ensuremath{\mathcal{EL}}\xspace$). We write $\ensuremath{\mathcal{PML}}\xspace$ and $\ensuremath{\mathcal{PFL}}\xspace$ for the respective projectivizations of $\ensuremath{\mathcal{ML}}\xspace$ and $\ensuremath{\mathcal{FL}}\xspace$, obtained by taking the quotient by scaling measures, with the quotient topologies.
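By way of illustration (an elementary and standard example, not needed in the sequel): if $\alphalpha \subset Y$ is a simple closed geodesic and $t>0$, then assigning to each transverse arc $k$ the number $t\,|k \cap \alphalpha|$ defines an invariant transverse measure with support $\alphalpha$, so the weighted curve $t\alphalpha$ is a point of $\ensuremath{\mathcal{ML}}\xspace$, and all the weights $t>0$ determine the same projective class in $\ensuremath{\mathcal{PML}}\xspace$. Since a single closed leaf is minimal but not filling, these classes do not lie in $\ensuremath{\mathcal{PFL}}\xspace$.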
The following will be useful in the sequel; see \cite[Chapter 8.10]{TNotes}. \betaegin{proposition} \langlebel{P:support continuous} The map $\ensuremath{\mathcal{PML}}\xspace \tauo \mathcal{G}\mathcal{L}$, given by $[\mu] \mapsto |\mu|$, is continuous with respect to the coarse Hausdorff topology on $\mathcal{G}\mathcal{L}$. \end{proposition} For the surface ${\sf d}ot S$, we consider the subspace \[\ensuremath{\calE\calL^s({\sf d}ot S)}\xspace:=\betaigsqcup_{W \in \Omega({\sf d}ot S)} \mathcal{E} \mathcal{L}(W) \subset {\mathcal{G}\mathcal{L}}({\sf d}ot S),\] which is the union of ending laminations of all witnesses of ${\sf d}ot S$. Similarly, we will write $\ensuremath{\mathcal{FL}}\xspace^s({\sf d}ot S) \subset \ensuremath{\mathcal{ML}}\xspace({\sf d}ot S)$ for those measured laminations supported on laminations in $\ensuremath{\calE\calL^s({\sf d}ot S)}\xspace$, and $\ensuremath{\mathcal{PFL}}\xspace^s({\sf d}ot S) \subset \ensuremath{\mathcal{PML}}\xspace({\sf d}ot S)$ for its projectivization. \subsection{Gromov Boundary of a hyperbolic space} A ${\sf d}elta$--hyperbolic space $\mathcal{X}$ can be equipped with a boundary at infinity, $\partial \mathcal{X}$, as follows. Given $x,y \in \mathcal{X}$ and a basepoint $o \in \mathcal{X}$, the Gromov product of $x$ and $y$ based at $o$ is given by \[ \langlengle x, y \ranglengle_o = \frac12\left( d(x,o)+d(y,o) - d(x,y) \right).\] Up to a bounded error (depending only on ${\sf d}elta$), $\langlengle x,y \ranglengle_o$ is the distance from $o$ to a geodesic connecting $x$ and $y$. More generally, $\langlengle x,y \ranglengle_o$ is estimated by the distance from the basepoint $o$ to any quasi-geodesic between $x$ and $y$, with additive and multiplicative errors in the estimate that depend only on the hyperbolicity constant and the quasi-geodesic constants. Using slim triangles, we also note that for all $x,y,z \in \mathcal{X}$, \[ \langlengle x,y \ranglengle_o \succeq \min\{ \langlengle x,z \ranglengle_o, \langlengle y,z \ranglengle_o\} \] where the constants in the coarse lower bound depend only on the hyperbolicity constant. A sequence $\{x_n\} \subset \mathcal{X}$ is said to {\em converge to infinity} if ${\sf d}isplaystyle{\lim_{m,n\rightarrow\infty}\langle x_m,x_n\rangle_o=\infty}$. Two sequences $\{x_n\}$ and $\{y_n\}$ are equivalent if ${\sf d}isplaystyle{\lim_{m,n\rightarrow\infty}\langle y_m,x_n \rangle_o =\infty}$. The points in $\partial \mathcal{X}$ are equivalence classes of sequences converging to infinity, and if $\{x_k\} \in x \in \partial \mathcal{X}$, then we say $\{x_k\}$ converges to $x$ and write $x_k \tauo x$ in $\overline{\mathcal{X}} = \mathcal{X} \cup \partial \mathcal{X}$. The topology on the boundary is such that a sequence $\{x^n\}_n \subset \partial \mathcal{X}$ converges to a point $x \in \partial \mathcal{X}$ if there exist sequences $\{x_k^n\}_k$ representing $x^n$ for all $n$, and $\{x_m\}_m$ representing $x$ so that \[ \lim_{n\tauo \infty} \liminf_{k,m \tauo \infty} \langlengle x_k^n,x_m\ranglengle_o = \infty.\] For details see, e.g.~\cite{BH,Kapovich-bdry}. Klarreich \cite{Klarreich} proved that the Gromov boundary of the curve complex is naturally homeomorphic to the space of ending laminations equipped with the quotient topology from $\ensuremath{\mathcal{FL}}\xspace \subset \ensuremath{\mathcal{ML}}\xspace$ using the geometry of the Teichm\"uller space\footnote{In fact, Klarreich worked with the space of measured foliations, an alter ego of the space of measured laminations.}.
Hamenst\"adt \cite{Ham-CH} gave a new proof, endowing $\ensuremath{\mathcal{EL}}\xspace$ with the coarse Hausdorff topology (which for $\ensuremath{\mathcal{EL}}\xspace$ is the same topology as the quotient topology), also providing additional information about convergence. Yet another proof of the version we use here was given by Pho-On \cite{Bom}. \betaegin{theorem} \langlebel{T:Klarreich} For any surface $Y$ equipped with a complete hyperbolic metric of finite area (possibly having geodesic boundary), there is a homeomorphism $\mathcal F_Y \colon\tauhinspacelon \partial {\mathcal C}(Y) \tauo \ensuremath{\mathcal{EL}}\xspace(Y)$ so that $\alphalpha_n\rightarrow x$ if and only if $\alphalpha_n \xrightarrow{\tauext{CH}} \mathcal{F}_Y(x)$. \end{theorem} \subsection{Laminations and subsurfaces} The following lemma relates coarse Hausdorff convergence of a sequence to coarse Hausdorff convergence of its projection to witnesses in important special case. \betaegin{lemma} \langlebel{L:proj and conv} If $\{\alphalpha_n\} \subset {\mathcal C}_0^s({\sf d}ot S)$ and $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(W)$ for some witness $W$, then $\alphalpha_n \mathbin{\mid}ackrel{CH}{\tauo} \mathcal{L}$ if and only if $\pi_W(\alphalpha_n) \mathbin{\mid}ackrel{CH}{\tauo} \mathcal{L}$. \end{lemma} Note that for each $n$, $\pi_W(\alphalpha_n)$ is a union of curves, which are not necessarily disjoint. In particular, $\pi_W(\alphalpha_n)$ is not necessarily a geodesic laminations, so we should be careful in discussing its coarse Hausdorff convergence. However, viewing the union as a subset of ${\mathcal C}(W)$, it has diameter at most $2$, and hence if $a_n,a_n' \subset \pi_W(\alphalpha_n)$ are any two curves, for each $n$, and $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(W)$, then $a_n$ and $a_n'$ either both coarse Hausdorff converge to $\mathcal{L}$ or neither does (by Theorem~\ref{T:Klarreich}). Consequently, it makes sense to say that $\pi_W(\alphalpha_n)$ coarse Hausdorff converges to a lamination in $\ensuremath{\mathcal{EL}}\xspace(W)$. \betaegin{proof} For the rest of this proof we fix a complete hyperbolic metric on ${\sf d}ot S$ and realize $W \subset {\sf d}ot S$ as an embedded subsurface with geodesic boundary. Let us first assume $\pi_W(\alphalpha_n) \mathbin{\mid}ackrel{CH}{\tauo} \mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(W)$. After passing to an arbitrary convergent subsequence, we may assume $\alphalpha_n \mathbin{\mid}ackrel{H}{\tauo} {\mathcal{L}'}$. It suffices to show that $\mathcal{L} \subset \mathcal{L}'$. Let $\ell_n^1,\ldots,\ell_n^r \subset \alphalpha_n \cap W$ be the decomposition into isotopy classes of arcs of intersection: that is, each $\ell_n^j$ is a union of all arcs of intersection of $\alphalpha_n$ with $W$ so that any two arcs of $\alphalpha_n \cap W$ are isotopic if and only if they are contained in the same set $\ell_n^j$ (we may have to pass to a further subsequence so that each intersection $\alphalpha_n \cap W$ consists of the same number $r$ of isotopy classes, which we do). For each $\ell_n^j$, let $\alphalpha_n^j \subset \pi_W(\alphalpha_n)$ be the geodesic multi-curve produced from the isotopy class $\ell_n^j$ by surgery in the definition of projection. Note that $\alphalpha_n^j$ and $\ell_n^j$ have no transverse intersections. 
Pass to a further subsequence so that $\alphalpha_n^j \mathbin{\mid}ackrel{H}\tauo \mathcal{L}^j$ and $\ell_n^j \mathbin{\mid}ackrel{H}\tauo \mathcal{L}_j'$; here, each $\ell_n^j$ is a compact subset of $W$ so Hausdorff convergence to a closed set still makes sense, though $\mathcal{L}_j'$ are not necessarily geodesic laminations. By Corollary~\ref{C:CH sublamination} (and the discussion in the paragraph preceding this proof), $\mathcal{L} \subset \mathcal{L}^j$, for each $j$. Appealing to Lemma~\ref{L:Hausdorff convergence}, it easily follows that $\mathcal{L}' \cap W = \mathcal{L}_1' \cup \cdots \cup \mathcal{L}_r'$. Since $\alphalpha_n^j$ has no transverse intersections with $\ell_n^j$, $\mathcal{L}_j'$ has no transverse intersections with $\mathcal{L}^j$, for each $j$. Therefore, $\mathcal{L}$ has no transverse intersections with $\mathcal{L}' \cap W$, and since $\mathcal{L} \subset W$, $\mathcal{L}'$ has no transverse intersections with $\mathcal{L}$. Since $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(W)$, it follows that $\mathcal{L} \subset \mathcal{L}'$, as required. Now in the opposite direction we assume that $\alphalpha_n \mathbin{\mid}ackrel{CH}{\tauo} \mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(W)$. Let $\ell_n^1,\ldots,\ell_n^r \subset \alphalpha_n \cap W$ and $\alphalpha_n^1,\ldots,\alphalpha_n^r \subset \pi_W(\alphalpha_n)$ be as above, so that for each $j$ (after passing to a subsequence) we have \[ \ell_n^j \mathbin{\mid}ackrel{H}\tauo \mathcal{L}^j \quad \mbox{ and } \quad \alphalpha_n^j \mathbin{\mid}ackrel{H} \tauo \mathcal{L}_j'. \] Similar to the above, $\mathcal{L} \subset \mathcal{L}^1 \cup \cdots \cup \mathcal{L}^r$ and since $\ell_n^j$ has no transverse intersections with $\alphalpha_n^j$, $\mathcal{L}_j'$ has no transverse intersections with $\mathcal{L}$. Since $\mathcal{L}$ is an ending lamination, $\mathcal{L} \subset \mathcal{L}_j'$. Since the convergent subsequence was arbitrary, it follows that $\pi_W(\alphalpha_n) \mathbin{\mid}ackrel{CH}\tauo \mathcal{L}$. \end{proof} Finally, we note that just as curves can be projected to subsurfaces, whenever a lamination minimally intersects a subsurface in a disjoint union of arcs, we may use the same procedure to project laminations. \section{Survival paths} \langlebel{S:survival paths} To understand the geometry of ${\mathcal C}^s({\sf d}ot S)$, the Gromov boundary, and the Cannon-Thurston map we eventually construct, we will make use of some special paths we call {\em survival paths}. To describe their construction, we set the following notation. Given a witness $W \subseteq {\sf d}ot S$ and $x,y \in {\mathcal C}(W)$, let $[x,y]_W \subset {\mathcal C}(W)$ denote a geodesic between $x$ and $y$. The following definition is reminiscent of hierarchy paths from \cite{MM2}, though our situation is considerably simpler. \betaegin{definition} Given $x,y \in {\mathcal C}^s({\sf d}ot S)$, let $[x,y]_{{\sf d}ot S}$ be any ${\mathcal C}({\sf d}ot S)$--geodesic. If $W \subsetneq {\sf d}ot S$ is a proper witness such that $\partial W$ is a vertex of $[x,y]_{{\sf d}ot S}$, we say that $W$ is a {\em witness for $[x,y]_{{\sf d}ot S}$}. Note that if $W$ is a witness for $[x,y]_{{\sf d}ot S}$, then the immediate predecessor and successor $x',y'$ to $\partial W$ in $[x,y]_{{\sf d}ot S}$ are necessarily contained in ${\mathcal C}(W)$ (hence also in ${\mathcal C}^s({\sf d}ot S)$) and we let $[x',y']_W \subset {\mathcal C}(W)$ be a geodesic (which we also view as a path in ${\mathcal C}^s({\sf d}ot S)$).
Replacing every consecutive triple $x',\partial W, y' \subset [x,y]_{{\sf d}ot S}$ with the path $[x',y']_W$ produces a path from $x$ to $y$ in ${\mathcal C}^s({\sf d}ot S)$ which we call a {\em survival path} from $x$ to $y$, and denote it $\sigma(x,y)$. We call $[x,y]_{{\sf d}ot S}$ the {\em main geodesic} of $\sigma(x,y)$ and, if $W$ is a witness for $[x,y]_{{\sf d}ot S}$, we call the corresponding ${\mathcal C}(W)$--geodesic $[x',y']_W$ the {\em witness geodesic} of $\sigma(x,y)$ for $W$, and also say that $W$ is a {\em witness for $\sigma(x,y)$}. \end{definition} As an immediate corollary of Theorem~\ref{BGIT}, we have \betaegin{corollary} \langlebel{C:necessarily witnesses} For any $x,y \in {\mathcal C}^s({\sf d}ot S)$ and proper witness $W$, if $d_W(x,y) > M$, then $W$ is a witness for $[x,y]_{{\sf d}ot S}$, for any geodesics $[x,y]_{{\sf d}ot S}$ between $x$ and $y$. \end{corollary} \betaegin{proof} Since $d_W(x,y) > M$, it follows by Theorem~\ref{BGIT} that some vertex of $[x,y]_{{\sf d}ot S}$ has empty projection to $W$. But the only multi-curve in ${\mathcal C}({\sf d}ot S)$ with empty projection to $W$ is $\partial W$, hence $\partial W$ is a vertex of $[x,y]_{{\sf d}ot S}$. \end{proof} No two consecutive vertices of $[x,y]_{{\sf d}ot S}$ can be boundaries of witnesses (since any two such boundaries nontrivially intersect). Therefore, the next lemma follows. \betaegin{lemma} \langlebel{L:distance witnesses} For any $x,y \in {\mathcal C^s({\sf d}ot S)}$ and geodesics $[x,y]_{{\sf d}ot S}$, there are at most $\taufrac{d_{{\sf d}ot S}(x,y)}2$ witnesses for $[x,y]_{{\sf d}ot S}$. \qed \end{lemma} The following lemma estimates the lengths of witness geodesics on a survival path. \betaegin{lemma}\langlebel{L:witnessLength} Given a survival path $\sigma (x,y)$ and a witness $W$ for $\sigma(x,y)$, the initial and terminal vertices $x'$ and $y'$ of the witness geodesic segment $[x', y']_{W}$ satisfy \[ d_W(x,x'),d_W(y,y') < M.\] Consequently, $d_W(x',y')$ satisfies \[ d_W(x',y') \mathbin{\mid}ackrel{\mbox{\tauiny $2M\! ,\! 0$}}{\alphasymp} d_W(x,y).\] \end{lemma} \betaegin{proof} Theorem~\ref{BGIT} applied to the subsegments of $[x,y]_{{\sf d}ot S}$ from $x$ to $x'$ and $y'$ to $y$ proves the first inequality. The second is immediate from the triangle inequality. \end{proof} Finally we have the easy half of a distance estimate (c.f.~\cite{MM2}). \betaegin{lemma} \langlebel{L:upper bound formula} For any $x,y \in {\mathcal C}^s({\sf d}ot S)$ and $k > M$, we have \[ d^s(x,y) \leq 2k^2 + 2k \sum_{W \in \Omega({\sf d}ot S)} \lcut d_W(x,y) \rcut_k. \] \end{lemma} Recall that $\Omega({\sf d}ot S)$ denotes the set of all witnesses for ${\mathcal C}^s({\sf d}ot S)$ and that $\lcut x \rcut_k$ is the cut-off function giving value $x$ if $x \geq k$ and $0$ otherwise. \betaegin{proof} Since $\sigma(x,y)$ is a path from $x$ to $y$, it suffices to prove that the length of $\sigma(x,y)$ is bounded above by the right-hand side. For each witness $W$ of $x,y$ whose boundary appears in $[x,y]_{{\sf d}ot S}$, we have replaced the length two segment $\{x',\partial W, y'\}$ with $[x',y']_W$, which has length $d_W(x',y')$. By Lemma \ref{L:witnessLength} we have \[ d_W(x',y') \leq 2M + d_W(x,y).\] If $d_W(x,y) \geq k > M$, this implies the length $d_W(x',y')$ of $[x',y']_W$ is less than $3d_W(x,y)$. Otherwise, the length is less than $3k$. Let $W_1,\ldots, W_n$ denote the witnesses for $x,y$ whose boundaries appear in $[x,y]_{{\sf d}ot S}$.
By Lemma~\ref{L:distance witnesses}, $n \leq \taufrac12 d_{{\sf d}ot S}(x,y)$, half the length of $[x,y]_{{\sf d}ot S}$. Further note that by Corollary~\ref{C:necessarily witnesses}, if $d_W(x,y) \geq k > M$, then $W$ is one of the witnesses $W_j$, for some $j$. Combining all of these (and the fact that $k > M > 2$) we obtain the following bound on the length of $\sigma(x,y)$, and hence $d^s(x,y)$: \[ \betaegin{array}{rclcl} d^s(x,y) & \leq & {\sf d}isplaystyle{d_{{\sf d}ot S}(x,y) + \sum_{j=1}^n 3d_{W_j}(x,y)} \leq {\sf d}isplaystyle{d_{{\sf d}ot S}(x,y)+ 3 \sum_{j=1}^n (\lcut d_{W_j}(x,y) \rcut_k + k)}\\ & = & {\sf d}isplaystyle{d_{{\sf d}ot S}(x,y) + 3nk + 3 \sum_{j=1}^n \lcut d_{W_j}(x,y) \rcut_k} \leq {\sf d}isplaystyle{(\taufrac{3k}2 + 1)d_{{\sf d}ot S}(x,y) + 3 \sum_{j=1}^n \lcut d_{W_j}(x,y) \rcut_k}\\ & \leq & {\sf d}isplaystyle{2k(\lcut d_{{\sf d}ot S}(x,y) \rcut_k + k) + 3 \sum_{j=1}^n \lcut d_{W_j}(x,y) \rcut_k} \leq {\sf d}isplaystyle{2k^2 + 2k \sum_{W \in \Omega({\sf d}ot S)} \lcut d_W(x,y) \rcut_k} \\ \end{array} \] \end{proof} \betaegin{lemma} \langlebel{L:projecting survival paths} Given $x,y \in {\mathcal C^s({\sf d}ot S)}$, if $W$ is not a witness for $[x,y]_{{\sf d}ot S}$, then \[ {\sf d}iam_W(\sigma(x,y)) \leq M + 4.\] \end{lemma} \betaegin{proof} Since $W$ is not a witness for $[x,y]_{{\sf d}ot S}$, every $z \in [x,y]_{{\sf d}ot S}$ has non-empty projection to $W$. Therefore, ${\sf d}iam_W([x,y]_{{\sf d}ot S}) \leq M$ by Theorem~\ref{BGIT}. If $w' \in {\mathcal C}(W')$ is on a witness geodesic segment of $\sigma(x,y)$, then $d_{{\sf d}ot S}(w',\partial W') =1$ so $d_W(w',\partial W') \leq 2$ by Proposition~\ref{P:2 Lipschitz}. Since $\partial W' \in [x,y]_{{\sf d}ot S}$, the lemma follows by the triangle inequality. \end{proof} \betaegin{lemma} \langlebel{L:subpaths of survival paths} Suppose $\sigma(x,y)$ is a survival path and $x',y' \in \sigma(x,y)$ with $x \leq x' <y' \leq y$, with respect to the ordering from $\sigma(x,y)$. If $x',y'$ lie on the main geodesic $[x,y]_{{\sf d}ot S}$, then the subpath of $\sigma(x,y)$ from $x'$ to $y'$ is a survival path. If $x' \in {\mathcal C}(W)$ and/or $y' \in {\mathcal C}(W')$ for proper witnesses $W,W'$ for $\sigma(x,y)$, respectively, then the same conclusion holds, provided the subsegments of ${\mathcal C}(W)$ and/or ${\mathcal C}(W')$ in $\sigma(x,y)$ between $x'$ and $y'$ have length at least $2M$. \end{lemma} \betaegin{proof} When $x',y'$ are on the main geodesic, this is straightforward, since in this case, the subsegment of the main geodesic between $x'$ and $y'$ serves as the main geodesic for a survival path between $x'$ and $y'$. There are several cases for the second statement. The proofs are all similar, so we just describe one case where, say, $x' \in [x'',y'']_W \subset {\mathcal C}(W)$ with $x \leq x'' \leq x' \leq y'' \leq y' \leq y$, and $y'$ is in the main geodesic. The assumption in this case means that in ${\mathcal C}(W)$, the distance between $x'$ and $y''$ is at least $2M$. Lemma~\ref{L:witnessLength} implies that $d_W(y'',y) < M$, and so by the triangle inequality, $d_W(x',y) > M$. Therefore, by Theorem~\ref{BGIT} any geodesic from $x'$ to $y$ must pass through $\partial W$. In particular, the path that starts at $x'$, travels to $\partial W$, then continues along the subsegment of $[x,y]_{{\sf d}ot S}$ from $\partial W$ to $y'$, is a geodesic in ${\mathcal C}({\sf d}ot S)$. We can easily build a survival path from $x'$ to $y'$ using this geodesic that is a subsegment of $\sigma(x,y)$, as required.
The other cases are similar. \end{proof} \subsection{Infinite survival paths} Masur-Minsky proved that for any surface $Z$ and any two points in ${\betaoldsymbol a}r {\mathcal C}(Z) = {\mathcal C}(Z) \cup \partial {\mathcal C}(Z)$, where $\partial {\mathcal C}(Z)$ is the Gromov boundary, there is a geodesic ``connecting" these points; see \cite{MM2}. Given $x,y \in {\betaoldsymbol a}r {\mathcal C}(Z)$, we let $[x,y]_Z$ denote such a geodesic. The construction of survival paths above can be carried out for geodesic lines and rays in ${\mathcal C}({\sf d}ot S)$, replacing any length two path $x',\partial W, y'$ with a ${\mathcal C}(W)$ geodesic from $x'$ to $y'$ to produce a {\em survival ray} or {\em survival line}, respectively. More generally, from a geodesic segment or ray in ${\mathcal C}({\sf d}ot S)$ we can construct other types of survival rays and survival lines. Specifically, first construct a survival path as above or as just described, then append to one or both endpoints an infinite witness ray (or rays). For example, for any two distinct witnesses $W$ and $W'$ and points $z,z'$ in the Gromov boundaries of ${\mathcal C}(W)$ and ${\mathcal C}(W')$, respectively, we can construct a survival line that starts and ends with geodesic rays in ${\mathcal C}(W)$ and ${\mathcal C}(W')$, limiting to $z$ and $z'$, respectively, and whose main geodesic is a segment. In this way, we see that survival lines can be constructed for any pair of distinct points in \[ z,z' \in \betaigcup_{W \in \Omega({\sf d}ot S)} {\betaoldsymbol a}r {\mathcal C}(W),\] and we denote such by $\sigma(z,z')$, as in the finite case. From this discussion, we have the following. \betaegin{lemma} \langlebel{L:visual on witness boundaries} For any distinct pair of elements \[ z,z' \in \betaigcup_{W \in \Omega({\sf d}ot S)} {\betaoldsymbol a}r {\mathcal C}(W) \] there exists a (possibly infinite) survival path $\sigma(z,z')$ ``connecting" these points.\qed \end{lemma} The next proposition allows us to extend many of the properties of survival paths to infinite survival paths. \betaegin{proposition} \langlebel{P:exhaustion by survival} Any infinite survival path (line or ray) is an increasing union of {\em finite} survival paths. \end{proposition} \betaegin{proof} This follows just as in the proof of Lemma~\ref{L:subpaths of survival paths}. \end{proof} \betaegin{remark} Unless otherwise stated, the term ``survival path" will be reserved for finite survival paths. ``Infinite survival path" will mean either survival ray or survival line. \end{remark} \section{Hyperbolicity of the surviving curve complex} \langlebel{S:hyperbolicity} In this section we prove the following theorem using survival paths. The proof appeals to Proposition~\ref{bow}, due to Masur-Schleimer \cite{MSch1} and Bowditch (\cite{Bowhyp2}), which gives criteria for hyperbolicity. \betaegin{theorem} \langlebel{hyp} The complex ${\mathcal C^s({\sf d}ot S)}$ is Gromov-hyperbolic. \end{theorem} \betaegin{remark} There are alternate approaches to proving Theorem~\ref{hyp}. For example, Masur and Schleimer provide a collection of axioms in \cite{MSch1} whose verification would imply hyperbolicity. Another approach would be to show that Vokes' condition for hyperbolicity in \cite{Vokes}, which requires an action of the entire mapping class group, can be relaxed to requiring an action of the stabilizer of $z$, which is a finite index subgroup of the mapping class group.
We have chosen to give a direct proof using survival paths since it is elementary and illustrates their utility. \end{remark} The condition for hyperbolicity we use is the following; see \cite{MSch1,Bowhyp2}. \betaegin{proposition}\langlebel{bow} Given $\epsilonsilon>0$, there exists ${\sf d}elta>0$ with the following property. Suppose that $G$ is a connected graph and for each $x,y \in V(G)$ there is an associated connected subgraph $\varsigma(x,y)\subseteq G$ including $x,y$. Suppose that, \betaegin{enumerate} \item For all $x,y,z \in V(G)$, \[\varsigma(x,y)\subseteq \mathcal{N}_{\epsilonsilon}(\varsigma(x,z)\cup \varsigma(z,y))\] \item For any $x,y \in V(G)$ with $d(x,y)\leq 1$, the diameter of $\varsigma (x,y)$ in $G$ is at most $\epsilonsilon$. \end{enumerate} Then, $G$ is ${\sf d}elta$--hyperbolic. \end{proposition} We will apply Proposition \ref{bow} to the graph ${\mathcal C}^s({\sf d}ot S)$, and for vertices $x,y \in {\mathcal C}^s({\sf d}ot S)$, the required subcomplex is a (choice of some) survival path $\sigma(x,y)$. Note that if $x,y$ are distance one apart, then $\sigma(x,y) = [x,y]$, which has diameter $1$. Therefore, as long as $\epsilonsilon \geq 1$, condition (2) in Proposition~\ref{bow} will be satisfied. We therefore focus on condition (1), and express this briefly by saying that $x,y,z$ span an {\em $\epsilonsilon$--slim survival triangle}. The next lemma verifies condition (1) in a special case. \betaegin{lemma} \langlebel{L:cobounded slim triangles} Given $R > 4$, there exists $\epsilonsilon > 0$ with the following property. If $x,y,z \in {\mathcal C}^s({\sf d}ot S)$ are any three points such that $d_W(u,v) \leq R$ for all proper witnesses $W \subsetneq {\sf d}ot S$ and every $u,v \in \{x,y,z\}$, then $x,y,z$ span an $\epsilonsilon$--slim survival triangle. \end{lemma} \betaegin{proof} First note that by Lemma~\ref{L:witnessLength}, the length of any witness geodesic of any one of the three sides is at most $R + 2M$; we will use this fact throughout the proof without further mention. We also observe that by Theorem~\ref{BGIT}, for any $w \in \sigma(x,y) \cap [x,y]_{{\sf d}ot S}$ and any proper witness $W \subsetneq {\sf d}ot S$, at least one of $d_W(x,w)$ or $d_W(y,w)$ is at most $M$. Next suppose $w$ is on a subsegment $[x',y']_W \subset \sigma(x,y)$ for some proper witness $W$ of $\sigma(x,y)$. Observe that $w$ is within $\taufrac{R+2M}2 = \taufrac{R}2 + M$ of either $x'$ or $y'$ and so by Theorem~\ref{BGIT} and the triangle inequality, one of $d_W(w,x)$ or $d_W(w,y)$ is at most $\taufrac{R}2+2M$. If $W'$ is any other proper witness, we claim that $d_{W'}(x,w)$ or $d_{W'}(y,w)$ is at most $M+2 \leq \frac{R}2+2M$. To see this, note that either $\partial W'$ lies in $[x,\partial W]_{{\sf d}ot S} \subset [x,y]_{{\sf d}ot S}$, in $[\partial W,y]_{{\sf d}ot S} \subset [x,y]_{{\sf d}ot S}$, or neither. In the first two cases, $d_{W'}(\partial W,y) \leq M$ or $d_{W'}(\partial W,x) \leq M$, respectively, by Theorem~\ref{BGIT}, while in the third case both of these inequalities hold. Therefore, since $w$ and $\partial W$ are disjoint, $d_{W'}(\partial W,w) \leq 2$, and hence $d_{W'}(x,w)$ or $d_{W'}(y,w)$ is at most $M+2 \leq \frac{R}2+2M$. Now let $w \in \sigma(x,y)$ be any vertex and $w_0 \in [x,y]_{{\sf d}ot S} \cap \sigma(x,y)$ the nearest vertex along $\sigma(x,y)$, and observe that $d_{{\sf d}ot S}(w,w_0) \leq 2$.
Since ${\mathcal C}({\sf d}ot S)$ is ${\sf d}elta$--hyperbolic (for some ${\sf d}elta > 0$), there is a vertex $w_0' \in [x,z]_{{\sf d}ot S} \cup [y,z]_{{\sf d}ot S}$ with $d_{{\sf d}ot S}(w_0,w_0') \leq {\sf d}elta$. Without loss of generality, we assume $w_0' \in [x,z]_{{\sf d}ot S}$. Choose $w' \in \sigma(x,z)$ to be $w' = w_0'$ if $w_0' \in \sigma(x,z)$ or one of the adjacent vertices of $[x,z]_{{\sf d}ot S}$ if $w_0'$ is the boundary of a witness. Then $d_{{\sf d}ot S}(w_0',w') \leq 1$, so \[ d_{{\sf d}ot S}(w,w') \leq {\sf d}elta + 3. \] Now suppose $W \subsetneq {\sf d}ot S$ is a proper witness. Then at least one of $d_W(w,x)$ or $d_W(w,y)$ is at most $\taufrac{R}2 + 2M$ as is at least one of $d_W(w',x)$ or $d_W(w',z)$. If $d_W(w,x),d_W(w',x) \leq \frac{R}2+2M$, then applying the triangle inequality, we see that \[ d_W(w,w') \leq R + 4M.\] If instead, $d_W(x,w) \leq \frac{R}2 + 2M$ and $d_W(w',z) \leq \frac{R}2 + 2M$, then the triangle inequality implies \[ d_W(w,w') \leq d_W(w,x) + d_W(x,z) + d_W(z,w') \leq 2R + 4M.\] The other two possibilities are similar, and hence $d_W(w,w') \leq 2R + 4M$. Applying Lemma~\ref{L:upper bound formula} with $k = M$, recalling that by Lemma~\ref{L:distance witnesses} there are at most $\taufrac{d_{{\sf d}ot S}(w,w')}2 \leq \frac{{\sf d}elta + 3}2$ proper witnesses for any geodesic $[w,w']_{{\sf d}ot S}$, we have \[ d^s(w,w') \leq 2M^2 + 2M \sum_{W \in \Omega({\sf d}ot S)} \lcut d_W(w,w') \rcut_M \leq 2M^2+2M \left( {\sf d}elta + 3 + \taufrac{{\sf d}elta + 3}2(2R+4M) \right). \] Setting $\epsilonsilon$ equal to the right-hand side (which depends only on $R$, since $M$ and ${\sf d}elta$ are independent of $x$, $y$, and $z$) completes the proof. \end{proof} A standard argument subdividing an $n$--gon into triangles proves the following. \betaegin{corollary} \langlebel{C:cobounded slim n-gons} Given $R > 0$ let $\epsilonsilon > 0$ be as in Lemma~\ref{L:cobounded slim triangles}. If $n \geq 3$ and $x_1,\ldots,x_n \in {\mathcal C}^s({\sf d}ot S)$ are such that $d_W(x_i,x_j) \leq R$ for all $1 \leq i,j \leq n$, then for all $w \in \sigma(x_i,x_{i+1})$, there exists $j \neq i$ and $w' \in \sigma(x_j,x_{j+1})$ (with all indices taken modulo $n$) such that $d^s(w,w') \leq \lceil \taufrac{n}2 \rceil \epsilonsilon$. \end{corollary} For the remainder of the proof (and elsewhere in the paper) it is useful to make the following definition. \betaegin{definition} Given $x,y,z \in {\mathcal C}^s({\sf d}ot S)$ and $R> 0$, consider the proper witnesses with projection distance greater than $R$: \[ \Omega_R(x,y) = \{ W \in \Omega_0({\sf d}ot S) \mid d_W(x,y) > R\},\] and set \[ \Omega_R(x,y,z) = \Omega_R(x,y) \cup \Omega_R(x,z) \cup \Omega_R(y,z).\] \end{definition} In words, $\Omega_R(x,y)$ is the set of all proper witnesses for which $x$ and $y$ have distance greater than $R$. \betaegin{lemma} \langlebel{L:at most one large in 3}For any three points $x,y,z \in {\mathcal C}^s({\sf d}ot S)$ and $R \geq 2M$, there is at most one $W \in \Omega_R(x,y,z)$ such that \[ W \in \Omega_{R/2}(x,y) \cap \Omega_{R/2}(x,z) \cap \Omega_{R/2}(y,z).\] \end{lemma} \betaegin{proof} Suppose there exist two distinct \[ W,W' \in \Omega_{R/2}(x,y) \cap \Omega_{R/2}(x,z) \cap \Omega_{R/2}(y,z).\] Then by Theorem~\ref{BGIT}, $\partial W, \partial W'$ are (distinct) vertices in any ${\mathcal C}({\sf d}ot S)$--geodesic between any two vertices in $\{x,y,z\}$.
Choose geodesics $[x,\partial W]_{{\sf d}ot S}$, $[y,\partial W]_{{\sf d}ot S}$, and $[z,\partial W]_{{\sf d}ot S}$, and note that concatenating any two of these (with appropriate orientations) produces a geodesic between a pair of vertices in $\{x,y,z\}$. Since $\partial W' \neq \partial W$ must also lie on all ${\mathcal C}({\sf d}ot S)$--geodesics between these three vertices, it must lie on at least one of the geodesic segments to $\partial W$; without loss of generality, suppose $\partial W' \in [x,\partial W]_{{\sf d}ot S}$. If $\partial W'$ is not a vertex of either $[y,\partial W]_{{\sf d}ot S}$ or $[z,\partial W]_{{\sf d}ot S}$, then our geodesic from $y$ to $z$ does not contain $\partial W'$, a contradiction. Without loss of generality, we may assume $\partial W' \in [y,\partial W]_{{\sf d}ot S}$. But then the geodesic subsegment between $x$ and $\partial W'$ in $[x,\partial W]_{{\sf d}ot S}$ together with the geodesic subsegment between $\partial W'$ and $y$ in $[y,\partial W]_{{\sf d}ot S}$ is also a geodesics (as above) and does not pass through $\partial W$, a contradiction. \end{proof} \betaegin{proof}[Proof of Theorem~\ref{hyp}] Let $x,y,z \in {\mathcal C}^s({\sf d}ot S)$. By the triangle inequality, if $W \in \Omega_{2M}(x,y)$, then at least one of $d_W(x,z)$ or $d_W(y,z)$ is greater than $M$. By Lemma~\ref{L:at most one large in 3}, there is at most one $W$ such that {\em both} are greater than $M$. If such $W$ exists, denote it $W_0$ and write $D_0 = \{W_0\}$; otherwise, write $D_0 = \emptyset$. Defining \[ D_x = \{ W \in \Omega_{2M}(x,y,z) {\smallsetminus} D_0 \mid d_W(x,y) > M, d_W(x,z) > M\} \] (and defining $D_y$, $D_z$ similarly), we can express $\Omega_{2M}(x,y,z)$ as a disjoint union \[ \Omega_{2M}(x,y,z) = D_x \sqcup D_y \sqcup D_z \sqcup D_0. \] By Theorem~\ref{BGIT}, the ${\mathcal C}({\sf d}ot S)$--geodesics $[x,y]_{{\sf d}ot S}$ and $[x,z]_{{\sf d}ot S}$ contain $\partial W$ for all $W \in D_x$, and we write \[ D_x = \{W_x^1,W_x^2,\ldots,W_x^{m_x}\} \] so that $x_1 = \partial W_x^1,x_2 = \partial W_x^2, \ldots, x_{m_x} = \partial W_x^{m_x}$ appear in this order along $[x,y]_{{\sf d}ot S}$ and $[x,z]_{{\sf d}ot S}$. Similarly write \[ D_y = \{W_y^1,\ldots, W_y^{m_y} \} \mbox{ and } D_z = \{W_z^1,\ldots,W_z^{m_z}\}. \] The ${\mathcal C}({\sf d}ot S)$--geodesic triangle between $x$, $y$, and $z$ must appear as in the examples illustrated in Figure~\ref{F:triangles in C(dot S)}. \betaegin{figure}[h] \betaegin{center} \betaegin{tikzpicture}[scale = .3] {\sf d}raw[thick] (0,0) .. controls (3,0) .. (4,-2); {\sf d}raw[thick] (0,0) .. controls (1,-2) .. (4,-2); {\sf d}raw[thick] (4,-2) .. controls (7,-2) .. (8,-4); {\sf d}raw[thick] (4,-2) .. controls (5,-4) .. (8,-4); {\sf d}raw[thick] (8,-4) .. controls (11,-4) .. (12,-6); {\sf d}raw[thick] (8,-4) .. controls (9,-6) .. (12,-6); {\sf d}raw[thick] (12,-6) .. controls (14,-5) .. (16,-6); {\sf d}raw[thick] (16,-6) .. controls (18,-5) .. (20,-6); {\sf d}raw[thick] (20,-6) .. controls (22,-5) .. (24,-6); {\sf d}raw[thick] (16,-6) .. controls (18,-7) .. (20,-6); {\sf d}raw[thick] (20,-6) .. controls (22,-7) .. (24,-6); {\sf d}raw[thick] (12,-6) .. controls (12,-8) .. (14,-9); {\sf d}raw[thick] (16,-6) .. controls (16,-8) .. (14,-9); {\sf d}raw[thick] (14,-9) .. controls (13,-11) .. (14,-13); {\sf d}raw[thick] (14,-9) .. controls (15,-11) .. 
(14,-13); {\sf d}raw[fill] (0,0) circle (.2cm); {\sf d}raw[fill] (4,-2) circle (.2cm); {\sf d}raw[fill] (8,-4) circle (.2cm); {\sf d}raw[fill] (12,-6) circle (.2cm); {\sf d}raw[fill] (16,-6) circle (.2cm); {\sf d}raw[fill] (20,-6) circle (.2cm); {\sf d}raw[fill] (24,-6) circle (.2cm); {\sf d}raw[fill] (14,-9) circle (.2cm); {\sf d}raw[fill] (14,-13) circle (.2cm); \node at (0,1) {$x$}; \node at (4.6,-1) {$x_1$}; \node at (8.6,-3) {$x_2$}; \node at (12.6,-4.8) {$x_3$}; \node at (16,-4.9) {$y_2$}; \node at (20,-4.9) {$y_1$}; \node at (24.5,-5.2) {$y$}; \node at (15,-9.5) {$z_1$}; \node at (14.5,-13.5) {$z$}; {\sf d}raw[thick] (18+6,2-2) .. controls (21+6,2-2) .. (22+6,0-2); {\sf d}raw[thick] (18+6,2-2) .. controls (19+6,0-2) .. (22+6,0-2); {\sf d}raw[thick] (22+6,0-2) .. controls (25+6,0-2) .. (26+6,-2-2); {\sf d}raw[thick] (22+6,0-2) .. controls (23+6,-2-2) .. (26+6,-2-2); {\sf d}raw[thick] (26+6,-2-2) .. controls (29+6,-2-2) .. (30+6,-4-2); {\sf d}raw[thick] (26+6,-2-2) .. controls (27+6,-4-2) .. (30+6,-4-2); {\sf d}raw[thick] (30+6,-4-2) .. controls (29+6,-6-2) .. (30+6,-8-2); {\sf d}raw[thick] (30+6,-4-2) .. controls (31+6,-6-2) .. (30+6,-8-2); {\sf d}raw[thick] (30+6,-8-2) .. controls (29+6,-10-2) .. (30+6,-12-2); {\sf d}raw[thick] (30+6,-8-2) .. controls (31+6,-10-2) .. (30+6,-12-2); {\sf d}raw[thick] (30+6,-4-2) .. controls (32+6,-3-2) .. (34+6,-4-2); {\sf d}raw[thick] (30+6,-4-2) .. controls (32+6,-5-2) .. (34+6,-4-2); {\sf d}raw[thick] (34+6,-4-2) .. controls (36+6,-3-2) .. (38+6,-4-2); {\sf d}raw[thick] (34+6,-4-2) .. controls (36+6,-5-2) .. (38+6,-4-2); {\sf d}raw[fill] (24,0) circle (.2cm); {\sf d}raw[fill] (28,-2) circle (.2cm); {\sf d}raw[fill] (32,-4) circle (.2cm); {\sf d}raw[fill] (36,-6) circle (.2cm); {\sf d}raw[fill] (40,-6) circle (.2cm); {\sf d}raw[fill] (44,-6) circle (.2cm); {\sf d}raw[fill] (36,-10) circle (.2cm); {\sf d}raw[fill] (36,-14) circle (.2cm); \node at (24,1) {$x$}; \node at (28.6,-1) {$x_1$}; \node at (32.6,-3) {$x_2$}; \node at (36.5,-4.7) {\tauiny $\partial W_0$}; \node at (40,-4.9) {$y_1$}; \node at (44.5,-5.2) {$y$}; \node at (37,-9.9) {$z_1$}; \node at (36.6,-14.4) {$z$}; \iffalse \node at (20,-1) {\tauiny ${{\gothic o}thic a}mma_9$}; \node at (22,1) {\tauiny ${{\gothic o}thic a}mma_{10}$}; \node at (24,-1) {\tauiny ${{\gothic o}thic a}mma_{11}$}; \node at (2,0) {\tauiny $t_0$}; \node at (4,0) {\tauiny $t_1$}; \node at (6,0) {\tauiny $t_2$}; \node at (8,0) {\tauiny $t_3$}; \node at (10,0) {\tauiny $t_4$}; \node at (12,0) {\tauiny $t_5$}; \node at (14,0) {\tauiny $t_6$}; \node at (16,0) {\tauiny $t_7$}; \node at (18,0) {\tauiny $t_8$}; \node at (20,0) {\tauiny $t_9$}; \node at (22,0) {\tauiny $t_{10}$}; \node at (24,0) {\tauiny $t_{11}$}; \fi \end{tikzpicture} \caption{Geodesic triangles in ${\mathcal C}({\sf d}ot S)$: Here $x_j = \partial W_x^j$, $y_j = \partial W_y^j$, and $z_j = \partial W_z^j$, and $\{x_1,x_2,x_3\} \subset [x,y]_{{\sf d}ot S} \cap [x,z]_{{\sf d}ot S}$, $\{y_1,y_2\} \subset [x,y]_{{\sf d}ot S} \cap [y,z]_{{\sf d}ot S}$, and $\{z_1\} \subset [x,z]_{{\sf d}ot S} \cap [y,z]_{{\sf d}ot S}$. The left triangle has $D_0 = \emptyset$, while the triangle on the right has $D_0 = \{W_0\}$, hence $\partial W_0 \in [x,y]_{{\sf d}ot S} \cap [x,z]_{{\sf d}ot S} \cap [y,z]_{{\sf d}ot S}$. Note: there may be more vertices in common to pairs of geodesics than the vertices $x_j,y_j,z_j$. 
Furthermore, there may be various degenerations, e.g.~$D_x= D_0 = \emptyset$, in which case the three bigons in the upper left-hand portion of the left figure disappear and $x_3$ becomes $x$.} \langlebel{F:triangles in C(dot S)} \end{center} \end{figure} We now subdivide each of the survival paths $\sigma(x,y)$, $\sigma(x,z)$, and $\sigma(y,z)$ into subsegments as follows. In this subdivision, $\sigma(x,y)$ is a concatenation of witness geodesics for each witness $W$ in $D_x \cup D_y \cup D_0$ and complementary subsegments connecting consecutive such witness geodesics. The complementary segments are themselves survival paths obtained as concatenations of ${\mathcal C}({\sf d}ot S)$--geodesic segments and witness geodesic segments for witnesses for which $d_W(x,y) \leq 2M$. The paths $\sigma(x,z)$ and $\sigma(y,z)$ are similarly described concatenations. Applying Lemma~\ref{L:witnessLength}, all of the witness segments that appear in the complementary segments (and are thus {\em not} from witnesses in $\Omega_{2M}(x,y,z)$) have length at most $4M$. Let $w \in \sigma(x,y)$ be any point. We must show that there is some $w' \in \sigma(x,z) \cup \sigma(y,z)$ so that $d^s(w,w')$ is uniformly bounded. There are two cases (which actually divide up further into several sub-cases), depending on whether or not $w$ lies on a witness geodesic for a witness $W \in \Omega_{2M}(x,y,z)$. Suppose first that $w$ lies on a witness geodesic $[x',y']_W \subset \sigma(x,y)$ for $W \in D_x$. By definition of $D_x$, $W \in \Omega_M(x,y) \cap \Omega_M(x,z)$, and so there is also a witness geodesic $[x'',z'']_W \subset \sigma(x,z)$. Since there are ${\sf d}ot S$--geodesics $[x,x']_{{\sf d}ot S}, [x,x'']_{{\sf d}ot S},[y',y]_{{\sf d}ot S}, [z'',z]_{{\sf d}ot S}$ so that every vertex has a nonempty projection to ${\mathcal C}(W)$, and since $d_W(y,z) < M$ (again, by definition of $D_x$), Theorem~\ref{BGIT} and the triangle inequality imply \betaegin{equation} \langlebel{E:witness endpoints close} \betaegin{array}{l} d_W(x',x'') \leq d_W(x',x) + d_W(x,x'') \leq 2M \mbox{ and }\\ d_W(y',z'') \leq d_W(y',y) + d_W(y,z) + d_W(z,z'') \leq 3M. \end{array} \end{equation} So $[x',y']_W$ and $[x'',z'']_W$ are ${\mathcal C}(W)$--geodesics whose starting and ending points are within distance $3M$ of each other. Since ${\mathcal C}(W)$ is ${\sf d}elta$--hyperbolic for some ${\sf d}elta >0$, it follows that there is some $w' \in [x'',z'']_W \subset \sigma(x,z)$ so that $d_W(w,w') \leq 2 {\sf d}elta + 3M$. Since ${\mathcal C}(W)$ is a subgraph of ${\mathcal C}^s({\sf d}ot S)$, $d^s(w,w') \leq 2{\sf d}elta + 3M$. We can similarly find the required $w'$ if $w$ is in a witness geodesic segment for a witness $W \in D_y$. Next suppose $w$ lies in the witness geodesic $[x',y']_{W_0} \subset \sigma(x,y)$, for $W_0 \in D_0$ (if $D_0 \neq \emptyset$). The argument in this sub-case is similar to the previous one, as we now describe. Let $[x'',z'']_{W_0} \subset \sigma(x,z)$ and $[y'',z']_{W_0} \subset \sigma(y,z)$ be the $W_0$--geodesic segments. Arguing as in the proof of (\ref{E:witness endpoints close}), we see that the endpoints of these three geodesic segments in ${\mathcal C}(W_0)$ satisfy \[ d_{W_0}(x',x''), d_{W_0}(y',y''), d_{W_0}(z',z'') \leq 2M.\] Since ${\mathcal C}(W_0)$ is ${\sf d}elta$--hyperbolic, we can again easily deduce that for some \[ w' \in [x'',z'']_{W_0} \cup [y'',z']_{W_0} \subset \sigma(x,z) \cup \sigma(y,z),\] we have $d^s(w,w') \leq d_{W_0}(w,w') \leq 3{\sf d}elta + 2M$.
Finally, we assume $w \in \sigma(x,y)$ lies in a complementary subsegment $\sigma(x',y') \subset \sigma(x,y)$ of one of the $\Omega_{2M}(x,y,z)$--witness subsegments of $\sigma(x,y)$ as described above. Note that $x',y' \in [x,y]_{{\sf d}ot S} \cap \sigma(x,y)$ both lie in one of the ``bigons'' in Figure~\ref{F:triangles in C(dot S)} (cases (1) and (2) below) or in the single central ``triangle" (case (3) below, which happens when $D_0 = \emptyset$). Thus, depending on which complementary subsegment we are looking at, we claim that one of the following must hold: \betaegin{enumerate} \item there exists $\sigma(x'',z'') \subset \sigma(x,z)$ so that $d^s(x',x''), d^s(y',z'') \leq 3M$, \item there exists $\sigma(y'',z'') \subset \sigma(y,z)$ so that $d^s(y',y''), d^s(x',z'') \leq 3M$, or \item there exists $\sigma(x'',z'') \subset \sigma(x,z)$ and $\sigma(y'',z') \subset \sigma(y,z)$ so that\\ $d^s(x',x''), d^s(y',y''), d^s(z',z'') \leq 3M$. \end{enumerate} The proofs of these statements are very similar to the proof in the case that $w$ lies on a witness geodesic for some $W \in D_x$ or $D_y$. If $\sigma(x',y')$ is a complementary segment which is part of a bigon and $x'$ is in ${\mathcal C}(W)$ for some $W \in D_x$ (or $x' = x$), then we are in case (1) and we take the corresponding complementary segment $\sigma(x'',z'') \subset \sigma(x,z)$ of the bigon with $x'' \in {\mathcal C}(W)$ (or $x'' = x$). It follows that all vertices of $[x',y]_{{\sf d}ot S}$, $[y,z]_{{\sf d}ot S}$, and $[z,x'']_{{\sf d}ot S}$ have non-empty projections to $W$, so by Theorem~\ref{BGIT} and the triangle inequality we have \[ d^s(x',x'') \leq d_W(x',x'') \leq d_W(x',y) + d_W(y,z) + d_W(z,x'') \leq 3M. \] On the other hand, $y',z'' \in {\mathcal C}(W')$ for some $W' \in D_x \cup D_0$ and similarly \[ d^s(y',z'') \leq d_{W'}(y',z'') \leq d_{W'}(y',x) + d_{W'}(x,z'') \leq 2M < 3M,\] and so the conclusion of (1) holds. If $y' \in {\mathcal C}(W)$ for some $W \in D_y$, then a symmetric argument proves (2) holds. The only other possibility is that $D_0 = \emptyset$, $x' \in {\mathcal C}(W)$, and $y' \in {\mathcal C}(W')$, where $W \in D_x$ and $W' \in D_y$, so that $\sigma(x',y')$ is a segment of the ``triangle". A completely analogous argument proves that condition (3) holds. In any case, note that the two subsegments of the bigon (respectively, three segments of the central triangle), together with segments in curve complexes of proper witnesses give a quadrilateral (respectively, hexagon) of survival paths. Furthermore, by the triangle inequality and application of Theorem~\ref{BGIT}, we see that there is a uniform bound $R >0$ on the projection distances to all proper witnesses between the vertices of this quadrilateral (respectively, hexagon). Let $\epsilonsilon > 0$ be the constant from Lemma~\ref{L:cobounded slim triangles} for this $R$. By Corollary~\ref{C:cobounded slim n-gons}, there is some $w'$ on one of the other sides of this quadrilateral/hexagon so that $d^s(w,w') \leq 3\epsilonsilon$. It may be that $w'$ is in $\sigma(x,z)$ or $\sigma(y,z)$, or that it lies in one of the witness segments. As described above, these segments have length at most $3M$, and so in this latter case, we can find $w'' \in \sigma(x,z) \cup \sigma(y,z)$ with $d^s(w,w'') \leq 3\epsilonsilon + 3M$.
Combining all the above, we see that there is always some $w' \in \sigma(x,z) \cup \sigma(y,z)$ with $d^s(w,w')$ bounded above by \[ \max\{3 \epsilonsilon + 3M, 2 {\sf d}elta + 3M, 4 {\sf d}elta + 2M\}.\] This provides the required uniform bound on the slimness of survival triangles, and completes the proof of the theorem. \end{proof} \section{Distance Formula} \langlebel{S:distance formula} In this section we prove the following theorem. \betaegin{theorem} \langlebel{dist} For any $k \geq \max\{M,24\}$, there exist $K \geq 1$ and $C \geq 0$ so that \[ d^s(x,y) \mathbin{\mid}ackrel{K,C}{\alphasymp} \sum_{W \in \Omega({\sf d}ot S)} \lcut d_{W}(x, y) \rcut_k,\] for all $x,y \in {\mathcal C^s({\sf d}ot S)}$. \end{theorem} Recall that here $x \mathbin{\mid}ackrel{K,C}{\alphasymp} y$ is shorthand for the condition $\frac{1}{K}(x - C) \leq y \leq Kx + C$ and that $\lcut x \rcut_k = x$ if $x \geq k$ and $0$, otherwise. Note that we have already proved an upper bound on $d^s(x,y)$ of the required form in Lemma~\ref{L:upper bound formula} and thus we need only prove the lower bound. \betaegin{remark} As with Theorem~\ref{hyp}, another approach to this theorem would be to follow Masur-Schleimer \cite{MSch1} or Vokes \cite{Vokes}. Here too we give a proof using survival paths, which is straightforward and elementary. \end{remark} One of the main ingredients in our proof is the following due to Behrstock \cite{BehrTH} (see \cite{Joh1} for the version here). \betaegin{lemma}[Behrstock Inequality]\langlebel{Behr} Assume that $W$ and $W'$ are witnesses for ${\mathcal C^s({\sf d}ot S)}$ and $u\in {\mathcal C}({\sf d}ot S)$ with nonempty projection to both $W$ and $W'$. Then \[ d_{W}(u, \partial W') \geq 10 \Rightarrow d_{W'}(u, \partial W) \leq 4. \] \end{lemma} We will also need the following application which we use to provide an ordering on the witnesses for a pair $x,y \in {\mathcal C^s({\sf d}ot S)}$ having large enough projection distances. A more general version was proved in \cite{BBML} (see also \cite{CLM}) and is related to the partial order on domains of hierarchies from \cite{MM2}. The version we will use is the following. \betaegin{proposition} \langlebel{order} Suppose $k \geq 14$ and $W,W'$ are witnesses in the set $\Omega_k(x,y)$. Then the following are equivalent: \betaegin{center} \betaegin{tabular}{lll} (1) $d_{W'}(y, \partial W)\geq10$ \quad \quad & (2) $d_{W}(y, \partial W')\leq4$\\ (3) $d_{W}(x, \partial W')\geq10$ & (4) $d_{W'}(x, \partial W)\leq4$ \end{tabular} \end{center} \end{proposition} \betaegin{proof} By Lemma \ref{Behr} we have $(1) \Rightarrow (2)$ and $(3) \Rightarrow (4)$. To prove $(2) \Rightarrow (3)$ we use the triangle inequality: \[ d_{W}(x, \partial W')\geq d_{W}(x, y)-d_{W}(y, \partial W') \geq k - 4 \geq 10\] since $k \geq 14$. The proof of $(4) \Rightarrow (1)$ is identical to the proof that $(2) \Rightarrow (3)$. \end{proof} \betaegin{definition} For any $k\geq 14$, we define a relation $<$ on $\Omega_k(x,y)$, declaring $W < W'$ for $W, W' \in \Omega_k(x,y)$, if any of the equivalent statements of Proposition \ref{order} is satisfied. \end{definition} \betaegin{lemma} For any $k \geq 20$, the relation $<$ is a total order on $\Omega_k(x,y)$. \end{lemma} \betaegin{proof} We first prove that any two distinct elements $W , W' \in \Omega_k(x,y)$ are ordered. If not, then that means Proposition~\ref{order} (3) fails to hold as stated, or with $y$ replacing $x$, and thus we have $d_{W}(y, \partial W') < 10 $ and $d_{W}(x, \partial W') < 10$.
Hence, \[d_{W}(x,y) \leq d_{W}(x, \partial W')+d_{W}(y, \partial W') < 20 \leq k\] which contradicts the assumption that $W \in \Omega_k(x,y)$. The relation is clearly anti-symmetric, so it remains to prove that it is transitive. To that end, let $W < W' < W''$ in $\Omega_k(x,y)$, and we assume $W \not < W''$, hence $W'' < W$. Since $W< W'$ and $W'' < W$, we have $d_W(y,\partial W') \leq 4$ and $d_W(x,\partial W'') \leq 4$. So by the triangle inequality \[ d_W(\partial W',\partial W'' ) \geq d_W(x,y) - d_W(y,\partial W') - d_W(x,\partial W'') \geq k - 8 > 10.\] Then by Lemma~\ref{Behr}, we have \[ d_{W'}(\partial W,\partial W'') \leq 4. \] So, appealing to the fact that $W< W'$ and $W' < W''$ and Proposition~\ref{order} the triangle inequality implies \[ 20 \leq k \leq d_{W'}(x,y) \leq d_{W'}(x,\partial W) + d_{W'}(\partial W,\partial W'') + d_{W'}(\partial W'',y) \leq 12, \] a contradiction. \end{proof} The next lemma is also useful in the proof of Theorem \ref{dist}. \betaegin{lemma}\langlebel{aux} Let $x,y,u \in {\mathcal C^s({\sf d}ot S)}$, $W,W' \in \Omega_k(x,y)$ with $k \geq 20$, and $W<W'$. Then, \[ d_{W}(u, y) \geq 14 \Rightarrow d_{W'}(u, x) \leq 8\] \end{lemma} \betaegin{proof} From our assumptions, the definition of the order on $\Omega_k(x,y)$, and the triangle inequality we have \[ d_W(u,\partial W') \geq d_W(u,y) - d_W(y,\partial W') \geq 14 - 4 = 10.\] By Lemma~\ref{Behr}, we have $d_{W'}(u,\partial W) \leq 4$. Thus, by the definition of the order on $\Omega_k(x,y)$ and the triangle inequality, we have \[ d_{W'}(u,x) \leq d_{W'}(u,\partial W) + d_{W'}(\partial W,x) \leq 4 + 4 = 8.\] \end{proof} We are now ready to prove the lower bound in Theorem \ref{dist}, which we record in the following proposition. \betaegin{proposition}\langlebel{lowerbnd} Fix $k \geq 24$. Given $x,y \in {\mathcal C^s({\sf d}ot S)}$ we have \[d^s(x, y) \geq \frac1{96} \sum_{W \in \Omega({\sf d}ot S)} \lcut d_W(x,y) \rcut_k \] \end{proposition} \betaegin{proof} Let $[x,y]$ be a geodesic between $x,y$ in ${\mathcal C^s({\sf d}ot S)}$, and denote its vertices \[ x = x_0,x_1,\ldots,x_{n-1},x_n = y.\] So, $n = d^s(x,y)$ is the length of $[x,y]$. Let $m = |\Omega_k(x,y)|$, suppose $m > 0$, and write \[\Omega_k(x,y)= \{W_1<W_2<\cdots <W_m\}.\] For each $1 \leq j < m$ let $0 \leq i_j \leq n$ be such that $d_{W_j}(x_{i_j}, y) \geq 14$ and $d_{W_j}(x_{\ell}, y) \leq 13$ for all $\ell > i_j$. That is, $x_{i_j}$ is the last vertex $z \in [x,y]$ for which $d_{W_j}(z,y) \geq 14$. Then, if $j'>j$, so $W_j < W_{j'}$, Lemma~\ref{aux} implies $d_{W_{j'}}(x_{i_j}, x) \leq 8$ and so \[ d_{W_{j'}}(x_{i_j},y) \geq d_{W_{j'}}(x,y) - d_{W_{j'}}(x_{i_j},x) \geq k - 8 \geq 24-8 = 16.\] Since the projection $\pi_{W_{j'}}$ is $2$--Lipschitz (see Proposition~\ref{P:2 Lipschitz}) and $x_{i_j}$ and $x_{i_j+1}$ are distance $1$ in ${\mathcal C^s({\sf d}ot S)}$, we have \[ d_{W_{j'}}(x_{i_j+1},y) \geq d_{W_{j'}}(x_{i_j},y) - d_{W_{j'}}(x_{i_j},x_{i_j+1}) \geq 16-2 \geq 14.\] Therefore, $i_1<i_2<\cdots <i_{m-1}$. Set $i_0 = 0$ and $i_m = n$. Given $1 \leq j < m$, $d_{W_j}(x_{i_j+1},y) \leq 13$ and again appealing to Proposition~\ref{P:2 Lipschitz}, we have \[ d_{W_j}(x_{i_j},y) \leq d_{W_j}(x_{i_j},x_{i_j+1}) + d_{W_j}(x_{i_j+1},y) \leq 2+ 13 \leq 15.\] Observe this inequality is trivially true for $j = m$ since $y = x_n = x_{i_m}$ and so the left hand side is at most $2$ in this case. Another application of Lemma~\ref{aux} implies $d_{W_j}(x,x_{i_{j-1}}) \leq 8$ for all $1 \leq j \leq m$ (the case $j = 1$ is similarly trivial). 
Therefore \betaegin{equation} \langlebel{E:for the lower bound} d_{W_j}(x_{i_{j-1}},x_{i_j}) \geq d_{W_j}(x,y) - d_{W_j}(x,x_{i_{j-1}}) - d_{W_j}(x_{i_j},y) \geq d_{W_j}(x,y) - 23, \end{equation} for all $1 \leq j \leq m$. Appealing one more time to Proposition~\ref{P:2 Lipschitz}, together with Inequality~(\ref{E:for the lower bound}), we have \[ d^s(x, y) = n = \sum_{j=1}^m i_j-i_{j-1} \geq \frac12 \sum_{j=1}^m d_{W_j}(x_{i_{j-1}},x_{i_j}) \geq \frac{1}{2}\sum_{j=1}^{m}(d_{W_j}(x,y)-23) \] Next, observe that since $d_{W_j}(x,y)\geq k \geq 24$ we have \[d_{W_j}(x,y)-23 \geq \frac{1}{24}d_{W_j}(x,y).\] Since ${\mathcal C}^s({\sf d}ot S) \subset {\mathcal C}({\sf d}ot S)$ is a subcomplex, we have $d^s(x,y) \geq d_{{\sf d}ot S}(x,y)$ and so \[ 2 d^s(x,y) \geq d_{{\sf d}ot S}(x,y) + \frac1{48} \sum_{W \in \Omega_k(x,y)} d_W(x,y) \geq \frac1{48} \sum_{W \in \Omega({\sf d}ot S)} \lcut d_W(x,y) \rcut_k.\] \end{proof} \betaegin{proof}[Proof of Theorem \ref{dist}] Given $k\geq \max\{M,24\}$, let $K = \max\{2k,96\}$ and $C = 2k^2$. The theorem then follows from Corollary~\ref{L:upper bound formula} and Proposition~\ref{lowerbnd}. \end{proof} As a consequence of the Theorem \ref{dist} we have the following two facts. \betaegin{corollary} \langlebel{C:witnesses qi embed} Given a witness $W\subset S$, the inclusion map ${\mathcal C}(W)\hookrightarrow {\mathcal C^s({\sf d}ot S)}$ is a quasi-isometric embedding. \end{corollary} \betaegin{corollary}\langlebel{qd}Survival paths are uniform quasi-geodesics in ${\mathcal C^s({\sf d}ot S)}$. \end{corollary} Moreover, we have \betaegin{lemma} \langlebel{L:reparameterize survival} Survival paths can be reparametrized to be uniform quasi-geodesics in $\mathcal{C}({\sf d}ot S)$. \end{lemma} \betaegin{proof} Let $\sigma(x,y)$ be a survival path with main geodesic $[x,y]_{{\sf d}ot S}$. For every proper witness $W \subsetneq {\sf d}ot S$, if there is a $W$--witness geodesic segment in $\sigma(x,y)$, we reparametrize along this segment so that it is traversed along an interval of length $2$. Since such $W$--witness geodesic segments replaced geodesic subsegments of $[x,y]_{{\sf d}ot S}$ of length $2$, and since they lie in the $1$--neighborhood of $\partial W$, this clearly defines the required reparametrization. \end{proof} \iffalse \betaegin{corollary} \langlebel{5.11} Any uniform quasi-geodesic ${{\gothic o}thic a}mma\colon\tauhinspace [a,b] \rightarrow {\mathcal C^s({\sf d}ot S)}$ can be reparametrized to a uniform quasi-geodesic in $\mathcal{C}({\sf d}ot S)$.\end{corollary} \betaegin{proof} Suppose ${{\gothic o}thic a}mma(a) = x$ and ${{\gothic o}thic a}mma(b) = y$. There is a coarsely order preserving projection from $\sigma(x,y)$ to ${{\gothic o}thic a}mma([a,b])$ that sends a point of $\sigma(x,y)$ to a point of ${{\gothic o}thic a}mma([a,b])$ uniformly bounded $d^s$--distance away (since both are uniform quasi-geodesics). Composing this with the reparametrization map for $\sigma(x,y)$ from Lemma~\ref{L:reparameterize survival} on a sufficiently dense subset and appropriately interpolating between points for ${{\gothic o}thic a}mma$ provides the required reparametrization. \end{proof} \fi \betaegin{corollary} \langlebel{C:infinite survival qgeod} Any infinite survival path is a uniform quasi-geodesic. \end{corollary} \betaegin{proof} This is immediate from Corollary~\ref{qd} and Proposition~\ref{P:exhaustion by survival}. 
\end{proof} \section{Boundary of the surviving curve complex ${\mathcal C^s({\sf d}ot S)}$} \langlebel{S:boundary} Recall that we denote the disjoint union of ending lamination spaces of all witnesses by \[\mathcal{E}\mathcal{L}^s({\sf d}ot S):=\betaigsqcup_{W \in \Omega({\sf d}ot S)} \mathcal{E} \mathcal{L}(W).\] We call this the \tauextit{space of surviving ending laminations} of ${\sf d}ot S$, and give it the coarse Hausdorff topology. In this section we will prove Theorem~\ref{survivalendinglam} from the introduction. In fact, we will prove the following more precise version, which will be useful for our purposes. \betaegin{theorem} \langlebel{T:boundary ending precise} There exists a homeomorphism $\mathcal{F} \colon\tauhinspacelon \partial {\mathcal C}^s({\sf d}ot S) \tauo \ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$ such that for any sequence $\{\alphalpha_n\} \subset {\mathcal C}^s({\sf d}ot S)$, $\alphalpha_n \tauo x$ in ${\betaoldsymbol a}r {\mathcal C}^s({\sf d}ot S)$ if and only if $\alphalpha_n \xrightarrow{\tauext{CH}} \mathcal{F}(x)$. \end{theorem} We denote the Gromov product of $\alphalpha,\betaeta \in {\mathcal C^s({\sf d}ot S)}$ based at $o \in {\mathcal C^s({\sf d}ot S)}$ by $\langle \alphalpha,\betaeta \rangle_o^s$, and recall that the Gromov boundary $\partial {\mathcal C^s({\sf d}ot S)}$ of ${\mathcal C^s({\sf d}ot S)}$ is defined to be the set of equivalence classes of sequences $\{\alphalpha_n\}$ which converge at infinity with respect to $\langle \, \, , \, \, \rangle_o^s$. Throughout the rest of this section we will use (without explicit mention) the fact that the Gromov product between a pair of points (in any hyperbolic space) is uniformly estimated by the minimal distance from the basepoint to a point on a uniform quasi-geodesic between the points. For each proper witness $W \subsetneq {\sf d}ot S$, Corollary~\ref{C:witnesses qi embed} implies that $\partial {\mathcal C}(W)$ embeds into $\partial {\mathcal C^s({\sf d}ot S)}$. Likewise, Corollary~\ref{C:infinite survival qgeod}, combined with the fact that $d_{{\sf d}ot S} \leq d^s$, implies that $\partial {\mathcal C}({\sf d}ot S)$ also embeds into $\partial {\mathcal C^s({\sf d}ot S)}$. Using these embeddings, we view $\partial {\mathcal C}(W)$ as a subspace of $\partial {\mathcal C^s({\sf d}ot S)}$, for all witnesses $W \subseteq {\sf d}ot S$. The next proposition says that the subspaces are pairwise disjoint. \betaegin{proposition} \langlebel{P:Witness boundaries embed disjoint} For any two witnesses $W \neq W'$ for ${\sf d}ot S$, $\partial {\mathcal C}(W) \cap \partial {\mathcal C}(W')= \emptyset$. \end{proposition} \betaegin{proof} Let $x\in \partial {\mathcal C}(W)$ and $x'\in \partial {\mathcal C}(W')$. Then by Lemma~\ref{L:visual on witness boundaries}, there is a bi-infinite survival path $\sigma(x,x')$ and by Corollary~\ref{C:infinite survival qgeod} this survival path is a quasi-geodesic. Hence $x\neq x'$. \end{proof} We now have a natural inclusion of the disjoint union of Gromov boundaries \[ \betaigsqcup_{W \in \Omega({\sf d}ot S)} \partial {\mathcal C}(W) \subset \partial {\mathcal C^s({\sf d}ot S)}.\] In fact, this disjoint union accounts for the entire Gromov boundary.
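\betaegin{remark} Before proving this, we record explicitly the comparison between Gromov products and distances to quasi-geodesics that was invoked above. The statement is standard, and the particular constant below assumes the four--point definition of ${\sf d}elta$--hyperbolicity; it is included only to fix ideas and is not otherwise used. For points $\alphalpha,\betaeta$ in a ${\sf d}elta$--hyperbolic geodesic space with basepoint $o$ and any geodesic $[\alphalpha,\betaeta]$, \[ \langle \alphalpha,\betaeta \rangle_o \leq d(o,[\alphalpha,\betaeta]) \leq \langle \alphalpha,\betaeta \rangle_o + 4{\sf d}elta, \] and if $[\alphalpha,\betaeta]$ is replaced by a uniform quasi-geodesic from $\alphalpha$ to $\betaeta$, then the additive error depends only on ${\sf d}elta$ and the quasi-geodesic constants, by stability of quasi-geodesics. \end{remark}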
\betaegin{lemma} \langlebel{L:bijection} We have \[ \betaigsqcup_{W \in \Omega({\sf d}ot S)} \partial {\mathcal C}(W) = \partial {\mathcal C^s({\sf d}ot S)}.\] \end{lemma} \betaegin{proof} Let $x\in \partial {\mathcal C^s({\sf d}ot S)}$ and let $\alphalpha_n\rightarrow x$ be a sequence in ${\mathcal C}^s({\sf d}ot S)$; we assume without loss of generality that $\{\alphalpha_n\}$ is a quasi-geodesic in ${\mathcal C}^s({\sf d}ot S)$ and that the first vertex is the basepoint $\alphalpha_0 = o$. If $d_{{\sf d}ot S}(\alphalpha_n, o)\rightarrow \infty$ as $n \rightarrow \infty$, then given $R >0$, let $N > 0$ be such that $d_{{\sf d}ot S}(\alphalpha_n,o) \geq R$ for all $n \geq N$. For any $m \geq n \geq N$, the subsegment of the quasi-geodesic, $\{\alphalpha_n,\alphalpha_{n+1},\ldots,\alphalpha_m\}$, is at Hausdorff distance at most some uniform constant $D$ from $\sigma(\alphalpha_n,\alphalpha_m)$ in ${\mathcal C}^s({\sf d}ot S)$, by hyperbolicity and Corollary~\ref{qd}. Therefore, the distance in ${\mathcal C}({\sf d}ot S)$ from any point of $\sigma(\alphalpha_n,\alphalpha_m)$ to $o$ is at least $R - D$. So the distance from any point of $[\alphalpha_n,\alphalpha_m]_{{\sf d}ot S}$ to $o$ is at least $R - D - 1$. Letting $R \tauo \infty$, it follows that $\langle \alphalpha_n, \alphalpha_m\rangle_o \rightarrow \infty$ in ${\mathcal C}({\sf d}ot S)$. Consequently, $\{\alphalpha_n\}$ converges to a point in $\partial {\mathcal C}({\sf d}ot S)$, so $x \in \partial {\mathcal C}({\sf d}ot S)$. For the rest of the proof, we may assume that $d_{{\sf d}ot S}(\alphalpha_n,o)$ is bounded by some constant $0 < R < \infty$ for all $n$. By the distance formula (Theorem~\ref{dist}), \[ d^s(\alphalpha_0,\alphalpha_n) \mathbin{\mid}ackrel{K,C}{\alphasymp} \sum_{W \in \Omega({\sf d}ot S)} \lcut d_{W}(\alphalpha_0, \alphalpha_n) \rcut_k,\] and since $d^s(\alphalpha_0, \alphalpha_n) \rightarrow \infty$, we can find a witness $W_n$ for $\sigma(\alphalpha_0,\alphalpha_n)$ for each $n \in \mathbb N$, so that we have $d_{W_n}(\alphalpha_0, \alphalpha_n) \rightarrow \infty$ as $n \tauo \infty$. We would like to show that there is a unique witness $W$ such that $d_W(\alphalpha_0, \alphalpha_n)\rightarrow \infty$. To do that, let $h > 0$ be the maximal Hausdorff distance in ${\mathcal C}^s({\sf d}ot S)$ between $\sigma(\alphalpha_0,\alphalpha_n)$ and $\{\alphalpha_k\}_{k=0}^n$, for all $n \geq 0$ (which is finite by hyperbolicity of ${\mathcal C}^s({\sf d}ot S)$ and Corollary~\ref{qd}). \betaegin{claim} \langlebel{Claim:large persists} Given $n \in \mathbb N$, if $d_W(\alphalpha_0,\alphalpha_n) \geq M + 1 + 2(h+1)$ for some witness $W \subsetneq {\sf d}ot S$, then $W$ is a witness for $\sigma(\alphalpha_0,\alphalpha_m)$, for all $m \geq n$. \end{claim} \betaegin{proof} Let $\alphalpha_n' \in \sigma(\alphalpha_0,\alphalpha_m)$ be such that $d^s(\alphalpha_n,\alphalpha_n') \leq h$. If $W$ is not a witness for $\sigma(\alphalpha_0,\alphalpha_m)$, then every vertex of the main geodesic $[\alphalpha_0,\alphalpha_m]_{{\sf d}ot S}$ of $\sigma(\alphalpha_0,\alphalpha_m)$ has nonempty projection to $W$. Furthermore, the geodesic in ${\mathcal C}^s({\sf d}ot S)$ from $\alphalpha_n$ to $\alphalpha_n'$ of length at most $h$ can be extended to a path in ${\mathcal C}({\sf d}ot S)$ to $[\alphalpha_0,\alphalpha_m]_{{\sf d}ot S}$ of length at most $h+1$ such that every vertex has nonempty projection to $W$.
By Proposition~\ref{P:2 Lipschitz} and the triangle inequality we have \[ |d_W(\alphalpha_0,\alphalpha_n) - d_W(\alphalpha_0,\alphalpha_n')| \leq 2(h+1).\] If $\alphalpha_n' \in [\alphalpha_0,\alphalpha_m]_{{\sf d}ot S}$, then since every vertex of this geodesic has nonempty projection to $W$, it follows that $d_W(\alphalpha_0,\alphalpha_n') < M$. If $\alphalpha_n' \not \in [\alphalpha_0,\alphalpha_m]_{{\sf d}ot S}$, then there is a witness $W'$ for $[\alphalpha_0, \alphalpha_m]_{{\sf d}ot S}$ such that $d_{{\sf d}ot S}(\alphalpha_n', \partial W')=1$, and as a result $d_W(\alphalpha_0,\alphalpha_n') < M+1$. In any case, \[ d_W(\alphalpha_0,\alphalpha_n) \leq d_W(\alphalpha_0,\alphalpha_n') + 2(h+1) < M + 1+ 2(h+1),\] which is a contradiction. This proves the claim. \end{proof} Since $d_{W_n}(\alphalpha_0,\alphalpha_n) \tauo \infty$, there exists $n_0 >0$ such that $d_{W_{n_0}}(\alphalpha_0,\alphalpha_{n_0}) \geq 2(h+1) + M + 1$, and hence for all $m \geq n_0$, $W_{n_0}$ is a witness for $\sigma(\alphalpha_0,\alphalpha_m)$. Let $\Omega([\alphalpha_0,\alphalpha_n]_{{\sf d}ot S})$ be the set of proper witnesses for $[\alphalpha_0,\alphalpha_n]_{{\sf d}ot S}$, and set \[ \Omega_n = \betaigcap_{m =n}^\infty \Omega([\alphalpha_0,\alphalpha_m]_{{\sf d}ot S}).\] Note that $\Omega_n \subset \Omega_{n+1}$ for all $n$ and that $\Omega_n$ is nonempty for all $n \geq n_0$. Since each $\Omega_n$ contains no more than $R/2$ elements by Corollary~\ref{qd}, the (nested) union $\Omega_\infty$ is given by $\Omega_\infty = \Omega_N$ for some $N \geq n_0$. The boundaries of the witnesses in $\Omega_\infty$ lie on the geodesic $[\alphalpha_0,\alphalpha_m]_{{\sf d}ot S}$ for all $m \geq N$, and we let $W_\infty \in \Omega_\infty$ be the one whose boundary is furthest from $\alphalpha_0$. Without loss of generality, we may assume that $[\alphalpha_0,\alphalpha_m]_{{\sf d}ot S}$ and $[\alphalpha_0,\alphalpha_{m'}]_{{\sf d}ot S}$ all agree on $[\alphalpha_0,\partial W_\infty]$, for all $m,m' \geq N$. For any $m \geq N$ and any witness $W$ of $[\alphalpha_0,\alphalpha_N]_{{\sf d}ot S}$ with $\partial W$ {\em further} from $\alphalpha_0$ than $\partial W_\infty$, note that $d_W(\alphalpha_0,\alphalpha_m) < M+1+2(h+1)$: otherwise, by Claim~\ref{Claim:large persists}, $W$ would be a witness for $[\alphalpha_0,\alphalpha_{m'}]_{{\sf d}ot S}$ for all $m' \geq m$ and so $W \in \Omega_\infty$ with $\partial W$ further from $\alphalpha_0$ than $\partial W_\infty$, a contradiction to our choice of $W_\infty$. For any $n \geq N$, let $\betaeta_n$ be the last vertex of $\sigma(\alphalpha_0,\alphalpha_n)$ in ${\mathcal C}(W_\infty)$. By the previous paragraph together with Theorem~\ref{BGIT} and the bound \[ d_{{\sf d}ot S}(\betaeta_n,\alphalpha_n) \leq d_{{\sf d}ot S}(\alphalpha_0,\alphalpha_n) \leq R/2,\] we see that the subpath of $\sigma(\alphalpha_0,\alphalpha_n)$ from $\betaeta_n$ to $\alphalpha_n$ has length bounded above by some constant $C >0$, independent of $n$. In particular, $d^s(\alphalpha_n,\betaeta_n) \leq C$. Therefore, $\alphalpha_n$ and $\betaeta_n$ converge to the same point $x$ on the Gromov boundary of ${\mathcal C}^s({\sf d}ot S)$. Since $\betaeta_n \in {\mathcal C}(W_\infty)$, which is quasi-isometrically embedded in ${\mathcal C}^s({\sf d}ot S)$, it follows that $x \in \partial {\mathcal C}(W_\infty)$, as required.
\end{proof} \iffalse By the triangle inequality \[ d^s(\alphalpha_n,\Pi_{W_\infty}(\alphalpha_n)) \leq d^s(\alphalpha_n,\betaeta_n) + d^s(\betaeta_n,\Pi_{W_\infty}(\alphalpha_n)).\] By Lemma~\ref{L:witnessLength}, $d^s(\betaeta_n,\Pi_{W_\infty}(\alphalpha_n)) \leq d_{W_\infty}(\betaeta_n,\alphalpha_n) \leq M$, and so $d^s(\alphalpha_n,\Pi_{W_\infty}(\alphalpha_n)) \leq C+M$. Therefore, $\alphalpha_n$ and $\Pi_{W_\infty}(\alphalpha_n)$ converge to the same point $x$ on the Gromov boundary of ${\mathcal C}^s({\sf d}ot S)$. Since $\Pi_{W_\infty}(\alphalpha_n) \in {\mathcal C}(W)$, it follows that $x \in \partial {\mathcal C}(W_\infty)$, completing the proof. \end{proof} For all $m \geq n$, either $\alphalpha_m \in {\mathcal C}(W_\infty)$ or else there is some last vertex $\betaeta_m$ of $\sigma(\alphalpha_0,\alphalpha_m)$ contained in ${\mathcal C}(W_\infty)$. The remainder of $\sigma(\alphalpha_0,\alphalpha_m)$ beyond $\betaeta_m$ has uniformly bounded length since there are no more witnesses with large projections, and since $d_{{\sf d}ot S}(\alphalpha_0,\alphalpha_m) \leq R$ implies $d_{{\sf d}ot S}(\betaeta_m,\alphalpha_m) \leq R$. Therefore, by the Bounded Geodesic Image Theorem, Consequently, $\partial W_n$ is contained in each main geodesic $[\alphalpha_0,\alphalpha_m]_{{\sf d}ot S}$, for $m \geq n_1$, and so without loss of generality, we may assume that these geodesics all agree up to $\partial W_n$. Therefore, Since $d^s(\alphalpha_0,\alphalpha_m) \tauo \infty$, either $d_{W_{n_1}}(\alphalpha_0,\alphalpha_m) \tauo \infty$ or $d_{W_{n_1}}(\alphalpha_0,\alphalpha_m)$ is uniformly bounded. In the latter case, there exists $n_2 > n_1$ so that $d_{W_{n_2}}(\alphalpha_0,\alphalpha_{n_2})$ is greater than the maximum of $M+1+2(h+1)$ and the projections to all the witnesses $W$ with $\partial W$ in $[\alphalpha_0,\partial W_{n_1}]_{{\sf d}ot S}$. Consequently, $W_{n_2}$ is a witness for all $m \geq n_2$ {\em and} $\partial W_{n_2}$ occurs {\em further} from $\alphalpha_0$ than $\partial W_{n_1}$ along $[\alphalpha_0,\alphalpha_m]_{{\sf d}ot S}$. We can therefore assume all $[\alphalpha_0,\alphalpha_m]_{{\sf d}ot S}$ agree on this {\em longer} segment $[\alphalpha_0,\partial W_{n_2}]_{{\sf d}ot S}$, for all $m \geq n_2$. Now either $d_{W_{n_2}}(\alphalpha_0,\alphalpha_m) \tauo \infty$ as $m \tauo \infty$, or we can find $n_3 > n_2$ with $W_{n_3}$ a witness for $[\alphalpha_0,\alphalpha_m]_{{\sf d}ot S}$, for all $m \geq n_3$. By Lemma \ref{L:distance witnesses}, this process can only continue at most $R/2$ times, and hence there exists a witness $W_\infty$ ($=W_{n_k}$ for the last $n_k$ we find by this process) so that $d_{W_\infty}(\alphalpha_0,\alphalpha_m) \tauo \infty$ as $m \tauo \infty$. By the claim and the fact that $d_{W_n}(\alphalpha_0, \alphalpha_n) \rightarrow \infty, \,\, \tauext{for all}\, n\in \mathbb N$ ,there is a witness $W$ for $[\alphalpha_0, \alphalpha_m]_{{\sf d}ot S}$ for all $m\geq n$. Now we can assume that all $[\alphalpha_0, \alphalpha_m]_{{\sf d}ot S}$ agree on $[\alphalpha_0, \partial W]_{{\sf d}ot S}$. Now if $d_W(\alphalpha_0, \alphalpha_m) \nrightarrow \infty$, then $W\neq W_m$ for $m$ sufficiently large. On the other hand, by The Bounded Geodesic image Theorem \ref{BGIT}, $\partial W_m\in [\alphalpha_0, \alphalpha_m]_{{\sf d}ot S}$ but $\partial W_m \notin [\alphalpha_0, \partial W]_{{\sf d}ot S}$ since initial segments are the same. 
This will imply that $\partial W \prec\partial W_m $ where $\prec$ denotes the partial order of the section {5} on the witnesses along the geodesic. Now we can repeat the argument by assuming that for all $k>m$, $[\alphalpha_0, \alphalpha_k]_{{\sf d}ot S}$ agree on $[\alphalpha_0, \partial W_m]$. Now if $d_{W_m}(\alphalpha_0, \alphalpha_k) \nrightarrow \infty$, arguing as before (replacing $W$ with $W_m$)$W_m\prec W_k$. Continuing this way, since $|\Omega_k|\leq R/2$ we can pick the maximal $W_{\infty}$ with respect to the order $\prec$ such that $d_{W_{\infty}}(\alphalpha_0, \alphalpha_k) \rightarrow \infty$ as $k\rightarrow \infty$ and $[\alphalpha_0, \partial W_{\infty}]_{{\sf d}ot S}$ all agree for $k$ sufficiently large. Furthermore by the claim if $W$ is any witness with $d_W(\alphalpha_0, \alphalpha_k)\geq 2(h+1)+M$ then $W\prec W_{\infty}$. \betaegin{claim} The sequence $\{\Pi_{W_\infty}(\alphalpha_k)\}$ converges to a point in the Gromov boundary $\partial\mathcal{C}({W_{\infty}})$. \end{claim} \betaegin{proof} Let $n,m$ be sufficiently large that $d_{W_{\infty}}(\alphalpha_0,\alphalpha_n) > M + 2(h+1)$ and $d_{W_{\infty}}(\alphalpha_0,\alphalpha_m) > M + 2(h+1)$ and by the definition of $W_{\infty}$, all $W \neq W_\infty$ $d_W(\alphalpha_0,\alphalpha_m)<C$ for some constant $C$. Hence by triangle inequality, for all $W\neq {W_{\infty}}$ and $n,m$ sufficiently large, $d_W(\alphalpha_n,\alphalpha_m)<2C$ . As a result, by the distance formula, $d^s(\alphalpha_n, \alphalpha_m)\alphasymp d_{W_{\infty}}(\alphalpha_n,\alphalpha_m)$. Now observe that as $n,m \tauo \infty$, we have \betaegin{eqnarray*} \langle \Pi_{W_\infty}(\alphalpha_n), \Pi_{W_\infty} (\alphalpha_m)\rangle_{\Pi_{W_\infty}(\alphalpha_0)} & = & \taufrac{1}{2} \left\{ d_{W_{\infty}}(\alphalpha_0,\alphalpha_n)+d_{W_{\infty}}(\alphalpha_0,\alphalpha_m)-d_{W_{\infty}}(\alphalpha_n,\alphalpha_m) \right\}\\ & \alphasymp & \taufrac{1}{2} \left\{ d^s(\alphalpha_0,\alphalpha_n)+d^s(\alphalpha_0,\alphalpha_m)-d^s(\alphalpha_n,\alphalpha_m) \right\}\\ & = & \langlengle \alphalpha_n,\alphalpha_m \ranglengle_{\alphalpha_0} \tauo \infty. \end{eqnarray*} We note that the first Gromov product is taking place in ${\mathcal C}(W_\infty)$ (between sets of bounded diameter instead of points), while the last is in ${\mathcal C}^s({\sf d}ot S)$. Nonetheless, the computation shows that $\{\Pi_{W_\infty}(\alphalpha_n)\}$ converges to a point $x' \in \partial {\mathcal C}(W_\infty)$, as required. \end{proof} Now we claim that $d^s(\alphalpha_n,\Pi_{W_\infty}(\alphalpha_n))$ is uniformly bounded. To prove this claim, we note that the last point $\betaeta_n$ of $\sigma(\alphalpha_0,\alphalpha_n)$ that is inside ${\mathcal C}(W_\infty)$ is uniformly close to $\Pi_{W_\infty}(\alphalpha_n)$. \cl{is there a lemma for this?} Because there are no other projections between $\alphalpha_0$ and $\alphalpha_n$ greater than $C$, and since $d_{{\sf d}ot S}(\alphalpha_0,\alphalpha_n) \leq R$, it follows that the subsegment of $\sigma(\alphalpha_0,\alphalpha_n)$ from $\betaeta_n$ to $\alphalpha_n$ is uniformly bounded length. Consequently, \[ d^s(\alphalpha_n,\Pi_{W_\infty}(\alphalpha_n)) \leq d^s(\alphalpha_n,\betaeta_n) + d^s(\betaeta_n,\Pi_{W_\infty}(\alphalpha_n)) \] is uniformly bounded, as required. This completes the proof, because we have $\alphalpha_n \tauo x'$, and so $x = x' \in \partial {\mathcal C}(W_\infty)$. 
\end{proof} \fi The next Lemma provides a convenient tool for deciding when a sequence in ${\mathcal C}^s({\sf d}ot S)$ converges to a point in $\partial {\mathcal C}(W)$, for some proper witness $W$. \betaegin{lemma}\langlebel{L:proj and conv2} Given $\{\alphalpha_n\} \subset {\mathcal C^s({\sf d}ot S)}$ and $x\in \partial \mathcal{C}(W)$ for some witness $W$, then $\alphalpha_n {\rightarrow} x$ if and only if $\pi_W(\alphalpha_n) \rightarrow x$. \end{lemma} \betaegin{proof} Throughout, we assume $o=\alphalpha_0$, the basepoint, which without loss of generality we assume lies in $W$, and let $\{\betaeta_n\} \subset {\mathcal C}(W)$ be any sequence converging to $x$, so that for the Gromov product in ${\mathcal C}(W)$ we have $\langlengle \betaeta_n,\betaeta_m \ranglengle_o^W \tauo \infty$ as $n,m \tauo \infty$. Since $\sigma(\alphalpha_n,\betaeta_n)$ is a uniform quasi-geodesic by Corollary~\ref{qd} it follows that \[ d^s(o,\sigma(\alphalpha_n,\betaeta_n)) \alphasymp \langlengle \alphalpha_n,\betaeta_m \ranglengle^s_o ,\] with uniform constants (where the distance on the left is the minimal distance from $o$ to the survival path). Let ${\sf d}elta_{n,m}$ be the first point of intersection of $\sigma(\alphalpha_n,\betaeta_m)$ with ${\mathcal C}(W)$ (starting from $\alphalpha_n$). By Lemma~\ref{L:witnessLength}, $d_W({\sf d}elta_{n,m},\alphalpha_n) < M$. Consequently, because ${\sf d}elta_{n,m} \in {\mathcal C}(W)$ and $\pi_W(\alphalpha_n) \subset {\mathcal C}(W)$, this means \[ d^s({\sf d}elta_{n,m},\pi_W(\alphalpha_n)) \leq d_W({\sf d}elta_{n,m},\pi_W(\alphalpha_n) ) = d_W({\sf d}elta_{n,m},\alphalpha_n) < M.\] Therefore, by hyperbolicity, the ${\mathcal C}^s({\sf d}ot S$)--geodesic from (any curve in) $\pi_W(\alphalpha_n)$ to $\betaeta_n$ lies in a uniformly bounded neighborhood of $\sigma(\alphalpha_n,\betaeta_m)$, and so \[ \langlengle \pi_W(\alphalpha_n),\betaeta_m \ranglengle^s_o \alphasymp d^s(o,[\pi_W(\alphalpha_n),\betaeta_m]) \succeq d^s(o,\sigma(\alphalpha_n,\betaeta_m)) \alphasymp \langlengle \alphalpha_n,\betaeta_m \ranglengle^s_o. \] If $\alphalpha_n \tauo x$, then the right-hand side of the above coarse inequality tends to infinty, and hence so does the left-hand side. This implies $\pi_W(\alphalpha_n) \tauo x$. Next suppose that $\pi_W(\alphalpha_n) \tauo x \in \partial {\mathcal C}(W)$. As above, we have \[ \langlengle \alphalpha_n,\pi_W(\alphalpha_n) \ranglengle_o^s \alphasymp d^s(o,[\alphalpha_n,\pi_W(\alphalpha_n)]),\] and so it suffices to show that the right-hand side tends to infinity as $n \tauo \infty$. Since $\pi_W(\alphalpha_n) \tauo x \in \partial {\mathcal C}(W)$, we have $d_W(o,\alphalpha_n) \tauo \infty$, and setting ${\sf d}elta_n$ to be the first point of $\sigma(\alphalpha_n,o)$ in ${\mathcal C}(W)$, Lemma~\ref{L:witnessLength} implies that $d^s({\sf d}elta_n,\pi_W(\alphalpha_n)) < M$. Therefore, $\sigma(\alphalpha_n,o)$ passes within $d^s$--distance $M$ of $\pi_W(\alphalpha_n)$ on its way to $o$. Since $\sigma(\alphalpha_n,o)$ is a uniform quasi-geodesic by Corollary~\ref{qd}, it follows that $[\alphalpha_n,\pi_W(\alphalpha_n)]$ is uniformly Hausdorff close to the initial segment $J_n \subset \sigma(\alphalpha_n,o)$ from $\alphalpha_n$ to ${\sf d}elta_n$. 
Since the closest point of $J_n$ to $o$ is, coarsely, the point ${\sf d}elta_n$, which is uniformly close to $\pi_W(\alphalpha_n)$, we have \betaegin{eqnarray*} \langlengle \alphalpha_n,\pi_W(\alphalpha_n) \ranglengle_o^s & \alphasymp & d^s(o,[\alphalpha_n,\pi_W(\alphalpha_n)]) \\ & \alphasymp & d^s(o,J_n) \, \, \alphasymp \, \, d^s(o,{\sf d}elta_n) \, \, \alphasymp \, \, d^s(o,\pi_W(\alphalpha_n)) \, \, \alphasymp \, \, d_W(o,\alphalpha_n) \tauo \infty. \end{eqnarray*} Therefore, $\alphalpha_n$ and $\pi_W(\alphalpha_n)$ converge together to $x \in \partial {\mathcal C}(W)$. This completes the proof. \end{proof} \iffalse Then, by definition of being a point on the Gromov boundary $x=[\{\betaeta_m\}]$, where brackets denote the equivalence class of the sequence $\{\betaeta_m\}$. Now, since $\alphalpha_n \tauo x$, $\lim \langle(\alphalpha_n, \betaeta_m)\rangle_o^s\rightarrow \infty $ when $m,n\rightarrow \infty$. \cl{need to clarify this proof; specifically, clarify the comparison $\alphasymp$ } As a result, we deduce that $d_W(\alphalpha_n, \alphalpha_m) \alphasymp d^s(\betaeta_n, \betaeta_n)$. Now combining by the triangle inequality we have \betaegin{eqnarray*} \langle \Pi_{W}(\alphalpha_n), \Pi_{W} (\alphalpha_m)\rangle_{\Pi_{W}(\alphalpha_0)} & = & \taufrac{1}{2} \left\{ d_{W}(\alphalpha_0,\alphalpha_n)+d_{W}(\alphalpha_0,\alphalpha_m)-d_{W}(\alphalpha_n,\alphalpha_m) \right\}\\ & \alphasymp & \taufrac{1}{2} \left\{ d^s(\betaeta_o,\betaeta_n)+d^s(\betaeta_o,\alphalpha_m)-d^s(\betaeta_n,\betaeta_m) \right\}\\ & = & \langlengle \betaeta_n,\betaeta_m \ranglengle_{\betaeta_0} \tauo \infty. \end{eqnarray*} \end{proof} \fi \betaegin{proof}[Proof of Theorem~\ref{T:boundary ending precise}] By Lemma~\ref{L:bijection}, for any $x \in \partial {\mathcal C^s({\sf d}ot S)}$, there exists a witness $W \subseteq {\sf d}ot S$ so $x \in \partial {\mathcal C}(W)$. Let $\mathcal F(x) = \mathcal F_W(x)$, where $\mathcal F_W \colon\tauhinspacelon \partial {\mathcal C}(W) \tauo \ensuremath{\mathcal{EL}}\xspace(W)$ is the homeomorphism given by Theorem~\ref{T:Klarreich}. This defines a bijection $\mathcal F \colon\tauhinspacelon \partial {\mathcal C^s({\sf d}ot S)} \tauo \ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$. We let $x \in \partial {\mathcal C^s({\sf d}ot S)}$ with $\alphalpha_n \tauo x$ in ${\betaoldsymbol a}r {\mathcal C}^s({\sf d}ot S)$, and prove that $\alphalpha_n$ coarse Hausdorff converges to $\mathcal F(x)$. Let $W \subseteq {\sf d}ot S$ be the witness with $x \in \partial {\mathcal C}(W)$. According to Proposition~\ref{L:proj and conv2}, $\pi_W(\alphalpha_n) \tauo x$ in ${\betaoldsymbol a}r {\mathcal C}(W)$. By Theorem~\ref{T:Klarreich}, $\pi_W(\alphalpha_n) \mathbin{\mid}ackrel{CH}{\tauo} \mathcal F_W(x) = \mathcal F(x)$, and by Lemma~\ref{L:proj and conv}, $\alphalpha_n \mathbin{\mid}ackrel{CH}{\tauo} \mathcal F(x)$, as required. To prove the other implication, we suppose that $\alphalpha_n \mathbin{\mid}ackrel{CH}{\tauo} \mathcal L$, for some $\mathcal L \in \ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$, and prove that $\alphalpha_n \tauo x$ in ${\betaoldsymbol a}r {\mathcal C}^s({\sf d}ot S)$ where $\mathcal{F}(x) = \mathcal{L}$. Let $W \subseteq S$ be the witness with $\mathcal L \in \ensuremath{\mathcal{EL}}\xspace(W)$. By Lemma~\ref{L:proj and conv}, $\pi_W(\alphalpha_n) \mathbin{\mid}ackrel{CH}{\tauo} \mathcal L$. By Theorem~\ref{T:Klarreich}, $\pi_W(\alphalpha_n) \tauo x$ in ${\betaoldsymbol a}r {\mathcal C}(W)$ where $\mathcal F_W(x) = \mathcal L$. 
By Proposition~\ref{L:proj and conv2}, $\alphalpha_n \tauo x$ in ${\betaoldsymbol a}r {\mathcal C}^s({\sf d}ot S)$ and $\mathcal F(x) = \mathcal F_W(x) = \mathcal L$. All that remains is to show that $\mathcal F$ is a homeomorphism. Throughout the remainder of this proof, we will frequently pass to subsequences, and will reindex without mention. We start by proving that $\mathcal F$ is continuous. Let $\{x^n\} \subset \partial {\mathcal C^s({\sf d}ot S)}$ with $x^n \tauo x$ as $n \tauo \infty$. Pass to {\em any} Hausdorff convergent subsequence so that $\mathcal{F}(x^n) \xrightarrow{\tauext{H}}\mathcal{L}$ for some lamination $\mathcal{L}$. If we can show that $\mathcal{F}(x) \subseteq \mathcal{L}$, then this will show that the original sequence coarse Hausdorff converges to $\mathcal F(x)$, and thus $\mathcal{F}$ will be continuous. For each $n$, let $\{ \alphalpha_k^n \}_{k=1}^\infty \subset {\mathcal C}^s({\sf d}ot S)$ be a sequence with $\alphalpha_k^n \rightarrow x^n$ as $k\rightarrow \infty$. Since $x^n \tauo x$, we may pass to subsequences so that for any sequence $\{k_n\}$, we have $\alphalpha_{k_n}^n \tauo x$ as $n \tauo \infty$. From the first part of the argument, $\alphalpha_k^n \xrightarrow{\tauext{CH}}\mathcal{F}(x^n)$ as $k \tauo \infty$, for all $n$. For each $n$, pass to a subsequence so that $ \alphalpha_k^n \xrightarrow{\tauext{H}}\mathcal{L}_n$ as $k \tauo \infty$; thus $\mathcal{F}(x^n)\subseteq \mathcal{L}_n$. By passing to yet a further subsequence for each $n$, we may assume $d_H(\alphalpha_k^n, \mathcal{L}_n) < \frac{1}n$ for all $k$; in particular, this holds for $k=1$. Now pass to a subsequence of $\{\mathcal{L}_n\}$ so that $\mathcal{L}_n \mathbin{\mid}ackrel{H}{\tauo} \mathcal{L}_o$ for some lamination $\mathcal{L}_o$; it follows that $\alphalpha_1^n \mathbin{\mid}ackrel{H}\tauo \mathcal{L}_o$ as $n \tauo \infty$. Since $\alphalpha_1^n \tauo x$ (from the above, setting $k_n=1$ for all $n$), this implies that $\mathcal{F}(x) \subseteq \mathcal{L}_o$. Since $\mathcal{F}(x^n)\subseteq \mathcal{L}_n$, we have $\mathcal{L}\subseteq \mathcal{L}_o$. If $\mathcal{F}(x) \in \ensuremath{\mathcal{EL}}\xspace({\sf d}ot S)$, then it is the unique minimal sublamination of $\mathcal{L}_o$, and since $\mathcal{L} \subseteq \mathcal{L}_o$, we have $\mathcal{F}(x) \subseteq \mathcal{L}$. If $\mathcal{F}(x) \in \ensuremath{\mathcal{EL}}\xspace(W)$ for some proper witness $W$, then either $\mathcal{F}(x)$ is the unique minimal sublamination of $\mathcal{L}_o$, or $\mathcal{L}_o$ contains $\mathcal{F}(x) \cup \partial W$. Since $\partial W$ does not intersect the interior of $W$, whereas $\mathcal{L} \subset \mathcal{L}_o$ is a sublamination that nontrivially intersects the interior of $W$, it follows that $\mathcal{F}(x) \subseteq \mathcal{L}$. Therefore we have $\mathcal{F}(x) \subseteq \mathcal{L}$ in both cases, and so $\mathcal{F}$ is continuous. To prove continuity of $\mathcal{G}=\mathcal{F}^{-1}$, suppose $\mathcal{L}_n \xrightarrow{\tauext{CH}} \mathcal{L}$, and we must show $\mathcal{G}(\mathcal{L}_n) \tauo \mathcal{G}(\mathcal{L})$. We first pick a sequence of curves $\alphalpha^n_k$ such that $\alphalpha^n_k \rightarrow \mathcal{G}(\mathcal{L}_n)$ in ${\betaoldsymbol a}r {\mathcal C}^s({\sf d}ot S)$.
Then, $\alphalpha^n_k \xrightarrow{\tauext{CH}}\mathcal{L}_n$ as $k\rightarrow \infty$, by the first part of the proof, and after passing to subsequences as necessary, we may assume: (i) $\alphalpha_k^n \mathbin{\mid}ackrel{H}\tauo \mathcal{L}_n'$ as $k \tauo \infty$, and hence $\mathcal{L}_n \subseteq \mathcal{L}_n'$ for all $n$; (ii) $d_H(\alphalpha_k^n,\mathcal{L}_n') < \taufrac1n$ for all $k$; and (iii) $\langlengle \alphalpha_k^n,\alphalpha_\ell^n \ranglengle_o \geq \min\{k,\ell\} + n$, for all $k,\ell,n$. Now pass to {\em any} Hausdorff convergent subsequence $\mathcal{L}_n' \mathbin{\mid}ackrel{H}\tauo \mathcal{L}'$. It suffices to show that for this subsequence $\mathcal{G}(\mathcal{L}_n) \tauo \mathcal{G}(\mathcal{L})$. Observe that we also have $\mathcal{L} \subseteq \mathcal{L}'$ and by (ii) above we also have $\alphalpha_{k_n}^n \tauo \mathcal{L}'$ as $n \tauo \infty$, for {\em any} sequence $\{k_n\}$. Thus, for example, we can conclude that $\alphalpha_1^n \mathbin{\mid}ackrel{CH}\tauo \mathcal{L}$, and so by the first part of the proof we have $\alphalpha_1^n \tauo \mathcal{G}(\mathcal{L})$. As equivalence classes of sequences, we thus have $\mathcal{G}(\mathcal{L}_n) = [\{\alphalpha_k^n\}]$ and $\mathcal{G}(\mathcal{L}) = [\{\alphalpha_1^m\}]$. We further observe that by hyperbolicity and the conditions above, for all $k,n,m$ we have \[ \langlengle \alphalpha_1^m,\alphalpha_k^n \ranglengle_o \succeq \min\{ \langlengle \alphalpha_1^m,\alphalpha_1^n \ranglengle_o,\langlengle \alphalpha_1^n,\alphalpha_k^n \ranglengle_o \} \geq \min\{ \langlengle \alphalpha_1^m,\alphalpha_1^n \ranglengle_o, 1+n\} .\] Therefore, \[ \sup_m \liminf_{k,n\tauo \infty} \langlengle \alphalpha_1^m,\alphalpha_k^n \ranglengle_o \succeq \sup_m \liminf_{n\tauo \infty} \langlengle \alphalpha_1^m,\alphalpha_1^n \ranglengle_o = \infty,\] from which it follows that $\mathcal{G}(\mathcal{L}_n) \tauo \mathcal{G}(\mathcal{L})$, as required. This completes the proof. \end{proof} \betaegin{proof}[Proof of Theorem~\ref{survivalendinglam}] Let $\mathcal{F} \colon\tauhinspacelon \partial {\mathcal C}^s({\sf d}ot S) \tauo \ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$ be the homeomorphism from Theorem~\ref{T:boundary ending precise}. It suffices to show that $\mathcal{F}$ is $\PMod({\sf d}ot S)$--equivariant. For this, let $f \in \PMod({\sf d}ot S)$ be any mapping class and $x \in \partial {\mathcal C}^s({\sf d}ot S)$ any boundary point. If $\{\alphalpha_n\} \subset {\mathcal C}^s({\sf d}ot S)$ is any sequence with $\alphalpha_n \tauo x$ in ${\betaoldsymbol a}r {\mathcal C}^s({\sf d}ot S)$, then $f \cdot \alphalpha_n \tauo f \cdot x$ since $f$ acts by isometries on ${\mathcal C}^s({\sf d}ot S)$. Applying Theorem~\ref{T:boundary ending precise} to the sequence $\{f \cdot \alphalpha_n\}$ we see that $f \cdot \alphalpha_n \mathbin{\mid}ackrel{CH}\tauo \mathcal{F}(f \cdot x)$. On the other hand we also have $f \cdot \alphalpha_n \mathbin{\mid}ackrel{CH}\tauo f \cdot \mathcal{F}(x)$, since $f$ acts by homeomorphisms on the space of laminations with the coarse Hausdorff topology. Therefore, $f \cdot \mathcal{F}(x) = \mathcal{F}(f \cdot x)$, as required. \end{proof} \section{Extended survival map} \langlebel{S:extended survival} $\quad$ We start by introducing some notation before we define the extended survival map. 
First observe that there is an injection ${\mathcal C}(S) \tauo \ensuremath{\mathcal{PML}}\xspace(S)$ given by sending a point in the interior of the simplex $\{v_0,\ldots,v_k\}$ with barycentric coordinates $(s_0,\ldots,s_k)$ to the projective class $[s_0v_0 + \cdots + s_kv_k]$; here we are viewing $s_0v_0 + \cdots + s_kv_k$ as a measured geodesic lamination with support $v_0 \cup \ldots \cup v_k$ and with the transverse counting measure scaled by $s_i$ on the $i^{\rm th}$ component, for each $i$. We denote the image by $\ensuremath{\mathcal{PML}}\xspace_{{\mathcal C}}(S)$, which by construction admits a bijective map $\ensuremath{\mathcal{PML}}\xspace_{\mathcal C}(S) \tauo {\mathcal C}(S)$ (inverse to the inclusion above). By Theorem~\ref{T:Klarreich}, $\partial \mathcal{C}(S) \colon\tauhinspaceng \ensuremath{\mathcal{EL}}\xspace(S)$, and so it is natural to define \[ \ensuremath{\mathcal{PML}}\xspace_{{\betaoldsymbol a}r\mathcal{C}}(S)=\ensuremath{\mathcal{PML}}\xspace_{\mathcal{C}}(S) \cup \ensuremath{\mathcal{PFL}}\xspace(S), \] and we extend the bijection $\ensuremath{\mathcal{PML}}\xspace_{\mathcal C}(S) \tauo {\mathcal C}(S)$ to a surjective map \[ \ensuremath{\mathcal{PML}}\xspace_{\overline\mathcal{C}}(S) \rightarrow \overline \mathcal{C}(S).\] By Proposition~\ref{P:support continuous} and Theorem~\ref{T:Klarreich}, this is continuous at every point of $\ensuremath{\mathcal{PFL}}\xspace(S)$. Similar to the survival map $\tauilde \Phi$ defined in Section~\ref{S:tree map construction}, we can define a map \[ \widetilde \Psi \colon\tauhinspacelon \ensuremath{\mathcal{PML}}\xspace(S) \tauimes \Diff_0(S) \tauo \ensuremath{\mathcal{PML}}\xspace({\sf d}ot S). \] This is defined by exactly the same procedure as in Section~2.4 of \cite{LeinMjSch}, which goes roughly as follows: If $\mu$ is a measured lamination with no closed leaves in its support $|\mu|$, and if $f(z) \not\in |\mu|$, then $\widetilde \Psi(\mu,f) = f^{-1}(\mu)$. When $|\mu|$ contains closed leaves we replace those with the foliated annular neighborhoods of such curves defined in Section~\ref{S:tree map construction}. When $f(z)$ lies on a leaf of $|\mu|$ (or the modified $|\mu|$ when there are closed leaves) we ``split $|\mu|$ apart at $f(z)$'', then take the $f^{-1}$--image. The same proof as that given in \cite[Proposition~2.9]{LeinMjSch} shows that $\widetilde \Psi$ is continuous.
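For concreteness, here is a simple instance of the injection ${\mathcal C}(S) \tauo \ensuremath{\mathcal{PML}}\xspace(S)$ described at the beginning of this section; the particular weights are arbitrary and serve only to illustrate the normalization. The point in the interior of the edge $\{v_0,v_1\}$ with barycentric coordinates $(\frac{1}{3},\frac{2}{3})$ is sent to \[ \left[ \frac{1}{3} v_0 + \frac{2}{3} v_1 \right] \in \ensuremath{\mathcal{PML}}\xspace_{{\mathcal C}}(S), \] the projective class of the measured lamination with support $v_0 \cup v_1$ whose transverse counting measure is scaled by $\frac{1}{3}$ on $v_0$ and by $\frac{2}{3}$ on $v_1$. Since barycentric coordinates are positive and sum to $1$, distinct points of a simplex determine non-proportional measures, and distinct simplices have distinct supports, which is one way to see that this map is injective.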
As in Section~\ref{S:tree map construction} (and in \cite{LeinMjSch}) via the lifted evaluation map $\widetilde{{\rm{ev}}} \colon\tauhinspacelon \Diff_0(S) \tauo \mathbb H$, given by $\widetilde{{\rm{ev}}}(f) = \tauilde f(\tauilde z)$ (for $\tauilde f$ the canonical lift), the map $\widetilde \Psi$ descends to a continuous, $\pi_1S$--equivariant map $\Psi$ making the following diagram commute: \betaegin{center} \betaegin{tikzpicture} \node (E) at (0,0) {$\ensuremath{\mathcal{PML}}\xspace(S)\tauimes \Diff_0(S)$}; \node[below=of E] (B) {$\ensuremath{\mathcal{PML}}\xspace(S)\tauimes \mathbb{H}$}; \node[right=of B] (A) {$\ensuremath{\mathcal{PML}}\xspace({\sf d}ot S)$.}; {\sf d}raw[->] (E)--(A) node [midway, right,above] {$\widetilde\Psi$}; {\sf d}raw[->] (B)--(A) node [midway,above] {$\Psi$}; {\sf d}raw[->] (E)--(B) node [midway,left] {${\rm{id}}_{\ensuremath{\mathcal{PML}}\xspace(S)} \tauimes \widetilde{{\rm{ev}}}$}; \end{tikzpicture} \end{center} By construction, the restriction $\Psi_{\mathcal C} = \Psi|_{\ensuremath{\mathcal{PML}}\xspace_{\mathcal C}(S) \tauimes \mathbb H}$ and $\Phi$ agree after composing with the bijection between $\ensuremath{\mathcal{PML}}\xspace_{\mathcal C}(S)$ and ${\mathcal C}(S)$ in the first factor. Since $\Phi$ maps ${\mathcal C}(S) \tauimes \mathbb H$ onto ${\mathcal C}^s({\sf d}ot S)$, if we define $\ensuremath{\mathcal{PML}}\xspace_{{\mathcal C}^s}({\sf d}ot S)$ to be the image of ${\mathcal C}^s({\sf d}ot S)$ via the map ${\mathcal C}^s({\sf d}ot S) \tauo \ensuremath{\mathcal{PML}}\xspace({\sf d}ot S)$ defined similarly to the one above, then the following diagram of $\pi_1S$--equivariant maps commutes, with the vertical arrows being bijections \betaegin{equation} \langlebel{E:Psi C} \xymatrixcolsep{4pc}\xymatrixrowsep{3pc}\xymatrix{ \ensuremath{\mathcal{PML}}\xspace_{\mathcal C}(S) \tauimes \mathbb H \alphar[r]^{\quad \Psi_{\mathcal C}} \alphar[d] & \ensuremath{\mathcal{PML}}\xspace_{{\mathcal C}^s}({\sf d}ot S) \alphar[d] \\ {\mathcal C}(S) \tauimes \mathbb H \alphar[r]^{\quad \Phi} & {\mathcal C}^s({\sf d}ot S) } \end{equation} Similar to $\ensuremath{\mathcal{PML}}\xspace_{{\betaoldsymbol a}r {\mathcal C}}(S) = \ensuremath{\mathcal{PML}}\xspace_{\mathcal C}(S) \cup \ensuremath{\mathcal{PFL}}\xspace(S)$ above, we define \[ \ensuremath{\mathcal{PML}}\xspace_{{\betaoldsymbol a}r {\mathcal C}^s}({\sf d}ot S) = \ensuremath{\mathcal{PML}}\xspace_{{\mathcal C}^s}({\sf d}ot S) \cup \ensuremath{\mathcal{PFL}}\xspace^s({\sf d}ot S),\] where, recall, $\ensuremath{\mathcal{PFL}}\xspace^s({\sf d}ot S)$ is the space of measured laminations on ${\sf d}ot S$ whose support is contained in $\ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$. Then $\Psi_{\mathcal C}$ extends to a map \[ \Psi_{{\betaoldsymbol a}r {\mathcal C}} \colon\tauhinspacelon \ensuremath{\mathcal{PML}}\xspace_{{\betaoldsymbol a}r {\mathcal C}}(S) \tauimes \mathbb H \tauo \ensuremath{\mathcal{PML}}\xspace_{{\betaoldsymbol a}r {\mathcal C}^s}({\sf d}ot S).\] The fact that $\Psi([\mu],w)$ is in $\ensuremath{\mathcal{PFL}}\xspace^s({\sf d}ot S)$ for any $w \in \mathbb H$ and $[\mu] \in \ensuremath{\mathcal{PFL}}\xspace(S)$ is straightforward from the definition (c.f.~\cite[Proposition~2.12]{LeinMjSch}): for {\em generic} $w$, $\Psi([\mu],w)$ is obtained from $[\mu]$ by adding the $z$--puncture in one of the complementary components of $|\mu|$ and adjusting by a homeomorphism. 
With this, it follows that the map $\Phi$ extends to a map $\hat \Phi$ making the following diagram, extending \eqref{E:Psi C}, commute. \betaegin{equation*} \xymatrixcolsep{4pc}\xymatrixrowsep{3pc}\xymatrix{ \ensuremath{\mathcal{PML}}\xspace_{{\betaoldsymbol a}r {\mathcal C}}(S) \tauimes \mathbb H \alphar[r]^{\quad \Psi_{{\betaoldsymbol a}r{\mathcal C}}} \alphar[d] & \ensuremath{\mathcal{PML}}\xspace_{{\betaoldsymbol a}r {\mathcal C}^s}({\sf d}ot S) \alphar[d] \\ {\betaoldsymbol a}r{\mathcal C}(S) \tauimes \mathbb H \alphar[r]^{\quad \hat \Phi} &{\betaoldsymbol a}r {\mathcal C}^s({\sf d}ot S) } \end{equation*} We will call the map $\hat\Phi:{\betaoldsymbol a}r{{\mathcal C}}(S)\tauimes \mathbb{H} \rightarrow {\overline{\mathcal C^s}({\sf d}ot S)} $ the \tauextit{extended survival map}. The vertical maps in the diagram are the natural maps which take projective measured laminations to their supports; they send $\ensuremath{\mathcal{PFL}}\xspace(S)\tauimes \mathbb{H}$ onto $\ensuremath{\mathcal{EL}}\xspace(S)\tauimes \mathbb{H}$ and $\ensuremath{\mathcal{PFL}}\xspace^s({\sf d}ot S)$ onto $\ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$. \betaegin{lemma} \langlebel{L:continuity of hat Phi} The extended survival map $\hat \Phi$ is $\pi_1S$--equivariant and is continuous at every point of $\partial\mathcal{C}(S)\tauimes \mathbb{H}$. \end{lemma} \betaegin{proof} To prove the continuity statement, we use the homeomorphism $\mathcal F$ from Theorem~\ref{T:boundary ending precise} to identify $\partial {\mathcal C}^s({\sf d}ot S)$ with $\ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$. Now suppose $\{\mathcal{L}_n\}\subset \overline\mathcal{C}(S)$ and $\mathcal{L}\in \partial \mathcal{C}(S)$ with $\mathcal{L}_n \rightarrow \mathcal{L}$, and let $\{x_n\}$ be a sequence in $\mathbb{H}$ such that $x_n \rightarrow x$. Passing to a subsequence, there is a measure $\mu_n$ on $\mathcal{L}_n$ and a measure $\mu$ on $\mathcal{L}$ such that $\mu_n \rightarrow \mu$ in $\ensuremath{\mathcal{ML}}\xspace(S)$. Since $\Psi$ is continuous on $\ensuremath{\mathcal{PFL}}\xspace(S)\tauimes \mathbb{H}$, \[ \Psi([\mu_n], x_n) \rightarrow \Psi([\mu], x). \] By Proposition~\ref{P:support continuous} this implies \[| \Psi([\mu_n], x_n)| \mathbin{\mid}ackrel{CH}{\longrightarrow} |\Psi([\mu], x)|. \] On the other hand, $\Psi(\ensuremath{\mathcal{FL}}\xspace(S)\tauimes \mathbb{H})\subset \ensuremath{\mathcal{FL}}\xspace^s({\sf d}ot S)$, and by Theorem \ref{T:boundary ending precise} this means \[ \hat \Phi(\mathcal{L}_n,x_n) \tauo \hat \Phi(\mathcal{L},x) \] in ${\betaoldsymbol a}r {\mathcal C}^s({\sf d}ot S)$, since $|\Psi([\mu_n], x_n)|= \hat \Phi(\mathcal{L}_n, x_n )$ and $|\Psi([\mu], x)|=\hat\Phi (\mathcal{L},x)$. The $\pi_1S$--equivariance follows from that of $\Phi$ on ${\mathcal C}^s({\sf d}ot S)$ and continuity at the remaining points. \end{proof} The following useful fact and its proof are identical to the statement and proof of \cite[Lemma~2.14]{LeinMjSch}. \betaegin{lemma} \langlebel{L:points identified by hat phi} Fix $(\mathcal{L}_1,x_1),(\mathcal{L}_2,x_2) \in \ensuremath{\mathcal{EL}}\xspace(S) \tauimes \mathbb{H}$. Then $\hat \Phi(\mathcal{L}_1,x_1) = \hat \Phi(\mathcal{L}_2,x_2)$ if and only if $\mathcal{L}_1 =\mathcal{L}_2$ and $x_1,x_2$ are on the same leaf of, or in the same complementary region of, $p^{-1}(\mathcal{L}_1) \subset \mathbb{H}$.
\end{lemma} Suppose that $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$, $x \in \mathcal{P} \subset \partial \mathbb H$ is a parabolic fixed point, $H_x \subset \mathbb H$ is the horoball based at $x$ as in Section~\ref{S:cusps and witnesses}, and $U \subset \mathbb H$ is the complementary region of $p^{-1}(\mathcal{L})$ containing $H_x$. Given $y \in U$, choose any $f \in \Diff_0(S)$ so that $\tauilde f(\tauilde z) = y$, so that $\hat \Phi(\mathcal{L},y) = f^{-1}(\mathcal{L})$. Observe that $p(U)$ is a complementary region of $\mathcal{L}$ containing a puncture (corresponding to $x$), and hence $\hat \Phi(\mathcal{L},y)$ is a lamination with two punctures in the complementary component $f^{-1}(p(U))$ (one of which is the $z$-puncture). Therefore, $\hat \Phi(\mathcal{L},y)$ is an ending lamination in a proper witness. More precisely, by Lemma~\ref{L:points identified by hat phi}, we may assume $y \in H_x$ without changing the image $\hat \Phi(\mathcal{L},y)$, and then as in the proof of Lemma~\ref{L:W parabolics}, $f^{-1}(\partial p(H_x))$ is the boundary of the witness $\mathcal{W}(x)$ which is disjoint from $\hat \Phi(\mathcal{L},y)$. Thus, $\hat \Phi(\mathcal{L},y) \in \ensuremath{\mathcal{EL}}\xspace(\mathcal{W}(x))$. In fact, every ending lamination on a proper witness arises as such an image, as the next lemma shows. \betaegin{lemma} \langlebel{L:where the witness ELs come from} Suppose $\mathcal{L}_0 \in \ensuremath{\mathcal{EL}}\xspace(W)$ is an ending lamination in a proper witness $W \subsetneq {\sf d}ot S$. Then there exists $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$, $x \in \mathcal{P}$, a complementary region $U$ of $p^{-1}(\mathcal{L})$ containing $H_x$, and $y \in H_x$ so that $\mathcal{W}(x)=W$ and $\hat \Phi(\mathcal{L},y) = \mathcal{L}_0$. \end{lemma} \betaegin{proof} Note that the inclusion of $W \subset {\sf d}ot S$ is homotopic through embeddings to a diffeomorphism, after filling in $z$ (since after filling in $z$, $\partial W$ is peripheral). Consequently, after filling in $z$, $\mathcal{L}_0$ is isotopic to a geodesic ending lamination $\mathcal{L}$ on $S$. Let $f \colon\tauhinspacelon S \tauo S$ be a diffeomorphism isotopic to the identity with $f(\mathcal{L}_0) = \mathcal{L}$. Then $\mathcal{L}_0 = f^{-1}(\mathcal{L}) = \widetilde \Psi([\mu],f)$ where $[\mu]$ is the projective class of any transverse measure on $\mathcal{L}$. Next, observe that $f(z)$ lies in a complementary region $V$ of $\mathcal{L}$ which is a punctured polygon (since $\partial W$ is a simple closed curve disjoint from $\mathcal{L}_0$ bounding a twice punctured disk including the $z$-puncture). Let $U \subset \mathbb H$ be the complementary region of $p^{-1}(\mathcal{L})$ that projects to $V$. Then $U$ is an infinite-sided polygon invariant under a parabolic subgroup fixing some $x \in \mathcal{P}$. Now let $\tauilde f \colon\tauhinspacelon \mathbb H \tauo \mathbb H$ be the canonical lift as in Section~\ref{S:tree map construction} and let $y' = \tauilde f(\tauilde z)$, so that by definition $\widetilde \Psi([\mu],f) = \Psi([\mu],y') = \hat \Phi(\mathcal{L},y')$. By Lemma~\ref{L:points identified by hat phi}, for any $y \in H_x \subset U$, it follows that $\hat \Phi(\mathcal{L},y) = \hat \Phi(\mathcal{L},y') = \mathcal{L}_0$. From the remarks preceding this lemma, it follows that $\mathcal{L}_0 \in \ensuremath{\mathcal{EL}}\xspace(\mathcal{W}(x))$.
Since $\ensuremath{\mathcal{EL}}\xspace(W) \cap \ensuremath{\mathcal{EL}}\xspace(W') = \emptyset$, unless $W = W'$, it follows that $\mathcal{W}(x) = W$, completing the proof. \end{proof} \section{Universal Cannon--Thurston maps} \langlebel{S:UCT maps} In this section we will prove the following. \betaigskip \noindent {\betaf Theorem~\ref{CT}}{\em {\mathcal C}Tstatement} \betaigskip Before proceeding, we describe the subset $\BS^1_{\hspace{-.1cm}\calA} \subset \partial \mathbb H^2$ in Theorem~\ref{CT}. \betaegin{definition} Let $Y \subseteq {\sf d}ot S$ be a subsurface. A point $x\in \partial\mathbb{H}$ \tauextit{fills} $Y$ if: \betaegin{itemize} \item The image of every geodesic ending in $x$ projected to ${\sf d}ot S$ intersects every curve which projects to $Y$, and \item There is a geodesic ray $r\subset \mathbb{H}$ ending at $x$ with $p(r)\subset Y$. \end{itemize} Now let $\BS^1_{\hspace{-.1cm}\calA}\subset \partial \mathbb{H}$ be the set of points that fill ${\sf d}ot S$. \end{definition} We note that when $x \not \in \BS^1_{\hspace{-.1cm}\calA}$, there is a ray $r$ ending at $x$ so that $p(r)$ is contained in a proper subsurface $Y \subsetneq {\sf d}ot S$. The boundary of this subsurface is an essential curve $v$ and $\Phi_v(r) \subset T_v$ is a bounded diameter set. Thus, restricting to the set $\BS^1_{\hspace{-.1cm}\calA}$ is necessary (cf.~\cite[Lemma~3.4]{LeinMjSch}). Given the modifications to the setup, the existence of the extension of Theorem~\ref{CT} follows just as in the case that $S$ is closed in \cite{LeinMjSch}; this is outlined in Section~\ref{S:existence}. The surjectivity requires more substantial modification, however, and this is carried out in Section~\ref{S:surjectivity}. The proof of the universal property of $\partial \Phi$, as well as the discussion of $\partial \Phi_0 \colon\tauhinspacelon \partial {\mathcal C}(S) \tauo \partial {\mathcal C}({\sf d}ot S)$, Theorem~\ref{T:UCT C short}, and the relationship to Theorem~\ref{CT}, are carried out in Section~\ref{S:Universal}. \subsection{Quasiconvex nesting and existence of Cannon--Thurston maps} \langlebel{S:existence} In this section we will prove the existence part of Theorem \ref{CT}. \betaegin{theorem}\langlebel{CTex} For any vertex $v\in \mathcal{C}(S)$, the induced survival map $\Phi_v \colon\tauhinspacelon \mathbb{H} \rightarrow {\mathcal C^s({\sf d}ot S)}$ has a continuous, $\pi_1(S)$--equivariant extension to \[\overline\Phi_v: \mathbb{H}\cup \BS^1_{\hspace{-.1cm}\calA}\rightarrow \overline{\mathcal C}^s({\sf d}ot S).\] Moreover, the restriction $\partial \Phi_v = \overline{\Phi}_v|_{\BS^1_{\hspace{-.1cm}\calA}} \colon\tauhinspacelon \BS^1_{\hspace{-.1cm}\calA} \tauo \partial {\mathcal C}^s({\sf d}ot S)$ does not depend on the choice of $v$. \end{theorem} By the last claim, we may denote the restriction as $\partial \Phi \colon\tauhinspacelon \BS^1_{\hspace{-.1cm}\calA} \tauo \partial {\mathcal C}^s({\sf d}ot S)$, without reference to the choice of $v$. To prove this theorem, we will use the following from \cite[Lemma~1.9]{LeinMjSch}, which is a mild generalization of a lemma of Mitra in \cite{Mitra1}. \betaegin{lemma} \langlebel{mitra} Let $X$ and $Y$ be two hyperbolic metric spaces, and $F: X \rightarrow Y$ a continuous map. Fix a basepoint $y\in Y$ and a subset $A \subset \partial X$.
Then there is an $A$--Cannon--Thurston map \[\overline F: X\cup A \rightarrow Y\cup \partial Y\] if and only if for all $s\in A$ there is a neighborhood basis $\mathcal{B}_i\subset X \cup A$ of $s$ and a collection of uniformly quasiconvex sets $Q_i\subset Y$ such that: \betaegin{itemize} \item $F(\mathcal{B}_i\cap X )\subset Q_i$, and \item $d_Y(y, Q_i)\rightarrow \infty$ as $i\rightarrow \infty$. \end{itemize} Moreover, \[\betaigcap_{i}\overline Q_i=\betaigcap_{i}\partial Q_i =\{\overline F(s)\}\] determines $\overline F(s)$ uniquely, where $\partial Q_i = {\betaoldsymbol a}r Q_i \cap \partial Y$. \end{lemma} \noindent Given the adjustments already made to our setup, the proof of Theorem \ref{CTex} is now nearly identical to \cite[Theorem~3.6]{LeinMjSch}, so we just recall the main ingredients, and explain the modifications necessary in our setting.\\ \noindent We fix a bi-infinite geodesic ${{\gothic o}thic a}mma$ in $\mathbb{H}$ so that $p({{\gothic o}thic a}mma)$ is a closed geodesic that fills $S$ (i.e.~nontrivially intersects every essential simple closed curve or arc on $S$). As in \cite{LeinMjSch}, we construct quasi-convex sets from such ${{\gothic o}thic a}mma$ as follows. Define \[\mathcal{X}({{\gothic o}thic a}mma)=\Phi({\mathcal C}(S)\tauimes {{\gothic o}thic a}mma)\] where $\Phi$ is the survival map. Let $\mathcal{H}^{\pm}({{\gothic o}thic a}mma)$ denote the two half spaces bounded by ${{\gothic o}thic a}mma$ and define the sets \[ \mathscr{H}^{\pm}({{\gothic o}thic a}mma)= \Phi({\mathcal C}(S)\tauimes \mathcal{H}^{\pm}({{\gothic o}thic a}mma)). \] \noindent The proofs of the following two facts about these sets are identical to the quoted results in \cite{LeinMjSch}. \betaegin{itemize} \item \cite[Proposition~3.1]{LeinMjSch}: $\mathcal{X}({{\gothic o}thic a}mma)$, $ \mathscr{H}^{\pm}({{\gothic o}thic a}mma)$ are simplicial subcomplexes of ${\mathcal C^s({\sf d}ot S)}$ spanned by their vertex sets and are weakly convex (meaning every two points in the set are joined by {\em some} geodesic contained in the set). \item \cite[Proposition~3.2]{LeinMjSch}: We have \[ \mathscr{H}^{+}({{\gothic o}thic a}mma)\cup \mathscr{H}^{-}({{\gothic o}thic a}mma)= {\mathcal C^s({\sf d}ot S)}\] and \[ {\mathscr{H}}^{+}({{\gothic o}thic a}mma)\cap \mathscr{H}^{-}({{\gothic o}thic a}mma) = \mathcal{X}({{\gothic o}thic a}mma).\] \end{itemize} Now we consider a set $\{{{\gothic o}thic a}mma_n\}$ of pairwise disjoint translates of ${{\gothic o}thic a}mma$ in $\mathbb{H}$ so that the corresponding (closed) half spaces nest: \[ \mathcal{H}^{+}({{\gothic o}thic a}mma_1) \supset\mathcal{H}^{+}({{\gothic o}thic a}mma_2)\supset\cdots\] \noindent Since the action is properly discontinuous on $\mathbb{H}$, there is an $x\in \partial \mathbb{H}$ such that \betaegin{equation}\langlebel{eq:1} \betaigcap^{\infty}_{n=1}\overline{\mathcal{H}^{+}({{\gothic o}thic a}mma_n)}=\{x\}. \end{equation} Here, $\overline{\mathcal{H}^+({{\gothic o}thic a}mma_n)}$ is the closure in $\overline{\mathbb H}$. For such a sequence, we say {\em $\{{{\gothic o}thic a}mma_n\}$ nests down on $x$}. On the other hand, if $r \subset \mathbb H$ is a geodesic ray ending in some point $x\in \partial \mathbb H$ which is {\em not} a parabolic fixed point, then $p(r)$ intersects $p({{\gothic o}thic a}mma)$ infinitely many times. Hence, we can find a sequence $\{{{\gothic o}thic a}mma_n\}$ which nests down on $x$. In particular, any element $x \in \BS^1_{\hspace{-.1cm}\calA}$ has a sequence $\{{{\gothic o}thic a}mma_k\}$ that nests down on $x$.
The main ingredient in the proof of existence of the extension is the following. \betaegin{proposition}\langlebel{nestinglimit} If $\{{{\gothic o}thic a}mma_n\}$ nests down on $x\in \BS^1_{\hspace{-.1cm}\calA}$, then for a basepoint $b\in {\mathcal C^s({\sf d}ot S)}$, the sets $\mathscr{H}^{+}({{\gothic o}thic a}mma_n)$ are quasiconvex and we have \[d^s(b, \mathscr{H}^{+}({{\gothic o}thic a}mma_n))\rightarrow \infty \,\,\tauext{as}\,\, n\rightarrow \infty.\] \end{proposition} The proof is nearly identical to that of \cite[Proposition~3.5]{LeinMjSch}, but since it's the key to the proof of existence, we sketch it for completeness. \betaegin{proof}[Sketch of proof] Because of the nesting in $\mathbb{H}$, we have nesting in ${\mathcal C}^s({\sf d}ot S)$, \[ \mathscr{H}^+({{\gothic o}thic a}mma_1) \supset \mathscr{H}^+({{\gothic o}thic a}mma_2) \supset \cdots . \] We must show that for any $R>0$, there exists $N >0$ so that $d^s(b,\mathscr{H}^+({{\gothic o}thic a}mma_n)) \geq R$, for all $n \geq N$. The first observation is that because $\Pi \colon\tauhinspacelon {\mathcal C}^s({\sf d}ot S) \tauo {\mathcal C}(S)$ is simplicial (hence $1$--Lipschitz), it suffices to find $N >0$ so that $d^s(b,\mathscr{H}^+({{\gothic o}thic a}mma_n) \cap \Pi^{-1}(B_R(\Pi(b)))) \geq R$ for all $n \geq N$. To prove this, one can use an inductive argument to construct an increasing sequence $N_1 < N_2 < \ldots <N_{R+1}$ so that \[ \mathcal{X}({{\gothic o}thic a}mma_{N_j}) \cap \Pi^{-1}(B_R(\Pi(b))) \cap \mathcal{X}({{\gothic o}thic a}mma_{N_{j+1}}) = \emptyset.\] Before explaining the idea, we note that this implies that $\{\mathscr{H}^+({{\gothic o}thic a}mma_{N_j}) \cap \Pi^{-1}(B_R(\Pi(b)))\}_{j=1}^{R+1}$ are {\em properly} nested: a path from $b$ to $\mathscr{H}^+({{\gothic o}thic a}mma_{N_{R+1}})$ inside $\Pi^{-1}(B_R(\Pi(b)))$ must pass through a vertex of $\mathscr{H}^+({{\gothic o}thic a}mma_{N_j})$, for each $j$, before entering the next set. Therefore, it must contain at least $R+1$ vertices, and so have length at least $R$. This completes the proof by taking $N = N_{R+1}$, since then a geodesic from $b$ to a point of $\mathscr{H}^+({{\gothic o}thic a}mma_{N_{R+1}})$ will have length at least $R$ (if it leaves $\Pi^{-1}(B_R(\Pi(b)))$, then its length is greater than $R$). The main idea in finding the sequence $N_1 < N_2 < \ldots < N_{R+1}$ is the inductive step. If we have already found $N_1 < N_2 < \ldots < N_{k-1}$, and we want to find $N_k$, we suppose there is no such $N_k$, and derive a contradiction. For this, assume \[ \mathcal{X}({{\gothic o}thic a}mma_{N_{k-1}}) \cap \mathcal{X}({{\gothic o}thic a}mma_n) \cap \Pi^{-1}(B_R(\Pi(b))) \neq \emptyset,\] for all $n > N_{k-1}$, and let $u_n$ be a vertex in this intersection. Set $v_n = \Pi(u_n)$, and recall that $\Phi_{v_n}^{-1}(u_n) = U_n \subset \mathbb{H}$ is a component of the complement of a small neighborhood of the preimage in $\mathbb{H}$ of the geodesic representative of $v_n$ in $S$. The fact that $u_n \in \mathcal{X}({{\gothic o}thic a}mma_{N_{k-1}}) \cap \mathcal{X}({{\gothic o}thic a}mma_n)$ translates into the fact that ${{\gothic o}thic a}mma_{N_{k-1}} \cap U_n \neq \emptyset$ and ${{\gothic o}thic a}mma_n \cap U_n \neq \emptyset$. After passing to subsequences and extracting a limit, we find a geodesic from a point on ${{\gothic o}thic a}mma_{N_{k-1}}$ (or one of its endpoints in $\partial \mathbb{H}$) to $x$, which projects to have empty transverse intersection with $v_n$ in $S$.
Since $v_n$ is contained in the bounded set $B_R(\Pi(b))$, any subsequential Hausdorff limit does not contain an ending lamination on $S$, by Theorem~\ref{T:Klarreich}, and so any ray with no transverse intersections is eventually trapped in a subsurface (a component of the minimal subsurface of the maximal measurable sublamination of the Hausdorff limit). This contradicts the fact that $x \in \BS^1_{\hspace{-.1cm}\calA}$, and completes the sketch of the proof. \end{proof} We are now ready for the proof of the existence part of Theorem~\ref{CT}. \betaegin{proof}[Proof of Theorem \ref{CTex}] The existence and continuity of $\overline\Phi_v$ follow by verifying the hypotheses in Lemma \ref{mitra}. Fix a basepoint $b\in {\mathcal C^s({\sf d}ot S)}$ and let $\{{{\gothic o}thic a}mma_n\}$ be a sequence nesting to a point $x\in \BS^1_{\hspace{-.1cm}\calA}$. The collection of sets \[ \{ \overline{\mathcal{H}^{+}({{\gothic o}thic a}mma_n)} \cap (\mathbb H \cup \BS^1_{\hspace{-.1cm}\calA})\}_{n=1}^\infty \] is a neighborhood basis of $x$ in $\mathbb H \cup \BS^1_{\hspace{-.1cm}\calA}$. By definition of $\mathscr H^+({{\gothic o}thic a}mma_n)$, \[\Phi_v(\mathcal{H}^{+}({{\gothic o}thic a}mma_n))=\Phi (\{v\}\tauimes \mathcal{H}^{+}({{\gothic o}thic a}mma_n)) \subset \mathscr{H}^{+}({{\gothic o}thic a}mma_n),\] for all $n$. By Proposition \ref{nestinglimit}, $d^s(b, \mathscr{H}^{+}({{\gothic o}thic a}mma_n))\rightarrow \infty \,\,\tauext{as}\,\, n\rightarrow \infty$. Therefore, by Lemma \ref{mitra} we have a $\BS^1_{\hspace{-.1cm}\calA}$-Cannon--Thurston map $\overline\Phi_v$ defined at $x \in \BS^1_{\hspace{-.1cm}\calA}$ by \[ \{\overline\Phi_v(x)\} = \betaigcap_{n=1}^\infty \overline{\mathscr{H}^+({{\gothic o}thic a}mma_n)}.\] Since the sets on the right-hand side do not depend on the choice of $v$, and since $x \in \BS^1_{\hspace{-.1cm}\calA}$, we also write $\partial \Phi(x) = \overline\Phi_v(x)$, and note that $\partial \Phi \colon\tauhinspacelon \BS^1_{\hspace{-.1cm}\calA} \tauo \partial {\mathcal C}^s({\sf d}ot S)$ does not depend on $v$. \end{proof} Observe that for all $x \in \BS^1_{\hspace{-.1cm}\calA}$, we have \betaegin{equation} \langlebel{E:what is partial Phi} \partial \Phi (x) = \betaigcap_{n=1}^\infty \partial \mathscr{H}^+({{\gothic o}thic a}mma_n) \end{equation} where $\{{{\gothic o}thic a}mma_n\}$ is any sequence nesting down on $x$, because the intersection of the closures is in fact the intersection of the boundaries. \subsection{Surjectivity of the Cannon-Thurston map} \langlebel{S:surjectivity} We start with the following lemma. \betaegin{lemma}\langlebel{surj1}For any $v\in \mathcal{C}^0(S)$ we have \[\partial{\mathcal C^s({\sf d}ot S)} \subset\overline {\Phi_v(\mathbb{H})}\] \end{lemma} The analogous statement for $S$ closed is \cite[Lemma~3.12]{LeinMjSch}, but the proof there does not work in our setting. Specifically, the proof in \cite{LeinMjSch} appeals to Klarreich's theorem about the map from Teichm\"uller space to the curve complex, and its extension to the boundary; see \cite{Klarreich}. In our situation, the analogue would be a map from Teichm\"uller space to ${\mathcal C}^s({\sf d}ot S)$, to which Klarreich's result does not apply. \betaegin{proof} We first claim that if $X\subset \partial{\mathcal C^s({\sf d}ot S)}$ is closed and $\PMod({\sf d}ot S)$--invariant, then either $X=\emptyset$ or $X=\partial{\mathcal C^s({\sf d}ot S)}$. 
This is true since the set ${\rm{PA}}$ of fixed points of pseudo-Anosov elements of $\PMod({\sf d}ot S)$ is dense in $\ensuremath{\mathcal{EL}}\xspace({\sf d}ot{S})$ and $\ensuremath{\mathcal{EL}}\xspace({\sf d}ot{S})$ is dense in $\ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$. As a result, ${\rm{PA}}$ is dense in $\partial{\mathcal C^s({\sf d}ot S)}$. Since any nonempty, closed, $\PMod({\sf d}ot S)$--invariant subset of $\partial {\mathcal C^s({\sf d}ot S)}$ has to include ${\rm{PA}}$, the claim is true. Now we will show that $\partial{\mathcal C^s({\sf d}ot S)} \cap \overline{\Phi_v(\mathbb{H})}$ contains a $\PMod({\sf d}ot S)$--invariant set. For this, first let ${\rm{PA}}_0 \subset {\rm{PA}}$ be the set of pseudo-Anosov fixed points for elements in $\pi_1S < \PMod({\sf d}ot S)$. Since the $\pi_1(S)$ action leaves $\Phi_v(\mathbb H)$ invariant, and since pseudo-Anosov elements act with north-south dynamics on $\overline{\mathcal C}^s({\sf d}ot S)$, it follows that ${\rm{PA}}_0 \subset \overline{\Phi_v(\mathbb H)}$. Next, we need to show that $f({\rm{PA}}_0)={\rm{PA}}_0$ for $f\in \PMod({\sf d}ot S)$. For any point $x \in {\rm{PA}}_0$, let ${{\gothic o}thic a}mma \in \pi_1(S)$ be a pseudo-Anosov element with ${{\gothic o}thic a}mma(x) = x$. Then $f {{\gothic o}thic a}mma f^{-1}$ fixes $f(x)$, but $f {{\gothic o}thic a}mma f^{-1}$ is also a pseudo-Anosov element of $\pi_1(S)$, since $\pi_1(S)$ is a normal subgroup of $\PMod({\sf d}ot S)$. So, $f({\rm{PA}}_0) \subset {\rm{PA}}_0$, since $x \in {\rm{PA}}_0$ was arbitrary. Applying the same argument to $f^{-1}$, we find $f({\rm{PA}}_0) = {\rm{PA}}_0$. Since $f \in \PMod({\sf d}ot S)$ was arbitrary, ${\rm{PA}}_0$ is $\PMod({\sf d}ot S)$--invariant. Therefore, $\overline {\rm{PA}}_0$ is a nonempty closed $\PMod({\sf d}ot S)$--invariant subset of $\partial {\mathcal C^s({\sf d}ot S)} \cap \overline{\Phi_v(\mathbb H)}$, and so both of these sets equal $\partial {\mathcal C^s({\sf d}ot S)}$. \end{proof} To prove the surjectivity, we will need the following proposition. The exact analogue for $S$ closed is much simpler, but is false in our case (as the second condition suggests); see \cite[Proposition~3.13]{LeinMjSch}. To state the proposition, recall that $\mathcal P \subset \partial\mathbb{H}$ denotes the set of parabolic fixed points; see Section~\ref{S:cusps and witnesses}. \betaegin{proposition}\langlebel{surj2} If $\{x_n\}$ is a sequence of points in $\mathbb{H}$ with limit $x\in \partial\mathbb{H} {\smallsetminus} \BS^1_{\hspace{-.1cm}\calA}$, then one of the following holds: \betaegin{enumerate} \item $\Phi_v(x_n)$ does not converge to a point of $\partial{\mathcal C^s({\sf d}ot S)} $; or \item $x \in \mathcal P$ and $\Phi_v(x_n)$ accumulates only on points in $\partial {\mathcal C}(\mathcal W(x))$. \end{enumerate} \end{proposition} To prove this, we will need the following lemma. For the remainder of this paper, we identify the points of $\partial {\mathcal C}^s({\sf d}ot S)$ with $\ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$ via Theorem~\ref{T:boundary ending precise}. \betaegin{lemma} \langlebel{L:a new annulus} Suppose $x \in \mathcal P$ and $\{x_n\} \subset \mathbb H$ with $x_n \tauo x$. If $\Phi_v(x_n) \tauo \mathcal{L}$ in $\partial {\mathcal C^s({\sf d}ot S)}$, then $\mathcal{L} \in \partial {\mathcal C}(\mathcal W(x))$. \end{lemma} \betaegin{proof} We suppose $\Phi_v(x_n) \tauo \mathcal{L}$ in $\partial {\mathcal C^s({\sf d}ot S)}$. 
Let $H = H_x \subset \mathbb H$ be the horoball based at $x$ disjoint from all chosen neighborhoods of geodesics used to define $\Phi$ as in Sections~\ref{S:tree map construction} and \ref{S:cusps and witnesses}. Applying an isometry if necessary, we can assume that $x = \infty$ in the upper-half plane model and $H = \{z \in \mathbb C \mid {\rm{Im}}(z) \geq 1 \}$ is stabilized by the cyclic, parabolic group $\langlengle g \ranglengle < \pi_1(S,z)$. By Lemma~\ref{L:W parabolics}, the $\Phi_v$--image of $H$ is a single point $\Phi_v(H)= \{u\}$. Note that if ${\rm{Im}}(x_n) > \epsilonsilon > 0$ for some $\epsilonsilon > 0$, then $\Phi_v(x_n)$ remains a bounded distance from $u$, and hence does not converge to any $\mathcal{L} \in \partial {\mathcal C^s({\sf d}ot S)}$. Therefore, it must be that ${\rm{Im}}(x_n) \tauo 0$ and consequently ${\rm{Re}}(x_n) \tauo \pm \infty$. We may pass to a subsequence so that the hyperbolic geodesics $[x_n,x_{n+1}]$ nontrivially intersect $H$, and from this find a sequence of points $y_n \in H$ and curves $v_n \in {\mathcal C}(S)$ so that $u_n = \Phi(v_n,y_n) \tauo \mathcal{L}$ as $n \tauo \infty$ (cf.~the proof of \cite[Proposition~3.11]{LeinMjSch}). According to Lemma~\ref{L:W parabolics}, $\Phi({\mathcal C}(S) \tauimes H) = {\mathcal C}(\mathcal{W}(x))$, and so $\Phi(v_n,y_n) \in {\mathcal C}(\mathcal{W}(x))$. Consequently, $\mathcal{L} \in \partial {\mathcal C}(\mathcal{W}(x))$, as required. \end{proof} \betaegin{proof}[Proof of Proposition~\ref{surj2}] We suppose $\Phi_v(x_n) \tauo \mathcal{L} \in \partial {\mathcal C^s({\sf d}ot S)}$ and argue as in \cite{LeinMjSch}. Specifically, the assumption that $x \in \partial \mathbb{H} {\smallsetminus} \BS^1_{\hspace{-.1cm}\calA}$ means that a ray $r$ ending at $x$, after projecting to $S$, is eventually trapped in some proper, $\pi_1$--injective subsurface $Y \subset S$, and fills $Y$ if $Y$ is not an annulus. If $Y$ is not an annular neighborhood of a puncture, then we arrive at the same contradiction from \cite[Proposition~3.13]{LeinMjSch}. On the other hand, if $Y$ is an annular neighborhood of a puncture, then by Lemma~\ref{L:a new annulus}, $\mathcal{L} \in \partial {\mathcal C}(\mathcal W(x))$, as required. \end{proof} \betaegin{theorem} The Cannon--Thurston map \[ \partial \Phi: \BS^1_{\hspace{-.1cm}\calA} \rightarrow \partial {\mathcal C^s({\sf d}ot S)}\] is surjective and $\PMod({\sf d}ot S)$--equivariant. \end{theorem} \betaegin{proof} Let $\mathcal{L} \in \partial {\mathcal C^s({\sf d}ot S)}$. Then, by Lemma \ref{surj1}, $\mathcal{L}=\lim \Phi_v(x_n)$ for some sequence $\{x_n\}\subset \mathbb{H}$. Passing to a subsequence, assume that $x_n \rightarrow x$ in $\partial\mathbb{H}$. If $x\in\BS^1_{\hspace{-.1cm}\calA}$, we are done since, by continuity at every point of $\BS^1_{\hspace{-.1cm}\calA}$, we have \[\mathcal{L}=\lim \Phi_v(x_n)=\overline\Phi_v(x)=\partial\Phi(x). \] If $x\notin \BS^1_{\hspace{-.1cm}\calA}$, then by Proposition~\ref{surj2}, $x\in \mathcal{P}$ and $\mathcal{L} \in \partial {\mathcal C}(W)$, where $W = \mathcal{W}(x)$. By Lemma~\ref{L:proj and conv2}, $\pi_W(\Phi_v(x_n)) \tauo \mathcal{L} \in \partial {\mathcal C}(W)$. Let $g\in \pi_1S$ be a generator of ${\rm{Stab}}_{\pi_1S}(x)$. As in the proof of Lemma~\ref{L:a new annulus}, the sequence $\{x_n\}$ eventually leaves every horoball based at $x$, and hence it must be that there exists a sequence $\{k_n\}$ such that $g^{k_n}(x_n)\rightarrow \xi$ where $\xi\in \partial\mathbb H$ is some point such that $\xi\neq x$. 
Since $g$ is a Dehn twist in $\partial W$, it does not affect $\pi_W(\Phi_v(x_n))$. Thus $\pi_W(\Phi_v(g^{k_n}(x_n))) \tauo \mathcal{L}$ and hence $\Phi_v(g^{k_n}(x_n)) \tauo \mathcal{L}$ by another application of Lemma~\ref{L:proj and conv2}. But in this case, since $\xi\neq x$, $\xi$ satisfies neither of the possibilities given in Proposition~\ref{surj2}, and hence $\xi\in \BS^1_{\hspace{-.1cm}\calA}$. But this implies \[ \mathcal{L} = \lim_{n \tauo \infty} \Phi_v(g^{k_n}(x_n)) = \overline{\Phi}_v(\xi),\] again appealing to the continuity of $\overline{\Phi}_v$ at points of $\BS^1_{\hspace{-.1cm}\calA}$; hence $\mathcal{L}=\partial\Phi(\xi)$ lies in the image of $\partial\Phi$. The proof of $\PMod({\sf d}ot S)$--equivariance is identical to the proof of \cite[Theorem~1.2]{LeinMjSch}. The idea is to use $\pi_1S$--equivariance, and prove $\partial \Phi(\phi \cdot x) = \phi \cdot \partial \Phi(x)$ for $\phi \in \PMod({\sf d}ot S)$ and $x$ in the dense subset of $\BS^1_{\hspace{-.1cm}\calA}$ consisting of attracting fixed points of elements ${\sf d}elta \in \pi_1S$ whose axes project to filling closed geodesics on $S$. The point is that such points $x$ are attracting fixed points in $\partial\mathbb{H}$ of ${\sf d}elta$, but their images are also attracting fixed points in $\partial {\mathcal C}^s({\sf d}ot S)$ since ${\sf d}elta$ is pseudo-Anosov by Kra's Theorem \cite{Kra}, when viewed as an element of $\PMod({\sf d}ot S)$. The fact that $\phi(x)$ and $\phi(\partial \Phi(x))$ are the attracting fixed points of $\phi {\sf d}elta \phi^{-1}$ in $\partial \mathbb{H}$ and $\partial {\mathcal C}^s({\sf d}ot S)$, respectively, finishes the proof. \end{proof} \subsection{Universality and the curve complex} \langlebel{S:Universal} The following theorem on universality is an analogue of \cite[Corollary~3.10]{LeinMjSch}. While the statement is similar, it should be noted that in \cite{LeinMjSch}, the map is finite-to-one, though this is not the case here since some of the complementary regions of the preimage in $\mathbb{H}$ of laminations in $S$ are infinite-sided ideal polygons whose sides accumulate to a parabolic fixed point. We follow \cite{LeinMjSch} where possible, and describe the differences when necessary. \betaegin{CTUtheorem} \langlebel{T:CU theorem} Given two distinct points $x,y\in \BS^1_{\hspace{-.1cm}\calA}$, $\partial\Phi(x) = \partial \Phi(y)$ if and only if $x$ and $y$ are the ideal endpoints of a leaf or complementary region of $p^{-1}(\mathcal{L})$ for some $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$. \end{CTUtheorem} The proof will require a few additional facts. The first is the analogue of \cite[Proposition~3.8]{LeinMjSch} which states that the intersections at infinity of the images of the half-spaces satisfy \betaegin{equation} \langlebel{E:boundary intersections 1} \partial \mathscr{H}^+({{\gothic o}thic a}mma) \cap \partial \mathscr{H}^-({{\gothic o}thic a}mma) = \partial \mathcal{X} ({{\gothic o}thic a}mma) \end{equation} where, as above, ${{\gothic o}thic a}mma$ is a geodesic that projects to a closed, filling geodesic in $S$. The next is the analogue in our setting of \cite[Lemma~3.9]{LeinMjSch}. To describe this, recall that the element ${\sf d}elta \in\pi_1S$ stabilizing ${{\gothic o}thic a}mma$ is a pseudo-Anosov mapping class when viewed in $\PMod({\sf d}ot S)$ by a theorem of Kra \cite{Kra}. Let $\pm \mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace({\sf d}ot S) \subset \partial {\mathcal C}^s({\sf d}ot S)$ be the attracting and repelling fixed points (i.e. the stable/unstable laminations). 
Then we have \betaegin{equation} \langlebel{E:boundary intersections 2} \partial \mathcal{X}({{\gothic o}thic a}mma) = \hat \Phi(\partial {\mathcal C}(S) \tauimes {{\gothic o}thic a}mma) \cup \{ \pm \mathcal{L} \} \end{equation} The proofs of these facts are identical to those in \cite{LeinMjSch}, and we do not repeat them. \betaegin{proof}[Proof of Theorem~\ref{CTU}] Given $x,y \in \BS^1_{\hspace{-.1cm}\calA}$, first suppose that there is an ending lamination $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$ and $E \subset \mathbb{H}$ which is either a leaf or complementary region of $p^{-1}(\mathcal{L})$, so that $x$ and $y$ are ideal vertices of $E$. Let $\{{{\gothic o}thic a}mma^x_n\},\{{{\gothic o}thic a}mma^y_n\}$ be $\pi_1S$--translates of ${{\gothic o}thic a}mma$ that nest down on $x$ and $y$, respectively. Then by \eqref{E:what is partial Phi}, we have \[ \partial \Phi(x) = \betaigcap_{n=1}^\infty \partial \mathscr{H}^+({{\gothic o}thic a}mma^x_n) \quad \mbox{ and } \quad \partial \Phi(y) = \betaigcap_{n=1}^\infty \partial \mathscr{H}^+({{\gothic o}thic a}mma^y_n) \] By Lemma~\ref{L:points identified by hat phi}, $\hat \Phi(\{\mathcal{L}\} \tauimes E)$ is a single point, which we denote $\hat \Phi(\{\mathcal{L}\} \tauimes E) = \mathcal{L}_0 \in \ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$. Now observe that because ${{\gothic o}thic a}mma^x_n$ intersects $E$ for all sufficiently large $n$, \eqref{E:boundary intersections 2} implies \[ \mathcal{L}_0 \in \betaigcap_{n=1}^\infty \hat \Phi(\{\mathcal{L}\} \tauimes {{\gothic o}thic a}mma^x_n) \subset \betaigcap_{n=1}^\infty \partial \mathcal{X}({{\gothic o}thic a}mma^x_n) \subset \betaigcap_{n=1}^\infty \partial \mathscr{H}^+({{\gothic o}thic a}mma^x_n) = \partial \Phi(x).\] Therefore, $\partial \Phi(x) = \mathcal{L}_0$. The exact same argument shows $\partial \Phi(y) = \mathcal{L}_0$, and hence \[ \partial \Phi(x) = \mathcal{L}_0= \partial \Phi(y),\] as required. Now suppose $\partial \Phi(x) = \partial \Phi(y) = \mathcal{L}_0 \in \ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$. Again by \eqref{E:what is partial Phi} there are sequences $\{{{\gothic o}thic a}mma^x_n\}$ and $\{{{\gothic o}thic a}mma^y_n\}$ ($\pi_1S$-translates of ${{\gothic o}thic a}mma$) nesting down to $x$ and $y$ respectively so that \[\betaigcap_{n=1}^\infty \partial \mathscr{H}^{+}({{\gothic o}thic a}mma^x_n)= \mathcal{L}_0 = \betaigcap_{n=1}^\infty \partial \mathscr{H}^{+}({{\gothic o}thic a}mma^y_n). \] Because the intersections are nested, this implies that for all $n$ we have \[ \mathcal{L}_0 \in \partial \mathscr{H}^{+}({{\gothic o}thic a}mma_n^x) \cap \partial \mathscr{H}^{+}({{\gothic o}thic a}mma_n^y). \] Passing to a subsequence if necessary, we may assume that for all $n$, $\mathscr{H}^+({{\gothic o}thic a}mma_n^x) \subset \mathscr{H}^-({{\gothic o}thic a}mma_n^y)$ and $\mathscr{H}^+({{\gothic o}thic a}mma_n^y) \subset \mathscr{H}^-({{\gothic o}thic a}mma_n^x)$. 
Therefore, for all $n$ we have \betaegin{eqnarray*} \mathcal{L}_0 & \in & \partial \mathscr{H}^{+}({{\gothic o}thic a}mma_n^x) \cap \partial \mathscr{H}^{+}({{\gothic o}thic a}mma_n^y)\\ & = & \left( \partial \mathscr{H}^+({{\gothic o}thic a}mma_n^x) \cap \partial \mathscr{H}^-({{\gothic o}thic a}mma_n^y)\right) \cap \left( \partial \mathscr{H}^+({{\gothic o}thic a}mma_n^y) \cap \partial \mathscr{H}^-({{\gothic o}thic a}mma_n^x) \right)\\ & = & \left( \partial \mathscr{H}^+({{\gothic o}thic a}mma_n^x) \cap \partial \mathscr{H}^-({{\gothic o}thic a}mma_n^x)\right) \cap \left( \partial \mathscr{H}^+({{\gothic o}thic a}mma_n^y) \cap \partial \mathscr{H}^-({{\gothic o}thic a}mma_n^y) \right)\\ & = & \partial \mathcal{X}({{\gothic o}thic a}mma_n^x) \cap \partial \mathcal{X}({{\gothic o}thic a}mma_n^y). \end{eqnarray*} The last equality here is an application of \eqref{E:boundary intersections 1}. Combining this with the description of $\mathcal{L}_0$ above and \eqref{E:boundary intersections 2}, we have \[ \partial \Phi(x) = \partial \Phi(y) = \mathcal{L}_0 = \betaigcap_{n=1}^\infty (\partial \mathcal{X}({{\gothic o}thic a}mma_n^x) \cap \partial \mathcal{X}({{\gothic o}thic a}mma_n^y)) = \betaigcap_{n=1}^\infty (\hat \Phi(\partial {\mathcal C}(S) \tauimes {{\gothic o}thic a}mma_n^x) \cap \hat \Phi(\partial {\mathcal C}(S) \tauimes {{\gothic o}thic a}mma_n^y)). \] For the last equation where we have applied \eqref{E:boundary intersections 2}, we have used the fact that the stable/unstable laminations of the pseudo-Anosov mapping classes corresponding to ${\sf d}elta_n^x$ and ${\sf d}elta_n^y$ in $\pi_1S$ stabilizing ${{\gothic o}thic a}mma_n^x$ and ${{\gothic o}thic a}mma_n^y$, respectively, are all distinct, hence $\mathcal{L}_0$ is not one of the stable/unstable laminations. From the equation above, we have $\mathcal{L}_n^x,\mathcal{L}_n^y \in \ensuremath{\mathcal{EL}}\xspace(S)$ and $x_n \in {{\gothic o}thic a}mma_n^x, y_n \in {{\gothic o}thic a}mma_n^y$ so that $\hat \Phi(\mathcal{L}_n^x,x_n) = \hat \Phi(\mathcal{L}_n^y,y_n) = \mathcal{L}_0$, for all $n$. According to Lemma~\ref{L:points identified by hat phi}, there exists $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$ so that $\mathcal{L}_n^x = \mathcal{L}_n^y = \mathcal{L}$ for all $n$, and there exists a leaf or complementary region $E$ of $p^{-1}(\mathcal{L})$ so that $x_n,y_n \in E$. Since ${{\gothic o}thic a}mma_n^x$ and ${{\gothic o}thic a}mma_n^y$ nest down on $x$ and $y$, respectively, it follows that $x_n \tauo x$ and $y_n \tauo y$ as $n \tauo \infty$. Therefore, $x,y$ are endpoints of a leaf of $p^{-1}(\mathcal{L})$ or ideal endpoints of a complementary region of $p^{-1}(\mathcal{L})$, as required. \end{proof} We can now easily deduce the following, which also proves Proposition~\ref{P:CT difference}. \betaegin{proposition} \langlebel{P:preimage of witness ELs} Given $\mathcal{L}_0 \in \ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$, $\partial \Phi^{-1}(\mathcal{L}_0)$ is infinite if and only if $\mathcal{L}_0 \in \ensuremath{\mathcal{EL}}\xspace(W)$ for some proper witness $W$. \end{proposition} \betaegin{proof} Theorem~\ref{CTU} implies that for $\mathcal{L}_0 \in \ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$, $\partial \Phi^{-1}(\mathcal{L}_0)$ contains more than two points if and only if there is a lamination $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$ and a complementary region $U$ of $p^{-1}(\mathcal{L})$ so that $\partial \Phi^{-1}(\mathcal{L}_0)$ is precisely the set of ideal points of $U$. 
Moreover, in this case the proof above shows that $\mathcal{L}_0 = \hat \Phi(\{\mathcal{L}\} \tauimes U)$. On the other hand, Lemma~\ref{L:where the witness ELs come from} and the paragraph preceding it tell us that $\mathcal{L}_0 \in \ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$ is contained in $\ensuremath{\mathcal{EL}}\xspace(W)$ for a proper witness $W \subsetneq {\sf d}ot S$ if and only if it is given by $\mathcal{L}_0 = \hat \Phi(\{\mathcal{L}\} \tauimes U)$ where $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$ and $U$ is the complementary region of $p^{-1}(\mathcal{L})$ containing $H_x$, where $x \in \mathcal{P}$ with $W = \mathcal{W}(x)$. Finally, we note that a complementary region of $p^{-1}(\mathcal{L})$, for a lamination $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$, has infinitely many ideal vertices if and only if it projects to a complementary region of $\mathcal{L}$ containing a puncture, and this happens if and only if it contains a horoball $H_x$ for some $x \in \mathcal{P}$. Combining all three of the facts above proves the proposition. \end{proof} Now we define $\BS^1_{\hspace{-.1cm}\calA}C \subset \BS^1_{\hspace{-.1cm}\calA}$ to be those points that map by $\partial \Phi$ to $\ensuremath{\mathcal{EL}}\xspace({\sf d}ot S)$ and then define $\partial \Phi_0 \colon\tauhinspacelon \BS^1_{\hspace{-.1cm}\calA}C \tauo \partial {\mathcal C}({\sf d}ot S) = \ensuremath{\mathcal{EL}}\xspace({\sf d}ot S)$ to be the ``restriction'' of $\partial \Phi$ to $\BS^1_{\hspace{-.1cm}\calA}C$. Theorem~\ref{T:Phi0 identified} is a consequence of Theorem~\ref{CTU} since $\partial \Phi_0$ is the restriction of $\partial \Phi$ to $\BS^1_{\hspace{-.1cm}\calA}C$. Then Proposition~\ref{P:CT difference} is immediate from Proposition~\ref{P:preimage of witness ELs} and the definitions. Theorem~\ref{T:UCT C short} then follows from Theorem~\ref{CT}. We end with an alternate description of $\BS^1_{\hspace{-.1cm}\calA}C$. For $\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)$, consider the subset $\mathcal{S}_{\mathcal{L}} \subset \partial \mathbb{H}$ consisting of all ideal endpoints of complementary components of $p^{-1}(\mathcal{L})$ {\em which have infinitely many such ideal endpoints}. That is, $\mathcal{S}_{\mathcal{L}}$ is the set of ideal endpoints of those complementary regions of $p^{-1}(\mathcal{L})$ that project to complementary regions of $\mathcal{L}$ containing a puncture. The following is thus an immediate consequence of Theorem~\ref{CTU} and Proposition~\ref{P:preimage of witness ELs}. \betaegin{corollary} \langlebel{C:CT difference 2} The set of points $\BS^1_{\hspace{-.1cm}\calA}C \subset \BS^1_{\hspace{-.1cm}\calA} \subset \partial \mathbb{H}$ that map to $\ensuremath{\mathcal{EL}}\xspace({\sf d}ot S) \subset \ensuremath{\mathcal{EL}}\xspace^s({\sf d}ot S)$ is \[ \BS^1_{\hspace{-.1cm}\calA}C = \BS^1_{\hspace{-.1cm}\calA} {\smallsetminus} \betaigcup_{\mathcal{L} \in \ensuremath{\mathcal{EL}}\xspace(S)} \mathcal{S}_{\mathcal{L}}.\] $\Box$ \end{corollary} 
\betaibliographystyle{alpha} \betaibliography{main} \end{document}
{\textbf{m}athfrak b}egin{document} \title[Boundedness of some classical linear operators]{Some applications of the dual spaces of Hardy-amalgam spaces} {\textbf{m}athfrak a}uthor[Z.V.P. Abl\'e]{Zobo Vincent de Paul Abl\'e} {\textbf{m}athfrak a}ddress{Laboratoire de Math\'ematiques Fondamentales, UFR Math\'ematiques et Informatique, Universit\'e F\'elix Houphou\"et-Boigny Abidjan-Cocody, 22 B.P 582 Abidjan 22. C\^ote d'Ivoire} \email{{\tt [email protected]}} {\textbf{m}athfrak a}uthor[J. Feuto]{Justin Feuto} {\textbf{m}athfrak a}ddress{Laboratoire de Math\'ematiques Fondamentales, UFR Math\'ematiques et Informatique, Universit\'e F\'elix Houphou\"et-Boigny Abidjan-Cocody, 22 B.P 1194 Abidjan 22. C\^ote d'Ivoire} \email{{\tt [email protected]}} \subjclass{42B30, 42B35, 46E30, 42B20} \keywords{Amalgam spaces, Hardy-Amalgam spaces, Duality, Calder\'on-Zygmund operator, Convolution operator.} \date{} {\textbf{m}athfrak b}egin{abstract} In this paper, thanks to the generalizations of the dual spaces of the Hardy-amalgam spaces $\textbf{m}athcal H^{(q,p)}$ and $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ for $0<q\leq1$ and $q\leq p<\infty$, obtained in our earlier paper \cite{AbFt3}, we prove that the inclusion of $\textbf{m}athcal H^{(1,p)}$ in $(L^1,\ell^p)$ for $1\leq p<\infty$ is strict, and more generally, the one of $\textbf{m}athcal H^{(q,p)}$ in $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ for $0<q\leq1$ and $q\leq p<\infty$. Moreover, as other applications, we obtain results of boundedness of Calder\'on-Zygmund and convolution operators, generalizing those known in the context of the spaces $\textbf{m}athcal H^1$ and $BMO(\textbf{m}athbb{R}^d)$. \end{abstract} \textbf{m}aketitle \section{Introduction} Let $\varphi\in\textbf{m}athcal C^\infty(\textbf{m}athbb R^d)$ with support in $B(0,1)$ such that $\int_{\textbf{m}athbb R^d}\varphi(x)dx=1$, where $B(0,1)$ is the unit open ball centered at $0$ and $\textbf{m}athcal C^\infty(\textbf{m}athbb R^d)$ denotes the space of infinitely differentiable complex valued functions on $\textbf{m}athbb R^d$. The Hardy-amalgam spaces $\textbf{m}athcal H^{(q,p)}$ and $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ ($0<q,p<\infty$), introduced in \cite{AbFt}, are a generalization of the classical Hardy spaces $\textbf{m}athcal H^q$ and $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^q$ in the sense that they are respectively the spaces of tempered distributions $f$ such that the maximal functions {\textbf{m}athfrak b}egin{equation} \textbf{m}athcal M_{\varphi}(f):=\sup_{t>0}|f{\textbf{m}athfrak a}st\varphi_t|\ \ \ \text{ and }\ \ \ {\textbf{m}athcal{M}_{\textbf{m}athrm{loc}}}_{_{\varphi}}(f):=\sup_{0<t\leq 1}|f{\textbf{m}athfrak a}st\varphi_t|\ \ \ \ \ \label{maximal} \end{equation} belong to the Wiener amalgam spaces $(L^q,\ell^p):=(L^q,\ell^p)(\textbf{m}athbb R^d)$, where $\varphi_t(x)=t^{-d}\varphi(t^{-1}x),\, t>0$ and $x\in\textbf{m}athbb R^d$. We recall that for $0<p,q\leq\infty$, a locally integrable function $f$ belongs to the amalgam space $(L^q,\ell^p)$ if $$\left\|f\right\|_{q,p}:=\left\|\left\{\left\|f\chi_{_{Q_k}}\right\|_{q}\right\}_{k\in\textbf{m}athbb{Z}^d}\right\|_{\ell^{p}}<\infty,$$ where $Q_k=k+\left[0,1\right)^{d}$ for $k\in\textbf{m}athbb Z^d$ (see \cite{BDD}, \cite{RBHS}, \cite{FSTW}, \cite{FH} and \cite{JSTW} for details). It is advisable to point out that the Wiener amalgam spaces $(L^q,\ell^p)$ are special cases of the Orlicz-slice spaces introduced by Zhang et al. 
in \cite{ZYYW}, which are themselves special cases of the ball (quasi-)Banach function spaces introduced by Sawano et al. in \cite{SHYY}. Wiener amalgam spaces are also isomorphic to special cases of the mixed-norm spaces defined by Benedek and Panzone \cite{BPR} (see \cite{RBHS} and \cite{FSTW} for details). For some properties and a survey of mixed-norm function spaces, the reader can refer to the papers of Hart et al. \cite{HTWX} and Huang et al. \cite{HYD}. As for classical Hardy spaces, not only can the Hardy-amalgam spaces be characterized in terms of grand maximal functions, but their definition also does not depend on the particular function $\varphi$, and the regular function $\varphi$ can be replaced by the Poisson kernel. These spaces admit atomic characterizations with atoms which are exactly those used in classical Hardy spaces, when $0<q\leq1$ and $q\leq p<\infty$ (see \cite{AbFt} and \cite{AbFt2}). Recently, in \cite{AbFt3}, we generalized the characterizations of their dual spaces obtained in \cite{AbFt1} and \cite{AbFt2} for $0<q\leq p\leq 1$ to the case $0<q\leq 1$ and $q\leq p<\infty$. We point out that several developments and generalizations of the Hardy space theory, modeled on the above-mentioned generalizations of Wiener amalgam spaces, were obtained by many authors. On the model of ball quasi-Banach function spaces, we can quote the Hardy spaces for ball quasi-Banach function spaces introduced by Sawano et al. in \cite{SHYY}, and the Orlicz-slice Hardy spaces of Zhang et al. in \cite{ZYYW}. Also, we have the works of Wang et al. in \cite{WYYg} and \cite{WYYZg}, those of Yan et al. in \cite{YYYn}, of Zhang et al. in \cite{ZWYY}, and, finally, the paper of Chang et al. \cite{CWYZg}. On the other hand, on the model of mixed-norm function spaces, we can mention the anisotropic mixed-norm Hardy spaces defined by Cleanthous et al. in \cite{CGNM} and the works of Huang et al. \cite{HLYY1}, \cite{HLYY2} and \cite{HLYY3} on these spaces. Several of our results in \cite{AbFt}, \cite{AbFt1} and \cite{AbFt2} have been generalized in the context of these general spaces, namely atomic and molecular decompositions, boundedness of Calder\'on-Zygmund operators, convolution and pseudo-differential operators, and many others. Furthermore, characterizations of the dual spaces of these generalized Hardy spaces have been established, covering those of Hardy-amalgam spaces obtained in \cite{AbFt1} and \cite{AbFt2} when $0<q\leq p\leq 1$. However, the characterization of the dual spaces of the Hardy-amalgam spaces $\textbf{m}athcal H^{(q,p)}$ and $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ for the exponent range $0<q\leq1<p<\infty$ does not fall within the scope of what has been done in the context of these generalized Hardy spaces. This characterization was thus left as an open problem until our earlier paper \cite{AbFt3}, where an answer was provided. The aim of this paper is twofold. First, to answer two questions raised in \cite{AbFt3}; namely, whether the inclusion of $\textbf{m}athcal H^{(1,p)}$ in $(L^1,\ell^p)$ for $1\leq p<\infty$ and that of $\textbf{m}athcal H^{(q,p)}$ in $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ for $0<q\leq1$ and $q\leq p<\infty$ are strict. 
Next, to generalize some boundedness results of Calder\'on-Zygmund and convolution operators known in the context of the spaces $\textbf{m}athcal H^1$ and $BMO(\textbf{m}athbb{R}^d)$ to the case of the Hardy-amalgam spaces $\textbf{m}athcal H^{(q,p)}$ and their dual spaces when $0<q\leq1$ and $q\leq p<\infty$. To this end, we organize this paper as follows. In Section 2, we recall some properties of the Hardy-amalgam spaces $\textbf{m}athcal H^{(q,p)}$ and $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$, and their dual spaces. Section 3 is devoted to the study of the inclusion of $\textbf{m}athcal H^{(1,p)}$ in $(L^1,\ell^p)$ for $1\leq p<\infty$ and more generally, the one of $\textbf{m}athcal H^{(q,p)}$ in $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ for $0<q\leq1$ and $q\leq p<\infty$. In the last section, we study the boundedness of Calder\'on-Zygmund and convolution operators on the dual spaces of the Hardy-amalgam spaces $\textbf{m}athcal H^{(q,p)}$ when $0<q\leq1$ and $q\leq p<\infty$. Throughout the paper, we always let $\textbf{m}athbb{N}=\left\{1,2,\ldots\right\}$ and $\textbf{m}athbb{Z}_{+}=\textbf{m}athbb{N}\cup\left\{0\right\}$. We use $\textbf{m}athcal S := \textbf{m}athcal S(\textbf{m}athbb R^{d})$ to denote the Schwartz class of rapidly decreasing smooth functions equipped with the topology defined by the family of norms $\left\{\textbf{m}athcal{N}_{m}\right\}_{m\in\textbf{m}athbb{Z}_{+}}$, where for all $m\in\textbf{m}athbb{Z}_{+}$ and $\psi\in\textbf{m}athcal{S}$, $$\textbf{m}athcal{N}_{m}(\psi):=\underset{x\in\textbf{m}athbb R^{d}}\sup(1 + |x|)^{m}\underset{|{\textbf{m}athfrak b}eta|\leq m}\sum|{\partial}^{\textbf{m}athfrak b}eta \psi(x)|,$$ with $|{\textbf{m}athfrak b}eta|={\textbf{m}athfrak b}eta_1+\ldots+{\textbf{m}athfrak b}eta_d$, ${\partial}^{\textbf{m}athfrak b}eta=\left(\partial/{\partial x_1}\right)^{{\textbf{m}athfrak b}eta_1}\ldots\left(\partial/{\partial x_d}\right)^{{\textbf{m}athfrak b}eta_d}$ for all ${\textbf{m}athfrak b}eta=({\textbf{m}athfrak b}eta_1,\ldots,{\textbf{m}athfrak b}eta_d)\in\textbf{m}athbb{Z}_{+}^d$ and $|x|:=(x_1^2+\ldots+x_d^2)^{1/2}$. The dual space of $\textbf{m}athcal S$ is the space of tempered distributions denoted by $\textbf{m}athcal S':= \textbf{m}athcal S'(\textbf{m}athbb R^{d})$ equipped with the weak-${\textbf{m}athfrak a}st$ topology. If $f\in\textbf{m}athcal{S'}$ and $\theta\in\textbf{m}athcal{S}$, we denote the evaluation of $f$ on $\theta$ by $\left\langle f,\theta\right\rangle$. The letter $C$ will be used for non-negative constants independent of the relevant variables that may change from one occurrence to another. When a constant depends on some important parameters ${\textbf{m}athfrak a}lpha,\gamma,\ldots$, we denote it by $C({\textbf{m}athfrak a}lpha,\gamma,\ldots)$. Constants with subscript, such as $C_{{\textbf{m}athfrak a}lpha,\gamma,\ldots}$, do not change in different occurrences but depend on the parameters mentioned in them. We propose the following abbreviation $\textbf{m}athrm{{\textbf{m}athfrak b}f A}\raisebox{-1ex}{$~\stackrel{\textstyle <}{\sim}~$} \textbf{m}athrm{{\textbf{m}athfrak b}f B}$ for the inequalities $\textbf{m}athrm{{\textbf{m}athfrak b}f A}\leq C\textbf{m}athrm{{\textbf{m}athfrak b}f B}$, where $C$ is a positive constant independent of the main parameters. 
If $\textbf{m}athrm{{\textbf{m}athfrak b}f A}\raisebox{-1ex}{$~\stackrel{\textstyle <}{\sim}~$} \textbf{m}athrm{{\textbf{m}athfrak b}f B}$ and $\textbf{m}athrm{{\textbf{m}athfrak b}f B}\raisebox{-1ex}{$~\stackrel{\textstyle <}{\sim}~$} \textbf{m}athrm{{\textbf{m}athfrak b}f A}$, then we write $\textbf{m}athrm{{\textbf{m}athfrak b}f A}{\textbf{m}athfrak a}pprox \textbf{m}athrm{{\textbf{m}athfrak b}f B}$. For any given quasi-normed spaces $\textbf{m}athcal{A}$ and $\textbf{m}athcal{B}$ with the corresponding quasi-norms $\left\|\cdot\right\|_{\textbf{m}athcal{A}}$ and $\left\|\cdot\right\|_{\textbf{m}athcal{B}}$, the notation $\textbf{m}athcal{A}{\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{B}$ means that if $f\in\textbf{m}athcal{A}$, then $f\in\textbf{m}athcal{B}$ and $\left\|f\right\|_{\textbf{m}athcal{B}}\raisebox{-1ex}{$~\stackrel{\textstyle <}{\sim}~$}\left\|f\right\|_{\textbf{m}athcal{A}}$. Also, $\textbf{m}athcal{A}\cong\textbf{m}athcal{B}$ means that $\textbf{m}athcal{A}$ is isomorphic to $\textbf{m}athcal{B}$, with equivalence of the quasi-norms $\left\|\cdot\right\|_{\textbf{m}athcal{A}}$ and $\left\|\cdot\right\|_{\textbf{m}athcal{B}}$. For a real number $\lambda>0$ and a cube $Q\subset\textbf{m}athbb R^{d}$ (by a cube we mean a cube whose edges are parallel to the coordinate axes), we write $\lambda Q$ for the cube with the same center as $Q$ and side-length $\lambda$ times that of $Q$, while $\left\lfloor \lambda \right\rfloor$ stands for the greatest integer less than or equal to $\lambda$. Also, for $x\in\textbf{m}athbb R^{d}$ and $\ell>0$, $Q(x,\ell)$ will denote the cube centered at $x$ with side-length $\ell$. We use the same notations for balls. For a measurable set $E\subset\textbf{m}athbb R^d$, we denote by $\chi_{_{E}}$ the characteristic function of $E$ and by $\left|E\right|$ its Lebesgue measure. Finally, we denote by $\textbf{m}athcal{Q}$ the set of all cubes of $\textbf{m}athbb R^{d}$. \section{Prerequisites on Hardy-amalgam spaces and their dual spaces} \subsection{On Hardy-amalgam spaces} Let $0<q,p<\infty$. The Hardy-amalgam spaces $\textbf{m}athcal{H}^{(q,p)}$ and $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ are Banach spaces whenever $1\leq q,p<\infty$ and quasi-Banach spaces otherwise (see \cite[Proposition 3.8]{AbFt}). Moreover, for $0<q\leq 1$ and $q\leq p<\infty$, they admit atomic characterizations with atoms which are exactly those used in classical Hardy spaces (see \cite[Theorems 4.3, 4.4 and 4.6]{AbFt} and \cite[Theorems 3.2, 3.3 and 3.9]{AbFt2}). We recall that for $0<q\leq 1$, $q\leq p<\infty$, $1<r\leq\infty$ and an integer $s\geq\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor$, a function $\textbf{a}$ is a $(q,r,s)$-atom on $\textbf{m}athbb{R}^d$ for $\textbf{m}athcal{H}^{(q,p)}$ if there exists a cube $Q$ such that {\textbf{m}athfrak b}egin{enumerate} \item $\text{supp}(\textbf{a})\subset Q$; \item $\left\|\textbf{a}\right\|_r\leq|Q|^{\frac{1}{r}-\frac{1}{q}}$; \label{defratom1} \item $\int_{\textbf{m}athbb{R}^d}x^{{\textbf{m}athfrak b}eta}\textbf{a}(x)dx=0$, for all multi-indices ${\textbf{m}athfrak b}eta$ with $|{\textbf{m}athfrak b}eta|\leq s$. \label{defratom2} \end{enumerate} We define in the same way the atoms of the local Hardy-amalgam spaces, namely the local $(q,r,\delta)$-atoms. But, just as for the atoms of the classical local Hardy spaces, Condition \ref{defratom2} is not required for these local $(q,r,\delta)$-atoms when the corresponding cubes $Q$ have side length greater than or equal to 1. 
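For orientation, we include an elementary example of such an atom; it is a standard example and is not taken from \cite{AbFt} or \cite{AbFt2}. Take $q=1$ and $s=0$, so that only the zero-order moment has to vanish, let $Q\subset\textbf{m}athbb{R}^d$ be any cube, and let $Q_1$ and $Q_2$ be the two halves of $Q$ obtained by bisecting it with a hyperplane parallel to a face, so that $|Q_1|=|Q_2|=|Q|/2$. Then the function
$$\textbf{a}:=\frac{1}{|Q|}\left(\chi_{_{Q_1}}-\chi_{_{Q_2}}\right)$$
is a $(1,\infty,0)$-atom: indeed, $\text{supp}(\textbf{a})\subset Q$, $\left\|\textbf{a}\right\|_{\infty}=|Q|^{-1}=|Q|^{\frac{1}{r}-\frac{1}{q}}$ with $r=\infty$ and $q=1$ (with the usual convention $\frac{1}{\infty}=0$), and $\int_{\textbf{m}athbb{R}^d}\textbf{a}(x)dx=\frac{1}{|Q|}\left(|Q_1|-|Q_2|\right)=0$.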
We denote by $\textbf{m}athcal{A}(q,r,s)$ the set of all $(\textbf{a},Q)$ such that $\textbf{a}$ is a $(q,r,s)$-atom and $Q$ is the associated cube, and $\textbf{m}athcal{A}_{\textbf{m}athrm{loc}}(q,r,s)$ for the local $(q,r,\delta)$-atoms. We suppose that $0<q\leq 1$ and $q\leq p<\infty$. Let $1<r\leq\infty$ and $\delta\geq\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor$ be an integer. We denote by $\textbf{m}athcal{H}_{fin}^{(q,p)}$ the subspace of $\textbf{m}athcal{H}^{(q,p)}$ consisting of finite linear combinations of $(q,r,\delta)$-atoms, and by $\textbf{m}athcal{H}_{\textbf{m}athrm{loc},fin}^{(q,p)}$ the subspace of $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ consisting of finite linear combinations of local $(q,r,\delta)$-atoms. The spaces $\textbf{m}athcal{H}_{fin}^{(q,p)}$ and $\textbf{m}athcal{H}_{\textbf{m}athrm{loc},fin}^{(q,p)}$ are respectively dense subspaces of $\textbf{m}athcal{H}^{(q,p)}$ and $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ (see \cite[Remark 4.7]{AbFt} and \cite[Remark 3.12]{AbFt2}). \subsection{On the dual space of $\textbf{m}athcal{H}^{(q,p)}$} Let's fix an integer $\delta\geq0$. Let $1\leq r<\infty$, $g\in L_{\textbf{m}athrm{loc}}^r$ and ${\textbf{m}athcal O}mega\subsetneq\textbf{m}athbb R^d$ an open subset. We define $$\textit{O}(g,{\textbf{m}athcal O}mega,r):=\sup{\sum_{n\geq0}|\widetilde{Q^n}|^{\frac{1}{r'}}\left(\int_{\widetilde{Q^n}}\left|g(x)-P_{\widetilde{Q^n}}^{\delta}(g)(x)\right|^{r}dx\right)^{\frac{1}{r}}},$$ where $\frac{1}{r}+\frac{1}{r'}=1$ and the supremum is taken over all families of cubes $\left\{Q^n\right\}_{n\geq0}$ such that $Q^n\subset{\textbf{m}athcal O}mega$ for all $n\geq0$ and $\sum_{n\geq0}\chi_{_{Q^n}}\leq K(d)$, with $\widetilde{Q^n}=C_0Q^n$, where $K(d)>1$ and $C_0>1$ are fixed constants independent of ${\textbf{m}athcal O}mega$ and $\left\{Q^n\right\}_{n\geq0}$, and for a cube $Q$, $P_Q^{\delta}(g)$ stands for the unique polynomial of $\textbf{m}athcal{P_{\delta}}$ ($\textbf{m}athcal{P_{\delta}}:=\textbf{m}athcal{P_{\delta}}(\textbf{m}athbb{R}^d)$ is the space of polynomial functions of degree at most $\delta$) such that, for all $\textbf{m}athfrak{q}\in\textbf{m}athcal{P_{\delta}}$, {\textbf{m}athfrak b}egin{align*} \int_{Q}\left[g(x)-P_Q^{\delta}(g)(x)\right]\textbf{m}athfrak{q}(x)dx=0. \end{align*} We consider the functions $\phi_{1},\ \phi_{2}\ \text{and}\ \phi_{3}:\textbf{m}athcal{Q}\rightarrow(0,\infty)$ defined by {\textbf{m}athfrak b}egin{align} \phi_{1}(Q)=\frac{\left\|\chi_{_{Q}}\right\|_{q,p}}{|Q|}\ ,\ \phi_{2}(Q)=\frac{\left\|\chi_{_{Q}}\right\|_q}{|Q|}\ \text{ and }\ \phi_{3}(Q)=\frac{\left\|\chi_{_{Q}}\right\|_p}{|Q|}\ , \label{Campanatonolocnloc} \end{align} for all $Q\in\textbf{m}athcal{Q}$, whenever $0<q\leq 1$ and $0<p<\infty$. {\textbf{m}athfrak b}egin{defn}\label{corprecis} Suppose that $0<q\leq1$ and $0<p<\infty$. Let $0<\eta<\infty$ and $1\leq r<\infty$. 
We say that a function $g$ in $L_{\textbf{m}athrm{loc}}^r$ belongs to $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}:=\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}(\textbf{m}athbb{R}^d)$ if there is a constant $C>0$ such that, for all families of open subsets $\left\{{\textbf{m}athcal O}mega^j\right\}_{j\in\textbf{m}athbb{Z}}$ with $\left\|\sum_{j\in\textbf{m}athbb{Z}}2^{j\eta}\chi_{{\textbf{m}athcal O}mega^j}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}<\infty$, we have {\textbf{m}athfrak b}egin{align} \sum_{j\in\textbf{m}athbb{Z}}2^j\textit{O}(g,{\textbf{m}athcal O}mega^j,r)\leq C\left\|\sum_{j\in\textbf{m}athbb{Z}}2^{j\eta}\chi_{{\textbf{m}athcal O}mega^j}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}^{\frac{1}{\eta}}. \label{dualqp} \end{align} \end{defn} We have $\textbf{m}athcal{P_{\delta}}\subset\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$. When $g\in\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$, we put $$\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}:=\inf\left\{C>0:\ C \text{ satisfies } (\ref{dualqp})\right\}.$$ Then $\left\|\cdot\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}$ defines a semi-norm on $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$ and a norm on $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}/\textbf{m}athcal{P_{\delta}}$. In the sequel, $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$ will designate $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}/\textbf{m}athcal{P_{\delta}}$. {\textbf{m}athfrak b}egin{defn}\cite[Definition 6.1]{NEYS} Let $1\leq r\leq\infty$, a function $\phi: \textbf{m}athcal{Q}\rightarrow (0,\infty)$ and $f\in L_{\textbf{m}athrm{loc}}^r$. One denotes {\textbf{m}athfrak b}egin{align*} \left\|f\right\|_{\textbf{m}athcal{L}_{r,\phi,\delta}}:=\sup_{Q\in\textbf{m}athcal{Q}}\frac{1}{\phi(Q)}\left(\frac{1}{|Q|}\int_{Q}\left|f(x)-P_Q^{\delta}(f)(x)\right|^rdx\right)^{\frac{1}{r}}, \end{align*} when $r<\infty$, and {\textbf{m}athfrak b}egin{align*} \left\|f\right\|_{\textbf{m}athcal{L}_{r,\phi,\delta}}:=\sup_{Q\in\textbf{m}athcal{Q}}\frac{1}{\phi(Q)}\left\|f-P_Q^{\delta}(f)\right\|_{L^{\infty}(Q)}, \end{align*} when $r=\infty$. Then, the Campanato space $\textbf{m}athcal{L}_{r,\phi,\delta}(\textbf{m}athbb{R}^d)$ is defined to be the set of all $f\in L_{\textbf{m}athrm{loc}}^r$ such that $\left\|f\right\|_{\textbf{m}athcal{L}_{r,\phi,\delta}}<\infty$. One considers elements in $\textbf{m}athcal{L}_{r,\phi,\delta}(\textbf{m}athbb{R}^d)$ modulo polynomials of degree $\delta$ so that $\textbf{m}athcal{L}_{r,\phi,\delta}(\textbf{m}athbb{R}^d)$ is a Banach space. When one writes $f\in\textbf{m}athcal{L}_{r,\phi,\delta}(\textbf{m}athbb{R}^d)$, then $f$ stands for the representative of $\left\{f+\textbf{m}athfrak{q}: \textbf{m}athfrak{q}\ \text{is a polynomial of degree}\ \delta\right\}$. 
\end{defn} The following inequalities were proved in \cite{AbFt3}.\\ For $0<q\leq1$, $0<p<\infty$, $0<\eta<\infty$ and $1\leq r<\infty$, we have {\textbf{m}athfrak b}egin{align} \left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}}\leq\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}, \label{dualqp3} \end{align} for all $g\in\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$, and hence {\textbf{m}athfrak b}egin{align} \left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{2},\delta}}\leq\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}}\leq\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}, \label{dualqp3bis} \end{align} for all $g\in\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$, whenever $0<q\leq1$ and $q\leq p<\infty$. Moreover, whenever $0<q,p\leq1$ and $0<\eta\leq1$, we have {\textbf{m}athfrak b}egin{align} \left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}{\textbf{m}athfrak a}pprox\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}}, \label{dualqp3bisbis} \end{align} which means that $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}=\textbf{m}athcal{L}_{r,\phi_{1},\delta}$ with equivalent norms. Also, for $0<q\leq1$, $0<p<\infty$, $0<\eta<\infty$ and $1\leq r<\infty$, the space $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$ endowed with the norm $\left\|\cdot\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}$ is complete (see \cite[Proposition 3.5]{AbFt3}). For $T\in\left(\textbf{m}athcal{H}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}$ the topological dual space of $\textbf{m}athcal{H}^{(q,p)}$, we put $$\left\|T\right\|:=\left\|T\right\|_{\left(\textbf{m}athcal{H}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}}=\sup_{\underset{\left\|f\right\|_{\textbf{m}athcal{H}^{(q,p)}}\leq 1}{f\in\textbf{m}athcal{H}^{(q,p)}}}|T(f)|.$$ The hereafter results were proved in \cite{AbFt3}. {\textbf{m}athfrak b}egin{thm}\cite[Theorem 3.7]{AbFt3}\label{theoremdualqp} Suppose that $0<q\leq1<p<\infty$ and $\delta\geq\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor$. Let $p<r\leq\infty$. Then the topological dual space $\left(\textbf{m}athcal{H}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}$ of the Hardy-amalgam space $\textbf{m}athcal H^{(q,p)}$ is isomorphic to $\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)}$ with equivalent norms, where $\frac{1}{r}+\frac{1}{r'}=1$ and, $0<\eta<q$ if $r<\infty$ and $0<\eta\leq1$ if not. More precisely, we have the following assertions: {\textbf{m}athfrak b}egin{enumerate} \item Let $g\in\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)}$ and $\textbf{m}athcal{H}_{fin}^{(q,p)}$ be the subspace of $\textbf{m}athcal{H}^{(q,p)}$ consisting of finite linear combinations of $(q,r,\delta)$-atoms. Then the mapping $$T_g:\textbf{m}athcal{H}_{fin}^{(q,p)}\ni f\longmapsto\int_{\textbf{m}athbb{R}^d}g(x)f(x)dx,$$ extends to a unique continuous linear functional $\widetilde{T_g}$ on $\textbf{m}athcal{H}^{(q,p)}$ such that {\textbf{m}athfrak b}egin{eqnarray*} \left\|\widetilde{T_g}\right\|=\left\|T_g\right\|\leq C\left\|g\right\|_{\textbf{m}athcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)}}, \end{eqnarray*} where $C>0$ is a constant independent of $g$. 
\label{dualpointqp1} \item Conversely, for every $T\in\left(\textbf{m}athcal{H}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}$, there exists $g\in\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)}$ such that $T=T_g$; namely $$T(f)=\int_{\textbf{m}athbb{R}^d}g(x)f(x)dx,\ \text{ for all }\ f\in\textbf{m}athcal{H}_{fin}^{(q,p)},$$ and {\textbf{m}athfrak b}egin{eqnarray*} \left\|g\right\|_{\textbf{m}athcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)}}\leq C\left\|T\right\|, \end{eqnarray*} where $C>0$ is a constant independent of $T$. \label{dualpointqp2} \end{enumerate} \end{thm} {\textbf{m}athfrak b}egin{remark}\cite[Remark 3.1]{AbFt3}\label{remarqedualeqp1} Suppose that $0<q\leq1<p<\infty$ and $\delta\geq\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor$. Let $1<r<p'$, $0<\eta<q$ and $0<\eta_1\leq1$. Then we have $$\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(q,p,\eta_1)}\cong\left(\textbf{m}athcal{H}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)}{\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{r,\phi_2,\delta}\cong\left(\textbf{m}athcal{H}^q\right)^{{\textbf{m}athfrak a}st}$$ and $$\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(q,p,\eta_1)}\cong\left(\textbf{m}athcal{H}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)}{\textbf{m}athfrak h}ookrightarrow L^{p'}\cong\left(\textbf{m}athcal{H}^p\right)^{{\textbf{m}athfrak a}st}.$$ For $q=1$, we have $$(L^{\infty},\ell^{p'}){\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)}{\textbf{m}athfrak h}ookrightarrow BMO(\textbf{m}athbb{R}^d)$$ and $$(L^{\infty},\ell^{p'}){\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)}{\textbf{m}athfrak h}ookrightarrow L^{p'}.$$ Moreover, the inclusion of $(L^{\infty},\ell^{p'})$ in $\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)}$ is strict. \end{remark} {\textbf{m}athfrak b}egin{thm}\cite[Theorem 3.8]{AbFt3}\label{theoremdualunifie} Suppose that $0<q\leq1$, $q\leq p<\infty$ and $\delta\geq\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor$. Let $\textbf{m}ax\left\{1,p\right\}<r\leq\infty$. Then $\left(\textbf{m}athcal{H}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}$ is isomorphic to $\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)}$, where $\frac{1}{r}+\frac{1}{r'}=1$, $0<\eta<q$ if $r<\infty$, and $0<\eta\leq1$ if $r=\infty$, with equivalent norms. More precisely, we have the following assertions: {\textbf{m}athfrak b}egin{enumerate} \item Let $g\in\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)}$ and $\textbf{m}athcal{H}_{fin}^{(q,p)}$ be the subspace of $\textbf{m}athcal{H}^{(q,p)}$ consisting of finite linear combinations of $(q,r,\delta)$-atoms. Then the mapping $$T_g:\textbf{m}athcal{H}_{fin}^{(q,p)}\ni f\longmapsto\int_{\textbf{m}athbb{R}^d}g(x)f(x)dx,$$ extends to a unique continuous linear functional $\widetilde{T_g}$ on $\textbf{m}athcal{H}^{(q,p)}$ such that {\textbf{m}athfrak b}egin{eqnarray*} \left\|\widetilde{T_g}\right\|=\left\|T_g\right\|\leq C\left\|g\right\|_{\textbf{m}athcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)}}, \end{eqnarray*} where $C>0$ is a constant independent of $g$. 
\item Conversely, for any $T\in\left(\textbf{m}athcal{H}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}$, there exists $g\in\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)}$ such that $T=T_g$; namely $$T(f)=\int_{\textbf{m}athbb{R}^d}g(x)f(x)dx,\ \text{ for all }\ f\in\textbf{m}athcal{H}_{fin}^{(q,p)},$$ and {\textbf{m}athfrak b}egin{eqnarray*} \left\|g\right\|_{\textbf{m}athcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)}}\leq C\left\|T\right\|, \end{eqnarray*} where $C>0$ is a constant independent of $T$. \end{enumerate} \end{thm} In Theorem \ref{theoremdualunifie}, when $p\leq1$, we can take $0<\eta\leq1$ for $1<r\leq\infty$. \subsection{On the dual space of $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$} Let's fix an integer $\delta\geq0$. Let $1\leq r<\infty$, $g$ be a function in $L_{\textbf{m}athrm{loc}}^r$ and ${\textbf{m}athcal O}mega$ be an open subset such that ${\textbf{m}athcal O}mega\neq\textbf{m}athbb{R}^d$. We put {\textbf{m}athfrak b}egin{align*}\textit{O}(g,{\textbf{m}athcal O}mega,r)^{\textbf{m}athrm{loc}}&:=\sup\left[\sum_{|\widetilde{Q^n}|<1}|\widetilde{Q^n}|^{\frac{1}{r'}}\left(\int_{\widetilde{Q^n}}\left|g(x)-P_{\widetilde{Q^n}}^{\delta}(g)(x)\right|^{r}dx\right)^{\frac{1}{r}}\right. \\ &+\left. \sum_{|\widetilde{Q^n}|\geq1}|\widetilde{Q^n}|^{\frac{1}{r'}}\left(\int_{\widetilde{Q^n}}|g(x)|^{r}dx\right)^{\frac{1}{r}}\right], \end{align*} where $\frac{1}{r}+\frac{1}{r'}=1$ and the supremum is taken over all families of cubes $\left\{Q^n\right\}_{n\geq0}$ such that $Q^n\subset{\textbf{m}athcal O}mega$ for all $n\geq0$ and $\sum_{n\geq0}\chi_{_{Q^n}}\leq K(d)$, with $\widetilde{Q^n}=C_0Q^n$, $K(d)>1$ and $C_0>1$ are the same constants as in the definition of $\textit{O}(g,{\textbf{m}athcal O}mega,r)$. {\textbf{m}athfrak b}egin{defn} Suppose that $0<q\leq1$ and $0<p<\infty$. Let $0<\eta<\infty$ and $1\leq r<\infty$. We say that a function $g$ in $L_{\textbf{m}athrm{loc}}^r$ belongs to $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}:=\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}(\textbf{m}athbb{R}^d)$ if there exists a constant $C>0$ such that, for all families of open subsets $\left\{{\textbf{m}athcal O}mega^j\right\}_{j\in\textbf{m}athbb{Z}}$ with $\left\|\sum_{j\in\textbf{m}athbb{Z}}2^{j\eta}\chi_{{\textbf{m}athcal O}mega^j}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}<\infty$, we have {\textbf{m}athfrak b}egin{align} \sum_{j\in\textbf{m}athbb{Z}}2^j\textit{O}(g,{\textbf{m}athcal O}mega^j,r)^{\textbf{m}athrm{loc}}\leq C\left\|\sum_{j\in\textbf{m}athbb{Z}}2^{j\eta}\chi_{{\textbf{m}athcal O}mega^j}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}^{\frac{1}{\eta}}. \label{dualqploc} \end{align} \end{defn} We define $\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}:=\inf\left\{C>0:\ C \text{ satisfies } (\ref{dualqploc})\right\}$, when $g\in\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$. $\left\|\cdot\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}$ defines a norm on $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$. {\textbf{m}athfrak b}egin{defn}\cite[Definition 4.1]{AbFt2} Let $1\leq r\leq\infty$ and $\phi: \textbf{m}athcal{Q}\rightarrow (0,\infty)$ be a function. 
The space $\mathcal{L}_{r,\phi,\delta}^{\mathrm{loc}}:=\mathcal{L}_{r,\phi,\delta}^{\mathrm{loc}}(\mathbb{R}^d)$ is the set of all $f\in L_{\mathrm{loc}}^r$ such that $\left\|f\right\|_{{\mathcal{L}}_{r,\phi,\delta}^{\mathrm{loc}}}<\infty$, where \begin{align*} \left\|f\right\|_{{\mathcal{L}}_{r,\phi,\delta}^{\mathrm{loc}}}&:=\sup_{\underset{|Q|\geq1}{Q\in\mathcal{Q}}}\frac{1}{\phi(Q)}\left(\frac{1}{|Q|}\int_{Q}|f(x)|^rdx\right)^{\frac{1}{r}}\\ &+\sup_{\underset{|Q|<1}{Q\in\mathcal{Q}}}\frac{1}{\phi(Q)}\left(\frac{1}{|Q|}\int_{Q}\left|f(x)-P_Q^{\delta}(f)(x)\right|^rdx\right)^{\frac{1}{r}}, \end{align*} when $r<\infty$, and \begin{align*} \left\|f\right\|_{\mathcal{L}_{r,\phi,\delta}^{\mathrm{loc}}}:=\sup_{\underset{|Q|\geq1}{Q\in\mathcal{Q}}}\frac{1}{\phi(Q)}\left\|f\right\|_{L^{\infty}(Q)}+\sup_{\underset{|Q|<1}{Q\in\mathcal{Q}}}\frac{1}{\phi(Q)}\left\|f-P_Q^{\delta}(f)\right\|_{L^{\infty}(Q)}, \end{align*} when $r=\infty$. \end{defn} The following inequalities were obtained in \cite{AbFt3}.\\ For $0<q\leq1$, $0<p<\infty$, $0<\eta<\infty$ and $1\leq r<\infty$, we have \begin{align} \left\|g\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{\mathrm{loc}}}\leq2\left\|g\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\mathrm{loc}}}, \label{dualqploc3} \end{align} for all $g\in\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\mathrm{loc}}$, and hence \begin{align} \left\|g\right\|_{\mathcal{L}_{r,\phi_{2},\delta}^{\mathrm{loc}}}\leq\left\|g\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{\mathrm{loc}}}\leq2\left\|g\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\mathrm{loc}}}, \label{dualqploc3bbi} \end{align} when $0<q\leq1$ and $q\leq p<\infty$. Furthermore, when $0<q,p\leq1$ and $0<\eta\leq1$, we have \begin{align} \left\|g\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\mathrm{loc}}}\approx\left\|g\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{\mathrm{loc}}}, \label{dualqploc3bisbis} \end{align} which means that $\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\mathrm{loc}}=\mathcal{L}_{r,\phi_{1},\delta}^{\mathrm{loc}}$ with equivalent norms. We also have, for $0<q\leq1$, $0<p<\infty$, $0<\eta<\infty$ and $1\leq r<\infty$, \begin{align} \left\|g\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}\raisebox{-1ex}{$~\stackrel{\textstyle <}{\sim}~$}\left\|g\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\mathrm{loc}}}, \label{dualqploc3bisbisbis} \end{align} for all $g\in\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\mathrm{loc}}$. For $T\in\left(\mathcal{H}_{\mathrm{loc}}^{(q,p)}\right)^{\ast}$, the topological dual space of $\mathcal{H}_{\mathrm{loc}}^{(q,p)}$, we set $$\left\|T\right\|:=\left\|T\right\|_{\left(\mathcal{H}_{\mathrm{loc}}^{(q,p)}\right)^{\ast}}=\sup_{\underset{\left\|f\right\|_{\mathcal{H}_{\mathrm{loc}}^{(q,p)}}\leq 1}{f\in\mathcal{H}_{\mathrm{loc}}^{(q,p)}}}|T(f)|.$$ The following results were proved in \cite{AbFt3}.
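Before stating them, let us illustrate Condition (\ref{dualqploc}) in the simplest situation; this is only an informal observation and is not used in the sequel. Take a family reduced to a single open set, say $\Omega^{j_0}=\Omega\neq\mathbb{R}^d$ and $\Omega^{j}=\emptyset$ for $j\neq j_0$. Since $\left\|2^{j_0\eta}\chi_{\Omega}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}^{\frac{1}{\eta}}=2^{j_0}\left\|\chi_{\Omega}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}^{\frac{1}{\eta}}=2^{j_0}\left\|\chi_{\Omega}\right\|_{q,p}$, Condition (\ref{dualqploc}) then reduces, after simplification by $2^{j_0}$, to the single-set estimate
\begin{align*}
\textit{O}(g,\Omega,r)^{\mathrm{loc}}\leq C\left\|\chi_{\Omega}\right\|_{q,p},
\end{align*}
uniformly over such open sets $\Omega$; applied to a small open neighbourhood of a fixed cube, this is, roughly speaking, the kind of estimate behind (\ref{dualqploc3}).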
{\textbf{m}athfrak b}egin{thm}\cite[Theorem 4.3]{AbFt3} \label{theoremdualqploc} Suppose that $0<q\leq1<p<\infty$ and $\delta\geq\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor$. Let $p<r\leq\infty$. Then $\left(\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}$ is isomorphic to $\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$, where $\frac{1}{r}+\frac{1}{r'}=1$, $0<\eta<q$ if $r<\infty$, and $0<\eta\leq1$ if $r=\infty$, with equivalent norms. More precisely, we have the following assertions: {\textbf{m}athfrak b}egin{enumerate} \item Let $g\in\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$ and $\textbf{m}athcal{H}_{\textbf{m}athrm{loc},fin}^{(q,p)}$ be the subspace of $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ consisting of finite linear combinations of local $(q,r,\delta)$-atoms. Then the mapping $$T_g:\textbf{m}athcal{H}_{\textbf{m}athrm{loc},fin}^{(q,p)}\ni f\longmapsto\int_{\textbf{m}athbb{R}^d}g(x)f(x)dx,$$ extends to a unique continuous linear functional $\widetilde{T_g}$ on $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ such that {\textbf{m}athfrak b}egin{eqnarray*} \left\|\widetilde{T_g}\right\|=\left\|T_g\right\|\leq C\left\|g\right\|_{\textbf{m}athcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}, \end{eqnarray*} where $C>0$ is a constant independent of $g$. \label{dualpointqploc1} \item Conversely, for any $T\in\left(\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}$, there exists $g\in\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$ such that $T=T_g$; namely $$T(f)=\int_{\textbf{m}athbb{R}^d}g(x)f(x)dx,\ \text{ for all }\ f\in\textbf{m}athcal{H}_{\textbf{m}athrm{loc},fin}^{(q,p)},$$ and {\textbf{m}athfrak b}egin{eqnarray*} \left\|g\right\|_{\textbf{m}athcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}\leq C\left\|T\right\|, \end{eqnarray*} where $C>0$ is a constant independent of $T$. \label{dualpointqploc2} \end{enumerate} \end{thm} {\textbf{m}athfrak b}egin{remark}\cite[Remark 4.1]{AbFt3}\label{remarqedualeqploc1} Suppose that $0<q\leq1<p<\infty$ and $\delta\geq\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor$. Let $1<r<p'$, $0<\eta<q$ and $0<\eta_1\leq1$. 
We have $$\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(q,p,\eta_1)\textbf{m}athrm{loc}}\cong\left(\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}{\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{r,\phi_2,\delta}^{\textbf{m}athrm{loc}}\cong\left(\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^q\right)^{{\textbf{m}athfrak a}st}$$ and $$\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(q,p,\eta_1)\textbf{m}athrm{loc}}\cong\left(\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}{\textbf{m}athfrak h}ookrightarrow L^{p'}\cong\left(\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^p\right)^{{\textbf{m}athfrak a}st}.$$ In particular when $q=1$, we have $$(L^{\infty},\ell^{p'}){\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)\textbf{m}athrm{loc}}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)\textbf{m}athrm{loc}}{\textbf{m}athfrak h}ookrightarrow bmo(\textbf{m}athbb{R}^d)$$ and $$(L^{\infty},\ell^{p'}){\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)\textbf{m}athrm{loc}}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)\textbf{m}athrm{loc}}{\textbf{m}athfrak h}ookrightarrow L^{p'}.$$ \end{remark} {\textbf{m}athfrak b}egin{thm}\cite[Theorem 4.4]{AbFt3} \label{theoremdualunifieloc} Suppose that $0<q\leq1$, $q\leq p<\infty$ and $\delta\geq\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor$. Let $\textbf{m}ax\left\{1,p\right\}<r\leq\infty$. Then $\left(\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}$ is isomorphic to $\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$, where $\frac{1}{r}+\frac{1}{r'}=1$, $0<\eta<q$ if $r<\infty$, and $0<\eta\leq1$ if $r=\infty$, with equivalent norms. More precisely, we have the following assertions: {\textbf{m}athfrak b}egin{enumerate} \item Let $g\in\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$ and $\textbf{m}athcal{H}_{\textbf{m}athrm{loc},fin}^{(q,p)}$ be the subspace of $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ consisting of finite linear combinations of local $(q,r,\delta)$-atoms. Then the mapping $$T_g:\textbf{m}athcal{H}_{\textbf{m}athrm{loc},fin}^{(q,p)}\ni f\longmapsto\int_{\textbf{m}athbb{R}^d}g(x)f(x)dx,$$ extends to a unique continuous linear functional $\widetilde{T_g}$ on $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ such that {\textbf{m}athfrak b}egin{eqnarray*} \left\|\widetilde{T_g}\right\|=\left\|T_g\right\|\leq C\left\|g\right\|_{\textbf{m}athcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}, \end{eqnarray*} where $C>0$ is a constant independent of $g$. \item Conversely, for any $T\in\left(\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}$, there exists $g\in\textbf{m}athcal{L}_{r',\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$ such that $T=T_g$; namely $$T(f)=\int_{\textbf{m}athbb{R}^d}g(x)f(x)dx,\ \text{ for all }\ f\in\textbf{m}athcal{H}_{\textbf{m}athrm{loc},fin}^{(q,p)},$$ and {\textbf{m}athfrak b}egin{eqnarray*} \left\|g\right\|_{\textbf{m}athcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}\leq C\left\|T\right\|, \end{eqnarray*} where $C>0$ is a constant independent of $T$. \end{enumerate} \end{thm} In Theorem \ref{theoremdualunifieloc}, when $p\leq1$, we can take $0<\eta\leq1$ for $1<r\leq\infty$. 
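As a consistency check, given here only for illustration, one may specialize Theorem \ref{theoremdualunifieloc} to $p=q$. In that case $\mathcal{H}_{\mathrm{loc}}^{(q,q)}=\mathcal{H}_{\mathrm{loc}}^{q}$ and, since $\left\|\chi_{Q}\right\|_{q,q}=\left\|\chi_{Q}\right\|_{q}$, the function $\phi_1$ reduces to $\phi_2$; moreover, by (\ref{dualqploc3bisbis}), the space $\mathcal{L}_{r',\phi_1,\delta}^{(q,q,\eta)\mathrm{loc}}$ coincides with $\mathcal{L}_{r',\phi_2,\delta}^{\mathrm{loc}}$. One thus recovers, at least formally, Goldberg's duality
$$\left(\mathcal{H}_{\mathrm{loc}}^{q}\right)^{\ast}\cong\mathcal{L}_{r',\phi_2,\delta}^{\mathrm{loc}},\ \ 0<q\leq1,$$
and in particular $\left(\mathcal{H}_{\mathrm{loc}}^{1}\right)^{\ast}\cong bmo(\mathbb{R}^d)$ \cite{DGG}, in accordance with Remark \ref{remarqedualeqploc1}.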
Also, for $0<q\leq1$, $q\leq p<\infty$, $\delta\geq\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor$ and, $1\leq r<p'$ if $1<p$ or $1\leq r<\infty$ otherwise, where $\frac{1}{p}+\frac{1}{p'}=1$, with $0<\eta<q$ if $1<r$ or $0<\eta\leq1$ if $r=1$, we have {\textbf{m}athfrak b}egin{eqnarray} \left(\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}{\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)}\cong\left(\textbf{m}athcal{H}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}, \label{inclusqp} \end{eqnarray} thanks to Theorems \ref{theoremdualunifie} and \ref{theoremdualunifieloc}. Moreover, $\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}\subsetneq\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)}$, and hence {\textbf{m}athfrak b}egin{align} \left(\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}\subsetneq\left(\textbf{m}athcal{H}^{(q,p)}\right)^{{\textbf{m}athfrak a}st}. \label{compardeshqp1} \end{align} To complete, we give some embedding relations relating to the spaces $\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$ and similar to those obtained in the setting of the spaces $\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)}$ (see \cite[(3.17), (3.18), (3.19), Proposition 3.6 and (3.36)]{AbFt3}). Therefore, we leave the details of their proofs to the reader. For $0<q\leq1$, $0<p<\infty$, $1\leq r<\infty$ and $0<\eta_1\leq\eta_2<\infty$, we have {\textbf{m}athfrak b}egin{align} \left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta_1)\textbf{m}athrm{loc}}}\leq\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta_2)\textbf{m}athrm{loc}}}, \label{revisiop07} \end{align} for all $g\in\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta_2)\textbf{m}athrm{loc}}$, and hence $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta_2)\textbf{m}athrm{loc}}{\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta_1)\textbf{m}athrm{loc}}{\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{\textbf{m}athrm{loc}}$. We have inequalities similar to (\ref{revisiop07}) with the exponents $q$ and $p$. 
More precisely, for $0<q\leq q_1\leq1$, $0<p\leq p_1<\infty$, $1\leq r<\infty$ and $0<\eta<\infty$, with the functions $\phi_{1}(Q):=\frac{\left\|\chi_{_{Q}}\right\|_{q,p}}{|Q|}$, $\psi_{1}(Q):=\frac{\left\|\chi_{_{Q}}\right\|_{q_1,p}}{|Q|}$ and $\varphi_{1}(Q):=\frac{\left\|\chi_{_{Q}}\right\|_{q,p_1}}{|Q|}$ for all $Q\in\textbf{m}athcal{Q}$, we have {\textbf{m}athfrak b}egin{align} \left\|g\right\|_{\textbf{m}athcal{L}_{r,\psi_{1},\delta}^{(q_1,p,\eta)\textbf{m}athrm{loc}}}\leq\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}, \label{revisiop008} \end{align} for all $g\in\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$, and hence $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}{\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{r,\psi_{1},\delta}^{(q_1,p,\eta)\textbf{m}athrm{loc}}$; and {\textbf{m}athfrak b}egin{align} \left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}\leq\left\|g\right\|_{\textbf{m}athcal{L}_{r,\varphi_{1},\delta}^{(q,p_1,\eta)\textbf{m}athrm{loc}}}, \label{revisiop009} \end{align} for all $g\in\textbf{m}athcal{L}_{r,\varphi_{1},\delta}^{(q,p_1,\eta)\textbf{m}athrm{loc}}$, and hence $\textbf{m}athcal{L}_{r,\varphi_{1},\delta}^{(q,p_1,\eta)\textbf{m}athrm{loc}}{\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$. Notice that an inequality similar to (\ref{revisiop07}) holds also for the exponent $r$.\\ We can also define on $\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$ another norm $|||\cdot|||_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}$ equivalent to $\left\|\cdot\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}$, with $0<q\leq1$, $0<p<\infty$, $0<\eta<\infty$ and $1\leq r<\infty$. Indeed, given a function $g$ in $L_{\textbf{m}athrm{loc}}^r$ and an open subset ${\textbf{m}athcal O}mega$ such that ${\textbf{m}athcal O}mega\neq\textbf{m}athbb{R}^d$, we put {\textbf{m}athfrak b}egin{align*}\widetilde{\textit{O}(g,{\textbf{m}athcal O}mega,r,\delta)}^{\textbf{m}athrm{loc}}&:=\sup\left[\sum_{|\widetilde{Q^n}|<1}\inf_{\textbf{m}athfrak{p}\in\textbf{m}athcal{P_{\delta}}}|\widetilde{Q^n}|^{\frac{1}{r'}}\left(\int_{\widetilde{Q^n}}\left|g(x)-\textbf{m}athfrak{p}(x)\right|^{r}dx\right)^{\frac{1}{r}}\right. \\ &+\left. \sum_{|\widetilde{Q^n}|\geq1}|\widetilde{Q^n}|^{\frac{1}{r'}}\left(\int_{\widetilde{Q^n}}|g(x)|^{r}dx\right)^{\frac{1}{r}}\right], \end{align*} where $\frac{1}{r}+\frac{1}{r'}=1$ and the supremum is taken over all families of cubes $\left\{Q^n\right\}_{n\geq0}$ such that $Q^n\subset{\textbf{m}athcal O}mega$, for all $n\geq0$ and $\sum_{n\geq0}\chi_{_{Q^n}}\leq K(d)$, with $\widetilde{Q^n}=C_0Q^n$, where $K(d)>1$ and $C_0>1$ are the same constants as in the definition of $\textit{O}(g,{\textbf{m}athcal O}mega,r)$. We have the following proposition. {\textbf{m}athfrak b}egin{prop}\label{dualqpequival0} Suppose that $0<q\leq1$ and $0<p<\infty$. Let $0<\eta<\infty$ and $1\leq r<\infty$. Let $g$ be a function in $L_{\textbf{m}athrm{loc}}^r$. 
Then $g\in\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$ if and only if there is a constant $C>0$ such that, for all families of open subsets $\left\{{\textbf{m}athcal O}mega^j\right\}_{j\in\textbf{m}athbb{Z}}$ with $\left\|\sum_{j\in\textbf{m}athbb{Z}}2^{j\eta}\chi_{{\textbf{m}athcal O}mega^j}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}<\infty$, we have {\textbf{m}athfrak b}egin{align} \sum_{j\in\textbf{m}athbb{Z}}2^j\widetilde{\textit{O}(g,{\textbf{m}athcal O}mega^j,r,\delta)}^{\textbf{m}athrm{loc}}\leq C\left\|\sum_{j\in\textbf{m}athbb{Z}}2^{j\eta}\chi_{{\textbf{m}athcal O}mega^j}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}^{\frac{1}{\eta}}. \label{dualqpequival} \end{align} Moreover, if we define $|||g|||_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}:=\inf\left\{C>0:\ C \text{ satisfies } (\ref{dualqpequival})\right\}$, then {\textbf{m}athfrak b}egin{align} |||g|||_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}{\textbf{m}athfrak a}pprox\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}} \label{dualqpequival1} \end{align} and $|||\cdot|||_{\textbf{m}athcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}$ is a norm on $\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}$. \end{prop} As a consequence of Proposition \ref{dualqpequival0}, for $0\leq\delta_1\leq\delta_2$ being two integers, $0<q\leq1$, $0<p<\infty$, $0<\eta<\infty$ and $1\leq r<\infty$, we have {\textbf{m}athfrak b}egin{align} \left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta_2}^{(q,p,\eta)\textbf{m}athrm{loc}}}\raisebox{-1ex}{$~\stackrel{\textstyle <}{\sim}~$}\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_{1},\delta_1}^{(q,p,\eta)\textbf{m}athrm{loc}}}, \label{revisiop08} \end{align} for all $g\in\textbf{m}athcal{L}_{r,\phi_{1},\delta_1}^{(q,p,\eta)\textbf{m}athrm{loc}}$. Hence $\textbf{m}athcal{L}_{r,\phi_{1},\delta_1}^{(q,p,\eta)\textbf{m}athrm{loc}}{\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{r,\phi_{1},\delta_2}^{(q,p,\eta)\textbf{m}athrm{loc}}{\textbf{m}athfrak h}ookrightarrow\textbf{m}athcal{L}_{r,\phi_{1},\delta_2}^{\textbf{m}athrm{loc}}$. \section{On the inclusion of $\textbf{m}athcal{H}^{(1,p)}$ in $(L^1,\ell^p)$ and of $\textbf{m}athcal{H}^{(q,p)}$ in $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$} \subsection{On the inclusion of $\textbf{m}athcal{H}^{(1,p)}$ in $(L^1,\ell^p)$} We prove that the inclusion of $\textbf{m}athcal{H}^{(1,p)}$ in $(L^1,\ell^p)$, for $1\leq p<\infty$, is strict; generalizing the one of $\textbf{m}athcal{H}^1$ in $L^1$. For $p=1$, we have $\textbf{m}athcal{H}^{(1,1)}=\textbf{m}athcal{H}^1$ and $(L^1,\ell^1)=L^1$, and it is well known that $\textbf{m}athcal{H}^1\subsetneq L^1$. We so assume that $1<p<\infty$. Our argument is similar to the one of the proof of \cite[Theorem 3.6 (v), p. 26]{YZDYWY}. We know that $\textbf{m}athcal{H}^{(1,p)}\subset(L^1,\ell^p)$ with {\textbf{m}athfrak b}egin{align} \left\|f\right\|_{1,p}\leq\left\|f\right\|_{\textbf{m}athcal{H}^{(1,p)}}, \label{Bairetheori2} \end{align} for all $f\in\textbf{m}athcal{H}^{(1,p)}$, by \cite[Theorem 3.2, p. 1905]{AbFt}. Moreover, $(\textbf{m}athcal{H}^{(1,p)},\left\|\cdot\right\|_{\textbf{m}athcal{H}^{(1,p)}})$ and $((L^1,\ell^p),\left\|\cdot\right\|_{1,p})$ are Banach spaces. Suppose now that $\textbf{m}athcal{H}^{(1,p)}=(L^1,\ell^p)$ as sets. 
Then $(\mathcal{H}^{(1,p)},\left\|\cdot\right\|_{1,p})$ is a Banach space, since $((L^1,\ell^p),\left\|\cdot\right\|_{1,p})$ is one. Thus, (\ref{Bairetheori2}) and the fact that $(\mathcal{H}^{(1,p)},\left\|\cdot\right\|_{\mathcal{H}^{(1,p)}})$ and $(\mathcal{H}^{(1,p)},\left\|\cdot\right\|_{1,p})$ are Banach spaces imply that \begin{eqnarray} \left\|\cdot\right\|_{\mathcal{H}^{(1,p)}}\approx\left\|\cdot\right\|_{1,p} \label{Bairetheori3} \end{eqnarray} on $\mathcal{H}^{(1,p)}$; in other words, there exists a constant $C>0$ such that \begin{eqnarray} \left\|f\right\|_{1,p}\leq\left\|f\right\|_{\mathcal{H}^{(1,p)}}\leq C\left\|f\right\|_{1,p}, \label{Bairetheori4} \end{eqnarray} for all $f\in\mathcal{H}^{(1,p)}$, by \cite[Corollary 2.12 (d), pp. 49-50]{WR1}, \cite[Corollary 2.8, p. 35]{HBZ1}, or \cite[Remarque 5, p. 19]{HBZ}. From (\ref{Bairetheori3}), it follows that $(\mathcal{H}^{(1,p)})^{\ast}=(L^1,\ell^p)^{\ast}$; namely $(L^{\infty},\ell^{p'})\cong\mathcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)}\cong\mathcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)}$ (see Remark \ref{remarqedualeqp1}). Indeed, for $T\in(\mathcal{H}^{(1,p)})^{\ast}$ and for all $f\in(L^1,\ell^p)$, we have \begin{eqnarray*} \left|T(f)\right|\leq C\left\|f\right\|_{\mathcal{H}^{(1,p)}}\leq C\left\|f\right\|_{1,p}, \end{eqnarray*} by (\ref{Bairetheori4}), since $(L^1,\ell^p)\subset\mathcal{H}^{(1,p)}$ by assumption; this implies that $T$ is a continuous linear functional on $(L^1,\ell^p)$; in other words, $(\mathcal{H}^{(1,p)})^{\ast}\subset(L^1,\ell^p)^{\ast}$. But this contradicts the fact that $(L^1,\ell^p)^{\ast}\subsetneq(\mathcal{H}^{(1,p)})^{\ast}$; namely $(L^{\infty},\ell^{p'})\subsetneq\mathcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)}\cong\mathcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)}$ (see Remark \ref{remarqedualeqp1}). Therefore, $\mathcal{H}^{(1,p)}\neq(L^1,\ell^p)$, and hence $\mathcal{H}^{(1,p)}\subsetneq(L^1,\ell^p)$. \begin{remark} Notice that it is possible to deal with the cases $p=1$ and $1<p<\infty$ simultaneously by this method, since $(\mathcal{H}^{(1,p)})^{\ast}\cong\mathcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)}\cong\mathcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)}$ and $(L^{\infty},\ell^{p'})\subsetneq\mathcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)}\cong\mathcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)}$, for $1\leq p<\infty$, under the assumptions of Remark \ref{remarqedualeqp1}, by Theorem \ref{theoremdualunifie} (see \cite[Theorem 3.8]{AbFt3}). \end{remark} \subsection{On the inclusion of $\mathcal{H}^{(q,p)}$ in $\mathcal{H}_{\mathrm{loc}}^{(q,p)}$} We assume that $0<q\leq1$ and $q\leq p<\infty$. The reasoning is similar to that used for the strict inclusion of $\mathcal{H}^{(1,p)}$ in $(L^1,\ell^p)$. We know that $\mathcal{H}^{(q,p)}\subset\mathcal{H}_{\mathrm{loc}}^{(q,p)}$ with \begin{align} \left\|f\right\|_{\mathcal{H}_{\mathrm{loc}}^{(q,p)}}\leq\left\|f\right\|_{\mathcal{H}^{(q,p)}}, \label{Bairetheor1} \end{align} for all $f\in\mathcal{H}^{(q,p)}$, by definition.
Also, $(\mathcal{H}^{(q,p)},\left\|\cdot\right\|_{\mathcal{H}^{(q,p)}})$ and $(\mathcal{H}_{\mathrm{loc}}^{(q,p)},\left\|\cdot\right\|_{\mathcal{H}_{\mathrm{loc}}^{(q,p)}})$ are quasi-Banach spaces, and hence Fr\'echet spaces (F-spaces) (for F-spaces, see \cite[Definition 1, p. 52]{KYos}). Suppose that $\mathcal{H}^{(q,p)}=\mathcal{H}_{\mathrm{loc}}^{(q,p)}$ as sets. Then $(\mathcal{H}^{(q,p)},\left\|\cdot\right\|_{\mathcal{H}_{\mathrm{loc}}^{(q,p)}})$ is a Fr\'echet space, because $(\mathcal{H}_{\mathrm{loc}}^{(q,p)},\left\|\cdot\right\|_{\mathcal{H}_{\mathrm{loc}}^{(q,p)}})$ is one. Thus, (\ref{Bairetheor1}) and the fact that $(\mathcal{H}^{(q,p)},\left\|\cdot\right\|_{\mathcal{H}^{(q,p)}})$ and $(\mathcal{H}^{(q,p)},\left\|\cdot\right\|_{\mathcal{H}_{\mathrm{loc}}^{(q,p)}})$ are Fr\'echet spaces imply that \begin{align} \left\|\cdot\right\|_{\mathcal{H}_{\mathrm{loc}}^{(q,p)}}\approx\left\|\cdot\right\|_{\mathcal{H}^{(q,p)}} \label{Bairetheor2} \end{align} on $\mathcal{H}^{(q,p)}$; in other words, there exists a constant $C>0$ such that \begin{align} \left\|f\right\|_{\mathcal{H}_{\mathrm{loc}}^{(q,p)}}\leq\left\|f\right\|_{\mathcal{H}^{(q,p)}}\leq C\left\|f\right\|_{\mathcal{H}_{\mathrm{loc}}^{(q,p)}}, \label{Bairetheor3} \end{align} for all $f\in\mathcal{H}^{(q,p)}$, by \cite[Corollary 2.12 (d), pp. 49-50]{WR1}. From (\ref{Bairetheor2}), it follows that $(\mathcal{H}^{(q,p)})^{\ast}=(\mathcal{H}_{\mathrm{loc}}^{(q,p)})^{\ast}$; namely $\mathcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)}\cong\mathcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\mathrm{loc}}$ (see (\ref{inclusqp})). In fact, for $T\in(\mathcal{H}^{(q,p)})^{\ast}$ and for all $f\in\mathcal{H}_{\mathrm{loc}}^{(q,p)}$, we have \begin{eqnarray*} \left|T(f)\right|\leq C\left\|f\right\|_{\mathcal{H}^{(q,p)}}\leq C\left\|f\right\|_{\mathcal{H}_{\mathrm{loc}}^{(q,p)}}, \end{eqnarray*} by (\ref{Bairetheor3}), since $\mathcal{H}_{\mathrm{loc}}^{(q,p)}\subset\mathcal{H}^{(q,p)}$ by assumption; this implies that $T$ is a continuous linear functional on $\mathcal{H}_{\mathrm{loc}}^{(q,p)}$; in other words, $(\mathcal{H}^{(q,p)})^{\ast}\subset(\mathcal{H}_{\mathrm{loc}}^{(q,p)})^{\ast}$. But this contradicts the fact that $(\mathcal{H}_{\mathrm{loc}}^{(q,p)})^{\ast}\subsetneq(\mathcal{H}^{(q,p)})^{\ast}$; namely $\mathcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\mathrm{loc}}\subsetneq\mathcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)}$ (see (\ref{compardeshqp1})). Consequently, $\mathcal{H}^{(q,p)}\neq\mathcal{H}_{\mathrm{loc}}^{(q,p)}$, and hence $\mathcal{H}^{(q,p)}\subsetneq\mathcal{H}_{\mathrm{loc}}^{(q,p)}$. \begin{remark} We point out that $\mathcal{H}^{(1,p)}\subsetneq(L^1,\ell^p)$, for $1\leq p<\infty$, can be deduced from $\mathcal{H}^{(q,p)}\subsetneq\mathcal{H}_{\mathrm{loc}}^{(q,p)}$.
Indeed, we have $\textbf{m}athcal{H}^{(1,p)}\subsetneq\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(1,p)}\subset(L^1,\ell^p)$, by \cite[Theorem 3.2, p. 1905]{AbFt}, and hence $\textbf{m}athcal{H}^{(1,p)}\subsetneq(L^1,\ell^p)$. Also, although $\textbf{m}athcal{H}^{(q,p)}\subset\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ with $\left\|f\right\|_{\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}}\leq\left\|f\right\|_{\textbf{m}athcal{H}^{(q,p)}}$, for all $f\in\textbf{m}athcal{H}^{(q,p)}$, and $(\textbf{m}athcal{H}^{(q,p)},\left\|\cdot\right\|_{\textbf{m}athcal{H}^{(q,p)}})$ and $(\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)},\left\|\cdot\right\|_{\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}})$ are Fr\'echet spaces for $0<q\leq1$ and $0<p<q\leq1$, it is not yet clear that $\textbf{m}athcal{H}^{(q,p)}\subsetneq\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(q,p)}$ for this case. On the other hand, contrary to $\textbf{m}athcal{H}^{(1,p)}\subsetneq(L^1,\ell^p)$, it is not clear that $\textbf{m}athcal{H}_{\textbf{m}athrm{loc}}^{(1,p)}\subsetneq(L^1,\ell^p)$. In fact, although $(L^{\infty},\ell^{p'})\subset\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)\textbf{m}athrm{loc}}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)\textbf{m}athrm{loc}}$, by Remark \ref{remarqedualeqploc1}, it is not clear that this inclusion is strict. The difficulty is that, contrary to the proof of $(L^{\infty},\ell^{p'})\subsetneq\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)}$, we have $(L^{\infty},\ell^{p'})\subset L^{\infty}$, $\textbf{m}athcal{P}_{\delta}\cap L^{\infty}=\textbf{m}athbb{C}=\textbf{m}athcal{P}_{\delta}\cap\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(1,1,\eta_1)\textbf{m}athrm{loc}}=\textbf{m}athcal{P}_{\delta}\cap\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,1,\eta)\textbf{m}athrm{loc}}$ and $\textbf{m}athcal{P}_{\delta}\cap\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)\textbf{m}athrm{loc}}=\textbf{m}athcal{P}_{\delta}\cap\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)\textbf{m}athrm{loc}}=\left\{0\right\}$ for $1<p<\infty$. Hence we can not conclude that $(L^{\infty},\ell^{p'})\subsetneq\textbf{m}athcal{L}_{1,\phi_1,\delta}^{(1,p,\eta_1)\textbf{m}athrm{loc}}\cong\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,p,\eta)\textbf{m}athrm{loc}}$ for $1\leq p<\infty$. More generally, contrary to $\textbf{m}athcal{P}_{\delta}\subset\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)}$ for $0<q\leq 1$, $0<p<\infty$, $1\leq r<\infty$, $0<\eta<\infty$ and $\delta\geq0$, we have for $\delta\geq\textbf{m}ax\left\{\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor,\left\lfloor d\left(\frac{1}{p}-1\right)\right\rfloor\right\}$, {\textbf{m}athfrak b}egin{align} \textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}=\textbf{m}athbb{C}, \label{Bairetheor5} \end{align} for $0<q,p\leq 1$ with $q\neq1$ or $p\neq1$, and {\textbf{m}athfrak b}egin{align} \textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}=\left\{0\right\}, \label{Bairetheor6} \end{align} for $0<q\leq 1$ and $1<p<\infty$, with $1\leq r<p'$ if $1<p$ or $1\leq r<\infty$ if $p\leq1$, and $0<\eta<q$ if $1<r$ or $0<\eta\leq1$ if $r=1$. 
When $q=p=1$, we have for $\delta\geq0$, \begin{align} \mathcal{L}_{r,\phi_1,\delta}^{(1,1,\eta)\mathrm{loc}}\cap\mathcal{P}_{\delta}=\mathbb{C}, \label{Bairetheor7} \end{align} and hence $bmo(\mathbb{R}^d)\cap\mathcal{P}_{\delta}=\mathbb{C}$. \end{remark} Let us give the proofs of (\ref{Bairetheor5}), (\ref{Bairetheor6}) and (\ref{Bairetheor7}). We shall need the following well-known estimates, for which we shall give a proof for the reader's convenience. For $0<q,p<\infty$, we have \begin{align} \left\|\chi_{Q}\right\|_{q,p}\approx\left\|\chi_{Q}\right\|_{p}, \label{Bairetheor8} \end{align} for all cubes $Q$ such that $|Q|\geq1$, and \begin{align} \left\|\chi_{Q}\right\|_{q,p}\approx\left\|\chi_{Q}\right\|_{q}, \label{Bairetheor9} \end{align} for all cubes $Q$ such that $|Q|\leq1$. The proofs of (\ref{Bairetheor8}) and (\ref{Bairetheor9}) will be given afterwards. Proof of (\ref{Bairetheor5}). Let $c\in\mathbb{C}$. Consider the constant function $g=c$. We have $g\in\mathcal{P}_{\delta}$ and, according to (\ref{Bairetheor8}), \begin{align*} \left\|g\right\|_{\mathcal{L}_{r,\phi_1,\delta}^{\mathrm{loc}}}&=\sup_{\underset{|Q|\geq1}{Q\in\mathcal{Q}}}\frac{|Q|}{\left\|\chi_{Q}\right\|_{q,p}}\left(\frac{1}{|Q|}\int_{Q}|g(x)|^rdx\right)^{\frac{1}{r}}\\ &\approx\sup_{\underset{|Q|\geq1}{Q\in\mathcal{Q}}}\frac{|Q|}{\left\|\chi_{Q}\right\|_{p}}\left(\frac{1}{|Q|}\int_{Q}|g(x)|^rdx\right)^{\frac{1}{r}}\\ &=\sup_{\underset{|Q|\geq1}{Q\in\mathcal{Q}}}|Q|^{1-\frac{1}{p}}\left(\frac{1}{|Q|}\int_{Q}|g(x)|^rdx\right)^{\frac{1}{r}}=|c|\sup_{\underset{|Q|\geq1}{Q\in\mathcal{Q}}}|Q|^{1-\frac{1}{p}}=|c|<\infty, \end{align*} because $1-\frac{1}{p}\leq0$ implies that $0<|Q|^{1-\frac{1}{p}}\leq1$ for all cubes $Q$ such that $|Q|\geq1$. Hence $g=c\in\mathcal{L}_{r,\phi_1,\delta}^{\mathrm{loc}}\cong\mathcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\mathrm{loc}}$, by (\ref{dualqploc3bisbis}). Therefore, $\mathbb{C}\subset\mathcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\mathrm{loc}}\cap\mathcal{P}_{\delta}$. For the converse inclusion, namely $\mathcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\mathrm{loc}}\cap\mathcal{P}_{\delta}\subset\mathbb{C}$, we have $$\mathcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\mathrm{loc}}\cap\mathcal{P}_{\delta}\subset\mathcal{L}_{r,\phi_2,\delta}^{\mathrm{loc}}\cap\mathcal{P}_{\delta}=\Lambda_{d\left(\frac{1}{q}-1\right)}\cap\mathcal{P}_{\delta}\subset L^{\infty}\cap\mathcal{P}_{\delta}=\mathbb{C},$$ when $0<q\leq p\leq 1$ and $q\neq1$, because $\mathcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\mathrm{loc}}\subset\mathcal{L}_{r,\phi_2,\delta}^{\mathrm{loc}}=\Lambda_{d\left(\frac{1}{q}-1\right)}\subset L^{\infty}$, by (\ref{dualqploc3bbi}), where $\Lambda_{d\left(\frac{1}{q}-1\right)}$ is the dual space of $\mathcal{H}_{\mathrm{loc}}^q$ defined by D. Goldberg \cite{DGG}.
On the other hand, we have $$\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}=\textbf{m}athcal{L}_{r,\phi_3,\delta}^{\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}=\Lambda_{d\left(\frac{1}{p}-1\right)}\cap\textbf{m}athcal{P}_{\delta}\subset L^{\infty}\cap\textbf{m}athcal{P}_{\delta}=\textbf{m}athbb{C},$$ when $0<p,q\leq1$ and $p\neq1$, because for all $g\in\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}$ (or $g\in\textbf{m}athcal{L}_{r,\phi_3,\delta}^{\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}$), {\textbf{m}athfrak b}egin{align*} \left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}}{\textbf{m}athfrak a}pprox\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_1,\delta}^{\textbf{m}athrm{loc}}}&=\sup_{\underset{|Q|\geq1}{Q\in\textbf{m}athcal{Q}}}\frac{|Q|}{\left\|\chi_{Q}\right\|_{q,p}}\left(\frac{1}{|Q|}\int_{Q}|g(x)|^rdx\right)^{\frac{1}{r}}\\ &{\textbf{m}athfrak a}pprox\sup_{\underset{|Q|\geq1}{Q\in\textbf{m}athcal{Q}}}\frac{|Q|}{\left\|\chi_{Q}\right\|_{p}}\left(\frac{1}{|Q|}\int_{Q}|g(x)|^rdx\right)^{\frac{1}{r}}=\left\|g\right\|_{\textbf{m}athcal{L}_{r,\phi_3,\delta}^{\textbf{m}athrm{loc}}}, \end{align*} and $\textbf{m}athcal{L}_{r,\phi_3,\delta}^{\textbf{m}athrm{loc}}=\Lambda_{d\left(\frac{1}{p}-1\right)}\subset L^{\infty}$ (since $\delta\geq\left\lfloor d\left(\frac{1}{p}-1\right)\right\rfloor$). This establishes (\ref{Bairetheor5}). For (\ref{Bairetheor6}), it is clear that $\left\{0\right\}\subset\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}$. Conversely, let $g\in\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}$. We have $g\in L^{p'}\cap\textbf{m}athcal{P}_{\delta}$, by Remark \ref{remarqedualeqploc1}. Hence necessarily $g$ is a null polynomial, because all non-null polynomials do not belong to $L^{p'}$, $1<p'<\infty$. Thus, $\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(q,p,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}\subset\left\{0\right\}$, which proves (\ref{Bairetheor6}). For the proof of (\ref{Bairetheor7}), let $0<p<1$. If $\delta\geq\left\lfloor d\left(\frac{1}{p}-1\right)\right\rfloor$, then we have $\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,1,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}\subset\textbf{m}athcal{L}_{r,\varphi_1,\delta}^{(1,p,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}\subset\textbf{m}athbb{C}$, by (\ref{revisiop009}) and (\ref{Bairetheor5}). If $\delta\leq\left\lfloor d\left(\frac{1}{p}-1\right)\right\rfloor$, then we have $\textbf{m}athcal{L}_{r,\phi_1,\delta}^{(1,1,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}\subset\textbf{m}athcal{L}_{r,\varphi_1,\delta}^{(1,p,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\delta}\subset\textbf{m}athcal{L}_{r,\varphi_1,\left\lfloor d\left(\frac{1}{p}-1\right)\right\rfloor}^{(1,p,\eta)\textbf{m}athrm{loc}}\cap\textbf{m}athcal{P}_{\left\lfloor d\left(\frac{1}{p}-1\right)\right\rfloor}\subset\textbf{m}athbb{C}$, by (\ref{revisiop009}), (\ref{revisiop08}) and (\ref{Bairetheor5}). 
For the converse inclusion, namely $\mathbb{C}\subset\mathcal{L}_{r,\phi_1,\delta}^{(1,1,\eta)\mathrm{loc}}\cap\mathcal{P}_{\delta}$, see the first part of the proof of (\ref{Bairetheor5}); hence (\ref{Bairetheor7}) is proved.\\ Now we give the proofs of Estimates (\ref{Bairetheor8}) and (\ref{Bairetheor9}). Let $Q$ be a cube and $\ell_Q$ be its side-length. Without loss of generality, we can assume that $Q$ is closed. We have \begin{equation} 1\leq M_Q:=\sharp{\left\{k\in\mathbb{Z}^d:\ Q\cap Q_k\neq\emptyset\right\}}<(\ell_Q+2)^d\leq\left\{\begin{array}{lll}3^d&\text{ if }&|Q|\leq 1\\ \\ 3^d\ell_Q^d&\text{ if }&|Q|\geq1,\end{array}\right. \label{Bairetheor10} \end{equation} where $\sharp$ denotes cardinality. To see (\ref{Bairetheor10}), denote by $x^Q=(x_1^Q,x_2^Q,\ldots,x_d^Q)$ the center of $Q$. Let $k\in\mathbb{Z}^d$ be such that $Q\cap Q_k\neq\emptyset$ (such a $k$ exists because $\left\{Q_k\right\}_{k\in\mathbb{Z}^d}$ is a partition of $\mathbb{R}^d$; recall that $Q_k=k+[0,1)^{d}$). Then there exists $x=(x_1,x_2,\ldots,x_d)\in\mathbb{R}^d$ such that $x\in Q\cap Q_k$, and hence $x$ satisfies \begin{align} x_i^Q-\frac{\ell_Q}{2}\leq x_i\leq\frac{\ell_Q}{2}+x_i^Q\ \text{ and }\ k_i\leq x_i<k_i+1, \label{Afterwa} \end{align} for every $i=1,2,\ldots,d$, where the $k_i$'s are the coordinates of $k$. But (\ref{Afterwa}) implies that \begin{align*} x_i^Q-\frac{\ell_Q}{2}-1<k_i\leq x_i^Q+\frac{\ell_Q}{2}\ , \end{align*} for every $i=1,2,\ldots,d$. Since $k_i\in\mathbb{Z}$, it follows that \begin{align*} \left\lfloor x_i^Q-\frac{\ell_Q}{2}-1\right\rfloor+1\leq k_i\leq \left\lfloor x_i^Q+\frac{\ell_Q}{2}\right\rfloor\ , \end{align*} for every $i=1,2,\ldots,d$. Thus, for fixed $i$, denoting by $n(k_i)$ the number of possible values that $k_i$ can take, we have \begin{align*} n(k_i)&=\left\lfloor x_i^Q+\frac{\ell_Q}{2}\right\rfloor-\left(\left\lfloor x_i^Q-\frac{\ell_Q}{2}-1\right\rfloor+1\right)+1\\ &<x_i^Q+\frac{\ell_Q}{2}-\left(x_i^Q-\frac{\ell_Q}{2}-1\right)+1\\ &=\ell_Q+2. \end{align*} Hence $$1\leq\sharp{\left\{k\in\mathbb{Z}^d:\ Q\cap Q_k\neq\emptyset\right\}}<(\ell_Q+2)^d.$$ Moreover, if $|Q|\leq 1$, then $\ell_Q\leq1$, and hence $(\ell_Q+2)^d\leq 3^d$. If $|Q|\geq1$, then $\ell_Q\geq1$, and hence $(\ell_Q+2)^d\leq(\ell_Q+2\ell_Q)^d=3^d\ell_Q^d$. This establishes (\ref{Bairetheor10}).\\ Also, we have \begin{align} \left\|\chi_{Q}\right\|_{q,p}^p=\sum_{k\in\mathbb{Z}^d}|Q\cap Q_k|^{\frac{p}{q}}=\sum_{\underset{Q\cap Q_k\neq\emptyset}{k\in\mathbb{Z}^d}}|Q\cap Q_k|^{\frac{p}{q}}. \label{Afterwa1} \end{align} Assume that $\frac{p}{q}\leq1$ (i.e., $q\geq p$). Then we have \begin{align*} |Q|^{\frac{p}{q}}=\left(\sum_{\underset{Q\cap Q_k\neq\emptyset}{k\in\mathbb{Z}^d}}|Q\cap Q_k|\right)^{\frac{p}{q}}\leq\sum_{\underset{Q\cap Q_k\neq\emptyset}{k\in\mathbb{Z}^d}}|Q\cap Q_k|^{\frac{p}{q}}&\leq M_Q^{-\frac{\frac{p}{q}}{\left(\frac{p}{q}\right)'}}\left(\sum_{\underset{Q\cap Q_k\neq\emptyset}{k\in\mathbb{Z}^d}}|Q\cap Q_k|\right)^{\frac{p}{q}}\\ &=M_Q^{-\frac{\frac{p}{q}}{\left(\frac{p}{q}\right)'}}|Q|^{\frac{p}{q}}, \end{align*} by \cite[Proposition 2.1, p.
311]{RBHS}, and hence {\textbf{m}athfrak b}egin{align*} \left\|\chi_{Q}\right\|_q^p=|Q|^{\frac{p}{q}}\leq\left\|\chi_{Q}\right\|_{q,p}^p\leq M_Q^{-\frac{\frac{p}{q}}{\left(\frac{p}{q}\right)'}}|Q|^{\frac{p}{q}}&=M_Q^{-\frac{\frac{p}{q}}{\left(\frac{p}{q}\right)'}}\left\|\chi_{Q}\right\|_q^p\\ &=M_Q^{1-\frac{p}{q}}\left\|\chi_{Q}\right\|_q^p\leq(\ell_Q+2)^{d\left(1-\frac{p}{q}\right)}\left\|\chi_{Q}\right\|_q^p, \end{align*} according to (\ref{Afterwa1}) and (\ref{Bairetheor10}). Therefore, {\textbf{m}athfrak b}egin{align*} \left\|\chi_{Q}\right\|_q\leq\left\|\chi_{Q}\right\|_{q,p}\leq(\ell_Q+2)^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_q. \end{align*} Thus: ${\textbf{m}athfrak b}ullet$ If $|Q|\leq1$, then {\textbf{m}athfrak b}egin{equation} \left\|\chi_{Q}\right\|_q\leq\left\|\chi_{Q}\right\|_{q,p}\leq(\ell_Q+2)^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_q\leq3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_q, \label{Bairetheor11} \end{equation} by (\ref{Bairetheor10}). This states (\ref{Bairetheor9}) when $q\geq p$. ${\textbf{m}athfrak b}ullet$ If $|Q|\geq1$, then {\textbf{m}athfrak b}egin{align*} \left\|\chi_{Q}\right\|_q\leq\left\|\chi_{Q}\right\|_{q,p}\leq(\ell_Q+2)^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_q&\leq3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\ell_Q^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_q\\ &=3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\ell_Q^{\frac{d}{p}}=3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_p, \end{align*} by (\ref{Bairetheor10}). Furthermore, $\left\|\chi_{Q}\right\|_p\leq\left\|\chi_{Q}\right\|_{q,p}$, when $q\geq p$. Hence {\textbf{m}athfrak b}egin{equation} \left\|\chi_{Q}\right\|_p\leq\left\|\chi_{Q}\right\|_{q,p}\leq3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_p. \label{Bairetheor12} \end{equation} This states (\ref{Bairetheor8}) when $q\geq p$. Assume that $\frac{p}{q}\geq1$ (ie $q\leq p$). Then we have {\textbf{m}athfrak b}egin{align*} M_Q^{-\frac{\frac{p}{q}}{\left(\frac{p}{q}\right)'}}|Q|^{\frac{p}{q}}=M_Q^{-\frac{\frac{p}{q}}{\left(\frac{p}{q}\right)'}}\left(\sum_{\underset{Q\cap Q_k\neq\emptyset}{k\in\textbf{m}athbb{Z}^d}}|Q\cap Q_k|\right)^{\frac{p}{q}}&\leq\sum_{\underset{Q\cap Q_k\neq\emptyset}{k\in\textbf{m}athbb{Z}^d}}|Q\cap Q_k|^{\frac{p}{q}}\\ &\leq\left(\sum_{\underset{Q\cap Q_k\neq\emptyset}{k\in\textbf{m}athbb{Z}^d}}|Q\cap Q_k|\right)^{\frac{p}{q}}=|Q|^{\frac{p}{q}}, \end{align*} by \cite[Proposition 2.1, p. 311]{RBHS}, and hence {\textbf{m}athfrak b}egin{align*} (\ell_Q+2)^{d\left(1-\frac{p}{q}\right)}\left\|\chi_{Q}\right\|_q^p\leq M_Q^{1-\frac{p}{q}}\left\|\chi_{Q}\right\|_q^p&=M_Q^{-\frac{\frac{p}{q}}{\left(\frac{p}{q}\right)'}}\left\|\chi_{Q}\right\|_q^p\\ &=M_Q^{-\frac{\frac{p}{q}}{\left(\frac{p}{q}\right)'}}|Q|^{\frac{p}{q}}\leq\left\|\chi_{Q}\right\|_{q,p}^p\leq|Q|^{\frac{p}{q}}=\left\|\chi_{Q}\right\|_q^p, \end{align*} according to (\ref{Bairetheor10}) and (\ref{Afterwa1}). Consequently, {\textbf{m}athfrak b}egin{eqnarray*} (\ell_Q+2)^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_q\leq\left\|\chi_{Q}\right\|_{q,p}\leq\left\|\chi_{Q}\right\|_q. 
\end{eqnarray*} Thus: ${\textbf{m}athfrak b}ullet$ If $|Q|\leq1$, then {\textbf{m}athfrak b}egin{equation} 3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_q\leq(\ell_Q+2)^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_q\leq\left\|\chi_{Q}\right\|_{q,p}\leq\left\|\chi_{Q}\right\|_q, \label{Bairetheor13} \end{equation} by (\ref{Bairetheor10}). This establishes (\ref{Bairetheor9}) when $q\leq p$. ${\textbf{m}athfrak b}ullet$ If $|Q|\geq1$, then {\textbf{m}athfrak b}egin{align*} 3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_p=3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\ell_Q^{\frac{d}{p}}&=3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\ell_Q^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_q\\ &\leq(\ell_Q+2)^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_q\leq\left\|\chi_{Q}\right\|_{q,p}\leq\left\|\chi_{Q}\right\|_q, \end{align*} by (\ref{Bairetheor10}). Moreover, $\left\|\chi_{Q}\right\|_{q,p}\leq\left\|\chi_{Q}\right\|_p$, when $q\leq p$. Hence {\textbf{m}athfrak b}egin{equation} 3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\left\|\chi_{Q}\right\|_p\leq\left\|\chi_{Q}\right\|_{q,p}\leq\left\|\chi_{Q}\right\|_p. \label{Bairetheor14} \end{equation} This establishes (\ref{Bairetheor8}) when $q\leq p$. To sum up, for $0<q,p<\infty$, we have\\ ${\textbf{m}athfrak b}ullet$ when $|Q|\leq1$, {\textbf{m}athfrak b}egin{equation*} \textbf{m}in\left\{1,3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\right\}\left\|\chi_{Q}\right\|_q\leq\left\|\chi_{Q}\right\|_{q,p}\leq\textbf{m}ax\left\{1,3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\right\}\left\|\chi_{Q}\right\|_q, \end{equation*} by (\ref{Bairetheor11}) and (\ref{Bairetheor13}), which gives (\ref{Bairetheor9}).\\ ${\textbf{m}athfrak b}ullet$ when $|Q|\geq1$, {\textbf{m}athfrak b}egin{equation*} \textbf{m}in\left\{1,3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\right\}\left\|\chi_{Q}\right\|_p\leq\left\|\chi_{Q}\right\|_{q,p}\leq\textbf{m}ax\left\{1,3^{d\left(\frac{1}{p}-\frac{1}{q}\right)}\right\}\left\|\chi_{Q}\right\|_p, \end{equation*} by (\ref{Bairetheor12}) and (\ref{Bairetheor14}), which gives (\ref{Bairetheor8}). {\textbf{m}athfrak b}egin{remark} We point out that (\ref{Bairetheor5}) is valid for $0<p,q\leq1$, $1\leq r<\infty$, $0<\eta\leq1$ and $\delta\geq0$, by (\ref{revisiop08}) and (\ref{Bairetheor7}), meanwhile (\ref{Bairetheor6}) is valid for $1\leq r<\infty$, $0<\eta<\infty$ and $\delta\geq0$, according to (\ref{revisiop07}) and (\ref{revisiop08}). \end{remark} \section{Boundedness of some classical operators on the dual spaces of Hardy-amalgam spaces} \subsection{Convolution Operator} Given a function $k$ defined and locally integrable on $\textbf{m}athbb{R}^d{\textbf{m}athfrak b}ackslash\left\{0\right\}$, we say that a tempered distribution $K$ in $\textbf{m}athbb{R}^d$ ($K\in\textbf{m}athcal{S'}:=\textbf{m}athcal{S'}(\textbf{m}athbb{R}^d)$) coincides with the function $k$ on $\textbf{m}athbb{R}^d{\textbf{m}athfrak b}ackslash\left\{0\right\}$, if {\textbf{m}athfrak b}egin{eqnarray} \left\langle K, \psi\right\rangle=\int_{\textbf{m}athbb{R}^d}k(x)\psi(x)dx,\label{applicattheo090} \end{eqnarray} for all $\psi\in\textbf{m}athcal{S}$, with $\text{supp}(\psi)\subset\textbf{m}athbb{R}^d{\textbf{m}athfrak b}ackslash\left\{0\right\}$. 
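For illustration (this example is not needed for what follows), a standard tempered distribution of this type is the principal value associated with the Riesz kernel $k_j(x)=c_d\,\frac{x_j}{|x|^{d+1}}$, $1\leq j\leq d$, where $c_d>0$ is a normalizing constant: the tempered distribution $K_j$ defined by
$$\left\langle K_j,\psi\right\rangle:=\lim_{\varepsilon\rightarrow0}\int_{|x|\geq\varepsilon}k_j(x)\psi(x)dx,\ \ \psi\in\mathcal{S},$$
coincides with $k_j$ on $\mathbb{R}^d\backslash\left\{0\right\}$ in the sense of (\ref{applicattheo090}), since for $\psi\in\mathcal{S}$ with $\text{supp}(\psi)\subset\mathbb{R}^d\backslash\left\{0\right\}$ the integral converges absolutely and the limit is not needed. This is the kernel of the Riesz transform $R_j$, and it has the principal value form considered below.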
Here, we are interested in tempered distributions $K$ in $\textbf{m}athbb{R}^d$ that coincide with a function $k$ on $\textbf{m}athbb{R}^d{\textbf{m}athfrak b}ackslash\left\{0\right\}$ and that have the form {\textbf{m}athfrak b}egin{eqnarray} \left\langle K, \psi\right\rangle=\lim_{j\rightarrow\infty}\int_{|x|\geq\sigma_j}k(x)\psi(x)dx,\ \ \ \psi\in\textbf{m}athcal{S}, \label{applicattheo091} \end{eqnarray} for some sequence $\sigma_j\downarrow 0$ as $j\rightarrow\infty$ and independent of $\psi$. Also, we consider the convolution operators $T$: $T(f)=K{\textbf{m}athfrak a}st f$, $f\in\textbf{m}athcal{S}$. Thus, when $\widehat{K}\in L^{\infty}$, we have {\textbf{m}athfrak b}egin{eqnarray} T(f)(x)=\int_{\textbf{m}athbb{R}^d}k(x-y)f(y)dy, \label{applicattheo093} \end{eqnarray} for all $f\in L^2$ with compact support and all $x\notin\text{supp}(f)$. For (\ref{applicattheo093}), see \cite[Chap. 3, p. 113]{MA}. From now on, the letter $K$ stands for both the tempered distribution $K$ and the associated function $k$. We recall the following theorem. {\textbf{m}athfrak b}egin{thm}\cite[Theorem 5.1]{JD} \label{theoremsingaj} Let $K$ be a tempered distribution in $\textbf{m}athbb{R}^d$ which coincides with a locally integrable function on $\textbf{m}athbb{R}^d{\textbf{m}athfrak b}ackslash\left\{0\right\}$ and is such that {\textbf{m}athfrak b}egin{eqnarray*} |\widehat{K}(\xi)|\leq A, \end{eqnarray*} {\textbf{m}athfrak b}egin{eqnarray*} \int_{|x|>2|y|}|K(x-y)-K(x)|dx\leq B,\ y\in\textbf{m}athbb{R}^d. \end{eqnarray*} then, for $1<r<\infty$, {\textbf{m}athfrak b}egin{eqnarray*} \left\|K{\textbf{m}athfrak a}st f\right\|_r\leq C_r\left\|f\right\|_r \end{eqnarray*} and {\textbf{m}athfrak b}egin{eqnarray*} \left|\left\{x\in\textbf{m}athbb{R}^d:\ |K{\textbf{m}athfrak a}st f(x)|>\lambda\right\}\right|\leq\frac{C}{\lambda}\left\|f\right\|_1. \end{eqnarray*} \end{thm} Notice that $C_r:=C(d,r,A,B)$ (see \cite{JD}, p. 110). We proved the following results in \cite{AbFt1}. {\textbf{m}athfrak b}egin{thm}\cite[Theorem 4.13]{AbFt1} \label{theoremsing4} Let $K$ be a tempered distribution in $\textbf{m}athbb{R}^d$ that coincides with a locally integrable function on $\textbf{m}athbb{R}^d{\textbf{m}athfrak b}ackslash\left\{0\right\}$ and is such that {\textbf{m}athfrak b}egin{eqnarray} |\widehat{K}(\xi)|\leq A, \label{applicattheo12} \end{eqnarray} and, there exist an integer $\delta>0$ and a constant $B>0$ such that {\textbf{m}athfrak b}egin{eqnarray} |\partial^{{\textbf{m}athfrak b}eta}K(x)|\leq\frac{B}{|x|^{d+|{\textbf{m}athfrak b}eta|}}\ , \label{applicattheo15} \end{eqnarray} for all $x\neq0$ and all multi-indexes ${\textbf{m}athfrak b}eta$ with $|{\textbf{m}athfrak b}eta|\leq\delta$. If $\frac{d}{d+\delta}<q\leq 1$ and $q\leq p<\infty$, then the operator $T(f)=K{\textbf{m}athfrak a}st f$, for all $f\in\textbf{m}athcal{S}$, extends to a bounded operator from $\textbf{m}athcal{H}^{(q,p)}$ to $(L^q,\ell^p)$. \end{thm} {\textbf{m}athfrak b}egin{thm}\cite[Theorem 4.17]{AbFt1} \label{theoremsing5} Let $K$ be a tempered distribution in $\textbf{m}athbb{R}^d$ that coincides with a locally integrable function on $\textbf{m}athbb{R}^d{\textbf{m}athfrak b}ackslash\left\{0\right\}$ and satisfies assumptions (\ref{applicattheo12}) and (\ref{applicattheo15}). If $\frac{d}{d+\delta}<q\leq 1$ and $q\leq p<\infty$, then the operator $T(f)=K{\textbf{m}athfrak a}st f$, for all $f\in\textbf{m}athcal{S}$, extends to a bounded operator from $\textbf{m}athcal{H}^{(q,p)}$ to $\textbf{m}athcal{H}^{(q,p)}$. 
\end{thm} Our results are the following. \begin{thm} \label{applicadualqpconv2} Suppose that $1\leq p<\infty$. Let $1\leq r<p'$ and, $0<\eta<1$ if $1<r<p'$ or $0<\eta\leq1$ if $r=1$. Let $K$ be a tempered distribution in $\mathbb{R}^d$ that coincides with a locally integrable function on $\mathbb{R}^d\backslash\left\{0\right\}$ and satisfies assumptions (\ref{applicattheo12}) and (\ref{applicattheo15}). Then the operator $T(f)=K\ast f$, for all $f\in\mathcal{S}$, is extendable on $(L^{\infty},\ell^{p'})$ and there exists a constant $C>0$ such that \begin{eqnarray} \left\|T(f)\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(1,p,\eta)}}\leq C\left\|f\right\|_{\infty,p'}, \label{bisapplicattheo15} \end{eqnarray} for all $f\in(L^{\infty},\ell^{p'})$. \end{thm} \begin{proof} If $p=1$, then $\mathcal{L}_{r,\phi_{1},\delta}^{(1,1,\eta)}\cong\mathcal{L}_{r,\phi_{1},\delta}\cong\mathcal{L}_{1,\phi_{1},0}=\mathcal{L}_{1,\phi_{2},0}=\mathrm{BMO}(\mathbb{R}^d)$ and $(L^{\infty},\ell^{1'})=(L^{\infty},\ell^{\infty})=L^{\infty}$, and hence (\ref{bisapplicattheo15}) holds by \cite[Corollary 3.4.10 and Remark 3.4.11, pp. 193-194]{LG2} (see also \cite[Proposition 1, p. 156]{MA}). Suppose that $1<p<\infty$. Note that $T$ extends to a bounded operator from $\mathcal{H}^{(1,p)}$ into $(L^1,\ell^p)$, by Theorem \ref{theoremsing4}. Moreover, $(L^{\infty},\ell^{p'})\subset L^{p'}$, $1<p'<\infty$, and $T$ extends to $L^{p'}$ (by Theorem \ref{theoremsingaj}), and hence $T$ is extendable on $(L^{\infty},\ell^{p'})$. Another justification of this fact is to remark that $(L^{\infty},\ell^{p'})\subset L^{\infty}$ and $T$ is well defined on $L^{\infty}$ (see \cite[Remark 4.1.18, p. 223]{LG2}). Without loss of generality, we can assume that $1\leq r<p'$ is such that $r'>\max\left\{2,p\right\}$. Now, consider a family $\left\{\Omega^j\right\}_{j\in\mathbb{Z}}$ of open subsets of $\mathbb{R}^d$ such that $\left\|\sum_{j\in\mathbb{Z}}2^{j\eta}\chi_{\Omega^j}\right\|_{\frac{1}{\eta},\frac{p}{\eta}}<\infty$. We have to show that \begin{eqnarray} \sum_{j\in\mathbb{Z}}2^j\textit{O}(T(f),\Omega^j,r)\leq C\left\|f\right\|_{\infty,p'}\left\|\sum_{j\in\mathbb{Z}}2^{j\eta}\chi_{\Omega^j}\right\|_{\frac{1}{\eta},\frac{p}{\eta}}^{\frac{1}{\eta}}, \label{01applicadualqpconv} \end{eqnarray} where $C>0$ is a constant independent of $f$. Consider the dual operator $T^{\ast}$ of $T$. The kernel $K^{\ast}$ associated with $T^{\ast}$ is given by $K^{\ast}(x)=K(-x)$, for all $x\in\mathbb{R}^d$, and $$\int_{\mathbb{R}^d}T(f)(x)g(x)dx=\int_{\mathbb{R}^d}f(x)T^{\ast}(g)(x)dx,$$ for all $f,g\in L^2$. Moreover, $T^{\ast}$ and $K^{\ast}$ satisfy the same kind of inequalities as do $T$ and $K$ (for more details, see \cite[Chap. 1, (35), p. 36 and Chap. 4, 4.1, pp. 155-156]{MA}). Hence $T^{\ast}$ extends to a bounded operator from $\mathcal{H}^{(1,p)}$ into $(L^1,\ell^p)$, according to Theorem \ref{theoremsing4}. Let $f\in(L^{\infty},\ell^{p'})$.
Consider the subspace $\textbf{m}athcal{H}_{fin}^{(1,p)}$ of $\textbf{m}athcal{H}^{(1,p)}$ consisting of finite linear combinations of $(1,r',\delta)$-atoms. Then, for all elements $g$ of $\textbf{m}athcal{H}_{fin}^{(1,p)}$, we have $g\in L^2\cap L^{p}$ (because $r'>\textbf{m}ax\left\{2,p\right\}$ implies that $(1,r',\delta)$-atoms are $(1,2,\delta)$-atoms and $(1,p,\delta)$-atoms) and $T^{\textbf{m}athfrak a}st(g)\in(L^1,\ell^p)$ with $\left\|T^{\textbf{m}athfrak a}st(g)\right\|_{1,p}\leq C\left\|g\right\|_{\textbf{m}athcal{H}^{(1,p)}}$, $C>0$ being a constant independent of $g$. Further, according to \cite[Chap. 1, (35), p. 36 or Chap. 4, 4.1, pp. 155-156]{MA}, {\textbf{m}athfrak b}egin{align} \int_{\textbf{m}athbb{R}^d}T(f)(x)g(x)dx=\int_{\textbf{m}athbb{R}^d}f(x)T^{\textbf{m}athfrak a}st(g)(x)dx, \label{020applicadualqpconv} \end{align} for all $g\in\textbf{m}athcal{H}_{fin}^{(1,p)}$, since $f\in(L^{\infty},\ell^{p'})\subset L^{\infty}\cap L^{p'}$ and $g\in L^2\cap L^{p}$. Hence {\textbf{m}athfrak b}egin{align} \left|\int_{\textbf{m}athbb{R}^d}T(f)(x)g(x)dx\right|&=\left|\int_{\textbf{m}athbb{R}^d}f(x)T^{\textbf{m}athfrak a}st(g)(x)dx\right|\nonumber\\ &\leq\left\|f\right\|_{\infty,p'}\left\|T^{\textbf{m}athfrak a}st(g)\right\|_{1,p}\leq C\left\|f\right\|_{\infty,p'}\left\|g\right\|_{\textbf{m}athcal{H}^{(1,p)}},\label{021applicadualqpconv} \end{align} for all $g\in\textbf{m}athcal{H}_{fin}^{(1,p)}$. Consequently, the mapping $G_{T(f)}:\textbf{m}athcal{H}_{fin}^{(1,p)}\ni g\textbf{m}apsto\int_{\textbf{m}athbb{R}^d}T(f)(x)g(x)dx$ extends to a unique continuous linear functional $\widetilde{G_{T(f)}}$ on $\textbf{m}athcal{H}^{(1,p)}$, with {\textbf{m}athfrak b}egin{align} \left\|\widetilde{G_{T(f)}}\right\|:=\left\|\widetilde{G_{T(f)}}\right\|_{(\textbf{m}athcal{H}^{(1,p)})^{{\textbf{m}athfrak a}st}}=\sup_{\underset{g\in\textbf{m}athcal{H}^{(1,p)}}{\left\|g\right\|_{\textbf{m}athcal{H}^{(1,p)}}}\leq1}\left|G_{T(f)}(g)\right|\leq C\left\|f\right\|_{\infty,p'}. \label{2applicadualqpconv} \end{align} Furthermore, since $\widetilde{G_{T(f)}}\in(\textbf{m}athcal{H}^{(1,p)})^{{\textbf{m}athfrak a}st}$ and {\textbf{m}athfrak b}egin{align} \widetilde{G_{T(f)}}(g)=\int_{\textbf{m}athbb{R}^d}T(f)(x)g(x)dx, \label{022applicadualqpconv} \end{align} for all $g\in\textbf{m}athcal{H}_{fin}^{(1,p)}$, with $T(f)\in L_{\textbf{m}athrm{loc}}^r$ (since $T(f)\in L^{p'}\subset L_{\textbf{m}athrm{loc}}^{p'}$ and $1\leq r<p'$), by repeating the second part of the proof of Theorem \ref{theoremdualqp} (see \cite[Theorem 3.7]{AbFt3}) with $\widetilde{G_{T(f)}}$, $T(f)$ and $g$ respectively to the place of $T$, $g$ and $f$, we get {\textbf{m}athfrak b}egin{align} \sum_{j\in\textbf{m}athbb{Z}}2^j\textit{O}(T(f),{\textbf{m}athcal O}mega^j,r)\leq C\left\|\widetilde{G_{T(f)}}\right\|\left\|\sum_{j\in\textbf{m}athbb{Z}}2^{j\eta}\chi_{{\textbf{m}athcal O}mega^j}\right\|_{\frac{1}{\eta},\frac{p}{\eta}}^{\frac{1}{\eta}}, \label{023applicadualqpconv} \end{align} with $C>0$ a constant independent of $T(f)$. It follows from (\ref{2applicadualqpconv}) and (\ref{023applicadualqpconv}) that {\textbf{m}athfrak b}egin{align*} \sum_{j\in\textbf{m}athbb{Z}}2^j\textit{O}(T(f),{\textbf{m}athcal O}mega^j,r)\leq C\left\|f\right\|_{\infty,p'}\left\|\sum_{j\in\textbf{m}athbb{Z}}2^{j\eta}\chi_{{\textbf{m}athcal O}mega^j}\right\|_{\frac{1}{\eta},\frac{p}{\eta}}^{\frac{1}{\eta}}, \end{align*} which establishes (\ref{01applicadualqpconv}). 
Hence $T(f)\in\mathcal{L}_{r,\phi_{1},\delta}^{(1,p,\eta)}$ with $\left\|T(f)\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(1,p,\eta)}}\leq C\left\|f\right\|_{\infty,p'}$, which completes the proof. \end{proof} \begin{remark} We point out that the proof of Theorem \ref{applicadualqpconv2} can be given without distinguishing the cases $p=1$ and $p>1$. With this approach, we do not have to show Inequality (\ref{01applicadualqpconv}), which is not necessarily meaningful when $p=1$, since for $p=1$ it is not clear that $T(f)\in L_{\mathrm{loc}}^r$, which is a necessary condition for the definition of $\textit{O}(T(f),\Omega^j,r)$, $j\in\mathbb{Z}$. However, Relations (\ref{020applicadualqpconv}), (\ref{021applicadualqpconv}), (\ref{2applicadualqpconv}) and (\ref{022applicadualqpconv}) are valid for $1\leq p<\infty$, except that $T(f)$ is not necessarily in $L^{\infty}$ for $p=1$. Thus, to overcome the problem with Inequality (\ref{01applicadualqpconv}), once Relation (\ref{022applicadualqpconv}) is obtained, we appeal to Theorem \ref{theoremdualunifie} (2) (see \cite[Theorem 3.8 (2)]{AbFt3}), which allows us to claim that there exists a function $h\in\mathcal{L}_{r,\phi_{1},\delta}^{(1,p,\eta)}$ such that $\widetilde{G_{T(f)}}=\left(\widetilde{G_{T(f)}}\right)_h$; namely $\widetilde{G_{T(f)}}(g)=\int_{\mathbb{R}^d}h(x)g(x)dx$, for all $g\in\mathcal{H}_{fin}^{(1,p)}$, with $\left\|h\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(1,p,\eta)}}\leq C\left\|\widetilde{G_{T(f)}}\right\|$. Hence, according to Relation (\ref{022applicadualqpconv}), we can claim that $T(f)$ can be identified with the function $h$, so that $\left\|T(f)\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(1,p,\eta)}}=\left\|h\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(1,p,\eta)}}\leq C\left\|\widetilde{G_{T(f)}}\right\|$. Finally, we obtain $\left\|T(f)\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(1,p,\eta)}}\leq C\left\|f\right\|_{\infty,p'}$, by (\ref{2applicadualqpconv}). \end{remark} \begin{remark} In Theorem \ref{applicadualqpconv2} (\ref{bisapplicattheo15}), the positive integer $\delta$ can be replaced by $0$. In fact, under the assumptions of Theorem \ref{applicadualqpconv2}, $\mathcal{L}_{r,\phi_{1},0}^{(1,p,\eta)}\cong\mathcal{L}_{r,\phi_{1},\delta}^{(1,p,\eta)}$ for $\delta>0$, by Theorem \ref{theoremdualunifie} (see \cite[Theorem 3.8]{AbFt3}). \end{remark} \begin{cor}\label{applicadualqpconv3} The Riesz transforms $R_j$, $1\leq j\leq d$, are bounded from $(L^{\infty},\ell^{p'})$ into $\mathcal{L}_{r,\phi_{1},\delta}^{(1,p,\eta)}$, for $1\leq p<\infty$, $\delta\geq0$, $1\leq r<p'$ and, $0<\eta<1$ if $1<r<p'$ or $0<\eta\leq1$ if $r=1$. \end{cor} \begin{thm}\label{applicadualqpconv4general} Let $K$ be a tempered distribution in $\mathbb{R}^d$ that coincides with a locally integrable function on $\mathbb{R}^d\backslash\left\{0\right\}$ and satisfies assumptions (\ref{applicattheo12}) and (\ref{applicattheo15}). Suppose that $\frac{d}{d+\delta}<q\leq1<p<\infty$. Let $1\leq r<p'$ and, $0<\eta<q$ if $1<r<p'$ or $0<\eta\leq1$ if $r=1$.
Then the operator $T(f)=K\ast f$, for all $f\in\mathcal{S}$, is extendable on $\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$ and there exists a constant $C>0$ such that
\begin{align}
\left\|T(f)\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}\leq C\left\|f\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}, \label{0applicadualqpconv4general0}
\end{align}
for all $f\in\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$.
\end{thm}
\begin{proof}
Note that $T$ extends to a bounded operator from $\mathcal{H}^{(q,p)}$ into $\mathcal{H}^{(q,p)}$, by Theorem \ref{theoremsing5}. Moreover, $\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}\subset L^{p'}$, $1<p'<\infty$, by Remark \ref{remarqedualeqp1}, and $T$ extends on $L^{p'}$ (by Theorem \ref{theoremsingaj}), and hence $T$ is extendable on $\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$. Without loss of generality, we can assume that $1\leq r<p'$ is such that $r'>\max\left\{2,p\right\}$. Now, consider a family $\left\{\Omega^j\right\}_{j\in\mathbb{Z}}$ of open subsets of $\mathbb{R}^d$ such that $\left\|\sum_{j\in\mathbb{Z}}2^{j\eta}\chi_{\Omega^j}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}<\infty$. We have to show that
\begin{align}
\sum_{j\in\mathbb{Z}}2^j\textit{O}(T(f),\Omega^j,r)\leq C\left\|f\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}\left\|\sum_{j\in\mathbb{Z}}2^{j\eta}\chi_{\Omega^j}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}^{\frac{1}{\eta}}, \label{3applicadualqpconvgener}
\end{align}
where $C>0$ is a constant independent of $f$. Consider the dual operator $T^{\ast}$ of $T$. Then $T^{\ast}$ extends to a bounded operator from $\mathcal{H}^{(q,p)}$ to itself, by Theorem \ref{theoremsing5}. Let $f\in\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$. Consider the subspace $\mathcal{H}_{fin}^{(q,p)}$ of $\mathcal{H}^{(q,p)}$ consisting of finite linear combinations of $(q,r',\delta)$-atoms. Then, for all elements $g$ of $\mathcal{H}_{fin}^{(q,p)}$, we have $T^{\ast}(g)\in\mathcal{H}^{(q,p)}$ with $\left\|T^{\ast}(g)\right\|_{\mathcal{H}^{(q,p)}}\leq C\left\|g\right\|_{\mathcal{H}^{(q,p)}}$, $C>0$ being a constant independent of $g$. Moreover, for all $g\in\mathcal{H}_{fin}^{(q,p)}$, we have
$$\int_{\mathbb{R}^d}T(f)(x)g(x)dx=\int_{\mathbb{R}^d}f(x)T^{\ast}(g)(x)dx,$$
according to \cite[Chap. 1, (35), p. 36]{MA}, because $f\in\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}\subset L^{p'}$ (by Remark \ref{remarqedualeqp1}) and $g\in L^{p}$ (since $(q,r',\delta)$-atoms are also $(q,p,\delta)$-atoms given that $r'>\max\left\{2,p\right\}$), and hence $T(f)\in L^{p'}$ and $T^{\ast}(g)\in L^{p}$, and
\begin{align}
\left|\int_{\mathbb{R}^d}f(x)T^{\ast}(g)(x)dx\right|\leq C\left\|f\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}\left\|T^{\ast}(g)\right\|_{\mathcal{H}^{(q,p)}}, \label{4applicadualqpconvgener}
\end{align}
with $C>0$ a constant independent of $f$ and $g$ (we admit this inequality for the moment).
Hence
\begin{align}
\left|\int_{\mathbb{R}^d}T(f)(x)g(x)dx\right|&=\left|\int_{\mathbb{R}^d}f(x)T^{\ast}(g)(x)dx\right| \nonumber\\
&\leq C\left\|f\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}\left\|T^{\ast}(g)\right\|_{\mathcal{H}^{(q,p)}}\leq C\left\|f\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}\left\|g\right\|_{\mathcal{H}^{(q,p)}}, \nonumber
\end{align}
for all $g\in\mathcal{H}_{fin}^{(q,p)}$. Consequently, the mapping $G_{T(f)}:\mathcal{H}_{fin}^{(q,p)}\ni g\mapsto\int_{\mathbb{R}^d}T(f)(x)g(x)dx$ extends to a unique continuous linear functional $\widetilde{G_{T(f)}}$ on $\mathcal{H}^{(q,p)}$, with
\begin{align}
\left\|\widetilde{G_{T(f)}}\right\|:=\left\|\widetilde{G_{T(f)}}\right\|_{(\mathcal{H}^{(q,p)})^{\ast}}=\sup_{\substack{g\in\mathcal{H}^{(q,p)}\\ \left\|g\right\|_{\mathcal{H}^{(q,p)}}\leq1}}\left|G_{T(f)}(g)\right|\leq C\left\|f\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}. \label{5applicadualqpconvgener}
\end{align}
Furthermore, since $\widetilde{G_{T(f)}}\in(\mathcal{H}^{(q,p)})^{\ast}$,
$$\widetilde{G_{T(f)}}(g)=\int_{\mathbb{R}^d}T(f)(x)g(x)dx,$$
for all $g\in\mathcal{H}_{fin}^{(q,p)}$, and $T(f)\in L_{\mathrm{loc}}^r$ (because $r<p'$ implies that $T(f)\in L^{p'}\subset L_{\mathrm{loc}}^{p'}\subset L_{\mathrm{loc}}^r$), by repeating the second part of the proof of Theorem \ref{theoremdualqp}, with $\widetilde{G_{T(f)}}$, $T(f)$ and $g$ in place of $T$, $g$ and $f$, respectively, we get
\begin{align*}
\sum_{j\in\mathbb{Z}}2^j\textit{O}(T(f),\Omega^j,r)\leq C\left\|\widetilde{G_{T(f)}}\right\|\left\|\sum_{j\in\mathbb{Z}}2^{j\eta}\chi_{\Omega^j}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}^{\frac{1}{\eta}}.
\end{align*}
It follows that
\begin{align*}
\sum_{j\in\mathbb{Z}}2^j\textit{O}(T(f),\Omega^j,r)\leq C\left\|f\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}\left\|\sum_{j\in\mathbb{Z}}2^{j\eta}\chi_{\Omega^j}\right\|_{\frac{q}{\eta},\frac{p}{\eta}}^{\frac{1}{\eta}},
\end{align*}
by (\ref{5applicadualqpconvgener}). Hence $T(f)\in\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$ with $\left\|T(f)\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}\leq C\left\|f\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}$.\\
The proof of Theorem \ref{applicadualqpconv4general} will be complete once we prove (\ref{4applicadualqpconvgener}). Since $f\in\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$, we know that the mapping $G_f:\mathcal{H}_{fin}^{(q,p)}\ni g\mapsto\int_{\mathbb{R}^d}f(x)g(x)dx$ extends to a unique continuous linear functional $\widetilde{G_f}$ on $\mathcal{H}^{(q,p)}$, with
\begin{align}
|\widetilde{G_f}(g)|=|G_f(g)|\leq C\left\|f\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}\left\|g\right\|_{\mathcal{H}^{(q,p)}}, \label{050applicadualqpconvgener}
\end{align}
for all $g\in\mathcal{H}_{fin}^{(q,p)}$. We also know that, for all $g\in\mathcal{H}_{fin}^{(q,p)}$, $T^{\ast}(g)\in\mathcal{H}^{(q,p)}$.
However, it is not clear that $T^{\ast}(g)\in\mathcal{H}_{fin}^{(q,p)}$ for all $g\in\mathcal{H}_{fin}^{(q,p)}$. Hence we cannot write
$$\widetilde{G_f}(T^{\ast}(g))=G_f(T^{\ast}(g))=\int_{\mathbb{R}^d}f(x)T^{\ast}(g)(x)dx,$$
for all $g\in\mathcal{H}_{fin}^{(q,p)}$, and deduce (\ref{4applicadualqpconvgener}) according to (\ref{050applicadualqpconvgener}). However, we claim that
\begin{align}
\widetilde{G_f}(T^{\ast}(g))=\int_{\mathbb{R}^d}f(x)T^{\ast}(g)(x)dx, \label{051applicadualqpconvgener}
\end{align}
for all $g\in\mathcal{H}_{fin}^{(q,p)}$. To see (\ref{051applicadualqpconvgener}), let $g\in\mathcal{H}_{fin}^{(q,p)}$. We have $T^{\ast}(g)\in\mathcal{H}^{(q,p)}\cap L^p$ (because $g\in L^p$ and $T^{\ast}$ is bounded from $L^{p}$ to itself, but also from $\mathcal{H}^{(q,p)}$ to itself). Therefore, by the proof of \cite[Theorem 4.4, pp. 1916-1919]{AbFt}, there exist a family $\left\{\left(a_{j,n},Q_{j,n}\right)\right\}_{(j,n)\in\mathbb{Z}\times\mathbb{Z}_{+}}$ of elements of $\mathcal{A}(q,r',\delta)$ and a family of scalars $\left\{\lambda_{j,n}\right\}_{(j,n)\in\mathbb{Z}\times\mathbb{Z}_{+}}$ such that
\begin{align}
T^{\ast}(g)=\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}\lambda_{j,n}a_{j,n} \label{applicattheoqp1gener}
\end{align}
almost everywhere and in the sense of $\mathcal{H}^{(q,p)}$ (unconditionally). Furthermore,
\begin{align}
\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}|\lambda_{j,n}a_{j,n}f|\in L^1. \label{applicattheoqp5gener}
\end{align}
For the proof of (\ref{applicattheoqp5gener}), we first recall that, by construction (see the proof of \cite[Theorem 4.4, pp. 1916-1919]{AbFt}), $|\lambda_{j,n}a_{j,n}|\leq C_1 2^j$ almost everywhere, $\text{supp}(a_{j,n})\subset Q_{j,n}:=C_0 Q_{j,n}^{\ast}$ and $\sum_{n\geq 0}\chi_{Q_{j,n}^{\ast}}\leq K(d)$ with, for every $j\in\mathbb{Z}$, $\bigcup_{n\geq 0} Q_{j,n}^{\ast}=\mathcal{O}^j:=\left\{x\in\mathbb{R}^d:\mathcal{M}_{\mathcal{F}_N^0}(T^{\ast}(g))(x)>2^j\right\}$, where $N\geq\max\left\{\left\lfloor \frac{d}{q}\right\rfloor, \left\lfloor \frac{d}{p}\right\rfloor\right\}+1$ is an integer and $\mathcal{M}_{\mathcal{F}_N^0}(T^{\ast}(g))$ is the radial grand maximal function of $T^{\ast}(g)$ (with respect to $\mathcal{F}_N$); see \cite{AbFt}, p. 1907, for the definitions of $\mathcal{F}_N$ and $\mathcal{M}_{\mathcal{F}_N^0}(T^{\ast}(g))$.
Thus,
\begin{align*}
\left\|\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}|\lambda_{j,n}a_{j,n}f|\right\|_1&=\left\||f|\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}|\lambda_{j,n}a_{j,n}|\right\|_1\\
&\leq\left\|f\right\|_{L^{p'}}\left\|\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}|\lambda_{j,n}a_{j,n}|\right\|_{L^p}\\
&\leq C_1\left\|f\right\|_{L^{p'}}\left\|\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}2^j\chi_{Q_{j,n}}\right\|_{L^p}\\
&\leq C(\varphi,d,N,\delta)\left\|f\right\|_{L^{p'}}\left\|\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}2^j\left[\mathfrak{M}\left(\chi_{Q_{j,n}^{\ast}}\right)\right]^2\right\|_{L^p}\\
&=C(\varphi,d,N,\delta)\left\|f\right\|_{L^{p'}}\left\|\left(\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}2^j\left[\mathfrak{M}\left(\chi_{Q_{j,n}^{\ast}}\right)\right]^2\right)^{\frac{1}{2}}\right\|_{L^{2p}}^2\\
&\leq C(\varphi,d,p,N,\delta)\left\|f\right\|_{L^{p'}}\left\|\left(\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}2^j\left(\chi_{Q_{j,n}^{\ast}}\right)^2\right)^{\frac{1}{2}}\right\|_{L^{2p}}^2\\
&=C(\varphi,d,p,N,\delta)\left\|f\right\|_{L^{p'}}\left\|\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}2^j\chi_{Q_{j,n}^{\ast}}\right\|_{L^p}\\
&\leq C(\varphi,d,p,N,\delta)\left\|f\right\|_{L^{p'}}\left\|\sum_{j=-\infty}^{+\infty}2^j\chi_{\mathcal{O}^j}\right\|_{L^p}\\
&\leq C(\varphi,d,p,N,\delta)\left\|f\right\|_{L^{p'}}\left\|\mathcal{M}_{\mathcal{F}_N^0}(T^{\ast}(g))\right\|_{L^p},
\end{align*}
by H\"older inequality, \cite[Lemma 3.3]{AbFt3}, \cite[Theorem 1, p. 107]{CFES} and \cite[(4.18), p. 1919]{AbFt}, where $\mathfrak{M}$ denotes the Hardy-Littlewood maximal operator, defined for a locally integrable function $f$ by
\begin{align*}
\mathfrak{M}(f)(x):=\sup_{r>0}\ |B(x,r)|^{-1}\int_{B(x,r)}|f(y)|dy,\ \ x\in\mathbb{R}^d.
\end{align*}
But, since $p>1$, $N\geq\max\left\{\left\lfloor\frac{d}{q}\right\rfloor,\left\lfloor\frac{d}{p}\right\rfloor\right\}+1=\left\lfloor\frac{d}{q}\right\rfloor+1>\left\lfloor\frac{d}{p}\right\rfloor+1$ and $T^{\ast}(g)\in L^{p}$, we have
\begin{align*}
\left\|\mathcal{M}_{\mathcal{F}_N^0}(T^{\ast}(g))\right\|_{L^p}\approx\left\|T^{\ast}(g)\right\|_{L^p}<\infty,
\end{align*}
by \cite[Remark, pp. 15-16]{MBOW}. Hence
\begin{align*}
\left\|\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}|\lambda_{j,n}a_{j,n}f|\right\|_1\lesssim\left\|f\right\|_{L^{p'}}\left\|T^{\ast}(g)\right\|_{L^p}<\infty,
\end{align*}
which states (\ref{applicattheoqp5gener}). From (\ref{applicattheoqp1gener}) and (\ref{applicattheoqp5gener}), it follows that
\begin{align*}
\int_{\mathbb{R}^d}f(x)T^{\ast}(g)(x)dx&=\int_{\mathbb{R}^d}f(x)\left(\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}\lambda_{j,n}a_{j,n}(x)\right)dx\\
&=\int_{\mathbb{R}^d}\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}(\lambda_{j,n}a_{j,n}(x)f(x))dx\\
&=\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}\lambda_{j,n}\int_{\mathbb{R}^d}f(x)a_{j,n}(x)dx,
\end{align*}
by Fubini Theorem.
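For the reader's convenience, we recall (restated here in generic notation, not quoted from the reference) the Fefferman--Stein vector-valued maximal inequality \cite[Theorem 1, p. 107]{CFES} invoked in the chain of estimates above, applied there with exponent $2p\in(2,\infty)$ and with $f_k=\chi_{Q_{j,n}^{\ast}}$: for every $1<s<\infty$ there is a constant $C(s,d)>0$ such that
\begin{align*}
\left\|\left(\sum_{k}\left[\mathfrak{M}(f_k)\right]^{2}\right)^{\frac{1}{2}}\right\|_{L^{s}}\leq C(s,d)\left\|\left(\sum_{k}|f_k|^{2}\right)^{\frac{1}{2}}\right\|_{L^{s}},
\end{align*}
for any sequence $(f_k)_k$ of locally integrable functions.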
Moreover,
\begin{align*}
\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}\lambda_{j,n}\int_{\mathbb{R}^d}f(x)a_{j,n}(x)dx&=\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}\lambda_{j,n}G_f(a_{j,n})\\
&=\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}\lambda_{j,n}\widetilde{G_f}(a_{j,n})\\
&=\widetilde{G_f}\left(\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}\lambda_{j,n}a_{j,n}\right)=\widetilde{G_f}(T^{\ast}(g)),
\end{align*}
since $T^{\ast}(g)=\sum_{j=-\infty}^{+\infty}\sum_{n\geq 0}\lambda_{j,n}a_{j,n}$ (unconditionally) in $\mathcal{H}^{(q,p)}$ and $\widetilde{G_f}$ is a continuous linear functional on $\mathcal{H}^{(q,p)}$. Therefore,
\begin{align*}
\int_{\mathbb{R}^d}f(x)T^{\ast}(g)(x)dx=\widetilde{G_f}(T^{\ast}(g)),
\end{align*}
which establishes (\ref{051applicadualqpconvgener}). It follows that
\begin{align*}
\left|\int_{\mathbb{R}^d}f(x)T^{\ast}(g)(x)dx\right|&=\left|\widetilde{G_f}(T^{\ast}(g))\right|\\
&\leq\left\|\widetilde{G_f}\right\|\left\|T^{\ast}(g)\right\|_{\mathcal{H}^{(q,p)}}\leq C\left\|f\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}}\left\|T^{\ast}(g)\right\|_{\mathcal{H}^{(q,p)}},
\end{align*}
which establishes (\ref{4applicadualqpconvgener}), and hence completes the proof of Theorem \ref{applicadualqpconv4general}.
\end{proof}
\begin{remark}\label{0applicadualqpconv4generalbis0}
In Theorem \ref{applicadualqpconv4general} (\ref{0applicadualqpconv4general0}), the positive integer $\delta$ can be replaced by $0$ provided that $\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor=0$. In fact, under the assumptions of Theorem \ref{applicadualqpconv4general}, $\mathcal{L}_{r,\phi_{1},0}^{(q,p,\eta)}\cong\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$ for $\delta>0$ provided that $\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor=0$, by Theorem 2.7 (see \cite[Theorem 3.8]{AbFt3}).
\end{remark}
\begin{cor}\label{applicadualqpconv4generalbis}
Let $K$ be a tempered distribution in $\mathbb{R}^d$ that coincides with a locally integrable function on $\mathbb{R}^d\backslash\left\{0\right\}$ and satisfies assumptions (\ref{applicattheo12}) and (\ref{applicattheo15}). Suppose that $\frac{d}{d+\delta}<q\leq1$ and $q\leq p<\infty$. Let $\max\left\{1,p\right\}<r\leq\infty$ and, $0<\eta<q$ if $r<\infty$ or $0<\eta\leq1$ if $r=\infty$. Then the operator $T(f)=K\ast f$, for all $f\in\mathcal{S}$, is extendable on $\mathcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)}$ and there exists a constant $C>0$ such that
\begin{eqnarray*}
\left\|T(f)\right\|_{\mathcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)}}\leq C\left\|f\right\|_{\mathcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)}},
\end{eqnarray*}
for all $f\in\mathcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)}$.
\end{cor}
\begin{proof}
We distinguish the cases $p\leq1$ and $p>1$. The case $p>1$ is merely Theorem \ref{applicadualqpconv4general}. When $p\leq1$, we have $\mathcal{L}_{r',\phi_{1},\delta}^{(q,p,\eta)}=\mathcal{L}_{r',\phi_{1},\delta}$ with equivalent norms, by (\ref{dualqp3bisbis}), and $T$ is bounded from $\mathcal{L}_{r',\phi_{1},\delta}$ to itself, according to \cite{PeeJ}.
\end{proof}
\begin{remark}
Remark \ref{0applicadualqpconv4generalbis0} remains valid for Corollary \ref{applicadualqpconv4generalbis} provided that $\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor=0$. Also, when $p\leq1$, the assumption $q\leq p$ is not needed and we can take $0<\eta\leq1$ for $1<r\leq\infty$.
\end{remark}
\begin{cor}\label{applicadualqpconv5general}
The Riesz transforms $R_j$, $1\leq j\leq d$, are bounded from $\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$ into $\mathcal{L}_{r,\phi_{1},\delta}^{(q,p,\eta)}$, for $0<q\leq1$, $q\leq p<\infty$, $\delta\geq\left\lfloor d\left(\frac{1}{q}-1\right)\right\rfloor$ and, $1\leq r<p'$ if $1<p$ or $1\leq r<\infty$ if $p\leq1$, with $0<\eta<q$ if $1<r$ or $0<\eta\leq1$ if $r=1$.
\end{cor}
\begin{remark}
In Corollary \ref{applicadualqpconv5general}, when $p\leq1$, the assumption $q\leq p$ is not needed and we can take $0<\eta\leq1$ for $1<r\leq\infty$.
\end{remark}
\subsection{Calder\'on-Zygmund operators}
Let $\triangle:=\left\{(x,x):\ x\in\mathbb{R}^d\right\}$ be the diagonal of $\mathbb{R}^d\times\mathbb{R}^d$. We say that a function $K:\mathbb{R}^d\times\mathbb{R}^d\backslash\triangle\rightarrow\mathbb{C}$ is a standard kernel if there exist a constant $A>0$ and an exponent $\mu>0$ such that:
\begin{eqnarray}
|K(x,y)|\leq A|x-y|^{-d}; \label{integralsing1}
\end{eqnarray}
\begin{equation}
|K(x,y)-K(x,z)|\leq A\frac{|y-z|^{\mu}}{|x-y|^{d+\mu}}\ ,\ \text{ if }\ \ |x-y|\geq 2|y-z| \label{integralsing2}
\end{equation}
and
\begin{equation}
|K(x,y)-K(w,y)|\leq A\frac{|x-w|^{\mu}}{|x-y|^{d+\mu}}\ ,\ \text{ if }\ \ |x-y|\geq 2|x-w|. \label{integralsing3}
\end{equation}
We denote by $\mathcal{SK}(\mu,A)$ the class of all standard kernels with exponent $\mu$ and constant $A$.
\begin{defn}\cite[Definition 5.11]{JD}
An operator $T$ is a (generalized) Calder\'on-Zygmund operator if
\begin{enumerate}
\item $T$ is bounded on $L^2$;
\item there exists a standard kernel $K$ such that, for $f\in L^2$ with compact support,
\begin{eqnarray}
T(f)(x)=\int_{\mathbb{R}^d}K(x,y)f(y)dy,\ x\notin\text{supp}(f). \label{egopcalzyg1}
\end{eqnarray}
\end{enumerate}
\end{defn}
We proved the following result in \cite{AbFt1} (see \cite[Theorem 4.2]{AbFt1}).
\begin{thm} \label{theoremsing1}
Let $T$ be a Calder\'on-Zygmund operator with kernel $K\in \mathcal{SK}(\mu,A)$. If $\frac{d}{d+\mu}<q\leq 1$, then $T$ extends to a bounded operator from $\mathcal{H}^{(q,p)}$ to $(L^q,\ell^p)$.
\end{thm}
Our main result is the following.
\begin{thm}\label{applicadualqp0}
Suppose that $1\leq p<\infty$. Let $\delta\geq0$ be an integer, $1\leq r<p'$ and, $0<\eta<1$ if $1<r<p'$ or $0<\eta\leq1$ if $r=1$. Let $T$ be a Calder\'on-Zygmund operator with kernel $K\in \mathcal{SK}(\mu,A)$. Then there exists a constant $C>0$ such that
\begin{eqnarray*}
\left\|T(f)\right\|_{\mathcal{L}_{r,\phi_{1},\delta}^{(1,p,\eta)}}\leq C\left\|f\right\|_{\infty,p'},
\end{eqnarray*}
for all $f\in(L^{\infty},\ell^{p'})$.
\end{thm}
\begin{proof}
Using Theorem \ref{theoremsing1}, the proof of Theorem \ref{applicadualqp0} is similar to that of Theorem \ref{applicadualqpconv2}; the details are therefore left to the reader.
\end{proof}
\begin{thebibliography}{MTW1}
\bibitem{AbFt} Z. V. de P. Abl\'e and J. Feuto, \textit{Atomic decomposition of Hardy-amalgam spaces}, J. Math. Anal. Appl. 455 (2017), 1899-1936.
\bibitem{AbFt1} Z. V. de P. Abl\'e and J. Feuto, \textit{Dual of Hardy-amalgam spaces and norms inequalities}, Analysis Math. 45 (4) (2019), 647-686.
\bibitem{AbFt2} Z. V. de P. Abl\'e and J. Feuto, \textit{Dual of Hardy-amalgam spaces $\mathcal{H}_{\mathrm{loc}}^{(q,p)}$ and pseudo-differential operators}, arXiv:1803.03595.
\bibitem{AbFt3} Z. V. de P. Abl\'e and J. Feuto, \textit{New characterizations of the dual spaces of Hardy-amalgam spaces}, submitted.
\bibitem{BPR} A. Benedek and R. Panzone, \textit{The space $L^p$, with mixed norm}, Duke Math. J. 28 (1961), 301-324.
\bibitem{BDD} J. P. Bertrandias, C. Datry and C. Dupuis, \textit{Unions et intersections d'espaces $L^p$ invariantes par translation ou convolution}, Ann. Inst. Fourier (Grenoble) 28 (1978), 53-84.
\bibitem{MBOW} M. Bownik, \textit{Anisotropic Hardy spaces and wavelets}, Mem. Amer. Math. Soc. 164 (2003).
\bibitem{RBHS} R. C. Busby and H. A. Smith, \textit{Product-convolution operators and mixed-norm spaces}, Trans. Amer. Math. Soc. 263 (1981), 309-341.
\bibitem{HBZ} H. Brezis, \textit{Analyse fonctionnelle: Th\'eorie et Applications}, Masson, 1983.
\bibitem{HBZ1} H. Brezis, \textit{Functional Analysis, Sobolev Spaces and Partial Differential Equations}, Springer, 2010.
\bibitem{CWYZg} D.-C. Chang, S. Wang, D. Yang and Y. Zhang, \textit{Littlewood-Paley characterizations of Hardy-type spaces associated with ball quasi-Banach function spaces}, Complex Anal. Oper. Theory 14 (2020), Paper No. 40, 33 pp.
\bibitem{CGNM} G. Cleanthous, A. G. Georgiadis and M. Nielsen, \textit{Anisotropic mixed-norm Hardy spaces}, J. Geom. Anal. 27 (2017), 2758-2787.
\bibitem{JD} J. Duoandikoetxea, \textit{Fourier Analysis}, Grad. Stud. Math., vol. 29, American Mathematical Society, Providence, RI, 2001, translated and revised from the 1995 Spanish original by D. Cruz-Uribe.
\bibitem{CFES} C. Fefferman and E. M. Stein, \textit{Some maximal inequalities}, Amer. J. Math. 93 (1971), 107-115.
\bibitem{FSTW} J. J. F. Fournier and J. Stewart, \textit{Amalgams of $L^p$ and $l^p$}, Bull. Amer. Math. Soc. 13 (1) (1985), 1-21.
\bibitem{DGG} D. Goldberg, \textit{A local version of real Hardy spaces}, Duke Math. J. 46 (1979), 27-42.
\bibitem{LG2} L. Grafakos, \textit{Modern Fourier Analysis}, third edition, Graduate Texts in Mathematics, 250, Springer, New York, 2014.
\bibitem{HTWX} J. Hart, R. H. Torres and X. Wu, \textit{Smoothing properties of bilinear operators and Leibniz-type rules in Lebesgue and mixed Lebesgue spaces}, Trans. Amer. Math. Soc. 370 (2018), 8581-8612.
\bibitem{FH} F. Holland, \textit{Harmonic analysis on amalgams of $L^p$ and $l^q$}, J. London Math. Soc. 2 (10) (1975), 295-305.
\bibitem{HLYY1} L. Huang, J. Liu, D. Yang and W. Yuan, \textit{Atomic and Littlewood-Paley characterizations of anisotropic mixed-norm Hardy spaces and their applications}, J. Geom. Anal. 29 (2019), 1991-2067.
\bibitem{HLYY2} L. Huang, J. Liu, D. Yang and W. Yuan, \textit{Dual spaces of anisotropic mixed-norm Hardy spaces}, Proc. Amer. Math. Soc. 147 (2019), 1201-1215.
\bibitem{HLYY3} L. Huang, J. Liu, D. Yang and W. Yuan, \textit{Identification of anisotropic mixed-norm Hardy spaces and certain homogeneous Triebel-Lizorkin spaces}, J. Approx. Theory 258 (2020), 105459.
\bibitem{HYD} L. Huang and D. Yang, \textit{On function spaces with mixed norms -- a survey}, J. Math. Study (2020), DOI: 10.4208/jms.v54n3.21.03.
\bibitem{NEYS} E. Nakai and Y. Sawano, \textit{Hardy spaces with variable exponents and generalized Campanato spaces}, J. Funct. Anal. 262 (2012), 3665-3748.
\bibitem{PeeJ} J. Peetre, \textit{On convolution operators leaving $L^{p,\lambda}$ spaces invariant}, Ann. Mat. Pura Appl. 72 (1966), 295-304.
\bibitem{WR1} W. Rudin, \textit{Functional Analysis}, second edition, International Series in Pure and Applied Mathematics, McGraw-Hill, Inc., New York, 1991.
\bibitem{SHYY} Y. Sawano, K.-P. Ho, D. Yang and S. Yang, \textit{Hardy spaces for ball quasi-Banach function spaces}, Dissertationes Math. 525 (2017), 102 pp.
\bibitem{MA} E. M. Stein, \textit{Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals}, Princeton Mathematical Series, 43, Monographs in Harmonic Analysis, III, Princeton University Press, Princeton, NJ, 1993.
\bibitem{JSTW} J. Stewart, \textit{Fourier transforms of unbounded measures}, Canad. J. Math. 31 (1979), 1281-1292.
\bibitem{WYYg} F. Wang, D. Yang and S. Yang, \textit{Applications of Hardy spaces associated with ball quasi-Banach function spaces}, Results Math. 75 (2020), no. 1, Art. 26, 58 pp.
\bibitem{WYYZg} S. Wang, D. Yang, W. Yuan and Y. Zhang, \textit{Weak Hardy-type spaces associated with ball quasi-Banach function spaces II: Littlewood-Paley characterizations and real interpolation}, J. Geom. Anal. (2019), DOI: 10.1007/s12220-019-00293-1.
\bibitem{YYYn} X. Yan, D. Yang and W. Yuan, \textit{Intrinsic square function characterizations of Hardy spaces associated with ball quasi-Banach function spaces}, Front. Math. China 15 (2020), 769-806.
\bibitem{YZDYWY} Y. Zhang, D. Yang and W. Yuan, \textit{Real-variable characterization of local Orlicz-slice Hardy spaces with application to bilinear decompositions}, arXiv:2007.03467.
\bibitem{KYos} K. Yosida, \textit{Functional Analysis}, reprint of the sixth (1980) edition, Classics in Mathematics, Springer-Verlag, Berlin, 1995.
\bibitem{ZWYY} Y. Zhang, S. Wang, D. Yang and W. Yuan, \textit{Weak Hardy-type spaces associated with ball quasi-Banach function spaces I: Decompositions with applications to boundedness of Calder\'on-Zygmund operators}, Sci. China Math. (2020), DOI: 10.1007/s11425-019-1645-1.
\bibitem{ZYYW} Y. Zhang, D. Yang, W. Yuan and S. Wang, \textit{Real-variable characterizations of Orlicz-slice Hardy spaces}, Anal. Appl. (Singap.) 17 (2019), 597-664.
\end{thebibliography}
\end{document}
\begin{document}
\title{Regularity results for viscous 3D\\ Boussinesq temperature fronts}
\author{Francisco Gancedo and Eduardo Garc\'ia-Ju\'arez}
\date{\today}
\maketitle
\begin{abstract}
This paper is about the dynamics of non-diffusive temperature fronts evolving by the incompressible viscous Boussinesq system in $\mathbb{R}^3$. We provide local in time existence results for initial data of arbitrary size. Furthermore, we show global in time propagation of regularity for small initial data in critical spaces. The developed techniques allow us to consider general fronts where the temperature is piecewise H\"older (not necessarily constant), which preserve their structure together with the regularity of the evolving interface.
\end{abstract}
{\bf Keywords: }Boussinesq equations, temperature front, global regularity, singular heat kernels.
\setcounter{tocdepth}{1}
\section{Introduction}
In this paper we study the following active scalar equation
\begin{equation}\label{temperature}
\theta_t + u\cdot\nabla\theta =0,
\end{equation}
where $\theta(x,t)$ is the temperature of a three dimensional incompressible fluid,
\begin{equation}\label{incompressible}
\nabla\cdot u=0.
\end{equation}
The velocity $u(x,t)$ evolves by the viscous Boussinesq system
\begin{equation}\label{Boussinesq}
u_t +u\cdot\nabla u-\Delta u+\nabla \pi=\theta e_3,
\end{equation}
with $\pi(x,t)$ the fluid pressure and $e_3=(0,0,1)$. The viscosity and gravity constants are taken equal to one for the sake of simplicity and $(x,t)=(x_1,x_2,x_3,t)\in\mathbb{R}^3\times[0,+\infty)$.
The system above is the well-known Boussinesq equation \cite{Boussinesq1903} with viscosity and without heat diffusion. This system models natural convection phenomena generated by fluid flow due to the effect of buoyancy forces. Temperature gradients induce density variations from an equilibrium state, which gravity tends to restore. These flows are usually characterized by small deviations of the density with respect to a stratified reference state in hydrostatic balance. Potential energy is thus the main agent of movement, compared to inertia. Oberbeck was the first to notice by linearization that the buoyancy effect was proportional to temperature deviations \cite{Oberbeck1879}, and later Boussinesq \cite{Boussinesq1903} completed the model based on physical assumptions. Since then it has been one of the main ingredients in geophysical models \cite{Gill82}, \cite{Majda03}, from ocean and atmosphere dynamics to mantle and solar inner convection, as well as a basic tool in building environmental engineering. In particular, it is an important model to understand the Rayleigh-B\'enard problem \cite{Constantin99}.
From the mathematical point of view, the model is important due to the fact that it is related to the Euler and Navier-Stokes equations with constant density. Specifically, the system (\ref{temperature},\ref{incompressible},\ref{Boussinesq}) contains 3D Navier-Stokes as a particular case, and the inviscid 2D Boussinesq case corresponds to the 3D axisymmetric swirling Euler equations \cite{Majda02}. Furthermore, a very important feature is that, for the 2D and 3D cases of the Boussinesq system, the vortex stretching mechanism is present.
Therefore, the well-posedness of the equations is a major open problem in the mathematical analysis of partial differential equations modeling incompressible fluids \cite{Yudovich03}. In the 2D case, the system was proved to have global-in-time solutions for regular initial data \cite{Chae06},\cite{Hou05}. The required regularity of the initial data was relaxed later in \cite{Abidi07} and \cite{Hmidi07} using Besov and Sobolev spaces, respectively. Global-in-time results were shown through the scale of Sobolev spaces of different regularity in \cite{Hu15}. The uniqueness of weak solutions was proved in \cite{Danchin08} making use of paradifferential calculus techniques. On the other hand, in the 2D inviscid case the global existence of solutions is still an open problem. There is numerical evidence of global regularity \cite{E94} in the periodic setting, but recent simulations indicate the possibility of finite-time blow-up in the case of a bounded domain with regular boundary \cite{Luo14PNAS}. Based on this scenario, new one-dimensional Boussinesq models have been developed to show blow-up including boundary effects \cite{Choi15},\cite{Choi17}. They have been recently extended to dimension two \cite{Hoang16} for models which include the incompressibility condition \cite{Kiselev18}. Returning to the 2D inviscid Boussinesq system, it develops finite time blow-up in scenarios where the solutions have finite energy and evolve in spatial domains with a corner \cite{Elgindi17}. On the other hand, there is long-time existence for solutions in the whole plane close to stable regimes, where the temperature of the fluid is increasing in the vertical direction \cite{Elgindi16}. In this setting, for bounded domains the solutions have finite energy, and the same result has been proven after adding damping to the model \cite{Castro18}.
\par There are also global existence results for the 2D Boussinesq system with anisotropic and partial viscosities (see \cite{Adhikari11}, \cite{Cao13}, \cite{Larios13}, \cite{Adhikari16}, \cite{Li16} and references therein). These scenarios model important physical situations with different horizontal and vertical scales for atmospheric and oceanic flows, where the viscosity constant can be zero in some axis directions. Taking the initial temperature trivial, $\theta(x,0)=0$, it is easy to obtain the Navier-Stokes equations from the Boussinesq approximation, so that in 3D the well-posedness is a challenging problem \cite{Fefferman06}. Similar blow-up criteria and global-in-time regularity results for small initial data can be shown \cite{Xu12}, \cite{Qiu10}, but the effect of gravity produces stratified temperature solutions in the inviscid case \cite{Widmayer15}.
Here, we consider a free boundary problem governed by the incompressible Boussinesq system (\ref{temperature},\ref{incompressible},\ref{Boussinesq}), where the temperature has jump discontinuities. This setting provides important scenarios where the temperature is a front evolving with the fluid flow \cite{Gill82}, \cite{Majda03}. In particular, our main concern is to study the propagation of regularity of the boundary of the front. There is a long tradition in the study of this kind of patch solutions, starting with the well-known vortex patch problem \cite{Chemin93}, \cite{Bertozzi93}.
Moreover, having global existence for these scenarios in Boussinesq provides low-regularity solutions for Navier-Stokes with an external force given by gravity, which are interesting in themselves for the problem of global-in-time regularity.
This problem was first considered in \cite{Danchin17}, where initial fronts are given with regularity measured through the Besov spaces $\theta_0\in B_{q,1}^{2/q-1}$, $q\in(1,2)$. The authors use paradifferential calculus and striated regularity techniques to obtain $C^{1+\gamma}$ propagation of regularity of the boundary of the fronts, $0<\gamma<1$, in 2D for arbitrary initial data and in 3D under smallness assumptions. This result can then be applied to patch-type temperatures, where the front takes different constant values on complementary domains. More generally, patch-type solutions have been extensively studied for different equations coming from fluid mechanics models (see \cite{Chemin93}, \cite{Bertozzi93}, \cite{Cordoba10}, \cite{Fefferman16}, \cite{Gancedo14}, \cite{Gancedo18GarciaJuarez}). In particular, in the setting of rapidly rotating temperature fronts, for the patch problem in 2D \cite{Rodrigo04} there is numerical evidence of pointwise collapse with curvature blow-up \cite{Cordoba05}. Moreover, it has been shown that the control of the curvature removes the possibility of pointwise interface collapse \cite{Gancedo14}. In \cite{Gancedo17GarciaJuarez}, the authors proved that for 2D Boussinesq temperature patches the curvature of the interface cannot blow up in finite time. A new cancellation in the time-dependent singular integral operators given by the second derivatives of the solution to the heat equation was needed. A different proof for the persistence of low regularity was also shown and, by making use of level-set methods, an extra cancellation in the tangential direction was used to propagate more regular interfaces.
In this paper we deal with 3D temperature fronts that do not need to be patches of constant temperature. We first provide a local-in-time existence result for very low regularity initial temperatures in Lebesgue spaces without size constraints (see Theorem \ref{Case1} for more details). Adding a smallness condition in critical spaces, the results are global in time. Notice that equation \eqref{Boussinesq} can be written as a forced heat equation and therefore the velocity is given by
\begin{equation}\label{decomposition}
u=e^{t\Delta}u_0-(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(u\cdot\nabla u)+(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(\theta e_3),
\end{equation}
where $\mathbb{P}$ is the Leray projector and $(\partial_t\!-\!\Delta)_0^{-1} f$ denotes the solution of the linear heat equation with force $f$ and zero initial condition:
\begin{equation*}
(\partial_t\!-\!\Delta)_0^{-1} f:=\int_0^t e^{(t-\tau)\Delta}f(\tau)d\tau.
\end{equation*}
Above we use the standard notation $e^{t\Delta}f=\mathcal{F}^{-1}(e^{-t|\xi|^2}\hat{f})$, where $\hat{}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and its inverse.
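For later use, we also recall the standard expression of the Leray projector (a classical identity, stated here only as a reminder): acting on a vector field $v$,
\begin{equation*}
\widehat{\mathbb{P}v}(\xi)=\hat{v}(\xi)-\frac{\xi\,\big(\xi\cdot\hat{v}(\xi)\big)}{|\xi|^2},\qquad \xi\in\mathbb{R}^3\setminus\{0\},
\end{equation*}
that is, $\mathbb{P}=\mathrm{Id}-\nabla\Delta^{-1}\mathrm{div}$, a matrix of Riesz-transform-type operators of order zero.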
The particle trajectories of the system,
\begin{equation*}
\left\{\begin{aligned}
\frac{dX}{dt}(a,t)&=u(X(a,t),t),\\
X(a,0)&=a,
\end{aligned}\right.\mbox{ with the back-to-label map } A(X(a,t),t)=a,
\end{equation*}
give the equation for the interface
$$Z_t(\alpha,t)=u(Z(\alpha,t),t),\quad \alpha\in\mathbb{R}^2.$$
Then, the regularity obtained for the velocity field (see Theorem \ref{Case1}) allows us to propagate the structure and interface regularity of fronts given by $\theta_0(x)=\theta_0(x)1_{D_0}(x)$, with $D_0\subset \mathbb{R}^3$ a bounded simply connected domain with boundary $\partial D_0\in C^{1+\gamma}$. Under this well-posed scenario, we are able to prove that piecewise H\"older fronts,
\begin{equation*}
\theta_0(x)=\theta_1(x)1_{D_0}(x)+\theta_2(x)1_{D^c_0}(x),\hspace{0.5cm} \theta_1\in C^{\mu_1}(\overline{D}_0),\ \theta_2\in C^{\mu_2}(\overline{D^c_0})\cap L^1,\hspace{0.3cm}\mu_1,\mu_2\in (0,1),
\end{equation*}
whose interface has bounded curvature, preserve this regularity locally in time for arbitrary data and globally in time for small data in critical spaces.
Taking advantage of the space-time singular integral given by the heat kernel, we find a new perspective to deal with the operators given by second derivatives of the temperature term in \eqref{decomposition}. This allows us to consider non-constant patches. The new viewpoint connects the \textit{parabolic} approach used in \cite{Gancedo17GarciaJuarez} with the \textit{elliptic} one in \cite{Gancedo18GarciaJuarez}, avoiding the use of time weights and interpolation theory. Moreover, the technique provides a unified approach to propagate more regular interfaces. Indeed, for fronts with $C^{2+\gamma}$ boundary, where one cannot expect to gain enough regularity for the velocity globally in space, the approach used in 2D is no longer valid. That is due to the fact that in the 3D case the tangent vector fields are not divergence free. The \textit{parabolic-elliptic} approach overcomes this difficulty and shows in a clear way how to introduce contour dynamics methods to deal directly with the boundary evolution. Our new approach allows us to use bootstrapping arguments, obtaining propagation of regularity from weak solutions to $C^{1+\gamma}$, from $C^{1+\gamma}$ to $W^{2,\infty}$, and on to more regular interfaces.
The paper is organized as follows: In Section \ref{intro}, we include the definition of the functional spaces used in the paper, a summary of paradifferential calculus and some regularity properties of the heat equation. In Section \ref{sec:2}, we give the local and global in time results for low regularity temperature fronts, and show that $C^{1+\gamma}$ interfaces propagate preserving their regularity. Then, in Section \ref{sec:3}, we present the new approach that will allow us to deal with piecewise H\"older fronts whose initial curvature is bounded. We will find that $u\in L^1(0,T;W^{2,\infty})$ and therefore the boundary evolves without possibility of curvature blow-up. In Section \ref{sec:4}, building upon the previous results, we will use contour dynamics methods to find the persistence of higher regularity.
\section{Functional Spaces and Preliminary Estimates}\label{intro}
We recall here the definition of Sobolev, H\"older and Besov spaces, together with some paradifferential calculus estimates and regularity properties of the heat equation (see \cite{Bahouri11}, Chapters 1 and 2, for details). Let $s\in\mathbb{R}$, $k$ a nonnegative integer and $\alpha\in(0,1)$. The homogeneous Sobolev space $\dot{H}^s(\mathbb{R}^3)$ is the space of tempered distributions $u$ with Fourier transform in $L^1_{loc}(\mathbb{R}^3)$ and such that
\begin{equation*}
\|u\|_{\dot{H}^s}^2=\int_{\mathbb{R}^3}|\xi|^{2s}|\hat{u}(\xi)|^2d\xi <\infty.
\end{equation*}
The nonhomogeneous counterpart is defined using the norm
\begin{equation*}
\|u\|_{H^s}=\|u\|_{L^2}+\|u\|_{\dot{H}^s}.
\end{equation*}
The H\"older space $C^{k+\alpha}$ is the space of $C^k$ functions $u$ such that
\begin{equation*}
\|u\|_{C^{k+\alpha}}=\sup_{|\beta|\leq k}\|\partial^\beta u\|_{L^\infty}+\|u\|_{\dot{C}^{k+\alpha}}<\infty,
\end{equation*}
where the H\"older semi-norm $\|\cdot\|_{\dot{C}^{k+\alpha}}$ is defined by
\begin{equation*}
\|u\|_{\dot{C}^{k+\alpha}}=\sup_{|\beta|=k}\sup_{x\neq y}\frac{|\partial^\beta u(x)-\partial^\beta u(y)|}{|x-y|^\alpha}.
\end{equation*}
To define the homogeneous Besov spaces we first need to introduce the homogeneous Littlewood-Paley decomposition in $\mathbb{R}^3$. Let $B=\{\xi\in\mathbb{R}^3: |\xi|\leq 4/3\}$ and $C=\{\xi\in\mathbb{R}^3:3/4\leq|\xi|\leq8/3\}$, and fix two smooth radial functions $\chi$ and $\varphi$ supported in $B$ and $C$, respectively, satisfying
\begin{equation*}
\chi(\xi)+\sum_{j\geq0}\varphi(2^{-j}\xi)=1,\hspace{0.3cm}\forall \xi\in \mathbb{R}^3,
\end{equation*}
\begin{equation*}
\sum_{j\in\mathbb{Z}}\varphi(2^{-j}\xi)=1,\hspace{0.3cm}\forall \xi\in \mathbb{R}^3\setminus\{0\},
\end{equation*}
\begin{equation*}
|j-l|\geq2\Rightarrow \text{Supp } \varphi(2^{-j}\cdot)\cap \text{Supp }\varphi(2^{-l}\cdot)=\emptyset.
\end{equation*}
The homogeneous dyadic blocks are defined by $\dot{\Delta}_jf=\mathcal{F}^{-1}(\varphi(2^{-j}\xi)\hat{f}(\xi))$. Then, the homogeneous Besov space $\dot{B}_{p,q}^s(\mathbb{R}^3)$, $p,q\in [1,\infty]$, consists of those tempered distributions $u\in S'_h(\mathbb{R}^3)$ for which
\begin{equation*}
\|u\|_{\dot{B}_{p,q}^s}=\|2^{js}\|\dot{\Delta}_j u\|_{L^p}\|_{l^q(j)}<\infty,
\end{equation*}
where $u\in S'_h(\mathbb{R}^3)$ means that $\lim_{\lambda\to 0}\|\mathcal{F}^{-1}(\theta(\lambda \xi)\hat{u}(\xi))\|_{L^\infty}=0$ for any smooth compactly supported function $\theta$.
\begin{rem}
It is well-known that the semi-norms $\|\cdot\|_{\dot{H}^s}$ and $\|\cdot\|_{\dot{B}_{2,2}^s}$ are equivalent, as well as $\|\cdot\|_{\dot{C}^{k+\alpha}}$ and $\|\cdot\|_{\dot{B}_{\infty,\infty}^{k+\alpha}}$.
\end{rem}
The next two propositions contain the embeddings that will be used throughout the paper.
\begin{prop}\label{lpembed}
For any $p\in[1,\infty]$, $q_1\in [2,\infty)$, $q_2\in(1,2]$, the following continuous embeddings hold
\begin{equation*}
\begin{aligned}
\dot{B}_{p,1}^0\hookrightarrow L^p\hookrightarrow \dot{B}^0_{p,\infty},\qquad \dot{B}^0_{q_1,2}\hookrightarrow L^{q_1},\qquad L^{q_2}\hookrightarrow \dot{B}^0_{q_2,2}.
\end{aligned}
\end{equation*}
Moreover, for $r_1\in[1,2]$, $r_2\in[2,\infty]$, it also holds that
\begin{equation*}
\dot{B}^0_{r_1,r_1}\hookrightarrow L^{r_1},\qquad L^{r_2}\hookrightarrow \dot{B}^0_{r_2,r_2}.
\end{equation*}
\end{prop}
\begin{prop}\label{besovembed}
Let $1\leq p_1\leq p_2\leq \infty$ and $1\leq q_1\leq q_2\leq \infty$. Then, for any $s\in\mathbb{R}$, the space $\dot{B}^s_{p_1,q_1}(\mathbb{R}^3)$ is continuously embedded in $\dot{B}^{s-3\big(\frac{1}{p_1}-\frac{1}{p_2}\big)}_{p_2,q_2}(\mathbb{R}^3)$.
\end{prop}
As particular cases, Proposition \ref{besovembed} together with Proposition \ref{lpembed} can be used to recover the standard Sobolev embeddings into H\"older and Lebesgue spaces. We now recall some paradifferential calculus estimates. The homogeneous low-frequency cut-off operator $\dot{S}_j$ is defined by
\begin{equation*}
\dot{S}_ju=\mathcal{F}^{-1}(\chi(2^{-j}\xi)\hat{u}(\xi)).
\end{equation*}
Considering then the Littlewood-Paley decompositions
\begin{equation*}
u=\sum_{l}\dot{\Delta}_lu,\quad v=\sum_{j}\dot{\Delta}_jv,
\end{equation*}
we can introduce Bony's decomposition for the product
\begin{equation*}
uv=\sum_{l,j}\dot{\Delta}_lu\dot{\Delta}_jv=\dot{T}_uv+\dot{T}_vu+\dot{R}(u,v),
\end{equation*}
where the homogeneous paraproduct $\dot{T}_uv$ and remainder $\dot{R}(u,v)$ of $u$ and $v$ are defined by
\begin{equation*}
\dot{T}_uv=\sum_j\dot{S}_{j-1}u\dot{\Delta}_jv,\quad \dot{R}(u,v)=\sum_{|l-j|\leq1}\dot{\Delta}_{l}u\dot{\Delta}_jv.
\end{equation*}
\begin{prop}\label{paradif}
For any $s, s_1,s_2\in \mathbb{R}$, $t<0$, $p,q\in[1,\infty]$, with
$$\frac{1}{q}=\frac{1}{q_1}+\frac{1}{q_2},\quad \frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2},$$
the following estimates hold
\begin{equation*}
\begin{aligned}
\|\dot{T}_uv\|_{\dot{B}^s_{p,q}}&\leq c \|u\|_{L^\infty}\|v\|_{\dot{B}^s_{p,q}},\\
\|\dot{T}_uv\|_{\dot{B}^{s+t}_{p,q}}&\leq c \|u\|_{\dot{B}^t_{\infty,q_1}}\|v\|_{\dot{B}^s_{p,q_2}},\\
\|\dot{R}(u,v)\|_{\dot{B}^{s_1+s_2}_{p,q}}&\leq c \|u\|_{\dot{B}^{s_1}_{p_1,q_1}}\|v\|_{\dot{B}^{s_2}_{p_2,q_2}}, \text{ if } s_1+s_2>0,\\
\|\dot{R}(u,v)\|_{\dot{B}^{s_1+s_2}_{p,\infty}}&\leq c \|u\|_{\dot{B}^{s_1}_{p_1,q_1}}\|v\|_{\dot{B}^{s_2}_{p_2,q_2}}, \text{ if } s_1+s_2\geq0 \text{ and } q=1.
\end{aligned}
\end{equation*}
\end{prop}
Using these estimates one can prove the following inequality in Sobolev spaces.
\begin{prop}
For any $s_1,s_2\in (-3/2,3/2)$ with $s_1+s_2\geq 0$, there exists a constant $c$ such that
\begin{equation}\label{SobolevParaDiff}
\|uv\|_{\dot{H}^{s_1+s_2-3/2}}\leq c\|u\|_{\dot{H}^{s_1}}\|v\|_{\dot{H}^{s_2}}.
\end{equation}
\end{prop}
Finally, we include some regularity estimates for the heat equation that will be used throughout the paper.
\begin{prop}\label{propoinitial}
Let $s\geq0$, $r\in(1,\infty)$, $q\in\{1,\infty\}$, $\alpha\in(0,1)$ and $\varepsilon>0$.
Then, the following estimates hold
\begin{equation}\label{heatforcer}
\|\nabla^2(\partial_t-\Delta)^{-1}_0f\|_{L^r_T(\dot{H}^{s})}\leq c\|f\|_{L^r_T(\dot{H}^s)},
\end{equation}
\begin{equation}\label{heatforces}
\|\nabla^2(\partial_t-\Delta)^{-1}_0f\|_{L^q_T(\dot{H}^s)}\leq c\|f\|_{L^q_T(\dot{H}^{s+\varepsilon})},
\end{equation}
\begin{equation}\label{heatforcesholder}
\|\nabla^2(\partial_t-\Delta)^{-1}_0f\|_{L^1_T(\dot{C}^\alpha)}\leq c\|f\|_{L^1_T(\dot{C}^{\alpha+\varepsilon})},
\end{equation}
\begin{equation}\label{heatforcesbesov}
\|\nabla^2(\partial_t-\Delta)^{-1}_0f\|_{L^\infty_T(\dot{B}^s_{2,\infty})}\leq c\|f\|_{L^\infty_T(\dot{B}^s_{2,\infty})},
\end{equation}
\begin{equation}\label{heatinitial}
\|\nabla^2 e^{t\Delta}u_0\|_{L^1_T(\dot{H}^s)}\leq c \|u_0\|_{\dot{H}^{s+\varepsilon}},
\end{equation}
\begin{equation}\label{heatinitialholder}
\|\nabla^2e^{t\Delta}u_0\|_{L^1_T(\dot{C}^\alpha)}\leq c \|u_0\|_{\dot{C}^{\alpha+\varepsilon}},
\end{equation}
\begin{equation}\label{heatinitialbesov}
\|e^{t\Delta}u_0\|_{L^\infty_T(\dot{B}^s_{2,\infty})}\leq c \|u_0\|_{\dot{B}^s_{2,\infty}}.
\end{equation}
Furthermore, there exists $u_0\in \dot{H}^s$ for which $\nabla^2e^{t\Delta}u_0\notin L^1_T(\dot{H}^s)$ and $u_0\in \dot{C}^\alpha$ for which $\nabla^2 e^{t\Delta}u_0\notin L^1_T(\dot{C}^\alpha).$
\end{prop}
Proof: The proof of \eqref{heatforcer} can be found in \cite{Krylov03}. The proof of \eqref{heatforces} follows from Bernstein inequalities and the decay of the heat kernel:
\begin{equation*}
\begin{aligned}
\|\nabla^2 (\partial_t\!-\!\Delta)_0^{-1} &f\|_{L^q_T(\dot{H}^s)}\leq \Big|\Big|\hspace{0.1cm} \Big|\Big| 2^{j(s+\varepsilon)}2^{j(2-\varepsilon)}c\int_0^t e^{-c(t-\tau)2^{2j}}\|\Delta_j f\|_{L^2}(\tau)d\tau \Big|\Big|_{l^2(j)}\Big|\Big|_{L^q_T}\\
&\leq \Big|\Big| \int_0^t \frac{c}{(t-\tau)^{1-\varepsilon/2}} \Big|\Big| 2^{j(s+\varepsilon)}\|\Delta_j f\|_{L^2}(\tau) \Big|\Big|_{l^2(j)}d\tau\Big|\Big|_{L^q_T}\leq c(T)\|f\|_{L^q_T(\dot{H}^{s+\varepsilon})}.
\end{aligned}
\end{equation*}
In the last inequality above we have used Young's inequality for convolutions. The proof of \eqref{heatforcesholder} is similar and can be found in \cite{Gancedo17GarciaJuarez}. On the other hand, there is no need to lose $\varepsilon$ derivatives if we work with Besov spaces with infinity as third index:
\begin{equation*}
\begin{aligned}
\|\nabla^2 (\partial_t\!-\!\Delta)_0^{-1} f&\|_{L^\infty_T(\dot{B}^s_{2,\infty})}\leq \sup_{t\leq T}\sup_{j\in\mathbb{Z}} 2^{2j}c\int_0^t e^{-c(t-\tau)2^{2j}}2^{sj}\|\Delta_j f\|_{L^2}(\tau)d\tau\\
&\leq c\sup_{j\in\mathbb{Z}} 2^{2j}\sup_{t\leq T} \int_0^t e^{-c(t-\tau)2^{2j}}2^{sj}\|\Delta_j f\|_{L^2}(\tau) d\tau\leq c(T)\|f\|_{L^\infty_T(\dot{B}^{s}_{2,\infty})}.
\end{aligned}
\end{equation*}
We get \eqref{heatinitial} as before:
\begin{equation*}
\begin{aligned}
\|\nabla^2 e^{t\Delta}u_0\|_{L^1_T(\dot{H}^s)}&\leq c \int_0^T \Big|\Big| 2^{j(s+\varepsilon)}2^{j(2-\varepsilon)}e^{-ct2^{2j}}\!\|\Delta_j u_0\|_{L^2}\Big|\Big|_{l^2(j)}dt\\
&\leq c\int_0^T \!\frac{\|u_0\|_{\dot{H}^{s+\varepsilon}}}{t^{1-\varepsilon/2}}dt\leq c(T)\|u_0\|_{\dot{H}^{s+\varepsilon}}.
\end{aligned}
\end{equation*}
See \cite{Gancedo17GarciaJuarez} for \eqref{heatinitialholder}. The last estimate follows directly. The counterexamples in the last statement can be found in \cite{Fefferman17} and \cite{Gancedo17GarciaJuarez}, respectively. \qed
\section{Local and global regularity results for $C^{1+\gamma}$ fronts}\label{sec:2}
This section is devoted to a framework that provides local-in-time existence of low regularity solutions for the Boussinesq system (\ref{temperature},\ref{incompressible},\ref{Boussinesq}) with no restriction on the size of the initial data. It also shows global-in-time solutions under a smallness assumption on the initial data in critical spaces. The proof is based on \textit{a priori} energy estimates for weak solutions, then energy estimates for higher regularity and finally a bootstrapping argument using maximal regularity properties of the heat operator. For this last part we use the splitting \eqref{decomposition}, as commented in the introduction.
\begin{thm}\label{Case1}
Assume $\gamma\in(0,1)$, $\varepsilon\in(0,1-\gamma)$. Let $u_0\in H^{\frac12+\gamma+\varepsilon}$ be a divergence-free vector field and $\theta_0\in L^p$ for all $1\leq p<\frac{3}{1-\gamma-\varepsilon}$. Then, there is a unique solution $(u,\theta)$ of (\ref{temperature},\ref{incompressible},\ref{Boussinesq}) with $u(x,0)=u_0$ such that
$$
\theta\in L^\infty(0,T;L^p),\quad u\in L^\infty(0,T;H^{\frac12+\gamma+\varepsilon})\cap L^2(0,T;H^{\frac32+\mu})\cap L^1(0,T;C^{1+\gamma+\tilde{\varepsilon}}),
$$
for any $\mu\leq\min\{\gamma+\varepsilon,1/2\}$ and any $0<\tilde{\varepsilon}<\varepsilon$. The time of existence $T>0$ depends on the initial data in such a way that
$$
\int_{0}^T\| e^{\tau\Delta}u_0\|_{\dot{H}^1}^4\,d\tau+\|\theta_0\|^2_{L^{3/2}}T<C_0,
$$
for $C_0>0$ a universal constant. Furthermore, if the initial data satisfy
\begin{equation}\label{smalldatahypo}
\|u_0\|_{\dot{H}^{\frac12}}+\|\theta_0\|_{L^1}<\delta,
\end{equation}
for $\delta>0$ a universal constant, the solutions exist for all time $T>0$.
\end{thm}
\begin{rem}\label{remark1}
The theorem above allows us to propagate $C^{1+\gamma}$ regularity for temperature fronts; i.e., for initial data $\theta_0(x)=\theta_0(x)1_{D_0}(x)$ with $\theta_0\in L^p$ for all $1\leq p<\frac{3}{1-\gamma-\varepsilon}$ and $D_0\subset \mathbb{R}^3$ a bounded simply connected domain with boundary $\partial D_0\in C^{1+\gamma}$. The temperature is given by
$$\theta(x,t)=\theta_0(A(x,t))1_{D(t)}(x) \hspace{0.2cm}{\rm{and}} \hspace{0.2cm} \partial D\in L^\infty(0,T;C^{1+\gamma}),$$
where $D(t)=X(D_0,t)$.
\end{rem}
Proof: \\
\underline{\textbf{Local Existence}}: We consider first the $L^2$ energy balance for the Boussinesq system, obtaining that
\begin{equation*}
\frac12\frac{d}{dt}\|u\|_{L^2}^2+\|\nabla u\|_{L^2}^2=\int u_3\theta dx\leq \frac12\|\nabla u\|^2_{L^2}+\frac12\|\theta\|^2_{\dot{H}^{-1}}\leq \frac12\|\nabla u\|^2_{L^2}+c\|\theta\|^2_{L^{6/5}},
\end{equation*}
where the embedding $L^{6/5}\hookrightarrow \dot{H}^{-1}$ has been used. From the transport character of \eqref{temperature} it is possible to find
\begin{equation*}
\|\theta\|_{L^p}(t)\leq \|\theta_0\|_{L^p},\quad \mbox{ for any }p\in [1,+\infty],
\end{equation*}
so that integration in time provides
\begin{equation}\label{l2balance}
\|u\|_{L^2}^2(t)+\int_0^T\|\nabla u\|_{L^2}^2(\tau)d\tau\leq\|u_0\|^2_{L^2}+c\|\theta_0\|^2_{L^{6/5}}T.
\end{equation}
Next we consider the $\dot{H}^{\frac12}$ norm evolution of the velocity. The argument is similar to \cite{Robinson16}, Chapter 10, but the procedure is included for completeness. Writing $u=v+w$, we decompose the velocity into a linear heat equation and a nonlinear system with zero initial data as follows:
\begin{equation*}
v_t-\Delta v=0,\, v(x,0)=u_0(x);\qquad w_t-\Delta w=-\mathbb{P}(u\cdot\nabla u)+\mathbb{P}(\theta e_3),\,w(x,0)=0,
\end{equation*}
where $\mathbb{P}$ denotes the Leray projection. It is then clear that
$$
\|v\|_{\dot{H}^{\frac12}}^2(t)+2\int_0^t\| v\|_{\dot{H}^{\frac32}}^2(\tau)d\tau=\|u_0\|_{\dot{H}^{\frac12}}^2.
$$
On the other hand,
\begin{equation*}
\begin{aligned}
\frac12\frac{d}{dt}\|w\|_{\dot{H}^{\frac12}}^2+\|w\|_{\dot{H}^{\frac32}}^2&=-\int \Lambda w\cdot (u\cdot\nabla u)dx +\int \Lambda w_3\theta dx\\
&\leq \|w\|_{\dot{H}^{\frac32}}\|u\cdot\nabla u\|_{\dot{H}^{-\frac12}}+\| w\|_{\dot{H}^{\frac32}}\|\theta\|_{\dot{H}^{-\frac12}}.
\end{aligned}
\end{equation*}
The chain of bounds
$$
\|u\cdot\nabla u\|_{\dot{H}^{-\frac12}}\leq c\|u\cdot\nabla u\|_{L^\frac32}\leq c\|u\|_{L^6}\|\nabla u\|_{L^2}\leq c\|u\|_{\dot{H}^{1}}^2,
$$
together with Young's inequality and the embedding $L^{3/2}\hookrightarrow \dot{H}^{-1/2}$ provides that
\begin{equation*}
\begin{aligned}
\frac{d}{dt}\|w\|_{\dot{H}^{\frac12}}^2+\|w\|_{\dot{H}^{\frac32}}^2&\leq c\| u\|_{\dot{H}^{1}}^4+c_2\|\theta\|_{L^\frac32}^2\leq c_1\|w\|_{\dot{H}^{\frac12}}^2\|w\|_{\dot{H}^{\frac32}}^2+c_1\| v\|_{\dot{H}^{1}}^4+c_2\|\theta_0\|_{L^\frac32}^2.
\end{aligned}
\end{equation*}
The above inequality yields
\begin{equation*}
\|w\|_{\dot{H}^{\frac12}}^2(t)+\int_0^t\|w\|_{\dot{H}^{\frac32}}^2(s)ds\leq \frac1{2c_1},\quad 0\leq t\leq T,
\end{equation*}
as long as
$$
\int_0^T(c_1\| v\|_{\dot{H}^{1}}^4(s)+c_2\|\theta_0\|_{L^\frac32}^2)ds<\frac1{4c_1}.
$$
In particular, for $t\in [0,T]$,
\begin{equation}\label{litb}
\|u\|_{\dot{H}^{\frac12}}^2(t)+\int_0^t\|u\|_{\dot{H}^{\frac32}}^2(s)ds+\int_0^t\|u\|_{\dot{H}^{1}}^4(s)ds\leq C(T).
\end{equation}
Next we consider the evolution of $1/2+\tilde{\gamma}$ derivatives, with $\tilde{\gamma}=\gamma+\varepsilon$, as follows:
\begin{equation*}
\begin{aligned}
\frac12\frac{d}{dt}\|u\|_{\dot{H}^{\frac12+\tilde{\gamma}}}^2+\|u\|_{\dot{H}^{\frac32+\tilde{\gamma}}}^2&\leq\int\Lambda^{1+\tilde{\gamma}}u\cdot\Lambda^{\tilde{\gamma}}(u\cdot \nabla u)dx+\int\Lambda^{\frac32+\tilde{\gamma}}u_3\Lambda^{-\frac12+\tilde{\gamma}}\theta dx\\
&\leq \|u\|_{\dot{H}^{1+\tilde{\gamma}}}\|u\cdot\nabla u\|_{\dot{H}^{\tilde{\gamma}}}+\|u\|_{\dot{H}^{\frac32+\tilde{\gamma}}}\|\theta\|_{\dot{H}^{-\frac12+\tilde{\gamma}}}.
\end{aligned}
\end{equation*}
Sobolev interpolation together with estimate \eqref{SobolevParaDiff} gives that
\begin{equation*}
\begin{aligned}
\frac12\frac{d}{dt}\|u\|_{\dot{H}^{\frac12+\tilde{\gamma}}}^2+\|u\|_{\dot{H}^{\frac32+\tilde{\gamma}}}^2&\leq \|u\|_{\dot{H}^{\frac12+\tilde{\gamma}}}^\frac12\|u\|_{\dot{H}^{\frac32+\tilde{\gamma}}}^\frac12\| u\|_{\dot{H}^{1}}\|u\|_{\dot{H}^{\frac32+\tilde{\gamma}}}+\|u\|_{\dot{H}^{\frac32+\tilde{\gamma}}}\|\theta\|_{\dot{H}^{-\frac12+\tilde{\gamma}}} \\
&\leq \frac12\|u\|_{\dot{H}^{\frac32+\tilde{\gamma}}}^2+\frac{3^3}4\|u\|_{\dot{H}^{\frac12+\tilde{\gamma}}}^2\|u\|_{\dot{H}^{1}}^4+\|\theta\|_{\dot{H}^{-\frac12+\tilde{\gamma}}}^2,
\end{aligned}
\end{equation*}
to find
\begin{equation}\label{1p2masgm}
\frac{d}{dt}\|u\|_{\dot{H}^{\frac12+\tilde{\gamma}}}^2+\|u\|_{\dot{H}^{\frac32+\tilde{\gamma}}}^2\leq \frac{3^3}2\|u\|_{\dot{H}^{\frac12+\tilde{\gamma}}}^2\|u\|_{\dot{H}^{1}}^4+2\|\theta\|_{\dot{H}^{-\frac12+\tilde{\gamma}}}^2.
\end{equation}
We consider two cases.\\
\underline{Case 1}: $\tilde{\gamma}\in(0,\frac12]$. In this situation the bound $\|\theta\|_{\dot{H}^{-\frac12+\tilde{\gamma}}}\leq c_3(p)\|\theta_0\|_{L^p}$ for $p=3/(2-\tilde{\gamma})$, together with \eqref{litb}, allows us to use Gronwall's inequality in \eqref{1p2masgm} in order to conclude that
\begin{equation}\label{cl12masgm}
\|u\|_{\dot{H}^{\frac12+\tilde{\gamma}}}^2(t)+\int_0^t\| u\|_{\dot{H}^{\frac32+\tilde{\gamma}}}^2(s)ds\leq c_4(T),\qquad \forall t\in [0,T].
\end{equation}
Next we write the solution of the system as
\begin{equation}\label{dnc}
u= e^{t\Delta}u_0-(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(u\cdot\nabla u)+(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(\theta e_3),
\end{equation}
to get, for $\gamma<\gamma+\tilde{\varepsilon}<\tilde{\gamma}=\gamma+\varepsilon$, the following bound
\begin{equation*}
\begin{aligned}
\|u\|_{L^1_T(\dot{C}^{1+\gamma+\tilde{\varepsilon}})}&\leq \| e^{t\Delta}u_0\|_{L^1_T(\dot{C}^{1\!+\!\gamma+\tilde{\varepsilon}})}+\|(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(u\cdot\nabla u)\|_{L^1_T(\dot{C}^{1+\gamma+\tilde{\varepsilon}})}\\
&\qquad +\|(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(\theta e_3)\|_{L^1_T(\dot{C}^{1+\gamma+\tilde{\varepsilon}})}.
\end{aligned}
\end{equation*}
Using Sobolev embedding and the fact that the Leray projector is bounded in H\"older spaces, we can find
\begin{equation*}
\begin{aligned}
\|u\|_{L^1_T(\dot{C}^{1+\gamma+\tilde{\varepsilon}})} &\leq \|e^{t\Delta}u_0\|_{L^1_T(\dot{H}^{\frac52+\gamma+\tilde{\varepsilon}})}+\|(\partial_t\!-\!\Delta)_0^{-1}(u\cdot\nabla u)\|_{L^1_T(\dot{H}^{\frac52+\gamma+\tilde{\varepsilon}})}\\
&\qquad +\|\Delta(\partial_t\!-\!\Delta)_0^{-1}(\Delta^{-1}\theta e_3)\|_{L^1_T(\dot{C}^{1+\gamma+\tilde{\varepsilon}})}.
\end{aligned}
\end{equation*}
Next we use \eqref{heatinitial}, \eqref{heatforces} and \eqref{heatforcesholder} to get
\begin{equation*}
\begin{aligned}
\|u\|_{L^1_T(\dot{C}^{1+\gamma+\tilde{\varepsilon}})} & \leq c(T)(\|u_0\|_{\dot{H}^{\frac12+\tilde{\gamma}}}+\|u\cdot\nabla u\|_{L^1_T(\dot{H}^{\frac12+\tilde{\gamma}})}+\|\theta\|_{L^1_T(\dot{B}^{-1+\tilde{\gamma}}_{\infty,\infty})}).
\end{aligned} \end{equation*} Using \eqref{SobolevParaDiff} and the embeddings $L^p\hookrightarrow\dot{B}^0_{p,p}\hookrightarrow\dot{B}^{-1+\tilde{\gamma}}_{\infty,\infty}$ from Propositions \rho_{\varpirepsilonsilonilon}f{lpembed} and \rho_{\varpirepsilonsilonilon}f{besovembed} with $p=\frac{3}{1-\tilde{\gamma}}$, we obtain \begin{equation*} \begin{aligned} \|u\|_{L^1_T(\dot{C}^{1+\gamma+\tilde{\varpirepsilon}})} & \leq c(T)(\|u_0\|_{\dot{H}^{\frac12+\tilde{\gamma}}}+\|u\|^2_{L^2_T(\dot{H}^{\frac32+\tilde{\gamma}})}+\|\theta_0\|_{L^{\frac{3}{1-\tilde{\gamma}}}})\leq c_4(T). \end{aligned} \end{equation*} Therefore, we are done with the regularity for \begin{equation*} u\in L^1_T(\dot{C}^{1+\gamma+\tilde{\varpirepsilon}}),\quad 0<\gamma<\gamma+\tilde{\varpirepsilon}<\tilde{\gamma}\leq 1/2. \end{equation*} Using the same splitting above, an analogous computation in nonhomogeneous spaces can be done to obtain that \begin{equation}\Lambdabel{l1tcmu} u\in L^1_T(C^{1+\gamma+\tilde{\varpirepsilon}}),\quad 0<\gamma<\gamma+\tilde{\varpirepsilon}<\tilde{\gamma}\leq 1/2. \end{equation} \underline{Case 2}: $\tilde{\gamma}\in(\frac12,1)$. Using \eqref{l2balance} and \eqref{cl12masgm} with $\tilde{\gamma}=1/2$ we find $u\in L^\infty_T(H^1)\cap L^2_T(H^2)$ so that interpolation provides \begin{equation}\Lambdabel{aux1} u\in L^{\frac4{1+2\alphapha}}_T(H^{\frac32+\alphapha})\hookrightarrow L^{\frac4{1+2\alphapha}}_T(C^{\alphapha}),\quad 0<\alphapha\leq 1/2, \end{equation} by Sobolev injection. Taking into account \eqref{aux1} with $\alphapha=1/2$ and \eqref{l1tcmu} with $\gamma+\tilde{\varpirepsilon}=1/2-\bar{\varpirepsilon}\in (0,1/2)$, interpolation inequality \begin{equation} \|u\|_{C^\sigma}\leq c \|u\|_{C^\frac12}^{\Lambdambda}\|u\|_{C^{\frac32-\bar{\varpirepsilon}}}^{1-\Lambdambda},\quad \frac12\leq\sigma<1, \, \quad\Lambdambda=(\frac32-\sigma-\bar{\varpirepsilon})/(1-\bar{\varpirepsilon}), \quad 0<\bar{\varpirepsilon}< 1/2, \end{equation} provides that \begin{equation*} u\in L^p_T(C^\sigma), \hspace{0.5cm} p=\frac{4(1-\bar{\varpirepsilon})}{1+2(\sigma-\bar{\varpirepsilon})},\quad \frac12\leq\sigma<1, \quad 0<\bar{\varpirepsilon}< 1/2. \end{equation*} Therefore, by choosing $\bar{\varpirepsilon}=1-\tilde{\gamma}\in(0,1/2)$ and $\alphapha=\frac{1-\tilde{\gamma}}{2\tilde{\gamma}}\in(0,1/2)$, we obtain that \begin{equation*} \begin{aligned} \|u\otimes u\|_{L^1_T(\dot{C}^{\tilde{\gamma}})}&\leq 2\int_0^T \|u\|_{L^\infty}\|u\|_{\dot{C}^{\tilde{\gamma}}}dt\leq 2\int_0^T\|u\|_{C^\alphapha}\|u\|_{C^{\tilde{\gamma}}}dt\\ &\leq 2\|u\|_{L^{4\tilde{\gamma}}_T\big(C^{\frac{1-\tilde{\gamma}}{2\tilde{\gamma}}}\big)}\|u\|_{L^{\frac{4\tilde{\gamma}}{4\tilde{\gamma}-1}}_T(C^{\tilde{\gamma}})}\leq c_4(T). \end{aligned} \end{equation*} From \eqref{dnc} we find that \begin{equation*} \begin{aligned} \|u\|_{L^1_T(\dot{C}^{1+\gamma+\tilde{\varpirepsilon}})}&\leq c(T)(\|u_0\|_{\dot{H}^{\frac12+\tilde{\gamma}}}\!+\!\|\theta_0\|_{L^\frac3{1-\tilde{\gamma}}})\!+\!\|u\otimes u\|_{L^1_T(\dot{C}^{\tilde{\gamma}})}\leq c(T)(\|u_0\|_{\dot{H}^{\frac12+\tilde{\gamma}}}\!+\!\|\theta_0\|_{L^\frac3{1-\tilde{\gamma}}})\!+\!c_4(T), \end{aligned} \end{equation*} which, together with \eqref{l1tcmu}, yields that \begin{equation*} u\in L^1_T(C^{1+\gamma+\tilde{\varpirepsilon}}),\quad 0<\gamma<\gamma+\tilde{\varpirepsilon}<\tilde{\gamma}=\gamma+\varpirepsilon<1. 
\end{equation*} Using \eqref{dnc} again, for $\varpirepsilon'\in (0,1-\tilde{\gamma})$, we also obtain that \begin{equation*} \begin{aligned} \|u\|_{L^\infty_T({\dot{H}}^{\frac12+\tilde{\gamma}})}&\leq c(T)\left( \|u_0\|_{\dot{H}^{\frac12+\tilde{\gamma}}}+\|u\cdot\nabla u\|_{L^\infty_T(\dot{H}^{-\frac32+\tilde{\gamma}+\varpirepsilon'})}+\|\theta_0\|_{L^{\frac3{3-\tilde{\gamma}-\varpirepsilon'}}}\right)\\ & \leq c(T)(\|u_0\|_{\dot{H}^{\frac12+\tilde{\gamma}}}+\|u\|_{L^\infty_T(\dot{H}^{1})}\| u\|_{L^\infty_T(\dot{H}^{\tilde{\gamma}+\varpirepsilon'})}+\|\theta_0\|_{L^\frac3{3-\tilde{\gamma}-\varpirepsilon'}})\leq c_4(T). \end{aligned} \end{equation*} \\\underline{\thetaxtbf{Global Existence}}: We consider the splitting \eqref{dnc} and apply \eqref{heatinitialbesov} together with \eqref{heatforcesbesov} to find \begin{equation*} \begin{aligned} \|u\|_{L_T^\infty(\dot{B}_{2,\infty}^{1/2})}&\leq c\|u_0\|_{\dot{B}_{2,\infty}^{1/2}}+k_1\|u\otimes u\|_{L_T^\infty(\dot{B}_{2,\infty}^{-1/2})}+c\|\theta\|_{L_T^\infty(\dot{B}_{2,\infty}^{-3/2})}\\ &\leq c\|u_0\|_{\dot{B}_{2,\infty}^{1/2}}+k_1\|u\|^2_{L_T^\infty(\dot{B}_{2,\infty}^{1/2})}+c\|\theta\|_{L_T^\infty(L^1)}, \end{aligned} \end{equation*} where we have used the paradifferential estimates of Proposition \rho_{\varpirepsilonsilonilon}f{paradif} to bound the second term. The $L^p$ maximum principle for $\theta$, together with the smallness condition $$ \|u_0\|_{\dot{B}_{2,\infty}^{1/2}}+\|\theta_0\|_{L^1}<\delta\leq \frac{1}{4k_1c} $$ and a standard continuity argument in $T$, yields $$ \|u\|_{L_T^\infty(\dot{B}_{2,\infty}^{1/2})}\leq \frac{1}{2k_1}, \quad \forall\, T>0. $$ The embedding $\dot{H}^\frac12\hookrightarrow \dot{B}_{2,\infty}^{1/2}$ allows us to recover the more classical smallness assumption as stated in the theorem. Next we use the splitting \eqref{dnc} one more time, together with Proposition \rho_{\varpirepsilonsilonilon}f{paradif}, to obtain \begin{equation*} \begin{aligned} \|u\|_{L_T^2(\dot{H}^{\frac32})}&\leq c\|u_0\|_{\dot{H}^{\frac12}}+c\|u\otimes u\|_{L_T^2(\dot{H}^{\frac12})}+c\|\theta\|_{L_T^2(\dot{H}^{-\frac12})}\\ &\leq c(\|u_0\|_{\dot{H}^{\frac12}}+T^{1/2}\|\theta_0\|_{L^{3/2}})+c\|u\|_{L_T^\infty(\dot{B}_{2,\infty}^{1/2})}\|u\|_{L_T^2(\dot{H}^{\frac32})}. \end{aligned} \end{equation*} Enlarging $k_1$ if necessary so that $c/(2k_1)\leq 1/2$ (which only strengthens the smallness condition above), the last term can be absorbed and we get $$ \|u\|_{L_T^2(\dot{H}^{\frac32})}\leq C(\|u_0\|_{\dot{H}^{\frac12}}+T^{1/2}\|\theta_0\|_{L^{3/2}}),\quad \forall\, T>0. $$ Finally, the bound $\|u\|_{\dot{H}^{1}}\leq c\|u\|^{1/2}_{\dot{B}^{1/2}_{2,\infty}}\|u\|^{1/2}_{\dot{H}^{\frac32}}$ gives $$ \int_0^T\|u\|_{\dot{H}^{1}}^4(t)dt\leq \frac{C^2}{2k_1^2}(\|u_0\|_{\dot{H}^{\frac12}}^2+T\|\theta_0\|_{L^{3/2}}^2),\quad \forall\, T>0. $$ This provides a global-in-time control of the quantity that drives \eqref{1p2masgm}, so that we can continue the proof in the same way as in the local-in-time approach. $\square$ \section{Local and global regularity results for $W^{2,\infty}$ fronts}\Lambdabel{sec:3} In this section we provide the local-in-time and global-in-time results to propagate the regularity of fronts with bounded curvature. At this level of regularity, the problem can be considered critical in the sense that one cannot expect more than $W^{2,\infty}$ regularity globally in space for the velocity, since $\theta$ is merely bounded. Due to the singular integral operators given by two derivatives of the heat kernel, bounded functions would only yield $BMO$ type regular velocities, which are not generally bounded. Therefore, some extra cancellation is needed.
The extra cancellation is achieved by the new elliptic-parabolic method that we introduce in this paper. Using the evolution equations together with integration by parts in time, we isolate the singularity of the space-time singular integrals. We reduce them to singular integrals only in space (fourth order Riesz transforms). These can be controlled thanks to the regularity provided by Theorem \rho_{\varpirepsilonsilonilon}f{Case1} in the previous section, together with techniques for singular integrals with even kernels. \begin{thm}\Lambdabel{Case2} Let $u_0\in H^{\frac32+\varpirepsilon}$ be a divergence-free vector field with $\varpirepsilon\in(0,1)$. Assume that $D_0\subset \mathbb{R}^3$ is a bounded simply connected domain with boundary $\partialrtial D_0\in W^{2,\infty}$, and $\theta_0(x)=\theta_0(x)1_{D_0}(x)$ with $\theta_0\in C^\mu(\overline{D}_0)$, $0<\mu<1$. Then, there is a unique solution $(u,\theta)$ of (\rho_{\varpirepsilonsilonilon}f{temperature},\rho_{\varpirepsilonsilonilon}f{incompressible},\rho_{\varpirepsilonsilonilon}f{Boussinesq}) with $u(x,0)=u_0(x)$ such that $$\theta(x,t)=\theta_0(A(x,t))1_{D(t)}(x) \hspace{0.2cm}{\rm{and}} \hspace{0.2cm} \partialrtial D\in L^\infty(0,T;W^{2,\infty}),$$ where $D(t)=X(D_0,t)$. The regularity of the velocity is given by $$ u\in L^\infty(0,T;H^{\frac32+\varpirepsilon})\cap L^1(0,T;W^{2,\infty}).$$ The time of existence $T>0$ depends on the initial data in such a way that $$ \int_{0}^T\| e^{\tau{\rm div}\thinspacesplaystyleelta}u_0\|_{\dot{H}^1}^4(\tau)d\tau+\|\theta_0\|^2_{L^{3/2}}T<C_0, $$ for $C_0>0$ a universal constant. Furthermore, if the initial data satisfy \begin{equation*} \|u_0\|_{\dot{H}^{\frac12}}+\|\theta_0\|_{L^1}<\delta, \end{equation*} for $\delta>0$ a universal constant, the solutions exist for all time $T>0$. \end{thm} \begin{rem}\Lambdabel{remark42} The theorem above allows us to show an analogous result for initial fronts of the form $\theta_0(x)=\theta_1(x)1_{D_0}(x)+\theta_2(x)1_{D^c_0}(x)$ with $\theta_1\in C^{\mu_1}(\overline{D}_0)$, $\theta_2\in C^{\mu_2}(\overline{D^c_0})\cap L^1$ and $\mu_1,\mu_2\in (0,1)$. Then, the same conclusions for $u$ and $\theta$ are obtained and the front propagates as \begin{equation*} \theta(x,t)=\theta_1(A(x,t))1_{D(t)}(x)+\theta_2(A(x,t))1_{D^c(t)}(x) \hspace{0.2cm}{\rm{with}} \hspace{0.2cm} \partialrtial D\in L^\infty(0,T;W^{2,\infty}). \end{equation*} \end{rem} Proof: The first point in the argument is to use Theorem \rho_{\varpirepsilonsilonilon}f{Case1} in order to find a unique solution of the system up to a time $T>0$. Under the smallness assumption the previous estimates are global, and so are the following ones, giving the global existence result. Next we use the splitting \eqref{dnc} to find \begin{equation}\Lambdabel{aux} \|u\|_{L^\infty_T(\dot{H}^{\frac32+\varpirepsilon})}\leq \|u_0\|_{\dot{H}^{\frac32+\varpirepsilon}}+c\|u\cdot\nabla u\|_{L^\infty_T(\dot{H}^{-\frac12+\varpirepsilon'})}+c\|\theta\|_{L^\infty_T(\dot{H}^{-\frac12+\varpirepsilon'})}, \end{equation} where $0<\varpirepsilon<\varpirepsilon'<1$. For $0<\varpirepsilon'\leq 1/2$, the last term is bounded by the Sobolev embedding $L^{3/(2-\varpirepsilon')}\hookrightarrow \dot{H}^{-\frac12+\varpirepsilon'}$. For $1/2<\varpirepsilon'<1$, we use that Theorem \rho_{\varpirepsilonsilonilon}f{Case1} guarantees that $\theta(t)$ is a H\"older patch with $C^{1+\gamma}$ boundary for any $0<\gamma<1$. More specifically, we recall that $\theta(x,t)=\theta_0(A(x,t))1_{D(t)}(x)$, with $\theta_0(A(x,t))\in C^\mu(\overline{D}(t))$.
In particular, $\theta_0(A(x,t))\in H^\mu(D(t))$ and therefore there is an extension of it to $\mathbb{R}^3$ (see \cite{McLean00}), which we denote $\tilde{\theta}(t)\in H^\mu\cap L^\infty$. Then, for $0<s<\min\{\mu,\frac12\}$, we make use of standard paradifferential calculus estimates (see Proposition \rho_{\varpirepsilonsilonilon}f{paradif}) to find that \begin{equation*} \begin{aligned} \|\theta(t)\|_{\dot{H}^{s}}&\leq \|T_{1_{D(t)}}\tilde{\theta}(t)\|_{\dot{H}^s}+\|T_{\tilde{\theta}(t)} 1_{D(t)}\|_{\dot{H}^s}+\|R(1_{D(t)},\tilde{\theta}(t))\|_{\dot{H}^s}\\ &\leq c(\|1_{D(t)}\|_{L^\infty}\|\tilde{\theta}\|_{\dot{H}^s}+\|\tilde{\theta}(t)\|_{L^\infty}\|1_{D(t)}\|_{\dot{H}^s})\leq C(T), \end{aligned} \end{equation*} where in the last step we have used the fact that the characteristic function of a bounded Lipschitz domain is in $\dot{H}^s$ for $0< s<\frac12$ (see \cite{Faraco13}). Then, using \eqref{SobolevParaDiff} for the second term in \eqref{aux}, we obtain \begin{equation*} \begin{aligned} \|u\|_{L^\infty_T(\dot{H}^{\frac32+\varpirepsilon})}&\leq \|u_0\|_{\dot{H}^{\frac32+\varpirepsilon}}+c\|u\otimes u\|_{L^\infty_T(\dot{H}^{\frac12+\varpirepsilon})}+C(T)\\ &\leq \|u_0\|_{\dot{H}^{\frac32+\varpirepsilon}}+c\|u\|^2_{L^\infty_T(\dot{H}^{1+\frac{\varpirepsilon}{2}})}+C(T)\leq C(T), \end{aligned} \end{equation*} where the last terms are controlled due to the estimates found in the previous section. The last estimate for the velocity is performed as follows: \begin{equation}\Lambdabel{W2inf} \begin{aligned} \|u\|_{L^1_T(\dot{W}^{2,\infty})}&\leq c( \|u_0\|_{H^{\frac32+\varpirepsilon}}+\|u\cdot\nabla u\|_{L^1_T(C^{\varpirepsilon})}+\|\nabla^2(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(\theta e_3)\|_{L^1_T(L^\infty)})\\ &\leq c(\|u_0\|_{H^{\frac32+\varpirepsilon}}+\|u\|_{L^\infty_T(C^{\varpirepsilon})}\|\nabla u\|_{L^1_T(C^{\varpirepsilon})}+\|\nabla^2(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(\theta e_3)\|_{L^1_T(L^\infty)})\\ &\leq C(T)+\|\nabla^2(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(\theta e_3)\|_{L^1_T(L^\infty)}, \end{aligned} \end{equation} so that it remains to bound the last term above. Next we analyze the singular integral operator $\nabla^2(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(\theta e_3)$. We apply the Fourier transform to find $$ \mathcal{F}(\nabla^2(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(\theta e_3))(\xi,t)=\xi_j\xi_k\Big(\delta_{l,3}-\frac{\xi_l\xi_3}{|\xi|^2}\Big)\int_0^t\exp(-|\xi|^2(t-\tau))\hat{\theta}(\xi,\tau)d\tau, $$ for $j$, $k$, $l$ from 1 to 3, and $\delta_{l,3}$ the Kronecker delta. This shows that we only need to deal with the following four cases $$ \partialrtial_1^3\partialrtial_3(-{\rm div}\thinspacesplaystyleelta)^{-1}(\partial_t\!-\!\Delta)_0^{-1}\theta(x,t)={\rm pv}\int_0^t\int_{\mathbb{R}^3}K_1(x-y,t-\tau)\theta(y,\tau)dyd\tau, $$ $$ \partialrtial_1^2\partialrtial_2\partialrtial_3(-{\rm div}\thinspacesplaystyleelta)^{-1}(\partial_t\!-\!\Delta)_0^{-1}\theta(x,t)={\rm pv}\int_0^t\int_{\mathbb{R}^3}K_2(x-y,t-\tau)\theta(y,\tau)dyd\tau, $$ $$ \partialrtial_1^2\partialrtial_3^2(-{\rm div}\thinspacesplaystyleelta)^{-1}(\partial_t\!-\!\Delta)_0^{-1}\theta(x,t)={\rm pv}\int_0^t\int_{\mathbb{R}^3}K_3(x-y,t-\tau)\theta(y,\tau)dyd\tau, $$ $$ \partialrtial_1^4(-{\rm div}\thinspacesplaystyleelta)^{-1}(\partial_t\!-\!\Delta)_0^{-1}\theta(x,t)={\rm pv}\int_0^t\int_{\mathbb{R}^3}K_4(x-y,t-\tau)\theta(y,\tau)dyd\tau, $$ by exchanging coordinates. The principal values above are understood as a limit removing the time singularity.
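Let us briefly indicate why these four model operators suffice. Writing $\xi_j\xi_k\delta_{l,3}=\sum_{m=1}^3\xi_j\xi_k\xi_m^2/|\xi|^2$, the symbol above becomes a linear combination of terms of the form $\xi_a\xi_b\xi_c\xi_d/|\xi|^2$ multiplying the Duhamel integral, and, up to relabeling the coordinates, every fourth order monomial $\xi_a\xi_b\xi_c\xi_d$ is of one of the four types $$ \xi_1^4,\qquad \xi_1^3\xi_3,\qquad \xi_1^2\xi_3^2,\qquad \xi_1^2\xi_2\xi_3, $$ corresponding to the exponent patterns $(4)$, $(3,1)$, $(2,2)$ and $(2,1,1)$; these are precisely the operators displayed above.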
The identity $${\rm div}\thinspacesplaystyleelta K=\partialrtial_t K,$$ for $K$ the heat kernel at any $t\neq 0$, implies that \begin{equation}\Lambdabel{kernels} \begin{aligned} \widehat{K_1}(\xi,t)&=\frac{\xi_1^3\xi_3}{|\xi|^4}\widehat{\partialrtial_t K}(\xi,t), \qquad \widehat{K_2}(\xi,t)= \frac{\xi_1^2\xi_2\xi_3}{|\xi|^4}\widehat{\partialrtial_tK}(\xi,t),\\ \widehat{K_3}(\xi,t)&=\frac{\xi_1^2\xi_3^2}{|\xi|^4}\widehat{\partialrtial_t K}(\xi,t), \qquad \widehat{K_4}(\xi,t)=\frac{\xi_1^4}{|\xi|^4}\widehat{\partialrtial_t K}(\xi,t). \end{aligned} \end{equation} Therefore, we can write (see Chapter 3.3 in \cite{Stein70}) \begin{equation}\Lambdabel{Ki} K_i(x,t)=\partialrtial_t(k_i(x)* K(x,t))+\frac1{15}\delta_{i3}\partialrtial_t K(x,t)+\frac1{5}\delta_{i4}\partialrtial_t K(x,t) \end{equation} where the kernels $k_i$, $i=1,..., 4$, correspond to fourth order Riesz transforms, hence they are even, homogeneous of degree $-3$, and have zero mean on spheres. We denote by $\delta_{ij}$ the Kronecker delta. Going back to \eqref{W2inf}, we can now bound the temperature terms by the following \begin{equation}\Lambdabel{i1234} \|\nabla^2(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(\theta e_3)\|_{L^1_T(L^\infty)}\leq c \sum_{i=1}^4\left|\left|{\rm pv} \int_0^t \int_{\mathbb{R}^3} K_i(x-y,t-\tau)\theta(y,\tau)dyd\tau\right|\right|_{L^1_T(L^\infty)}. \end{equation} We show the details in the case $i=4$, as the rest can be handled in a similar manner. Then we start by splitting as follows \begin{equation*} {\rm pv} \int_0^t \int_{\mathbb{R}^3} K_4(x-y,t-\tau)\theta(y,\tau)dyd\tau=I_1+I_2, \end{equation*} where \begin{equation}\Lambdabel{I2} \begin{aligned} I_1&=-{\rm pv} \int_0^t \int_{\mathbb{R}^3} \partialrtial_\tau ((k_4* K)(x-y,t-\tau))\theta(y,\tau)dyd\tau,\\ I_2&=-\frac15{\rm pv} \int_0^t \int_{\mathbb{R}^3} \partialrtial_\tau K(x-y,t-\tau))\theta(y,\tau)dyd\tau. \end{aligned} \end{equation} Using the equation \eqref{temperature}, the term $I_1$ becomes \begin{equation*} \begin{aligned} I_1&=-{\rm pv} \int_0^t \int_{\mathbb{R}^3} \partialrtial_\tau ((k_4* K)(x-y,t-\tau)\theta(y,\tau))dyd\tau\\ &\quad-{\rm pv}\int_0^t \int_{\mathbb{R}^3} (k_4* K)(x-y,t-\tau)\nabla\cdot(u(y,\tau)\theta(y,\tau)) dyd\tau, \end{aligned} \end{equation*} so integration by parts shows that \begin{equation}\Lambdabel{I1} \begin{aligned} I_1&=-\lim_{\tau\to t^-}\int_{\mathbb{R}^3}(k_4* K)(x-y,t-\tau)\theta(y,\tau)dy+\int_{\mathbb{R}^3}(k_4* K)(x-y,t)\theta(y,0)dy\\ &\quad+\int_0^t\int_{\mathbb{R}^3}\nabla K(x-y,t-\tau)\cdot (k_4* (u \theta))(y,\tau)dyd\tau\\ &=J_1+J_2+J_3. \end{aligned} \end{equation} The first term can be written as follows \begin{equation*} \begin{aligned} J_1&=-\lim_{\tau\to t^-}\int_{\mathbb{R}^3}K(x-y,t-\tau) (k_4* \theta)(y,\tau)dy=-\lim_{\varpirepsilonsilonilon \to 0^+} K(\varpirepsilonsilonilon)* (k_4* \theta(t-\varpirepsilonsilonilon))(x)\\ &=-( k_4* \theta) (x,t). \end{aligned} \end{equation*} We note here that $k_4$ defines a singular integral operator, and thus $J_1$ is not bounded for a general bounded function. 
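Before turning to the bound of $J_1$, let us recall for completeness how the constants $\frac1{15}$ and $\frac15$ in \eqref{Ki} arise. Following \cite{Stein70}, each symbol in \eqref{kernels} is decomposed into its average over the unit sphere plus a part with zero spherical mean; the zero-mean part gives the fourth order Riesz transform kernel $k_i$, while the constant part produces the corresponding multiple of $\partialrtial_t K$. The relevant averages are $$ \frac1{4\pi}\int_{|\omega|=1}\omega_1^4\, dS(\omega)=\frac15,\qquad \frac1{4\pi}\int_{|\omega|=1}\omega_1^2\omega_3^2\, dS(\omega)=\frac1{15}, $$ while the averages of $\omega_1^3\omega_3$ and $\omega_1^2\omega_2\omega_3$ vanish by symmetry, in agreement with the fact that only $K_3$ and $K_4$ carry an extra multiple of $\partialrtial_t K$ in \eqref{Ki}.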
We first define a cut-off distance \begin{equation}\Lambdabel{cutoff} \delta=\min_{\tau\in[0,t]}\left(\frac{|\nabla Z|_{\inf}(\tau)}{\|\nabla Z\|_{C^\eta}(\tau)}\right)^{1/\eta},\hspace{1cm}\eta\in(0,1), \end{equation} where $$ |\nabla Z|_{\inf}(t)=\min_{j}\Big\{\min\Big\{\inf_{\alpha\in \mathcal{N}_j} |\partialrtial_{\alpha_1}Z(\alpha,t)|, \inf_{\alpha\in \mathcal{N}_j} |\partialrtial_{\alpha_2}Z(\alpha,t)|,\inf_{\alpha,\beta\in \mathcal{N}_j,\alpha\neq\beta} \frac{|Z(\alpha,t)-Z(\beta,t)|}{|\alpha-\beta|}\Big\}\Big\}. $$ Above the neighborhoods $\mathcal{N}_j$, $j=1,...,L$, provide local charts of the free boundary $\partialrtial D(t)$ so that for any $x\in\partialrtial D(t)$ there exists a $\mathcal{N}_j\subset\mathbb{R}^2$ such that $x=Z(\alpha,t)$ with $\alpha\in\mathcal{N}_j$. The positive quantity $\delta$ is fixed due to Theorem \rho_{\varpirepsilonsilonilon}f{Case1}. Then we can perform the following splitting \begin{equation}\Lambdabel{J1} J_1=-\int_{D(t)\cap\{|x-y|\geq \delta \}}k_4(x-y)\theta(y,t)dy-{\rm pv}\int_{ D(t)\cap \{|x-y|< \delta \}}k_4(x-y)\theta(y,t)dy=L_1+L_2, \end{equation} where the first term is bounded by \begin{equation}\Lambdabel{L1} |L_1|\leq \int_{|x-y|\geq \delta}|k_4(x-y)||\theta(y,t)|dy\leq c \|\theta_0\|_{L^\infty}|\log{\delta}||D_0|. \end{equation} In order to bound $L_2$ we distinguish between two cases: $x\in \overline{D(t)}$ and $x\notin \overline{D(t)}$. From now on, if $x\in\partialrtial D(t)$ the meaning of $\theta(x,t)$ is the limit of $\theta(y,t)$ from inside $D(t)$ as $y\to x$. In the first case, we split $L_2$ as follows \begin{equation*} L_2=-{\rm pv}\int_{D(t)\cap\{|x-y|< \delta \}}k_4(x-y) (\theta(y,t)-\theta(x,t))dy-\theta(x,t){\rm pv}\int_{D(t)\cap\{|x-y|< \delta \}} k_4(x-y)dy, \end{equation*} and therefore \begin{equation*} |L_2|\leq c \|\theta(t)\|_{C^\mu(\overline{D(t)})}\frac{\delta^\mu}{\mu}+\Big|\theta(x,t){\rm pv}\int_{D(t)\cap\{|x-y|< \delta \}} k_4(x-y)dy\Big|. \end{equation*} Since $\theta$ satisfies a transport equation, the regularity obtained for $u$ allows us to find that \begin{equation*} \|\theta(t)\|_{C^\mu(\overline{D(t)})}\leq \|\theta_0\|_{C^\mu(\overline{D}_0)}e^{c\int_0^t\|\nabla u\|_{L^\infty}(\tau)d\tau}\leq c(\|u_0\|_{H^{\frac12+\varpirepsilon}},T)\|\theta_0\|_{C^\mu(\overline{D}_0)}, \end{equation*} so we have that \begin{equation}\Lambdabel{L2} |L_2|\leq c(\|u_0\|_{H^{\frac12+\varpirepsilon}},T)\|\theta_0\|_{C^\mu(\overline{D}_0)}+\Big|\theta(x,t){\rm pv}\int_{D(t)\cap\{|x-y|< \delta \}} k_4(x-y)dy\Big|. \end{equation} The last term above can be bounded since the kernels are homogeneous and even, and $\partialrtial D(t)\in C^{1+\gamma}$ due to Remark \rho_{\varpirepsilonsilonilon}f{remark1} (see \cite{Bertozzi93} for the complete argument in 2d, and \cite{Cordoba10} for its extension to the three dimensional case). In the case $x\notin \overline{D(t)}$, we define $\tilde{x}=\arg d(x, \partialrtial D(t))\in \partialrtial D(t)$. 
Then we can split $L_2$ as follows \begin{equation*} L_2=-{\rm pv}\int_{D(t)\cap\{|x-y|< \delta \}}k_4(x-y) (\theta(y,t)-\theta(\tilde{x},t))dy-\theta(\tilde{x},t){\rm pv}\int_{D(t)\cap\{|x-y|< \delta \}} k_4(x-y)dy, \end{equation*} so taking into account the triangle inequality we find that \begin{equation*} \begin{aligned} |L_2|&\leq \|\theta(t)\|_{C^\mu(\overline{D(t)})}\int_{D(t)\cap\{|x-y|< \delta \}}|k_4(x-y)|2^\mu|x-y|^{\mu}dy\\ &\quad+\Big|\theta(\tilde{x},t){\rm pv}\int_{D(t)\cap \{|x-y|< \delta \}}k_4(x-y)dy\Big|, \end{aligned} \end{equation*} thus we conclude that \begin{equation}\Lambdabel{L2bound} |L_2|\leq c(\|u_0\|_{H^{\frac12+\varpirepsilon}},\|\theta_0\|_{C^\mu(\overline{D}_0)},\delta,T). \end{equation} Going back to \eqref{J1}, the bounds \eqref{L1} and \eqref{L2bound} give \begin{equation}\Lambdabel{J1bound} \|J_1\|_{L^\infty_T(L^\infty)}\leq c(\|u_0\|_{H^{\frac12+\varpirepsilon}},\|\theta_0\|_{C^s(\overline{D}_0)},\delta,|D_0|, T). \end{equation} We proceed now to bound $J_2$ in \eqref{I1}. We can write it in the following manner \begin{equation*} J_2=\int_{\mathbb{R}^3} K(x-y,t) (k_4*\theta_0)(y)dy=e^{t{\rm div}\thinspacesplaystyleelta}(k_4*\theta_0)(x), \end{equation*} so standard properties of the heat equation give us that \begin{equation*} \|J_2\|_{L^\infty_T(L^\infty)} \leq\|k_4*\theta_0\|_{L^\infty}. \end{equation*} Using the same reasoning as in the term $J_1$, we can conclude that \begin{equation}\Lambdabel{J2bound} \|J_2\|_{L^\infty_T({L^\infty})}\leq c(\|\theta_0\|_{C^s(\overline{D}_0)},\delta,|D_0|). \end{equation} The last term is indeed more regular and it can be bounded as follows \begin{equation} \begin{aligned}\Lambdabel{J3bound} \|J_3\|_{L^\infty_T(L^\infty)}&\leq \Big|\Big|\int_0^t \|\nabla K(t-\tau)\|_{L^{\frac{3}{2+\varpirepsilon}}}\|k_4*(u\theta)(\tau)\|_{L^{\frac{3}{1-\varpirepsilon}}}d\tau\Big|\Big|_{L^\infty_T}\leq c(T)\|u\theta\|_{L^\infty_T(L^{\frac{3}{1-\varpirepsilon}})}\\ &\leq c(\|\theta_0\|_{L^\infty},\|u_0\|_{H^{\frac12+\varpirepsilon}},T). \end{aligned} \end{equation} So introducing the bounds \eqref{J1bound}, \eqref{J2bound} and \eqref{J3bound} in \eqref{I1} we have that \begin{equation*} \|I_1\|_{L^\infty_T(L^\infty)}\leq c(\|u_0\|_{H^{\frac12+\varpirepsilon}},\|\theta_0\|_{C^s(\overline{D}_0)},\delta,|D_0|, T). \end{equation*} The term $I_2$ is analogous to $I_1$ but replacing the Riesz transforms with identities, so we find that \begin{equation*} \|I_2\|_{L^\infty_T(L^\infty)}\leq c(\|\theta_0\|_{L^\infty},\|u_0\|_{H^{\frac12}},T). \end{equation*} We are then done with estimate \eqref{i1234} and therefore plugging it into \eqref{W2inf} we find the regularity needed to end the proof. \qed \section{Local and global regularity results for $C^{2+\gamma}$ fronts}\Lambdabel{sec:4} This section is devoted to prove the results for $C^{2+\gamma}$ fronts. As commented in the beginning of the previous section, one cannot expect to obtain $C^{2+\gamma}$ regularity globally in space for the velocity, since $\theta$ is only bounded. Indeed, taking two derivatives in \eqref{dnc}, the hardest part is to study the regularity of the last term on the boundary of the H\"older-patch. We first use the trick of the previous section to reduce the evolution terms of the velocity into singular integrals of Riesz type at the fixed time $t$. Then, we study the H\"older regularity of these singular integrals on the boundary. 
After some technical splittings, we take advantage of the fact that fourth order Riesz transform kernels can be integrated (see the $L_6$ term, \eqref{L6}) to fully introduce a contour dynamics formulation. Studying these new kernels together with the previous regularity results, we obtain the theorem below. \begin{thm}\Lambdabel{Case3} Let $u_0\in H^{\frac32+\gamma+\varpirepsilon}$ be a divergence-free vector field with $\gamma\in(0,1)$ and $0<\varpirepsilon<\min\{1/2,1-\gamma\}$. Assume that $D_0\subset \mathbb{R}^3$ is a bounded simply connected domain with boundary $\partialrtial D_0\in C^{2+\gamma}$, and $\theta_0(x)=\theta_0(x)1_{D_0}(x)$ with $\theta_0\in C^\gamma(\overline{D}_0)$. Then, there is a unique solution $(u,\theta)$ of (\rho_{\varpirepsilonsilonilon}f{temperature},\rho_{\varpirepsilonsilonilon}f{incompressible},\rho_{\varpirepsilonsilonilon}f{Boussinesq}) with $u(x,0)=u_0(x)$ such that $$\theta(x,t)=\theta_0(A(x,t))1_{D(t)}(x) \hspace{0.2cm}{\rm{and}} \hspace{0.2cm} \partialrtial D\in L^\infty(0,T;C^{2+\gamma}),$$ where $D(t)=X(D_0,t)$. The regularity of the velocity is given by $$ u\in L^\infty(0,T;H^{\frac32+\gamma+\varpirepsilon})\cap L^1(0,T;W^{2,\infty}).$$ The time of existence $T>0$ depends on the initial data in such a way that $$ \int_{0}^T\| e^{\tau{\rm div}\thinspacesplaystyleelta}u_0\|_{\dot{H}^1}^4(\tau)d\tau+\|\theta_0\|^2_{L^{3/2}}T<C_0, $$ for $C_0>0$ a universal constant. Furthermore, if the initial data satisfy \begin{equation*} \|u_0\|_{H^{\frac12}}+\|\theta_0\|_{L^1}<\delta, \end{equation*} for $\delta>0$ a universal constant, the solutions exist for all time $T>0$. \end{thm} \begin{rem} The theorem above allows us to show an analogous result for initial fronts of the form $\theta_0(x)=\theta_1(x)1_{D_0}(x)+\theta_2(x)1_{D^c_0}(x)$ with $\theta_1\in C^{\gamma}(\overline{D}_0)$, $\theta_2\in C^{\gamma}(\overline{D^c_0})\cap L^1$. The same conclusions for $u$ and $\theta$ are obtained and the front propagates as $$\theta(x,t)=\theta_1(A(x,t))1_{D(t)}(x)+\theta_2(A(x,t))1_{D^c(t)}(x) \hspace{0.2cm}{\rm{with}} \hspace{0.2cm} \partialrtial D\in L^\infty(0,T;C^{2+\gamma}).$$ \end{rem} Proof: The regularity $u\in L^\infty_T(H^{\frac32+\gamma+\varpirepsilon})$ follows from Theorem \rho_{\varpirepsilonsilonilon}f{Case2}. We need to study the $C^{2+\gamma}$ regularity of the velocity. From the equation, since $\theta$ is not continuous, one cannot expect to obtain such regularity globally in space, so we need to study it on the surface. The nonlinear term and the one corresponding to the initial data in the splitting \eqref{dnc} can be treated as in \eqref{aux}. In particular, for these terms one can indeed obtain the regularity globally in space: \begin{equation*} \begin{aligned} \|e^{t{\rm div}\thinspacesplaystyleelta}u_0\|_{L^1_T(C^{2+\gamma})}&\leq c(T)\|u_0\|_{H^{\frac32+\gamma+\varpirepsilon}},\\ \|(\partial_t\!-\!\Delta)_0^{-1}\mathbb{P}(u\cdot\nabla u)\|_{L^1_T(C^{2+\gamma})}&\leq c\|u\cdot\nabla u\|_{L^1_T(C^{\gamma+\varpirepsilon})}\leq c\|u\|_{L^\infty_T(C^{\gamma+\varpirepsilon})}\|\nabla u\|_{L^1_T(C^{\gamma+\varpirepsilon})}\leq C(T). \end{aligned} \end{equation*} It remains to deal with the temperature term. To do so, we consider, as before, one of the main kernels, as explained in the proof of Theorem \rho_{\varpirepsilonsilonilon}f{Case2}; the others can be treated in a similar manner. We deal with the kernel $K_4$ given in \eqref{kernels}.
Integration by parts in time provides, as before, \begin{equation}\Lambdabel{I1I2I3decomp} {\rm pv} \int_0^t \int_{\mathbb{R}^3} K_4(x-y,t-\tau)\theta(y,\tau)dyd\tau=I_1+I_2+I_3, \end{equation} where \begin{equation}\Lambdabel{I1I2I3} \begin{aligned} I_1&=-(k_4* \theta) (x,t)-\frac1{5}\theta(x,t),\\ I_2&=e^{t{\rm div}\thinspacesplaystyleelta}(k_4*\theta_0)(x)+\frac1{5}e^{t{\rm div}\thinspacesplaystyleelta}\theta_0(x),\\ I_3&=\int_0^t\int_{\mathbb{R}^3}\nabla K(x-y,t-\tau)\cdot ((k_4+\frac{1}{5}\delta_0)* (u \theta))(y,\tau)dyd\tau, \end{aligned} \end{equation} with $k_4$ given in \eqref{Ki} and $\delta_0$ the Dirac delta. Notice that the second and third terms correspond to solutions of the linear heat equation, and therefore can be bounded as follows \begin{equation}\Lambdabel{I2I3bound} \begin{aligned} \|I_2\|_{L^1_T(C^\gamma)}&\leq c(T)\|\theta_0\|_{L^\infty},\\ \|I_3\|_{L^1_T(C^\gamma)}&\leq c(T)\|u\|_{L^1_T(L^\infty)}\|\theta\|_{L^\infty_T(L^\infty)}\leq c(T,\|u_0\|_{H^{\frac12+\varpirepsilon}},\|\theta_0\|_{L^\infty}). \end{aligned} \end{equation} Therefore, it only remains to deal with the term $I_1$. Since we want to study the H\"older regularity along the surface, we consider two points on the surface $x=Z(\alphapha,t)$, $x+h=Z(\tilde{\alphapha},t)$. Then we start with the following splitting to deal with the H\"older norm \begin{equation*} I_1(x+h)-I_1(x)=J_1+J_2, \end{equation*} where \begin{equation*} J_1=\int_{D(t)}(k_4(x-y)-k_4(x+h-y))\theta(y,t)dy, \end{equation*} and \begin{equation*} J_2=\frac{1}{5}(\theta(x)-\theta(x+h)). \end{equation*} The term $J_2$ satisfies $$ |J_2|\leq \frac{1}{5}\|\theta\|_{C^{\gamma}(\overline{D(t)})}|h|^{\gamma}\leq C(\|u_0\|_{H^{\frac12+\varpirepsilon}},T)\|\theta_0\|_{C^\gamma(\overline{D}_0)}|h|^{\gamma}, $$ by using Theorem \rho_{\varpirepsilonsilonilon}f{Case2}. The term $J_1$ has to be decomposed in the following manner $J_1=L_1+L_2+L_3+L_4+L_5+L_6$, where $$ L_1=\int_{\{|x-y|<2|h|\}\cap D(t)}k_4(x+h-y)(\theta(x+h,t)-\theta(y,t))dy, $$ $$L_2=-\int_{\{|x-y|<2|h|\}\cap D(t)}k_4(x-y)(\theta(x,t)-\theta(y,t))dy, $$ $$L_3=\int_{\{|x-y|\geq 2|h|\}\cap D(t)}k_4(x-y)(\theta(x+h,t)-\theta(x,t))dy, $$ $$L_4=\int_{\{|x-y|\geq 2|h|\}\cap D(t)}(k_4(x+h-y)-k_4(x-y))(\theta(x+h,t)-\theta(y,t))dy, $$ $$L_5=-(\theta(x+h,t)-\theta(x,t)){\rm pv}\int_{D(t)}k_4(x-y)dy, $$ $$L_6=-\theta(x+h,t)\int_{D(t)}(k_4(x+h-y)-k_4(x-y))dy. $$ Then \begin{equation*} |L_1|\leq \|\theta(t)\|_{C^\gamma(\overline{D(t)})}\int_{|x-y|<2|h|} |k_4(x+h-y)||x+h-y|^\gamma dy\leq c(\|u_0\|_{H^{\frac12+\varpirepsilon}},T)\|\theta_0\|_{C^\gamma(\overline{D}_0)}|h|^\gamma, \end{equation*} and analogously \begin{equation*} |L_2|\leq c(\|u_0\|_{H^{\frac12+\varpirepsilon}},T)\|\theta_0\|_{C^\gamma(\overline{D}_0)}|h|^\gamma. \end{equation*} Adding $L_3$ and $L_5$ we find \begin{equation*} \begin{aligned} |L_3+L_5|&\leq |\theta(x+h,t)-\theta(x,t)|\Big| {\rm pv}\int_{\{|x-y|<2|h|\}\cap D(t)}k_4(x-y)dy\Big|\\ &\leq c(\|u_0\|_{H^{\frac12+\varpirepsilon}},\delta,T)\|\theta_0\|_{C^\gamma(\overline{D}_0)}|h|^\gamma, \end{aligned} \end{equation*} where the principal value is bounded as in \eqref{L2} taking $2|h|$ smaller than the cutoff \eqref{cutoff}.
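For later use, let us record the elementary kernel-difference estimate behind the bound of $L_4$ below: since $k_4$ is homogeneous of degree $-3$, its gradient is homogeneous of degree $-4$, and for $|x-y|\geq 2|h|$ every point $z$ of the segment joining $x-y$ and $x+h-y$ satisfies $|z|\geq |x-y|-|h|\geq |x-y|/2$, so the mean value theorem gives $$ |k_4(x+h-y)-k_4(x-y)|\leq |h|\sup_{z}|\nabla k_4(z)|\leq c\frac{|h|}{|x-y|^4},\qquad |x-y|\geq 2|h|. $$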
The next bound is performed as follows \begin{equation*} \begin{aligned} |L_4|&\leq \|\theta(t)\|_{C^\gamma(\overline{D(t)})}\int_{\{|x-y|\geq 2|h|\}\cap D(t)}|k_4(x+h-y)-k_4(x-y)||x+h-y|^\gamma dy\\ &\leq c(\|u_0\|_{H^{\frac12+\varpirepsilon}},T)\|\theta_0\|_{C^\gamma(\overline{D}_0)} \int_{|x-y|\geq 2|h|}|k_4(x+h-y)-k_4(x-y)||x-y|^\gamma dy\\ &\leq c(\|u_0\|_{H^{\frac12+\varpirepsilon}},T)\|\theta_0\|_{C^\gamma(\overline{D}_0)} \int_{|x-y|\geq 2|h|} \frac{|h|}{|x-y|^{4-\gamma}}dy\leq c(\|u_0\|_{H^{\frac12+\varpirepsilon}},T)\|\theta_0\|_{C^\gamma(\overline{D}_0)} |h|^\gamma, \end{aligned} \end{equation*} so that it remains to bound $L_6$ to be done with the H\"older regularity for $I_1$, and consequently with the proof. Therefore, the rest of the section is dedicated to bound $L_6$. \noindent \underline{\thetaxtbf{Bounding $L_6$}}: \noindent The goal is to take advantage of the fact that the kernels $k_4$ can be written as derivatives, so that one can integrate by parts to obtain operators on the boundary. Thus, we rewrite the term to see that \begin{equation*} \begin{aligned} L_6&=-\theta(x+h,t)(k_4*1_{D(t)}(x+h)-k_4*1_{D(t)}(x))\\ &=-\theta(x+h,t)((k_4+\frac15\delta_0)*1_{D(t)}(x+h)-(k_4+\frac15\delta_0)*1_{D(t)}(x)). \end{aligned} \end{equation*} From \eqref{kernels} and \eqref{Ki}, we recall that \begin{equation*} \mathcal{F}\Big(k_4+\frac15\delta_0\Big)(\xi)=\frac{\xi_1^4}{|\xi|^4}=\xi_1\frac{\xi_1^3}{|\xi|^4}, \end{equation*} so that \begin{equation}\Lambdabel{Gamma4} k_4+\frac15\delta_0=\partialrtial_1 \mathcal{F}^{-1}\Big(-i\frac{\xi_1^3}{|\xi|^4}\Big)=\partialrtial_1 \left(-3\frac{x_1(x_2^2+x_3^2)}{|x|^5}\right)=\partialrtial_1 \Gamma_4(x), \end{equation} in the distributional sense. Then, integration by parts gives that \begin{equation*} \begin{aligned} L_6=-\theta(x+h,t)\int_{\partialrtial D(t)} (\Gamma_4(x+h-y)-\Gamma_4(x-y))n_1(y,t)dS(y). \end{aligned} \end{equation*} Now, all the terms that will appear are singular integrals in two variables with kernels that depend on the free surface. Thus, we will use contour dynamics techniques together with the regularity previously obtained for the surface. Further splitting is needed to handle all singular integral kernels along the free surface. For simplicity, from now on we disregard the dependence in time of the notation. We take a cutoff distance $\eta>0$ and denote $B_\eta=\{y\in \partialrtial D(t): |x-y|<\eta\}$. We can always choose the size of $h$ small enough so that $x+h\in B_{\eta/2}$. Then we can write $L_6$ as follows \begin{equation}\Lambdabel{L6} \begin{aligned} L_6&=-\theta(x+h)\int_{B_\eta} (\Gamma_4(x+h-y)-\Gamma_4(x-y))n_1(y)dS(y)\\ &\quad-\theta(x+h)\int_{\partialrtial D(t)\smallsetminus B_\eta} (\Gamma_4(x+h-y)-\Gamma_4(x-y))n_1(y)dS(y)=M_1+M_2. \end{aligned} \end{equation} For the $M_2$ term we use the mean value theorem to obtain \begin{equation}\Lambdabel{M2bound} |M_2|\leq \frac{c}{\eta^3}\|\theta_0\|_{L^\infty}|\partialrtial D(t)||h|. \end{equation} Therefore, it only remains to deal with the $M_1$ term. 
To estimate it, we choose a parametrization $Z(\beta,t)$ on $B_\eta$ of the surface $\partialrtial D(t)$ near the points $Z(\alphapha,t)$, $Z(\tilde{\alphapha},t)$ so that \begin{equation*} \begin{aligned} M_1&=-\theta(x+h)\int_{Z^{-1}(B_\eta)} (\Gamma_4(Z(\alphapha)-Z(\beta))-\Gamma_4(Z(\tilde{\alphapha})-Z(\beta)))N_1(Z(\beta))d\beta, \end{aligned} \end{equation*} and substituting $\Gamma_4$ from \eqref{Gamma4} we can write \begin{equation*} \begin{aligned} M_1&\!=\!-\theta(x\!+\!h)\!\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\frac{(Z_2(\alphapha)\!-\!Z_2(\beta))^2\!+\!(Z_3(\alphapha)\!-\!Z_3(\beta))^2}{|\alphapha-\beta|^2}\frac{|\alphapha-\beta|^5N_1(Z(\beta))}{|Z(\alphapha)\!-\!Z(\beta)|^5}\frac{Z_1(\alphapha)\!-\!Z_1(\beta)}{|\alphapha-\beta|^3}d\beta\\ &\quad+\theta(x\!+\!h)\!\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\frac{(Z_2(\tilde{\alphapha})\!-\!Z_2(\beta))^2\!+\!(Z_3(\tilde{\alphapha})\!-\!Z_3(\beta))^2}{|\tilde{\alphapha}-\beta|^2}\frac{|\tilde{\alphapha}-\beta|^5N_1(Z(\beta))}{|Z(\tilde{\alphapha})\!-\!Z(\beta)|^5}\frac{Z_1(\tilde{\alphapha})\!-\!Z_1(\beta)}{|\tilde{\alphapha}-\beta|^3}d\beta. \end{aligned} \end{equation*} For convenience, we will choose the parametrization with isothermal coordinates (see e.g. \cite{Taylor11}, Chap. 5.10), i.e., verifying that \begin{equation}\Lambdabel{isothermal} \partialrtial_{\alphapha_1}Z(\alphapha,t)\cdot \partialrtial_{\alphapha_2}Z(\alphapha,t)=0,\hspace{1cm}|\partialrtial_{\alphapha_1}Z(\alphapha,t)|^2=|\partialrtial_{\alphapha_2}Z(\alphapha,t)|^2. \end{equation} Then, we add and subtract the appropriate quantities and group the terms together, \begin{equation}\Lambdabel{M1} M_1=O_1(\alphapha)-O_1(\tilde{\alphapha})+O_2(\alphapha)-O_2(\tilde{\alphapha}), \end{equation} where \begin{equation*} O_1(\alphapha)=-\theta(x+h){\rm pv}\!\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{(\partialrtial_\alphapha Z_2(\alphapha)\!\cdot\!(\alphapha\!-\!\beta))^2\!+\!(\partialrtial_\alphapha Z_3(\alphapha)\!\cdot\!(\alphapha\!-\!\beta))^2}{|\alphapha-\beta|^2}\frac{N_1(Z(\alphapha))}{|\partialrtial_{\alphapha_1}Z(\alphapha)|^5}\frac{Z_1(\alphapha)\!-\!Z_1(\beta)}{|\alphapha-\beta|^3}d\beta, \end{equation*} \begin{equation*} \begin{aligned} O_2(\alphapha)&=\!-\theta(x+h)\!\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\frac{(Z_2(\alphapha)\!-\!Z_2(\beta))^2\!+\!(Z_3(\alphapha)\!-\!Z_3(\beta))^2}{|\alphapha-\beta|^2}\frac{|\alphapha-\beta|^5N_1(Z(\beta))}{|Z(\alphapha)\!-\!Z(\beta)|^5}\frac{Z_1(\alphapha)\!-\!Z_1(\beta)}{|\alphapha-\beta|^3} d\beta\\ &\quad+\theta(x+h){\rm pv}\!\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{(\partialrtial_\alphapha Z_2(\alphapha)\!\cdot\!(\alphapha\!-\!\beta))^2\!+\!(\partialrtial_\alphapha Z_3(\alphapha)\!\cdot\!(\alphapha\!-\!\beta))^2}{|\alphapha-\beta|^2}\frac{N_1(Z(\alphapha))}{|\partialrtial_{\alphapha_1} Z(\alphapha)|^5}\frac{Z_1(\alphapha)\!-\!Z_1(\beta)}{|\alphapha-\beta|^3}d\beta. \end{aligned} \end{equation*} To conclude the proof, we deal with the bounds for $O_1(\alphapha)-O_1(\tilde{\alphapha})$ and the bounds for $O_2(\alphapha)-O_2(\tilde{\alphapha})$ in two different subsections.
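Let us point out the role of the isothermal coordinates \eqref{isothermal} in this splitting. Expanding the square and using \eqref{isothermal}, for every $\alphapha$ and $\beta$ one has $$ |\partialrtial_{\alphapha_1}Z(\alphapha)(\alphapha_1-\beta_1)+\partialrtial_{\alphapha_2}Z(\alphapha)(\alphapha_2-\beta_2)|^2=|\partialrtial_{\alphapha_1}Z(\alphapha)|^2|\alphapha-\beta|^2, $$ so the linearization of $|Z(\alphapha)-Z(\beta)|$ at $\beta=\alphapha$ is exactly $|\partialrtial_{\alphapha_1}Z(\alphapha)||\alphapha-\beta|$. This explains the factor $|\partialrtial_{\alphapha_1}Z(\alphapha)|^{-5}$ in the frozen-coefficient term $O_1$, which carries the leading singularity, while $O_2$ collects the differences between the kernel and its linearization.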
\noindent \underline{Bounding $O_1(\alphapha)-O_1(\tilde{\alphapha})$}: \noindent The term $O_1$ can be decomposed further: \begin{equation}\Lambdabel{O1} O_1(\alphapha)=\sum_{j=2,3} \left(P_{1,j}(\alphapha)+P_{2,j}(\alphapha)+P_{3,j}(\alphapha)\right), \end{equation} where \begin{equation*} \begin{aligned} P_{1,j}(\alphapha)&=-\theta(x+h)\frac{(\partialrtial_{\alphapha_1}Z_j(\alphapha))^2N_1(Z(\alphapha))}{|\partialrtial_{\alphapha_1}Z(\alphapha)|^5}{\rm pv}\int_{Z^{-1}(B_\eta)}\frac{(\alphapha_1-\beta_1)^2}{|\alphapha-\beta|^5}(Z_1(\alphapha)\!-\!Z_1(\beta))d\beta,\\ P_{2,j}(\alphapha)&=-\theta(x+h)\frac{(\partialrtial_{\alphapha_2}Z_j(\alphapha))^2N_1(Z(\alphapha))}{|\partialrtial_{\alphapha_1}Z(\alphapha)|^5}{\rm pv}\int_{Z^{-1}(B_\eta)}\frac{(\alphapha_2-\beta_2)^2}{|\alphapha-\beta|^5}(Z_1(\alphapha)\!-\!Z_1(\beta))d\beta,\\ P_{3,j}(\alphapha)&=\!-2\theta(x\!+\!h)\frac{\partialrtial_{\alphapha_1}Z_j(\alphapha)\partialrtial_{\alphapha_2}Z_j(\alphapha)N_1(Z(\alphapha))}{|\partialrtial_{\alphapha_1}Z(\alphapha)|^5}{\rm pv}\!\!\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\frac{(\alphapha_1\!-\!\beta_1)(\alphapha_2\!-\!\beta_2)}{|\alphapha-\beta|^5}(Z_1(\alphapha)\!-\!Z_1(\beta))d\beta. \end{aligned} \end{equation*} We define the following quantity \begin{equation*} F(Z)(\alphapha,\beta,t)=\frac{|\alphapha-\beta|}{|Z(\alphapha,t)-Z(\beta,t)|}, \hspace{0.5cm}\alphapha,\beta\in Z^{-1}(B_\eta), \end{equation*} which measures the lack of self-intersection of $Z(t)$ on $B_\eta$. Then, it is not difficult to see that \begin{equation}\Lambdabel{P1} |P_{1,j}(\alphapha)-P_{1,j}(\tilde{\alphapha})|\leq Q_1+Q_2, \end{equation} with \begin{equation*} Q_1=c\|\theta_0\|_{L^\infty}\|F(Z)\|_{L^\infty}^5\|\partialrtial_\alphapha Z\|_{L^\infty}^3 \|\partialrtial_\alphapha Z\|_{C^\gamma}|\alphapha-\tilde{\alphapha}|^\gamma \Big|\int_{Z^{-1}(B_\eta)}\frac{(\tilde{\alphapha}_1-\beta_1)^2}{|\tilde{\alphapha}-\beta|^5}(Z_1(\tilde{\alphapha})-Z_1(\beta))d\beta\Big|, \end{equation*} \begin{multline*} Q_2=c\|\theta_0\|_{L^\infty}\|\partialrtial_\alphapha Z\|_{L^\infty}^4 \|F(Z)\|_{L^\infty}^5 \Big|\!\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{(Z_1(\alphapha)\!-\!Z_1(\beta))(\alphapha_1\!-\!\beta_1)^2}{|\alphapha-\beta|^5}d\beta\!\\ -\!\!\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{(Z_1(\tilde{\alphapha})\!-\!Z_1(\beta))(\tilde{\alphapha}_1\!-\!\beta_1)^2}{|\tilde{\alphapha}-\beta|^5}d\beta \Big|. \end{multline*} To deal with $Q_1$, we first notice that \begin{equation*} \begin{aligned} \Big|\!\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{(Z_1(\tilde{\alphapha})\!-\!Z_1(\beta))(\tilde{\alphapha}_1-\beta_1)^2}{|\tilde{\alphapha}-\beta|^5}d\beta\Big|&\!\leq\! \Big|\!\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{(\tilde{\alphapha}_1\!-\!\beta_1)^2(Z_1(\tilde{\alphapha})-Z_1(\beta)-\partialrtial_{\alphapha} Z_1(\tilde{\alphapha})\cdot (\tilde{\alphapha}-\beta))}{|\tilde{\alphapha}-\beta|^5}d\beta\Big|\\ &\quad+\sum_{i=1,2} \Big|\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\frac{(\tilde{\alphapha}_1-\beta_1)^2(\tilde{\alphapha}_i-\beta_i)}{|\tilde{\alphapha}-\beta|^5}\partialrtial_{\alphapha_i} Z_1(\tilde{\alphapha})d\beta\Big|, \end{aligned} \end{equation*} so using that $Z(t)\in C^{1+\mu}$, $0<\mu<1$, we obtain \begin{equation*} \begin{aligned} \Big|\!\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\frac{(Z_1(\tilde{\alphapha})\!-\!Z_1(\beta))(\tilde{\alphapha}_1\!-\!\beta_1)^2}{|\tilde{\alphapha}-\beta|^5}d\beta\Big|&\leq c\|Z\|_{C^{1+\mu}}\!+\!
\sum_{i=1,2}\Big|\partialrtial_{\alphapha_i} Z_1(\tilde{\alphapha})\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\frac{(\tilde{\alphapha}_1\!-\!\beta_1)^2(\tilde{\alphapha}_i\!-\!\beta_i)}{|\tilde{\alphapha}-\beta|^5}d\beta\Big|. \end{aligned} \end{equation*} Recalling that $\tilde{\alphapha}\in Z^{-1}(B_{\eta/2})$, we have that $d(\tilde{\alphapha},Z^{-1}(\partialrtial B_\eta))\geq \eta /(2\|\partialrtial_{\alphapha}Z\|_{L^\infty})$. Therefore, if we take $\delta=\eta/(4\|\partialrtial_{\alphapha}Z\|_{L^\infty})$, we find that \begin{multline*} \Big|\int_{Z^{-1}(B_\eta)}\frac{(\tilde{\alphapha}_1-\beta_1)^2(\tilde{\alphapha}_i-\beta_i)}{|\tilde{\alphapha}-\beta|^5}d\beta\Big|=\Big|\underbrace{\int_{|\tilde{\alphapha}-\beta|\leq \delta}\frac{(\tilde{\alphapha}_1-\beta_1)^2(\tilde{\alphapha}_i-\beta_i)}{|\tilde{\alphapha}-\beta|^5}d\beta}_{=0}\\ \quad+\int_{Z^{-1}(B_\eta)\smallsetminus\{|\tilde{\alphapha}-\beta|\leq \delta\}}\frac{(\tilde{\alphapha}_1-\beta_1)^2(\tilde{\alphapha}_i-\beta_i)}{|\tilde{\alphapha}-\beta|^5}d\beta\Big|\leq c\frac{\|\partialrtial_{\alphapha}Z\|_{L^\infty}}{\eta}, \end{multline*} and thus \begin{equation*} \begin{aligned} \Big|\!\int_{Z^{-1}(B_\eta)}\frac{(Z_1(\tilde{\alphapha})-Z_1(\beta))(\tilde{\alphapha}_1-\beta_1)^2}{|\tilde{\alphapha}-\beta|^5}d\beta\Big|&\leq c\left(\|Z\|_{C^{1+\mu}}+\frac{\|\partialrtial_{\alphapha}Z\|^2_{L^\infty}}{\eta}\right). \end{aligned} \end{equation*} We conclude then the bound for $Q_1$ \begin{equation*} \begin{aligned} Q_1 &\leq c\|\theta_0\|_{L^\infty}\|F(Z)\|_{L^\infty}^5\|\partialrtial_\alphapha Z\|_{L^\infty}^3 \|\partialrtial_\alphapha Z\|_{C^\gamma}|\alphapha-\tilde{\alphapha}|^\gamma \left(\|Z\|_{C^{1+\mu}}+\frac{\|\partialrtial_{\alphapha}Z\|^2_{L^\infty}}{\eta}\right)\\ &\leq c\|\theta_0\|_{L^\infty}\|F(Z)\|_{L^\infty}^{5+\gamma}\|\partialrtial_\alphapha Z\|_{L^\infty}^3 \|\partialrtial_\alphapha Z\|_{C^\gamma} \left(\|Z\|_{C^{1+\mu}}+\frac{\|\partialrtial_{\alphapha}Z\|^2_{L^\infty}}{\eta}\right)|h|^\gamma, \end{aligned} \end{equation*} that is, \begin{equation}\Lambdabel{Q1bound} Q_1\leq c(\|\theta_0\|_{L^\infty},\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma}},\eta)|h|^\gamma. \end{equation} To deal with $Q_2$, we use the following identity \begin{equation*} -\frac{x_1^2}{|x|^5}=\partialrtial_{x_1}\left(\frac{x_1^3}{|x|^5}\right)+\partialrtial_{x_2}\left(\frac{x_1^2x_2}{|x|^5}\right), \hspace{0.2cm}x\in \mathbb{R}^2, \end{equation*} followed by integration by parts to show that \begin{equation*} \begin{aligned} \int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{(Z_1(\alphapha)-Z_1(\beta))(\alphapha_1-\beta_1)^2}{|\alphapha-\beta|^5}d\beta&=\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\frac{(\alphapha_1\!-\!\beta_1)^3\partialrtial_{\alphapha_1} Z_1(\beta)}{|\alphapha-\beta|^5}d\beta\\ &\quad+\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\frac{(\alphapha_1\!-\!\beta_1)^2(\alphapha_2\!-\!\beta_2)}{|\alphapha-\beta|^5}\partialrtial_{\alphapha_2} Z_1(\beta)d\beta\\ &\quad-\int_{\partialrtial Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{(\alphapha_1\!-\!\beta_1)^2(\alphapha_2\!-\!\beta_2)(Z_1(\alphapha)\!-\!Z_1(\beta))n_2(\beta)}{|\alphapha-\beta|^5}dl(\beta)\\ &\quad-\int_{\partialrtial Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\frac{(\alphapha_1\!-\!\beta_1)^3(Z_1(\alphapha)\!-\!Z_1(\beta))n_1(\beta)}{|\alphapha-\beta|^5}dl(\beta). 
\end{aligned} \end{equation*} Then, $Q_2$ is bounded as follows \begin{equation}\Lambdabel{Q2} Q_2\leq c\|\theta_0\|_{L^\infty}\|\partialrtial_\alphapha Z\|_{L^\infty}^4 \|F(Z)\|_{L^\infty}^5 (R_1+R_2+R_3+R_4), \end{equation} where $$ R_1=\Big|\int_{Z^{-1}(B_\eta)}\frac{(\alphapha_1-\beta_1)^3\partialrtial_{\alphapha_1}Z_1(\beta)}{|\alphapha-\beta|^5}d\beta-\int_{Z^{-1}(B_\eta)}\frac{(\tilde{\alphapha}_1-\beta_1)^3\partialrtial_{\alphapha_1} Z_1(\beta)}{|\tilde{\alphapha}-\beta|^5}d\beta\Big|, $$ $$ R_2=\Big|\!\int_{\partialrtial Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{(\alphapha_1-\beta_1)^3(Z_1(\alphapha)\!-\!Z_1(\beta))n_1(\beta)}{|\alphapha-\beta|^5}dl(\beta)-\! \int_{\partialrtial Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{(\tilde{\alphapha}-\beta_1)^3(Z_1(\tilde{\alphapha})\!-\!Z_1(\beta))n_1(\beta)}{|\tilde{\alphapha}-\beta|^5}dl(\beta)\Big|, $$ $$ R_3=\Big|\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\frac{(\alphapha_1-\beta_1)^2(\alphapha_2-\beta_2)}{|\alphapha-\beta|^5}\partialrtial_{\alphapha_1} Z_1(\beta)d\beta-\int_{Z^{-1}(B_\eta)}\!\!\!\!\!\!\!\!\frac{(\tilde{\alphapha}_1-\beta_1)^2(\tilde{\alphapha}_2-\beta_2)}{|\tilde{\alphapha}-\beta|^5}\partialrtial_{\alphapha_1} Z_1(\beta)d\beta\Big|, $$ and \begin{equation*} \begin{aligned} R_4=\Big|\!\int_{\partialrtial Z^{-1}(B_\eta)}&\frac{(\alphapha_1-\beta_1)^2(\alphapha_2-\beta_2)(Z_1(\alphapha)\!-\!Z_1(\beta))n_2(\beta)}{|\alphapha-\beta|^5}dl(\beta)\\ &-\int_{\partialrtial Z^{-1}(B_\eta)}\frac{(\tilde{\alphapha}_1-\beta_1)^2(\tilde{\alphapha}_2-\beta_2)(Z_1(\tilde{\alphapha})\!-\!Z_1(\beta))n_2(\beta)}{|\tilde{\alphapha}-\beta|^5}dl(\beta)\Big|. \end{aligned} \end{equation*} Introducing one more splitting, the term $R_1$ is written in the following manner \begin{equation*} \begin{aligned} R_1=\big|S_1+S_2+S_3+S_4+S_5+S_6\big|, \end{aligned} \end{equation*} where $$ S_1=\int_{Z^{-1}(B_\eta)\cap\{|\alphapha-\beta|<2|\alphapha-\tilde{\alphapha}|\}}\frac{(\alphapha_1-\beta_1)^3}{|\alphapha-\beta|^5}(\partialrtial_{\alphapha_1} Z_1(\alphapha)-\partialrtial_{\alphapha_1}Z_1(\beta))d\beta, $$ $$S_2=-\int_{Z^{-1}(B_\eta)\cap\{|\tilde{\alphapha}-\beta|\leq 2|\alphapha-\tilde{\alphapha}|\}}\frac{(\tilde{\alphapha}_1-\beta_1)^3}{|\tilde{\alphapha}-\beta|^5}(\partialrtial_{\alphapha_1}Z_1(\tilde{\alphapha})-\partialrtial_{\alphapha_1}Z(\beta))d\beta,$$ $$S_3=\int_{Z^{-1}(B_\eta)\cap\{|\alphapha-\beta|\geq 2|\alphapha-\tilde{\alphapha}|\}}\frac{(\alphapha_1-\beta_1)^3}{|\alphapha-\beta|^5}(\partialrtial_{\alphapha_1}Z_1(\alphapha)-\partialrtial_{\alphapha_1}Z_1(\tilde{\alphapha}))d\beta,$$ $$S_4=\int_{Z^{-1}(B_\eta)\cap\{|\tilde{\alphapha}-\beta|\geq 2|\alphapha-\tilde{\alphapha}|\}}\left(\frac{(\alphapha_1-\beta_1)^3}{|\alphapha-\beta|^5}-\frac{(\tilde{\alphapha}_1-\beta_1)^3}{|\tilde{\alphapha}-\beta|^5}\right)(\partialrtial_{\alphapha_1}Z_1(\tilde{\alphapha})-\partialrtial_{\alphapha_1}Z(\beta))d\beta,$$ $$S_5=-\int_{Z^{-1}(B_\eta)}\frac{(\alphapha_1-\beta_1)^3}{|\alphapha-\beta|^5}(\partialrtial_{\alphapha_1}Z_1(\alphapha)-\partialrtial_{\alphapha_1}Z_1(\tilde{\alphapha}))d\beta,$$ $$S_6=\int_{Z^{-1}(B_\eta)}\left(\frac{(\tilde{\alphapha}_1-\beta_1)^3}{|\tilde{\alphapha}-\beta|^5}-\frac{(\alphapha_1-\beta_1)^3}{|\alphapha-\beta|^5}\right)\partialrtial_{\alphapha_1}Z_1(\tilde{\alphapha})d\beta. $$ One immediately obtains that \begin{equation*} |S_1|+|S_2|+|S_4|\leq c \|Z\|_{C^{1+\gamma}}|\alphapha-\tilde{\alphapha}|^\gamma\leq c \|Z\|_{C^{1+\gamma}}\|F(Z)\|_{L^\infty}^\gamma |h|^\gamma. 
\end{equation*} The terms $S_3$ and $S_5$ are joined together \begin{equation*} S_3+S_5=(\partialrtial_{\alphapha_1}Z_1(\alphapha)-\partialrtial_{\alphapha_1}Z_1(\tilde{\alphapha}))\int_{Z^{-1}(B_\eta)\cap\{|\alphapha-\beta|\leq 2|\alphapha-\tilde{\alphapha}|\}}\frac{(\alphapha_1-\beta_1)^3}{|\alphapha-\beta|^5}d\beta. \end{equation*} Recalling that $Z(\alphapha)$ is the center of $B_\eta$, we know that $d(\alphapha,\partialrtial Z^{-1}(B_\eta))\geq \eta/\|\partialrtial_\alphapha Z\|_{L^\infty}$. Then, since $|\alphapha-\tilde{\alphapha}|\leq \|F(Z)\|_{L^\infty}|h|$, we can choose $|h|<(\eta/2)/(\|\partialrtial_{\alphapha}Z\|_{L^\infty}\|F(Z)\|_{L^\infty})$ to guarantee that \begin{equation*} |\alphapha-\tilde{\alphapha}|< \frac{\eta}{2\|\partialrtial_{\alphapha} Z\|_{L^\infty}}, \end{equation*} so that the integral is on a disk and therefore vanishes. Finally, integration by parts in $S_6$ shows that \begin{equation*} S_6\!=\!\partialrtial_{\alphapha_1}Z_1(\tilde{\alphapha}) \int_{\partialrtial Z^{-1}(B_\eta)}\!\!\left(\frac{3(\tilde{\alphapha}_1\!-\!\beta_1)^2\!+\!2(\tilde{\alphapha}_2\!-\!\beta_2)^2}{3|\tilde{\alphapha}-\beta|^3}\!-\!\frac{3(\alphapha_1\!-\!\beta_1)^2\!+\!2(\alphapha_2\!-\!\beta_2)^2}{3|\alphapha-\beta|^3}\right)n_1(\beta)dl(\beta), \end{equation*} so, since $\alphapha,\tilde{\alphapha}\in Z^{-1}(B_{\eta/2})$, we can apply the mean value theorem to conclude that \begin{equation*} |S_{6}|\leq c\|\partialrtial_{\alphapha} Z\|_{L^\infty}\left(\frac{2\|\partialrtial_{\alphapha}Z\|_{L^\infty}}{\eta}\right)^2|\alphapha-\tilde{\alphapha}|\leq c \frac{\|\partialrtial_{\alphapha}Z\|_{L^\infty}^3}{\eta^2}\|F(Z)\|_{L^\infty}|h|. \end{equation*} Joining the above bounds we have that \begin{equation}\Lambdabel{R1bound} R_1\leq c\left(\|Z\|_{C^{1+\gamma}}\|F(Z)\|_{L^\infty}^\gamma+ \frac{\|\partialrtial_{\alphapha}Z\|_{L^\infty}^3}{\eta^2}\|F(Z)\|_{L^\infty}\right)|h|^\gamma. \end{equation} We now rewrite $R_2$ as follows \begin{multline*} R_2=\Big|\int_{\partialrtial Z^{-1}(B_\eta)}\frac{(\alphapha_1-\beta_1)^3}{|\alphapha-\beta|^5}(Z_1(\alphapha)-Z_1(\tilde{\alphapha}))n_1(\beta)dl(\beta)\\ -\int_{\partialrtial Z^{-1}(B_\eta)}\left(\frac{(\tilde{\alphapha}_1-\beta_1)^3}{|\tilde{\alphapha}-\beta|^5}-\frac{(\alphapha_1-\beta_1)^3}{|\alphapha-\beta|^5}\right)(Z_1(\tilde{\alphapha})-Z_1(\beta))n_1(\beta)dl(\beta)\Big|, \end{multline*} which recalling again that $\alphapha,\tilde{\alphapha}\in Z^{-1}(B_{\eta/2})$ can be bounded by the following \begin{equation}\Lambdabel{R2bound} R_2\leq c \frac{\|\partialrtial_\alphapha Z\|_{L^\infty}^2}{\eta^2}\|Z\|_{C^\gamma}\|F(Z)\|_{L^\infty}^\gamma |h|^\gamma + c\|Z\|_{L^\infty}\frac{\|\partialrtial_{\alphapha}Z\|_{L^\infty}^3}{\eta^3}\|F(Z)\|_{L^\infty}|h|. \end{equation} The terms $R_3$ and $R_4$ can be bounded analogously to $R_1$ and $R_2$, respectively. Introducing the bounds \eqref{R1bound}, \eqref{R2bound} back in \eqref{Q2}, we obtain that \begin{equation}\Lambdabel{Q2bound} Q_2\leq c(\|\theta_0\|_{L^\infty},\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma}},\eta)|h|^\gamma. \end{equation} From \eqref{P1}, the bounds \eqref{Q1bound} and \eqref{Q2bound} yields that \begin{equation}\Lambdabel{P1bound} |P_{1,j}(\alphapha)-P_{1,j}(\tilde{\alphapha})|\leq c(\|\theta_0\|_{L^\infty},\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma}},\eta)|h|^\gamma. 
\end{equation} The terms $P_{2,j}$ and $P_{3,j}$ can be estimated analogously, so we conclude from \eqref{O1} that \begin{equation}\Lambdabel{O1bound} |O_1(\alphapha)-O_1(\tilde{\alphapha})|\leq c(\|\theta_0\|_{L^\infty},\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma}},\eta)|h|^\gamma. \end{equation} Thus, it only remains to bound the term $O_2(\alphapha)-O_2(\tilde{\alphapha})$. \noindent \underline{Bounding $O_2(\alphapha)-O_2(\tilde{\alphapha})$}: \noindent Going back to \eqref{M1}, for simplicity of notation we will denote \begin{equation*} G(\alphapha,\beta)=\frac{(Z_2(\alphapha)-Z_2(\beta))^2+(Z_3(\alphapha)-Z_3(\beta))^2}{|\alphapha-\beta|^2}\frac{|\alphapha-\beta|^5N_1(Z(\beta))}{|Z(\alphapha)-Z(\beta)|^5}, \end{equation*} and define \begin{equation*} \mathcal{G}(\alphapha,\beta)=\frac{(\partialrtial_\alphapha Z_2(\alphapha)\!\cdot\!(\alphapha\!-\!\beta))^2\!+\!(\partialrtial_\alphapha Z_3(\alphapha)\!\cdot\!(\alphapha\!-\!\beta))^2}{|\alphapha-\beta|^2}\frac{N_1(Z(\alphapha))}{|\partialrtial_{\alphapha}Z(\alphapha)|^5}. \end{equation*} Then, by using the mean value theorem, the term $O_2(\alphapha)$ is rewritten as follows \begin{equation*} O_2(\alphapha)\!=\!-\theta(x+h)\!\int_0^1\!\!\int_{Z^{-1}(B_\eta)}\! \frac{\alphapha\!-\!\beta}{|\alphapha\!-\!\beta|^3}\cdot(\partialrtial_{\alphapha}Z_1((1\!-\!r)\beta\!+\!r\alphapha)) (G(\alphapha,\beta)\!-\!\mathcal{G}(\alphapha,\beta)) d\beta dr. \end{equation*} Therefore, the difference $|O_2(\alphapha)-O_2(\tilde{\alphapha})|$ can be bounded by introducing the following splitting \begin{equation}\Lambdabel{O2} \begin{aligned} |O_2(\alphapha)-O_2(\tilde{\alphapha})|\leq \|\theta_0\|_{L^\infty} (P_4+P_5+P_6+P_7+P_8), \end{aligned} \end{equation} where \begin{equation*} P_4=\Big|\!\int_0^1\!\!\int_{Z^{-1}(B_\eta)\cap \{|\alphapha-\beta|\leq 2|\alphapha-\tilde{\alphapha}|\}}\! \frac{\alphapha\!-\!\beta}{|\alphapha\!-\!\beta|^3}\cdot\partialrtial_{\alphapha}Z_1((1\!-\!r)\beta\!+\!r\alphapha) (G(\alphapha,\beta)\!-\!\mathcal{G}(\alphapha,\beta)) d\beta dr \Big|, \end{equation*} \begin{equation*} P_5=\Big|\!\int_0^1\!\!\int_{Z^{-1}(B_\eta)\cap \{|\alphapha-\beta|\leq 2|\alphapha-\tilde{\alphapha}|\}}\! \frac{\tilde{\alphapha}\!-\!\beta}{|\tilde{\alphapha}\!-\!\beta|^3}\cdot\partialrtial_{\alphapha}Z_1((1\!-\!r)\beta\!+\!r\tilde{\alphapha}) (G(\tilde{\alphapha},\beta)\!-\!\mathcal{G}(\tilde{\alphapha},\beta)) d\beta dr \Big|, \end{equation*} \begin{equation*} P_6\!=\!\Big|\!\int_0^1\!\!\int_{Z^{-1}(B_\eta)\cap \{|\alphapha-\beta|\geq2|\alphapha-\tilde{\alphapha}|\}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \frac{\big(\partialrtial_{\alphapha}Z_1((1\!-\!r)\beta\!+\!r\alphapha)-\partialrtial_{\alphapha}Z_1((1\!-\!r)\beta\!+\!r\tilde{\alphapha})\big)\cdot(\tilde{\alphapha}-\beta) (G(\tilde{\alphapha},\beta)-\mathcal{G}(\tilde{\alphapha},\beta))}{|\tilde{\alphapha}\!-\!\beta|^3} d\beta dr \Big|, \end{equation*} \begin{equation*} P_7=\Big|\!\int_0^1\!\!\int_{Z^{-1}(B_\eta)\cap \{|\alphapha-\beta|\geq 2|\alphapha-\tilde{\alphapha}|\}}\!
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\partialrtial_{\alphapha}Z_1((1-r)\beta+r\alphapha)\cdot\left(\frac{\alphapha\!-\!\beta}{|\alphapha\!-\!\beta|^3}\!-\!\frac{\tilde{\alphapha}\!-\!\beta}{|\tilde{\alphapha}\!-\!\beta|^3}\right) (G(\tilde{\alphapha},\beta)-\mathcal{G}(\tilde{\alphapha},\beta)) d\beta dr \Big|, \end{equation*} \begin{equation*} P_8\!=\!\Big|\!\int_0^1\!\!\!\int_{Z^{-1}(B_\eta)\cap \{|\alphapha-\beta|\geq 2|\alphapha-\tilde{\alphapha}|\}}\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{(\alphapha\!-\!\beta)\cdot\partialrtial_{\alphapha}Z_1((1-r)\beta+r\alphapha)\big(G(\alphapha,\beta)\!-\!G(\tilde{\alphapha},\beta)\!-\!\mathcal{G}(\alphapha,\beta)\!+\!\mathcal{G}(\tilde{\alphapha},\beta)\big) }{|\alphapha\!-\!\beta|^3} d\beta dr \Big|. \end{equation*} To deal with the first two terms, we first notice that for both $w=\alphapha$ and $w=\tilde{\alphapha}$ it holds that \begin{equation*} \begin{aligned} |G(w,\beta)-\mathcal{G}(w,\beta)|&\leq c\left(1+ \|\partialrtial_{\alphapha}Z\|_{L^\infty}\|F(Z)\|_{L^\infty}\right)\|\partialrtial_{\alphapha}Z\|_{L^\infty}^3\|F(Z)\|_{L^\infty}^5\|\partialrtial_{\alphapha}Z\|_{C^\gamma}|w-\beta|^\gamma\\ &\leq c(\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma}})|w-\beta|^\gamma. \end{aligned} \end{equation*} Therefore, one can integrate to find that \begin{equation}\Lambdabel{P1P2} P_4+P_5\leq c(\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma}})|\alphapha-\tilde{\alphapha}|^\gamma\leq c(\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma}})|h|^\gamma. \end{equation} Next, $P_6$ is readily bounded as follows \begin{equation}\Lambdabel{P3} P_6\leq c(\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma}})|h|^\gamma. \end{equation} The mean value theorem applied in $P_7$ provides that \begin{equation*} P_7\leq c(\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma}})\|\partialrtial_{\alphapha}Z|\|_{L^\infty}\int_{Z^{-1}(B_\eta)\cap \{|\alphapha-\beta|\geq 2|\alphapha-\tilde{\alphapha}|\}}\frac{|\alphapha-\tilde{\alphapha}|}{|\alphapha-\beta|^3}|\tilde{\alphapha}-\beta|^\gamma d\beta, \end{equation*} and since $|\tilde{\alphapha}-\beta|\leq \frac32|\alphapha-\beta|$, we conclude that \begin{equation}\Lambdabel{P4} P_7\leq c(\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma}})|h|^\gamma. \end{equation} It remains to deal with $P_8$. To bound it we decompose further $G(\alphapha,\beta)-G(\tilde{\alphapha},\beta)-\mathcal{G}(\alphapha,\beta)+\mathcal{G}(\tilde{\alphapha},\beta)$. First, \begin{equation}\Lambdabel{G1G2} G(\alphapha,\beta)=G_1(\alphapha,\beta)+G_2(\alphapha,\beta), \end{equation} with \begin{equation*} \begin{aligned} G_1(\alphapha,\beta)=\frac{(Z_2(\alphapha)\!-\!Z_2(\beta))^2}{|\alphapha-\beta|^2}\frac{|\alphapha\!-\!\beta|^5N_1(Z(\beta))}{|Z(\alphapha)-Z(\beta)|^5},G_2(\alphapha,\beta)=\frac{(Z_3(\alphapha)\!-\!Z_3(\beta))^2}{|\alphapha-\beta|^2}\frac{|\alphapha\!-\!\beta|^5N_1(Z(\beta))}{|Z(\alphapha)-Z(\beta)|^5}. 
\end{aligned} \end{equation*} Analogously, \begin{equation}\Lambdabel{G1G22} \mathcal{G}(\alphapha,\beta)=\mathcal{G}_1(\alphapha,\beta)+\mathcal{G}_2(\alphapha,\beta), \end{equation} where \begin{equation*} \begin{aligned} \mathcal{G}_1(\alphapha,\beta)=\frac{(\partialrtial_\alphapha Z_2(\alphapha)\!\cdot\!(\alphapha\!-\!\beta))^2}{|\alphapha-\beta|^2}\frac{N_1(Z(\alphapha))}{|\partialrtial_{\alphapha}Z(\alphapha)|^5},\hspace{0.5cm} \mathcal{G}_2(\alphapha,\beta)=\frac{(\partialrtial_\alphapha Z_3(\alphapha)\!\cdot\!(\alphapha\!-\!\beta))^2}{|\alphapha-\beta|^2}\frac{N_1(Z(\alphapha))}{|\partialrtial_{\alphapha}Z(\alphapha)|^5}. \end{aligned} \end{equation*} Then, \begin{equation}\Lambdabel{G1H1H2} G_1(\alphapha,\beta)-G_1(\tilde{\alphapha},\beta)=H_1+H_2, \end{equation} \begin{equation*} \begin{aligned} H_1&=\left(\frac{(Z_2(\alphapha)\!-\!Z_2(\beta))^2}{|\alphapha-\beta|^2}-\frac{(Z_2(\tilde{\alphapha})\!-\!Z_2(\beta))^2}{|\tilde{\alphapha}-\beta|^2}\right)\frac{|\alphapha\!-\!\beta|^5N_1(Z(\beta))}{|Z(\alphapha)-Z(\beta)|^5},\\ H_2&=\frac{(Z_2(\tilde{\alphapha})\!-\!Z_2(\beta))^2}{|\tilde{\alphapha}-\beta|^2}\left(\frac{|\alphapha\!-\!\beta|^5}{|Z(\alphapha)-Z(\beta)|^5}-\frac{|\tilde{\alphapha}\!-\!\beta|^5}{|Z(\tilde{\alphapha})-Z(\beta)|^5}\right)N_1(Z(\beta)). \end{aligned} \end{equation*} Furthermore, \begin{equation*} H_1=\left(\frac{Z_2(\alphapha)\!-\!Z_2(\beta)}{|\alphapha-\beta|}\!-\!\frac{Z_2(\tilde{\alphapha})\!-\!Z_2(\beta)}{|\tilde{\alphapha}-\beta|}\right)\left(\frac{Z_2(\alphapha)\!-\!Z_2(\beta)}{|\alphapha-\beta|}\!+\!\frac{Z_2(\tilde{\alphapha})\!-\!Z_2(\beta)}{|\tilde{\alphapha}-\beta|}\right)\frac{|\alphapha\!-\!\beta|^5N_1(Z(\beta))}{|Z(\alphapha)-Z(\beta)|^5}. \end{equation*} We can perform a similar decomposition of \begin{equation}\Lambdabel{G1H1H22} \mathcal{G}_1(\alphapha,\beta)-\mathcal{G}_1(\tilde{\alphapha},\beta)=\mathcal{H}_1+\mathcal{H}_2, \end{equation} where $$\mathcal{H}_1=\left(\frac{\partialrtial_\alphapha Z_2(\alphapha)\!\cdot\!(\alphapha\!-\!\beta)}{|\alphapha-\beta|}\!-\!\frac{\partialrtial_\alphapha Z_2(\tilde{\alphapha})\!\cdot\!(\tilde{\alphapha}\!-\!\beta)}{|\tilde{\alphapha}-\beta|}\right)\left(\frac{\partialrtial_\alphapha Z_2(\alphapha)\!\cdot\!(\alphapha\!-\!\beta)}{|\alphapha-\beta|}\!+\!\frac{\partialrtial_\alphapha Z_2(\tilde{\alphapha})\!\cdot\!(\tilde{\alphapha}\!-\!\beta)}{|\tilde{\alphapha}-\beta|}\right)\frac{N_1(Z(\alphapha))}{|\partialrtial_{\alphapha}Z(\alphapha)|^5}, $$ $$ \mathcal{H}_2=\frac{(\partialrtial_\alphapha Z_2(\tilde{\alphapha})\!\cdot\!(\tilde{\alphapha}\!-\!\beta))^2}{|\tilde{\alphapha}-\beta|^2}\left(\frac{N_1(Z(\alphapha))}{|\partialrtial_{\alphapha}Z(\alphapha)|^5}-\frac{N_1(Z(\tilde{\alphapha}))}{|\partialrtial_{\alphapha}Z(\tilde{\alphapha})|^5}\right). 
$$ Denote $$g(\alphapha,\beta)=\frac{Z_2(\alphapha)\!-\!Z_2(\beta)}{|\alphapha-\beta|}, \hspace{0.5cm}\thetaxt{g}(\alphapha,\beta)=\frac{\partialrtial_\alphapha Z_2(\alphapha)\!\cdot\!(\alphapha\!-\!\beta)}{|\alphapha-\beta|}.$$ Then, we find that \begin{equation}\Lambdabel{H1H1} H_1-\mathcal{H}_1=Y_1+Y_2, \end{equation} \begin{equation*} \begin{aligned} Y_1&=\big((g(\alphapha,\beta)-g(\tilde{\alphapha},\beta))-(\thetaxt{g}(\alphapha,\beta)-\thetaxt{g}(\tilde{\alphapha},\beta))\big)\big(g(\alphapha,\beta)+g(\tilde{\alphapha},\beta)\big)\frac{|\alphapha\!-\!\beta|^5N_1(Z(\beta))}{|Z(\alphapha)-Z(\beta)|^5},\\ Y_2&=(\thetaxt{g}(\alphapha,\beta)-\thetaxt{g}(\tilde{\alphapha},\beta))\Big(\big(g(\alphapha,\beta)+g(\tilde{\alphapha},\beta)\big)\frac{|\alphapha\!-\!\beta|^5N_1(Z(\beta))}{|Z(\alphapha)-Z(\beta)|^5} - \big(\thetaxt{g}(\alphapha,\beta)+\thetaxt{g}(\tilde{\alphapha},\beta) \big)\frac{N_1(Z(\alphapha))}{|\partialrtial_{\alphapha}Z(\alphapha)|^5} \Big). \end{aligned} \end{equation*} We can bound $Y_1$ as follows \begin{equation*} |Y_1|\leq c(\|F(Z)\|_{L^\infty},\|\partialrtial_{\alphapha}Z\|_{L^\infty}) \big|(g(\alphapha,\beta)-g(\tilde{\alphapha},\beta))-(\thetaxt{g}(\alphapha,\beta)-\thetaxt{g}(\tilde{\alphapha},\beta))\big|, \end{equation*} where \begin{equation*} \big|g(\alphapha,\beta)-g(\tilde{\alphapha},\beta))-(\thetaxt{g}(\alphapha,\beta)-\thetaxt{g}(\tilde{\alphapha},\beta)\big|=\big|\delta g_1+\delta g_2+\delta g_3 \big|, \end{equation*} and, for $\sigma\in(0,1-\gamma)$, \begin{equation*} |\delta g_1|=\Big|\int_0^1\Big(\partialrtial_{\alphapha}Z_2((1-r)\beta+r\alphapha)-\partialrtial_{\alphapha}Z_2((1-r)\beta+r\tilde{\alphapha})\Big)\cdot\frac{\alphapha-\beta}{|\alphapha-\beta|}dr\Big|\leq c\|\partialrtial_{\alphapha}Z_2\|_{C^{\gamma+\sigma}}|\alphapha-\tilde{\alphapha}|^{\gamma+\sigma}, \end{equation*} \begin{equation*} |\delta g_2|=\Big|-(\partialrtial_{\alphapha}Z_2(\alphapha)-\partialrtial_{\alphapha}Z_2(\tilde{\alphapha}))\cdot\frac{\alphapha-\beta}{|\alphapha-\beta|}\Big|\leq c\|\partialrtial_{\alphapha}Z_2\|_{C^{\gamma+\sigma}}|\alphapha-\tilde{\alphapha}|^{\gamma+\sigma}, \end{equation*} \begin{equation*} |\delta g_3|\!=\!\Big|\int_0^1\! \Big(\partialrtial_{\alphapha}Z_2((1-r)\beta+r\tilde{\alphapha})-\partialrtial_{\alphapha}Z_2(\tilde{\alphapha})\Big)\cdot\Big(\frac{\alphapha\!-\!\beta}{|\alphapha\!-\!\beta|}- \frac{\tilde{\alphapha}\!-\!\beta}{|\tilde{\alphapha}\!-\!\beta|} \Big)dr\Big|\leq c\|\partialrtial_{\alphapha}Z_2\|_{C^{\gamma+\sigma}}\frac{|\alphapha-\tilde{\alphapha}|}{|\alphapha-\beta|^{1-\gamma-\sigma}}. \end{equation*} In the last inequality above, we used that in $P_8$ we are integrating in the region $|\alphapha-\beta|\geq 2|\alphapha-\tilde{\alphapha}|$. Thus, we have found the following bound for $Y_1$ \begin{equation*} |Y_1|\leq c(\|F(Z)\|_{L^\infty},\|\partialrtial_{\alphapha}Z\|_{C^{\gamma+\sigma}})|\alphapha-\tilde{\alphapha}|^{\gamma+\sigma}. 
\end{equation*} Proceeding as above, one obtains the analogous bound for $Y_2$ and then from \eqref{H1H1} $$|H_1-\mathcal{H}_1|\leq c(\|F(Z)\|_{L^\infty},\|\partialrtial_{\alphapha}Z\|_{C^{\gamma+\sigma}})|\alphapha-\tilde{\alphapha}|^{\gamma+\sigma}.$$ The same argument works for the difference $H_2-\mathcal{H}_2$, so that joining \eqref{G1H1H2} and \eqref{G1H1H22}, we can write \begin{equation}\Lambdabel{G} \begin{aligned} |G_1(\alphapha,\beta)-G_1(\tilde{\alphapha},\beta)-\mathcal{G}_1(\alphapha,\beta)+\mathcal{G}_1(\tilde{\alphapha},\beta)|&\leq |H_1-\mathcal{H}_1|+|H_2-\mathcal{H}_2|\\ &\leq c(\|F(Z)\|_{L^\infty},\|\partialrtial_{\alphapha}Z\|_{C^{\gamma+\sigma}})|\alphapha-\tilde{\alphapha}|^{\gamma+\sigma}. \end{aligned} \end{equation} Since the term corresponding to $G_2$, $\mathcal{G}_2$ in \eqref{G1G2}, \eqref{G1G22} is completely analogous, the same bound \eqref{G} holds for $G$. Therefore, introducing this estimate into $P_8$ \eqref{O2}, we obtain that \begin{equation}\Lambdabel{P5} \begin{aligned} P_8&\leq c(\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma+\sigma}},\eta)|\alphapha-\tilde{\alphapha}|^{\gamma+\sigma}(1-\log{|\alphapha-\tilde{\alphapha}|})\\ &\leq c(\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma+\sigma}},\eta)|\alphapha-\tilde{\alphapha}|^{\gamma}. \end{aligned} \end{equation} Joining the above bounds \eqref{P1P2},\eqref{P3}, \eqref{P4} and \eqref{P5} and going back to \eqref{O2} we find that \begin{equation*} |O_2(\alphapha)-O_2(\tilde{\alphapha})|\leq c(\|\theta_0\|_{L^\infty},\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma+\sigma}})|h|^\gamma, \end{equation*} which concludes the subsection for $O_2$. This last bound combined with \eqref{O1bound} allow us to estimate \eqref{M1} \begin{equation*} |M_1|\leq c(\|\theta_0\|_{L^\infty},\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma+\sigma}},\eta)|h|^\gamma, \end{equation*} which jointly to \eqref{M2bound} gives in \eqref{L6} that \begin{equation*} |L_6|\leq c(\|\theta_0\|_{L^\infty},\|F(Z)\|_{L^\infty},\|Z\|_{C^{1+\gamma+\sigma}},\eta)|h|^\gamma, \end{equation*} and thus the H\"older estimate of $I_1$ \eqref{I1I2I3} is concluded. Since we already have the bounds \eqref{I2I3bound}, formula \eqref{I1I2I3decomp} shows that the proof is ended. \qed \subsection*{{\bf Acknowledgments}} This research was partially supported by the grant MTM2014-59488-P (Spain) and by the ERC through the Starting Grant project H2020-EU.1.1.-639227. EGJ was supported by MECD FPU grant from the Spanish Government. \begin{quote} \begin{tabular}{ll} \thetaxtbf{Francisco Gancedo}\\ {\small Departamento de An\'{a}lisis Matem\'{a}tico $\&$ IMUS}\\ {\small Universidad de Sevilla}\\ {\small C/ Tarfia s/n, Campus Reina Mercedes, 41012 Sevilla, Spain}\\ {\small Email: [email protected]} \end{tabular} \end{quote} \begin{quote} \begin{tabular}{ll} \thetaxtbf{Eduardo Garc\'ia-Ju\'arez}\\ \thetaxtbf{Former Address}\\ {\small Departamento de An\'{a}lisis Matem\'{a}tico $\&$ IMUS}\\ {\small Universidad de Sevilla}\\ {\small C/ Tarfia s/n, Campus Reina Mercedes, 41012 Sevilla, Spain}\\ \thetaxtbf{Current Address}\\ {\small Department of Mathematics}\\ {\small University of Pennsylvania}\\ {\small David Rittenhouse Lab., 209 South 33rd St., Philadelphia, PA 19104, USA}\\ {\small Email: [email protected]} \end{tabular} \end{quote} \end{document}
\begin{document} \title[Explicit formulas for Drinfeld modules] {Explicit formulas for Drinfeld modules \\ and their periods} \author{Ahmad El-Guindy} \address{Current address: Science Program, Texas A\&M University in Qatar, Doha, Qatar} \address{Permanent address: Department of Mathematics, Faculty of Science, Cairo University, Giza, Egypt 12613} \email{[email protected]} \author{Matthew A. Papanikolas} \address{Department of Mathematics, Texas A\&M University, College Station, TX 77843, U.S.A.} \email{[email protected]} \keywords{Drinfeld modules, exponentials, logarithms, periods, supersingularity} \subjclass[2010]{11G09, 11F52, 11R58} \thanks{Research of the second author was partially supported by NSF Grant DMS-0903838.} \date{December 22, 2011} \begin{abstract} We provide explicit series expansions for the exponential and logarithm functions attached to a rank $r$ Drinfeld module that generalize well known formulas for the Carlitz exponential and logarithm. Using these results we obtain a procedure and an analytic expression for computing the periods of rank 2 Drinfeld modules and also a criterion for supersingularity. \end{abstract} \maketitle \section{Introduction} The goal of this paper is to determine explicit formulas for exponential functions, logarithms, and periods of Drinfeld modules. Originally Carlitz~\cite{Carlitz35} and Wade~\cite{Wade46} worked out a complete picture for the Carlitz module by deducing closed formulas for both the power series expansions of its associated exponential and logarithm functions and the Carlitz period. Building on work of Hayes~\cite{Hayes79} on sgn-normalized Drinfeld modules of rank~$1$, Gekeler~\cite{GekelerBook} determined formulas for periods of rank~$1$. Later Thakur~\cite{Thakur92}, \cite{Thakur93}, building on Hayes' work, found explicit formulas for exponentials and logarithms of rank~$1$ modules using special values of shtuka functions. (See also \cite[Ch.~3, 4, 7]{GossBook} and \cite[Ch.~2, 8]{ThakurBook} for more information.) In the current paper we have attempted to continue these investigations in a similar spirit for Drinfeld modules of arbitrary rank by developing a combinatorial framework of ``shadowed partitions'' to keep track of coefficient data. As a result we obtain explicit formulas for the exponential and logarithm functions and for periods, and we further obtain a precise criterion for supersingularity that complements previous work of Cornelissen~\cite{Cornelissen99a}, \cite{Cornelissen99b} and Gekeler~\cite{Gekeler88}. Let $q$ be a power of a prime $p$, and let $\mathbb{F}_q$ denote the field with $q$ elements. Consider the polynomial ring $\mathbb{A}=\mathbb{F}_q[T]$, and let $\mathbb{K}$ denote the fraction field of $\mathbb{A}$. Consider the unique valuation on $\mathbb{K}$ defined by \[ v(T)=-1, \] which is the valuation at the ``infinite prime'' of the ring $\mathbb{A}$. Let $\mathbb{K}_\infty$ denote the completion of $\mathbb{K}$ with respect to $v$, and let $\mathbb{C}_\infty$ denote the completion of an algebraic closure of $\mathbb{K}_\infty$. It is well known that $v$ has a unique extension to $\mathbb{C}_\infty$ that we still denote by $v$, and that $\mathbb{C}_\infty$ is a complete algebraically closed field. For any integer $n \in \mathbb{N}=\{0,1,\dots\}$ we write \begin{equation}\lambdabel{dn} \begin{split} [n]&:=T^{q^n}-T,\\ D_n&:=[n][n-1]^q[n-2]^{q^2}\cdots [1]^{q^{n-1}},\quad D_0:=1,\\ L_n&:=(-1)^n [n][n-1]\cdots[2][1],\quad L_0:=1. 
\end{split} \end{equation} A field $L$ is called an \emph{$\mathbb{A}$-field} if there is a nonzero homomorphism $\imath:{\mathbb{A}}\to L$. Examples of such fields are extensions of either $\mathbb{K}$ or $\mathbb{A}/\mathfrak{p}$, where $\mathfrak{p}$ is a nonzero prime ideal of $\mathbb{A}$. For simplicity we will write $a$ in place of $\imath(a)$ when the context is clear. Such a field has a \emph{Frobenius homomorphism} \[ \begin{split} \tau:L&\to L\\ z&\mapsto z^q, \end{split} \] and we can consider the ring $L\{\tau\}$ of polynomials in $\tau$ under addition and composition. Thus $\tau \ell=\ell^q\tau$ for any $\ell \in L$. A \emph{Drinfeld module of rank $r$ over $L$} is an $\mathbb{F}_q$-linear ring homomorphism $\phi:\mathbb{A}\to L\{\tau\}$ such that \begin{equation} \phi_T=T+\sum_{i=1}^r A_i\tau^i, \quad A_i\in L,\quad A_r\neq 0. \end{equation} It then follows that the constant term of $\phi_a$ is $a$ for all $a \in \mathbb{A}$ and that the degree of $\phi_a$ in $\tau$ is $r\deg_T(a)$. The simplest example of a Drinfeld module is the \emph{Carlitz module} $\mathcal{C}$ given by \[ \mathcal{C}_T=T+\tau, \] which has rank $1$. Associated to the Carlitz module is the \emph{Carlitz exponential} \begin{equation}\label{Cexp} e_\mathcal{C}(z):=\sum_{n=0}^\infty \frac{z^{q^n}}{D_n}. \end{equation} The series for $e_\mathcal{C}$ converges for all $z\in \mathbb{C}_\infty$ and defines an entire, $\mathbb{F}_q$-linear, and surjective function. The key property connecting the Carlitz exponential to the Carlitz module is \begin{equation}\label{CFE} e_\mathcal{C}(Tz)=\mathcal{C}_T(e_\mathcal{C}(z)), \end{equation} from which it follows that for all $a \in \mathbb{A}$, \[ e_\mathcal{C}(az) = \mathcal{C}_a(e_\mathcal{C}(z)). \] The zeros of $e_\mathcal{C}(z)$ form an $\mathbb{A}$-lattice of rank one in $\mathbb{C}_\infty$ with a certain generator $\pi_\mathcal{C}\in \mathbb{C}_\infty$ called the \emph{Carlitz period}, and we have an alternate expression for the Carlitz exponential as \begin{equation}\label{Clattice} e_\mathcal{C}(z)=z\prod_{0\neq\lambda\in \pi_\mathcal{C}\mathbb{A}} \left(1-\frac{z}{\lambda}\right). \end{equation} It is useful to also consider the (local) composition inverse of $e_\mathcal{C}$, called the \emph{Carlitz logarithm}, defined by \begin{equation}\label{Clog} \log_\mathcal{C}(z):=\sum_{n=0}^\infty \frac{z^{q^n}}{L_n},\quad v(z)>\frac{-q}{q-1}. \end{equation} These results go back to Carlitz~\cite{Carlitz35} and Wade~\cite{Wade46}, who were investigating explicit class field theory over $\mathbb{F}_q(T)$. See \cite[Ch.~3]{GossBook}, \cite[Ch.~2]{ThakurBook} for more details on the above constructions. Analogues of \eqref{CFE}, \eqref{Clattice}, and \eqref{Clog} hold for any Drinfeld module over $\mathbb{C}_\infty$ (see \cite[Ch.~4]{GossBook}, \cite[Ch.~2]{ThakurBook} for more details). Indeed in \cite{Carlitz95}, Carlitz himself had begun to study lattice functions for higher rank lattices long before Drinfeld developed the complete story.
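The quantities in \eqref{dn} are easy to generate by machine. The following Python sketch is ours and not part of the original development; it assumes $q=p=3$ and stores elements of $\mathbb{F}_p[T]$ as exponent--coefficient dictionaries. It computes $[n]$, $D_n$ and $L_n$ for small $n$ and confirms the recursion $D_n=[n]D_{n-1}^q$ together with $\deg D_n=nq^n$ and $\deg L_n=q+q^2+\cdots+q^n$, the degrees behind the valuation estimates used later.
\begin{verbatim}
# Illustrative sketch (not from the paper): F_p[T] as {exponent: coeff mod p}.
p = q = 3  # assume q = p prime for simplicity

def pmul(f, g):
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = (h.get(i + j, 0) + a * b) % p
    return {i: c for i, c in h.items() if c}

def ppow(f, e):
    r = {0: 1}
    for _ in range(e):
        r = pmul(r, f)
    return r

def bracket(n):                      # [n] = T^{q^n} - T
    return {q**n: 1, 1: p - 1}

def D(n):                            # D_n = [n][n-1]^q ... [1]^{q^{n-1}}, D_0 = 1
    r = {0: 1}
    for i in range(1, n + 1):
        r = pmul(r, ppow(bracket(n + 1 - i), q**(i - 1)))
    return r

def L(n):                            # L_n = (-1)^n [n][n-1]...[1], L_0 = 1
    r = {0: p - 1 if n % 2 else 1}
    for i in range(1, n + 1):
        r = pmul(r, bracket(i))
    return r

for n in range(1, 4):
    assert D(n) == pmul(bracket(n), ppow(D(n - 1), q))   # D_n = [n] D_{n-1}^q
    assert max(D(n)) == n * q**n                         # deg D_n = n q^n
    assert max(L(n)) == (q**(n + 1) - q) // (q - 1)      # deg L_n = q + ... + q^n
print("degree checks passed")
\end{verbatim}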
In particular, if $\Lambda\subset \mathbb{C}_\infty$ is an $\mathbb{A}$-lattice of rank $r$ then the \emph{lattice exponential function} defined by \begin{equation}\lambdabel{lattice} e_\Lambda(z):=z\prod_{0\neq\lambda\in\Lambda} \left(1-\frac{z}{\lambda}\right) \end{equation} is an entire, surjective, $\mathbb{F}_q$-linear function from $\mathbb{C}_\infty$ to $\mathbb{C}_\infty$ with kernel $\Lambda$, and there exists a unique rank $r$ Drinfeld module $\phi=\phi(\Lambda)$ such that \begin{equation}\lambdabel{FE} e_\Lambda(Tz)=\phi_T(e_\Lambda(z)). \end{equation} Furthermore, $e_\Lambda$ has a series expansion of the form \begin{equation}\lambdabel{expseries} e_\Lambda(z)=\sum_{n=0}^\infty\alpha_nz^{q^n}. \end{equation} It also has a local composition inverse $\log_\Lambda$ with a series expansion \begin{equation}\lambdabel{logseries} \log_\Lambda(z)=\sum_{n=0}^\infty\beta_nz^{q^n}. \end{equation} Note that the coefficients $\alpha_n\in\mathbb{C}_\infty$ (and consequently $\beta_n$) could be expressed in terms of $\Lambda$ by expanding \eqref{lattice}: for instance \begin{equation}\lambdabel{coeff} \begin{split} \alpha_1&=\sum_{\begin{split}\{\lambda_1,\dots,\lambda_{q-1}\}\subset \Lambda\\ \lambda_i\neq \lambda_j\textrm{ for } i\neq j\end{split}}\prod_{i=1}^{q-1}\frac{1}{ \lambda_i},\\ \beta_1&=-\alpha_1, \end{split} \end{equation} which is explicit, but rather complicated and impractical as it involves infinitely many terms. We can also describe the Drinfeld module $\phi(\Lambda)$ in terms of $\Lambda$ as follows. Start by noticing that $\Lambda/T\Lambda$ is a vector space of dimension $r$ over $\mathbb{F}_q$. Write \begin{equation} f(x):=\prod_{\lambda \in \Lambda/T\Lambda}\left(x-e_\Lambda\left(\frac{\lambda}{T}\right)\right). \end{equation} It is well-known (see \cite[\S 4.3]{GossBook}) that $f(x)$ is $\mathbb{F}_q$-linear of degree $q^r$, hence of the form $f(x)=\sum_{n=0}^r A_n(\Lambda)x^{q^n}$ for some $A_n(\Lambda)\in \mathbb{C}_\infty$. Furthermore we have \begin{equation}\lambdabel{module} \phi_T(\Lambda)=\sum_{n=0}^r A_n(\Lambda)\tau^n. \end{equation} The general theme of the paper is in some sense to reverse the point of view of the previous paragraphs, and provide explicit identities for the lattice $\Lambda$, as well as the functions $e_\Lambda$ and $\log_\Lambda$, starting only from the knowledge of the Drinfeld module $\phi$. This is achieved by using relatively simple combinatorial objects that we name ``shadowed partitions'' which we introduce and study in \S 2. Using them, in \S 3 we obtain concrete formulas for $e_\Lambda$ and $\log_\Lambda$ (Theorem~\ref{alpha} and Theorem~\ref{beta}) that are as similar as could be hoped for to \eqref{Cexp} and \eqref{Clog}. In \S 4 we restrict our attention to rank $2$ modules, and we proceed to study the convergence properties of $\log_\Lambda$ making use of the detailed description we have for its coefficients (Corollary~\ref{logorder} and Corollary~\ref{logrange}). In \S 5 we examine the properties of $T$-torsion points of rank two Drinfeld modules, and show how, combined with the properties of $\log_\Lambda$, we can recover at least one, and sometimes both generators of the lattice $\Lambda$ in certain naturally defined ``families'' (Theorem~\ref{periods}), thus in some sense obtaining a converse of \eqref{module}. In \S 6 we introduce additional conditions that enable us to obtain a completely analytic description for the period with maximal valuation (Theorem~\ref{maxval}). 
In \S 7 we compare our results to an example of Thakur~\cite{Thakur92} arising from Drinfeld modules with complex multiplication. Finally in \S 8 we study yet another application of shadowed partitions, where we introduce a ``multinomial'' theorem for any rank $r$ Drinfeld module (Theorem~\ref{multinomialthm}) and obtain as a consequence a concrete condition for supersingularity of a rank $2$ Drinfeld module at a prime $\mathfrak p\in \mathbb{A}$ of any degree (Corollary~\ref{supersingular}). \textbf{A note on notation.} In order to emphasize that our starting point is the Drinfeld module rather than the lattice, from now on we shall write $e_\phi$, $\log_\phi$, and $A_n(\phi)$ instead of $e_\Lambda$, $\log_\Lambda$, and $A_n(\Lambda)$, respectively. \section{Shadowed partitions} Recall that a \emph{partition} of a set $S$ is a collection of subsets of $S$ that are pairwise disjoint, and whose union is equal to $S$ itself. Also, if $S\subset \mathbb{Z}$, $j\in \mathbb{Z}$, then $S+j:=\{i+j: i\in S\}$. For $r\in \mathbb{N}$ and $n \in \mathbb{Z}^+$, we set \begin{multline}\lambdabel{prndefn} P_r(n):=\bigl\{(S_1,\, S_2, \dots,\, S_r): S_i\subset \{0,\, 1, \dots,\, n-1\},\\ \textnormal{and $\{S_i+j: 1\leq i \leq r,\, 0\leq j \leq i-1\}$ form a partition of $\{0,\, 1, \dots,\, n-1\}$} \bigr\}. \end{multline} We also set $P_r(0):=\{\emptyset\}$ and $P_r(-n):=\emptyset$. We propose to name elements of $P_r(n)$ as \emph{order~$r$ index-shadowed partitions of $n$}, or \emph{shadowed partitions} for short, as each $S_i$ relies on its~$i$ ``shadows'' $S_i+j$ (including itself) so that all together they partition $n$ elements. Furthermore, for $1 \leq i \leq r$ we set \begin{equation} P_r^i(n):=\{(S_1,\, S_2, \dots,\, S_r) \in P_r(n): 0 \in S_i\}. \end{equation} We collect some simple, yet important facts about these objects in the following lemma. Recall that the sequence of \emph{$r$-step Fibonacci numbers} $\{F^{(r)}_n\}$ is defined as follows ($n\in \mathbb{Z}^+$) \begin{equation} F^{(r)}_{-n}=0,\,\,\, F^{(r)}_0=1, \,\,\, F^{(r)}_n=\sum_{i=n-r}^{n-1} F^{(r)}_i. \end{equation} \begin{lemma} \lambdabel{facts1} \begin{enumerate} \item $\{P^i_r(n): 1 \leq i \leq r \}$ is a partition of $P_r(n)$. \item For $1\leq i \leq r$, $P_r(n-i)$ could be identified with $P^i_r(n)$ via the well-defined bijection \begin{equation}\lambdabel{map} (S_1,\, S_2, \dots,\, S_r)\mapsto (S_1+i,\, S_2+i,\dots,\{0\}\cup(S_i+i),\dots, \, S_r+i). \end{equation} \item For all $r>0$ and all $n\in \mathbb{Z}$ we have $|P_r(n)|=F_n^{(r)}$. \end{enumerate} \end{lemma} \begin{proof} The proofs of (i) and (ii) consist of simple verifications that we leave to the reader, and statement (iii) follows from combining (i) and (ii). \end{proof} Let $S\subset \mathbb{N}$ be finite, and define the integer $w(S)$ by \begin{equation} w(S):=\sum_{i \in S} q^i. \end{equation} Note that $w(\emptyset)=0$. To help simplify our formulas, we shall usually denote $(S_1,\dots, S_r)\in P_r(n)$ by ${\bf S}$. We fix the following notation for the rest of the paper \[ |{\bf S}| :=\sum_{i=1}^r |S_i|,\quad \bigcup {\bf S} :=\bigcup_{i=1}^r S_i, \] and \[ {\bf S}+i :=(S_1+i,\dots,S_r+i)\in P_r(n+i), \, \textrm{ for } i\in \mathbb{N}. \] We collect some more facts that are relevant to our results. \begin{lemma} \begin{enumerate} \item For ${\bf S} \in P_r(n)$, $\cup {\bf S}$ uniquely defines ${\bf S}$. Hence \begin{equation}\lambdabel{rfib} F_n^{(r)}=|P_r(n)|\leq 2^n. 
\end{equation} \item The $r$-tuple of sets $(S_1,\dots,S_r)$ is in $P_r(n)$ if and only if \begin{equation}\lambdabel{qn-1} \sum_{i=1}^r (q^i-1)w(S_i)=q^n-1. \end{equation} \end{enumerate} \end{lemma} \begin{proof} To prove (i), write \[ \cup {\bf S}=\{s_1, \, s_2,\dots, s_m\}, \] with $s_i<s_{i+1}$. Note that the conditions in \eqref{prndefn} on $P_r(n)$ imply that $1\leq s_{i+1}-s_i \leq r$ and also $1\leq n-s_m \leq r$. It follows that we must have \[ s_m \in S_{n-s_m}, \] and, for $1\leq i\leq m-1$ \[ s_i \in S_{s_{i+1}-s_i}. \] Thus the sets $S_i$ are completely defined once we know $\cup{\bf S}$. It follows that the map \[\cup: P_r(n) \rightarrow \textrm { Subsets of }\{0,\dots, n-1\}\] is an injection, and \eqref{rfib} follows. To prove \eqref{qn-1} divide both sides by $q-1$ to get \begin{multline*} w(S_1)+(q+1)w(S_2)+\dots + (q^{r-1}+q^{r-2}+\dots+q+1)w(S_r)\\ =q^{n-1}+q^{n-2}+\dots+q+1, \end{multline*} and the statement of (ii) follows. \end{proof} \section{Explicit formulas for lattice functions} Let $\phi$ be a Drinfeld module of rank $r$ over $\mathbb{C}_\infty$. The corresponding exponential function $e_\phi$ on $\mathbb{C}_\infty$ satisfies \begin{equation}\lambdabel{expFE} e_\phi(Tz)=\phi_T(e_\phi(z)). \end{equation} It has the series expansion \[ e_\phi(z)=\sum_{n=0}^\infty \alpha_n z^{q^n}, \] with $\alpha_n=\alpha_n(\phi) \in \mathbb{C}_\infty$ and $\alpha_0=1$. In the next theorem we give an explicit formula for $\alpha_n$ in terms of the coefficients of $\phi$, and thus we give a proof of the existence of $e_\phi$ different from \eqref{lattice}. For notational convenience, for $(A_1,\dots, A_r)\in \mathbb{C}_\infty^r$ and ${\bf S}\in P_r(n)$ we write \begin{equation} {\bf A^S}:=\prod_{i=1}^r A_i^{w(S_i)}. \end{equation} Note that ${\bf A}^\emptyset=1$. \begin{theorem}\lambdabel{alpha} Let $\phi$ be a rank $r$ Drinfeld module given by \[ \phi_T=\sum_{i=0}^r A_i \tau^i,\quad A_i\in \mathbb{C}_\infty. \] For $n\geq 0$ and for any $S\subset \{0,\, 1, \dots, n-1\}$ set \begin{equation} D_n(S):=\prod_{i \in S} [n-i]^{q^{i}}. \end{equation} If we set \begin{equation}\lambdabel{alphaformula} \alpha_n=\sum_{{\bf S}\in P_r(n)} \frac{{\bf A^S}}{D_n(\cup {\bf S})}, \end{equation} then the series $\sum_{n=0}^\infty \alpha_nz^{q^n}$ converges on $\mathbb{C}_\infty$ and is the unique solution to \eqref{expFE} with $\alpha_0=1$ and thus $\exp_\phi(z) = \sum_{n=0}^\infty \alpha_nz^{q^n}$. \end{theorem} \begin{proof} The functional equation \[ e_\phi(Tz)=Te_\phi(z)+\sum_{i=1}^r A_i e_\phi(z)^{q^i} \] is equivalent to the recursion \begin{equation}\lambdabel{alpharecursion} \alpha_nT^{q^n}=\sum_{i=0}^rA_i \alpha_{n-i}^{q^i}. \end{equation} For convenience, we can set $\alpha_n = 0$ for $n< 0$ so that \eqref{alpharecursion} holds for all $n \geq 0$. We proceed by induction on $n$ to show that \eqref{alphaformula} is the unique solution to \eqref{alpharecursion} with $\alpha_0=1$. Since $D_n(\emptyset)=1$, formula \eqref{alphaformula} indeed gives $\alpha_0=1$. Substituting $A_0=T$, the induction hypothesis gives \[ \begin{split} \alpha_n&=\frac{1}{[n]} \sum_{i=1}^r A_i \sum_{{\bf S}\in P_r(n-i)}\left(\frac{{\bf A^S}}{D_{n-i}(\cup {\bf S})}\right)^{q^i}\\ &=\frac{1}{[n]} \sum_{i=1}^r A_i \sum_{{\bf S}\in P_r(n-i)}\frac{{\bf A}^{{\bf S}+i}}{D_{n}(\cup {\bf S}+i)}\\ &=\sum_{i=1}^r \sum_{{\bf S}\in P^i_r(n)}\frac{{\bf A^S}}{D_{n}(\cup {\bf S})}, \end{split} \] where the last equality follows from Lemma \ref{facts1}(ii) and the fact that \[ D_{n-i}(S)^{q^i}=D_n(S+i). 
\] Thus \eqref{alphaformula} is proved. We could deduce the convergence of $\sum \alpha_nz^{q^n}$ on all of $\mathbb{C}_\infty$ by relying on the corresponding property of the lattice exponential function given by \eqref{lattice}. However, to emphasize that our approach suffices to develop important aspects of the theory, we use \eqref{alphaformula} to give a direct proof from first principles. Note that $v([n-i]^{q^i})=-q^n$, and thus $v(D_n(S))=-q^n|S|$. It follows that for ${\bf S} \in P_r(n)$ we have \begin{equation}\lambdabel{vDn} v({D_n(\cup{\bf S})})\leq -\frac{nq^n}{r}. \end{equation} Next, set $v_0=\min_{1\leq i\leq r}v(A_i)$. It is easy to see that for ${\bf S} \in P_r(n)$ we have \begin{equation}\lambdabel{vAS} v({\bf A^S})\geq w(\cup{\bf S})v_0. \end{equation} For ${\bf S}\in P_r(n)$ we have \[ \frac{q^n-1}{q^r-1}\leq w(\cup{\bf S})\leq \frac{q^n-1}{q-1}. \] Together with \eqref{vDn} and \eqref{vAS} we get \begin{equation}\lambdabel{lim} v(\alpha_n z^{q^n})\geq \begin{cases} q^n\left(\frac{n}{r}+\frac{v_0}{q-1}+v(z)\right), \, &\textrm{ if } v_0<0, \\ q^n\left(\frac{n}{r}+v(z)\right), \, &\textrm{ if } v_0\geq 0, \end{cases} \end{equation} and it follows that \[ \lim_{n\to +\infty}v(\alpha_nz^{q^n})= +\infty. \] Hence the series converges for all $z\in \mathbb{C}_\infty$. \end{proof} \begin{example} We write a few concrete cases to clarify \eqref{alphaformula}. Let the superscript on $\alpha$ indicate the rank $r$ of the corresponding module. Then for $r=2$ we get, for instance \begin{gather*} \alpha_3^{(2)}=\frac{A_1^{q^2+q+1}}{[1]^{q^2}[2]^{q}[3]}+\frac{A_2^qA_1}{[2]^q[3]} +\frac{A_1^{q^2}A_2}{[1]^{q^2}[3]}, \\ \alpha_4^{(2)}=\frac{A_1^{q^3+q^2+q+1}}{[1]^{q^3}[2]^{q^{2}}[3]^q[4]} +\frac{A_2^qA_1^{q^3+1}}{[2]^{q^2}[3]^q[4]}+\frac{A_1^{q^3+q^2}A_2}{[1]^{q^3}[3]^q[4]} +\frac{A_2^{q^2}A_1^{q+1}}{[2]^{q^2}[3]^q[4]}+\frac{A_2^{q^2+1}}{[2]^{q^2}[4]}, \end{gather*} whereas for $r=3$ we get \[ \alpha_3^{(3)}=\frac{A_1^{q^2+q+1}}{[1]^{q^2}[2]^{q^{1}}[3]}+\frac{A_2^qA_1}{[2]^q[3]}+\frac{A_1^{q^2}A_2}{[1]^{q^2}[3]}+\frac{A_3}{[3]}, \] \begin{multline*} \alpha_4^{(3)}=\frac{A_1^{q^3+q^2+q+1}}{[1]^{q^3}[2]^{q^{2}}[3]^q[4]} +\frac{A_2^qA_1^{q^3+1}}{[2]^{q^2}[3]^q[4]}+\frac{A_1^{q^3+q^2}A_2}{[1]^{q^3}[3]^q[4]} +\frac{A_2^{q^2}A_1^{q+1}}{[2]^{q^2}[3]^q[4]}\\ +\frac{A_2^{q^2+1}}{[2]^{q^2}[4]} +\frac{A_3A_1^{q^3}}{[1]^{q^3}[4]}+\frac{A_3^qA_1}{[3]^{q}[4]}. \end{multline*} \end{example} Next we study the function $\log_\phi$. From \eqref{expFE} we obtain the functional equation \begin{equation}\lambdabel{logFE} T\log_\phi(z)=\log_\phi(\phi_T(z)). \end{equation} The following proposition provides a concrete description of the coefficients of $\log_\phi$. \begin{theorem} \lambdabel{beta} Given a Drinfeld module $\phi$ of rank $r$, write \[ \log_\phi(z)=\sum_{n=0}^\infty \beta_nz^{q^n}. \] For ${\bf S} \in P_r(n)$ set \begin{equation}\lambdabel{LS} L({\bf S}):=\prod_{j=1}^r \prod_{i \in S_j} (-[i+j]). \end{equation} Then \begin{equation}\lambdabel{betaformula} \beta_n=\sum_{{\bf S} \in P_r(n)}\frac{{\bf A^S}}{L({\bf S})}. \end{equation} \end{theorem} \begin{proof} Since $L(\emptyset)=1$, we see that $\beta_0=1$, as expected. Now the functional equation \eqref{logFE} gives the recursion \begin{equation}\lambdabel{tbetan} T\beta_n=\sum_{i=0}^r \beta_{n-i}A_{i}^{q^{n-i}}, \end{equation} where again we set $\beta_n=0$ for $n < 0$. 
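Before completing the induction, we record a quick computational aside (ours, not part of the argument): the index sets $P_r(n)$ over which \eqref{alphaformula} and \eqref{betaformula} range can be enumerated by brute force. The plain-Python sketch below, assuming $q=3$, confirms the $r$-step Fibonacci count of Lemma~\ref{facts1}, the weight identity \eqref{qn-1}, and the summand counts $3$, $5$, $4$, $7$ visible in the Example above.
\begin{verbatim}
# Illustrative sketch (not from the paper): brute-force enumeration of the
# shadowed partitions P_r(n) of Section 2, assuming q = 3.
from itertools import chain, combinations

def subsets(universe):
    return chain.from_iterable(combinations(universe, k)
                               for k in range(len(universe) + 1))

def P(r, n):
    universe, found = list(range(n)), []
    def rec(i, chosen):
        if i == r:
            # shadows S_i + j, 0 <= j < i, must partition {0, ..., n-1}
            shadow = [s + j for idx, S in enumerate(chosen, 1)
                      for s in S for j in range(idx)]
            if sorted(shadow) == universe:
                found.append(tuple(chosen))
            return
        for S in subsets(universe):
            rec(i + 1, chosen + [S])
    rec(0, [])
    return found

def fib(r, n):                       # r-step Fibonacci numbers, F_0 = 1
    F = {k: 0 for k in range(-r, 0)}
    F[0] = 1
    for m in range(1, n + 1):
        F[m] = sum(F[m - i] for i in range(1, r + 1))
    return F[n]

q = 3
for r in (2, 3):
    for n in range(5):
        Prn = P(r, n)
        assert len(Prn) == fib(r, n)          # |P_r(n)| = F_n^{(r)}
        for S in Prn:                         # weight identity
            assert sum((q**i - 1) * sum(q**s for s in Si)
                       for i, Si in enumerate(S, 1)) == q**n - 1
# summand counts matching the Example: [3, 5] and [4, 7]
print([len(P(2, n)) for n in (3, 4)], [len(P(3, n)) for n in (3, 4)])
\end{verbatim}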
Applying $A_0=T$ and the induction hypothesis gives \begin{equation}\lambdabel{betastep} \begin{split} -[n]\beta_n&=\sum_{i=1}^r A_i^{q^{n-i}}\sum_{{\bf S}\in P_r(n-i)}\frac{{\bf A^S}}{L({\bf S})}.\\ \end{split} \end{equation} Note that for $1\leq i\leq r$, the map $\Psi_i:P_r(n-i)\rightarrow P_r(n)$ given by \[ \Psi_i(S_1,\, S_2,\dots, S_i,\dots, S_r)=(S_1, \, S_2, \dots, S_i\cup\{n-i\}, \dots, S_r), \] is a well-defined injection, and furthermore the collection $\{\Psi_i(P_r(n-i)): 1\leq i\leq r\}$ is a partition of $P_r(n)$. Also, note that for all $1\leq i\leq r$, \[ L(\Psi_i{\bf S})= -[n]\cdot L({\bf S}). \] Thus from \eqref{betastep} we get \[ \beta_n=\sum_{i=1}^r \sum_{{\bf S}\in \Psi_i(P_{r}(n-i))}\frac{{\bf A^S}}{L({\bf S})}, \] and the result follows. \end{proof} \begin{remark} When $\phi$ is the Carlitz module $\mathcal{C}$, we recover \eqref{Cexp} and \eqref{Clog} from \eqref{alphaformula} and \eqref{betaformula}, respectively. \end{remark} \begin{remark} If we assign to each $A_n$ a ``weight" of $q^n-1$, and extend it to products in the usual way ($\textrm{wt}(AB)=\textrm{wt}(A)+\textrm{wt}(B)$), then by \eqref{qn-1} the total weight of any ${\bf A^S}$ appearing as a summand in the coefficient of $z^{q^n}$ in either $e_\phi$ or $\log_\phi$ is always $q^n-1$. \end{remark} \section{Convergence and range of $\log_\phi$ in rank two} Let $\phi$ be a rank $2$ Drinfeld module given by \begin{equation}\lambdabel{rank2} \phi_T=T+A\tau+B\tau^2. \end{equation} The \emph{$\jmath$-invariant} of $\phi$ is defined by \begin{equation}\lambdabel{jinv} \jmath(\phi):=\frac{A^{q+1}}{B}. \end{equation} In this section we study the convergence properties of the series defining $\log_\phi$. We start with determining the valuation of its coefficients. \begin{lemma}\lambdabel{vbeta} Let $\phi$ be as in \eqref{rank2} and write \[ \log_\phi(z)=\sum_{n=0}^\infty \beta_nz^{q^n}. \] Then for $n\in \mathbb{N}$, $v(\beta_n)$ is given by the following formula. \begin{equation}\lambdabel{betaval} v(\beta_n)=\begin{cases} {\displaystyle \frac{q^n-1}{q-1}(v(A)+q)},& \text{if $v(\jmath)<-q$,}\\[10pt] {\displaystyle \frac{q^n-1}{q^2-1}(v(B)+q^2)},& \text{if $v(\jmath)>-q$ and $n$ is even,}\\[10pt] {\displaystyle \frac{q^n-1}{q^2-1}(v(B)+q^2)+\frac{v(\jmath)+q}{q+1}},& \text{if $v(\jmath)>-q$ and $n$ is odd}. \end{cases} \end{equation} In the case where $v(\jmath)=-q$ we have \begin{equation}\lambdabel{betainequality} v(\beta_n)\geq \frac{q^n-1}{q-1} (v(A)+q)=\frac{q^n-1}{q^2-1}(v(B)+q^2), \end{equation} with equality holding infinitely often. \end{lemma} \begin{proof} From \eqref{betaformula} we have \[ \beta_n=\sum_{(S_1, S_2)\in P_2(n)} \frac{A^{w(S_1)}B^{w(S_2)}}{L(S_1,S_2)}, \] where \[ L(S_1,S_2)=\prod_{i\in S_1}(-[i+1]) \prod_{i\in S_2} (-[i+2]). \] It is easy to see that \[ v\left(\frac{A^{w(S_1)}B^{w(S_2)}}{L(S_1,S_2)}\right)=w(S_1)v(A)+w(S_2)v(B)+qw(S_1)+q^2w(S_2). \] In addition, by \eqref{qn-1} we have $(q-1)w(S_1)+(q^2-1)w(S_2)=q^n-1$, hence \[ w(S_1)=\frac{q^n-1}{q-1}-(q+1)w(S_2), \] and consequently \begin{equation}\lambdabel{betamainval} v\left(\frac{A^{w(S_1)}B^{w(S_2)}}{L(S_1,S_2)}\right)=\frac{q^n-1}{q-1}(v(A)+q)-w(S_2)(v(\jmath)+q). \end{equation} Thus our analysis naturally breaks into the following three cases. \textbf{Case 1: $v(\jmath)>-q$.} In this case, we see that $v\left(\frac{A^{w(S_1)}B^{w(S_2)}}{L(S_1,S_2)}\right)$ is a strictly decreasing function of $w(S_2)$, and hence attains its minimal value when $w(S_2)$ is maximal. 
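That maximal value of $w(S_2)$, recorded in the next display, can also be confirmed directly; the following brute-force sketch (ours, assuming $q=3$) checks it for $n\leq 6$.
\begin{verbatim}
# Illustrative sketch (not from the paper): brute-force value of
# max w(S_2) over P_2(n), assuming q = 3; compare with the displayed formula.
q = 3

def max_w2(n):
    best = -1
    for m1 in range(1 << n):
        for m2 in range(1 << n):
            S1 = [i for i in range(n) if m1 >> i & 1]
            S2 = [i for i in range(n) if m2 >> i & 1]
            # (S1, S2) lies in P_2(n) iff S1, S2, S2 + 1 partition {0,...,n-1}
            if sorted(S1 + S2 + [i + 1 for i in S2]) == list(range(n)):
                best = max(best, sum(q**i for i in S2))
    return best

for n in range(1, 7):
    expected = ((q**n - 1) // (q**2 - 1) if n % 2 == 0
                else (q**n - q) // (q**2 - 1))
    assert max_w2(n) == expected
print("max w(S_2) agrees with the closed formula for n = 1,...,6")
\end{verbatim}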
It is easy to see that \begin{equation} \max_{(S_1,S_2) \in P_r(n)}(w(S_2))= \begin{cases} {\displaystyle \frac{q^n-1}{q^2-1}}, &\textnormal{if $n$ is even,}\\[10pt] {\displaystyle \frac{q^n-q}{q^2-1}}, &\textrm{if $n$ is odd. } \end{cases} \end{equation} (Corresponding to $S_1=\emptyset$ for $n$ even and $S_1=\{0\}$ for odd $n$). The ultrametric property implies \begin{equation} v(\beta_n)=\begin{cases} {\displaystyle \frac{q^n-1}{q-1}(v(A)+q)-\frac{q^n-1}{q^2-1}(v(\jmath)+q)}, &\textnormal{if $n$ is even,}\\[10pt] {\displaystyle \frac{q^n-1}{q-1}(v(A)+q)-\frac{q^n-q}{q^2-1}(v(\jmath)+q)}, &\textnormal{if $n$ is odd,} \end{cases} \end{equation} and the corresponding part of \eqref{betaval} follows. \textbf{Case 2: $v(\jmath)<-q$.} From \eqref{betamainval} we see that $v\left(\frac{A^{w(S_1)}B^{w(S_2)}}{L(S_1,S_2)}\right)$ is strictly increasing in $w(S_2)$. Thus the minimal valuation is attained when $S_2=\emptyset$, which implies the first line of \eqref{betaval}. \textbf{Case 3: $v(\jmath)=-q$.} In this case we see that $v\left(\frac{A^{w(S_1)}B^{w(S_2)}}{L(S_1,S_2)}\right)$ is always equal to \[ \frac{q^n-1}{q-1} (v(A)+q)=\frac{q^n-1}{q^2-1}(v(B)+q^2), \] and hence $v(\beta_n)\geq \frac{q^n-1}{q-1}(v(A)+q)$. It is easy to see by direct calculation that we have equality for $n=0,1$. The recurrence formula \[ -[n]\beta_n=A^{q^{n-1}}\beta_{n-1}+B^{q^{n-2}}\beta_{n-2} \] implies that \[ v(\beta_n)-q^n\geq \inf\left(q^{n-1}v(A)+v(\beta_{n-1}),q^{n-2}v(B)+v(\beta_{n-2})\right).\] Since $v(\jmath)=-q$, an easy computation shows that \begin{multline*} q^{n-1}v(A)+\frac{q^{n-1}-1}{q-1}(v(A)+q)\\ =q^{n-2}v(B)+\frac{q^{n-2}-1}{q-1}(v(A)+q) =\frac{q^n-1}{q-1}(v(A)+q)-q^n. \end{multline*} Thus if $v(\beta_n)>\frac{q^n-1}{q-1}(v(A)+q)$, while $v(\beta_{n-1})=\frac{q^{n-1}-1}{q-1}(v(A)+q)$, then we must have $v(\beta_{n+1})=\frac{q^{n+1}-1}{q-1}(v(A)+q)$ and also $v(\beta_{n+2})=\frac{q^{n+2}-1}{q-1}(v(A)+q)$. Thus equality in \eqref{betainequality} actually occurs at least two thirds of the time, and the result follows. \end{proof} \begin{corollary}\lambdabel{logorder} Set \begin{equation} \begin{split} \rho_B&:=-\frac{q^2+v(B)}{q^2-1},\\ \rho_A&:=-\frac{q+v(A)}{q-1}. \end{split} \end{equation} If $v(\jmath)\geq-q$ then the series $\sum \beta_iz^{q^i}$ converges exactly for $z\in \mathbb{C}_\infty$ with $v(z)>\rho_B$, and if $v(\jmath)\leq-q$ then it converges exactly for $v(z)>\rho_A$. \end{corollary} \begin{proof} We know that the series converges if and only if $\lim_{n\to +\infty}v(\beta_nz^{q^n})=+\infty$. From Lemma~\ref{vbeta} we have \begin{equation}\lambdabel{orderlog} v(\beta_nz^{q^n})= \begin{cases} { (q^n-1)\left(\frac{v(A)+q}{q-1}+v(z)\right)+v(z)}, &\textnormal{if $v(\jmath)<-q$,}\\[10pt] { (q^n-1)\left(\frac{v(B)+q^2}{q^2-1}+v(z)\right)+v(z)}, &\parbox{1truein}{if $v(\jmath)>-q$ \\ and $n$ is even,}\\[10pt] {(q^n-1)\left(\frac{v(B)+q^2}{q^2-1}+v(z)\right)+v(z)+\frac{v(\jmath)+q}{q+1}}, &\parbox{1truein}{if $v(\jmath)>-q$ \\ and $n$ is odd.}\\ \end{cases} \end{equation} When $v(\jmath)=-q$ then we have \begin{equation}\lambdabel{manyn} v(\beta_n z^{q^n})\geq(q^n-1)\left(\frac{v(B)+q^2}{q^2-1}+v(z)\right)+v(z), \end{equation} with equality holding for infinitely many values of $n$. The result follows at once. \end{proof} \begin{corollary}\lambdabel{logrange} If the series for $\log_\phi$ converges for $z\in \mathbb{C}_\infty$ then we must have \begin{equation} v(\log_\phi(z))=v(z). 
\end{equation} \end{corollary} \begin{proof} From \eqref{orderlog} and \eqref{manyn}, it is easy to see that in the case of convergence we must have \[ v(z)<v(\beta_nz^{q^n}) \textrm{ for all } n\geq 1, \] and the result follows by the ultrametric property of $v$. \end{proof} \begin{remark} Note that $\rho_B-\rho_A=\frac{v(\jmath)+q}{q^2-1}$. If we set \[ \rho_\phi:=\max(\rho_A,\rho_B), \] then we can rephrase Corollary \ref{logorder} by saying that the series of $\log_\phi$ converges at $z\in \mathbb{C}_\infty$ if and only if $v(z)>\rho_\phi$. \end{remark} \section{Computing the periods of rank two Drinfeld modules} Let $\phi$ be a Drinfeld module given by \[ \phi_T=T+A\tau+B\tau^2, \] and let $\Lambda_\phi$ be the corresponding lattice. We can describe $\Lambda_\phi$ as the unique lattice for which \begin{equation} e_{\Lambda_\phi}=e_\phi. \end{equation} In other words, $\Lambda_\phi$ is the set of zeros of $e_\phi$. Our goal in this section is to outline a procedure for obtaining \emph{periods} of $\phi$ (i.e., elements of the lattice $\Lambda_\phi$) in terms of the coefficients $A$ and $B$. We start with an easy lemma on the values of $e_\phi$ at the $T$-division points of $\phi$. \begin{lemma}\label{Tdivision} For every $\lambda \in \Lambda_\phi$, $\delta_\lambda:=e_\phi\left(\frac{\lambda}{T}\right)$ is a root of the polynomial \[ Bx^{q^2}+Ax^{q}+Tx=0. \] \end{lemma} \begin{proof} The function $e_\phi$ satisfies the functional equation $e_\phi(Tz)=\phi_T(e_\phi(z))$. Hence \[ 0=e_\phi\left(T\cdot \frac{\lambda}{T}\right)=T\delta_\lambda+A\delta_\lambda^q+B\delta_\lambda^{q^2}, \] and the result follows. \end{proof} For the remainder of the paper, we let \begin{equation} f_\phi(x)=Bx^{q^2}+Ax^q+Tx. \end{equation} Also set \begin{align*} V_\phi&:=\{\delta \in \mathbb{C}_\infty: f_\phi(\delta)=0\},\\ V_\phi^*&:=\{\delta \in \mathbb{C}_\infty: B\delta^{q^2-1}+A\delta^{q-1}+T=0\}. \end{align*} As $V_\phi$ is the $T$-torsion submodule of $\phi$ it follows that $V_\phi$ is a $2$-dimensional vector space over~$\mathbb{F}_q$, and $V_\phi^*$ is its set of nonzero elements. The following lemma gives a complete description of the possible valuations on $V^*_\phi$. \begin{lemma}\label{vdelta} Let $\phi$ be a rank $2$ Drinfeld module given by \eqref{rank2}, and let $\jmath$ be its $\jmath$-invariant as in \eqref{jinv}. Exactly one of the following cases holds. \begin{enumerate} \item All the elements of $V_\phi^*$ have the same valuation given by \begin{equation}\label{allsame} v(\delta)=\frac{-(1+v(B))}{q^2-1} \textrm{ for all } \delta \in V_\phi^*. \end{equation} This case happens if and only if $v(\jmath)\geq -q$. \item There is an element $\eta\in V_\phi^*$ such that all elements of $\mathbb{F}_q^* \eta$ have strictly larger valuation than the rest of $V_\phi^*$ if and only if $v(\jmath)<-q$. In this case we have \begin{equation}\label{bigone} v(\eta)=\frac{-(1+v(A))}{q-1} \textrm{ and } v(\delta)=\frac{v(A)-v(B)}{q^2-q} \textrm{ for all } \delta \in V_\phi\setminus \mathbb{F}_q\eta. \end{equation} \end{enumerate} \end{lemma} \begin{proof} The lemma follows from an analysis of the Newton polygon of the defining polynomial $f_\phi(x)/x = Bx^{q^2-1} + Ax^{q-1} + T = 0$ of $V_\phi^*$ \cite[Ch.~2]{GossBook}, \cite[\S I.2]{KedlayaBook}. Indeed the line segment connecting $(0,-1)$ and $(q^2-1,v(B))$ has slope $\frac{v(B) + 1}{q^2-1}$, and one checks that $(q-1,v(A))$ lies on or above this line segment if and only if $v(\jmath) = (q+1)v(A) - v(B) \geq -q$.
Thus $v(\jmath) \geq -q$ if and only if all zeroes of $f_\phi(x)/x$ have valuation $-\frac{v(B)+1}{q^2-1}$. Otherwise, when $v(\jmath) < -q$, the Newton polygon breaks into two segments: one of width $q-1$ from $(0,-1)$ to $(q-1,v(A))$ of slope $\frac{v(A)+1}{q-1}$, and another of width $q^2-q$ from $(q-1,v(A))$ to $(q^2-1,v(B))$ of slope $\frac{v(B)-v(A)}{q^2-q}$. The result then follows. \end{proof} Guided by the results above, we consider certain families of rank two Drinfeld modules as follows. Fix $0\neq\delta\in \mathbb{C}_\infty$, and set \begin{equation} \mathcal{F}_\delta:=\{\textrm{All rank 2 Drinfeld modules $\phi$ such that }\mathbb{F}_q\delta\subset V_\phi\}. \end{equation} The following theorem gives a complete description of the cases where the lattice $\Lambda_\phi$ could be recovered by applying $\log_\phi$ to $V_\phi$. \begin{theorem}\label{periods} Let $\phi\in\mathcal{F}_\delta$ be given by $\phi_T=T+A\tau+B\tau^2$. Fix a choice of a $(q-1)$-st root of $\frac{T}{B}$ and set \begin{equation}\label{c} c:=\delta^{-1}\left(\frac{T}{B}\right)^\frac{1}{q-1}. \end{equation} Let $\zeta$ be a root of \begin{equation}\label{delta} x^q-\delta^{q-1}x=c. \end{equation} We have the following cases. \begin{enumerate} \item If $v(\jmath) \geq -q$, then $v(\delta)=v(\zeta)=\frac{-(1+v(B))}{q^2-1}$. Hence $\log_\phi$ converges at $\delta$ and $\zeta$, and the period lattice $\Lambda_\phi$ is generated by $\{T\log_\phi(\delta),T\log_\phi(\zeta)\}$. \item If $v(\jmath)<-q$ and $v(\delta)=\frac{-(1+v(A))}{q-1}$, then $\log_\phi$ converges at $\delta$, and $T\log_\phi(\delta)$ is a period in $\Lambda_\phi$. Furthermore $v(\zeta)=\frac{v(A)-v(B)}{q^2-q}$, and $\log_\phi$ converges on all of $V_\phi$ if and only if \begin{equation}\label{jq2} v(\jmath)>-q^2. \end{equation} If \eqref{jq2} is satisfied then the period lattice $\Lambda_\phi$ is generated by $\{T\log_\phi(\delta),T\log_\phi(\zeta)\}$. \item If $v(\jmath)<-q$ and $v(\delta)=\frac{v(A)-v(B)}{q^2-q}$, then $v(\zeta)=\frac{-(1+v(A))}{q-1}$ and $\log_\phi$ converges at $\zeta$, hence $T\log_\phi(\zeta)$ is a period in $\Lambda_\phi$. Again $\log_\phi$ converges on all of $V_\phi$ if and only if \eqref{jq2} is satisfied, in which case the period lattice $\Lambda_\phi$ is generated by $\{T\log_\phi(\delta),T\log_\phi(\zeta)\}$. \end{enumerate} \end{theorem} \begin{proof} The condition $\phi\in \mathcal{F}_\delta$ is equivalent to \begin{equation}\label{delta1} B\delta^{q^2}+A\delta^q+T\delta=0. \end{equation} Substituting \eqref{delta1} in $f_\phi(x)$ we get \[ \begin{split} f_\phi(x)&=Bx^{q^2}-(T\delta^{1-q}+B\delta^{q^2-q})x^q+Tx\\ &=B(x^q-\delta^{q-1}x)^q-T\delta^{1-q}(x^q-\delta^{q-1}x). \end{split} \] It follows that any $\zeta\in V_\phi\setminus \mathbb{F}_q\delta$ must satisfy \begin{equation}\label{zeta} \zeta^q-\delta^{q-1}\zeta=\delta^{-1}\left(\frac{T}{B}\right)^{\frac{1}{q-1}}. \end{equation} Obviously $\frac{-(1+v(B))}{q^2-1}>\rho_B$, and it follows that when $v(\jmath)\geq -q$, $\log_\phi$ converges on all of $V_\phi$. When $v(\jmath)<-q$, we also have $\frac{-(1+v(A))}{q-1}>\rho_A$; however we have \[ \frac{v(A)-v(B)}{q^2-q}>\rho_A=-\frac{q+v(A)}{q-1} \textrm { if and only if } v(\jmath)>-q^2. \] Finally assume that $\delta$ and $\zeta$ are linearly independent over $\mathbb{F}_q$, and that $\log_\phi$ converges at both of them. We need to show that $\log_\phi(\delta)$ and $\log_\phi(\zeta)$ are linearly independent over $\mathbb{A}$.
From Lemma \ref{Tdivision} we see that $e_\phi(T^n\log_\phi\eta)=0$ for all $n\geq 1$ and all $\eta\in V_\phi$. Thus if $a,b \in \mathbb{A}$ are polynomials with constant terms $a_0$ and $b_0$ respectively, then \[ e_\phi(a\log_\phi(\delta)+b\log_\phi(\zeta))=a_0\delta+b_0\zeta, \] and it follows that indeed $\{\log_\phi(\delta),\log_\phi(\zeta)\}$ are linearly independent over $\mathbb{A}$. \end{proof} \begin{remark} We note that \eqref{delta} could be written as \begin{equation}\lambdabel{art-schr} X^q-X=\frac{c}{\delta^q}, \end{equation} where $X:=\frac{x}{\delta}$. Thus computing $\zeta$ is reduced to the extraction of an Artin-Schreier root. \end{remark} \section{An Analytic Expression for Periods} In the previous section we obtained a procedure for computing periods which involved the extraction of certain roots. In this section we show that under additional conditions (cf.~\eqref{fstar}) we can obtain a completely analytic expression for the periods. We start with a lemma on expressing the roots of a certain algebraic equation in terms of series. \begin{lemma}\lambdabel{equation} Let $C,\delta \in \mathbb{C}_\infty\setminus\{0\}$. If \begin{equation}\lambdabel{Cineq} v(C)>qv(\delta) \end{equation} then the set of solutions of the equation \begin{equation}\lambdabel{Ceqn} x^q-\delta^{q-1}x=C \end{equation} is given by \begin{equation}\lambdabel{sersol} \mathbb{F}_q\delta-\delta\sum_{i=0}^\infty \left(\frac{C}{\delta^q}\right)^{q^i}. \end{equation} \end{lemma} \begin{proof} Condition \eqref{Cineq} guarantees the convergence of the infinite series. It can easily be seen that it satisfies \eqref{Ceqn}. Finally, notice that a polynomial of degree $q$ can have at most $q$ distinct solutions. \end{proof} \begin{corollary}\lambdabel{eta} Let $\phi_T=T+A\tau+B\tau^2$ be a Drinfeld module with $v(\jmath)<-q$, and assume that $\delta\in V_\phi$ with $v(\delta)=\frac{v(A)-v(B)}{q^2-q}$. Fix a choice of a $(q-1)$-root of $\frac{T}{B}$. Then the unique subspace of $V_\phi$ where the valuation of the nonzero elements is $\frac{-(1+v(A))}{q-1}$ is generated by \begin{equation}\lambdabel{etaseries} \eta=-\delta \sum_{n=0}^\infty \left(\frac{T}{\delta^{q^2-1}B}\right)^\frac{q^n}{q-1}. \end{equation} \end{corollary} \begin{proof} With $c$ as in \eqref{c}, we see that \[ v\left(\frac{c}{\delta^q}\right) = -(q+1)\frac{v(A)-v(B)}{q^2-q}-\frac{1+v(B)}{q-1} =\frac{-(v(\jmath)+q)}{q^2-q}>0, \] and thus the series converges, and the valuation of the sum is equal to that of the first term by the ultrametric property. So indeed \[ v(\eta)=v(\delta^{1-q}c)=\frac{-q(v(A)-v(B))}{q^2-q}-\frac{(1+v(B))}{q-1}=\frac{-(1+v(A))}{q-1}, \] and the result follows from Lemma \ref{equation} and Lemma \ref{vdelta}. \end{proof} For $0\neq \delta \in \mathbb{C}_\infty$ we consider the subfamily $\mathcal{F}_\delta^\star$ of $\mathcal{F}_\delta$ defined by \begin{equation}\lambdabel{fstar} \mathcal{F}_\delta^\star:=\left\{\phi \in \mathcal{F}_\delta: v(\jmath)<-q \textrm { and } v(\delta)=\frac{v(A)-v(B)}{q^2-q}\right\}. \end{equation} We have the following analytic expression for periods in $\mathcal{F}_\delta^\star$. \begin{theorem} \lambdabel{maxval} Let $\phi \in \mathcal{F}_\delta^\star$ be given, and set $c$ as in \eqref{c}. Let $\beta_j$ be the coefficients of $\log_\phi$. Set \begin{equation}\lambdabel{a} \begin{split} \mathfrak a_\delta(n)&:=T\sum_{j=0}^n \beta_j\delta^{q^j}, \textrm{ and }\\ \mathfrak f(z)&:=\sum_{n=0}^\infty \mathfrak a_\delta(n)z^{q^n}. 
\end{split} \end{equation} Then the series $\mathfrak f$ converges for $z=\delta^{-q}c$, and $\mathfrak f(\delta^{-q}c)$ is a period of $\Lambda_\phi$ with maximal valuation. \end{theorem} \begin{proof} Let $\eta$ be as in \eqref{etaseries}. From Theorem \ref{periods} and Corollary \ref{eta}, we see that a period $\lambda\in \Lambda_\phi$ is given by \begin{multline*} \lambda := T\log_\phi(\eta)=T\log_\phi\left(\sum_{i=0}^\infty \delta^{1-q^{i+1}}c^{q^i}\right) =T\sum_{j=0}^\infty \beta_j\sum_{i=0}^\infty \delta^{q^j-q^{i+j+1}}c^{q^{i+j}}\\ =\sum_{n=0}^\infty T\left(\sum_{j=0}^n \beta_j\delta^{q^j}\right) \left(\frac{c}{\delta^q}\right)^{q^n} =\mathfrak f (\delta^{-q}c). \end{multline*} By Corollary \ref{logrange} we see that \[ v(\lambda)=-1-\frac{1+v(A)}{q-1}=\frac{-(q+v(A))}{q-1}. \] If $\lambdambda^\prime \in \Lambdambda_\phi$ has larger valuation, then $e_\phi(T^{-1}\lambdambda^\prime)$ is an element of $V_\phi$ with valuation larger than $\frac{-(1+v(A))}{q-1}$, which contradicts Lemma \ref{vdelta}, and the theorem follows. \end{proof} We end this section with a more detailed analysis of the function $\mathfrak f$. \begin{proposition} Let $\phi \in \mathcal{F}_\delta^\star$ be given, and let $\mathfrak a_\delta$ and $\mathfrak f$ be as in \eqref{a}. If $-q>v(\jmath)> -q^2$ then $\mathfrak f$ converges if and only if $v(z)>0$, and if $v(\jmath)<-q^2$ then $\mathfrak f$ converges if and only if \begin{equation} v(z)>\frac{-(v(\jmath)+q^2)}{q^2-q}>0. \end{equation} If $v(\jmath)=-q^2$, then $\mathfrak f$ converges at least for $v(z)>0$. Furthermore for $n\geq 0$ we have \begin{equation} \mathfrak a_\delta(n)=(T\delta)^{q^n}\beta_n-(B\delta^{q^2})^{q^{n-1}}\beta_{n-1}, \end{equation} and in the range $v(z)>\frac{-(q+v(\jmath))}{q^2-q}$, $\mathfrak f$ has the representation \begin{equation}\lambdabel{frakf} \mathfrak f(z)=\log_\phi(T\delta z)-\log_\phi(B\delta^{q^2}z^q). \end{equation} \end{proposition} \begin{proof} {From~\eqref{orderlog}}, we see that \[ \begin{split} v(\beta_n\delta^{q^n})&=(q^n-1)\left(\frac{q+v(A)}{q-1}+\frac{v(A)-v(B)}{q^2-q}\right)+v(\delta)\\ &=(q^n-1)\left(\frac{v(\jmath)+q^2}{q^2-q}\right)+v(\delta). \end{split} \] Thus our analysis naturally breaks into three cases. \textbf{Case 1: $v(\jmath)>-q^2$.} In this case $v(T\beta_n\delta^{q^n})$ is strictly increasing in $n$, and thus \[ v(\mathfrak a_\delta(n))=v(\delta)-1 \textrm{ for all } n\geq 0. \] It follows that the series for $\mathfrak f$ converges if and only if $v(z)>0$. \textbf{Case 2: $v(\jmath)<-q^2$.} In this case $v(T\beta_n\delta^{q^n})$ is strictly decreasing in $n$, and thus \[ v(\mathfrak a_\delta(n))=v(T\beta_n\delta^{q^n}). \] Hence \[ \begin{split} v(\mathfrak a_\delta(n)z^{q^n})&=v(z)+v(\delta)-1+(q^n-1)\left(v(z) +\frac{v(\jmath)+q^2}{q^2-q}\right), \end{split} \] and $\mathfrak f$ converges if and only if $v(z)>\frac{-(v(\jmath)+q^2)}{q^2-q}$. \textbf{Case 3: $v(\jmath)=-q^2$.} In this case $v(T\beta_n\delta^{q^n})=v(\delta)-1$ for all $n$, and thus \[ v(\mathfrak a_\delta(n))\geq v(\delta)-1 \textrm{ for all } n\geq 0. \] It follows that the series for $\mathfrak f$ converges at least for all $v(z)>0$. The first part of the proposition follows from the analysis above. 
To prove the second part, note that \eqref{tbetan} and \eqref{delta1} give \[ \begin{split} T\beta_i\delta^{q^i}&=(T\delta)^{q^i}\beta_i+(A\delta^q)^{q^{i-1}}\beta_{i-1} +(B\delta^{q^2})^{q^{i-2}}\beta_{i-2}\\ &=(T\delta)^{q^{i}}\beta_i-(T\delta)^{q^{i-1}}\beta_{i-1} -(B\delta^{q^2})^{q^{i-1}}\beta_{i-1}+(B\delta^{q^2})^{q^{j-2}}\beta_{i-2}. \end{split} \] Hence \[ \begin{split} \mathfrak a_\delta(n)&=(T\delta)^{q^n}\beta_n-(B\delta^{q^2})^{q^{n-1}}\beta_{n-1}. \end{split} \] Consequently we (formally) get \[ \mathfrak f(z)=\log_\phi(T\delta z)-\log_\phi(B\delta^{q^2}z^q). \] The expression on the right hand side converges for $v(T\delta z)>\rho_A$ and $v(B\delta^{q^2}z^q)>\rho_A$. Either statement is equivalent to \[ v(z)>\frac{-(v(\jmath)+q)}{q^2-q}, \] and the result follows. \end{proof} \begin{remark} Note that since $v(\delta^{-q}c)=\frac{-(v(\jmath)+q)}{q^2-q}$, we can not use \eqref{frakf} to evaluate $\lambda=\mathfrak f(\delta^{-q}c)$. Instead we can only use \eqref{a} for that evaluation, and indeed the argument above gives another proof that $\mathfrak f$ does converge at that point. \end{remark} \section{An example from complex multiplication} \lambdabel{ThakurA} In \cite{Thakur92}, Thakur determined the power series expansions of the exponential and logarithm functions of sgn-normalized rank~$1$ Drinfeld modules, and he showed how his constructions fit into the more general framework of shtuka functions in~\cite{Thakur93}, which is particular to the rank~$1$ theory. Thakur's Drinfeld modules, originally studied by Hayes~\cite{Hayes79} in the context of explicit class field theory, are rank~$1$ over extensions of $\mathbb{A}$ but can be thought of as higher rank Drinfeld $\mathbb{A}$-modules with complex multiplication. Here we consider one of Thakur's examples \cite[Ex.~A]{Thakur92} and compare it to the constructions of the previous sections. Let $q=3$, and let $y \in \mathbb{C}_\infty$ satisfy $y^2 = T^3-T-1$. Then define a rank $2$ Drinfeld module $\phi$ by setting \begin{equation} \phi_T := T + y(T^3-T)\tau + \tau^2. \end{equation} The module $\phi$ is special in that it has complex multiplication by the ring $\mathbb{F}_3[T,y]$, which itself has class number one, and so $\phi$ is a rank $1$ Drinfeld $\mathbb{F}_3[T,y]$-module but a rank $2$ Drinfeld $\mathbb{A}$-module. For $n \geq 1$, Thakur lets $[n]_y := y^{3^n}-y$ and sets \[ f_n = \frac{[n]_y - y[n]}{[n]-1}, \quad g_n = \frac{[n]_y - y^{3^n}[n]}{[n+1]+1}. \] He then establishes that if $e_\phi(z) = \sum \alpha_n z^{3^n}$ and $\log_\phi(z) = \sum \beta_n z^{3^n}$, then for $n \geq 1$, \[ \alpha_n = \frac{\alpha_{n-1}^3}{f_n}, \quad \beta_n = \frac{\beta_{n-1}}{g_n}. \] Therefore, \[ \alpha_n = \frac{1}{f_n f_{n-1}^3 \cdots f_1^{3^{n-1}}}, \quad \beta_n = \frac{1}{g_n g_{n-1} \cdots g_1}. \] After some calculations (and using that $v(y) = -\frac{3}{2}$), it follows that for $n \geq 1$, \begin{align} v(\alpha_n) &= {\textstyle \frac{1}{2}}(n-2) 3^n, \lambdabel{Thakalpha} \\ v(\beta_n) &= {\textstyle -\frac{3}{4}} (3^n-1). \lambdabel{Thakbeta} \end{align} See also Lutes~\cite[\S IV.C]{LutesThesis}. (In both Lutes and Thakur the formulas differ from the ones above by a factor of $\frac{1}{2}$, as they set $v(T)=-2$ instead of $-1$.) Certainly the valuation of $\alpha_n$ in \eqref{Thakalpha} is consistent with \eqref{lim}, since $v_0 = -\frac{9}{2}$ in this case. Now $\jmath(\phi) = y^4(T^3-T)^4$, and so $v(\jmath(\phi)) = -18$. 
Therefore \eqref{Thakbeta} matches with Lemma~\ref{vbeta}, and $\log_\phi(z)$ converges for $v(z) > \frac{3}{4}$, which coincides with Corollary~\ref{logorder}. Now the set $V_\phi$ is generated over $\mathbb{F}_3$ by $e_\phi(1/T)$ and $e_\phi(y/T)$, which have valuations $\frac{7}{4}$ and $-\frac{3}{4}$ respectively (see \cite[Ex.~4.15]{LutesP}), and thus $\phi$ fits into the situation of Theorem~\ref{periods} with $v(\jmath) < -q^2$ (so $\log_\phi$ does not converge on all of $V_\phi$). However, one can also calculate the period by other means (see \cite[\S III]{GekelerBook}, \cite[\S 7.10]{GossBook}, \cite[Ex.~4.15]{LutesP}), and one finds that a period $\pi$ with maximal valuation has $v(\pi) = \frac{3}{4}$, agreeing with Theorem~\ref{maxval}. \section{A multinomial formula and supersingular modules} Let $\mathfrak p \in \mathbb{A}$ be monic and irreducible of degree $d$, and let $L_\mathfrak p$ be a field extension of $\mathbb{A}/\mathfrak p$. A Drinfeld module $\phi$ of rank $r$ over $L_\mathfrak p$ is said to be \emph{supersingular} if $\phi_\mathfrak p$ is purely inseparable. If $\phi$ has rank $2$ then its supersingularity is equivalent to the vanishing of the coefficient of $\tau^d$ in $\phi_\mathfrak p$ modulo $\mathfrak p$. (See Gekeler \cite{Gekeler88}, \cite{Gekeler91} for further discussion and characterizations of supersingularity). It is thus natural to seek a characterization of $\phi_{T^m}$ for all $m\in\mathbb{N}$ in terms of the coefficients of $\phi_T$. It turns out that the shadowed partitions of \S 2 above will be crucial here as well. However, we need to introduce just a little more notation before we can state our result. Let $n\in \mathbb{Z}$ and let $S$ be a finite subset of $\mathbb{N}$. Set \begin{equation} I_n(S):=\{(k_i)_{i\in S}: k_i \in \mathbb{N} \textrm{ and } \sum_{i\in S}k_i=n\}, \end{equation} and define \begin{equation}\lambdabel{hndef} h_n^S:=\sum_{(k_i) \in I_n(S)} T^{\sum_{i\in S}k_i q^i} \in \mathbb{A}. \end{equation} Note that if $n<0$, then $I_n(S)=\emptyset$ and hence $h_n^S=0$. Also $h_0^S=1$. (These properties hold even if $S=\emptyset$ since $I_n(\emptyset)=\emptyset$ for all $n\neq 0$ and $I_0(\emptyset)=\{\emptyset\}$). We are now ready to state a Drinfeld multinomial Theorem. \begin{theorem}[Multinomial Formula] \lambdabel{multinomialthm} Let $\phi$ be a rank $r$ Drinfeld module over any $\mathbb{A}$-field $L$ given by \begin{equation}\lambdabel{Ar} \phi_T=T+\sum_{i=1}^r A_i \tau^i. \end{equation} For $m \in \mathbb{N}$ and $n\in \mathbb{Z}$ define the coefficients $c(n;m):=c(n;m;\phi)$ by \begin{equation}\lambdabel{cnm} \phi_{T^m}=\sum_{n=0}^{rm}c(n;m)\tau^n, \end{equation} and $c(n;m)=0$ for $n<0$ or $n>rm$. Then for all $m,\, n \geq 0$ we have \begin{equation}\lambdabel{multinomial} c(n;m)=\sum_{{\bf S}\in P_r(n)} {\bf A^S}\cdot h_{m-|{\bf S}|}^{(\cup {\bf S}\cup \{n\})}. \end{equation} \end{theorem} \begin{proof} We proceed by induction on $m$. It is easy to verify that formula \eqref{multinomial} gives $c(0;0)=1$ since $P_r(0)=\{\emptyset\}$. For $n>0$ and ${\bf S}\in P_r(n)$ we must have $|{\bf S}|>0$, hence $m-|{\bf S}|<m$, and thus formula \eqref{multinomial} gives $c(n;0)=0$ for all $n>0$. So the statement is valid for $m=0$ since $\phi_1=1$. Next we note that, because of the identity $\phi_{T^{m+1}}=\phi_T(\phi_{T^m})$, the coefficients $c(n;m)$ satisfy the recursion formula \begin{equation}\lambdabel{crecursion} c(n;m+1)=Tc(n;m)+\sum_{i=1}^r A_i\cdot c(n-i;m)^{q^i}. 
\end{equation} Thus, by the induction hypothesis we have \begin{equation}\lambdabel{step1} c(n;m+1)=T\sum_{{\bf S}\in P_r(n)} {\bf A^S}\cdot h_{m-{\bf |S|}}^{(\cup {\bf S}\cup \{n\})} +\sum_{i=1}^r A_i \sum_{{\bf S}^{(i)}\in P_r(n-i)} \left({\bf A}^{{\bf{S}}^{(i)}}\right)^{q^i} \left(h_{m- |\bf{S}^{(i)}|}^{(\cup {\bf S}^{(i)}\cup \{n-i\})}\right)^{q^i} \end{equation} Note that $A_j^{q^i\cdot w(S_j^{(i)})}=A_j^{w(S_j^{(i)}+i)}$, $A_i\cdot A_i^{q^i\cdot w(S_i^{(i)})}=A_i^{w(\{0\}\cup (S_j^{(i)}+i))}$, and that \[ \left(h_{m- |{\bf S}^{(i)}|}^{(\cup {\bf S}^{(i)}\cup \{n-i\})}\right)^{q^i}= h_{m- |{\bf S}^{(i)}|}^{(\cup ({\bf S}^{(i)}+i)\cup \{n\})}. \] Using the identification \eqref{map} of $P_r(n-i)$ and $P_r^i (n)$ we see that \eqref{step1} becomes \begin{equation}\lambdabel{step2} \begin{split} c(n;m+1)&=\sum_{i=1}^r \sum_{{\bf S} \in P_r^i(n)} {\bf A^S} \cdot\left[Th_{m-|{\bf S}|}^{(\cup {\bf S}\cup \{n\})}+ h_{m-[(|S_i|-1)+\sum_{j\neq i} |S_j|]}^{(\cup_{j=1}^r (S_j\setminus \{0\})\cup\{n\})}\right]\\ &=\sum_{i=1}^r \sum_{{\bf S} \in P_r^i(n)} {\bf A^S}\cdot h_{m+1-|{\bf S}|}^{(\cup {\bf S}\cup \{n\})}, \end{split} \end{equation} which proves the result, since the summands are uniform in $i$ and the sets $P_r^i(n)$ partition $P_r(n)$. \end{proof} \begin{corollary} \lambdabel{supersingular} Let $\mathfrak p \in \mathbb{A}$ be a monic prime of the form \begin{equation} \mathfrak p=\sum_{i=0}^{d} \mu_i T^i, \end{equation} and let $L_\mathfrak p$ be a field extension of $\mathbb{A}/\mathfrak p$. Let $\phi$ be a rank $2$ Drinfeld module over $L_\mathfrak p$ given by \[ \phi_T={T}+A\tau+B\tau^2, \] and let $c(n;m)$ be as in \eqref{multinomial}, then $\phi$ is supersingular at $\mathfrak p$ if and only if \begin{equation}\lambdabel{SS} \sum_{i=\lceil\frac{d}{2}\rceil}^d \mu_ic(d;i)\equiv 0 \pmod \mathfrak p . \end{equation} \end{corollary} \begin{proof} We have \[ \phi_{\mu_0+\dots+\mu_dT^d}=\sum_{i=0}^d\mu_i\sum_{n=0}^{2i} c(n;i)\tau^n =\sum_{n=0}^{2d}\left(\sum_{i=\lceil\frac{n}{2}\rceil}^d \mu_ic(n;i)\right)\tau^n, \] and the result follows by recognizing the coefficient of $\tau^d$. \end{proof} \begin{example} To illustrate the results above, we identify the condition for a rank $2$ Drinfeld module to be supersingular at a degree $4$ monic prime $\mathfrak p$. By \eqref{SS} this is equivalent to the vanishing modulo $ \mathfrak p$ of \begin{align} \mu_2 c(4;2)&{}+\mu_3 c(4;3)+c(4;4) \lambdabel{ss4}\\ &=\mu_2B^{1+q^2}+\mu_3[B^{1+q^2}(T+T^{q^2}+T^{q^4})+A^{1+q}B^{q^2}+A^{1+q^3}B^q+A^{q^2+q^3}B] \notag \\ &\mathbb{H}space*{10pt} {}+B^{1+q^2}(T^2+T^{2q^2}+T^{2q^4}+T^{1+q^2}+T^{1+q^4}+T^{q^2+q^4}) \notag \\ &\mathbb{H}space*{10pt} {}+A^{1+q}B^{q^2}(T+T^q+T^{q^2}+T^{q^4}) +A^{1+q^3}B^{q}(T+T^q+T^{q^3}+T^{q^4}) \notag \\ &\mathbb{H}space*{10pt} {}+A^{q^2+q^3}B(T+T^{q^2}+T^{q^3}+T^{q^4})+A^{1+q+q^2+q^3}. \notag \end{align} Note that $\{T^{q^i} \pmod \mathfrak p, 0\leq i\leq 3\}$ are the $4$ distinct roots of $\mathfrak p$ in $L_\mathfrak p$. Thus we have the following congruences \begin{equation}\lambdabel{modp} \begin{split} \mu_2&\equiv T^{1+q}+T^{1+q^2}+T^{1+q^3}+T^{q+q^2}+T^{q+q^3}+T^{q^2+q^3} \pmod \mathfrak p, \\ \mu_3&\equiv -(T+T^q+T^{q^2}+T^{q^4}) \pmod \mathfrak p,\\ T^{q^4}&\equiv T \pmod \mathfrak p. \end{split} \end{equation} Substituting \eqref{modp} into \eqref{ss4}, a simple computation yields \begin{multline*} \mu_2 c(4;2)+\mu_3 c(4;3)+c(4;4) \\ =A^{1+q+q^2+q^3}-[1]A^{q^2+q^3}B-[2]A^{1+q^3}B^q-[3]A^{1+q}B^{q^2}+[2][3]B^{1+q^2}. 
\end{multline*} Finally, dividing the above expression by $B^{1+q^2}$ we see that $\phi$ is supersingular at $\mathfrak p$ if and only if $\jmath(\phi)$ is a root of \begin{equation}\label{ssj4} \jmath^{q^2+1}-[1]\jmath^{q^2}-[2]\jmath^{q^2-q+1}-[3]\jmath+[1][3] \equiv 0 \pmod \mathfrak p. \end{equation} Using methods from Drinfeld modular forms, Cornelissen~\cite{Cornelissen99a}, \cite{Cornelissen99b} has also developed recursive formulas for the polynomials defining supersingular $\jmath$-invariants, and one can check that \eqref{ssj4} agrees with $P_4(\jmath)$ in \cite[(2.2)]{Cornelissen99b}. \end{example} \end{document}
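A remark on computation: the recursion $c(n;m+1)=Tc(n;m)+\sum_{i=1}^r A_i\, c(n-i;m)^{q^i}$ and the rank-$2$ supersingularity criterion of the corollary above are easy to test by machine. The short Python sketch below does exactly this; every concrete choice in it ($q=3$, the monic irreducible $\mathfrak p = T^3+2T+1$, and the coefficients $A=\bar{T}$, $B=1$ of $\phi_T$) is an illustrative assumption, not data taken from the text, and may be replaced by any other rank-$2$ Drinfeld module over $\mathbb{A}/\mathfrak p$.
\begin{verbatim}
# Illustrative check of the recursion for c(n;m) and of the rank-2
# supersingularity criterion.  ALL concrete data below (q, p(T), A, B)
# are arbitrary example choices, not values coming from the paper.

q = 3                          # constant field F_q
p_coeffs = [1, 2, 0, 1]        # p(T) = 1 + 2T + T^3, monic irreducible over F_3
d = len(p_coeffs) - 1          # degree of p

# Elements of L_p = F_q[T]/(p) are coefficient lists of length d,
# constant coefficient first.
ZERO = [0] * d
ONE  = [1] + [0] * (d - 1)
t    = [0, 1] + [0] * (d - 2)  # the class of T in L_p

def add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

def mul(a, b):
    # multiply as polynomials in T, then reduce modulo p(T) and modulo q
    prod = [0] * (2 * d - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            prod[i + j] = (prod[i + j] + x * y) % q
    for i in range(2 * d - 2, d - 1, -1):      # p is monic, so ordinary division
        c = prod[i]
        if c:
            for j in range(d + 1):
                prod[i - d + j] = (prod[i - d + j] - c * p_coeffs[j]) % q
    return prod[:d]

def pw(a, e):
    # exponentiation by squaring in L_p (used for the Frobenius twists x -> x^{q^i})
    result, base = ONE[:], a[:]
    while e:
        if e & 1:
            result = mul(result, base)
        base = mul(base, base)
        e >>= 1
    return result

# Rank-2 Drinfeld module phi_T = T + A*tau + B*tau^2 over L_p (example values).
A = t[:]      # A = class of T   (arbitrary choice)
B = ONE[:]    # B = 1            (arbitrary choice)

def coeffs(m):
    # returns [c(0;m), ..., c(2m;m)] via
    # c(n;m+1) = T c(n;m) + A c(n-1;m)^q + B c(n-2;m)^{q^2}
    c = [ONE[:]]                               # phi_{T^0} = 1
    for k in range(m):
        cn = []
        for n in range(2 * (k + 1) + 1):
            v = mul(t, c[n]) if n < len(c) else ZERO[:]
            if 0 <= n - 1 < len(c):
                v = add(v, mul(A, pw(c[n - 1], q)))
            if 0 <= n - 2 < len(c):
                v = add(v, mul(B, pw(c[n - 2], q ** 2)))
            cn.append(v)
        c = cn
    return c

# Supersingularity test: sum_{i >= ceil(d/2)} mu_i * c(d;i) = 0 in L_p ?
total = ZERO[:]
for i in range(d + 1):
    ci = coeffs(i)
    if d < len(ci):                            # c(d;i) is nonzero only when d <= 2i
        mu_i = [p_coeffs[i] % q] + [0] * (d - 1)
        total = add(total, mul(mu_i, ci[d]))

print("coefficient of tau^d in phi_p (mod p):", total)
print("phi supersingular at p               :", total == ZERO)
\end{verbatim}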
\begin{document} \title{Optimal Control Realizations of Lagrangian Systems with Symmetry} \alphauthor{M. Delgado-T\'ellez} \alphaddress{Depto. de Matem\'atica Aplicada Arquitectura T\'ecnica, Univ. Polit\'ecnica de Madrid. Avda. Juan de Herrera 6, 28040 Madrid, Spain} \email{[email protected]} \alphauthor{A. Ibort} \alphaddress{Depto. de Matem\'aticas, Univ. Carlos III de Madrid, Avda. de la Universidad 30, 28911 Legan\'es, Madrid, Spain.} \email{[email protected]} \alphauthor{T. Rodr\'{\i}guez de la Pe\~na} \alphaddress{Depto. de Ciencias, Univ. Europea de Madrid. C/ Tajo s/n, Villaviciosa de Od\'on, 28670, Madrid, Spain.} \email{[email protected]} \date{\today} \thanks{This work has been partially supported by the Spanish MCyT grants MTM2004-07090-C03-03, BFM2001-2272.} \begin{abstract} A new relation among a class of optimal control systems and Lagrangian systems with symmetry is discussed. It will be shown that a family of solutions of optimal control systems whose control equation are obtained by means of a group action are in correspondence with the solutions of a mechanical Lagrangian system with symmetry. This result also explains the equivalence of the class of Lagrangian systems with symmetry and optimal control problems discussed in \cite{Bl98}, \cite{Bl00}. The explicit realization of this correspondence is obtained by a judicious use of Clebsch variables and Lin constraints, a technique originally developed to provide simple realizations of Lagrangian systems with symmetry. It is noteworthy to point out that this correspondence exchanges the role of state and control variables for control systems with the configuration and Clebsch variables for the corresponding Lagrangian system. These results are illustrated with various simple applications. \end{abstract} \maketitle \tableofcontents \section{Introduction} A new insight on the properties of Lagrangian systems with symmetry has been gained by looking at them from the point of view of optimal control theory (see for instance \cite{Bl00}) and, conversely, a new representation for a class of optimal control problems and Pontryagin's maximum principle was obtained in this way. In particular, it was shown that the rigid body problem and Euler's equation for incompressible fluids, when formulated as optimal control problems, gave rise to the symmetric body realization of the rigid body and the impulse momentum representation of Euler's equations for inviscid incompressible fluids respectively. Thus, it was found that the application of Pontryagin maximum principle to those optimal control problems leads to a nicer, more symmetrical form of the system dynamical equations. In this paper we will discuss the underlying geometrical structure common to these two examples, showing that they are particular instances of a general correspondence among the solutions of a particular class of optimal control problems, that will be called Lie-Scheffers-Brockett optimal control systems, and Lagrangian systems with symmetry. We will say that a control system is of Lie-Scheffers-Brockett class\footnote{This class of systems were considered first, obviously not as a control problem, by Lie and Scheffers \cite{Li93} and much later, they where also discussed at length by R. 
Brockett \cite{Br70,Br73} already in the realm of control theory.} if in local coordinates $x^i$ in state space, has the form: $$\dot{x}^i = \sum_{a=1}^r c_a(t) X_a^i (x) ,$$ where in addition the vector fields $X_a = X_a^i \partial /\partial x^i$, satisfy the Lie closure relations: $$[X_a, X_b] = C_{ab}^c X_c ,$$ for some set of constants $C_{ab}^c$. Lie-Scheffers-Brockett systems are the prototypical examples of dynamical systems defined on Lie groups or homogeneous spaces. Lie and Scheffers found \cite{Li93} that for dynamical systems of this form there always exist non-linear superposition principles for the composition of solutions of the differential equation, the most celebrated and well-known example being the Riccati equation. Furthermore, if $M$ is a homogeneous space for the Lie group $G$ and we denote by $\xi_a$ a basis of its Lie algebra $ \goth g$, we can consider the family of vector fields $X_a$ induced on $M$ by them. A Brockett system on $M$ is just a vector field $$\Gamma = \sum_{a=1}^r u^a(t) X_a $$ where $u^a$ denote the control variables. In this sense, Brockett theory of control systems on homogeneous spaces are a global formulation of the systems considered by Lie and Scheffers (see \cite{Ca00} and references therein for a detailed account of Lie-Scheffers theorem and its applications). On the other hand, Lagrangian systems with symmetry exhibit important structures that have been widely used to study the qualitative structure of their solutions. It was the work on reduction of symplectic systems with symmetry by J. E. Marsden and A. Weinstein \cite{Ma74} summarizing and improving classical ideas on symmetry that opened the door to a systematic understanding of the structures of some of the paradigmatic systems in mechanics and differential equations, for instance the rigid body and Euler's equations. Soon it was realized that such systems can be obtained by reduction of simple Lagrangian systems with symmetry usually defined on the tangent bundle of a Lie group (finite or infinite dimensional) or, more generally, on a principal bundle over some configuration space. This idea has been extensively used to describe a large class of systems running from plasma physics to elasticity and other problems in continuum media. An important contribution to the field has been the intrinsic understanding of the reduction of Euler-Lagrange's equations and some of their most important related structures (see \cite{Ce01} and references therein for a panoramic vision of these ideas). Apart from reduction, an important tool in the study of Lagrangian systems with symmetry is provided by Clebsch variables and Lin constraints. They were introduced as a way of providing a variational derivation of Euler's equations in Eulerian variables \cite{La32,Li63,Ma83}. Later on, in \cite{Ce87a,Ce87b} a geometrical framework was developed for this idea and nowadays new applications in computational aspects of dynamics are arising (see for instance \cite{Co09,De09} and references therein). If $L$ is a Lagrangian system with symmetry group $G$ defined for instance on a principal fibre bundle $Q$ with structure group $G$, then a way to derive the equations of motion on the quotient space $TQ/G$ is to use an auxiliary space $P$ where the group $G$ acts nonlinearly in general. Then we choose on such space an appropriate subspace of curves. Such subspace is selected by using a given connection $B$ on $Q$. 
It was proven in \cite{Ce87b} that the space of horizontal curves (provided that a suitable technical condition is satisfied) is isomorphic to the space of curves of the original variational principle defined by $L$. Moreover, the conditions defining such subspace of horizontal curves can be expressed neatly as the vanishing of certain differential condition, requirement that was first considered by Lin in the context of fluid dynamics (for trivial bundles and connections) and, thereby, were named Lin constraints \cite{Ce87b}. When using a suitable Lagrange's multiplier theorem to incorporate the constraints into the Lagrangian density, it is found the expression described in Section \ref{Clebsch}. It will be called a Clebsch realization of the system $L$ with horizontal Lin constraints. The auxiliary spaces used for this construction were originally $E\oplus E^*$, where $E$ is a linear representation space for the group $G$. Variables on $E$ and $E^*$ were called Clebsch variables and this terminology comes from the work by Clebsch providing a suitable representation for the (eulerian) velocity field \cite{La32}. A key idea in this paper is that if we consider as auxiliary space $T^*P$, the cotangent bundle of $P$, the variables $(x^i, p_i)$ on it can be identified with state and costate variables of an optimal control problem. On the other hand, as it was indicated above, the system thus obtained by using Clebsch variables and horizontal Lin constraints, is equivalent to the original Lagrangian system with symmetry. In this way, we will prove the equivalence between a Lagrangian system with symmetry and the appropriate optimal control problem. The paper is organized as follows: in Section \ref{LSB} Lie-Scheffers-Brockett optimal control systems are presented. Afterwards, in Section \ref{Clebsch} we will discuss how optimal control problems of Lie-Scheffers-Brockett type can be identified with a Clebsch realization of a Lagrangian system. Section \ref{horizontal_curves} will be devoted to discuss the spaces of curves where the variational principle determined by the Clebsch Lagrangian is defined. In Section \ref{lin_cons} it will be shown that the variational principle of an optimal control system of Lie-Scheffers-Brockett type is equivalent to a Clebsch realization of a Lagrangian system defined by the objective functional and with horizontal Lin constraints given by the control equation itself. Then, the equivalence with a Lagrangian system with suitable end-point conditions is established in Section \ref{main}. Finally, some simple applications and examples are discussed in Section \ref{examples}. \section{Lie-Scheffers-Brockett optimal control systems}\label{LSB} Let $G$ be a Lie group acting on the right on a smooth manifold $P$. We shall denote by $\goth g$ the Lie algebra of $G$ and by $\xi \in \goth g$ a generic element. Then $\xi_P$ will denote the Killing vector field on $P$ associated to $\xi$ by the action of $G$, i.e., \begin{equation}\label{xiP} \xi_P(x) = \left.\frac{d}{dt} \Big(x\cdot \exp (-t\xi)\Big) \right|_{t = 0}, \quad \quad x \in P . \end{equation} If $\xi_a$ is a given basis in $\goth g$, $[\xi_a, \xi_b]= C_{ab}^c \xi_c$, and $X_a$ denotes the corresponding vector field on $P$, i.e., $X_a = (\xi_a)_P$ with the notation above. We will also have that: \begin{equation}\label{xi_P} \xi_P = \sum_{a=1}^r u^a X_a , \end{equation} where $\xi = u^a \xi_a$ and, \begin{equation}\label{lie_algebra} [X_a,X_b] = C_{ab}^c X_c . 
\end{equation} If we let the coordinates $u^a$ on the Lie algebra $\goth g$ be time-dependent functions, they can be interpreted as control variables controlling the system $\Gamma :=\xi_P$. We shall consider the non-autonomous dynamical system on $P$ defined by the vector field $\xi_P$, i.e., \Eq{\label{xiS} \dot{x}^i = \xi_P (x) = \sum_{a=1}^r u^a(t) X_a^i(x) ,}and we will interpret it as the state equation for a control system with state space $P$. We will further restrict the class of systems we are interested in by introducing another structure that will provide a particular realization of the control variables $u^a$. Suppose that the Lie group $G$ acts on the left on a smooth manifold $Q$. We shall assume for simplicity that the action is proper and free, hence the orbit quotient space $Q/G$ is a smooth manifold $N$ and the canonical projection map $\pi_2 \colon Q \to N$ is a submersion. Therefore, the map $\pi_2$ defines a principal fibration over $N$ with structure group $G$. Let $B$ be a principal connection on the principal bundle $Q(G,N)$, this is, $B$ is a $\goth g$-valued 1-form $B\colon TQ\to\goth g$ on $Q$ such that $B(\xi_Q(q))=\xi$, $\forall \xi\in \goth g$, and $L_g^* B = {\hbox{Ad}}_g B$, $g\in G$, where ${\hbox{Ad}}_g$ denotes the adjoint action of $G$ on $\goth g$ and $L_g$ the left action of $G$ on $Q$. The restriction of a connection to the tangent space $T_qQ$ is denoted as $B_q$. Then we can define a map $\xi\colon TQ \to \goth g$ by means of: \Eq{\label{controlrep} \xi(q,\dot{q}) = B_q(\dot{q}) , \quad \quad (q,\dot{q}) \in T_qQ .} If we have now a Lie-Scheffers-Brockett system of the form, \Eq{\label{controlrepres} \dot{x} =\xi(q,\dot{q})_P (x) ,} we can consider it as a control system whose control variables are $u = (q, \dot{q}) \in TQ$. In local coordinates $(q^i,\dot{q}^i)$ on $TQ$, the map $\xi$ will have the form: $$\xi(q,\dot{q}) = \dot{q}^i B_i(q) = \dot{q}^i B_i^a(q) \xi_a ,$$ and if we describe the vector fields $X_a$ in local coordinates $x^\alphalpha$ on $P$ as $$ X_a = \xi_a^\alphalpha (x) \frac{\partial}{\partial x^\alphalpha} ,$$ we finally get for the control system above the following expression in local coordinates: $$ \dot{x}^\alphalpha = \dot{q}^i B_i^a(q) \xi_a^\alphalpha (x) .$$ Finally, if $L\colon P\times TQ\to \mathbb{R}$ is a function on $P\times TQ$, we can construct the objective functional \Eq{\label{objective} S_L[x,q] = \int_0^T L(x,q,\dot{q}) dt ,}defined on a suitable space of curves $\Omega$ on $P \times Q$. We restrict our attention to $G$-invariant objective functionals, i.e., those $S$ defined by densities $L\colon P \times TQ\to \mathbb{R}$ which are $G$-invariant functions on $P\times TQ$. Notice that $G$ acts on the left on $TQ$ by lifting the action on $Q$ to $TQ$. The quotient space $P\times TQ/G$ can be identified, using an auxiliary connection, with the pull-back to $TN$ of the bundle $P\times {\rm ad}Q$, where ${\rm ad} Q$ is the $\goth g$-algebra bundle adjoint of $Q\to N$ that can be defined as $Q\times_G \goth g = Q\times \goth g /G$ with natural local coordinates $(n^\alphalpha,\dot{n}^\alphalpha,\xi^a )$, where $n^\alphalpha$ are local coordinates on $N$ and $\xi^a$ on $\goth g$. Finally, fixing the endpoint conditions \Eq{\label{endpoint} x_0 = x(0), \quad x_T = x(T) ,}we will consider the optimal control problem on the state space $P$ with control equation (\ref{controlrepres}), objective functional (\ref{objective}), and the fixed endpoint conditions above (\ref{endpoint}). 
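Before passing to the extremals it may help to fix the notation with a minimal, admittedly degenerate, illustration; the choices below are purely illustrative and are not among the examples worked out in Section \ref{examples}. Take $G = (\mathbb{R}^r,+)$, let $Q = G$ (so that $N$ reduces to a point) with $B$ the Maurer--Cartan form, and let $P = \mathbb{R}^r$ with $G$ acting on the right by translations $x\cdot g = x - g$, so that, with the convention (\ref{xiP}), $X_a = \partial /\partial x^a$ and all the structure constants vanish. Then $B_i^a(q) = \delta_i^a$, $\xi_a^\alpha(x) = \delta_a^\alpha$, and the control equation (\ref{controlrepres}), together with the $G$-invariant objective functional defined by the density $L = \frac{1}{2}\delta_{ab}\,\dot{q}^a\dot{q}^b$, read simply $$ \dot{x}^a = \dot{q}^a , \quad \quad S_L[x,q] = \frac{1}{2}\int_0^T \delta_{ab}\,\dot{q}^a \dot{q}^b\, dt .$$ Since the constraint forces $x(t) - x_0 = q(t) - q(0)$, the extremals with the endpoint conditions (\ref{endpoint}) are the curves traversed with constant velocity $\dot{x} = (x_T - x_0)/T$, i.e., the straight lines joining $x_0$ and $x_T$; these are also the solutions of the free Lagrangian $L = \frac{1}{2}\delta_{ab}\,\dot{q}^a\dot{q}^b$ on $TQ$, in agreement with the correspondence established below (cf. Corollary \ref{LSBC_Thm}).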
Provided that the manifold $Q$ is boundaryless, it is well-known that if the curve $(x(t), q(t), \dot{q}(t))$ is a normal extremal of the optimal control problem (\ref{controlrepres}), (\ref{objective}) and (\ref{endpoint}), then there exists an extension $(x(t), p(t))$ of the state trajectory $x(t)$ to the costate space $T^*P$, satisfying Pontryagin equations: \begin{equation}\label{MPP} \dot{x} =\xi(q,\dot{q})_P (x), \quad \dot{p} = -\frac{\partial H_P}{\partial x}, \quad \frac{\partial H_P}{\partial q} = \frac{\partial H_P}{\partial \dot{q}} = 0 , \end{equation} where $H_P$ is Pontryagin's hamiltonian function, $$ H_P(x,p,q,\dot{q}) = \langle p, \xi(q,\dot{q})_P (x) \rangle - L(x, q,\dot{q}) .$$ In more geometrical terms, normal extremals are integral curves of the presymplectic system $(M_P,\Omega_P, H_P)$ where $M_P = T^*P\times TQ$ and $\Omega_P$ is the presymplectic form obtained by pull-back to $M_P$ of the canonical symplectic form $\omega_P$ on $T^*P$ (see for instance \cite{De03}, \cite{Ib10}). \section{Clebsch representation of Lie-Scheffers-Brockett optimal control systems}\label{Clebsch} Normal extremals of the optimal control problem above are critical paths of the objective functional (\ref{objective}) in an appropriate space of curves $\Omega (P \times Q ; x_0, x_T)$ subjected to the restriction imposed by eq. (\ref{controlrepres}). We will assume on this article that all curves and functions are smooth, which is consistent with the geometrical framework we are using and we introduce the constraints defined by the control equation (\ref{controlrep}) as Lagrange multipliers. In Appendix A we discuss and prove a version of Lagrange multipliers theorem which is suitable for this setting (see Thm. \ref{LMT} and Thm. \ref{smooth_LMT} in for details). Thus, if $(x(t), q(t))$ is a smooth extremal for the objective functional (\ref{objective}) satisfying the control equation (\ref{controlrep}), the lifted curve $(x(t),p(t))$ will be smooth and $(x(t), p(t), q(t))$ will be a critical path of the extended functional \Eq{\label{SL} S_{\mathbb{L}} [x,p,q]= \int_0^T \left( L(x,q,\dot{q}) + \langle p, \dot{x} - \xi(q,\dot{q})_P (x) \rangle \right) dt }in the space of smooth curves $\gamma(t) = (x(t),p(t),q(t)) \in \Omega_{x_0, x_T} (T^*P\times Q)$ with fixed endpoints $x_0$, $x_T$. A simple computation allows us to write the Lagrangian density $\mathbb{L}\colon T(T^*P) \times TQ\to \mathbb{R}$ of the functional (\ref{SL}), as \begin{eqnarray}\label{LClebsch} \mathbb{L} (x,p,\dot{x}, \dot{p}, q,\dot{q}) = L(x,q,\dot{q}) - \langle p, (B_q(\dot{q}))_P(x) \rangle + \langle p, \dot{x} \rangle =\\\nonumber= L(x,q,\dot{q}) + \langle J(x,p), B_q(\dot{q}) \rangle + \langle \theta_P(x,p), \dot{x}\rangle . \end{eqnarray} The map $J\colon T^*P \to {\goth g}^*$ in (\ref{LClebsch}) denotes the momentum map associated to the cotangent lifting of the action of $G$ to $P$, this is, $\langle J(x,p), \xi \rangle = \langle p, \xi_P(x) \rangle$, for all $\xi \in \goth g$, $(x,p)\in T^*P$. The 1-form $\theta_P$ denotes the canonical Liouville 1-form on $T^*P$ with the following expression in local coordinates $\theta_P = p_\alphalpha dx^\alphalpha$. We will show in the following sections that the expression on the r.h.s. of eq. (\ref{LClebsch}) has the form of a Clebsch Lagrangian. 
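Two remarks on eq. (\ref{LClebsch}) may be useful. First, substituting the local expressions of Section \ref{LSB} for $\xi(q,\dot{q})$ and $X_a$, the first equality in (\ref{LClebsch}) reads in coordinates $$ \mathbb{L}(x,p,\dot{x},\dot{p},q,\dot{q}) = L(x,q,\dot{q}) + p_\alpha\big(\dot{x}^\alpha - \dot{q}^i B_i^a(q)\,\xi_a^\alpha(x)\big) . $$ Second, comparing with the definition of Pontryagin's Hamiltonian given above, the same expression can be written as $\mathbb{L} = \langle p, \dot{x}\rangle - H_P(x,p,q,\dot{q})$, so that the extended functional (\ref{SL}) is nothing but the integral of the Poincar\'e--Cartan type expression associated with $H_P$.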
We summarize the discussion so far in the following statement: \begin{theorem}\label{thOpti} Let $x_0,x_T$ be two fixed points in $P$ and $q_0$ on $Q$, then the following assertions are equivalent: \begin{itemize} \item[i.-] The smooth curve $(x(t),q(t))$ is a critical point of the objective functional: $$ S_L [x,q] = \int_0^T L(x,q,\dot{q}) dt ,$$ on the space of smooth curves satisfying the equation $\dot{x}=\xi(q,\dot{q})_P (x)$, and $x(0) = x_0$, $x(T) = x_T$, $q(0) = q_0$. \item[ii.-] There exists a smooth lifting $p(t)$ of the curve $x(t)$ to $T^*P$ such that the curve $(x(t),p(t),q(t))$ is a critical point of the functional: $$ S_{\mathbb{L}}[x,p,q] = \int_0^T \left( L(x,q,\dot{q}) + \langle J(x,p), B_q(\dot{q}) \rangle + \langle \theta_P(x,p), \dot{x}\rangle \right) dt .$$ \end{itemize} \end{theorem} \begin{proof} We will consider the variational principle (i) as the problem of determining the critical points of the functional $S_L$ in the subspace of smooth curves on $P \times Q$ with fixed endpoints $x(0) = x_0$, $x(T) = x_T$ and $q(0) = q_0$ and satisfying the constraint $\dot{x}=\xi(q,\dot{q})_P (x)$. Now, because of the Lagrange multipliers theorem, (Appendix A, Thm. \ref{LMT}) and eq. (\ref{LClebsch}) that allows to write the Lagrange multiplier $\langle p, \dot{x} - (B_q(\dot{q}))_P(x) \rangle$ as the Lagrangian density in (ii), the critical points of (i) are the same as those of (ii). \end{proof} We will call the Lagrangian $\mathbb{L}$ the Clebsch Lagrangian for the Lie-Scheffers-Brockett optimal control problem stated in Section \ref{LSB}. \section{Spaces of horizontal curves on associated bundles}\label{horizontal_curves} One of the consequences of the previous discussion regarding solutions of a Lie-Scheffers-Brockett optimal control problem and critical points of a Clebsch Lagrangian is that the later is equivalent to a Lagrangian system with symmetry. It was established in \cite{Ce87b} that the critical points of a Lagrangian system with symmetry on $TQ$ are in one-to-one correspondence with the critical points of an appropriate Clebsch Lagrangian representation of the system. We will present here a similar result that is adapted to the setting used in this paper. Before giving a precise statement of the result we need to discuss some background material about spaces of horizontal curves. \subsection{The associated bundle $P\times_G Q$}\label{associated} As in previous sections $P$ will denote a right $G$-space whose space of orbits $P/G$ will be denoted by $M$. We will assume that $M$ is a smooth manifold and that the canonical projection map $\pi_1\colon P \to M$ is a submersion. Finally it will be assumed that $Q$ is a left $G$-principal bundle with base manifold $N$ and projection map $\pi_2\colon Q \to N$. Given the principal bundle $Q$ and the action of $G$ on $P$, we can construct the associated bundle over $N$ with fibre $P$ as follows: the group $G$ acts on $P\times Q$ on the left as $g\cdot (x,q) = (xg^{-1}, gq)$, for all $(x,q)\in P\times Q$, $g\in G$. Because $G$ acts freely on $Q$, then $G$ acts freely on $P\times Q$. We shall denote the equivalence class $[(x,q)]$ defined by the orbit of $G$ passing through the pair $(x,q)$ simply as $xq$, i.e., $$xq = \set{(xg^{-1},gq) \in P\times Q \mid g \in G } .$$ Notice that with this notation we have the ``associative'' property $(xg)q = x(gq)$ for all $x\in P$, $g\in G$ and $q \in Q$. The projection $\pi_2$ induces another projection $\pi_P\colon P\times_G Q \to N$, $\pi_P(xq) = \pi_2(q)$. 
Then it is clear that $P\times_G Q$ is a fiber bundle over $N$ with fiber $P$ and projection $\pi_P$. Given a point $q\in Q$, there is a natural map $i_q\colon P \to P\times_G Q$, defined by $i_q(x) = xq$ that maps $P$ into the fibre of $\pi_P$ over the point $\pi_2(q)$. Besides $\pi_P$ there is another natural projection $\pi_Q$ on $P\times_G Q$ induced by the projection $\pi_1\colon P\to M$, and defined by $\pi_Q(xq) = \pi_1(x)$. Again, for any $x\in P$, there is a natural map $i_x\colon Q\to P\times_G Q$ defined by $i_x(q) = xq$, that maps $Q$ into the fibre of $\pi_Q$ over the point $\pi_1(x)$. The diagramme below summarizes the spaces and projections introduced above. $$ { \bfig\xymatrix{ & P \times Q\alphar[dl]_{p_1}\alphar[dr]^{p_2}\alphar[d]_\Pi & \\ P\alphar[d]_{\pi_1} & P\times_G Q\alphar[dl]^{\pi_Q}\alphar[dr]_{\pi_P} & Q\alphar[d]^{\pi_2} \\ M & & N }\efig} $$ Tangent vectors $v\in T_{xq}(P\times_GQ)$ can be nicely described as follows. Let $x(t)q(t)$ be a curve such that $x(0)q(0)= x_0q_0$ and $d/dt(x(t)q(t))\mid_{t= 0} = v$. Then it is not difficult to show that $v = T_qi_x(\dot{q}) + T_xi_q(\dot{x})$. We will use a simplified and convenient notation for tangent vectors $v\in T_{xq} (P\times _G Q)$ following the conventions above, and we simply denote $x\dot{q} := T_qi_x(\dot{q})$ and $\dot{x}q := T_xi_q(\dot{x})$. With this notation we have: \begin{equation}\label{dot_xq} \dot{\overline{xq}} := \left.\frac{d}{dt} \Big(x(t)q(t) \Big) \right|_{t = 0} = x\dot{q}+ \dot{x}q. \end{equation} \subsection{Induced connections on the associated bundle $P\times_G Q$}\label{associated_connections} Recall that a principal connection $B$ on $Q$ is characterized by its vertical and horizontal spaces at $q\in Q$. They are denoted respectively by $V_q=\ker T_q\pi_2$, $H_q=\ker B_q$, and they provide a decomposition $T_qQ = H_q \oplus V_q$. Notice that the map $T_q\pi_2\colon H_q\to T_{\pi_2(q)}N$ is an isomorphism. We denote by $H(TQ)=\bigcup_{q\in Q}H_q$ and $V(TQ)=\bigcup_{q\in Q}V_q$ the corresponding invariant subbundles under the action of $G$ on $Q$. Then we have $TQ=H(TQ)\oplus V(TQ)$. The vertical and horizontal components of a vector $v\in T_qQ$ will be denoted by $V(v)$ or $v^V$, and $H(v)$ or $v^H$ respectively. By definition, $V(v)=B_q(v)_Q$ and $H(v)=v-B_q(v)_Q$. A tangent vector $v$ is called horizontal if its vertical component is zero; i.e., if $B_q(v)=0$. The vector $v$ is called vertical if its horizontal component is zero; i.e., $T_q\pi_2(v)=0$. A curve $q(t)$ will be said to be horizontal if $\dot{q}(t)$ is horizontal for all $t$. Hence, a curve $q(t)$ on $Q$ is horizontal if $B_{q(t)}(\dot{q}(t))=0$, for all $t$. Given a vector $X\in T_{\pi_2(q)}N$ the horizontal lift $X_q^H$ of $X$ at $q$ is the unique horizontal vector in $T_qQ$ such that $T_q\pi_2 (X_q^H) = X$. For any smooth curve $n(t)\in N$, $t\in [0,T]$ and $q_0 \in Q$ with $\pi_2 (q_0) = n_0 = n(0)$ we define its horizontal lift $n^h_{q_0}(t)$ as the unique horizontal curve projecting onto $n(t)$ and such that $n^h_{q_0}(0) = q_0$. Consider a smooth curve $q(t)\in Q$,$\ t\in\,[0,T]$. Then there is a unique horizontal curve $q^h(t)$ such that $q^h(t_0)=q(t_0)$ and $\pi_2(q^h(t))=\pi_2(q(t))$ for all $t\in[0,T]$. Therefore, there is a unique smooth curve $g(t)$, $t\in [0,T]$ in $G$ such that $q(t)=g(t)q^h(t)$. Also, notice that if we denote by $n(t)=\pi_2 (q(t))$ then $q^h(t) = n_{q_0}^h(t)$. 
Then $\dot{q}(t)=\dot{g}(t)q^h(t) + g(t)\dot{q}^h(t)$ where $\dot{g}(t)$ denotes the tangent vector on $G$ to the curve $g(t)$ at time $t$ and $\dot{g}(t)q^h(t)$ will denote the tangent vector $(\dot{g}(t))_Q(g(t)q^h(t))$. Similarly, $g(t)\dot{q}^h(t)$ will denote the horizontal tangent vector on $Q$ obtained by translating the horizontal tangent vector $\dot{q}^h(t)$ by the action of the element of the group $g(t)$. On the other hand we have the decomposition of the tangent vector \begin{equation}\label{hor_ver} \dot{q}(t)=\dot{q}^H(t) + \dot{q}^V(t); \quad \quad \dot{q}^H (t) = g(t) \dot{q}^h(t), \quad \dot{q}^V(t) = \dot{g}(t) q^h(t) . \end{equation} By definition of a horizontal vector, $B_{q(t)}(g(t)\dot{q}^h(t))=0$, thus $$B_{q(t)}(\dot{q}(t))=B_{q(t)}(\dot{g}(t)q^h(t)) =B_{q(t)}(\dot{g}(t)g^{-1}(t) q(t))=\dot{g}(t)g^{-1}(t) .$$ The principal connection $B$ on $Q$ induces a connection on any associated bundle by defining the horizontal space to be the space spanned by tangent vectors to all curves of the form $x_0q(t)$ where $q(t)$ is horizontal. Such curves will be called horizontal. This defines a distribution $H^P$ on $T(P\times_GQ)$, this is $v\in H^P_{xq}$ if there exists a horizontal curve $\gamma(t)=x(t)q(t)$, $\gamma(0) = xq$ on $P\times_GQ$ such that $v=\dot{\gamma}(0)$. Hence, because of the definition of the associated connection, horizontal vectors in $H^P$ will have the form $x \dot{q}^h$. Notice that if $\xi\in \goth g$ is an element on the Lie algebra of $G$, then for each $t$ we have: $$ (x\cdot \exp (t\xi ) )q = x(\exp(t\xi)\cdot q) . $$ Taking derivatives and using eq. (\ref{xiP}) we obtain, \Eq{\label{balance} x \xi_Q(q) = - \xi_P(x) q ,} where $\xi_Q(q) = d/dt (\exp (t\xi ) \cdot q)\mid_{t = 0}$. Given $x(t)$ and $q(t)$ curves in $P$ and $Q$ respectively and using eqs. (\ref{dot_xq}) and (\ref{hor_ver}) we have: $$\frac{d}{dt} (x(t)q(t)) =\dot{x}(t)q(t)+ x(t)g(t)\dot{q}^h(t) + x(t)B_{q(t)}(\dot{q}(t))_Q (q(t)) .$$ Then the tangent vector $\dot{\overline{x(t)q(t)}}$ will be horizontal if and only if $$x(t) B_{q(t)}(\dot{q}(t))_Q (q(t)) + \dot{x}(t)q(t) = 0.$$ Thus we can define a connection 1-form $B^P$ on the associated bundle $\pi_Q \colon P\times_G Q \to N$ with values in its vertical subbundle, $V(\pi_Q) = \ker T\pi_Q$, whose kernel is the horizontal distribution $H^P$ defined above, given by: \begin{equation}\label{connectionB} B_{xq}^P (\dot{\overline{xq}}) = x B_q(\dot{q})_Q(q) + \dot{x}q = (\dot{x} - B_q (\dot{q})_P(x))q , \end{equation} where we have used eq. (\ref{balance}) in the last equality of the previous formula. \subsection{Horizontal curves in $P\times Q$} We will denote the space of smooth curves in $P$ with fixed origin $x_0$ by $\Omega_{x_0}(P)$ and the space of smooth curves with fixed endpoints $x_0,x_T$ by $\Omega_{x_0,x_T}(P)$. Likewise the space of smooth curves in $Q$ with origin $q_0$ will be denoted by $\Omega_{q_0}(Q)$ and the space of smooth curves with fixed endpoints $q_0,q_T$ will be denoted by $\Omega_{q_0,q_T}(Q)$. All these spaces of curves define regular submanifolds of the Hilbert manifold of curves of Sobolev class $k\geq 1$ on $P$ or $Q$ respectively as it is discussed in Appendix B. As it was stated above it is clear that given a curve $q(t)$ in $Q$ there is a unique decomposition $q(t)=g(t)q^h(t)$, where $g(0)=e$ and $q^h(t)$ is horizontal with respect to the connection $B$, i.e., $B_{q(t)}(\dot{q}_h(t)) = 0$. 
Given a curve $q(t)\in \Omega_{q_0}(Q)$ and $x_0\in P$, there exists a unique curve denoted $x^h(t)$ in $\Omega_{x_0}(P)$ such that $x^h(t) q(t)$ is horizontal with respect to the induced connection $B^P$ on $P\times_G Q$ and satisfying $x^h(0)q(0) = x_0 q_0 $. It is easily seen that this curve is defined by $x^h(t)=x_0 g^{-1}(t)$, because $$ x^h(t)q(t) = x_0 g^{-1}(t)q(t) = x_0 {g}^{-1}(t)g(t)q^h(t)= x_0 q^h(t) ,$$ which is horizontal. The space of horizontal curves $x(t)q(t)$ in $P\times_GQ$ with respect to the affine connection $B^P$ with initial value $x_0q_0$ will be denoted as $\Omega^H_{x_0q_0}(P\times_GQ)$ as it is a regular submanifold of the space of curves $\Omega_{x_0q_0}(P\times_GQ)$ as it is discussed in Appendix B. Similarly we will denote by $\Omega^H_{x_0;q_0}(P\times Q)$ the set of smooth curves $(x(t),q(t))$ with domain $[0,T]$, $q(t)\in \Omega_{q_0}(Q)$, $x(t)$ with initial point $x_0$ and such that $x(t)q(t)$ is horizontal for all $t$. We will call such space the space of horizontal curves in $P\times Q$ with initial points $x_0, q_0$. Notice that if $(x(t), q(t)) \in \Omega^H_{x_0,q_0}(P\times Q)$, then the curve $(x(t)g(t)^{-1}, g(t)q(t))$ is horizontal too for any curve $g(t)$ on $G$ such that $g(0) = e$. Thus the natural projection $\Pi\colon \Omega^H_{x_0;q_0}(P\times Q) \to \Omega_{x_0q_0}^H(P\times_GQ)$ defined as $\Pi(x(t),q(t)) = x(t)q(t)$ is a principal fibration with structure group the group of smooth curves $\Omega_e(G)$ on $G$ starting at the neutral element $e$. Thus the assignment $q(t) \mapsto (x(t) = x_0g^{-1}(t), q(t))$ determines a one--to--one correspondence among the space $\Omega_{q_0}(Q)$ of smooth curves starting at $q_0$ and the space $\Omega^H_{x_0;q_0}(P\times Q)$ of horizontal curves above. Finally given a curve $(x(t), q(t))$ in $\Omega^H_{x_0;q_0}(P\times Q)$, we have the curves $n(t) = \pi_2(q(t))$ in $\Omega_{n_0}(N)$ and $x(t)$ in $\Omega_{x_0}(P)$. If the group $G$ acts transitively on $P$ it is easy to see that such correspondence is surjective because given $n(t)$ and $x(t)$ as above, we can define the curve $q(t) = g(t) n_{q_0}^h(t)$ where $x(t) = x_0g^{-1}(t)$. The curve $q(t)$ thus constructed is in $\Omega^H_{x_0;q_0}(P\times Q)$. Notice that $g(t)$ exists because of the transitivity of the action of $G$ on $P$, however it is not unique. This last requirement implies that $x(t)g(t) = x_0$, where $q(t) = g(t) q_h(t)$, and that $x_T = x(T) = x_0 g^{-1}(T) = x_0 g_T^{-1}$, where we are denoting $g(T) = g_T$. \section{Lin constraints and spaces of horizontal curves}\label{lin_cons} \subsection{Compatible end-point conditions} Given an initial condition $x_0 \in P$ for the control system (\ref{xiS}), we will denote by $x_0G$ the orbit of the group $G$ on $P$ passing through $x_0$, i.e., $x_0G = \set{x_0g \in P \mid g \in G}$. Given a smooth curve of controls $u(t)$, we denote as in eq. (\ref{xi_P}), by $\xi_P(t;u)$ the time-dependent vector field on $P$ defined by $\sum_{a = 1}^r u^a(t) X_a$ and by $x(t;u)$ the corresponding integral curve starting at $x_0$. The end-point $x_T= x(T;u)$ will lie in the orbit $x_0G$ for any finite $T$. In fact, there exists a (in general non-unique) smooth curve $\xi(t)$ on the Lie algebra $\goth g$ of $G$ such that $\xi(t)_P = \xi_P(t;u)$. The non-uniqueness of the choice of the lifted curve $\xi(t)_P$ depends on the isotropy algebra ${\goth g}_{x(t)}$ along the integral curves of the control vector field. 
Thus if we denote by $\mathfrak{g}_t$ the subalgebra of the Lie algebra $\mathfrak{g}$ generated by the elements $\xi$ such that $\xi_P = \xi_P(t;u)$ for the given time $t$, then the collection of all $\mathfrak{g}_t$, $0\leq t \leq T$, defines a trivial bundle over $[0,T]$ as well as the collection of all ${\goth g}_{x(t)}$, $0\leq t \leq T$, and all that it takes is to choose a smooth section of their quotient bundle. Then we integrate the differential equation $$ \dot{g}(t) = \xi(t)g(t)$$ on the group $G$ with initial condition $g(0) = e$ on the interval $[0,T]$. Denoting by $g(t;u)$ such integral curve, we then get that $x_T = x_0 g(T;u)$ proving that $x_T \in x_0G$. The integral curve $g(t;u)$ is given explicitly in terms of the chronological exponential map as $g(t;u) = \mathrm{Exp}(\int_0^T \xi(t) dt )$. (Notice that such integral exists on compact sets because the curve $\xi(t)$ is smooth, hence continuous.) The simple computation below shows that the curve $x(t) = x_0 g(t;u)$ is an integral curve of the control system (\ref{xiS}) with initial condition $x_0$, $$ \frac{d}{dt}{x(t)} = \frac{d}{dt}({x_0 g(t;u)}) =x_0\dot{g}(t;u)=x_0(\xi(t)g(t;u))=\xi_P(t;u)(x(t)) ,$$ with $x_0 g(0;u) = x_0e = x_0$. Thus for the endpoint $x_T$ to be accesible from $x_0$ it is necessary that $x_T\in x_0G$. If $x_T \in x_0G$, we will also say that the endpoints $x_0$ and $x_T$ are compatible. However not any point $x_T \in x_0G$ is accessible from $x_0$ for general $G$ even though for connected groups, if $x_0, x_T$ are compatible, then $x_T$ is accesible from $x_0$. In fact the following proposition can be proved easily: \begin{proposition} If $G$ is connected, any point $x_T\in x_0G$ is accesible from $x_0$. \end{proposition} \begin{proof} If the Lie group $G$ is connected then is arc--connected. Take now any smooth curve $g(t)$ such that $g^{-1}(0) = e$ and $g^{-1}(T) = g^{-1}_T$ where $x_T = x_0 g^{-1}_T $. Now the curve $x(t) = x_0 g^{-1}(t)$ satisfies that $x(0) = x_0$ and $x(T) = x_T$. Moreover,\\ $\dot{x}(t)q(t) = - x(t)\dot{g}(t) g^{-1}(t)q(t) =- x(t)B_{q(t)}(\dot{q}(t))_Q(q(t))=B_{q(t)}(\dot{q}(t))_P(x(t))q(t)$ where $q(t) = g(t) q_0 $. Consider then the vector field $\xi_P(t;u)) = B_{q(t)}(\dot{q}(t))_P(x)$ where $u(t) = (q(t), \dot{q}(t))$. Now $\xi(t) = B_{q(t)}(\dot{q}(t))$ and the proposition is proved. \end{proof} \subsection{Spaces of horizontal curves with fixed end--point conditions} Let us fix $x_0, x_T$ two compatible endpoints in $P$. Let $g_T \in G$ be such that $x_T = x_0g_T^{-1}$. If $G_{x_0}$ denotes the isotropy group of $x_0$, i.e., $ G_{x_0}= \set{h \in G \mid x_0h = x_0 }$, notice that for any $h \in G_{x_0}$, the group element $g_T ' = g_T h$ defines the same end-point $x_T = x_0 g_T'^{-1}$. Consider a point $q_0\in Q$. Thus if $q(t)$ is a curve on $Q$ such that $x(t)q(t)$ is horizontal with respect to the affine connection $B^P$, eq. (\ref{connectionB}), induced by the principal connection $B$ on $Q$, we have $x(t)q(t) = x_0 q^h(t)$. Thus $x_T q(T)=x_0 q^h(T)$, hence $x_0 g^{-1}_T q(T)=x_0 q^h(T)$ and $g^{-1}_T q(T) =g_0 q^h(T)$, $g_0 \in G_{x_0}$. Moreover if $g(t)$ is the curve on $G$ such that $q(t) = g(t) q^h(t)$, then $q(T) = g(T)q^h(T)$, and as $G$ acts freely on $Q$, we will conclude that $g(T) \in g_T G_{x_0}$. 
Thus given $x_0$, $x_T = x_0g_T^{-1}$, and denoting $q^h(T)$ by $q^h_T$ for the curve $q(t) \in \Omega_{q_0, g_T G_{x_0} q_T^h} (Q)$, i.e., such that $q(0) = q_0$ and $q(T) \in g_T G_{x_0} q_T^h$, we can associate to it a unique curve in $\Omega^H_{x_0,x_T;q_0}(P\times Q)$ by means of the natural correspondence $q(t) \mapsto (x(t),q(t))$, where $x(t) = x_0 g^{-1}(t)$ and $q(t) = g(t)q^h(t)$ as before. Then $x(0) = x_0 g^{-1}(0) = x_0$ and $x(T) = x_0 g^{-1}(T) =x_0 h^{-1}g_T^{-1}= x_0 g_T^{-1}= x_T$, $h\in G_{x_0}$. Finally, notice that the space of horizontal curves $\Omega^H_{x_0,x_T;q_0}(P\times Q)$ is a closed submanifold of the space of horizontal curves $\Omega^H_{x_0;q_0}(P\times Q)$ obtained as the level set $\mathrm{ev}_T^{-1}(x_T)$, of the evaluation map $\mathrm{ev}_T\colon \Omega^H_{x_0;q_0}(P\times Q) \to P$, $\mathrm{ev}_T (x,q) = x(T)$, The previous remarks and commentaries can be summarized in the following: \begin{proposition}\label{equivalence1} Let $Q$ be a left--principal $G$-bundle, $\pi_2\colon Q \to N$, with connection $B$ and $P$ a right $G$-space. Let $g_T\in G$, and two compatible points $x_0, x_T = x_0 g_T^{-1}$ in $P$, $q_0 \in Q$, $n_0 = \pi_2(q_0)$ and $n_T\in N$. Then there is a one-to-one correspondence among the space of parametrized smooth horizontal curves $(x(t), q(t)) \in \Omega^H_{x_0,x_T;q_0}(P\times Q)$ and the set of curves $q(t) \in \Omega_{q_0,g_T G_{x_0} q_T^h}(Q)$, where $q_T^h = n_{q_0}^h(T)$, $n_{q_0}^h(t)$ being the horizontal lifting starting at $q_0$ of $n(t)$, and $G_{x_0}$ the isotropy group of $x_0$. There is also a surjective correspondence among the set of horizontal curves $(x(t), q(t)) \in \Omega^H_{x_0,x_T;q_0}(P\times Q)$ and the set of curves $(x(t),n(t)) \in \Omega_{x_0,x_T}(x_0G)\times \Omega_{n_0,n_T}(N) $ where $x_0G \subset P$ is the $G$--orbit of $x_0$. If $G_{x_0} = e$, then the later correspondence is one--to--one too. \end{proposition} \begin{proof} The correspondence $\alphalpha \colon \Omega_{q_0,g_T G_{x_0} q_T^h}(Q) \to \Omega^H_{x_0,x_T;q_0}(P\times Q)$ is given by: \begin{equation} \alphalpha(q(t))= ( x(t), q(t)) , \end{equation} with $x(t) = x_0 g^{-1}(t)$ and $q(t) = g(t) q^h(t)$, and the correspondence $\beta\colon \Omega^H_{x_0,x_T;q_0}(P\times Q) \to \Omega_{x_0,x_T}(x_0G)\times \Omega_{n_0,n_T}(N)$, is explicitly given as: \begin{equation} \beta((x(t),q(t))) = (x(t),\pi_2(q(t))) . \end{equation} It is clear that the map $\alphalpha$ is bijective, the inverse given simply by $(x(t),q(t)) \mapsto q(t)$. From the definition is clear that $\beta$ is surjective. A right inverse of the map $\beta$ is given by $(x(t),n(t)) \mapsto q(t) = g(t)n^h_{q_0}(t)$, where $x(t) = x_0 g(t)$. Now, if $G_{x_0} = e$, then the curve $g(t)$ in $G$ is uniquely determined and the right inverse of $\beta$ is unique. \end{proof} \subsection{Lin constraints} Since $B^P\left(\frac{d}{dt}(x(t)q(t))\right)=0$ is equivalent to $x(t)q(t)$ being horizontal, it follows that $\Omega^H_{x_0,x_T;q_0}(P\times Q)$ is the subset of $\Omega_{x_0,x_T;q_0}(P\times Q)$ defined by the constraint $B^P(\frac{d}{dt}(x(t)q(t)))=0$. This constraint is called a Lin constraint \cite{Ce87a}. Now we will introduce the constraint in the variational principle using a Lagrange multiplier (see Apendixes A, B). 
Using the costate space $T^*P$ we will allow arbitrary variations of the curves along the vertical directions on $T^*P$ and the Lagrange multiplier will have the form: \begin{equation}\label{lin_constraint} \left\langle B^P\left(\frac{d}{dt}\left(x(t)q(t)\right)\right),p(t)q(t) \right\rangle \end{equation} with $p(t)$ being any lifting of the curve $x(t) \in \Omega_{x_0,x_T}(P)$. Notice that the action of $G$ on $P$ can be lifted naturally to an action of $G$ on $T^*P$, then we consider the quotient space $T^*P\times_G Q$ as in Section \ref{associated}. We take the curve $p(t)$ as a curve in $T^*P$ with endpoints lying on $\tau_P^{-1}(x_0)$ and $\tau_P^{-1}(x_T)$, this is $p(t)$ is a curve in $T^*P$ with free endpoints along the fibers of the canonical projection $\tau_P \colon T^*P \rightarrow P$ projecting over the points $x_0$ and $x_T$. Because of eq. (\ref{connectionB}) the horizontal Lin constraint eq. (\ref{lin_constraint}) can be written in terms of the canonical Liouville 1--form $\theta_P$ on $T^*P$ as \begin{eqnarray} \label{LC} \left\langle B^P \left(\frac{d}{dt}(xq)\right),pq \right\rangle &=& \left\langle (\dot{x} - B_q(\dot{q})_P(x))q,pq \right\rangle \nonumber \\ &=& -{\theta_P}{(x,p)} (\dot{x}, \dot{p}) - \langle J(x,p), B_q(\dot{q}) \rangle . \end{eqnarray} Finally, consider $L$ to be a $G$-invariant Lagrangian on $Q$, this is, $L(q,\dot{q})=L(gq,g\dot{q})$ with $(q,\dot{q}) \in TQ$ and $g\in G$. As indicated above, the action of $G$ on $P$ also permits us to define the associated bundle $T^*P\times_G Q$ over $N$ with fiber $T^*P$. We now define the Lagrangian $\mathbb{L}$ on $T(T^*P\times Q)$ by the formula: \begin{equation} \mathbb{L} (x, p, \dot{x},\dot{p}, q,\dot{q}) = L(q,\dot{q}) - \left\langle B^P\left(\frac{d}{dt}\left(xq\right)\right),pq \right\rangle , \label{LM1} \end{equation} or using the formulas previously obtained for $B^P$ we get: \begin{eqnarray} \mathbb{L} (x, p, \dot{x},\dot{p}, q,\dot{q}) & = & L(q,\dot{q})- \left\langle B^P\left(\frac{d}{dt}(xq)\right), pq \right\rangle \nonumber \\ \mbox{} & = & L(q,\dot{q}) + \langle J(x,p),B_q(\dot{q}) \rangle + \theta_P(x,p)(\dot{x},\dot{p}) \label{LM2} \end{eqnarray} which coincides with the Clebsh representation of the Lagrangian $L$ given in (\ref{LClebsch}). \section{The equivalence with Lagrangian systems with symmetry}\label{main} Now we can make precise the correspondence between critical points of $L$ and those of $\mathbb{L}$. The first result we will present is that the critical points of a Lagrangian system with symmetry $L$ can be obtained as critical points of the Clebsh Lagrangian representation of $L$ given above (\ref{LM2}). More precisely: \begin{theorem}\label{thClebsh} Let $g_T\in G$, $q_0 \in Q$, $x_0\in P$, $x_T = x_0 g_T^{-1} \in P$, and $q_T=g_T q_0 \in Q$. 
Then the following assertions are equivalent: \begin{itemize} \item[i.-] The smooth curve $q(t)\in \Omega_{q_0,g_T G_{x_0} q_T^h}(Q)$ is a critical point of the functional $S_L\colon \Omega_{q_0,g_T G_{x_0} q_T^h}(Q) \rightarrow {\mathbb{R}}$ defined by \begin{equation} \label{simpleL} S_L[q] = \int_0^T L(q,\dot{q}) dt \end{equation} \item[ii.-] There is a smooth curve $(x(t), p(t))\in \Omega_{x_0,x_T}(T^*P)$ such that the curve $(x(t), p(t), q(t))$ is a critical point of the functional $S_{\mathbb{L}} \colon \Omega_{x_0,x_T;q_0} (T^*P \times Q) \to \mathbb{R}$ defined by: \begin{eqnarray} \label{VP2} S_{\mathbb{L}} [x,p,q] &=& \int_0^T \mathbb{L} (x(t),p(t), \dot{x}(t), \dot{p}(t), q(t), \dot{q}(t)) dt \nonumber \\ \mbox{} & = & \int_0^T(L(q,\dot{q}) + \langle J(x,p),B_q(\dot{q}) \rangle + {\theta_P}{(x,p)}(\dot{x},\dot{p}))dt . \end{eqnarray} \end{itemize} \end{theorem} \begin{proof} Because of Prop. \ref{equivalence1} there is a one-to-one correspondence $\alphalpha$ among the space of curves $\Omega_{q_0,g_T G_{x_0} q^h_T}(Q) $ and the space of horizontal curves $\Omega^H_{x_0,x_T;q_0}(P\times Q)$. Thus we may think that $L$ is defined on the space of horizontal curves $\Omega^H_{x_0,x_T;q_0}(P\times Q)$. Now because of Thm. \ref{smooth_LMT} (Appendix B) a curve $(x(t), q(t)) \in \Omega^H_{x_0,x_T;q_0}(P\times Q)$ is a critical point of $S_L$ iff there exists a smooth lifting $(x(t),p(t), q(t))$ of this curve that is a critical point of the extended functional defined by the Lagrangian (\ref{LM1}) that due to (\ref{LM2}) is the same as the functional (\ref{VP2}). Conversely, if the curve $(x(t),p(t), q(t))$ is a critical point of the functional $S_\mathbb{L}$ defined in (\ref{VP2}), then because of eq. (\ref{LC}), $S_\mathbb{L}$ is just $S_L$ plus the Lagrange multiplier determined by the submanifold of horizontal curves $\Omega^H_{x_0,x_T;q_0}(P\times Q)$ inside the total space of curves $\Omega_{x_0,x_T;q_0} (T^*P \times Q)$. Thus because of the Lagrange multipliers theorem \ref{LMT} (Appendix A) we conclude that $q(t)$ is a critical point of $S_L$ in the space of curves $\Omega_{q_0,g_T G_{x_0} q^h_T}(Q) $. \end{proof} And now, collecting the results obtained in Thm. \ref{thOpti} and Thm. \ref{thClebsh} above, we can state the following relation among the extremals of a Lie-Scheffers-Brockett optimal control problem, the solutions of the corresponding Lagrangian system with symmetry and the critical points of the Clebsh Lagrangian associated to it. \begin{corollary}\label{LSBC_Thm} Given $g_T\in G$, $q_0 \in Q$, $x_0\in P$, $q_T=g_T q_0 \in Q$, and $x_T = x_0 g_T^{-1} \in P$. Then the following assertions are equivalent: \begin{itemize} \item[i.-] (Lie-Scheffers-Brockett Optimal control system.) The smooth curve $(x(t),q(t))$ in the subspace $\Omega_{x_0,x_T;q_0} (P\times Q)$ defined by the equation $\dot{x} = \xi(q,\dot{q})(x)$, is a critical point of the objective functional: $$ S [q] = \int_0^T L(q,\dot{q}) dt .$$ \item[i.-] (Invariant Lagrangian system.) The smooth curve $q(t)\in \Omega_{q_0,g_T G_{x_0} q^h_T}(Q)$ is a critical point of the functional $S_L: \Omega_{q_0,g_T G_{x_0} q^h_T}(Q) \rightarrow {\mathbb{R}}$ defined by \begin{equation} S_L[q] = \int_0^T L(q,\dot{q}) dt . \end{equation} \item[ii.-] (Clebsch Lagrangian system.) 
There is a smooth curve $(x(t), p(t))\in \Omega_{x_0,x_T}(T^*P)$ such that the smooth curve $(x(t), p(t), q(t))$ is a critical point of the functional $S_{\mathbb{L}} \colon \Omega_{x_0,x_T;q_0} (T^*P \times Q) \to \mathbb{R}$ defined by: \begin{eqnarray} S_{\mathbb{L}} [x,p,q] &=& \int_0^T \mathbb{L} (x(t),p(t), \dot{x}(t), \dot{p}(t), q(t), \dot{q}(t)) dt \nonumber \\ \mbox{} & = & \int_0^T (L(q,\dot{q}) + \langle J(x,p),B_q(\dot{q}) \rangle + {\theta_P}{(x,p)}(\dot{x},\dot{p}))dt . \end{eqnarray} \end{itemize} \end{corollary} \section{Some applications and examples}\label{examples} \subsection{Euler rigid body equations}\label{ejem1} \subsubsection{The group $SO(3)$} We will discuss first the simple case of Euler's equation for the rigid body as an optimal control problem that served as a guiding example for the discussion in \cite{Bl00}. Let $G = SO(3)$ be the rotation group. As a principal fibre bundle $Q$ we will consider the group $SO(3)$ acting on the left on itself. The state space $P$ will be the Lie group $SO(3)$ again, but this time acting on itself on the right. The control space for the optimal control representation problem will be the tangent bundle $TQ = TSO(3) \cong_L SO(3) \times {\goth {so}}(3)$ where the subscript $L$ indicates that we have used left translations on $SO(3)$ for the identification. The Lagrangian density is the kinetic energy $L(q,\omega) = 1/2 \langle \omega, J(\omega)\rangle$, $(q,\omega) \in SO(3) \times {\goth {so}}(3)$ defined by a right-invariant metric $J$ on $SO(3)$. The connection $B$ on $Q = SO(3)$ will be simply the canonical right-invariant Maurer-Cartan 1-form (that in the left-invariant representation of $TSO(3)$ above is the identity matrix). Thus, computing the vector field $B(\omega)$ on $P$ at $x$, we get $$(B(\omega))_P(x) = d/dt(x \exp (-tB(\omega)))\mid_{t = 0}=x \omega ,$$ and the control equation (\ref{controlrepres}) will be: $$\dot x = (B(\omega))_P(x) = x \omega.$$ Pontryagin's Hamiltonian will take the form: $$ H(q,\omega, x, p) = \langle p, x \omega \rangle - \frac{1}{2} \langle \omega, J(\omega)\rangle ,$$ where $(q,\omega)\in TQ = TSO(3) \cong SO(3) \times \mathfrak{so}(3)$ and $(x,p)\in T^*P\cong TP \cong_R SO(3) \times \mathfrak{so}(3)$ and Pontryagin's Hamilton equations (\ref{MPP}) are then $$ \dot x = x \omega , \quad \quad \dot p = p \omega,$$ together with the constraint condition: $$x^Tp-p^Tx - J \omega = 0 .$$ Finally, Euler--Lagrange equations for the Lagrangian $L$ on $TSO(3)$ are easily obtained as: $$ \dot{g} = g \omega, \quad J\dot{\omega} = [J \omega , \omega] . $$ \subsubsection{The group $SU(2)$} Now we consider as before $G=Q=P=SU(2)$ where we have replaced the orthogonal group $SO(3)$ by its universal cover, the special unitary group $SU(2)$. The Lie group $$ SU(2) = \set{ g = \matriz{rr}{a & b \\ -\bar{b} & \bar{a}} \mid a,b \in \mathbb{C}, \quad a\bar{a} +b\bar{b} = 1 } $$ has Lie algebra $$ {\goth {su}}(2) = \set{\xi_+ e_+ + \xi_- e_- + \xi_0 e_0 \mid \xi_+, \xi_-, \xi_0 \in \mathbb{R} } ,$$ with $$ e_+ = \matriz{rr}{0 & 1 \\ -1 & 0}, \quad e_- = \matriz{rr}{0 & i \\ i & 0}, \quad e_0 = \matriz{rr}{i & 0 \\ 0 & -i}.$$ We shall identify $TQ$ with $SU(2) \times {\goth {su}}(2)$ by left translations, i.e., $(g, \xi ) \in SU(2) \times {\goth {su}}(2)$, corresponds to the element $(g,\dot{g}) \in TSU(2)$, with $$ \dot{g} = g \xi.$$ The vector field $\xi_P (x)$ takes the form $\xi_P (x) =x \xi$. 
Hence the control equation becomes $$ \dot{x} =x \xi.$$ The objective functional is \Eq{\label{objcontrol4} S = \frac {1}{4} \int_0^T \langle \xi, J(\xi) \rangle dt ,} where we use the Killing form $\langle \xi, \zeta \rangle = 4\mathbb{T}r (\xi \zeta )$. The Euler-Lagrange equations for the Lagrangian \\ \Eq{L(g, \dot{g})=\frac{1}{2} \langle g^{-1}\dot{g},J( g^{-1}\dot{g})\rangle} are given again by: \Eq{\dot{g} = g \xi, \quad \quad J (\dot{\xi}) = [J(\xi), \xi].} The equations of motion given by Pontryagin maximum principle are: $$ \dot{x} =x \xi , \quad \dot{p} =p \xi ,$$ the relation among both sets of equation is given by: $$x^\alphast p-p^\alphast x - J \xi = 0,$$ and taking derivatives with respect to $t$ we obtain $$ \quad J\dot{\xi} = [J \xi ,\xi].$$ \subsection{Riccati equations} \subsubsection{ The group $G = SL(2,\mathbb{R})$ }\label{ricatti_sl2} We will consider now the group $G = SL(2,\mathbb{R})$ and as with the rigid body example, we will consider the control space $Q$ the group $SL (2,\mathbb{R})$ itself. To keep the conventions held along the paper we will consider the group $G$ acting on the left on $Q$ by left multiplication. The Lie group $$ SL (2,\mathbb{R} ) = \set{ g = \matriz{cc}{a & b \\ c & d} \mid a,b,c,d \in \mathbb{R}, \quad ad -bc = 1 } $$ has Lie algebra $$ {\goth {sl}}(2, \mathbb{R} ) = \set{\xi_+ e_+ + \xi_- e_- + \xi_0 e_0 \mid \xi_+, \xi_-, \xi_0 \in \mathbb{R} } ,$$ with $$ e_+ = \matriz{cc}{0 & 1 \\ 0 & 0}, \quad e_- = \matriz{cc}{0 & 0 \\ 1 & 0}, \quad e_0 = \matriz{cc}{1 & 0 \\ 0 & -1} ,$$ and nonzero commutation relations, $$ [e_+, e_-] = e_0, \quad [e_+ , e_0] = - 2 e_+, \quad [e_-,e_0 ] = 2e_- .$$ We shall identify $TQ$ with $SL(2, \mathbb{R} ) \times {\goth {sl}}(2, \mathbb{R} )$ by left translations, i.e., $(g, \xi ) \in SL(2, \mathbb{R} ) \times {\goth {sl}}(2, \mathbb{R} )$, corresponds to the element $(g,\dot{g}) \in TSL(2,\mathbb{R} )$, with $$ \dot{g} = g \xi .$$ Now we will consider the state space $P = \mathbb{R}$ and $SL(2,\mathbb{R} )$ acting on it by Moebius transformations, i.e., $$ x\cdot g^{-1} = \frac{dx - b}{-cx + a}, \quad \quad g = \matriz{cc}{a & b \\ c & d} \in SL (2, \mathbb{R} ) .$$ If $B$ is a diagonal $\mathfrak{sl}(2,\mathbb{R})$--valued equivariant 1-form (a slight generalization of a principal connection), i.e., $B = \textrm{diag} (B_+,B_-, B_0)$ in the basis $e_+,e_-, e_0$, then the vector field $\xi_P (x) = B_g (\dot{g})_P(x)$ takes the form: $$ \xi_P(x) = (\xi_+ B_+ + 2\xi_0 B_0 x - \xi_- B_- x^2)\frac{\partial}{\partial x}.$$ Hence the control equation becomes the Riccati equation: \Eq{\label{xcontrol} \dot{x} = a(t)x^2+b(t)x+c(t) ,} with $a(t) = -\xi_- B_- $, $b(t) = 2\xi_0 B_0 $ and $c(t)= \xi_+ B_+$. To define the optimal control problem we will consider again the objective functional \Eq{\label{objcontrol2} S = \int_0^T \frac{1}{2} \langle \xi, I \xi \rangle dt ,} where we use the Killing--Cartan form $\langle \xi, \zeta \rangle = 4\mathbb{T}r (\xi^T \zeta )$. The results discussed along the paper show that there is a well defined relation among the solutions of the optimal control problem given by eqs. (\ref{xcontrol}), (\ref{objcontrol2}) and the critical paths of the Lagrangian system \Eq{\label{lagsl2} L(g, \dot{g}) = \frac{1}{2} \langle g^{-1}\dot{g},I g^{-1}\dot{g} \rangle } on $TSL(2,\mathbb{R} )$. 
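Note in passing that, whatever the controls $a(t)$, $b(t)$, $c(t)$, eq. (\ref{xcontrol}) is the classical Riccati equation, the best known instance of a Lie--Scheffers system recalled in the Introduction: its solutions are related by time-dependent Moebius transformations, hence any four solutions have constant cross ratio, and the general solution is recovered from three particular solutions $x_1(t)$, $x_2(t)$, $x_3(t)$ and a constant $k$ through the nonlinear superposition rule $$ \frac{(x - x_1)(x_2 - x_3)}{(x - x_3)(x_2 - x_1)} = k . $$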
The Euler-Lagrange equations for the Lagrangian (\ref{lagsl2}) are given by: \Eq{\label{ecelsl2r} \dot{g} = g \xi , \quad I \dot{\xi} = [I \xi, \xi] .} The equations of motion resulting after applying Pontryagin's maximum principle to the Hamiltonian $H(x, p; g, \dot{g})= \langle p, a x^2+b x+c \rangle - \frac{1}{2} \langle \xi, I \xi \rangle$ are: \Eq{\label{hamsl2r} \dot{x} = \dfrac{\partial H}{\partial p} = a x^2+ b x + c, \quad \dot{p} =-\dfrac{\partial H}{\partial x}= - (2a x+b) p ,} and the optimal feedback relations are given by: \begin{eqnarray}\label{feedback} 0 &=& \dfrac{\partial H}{\partial \xi_+}=B_+ p -I_+ \xi_+, \nonumber \\ 0 &=& \dfrac{\partial H}{\partial \xi_-}=-B_- p x^2- I_- \xi_-, \nonumber \\ 0 &=& \dfrac{\partial H}{\partial \xi_0}=2B_0 p x-I_0 \xi_0. \end{eqnarray} Substituting in (\ref{hamsl2r}) the values of $\xi_+, \xi_-, \xi_0$ obtained from (\ref{feedback}) we get the following nonlinear Hamiltonian equations: $$\dot{x}=\Big[\dfrac{B_+^2}{I_+}x^4 + \dfrac{4 B_0^2}{I_0} x^2 + \dfrac{B_-^2}{I_-}\Big]~p, \quad \dot{p}=-\Big[\dfrac{2 B_+^2}{I_+} + \dfrac{4 B_0^2}{I_0}\Big]~p^2 x,$$ Thanks to the relation of reciprocity seen in this article, we can find solutions to them through Euler-Lagrange equations (\ref{ecelsl2r}) that take the form of a hyperbolic rigid body: \Eq{\label{eulagsl2r2}\left\{\begin{array}{ll} \dot{\xi}_+=\dfrac{2(I_0-I_+)}{I_+}~ \xi_0 ~\xi_ +,\\\\ \dot{\xi}_-=-\dfrac{2(I_0-I_-)}{I_+}~ \xi_0 ~\xi_ -,\\\\ \dot{\xi}_0=\dfrac{(I_+-I_-)}{I_0}~ \xi_+ ~\xi_-.\end{array}\right.} We can solve these equations easily in the symmetric case, i.e., $I_+=I_-=I$. Thus, $\dot{\xi}_0=0$. Let $\alphalpha$ be the quantity $\alphalpha = 2 \xi_0 (I_0 -I)/I$. Solving the equations (\ref{eulagsl2r2}), we obtain the analogous solution to the top precession in the hyperbolic case, which is written as: $$\xi_+= \xi_{+0}~e^{\alphalpha t}, \xi_-= \xi_{-0}~e ^{-\alphalpha t}.$$ Finally substituting in (\ref{feedback}) again we obtain the solutions we were looking for: $$x(t)=\dfrac{C_0}{2 C_+}~e^{-\alphalpha t}, \quad p(t)=2 C_+ ~e ^{\alphalpha t},$$ where $C_0=\dfrac{I_0 \xi_{0}}{2 B_0}$ and $C_+ =\dfrac{I \xi_{+ 0}}{2 B_+}$. \subsubsection{The group $SU(2)$ } If we substitute $SL(2, \mathbb{R} )$ by $SU(2)$ in the previous example, Section \ref{ricatti_sl2}, the control equation becomes: \Eq{\label{xcontrol2} \dot{x} =a(t)x^2+b(t)x+c(t),} with $a(t) = \xi_+ B_+- i\xi_- B_- $, $b(t) = 2 i \xi_0 B_0 $ and $c(t)= \xi_+ B_++i\xi_- B_- $. The Euler-Lagrange equations for the Lagrangian (\ref{lagsl2}), using now the Killing--Cartan form of $SU(2)$, are given by: $$ \dot{g} = g \xi , \quad \dot{J}\xi = [J \xi ,\xi],$$ with $$J(\xi)=I\xi+\xi I.$$ Repeating again the procedure above we obtain the following solutions for the symmetric case: $$x(t)=\dfrac{C_0}{ C_-e^{-\alphalpha t}+i C_+ e^{\alphalpha t}}, \quad p(t)=C_+ ~e ^{\alphalpha t}-i C_ - e^{-\alphalpha t},$$ where $C_0=\dfrac{I_0 \xi_{0}}{2 B_0}$, $C_- =\dfrac{I \xi_{- 0}}{2 B_-}$, $C_+ =\dfrac{I \xi_{+ 0}}{2 B_+}$ and $\alphalpha$ as above. A similar instance of this correspondence was discussed in the context of quantum optimal control in \cite{Ib08}. 
\subsubsection{The group $SO(2,1)$} If we substitute now $SL(2, \mathbb{R} )$ by the group $SO(2,1)$ whose Lie algebra is given by: $${\goth {so}}(2,1) = \set{\xi_+ e_+ + \xi_- e_- + \xi_0 e_0 \mid \xi_+, \xi_-, \xi_0 \in \mathbb{C} },$$ with $$ e_+ = \matriz{rr}{i & 0 \\ 0&-i}, \quad e_- = \matriz{rr}{0 & i \\-i & 0}, \quad e_0 = \matriz{rr}{0 &-1\\ -1 & 0} ,$$ the control equation becomes: \Eq{\label{xcontrol3} \dot{x} =a(t)x^2+b(t)x+c(t),} with $a(t) = \xi_0 B_0+i\xi_- B_- $, $b(t) = 2 i \xi_+ B_+ $ and $c(t)= -\xi_0 B_0+i\xi_- B_-$. The Euler-Lagrange equations for the Lagrangian (\ref{lagsl2}) are given by: $$ \dot{g} = g \xi , \quad I \dot{\xi} = [ \xi^\alphast,I\xi] . $$ Following the same procedure we obtain in the symmetric case the solutions: $$x(t)=\dfrac{C_+ e^{\alphalpha t}}{ C_-e^{-\alphalpha t}-i C_0}, \quad p(t)=-i C_-e^{-\alphalpha t}- C_0,$$ where $C_0=\dfrac{I_0 \xi_{0}}{2 B_0}$, $C_- =\dfrac{I \xi_{- 0}}{2 B_-}$ and $C_+ =\dfrac{I \xi_{+ 0}}{2 B_+}$. \section*{Appendix A. A Lagrange's multiplier theorem} In this appendix we will prove a version of Lagrange's multiplier theorem which is suitable for the purposes of the paper. \begin{theorem}\label{LMT} Let $E$ be a vector bundle with hermitian connection $\nabla$ and standard fiber the Hilbert space $\mathcal{H}$ over a smooth Hilbertian manifold $M$. Let $s\in \Gamma(E)$ be a smooth section of $E$ transverse to the zero section of $E$ and $W$ an open subset in the zero set of the section $s$, $Z_s = \{ x \in M \mid s(x) = 0 \}$. Let $f\colon M \to \mathbb{R}$ be a $C^1$-function and $f_W\colon W \to \mathbb{R}$ the restriction of $f$ to $W$. Then they are equivalent: \begin{itemize} \item[i.-] The point $x_0 \in W$ is a critical point of $f_W$. \item[ii.-] There exists $\alphalpha_0 \in E_{x_0}^*$ such that the point $(x_0,\alphalpha_0) \in E^*$ is a critical point of the function $F\colon E^* \to \mathbb{R}$ given by: $$F(x,\alphalpha) = f(x) + \langle \alphalpha, s(x)\rangle .$$ \end{itemize} \end{theorem} The bundle $E^*$ is the dual bundle of $E$. The fiber $E_x^*$ at each point $x$ of $M$ is the topological dual of the Hilbert space $E_x \cong \mathcal{H}$ and because of Riesz theorem, it can be naturally identified with $E_x$. In this sense $E^* \cong E$. The function $F$ in the statement of the theorem above can be written alternatively as $$F = \pi^* f + P_s ,$$ where $\pi\colon E\to M$ is the bundle projection and $P_s$ denoting the linear function along the fibres of $E$ induced by the section $s$, $P_s(x,\alphalpha) = \langle \alphalpha, s(x)\rangle$. Proof. Because $s$ is transverse to the zero section of $E$, $Z_s$ is a smooth submanifold of $M$. Since $W$ is an open subset of $Z_s$, it is a smooth submanifold of $M$. Moreover, $T_xZ_s = \ker \nabla s(x)$, $x\in Z_s$. If $x \in W$, because $W$ is open in $s^{-1}(0)$, then we have $T_xW = T_xs^{-1}(0) = \ker \nabla s(x)$. Let us consider now a point $x_0 \in W$ which is a critical point of $f_W$. Then, $df_W(x_0) = 0$, i.e., $df(x_0)(U) = 0$ for all $U\in T_xW$, hence $df(x_0) \in (\ker \nabla s(x_0))^0$. If we compute now the differential of the function $F$ we obtain, $$dF(x,\alphalpha)(X) = df(x)(\pi_*X) + d\langle \alphalpha, s(x)\rangle (X) = df(x)(U) + \langle X^V, s(x)\rangle + \langle \alphalpha , \nabla_{U} s(x) \rangle ,$$ where the tangent vector $X \in T_{(x,\alphalpha)}E$ is decomposed into its horizontal and vertical components $X = X^H + X^V$ with respect to the connection $\nabla$, and $\pi_*(X) = U \in T_xM$. 
If $x_0\in W$, then $s(x_0 ) = 0$, and we get: $$dF(x_0,\alpha)(X) = df(x_0)(U) + \langle \alpha , \nabla_U s(x_0) \rangle .$$ Because of the Fredholm alternative theorem, the equation $\langle \alpha , \nabla s(x_0) \rangle = - df(x_0)$ has a solution $\alpha$ iff $df(x_0) \in (\ker \nabla s(x_0))^0$. Conversely, if $(x_0,\alpha_0) \in E^*$ is a critical point of the function $F$ then, as $dF(x_0,\alpha_0)=0$ and $x_0\in W$, we get: $$df(x_0) + \langle \alpha_0 , \nabla s(x_0) \rangle = 0 .$$ Again, the equation $\langle \alpha , \nabla s(x_0) \rangle = - df(x_0)$ has a solution if and only if $df(x_0) \in (\ker \nabla s(x_0))^0$, and this implies that $df_W(x_0) = 0$. $\Box$ \section*{Appendix B. Lagrange's multiplier theorem on spaces of horizontal curves and optimal control} Using the Lagrange multiplier theorem, Thm. \ref{LMT}, as stated in Appendix A in the context of this paper requires setting up an adequate framework. We will use for this purpose Klingenberg's analytic setting for functionals in spaces of curves \cite{Kl78}. We shall consider the spaces of curves $\gamma(t) = (x(t),q(t))$, $t \in I = [0,T]$, of Sobolev class $k$, $k\geq 1$, on the space $P\times Q$. Such a space will be denoted by $H^k([0,T],P\times Q)$ \cite{Kl78} and is a paracompact Hilbert manifold modelled on the Hilbert space $H^k([0,T], \mathbb{R}^{n+m})$, $n = \hbox{{\rm dim}}\: P$, $m= \hbox{{\rm dim}}\: Q$, of maps $\gamma\colon [0,T] \to \mathbb{R}^{n+m}$ in $L^2([0,T], \mathbb{R}^{n+m})$ possessing weak derivatives $\gamma^{(l)}(t)$, $0\leq l \leq k$, in $L^2([0,T], \mathbb{R}^{n+m})$. The tangent space to $H^k([0,T], P\times Q)$ at the curve $\gamma (t) = (x(t), q(t))$ is given by the sections of Sobolev class $k$ of the pull-back bundle $\gamma^*(TP\times TQ)$. Because the bundle $\gamma^*(TP\times TQ)$ over $[0,T]$ is trivial, this space of sections can be identified with the Hilbert space of Sobolev maps $H^k([0,T], \mathbb{R}^{n+m})$. We shall denote by $T(H^k([0,T], P\times Q))$ the tangent bundle thus constructed on this space of curves. Moreover we can consider at each map $\gamma$ the Hilbert space of sections of Sobolev class $l$, $0\leq l \leq k$, of the pull-back bundle $\gamma^*(TP\times TQ)$. The collection of such spaces defines a vector bundle over $H^k([0,T], P\times Q)$ whose fiber is given by $H^l([0,T], \mathbb{R}^{n+m})$. We shall denote such a vector bundle as $T^{(l)} (H^k([0,T], P\times Q))$. Various endpoint conditions for the curves we are considering define Hilbert submanifolds of the Hilbert manifold $H^k([0,T], P\times Q)$. For instance, given $x_0,x_T\in P$, the endpoint conditions considered along the paper, $x(0) = x_0$, $x(T) = x_T$, define a Hilbert submanifold of $H^k([0,T], P\times Q)$ that will be denoted in what follows as $\mathcal{P}_{x_0,x_T}^k$, $k \geq 1$. Thus: $$\mathcal{P}_{x_0,x_T}^k = \set{(x(t),q(t)) \in H^k(P\times Q) \mid x(0) = x_0, x(T) = x_T} .$$ The tangent space to $\mathcal{P}_{x_0,x_T}^k$ is given by the Hilbert subspace, $$T_\gamma \mathcal{P}_{x_0,x_T}^k = \set{ (\delta x(t),\delta q(t)) \in H^k(\gamma^*(TP\times TQ)) = T_\gamma H^k(P\times Q) \mid \delta x(0) = \delta x(T) =0}.$$ We have defined in this way the tangent bundle $T\mathcal{P}_{x_0,x_T}^k$. In a similar way, we can consider for each curve $\gamma\in \mathcal{P}^k$ the set of sections of Sobolev class $l$, $l\leq k$, of the bundle $\gamma^*(TP\times TQ)$ vanishing at the endpoints $x_0, x_T$.
The total space of such sections defines another vector bundle $T^{(l)}\mathcal{P}_{x_0,x_T}^k \to \mathcal{P}_{x_0,x_T}^k$. Notice that we can also consider the bundle $T_{\mathcal{P}_{x_0,x_T}^k}^{(l)} (H^k([0,T], P\times Q))$ which is the restriction to $\mathcal{P}_{x_0,x_T}^k$ of the tangent bundle $T^{(l)} (H^k([0,T], P\times Q))$. Thus for instance, the map $\gamma \mapsto \dot{\gamma}$ is a section of $T_{\mathcal{P}^k}^{(k-1)} (H^k([0,T], P\times Q))$. In order to apply this formalism to the optimal control problem discussed in the body of the paper, we will consider the Hilbert manifold $M := \mathcal{P}_{x_0,x_T}^k$ of curves of Sobolev class $k\geq 1$ on $P\times Q$ with fixed endpoints $x_0, x_T\in P$. We shall consider too the Hilbert manifold of curves of Sobolev class $k\geq 1$ on the quotient space $P\times_G Q$ discussed along the paper (see Section \ref{associated}). We have, as in the previous discussion, the tangent bundle $T^{(k-1)} H^k([0,T], P\times_G Q)$. The natural projection $\Pi\colon P\times Q \to P\times_G Q$ induces a projection on the corresponding spaces of curves that will be denoted with the same letter $\Pi \colon \mathcal{P}_{x_0,x_T}^k \to H^k ([0,T], P\times_G Q)$. We shall denote by $E$ the pull-back of the bundle $T^{(k-1)} H^k([0,T], P\times_G Q)$ to $\mathcal{P}_{x_0,x_T}^k$ along the map $\Pi$. Notice that the fiber of $E\to \mathcal{P}_{x_0,x_T}^k$ at $\gamma = (x,q)$ is the Hilbert space of $H^k$ sections of the bundle $(xq)^*(T(P \times_G Q))$. Moreover, the Hilbert bundle $E$ always admits an hermitian connection $\nabla$ \cite{La85}. Such connection can also be explicitly constructed from a canonical global metric defined on $E$ but we will not insist on these aspects here. Consider now the section $s$ of the bundle $E$ defined by the map: \begin{equation}\label{sBP} s(\gamma)(t) = B^P \left( \frac{d}{dt} \big( x(t)q(t) \big) \right), \end{equation} where $\gamma(t) = (x(t), q(t))$, $B$ is a principal connection on the principal bundle $\pi_2\colon Q \to N$, and $B^P$ the connection associated to $B$ on the bundle $\pi_Q\colon P\times_G Q \to N$ (see Section \ref{associated_connections}). Clearly the section $s$ is smooth and transverse to the zero section of $E$. Notice that the tangent space at a zero section point $\gamma$ of $E$ can be written as $T_\gamma E = T_\gamma^{(k-1)}\mathcal{P}_{x_0,x_T}^k \oplus V_\gamma(E)$, where $V_\gamma(E) \cong E_\gamma$ denotes the vertical subspace of $T_\gamma E$. But any vertical vector $\xi \in E_\gamma$ is in the range of $\nabla s$ for a generic connection $\nabla$. Finally we will consider the $C^1$-map $S\colon M \to \mathbb{R}$ defined by eq. (\ref{objective}). Hence, applying Lagrange's multiplier theorem, Thm. \ref{LMT}, to the function $S$ and the section of $E\to M$ defined by eq. (\ref{sBP}), we have that the curve $\gamma$ will be a critical point of the function $S$ restricted to the submanifold defined by the zero set $Z_s$ of the section $s$ if and only if there exists an element $p\in E^*$ such that $(\gamma,p)$ is a critical point of the function $F\colon E^*\to \mathbb{R}$ given by $$F(\gamma, p) = S(\gamma) +\langle p, s(\gamma )\rangle .$$ Because of the Sobolev embedding theorem the critical curve $\gamma(t)$ is of differentiability class $C^k$ and $p(t)$ is of differentiability class $C^r$ with $r = k - 1$. 
Thus, if we assume that $\gamma$ is in $\mathcal{P}_{x_0,x_T}^k$ for all $k\geq 0$, then the critical pair $(\gamma, p)$ will be of class $(k,k-1)$ for all $k\geq 0$, hence of class $C^\infty$. We can summarize the previous discussion in the form of the following theorem. \begin{theorem}\label{smooth_LMT} With the notation above, a smooth curve $\gamma(t)=(x(t), q(t))$ is a critical point of the functional $S = \int_0^T L(x,q) dt$ subject to the horizontal constraint conditions $$ B^P\left( \frac{d}{dt}\big( x(t)q(t)\big) \right) = 0 $$ if and only if there exists a smooth lifting $(\gamma(t), p(t))$ of this curve that is a critical point of the extended functional $$S_{\mathbb{L}} [x,p,q]= \int_0^T \left(L(x,q) + \Big\langle p(t), B^P\Big(\frac{d}{dt}\big(x(t)q(t) \big) \Big) \Big\rangle \right) dt.$$ \end{theorem} \end{document}
\begin{document} \title{On the existence of Generalized Unicorns on Surfaces\footnote{ Mathematics Subject Classification (2000)\,:\, 53B40, Primary 53C60; Secondary 53D35 .}} \begin{abstract} This paper addresses the problem of existence of generalized Landsberg structures on surfaces using the Cartan--K\"ahler Theorem and a Path Geometry approach. \end{abstract} \tableofcontents \section{Introduction} \quad A Finsler norm, or metric, on a real smooth, $n$-dimensional manifold $M$ is a function $F:TM\to \left[0,\infty \right)$ that is positive and smooth on $\widetilde{TM}=TM\backslash\{0\}$, has the {\it homogeneity property} $F(x,\lambda v)=\lambda F(x,v)$, for all $\lambda > 0$ and all $v\in T_xM$, and has the {\it strong convexity} property that the Hessian matrix \begin{equation*} g_{ij}=\frac{1}{2}\frac{\partial^2 F^2}{\partial y^i\partial y^j} \end{equation*} is positive definite at any point $u=(x^i,y^i)\in \widetilde{TM}$.\\ \quad The fundamental function $F$ of a Finsler structure $(M,F)$ determines and is determined by the (tangent) {\it indicatrix}, or the total space of the unit tangent bundle of $F$, \begin{equation*} \Sigma_F:=\{u\in TM:F(u)=1\} \end{equation*} which is a smooth hypersurface of $TM$.\\ \quad At each $x\in M$ we also have the {\it indicatrix at x} \begin{equation*} \Sigma_x:=\{v\in T_xM \ |\ F(x,v)=1\}=\Sigma_F\cap T_xM \end{equation*} which is a smooth, closed, strictly convex hypersurface in $T_xM$. \\ \quad A Finsler structure $(M,F)$ can therefore be regarded as a smooth hypersurface $\Sigma\subset TM$ for which the canonical projection $\pi:\Sigma\to M$ is a surjective submersion, with the property that for each $x\in M$, the $\pi$-fiber $\Sigma_x=\pi^{-1}(x)$ is strictly convex and encloses the origin $O_x\in T_xM$. We point out that the strong convexity condition of $F$ implies that the fiber $\Sigma_x$ is strictly convex, but the converse is not true (see \cite{BCS2000} for details on this point and a counterexample). \\ \quad A generalization of this notion is the {\it generalized Finsler structure} introduced by R. Bryant. In the two dimensional case a generalized Finsler structure is a coframing $\omega=(\omega^1,\omega^2,\omega^3)$ on a three dimensional manifold $\Sigma$ that satisfies some given structure equations (see \cite{Br1995}). By extension, one can study the generalized Finsler structure $(\Sigma,\omega)$ defined in this way, even ignoring the existence of the underlying surface $M$. It was pointed out by C. Robles that in the case $n>2$, there will be no such globally defined coframing on the $(2n-1)$-dimensional manifold $\Sigma$. The reason is that even though the orthonormal frame bundle $\mathcal{F}$ over $M$ does admit a global coframing, it is a peculiarity of the $n=2$ dimensional case that $\mathcal{F}$ can be identified with $\Sigma$ (see also \cite{BCS2000}, p. 92-93 for concrete computations). \\ \quad A generalized Finsler structure is {\it amenable} if the space of leaves $M$ of the foliation $\{\omega^1=0,\omega^2=0\}$ is a differentiable manifold such that the canonical projection $\pi:\Sigma\to M$ is a smooth submersion. \\ \quad In order to study the differential geometry of the Finsler structure $(M,F)$, one needs to construct the pull-back bundle $(\pi^*TM,\pi,\Sigma)$ with the $\pi$-fibers $\pi^{-1}(u)$ diffeomorphic to $T_xM$, where $u=(x,v)\in \Sigma$ (see \cite{BCS2000}). In general this is not a principal bundle.
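\\ \quad Two standard examples may help fix the ideas (they are classical and not specific to this paper): if $F(x,v)=\sqrt{g_x(v,v)}$ for a Riemannian metric $g$, then $\Sigma_F$ is the unit sphere bundle of $g$ and each indicatrix $\Sigma_x$ is the $g$-unit circle centered at the origin of $T_xM$; a genuinely non-Riemannian example is given by the Randers metrics $F=\sqrt{a_{ij}(x)y^iy^j}+b_i(x)y^i$, with $\|b\|_a<1$, whose indicatrices are ellipses displaced off the origin.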
\\ \quad By defining an orthonormal moving coframing on $\pi^*TM$ with respect to the Riemannian metric on $\Sigma$ induced by the Finslerian metric $F$, the moving frame equations lead to the so-called Chern connection. This is an almost metric compatible, torsion free connection on the vector bundle $(\pi^*TM,\pi,\Sigma)$. \\ \quad The canonical parallel transport $\Phi_t:T_xM\setminus 0 \to T_{\sigma(t)}M\setminus 0 $, defined by the Chern connection along a curve $\sigma$ on $M$, is a diffeomorphism that preserves the Finslerian length of vectors. Unlike the parallel transport on a Riemannian manifold, $\Phi_t$ is not a linear isometry in general. \\ \quad This unexpected fact leads to some classes of special Finsler metrics. A Finsler metric whose parallel transport is a linear isometry is called a {\bf Berwald metric}, and one whose parallel transport is only a Riemannian isometry is called a {\bf Landsberg metric} (see \cite{B2007} for a very good exposition). \\ \quad Equivalently, a Berwald metric is a Finsler metric whose Chern connection coincides with the Levi-Civita connection of a certain Riemannian metric on $M$; in other words, it is ``Riemannian-metrizable''. These are the closest Finslerian metrics to the Riemannian ones. The connection is Riemannian, while the metric is not. However, in the two dimensional case, any Berwald structure is Riemannian or flat locally Minkowski, i.e. there are no geometrically interesting Berwald surfaces. \\ \quad Landsberg structures have the property that the Riemannian volume of the Finslerian unit ball is a constant. This remarkable property leads to a proof of the Gauss-Bonnet theorem on surfaces \cite{BCS2000} and other interesting results.\\ \quad Obviously, any Berwald structure is a Landsberg one. However, there are no examples of global Landsberg structures that are not Berwald. This is one of the main open problems in modern Finsler geometry. {\bf Problem.} {\it Do there exist Landsberg structures that are not Berwald?} \quad The long search for this kind of metric structures with beautiful properties, which everybody wanted to see but no one could actually find, made D. Bao call these metrics ``unicorns''. \\ \quad On the other hand, on several occasions since 2002, R. Bryant claimed that there are plenty of {\it generalized} Landsberg structures on manifolds that are not Berwald. Moreover, he said that there are a lot of such generalized metrics, depending on two functions of two variables (see \cite{B2007}, p. 46--47). \\ \quad Even though several years have already passed since the first prophecy on the existence of generalized unicorns, as far as we know, there is no proof or paper confirming and further developing Bryant's affirmations. \\ \quad The purpose of this paper is twofold. First, we give a proof of the existence of generalized Landsberg structures on surfaces which are not generalized Berwald structures, and discuss their local amenability. \\ \quad Namely, we prove the following \quad {\bf Corollary 4.3.}\\ \quad {\it There exist non-trivial generalized Landsberg structures on a 3-manifold $\Sigma$.} \quad Secondly, using a path geometry approach we construct locally a generalized Landsberg structure by means of a Riemannian metric $g$ on the manifold of $N$-parallels $\Lambda$ (see \cite{Br1995} for a similar study of existence of generalized Finsler structures with $K=1$).
In the case when such a Riemannian metric has its Levi-Civita connection $\nabla^g$ in a Zoll projective class $[\nabla]$ on $S^2$, it follows that this generalized unicorn is in fact a classical one. We conjecture that this is always possible.\\ \quad In this way, even though we have not yet explicitly computed the fundamental function $F:TM\to [0,\infty)$ of this Landsberg metric, our study gives an affirmative answer to the Problem posed above in the two dimensional case (see also \cite{Sz2008a}, \cite{M2008}, \cite{Sz2008b} for discussions on the existence of smooth unicorns in arbitrary dimension). Of course a proof of our conjecture in Section 9 remains to be given. \quad Our method is based on an {\it upstairs}-{\it downstairs} gymnastics, moving between the base manifold and the total space of a fiber bundle. \\ \quad We give here the outline of our method in order to help the reader find his way through the paper.\\ \quad We start by assuming the existence of a generalized Landsberg structure $\{\omega^1,\omega^2,\omega^3\}$ on a 3-manifold $\Sigma$ and we perform first a coframe change (\ref{coframe_change}) by means of a function $m$ on $ \Sigma$ such that the new coframe $\{\theta^1,\theta^2,\theta^3\}$ has the properties: \begin{enumerate} \item it satisfies the structure equations (\ref{k_struct_eq}); \item its ``geodesic foliation'' $\{\theta^1=0,\theta^3=0\}$ coincides with the ``indicatrix foliation'' $\{\omega^1=0,\omega^2=0\}$ of the generalized Landsberg structure $(\Sigma,\omega)$. \end{enumerate} \quad Assuming these two conditions for $(\Sigma,\theta)$ we obtain a set of differential conditions for the function $m$ in terms of its directional derivatives with respect to the coframe $\omega$, given in Proposition 6.1, or, equivalently, in Proposition 6.2 if we start conversely. \\ \quad Based on these, one can remark the following: \begin{enumerate} \item the function $m$ is invariant along the leaves of the foliation $\{\omega^2=0,\omega^3=0\}$; therefore, if we assume that $(\Sigma,\omega)$ is normal amenable, i.e. the leaf space of $\{\omega^2=0,\omega^3=0\}$ is a 2-dimensional differentiable manifold $\Lambda$, and the quotient projection $\nu:\Sigma\to \Lambda$ is a smooth submersion, then $m$ actually lives ``downstairs'' on this manifold rather than ``upstairs'' on $\Sigma$ as initially expected; \item if we realize $\{\theta^1,\theta^2,\theta^3\}$ as the canonical coframe of a Riemannian metric $g$ on $\Lambda$, then the function $k$ in (\ref{k_struct_eq}) is the lift of the Gauss curvature of the Riemannian metric $g$, hence the name ``curvature condition'' for (\ref{condC_up}) is motivated; \item since we have constructed from the beginning the coframe $\theta$ on $\Sigma$ such that its geodesic foliation generates the indicatrix leaves on $\Sigma$, if we could choose a Riemannian metric $g$ ``downstairs'' on $\Lambda$ all of whose geodesics are embedded circles, then the amenability of $(\Sigma,\omega)$ would be guaranteed. It is known that Riemannian metrics of this kind exist and they are usually called Zoll metrics (see \cite{B1978}, \cite{G1976}). A more general concept is the Zoll projective structure $[\nabla]$ on $\Lambda$ (see \S 3.2 as well as \cite{LM2002}). These are projective equivalence classes of torsion free affine connections on $\Lambda$ whose geodesics are embedded circles in $\Lambda$.
Moreover, under some very reasonable conditions they are metrizable by Riemannian metrics whose Levi-Civita connections $\nabla^g$ belong to the initial Zoll projective structure $[\nabla]$. \end{enumerate} \quad All these imply that if we start ``downstairs'' with a Riemannian metric $g=u^2[(dz^1)^2+(dz^2)^2]$ on $\Lambda$, for some isothermal coordinates $(z^1,z^2)\in \Lambda$, where $u$ is a smooth function on $\Lambda$, then we can construct the $g$-orthonormal oriented frame bundle $\mathcal F(\Lambda)$ with its canonical coframe, say $\{\alpha^1,\alpha^2,\alpha^3\}$.\\ \quad On the other hand, we set up a second order PDE system on $\Lambda$ for the functions $u, \bar m$ such that the lift $\widetilde m=\nu^*(\bar m)$ satisfies the conditions of Proposition 6.2. The Cartan-K\"ahler theorem tells us that such pairs of functions $(u,\bar{m})$ exist and that they depend on 4 functions of 1 variable (Proposition 8.1). Then, by the coframe change (\ref{inverse_coframe}) we obtain a new coframe $\widetilde \omega$ on the 3-manifold $\Sigma:=\mathcal F(\Lambda)$ which will satisfy the structure equations (\ref{Lands_struct_eq}) of a generalized Landsberg structure. The isothermal coordinates $(z^1,z^2)$ on $\Lambda$ and a homogeneous coordinate in the fiber of $\nu:\mathcal F(\Lambda) \to \Lambda$ over a point $z\in \Lambda$ will give a local form (\ref{normal_form}).\\ \quad The following diagram shows our {\it upstairs}-{\it downstairs} gymnastics. \begin{equation*} \begin{matrix} Upstairs \quad & (\Sigma,\omega)& \xrightarrow{m}& \quad (\Sigma,\theta)&\quad \equiv \quad & \quad\quad (\mathcal F(\Lambda),\alpha)& \xrightarrow{\widetilde m}& \Sigma:=(\mathcal F (\Lambda),\widetilde \omega)\\ & & & s^*\downarrow & & \nu^*\uparrow & & \\ Downstairs & & &\quad (\Lambda,g) &\quad \equiv \quad &(\Lambda,\widetilde g) & & \end{matrix} \end{equation*} \quad We use the Cartan-K\"ahler theory extensively in this paper in order to study the existence of integral manifolds of linear Pfaffian systems associated to PDEs upstairs as well as downstairs. The nontriviality of our generalized Landsberg structures can be achieved by choosing appropriate initial conditions for the integral submanifolds.\\ \quad The theory of exterior differential systems is one of the strongest tools for the study of geometric structures. E. Cartan and other mathematicians reformulated various types of geometric structures in the terminology of exterior differential systems. However, very few essentially new results were obtained, except for the work of R. Bryant and a few others (see \cite{Br et al 1991}, \cite{IL2003} and the references in these two fundamental books).\\ \quad In the present paper, the Cartan-K\"ahler theorem is essentially used to find new geometric structures, namely generalized Landsberg structures. This shows the usefulness and applicability of the theory of exterior differential systems. \\ \quad For the concrete computations regarding the Cartan-K\"ahler Theorem we have used the MAPLE package Cartan found on Jeanne Clelland's home page (http://math.colorado.edu/ $\tilde{ }$ jnc/). We have found it extremely useful for checking this kind of computation. \begin{center} * \end{center} \quad Here is the structure of our paper. After a short survey of some basic notions of Finsler surfaces and generalized Finsler structures on surfaces in Section 2, we construct in Section 3 the linear Pfaffian exterior differential system whose integral manifolds are the sought structures.
\\ \quad Using it, in Section 4 we prove, by means of the Cartan-K\"{a}hler theorem for linear Pfaffian systems, a local existence theorem for generalized Landsberg structures on surfaces that are not of Berwald type. Firstly, we assume the existence of generalized Landsberg structures on surfaces and build a linear Pfaffian system whose integral manifolds consist of the scalar invariants $I$ and $K$ of the generalized Landsberg structure considered. The Cartan-K\"{a}hler theorem then also tells us that this kind of generalized structures depends on two arbitrary functions of two variables on $\Sigma$ (\S 4.1, \S 4.2). This proves Bryant's affirmations. \\ \qquad However, this discussion holds only under the assumption that generalized Landsberg structures exist. We will show more here; namely, we will study the involutivity of a Pfaffian system on $\Sigma$ whose integral manifolds consist of the coframe $\omega$ satisfying the structure equations \eqref{Lands_struct_eq} together with the scalar invariants $I$, $K$ satisfying the Bianchi identities \eqref{Lands_Bianchi}. This Pfaffian system is not a linear one, so we needed to prolong it, but finally the Cartan-K\"{a}hler theorem tells us that these structures depend on 3 functions of 3 variables (\S 4.3). The degree of freedom is in this case higher than before, and includes the findings in \S 4.1, \S 4.2 as partial results. \\ \quad We discuss the local amenability of these structures in Section 5. \\ \quad Since the Cartan-K\"ahler theory is not very popular amongst Finsler geometers, we introduce the basic notions and results in an Appendix. For the same reason, at the first use of the Cartan-K\"{a}hler theorem for linear Pfaffian systems, we present the computations in detail. Later uses of the theorem in \S 4.3 and \S 8.2 show only the main formulas, leaving the heavy computations to be verified by the reader. \\ \quad In order to obtain an amenable Landsberg structure on a 3-manifold $\Sigma$ we have considered in Section 6 a special coframe change on $\Sigma$, constructed so that the indicatrix foliation of the initial Landsberg structure coincides with the geodesic foliation of the newly constructed structure. Moreover, this new coframe is realizable as the canonical coframe on the orthonormal frame bundle of a Riemannian surface (Section 7). \\ \quad Keeping all these in mind, by inverting the procedure in Section 7 we have constructed in Section 8 a generalized Landsberg structure, on the total space $\mathcal F(\Lambda)$ of the orthonormal frame bundle of a Riemannian surface $(\Lambda,g)$, in terms of a smooth function $\bar m$ on $ \Lambda$. The Landsberg structure is not a Berwald one if $\bar m$ is not constant.\\ \quad Finally, in Section 9, we discuss a possible way to show the existence of classical two dimensional unicorns. This problem is equivalent to the problem of finding a Riemannian metric $g$ that metrizes a Zoll projective class on $S^2$ and satisfies at the same time the PDE system (\ref{condL_2}), (\ref{condC_2}). We conjecture that this is always possible. \\ \quad Then, by construction, the geodesic foliation $\{\alpha^1=0,\alpha^3=0\}$ of $g$ will foliate the 3-manifold $\mathcal F(\Lambda)$ by circles; the leaf space, say $M$, of the geodesic foliation naturally becomes a differentiable manifold and the leaf quotient mapping $\pi: \mathcal F(\Lambda)\to M$ becomes a smooth submersion.
In other words, we obtain a double fibration (see \S 3.1 and \S 3.2).\\ \quad Therefore, by our procedure it follows that this generalized Landsberg structure is amenable and its fibers $\pi^{-1} (x)$ are compact, where $\pi:\mathcal F(\Lambda) \to M$, $x\in M$. \\ \quad A simple argument will show that this generalized Landsberg structure is actually a classical Landsberg structure on the 2-manifold $M$, provided our conjecture is true. \quad {\bf Acknowledgments.} The authors would like to express their gratitude to Vladimir Matveev who pointed out an error in a previous version of the paper. We also thank David Bao, Gheorghe Pitis and Colleen Robles for many useful discussions. We are also indebted to Keizo Yamaguchi for his valuable advice. Finally, we are grateful to the referee who pointed out the importance of the amenability of the generalized Landsberg structure and for many other helpful suggestions. \section{Riemann--Finsler surfaces} \quad We are going to restrict ourselves for the rest of the paper to the two dimensional case. To be more precise, our manifold $\Sigma$ will always be 3-dimensional, and the manifold $M$ will be 2-dimensional, in case it exists.\\ \quad{\bf Definition 2.1.}\quad A 3-dimensional manifold $\Sigma$ endowed with a coframing $ \omega=(\omega^1,\omega^2,\omega^3)$ which satisfies the structure equations \begin{equation}\label{finsler_struct_eq} \begin{split} d\omega^1&=-I\omega^1\wedge\omega^3+\omega^2\wedge\omega^3\\ d\omega^2&=-\omega^1\wedge\omega^3\\ d\omega^3&=K\omega^1\wedge\omega^2-J\omega^1\wedge \omega^3 \end{split} \end{equation} will therefore be called a {\it generalized Finsler surface}, where $I$, $J$, $K$ are smooth functions on $\Sigma$, called the invariants of the generalized Finsler structure $ (\Sigma,\omega)$ (see \cite{Br1995} for details).\\ \quad As long as we work only with generalized Finsler surfaces, it might be possible that such a generalized structure is not realizable as a classical Finslerian structure on a surface $M$. This motivates the following definition \cite{Br1995}.\\ \quad{\bf Definition 2.2.}\ A generalized Finsler surface $(\Sigma,\omega)$ is said to be {\it amenable} if the leaf space $ \mathcal{M}$ of the codimension 2 foliation defined by the equations $\omega^1=0$, $\omega^2=0$ is a smooth surface such that the natural projection $\pi:\Sigma\to \mathcal{M}$ is a smooth submersion.\\ \quad As R. Bryant emphasizes in \cite{Br1995}, the difference between a classical Finsler structure and a generalized one is global in nature, in the sense that {\it every generalized Finsler surface structure is locally diffeomorphic to a classical Finsler surface structure. }\\ \quad The following fundamental result can also be found in \cite{Br1995}.\\ \quad{\bf Theorem 2.1.} \quad {\it The necessary and sufficient conditions for a generalized Finsler surface $ (\Sigma,\omega)$ to be realizable as a classical Finsler structure on a surface are \begin{enumerate} \item the leaves of the foliation $\{\omega^1=0,\ \omega^2=0\}$ are compact; \item it is amenable, i.e. the space of leaves of the foliation $\{\omega^1=0,\ \omega^2=0\}$ is a differentiable manifold $M$; \item the canonical immersion $\iota:\Sigma\to TM$, given by $\iota(u)=\pi_{*,u}(\hat{e}_2)$, is one-to-one on each $\pi$-fiber $\Sigma_x$, \end{enumerate} where we denote by $(\hat{e}_1,\hat{e}_2, \hat{e}_3)$ the dual frame of the coframing $(\omega^1,\omega^2,\omega^3)$.
}\\ \quad In the same source it is pointed out that if, for example, the $\{\omega^1=0,\ \omega^2=0\}$ leaves are not compact, or even in the case they are, if they are ramified, or if the curves $\Sigma_x$ wind around the origin in $T_xM$, then in any of these cases the generalized Finsler surface structure is not realizable as a classical Finsler surface.\\ \quad An illustrative example found in \cite{Br1995} is the case of an amenable generalized Finsler surface such that the invariant $I$ is constant but not zero. This kind of generalized structure is not realizable as a Finsler surface because $I\neq 0$ means that the leaves of the foliation $\{\omega^1=0, \ \omega^2=0\}$ are not compact. Indeed, in the case $I^2<4$, the $\pi$-fibers $\Sigma_x$ are logarithmic spirals in $T_xM$.\\ \quad Let us return to the general theory of generalized Finsler structures on surfaces. By taking the exterior derivative of the structure equations (\ref{finsler_struct_eq}) one obtains the {\it Bianchi equations of the Finsler structure}: \begin{equation*} \begin{split} & J=I_2\\ & K_3+KI+J_2=0, \end{split} \end{equation*} where we denote by $I_i$ the directional derivatives with respect to the coframing $\omega$, i.e. \begin{equation*} df=f_1\omega^1+f_2\omega^2+f_3\omega^3, \end{equation*} for any smooth function $f$ on $\Sigma$.\\ \quad Taking now one more exterior derivative of the last formula written above, one obtains the Ricci identities with respect to the generalized Finsler structure \begin{equation*} \begin{split} & f_{21}-f_{12}=-Kf_3\\ & f_{32}-f_{23}=-f_1\\ & f_{31}-f_{13}=If_1+f_2+Jf_3. \end{split} \end{equation*} \quad{\bf Remarks.} \begin{enumerate} \item Remark first that the structure equations of a Riemannian surface are obtained from (\ref{finsler_struct_eq}) by putting $I=J=0$. \item Since $J=I_2$, one can easily see that the necessary and sufficient condition for a generalized Finsler structure to be non-Riemannian is $I\neq 0$. \end{enumerate} \quad {\bf Definition 2.3.}\quad {\it A generalized Landsberg structure} on $\Sigma$ is a generalized Finsler structure $(\Sigma,\omega)$ such that $J=0$, or equivalently, $I_2=0$.\\ \quad Remark that such a generalized structure is characterized by the structure equations \begin{equation}\label{Lands_struct_eq} \begin{split} d\omega^1&=-I\omega^1\wedge\omega^3+\omega^2\wedge\omega^3\\ d\omega^2&=-\omega^1\wedge\omega^3\\ d\omega^3&=K\omega^1\wedge\omega^2, \end{split} \end{equation} and Bianchi identities \begin{equation}\label{Lands_Bianchi} \begin{split} dI & =I_1\omega^1 \qquad \qquad +I_3\omega^3 \\ dK & =K_1\omega^1+K_2\omega^2-KI\omega^3, \end{split} \end{equation} where $\omega=(\omega^1$, $\omega^2$, $\omega^3)$ is a coframing on a certain 3-dimensional manifold $\Sigma$, and $I$ and $K$ are smooth functions defined on $\Sigma$. We will see that we actually need more, so we assume that the functions $I$ and $K$ are analytic on $\Sigma$.\\ \quad It is also useful to have the Ricci identities \cite{BCS2000} for the invariants $I$ and $K$. Indeed, taking first into account that \begin{equation*} K_{31}=-I_1K-IK_1,\quad K_{32}=-IK_2,\quad K_{33}=K(I^2-I_3), \end{equation*} we obtain \begin{align*} & I_{12}=KI_3, & K_{21}&-K_{12}=IK^2 \\ & I_{32}=-I_1, & K_{23}&=K_1-IK_2 \\ & I_{31}-I_{13}=II_1, & K_{13}&=-(2K_1I+KI_1+K_2). \end{align*} \quad We are interested in studying the existence of non-trivial generalized Landsberg structures on $\Sigma$, i.e. generalized Landsberg structures that are not of Berwald type.
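\\ \quad For the reader's convenience we sketch the computation behind the first Bianchi identity $J=I_2$ used above (a routine check, recorded here only for completeness). Differentiating the first equation of (\ref{finsler_struct_eq}) and using $d^2\omega^1=0$ gives \begin{equation*} 0=-dI\wedge\omega^1\wedge\omega^3-I\,d\omega^1\wedge\omega^3+I\,\omega^1\wedge d\omega^3+d\omega^2\wedge\omega^3-\omega^2\wedge d\omega^3 . \end{equation*} By (\ref{finsler_struct_eq}) the second, third and fourth terms vanish, while $dI\wedge\omega^1\wedge\omega^3=-I_2\,\omega^1\wedge\omega^2\wedge\omega^3$ and $\omega^2\wedge d\omega^3=J\,\omega^1\wedge\omega^2\wedge\omega^3$, so the identity reduces to $(I_2-J)\,\omega^1\wedge\omega^2\wedge\omega^3=0$, i.e. $J=I_2$; in particular, the Landsberg condition $J=0$ is indeed equivalent to $I_2=0$.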
\\ \quad Recall the following definition. \quad {\bf Definition 2.4.} \quad A {\it generalized Berwald structure} is a generalized Finsler structure characterized by the structure equations (\ref{Lands_struct_eq}), and \begin{equation*} dI \equiv 0 \quad \mod\quad \omega^3, \end{equation*} or, equivalently, \begin{equation*} I_1=I_2=0,\qquad I_3\neq 0. \end{equation*} \quad The reason we call Berwald structures (generalized or not) on surfaces {\it trivial} is given in the following rigidity theorem.\\ \quad{\bf Theorem 2.2. Rigidity theorem for Berwald surfaces} \cite{Sz1981}\\ \quad {\it Let $(M,F)$ be a connected Berwald surface for which the Finsler structure $F$ is smooth and strongly convex on all of $\widetilde{TM}$. \begin{enumerate} \item If $K= 0$, then $F$ is locally Minkowski everywhere. \item If $K\neq 0$, then $F$ is Riemannian everywhere. \end{enumerate} } \quad In other words, the only possible Berwald structures are either the flat locally Minkowski ones or the Riemannian ones. Therefore the term {\it non-trivial} in the present paper refers to a Landsberg structure that is neither locally Minkowski nor Riemannian. Both of these are well studied trivial examples of Landsberg surfaces. \\ \quad {\bf Remark.}\\ It is interesting to remark that $I_1=0$ is not the only condition that makes a Landsberg structure become a Berwald one. \quad Indeed, using the structure and the Ricci equations one can easily see that if for a Landsberg structure on a surface at least one of the following relations is satisfied \begin{equation*} I_3=0, \quad K_2=0, \end{equation*} then that structure must be a Berwald one. \\ \quad Remark also that the condition \begin{equation*} K_1=0 \end{equation*} does not necessarily imply triviality. In fact, all the generalized Landsberg structures in this paper satisfy this condition. \section{Path Geometries} \subsection{Path geometries of a generalized Landsberg structure} \quad Recall from \cite{Br1997} that a {\it (classical) path geometry} on a surface $M$ is a foliation $\mathcal P$ of the projective tangent bundle $\mathbb P (TM)$ by contact curves, each of which is transverse to the fibers of the canonical projection $\pi:\mathbb P (TM)\to M$.\\ \quad Namely, let $\gamma:(a,b)\to M$ be a smooth, immersed curve, and let us denote by $\hat{\gamma}:(a,b)\to \mathbb{P}(TM)$ its canonical lift to the projective tangent bundle $\pi:\mathbb{P}(TM)\to M$. Then, the fact that the canonical projection $\pi$ is a submersion implies that, for each line $L\in \mathbb{P}(TM)$, the linear map \begin{equation*} \pi_{*,L}:T_L \mathbb{P}(TM)\to T_xM, \end{equation*} is surjective, where $\pi(L)=x\in M$. Therefore \begin{equation*} E_L:=\bigl(\pi_{*,L}\bigr)^{-1}(L)\subset T_L \mathbb{P}(TM) \end{equation*} is a 2-plane in $T_L \mathbb{P}(TM)$ that defines a contact distribution and therefore a contact structure on $\mathbb{P}(TM)$.\\ \quad A curve on $\mathbb{P}(TM)$ is called a {\it contact curve} if it is tangent to the contact distribution $E$.
Note that the canonical lift $\hat{\gamma}$ to $\mathbb{P}(TM)$ of a curve $\gamma$ on $M$ is a contact curve.\\ \quad A {\it local path geometry} on $M$ is a foliation $\mathcal P$ of an open subset $U\subset \mathbb P (TM)$ by contact curves, each of which is transverse to the fibers of $\pi:\mathbb P (TM)\to M$.\\ \quad In the case there is a surface $\Lambda$ and a submersion $l:\mathbb P (TM) \to \Lambda$ whose fibers are the leaves of $\mathcal P$, the path geometry will be called {\it amenable}.\\ \quad More generally, a {\it generalized path geometry} on a 3-manifold $\Sigma$ is a pair of transverse codimension 2 foliations $(\mathcal P, \mathcal Q)$ with the property that the (unique) 2-plane field $D$ that is tangent to both foliations defines a contact structure on $\Sigma$. \\ \quad In the case when there is a surface $\Lambda_{\mathcal P}$ and a submersion $l_{\mathcal P}:\Sigma \to \Lambda_{\mathcal P}$ whose fibers are the leaves of the foliation $\mathcal P$, the generalized path geometry $(\mathcal P, \mathcal Q)$ will be called $\mathcal P$-{\it amenable}. A $\mathcal Q$-{\it amenable} generalized path geometry $(\mathcal P, \mathcal Q)$ is defined in a similar way.\\ \quad One can easily see that a classical path geometry on $\Sigma=\mathbb P (TM)$ is a special case of a generalized path geometry where the second foliation $ \mathcal Q$ is taken to be the foliation by the fibers of the canonical projection $\pi:\mathbb P (TM)\to M$.\\ \quad In the case of a Landsberg structure on a 3-manifold $\Sigma$, we can define two kinds of generalized path geometries.\\ \quad We can consider \begin{enumerate} \item $ \mathcal P := \{\omega^1=0,\omega^3=0\}$ the {\it ``geodesic'' foliation} of $\Sigma$, i.e. the leaves are curves on $\Sigma$ tangent to $\hat e_2$; \item $ \mathcal Q := \{\omega^1=0,\omega^2=0\}$ the {\it ``indicatrix'' foliation} of $\Sigma$, i.e. the leaves are curves on $\Sigma$ tangent to $\hat e_3$; \item $ \mathcal R := \{\omega^2=0,\omega^3=0\}$ the {\it ``normal'' foliation} of $\Sigma$, i.e. the leaves are curves on $ \Sigma$ tangent to $\hat e_1$. \end{enumerate} \quad We can now consider the generalized path geometries \begin{equation*} \mathcal G_1=(\mathcal P, \mathcal Q),\qquad \mathcal G_2=(\mathcal R, \mathcal Q). \end{equation*} \quad Remark that in the case of $\mathcal G_1$, the 2-plane field $D_1=<\hat e_2,\hat e_3>$ indeed defines a contact structure on $\Sigma$. To verify this we need a contact 1-form $\eta$ on $\Sigma$ such that $D_1=\ker \eta$. By definition it follows that $\eta$ has to be \begin{equation*} \eta=A\omega^1 \end{equation*} and we have \begin{equation*} \eta\wedge d\eta=A^2\omega^1\wedge\omega^2\wedge\omega^3. \end{equation*} \quad Therefore $\eta$ is a contact form on $\Sigma$ if and only if $A\neq 0$, so $\mathcal G_1$ is a well defined path geometry on $\Sigma$. \\ \quad We can carry out the same discussion for $\mathcal G_2$, where the 2-plane field is $D_2=<\hat e_1,\hat e_3>$. As above, we look for $\eta$ such that $D_2=\ker \eta$, so we must have \begin{equation*} \eta=A\omega^2, \end{equation*} and a simple computation shows that again \begin{equation*} \eta\wedge d\eta=A^2\omega^1\wedge\omega^2\wedge\omega^3. \end{equation*} \quad Therefore, again $\eta$ is a contact form on $\Sigma$ if and only if $A\neq 0$, and again $\mathcal G_2$ is a well defined path geometry on $\Sigma$.
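\\ \quad For completeness, here is the short computation behind the last claim (a sketch only). From the second structure equation in (\ref{Lands_struct_eq}) we have $d\omega^2=-\omega^1\wedge\omega^3$, so for $\eta=A\omega^2$ we get $d\eta=dA\wedge\omega^2-A\,\omega^1\wedge\omega^3$ and hence \begin{equation*} \eta\wedge d\eta=-A^2\,\omega^2\wedge\omega^1\wedge\omega^3=A^2\,\omega^1\wedge\omega^2\wedge\omega^3, \end{equation*} exactly as stated; the computation for $\mathcal G_1$ is entirely analogous, using the first structure equation instead.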
\\ \quad Recall also from the same \cite{Br1997} that {\it every generalized path geometry is always identifiable with a local path geometry on a surface}. Indeed, for a $u\in \Sigma$, let $U\subset \Sigma$ be an open neighborhood of $u$ on which the foliation $\mathcal Q$ is locally amenable, i.e. there exist a smooth surface $M$ and a smooth surjective submersion $\pi:U\to M$ such that the fibers of $\pi$ are the leaves of $\mathcal Q$ restricted to $U$. Remark that this is always possible (for example by the Frobenius theorem) and that $M$ and $\pi$ are uniquely determined by $U$ up to a diffeomorphism.\\ \quad A natural smooth map $\nu:U\subset \Sigma\to \mathbb P (TM)$, which makes the following diagram commutative, \begin{equation*} \begin{split} & \qquad \ \ \quad \nu \\ & U\subset \Sigma \longrightarrow \mathbb P (TM) \\ \pi & \downarrow \qquad \swarrow \\ & M \end{split} \end{equation*} can be defined as follows \begin{equation*} \nu(u)=\pi_{*,u}(T_u\mathcal P), \end{equation*} for any $u\in U$. This map is well defined because $\pi_{*,u}(T_u\mathcal P)$ is a 1-dimensional subspace of $T_{\pi(u)}M$, and therefore an element of $\mathbb P (T_{\pi(u)}M)$. \\ \quad For the generalized path geometry $\mathcal G_1=(\mathcal P, \mathcal Q)$ we put \begin{equation*} \nu_1:U\subset \Sigma\to \mathbb P (TM),\qquad \nu_1(u)=\pi_{*,u}(\hat e_2), \end{equation*} and for the generalized path geometry $\mathcal G_2=(\mathcal R, \mathcal Q)$ we put \begin{equation*} \nu_2:U\subset \Sigma\to \mathbb P (TM),\qquad \nu_2(u)=\pi_{*,u}(\hat e_1). \end{equation*} \quad Remark that because the foliations $\mathcal P$, $\mathcal Q$ and $\mathcal R$ are all transverse to each other, it follows again that $\pi_{*,u}(T_u \mathcal P)$ and $\pi_{*,u}(T_u \mathcal R)$ are 1-dimensional subspaces in $T_{\pi(u)}M$, i.e. $\nu_1$, $\nu_2$ are immersions and therefore local diffeomorphisms. \subsection{Zoll projective structures} \quad A classical example of a path geometry on a 3-manifold is the path geometry of a Riemannian metrizable Zoll projective structure. This is not only an example of a path geometry, but it will also be very useful in the construction of a non-trivial Landsberg structure.\\ \quad Recall that a Riemannian metric $g$ on a smooth manifold $\Lambda$ is called a {\it Zoll metric} if all its geodesics are simple closed curves of equal length. See \cite{B1978} for basics of Zoll metrics as well as \cite{G1976} for the abundance of Zoll metrics on $S^2$.\\ \quad We will use in the present paper a more general notion, namely Zoll projective structures. Our exposition follows closely \cite{LM2002}.\\ \quad{\bf Definition 3.1.} If $\nabla$ is a torsion free affine connection on a smooth manifold $\Lambda$, then the projective class $[\nabla]$ of $\nabla$ is called a {\it Zoll projective structure} if the image of any maximal geodesic of $ \nabla$ is an embedded circle $S^1\subset \Lambda$.\\ \quad Given a Zoll projective structure $[\nabla]$ on $\Lambda$, the canonical lifts of its geodesics provide the geodesic foliation $\mathcal P$ on the projectivized tangent bundle $\mathbb P(T\Lambda)$, which foliates $\mathbb P(T\Lambda)$ by circles.
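\\ \quad The model example to keep in mind (a standard one, not specific to our construction) is the round metric on $S^2$: all of its geodesics are great circles, hence simple closed curves of the same length, so it is a Zoll metric, and the projective class of its Levi-Civita connection is a Zoll projective structure. The corresponding geodesic foliation of $\mathbb P(TS^2)$ is by circles, and its leaf space is the space of (unoriented) great circles of $S^2$, which is a smooth surface.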
Let $M$ be the leaf space of the geodesic foliation $\mathcal P$ of a Zoll projective structure.\\ \quad It can be shown that any Zoll projective structure $[\nabla]$ on a compact orientable surface $\Lambda$ is {\it tame}, namely each leaf of its geodesic foliation on $\mathbb P(T\Lambda)$ has a neighborhood which is diffeomorphic to $\mathbb R^2\times S^1$, such that each leaf in this neighborhood corresponds to a circle of the form $\{u\}\times S^1$, $u\in \mathbb R^2$.\\ \quad This implies further that the leaf space $M$ of a Zoll projective structure $[\nabla]$ on a compact orientable surface $\Lambda$ has a canonical structure of differentiable manifold such that the quotient map $\pi:\mathbb P(T \Lambda) \to M$ becomes a submersion. We obtain therefore the following {\it double fibration} of a Zoll projective structure. \begin{equation*} \begin{split} & \qquad \mathbb P (T\Lambda) \\ & \nu \swarrow \qquad \searrow \pi\\ & \Lambda\qquad\qquad \quad M \end{split} \end{equation*} \quad Let us assume from now on that $\Lambda=S^2$. It is natural to ask when a given Zoll projective structure $[\nabla]$ on $S^2$ can be represented by the Levi-Civita connection of a Riemannian metric $g$ on $\Lambda=S^2$. \\ \quad The answer is given in Theorem 4.8. of \cite{LM2002}, p. 512. We are not going to state or prove this theorem here because it would take too much space to define all the notions that are involved. Instead, we are going to describe the construction of the Riemannian metric $g$ that represents a Zoll projective structure, in the case such a metric exists. It is clear from \cite{LM2002} that the set of Riemannian metrizable Zoll projective structures is not empty, so we can assume the existence of Riemannian metrizable Zoll projective structures $[\nabla]$ on $S^2$. \\ \quad Let us consider the isothermal local coordinates $(z^1,z^2)$ on $S^2$ induced from the Zoll projective structure (the concrete construction can be found in \cite{LM2002}, p. 513), and let \begin{equation*} g=u^2\Bigl[ (dz^1)^2+(dz^2)^2 \Bigr], \end{equation*} be the metric on $S^2$ in these coordinates, where $u$ is a smooth function. If one puts \begin{equation*} \gamma=d\log u, \end{equation*} then the Levi-Civita connection $\nabla^g$ of the Riemannian metric $g$ belongs to the Zoll projective structure $ [\nabla]$ if \begin{equation}\label{Gamma} \Gamma_{kl}^j=\gamma_k\delta_l^j+\gamma_l\delta_k^j-\gamma^j\delta_{kl}, \end{equation} where $\gamma=\gamma_1dz^1+\gamma_2dz^2$, and $\Gamma_{kl}^i$ are the Christoffel symbols of the Zoll projective structure $[\nabla]$, i.e. \begin{equation*} \Gamma_{jk}^i=\Bigl< dz^i,\nabla_{\frac{\partial}{\partial z^j}}\frac{\partial}{\partial z^k}\Bigr> \end{equation*} for a connection $\nabla$ in the Zoll projective structure $[\nabla]$, and $\gamma^j=g^{ji}\gamma_i$.\\ \quad It follows that for a given Zoll projective structure $[\nabla]$ we obtain \begin{equation}\label{small_gamma} \gamma_i=\frac{1}{2}\Bigl(\Gamma_{i1}^1+\Gamma_{i2}^2 \Bigr),\qquad i=1,2.
\end{equation} \quad If we denote by $R$ the Gauss curvature of $g$, then taking into account that $\gamma_i=\frac{1}{u}\frac{\partial u}{\partial z^i}$, it follows that \begin{equation} R=-\frac{1}{u^2}\textrm{div} \gamma, \end{equation} where we put $\textrm{div} \gamma=\frac{\partial\gamma_1}{\partial z^1}+\frac{\partial\gamma_2}{\partial z^2}$.\\ \quad If we denote by $\{\alpha^1,\alpha^2,\alpha^3\}$ the canonical coframe on the bundle of $g$-orthonormal frames on $\Lambda$, then $\mathcal G=(\mathcal P, \mathcal Q)$ is a path geometry on $\mathbb P(T\Lambda)$, where $\mathcal P:=\{\alpha^1=0,\alpha^2=0\}$ is the geodesic foliation of $g$ and $\mathcal Q:=\{\alpha^1=0,\alpha^3=0\}$. \section{The Cartan--K\"ahler theory} \subsection {A linear Pfaffian system on generalized Landsberg surfaces} \quad This section and the following one are motivated by Bryant's prophecy on the existence of generalized unicorns that we have already mentioned in the Introduction. Since the Finsler geometry community is familiar with his statements, we will give here our interpretation of them. We point out, however, that the discussion that follows does not imply the existence of non-trivial generalized unicorns. This will be shown only in Section 4.3, in a different setting.\\ \quad In order to make use of the Cartan-K\"{a}hler theory, we are going to construct an exterior differential system associated to the coframe $(\omega^1,\omega^2,\omega^3)$ that satisfies (\ref{Lands_struct_eq}), (\ref{Lands_Bianchi}).\\ \quad In this section we {\it assume} the existence of three linearly independent 1-forms $(\omega^1,\omega^2,\omega^3)$ on the 9-dimensional manifold $\widetilde{\Sigma}=\Sigma\times \mathbb R^2\times \mathbb R^4$ that satisfy the structure equations (\ref{Lands_struct_eq}), where we consider the free coordinates $(I,K)\in \mathbb R^2$, and $(I_1,I_3,K_1,K_2)\in \mathbb R^4$, and study the degree of freedom of the scalar functions $I$ and $K$.\\ \quad First, we consider the following 1-forms \begin{equation}\label{Lands_Pfaffian} \begin{split} \theta^1: & = dI-I_1\omega^1-I_3\omega^3 \\ \theta^2: & = dK -K_1\omega^1-K_2\omega^2+KI\omega^3, \end{split} \end{equation} and let us denote by $\mathcal{I}$ the differential ideal generated by $\{\theta^1,\theta^2\}$. We also denote \begin{equation*} \begin{split} \Omega & :=\omega^1\wedge\omega^2\wedge\omega^3, \\ J & :=\{\theta^a,\omega^i\}, \\ I& :=\{\theta^a\}, \end{split} \end{equation*} where {\it a}=1,2, {\it i}=1,2,3.\\ \quad We will use the same letter $I$ for the invariant of a (generalized) Finsler structure as well as for the set of 1-forms $\theta^1,\theta^2$. We hope that this will not lead to any confusion.\\ \quad In order to use the Cartan--K\"{a}hler theory we are going to consider the pair $(I,J)$ as an EDS with independence condition on a certain manifold $\widetilde{\Sigma}$ to be determined later. We consider $dI$ and $dK$ as linearly independent 1-forms on the manifold $\widetilde{\Sigma}$.\\ \quad By exterior differentiation of $\{\theta^1,\theta^2\}$ we obtain \begin{equation*} \begin{split} d\theta^1 & =-dI_1\wedge \omega^1-dI_3\wedge \omega^3-I_3K\omega^1\wedge \omega^2-I_1\omega^2\wedge \omega^3+II_1\omega^1\wedge\omega^3\\ d\theta^2 & =-dK_1\wedge\omega^1-dK_2\wedge\omega^2+IK^2\omega^1\wedge\omega^2+(IK_2- K_1)\omega^2\wedge\omega^3\\ & +(2IK_1+I_1K+K_2)\omega^1\wedge\omega^3+K\theta^1\wedge\omega^3+I\theta^2\wedge\omega^3.
\end{split} \end{equation*} \quad Let us remark that the above formulas can be rewritten as \begin{equation*} \begin{split} d\theta^1 & \equiv (-dI_1+I_3K\omega^2-II_1\omega^3)\wedge\omega^1+ (-dI_3-I_1\omega^2)\wedge\omega^3 \quad \mod\ \{I\} \\ d\theta^2 & \equiv \ensuremath{\mathbb{B}}igl[-dK_1-IK^2\omega^2-(2IK_1+I_1K+K_2)\omega^3\ensuremath{\mathbb{B}}igr] \wedge\omega^1\\ & +\ensuremath{\mathbb{B}}igl[-dK_2-(IK_2-K_1)\omega^3\ensuremath{\mathbb{B}}igr]\wedge\omega^2 \quad \mod\ \{I\}. \end{split} \end{equation*} \quad It follows that we can write \begin{equation*} d\theta^a \equiv \pi^a_i\wedge \omega^i\quad \mod\ \{I\}, \end{equation*} where {\it a}=1,2, {\it i}=1,2,3. \\ \quad The 1-forms matrix $(\pi^a_i)$ has the following non-vanishing entries: \begin{equation}\label{pi_matrix} \begin{split} \pi_1^1 & = -dI_1+I_3K\omega^2-II_1\omega^3,\\ \pi_3^1 & = -dI_3-I_1\omega^2,\\ \pi_1^2 & = -dK_1-IK^2\omega^2-(2IK_1+I_1K+K_2)\omega^3\\ \pi_2^2 & = -dK_2-(IK_2-K_1)\omega^3. \end{split} \end{equation} \quad By putting now \begin{equation}\label{pi_vector} \begin{split} & \pi^1:=\pi_1^1,\qquad \pi^2:=\pi_3^1,\\ & \pi^3:=\pi_1^2,\qquad \pi^4:=\pi_2^2, \end{split} \end{equation} we obtain that $(I,J)$ is a linear Pfaffian system that lives on the 9 dimensional manifold $\widetilde{\Sigma}$ which has the coframing \begin{equation} \{\theta^1,\theta^2,\omega^1,\omega^2,\omega^3,\pi^1,\pi^2,\pi^3,\pi^4 \} \end{equation} that is adapted to the filtration \begin{equation*} I\subset J\subset T^*\widetilde{\Sigma}. \end{equation*} \quad Since the apparent torsion was absorbed, we can write \begin{equation*} d\theta^a \equiv A_{\epsilon i}^a\pi^\epsilon \wedge \omega^i \qquad \mod\ \{I\}, \end{equation*} where the non-vanishing entries of $ A^a_{i\epsilon}$ are \begin{equation}\label{Lands_tableau} A_{11}^1=A^1_{23}=A^2_{31}=A^2_{42}=1. \end{equation} \quad The 1-forms $\pi_a^i$ are sections of $T^*\widetilde{\Sigma} / J$, or, equivalently, they are components of a section of $I^*\otimes J/I$. \\ \quad From now on, by abuse of notation we will write the structure equations of the EDS as \begin{equation*} \begin{split} & \theta^a=0\\ & d\theta^a \equiv A_{\epsilon i}^a\pi^\epsilon \wedge \omega^i \qquad \mod\ \{I\}\\ & \Omega=\omega^1\wedge \omega^2\wedge \omega^3\neq 0. \end{split} \end{equation*} \quad From (\ref{Lands_tableau}) it follows that the tableau $A$ of the linear Pfaffian system $(I,J)$ is given by \begin{equation*} A= \begin{pmatrix} a & 0 & d \\ b & c & 0 \end{pmatrix}, \end{equation*} where $a,b,c,d$ are nonzero constants. Therefore, the reduced characters of the tableau $A$ are $s_1=2$, $s_2=2$, $s_3=0$, and $s_0=$rank$\ I=2$.\\ \quad The symbol $B$ of the linear Pfaffian system $(I,J)$ is then \begin{equation*} B= \begin{pmatrix} 0 & e & 0 \\ 0 & 0 & f \end{pmatrix}, \end{equation*} where $e,f$ are nonzero constants. \subsection {The integrability conditions} \quad Let us denote by $(G_3(T\widetilde{\Sigma}),\pi,\widetilde{\Sigma})$ the Grassmannian of three planes through the origin of $T\widetilde{\Sigma}$. 
Then the dimensions of the total space and of the fiber over a point $p\in \widetilde{\Sigma}$ are given by \begin{equation*} \dim G_3(T\widetilde{\Sigma}) =27,\qquad \dim G_3(T_p\widetilde{\Sigma}) =18, \end{equation*} respectively.\\ \quad If we denote by $p_i^a$, ($a=1,...,6$, $i=1,2,3$) the local coordinates of the fiber $G_3(T_p \widetilde{\Sigma})$, then for a $3$-plane $E\in G_3(T_p\widetilde{\Sigma})$ that satisfies the independence condition $(\omega^1\wedge\omega^2\wedge\omega^3)_{|E}\neq 0$, after a possible relabeling of the coordinates, the equations of integral elements of $(I,J)$ are \begin{equation}\label{integ_elem} \begin{split} & \theta^b=0,\ (b=1,2)\\ & \pi^\epsilon-p^{\epsilon}_i\omega^i=0, \end{split} \end{equation} where $p_i^\epsilon$, ($\epsilon=1,...,4$, $i=1,2,3$) are functions on $G_3(T_p\widetilde{\Sigma})$.\\ \quad The relations (\ref{integ_elem}), regarded as a system of linear equations in $p_i^\epsilon$, are the {\it first order integrability conditions} of the linear Pfaffian system $(I,J)$. One can remark that in the most general case these equations are over-determined, in the sense that there are more equations than unknowns. Therefore, in general, such linear systems are likely to be incompatible.\\ \quad In our case, using the fact that integral elements of $\theta^a=0$ must also satisfy $d\theta^a=0$, and using (\ref{Lands_tableau}), we obtain the solutions of (\ref{integ_elem}) as follows: \begin{equation}\label{integ_manif} \begin{split} & p_2^1=0,\qquad \qquad \qquad p_3^1=p_1^2,\\ & p_2^2=0,\\ & p_3^3=0,\qquad \qquad \qquad p_2^3=p_1^4,\\ & p_3^4=0, \end{split} \end{equation} the rest of the functions, namely $p^1_1,p^2_1,p^2_3,p^3_1,p^4_1,p^4_2$, remaining arbitrary.\\ \quad One can see that the maximum rank of this system of functions is $d=6$, and that this rank is locally constant. In other words, $\mathcal V_3(\mathcal{I},\Omega)$ is a smooth codimension 6 submanifold of $G_3(T\widetilde{\Sigma})$, where we denoted by $\mathcal V_3(\mathcal{I},\Omega)\subset G_3(T \widetilde{\Sigma})$ the subbundle of 3-dimensional integral elements of $\mathcal{I}$. \\ \quad Remark that $(\mathcal V_3(\mathcal{I},\Omega),\widetilde{I})$ is the prolongation of $(\widetilde{\Sigma},\mathcal{I})$, where $\mathcal{I}$ is the exterior differential system generated by the Pfaffian system $I$. Here, $\widetilde{I}$ is the exterior differential system on $\mathcal V_3(\mathcal{I},\Omega)$ generated by the Pfaffian system \begin{equation*} \widetilde{I}=\{\theta^1,\theta^2,\pi^1-p_1^1\omega^1-p_1^2\omega^3, \pi^2-p_1^2\omega^1-p_3^2\omega^3,\pi^3-p_1^3\omega^1-p_1^4\omega^2, \pi^4-p_1^4\omega^1-p_2^4\omega^2\}, \end{equation*} i.e. $\widetilde{I}$ is the pullback to $\mathcal V_3(\mathcal{I},\Omega)$, by the inclusion $\iota:\mathcal V_3(\mathcal{I},\Omega)\to G_3(T\widetilde{\Sigma})$, of the canonical system on $G_3(T\widetilde{\Sigma})$.\\ \quad Moreover, since the dimension of the solution space of equations (\ref{integ_manif}) is 6, the Cartan involutivity test is satisfied: \begin{equation*} s_1+2s_2+3s_3=2+2\cdot2+0=6=d. \end{equation*} \quad Using the Cartan-K\"{a}hler theorem for linear Pfaffian systems (see \cite{IL2003}, p.
176, \cite{Br et al 1991} for a more general exposition, and the Appendix), we can summarize the findings in this section in the following theorem.\\ \quad{\bf Theorem 4.1.}\\ \quad {\it Assume there exist three 1-forms $(\omega^1, \omega^2,\omega^3)$ on a 9-dimensional manifold $\widetilde{\Sigma}$ which satisfy the structure equations \eqref{Lands_struct_eq}, where I, K are considered as free coordinates on $\widetilde{\Sigma}$, and dI, dK are independent from $\omega^1, \omega^2,\omega^3$.\\ \quad Then the pair $(I,J)$ is an involutive linear Pfaffian system with independence condition on $\widetilde{\Sigma}$. Therefore, solving a series of Cauchy problems yields analytic integral manifolds of $(I,J)$ passing through $\tilde{u}\in \widetilde{\Sigma}$ that, roughly speaking, depend on two functions of two variables.} \quad We emphasize the fact that the existence of analytical integral manifolds of $(I,J)$ is guaranteed only in a neighborhood $U\subset \widetilde{\Sigma}$ of $\tilde{u}$. \\ \quad Therefore, for any point $\tilde{u}\in\widetilde{\Sigma}$ chosen such that $I_1\neq 0$, the existence of an integral submanifold of $(I,J)$ passing through this point is guaranteed by Theorem 4.1. This is a non-trivial generalized Landsberg surface structure on which the independence condition $\omega^1\wedge\omega^2\wedge\omega^3\neq 0$ is satisfied. In other words, this integral submanifold can be realized as the graph of the analytical mapping \begin{equation*} \begin{split} &\Sigma\to\widetilde{\Sigma},\\ &u \mapsto (u,I(u),K(u),I_1(u),I_3(u),K_1(u),K_2(u))\in \widetilde{\Sigma}. \end{split} \end{equation*} \quad This proves R. Bryant's prophecy. Unfortunately, these generalized structures are not always amenable, in other words, they are not always realizable as Finsler structures on surfaces as will be seen.\\ \quad {\bf Remark.}\\ \quad If we write the structure equations as \begin{equation*} \begin{pmatrix} d\theta^1\\ d\theta^2 \end{pmatrix} = \begin{pmatrix} \pi^1 & 0 & \pi^2 \\ \pi^3 & \pi^4 & 0 \end{pmatrix} \wedge \begin{pmatrix} \omega^1\\ \omega^2\\ \omega^3 \end{pmatrix} , \end{equation*} then we can put them in a normal form which reflects the Cartan test for involutivity. \\ \quad Indeed, if one changes the basis $\{\omega^1,\omega^2,\omega^3\}$ to $\{\widetilde{\omega}^1:=\omega^1,\widetilde{\omega}^2:=\omega^2,\widetilde{\omega}^3:=\omega^3-\omega^2\}$, then it follows \begin{equation*} \begin{split} & d\theta^1\equiv {\pi}^1\wedge\widetilde{\omega}^1+{\pi}^2\wedge\widetilde{\omega}^2+{\pi}^2\wedge \widetilde{\omega}^3\\ & d\theta^2\equiv {\pi}^3\wedge\widetilde{\omega}^1+{\pi}^4\wedge\widetilde{\omega}^2,\qquad\qquad\qquad \mod {I}. \end{split} \end{equation*} \quad Therefore, in this frame, the tableau $A$ of $(I,J)$ is now given by \begin{equation} A= \begin{pmatrix} a & d & d \\ b & c & 0 \end{pmatrix}. \end{equation} \quad One can now directly verify by visual inspection that, indeed, there are $s_1=2$ independent 1-forms in the first column of the tableau matrix of $(I,J)$, $s_1+s_2=4$ independent 1-forms in the first two columns, and $s_1+s_2+s_3=4$ independent 1-forms in the first three columns, i.e. in the entire matrix. This agrees with Cartan's test for involutivity. 
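\\ \quad For the reader less familiar with the Cartan--K\"{a}hler machinery, let us also recall (a general fact, see \cite{IL2003}, \cite{Br et al 1991}) where the count in Theorem 4.1 comes from: for an involutive linear Pfaffian system the local analytic integral manifolds depend, in Cartan's sense, on $s_q$ arbitrary functions of $q$ variables, where $s_q$ is the last non-vanishing reduced character. In our case $s_1=s_2=2$ and $s_3=0$, so the last non-vanishing character is $s_2=2$, which is exactly the statement that the structures above depend on two functions of two variables.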
\subsection{The existence of generalized Landsberg structures on surfaces} \quad In the present section we are going to generalize our setting and show the existence of the coframes $\omega$ satisfying (\ref{Lands_struct_eq}) together with the scalar functions $I$ and $K$ satisfying (\ref{Lands_Bianchi}), without using any of the assumptions in \S4.1, \S4.2. \\ \quad Let $\Sigma$ be again a 3-manifold, and let $\pi:\mathcal{F}(\Sigma)\to\Sigma$ be its frame bundle, namely \begin{equation*} \mathcal{F}(\Sigma)=\{(u,f_u) | f_u:T_u\Sigma\to V \ \textrm{linear isomorphism}\}, \end{equation*} where $V$ is a 3-dimensional real vector space.\\ \quad Let $\eta$ be the tautological $V$-valued 1-form on $\mathcal{F}(\Sigma)$, defined as usual by \begin{equation}\label{taut_def} \eta_f(w)=f_u(\pi_*w), \end{equation} where $f=(u,f_u)\in \mathcal{F}(\Sigma)$, and $w\in T_f\mathcal{F}(\Sigma)$.\\ \quad It is known that a coframe on the manifold $\mathcal{F}(\Sigma)$ is given by $(\eta^i,\alpha^i_j)$, $i,j=1,2,3$, where $\eta^i$ are the components of the $V$-valued tautological form $\eta$, and $\alpha^i_j$ are the 1-forms on $\mathcal{F}(\Sigma)$ that satisfy the structure equations \begin{equation}\label{str.eq_1} d\eta^i=-\alpha^i_j\wedge\eta^j,\qquad i,j=1,2,3. \end{equation} \quad Such 1-forms always exist, but without supplementary conditions, they are not unique. These forms are the connection forms of the frame bundle.\\ \quad Here, we choose a ``flat type connection form'', i.e. 1-forms $\alpha^i_j$ satisfying \begin{equation}\label{str.eq_2} d\alpha^i_j=\alpha^i_k\wedge\alpha^k_j,\qquad i,j,k=1,2,3. \end{equation} \quad We define next the following (local) trivialization of the frame bundle \begin{equation}\label{id} \begin{split} t:& \ \mathcal{F}(\Sigma)\to \Sigma\times GL(3,\mathbb{R})\\ &f=(u,f_u)\mapsto (u,(f_j^i)), \end{split} \end{equation} where for a coordinate system $(x^1,x^2,x^3)$ on $\Sigma$, and a basis $\{e_1,e_2,e_3\}$ of $V$, $(f_j^i)$ is the representation matrix of the mapping $f_u:T_u\Sigma\to V$ with respect to the bases $\{\frac{\partial}{\partial x^i}\}$ and $\{e_i\}$.\\ \quad A system of coordinates on $\Sigma\times GL(3,\mathbb{R})$ is given by $(x^i,f^i_j)$, $i,j=1,2,3$, and a coframe on the manifold $\Sigma\times GL(3,\mathbb{R})$ will be $(\omega^i,df^i_j)$, where we put \begin{equation}\label{omegas} \omega^i=f_j^idx^j. \end{equation} \quad We remark that the tautological 1-forms $\eta=(\eta^i)$, $i=1,2,3$, on $\mathcal{F}(\Sigma)$ correspond to the 1-forms $(\omega^i)$ under the identification (\ref{id}). This can be verified by a direct computation, checking that the 1-forms $\omega^i$ in (\ref{omegas}) satisfy (\ref{taut_def}).\\ \quad Moreover, if we put \begin{equation*} \beta_j^i=d(f_k^i)(f^{-1})_j^k,\qquad i,j,k=1,2,3, \end{equation*} then the 1-forms $(\beta_j^i)$ on $\mathcal{F}(\Sigma)$ correspond to the ``connection forms'' $(\alpha_j^i)$. Indeed, a straightforward computation shows that the $\beta_j^i$'s defined above verify the structure equations (\ref{str.eq_1}), (\ref{str.eq_2}).\\ \quad With these preparations in hand, we move on to the study of the existence of a coframe $\omega$ and the scalars $I$, $K$ on the 3-manifold $\Sigma$ that satisfy (\ref{Lands_struct_eq}) and (\ref{Lands_Bianchi}), respectively.
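\quad Before doing so, let us note that the second set of structure equations can be checked directly (a sketch of the computation): since $d\bigl((f^{-1})^k_j\bigr)=-(f^{-1})^k_a\, df^a_b\, (f^{-1})^b_j$, we have \begin{equation*} d\beta^i_j=-df^i_k\wedge d\bigl((f^{-1})^k_j\bigr) =\bigl(df^i_k(f^{-1})^k_a\bigr)\wedge\bigl(df^a_b(f^{-1})^b_j\bigr) =\beta^i_a\wedge\beta^a_j, \end{equation*} which is precisely (\ref{str.eq_2}).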
\\ \quad In order to do this, we consider the 18-dimensional manifold \begin{equation*} \widetilde{\Sigma}=\mathcal{F}(\Sigma)\times \mathbb{R}^6 \end{equation*} with the coframe \begin{equation*} \{\eta^1,\eta^2,\eta^3,(\alpha_j^i)_{i,j=1,2,3},\theta^1,\theta^2,\pi^1,\pi^2,\pi^3,\pi^4\}, \end{equation*} where $\pi^1,\pi^2,\pi^3,\pi^4$ are the 1-forms in \eqref{pi_matrix}, \eqref{pi_vector}.\\ \quad We consider the 1-forms \begin{equation}\label{forms1_eta} \begin{split} \Theta^1 & =d\eta^1+I\eta^1\wedge\eta^3-\eta^2\wedge\eta^3\\ \Theta^2 & =d\eta^2+\eta^1\wedge\eta^3\\ \Theta^3 & =d\eta^3-K\eta^1\wedge\eta^2 \end{split} \end{equation} and \begin{equation}\label{forms2_eta} \begin{split} \theta^1 & =dI-I_1\eta^1-I_3\eta^3\\ \theta^2 & =dK-K_1\eta^1-K_2\eta^2+KI\eta^3, \end{split} \end{equation} obtaining in this way the exterior differential system \begin{equation*} \widetilde{\mathcal I}=\{\Theta^1,\Theta^2,\Theta^3,\theta^1,\theta^2\} \end{equation*} with independence condition \begin{equation*} \Omega=\eta^1\wedge\eta^2\wedge\eta^3\neq 0. \end{equation*} \quad Let us remark that any element $E\in G_3(T\widetilde{\Sigma})$ such that $\Omega |_{E}\neq 0$ is defined by \begin{equation*} \begin{split} \alpha^i_{j\ |E} & =A^i_{jk}(E)\eta^k_{\ |E}\\ \theta^i_{\ |E} & =B_k^i(E)\eta^k_{\ |E}\\ \pi^i_{\ |E} & =C_k^i(E)\eta^k_{\ |E}, \end{split} \end{equation*} where $(A^i_{jk})_{i,j,k=1,2,3}$, $(B_k^i)_{i=1,2;k=1,2,3}$, $(C_k^i)_{i=1,2,3,4;k=1,2,3}$ are smooth functions on $G_3(T\widetilde{\Sigma},\Omega)$. \quad In other words, $(A^i_{jk},B_k^i,C_k^i)$ are the fiber coordinates of the fibration $G_3(T\widetilde{\Sigma},\Omega)\to \widetilde{\Sigma}$. This fiber is 45-dimensional.\\ \quad However, due to the identification (\ref{id}) and the discussion above, we can consider the local coordinates \begin{equation*} (x^1,x^2,x^3,(f_j^i)_{i,j=1,2,3},I,K,I_1,I_3,K_1,K_2)\in\widetilde{\Sigma} \end{equation*} on the 18-dimensional manifold $\mathcal{F}(\Sigma)\times\mathbb{R}^6$ and identify the 1-forms $\eta^i$ with $\omega^i$ given in (\ref{omegas}). Since the settings are equivalent, for simplicity, we will work in these coordinates instead of the general case described at the beginning of this subsection. \\ \quad It follows that the 1-forms (\ref{forms1_eta}), (\ref{forms2_eta}) of the exterior differential system $\widetilde{\mathcal{I}}$ can be written as \begin{equation}\label{forms1_omega} \begin{split} \Theta^1 & =d\omega^1+I\omega^1\wedge\omega^3-\omega^2\wedge\omega^3\\ \Theta^2 & =d\omega^2+\omega^1\wedge\omega^3\\ \Theta^3 & =d\omega^3-K\omega^1\wedge\omega^2 \end{split} \end{equation} and \begin{equation}\label{forms2_omega} \begin{split} \theta^1 & =dI-I_1\omega^1-I_3\omega^3\\ \theta^2 & =dK-K_1\omega^1-K_2\omega^2+KI\omega^3, \end{split} \end{equation} with independence condition \begin{equation*} \Omega=\omega^1\wedge\omega^2\wedge\omega^3\neq 0, \end{equation*} where the $\omega$'s are given by (\ref{omegas}).\\ \quad The integral manifolds of $(\widetilde{\mathcal{I}},\Omega)$ will consist of the coframe $\{\omega^1,\omega^2,\omega^3\}$ and the functions $(I,K,I_1,I_3,K_1,K_2)$ on $\Sigma$. The projection of such an integral manifold to $\Sigma$ gives a generalized Landsberg structure $(\Sigma,\omega)$.\\ \quad Let us remark that the situation is now quite different from the one in \S4.1.
The $\Theta$'s are 2-forms, while the $\theta$'s are 1-forms, so the exterior differential system $(\widetilde{\mathcal{I}},\Omega)$ is not a linear Pfaffian system, and therefore we cannot apply the Cartan-K\"ahler theorem for linear Pfaffian systems as we did previously. Even though there are more general versions of the Cartan-K\"ahler theorem, the strategy we adopt here is to prolong $\widetilde{\mathcal{I}}$ in order to obtain a linear Pfaffian system (for details see \cite{IL2003}, p. 177).\\ \quad Let us consider the prolongation $\mathcal{V}(\widetilde{\mathcal{I}},\Omega)\subset G_3(T\widetilde{\Sigma})$ over $\widetilde{\Sigma}$, with the fiber inhomogeneous Grassmannian coordinates $\Bigl((p^i_j)_{i=1,2;j=1,2,3}, (p_{jk}^i)_{i,j,k=1,2,3}$, \\ $(q_k^i)_{i=1,2,3,4;k=1,2,3}\Bigr)$, such that \begin{equation*} \begin{split} \theta^i_{|_E} & =p_k^i(E)dx^k_{|_E}\\ {df_j^i}_{|_E} & =p_{jk}^i(E)dx^k_{|_E}\\ \pi^i_{|_E} & =q_k^i(E)dx^k_{|_E}, \end{split} \end{equation*} for any integral element $E$.\\ \quad Then, the equations \begin{equation*} \begin{split} & \theta^i=d\theta^i=0,\qquad i=1,2,\\ & \Theta^j=d\Theta^j=0,\qquad j=1,2,3, \end{split} \end{equation*} will give the defining equations of the prolongation $\mathcal{V}(\widetilde{\mathcal{I}},\Omega)$.\\ \quad As a concrete computation, we remark first that $\theta^i=0$ will imply $p_j^i=0$, so these functions will not appear in our analysis. A similar computation as in \S4.1 shows that the structure equations for the $\theta$'s are \begin{equation*} \begin{split} & d\theta^1\equiv \pi^1\wedge \omega^1+\pi^2\wedge \omega^3\quad \mod \{\theta,\Theta\} \\ & d\theta^2\equiv \pi^3\wedge \omega^1+\pi^4\wedge \omega^2. \end{split} \end{equation*} These equations will give some of the $q_j^i$'s.\\ \quad The equations $\Theta^i\equiv 0$ $\mod \{\theta,\Theta\}$ will give some of the $p_{jk}^i$. The rest of the equations $d\Theta^i\equiv 0$ $\mod \{\theta,\Theta\}$ will be satisfied due to some Bianchi identities, so they will give no further conditions.\\ \quad In this way, we obtain the linear Pfaffian system $\widetilde{\widetilde{\mathcal{I}}}$ on $\mathcal{V}(\widetilde{\mathcal{I}},\Omega)$ generated by the 1-forms \begin{equation}\label{large_lin_pfaff} \{\theta^1,\theta^2,({\Theta^i_j})_{i,j=1,2,3},\Pi^1,\Pi^2,\Pi^3,\Pi^4\}, \end{equation} where \begin{equation*} \begin{split} & \Theta^i_j=df^i_j-p_{jk}^idx^k,\quad i,j,k=1,2,3,\\ & \Pi^i=\pi^i-q^i_kdx^k,\quad i=1,2,3,4, \quad k=1,2,3 \end{split} \end{equation*} and we will study its involutivity by means of Cartan-K\"ahler theory as we did in \S4.1, \S4.2.\\ \quad It is easy to see that the conditions $d\theta^i=0$, $i=1,2$, yield 6 relations among the 12 unknown functions $(q_j^i)_{i=1,2,3,4;\ j=1,2,3}$. We solve $q_3^1$, $q_2^2$, $q_3^2$ in terms of $q_1^1$, $q_2^1$, $q_1^2$, and $q_3^3$, $q_2^4$, $q_3^4$ in terms of $q_1^3$, $q_2^3$, $q_1^4$.
It follows \begin{equation*} \begin{split} & \quad q_2^2=\frac{1}{f_1^3}\Bigl(q_1^1f_2^1-q_2^1f_1^1+q^2_1f_1^3\Bigr),\\ & \begin{pmatrix} q_3^1\\ q_3^2 \end{pmatrix}= \begin{pmatrix} -f_2^1 & -f_2^3 \\ f_1^1 & f_1^3 \end{pmatrix}^{-1} \begin{pmatrix} -q_2^1f_3^1 & -q_2^2f_3^2 \\ q_1^1f_3^1 & q_1^2f_3^3 \end{pmatrix}, \end{split} \end{equation*} and \begin{equation*} \begin{split} & \quad q_2^4=\frac{1}{f_1^2}\Bigl(q_3^1f_2^1-q_2^3f_1^1+q^4_1f_1^2\Bigr),\\ & \begin{pmatrix} q_3^3\\ q_3^4 \end{pmatrix}= \begin{pmatrix} -f_2^1 & -f_2^2 \\ f_1^1 & f_1^2 \end{pmatrix}^{-1} \begin{pmatrix} -q_2^1f_3^1 & -q_2^4f_3^2 \\ q_1^3f_3^1 & q_1^4f_3^2 \end{pmatrix}. \end{split} \end{equation*} \quad In the same way, from $\Theta^i_j=0$, $i,j=1,2,3$, we obtain 9 relations with 27 unknown functions $(p_{jk}^i)$, $i,j,k=1,2,3$. Solving 9 of them, we obtain \begin{equation*} \begin{split} & p_{31}^1=p_{13}^1-I(f_3^1f_1^3-f_1^1f_3^3)+(f_3^2f_1^3-f_1^2f_3^3)\\ & p_{12}^1=p_{21}^1-I(f_2^1f_1^3-f_1^1f_2^3)+(f_2^2f_1^3-f_1^2f_2^3)\\ & p_{23}^1=p_{32}^1-I(f_3^1f_2^3-f_2^1f_3^3)+(f_3^2f_2^3-f_2^2f_3^3), \end{split} \end{equation*} \begin{equation*} \begin{split} & p_{12}^2=p_{21}^2-(f_1^1f_2^3-f_2^1f_1^3)\\ & p_{23}^2=p_{32}^2-(f_2^1f_3^3-f_3^1f_2^3)\\ & p_{31}^2=p_{13}^2-(f_3^1f_1^3-f_1^1f_3^3), \end{split} \end{equation*} \begin{equation*} \begin{split} & p_{12}^3=p_{21}^3+K(f_1^1f_2^2-f_2^1f_1^2)\\ & p_{23}^3=p_{32}^3+K(f_2^1f_3^2-f_3^1f_2^2)\\ & p_{31}^3=p_{13}^3+K(f_3^1f_1^2-f_1^1f_3^2). \end{split} \end{equation*} \quad Using these relations we study the involutivity of the linear Pfaffian (\ref{large_lin_pfaff}). By similar computations as in \S4.1, \S4.2 we obtain that the structure equations of (\ref{large_lin_pfaff}) are given by \begin{equation}\label{large_str.eq} d\begin{pmatrix} \theta^1 \\ \theta^2 \\ \Theta^1_1 \\ \Theta^1_2 \\ \Theta^1_3 \\ \Theta^2_1 \\ \Theta^2_2 \\ \Theta^2_3 \\ \Theta^3_1 \\ \Theta^3_2 \\ \Theta^3_3 \\ \Pi^1 \\ \Pi^2 \\ \Pi^3 \\ \Pi^4 \end{pmatrix}\equiv \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ \rho^1 & \rho^2 & \rho^3\\ \rho^2 & \rho^4 & \rho^5\\ \rho^3 & \rho^5 & \rho^6\\ \rho^7 & \rho^8 & \rho^9\\ \rho^8 & \rho^{10} & \rho^{11}\\ \rho^9 & \rho^{11} & \rho^{12}\\ \rho^{13} & \rho^{14} & \rho^{15}\\ \rho^{14} & \rho^{16} & \rho^{17}\\ \rho^{15} & \rho^{17} & \rho^{18}\\ \rho^{19} & \rho^{20} & \Phi^1\\ \rho^{21} & \Phi^2 & \Phi^3\\ \rho^{22} & \rho^{23} & \Phi^4\\ \rho^{24} & \Phi^5 & \Phi^6 \end{pmatrix} \begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix}\qquad \mod\{\widetilde{\widetilde{\mathcal{I}}}\}, \end{equation} where $\rho^i$, $i=1,\dots,24$ are 1-forms on $\mathcal{V}(\widetilde{\mathcal{I}},\Omega)$, linearly independent from the 1-forms in (\ref{large_lin_pfaff}), and $\Phi^j$, $j=1,\dots,6$ are linear combinations of the $\rho$'s. \\ \quad It means that the apparent torsion can be absorbed. It can also be checked that the space of integral elements at each point has dimension 38.\\ \quad On the other hand, the reduced characters of the tableau corresponding to (\ref{large_str.eq}) are \begin{equation*} s_1=13,\quad s_2=8,\quad s_3=3, \end{equation*} and Cartan's test for involutivity reads \begin{equation*} s_1+2s_2+3s_3=38.
\end{equation*} Therefore the Pfaffian system (\ref{large_lin_pfaff}) is involutive.\\ \quad Putting all these together, and assuming that $\Sigma$ and $\alpha,\eta$ are analytic, from Cartan-K\"ahler theory we obtain \quad {\bf Theorem 4.2.}\\ \quad {\it The linear Pfaffian prolongation $(\mathcal{V}(\widetilde{\mathcal{I}},\Omega),\widetilde{\widetilde{\mathcal{I}}})$ of the exterior differential system $\widetilde{\mathcal{I}}$ on $\widetilde{\Sigma}$ is involutive. Moreover, the analytical integral manifolds of $\widetilde{\widetilde{\mathcal{I}}}$ depend on 3 functions of 3 variables. }\\ \quad Since the projection of an integral manifold of the prolongation $\widetilde{\widetilde{\mathcal{I}}}$ to $\widetilde{\Sigma}$ is also an integral manifold of $\widetilde{\mathcal{I}}$, it follows \quad {\bf Corollary 4.3.}\\ \quad {\it There exist non-trivial generalized Landsberg structures on a 3-manifold $\Sigma$.} \quad The non-triviality of the integral manifolds can be obtained by choosing an appropriate initial value. See the discussion at the end of \S4.2.\\ \quad {\bf Remark.}\\ \quad We point out that the degree of freedom of the integral manifolds of $\widetilde{\widetilde{\mathcal{I}}}$ does not equal the degree of freedom of the scalar functions $I$ and $K$. The reason is that the 3 functions of 3 variables obtained in Theorem 4.2 include the degree of freedom of the coframe $(\omega^1,\omega^2,\omega^3)$ as well. \section{The local amenability of generalized Landsberg structures on surfaces} \quad The notion of amenability given in Definition 2.2 has the following local version \quad {\bf Definition 5.1.} The generalized Finsler structure $(\Sigma,\omega)$ is called {\it locally} {\it amenable} if for any point $u\in \Sigma$, there exists an open neighborhood $U\subset \Sigma$ of $u$ to which $(\Sigma,\omega)$ restricts to be amenable, i.e. $(U,\omega_{|U})$ is amenable in the sense of Definition 2.2. \quad We can now formulate a local version of the Theorem 2.1. {\bf Theorem 5.1.}\quad {\it Let $(\Sigma,\omega)$ be a generalized Finsler structure. Then the following two conditions always hold good. (2)' $(\Sigma,\omega)$ is locally amenable. (3)' The mapping $\nu:U\to T(\widetilde{U})$ is a smooth embedding, where $\widetilde{U}$ is the leaf space of the foliation $\{\omega^1_{|U}=0,\omega^2_{|U}=0\}$. } {\it Proof.} The proof is quite straightforward. Remark first that the differential system $\{\omega^1=0,\omega^2=0\}$ is completely integrable. Indeed, the structure equations (\ref{finsler_struct_eq}) of a generalized Finsler structure show that \begin{equation*} \begin{split} & d\omega^1 \equiv 0\qquad \qquad \mod\{\omega^1,\omega^2\}.\\ & d\omega^2 \equiv 0 \end{split} \end{equation*} \quad It follows from Frobenius theorem that for any point $u\in \Sigma$, there exists an open neighborhood $U\subset\Sigma$ of $u$ such that the leaf space of the foliation $\{\omega^1_{|U}=0,\omega^2_{|U}=0\}$ is a differentiable manifold, say $\widetilde{U}$, such that the canonical projection $\pi:U\to \widetilde{U}$ is a smooth submersion. \quad From here we see immediately that $\nu:U\to T(\widetilde{U})$ is a smooth embedding. \begin{flushright} Q. E. D. \end{flushright} \quad We point out that the condition (1) in Theorem 2.1 is not necessarily true for this $U$. \quad Indeed, imagine for a moment the case when the generalized Finsler structure $(\Sigma,\omega)$ satisfies all the conditions in Theorem 2.1, i.e. 
it is a classical Finsler structure on a differentiable surface $M$ such that $\pi:\Sigma\to M$ is a smooth submersion. In this case, even if we restrict ourselves to a small neighborhood $\widetilde{U}\subset M$, the fibers $\Sigma_x$ over $x\in \widetilde{U}$ are not changed in any way; they remain diffeomorphic to $S^1$ when we shrink the base manifold $M$. \quad This situation changes dramatically when we are working with a local generalized structure on $\Sigma$. Considering the neighborhood $U \subset \Sigma$ as given by the Frobenius theorem, the fibers are also {\it cut off}. The situation is similar to taking a neighborhood of a point on the sphere $S^2$, for example. In general, the great circles will have only some open arcs contained in this neighborhood, and there is no reason for these arcs to be compact. \quad Hence, the local conditions in Theorem 5.1 are not enough for $(\Sigma, \omega)$ to be a classical Finsler structure on $\widetilde{U}$. \quad Therefore, we have {\bf Corollary 5.2.}\quad {\it Let $(\Sigma,\omega)$ be a generalized Finsler structure and let $U\subset \Sigma$ be the neighborhood given in Theorem 5.1, where (2)' and (3)' are satisfied. \quad Then $(U,\omega_{|U})$ satisfies (1) in Theorem 2.1 if and only if it is a classical Finsler structure on $\widetilde{U}$. } \quad In conclusion, recall that we have proved the existence of non-trivial generalized Landsberg surfaces in Theorem 4.1. In other words, the Cartan--K\"ahler theorem assures us that there exists a neighborhood $U\subset \Sigma$ such that $(U,\omega_{|U})$ is a non-trivial generalized Landsberg surface. \quad On the other hand, since the differential system $(U,\omega^1,\omega^2)$ is completely integrable, from the discussion above it follows that, on a possibly smaller open set of $\Sigma$, there exists a local coordinate system $u=(x,y,p)$ such that the leaf space of the foliation $\{\omega^1=0,\omega^2=0\}$ is a differentiable manifold. \quad We can therefore conclude that for a small enough $\varepsilon>0$, there exist amenable non-trivial generalized Landsberg structures $(U,\omega)$, depending on two functions of two variables, over an open disk $D=\{(x,y):x^2+y^2<\varepsilon\}\subset \widetilde{U}$ in the plane. \quad Finally, we emphasize that these non-trivial generalized Landsberg structures do not necessarily satisfy the condition (1) in Theorem 2.1, so they are not necessarily classical Finsler structures. \section{A special coframing} \quad For a nowhere vanishing smooth function $m$ on $\Sigma$, we define the 1-forms \begin{equation}\label{coframe_change} \begin{split} \theta^1 & =m\omega^2\\ \theta^2 & =\omega^3\\ \theta^3 & =m\omega^1+m_3\omega^2, \end{split} \end{equation} where the subscripts represent the directional derivatives with respect to the generalized Landsberg coframe $(\omega^1,\omega^2,\omega^3)$, i.e. $dm=m_1\omega^1+m_2\omega^2+m_3\omega^3$.
\\ \quad Remark that \begin{equation*} \theta^1\wedge\theta^2\wedge\theta^3=m^2\omega^1\wedge\omega^2\wedge\omega^3, \end{equation*} therefore $\{\theta^1,\theta^2,\theta^3\}$ is a coframe on $\Sigma$ provided $m$ is a nowhere vanishing smooth function on $\Sigma$.\\ \quad An easy linear algebra exercise will show that we have \begin{equation}\label{frame_change} \begin{split} f_1 & =-\frac{m_3}{m^2}\hat{e}_1+\frac{1}{m}\hat{e}_2\\ f_2 & =\hat{e}_3\\ f_3 & =\frac{1}{m}\hat{e}_1, \end{split} \end{equation} where we have denoted by $\{f_1,f_2,f_3\}$ and $\{\hat{e}_1,\hat{e}_2,\hat{e}_3\}$ the dual frames of $\{\theta^1,\theta^2,\theta^3\}$ and $\{\omega^1,\omega^2,\omega^3\}$, respectively.\\ \quad We would like to impose conditions on the function $m$ and the invariants $I$, $K$ such that the new coframe $\theta=\{\theta^1,\theta^2,\theta^3\}$ satisfies the structure equations \begin{equation}\label{k_struct_eq} \begin{split} d\theta^1 & =\theta^2\wedge\theta^3\\ d\theta^2 & =\theta^3\wedge\theta^1\\ d\theta^3 & = k\theta^1\wedge\theta^2, \end{split} \end{equation} where $k$ is a smooth function on $\Sigma$ to be determined (one can see from the third structure equation of the coframe $\theta$ that $dk\wedge \theta^1\wedge \theta^2=0$; therefore the directional derivative of $k$ with respect to $\theta^3$ must vanish). This is a so-called {\it K-Cartan structure} (see \cite{GG2002}). \\ \quad Straightforward computations show that \begin{equation*} d\theta^1 =\theta^2\wedge\theta^3 \end{equation*} holds if and only if \begin{equation*} m_1=0. \end{equation*} This is our first condition on $m$.\\ \quad It also follows that \begin{equation} I=-2\ \frac{m_3}{m},\quad K=m^2. \end{equation}\\ \quad In this case, we obtain \begin{equation} k=1-\frac{m_{33}}{m}. \end{equation} \quad Remark that the {\it Landsberg condition} reads \begin{equation*} I_2=0 \Longleftrightarrow m_{32}=\frac{m_2m_3}{m}, \end{equation*} and the {\it non-triviality conditions} \begin{equation*} \begin{split} I_1 & \neq 0 \Longleftrightarrow m_2\neq 0 \\ I_3 & \neq 0 \Longleftrightarrow mm_{33}-(m_3)^2\neq 0\\ K_2 & \neq 0 \Longleftrightarrow I_1 \neq 0\\ K_3 & \neq 0 \Longleftrightarrow m_3 \neq 0. \end{split} \end{equation*} \quad We obtain therefore the following {\bf Proposition 6.1.} \\ {\it Let $(\Sigma, \omega)$ be a generalized Landsberg structure on the 3-manifold $\Sigma$ and let $m:\Sigma \to \mathbb{R}$ be a smooth nowhere vanishing function satisfying the conditions \begin{enumerate} \item the direction invariance condition \begin{equation} m_1=0 \end{equation} \item the Landsberg condition \begin{equation} m_{23}=\frac{m_2 m_3}{m}. \end{equation} \end{enumerate} Then $\theta=\{\theta^1, \theta^2, \theta^3\}$, with the $\theta^i$'s given in (\ref{coframe_change}), is a coframe on the 3-manifold $\Sigma$ that satisfies the structure equations (\ref{k_struct_eq}) with (3) the curvature condition \begin{equation}\label{curv_cond} k=1-\frac{m_{33}}{m}. \end{equation} } \quad Remark that in this case, besides the conditions in the proposition above, the function $m$ will satisfy the Ricci type identities \begin{equation*} \begin{split} & m_{21}=-m^2m_3\\ & m_{23}-m_{32}=0\\ & m_{31}=m_2.
\end{split} \end{equation*} \quad Conversely, we can start with a coframe $\theta=\{\theta^1, \theta^2, \theta^3\}$ on the 3-manifold $\Sigma$ that satisfies the structure equations (\ref{k_struct_eq}) for a function $k:\Sigma\to \mathbb{R}$ such that $k_{\theta 3}=0$. Here, we denote by $h_{\theta i}$ the directional derivatives of a smooth function $h$ with respect to the coframe $\theta$, i.e. $dh=h_{\theta 1}\theta^1+h_{\theta 2}\theta^2+h_{\theta 3}\theta^3$. Making use of a nowhere vanishing smooth function $m:\Sigma\to \mathbb{R}$, we can construct the 1-forms \begin{equation}\label{coframe_change_omega} \begin{split} &\omega^1=\frac{1}{m}(\theta^3-\frac{m_{\theta 2}}{m}\theta^1)\\ &\omega^2=\frac{1}{m}\theta^1\\ &\omega^3=\theta^2. \end{split} \end{equation} \quad By a simple straightforward computation we obtain {\bf Proposition 6.2.} {\it Let $\theta=\{\theta^1, \theta^2, \theta^3\}$ be a coframe on the 3-manifold $\Sigma$ that satisfies the structure equations (\ref{k_struct_eq}) for a smooth function $k:\Sigma \to \mathbb{R}$, and let $m:\Sigma\to \mathbb{R}$ be a nowhere vanishing smooth function that satisfies the conditions \begin{enumerate} \item the direction invariance condition \begin{equation} m_{\theta 3}=0, \end{equation} \item the Landsberg condition \begin{equation}\label{lands_cond_theta} (L)\qquad m_{\theta 21}=0, \end{equation} \item the curvature condition \begin{equation}\label{curv_cond_theta} (C)\qquad \frac{m_{\theta 22}}{m}=1-k. \end{equation} Then $\omega=\{\omega^1,\omega^2,\omega^3\}$, with the $\omega^i$'s given in (\ref{coframe_change_omega}), is a generalized Landsberg structure on the 3-manifold $\Sigma$ with the invariants \begin{equation} I=-2\frac{m_{\theta 2}}{m},\quad K=m^2. \end{equation} \end{enumerate} } \quad In this case, the Ricci type equations for $m$ in the coframe $\theta^1$, $\theta^2$, $\theta^3$ are \begin{equation}\label{Ricci_cond_theta} \begin{split} & m_{\theta 12}=m_{\theta 21}=0\\ & m_{\theta 13}=-m_{\theta 2}\\ & m_{\theta 23}=m_{\theta 1}. \end{split} \end{equation} \quad{\bf Remarks.} \begin{enumerate} \item Let $(\Sigma,\omega)$ be a generalized Landsberg structure, and suppose that $U\subset \Sigma$ is an open set where the foliation \begin{equation*} \mathcal R = \{\omega^2=0,\omega^3=0\} \end{equation*} is amenable, i.e. the leaf space $\Lambda$ of integral curves of $\hat e_1$ in $U$ is a differentiable manifold, and \begin{equation*} l:U\to \Lambda \end{equation*} is a smooth submersion. Then $\theta^1$, $\theta^2$ can be regarded as the tautological 1-forms of the frame bundle and $\theta^3$ as the Levi-Civita connection of the Riemannian manifold $\Lambda$. The function $k$ plays the role of the Gauss curvature. \item The indicatrix foliation $\mathcal Q:\{\omega^1=0,\omega^2=0\}$ of the generalized Landsberg structure $\{\omega^1,\omega^2,\omega^3\}$ coincides with the geodesic foliation $\mathcal P:\{\theta^1=0,\theta^3=0\}$ of the new coframe $\{\theta^1,\theta^2,\theta^3\}$ on $\Sigma$. \item The normal foliation $\mathcal R:\{\omega^2=0,\omega^3=0\}$ of the generalized Landsberg structure $\{\omega^1,\omega^2,\omega^3\}$ coincides with the indicatrix foliation $\mathcal Q:\{\theta^1=0,\theta^2=0\}$ of the coframe $\{\theta^1,\theta^2,\theta^3\}$ on $\Sigma$. 
\item In the case when the generalized Landsberg structure $\{\omega^1,\omega^2,\omega^3\}$ is realizable as a classical Finsler structure $(M,F)$ on a certain 2-dimensional differentiable manifold $M$ such that $\pi:\Sigma \to M$ is its indicatrix bundle, the leaves of the normal foliation $\mathcal R:\{\omega^2=0,\omega^3=0\}$ are the (normal) lifts of some paths on $M$ called $N$-parallel or $N$-extremal curves. The geometric meaning of such curves $\gamma: [a,b]\to M$ is that the normal vector field $N(t)$ along $\gamma(t)$, defined by $g_N(N,T)=0$, is parallel along $\gamma$. Here $T(t)$ is the tangent vector field to the curve $\gamma$, and $g$ is the Riemannian metric induced by the Finslerian structure in each tangent plane $T_xM$. It is also known that the $N$-parallels $\gamma$ are solutions of a second order differential equation on $M$ and the solution of this SODE is uniquely determined by some initial conditions $(x_0,Y_0)\in TM$ (see \cite{ISS2009} for details). \end{enumerate} \section{The geometry of the quotient space $\Lambda$} \subsection{The setting} \quad In the light of our discussion in \S6, we can conclude that if $U\subset \Sigma$ is an open set where the normal foliation $ \mathcal R = \{\omega^2=0,\omega^3=0\}$ is amenable, i.e. the leaf space $\Lambda$ of integral curves of $\hat e_1$ in $U$ is a differentiable manifold, $l:U\to \Lambda$ is a smooth submersion, and $m$ is a smooth function on $\Sigma$ that satisfies the conditions in Proposition 6.1, then there exist \begin{enumerate} \item a quadratic form $g$ on $\Lambda$ such that $l^*(g)=m^2(\omega^2)^2+(\omega^3)^2$; \item a 2-form $dA$ on $\Lambda$ such that $l^*(dA)=m\omega^2\wedge\omega^3$; \item a smooth function $\bar{m}$ on $\Lambda$ such that $l^*(\bar{m})=m$. \end{enumerate} \quad We can construct now a $g$-orthonormal coframe $\eta^1$, $\eta^2$ on $\Lambda$ (it may be only locally defined), i.e. there exist two 1-forms $ \eta^1$, $\eta^2$ on $\Lambda$, such that \begin{equation*} g=(\eta^1)^2+(\eta^2)^2,\qquad dA=\eta^1\wedge \eta^2>0. \end{equation*} \quad This is equivalent to giving a smooth section $s$ of the orthonormal frame bundle $\nu:\mathcal F(\Lambda) \longrightarrow \Lambda$, i.e. a {\it first order adapted lift} to the geometry of the Riemannian manifold $(\Lambda,g)$.\\ \quad If we denote by $\{e_1,e_2\}$ the dual frame of $\{\eta^1,\eta^2\}$, it follows that $\{e_{1\ |z},e_{2\ |z}\}$ is a $g$-orthonormal basis of $T_z\Lambda$, and $(z,e_{1\ |z},e_{2\ |z})\in \mathcal F(\Lambda)$ is a frame on the manifold $ \Lambda$ at each point $z\in \Lambda$. \\ \quad There exist two smooth functions, say $a$ and $b$, on $\Lambda$ such that \begin{equation*} \begin{split} & d\eta^1=a\eta^1\wedge\eta^2\\ & d\eta^2=b\eta^1\wedge \eta^2. \end{split} \end{equation*} \quad By straightforward computation, it also follows that there exists a 1-form, say $\eta^3$, on $\Lambda$, such that \begin{equation*} \begin{split} & d\eta^1=\eta^2\wedge\eta^3\\ & d\eta^2=\eta^3\wedge \eta^1, \end{split} \end{equation*} and therefore we must have \begin{equation*} \eta^3=-a\eta^1-b\eta^2. \end{equation*} \quad One can easily check that if $\{\widetilde{\eta}^1,\widetilde{\eta}^2\}$ is another $g$-orthonormal frame, then it follows $d\widetilde{\eta}^3=d\eta^3$.\\ \quad By straightforward computation we obtain further \begin{equation*} d\eta^3=R\eta^1\wedge\eta^2, \end{equation*} where $R=a_2-a^2-b_1-b^2$, and $a_i$, $b_i$ denote the directional derivatives with respect to the coframe $\{\eta^1,\eta^2\}$.
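\quad For instance (a sketch of the computation behind the last two displayed formulas), with $\eta^3=-a\eta^1-b\eta^2$ one checks $\eta^2\wedge\eta^3=a\eta^1\wedge\eta^2=d\eta^1$ and $\eta^3\wedge\eta^1=b\eta^1\wedge\eta^2=d\eta^2$, while \begin{equation*} d\eta^3=-da\wedge\eta^1-a\,d\eta^1-db\wedge\eta^2-b\,d\eta^2 =(a_2-a^2-b_1-b^2)\,\eta^1\wedge\eta^2, \end{equation*} which is the stated formula for $R$.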
\\ \quad One can easily see that for another $g$-orthonormal frame $\{\widetilde{\eta}^1,\widetilde{\eta}^2\}$, the function $R$ remains unchanged, and therefore it depends only on $g$. \\ \quad Let us denote by \begin{equation*} s(z)=(z,f_z) \end{equation*} a local section of $\nu:\mathcal F(\Lambda)\to \Lambda$.\\ \quad It is then known that on $\mathcal F(\Lambda)$ there are tautological 1-forms \begin{equation*} \alpha^i_f\in T_f^* \mathcal F(\Lambda),\quad \alpha^i_f(w):=\eta^i(\nu_*w), \end{equation*} where $w\in T_f \mathcal F(\Lambda)$, and $i\in\{1,2\}$, such that \begin{equation*} (\nu^*_f(\eta^1),\nu^*_f(\eta^2))=(\alpha^1_f,\alpha^2_f) \end{equation*} gives a basis of semibasic forms on $\mathcal F(\Lambda)$.\\ \quad Consider now the $g$-orthonormal frame bundle $\nu:\mathcal F_{\textrm{on}}(\Lambda)\to \Lambda$ with its tautological 1-forms $\{\alpha^1,\alpha^2\}$.\\ \quad If $s:\Lambda\to \mathcal F_{\textrm{on}}(\Lambda)$ is a smooth (local) section, then \begin{equation*} \begin{split} & \eta^1=s^*(\alpha^1)\\ & \eta^2=s^*(\alpha^2) \end{split} \end{equation*} is a local coframe on $\Lambda$ such that \begin{equation*} g=(\eta^1)^2+(\eta^2)^2. \end{equation*} \quad Recall that the {\it ``downstairs''} Fundamental Lemma of Riemannian geometry tells us that there exists a unique 1-form $\eta^3$ on $\Lambda$ such that \begin{equation*} \begin{split} & s^*(d\alpha^1)=s^*(\alpha^2)\wedge\eta^3\\ & s^*(d\alpha^2)=\eta^3\wedge s^*(\alpha^1)\\ & d\eta^3=Rs^*(\alpha^1)\wedge s^*(\alpha^2), \end{split} \end{equation*} where $R:\Lambda\to \mathbb R$ is the Gauss curvature of the Riemannian surface $(\Lambda,g)$. These are the so-called {\it ``downstairs''} structure equations of the Riemannian metric $g$ on $\Lambda$.\\ \quad We also recall the {\it ``upstairs''} Fundamental Lemma of Riemannian geometry, which states that there exists a unique 1-form $\alpha^3$ on $\mathcal F(\Lambda)$ such that \begin{equation*} \begin{split} & d\alpha^1=\alpha^2\wedge \alpha^3\\ & d\alpha^2=\alpha^3\wedge\alpha^1\\ & d\alpha^3=k\alpha^1\wedge\alpha^2, \end{split} \end{equation*} where $k:\mathcal F(\Lambda)\to \mathbb R$ is the Gauss curvature {\it ``upstairs''}. In our setting it must satisfy the curvature condition \eqref{curv_cond}. It follows that $R=s^*k$. These are the {\it ``upstairs''} structure equations of the Riemannian metric $g$ on $\Lambda$. One can also see that on $\Lambda $ we have \begin{equation*} (\eta^1,\eta^2,\eta^3)=s^*(\alpha^1,\alpha^2,\alpha^3). \end{equation*} \quad {\it Example 7.1.}\\ \quad Let us consider a flat Riemannian metric $\widetilde g$ on $\Lambda$, i.e. $\widetilde R=0$. It follows that there exist local coordinates $z=(z^1,z^2)$ on $\Lambda$, such that \begin{equation*} \widetilde \eta^1=dz^1,\qquad \widetilde \eta^2=dz^2, \end{equation*} and therefore $a=0$, $b=0$ because $d\widetilde \eta^1=0$, $d\widetilde \eta^2=0$.\\ \quad It follows $\widetilde \eta^3=0$ as well as $R=0$.\\ \quad We construct now the coframe $(z; dz^1,dz^2)$ on $\Lambda$ and its oriented orthonormal frame bundle $\nu:\widetilde{\mathcal F}_{\textrm{on}}(\Lambda)\to \Lambda$ with respect to the Riemannian metric \begin{equation*} \tilde{g}=(dz^1)^2+(dz^2)^2.
\end{equation*}\\ \quad In this case, the tautological 1-forms on $\widetilde{\mathcal F}_{\textrm{on}}(\Lambda)$ will have the normal form \begin{equation*} \begin{split} & \widetilde \alpha^1=\cos(t) dz^1-\sin(t) dz^2\\ & \widetilde \alpha^2=\sin(t) dz^1+\cos(t) dz^2\\ & \widetilde \alpha^3=dt, \end{split} \end{equation*} where $t\in [0,2\pi]$ is the fiber coordinate over $z\in \Lambda$. \quad {\it Example 7.2.} \quad A more general example is the local form of a metric $g=u^2\widetilde g$ conformal to the flat metric discussed above, where $u$ is a smooth function on $\Lambda$. In this case we have $g=(\eta^1)^2+(\eta^2)^2$, where \begin{equation*} \eta^1=u \textrm{d}z^1,\qquad \eta^2=u \textrm{d}z^2. \end{equation*} \quad By exterior differentiation it follows \begin{equation*} \begin{split} & a=-\frac{1}{u^2}\frac{\partial u}{\partial z^2}\\ & b=\ \frac{1}{u^2}\frac{\partial u}{\partial z^1}. \end{split} \end{equation*} \quad If we denote by $\nu:{\mathcal F}_{\textrm{on}}(\Lambda)\to \Lambda$ the bundle of $g$-oriented orthonormal frames on $\Lambda$, we obtain on ${\mathcal F}_{\textrm{on}}(\Lambda)$ the tautological 1-forms \begin{equation*} \begin{split} & \alpha^1=u \widetilde \alpha^1\\ & \alpha^2=u \widetilde \alpha^2\\ & \alpha^3= \widetilde \alpha^3-*d(\log u), \end{split} \end{equation*} where $*$ is the Hodge operator, and $\widetilde \alpha^1, \widetilde \alpha^2$ and $\widetilde \alpha^3$ are the tautological 1-forms and the Levi-Civita connection form of the flat metric $\widetilde g$, respectively.\\ \quad A straightforward computation shows that the Gauss curvature $R$ of $g$ is \begin{equation}\label{Gauss_curv} R=-\frac{1}{u^2}\Delta(\log u), \end{equation} where $\Delta$ is the Laplace operator in the coordinates $(z^1,z^2)$.\\ \quad It follows that a local form for the coframe $( \alpha^1, \alpha^2, \alpha^3)$ is given by \begin{equation*} \begin{split} & \alpha^1=u\Bigl(\cos(t) dz^1-\sin(t) dz^2\Bigr)\\ & \alpha^2=u\Bigl(\sin(t) dz^1+\cos(t) dz^2\Bigr)\\ & \alpha^3=dt-*d(\log u), \end{split} \end{equation*} where $t\in [0,2\pi]$ is the fiber coordinate over $z\in \Lambda$. Here, we denote the pullback $\nu^*(u)$ of $u$ to $\mathcal{F}(\Lambda)$ by the same letter. \subsection{The frame bundle $\mathcal{F}(\Lambda)$} \quad We return to our setting in \S 7.1, and start with an arbitrary Riemannian surface $(\Lambda,g)$ with the area 2-form $dA$ given such that \begin{equation*} g=(\eta^1)^2+(\eta^2)^2,\qquad dA=\eta^1\wedge \eta^2>0, \end{equation*} where $\{\eta^1,\eta^2\}$ is a $g$-orthonormal coframe on $\Lambda$, and $\{e_1,e_2\}$ is its dual frame.\\ \quad We construct as above the $g$-oriented frame bundle $\nu:\mathcal F(\Lambda)\to \Lambda$, where $(z,e_{1\ | z},e_{2\ |z})$ is a $g$-oriented frame on $\Lambda$.\\ \quad Let us denote by $\hat{l}$ the mapping \begin{equation*} \hat{l}:\Sigma\to\mathcal F(\Lambda),\quad u\mapsto \hat{l}(u)=\Bigl(l(u);l_{*,u}(f_{1\ |u}), l_{*,u}(f_{2\ |u}) \Bigr), \end{equation*} where $f_1$, $f_2$ are given in (\ref{frame_change}). {\bf Proposition 7.1.} {\it The mapping $\hat{l}:\Sigma\to\mathcal F(\Lambda)$ defined above is a local diffeomorphism.} \quad We will give the proof of this result below.\\ \quad We have therefore the commutative diagram.
\begin{equation*} \begin{matrix} \Sigma & \xrightarrow{\hat{l}} & \mathcal F(\Lambda)\\ & l \searrow & \downarrow \nu \\ & & \Lambda \end{matrix} \end{equation*} \quad Remark that due to Proposition 7.1 we can locally identify $\Sigma$ with $\mathcal F(\Lambda)$ as well as the coframes $\theta$ and $\alpha$. In order to avoid confusion we will still write $\hat{l}^*$, but we will consider all the formulas proved above for the coframe $\theta$ to hold good for $\alpha$ as well via $\hat{l}^*$.\\ \quad Let us consider now the tautological 1-forms $\{\alpha^1,\alpha^2\}$ on $\mathcal F(\Lambda)$, i.e. \begin{equation*} \nu^*(\eta^1)=\alpha^1,\qquad \nu^*(\eta^2)=\alpha^2, \end{equation*} or, equivalently, \begin{equation}\label{alpha12_omega} \hat{l}^*(\alpha^1)=m\omega^2,\qquad \hat{l}^*(\alpha^2)=\omega^3. \end{equation} \quad A simple computation shows that we must also have \begin{equation*} {l}^*(\eta^1)=m\omega^2,\qquad {l}^*(\eta^2)=\omega^3. \end{equation*} \subsection{ The structure equations } \quad We are going to discuss the structure equations on $\mathcal F(\Lambda)$ and $\Lambda$, respectively. \boxed{ {\it "Upstairs"} } \quad We have mentioned already the {\it "upstairs"} structure equations on $\mathcal F(\Lambda)$. \quad If we pullback the first two equations to $\Sigma$ by the means of $\hat{l}^*$, it follows \begin{equation*} \begin{split} &d (\hat{l}^* \alpha^1)=\hat{l}^*(\alpha^2)\wedge\hat{l}^*(\alpha^3)\\ &d (\hat{l}^* \alpha^2)=\hat{l}^*(\alpha^3)\wedge\hat{l}^*(\alpha^1) \end{split} \end{equation*} and from here, by using (\ref{alpha12_omega}) we obtain \begin{equation}\label{alpha3_omega} \hat{l}^*(\alpha^3)=m\omega^1+m_3\omega^2 \end{equation} on $\Sigma$.\\ \quad Remark that \begin{equation*} \hat{l}^*(\alpha^1\wedge\alpha^2\wedge \alpha^3)= m^2\omega^1\wedge\omega^2\wedge\omega^3\neq 0, \end{equation*} i.e. $\hat{l}$ is indeed a local diffeomorphism and this proves the Proposition 7.1 above. \boxed{ {\it "Downstairs"} } \qquad The {\it "downstairs"} structure equations on $\Lambda$ are \begin{equation*} \begin{split} & d\eta^1=\eta^2\wedge \eta^3\\ & d\eta^2=\eta^3\wedge \eta^1\\ & d\eta^3=R\ \eta^1\wedge \eta^2, \end{split} \end{equation*} where $R$ is the {\it "downstairs"} Gauss curvature of $(\Lambda,g)$.\\ \quad We pullback the last equation above to $\mathcal F(\Lambda)$ by means of $\nu^*$. It follows \begin{equation*} d\alpha^3=\nu^*(R\ \eta^1\wedge \eta^2). \end{equation*} \quad On the other hand, by exterior differentiation of (\ref{alpha3_omega}) we obtain \begin{equation*} \begin{split} \hat{l}^*(d\alpha^3) & =d(m\omega^1)+d(m_3\omega^2)=(m-m_{33})\omega^2\wedge\omega^3\\ &=\frac{m-m_{33}}{m}\hat{l}^*(\alpha^1)\wedge\hat{l}^*(\alpha^2)= (1-\frac{m_{33}}{m})l^*(\eta^1\wedge\eta^2). \end{split} \end{equation*} \quad It follows \begin{equation*} l^*(R\eta^1\wedge\eta^2)=(1-\frac{m_{33}}{m})l^*(\eta^1\wedge\eta^2), \end{equation*} and from here we obtain the following {\it curvature condition} on $\Sigma$: \begin{equation}\label{condC_up} (C)\qquad \frac{m_{33}}{m}l^*(\eta^1\wedge\eta^2)=l^*\ensuremath{\mathbb{B}}igl[(1-R)\eta^1\wedge\eta^2\ensuremath{\mathbb{B}}igr]. \end{equation} \quad We would like to express now the quantity $\frac{m_{33}}{m}$ living on $\Sigma$ as the image of a quantity living on $\Lambda$ through $l^*$. 
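\quad For completeness, here is a sketch of the computation behind the coefficient $(m-m_{33})$ above: using $dm=m_2\omega^2+m_3\omega^3$ (recall $m_1=0$), $dm_3=m_{31}\omega^1+m_{32}\omega^2+m_{33}\omega^3$, and the structure equations $d\omega^1=-I\omega^1\wedge\omega^3+\omega^2\wedge\omega^3$, $d\omega^2=-\omega^1\wedge\omega^3$ of the generalized Landsberg structure, one obtains \begin{equation*} d(m\omega^1)+d(m_3\omega^2)=(m_{31}-m_2)\,\omega^1\wedge\omega^2+(2m_3+mI)\,\omega^3\wedge\omega^1+(m-m_{33})\,\omega^2\wedge\omega^3, \end{equation*} and the first two coefficients vanish by the Ricci type identity $m_{31}=m_2$ and by $I=-2\,m_3/m$.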
\\ \quad Recall from the general theory that if $\{e_1,e_2\}$ is an adapted frame to the geometry of the Riemannian surface $(\Lambda,g)$, this is equivalent with giving a section of the frame bundle $\nu:\mathcal F(\Lambda)\to \Lambda $, i.e. \begin{equation*} s:\Lambda\to \mathcal F(\Lambda),\qquad \nu\circ s=id_\Lambda, \end{equation*} i.e. we have a so called {\it first order adapted lift}.\\ \quad Let us consider next an arbitrary smooth function $\bar{m}$ on $\Lambda$, and lift it {\it ``upstairs''}, i.e. we obtain a function $\widetilde{m}=\bar{m}\circ \nu$ on $\mathcal F(\Lambda)$, such that $s^*(\widetilde{m})=\bar{m}$, and a function $m$ on $ \Sigma$ such that \begin{equation} m=\hat{l}^*(\widetilde{m})=\hat{l}^*(\nu^*\bar{m})=(\nu\circ \hat{l})^*\bar{m}. \end{equation} \quad We take next the exterior derivative of the relation $m=l^*(\bar m)$. It follows \begin{equation*} \begin{split} dm=l^*(d\bar m)=l^*(\bar m_1\eta^1+\bar m_2\eta^2)=l^*(\bar m_1)m\omega^2+l^*(\bar m_2)\omega^3, \end{split} \end{equation*} i.e. $dm$ is a linear combination of the 1-forms $\omega^2$, $\omega^3$. This implies \begin{equation*} m_1=0. \end{equation*} \quad It follows that this $m$ can be used to relate the coframes $\omega$ and $\alpha$ as in \S6.1. Under these conditions, we take the exterior derivative of the relation $m=\hat{l}^*(\widetilde{m})$. It follows that \begin{equation*} \begin{split} dm & =m_2\omega^2+m_3\omega^3=\hat{l}^*(\widetilde{m}_1\alpha^1+\widetilde{m}_2\alpha^2+ \widetilde{m}_3\alpha^3)\\ & =\hat{l}^*(\widetilde{m}_1)m\omega^2+\hat{l}^*(\widetilde{m}_2)\omega^3+\hat{l}^*(\widetilde{m}_3) (m \omega^1+m_3\omega^2), \end{split} \end{equation*} and from here, we obtain \begin{equation*} \begin{split} & \hat{l}^*(\widetilde{m}_1)=\frac{m_2}{m}\\ & \hat{l}^*(\widetilde{m}_2)=m_3\\ & \hat{l}^*(\widetilde{m}_3)=0. \end{split} \end{equation*} \quad Remark that Proposition 7.1 together with the last condition above imply that \begin{equation*} \widetilde{m}_3=0. \end{equation*} \quad By a straightforward computation we also obtain \begin{equation*} \hat{l}^*(\widetilde{m}_{22})=m_{33}. \end{equation*} \quad Recall that $(\eta^1,\eta^2,\eta^3)=s^*(\alpha^1,\alpha^2,\alpha^3)$, and using now the relation $ \bar{m}=s^*(\widetilde{m})$ we have \begin{equation*} s^*(d\widetilde{m})=s^*(\widetilde{m}_1)\eta^1+s^*(\widetilde{m}_2)\eta^2, \end{equation*} where we have put $d\widetilde{m}=\widetilde{m}_1\alpha^1+\widetilde{m}_2\alpha^2$ on $\mathcal F(\Lambda)$ and $d\bar m=\bar m_1\eta^1+\bar m_2\eta^2$ on $\Lambda$.\\ \quad Then, it follows \begin{equation*} \begin{split} & \bar m_1=s^*(\widetilde{m}_1)\\ & \bar m_2=s^*(\widetilde{m}_2). \end{split} \end{equation*} \quad A straightforward computation using (\ref{lands_cond_theta}), (\ref{Ricci_cond_theta}) pulled back through $\hat{l}^*$ shows that \begin{equation*} d\widetilde{m}_2=\widetilde{m}_{22}\alpha^2+\widetilde{m}_1\alpha^3, \end{equation*} and pulling this equation back through $s^*$ we get \begin{equation*} s^*(\widetilde{m}_{22})=\bar m_{22}+b\bar m_1, \end{equation*} where $b$ is the function on $\Lambda$ from $d\eta^2=b\eta^1\wedge\eta^2$.\\ \quad In the same way we obtain \begin{equation*} \begin{split} & s^*(\widetilde{m}_{11})=\bar m_{11}-a\bar m_2,\\ & s^*(\widetilde{m}_{12})= s^*(\widetilde{m}_{21})=\bar m_{12}-b\bar m_2=\bar m_{21}+a\bar m_1, \end{split} \end{equation*} where we take into account the Ricci type identity on $\Lambda$: \begin{equation*} \bar{m}_{21}-\bar{m}_{12}+a\bar m_1+b\bar m_2=0. 
\end{equation*} \quad Hence, we obtain \begin{equation*} m_{33}=l^*(s^*(\widetilde{m}_{22}))=l^*(\bar m_{22}+b\bar m_1). \end{equation*} \quad Using this in (\ref{condC_up}), we are led to the following {\it curvature relation on $\Lambda$}: \begin{equation}\label{condC_down} (C) \qquad \frac{\bar m_{22}+b\bar m_1}{\bar m}=1-R, \end{equation} which, together with the {\it Landsberg condition on $\Lambda$}, namely \begin{equation}\label{condL_down} (L) \qquad \bar m_{12}-b\bar m_2=\bar m_{21}+a\bar m_1=0, \end{equation} are the fundamental relations to be satisfied by $\bar m$ on $\Lambda$.\\ \quad Remark that the non-triviality relations $m_2\neq 0$, $m_3\neq 0$ are equivalent to \begin{equation*} \widetilde{m}_1\neq 0,\qquad \widetilde{m}_2\neq 0 \end{equation*} on $\mathcal F(\Lambda)$ or, equivalently, \begin{equation}\label{condN_down} (N)\qquad \bar{m}_1\neq 0,\qquad \bar{m}_2\neq 0 \end{equation} on $\Lambda$. \section{Constructing local generalized unicorns} \subsection{Recovering the generalized Landsberg structure} \quad Conversely, one can locally construct a generalized Landsberg structure as follows. Let us consider \begin{enumerate} \item an oriented Riemannian surface $(\Lambda,g)$ of Gauss curvature $R$, and \item a function $\bar m$ on $\Lambda$ that satisfies the PDE system (\ref{condC_down}), (\ref{condL_down}) with the non-triviality conditions (\ref{condN_down}). \end{enumerate} \quad Then, on the orthonormal frame bundle $\nu:\mathcal F(\Lambda)\to \Lambda$ there exist the tautological 1-forms $\alpha^1$, $\alpha^2$ and the Levi-Civita connection form $\alpha^3$ that satisfy the usual structure equations \begin{equation}\label{riemann_struct_eq} \begin{split} d\alpha^1& =\alpha^2\wedge\alpha^3\\ d\alpha^2& =\alpha^3\wedge\alpha^1\\ d\alpha^3& =\nu^*(R)\ \alpha^1\wedge\alpha^2. \end{split} \end{equation} \quad Let us construct the coframing \begin{equation}\label{inverse_coframe} \begin{split} & \bar{\omega}^1=\frac{1}{\widetilde{m}}(\alpha^3-\frac{\widetilde{m}_2}{\widetilde{m}}\alpha^1)\\ & \bar{\omega}^2=\frac{1}{\widetilde{m}}\alpha^1\\ & \bar{\omega}^3=\alpha^2, \end{split} \end{equation} where $\widetilde{m}=\nu^*(\bar m)$.\\ \quad It follows from Sections 6 and 7 that $\{\bar\omega^1,\bar\omega^2,\bar\omega^3\}$ is a non-trivial generalized Landsberg structure on the 3-manifold $\mathcal F(\Lambda)$ with the invariants \begin{equation*} I=-2\frac{\widetilde{m}_2}{\widetilde{m}},\qquad K=\widetilde{m}^2. \end{equation*} \quad By computations similar to those in Section 4, one can show by means of the Cartan-K\"ahler theorem that the PDE system (\ref{condC_down}), (\ref{condL_down}) is involutive. We will not discuss here the most general situation, but a particular case will be described below. We recall also that a Riemannian structure on a surface depends on a function of two variables, say $u$ on $\Lambda$ (this is a consequence of the existence of isothermal coordinates on a Riemannian surface).\\ \quad Summarizing, it follows from the Cartan--K\"ahler theorem used in Section 4 that the scalar invariants $I$, $K$ of a generalized Landsberg structure locally depend on two arbitrary functions of two variables (see \S 4.1, \S 4.2). We point out that these two functions of two variables are in the Cartan-K\"ahler sense, i.e. they show the degree of freedom of $(I,K)$, but one should not think that they are exactly the functions $u$ and $\bar{m}$ used in the preceding section. \\ \quad More generally, a generalized Landsberg structure, i.e.
the coframe $\{\omega^1,\omega^2,\omega^3\}$ together with the scalar invariants $I$, $K$, depends on 3 functions of 3 variables (see \S 4.3). A particular case is the generalized Landsberg structure (\ref{inverse_coframe}) constructed using a function $u$ on $\Lambda$, from the Riemannian structure $(\Lambda,g)$ downstairs, and a function $\bar m$ on $\Lambda$ satisfying (\ref{condC_down}), (\ref{condL_down}). We will show in the next section that the degree of freedom of the pair of functions $(u,\bar m)$ is actually 4 functions of 1 variable (see Proposition 8.1).\\ \quad Remark that our solution has a lower degree of freedom than the general solution predicted by our first use of the Cartan--K\"ahler theorem in Section 4 due to our particular choice of the coframe change (\ref{inverse_coframe}), so there is no contradiction with our results in Section 4.\\ \quad Remark also that our condition $\widetilde m_3=0$ implies that the directional derivative of the invariant $K$ with respect to $\bar \omega^1$ vanishes, in other words we are considering here an integral manifold of the linear Pfaffian system (\ref{Lands_Pfaffian}) passing through the initial condition \begin{equation*} (u_0,I(u_0),K(u_0),I_1(u_0),I_3(u_0),0,K_2(u_0)), \end{equation*} as explained in \S 4.2, where the invariants $I$, $K$ are given above. \subsection{A local form} \quad In order to construct a local form for the generalized Landsberg structure given by (\ref{inverse_coframe}), we are going to use Zoll projective structures.\\ \quad Let us start with a Riemannian metric $g=u^2[(dz^1)^2+(dz^2)^2]$ on the surface $\Lambda$ with the Christoffel symbols $\Gamma_{jk}^i$, and construct the 1-form $\gamma$ on $\Lambda$ as in \eqref{Gamma}, \eqref{small_gamma}.\\ \quad By putting $\gamma=d(\log u)$, i.e. \begin{equation}\label{u_ODE} \frac{1}{u}\frac{\partial u}{\partial z^i}=\gamma_i,\qquad i=1,2, \end{equation} in some isothermal coordinates $(z^1,z^2)\in \Lambda$, it follows that the Gauss curvature of the Riemannian metric $g=u^2[(dz^1)^2+(dz^2)^2]$ will be given by \begin{equation*} R=-\frac{1}{u^2}\textrm{div}\gamma \end{equation*} as explained in \S 3.2. See also Example 7.2 for other formulas.\\ \quad On the other hand, in order to obtain a generalized Landsberg structure upstairs, we need a function $\bar{m}$ on $\Lambda$ that satisfies the conditions (\ref{condC_down}), (\ref{condL_down}) and the non-triviality conditions (\ref{condN_down}). \\ \quad If we denote by numerical subscripts the directional derivatives of $\bar{m}$ with respect to the $g$-orthonormal coframe \begin{equation*} \eta^1=udz^1,\quad \eta^2=udz^2, \end{equation*} and by letters the partial derivatives, then straightforward computations show that the first order directional derivatives are \begin{equation}\label{m_1st_deriv} \bar{m}_i=\frac{1}{u}\bar{m}_{z^i},\qquad i=1,2, \end{equation} and the second order directional derivatives are \begin{equation}\label{m_2st_deriv} \begin{split} & \bar{m}_{11}=\frac{1}{u^2}(-\gamma_1\bar{m}_{z^1}+\bar{m}_{z^1z^1})\quad \bar{m}_{12}=\frac{1}{u^2}(-\gamma_2\bar{m}_{z^1}+\bar{m}_{z^1z^2})\\ & \bar{m}_{21}=\frac{1}{u^2}(-\gamma_1\bar{m}_{z^2}+\bar{m}_{z^2z^1})\quad \bar{m}_{22}=\frac{1}{u^2}(-\gamma_2\bar{m}_{z^2}+\bar{m}_{z^2z^2}).
\end{split} \end{equation} \quad It follows from (\ref{condC_down}), (\ref{condL_down}) that $\bar{m}$ must satisfy \begin{enumerate} \item {\it The Landsberg condition} \begin{equation}\label{condL_2} (L)\qquad \bar{m}_{z^1z^2}=\gamma_1\bar{m}_{z^2}+\gamma_2\bar{m}_{z^1}, \end{equation} \item {\it The curvature condition} \begin{equation}\label{condC_2} (C)\qquad \bar{m}_{z^2z^2}= -(\gamma_1m_{z1}- \gamma_2m_{z2})+u^2+\textrm{div}\gamma. \end{equation} \end{enumerate} \quad It follows that these two conditions can be regarded as a PDE system for $\bar{m}$ on $\Lambda$, where $\gamma$'s is given by (\ref{u_ODE}).\\ \quad The first question that arises is the involutivity of such a PDE system. We will discuss this using our favorite tool, the Cartan-K\"ahler theorem. \\ \quad Let $J^2(\mathbb{R}^2, \mathbb{R}^2)$ be a second order jet space of two functions on a plane. The second jet space $J^2 (\mathbb{R}^2, \mathbb{R}^2)$ has the canonical system \begin{equation*} C^2= \{\theta_{i}^{j}=0 \quad (i=0,1,2 ,j=1,2)\} \end{equation*} where $(z^1,z^2,\bar{m},u, \bar{m}_{z^1},\bar{m}_{z^2}, u_{z^1},u_{z^2},\bar{m}_{z^1 z^1},\bar{m}_{z^1 z^2},\bar{m}_{z^2 z^2}, u_{z^1 z^1},u_{z^1z^2},u_{z^2 z^2})$ are the coordinates on $J^2(\mathbb{R}^2, \mathbb{R}^2)$ and \begin{eqnarray*} \theta_{0}^{1}=d\bar{m}-\bar{m}_{z^1}dz^1 -\bar{m}_{z^2}dz^2 \qquad &,&\quad \theta_{0}^{2}= du-u_{z^1}dz^1 -u_{z^2}dz^2\ , \\ \theta_{1}^{1}=d\bar{m}_{z^1}-\bar{m}_{z^1 z^1}dz^1 -\bar{m}_{z^1 z^2}dz^2 &,& \quad\theta_{1}^{2}= du_{z^1}-u_{z^1 z^1}dz^1 -u_{z^1 z^2}dz^2\ , \\ \theta_{2}^{1}= d\bar{m}_{z^2}-\bar{m}_{z^1 z^2}dz^1 -\bar{m}_{z^2 z^2}dz^2 &,& \quad\theta_{2}^{2}= du_{z^2}-u_{z^1 z^2}dz^1 -u_{z^2 z^2}dz^2 \end{eqnarray*} are the canonical contact forms. \\ \quad We consider the system of PDE formed by the equations $(L),(C)$, namely, \[ R=\{(L),(C) \} \subset J^2(\mathbb{R}^2, \mathbb{R}^2),\quad I= C^2 |_R , \quad \Omega =dz^1 \wedge dz^2, \] with coordinates $(z^1,z^2,\bar{m},u, \bar{m}_{z^1},\bar{m}_{z^2}, u_{z^1},u_{z^2},\bar{m}_{z^1 z^1}, u_{z^1 z^1},u_{z^1z^2},u_{z^2 z^2})$ on $R$.\\ \quad By a straightforward computation we find that the Pfaffian system $I$ has absorbable torsion. Moreover, its tableau is given by \begin{equation} \begin{pmatrix} 0 & \qquad & 0 \\ a & \qquad & 0 \\ 0 & \qquad & \frac{\bar m}{u}(b+d) \\ 0 & \qquad & 0 \\ b & \qquad & c \\ c & \qquad & d \end{pmatrix} \end{equation} and the characters of the tableau are $s_1=4,\ s_2=0$. Since the dimension of the space of integral elements is $4=s_1+2s_2$, Cartan's Test for involutivity implies that the system is involutive. \\ \quad Hence, in the analytic category, the Cartan-K\"ahler theorem implies that the solutions exist, and, roughly speaking, they depend on 4 functions of 1 variable. \\ \quad We are led in this way to the following result. {\bf Proposition 8.1.} \\ {\it \quad The system of partial differential equations (L), (C) for two unknown functions u, $\bar m$ of two variables has solutions. Moreover, these solutions depend in Cartan-K\"ahler sense on 4 functions of 1 variable.} \quad We obtain therefore the following prescription for constructing generalized Landsberg structures: \quad $\bullet$ Start with a smooth surface $\Lambda$ with local coordinates $z^1$, $z^2$ and consider the functions $ \bar{m}, u:\Lambda\to \mathbb R$ which satisfy (\ref{condL_2}), (\ref{condC_2}). The existence of such an $\bar{m}$ and $u$ is guaranteed by the Cartan-K\"ahler theorem (Proposition 8.1). 
\quad $\bullet$ Denote by $g=u^2[(dz^1)^2+(dz^2)^2]$ the corresponding Riemannian metric on $\Lambda$ conformally equivalent to the flat metric, and by $R$ its Gauss curvature given by (\ref{Gauss_curv}); \quad $\bullet$ Construct the $g$-orthonormal frame bundle $\nu:\mathcal F(\Lambda)\to \Lambda$ with the tautological 1-forms $\alpha^1$, $\alpha^2$ and the Levi-Civita connection form $\alpha^3$; \quad $\bullet$ Lift the function $\bar{m}$ to $\Sigma:=\mathcal F(\Lambda)$ as $\widetilde m:=\nu^*(\bar{m})$; \quad $\bullet$ Construct the coframe $(\bar{\omega}^1,\bar{\omega}^2,\bar{\omega}^3)$ on $\Sigma=\mathcal F(\Lambda)$ given by (\ref{inverse_coframe}). \quad Then, we have \quad {\bf Theorem 8.2.} {\it The coframe $(\bar{\omega}^1,\bar{\omega}^2,\bar{\omega}^3)$ constructed above is a generalized Landsberg structure on the 3-manifold $\Sigma=\mathcal F(\Lambda)$.} \quad Indeed, remark first that $\widetilde m:=\nu^*(\bar{m})$ implies $s^*(\widetilde m)=\bar{m}$, as well as $\widetilde m_3=0$ by taking the exterior derivative. Then, in the present setting, computations similar to those in \S 7.2 show that the conditions (L) and (C) upstairs in Proposition 6.2 hold good. Computing now the structure equations of the coframe $\bar{\omega}$ and making use of (\ref{riemann_struct_eq}) and the properties in Proposition 6.2, one can easily verify that $\bar{\omega}$ is a generalized Landsberg structure on the 3-manifold $\Sigma=\mathcal{F}(\Lambda)$.\\ \quad Using the normal form from Example 7.2 in \S 7.1, we obtain the following normal form of this generalized unicorn: \begin{equation}\label{normal_form} \begin{split} & \bar{\omega}^1=\frac{1}{\widetilde{m}}\Bigl[dt-*d(\log u)-\frac{u\ \widetilde{m}_2}{\widetilde{m}}\Bigl(\cos(t) dz^1-\sin(t) dz^2\Bigr)\Bigr]\\ & \bar{\omega}^2=\frac{u}{\widetilde{m}}\Bigl(\cos(t) dz^1-\sin(t) dz^2\Bigr)\\ & \bar{\omega}^3=u\Bigl(\sin(t)dz^1+\cos(t)dz^2\Bigr), \end{split} \end{equation} where $\widetilde{m}=\nu^*(\bar m)$, $\widetilde{m}_2=\nu^*(\frac{1}{u}\frac{\partial \bar{m}}{\partial z^2})$ and $t\in [0,2\pi]$ is the fiber coordinate over $z=(z^1,z^2)\in \Lambda$. Here, we denote again the pullback $\nu^*(u)$ of $u$ to $\mathcal{F}(\Lambda)$ by the same letter. \section{Concluding remarks} \quad In the present note we have shown how it is possible to construct a non-trivial generalized Landsberg structure $\{\omega^1,\omega^2,\omega^3\}$ on a 3-manifold $\Sigma$ using a Riemannian metric $g$ on a surface $\Lambda$; the construction basically depends on two functions on $\Lambda$, namely $u$ and $\bar{m}$. Due to the Cartan-K\"{a}hler theorem in \S 8.2, we know that these functions are locally described by 4 functions of 1 variable, a case included in the general solution predicted by Cartan-K\"ahler theory in Section 4. A local form of it is given by (\ref{normal_form}). This generalized Landsberg structure is {\it locally amenable} in the sense of Definition 5.1. Our generalized unicorn has the fundamental geometrical property that its indicatrix foliation $\{\omega^1=0,\omega^2=0\}$ coincides with the geodesic foliation $\{\alpha^1=0,\alpha^3=0\}$ of the Riemannian metric $g$ on $\Lambda$.\\ \quad However, our initial intention was to search for classical unicorns on surfaces, i.e.
generalized Landsberg structures that satisfy the conditions of Theorem 2.1.\\ \quad Recall that a generalized Finsler structure is amenable if the indicatrix foliation $\mathcal Q=\{\omega^1=0,\omega^2=0\}$ is amenable, i.e. the leaf space is a differentiable manifold. \\ \quad Let us also recall that a Zoll metric on $S^2$ depends on an arbitrary odd function of one variable (see \cite{B1978} and \cite{LM2002} for details). We are led in this way to the following \quad {\bf Conjecture 9.1.} {\it There exists a solution $u$ of (\ref{condL_2}), (\ref{condC_2}) that gives a Riemannian metric $g=u^2[(dz^1)^2+(dz^2)^2]$ whose Levi-Civita connection $\nabla^g$ belongs to a Zoll projective class on $S^2$. } \quad If we accept this conjecture as true, then we have just constructed a generalized Landsberg structure $\{\bar \omega^1,\bar \omega^2,\bar \omega^3\}$ on the frame bundle $\Sigma:=\mathcal F(S^2)$ of a Riemannian surface $ (S^2,g)$ whose Levi-Civita connection $\nabla^g$ belongs to a Zoll projective structure on $S^2$, in other words, the geodesic foliation $\mathcal P=\{\alpha^1=0,\alpha^3=0\}$ of $g$ foliates the 3-manifold $\Sigma$ by circles. Remark at the same time that we constructed our coframe $\bar \omega$ from $\alpha$ by (\ref{inverse_coframe}) such that its indicatrix foliation $\mathcal Q=\{\bar\omega^1=0,\bar\omega^2=0\}$ coincides with the geodesic foliation $\mathcal P=\{\alpha^1=0,\alpha^3=0\}$ of $g$. Then, by the properties of the Zoll projective structure on $S^2$ described partially in \S 3.2, it follows that the space of geodesics, say $M$, of the metric $ (\Lambda=S^2,g)$ is a differentiable manifold, and hence, the generalized Landsberg structure $\{\bar \omega^1,\bar \omega^2,\bar \omega^3\}$ is globally amenable. In other words, the map $\pi:\Sigma\to M$ is a smooth submersion. Obviously, the leaves of the indicatrix foliation $\{\bar \omega^1=0,\bar \omega^2=0\}$ are diffeomorphic to $S^1$, so they must be compact.\\ \quad Finally, in order to have a true classical unicorn, we have to show more, namely that the canonical immersion $ \iota:\Sigma\to TM$, given by $\iota(u)=\pi_{*,u}(\hat{e}_2)$, is injective on each $\pi$-fiber $\Sigma_x$, as stated in Theorem 2.1. This is not so difficult to prove. Let us denote by \begin{equation*} \gamma_u:[a,b]\to \Sigma \end{equation*} the geodesic flow of the Zoll projective structure $[\nabla]$ on $S^2$ through the point $u\in \Sigma$, and let us take another point, say $u_1$, on the same leaf, i.e. there exist some parameter values $s_0, s_1\in [a,b]$ such that \begin{equation*} \gamma_u(s_0)=u,\qquad \gamma_u(s_1)=u_1 \end{equation*} on $\Sigma$. \\ \quad From \S 3.2 we know that the leaves $\gamma$ are closed, periodic, simple curves of the same length on $\Sigma$, i.e. \begin{equation*} \gamma_u(s_0)=u \neq \gamma_u(s_1)=u_1 \Longrightarrow \hat{e}_{2\ |\gamma_u(s_0)}\neq \hat{e}_{2\ | \gamma_u(s_1)}, \end{equation*} where $\hat{e}_{2}\in T_{\gamma}\Sigma$ is thought of as a vector field along $\gamma$. Applying the push-forward $\pi_{*}$ to this, it follows \begin{equation*} \pi_{*,\gamma(s_0)}(\hat{e}_{2\ |\gamma_u(s_0)})\neq \pi_{*,\gamma(s_1)}(\hat{e}_{2\ |\gamma_u(s_1)}) \end{equation*} and therefore it follows that $\iota$ must be injective on each $\pi$-fiber $\Sigma_x$.\\ \quad Then, from Theorem 2.1 we can conclude {\it \quad There are Landsberg structures on $M=S^2$ which are not of Berwald type, provided the conjecture above is true.} \section{Appendix.
The Cartan--K\"ahler theorem for linear Pfaffian systems} \quad We give a short outline of the main tool used in the present paper, the Cartan--K\"ahler theorem for linear Pfaffian systems. This theorem is presented in several textbooks, for \cite{Br et al 1991}, \cite{IL2003}, \cite{O1995}, etc., but our presentation here follows our favorite monograph \cite{IL2003}.\\ \quad Let us denote by $\Omega^*(\Sigma)=\bigoplus_{k} \Omega^k(\Sigma)$ the space of smooth differential forms on the manifold $\Sigma$. It is a standard fact that $\Omega^*(\Sigma)$ is a graded algebra under the wedge product. \\ \quad A subspace $\mathcal{I}\subset \Omega^*(\Sigma)$ is called {\it an exterior ideal} or {\it an algebraic ideal} if it is a direct sum of homogeneous subspaces (namely, $\mathcal{I} = \bigoplus_{k} {\mathcal{I}}^k$, ${\mathcal{I}}^k \subset \Omega^k(\Sigma)$.) and it satisfies \[ \omega\wedge \eta\in \mathcal{I}, \] for $\omega\in \mathcal{I}$ and {\it any} differential form $\eta\in \Omega^*(\Sigma)$.\\ \quad An exterior ideal is called {\it a differential ideal} if for any $\omega\in \mathcal{I}$, we have $d\omega\in \mathcal{I}$ also. \\ \quad A differential ideal $\mathcal{I} \subset \Omega^*(\Sigma)$ is called an {\it exterior differential system} on a manifold $\Sigma$ (EDS for short).\\ \quad A set of differential forms of arbitrary degree $\{\omega^1, \omega^2, \dots,\omega^k \}$ is said to {\it generate the EDS} $\mathcal{I}$ if any $\theta \in \mathcal{I}$ can be written as a finite ``linear combination'', namely \begin{equation*} \mathcal{I} =\{ \sum_{i=1}^{k} \alpha^i \wedge \omega^i+ \sum_{i=1}^{k} \beta^i \wedge d\omega^i \ |\ \alpha^i,\beta^i \in \Omega^*(\Sigma) \}. \end{equation*} \quad A {\it Pfaffian system} $\mathcal{I}$ on a manifold $\Sigma$ is an EDS finitely generated by 1-forms $\{\omega^1,\omega^2,\dots\omega^k\}$ only.\\ \quad For an EDS $\mathcal{I}$ on a manifold $\Sigma$, a decomposable differential $k$-form $\Omega$ (up to scale) is called the independence condition if $\Omega$ does not vanish modulo $\mathcal{I}$ on $\Sigma$.\\ \quad We denote by $(\mathcal{I}, \Omega)$ a pair of an EDS and an independence condition on a manifold $\Sigma$.\\ \quad A submanifold $f:M\to\Sigma$ is called {\it an integral submanifold} (or solution) of the EDS $(\mathcal{I}, \Omega) $ if \begin{equation*} \begin{split} & f^*(\theta^a)=0, \qquad \theta^a\in \mathcal{I},\\ & f^*(\Omega)\neq 0. \end{split} \end{equation*} \quad Remark also that $f^*(\theta)=0$ imply $f^*(d\theta)=0$.\\ \quad There is a notion of infinitesimal solution also. A $k$-dimensional subspace $E\subset T_x\Sigma$ is called {\it an integral element} of $(\mathcal{I}, \Omega)$ if \begin{equation*} \begin{split} & \theta^a_{|_E}=0, \qquad \theta^a\in \mathcal{I},\\ & \Omega_{|_E}\neq 0. \end{split} \end{equation*} \quad Usually one regards $E$ as an element of the Grassmannian $G_k(T_x\Sigma)$ of $k$-planes through the origin of the vector space $T_x\Sigma$. The space of $k$-dimensional integral elements of $(\mathcal{I},\Omega)$ is usually denoted by $\mathcal{V}_k(\mathcal{I},\Omega) $.\\ \quad Roughly speaking, a differential system will be called {\it integrable} if one can determine its integral manifolds of a prescribed dimension passing through each point. In the case of a Pfaffian system with the maximum degree independence condition, its integrability is guaranteed by Frobenius theorem. 
\quad However, when the independence condition is not of maximal degree, one has to use more powerful tools such as the Cartan--K\"ahler Theorem.\\ \quad Let $(I,J)$ be a pair of a collection of $1$-forms $I=\{\theta^1, \theta^2, \ldots, \theta^s \}$ and $J=\{\omega^1 ,\omega^2 ,\ldots ,\omega^k \}$ which are linearly independent modulo $I$.\\ \quad Remark that $(I,J)$ induces an EDS $(\mathcal{I},\Omega)$ by a Pfaffian system $\mathcal{I}$ generated by $I$ and the independence condition $\Omega= \omega^1 \wedge \omega^2 \wedge \ldots \wedge \omega^k$.\\ \quad The pair $(I,J)$ is called a {\it linear Pfaffian system} if \begin{equation*} d\theta^a\equiv 0\qquad \mod J, \end{equation*} for all $\theta^a$ in $I$.\\ \quad If $(I,J)$ is a linear Pfaffian system, let us choose $1$-forms $\pi^{\epsilon}$, $\epsilon=1,2,\dots,\dim \Sigma-s-k$, such that $T^*\Sigma$ is locally spanned by $\theta^a,\omega^i, \pi^\epsilon$. The coframing $\theta^a,\omega^i,\pi^\epsilon$ is called {\it adapted} to the filtration $I\subset J\subset T^*\Sigma$. It follows immediately that there must locally exist some functions $A_{\epsilon i}^a$ and $T_{ij}^a$ on $\Sigma$ such that \begin{equation}\label{struct_eq_torsion} d\theta^a\equiv A_{\epsilon i}^a\pi^\epsilon\wedge\omega^i+T_{ij}^a\omega^i\wedge\omega^j\qquad \mod I. \end{equation} \quad The terms $T_{ij}^a\omega^i\wedge\omega^j$ in (\ref{struct_eq_torsion}) are called {\it apparent torsion}. Apparent torsion must be normalized before prolonging the system. Namely, one has to choose, if possible, some new $1$-forms $\tilde{\pi}^\epsilon$ such that $\tilde{T}_{ij}^a=0$ with respect to the new coframe $ \theta^a,\omega^i,\tilde{\pi}^\epsilon$ on $\Sigma$. In this case one says that {\it the apparent torsion is absorbable}.\\ \quad If this is not possible, then one says that there is {\it torsion}, and in this case the system admits no integral elements.\\ \quad Remark that the functions $A_{\epsilon i}^a$ and $T_{ij}^a$ depend on the choices of the bases for $I$ and $J$. However, one can construct invariants from these functions. Indeed, for a fixed generic point $x\in \Sigma$, the {\it tableau of $(I,J)$ at $x$} is defined as \begin{equation*} A_x:=\{A_{\epsilon i}^aw_a\otimes v^i\ :\ 1\leq \epsilon\leq \dim \Sigma-\dim J_x\}\subseteq W\otimes V^*, \end{equation*} where $V^*:=(J/I)_x$, $W^*=I_x$, $w^a=\theta^a_x$, $v^j=\omega^j_x$. A standard argument of linear algebra shows that $A_x$ is independent of any choices.\\ \quad We fix a point $x\in \Sigma$ and denote the tableau $A_x$ simply by $A\in W\otimes V^*$. The tableau $A$ depends on the basis $b=(v^1,v^2,\dots,v^n)$ of $V^*$. One defines \begin{tabular}{rcl} $s_1(b)$ & = & no. of independent entries in the first col. of $A$\\ $s_1(b)+s_2(b)$ & = & no. of independent entries in the first 2 col. of $A$\\ \ & \dots & \ \\ $s_1(b)+\dots+s_n(b)$ & = & no. of independent entries in $A$. \end{tabular} \quad Equivalently, one can see that {\it the characters $s_1(b),s_2(b),\dots,s_n(b)$ of the tableau A} do not actually depend on the choice of the basis $b$ of $V^*$, but only on the flag of subspaces \begin{equation*} F:\ (0)=F_n\subset F_{n-1}\subset \dots\subset F_1\subset F_0=V^*. \end{equation*} \quad This allows us to rewrite $s_k(b)$ as $s_k(F)$. By defining \begin{equation*} A_k(F)=(W\otimes F_k)\cap A, \end{equation*} it follows that \begin{equation*} \dim A_k(F)=s_{k+1}(F)+\dots+s_n(F).
\end{equation*} One can easily see that $A_k(F)$ is the subspace of matrices in $A$ for which the first $k$ columns are zero with respect to the basis $b$ for $V^*$.\\ \quad One defines next the {\it reduced characters of the tableau A} as\\ \begin{tabular}{rcl} $s_1$ & = & $\max\{s_1(F)\ :\ \text{all flags}\}$\\ $s_2$ & = & $\max\{s_2(F)\ :\ \text{flags with}\ s_1(F)=s_1\}$\\ \ & \dots & \ \\ $s_n$ & = & $\max\{s_n(F)\ :\ \text{flags with} \ s_1(F)=s_1,\dots,s_{n-1}(F)=s_{n-1}\}$. \end{tabular} \quad These scalars are invariants of the tableau $A$, i.e. they are independent of any choice of bases of $V$ or $W$. \\ \quad It can be shown that the reduced characters must satisfy the inequality: \begin{equation}\label{dim_intergral_elem} \dim A^{(1)}\leq s_1+2s_2+\dots +ns_n, \end{equation} where $A^{(1)}$ is the {\it first prolongation of A}, namely \begin{equation*} A^{(1)}:=(A\otimes V^*)\cap (W\otimes S^2 V^*), \end{equation*} and $S^2 V^*$ is the space of symmetric 2-tensors of $V^*$.\\ \quad We arrive in this way at one of the most important notions in the theory of exterior differential systems. The tableau $A\in W\otimes V^*$ is called {\it involutive} if equality holds in (\ref{dim_intergral_elem}), i.e. we have \begin{equation*} \dim A^{(1)}= s_1+2s_2+\dots +ns_n. \end{equation*} This condition is also called the {\it Cartan test for involutivity}.\\ \quad If $A$ is involutive such that $s_l\neq 0$ and $s_{l+1}= 0$, then $s_l$ is called the {\it character} of the system and the integer $l$ is called the {\it Cartan integer} of the system.\\ \quad We can now state the main tool used in this paper, the Cartan--K\"ahler Theorem for Linear Pfaffian systems. Even though the theorem can be formulated in general for arbitrary exterior differential systems (see \cite{Br et al 1991}, \cite{IL2003}), the version for Linear Pfaffian systems will suffice for our purposes in the present paper. {\bf Theorem A.1.\ \ The Cartan--K\"ahler Theorem for Linear Pfaffian systems} \quad {\it Let (I,J) be an analytic linear Pfaffian system on a manifold $\Sigma$, let $x\in \Sigma$ be a point and let $U \subset \Sigma$ be a neighborhood containing $x$, such that for all $y\in U$, \begin{enumerate} \item The apparent torsion is absorbable at $y$, and \item the tableau $A_y$ is involutive. \end{enumerate} \quad Then solving a series of well-posed Cauchy problems yields analytic integral manifolds of $(I,J)$ passing through $x$. }\\ \quad Informally, one says that the solutions depend (in the Cartan--K\"ahler sense) on $s_l$ functions of $l$ variables, where $s_l$ is the character of the system (see \cite{IL2003}, p. 176 for the precise statement of the Theorem and other details). \quad A linear Pfaffian system satisfying the conditions (1) and (2) in the Cartan--K\"ahler Theorem for linear Pfaffian systems is said to be {\it involutive}.\\ \quad Recall that if an EDS is not a linear Pfaffian system, then by prolongation one can linearize it and then study its involutivity by the Cartan--K\"ahler Theorem for linear Pfaffian systems. \begin{center} Sorin V. SABAU\\ School of Science, Department of Mathematics\\ Tokai University,\\ Sapporo, 005\,--\,8601 Japan {\tt [email protected]} Kazuhiro SHIBUYA\\ Graduate School of Science, Hiroshima University, \\ Higashi Hiroshima, 739\,--\,8521, Japan {\tt [email protected]} Hideo SHIMADA\\ School of Science, Department of Mathematics\\ Tokai University,\\ Sapporo, 005\,--\,8601 Japan {\tt [email protected]} \end{center} \end{document}
\begin{equation}gin{document} \title{{Gauge transformations and conserved quantities in classical and quantum mechanics}} \author{Bertrand Berche} \email{[email protected]} \affiliation{Groupe de Physique Statistique, Institut Jean Lamour, Universit\'e de Lorraine, 54506 Vandoeuvre-les-Nancy, France} \affiliation{Centro de F\'isica, Instituto Venezolano de Investigaciones Cient\'ificas, 21827, Caracas, 1020 A, Venezuela} \author{Daniel Malterre} \email{[email protected]} \affiliation{Groupe Surfaces et spectroscopies, Institut Jean Lamour, Universit\'e de Lorraine, 54506 Vandoeuvre-les-Nancy, France} \author{Ernesto Medina} \email{[email protected]} \affiliation{Centro de F\'isica, Instituto Venezolano de Investigaciones Cient\'ificas, 21827, Caracas, 1020 A, Venezuela} \affiliation{Groupe de Physique Statistique, Institut Jean Lamour, Universit\'e de Lorraine, 54506 Vandoeuvre-les-Nancy, France} \date{\today} \begin{equation}gin{abstract} We are taught that gauge transformations in classical and quantum mechanics do not change the physics of the problem. Nevertheless here we discuss three broad scenarios where under gauge transformations: (i) conservation laws are not preserved in the usual manner; (ii) non-gauge-invariant quantities can be associated with physical observables; and (iii) there are changes in the physical boundary conditions of the wave function that render it non-single-valued. We give worked examples that illustrate these points, in contrast to general opinions from classic texts. We also give a historical perspective on the development of Abelian gauge theory in relation to our particular points. Our aim is to provide a discussion of these issues at the graduate level. \end{abstract} \pacs{\\72.80.Vp Electronic transport in graphene\\ 75.70.Tj Spin-orbit effects\\ 11.15.-q Gauge field theories\\ keywords: graphene, spin-orbit interaction, non-Abelian gauge theory, gauge transformation} \keywords{graphene, ring, spin-current} \maketitle \section{Introduction\label{sec1}} It is hard to exaggerate the role of gauge invariance in the construction of physical theories, and many aspects of gauge theory, gauge co-variance, gauge invariance, and the connection to symmetry and conservation laws have been discussed both in textbooks and research journals. The aim of this paper is to attempt to clarify some subtleties that arise in quantum mechanics in the context of gauge transformations: Is the wave function always single-valued? If not, what are the consequences of its multi-valued character on the definition of observables? We also discuss the physical meaning of some gauge-invariant and non-gauge-invariant quantities in connection with rotational symmetry. This paper is intended to be followed by graduate students and may serve as the basis for advanced exercises or projects in a second course on nonrelativistic quantum mechanics. Gauge invariance was originally discovered as a property of Maxwell's equations in electrodynamics, where the equations of the theory do not change when a gauge transformation of the potentials is performed: $\vac A\to\vac A'=\vac A+\boldsymbol{\nabla}\alpha$, $\phi\to\phi'=\phi-\partial_t\alpha$, where $\mathbf{A}$ is the vector potential, $\phi$ is the scalar potential, and $\alpha$ is an arbitrary function of the space and time coordinates. 
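The fields themselves are untouched by such a substitution; a one-line check, using only the fact that partial derivatives commute, gives \begin{equation} \vac B'=\boldsymbol{\nabla}\times\vac A'=\boldsymbol{\nabla}\times\vac A=\vac B, \qquad \vac E'=-\boldsymbol{\nabla}\phi'-\partial_t\vac A'=-\boldsymbol{\nabla}\phi-\partial_t\vac A=\vac E, \end{equation} since $\boldsymbol{\nabla}\times\boldsymbol{\nabla}\alpha=0$ and the two terms involving $\boldsymbol{\nabla}\partial_t\alpha$ cancel.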
According to a widespread teaching paradigm, this freedom appears mostly as a device that can help simplify a problem mathematically while leaving the physical content intact, i.e., with the same electric and magnetic fields. This view probably goes back to the work of Heaviside:\cite{WuYang06} \begin{equation}gin{quote} $\vac A$ and its scalar potential parasite $\phi$ sometimes causing great mathematical complexity and indistinctiveness; and it is, for practical reasons, best to murder the whole lot, or, at any rate, merely employ them as subsidiary functions$\,\ldots$ \end{quote} This opinion was nevertheless not held by Maxwell or Thomson, who considered $\vac A$ to be a momentum per charge (i.e., more than a \textit{subsidiary} function), and there has been an abundant literature, in particular in this journal,\cite{Calkin,Konopinski,SemonTaylor,Johnson} discussing the role of $e\vac A$ as a linear momentum in a similar manner to $e\phi$ as a potential energy. With the advent of quantum theory, the role of the vector potential was intensely revisited,\cite{O'Raifeartaigh} in particular with the celebrated paper of Aharonov and Bohm.\cite{AharonovBohm} An account of the most relevant literature is given in the Resource Letter of Cheng and Li.\cite{ChengLi} An important new insight regarding gauge theory was achieved by Weyl in 1918, and then in 1929, when he considered a generalization of the gravitation theory of Einstein.~\cite{Weyl1918} While lengths of vectors are conserved in Riemannian geometry, Weyl allowed for a length change during parallel transport and thus introduced an additional connection, which he proposed to identify with the electromagnetic gauge vector, providing the first unified theory of gravitation and electromagnetism. This theory did not survive major physical objections at the time,\cite{Pauli1958} but became prominent after its reformulation in the context of quantum mechanics.\cite{Weyl1929} There, it is the wave function that inherits a phase in an electromagnetic field, suggesting the possibility of reformulating Weyl's theory by contemplating complex objects instead of vectors in Riemann space. This new concept gave birth to modern gauge field theory.\cite{Weyl1929} Before becoming a standard approach in textbooks,\cite{YangMills54,Ryder,Ramond} Weyl's theory was spread in the physics community through influential papers by Dirac\cite{Dirac31} and Pauli,\cite{Pauli} and then by Wu and Yang.\cite{WuYang75} In the spirit of Einstein's theory of gravitation, it converts an interaction into a property of the ``space,'' in other words, it ``geometrizes'' the electromagnetic interaction via the so called non-integrable phase of the wave function. Non-integrability here means non-definite values for the phase at points on a space-time trajectory. Only changes in the phase (and thus its derivatives) have meaning. The derivatives of the phase are in fact the gauge fields in electromagnetism. If the phase of the wave function is non-integrable, an issue arises concerning the single-valuedness (or multi-valuedness) of wave functions in geometries where it closes on itself---a question that is usually overlooked in the literature. Even influential textbooks have contradictory statements concerning this delicate question. Some authors consider the single-valuedness of the wave function as mandatory: \begin{equation}gin{quote}The conditions which must be satisfied by solutions of Schr\"odinger's equation are very general in character. 
First of all, the wave function must be single-valued and continuous in all space.\cite{Landau}\end{quote} \begin{equation}gin{quote}It is implicit in the fundamental postulates of quantum mechanics that the wave function for a particle without spin must have a definite value at each point in space. Hence, we demand that the wave function be a \textit{single-valued} function of the particle's position.\cite{Merzbacher}\end{quote} The condition of single-valuedness is often considered as a prerequisite to build eigenstates of angular momentum, leading to integer eigenvalues of $L_z$ (in units of $\hbar$).\cite{Cohen,Messiah,Liboff,MerzbacherAJP} Other authors consider this question with more caution, for example: \begin{equation}gin{quote}It is reasonable to require that the wave function and its gradient be continuous, finite, and single-valued at every point in space, in order that a definite physical situation can be represented uniquely by a wave function.~\cite{Schiff}\end{quote} \begin{equation}gin{quote}Because kets (or wave functions) are not in themselves observable quantities, they need not be single-valued. On the other hand, a Hermitian operator $A$ that purports to be an observable must be single-valued under rotation to insure that its expectation value $\langle\psi|A|\psi\rangle$ is single-valued in an arbitrary state.~\cite{Gottfried}\end{quote} \begin{equation}gin{quote}Multiple-valued wave functions cannot be excluded a priori. Only physically measurable quantities, such as probability densities and expectation values of operators, must be single-valued. Double-valued wave functions are used in the theory of particles with intrinsic spin.~\cite{Weisskopf}\end{quote} Ballentine addresses the question without hiding the underlying difficulties: \begin{equation}gin{quote}The assumptions of \textit{single-valuedness} and \textit{nonsingularity} can be justified in a classical field theory, such as electromagnetism, in which the field is an observable physical quantity. But in quantum theory, the state function $\Psi$ does not have such direct physical significance, and the classical boundary conditions cannot be so readily justified. \textit{Why should $\Psi$ be single-valued under rotation?} Physical significance is attached, not to $\Psi$ itself, but to quantities such as $\langle\Psi|A|\Psi\rangle$, and these will be unchanged by a $2\pi$ rotation\,\dots. \textit{Why should $\Psi$ be nonsingular?} It is clearly desirable for the integral of $|\Psi|^2$ to be integrable so that the total probability can be normalized to one\,\dots. It is difficult to give an adequate justification of the conventional boundary conditions in this quantum-mechanical setting.\cite{Ballentine}\end{quote} In this paper we will first introduce the problem via a discussion of gauge invariance in the classical context. We will then briefly review the extension to quantum mechanics, involving both the operators and the wave function, and define gauge-invariant and non-gauge-invariant quantities and state conservation laws. This discussion will set a precise stage to illustrate both changes in the statement of conservation laws and lack of single-valued wave functions under certain gauge transformations. \section{Gauge invariance, gauge covariance, and unitary transformations}\label{1bis} Consider a single nonrelativistic spinless particle in an external magnetic field $\vac B$. For simplicity, we ignore the scalar potential in this discussion. 
The corresponding classical Hamiltonian is \begin{equation} H=\frac{1}{2m}(\vac p-e\vac A(\vac r))^2,\end{equation} where $(\vac r,\vac p)$ are the fundamental dynamical variables in the Hamiltonian formulation, i.e., the position $\vac r$ and the \textit{canonical momentum} conjugate to the position, $\vac p=\partial L/\partial\dot {\vac r}$ with $L$ the Lagrangian of the particle. Newtonian mechanics dictates that the physical quantities experimentalists can measure are the positions and velocities $\vac r$ and $\vac v$, and one can define a \textit{mechanical momentum} as ${\vac \boldsymbol{\pi}}=\vac p-e\vac A=m{\vac v}$ in terms of which the Hamiltonian reduces to purely kinetic energy, $H=\boldsymbol{\pi}^2/2m$. If one changes the gauge that determines the potentials in the Hamiltonian according to $\vac A\to\vac A'=\vac A+\boldsymbol{\nabla}\alpha$, with $\alpha$ a function depending on space (and time in the more general case), the invariance of the physics is stated as \begin{equation}gin{eqnarray} \vac r'&=&\vac r, \label{gaugeclassicalr}\\ \vac \boldsymbol{\pi}'&=&\vac \boldsymbol{\pi}, \label{gaugeclassical} \end{eqnarray} where a prime denotes the physical quantity in the new gauge. This condition entails a gauge dependence in the canonical momentum, \begin{equation}\vac p'=\vac p+e\boldsymbol{\nabla} \alpha.\label{gaugeclassicalp} \end{equation} As can be seen, $\vac \boldsymbol{\pi}$ does not change with the gauge choice because the change in $\vac p$ is compensated by the change in $\vac A$. All gauge-invariant physical quantities are thus built from functions of $\vac r$ and $\vac \boldsymbol{\pi}$. All the classical physical quantities are then specified by combinations of these mechanical variables. A familiar example of a function one can build is the angular momentum. The canonical function would be built as $\vac l=\vac r\times\vac p$ while the mechanical angular momentum would be $\boldsymbol{\lambda}=\vac r\times m\vac v=\vac r\times \boldsymbol{\pi}$. The latter is gauge-invariant by construction, while the former, like $\vac p$, is not. In quantum mechanics the quantization rules dictate that we now make the replacements $\vac r\rightarrow {\hat{\vac R}}$ and $\vac p\rightarrow {\hat{\vac P}}$ of dynamical variables with the corresponding operators {(denoted with ``hats'')}. From these dynamical quantized variables one can then build the gauge potential $\vac A( {\hat{\vac R}})$, the velocity operator $\vac V( {\hat{\vac R}},\hat{\vac P})=(\hat{\vac P}-e\vac A)/m$, and also the mechanical momentum operator $\hat{\boldsymbol{\Pi}}=m\hat{\vac V}$, as well as $ {\hat{\ \vac L}=\hat{\vac R}\times\hat{\vac P}}$ and $ {\hat{\boldsymbol{\Lambda}}=\hat{\vac R}\times \hat{\boldsymbol{\Pi}}}$ for the corresponding canonical and mechanical angular momenta. We then have the quantized Hamiltonian \begin{equation}gin{equation} {\hat H}=\frac{1}{2m}( {\hat{\vac P}}-e\vac A( {\hat{\vac R}}))^2. \end{equation} The canonical operators obey commutation rules {$[X_j,P_k]=i\hbar\delta_{jk}$, $j,k=1,2,3$, where $X_j$ and $P_k$ are the Cartesian components of ${\hat{\vac R}}$ and $ {\hat{\vac P}}$,} and, as a conventional rule, it is convenient to preserve the form of the canonical momentum operator in the position representation $\hat{\vac P}=-i\hbar\boldsymbol{\nabla}$ for all gauge choices. This is also a consequence of the fact that the canonical momentum $\hat{\vac P}$ is the generator of space translations, and this property should be kept for all gauges. 
So the counterparts of Eqs.\ (\ref{gaugeclassicalr}) and (\ref{gaugeclassical}) are \begin{equation}gin{eqnarray} \hat{\vac R}'&=&\hat{\vac R},\label{gaugequantumR} \\ \hat{\vac P}'&=&\hat{\vac P}, \label{gaugequantumP} \end{eqnarray} and they entail that \begin{equation} \hat{\boldsymbol{\Pi}}'=\hat{\boldsymbol{\Pi}}-e\boldsymbol{\nabla}\alpha.\label{gaugePi}\end{equation} Now in quantum mechanics, the physical information is not only in the (operators representing) dynamical variables themselves, but also in the expectation values, which involve the wave functions. In terms of the expectation values, the rules of Eqs.\ (\ref{gaugequantumR}) and (\ref{gaugequantumP}) turn into the classical gauge results (see Eqs. (\ref{gaugeclassicalr})-(\ref{gaugeclassicalp})), \begin{equation}gin{eqnarray} \langle\psi'|\hat{\vac R}'|\psi'\rangle&=&\langle\psi|\hat{\vac R}|\psi\rangle, \label{gaugeexpectationvalueR}\\ \langle\psi'|\hat{\vac P}'|\psi'\rangle&=&\langle\psi|\hat{\vac P}+e\boldsymbol{\nabla}\alpha|\psi\rangle. \label{gaugeexpectationvalueP} \end{eqnarray} One can arrive at the same conclusion by cooking up the appropriate unitary transformation $\hat U$ designed so that \begin{equation}gin{equation} \psi'({\vac R})=\hat U\psi({\vac R}),\label{TransfPsi} \end{equation} with $\hat U\hat U^{\dagger} {=\hat U^{\dagger}\hat U}=1$ to preserve the norm of $\psi$. If we are to satisfy Eqs.\ (\ref{gaugeexpectationvalueR}) and (\ref{gaugeexpectationvalueP}) then we must have \begin{equation}gin{eqnarray} \hat U^{\dagger}\hat{\vac R} \hat U&=&\hat{\vac R}, \\ \hat U^{\dagger}\hat{\vac P} \hat U&=&\hat{\vac P}+e\boldsymbol{\nabla}\alpha. \end{eqnarray} These two equations are satisfied by the choice~\cite{Cohen321} \begin{equation} \hat U=\exp\left({i\frac{e}{\hbar}\alpha}\right),\end{equation} where we again stress that $\alpha$ depends on $\vac r$ and would in the general case also depend on $t$. In the case of the usual dynamical variables $\hat{\vac R}$ and $\hat{\boldsymbol{\Pi}}$, one has \begin{equation}gin{eqnarray} \hat U\hat{\vac R}\hat U^{\dagger}&=&\hat{\vac R}\hat U^{\dagger}\hat U=\hat{\vac R},\label{unitaryOfR}\\ \hat U\hat{\vac \boldsymbol{\Pi}}\hat U^{\dagger}&=&\hat U\vac (\hat{\vac P}-e\vac A(\hat{\vac R}))\hat U^{\dagger} =\hat{\vac \boldsymbol{\Pi}} -e\boldsymbol{\nabla} \alpha, \label{operatorunitary} \end{eqnarray} since $\hat U$ is only a function of the position operator. These relations coincide with the gauge-transformed counterparts given in Eqs.~(\ref{gaugequantumR}) and (\ref{gaugePi}). This is an important property, which has to do with the gauge invariance of position and mechanical momentum, as we now discuss. Most of the physical quantities ${\cal Q}$ in the theory (here we omit the spin and any other internal properties of the particle) can be expressed in terms of the fundamental dynamical variables, which, in Hamiltonian formalism, are $\hat{\vac R}$ and $\hat{\vac P}$, i.e., ${\cal Q}(\hat{\vac R},\hat{\vac P})$. They are represented by Hermitian operators $\hat Q$, usually referred to as \textit{observables}. (See, for example, Ref.~\onlinecite{Messiah2} for a discussion of the general relation between observables and Hermitian operators). Under gauge transformations, they become $\hat{Q}'\equiv {\cal Q}(\hat{\vac R}',\hat{\vac P}')$. 
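It is instructive to verify explicitly, in the position representation, that the choice $\hat U=\exp({i\frac{e}{\hbar}\alpha})$ indeed reproduces $\hat U^{\dagger}\hat{\vac P}\hat U=\hat{\vac P}+e\boldsymbol{\nabla}\alpha$: since $\hat U$ acts as a multiplication operator, \begin{equation} \hat U^{\dagger}\hat{\vac P}\hat U\,\psi=e^{-i\frac{e}{\hbar}\alpha}(-i\hbar\boldsymbol{\nabla})\bigl(e^{i\frac{e}{\hbar}\alpha}\psi\bigr)=-i\hbar\boldsymbol{\nabla}\psi+e(\boldsymbol{\nabla}\alpha)\psi=(\hat{\vac P}+e\boldsymbol{\nabla}\alpha)\psi, \end{equation} while $\hat U^{\dagger}\hat{\vac R}\hat U=\hat{\vac R}$ holds trivially because $\hat U$ is a function of the position operator alone. The same elementary manipulation governs the behavior, under a change of gauge, of any operator built from $\hat{\vac R}$ and $\hat{\vac P}$.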
Let us consider such an operator $\hat Q$ that has the additional property of being \textit{gauge invariant}, that is, its matrix elements, being possibly associated with the results of measurements, do not depend on the gauge choice: \begin{equation} \langle\psi'|\hat{Q}'|\psi'\rangle=\langle\psi |\hat Q|\psi\rangle.\label{eq-gaugeinvmatel}\end{equation} This requires $\hat Q=\hat U^\dagger\hat Q'\hat U$, or \begin{equation} \hat Q'=\hat U\hat Q\hat U^\dagger.\label{eq-Q}\end{equation} This relation is fundamental to understanding gauge invariance in quantum mechanics. The observable ${\cal Q}$ is gauge \textit{invariant} in the sense that any matrix element takes the same value in different gauges~(\ref{eq-gaugeinvmatel}), but the operator $\hat Q$ representing the quantity has then to be gauge \textit{covariant} (\ref{eq-Q}) in order to achieve this property. This requirement can be satisfied by operators that keep the same form in different gauges, e.g., $\hat{\vac R}$ in Eq.~(\ref{unitaryOfR}), as in the classical realm. But it can also be satisfied by operators that differ in the two gauges, unlike the classical case (e.g., $\hat{\boldsymbol{\Pi}}$ in Eq.~(\ref{operatorunitary})). On the other hand, there also exist operators that keep the same form in two gauges, but that do not obey the gauge covariance property (\ref{eq-Q}) and hence are not gauge invariant (e.g., $\hat{\vac P}'$ in Eq.~(\ref{gaugequantumP}) does not coincide with $\hat U\hat{\vac P}\hat U^\dagger=\hat{\vac P}-e\boldsymbol{\nabla}\alpha$). In Table~I we list different physical properties that are modified (or not) by gauge transformations. Some authors consider gauge-invariant quantities as ``genuine'' physical quantities, and consider non-gauge-invariant quantities to be not genuinely physical.\cite{Cohen321} However, some non-gauge-invariant Hermitian operators, like the canonical linear momentum or canonical angular momentum operators, play fundamental roles in physics. They are the generators of the space group (infinitesimal translation and rotation operators, respectively), and thus, as conserved quantities in closed systems, are central in the Hamiltonian formalism. Moreover, according to Noether's theorem,\cite{BjorkenDrell,Sakurai,WeinbergQM} such quantities are conserved in physical situations where the corresponding symmetry is satisfied by the Hamiltonian. Because conservation laws are of critical importance, we adopt a less extreme position by stating that both gauge-invariant and non-gauge-invariant observables can be associated with physical quantities. We will see below an example of a non-gauge-invariant operator that has a physical interpretation. \begin{equation}gin{table}[h!] \centering \caption{Summary of gauge transformations of essential physical quantities. 
Note that the last relations for the gauge transformation of the Hamiltonian are written here in the general case of a space- and time-dependent gauge transformation (see Eq.~(\ref{EqHGal})).} \begin{equation}gin{ruledtabular} \begin{equation}gin{tabular}{ l cc l l } Classical context & && \multicolumn{2}{c}{\hskip-4em Quantum context }\\ \hline \multicolumn{5}{l}{Gauge vector:}\\ $\vac A'=\vac A+\boldsymbol{\nabla}\alpha$ && && $\vac A'=\vac A+\boldsymbol{\nabla}\alpha$\\ \hline \multicolumn{5}{l}{Gauge-invariant quantities:}\\ $\vac r'=\vac r$ & &\qquad\qquad& $\langle\psi'|\hat{\vac R}'|\psi'\rangle=\langle\psi|\hat{\vac R}|\psi\rangle$ & $\hat{\vac R}'=\hat U\hat{\vac R}\hat U^{\dagger}=\hat{\vac R}$ \\ $\boldsymbol{\pi}'=\boldsymbol{\pi}$ & && $\langle\psi'|\hat{\boldsymbol{\Pi}}'|\psi'\rangle=\langle\psi|\hat{\boldsymbol{\Pi}}|\psi\rangle$& $\hat{\boldsymbol{\Pi}}'=\hat U\hat{\boldsymbol{\Pi}}\hat U^{\dagger}=\hat{\boldsymbol{\Pi}}-e\boldsymbol{\nabla}\alpha$ \\ $\boldsymbol{\lambda}'=\boldsymbol{\lambda}$ & && $\langle\psi'|\hat{\boldsymbol{\Lambda}}'|\psi'\rangle=\langle\psi|\hat{\boldsymbol{\Lambda}}|\psi\rangle$ & $\hat{\boldsymbol{\Lambda}}'=\hat U\hat{\boldsymbol{\Lambda}}\hat U^{\dagger}=\hat{\boldsymbol{\Lambda}}-e\hat{\vac R}\times\boldsymbol{\nabla}\alpha$ \\ $K'=K$ & && $\langle\psi'|\hat{K}'|\psi'\rangle=\langle\psi|\hat{K}|\psi\rangle$ & $\hat{K}'=\hat U\hat{K}\hat U^{\dagger}=\frac {1}{2m}(\hat{\boldsymbol{\Pi}}-e\boldsymbol{\nabla}\alpha)^2$ \\ \hline \multicolumn{5}{l}{Generators of space-time symmetries (regular gauge transformations):}\\ $\vac p'=\vac p+e\boldsymbol{\nabla}\alpha$ & && $\langle\psi'|\hat{\vac P}'|\psi'\rangle=\langle\psi|\hat{\vac P}+e\boldsymbol{\nabla}\alpha|\psi\rangle$ & $\hat{\vac P}'=\hat{\vac P}=-i\hbar\boldsymbol{\nabla}\not=\hat U\hat{\vac P}\hat U^{\dagger}$ \\ $\vac l'=\vac l+e\vac r\times\boldsymbol{\nabla}\alpha$ & && $\langle\psi'|\hat{\vac L}'|\psi'\rangle=\langle\psi|\hat{\vac L}+e\hat{\vac R}\times\boldsymbol{\nabla}\alpha|\psi\rangle$ \qquad\qquad & $\hat{\vac L}'=\hat{\vac L}=-i\hbar\hat{\vac R}\times\boldsymbol{\nabla}\not=\hat U\hat{\vac L}\hat U^{\dagger}$ \\ $H'=H-e\partial_t\alpha$ & && $\langle\psi'|\hat{H}'|\psi'\rangle=\langle\psi|\hat{H}-e\partial_t\alpha|\psi\rangle$ & $\hat{H}'=\hat{H}=i\hbar\partial_t\not=\hat U\hat{H}\hat U^{\dagger}$\\ \end{tabular} \end{ruledtabular} \label{table} \end{table} Let us now discuss the effect of a gauge transformation on a conserved quantity. Assume that for some reason, a physical quantity ${\cal Q}$ should be conserved in a given problem, a property that one expresses in quantum mechanics by the equation \begin{equation} \frac {d}{dt}\langle\hat Q\rangle_{\psi}=0, \end{equation} where $\langle\hat Q\rangle_{\psi}$ is the expectation value of $\hat Q$ in the quantum state $|\psi\rangle$, that is, $\langle\hat Q\rangle_{\psi}=\langle\psi|\hat Q|\psi\rangle$. This implies \begin{equation} \frac i\hbar\langle[\hat H,\hat Q]\rangle_\psi+\langle\partial_t\hat Q\rangle_\psi=0, \end{equation} and, if $\partial_t\hat Q=0$, we have $\langle[\hat H,\hat Q]\rangle_\psi=0$. If we are furthermore working in a gauge such that $\hat Q$ commutes with $\hat H$, the equation is automatically fulfilled. The commutation of an observable with the Hamiltonian implies then the conservation of that observable. As a consequence, the operators $\hat Q$ and $\hat H$ have in this case the same eigenstates. The conservation property should obviously be robust to gauge transformations. 
Hence in a different time-independent gauge with $|\psi'\rangle=\hat U|\psi\rangle$ and \begin{equation} \hat H'=\hat U\hat H\hat U^\dagger-e\partial_t \alpha= \hat U\hat H\hat U^\dagger,\label{EqHGal} \end{equation} it is straightforward to show that a gauge-invariant quantity obeying Eq.~(\ref{eq-Q}) commutes with $\hat H'$: \begin{equation} [\hat H',\hat Q']=[\hat U\hat H\hat U^\dagger,\hat U\hat Q\hat U^\dagger]=\hat U[\hat H,\hat Q]\hat U^\dagger=0. \end{equation} The case of a non-gauge-invariant conserved quantity is more subtle. Consider a quantity like $\hat{\vac P}$ or $\hat{\vac L}$ that satisfies \begin{equation} \hat Q=\hat Q'\ne \hat U\hat Q\hat U^\dagger. \end{equation} It might appear that $[\hat H',\hat Q']\not=0$, i.e., $\hat Q'$ and $\hat H'$ do not share the same eigenstates. Nevertheless $\hat Q'$ is still a conserved quantity in the sense that the {\em expectation value} of the commutator vanishes: \begin{equation} \frac {d}{dt}\langle \hat Q'\rangle_{\psi'}=\frac{i}{\hbar}\langle[\hat H',\hat Q']\rangle_{\psi'}=0. \end{equation} Let us illustrate this property, anticipating the example of angular momentum in a magnetic field with cylindrical symmetry along the $z$ direction, treated classically in Sec.~\ref{SecClass}, and quantum mechanically in Sec.~\ref{SecQM}. Among all gauge choices, we can select one exhibiting the cylindrical symmetry of the problem so that $\hat L_z$ is a conserved quantity ($[\hat H,\hat L_z]=0$). For a different gauge choice obtained from the unitary operator $\hat U$, the corresponding conserved quantity is $\hat U\hat L_z\hat U^{\dagger}$, whereas the canonical angular momentum in this gauge is $\hat L'_z=\hat L_z$ and does not commute with $\hat H'$. However, as $\hat L'_z=\hat U\hat L_z\hat U^{\dagger}+e\partial_{\varphi}\alpha$ (cylindrical coordinates), it is straightforward to show from the periodicity of $\alpha$ that $\langle[\hat H',\hat L_z']\rangle_{\psi'}=0$. This discussion opens a new question since there appears a particular gauge in which the conservation equation takes a simpler form, $[\hat H,\hat L_z]=0$ rather than $\langle[\hat H',\hat L_z]\rangle_{\psi'}=0$. This particular gauge respects the space-time symmetry encoded in the conserved quantity, as we will illustrate in Sec.~\ref{SecQM}. We thus have to distinguish two kinds of physical quantities, both corresponding to observables in quantum mechanics and represented by Hermitian operators. The first ones are gauge-invariant and satisfy Eq.~(\ref{eq-Q}). They are associated with the same quantity in different gauges (like position and velocity) and can be simply measured and interpreted. The second ones, like the canonical momentum or canonical angular momentum, are not gauge-invariant. They represent quantities that, being the generators of space-time transformations, keep the same geometrical meaning but carry different physical content in different gauges. Nevertheless they might be related to fundamental symmetries and then commute with the Hamiltonian in the gauge where the Hamiltonian exhibits the total symmetry of the system. We emphasize that the Hamiltonian itself is such a quantity, $\hat H=\hat K+e\phi$ (with $\hat K$ the kinetic energy). Although not gauge-invariant in the general case, due to the presence of the scalar potential contribution, it governs the time evolution of the system and its role in the physical theory can hardly be overestimated.
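For completeness, the transformation law quoted in Eq.~(\ref{EqHGal}) follows in one line from requiring that $|\psi'\rangle=\hat U|\psi\rangle$ obey the Schr\"odinger equation: since $\partial_t\hat U=i\frac{e}{\hbar}(\partial_t\alpha)\hat U$, \begin{equation} i\hbar\,\partial_t(\hat U\psi)=\hat U\, i\hbar\,\partial_t\psi+i\hbar(\partial_t\hat U)\psi=\bigl(\hat U\hat H\hat U^{\dagger}-e\partial_t\alpha\bigr)\hat U\psi, \end{equation} so that $\hat H'=\hat U\hat H\hat U^{\dagger}-e\partial_t\alpha$, which indeed reduces to $\hat U\hat H\hat U^{\dagger}$ when the gauge function does not depend on time.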
\section{A case study: classical treatment}\label{SecClass} We now consider the classical problem of a particle with charge $e$ subject to a central force and moving in a circular orbit of radius $\rho$. In terms of unit vectors $\vac u_\rho$ and $\vac u_\varphi$, the particle's position is $\vac r=\rho\vac u_\rho$ and its velocity is $\vac v=v_\varphi\vac u_\varphi$. We then turn on a uniform magnetic field perpendicular to the plane of the trajectory. We use a superscript~0 to denote the values of quantities before the application of the field, e.g., the radius $\rho_0$ and velocity $v_{0\varphi}$ are linked to the central force field $F$ by Newton's law, $F=mv_{0\varphi}^2/\rho_0$. Due to the applied magnetic field (approximated as a time-dependent uniform field $\vac B=B(t)\vac u_z$), a time-dependent gauge vector in the cylindrical gauge \begin{equation}\vac A=\frac 12B(t)\rho\vac u_\varphi,\end{equation} is the source of the electromotive force {$e\vac E=-e\partial_t\vac A$ on the charge. This force is due to the action of the induced electric field $\vac E$ that arises because of a changing flux within the circular motion. } If we consider the change in the gauge vector $\delta\vac A=\partial_t\vac A\ \!dt$ associated with the application in the time $dt$ of an infinitesimal magnetic field $\delta\vac B$, the electromagnetic force $-e\partial_t\vac A$ leads to a variation of kinetic energy $\delta(\frac 12 mv_\varphi^2)=mv_\varphi\delta v_\varphi=-e\delta\vac A\cdot\vac v$, hence a modification of the velocity $\delta v_\varphi=-(e/m)\delta\vac A\cdot\vac v/|\vac v|$, which depends on the relative orientation of $\vac v$ and~$\vac A$. This variation of velocity due to the external field is exactly what is needed to keep the trajectory unchanged, because now the total force exerted on the charge is $F+ev_\varphi\delta B$ and coincides to first order with $m(v_{0\varphi}+\delta v_\varphi)^2/\rho$, with the same radius $\rho=\rho_0$ as in the initial state. The action of the field modifies the charge's speed along the circular trajectory, hence causing a change in the magnetic moment of the loop that is \textit{opposite} to~$\vac B$. This is the origin of orbital diamagnetism in this classical model. This result is consistent with the conservation of the canonical angular momentum. The problem, as it was stated here, exhibits rotational symmetry around the $z$ axis at any time and this implies that the canonical angular momentum $l_z=(\vac r\times\vac p)_z$ is conserved. Before the application of the field it is $l_{0z}=mv_{0\varphi}\rho_0$, while in the final state it is computed as $l_z=(mv_\varphi+eA_\varphi)\rho$. As $v_\varphi-v_{0\varphi}=-eA_\varphi/m$, conservation of canonical angular momentum $l_z=l_{0z}$ is ensured. When the magnetic field is applied, the induced electric field, $-e\partial_t\vac A$, leads to a change of kinetic energy and of mechanical angular momentum. In the cylindrical gauge, the canonical angular momentum is conserved. But this is not a gauge-independent quantity and in another gauge that does not exhibit the symmetry of the problem, the canonical angular momentum would not be conserved. It is thus instructive to analyze the same problem with a different choice of gauge. Consider now the Landau gauge $\vac A'=B(t)x\ \!\vac u_y$. 
In cylindrical coordinates it is \begin{equation}\vac A'=\frac 12B(t)\rho\sin(2\varphi)\ \!\vac u_\rho+\frac 12 B(t)\rho(1+\cos(2\varphi))\ \!\vac u_\varphi,\end{equation} and we pass from $\vac A$ to $\vac A'$ via the gauge transformation $\vac A'=\vac A+\boldsymbol{\nabla}\alpha$ with \begin{equation} \alpha(\rho,\varphi)=\frac 14B(t)\rho^2\sin(2\varphi).\label{eq-alpha} \end{equation} The vector potential $\vac A'$ is {\em not} uniform along the trajectory, and this breaks the rotational symmetry in {the} formulation of the problem (e.g., the Hamiltonian explicitly depends on the angle~$\varphi$). The Lagrangian of the particle, \begin{equation} L'=\frac 12m|\vac v|^2-e(\phi'-\vac v\cdot\vac A'),\end{equation} also depends explicitly on $\varphi$ and, as a consequence, the canonical angular momentum is not a conserved quantity. Although $-\partial_t\vac A'\not= -\partial_t\vac A$, the physical problem itself is nevertheless still the same, because in the new gauge there is an additional contribution to the electric field, $-\boldsymbol{\nabla}\phi'$, with $\phi'=-\partial_t\alpha$ in such a way that the force exerted on the charge, $-e(\partial_t\vac A'+\boldsymbol{\nabla}\phi')$, is the same as $-e\partial_t\vac A$ in the cylindrical gauge. The {canonical} angular momentum {of the particle} in the new gauge can be calculated as $l'_z=(\vac r'\times(m\vac v'+e\vac A'))_z=mv_\varphi\rho+eA'_\varphi\rho=l_{0z}+\frac 12eB\rho^2\cos(2\varphi)$ (we use the fact that $\vac r$ and $\vac v$ are gauge-invariant), i.e., it is not conserved, and compared to its expression in the cylindrical gauge, one has \begin{equation} l'_z=l_z+\frac{e}{2\pi}\Phi\cos(2\varphi),\label{eq-lzprime} \end{equation} where $\Phi=B\pi\rho^2$ is the magnetic flux enclosed by the loop. In the non-rotationally-symmetric gauge, the canonical angular momentum $l'_z(t)$ oscillates around an average value that is its value $l_z$ in the cylindrical gauge. There is another way to see what is happening between the two choices of gauge, following an interpretation given by Feynman.\cite{Feynman} The full system under consideration is the particle \textit{and} the field. In the initial state, there is no field and the total canonical angular momentum reduces to the particle's mechanical contribution $mv_{0\varphi}\rho_0$. In the final state where the applied field $\vac B$ has reached its final static value, the contributions to the mechanical momentum can be written for the particle as $mv_{\varphi}\rho$, and for the field~\cite{CohenAtoms} as \begin{equation} \varepsilon_0\int\vac r'\times (\vac E_{e}(\vac r')\times \vac B(\vac r'))d^3r',\label{EqFieldContrib}\end{equation} where $\vac E_e$ is the Coulombic contribution of the particle of charge $e$. Equation~(\ref{EqFieldContrib}) corresponds to the angular momentum transfer from the field to the particle via $\vac E_e(\vac r')$ (note that in an intermediate state when $\vac B$ depends on time, the associated electric field would also contribute to the field angular momentum). 
Due to the Coulombic form of \begin{equation}\vac E_e(\vac r')=\frac{e}{4\pi\varepsilon_0}\frac{\vac r'-\vac r}{|\vac r'-\vac r|^3},\end{equation} the expression written in Eq.~(\ref{EqFieldContrib}) takes the form~\cite{Johnson,Konopinski} \begin{equation} \frac{e}{4\pi}\int \vac r'\times \left(\frac{\vac r'-\vac r}{|\vac r'-\vac r|^3}\times\vac B(\vac r') \right)d^3r'=\vac r\times e\vac A_{\rm sym.}(\vac r), \end{equation} with \begin{equation}\vac A_{\rm sym.}(\vac r)=\frac{\mu_0}{4\pi}\int\frac{\vac j(\vac r')}{|\vac r-\vac r'|}d^3r',\end{equation} the vector potential at the particle's position $\vac r$ in the cylindrical gauge, i.e., with our notations $\vac A_{\rm sym.}(\vac r)=\vac A(\vac r)$. The quantity that is conserved is the canonical angular momentum \begin{equation} \vac l=\vac r\times (m\vac v+e\vac A_{\rm sym.}(\vac r)).\label{Eqlcan}\end{equation} With another gauge choice $\vac A'$, obviously the particle's contribution to the mechanical angular momentum is unchanged, and similarly, the field's contribution (\ref{EqFieldContrib}) is also unchanged since it only depends on the $\vac E$ and $\vac B$ fields, but now \begin{equation} \vac l'=\vac r\times (m\vac v+e\vac A'(\vac r)),\label{Eqlprimecan} \end{equation} differs from Eq.~(\ref{Eqlcan}). Note that subtracting Eq.~(\ref{Eqlcan}) from Eq.~(\ref{Eqlprimecan}), we recover Eq.~(\ref{eq-lzprime}) via the gradient of the gauge function $\alpha$ in Eq.~(\ref{eq-alpha}). The fact that conservation of angular momentum is gauge dependent has been discussed in detail in the literature.\cite{Konopinski,SemonTaylor} In the cylindrical gauge, the Hamiltonian has the full symmetry of the physical problem (we apply an axially symmetric magnetic field here). The solution of the problem exhibits the full symmetry and conservation of angular momentum is satisfied, i.e., $l_z(t)={\rm const}$. In the Landau gauge, the Hamiltonian (or Lagrangian) displays a lower symmetry, which manifests itself as a gauge-dependent oscillation that reflects the original conservation law only on average. The vector potential acquires a physical significance in the symmetric gauge as the linear momentum transfer from the field to the charge.~\cite{Zangwill} This example shows how to interpret physically a non-gauge-invariant quantity, the canonical angular momentum of the particle, or the vector potential, in a particular gauge. \section{Quantum mechanical formulation}\label{SecQM} \subsection{Ring and regular gauge transformations}\label{sec2} Let us now illustrate the same concepts with the same problem treated in quantum mechanics. Consider a single electron without spin moving freely on a circular ring of radius $\rho=a$ in cylindrical coordinates $(\rho,\varphi,z)$. The eigenfunctions are the running waves on the ring, $\psi_{0l_{z}}(\varphi)=(2\pi a)^{-1/2}e^{il_{z}\varphi}$, which are also eigenstates of the canonical angular momentum $\hat{L}_z=-i\hbar\partial_\varphi$ with eigenvalues $\hbar l_{z}$, with $l_{z}\in\mathbb{Z}$. The subscript $0$ is for the initial state of the system with no magnetic field.
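As a reference point, in this field-free situation the energies follow directly from the free Hamiltonian on the ring, $\hat H_0=\hat L_z^2/2ma^2=-(\hbar^2/2ma^2)\,\partial^2_\varphi$: \begin{equation} E_{0l_z}=\frac{\hbar^2 l_z^2}{2ma^2}, \end{equation} so that every level with $l_z\neq 0$ is doubly degenerate ($\pm l_z$); this degeneracy is lifted as soon as a flux threads the ring, as seen below.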
In the presence of a uniform magnetic field $\vac B=B\ \!\vac u_z$ piercing the ring with a flux $\Phi=B\pi a^2$, with the choice of the cylindrical gauge where the gauge vector takes the form $\vac A=\frac 12B\rho\vac \ \!\vac u_\varphi$, the Hamiltonian on the ring in the nonrelativistic limit reduces to \begin{equation}y \hat H&=&\frac{1}{2m}(\hat P_\varphi-eA_\varphi(a))^2\nonumber\\ &=&\frac{\hbar^2}{2ma^2}(-i\partial_\varphi-\Phi/\Phi_0)^2,\label{eqH}\end{equation}y with the ordinary representation of the canonical momentum \begin{equation} \hat P_\varphi=-i\hbar a^{-1}\partial_\varphi,\label{eqp}\end{equation} and $\Phi_0=2\pi\hbar/e$ the quantum unit of flux. Thanks to rotational symmetry, $[\hat H,\hat L_z]=0$ (the Hamiltonian exhibits the symmetry of the physical problem), the eigenfunctions of $\hat H$ are again those of the canonical angular momentum $\hat L_z$, namely \begin{equation}\psi_{l_z}(\varphi)=(2\pi a)^{-1/2}e^{il_z\varphi},\label{eqpsi}\end{equation} with integer $l_z$ values, and the eigenenergies are \begin{equation} E_{l_z}(\Phi)=\frac{\hbar^2}{2ma^2}(l_z-\Phi/\Phi_0)^2, \end{equation} while the eigenvalues of the canonical angular momentum $\hbar l_z$ are unchanged (i.e., conserved). The eigenfunctions are single-valued, $\psi_{l_z}(\varphi+2\pi)=\psi_{l_z}(\varphi)$, i.e., they belong to the Hilbert space with specified boundary conditions \begin{equation} {\cal H}=\{\psi(\varphi) \ | \ \textstyle \int_0^{2\pi}|\psi|^2\,d\varphi <+\infty \ \hbox{and}\ \psi(\varphi+2\pi)=\psi(\varphi)\}.\label{eqHilbert}\end{equation} The magnetic field, breaking time reversal symmetry, induces {an electron current. This contribution} to the persistent current in the ring is given by the $\Phi$-dependent part of the corresponding energy eigenvalue, \begin{equation}y j_\varphi&=&-\frac{\partial E_{l_z}}{\partial\Phi}\nonumber\\ &=&\frac e{m}{\psi_{l_z}}^*(\varphi)(-i\hbar a^{-1}\partial_\varphi-eA_\varphi(a))\psi_{l_z}(\varphi)\nonumber\\ &=&\frac{e\hbar}{2\pi ma^2}(l_z-\Phi/\Phi_0). \end{equation}y We note that, like the mechanical linear momentum, a \textit{mechanical angular momentum}, $\hat \Lambda_z=-i\hbar\partial_\varphi-eaA_\varphi(a)$, related to the angular velocity, {appears in this equation.} It is the relevant quantity needed to calculate a physical quantity like the current density, but $\hat L_z$ is the operator associated with the conservation law in the cylindrical gauge. Instead of the cylindrical gauge $\vac A$, one can use the Landau gauge $\vac A'= \frac 12 B\rho \sin (2\varphi)\ \!\vac u_\rho +\frac 12 B\rho(1+\cos (2\varphi))\ \!\vac u_\varphi$, even though the latter choice is again not adapted to the circular geometry. Equation~(\ref{eq-alpha}) is the gauge function that describes the change of formulation from $\vac A$ to $\vac A'$ and the eigenfunctions on the ring are modified accordingly, \begin{equation}\psi_{l_z}'(\varphi)=\hat U_L\psi_{l_z}(\varphi)= (2\pi a)^{-1/2}\exp{i\Bigl(l_z\varphi+\frac{\Phi}{2\Phi_0}\sin(2\varphi)\Bigr)},\label{eqpsiprime} \end{equation} with \begin{equation} \hat U_L=\exp\Bigl({i\frac{e}{\hbar}\alpha(a,\varphi)}\Bigr)=\exp\Bigl({i\frac{\Phi}{2\Phi_0}\sin(2\varphi)}\Bigr),\end{equation} where the subscript $L$ is for Landau. These eigenfunctions also exhibit the ring periodicity {(see Fig.~\ref{fig1})}. 
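Before turning to the gauge-transformed Hamiltonian, it is worth checking the thermodynamic relation $j_\varphi=-\partial E_{l_z}/\partial\Phi$ invoked above: differentiating the eigenenergy explicitly gives \begin{equation} -\frac{\partial E_{l_z}}{\partial\Phi}=\frac{\hbar^2}{ma^2}\,\frac{1}{\Phi_0}\Bigl(l_z-\frac{\Phi}{\Phi_0}\Bigr)=\frac{e\hbar}{2\pi ma^2}\Bigl(l_z-\frac{\Phi}{\Phi_0}\Bigr), \end{equation} where $\Phi_0=2\pi\hbar/e$ was used in the last step, in agreement with the direct evaluation of the current matrix element.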
The eigenvalues of the gauge-transformed Hamiltonian, \begin{equation} \hat H'=\hat U_L\hat H\hat U_L^{\dagger}=\frac{1}{2m}(-i\hbar a^{-1}\partial_\varphi-eA'_\varphi(a))^2, \end{equation} are of course unchanged by the unitary gauge transformation. \begin{equation}gin{figure}[ht] \begin{equation}gin{center} \includegraphics[width=8cm]{BercheMalterreMedinaFigure01.pdf} \end{center} \caption{The real part of the Landau-gauge $l_z=2$ eigenfunction, for $\Phi/\Phi_0=-2.5$ (solid) and $1/3$ (dashed). }\label{fig1} \end{figure} In the Landau gauge, the canonical angular momentum is not the unitary transform of $\hat L_z$ (that is, $\hat L_z'=\hat L_z\ne \hat U\hat L_z\hat U^\dagger$). As a consequence, the eigenfunctions of $\hat H'$ are \textit{not} eigenstates of the gauge-invariant canonical angular momentum operator $\hat L_z$, \begin{equation} \hat L_z\psi_{l_z}'(\varphi)=\hbar\Bigl(l_z+\frac{\Phi}{\Phi_0}\cos(2\varphi)\Bigr)\psi_{l_z}'(\varphi) \not={\tt scalar}\times\psi_{l_z}'(\varphi).\label{eqNotLz} \end{equation} This result is the quantum mechanical counterpart of Eq.~(\ref{eq-lzprime}). It might be a priori surprising: we are looking at the same problem as in the unprimed gauge, so we expect the same angular momentum in a physical state of the same energy. This is indeed true, as can be observed by the calculation of the \textit{expectation value} given by the matrix element \begin{equation}y \langle\psi_{l_z}'|\hat L_z|\psi_{l_z}'\rangle&=&\int_0^{2\pi}\!a\,d\varphi\, {\psi_{l_z}^{\prime*}}(\varphi) \hbar\Bigl(l_z+\frac{\Phi}{\Phi_0}\cos(2\varphi)\Bigr)\psi_{l_z}'(\varphi)\nonumber\\ &=&\hbar l_z. \end{equation}y What Eq.~(\ref{eqNotLz}) expresses is the fact that the eigenfunctions (\ref{eqpsiprime}) are not eigenstates of the canonical angular momentum because the latter does not commute with the Hamiltonian in the Landau gauge: $[\hat H',\hat L_z]\not=0$. The two operators thus cannot share the same eigenstates. The calculation of the current density also illustrates the differences with the cylindrical gauge, although $j_\varphi=-\partial E_{l_z}/\partial\Phi$ is the same in both gauges. We have for that purpose to evaluate \begin{equation}y j_\varphi&=&\frac em {\psi_{l_z}'}^*(\varphi)\hat \Lambda'_z\psi_{l_z}'(\varphi)\nonumber\\ &=&\frac em {\psi_{l_z}'}^*(\varphi)(-i\hbar a^{-1}\partial_\varphi-e{A'}_\varphi(a))\psi_{l_z}'(\varphi)\nonumber\\ &=&\frac{e\hbar}{2\pi ma^2}(l_z-\Phi/\Phi_0), \end{equation}y and the additional term due to the change of gauge $A_\varphi\to {A'}_\varphi$ is exactly compensated by the action of $ -i\hbar\partial_\varphi$ on the modified wave function, to keep the current the same as in the cylindrical gauge. \subsection{The case of singular gauge transformations}\label{sec3} Let us now consider a multivalued gauge transformation described by the gauge function \begin{equation}\alpha(\varphi)=-\Phi\frac {\varphi}{2\pi}.\end{equation} This transformation is singular in the sense that it does not display the angular periodicity: $\alpha(\varphi+2\pi)\not=\alpha(\varphi)$. Hence the associated unitary transformation is also singular: \begin{equation} \hat U_s=e^{-i(\Phi/\Phi_0)\varphi}, \end{equation} where the subscript $s$ stands for singular. This transformation changes the gauge vector on the ring from $A_\varphi=\frac 12Ba$ to $A''_\varphi=0$, i.e., it appears to completely gauge-away the magnetic field, because $\int_0^{2\pi}a\,d\varphi\ \!A''_\varphi(a)=0$. 
This is nevertheless not true, because the vector potential acquires a radial component $A''_\rho(\rho,\varphi)=-B\rho\varphi$ that is multivalued, and in order to close the integration circuit properly to evaluate $\int_{\cal C}\vac A''\cdot d\vac r$, one has to go around the branch cut of $A''_\rho$ (see Fig.~\ref{figContour}) and this path contributes to the circulation exactly what is needed to recover $\int_{\Sigma({\cal C})}\vac B\cdot d\vac S=\Phi$. The magnetic field is thus absent from the expression for the Hamiltonian on the ring, but still present in the wave function, as we will see. \begin{equation}gin{figure}[ht] \begin{equation}gin{center} \includegraphics[width=6.5cm]{BercheMalterreMedinaFigure02.pdf} \caption{Integration contour for the calculation of the magnetic field flux.} \label{figContour} \end{center} \end{figure} Indeed, under the same transformation, the eigenfunctions become \begin{equation} \psi_{l_z}''(\varphi)=e^{-i(\Phi/\Phi_0)\varphi}\psi(\varphi)=(2\pi a)^{-1/2}e^{i(l_z-\Phi/\Phi_0)\varphi},\label{eqpsiprimeprime}\end{equation} hence they are multivalued in the general case,\cite{Dirac31} since there is no need for the flux $\Phi$ to be equal to an integer number of flux quanta $\Phi_0$. In the new gauge, the eigenstates belong to the Hilbert space (see Fig.~\ref{fig3}) \begin{equation}y {\cal H}''&=&\{\psi(\varphi) \ | \ \textstyle\int_0^{2\pi}|\psi|^2 d\varphi<+\infty \nonumber\\ &&\qquad \hbox{and}\ \psi(\varphi+2\pi)=e^{-i 2\pi{\Phi}/{\Phi_0}}\psi(\varphi)\},\label{eqHilbertprimeprime} \end{equation}y which differs from Eq.~(\ref{eqHilbert}) in the boundary conditions imposed on the allowed quantum states. \begin{equation}gin{figure}[ht] \begin{equation}gin{center} \includegraphics[width=8cm]{BercheMalterreMedinaFigure03.pdf} \end{center} \caption{The real part of the singular-gauge $l_z=2$ eigenfunction, for $\Phi/\Phi_0=-2.5$ (solid) and $1/3$ (dashed). }\label{fig3} \end{figure} The new Hamiltonian is obtained via unitary transformation, as in the case of the non-singular gauge transformation: \begin{equation} \hat H''=\hat U_s\hat H\hat U_s^{\dagger}=\frac{1}{2m}(-i\hbar a^{-1}\partial_\varphi)^2.\label{eqHprimeprime}\end{equation} As claimed above, the gauge vector no longer appears in the expression for the Hamiltonian, but the magnetic flux still enters the problem via the boundary conditions and the multi-valued character of the eigenstates. The eigenvalues are unchanged, \begin{equation}y \hat H''\psi_{l_z}''(\varphi)&=&\frac{\hbar^2}{2ma^2}(l_z-\Phi/\Phi_0)^2\psi_{l_z}''(\varphi), \end{equation}y and the current density, defined via \begin{equation}y j_\varphi&=&\frac ema{\psi_{l_z}''}^*(\varphi)(-i\hbar\partial_\varphi)\psi_{l_z}''(\varphi) \nonumber\\ &=&\frac{e\hbar}{2\pi ma^2}(l_z-\Phi/\Phi_0), \end{equation}y also remains unchanged. {\textcolor{black}{ In this expression}} $\hbar(l_z-\Phi/\Phi_0)$ appears as the non-integer eigenvalues of the mechanical angular momentum.~\cite{Wilczek,KobePRL} In this singular gauged problem, on the other hand, the representation of the canonical angular momentum has to be modified. Indeed, $-i\hbar\partial_\varphi$ acting on Eq.~(\ref{eqpsiprimeprime}) would not produce the proper eigenvalues $\hbar l_z$. 
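Explicitly, acting with the bare derivative on Eq.~(\ref{eqpsiprimeprime}) gives \begin{equation} -i\hbar\partial_\varphi\,\psi_{l_z}''(\varphi)=\hbar\Bigl(l_z-\frac{\Phi}{\Phi_0}\Bigr)\psi_{l_z}''(\varphi), \end{equation} i.e., $-i\hbar\partial_\varphi$ now measures the (generally non-integer) mechanical angular momentum rather than the canonical one.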
For the reciprocal statement, see, e.g., Ref.~\onlinecite{Shankar}: \begin{quote}{\textcolor{black}{The angular momentum $p_\varphi\to -i\hbar\partial/\partial\varphi$ is Hermitian, as it stands, {\em on single-valued functions:} $\psi(\rho,\varphi+2\pi)=\psi(\rho,\varphi)$.}} \end{quote}
This problem has been discussed in the literature~\cite{Kretzschmar65,Riess72} and the correct representation of the canonical angular momentum that acts in the Hilbert space of multivalued square-integrable complex functions has to incorporate the boundary conditions as \begin{equation} \hat L''_z=-i\hbar\partial_\varphi+\hbar\frac{\Phi}{\Phi_0}. \end{equation}
The eigenvalues of ${\hat L''}_z$ are integer multiples of~$\hbar$, as we expect from the Lie algebra of orbital angular momentum: acting on Eq.~(\ref{eqpsiprimeprime}), one finds $\hat L''_z\psi_{l_z}''=\hbar(l_z-\Phi/\Phi_0)\psi_{l_z}''+\hbar({\Phi}/{\Phi_0})\psi_{l_z}''=\hbar l_z\,\psi_{l_z}''$. This property is often overlooked in the literature, but it can be proven by showing that this angular momentum, and not just $-i\hbar\partial_\varphi$, is indeed the generator of rotations. A $2\pi$ rotation acting on the multivalued gauged wave functions leads to
\begin{eqnarray} {\cal R}_{2\pi}\psi''(0)&\equiv& e^{-\frac i\hbar 2\pi \hat L''_z}\psi''(0)\nonumber\\ &=&e^{-\frac i\hbar 2\pi (-i\hbar\partial_\varphi+\hbar\frac{\Phi}{\Phi_0})}\psi''(0)\nonumber\\ &=&e^{-i 2\pi\Phi/\Phi_0}\psi''(-2\pi)\nonumber\\ &\equiv&\psi''(0). \end{eqnarray}
\subsection{Comparison between the two approaches}
The regular and singular gauge transformations are rigorously equivalent and, as expected, they correspond to two different ways of dealing with the same physical problem: either with explicit operator representation as in Eqs.~(\ref{eqH}) and (\ref{eqp}), and periodic wave functions (\ref{eqpsi}) belonging to the Hilbert space (\ref{eqHilbert}); or via non-integrable phases encoded in the wave functions (\ref{eqpsiprimeprime}), which belong to the space (Eq.~(\ref{eqHilbertprimeprime})) acted on by the \textit{free-particle} Hamiltonian, Eq.~(\ref{eqHprimeprime}). The first approach can be considered as the standard physicist's way of implementing a gauge interaction: Starting from the free-particle problem, the minimal coupling prescription $\hat{\vac P}=-i\hbar\boldsymbol{\nabla}\to\hat{\vac P}'=\hat{\vac P}-e\vac A=-i\hbar\boldsymbol{\nabla}-e\vac A$ is implemented in the free-particle Hamiltonian, leading to an interaction term that is apparent, and one searches for ``well behaved'' eigenfunctions, i.e., with well-defined phases. Operators there (e.g. $\hat{\vac P}=-i\hbar\boldsymbol{\nabla}$) keep their ordinary forms (they are not gauge transformed). The second approach may be considered more as following Weyl's program of geometrization of electrodynamics:~\cite{WuYang75} In this approach, the interaction is not apparent in the representation, the Hamiltonian still being that of the free particle, but it is present in the non-integrable phase, i.e., it has been geometrized. Dirac has a very illuminating discussion on this question, which we highly recommend to students.~\cite{Dirac31} Since Dirac's exposition of the reasoning is so penetrating, we quote below a relevant excerpt:
\begin{quote} We express $\psi$ in the form $$(2)\qquad\qquad \psi = Ae^{i\gamma}$$ where $A$ and $\gamma$ are real functions of $x,y,z$ and $t$, denoting the amplitude and phase of the wave function. For a given state of motion of the particle, $\psi$ will be determined except for an arbitrary constant numerical coefficient, which must be of modulus unity if we impose the condition that $\psi$ shall be normalised.
The indeterminacy in $\psi$ then consists in the possible addition of an arbitrary constant to the phase $\gamma$. Thus the value of $\gamma$ at a particular point has no physical meaning and only the difference between the values of $\gamma$ at two different points is of any importance.\,\dots\ Let us examine the conditions necessary for this non-integrability of phase not to give rise to ambiguity in the applications of the theory.\,\dots\ For the mathematical treatment of the question we express $\psi$ more generally than ($2$), as a product $$(3)\qquad\qquad \psi = \psi_1e^{i\beta}$$ where $\psi_1$ is any ordinary wave function (i.e., one with a definite phase at each point) whose modulus is everywhere equal to the modulus of $\psi$. The uncertainty of phase is thus put in the factor $e^{i\beta}$. This requires that $\beta$ shall not be a function of $x,y,z,t$ having a definite value at each point, but $\beta$ must have definite derivatives $$\kappa_x=\frac{\partial\beta}{\partial x}, \ \kappa_y=\frac{\partial\beta}{\partial y}, \ \kappa_z=\frac{\partial\beta}{\partial z}, \ \kappa_0=\frac{\partial\beta}{\partial t}, \ $$ at each point, which do not in general satisfy the conditions of integrability $\partial\kappa_x/\partial y=\partial\kappa_y/\partial x$, etc.\,\dots\ From ($3$) we obtain $$(5)\qquad\qquad -i h\frac{\partial}{\partial x}\psi=e^{i\beta}\left(-ih\frac{\partial}{\partial x}+h\kappa_x\right)\psi_1,$$ with similar relations for the $y$, $z$ and $t$ derivatives. It follows that if $\psi$ satisfies any wave equation, involving the momentum and energy operators $\vac p$ and $W$, $\psi_1$ will satisfy the corresponding wave equation in which $\vac p$ and $W$ have been replaced by $\vac p+h\boldsymbol{\kappa}$ and $W-h\kappa_0$ respectively. Let us assume that $\psi$ satisfies the usual wave equation for a free particle in the absence of any field. Then $\psi_1$ will satisfy the usual wave equation for a particle with charge $e$ moving in an electromagnetic field whose potentials are $$(6)\qquad\qquad \vac A=\frac{hc}{e}\boldsymbol{\kappa},\quad\vac A_0=-\frac{h}{e}\kappa_0.$$ Thus, since $\psi_1$ is just an ordinary wave function with a definite phase, our theory reverts to the usual one for the motion of an electron in an electromagnetic field. This gives a physical meaning to our non-integrability of phase. We see that we must have the wave function $\psi$ always satisfying the same wave equation, whether there is a field or not, and the whole effect of the field when there is one is in making the phase non-integrable. The components of the 6-vector $ \vac{curl}\ \!\boldsymbol{\kappa}$ are, apart from numerical coefficients, equal to the components of the electric and magnetic fields $\vac E$ and $\vac H$. They are, written in three-dimensional vector-notation, $$(7)\qquad\qquad\vac {curl}\ \!\boldsymbol{\kappa} =\frac{e}{hc}\vac H,\quad\vac{grad}\ \!\kappa_0-\frac{\partial\boldsymbol{\kappa}}{\partial t}=\frac{e}{h}\vac E.$$ The connection between non-integrability of phase and the electromagnetic field given in this section is not new, being essentially just Weyl's Principle of Gauge Invariance in its modern form. \end{quote}
\section{Discussion: Geometrization of physics}
The first theory in which gauge symmetry plays its full role as we understand it today is Einstein's theory of gravitation, general relativity.
There, the gravitational interaction is ``geometrized,'' i.e., instead of considering the motion of a point particle in space-time, subject to gravitational interaction with, for example, massive particles, the point particle follows geodesics, which are just free-fall trajectories in a curved space-time with metric $ds^2=g_{\mu\nu}dx^\mu dx^\nu$. Free fall is understood as the motion of a free particle, its Lagrangian being purely kinetic energy. The interaction enters, via the gravitational potential, into the metric tensor $g_{\mu\nu}$ of the curved space-time according to Einstein's field equations.~\cite{Einstein1915} The geometry of the underlying space-time is Riemannian geometry, in which vector lengths are invariant (e.g., $ds^2$), but their orientation (the phase in our electromagnetic examples) is not integrable, i.e., two vectors that follow parallel transport along a space-time curve will see their orientations change in a curved manifold in a manner that is path dependent (non-integrability of the orientation), but their relative orientation will remain unchanged (see Fig.~\ref{FigGTR}).
\begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{BercheMalterreMedinaFigure04.pdf} \caption{Einstein gravitation. (a) Parallel transport of a vector between the same starting and ending points along two distinct curves (solid and dashed) leads to vectors of different orientations in a curved space. (b) Parallel transport keeping two vectors' lengths conserved and relative orientations fixed.} \label{FigGTR} \end{center} \end{figure}
\begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{BercheMalterreMedinaFigure05.pdf} \caption{ Weyl gravitation. (a) Parallel transport of a vector between the same starting and ending points along two distinct curves leads to vectors of different orientations \textit{and lengths}. (b) Parallel transport, not conserving vector lengths, but keeping length ratios and relative orientations preserved.} \label{FigWeylG} \end{center} \end{figure}
The gauge principle was later elaborated by Weyl in his 1918 paper~\cite{Weyl1918} where he considered, in addition to the quadratic form $ds^2$, a linear form $d\ell = \ell_\mu dx^\mu$, which enables measuring the length of vectors, and he relaxed the length-conservation constraint of Riemannian geometry. Now, not only the orientation but also the length of a vector is non-integrable: two vectors that follow parallel transport along a space-time curve will now see their lengths and their orientations change in a path-dependent manner, but their relative lengths and relative orientation remain unchanged (see Fig.~\ref{FigWeylG}). The quantity $\ell_\mu$, which allows for this non-integrability of length, was identified by Weyl as $\ell_\mu=(e/\gamma) A_\mu$, where $A_\mu$ is the gauge vector of electrodynamics, {\textcolor{black}{and $\gamma$ is an unknown constant with the dimensions of an action}}. This identification is similar to the way $g_{\mu\nu}$ encodes the gravitational potential in Einstein's theory. The square of the length $||v||^2=g_{\mu\nu}v^\mu v^\nu$ of a vector $v^\mu$ transported between points $A$ and $B$ along a curve ${\cal C}$ is now path dependent and is determined by the ``gauge field'' $A_\mu$: $||v_B||^2=||v_A||^2 \exp\bigl((e/\gamma)\int_{\cal C}A_\mu dx^\mu\bigr)$.
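To make this path dependence concrete, the following minimal numerical sketch evaluates the Weyl length factor $\exp\bigl((e/\gamma)\int_{\cal C}A_\mu dx^\mu\bigr)$ along two different paths joining the same endpoints in a uniform magnetic field; the values of $B$ and $e/\gamma$, the symmetric-gauge choice, and the two rectangular paths are arbitrary illustrative assumptions. The two length factors differ by $\exp\bigl((e/\gamma)\Phi\bigr)$, with $\Phi$ the flux through the closed loop formed by the two paths.
\begin{verbatim}
import numpy as np

# Illustrative values only: uniform field B in the symmetric gauge
# A = (-B*y/2, B*x/2), and two rectangular paths from (0,0) to (1,1).
B = 1.0
e_over_gamma = 0.3

def A(x, y):
    return -B * y / 2.0, B * x / 2.0

def line_integral(points, n=2000):
    """Approximate int A . dl along the piecewise-linear path through `points`."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        t = np.linspace(0.0, 1.0, n)
        Ax, Ay = A(x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        total += np.mean(Ax * (x1 - x0) + Ay * (y1 - y0))  # int_0^1 (...) dt
    return total

path1 = [(0, 0), (1, 0), (1, 1)]   # along x first, then y
path2 = [(0, 0), (0, 1), (1, 1)]   # along y first, then x

factor1 = np.exp(e_over_gamma * line_integral(path1))  # ||v_B||^2/||v_A||^2, path 1
factor2 = np.exp(e_over_gamma * line_integral(path2))  # same quantity, path 2
flux = B * 1.0                                         # flux through the unit square
print(factor1 / factor2, np.exp(e_over_gamma * flux))  # the two numbers coincide
\end{verbatim}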
In spite of its beautiful mathematical construction, incorporating gravitation and electromagnetism in a single unified theory, Weyl's approach did not survive Einstein's criticism since it was unsuccessful at describing the physical world: It predicted that the time measured by a clock (e.g., frequencies given by atomic spectra) would depend on its history, a prediction that has never been observed experimentally. Soon after Weyl's work, Schr\"odinger, London, and Fock noticed that Weyl's theory could be adapted to quantum mechanics,~\cite{O'Raifeartaigh} essentially at the price of allowing the constant prefactor between $\ell_\mu$ and $A_\mu$ to be purely imaginary. Instead of non-integrable lengths, the theory now turns into one with non-integrable phases, as was synthesized by Weyl in his 1929 paper,\cite{Weyl1929} which can be considered as the birth of modern gauge theory. The mathematical object that is now transported is the wave function $\psi$, and the constant $\gamma$, as discussed by Schr\"odinger, with dimensions of an action, is identified as $\hbar$: \begin{equation} \psi=\psi_0e^{i(e/\hbar)\int_{\cal C}A_\mu dx^\mu}. \end{equation} The connection $A_\mu$ that acts to connect vectors or wave functions between different space-time points is peculiar in the sense that it is a non-integrable function. This means that the function has no definite value at each point in space while it has a definite derivative. In the different contexts we have discussed, the orientation of a vector in general relativity, the length of the vector in Weyl's theory, and the phase of the wave function in quantum mechanics are all non-integrable in this sense.
\begin{acknowledgments} EM and BB are respectively grateful to the University of Lorraine and to IVIC for invitations. They also thank the CNRS and FONACIT for support through the ``PICS'' programme {\it Spin transport and spin manipulations in condensed matter: polarization, spin currents and entanglement}. BB and DM benefited from useful discussions with the group ``Connexions'' between mathematicians, historians, and physicists at Universit\'e de Lorraine. \end{acknowledgments}
\section*{References}
\def\paper#1#2#3#4#5{{#1,}{\ {\it #2}}{\ {\bf #3},}{\ #4}{\ (#5).}}
\def\papertitle#1#2#3#4#5#6{{#1,}{\ {\it #2}}{\ {\bf #3},}{\ #4}{\ (#5).}}
\begin{thebibliography}{50}
\bibitem{WuYang06} A. C. T. Wu and C. N. Yang, ``Evolution of the concept of the vector potential in the description of fundamental interactions,'' Int. J. Mod. Phys. A {\bf 21}, 3235--3277 (2006).
\bibitem{Calkin} M. G. Calkin, ``Linear Momentum of Quasistatic Electromagnetic Fields,'' Am. J. Phys. {\bf 34}, 921--925 (1966).
\bibitem{Konopinski} E. J. Konopinski, ``What the electromagnetic vector potential describes,'' Am. J. Phys. {\bf 46}, 499--502 (1978).
\bibitem{Johnson} F. S. Johnson, B. L. Cragin, and R. R. Hodges, ``Electromagnetic momentum density and the Poynting vector in static fields,'' Am. J. Phys. {\bf 62}, 33--41 (1994).
\bibitem{SemonTaylor} M. D. Semon and J. R. Taylor, ``Thoughts on the magnetic vector potential,'' Am. J. Phys. {\bf 64}, 1361--1369 (1996).
\bibitem{O'Raifeartaigh} L. O'Raifeartaigh, \textit{The Dawning of Gauge Theory}, (Princeton University Press, Princeton, 1997).
\bibitem{AharonovBohm} Y. Aharonov and D. Bohm, ``Significance of Electromagnetic Potentials in the Quantum Theory'', Phys. Rev. {\bf 115}, 485-491 (1959).
\bibitem{ChengLi} T.P. Cheng and L.-F. Li, ``Resource Letter: GI-1 Gauge invariance'', Am. J.
Phys. {\bf 56}, 586-600 (1988).
\bibitem{Weyl1918} H. Weyl, ``Gravitation und Elektrizit\"at'', Sitzber. Preuss. Akad. Wiss. {\bf 465} 29-42 (1918) [English translation: ``Gravitation and Electricity'', appeared in \textit{The Principle of Relativity (A collection of original memoirs on the special and general relativity by H.A. Lorentz, A. Einstein, H. Minkowski and H. Weyl; with notes by A. Sommerfeld)}, (Dover Publications, New-York, 1952), pp. 200-216].
\bibitem{Pauli1958} W. Pauli, \textit{Theory of relativity}, (Pergamon Press, London, 1958).
\bibitem{Weyl1929} H. Weyl, ``Elektron und Gravitation. I.'', Zeit. f\"ur Physik {\bf 56}, 330-352 (1929) [An English translation: ``Electron and gravitation'', appeared in O'Raifeartaigh Ref. [6], pp. 121-144].
\bibitem{YangMills54} C.N. Yang and R.L. Mills, ``Conservation of isotopic spin and isotopic gauge invariance'', Phys. Rev. {\bf 96}, 191-195 (1954).
\bibitem{Ryder} L.H. Ryder, \textit{Quantum Field Theory}, Second Edition, (Cambridge University Press, Cambridge, 1996).
\bibitem{Ramond} P. Ramond, \textit{Field theory: a modern primer}, (Westview Press, Boulder, 1990).
\bibitem{Dirac31} P.A.M. Dirac, ``Quantised Singularities in the Electromagnetic Field'', Proc. Roy. Soc. A {\bf 133}, 60-72 (1931).
\bibitem{Pauli} W. Pauli, ``Relativistic Field Theories of Elementary Particles'', Rev. Mod. Phys. {\bf 13}, 203-232 (1941).
\bibitem{WuYang75} T.T. Wu and C.N. Yang, ``Concept of nonintegrable phase factors and global formulation of gauge fields'', Phys. Rev. D {\bf 12}, 3845-3857 (1975).
\bibitem{Landau} L.D. Landau and E.M. Lifshitz, \textit{Course of Theoretical physics, Volume 3: Quantum Mechanics - Non-relativistic Theory}, Third edition, (Pergamon Press, Oxford, 1977), p. 53.
\bibitem{Merzbacher} E. Merzbacher, \textit{Quantum Mechanics}, Third edition, (John Wiley, New-York, 1998), p. 243.
\bibitem{Cohen} C. Cohen-Tannoudji, B. Diu and F. Laloe, \textit{Quantum mechanics}, vol. 1, (Wiley, New-York, 1996), p. 663.
\bibitem{Messiah} A. Messiah, \textit{Quantum Mechanics}, vol. 1, (North Holland, Amsterdam, 1961), p. 196.
\bibitem{Liboff} R.L. Liboff, \textit{Introductory Quantum Mechanics}, (Addison-Wesley, Reading, 1980), p. 320.
\bibitem{MerzbacherAJP} E. Merzbacher, ``Single Valuedness of Wave Functions'', Am. J. Phys. {\bf 30}, 237-247 (1962).
\bibitem{Schiff} L.I. Schiff, \textit{Quantum Mechanics}, (McGraw-Hill, New-York, 1949), p. 29.
\bibitem{Gottfried} K. Gottfried and T.M. Yan, \textit{Quantum Mechanics: Fundamentals}, Second edition (Springer, New-York, 2003), p. 116.
\bibitem{Weisskopf} J.M. Blatt and V.F. Weisskopf, \textit{Theoretical Nuclear Physics}, (Springer, New-York, 1979), p. 783.
\bibitem{Ballentine} L.E. Ballentine, \textit{Quantum Mechanics, A Modern Development}, (World Scientific, Singapore, 1998), p. 169.
\bibitem{Cohen321} C. Cohen-Tannoudji, B. Diu and F. Laloe, \textit{Quantum Mechanics}, vol. 1, (Wiley, New-York, 1996) p. 321.
{\bibitem{Messiah2} A. Messiah, \textit{Quantum Mechanics}, vol. 1, (North Holland, Amsterdam, 1961), Chap. VII, Sec. 9.}
\bibitem{BjorkenDrell} J.D. Bjorken and S.D. Drell, \textit{Relativistic Quantum Fields}, (McGraw-Hill, New-York, 1956), p. 17.
\bibitem{Sakurai} J.J. Sakurai, \textit{Modern Quantum Mechanics}, (Addison-Wesley, Reading, 1994), p. 248.
\bibitem{WeinbergQM} S. Weinberg, \textit{Lectures on Quantum Mechanics}, second edition, (Cambridge University Press, Cambridge, 2015), p. 74.
\bibitem{Feynman} R.P. Feynman, R.B. Leighton, M.
Sands, \textit{The Feynman lectures on physics}, (Addison-Wesley publishing company, Reading, 1963), vol. 2, chapters 17-4 and 27-6.
\bibitem{CohenAtoms} C. Cohen-Tannoudji, J. Dupont-Roc and G. Grynberg, \textit{Photons and Atoms. Introduction to Quantum Electrodynamics}, (Wiley, New York, 1989).
\bibitem{Zangwill} A. Zangwill, \textit{Modern Electrodynamics}, (Cambridge University Press, Cambridge, 2013), p. 515.
\bibitem{Wilczek} F. Wilczek, ``Magnetic Flux, Angular Momentum, and Statistics'', Phys. Rev. Lett. {\bf 48}, 1144-1146 (1982).
\bibitem{KobePRL} D.H. Kobe, ``Comment on: Magnetic Flux, Angular Momentum, and Statistics'', Phys. Rev. Lett. {\bf 49}, 1592 (1982).
\bibitem{Shankar} R. Shankar, \textit{Principles of Quantum Mechanics}, Second edition, (Plenum Press, New-York, 1994), p. 216.
\bibitem{Kretzschmar65} M. Kretzschmar, ``Must Quantal Wave Functions be Single-Valued?'', Z. Phys. {\bf 185}, 73-83 (1965).
\bibitem{Riess72} J. Riess, ``Single-valued and multi-valued Schr\"odinger wave functions'', Helv. Phys. Acta {\bf 45}, 1066-1073 (1972).
\bibitem{Einstein1915} A. Einstein, ``Die Grundlage der allgemeinen Relativit\"atstheorie'', Annalen der Physik {\bf 49}, 769-822 (1916) [English translation: ``The Foundation of the General Theory of Relativity'', appeared in \textit{The Principle of Relativity (A collection of original memoirs on the special and general relativity by H.A. Lorentz, A. Einstein, H. Minkowski and H. Weyl; with notes by A. Sommerfeld)}, (Dover Publications, New-York, 1952)].
\end{thebibliography}
\end{document}
\begin{document} \maketitle \doublespacing \vspace{-1.5cm}
\begin{center} {\large Shonosuke Sugasawa$^1$\footnote{Email: [email protected]}, Kosaku Takanashi$^2$ and Kenichiro McAlinn$^3$} \medskip \medskip \noindent $^1$Faculty of Economics, Keio University\\ $^2$RIKEN Center for Advanced Intelligence Project\\ $^3$Department of Statistics, Operations, and Data Science, Fox School of Business, Temple University\\ \end{center} \vspace{0.5cm}
\begin{center} {\bf \large Abstract} \end{center}
We propose a novel Bayesian methodology to mitigate misspecification and improve the estimation of treatment effects. A plethora of methods to estimate the treatment effect, particularly the heterogeneous treatment effect, have been proposed with varying success. It is recognized, however, that the underlying data generating mechanism, or even the model specification, can drastically affect the performance of each method, without any way to compare its performance in real world applications. Using a foundational Bayesian framework, we develop Bayesian causal synthesis: a supra-inference method that synthesizes several causal estimates to improve inference. We provide a fast posterior computation algorithm and show that the proposed method provides consistent estimates of the heterogeneous treatment effect. Several simulations and an empirical study highlight the efficacy of the proposed approach compared to existing methodologies, providing improved point and density estimation of the heterogeneous treatment effect.
\vspace{-0cm} \bigskip\noindent {\bf Key words}: Bayesian predictive synthesis; causal inference; ensemble learning; Gaussian process
\section{Introduction}\label{sec:Intro}
One of the central problems of statistics is assessing, comparing, and mitigating model misspecification. In predictive contexts, this is fairly straightforward: the accuracy of a model is measured by how well it predicts future data. By having ``direct'' feedback (e.g. squared error, predictive likelihood, etc.), we can measure the efficacy of any proposed method-- either model selection or combination-- through its predictive performance. For causal inference methods, we have a starkly different picture. As the potential outcome of a unit can never be observed (e.g. we can never observe what would have happened if a patient did not take a drug, if they did), there is no direct way to measure how well a method is ``predicting'' the potential outcome. The causal inference literature has traditionally dealt with this problem through the design of experiments, assuring that, under some assumptions, the treatment effect can be measured \citep{rubin1974estimating,rubin2008forobjective}. With the broadening of applications in causal inference, such as to observational studies, however, these assumptions are harder to meet or justify.
This has led to developments of more model-based methods that allow for complex analyses of the treatment effect, particularly in the Bayesian literature \citep{gustafson2006curious,hill2011bayesian,heckman2014treatment,li2014bayesian,hahn2018regularization,roy2018bayesian,sugasawa2019estimating}, but these methods are then susceptible to issues of model misspecification. One particularly relevant and prominent example of this is the problem of estimating heterogeneous treatment effects \citep[HTE:][]{green2012modeling,athey2016recursive,taddy2016nonparametric,henderson2016bayesian,powers2018some,kunzel2019metalearners,syrgkanis2019machine}, where modeling assumptions are needed to measure the heterogeneity. Recent advances in causal inference-- particularly as it intersects with machine learning-- have led to developments in new methodologies in estimating the HTE, such as causal forest \citep[CF:][]{wager2018estimation,athey2019estimating,athey2019generalized} and causal BART \citep[BCF:][]{hahn2020bayesian,linero2022mediation}, which have both received considerable attention. Part of the appeal of these methods is their flexibility, achieved through sum-of-regression-trees. However, despite their flexibility, these methods can produce disparate inference, as they are sensitive to model specification, priors, etc., particularly when the number of observations is small. This leads to significant uncertainty on the user's part, as there is no ``right'' method/specification for all scenarios and situations. This is evident from large-scale simulation studies and causal inference competitions that show that no single method performs the best in every problem or scenario \citep{dorie2017aciccomp2016,wendling2018comparing,carvalho2019assessing,dorie2019automated,hahn2019atlantic,mcconnell2019estimating}. Yet, there is no straightforward way to deal with model uncertainty in this context, due to the aforementioned problem of unobserved potential outcomes, and standard model selection techniques (such as information criteria or cross validation) are not adequate. We contribute to this literature by proposing a formal Bayesian approach to synthesize multiple HTE estimates, which we call Bayesian causal synthesis (BCS). Rather than finding the best model/specification, our approach leverages the discrepancies between different estimates, learns from their biases and dependencies, and improves HTE estimation through a synthesis of information. By not relying on a single estimate, we are effectively conducting supra-inference, which not only robustifies estimation, but also expands the model space to better approximate the underlying heterogeneity. The idea of using multiple estimates to improve estimation-- often coined as ensemble methods-- is not new \citep[in fact, the idea can be traced back to, at least,][]{galton1907vox}. In the predictive literature, ensemble methods have been used extensively to improve predictions by reducing model uncertainty \citep{bates1969combination,hall2007combining,wang2022forecast}. As the task typically involves minimizing some predictive criteria, developing ensemble methods (including estimating ensemble weights) is straightforward; at the very least, the efficacy can be directly assessed through predictive accuracy, even in real world data.
This, again, is not the case for causal inference. Nonetheless, many causal inference methods-- implicitly or explicitly-- rely on ensembling ideas. For example, CF and BCF both use a sum of trees, which is an averaging of individual trees (in the case of CF, this is an equal weight average), and \cite{zigler2014uncertainty} uses model averaging to model the propensity score. Related to this paper, \cite{han2022ensemble} propose a stacking approach to ensemble individual treatment effects. Three problems make ensemble methods for causal inference particularly difficult. The first is that there is no clear target to minimize, and it is not obvious whether modeling the outcome leads to improved estimation of the HTE. The second, which relates to the first, is that there exists performance heterogeneity in estimating HTEs. In other words, one estimator can be good at estimating the treatment effect on one group but poor in another, and vice versa. Thus, minimizing some criteria in a sweeping manner (as is done in standard linear averaging) cannot capture this performance heterogeneity. The third is that these estimates are highly dependent, due to their shared usage of data, modeling structure, etc., making simple ensembling (e.g. linear averaging, such as stacking)-- which implicitly assumes that each estimate is independent-- inadequate. Existing ensemble methods cannot take into account, or mitigate, these issues, due to their construction and assumptions. Our BCS approach mitigates all three issues through the usage of the Bayesian predictive synthesis (BPS) framework \citep[][]{mcalinn2019dynamic}. While the BPS framework has been shown to be superior to other (benchmark and state-of-the-art) ensemble methods in predictive tasks \citep[e.g.][]{mcalinn2020multivariate,cabel2022spatially}, the extension to causal inference has not been done due to the aforementioned problems (particularly the problem of lacking direct, predictive feedback). For BCS, we develop a synthesis function that has two components: the first is the prognostic term that includes the propensity score, and the second synthesizes the HTE estimates-- treated as latent factors-- through a Gaussian process. This approach allows for the development of fast posterior computation. Further, we show that this approach achieves consistency with regard to the HTE estimate, providing theoretical justification for our approach. Comparisons using several simulation data and competition data show that our approach improves upon existing approaches, in terms of both mean and coverage probabilities. Particularly, using the dataset from the Atlantic causal inference conference data analysis challenge (2017), we outperform competing strategies in the most difficult scenarios, achieving near ideal coverage probability (approximately 2\% within the 95\% interval for all scenarios considered). Finally, we demonstrate the flexibility of our approach in a re-analysis of data from an observational study of the effect of smoking on medical expenditures. This paper is organized as follows. In Section~\ref{sec:BCS}, we introduce the problem statement and notation, then develop the BCS framework and computation algorithm. Section~\ref{sec:theory} investigates the theoretical property of BCS, and shows that the HTE estimate is consistent.
Simulation studies and comparisons using competition data are presented in Section~\ref{sec:sim}. Section~\ref{sec:app} presents an empirical application to medical expenditure data. Finally, Section~\ref{sec:summ} concludes the paper with summary comments.
\section{Bayesian Causal Synthesis \label{sec:BCS}}
\subsection{Problem setting and notation}
Let $T\in \{0,1\}$ be the binary treatment indicator, with $T=1$ indicating being treated and $T=0$ indicating the opposite. Let $Y^{(T)}$ denote the potential outcome indicating the response of a subject if the subject receives treatment $T$. The following methods can be generally adapted to various types of outcomes. In practice, we can only observe $Y=Y^{(T)}$ for each subject, that is, each patient has potential outcomes $Y^{(1)}$ and $Y^{(0)}$, but only one of them can be observed as $Y$. We assume that $X$, a $q$-dimensional covariate vector, is available. Hence, the observed data consists of sets of these variables: $\{(y_i, x_i, T_i),\ i=1,\ldots,n\}$. Our main goal is estimating the heterogeneous treatment effect (HTE): the difference between the response $Y$, under $T=1$ and $T=0$, in hypothetical worlds, averaged across subpopulations defined by the covariates, $X$. This kind of counterfactual estimand can be formalized in the potential outcome framework \citep{rubin2005causal,imbens2015causal}. We make the stable unit treatment value assumption (SUTVA) throughout \citep[excluding interference between units and multiple versions of treatment;][]{imbens2015causal}. Thus, we observe the potential outcome that corresponds to the realized treatment, $y_i=T_iY_i^{(1)}+(1-T_i)Y_i^{(0)}$. Throughout the paper, we assume the strong ignorability condition that $(Y_i^{(1)}, Y_i^{(0)})$ and $T_i$ are independent given $x_i$, and also that $0<P(T_i=1|x_i)<1$ for $i=1,\ldots,n$. The first condition assumes that there are no unmeasured confounders, and the second condition is necessary to estimate treatment effects given arbitrary covariates. Under these conditions, it holds that $E[Y_i^{(T_i)}|x_i]=E[y_i|x_i,T_i]$, so that the HTE can be expressed as $$ \tau(x_i)\equiv E[y_i|x_i,T_i=1]-E[y_i|x_i,T_i=0]. $$ As noted in Section~\ref{sec:Intro}, there are a variety of methods to estimate $\tau(X)$ from the observed data, and our goal is to develop a methodology to synthesize multiple estimators to make supra-inference on $\tau(X)$. Specifically, let ${\widehat{\tau}}_j(X) \ (j=1,\ldots,J)$ be an estimator of $\tau(X)$ by the $j$-th method and $s_j(X)$ be the associated standard error of ${\widehat{\tau}}_j(X)$.
Then, an approximated posterior of $\tau(X)$, which we denote as $h_j(X)$, can be defined as a normal distribution with mean ${\widehat{\tau}}_j(X)$ and variance $s_j^2(X)$.
\subsection{Synthesis model for multiple treatment effect estimates \label{sec:BCSintro}}
Let $f_j(X)$ be a random variable following the $j$-th (approximated) posterior distribution of $\tau(X)$, $h_j(f(X))$, which comprises the set, $\mathcal{H}=\{h_1(\cdot),\ldots,h_J(\cdot)\}.$ Then, in the BPS framework, the set, $\mathcal{H}$, is synthesized via Bayesian updating with the posterior of the form,
\begin{equation} p(Y|\mathcal{H},X)=\int_{\boldsymbol{f}(X)}\alpha(Y|\boldsymbol{f}(X))\prod_{j=1}^{J}h_{j}(X)\,d{\boldsymbol{f}(X)},\label{eq:theorem1} \end{equation}
where $\boldsymbol{f}(X)=(f_{1}(X),\ldots,f_{J}(X))^{\top}$ is a $J$-dimensional latent vector and $\alpha(Y|\boldsymbol{f}(X))$ is a conditional probability density function for $Y$ given $\boldsymbol{f}(X)$, called the synthesis function. Note that eq.~\eqref{eq:theorem1} is only a coherent Bayesian posterior if it satisfies the consistency condition \citep[see,][]{genest1985modeling,west1992modelling1,west1992modelling2,mcalinn2019dynamic}. In short, the consistency condition states that, given the decision maker's prior, $p(Y)$, and her prior expectation of what each estimator will produce before observing it, $m(\boldsymbol{f}(X))$, her priors have to be consistent: $p(Y)=\int_{\boldsymbol{f}(X)}\alpha(y|\boldsymbol{f}(X))m(\boldsymbol{f}(X))d\boldsymbol{f}(X)$, for eq.~\eqref{eq:theorem1} to be a coherent Bayesian posterior. To synthesize the multiple estimators for the causal task, we consider the following synthesis model as the conditional specification of $\alpha(Y|\boldsymbol{f}(X))$:
\begin{equation}\label{BPS} Y=\mu(X, \pi)+T\left\{\beta_0(X)+\sum_{j=1}^J \beta_j(X)f_j(X)\right\}+{\varepsilon}, \ \ \ {\varepsilon}\sim N(0, {\sigma}^2), \end{equation}
where $\mu(X,\pi)$ is an unknown function of $X$ and propensity score, $\pi=P(T=1|X)$, $f_j(X)$ is the $j$-th latent variable having distribution $h_j(X)$, and $\beta_j(X)$ is the weight for the $j$-th estimator.
Note that the HTE under the synthesis model (eq.~\ref{BPS}) is given by
\begin{equation}\label{eq:tau} {\widetilde{\tau}}(X)\equiv E[Y|X, T=1]-E[Y|X, T=0]=\beta_0(X)+\sum_{j=1}^J \beta_j(X)f_j(X), \end{equation}
and $\mu(X, \pi)$ corresponds to the prognostic term. There are several reasons why this synthesis model, and the BPS framework, is fitting for this task. First, the BPS framework allows for flexible specification in the synthesis model. In this case, the synthesis model has two crucial components for improved supra-inference. The first component is the prognostic term, $\mu(X, \pi)$. The inclusion of the propensity score $\pi\equiv \pi(X)$ in the prognostic term is known to improve the performance, as noted in \cite{hahn2020bayesian}, which we can include apart from the individual estimates. The second component is the averaging term, $\beta_0(X)+\sum_{j=1}^J \beta_j(X)f_j(X)$. An important feature of eq.~\eqref{BPS} is that the weight, $\beta_j(X)$, of each HTE estimator can vary depending on the covariate, $X$. This is crucial, as the performance of each estimator can vary depending on the subgroup/heterogeneity. For example, one estimator might be good at estimating the treatment effect for the male subgroup, another might be good for the female subgroup. Typical ensemble methods (such as linear averaging or stacking) cannot take into account this heterogeneity in performance. Comparatively, eq.~\eqref{BPS} can capture performance heterogeneity through $\beta_j(X)$. Furthermore, $\beta_0(X)$ can capture additional variation in $\tau(X)$ that cannot be explained by the set of $J$ estimators, which aids in improving inference. Second, because each estimate is treated as a latent factor in eq.~\eqref{eq:theorem1}, the posterior inference involves learning the biases and dependencies across estimates. Because these estimates are similar, as they use the same data, similar procedures, or only differ in the selection of hyperparameters, the synthesis coefficients would have to take into account these characteristics to successfully infer. Treating the HTE estimates as latent factors allows for this through Bayesian learning. Finally, as we show in Section~\ref{sec:theory}, eq.~\eqref{BPS} has the desirable theoretical property of producing consistent estimates of the HTE, meaning that it will asymptotically converge to the true HTE. To complete the specifics of the above model (eq.~\ref{BPS}), we assume that the unknown functions $\mu(X, \pi)$ and $\beta_j(X)$ independently follow an isotropic Gaussian process.
In particular, to assure computational scalability under large sample sizes, we employ a nearest-neighbor Gaussian process \citep{datta2016hierarchical}, that is, $\mu(X, \pi)\sim {\rm NNGP}_m({\bar{\mu}}, \tau_{\mu}^2C(\cdot; \phi_{\mu}))$ and $\beta_j(X)\sim {\rm NNGP}_m({\bar{\beta}}_j, \tau_{\beta_j}^2C(\cdot;\phi_{\beta_j}))$, where $\tau_{\mu}^2$ and $\tau_{\beta_j}^2$ are unknown variance parameters, $\phi_{\mu}$ and $\phi_{\beta_j}$ are unknown spatial range parameters, $C(\cdot;\phi)$ is a valid correlation function with spatial range parameter $\phi$, and $m$ is the number of nearest neighbors. Here ${\bar{\beta}}_j$, $j=1,\ldots,J$, is the global weight for the $j$-th method and also the prior mean of the heterogeneous weight $\beta_j(X)$. While it is possible to assign an informative prior on ${\bar{\beta}}_j$, we set ${\bar{\beta}}_0=0$ and ${\bar{\beta}}_j=1/J \ (j=1,\ldots,J)$, as a default choice, to make the prior synthesis function an equal weight averaging of the $J$ estimators.
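As a concrete illustration of how the ingredients fit together before estimation, the following minimal sketch (Python; all inputs are hypothetical, and in practice the estimates $\widehat{\tau}_j$ and standard errors $s_j$ would come from the $J$ candidate methods) draws the latent factors $f_j(x_i)\sim N(\widehat{\tau}_j(x_i), s_j^2(x_i))$ and evaluates the default prior synthesis $\bar{\beta}_0+\sum_{j=1}^J\bar{\beta}_j f_j(x_i)$ with $\bar{\beta}_0=0$ and $\bar{\beta}_j=1/J$, i.e., the prior-mean version of eq.~\eqref{eq:tau}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: J candidate HTE estimates and their standard errors,
# evaluated at n covariate points (in practice from, e.g., CF, BCF, ...).
n, J = 200, 3
tau_hat = rng.normal(loc=1.0, scale=0.5, size=(n, J))   # hat{tau}_j(x_i)
s_hat = np.full((n, J), 0.3)                             # s_j(x_i)

# Agent posteriors h_j: one Monte Carlo draw of the latent factors f_j(x_i).
def draw_latent_factors(tau_hat, s_hat, rng):
    return rng.normal(tau_hat, s_hat)

# Default prior synthesis: bar_beta_0 = 0, bar_beta_j = 1/J, so the
# prior-mean HTE is an equal-weight average of the latent factors.
def prior_mean_hte(f):
    beta0, beta = 0.0, np.full(f.shape[1], 1.0 / f.shape[1])
    return beta0 + f @ beta

f = draw_latent_factors(tau_hat, s_hat, rng)
tau_prior = prior_mean_hte(f)      # prior synthesized HTE at each x_i
print(tau_prior[:5])
\end{verbatim}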
Given the observed sample $(y_i, x_i, T_i)$ for $i=1,\ldots,n$, the synthesis model (eq.~\ref{BPS}) for the observed sample is given by
\begin{align*} &y_i=\mu(x_i,\pi_i)+T_i\left\{\beta_0(x_i)+\sum_{j=1}^J \beta_j(x_i)f_j(x_i)\right\}+{\varepsilon}_i, \ \ \ {\varepsilon}_i\sim N(0, {\sigma}^2), \end{align*}
with $(\beta_j(x_1),\ldots, \beta_j(x_n))\sim N({\bar{\beta}}_j 1_n, \tau_{\beta_j}^2H_m(\phi_{\beta_j}; \mathcal{X}))$ independently for $j=0,1,\ldots,J$ and $(\mu(x_1,\pi_1),\ldots,\mu(x_n,\pi_n))\sim N({\bar{\mu}} 1_n, \tau_\mu^2 H_m(\phi_\mu; \mathcal{Z}))$. Here $H_m(\cdot; \mathcal{X})$ and $H_m(\cdot; \mathcal{Z})$ are $n\times n$ covariance matrices of the joint distribution on $n$ observations based on $m$-nearest neighbor Gaussian processes on $\mathcal{X}$ and $\mathcal{Z}$, respectively, where $\mathcal{X}$ and $\mathcal{Z}$ are the spaces of $x_i$ and $z_i=(x_i,\pi_i)$, respectively. In what follows, we assign conditionally conjugate priors for unknown parameters other than the spatial range parameters, that is, ${\sigma}^2\sim {\rm IG}(\delta_{{\sigma}}/2, \eta_{{\sigma}}/2)$, $\tau_\mu^2\sim {\rm IG}(\delta_{\mu}/2, \eta_{\mu}/2)$ and $\tau_{\beta_j}^2\sim {\rm IG}(\delta_{\beta_j}, \eta_{\beta_j})$.
For the spatial range parameters, we assign uniform priors, $\phi_\mu\sim U(\underline{c}_{\mu}, \overline{c}_{\mu})$ and $\phi_{\beta_j}\sim U(\underline{c}_{\beta}, \overline{c}_{\beta})$. Then, the posterior distribution of the unknown parameters $\psi=({\sigma}^2, \tau_\mu^2, \tau_{\beta_0}^2, \ldots,\tau_{\beta_J}^2, \phi_{\mu},\phi_{\beta_0},\ldots,\phi_{\beta_J})$, baseline function $\mu(\cdot)$ and varying coefficients $\beta_0(\cdot),\ldots,\beta_J(\cdot)$ can be approximated by a Markov Chain Monte Carlo (MCMC) algorithm.
\subsection{Posterior computation}
To describe the MCMC algorithm, we let $\mu_i=\mu(z_i)$, $\beta_{ji}=\beta_j(x_i)$ and $f_{ji}=f_j(x_i)$, for notational simplicity. We define $\Phi_n$ to be the collection of latent variables, namely, $\Phi_n=\{\mu_i, \beta_{0i},\ldots,\beta_{Ji}, f_{1i},\ldots,f_{Ji}\}_{i=1,\ldots,n}$.
The joint posterior distribution of $\psi$ and $\Phi_n$ is given by
\begin{align*} \Pi(\psi, \Phi_n|\mathcal{D}) &\propto \Pi(\psi)\prod_{i=1}^n \phi\Big(y_i; \mu_i+T_i\big(\beta_{0i}+\sum_{j=1}^J \beta_{ji}f_{ji}\big),{\sigma}^2\Big)\prod_{j=1}^J h_j(f_{ji})\\ & \ \ \ \ \ \ \times \phi_n(\mu^{(s)}; {\bar{\mu}} 1_n, \tau_\mu^2 H_m(\phi_\mu;\mathcal{Z}))\prod_{j=0}^J \phi_n(\beta_j^{(s)}; {\bar{\beta}}_j 1_n, \tau_{\beta_j}^2H_m(\phi_{\beta_j};\mathcal{X})), \end{align*}
where $\mathcal{D}$ is the set of observed data, $\mu^{(s)}=(\mu_1,\ldots,\mu_n)$, $\beta_j^{(s)}=(\beta_{j1},\ldots,\beta_{jn})$ and $\Pi(\psi)$ is a joint prior distribution of $\psi$. The posterior computation can be easily carried out via Gibbs sampling, whose step-by-step sampling scheme is described in the Supplementary Material. Given the posterior samples of $\Phi_n$, the posterior samples of the synthesized HTE $\tau(x_i)$ given in eq.~\eqref{eq:tau} can be generated. Regarding the inference on $\tau(x_0)$ for arbitrary $x_0\in \mathcal{X}$, we suppose that $J$ estimators of $\tau(x_0)$ are available.
From the distributional assumptions for $\beta_j(X)$, the conditional distribution of $\beta_j(x_0)$ is $\beta_j(x_0)|\beta_j^{(s)} \sim N({\bar{\beta}}_j + B_{\beta_j}(x_0)\beta_j(N(x_0)), \tau_{\beta_j}^2F_{\beta_j}(x_0) )$, where
\begin{equation*} \begin{split} B_{\beta_j}(x_0)&=C_{\beta_j}(x_0, N(x_0); \phi_{\beta_j})C_{\beta_j}(N(x_0), N(x_0); \phi_{\beta_j})^{-1}, \\ F_{\beta_j}(x_0)&=1-C_{\beta_j}(x_0, N(x_0); \phi_{\beta_j})C_{\beta_j}(N(x_0), N(x_0); \phi_{\beta_j})^{-1}C_{\beta_j}(N(x_0), x_0; \phi_{\beta_j}), \end{split} \end{equation*}
and $N(x_0)$ denotes the index set of the $m$ nearest neighbors of the evaluation point $x_0$. Here $C_{\beta_j}(t, v; \phi_{\beta_j})$ for $t=(t_1,\ldots,t_{d_t})\in \mathbb{R}^{d_t}$ and $v=(v_1,\ldots,v_{d_v})\in \mathbb{R}^{d_v}$ denotes a $d_t\times d_v$ correlation matrix whose $(i_t, i_v)$-element is $C(\|t_{i_t}-v_{i_v}\|; \phi_{\beta_j})$. Then, given the posterior samples of $\beta_j$ and $\psi$, the posterior samples of $\beta_j(x_0)$ can be generated for $j=0,\ldots,J$. This gives posterior samples of $\tau(x_0)$ provided that the HTE estimators at $x_0$, namely, $f_1(x_0),\ldots,f_J(x_0)$, are available. The detailed sampling procedure can be found in the Supplementary Material (Section~\ref{sec:NNGP-pos}).
\section{Consistency of Bayesian Causal Synthesis \label{sec:theory}}
To understand the theoretical properties of BCS, and to provide a theoretical justification for the proposed method, we investigate its asymptotic behavior, namely its consistency in estimating the HTE. Due to identification issues related to the synthesis model, the consistency of the HTE cannot be directly shown. To circumvent this, we instead show predictive consistency of the treated and control outcomes, separately. Then, given the predictive consistency of the two outcomes, we have consistency in the HTE as their difference.
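For concreteness, the nearest-neighbor conditioning used above for out-of-sample prediction can be sketched as follows (Python; the exponential correlation $C(d;\phi)=e^{-d/\phi}$, the number of neighbors, and all numerical values are illustrative assumptions, and the prior mean $\bar{\beta}_j$ is handled in the standard centered form of the Gaussian-process conditional, rather than exactly as displayed above).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def corr(t, v, phi):
    """Correlation matrix C(t, v; phi) = exp(-||t_i - v_k|| / phi)."""
    d = np.linalg.norm(t[:, None, :] - v[None, :, :], axis=-1)
    return np.exp(-d / phi)

def nngp_conditional(x0, X_nbr, beta_nbr, beta_bar, tau2, phi):
    """Conditional mean/variance of beta_j(x0) given its m nearest neighbors."""
    x0 = x0[None, :]
    C0N = corr(x0, X_nbr, phi)                      # 1 x m
    CNN = corr(X_nbr, X_nbr, phi)                   # m x m
    B = C0N @ np.linalg.inv(CNN)                    # B_{beta_j}(x0)
    F = 1.0 - (B @ C0N.T).item()                    # F_{beta_j}(x0)
    mean = beta_bar + (B @ (beta_nbr - beta_bar)).item()  # centered kriging mean
    return mean, tau2 * F

# Hypothetical posterior draws of beta_j at the m = 5 nearest neighbors of x0.
m, p = 5, 2
X_nbr = rng.normal(size=(m, p))
beta_nbr = rng.normal(loc=1.0 / 3.0, scale=0.1, size=m)
mean, var = nngp_conditional(np.zeros(p), X_nbr, beta_nbr,
                             beta_bar=1.0 / 3.0, tau2=0.05, phi=1.0)
beta_x0 = rng.normal(mean, np.sqrt(var))            # one draw of beta_j(x0)
print(mean, var, beta_x0)
\end{verbatim}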
We assume that the following varying coefficient model,
\begin{alignat}{1} y({x}_{i}) & =\mu^{*}\left({x}_{i},\pi\right)+T\left({x}_{i}\right)\left\{ \beta_{0}^{*}\left({x}_{i}\right)+\sum_{j=1}^{J}\beta_{j}^{*}\left({x}_{i}\right)f_{j}\left({x}_{i}\right)\right\} +\varepsilon_{i}, \ \ \ \ i=1,\ldots,n,\label{eq:VCM} \end{alignat}
with $\mathbb{E}\left[\varepsilon_{i}\right]=0$ and $\mathbb{E}\left[\varepsilon_{i}^{2}\right]={\sigma}_{i}^{2}$, is correctly specified as a regression function. Thus, we assume that $\mathbb{E}\left[\left.\varepsilon_{i}\right|x_{i}\right]=0$ holds for $i=1,\ldots,n$. Further, for the propensity score, $\pi$, we assume $0\ll\pi\left(X\right)\ll1$, for all $X$. This means that for all treatment hierarchies, there exist subjects for assignment/non-assignment. Denote $y_{i}=y\left({x}_{i}\right)$, $\mu_{i}=\mu\left({x}_{i},\pi\right)$, $T_{i}=T\left({x}_{i}\right)$, $f_{ji}=f_{j}\left({x}_{i}\right)$, $\beta_{ji}^{*}=\beta_{j}^{*}\left({x}_{i}\right)$ as the actual value of the random variable concerning the explanatory variable, ${x}_{i}$. Further, denote the $n$-th sample using the superscript: $\boldsymbol{y}^{n}=\left(y_{1},\cdots,y_{n}\right)^{\top}$, $\boldsymbol{\mu}^{n}=\left(\mu_{1},\cdots,\mu_{n}\right)^{\top}$, $\boldsymbol{\beta}_{j}^{n}=\left(\beta_{j1},\cdots,\beta_{jn}\right)^{\top}$, $\boldsymbol{f}_{j}^{n}=\left(f_{j1},\cdots,f_{jn}\right)^{\top}$.
With regard to the parameters, we denote the $n$-th parameter vector as $\boldsymbol{\theta}^{n}=\left(\mu^{n},\boldsymbol{\beta}^{n},{\sigma}^{2}\right)$. Here, ${\sigma}^{2}I_{n\times n}$ is the covariance matrix of $\left\{ \varepsilon_{i}\right\} _{i=1,\cdots,n}$. The observation to be predicted, $y_{n+1}\left({x}_{n+1}\right)$, given the explanatory variable, ${x}_{n+1}$, can be written as
\begin{alignat}{1} y_{n+1} & =\mu_{n+1}^{*}+T_{n+1}\left\{ \beta_{0,n+1}^{*}+\sum_{j=1}^{J}\beta_{j,n+1}^{*}f_{j,n+1}\right\} +\varepsilon_{n+1},\ \ \varepsilon_{n+1}\sim N\left(0,{\sigma}^{2}\right)\label{eq:VCMahead}\\ & \boldsymbol{\beta}_{n+1}^{*}=\left(\beta_{1,n+1}^{*},\cdots,\beta_{J,n+1}^{*}\right)^{\top},\ \ \left\Vert \boldsymbol{\beta}_{n+1}^{*}\right\Vert _{L^{2}\left(X\right)}<\infty.\nonumber \end{alignat}
The unknown parameter set is ${\theta}_{n+1}^{*}=\left(\mu_{n+1}^{*},\boldsymbol{\beta}_{n+1}^{*},{\sigma}^{2}\right)$, and the distribution of eq.~\eqref{eq:VCMahead} is written as $p\left(y_{n+1}\left|{x}_{n+1},T_{n+1},{\theta}_{n+1}^{*}\right.\right)$. We now construct the predictive distribution of eq.~\eqref{eq:VCMahead}, using a Gaussian process.
Let the prior distributions be $\mu\sim \textrm{GP}\left(\tau_{\mu},h_{\mu}\right)$, $\boldsymbol{\beta}_{j}\sim \textrm{GP}\left(\tau_{j},h_{j}\right)$ and $\pi\left(\sigma\right)=\sigma^{-1}$. Here, we assume that the $\boldsymbol{\beta}_{j}$ are independent across $j$. The conditional likelihood function of $\boldsymbol{y}^{n}$ given $\{ \boldsymbol{f}_{j}^{n}\}_{j=1,\cdots,J}$, denoted $\mathcal{L}(\boldsymbol{y}^{n}|\boldsymbol{\theta}^{n},T,\{ \boldsymbol{f}_{j}^{n}\}_{j=1,\cdots,J})$, is $N\big(\boldsymbol{y}^{n}-\boldsymbol{\mu}^{n}-T( \boldsymbol{\beta}_{0}^{n}+\sum_{j=1}^{J}\boldsymbol{\beta}_{j}^{n}\circ\boldsymbol{f}_{j}^{n}), \sigma^{2}I_{n\times n}\big)$, where $\circ$ denotes the Hadamard (element-wise) product.
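As an illustration of such a Gaussian process prior, the short R sketch below draws one coefficient process $\beta_{j}(\cdot)$ at a set of one-dimensional input points; the mean-zero specification, the exponential correlation function, and the hyperparameter values are assumptions made purely for the example.
\begin{verbatim}
## Minimal sketch (assumed kernel and values): one draw of a coefficient
## process beta_j(.) at n input points from a mean-zero Gaussian process prior
## with exponential correlation C(d; phi) = exp(-d / phi).
set.seed(2)
n    <- 50
x    <- sort(runif(n))                          # one-dimensional inputs, for illustration
phi  <- 0.2                                     # range parameter (assumed)
tau2 <- 1                                       # scale parameter (assumed)
K    <- tau2 * exp(-as.matrix(dist(x)) / phi)   # covariance matrix over the inputs
L    <- chol(K + 1e-8 * diag(n))                # upper-triangular factor, with jitter
beta_draw <- drop(t(L) %*% rnorm(n))            # one prior draw of (beta_j(x_1), ..., beta_j(x_n))
\end{verbatim}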
The likelihood function $\mathcal{L}\left(\boldsymbol{y}^{n}\left|\boldsymbol{\theta}^{n},T\right.\right)$ is then obtained by convolving this conditional likelihood with the agent densities $\left\{ h_{j}\right\}$, which gives
\begin{alignat*}{1}
\mathcal{L}\left(\boldsymbol{y}^{n}\left|\boldsymbol{\theta}^{n},T\right.\right) & =\int_{\mathbb{R}^{J}}N\left(\boldsymbol{y}^{n}-\boldsymbol{\mu}^{n}-T\Big( \boldsymbol{\beta}_{0}^{n}+\sum_{j=1}^{J}\boldsymbol{\beta}_{j}^{n}\circ\boldsymbol{f}_{j}^{n}\Big), \sigma^{2}I_{n\times n}\right)\prod_{j=1}^{J}h_{j}\left(\left.f_{j}^{n}\right|x^{n}\right)df.
\end{alignat*}
Then, the conditional joint probability density of $\boldsymbol{y}^{n+1}=((\boldsymbol{y}^{n})^\top, y_{n+1})^\top$ given ${x}^{n+1}$ is
\begin{alignat}{1}
P\left(\left.\boldsymbol{y}^{n+1}\right|{x}^{n+1},T\right)= & \int_{\mathbb{R}^{J+2}}\mathcal{L}\left(\boldsymbol{y}^{n+1}\left|\boldsymbol{\theta}^{n+1},T\right.\right)\pi\left(\boldsymbol{\theta}^{n+1}\right)d\boldsymbol{\theta}^{n+1}\label{eq:Predictive}
\end{alignat}
and the predictive distribution of $y_{n+1}$ is
\[
p\left(y_{n+1}\left|\boldsymbol{y}^{n},{x}^{n+1},T\right.\right)=\frac{P\left(\left.\boldsymbol{y}^{n+1}\right|{x}^{n+1},T\right)}{\int_{\mathbb{R}}P\left(\left.\boldsymbol{y}^{n+1}\right|{x}^{n+1},T\right)dy_{n+1}}.
\]
In what follows, we show that the above predictive distribution is consistent with the true distribution $p\left(y_{n+1}\left|{x}_{n+1},T_{n+1},\theta_{n+1}^{*}\right.\right)$. That is, our goal is to construct a predictive distribution, $p\left(y_{n+1}\left|\boldsymbol{y}^{n},{x}^{n+1},T_{n+1}\right.\right)$, that is consistent with the target, $p\left(y_{n+1}\left|{x}_{n+1},T_{n+1},\theta_{n+1}^{*}\right.\right)$. Here, a consistent predictive distribution is defined as
\[
\lim_{n\rightarrow\infty}P\left(\left.\sup_{A\in\mathcal{B}\left(\mathbb{R}\right)}\left|p\left(y_{n+1}\in A\left|{x}_{n+1},T_{n+1},\theta_{n+1}^{*}\right.\right)-p\left(y_{n+1}\in A\left|\boldsymbol{y}^{n},{x}^{n+1},T_{n+1}\right.\right)\right|\right|T\right)=0,
\]
where $P\left(\left.\cdot\right|T\right)$ is the probability measure induced by eq.~\eqref{eq:VCM} and $\mathcal{B}\left(\mathbb{R}\right)$ is the Borel $\sigma$-field on $\mathbb{R}$. We show the stronger result
\[
\sup_{A\in\mathcal{B}\left(\mathbb{R}\right)}\left|p\left(y_{n+1}\in A\left|{x}_{n+1},T_{n+1},\theta_{n+1}^{*}\right.\right)-p\left(y_{n+1}\in A\left|\boldsymbol{y}^{n},{x}^{n+1},T_{n+1}\right.\right)\right|\rightarrow0,\ \textrm{in probability }P\left(\left.\cdot\right|T\right).
\]
\begin{thm}\label{thm1}
Let $\mu^{*}\left({x},\pi\left({x}\right)\right)\in L^{2}\left(X\right)$, $\boldsymbol{\beta}^{*}\left({x}\right)\in L^{2}\left(X\right)$, and let the parameter space, $L^{2}\left(X\right)$, be complete and separable; i.e., a standard measure space. Assume that $\pi(\mu^{*},\{ \boldsymbol{\beta}_{j}^{*}\}_{j=0,\cdots,J})>0$.
Then, we have
\begin{equation}
\lim_{n\rightarrow\infty}P\left(\left.\sup_{A\in\mathcal{B}\left(\mathbb{R}\right)}\left|p\left(y_{n+1}\in A\left|{x}_{n+1},T_{n+1},\theta_{n+1}^{*}\right.\right)-p\left(y_{n+1}\in A\left|\boldsymbol{y}^{n},{x}_{n+1},T_{n+1}\right.\right)\right|\right|T\right)=0,
\label{eq:Consis}
\end{equation}
where $P\left(\left.\cdot\right|T\right)$ is the probability measure induced by eq.~\eqref{eq:VCM} and $\mathcal{B}\left(\mathbb{R}\right)$ is the Borel $\sigma$-field on $\mathbb{R}$. Similarly, we have
\begin{equation}
\sup_{A\in\mathcal{B}\left(\mathbb{R}\right)}\left|p\left(y_{n+1}\in A\left|{x}_{n+1},T_{n+1},\theta_{n+1}^{*}\right.\right)-p\left(y_{n+1}\in A\left|\boldsymbol{y}^{n},{x}_{n+1},T_{n+1}\right.\right)\right|\rightarrow0,
\label{eq:TotalConsis}
\end{equation}
in probability $P\left(\left.\cdot\right|T\right)$.
\end{thm}
The proof of Theorem~\ref{thm1} is in the Supplementary Material (Section~\ref{sec:proof-thm1}). Theorem~\ref{thm1} shows that the synthesis model (eq.~\ref{BPS}) is consistent with regard to the predictive distribution of the outcome, given the treatment assignment $T$. The HTE estimate can be expressed by the expectation of the predictive distribution of $y_{n+1}$:
\[
\widetilde{\tau}\left(X\right)=\int y_{n+1}\Big\{p\left(y_{n+1}\left|\boldsymbol{y}^{n},\boldsymbol{x}_{n+1},T_{n+1}=1\right.\right)-p\left(y_{n+1}\left|\boldsymbol{y}^{n},\boldsymbol{x}_{n+1},T_{n+1}=0\right.\right)\Big\}dy_{n+1}.
\]
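In practice this integral is approximated by Monte Carlo; the following R sketch contrasts draws generated under $T_{n+1}=1$ and $T_{n+1}=0$ for the same covariate value, with placeholder normal draws standing in for actual posterior predictive samples.
\begin{verbatim}
## Minimal sketch (placeholder draws): Monte Carlo approximation of the HTE
## expression above, using S posterior predictive draws of y_{n+1}.
set.seed(3)
S <- 2000
y1_draw <- rnorm(S, mean = 2.5, sd = 1)  # stand-in for draws from p(y_{n+1} | y^n, x_{n+1}, T = 1)
y0_draw <- rnorm(S, mean = 1.0, sd = 1)  # stand-in for draws from p(y_{n+1} | y^n, x_{n+1}, T = 0)
tau_tilde <- mean(y1_draw) - mean(y0_draw)  # approximates the HTE at x_{n+1}
tau_tilde
\end{verbatim}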
Since we have predictive consistency, if convergence of the first moment is also guaranteed, the synthesized estimate of the HTE is consistent as well.
\begin{thm}
Assume that the predictive distributions $p\left(y_{n+1}\left|\boldsymbol{y}^{n},\boldsymbol{x}_{n+1},T_{n+1}=1\right.\right)$ and $p\left(y_{n+1}\left|\boldsymbol{y}^{n},\boldsymbol{x}_{n+1},T_{n+1}=0\right.\right)$ are uniformly integrable, that is,
\[
\lim_{c\rightarrow\infty}\limsup_{n\rightarrow\infty}\int_{\mathbb{R}}\left|y_{n+1}\right|1_{\left(\left|y_{n+1}\right|>c\right)}p\left(y_{n+1}\left|\boldsymbol{y}^{n},\boldsymbol{x}_{n+1},T_{n+1}\right.\right)dy_{n+1}=0
\]
for $T_{n+1}\in\{0,1\}$. Then, $\widetilde{\tau}\left(X\right)\overset{\textrm{p}}{\rightarrow}\tau\left(X\right)$ holds. Therefore, the synthesized HTE estimate in eq.~(\ref{eq:tau}) is consistent.
\end{thm}

\section{Simulation Study \label{sec:sim}}
To demonstrate and highlight the efficacy of our approach, we conduct a series of experiments. The first simulation study in Section~\ref{sec:sim-syn} is based on synthetic data, and the second study in Section~\ref{sec:sim-acic} adopts several scenarios used in the Atlantic causal inference conference data analysis challenge. Further simulation studies, including a small-sample study and an out-of-sample study, can be found in the Supplementary Material (Section~\ref{sec:supp-sim}).

\subsection{Synthetic data \label{sec:sim-syn}}
We first investigate the performance of BCS in estimating the HTE, together with existing methods. For simplicity, we consider randomized treatment assignments, where the propensity score is $1/2$ for all subjects. Let $X=(X_1,\ldots,X_p)$ be a $p$-dimensional covariate, where the fourth component is a dichotomous variable, the fifth is an unordered categorical variable taking three levels (coded 1, 2, 3), and the other components are drawn from independent standard normal distributions.
The data generating process is
$$
Y=\mu(X)+\tau(X)T+\varepsilon, \ \ \ \ \varepsilon\sim N(0, \sigma^2),
$$
where $T\sim {\rm Ber}(0.5)$ is the treatment assignment and we set $\sigma^2=1$. For the prognostic term $\mu(X)$ and the treatment effect $\tau(X)$, we consider the following specifications:
\begin{align*}
&\text{(A)} \ \ \mu(X)=-7+6|X_3|-3X_5, \ \ \ \ \ \text{(B)} \ \ \mu(X)=2+2\sin(3X_3),\\
&\text{(A)} \ \ \tau(X)=1+2X_2X_5, \ \ \ \ \ \text{(B)} \ \ \tau(X)=1+2X_2X_5+X_3^2/2,
\end{align*}
and we adopt the four scenarios consisting of all possible combinations. We generate $n=300$ samples from the data generating process and estimate the form of $\tau(X)$ to evaluate the treatment effects of the observed samples.
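For concreteness, a minimal R sketch of this data generating process under scenario (A)/(A) is given below; the success probability of the dichotomous covariate and the uniform weights of the three-level covariate are assumptions, as the text does not specify them.
\begin{verbatim}
## Minimal sketch: synthetic data under scenario (A)/(A).
set.seed(4)
n <- 300; p <- 30; sigma <- 1
X <- matrix(rnorm(n * p), n, p)            # continuous covariates
X[, 4] <- rbinom(n, 1, 0.5)                # dichotomous covariate (probability assumed)
X[, 5] <- sample(1:3, n, replace = TRUE)   # unordered categorical covariate, coded 1, 2, 3
Tt  <- rbinom(n, 1, 0.5)                   # randomized treatment, propensity 1/2
mu  <- -7 + 6 * abs(X[, 3]) - 3 * X[, 5]   # prognostic term, scenario (A)
tau <- 1 + 2 * X[, 2] * X[, 5]             # treatment effect, scenario (A)
Y   <- mu + tau * Tt + rnorm(n, sd = sigma)
\end{verbatim}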
To estimate $\tau(X)$, we consider six methods:
\begin{itemize}
\item[-] Bayesian causal forest \citep[BCF:][]{hahn2020bayesian}: Using the R package \texttt{bcf}, apply the Bayesian causal forest to the observed data and generate 1500 posterior samples of each $\tau(X_i)$, after discarding the first 500 samples.
\item[-] Linear model (LM): Fit a simple linear regression model, $Y=X^\top \beta_1+TX^\top \beta_2+\varepsilon$, and estimate $\tau(X)$ by $X^\top \hat{\beta}_2$. Standard errors of the estimator are also computed.
\item[-] Additive model (AM): Fit an additive model, $Y=f_1(X)+Tf_2(X)+\varepsilon$, and estimate $\tau(X)$ by $\hat{f}_2(X)$. A crude approximation of the standard error of $\hat{f}_2(X)$ is computed by weighted bootstrap with 200 replications.
\item[-] X-learner \citep[XL:][]{kunzel2019metalearners}: Using the R package available at \url{https://github.com/xnie/rlearner}, apply the X-learner with gradient boosting trees to the observed data to obtain point estimates of $\tau(X_i)$.
\item[-] R-learner \citep[RL:][]{nie2021quasi}: Using the same R package as for the X-learner, apply the R-learner with gradient boosting trees to the observed data to obtain point estimates of $\tau(X_i)$.
\item[-] Causal forest \citep[CF:][]{wager2018estimation}: Using the R package \texttt{grf}, apply the causal forest to the observed data and compute estimates and standard errors of $\tau(X)$ for the test data.
\end{itemize}
Furthermore, we consider synthesizing the three results (BCF, LM, AM) that come with standard error estimates. We adopt the following two synthesis methods:
\begin{itemize}
\item[-] Causal stacking (CST): Apply the causal stacking algorithm of \cite{han2022ensemble} to ensemble the methods given above, where the gradient boosting tree algorithm is used to construct prediction models in the averaging set. Note that it only provides point estimates.
\item[-] Bayesian causal synthesis (BCS): Apply the proposed Bayesian predictive synthesis to combine the results of BCF, LM and AM, and obtain posterior distributions of $\tau(X_i)$. The number of nearest neighbors is set to $m=15$, and 1500 posterior samples, obtained after discarding the first 500 samples, are used to form the posterior distributions of $\tau(X)$. Note that only the subsets of $X$ that are selected in AM are used for modeling the varying coefficients $\beta_j(X)$.
\end{itemize}
To demonstrate the effectiveness of BCS, we first show the results under the scenario in which both $\mu(X)$ and $\tau(X)$ are of form (A). The results of point prediction and the $95\%$ credible intervals of BCS are presented in Figure~\ref{fig:oneshot1}, where the point predictions of the three synthesized models are also shown. BCS provides more accurate estimates than the three prediction methods, especially for samples having very small or large treatment effects. More importantly, the credible intervals of $\tau(X)$ obtained from BCS successfully cover the true values; the coverage rate was $99.5\%$, slightly larger than the nominal level of $95\%$. To investigate the coverage property further, we present the lengths of the credible (confidence) intervals of $\tau(X)$ in Figure~\ref{fig:oneshot2}. The result indicates that the posterior uncertainty about the HTE is small in regions where observed samples are abundant, while the uncertainty of the posterior obtained via BCS tends to be considerably larger in regions where observed samples are scarce. This is a reasonable and desirable result that reflects the uncertainty in each estimator, whereas the other methods are overconfident.
\begin{figure}[t!]
\centering
\includegraphics[width=13cm,clip]{trueest.png}
\caption{Point estimates obtained from four methods and $95\%$ credible intervals of BCS (vertical line).
\label{fig:oneshot1} }
\end{figure}

\begin{figure}[t!]
\centering
\includegraphics[width=13cm,clip]{lengthtrue.png}
\caption{Length of $95\%$ credible/confidence intervals against the true values of the heterogeneous treatment effect.
\label{fig:oneshot2} }
\end{figure}

We next evaluate the performance of point and interval estimation through Monte Carlo replications. For the evaluation of point prediction, we compute the mean squared error (MSE):
$$
{\rm MSE}=\frac{1}{n_{\rm test}}\sum_{j=1}^{n_{\rm test}}\left\{\hat{\tau}(X_j)-\tau(X_j)\right\}^2.
$$
For interval estimation, we report the empirical coverage probability (CP) and the average length (AL) of the $95\%$ credible (confidence) intervals of $\tau(X)$. In Figure~\ref{fig:sim-MSE}, we show the relationship between CP and AL, together with boxplots of the MSE values over 100 Monte Carlo replications, for two scenarios. Overall, BCS provides more accurate estimates than the other methods, improving both point and interval estimates. Looking at the interval estimates, for Scenario 1 with $p=30$ both BCS and BCF perform well, with the 95\% interval including the target coverage probability. However, for Scenario 3 with $p=90$, BCF shows substantial uncertainty, as seen from its large intervals, and CF performs poorly in all scenarios. For the point estimates, BCS improves over all methods (except AM in Scenario 3 with $p=90$, where BCS loses by a small margin). This shows that, even when BCS performs relatively poorly (i.e., does not improve upon the individual estimates), it is still on par with the best estimate, and when it performs well, it performs significantly better than the others. Tables~\ref{tab:sim-smallmse} and \ref{tab:sim-smallcp} present the MSE, and the CP and AL of the $95\%$ credible (confidence) intervals of $\tau(X)$, obtained by BCS, BCF, LM, AM, XL, RL, CF, and CST, where XL, RL, and CST do not provide uncertainty measures. The results show that 1) BCS outperforms the other methods for most scenarios in terms of MSE, 2) the MSE generally increases as the dimension increases, 3) CST, given the aforementioned problem with standard ensemble methods, performs poorly compared to BCS, and 4) BCS produces CP close to the nominal probability, although BCF performs slightly better for lower dimensions (but poorly when the dimension is large). Overall, these results provide good evidence for using BCS to estimate HTEs. BCS is consistently the best (or second best in three out of twelve cases) in terms of MSE, even though the best individual method entering BCS varies. Thus, even if the user is unsure which method to use, BCS can provide robust, accurate inference.
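For reference, the evaluation measures reported in the figures and tables below can be computed as in the following minimal R sketch (object names are hypothetical): the MSE defined above for point estimates, and CP and AL for interval estimates.
\begin{verbatim}
## Minimal sketch (hypothetical object names): MSE of point estimates,
## empirical coverage probability (CP) and average length (AL) of intervals.
eval_hte <- function(tau_hat, lower, upper, tau_true) {
  c(MSE = mean((tau_hat - tau_true)^2),
    CP  = mean(lower <= tau_true & tau_true <= upper),
    AL  = mean(upper - lower))
}
## toy illustration
set.seed(5)
tau_true <- rnorm(100)
tau_hat  <- tau_true + rnorm(100, sd = 0.5)
eval_hte(tau_hat, tau_hat - 1, tau_hat + 1, tau_true)
\end{verbatim}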
\begin{figure}[t!]
\centering
\includegraphics[width=13cm,clip]{sim.png}
\caption{Left column: Coverage probability (X-axis) and interval length (Y-axis) of the HTE estimates (top: Scenario 1, $p=30$; bottom: Scenario 3, $p=90$). The vertical and horizontal lines denote the 95\% interval. Right column: Boxplots of mean squared errors (MSE) of the estimation methods (top: Scenario 1, $p=30$; bottom: Scenario 3, $p=90$).
\label{fig:sim-MSE} }
\end{figure}

\begin{table}[t!]
\caption{Mean squared error (MSE) of point estimates of heterogeneous treatment effects, averaged over 100 Monte Carlo replications. The smallest and second smallest MSE values are highlighted in bold. }
\label{tab:sim-smallmse}
\begin{center}
\begin{tabular}{cccccccccccc}
\hline
Scenario & $\mu(X)$ & $\tau(X)$ & $p$ & \multicolumn{1}{c}{BCS} & \multicolumn{1}{c}{BCF} & \multicolumn{1}{c}{LM} & \multicolumn{1}{c}{AM} & \multicolumn{1}{c}{XL} & \multicolumn{1}{c}{RL} & \multicolumn{1}{c}{CF} & \multicolumn{1}{c}{CST} \\
\hline
 & & & 30 & {\bf 2.14} & {\bf 2.38} & 3.83 & 2.86 & 2.76 & 5.24 & 5.95 & 4.38 \\
1 & (A) & (A) & 60 & {\bf 2.67} & 3.54 & 4.39 & {\bf 3.00} & 3.08 & 5.76 & 9.41 & 5.46 \\
 & & & 90 & {\bf 3.10} & 4.80 & 4.30 & {\bf 2.92} & 3.12 & 6.54 & 11.53 & 3.36 \\
\hline
 & & & 30 & {\bf 1.95} & {\bf 1.92} & 3.09 & 3.02 & 2.33 & 4.18 & 4.98 & 3.85 \\
2 & (B) & (A) & 60 & {\bf 2.28} & {\bf 2.58} & 3.21 & 2.97 & 2.60 & 5.05 & 7.36 & 4.89 \\
 & & & 90 & {\bf 2.57} & 3.00 & 3.22 & 3.01 & {\bf 2.76} & 5.37 & 9.23 & 2.99 \\
\hline
 & & & 30 & {\bf 2.42} & {\bf 2.61} & 4.44 & 2.97 & 3.09 & 5.54 & 6.85 & 4.78 \\
3 & (A) & (B) & 60 & {\bf 2.83} & 3.41 & 4.73 & {\bf 2.94} & 3.32 & 5.92 & 9.60 & 5.63 \\
 & & & 90 & {\bf 3.52} & 5.19 & 5.11 & {\bf 3.06} & 3.54 & 6.31 & 12.40 & 3.67 \\
\hline
 & & & 30 & {\bf 2.09} & {\bf 2.15} & 3.56 & 3.11 & 2.75 & 4.80 & 5.56 & 4.53 \\
4 & (B) & (B) & 60 & {\bf 2.52} & 2.90 & 3.67 & 3.20 & {\bf 2.83} & 5.31 & 7.86 & 5.85 \\
 & & & 90 & {\bf 2.66} & 3.17 & 3.72 & 3.14 &
{\bf 3.03} & 5.75 & 9.73 & 3.24 \\
\hline
\end{tabular}
\end{center}
\end{table}

\begin{table}[t!]
\caption{CP ($\%$) and AL of $95\%$ credible/confidence intervals of heterogeneous treatment effects in the test samples, averaged over 100 Monte Carlo replications. }
\label{tab:sim-smallcp}
\begin{center}
{\footnotesize
\begin{tabular}{ccccrrrrrrrrrrr}
\hline
 & & & & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{CP} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{AL} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\
Scenario & $\mu(X)$ & $\tau(X)$ & $p$ & \multicolumn{1}{c}{BCS} & \multicolumn{1}{c}{BCF} & \multicolumn{1}{c}{LM} & \multicolumn{1}{c}{AM} & \multicolumn{1}{c}{CF} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{BCS} & \multicolumn{1}{c}{BCF} & \multicolumn{1}{c}{LM} & \multicolumn{1}{c}{AM} & \multicolumn{1}{c}{CF} \\
\hline
 & & & 30 & 98.0 & 95.9 & 52.4 & 50.0 & 68.2 & & 6.38 & 6.17 & 2.60 & 2.00 & 3.24 \\
1 & (A) & (A) & 60 & 97.7 & 94.2 & 53.3 & 50.8 & 55.9 & & 6.89 & 7.05 & 2.86 & 2.13 & 2.81 \\
 & & & 90 & 96.0 & 88.1 & 51.2 & 51.7 & 46.4 & & 7.03 & 6.97 & 2.77 & 2.11 & 2.63 \\
\hline
 & & & 30 & 93.8 & 95.5 & 45.8 & 44.6 & 63.9 & & 5.25 & 5.41 & 1.74 & 1.87 & 2.66 \\
2 & (B) & (A) & 60 & 92.7 & 94.5 & 44.9 & 45.0 & 55.8 & & 5.40 & 5.95 & 1.84 & 1.90 & 2.12 \\
 & & & 90 & 91.6 & 93.9 & 46.0 & 44.9 & 47.1 & & 5.50 & 6.29 & 1.89 & 1.92 & 1.96 \\
\hline
 & & & 30 & 98.0 & 95.6 & 55.3 & 50.8 & 65.0 & & 6.75 & 6.33 & 3.03 & 2.15 & 3.31 \\
3 & (A) & (B) & 60 & 97.5 & 94.7 & 55.7 & 52.6 & 52.8 & & 7.09 & 7.13 & 3.18 & 2.21 & 2.87 \\
 & & & 90 & 95.8 & 87.8 & 53.9 & 54.0 & 43.3 & & 7.27 & 7.17 & 3.20 & 2.28 & 2.71 \\
\hline
 & & & 30 & 92.8 & 94.4 & 43.3 & 46.0 & 58.9 & & 5.26 & 5.40 & 1.90 & 1.94 & 2.64 \\
4 & (B) & (B) & 60 & 91.8 & 93.5 & 41.8 & 47.8 & 50.1 & & 5.56 & 6.07 & 1.88 & 2.11 & 2.15 \\
 & & & 90 & 91.4 & 93.9 & 43.6 & 49.4 & 43.3 & & 5.58 & 6.38 & 2.04 & 2.13 & 1.95 \\
\hline
\end{tabular}
}
\end{center}
\end{table}

\subsection{Atlantic causal inference conference data analysis challenge 2017 \label{sec:sim-acic}}
The Atlantic causal inference conference (ACIC)
data analysis challenge is a prominent causal inference competition in which several methods are compared in a blind, fair manner. The competition involves four data generating processes, but we focus on the one with the most complicated underlying structure (called ``non-additive errors''), whose detailed description is given in \cite{hahn2019atlantic}. We consider 8 scenarios formed by the combinations of true parameters, as presented in \cite{hahn2019atlantic}. We randomly sampled $n=500$ observations from the simulated data and applied the same methods as in Section~\ref{sec:sim-syn}. Based on $R=100$ replications, the performance is evaluated through the root mean squared error (RMSE), given by
$$
{\rm RMSE}=\left[\frac{1}{nR}\sum_{r=1}^R\sum_{i=1}^n\left\{\hat{\tau}^{(r)}(X_i)-\tau^{(r)}(X_i)\right\}^2\right]^{1/2},
$$
where $\hat{\tau}^{(r)}(X_i)$ and $\tau^{(r)}(X_i)$ are the estimated and true values of the heterogeneous treatment effect, respectively, in the $r$-th replication. The results are reported in Table~\ref{tab:sim-ACIC1}. For BCS, BCF, and CF, we also assessed the empirical coverage probability (CP) and average length (AL) of the $95\%$ credible or confidence intervals; the results are given in Table~\ref{tab:sim-ACIC2}. Looking at the point estimation (Table~\ref{tab:sim-ACIC1}), we see that BCS outperforms every other method, with BCF a close second in many examples. Given that BCF (and its variants) were top performers in the competition, improving upon BCF with a clear margin is notable. Focusing on the CP (Table~\ref{tab:sim-ACIC2}), we see the true strength of BCS. Specifically, even though the other methods, including BCF, produce subpar CPs that deviate from the nominal 95\%, BCS produces CPs that are within approximately 2\% of the nominal 95\%. Considering the difficulty of the scenarios, achieving such an accurate CP is a good indication of the performance of BCS.

\begin{table}[t!]
\caption{Root mean squared errors (RMSE) of point estimates of heterogeneous treatment effects of observed samples with the ACIC data, averaged over 100 replications. The smallest RMSE values are highlighted in bold.
}
\label{tab:sim-ACIC1}
\begin{center}
\begin{tabular}{clrrrrrrr}
\hline
Scenario & & \multicolumn{1}{c}{BCS} & \multicolumn{1}{c}{BCF} & \multicolumn{1}{c}{LM} & \multicolumn{1}{c}{AM} & \multicolumn{1}{c}{XL} & \multicolumn{1}{c}{RL} & \multicolumn{1}{c}{CF} \\
\hline
1 & & {\bf 1.05} & 1.17 & 1.87 & 1.89 & 1.12 & 1.30 & 2.04 \\
2 & & {\bf 1.14} & 1.37 & 1.88 & 1.90 & 1.37 & 1.71 & 2.06 \\
3 & & {\bf 1.35} & 1.49 & 1.84 & 1.86 & 1.43 & 1.62 & 2.00 \\
4 & & {\bf 1.47} & 1.69 & 1.90 & 1.95 & 1.68 & 2.14 & 2.04 \\
5 & & {\bf 0.98} & 1.08 & 1.84 & 1.85 & 1.02 & 1.27 & 2.01 \\
6 & & {\bf 1.08} & 1.34 & 1.82 & 1.84 & 1.34 & 1.73 & 2.15 \\
7 & & {\bf 1.46} & 1.61 & 1.87 & 1.89 & 1.54 & 1.77 & 2.00 \\
8 & & {\bf 1.58} & 1.75 & 1.88 & 1.95 & 1.80 & 2.22 & 2.03 \\
\hline
\end{tabular}
\end{center}
\end{table}

\begin{table}[t!]
\caption{Coverage probability (CP) and average length (AL) of $95\%$ credible (confidence) intervals of heterogeneous treatment effects of observed samples with the ACIC data, averaged over 100 replications. }
\label{tab:sim-ACIC2}
\begin{center}
\begin{tabular}{clrrrrrrr}
\hline
 & & \multicolumn{1}{c}{} & \multicolumn{1}{c}{CP} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{AL} & \multicolumn{1}{c}{} \\
Scenario & & \multicolumn{1}{c}{BCS} & \multicolumn{1}{c}{BCF} & \multicolumn{1}{c}{CF} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{BCS} & \multicolumn{1}{c}{BCF} & \multicolumn{1}{c}{CF} \\
\hline
1 & & 94.0 & 69.2 & 42.1 & & 3.58 & 1.99 & 2.07 \\
2 & & 93.7 & 70.8 & 54.2 & & 3.92 & 2.66 & 3.07 \\
3 & & 96.5 & 86.7 & 65.5 & & 5.72 & 4.20 & 3.16 \\
4 & & 96.2 & 88.2 & 74.3 & & 5.93 & 4.93 & 4.47 \\
5 & & 96.1 & 77.6 & 45.3 & & 3.98 & 2.44 & 2.20 \\
6 & & 94.4 & 74.6 & 53.9 & & 4.25 & 3.04 & 3.05 \\
7 & & 96.0 & 87.7 & 68.7 & & 6.09 & 4.65 & 3.52 \\
8 & & 96.3 & 89.8 & 77.5 & & 6.46 & 5.43 & 4.97 \\
\hline
\end{tabular}
\end{center}
\end{table}

\begin{figure}[t!]
\centering
\includegraphics[width=13cm,clip]{acic.png}
\caption{Left column: Coverage probability (X-axis) and interval length (Y-axis) of the HTE estimates for Scenarios 5-8. The vertical and horizontal lines denote the 95\% interval. Right column: Boxplots of mean squared errors (MSE) of the estimation methods for Scenarios 5-8.
\label{fig:sim-acic} }
\end{figure}

\section{Application: The Effect of Smoking on Medical Expenditures \label{sec:app}}
As an empirical demonstration, we consider the question of how smoking affects medical expenditures. This question has been studied in several previous papers; here, we follow \cite{imai2004causal} in analyzing data extracted from the 1987 National Medical Expenditure Survey (NMES) by Johnson et al. (2003). Our setup includes the following ten patient attributes:
\begin{itemize}
\itemsep-1em
\item \texttt{age}: age in years at the time of the survey
\item \texttt{smoke age}: age in years when the individual started smoking
\item \texttt{gender}: male or female
\item \texttt{race}: other, black or white
\item \texttt{marriage status}: married, widowed, divorced, separated, never married
\item \texttt{education level}: college graduate, some college, high school graduate, other
\item \texttt{census region}: geographic location, Northeast, Midwest, South, West
\item \texttt{poverty status}: poor, near poor, low income, middle income, high income
\item \texttt{seat belt}: whether the patient regularly uses a seat belt when in a car
\item \texttt{years quit}: how many years since the individual quit smoking
\end{itemize}
We estimate the treatment effect as a function of \texttt{smoke age}, \texttt{age}, and \texttt{years quit}. The results are given in Figure~\ref{fig:app1}. For BCS, we synthesize BCF, LM, and AM. Looking at the top row in Figure~\ref{fig:app1}, we find several interesting features. For one, the HTE profile is quite different across the three underlying methods and across covariates. Specifically, LM can be seen to be too rigid, owing to its linearity assumption, compared with BCF and AM, which are more flexible. Focusing specifically on \texttt{Smoke Age}, we see that the heterogeneity implied by the three underlying methods is quite different. BCS, taking the best of all worlds, can be seen to balance the three methods well. Moving attention to the BCS coefficients (Figure~\ref{fig:app1}, bottom row), some interesting patterns highlight the strengths of BCS. For example, looking at \texttt{Smoke Age}, we see a significant increase in the intercept as smoke age increases. As mentioned previously, the intercept captures the misspecification that the totality of the estimators cannot capture (i.e.
the model set misspecification). For smoke age, there are far fewer people who start smoking later in life, and thus fewer people in those subgroups. This causes considerable misspecification and uncertainty, and the increase in the intercept as the smoke age increases reflects this. The same pattern is seen in the other subgroups, but to a lesser extent because those subgroups have more members. Looking at the coefficients of each method, interestingly, AM has the most weight in most cases. This is surprising, since we would expect BCF to perform the best, and thus to have the most weight. We also observe some regions where there is a reversal in the coefficients, most notably the latter half of \texttt{Years Quit}, where there is a significant drop in the coefficients for AM and BCF, while LM remains constant. This shows that BCS is able to capture performance heterogeneity among subgroups. While an in-depth analysis of the application is beyond the scope of this paper, it is clear that the inference obtained from BCS is sufficiently different from that of the other methods. This has broad implications for other studies, as BCS can provide a more robust and accurate estimate of the HTE.

\begin{figure}[t!]
\centering
\includegraphics[width=13cm,clip]{med.png}
\caption{The estimated treatment effects (top) and model coefficients (bottom) against three continuous covariates.
\label{fig:app1} }
\end{figure}

\section{Summary Comments \label{sec:summ}}
Working under a coherent Bayesian framework that provides a theoretically and conceptually sound way to synthesize information, we develop Bayesian causal synthesis, a method to synthesize multiple causal estimates. With this new method, decision makers can calibrate, learn, and update coefficients on each estimation method, and thereby improve estimation. We show, through theoretical analysis, that BCS provides HTE estimates that are consistent. Several simulation studies, including those used in causal inference competitions, show that BCS, by synthesizing several HTE estimates, outperforms other estimation methods in terms of mean squared error and coverage probability. A real-data observational study also highlights the flexibility, interpretability, and efficacy of BCS. In addition to the causal problems addressed in this paper, there are several areas in which BCS can be useful. These include RCTs and other experiments, as well as observational studies where temporal or spatial information is relevant. Another clear extension is to meta-analysis, where experimental information can be used to synthesize estimates effectively. We believe that the flexibility of our approach will lead to further developments that provide increasing empirical support for the utility and efficacy of the approach, and will attract applied researchers.

\section*{Acknowledgement}
This work is partially supported by Japan Society for the Promotion of Science (KAKENHI) grant number 21H00699.
\bibliographystyle{chicago}
\bibliography{refs}

\setcounter{equation}{0}
\setcounter{section}{0}
\setcounter{table}{0}
\setcounter{page}{1}
\renewcommand{\thesection}{S\arabic{section}}
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thetable}{S\arabic{table}}

\vspace{1cm}
\begin{center}
{\LARGE {\bf Supplementary Material for ``Bayesian Causal Synthesis for Supra-Inference on Heterogeneous Treatment Effect''} }
\end{center}

This Supplementary Material provides details of the sampling algorithm of BCS, the proof of Theorem 1, and additional simulation results.

\section{Step-by-step Sampling Procedures}
\label{sec:NNGP-pos}
The use of the $m$-nearest-neighbor Gaussian process for $\beta_j(X)$ leads to a multivariate normal distribution with a sparse precision matrix for $(\beta_j(x_1),\ldots,\beta_j(x_n))$, defined as
$$
\pi(\beta_j(x_1),\ldots,\beta_j(x_n)) =\prod_{i=1}^n \phi(\beta_j^{\ast}(x_i);B_{\beta_j}(x_i)\beta_j^{\ast}(N(x_i)), \tau_{\beta_j}^2 F_{\beta_j}(x_i)), \ \ \ \ j=0,\ldots,J,
$$
with $\beta_j^{\ast}(x_i)=\beta_j(x_i)-\bar{\beta}_j$, where
\begin{equation}\label{eq:BF-beta}
\begin{split}
B_{\beta_j}(x_i)&=C_{\beta_j}(x_i, N(x_i); \phi_{\beta_j})C_{\beta_j}(N(x_i), N(x_i); \phi_{\beta_j})^{-1}, \\
F_{\beta_j}(x_i)&=1-C_{\beta_j}(x_i, N(x_i); \phi_{\beta_j})C_{\beta_j}(N(x_i), N(x_i); \phi_{\beta_j})^{-1}C_{\beta_j}(N(x_i), x_i; \phi_{\beta_j}),
\end{split}
\end{equation}
and $N(x_i)$ denotes the index set of the $m$ nearest neighbors of $x_i$. Here $C_{\beta_j}(t, v; \phi_{\beta_j})$, for $t=(t_1,\ldots,t_{d_t})\in \mathbb{R}^{d_t}$ and $v=(v_1,\ldots,v_{d_v})\in \mathbb{R}^{d_v}$, denotes the $d_t\times d_v$ correlation matrix whose $(i_t, i_v)$-element is $C(\|t_{i_t}-v_{i_v}\|; \phi_{\beta_j})$. In the same way, the joint prior distribution of $\mu$ is
$$
\pi(\mu(z_1),\ldots,\mu(z_n)) =\prod_{i=1}^n \phi(\mu^{\ast}(z_i);B_{\mu}(z_i)\mu^{\ast}(N(z_i)), \tau_{\mu}^2 F_{\mu}(z_i)),
$$
with $\mu^{\ast}(z_i)=\mu(z_i)-\bar{\mu}$, where
\begin{equation}\label{eq:BF-mu}
\begin{split}
B_{\mu}(z_i)&=C_{\mu}(z_i, N(z_i);\phi_\mu)C_{\mu}(N(z_i), N(z_i);\phi_\mu)^{-1}, \\
F_{\mu}(z_i)&=1-C_{\mu}(z_i, N(z_i);\phi_\mu)C_{\mu}(N(z_i), N(z_i);\phi_\mu)^{-1}C_{\mu}(N(z_i), z_i;\phi_\mu),
\end{split}
\end{equation}
and $C_\mu(\cdot,\cdot;\phi_\mu)$ is defined in the same way as $C_{\beta_j}(\cdot,\cdot;\phi_{\beta_j})$. In what follows, we assume that $f_{ji}\sim N(a_{ji}, b_{ji})$, where $a_{ji}\equiv a_j(x_i)$ and $b_{ji}\equiv b_j(x_i)$.
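Since the quantities in eqs.~\eqref{eq:BF-beta} and \eqref{eq:BF-mu} are the main computational building blocks of the sampler below, a minimal R sketch computing $B(x_i)$ and $F(x_i)$ for a single location is given next; the exponential correlation function, the one-dimensional inputs, and all numerical values are assumptions made for illustration.
\begin{verbatim}
## Minimal sketch (assumed exponential correlation): the nearest-neighbor GP
## quantities B(x_i) and F(x_i) defined above, for one location x_i with
## neighbor set N(x_i) and one-dimensional inputs.
nngp_BF <- function(xi, x_nb, phi) {
  C_iN <- exp(-abs(xi - x_nb) / phi)            # C(x_i, N(x_i); phi), length m
  C_NN <- exp(-as.matrix(dist(x_nb)) / phi)     # C(N(x_i), N(x_i); phi), m x m
  Bv   <- drop(C_iN %*% solve(C_NN))            # B(x_i) = C(x_i,N) C(N,N)^{-1}
  Fv   <- 1 - drop(C_iN %*% solve(C_NN, C_iN))  # F(x_i) = 1 - C(x_i,N) C(N,N)^{-1} C(N,x_i)
  list(B = Bv, F = Fv)
}
## toy example with m = 3 neighbors
nngp_BF(xi = 0.50, x_nb = c(0.45, 0.55, 0.60), phi = 0.2)
\end{verbatim}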
\bigskip
The full conditional distributions of $\Phi_n$ and $\psi$ are described as follows:
\begin{itemize}
\item[-] {\bf (Sampling of varying coefficients)} \ \ For $i=1,\ldots,n$, the full conditional distribution of $(\beta_0(x_i),\ldots,\beta_J(x_i))$ is given by $N(\bar{\beta}_j + A_{i}^{-1}B_{i}, A_{i}^{-1})$ with
\begin{align*}
&A_{i}=\frac{T_if_if_i^{\top}}{\sigma^2}+{\rm diag}(\gamma_{0i},\ldots,\gamma_{Ji}), \ \ \ \ B_{i}=\frac{T_if_i(y_i-\mu_i)}{\sigma^2}+(m_{0i},\ldots,m_{Ji})^\top,
\end{align*}
where
\begin{align*}
&\gamma_{ji}=\frac{1}{\tau_{\beta_j}^2 F_{\beta_j}(x_i)}+\sum_{t;x_i\in N(t) }\frac{B_{\beta_j}(t; x_i)^2}{\tau_{\beta_j}^2 F_{\beta_j}(t)}, \\
&m_{ji}=\frac{B_{\beta_j}(x_i)^\top \beta_j^{\ast}(N(x_i))}{\tau_{\beta_j}^2F_{\beta_j}(x_i)} +\sum_{t;x_i\in N(t) }\frac{B_{\beta_j}(t;x_i)}{\tau_{\beta_j}^2F_{\beta_j}(t)}\Big\{\beta_j^{\ast}(t)-\sum_{s\in N(t), s\neq x_i}B_{\beta_j}(t;s) \beta_j^{\ast}(s)\Big\},
\end{align*}
and $f_i=(1, f_{1i},\ldots,f_{Ji})^\top$. Here $B_{\beta_j}(t;s)$ denotes the scalar coefficient for $\beta_j^{\ast}(x_i)$ among the elements of the coefficient vector $B_{\beta_j}(t)$.
\item[-] {\bf (Sampling of prognostic term)} \ \ For $i=1,\ldots,n$, the full conditional distribution of $\mu(z_i)$ is given by
$$
N\left( \bar{\mu} + \left(\frac1{\sigma^2}+\gamma_i^{(\mu)}\right)^{-1}\left(\frac{\widetilde{y}_i}{\sigma^2}+m_i^{(\mu)}\right), \left(\frac1{\sigma^2}+\gamma_i^{(\mu)}\right)^{-1} \right),
$$
where $\widetilde{y}_i=y_i-T_i\{\beta_0(x_i)+\sum_{j=1}^J \beta_j(x_i)f_j(x_i)\}$, and
\begin{align*}
&\gamma_{i}^{(\mu)}=\frac{1}{\tau_\mu^2 F_{\mu}(z_i)}+\sum_{t;z_i\in N(t)}\frac{B_{\mu}(t; z_i)^2}{\tau_{\mu}^2 F_{\mu}(t)}, \\
&m_{i}^{(\mu)}=\frac{B_{\mu}(z_i)^\top \mu^{\ast}(N(z_i))}{\tau_{\mu}^2F_{\mu}(z_i)} +\sum_{t;z_i\in N(t) }\frac{B_{\mu}(t;z_i)}{\tau_{\mu}^2F_{\mu}(t)}\Big\{\mu^{\ast}(t)-\sum_{s\in N(t), s\neq z_i}B_{\mu}(t;s) \mu^{\ast}(s)\Big\}.
\end{align*}
\item[-] {\bf (Sampling of latent factors)}\ \ \ Generate $f_{ji}$ from $N((A_{ji}^{(f)})^{-1}B_{ji}^{(f)}, (A_{ji}^{(f)})^{-1})$, where
$$
A_{ji}^{(f)}=\bigg(\frac{\beta_{ji}^2}{\sigma^2}+\frac1{b_{ji}}\bigg), \ \ \ \ \ B_{ji}^{(f)}=\frac{\beta_{ji}}{\sigma^2}\bigg\{y_i-\mu_i-T_i\bigg(\beta_{0i}-\sum_{k\neq j}\beta_{ki}f_{ki}\bigg)\bigg\}+\frac{a_{ji}}{b_{ji}}.
$$
\item[-] {\bf (Sampling of
$\mbox{\boldmath$m$}box{\boldmath$t$}au_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}^2$)}\ \ For $j=0,\ldots,J$, the full conditional distribution of $\mbox{\boldmath$m$}box{\boldmath$t$}au_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}^2$ is $$ {\mbox{\boldmath$m$}box{\boldmath$r$}m IG}\left(\mbox{\boldmath$f$}rac{\mbox{\boldmath$m$}box{\boldmath$d$}elta_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}+n}{2}, \mbox{\boldmath$f$}rac{\mbox{\boldmath$m$}box{\boldmath$e$}ta_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}}{2}+\mbox{\boldmath$f$}rac12\mbox{\boldmath$m$}box{\boldmath$s$}um_{i=1}^n \mbox{\boldmath$f$}rac{\mbox{\boldmath$m$}box{\boldmath$b$}ig\{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j^{\mbox{\boldmath$m$}box{\mbox{\boldmath$m$}box{\boldmath$b$}oldmath$a$}st}(x_i)-B_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}(x_i)\mbox{\boldmath$m$}box{\boldmath$b$}eta_j^{\mbox{\boldmath$m$}box{\mbox{\boldmath$m$}box{\boldmath$b$}oldmath$a$}st}(N(x_i))\mbox{\boldmath$m$}box{\boldmath$b$}ig\}^2}{F_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}(x_i)}\mbox{\boldmath$m$}box{\boldmath$r$}ight). $$ \item[-] {\mbox{\boldmath$m$}box{\boldmath$b$}f (Sampling of $\mbox{\boldmath$m$}box{\boldmath$t$}au_{\mbox{\boldmath$m$}u}^2$)}\ \ The full conditional distribution of $\mbox{\boldmath$m$}box{\boldmath$t$}au_{\mbox{\boldmath$m$}u}^2$ is $$ {\mbox{\boldmath$m$}box{\boldmath$r$}m IG}\left(\mbox{\boldmath$f$}rac{\mbox{\boldmath$m$}box{\boldmath$d$}elta_\mbox{\boldmath$m$}u+n}{2}, \mbox{\boldmath$f$}rac{\mbox{\boldmath$m$}box{\boldmath$e$}ta_{\mbox{\boldmath$m$}u}}{2}+\mbox{\boldmath$f$}rac12\mbox{\boldmath$m$}box{\boldmath$s$}um_{i=1}^n \mbox{\boldmath$f$}rac{\mbox{\boldmath$m$}box{\boldmath$b$}ig\{\mbox{\boldmath$m$}u^{\mbox{\boldmath$m$}box{\mbox{\boldmath$m$}box{\boldmath$b$}oldmath$a$}st}(z_i)-B_{\mbox{\boldmath$m$}u}(z_i)\mbox{\boldmath$m$}u^{\mbox{\boldmath$m$}box{\mbox{\boldmath$m$}box{\boldmath$b$}oldmath$a$}st}(N(z_i))\mbox{\boldmath$m$}box{\boldmath$b$}ig\}^2}{F_{\mbox{\boldmath$m$}u}(z_i)}\mbox{\boldmath$m$}box{\boldmath$r$}ight). $$ \item[-] {\mbox{\boldmath$m$}box{\boldmath$b$}f (Sampling of $\phi_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}$)} \ \ For $j=0,\ldots,J$, the full conditional distribution of $\phi_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}$ is proportional to $$ \prod_{i=1}^n \phi(\mbox{\boldmath$m$}box{\boldmath$b$}eta_j^{\mbox{\boldmath$m$}box{\mbox{\boldmath$m$}box{\boldmath$b$}oldmath$a$}st}(x_i);B_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}(x_i)\mbox{\boldmath$m$}box{\boldmath$b$}eta_j^{\mbox{\boldmath$m$}box{\mbox{\boldmath$m$}box{\boldmath$b$}oldmath$a$}st}(N(x_i)), \mbox{\boldmath$m$}box{\boldmath$t$}au_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}^2F_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}(x_i)), \ \ \ \phi_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}\in (\mbox{\boldmath$m$}box{\boldmath$u$}nderline{c}_{\mbox{\boldmath$m$}box{\boldmath$b$}eta}, \overline{c}_{\mbox{\boldmath$m$}box{\boldmath$b$}eta}), $$ where $B_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}(x_i)$ and $F_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}(x_i)$ depend on $\phi_{\mbox{\boldmath$m$}box{\boldmath$b$}eta_j}$ as defined in (\mbox{\boldmath$m$}box{\boldmath$r$}ef{eq:BF-beta}). 
\item[-] {\mbox{\boldmath$m$}box{\boldmath$b$}f (Sampling of $\phi_{\mbox{\boldmath$m$}u}$)} \ \ The full conditional distribution of $\phi_{\mbox{\boldmath$m$}u}$ is proportional to $$ \prod_{i=1}^n \phi(\mbox{\boldmath$m$}u^{\mbox{\boldmath$m$}box{\mbox{\boldmath$m$}box{\boldmath$b$}oldmath$a$}st}(z_i);B_{\mbox{\boldmath$m$}u}(z_i)\mbox{\boldmath$m$}u^{\mbox{\boldmath$m$}box{\mbox{\boldmath$m$}box{\boldmath$b$}oldmath$a$}st}(N(z_i)), \mbox{\boldmath$m$}box{\boldmath$t$}au_{\mbox{\boldmath$m$}u}^2F_{\mbox{\boldmath$m$}u}(z_i)), \ \ \ \phi_{\mbox{\boldmath$m$}u}\in (\mbox{\boldmath$m$}box{\boldmath$u$}nderline{c}_{\mbox{\boldmath$m$}u}, \overline{c}_{\mbox{\boldmath$m$}u}), $$ where $B_{\mbox{\boldmath$m$}u}(z_i)$ and $F_{\mbox{\boldmath$m$}u}(z_i)$ depend on $\phi_{\mbox{\boldmath$m$}u}$ as defined in (\mbox{\boldmath$m$}box{\boldmath$r$}ef{eq:BF-mu}). \item[-] {\mbox{\boldmath$m$}box{\boldmath$b$}f (Sampling of ${\mbox{\boldmath$m$}box{\boldmath$s$}igma}gma^2$)} \ \ The full conditional is ${\mbox{\boldmath$m$}box{\boldmath$s$}igma}gma^2{\mbox{\boldmath$m$}box{\boldmath$s$}igma}m {\mbox{\boldmath$m$}box{\boldmath$r$}m IG}(\mbox{\boldmath$m$}box{\boldmath$d$}elta_{\mbox{\boldmath$m$}box{\boldmath$s$}igma}gma/2+n/2,\mbox{\boldmath$m$}box{\boldmath$e$}ta_{{\mbox{\boldmath$m$}box{\boldmath$s$}igma}gma}/2+\mbox{\boldmath$m$}box{\boldmath$s$}um_{i=1}^n(\mbox{\boldmath$m$}box{\mbox{\boldmath$m$}box{\boldmath$b$}oldmath$w$}idetilde{y}_i-\mbox{\boldmath$m$}u_i)^2/2)$, where $\mbox{\boldmath$m$}box{\mbox{\boldmath$m$}box{\boldmath$b$}oldmath$w$}idetilde{y}_i=y_i-T_i\{\mbox{\boldmath$m$}box{\boldmath$b$}eta_0(x_i)+\mbox{\boldmath$m$}box{\boldmath$s$}um_{j=1}^J \mbox{\boldmath$m$}box{\boldmath$b$}eta_j(x_i)f_j(x_i)\}$. \mbox{\boldmath$m$}box{\boldmath$e$}nd{itemize} \mbox{\boldmath$m$}box{\boldmath$s$}ection{Proof of Theorem 1} {\lambda}bel{sec:proof-thm1} \mbox{\boldmath$m$}box{\boldmath$b$}egin{proof} This proof is for the $T=1$ case, though the proof for $T=0$ follows equivalently. Let the direct product measure (cylinder measure) of the data, $\left(y_{1},\mbox{\boldmath$m$}box{\boldmath$c$}dots,y_{n},\mbox{\boldmath$m$}box{\boldmath$c$}dots\mbox{\boldmath$m$}box{\boldmath$r$}ight)$, be $\mbox{\boldmath$m$}athbb{P}_{\infty}^{*}$, and the direct product measure (cylinder measure) of the data of the synthesis model, $\left(y_{1},\mbox{\boldmath$m$}box{\boldmath$c$}dots,y_{n},\mbox{\boldmath$m$}box{\boldmath$c$}dots\mbox{\boldmath$m$}box{\boldmath$r$}ight)$, be $\mbox{\boldmath$m$}athbb{P}_{\infty}=\mbox{\boldmath$m$}athcal{L}\left(\mbox{\boldmath$m$}box{\boldmath$b$}oldsymbol{y}^{\infty}\left|\mbox{\boldmath$m$}box{\boldmath$b$}oldsymbol{{\mbox{\boldmath$m$}box{\boldmath$t$}heta}eta}^{\infty}\mbox{\boldmath$m$}box{\boldmath$r$}ight.\mbox{\boldmath$m$}box{\boldmath$r$}ight)$. Denote the marginal under the Gaussian process prior as \[ \mbox{\boldmath$m$}athbb{P}_{\infty}\pi^{\otimes\infty}=\int_{\mbox{\boldmath$m$}athbb{R}^{J\otimes\infty},\mbox{\boldmath$m$}athbb{R}_{+}}\mbox{\boldmath$m$}athcal{L}\left(\mbox{\boldmath$m$}box{\boldmath$b$}oldsymbol{y}^{\infty}\left|\mbox{\boldmath$m$}box{\boldmath$b$}oldsymbol{{\mbox{\boldmath$m$}box{\boldmath$t$}heta}eta}^{\infty}\mbox{\boldmath$m$}box{\boldmath$r$}ight.\mbox{\boldmath$m$}box{\boldmath$r$}ight)\pi\left(\mbox{\boldmath$m$}box{\boldmath$b$}oldsymbol{{\mbox{\boldmath$m$}box{\boldmath$t$}heta}eta}^{\infty}\mbox{\boldmath$m$}box{\boldmath$r$}ight)d\mbox{\boldmath$m$}box{\boldmath$b$}oldsymbol{{\mbox{\boldmath$m$}box{\boldmath$t$}heta}eta}^{\infty}. 
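For concreteness, the following minimal sketch (ours, not part of the original derivation; names such as \texttt{delta\_sigma} and \texttt{eta\_sigma} are illustrative, and the inverse-gamma law is taken in the shape--rate parameterization) shows how the last update above, the draw of $\sigma^2$, can be implemented by drawing the precision from a gamma distribution.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_sigma2(y_tilde, mu, delta_sigma, eta_sigma):
    # One Gibbs draw of sigma^2 from
    #   IG(delta_sigma/2 + n/2, eta_sigma/2 + sum_i (y_tilde_i - mu_i)^2 / 2),
    # where y_tilde_i = y_i - T_i {beta_0(x_i) + sum_j beta_j(x_i) f_j(x_i)}.
    n = len(y_tilde)
    shape = delta_sigma / 2.0 + n / 2.0
    rate = eta_sigma / 2.0 + 0.5 * np.sum((y_tilde - mu) ** 2)
    # If X ~ Gamma(shape, rate), then 1/X ~ IG(shape, rate).
    return 1.0 / rng.gamma(shape, 1.0 / rate)

# Toy call with simulated residuals.
y_tilde = rng.normal(size=100)
mu = np.zeros(100)
print(sample_sigma2(y_tilde, mu, delta_sigma=1.0, eta_sigma=1.0))
\end{verbatim}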
\section{Proof of Theorem 1}
\label{sec:proof-thm1}
\begin{proof}
This proof is for the $T=1$ case; the proof for $T=0$ follows analogously.
Let the direct product measure (cylinder measure) of the data, $\left(y_{1},\cdots,y_{n},\cdots\right)$, be $\mathbb{P}_{\infty}^{*}$, and the direct product measure (cylinder measure) of the data under the synthesis model, $\left(y_{1},\cdots,y_{n},\cdots\right)$, be $\mathbb{P}_{\infty}=\mathcal{L}\left(\boldsymbol{y}^{\infty}\left|\boldsymbol{\theta}^{\infty}\right.\right)$.
Denote the marginal under the Gaussian process prior as
\[
\mathbb{P}_{\infty}\pi^{\otimes\infty}=\int_{\mathbb{R}^{J\otimes\infty},\mathbb{R}_{+}}\mathcal{L}\left(\boldsymbol{y}^{\infty}\left|\boldsymbol{\theta}^{\infty}\right.\right)\pi\left(\boldsymbol{\theta}^{\infty}\right)d\boldsymbol{\theta}^{\infty}.
\]
As with $\mathbb{P}_{n}^{*},\mathbb{P}_{n}\pi^{\otimes n}$, denote the conditional distribution of the cylinder measure, given $\boldsymbol{y}^{n}$, as $\left.\mathbb{P}_{\infty}^{*}\right|_{\boldsymbol{y}^{n}},\left.\mathbb{P}_{\infty}\pi^{\otimes\infty}\right|_{\boldsymbol{y}^{n}}$.
Note that
\[
\left.\mathbb{P}_{\infty}^{*}\right|_{\boldsymbol{y}^{n}}=\frac{\mathcal{L}\left(\boldsymbol{y}^{\infty}\left|\boldsymbol{\theta}^{*\infty}\right.\right)}{\int_{y^{n+1}}\cdots\int_{y^{\infty}}\mathcal{L}\left(\boldsymbol{y}^{\infty}\left|\boldsymbol{\theta}^{*\infty}\right.\right)d\boldsymbol{y}^{n+1}\cdots d\boldsymbol{y}^{\infty}}
\]
and
\[
\left.\mathbb{P}_{\infty}\pi^{\otimes\infty}\right|_{\boldsymbol{y}^{n}}=\frac{\int_{\mathbb{R}^{J\otimes\infty},\mathbb{R}_{+}}\mathcal{L}\left(\boldsymbol{y}^{\infty}\left|\boldsymbol{\theta}^{\infty}\right.\right)\pi\left(\boldsymbol{\theta}^{\infty}\right)d\boldsymbol{\theta}^{\infty}}{\int_{y^{n+1}}\cdots\int_{y^{\infty}}\int_{\mathbb{R}^{J\otimes\infty},\mathbb{R}_{+}}\mathcal{L}\left(\boldsymbol{y}^{\infty}\left|\boldsymbol{\theta}^{\infty}\right.\right)\pi\left(\boldsymbol{\theta}^{\infty}\right)d\boldsymbol{\theta}^{\infty}d\boldsymbol{y}^{n+1}\cdots d\boldsymbol{y}^{\infty}}.
\]
Given this notation, to prove (\ref{eq:Consis}) we need to show
\[
\sup_{A\in\mathbb{R}^{\infty}}\left|\left.\mathbb{P}_{\infty}^{*}\right|_{\boldsymbol{y}^{n}}-\left.\mathbb{P}_{\infty}\pi^{\otimes\infty}\right|_{\boldsymbol{y}^{n}}\right|\rightarrow0\ \textrm{ in }\mathbb{P}_{\infty}^{*},
\]
and to prove (\ref{eq:TotalConsis}) we need to show
\begin{equation}
\lim_{n\rightarrow\infty}\sup_{A\in\mathbb{R}^{\infty}}\left|\left.\mathbb{P}_{\infty}^{*}\right|_{\boldsymbol{y}^{n}}\otimes p^{*}\left(\boldsymbol{y}^{n}\right)-\left.\mathbb{P}_{\infty}\pi^{\otimes\infty}\right|_{\boldsymbol{y}^{n}}\otimes p^{*}\left(\boldsymbol{y}^{n}\right)\right|=0.\label{eq:Consis-2}
\end{equation}
Since neither quantity is greater than $2$, and since the equality
\[
\sup_{A\in\mathbb{R}^{\infty}}\left|\left.\mathbb{P}_{\infty}^{*}\right|_{\boldsymbol{y}^{n}}\otimes p^{*}\left(\boldsymbol{y}^{n}\right)-\left.\mathbb{P}_{\infty}\pi^{\otimes\infty}\right|_{\boldsymbol{y}^{n}}\otimes p^{*}\left(\boldsymbol{y}^{n}\right)\right|=\int_{y^{n}}\sup_{A\in\mathbb{R}^{\infty}}\left|\left.\mathbb{P}_{\infty}^{*}\right|_{\boldsymbol{y}^{n}}-\left.\mathbb{P}_{\infty}\pi^{\otimes\infty}\right|_{\boldsymbol{y}^{n}}\right|p^{*}\left(\boldsymbol{y}^{n}\right)d\boldsymbol{y}^{n}
\]
holds, the two statements are equivalent. Thus, proving (\ref{eq:Consis-2}) is sufficient to prove both.

Now, since we assumed $0<\pi\left(\mu^{*},\left\{ \boldsymbol{\beta}_{j}^{*}\right\} _{j=0,\cdots,J}\right)$, $\mathbb{P}_{\infty}^{*}$ is absolutely continuous with respect to the marginal $\mathbb{P}_{\infty}\pi^{\otimes\infty}$: $\mathbb{P}_{\infty}^{*}\ll\mathbb{P}_{\infty}\pi^{\otimes\infty}$.
Denote the likelihood ratio of $\mathbb{P}_{\infty}^{*}$ and $\mathbb{P}_{\infty}\pi^{\otimes\infty}$, and its $\mathcal{F}_{n}$-conditional version, as
\[
Z=\frac{d\mathbb{P}_{\infty}^{*}}{d\mathbb{P}_{\infty}\pi^{\otimes\infty}},\quad Z_{n}=\mathbb{E}^{\mathbb{P}_{\infty}\pi^{\otimes\infty}}\left[\left.\frac{d\mathbb{P}_{\infty}^{*}}{d\mathbb{P}_{\infty}\pi^{\otimes\infty}}\right|\mathcal{F}_{n}\right].
\]
Here, $\mathbb{E}^{\mathbb{P}_{\infty}\pi^{\otimes\infty}}\left[\cdot\right]$ is an integral with respect to the cylinder measure $\mathbb{P}_{\infty}\pi^{\otimes\infty}$.
Further, since $\mathbb{E}^{\mathbb{P}_{\infty}\pi^{\otimes\infty}}\left[Z_{n}\right]\leqq1$ for all $n$, Doob's martingale convergence theorem gives
\begin{equation}
\lim_{n\rightarrow\infty}\mathbb{E}^{\mathbb{P}_{\infty}\pi^{\otimes\infty}}\left[\left|Z_{n}-Z\right|\right]=0.\label{eq:DMCT}
\end{equation}
From this, we can prove (\ref{eq:Consis-2}).
For a function $h$ on $\mathbb{R}^{\infty}$, we have
\begin{alignat*}{1}
 & \sup_{\left\Vert h\left(\cdot,y^{n}\right)\right\Vert \leqq1}\left|\int_{y^{n}}\left\{ \int h\left(\boldsymbol{y}^{n+1:\infty},\boldsymbol{y}^{n}\right)\left(\left.\mathbb{P}_{\infty}^{*}\right|_{\boldsymbol{y}^{n}}-\left.\mathbb{P}_{\infty}\pi^{\otimes\infty}\right|_{\boldsymbol{y}^{n}}\right)d\boldsymbol{y}^{n+1:\infty}\right\} p^{*}\left(\boldsymbol{y}^{n}\right)d\boldsymbol{y}^{n}\right|\\
= & \sup_{\left\Vert h\left(\cdot,y^{n}\right)\right\Vert \leqq1}\left|\int_{y^{n}}\int h\left(\boldsymbol{y}^{n+1:\infty},\boldsymbol{y}^{n}\right)\left\{ Z-Z_{n}\right\} \left.\mathbb{P}_{\infty}\pi^{\otimes\infty}\right|_{\boldsymbol{y}^{n}}d\boldsymbol{y}^{n+1:\infty}\,\mathbb{P}_{n}\pi^{\otimes n}d\boldsymbol{y}^{n}\right|\\
\leqq & \ \mathbb{E}^{\mathbb{P}_{\infty}\pi^{\otimes\infty}}\left[\left|Z_{n}-Z\right|\right]\rightarrow0.
\end{alignat*}
If we let $h\left(\cdot,y^{n}\right)=1_{A}\left(\cdot\right)$, we obtain the result.
\end{proof}

\section{Additional Simulation Studies}
\label{sec:supp-sim}

\subsection{Performance under small samples}
To investigate the performance under relatively small sample sizes, we conducted an additional simulation with $n=50$, $p=5$, and the same four data generating scenarios considered in Section~\ref{sec:sim-syn}.
Mean squared error (MSE) of point estimates, and coverage probability (CP) and average length (AL) of $95\%$ credible/confidence intervals, of heterogeneous treatment effects, averaged over 100 replications, are reported in Tables~\ref{tab:sim-small-n-MSE} and \ref{tab:sim-small-n-coverage}.
As with the simulation study in Section~\ref{sec:sim-syn}, we see that BCS is either the best or the second best estimate in terms of point estimation, and is consistently the best in CP.
Unlike in that simulation study, however, BCF does not perform as well, lagging behind AM in all cases. This suggests that tree-based models can struggle when the number of observations is small, owing to their flexible nature. Although AM outperforms BCF in MSE, BCF retains better CP. The practical issue for these methods is that it is unclear what amount of data is ``enough'' in a given context. However, BCS, by synthesizing each estimate, is able to achieve improved estimation in all cases, since it can learn from the data which method should receive more weight.

\subsection{Out-of-sample performance}
We further investigate the out-of-sample performance of estimation and inference on heterogeneous treatment effects. Again adopting the same data generating scenarios with $n=200$ and $p\in\{5,15\}$, we apply the same methods to the observed data to estimate heterogeneous treatment effects in 200 test samples generated in the same way as the observed samples. Note that the R package \texttt{bcf} does not provide out-of-sample inference, so BCF was omitted from this study. Regarding the proposed BCS, we combine the results from CF, LM and AM.
Mean squared error (MSE) of point estimates, and coverage probability (CP) and average length (AL) of $95\%$ credible/confidence intervals, of heterogeneous treatment effects, averaged over 100 replications, are reported in Tables~\ref{tab:out-of-sample-MSE} and \ref{tab:out-of-sample-coverage}. As confirmed in Section~\ref{sec:sim-syn}, the proposed BCS attains the smallest or second smallest MSE values in all of the scenarios, and it significantly improves the performance of the three models synthesized. Table~\ref{tab:out-of-sample-coverage} shows that BCS provides reasonable interval estimation, with empirical coverage around the nominal level, while the other methods severely undercover the true value.

\begin{table}[t!]
\caption{Mean squared error (MSE) of point estimates of the heterogeneous treatment effect in the observed samples under $n=50$ and $p=5$, averaged over 100 replications. The smallest and second smallest MSE values are highlighted in bold.}
\label{tab:sim-small-n-MSE}
\begin{center}
\begin{tabular}{cccccccccccccc}
\hline
Scenario & & BCS & BCF & LM & AM & XL & RL & CF & CST \\
\hline
1 & & {\bf 5.34} & 7.80 & 9.23 & {\bf 4.06} & 8.26 & 12.15 & 19.46 & 10.22 \\
2 & & {\bf 3.26} & 5.03 & 4.49 & {\bf 4.09} & 5.62 & 9.12 & 18.38 & 10.09 \\
3 & & {\bf 5.66} & 8.54 & 10.63 & {\bf 4.31} & 8.85 & 12.57 & 19.82 & 10.56 \\
4 & & {\bf 3.67} & 5.81 & 5.13 & {\bf 4.77} & 6.46 & 10.45 & 19.53 & 10.26 \\
\hline
\end{tabular}
\end{center}
\end{table}

\begin{table}[t!]
\caption{Coverage probability (CP) and average length (AL) of $95\%$ credible/confidence intervals of the heterogeneous treatment effect in the observed samples under $n=50$ and $p=5$, averaged over 100 replications.}
\label{tab:sim-small-n-coverage}
\begin{center}
\begin{tabular}{cccccccccccccc}
\hline
 && \multicolumn{5}{c}{CP ($\%$)} && \multicolumn{5}{c}{AL}\\
Scenario & & BCS & BCF & LM & AM & CF & & BCS & BCF & LM & AM & CF \\
\hline
1 & & 95.5 & 89.2 & 83.1 & 67.1 & 49.3 & & 9.31 & 8.18 & 7.98 & 3.60 & 4.90 \\
2 & & 94.1 & 90.2 & 85.4 & 71.1 & 38.4 & & 7.08 & 6.58 & 5.86 & 3.95 & 3.46 \\
3 & & 96.2 & 89.8 & 84.7 & 69.3 & 50.9 & & 9.76 & 8.46 & 8.66 & 3.79 & 5.20 \\
4 & & 93.8 & 88.4 & 83.9 & 69.3 & 37.2 & & 7.33 & 6.73 & 6.00 & 4.07 & 3.58 \\
\hline
\end{tabular}
\end{center}
\end{table}

\begin{table}[t!]
\caption{Mean squared error (MSE) of point estimates of the heterogeneous treatment effect in 200 test samples, averaged over 100 replications. The smallest and second smallest MSE values are highlighted in bold.}
\label{tab:out-of-sample-MSE}
\begin{center}
\begin{tabular}{cccccccccccccc}
\hline
Scenario & $p$ & & BCS & CF & LM & AM & XL & RL & CST \\
\hline
1 & 5 & & {\bf 2.32} & 6.27 & 4.29 & {\bf 3.03} & 3.04 & 4.49 & 4.22 \\
 & 15 & & {\bf 3.61} & 7.32 & 6.39 & {\bf 3.10} & 4.11 & 6.03 & 5.83 \\
2 & 5 & & {\bf 1.59} & 5.62 & 3.17 & 3.05 & {\bf 2.55} & 3.76 & 4.80 \\
 & 15 & & {\bf 2.84} & 5.98 & 4.38 & {\bf 3.15} & 3.63 & 4.95 & 5.70 \\
3 & 5 & & {\bf 2.85} & 6.71 & 5.35 & 3.67 & {\bf 3.32} & 5.11 & 5.18 \\
 & 15 & & {\bf 4.26} & 7.96 & 7.67 & {\bf 3.62} & 4.44 & 6.46 & 6.72 \\
4 & 5 & & {\bf 2.14} & 6.35 & 3.84 & 3.65 & {\bf 2.93} & 4.13 & 4.95 \\
 & 15 & & {\bf 3.32} & 6.36 & 4.98 & {\bf 3.66} & 3.99 & 5.54 & 5.79 \\
\hline
\end{tabular}
\end{center}
\end{table}

\begin{table}[t!]
\caption{Coverage probability (CP) and average length (AL) of $95\%$ credible/confidence intervals of the heterogeneous treatment effect in 200 test samples, averaged over 100 replications.}
\label{tab:out-of-sample-coverage}
\begin{center}
\begin{tabular}{cccccccccccccc}
\hline
 &&& \multicolumn{4}{c}{CP ($\%$)} && \multicolumn{4}{c}{AL}\\
Scenario & $p$ & & BCS & CF & LM & AM & & BCS & CF & LM & AM \\
\hline
1 & 5 & & 98.5 & 65.2 & 70.2 & 55.7 & & 6.82 & 2.93 & 3.91 & 2.11 \\
 & 15 & & 98.8 & 70.3 & 83.7 & 56.8 & & 8.69 & 3.64 & 6.85 & 2.15 \\
2 & 5 & & 99.2 & 59.4 & 65.4 & 59.1 & & 6.27 & 2.40 & 2.85 & 2.35 \\
 & 15 & & 98.5 & 63.3 & 78.5 & 57.7 & & 7.38 & 2.80 & 4.92 & 2.30 \\
3 & 5 & & 96.4 & 63.0 & 69.6 & 50.1 & & 7.19 & 3.10 & 4.31 & 2.19 \\
 & 15 & & 97.4 & 67.5 & 84.1 & 51.7 & & 9.26 & 3.72 & 7.61 & 2.18 \\
4 & 5 & & 97.1 & 55.2 & 61.9 & 54.7 & & 6.66 & 2.45 & 2.94 & 2.40 \\
 & 15 & & 96.6 & 61.2 & 77.3 & 54.4 & & 7.70 & 2.91 & 5.09 & 2.38 \\
\hline
\end{tabular}
\end{center}
\end{table}

\end{document}
\begin{document} \title[Real Paley-Wiener Theorem]{Real Paley-Wiener Theorem for the Generalized Weinstein transform in quantum calculus} \author[Y. Bettaibi \& H. Ben Mohamed]{} \maketitle \centerline{\bf Youssef Bettaibi$^1$} \centerline{E-mail : [email protected]} \centerline{\bf Hassen Ben Mohamed$^1$} \centerline{E-mail : [email protected]} $^1${University of Gabes, Faculty of Sciences of Gabes, LR17ES11 Mathematics and Applications, 6072, Gabes, Tunisia.}
\begin{abstract}
We first characterize the image of the compactly supported smooth even functions under the $q$-Weinstein transform as a subspace of the Schwartz space. We then describe the space of smooth $L_{\alpha, q, a}^{2}$-functions whose $q$-Weinstein transform has compact support as a subspace of the space of $L_{\alpha, q, a}^{2}$-functions.
\end{abstract}
\noindent {\bf Keywords :} {$q$-theory, Weinstein transform, $q$-integral transform} \\ {\bf 2010 AMS Classification : } {33D15; 33E20; 33D60; 42B10}
\section{Introduction}
The original Paley-Wiener theorem \cite{RPW1} describes the Fourier transforms of $L^{2}$-functions on the real line with support in a symmetric interval as entire functions of exponential type whose restrictions to the real line are $L^{2}$-functions; it has proved to be a basic tool for the study of integral transforms in various settings. Recently, there has been great interest in the real Paley-Wiener theorem due to Bang \cite{RPW2}, in which the adjective ``real'' expresses that information about the support of the Fourier transform comes from growth rates associated to the function $f$ on $\mathbb{R}$, rather than on $\mathbb{C}$ as in the classical ``complex Paley-Wiener theorem''. Bang \cite{RPW2} discovered a characterization of band-limited signals by using a derivative operator; his result can be rephrased as follows: $f \in L^{2}(\mathbb{R})$ is band-limited with bandwidth $\sigma$ if and only if $f$ is infinitely differentiable, $\frac{d^{m}f}{d t^{m}} \in L^{2}(\mathbb{R})$ for all positive integers $m$, and
$$\lim _{m \rightarrow \infty}\left\|\frac{\mathrm{d}^{m}}{\mathrm{d} t^{m}} f\right\|_{L^{2}(\mathbb{R})}^{1 / m}=\sigma=\sup \{|\xi|: \xi \in \operatorname{supp} \mathcal{F}( f)\},$$
where $\mathcal{F}(f)$ is the Fourier transform of $f.$ A wide number of papers have been devoted to the extension of the theory to higher dimensions and to many other integral transforms (see \cite{RPW3}, \cite{RPW13}, \cite{RPW22}, and the references therein).\\
A class of Paley-Wiener theorems sitting inside the Schwartz space was obtained by Andersen in \cite{RPW5}, where it is shown that the Fourier transform is a bijection between smooth functions supported in $[-R, R]$ and the space of all Schwartz functions satisfying, for all $N \in \mathbb{N}$,
$$ \sup _{x \in \mathbb{R}, n \in \mathbb{N}_{0}} R^{-n} n^{-N}(1+|x|)^{N}\left|\frac{d^{n}}{d x^{n}} f\right|<\infty. $$
Following the classical theory, an element of $P W_{a}$ will be called a bandlimited signal.
\\ Several papers have been devoted to the extension of the theory to many other transforms and different classes of functions, for example the Hankel transform (see \cite{RPWH}) and the Weinstein transform (see \cite{RPWW}).\\
In the literature, these theorems are known to hold for more general transforms in classical analysis as well as in quantum calculus, for example the real Paley-Wiener theorems for the $q$-Dunkl transform (see \cite{RPWD}) and the $q$-Hankel transform (see \cite{RPWH}).\\
In \cite{youss}, we introduced a $q$-analogue of the Weinstein operator and investigated its eigenfunction. Next, we studied its associated Fourier transform, which is a $q$-analogue of the Weinstein transform.\\
In this paper, we continue this work by giving two real Paley-Wiener theorems for the $q$-Weinstein transform. The first uses techniques due to Tuan and Zayed \cite{RPW} in order to describe the image under the $q$-Weinstein transform $\mathscr{F}_{W}^{\alpha, q}$ of $L_{\alpha, q,a}^{2}$ (the space of square integrable functions on $B_{(0,a)}$ with respect to the measure $\left.x_{2}^{2 \alpha+1} d_{q} x_{1}d_{q} x_{2}, \ \alpha \geq-1 / 2\right)$. The second characterizes the image of the space of compactly supported $q$-smooth functions under $\mathscr{F}_{W}^{\alpha, q}$.
This paper is organized as follows: in Section 2, we present some standard notations used in the sequel. In Section 3, we recall some results and definitions from the theory of the $q$-Weinstein operator and the $q$-Weinstein transform; all of these results can be found in \cite{youss}. Section 4 is devoted to the study of the real Paley-Wiener theorem for $q$-$L^{2}$-functions. Finally, in Section 5, we give a real Paley-Wiener theorem for the $q$-Schwartz functions.
\section{Notations and preliminaries}
For the convenience of the reader, we provide in this section a summary of the mathematical notations and definitions used in this paper. We refer the reader to the general references \cite{GR} and \cite{KC} for the definitions, notations and properties of the $q$-shifted factorials and the $q$-hypergeometric functions. Throughout this paper, we assume $q\in]0, 1[$ and we denote \\
$\bullet\displaystyle~\mathbb{R}_q=\{\pm q^n~~;~~n\in\mathbb{Z}\}$, $\displaystyle \mathbb{R}_{q,+}=\{q^n~~;~~n\in\mathbb{Z}\}$.\\
$\bullet \ {\mathbb{R}_q^{2}}=\mathbb{R}_q\times\mathbb{R}_{q}$ and $ {\mathbb{R}_{q,+}^{2}}=\mathbb{R}_q\times\mathbb{R}_{q,+}$. \\
$\bullet ~x= (x_1,x_2)\in\mathbb{R}^2$, $-x =(-x_1,x_2)$ and $ \| x\|=\sqrt{x_1^{2}+x_2^{2}}$. \\
$\bullet$ For $a>0 $, $B_{(0,a)}=\Big\{x\in{\mathbb{R}_q^{2}},\quad \|x\|\leq a\Big\}$ and $B_{+(0,a)}=B_{(0,a)}\bigcap{\mathbb{R}_{q,+}^{2}}$.\\
\subsection{Basic symbols}
For $x \in \mathbb{C} $, the $q$-shifted factorials are defined by
\begin{equation*}
(x;q)_0=1;~~ (x;q)_n = \displaystyle\prod _{k=0}^{n-1}(1-xq^k),~~ n=1,2,\ldots;~~ (x;q)_\infty = \displaystyle\prod _{k=0}^\infty(1-xq^k).
\end{equation*}
We also denote
\begin{equation*}
[x]_q={{1-q^x}\over{1-q}},\quad ~ x\in \mathbb{C}\quad {\rm and}\quad [n]_q! ={{(q;q)_n}\over {(1-q)^n}}, \quad ~~ n\in \mathbb{N}.
\end{equation*}
\subsection{Operators and elementary special functions}~~\\
The $q$-Gamma function is given by (see \cite{Jac})
$$ \Gamma_q (x) ={(q;q)_{\infty}\over{(q^x;q)_{\infty}}}(1-q)^{1-x} , ~~ ~~x\neq 0, -1 , -2 ,\ldots $$
It satisfies the following relations
\begin{equation*} \label{gam1}
\Gamma_q (x+1) =[x]_q \Gamma_q (x) ,~~ \Gamma_q (1) = 1~~ \hbox{ and } \lim_{q\longrightarrow 1^-}\Gamma_q (x) = \Gamma(x) , \quad {\rm Re}(x) >0.
\end{equation*}
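In particular, iterating the functional equation $\Gamma_q(x+1)=[x]_q\Gamma_q(x)$ together with $\Gamma_q(1)=1$ shows that the $q$-Gamma function interpolates the $q$-factorial defined above: for every $n\in\mathbb{N}$,
$$ \Gamma_q(n+1)=[n]_q[n-1]_q\cdots[1]_q=[n]_q!=\frac{(q;q)_n}{(1-q)^n}. $$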
The third Jackson's normalized Bessel function is given by (see \cite{Rubin})
\begin{equation}\label{j}
j_\alpha(x;q^2) = \displaystyle \sum_{n=0}^{+\infty} (-1)^n \frac{\Gamma_{q^2}(\alpha+1)q^{n(n+1)}}{(1+q)\Gamma_{q^2}(\alpha+n+1)\Gamma_{q^2}(n+1)}x ^{2n},
\end{equation}
the $q$-trigonometric functions $q$-cosine and $q$-sine are defined by (see \cite{Rubin})
\begin{equation}\label{cos}
\cos(x;q^2)=j_{-\frac{1}{2}}\left(x ; q^{2}\right) \quad{\rm , }\quad \sin(x;q^2)=xj_{\frac{1}{2}}\left(x ; q^{2}\right),
\end{equation}
and the $q$-analogue exponential function is given by (see \cite{Rubin})
\begin{align}\label{exp}
e(z;q^2) =\cos(-iz;q^2)+i\sin(-iz;q^2).
\end{align}
These functions are absolutely convergent for all $z$ in the plane and, when $q$ tends to $1$, they tend to the corresponding classical ones pointwise and uniformly on compacts.\\
Note that we have, for all $x \in \mathbb{R}_q$ (see \cite{Rubin}),
\begin{equation}\label{iexp}
|j_\alpha(x;q^2)|\leq \frac{1}{(q;q)_\infty} \quad \hbox{and} \quad |e(ix;q^2)|\leq \displaystyle \frac{2}{(q;q)_\infty}.
\end{equation}
The $q^2$-analogue differential operator is (see \cite{Rubin})
\begin{equation*}
\partial_q(f)(z)=\left\{\begin{array}{cc} \displaystyle\frac{f\left(q^{-1}z\right)+f\left(-q^{-1}z\right)-f\left(qz\right)+f\left(-qz\right)-2f(-z)}{2(1-q)z} & {\rm if}~~z\neq 0, \\ \displaystyle\lim_{x\rightarrow 0}\partial_q(f)(x)\qquad({\rm in}~~ \mathbb{R}_q) & {\rm if}~~z= 0. \\ \end{array}\right.
\end{equation*}
\begin{remark}
If $f$ is differentiable at $z$, then $\displaystyle \lim_{q\rightarrow1}\partial_q(f)(z)=f'(z)$.\\
A repeated application of the $q^2$-analogue differential operator $n$ times is given by:
$$ \partial_q^0f=f,\quad \partial_q^{n+1}f=\partial_q(\partial_q^nf). $$
For $\beta=(\beta_1,\beta_2)\in \mathbb{N}\times\mathbb{N}$, we use the notation $$D_q^{\beta}=\partial_{x,q}^{\beta_1}\partial_{y,q}^{\beta_2}.$$
The $q^2$-analogue Laplace operator, or $q$-Laplacian, is given by $$\Delta_q= \partial_{x,q}^2+\partial_{y,q}^2.$$
\end{remark}
The following lemma lists some useful computational properties of $\partial_q$, and reflects the sensitivity of this operator to the parity of its argument. The proof is straightforward.
\begin{lemme}\label{ld}~~\\
1) We have
\begin{align*}
& \partial_q \sin(x;q^2)=\cos(x;q^2),\quad \partial_q \cos(x;q^2)=-\sin(x;q^2), \\
& \partial_q e(x;q^2)=e(x;q^2) \quad\text{and}\quad \partial_q j_\alpha(x;q^2)=-\frac{x}{[2\alpha +2]_q}j_{\alpha +1}(x;q^2).
\end{align*}
2) For every function $f$ we have
$$\displaystyle \partial_q f(z)=\frac{f_e(q^{-1}z)-f_e(z)}{(1-q)z}+\frac{f_o(z)-f_o(qz)}{(1-q)z}.$$
Here, for a function $f$ defined on $\mathbb{R}_q$, $f_e$ and $f_o$ are its even and odd parts, respectively.\\
3) Let $f$ and $g$ be two functions.\\
\indent i) If $f$ is even and $g$ is odd, we have
$$ \partial_q(fg)(z)=q\partial_q(f)(qz)g(z)+ f(qz)\partial_q(g)(z) =\partial_q(g)(z)f(z)+ qg(qz)\partial_q(f)(qz);$$
\indent ii) If $f$ and $g$ are even, we have
$$ \partial_q(fg)(z)=\partial_q(f)(z)g(q^{-1}z)+ f(z)\partial_q(g)(z);$$
\indent iii) If $f$ and $g$ are odd, we have
\begin{equation*}
\partial_q (f g)(x)=\partial_q(f)(x) g\left(\frac{x}{q}\right)+f(x) \partial_q(g)(x)- \frac{f(x)}{x}\left(g\left(\frac{x}{q}\right)+g(x)\right).
\end{equation*}
\end{lemme}
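As a quick illustration (added here for the reader; it is not used in the sequel) of how part 2) of Lemma \ref{ld} separates parities, applying it to monomials gives
$$ \partial_q z^{2n+1}=\frac{z^{2n+1}-(qz)^{2n+1}}{(1-q)z}=[2n+1]_q\,z^{2n}, \qquad \partial_q z^{2n}=\frac{(q^{-1}z)^{2n}-z^{2n}}{(1-q)z}=q^{-2n}[2n]_q\,z^{2n-1}, $$
and both expressions reduce to the classical derivative as $q\rightarrow 1^-$.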
By the use of the $q^2$-analogue differential operator $\partial_q$, we note:\\
$\bullet$ $\mathscr{E}_{q}(\mathbb{R}_q^2)$, the space of functions $f$ defined on $\mathbb{R}_q\times\mathbb{R}_q$ satisfying, for all $n\in\mathbb{N}$ and all $a \geq 0$,
$$ P_{n,a}(f) = \sup\left\{ |D_{q}^{\beta}f(x)|, \ \mid \beta\mid\leq n; \ x\in \mathbb{R}_{q}^2:\|x\|\leq a\right\}<\infty $$
and
$$ \lim_{x\rightarrow (0,0)}D_{q}^{\beta} f(x)\quad ({\rm in}\quad \mathbb{R}_{q}^2)\qquad {\rm exists}.$$
We provide it with the topology defined by the seminorms $P_{n,a}.$ \\
$\bullet$ $\mathscr{E}_{\ast ,q}(\mathbb{R}_q^2)$, the space of functions in $\mathscr{E}_{q}(\mathbb{R}_q\times\mathbb{R}_q)$ that are even with respect to the last variable. \\
$\bullet$ $\mathscr{S}_{q}(\mathbb{R}_{q}^2)$, the space of functions $f$ defined on $\mathbb{R}_{q}^2$ satisfying
$$\forall n\in\mathbb{N},\quad P_{n,q}(f)=\sup_{x\in\mathbb{R}_{q}^2}\sup_{|\beta|\leq n}\left| D_{q}^\beta\left[\|x\|^{2n} f(x)\right]\right|<+\infty$$
and
$$ \lim_{x\rightarrow (0,0)}D_{q}^{\beta} f(x)\quad ({\rm in}\quad \mathbb{R}_{q}^2)\qquad {\rm exists}.$$
$\bullet$ $\mathscr{S}_{\ast,q} (\mathbb{R}_{q}^2)$, the space of functions in $\mathscr{S}_{q}(\mathbb{R}_q^2)$ that are even with respect to the last variable. \\
$\bullet$ $\mathscr{D}_{q}\left(\mathbb{R}_{q}^2\right)$, the space of the restrictions on $\mathbb{R}_{q}^2$ of infinitely $q$-differentiable functions on $\mathbb{R}_{q}^2$ with compact support.\\
$\bullet$ $\mathscr{D}_{a,q} (\mathbb{R}_{q}^2)$, the space of functions in $\mathscr{D}_{q}(\mathbb{R}_{q}^2)$ supported in $B_{(0,a)}$.\\
$\bullet$ $\mathscr{D}_{\ast ,q} (\mathbb{R}_{q}^2)$, the space of functions in $\mathscr{D}_{q}(\mathbb{R}_{q}^2)$ that are even with respect to the last variable. \\
$\bullet$ $\mathscr{D}_{\ast ,a,q} (\mathbb{R}_{q}^2)$, the space of functions in $\mathscr{D}_{\ast ,q}(\mathbb{R}_{q}^2)$ supported in $B_{(0,a)}$.\\
The $q$-Jackson integrals are defined by (see \cite{Jac})
{\small\begin{equation}\label{int1}
\int_0^a{f(x)d_qx} =(1-q)a \sum_{n=0}^{\infty}q^nf(aq^n),\quad \int_a^b{f(x)d_qx} =\int_0^b{f(x)d_qx}-\int_0^a{f(x)d_qx},
\end{equation}}
\begin{equation}\label{int2}
\int_{\mathbb{R}_{q,+}}f(x)d_qx=\int_0^{\infty}f(x)d_qx =(1-q)\sum_{n=-\infty}^{\infty}q^nf(q^n)
\end{equation}
and
\begin{equation}\label{int2019}
\int_{\mathbb{R}_{q}}f(x)d_qx=\int_{-\infty}^{\infty}f(x)d_qx =(1-q)\sum_{n=-\infty}^{\infty}q^nf(q^n) +(1-q)\sum_{n=-\infty}^{\infty}q^nf(-q^n),
\end{equation}
provided the sums converge absolutely.
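As a quick sanity check on (\ref{int1}), the following small numerical sketch (added here for illustration only; the function name \texttt{jackson\_integral} is ours and is not part of the original text) truncates the defining series and compares $\int_0^1 x^2\,d_qx$ with its closed form $(1-q)/(1-q^3)=1/[3]_q$, which tends to $1/3$ as $q\rightarrow 1^-$.
\begin{verbatim}
import numpy as np

def jackson_integral(f, a, q, n_terms=200):
    # int_0^a f(x) d_q x = (1 - q) * a * sum_{n >= 0} q^n f(a q^n),
    # truncated after n_terms terms (the series converges absolutely here).
    n = np.arange(n_terms)
    return (1 - q) * a * np.sum(q ** n * f(a * q ** n))

for q in (0.5, 0.9, 0.99):
    approx = jackson_integral(lambda x: x ** 2, 1.0, q)
    exact = (1 - q) / (1 - q ** 3)   # = 1/[3]_q
    print(q, approx, exact)
\end{verbatim}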
The following simple result, giving $q$-analogues of the integration by parts theorem, can be verified by direct calculation.
\begin{lemme}\label{ppar}~~\\
1) For $a>0$, if $\displaystyle \int_{-a}^a (\partial_q f)(x)g(x)d_qx$ exists, then
{\small \begin{equation*}
\int_{-a}^a (\partial_q f)(x)g(x)d_qx=2\left[f_e(q^{-1}a)g_o(a)+f_o(a)g_e(q^{-1}a)\right]-\int_{-a}^a f(x)(\partial_q g)(x)d_qx.
\end{equation*}}
2) If $\displaystyle \int_{-\infty}^\infty (\partial_q f)(x)g(x)d_qx$ exists, then
\begin{equation}\label{006}
\int_{-\infty}^\infty (\partial_q f)(x)g(x)d_qx=-\int_{-\infty}^\infty f(x)(\partial_q g)(x)d_qx.
\end{equation}
\end{lemme}
Using the $q$-Jackson integrals, we note, for $p>0$ and $\alpha \geq-\frac{1}{2}$,\\
$\bullet$ $\displaystyle L_{\alpha ,q}^p(\mathbb{R}_{q,+}^{2})$ the Banach space of functions such that
$$ \|f\|_{L_{\alpha ,q}^p(\mathbb{R}_{q,+}^{2})}=\left(\int_{\mathbb{R}_{q,+}^{2}}|f(x_1,x_2)|^pd\mu_{\alpha,q}(x_1,x_2)\right)^{\frac{1}{p}}<\infty , \text { if } p <\infty ,$$
where
\begin{equation*}
d\mu_{\alpha,q}(x_1,x_2)=x_{2}^{2\alpha+1}d_qx_1d_qx_2,
\end{equation*}
and
$$ \|f\|_{L_q^\infty(\mathbb{R}_{q,+}^{2})}=\sup_{x\in\mathbb{R}_{q,+}^{2}}|f(x)|<\infty. $$
\begin{lemme}\label{l23}
Let $f \in \mathscr{D}_{R,q}(\mathbb{R}_{q}^2) $. Then, for $n_1$, $n_2$, $p_1$, $p_2$ and $p\in\mathbb{N}$ such that $ p_1\leq p<n_1 $ and $p_2\leq p<n_2 $, the function $ (t_1,t_2)\mapsto D_{q}^{(p_1,p_2)}\Big(t_1^{n_1}t_2^{n_2} f\Big)$ belongs to $\mathscr{D}_{\frac{R}{q^{p}},q}(\mathbb{R}_{q}^2)$ and
\begin{equation}
\Big\Arrowvert D_q^{(p_1,p_2)}\Big(t_1^{n_1}t_2^{n_2} f\Big)\Big\Arrowvert_{L_q^\infty(\mathbb{R}_{q,+}^{2})}\leq C(R,p)\Big(\frac{R}{q^{2p}}\Big)^{n_1+n_2}\frac{(q^{n_1};q^{-1})_p(q^{n_2};q^{-1})_p}{(1-q)^{2p}}.
\end{equation}
\end{lemme}
\begin{proof}
Using Lemma \ref{ld}, we have
$$ D_q^{(p_1,p_2)}\Big(t_1^{n_1}t_2^{n_2} f\Big)(t_1,t_2)=t_1^{n_1-p_1}t_2^{n_2-p_2}(-1)^{p_1+p_2}\frac{\left(q^{-n_1} ; q\right)_{p_1}\left(q^{-n_2} ; q\right)_{p_2}}{(1-q)^{p_1+p_2}} f_{p_1,p_2}(t_1,t_2),$$
where $f_{p_1,p_2}$ is a function satisfying $\operatorname{supp} (f_{p_1,p_2}) \subset B(0,q^{-p}R)$ and
$$ \left\|f_{p_1,p_2}\right\|_{L_q^\infty(\mathbb{R}_{q,+}^{2})} \leq C_{p_1,p_2} \sum_{k_1=0}^{p_1}\sum_{k_2=0}^{p_2}\left\|D^{(k_1,k_2)} f\right\|_{L_q^\infty(\mathbb{R}_{q,+}^{2})}. $$
So,
\begin{align*}
&\Big\Arrowvert D_q^{(p_1,p_2)}\Big(t_1^{n_1}t_2^{n_2} f\Big)\Big\Arrowvert_{L_q^\infty(\mathbb{R}_{q,+}^{2})}\\
&\leq C_{p_1,p_2} \sum_{k_1=0}^{p_1}\sum_{k_2=0}^{p_2}\left\|D_q^{(k_1,k_2)} f\right\|_{L_q^\infty(\mathbb{R}_{q,+}^{2})}(q^{-p}R)^{n_1+n_2-p_1-p_2} \Big|\frac{\left(q^{-n_1} ; q\right)_{p_1}\left(q^{-n_2} ; q\right)_{p_2}}{(1-q)^{p_1+p_2}} \Big|\\
& \leq C_{p_1,p_2} \sum_{k_1=0}^{p_1}\sum_{k_2=0}^{p_2}\left\|D_q^{(k_1,k_2)} f\right\|_{L_q^\infty(\mathbb{R}_{q,+}^{2})}(q^{-p}R)^{n_1+n_2-p_1-p_2} \Big|\frac{\left(q^{-n_1} ; q\right)_{p}\left(q^{-n_2} ; q\right)_{p}}{(1-q)^{2p}} \Big|\\
&= C_{p_1,p_2} \sum_{k_1=0}^{p_1}\sum_{k_2=0}^{p_2}\left\|D_q^{(k_1,k_2)} f\right\|_{L_q^\infty(\mathbb{R}_{q,+}^{2})}(q^{-2p}R)^{n_1+n_2}(q^{-p}R)^{-p_1-p_2} q^{p(p-1)} \frac{\left(q^{n_1} ; q^{-1}\right)_{p}\left(q^{n_2} ; q^{-1}\right)_{p}}{(1-q)^{2p}} \\
& \leq C(R,p)\Big(\frac{R}{q^{2p}}\Big)^{n_1+n_2}\frac{(q^{n_1};q^{-1})_p(q^{n_2};q^{-1})_p}{(1-q)^{2p}},
\end{align*}
where
$$C(R,p)=\max_{p_1,p_2\leq p} \Big\{C_{p_1,p_2} \sum_{k_1=0}^{p_1}\sum_{k_2=0}^{p_2}\left\|D_q^{(k_1,k_2)} f\right\|_{L_q^\infty(\mathbb{R}_{q,+}^{2})}(q^{-p}R)^{-p_1-p_2} q^{p(p-1)}\Big\}.$$
\end{proof}
\begin{coro} \label{RD}
Let $f \in \mathscr{D}_{R,q}(\mathbb{R}_{q}^2) $. Then, for $p,n ,i$ and $j\in\mathbb{N}$ such that $ i,j\leq p $, there exists $C_{p,R}>0$ such that
\begin{equation}
\Big\Arrowvert D_q^{(2i,2j)}\Big(\|t\|^{2n} f\Big)\Big\Arrowvert_{L_q^\infty(\mathbb{R}_{q,+}^{2})}\leq C_{p,R}\Big(\frac{R}{q^{4p}}\Big)^{2n}\Big(\frac{(q^{2n};q^{-1})_{2p}}{(1-q)^{2p}}\Big)^{2}.
\end{equation} \end{coro} \begin{proof} Thanks to Lemma \ref{l23}, we have \begin{equation} \begin{split} \Big\Arrowvert D_q^{(2i,2j)}\Big(\|t\|^{2n} f\Big)\Big\Arrowvert_{L_q^\infty(\mathbb{R}_{q,+}^{2})} &\leq\sum_{n_1+n_2=2n}\binom{2n}{n_1}\Big\Arrowvert D_q^{(2i,2j)}\Big(t_1^{2n_1}t_2^{2n_2} f\Big)\Big\Arrowvert_{L_q^\infty(\mathbb{R}_{q,+}^{2})}\\ &\leq C(R,p)\Big(\frac{R}{q^{4p}}\Big)^{2n}\sum_{n_1+n_2=2n}\binom{2n}{n_1} \frac{(q^{2n_1};q^{-1})_{2p}(q^{2n_2};q^{-1})_{2p}}{(1-q)^{4p}}. \end{split}\end{equation} Hence, the fact that $((q^{n};q^{-1})_p)_{n>p}$ is an increasing sequence achieves the proof. \end{proof} \section{The $q$-Weinstein transform} In \cite{youss}, a $q$-analogue of the Weinstein operator and its associated Fourier transform are introduced and studied. In this section, we collect some of their basic properties. \noindent$\bullet$ The $q$-Weinstein operator is given by \begin{equation*} \triangle_{\alpha,q}=\partial_{q,x}^{2}+\frac{1}{ |y|^{2\alpha+1}}\partial_{q,y}( |y|^{2\alpha+1}\partial_{q,y})=\partial_{q,x}^{2}+\mathscr{B}_{{\alpha },q},\quad \alpha\geq-\frac{1}{2}, \end{equation*} where $\mathscr{B}_{{\alpha },q}$ is the $q$-Bessel operator defined in \cite{pwb}.\\ \noindent$\bullet$ For all $f$, $g\in \mathscr{S}_{\ast,q} ({\mathbb{R}_q^{2}})$, we have \begin{equation}\label{labla} \int_{0}^{+\infty}\int_{-\infty}^{+\infty}\triangle_{{\alpha },q}f(x,y)g(x,y) y^{2\alpha+1} d_{q}x d_{q}y=\int_{0}^{+\infty}\int_{-\infty}^{+\infty}\triangle_{{\alpha },q}g(x,y)f(x,y) y^{2\alpha+1} d_{q}x d_{q}y. \end{equation} That is $\triangle_{{\alpha },q}$ is self-adjoint.\\ \noindent $\bullet$ For all $\lambda=(\lambda_{1},\lambda_{1}) \in \mathbb{C}^2,$ the function \begin{equation}\label{ej} \Lambda^{\alpha}_{q,\lambda}(x)=\Lambda^{\alpha}_{q}(\lambda_{1}x_1,\lambda_{2}x_2)=e(-i\lambda_{1}x_1;q^{2})j_{\alpha}(\lambda_{2}x_2;q^{2}) \end{equation} is the unique solution of the $q$-differential-difference equation: \begin{equation*} \left\{ \begin{array}{lll} \mathscr{B}_{{\alpha },q}u(x_1,x_2) & = & -\lambda_{2} ^{2}u(x_1,x_2), \\ &\\ \partial_{q,x_1}^{2}u(x_1,x_2)& = &-\lambda_{1} ^{2}u(x_1,x_2),\\ &\\ u(0,0) =1, & &\partial_{q,x_2}u\left( 0,0\right) = 0,\qquad \partial_{q,x_1}u(0,0) = -i\lambda_{1}. \end{array} \right. \end{equation*} The function $\Lambda^{\alpha}_{q,\lambda}$ satisfies the following properties (see\cite{youss} and \cite{youss 1}) \begin{enumerate} \item Let $\lambda=(\lambda_1,\lambda_2) \in \mathbb{C}^2$. For all $n ,p\in \mathbb{N}, $ we have \begin{equation}\label{06} \partial_{q}^n\left[\mathscr{B}_{{\alpha },q}^p(\Lambda^{\alpha}_{q,\lambda})\right]=\mathscr{B}_{{\alpha },q}^p\left[\partial_{q}^n(\Lambda^{\alpha}_{q,\lambda})\right]=(-i\lambda_1)^n(-i\lambda_2)^{2p}\Lambda^{\alpha}_{q,\lambda}. \end{equation} \item For $n \in \mathbb{N}$ and $ \lambda \in \mathbb{R}^2$,we have \begin{equation}\label{di} \forall x \in \mathbb{R}_{q}^{2} , \quad \triangle_{{\alpha },q}^n(\Lambda^{\alpha}_{q,\lambda})(x)=(-1)^n\parallel\lambda\parallel^{2n}\Lambda^{\alpha}_{q,\lambda}(x). \end{equation} \item For $\lambda=(\lambda_1,\lambda_2) \in \mathbb{C}^2$ and $\beta=(\beta_1,\beta_2)\in \mathbb{N}^2,$ we have \begin{equation}\label{6} \forall x \in \mathbb{R}_{q}^{2} , \quad\left|D_q^{\beta}\left(\Lambda^{\alpha}_{q,\lambda}\right)(x)\right|\leq\frac{4|\lambda_1|^{\beta_1}|\lambda_2|^{\beta_2}}{(q;q)_\infty^2} . 
\end{equation} In particular,
\begin{equation}\label{7}
\forall x \in \mathbb{R}_{q}^{2} , \quad \left|\Lambda^{\alpha}_{q,\lambda}(x)\right|\leq\frac{4}{(q;q)_\infty^2}.
\end{equation}
\item For all $\lambda \in \mathbb{R}_{q}^{2},$ we have $\Lambda^{\alpha}_{q,\lambda} \in \mathscr{S}_{\ast,q} ({\mathbb{R}_q^{2}})$.\\
\item For all $x$, $y\in \mathbb{R}_{q,+}^{2} $, we have
\begin{equation}\label{pro}
\int_{\mathbb{R}_{q,+}^{2}}\Lambda^{\alpha}_{q,\lambda}(x){\Lambda^{\alpha}_{q,-\lambda}(y)}d_{q}\mu_{\alpha}(\lambda) =\left[2(1+q)^{\alpha-{\frac{1}{2}}}\Gamma_{q^{2}}(\tfrac{1}{2})\Gamma_{q^{2}}(\alpha+1) \right]^{2}\delta^{\alpha}_{x}(y),
\end{equation}
where $\delta^{\alpha}_{x}$, $\alpha\geq-\frac{1}{2}$, denotes the weighted Dirac measure at $x \in \mathbb{R}_{q,+}^2 $ defined by
\begin{equation*}
\forall y \in\mathbb{R}_{q,+}^{2}, \quad \delta^{\alpha}_{x}(y)=\left\{ \begin{array}{ccc} \left[ (1-q)^{2}|x_{1}|x_{2}^{2\alpha+2}\right]^{-1}, &\mbox{if}& x=y, \\ 0, & &\mbox{otherwise}. \end{array} \right.
\end{equation*}
\item The function $(\lambda,z)\mapsto \Lambda^{\alpha}_{q,\lambda}(z)$ has a unique extension to $\mathbb{C}^2 \times \mathbb{C}^2$ and we have
\begin{equation*}
\forall z,\lambda\in \mathbb{C}^2, \quad \Lambda^{\alpha}_{q,\lambda}(z)= \sum_{n,m=0}^{\infty} v_{n,m}(-i\lambda_1z_1)^{n}(i\lambda_2z_2)^{2m},
\end{equation*}
where $$\forall n,m \in \mathbb{N},\quad v_{n,m}=a_n\, b_m . $$\\
\item For all $x \in \mathbb{R}_{q,+}^{2} \cap[-a, a]^{2}$, we have
$$ \forall z \in \mathbb{C}^2, \quad \left|\Lambda_{q, z}^{\alpha}(x)\right| \leq 4 e^{{2}a({1+\sqrt{q}})\|z\|}. $$
\end{enumerate}
\begin{lemme}\label{ldd}
Let $f \in \mathscr{D}_{*,R,q}(\mathbb{R}_{q}^2)$. Then, for every $k \in\mathbb{N}$, we have
\begin{equation}
\Big\Arrowvert \triangle_{{\alpha },q}^k f\Big\Arrowvert_{L_q^\infty(\mathbb{R}_{q,+}^{2})}\leq C_{k}\max_{p_1,p_2\leq k} \Big\Arrowvert D_q^{2(p_1,p_2)} f\Big\Arrowvert_{L_q^\infty(\mathbb{R}_{q,+}^{2})}.
\end{equation}
\end{lemme}
\begin{proof}
Let $f \in \mathscr{D}_{*,R,q}(\mathbb{R}_{q}^2)$. From Lemma \ref{ld}, we have
\begin{align*}
(\triangle_{\alpha,q}f)(x,y)&=\partial_{x,q}^2f(x,y)+q^{2\alpha+1}\partial_{y,q}^2f(x,y)-\frac{q[-2\alpha-1]_{q}}{y}\partial_{y,q}f(x,y)\\
&=\partial_{x,q}^2f(x,y)+q^{2\alpha+1}\partial_{y,q}^2f(x,y)-{q[-2\alpha-1]_{q}}\int_{0}^{1}\partial_{y,q}^{2}f(x,yt)d_qt.
\end{align*} So, for $ k \in\mathbb{N}$, $k\geq 1$, $\triangle_{\alpha,q}^{k}f \in \mathscr{D}_{*,R,q}(\mathbb{R}_{q}^2)$ and we have \begin{align*} (\triangle_{\alpha,q}^k f)(x,y)&=\sum_{m=0}^{k}\left(\begin{array}{l}{k} \\ {m}\end{array}\right) \Big(D_{q}^{2(k-m,k)} f\Big)(x,y) +\sum_{j=1}^{2k-1} \int_{0}^{1} \cdots \int_{0}^{1} P_{2k-1}\left(t_{1}, \ldots, t_{j}\right)\\ &\times \Big(D_{q}^{2(k-m,k)} f\Big)\left(x,\mathrm{y.t}_{1} \cdots \mathrm{t}_{\mathrm{j}}\right) \mathrm{dt}_{1} \cdots \mathrm{dt}_{\mathrm{j}}+\int_{0}^{1} \cdots \int_{0}^{1} \mathrm{Q}_{2\mathrm{k}-1}\left(\mathrm{t}_{1}, \ldots, \mathrm{t}_{\mathrm{k}-1}\right)\\ &\times \Big(D_{q}^{2(k-m,k)} f\Big)\left({x,y.t}_{1} \cdots t_{\mathrm{k}}\right) t_{1} \cdots t_{\mathrm{k}}, \end{align*} where $\mathrm{P}_{\mathrm{2k}-1}\left(t_{1}, \ldots, t_{\mathrm{j}}\right), \mathrm{j}=1,2, \ldots, \mathrm{2k}-1,$ and $\mathrm{Q}_{\mathrm{2k}-1}\left(t_{1}, \ldots, t_{\mathrm{2k}-1}\right)$ are polynomials of degree at most $\mathrm{2k}-1$ with respect to each variable.\\ Thus there exists a positive constant $C_k$ such that \begin{equation} \Big|\triangle_{{\alpha },q}^k f(x,y)\Big|\leq C_{k}\max_{p_1,p_2\leq k} \Big\Arrowvert D_q^{2(p_1,p_2)}\Big( f\Big)\Big\Arrowvert_{L_q^\infty(\mathbb{R}_{q,+}^{2})} \end{equation} \end{proof} The $q$-Weinstein transform $\mathscr{F}^{\alpha ,q}_W$ is defined on $L_{\alpha, q}^{1}\left(\mathbb{R}_{q,+}^{2}\right)$ by : \begin{equation}\label{FB} \mathscr{F}^{\alpha ,q}_W(f)(\lambda) = K_{\alpha ,q} \displaystyle\int_{\mathbb{R}_{q,+}^{2}} f(x,y)\Lambda^{\alpha}_{q,\lambda}(x,y)y^{2\alpha +1}d_qxd_qy \end{equation} $where$ \begin{equation}\label{c} K_{\alpha ,q} =\frac{(1+q)^{\frac{1}{2}-\alpha}}{2\Gamma_{q^2}\left( \frac{1}{2}\right)\Gamma_{q^2}\left( \alpha+1\right)}. \end{equation} The $q$-Weinstein transformation satisfies the following properties \begin{enumerate} \item For all $f \in L_{\alpha ,q}^1 (\mathbb{R}_{q,+}^{2})$, $ \mathscr{F}_W^{\alpha ,q}(f)\in L_{q}^\infty(\mathbb{R}_{q,+}^{2})$, and we have \begin{equation}\label{3.12} \|\mathscr{F}_W^{\alpha ,q}(f)\|_{L_{q}^\infty(\mathbb{R}_{q,+}^{2})} \leq \frac{4K_{\alpha ,q}}{(q;q)^2_\infty} \| f \|_{L_{\alpha ,q}^1 (\mathbb{R}_{q,+}^{2})}\end{equation} and \begin{equation*} \lim_{\|\lambda\|\rightarrow \infty}\mathscr{F}_W^{\alpha ,q}(f)(\lambda)=0. \end{equation*} \item Let $f \in \mathscr{S}_{\ast,q} ({\mathbb{R}_q^{2}})$. 
According to relations (\ref{06}), \eqref{di} and integration by parts, we have \begin{equation}\label{10} \mathscr{F}_W^{\alpha ,q}( \partial_{q}^n \mathscr{B}_{{\alpha },q}^p(f))(\lambda )=(i\lambda_1)^{n}(i\lambda_2)^{2p} \mathscr{F}_W^{\alpha ,q}(f)(\lambda ), \end{equation} \begin{equation}\label{9} \mathscr{F}_W^{\alpha ,q}(x_1^{n}x_2^{2p}f)(\lambda )= i^{n+2p}\partial_{q}^n \mathscr{B}_{{\alpha },q}^p\left( \mathscr{F}_W^{\alpha ,q}(f)\right)(\lambda ), \end{equation} \begin{equation}\label{FDlab} \mathscr{F}_W^{\alpha ,q}(\triangle_{ \alpha ,q}f)(\lambda )= -\parallel\lambda\parallel^2 \mathscr{F}_W^{\alpha ,q}(f)(\lambda ),\end{equation} \begin{equation*} \mathscr{F}_W^{\alpha ,q}(\parallel .\parallel^{2} f)(\lambda )= -\triangle_{{\alpha },q}\left( \mathscr{F}_W^{\alpha ,q}(f)\right)(\lambda ).\end{equation*} \item For $f,g \in L_{\alpha ,q}^1 (\mathbb{R}_{q,+}^{2}),$ we have \begin{equation}\label{symD} \int_{\mathbb{R}_{q,+}^{2}}\mathscr{F}_W^{\alpha ,q}(f)(\lambda)g(\lambda )d\mu_{\alpha,q}(\lambda) = \displaystyle\int_{\mathbb{R}_{q,+}^{2}}f(\lambda)\mathscr{F}_W^{\alpha ,q}(g)(\lambda)\mu_{\alpha,q}(\lambda). \end{equation} \end{enumerate} (See\cite{youss}).\\ \begin{theorem}~\\ i) \underline{Plancherel formula} \\ For $\displaystyle \alpha\geq -1/2$, the $q$-Weinstein transform $\mathscr{F}_W^{\alpha, q}$ is an isomorphism from $\mathscr{S}_{\ast,q} ({\mathbb{R}_q^{2}})$ onto itself. Moreover, for all $f\in\mathscr{S}_{\ast,q} ({\mathbb{R}_q^{2}})$, we have \begin{equation}\label{pldun} \|\mathscr{F}_W^{\alpha,q}(f)\|_{L_{\alpha ,q}^2 (\mathbb{R}_{q,+}^{2})}=\|f\|_{L_{\alpha ,q}^2 (\mathbb{R}_{q,+}^{2})}. \end{equation} ii) \underline{Plancheral theorem} \\ The $q$-Weinstein transform can be uniquely extended to an isometric isomorphism on $L_{\alpha,q}^2(\mathbb{R}_{q,+}^{2})$. ~~Its inverse transform ${(\mathscr{F}_W^{\alpha ,q})}^{-1}$ is given by: \begin{equation}\label{11} {(\mathscr{F}_W^{\alpha ,q})}^{-1}(f)(x) = {K_{\alpha ,q}}\displaystyle\int_{\mathbb{R}_{q,+}^{2}} f(\lambda) \Lambda _{q,\lambda} ^{\alpha }(-x)d\mu_{\alpha,q}(\lambda). \end{equation} \end{theorem} \section{REAL PALEY-WIENER THEOREM FOR $ \displaystyle L_{\alpha ,q}^2(\mathbb{R}_{q,+}^{2})$-FUNCTIONS} For $a \in \mathbb{R}_{q,+},$ we introduce the Paley-Wiener space $P W_{q, \alpha, a}$ as \begin{equation} P W_{q, \alpha, a}=\left\{f \in \mathscr{E}_{\ast ,q}(\mathbb{R}_q^{2}): \forall n \in \mathbb{N},\triangle_{ \alpha ,q}^{n} f \in L_{\alpha ,q}^p(\mathbb{R}_{q,+}^{2})\text { and } \lim _{n \rightarrow+\infty}\left\|\triangle_{ \alpha ,q}^{n} f\right\|^{\frac{1}{2n}}_{L_{\alpha ,q}^2(\mathbb{R}_{q,+}^{2})} \leq a\right\} \end{equation} \text { The main result in this section will need the following lemma. 
} \begin{lemme}\label{2} Let $F$ be a function defined on $\mathbb{R}_{q,+}^{2},$ such that $ x\longmapsto\lVert x\lVert^{2n} F(x) \in L_{\alpha ,q}^2(\mathbb{R}_{q,+}^{2})$ for all $n \in \mathbb{N}.$ Then \begin{equation} \lim _{n \rightarrow+\infty} \Big\lVert\lVert x\lVert^{2n} F\Big\lVert_{L_{\alpha ,q}^2(\mathbb{R}_{q,+}^{2})}^{\frac{1}{2n}}=\sup \Big\{\|x\|^2,\quad x \in \text {supp} \operatorname{(F)}\cap \mathbb{R}_{q,+}^{2}\Big\} \end{equation} \end{lemme} \begin{proof} The case $F=0$ is trivial, since in this case $supp (F)=\emptyset .$\\ Suppose now that $F \neq 0$ and define a measure $m_{\alpha,q}$ on $\mathbb{R}_{q,+}^2$ by $$ d m_{\alpha,q}=\|F\|_{L_{\alpha ,q}^2(\mathbb{R}_{q,+}^{2})}^{-2}|F(x)|^{2} d\mu_{\alpha,q} (x) $$ We have $m_{\alpha,q}\left(\mathbb{R}^{2}_{q,+}\right)=1$ and $$ \Big\lVert \left\|x\right\|^{2n} F\Big\lVert_{L_{\alpha ,q}^2(\mathbb{R}_{q,+}^{2})}^{\frac{1}{2n}}=\|F\|_{L_{\alpha ,q}^2(\mathbb{R}_{q,+}^{2})}^{\frac{1}{2n}}\Big\lVert\|x\|^2\Big\lVert_{L^{2 n}\left(\mathbb{R}_{q,+}^2, d m_{\alpha,q}\right)}^2. $$ Moreover, $$ \lim _{n \rightarrow+\infty}\Big\lVert\|x\|^2\Big\lVert^2_{L^{2 n}\left(\mathbb{R}_{q,+}^2, d m_{\alpha,q}\right)}=\Big\lVert\|x\|^2\Big\lVert_{L^{\infty}\left(\mathbb{R}_{q,+}^2, d m_{\alpha,q}\right)} $$ and \begin{align*} \Big\lVert\|x\|^2\Big\lVert_{L^{\infty}\left(\mathbb{R}_{q,+}^2, d m_{\alpha,q}\right)}&=\sup \Big\{\|x\|^2,\quad x \in { supp(m_{\alpha,q}) }\Big\}\\&=\sup \Big\{\|x\|^2,\quad x \in {supp} \operatorname{(F)}\cap \mathbb{R}_{q,+}^{2}\Big\}. \end{align*} Finally, the fact $$ \lim _{n \rightarrow+\infty}\|F\|_{L_{\alpha ,q}^2(\mathbb{R}_{q,+}^{2})}^{\frac{1}{2n}}=1 $$ gives the result. \end{proof} \textbf{Notation :} For $a>0,$ we denote by $L_{\alpha, q, a}^{2}$ the space of functions in $L_{\alpha, q}^{2}\left(\mathbb{R}_{q,+}^{2}\right)$ with compact support in $B_{(0,a)} .$ \begin{theorem} For any $a\in \mathbb{R}_{q,+},$ the q-Weinstein transform $\mathscr{F}_{W}^{\alpha, q}$ is bijective from $L_{\alpha, q, a}^{2}$ onto $P W_{q, \alpha, a} .$ \end{theorem} \begin{proof} Let $f\in L_{\alpha, q, a}^{2}$. \\ From the properties of $\mathscr{F}_{W}^{\alpha, q},$ we get $\mathscr{F}_{W}^{\alpha, q}(f) \in L_{\alpha, q}^{2}(\mathbb{R}_{q,+}^2)$ and by definition of $f,$ we have for all $n \in \mathbb{N}, t \mapsto \|t\|^{2n} f(t)$ belongs to $L_{\alpha, q}^{1}\left(\mathbb{R}_{q,+}^2\right) \cap L_{\alpha, q}^{2}\left(\mathbb{R}_{q,+}^2\right) .$ Then, a repeated application of the operator $\triangle_{ \alpha ,q} $ to the $\mathscr{F}_{W}^{\alpha, q}(f)$ gives \begin{equation} \left(\triangle_{ \alpha ,q}^{n}\mathscr{F}_{W}^{\alpha, q}( f)\right)(x)=(-1)^{n} \mathscr{F}_{W}^{\alpha, q}\left(\|t\|^{2n} f\right)(x), n=0,1,... . 
\end{equation} So, the properties of $\mathscr{F}_{W}^{\alpha, q}$ imply that $\mathscr{F}_{W}^{\alpha, q}(f) \in \mathcal{E}_{*,q}\left(\mathbb{R}_{q}^2\right)$ and, for every nonnegative integer $n,$ $\left(\Delta_{\alpha, q}^{n}\mathscr{F}_{W}^{\alpha, q}( f)\right) \in L_{\alpha, q}^{2}\left(\mathbb{R}_{q,+}^2\right) .$ Moreover, the Plancherel theorem gives \begin{equation} \left\|\Delta_{\alpha, q}^{n}\mathscr{F}_{W}^{\alpha, q} (f)\right\|_{L_{\alpha, q}^{2}\left(\mathbb{R}_{q,+}^2\right)}^{2}=\Big\lVert \|t\|^{2n} f\Big\rVert_{L_{\alpha, q}^{2}\left(\mathbb{R}_{q,+}^2\right)}^{2}=\int_{\mathbb{R}_{q,+}^2} \|t\|^{4 n}|f(t)|^{2}d\mu_{\alpha,q}(t). \end{equation} By using Lemma \ref{2}, we get \begin{equation} \lim _{n \rightarrow+\infty}\left\|\left(\Delta_{\alpha, q}^{n} \mathscr{F}_{W}^{\alpha, q}f\right)\right\|_{L_{\alpha, q}^{2}\left(\mathbb{R}_{q,+}^2\right)}^{\frac{1}{2n}}=\sup \Big\lbrace \|\lambda\|^2,\quad {\lambda \in \operatorname{supp}(f) \cap \mathbb{R}_{q,+}^2}\Big\rbrace \leq a, \end{equation} and hence $\mathscr{F}_{W}^{\alpha, q}(f) \in P W_{q, \alpha, a}$.\\ Conversely, let $f \in P W_{q, \alpha, a}$. By the inversion formula we have \begin{equation} f(x)={K_{\alpha, q}} \int_{\mathbb{R}_{q,+}^2}( \mathscr{F}_{W}^{\alpha, q})^{-1}(f)(t) \Lambda _{q,x} ^{\alpha }(t)d\mu_{\alpha,q}(t). \end{equation} So for all $n\in\mathbb{N}$ we have \begin{equation} \left(\Delta_{\alpha, q}^{n} f\right)(x)=(-1)^n{K_{\alpha, q}} \int_{\mathbb{R}_{q,+}^2} \|t\|^{2n}( \mathscr{F}_{W}^{\alpha, q})^{-1}(f)(t) \Lambda _{q,x} ^{\alpha }(t)d\mu_{\alpha,q}(t). \end{equation} Using the Plancherel formula, we obtain $$\Big\lVert\left\|t\right\|^{2n} ( \mathscr{F}_{W}^{\alpha, q})^{-1}(f)\Big\rVert _{L^2_{\alpha, q}(\mathbb{R}_{q,+}^2)}=\left\|\Delta_{\alpha, q}^{n}(f)\right\|_{L^2_{\alpha, q}(\mathbb{R}_{q,+}^2)}<\infty.$$ Then, for all $n \in \mathbb{N}$, $t \mapsto \|t\|^{2n} ( \mathscr{F}_{W}^{\alpha, q})^{-1}(f)(t)$ belongs to $L_{\alpha, q}^{2}\left(\mathbb{R}_{q,+}^2\right).$ \\ Finally, by Lemma \ref{2}, we deduce that $( \mathscr{F}_{W}^{\alpha, q})^{-1}(f)\in L_{\alpha, q, a}^{2}.$ \end{proof} \begin{remark} We have, for all $a \in \mathbb{R}_{q,+}$, $$ P W_{q, \alpha, a}=\left\{f: \forall x \in \mathbb{R}_{q}^{2},\ f(x)=\frac{K_{\alpha, q}}{2} \int_{0}^{a}\int_{-a}^{a} g(t) \Lambda^{\alpha}_{q,x}(t)\,d\mu_{\alpha,q}(t), \quad g \in L_{\alpha, q, a}^{2}\right\}. $$ Hence, any element of $P W_{q, \alpha, a}$ is extendable to an entire function on $\mathbb{C}^2$ of exponential type.
\end{remark} \section{REAL PALEY-WIENER THEOREM FOR FUNCTIONS IN THE $q$-SCHWARTZ SPACE} For $m \in \mathbb{N},$ we define the real Paley-Wiener space $P W_{\alpha, {q}}^{m}$ by \begin{equation} P W_{\alpha, q}^{m}=\left\{f \in \mathscr S_{*,q}\left(\mathbb{R}_{q}^{2}\right): \exists a \in \mathbb{R}_{+} \text { such that } \sup _{x\in \mathbb{R}_{q}^{2},\, n \in \mathbb{N},\, n \geq m} a^{-2n} B_{n, m, q}(1+\|x\|^{2})^{m}\left|\Delta_{\alpha, q}^{n} f(x)\right|<\infty\right\}, \end{equation} where $B_{n, m, q}=\Big(\frac{(1-q)^{2m}}{\left(q^{2n} ; q^{-1}\right)_{2m}}\Big)^{2}.$ \begin{theorem} For $m \in \mathbb{N}, m> \alpha+\frac{3}{2},$ the $q$-Weinstein transform $\mathscr{F}_{W}^{\alpha, q}$ is a bijection from $\mathscr{D}_{*,q}\left(\mathbb{R}_{q}^2\right)$ onto $P W_{\alpha, q}^{m}.$ \end{theorem} \begin{proof} Let $f \in P W_{\alpha, q}^{m} .$ There exist a positive real number $a$ and a constant $C_{a, m}$ such that for all $x \in \mathbb{R}_{q}^2$ and all integers $n \geq m$ $$ \left|\Delta_{\alpha, q}^{n} f(x)\right| \leq C_{a, m} a^{2n} \frac{1}{B_{n, m, q}} \frac{1}{(1+\|x\|^2)^{m}}. $$ Consider $x \in \mathbb{R}_{q,+}^2$ outside of $B_{(0,a)}.$ We have $$ \Big(\mathscr{F}_{W}^{\alpha, q}\Big)^{-1}\left(\Delta_{\alpha, q}^{n} f\right)(x)=(-1)^n \|x\|^{2n} \Big(\mathscr{F}_{W}^{\alpha, q}\Big)^{-1}(f)(x)=(-1)^n \|x\|^{2n} \mathscr{F}_{W}^{\alpha, q}(f)(-x). $$ So \begin{align*} \Big|\mathscr{F}_{W}^{\alpha, q}(f)(-x)\Big|&=\Big|\Big(\frac{-1}{\|x\|^2}\Big)^{n} {K_{\alpha, q}} \int_{\mathbb{R}_{q,+}^2} \Delta_{\alpha, q}^{n} f(t) \Lambda^{\alpha}_{q,-x}(t)d\mu_{\alpha,q}(t)\Big|\\ & \leq \frac{ 4C_{a, m}K_{\alpha, q}}{(q;q)_\infty^2B_{n, m, q}} \Big(\frac{a}{\|x\|}\Big)^{2n} \int_{\mathbb{R}_{q,+}^2} \frac{t_2^{2\alpha+1}}{(1+\|t\|^2)^{m}}d_qt_1d_qt_2. \end{align*} Since $\|x\|>a,$ this last quantity clearly approaches zero as $n$ tends to $+\infty$; it follows that $\operatorname{supp}\Big(\big(\mathscr{F}_{W}^{\alpha, q}\big)^{-1}(f)\Big) \subset B_{(0,a)} .$ Finally, since $\mathscr{F}_{W}^{\alpha, q}$ is an isomorphism from $\mathscr S_{*,q}\left(\mathbb{R}_{q}^2\right)$ onto itself and $f \in \mathscr{S}_{*,q}\left(\mathbb{R}_{q}^2\right),$ we obtain $\left(\mathscr{F}_{W}^{\alpha, q}\right)^{-1}(f) \in \mathscr{S}_{*,q}\left(\mathbb{R}_{q}^2\right),$ which implies that $\left(\mathscr{F}_{W}^{\alpha, q}\right)^{-1}(f) \in \mathscr{D}_{*,a,q}(\mathbb{R}_{q}^2) \subset \mathscr{D}_{*,q}\left(\mathbb{R}_{q}^2\right)$.
Conversely, let $f \in \mathscr{D}_{*,q}\left(\mathbb{R}_{q}^2\right) .$ There exists then $R \in \mathbb{R}_{q,+}$ such that $\operatorname{supp}(f) \subset B_{(0,R)}.$ We have, for $x \in \mathbb{R}_{q}^2$ and every integer $n \geq m$, $$ \Delta_{\alpha, q}^{n}\left(\mathscr{F}_{W}^{\alpha, q}(f)\right)(x)=(-1)^{n} {K_{\alpha, q}} \int_{\mathbb{R}_{q,+}^2} \|t\|^{2n} f(t) \Lambda^{\alpha}_{q,x}(t)d\mu_{\alpha,q}(t). $$ Then, from the properties of the $q$-Weinstein operator $\Delta_{\alpha, q},$ we have for $p \leq m$ \begin{align*} \Big| \|x\|^{2p}\Delta_{\alpha, q}^{n}\left(\mathscr{F}_{W}^{\alpha, q}(f)\right)(x)\Big|&=\Big|(-1)^{n+p} {K_{\alpha, q}} \int_{\mathbb{R}_{q,+}^2}\Delta_{\alpha, q}^{p}\Big( \|t\|^{2n} f(t)\Big) \Lambda^{\alpha}_{q,x}(t)d\mu_{\alpha,q}(t)\Big|\\ &\leq \frac{4K_{\alpha, q}}{(q;q)_\infty^2} \int_{\mathbb{R}_{q,+}^2}\Big|\Delta_{\alpha, q}^{p}\Big( \|t\|^{2n} f(t)\Big) \Big|d\mu_{\alpha,q}(t). \end{align*} So, from Lemma \ref{ldd} and Corollary \ref{RD}, there exists a positive constant $C_{p,R,\alpha}$, independent of $n$, such that \begin{align*} \Big| \|x\|^{2p}\Delta_{\alpha, q}^{n}\left(\mathscr{F}_{W}^{\alpha, q}(f)\right)(x)\Big|&\leq C_{p,R,\alpha}\Big(\frac{R}{q^{4p}}\Big)^{2n}\Big(\frac{(q^{2n};q^{-1})_{2p}}{(1-q)^{2p}}\Big)^{2} \int_{B_{+(0,q^{-2p}R)}}t_2^{2\alpha+1}d_qt_2d_qt_1\\ &\leq (q^{-2p}R)^{2\alpha+3}C_{p,R,\alpha}\Big(\frac{R}{q^{4m}}\Big)^{2n}\Big(\frac{(q^{2n};q^{-1})_{2m}}{(1-q)^{2m}}\Big)^{2}. \end{align*} Now, taking $$a=\frac{R}{q^{4m}} \quad\text{ and }\quad C_{a, m}= \sum_{p=0}^{m} \binom{m}{p} (q^{2p}R)^{2\alpha+3}C_{p,R,\alpha},$$ we obtain \begin{align*} \Big| \Delta_{\alpha, q}^{n}\left(\mathscr{F}_{W}^{\alpha, q}(f)\right)(x)\Big|(1+\|x\|^2)^{m} \leq C_{a, m} a^{2n} \frac{1}{B_{n, m, q}}. \end{align*} \end{proof} \textbf{Example:} In Section 3.2 of \cite{j} it is shown that for $\alpha>-1 / 2$ and $p \geqslant 1,$ the $q$-Bessel function $j_{\alpha+p}(\cdot\,;q^{2})$ has the $q$-integral representation of Sonine type $$ j_{\alpha+p}\left(y ; q^{2}\right)=\int_{0}^{1} W_{p-1}\left(t ; q^{2}\right) j_{\alpha}\left(y t ; q^{2}\right) t^{2 \alpha+1}d_{q} t, $$ where $$ W_{p-1}\left(x ; q^{2}\right)=\frac{\left(x^{2} q^{2} ; q^{2}\right)_{\infty}}{\left(x^{2} q^{2 p-1} ; q^{2}\right)_{\infty}}. $$ So, \begin{align*} \Lambda^{\alpha+p}_{q}(x,y)&=e(-ix;q^{2})j_{\alpha+p}(y;q^{2})\\ &=\int_{-1}^{1}\int_{0}^{1} W_{p-1}\left(t_2 ; q^{2}\right)\Lambda^{\alpha}_{q}(t_1x,yt_2) \delta_{1}(t_{1}) d\mu_{\alpha,q}(t_1,t_2), \end{align*} where \begin{equation*} \forall t \in\mathbb{R}_{q}, \quad \delta_{1}(t)=\left\{ \begin{array}{ccc} \frac{1}{1-q}, &\text{if}& t=1, \\ 0, & &\text{otherwise.} \end{array} \right. \end{equation*} As a result, $(x,y)\longmapsto\Lambda^{\alpha+p}_{q}(x,y) \in P W_{\alpha, q}^{m}$ (with $a=1$) and satisfies \begin{align*} \Big| \Delta_{\alpha, q}^{n}\left(\Lambda^{\alpha+p}_{q}\right)(x,y)\Big| \leq \frac{C_{1, m}}{B_{n, m, q}}\frac{1}{(1+\|(x,y)\|^2)^{m}}. \end{align*} \end{document}
\begin{document} \nocite{*} \title{Moment functions of higher rank on polynomial hypergroups} \begin{abstract} In this paper we consider generalized moment functions of higher order. These functions are closely related to the well-known functions of binomial type which have been investigated on various abstract structures. In our former paper we investigated the properties of generalized moment functions of higher order on commutative groups. In particular, we proved a characterization of generalized moment functions on a commutative group as the product of an exponential and the composition of a multivariate Bell polynomial with a sequence of additive functions. In the present paper we continue the study of generalized moment function sequences of higher order in a more abstract setting, namely, we consider functions defined on a hypergroup. We characterize these functions on a polynomial hypergroup in one variable by means of partial derivatives of the composition of the polynomials generating the hypergroup with an analytic function. As an example, we give an explicit formula for the generalized moment functions of rank at most two on the Tchebyshev hypergroup. \end{abstract} \section{Introduction}\label{intro} Let $G$ be a locally compact Abelian topological group. Recall that a nonzero continuous function $m\colon G\to \mathbb{C}$ is called {\it exponential}, if $$ m(x+y)=m(x)m(y) $$ holds for all $x,y$ in $G$. \vskip.2cm Exponentials can be considered as ``generalized moment functions'' of order zero. Indeed, a sequence $\varphi_n:G\to \mathbb{C}$ of continuous functions is called a {\it generalized moment function sequence}, if $\varphi_0(0)=1$ and \begin{equation} \varphi_n(x+y)=\sum_{k=0}^n \binom{n}{k} \varphi_k(x)\varphi_{n-k}(y) \label{mom1} \end{equation} holds for all $x,y$ in $G$ and $n$ in $\mathbb{N}$. In this case $\varphi_0$ is an arbitrary exponential function and the function $\varphi_k$ is a {\it generalized moment function of order} $k$. \vskip.2cm Equation \eqref{mom1} is closely related to the well-known functions of binomial type. A detailed discussion about binomial type equations in an abstract setting, which has been the motivation for the present research, can be found in \cite{MR440237}, where it was shown that if $G$ is a groupoid and $R$ is a commutative ring, then functions $\varphi_n\colon G\to R$ satisfying \eqref{mom1} for each $n$ in $\mathbb{N}$ are of the form \begin{equation} \varphi_n(t)=n!\sum_{j_1+2j_2+\dots+nj_n=n}\prod_{k=1}^n \frac{1}{j_k!}\left(\frac{a_k(t)}{k!} \right)^{j_k} \label{eq:AczelPolySol} \end{equation} for all $t$ in $G$ and $n$ in $\mathbb{N}$, with arbitrary homomorphisms $a_k$ from $G$ into $R$. \vskip.2cm The system of functional equations \eqref{mom1}, as well as the concept of moment functions, can be and has been generalized to commutative hypergroups. Formally, the group operation $+$ in \eqref{mom1} is replaced by $*$, the convolution defined on the commutative hypergroup $X$, and we obtain the system of equations \begin{equation} \varphi_n(x*y)=\sum_{k=0}^n \binom{n}{k} \varphi_k(x)\varphi_{n-k}(y), \label{mom2} \end{equation} where $x,y$ are in $X$, and $n=0,1,2,\dots$. We also assume that $\varphi_0(o)=1$, where $o$ is the identity of the hypergroup $X$. We recall that in the hypergroup setting equation \eqref{mom2} is a system of integral equations of the following form: $$ \int_X \varphi_n(t)\,d(\delta_x*\delta_y)(t)=\sum_{k=0}^n \binom{n}{k} \varphi_k(x)\varphi_{n-k}(y).
$$ For more details see \cite{BloHey95} or \cite{MR2978690}. The functions $\varphi_n$ are supposed to be continuous. \section{Moment functions of higher rank} A further generalization is available in the following way. Let $X$ be a commutative hypergroup, $r$ a positive integer, and for each multi-index $\alpha$ in $\mathbb{N}^r$ let $\varphi_{\alpha}:X\to \mathbb{C}$ be a continuous function. We say that $(\varphi_{\alpha})_{\alpha \in \mathbb{N}^{r}}$ is a \emph{generalized moment sequence of rank $r$}, if \begin{equation}\label{Eq3} \varphi_{\alpha}(x*y)=\sum_{\beta\leq \alpha} \binom{\alpha}{\beta} \varphi_{\beta}(x)\varphi_{\alpha-\beta}(y) \end{equation} holds whenever $x,y$ are in $X$. We may consider finite sequences as well, if we restrict $|\alpha|\leq N$ with some nonnegative integer $N$. For simplicity we use the term {\it moment sequence}, omitting the adjective ``generalized''. We call a continuous function $\varphi:X\to \mathbb{C}$ a {\it moment function}, if there is a positive integer $r$, a moment sequence $(\varphi_{\alpha})_{\alpha \in \mathbb{N}^{r}}$ of rank $r$, and a multi-index $\alpha$ in $\mathbb{N}^r$ such that $\varphi=\varphi_{\alpha}$. We recall that, using multi-indices, besides the usual vector notation for the basic operations we use the following notation: for $\alpha=(\alpha_1,\alpha_2,\dots,\alpha_r)$ and $\beta=(\beta_1,\beta_2,\dots,\beta_r)$ in $\mathbb{N}^r$ we shall write $\alpha\leq \beta\enskip\text{whenever}\enskip \alpha_i\leq \beta_i\enskip\text{for} \enskip i=1,2,\dots, r$, and $\alpha<\beta\enskip\text{whenever}\enskip \alpha\leq\beta\enskip\text{and}\enskip \alpha\ne \beta$. Further, we use the notations $$ |\alpha|=\alpha_1+\alpha_2+\cdots+\alpha_r,\hskip1cm \alpha!=\alpha_1!\cdot \alpha_2!\cdots\alpha_r!, $$ $$ \binom{\alpha}{\beta}=\frac{\alpha!}{\beta! \cdot (\alpha-\beta)!},\hskip1cm x^{\alpha}=x_1^{\alpha_1}\cdot x_2^{\alpha_2}\cdot\dots\cdot x_r^{\alpha_r}. $$ If there is no misunderstanding, the zero of $\mathbb{N}^r$ will be denoted by $0$ instead of $(0,0,\dots,0)$. In our former paper \cite{FecGseSze20} we have investigated moment function sequences of higher rank on groups. For their description we used Bell polynomials (see the definition in \cite{FecGseSze20}). We proved the following result (see Proposition 3 and Theorem 2 in \cite{FecGseSze20}): \begin{thm} Let $G$ be a commutative group, $r$ a positive integer, and for each multi-index $\alpha$ in $\mathbb{N}^r$ let $f_{\alpha} : G\to \mathbb{C}$ be a function. The functions $f_{\alpha}$ form a generalized moment sequence of rank $r$ if and only if there exist an exponential \hbox{$m:G\to \mathbb{C}$} and a family of complex-valued additive functions $a=(a_{\alpha})$ such that for every multi-index $\alpha$ in $\mathbb{N}^r$ and $x$ in $G$ we have $$ f_{\alpha}(x)=B_{\alpha}\big(a_{\alpha}(x)\big)m(x). $$ \end{thm} It is reasonable to ask if a similar description of moment functions of higher rank is available in the hypergroup case. The answer is negative: due to the meaning of the symbol $f(x*y)$ as an integral, products of functions satisfying simple functional equations will not preserve the properties arising from these equations. For instance, the product of two exponentials on a hypergroup is, in general, not an exponential.
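For orientation, we record the lowest-order instances of \eqref{Eq3}; this is a routine specialization, included here only as an illustration. For $\alpha=0$ equation \eqref{Eq3} states that $\varphi_0$ is an exponential on $X$, while for $r=2$ and $\alpha=(1,0)$ it reads
$$
\varphi_{(1,0)}(x*y)=\varphi_{(1,0)}(x)\varphi_{(0,0)}(y)+\varphi_{(0,0)}(x)\varphi_{(1,0)}(y),
$$
that is, $\varphi_{(1,0)}$ is a $\varphi_{(0,0)}$-sine function; the same holds for every $\varphi_{\alpha}$ with $|\alpha|=1$.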
In the subsequent paragraphs we will show that the idea of multiplying exponentials by Bell polynomials can still be replaced by the application of appropriate differential operators. \section{Main results} We begin by recalling the definition of a polynomial hypergroup in one variable. Let $(a_n)_{n\in \mathbb{N}}$, $(b_n)_{n\in \mathbb{N}}$ and $(c_n)_{n\in \mathbb{N}}$ be real sequences with the following properties: $$ c_n>0,\quad b_n\geq 0, \quad a_{n+1}>0 $$ for each $n$ in $\mathbb{N}$. Moreover, $a_0=b_0=0$ and $$ a_n+b_n+c_n=1 $$ for each $n$ in $\mathbb{N}$. We define the sequence of polynomials $(P_n)_{n\in\mathbb{N}}$ by the formulas $P_0(\lambda)=1$, $P_1(\lambda)=\lambda$ and $$\lambda P_n(\lambda)=a_nP_{n-1}(\lambda)+b_nP_n(\lambda)+c_nP_{n+1}(\lambda)$$ for each $n\geq 1$ and $\lambda$ in $\mathbb{R}$. One can show that for each $m,n$ in $\mathbb{N}$ there exist constants $c(n,m,k)$ such that \begin{equation}\label{eq:LinFormula} P_n\cdot P_m=\sum_{k=|n-m|}^{n+m}c(n,m,k)P_k. \end{equation} Formula \eqref{eq:LinFormula} is called the linearization formula. One can show that $$ \sum_{k=|n-m|}^{n+m}c(n,m,k)=1 $$ for each $m,n$ in $\mathbb{N}$. If $c(n,m,k)\geq 0$ for all $k,m,n$ in $\mathbb{N}$, then we can define a hypergroup structure on the set $\mathbb{N}$, where the convolution is given by $$\delta_n*\delta_m= \sum_{k=|n-m|}^{n+m}c(n,m,k)\delta_k.$$ Assume now that $X$ is a polynomial hypergroup generated by the sequence of polynomials $(P_n)_{n\in\mathbb{N}}$ and $r,N$ are positive integers. Our purpose is to describe the general solution of the system of functional equations \eqref{Eq3} for the unknown functions $\varphi_{\alpha}:\mathbb{N}\to\mathbb{C}$ with $|\alpha|\leq N$. \vskip.2cm In the case $r=1$ the system \eqref{Eq3} reduces to the system \eqref{mom1}, which was solved in \cite{MR2161803} (see also \cite[Theorem 2.5, p. 44]{MR2978690}). We recall the result. \begin{thm}\label{r1} Let $X$ be a polynomial hypergroup generated by the sequence of polynomials $(P_n)_{n\in \mathbb{N}}$ and let $N$ be a positive integer. The sequence of functions $\varphi_{k}:\mathbb{N}\to\mathbb{C}$ $(k=0,1,\dots, N)$ is a moment function sequence (of rank $1$) if and only if they are of the form \begin{equation}\label{genformr1} \varphi_k(n)=(P_n\circ f)^{(k)}(0)\hskip .5cm k=0,1,\dots,N; n\in \mathbb{N}, \end{equation} where \begin{equation}\label{r1f} f(t)=\sum_{j=0}^N \frac{\varphi_j(1)}{j!}t^j\enskip\text{for}\enskip t \enskip\text{in}\enskip \mathbb{R}. \end{equation} \end{thm} The latter theorem says that moment functions can be represented as \[ \varphi_{\alpha}(n)= \partial^{\alpha}(P_{n}\circ f)(t)\vert_{t=0}. \] The derivative of the above composition can be computed with the aid of the multivariate Faà di Bruno formula, and using this, we get that \[ \partial^{\alpha}(P_{n}\circ f)(t)\vert_{t=0}= \sum_{\beta \leq \alpha} \partial^{\beta}P_{n}(f(t))\cdot B_{\alpha, \beta}(f(t), \ldots, \partial^{\alpha}f(t))\vert_{t=0}. \] If we use that \[ \partial^{\alpha}f(t)\vert_{t=0}= \varphi_{\alpha}(1), \] then the above formula becomes simpler. For more details see the paper of A.~Schumann on arXiv: \href{https://arxiv.org/pdf/1903.03899.pdf}{https://arxiv.org/pdf/1903.03899.pdf}. Our next theorem generalizes this result for moment function sequences of higher rank.
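Before stating it, we record what \eqref{genformr1} gives in the two lowest orders; this is a routine chain-rule computation, included only for orientation. Since $f(0)=\varphi_0(1)$ and $f'(0)=\varphi_1(1)$ by \eqref{r1f}, we have
$$
\varphi_0(n)=P_n\bigl(\varphi_0(1)\bigr),\qquad
\varphi_1(n)=(P_n\circ f)'(0)=P_n'\bigl(\varphi_0(1)\bigr)\,\varphi_1(1),\qquad n\in\mathbb{N}.
$$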
\begin{thm}\label{r} Let $X$ be a polynomial hypergroup generated by the sequence of polynomials $(P_n)_{n\in \mathbb{N}}$ and let $r,N$ be positive integers. The sequence of functions $\varphi_{\alpha}:\mathbb{N}\to\mathbb{C}$ $(\alpha\in \mathbb{N}^r, |\alpha|\leq N)$ is a moment function sequence of rank $r$ if and only if they are of the form \begin{equation}\label{genformr} \varphi_{\alpha}(n)=\partial^{\alpha}(P_n\circ f)(0)\hskip .5cm \alpha\in \mathbb{N}^r,\ |\alpha|\leq N;\ n\in \mathbb{N}, \end{equation} where $f:\mathbb{R}^r\to\mathbb{C}$ is defined for $t$ in $\mathbb{R}^r$ by \begin{equation}\label{rf} f(t)=\sum_{|\alpha|\leq N} \frac{\varphi_{\alpha}(1)}{\alpha!}t^{\alpha}. \end{equation} \end{thm} \begin{proof} We show the sufficiency first. We start with the identity $$ P_m(\lambda)P_n(\lambda)=P_{m*n}(\lambda), $$ where $$ P_{m*n}(\lambda)=\sum_{k=|m-n|}^{m+n} c(m,n,k)P_k(\lambda) $$ and $\lambda$ is an arbitrary complex number. We substitute $\lambda=f(t)$, where $t$ is arbitrary in $\mathbb{R}^r$: \begin{equation}\label{Eq1} P_m(f(t))P_n(f(t))=P_{m*n}(f(t)), \end{equation} and apply $\partial^{\alpha}$ on both sides to get \begin{equation}\label{Eq2} \partial^{\alpha}\left(P_{m*n}(f(t))\right)=\sum_{\beta\leq \alpha} \binom{\alpha}{\beta} \partial^{\beta}P_m(f(t))\cdot \partial^{\alpha-\beta}P_n(f(t)). \end{equation} Now we substitute $t=0$ to obtain $$ \varphi_{\alpha}(m*n)=\partial^{\alpha}(P_{m*n}\circ f)(0) $$ $$ =\sum_{\beta\leq \alpha}\binom{\alpha}{\beta} \partial^{\beta}(P_{m}\circ f)(0)\cdot \partial^{\alpha-\beta}(P_{n}\circ f)(0)=\sum_{\beta\leq \alpha}\binom{\alpha}{\beta} \varphi_{\beta}(m)\cdot \varphi_{\alpha-\beta}(n), $$ that is, the functions $\varphi_{\alpha}$ for $|\alpha|\leq N$ form a moment sequence of rank $r$, which proves the sufficiency part of the theorem. Now suppose that the sequence of functions $\varphi_{\alpha}:\mathbb{N}\to\mathbb{C}$ for $\alpha$ in $\mathbb{N}^r$ with $|\alpha|\leq N$ is a moment function sequence of rank $r$ on $X$. We define the function $f$ as given in \eqref{rf} and the functions $\psi_{\alpha}:\mathbb{N}\to\mathbb{C}$ by \begin{equation}\label{psi} \psi_{\alpha}(n)=\varphi_{\alpha}(n)-\partial^{\alpha}(P_n\circ f)(0) \end{equation} for $\alpha$ in $\mathbb{N}^r $ with $|\alpha|\leq N$ and for each $n=0,1,\dots$. We show that all functions $\psi_{\alpha}$ vanish identically. For $\alpha=0$ we have $$ \psi_0(n)=\varphi_0(n)-P_n(\varphi_0(1)) $$ whenever $n=0,1,\dots$. As $\varphi_0$ is an exponential, by Theorem 2.2 in \cite{MR2978690}, it follows that $\varphi_0(n)=P_n(\varphi_0(1))$, hence $\psi_0=0$. \vskip.2cm Now we show by induction on $|\alpha|$ that $\varphi_{\alpha}(0)=0$ whenever $|\alpha|>0$. Indeed, $\varphi_{\alpha}$ is a $\varphi_0$-sine function for each $\alpha$ with $|\alpha|=1$, hence $\varphi_{\alpha}(0)=0$. Now let $|\alpha|>1$, and assume that we have proved that $\varphi_{\beta}(0)=0$ for each $\beta$ with $0<|\beta|<|\alpha|$. Then we substitute $x=y=0$ in \eqref{Eq3} to obtain $$ \varphi_{\alpha}(0)=\varphi_{\alpha}(0)+\sum_{0<\beta<\alpha} \binom{\alpha}{\beta} \varphi_{\beta}(0)\varphi_{\alpha-\beta}(0)+\varphi_{\alpha}(0). $$ Since, by the induction hypothesis, every term of the sum in the middle vanishes, this gives $$ \varphi_{\alpha}(0)=2\varphi_{\alpha}(0), $$ which implies $\varphi_{\alpha}(0)=0$. This implies $\psi_{\alpha}(0)=0$ whenever $|\alpha|>0$.
On the other hand, for $|\alpha|>0$ we have $$ \psi_{\alpha}(1)=\varphi_{\alpha}(1)-\partial^{\alpha}(P_1\circ f)(0)=\varphi_{\alpha}(1)-\partial^{\alpha}f(0)=\varphi_{\alpha}(1)-\varphi_{\alpha}(1)=0. $$ We show, by induction on $|\alpha|$, that $\psi_{\alpha}=0$ for each $\alpha$. This clearly holds for $\alpha=0$. Assuming that $\psi_{\beta}=0$ for $\beta<\alpha$ we apply the linearization formula \eqref{Eq2} with $m=1$ and $t=0$ to get \begin{equation*} \partial^{\alpha}(P_{n*1}\circ f)(0)=\sum_{\beta\leq \alpha} \binom{\alpha}{\beta} \partial^{\alpha-\beta}(P_1\circ f)(0)\cdot \partial^{\beta}(P_n\circ f)(0), \end{equation*} or, for $n\geq 1$, \begin{equation*} \sum_{l=n-1}^{n+1}c(n,1,l)\partial^{\alpha}(P_l\circ f)(0)=\sum_{\beta\leq \alpha} \binom{\alpha}{\beta} \varphi_{\alpha-\beta}(1)\cdot \partial^{\beta}(P_n\circ f)(0), \end{equation*} that is, \begin{equation}\label{Eq4} \sum_{\beta< \alpha} \binom{\alpha}{\beta} \varphi_{\alpha-\beta}(1)\cdot \partial^{\beta}(P_n\circ f)(0)= \end{equation} $$ \sum_{l=n-1}^{n+1}c(n,1,l)\partial^{\alpha}(P_l\circ f)(0)-\varphi_0(1) \partial^{\alpha}(P_n\circ f)(0). $$ On the other hand, by the definition of the moment function sequence we have $$ \sum_{\beta\leq \alpha} \binom{\alpha}{\beta} \varphi_{\beta}(n)\varphi_{\alpha-\beta}(1)=\varphi_{\alpha}(n*1), $$ which can be rewritten as \begin{equation*} \sum_{\beta<\alpha} \binom{\alpha}{\beta}\varphi_{\alpha-\beta}(1)\cdot \varphi_{\beta}(n)=\varphi_{\alpha}(n*1)-\varphi_0(1)\varphi_{\alpha}(n), \end{equation*} or \begin{equation}\label{Eq5} \sum_{\beta<\alpha} \binom{\alpha}{\beta}\varphi_{\alpha-\beta}(1)\cdot \varphi_{\beta}(n)= \end{equation} $$ \sum_{l=n-1}^{n+1}c(n,1,l)\varphi_{\alpha}(l)-\varphi_0(1)\varphi_{\alpha}(n). $$ We subtract \eqref{Eq4} from \eqref{Eq5}; then, by the induction hypothesis, we have $$ \sum_{l=n-1}^{n+1}c(n,1,l)\psi_{\alpha}(l)=\varphi_0(1)\psi_{\alpha}(n). $$ This is a second order linear recursion for the sequence $n\mapsto \psi_{\alpha}(n)$ with $\psi_{\alpha}(0)=\psi_{\alpha}(1)=0$; since $c(n,1,n+1)=c_n>0$, the value $\psi_{\alpha}(n+1)$ is determined by $\psi_{\alpha}(n-1)$ and $\psi_{\alpha}(n)$, and induction yields $\psi_{\alpha}=0$. This holds for each $\alpha$ with $|\alpha|\leq N$, hence our theorem is proved. \end{proof} As an example we calculate the generalized moment functions of order at most $2$ in the case $r=2$ on the Tchebyshev hypergroup. \begin{ex} The generating polynomials are the Tchebyshev polynomials of the first kind: $$ T_n(\lambda)=\cos (n \arccos \lambda)\hskip.2cm \text{for}\hskip.2cm n=0,1,\dots $$ By formula \eqref{genformr} we have $$ \varphi_{\alpha}(n)=\partial^{\alpha} (T_n\circ f)(0,0) $$ whenever $\alpha=(0,0), (1,0), (0,1), (1,1), (2,0), (0,2)$, and $f$ is defined by equation \eqref{rf}. We denote $\varphi_{0,0}(1)=T_1(\varphi_{0,0}(1))$ by $\lambda$ and write $c_{\alpha}$ for $\varphi_{\alpha}(1)$; then we have \begin{eqnarray*} \varphi_{0,0}(n)&=&T_n(\lambda)\\ \varphi_{1,0}(n)&=&c_{1,0}T_n'(\lambda), \\ \varphi_{0,1}(n)&=&c_{0,1}T_n'(\lambda), \\ \varphi_{1,1}(n)&=&c_{1,0}c_{0,1}T_n''(\lambda)+c_{1,1}T_n'(\lambda),\\ \varphi_{2,0}(n)&=&c_{1,0}^2T_n''(\lambda)+c_{2,0}T_n'(\lambda), \\ \varphi_{0,2}(n)&=&c_{0,1}^2T_n''(\lambda)+c_{0,2}T_n'(\lambda).
\end{eqnarray*} \end{ex} \begin{ackn} The research of E.~Gselmann has partially been carried out with the help of the project 2019-2.1.11-T\'{E}T-2019-00049, which has been implemented with the support provided from NRDI (National Research, Development and Innovation Fund of Hungary), financed under the T\'{E}T funding scheme. \\ The research of E.~Gselmann and L.~Sz\'{e}kelyhidi has been supported by the NRDI (National Research, Development and Innovation Fund of Hungary) Grant no. K 134191. \end{ackn} \end{document}
\begin{document} \begin{abstract} Brownian motions on a metric graph are defined. Their generators are characterized as Laplace operators subject to Wentzell boundary conditions at every vertex. Conversely, given a set of Wentzell boundary conditions at the vertices of a metric graph, a Brownian motion is constructed pathwise on this graph so that its generator satisfies the given boundary conditions. \end{abstract} \maketitle \thispagestyle{empty} \section{Introduction and Main Results} \label{sect_1} Since the groundbreaking works of Bachelier~\cite{Ba00}, Einstein~\cite{Ei05, Ei06}, and Smo\-lu\-chow\-ski~\cite{Sm06}, \footnote{\footnotesize It seems that Schr\"odinger \cite{Sc15} was the first to introduce the notion of a \emph{first passage time} (in German \emph{Erstpassagezeit}), i.e., a special type of \emph{stopping time}, in the continuous time context of the Brownian motion process. It is striking that this article and the parallel work of Smo\-lu\-chow\-ski~\cite{Sm15} have practically gone unnoticed in the physics literature, while being cited by statisticians, e.g., \cite{Tw45, FoCh78}.}\space the theory of the Brownian movement had been established as a central, recurrent theme in mathematics and physics. In the sequel the Brownian phenomenon stimulated the development of many important ideas and theories. A complete description of the history is beyond the scope of this introduction, but in keywords we want to mention the following: The construction of Wiener space~\cite{Wi23} and Wiener's approach of statistical mechanics and chaos~\cite{Wi38}, It\^o's theory of stochastic integration~\cite{It44} and stochastic differential equations~\cite{It46}, L\'evy's analysis of the fine structure of Brownian motion and his theory of the Brownian local time~\cite{Le37, Le48}, Feynman's path integral~\cite{Fe48} with its new view towards quantum mechanics, Kac's work on path integrals~\cite{Ka49, Ka50}. Towards the middle of the last century there were the works by Feller~\cite{Fe52, Fe54, Fe54a} and It\^o--McKean~\cite{ItMc63, ItMc74} on Brownian motions on intervals (see also below), Gross' abstract Wiener spaces~\cite{Gr67}, Nelson's work on functional integration and on the relation between quantum and stochastic dynamics~\cite{Ne54, Ne64, Ne67, Ne73}, giving new momentum to Euclidean and to constructive quantum field theory, e.g.~\cite{Sch58, Sch59, Sy69, Si74, GlJa81} and nonrelativistic quantum physics, e.g.~\cite{Si79}. Further we want to mention the asymptotics of Wiener integrals and large deviation theory~\cite{Sc65, DoVa75}, the theory of Dirichlet forms~\cite{Fu80, Si75, AlR89}, the development of the Malliavin~\cite{Ma78, Ma97} and Hida calculi~\cite{Hi75, HiKu93}, and Bismut's approach to the Atiyah--Singer index theorem~\cite{Bi84, Bi86}. In addition, there were important developments in other fields, such as engineering, biology or mathematical finance, which were triggered by the theory of Brownian motion. The present article is directly linked to the above quoted works by Feller and It\^o--McKean. So we want to sketch these in a little more detail. In his pioneering articles~\cite{Fe52, Fe54, Fe54a}, Feller raised the problem of characterizing and constructing all Brownian motions on a finite or on a semi-infinite interval.
In the sequel this problem stimulated very important research in the field of stochastic processes, and the problem of constructing all such Brownian motions found a complete solution in the work of It\^o and McKean~\cite{ItMc63, ItMc74} via the combination of the theory of the local time of Brownian motion~\cite{Le48}, and the theory of (strong) Markov processes~\cite{Bl57, Dy61, Dy65a, Dy65b, Hu56}. The central result of these investigations is that the most general Brownian motion on the half line $\mathbb{R}_+$ is determined by a generator which is (one half times) the Laplace operator on $\mathbb{R}_+$ with \emph{Wentzell boundary conditions} at the origin, i.e., linear combinations of the function value with the values of the first and second derivative at the origin (with coefficients satisfying certain restrictions, see below). It\^o and McKean showed in~\cite{ItMc63} --- partly based on the ideas of Feller~\cite{Fe52, Fe54, Fe54a} --- how to construct the paths of such motions: The boundary conditions are implemented by a combination of reflection at the origin with a slow down and killing, both on the scale of the local time at zero. The ideas contained in this article became one of the roots of their highly influential book~\cite{ItMc74}. In recent years, there has been a growing interest in \emph{metric graphs}, that is, piecewise linear spaces with singularities formed by the vertices of the graph. Metric graphs arise naturally as models in many domains, such as physics, chemistry, computer science and engineering, to mention just a few --- we refer the interested reader to~\cite{Ku04} for a review of such models and for further references. Therefore it is natural to extend Feller's problem to metric graphs. Stochastic processes, in particular Brownian motions and diffusions, on locally one-dimensional structures, notably on graphs and networks, have already been studied in a number of articles of which we want to mention~\cite{BaCh84, DeJa93, EiKa96, FrSh00, FrWe93, Fr94, Gr99, Kr95} in this context. In previous articles~\cite{KoSc99, KoSc00, KoSc06, KoSc06a}, two of the current authors studied the self-ad\-joint\-ness of Laplace operators on metric graphs and discussed their spectra. This allowed a discussion of the associated quantum scattering matrices. Further properties of the semigroups generated by Laplace operators on metric graphs, including a Selberg--type trace formula and the problem of whether these semigroups are positivity preserving or contractive, have been studied in \cite{KoPo07d, KoPo09c}. This is one of the motivations of our present study, since semigroups with these properties typically show up in Markov processes. Below we will return to this point, see remark~\ref{rem_sa_bc}. The wave equation on metric graphs and its finite propagation speed has been discussed in~\cite{KoPo11}. For suitable Laplacians, free quantum fields on metric graphs satisfying the Klein-Gordon equation and Einstein causality were constructed in~\cite{Sch09}. In~\cite{CPBMSG} the authors have constructed the paths of all possible Brownian motions (in the sense defined below) on single vertex graphs using the well-known Walsh process \cite{Wa78}, \cite{BaPi89} (see also \cite{ BaCh84, Ro83, Sa86a, Sa86b, Va85}) as the starting point. Furthermore, the relation to the quantum mechanical scattering is discussed in detail there.
The latter article provides an essential input for the construction of all possible Brownian motions on a general metric graph in the sense of definition~\ref{def1i} (see below) which we carry out here. The article is organized in the following way. In section~\ref{sect_mr} we set up our framework and state our main results: Theorem~\ref{thm1i} characterizes all possible Brownian motions (in the sense of definition~\ref{def1i}) on a metric graph~$\mathcal{G}$ in terms of Wentzell boundary conditions at the vertices. Conversely, theorem~\ref{thm1ii} states that for every choice of a set of Wentzell boundary conditions at the vertices as described in theorem~\ref{thm1i}, one can construct a Brownian motion on~$\mathcal{G}$ implementing these conditions. Theorem~\ref{thm1i} is proved in section~\ref{sect2}. As a preparation for the proof of theorem~\ref{thm1ii} we consider in section~\ref{sect3} the situation where one is given two metric graphs $\mathcal{G}_1$, $\mathcal{G}_2$ with Brownian motions $X_1$, $X_2$ in the sense of definition~\ref{def1i} thereon. If one joins some of the external edges of $\mathcal{G}_1$ and $\mathcal{G}_2$ to form a new metric graph $\mathcal{G}$, it is shown how to construct the paths of a Brownian motion $X$ on $\mathcal{G}$ by appropriately gluing the paths of $X_1$ and $X_2$ together. Theorem~\ref{thm1ii} is proved in section~\ref{sect4} via the procedure of section~\ref{sect3} and the results in~\cite{CPBMSG}, where the paths of Brownian motions on star graphs are constructed with methods similar to those of Feller~\cite{Fe52, Fe54, Fe54a} and It\^o--McKean~\cite{ItMc63, ItMc74}. The article is concluded in section~\ref{sect5} by a discussion of the inclusion of tadpoles. Furthermore, there are two appendices: one with a technical result on the crossover times which is used in section~\ref{sect3}, the other about Feller semigroups and resolvents. Given these results, it would be interesting to see whether known results for special cases of Brownian motion or diffusions on metric graphs can be extended to all Feller processes. For example, an arcsine law has been proved in~\cite{BaPi89a} for the case of a Walsh process on a single vertex graph; for the case of a general metric graph we refer to~\cite{De02, BDe09} (for a discussion of local time distributions see also~\cite{CoDe02}). In a similar vein: What about occupation times on edges for the case of general (local) boundary conditions of the type~\eqref{eq1iii} at the vertices? Can one say something about large deviations as done for example for Brownian motions without killing and more generally for conservative diffusion processes in~\cite{FrSh00}? What form does the It\^o formula take in the case of a diffusion process on a metric graph with a generator subject to the boundary conditions~\eqref{eq1iii}?

\noindent \textbf{Acknowledgement.} The authors thank Mrs.~and Mr.~Hulbert for their warm hospitality at the \textsc{Egertsm\"uhle}, Kiedrich, where part of this work was done. J.P.\ gratefully acknowledges fruitful discussions with O.~Falkenburg, A.~Lang and F.~Werner. We owe special thanks to O.~Falkenburg for pointing out reference~\cite{Sc15} to us. The authors also thank the anonymous referee for pointing out further references. R.S.~thanks the organizers of the \emph{Chinese--German Meeting on Stochastic Analysis and Related Fields}, Beijing, May 2010, where some of the material of this article was presented.
\section{Main Results} \label{sect_mr} In the present article we shall only treat \emph{finite} metric graphs, and consider a metric graph $(\mathcal{G},d)$ as being defined by a finite collection of finite or semi-infinite closed intervals, some of their endpoints --- the \emph{vertices} of the graph --- being identified. See figure~\ref{fig1} for an example of a simple, typical metric graph. The metric $d$ is then defined in the canonical way as the length of a shortest path between two points along the \emph{edges} (formed by the intervals), and the length along each edge is measured with the usual metric on the real line. \begin{figure} \caption{A metric graph $\mathcal{G}$.} \label{fig1} \end{figure} For a formal definition of metric graphs within the context of graph theory we refer the interested reader, e.g., to~\cite{KoSc99, KoSc00}. Within that context our definition above means that we identify --- as we may without any loss of generality --- an abstract metric graph with its \emph{geometric graph} (see, e.g., \cite{Ju05}). Moreover, in the sequel it will often be convenient and without any danger of confusion to identify an edge of a metric graph with the corresponding interval of the real line. Edges isomorphic to $\mathbb{R}_+$ are called \emph{external}, while those isomorphic to a finite interval --- that is, those edges connecting two vertices --- are called \emph{internal}. The set of vertices of $\mathcal{G}$ is denoted by $V$, the set of internal edges by $\mathcal{I}$ and the set of external edges by $\mathcal{E}$. Moreover we set $\mathcal{L}=\mathcal{I}\cup\mathcal{E}$. The combinatorial structure of the graph $\mathcal{G}$ is described by a map $\partial$ from $\mathcal{L}$ into $V\cup (V\times V)$ which associates with every internal edge $i$ an ordered pair $\bigl(\partial^-(i), \partial^+(i)\bigr)\in V\times V$, where $\partial^-(i)$ is called the \emph{initial vertex} of $i$ while $\partial^+(i)$ is its \emph{terminal vertex}. If $i\in\mathcal{I}$ is isomorphic to the interval $[a,b]$ then $\partial^-(i)$ corresponds to $a$, while $\partial^+(i)$ corresponds to~$b$. An external edge $e$ is mapped by $\partial$ to $\partial(e)\in V$ which is the vertex to which $e$ is incident, and also in this case we call the vertex the \emph{initial vertex} of~$e$. For the definition of a Brownian motion on the metric graph $(\mathcal{G},d)$ we take a standpoint similar to the one of Knight~\cite{Kn81} for the semi-line or a finite interval: \begin{definition} \label{def1i} A \emph{Brownian motion} on a metric graph $(\mathcal{G},d)$ is a diffusion process $(X_t,\,t\in\mathbb{R}_+)$ such that when $X$ starts on an edge $e$ of $\mathcal{G}$, then the process $X$ with absorption in the vertex (or vertices) to which $e$ is incident is equivalent to a standard one-dimensional Brownian motion on the interval $e$ with absorption in the endpoint(s) of $e$. \end{definition} \begin{remarks} \label{def1ii} By saying that $X$ is a \emph{diffusion process} we mean that $X$ is a normal, strong Markov process (in the sense of~\cite{BlGe68}), a.s.\ with paths which are c\`adl\`ag and continuous on $[0,\zeta)$, where $\zeta$ is the lifetime of $X$. We shall always assume that the filtration for $X$ satisfies the ``usual conditions''. With the help of the well-known first passage time formula (e.g., \cite{Ra56} or \cite{ItMc74}) for the resolvent of $X$ it is not hard to show, as in~\cite{Kn81}, that every Brownian motion on a metric graph $\mathcal{G}$ is a Feller process.
\end{remarks} The first crucial problem is then to characterize the behavior of the stochastic process when it reaches one of the vertices of the graph $\mathcal{G}$, or, in other words, to characterize the boundary conditions at the vertices for the Laplace operator which generates the stochastic process. We want to mention in passing that in an $L^2$-setting all boundary conditions for Laplace operators on $\mathcal{G}$ which make them self-adjoint have been characterized in~\cite{KoSc99, KoSc00}. The first main result of the present paper is Feller's theorem for metric graphs. In order to state this theorem we have to introduce some notation. The Banach space of real-valued, continuous functions on $\mathcal{G}$ vanishing at infinity, equipped with the sup-norm, is denoted by $C_0(\mathcal{G})$. We let $\Delta$ denote a universal cemetery point for all stochastic processes considered, and make the usual convention that every $f\in C_0(\mathcal{G})$ is extended to $\mathcal{G}\cup\{\Delta\}$ by setting $f(\Delta)=0$. Consider the generator $A$ of $X$ on $C_0(\mathcal{G})$ with domain $\mathcal{D}(A)$. Define the space $C^2_0(\mathcal{G})$ to consist of those functions $f$ in $C_0(\mathcal{G})$ which are twice continuously differentiable in the \emph{open interior} $\mathcal{G}^\circ = \mathcal{G}\setminus V$ of $\mathcal{G}$, and which are such that their second derivative $f''$ extends from $\mathcal{G}^\circ$ to a function in $C_0(\mathcal{G})$. The next lemma, which can be proved with the fundamental theorem of calculus and the mean value theorem, states some of the properties of functions in $C^2_0(\mathcal{G})$. $\mathcal{L}(v)$ denotes the set of edges incident with $v\in V$. \begin{lemma} \label{lemC02} Assume that $f$ belongs to $C^2_0(\mathcal{G})$, and consider $v\in V$, $l\in\mathcal{L}(v)$. Then the inward directional derivatives $f^{(i)}(v_l)$, $i=1$, $2$, of $f$ of first and second order at $v$ in direction of the edge $l$ exist, and \begin{align} f'(v_l) &= \begin{cases}\displaystyle \phantom{-}\lim_{\xi\to v,\,\xi\in l^\circ} f'(\xi), & \text{if $v$ is an initial vertex of $l$,}\label{inw-deri}\\[2ex] \displaystyle -\lim_{\xi\to v,\,\xi\in l^\circ} f'(\xi), & \text{if $v$ is a terminal vertex of $l$,} \end{cases}\\[2ex] f''(v_l) &= \lim_{\xi\to v,\,\xi\in l^\circ} f''(\xi)\label{inw-derii} \end{align} hold true. Moreover, $f'$ (defined on $\mathcal{G}^\circ$) vanishes at infinity. \end{lemma} \begin{remark} \label{remC02} If $f\in C^2_0(\mathcal{G})$ then by definition of $C^2_0(\mathcal{G})$, $f''(v_k) = f''(v_l)$ for every $v\in V$, and all $k$, $l\in\mathcal{L}(v)$, and we shall simply write $f''(v)$. On the other hand, in general $f'(v_k)\ne f'(v_l)$ for $k\ne l$. \end{remark} Let $V_\mathcal{L}$ denote the subset of $V\times\mathcal{L}$ given by \begin{equation*} V_\mathcal{L} = \bigl\{(v,l),\,v\in V \text{ and }l\in\mathcal{L}(v)\bigr\}. \end{equation*} We shall also write $v_l$ for $(v,l)\in V_\mathcal{L}$. Consider data of the following form \begin{equation} \label{eq1i} \begin{split} a &= (a_v,\,v\in V)\in [0,1)^V\\ b &= (b_{v_l},\,v_l\in V_\mathcal{L}) \in [0,1]^{V_\mathcal{L}}\\ c &= (c_v,\,v\in V)\in [0,1]^V \end{split} \end{equation} subject to the condition \begin{equation} \label{eq1ii} a_v + \sum_{l\in \mathcal{L}(v)} b_{v_l} + c_v =1,\qquad \text{for every $v\in V$}.
\end{equation} Define a subspace $\mathcal{H}_{a,b,c}$ of $C^2_0(\mathcal{G})$ as the space of those functions $f$ in $C^2_0(\mathcal{G})$ which at every vertex $v\in V$ satisfy the \emph{Wentzell boundary condition} \begin{equation}\label{eq1iii} a_v f(v) - \sum_{l\in\mathcal{L}(v)} b_{v_l}f'(v_l)+ \frac{1}{2}\,c_v f''(v)=0. \end{equation} Now we can state our first main result: \begin{theorem}[Feller's theorem for metric graphs] \label{thm1i} Let $X$ be a Brownian motion on $\mathcal{G}$, and let $A$ be its generator on $C_0(\mathcal{G})$ with domain $\mathcal{D}(A)$. Then there are $a$, $b$, $c$ as in~\eqref{eq1i}, \eqref{eq1ii}, so that $\mathcal{D}(A)=\mathcal{H}_{a,b,c}$. For $f\in\mathcal{D}(A)$, $A f = 1/2 f''$. \end{theorem} \begin{remark} The boundary conditions in $\mathcal{H}_{a,b,c}$ are \emph{local} in the sense that only \emph{one vertex} enters each of the conditions~\eqref{eq1iii}. This is a direct consequence of the path properties of $X$, namely of the condition that the only jumps $X$ may have are those from $\mathcal{G}$ (actually from a vertex) to the cemetery point. \end{remark} \begin{remark}\label{standard} Boundary conditions with $a_v=0=c_v$ are often called \emph{standard boundary conditions} (see e.g.~\cite{Ku04, KoSc06}), giving rise to what is called a \emph{skew Brownian motion} \cite{ItMc74}; for a recent survey see, e.g.,~\cite{Le06}. Killing occurs when $a_v\neq 0$, and~\cite{CPBMSG} provides a detailed discussion of the process for single vertex graphs. When $a_v= 0$ the process is conservative and has been studied extensively in~\cite{FrWe93,FrSh00}. \end{remark} Our second main result is the converse of theorem~\ref{thm1i}, namely: \begin{theorem} \label{thm1ii} For any choice of the data as in~\eqref{eq1i}, \eqref{eq1ii}, there is a Brownian motion $X$ on the metric graph $\mathcal{G}$ so that its generator $A$ has $\mathcal{H}_{a,b,c}$ as its domain. \end{theorem} In order to rephrase the statements of theorems~\ref{thm1i} and~\ref{thm1ii} in a concise way, we bring in some additional notation. With a slight abuse of language we shall also call any quadruple $\mathbb{X}=(\Omega,\mathcal{A},P,X)$ a \emph{Brownian motion on $\mathcal{G}$} whenever $(\Omega,\mathcal{A},P)$ is a complete probability space, and $X=(X_t,\,t\in\mathbb{R}_+)$ defined thereon is a Brownian motion on $\mathcal{G}$ as in definition~\ref{def1i}. $\mathcal{X}(\mathcal{G})$ denotes the set of all Brownian motions in this sense, subject to the equivalence relation which is defined by equality of all finite dimensional distributions. In $\mathbb{R}^{n+1}$ consider the (compact, convex) $n$-simplex \begin{equation*} \sigma^n= \Bigl\{x\in\mathbb{R}^{n+1},\, x_i\ge 0, \sum_{i=1}^{n+1}x_i=1\Bigr\}. \end{equation*} Let $\sigma^n_0$ be the simplex $\sigma^n$ with the point $(1,0,\cdots,0)$ removed: \begin{equation*} \sigma^n_0=\sigma^n\setminus\{(1,0,\cdots,0)\}. \end{equation*} It is still convex but not closed. Given a fixed but arbitrary ordering of $\mathcal{L}(v)$, any triple $\bigl(a_v,(b_{v_l}, \,l\in\mathcal{L}(v)),c_v\bigr)$ satisfying~\eqref{eq1ii} can be viewed as an element in $\sigma^{n(v)+1}_0$ with $n(v)=|\mathcal{L}(v)|$. With $N(\mathcal{G})=\sum_v( n(v) +2)$ set \begin{equation*} \Sigma(\mathcal{G})= \prod_{v\in V} \sigma^{n(v)+1}_0\subset \mathbb{R}^{N(\mathcal{G})}. \end{equation*} Let $\iota$ be the mapping defined via theorem~\ref{thm1i} by associating to every Brownian motion $\mathbb{X}$ the data $(a,b,c)\in\Sigma(\mathcal{G})$.
Since any two Brownian motions on $\mathcal{G}$ which have the same finite dimensional distributions define the same semigroup, and therefore have the same generator, it follows that $\iota$ maps them to the same data; that is, $\iota$ can be viewed as a mapping from $\mathcal{X}(\mathcal{G})$ to $\Sigma(\mathcal{G})$. Theorem~\ref{thm1ii} states that $\iota$ is surjective. To see its injectivity, suppose that $[\mathbb{X}_1]$, $[\mathbb{X}_2]$ are different elements in $\mathcal{X}(\mathcal{G})$, where $[\mathbb{X}_i]$, $i=1$, $2$, denotes the equivalence class of a representative $\mathbb{X}_i$. Assume that $\iota([\mathbb{X}_1]) = \iota([\mathbb{X}_2])$. By theorem~\ref{thm1i} the generator $A_i$, $i=1$, $2$, of $\mathbb{X}_i$ is uniquely determined by the data $\iota([\mathbb{X}_i])$, and therefore we get $A_1=A_2$. It follows that $\mathbb{X}_1$ and $\mathbb{X}_2$ define the same semigroup, and therefore all their finite dimensional distributions coincide, which is a contradiction. Thus we have proved that $\iota$ is a bijection from $\mathcal{X}(\mathcal{G})$ onto $\Sigma(\mathcal{G})$: \begin{corollary} The set $\mathcal{X}(\mathcal{G})$ of all Brownian motions on $\mathcal{G}$ is in one-to-one correspondence with the set $\Sigma(\mathcal{G})$. \end{corollary} \section{Proof of Theorem~\ref{thm1i}} \label{sect2} The following notation will be useful throughout this article: If $\xi$ is a point in $\mathcal{G}^\circ = \mathcal{G}\setminus V$, then it is in one-to-one correspondence with its \emph{local coordinates} $(l,x)$, where $l\in\mathcal{L}$ is the edge to which $\xi$ belongs, while $x$ is the point corresponding to $\xi$ in the interval to which $l$ is isomorphic. Then we simply write $\xi=(l,x)$. If $f$ is a function on the graph $\mathcal{G}$ we shall also denote $f(\xi)$ by $f(l,x)$ or $f_l(x)$. We denote by $U=(U_t,\,t\in\mathbb{R}_+)$ the semigroup generated by a Brownian motion $X$ on $\mathcal{G}$ acting on the Banach space $B(\mathcal{G})$ of bounded measurable functions on $\mathcal{G}$, equipped with the sup-norm, that is, for $f\in B(\mathcal{G})$, \begin{equation*} U_t f(\xi) = E_\xi\bigl(f(X_t)\bigr),\qquad t\in\mathbb{R}_+,\,\xi\in\mathcal{G}. \end{equation*} Clearly, $U$ is a positivity preserving contraction semigroup. In the sequel we shall notationally not distinguish between the semigroup $U$ acting on $B(\mathcal{G})$ and its restriction to the subspace $C_0(\mathcal{G})$ of $B(\mathcal{G})$. The proof of the following lemma can be taken over with minor modifications from the standard literature, e.g., from~\cite[Chapter~6.1]{Kn81}. Therefore it is omitted here. \begin{lemma} \label{lem2i} For every Brownian motion $X$ on the metric graph $\mathcal{G}$, the generator $A$ of its semigroup $U$ acting on $C_0(\mathcal{G})$ has a domain $\mathcal{D}(A)$ contained in $C^2_0(\mathcal{G})$. Moreover, for every $f\in\mathcal{D}(A)$, $A f=1/2\,f''$. \end{lemma} The preceding lemma implies the second statement of theorem~\ref{thm1i}. The proof of the first statement of theorem~\ref{thm1i} has two rather distinct parts, and therefore we split it by proving the following two lemmas: \begin{lemma} \label{lem2ii} Suppose that $X$ is a Brownian motion on a metric graph $\mathcal{G}$, and that $\mathcal{D}(A)$ is the domain of the generator $A$ of its semigroup. Then there are $a$, $b$, $c$ as in~\eqref{eq1i}, \eqref{eq1ii}, so that $\mathcal{D}(A)\subset\mathcal{H}_{a,b,c}$.
\end{lemma} \begin{lemma} \label{lem2iii} Suppose that $A$ is the generator of a Brownian motion $X$ on $\mathcal{G}$ with domain $\mathcal{D}(A)\subset \mathcal{H}_{a,b,c}$ for some $a$, $b$, $c$ as in~\eqref{eq1i}, \eqref{eq1ii}. Then $\mathcal{D}(A)=\mathcal{H}_{a,b,c}$. \end{lemma} \begin{proof}[Proof of lemma~\ref{lem2ii}] Our proof follows the one in~\cite[Chapter~6.1]{Kn81} quite closely --- actually, it is sufficient to consider a special case of the proof given there. We show that for every vertex $v\in V$ there are constants $a_v\in [0,1)$, $b_{v_l}\in [0,1]$, $l\in\mathcal{L}(v)$, $c_v\in [0,1]$ satisfying~\eqref{eq1ii}, and such that all $f$ in the domain $\mathcal{D}(A)$ of the generator satisfy the boundary condition~\eqref{eq1iii}. To this end, we let $f\in\mathcal{D}(A)$, fix a vertex $v\in V$, and compute $A f(v)$. Consider the exit time from $v$, i.e., the stopping time $S_v = \inf\{t>0:\,X_t\ne v\}$; for any subset $M\subset \mathcal{G}$, $H(M)\equiv H_M$ denotes the hitting time of $M$. It is well known (e.g., \cite{Kn81, ReYo91, DyJu69}) that because of the strong Markov property of $X$, $S_v$ is under $P_v$ exponentially distributed with a rate $\beta_v\in[0,+\infty]$. Consequently we discuss three cases: \noindent \emph{Case $\beta_v=0$}: $X$ is absorbed at $v$, i.e., $v$ is a \emph{trap}. Thus $U_t f(v) = f(v)$ for all $t\ge 0$. Consequently, $A f(v)=0$, and therefore $1/2\, f''(v)=0$. Thus $f$ satisfies the boundary condition~\eqref{eq1iii} at $v$ with $a_v=0$, $c_v=1$, and $b_{v_l}=0$ for all $l\in\mathcal{L}(v)$. \noindent \emph{Case $0<\beta_v<+\infty$}: In this case the process stays at $v$ $P_v$-a.s.\ for a strictly positive, finite amount of time, i.e., $v$ is \emph{exponentially holding}. It is well known (cf., e.g., \cite[p.~154]{Kn81}, \cite[p.~104, Prop.~3.13]{ReYo91}) that then the process has to leave $v$ by a jump, and by our assumption of path continuity on $[0,\zeta)$, the process has to jump to the cemetery $\Delta$. Therefore we get for $t>0$, $U_t f(v) = \exp(-\beta_v t) f(v)$, and thus $A f(v) + \beta_v f(v) = 0$, and the boundary condition~\eqref{eq1iii} holds for the choice \begin{equation} \label{eq2i} a_v = \frac{\beta_v}{1+\beta_v},\quad c_v=\frac{1}{1+\beta_v},\quad b_{v_l} = 0,\,l\in\mathcal{L}(v). \end{equation} \noindent \emph{Case $\beta_v=+\infty$}: In this case $X$ leaves the vertex $v$ immediately, and it begins a Brownian excursion into one of the edges incident with the vertex $v$. In particular, $v$ is not a trap. Therefore we may compute $A f(v)$ in Dynkin's form, e.g., \cite[p.~140, ff.]{Dy65a}, \cite[p.~99]{ItMc74}. For $\epsilon>0$ let $H_{v,\epsilon}$ denote the hitting time of the complement of the ball $B_\epsilon(v)$ of radius $\epsilon$ around $v$. Then \begin{equation} \label{eq2ii} A f(v) = \lim_{\epsilon\downarrow 0} \frac{E_v\Bigl(f\bigl(X(H_{v,\epsilon})\bigr)\Bigr) -f(v)}{E_v(H_{v,\epsilon})}. \end{equation} Now \begin{align*} E_v\Bigl(f\bigl(X(H_{v,\epsilon})\bigr)\Bigr) &= \sum_{l\in\mathcal{L}(v)} f_l(\epsilon)\,P_v\bigl(X(H_{v,\epsilon})\in l\bigr) + f(\Delta)\,P_v\bigl(X(H_{v,\epsilon})=\Delta\bigr)\\ &= \sum_{l\in\mathcal{L}(v)} f_l(\epsilon)\,P_v\bigl(X(H_{v,\epsilon})\in l\bigr), \end{align*} where the last equality follows from $f(\Delta)=0$.
Let us denote \begin{align*} r_l(\epsilon) &= \frac{P_v\bigl(X(H_{v,\epsilon})\in l\bigr)}{E_v(H_{v,\epsilon})}, \ l\in\mathcal{L}(v),\quad r_\Delta(\epsilon) = \frac{P_v\bigl(X(H_{v,\epsilon})=\Delta\bigr)}{E_v(H_{v,\epsilon})},\\[1ex] K(\epsilon) &= 1 + r_\Delta(\epsilon) + \epsilon \sum_{l\in\mathcal{L}(v)} r_l(\epsilon). \end{align*} The continuity of the paths of $X$ up to the lifetime $\zeta$ yields \begin{equation*} \sum_{l\in\mathcal{L}(v)} P_v\bigl(X(H_{v,\epsilon})\in l\bigr) + P_v\bigl(X(H_{v,\epsilon})=\Delta\bigr)=1, \end{equation*} and therefore equation~\eqref{eq2ii} can be rewritten as \begin{equation*} \lim_{\epsilon\downarrow 0}\Bigl(A f(v) + r_\Delta(\epsilon) f(v) - \sum_{l\in\mathcal{L}(v)} r_l(\epsilon) \bigl(f_l(\epsilon)-f(v)\bigr)\Bigr)=0. \end{equation*} Since for all $\epsilon>0$, $K(\epsilon)^{-1}\le 1$, it follows that \begin{equation*} \lim_{\epsilon\downarrow 0}\Bigl(\frac{1}{K(\epsilon)}\,A f(v) + \frac{r_\Delta(\epsilon)}{K(\epsilon)}\,f(v) - \sum_{l\in\mathcal{L}(v)} \frac{\epsilon\, r_l(\epsilon)}{K(\epsilon)}\, \frac{f_l(\epsilon)-f(v)}{\epsilon}\Bigr)=0, \end{equation*} which by lemma~\ref{lem2i} we may rewrite as \begin{equation*} \lim_{\epsilon\downarrow 0}\Bigl(a_v(\epsilon) f(v) + \frac{1}{2}\,c_v(\epsilon) f''(v) - \sum_{l\in\mathcal{L}(v)} b_{v_l}(\epsilon)\,\frac{f_l(\epsilon)-f(v)}{\epsilon}\Bigr)=0, \end{equation*} where we have introduced the non-negative quantities \begin{equation*} a_v(\epsilon) = \frac{r_\Delta(\epsilon)}{K(\epsilon)},\quad c_v(\epsilon) = \frac{1}{K(\epsilon)},\quad b_{v_l}(\epsilon) = \frac{\epsilon\, r_l(\epsilon)}{K(\epsilon)},\ l\in\mathcal{L}(v). \end{equation*} Observe that for every $\epsilon>0$, \begin{equation*} a_v(\epsilon) + c_v(\epsilon) + \sum_{l\in\mathcal{L}(v)} b_{v_l}(\epsilon) =1. \end{equation*} Therefore every sequence $(\epsilon_n,\,n\in\mathbb{N})$ with $\epsilon_n>0$ and $\epsilon_n\downarrow 0$ has a subsequence so that $a_v(\epsilon)$, $c_v(\epsilon)$ and $b_{v_l}(\epsilon)$, $l\in\mathcal{L}(v)$, converge along this subsequence to numbers $a_v$, $c_v$, and $b_{v_l}$ respectively in $[0,1]$, and the relation~\eqref{eq1ii} holds true. It is not hard to check that for every $f\in C^2_0(\mathcal{G})$ \begin{equation*} \frac{f_l(\epsilon)-f(v)}{\epsilon} \end{equation*} converges with $\epsilon\downarrow 0$ to $f'(v_l)$, and therefore we obtain that for every vertex $v\in V$, $f\in\mathcal{D}(A)$ satisfies the boundary condition~\eqref{eq1iii} with data $a$, $b$, $c$ as in~\eqref{eq1i}, \eqref{eq1ii}. \end{proof} Before we can prove lemma~\ref{lem2iii} we have to introduce some additional formalism. We define the subspace $\widetilde C^2_0(\mathcal{G})$ of functions $f$ in $C_0(\mathcal{G})$ which are twice continuously differentiable on $\mathcal{G}^\circ$, such that $f''$ (as defined on $\mathcal{G}^\circ$) vanishes at infinity, and furthermore for every $v\in V$ and all $l\in\mathcal{L}(v)$ the limit \begin{equation*} \lim_{\xi\to v,\, \xi\in l^\circ} f''(\xi) \end{equation*} exists. As in the statement of lemma~\ref{lemC02}, the last limit is equal to the second order derivative $f''(v_l)$ of $f$ at $v$ in direction of $l$.
For given data $a$, $b$, $c$ as in~\eqref{eq1i}, \eqref{eq1ii}, it will be convenient to consider $\mathcal{H}_{a,b,c}$ equivalently as the subspace of $\widetilde C^2_0(\mathcal{G})$ consisting of those $f$ for which at every $v\in V$ the boundary condition~\eqref{eq1iii} as well as the boundary condition \begin{equation} \label{eq2iii} f''(v_l) = f''(v_k),\qquad \text{for all $l,\,k\in\mathcal{L}(v)$} \end{equation} hold true. Relation~\eqref{eq2iii} is just another way to express that $f''$ extends continuously from $\mathcal{G}^\circ$ to $\mathcal{G}$. We consider the sets $V$, $\mathcal{E}$, and $\mathcal{I}$ as being ordered in some arbitrary way. With the convention that in $\mathcal{L}$ the elements of $\mathcal{E}$ come first, this induces also an order relation on $\mathcal{L}$. Suppose that $f\in\widetilde C^2_0(\mathcal{G})$. With the given ordering of $\mathcal{E}$ and $\mathcal{I}$ we define the following column vectors of length $|\mathcal{E}|+2|\mathcal{I}|$: \begin{align*} f(V) &= \Bigl(\bigl(f_e(0),\,e\in\mathcal{E}\bigr),\bigl(f_i(0),\,i\in\mathcal{I}\bigr), \bigl(f_i(\rho_i),\,i\in\mathcal{I}\bigr)\Bigr)^t,\\ f'(V) &= \Bigl(\bigl(f'_e(0),\,e\in\mathcal{E}\bigr),\bigl(f'_i(0),\,i\in\mathcal{I}\bigr), \bigl(-f'_i(\rho_i),\,i\in\mathcal{I}\bigr)\Bigr)^t,\\ f''(V) &= \Bigl(\bigl(f''_e(0),\,e\in\mathcal{E}\bigr),\bigl(f''_i(0),\,i\in\mathcal{I}\bigr), \bigl(f''_i(\rho_i),\,i\in\mathcal{I}\bigr)\Bigr)^t, \end{align*} where the superscript ``$t$'' indicates transposition. We want to write the boundary conditions~\eqref{eq1iii}, \eqref{eq2iii} in a compact way. To this end we introduce the following order relation on $V_\mathcal{L}$: For $v_l$, $v'_{l'}\in V_\mathcal{L}$ we set $v_l \preceq v'_{l'}$ if and only if $v \prec v'$ or $v = v'$ and $l\preceq l'$ (where for $V$ and $\mathcal{L}$ we use the order relations introduced above). For $f$ as above set \begin{align*} \tilde f(V) &= \bigl(f(v_l),\,v_l\in V_\mathcal{L}\bigr)^t,\\ \tilde f'(V) &= \bigl(f'(v_l),\,v_l\in V_\mathcal{L}\bigr)^t,\\ \tilde f''(V) &= \bigl(f''(v_l),\,v_l\in V_\mathcal{L}\bigr)^t. \end{align*} Then there exists a permutation matrix $P$ so that \begin{equation*} \tilde f(V) = P f(V),\qquad \tilde f'(V) = P f'(V),\qquad \tilde f''(V) = P f''(V). \end{equation*} In particular, $P$ is an orthogonal matrix which has in every row and in every column exactly one entry equal to one while all other entries are zero. For every $v\in V$ we define the following $|\mathcal{L}(v)|\times|\mathcal{L}(v)|$ matrices: \begin{align*} \tilde A(v) &= \begin{pmatrix} a_v & 0 & 0 & \cdots & 0\\ 0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix},\\[2ex] \tilde B(v) &= \begin{pmatrix} -b_{v_{l_1}} & -b_{v_{l_2}} & -b_{v_{l_3}} & \cdots & -b_{v_{l_{|\mathcal{L}(v)|}}}\\ 0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix},\\[2ex] \tilde C(v) &= \begin{pmatrix} 1/2\,c_v & 0 & 0 & 0 & \cdots & 0 \\ 1 & -1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & -1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & -1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -1 \end{pmatrix}, \end{align*} where we have labeled the elements in $\mathcal{L}(v)$ in such a way that in the above defined ordering we have $l_1\prec l_2 \prec \dotsb \prec l_{|\mathcal{L}(v)|}$. Observe that $\tilde C(v)$ is invertible if and only if $c_v\ne 0$.
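To make the construction concrete, consider a vertex $v$ with $|\mathcal{L}(v)|=2$; the following small worked case is included only as an illustration. Then
\begin{equation*}
\tilde A(v)=\begin{pmatrix} a_v & 0\\ 0 & 0\end{pmatrix},\qquad
\tilde B(v)=\begin{pmatrix} -b_{v_{l_1}} & -b_{v_{l_2}}\\ 0 & 0\end{pmatrix},\qquad
\tilde C(v)=\begin{pmatrix} \tfrac{1}{2}\,c_v & 0\\ 1 & -1\end{pmatrix},
\end{equation*}
and the per-vertex system
\begin{equation*}
\tilde A(v)\begin{pmatrix} f(v)\\ f(v)\end{pmatrix}
+\tilde B(v)\begin{pmatrix} f'(v_{l_1})\\ f'(v_{l_2})\end{pmatrix}
+\tilde C(v)\begin{pmatrix} f''(v_{l_1})\\ f''(v_{l_2})\end{pmatrix}=0
\end{equation*}
consists precisely of the boundary condition~\eqref{eq1iii} in its first row (using $f''(v_{l_1})=f''(v)$) and of~\eqref{eq2iii} in its second row.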
Define block matrices $\tilde A$, $\tilde B$, and $\tilde C$ by \begin{equation*} \tilde A = \bigoplus_{v\in V} \tilde A(v),\quad \tilde B = \bigoplus_{v\in V} \tilde B(v), \quad \tilde C = \bigoplus_{v\in V} \tilde C(v). \end{equation*} Then we can write the boundary conditions~\eqref{eq1iii}, \eqref{eq2iii} simultaneously for all vertices as \begin{equation} \label{bch} \tilde A \tilde f(V) + \tilde B \tilde f'(V) + \tilde C \tilde f''(V) = 0. \end{equation} Consequently the boundary conditions can equivalently be written in the form \begin{equation} \label{bc} Af(V) + Bf'(V) + Cf''(V) = 0, \end{equation} with \begin{equation} A = P^{-1}\tilde A P,\quad B = P^{-1}\tilde B P,\quad C = P^{-1}\tilde C P. \end{equation} We introduce the following two matrix-valued functions on the complex plane: \begin{equation} \label{Zpm} \hat Z_\partialm (\kappa) = A \partialm \kappa B + \kappa^2 C,\qquad \kappa\in\mathbb{C}. \end{equation} \begin{lemma} \label{lem2iv} There exists $R>0$ so that for all $\kappa\in\mathbb{C}$ with $|\kappa|\ge R$ the matrices $\hat Z_\partialm(\kappa)$ are invertible, and there are constants $C$, $p>0$ so that \begin{equation} \label{invnorm} \|\hat Z_\partialm(\kappa)^{-1}\|\le C\,|\kappa|^p ,\qquad |\kappa|\ge R. \end{equation} \end{lemma} \begin{proof}[Proof of lemma~\ref{lem2iv}] Since we have \begin{equation} \label{ZP} \hat Z_{\partialm}(\kappa) = P^{-1}\bigl(\tilde A \partialm\kappa \tilde B + \kappa^2\tilde C\bigr) P \end{equation} for an orthogonal matrix $P$, for the proof of the first statement it suffices to show that there exists $R>0$ such that \begin{equation*} \tilde A \partialm\kappa \tilde B + \kappa^2\tilde C \end{equation*} are invertible for complex $\kappa$ outside of the open ball of radius $R$. For this in turn it suffices to show that for every vertex $v\in V$ the matrices \begin{equation*} \begin{split} \tilde A(v) &\partialm\kappa \tilde B(v) + \kappa^2\tilde C(v)\\ &= \begin{pmatrix} a_v \partialm \kappa b_{v_{l_1}}+ \kappa^2/2\,c_v & \partialm\kappa b_{v_{l_2}} & \partialm\kappa b_{v_{l_3}} & \partialm\kappa b_{v_{l_4}} & \cdots & \partialm\kappa b_{v_{l_{|\mathcal{L}(v)|}}} \\ \kappa^2 & -\kappa^2 & 0 & 0 & \cdots & 0 \\ 0 & \kappa^2 & -\kappa^2 & 0 & \cdots & 0 \\ 0 & 0 & \kappa^2 & -\kappa^2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -\kappa^2 \end{pmatrix} \end{split} \end{equation*} are invertible for all $\kappa\in\mathbb{C}$ with $|\kappa|\ge R$. An elementary calculation gives \begin{equation*} \det\bigl(\tilde A(v) \partialm\kappa \tilde B(v) + \kappa^2\tilde C(v)\bigr) = \mathbb{B}igl(a_v \partialm \kappa \sum_{l\in \mathcal{L}(v)}b_{v_l} + \frac{\kappa^2}{2}\,c_v\mathbb{B}igr) \bigl(-\kappa^2\bigr)^{|\mathcal{L}(v)|-1}. \end{equation*} The choices $\kappa=\partialm 1$ together with condition~\eqref{eq1ii} show that the polynomial of second order in $\kappa$ in the first factor on the right hand side does not vanish identically: since the numbers $a_v$, $c_v$, $b_{v_l}$, $l\in\mathcal{L}(v)$, are non-negative and sum to one, the value of this polynomial at $\kappa=1$ or at $\kappa=-1$ is at least $1/2$. Therefore, it is non-zero in the exterior of an open ball with some radius $R_v>0$. Hence, we obtain the first statement for the choice $R = \max_{v\in V} R_v$.
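As a quick check of the determinant formula, which the reader may skip, take a vertex $v$ with $|\mathcal{L}(v)|=2$ (again chosen only for concreteness). Then \begin{align*} \det\begin{pmatrix} a_v \pm \kappa b_{v_{l_1}} + \kappa^2/2\,c_v & \pm\kappa b_{v_{l_2}}\\ \kappa^2 & -\kappa^2 \end{pmatrix} &= -\kappa^2\Bigl(a_v \pm \kappa b_{v_{l_1}} + \frac{\kappa^2}{2}\,c_v\Bigr) \mp \kappa^3 b_{v_{l_2}}\\ &= \Bigl(a_v \pm \kappa\bigl(b_{v_{l_1}}+b_{v_{l_2}}\bigr) + \frac{\kappa^2}{2}\,c_v\Bigr)\bigl(-\kappa^2\bigr), \end{align*} in agreement with the general expression (here $|\mathcal{L}(v)|-1=1$).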
Moreover, from the calculation of the determinants above we also get for every $v\in V$ and all $\kappa\in\mathbb{C}$ with $|\kappa|\ge R$ an estimate of the form \begin{equation} \label{detinv} \bigl|\det\bigl(\tilde A(v) \partialm\kappa \tilde B(v) + \kappa^2\tilde C(v)\bigr)\bigr|^{-1} \le \text{const.} \end{equation} Thus, using the co-factor formula for \begin{equation*} \bigl(\tilde A(v) \partialm\kappa \tilde B(v) + \kappa^2\tilde C(v)\bigr)^{-1} \end{equation*} we find with~\eqref{detinv} the estimate \begin{equation*} \bigl\|\bigl(\tilde A(v) \partialm\kappa \tilde B(v) + \kappa^2\tilde C(v)\bigr)^{-1}\bigr\| \le C_v |\kappa|^{p_v},\qquad |\kappa|\ge R, \end{equation*} for some constants $C_v$, $p_v>0$. Consequently we get \begin{equation*} \bigl\|\bigl(\tilde A \partialm\kappa \tilde B + \kappa^2\tilde C\bigr)^{-1}\bigr\| \le C |\kappa|^p,\qquad |\kappa|\ge R, \end{equation*} for some constants $C$, $p>0$, and by~\eqref{ZP} we have proved inequality~\eqref{invnorm}. \end{proof} With these preparations we can enter the \begin{proof}[Proof of lemma~\ref{lem2iii}] Let the data $a$, $b$, $c$ be given as in~\eqref{eq1i}, \eqref{eq1ii}. We have to show that the inclusion $\mathcal{D}(A)\subset \mathcal{H}_{a,b,c}$ is not strict. Assume to the contrary that the inclusion $\mathcal{D}(A)\subset \mathcal{H}_{a,b,c}$ is strict. We will derive a contradiction. Let $R = (R_\lambda,\,\lambda>0)$ be the resolvent of $A$. Then for every $\lambda>0$, $R_\lambda$ is a bijection from $C_0(\mathcal{G})$ onto $\mathcal{D}(A)$, that is, $R_\lambda^{-1}$ is a bijection from $\mathcal{D}(A)$ onto $C_0(\mathcal{G})$. For $\lambda>0$ consider the linear mapping $H_\lambda:\,f\mapsto \lambda f - 1/2 f''$ from $\mathcal{H}_{a,b,c}$ to $C_0(\mathcal{G})$. On $\mathcal{D}(A)$ this mapping coincides with the bijection $R^{-1}_\lambda$ from $\mathcal{D}(A)$ onto $C_0(\mathcal{G})$. Therefore our assumption entails that $H_\lambda$ cannot be injective. Hence for any $\lambda>0$ there exists $f(\lambda)\in\mathcal{H}_{a,b,c}$, $f(\lambda)\nablae 0$, with \begin{equation} \label{homeq} H_\lambda f(\lambda) = \lambda f(\lambda) - \frac{1}{2}\,f''(\lambda) = 0. \end{equation} We will show that $f(\lambda)\in\mathcal{H}_{a,b,c}$ satisfying \eqref{homeq} can only hold when $f(\lambda)=0$ on $\mathcal{G}$. It will be convenient to change the variable $\lambda$ to $\kappa = \sqrt{2\lambda}$, and there will be no danger of confusion that we shall simply write $f(\kappa)$ for $f(\lambda)$ from now on. Then the solution of~\eqref{homeq} is necessarily of the form given by \begin{align} f_e(\kappa,x) &= r_e(\kappa)\,e^{-\kappa x} & e&\in\mathcal{E},\,x\in \mathbb{R}_+,\\ f_i(\kappa,x) &= r^+_i(\kappa)\,e^{\kappa x}+r^-_i(\kappa)\,e^{\kappa(\rho_i- x)} & i&\in\mathcal{I},\,x\in [0,\rho_i], \end{align} and we want to show that for some $\kappa > 0$, the boundary conditions~\eqref{eq1iii} and~\eqref{eq2iii} entail that $r_e(\kappa) = r^+_i(\kappa) = r^-_i(\kappa)=0$ for all $e\in\mathcal{E}$, $i\in\mathcal{I}$. 
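We record for completeness why the solutions of~\eqref{homeq} necessarily have this form; this is only a restatement of elementary facts. With $\kappa=\sqrt{2\lambda}$, each building block $x\mapsto e^{\pm\kappa x}$ satisfies \begin{equation*} \lambda\,e^{\pm\kappa x}-\frac{1}{2}\,\frac{d^2}{dx^2}\,e^{\pm\kappa x} =\Bigl(\lambda-\frac{\kappa^2}{2}\Bigr)e^{\pm\kappa x}=0, \end{equation*} so the displayed expressions solve~\eqref{homeq} edge by edge. On an external edge the growing exponential $e^{\kappa x}$ is excluded: by~\eqref{homeq}, $f(\kappa)=\frac{1}{2\lambda}\,f''(\kappa)$, and $f''(\kappa)$ vanishes at infinity for elements of $\mathbb{C}oo$.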
For $\kappa>0$, define a column vector $r(\kappa)$ of length $|\mathcal{E}|+2|\mathcal{I}|$ by \begin{equation*} r(\kappa) = \bigl((r_e(\kappa),\,e\in\mathcal{E}),(r^+_i(\kappa),\,i\in\mathcal{I}),(r^-_i(\kappa),\,i\in\mathcal{I})\bigr)^t, \end{equation*} and introduce the $(|\mathcal{E}|+2|\mathcal{I}|)\times (|\mathcal{E}|+2|\mathcal{I}|)$ matrices \begin{equation*} X_\partialm(\kappa) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & \partialm e^{\kappa \rho} \\ 0 & \partialm e^{\kappa \rho} & 1 \end{pmatrix} \end{equation*} --- appropriately modified in case that $\mathcal{E}$ or $\mathcal{I}$ is the empty set --- with the $|\mathcal{I}|\times|\mathcal{I}|$ diagonal matrices \begin{equation*} e^{\kappa \rho} = \diag{e^{\kappa \rho_i},\,i\in\mathcal{I}\bigr}. \end{equation*} Then the boundary conditions~\eqref{eq1iii}, \eqref{eq2iii} for $f(\kappa)$ read \begin{equation} \label{Zr} Z(\kappa)r(\kappa) = 0, \end{equation} with \begin{equation} \label{defZ} Z(\kappa) = (A+\kappa^2C)X_+(\kappa) + \kappa BX_-(\kappa). \end{equation} Thus, if we can show that for some $\kappa > 0$ the matrix $Z(\kappa)$ is invertible, the proof of the lemma is finished. Note that the matrix-valued function $Z$ is entire in $\kappa$, and therefore so is its determinant. Thus, if can show that $\kappa\mapsto \det Z(\kappa)$ does not vanish identically, then it can only vanish on a discrete subset of the complex plane, and for $\kappa$ in the complement of this set $Z(\kappa)$ is invertible. Write \begin{equation*} X_\partialm(\kappa) = 1 \partialm \delta X(\kappa), \end{equation*} with \begin{equation*} \delta X(\kappa) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & e^{\kappa \rho} \\ 0 & e^{\kappa \rho} & 0 \end{pmatrix}, \end{equation*} so that we can write \begin{equation*} Z(\kappa) = \hat Z_+(\kappa)\bigl(1+\delta Z(\kappa)\bigr), \end{equation*} with \begin{equation*} \delta Z(\kappa) = \hat Z_+(\kappa)^{-1}\, \hat Z_-(\kappa)\,\delta X(\kappa). \end{equation*} Observe that in case that $\mathcal{I}=\emptyset$, we obtain $\delta Z(\kappa)=0$, and in this case the invertibility of $Z(\kappa)$ for all $\kappa$ with $\kappa\ge R$ follows from lemma~\ref{lem2iv}. Hence we assume from now on that $\mathcal{I}\nablae \emptyset$. Lemma~\ref{lem2iv} provides us with the bound \begin{equation*} \bigl\|\hat Z_+(\kappa)^{-1}\hat Z_-(\kappa)\bigr\| \le \text{const.}\,|\kappa|^{q}, \end{equation*} for all $\kappa\in\mathbb{C}$, $|\kappa|\ge R$, and for some $q>0$. On the other hand, we get \begin{equation*} \|\delta X(\kappa)\| \le e^{\kappa \rho_0}, \end{equation*} for all $\kappa\le 0$ where $\rho_0 = \min_{i\in\mathcal{I}} \rho_i$. Therefore, there exists a constant $R'>0$ so that for all $\kappa\le -R'$ we have $\|\delta Z(\kappa)\|<1$, and therefore for such $\kappa$, $Z(\kappa)$ is invertible, i.e., $\det Z(\kappa)\nablae 0$. Hence there also exists $\kappa > 0$ so that $Z(\kappa)$ is invertible, and the proof is finished. \end{proof} \begin{remark}\label{rem_sa_bc} The special (and only) choice of boundary conditions in the form $c_v=0$ and $b_{v_l}=(1-a_v)/n(v)>0$ for all $v$ also gives rise to a selfadjoint nonpositive Laplace operator on $L^2(\mathcal{G})$, see theorem~5.1 in \cite{KoSc06}. The associated semigroup is positivity preserving and agrees on $C_0(\mathcal{G})\cap L^2(\mathcal{G})$ with the semigroup associated to a corresponding Brownian motion on $\mathcal{G}$. 
In turn the theorem just quoted also provides examples of selfadjoint Laplace operators, whose semigroups are positivity preserving but which are not linked to a Brownian motion process in the above way. \end{remark} \section{Joining Two Metric Graphs} \label{sect3} For what follows it will be convenient to write $\mathcal{G} = (V, \mathcal{I}, \mathcal{E}, \partial)$ for a metric graph $\mathcal{G}$ in order to make explicit that $V$ is its set of vertices, $\mathcal{I}$ its set of internal edges, and $\mathcal{E}$ its set of external edges. $\partial$ is the map from $\mathcal{L}$ into $V\cup(V\times V)$ as defined in section~\ref{sect_1}. For simplicity we assume from now on --- with the exception of the discussion in section~\ref{sect5} --- that the metric graphs under consideration do not have any \emph{tadpoles}, that is, internal edges $i$ so that $\partial^-(i)=\partial^+(i)$. Throughout this section we suppose that $\mathcal{G}_k=(V_k,\mathcal{I}_k, \mathcal{E}_k,\partial_k)$, $k=1$, $2$, are two finite metric graphs. In the following subsection we shall construct a new metric graph $\mathcal{G}=(V,\mathcal{I}, \mathcal{E},\partial)$ from $\mathcal{G}_1$ and $\mathcal{G}_2$ by connecting some of their external edges. It will be convenient to consider the metric graphs $\mathcal{G}_1$, $\mathcal{G}_2$ as subgraphs of the metric graph $\mathcal{G}_0=\mathcal{G}_1\uplus\mathcal{G}_2$ which is their (disjoint) union: $\mathcal{G}_0=(V_0,\mathcal{I}_0,\mathcal{E}_0,\partial_0)$, with $V_0=V_1\cup V_2$, $\mathcal{I}_0=\mathcal{I}_1\cup \mathcal{I}_2$, $\mathcal{E}_0=\mathcal{E}_1\cup \mathcal{E}_2$, and where the map $\partial_0$ comprises the maps $\partial_1$, $\partial_2$ in the obvious way. \subsection{Construction of the graph \boldmath $\mathcal{G}$} \label{ssect3i} Suppose that $N$ is a natural number such that $N\le \min(|\mathcal{E}_1|,|\mathcal{E}_2|)$. For $k=1$, $2$, select subsets $\mathcal{E}_k'\subset \mathcal{E}_k$ of edges with $|\mathcal{E}_1'|=|\mathcal{E}_2'|=N$ to be joined. Let these sets be labeled as follows: \begin{equation*} \mathcal{E}'_1 = \{e_1,\dotsc,e_N\},\quad \mathcal{E}'_2 = \{l_1,\dotsc,l_N\}. \end{equation*} \begin{figure} \caption{Two metric graphs $\mathcal{G}_1$ and $\mathcal{G}_2$.} \label{fig2} \end{figure} In addition we assume that we are given strictly positive numbers $b_1$, \dots, $b_N$, which will serve as the lengths of the new internal edges, as well as $\sigma_k\in\{-1,1\}$, $k=1$, \dots, $N$, which will determine their orientations. For every $k\in\{1,\dotsc,N\}$ we associate with the interval $[0,b_k]$ an abstract edge $i_k$ (not in $\mathcal{I}_0$) which is isomorphic to $[0,b_k]$. Set $\mathcal{I}_c = \{i_1,\dotsc,i_N\}$, and \begin{align*} V &= V_0,\\ \mathcal{I} &= \mathcal{I}_0\cup\mathcal{I}_c,\\ \mathcal{E} &= \mathcal{E}_0\setminus (\mathcal{E}'_1\cup\mathcal{E}'_2). \end{align*} The map $\partial$ is constructed in two steps: Let $\partial'$ be the restriction of $\partial_0$ to $\mathcal{I}_0\cup\mathcal{E}_0\setminus(\mathcal{E}'_1\cup\mathcal{E}'_2)$. Then $\partial$ is the extension of $\partial'$ to $\mathcal{I}\cup\mathcal{E}$, which is defined by \begin{equation*} \partial(i_k) = \begin{cases} \bigl(\partial_1(e_k),\partial_2(l_k)\bigr), &\text{if $\sigma_k=1$},\\ \bigl(\partial_2(l_k),\partial_1(e_k)\bigr), &\text{if $\sigma_k=-1$}, \end{cases} \qquad k=1,\dotsc,N.
\end{equation*} Figure~\ref{fig1} shows an example of a metric graph which is constructed from the two metric graphs in figure~\ref{fig2} by joining the $N=3$ pairs of external edges $(e_1,l_1)$, $(e_2,l_2)$ and $(e_3,l_3)$. The new internal edges $i_1$, $i_2$ and $i_3$ have the lengths $1$, $\sqrt{2}$ and $1$ respectively (in some scale). Conversely, let a metric graph $\mathcal{G}$ be given. Associate with every vertex $v\in V$ of $\mathcal{G}$ a single vertex graph $\mathcal{G}(v)$ with vertex $v$ and $n(v)$ external edges, where $n(v)$ is the number of edges incident with $v$ in $\mathcal{G}$. Then it is clear that we can reconstruct $\mathcal{G}$ from the single vertex graphs $\mathcal{G}(v)$, $v\in V$, by finitely many applications of the joining procedure described above. For the purposes below it will be convenient to introduce some additional notation. We let $V_c\subset V$ denote the subset of vertices of $\mathcal{G}$ which are connected to each other by the new internal edges in $\mathcal{I}_c$. That is, $v\in V_c$ is such that there exists at least one $i\in\mathcal{I}_c$ with $v\in\partial(i)$. For notational simplicity, here and below we also use $\partial(l)$ to denote the set consisting of $\partial^-(l)$ and $\partial^+(l)$ if $l\in \mathcal{I}$, and of $\partial(l)$ if $l\in\mathcal{E}$. In the example of the figures~\ref{fig1} and~\ref{fig2}, $V_c = \{v_2,v_3,w_1,w_2\}$. Consider a vertex $v\in V_c$ which belongs to $\mathcal{G}_1$, and let $i_k\in\mathcal{I}_c$, $k\in\{1,\dotsc,N\}$, be an internal edge connecting $v$ to $\mathcal{G}_2$, i.e., $v\in\partial(i_k)$. Then the point $\eta\in\mathcal{G}_2^\circ$ with local coordinates $(l_k,b_k)$ is called a \emph{shadow vertex} of the vertex $v$. $\ensuremath\text{shad}(v)\subset \mathcal{G}^0_2$ is the set of all shadow vertices of $v$. If $v\in V_c\cap\mathcal{G}_2$, its set of shadow vertices (which are points in $\mathcal{G}_1^\circ$) is defined analogously. $V_s=\ensuremath\text{shad}(V_c) = \cup_{v\in V_c}\ensuremath\text{shad}(v)$ is the set of all shadow vertices. If $\xi\in V_s$, then there exists a unique $v\in V_c$ so that $\xi\in\ensuremath\text{shad}(v)$. We put $\kappa(\xi)=v$ and thereby define a mapping from $V_s$ onto $V_c$. Of course, in general $\kappa$ is not injective. In figure~\ref{fig3} the shadow vertices of the example above are depicted as small circles on the external edges, i.e., $V_s=\{\xi_1,\xi_2,\xi_3,\eta_1, \eta_2,\eta_3\}$. For example, $\ensuremath\text{shad}(v_2)=\{\eta_1,\eta_2\}$, $\ensuremath\text{shad}(w_1)=\{\xi_1\}$, and $\kappa(\xi_2)=w_2$, $\kappa(\eta_2)=v_2$. \begin{figure} \caption{The graphs $\mathcal{G} \label{fig3} \end{figure} \subsection{Construction of a Preliminary Version of the Brownian Motion} \label{ssect3ii} From now we suppose that we are given a family of probability spaces \begin{equation*} (\mathbb{X}i^0,\mathcal{C}^0,Q^0_\xi), \qquad \,\xi\in\mathcal{G}_0, \end{equation*} and that thereon a Brownian motion with state space $\mathcal{G}_0$ in the sense of definition~\ref{def1i} is defined. This Brownian motion is denoted by $Z^0=(Z^0(t),\,t\in\mathbb{R}_+)$. Actually, since $\mathcal{G}_0=\mathcal{G}_1\cup \mathcal{G}_2$ and $\mathcal{G}_1$, $\mathcal{G}_2$ are disconnected, this is the same as saying that we are given a Brownian motion on $\mathcal{G}_1$ and one on $\mathcal{G}_2$. However, notationally it will be more convenient to view this as one stochastic process. 
We assume, as we may, that $Z^0$ has exclusively c\`adl\`ag paths which are continuous up to the lifetime $\zeta^0$ of $Z^0$. $\mathcal{F}^0=(\mathcal{F}^0_t,\,t\in\mathbb{R}_+)$ denotes the natural filtration of $Z^0$. The hitting time of $V_s$ by $Z^0$ is denoted by $\tau^0$, i.e., \begin{equation*} \tau^0 = \inf\,\{t>0,\,Z^0(t) \in V_s\}. \end{equation*} Furthermore, we assume that $\vartheta=(\vartheta_t,\,t\in\mathbb{R}_+)$ is a family of shift operators for $Z^0$ acting on $\mathbb{X}i^0$. For any topological space $(T,\mathcal{T})$ denote by $C_\mathbb{D}elta(\mathbb{R}_+,T)$ the space of mappings $\omega$ from $\mathbb{R}_+$ into $T\cup\{\mathbb{D}elta\}$ which are right continuous, have left limits in $T$, are continuous up to the lifetime \begin{equation*} \zeta_\omega = \inf\,\{t>0,\,\omega(t)= \mathbb{D}elta\}, \end{equation*} and which are such that $\omega(t)=\mathbb{D}elta$ implies $\omega(s)=\mathbb{D}elta$ for all $s\ge t$. In particular and in the present context, $\omega\in\mathbb{C}D(\mathbb{R}_+,\mathcal{G}_0)$ is either continuous from $\mathbb{R}_+$ into $\mathcal{G}_0$ or it has a jump from $\mathcal{G}_1$ or $\mathcal{G}_2$ to $\mathbb{D}elta$, but there can be no jump from $\mathcal{G}_1$ to $\mathcal{G}_2$ or vice versa. We shall make use of some special versions of the process $Z^0$, which we introduce now. For every $v\in V_c$, $Z^1_v=(Z^1_v(t),\,t\in\mathbb{R}_+)$ denotes a Brownian motion on $\mathcal{G}_0$ defined on another probability space $(\mathbb{X}i^1_v,\mathcal{C}^1_v,\mu^1_v)$ such that under $\mu^1_v$, $Z^1_v$ is equivalent to $Z^0$ under $Q^0_v$. We suppose that $Z^1_v$ exclusively has paths which start in $v$ and which belong to $\mathbb{C}D(\mathbb{R}_+,\mathcal{G}_0)$. (For example, one can use a standard path space construction to obtain such a version from $(\mathbb{X}i^0,\mathcal{C}^0, Q^0_v, Z^0)$.) The hitting time of $V_s$ by $Z^1_v$ is denoted by $\tau^1_v$, its lifetime by $\zeta^1_v$. The idea to define the preliminary version $Y=(Y(t),\,t\in\mathbb{R}_+)$ of the Brownian motion on $\mathcal{G}$ is to construct its paths as follows. Let $\xi\in\mathcal{G}$ be a given starting point. $\mathcal{G}$ (viewed as a set) has the following decomposition (cf.\ figure~\ref{fig4}): \begin{equation*} \mathcal{G} = \hat\mathcal{G}_1\uplus\hat\mathcal{G}_2, \end{equation*} with \begin{align*} \hat\mathcal{G}_1 &= \mathcal{G}_1\setminus \bigl(e^\circ_1\cup\dotsb\cup e^\circ_N\bigr),\\ \hat\mathcal{G}_2 &= \bigl(\mathcal{G}_2\setminus \bigl(l^\circ_1\cup\dotsb\cup l^\circ_N\bigr)\bigr) \cup \bigl(i_1^\circ\cup\dotsc\cup i_N^\circ\bigr). \end{align*} \begin{figure} \caption{The starting points of $Y$.} \label{fig4} \end{figure} Thus we may consider $\xi$ instead as a point in $\hat\mathcal{G}_1\uplus\hat\mathcal{G}_2\subset \mathcal{G}_0$. We pause here for the following remark: Of course, the convention we make that all new open inner edges $i^\circ_1,\dotsc,i^\circ_N$ are attached to $\hat\mathcal{G}_2$ is somewhat arbitrary. Just as well any subset of them could have been attached to $\hat\mathcal{G}_1$ instead. Even though different conventions lead to processes with different paths, the main result of this section, theorem~\ref{thm3xiii}, remains unchanged. It follows that all resulting processes are equivalent to each other. Let $Y$ start as $Z^0$ in $\xi\in\hat\mathcal{G}_1\uplus\hat\mathcal{G}_2$, and consider one trajectory. 
(In order to avoid any confusion, let us point out that even though $\xi\in\hat\mathcal{G}_k$, $k=1$, $2$, the process $Z^0$ moves in $\mathcal{G}_k$.) If this trajectory reaches the cemetery point $\mathbb{D}elta$ before hitting the set $V_s$ of shadow vertices, it is the complete trajectory of $Y$ and it stays forever at the cemetery. If the trajectory hits a shadow vertex $\eta\in V_s$ before its lifetime expires, this piece of the trajectory of $Y$ ends at the hitting time $\tau^0$. Set $v=\kappa(\eta)$, and let the trajectory of $Y$ continue with an (independent) trajectory of $Z^1_v$ until its lifetime expires or it hits a shadow vertex, and so on. Figure~\ref{fig5} explains the idea. \begin{figure} \caption{The construction of the process $Y$.} \label{fig5} \end{figure} The construction described above is formalized in the following way. Define \begin{equation*} \mathbb{X}i^1 = \mathbb{B}igCart_{v\in V_c} \mathbb{X}i^1_v,\quad \mathcal{C}^1 = \bigotimes_{v\in V_c} \mathcal{C}^1_v,\quad Q^1 = \bigotimes_{v\in V_c} \mu^1_v, \end{equation*} and view each of the stochastic processes $Z^1_v$, $v\in V_c$, as well as the random variables $\tau^1_v$, $\zeta^1_v$, as defined on this product space. Let \begin{equation*} \bigl(\mathbb{X}i^n,\mathcal{C}^n,Q^n,Z^n,\tau^n,\zeta^n\bigr),\qquad n\in\mathbb{N},\,n\ge 2, \end{equation*} be an independent sequence of copies of \begin{equation*} \bigl(\mathbb{X}i^1,\mathcal{C}^1,Q^1,Z^1,\tau^1,\zeta^1\bigr), \end{equation*} where $Z^1=(Z^1_v,\,v\in V_c)$ and similarly for $\tau^1$, $\zeta^1$. Next set \begin{equation*} \mathbb{X}i = \mathbb{B}igCart_{n=0}^\infty \mathbb{X}i^n,\quad \mathcal{C} = \bigotimes_{n=0}^\infty \mathcal{C}^n,\quad Q_\xi = Q^0_\xi \otimes\mathbb{B}igl(\bigotimes_{n=1}^\infty Q^n\mathbb{B}igr),\ \xi\in\mathcal{G}. \end{equation*} The procedure sketched above of pasting together pieces of the trajectories of the various processes $Z^n_v$ is controlled by a Markov chain $(K_n,\,n\in\mathbb{N})$ which moves at random times $(S_n,\,n\in\mathbb{N})$ in the state space $V_c\cup\{\mathbb{D}elta\}$. We set out to construct this chain $\bigl((S_n,K_n),\,n\in\mathbb{N}\bigr)$. Define $S_1=\tau^0$. On $\{S_1=+\infty\}$, i.e., in the case when $\zeta^0<\tau^0$, set $K_1 = \mathbb{D}elta$. Otherwise define \begin{equation*} K_1 = \kappa\bigl(Z^0(\tau^0)\bigr). \end{equation*} Observe that since all processes considered have right continuous paths, they are all measurable stochastic processes, and therefore the evaluation of their time argument at a random time yields a well-defined random variable. Set $S_2=+\infty$ on $\{S_1=+\infty\}$, while \begin{equation*} S_2 = S_1 + \tau^1_{K_1} \end{equation*} on $\{S_1<+\infty\}$. On $\{S_2=+\infty\}$ put $K_2=\mathbb{D}elta$, and on its complement \begin{equation*} K_2 = \kappa\bigl(Z^1_{K_1}(\tau^1_{K_1})\bigr). \end{equation*} These construction steps are iterated in the obvious way: The sequence \begin{equation*} \bigl((S_n,K_n),\,n\in\mathbb{N}\bigr) \end{equation*} is inductively defined by $S_n=+\infty$ and $K_n=\mathbb{D}elta$ on $\{S_{n-1}=+\infty\}$, while \begin{align*} S_n &= S_{n-1} + \tau^{n-1}_{K_{n-1}},\\[1ex] K_n &= \kappa\bigl(Z^{n-1}_{K_{n-1}}(\tau^{n-1}_{K_{n-1}})\bigr) \end{align*} on $\{S_{n-1}<+\infty\}$. Note that by construction $K_n = \mathbb{D}elta$, $n\in\mathbb{N}$, if and only if $S_n=+\infty$, and in that case $K_{n'}=\mathbb{D}elta$, $S_{n'}=+\infty$ for all $n'\ge n$. Thus $(+\infty,\mathbb{D}elta)$ is a cemetery state for the chain $((S_n,K_n),\,n\in\mathbb{N})$. 
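To illustrate the mechanism of the chain in the example of figure~\ref{fig3} (the particular shadow vertex is of course chosen only for the sake of illustration): if the path of $Z^0$ started in $\xi$ hits the set $V_s$ for the first time at the shadow vertex $\eta_2$, then \begin{equation*} S_1=\tau^0<+\infty,\qquad K_1=\kappa\bigl(Z^0(\tau^0)\bigr)=\kappa(\eta_2)=v_2, \end{equation*} and the construction continues with a trajectory of $Z^1_{v_2}$ started at the vertex $v_2$; if instead the lifetime $\zeta^0$ expires before $V_s$ is reached, then $S_1=+\infty$ and the chain is absorbed in its cemetery state.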
For example, with a Borel--Cantelli argument it is not hard to see (cf.\ also~\cite{BMMG0}) that there exists a set $\mathbb{X}i'\in\mathcal{C}$ so that for all $\xi\in\mathcal{G}$, $Q_\xi(\mathbb{X}i')=0$, and for all $\omega\in\mathbb{X}i\setminus\mathbb{X}i'$ the sequence $(S_n(\omega),\,n\in\mathbb{N})$ increases to $+\infty$ in such a way that for all $n\in\mathbb{N}$, $S_n(\omega)<S_{n+1}(\omega)$ holds when $S_n(\omega)<+\infty$. Now we are ready to construct $Y=(Y(t),\,t\in\mathbb{R}_+)$. Let $\xi\in\mathcal{G} = \hat\mathcal{G}_1\uplus\hat\mathcal{G}_2$ be a given starting point, and suppose that $t\in\mathbb{R}_+$ is given. On $\mathbb{X}i'$ set $Y(t)=\mathbb{D}elta$. On $\mathbb{X}i\setminus\mathbb{X}i'$ there is a unique $n\in\mathbb{N}_0$ so that $t\in [S_n, S_{n+1})$, with the convention $S_0=0$. If $t\in [0,S_1)$, define $Y(t) = Z^0(t)$. If $t\in [S_n, S_{n+1})$ for $n\in\mathbb{N}$, then necessarily $S_n$ is finite, so that $K_n\in V_c$, and we define \begin{equation*} Y(t) = Z^n_{K_n}(t-S_n). \end{equation*} In addition, we make the convention $Y(+\infty)=\mathbb{D}elta$. The natural filtration generated by $Y$ will be denoted by $\mathcal{F}^Y=(\mathcal{F}^Y_t,\,t\in\mathbb{R}_+)$. It follows from the construction of $Y$ that $\mathbb{D}elta$ is a cemetery state for $Y$. Indeed, suppose that $\omega\in\mathbb{X}i\setminus \mathbb{X}i'$, and that the trajectory $Y(\,\cdot\,,\omega)$ reaches the point $\mathbb{D}elta$ at a finite time $\zeta^Y(\omega)$. This implies that there is an $n\in\mathbb{N}_0$ such that $\zeta^Y(\omega)\in [S_n(\omega),S_{n+1}(\omega))$. Then $S_n(\omega)$ is finite, and therefore $K_n(\omega)\in V_c$ so that $Y(\,\cdot\,,\omega)$ is equal to $Z^n_{K_n}(\,\cdot\,-S_n(\omega),\omega)$ on the interval $[S_n(\omega),S_{n+1}(\omega))$, and this trajectory reaches $\mathbb{D}elta$ before hitting a shadow vertex. Hence $\tau^n_{K_n}=+\infty$, which entails that $S_{n+1}(\omega)=+\infty$. Consequently, after $S_n(\omega)$ there are no finite crossover times for this trajectory, and therefore $Y(\,\cdot\,,\omega)$ stays at~$\mathbb{D}elta$ forever. Furthermore, note that the left limit $Y(\zeta^Y(\omega)-,\omega)$ at $\zeta^Y(\omega)$ belongs to $V_0$. In terms of the stochastic process $Y$ the random times $S_n$, $n\in\mathbb{N}$, have the following description. Suppose that $Y$ starts in $\xi\in\mathcal{G}$. Then $S_1$ is the hitting time of $V_c$. But if $\xi\in V_c$, then actually it is the hitting time of $V_c\setminus\{\xi\}$, because at time $S_1$ the process hits a vertex in $V_c$ which corresponds to the first hitting, by $Z^0$, of a shadow vertex, i.e., of a point in $\mathcal{G}_0$ different from $\xi$. In particular, $S_1>0$. Similarly, $S_n$ is the hitting time of $V_c\setminus\{K_{n-1}\}$ by $Y$ after time $S_{n-1}$. In appendix~\ref{appA} it is shown that for every $n\in\mathbb{N}$, $S_n$ is a stopping time with respect to $\mathcal{F}^Y$. It follows from its construction that $Y$ is a normal process, that is, for every $\xi\in\mathcal{G}$, $Q_\xi(Y(0)=\xi)=1$. Furthermore, all paths of $Y$ belong to $\mathbb{C}D(\mathbb{R}_+,\mathcal{G})$. Let $S_V$ be the hitting time of the set of vertices $V$ of $\mathcal{G}$ by $Y$. Then $S_V \le S_1$ because $V_c\subset V$, and therefore we find that $Y(\,\cdot\,\land S_V)$ is pathwise equal to $Z^0(\,\cdot\,\land S^0_V)$, where $S^0_V$ denotes the hitting time of $V$ by $Z^0$. Suppose that the starting point $\xi$ belongs to $l^\circ$, $l\in\mathcal{I}\cup\mathcal{E}$, and $l$ is isomorphic to the interval $I$.
Then by definition of $Z^0$ (cf.\ definition~\ref{def1i}), the stopped process $Z^0(\,\cdot\,\land S^0_V)$ is equivalent to a standard Brownian motion on the interval $I$ with absorption at the endpoint(s) of $I$. Hence the same is true for $Y$: $Y(\,\cdot\,\land S_V)$ is equivalent to a standard Brownian motion on $I$ with absorption at the endpoint(s) of $I$. \subsection{Markov property of \boldmath $Y$} \label{ssect3iii} For any measurable space $(M,\mathcal{M})$, $B(M)$ denotes the space of bounded, measurable functions on $M$. Every $f\in B(\mathcal{G}^n)$, $n\in\mathbb{N}$, is extended to $(\mathcal{G}\cup\{\mathbb{D}elta\})^n$ by $f(\xi_1,\dotsc,\xi_n)=0$, $(\xi_1,\dotsc,\xi_n)\in(\mathcal{G}\cup\{\mathbb{D}elta\})^n$, whenever there is an index $k\in\{1,\dotsc, n\}$ so that $\xi_k=\mathbb{D}elta$. In this subsection we shall prove the following \begin{proposition} \label{prop3i} $Y$ has the simple Markov property: For all $f\in B(\mathcal{G})$, $s$, $t\in\mathbb{R}_+$, $\xi\in\mathcal{G}$, \begin{equation} \label{eq3i} E_\xi\bigl(f\bigl(Y(s+t)\bigr)\ensuremath\,\big|\, \mathcal{F}^Y_s\bigr) = E_{Y(s)}\bigl(f\bigl(Y(t)\bigr)\bigr) \end{equation} holds true $Q_\xi$--a.s.\ on $\{Y(s)\nablae\mathbb{D}elta\}$. \end{proposition} The proof of proposition~\ref{prop3i} is somewhat technical and lengthy. Therefore it will be broken up into a sequence of lemmas. For every $n\in\mathbb{N}$ the probability space $(\mathbb{X}i,\mathcal{C},Q_\xi)$, $\xi\in\mathcal{G}$, underlying the construction of the process $Y$ may be written as the product of the probability spaces $(\mathbb{X}il, \mathcal{C}l, \mathbb{Q}l_\xi)$ and $(\mathbb{X}iu, \mathcal{C}u, \mathbb{Q}u)$ with \begin{align*} \mathbb{X}il &= \mathbb{B}igCart_{j=0}^{n-1} \mathbb{X}i^j, & \mathbb{X}iu &= \mathbb{B}igCart_{j=n}^{\infty} \mathbb{X}i^j,\\ \mathcal{C}l &= \bigotimes_{j=0}^{n-1} \mathcal{C}^j, & \mathcal{C}u &= \bigotimes_{j=n}^{\infty} \mathcal{C}^j,\\ \mathbb{Q}l_\xi &= Q_\xi^0\otimes\mathbb{B}igl(\bigotimes_{j=1}^{n-1} Q^j\mathbb{B}igr), & \mathbb{Q}u &= \bigotimes_{j=n}^{\infty} Q^j. \end{align*} Introduce a family $\mathcal{B}=(\mathcal{B}_n,\,n\in\mathbb{N}_0)$ of sub--$\sigma$--algebras of $\mathcal{C}$ by setting \begin{equation*} \mathcal{B}_n = \mathcal{C}^{\le n-1}\times \mathbb{X}i^{\ge n}. \end{equation*} Obviously, the family $\mathcal{B}$ forms a filtration. Furthermore, from the construction of $K_n$ and $S_n$ it is easy to see that the chain $((S_n, K_n),\,n\in\mathbb{N})$ is adapted to $\mathcal{B}$. First we study the chain $((S_n,K_n),\,n\in\mathbb{N})$ in more detail. Recall our convention that $S_0=0$. We set $\mathcal{B}_0=\{\emptyset,\mathbb{X}i\}$, and under the law $Q_v$, $v\in V_c$, we put $K_0=v$. $g\in\ensuremath B(\R_+\times V_c)$ is extended to $\mathbb{R}bp\times \bigl(V_c\cup\{\mathbb{D}elta\}\bigr)$ by $g(+\infty,\,\cdot\,)=g(\,\cdot\,,\mathbb{D}elta) =g(+\infty,\mathbb{D}elta)=0$. For $g\in\ensuremath B(\R_+\times V_c)$, $n\in\mathbb{N}_0$, define \begin{equation} \label{eq3ii} (U_n g)(s,v) = E_{v}\bigl(g(s+S_n, K_n)\bigr),\qquad s\in\mathbb{R}_+,\,v\in V_c. \end{equation} Note that $U_0=\text{id}$, and that for every $g\in\ensuremath B(\R_+\times V_c)$ and all $n\in\mathbb{N}$, $U_n g\in\ensuremath B(\R_+\times V_c)$. In particular, the convention mentioned above applies to $U_n g$, too. 
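For instance, and only to illustrate these conventions: for $g\equiv 1$ on $\mathbb{R}_+\times V_c$, extended by zero as described above, we obtain \begin{equation*} (U_n g)(s,v)=E_v\bigl(g(s+S_n,K_n)\bigr)=Q_v\bigl(S_n<+\infty\bigr),\qquad s\in\mathbb{R}_+,\ v\in V_c, \end{equation*} because by construction $K_n\in V_c$ holds exactly on the event $\{S_n<+\infty\}$.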
\omegaodbreak \begin{lemma} \label{lem3ii} {\ } \begin{enum_a} \item For all $m$, $n\in\mathbb{N}$, $m\le n$, $\xi\in\mathcal{G}$, $s\ge 0$, and every $g\in\ensuremath B(\R_+\times V_c)$ the following formula holds true $Q_\xi$--a.s.\ \begin{equation} \label{eq3iii} E_\xi\bigl(g(s+S_n, K_n) \ensuremath\,\big|\, \mathcal{B}_m)\bigr) = (U_{n-m}g)(s+S_m,K_m). \end{equation} \item $(U_n,\,n\in\mathbb{N}_0)$ forms a semigroup of linear maps on $\ensuremath B(\R_+\times V_c)$. In particular, for all $(s,v)\in\mathbb{R}_+\times V_c$ under $Q_{v}$ the chain $\bigl((s+S_n, K_n),\,n\in\mathbb{N}_0\bigr)$ is a homogeneous Markov chain with transition kernel \begin{equation*} P\bigl((s,v), A\bigr) = Q_{v}\bigl((s+S_1,K_1)\in A\bigr),\qquad A\in\mathcal{B}(\mathbb{R}_+\times V_c). \end{equation*} \end{enum_a} \end{lemma} \begin{proof} For $m=n$ formula~\eqref{eq3iii} is trivial. Consider the case when $n\ge 2$, $m=n-1$. Let $\ensuremath\mathbb{L}ambda\in\mathcal{B}_{n-1}$, $v\in V_c$, and put $\ensuremath\mathbb{L}ambda_v = \ensuremath\mathbb{L}ambda\cap\{K_{n-1}=v\}\in\mathcal{B}_{n-1}$. From the construction of $S_n$ and $K_n$ \begin{align*} E_\xi\bigl(&g(s+S_n,K_n); \ensuremath\mathbb{L}ambda_v\bigr)\\[1ex] &\hspace{-1.5em}= \int_{\mathbb{X}il[n-2]} 1_{\ensuremath\mathbb{L}ambda_v} \mathbb{B}igl(\int_{\mathbb{X}iu[n-1]}g\bigl(s+S_{n-1}+\tau^{n-1}_v, \kappa\bigl(Z^{n-1}_v(\tau^{n-1}_v)\bigr)\bigr)\,d\mathbb{Q}u[n-1]\mathbb{B}igr)\,d\mathbb{Q}l[n-2]_\xi\\[1ex] &\hspace{-1.5em}= \int_{\mathbb{X}il[n-2]} 1_{\ensuremath\mathbb{L}ambda_v} \mathbb{B}igl(\int_{\mathbb{X}i^0}g\bigl(s+u+\tau^0, \kappa\bigl(Z^0(\tau^0)\bigr)\bigr)\,dQ^0_v\mathbb{B}igr)\mathbb{E}val_{u=S_{n-1}}\,d\mathbb{Q}l[n-2]_\xi\\[1ex] &\hspace{-1.5em}=E_\xi\mathbb{B}igl(E_v\bigr(g(s+u+S_1,K_1)\bigr)\mathop{\big|}\nolimits_{u=S_{n-1}};\ensuremath\mathbb{L}ambda_v\mathbb{B}igr)\\[1ex] &\hspace{-1.5em}=E_\xi\mathbb{B}igl(E_{K_{n-1}}\bigr(g(s+u+S_1,K_1)\bigr)\mathop{\big|}\nolimits_{u=S_{n-1}};\ensuremath\mathbb{L}ambda_v\mathbb{B}igr)\\[1ex] &\hspace{-1.5em}=E_\xi\mathbb{B}igl( (U_1 g)(s+S_{n-1},K_{n-1});\ensuremath\mathbb{L}ambda_v\mathbb{B}igr), \end{align*} where in the last step we used definition~\eqref{eq3ii}. If in the preceding calculation we replace the event $\{K_{n-1}=v\}$ by $\{K_{n-1}=\mathbb{D}elta\}$, we get zero on both sides because $K_{n-1}=\mathbb{D}elta$ implies $K_n=\mathbb{D}elta$ (see subsection~\ref{ssect3ii}). Thus summation over $v\in V_c$ gives \begin{equation*} E_\xi\bigl(g(s+S_n,K_n);\ensuremath\mathbb{L}ambda\bigr) = E_\xi\bigl((U_1 g)(s+S_{n-1}, K_{n-1});\ensuremath\mathbb{L}ambda\bigr), \end{equation*} and equation~\eqref{eq3iii} is proved for the case where $n\ge 2$ and $m=n-1$. As a consequence we get \begin{align*} (U_n g)(s,v) &= E_v\bigl(E_v\bigl(g(s+S_n,K_n)\ensuremath\,\big|\, \mathcal{B}_{n-1}\bigr)\bigr)\\ &= E_v\bigl((U_1 g)(s+S_{n-1},K_{n-1})\bigr)\\ &= \bigl(U_{n-1}\comp U_1 g\bigr)(s,v). \end{align*} Now the general semigroup relation $U_{n+m}=U_n\comp U_m$, $m$, $n\in\mathbb{N}_0$, follows by an application of Fubini's theorem. 
Finally we show formula~\eqref{eq3iii} in the general case: \begin{align*} E_\xi\bigl(g(s+S_n,K_n)&\ensuremath\,\big|\, \mathcal{B}_m\bigr)\\ &= E_\xi\bigl(E_\xi\bigl(g(s+S_n,K_n)\ensuremath\,\big|\, \mathcal{B}_{n-1}\bigr)\ensuremath\,\big|\, \mathcal{B}_m\bigr)\\ &= E_\xi\bigl((U_1 g)(s+S_{n-1},K_{n-1})\ensuremath\,\big|\,\mathcal{B}_m\bigr)\\ &=\quad \dots \quad=\\ &= E_\xi\bigl((U_1\comp\dotsb\comp U_1 g)(s+S_m,K_m)\ensuremath\,\big|\,\mathcal{B}_m\bigr)\\ \end{align*} where the multiple composition in the last expression involves $n-m$ operators $U_1$. The semigroup property of $(U_n,\,n\in\mathbb{N}_0)$ implies formula~\eqref{eq3iii}. \end{proof} It will be useful to introduce some additional notation. For $r\in\mathbb{N}$, let $\mathbb{R}rp$ denote the set of all increasingly ordered $r$--tuples with entries in $\mathbb{R}_+$. If $u\in\mathbb{R}rp$ and $s\in\mathbb{R}$ we set $u+s=(u_1+s,\dotsc, u_r+s)\in\mathbb{R}^r$. $u<s$ means that $u_i<s$ for all $i=1$, \dots, $r$ or equivalently $u_r<s$. The relations $u>s$, $u\le s$, and $u\ge s$ are defined analogously. In particular, when $s\le u$, then $u-s\in\mathbb{R}rp$. For $r$, $q\in\mathbb{N}$, and $u\in\mathbb{R}rp$, $w\in\mathbb{R}rp[q]$ with $u\le w_1$, define \begin{equation*} (u,w) = (u_1,\dotsc, u_r,w_1,\dotsc,w_q)\in\mathbb{R}rp[r+q]. \end{equation*} Furthermore, $Y(u)$ stands for $(Y(u_1),\dotsc, Y(u_r))$, and similarly for $Z^n_v(u)$, $n\in\mathbb{N}_0$, $v\in V_c$. In the sequel we shall consider random variables $W_m(h,g,u)$ of the following form \begin{equation} \label{eq3iv} W_m(h,g,u) = h\bigl(Y(u)\bigr)\,\chi_m(u)\,g(S_{m+1},K_{m+1}), \end{equation} where $m\in\mathbb{N}$, $h$ belongs to $\ensuremath BG$, $r\in \mathbb{N}$, $g$ to $\ensuremath B(\R_+\times V_c)$, and $u\in\mathbb{R}rp$. Here we have set \begin{equation*} \chi_m(u) = 1_{\{S_m\le u <S_{m+1}\}}. \end{equation*} For $s\ge 0$ with $s\le u$ define \begin{equation} \label{eq3v} W_{m,s}(h,g,u) = h\bigl(Y(u-s)\bigr)\,\chi_m(u-s)\,g(s+S_{m+1},K_{m+1}), \end{equation} so that $W_{m,s=0}(h,g,u)=W_m(h,g,u)$. Moreover set \begin{equation} \label{eq3vi} R_m(h,g,u)(s,v) = E_v\bigl(W_{m,s}(h,g,u)\bigr),\qquad s\in\mathbb{R}_+,\,s\le u,\,v\in V_c. \end{equation} For the following it will be convenient to let $W_{m,s}(h,g,u)$ and $R_m(h,g,u)(s,v)$, $v\in V_c$, be defined for all $s\in\mathbb{R}_+$. To this end we make the convention that $Y(t)=\mathbb{D}elta$ for all $t<0$. Then by $W_{m,s}(h,1,u)=W_m(h,1,u-s)$ the following formula \begin{equation} \label{eq3via} R_m(h,1,u)(s+t,v) = R_m(h,1,u-s)(t,v) \end{equation} holds for all $m\in\mathbb{N}$, $h\in B(\mathcal{G}^r)$, $r\in\mathbb{N}$, $u\in\mathbb{R}rp$, $s$, $t\in\mathbb{R}_+$, $v\in V_c$. Suppose that $r$, $q\in\mathbb{N}$, and that $h\in\ensuremath BG$, $f\in\ensuremath BG[q]$. Then $h\otimes f$ denotes the function in $B(\mathcal{G}^{r+q})$ given by \begin{equation} \label{eq3vib} \begin{split} h\otimes f&(\eta_1,\dotsc,\eta_{r+q})\\ &= h(\eta_1,\dotsc,\eta_r)\,f(\eta_{r+1},\dotsc,\eta_{r+q}),\qquad (\eta_1,\dotsc,\eta_{r+q})\in\mathcal{G}^{r+q}. \end{split} \end{equation} \begin{lemma} \label{lem3iii} Suppose that $r\in\mathbb{N}$, $u\in\mathbb{R}rp$, and that $h\in\ensuremath BG$. 
\begin{enum_a} \item If $q\in\mathbb{N}$, $w\in \mathbb{R}rp[q]$ with $u_r\le w$, and $f\in\ensuremath BG[q]$, then \begin{subequations} \label{eq3vii} \begin{equation} \label{eq3viia} R_0\bigl(h\otimes f, 1, (u,w)\bigr)= R_0\big(M(f,w,u_r)h, 1,u\bigr) \end{equation} holds true, where $M(f,w,s)h\in B(\mathcal{G}^r)$ is given by \begin{equation} \label{eq3viib} \bigl(M(f,w,s)h\bigr)(\eta) = h(\eta)\,E_{\eta_r}\bigl(W_{0,s}(f,1,w)\bigr), \quad \eta\in\mathcal{G}^r,\,0\le s\le w. \end{equation} \end{subequations} \item If $g\in\ensuremath B(\R_+\times V_c)$, then \begin{subequations} \label{eq3viii} \begin{equation} \label{eq3viiia} R_0\bigl(h,g,u\bigr) = R_0\bigl(N(g,u_r)h,1,u\bigr) \end{equation} holds, where $N(g,s)h\in B(\mathcal{G}^r)$ is given by \begin{equation} \label{eq3viiib} \bigl(N(g,s) h\bigr)(\eta)= h(\eta)\,E_{\eta_r}\bigl(g(s+S_1,K_1)\bigr), \qquad \eta\in\mathcal{G}^r,\,s\ge 0. \end{equation} \end{subequations} \end{enum_a} \end{lemma} \begin{proof} Both statements follow from the Markov property of the Brownian motion $Z^0$ on $\mathcal{G}_0$ underlying the construction of $Y$. We only prove statement~(b), the proof of~(a) is similar and therefore omitted. Using the definition of $R_0$ and the construction of $Y$, we compute for $s\in\mathbb{R}_+$, $v\in V_c$, as follows: \begin{align*} R_0\bigl(h,g,u\bigr)(s,v) &= E_v\bigl(h\bigl(Y(u-s)\bigr)\,\chi_0(u-s)\,g(s+S_1,K_1)\bigr)\\ &= E_v\bigl(h\bigl(Z^0(u-s)\bigr)\,1_{\{0\le u-s <\tau^0\}}\, g\bigl(s+\tau^0,\kappa(Z^0(\tau^0))\bigr)\bigr). \end{align*} Recall that $\mathcal{F}^0$ denotes the natural filtration of $Z^0$, and $\vartheta$ is a family of shift operators for $Z^0$. It follows from the definition of the stopping time $\tau^0$ and the path properties of $Z^0$, that on $\{\tau^0\ge u_r-s\}$ the relation $\tau^0 = u_r-s + \tau^0\comp\vartheta_{u_r-s}$ holds true. Moreover, it is easy to check that on this event we have $Z^0(\tau^0)=Z^0(\tau^0)\comp\vartheta_{u_r-s}$. Therefore \begin{align*} R_0\bigl(h,g,u\bigr)(s,v) &= E_v\mathbb{B}igl(h\bigl(Z^0(u-s)\bigr)\,1_{\{0\le u-s <\tau^0\}}\\ &\hspace{2em} \times E_v\bigl(g(u_r+\tau^0,\kappa(Z^0(\tau^0))) \comp\vartheta_{u_r-s}\ensuremath\,\big|\, \mathcal{F}^0_{u_r-s}\bigr)\mathbb{B}igr)\\ &= E_v\mathbb{B}igl(h\bigl(Z^0(u-s)\bigr)\,1_{\{0\le u-s <\tau^0\}}\\ &\hspace{2em} \times E_{Z^0(u_r-s)}\bigl(g(u_r+\tau^0,\kappa(Z^0(\tau^0)))\bigr)\mathbb{B}igr)\\ &= E_v\mathbb{B}igl(h\bigl(Y(u-s)\bigr)\,\chi_0(u-s)\, E_{Y(u_r-s)}\bigl(g(u_r+S_1,K_1)\bigr)\mathbb{B}igr)\\ &= R_0\bigl(N(g,u_r)h,1,u\bigr)(s,v), \end{align*} and the proof is concluded. \end{proof} \begin{lemma} \label{lem3iv} For all $m$, $r\in\mathbb{N}$, $h\in\ensuremath BG$, $g\in\ensuremath B(\R_+\times V_c)$, $u\in\mathbb{R}rp$, $\xi\in\mathcal{G}$, the formula \begin{equation} \label{eq3ix} E_\xi\bigl(W_m(h,g,u)\ensuremath\,\big|\, \mathcal{B}_m\bigr) = R_0(h,g,u)(S_m,K_m) \end{equation} holds $Q_\xi$--a.s. \end{lemma} \begin{proof} Observe that both side of equation~\eqref{eq3ix} vanish on the set $\{K_m=\mathbb{D}elta\}$. Let $\ensuremath\mathbb{L}ambda\in\mathcal{B}_m$, $v\in V_c$, and set $\ensuremath\mathbb{L}ambda_v=\ensuremath\mathbb{L}ambda\cap\{K_m=v\}\in\mathcal{B}_m$. 
Then \begin{align*} E_\xi\bigl(&W_m(h,g,u);\ensuremath\mathbb{L}ambda_v\bigr)\\ &= \int_{\mathbb{X}il[m-1]} 1_{\ensuremath\mathbb{L}ambda_v}\, \mathbb{B}igl(\int_{\mathbb{X}iu[m]} h\bigl(Y(u)\bigr)\, 1_{\{S_m\le u<S_{m+1}\}}\\ &\hspace{7em} \times g(S_{m+1},K_{m+1})\,d\mathbb{Q}u[m]\mathbb{B}igr)\,d\mathbb{Q}l[m-1]_\xi\\ &= \int_{\mathbb{X}il[m-1]} 1_{\ensuremath\mathbb{L}ambda_v}\, \mathbb{B}igl(\int_{\mathbb{X}i^m} h\bigl(Z^m_v(u-s)\bigr)\, 1_{\{0\le u-s < \tau^m_v\}}\\ &\hspace{7em} \times g\bigl(s+\tau^m_v,\kappa(Z^m_v(\tau^m_v))\bigr)\,dQ^m\mathbb{B}igr) \mathbb{E}val_{s=S_m}\,d\mathbb{Q}l[m-1]_\xi\\ &= \int_{\mathbb{X}il[m-1]} 1_{\ensuremath\mathbb{L}ambda_v}\, \mathbb{B}igl(\int_{\mathbb{X}i^0} h\bigl(Z^0(u-s)\bigr)\, 1_{\{0\le u-s < \tau^0\}}\\ &\hspace{7em} \times g\bigl(s+\tau^0,\kappa(Z^0(\tau^0))\bigr)\,dQ^0_v\mathbb{B}igr) \mathbb{E}val_{s=S_m}\,d\mathbb{Q}l[m-1]_\xi\\ &= E_\xi\mathbb{B}igl(E_v\bigl(h\bigl(Y(u-s)\bigr)\,\chi_0(u-s)\,g(s+S_1,K_1)\bigr) \mathop{\big|}\nolimits_{s=S_m};\,\ensuremath\mathbb{L}ambda_v\mathbb{B}igr)\\ &= E_\xi\mathbb{B}igl(E_{K_m}\bigl(h\bigl(Y(u-s)\bigr)\,\chi_0(u-s)\,g(s+S_1,K_1)\bigr) \mathop{\big|}\nolimits_{s=S_m};\,\ensuremath\mathbb{L}ambda_v\mathbb{B}igr)\\ &= E_\xi\bigl(R_0(h,g,u)(S_m,K_m);\,\ensuremath\mathbb{L}ambda_v\bigr). \end{align*} Summation over $v\in V_c$ finishes the proof. \end{proof} \begin{lemma} \label{lem3v} For all $m$, $r\in\mathbb{N}$, $u\in\mathbb{R}rp$, $h\in\ensuremath BG$, $g\in\ensuremath B(\R_+\times V_c)$, \begin{equation} \label{eq3x} R_m(h,g,u)(s,v) = \bigl(U_m R_0(h,g,u)\bigr)(s,v),\qquad s\in\mathbb{R}_+,\,v\in V_c, \end{equation} holds. \end{lemma} \begin{proof} By definition of $U_m$ \begin{equation*} \bigl(U_m R_0(h,g,u)\bigr)(s,v) = E_v\bigl(R_0(h,g,u)(s+S_m,K_m)\bigr). \end{equation*} With formula~\eqref{eq3via} and lemma~\ref{lem3iv} we find \begin{align*} \bigl(U_m R_0(h,g,u)\bigr)(s,v) &= E_v\bigl(R_0(h,g(s+\,\cdot\,),u-s)(S_m,K_m)\bigr)\\ &= E_v\bigl(E_v\bigl(W_m(h,g(s+\,\cdot\,),u-s)\ensuremath\,\big|\, \mathcal{B}_m\bigr)\bigr)\\ &= E_v\bigl(W_m(h,g(s+\,\cdot\,),u-s)\bigr)\\ &= E_v\bigl(W_{m,s}(h,g,u)\bigr)\\ &= R_m(h,g,u)(s,v).\qedhere \end{align*} \end{proof} \begin{corollary} \label{cor2vi} For all $m$, $n$, $r\in\mathbb{N}$, $u\in\mathbb{R}rp$, $h\in\ensuremath BG$, $g\in\ensuremath B(\R_+\times V_c)$, \begin{equation} \label{eq3xi} U_n R_m(h,g,u) = R_{n+m}(h,g,u). \end{equation} is valid. \end{corollary} \begin{proof} By lemma~\ref{lem3v} and lemma~\ref{lem3ii}, statement~(b), we obtain \begin{equation*} U_n R_m(h,g,u) = U_n\comp U_m R_0(h,g,u) = U_{n+m} R_0(h,g,u) = R_{n+m}(h,g,u).\qedhere \end{equation*} \end{proof} \begin{lemma} \label{lem3vii} For all $m$, $n\in\mathbb{N}$, $m\le n$, $r\in\mathbb{N}$, $u\in\mathbb{R}rp$, $h\in\ensuremath BG$, $g\in\ensuremath B(\R_+\times V_c)$, $\xi\in\mathcal{G}$, the following formula holds true: \begin{equation} \label{eq3xii} E_\xi\bigl(W_n(h,g,u)\ensuremath\,\big|\, \mathcal{B}_m\bigr) = R_{n-m}(h,g,u)(S_m,K_m). \end{equation} \end{lemma} \begin{proof} Apply lemma~\ref{lem3iv} to compute as follows \begin{align*} E_\xi\bigl(W_n(h,g,u)\ensuremath\,\big|\, \mathcal{B}_m\bigr) &= E_\xi\bigl(E_\xi\bigl(W_n(h,g,u)\ensuremath\,\big|\, \mathcal{B}_n\bigr)\ensuremath\,\big|\, \mathcal{B}_m\bigr)\\ &= E_\xi\bigl(R_0(h,g,u)(S_n,K_n)\ensuremath\,\big|\, \mathcal{B}_m\bigr)\\ &= \bigl(U_{n-m} R_0(h,g,u)\bigr)(S_m,K_m), \end{align*} where we used lemma~\ref{lem3ii}, formula~\eqref{eq3iii}, in the last step. An application of lemma~\ref{lem3v} concludes the proof. 
\end{proof} With these preparations, we are ready for the \begin{proof}[Proof of Proposition~\ref{prop3i}] Assume that $f\in\ensuremath B(\mathcal{G})$, $s$, $t\in\mathbb{R}_+$, and that $\xi\in\mathcal{G}$. Since $(S_m,\,m\in\mathbb{N})$ $Q_\xi$--a.s.\ strictly increases to $+\infty$, and since $S_m$, $m\in\mathbb{N}$, is an $\mathcal{F}^Y$--stopping time (cf., lemma~\ref{lemA} in appendix~\ref{appA}), it suffices to prove that equation~\eqref{eq3i} holds $Q_\xi$--a.s.\ for every $m\in\mathbb{N}_0$ on $\{S_m\le s <S_{m+1}, Y(s)\nablae \mathbb{D}elta\}\in\mathcal{F}^Y_s$. We fix an arbitrary $m\in\mathbb{N}_0$. Clearly, the family of random variables of the form \begin{equation*} g\bigl(Y(w)\bigr)\,1_{\{0\le w <S_m\}}\,W_m(h,1,u) \end{equation*} with $r$, $q\in\mathbb{N}$, $u\in\mathbb{R}rp$, $u_r=s$, $w\in\mathbb{R}rp[q]$, $h\in\ensuremath BG$, and $g\in\ensuremath BG[q]$, generates the $\sigma$--algebra $\mathcal{F}_s^Y\cap\{S_m\le s<S_{m+1}\}$. Therefore it is sufficient to show that \begin{equation} \label{eq3xiii} \begin{split} E_\xi\bigl(g\bigl(Y(w)\bigr)\,&1_{\{0\le w <S_m\}}\, W_m(h,1,u)\,f\bigl(Y(s+t)\bigr)\bigr)\\[1ex] &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w <S_m\}}\,W_m(h,1,u)\, E_{Y(s)}\bigl(f(Y(t))\bigr)\bigr), \end{split} \end{equation} holds for all $r$, $q\in\mathbb{N}$, $u\in\mathbb{R}rp$ with $u_r=s$, $w\in\mathbb{R}rp[q]$, $h\in\ensuremath BG$, $g\in\ensuremath BG[q]$ and $f\in B(\mathcal{G})$. (Since the random variables under the expectation signs of both sides of equation~\eqref{eq3xiii} vanish on the set $\{Y(s)=\mathbb{D}elta\}$, we can henceforth safely ignore the condition $Y(s)\nablae \mathbb{D}elta$.) Expand the left hand side of equation~\eqref{eq3xiii} as follows: \begin{equation} \label{eq3xiv} \begin{split} E_\xi\bigl(g\bigl(&Y(w)\bigr)\,1_{\{0\le w <S_m\}}\,W_m(h,1,u)\,f\bigl(Y(s+t)\bigr)\bigr)\\[1ex] &= \sum_{n=m}^\infty E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w <S_m\}}\,W_m(h,1,u)\, W_n(f,1,s+t)\bigr). \end{split} \end{equation} Consider the summand with $n=m$, which is of the form \begin{align*} E_\xi\bigl(&g\bigl(Y(w)\bigr)\,1_{\{0\le w < S_m\}}\,W_m(h\otimes f,1,(u,s+t))\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w < S_m\}}\, E_\xi\bigl(W_m(h\otimes f,1,(u,s+t))\ensuremath\,\big|\, \mathcal{B}_m\bigr)\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w < S_m\}} \,R_0(h\otimes f, 1, (u,s+t))(S_m,K_m)\bigr), \end{align*} where we made use of formula~\eqref{eq3ix}. Now we apply statement~(a) of lemma~\ref{lem3iii} with the choice $q=1$ which yields (recall that $u_r=s$) \begin{align*} E_\xi\bigl(&g\bigl(Y(w)\bigr)\,1_{\{0\le w < S_m\}}\,W_m(h\otimes f,1,(u,s+t))\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w < S_m\}} \,R_0(M(f,s+t,s)h, 1, u)(S_m,K_m)\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w < S_m\}} \,E_\xi\bigl(W_m(M(f,s+t,s)h, 1, u)\ensuremath\,\big|\,\mathcal{B}_m\bigr)\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w < S_m\}}\, W_m(M(f,s+t,s)h, 1, u)\bigr), \end{align*} where we used formula~\eqref{eq3ix} again. Combining~\eqref{eq3iv} with~\eqref{eq3viib} in $W_m(M(f,s+t,s)h, 1, u)$ we thus have shown \begin{equation} \label{eq3xv} \begin{split} E_\xi\bigl(&g\bigl(Y(w)\bigr)\,1_{\{0\le w < S_m\}}\,W_m(h,1,u)\,W_m(f,1,s+t)\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w < S_m\}}\,W_m(h,1,u)\, E_{Y(s)}\bigl(f(Y(t))\,1_{\{0<t<S_1\}}\bigr)\bigr). \end{split} \end{equation} Next consider a generic summand with $n>m$ on the right hand side of~\eqref{eq3xiv}. 
Then \begin{align*} E_\xi\bigl(&g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\,W_m(h,1,u)\,W_n(f,1,s+t)\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\,W_m(h,1,u)\, E_\xi\bigl(W_n(f,1,s+t)\ensuremath\,\big|\, \mathcal{B}_{m+1}\bigr)\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\,W_m(h,1,u)\, R_{n-m-1}(f,1,s+t)(S_{m+1},K_{m+1})\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\,W_m(h,R_{n-m-1}(f,1,s+t),u)\bigr) \end{align*} where we used lemma~\ref{lem3vii} in the second step. Conditioning on $\mathcal{B}_m$ gives \begin{align*} E_\xi\bigl(&g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\,W_m(h,1,u)\,W_n(f,1,s+t)\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\, E_\xi\bigl(W_m(h,R_{n-m-1}(f,1,s+t),u)\ensuremath\,\big|\, \mathcal{B}_m\bigr)\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\, R_0(h,R_{n-m-1}(f,1,s+t),u)(S_m,K_m)\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\, R_0\bigl(N(R_{n-m-1}(f,1,s+t),s)h,1,u\bigr)(S_m,K_m)\bigr). \end{align*} Here we used lemmas~\ref{lem3iv}, \ref{lem3iii}.b, and in the last step also $u_r=s$. Applying lemma~\ref{lem3iv}, we get \begin{align*} E_\xi\bigl(g\bigl(Y(w)\bigr)\,&1_{\{0\le w<S_m\}}\,W_m(h,1,u)\,W_n(f,1,s+t)\bigr)\\ &= E_\xi\mathbb{B}igl(g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\\ &\hspace{4em}\times E_{\xi}\bigl(W_m(N(R_{n-m-1}(f,1,s+t),s)h,1,u) \ensuremath\,\big|\, \mathcal{B}_m\bigr)\mathbb{B}igr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\, W_m\bigl(N(R_{n-m-1}(f,1,s+t),s)h,1,u\bigr)\bigr)\\ &= E_\xi\bigl(g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\, h\bigl(Y(u)\bigr)\,\chi_m(u)\\ &\hspace{4em}\times E_{Y(s)}\bigl(R_{n-m-1}(f,1,s+t)(s+S_1,K_1)\bigr)\bigr). \end{align*} For $\eta\in\mathcal{G}$ relation~\eqref{eq3via} yields \begin{align*} E_\eta\bigl(R_{n-m-1}(f,1,s+t)(s+S_1,K_1)\bigr) &= E_\eta\bigl(R_{n-m-1}(f,1,t)(S_1,K_1)\bigr)\\ &= E_\eta\bigl(E_\eta\bigl(W_{n-m}(f,1,t)\ensuremath\,\big|\,\mathcal{B}_1\bigr)\bigr)\\ &= E_\eta\bigl(W_{n-m}(f,1,t)\bigr), \end{align*} with another application of formula~\eqref{eq3xii}. With the choice $\eta=Y(s)$ this relation therefore gives \begin{equation} \label{eq3xvi} \begin{split} E_\xi\bigl(g\bigl(Y(w)\bigr)\,&1_{\{0\le w<S_m\}}\,W_m(h,1,u)\,W_n(f,1,s+t)\bigr)\\ &= E_\xi\mathbb{B}igl(g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\,h\bigl(Y(u)\bigr)\,\chi_m(u)\\ &\hspace{7em}\times E_{Y(s)}\bigl(f(Y(t))\,1_{\{S_{n-m}\le t<S_{n-m+1}\}}\bigr)\mathbb{B}igr). \end{split} \end{equation} Formulae~\eqref{eq3xv} and~\eqref{eq3xvi} entail \begin{align*} \sum_{n=m}^\infty &E_\xi\mathbb{B}igl(g\bigl(Y(w)\bigr)\,1_{\{0\le w <S_m\}}\,W_m(h,1,u)\, W_n(f,1,s+t)\mathbb{B}igr)\\[1ex] &= E_\xi\mathbb{B}igl(g\bigl(Y(w)\bigr)\,1_{\{0\le w < S_m\}}\,W_m(h,1,u)\, E_{Y(s)}\bigl(f(Y(t))\,1_{\{0\le t<S_1\}}\bigr)\mathbb{B}igr)\\ &\hspace{4em}+ \sum_{n=m+1}^\infty E_\xi\mathbb{B}igl(g\bigl(Y(w)\bigr)\,1_{\{0\le w<S_m\}}\, h\bigl(Y(u)\bigr)\,\chi_m(u)\\ &\hspace{12em}\times E_{Y(s)}\bigl(f(Y(t))\,1_{\{S_{n-m}\le t<S_{n-m+1}\}}\bigr)\mathbb{B}igr)\\[1ex] &= E_\xi\mathbb{B}igl(g\bigl(Y(w)\bigr)\,1_{\{0\le w < S_m\}}\,W_m(h,1,u)\, E_{Y(s)}\bigl(f(Y(t))\bigr)\mathbb{B}igr), \end{align*} which proves equation~\eqref{eq3xiii}. \end{proof} \subsection{A Brownian motion on \boldmath $\mathcal{G}$ and its generator}\label{ssect3iv} The stochastic process $Y$ and its underlying probability family $(\mathbb{X}i,\mathcal{C},Q)$, $Q = (Q_\xi,\,\xi\in\mathcal{G})$, are not very convenient to work with. Therefore we introduce another version in this subsection. 
As the underlying sample space $\Omega$ we choose the path space $\mathbb{C}D(\mathbb{R}_+,\mathcal{G})$ of $Y$ endowed with the $\sigma$--algebra $\mathcal{A}$ generated by the cylinder sets of $\mathbb{C}D(\mathbb{R}_+,\mathcal{G})$. Obviously, $Y$ is a measurable mapping from $(\mathbb{X}i,\mathcal{C})$ into $(\Omega,\mathcal{A})$. For $\xi\in\mathcal{G}$ let $P_\xi$ denote the image measure of $Q_\xi$ under $Y$. Set $P=(P_\xi,\,\xi\in\mathcal{G})$. Moreover, let the canonical coordinate process on $(\Omega,\mathcal{A})$ be denoted by $X=(X_t,\,t\in\mathbb{R}_+)$. Clearly, $X$ is a version of $Y$. We set $X_{+\infty}=\mathbb{D}elta$, and denote the natural filtration of $X$ by $\mathcal{F}=(\mathcal{F}_t,\,t\in\mathbb{R}_+)$. As usual $\mathcal{F}_\infty$ stands for $\sigma(\mathcal{F}_t,\,t\in\mathbb{R}_+)$. Whenever it is notationally more convenient we shall also write $X(t)$ for $X_t$, $t\in\mathbb{R}_+$. Let $H_V$ denote the hitting time of the set $V$ of vertices of $\mathcal{G}$ by $X$: \begin{equation*} H_V = \inf\{t > 0,\,X_t\in V\}. \end{equation*} Suppose that $X$ starts in $\xi\in l^\circ$, $l\in\mathcal{I}\cup\mathcal{E}$, and that $l$ is isomorphic to the interval $I$. Then it follows directly from the discussion at the end of subsection~\ref{ssect3ii} that the stopped process $X(\,\cdot\,\land H_V)$ is equivalent to a standard Brownian motion on $I$ with absorption in the endpoint(s) of $I$. The necessary path properties of $X$ being obvious, we therefore find that $X$ satisfies all defining properties of a Brownian motion on $\mathcal{G}$ (cf.\ definition~\ref{def1i}), except that we still have to prove its strong Markov property. This will be done next. Let $\theta = (\theta_t,\,t\in\mathbb{R}_+)$ denote the natural family of shift operators on $\Omega$: $\theta_t(\omega) = \omega(t+\,\cdot\,)$ for $\omega\in\Omega$. Thus in particular $\theta$ is a family of shift operators for $X$. Since the simple Markov property is a property of the finite dimensional distributions of a stochastic process, and the finite dimensional distributions of $X$ and $Y$ coincide, it immediately follows from proposition~\ref{prop3i} that $X$ is a Markov process. Then standard monotone class arguments (e.g., \cite{KaSh91, ReYo91}) give the Markov property in the familiar general form: \begin{proposition} \label{prop3viii} Assume that $\xi\in\mathcal{G}$, $t\in\mathbb{R}_+$, and that $W$ is an $\mathcal{F}_\infty$--measurable, positive or integrable random variable on $(\Omega,\mathcal{A}, P)$. Then \begin{equation} \label{eq3xvii} E_\xi\bigl(W\comp \theta_t \ensuremath\,\big|\, \mathcal{F}_t\bigr) = E_{X_t}\bigl(W\bigr), \end{equation} holds true $P_\xi$--a.s.\ on $\{X_t\nablae \mathbb{D}elta\}$. \end{proposition} A routine argument based on the path properties of $X$ (similar to, but much easier than the one used in the proof of lemma~\ref{lemA} in appendix~\ref{appA}) shows that $H_V$ is an $\mathcal{F}$--stopping time. We have the following \begin{lemma} \label{lem3ix} $X$ has the strong Markov property with respect to the hitting time $H_V$. That is, for all $\xi\in\mathcal{G}$, $t\in\mathbb{R}_+$, $f\in\ensuremath B(\mathcal{G})$, \begin{equation} \label{eq3xviii} E_\xi\bigl(f(X_{t+H_V})\ensuremath\,\big|\, \mathcal{F}_{H_V}\bigr) = E_{X_{H_V}}\bigl(f(X_t)\bigr) \end{equation} holds true $P_\xi$--a.s. 
\end{lemma} \begin{proof} To begin with, observe that since $\Omega=\mathbb{C}D(\mathcal{G})$ and $X$ is the canonical coordinate process, there is a natural family $\alpha=(\alpha_t,\,t\in\mathbb{R}_+)$ of stopping operators for $X$, namely $\alpha_t(\omega) = \omega(\,\cdot\,\land t)$, $t\in\mathbb{R}_+$. Therefore we get that $\mathcal{F}_T = \sigma(X_{s\land T},\,s\in\mathbb{R}_+)$ for any stopping time $T$ relative to $\mathcal{F}$. Indeed, one can show this along the same lines used to prove Galmarino's theorem (e.g., \cite[p.~458]{Ba91}, \cite[p.~86]{ItMc74}, \cite[p.~43~ff]{Kn81}, \cite[p.~45]{ReYo91}). Therefore it is sufficient to prove that for all $n\in\mathbb{N}$, $s_1$, \dots, $s_n\in\mathbb{R}_+$, $t\in\mathbb{R}_+$, $\xi\in\mathcal{G}$, and all $g\in\ensuremath B(\mathcal{G}^n)$, $f\in\ensuremath B(\mathcal{G})$, the following formula \begin{equation} \label{eq3xix} \begin{split} E_\xi\bigl(g(X(s_1&\land H_V),\dotsc,X(s_n\land H_V))\,f(X(t+ H_V))\bigr)\\ &= E_\xi\bigl(g(X(s_1\land H_V),\dotsc,X(s_n\land H_V))\, E_{X(H_V)}\bigl(f(X(t))\bigr)\bigr) \end{split} \end{equation} holds. Recall that $S_V$ denotes the hitting time of $V$ by $Y$. Since $P_\xi$ is the image of $Q_\xi$ under $Y$, and since $S_V = H_V\comp Y$, equation~\eqref{eq3xix} is equivalent to \begin{equation} \label{eq3xx} E_\xi\bigl(G\,f(Y(t+S_V))\bigr) = E_\xi\bigl(G\,E_{Y(S_V)}\bigl(f(Y(t))\bigr)\bigr), \end{equation} where we have set \begin{equation*} G = g\bigl(Y(s_1\land S_V),\dotsc, Y(s_n\land S_V)\bigr). \end{equation*} Recall that $S_1$ denotes the hitting time of $V_c\subset V$ by $Y$, so that $S_V\le S_1$. $Y$ is progressively measurable relative to $\mathcal{F}^Y$, which entails that $G$ is measurable with respect to $\mathcal{F}^Y_{S_V}\subset \mathcal{F}^Y_{S_1}\subset \mathcal{B}_1$ (see also the corresponding argument in the proof of lemma~\ref{lemA}). Using the notation of subsection~\ref{ssect3iii} we write \begin{equation} \label{eq3xxi} \begin{split} E_\xi\bigl(G\, &f(Y(t+S_V))\bigr)\\ &= E_\xi\bigl(G\, f(Y(t+S_V));\,S_V\le t+S_V < S_1\bigr)\\ &\hspace{4em} + \sum_{n=1}^\infty E_\xi\bigl(G\,E_\xi\bigl(f(Y(t+u))\, \chi_n(t+u)\ensuremath\,\big|\, \mathcal{B}_1\bigr)\mathop{\big|}\nolimits_{u=S_V}\bigr). \end{split} \end{equation} For the last equality --- as in the proof of lemma~\ref{lem3ii} --- we made use of the product structure of the probability space $(\mathbb{X}i,\mathcal{C},Q_\xi)$, and the fact that $S_V\le S_1$, which entails that $S_V$ only depends on the variable $\omega^0\in \mathbb{X}i^0$. By lemma~\ref{lem3vii} and formula~\eqref{eq3via} we get for $u\le S_1$, $n\in\mathbb{N}$, \begin{align*} E_\xi\bigl(f(Y(t+u))\,\chi_n(t+u)\ensuremath\,\big|\, \mathcal{B}_1\bigr) &= R_{n-1}(f,1,t+u)(S_1,K_1)\\ &= R_{n-1}(f,1,t)(S_1-u,K_1). \end{align*} Then \begin{align*} E_\xi\bigl(G\, R_{n-1}&(f,1,t)(S_1-S_V,K_1)\bigr)\\ &= E_\xi\bigl(G\,R_{n-1}(f,1,t)(0,K_1);\,S_V=S_1\bigr)\\ &\hspace{4em} +E_\xi\bigl(G\,R_{n-1}(f,1,t)(S_1-S_V,K_1);\,S_V<S_1\bigr)\\ &= E_\xi\bigl(G\,E_{Y(S_V)}\bigl(f(Y(t))\,\chi_{n-1}(t)\bigr);\,S_V=S_1\bigr)\\ &\hspace{4em} +E_\xi\bigl(G\,R_{n-1}(f,1,t)(S_1-S_V,K_1);\,S_V<S_1\bigr), \end{align*} because on $\{S_V=S_1\}$, $Y(S_V)=Y(S_1)=K_1$. The second term on the right hand side of the last equality only involves the random variables $Y(s_i\land S_V)$, $S_1$, $S_V$, and $K_1$. They are all defined in terms of the strong Markov process $Z^0$ underlying the construction of $Y$ (cf.\ subsection~\ref{ssect3ii}).
Moreover, on the event $\{S_V < S_1\}$ we get from the definition of $S_1$ as the hitting time of $V_c$ that $S_1 = S_V + S_1\comp \vartheta_{S_V}$. Also, on $\{S_V < S_1\}$, $K_1 = K_1\circ\vartheta_{S_V}$ holds true. On the other hand $G$ is measurable with respect to $\mathcal{F}^0_{S_V}$, where $\mathcal{F}^0$ is the natural filtration of $Z^0$. Thus the strong Markov property of $Z^0$ gives \begin{align*} E_\xi\bigl(G\,&R_{n-1}(f,1,t)(S_1-S_V,K_1);\,S_V<S_1\bigr)\\ &= E_\xi\bigl(G\,E_{Y(S_V)}\bigl(R_{n-1}(f,1,t)(S_1,K_1)\bigr);\,S_V<S_1\bigr)\\ &= E_\xi\bigl(G\,E_{Y(S_V)}\bigl(f(Y(t))\,\chi_n(t)\bigr);\,S_V<S_1\bigr). \end{align*} In the last step we used for $\eta\in\mathcal{G}$, \begin{align*} E_\eta\bigl(R_{n-1}(f,1,t)(S_1,K_1)\bigr) &= E_\eta\bigl(E_\eta\bigl(W_n(f,1,t)\ensuremath\,\big|\, \mathcal{B}_1\bigr)\bigr)\\ &= E_\eta\bigl(f(Y(t))\,\chi_n(t)\bigr), \end{align*} with another application of lemma~\ref{lem3vii}, and then we made the choice $\eta= Y(S_V)$. Similarly, for the first term on the right hand side of equation~\eqref{eq3xxi} we can use the strong Markov property of $Z^0$ (together with $\{t+S_V<S_1\} = \{t<S_1\comp\vartheta_{S_V}\}\cap\{S_V<S_1\}$) to show that \begin{equation*} \begin{split} E_\xi\bigl(G\,f(Y(t+S_V));\,&S_V\le t+S_V < S_1\bigr)\\ &= E_\xi\bigl(G\,E_{Y(S_V)}\bigl(f(Y(t);\,t<S_1)\bigr);\,S_V<S_1\bigr). \end{split} \end{equation*} Inserting these results into the right hand side of formula~\eqref{eq3xxi}, we find \begin{align*} E_\xi\bigl(&G\,f(Y(t+S_V))\bigr)\\ &= E_\xi\bigl(G\,E_{Y(S_V)}\bigl(f(Y(t));\,t<S_1\bigr);\,S_V<S_1\bigr)\\ &\hspace{4em} +\sum_{n=1}^\infty E_\xi\bigl(G\,E_{Y(S_V)}\bigl(f(Y(t))\, \chi_n(t)\bigr);\,S_V<S_1\bigr)\\ &\hspace{4em} +\sum_{n=0}^\infty E_\xi\bigl(G\,E_{Y(S_V)}\bigl(f(Y(t))\, \chi_n(t)\bigr);\,S_V=S_1\bigr)\\ &= E_\xi\bigl(G\,E_{Y(S_V)}\bigl(f(Y(t))\bigr)\bigr). \end{align*} Thus equation~\eqref{eq3xx} holds and lemma~\ref{lem3ix} is proved. \end{proof} \begin{proposition} \label{prop3x} $X$ is a Feller process. \end{proposition} \begin{proof} It is well-known, that it is sufficient to prove~(i) that the resolvent of $X$ preserves $C_0(\mathcal{G})$, and~(ii) that for all $f\in C_0(\mathcal{G})$, $\xi\in\mathcal{G}$, $E_\xi\bigl(f(X_t)\bigr)$ converges to $f(\xi)$ as $t$ decreases to $0$. (A complete proof can be found in appendix~\ref{app_FSR}.) Statement~(ii) immediately follows by an application of the dominated convergence theorem and the fact that $X$ is a normal process with right continuous paths. To prove statement~(i) consider the resolvent $R = (R_\lambda,\,\lambda>0)$ of $X$, and let $\lambda>0$. Since $X$ is strongly Markovian with respect to the hitting time $H_V$ of the set of vertices $V$ (lemma~\ref{lem3ix}), we get for $\xi\in\mathcal{G}$, $f\in\ensuremath B(\mathcal{G})$ the first passage time formula (e.g., \cite{Ra56} or \cite{ItMc74}) \begin{equation} \label{eq3xxii} (R_\lambda f)(\xi) = (R^D_\lambda f)(\xi) + E_\xi\bigl(e^{-\lambda H_V}\,(R_\lambda f)(X_{H_V})\bigr), \end{equation} where $R^D$ is the Dirichlet resolvent. That is, $R^D$ is the resolvent of the process $X$ with killing at the moment of reaching a vertex of $\mathcal{G}$. Recall the equivalence of the stopped process $X(\,\cdot\,\land H_V)$ with the Brownian motion with absorption on the corresponding interval $I$ stated at the beginning of this subsection. Then we can give explicit expressions for all entities appearing in the first passage time formula~\eqref{eq3xxii}. 
Using the well-known formulae for Brownian motions on the real line (see, e.g., \cite[p.~73ff]{DyJu69}, \cite[p.~29f]{ItMc74}) we find for $\xi$, $\eta\in\mathcal{G}^\circ$, \begin{subequations} \label{eq3xxiii} \begin{equation} \label{eq3xxiiia} r^D_\lambda(\xi,\eta) = \sum_{i\in \mathcal{I}} r^D_{\lambda,i}(\xi,\eta)\,1_{\{\xi,\eta\in i\}} + \sum_{e\in\mathcal{E}} r^D_{\lambda,e}(\xi,\eta)\,1_{\{\xi,\eta\in e\}}, \end{equation} with \begin{equation} \label{eq3xxiiib} r^D_{\lambda,i}(\xi,\eta) = \frac{1}{\ensuremath\sqrt{2\gl}}\,\sum_{k\in\mathbb{Z}}\mathbb{B}igl(e^{-\ensuremath\sqrt{2\gl}|x-y+2ka_i|}-e^{-\ensuremath\sqrt{2\gl}|x+y+2ka_i|}\mathbb{B}igr), \end{equation} where in local coordinates $\xi=(i,x)$, $\eta=(i,y)$, $x$, $y\in (0,a_i)$. In the case of an external edge $e$, we get \begin{equation} \label{eq3xxiiic} r^D_{\lambda,e}(\xi,\eta) = \frac{1}{\ensuremath\sqrt{2\gl}}\,\mathbb{B}igl(e^{-\ensuremath\sqrt{2\gl}|x-y|}-e^{-\ensuremath\sqrt{2\gl}(x+y)}\mathbb{B}igr), \end{equation} \end{subequations} with $\xi=(e,x)$, $\eta=(e,y)$, $x$, $y\in (0+\infty)$. Remark that both kernels vanish whenever $\xi$ or $\eta$ converge from the interior of any edge to a vertex to which the edge is incident. Consider the second term on the right hand side of equation~\eqref{eq3xxii}. Suppose that $\xi\in i^\circ$, $i\in \mathcal{I}$, and that $i$ is isomorphic to $[0,a_i]$. Assume furthermore that $v_1$, $v_2$ are the vertices in $V$ to which $i$ is incident, and that under this isomorphism $v_1$ corresponds to $0$, while $v_2$ corresponds to $a_i$. Then we get \begin{equation*} \begin{split} E_\xi\bigl(e^{-\lambda H_V}\,(R_\lambda f)(X_{H_V})\bigr) = E_\xi\bigl(&e^{-\lambda H_{v_1}};\,H_{v_1}<H_{v_2}\bigr)\,(R_\lambda f)(v_1)\\ &+ E_\xi\bigl(e^{-\lambda H_{v_2}};\,H_{v_2}<H_{v_1}\bigr)\,(R_\lambda f)(v_2), \end{split} \end{equation*} because $X$ has paths which are continuous up to the lifetime of $X$, and $X$ cannot be killed before reaching a vertex. Here $H_{v_k}$, $k=1$, $2$, denotes the hitting time of the vertex $v_k$. The expectation values in the last line are those of a standard Brownian motion and they are well-known, too (see, e.g., \cite[p.~73ff]{DyJu69}, \cite[p.~29f]{ItMc74}). Thus for $\xi = (i,x)$, $x\in [0,a_i]$, \begin{equation} \label{eq3xxiv} \begin{split} E_\xi\bigl(&e^{-\lambda H_V}\,(R_\lambda f)(X_{H_V})\bigr)\\ &= \frac{\sinh\bigl(\ensuremath\sqrt{2\gl} (a_i-x)\bigr)}{\sinh\bigl(\ensuremath\sqrt{2\gl} a_i\bigr)}\,(R_\lambda f)(v_1) + \frac{\sinh\bigl(\ensuremath\sqrt{2\gl} x\bigr)}{\sinh\bigl(\ensuremath\sqrt{2\gl} a_i\bigr)}\,(R_\lambda f)(v_2). \end{split} \end{equation} Similarly, for $\xi\in e^\circ$ with local coordinates $\xi=(e,x)$, $x\in (0,+\infty)$ we find \begin{equation} \label{eq3xxv} E_\xi\bigl(e^{-\lambda H_V}\,(R_\lambda f)(X_{H_V})\bigr) = e^{-\ensuremath\sqrt{2\gl} x}\,(R_\lambda f)(v), \end{equation} where $v$ is the vertex to which $e$ is incident. With the formulae~\eqref{eq3xxii}--\eqref{eq3xxv} it is straightforward to check that $R_\lambda$ maps $C_0(\mathcal{G})$ into itself, and the proof is complete. \end{proof} Since $X$ has right continuous paths, standard results (see, e.g, \cite[Theorem~III.3.1]{ReYo91}, or \cite[Theorem~III.15.3]{Wi79}) provide the \begin{corollary} \label{cor3xi} $X$ is strongly Markovian. \end{corollary} Thus we have also proved the \begin{corollary} \label{cor3xii} $X$ is a Brownian motion on $\mathcal{G}$ in the sense of definition~\ref{def1i}. 
\end{corollary} It remains to calculate the domain of the generator of $X$, i.e., the boundary conditions at the vertices. Let $v\in V$, and assume that $X$ starts in $v$. Then by construction of $X$ and $Y$, $X$ is equivalent to $Z^0$ with start in $v$ up to its first hitting of a shadow vertex. That is, $X$ is equivalent to $Z^0$ up to the first time $X$ hits a vertex different from $v$. It follows that if $v$ is absorbing for $Z^0$, then it is so for $X$, and if $v$ is an exponential holding point with jump to $\mathbb{D}elta$, then it is also so for $X$ with the same exponential rate. In particular, $v$ is a trap for $X$ if and only if it is a trap for $Z^0$. If $v$ is not a trap, then we can use Dynkin's formula~\cite[p.~140, ff.]{Dy65a} to calculate the boundary condition implemented by $X$. Clearly, this gives the same boundary conditions as for $Z^0$, because Dynkin's formula only involves an arbitrary small neighborhood of the vertex. (See also the corresponding arguments in section~\ref{sect2}.) Thus we have proved the following \begin{theorem} \label{thm3xiii} $X$ is a Brownian motion on $\mathcal{G}$ whose generator has a domain characterized by the same boundary conditions as the generator of $Z^0$. \end{theorem} \section{Proof of Theorem~\ref{thm1ii}} \label{sect4} Suppose that $\mathcal{G}=(V,\mathcal{I},\mathcal{E},\partial)$ is a metric graph without tadpoles. Let data $a$, $b$, $c$ as in~\eqref{eq1i} be given which satisfy equation~\eqref{eq1ii}. With every $v\in V$ we associate a single vertex graph $\mathcal{G}(v)$ consisting of the vertex $v$ and $|\mathcal{L}(v)|$ external edges. In~\cite{CPBMSG} the authors have shown how the construction of Brownian motions on a finite or semi-infinite interval by Feller~~\cite{Fe52, Fe54, Fe54a} and It\^o--McKean~\cite{ItMc63, ItMc74} (cf.\ also \cite{Kn81}) can be extended to the case of single vertex graphs. For the convenience of the reader, we quickly sketch the method. If $b_v=0$ and $c_v=1$ then this trivially is a collection of $|\mathcal{L}(v)|$ many standard Brownian motions on the real line with absorption at the origin (corresponding to the vertex $v$), mapped onto the external edges of $\mathcal{G}(v)$. If $b_v=0$ and $c_v<1$ these Brownian motions are killed by a jump to $\mathbb{D}elta$ after holding the processes at the origin for an independent exponentially distributed time of rate $a/c$. For $b_v\nablae 0$ one uses a Walsh process on $\mathcal{G}(v)$ (see \cite{Wa78}, \cite{BaPi89}), and builds in a time delay as well as killing, both on the scale of the local time at the vertex. With appropriately chosen parameters for these two mechanisms, theorem~5.7 in~\cite{CPBMSG} states that the so constructed process $X^v$ is a Brownian motion on $\mathcal{G}(v)$ such that its generator is the $1/2$ times the Laplace operator acting on $f\in C^2_0(\mathcal{G}(v))$ with boundary conditions at the vertex $v$ given by~\eqref{eq1iii}. Next we build the graph $\mathcal{G}$ by successively connecting appropriately chosen external edges of the single vertex graphs $\mathcal{G}(v)$, $v\in V$, as in section~\ref{ssect3i}. Consider the stochastic process $X$ which is successively constructed from the Brownian motions $X^v$ as in subsections~\ref{ssect3ii}--\ref{ssect3iv}. Theorem~\ref{thm3xiii} states that $X$ is a Brownian motion on $\mathcal{G}$ which is such that its generator has a domain which is characterized by the same boundary conditions at each vertex $v\in V$ as for the single vertex graphs $\mathcal{G}(v)$. 
Therefore, $X$ is a Brownian motion on $\mathcal{G}$ as in the statement of theorem~\ref{thm1ii}, whose proof is therefore complete. \section{Discussion of Tadpoles} \label{sect5} Suppose that $\mathcal{G}$ is a metric graph which has one tadpole $i_t$ connected to a vertex $v\in V$. That is, $v$ is simultaneously the initial and final vertex of $i_t$: $\partial(i_t)=(v,v)$. Figure~\ref{tad_i} shows a metric graph with a tadpole attached to the vertex $v$. \begin{figure} \caption{A metric graph with a tadpole $i_t$ attached to $v$.} \label{tad_i} \end{figure} Let $b_t$ be the length of $i_t$. Assume furthermore that we are given data $a$, $b$, $c$ as in equations~\eqref{eq1i}, \eqref{eq1ii}. We want to construct a Brownian motion $X$ on $\mathcal{G}$ implementing the boundary conditions corresponding to these data. Let $\mathcal{G}_1$ be the metric graph obtained from $\mathcal{G}$ by replacing the tadpole by two external edges $e_1$, $e_2$, incident with $v$. Construct a Brownian motion $X_1$ on $\mathcal{G}_1$ corresponding to the data $a$, $b$, $c$ as above. Consider the real line $\mathbb{R}$ as a single vertex graph $\mathcal{G}_2$ with the origin as the vertex $v_0$, and edges $l_1$, $l_2$ which are isomorphic to $[0,+\infty)$, $(-\infty, 0]$. Take a Walsh process $X_2$ on this graph which with probability $1/2$ chooses either edge for the next excursion when at the origin. Then $X_2$ is just a ``skew'' Brownian motion as in~\cite[p.~115]{ItMc74} which actually is not skew. That is, it is equivalent to a standard Brownian motion on the real line. Now join $\mathcal{G}_1$ and $\mathcal{G}_2$ by connecting the pairs $(e_1,l_1)$, $(e_2,l_2)$ via two new internal edges of length $b_t/2$. Denote the resulting metric graph by $\hat \mathcal{G}$. Figure~\ref{tad_ii} shows this construction. \begin{figure} \caption{The metric graph $\hat{\mathcal{G}}$ obtained by joining $\mathcal{G}_1$ and $\mathcal{G}_2$.} \label{tad_ii} \end{figure} Construct a Brownian motion $\hat X$ on $\hat\mathcal{G}$ from $X_1$ and $X_2$ as in section~\ref{sect2}. By construction, $\hat X$ is equivalent to a standard Brownian motion in every neighborhood of $v_0$ which is small enough such that it does not include the vertex $v$. Therefore the additional vertex $v_0$ of $\hat{\mathcal{G}}$ does not yield any non-trivial boundary condition. Thus, if we identify the open tadpole edge $i_t^0$ with the subset of $\mathcal{G}_2$ isomorphic to $(-b_t/2,b_t/2)$, then we obtain a Brownian motion $X$ on $\mathcal{G}$ implementing the desired boundary conditions. Obviously, any (finite) number of tadpoles can be handled in the same way. \begin{appendix} \section{On the Crossover Times $S_n$} \label{appA} We recall from section~\ref{sect2} that in terms of the process $Y$ the crossover times $S_n$, $n\in\mathbb{N}$, can be described as follows. Let $Y$ start in $\xi\in\mathcal{G}$. Then $S_1$ is the hitting time of $V_c\setminus\{\xi\}$ by $Y$. In particular, $S_1>0$ for all paths of $Y$. For $n\ge 2$, $S_n$ is the hitting time after $S_{n-1}$ of $V_c\setminus\{K_{n-1}\}$ by $Y$. Since by construction $Y(S_{n-1})=K_{n-1}$ and the paths of $Y$ are continuous on $[0,\zeta)$, we get $S_n>S_{n-1}$ for all paths of $Y$. Therefore \begin{align*} S_n &= \inf\,\bigl\{t>S_{n-1},\,Y(t)\in V_c\setminus\{K_{n-1}\}\bigr\}\\ &= \inf\,\bigl\{t\ge S_{n-1},\,Y(t)\in V_c\setminus\{K_{n-1}\}\bigr\} \end{align*} holds. In this appendix we prove the following \begin{lemma} \label{lemA} For every $n\in\mathbb{N}$, $S_n$ is a stopping time relative to $\mathcal{F}^Y$. 
\end{lemma} \begin{proof} Set $S_0=0$, and for $n\in\mathbb{N}$, \begin{equation*} V_n = \begin{cases} V_c\setminus\{\xi\}, & \text{if $n=1$},\\[1ex] V_c\setminus\{K_{n-1}\}, & \text{otherwise}. \end{cases} \end{equation*} For $n\in\mathbb{N}$, $r\in\mathbb{R}_+$ define \begin{align*} A_{n,r} &= \{S_n \le r < \zeta\}\\[1ex] B_{n,r} &= \bigcap_{m\in\mathbb{N}}\bigcup_{u\in\mathbb{Q},\,0\le u\le r} \bigl\{S_{n-1}\le u,\, d\bigl(Y(u), V_n\bigr)\le 1/m,\,r < \zeta\bigr\}. \end{align*} We claim that for all $n\in\mathbb{N}$, $r\in\mathbb{R}_+$, \begin{equation} \label{eqAi} A_{n,r}=B_{n,r}. \end{equation} To prove this claim, suppose first that $\omega\in A_{n,r}$. Then $S_n(\omega)$ is finite, and therefore the set \begin{equation*} \bigl\{t\ge S_{n-1}(\omega),\,Y(t,\omega)\in V_n(\omega)\bigr\}\subset [0,\zeta(\omega)) \end{equation*} is non-empty. Thus there exists a sequence $(u_i,\,i\in\mathbb{N})$ in this set which decreases to $S_n(\omega)$. Since $Y(\,\cdot\,,\omega)$ is continuous on $[0,\zeta(\omega))$, it follows that $Y(S_n(\omega),\omega)\in V_n(\omega)$. By assumption, $S_n(\omega)\le r <\zeta(\omega)$, and therefore the continuity of $Y(\,\cdot\,,\omega)$ on $[0,\zeta(\omega))$ and $S_{n-1}(\omega)<S_n(\omega)$ imply that for every $m\in\mathbb{N}$ there exists $u\in\mathbb{Q}\cap[S_{n-1}(\omega),r]$ so that $d(Y(u,\omega),V_n(\omega))\le 1/m$. Hence $\omega\in B_{n,r}$. As for the converse, suppose now that $\omega\in B_{n,r}$. Then there exists a sequence $(u_m,\,m\in\mathbb{N})$ in $\mathbb{Q}\cap[S_{n-1}(\omega),r]$ so that $d(Y(u_m,\omega), V_n(\omega))$ converges to zero as $m$ tends to infinity. Since $(u_m,\,m\in\mathbb{N})$ is bounded we may assume, by selecting a subsequence if necessary, that $(u_m,\,m\in\mathbb{N})$ converges to some $u\in[S_{n-1}(\omega),r]$ as $m\to+\infty$. Thus we find that $Y(u,\omega)\in V_n(\omega)$, and therefore $Y(\,\cdot\,,\omega)$ hits $V_n(\omega)$ in the interval $[S_{n-1}(\omega),r]$. Consequently, $S_n(\omega)\le r$, and hence $\omega\in A_{n,r}$, concluding the proof of the claim. Next we prove by induction that for every $n\in\mathbb{N}$, $S_n$ is an $\mathcal{F}^Y$--stopping time. Let $n=1$. By~\eqref{eqAi} for every $r\ge 0$, \begin{equation*} A_{1,r} = \bigcap_{m\in\mathbb{N}}\bigcup_{u\in\mathbb{Q},\,0\le u\le r} \bigl\{ d\bigl(Y(u), V_1\bigr)\le 1/m\}\cap \{r < \zeta\bigr\}. \end{equation*} Clearly, $\{r<\zeta\} = \{Y(r)\in\mathcal{G}\}\in\mathcal{F}^Y_r$. Moreover, since $V_1$ is a deterministic set, $d(\,\cdot\,,V_1)$ is measurable from $\mathcal{G}$ to $\mathbb{R}_+$, and therefore $\{d(Y(u),V_1)\le 1/m\}\in\mathcal{F}^Y_u \subset \mathcal{F}^Y_r$. Hence $A_{1,r}\in\mathcal{F}^Y_r$. Let $t\ge 0$, and write \begin{align*} \{S_1 \le t\} &= \{S_1<\zeta\le t\} \cup A_{1,t}\\ &= \mathbb{B}igl(\bigcup_{r\in\mathbb{Q},\,0\le r\le t} \{S_1\le r <\zeta\}\cap\{\zeta\le t\}\mathbb{B}igr)\cup A_{1,t}\\ &= \mathbb{B}igl(\bigcup_{r\in\mathbb{Q},\,0\le r\le t} A_{1,r}\cap\{\zeta\le t\}\mathbb{B}igr)\cup A_{1,t}. \end{align*} Therefore $\{S_1\le t\}\in\mathcal{F}^Y_t$, and hence $S_1$ is a stopping time relative to $\mathcal{F}^Y$. Now suppose that $n\in\mathbb{N}$, $n\ge 2$, and that $S_{n-1}$ is an $\mathcal{F}^Y$--stopping time. We show that for all $r\in\mathbb{R}_+$, $A_{n,r}\in\mathcal{F}^Y_r$. First we remark that since $\mathcal{G}$ is a separable metric space, the metric $d$ on $\mathcal{G}$ is a measurable mapping from $(\mathcal{G}\times\mathcal{G}, \mathcal{B}_d\otimes\mathcal{B}_d)$ to $(\mathbb{R}_+,\mathcal{B}(\mathbb{R}_+))$. 
For example, this follows from Theorem~I.1.10 in~\cite{Pa67}, and the continuity of $d:\mathcal{G}\times\mathcal{G} \to \mathbb{R}_+$ when $\mathcal{G}\times\mathcal{G}$ is equipped with the product topology. Consider $K_{n-1}= Y(S_{n-1})$. Since $Y$ has right continuous paths it is progressively measurable relative to $\mathcal{F}^Y$ (e.g., \cite[Proposition~I.4.8]{ReYo91}). Thus by Proposition~I.4.9 in~\cite{ReYo91} it follows that $K_{n-1}$ is $\mathcal{F}^Y_{S_{n-1}}$--measurable. Consequently on $\{S_{n-1}\le u\}$, $K_{n-1}$ is $\mathcal{F}^Y_u$--measurable. Equation~\eqref{eqAi} reads \begin{equation*} A_{n,r} = \bigcap_{m\in\mathbb{N}} \bigcup_{u\in\mathbb{Q},\,0\le u\le r} \{S_{n-1}\le u\} \cap \bigl\{d\bigl(Y(u),V_c\setminus K_{n-1}\bigr)\le 1/m\bigr\} \cap \{r< \zeta\}. \end{equation*} It follows that $A_{n,r}\in\mathcal{F}^Y_r$, as claimed. But then $\{S_n\le t\}\in\mathcal{F}^Y_t$ for all $t\in\mathbb{R}_+$ is proved with the same argument as at the end of the discussion of the case $n=1$. \end{proof} \section{Feller Semigroups and Resolvents} \label{app_FSR} In this appendix we give an account of the Feller property of semigroups and resolvents. The material here seems to be quite well-known, and our presentation of it owes very much to~\cite{Ra56}, most notably the inversion formula for the Laplace transform, equation~\eqref{inv_L} in connection with lemma~\ref{lem_B_6}. On the other hand, we were not able to locate a reference where the results are collected and stated in the form in which we employ them in the present paper. This applies in particular to the ``mixed'' forms of the statements (iii)--(vi) in theorem~\ref{thm_B_3}, which we find especially convenient to use in this article. Therefore we also provide proofs for some of the statements. Assume that $(E,d)$ is a locally compact separable metric space with Borel $\sigma$--algebra denoted by $\mathcal{B}(E)$. $B(E)$ denotes the space of bounded measurable real valued functions on $E$, $\mathbb{C}oe$ the subspace of continuous functions vanishing at infinity. $B(E)$ and $\mathbb{C}oe$ are equipped with the sup-norm $\|\,\cdot\,\|$. The following definition is as in~\cite{ReYo91}: \begin{definition} \label{def_B_1} A \emph{Feller semigroup} is a family $U=(U_t,\,t\ge 0)$ of positive linear operators on $\mathbb{C}oe$ such that \begin{enum_i} \item $U_0=\text{id}$ and $\|U_t\|\le 1$ for every $t\ge 0$; \item $U_{t+s} = U_t\comp U_s$ for every pair $s$, $t\ge 0$; \item $\lim_{t\downarrow 0} \|U_t f - f\|=0$ for every $f\in \mathbb{C}oe$. \end{enum_i} \end{definition} Analogously we define \begin{definition} \label{def_B_2} A \emph{Feller resolvent} is a family $R=(R_\lambda,\,\lambda>0)$ of positive linear operators on $\mathbb{C}oe$ such that \begin{enum_i} \item $\|R_\lambda\|\le \lambda^{-1}$ for every $\lambda>0$; \item $R_\lambda - R_\mu = (\mu-\lambda) R_\lambda\comp R_\mu$ for every pair $\lambda$, $\mu>0$; \item $\lim_{\lambda\to\infty}\|\lambda R_\lambda f - f\|=0$ for every $f\in\mathbb{C}oe$. \end{enum_i} \end{definition} In the sequel we shall focus our attention on semigroups $U$ and resolvents $R$ associated with an $E$--valued Markov process, and which are \emph{a priori} defined on $B(E)$. (In our notation, we shall not distinguish between $U$ and $R$ as defined on $B(E)$ and their restrictions to $\mathbb{C}oe$.) 
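For concreteness, a standard example satisfying definitions~\ref{def_B_1} and~\ref{def_B_2} (included here only as an illustration) is provided by standard Brownian motion on $E=\mathbb{R}$: its semigroup and resolvent are given by \begin{align*} U_t f(x) &= \int_{\mathbb{R}} \frac{1}{\sqrt{2\pi t}}\, e^{-(x-y)^2/2t}\, f(y)\,dy, \qquad t>0,\\ R_\lambda f(x) &= \int_0^\infty e^{-\lambda t}\, U_t f(x)\,dt = \int_{\mathbb{R}} \frac{1}{\sqrt{2\lambda}}\, e^{-\sqrt{2\lambda}\,|x-y|}\, f(y)\,dy, \qquad \lambda>0. \end{align*} Both families map $C_0(\mathbb{R})$ into itself and satisfy properties~(i)--(iii) of the definitions, and the resolvent kernel $(2\lambda)^{-1/2}\,e^{-\sqrt{2\lambda}\,|x-y|}$ is the free analogue of the absorbed kernels in formulae~\eqref{eq3xxiiib}, \eqref{eq3xxiiic}.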
Let $X = (X_t,\,t\ge 0)$ be a Markov process with state space $E$, and let $(P_x,\,x\in E)$ denote the associated family of probability measures on some measurable space $(\Omega,\mathcal{A})$ so that $P_x(X_0=x) = 1$. $E_x(\,\cdot\,)$ denotes the expectation with respect to $P_x$. We assume throughout that for every $f\in B(E)$ the mapping \begin{equation*} (t,x) \mapsto E_x\bigl(f(X_t)\bigr) \end{equation*} is measurable from $\mathbb{R}_+\times E$ into $\mathbb{R}$. The semigroup $U$ and resolvent $R$ associated with $X$ act on $B(E)$ as follows. For $f\in B(E)$, $x\in E$, $t\ge 0$, and $\lambda>0$ set \begin{align} U_t f(x) &= E_x\bigl(f(X_t)\bigr), \label{eq_B_1}\\ R_\lambda f(x) &= \int_0^\infty e^{-\lambda t} U_t f (x)\,dt. \label{eq_B_2} \end{align} Property~(i) of Definitions~\ref{def_B_1} and \ref{def_B_2} is obviously satisfied. The semigroup property, (ii) in Definition~\ref{def_B_1}, follows from the Markov property of $X$, and this in turn implies the resolvent equation, (ii) of Definition~\ref{def_B_2}. Moreover, it follows also from the Markov property of $X$ that the semigroup and the resolvent commute. On the other hand, in general neither the property that $U$ or $R$ map $\mathbb{C}oe$ into itself, nor the strong continuity property (iii) in Definitions~\ref{def_B_1}, \ref{def_B_2} hold true on $B(E)$ or on $\mathbb{C}oe$. If $W$ is a subspace of $B(E)$ the resolvent equation shows that the image of $W$ under $R_\lambda$ is independent of the choice of $\lambda>0$, and in the sequel we shall denote the image by $RW$. Furthermore, for simplicity we shall write $U\mathbb{C}oe\subset \mathbb{C}oe$, if $U_t f\in\mathbb{C}oe$ for all $t\ge 0$, $f\in\mathbb{C}oe$. \begin{theorem} \label{thm_B_3} The following statements are equivalent: \begin{enum_i} \item $U$ is Feller. \item $R$ is Feller. \item $U\mathbb{C}oe\subset\mathbb{C}oe$, and for all $f\in\mathbb{C}oe$, $x\in E$, $\lim_{t\downarrow 0} U_t f(x) = f(x)$. \item $U\mathbb{C}oe\subset\mathbb{C}oe$, and for all $f\in\mathbb{C}oe$, $x\in E$, $\lim_{\lambda\rightarrow \infty} \lambda R_\lambda f(x) = f(x)$. \item $R\mathbb{C}oe\subset\mathbb{C}oe$, and for all $f\in\mathbb{C}oe$, $x\in E$, $\lim_{t\downarrow 0} U_t f(x) = f(x)$. \item $R\mathbb{C}oe\subset\mathbb{C}oe$, and for all $f\in\mathbb{C}oe$, $x\in E$, $\lim_{\lambda\rightarrow \infty} \lambda R_\lambda f(x) = f(x)$. \end{enum_i} \end{theorem} \begin{remark} The equivalence of statements~(i) and~(ii) has been shown in~\cite[No.~81, p.~291]{DeMe88} based on an application of the Hille--Yosida--theorem. \end{remark} We prepare a sequence of lemmas. The first one follows directly from the dominated convergence theorem: \begin{lemma} \label{lem_B_4} Assume that for $f\in B(E)$, $U_t f \rightarrow f$ as $t\downarrow 0$. Then $\lambda R_\lambda f \rightarrow f$ as $\lambda\to+\infty$. \end{lemma} \begin{lemma} \label{lem_B_5} The semigroup $U$ is strongly continuous on $RB(E)$. \end{lemma} \begin{proof} If strong continuity at $t=0$ has been shown, strong continuity at $t>0$ follows from the semigroup property of $U$, and the fact that $U$ and $R$ commute. Therefore it is enough to show strong continuity at $t=0$. 
Let $f\in B(E)$, $\lambda>0$, $t>0$, and consider for $x\in E$ the following computation \begin{align*} U_t R_\lambda f(x) &- R_\lambda f(x)\\ &= \int_0^\infty e^{-\lambda s} E_x\bigl(f(X_{t+s})\bigr)\,ds - \int_0^\infty e^{-\lambda s} E_x\bigl(f(X_s)\bigr)\,ds\\ &= e^{\lambda t} \int_t^\infty e^{-\lambda s} E_x\bigl(f(X_s)\bigr)\,ds - \int_0^\infty e^{-\lambda s} E_x\bigl(f(X_s)\bigr)\,ds\\ &= \bigl(e^{\lambda t}-1\bigr) \int_t^\infty e^{-\lambda s} E_x\bigl(f(X_s)\bigr)\,ds - \int_0^t e^{-\lambda s} E_x\bigl(f(X_s)\bigr)\,ds\\ \end{align*} where we used Fubini's theorem and the Markov property of $X$. Thus we get the following estimation \begin{align*} \bigl\|U_t R_\lambda f - R_\lambda f\bigr\| &\le \biggl(\bigl(e^{\lambda t} - 1\bigr)\int_t^\infty e^{-\lambda s}\,ds +\int_0^t e^{-\lambda s}\,ds\biggl)\, \|f\|\\ &= \frac{2}{\lambda}\, \bigl(1-e^{-\lambda t}\bigr)\,\|f\|, \end{align*} which converges to zero as $t$ decreases to zero. \end{proof} For $\lambda>0$, $t\ge 0$, $f\in B(E)$, $x\in E$ set \begin{equation} \label{inv_L} U^\lambda_t f(x) = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n!} \,n\lambda\, e^{n\lambda t}\, R_{n\lambda} f(x). \end{equation} Observe that, because of $n\lambda\|R_{n\lambda} f\| \le \|f\|$, the last sum converges in $B(E)$. For the proof of the next lemma we refer the reader to~\cite[p.~477~f]{Ra56}: \begin{lemma} \label{lem_B_6} For all $t\ge 0$, $f\in RB(E)$, $U^\lambda_t f$ converges in $B(E)$ to $U_t f$ as $\lambda$ tends to infinity. \end{lemma} \begin{lemma} \label{lem_B_7} If $U_t\mathbb{C}oe \subset \mathbb{C}oe$ for all $t\ge 0$, then $R_\lambda\mathbb{C}oe\subset \mathbb{C}oe$, for all $\lambda>0$. If $R_\lambda\mathbb{C}oe\subset \mathbb{C}oe$, for some $\lambda>0$, and $R_\lambda\mathbb{C}oe$ is dense in $\mathbb{C}oe$, then $U_t\mathbb{C}oe \subset \mathbb{C}oe$ for all $t\ge 0$. \end{lemma} \begin{proof} Assume that $U_t\mathbb{C}oe \subset \mathbb{C}oe$ for all $t\ge 0$, let $f\in\mathbb{C}oe$, $x\in E$, and suppose that $(x_n,\,n\in\mathbb{N})$ is a sequence converging in $(E,d)$ to $x$. Then a straightforward application of the dominated convergence theorem shows that for every $\lambda>0$, $R_\lambda f(x_n)$ converges to $R_\lambda f(x)$. Hence $R_\lambda f\in \mathbb{C}oe$. Now assume that that $R_\lambda\mathbb{C}oe\subset \mathbb{C}oe$, for some and therefore for all $\lambda>0$, and that $R_\lambda\mathbb{C}oe$ is dense in $\mathbb{C}oe$. Consider $f\in R\mathbb{C}oe$, $t>0$, and for $\lambda>0$ define $U^\lambda_t f$ as in equation~\eqref{inv_L}. Because $R_{n\lambda}f\in\mathbb{C}oe$ and the series in formula~\eqref{inv_L} converges uniformly in $x\in E$, we get $U^\lambda_t f\in\mathbb{C}oe$. By lemma~\ref{lem_B_6}, we find that $U^\lambda_t f$ converges uniformly to $U_t f$ as $\lambda\to+\infty$. Hence $U_t f\in\mathbb{C}oe$. Since $R\mathbb{C}oe$ is dense in $\mathbb{C}oe$, $U_t$ is a contraction and $\mathbb{C}oe$ is closed, we get that $U_t\mathbb{C}oe\subset\mathbb{C}oe$ for every $t\ge 0$. \end{proof} The following lemma is proved as a part of Theorem~17.4 in~\cite{Ka97} (cf.\ also the proof of Proposition~2.4 in~\cite{ReYo91}). \begin{lemma} \label{lem_B_8} Assume that $R\mathbb{C}oe\subset \mathbb{C}oe$, and that for all $x\in E$, $f\in\mathbb{C}oe$, $\lim_{\lambda\to\infty} \lambda R_\lambda f(x) = f(x)$. Then $R\mathbb{C}oe$ is dense in $\mathbb{C}oe$. 
\end{lemma} If for all $f\in\mathbb{C}oe$, $x\in E$, $U_t f(x)$ converges to $f(x)$ as $t$ decreases to zero, then similarly as in the proof of lemma~\ref{lem_B_4} we get that $\lambda R_\lambda f(x)$ converges to $f(x)$ as $\lambda\to+\infty$. Thus we obtain the following \begin{corollary} \label{cor_B_9} Assume that $R\mathbb{C}oe\subset \mathbb{C}oe$, and that for all $x\in E$, $f\in\mathbb{C}oe$, $\lim_{t\downarrow 0} U_t f(x) = f(x)$. Then $R\mathbb{C}oe$ is dense in $\mathbb{C}oe$. \end{corollary} Now we can come to the \begin{proof}[Proof of theorem~\ref{thm_B_3}] We show first the equivalence of statements~(i), (ii), (iv), and (vi): \nablaoindent ``(i)\mathbb{R}a(ii)'' Assume that $U$ is Feller. From lemma~\ref{lem_B_7} it follows that $R_\lambda\mathbb{C}oe\subset\mathbb{C}oe$, $\lambda>0$. Let $f\in\mathbb{C}oe$. Since $U$ is strongly continuous on $\mathbb{C}oe$, lemma~\ref{lem_B_4} implies that $\lambda R_\lambda f$ converges to $f$ as $\lambda$ tends to $+\infty$. Hence $R$ is Feller. \nablaoindent ``(ii)\mathbb{R}a(vi)'' This is trivial. \nablaoindent ``(vi)\mathbb{R}a(iv)'' By lemma~\ref{lem_B_8}, $R\mathbb{C}oe$ is dense in $\mathbb{C}oe$, and therefore lemma~\ref{lem_B_7} entails that $U\mathbb{C}oe\subset\mathbb{C}oe$. \nablaoindent ``(iv)\mathbb{R}a(i)'' By lemmas~\ref{lem_B_7} and \ref{lem_B_8}, $R\mathbb{C}oe$ is dense in $\mathbb{C}oe$, and therefore by lemma~\ref{lem_B_5} $U$ is strongly continuous on $\mathbb{C}oe$. Thus $U$ is Feller. Now we prove the equivalence of~(i), (iii), and (v): \nablaoindent ``(i)\mathbb{R}a(iii)'' This is trivial. \nablaoindent ``(iii)\mathbb{R}a(v)'' This follows directly from Lemma~\ref{lem_B_7}. \nablaoindent ``(v)\mathbb{R}a(i)'' By corollary~\ref{cor_B_9}, $R\mathbb{C}oe$ is dense in $\mathbb{C}oe$, hence it follows from lem\-ma~\ref{lem_B_7} that $U\mathbb{C}oe\subset\mathbb{C}oe$. Furthermore, lemma~\ref{lem_B_5} implies the strong continuity of $U$ on $R\mathbb{C}oe$, and by density therefore on $\mathbb{C}oe$. (i) follows. \end{proof} \end{appendix} \partialrovidecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \partialrovidecommand{\mathbb{M}R}{\relax\ifhmode\unskip\space\fi MR } \partialrovidecommand{\mathbb{M}Rhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \partialrovidecommand{\href}[2]{#2} \end{document}
\begin{document} \vskip 20pt \centerline{\bf \large Bell Inequality and Many-Worlds Interpretation} \vskip 30pt \centerline{\bf L. Vaidman} \centerline{ Raymond and Beverly Sackler School of Physics and Astronomy} \centerline{Tel-Aviv University, Tel-Aviv 69978, Israel} \vskip 20pt It is argued that the lesson we should learn from Bell's inequalities is not that quantum mechanics requires some kind of action at a distance, but that it leads us to believe in parallel worlds. \section{Introduction} Bell's work \cite{Bell64} led to a revolution in our understanding of Nature. I remember attending my first physics conference on ``Microphysical Reality and Quantum Formalism,'' in Urbino, 1985. Most of the talks were about Aspect's experiment \cite{Aspect} confirming the nonlocality of quantum mechanics based on the experimental violations of Bell's inequalities. Although I did not share the skepticism of many speakers regarding the results of Aspect, I was not ready to accept that a local action in one place can instantaneously change anything at another place. So, while for the majority the lesson from Bell was that quantum mechanics requires some ``spooky action at a distance'', I was led by Bell's result to an alternative revolutionary change in our view of Nature. I saw no other way, but accepting the many-worlds interpretation (MWI) of quantum mechanics \cite{Eve,SEP}. I shall start by presenting the Einstein-Podolsky-Rosen (EPR) \cite{EPR} argument. Then, Bell's idea will be presented using the Greenberger-Horne-Zeilinger setup \cite{GHZ} in the form proposed by Mermin \cite{Mermin,myGHZ}. The discussion of nonlocality will suggest that Bell's inequalities are the only manifestation of action at a distance in Nature. The demonstration of the necessity of action at a distance will be done through a detailed analysis of the GHZ experiment. Then, I shall show how multiple worlds resolve the problem of action at a distance. After discussing the issue of nonlocality in the MWI I shall conclude by citing Bell's view on the MWI. \section{EPR - Bell - GHZ }\label{E-B-G} The story of Bell cannot be told without first describing the EPR argument. Instead of following the historical route, I shall use the GHZ setup, which, in my view, is the clearest way to explain the EPR and Bell's discovery. There are three separate sites with Alice, Bob and Charley which share an entangled state of three spin-$\frac{1}{2}$ particles, the GHZ state: \begin{equation} \label{GHZ} |GHZ\rangle = {1\over \sqrt 2}{\Large (}|{\uparrow}_z\rangle_A|{\uparrow}_z\rangle_B|{\uparrow}_z\rangle_C - |{\downarrow}_z\rangle_A|{\downarrow}_z\rangle_B|{\downarrow}_z\rangle_C{\Large )} . \end{equation} The GHZ state is a maximally entangled state of a spin at every site with spins in the two other sites. Therefore, the measurement of the spin in each site and in any direction can be performed, in principle, using measurements at other sites. The assumption that there is no action at a distance in Nature, tells us that the measurements of Alice and Bob cannot change Charley's spin. After Alice's and Bob's measurements, Charley's spin becomes known. The spin value could not have been changed by distant measurements, therefore it existed before. This is the consequence of the celebrated EPR criterion for a physical reality. According to the EPR argument, the values of the spins of Alice, Bob and Charley in all directions are elements of reality. Quantum mechanics does not provide these values. 
Furthermore, the uncertainty relations prevent the simultaneous existence of some of these spin values. Thus, EPR concluded that quantum theory is incomplete. At the end of their paper, EPR expressed hope that one day quantum theory will be completed to make these elements of reality certain. It took almost thirty years before Bell showed that it cannot be done. We need not consider many elements of reality to show the inconsistency. In the GHZ setup it is enough to consider the spin values just in two directions, $x$ and $y$. Let us rewrite the GHZ state in the $x$ basis: \begin{equation} \nonumber {1\over 2}(|{\uparrow}_x\rangle_A |{\uparrow}_x\rangle_B|{\downarrow}_x\rangle_C + |{\uparrow}_x\rangle_A |{\downarrow}_x\rangle_B|{\uparrow}_x\rangle_C + |{\downarrow}_x\rangle_A |{\uparrow}_x\rangle_B|{\uparrow}_x\rangle_C + |{\downarrow}_x\rangle_A |{\downarrow}_x\rangle_B|{\downarrow}_x\rangle_C). \end{equation} We see that the product of the spins measured in the $x$ direction is $-1$ with certainty. Similarly, if we use the $x$ basis for Alice and $y$ bases for Bob and Charley, we learn that the product of the spins measured in the $x$ direction for Alice, and in the $y$ direction for Bob and Charley, is $1$ with certainty. Due to symmetry of the GHZ state, the product is $1$ with certainty also if it is Bob or Charley, instead of Alice, who is the only one to measured the spin in $x$ direction. Therefore, the following equations should be fulfilled for the results of the spin measurements: \begin{eqnarray} \{{\sigma_A}_x\} \{{\sigma_B}_x\} \{{\sigma_C}_x\} & = & -1 , \label{E1}\\ \{{\sigma_A}_x\} \{{\sigma_B}_y\} \{{\sigma_C}_y\} & = & 1 ,\label{E2}\\ \{{\sigma_A}_y\} \{{\sigma_B}_x\} \{{\sigma_C}_y\} & = & 1 , \label{E3}\\ \{{\sigma_A}_y\} \{{\sigma_B}_y\} \{{\sigma_C}_x\} & = & 1 ,\label{E4} \end{eqnarray} \noindent where $\{{\sigma_A}_x\}$ signifies the outcome of the measurement of $\sigma_x$ by Alice, etc. All values of spin components in the above equations are EPR elements of reality. The outcomes should exist prior to the measurement and independent of what is done to other particles. So, according to EPR, the value of Alice's spin $\{{\sigma_A}_x\}$ appearing in (\ref{E1}) should be the same as in equation (\ref{E2}) and similarly for other values of spin variables. But this contradicts the fact that equations (\ref{E1}-\ref{E4}) cannot be jointly satisfied: the product of the lefthand sides is a product of squares, so it is positive, while the product of the righthand sides is $-1$. \section{From Bell inequality to nonlocality } Apparently, the first conclusion which can be reached here is that Nature is random. The predictions of quantum theory including the results (\ref{E1}-\ref{E4}) were tested and verified in all experiments performed to date. We have shown above that equations (\ref{E1}-\ref{E4}) are inconsistent with the assumption that there are definite predictions for all these results. Therefore, at least some of the results should not exist prior to the measurement: the outcome of the measurement is random! However, there is a problem with this ``proof'' of randomness. Let us assume that Charley's outcome is the one which is random. This contradicts the fact that after Alice's and Bob's measurements, Charley's result is definite. A nonlocal action is then required to fulfill the equation. But if nonlocality is accepted, the EPR concept of elements of reality loses its basis, so the proof of randomness fails. This is a proof of nonlocality. 
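The quantum predictions entering equations (\ref{E1}-\ref{E4}) can also be checked by a direct computation of the GHZ expectation values. The following minimal NumPy sketch (purely illustrative; the explicit Pauli matrices and variable names are ours) reproduces the four products:
\begin{verbatim}
import numpy as np

# Pauli matrices and the z-basis single-spin states
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)

def kron(*ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

# GHZ state (|uuu> - |ddd>)/sqrt(2)
ghz = (kron(up, up, up) - kron(dn, dn, dn)) / np.sqrt(2)

def corr(a, b, c):
    # expectation value <GHZ| a x b x c |GHZ>
    return float(np.real(ghz.conj() @ kron(a, b, c) @ ghz))

print(corr(sx, sx, sx))  # -1: product of the three x spins
print(corr(sx, sy, sy))  # +1
print(corr(sy, sx, sy))  # +1
print(corr(sy, sy, sx))  # +1
\end{verbatim}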
There is no way to explain these quantum correlations by some underlying local definite values or local probability distributions. It seems that the conclusion must be that actions (measurements in $x$ or $y$ directions) of Alice and Bob change the outcome of Charley's measurement performed immediately after. It is a proof that there is an action at a distance. It got the name of ``spooky action at a distance'' since there is no known underlying mechanism, and furthermore, since it is not observable: one cannot send signals to Charley using measurements at Alice's and Bob's sites. \section{Against nonlocality } The role of science is to explain how and why things happen. A bird falls because a bullet hits it. The hunter shot the bullet. He was able to point his gun since photons reflected by the bird reached his eyes. In all these explanations, objects are present in particular places and they interact with other objects by sending particles (photons, bullets) from one object to another. This allows the concept of location of an object: it is the place where it can be influenced directly by other objects and where it can directly influence other objects. Clearly, this is the picture in classical physics: only local actions exist. It is true that classical physics also has global formulations. A minimal action principle provides a complete solution given initial conditions without presenting an explicit local mechanism. There is a logical option for the existence of a world described by an action principle with nonlocal interactions, as well as a world with local interactions but without a minimal action principle. I feel that the local action explanation is the most important part of our picture of Nature and thus we should try to keep it even when we turn to the correct physical theory which is not classical, but quantum. In quantum mechanics, the Aharonov-Bohm effect \cite{AB} (AB) seems to be a counterexample. An electron changes its motion as a function of the magnetic field in a region where the electron does not pass. There is a nonzero vector potential at the location of the electron, but this potential is not locally defined; only the line integral of the potential is physical (gauge invariant). Recently, I solved, at least for myself, this apparent conflict with locality. I found local explanations of both the scalar and the magnetic AB effects \cite{VAB}. The electron moves in a free-field region, but the source of the potential, the solenoid, or the capacitor, feels the field of the electron. I considered the latter as quantum objects and realized that there is an unavoidable entanglement between them and the electron during the AB interference experiment. I calculated the phase acquired by the solenoid in the AB experiment and found that it is exactly equal to the AB phase. In the AB experiment the phase is manifested in the interference shift of the electron, but it is explained by the local action of the electromagnetic field of the electron on the solenoid. Thus, the only manifestation of a nonlocal action that we know of in physics is the unexplained nonlocal Bell-type correlations. Since physics has no mechanism for nonlocal actions, its existence appears to be a ``miracle''. In other words, physics cannot explain it. As a physicist, I want to believe that we do understand Nature. Bell apparently tells us that it is impossible, or that we need to make a large conceptual change in our views of Nature. 
The best option I see in this situation is to reject a tacit assumption, necessary for Bell's proof, that there is only one world. There is one physical Universe, but there are many outcomes in every quantum experiment, corresponding to many worlds as we perceive them. \section{From Bell inequalities to the MWI } How exactly does the MWI resolve the difficulty posed by the EPR-Bell-GHZ argument? The concept of the EPR element of reality is the core of the breakdown. \begin{quote} {\it If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.} \flushright (Einstein, Podolsky, and Rosen~1935) \end{quote} It is true that Alice and Bob, after measuring and collecting the results of their spin $x$ measurements, can predict with certainty the outcome of Charley's spin $x$ measurement. However, believing in the MWI, Alice and Bob know that their prediction is not universally true. It is true only in their particular world. They {\it know} that there are parallel worlds in which Charley's outcome is different. There is no ``counterfactual definiteness'', which is frequently assumed in Bell-type arguments: no definite outcome exists prior to the measurement. To see explicitly how the MWI removes the action at a distance of Bell-type experiments, consider a demonstration of the GHZ experiment which is not yet possible with current technology, but may become possible in the near future. The choice of which components of the spin are measured by Alice, Bob, and Charley is made according to the ``random'' results of other quantum measurements \cite{Sheid}. I argued that a better strategy is to rely on macroscopic signals from galaxies in different parts of the Universe \cite{VBell}, but for my analysis here, a quantum device is more appropriate. In the GHZ setup analysis, not all combinations of measurements are considered: spin $x$ measurements are performed by just one observer or by all three observers. To ensure this, we distribute between Alice, Bob and Charley another GHZ set of spin-$\frac{1}{2}$ particles. So, the state of all particles (in the $z$ basis) is: \begin{equation} \label{GHZ+} {1\over 2}{\Large (}|{\uparrow}\rangle_A|{\uparrow}\rangle_B|{\uparrow}\rangle_C - |{\downarrow}\rangle_A|{\downarrow}\rangle_B|{\downarrow}\rangle_C{\Large )}~{\Large (}|{\uparrow}\rangle_A|{\uparrow}\rangle_B|{\uparrow}\rangle_C - |{\downarrow}\rangle_A|{\downarrow}\rangle_B|{\downarrow}\rangle_C{\Large )} . \end{equation} Alice, Bob and Charley perform a spin $x$ measurement of their additional spins. If the outcome is 1, then the spin $y$ measurement of the second particle, the one from the original GHZ set, is performed. If the outcome is -1, then the $x$ component of the spin is measured instead. Alice's measurements split the world into four worlds. According to Everett's ``relative state formulation of quantum theory'' \cite{Eve}, these are Alice's worlds: $(\uparrow_{xA}),(\downarrow_{xA}),(\uparrow_{yA}),(\downarrow_{yA})$. Nothing changes at Bob's and Charley's sites because of Alice's actions. The complete local descriptions of Bob's and Charley's GHZ spins remain the same mixtures: completely unpolarized spins. Now let us add the measurements of Bob. He also splits his world into four Everett worlds. 
According to the EPR argument, after Alice's and Bob's measurements, there is an element of reality associated with Charley's spin. Indeed, the information about the results of their measurements tells us what Charley's spin is. For example, in the world $(\uparrow_{xA}, \uparrow_{xB})$, Charley's spin is $|{\downarrow}\rangle_x$. But in a parallel world $(\uparrow_{xA}, \downarrow_{xB})$, Charley's spin is $|{\uparrow}\rangle_x$. So, there is no single element of reality of Charley's spin $x$ in Nature. According to the definition of a ``world'' that I prefer \cite{SEP}, in any world all macroscopic objects have well localized states. All measuring devices show definite values, so in every world Alice, Bob and Charley have well-defined values of spins. Alice first splits the world into four different worlds according to her measurements. Each of the worlds is then split again into four worlds by Bob. Charley, however, does not make any additional splitting. In every one of the 16 worlds created by Alice and Bob, the outcomes of his two spin measurements are already fixed. Here are all 16 worlds: \begin{eqnarray} \nonumber (\downarrow_{xA}, \downarrow_{xB}, \downarrow_{xC}),~~(\downarrow_{xA}, \uparrow_{xB}, \uparrow_{xC}),~~(\uparrow_{xA}, \downarrow_{xB}, \uparrow_{xC}),~~(\uparrow_{xA}, \uparrow_{xB}, \downarrow_{xC}),~~ \\ \nonumber (\uparrow_{xA}, \uparrow_{yB}, \uparrow_{yC}),~~(\uparrow_{xA}, \downarrow_{yB}, \downarrow_{yC}),~~(\downarrow_{xA}, \uparrow_{yB}, \downarrow_{yC}),~~(\downarrow_{xA}, \downarrow_{yB}, \uparrow_{yC}),~~ \\ (\uparrow_{yA}, \uparrow_{xB}, \uparrow_{yC}),~~(\uparrow_{yA}, \downarrow_{xB}, \downarrow_{yC}),~~(\downarrow_{yA}, \uparrow_{xB}, \downarrow_{yC}),~~(\downarrow_{yA}, \downarrow_{xB}, \uparrow_{yC}),~~ \\\nonumber (\uparrow_{yA}, \uparrow_{yB}, \uparrow_{xC}),~~(\uparrow_{yA}, \downarrow_{yB}, \downarrow_{xC}),~~(\downarrow_{yA}, \uparrow_{yB}, \downarrow_{xC}),~~(\downarrow_{yA}, \downarrow_{yB}, \uparrow_{xC}).~~ \end{eqnarray} All the worlds fulfill equations (\ref{E1}-\ref{E4}). However, we do not get a contradiction as in Section \ref{E-B-G} because the equations do not have to be correct together: each one of the equations is correct in the four worlds to which it applies. Different equations are valid in different worlds, so that the values of the spins in the equations can be different. The contradiction arises if we assume that there is only one world. \section{MWI and nonlocality } As shown above, the MWI removes action at a distance from quantum physics. As in classical relativistic physics, any local action on a system changes nothing whatsoever at remote locations at the moment of disturbance. This does not mean, however, that quantum mechanics provides a local picture similar to classical physics with particles and fields localised in 3-space. In classical physics, the complete description is given by specifying the trajectories of the particles and values of the fields: \begin{equation}\label{ClUni} {\rm Universe} = \{\vec{r}_i(t),~ \vec F_j(\vec{r},t)\}. \end{equation} It is local because it can be alternatively presented as an infinite set of vectors for all space-time points $(\vec{r},t)$ which provide values of projection operators $\{{\rm \bf P}_i(\vec{r},t)\}$ and values of all fields at this point $\{\vec F_j(\vec{r},t) \}$. In classical physics, outcomes of an experiment at every site are fully specified by the local description of this site. 
The measuring devices and the observers can be expressed in the same language, in terms of the locations $\vec{r}_i$s of the particles that they are made of. The final positions of the particles of the measuring devices are fully explained by their initial states and by the local interactions occurring in the interval between the initial and final times. Classical physics is deterministic (classical probability theory is relevant only for situations with incomplete knowledge of the full description), so the issue of correlations between outcomes of experiments at different places does not arise. In a quantum world, if all particles are in a product state, then the description of the Universe is similar to the classical one: it is a set of these wave functions, \begin{equation}\label{prodUni} {\rm Universe} = \{\Psi_i(\vec{r},t)\}, \end{equation} which also can be represented as an infinite set of vectors with values of the wave functions at all space-time points. (For the current analysis it is not necessary to go to field theory which describes quantum fields). However, if we introduce measuring devices and observers, the local coupling of the measurement process will destroy the above product state. The description of the Universe then is the wave function in the configuration space of all particles, \begin{equation}\label{Uni} {\rm Universe} = \Psi(\vec{r}_1, \vec{r}_2,...\vec{r}_i,...,t) , \end{equation} which cannot be represented as a set of vectors in space-time points. In the MWI, this wave function in the configuration space is all that exists. It explains everything, but not in a simple way. It does not provide a transparent connection to our experiences. The way to connect the Universal wave function to our experience is to decompose it into a superposition of terms, each one corresponding to a different world. In each such term all variables specifying states of macroscopic objects are essentially in a product state. In each of the 16 worlds of our GHZ experiment, Alice, Bob and Charley have definite results of their spin measurements. What makes this situation nonlocal is that while all four different local options are present for all observers, i.e., there are four Everett worlds for Alice, and separately for Bob and for Charley, we do not have 64 worlds. Specifying Everett worlds of two observers fixes the world of the third. This connection between local worlds of the observers is the nonlocality of the MWI. Is there any possibility for an action at a distance in the framework of the MWI? Obviously, on the level of the physical Universe which includes all the worlds, local action cannot change anything at remote locations. However, a local action splits the world which is a nonlocal concept, and local actions can make splitting to worlds which differ at remote locations. Thus, an observer for whom only his world is relevant, has an illusion of an action at a distance when he performs a measurement on a system entangled with a remote system. (He also has an illusion of randomness each time he performs a quantum measurement \cite{qmdet}.) Consider again our example when Alice and Bob finished their measurements, but Charley still did not make his measurements. 
He knows that Alice and Bob made the measurements, and he knows that there are 16 worlds: \begin{eqnarray} \nonumber (\downarrow_{xA}, \downarrow_{xB}),~~(\downarrow_{xA}, \uparrow_{xB}),~~(\uparrow_{xA}, \downarrow_{xB}),~~(\uparrow_{xA}, \uparrow_{xB}),\\ \nonumber (\uparrow_{xA}, \uparrow_{yB}), ~~(\uparrow_{xA}, \downarrow_{yB}), ~~(\downarrow_{xA}, \uparrow_{yB}),~~(\downarrow_{xA}, \downarrow_{yB}), \\ (\uparrow_{yA}, \uparrow_{xB}),~~(\uparrow_{yA}, \downarrow_{xB}), ~~(\downarrow_{yA}, \uparrow_{xB}),~~(\downarrow_{yA}, \downarrow_{xB}),\\ (\uparrow_{yA}, \uparrow_{yB}), ~~(\uparrow_{yA}, \downarrow_{yB}),~~(\downarrow_{yA}, \uparrow_{yB}),~~(\downarrow_{yA}, \downarrow_{yB}).\nonumber \end{eqnarray} Charley is in all these worlds. He is in a single Everett world which includes 16 worlds according to my definition. It is meaningless to ask him now which of the 16 worlds he is in. If Charley follows the instructions and performs the measurements according to the rules stated above, he will create four Everett worlds by producing macroscopic outcomes in his laboratory. Each of his new Everett worlds belongs to four worlds specified by Alice and Bob. Charley has a choice of performing or not performing the measurements, and this will change the set of worlds he will belong to. He can also make spin measurements not in accordance with the instructions: for example, he can make spin $y$ measurements instead of $x$ measurements and vice versa. Now, for every outcome he will end up belonging to eight, instead of four, worlds. He will also increase the total number of existing worlds to 32. If, instead of following the instructions, all observers make both spin measurements in the $z$ direction, there will be only 4 worlds: \begin{eqnarray} \nonumber (\downarrow_{zA},\downarrow_{zA},\downarrow_{zB},\downarrow_{zB}, \downarrow_{zC}, \downarrow_{zC}),~~ (\downarrow_{zA},\uparrow_{zA},\downarrow_{zB},\uparrow_{zB}, \downarrow_{zC}, \uparrow_{zC}),~~\\ (\uparrow_{zA},\downarrow_{zA},\uparrow_{zB},\downarrow_{zB}, \uparrow_{zC}, \downarrow_{zC}),~~ (\uparrow_{zA},\uparrow_{zA},\uparrow_{zB},\uparrow_{zB}, \uparrow_{zC}, \uparrow_{zC}).~~ \end{eqnarray} Each observer, by performing measurements in the $z$ direction, creates worlds in which the other observers have definite values of spin in the $z$ direction. It looks like nonlocal action at a distance: a local measurement changed some property at a remote location. But it is a subjective change for an observer in a particular world: he understands that in the physical universe, which includes worlds with all outcomes of his local measurements, the remote spins also have all possible values. 
My first formulation of the MWI and arguments in its favor appeared in a preprint \cite{schizo} that I sent to John Bell at the end of 1989. He was not convinced. He replied with a short paragraph saying that if there are multiple worlds, there should be one in which I do not believe in the MWI. He added, more seriously, that he does not know the right way to understand quantum mechanics, but the MWI does not sound plausible to him. He expressed this view in more detail in the Nobel Symposium \cite{Bell86}: \begin{quote} The `many world interpretation' seems to me an extravagant, and above all an extravagantly vague, hypothesis. I could almost dismiss it as silly. And yet... It may have something distinctive to say in connection to `Einstein Podolsky Rosen puzzle', and it would be worthwhile, I think, to formulate some precise version of it to see if it really is so. And the existence of all possible worlds may make us more comfortable about the existence of our own world... which seems to be in some ways a highly improbable one.\flushright (John Bell, 1986) \end{quote} For me, Bell's result was the first reason to accept the MWI. Since then, the discovery of teleportation and of interaction-free measurements turned my belief into a strong conviction \cite{PSA}. I feel that I have now developed ``the precise version of the MWI'' which John Bell was looking for \cite{qmdet}. I regret that I did not have this clear vision in 1989 when I discussed the interpretation of quantum mechanics with John Bell in Erice. I thank Eliahu Cohen and Shmuel Nussinov for helpful discussions. This work has been supported in part by the Israel Science Foundation Grant No. 1311/14 and the German-Israeli Foundation Grant No. I-1275-303.14. \small \end{document}
\begin{document} \title{Quantum-secured single-pixel imaging with enhanced security} \author{Jaesung Heo} \author{Junghyun Kim} \author{Taek Jeong} \author{Yong Sup Ihn} \author{Duk Y. Kim} \author{Zaeill Kim} \author{Yonggi Jo}\email{[email protected]} \affiliation{Agency for Defense Development, Daejeon 34186, South Korea} \date{\today} \begin{abstract} In this paper, we propose a novel quantum-secured single-pixel imaging method that utilizes non-classical correlations of a photon pair. Our method can detect any attempts to deceive it by exploiting the non-classical correlation of the photon pairs, while rejecting strong chaotic light illumination through photon heralding. A security analysis based on polarization-correlation has been conducted, demonstrating that our method has improved security compared to existing quantum-secured imaging. We also provide proof-of-principle demonstrations of our method and trustworthy images reconstructed using our security analysis. Our proposed method can be developed using matured techniques used in quantum secure communication, thus offering a promising direction for practical applications in secure imaging. \end{abstract} \maketitle \section{Introduction} The non-classical correlations between quantum systems are the fundamental source of quantum advantage in various quantum information protocols. Entanglement-based quantum key distribution (QKD) \cite{E91,BBM92} and quantum-secured imaging \cite{Malik2012,Malik2012IEEE} utilize quantum correlations to provide security against potential eavesdropping attacks in the quantum channel, while quantum ghost imaging (QGI) \cite{Pittman1995} employs the correlations to enhance the signal-to-noise ratio (SNR) of an image beyond the classical limit. In correlation-based quantum-secured imaging \cite{Malik2012IEEE}, non-classical correlations can be analyzed to detect any attempts to deceive the imaging system, and the SNR of an image can be enhanced similarly to QGI. Recently, an experimental demonstration of quantum-secured ghost imaging in the time-frequency domain was reported \cite{Yao2018}. The original proposal for quantum-secured imaging was based on a prepare-and-measure approach \cite{Malik2012}, where the same photon was used for both imaging and security check. However, in the correlation-based quantum-secured imaging \cite{Malik2012IEEE, Yao2018}, the imaging and security check are carried out sequentially using different photon pairs. Furthermore, while the original proposal's security analysis focused on the most rudimentary scenario, there would exist another potential attack against the system. In this article, we propose a novel quantum-secured single-pixel imaging (QS-SPI) method that exploits non-classical correlations of a photon pair for imaging and security checking, simultaneously. Our imaging method is based on single-pixel imaging (SPI), also known as computational ghost imaging (CGI). Compared to the existing quantum-secured imaging methods, our proposed method offers enhanced security against potential attacks. Specifically, we consider a deceiving attack that constructs a fake target image by intercepting genuine signals and illuminating false signals. By analyzing non-classical correlations of photon pairs, the attack can be exposed by an error rate. However, existing methods are unable to detect a partial deceiving attack, which creates a deceiving image with an error rate below the detection threshold by mixing true and false signals. 
QS-SPI can detect this attack by analyzing not only the non-classical correlations of photon pairs, but also the spatial profiles of the obtained images. Furthermore, our method can extract trustworthy information while discarding delusive information under a partial deceiving attack. As QS-SPI conducts imaging and security checking simultaneously, the obtained images are closely related to the security of the protocol, which enhances the security. We demonstrate our theory through proof-of-principle experiments based on polarization-correlation. We expect that advanced techniques used in quantum secure communication can further improve the security of QS-SPI. This article is organized as follows. In Sec. \ref{Sec2}, we propose the QS-SPI setup and describe its security. The experimental realization is shown in Sec. \ref{Sec3}, and we conclude in Sec. \ref{Sec4}. \section{Quantum-Secured Single-Pixel Imaging}\label{Sec2} \begin{figure} \caption{A schematic diagram of QS-SPI is shown. Alice, who operates the imaging system, generates pairs of polarization-entangled photons. The signal photon is sent to an SLM to be filtered according to a desired imaging pattern. This filtered photon illuminates the target and is reflected to be measured by SPCMs. The idler photon, the other photon in the entangled pair, is measured directly by the other set of SPCMs. The time-correlation and polarization-correlation of the two modes are analyzed from the measured data of the SPCMs.} \label{Scheme} \end{figure} In SPI, an image is constructed by using the correlation between the spatial information of a beam, given by a spatial light modulator (SLM), and the intensity of that beam after interacting with a target. Single-pixel detectors, such as photodiodes, are used to measure the intensity. Let us denote the $k\text{-th}$ spatial pattern on the SLM as $P^{(k)}$ and its corresponding measured intensity as $I^{(k)}$. Varying the spatial patterns, the corresponding intensities are recorded, and the correlation of the two yields a target image $G$, \begin{align}\label{GI} G(i,j)=\braket{P^{(k)}(i,j)I^{(k)}}-\braket{P^{(k)}(i,j)}\braket{I^{(k)}}, \end{align} where $i$ and $j$ represent the pixel position of the 2D image and $\braket{\cdot}$ denotes averaging over the whole set of $N$ patterns \cite{Shapiro2008, Gibson2020}. Various imaging patterns can be used for $P^{(k)}$, but to enhance image quality while reducing the number of patterns required for imaging, an orthogonal pattern set is selected. One of the orthogonal pattern sets used for SPI is the set of Hadamard patterns \cite{Pratt1969, Souza1988, Duarte2008, Sun2017}. Hadamard patterns are constructed from the Hadamard matrix. A $2^{n+1}\times 2^{n+1}$ Hadamard matrix is calculated by the following: \begin{align} H_{2^{n+1}}=H_{2^{n}}\otimes H_{2}, \end{align} where \begin{align} H_{2}=\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \end{align} and $\otimes$ denotes the tensor product. Hadamard patterns are generated by reshaping each row of the Hadamard matrix $H_{2^{2n}}$ into a $2^{n}\times 2^{n}$ square matrix. Since negative pixel values cannot be displayed, two shots are necessary to represent a Hadamard pattern \cite{Gibson2020}. Mapping matrix element 1 to white and -1 to black gives one imaging pattern, and the other one is its inverse. For a $2^n \times 2^n$ resolution image, the total number of shots required for imaging is $2^{2n+1}$. Fig.~\ref{Scheme} shows a schematic diagram of QS-SPI.
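To illustrate the Hadamard-pattern construction and the differential reconstruction of Eq.~\ref{GI}, the following minimal numerical sketch (using NumPy; the binary block target and the noiseless intensities are illustrative assumptions, not part of the setup) recovers a toy target from simulated single-pixel measurements:
\begin{verbatim}
import numpy as np

n = 5                       # 2^n x 2^n resolution (32 x 32, as in the experiment)
size = 2 ** n

# Hadamard matrix H_{2^{2n}}, built recursively from H_2
H = np.array([[1]])
for _ in range(2 * n):
    H = np.kron(H, np.array([[1, 1], [1, -1]]))

# Each row of H reshaped into a 2^n x 2^n pattern
patterns = H.reshape(-1, size, size)

# Illustrative binary target (a bright square), standing in for the object
target = np.zeros((size, size))
target[8:24, 8:24] = 1.0

# Single-pixel intensities: each +/-1 pattern is realized as two binary shots,
# and their difference gives I^(k) = sum_{i,j} P^(k)(i,j) * target(i,j)
I = np.array([((p > 0) * target).sum() - ((p < 0) * target).sum()
              for p in patterns])

# Differential reconstruction of Eq. (GI): G = <P I> - <P><I>
G = (patterns * I[:, None, None]).mean(axis=0) \
    - patterns.mean(axis=0) * I.mean()

# Up to normalization, G reproduces the target; only pixel (0,0), which
# corresponds to the all-ones pattern, is lost to the mean subtraction
print(np.allclose(G / G.max(), target))
\end{verbatim}
Because the Hadamard rows are mutually orthogonal, the differential image reproduces the target up to normalization in this idealized, noise-free case.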
Polarization-entangled photon pairs generated via spontaneous parametric down-conversion (SPDC) are exploited for the security check. The Bell state used is $\ket{\Phi^{+}}=\frac{1}{\sqrt{2}}\left(\ket{H,H}_{SI}+\ket{V,V}_{SI}\right)$, where $\ket{\cdot}$ represents the polarization state of a single photon, $H$ ($V$) denotes horizontal (vertical) polarization, and the subscripts $S$ and $I$ denote the signal and idler modes, respectively. The security of QS-SPI is based on the time- and polarization-correlation of the photon pairs. The signal photon is sent to an SLM. Based on an imaging pattern, the SLM spatially filters the signal, and the selected photon illuminates a target. The idler photon is ideally retained and measured to analyze correlations with the signal photon. In the QS-SPI setup, four single photon counting modules (SPCMs), which have sub-ns timing resolution, are exploited as single-pixel detectors \cite{Ndagano2020,Li2021,Defienne2021}. They provide a simpler measurement setup compared to a QGI setup exploiting an electron-multiplying charge-coupled device (EMCCD) \cite{Gregory2020, Gregory2021}. As an SPCM has a higher acceptable noise level than an EMCCD, QS-SPI can reconstruct a trustworthy image even if the intensity of a false signal reaches the EMCCD saturation level. Moreover, given the timing resolution of SPCMs, we can exploit the time-correlation of photon pairs to reject strong background noise \cite{Yang2020,Kim2021APL,Johnson2022}, while strong chaotic light can severely disturb a conventional CGI system \cite{Li2021,Kim2022,Heo2022}. QS-SPI uses coincidence counts of the signal and idler modes as the imaging intensity $I$ to exploit the time-correlation. Thus, it is naturally immune to an imaging disrupting attack, an attack in which strong chaotic light illuminates the imaging system to saturate the sensor. Note that a digital micromirror device (DMD) is more suitable as the SLM of QS-SPI than a liquid-crystal-based SLM. To control and measure correlations of signals at the single-photon level, an SLM with high reflectivity and polarization independence is required, and a DMD is better suited for these purposes than a liquid-crystal-based SLM. \subsection{Method of image deceiving attack} The security analysis of QKD has been well established for protecting photon-carried information against eavesdropping, which is directly related to the generation of secret keys. For example, in polarization-based QKD, the polarization-encoded information of a photon is critical to the secret key. There are many advanced attacks for extracting polarization-encoded information of successively transmitted photons, such as collective attacks which exploit demanding technologies including quantum cloning machines \cite{Buzek1996, BrussEkert1998}, quantum memories \cite{Lvovsky2009}, and collective measurements \cite{Biham1997, Acin2007}. However, in SPI, the main purpose of an attack is to deceive the imaging system into constructing a fake image rather than to eavesdrop on secret keys. For this purpose, the meaningful attack is to modulate the intensity (photon number) of the light for the formation of a fake image. Under this circumstance, an intercept-and-resend attack is the probable strategy for image-deceiving attacks \cite{Malik2012}. \begin{figure} \caption{This figure shows a schematic diagram of QS-SPI under a potential attack by an adversary, referred to as Eve.
In this attack, Eve manipulates the photon number of a fake signal for the purpose of generating a fraudulent image, and the fake signal is then sent to Alice. To evade detection, Eve must ensure that the polarization of the fake signal is similar to that of Alice's signal, which can be achieved by using the intercept-and-resend attack.} \label{SchemeEve} \end{figure} Fig.~\ref{SchemeEve} shows a schematic diagram of QS-SPI under a possible attack by an adversary called Eve. It is assumed that Eve can exploit all implementations allowed by the laws of physics and that all processes of QS-SPI are known to Eve. For the deceiving attack, Eve possesses time-resolved single-photon detectors with polarization discrimination and an on-demand single-photon source with polarization control. Eve intercepts Alice's signal photon and discriminates its polarization. Since it is not possible to measure a quantum state in conjugate bases simultaneously, disturbance of the original photon state is inevitable. After the polarization measurement, without a delay, the on-demand single-photon source generates a photon in the measured polarization, and the photon is sent to Alice. SPI constructs an image from spatial pattern information and the number of received photons, so Eve should control the ratio ${n_g}/{n_m}$ according to the DMD pattern to make QS-SPI construct a fraudulent image, where $n_g$ ($n_m$) is the number of photons generated (measured) by Eve. As the signal and the idler form a polarization-entangled photon pair, the polarization of the signal is heralded when the polarization of the idler is measured. However, an intercept-and-resend attack leads to the detection of signal photons in an unheralded polarization. Such errors are the key resource for the security analysis of QS-SPI. Details of the security check are described in the following section. \subsection{Security analysis in QS-SPI} The presence of Eve is tested by Alice by measuring photons in mutually unbiased bases (MUBs). One basis, the rectilinear basis, consists of horizontal and vertical polarization, and the other, the diagonal basis, consists of diagonal ($D$) and anti-diagonal ($A$) polarization. For the two bases, the following relations are satisfied: $\ket{D}=\frac{1}{\sqrt{2}}\left(\ket{H}+\ket{V}\right)$ and $\ket{A}=\frac{1}{\sqrt{2}}\left(\ket{H}-\ket{V}\right)$, and thus, the two bases are MUBs. Alice, who has a QS-SPI system, randomly chooses the measurement basis for the security check. Unlike in QKD, it is not necessary for the basis choices of the signal and idler modes to be independently random, since the measurement setups of both modes belong to Alice. Let us define $r_{1}\coloneqq H$, $r_{2}\coloneqq V$, $d_{1}\coloneqq D$, and $d_{2}\coloneqq A$. Then, $P(X_{i},X_{j})=C(X_{i},X_{j})/\sum_{k,l=1}^{2}C(X_{k},X_{l})$ is satisfied, where $C(x,y)$ is the coincidence counts of $x$- and $y$-polarized photons in the signal and idler modes, respectively, $P(x,y)$ is the probability of the coincidence count $C(x,y)$ occurring, $X\in\{r,d\}$, and $i,j\in\{1,2\}$. From $\ket{\Phi^{+}}$, $P(X_{i},X_{i})=1/2$ and $P(X_{i},X_{j})=0$ for $i\neq j$, indicating that coincidence counts of the latter type are erroneous. The error rate is defined as the ratio of erroneous coincidence counts to all coincidence counts. Since the idler photon is unhindered by Eve's attack, the error rate is defined with respect to the polarization of the idler. Thus, the polarization error rate for an $X_{i}$-polarized idler is \begin{align}\label{polE} e_{X_{i}}&=\frac{C(X_{j},X_{i})}{\sum_{k=1}^{2}C(X_{k},X_{i})}, \end{align} where $i\neq j$.
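To make the definition concrete, the error rates of Eq.~\ref{polE} can be evaluated directly from a table of coincidence counts. The following minimal sketch (NumPy; the count values are made-up numbers for illustration, not measured data) computes $e_{H}$, $e_{V}$, $e_{D}$, and $e_{A}$:
\begin{verbatim}
import numpy as np

# Hypothetical coincidence-count tables C[s, i]; rows: signal polarization,
# columns: idler polarization, for the rectilinear (H, V) and diagonal (D, A)
# bases. The numbers are made up for illustration only.
C_rect = np.array([[480,  12],    # C(H,H), C(H,V)
                   [ 15, 505]])   # C(V,H), C(V,V)
C_diag = np.array([[490,   9],    # C(D,D), C(D,A)
                   [ 11, 498]])   # C(A,D), C(A,A)

def error_rates(C):
    """Eq. (polE): e_{X_i} = C(X_j, X_i) / sum_k C(X_k, X_i) with j != i."""
    column_sums = C.sum(axis=0)                # all counts for a given idler polarization
    erroneous = np.array([C[1, 0], C[0, 1]])   # signal found in the unheralded polarization
    return erroneous / column_sums

e_H, e_V = error_rates(C_rect)
e_D, e_A = error_rates(C_diag)
print(e_H, e_V, e_D, e_A)   # all well below 25% in this attack-free example
\end{verbatim}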
Under no attack, the error rates are always zero. However, if there is an adversary who tries to disturb the imaging system, erroneous coincidence counts increase, so Alice can notice the presence of an attack by analyzing the error rates. Eve possesses her own MUBs for polarization measurement. Let us denote their constituent polarizations in primed notation, i.e., $H'$, $V'$, $D'$, and $A'$. When the two parties choose corresponding bases, let the angle difference between them be $\theta$, measured counterclockwise from one of Alice's polarizations to the corresponding one of Eve, i.e., the angle measured counterclockwise from the $H$-polarization to the $H'$-polarization. Then, the angle difference between their respective different bases is $\theta\pm\frac{\pi}{4}$. Suppose Alice chooses the rectilinear basis and measures the idler in the $H$-polarization. For Eve's own rectilinear basis, Eve measures Alice's signal in the $H'$ ($V'$)-polarization with probability $\cos^2\theta$ ($\sin^2\theta$). For either of Eve's results, the joint probability of that result and Alice detecting a $V$-polarized signal is $\cos^2\theta\sin^2\theta$. Thus, the error rate observed by Alice is $2\cos^2\theta\sin^2\theta=(1-\cos^2 2\theta)/2$. If Eve chooses the other measurement basis, the error rate is calculated by replacing $\theta$ with $\theta\pm\frac{\pi}{4}$: $(1-\sin^2 2\theta)/2$. Since Eve's basis choice is random, the error rate for an $H$-polarized idler is calculated as follows: $e_{H}=\left[(1-\cos^2 2\theta)/2+(1-\sin^2 2\theta)/2\right]/2=1/4$. The result indicates that the threshold error rate for determining the presence of an attack is 25\% regardless of the angle difference $\theta$. If the error rate in either the rectilinear or the diagonal basis is greater than 25\%, the protocol is compromised. Rather than the full deceiving attack, Eve can perform a partial attack, i.e., Eve performs the attack against some photons, while the other photons are passed through. With this attack, Eve cannot manipulate a full image, but can add some pixels to the original image. For example, when the original image is "1", Eve can turn the image into "4" by adding one vertical line and one diagonal line. Even when this attack is performed, the error rate can remain below $25\%$, since only the part of the photons influenced by Eve contributes to the error rate. Thus, the existing security analysis of quantum-secured imaging methods cannot detect the presence of Eve, even if the image is modified from the original image. To detect this partial deceiving attack, we should analyze the intensity of Eve's light and the pixel area that Eve tries to modify. Here, we provide a security analysis to show how these parameters can be used to detect the attack and how we can reconstruct a trustworthy image. Let the coincidence counts originating from the signal of Alice (Eve) be $n_{A(E)}$. With target profile $\chi_{A(E)}$ and channel efficiency $\eta_{A(E)}$, the detected coincidence counts $I_D$ at the $n$-th imaging pattern are \begin{align} I_D^{(n)} &= \sum_{k\in\{A,E\}} \frac{1}{M}\sum_{i,j}P^{(n)}(i,j)\left( \eta_k(i,j) \chi_k(i,j) n_k \right) \\ &\coloneqq \sum_{k\in\{A,E\}} I_k^{(n)}, \end{align} where $M$ is the total number of pixels and $I_{A(E)}$ denotes the detected coincidence counts originating from Alice's (Eve's) signal only.
As the sum of any Hadamard pattern and its corresponding inverse pattern is the pattern whose components are all 1, the total coincidence counts over the whole set of $N$ imaging patterns are \begin{align} \sum_n I_D^{(n)}=\sum_{k\in\{A,E\}}\frac{N}{2M} S_k n_k, \end{align} where $S_{A(E)}$ is the sum of the components of Alice's (Eve's) target image, i.e., $S_{A(E)} \coloneqq \sum_{i,j}\eta_{A(E)}(i,j)\chi_{A(E)}(i,j)$. The previous analysis shows that the minimum erroneous coincidence counts are a quarter of $I_E$. Thus, the threshold error rate $e_T$ becomes \begin{align}\label{pda errorT by counts} e_T=\frac{1}{4}\frac{S_E n_E}{S_A n_A + S_E n_E}. \end{align} This shows explicitly that the error rate under a partial deceiving attack can be smaller than 25\%. Also, the error rate is closely related not only to the beam intensity but also to the target profile. However, one cannot obtain target information before imaging. Thus, to notice Eve's partial deceiving attack, a threshold error rate based on an analysis of the constructed image is required. From Eq.~\ref{GI}, an image formed by $I_k$, where $k\in\{A,E\}$, is \begin{align} G_k(i,j)&=\frac{1}{N} \sum_n P^{(n)}(i,j)I_k^{(n)} - \frac{1}{2N}\sum_n I_k^{(n)} \\ &=\frac{1}{4M} \eta_k(i,j) \chi_k(i,j) n_k. \end{align} Then, the image formed by $I_D$ is $G_\text{all}=G_A+G_E$. Note that neither $G_A$ nor $G_E$ is directly obtainable. The image formed by the correct (erroneous) coincidence counts, denoted as $G_{\text{cor}}$ ($G_{\text{mask}}$), is obtainable. Under the intercept-and-resend attack, $G_\text{mask} = \frac{1}{4} G_E$, which makes $G_\text{cor} = G_A + \frac{3}{4} G_E$. As $\sum_{i,j}G_{A(E)}(i,j)=S_{A(E)}n_{A(E)}/(4M)$, the proportion entering the error-rate analysis above is related to the sums of all pixel values of the images. For a full deceiving attack, $G_{\text{all}}=G_E$; thus only Eve's counts contribute, and the error rate is 25\%. However, for a partial deceiving attack, Eve's contribution is $4\sum_{i,j}G_{\text{mask}}(i,j)$, where the factor 4 comes from the intercept-and-resend attack. The threshold error rate becomes \begin{align}\label{pda graphical threshold error} e_T = \frac{1}{4}\cdot\frac{4\sum_{i,j}{G_{\text{mask}}(i,j)}}{\sum_{i,j}{G_{\text{all}}(i,j)}} = \frac{\sum_{i,j}{G_{\text{mask}}(i,j)}}{\sum_{i,j}{G_{\text{all}}(i,j)}}. \end{align} Under the intercept-and-resend attack, one can see that Eq.~\ref{pda errorT by counts} and Eq.~\ref{pda graphical threshold error} are equivalent. When the error rate is greater than the partial-deceiving-attack threshold calculated by Eq.~\ref{pda graphical threshold error}, Eve's partial deceiving attack can be noticed. Extraction of trustworthy information from the obtained images can be done based on the image relation: \begin{align}\label{trustworthy} G_A(i,j) = G_\text{cor}(i,j) - 3 G_\text{mask}(i,j). \end{align} Note that the enhanced security of QS-SPI is based on the simultaneous procedure of imaging and security analysis. If the two were done sequentially, the coincidence counts used for imaging would be irrelevant to the security check, and thus neither detection of a partial deceiving attack nor trustworthy image reconstruction would be possible. In summary, the security analysis procedure of QS-SPI is conducted as follows: \begin{enumerate} \item If the error rate is greater than 25\%, the imaging system is under a full deceiving attack. All images should be discarded. \item If the error rate is smaller than 25\%, Alice calculates $e_T$ based on Eq.~\ref{pda graphical threshold error} to check for a possible partial deceiving attack. \item If the error rate is smaller than $e_T$, the obtained images are trustworthy.
If the error rate is greater than $e_T$, only the trustworthy image reconstructed by Eq.~\ref{trustworthy} is credible. \end{enumerate} \section{Proof-of-Principle demonstration}\label{Sec3} \subsection{QS-SPI setup} \begin{figure*} \caption{Experimental setups of our QS-SPI. A polarization-entangled photon pair is generated by the Sagnac interferometer with a ppKTP crystal. The polarization of the idler photon is directly detected by SPCMs. The signal photon is reflected by the DMD with post-selection of its position and sent to a true target. After interaction with the target, the photon is counted by SPCMs in a selected polarization. A full deceiving attack is demonstrated by blocking Alice's signal with a flip mirror and sending Eve's laser beam in a polarization-controlled and intensity-modulated manner. Accidental coincidence counts are induced by Eve's light, which leads to the formation of a false image. A partial deceiving attack is realized by acquiring photon counts twice, once from the true signal and once from the false signal. PBS: polarizing beam splitter; QWP (HWP): quarter (half) wave plate; ND filter: neutral density filter.} \label{exp} \end{figure*} Fig.~\ref{exp} shows the setups for the proof-of-principle demonstration of QS-SPI. A polarization-entangled state is generated from the Sagnac interferometer with a periodically poled potassium titanyl phosphate (ppKTP) crystal \cite{Kim2006}. The crystal is pumped by a 405 nm continuous-wave (CW) laser, generating 810 nm polarization-entangled photon pairs via the type-II SPDC process. The initial state generated from the Sagnac interferometer is $\ket{\Psi^{+}}=\frac{1}{\sqrt{2}}\left(\ket{H,V}_{SI}+\ket{V,H}_{SI}\right)$, so to turn the state into $\ket{\Phi^{+}}$, an additional phase shift on the idler mode is applied. The state is prepared with fidelity $98.6$\%, verified via quantum state tomography \cite{QST}. After the generation, the idler mode is detected by SPCMs (Excelitas Technologies, SPCM-780-13-FC) in a selected polarization. The SPCMs are connected to a time-correlated single photon counting (TCSPC) module to record detection time and polarization. The signal photon is sent to the DMD (Vialux GmbH, DLP650LNIR) and post-selected according to a displayed pattern. In our setup, we used Hadamard patterns with resolution $32 \times 32$, so the total number of shots required for one image is 2048. The spatially post-selected signal photons interact with the target, the alphabet letter "A" or "F", and are detected by SPCMs. "A" is used for demonstrating all attack methods while "F" is used for the partial deceiving attack only. The TCSPC module analyzes detections from the four SPCMs to give coincidence counts between one signal and one idler. In the setup, the power of the pump laser for the generation of the entangled photon pairs was 5 mW. The single count rates of the signal and idler without a target were $6\times 10^{3}$ cps and $8\times 10^{4}$ cps, respectively. We set the coincidence window to 650 ps, and the coincidence count rate of a signal and an idler in the same polarization was 300 cps. In imaging, the photon acquisition time for one Hadamard pattern was 3.5 seconds. \subsection{Demonstration of Eve's attack} As previously described, Eve's intercept-and-resend attack exploits an on-demand single-photon source to make a generated photon arrive within the coincidence window. However, since this implementation is not feasible with current technologies, we simulated the intercept-and-resend attack with implementable devices.
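As a complement to the experimental demonstration, the decision procedure summarized at the end of Sec.~\ref{Sec2} can be stated compactly. The following sketch (NumPy; the function and the placeholder inputs are ours and not part of the experimental implementation) applies the 25\% criterion, the threshold of Eq.~\ref{pda graphical threshold error}, and the reconstruction of Eq.~\ref{trustworthy}:
\begin{verbatim}
import numpy as np

def security_check(G_cor, G_mask, e_r, e_d):
    """Schematic decision procedure of QS-SPI.

    G_cor, G_mask : images formed by correct and erroneous coincidence counts
    e_r, e_d      : measured error rates in the rectilinear and diagonal bases
    """
    G_all = G_cor + G_mask                    # image formed by all coincidence counts
    e_T = G_mask.sum() / G_all.sum()          # Eq. (pda graphical threshold error)

    if max(e_r, e_d) > 0.25:
        return "full deceiving attack: discard all images", None
    if max(e_r, e_d) <= e_T:
        return "no attack detected: obtained images are trustworthy", G_all
    # Partial deceiving attack: Eq. (trustworthy) recovers the true-target image
    G_A = G_cor - 3.0 * G_mask
    return "partial deceiving attack: use the reconstructed image", G_A
\end{verbatim}
The same three-way decision is applied to the measured images in the results below.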
In QS-SPI, the image reconstruction relies on coincidence counts, and Eve can manipulate the system by controlling these counts. In our proposed deceiving attack, Eve illuminates Alice's receiver with an 810 nm CW laser, inducing accidental coincidence counts. To simplify the attack, we fix the polarization of the illumination laser to $H'=H$ or $D'=D$, instead of controlling the polarization based on the measured information. Ideally, no erroneous coincidence counts happen when Eve chooses the same polarization basis as Alice does. For instance, if Alice's signal is in the $H$-polarization and Eve chooses the rectilinear basis, Eve sends the false signal in the $H$-polarization. However, if Eve selects the diagonal basis, $50\%$ of the counts are recorded as erroneous coincidence counts. To demonstrate the intercept-and-resend attack accurately, we selected coincidence counts for imaging and error rate calculation according to the following criteria: \begin{enumerate} \item If Alice and Eve select identical bases, coincidence counts where the polarization of the idler is matched with the false signal are selected, and the others are discarded. \item If different bases are chosen, all polarization combinations of coincidence counts are selected. \end{enumerate} However, in case 1, we ignore Eve's attack in the other polarization. For example, we ignore $V$-polarization errors in the rectilinear basis while considering errors in the diagonal basis. To account for this factor, we consider the possibility of a random basis choice for $V$- and $A$-polarized idlers. In the full deceiving attack, Alice's signal is blocked, and only accidental coincidence counts are detected. To deceive Alice's setup, the accidental coincidence count rate needs to be similar to the coincidence count rate of the entangled photon pairs. To achieve this condition, the power of Eve's laser is determined as follows. The detection probabilities per window on the signal- and idler-mode SPCMs are $N_{E}\tau$ and $N_{I}\tau$, respectively, where $N_{E}$ ($N_{I}$) is the mean photon count rate of Eve's laser (of the idler mode) and $\tau$ is the coincidence window. In this case, the coincidence probability in the coincidence window is given by the product of the single probabilities, $N_{E}N_{I}\tau^{2}$. Then, the accidental coincidence count rate $n_{acc}$ can be calculated by the following equation: \begin{align} n_{acc}=N_{I}N_{E}\tau. \end{align} To make $n_{acc}\sim 300$ cps with $N_{I}=8\times 10^{4}$ cps and $\tau = 650$ ps, $N_{E}\sim 5.8\times 10^{6}$ cps is obtained, which is 1000 times greater than the original signal photon count rate without a target. This photon rate corresponds to approximately 1.41 pW for an 810 nm CW laser. The intensity modulation of Eve's laser, which is necessary to deceive SPI, is performed by using another DMD. Eve's DMD displays the overlap of Alice's Hadamard patterns and a fraudulent image. After imaging under the full deceiving attack, a fraudulent image, the alphabet letter "D", is constructed from the accidental coincidence counts. In the partial deceiving attack, Alice detects both true and false signals with their intensity ratio being controlled. Photon acquisition was done twice for each imaging pattern. One acquisition is for the true signal and the other is for the false signal, controlled by the flip mirror. The polarization of the fraudulent signal is controlled in the same manner as in the full deceiving attack case. In this attack, we first set the true target as the alphabet letter "F".
To disguise the true target profile, the false signal contains target information in the form of a left-right mirrored alphabet letter "L". Combining the two signals, a deceiving image is formed, the digital number "8". Additionally, we demonstrate the attack with the targets used in the full deceiving attack, namely the alphabet letters "A" and "D". In this case, the two images overlap, and the true target information is covered by the fraudulent image. For the two possible scenarios, we demonstrate the trustworthy image reconstruction based on the obtained images. \subsection{Results} \begin{figure*} \caption{Images obtained by QS-SPI without any attack (left) or under a full deceiving attack (right). Imaging is done 5 times, and the average error rate in each basis, with the standard deviation in parentheses, is shown below the corresponding images, where $e_r$ ($e_d$) denotes the error rate in the rectilinear (diagonal) basis. (a) When there is no attack, all error rates are suppressed below the partial deceiving attack threshold, 4.5\%. (b) Under the attack, error rates close to 25\% are obtained, which is the expected error rate under an ideal intercept-and-resend attack. As the error rates are over 25\%, a full deceiving attack exists, and thus all images should be discarded.} \label{result_pol} \end{figure*} Fig.~\ref{result_pol} shows the images obtained without Eve's attack and under the full deceiving attack. The error rate in each basis is marked below the corresponding images, where the error rate in the rectilinear (diagonal) basis is $e_r$ ($e_d$). Due to experimental imperfections, some pixels in the obtained images were negative; we therefore set the negative values to 0. The images shown are normalized ones, i.e., all pixels are divided by the maximum pixel value and multiplied by 255. In the proof-of-principle experiments, all data are obtained five times, and the average of the five error rates is marked with the standard deviation in parentheses. When there is no attack, error rates lower than 25\% are obtained. To test for a possible partial deceiving attack, the threshold error rate is calculated to be 4.54\%. As the error rates are lower than the threshold, the security of the obtained images is guaranteed. Under Eve's full deceiving attack, error rates close to 25\% are obtained as expected from the error analysis under the intercept-and-resend attack. As the error rates are greater than 25\%, Eve's full deceiving attack exists, indicating that the images are fake and should be discarded. \begin{figure*} \caption{Images obtained by QS-SPI under a partial deceiving attack where the true target profile has no overlap with the false target profile. False-signal to true-signal intensity ratios of 500, 1000, and 2000 are demonstrated. When the ratio is 1000, the contrast of the true and false target images is balanced to form a deceiving image, the digital number "8". As either $e_r$ or $e_d$ is beyond $e_T$ but below 25\%, a partial deceiving attack exists. The reconstructed trustworthy images are similar to the true target image. The stronger the false-signal intensity, the stronger the background noise of the trustworthy image, owing to the suppression of the true signal by the strong false signal.} \label{result_pda F8} \end{figure*} \begin{figure*} \caption{Demonstration of a partial deceiving attack where the true target profile has overlap with the false target profile. As either $e_r$ or $e_d$ is beyond $e_T$ but below 25\%, a partial deceiving attack exists. From the obtained images, trustworthy images are reconstructed.
The image quality of the overlapped region in the trustworthy images worsens as the intensity of the false signal gets stronger. This is due to the suppression of the true signal by the strong false signal, which also causes the background fluctuations to be stronger.} \label{result_pda AD} \end{figure*} The results of QS-SPI under a partial deceiving attack are presented in Fig.~\ref{result_pda F8} and Fig.~\ref{result_pda AD}. We varied the false-signal to true-signal intensity ratio and demonstrated the cases where the ratio is 500, 1000, and 2000. The former is the case where the true target profile has no overlap with the false target profile, but the two are combined to produce a deceiving image. When the ratio is 1000, the contrast of the true and false target images is balanced to form a deceiving image, the digital number "8". The latter is the case where the true and fake images overlap. The true and false targets are the alphabet letters "A" and "D", respectively, and the overlap degrades the true target information. In both cases, all error rates are below 25\%. However, as either $e_r$ or $e_d$ is greater than the partial deceiving attack threshold $e_T$, a partial deceiving attack exists. We reconstructed trustworthy images using Eq.~\ref{trustworthy} and compared them with the true target image acquired by true signals only. We observed that $G_\text{mask}$ has the spatial profile of the false target. By subtracting three times $G_\text{mask}$ from $G_\text{cor}$, we obtained the image of the true target profile. When the error rate is much smaller than 25\%, which is the threshold of a full deceiving attack, the trustworthy images are well recovered, close to the true images. However, as the error rate approaches 25\%, the background noise increases due to the suppression of the true signal in comparison with the intensity of the false signal. When the two image profiles overlap, the suppression causes pixel values in the overlapped region of 3$G_\text{mask}$ to be similar to those in $G_\text{cor}$. Therefore, some information of the true target profile in the overlapped region may be erased during the trustworthy image reconstruction process, which is why the image quality in the overlapped region degrades rapidly. \begin{figure} \caption{Images constructed under an imaging disrupting attack by the original SPI (left) and QS-SPI (right). The attack is realized by illuminating Alice's receiver with Eve's laser, whose power was 1000 times greater than that of Alice's signal. The image obtained by SPI is ruined, while the time-correlation of the photon pairs allowed QS-SPI to successfully construct the image.} \label{Dattack} \end{figure} Lastly, we tested the robustness of QS-SPI against strong noise, and the result is shown in Fig.~\ref{Dattack}. Eve's jamming laser, which has a power 1000 times greater than Alice's signal, illuminates the receiver to disturb the imaging process. In the original SPI, only single counts in the signal mode are exploited to construct an image. Thus, the obtained image is ruined by the attack. However, by exploiting the time-correlation between the signal and idler photons, QS-SPI can overcome the attack, allowing the target image to be successfully obtained \cite{Kim2021APL}. \section{Summary and discussion}\label{Sec4} In this paper, we propose a methodology of quantum-secured single-pixel imaging (QS-SPI) that is resilient against possible image deceiving attacks.
Our method employs polarization-correlation techniques, which enable imaging and security analysis to be performed simultaneously for all detected photon pairs. This results in an enhanced security analysis and the reconstruction of trustworthy images. We consider two types of deceiving attacks: full deceiving attacks and partial deceiving attacks. A full deceiving attack involves intercepting all true signals and illuminating false signals to construct a fraudulent image. This type of attack can be detected by analyzing statistical errors in the polarization-correlation, where the criterion error rate is 25\%. However, a partial deceiving attack involves intercepting a portion of the true signals and mixing them with false signals to create a deceiving image with an error rate below 25\%. To detect this type of attack, we analyze both the polarization-correlation and the obtained images formed by correct and erroneous coincidence counts. When a partial deceiving attack is confirmed, QS-SPI can reconstruct trustworthy information while rejecting false information. We demonstrate the setup of QS-SPI and show that it is capable of detecting deceiving attacks. We simulate an intercept-and-resend attack and obtain error rates consistent with the theoretical value of 25\%. We also show how QS-SPI can detect partial deceiving attacks and reconstruct trustworthy images from the obtained data, which represents an improvement in security over previously proposed quantum-secured imaging schemes. Finally, we demonstrate the robustness of QS-SPI under strong chaotic noise. For practical applications of QS-SPI, the active basis choice setups and two SPCMs can be replaced with passive ones that consist of a 50:50 beam splitter, phase shifters, and four SPCMs corresponding to the four polarization detections. In the modified scheme, the photon pairs detected at SPCMs with a mismatched basis should be discarded in the security check. However, a security check based on the Bell inequality \cite{Bell1964}, particularly the Clauser-Horne-Shimony-Holt (CHSH) inequality \cite{Clauser1969}, can be introduced to exploit all the basis combinations for the security check \cite{Naik2000}. This approach allows us to construct an image without discarding photon pairs. In the security check, if the polarization statistics violate the Bell inequality, the absence of an attack is guaranteed, and this method provides device-independent security \cite{Barrett2005,Acin2006,Acin2007}. With small modifications, QS-SPI can be applied to various protocols such as a ghost imaging scheme. In our proposed setup, this can be achieved by removing the DMD and measuring the spatial profile of the idler photon using CCDs or single-pixel detectors in a raster-scanning manner. The role of the DMD is replaced by the measurement of the idler's spatial profile, which provides information about the spatial profile of the signal. When an idler photon is detected, its intensity at a specific pixel position, polarization, and detection time are recorded. By analyzing the intensity-correlation and time-correlation of the signal and idler photons pixel by pixel, an image of the target can be reconstructed with resilience against imaging disrupting attacks. Additionally, by analyzing the polarization-correlation pixel by pixel, the security analysis methodology of QS-SPI can be directly applied. Moreover, QS-SPI is applicable to a quantum-secured optical ranging protocol \cite{Malik2012}.
Since our setup exploits the time-correlation of the signal and idler photons, the time-of-flight of a signal photon can also be measured. Thus, QS-SPI provides a method to securely acquire a target distance against jamming attacks. We expect that QS-SPI can be advanced with mature techniques utilized in quantum secure communication. For instance, six polarization states in three mutually unbiased bases can be employed to enhance security \cite{Bruss1998}, or for reference-frame-independent security analysis \cite{Laing2010}. Moreover, various degrees of freedom of a single photon can be utilized to exploit high-dimensional quantum states \cite{Cerf2002, Jo2016, Bouchard2018, Jo2019} or hyper-entangled states \cite{Wang2015, JKim2021}. Furthermore, our method can be developed into a quantum-secured LiDAR using quantum-correlation-based free-space experimental techniques \cite{DKim2022}. \end{document}
\begin{document} \title[Sobolev trace constant]{Estimates for the Sobolev trace constant with critical exponent and applications} \author[J. Fern\'andez Bonder and N. Saintier]{Juli\'an Fern\'andez Bonder and Nicolas Saintier} \address{Departamento de Matem\'atica, FCEyN UBA (1428) \break\indent Buenos Aires, Argentina. } \email{JFB: {\tt [email protected]}, NS: {\tt [email protected]} } \keywords{Sobolev trace embedding, Optimal design problems, Critical exponents. \\ \indent 2000 {\it Mathematics Subject Classification.} 35J20, 35P30, 49R50} \begin{abstract} In this paper we find estimates for the optimal constant in the critical Sobolev trace inequality $S\|u\|_{L^{p_*}(\partial\Omega)}^p \le \|u\|_{W^{1,p}(\Omega)}^p$ that are independent of $\Omega$. These estimates generalize those of \cite{AY} to general $p$. Here $p_* := p(N-1)/(N-p)$ is the critical exponent for the immersion and $N$ is the space dimension. Then we apply our results, first to prove the existence of positive solutions to a nonlinear elliptic problem with a nonlinear boundary condition with critical growth on the boundary, generalizing the results of \cite{BR}. Finally, we study an optimal design problem with critical exponent. \end{abstract} \maketitle \section{Introduction} Sobolev inequalities are relevant for the study of boundary value problems for differential operators. They have been studied by many authors and are by now a classical subject. The subject goes back at least to \cite{Aubin1}; for more references see \cite{DH}. In particular, the Sobolev trace inequality has been intensively studied in \cite{Biezuner, Escobar, FBLDR, BR, LZ}, etc. Let $\Omega$ be a bounded smooth domain of $\mathbb{R}^N$. For any $1<p<N$, the Sobolev trace immersion says that there exists a constant $S>0$ such that $$ S\Big(\int_{\partial\Omega} |u|^{p_*}\, dS\Big)^{p/p_*} \le \int_{\Omega} |\nabla u|^p + |u|^p\, dx $$ for any $u\in W^{1,p}(\Omega)$, where $W^{1,p}(\Omega)$ is the usual Sobolev space of functions $u\in L^p(\Omega)$ such that $\nabla u\in L^p(\Omega)$. Here $p_* := p(N-1)/(N-p)$ is the critical exponent for this inequality. The optimal constant in the above inequality is the largest possible $S$, that is $$ S = S_p(\Omega) := \inf \frac{\displaystyle \int_{\Omega} |\nabla u|^p + |u|^p\, dx}{\displaystyle \Big(\int_{\partial\Omega} |u|^{p_*}\, dS\Big)^{p/p_*}}, $$ where the infimum is taken over the set $X := W^{1,p}(\Omega)\setminus W_0^{1,p}(\Omega)$, $W_0^{1,p}(\Omega)$ being the closure for the $W^{1,p}$-norm of the space of smooth functions with compact support in $\Omega$. The dependence of $S$ with respect to $p$ and $\Omega$ has been studied by many authors, especially in the {\em subcritical case}, i.e. where $p_*$ is replaced by any exponent $q$ such that $1<q<p_*$. See, for instance, \cite{FdP, FBFR} and references therein. The analysis for the critical case is more involved because the immersion $W^{1,p}(\Omega)\hookrightarrow L^{p_*}(\partial\Omega)$ is no longer compact and so the existence of minimizers for $S$ does not follow by standard methods. To overcome this problem, in \cite{BR}, the authors use an old idea of T. Aubin \cite{Aubin1}.
In fact, let $K_p^{-1}$ be the best trace constant for the embedding $W^{1,p}(\mathbb{R}^N_+)\hookrightarrow L^{p_*}(\partial\mathbb{R}^N_+)$, namely \begin{equation}\label{BestConstant} K_p^{-1}=\inf_{u\in W^{1,p}(\mathbb{R}^N_+)\setminus W^{1,p}_0(\mathbb{R}^N_+)} \frac{ \int_{\mathbb{R}^N_+} |\nabla u|^p dx }{ \left(\int_{\partial\mathbb{R}^N_+} |u|^{p_*} dS\right)^{p/p_*} }. \end{equation} In \cite{BR} it is shown, following ideas from \cite{Aubin1}, that if \begin{equation}\label{Cond} S_p(\Omega) < K_p^{-1}, \end{equation} then there exists an extremal for $S_p(\Omega)$. Taking the function $u\equiv 1$ in the definition of $S_p(\Omega)$, one obtains that if $$ \frac{|\Omega|}{|\partial\Omega|^\frac{p}{p_*}}< K_p^{-1}, $$ then \eqref{Cond} is satisfied. Observe that this is a global condition on $\Omega$. It follows from Lions \cite{Lions} that the infimum \eqref{BestConstant} is achieved. The value of $K_p$ is explicitly known when $p=2$ (see Escobar \cite{Escobar}). Recently, Biezuner \cite{Biezuner} proved that $K_p$ is also the best first constant in the inequality $$ \left( \int_{\partial\Omega} |u|^{p_*} dS \right)^\frac{p}{p_*} \le A\int_\Omega |\nabla u|^p dx + B\int_\Omega |u|^p dx, $$ in the sense that, for any $\epsilon>0$, there exists a constant $C_\epsilon$ such that \begin{equation}\label{OptimalInequ} \left( \int_{\partial\Omega} |u|^{p_*} dS \right)^\frac{p}{p_*} \le (K_p+\epsilon)\int_\Omega |\nabla u|^p dx + C_\epsilon\int_\Omega |u|^p dx, \end{equation} for every $u\in W^{1,p}(\Omega)$, and $K_p$ is the lowest possible such constant. This fact will be used in a crucial way in the course of the paper. On the other hand, a local condition ensuring \eqref{Cond}, depending only on local geometric properties of $\Omega$, is known to hold in the case $p=2$. Indeed, Adimurthi-Yadava \cite{AY} obtained \eqref{Cond} assuming the existence of a ``good point'' $x\in\partial\Omega$, i.e. a point $x$ at which the mean curvature of $\partial\Omega$ is positive and such that, in a neighborhood of $x$, $\Omega$ lies on one side of the tangent plane at $x$. Their method is to use as test functions a suitable rescaling of the extremals of \eqref{BestConstant}. These extremals are explicitly known for $p=2$ since the work of Escobar \cite{Escobar}, who conjectured their form for any $p\in (1,N)$. This conjecture has recently been proved by Nazaret \cite{Nazaret} using a mass-transportation method. It turns out that all the extremals of \eqref{BestConstant} are of the form \begin{equation}\label{Extremals} \begin{aligned} U_{\epsilon,y_0}(y,t) = & \frac{\epsilon^\frac{N-p}{p(p-1)}}{[(t+\epsilon)^2+|y-y_0|^2]^\frac{N-p}{2(p-1)}} \\ = & \epsilon^{-\frac{N-p}{p}}U\left(\frac{y-y_0}{\epsilon},\frac{t}{\epsilon}\right) \end{aligned} \end{equation} where $\epsilon>0$ and $y, y_0\in\mathbb{R}^{N-1} = \partial\mathbb{R}^N_+$, $t>0$, with \begin{equation}\label{Def_U} U(y,t)= \frac{1}{[(t+1)^2+|y|^2]^\frac{N-p}{2(p-1)}}. \end{equation} The knowledge of these extremals allows us first to compute the explicit value of $K_p$: \begin{prop}\label{Prop_ValueBestConstant} The value of $K_p$ is given by $$ K_p^{-1} = \left(\frac{N-p}{p-1}\right)^{p-1} \pi^\frac{p-1}{2} \left( \frac{\Gamma\left(\frac{N-1}{2(p-1)}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} \right)^\frac{p-1}{N-1}. $$ \end{prop} Applying a technique similar to that of \cite{AY}, we can use the rescaled extremals for $K_p$ and obtain a local (geometrical) condition on $\Omega$ such that \eqref{Cond} is satisfied.
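Let us note in passing (an observation added here for illustration, not needed in what follows) that for $p=2$ the duplication formula $\Gamma(z)\Gamma(z+\tfrac{1}{2})=2^{1-2z}\sqrt{\pi}\,\Gamma(2z)$, applied with $z=\frac{N-1}{2}$, reduces the expression of Proposition \ref{Prop_ValueBestConstant} to
\begin{align*}
K_2^{-1} &= (N-2)\,\pi^{1/2}\left(\frac{\Gamma\left(\frac{N-1}{2}\right)}{\Gamma\left(N-1\right)}\right)^{\frac{1}{N-1}}
= (N-2)\,\pi^{1/2}\left(\frac{2^{2-N}\sqrt{\pi}}{\Gamma\left(\frac{N}{2}\right)}\right)^{\frac{1}{N-1}}\\
&= \frac{N-2}{2}\left(\frac{2\pi^{N/2}}{\Gamma\left(\frac{N}{2}\right)}\right)^{\frac{1}{N-1}}
= \frac{N-2}{2}\,\omega_{N-1}^{\frac{1}{N-1}},
\end{align*}
which coincides with the value for $p=2$ given in \cite{Escobar}.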
In fact, we can deal with a slightly more general problem. Namely \begin{equation}\label{MainPb1} \lambda = \lambda(p,\Omega) := \inf\frac{\displaystyle \int_\Omega |\nabla u|^p + h(x) |u|^p\, dx}{\displaystyle \Big(\int_{\partial\Omega} |u|^{p_*}\, dS\Big)^{p/p_*}} \end{equation} where the infimum is taken over $X$ and the function $h\in C^1(\overline{\Omega})$ is such that there exists $c>0$ satisfying \begin{equation}\label{Coercivity} \int_\Omega |\nabla u|^p + h(x) |u|^p\, dx \ge c \|u\|_{W^{1,p}(\Omega)}^p \end{equation} for any $u\in X$. We are led to the following generalization of the notion of ``good point'' to our case: we say that a point $x\in\partial\Omega$ is a ``good point'' if there exists $r>0$ such that $\Omega\cap B_r(x)$ lies on one side of the tangent plane at $x$ and either $H(x)>0$ or, if $H(x)=0$, either $$ h(x)<0 \text{ if } N=2,3,4 \text{ and } p<\sqrt{N} $$ or, if $N\ge 5$, \begin{equation*} \begin{aligned} & h(x)<0 \text{ if } p<2, \\ & \frac{N}{N-1}\sum \lambda_i^2 - 2\sum_{i<j}\lambda_i\lambda_j < \frac{-8(N-1)h(x)}{(N-2)(N-4)} \text{ if } p=2, \\ & \frac{p+N-2}{N-1}\sum \lambda_i^2 - 2\sum_{i<j}\lambda_i\lambda_j<0 \text{ if } 2<p<(N+2)/3, \end{aligned} \end{equation*} where the $\lambda_i$'s are the principal curvatures at $x$ and $H(x)$ is the mean curvature at $x$. Remark that our method gives the restriction $1<p<(N+1)/2$ and also that a ``good point'' in the sense of Adimurthi-Yadava is also a ``good point'' in our sense. We get the following theorem: \begin{thm}\label{thm1} Let $1<p<(N+1)/2$. If there exists a ``good point'' $x\in\partial\Omega$, then \begin{equation}\label{eq-thm1} \lambda < K_p^{-1}. \end{equation} \end{thm} As a consequence of Theorem \ref{thm1} we have \begin{coro}\label{coro-thm1} Under the hypotheses of Theorem \ref{thm1}, the infimum \eqref{MainPb1} is achieved. \end{coro} Observe that any extremal $u$ can be taken to be nonnegative (just replace $u$ by $|u|$), and if we take it {\em normalized} as $\|u\|_{L^{p_*}(\partial\Omega)}=1$, it is an eigenfunction associated to the eigenvalue $\lambda$ in the sense that it is a weak solution of the following Steklov-like eigenvalue problem \begin{equation}\label{MainEqu} \begin{cases} -\Delta_pu + h(x) u^{p-1} = 0 \quad &\text{in } \Omega \\ |\nabla u|^{p-2} \frac{\partial u}{\partial\nu} = \lambda u^{p_*-1} &\text{on } \partial\Omega \end{cases} \end{equation} where $\Delta_pu = \text{div}(|\nabla u|^{p-2}\nabla u)$ is the $p$-Laplacian and $\nu$ is the unit outward normal of $\Omega$. Then it follows from the results of Cherrier \cite{Cherrier} that $u$ is smooth on $\Omega$ and continuous up to the boundary. Moreover, it is strictly positive in $\overline{\Omega}$ (see, for instance, \cite{FBR-JMAA}), so any extremal has constant sign. As an application of Theorem \ref{thm1}, we study a shape optimization problem related to $\lambda$.
Given $\alpha\in (0,|\Omega|)$, where $|\Omega|$ denotes the volume of $\Omega$, and a measurable subset $A\subset\Omega$ of volume $\alpha$, we first consider the minimization problem \begin{equation}\label{MainPbShape1} \lambda_A = \inf\frac{\displaystyle \int_\Omega |\nabla u|^p + h(x) |u|^p\, dx}{\displaystyle \Big(\int_{\partial\Omega} |u|^{p_*}\, dS\Big)^{p/p_*}} \end{equation} where the infimum is taken over $X_A := \{u\in X\ |\ u|_A = 0 \text{ a.e.}\}$ and the function $h\in C^1(\overline{\Omega})$ is such that the coercivity assumption \eqref{Coercivity} holds. As a consequence of Theorem \ref{thm1}, we have \begin{thm}\label{thmShape1} Let $1<p<(N+1)/2$ and let $A\subset \Omega$ be such that $|A|=\alpha$. Assume that there exists a ``good point'' $x\in\partial\Omega$ such that $B_r(x)\cap A=\emptyset$ for some $r>0$. Then $\lambda_A$ is attained by some nonnegative nontrivial $u_A$. \end{thm} These extremals $u_A$ are eigenfunctions associated to the eigenvalue $\lambda_A$ in the sense that, if $A$ is closed, they are weak solutions of the following Steklov-like eigenvalue problem \begin{equation}\label{MainEquShape} \begin{cases} -\Delta_p u + h(x) u^{p-1} = 0 & \text{in } \Omega\setminus A \\ |\nabla u|^{p-2} \frac{\partial u}{\partial\nu} = \lambda_A u^{p_*-1} & \text{on } \partial\Omega\setminus A \\ u = 0 & \text{in } A \end{cases} \end{equation} We consider the following shape optimization problem: \begin{itemize} \item[]{\em For a fixed $0<\alpha<|\Omega|$, find a set $A_*$ of measure $\alpha$ that minimizes $\lambda_A$ among all measurable subsets $A\subset \Omega$ of measure $\alpha$. That is, } $$ \lambda(\alpha) := \inf_{A\subset\Omega, |A|=\alpha}{\lambda_A} = \lambda_{A_*}. $$ \end{itemize} In this paper we prove that there exists an optimal set $A_*$ (with its corresponding extremal $u_*$) for this optimization problem. This optimization problem in the subcritical case (that is, when $p_*$ is replaced by an exponent $q$ with $1<q<p_*$) has been considered recently. In fact, in \cite{BRW1} the existence of an optimal set has been established; see also \cite{BGR} for numerical computations. Then, in \cite{BRW2}, the interior regularity of optimal sets was analyzed in the case $p=2$. We remark that in the result of \cite{BRW2} the subcriticality plays no role, so this local regularity result holds true also in the critical case. We prove, \begin{thm}\label{thmShape2} Let $1<p<(N+1)/2$. If there exists a ``good point'' $x\in\partial\Omega$, then $\lambda(\alpha)$ is achieved. \end{thm} Problems of optimal design related to eigenvalue problems like \eqref{MainEquShape} appear in several branches of applied mathematics, especially in the case $p=2$, for example in problems of minimization of the energy stored in the design under a prescribed loading. We refer to \cite{CC} for more details. We want to stress that Theorem \ref{thmShape2} is new, even in the case $p=2$. \subsection*{Organization of the paper} In the next section we deal with the proof of the applications of the estimate $\lambda < K_p^{-1}$, that is, we deal with the proofs of Corollary \ref{coro-thm1} and Theorems \ref{thmShape1} and \ref{thmShape2}. We leave for the final section the computation of $K_p$ and the proof of Theorem \ref{thm1}. \section{Applications of Theorem \ref{thm1}} \setcounter{equation}{0} In this section we use Theorem \ref{thm1}, which is proved in Section 3, and prove Corollary \ref{coro-thm1}, Theorem \ref{thmShape1} and Theorem \ref{thmShape2}.
\subsection{Proof of Corollary \ref{coro-thm1}} We first prove that $\lambda$ is attained as soon as \eqref{eq-thm1} is satisfied. Since this kind of criterion is classical (see e.g. \cite{DN} or \cite{BR}), we only sketch the proof for the reader's convenience. Let $\{u_n\}_{n\in\mathbb{N}}\subset X$ be a minimizing sequence for (\ref{MainPb1}) normalized such that $\|u_n\|_{L^{p_*}(\partial\Omega)}=1$. According to (\ref{Coercivity}), this sequence is bounded in $W^{1,p}(\Omega)$ and thus it converges, up to a subsequence, to some $u\in X$ weakly in $W^{1,p}(\Omega)$, strongly in $L^p(\Omega)$ and a.e. Using Ekeland's variational principle (see \cite{Willem} Theorems 8.5 and 8.14), we can assume that $\{u_n\}_{n\in\mathbb{N}}$ is a Palais-Smale sequence for the functional $J:W^{1,p}(\Omega)\to \mathbb{R}$ defined by $$ J(u)= \frac{1}{p} \int_\Omega |\nabla u|^p + h(x)|u|^p\, dx - \frac{\lambda}{p_*} \int_{\partial\Omega} |u|^{p_*}\, dS, $$ in the sense that the sequence $\{J(u_n)\}_{n\in\mathbb{N}}$ is bounded and $DJ(u_n)\to 0$ strongly in $(W^{1,p}(\Omega))^*$. Letting $v_n:=u_n-u$, we can also assume that, up to a subsequence, $$ |v_n|^{p_*}\, dS \rightharpoonup d\nu,\qquad |\nabla v_n|^p\, dx \rightharpoonup d\mu, $$ weakly in the sense of measures, where $\mu$ and $\nu$ are nonnegative measures such that $\text{supp}(\nu)\subset\partial\Omega$. According to \eqref{OptimalInequ}, we have for any $\phi\in C^1(\overline{\Omega})$ that $$ \left( \int_{\partial\Omega} |\phi v_n|^{p_*}\, dS \right)^{p/p_*} \le (K_p+\epsilon)\int_\Omega |\nabla (\phi v_n)|^p\, dx + C_\epsilon \int_\Omega |\phi v_n|^p\, dx. $$ Passing to the limit in this expression, first in $n\to\infty$ and then in $\epsilon\to 0$, we get that $$ \left( \int_{\partial\Omega} |\phi|^{p_*}\, d\nu \right)^{p/p_*} \le K_p \int_\Omega |\phi|^p\, d\mu $$ for any $\phi\in C^1(\overline{\Omega})$. From this inequality we can deduce, as in \cite{Lions} Lemma 2.3, the existence of a sequence of points $\{x_i\}_{i\in I}\subset\partial\Omega$, $I\subset\mathbb{N}$, and two sequences of positive real numbers $\{\nu_i\}_{i\in I}$, $\{\mu_i\}_{i\in I}$ such that $$ \nu=\sum_{i\in I} \nu_i\delta_{x_i},\quad \mu\ge\sum_{i\in I} \mu_i\delta_{x_i} \quad \text{and}\quad \mu_i\ge K_p^{-1}\nu_i^{p/p_*} \quad \forall\ i\in I. $$ Therefore, \begin{equation}\label{CCP} \begin{cases} |u_n|^{p_*}dS & \rightharpoonup |u|^{p_*}dS + \sum_{i\in I} \nu_i\delta_{x_i} \\ |\nabla u_n|^pdx & \rightharpoonup |\nabla u|^pdx + \mu \ge |\nabla u|^pdx + \sum_{i\in I} \mu_i\delta_{x_i} \\ \mu_i & \ge K_p^{-1}\nu_i^{p/p_*}~~\forall~i\in I. \end{cases} \end{equation} It can also be shown that $\{v_n\}_{n\in\mathbb{N}}$ is a Palais-Smale sequence for the functional $I:W^{1,p}(\Omega)\to \mathbb{R}$ defined by $$ I(u):=J(u)-\frac{1}{p}\int_\Omega h(x)|u|^p\, dx $$ (see e.g. \cite{Saintier}). In particular, for any $\phi\in C^1(\overline{\Omega})$, \begin{align*} o(1) & = DI(v_n)(v_n\phi) \\ & = \int_\Omega |\nabla v_n|^{p-2} \nabla v_n\nabla (v_n\phi)\, dx - \lambda \int_{\partial\Omega} |v_n|^{p_*}\phi\, dS. \end{align*} Passing to the limit, we get that $\int_\Omega \phi\, d\mu = \lambda\int_{\partial\Omega} \phi\, d\nu$ for any $\phi\in C^1(\overline{\Omega})$. Hence $\mu = \lambda\nu$. Using \eqref{CCP}, we then obtain the estimates \begin{equation}\label{Estimates_mu} \nu_i\ge (\lambda K_p)^{-\frac{N-1}{p-1}}, \quad \mu_i\ge K_p^{-1}(\lambda K_p)^{-\frac{N-1}{p-1}}\quad \forall\ i\in I.
\end{equation} Now, by \eqref{CCP}, \eqref{Coercivity} and \eqref{Estimates_mu}, we arrive at \begin{align*} \lambda & = \int_\Omega |\nabla u_n|^p\, dx + \int_\Omega h(x)|u_n|^p\, dx + o(1) \ge \sum_{i\in I} \mu_i \\ & \ge card(I) K_p^{-1}(\lambda K_p)^{-\frac{N-1}{p-1}}. \end{align*} We deduce that if \eqref{eq-thm1} holds, then $I$ is empty. In that case, $u_n\to u$ strongly in $W^{1,p}(\Omega)$ and in $L^{p_*}(\partial\Omega)$. In particular, $u$ is a minimizer for $\lambda$. This completes the proof. \qed \subsection{Proof of Theorem \ref{thmShape1}} Arguing exactly as in the proof of Corollary \ref{coro-thm1}, we obtain that a normalized minimizing sequence $\{u_n\}_{n\in\mathbb{N}}\subset X_A$ for $\lambda_A$ converges, up to a subsequence, strongly in $W^{1,p}(\Omega)$ to some $u_A$ as soon as \begin{equation}\label{Cond2} \inf_{u\in X_A} \frac{\displaystyle \int_{\Omega} |\nabla u|^p + |u|^p\, dx}{\displaystyle \Big(\int_{\partial\Omega} |u|^{p_*}\, dS\Big)^{p/p_*}} < K_p^{-1}. \end{equation} Since there exists a ``good point'' $x\in\partial\Omega$ such that $B_r(x)\cap A=\emptyset$, we deduce from the computations in the next section, by choosing a cut-off function $\phi$ with support in $B_{r/2}(x)$ in the definition of the test function $u_\epsilon$ \eqref{test.function}, that the strict inequality \eqref{Cond2} holds. Hence $u_n\to u_A$ strongly in $W^{1,p}(\Omega)$ and in $L^{p_*}(\partial\Omega)$ and also a.e. In particular, $u_A$ is a minimizer for $\lambda_A$. \qed \subsection{Proof of Theorem \ref{thmShape2}} We begin by noticing that $$ \lambda(\alpha) = \inf \{\lambda_A,~A\subset\Omega~\text{measurable},~|A|\ge\alpha\}. $$ Hence $$ \lambda(\alpha) = \inf_{u\in X,~|\{u=0\}|\ge \alpha} \frac{\displaystyle \int_{\Omega} |\nabla u|^p + |u|^p\, dx}{\displaystyle \Big(\int_{\partial\Omega} |u|^{p_*}\, dS\Big)^{p/p_*}}. $$ Since $\alpha<|\Omega|$ and there exists a ``good point'', it follows from the test-function computations of the next section, by choosing a function $\phi$ with support in a ball of radius small enough in the definition of $u_\epsilon$ \eqref{test.function}, that $\lambda(\alpha)<K_p^{-1}$. By the same argument as before, this implies the existence of a nonnegative $u_*\in X$, $|\{u_*=0\}|\ge \alpha$, such that $$ \frac{\displaystyle \int_{\Omega} |\nabla u_*|^p + |u_*|^p\, dx}{\displaystyle \Big(\int_{\partial\Omega} |u_*|^{p_*}\, dS\Big)^{p/p_*}} = \lambda(\alpha). $$ We now conclude as in \cite{BRW1}, Theorem 1.2, that in fact $|\{u_*=0\}|=\alpha$ and so $A_* = \{u_*=0\}$ is an optimal set for $\lambda(\alpha)$.\qed \section{Proof of Theorem \ref{thm1}} \setcounter{equation}{0} In this section we prove our main result. First we recall some well-known formulae and prove Proposition \ref{Prop_ValueBestConstant}. Finally we prove Theorem \ref{thm1}. In all the subsequent computations, the following well-known formulae will be used frequently: \begin{align*} & \omega_{N-1} = \text{volume of the standard unit sphere $S^{N-1}$ of $\mathbb{R}^N$} = \frac{2\pi^\frac{N}{2}}{\Gamma\left(\frac{N}{2}\right)}, \\ \\ & \int_0^{+\infty} \frac{r^\alpha}{(1+r^2)^\beta} dr = \frac{\Gamma\left(\frac{\alpha+1}{2}\right)\Gamma\left(\frac{2\beta - \alpha-1}{2}\right)} {2\Gamma(\beta)} \quad \text{ for } 2\beta-\alpha>1,\\ \\ & \Gamma(z)\Gamma(z+\frac{1}{2}) = 2^{1-2z}\sqrt{\pi}\Gamma(2z) \quad \text{ for } Re(z)>0.
\end{align*} We first compute the value of $K_p$: \begin{proof}[{\bf Proof of Proposition \ref{Prop_ValueBestConstant}}] Let $U$ be the function defined by \eqref{Def_U}. We first compute the $L^{p_*}$-norm of $U$ restricted to $\mathbb{R}^{N-1}\times\{0\} = \partial\mathbb{R}^N_+$. \begin{align*} \int_{\mathbb{R}^{N-1}} |U(y,0)|^{p_*}\, dy & = \int_{\mathbb{R}^{N-1}} \frac{dy}{(1+|y|^2)^{p(N-1)/2(p-1)}}\\ & = \omega_{N-2} \int_0^{\infty} \frac{r^{N-2}\, dr}{(1+r^2)^{p(N-1)/2(p-1)}} \\ & = \pi^{(N-1)/2} \frac{\Gamma\left(\frac{N-1}{2(p-1)}\right)}{\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)}. \end{align*} We now compute the $L^p$-norm of the gradient of $U$. First, $$ \nabla U(y,t)= -\frac{N-p}{p-1} \frac{(y,t+1)}{[(1+t)^2+|y|^2]^{\frac{N-p}{2(p-1)}+1}}. $$ Using the change of variables $y=(1+t)z$ and passing to polar coordinates, we can then write \begin{align*} \int_{\mathbb{R}^N_+} |\nabla U(y,t)|^p\, dydt & = \left(\frac{N-p}{p-1}\right)^p \int_{\mathbb{R}^N_+} \frac{dydt}{ [(1+t)^2+|y|^2]^{\frac{p(N-1)}{2(p-1)}} } \\ & = \left(\frac{N-p}{p-1}\right)^p \int_0^{+\infty} \frac{dt}{(1+t)^\frac{N-1}{p-1}} \omega_{N-2} \int_0^{+\infty} \frac{r^{N-2}\, dr}{(1+r^2)^\frac{p(N-1)}{2(p-1)}} \\ & = \left(\frac{N-p}{p-1}\right)^{p-1} \pi^\frac{N-1}{2} \frac{\Gamma\left(\frac{N-1}{2(p-1)}\right)}{\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)}. \end{align*} Hence $$ K_p^{-1} = \frac{\displaystyle \int_{\mathbb{R}^N_+} |\nabla U(y,t)|^p\, dydt} {\displaystyle \Big(\int_{\mathbb{R}^{N-1}} |U(y,0)|^{p_*} dy \Big)^\frac{p}{p_*}} = \left(\frac{N-p}{p-1}\right)^{p-1} \pi^\frac{p-1}{2} \left(\frac{\Gamma\left(\frac{N-1}{2(p-1)}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} \right)^\frac{p-1}{N-1} $$ and the proof is complete. \end{proof} We now turn our attention to the proof of Theorem \ref{thm1}. Let $x_0\in\partial\Omega$ be a ``good point''. By taking an appropriate chart, we can assume that $x_0=0$ and that there exist $r>0$ and $\lambda_1,\dots,\lambda_{N-1}\in\mathbb{R}$ such that \begin{align*} B_r\cap\Omega = & \{(y,t)\in B_r,\ t>\rho(y)\} \\ B_r\cap\partial\Omega =& \{(y,t)\in B_r,\ t=\rho(y)\} \end{align*} where $y=(y_1,\dots,y_{N-1})\in\mathbb{R}^{N-1}$, $B_r$ is the Euclidean ball centered at the origin and of radius $r$, and $$ \rho(y)=\frac{1}{2}\sum_{i=1}^{N-1} \lambda_i y_i^2 + \sum_{i,j,k} c_{ijk}y_i y_j y_k + O(|y|^4). $$ Since $x_0=0$ is a ``good point'', we have $\rho\ge 0$. Moreover, the $\lambda_i$'s are the principal curvatures at $0$ and thus $$ H(0)=\frac{1}{N-1}\sum_{i=1}^{N-1} \lambda_i. $$ Let $\phi$ be a smooth radial function with compact support in $B_{r/2}$ such that $\phi\equiv 1$ in $B_{r/4}$. We consider the test functions \begin{equation}\label{test.function} u_\epsilon(y,t)=\frac{\phi(y,t)}{[(t+\epsilon)^2+|y|^2]^\frac{N-p}{2(p-1)}},~~~\epsilon>0.
\end{equation} In order to give the asymptotic development of the Rayleigh quotient for $u_\epsilon$, we first compute the different terms involved: \begin{Step}\label{step_calcul1} We have the following estimates: \begin{equation}\label{NormeGradient} \int_\Omega |\nabla u_\epsilon|^p\, dx = A_1 \epsilon^{-\frac{N-p}{p-1}} + \begin{cases} A_2 \epsilon^{1-\frac{N-p}{p-1}} + A_3 \epsilon^{2-\frac{N-p}{p-1}} \\ \hspace{1cm} +\begin{cases} O(\epsilon^{3-\frac{N-p}{p-1}}) \text{ if } p<\frac{N+3}{4} \\ O(\ln(1/\epsilon)) \text{ if }p=\frac{N+3}{4} \\ O(1) \text{ if } \frac{N+3}{4}<p<\frac{N+1}{2} \end{cases} \\ A_2'\ln(1/\epsilon) \text{ if } p=\frac{N+1}{2} \\ O(1) \text{ if } p>\frac{N+1}{2} \end{cases} \end{equation} \begin{equation}\label{NormeLp} \int_\Omega h(x) |u_\epsilon|^p\, dx = \begin{cases} D\epsilon^{-\frac{N-p^2}{p-1}} + \begin{cases} O(\epsilon^{1-\frac{N-p^2}{p-1}}) \text{ if }p<\frac{-1+\sqrt{4N+5}}{2} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{-1+\sqrt{4N+5}}{2} \\ O(1) \text{ if } \sqrt{N}>p>\frac{-1+\sqrt{4N+5}}{2} \end{cases} \\ O(\ln(1/\epsilon)) \text{ if } p=\sqrt{N} \\ O(1) \text{ if } p>\sqrt{N} \end{cases} \end{equation} \begin{equation}\label{NormeCritique} \begin{aligned} \int_{\partial\Omega} |u_\epsilon|^{p_*}\, dS =& B_1 \epsilon^{-1-\frac{N-p}{p-1}} + B_2 \epsilon^{-\frac{N-p}{p-1}} \\ & + \begin{cases} B_3 \epsilon^{1-\frac{N-p}{p-1}} + \begin{cases} O(\epsilon^{2-\frac{N-p}{p-1}}) \text{ if } p<\frac{N+2}{3} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{N+2}{3} \\ O(1) \text{ if } \frac{N+2}{3}<p<\frac{N+1}{2} \end{cases} \\ B_4 \ln(1/\epsilon) \text{ if } p=\frac{N+1}{2} \\ O(1) \text{ if } p>\frac{N+1}{2} \end{cases} \end{aligned} \end{equation} where $$ A_1 = \frac{1}{2}\left(\frac{N-p}{p-1}\right)^{p-1}\omega_{N-2} \frac{\Gamma\left(\frac{N-1}{2}\right)\Gamma\left(\frac{N-1}{2(p-1)}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} $$ $$ A_2 = -\frac{H(0)\omega_{N-2}}{4} \left(\frac{N-p}{p-1}\right)^p \frac{\Gamma\left(\frac{N+1}{2}\right)\Gamma\left(\frac{N-2p+1}{2(p-1)}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} $$ $$ A_2' = -\frac{H(0)\omega_{N-2}}{2} \left(\frac{N-p}{p-1}\right)^p $$ $$ A_3 = \frac{\omega_{N-2}}{16} \left(\frac{N-p}{p-1}\right)^p \frac{\Gamma\left(\frac{N-1}{2}\right)\Gamma\left(\frac{N-2p+1}{2(p-1)}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} \left(\frac{3}{2}\sum\lambda_i^2+\sum_{i<j}\lambda_i\lambda_j \right) $$ $$ B_1 = \omega_{N-2} \frac{\Gamma\left(\frac{N-1}{2}\right) \Gamma\left(\frac{N-1}{2(p-1)}\right)}{2\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} $$ $$ B_2 = -\frac{\omega_{N-2} \sum \lambda_i}{8} \frac{p(N-1)}{p-1} \frac{\Gamma\left(\frac{N-1}{2}\right)\Gamma\left(\frac{N-1}{2(p-1)}\right)} {\Gamma\left(1+\frac{p(N-1)}{2(p-1)}\right)} $$ \begin{align*} B_3 = & \frac{\omega_{N-2}}{32} \frac{\Gamma\left(\frac{N-1}{2}\right)\Gamma\left(\frac{N-2p+1}{2(p-1)}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} \times \\ &\left\{\left(1+\frac{3(N-2p+1)}{p-1}\right) \sum \lambda_i^2 + \left(-2+\frac{2(N-2p+1)}{p-1}\right) \sum_{i<j} \lambda_i\lambda_j \right\} \end{align*} $$ B_4 = \frac{\omega_{N-2}}{2} \left\{\left(\frac{1}{N-1} - \frac{p(N-1)}{4(p-1)}\right) \sum \lambda_i^2 - \frac{p(N-1)} {2(p-1)}\sum_{i<j} \lambda_i\lambda_j + o(1) \right\} $$ $$ D = h(0) \frac{p-1}{N-p^2} \omega_{N-2} \frac{\Gamma\left(\frac{N-1}{2}\right)\Gamma\left(\frac{N-p^2+p-1}{2(p-1)}\right)} {2\Gamma\left(\frac{p(N-p)}{2(p-1)}\right)} $$ \end{Step} \begin{proof}[Proof of Step \ref{step_calcul1}] We have $$ 
[(t+\epsilon)^2+|y|^2]^\frac{N-1}{p-1} |\nabla u_\epsilon|^2 = \left(\frac{N-p}{p-1}\right)^2 \phi^2 + [(t+\epsilon)^2+|y|^2]\,|\nabla\phi|^2 - 2\frac{N-p}{p-1}\phi\big(y\cdot \nabla_y\phi + (t+\epsilon)\partial_t\phi\big). $$ Hence in $B_{r/4}$, $$ |\nabla u_\epsilon|^p = \left(\frac{N-p}{p-1}\right)^p \frac{1}{[(t+\epsilon)^2+|y|^2]^\frac{p(N-1)}{2(p-1)}}, $$ and then $$ \int_\Omega |\nabla u_\epsilon|^p\, dx = \left(\frac{N-p}{p-1}\right)^p (I_1 - I_2) + O(1) $$ with $$ I_1 = \int_{Q_a} \frac{dydt}{[(t+\epsilon)^2+|y|^2]^\frac{p(N-1)}{2(p-1)}} \quad\text{ and }\quad I_2 = \int_{Q_a\setminus \Omega} \frac{dydt}{[(t+\epsilon)^2+|y|^2]^\frac{p(N-1)}{2(p-1)}}, $$ where $Q_a := \{(y,t)\ |\ |y|\le a \text{ and } 0\le t\le a\}$. Changing variables $y=(1+t)z$ and passing to polar coordinates, we have \begin{align*} I_1 & = \int_{Q_a} \frac{1}{[(t+\epsilon)^2+|y|^2]^\frac{p(N-1)}{2(p-1)}}\, dydt \\ & = \epsilon^{-\frac{N-p}{p-1}} \int_{\mathbb{R}^N_+} \frac{1}{[(1+t)^2+|y|^2]^\frac{p(N-1)}{2(p-1)}}\, dydt + O(1) \\ & = \epsilon^{-\frac{N-p}{p-1}} \omega_{N-2} \int_0^\infty \frac{dt}{(1+t)^\frac{N-1}{p-1}} \int_0^\infty \frac{r^{N-2}\, dr} {(1+r^2)^\frac{p(N-1)}{2(p-1)}} + O(1) \end{align*} Hence \begin{equation}\label{I_1} I_1 = \epsilon^{-\frac{N-p}{p-1}} \frac{p-1}{N-p} \omega_{N-2} \frac{\Gamma\left(\frac{N-1}{2}\right) \Gamma\left(\frac{N-1}{2(p-1)}\right)} {2\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} + O(1). \end{equation} On the other hand, according to Taylor's formula, \begin{align*} I_2 = & \int_{|y|\le a} \int_0^{\rho(y)}\frac{1}{[(t+\epsilon)^2+|y|^2]^\frac{p(N-1)}{2(p-1)}}\, dtdy \\ = & \int_{|y|\le a} \frac{\rho(y)\, dy} {(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} - \frac{p(N-1)}{2(p-1)} \epsilon \int_{|y|\le a} \frac{\rho(y)^2\, dy}{(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} \\ & + O\left(\int_{|y|\le a} \frac{|y|^6\, dy}{(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} \right) \\ = &\ I_3 - \frac{p(N-1)}{2(p-1)} \epsilon I_4 + \begin{cases} O\left( \epsilon^{3-\frac{N-p}{p-1}}\right), \text{ if } p<\frac{N+3}{4} \\ O(\ln(1/\epsilon)), \text{ if } p=\frac{N+3}{4} \\ O(1), \text{ if } p>\frac{N+3}{4} \end{cases} \end{align*} By radial symmetry, we have \begin{align*} I_3 & = \frac{1}{2}H(0) \int_{|y|\le a} \frac{|y|^2\, dy} {(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} + O\left(\int_{|y|\le a} \frac{|y|^4\, dy}{(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} \right) \end{align*} with \begin{equation}\label{E1} \begin{aligned} \int_{|y|\le a} & \frac{|y|^2\, dy}{(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} = \epsilon^{1-\frac{N-p}{p-1}} \omega_{N-2} \int_0^{a/\epsilon} \frac{r^N dr}{(1+r^2)^{\frac{p(N-1)}{2(p-1)}}} \\ & = \begin{cases} \epsilon^{1-\frac{N-p}{p-1}}\omega_{N-2} \frac{\Gamma\left(\frac{N+1}{2}\right) \Gamma\left(\frac{N-2p+1}{2(p-1)}\right) } { 2\Gamma\left(\frac{p(N-1)}{2(p-1)}\right) } + O(1) \text{ if } p<\frac{N+1}{2} \\ \approx\omega_{N-2}\ln(1/\epsilon) \text{ if } p=\frac{N+1}{2} \\ O(1) \text{ if } p>\frac{N+1}{2} \\ \end{cases} \end{aligned} \end{equation} and \begin{equation}\label{E2} \begin{aligned} \int_{|y|\le a} \frac{|y|^4\, dy}{(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} & = \epsilon^{3-\frac{N-p}{p-1}} \omega_{N-2} \int_0^{a/\epsilon} \frac{r^{N+2}\, dr} {(1+r^2)^{\frac{p(N-1)}{2(p-1)}}} \\ & = \begin{cases} O(\epsilon^{3-\frac{N-p}{p-1}}) \text{ if } p<\frac{N+3}{4} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{N+3}{4} \\ O(1) \text{ if } p>\frac{N+3}{4} \end{cases} \end{aligned} \end{equation} Since $\frac{N+3}{4}<\frac{N+1}{2}$ we get \begin{equation*} \begin{aligned}
I_3 & = \begin{cases} \epsilon^{1-\frac{N-p}{p-1}}\omega_{N-2} H(0) \frac{\Gamma\left(\frac{N+1}{2}\right) \Gamma\left(\frac{N-2p+1}{2(p-1)}\right)}{4\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} + \begin{cases} O(\epsilon^{3-\frac{N-p}{p-1}}) \text{ if } p<\frac{N+3}{4} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{N+3}{4} \\ O(1) \text{ if } \frac{N+3}{4}<p<\frac{N+1}{2} \end{cases} \\ \approx\frac{1}{2}H(0)\omega_{N-2}\ln(1/\epsilon) \text{ if } p=\frac{N+1}{2} \\ O(1) \text{ if } p>\frac{N+1}{2} \end{cases} \end{aligned} \end{equation*} Concerning $I_4$, we have \begin{align*} I_4 = & \frac{1}{4}\sum \lambda_i^2 \int_{|y|\le a} \frac{y_i^4\, dy} {(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}}\\ & + \frac{1}{2}\sum_{i<j} \lambda_i\lambda_j \int_{|y|\le a} \frac{y_i^2 y_j^2\, dy} {(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} \\ & \qquad + O\left(\int_{|y|\le a} \frac{|y|^5\, dy} {(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} \right). \end{align*} First we compute \begin{align*} \int_{|y|\le a} \frac{y_i^4\, dy}{(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} & = \epsilon^{1-\frac{N-p}{p-1}} \int_{|y|\le a/\epsilon} \frac{y_i^4\, dy} {(1+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} \\ & = \begin{cases} O(1) \text{ if } p>\frac{N+1}{2} \\ \approx \omega_{N-2}\ln(1/\epsilon) \text{ if } p=\frac{N+1}{2} \end{cases} \end{align*} and if $p<\frac{N+1}{2}$, \begin{align*} & \int_{|y|\le a} \frac{y_i^4\, dy}{(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} \\ & = 2 \epsilon^{1-\frac{N-p}{p-1}} \omega_{N-3} \int_0^\infty \frac{r^{N-3}\, dr} {(1+r^2)^{\frac{p(N-1)}{2(p-1)}-\frac{3}{2}}} \int_0^\infty \frac{y^4\, dy} {(1+y^2)^{\frac{p(N-1)}{2(p-1)}+1}} + O(1). \end{align*} Hence \begin{equation}\label{I4_1} \begin{aligned} & \int_{|y|\le a} \frac{y_i^4\, dy}{(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} \\ & = \begin{cases} \epsilon^{1-\frac{N-p}{p-1}} \frac{\omega_{N-3}}{2} \frac{\Gamma\left(\frac{N-2}{2}\right)\Gamma\left(\frac{N-2p+1}{2(p-1)}\right)\Gamma\left(\frac{5}{2}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}+1\right)} + O(1) \text{ if } p<\frac{N+1}{2} \\ \approx \omega_{N-2}\ln(1/\epsilon) \text{ if } p=\frac{N+1}{2} \\ O(1) \text{ if } p>\frac{N+1}{2} \end{cases} \end{aligned} \end{equation} In the same way $$ \int_{|y|\le a} \frac{y_i^2y_j^2\, dy} {(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} = \begin{cases} \approx\omega_{N-2}\ln(1/\epsilon) \text{ if } p=\frac{N+1}{2} \\ O(1) \text{ if } p>\frac{N+1}{2} \end{cases} $$ and if $p<\frac{N+1}{2}$, \begin{align*} & \int_{|y|\le a} \frac{y_i^2y_j^2\, dy} {(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} \\ = & \epsilon^{1-\frac{N-p}{p-1}} \int_{|y|\le a/\epsilon} \frac{y_i^2y_j^2\, dy} {(1+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} \\ = & 4\epsilon^{1-\frac{N-p}{p-1}}\omega_{N-4} \int_0^\infty \frac{r^{N-4}\, dr} {(1+r^2)^{\frac{p(N-1)}{2(p-1)}-2}} \int_0^\infty \frac{y_i^2\, dy_i} {(1+y_i^2)^{\frac{p(N-1)}{2(p-1)}-\frac{1}{2}}} \int_0^\infty \frac{y_j^2\, dy_j} {(1+y_j^2)^{\frac{p(N-1)}{2(p-1)}+1}} \\ & + O(1) \end{align*} Hence \begin{align*} & \int_{|y|\le a} \frac{y_i^2y_j^2\, dy}{(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} \\ & = \begin{cases} \epsilon^{1-\frac{N-p}{p-1}} \frac{\omega_{N-4}}{2} \frac{\Gamma\left(\frac{N-3}{2}\right)\Gamma\left(\frac{3}{2}\right)^2 \Gamma\left(\frac{N-2p+1}{2(p-1)}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}+1\right) } + O(1) \text{ if } p<\frac{N+1}{2} \\ \approx \omega_{N-2}\ln(1/\epsilon) \text{ if } p=\frac{N+1}{2} \\ O(1) \text{ if } p>\frac{N+1}{2} \end{cases} \end{align*} Once again, \begin{align*} \int_{|y|\le a} \frac{|y|^5\,
dy}{(\epsilon^2+|y|^2)^{\frac{p(N-1)}{2(p-1)}+1}} & = \epsilon^{2-\frac{N-p}{p-1}} \omega_{N-2} \int_0^{a/\epsilon} \frac{r^{N+3}\,dr} {(1+r^2)^{\frac{p(N-1)}{2(p-1)}+1}} \\ & = \begin{cases} O(\epsilon^{2-\frac{N-p}{p-1}}) \text{ if } p<\frac{N+2}{3} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{N+2}{3} \\ O(1) \text{ if } p>\frac{N+2}{3} \end{cases} \end{align*} Using the fact that $\Gamma(\frac32)=\frac{\sqrt{\pi}}{2}$, $\Gamma(\frac52)=\frac{3\sqrt{\pi}}{4}$, and $$ \omega_{N-3} = \frac{1}{\sqrt{\pi}} \frac{\Gamma\left(\frac{N-1}{2}\right)}{\Gamma\left(\frac{N-2}{2}\right)}\omega_{N-2}, \qquad \omega_{N-4} = \frac{1}{\pi} \frac{\Gamma\left(\frac{N-1}{2}\right)}{\Gamma\left(\frac{N-3}{2}\right)}\omega_{N-2}, $$ we eventually get that \begin{equation}\label{I4} I_4 = \begin{cases} \frac{\omega_{N-2}}{16}\epsilon^{1-\frac{N-p}{p-1}} \frac{\Gamma\left(\frac{N-2p+1}{2(p-1)}\right)\Gamma\left(\frac{N-1}{2}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}+1\right)} \left( \frac{3}{2}\sum \lambda_i^2 + \sum_{i<j} \lambda_i\lambda_j \right) \\ \hspace{1cm} + \begin{cases} O(\epsilon^{2-\frac{N-p}{p-1}}) \text{ if } p<\frac{N+2}{3} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{N+2}{3} \\ O(1) \text{ if } \frac{N+2}{3}<p<\frac{N+1}{2} \end{cases} \\ \frac{\omega_{N-2}}{2}\ln(1/\epsilon)\left(\frac{1}{2}\sum \lambda_i^2 + \sum_{i<j} \lambda_i\lambda_j + o(1)\right) \text{ if } p=\frac{N+1}{2} \\ O(1) \text{ if } p>\frac{N+1}{2} \end{cases} \end{equation} We thus obtain $$ I_2 = \begin{cases} \epsilon^{1-\frac{N-p}{p-1}} \frac{H(0)\omega_{N-2}}{4} \frac{\Gamma\left(\frac{N+1}{2}\right)\Gamma\left(\frac{N-2p+1}{2(p-1)}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}\right) } \\ \hspace{1cm} - \epsilon^{2-\frac{N-p}{p-1}} \frac{\omega_{N-2}}{16} \frac{\Gamma\left(\frac{N-1}{2}\right)\Gamma\left(\frac{N-2p+1}{2(p-1)}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} \left(\frac{3}{2}\sum \lambda_i^2 + \sum_{i<j} \lambda_i\lambda_j \right) \\ \hspace{1cm} + \begin{cases} O(\epsilon^{3-\frac{N-p}{p-1}}) \text{ if } p<\frac{N+3}{4} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{N+3}{4} \\ O(1) \text{ if } \frac{N+3}{4}<p<\frac{N+1}{2} \end{cases} \\ \frac{H(0)\omega_{N-2}}{2}\ln(1/\epsilon)(1 + o(1)) \text{ if } p=\frac{N+1}{2} \\ O(1) \text{ if } p>\frac{N+1}{2} \end{cases} $$ So the proof of \eqref{NormeGradient} is completed. To prove \eqref{NormeLp}, we first observe that \begin{equation}\label{NormeLp_4} \begin{aligned} \int_\Omega & h(x) |u_\epsilon|^p\, dx = h(0)\int_\Omega |u_\epsilon|^p\, dx + O\left(\int_\Omega |x||u_\epsilon|^p\, dx \right) \\ & = h(0)\int_{Q_a} |u_\epsilon|^p\, dx + O\left(\int_{Q_a\setminus\Omega} |u_\epsilon|^p\, dx + \int_{Q_a} |x||u_\epsilon|^p\, dx \right), \end{aligned} \end{equation} where, as before, $Q_a = \{(y,t)\ |\ |y|\le a \text{ and } 0\le t\le a\}$.
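For the reader's orientation, we record where the power of $\epsilon$ in the next computation comes from (a one-line scaling check): under the scaling $(y,t)=\epsilon(z,s)$ the volume element contributes $\epsilon^{N}$, the kernel $[(t+\epsilon)^2+|y|^2]^{-\frac{p(N-p)}{2(p-1)}}$ contributes $\epsilon^{-\frac{p(N-p)}{p-1}}$, and
$$
N-\frac{p(N-p)}{p-1}=\frac{N(p-1)-p(N-p)}{p-1}=\frac{p^2-N}{p-1}=-\frac{N-p^2}{p-1}.
$$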
Now, \begin{align*} \int_{Q_a} |u_\epsilon|^p dx & = \int_{|y|\le a,0<t\le a} \frac{dydt} {[(t+\epsilon)^2+|y|^2]^\frac{p(N-p)}{2(p-1)}} + O(1) \\ & = \epsilon^{-\frac{N-p^2}{p-1}} \int_{|y|\le a/\epsilon,0<t\le a/\epsilon} \frac{dydt} {[(1+t)^2+|y|^2]^\frac{p(N-p)}{2(p-1)}} + O(1) \\ & = \begin{cases} O(\ln(1/\epsilon)) \text{ if } p^2=N \\ O(1) \text{ if } p^2>N \end{cases} \end{align*} If $p^2<N$, using the change of variable $y=(1+t)z$ and then passing to polar coordinates, we get \begin{align*} \int_{Q_a} |u_\epsilon|^p dx & = \epsilon^{-\frac{N-p^2}{p-1}} \omega_{N-2} \int_0^\infty \frac{dt}{(1+t)^{\frac{N-p^2}{p-1}+1}} \int_0^\infty \frac{r^{N-2}\, dr}{(1+r^2)^\frac{p(N-p)}{2(p-1)}} + O(1) \end{align*} Hence \begin{equation}\label{NormeLp_1} \begin{aligned} \int_{Q_a} |u_\epsilon|^p dx & = \begin{cases} \epsilon^{-\frac{N-p^2}{p-1}} \frac{p-1}{N-p^2} \omega_{N-2} \frac{\Gamma\left(\frac{N-1}{2}\right) \Gamma\left(\frac{N-p^2+p-1}{2(p-1)}\right)} {2\Gamma\left(\frac{p(N-p)}{2(p-1)}\right)} + O(1) \text{ if } p^2<N \\ O(\ln(1/\epsilon)) \text{ if } p^2=N \\ O(1) \text{ if } p^2>N \end{cases} \end{aligned} \end{equation} On the other hand, using Taylor's formula, \begin{equation}\label{NormeLp_2} \begin{aligned} \int_{Q_a\setminus \Omega} |u_\epsilon|^p dx & = \int_{|y|\le a} \int_0^{\rho(y)} \frac{dt}{[(t+\epsilon)^2+|y|^2]^\frac{p(N-p)}{2(p-1)}}\, dy + O(1) \\ & = O\left( \int_{|y|\le a} \frac{|y|^2\, dy} {(\epsilon^2+|y|^2)^\frac{p(N-p)}{2(p-1)}}\, dy \right) + O(1) \\ & = \epsilon^{1-\frac{N-p^2}{p-1}} O\left(\int_0^{a/\epsilon} \frac{r^N\, dr}{(1+r^2)^\frac{p(N-p)} {2(p-1)}}\right) + O(1) \\ & = \begin{cases} O(\epsilon^{1-\frac{N-p^2}{p-1}}) \text{ if } p<\frac{-1+\sqrt{4N+5}}{2} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{-1+\sqrt{4N+5}}{2} \\ O(1) \text{ if } p>\frac{-1+\sqrt{4N+5}}{2} \end{cases} \end{aligned} \end{equation} Similarly, \begin{equation}\label{NormeLp_3} \begin{aligned} \int_{Q_a} |x| |u_\epsilon|^p dx & = \int_{Q_a} \frac{|(y,t)|}{[(t+\epsilon)^2+|y|^2]^\frac{p(N-p)}{2(p-1)}}\, dydt + O(1) \\ & = \epsilon^{1-\frac{N-p^2}{p-1}} \int_{Q_{a/\epsilon}} \frac{|(y,t)|}{[(1+t)^2+|y|^2]^\frac{p(N-p)}{2(p-1)}}\, dydt + O(1) \\ & = \begin{cases} O(\epsilon^{1-\frac{N-p^2}{p-1}}) \text{ if } p<\frac{-1+\sqrt{4N+5}}{2} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{-1+\sqrt{4N+5}}{2} \\ O(1) \text{ if } p>\frac{-1+\sqrt{4N+5}}{2} \end{cases} \end{aligned} \end{equation} Combining \eqref{NormeLp_4}, \eqref{NormeLp_1}, \eqref{NormeLp_2} and \eqref{NormeLp_3}, gives \eqref{NormeLp}. Finally, to prove \eqref{NormeCritique}, we first observe that $$ \int_{\partial\Omega} |u_\epsilon|^{p_*}\, dS = \int_{Q_a} |u_\epsilon|^{p_*}\, dS $$ for small $\epsilon$ and so \begin{equation*} \begin{aligned} \int_{\partial\Omega} |u_\epsilon|^{p_*}\, dS = & \int_{|y|\le a} \frac{\sqrt{1+|\nabla\rho|^2}}{[(\epsilon+\rho(y))^2+|y|^2]^\frac{p(N-1)}{2(p-1)}}\, dy \\ = & \int_{|y|\le a} \frac{1+\frac12 |\nabla\rho|^2 + O(|y|^4)} {(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} \Big[1-\frac{p(N-1)}{2(p-1)} \frac{\rho(2\epsilon+\rho)}{\epsilon^2+|y|^2} \\ & - c_{N,p}\frac{\rho^2(2\epsilon+\rho)^2}{(\epsilon^2+|y|^2)^2} + O\left(\frac{\rho^3(2\epsilon+\rho)^3}{(\epsilon^2+|y|^2)^3}\right)\Big]\, dy, \end{aligned} \end{equation*} where $$ c_{N,p} = -\frac{p(N-1)}{4(p-1)}\left[ \frac{p(N-1)}{2(p-1)}+1\right]. 
$$ Hence \begin{align*} & \int_{\partial\Omega} |u_\epsilon|^{p_*}\, dS \\ = & \int_{|y|\le a} \frac{dy}{(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} - \frac{p(N-1)}{p-1}\,\epsilon \int_{|y|\le a} \frac{\rho(y)\, dy} {(\epsilon^2+|y|^2)^{1+\frac{p(N-1)}{2(p-1)}}} \\ & + \frac12 \int_{|y|\le a} \frac{|\nabla\rho|^2\, dy} {(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} - \frac{p(N-1)}{2(p-1)} \int_{|y|\le a} \frac{\rho^2(y)\, dy}{(\epsilon^2+|y|^2)^{1+\frac{p(N-1)}{2(p-1)}}} \\ & - 4\epsilon^2 c_{N,p} \int_{|y|\le a} \frac{\rho^2(y)\, dy} {(\epsilon^2+|y|^2)^{2+\frac{p(N-1)}{2(p-1)}}} \\ & + O\left(\int_{|y|\le a} \frac{|y|^4\, dy} {(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} + \epsilon \int_{|y|\le a} \frac{|y|^4\, dy}{(\epsilon^2+|y|^2)^{1+\frac{p(N-1)}{2(p-1)}}} \right) \\ & = I_5 - \frac{p(N-1)}{p-1}\,\epsilon\, I_7 + \frac12 I_6 - \frac{p(N-1)}{2(p-1)} I_8 - 4\epsilon^2 c_{N,p} I_9 + O(I_{10}). \end{align*} We first compute $I_5$ as follows: \begin{equation}\label{I5} \begin{aligned} I_5 & = \int_{|y|\le a} \frac{dy}{(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} = \omega_{N-2} \epsilon^{-1-\frac{N-p}{p-1}} \int_0^{a/\epsilon} \frac{r^{N-2}\, dr} {(1+r^2)^\frac{p(N-1)}{2(p-1)}} \\ & = \omega_{N-2} \epsilon^{-1-\frac{N-p}{p-1}} \int_0^\infty \frac{r^{N-2}\, dr} {(1+r^2)^\frac{p(N-1)}{2(p-1)}} + O(1) \\ & = \omega_{N-2} \epsilon^{-1-\frac{N-p}{p-1}} \frac{\Gamma\left(\frac{N-1}{2}\right) \Gamma\left(\frac{N-1}{2(p-1)}\right)} {2\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} + O(1). \end{aligned} \end{equation} According to \eqref{E1} and \eqref{E2}, using the relation $\Gamma\left(\frac{N+1}{2}\right)=\frac{N-1}{2}\Gamma\left(\frac{N-1}{2}\right)$, we have \begin{equation}\label{I6} \begin{aligned} I_6 & = \int_{|y|\le a} \frac{|\nabla \rho|^2\, dy}{(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} \\ & = \sum \lambda_i^2 \int_{|y|\le a} \frac{|y_i|^2\, dy} {(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} + O\left(\int_{|y|\le a} \frac{|y|^4\, dy} {(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} \right) \\ & = \frac{\sum \lambda_i^2}{N-1} \int_{|y|\le a} \frac{|y|^2\, dy} {(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} + O\left(\int_{|y|\le a} \frac{|y|^4\, dy} {(\epsilon^2+|y|^2)^\frac{p(N-1)}{2(p-1)}} \right) \\ & = \begin{cases} \frac14 \sum \lambda_i^2 \omega_{N-2} \epsilon^{1-\frac{N-p}{p-1}} \frac{\Gamma\left(\frac{N-1}{2}\right)\Gamma\left(\frac{N-2p+1}{2(p-1)}\right)} {\Gamma\left(\frac{p(N-1)}{2(p-1)}\right)} + \begin{cases} O(\epsilon^{3-\frac{N-p}{p-1}}) \text{ if } p<\frac{N+3}{4} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{N+3}{4} \\ O(1) \text{ if } \frac{N+1}{2}>p>\frac{N+3}{4} \end{cases} \\ \frac{\omega_{N-2}\sum \lambda_i^2}{N-1} \ln(1/\epsilon) \text{ if } p=\frac{N+1}{2} \\ O(1) \text{ if } p>\frac{N+1}{2} \end{cases} \end{aligned} \end{equation} By radial symmetry, we have \begin{align*} I_7 = & \int_{|y|\le a} \frac{\rho(y)\, dy}{(\epsilon^2+|y|^2)^{1+\frac{p(N-1)}{2(p-1)}}} \\ = & \frac{\sum \lambda_i}{2(N-1)} \int_{|y|\le a} \frac{|y|^2\, dy} {(\epsilon^2+|y|^2)^{1+\frac{p(N-1)}{2(p-1)}}} + O\left(\int_{|y|\le a} \frac{|y|^4\, dy} {(\epsilon^2+|y|^2)^{1+\frac{p(N-1)}{2(p-1)}}} \right) \\ = & \frac{\omega_{N-2} \sum \lambda_i}{2(N-1)} \epsilon^{-1-\frac{N-p}{p-1}} \int_0^{a/\epsilon} \frac{r^N\, dr}{(1+r^2)^{1+\frac{p(N-1)}{2(p-1)}}} \\ & + \epsilon^{-\frac{N-p}{p-1}} O\left(\int_0^{a/\epsilon} \frac{r^{N+2}\, dr} {(1+r^2)^{1+\frac{p(N-1)}{2(p-1)}}} \right)\\ = & \frac{\omega_{N-2} \sum \lambda_i}{2(N-1)} \epsilon^{-1-\frac{N-p}{p-1}} \int_0^\infty \frac{r^N\, dr}
{(1+r^2)^{1+\frac{p(N-1)}{2(p-1)}}} + \begin{cases} O(\epsilon^{1-\frac{N-p}{p-1}} ) \text{ if } p<\frac{N+1}{2} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{N+1}{2} \\ O(\epsilon^{-\frac{N-p}{p-1}}) \text{ if } p>\frac{N+1}{2} \end{cases} \end{align*} and so \begin{equation}\label{I7} \begin{aligned} I_7 = & \frac{\omega_{N-2} \sum \lambda_i}{8} \epsilon^{-1-\frac{N-p}{p-1}} \frac{\Gamma\left(\frac{N-1}{2}\right)\Gamma\left(\frac{N-1}{2(p-1)}\right)} {\Gamma\left(1+\frac{p(N-1)}{2(p-1)}\right)} \\ & + \begin{cases} O(\epsilon^{1-\frac{N-p}{p-1}}) \text{ if } p<\frac{N+1}{2} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{N+1}{2} \\ O(\epsilon^{-\frac{N-p}{p-1}}) \text{ if } p>\frac{N+1}{2} \end{cases} \end{aligned} \end{equation} To compute $I_9$ we proceed as in the computations of $I_4$, i.e. \begin{align*} I_9 = & \int_{|y|\le a} \frac{\rho^2(y)\, dy} {(\epsilon^2+|y|^2)^{2+\frac{p(N-1)}{2(p-1)}} } \\ = & \frac14 \sum \lambda_i^2 \int_{|y|\le a} \frac{y_1^4\, dy} {(\epsilon^2+|y|^2)^{2+\frac{p(N-1)}{2(p-1)}}}\\ & + \frac12 \sum_{i<j} \lambda_i\lambda_j \int_{|y|\le a} \frac{y_i^2 y_j^2\, dy} {(\epsilon^2+|y|^2)^{2+\frac{p(N-1)}{2(p-1)}}} + O\left( \int_{|y|\le a} \frac{|y|^5\, dy} {(\epsilon^2+|y|^2)^{2+\frac{p(N-1)}{2(p-1)}} } \right). \end{align*} Now \begin{align*} \int_{|y|\le a} & \frac{y_1^4\, dy} {(\epsilon^2+|y|^2)^{2+\frac{p(N-1)}{2(p-1)}}} = \epsilon^{-\frac{N-1}{p-1}} \int_{\mathbb{R}^{N-1}} \frac{y_1^4\, dy} {(1+|y|^2)^{2+\frac{p(N-1)}{2(p-1)}}} + O(1) \\ & = 2 \epsilon^{-\frac{N-1}{p-1}} \omega_{N-3} \int_0^\infty \frac{r^{N-3}\, dr} {(1+r^2)^{\frac{p(N-1)}{2(p-1)}-\frac12}} \int_0^\infty \frac{s^4\, ds} {(1+s^2)^{2+\frac{p(N-1)}{2(p-1)}}} + O(1) \\ & = \frac{3\omega_{N-2}}{8} \epsilon^{-\frac{N-1}{p-1}} \frac{\Gamma\left(\frac{N-1}{2}\right) \Gamma\left(\frac{N-1}{2(p-1)}\right)} {\Gamma\left(2+\frac{p(N-1)}{2(p-1)}\right)} + O(1), \end{align*} \begin{align*} \int_{|y|\le a} & \frac{y_i^2 y_j^2\, dy} {(\epsilon^2+|y|^2)^{2+\frac{p(N-1)}{2(p-1)}} } = \epsilon^{-\frac{N-1}{p-1}} \int_{\mathbb{R}^{N-1}} \frac{y_i^2 y_j^2\, dy}{(1+|y|^2)^{2+\frac{p(N-1)}{2(p-1)}}} + O(1) \\ = & 4\epsilon^{-\frac{N-1}{p-1}} \omega_{N-4} \int_0^\infty \frac{r^{N-4}\, dr} {(1+r^2)^{\frac{p(N-1)}{2(p-1)}-1}} \int_0^\infty \frac{y_i^2\, dy_i} {(1+y_i^2)^{\frac12+\frac{p(N-1)}{2(p-1)}} } \\ & \times \int_0^\infty \frac{y_j^2 dy_j}{(1+y_j^2)^{2+\frac{p(N-1)}{2(p-1)}} } + O(1) \\ = & \frac{\omega_{N-2}}{8} \epsilon^{-\frac{N-1}{p-1}} \frac{\Gamma\left(\frac{N-1}{2}\right)\Gamma\left(\frac{N-1}{2(p-1)}\right)} {\Gamma\left(2+\frac{p(N-1)}{2(p-1)}\right)} + O(1), \end{align*} and \begin{align*} \int_{|y|\le a} \frac{|y|^5\, dy} {(\epsilon^2+|y|^2)^{2+\frac{p(N-1)}{2(p-1)}}} & = \epsilon^{-\frac{N-p}{p-1}} \omega_{N-2} \int_0^{a/\epsilon} \frac{r^{N+3}\, dr} {(1+r^2)^{2+\frac{p(N-1)}{2(p-1)}}} \\ & = O(\epsilon^{-\frac{N-p}{p-1}}) \end{align*} Hence \begin{equation}\label{I9} \begin{aligned} I_9 = & \frac{\omega_{N-2}}{16} \epsilon^{-\frac{N-1}{p-1}} \frac{\Gamma\left(\frac{N-1}{2}\right) \Gamma\left(\frac{N-1}{2(p-1)}\right)} {\Gamma\left(2+\frac{p(N-1)}{2(p-1)}\right)} \left(\frac32 \sum \lambda_i^2 + \sum_{i<j} \lambda_i\lambda_j \right)\\ & + O(\epsilon^{-\frac{N-p}{p-1}}). 
\end{aligned} \end{equation} Finally, for $I_{10}$ we have, \begin{align*} I_{10} = & \epsilon^{3-\frac{N-p}{p-1}} \omega_{N-2} \int_0^{a/\epsilon} \frac{r^{N+2}\, dr}{(1+r^2)^\frac{p(N-1)}{2(p-1)}} + \epsilon^{2-\frac{N-p}{p-1}} \omega_{N-2} \int_0^{a/\epsilon} \frac{r^{N+2}\, dr}{(1+r^2)^{1+\frac{p(N-1)}{2(p-1)}}} \\ = & \begin{cases} O(\epsilon^{3-\frac{N-p}{p-1}}) \text{ if } p<\frac{N+3}{4} \\ O(\ln(1/\epsilon)) \text{ if } p=\frac{N+3}{4} \\ O(1) \text{ if } p>\frac{N+3}{4} \end{cases} + \begin{cases} O(\epsilon^{2-\frac{N-p}{p-1}}) \text{ if } p<\frac{N+1}{2} \\ O(\epsilon\ln(1/\epsilon)) \text{ if } p=\frac{N+1}{2} \\ O(\epsilon) \text{ if } p>\frac{N+1}{2} \end{cases} \end{align*} and so \begin{equation}\label{I10} I_{10} = \begin{cases} O(\epsilon^{2-\frac{N-p}{p-1}}) \text{ if } p\le\frac{N+2}{3} \\ O(1) \text{ if } p>\frac{N+2}{3} \end{cases} \end{equation} Putting these estimates together, we arrive at \eqref{NormeCritique}. This completes the proof of Step \ref{step_calcul1}. \end{proof} \begin{Step}\label{Step3} We have, for any dimension $N\ge 2$, $$ K_p^{-1}\frac{\displaystyle \int_{\Omega} |\nabla u_\epsilon|^p + |u_\epsilon|^p\, dx}{\displaystyle \Big(\int_{\partial\Omega} |u_\epsilon|^{p_*}\, dS\Big)^{p/p_*}} = \begin{cases} 1+O(\epsilon^\frac{N-p}{p-1}) \text{ if } p>\frac{N+1}{2} \\ 1 - \frac{N-1}{2}H(0)\epsilon \ln(1/\epsilon) + o(\epsilon \ln(1/\epsilon)) \text{ if } p=\frac{N+1}{2} \end{cases} $$ and, if $p<\frac{N+1}{2}$, for dimension $N=2,3,4$ \begin{align*} K_p^{-1}\frac{\displaystyle \int_{\Omega} |\nabla u_\epsilon|^p + |u_\epsilon|^p\, dx}{\displaystyle \Big(\int_{\partial\Omega} |u_\epsilon|^{p_*}\, dS\Big)^{p/p_*}} = & 1 -\frac{(N-p)(p-1)}{N-2p+1} H(0) \epsilon \\ & + \begin{cases} \frac{D}{A_1}\epsilon^p + \begin{cases} E \epsilon^2 + O(\epsilon^{1+p}) \text{ if } p<\frac{N+2}{3} \\ O(\epsilon^\frac{N-p}{p-1}) \text{ if } \frac{N+2}{3}\le p<\sqrt{N} \end{cases} \\ O(\epsilon^\frac{N-p}{p-1}\ln(1/\epsilon)) \text{ if } p=\sqrt{N} \\ O(\epsilon^\frac{N-p}{p-1}) \text{ if } \sqrt{N}<p<\frac{N+1}{2} \end{cases} \end{align*} where $$ E = \frac{(N-p)(p-1)}{4(N-1)(N-2p+1)} \left\{\frac{p+N-2}{N-1}\sum \lambda_i^2 - 2\sum_{i<j}\lambda_i\lambda_j \right\}. $$ Also, for dimensions $N\ge 5$, \begin{align*} K_p^{-1}\frac{\displaystyle \int_{\Omega} |\nabla u_\epsilon|^p + |u_\epsilon|^p\, dx}{\displaystyle \Big(\int_{\partial\Omega} |u_\epsilon|^{p_*}\, dS\Big)^{p/p_*}} = & 1 -\frac{(N-p)(p-1)}{N-2p+1} H(0) \epsilon \\ & + \begin{cases} E\epsilon^2 + \begin{cases} \frac{D}{A_1}\epsilon^p + \begin{cases} o(\epsilon^2) \text{ if } p\le 2 \\ o(\epsilon^p) \text{ if } 2\le p <\sqrt{N} \end{cases} \\ o(\epsilon^2) \text{ if } \sqrt{N}\le p<\frac{N+2}{3} \end{cases} \\ O(\epsilon^2) \text{ if } \frac{N+2}{3}\le p<\frac{N+1}{2} \end{cases} \end{align*} \end{Step} \begin{proof}[Proof of Step \ref{Step3}] Noting that $$ \frac{A_1}{B_1^\frac{N-p}{N-1}} = K_p^{-1}, $$ we have, when e.g. $n\ge 6$ and $p\le 2$, that \begin{align*} K_p^{-1}&\frac{\displaystyle \int_{\Omega} |\nabla u_\epsilon|^p + |u_\epsilon|^p\, dx}{\displaystyle \Big(\int_{\partial\Omega} |u_\epsilon|^{p_*}\, dS\Big)^{p/p_*}} = 1+\left(\frac{A_2}{A_1} - \frac{N-p}{N-1} \frac{B_2}{B_1}\right)\epsilon + \frac{D}{A_1}\epsilon^p \\ & + \left\{ \frac{N-p}{N-1} \left[\frac12 \left(\frac{N-p}{N-1}+1\right) \left(\frac{B_2}{B_1}\right)^2 - \frac{B_3}{B_1} - \frac{B_2}{B_1}\frac{A_2}{A_1}\right] + \frac{A_3}{A_1} \right\} \epsilon^2 + o(\epsilon^2). 
\end{align*} Using the fact that \begin{align*} & \Gamma\left(\frac{N+1}{2}\right) = \Gamma\left(\frac{N-1}{2}+1\right) = \frac{N-1}{2}\Gamma\left(\frac{N-1}{2}\right)\\ & \Gamma\left(\frac{N-1}{2(p-1)}\right) = \Gamma\left(\frac{N-2p+1}{2(p-1)}+1\right) = \frac{N-2p+1}{2(p-1)}\Gamma\left(\frac{N-2p+1}{2(p-1)} \right), \end{align*} we get \begin{align*} & \frac{A_2}{A_1} = -\frac12 \frac{N-p}{N-2p+1} \sum \lambda_i, \\ & \frac{A_3}{A_1} = \frac14 \frac{N-p}{N-2p+1} \left\{\frac32 \sum \lambda_i^2 - 2 \sum_{i<j} \lambda_i\lambda_j \right\}, \\ & \frac{B_2}{B_1} = -\frac12 \sum \lambda_i, \\ & \frac{B_3}{B_1} = \frac{1}{8(N-2p+1)} \left\{(3N-5p+2)\sum \lambda_i^2 - 4(N-p) \sum_{i<j} \lambda_i\lambda_j \right\}, \\ & \frac{D}{A_1} = \begin{cases} \frac{2h(0)}{(N-3)(N-4)} \text{ if } p=2 \\ \text{has the same sign as } \frac{h(0)}{N-p^2} \text{ otherwise}. \end{cases} \end{align*} Hence $$ \frac{A_2}{A_1}-\frac{N-p}{N-1}\frac{B_2}{B_1} = -\frac{(N-p)(p-1)}{N-2p+1} H(0) $$ and \begin{align*} \frac{N-p}{N-1} &\left[\frac12 \left(\frac{N-p}{N-1} + 1\right) \left(\frac{B_2}{B_1}\right)^2 - \frac{B_3}{B_1} - \frac{B_2}{B_1} \frac{A_2}{A_1}\right] + \frac{A_3}{A_1} \\ = & \frac{(N-p)(p-1)}{4(N-1)(N-2p+1)} \left\{\frac{p+N-2}{N-1} \sum \lambda_i^2 - 2\sum_{i<j}\lambda_i\lambda_j \right\}, \end{align*} which gives the result. The other equalities are obtained in much the same way. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{thm1}] At this point, the proof is just a combination of Steps \ref{step_calcul1} and \ref{Step3}. \end{proof} \end{document}
\begin{document} \title{How to build a pillar: a proof of Thomassen's conjecture} \begin{abstract} Carsten Thomassen in 1989 conjectured that if a graph has minimum degree more than the number of atoms in the universe ($\delta(G)\ge 10^{10^{10}}$), then it contains a \emph{pillar}, which is a graph that consists of two vertex-disjoint cycles of the same length, $s$ say, along with $s$ vertex-disjoint paths of the same length which connect matching vertices in order around the cycles. Despite the simplicity of the structure of pillars and various developments of powerful embedding methods for paths and cycles in the past three decades, this innocent looking conjecture has seen no progress to date. In this paper, we give a proof of this conjecture by building a pillar (algorithmically) in sublinear expanders. \end{abstract} \section{Introduction}\label{sec-intro} Which structures can we guarantee in a graph $G$ by imposing a condition on the average degree of $G$, $d(G)$? Extremal problems under this setting have been well studied. For example, given any graph $H$ the average degree condition required to contain a copy of $H$ is the well-known Tur\'an problem. Such embedding results are often useful for problems in other areas. To just name a historical one: Erd\H{o}s~\cite{Erd38} in 1938 studied multiplicative Sidon sets via the embeddings of 4-cycles in graphs. It is well-known that unless $H$ is acyclic, the average degree condition needed to force a copy of $H$ must depend on the number of vertices in the host graph $G$. What about sparse host graphs $G$ with only constant average degree? It turns out that there are rich families of structures that can be embedded in such sparse graphs. For example, any graph with $d(G)\geq 2$ can easily be seen to contain a cycle. A cycle can be viewed as a subdivision of $K_3$, the complete graph on three vertices. Given a graph $H$, an \emph{$H$-subdivision} is a graph obtained from $H$ by subdividing each of its edges into internally vertex-disjoint paths. The notion of subdivision connects graph theory and topology. In 1930, Kuratowski famously proved that a graph is planar if and only if it does not contain a subdivision of $K_{5}$ or the complete bipartite graph with three vertices in each part~\cite{Kur30}. Generalising the above observation about cycles, Mader~\cite{Mader67} in 1967 proved that, for each $k\in\mathbb{N}$, average degree at least some sufficiently large constant, depending only on $k$, is enough to guarantee a $K_k$-subdivision. In another direction, Bollob\'as~\cite{Bol77} in 1977 showed that additional conditions may be placed on the cycle length by raising the average degree condition. That is, for any $a\in \mathbb{N}$ and odd $b$, average degree at least some sufficiently large constant guarantees a cycle with length congruent to $a \mod b$. Other substructures whose existence is implied by sufficiently large constant average degree are $k$ vertex-disjoint cycles, for each $k$, shown by Corradi and Hajnal~\cite{CH63} in 1963, whose lengths may even be required to all be the same (see Egawa~\cite{Ega96}) or form an arithmetic progression (see Verstra\"ete~\cite{Ver00}). More recently, the second author and Montgomery~\cite{LiuMont20} proved that, for each $k$, graphs with large average degree must contain a subdivision of $K_k$ such that all its edges are subdivided the same number of times. 
In~\cite{LiuMont20}, techniques were introduced to construct paths (and hence cycles) in extremely sparse graphs while controlling their length. These techniques imply, for example, that sufficiently large constant average degree guarantees a cycle whose length is a power of 2. \subsection{Main result} In this paper, we are interested in embedding pillars as subgraphs. A \emph{pillar} is a graph that consists of two disjoint cycles $C_1$ and $C_2$ of the same length, say $s$, with vertex-sets $V(C_1)=\{v_1,\dots,v_s\}$ and $V(C_2)=\{w_1,\dots,w_s\}$, and $s$ disjoint paths $Q_1,\dots,Q_s$, all of the same length, such that $Q_i$ is a $v_i,w_i$-path, for each $i\in[s]$; see Figure~\ref{fig:pillar}. \begin{figure} \caption{A pillar and a $K_4$-pillar.} \label{fig:pillar} \end{figure} In 1989, Thomassen~\cite{Tho89} conjectured that every graph with sufficiently large constant minimum degree contains a pillar. There have been numerous powerful methods for embedding paths and cycles developed in the past three decades, such as Robertson and Seymour's work on graph linkage~\cite{RS95} (see also~\cite{BT96,TW05}), Bondy and Simonovits's use of Breadth First Search~\cite{BS74} (see also~\cite{Pik12}), Krivelevich and Sudakov's use of Depth First Search~\cite{KS13} and the use of expanders in a long line of work by Krivelevich (see e.g. his survey~\cite{Kri19} and more recently~\cite{EKK21}); see also a recent method developed by Gao, Huo, Liu and Ma~\cite{GHLM21}. Despite these developments and the simple nature of pillars, the innocent-looking conjecture of Thomassen has seen no progress in the past thirty years. One explanation for the difficulty of this conjecture is the following. Cycles are not hard to embed as all vertices within are of degree 2; and subdivisions, even though having vertices of degree at least 3, can be embedded so that all these high degree vertices are pairwise far apart. Thus, embedding cycles or subdivisions boils down to anchoring at some (well-positioned) vertices and constructing vertex-disjoint paths between pairs of them. On the other hand, in a pillar, degree-3 vertices are jammed into the two cycles $C_1$ and $C_2$, which have to be embedded one next to another. To see why degree-3 vertices are game changers, a classical result of Pyber, R\"odl and Szemer\'edi~\cite{PRSz95} shows that constant average degree does \emph{not} suffice to force a 3-regular subgraph. More precisely, they constructed an $n$-vertex graph $G$ with $d(G)=\Omega(\log\log n)$ which contains no $r$-regular subgraphs for any $r\ge 3$. A priori, it is not clear whether a pillar behaves more like a subdivision or a 3-regular graph. Our main result confirms Thomassen's conjecture, showing that pillars are fundamentally \emph{different} from 3-regular graphs in the sense that they can be forced by large constant degree. \begin{theorem}\label{thm-thomassen-conj} There exists a constant $C>0$ such that every graph with average degree at least $C$ contains a pillar. \end{theorem} In fact, our method provides embeddings of more general structures. Here is one example that can be seen as a common generalisation of a subdivision and a pillar.
Given $k\in\mathbb{N}$, a $K_k$-\emph{pillar} is a graph that consists of $k$ disjoint cycles $C_1,\ldots, C_k$ of the same length, such that for any distinct $C_i,C_j$ there is a collection $\mathcal{Q}_{i,j}$ of paths of the same length connecting matching vertices in order around $C_i, C_j$, and all paths in $\cup_{ij\in{[k]\choose 2}}\mathcal{Q}_{i,j}$ are pairwise disjoint (see Figure~\ref{fig:pillar}). Note that a pillar is a $K_2$-pillar; and a $K_k$-subdivision can be obtained from taking one appropriate vertex from each cycle $C_i$ in a $K_k$-pillar along with the corresponding paths between pairs of them. Our method can be extended to show the following. We leave its proof for enthusiastic readers. \begin{theorem}\label{thm:pillar-k} Given $k\in\mathbb{N}$, there exists $C=C(k)>0$ such that every graph with average degree at least $C$ contains a $K_k$-pillar. \end{theorem} \subsection{Discussions} \subsubsection{Our approach} In our proof, we make crucial use of a notion called sublinear expanders (see Section~\ref{subsec-robust-expander}). We prove that pillars can be built in sublinear expanders, from which Theorem~\ref{thm-thomassen-conj} follows. There has been a sequence of advancements on the theory of sublinear expanders, which results in resolutions of several long-standing conjectures. We refer the interested readers to~\cite{GFKimKimLiu21,HKL20,CruxCycle, HHKL21, KLShS17,MadLM,LiuMont20,LWY}. Our constructive proof can be turned into an algorithm. To deal with the troublesome degree-3 vertices in a pillar, we use a structure called \emph{kraken} (see Definition~\ref{def-kraken} and Figure~\ref{pic-kraken}). A prototype of this structure appeared in the work of Haslegrave, Kim and Liu (\emph{nakji} in~\cite{HKL20}) on sparse minors; it was formally introduced in the work of Gil Fern\'andez, Kim, Kim and Liu~\cite{GFKimKimLiu21} on cycles with geometric constraints. Roughly speaking, a kraken consists of a cycle, in which every vertex has a large `boundary'. If we manage to find two krakens, then we can link the matching vertices in their cycles by expanding and connecting their boundaries to obtain a pillar. Finding a \emph{single} kraken in a sublinear expander is already not an easy task; this was done in~\cite{GFKimKimLiu21} with an involved argument. To find two krakens here, we prove a \emph{robust} embedding lemma for kraken (Lemma~\ref{lem-robust-kraken}), which is the main challenge and contribution of this paper. We expect it to have further applications for embedding problems. Its proof uses the existence of kraken in sublinear expanders from~\cite{GFKimKimLiu21} as black box and builds on the techniques developed in the work of Liu and Montgomery~\cite{LiuMont20} (see the beginning of Section~\ref{sec-robust-kraken} for the high level ideas). Various difficulties occur during the embedding process as the expansion we work with is only sublinear, hence not `additive', which causes additional technicalities when implementing the above natural approach. For instance, to carry out the step of linking two krakens, we have to impose additional structural property on the krakens (see Lemma~\ref{lem-link-2pulp-adj}). \subsubsection{Future directions} Call a class of graphs \emph{forcible by large degree} (or forcible in short) if all graphs with large constant average degree contains one of them as a subgraph. 
As mentioned, by the result of Pyber, R\"odl and Szemer\'edi~\cite{PRSz95}, the class of $3$-regular graphs is not forcible, while, by contrast, our main result shows that `semi-$3$-regular' pillars are. An intriguing problem is to figure out where the line is. That is, would it be possible to give a characterisation of close-to-3-regular graphs that are forcible? A good starting point would be to find more natural forcible classes of graphs. One possible concrete direction is the following. A \emph{prism} is a Cartesian product of an edge and a cycle. We can think of a pillar as a \emph{partial} subdivision of a prism, in which only the matching edges linking two cycles in the prism are subdivided. In a sense, a pillar is a minimal partial subdivision of a prism that is forcible while keeping the closeness of the degree-3 vertices. What we mean is that if we allow two consecutive matching edges in a prism to be kept unsubdivided, then the resulting class would not be forcible as such graphs all contain a 4-cycle. Indeed, it is well-known in extremal graph theory that there are $n$-vertex $4$-cycle-free graphs with average degree $\Omega(\sqrt{n})$ (consider e.g. the incidence graphs of points and lines in projective planes). In general, no upper bound can be imposed on the girth of graphs in a forcible class. \begin{itemize} \item What are other obstructions to forcibility apart from bounded girth? \item Give more (non)examples of forcible classes of partial subdivisions of 3-regular (or minimum-degree-3) graphs with the adjacencies of degree-3 vertices largely preserved. \end{itemize} Finally, we would like to draw attention to one particular class of graphs that we do not know whether it is forcible or not. A set of $k$ edge-disjoint cycles $C_1,\ldots, C_k$ form $k$-\emph{nested cycles without crossing} if $V(C_1)\subset V(C_2)\subset \ldots \subset V(C_k)$ and for each $i\in[k-1]$, no two edges of $C_i$ are crossing chords in $C_{i+1}$ (i.e., if $C_{i+1}=v_1\dots v_{\ell}$, then $C_i$ has no two edges $v_av_{a'}$ and $v_bv_{b'}$ with $a<b<a'<b'$). Very recently, Kim, Kim and the authors~\cite{GFKimKimLiu21} proved that $2$-nested cycles without crossing are forcible, answering an old question of Erd\H{o}s. Thomassen~\cite{Tho89} made the following stronger conjecture, which remains open: $k$-nested cycles without crossing are forcible for any fixed $k$. \noindent\textbf{Organisation.} Section~\ref{sec-prelim} contains the tools needed in our proofs. Theorem~\ref{thm-thomassen-conj} will be proved in Section~\ref{sec-main-proof}, which is split into two lemmas. In Section~\ref{sec-robust-kraken} we prove the key lemma that we can find a kraken robustly in a sublinear expander; and Section~\ref{sec-link-2krakens} is devoted to linking krakens using paths of the same length to obtain a pillar. \section{Tools and building blocks}\label{sec-prelim} \subsection{Notations} For $n\in\mathbb{N}$, let $[n]:=\{1,\dots,n\}$. If we claim that a result holds for $0<a\ll b,c\ll d<1$, it means that there exist positive functions $f,g$ such that the result holds as long as $a<f(b,c)$ and $b<g(d)$ and $c<g(d)$. We will not compute these functions explicitly. In many cases, we treat large numbers as if they are integers, by omitting floors and ceilings if it does not affect the argument. We write $\log$ for the base-$e$ logarithm. Given a graph $G$, denote its average degree $2e(G)/|G|$ by $d(G)$, and write $\delta(G)$ and $\Delta(G)$ for its minimum and maximum degree, respectively.
We write $N(v)$ for the set of neighbours of $v\in V(G)$, denote by $N_G(v,W)$ the set of neighbours of $v$ in $W\subseteq V(G)$, and set $d_G(v,W)=|N_G(v,W)|$. Denote the (external) neighbourhood of $W$ by $N(W)=(\cup_{v\in W}N(v))\setminus W$. We write $N_G^0(W)=W$, and, for each integer $k\ge 1$, let $B_G^k(W)=\cup_{0\le j\le k}N_G^{j}(W)$ be the ball of radius $k$ around $W$ in $G$, that is, the set of all vertices at graph distance at most $k$ from $W$. We let $B(W)=B^1(W)$. Let $F\subseteq G$ and $H$ be graphs, and $U\subseteq V(G)$. We write $G[U]\subseteq G$ for the induced subgraph of $G$ on vertex set $U$. Denote by $G\cup H$ the graph with vertex set $V(G)\cup V(H)$ and edge set $E(G)\cup E(H)$, and write $G-U$ for the induced subgraph $G[V(G)\setminus U]$, and $G\setminus F$ for the spanning subgraph of $G$ obtained from removing the edge set of $F$. For a path~$P$, we write $\ell(P)$ for its length, which is the number of edges in the path. Where we say $P$ is a path from a vertex set $A$ to a disjoint vertex set $B$, we mean that $P$ has one endvertex in each of $A$ and $B$, and no internal vertices in $A\cup B$. \subsection{3-dimensional cube in asymmetric bipartite graphs} The 3-dimensional cube $Q_3$ is a particular instance of the structures that we are looking for: two cycles of length $4$ whose corresponding vertices are linked by paths of length $1$. In various places when we wish to expand a set $U$ robustly, we would run into the issue that~$U$ could send most of its edges to some set $W$ that we need to avoid. In such scenarios, we can use the following simple yet useful asymmetric bipartite Tur\'an type result to obtain a 3-dimensional cube. \begin{prop}\label{prop-Q3} Let $d\geq 4$ be an integer and let $G$ be a bipartite graph with partite sets $U$ and $W$ such that $|U|> {|W|\choose 3}$ and every vertex in $U$ has at least $d$ neighbours in $W$. Then, $G$ contains a copy of $Q_3$. \end{prop} \begin{proof} We colour triples in $W$ that have common neighbours in $U$ as follows. Consider an uncoloured triple $\{x,y,z\}$ in $W$: if it has a common neighbour $v$ in $U$ that has not yet been used to colour a triple in $N(v)$, then colour $\{x,y,z\}$ with $v$. We write $c_{x,y,z}$ for the vertex in $U$ that is used to colour $\{x,y,z\}$ if it exists. Repeat this until no more triples can be coloured. Note that in this partial colouring, no colour (in $U$) is used more than once. Since $|U|> {|W|\choose 3}$, there exists $v\in U$ that is not assigned as a colour for any triple. This, together with the maximality of the partial colouring, implies that every triple in $N(v)$ has been coloured by some vertex in $U\setminus\{v\}$. Thus, as $|N(v)|\geq d\geq 4$, we can take four vertices $x,y,z,w\in N(v)$, which together with $c_{x,y,z},c_{x,z,w},c_{y,z,w},c_{x,y,w}$ span a copy of $Q_3$ in $G$. \end{proof} \subsection{Sublinear expander}\label{subsec-robust-expander} Our proof makes use of the sublinear expander introduced by Koml\'os and Szemer\'edi \cite{KSz96}. We shall use the following extension by Haslegrave, Kim and Liu~\cite{HKL20}. \begin{defin}\label{def-expander} Let $\varepsilon_{1}>0$ and $k\in\mathbb{N}$.
A graph $G$ is an $(\varepsilon_{1},k)$-\textit{expander} if for all $X\subset V(G)$ with $k/2\leq |X|\leq |G|/2$, and any subgraph $F\subseteq G$ with $e(F)\leq d(G)\cdot \varepsilon(|X|)|X|$, we have $$ |N_{G\setminus F}(X)| \geq \varepsilon(|X|)\cdot|X|, $$ where $$ \varepsilon(x)=\varepsilon\left(x, \varepsilon_{1}, k\right)=\left\{\begin{array}{cc} 0 & \text { if } x<k / 5, \\ \varepsilon_{1} / \log ^{2}(15 x / k) & \text { if } x \geq k / 5. \end{array}\right. $$ \end{defin} Note that when $x\geq k/2$, $\varepsilon(x)$ is decreasing, which implies that the rate of expansion, $|N_G(B^i_G(X))|/|B^i_G(X)|\geq \varepsilon(|B^i_G(X)|,\varepsilon_1,k)$ guaranteed by the expansion condition decreases as $i$ increases; however, $\varepsilon(x)\cdot x$ is increasing, which means that the lower bound for $|N_G(B_G^i(X))|$ coming from the expansion property increases as $i$ increases. Such a sublinear expansion rate may seem rather weak at first glance; the strength of this notion is that every graph contains such a sublinear expander subgraph with almost the same average degree. We shall use the following version, which is a combination of Lemma~3.2 in \cite{HKL20} and Corollary~2.5 in \cite{LiuMont20}. \begin{theorem}\label{thm-pass-to-expander} There exists some $\varepsilon_1>0$ such that the following holds for every $\varepsilon_2>0$ and $d\in \mathbb{N}$. Every graph $G$ with $d(G)\geq 8d$ has a bipartite $(\varepsilon_1,\varepsilon_2d)$-expander subgraph $H$ with $\delta(H)\geq d$. \end{theorem} Thus, when dealing with extremal problems of embeddings in graphs with given density, such as proving Theorem~\ref{thm-thomassen-conj}, we can always pass to a subgraph to enjoy such expansion. One key consequence of the expansion is the so-called short diameter property. That is, we can find short paths robustly between two sufficiently large sets. \begin{lemma}[\cite{LiuMont20}, Lemma 3.4]\label{lem-short-diam-new} For each $0<\varepsilon_1,\varepsilon_2<1$, there exists $d_0=d_0(\varepsilon_1,\varepsilon_2)$ such that the following holds for each $n\geq d\geq d_0$ and $x\geq 1$. Let $G$ be an $n$-vertex $(\varepsilon_1,\varepsilon_2d)$-expander with $\delta(G)\geq d-1$. Let $A,B\subseteq V(G)$ with $|A|,|B|\geq x$, and let $W\subseteq V(G)\setminus(A\cup B)$ satisfy $|W|\log^3n\leq 10x$. Then, there is a path from $A$ to $B$ in $G-W$ with length at most $\frac{40}{\varepsilon_1}\log^3n$. \end{lemma} \subsection{Robust expansions of sets} We collect here some more lemmas for robust expansion of sets in sublinear expanders. The first one enables us to grow a set $A$ past some given set $X$ as long as $X$ does not interfere with each sphere around $A$ too much. This is formalised as follows. \begin{defin}\label{def-thin-set} For $\lambda>0$ and $k\in\mathbb{N}$, we say that a vertex set $X$ in a graph $G$ is~\emph{$(\lambda,k)$-thin around $A$} if $X\cap A=\varnothing$ and, for each $i\in\mathbb{N}$, $$ |N_G(B_{G-X}^{i-1}(A))\cap X|\leq \lambda \cdot i^k. $$ \end{defin} We will use the following result to get such expansion. \begin{prop}[\cite{GFKimKimLiu21}, Proposition 2.5]\label{prop-exp-HL} Let $0<1/d\ll \varepsilon_1\ll 1/\lambda, 1/k$ and $1\leq r\leq\log n$. Suppose $G$ is an $n$-vertex $(\varepsilon_1,\varepsilon_2d)$-expander with $\delta(G)\ge d$, and $X,Y$ are sets of vertices with $|X|\ge 1$ and $|Y|\leq \frac{1}{4}\varepsilon(|X|)\cdot |X|$. Let $W$ be a $(\lambda,k)$-thin set around $X$ in $G-Y$.
Then, for each $1\leq r\leq \log n$, we have $$|B^r_{G-W-Y}(X)|\geq\exp(r^{1/4}).$$ When we are given a large collection of sets in a sublinear expander, we can use the following lemma to find one set within the collection that expands robustly to medium (polylogarithmic) size. We remark that in the original Lemma 3.7 in \cite{LiuMont20}, condition \textbf{A3} below was stated as $C_i$ is $(4,1)$-thin around $A_i$, instead of $(\sqrt{|A_i|},1)$-thin, but the same proof works for this variant. \begin{lemma}[\cite{LiuMont20}, Lemma~3.7]\label{lem-expand-together} For each $0<\varepsilon_1<1$, $0<\varepsilon_2<1/5$ and $k\in \mathbb{N}$, there exists $d_0=d_0(\varepsilon_1,\varepsilon_2,k)$ such that the following holds for each $n\geq d\geq d_0$. Suppose that $G$ is an $n$-vertex bipartite $(\varepsilon_1,\varepsilon_2 d)$-expander with $\delta(G)\ge d$. Let $U\subseteq V(G)$ satisfy $|U|\leq \exp((\log\log n)^2)$. Let $r\geq n^{1/8}$ and $\ell_0=(\log\log n)^{20}$. Suppose $(A_i,B_i,C_i)$, $i\in [r]$, are such that the following hold for each $i\in [r]$. \stepcounter{propcounter} \begin{enumerate}[label = \emph{\bfseries \Alph{propcounter}\arabic{enumi}}] \item $|A_i|\geq d_0$.\label{exptog} \item $B_i\cup C_i$ and $A_i$ are disjoint sets in $V(G)\setminus U$, with $|B_i|\leq |A_i|/\log^{10}|A_i|$.\label{exptog2} \item $C_i$ is $(\sqrt{|A_i|},1)$-thin around $A_i$ in $G-U-B_i$.\label{exptog3} \item Each vertex in $B_{G-U-B_i-C_i}^{\ell_0}(A_i)$ has at most $d/2$ neighbours in $U$.\label{exptog4} \item For each $j\in [r]\setminus\{i\}$, $A_i$ and $A_j$ are at least a distance $2\ell_0$ apart in $G-U-B_i-C_i-B_j-C_j$.\label{exptog5} \end{enumerate} Then, for some $i\in [r]$, $$|B^{\ell_0}_{G-U-B_i-C_i}(A_i)|\geq \log^{k}n.$$ \end{lemma} Lastly, we need the following result to find a linear-size vertex set with polylogarithmic diameter in $G$ while avoiding an arbitrary set of size $o(n/\log^2n)$. \begin{lemma}[\cite{LiuMont20}, Lemma~3.12]\label{lem-find-large-ball} Let $0<1/d\ll\varepsilon_1,\varepsilon_2<1$ and let $G$ be an $n$-vertex bipartite $(\varepsilon_1,\varepsilon_2d)$-expander with $\delta(G)\geq d$. For any $W\subseteq V(G)$ with $|W|\leq \varepsilon_1n/100\log^2n$, there is a set $B\subseteq G-W$ with size at least $n/25$ and diameter at most $200\varepsilon_1^{-1}\log^3n$. \end{lemma} \subsection{Krakens} A basic structure we often use is a large set with small radius, defined as follows. \begin{defin}\label{def-expansion} Given a vertex $v$ in a graph $F$, $F$ is a \emph{$(D,m)$-expansion of $v$} if $|F|=D$ and every vertex of $F$ is at distance at most $m$ from $v$. \end{defin} Expansions around a vertex can be trimmed to a smaller size. \begin{prop}[\cite{LiuMont20}, Proposition 3.10]\label{prop:trimming} Let $D,m\in \mathbb{N}$ and $1\leq D'\leq D$. Then, any graph $F$ which is a $(D,m)$-expansion of $v$ contains a subgraph which is a $(D',m)$-expansion of $v$. \end{prop} \begin{defin}\label{def-kraken} For $k,s,t\in\mathbb{N}$, a \emph{$(k,s,t)$-kraken} is a graph that consists of a cycle $C$ with vertices $v_1,\dots,v_k$, vertices $u_1,\ldots, u_k$ outside $V(C)$, and subgraphs $F_{j}$ and $P_{j}$, $j\in[k]$, such that \begin{itemize}\itemsep=0pt \item $\{F_{j}:j\in[k]\}$ is a collection of sets disjoint from each other and from $V(C)$, and each~$F_j$ is a $(t,s)$-expansion of $u_j$. We call each $F_j$ a \emph{leg} and $u_j$ its \emph{end}.
\item $\{P_{j}:j\in[k]\}$ is a collection of pairwise disjoint paths, and each $P_j$ is a $v_j,u_j$-path of length at most $10s$ with internal vertices disjoint from $V(C)\cup(\cup_{i\in[k]} V(F_i))$. \end{itemize} \end{defin} \begin{figure} \caption{A $(12,s,t)$-kraken.} \label{pic-kraken} \end{figure} We usually write a kraken as a tuple $(C, u_j, F_{j}, P_{j})$, $j\in[k]$; see Figure~\ref{pic-kraken}. The following result guarantees a large kraken in a sublinear expander. \begin{lemma}[\cite{GFKimKimLiu21}, Lemma~3.2]\label{lem-kraken-from-kraken} Let $0<1/d\ll \varepsilon_1,\varepsilon_2,1/b<1$. Let $G$ be an $n$-vertex $(\varepsilon_1,\varepsilon_2d)$-expander with $\delta(G)\geq d$. Let $m=200\varepsilon_1^{-1}\log^3n$. Then, there exists a $(k,m,\log^{b}n)$-kraken $(C, u_j, F_j, P_j)$, $j\in[k]$, in $G$ for some $k\le \log n$. \end{lemma} \subsection{Adjusters} An important tool we need in our proof is a recent lemma of Liu and Montgomery~\cite{LiuMont20} (Lemma~\ref{lem-finalconnect}), which robustly finds paths of specific lengths between a given pair of vertices in a sublinear expander with some mild conditions. Before stating the lemma, let us briefly introduce the key object, called an \emph{adjuster}, involved in its proof. The basic structure is an even cycle~$C$, together with two disjoint large connected subgraphs $F_1,F_2$ attached to two almost-antipodal vertices $v_1,v_2$ on the cycle $C$. If~$C$ has length $2\ell$, for some $\ell\leq\log n$, there are two $v_1,v_2$-paths, one of length $\ell+1$ and the other of length $\ell-1$. The subgraphs $F_1$, $F_2$ are set to be comfortably larger than the size of $C$, so that they can be connected by a short path while avoiding $C$. The idea is to link many such structures sequentially to form an \emph{adjuster}, so that we can use it to find paths of many different lengths by varying the length of the path we take around each cycle; see Figure~\ref{pic-adjuster}. \begin{figure} \caption{Adjuster.} \label{pic-adjuster} \end{figure} We need the following definition to record the parity of paths between two vertices in a connected bipartite graph. For any connected bipartite graph $H$ and $u,v\in V(H)$, let $$ \pi(u,v,H)=\left\{\begin{array}{ll} 0 & \text{ if $u$ and $v$ are in the same vertex class in the (unique) bipartition of $H$},\\ 1 & \text{ if $u$ and $v$ are in different vertex classes in the bipartition of $H$}. \end{array} \right. $$ Using adjusters, Liu and Montgomery~\cite{LiuMont20} proved the following. \begin{lemma}[\cite{LiuMont20}, Lemma 4.8]\label{lem-finalconnect} There exists some $\varepsilon_1>0$ such that, for any $0<\varepsilon_2<1/5$ and $b\geq 10$, there exists $d_0=d_0(\varepsilon_1,\varepsilon_2,b)$ such that the following holds for each $n\geq d\geq d_0$. Suppose that $G$ is an $n$-vertex $Q_3$-free bipartite $(\varepsilon_1,\varepsilon_2 d)$-expander with $\delta(G)\ge d$. Suppose $\log^{10} n\leq D\leq \log^{b}n$, and $U\subseteq V(G)$ with $|U|\leq D/2\log^3n$, and let $m=\frac{8000}{\varepsilon_1}\log^3n$. Suppose $F_1,F_2\subseteq G-U$ are vertex-disjoint such that $F_i$ is a $(D,m)$-expansion of $v_i$, for each $i\in[2]$. Let $\log^{7}n\leq \ell\leq n/\log^{10}n$ be such that $\ell=\pi(v_1,v_2,G)\mod 2$. Then, there is a $v_1,v_2$-path with length $\ell$ in $G-U$. \end{lemma} Careful readers might notice that the original Lemma 4.8 in~\cite{LiuMont20} requires the graph $G$ not to contain a large clique subdivision ($\mathsf{TK}_{d/2}^{(2)}$-free).
The above version for $Q_3$-free graphs~$G$ can be proved by following the proof in~\cite{LiuMont20} and replacing the use of Proposition~3.16 there with Proposition~\ref{prop-Q3} here. \section{Main lemmas}\label{sec-main-proof} To prove Theorem~\ref{thm-thomassen-conj}, we first find two krakens whose cycles are of the same length. Then we sequentially and disjointly link the legs of one kraken to those of the other. We package these two steps into the following two lemmas, respectively. The first lemma is the key one, which constructs a kraken robustly in an expander. \begin{lemma}\label{lem-robust-kraken} For each $0<\varepsilon_1,\varepsilon_2<1$ and integer $b\ge 10$, there exists $d_0=d_0(\varepsilon_1,\varepsilon_2,b)$ such that the following holds for each $n\ge d\ge d_0$. Let $G$ be a $Q_3$-free $n$-vertex bipartite $(\varepsilon_1,\varepsilon_2 d)$-expander with $\delta(G)\geq d$. Let $L$ be the set of vertices with degree at least $e^{(\log\log n)^2}$. Let $m=200\varepsilon_1^{-1}\log^3n$ and let $U\subseteq V(G)$ satisfy $|U|\leq (\log n)^{2b}$. Then, for some $k\leq\log n$, $G-U$ contains a $(k,2m,(\log n)^{b})$-kraken $(C, u_j, F_j, P_j)$, $j\in[k]$, such that \begin{itemize} \item for each $j\in [k]$, either $u_j\in L$ or $F_j\subseteq G-L$; and \item any distinct legs $F_{j}, F_{j'}$ in $G-L$ are a distance at least $(\log n)^{1/10}$ apart from each other and from $U\setminus L$ in $G-L$. \end{itemize} \end{lemma} The next lemma delivers the finishing blow. It allows us to link each pair of vertices in the cycles inside krakens via their legs disjointly to construct a pillar. Lemma~\ref{lem-finalconnect} kicks in here to make sure that all the paths used are of the same length. \begin{lemma}\label{lem-link-2pulp-adj} For each $0<\varepsilon_1,\varepsilon_2<1$ and $t\ge 10$, there exists $d_0=d_0(\varepsilon_1,\varepsilon_2,t)$ such that the following holds for each $n\ge d\ge d_0$. Let $G$ be a $Q_3$-free $n$-vertex bipartite $(\varepsilon_1,\varepsilon_2 d)$-expander with $\delta(G)\geq d$ and $L$ be the set of vertices with degree at least $e^{(\log\log n)^2}$. Let $m=400\varepsilon_1^{-1}\log^3n$ and let $\mathsf{K}_\alpha=(C_\alpha,u_j^\alpha,F_j^\alpha,P_j^\alpha)$ and $\mathsf{K}_\beta=(C_\beta,u_j^\beta,F_j^\beta,P_j^\beta)$, $j\in[s]$, be two disjoint $(s,m,(\log n)^{2t})$-krakens, for some $s\leq\log n$, with $V(C_\alpha)=\{v_1^\alpha,\dots,v_s^\alpha\}$ and $V(C_\beta)=\{v_1^\beta,\dots,v_s^\beta\}$, and such that \begin{itemize} \item for each $\sigma\in\{\alpha,\beta\}$ and $j\in [s]$, either $u_j^\sigma\in L$ or $F_j^\sigma\subseteq G-L$; and \item all legs in $\mathsf{K}_{\alpha}$ and $\mathsf{K}_{\beta}$ lying completely in $G-L$ are a distance at least $(\log n)^{1/10}$ apart from each other in $G-L$. \end{itemize} Then, for any $\log^{7}n\leq \ell\leq \log^{t}n$ with $\ell=\pi(v_1^{\alpha},v_1^{\beta},G)\mod 2$, there is a collection of pairwise disjoint paths $Q_i$, $i\in[s]$, such that each $Q_i$ is a $v_i^\alpha,v_i^\beta$-path of length $\ell$ internally disjoint from $C_\alpha$ and $C_\beta$. \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm-thomassen-conj}] By passing to a subgraph using Theorem~\ref{thm-pass-to-expander}, we may assume that we start with an $n$-vertex bipartite $(\varepsilon_1,\varepsilon_2d)$-expander graph with minimum degree $d$. As the 3-dimensional cube $Q_3$ is a pillar, we may further assume that $G$ is $Q_3$-free.
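To spell out this standard observation, with the labelling from the definition of a pillar: the cube decomposes as
$$
V(Q_3)=\{v_1,\dots,v_4\}\cup\{w_1,\dots,w_4\},\qquad E(Q_3)=E(C_1)\cup E(C_2)\cup\{v_iw_i:\ i\in[4]\},
$$
where $C_1=v_1v_2v_3v_4$ and $C_2=w_1w_2w_3w_4$ are two opposite faces; that is, $Q_3$ is a pillar with $s=4$ and connecting paths of length $1$.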
We shall repeatedly apply Lemma~\ref{lem-robust-kraken} to obtain two krakens whose cycles are of the same length and then invoke Lemma~\ref{lem-link-2pulp-adj} to finish the proof. More precisely, let $t\ge 10$. For $i<\log n+1$, suppose we have already found $i$ disjoint $(k_j,m,(\log n)^{2t})$-krakens, $j\in[i]$, with each $k_j\le \log n$. Let $U$ be the union of the vertex sets of all these krakens; then, as $t\ge 10$, $|U|\leq i\cdot \log n\cdot(1+10m+(\log n)^{2t})\leq (\log n)^{4t}$. Then by Lemma~\ref{lem-robust-kraken}, we can find another kraken in $G-U$. Thus, we can find at least $\log n+1$ disjoint krakens. As the cycle in each kraken has length at most $\log n$, by the pigeonhole principle, among these krakens there are two whose cycles are of the same length $s$, for some $s\leq\log n$. Let $L$ be the set of vertices with degree at least $e^{(\log\log n)^2}$. Lemma~\ref{lem-robust-kraken} and the choice of $U$ guarantee that the legs of these two krakens not containing high degree vertices from $L$ can be taken far apart from each other in $G-L$. Thus, Lemma~\ref{lem-link-2pulp-adj} applies and we can link these two krakens to obtain the desired pillar. \end{proof} \section{Sustainable kraken fishing}\label{sec-robust-kraken} In this section, we prove Lemma~\ref{lem-robust-kraken}, which constructs a kraken in an expander robustly. Let us first describe the high-level idea of the proof. Suppose, for contradiction, that there is no kraken in $G-U$ with the required size and properties. Take a maximal collection of krakens $\mathbf{K}$ in $G-U-L$ so that their legs are far apart. Our goal is to show that we can eventually expand all the legs of one of these krakens to obtain a desired kraken, giving a contradiction. To this end, we first show that there are many krakens in $\mathbf{K}$ (see Claim~\ref{claim-size-P0}). This can be done by repeatedly applying Lemma~\ref{lem-kraken-from-kraken} to some appropriate expander subgraph in $G-U-L$. We then consider maximal collections $\mathcal{P}$ and $\mathcal{Q}$ of paths from legs of krakens in $\mathbf{K}$ to either~$L\setminus U$ or to $\mathcal{Z}$, a collection of large sets each with small diameter. We show that there cannot be a kraken in $\mathbf{K}$ with all its legs linked to paths in $\mathcal{P}\cup\mathcal{Q}$ (see Claim~\ref{claim-free-legs}), i.e., all the krakens in $\mathbf{K}$ have at least one `free' leg that is not linked to $\mathcal{P}\cup\mathcal{Q}$, as otherwise we can extend the legs of the kraken using the large sets in $\mathcal{Z}$ or the neighbourhoods of the large degree vertices in $L\setminus U$ to obtain a large kraken in $G-U$. Finally, we collectively expand all the free legs in all the krakens in $\mathbf{K}$. Then by Lemma~\ref{lem-expand-together}, one free leg of one of these krakens must expand and can therefore be linked to an unused set in $\mathcal{Z}$, contradicting the maximality of $\mathcal{Q}$. This will conclude the proof. \begin{figure} \caption{An illustration of the proof of Lemma~\ref{lem-robust-kraken}.} \end{figure} \begin{proof}[Proof of Lemma~\ref{lem-robust-kraken}] Suppose, for contradiction, that, for any $k\leq\log n$, $G-U$ contains no $(k,2m,(\log n)^{b})$-kraken with the desired properties. Set $$\ell_0=(\log\log n)^{20}, \quad \Delta=e^{(\log\log n)^2}, \quad\text{ and } \quad G':=G-L.$$ Thus, $\Delta(G')\leq \Delta$.
Further define $$U_0=\{v\in V(G)\setminus U: d_G(v,U)\geq d/2\}.$$ Note that, if $|U_0|\geq (\log n)^{6b}\geq |U|^3$, then, by Proposition~\ref{prop-Q3} with $(U,W)_{\ref{prop-Q3}}=(U_0,U)$, $G$ contains a copy of $Q_3$, a contradiction. Therefore, we can assume that $|U_0|\leq(\log n)^{6b}$, and hence, as $\delta(G)\geq d$ and $n\geq d_0(\varepsilon_1,\varepsilon_2,b)$ is large, $G-U$ contains at least $$\frac{1}{2}\sum_{v\not\in U\cup U_0}d_{G-U}(v)\ge \Big(n-|U|-|U_0|\Big)\cdot\frac{d}{4}\geq \Big(n-(\log n)^{2b}-(\log n)^{6b}\Big)\cdot\frac{d}{4}\geq \frac{nd}{8}$$ edges. Let $U_1:=U\cup U_0$; then $|U_1|\leq (\log n)^{2b}+(\log n)^{6b}\leq 2(\log n)^{6b}$. Take a maximal collection $\mathbf{K}$ of krakens in $G-U$ such that the following hold: \stepcounter{propcounter} \begin{enumerate}[label = {\bfseries \Alph{propcounter}\arabic{enumi}}] \item The legs of each kraken $\mathsf{K}=(C,u_j,F_j,P_j)\in\mathbf{K}$ that do not contain a vertex in $L$ lie in $G'$ and are at distance at least $10\ell_0$ from the legs of other krakens and from $U_1\setminus L$ in $G'$.\label{pulp1} \item We can index krakens in $\mathbf{K}$ so that each $\mathsf{K}_H\in\mathbf{K}$ is a $(k_H,m_H,(\log n_H)^{b})$-kraken, for some $n_H,m_H,k_H\in\mathbb{N}$ with $\frac{d}{64}\le n_H\le n$, $m_H=200\varepsilon_1^{-1}\log^3n_H \leq m$ and $k_H\leq\log n_H$.\label{pulp2} \end{enumerate} \begin{claim}\label{claim-size-P0} $|\mathbf{K}|\geq n^{1/8}$. \end{claim} \begin{poc} Suppose, for contradiction, that $|\mathbf{K}|< n^{1/8}$. Let~$W=\big(U_1\cup (\cup_{\mathsf{K}\in\mathbf{K}}V(\mathsf{K}))\big)\setminus~L$. Note that, for each $\mathsf{K}_H\in\mathbf{K}$, we have $$|V(\mathsf{K}_H)|\leq \sum_{j=1}^{k_H}|V(F_j)|+\sum_{j=1}^{k_H}|V(P_j)|\leq\log n_H\cdot((\log n_H)^{b}+10m_H)\leq 20(\log n_H)^{b+1},$$ which implies that $|W|\leq 2(\log n)^{6b}+n^{1/8}\cdot20(\log n)^{b+1}\leq n^{1/7}$. Let $W'=B_{G'}^{10\ell_0}(W)$. Since $\Delta(G')\leq e^{(\log\log n)^2}$ and $\Delta^{10\ell_0}=e^{10(\log\log n)^{22}}\leq n^{1/100}$ for large $n$, we have $|W'|\leq 2|W|\Delta^{10\ell_0}\leq n^{1/6}$. Thus, there are at most $|W'|\Delta\leq\Delta\cdot n^{1/6}\leq nd/16$ edges in $G$ with some vertex in~$W'$. As $G-U$ contains at least $nd/8$ edges, $G-U-W'$ contains at least $nd/16$ edges. Consequently, $d(G-U-W')\geq d/8$. Then, by Theorem~\ref{thm-pass-to-expander}, $G-U-W'$ contains a bipartite $(\varepsilon_1,\varepsilon_2d/64)$-expander~$H$ with $\delta(H)\geq d/64$. Thus, by Lemma~\ref{lem-kraken-from-kraken} with $n_{\ref{lem-kraken-from-kraken}}=n_H:=|H|$, $m_{\ref{lem-kraken-from-kraken}}=m_H=200\varepsilon_1^{-1}\log^3n_H\leq m$ and some $k_H\leq\log n_H$, there exists a $(k_H,m_H,\log^{b}n_H)$-kraken in $H$, which is at distance at least $10\ell_0$ from every other kraken in $\mathbf{K}$ and from $U_1\setminus L$ in $G'$, as $H\subseteq G-U-W'$, a contradiction to the maximality of~$\mathbf{K}$. \end{poc} Now, let $p_0=|\mathbf{K}|$ and write $\mathsf{K}_i$ for a $(k_{H_i},m_{H_i},(\log n_{H_i})^{b})$-kraken in $\mathbf{K}$, for each $i\in[p_0]$. We may assume $p_0=n^{1/8}$. Moreover, for each $i\in[p_0]$, write $c_i\leq \log n_{H_i}$ for the length of the cycle $\bar{C_i}$ in $\mathsf{K}_i$, and denote by $F_{i,j}$, $u_{i,j}$, $P_{i,j}$, $j\in [c_i]$, the $j$-th leg, its end and the corresponding path of $\mathsf{K}_i$, respectively; that is, for each $i\in[p_0]$, $\mathsf{K}_i=(\bar{C_i},u_{i,j},F_{i,j},P_{i,j})$, $j\in[c_i]$.
\begin{claim}\label{claim-balls} There exists a collection of connected sets of vertices $\mathcal{Z}:=\{Z_i:i\in[m^2]\}$ in $G'-U-V(\mathbf{K})$ such that for each $i\in[m^2]$, $Z_i$ has size $(\log n)^{100b}$ and diameter at most $m$, and is at distance at least $(\log n)^{1/10}$ from the other sets in $\mathcal{Z}$ and from $U$ in $G'$. \end{claim} \begin{poc} Take a maximal collection of sets $Z_i$, $i\in[s]$, with the claimed size, which are pairwise far apart and far from $U$ in $G'$. If $s<m^2$, then $$\big|B_{G'}^{(\log n)^{1/10}}(\cup_{i\in[s]}Z_i\cup U)\big|\leq 2\cdot \Big(s\cdot (\log n)^{100b}+(\log n)^{2b}\Big)\cdot \Delta(G')^{(\log n)^{1/10}} <\sqrt{n}.$$ Thus, by Lemma~\ref{lem-find-large-ball} with $W_{\ref{lem-find-large-ball}}=V(\mathbf{K})\cup B_{G'}^{(\log n)^{1/10}}(\cup_{i\in[s]}Z_i\cup U)$, we can find another large set with small diameter in $G'-U-V(\mathbf{K})$ far apart from $\cup_{i\in[s]}Z_i\cup U$ in $G'$, a contradiction to the maximality of $s$. \end{poc} Take now two collections of paths $\mathcal{P}$ and $\mathcal{Q}$ as follows. \stepcounter{propcounter} \begin{enumerate}[label = {\bfseries \Alph{propcounter}\arabic{enumi}}] \item\label{whatlunch1} Let $\mathcal{P}$ be a maximal collection of paths in $G-U$ from legs $V(F_{i,j})$, $j\in[c_i],i\in [p_0]$, of krakens in~$\mathbf{K}$ to $L\setminus U$ with length at most $\ell_0$ each and such that paths from the same kraken are vertex-disjoint. Subject to $|\mathcal{P}|$ being maximal, let $\ell(\mathcal{P}):=\sum_{P\in\mathcal{P}}\ell(P)$ be minimised. \item\label{whatlunch2} Let $\mathcal{Q}$ be a maximal collection of paths in $G-U$ from legs $V(F_{i,j})$, $j\in[c_i],i\in [p_0]$, of krakens in $\mathbf{K}$ that are not already linked to paths in $\mathcal{P}$ to $\mathcal{Z}$ with length at most $3m$ each and such that: \begin{itemize}\itemsep=0pt \item each $Z_i\in\mathcal{Z}$ is linked to at most one leg of the same kraken; and \item paths in $\mathcal{P}\cup\mathcal{Q}$ from the same kraken are pairwise vertex-disjoint. \end{itemize} Subject to $|\mathcal{Q}|$ being maximal, let $\ell(\mathcal{Q}):=\sum_{Q\in\mathcal{Q}}\ell(Q)$ be minimised. \end{enumerate} \begin{claim}\label{claim-free-legs} There is no kraken in $\mathbf{K}$ with all its legs linked to paths in $\mathcal{P}\cup\mathcal{Q}$. \end{claim} \begin{poc} Suppose, for contradiction, that there exists $\mathsf{K}_i\in\mathbf{K}$ that has all its legs linked to paths in $\mathcal{P}\cup\mathcal{Q}$. Let $p=|V(\mathcal{P})\cap\bar{C_i}|$ and $q=|V(\mathcal{Q})\cap\bar{C_i}|$, and note that $p+q=c_i\leq \log n$. Let $v_1,\ldots,v_{p+q}$ be the vertices of $\bar{C_i}$. Relabelling if necessary, we may assume that $v_1,\dots,v_p$ are the vertices in $\bar{C_i}$ whose corresponding legs are linked by paths in $\mathcal{P}$ to high degree vertices $u_1,\dots,u_p\in V(\mathcal{P})\cap(L\setminus U)$, and $Z_{p+1},\dots,Z_{p+q}$ are the sets in $\mathcal{Z}$ linked to $\mathsf{K}_i$ by paths in $\mathcal{Q}$. For $j\in[p+1,p+q]$, let~$u_j$ be the vertex in $Z_j\cap V(\mathcal{Q})$. Since $|Z_j|=(\log n)^{100b}$ for every $j\in[p+1,p+q]$, using Proposition~\ref{prop:trimming}, we can take connected sets $\widetilde{Z}_j\subseteq Z_j$ with $|\widetilde{Z}_j|=\log^{b}n$ and $u_j\in \widetilde{Z}_j$ such that all $\widetilde{Z}_j$ are pairwise disjoint.
On the other hand, for each $j\in[p]$, we can choose pairwise disjoint sets $Y_j\subseteq N(u_j)$ with $|Y_j|=\log^{b}n$ that are also disjoint from $\bigcup_{j'\in[p+1,p+q]}\widetilde{Z}_{j'}$, as $|N(u_j)|\geq\Delta$. For each $P_j\in\mathcal{P}$, $j\in[p]$, we can extend it to a $v_j,u_j$-path $\widetilde{P}_{i,j}\subseteq P_j\cup F_{i,j}\cup P_{i,j}$ with $|\widetilde{P}_{i,j}|\leq \ell_0+m_{H_i}+10m_{H_i}\leq 20m$. Similarly, for each $Q_j\in\mathcal{Q}$, $j\in[p+1,p+q]$, we can extend it to a $v_j,u_j$-path $\widetilde{Q}_{i,j}\subseteq Q_j\cup F_{i,j}\cup P_{i,j}$ with length $|\widetilde{Q}_{i,j}|\leq 3m+m_{H_i}+10m_{H_i}\leq 20m$. Thus, we get a kraken $\mathsf{K}_i'=(\bar{C_i},u_j,F_{i,j}',P_{i,j}')$, $j\in[c_i]$, where, for $j\in[p]$, $F_{i,j}'=G[Y_j]\cup\{u_j\}$ and $P_{i,j}'=\widetilde{P}_{i,j}$, and for $j\in[p+1,p+q]$, $F_{i,j}'=G[\widetilde{Z}_j]$ and $P_{i,j}'=\widetilde{Q}_{i,j}$, which is a $(c_i,2m,(\log n)^{b})$-kraken in $G-U$. Note that by \ref{pulp1} and the choice of $\mathcal{Z}$, $\mathsf{K}_i'$ has the desired property that the legs whose ends are not in $L$ lie completely in $G'$ and are far apart from each other and from $U_1\setminus L$ in $G'$, a contradiction to our initial assumption. \end{poc} Therefore, for each $\mathsf{K}_i$, $i\in[p_0]$, there must exist one `free' leg $F_{i,j_0}$ which is not linked to any path in $\mathcal{P}\cup\mathcal{Q}$. Note that by definition, for every $i\in[p_0]$, $F_{i,j_0}\subseteq G'$, as otherwise, if~$F_{i,j_0}$ contains some vertex $u\in L$, then we can view $\{u\}$ as a single-vertex path from $F_{i,j_0}$ to~$L$, a contradiction to the maximality of $\mathcal{P}$ in \ref{whatlunch1} and $F_{i,j_0}$ being free. Now, we shall use Lemma~\ref{lem-expand-together} to collectively expand free legs in all krakens to find one that expands well. If this succeeds, we can then link this free leg to an unused set in $\mathcal{Z}$ to reach the final contradiction to the maximality of $\mathcal{Q}$. To this end, we need to specify the sets $A_i,B_i,C_i$ to invoke Lemma~\ref{lem-expand-together}. Let $\mathcal{P}_i$ and $\mathcal{Q}_i$ be the subcollections of paths in $\mathcal{P}$ and~$\mathcal{Q}$, respectively, linked to the kraken~$\mathsf{K}_i$, and for each $i\in[p_0]$, define the sets $$A_i:=F_{i,j_0}, \quad B_i:=\bar{C_i}\cup\Big(\bigcup_{j\in[c_i]}P_{i,j}\Big)\setminus \{u_{i,j_0}\}, \quad\text{and }\quad C_i:=V(\mathcal{P}_i\cup\mathcal{Q}_i).$$ \begin{claim} There is some $\ell\in[p_0]$ for which $$|B^{\ell_0}_{G-U-B_\ell-C_\ell}(A_\ell)|\geq(\log n)^{200b}.$$ \end{claim} \begin{poc} In order to prove this claim, we have to show that conditions \emph{\ref{exptog}}--\emph{\ref{exptog5}} hold with $A_i,B_i,C_i$ as above. First, by \ref{pulp2}, $|A_i|=(\log n_{H_i})^{b}\geq(\log (d/64))^{b}$ can be taken sufficiently large by taking $d\ge d_0(\varepsilon_1,\varepsilon_2,b)$ large, thus \emph{\ref{exptog}} holds. It is clear from the choice of $A_i$ that it is disjoint from $B_i\cup C_i$ in $V(G)\setminus U$. Moreover, as $b\ge 10$, again by \ref{pulp2}, we have $$|B_i|\leq\log n_{H_i}+\log n_{H_i}\cdot10m_{H_i}\leq (\log n_{H_i})^{6}\leq\frac{|A_i|}{\log^{10}|A_i|},$$ so that \emph{\ref{exptog2}} holds.
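(To see the last inequality in the display above: by \ref{pulp2}, $|A_i|=(\log n_{H_i})^{b}$ with $b\geq 10$, so $\log^{10}|A_i|=(b\log\log n_{H_i})^{10}\leq(\log n_{H_i})^{b-6}$ once $d\geq d_0(\varepsilon_1,\varepsilon_2,b)$ is taken large enough, and therefore $|A_i|/\log^{10}|A_i|\geq(\log n_{H_i})^{6}$.)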
For \emph{\ref{exptog3}}, note that for any path $P\in\mathcal{P}_i\cup\mathcal{Q}_i$ and any $r\in\mathbb{N}$, we have $$|N_{G-U-B_i}(B_{G-U-B_i-C_i}^{r-1}(A_i))\cap V(P)|\leq r+1.$$ Indeed, if this is not the case, we can replace the initial segment of at least $r+1$ vertices in~$P$ by the length-$r$ path in $B_{G-U-B_i}(B_{G-U-B_i-C_i}^{r-1}(A_i))$ to obtain a new path, shorter than~$P$, from $A_i$ to the endvertex of $P$ in $L\setminus U$ or $\mathcal{Z}$. This contradicts the minimality of $\ell(\mathcal{P})$ or~$\ell(\mathcal{Q})$ in \ref{whatlunch1} and \ref{whatlunch2}. Thus, as $|\mathcal{P}_i\cup \mathcal{Q}_i|\le c_i\le \log n_{H_i}\le |A_i|^{1/4}$, we have that $$|N_{G-U-B_i}(B_{G-U-B_i-C_i}^{r-1}(A_i))\cap C_i|\leq\log n_{H_i}\cdot (r+1)\leq\sqrt{|A_i|}\cdot r.$$ That is, $C_i$ is $(\sqrt{|A_i|},1)$-thin around $A_i$ in $G-U-B_i$, yielding~\emph{\ref{exptog3}}. Note that there is no path with length at most $\ell_0$ from $A_i$ to $L\setminus U$ in $G-U-B_i-C_i$, as otherwise this would contradict the maximality of $\mathcal{P}$. Thus, we have that $B^{\ell_0}_{G-U-B_i-C_i}(A_i)=B^{\ell_0}_{G-L-U-B_i-C_i}(A_i)$, which, by \ref{pulp1}, is disjoint from $U_1$, and by the choice of $U_0\subseteq U_1$, we have that \emph{\ref{exptog4}} holds. Similarly, for any $j\in[p_0]\setminus\{i\}$, we have that $B^{\ell_0}_{G-U-B_j-C_j}(A_j)=B^{\ell_0}_{G-L-U-B_j-C_j}(A_j)$. So by \ref{pulp1}, $B^{\ell_0}_{G-U-B_j-C_j}(A_j)$ and $B^{\ell_0}_{G-U-B_i-C_i}(A_i)$ are disjoint. In particular, $A_i$ and $A_j$ are at distance at least $2\ell_0$ in $G-U-B_i-C_i-B_j-C_j$, and hence \emph{\ref{exptog5}} holds. Therefore, Lemma~\ref{lem-expand-together} applies and so there is some $\ell\in[p_0]$ for which $$|B^{\ell_0}_{G-U-B_\ell-C_\ell}(A_\ell)|\geq(\log n)^{200b},$$ as claimed. \end{poc} Partition $\mathcal{Z}=\mathcal{B}\cup\mathcal{B}'$, where $\mathcal{B}'$ is the subcollection of sets in $\mathcal{Z}$ that are linked through paths in~$\mathcal{Q}$ to the kraken $\mathsf{K}_\ell$, and $\mathcal{B}=\mathcal{Z}\setminus\mathcal{B}'$. We now show that there is a path of length at most $3m$ from $A_\ell$ to $\mathcal{B}$ avoiding $V(\mathcal{B}')\cup U\cup B_\ell\cup C_\ell$, which contradicts the maximality of $\mathcal{Q}$ and completes the proof. For this, we observe that $$|V(\mathcal{B}')\cup U\cup B_\ell\cup C_\ell|\leq\log n_{H_{\ell}}\cdot(\log n)^{100b}+(\log n)^{2b}+(\log n_{H_{\ell}})^{6}+\log n_{H_{\ell}}\cdot 3m\leq 2(\log n)^{100b+1}.$$ Therefore, we have that $$10|B^{\ell_0}_{G-U-B_\ell-C_\ell}(A_\ell)|\geq10(\log n)^{200b}\geq\log^3n\cdot|V(\mathcal{B}')\cup U\cup B_\ell\cup C_\ell|$$ and $$10|V(\mathcal{B})|\geq10(m^2-\log n)(\log n)^{100b}\geq\log^3n\cdot|V(\mathcal{B}')\cup U\cup B_\ell\cup C_\ell|.$$ Then applying Lemma~\ref{lem-short-diam-new} with $(A,B,W)_{\ref{lem-short-diam-new}}=(B^{\ell_0}_{G-U-B_\ell-C_\ell}(A_\ell),V(\mathcal{B}),V(\mathcal{B}')\cup U\cup B_\ell\cup C_\ell)$, we get a path from $B^{\ell_0}_{G-U-B_\ell-C_\ell}(A_\ell)$ to $\mathcal{B}$ of length at most $40\varepsilon_1^{-1}\log^3n$, which can be extended to an $A_\ell,\mathcal{B}$-path of length at most $\ell_0+40\varepsilon_1^{-1}\log^3n\leq3m$, contradicting the maximality of $\mathcal{Q}$ and completing the proof.
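(Here $\ell_0+40\varepsilon_1^{-1}\log^3n\leq3m$ indeed holds, since $m=200\varepsilon_1^{-1}\log^3n$ and $\ell_0=(\log\log n)^{20}\leq 40\varepsilon_1^{-1}\log^3 n$ once $n\geq d_0$ is large.)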
\end{proof} \section{Krakens hand in hand}\label{sec-link-2krakens} \begin{proof}[Proof of Lemma~\ref{lem-link-2pulp-adj}] Let $\mathsf{K}_\alpha,\mathsf{K}_\beta$ be as given and fix $\log^{7}n\leq \ell\leq \log^{t}n$ with $\ell\equiv\pi(v_1^{\alpha},v_1^{\beta},G)\pmod 2$. Set $\ell_0=(\log\log n)^{20}$ and $\Delta=e^{(\log\log n)^2}$. Since $G$ is bipartite, we know that $s$ is even. Without loss of generality, we may assume that $v_1^{\alpha},v_3^{\alpha},\dots,v_{s-1}^{\alpha}$, and $v_1^{\beta},v_3^{\beta},\dots,v_{s-1}^{\beta}$ lie on one side of the bipartition of $G$, while $v_2^{\alpha},v_4^{\alpha},\dots,v_s^{\alpha}$, and $v_2^{\beta},v_4^{\beta},\dots,v_s^{\beta}$ lie on the other side. In particular, $\ell$, the length of each path $Q_i$ to be constructed, is even. We call a leg of $\mathsf{K}_\alpha$ or $\mathsf{K}_\beta$ a low degree leg if it lies completely in $G-L$ and a high degree leg otherwise. We sequentially find pairwise disjoint $v_j^{\alpha},v_j^{\beta}$-paths $Q_j$ such that, for each $j\in[s]$, $Q_j$ is disjoint from $\bigcup_{i=j+1}^s V(P_{i}^\alpha\cup F_{i}^{\alpha}\cup P_{i}^\beta\cup F_{i}^{\beta})$, as follows. Suppose we have already constructed $Q_1,Q_2,\dots,Q_k$, and we want to embed $Q_{k+1}$, for some $k\in\{0,1,\dots,s-1\}$. We do so by connecting the legs $F_{k+1}^\alpha$ and $F_{k+1}^\beta$. For $\sigma\in\{\alpha,\beta\}$, if $F_{k+1}^\sigma$ is a low degree leg, then we first expand it. To be precise, we consider the following three cases. See Figure~\ref{fig-link-kraken}. \begin{figure} \caption{An illustration of the proof of Lemma~\ref{lem-link-2pulp-adj}.}\label{fig-link-kraken} \end{figure} \noindent\emph{Case 1.} If $u_{k+1}^{\sigma}\in L$, then set $$X_{k+1}^{\sigma}=N(u_{k+1}^{\sigma})\cup V(P_{k+1}^{\sigma}).$$ Note that in this case, $X_{k+1}^{\sigma}$ is a $(\Delta,10m+1)$-expansion of $v_{k+1}^{\sigma}$. Suppose now that $u_{k+1}^{\sigma}\not\in L$ and so $F_{k+1}^\sigma\subseteq G-L$. Set $$Z:=V(C_\alpha)\cup V(C_\beta)\cup \Big(\bigcup_{j=1}^sV(P_j^\alpha\cup P_j^\beta)\Big) \cup \Big(\bigcup_{i=1}^kV(Q_i)\Big).$$ We wish to expand $F_{k+1}^{\sigma}$ in $G-Z$ while avoiding all unused legs $\cup_{i=k+2}^{s}P_{i}^\alpha\cup F_i^\alpha\cup P_{i}^\beta\cup F_i^\beta$. \noindent\emph{Case 2.} Suppose that $F_{k+1}^{\sigma}$ does not intersect $L$ after expanding $\ell_0$ steps in $G-Z$, namely, that $B_{G-Z}^{\ell_0}(V(F_{k+1}^{\sigma}))$ is disjoint from $L$. By the hypothesis, all low degree legs in $\mathsf{K}_\alpha, \mathsf{K}_\beta$ are pairwise at distance at least $(\log n)^{1/10}$ apart in $G-L$, and hence we see that $B_{G-Z}^{\ell_0}(V(F_{k+1}^{\sigma}))$ is disjoint from all unused low degree legs. Since $$|Z|\leq 2s+2s\cdot 10m+s\log^{t}n\leq 2\log^{t+1}n\le \frac{1}{4}\,\varepsilon(|F_{k+1}^\sigma|)\cdot|F_{k+1}^\sigma|,$$ we can apply Proposition~\ref{prop-exp-HL} with $(X,Y,W,r)_{\ref{prop-exp-HL}}=(F_{k+1}^\sigma,Z,\varnothing,\ell_0)$ to get that $$|B_{G-Z}^{\ell_0}(V(F_{k+1}^\sigma))|\geq \Delta.$$ In this case, set $$X_{k+1}^\sigma=B_{G-Z}^{\ell_0}(V(F_{k+1}^\sigma))\cup V(P_{k+1}^{\sigma}),$$ which is a $(\Delta,11m+\ell_0)$-expansion of $v_{k+1}^{\sigma}$. \noindent\emph{Case 3.} Suppose that $B_{G-Z}^{\ell_0}(V(F_{k+1}^{\sigma}))$ intersects $L$. Let $R_{k+1}^{\sigma}$ be a shortest $u_{k+1}^{\sigma}, L$-path in $B_{G-Z}^{\ell_0}(V(F_{k+1}^{\sigma}))$, which has length at most $m+\ell_0\le 2m$.
In this case, let $w_{k+1}^{\sigma}$ be the endpoint of $R_{k+1}^{\sigma}$ in $L$ and set $$X_{k+1}^\sigma=V(R_{k+1}^{\sigma})\cup N(w_{k+1}^{\sigma})\cup V(P_{k+1}^{\sigma}),$$ which is a $(\Delta,12m+1)$-expansion of $v_{k+1}^{\sigma}$. Now, let $$\hat{Z}=\Big(\bigcup_{i=1}^kV(Q_i)\Big)\cup \left(\bigcup_{i=k+2}^{s}V(P_{i}^\alpha\cup F_{i}^\alpha\cup P_i^\beta\cup F_i^\beta)\right).$$ Note that $$|\hat{Z}|\le s\log^{t}n+2s\cdot (10m+(\log n)^{2t})\le (\log n)^{3t}.$$ Trim $X_{k+1}^\sigma$, using Proposition~\ref{prop:trimming}, down to a $((\log n)^{4t},20m)$-expansion of $v_{k+1}^{\sigma}$ disjoint from $\hat{Z}$; call it $\hat{X}_{k+1}^\sigma$. Finally, apply Lemma~\ref{lem-finalconnect} with $$(D,U,F_1,F_2,v_1,v_2)_{\ref{lem-finalconnect}}=((\log n)^{4t}, \hat{Z},\hat{X}_{k+1}^\alpha,\hat{X}_{k+1}^\beta,v_{k+1}^\alpha,v_{k+1}^{\beta})$$ to obtain the desired $v_{k+1}^{\alpha},v_{k+1}^{\beta}$-path $Q_{k+1}$ disjoint from $\bigcup_{i=k+2}^s V(P_{i}^\alpha\cup F_{i}^{\alpha}\cup P_{i}^\beta\cup F_{i}^{\beta})$. Carrying out the embeddings for all $k\in\{0,1,\dots,s-1\}$ finishes the proof. \end{proof} \end{document}
\begin{document} \title{Unstable Entropy and Unstable Pressure for Partially Hyperbolic Endomorphisms} {\footnotesize \centerline{1. School of Mathematical Sciences} \centerline{Hebei Normal University, Shijiazhuang, 050024, P.R. China} \centerline{2. Department of Applied Mathematics, College of Science} \centerline{China Agricultural University, Beijing, 100083, P.R. China} \centerline{3. School of Mathematical Sciences} \centerline{Xiamen University, Xiamen, 361005, P.R. China} } \begin{abstract} In this paper, unstable metric entropy, unstable topological entropy and unstable pressure for partially hyperbolic endomorphisms are introduced and investigated. A version of the Shannon-McMillan-Breiman Theorem is established, and a variational principle is formulated, which gives a relationship between unstable metric entropy and unstable pressure (unstable topological entropy). As an application of the variational principle, some results on the $u$-equilibrium states are given. \end{abstract} \section{Introduction} In order to describe the complexity of a dynamical system from different points of view, some invariants are introduced, among which entropy, including metric entropy and topological entropy, is a crucial one. The metric entropy gives the maximum amount of average information one can get from a system with respect to an invariant measure, while the topological entropy describes the exponential growth rate of the number of orbits. A variational principle relating them says that topological entropy is equal to the supremum of metric entropy over all invariant measures. The pressure with respect to a potential function is a generalization of topological entropy, and a variational principle relating it and metric entropy can also be formulated. Concerning smooth ergodic theory, the Lyapunov exponent can be introduced, and there is a system of results related to entropy for diffeomorphisms (the invertible case). Pesin's entropy formula relating metric entropy and Lyapunov exponents with respect to an SRB measure is established for both deterministic and random cases (cf. \cite{LedrappierYoung1985}, \cite{LiuQian1995}). Moreover, a generalized entropy formula (dimension formula) with respect to a general invariant measure is formulated (cf. \cite{LedrappierYoung1985a}). Plenty of physical processes are irreversible, so it is interesting to investigate corresponding results as in \cite{LedrappierYoung1985,LedrappierYoung1985a} for the non-invertible case, i.e., endomorphisms. However, for endomorphisms, there are some difficulties in establishing similar results. Due to non-invertibility, the preimage of a given point is usually not a single point, hence the notion of unstable manifolds is not well defined, and therefore it is a problem to formulate the SRB property. Furthermore, the non-invertibility leads to some subtle difficulties, so that it is not convenient to consider problems concerning entropies on the phase space. In order to overcome this difficulty, in \cite{Zhu1998}, Zhu introduced the inverse limit space (see Section \ref{sec:pre} for details), which makes it possible to define the unstable manifolds and borrow some ideas from the smooth ergodic theory for random dynamical systems. In \cite{QianZhu2001}, Qian and Zhu gave a necessary and sufficient condition for Pesin's entropy formula in the case of endomorphisms.
Moreover, a series of results on the ergodic theory of endomorphisms have been obtained; see \cite{QianXieZhu2009} for a complete discussion of this topic. Via the inverse limit space, the unstable manifolds can be well defined, which suggests that the unstable entropy and unstable pressure can be introduced for endomorphisms. The concept of unstable entropy was originally introduced by Hu, Hua and Wu in \cite{HuHuaWu2017} for partially hyperbolic diffeomorphisms, and it gives a description of the complexity of a system along unstable manifolds. In \cite{HuHuaWu2017}, a complete discussion is given, including the relationship between unstable metric entropy and Ledrappier and Young's entropy, a version of the Shannon-McMillan-Breiman Theorem and a variational principle relating unstable metric entropy and unstable topological entropy. It is important to point out that in \cite{HuHuaWu2017} the unstable metric entropy is given by the conditional entropy of a finite partition with respect to a measurable partition, instead of the form in Ledrappier and Young's papers \cite{LedrappierYoung1985,LedrappierYoung1985a}. The latter form of metric entropy is introduced to establish a connection with SRB measures, which is not suitable for giving an unstable version of the variational principle, while the former one makes it possible to formulate the variational principle using a classical method. However, both forms describe the same thing from different points of view (see Theorem A in \cite{HuHuaWu2017} for more details). As a generalization of unstable topological entropy, unstable pressure is defined in \cite{HuWuZhu2017}, where the so-called $u$-equilibrium states are introduced and investigated in detail. Recently, a local variational principle and a Katok-type entropy formula for unstable metric entropy were given in \cite{Wu2017} and \cite{HuangChenWang2018}, respectively. Our main purpose in this paper is to establish unstable entropy and unstable pressure for endomorphisms and to give a system of results parallel to those in \cite{HuHuaWu2017}. Inspired by the arguments in \cite{QianZhu2001,QianXieZhu2009}, we introduce the concepts via the inverse limit space. For an endomorphism $f$ on a Riemannian manifold $M$ with an $f$-invariant measure $\mu$, the inverse limit space $M^f\subset M^{\mathbb{Z}}$ is introduced, and a dynamical system $(M^f,\tau,\tilde{\mu})$ is established, where $\tau$ is the left shift operator on $M^f$ and $\tilde{\mu}$ is a $\tau$-invariant measure corresponding to $\mu$. Thanks to $(M^f,\tau,\tilde{\mu})$, we can give two types of definitions of unstable metric entropy: one is introduced in a ``pointwise'' way (see Definition \ref{def:metricentropy1}) and is denoted by $\tilde{h}^u_\mu(f)$, and the other is defined using finite partitions (see Definition \ref{def:metricentropy2}), is in the classical form, and is denoted by $h^u_\mu(f)$. We show that $\tilde{h}^u_\mu(f)$ and $h^u_\mu(f)$ are equivalent when $\tilde{\mu}$ is ergodic (Theorem \ref{thm:localvsfinite}). Then a version of the Shannon-McMillan-Breiman Theorem is established for our setting (Theorem \ref{thm:SMB}), which makes our unstable metric entropy meaningful. Again using $(M^f,\tau,\tilde{\mu})$, we define the unstable pressure and unstable topological entropy; the latter can be viewed as a special case of the former.
We also show that the variational principle for classical entropy and pressure holds in our case (Theorem \ref{thm:vp}), which makes it possible to consider the so-called $u$-equilibrium states for endomorphisms. This paper is organized as follows. In Section \ref{sec:pre}, we give some preliminaries, including the concept of partial hyperbolicity for endomorphisms, the inverse limit space and other necessary definitions for our results. At the end of this section, our main results are also stated. In Section \ref{sec:umetricentropy}, we give the precise definitions of unstable metric entropy for endomorphisms via two methods, following which Theorem \ref{thm:localvsfinite} is proved; Theorem \ref{thm:SMB} is also proved in this section. In Section \ref{sec:pressure}, definitions of unstable pressure and unstable topological entropy are introduced, and some properties of unstable pressure are listed at the end of that section. In Section \ref{sec:vp}, we prove our main result, i.e., the variational principle (Theorem \ref{thm:vp}), and as an application, the so-called $u$-equilibrium states for endomorphisms are introduced. \section{Preliminaries and statements of main results}\label{sec:pre} Throughout this paper, let $M$ be a compact $C^\infty$ Riemannian manifold without boundary endowed with metric $d(\cdot,\cdot)$ and $f:M\to M$ a $C^1$ endomorphism. Denote by $TM$ the tangent bundle of $M$ with norm $\Vert\cdot\Vert$. Both $d(\cdot,\cdot)$ and $\Vert\cdot\Vert$ are induced by the Riemannian metric. For a metric space $X$, denote by $\mathcal{B}(X)$ the Borel $\sigma$-algebra of $X$. Let $M^\mathbb{Z}$ be the infinite product space of $M$ endowed with the product topology and the metric $\tilde{d}(\tilde{x},\tilde{y})=\sum_{n=-\infty}^{\infty}2^{-|n|}d(x_n,y_n)$ for $\tilde{x}=\{x_n\}_{n=-\infty}^{\infty}$ and $\tilde{y}=\{y_n\}_{n=-\infty}^{\infty}$. In order to define unstable manifolds, we need the concept of the \textit{inverse limit space}, denoted by $M^f$, which is the subspace of $M^\mathbb{Z}$ consisting of those $\tilde{x}=\{x_n\}_{n=-\infty}^{+\infty}$ with $fx_n=x_{n+1}$ for all $n\in\mathbb{Z}$. It is clear that $M^f$ is a closed subspace of $M^{\mathbb{Z}}$. Let $\Pi\colon M^f\to M$ be the projection such that for $\tilde{x}=\{x_n\}_{n=-\infty}^{+\infty}$, $\Pi(\tilde{x})=x_0$. Let $\tau\colon M^f\to M^f$ be the left shift operator. Denote by $\mathcal{M}(f)$ the set of all $f$-invariant Borel measures on $M$, and by $\mathcal{M}(\tau)$ the set of all $\tau$-invariant measures on $M^f$. On one hand, for any $\mu\in\mathcal{M}(f)$, there is a unique $\tau$-invariant measure $\tilde{\mu}$ on $M^f$ corresponding to $\mu$ with $\Pi(\tilde{\mu})=\mu$ (see Proposition \uppercase\expandafter{\romannumeral1}.3.1 in \cite{QianXieZhu2009}); on the other hand, for any $\tilde{\mu}\in\mathcal{M}(\tau)$, $\mu\colon=\Pi(\tilde{\mu})$ is an $f$-invariant measure on $M$. In addition, $\mu$ is ergodic with respect to $f$ if and only if $\tilde{\mu}$ is ergodic with respect to $\tau$. For more details on the relationship between $\mathcal{M}(f)$ and $\mathcal{M}(\tau)$, the reader can refer to Section \uppercase\expandafter{\romannumeral1}.3 in \cite{QianXieZhu2009}. In the remainder of this paper, we always denote by $\mu$ and $\tilde{\mu}$ the measures on $M$ and $M^f$ respectively with $\Pi(\tilde{\mu})=\mu$. Consider the pull-back bundle $E=\Pi^*TM$.
The tangent map $Df$ induces a fiber-preserving map on $E$ with respect to the left shift map $\tau$, defined by $\Pi^*\circ Df\circ\Pi_*$, and still denoted by $Df$ for simplicity. Now, we give the definition of partial hyperbolicity. \begin{mdef}\rm\label{def:ph} $f$ is said to be \textit{(uniformly) partially hyperbolic} if there exists a continuous splitting of the pull-back bundle $E$ into three subbundles, i.e., $E(\tilde{x})=E^s({\tilde{x}})\oplus E^c({\tilde{x}})\oplus E^u({\tilde{x}})$ for all ${\tilde{x}}\in M^f$, and constants $\lambda_1$, $\lambda_1'$, ${\lambda_2}$, ${\lambda_2}'$ and $C$ with $0<{\lambda_1}<1<{\lambda_2}$, ${\lambda_1}<{\lambda_1}'\leq{\lambda_2}'<{\lambda_2}$ and $C>0$ such that for each $\tilde{x}\in M^f$, \begin{enumerate}[label=(\roman*)] \item {$D_{\tilde{x}}f(E^k({\tilde{x}}))= E^k(\tau({\tilde{x}}))$}, for $k=s,c,u$; \item for $v^s\in E^s({\tilde{x}})$ and $n\in\mathbb{Z}^+$, $\Vert D_{\tilde{x}}f^nv^s\Vert\leq C{\lambda_1}^n\Vert v^s\Vert$; \item for $v^c\in E^c({\tilde{x}})$ and $n\in\mathbb{Z}^+$, $C^{-1}{({\lambda_1}')}^{n}\Vert v^c\Vert\leq\Vert D_{\tilde{x}}f^nv^c\Vert\leq C({\lambda_2}')^n\Vert v^c\Vert$; \item for $v^u\in E^u({\tilde{x}})$ and $n\in\mathbb{Z}^+$, $\Vert D_{\tilde{x}}f^nv^u\Vert\geq C^{-1}{{\lambda_2}}^{n}\Vert v^u\Vert$. \end{enumerate} \end{mdef} From now on, let $(f,M,\mu)$ be a dynamical system, where $f$ is a partially hyperbolic endomorphism and $\mu$ is an $f$-invariant Borel measure. Let $\tilde{\mu}$ be the corresponding measure on $M^f$. For $\tilde{x}=\{x_n\}_{n=-\infty}^{\infty}\in M^f$ and $\epsilon>0$ small enough, define \begin{align*} W^u_\epsilon(\tilde{x},f)\colon=\{&z_0\in M\colon \text{there exists }\tilde{z}\in M^f \text{ with }\Pi(\tilde{z})=z_0,\\&d(z_{-n},x_{-n})<\epsilon\text{ for } n\in\mathbb{N} \text{ and }\limsup_{n\to\infty}\frac{1}{n}\log d(z_{-n},x_{-n})\leq-\log\lambda_2\}, \end{align*} where $\lambda_2$ is the constant in Definition \ref{def:ph}. $W^u_\epsilon(\tilde{x},f)$ is called a \textit{local unstable manifold} of $f$ at $\tilde{x}$. Now we have the following theorem, which is stated for hyperbolic endomorphisms but is still valid in our partially hyperbolic case. The reader can also refer to \cite{Przytycki1976,Ruelle1988,Young1986} for more details. \begin{thm}[Theorem \uppercase\expandafter{\romannumeral4}.2.1 in \cite{QianXieZhu2009}]\label{thm:umt} Let $f$ be a partially hyperbolic endomorphism. Then there exists a continuous family of $C^1$ embedded disks $\{D^u_{\tilde{x}}\}_{\tilde{x}\in M^f}$ in $M$ and constants $0<\lambda<1$ and $\epsilon>0$ such that \begin{enumerate}[label=(\roman*)] \item $T_{{x}_0}D^u_{\tilde{x}}=E^u({x}_0)$, for any $\tilde{x}\in M^f$; \item for any $z_0\in D^u_{\tilde{x}}$, there exists a unique $\tilde{z}\in M^f$ such that $\Pi(\tilde{z})=z_0$ and \begin{equation}\label{eq:locunstable2} d(z_{-n},x_{-n})\leq \lambda^nd(z_0,x_0), \end{equation} for $n\in\mathbb{Z}^+$; \item $D^u_{\tilde{x}}\cap B(x_0,\epsilon)=W^u_\epsilon(\tilde{x},f)$, where $B(x_0,\epsilon)=\{y\in M\colon d(y,x_0)<\epsilon\}$. \end{enumerate} \end{thm} Then we can define \[ \widetilde{W}^u_\epsilon(\tilde{x},f)\colon=\{\tilde{z}\in M^f\colon\Pi(\tilde{z})\in W^u_\epsilon(\tilde{x},f)\text{ and }\tilde{z}\text{ satisfies \eqref{eq:locunstable2}}\}.
\] Sometimes, we will use the notation $W^u_{\text{loc}}(\tilde{x},f)$ and $\widetilde{W}^u_{\text{loc}}(\tilde{x},f)$ for $W^u_{\epsilon}(\tilde{x},f)$ and $\widetilde{W}^u_{\epsilon}(\tilde{x},f)$ respectively. \begin{rmk}\rm According to Theorem \ref{thm:umt}, it is clear that \[ \Pi|_{\widetilde{W}^u_{\text{loc}}(\tilde{x},f)}\colon\widetilde{W}^u_{\text{loc}}(\tilde{x},f)\to W^u_{\text{loc}}(\tilde{x},f) \] is a bijection, which is crucial for our subsequent proofs. \end{rmk} Now we define \begin{align*} W^u(\tilde{x},f)=\{&z_0\in M\colon \text{there exists }\tilde{z}\text{ with }\Pi(\tilde{z})=z_0\\ &\text{and }\limsup_{n\to+\infty}\frac{1}{n}\log d(z_{-n},x_{-n})\leq-\log\lambda_2\}, \end{align*} and \[ \widetilde{W}^u(\tilde{x},f)=\{\tilde{z}\in M^f\colon\Pi(\tilde{z})\in W^u(\tilde{x},f)\text{ with }{\limsup_{n\to+\infty}\frac{1}{n}\log d(z_{-n},x_{-n})\leq-\log\lambda_2}\}, \] where $\lambda_2$ is the constant in Definition \ref{def:ph}. We call $W^u(\tilde{x},f)$ the \textit{global unstable set} at $\tilde{x}$. It can also be proved as in \cite{Zhu1998} that there exists a sequence of $C^1$ embedded disks $\{W_{-n}(\tilde{x})\}_{n=0}^{+\infty}$ in $M$ such that $fW_{-n}(\tilde{x})\supset W_{-(n-1)}(\tilde{x})$ for $n\in\mathbb{Z}^+$ and \[ W^u(\tilde{x},f)=\bigcup_{n=0}^{+\infty}f^nW_{-n}(\tilde{x}), \] which shows that $W^u(\tilde{x},f)$ is in fact an immersed submanifold of $M$ tangent at $\Pi(\tilde{x})$ to $E^u(\Pi(\tilde{x}))$. Then we denote the set $\{W^u(\tilde{x},f)\colon\tilde{x}\in M^f\}$ by $W^u$, which is called the $W^u$-foliation. For a measurable partition $\eta$ of $M^f$, $\eta(\tilde{x})$ denotes the element of $\eta$ containing $\tilde{x}$. Now we give some definitions related to measurable partitions. \begin{mdef}\rm A measurable partition $\eta$ of $M^f$ is said to \textit{be subordinate to} $W^u$-foliation if for $\tilde{\mu}$-a.e. $\tilde{x}$, $\eta(\tilde{x})$ has the following properties: \begin{enumerate}[label=(\roman*)] \item $\Pi|_{\eta{(\tilde{x})}}\colon\eta(\tilde{x})\to\Pi(\eta(\tilde{x}))$ is bijective; \item there exists a $k(\tilde{x})$-dimensional (where $k(\tilde{x})=\dim E^u(x_0)$) $C^1$ embedded submanifold $W_{\tilde{x}}$ of $M$ with $W_{\tilde{x}}\subset W^u(\tilde{x})$, such that $\Pi(\eta(\tilde{x}))\subset W_{\tilde{x}}$, and $\Pi(\eta(\tilde{x}))$ contains an open neighborhood of $x_0$ in $W_{\tilde{x}}$. \end{enumerate} \end{mdef} Let $\tilde{\mu}\in\mathcal{M}(\tau)$ be given. For a measurable partition $\eta$ of $M^f$, there exists a canonical system $\{\tilde{\mu}^\eta_{\tilde{x}}\}_{\tilde{x}\in M^f}$ of conditional measures of $\tilde{\mu}$ associated with $\eta$, satisfying \begin{enumerate}[label=(\roman*)] \item for every measurable set $\widetilde{B}\subset M^f$, $\tilde{x}\mapsto\tilde{\mu}^\eta_{\tilde{x}}(\widetilde{B})$ is measurable; \item $\tilde{\mu}(\widetilde{B})=\int_{M^f}\tilde{\mu}^\eta_{\tilde{x}}(\widetilde{B})\mathrm{d}\tilde{\mu}(\tilde{x})$. \end{enumerate} See e.g. \cite{Rokhlin1962} for more details. Let $\alpha$ be a measurable partition of $M^f$. The diameter of $\alpha$ is defined as follows: \[ \mathrm{diam}(\alpha)=\sup_{A\in\alpha}\mathrm{diam}(\Pi(A)), \] where for a subset $B$ of $M$, \[ \mathrm{diam}(B):=\sup_{x,y\in B}d(x,y). \] In the following, we construct a type of measurable partitions subordinate to $W^u$-foliation. Fix $\epsilon>0$.
Let $\alpha$ be a finite partition of $M^f$ with diameter less than $\epsilon$. We can construct a finer partition $\eta$ such that for each $\tilde{x}\in M^f$ \[ \eta(\tilde{x})=\alpha(\tilde{x})\cap\widetilde{W}^u_\epsilon(\tilde{x},f). \] Clearly $\eta$ is a measurable partition of $M^f$ (cf. p34 in \cite{HuHuaWu2017} for more details). In addition, by the definition of $\widetilde{W}^u_\epsilon(\tilde{x},f)$ and Theorem \ref{thm:umt}, if $\mu(\partial(\Pi(\alpha)))=0$, then $\eta$ is a measurable partition subordinate to $W^u$-foliation, where $\partial(\Pi(\alpha))=\bigcup_{A\in\alpha}\partial(\Pi (A))$ and, for $B\subset M$, $\partial B$ denotes the boundary of $B$. Let $\mathcal{P}(M^f)$ denote the set of all finite partitions with sufficiently small diameter and $\mathcal{P}^u(M^f)$ denote the set of measurable partitions of $M^f$ subordinate to $W^u$-foliation which are induced by finite partitions in $\mathcal{P}(M^f)$. In the following, we consider a special type of measurable partitions. \begin{mdef}\rm A measurable partition $\xi$ of $M^f$ is said to be \textit{increasing} if $\tau^{-1}\xi\geq\xi$. \end{mdef} Consider a measurable partition $\xi=\{A_i\}_{i\in I}$ of $M^f$. A measurable set $B$ is called a $\xi$-set if $B=\cup_{i\in I'}A_i$, where $I'\subset I$. Denote by $\mathcal{B}(\xi)$ the $\sigma$-algebra of $M^f$ consisting of all measurable $\xi$-sets. Given $\tilde{\mu}\in\mathcal{M}(\tau)$, define \[ \mathcal{B}^u\colon=\{\tilde{B}\in\mathcal{B}_{\tilde{\mu}}(M^f)\colon\tilde{x}\in \tilde{B}\text{ implies }\widetilde{W}^u(\tilde{x})\subset \tilde{B}\}, \] where $\mathcal{B}_{\tilde{\mu}}(M^f)$ is the completion of $\mathcal{B}(M^f)$ with respect to $\tilde{\mu}$. The following proposition ensures the existence of increasing measurable partitions; the reader can see Section {\uppercase\expandafter{\romannumeral9}}.2.2 in \cite{QianXieZhu2009} for details. \begin{prop}\label{prop:specialpartition} There exists a measurable partition $\xi$ of $M^f$ which has the following properties: \begin{enumerate}[label=(\roman*)] \item $\tau^{-1}\xi\geq\xi$; \item $\bigvee^\infty_{n=0}\tau^{-n}\xi$ is equal to the partition into single points; \item $\mathcal{B}(\bigwedge^\infty_{n=0}\tau^n(\xi))=\mathcal{B}^u$, $\tilde{\mu}$-$\mathrm{mod}$ $0$; \item $\xi$ is subordinate to $W^u$-foliation of $f$. \end{enumerate} \end{prop} We denote by $\mathcal{Q}^u(M^f)$ the set of all increasing measurable partitions subordinate to $W^u$-foliation as in Proposition \ref{prop:specialpartition}. Now we can introduce the unstable metric entropy along $W^u$-foliation for a partially hyperbolic endomorphism. Two types of unstable metric entropy will be given: one is defined via the average decreasing rate of the measure of Bowen balls and is denoted by $\tilde{h}^u_\mu(f)$, and the other is defined by the conditional entropy of $f$ along $W^u$-foliation and is denoted by $h^u_\mu(f)$. Both precise definitions are stated in Section \ref{sec:umetricentropy}. When $\mu$ is ergodic, it can be shown that $\tilde{h}^u_\mu(f)$ and $h^u_\mu(f)$ essentially describe the same thing from different points of view, i.e., we have the following theorem. \begin{thmm}\label{thm:localvsfinite} Let $f$ be a $C^1$ partially hyperbolic endomorphism and $\mu$ an ergodic measure of ${f}$. Then \[ \tilde{h}^u_\mu({f})={h}^u_\mu({f}).
\] \end{thmm} The classical Shannon-McMillan-Breiman Theorem expresses the metric entropy as the limit of certain conditional information functions, which interprets the metric entropy from the viewpoint of information theory. Moreover, there is a corresponding version of the Shannon-McMillan-Breiman Theorem in our case. Specifically, for two measurable partitions $\beta$ and $\gamma$ of $M^f$, the conditional information function of $\beta$ with respect to $\gamma$ for a $\tau$-invariant measure $\tilde{\mu}$ can be given, which is denoted by $I_{\tilde{\mu}}(\beta|\gamma)$. Meanwhile, the conditional entropy $h_\mu(f,\beta|\gamma)$ of $f$ for $\beta$ with respect to $\gamma$ can be introduced. Precise definitions of the above concepts are stated in Section \ref{sec:umetricentropy}. Then we have the following theorem. \begin{thmm}\label{thm:SMB} Let $f$ be a $C^1$ partially hyperbolic endomorphism, and suppose $\mu$ is an ergodic measure of ${f}$. For any $\alpha\in \mathcal{P}( M^f)$ and $\eta\in\mathcal{P}^u( M^f)$, we have \[ \lim_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\alpha^{n-1}_0|\eta)(\tilde{x})=h_\mu({f},\alpha|\eta), \] where for integers $k<j$, $\tau^{-j}\beta\vee\tau^{-(j-1)}\beta\vee\cdots\vee\tau^{-k}\beta$ is denoted by $\beta^j_k$. \end{thmm} Taking the topological structure into account, we can establish the concepts of unstable topological entropy and unstable pressure with respect to a continuous potential function $\varphi$, whose definitions will be given in Section \ref{sec:pressure}. We denote unstable topological entropy and unstable pressure by $h^u_{\text{{\rm top}}}(f)$ and $P^u(f,\varphi)$ respectively. It is natural to consider the relationship between unstable pressure and unstable metric entropy; a version of the variational principle can be formulated as follows. Denote by $C(M)$ the set of all continuous functions on $M$. \begin{thmm}\label{thm:vp} Let $f$ be a $C^1$ partially hyperbolic endomorphism and $\varphi\in C(M)$. Then we have \[ \sup_{\mu\in\mathcal{M}({f})}\left\{h^u_\mu({f})+\int_{ M}\varphi\mathrm{d}{\mu}\right\}=P^u({f},\varphi). \] \end{thmm} A direct corollary of Theorem \ref{thm:vp} is the following variational principle for unstable topological entropy. \begin{Cor}\label{coro:vp1} Let $f$ be a $C^1$ partially hyperbolic endomorphism. Then we have \[ \sup_{\mu\in\mathcal{M}({f})}\left\{h^u_\mu({f})\right\}=h_{\text{{\rm top}}}^u({f}). \] \end{Cor} \section{Unstable metric entropy for endomorphisms}\label{sec:umetricentropy} \subsection{Definitions of unstable metric entropy} In this subsection, we give the definition of unstable metric entropy for endomorphisms via two methods. The equivalence of the two definitions will be proved in the next subsection. Firstly, we give the definition by a ``pointwise'' approach. Let $d^u_{\tilde{x}}$ be the metric on $W^u(\tilde{x},f)$ induced by the Riemannian structure on $M$. Denote the $\tilde{d}^u_{n}$-Bowen ball in $\widetilde{W}^u(\tilde{x},f)$ with center $\tilde{x}$ and radius $\epsilon>0$ by $\widetilde{V}^u(f,\tilde{x},n,\epsilon)$, i.e., \[ \widetilde{V}^u(f,\tilde{x},n,\epsilon):=\{\tilde{y}\in \widetilde{W}^u(\tilde{x},f)\colon \tilde{d}^u_{n}(\tilde{y},\tilde{x})<\epsilon\}, \] where \[ \tilde{d}^u_{n}(\tilde{x},\tilde{y}):=\max_{0\leq j\leq n-1}\{ d^{u}_{\tau^j\tilde{x}}(\Pi(\tau^j\tilde{x}),\Pi(\tau^j\tilde{y}))\}.
\] \begin{mdef}\rm\label{def:metricentropy1} Given an increasing partition $\xi_u$ of $M^f$ subordinate to $W^u$-foliation, we define the \textit{unstable metric entropy} along $W^u$-foliation as follows: \[ h_\mu(f,\xi_u)=\int_{M^f}h_\mu(f,\tilde{x},\xi_u)\mathrm{d}{\tilde\mu}(\tilde{x}), \] where \[ h_\mu(f,\tilde{x},\xi_u)=\lim_{\epsilon\to 0}\limsup_{n\to\infty}-\frac{1}{n}\log{\tilde{\mu}}_{\tilde{x}}^{\xi_u}\big(\widetilde{V}^u(f,\tilde{x},n,\epsilon)\big). \] \end{mdef} It can be proved that $h_\mu(f,\tilde{x},\xi_u)$ is independent of the choice of $\xi_u$, hence we also denote $\tilde{h}^u_\mu(f)= h_\mu(f,\xi_u)$. Moreover, $h_\mu(f,\tilde{x},\xi_u)$ is $\tau$-invariant, so when $\tilde{\mu}$ is ergodic, we have $\tilde{h}^u_\mu(f)=h_\mu(f,\xi_u)=h_\mu(f,\tilde{x},\xi_u)$ for $\tilde{\mu}$-a.e. $\tilde{x}\in M^f$. The reader can refer to Section {\uppercase\expandafter{\romannumeral9}}.3 in \cite{QianXieZhu2009} for details. \begin{rmk}\rm\label{rmk:low=up} In fact, in Definition \ref{def:metricentropy1}, ``$\limsup$'' can be replaced by ``$\lim$'' and ``$\lim_{\epsilon\to 0}$'' can be dropped. Indeed, denote \begin{equation*}\label{eq:lower} \underline{h}({f},\tilde{x},\epsilon,\xi_u)=\liminf_{n\to\infty}-\frac{1}{n}\log{\tilde\mu}_{\tilde{x}}^{\xi_u}\big(\widetilde{V}^u({f},\tilde{x},n,\epsilon)\big) \end{equation*} and \begin{equation*}\label{eq:upper} \overline{h}({f},\tilde{x},\epsilon,\xi_u)=\limsup_{n\to\infty}-\frac{1}{n}\log{\tilde\mu}_{\tilde{x}}^{\xi_u}\big(\widetilde{V}^u({f},\tilde{x},n,\epsilon)\big). \end{equation*} It has been proved in Section {\uppercase\expandafter{\romannumeral9}}.3 of \cite{QianXieZhu2009} that \begin{equation*}\label{eq:low=up} \lim_{\epsilon\to0}\underline{h}({f},\tilde{x},\epsilon,\xi_u)=\lim_{\epsilon\to0}\overline{h}({f},\tilde{x},\epsilon,\xi_u). \end{equation*} Then, following the proof of Lemma 3.1 in \cite{HuHuaWu2017}, we can prove the above claim, since $f$ is uniformly expanding along $W^u$-foliation. \end{rmk} In order to give the definition of unstable metric entropy via measurable partitions, we first give some definitions concerning information functions, slightly modified in our context. Some properties concerning information functions will also be listed at the end of this subsection. \begin{mdef}\rm Let $\alpha$ and $\eta$ be two measurable partitions of $M^f$. The \textit{information function} of $\alpha$ with respect to ${\tilde\mu}$ is defined as \[ I_{{\tilde{\mu}}}(\alpha)(\tilde{x}):=-\log{\tilde{\mu}}(\alpha(\tilde{x})), \] and the \textit{entropy} of $\alpha$ with respect to ${\tilde\mu}$ is defined as \[ H_{{\tilde{\mu}}}(\alpha):=\int_{M^f}I_{{\tilde{\mu}}}(\alpha)(\tilde{x})\mathrm{d}{\tilde{\mu}}(\tilde{x})=-\int_{M^f}\log{\tilde{\mu}}(\alpha(\tilde{x}))\mathrm{d}{\tilde{\mu}}(\tilde{x}). \] The \textit{conditional information function} of $\alpha$ with respect to $\eta$ is defined as \[ I_{{\tilde{\mu}}}(\alpha|\eta)(\tilde{x}):=-\log{\tilde{\mu}}_{\tilde{x}}^\eta(\alpha(\tilde{x})), \] where $\{{\tilde{\mu}}^\eta_{\tilde{x}}\}_{\tilde{x}\in M^f}$ is a canonical system of conditional measures of ${\tilde{\mu}}$ with respect to $\eta$. Then the \textit{conditional entropy} of $\alpha$ with respect to $\eta$ is defined as \[ H_{{\tilde{\mu}}}(\alpha|\eta):=\int_{M^f}I_{{\tilde{\mu}}}(\alpha|\eta)(\tilde{x})\mathrm{d}{\tilde{\mu}}(\tilde{x})=-\int_{M^f}\log{\tilde{\mu}}_{\tilde{x}}^\eta(\alpha(\tilde{x}))\mathrm{d}{\tilde{\mu}}(\tilde{x}). \] \end{mdef} Now we can give the definition of unstable metric entropy by finite partitions.
\begin{mdef}\rm\label{def:metricentropy2} The \textit{conditional entropy} of $f$ for a finite measurable partition $\alpha$ of $M^f$ with respect to $\eta\in\mathcal{P}^u(M^f)$ is defined as \[ h_\mu(f,\alpha|\eta)=\limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\alpha^{n-1}_0|\eta). \] The \textit{conditional entropy} of $f$ with respect to $\eta$ is defined as \[ h_{\mu}(f|\eta)=\sup_{\alpha\in\mathcal{P}(M^f)}h_\mu(f,\alpha|\eta), \] and the \textit{conditional entropy} of $f$ along $W^u$-foliation is defined as \[ {h}^u_\mu(f)=\sup_{\eta\in\mathcal{P}^u(M^f)}h_{\mu}(f|\eta). \] \end{mdef} To end this subsection, we list the following lemmas, which are derived from \cite{HuHuaWu2017} with slight adaptation and will be useful for the proofs of our main results. \begin{lem}\label{lem:info} Let $\tilde\mu\in\mathcal{M}({\tau})$ and let $\alpha$, $\beta$ and $\gamma$ be measurable partitions of $ M^f$ with $H_{\tilde{\mu}}(\alpha|\gamma)$, $H_{\tilde{\mu}}(\beta|\gamma)<\infty$. \begin{enumerate}[label=(\roman*)] \item If $\alpha\leq\beta$, then $I_{\tilde{\mu}}(\alpha|\gamma)(\tilde{x})\leq I_{\tilde{\mu}}(\beta|\gamma)(\tilde{x})$ and $H_{\tilde{\mu}}(\alpha|\gamma)\leq H_{\tilde{\mu}}(\beta|\gamma)$; \item$I_{\tilde{\mu}}(\alpha\vee\beta|\gamma)(\tilde{x})=I_{\tilde{\mu}}(\alpha|\gamma)(\tilde{x})+I_{\tilde{\mu}}(\beta|\alpha\vee\gamma)(\tilde{x}) $ and $H_{\tilde{\mu}}(\alpha\vee\beta|\gamma)=H_{\tilde{\mu}}(\alpha|\gamma)+H_{\tilde{\mu}}(\beta|\alpha\vee\gamma)$; \item $H_{\tilde{\mu}}(\alpha\vee\beta|\gamma)\leq H_{\tilde{\mu}}(\alpha|\gamma)+H_{\tilde{\mu}}(\beta|\gamma);$ \item if $\beta\leq\gamma$, then $H_{\tilde{\mu}}(\alpha|\beta)\geq H_{\tilde{\mu}}(\alpha|\gamma)$. \end{enumerate} \end{lem} \begin{lem}\label{lem:info2} Let $\tilde\mu\in\mathcal{M}({\tau})$, and let $\alpha$, $\beta$ and $\gamma$ be measurable partitions of $ M^f$. \begin{enumerate}[label=(\roman*)] \item\label{lem:info2_item1} \[ I_{\tilde{\mu}}(\beta_0^{n-1}|\gamma)(\tilde{x})=I_{\tilde{\mu}}(\beta|\gamma)(\tilde{x})+\sum_{i=1}^{n-1}I_{\tilde{\mu}}(\beta|\tau^i(\beta_0^{i-1}\vee\gamma))(\tau^i(\tilde{x})), \] hence \[ H_{\tilde{\mu}}(\beta_0^{n-1}|\gamma)=H_{\tilde{\mu}}(\beta|\gamma)+\sum_{i=1}^{n-1}H_{\tilde{\mu}}(\beta|\tau^i(\beta_0^{i-1}\vee\gamma)); \] \item\label{lem:info2_item2} \begin{align*} & I_{\tilde{\mu}}(\alpha_0^{n-1}|\gamma)(\tilde{x}) \\ =& I_{\tilde{\mu}}(\alpha|\tau^{n-1}\gamma)(\tau^{n-1}(\tilde{x}))+\sum_{i=0}^{n-2}I_{\tilde{\mu}}(\alpha|\alpha_1^{n-1-i}\vee \tau^i\gamma)(\tau^i(\tilde{x})), \end{align*} hence \[ H_{\tilde{\mu}}(\alpha_0^{n-1}|\gamma)=H_{\tilde{\mu}}(\alpha|\tau^{n-1}\gamma)+\sum_{i=0}^{n-2}H_{\tilde{\mu}}(\alpha|\alpha_1^{n-1-i}\vee \tau^i\gamma). \] \end{enumerate} \end{lem} \begin{lem}\label{lem:semicontiofparti} Let $\alpha\in\mathcal{P}( M^f)$ and $\{\zeta_n\}$ be a sequence of increasing measurable partitions of $ M^f$ with $\zeta_n\nearrow\zeta$. Then for $\varphi_n(\tilde{x})=I_{\tilde{\mu}}(\alpha|\zeta_n)(\tilde{x})$, $\varphi^*:=\sup_n\varphi_n\in L^1({\tilde\mu})$. \end{lem} \begin{lem}\label{lem:contiofparti} Let $\alpha\in\mathcal{P}( M^f)$ and $\{\zeta_n\}$ be a sequence of increasing measurable {partitions of $ M^f$} with $\zeta_n\nearrow\zeta$. Then \begin{enumerate}[label=(\roman*)] \item $\lim_{n\to\infty}I_{\tilde{\mu}}(\alpha|\zeta_n)(\tilde{x})=I_{\tilde{\mu}}(\alpha|\zeta)(\tilde{x})$ for ${\tilde\mu}$-a.e. $\tilde{x}\in M^f$, and \item\label{lem:contiofpartiitem2} $\lim_{n\to\infty}H_{\tilde{\mu}}(\alpha|\zeta_n)=H_{\tilde{\mu}}(\alpha|\zeta)$.
\end{enumerate} \end{lem} \subsection{Equivalence of two definitions of unstable metric entropy} In this subsection, we give the proof of Theorem \ref{thm:localvsfinite}, that is, we prove that the two definitions of unstable metric entropy are equivalent when $\tilde{\mu}$ is ergodic. The proof involves the relationship between two types of measurable partitions, $\eta$ and $\xi$, where the former is a measurable partition subordinate to $W^u$-foliation constructed as in Section \ref{sec:pre}, and the latter is an increasing measurable partition subordinate to $W^u$-foliation as in Proposition \ref{prop:specialpartition}. \begin{potA}\rm The proof will be divided into five steps. \begin{enumerate}[fullwidth,listparindent=2em,label=\textbf{Step \arabic*.}] \item\label{step:1} In this step, we show that $h_\mu(f,\alpha|\eta)$ is independent of $\eta$ and $\alpha$. Firstly, let us show that for $\eta_1$ and $\eta_2\in\mathcal{P}^u( M^f)$, we have \[ h_{\mu}({f},\alpha|\eta_1)=h_{\mu}({f},\alpha|\eta_2). \] By Lemma \ref{lem:info}, we have \begin{align}\label{eq:compute} H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta_1)+H_{\tilde{\mu}}(\eta_2|\alpha_0^{n-1}\vee\eta_1)=& H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta_2\vee\eta_1)+H_{\tilde{\mu}}(\eta_2|\eta_1), \notag\\ H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta_2)+H_{\tilde{\mu}}(\eta_1|\alpha_0^{n-1}\vee\eta_2)=& H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta_1\vee\eta_2)+H_{\tilde{\mu}}(\eta_1|\eta_2). \end{align} By the construction of $\eta_1$ and $\eta_2$, we know that there are two finite partitions $\alpha_1$ and $\alpha_2$ such that $\eta_j(\tilde{x})=\alpha_j(\tilde{x})\cap \widetilde{W}^u_{\text{loc}}(\tilde{x})$, $j=1,2$, for all $\tilde{x}\in M^f$. Let $N_1$ and $N_2$ be the cardinalities of ${\alpha_1}$ and ${\alpha_2}$ respectively. For any $\tilde{x}\in M^f$, $\eta_1(\tilde{x})$ intersects at most $N_2$ elements of ${\alpha_2}$, hence intersects at most $N_2$ elements of $\eta_2$. Thus, we have \[ \lim_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\eta_2|\alpha_0^{n-1}\vee\eta_1)\leq\lim_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\eta_2|\eta_1)\leq\lim_{n\to\infty}\frac{1}{n}\log N_2=0. \] Similarly, we have \[ \lim_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\eta_1|\alpha_0^{n-1}\vee\eta_2)\leq \lim_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\eta_1|\eta_2)\leq\lim_{n\to\infty}\frac{1}{n}\log N_1=0. \] Hence, by \eqref{eq:compute}, we get \[ \limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta_1)=\limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta_2). \] Then we show that for any $\beta$, $\gamma\in\mathcal{P}( M^f)$, \[ \limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\beta_0^{n-1}|\eta)=\limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\gamma_0^{n-1}|\eta). \] Again, by Lemma \ref{lem:info}, we have \begin{equation}\label{eq:betagamma1} H_{\tilde{\mu}}(\beta_0^{n-1}|\eta)\leq H_{\tilde{\mu}}(\gamma_0^{n-1}|\eta)+H_{\tilde{\mu}}(\beta_0^{n-1}|\gamma_0^{n-1}\vee\eta), \end{equation} and, similarly to the proof of Lemma 2.7 (\romannumeral2) in \cite{HuHuaWu2017}, we can show that \begin{equation}\label{eq:betagamma2} \lim_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\beta_0^{n-1}|\gamma_0^{n-1}\vee\eta)=0. \end{equation} By \eqref{eq:betagamma1} and \eqref{eq:betagamma2}, we have \[ \limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\beta_0^{n-1}|\eta)\leq \limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\gamma_0^{n-1}|\eta).
\] Interchanging $\beta$ with $\gamma$, we obtain \[ \limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\beta_0^{n-1}|\eta)\geq\limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\gamma_0^{n-1}|\eta). \] \item\label{step:2} In this step, we present a construction of the increasing partition $\xi$, which will be crucial in subsequent steps. The reader can refer to Section {\uppercase\expandafter{\romannumeral9}}.2.2 in \cite{QianXieZhu2009} for more details. Given an ergodic $\tilde\mu\in\mathcal{M}({\tau})$, we can choose a set $\tilde{\Lambda}\subset M^f$, $\tilde{x}_*\in\tilde{\Lambda}$ and positive constants $\hat{\epsilon}$, $\hat{r}$ such that \[ B_{\tilde{\Lambda}}:=B_{\tilde{\Lambda}}(\tilde{x}_*,\hat{\epsilon}\hat{r}/2)=\{\tilde{x}\in M^f\colon \tilde{d}(\tilde{x},\tilde{x}_*)<\hat{\epsilon}\hat{r}/2\} \] has positive ${\tilde\mu}$-measure and the following construction of a partition $\xi_u$ satisfies Proposition \ref{prop:specialpartition}. For each $r\in[\hat{r}/2,\hat{r}]$, put \[ S_{u,r}=\bigcup_{\tilde{x}\in B_{\tilde{\Lambda}}}S_u(\tilde{x},r), \] where $S_u(\tilde{x},r)=\{\tilde{y}\in\widetilde{W}^u_{\text{loc}}(\tilde{x})|\Pi(\tilde{y})\in B(\Pi(\tilde{x}_*),r)\}$. Then we can define a partition $\hat{\xi}_{u,\tilde{x}_*}$ of $ M^f$ such that \[ (\hat{\xi}_{u,\tilde{x}_*})(\tilde{y})= \begin{cases} S_u(\tilde{x},r), & {\tilde{y}\in S_u(\tilde{x},r) \text{\ for some\ }\tilde{x}\in B_{\tilde{\Lambda}},} \\ M^f\setminus S_{u,r}, & \mbox{otherwise}. \end{cases} \] Next we can choose an appropriate $r\in[\hat{r}/2,\hat{r}]$ such that \[ \xi_u=\bigvee_{j=0}^\infty\tau^j\hat{\xi}_{u,\tilde{x}_*} \] is subordinate to $W^u$-foliation. The notation $\hat{\xi}_{u,-k}=\bigvee_{j=0}^k\tau^j\hat{\xi}_{u,\tilde{x}_*}$ will be used in the following steps. \item In this step, some facts concerning $\hat{\xi}_{u,-k}$ will be given, which are useful for the proofs of our results. \begin{fact}\label{fact:convergence} Let $\tilde\mu\in\mathcal{M}(\tau)$ be an ergodic measure. Suppose $\eta\in\mathcal{P}^u( M^f)$ is subordinate to $W^u$-foliation, and $\hat{\xi}_{u,-k}$ is the partition described in \ref{step:2}, where $k\in\mathbb{N}\cup\{\infty\}$. Then for $\tilde\mu$-almost every ${\tilde{x}}$, there exists $N=N(\tilde{x})>0$ such that for any $j>N$, we have \[ (\hat{\xi}_{u,-k-j}\vee\tau^j\eta)(\tau^j{\tilde{x}})=(\hat{\xi}_{u,-k-j})(\tau^j{\tilde{x}}). \] Hence, for any partition $\beta$ of $ M^f$ with $H_{\tilde{\mu}}(\beta|\hat{\xi}_{u,-k})<\infty$, \[ I_{\tilde{\mu}}(\beta|\hat{\xi}_{u,-k-j}\vee\tau^j\eta)(\tau^j{\tilde{x}})=I_{\tilde{\mu}}(\beta|\hat{\xi}_{u,-k-j})(\tau^j{\tilde{x}}), \] which implies that \[ \lim_{j\to\infty}H_{\tilde{\mu}}(\beta|\hat{\xi}_{u,-k-j}\vee\tau^j\eta)=H_{\tilde{\mu}}(\beta|\xi_u). \] In particular, if we take $k=\infty$, then the above two equalities become \[ I_{\tilde{\mu}}(\beta|\xi_u\vee\tau^j\eta)(\tau^j{\tilde{x}})=I_{\tilde{\mu}}(\beta|\xi_u)(\tau^j{\tilde{x}}), \] and \[ \lim_{j\to\infty}H_{\tilde{\mu}}(\beta|\xi_u\vee\tau^j\eta)=H_{\tilde{\mu}}(\beta|\xi_u). \] \end{fact} \begin{pof1}\rm Define $\widetilde{B}^u(\tilde{x},\rho)$ as follows: \[ \widetilde{B}^u(\tilde{x},\rho)\colon=\{\tilde{y}\in\widetilde{W}^u_{\text{loc}}(\tilde{x},f)\colon d^u_{\tilde{x}}(\Pi(\tilde{y}),\Pi(\tilde{x}))<\rho\}. \] Since $\eta$ is subordinate to $W^u$, for ${\tilde\mu}$-a.e. $\tilde{x}$, there exists $\rho=\rho(\tilde{x})>0$ such that $\widetilde{B}^u(\tilde{x},\rho)\subset\eta(\tilde{x})$. Since ${\tilde\mu}$ is ergodic, for ${\tilde\mu}$-a.e.
$\tilde{x}\in M^f$, there are infinitely many $n>0$ such that $\tau^n\tilde{x}\in S_{u,r}$. Take $N=N(\tilde{x})$ large enough such that \[ \tau^N\tilde{x}\in S_{u,r} \] and \[ \tau^{-N}(\hat{\xi}_{u,\tilde{x}_*}(\tau^N\tilde{x}))\subset \widetilde{B}^u(\tilde{x},\rho)\subset\eta(\tilde{x}). \] Then we have \[ {\tau^{-j}}\Big(\tau^{j-N}(\hat{\xi}_{u,{\tilde{x}_*}}({\tau^N\tilde{x}}))\Big)\subset\eta(\tilde{x}) \] for any $j\geq N$. Since \[ \hat{\xi}_{u,-k-j}=\bigvee_{l=0}^{k+j}\tau^l(\hat{\xi}_{u,{\tilde{x}_*}})\geq \tau^{j-N}(\hat{\xi}_{u,{\tilde{x}_*}}), \] so we have \[ {\tau^{-j}}\Big((\hat{\xi}_{u,-k-j})(\tau^j(\tilde{x}))\Big)\subset\eta(\tilde{x}). \] Thus we have \[ (\hat{\xi}_{u,-k-j})(\tau^j(\tilde{x}))\subset(\tau^j\eta)(\tau^j(\tilde{x})), \] which implies that \[ \left((\hat{\xi}_{u,-k-j})\vee \tau^j\eta\right)(\tau^j(\tilde{x}))=(\hat{\xi}_{u,-k-j})(\tau^j(\tilde{x})). \] This proves the first statement in Fact \ref{fact:convergence}. Then following the line of the proof for Lemma 2.11 in \cite{HuHuaWu2017}, we can prove the remaining results in Fact \ref{fact:convergence}, where Fatou's Lemma and Lemma \ref{lem:contiofparti} are needed. \end{pof1} The proof of the following fact is analogous to that in \cite{HuHuaWu2017}, the reader can refer to the proof of Lemma 2.10 in \cite{HuHuaWu2017} for more details. \begin{fact}\label{fact:approach} Suppose that $\tilde{\mu}\in\mathcal{M}(\tau)$ is an ergodic measure and $\alpha\in\mathcal{P}( M^f)$ is finite. For any $\epsilon>0$, there exists $K>0$ such that for any $k\geq K$, \[ \limsup_{n\to\infty}H_{\tilde{\mu}}(\alpha|\alpha^n_1\vee(\hat{\xi}_{u,-k})_1^n)\leq\epsilon. \] \end{fact} The following fact comes from \cite{QianXieZhu2009}. \begin{fact}[Proposition \uppercase\expandafter{\romannumeral9}.3.1 in \cite{QianXieZhu2009}]\label{fact:3} When $\tilde{\mu}\in\mathcal{M}(\tau)$ is an ergodic measure, we have \[ \tilde{h}^u_\mu(f)=H_{\tilde{\mu}}(\xi_u|\tau\xi_u). \] \end{fact} \item In this step, we prove that $h_\mu(f,\alpha|\eta)\leq\tilde{h}^u_\mu(f)$. By Lemma \ref{lem:info2} \ref{lem:info2_item1}, with $\gamma=\eta$ and $\beta=\hat{\xi}_{u,-k}$, we have for any $\eta\in\mathcal{P}^u( M^f)$, $n>0$, \begin{equation}\label{eq:step4} \frac{1}{n}H_{\tilde{\mu}}((\hat{\xi}_{u,-k})_0^{n-1}|\eta)=\frac{1}{n}H_{\tilde{\mu}}(\hat{\xi}_{u,-k}|\eta)+\frac{1}{n}\sum_{j=0}^{n-1}H_{\tilde{\mu}}(\hat{\xi}_{u,-k}|\tau\hat{\xi}_{u,-k-j+1}\vee\tau^j\eta). \end{equation} By Fact \ref{fact:convergence}, the second term of the right side of \eqref{eq:step4} converges to $H_{\tilde{\mu}}(\hat{\xi}_{u,-k}|\tau\xi_u)$ as $j\to\infty$. It is clear that each elements of $\eta$ intersects at most {$2^{k+1}$} elements of $\hat{\xi}_{u,-k}$. So we have \[ H_{\tilde{\mu}}(\hat{\xi}_{u,-k}|\eta)\leq\log{2^{k+1}}, \] which implies that \[ \lim_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\hat{\xi}_{u,-k}|\eta)=0. \] Thus we get \begin{equation}\label{eq:finitevsmeas} \lim_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}((\hat{\xi}_{u,-k})_0^{n-1}|\eta)=H_{\tilde{\mu}}(\hat{\xi}_{u,-k}|\tau\xi_u)\leq H_{\tilde{\mu}}(\xi_u|\tau\xi_u). 
\end{equation} By Lemma \ref{lem:info2} \ref{lem:info2_item2} with $\gamma=(\hat{\xi}_{u,-k})_0^{n-1}$ and the fact that \[ \tau^j(\hat{\xi}_{u,-k})_0^{n-1}=(\hat{\xi}_{u,-k-j})_0^{n-j-1}, \] we know that \begin{align*} H_{\tilde{\mu}}(\alpha_0^{n-1}|(\hat{\xi}_{u,-k})_0^{n-1}) &=H_{\tilde{\mu}}(\alpha|\hat{\xi}_{u,-n-k+1})+\sum_{j=0}^{n-2}H_{\tilde{\mu}}(\alpha|\alpha_1^{n-1-j}\vee(\hat{\xi}_{u,-k-j})_0^{n-1-j}) \\ &=H_{\tilde{\mu}}(\alpha|\hat{\xi}_{u,-n-k+1})+\sum_{j=1}^{n-1}H_{\tilde{\mu}}(\alpha|\alpha_1^{j}\vee{\hat{\xi}^j}_{u,-k-n+1+j}) \\ &\leq H_{\tilde{\mu}}(\alpha)+\sum_{j=1}^{n-1}H_{\tilde{\mu}}(\alpha|\alpha_1^j\vee(\hat{\xi}_{u,-k})_1^j). \end{align*} For any $\epsilon>0$, take $k>0$ as in Fact \ref{fact:approach}, thus we have \[ \limsup_{n\to\infty}H_{\tilde{\mu}}(\alpha|\alpha_1^{n-1}\vee(\hat{\xi}_{u,-k})_1^{n-1})\leq \epsilon. \] Then we get \begin{equation}\label{eq:finitevsmeas2} \limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\alpha_0^{n-1}|(\hat{\xi}_{u,-k})_0^{n-1})\leq\epsilon. \end{equation} By Lemma \ref{lem:info}, we have \begin{equation}\label{eq:finitevsmeas3} H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta)\leq H_{\tilde{\mu}}((\hat{\xi}_{u,-k})_0^{n-1}|\eta)+H_{\tilde{\mu}}(\alpha_0^{n-1}|(\hat{\xi}_{u,-k})^{n-1}_0). \end{equation} Thus, by \eqref{eq:finitevsmeas2}, \eqref{eq:finitevsmeas3}, then by \eqref{eq:finitevsmeas} and Fact \ref{fact:3} we have \begin{align*} h_\mu({f},\alpha|\eta) &=\limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta) \\ &\leq \lim_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}((\hat{\xi}_{u,-k})_0^{n-1}|\eta)+\epsilon \\ &{\leq H_{\tilde{\mu}}(\xi_u|\tau\xi_u)+\epsilon} \\ &=\tilde{h}^u_\mu(f)+\epsilon. \end{align*} Let $\epsilon\to0$, we have $h_\mu({f},\alpha|\eta)\leq \tilde{h}^u_\mu(f).$ \item In this step, we complete the proof of Theorem \ref{thm:localvsfinite}. By a similar treatment in the proof of Proposition 2.13 in \cite{HuHuaWu2017}, we can construct an increasing measurable partition $\tilde{\xi}$ satisfying Proposition \ref{prop:specialpartition} with diameter bounded above. And we know that $h_\mu({f},\tilde{\xi})=h_\mu({f},\xi_u)$. So we only need to prove $h_\mu(f,\alpha|\eta)\leq\tilde{h}^u_\mu(f)$ for $\tilde{\xi}$. We can choose a sequence of partitions $\alpha_n\in\mathcal{P}( M^f)$ such that \[ \mathcal{B}(\alpha_n)\nearrow\mathcal{B}(\tau^{-1}\tilde{\xi})\text{ as }n\to\infty, \] which implies \[ \lim_{n\to\infty}H_{\tilde{\mu}}(\alpha_n|\tilde{\xi})=H_{\tilde{\mu}}(\tau^{-1}\tilde{\xi}|\tilde{\xi}). \] Thus, we have \[ \sup_{\alpha\in\mathcal{P}( M^f),\alpha<\tau^{-1}\tilde{\xi}}H_{\tilde{\mu}}(\alpha|\tilde{\xi})=H_{\tilde{\mu}}(\tau^{-1}\tilde{\xi}|\tilde{\xi}). \] For any $\alpha\in\mathcal{P}( M^f)$ with $\alpha<\tau^{-1}\tilde{\xi}$, we have that for any $j>0$, $\tau^j\alpha^{j-1}_0<\tau^j(\tau^{-1}\tilde{\xi})^{j-1}_0=\tilde{\xi}$. Then by Lemma \ref{lem:info2} \ref{lem:info2_item1}, we have \begin{align*} H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta) &= H_{\tilde{\mu}}(\alpha|\eta)+\sum_{j=1}^{n-1}H_{\tilde{\mu}}(\alpha|\tau^j(\alpha_0^{j-1}\vee\eta)) \\ &\geq H_{\tilde{\mu}}(\alpha|\eta)+\sum_{j=1}^{n-1}H_{\tilde{\mu}}(\alpha|\tilde{\xi}\vee\tau^j\eta). \end{align*} Then by Fact \ref{fact:convergence} we have \[ \lim_{j\to\infty}H_{\tilde{\mu}}(\alpha|\tilde{\xi}\vee\tau^j\eta)=H_{\tilde{\mu}}(\alpha|\tilde{\xi}), \] which implies that \[ \limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta)\geq\liminf_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta)\geq H_{\tilde{\mu}}(\alpha|\tilde{\xi}). 
\] So we have \begin{align*} \sup_{\alpha\in\mathcal{P}( M^f)}h_\mu({f},\alpha|\eta) &\geq\sup_{\alpha\in\mathcal{P}( M^f),\alpha<\tau^{-1}\tilde{\xi}}h_\mu({f},\alpha|\eta) \\ &= \sup_{\alpha\in\mathcal{P}( M^f),\alpha<\tau^{-1}\tilde{\xi}}\limsup_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta) \\ &\geq \sup_{\alpha\in\mathcal{P}( M^f),\alpha<\tau^{-1}\tilde{\xi}}\liminf_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\alpha_0^{n-1}|\eta) \\ &\geq \sup_{\alpha\in\mathcal{P}( M^f),\alpha<\tau^{-1}\tilde{\xi}}H_{\tilde{\mu}}(\alpha|\tilde{\xi}) \\ &=H_{\tilde{\mu}}(\tau^{-1}\tilde{\xi}|\tilde{\xi}). \end{align*} By the statement in \ref{step:1}, $h_\mu({f},\alpha|\eta)$ is independent of $\alpha$, meaning \[ h_\mu({f},\alpha|\eta)=\sup_{\beta\in\mathcal{P}( M^f)}h_\mu({f},\beta|\eta) \] for any $\alpha\in\mathcal{P}( M^f)$. This finishes the proof of Theorem \ref{thm:localvsfinite}. \end{enumerate} \end{potA} A corollary can be obtained directly as follows. \begin{cor} Suppose that $\tilde\mu\in\mathcal{M}(\tau)$ is ergodic, then for any $\alpha\in\mathcal{P}( M^f)$ and $\eta\in\mathcal{P}^u( M^f)$, we have \[ {h}_\mu^u({f})=h_\mu({f},\alpha|\eta)=\lim_{n\to\infty}\frac{1}{n}H_{\tilde{\mu}}(\alpha^{n-1}_0|\eta). \] \end{cor} \subsection{Shannon-McMillan-Breiman Theorem for unstable metric entropy} In this subsection, we give the proof of Theorem \ref{thm:SMB}. We always assume that $\tilde{\mu}\in\mathcal{M}(\tau)$ is ergodic. Firstly, we need some lemmas. \begin{lem}\label{lem:SMBlow} Let $\alpha\in\mathcal{P}( M^f)$, $\eta\in\mathcal{P}^u( M^f)$. Then for any $\xi\in\mathcal{Q}^u( M^f)$, we have \[ {h_\mu({f},\alpha|\eta)}\leq\liminf_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\alpha_0^{n-1}|\xi)(\tilde{x})\quad\text{ for }{\tilde\mu}\text{-a.e. }\tilde{x}. \] \end{lem} \begin{pf}\rm For any $\epsilon>0$, there exists $k>0$ such that $\mathrm{diam}(\alpha^k_0\vee\xi)\leq\epsilon$. Then for $n>0$, we have \[ (\alpha_0^{k+n-1}\vee\xi)(\tilde{x})=\bigvee_{j=0}^{n-1}( {\tau^{-j}}\alpha^k_0\vee\xi)(\tilde{x})\subset \widetilde{V}^u({f},\tilde{x},n,\epsilon). \] By Theorem \ref{thm:localvsfinite} and Remark \ref{rmk:low=up} we know that for ${\tilde\mu}$-a.e. $\tilde{x}$, \begin{align*} h_\mu({f},\alpha|\eta)&=h_\mu({f},\tilde{x},\xi)\\ & = \lim_{n\to\infty}-\frac{1}{n}\log{\mu}_{\tilde{x}}^{\xi}\widetilde{V}^u({f},\tilde{x},n,\epsilon) \\ &\leq \liminf_{n\to\infty}-\frac{1}{n}\log{\mu}_{\tilde{x}}^{\xi}((\alpha^{k+n-1}_0)(\tilde{x})) \\ &= \liminf_{n\to\infty}-\frac{1}{n}\log{\mu}_{\tilde{x}}^{\xi}((\alpha^{n-1}_0)(\tilde{x})) \\ &= \liminf_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\alpha^{n-1}_0|\xi)(\tilde{x}). \end{align*} \end{pf} The following lemmas are counterparts of those in \cite{HuHuaWu2017}, which are completely parallel to the treatment in \cite{HuHuaWu2017}, so we omit the proofs. \begin{lem}[Lemma 3.4 in \cite{HuHuaWu2017}]\label{lem:SMBlow2} Let $\eta\in\mathcal{P}^u( M^f)$ and $\xi\in\mathcal{Q}^u( M^f)$. Then for ${\tilde\mu}$-a.e. $\tilde{x}$, we have \[ \liminf_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\alpha_0^{n-1}|\xi)(\tilde{x})=\liminf_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\alpha_0^{n-1}|\eta)(\tilde{x}), \] \[ \limsup_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\alpha_0^{n-1}|\xi)(\tilde{x})=\limsup_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\alpha_0^{n-1}|\eta)(\tilde{x}). 
\] \end{lem} \begin{lem}[Lemma 3.7 in \cite{HuHuaWu2017}]\label{lem:SMBup} For any $\eta\in\mathcal{P}^u( M^f)$ and $\xi\in\mathcal{Q}^u( M^f)$, we have \[ \lim_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\tau^{-n}\xi|\eta)(\tilde{x})=\lim_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\tau^{-n}\xi|\xi)(\tilde{x})=h_\mu({f},\tilde{x},\xi). \] \end{lem} \begin{lem}[Lemma 3.8 in \cite{HuHuaWu2017}])\label{lem:SMBup2} Let $\alpha\in\mathcal{P}( M^f)$, $\eta\in\mathcal{P}^u( M^f)$. Then for ${\tilde\mu}$-a.e. $\tilde{x}$, we have \[ \lim_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\alpha^{n-1}_0|\xi^{n-1}_0\vee\eta)(\tilde{x})=0. \] \end{lem} Now, we begin to prove Theorem \ref{thm:SMB}. \begin{potB}\rm By Lemma \ref{lem:SMBlow} and Lemma \ref{lem:SMBlow2} we can get directly \begin{equation}\label{eq:SMBlow} h_\mu({f},\alpha|\eta)\leq\liminf_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\alpha_0^{n-1}|\eta)(\tilde{x}). \end{equation} By Lemma \ref{lem:info}, we have \begin{align*} I_{\tilde{\mu}}(\alpha^{n-1}_0|\eta)(\tilde{x}) \leq& I_{\tilde{\mu}}(\alpha^{n-1}_0\vee\xi^{n-1}_0|\eta)(\tilde{x}) \\ =& I_{\tilde{\mu}}(\xi^{n-1}_0|\eta)(\tilde{x})+I_{\tilde{\mu}}(\alpha^{n-1}_0|\xi^{n-1}_0\vee\eta)(\tilde{x}). \end{align*} Then by Lemma \ref{lem:SMBup2}, Lemma \ref{lem:SMBup}, and Theorem \ref{thm:localvsfinite}, we have \begin{align}\label{eq:SMBup} \limsup_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\alpha^{n-1}_0|\eta)(\tilde{x}) &\leq\limsup_{n\to\infty}\frac{1}{n}I_{\tilde{\mu}}(\xi^{n-1}_0|\eta)(\tilde{x}) \notag \\ &= h_\mu^u({f})=h_\mu({f},\alpha|\eta). \end{align} Combining \eqref{eq:SMBlow} and \eqref{eq:SMBup}, we complete the proof of Theorem \ref{thm:SMB}. \end{potB} \section{Unstable topological entropy and unstable pressure for endomorphisms}\label{sec:pressure} In this section, we give the definition of unstable topological entropy and unstable pressure for a potential function $\varphi\in C(M)$ for endomorphisms. Similar to the classical pressure, there are several ways to define unstable pressure. Here we use $W^u$-separated sets. Fix $\delta>0$, for $\tilde{x}\in M^f$, Let $\overline{W^u(\tilde{x},\delta)}$ be the $\delta$-neighborhood of $x_0$ in $W^u(\tilde{x},f)$. A subset $E$ of $\overline{W^u(\tilde{x},\delta)}$ is called an $(n,\epsilon)$ {$W^u$-separated set} if for any $y_1,y_2\in E$, we have $d^u_{n}(y_1,y_2)>\epsilon$, where $ d^u_{n}(y_1,y_2)$ is defined by {\[ d^u_{n}(y_1,y_2):=\max_{0\leq j\leq n-1}\{ d^u_{\tau^j\tilde{x}}(f^j(y_1),f^j(y_2))\}. \]} Recall that $d^u_{\tilde{x}}$ is the metric on $W^u(\tilde{x},f)$ induced by the Riemannian structure on $M$. Now we can define $P^u(f,\varphi,\tilde{x},\delta,n,\epsilon)$ as follows, \begin{align*} P^u(f,\varphi,\tilde{x},\delta,n,\epsilon)=\sup & \Big\{\sum_{y\in E}\exp((S_n\varphi)(y)):\\ & \text{ } E\text{ is an }(n,\epsilon)\ W^u\text{-separated set of }\overline{W^u(\tilde{x},\delta)}\Big\}, \end{align*} {where $(S_n\varphi)(y)=\sum_{j=0}^{n-1}\varphi(f^j(y))$.} Then $P^u(f,\varphi,\tilde{x},\delta)$ is defined as \[ P^u(f,\varphi,\tilde{x},\delta)=\lim_{\epsilon\to 0}\limsup_{n\to\infty}\frac{1}{n}\log P^u(f,\varphi,\tilde{x},\delta,n,\epsilon). \] Next, we define \[ P^u(f,\varphi,\delta)=\sup_{\tilde{x}\in M^f}P^u(f,\varphi,\tilde{x},\delta) \] Let $\tilde{\varphi}(\tilde{x})=\varphi(\Pi(\tilde{x}))$. It is easy to check that \[ \int_{M^f}\tilde{\varphi}\mathrm{d}\tilde{\mu}=\int_M\varphi\mathrm{d}\mu. 
\] Denote \[ \textcolor[rgb]{0.00,0.00,0.00}{\overline{\widetilde{W}^u(\tilde{x},\delta)}=\{\tilde{y}\in M^f\colon\Pi(\tilde{y})\in \overline{W^u(\tilde{x},\delta)}\text{ and satisfies \eqref{eq:locunstable2}}\}.} \] A subset $\widetilde{E}$ of $\overline{\widetilde{W}^u(\tilde{x},\delta)}$ is called an $(n,\epsilon)$ $W^u$-separated set if for any $\tilde{y}_1$, $\tilde{y}_2\in\widetilde{E}$, we have \[ \tilde{d}^u_{n}(\tilde{y}_1,\tilde{y}_2)>\epsilon. \] Then we can define \begin{align*} \widetilde{P}(\tau,{\tilde{\varphi}},\tilde{x},\delta,n,\epsilon)\colon=\sup & \Big\{\sum_{\tilde{y}\in \widetilde{E}}\exp((\tilde{S}_n\tilde{\varphi})(\tilde{y})):\\ & \text{ } \widetilde{E}\text{ is an }(n,\epsilon)\ W^u\text{-separated set of }\overline{\widetilde{W}^u(\tilde{x},\delta)}\Big\}, \end{align*} {where $(\tilde{S}_n\varphi)(\tilde{y})=\sum_{j=0}^{n-1}\tilde\varphi(\tau^j(\tilde{y}))$.} It is clear that for an $(n,\delta)$ $W^u$-separated set $\widetilde{E}$ of $\overline{\widetilde{W}^u(\tilde{x},\delta)}$, there is an $(n,\delta)$ $W^u$-separated set $E$ with the same cardinality as $\widetilde{E}$, and vice versa. Then noticing that $\varphi(f^j(\Pi(\tilde{x})))=\tilde{\varphi}(\tau^j(\tilde{x}))$ we have $\widetilde{P}(\tau,{\tilde{\varphi}},\tilde{x},\delta,n,\epsilon)=P^u(f,\varphi,\tilde{x},\delta,n,\epsilon)$. Then $\widetilde{P}(\tau,\tilde{\varphi},\tilde{x},\delta)$ and $\widetilde{P}(\tau,\tilde{\varphi},\delta)$ can be formulated similarly. Finally, we can give the definition of unstable pressure for $f$. \begin{mdef}\rm The \textit{unstable pressure} for $f$ is defined as \[ P^u(f,\varphi)=\lim_{\delta\to 0}P^u(f,\varphi,\delta)=\lim_{\delta\to 0}\widetilde{P}(\tau,\tilde{\varphi},\delta). \] \end{mdef} We can prove that $P^u(f,\varphi)$ \textit{is independent of $\delta>0$}. Indeed, notice that for given $\delta_1<\delta$ and $\tilde{x}\in M^f$, there exists a positive number $N=N(\delta,\delta_1)$ depending on the Riemannian structure on $\overline{\widetilde{W}^u(\tilde{x},\delta)}$ such that \[ \overline{\widetilde{W}^u(\tilde{x},\delta)}\subset\bigcup_{j=1}^{N}\overline{\widetilde{W}^u(\tilde{y}_j,\delta_1)} \] for some $\tilde{y}_j\in\overline{\widetilde{W}^u(\tilde{x},\delta)}$, $j=1,2,\cdots,N$. Then following the calculation in the proof of Lemma 4.1 in \cite{HuHuaWu2017}, we can prove that $P^u({f},\varphi,\delta)\leq P^u({f},\varphi)$, and it is clear that $P^u({f},\varphi,\delta)\geq P^u({f},\varphi)$, which means $P^u(f,\varphi)$ {is independent of $\delta$}. \begin{mdef}\rm The \textit{unstable topological entropy} of $f$ is defined as \[ h^u_{\text{{\rm top}}}(f)=P^u(f,0). \] \end{mdef} The following proposition can be obtained directly from the definitions. \begin{prop} For any $\varphi$, $\psi\in C(M)$ and constant $c\in \mathbb{R}$, the following properties hold. \begin{enumerate}[label=(\roman*)] \item If $\varphi\leq \psi$, then $P^u(f ,\varphi)\leq P^u(f ,\psi)$; \item $P^u(f ,\varphi+c)=P^u(f ,\varphi)+c$; \item {$h_{\text{{\rm top}}}^u(f)+\inf \varphi \leq P^u(f, \varphi) \leq h_{\text{{\rm top}}}^u(f)+\sup\varphi$;} \item {if $P^u(f ,\cdot)<\infty$, $|P^u(f, \varphi)-P^u(f, \psi)|\leq \|\varphi-\psi\|$;} \item {if $P^u(f ,\cdot)<\infty$}, then the map $P^u(f ,\cdot)\colon C(M)\to\mathbb{R}\cup\{\infty\}$ is convex; \item $P^u(f ,\varphi+\psi\circ f-\psi)=P^u(f ,\varphi)$; \item $P^u(f ,\varphi+\psi)\leq P^u(f ,\varphi)+P^u(f ,\psi)$. 
\end{enumerate} \end{prop} \section{Variational principle}\label{sec:vp} \subsection{Variational principle for unstable pressure} In this subsection, we prove our main result of this paper, i.e. Theorem \ref{thm:vp}, whose proof consists of two parts. \begin{potC}\rm Let $\varphi\in C(M)$. \begin{enumerate}[fullwidth,listparindent=2em,label=\textbf{Part \Roman*.}] \item In this part, we prove that For $\mu\in\mathcal{M}(f)$, \[ h^u_\mu(f)+\int_{M}\varphi \mathrm{d}{\mu}\leq P^u(f,\varphi). \] Firstly, we give a useful lemma from \cite{HuWuZhu2017}. \begin{lem}\label{lem:wellknown} Suppose $0\leq p_1$, $\cdots$, $p_m\leq1$, {$s=p_1+\cdots+p_m$} and $a_1$, $\cdots$, $a_m\in\mathbb{R}$. Then \[ \sum_{i=1}^{m}p_i(a_i-\log p_i)\leq s\left(\log\sum_{i=1}^{m}e^{a_i}-\log s\right). \] \end{lem} The following two lemmas are also important, whose proofs are analogous to those in \cite{HuHuaWu2017}. \begin{lem}[Proposition 2.14 in \cite{HuHuaWu2017}]\label{lem:affine} For any $\alpha\in\mathcal{P}( M^f)$ and $\eta\in\mathcal{P}^u( M^f)$, the map ${{\tilde{\mu}}}\mapsto H_{{{\tilde{\mu}}}}(\alpha|\eta)$ from $\mathcal{M}({\tau})$ to $\mathbb{R}^+\cup \{0\}$ is concave. Moreover, the map ${\tilde{\mu}}\mapsto h_{{\mu}}^u({f})$ from $\mathcal{M}({\tau})$ to $\mathbb{R}^+\cup \{0\}$ is affine. \end{lem} \begin{lem}[Proposition 2.15 in \cite{HuHuaWu2017}]\label{lem:semiconti} Let ${\tilde{\mu}}\in\mathcal{M}({\tau})$ and {$\eta\in \mathcal{P}^u( M^f)$.} Assume that there exists a sequence of partitions $\{\beta_n\}_{n=1}^{\infty}\subset\mathcal{P}( M^f)$ such that $\beta_1<\beta_2<\cdots<\beta_n<\cdots$ and {$\mathcal{B}(\beta_n)\nearrow\mathcal{B}(\eta)$}, {and moreover, ${{{\mu}}}(\partial(\Pi(\beta_n)))=0$, for $n=1,2,\cdots$. Let $\alpha\in\mathcal{P}( M^f)$ satisfy ${{{\mu}}}(\partial(\Pi(\alpha)))=0$}. Then the function ${{\tilde{\mu}}}'\mapsto H_{{{\tilde{\mu}}}'}(\alpha|\eta)$ is upper semi-continuous at ${{\tilde{\mu}}}$, i.e., \[ \limsup_{{{\tilde{\mu}}}'\to{{\tilde{\mu}}}}H_{{{\tilde{\mu}}}'}(\alpha|\eta)\leq H_{{{\tilde{\mu}}}}(\alpha|\eta). \] Moreover, the function ${\tilde{\mu}}'\mapsto h_{{{\mu}}'}^u({f})$ is upper semi-continuous at ${\tilde{\mu}}$, i.e., \[ \limsup_{{{\tilde{\mu}}}'\to{\tilde{\mu}}}h_{{{\mu}}'}^u({f})\leq h_{{{\mu}}}^u({f}). \] \end{lem} By the definition of unstable pressure and $\tilde{\varphi}$, we only need to prove that \begin{equation}\label{eq:vpleq} h^u_\mu(f)+\int_{M^f}\tilde{\varphi}\mathrm{d}\tilde{\mu}\leq P^u(f,\varphi). \end{equation} Let ${\tilde{\mu}}=\int_{\mathcal{M}^e({\tau})}{\tilde{\nu}} \mathrm{d}m({\tilde{\nu}})$ be the unique ergodic decomposition where ${\mathcal{M}^e({\tau})}$ is the set of ergodic measures in ${\mathcal{M}({\tau})}$ and $m$ is a Borel probability measure such that $m({\mathcal{M}^e({\tau})})=1$. Since ${\tilde{\mu}} \mapsto h_{{\mu}}^u({f})$ is affine and upper semi-continuous by Lemma \ref{lem:affine} and \ref{lem:semiconti}, so is ${\tilde{\mu}} \mapsto h_{{\mu}}^u({f})+\int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\mu}}}$ and hence \begin{equation*}\label{e:ergodicdecom} h_{{\mu}}^u({f})+\int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\mu}}}=\int_{\mathcal{M}^e({\tau})}\Big(h_\nu^u({f})+\int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\nu}}}\Big) \mathrm{d}m({\tilde{\nu}}) \end{equation*} So we only need to prove \eqref{eq:vpleq} for ergodic measures. We assume ${\tilde{\mu}}$ is ergodic. 
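Indeed, once \eqref{eq:vpleq} is known for every ergodic $\tilde{\nu}\in\mathcal{M}^e({\tau})$, the above ergodic decomposition yields
\[
h_{{\mu}}^u({f})+\int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\mu}}}\leq\int_{\mathcal{M}^e({\tau})}P^u(f,\varphi) \mathrm{d}m({\tilde{\nu}})=P^u(f,\varphi),
\]
which is \eqref{eq:vpleq} for general $\tilde{\mu}$.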
Let $\xi\in \mathcal{Q}^u( M^f)$, Then we can pick $\tilde{x}\in M^f$ satisfying \begin{enumerate}[label=(\roman*)] \item ${{\tilde{\mu}}}_{\tilde{x}}^\xi(\xi{(\tilde{x})})=1$; \item\label{prp:item2} there exists ${\widetilde{B}}\subset\xi{(\tilde{x})}$ such that \begin{enumerate}[label=(\alph*)] \item ${{\tilde{\mu}}}_{\tilde{x}}^\xi({\widetilde{B}})=1$; \item $h_{{\mu}}({f},\xi)=h_{{\mu}}({f},{\tilde{y}},\xi)= \lim_{n\to\infty}-\frac{1}{n}\log{{\tilde{\mu}}}^\xi_{\tilde{y}}(\widetilde{V}^u({f},{\tilde{y}},n,\epsilon))$ for any $\tilde{y}\in {\widetilde{B}}$ and $\epsilon>0$, according to Remark \ref{rmk:low=up}; \item $\lim_{n\to\infty}\frac{1}{n}(\widetilde{S}_n\tilde{\varphi})({\tilde{y}})=\int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\mu}}}$ for any $\tilde{y}\in {\widetilde{B}}$, which can be obtained using the Birkhoff ergodic theorem on $( M^f,\tau)$. \end{enumerate} \end{enumerate} Fix $\rho>0$. By property \ref{prp:item2} we know that for any $\tilde{y}\in {\widetilde{B}}$, there exists $N(\tilde{y})=N(\tilde{y},\epsilon)>0$ such that if $n\geq N(\tilde{y})$ then we have \[ {{\tilde{\mu}}}^\xi_{\tilde{y}}(\widetilde{V}^u({f},{\tilde{y}},n,\epsilon))\leq e^{-n(h_{{\mu}}({f},\xi)-\rho)} \] and \begin{equation}\label{eq:est2} \frac{1}{n}(\widetilde{S}_n\tilde{\varphi})({\tilde{y}})\geq\int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\mu}}}-\rho. \end{equation} Denote ${\widetilde{B}}_n=\{\tilde{y}\in {\widetilde{B}}\colon N(\tilde{y})\leq n\}$. Then ${\widetilde{B}}=\bigcup_{n=1}^\infty {\widetilde{B}}_n$. So we can choose $n>0$ such that ${{\tilde{\mu}}}^\xi_{\tilde{x}}({\widetilde{B}}_n)>{{\tilde{\mu}}}^\xi_{\tilde{x}}({\widetilde{B}})-\rho=1-\rho$. If $\tilde{y}\in {\widetilde{B}}_n\subset\xi{(\tilde{x})}$, then ${{\tilde{\mu}}}^\xi_{\tilde{y}}={{\tilde{\mu}}}^\xi_{\tilde{x}}$. So for any $\tilde{y}\in {\widetilde{B}}_n$ we have \begin{equation}\label{eq:est3} {{\tilde{\mu}}}^\xi_{\tilde{x}}(\widetilde{V}^u({f},{\tilde{y}},n,\epsilon))\leq e^{-n(h_{{\mu}}({f},\xi)-\rho)}. \end{equation} Now we can choose $\delta>0$ such that $\widetilde{W}^u(\tilde{x},\delta)\supset\xi{(\tilde{x})}$. Let ${\widetilde{F}}$ be an {$(n,\epsilon/2)$ $W^u$-spanning set} of $\overline{\widetilde{W}^u(\tilde{x},\delta)}\cap {\widetilde{B}}_n$ (i.e. for any $\tilde{z}\in\overline{\widetilde{W}^u(\tilde{x},\delta)}\cap {\widetilde{B}}_n$, there is $\tilde{y}\in {\widetilde{F}}$ such that $\tilde{d}^u_n(\tilde{y},\tilde{z})<\epsilon/2$.) satisfying \[ \overline{\widetilde{W}^u(\tilde{x},\delta)}\cap {\widetilde{B}}_n\subset\bigcup_{\tilde{z}\in {\widetilde{F}}}\widetilde{V}^u({f},\tilde{z},n,\epsilon/2), \] and $\widetilde{V}^u({f},{\tilde{z}},n,\epsilon/2)\cap {\widetilde{B}}_n\neq\emptyset$ for any $\tilde{z}\in {\widetilde{F}}$. Then choose an arbitrary point in $\widetilde{V}^u({f},{\tilde{z}},n,\epsilon/2)\cap {\widetilde{B}}_n$, which is denoted by $\tilde{y}(\tilde{z})$. Then we have \begin{align}\label{eq:est3.5} 1-\rho &< {{\tilde{\mu}}}^\xi_{\tilde{x}}(\overline{\widetilde{W}^u(\tilde{x},\delta)}\cap {\widetilde{B}}_n)\notag\\ &\leq {{\tilde{\mu}}}^\xi_{\tilde{x}}(\bigcup_{\tilde{z}\in {\widetilde{F}}}\widetilde{V}^u({f},{\tilde{z}},n,\epsilon/2)) \notag\\ &\leq \sum_{\tilde{z}\in {\widetilde{F}}}{{\tilde{\mu}}}^\xi_{\tilde{x}}(\widetilde{V}^u({f},{\tilde{z}},n,\epsilon/2)) \notag\\ & \leq\sum_{\tilde{z}\in {\widetilde{F}}}{{\tilde{\mu}}}^\xi_{\tilde{x}}(\widetilde{V}^u({f},\tilde{y}(\tilde{z}),n,\epsilon)). 
\end{align} Using \eqref{eq:est2}, \eqref{eq:est3} and Lemma \ref{lem:wellknown} with \[ p_i={{\tilde{\mu}}}^\xi_{\tilde{x}}(\widetilde{V}^u({f},\tilde{y}(\tilde{z}),n,\epsilon))\text{ and }a_i=(\widetilde{S}_n\tilde{\varphi})(\tilde{y}(\tilde{z})), \] we have \begin{align*} &\sum_{\tilde{z}\in {\widetilde{F}}}{{\tilde{\mu}}}^\xi_{\tilde{x}}(\widetilde{V}^u({f},\tilde{y}(\tilde{z}),n,\epsilon))\left(n\left(\int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\mu}}}-\rho\right)+n(h_{{\mu}}({f},\xi)-\rho)\right)\notag\\ \leq &\sum_{\tilde{z}\in {\widetilde{F}}}{{\tilde{\mu}}}^\xi_{\tilde{x}}(\widetilde{V}^u({f},\tilde{y}(\tilde{z}),n,\epsilon))\Big((\widetilde{S}_n\tilde{\varphi})(y(z))-\log {{\tilde{\mu}}}^\xi_{\tilde{x}}(\widetilde{V}^u({f},\tilde{y}(\tilde{z}),n,\epsilon))\Big)\notag\\ \leq &\left(\sum_{\tilde{z}\in {\widetilde{F}}}{{\tilde{\mu}}}^\xi_{\tilde{x}}(\widetilde{V}^u({f},\tilde{y}(\tilde{z}),n,\epsilon))\right)\left(\log \sum_{\tilde{z}\in {\widetilde{F}}}\exp((\widetilde{S}_n\tilde{\varphi})(\tilde{y}(\tilde{z})))-\right. \\ \qquad& \log \left.\sum_{\tilde{z}\in {\widetilde{F}}}{{\tilde{\mu}}}^\xi_{\tilde{x}}(\widetilde{V}^u({f},\tilde{y}(\tilde{z}),n,\epsilon))\right). \end{align*} Combining \eqref{eq:est3.5}, \begin{align}\label{eq:est4} & n\Big(\int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\mu}}}-\rho\Big)+n(h_{{\mu}}({f},\xi)-\rho) \notag\\ \leq &\log \sum_{\tilde{z}\in {\widetilde{F}}}\exp((\widetilde{S}_n\tilde{\varphi})(\tilde{y}(\tilde{z})))-\log \sum_{\tilde{z}\in {\widetilde{F}}}{{\tilde{\mu}}}^\xi_{\tilde{x}}(\widetilde{V}^u({f},\tilde{y}(\tilde{z}),n,\epsilon))\notag\\ \leq & \log\sum_{\tilde{z}\in {\widetilde{F}}}\exp((\widetilde{S}_n\tilde{\varphi})(\tilde{y}(\tilde{z})))-\log(1-\rho). \end{align} Let ${\Delta_{\epsilon}:=\sup\{|\tilde{\varphi}(\tilde{x})-\tilde{\varphi}({\tilde{y}})|\colon d(\Pi(\tilde{x}),\Pi(\tilde{y}))\leq\epsilon\}}$. For any $\tilde{z}\in {\widetilde{F}}$, we have \[ \exp((\widetilde{S}_n\tilde{\varphi})(\tilde{y}(\tilde{z})))\leq\exp((\widetilde{S}_n\tilde{\varphi})({\tilde{z}})+n\Delta_{\epsilon}). \] Dividing by $n$ and taking the $\limsup$ on both sides of \eqref{eq:est4}, we have \[ \int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\mu}}}+h_{{\mu}}({f},\xi)-2\rho\leq\limsup_{n\to\infty}\frac{1}{n}\log\sum_{\tilde{z}\in {\widetilde{F}}}\exp((\widetilde{S}_n\tilde{\varphi})({\tilde{z}}))+\Delta_{\epsilon}. \] We can choose a sequence $\{{\widetilde{F}}_n\}$ of such ${\widetilde{F}}$ such that \[ \limsup_{n\to\infty}\frac{1}{n}\log\sum_{\tilde{z}\in {\widetilde{F}}_n}\exp((\widetilde{S}_n\tilde{\varphi})({\tilde{z}}))\leq {\widetilde{P}^u({\tau},\tilde{\varphi},\delta)}. \] Since $\rho$ is arbitrary, and $\Delta_{\epsilon}\to 0$ as $\epsilon\to0$, we have \[ \int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\mu}}}+h_{{\mu}}({f},\xi)\leq \widetilde{P}^u({\tau},\tilde{\varphi},\delta), \] which implies what we need. \item In this part, we prove that \[ \sup_{\mu\in\mathcal{M}(f)}\left\{h^u_\mu(f)+\int_{M}\varphi\mathrm{d}{\mu}\right\}=P^u(f,\varphi), \] which completes the proof of Theorem \ref{thm:vp}. In fact, we only need to prove that for any $\rho>0$, there exists ${\tilde{\mu}}\in\mathcal{M} ({\tau})$ such that $h^u_{{\mu}}({f})+\int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\mu}}}\geq P^u({f},{\varphi})-\rho$. Given $\delta>0$, we can choose ${\tilde{x}_0}\in M^f$ such that \[ \widetilde{P}^u({\tau},\tilde{\varphi},{\tilde{x}_0},\delta)\geq \widetilde{P}^u({\tau},\tilde{\varphi},\delta)-\rho. \] Take $\epsilon>0$ small enough. 
Then let $\widetilde{E}_n$ be an $(n,\epsilon)$ ${W}^u$-separated set of $\overline{\widetilde{W}^u({\tilde{x}_0},\delta)}$ such that \[ \log\sum_{\tilde{y}\in \widetilde{E}_n}\exp((\widetilde{S}_n\tilde{\varphi})(\tilde{y}))\geq\log \widetilde{P}^u({\tau},\tilde{\varphi},{\tilde{x}_0},\delta,n,\epsilon)-1. \] Then we construct measures ${\tilde\nu}_{n}$ as follows: \[ {\tilde{\nu}_n}:=\frac{\sum_{\tilde{y}\in \widetilde{E}_n}\exp((\widetilde{S}_n\tilde{\varphi})(\tilde{y}))\tilde{\delta}_{\tilde{y}}}{\sum_{\tilde{z}\in \widetilde{E}_n}\exp((\widetilde{S}_n\tilde{\varphi})(\tilde{z}))}, \] where $\tilde{\delta}_\cdot$ denotes a Dirac measure. Let \[ {{\tilde{\mu}}}_{n}=\frac{1}{n}\sum_{i=0}^{n-1}\tau^i{\tilde\nu}_{n}. \] Then there exists a subsequence $\{n_i\}$ such that \[ \lim_{i\to\infty}{{\tilde{\mu}}}_{n_i}={{\tilde{\mu}}}. \] It is easy to check that ${\tilde{\mu}}\in\mathcal{M} ({\tau})$. We can choose a partition $\eta\in\mathcal{P}^u( M^f)$ such that $\overline{\widetilde{W}^u({\tilde{x}_0},\delta)}\subset \eta_{ }({\tilde{x}_0})$ (by shrinking $\delta$ if necessary). Then choose a finite partition $\alpha$ of $ M^f$ with sufficiently small diameter such that ${{{\mu}}}(\partial \Pi(\alpha))=0$ and suppose that $\alpha$ contains $K$ elements. {Let $\alpha^u$ denote the corresponding measurable partition in $\mathcal{P}^u( M^f)$ constructed via $\alpha$.} Fix $q$, $n\in\mathbb{N}$ with $1<q\leq n-1$. Put $a(j)=\left[\frac{n-j}{q}\right]$, $j=0,1,\cdots,q-1$, where we denote by $[a]$ the integer part of $a$. Then \[\bigvee_{u=0}^{n-1}\tau^{-i}\alpha=\bigvee_{r=0}^{a(j)-1}\tau^{-(rq+j)}\alpha_0^{q-1}\vee\bigvee_{t\in T_j}\tau^{-t}\alpha, \] where $T_j=\{0,1,\cdots,j-1\}\cup\{j+aq(j),\cdots,n-1\}$. Note that $\mathrm{Card\ } T_j\leq 2q$. Moreover, we require that $\mathrm{diam}(\alpha)\ll\epsilon$. Then \begin{align*} & \log\sum_{\tilde{y}\in \widetilde{E}_n}\exp((\widetilde{S}_n\tilde{\varphi})(\tilde{y})) \\ =& \sum_{\tilde{y}\in \widetilde{E}_n}{\tilde{\nu}_n}(\{\tilde{y}\})\Big(-\log{\tilde{\nu}_n}(\{\tilde{y}\})+(\widetilde{S}_n\tilde{\varphi})(\tilde{y})\Big) \\ =& H_{\tilde\nu_n}(\alpha^{n-1}_0|\eta)+\int_{ M^f}(\widetilde{S}_n\tilde{\varphi})\mathrm{d}{\tilde\nu}_n. \end{align*} Then following the same calculation in \cite{HuWuZhu2017}, we have that \begin{align*} & \log\sum_{\tilde{y}\in \widetilde{E}_n}\exp((\widetilde{S}_n\tilde{\varphi})(\tilde{y})) \\ \leq & 2q\log K+H_{\tau^j{\tilde\nu}_{n}}(\alpha^{q-1}_0|\tau^j\eta) \\ + &\sum_{r=1}^{a(j)-1}H_{\tau^{rq+j}{\tilde\nu}_{n}}(\alpha^{q-1}_0|\tau\alpha^u)+\int_{ M^f}(\widetilde{S}_n\tilde{\varphi})\mathrm{d}{\tilde\nu}_{n}. \end{align*} Summing the inequality above over $j$ from $0$ to $q-1$ and dividing by $n$, by Lemma \ref{lem:affine} we have \begin{align}\label{eq:keyest} & \frac{q}{n}\log\sum_{\tilde{y}\in \widetilde{E}_n}\exp((\widetilde{S}_n\tilde{\varphi})(\tilde{y})) \notag \\ \leq & \frac{2q^2}{n}\log K+\frac{1}{n}\sum_{j=0}^{q-1}H_{\tau^j{\tilde\nu}_{n}}(\alpha^{q-1}_0|\tau^j\eta) \notag \\ + & H_{{{\tilde{\mu}}}_n}(\alpha^{q-1}_0|\tau\alpha^u)+q\int_{ M^f}\tilde{\varphi} \mathrm{d}{{\tilde{\mu}}}_n. 
\end{align} Then we can choose a sequence $\{n_k\}$ such that \begin{enumerate}[label=(\roman*)] \item ${{\tilde{\mu}}}_{n_k}\to{{\tilde{\mu}}}$ as $k\to\infty$; \item the following equality holds \begin{align*} & \lim_{k\to\infty}\frac{1}{n_k}\log \widetilde{P}^u({\tau},\tilde{\varphi},{\tilde{x}_0},\delta,n_k,\epsilon) \\ =& \limsup_{n\to\infty}\frac{1}{n}\log \widetilde{P}^u({\tau},\tilde{\varphi},{\tilde{x}_0},\delta,n,\epsilon); \end{align*} \item ${\tilde\nu}_{n_k}\to \tilde\nu$ as $k\to\infty$ for some measure $\tilde\nu$ on $ M^f$. \end{enumerate} Since ${{\tilde{\mu}}}(\partial \Pi(\alpha))=0$, by Lemma \ref{lem:semiconti}, \[ \limsup_{k\to\infty}H_{{{\tilde{\mu}}}_{n_k}}(\alpha_0^{q-1}|\tau\alpha^u)\leq H_{{{\tilde{\mu}}}}(\alpha_0^{q-1}|\tau\alpha^u). \] As $\tilde\nu_{n}$ is supported on $\overline{\widetilde{W}^u({\tilde{x}_0},\delta)}$, for each $j=0,\cdots,q-1$, we can choose $\alpha, \beta_n \in \mathcal{P}(M^f)$ such that $\beta_1<\beta_2<\cdots<\beta_n<\cdots$ and {$\mathcal{B}(\beta_n)\nearrow\mathcal{B}(\tau^j\eta)$}, and moreover, $(\Pi\tau^j{\tilde\nu})(\partial(\Pi(\alpha_0^{q-1})))=0$, $(\Pi\tau^j{\nu})(\partial(\Pi(\beta_n)_0^{q-1}))=0$. Then applying Lemma \ref{lem:semiconti} we have \[ \limsup_{k\to\infty}\frac{1}{n_k}\sum_{j=0}^{q-1}H_{\tau^j{\tilde\nu}_{n_k}}(\alpha_0^{q-1}|\tau^j\eta)\leq \limsup_{k\to\infty}\frac{1}{n_k}\sum_{j=0}^{q-1}H_{\tau^j{\tilde\nu}}(\alpha_0^{q-1}|\tau^j\eta)=0. \] Thus replacing $n$ by $n_k$ in \eqref{eq:keyest} and letting $k\to\infty$, by the above claim and discussions, we get \begin{align*} & q\limsup_{n\to\infty}\frac{1}{n}\log \widetilde{P}^u({\tau},\tilde{\varphi},{\tilde{x}_0},\delta,n,\epsilon) \\ \leq& H_{{{\tilde{\mu}}}}(\alpha^{q-1}_0|\tau\alpha^u)+q\int_{ M^f}\tilde{\varphi}\mathrm{d}{{\tilde{\mu}}}. \end{align*} By Theorem \ref{thm:localvsfinite}, \begin{align*} & \limsup_{n\to\infty}\frac{1}{n}\log \widetilde{P}^u({\tau},\tilde{\varphi},{\tilde{x}_0},\delta,n,\epsilon) \\ \leq& \lim_{q\to\infty}\frac{1}{q}H_{{{\tilde{\mu}}}}(\alpha^{q-1}_0|\tau\alpha^u)+\int_{ M^f}\tilde{\varphi}\mathrm{d}{{\tilde{\mu}}} \\ = & h^u_{{\mu}}({f})+\int_{ M^f}\tilde{\varphi}\mathrm{d}{{\tilde{\mu}}}. \end{align*} Let $\epsilon\to 0$, we have $\widetilde{P}^u({\tau},\tilde{\varphi},{\tilde{x}_0},\delta)\leq h^u_{{\mu}}({f})+\int_{ M^f}\tilde{\varphi}\mathrm{d}{{\tilde{\mu}}}$. Recall that $P^u({f},{\varphi})=\widetilde{P}^u({\tau},\tilde{\varphi}, \delta)\leq \widetilde{P}^u({\tau},\tilde{\varphi},{\tilde{x}_0},\delta)+\rho$. The proof of Theorem \ref{thm:vp} is complete. \end{enumerate} \end{potC} \subsection{\texorpdfstring{$u$-equilibrium states for endomorphisms}{u-equilibrium states for endomorphisms}} In this subsection, we introduce the notion of $u$-equilibrium state and list some results concerning it, whose proofs are similar to those in \cite{HuWuZhu2017}. Let $\varphi\in C(M)$. \begin{mdef}\rm\label{def:uequi} $\mu\in\mathcal{M}({f})$ is said to be a $u$-equilibrium state for $\varphi$, if it satisfies \[ h^u_\mu({f})+\int_{ M^f}\varphi\mathrm{d}{\mu}=P^u({f},\varphi). \] \end{mdef} We denote by $\mathcal{M}^u({f},\varphi)$ the set of all $u$-equilibrium states for $\varphi$. \begin{prop}\label{prop:equilibrium} Let $\varphi\in C(M)$, then we have the following properties related with $u$-equilibrium states. 
\begin{enumerate}[label=(\roman*)] \item\label{equi1} $\mathcal{M}^u({f},\varphi)$ is non-empty and convex; in particular, a measure of maximal unstable metric entropy always exists; \item the extreme points of $\mathcal{M}^u({f},\varphi)$ are precisely the ergodic members of $\mathcal{M}^u({f},\varphi)$; \item $\mathcal{M}^u({f},\varphi)$ is compact and has an {ergodic $u$-equilibrium state}; \item assume $\varphi$, $\psi\in C(M)$ are cohomologous, i.e. $\varphi=\psi+\sigma-\sigma\circ f-c$ for some $c\in\mathbb{R}$ and $\sigma\in C(M)$. Then $\varphi$ and $\psi$ have the same $u$-equilibrium states, and \[ P^u({f},\varphi)=P^u({f},\psi)-c. \] \end{enumerate} \end{prop} \end{document}
\begin{equation}gin{document} \author{M. W. Mitchell$^{1}$, C. W. Ellenor$^{1}$ , S. Schneider$^{2}$ and A. M. Steinberg$^{1}$} \affiliation{$^{1}$Department of Physics, University of Toronto, 60 St.~George St., Toronto, ON M5S 1A7, Canada \\ $^{2}$Chemical Physics Theory Group, Department of Chemistry, University of Toronto, 80 St. George St., Toronto, ON M5S 3H6, Canada} \newcommand{\today}{\today} \title{ Diagnosis, prescription and prognosis of a Bell-state filter by quantum process tomography.} \newcommand{3.0in}{3.0in} \begin{equation}gin{abstract} Using a Hong-Ou-Mandel interferometer, we apply the techniques of quantum process tomography to characterize errors and decoherence in a prototypical two-photon operation, a singlet-state filter. The quantum process tomography results indicate a large asymmetry in the process and also the required operation to correct for this asymmetry. Finally, we quantify errors and decoherence of the filtering operation after this modification. \end{abstract} \pacs{42.50-p, 03.67.Mn, 03.67.Pp } \maketitle \newcommand{\begin{equation}}{\begin{equation}gin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{equation}a}{\begin{equation}gin{eqnarray}} \newcommand{\end{equation}a}{\end{eqnarray}} \newcommand{\ket}[1]{\left|#1\right>} \newcommand{\bra}[1]{\left<#1\right|} \newcommand{{\bf k}}{{\bf k}} \newcommand{{\em et al.}}{{\em et al.}} \newcommand{$^{\circ}$~}{$^{\circ}$~} \newcommand{{\cal E}}{{\cal E}} \newcommand{\polket}{} { Quantum computation promises exponential speedup in the solution of difficult problems such as factoring large numbers and simulating quantum systems \cite{DiVincenzo1995, EckertJozsa1996}. In a quantum computer single- and multiple-qubit operations drive the system through a sequence of highly entangled states before the result is finally measured. A quantum computation is vulnerable to errors and to environmental decoherence, which destroys the entanglement. Characterization of quantum operations including errors and decoherence is a pressing issue for quantum information processing \cite{Roadmap}, and is possible by the technique of {\em quantum process tomography} (QPT) \cite{Poyatos1997,ChuangNielsen1997}. QPT has been demonstrated for single qubits \cite{DeMartini2002,Altepeter2003} and for mixed ensembles of two-qubit systems \cite{Braunstein1999} in NMR \cite{Childs2001}. Here we present QPT of an entanglement-generating two-qubit operation, the partitioning of photons by a beamsplitter in a Hong-Ou-Mandel (HOM) interferometer. Our characterization reveals large imperfections in the process and indicates the appropriate remedy. Finally, we extend the QPT results to predict the accuracy of the process, once repairs are carried out. } Multi-qubit operations on photons, once thought to require very large optical nonlinearities, can now be performed with linear optical elements such as wave-plates and beamsplitters coupled with the highly nonlinear process of photodetection. This idea is exploited in schemes for linear optics quantum computation \cite{KLM2001,Franson2002,Gilchrist2003} and to generate multi-photon entangled states \cite{KokDowling2002, Fiurasek2002}. The schemes are probabilistic and employ {\em post-selection}: the photodetection signals indicate when the correct operation has taken place. 
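To fix the picture of such a probabilistic operation (in notation that anticipates the superoperator language used below), a post-selected gate maps an input state $\rho^{(in)}$ to an unnormalized output whose trace is the probability that the post-selection succeeds,
\begin{equation}
p_{\mathrm{succ}} = {\mathrm{Tr}}[\rho^{(out)}] \le 1,
\end{equation}
and the state conditioned on the photodetection signal is $\rho^{(out)}/{\mathrm{Tr}}[\rho^{(out)}]$; here $p_{\mathrm{succ}}$ is simply our label for this trace.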
The HOM effect plays a central role in these proposals, and itself is a prototypical example of a post-selected multi-qubit operation; it generates correlations and entanglement without optical nonlinearities. The HOM effect has been used to produce entangled states for Bell inequality tests \cite{Ou1988,Shih1988} and to make probabilistic Bell state measurements for quantum teleportation and entanglement swapping \cite{Bouwmeester1997,Pan1998}. In the HOM effect, two photons meeting at a 50/50 beamsplitter can leave by different output ports only if they are in some way distinguishable \cite{HongOuMandel}. We use photon pairs which are indistinguishable in wavelength, spatial mode and arrival time at the beamsplitter, leaving only the polarization to (possibly) distinguish them. By detecting photons leaving from different output ports, we post-select an entangled polarization state. Ideally, the process acts as a filter for the Bell singlet state $\polket{\Psi^{-}} = (\polket{HV} - \polket{VH})/\sqrt{2}$, in which the photons have orthogonal polarizations in any basis. In any real apparatus this process will include errors and decoherence. Using the techniques of QPT, we determine how the polarization state, more specifically the $4\times 4$ density matrix $\rho$ which describes an arbitrary two-photon mixed state, changes in passing the beamsplitter. In general, $\rho$ will change as $\rho^{(in)} \rightarrow \rho^{(out)} = {\cal E}(\rho^{(in)})$, where ${\cal E}$ is the {\em superoperator}, a linear mapping from input density matrices to output density matrices. The superoperator completely characterizes the effect on the system, including coherent evolutions, decohering interactions with the environment, and loss. \begin{equation}gin{figure}[h] \centerline{\epsfig{width=3.0in,figure=Setup.eps01}} \caption{Schematic of experimental setup. BBO, $\begin{equation}ta$-barium borate crystals, H half-wave plate, Q quarter wave plate, BS beamsplitter, PBS polarizing beamsplitter, SPCM single-photon counting module, TRR translatable retro-reflector. } \end{figure} \newcommand{\vec{\rho}}{\vec{\rho}} \newcommand{{\bf M}}{{\bf M}} We use a HOM interferometer constructed to produce arbitrary input polarizations and detect arbitrary output polarizations. The experimental setup is shown schematically in Fig. 1. A 7 mW beam of 351.1 nm light from an argon-ion laser illuminates a pair of 0.6 mm thick $\begin{equation}ta$-barium borate crystals, cut for degenerate downconversion at a half-opening angle of 3.3$^{\circ}$~. Pairs of downconversion photons at 702.2 nm emerge from the crystals vertically polarized. This initial polarization state can be rotated into any input product state by the state preparation half- and quarter-wave plates immediately before the central beamsplitter. The downconversion beams meet the beamsplitter at 45$^{\circ}$~ incidence. The beamsplitter itself \cite{CVIBeamsplitter} consists of a multi-layer dielectric coating on a glass substrate, with an anti-reflection coated back face. Polarization analyzers consisting of a quarter- and a half-waveplate before a polarizing beamsplitter are used to select an arbitrary product state. Photons which pass the analyzers are detected by single-photon counting modules and individual and coincidence detection rates are registered on a computer. Downconversion beams were aligned to overlap both spatially and temporally on the beamsplitter, giving a HOM dip visibility of 90 $\pm$ 5\% for both horizontal and vertical input polarizations. 
The process tomography measurements described below were performed at the center of this dip. We prepare 16 linearly independent input states $\{ \rho^{(in)}_{i} \}$ and measure the corresponding outputs $\{ \rho^{(out)}_{i} \}$. The inputs \cite{JKMW} are the pure states $\rho_{i}^{(in)} = \ket{\psi_{i}}\bra{\psi_{i}}$ where \begin{eqnarray} \{ \polket{\psi_{1}},\ldots,\polket{\psi_{16}} \} &=& \{ \polket{HH}, \polket{HV}, \polket{VV}, \polket{VH}, \nonumber \\ & & \polket{RH}, \polket{RV}, \polket{DV}, \polket{DH}, \nonumber \\ & & \polket{DR}, \polket{DD}, \polket{RD}, \polket{HD}, \nonumber \\ & & \polket{VD}, \polket{VL}, \polket{HL}, \polket{RL} \} \end{eqnarray} and the polarizations are horizontal $H$, vertical $V$, diagonal $D = (H+V)/\sqrt{2}$, right circular $R=(H-iV)/\sqrt{2}$ and left circular $L=(H+iV)/\sqrt{2}$. A single output $\rho_{i}^{(out)}$ can be found by making projective measurements onto the sixteen states $\{\psi_{j}\}$. The coincidence rates for these measurements are $R_{ij} = R_{0} {\mathrm{Tr}}[\rho_{i}^{(out)}\ket{\psi_{j}}\bra{\psi_{j}}]$, where $R_{0}$ is the constant rate of downconversion at the crystals. Note that we use non-normalized output density matrices, i.e. ${\mathrm{Tr}}[\rho^{(out)}] \le 1$, because photon pairs can be lost in the process. Absorption and scattering losses are small, but post-selection necessarily removes a significant fraction of the pairs for most input states. \begin{figure}[h] \centerline{\epsfig{width=2 in,figure=CoincMatrix.eps01}} \caption{Coincidence rates. Brightness indicates the count rate observed in a given two-photon polarisation state (horizontal axis) for a given input polarisation state (vertical axis).} \end{figure} \begin{figure}[h] \centerline{\epsfig{width=3.0in,figure=TypicalOutDMN.eps}} \caption{Output density matrix (normalized) for an input state of $HV$. Left graph shows $\mathrm{Re}[\rho^{(out)}]$, right graph shows $\mathrm{Im}[\rho^{(out)}]$. } \end{figure} The measured coincidence rates $R_{ij}$ are shown in Fig. 2. As expected for a filter, the output has similar polarization characteristics for all inputs, but not all are equally transmitted, e.g., $\polket{HH}$ and $\polket{VV}$ are blocked. A typical output density matrix, reconstructed using maximum-likelihood estimation \cite{JKMW}, is shown in Fig. 3. The large coherence between $\polket{HV}$ and $\polket{VH}$ indicates that this is an entangled state, with a concurrence \cite{Wooters1998, Coffman2000, JKMW} of $C = 0.89$. The HOM effect is acting as an entangled-state filter, but the selected state is clearly not $\polket{\Psi^{-}}$, which has a real density matrix and {\em negative} off-diagonal elements. \begin{figure}[h] \centerline{\epsfig{width=3.0in,figure=SuperOperator.eps01}} \caption{Reconstructed superoperators for the post-selected HOM process. a) superoperator as measured, b) predicted superoperator after repair. The matrix ${\bf M}$ is shown, input density matrix elements at bottom, output elements at left, where the density matrices are represented in vector form (see text). Horizontal stripes on the vertical bars indicate the best estimated value and the statistical uncertainties. } \end{figure} We can understand this behaviour through the superoperator ${\cal E}$.
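As a point of reference for what follows, an ideal singlet-state filter (lossy only through the post-selection itself) would be described by a single Kraus operator, namely the projector onto the singlet: writing $\hat{P} = \polket{\Psi^{-}}\bra{\Psi^{-}}$ (our notation), it would act as
\begin{equation}
{\cal E}_{\rm ideal}(\rho) = \hat{P} \rho \hat{P},
\end{equation}
transmitting any input with probability $\bra{\Psi^{-}}\rho\polket{\Psi^{-}}$ and always yielding the singlet at the output. The measured process is compared against this ideal below.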
For clarity, we work in the Bell state basis $\{\polket{\Psi^{-}},\polket{\Psi^{+}},\polket{\Phi^{-}},\polket{\Phi^{+}}\}$ where $\polket{\Psi^{\pm}} = (\polket{HV} \pm \polket{VH})/\sqrt{2}$ and $\polket{\Phi^{\pm}} = (\polket{HH} \pm \polket{VV})/\sqrt{2}$. We use a matrix representation for {$\cal{E}$}: The density matrix is written as a real 16-dimensional vector $\vec{\rho}$ made from the independent coefficients of the non-normalized density matrix, i.e., $\vec{\rho} = (\rho_{11}, \ldots, \rho_{44}, \mathrm{Re}[\rho_{12}], \mathrm{Im}[\rho_{12}], \mathrm{Re}[\rho_{13}], \ldots, \mathrm{Im}[\rho_{34}])^{T}$. The superoperator is represented by a matrix ${\bf M}$ which acts as \begin{equation} \vec{\rho}^{(out)} = {\bf M} \vec{\rho}^{(in)}. \label{eq:SuperMatrixDefinition} \end{equation} In principle, ${\bf M}$ could be found from this equation by a simple inversion, since we measured $R_{ij}$ for a basis set $\{\rho^{(in)}_{i}\}$. This procedure is sensitive to small errors and can produce a non-physical ${\bf M}$, i.e., one which predicts non-physical (mathematically, non-positive semidefinite) $\rho^{(out)}$. Instead, we reconstruct ${\bf M}$ by maximum-likelihood estimation within the space of completely positive superoperators, i.e.~operators that map physical density matrices to physical density matrices (see e.g. Sudarshan \cite{Sudarshan1961} for the mathematical conditions on the mapping between density matrices). The reconstructed ${\bf M}$ is shown in Fig. 4 a), with error estimates from an ensemble of simulated datasets Poisson distributed around the measured data. This matrix is ``normalized'' to give ${\mathrm{Tr}}[\rho_{out}] = 1/4$ when the input is a completely mixed state. We verify the accuracy of the reconstructed superoperator using the the input states $\polket{LL}$ and $\polket{RR}$, which were not used in the reconstruction process. These states are used as input, both to equation (\ref{eq:SuperMatrixDefinition}) and in the HOM interferometer. In both cases, prediction and the experiment result (again by maximum likelihood reconstruction) agree with fidelity of $97$\%. The superoperator ${\bf M}$ bears little resemblance to an ideal singlet-state filter, for which $M_{ij} = \delta_{i,1}\delta_{j,1}$. Clearly the process is not performing the intended filtering operation. In fact, it is nearly a projection onto a different maximally entangled state \cite{Asymmetry}. Written as a canonical Kraus operator sum \cite{Choi1975,Kraus1983}, the superoperator allows us to find this state directly. In the sum ${\cal E}(\rho) = \sum_{l} \hat{K}_{l} \rho \hat{K}_{l}^{\dagger}$, the leading operator $\hat{K}_{1}$ is very nearly a projector onto the state $\polket{\Psi^{-}_{\phi}} \equiv (\polket{HV} - \exp[i \phi] \polket{VH})/\sqrt{2}$ with $\phi = 0.84~\pi$. This immediately suggests a way to ``fix'' the non-ideal beamsplitter. Adding a birefringent phase shifter which takes $VH\rightarrow \exp[-i \phi]VH$ before the beamsplitter and the reverse operation afterward would give (nearly) a single-state filter. Finally, we can predict the behaviour of this ``fixed'' operation. The corresponding matrix ${\bf M}$ is shown in Fig. 4b). The large (1,1) element indicates the filtering operation and the smaller nonzero elements contribute to decoherence and other errors. These errors presumably arise from imperfect overlap of the downconverted beams and residual imperfections in the beamsplitter. 
They do not appear to have a simple form, but we can gain some insight from some simple measures, calculated using the superoperator. An unpolarized input (a completely mixed state) gives rise to an output that is 84\% $\polket{\Psi^{-}}$, or an average polarization ratio (intensity of $\polket{\Psi^{-}}$ versus average intensity of the other three Bell states) of 16:1. This same output is entangled, with a concurrence of $C = 0.70$, sufficient for a Bell inequality violation. Quantifying purity with the linear entropy $S_{L}$, which ranges from 0 for a pure state to 1 for a completely mixed state, the process purifies the mixed state from $S_{L}=1$ to $S_{L}=0.37$. We can also ask how well the repaired filter maintains the coherence of an input. The pure input $\Psi^{-}$ is passed 75\% of the time and emerges largely pure, with $S_{L} = 0.13$. The state $\Psi^{+}$ is 13\% passed with $S_{L} = 0.51$ and $\Phi^{\pm}$ are on average 6\% passed with low purity $S_{L} = 0.88$. Of course, different applications for a singlet-state filter will have different requirements and different figures of merit. The superoperator we have found using QPT is more general, a complete characterization suitable for evaluating any proposed use. It is also, we have seen, a useful diagnostic and predictive tool. We thank Daniel Lidar, Jeff Lundeen and Kevin Resch for assistance and helpful discussions. This work was supported by the National Science and Engineering Research Council of Canada, Photonics Research Ontario, the Canadian Institute for Photonic Innovations and the DARPA-QuIST program (managed by AFOSR under agreement No. F49620-01-1-0468). \begin{equation}gin{thebibliography}{99} \bibitem{DiVincenzo1995} D. P. DiVincenzo, {\em Science} {\bf 270}, 255 (1995). \bibitem{EckertJozsa1996} A. Ekert, R. Jozsa, {\em Rev. Mod. Phys.} {\bf 68}, 733 (1996). \bibitem{Roadmap} Quantum information science and technology experts panel, {\em A quantum information science and technology roadmap. } (2002; http://qist.lanl.gov). \bibitem{Poyatos1997} J. F. Poyatos, J. I. Cirac, P. Zoller, {\em Phys. Rev. Lett.} {\bf 78}, 390 (1997). \bibitem{ChuangNielsen1997} I. L. Chuang, M. A. Nielsen, {\em J. Mod. Opt.} {\bf 44}, 2455 (1997). \bibitem{DeMartini2002} F. De Martini, A. Mazzei, M. Ricci, G. M. D'Ariano, in press (available at http://arXiv.org/abs/quant-ph/0207143 ). {\em } {\bf } \bibitem{Altepeter2003} J. B. Altepeter {\em et al.} in press (available at http://arXiv.org/abs/quant-ph/0303038 ). {\em } {\bf } \bibitem{Braunstein1999} S. L. Braunstein {\em et al.} {\em Phys. Rev. Lett.} {\bf 83}, 1054 (1999). \bibitem{Childs2001} A.M. Childs, I.L. Chuang, D.W. Leung, {\em Phys. Rev. Lett.} {\bf 78}, 390 (1997). \bibitem{KLM2001} E. Knill , R. Laflamme, G. J. Milburn, {\em Nature} {\bf 409}, 46 (2001). \bibitem{Franson2002} J. D. Franson, M. M. Donegan, M. J. Fitch, B. C. Jacobs, T. B. Pittman, {\em Phys. Rev. Lett.} {\bf 89}, 137901/1 (2002). \bibitem{Gilchrist2003} A. Gilchrist, W.J. Munro, A.G. White in press (available at http://arXiv.org/abs/quant-ph/0301112 ). {\em } {\bf } \bibitem{KokDowling2002} P. Kok, H. Lee, J. P. Dowling, {\em Phys. Rev. A} {\bf 65}, 052104/1 (2002). \bibitem{Fiurasek2002} J. Fiur\'{a}\v{s}ek, {\em Phys. Rev. A.} {\bf 65}, 053818/1 (2002). \bibitem{Ou1988} Z. Y. Ou, L. Mandel, {\em Phys. Rev. Lett} {\bf 61}, 50 (1988). \bibitem{Shih1988} Y. H. Shih, C. O. Alley, {\em Phys. Rev. Lett} {\bf 61}, 2921 (1988). \bibitem{Bouwmeester1997} D. Bouwmeester {\em et al.} {\em Nature} {\bf 390}, p. 
575 (1997). \bibitem{Pan1998} J.-W. Pan , D. Bouwmeester, H. Weinfurter, A. Zeilinger, {\em Phys. Rev. Lett.} {\bf 80}, 3891 (1998). \bibitem{HongOuMandel} C. K. Hong, Z. Y. Ou, L. Mandel, {\em Phys. Rev. Lett.} {\bf 59}, 2044 (1987). \bibitem{CVIBeamsplitter} CVI Laser Corp. Non-polarizing plate beamsplitter BSNP-702.2-50-0525, 50\% reflectance for both s and p polarizations at 702.2 nm. \bibitem{JKMW} D. F. V. James, P. G. Kwiat, W. J. Munro, A. G. White, {\em Phys. Rev. A} {\bf 64}, 052312/1 (2001). \bibitem{Coffman2000} V. Coffman, J. Kundu, W. K.Wooters, {\em Phys. Rev. A} {\bf 61}, 052306/1 (2000). \bibitem{Wooters1998} W. K. Wooters, {\em Phys. Rev. Lett.} {\bf 80}, 2245 (1998). \bibitem{Sudarshan1961} E. C. G. Sudarshan, P. M. Mathews, J. Rau, {\em Phys. Rev.} {\bf 121}, 920 (1961). \bibitem{Asymmetry} This may appear surprising, because it is well known that the HOM effect {\em must} select $\Psi^{-}$ if the beamsplitter treats $H$ and $V$ polarizations symmetrically. But the beamsplitter used here lacks inversion symmetry (only the front surface is reflective) and this allows a H/V asymmetry. Concretely, when both photons are reflected, one passes through the anti-reflection coated surface twice, while the other does not pass through it at all. If this coating is birefringent the inputs $\polket{HV}$ and $\polket{VH}$ will acquire different phase shifts upon double-reflection. As a result, a different state within the $\Psi^{-},\Psi^{+}$ subspace is selected. To our knowledge, this is the first observation of this asymmetry in the HOM effect. \bibitem{Choi1975} M. D. Choi, {\em Linear Algebr. Appl.} {\bf 10}, 285 (1975). \bibitem{Kraus1983} K. Kraus, {\em States, effects, and operations: Fundamental notions of quantum theory } (Springer-Verlag, Berlin, 1983). \end{thebibliography} \end{document}
\begin{document} \title{Beurling--Fig\`a-Talamanca--Herz algebras} \begin{abstract} For a locally compact group $G$ and $p \in (1,\infty)$, we define and study the Beurling--Fig\`a-Talamanca--Herz algebras $A_p(G,\omega)$. For $p=2$ and abelian $G$, these are precisely the Beurling algebras on the dual group $\hat{G}$. For $p =2$ and compact $G$, our approach subsumes an earlier one by H.\ H.\ Lee and E.\ Samei. The key to our approach is not to define Beurling algebras through weights, i.e., possibly unbounded continuous functions, but rather through their inverses, which are bounded continuous functions. We prove that a locally compact group $G$ is amenable if and only if one---and, equivalently, every---Beurling--Fig\`a-Talamanca--Herz algebra $A_p(G,\omega)$ has a bounded approximate identity. \end{abstract} \begin{keywords} amenable, locally compact group; Beurling algebra; Beurling--Fig\`a--Talamanca--Herz algebra; Beurling--Fourier algebra; bounded approximate identity; inverse of a weight; Leptin's theorem; weight. \end{keywords} \begin{sloppy} \begin{classification} Primary 43A99; Secondary 22D12, 43A15, 43A32, 46H05, 46J10. \end{classification} \end{sloppy} \section*{Introduction} A \emph{weight} on a locally compact group $G$ is a measurable, locally integrable function $\omega \!: G \to [1,\infty)$ such that \begin{equation} \label{submult} \omega(xy) \leq \omega(x) \omega(y) \qquad (x,y \in G). \end{equation} The corresponding \emph{Beurling algebra} (\cite[Definition 3.7.2]{RSt}) is defined as \[ L^1(G,\omega) := \{ f \in L^1(G) : \omega f \in L^1(G) \}. \] It is a subalgebra of $L^1(G)$ and a Banach algebra in its own right with respect to the norm $\| \cdot \|_\omega$ given by $\| f \|_\omega := \| \omega f \|_1$ for $f \in L^1(G, \omega)$. There is no loss of generality if we suppose that $\omega$ is continuous (\cite[Theorem 3.7.5]{RSt}). Beurling algebras have been objects of study in abstract harmonic analysis for a long time, especially for abelian $G$ (see \cite{Kan} and \cite{RSt}, for instance). \par If $G$ is abelian with dual group $\hat{G}$, then the Fourier transform is an isometric isomorphism between $L^1(G)$ and the Fourier algebra $A(\hat{G})$ of $\hat{G}$. Consequently, if $\omega$ is any weight on $G$, then $L^1(G,\omega)$ is isomorphic to a subalgebra of $A(\hat{G})$. In \cite{Eym}, P.\ Eymard defined the Fourier algebra $A(G)$ for general, not necessarily abelian, locally compact groups $G$. This brings up the natural question if there is a way to define certain subalgebras of $A(G)$, which, for abelian $G$, correspond to the Beurling algebras on $L^1(\hat{G})$. \par In \cite{LS}, H.-H.\ Lee and E.\ Samei introduced the notion of a Beurling--Fourier algebra. If $G$ is a locally compact group and $\omega \!: G \to [1,\infty)$ is a weight, then multiplication with $\omega$ defines a closed, densely defined operator on $L^2(G)$, which is bounded if and only if $\omega$ is bounded, i.e., $L^1(G,\omega)$ is trivial. Consequently, Lee and Samei define what they call a \emph{weight on the dual of $G$} as a closed, densely defined operator on $L^2(G)$ affiliated with the group von Neumann algebra $\operatorname{VN}(G)$. The resulting theory of Beurling--Fourier algebras is particularly tractable for what Lee and Samei call central weights on the duals of compact groups.
Independently, these weights and their corresponding Beurling--Fourier algebras were also introduced and investigated by J.\ Ludwig, L.\ Turowska, and the third-named author (\cite{LST}). \par The approach in \cite{LST} is restricted to compact groups, and both in \cite{LS} and \cite{LST}, it is unclear if the given definitions of a Beurling--Fourier algebra can be extended beyond the $L^2$-context to define weighted variants of the Fig\`a-Talamanca--Herz algebras (see \cite{Eym2}, \cite{FT}, \cite{Her1}, \cite{Her2}, and \cite{Spe}). In the present note, we propose a different approach to Beurling--Fourier algebras with the following features: \begin{itemize} \item if $G$ is a locally compact abelian group with dual group $\hat{G}$, then the Beurling--Fourier algebras correspond---via the Fourier transform---to the Beurling algebras on $\hat{G}$; \item at least for compact $G$, our approach subsumes the one from \cite{LS} (and thus of \cite{LST}); \item the definitions extend effortlessly from the $L^2$-framework to a general $L^p$-context with $p \in (1,\infty)$, which enables us to define {\widehat{\mathrm{env}}}mph{Beurling--Fig\`a-Talamanca--Herz algebras}. {\widehat{\mathrm{env}}}nd{itemize} \par The key idea is to not attempt to define a ``dual'' notion of weight, but rather that of the inverse of a weight. This approach enables us to define Beurling--Fourier algebras without any reference to the theory of von Neumann algebras, on which \cite{LS} relies heavily, so that it can be adapted to an $L^p$-context. \par For the resulting Beurling--Fig\`a-Talamanca--Herz algebras, we obtain an extension of the Leptin--Herz theorem, which characterizes the amenable locally compact groups through the existence of bounded approximate identities in their Fig\`a-Talamanca--Herz algebras: a locally compact group is amenable if and only if one---or, equivalently, every---of its Beurling--Fig\`a-Talamanca--Herz algebras has a bounded approximate identity. \section{Beurling algebras through inverses of weights} We shall suppose throughout that all weights are continuous: by \cite[Theorem 3.7.5]{RSt}, this is no limitation if one is only interested in the corresponding Beurling algebras. \par If $G$ is a locally compact group and $\omega \!: G \to [1,\infty)$ is a weight, then $\omega$ is bounded if and only if $L^1(G,\omega) = L^1(G)$ with an equivalent norm, i.e., unless $L^1(G,\omega)$ is trivial, the multiplication operator induces by $\omega$ on $L^2(G)$ is unbounded. The inverse of $\omega$---with respect to pointwise multiplication---, however, is bounded on $G$, i.e., the corresponding multiplication operator on $L^2(G)$ is bounded and thus lies in the multiplier algebra of ${\mathbb M}hcal{C}_0(G)$, the ${C^\ast}$-algebra of all continuous functions on $G$ vanishing at infinity (represented on $L^2(G)$ as multiplication operators). \par For a locally compact group $G$, we denote by ${\mathbb M}hcal{C}_b(G)$ the ${C^\ast}$-algebra of all bounded continuous functions on $G$. We note the following: \begin{proposition} \label{inverseweightprop} Let $G$ be a locally compact group. 
Then the following are equivalent for non-negative $\alpha \in \mathcal{C}_b(G)$ with $\| \alpha \|_\infty \leq 1$: \begin{items} \item there is a weight $\omega \!: G \to [1,\infty)$ such that $\alpha = \omega^{-1}$; \item \begin{alphitems} \item the map \begin{equation} \mathcal{C}_0(G) \to \mathcal{C}_0(G), \quad f \mapsto \alpha f \end{equation} has dense range; \item there is $\Omega \in L^\infty(G \times G)$ with $\| \Omega \|_\infty \leq 1$ such that \begin{equation} \label{bigO} \alpha(x) \alpha(y) = \alpha(xy) \Omega(x,y) \qquad (x,y \in G). \end{equation} \end{alphitems} \end{items} Moreover, if $\omega$ is as in \emph{(i)}, then \[ L^1(G,\omega) = \{ \alpha f : f \in L^1(G) \} \] and \[ \| \alpha f \|_\omega = \| f \|_1 \qquad (f \in L^1(G)). \] \end{proposition} \begin{proof} (i) $\Longrightarrow$ (ii): Set \[ \Omega(x,y) := \frac{\omega(xy)}{\omega(x) \omega(y)} \qquad (x,y \in G). \] From (\ref{submult}), it is immediate that $\Omega \in \mathcal{C}_b(G \times G) \subset L^\infty(G \times G)$ with $\| \Omega \|_\infty \leq 1$, and by definition, (\ref{bigO}) holds. Hence, (b) is satisfied. To see that (a) holds, note that $\{ \alpha f : f \in \mathcal{C}_0(G) \}$ is a self-adjoint subalgebra of $\mathcal{C}_0(G)$ that strongly separates the points of $G$; it is therefore dense in $\mathcal{C}_0(G)$ by the Stone--Weierstra{\ss} theorem. \par (ii) $\Longrightarrow$ (i): From (a), it is immediate that $\alpha (x) \neq 0$ for all $x \in G$. Hence, we can define $\omega := \alpha^{-1}$. As $\| \alpha \|_\infty \leq 1$, it is clear that $\omega(G) \subset [1,\infty)$. From (b), it follows that $\omega$ satisfies (\ref{submult}). \par The ``moreover'' part is obvious. \end{proof} \par The bottom line of Proposition \ref{inverseweightprop} is that Beurling algebras can be defined without any reference to a weight---a possibly unbounded continuous function---, but rather through the inverses of weights, which are bounded continuous functions, i.e., multipliers of $\mathcal{C}_0(G)$. \par To adapt the notion of the inverse of a weight to the context of Fourier algebras, we introduce the notion of a Hopf--von Neumann algebra (see \cite{ES}). As is customary, we write $\bar{\otimes}$ for the tensor product of von Neumann algebras. \begin{definition} A \emph{Hopf--von Neumann algebra} is a pair $(M,\Gamma)$ where $M$ is a von Neumann algebra and $\Gamma \!: M \to M \bar{\otimes} M$ is a \emph{co-multiplication}, i.e., a normal, faithful, unital $^\ast$-homomorphism such that \[ (\Gamma \otimes \mathrm{id}) \circ \Gamma = (\mathrm{id} \otimes \Gamma) \circ \Gamma. \] \end{definition} \par Whenever $(M,\Gamma)$ is a Hopf--von Neumann algebra, the unique predual $M_\ast$ of $M$ becomes a Banach algebra with respect to the product $\ast$ defined via \begin{equation} \label{predualprod} \langle f \ast g, x \rangle := \langle f \otimes g, \Gamma x \rangle \qquad (f, g \in M_\ast, \, x \in M).
{\widehat{\mathrm{env}}}nd{equation} If $M_\ast$ is equipped with its canonical operator space structure (see \cite{ER} for background on the theory of operator spaces), then ({{\mathrm{op}}eratorname{Re}}f{predualprod}) defines not only a contractive, but completely contractive bilinear map, thus turning $M_\ast$ into a completely contractive Banach algebra (see \cite[p.\ 308]{ER}). \begin{example} Let $G$ be a locally compact group, and $M = L^\infty(G)$---so that $M_\ast = L^1(G)$ and $L^\infty(G) \bar{\otimes} L^\infty(G) \cong L^\infty(G \times G)$---, and define $\Gamma \!: L^\infty(G) \to L^\infty(G \times G)$ through \[ (\Gamma \phi)(x,y) := \phi(xy) \qquad (\phi \in L^\infty(G), \, x,y \in G). \] It is easy to check that the product on $L^1(G)$ in the sense of ({{\mathrm{op}}eratorname{Re}}f{predualprod}) is just the ordinary convolution product on $L^1(G)$. {\widehat{\mathrm{env}}}nd{example} \par The first part of Proposition {{\mathrm{op}}eratorname{Re}}f{inverseweightprop} can thus be rephrased as: \begin{corollary} \label{inverseweightcor} Let $G$ be a locally compact group. Then the following are equivalent for non-negative $\alpha \in {\mathbb M}hcal{C}_b(G)$ with $\| \alpha \|_\infty \leq 1$: \begin{items} \item there is a weight $\omega \!: G \to [1,\infty)$ such that $\alpha = \omega^{-1}$; \item \begin{alphitems} \item the map \begin{equation} {\mathbb M}hcal{C}_0(G) \to {\mathbb M}hcal{C}_0(G), \quad f \mapsto \alpha f {\widehat{\mathrm{env}}}nd{equation} has dense range; \item there is $\Omega \in L^\infty(G \times G)$ with $\| \Omega \|_\infty \leq 1$ such that \[ \alpha \otimes \alpha = (\Gamma \alpha)\Omega. \] {\widehat{\mathrm{env}}}nd{alphitems} {\widehat{\mathrm{env}}}nd{items} {\widehat{\mathrm{env}}}nd{corollary} \section{Weight inverses and Beurling--Fourier algebras} Let $G$ be a locally compact group, and let $\lambda \!: G \to {\mathbb M}hcal{B}(L^2(G))$ be the left regular representation of $G$ on $L^2(G)$, i.e., \[ (\lambda(x) \xi)(y) := \xi(x^{-1}y) \qquad (\xi \in L^2(G), \, x,y \in G). \] Through integration, $\lambda$ ``extends'' to a $^\ast$-representation of the group algebra $L^1(G)$; we use the symbol $\lambda$ for it as well. We define \[ C^\ast_r(G) := \overline{\lambda(L^1(G))}^{\| \cdot \|} \qquad\text{and}\qquad \operatorname{VN}(G) := \overline{\lambda(L^1(G))}^{\text{weak$^\ast$}}, \] the {\widehat{\mathrm{env}}}mph{reduced group ${C^\ast}$-algebra} and the {\widehat{\mathrm{env}}}mph{group von Neumann algebra} of $G$, respectively. The {\widehat{\mathrm{env}}}mph{Fourier algebra} $A(G)$ of $G$ is the predual of $\operatorname{VN}(G)$ (see \cite{Eym}). \par We introduce a co-multiplication $\hat{\Gamma} \!: \operatorname{VN}(G) \to \operatorname{VN}(G) \bar{\otimes} \operatorname{VN}(G) \cong \operatorname{VN}(G \times G)$, thus turning $A(G)$ into a completely contractive Banach algebra. To this end, define $W \in {\mathbb M}hcal{B}(L^2(G \times G))$ via \[ (W \boldsymbol{\xi})(x,y) := \boldsymbol{\xi}(x,xy) \qquad (\boldsymbol{\xi} \in L^2(G \times G), \, x,y \in G). \] Then \[ \hat{\Gamma} \!: {\mathbb M}hcal{B}(L^2(G)) \to {\mathbb M}hcal{B}(L^2(G \times G)), \quad T \mapsto W^{-1}(T \otimes 1)W \] is a co-multiplication, satisfying \[ \hat{\Gamma} \lambda(x) = \lambda(x) \otimes \lambda(x) \qquad (x \in G); \] it follows that $\hat{\Gamma} \operatorname{VN}(G) \subset \operatorname{VN}(G \times G)$. Let the product on $A(G)$ induced by $\hat{\Gamma}$ be denoted by $\hat{\ast}$. 
Given $f,g \in A(G)$ and $x \in G$, we have \[ {{\mathrm{op}}eratorname{lan}}gle f \hat{\ast} g, \lambda(x) {{\mathrm{op}}eratorname{ran}}gle = {{\mathrm{op}}eratorname{lan}}gle f \otimes g, \hat{\Gamma} \lambda(x) {{\mathrm{op}}eratorname{ran}}gle = {{\mathrm{op}}eratorname{lan}}gle f \otimes g, \lambda(x) \otimes \lambda(x) {{\mathrm{op}}eratorname{ran}}gle = f(x) g(x), \] i.e., $\hat{\ast}$ is pointwise multiplication. \par Whenever $M$ is a von Neumann algebra, its predual $M_\ast$ is an $M$-bimodule in a canonical manner: \[ {{\mathrm{op}}eratorname{lan}}gle x, y f {{\mathrm{op}}eratorname{ran}}gle := {{\mathrm{op}}eratorname{lan}}gle xy,f {{\mathrm{op}}eratorname{ran}}gle = {{\mathrm{op}}eratorname{lan}}gle y, fx {{\mathrm{op}}eratorname{ran}}gle \qquad (f \in M_\ast, \, x, y \in M). \] Also, if $A$ is a ${C^\ast}$-algebra, we write ${\mathbb M}hcal{M}(A)$ for its {\widehat{\mathrm{env}}}mph{multiplier algebra}. \par With an eye on Corollary {{\mathrm{op}}eratorname{Re}}f{inverseweightcor}, we define: \begin{definition} \label{weightinversedef} Let $G$ be a locally compact group $G$. A {\widehat{\mathrm{env}}}mph{weight inverse} is an element $\omega^{-1}$ of ${\mathbb M}hcal{M}(C^\ast_r(G))$ with $\| \omega^{-1} \| \leq 1$ such that the following are satisfied: \begin{alphitems} \item the maps \begin{equation} \label{denserange1} C^\ast_r(G) \to C^\ast_r(G), \quad x \mapsto x \omega^{-1} {\widehat{\mathrm{env}}}nd{equation} and \begin{equation} \label{denserange2} C^\ast_r(G) \to C^\ast_r(G), \quad x \mapsto \omega^{-1} x {\widehat{\mathrm{env}}}nd{equation} have dense range; \item there is $\Omega \in \operatorname{VN}(G \times G)$ with $\| \Omega \| \leq 1$ such that \[ \omega^{-1} \otimes \omega^{-1} = (\hat{\Gamma} \omega^{-1})\Omega; \] {\widehat{\mathrm{env}}}nd{alphitems} The corresponding {\widehat{\mathrm{env}}}mph{Beurling--Fourier algebra} is defined as \[ A(G,\omega) := \{ \omega^{-1} f : f \in A(G) \}. \] {\widehat{\mathrm{env}}}nd{definition} \begin{remarks} \item We have not defined what $\omega$ is: the $\omega$ in $A(G,\omega)$ is thus purely symbolic. However, a simple Hahn--Banach argument shows that $\omega^{-1} \!: L^2(G) \to L^2(G)$ is injective with dense range (as is $(\omega^{-1})^\ast$). We can thus define $\omega \!: \omega^{-1}L^2(G) \to L^2(G)$ as the inverse of $\omega^{-1} \!: L^2(G) \to \omega^{-1} L^2(G)$. It is immediate (\cite[Proposition II.6.2]{Yos}) that $\omega$ is closable and thus extends to a closed (necessarily densely defined) operator on $L^2(G)$. \item If $\omega^{-1}$ is self-adjoint, then it is sufficient that one of ({{\mathrm{op}}eratorname{Re}}f{denserange1}) or ({{\mathrm{op}}eratorname{Re}}f{denserange2}) have dense range. {\widehat{\mathrm{env}}}nd{remarks} \par At the first glance, it may seem bewildering that we do not require weight inverses to be {\widehat{\mathrm{env}}}mph{positive} elements of ${\mathbb M}hcal{M}(C^\ast_r(G))$. The reason for this is that we are interested in later extending Definition {{\mathrm{op}}eratorname{Re}}f{weightinversedef} to an $L^p$-context for general $p \in (1,\infty)$, where there is no suitable notion of positivity available. Still, not requiring in Definition {{\mathrm{op}}eratorname{Re}}f{weightinversedef} that $\omega^{-1}$ be positive, does not yield any more Beurling--Fourier algebras, as the next proposition shows: \begin{proposition} \label{posweight} Let $G$ be a locally compact group, and let $\omega^{-1} \in {\mathbb M}hcal{M}(C^\ast_r(G))$ be a weight inverse. 
Then $|(\omega^{-1})^\ast | \in {\mathbb M}hcal{M}(C^\ast_r(G))$ is also a weight inverse such that the corresponding Beurling--Fourier algebra coincides with $A(G,\omega)$. {\widehat{\mathrm{env}}}nd{proposition} \begin{proof} Due to Definition {{\mathrm{op}}eratorname{Re}}f{weightinversedef}(a), the sets $\{ x \omega^{-1} : x \in C^\ast_r(G) \}$ and $\{ \omega^{-1} x : x \in C^\ast_r(G) \}$ are dense in $C^\ast_r(G)$, as are $\{ x (\omega^{-1})^\ast : x \in C^\ast_r(G) \}$ and $\{ (\omega^{-1})^\ast x : x \in C^\ast_r(G) \}$. \par Let $(\omega^{-1})^\ast = u | (\omega^{-1})^\ast |$ be the polar decomposition of $(\omega^{-1})^\ast$. Then \begin{multline*} \{ x | (\omega^{-1})^\ast | : x \in C^\ast_r(G) \} \supset \\ \{ x | (\omega^{-1})^\ast | | (\omega^{-1})^\ast | : x \in C^\ast_r(G) \} = \{ (x \omega^{-1}) (\omega^{-1})^\ast : x \in C^\ast_r(G) \} {\widehat{\mathrm{env}}}nd{multline*} is dense in $C^\ast_r(G)$, as is---by an analogous argument---$\{ | (\omega^{-1})^\ast | x: x \in C^\ast_r(G) \}$, i.e., \[ C^\ast_r(G) \to C^\ast_r(G), \quad x \mapsto x | (\omega^{-1})^\ast | \] and \[ C^\ast_r(G) \to C^\ast_r(G), \quad x \mapsto | (\omega^{-1})^\ast | x \] each have dense range. \par As we remarked after Definition {{\mathrm{op}}eratorname{Re}}f{weightinversedef}, $(\omega^{-1})^\ast$ is injective with dense range. Consequently, the partial isometry $u$ must be unitary; note also that $u \in \operatorname{VN}(G)$ (\cite[Proposition II.3.14]{Tak}). Let $\Omega$ be as in Definition {{\mathrm{op}}eratorname{Re}}f{weightinversedef}(b). Then we have \[ | (\omega^{-1})^\ast | \otimes | (\omega^{-1})^\ast | = (\omega^{-1} \otimes \omega^{-1})(u \otimes u) = (\hat{\Gamma} \omega^{-1})\Omega(u \otimes u) = (\hat{\Gamma} | (\omega^{-1})^\ast | ) (\hat{\Gamma} u) \Omega(u \otimes u). \] As $\| (\hat{\Gamma} u) \Omega(u \otimes u) \| = \| \Omega \| \leq 1$, it follows that $| (\omega^{-1})^\ast |$ satisfies Definition {{\mathrm{op}}eratorname{Re}}f{weightinversedef}(b) with $(\hat{\Gamma} u) \Omega(u \otimes u)$ {\widehat{\mathrm{env}}}mph{en lieu} of $\Omega$. \par Finally, note that \[ A(G,\omega) = \{ \omega^{-1} f : f \in A(G) \} = \{ \omega^{-1} u f : f \in A(G) \} = \{ | (\omega^{-1})^\ast | f : f \in A(G) \}, \] so that the Beurling--Fourier algebras corresponding to $\omega^{-1}$ and $|(\omega^{-1})^\ast|$ coincide. {\widehat{\mathrm{env}}}nd{proof} \par For abelian groups and positive weight inverses, the Beurling--Fourier algebras in the sense of Definition {{\mathrm{op}}eratorname{Re}}f{weightinversedef} are in perfect duality with the classical Beurling algebras, as we shall now see. \par If $G$ is a locally compact abelian group with dual group $\hat{G}$, we always suppose that Haar measures on $G$ and $\hat{G}$ are scaled such that the Fourier inversion formula (\cite[1.5.1, Theorem]{Rud}) holds. In this case, there is a unique unitary ${\mathbb M}hcal{P} \!: L^2(G) \to L^2(\hat{G})$---the {\widehat{\mathrm{env}}}mph{Plancherel transform}---that coincides with the Fourier transform ${\mathbb M}hcal{F} \!: L^1(G) \to A(\hat{G})$ on $L^1(G) \cap L^2(\hat{G})$. By $\hat{{\mathbb M}hcal{F}}$ and $\hat{{\mathbb M}hcal{P}}$, we denote the Fourier and Plancherel transforms, respectively, arising from $\hat{G}$. Also, if $G$ is any locally compact group and if $\phi \in L^\infty(G)$, we denote the corresponding multiplication operator on $L^2(G)$ by $M_\phi$ (slightly abusing notation, we shall often write $\phi$ instead of $M_\phi$). 
Finally, if $G$ is a locally compact group, and $\phi \!: G \to {{\mathbb M}hbb C}$ is any function, we define functions $\bar{\phi}$, $\check{\phi}$, and $\tilde{\phi}$ on $G$ by letting \[ \bar{\phi}(x) := \overline{\phi(x)}, \quad \check{\phi}(x) := \phi(x^{-1}), \quad\text{and}\quad \tilde{\phi}(x) := \overline{\phi}(x^{-1}) \qquad (x \in G). \] \par The following lemma is known by all likelihood, but for lack of a suitable reference, we give a proof: \begin{lemma} \label{planchl} Let $G$ be a locally compact abelian group with dual group $\hat{G}$. Then we have: \begin{equation} \label{intertw} {\mathbb M}hcal{P}^\ast \lambda(f) {\mathbb M}hcal{P} = M_{(\hat{{\mathbb M}hcal{F}} f)^\vee} \qquad (f \in L^1(\hat{G})). {\widehat{\mathrm{env}}}nd{equation} {\widehat{\mathrm{env}}}nd{lemma} \begin{proof} Let $f, \xi \in L^1(G) \cap L^2(G)$. Then \[ {\mathbb M}hcal{P}(\lambda(f) \xi) = {\mathbb M}hcal{F}(f \ast \xi) = ({\mathbb M}hcal{F}f) ({\mathbb M}hcal{P}\xi) \] holds. It follows that \begin{equation} \label{intertw0} {\mathbb M}hcal{P} \lambda(f) {\mathbb M}hcal{P}^\ast = M_{{\mathbb M}hcal{F}f} \qquad (f \in L^1(G)). {\widehat{\mathrm{env}}}nd{equation} \par Let $V \in {\mathbb M}hcal{B}(L^2(G))$ be the unitary operator given by $V\xi := \check{\xi}$ for $\xi \in L^2(G)$. It is routinely checked that ${\mathbb M}hcal{P}^\ast = V \hat{{\mathbb M}hcal{P}}$. Replacing the r\^oles of $G$ and $\hat{G}$, we obtain from ({{\mathrm{op}}eratorname{Re}}f{intertw0}) that \[ {\mathbb M}hcal{P}^\ast \lambda(f) {\mathbb M}hcal{P} = V \hat{{\mathbb M}hcal{P}} \lambda(f) \hat{{\mathbb M}hcal{P}}^\ast V = V M_{\hat{{\mathbb M}hcal{F}} f} V = M_{(\hat{{\mathbb M}hcal{F}} f)^\vee} \qquad (f \in L^1(\hat{G})), \] which proves ({{\mathrm{op}}eratorname{Re}}f{intertw}). {\widehat{\mathrm{env}}}nd{proof} \begin{lemma} \label{adjFourier} Let $G$ be a locally compact abelian group with dual group $\hat{G}$, and let ${\mathbb M}hcal{F} \!: L^1(G) \to A(\hat{G})$ be the Fourier transform. Then ${\mathbb M}hcal{F}^\ast \!: \operatorname{VN}(\hat{G}) \to L^\infty(G)$ is a $^\ast$-isomorphism that maps $C^\ast_r(\hat{G})$ onto ${\mathbb M}hcal{C}_0(G)$ and satisfies \begin{equation} \label{comultcomp} ({\mathbb M}hcal{F}^\ast \otimes {\mathbb M}hcal{F}^\ast) \circ \hat{\Gamma} = \Gamma \circ {\mathbb M}hcal{F}^\ast. {\widehat{\mathrm{env}}}nd{equation} {\widehat{\mathrm{env}}}nd{lemma} \begin{proof} Since ${\mathbb M}hcal{F}$ is an isomorphism of Banach algebras and since the multiplication in $L^1(G)$ and $A(\hat{G})$, respectively, arises from $\Gamma$ and $\hat{\Gamma}$, respectively, it is clear that ({{\mathrm{op}}eratorname{Re}}f{comultcomp}) holds. \par To tell the Hilbert space inner product of $L^2(\hat{G})$ apart from a Banach space duality ${{\mathrm{op}}eratorname{lan}}gle \cdot, \cdot {{\mathrm{op}}eratorname{ran}}gle$, we write ${{\mathrm{op}}eratorname{lan}}gle \cdot | \cdot {{\mathrm{op}}eratorname{ran}}gle$. 
Let $f, g \in L^1(G)$, and let $\xi, \eta \in L^2(G)$ be such that $g = \xi \bar{\eta}$; we have: \[ \begin{split} \langle g, \mathcal{F}^\ast(\lambda(f)) \rangle & = \langle \xi \bar{\eta}, \mathcal{F}^\ast(\lambda(f)) \rangle \\ & = \langle \mathcal{F} (\xi\bar{\eta}), \lambda(f) \rangle \\ & = \left\langle \mathcal{P}\xi \ast \widetilde{\mathcal{P} \eta}, \lambda(f) \right\rangle \\ & = \langle \lambda(f) \mathcal{P}\xi | \mathcal{P}\eta \rangle \\ & = \langle \mathcal{P}^\ast \lambda(f) \mathcal{P}\xi | \eta \rangle \\ & = \langle M_{(\hat{\mathcal{F}}f)^\vee} \xi | \eta \rangle, \qquad\text{by Lemma \ref{planchl}}, \\ & = \langle g, (\hat{\mathcal{F}} f)^\vee \rangle. \end{split} \] It follows that \begin{equation} \label{intertw+} \mathcal{F}^\ast (\lambda(f)) = \mathcal{P}^\ast \lambda(f) \mathcal{P} = (\hat{\mathcal{F}}f)^\vee. \end{equation} From the first equality, it is immediate that $\mathcal{F}^\ast$ is a $^\ast$-homomorphism. The second equality shows that $\mathcal{F}^\ast$ maps the algebra $\lambda(L^1(\hat{G}))$, which is dense in $C^\ast_r(\hat{G})$, onto a dense subalgebra of $\mathcal{C}_0(G)$. \end{proof} \begin{proposition} \label{dualprop} Let $G$ be a locally compact abelian group with dual group $\hat{G}$, and let $\mathcal{F} \!: L^1(G) \to A(\hat{G})$ be the Fourier transform. Then: \begin{items} \item a continuous function $\omega \!: G \to [1,\infty)$ is a weight if and only if $(\hat{\omega})^{-1} := (\mathcal{F}^\ast)^{-1}(\omega^{-1})$ is a positive weight inverse, in which case $\mathcal{F}(L^1(G,\omega)) = A(\hat{G},\hat{\omega})$; \item if $\omega^{-1} \in \mathcal{M}(C^\ast_r(\hat{G}))$ is a positive weight inverse, then $(\mathcal{F}^\ast \omega^{-1})^{-1}$ is a weight on $G$. \end{items} \end{proposition} \begin{proof} As $\mathcal{F}^\ast$ is a $^\ast$-isomorphism mapping $C^\ast_r(\hat{G})$ onto $\mathcal{C}_0(G)$, it is clear that $\mathcal{F}^\ast$ and its inverse respect positivity and map the closed unit balls of $\mathcal{M}(C^\ast_r(\hat{G}))$ and $\mathcal{C}_b(G)$ onto each other. \par Let $\omega$ be a weight on $G$. Then $\alpha := \omega^{-1}$ is a non-negative function in the unit ball of $\mathcal{C}_b(G)$ satisfying Corollary \ref{inverseweightcor}(ii). From Lemma \ref{adjFourier}, we conclude that $(\hat{\omega})^{-1} := (\mathcal{F}^\ast)^{-1}(\alpha)$ satisfies Definition \ref{weightinversedef}, i.e., is a weight inverse.
\par Conversely, if $\omega^{-1} \in {\mathbb M}hcal{M}(C^\ast_r(\hat{G}))$ is a positive weight inverse, then $\alpha := {\mathbb M}hcal{F}^\ast \omega^{-1}$ is a non-negative function in the unit ball of ${\mathbb M}hcal{C}_b(G)$ satisfying Corollary {{\mathrm{op}}eratorname{Re}}f{inverseweightcor}(ii), so that $({\mathbb M}hcal{F}^\ast \omega^{-1})^{-1}$ is a weight by that corollary. \par Let $\omega \!: G \to [1,\infty)$ be a weight, and let $(\hat{\omega})^{-1}$ be defined as in (i). To see that ${\mathbb M}hcal{F}(L^1(G,\omega)) = A(\hat{G},\hat{\omega})$, first note that \[ ({\mathbb M}hcal{F}^\ast)^{-1} \phi = {\mathbb M}hcal{P} M_\phi {\mathbb M}hcal{P}^\ast \qquad (\phi \in L^\infty(G)) \] by ({{\mathrm{op}}eratorname{Re}}f{intertw+}). Let $f \in L^1(G)$. Choose $\xi, {\widehat{\mathrm{env}}}ta \in L^2(G)$ such that $f = \xi \bar{{\widehat{\mathrm{env}}}ta}$. Then we have: \[ {\mathbb M}hcal{F}(\omega^{-1}f) = {\mathbb M}hcal{F}(\omega^{-1}\xi \bar{{\widehat{\mathrm{env}}}ta}) = {\mathbb M}hcal{P}(M_{\omega^{-1}} \xi) \ast \widetilde{{\mathbb M}hcal{P}{\widehat{\mathrm{env}}}ta} = (({\mathbb M}hcal{P}M_{\omega^{-1}}{\mathbb M}hcal{P}^\ast){\mathbb M}hcal{P}\xi) \ast \widetilde{{\mathbb M}hcal{P}{\widehat{\mathrm{env}}}ta} = ({\mathbb M}hcal{F}^\ast)^{-1} (\omega^{-1})({\mathbb M}hcal{F}f). \] This completes the proof. {\widehat{\mathrm{env}}}nd{proof} \par We now give examples for weight inverses in the sense of Definition {{\mathrm{op}}eratorname{Re}}f{weightinversedef}: \begin{examples} \item Let $G$ be a locally compact abelian group with dual group $\hat{G}$, and let ${\mathbb M}hcal{F} \!: L^1(G) \to A(\hat{G})$ denote the Fourier transform. By Proposition {{\mathrm{op}}eratorname{Re}}f{dualprop}(i), $(\hat{\omega})^{-1} := ({\mathbb M}hcal{F}^\ast )^{-1}(\omega^{-1})$ is a weight inverse for every weight $\omega \!: G \to [1,\infty)$, and ${\mathbb M}hcal{F}(L^1(G,\omega)) = A(\hat{G},\hat{\omega})$ holds. By Proposition {{\mathrm{op}}eratorname{Re}}f{dualprop}(ii), every weight inverse in ${\mathbb M}hcal{M}(C^\ast_r(\hat{G}))$ arises in this fashion. \item In \cite{LS}, H.\ H.\ Lee and E.\ Samei define Beurling--Fourier algebras using an explicit definition of a weight on the dual of a locally compact group $G$ (\cite[Definition 2.4]{LS}). A weight in their sense is a closed, densely defined, positive operator $\omega$ on $L^2(G)$ affiliated with $\operatorname{VN}(G)$ satisfying various properties. In particular, they require: \begin{items} \item $\omega$ has a bounded inverse $\omega^{-1} \in \operatorname{VN}(G)$; \item $(\hat{\Gamma} \omega)(\omega^{-1} \otimes \omega^{-1}) \leq 1$ (for the definition of $\hat{\Gamma}\omega$, see \cite{LS}); \item $\{ x \omega^{-1} : x \in VN(G) \}$ is weak$^\ast$ dense in $\operatorname{VN}(G)$. {\widehat{\mathrm{env}}}nd{items} By multiplying $\omega$, if necessary, with a positive scalar, there is also no loss of generality to suppose that $\| \omega^{-1} \| \leq 1$. \par Suppose that $G$ is compact, so that \begin{equation}\label{compactcase} C_r^\ast(G)^{\ast\ast} = \operatorname{VN}(G) = {\mathbb M}hcal{M}(C^\ast_r(G)), {\widehat{\mathrm{env}}}nd{equation} and let $\omega$ be a weight on the dual of $G$ in the sense of \cite[Definition 2.4]{LS}. We claim that $\omega^{-1}$ is a weight inverse in the sense of Definition {{\mathrm{op}}eratorname{Re}}f{weightinversedef}. First of all, note that $\omega^{-1} \in {\mathbb M}hcal{M}(C^\ast_r(G))$ by ({{\mathrm{op}}eratorname{Re}}f{compactcase}). 
Since the weak$^\ast$ topology of $\operatorname{VN}(G)$ restricted to $C^\ast_r(G)$ is the weak topology, we obtain that $\{ x \omega^{-1} : x \in C^\ast_r(G) \}$ is norm dense in $C^\ast_r(G)$; as $\omega^{-1}$ is positive, $\{ \omega^{-1} x : x \in C^\ast_r(G) \}$ is also norm dense in $C^\ast_r(G)$. Finally, set $\Omega := (\hat{\Gamma} \omega)(\omega^{-1} \otimes \omega^{-1})$, so that \[ \omega^{-1} \otimes \omega^{-1} =(\hat{\Gamma}\omega^{-1}) (\hat{\Gamma} \omega)(\omega^{-1} \otimes \omega^{-1}) = (\hat{\Gamma}\omega^{-1})\Omega. \] \par It follows that the central weights discussed in \cite[Subsection 2.2]{LS} as well as the weights introduced in \cite[Section 3]{LST}, all yield weight inverses in the sense of Definition {{\mathrm{op}}eratorname{Re}}f{weightinversedef}. {\widehat{\mathrm{env}}}nd{examples} \par So far, we have only labeled the Beurling--Fourier ``algebras'' algebras without showing that they are indeed algebras. \par For the next theorem note that, if $G$ is a locally compact group and $\omega^{-1} \in {\mathbb M}hcal{M}(C^\ast_r(G))$ is a weight inverse. Then ({{\mathrm{op}}eratorname{Re}}f{denserange1}) has a dense range, so that its adjoint is injective, as is the restriction \begin{equation} \label{rats} A(G) \to A(G), \quad f \mapsto \omega^{-1} f {\widehat{\mathrm{env}}}nd{equation} to $A(G)$. \begin{theorem} \label{BFthm} Let $G$ be a locally compact group, and let $\omega^{-1} \in {\mathbb M}hcal{M}(C^\ast_r(G))$ be a weight inverse. Then $A(G,\omega)$ is a dense subalgebra of $A(G)$. Moreover, if $A(G,\omega)$ is equipped with the unique operator space structure turning the bijection \[ A(G) \to A(G,\omega), \quad f \mapsto \omega^{-1} f \] into a complete isometry, it is a completely contractive Banach algebra. {\widehat{\mathrm{env}}}nd{theorem} \par We refrain from giving a proof here because we will prove a more general result in the context of Fig\`a-Talamanca--Herz algebras (see Theorem {{\mathrm{op}}eratorname{Re}}f{BFTHthm} below). \section{Beurling--Fig\`a-Talamanca--Herz algebras} Let $G$ once again be a locally compact group, let $p \in (1,\infty)$, and let $\lambda_p \!: G \to {\mathbb M}hcal{B}(L^p(G))$ be the left regular representation of $G$ on $L^p(G)$, meaning \[ (\lambda_p(x) \xi)(y) := \xi(x^{-1}y) \qquad (\xi \in L^p(G), \, x,y \in G); \] we also write $\lambda_p$ for the representation of $L^1(G)$ on $L^p(G)$ obtained through integration. We define \[ \mathrm{PF}_p(G) := \overline{\lambda_p(L^1(G))}^{\| \cdot \|} \qquad\text{and}\qquad \operatorname{PM}_p(G) := \overline{\lambda_p(L^1(G))}^{\text{weak$^\ast$}}, \] the {\widehat{\mathrm{env}}}mph{$p$-pseudofunctions} and the {\widehat{\mathrm{env}}}mph{$p$-pseudomeasures} on $G$, respectively; we also define \[ {\mathbb M}hcal{M}(\mathrm{PF}_p(G)) := \{ x \in \operatorname{PM}_p(G) : \text{$x\mathrm{PF}_p(G) \subset \mathrm{PF}_p(G)$ and $\mathrm{PF}_p(G)x \subset \mathrm{PF}_p(G)$} \}. \] The $p$-pseudomeasures form a weak$^\ast$ closed subspace of the dual Banach space ${\mathbb M}hcal{B}(L^p(G))$ and thus have a canonical predual, the {\widehat{\mathrm{env}}}mph{Fig\`a-Talamanca--Herz algebra} $A_p(G)$. \par For what follows, we require the theory of $p$-operator spaces, which is outlined in \cite{Daw}, for instance. 
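\par For later use, let us record, without proof, a concrete description of the elements of $A_p(G)$; it results from the definition of $A_p(G)$ as the predual of $\operatorname{PM}_p(G) \subset \mathcal{B}(L^p(G))$ (compare, for instance, \cite{Eym2} and \cite{Daw}) and is stated here in the normalization used in the proofs below. Let $q \in (1,\infty)$ be dual to $p$, i.e., $\frac{1}{p} + \frac{1}{q} = 1$. Then every $f \in A_p(G)$ can be written as
\[
 f(x) = \sum_{n=1}^\infty \langle \lambda_p(x) \xi_n, \eta_n \rangle \qquad (x \in G)
\]
with $\xi_n \in L^p(G)$ and $\eta_n \in L^q(G)$ such that $\sum_{n=1}^\infty \| \xi_n \|_p \| \eta_n \|_q < \infty$; the norm of $f$ in $A_p(G)$ is the infimum of $\sum_{n=1}^\infty \| \xi_n \|_p \| \eta_n \|_q$ over all such representations, and the duality with $\operatorname{PM}_p(G)$ is given by $\langle f, T \rangle = \sum_{n=1}^\infty \langle T \xi_n, \eta_n \rangle$ for $T \in \operatorname{PM}_p(G)$.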
\par There are a $p$-completely contractive, weak$^\ast$ continuous map $\hat{\Gamma}_p \!: \operatorname{PM}_p(G) \to \operatorname{PM}_p(G \times G)$ with \[ \hat{\Gamma}_p \lambda_p(x) = \lambda_p(x) \otimes \lambda_p(x) \qquad (x \in G) \] as well as a canonical, weak$^\ast$ continuous, $p$-complete contraction $\theta \!: \operatorname{PM}_p(G \times G) \to (A_p(G) \hat{\otimes}_p A_p(G))^\ast$ such that the preadjoint $(\theta \hat{\Gamma}_p)_\ast \!: A_p(G) \hat{\otimes}_p A_p(G) \to A_p(G)$ is pointwise multiplication (here, $\hat{\otimes}_p$ stands for the projective tensor product of $p$-operator spaces; see \cite{Daw} for the definition). \par As $A(G)$ is a completely contractive $\operatorname{VN}(G)$-bimodule, $A_p(G)$ is a $p$-completely contractive $\operatorname{PM}_p(G)$-bimodule. We can thus extend Definition {{\mathrm{op}}eratorname{Re}}f{weightinversedef}: \begin{definition} \label{pweightinversedef} Let $G$ be a locally compact group $G$, and let $p \in (1,\infty)$. A {\widehat{\mathrm{env}}}mph{weight inverse} is an element $\omega^{-1}$ of ${\mathbb M}hcal{M}(\mathrm{PF}_p(G))$-with $\| \omega^{-1} \| \leq 1$ such that the following are satisfied: \begin{alphitems} \item the maps \begin{equation} \label{denserange3} \mathrm{PF}_p(G) \to \mathrm{PF}_p(G), \quad x \mapsto x \omega^{-1} {\widehat{\mathrm{env}}}nd{equation} and \begin{equation} \label{denserange4} \mathrm{PF}_p(G) \to \mathrm{PF}_p(G), \quad x \mapsto \omega^{-1} x {\widehat{\mathrm{env}}}nd{equation} have dense range; \item there is $\Omega \in \operatorname{PM}_p(G \times G)$ with $\| \Omega \| \leq 1$ such that \[ \omega^{-1} \otimes \omega^{-1} = (\hat{\Gamma}_p \omega^{-1})\Omega. \] {\widehat{\mathrm{env}}}nd{alphitems} The corresponding {\widehat{\mathrm{env}}}mph{Beurling--Fig\`a-Talamanca--Herz algebra} is defined as \[ A_p(G,\omega) := \{ \omega^{-1} f : f \in A_p(G) \}. \] {\widehat{\mathrm{env}}}nd{definition} \par We have a canonical extension of Theorem {{\mathrm{op}}eratorname{Re}}f{BFthm} to Beurling--Fig\`a-Talamanca--Herz algebras: \begin{theorem} \label{BFTHthm} Let $G$ be a locally compact group, let $p \in (1,\infty)$, and let $\omega^{-1} \in {\mathbb M}hcal{M}(\mathrm{PF}_p(G))$ be a weight inverse. Then $A_p(G,\omega)$ is a dense subalgebra of $A_p(G)$. Moreover, if $A_p(G,\omega)$ is equipped with the unique $p$-operator space structure turning the bijection \begin{equation} \label{injective} A_p(G) \to A_p(G,\omega), \quad f \mapsto \omega^{-1} f {\widehat{\mathrm{env}}}nd{equation} into a complete isometry, it is a $p$-completely contractive Banach algebra. {\widehat{\mathrm{env}}}nd{theorem} \begin{proof} To show that $A_p(G,\omega)$ is dense in $A_p(G)$, let $x \in \operatorname{PM}_p(G)$ be such that ${{\mathrm{op}}eratorname{lan}}gle f, x {{\mathrm{op}}eratorname{ran}}gle = 0$ for $f \in A_p(G,\omega)$, i.e., ${{\mathrm{op}}eratorname{lan}}gle \omega^{-1} f, x {{\mathrm{op}}eratorname{ran}}gle = {{\mathrm{op}}eratorname{lan}}gle f, x \omega^{-1} {{\mathrm{op}}eratorname{ran}}gle = 0$ for $f \in A_p(G)$. It follows that $x \omega^{-1} = 0$. As ({{\mathrm{op}}eratorname{Re}}f{denserange4}) has dense range in $\mathrm{PF}_p(G)$, the set $\{ \omega^{-1} y : y \in \operatorname{PM}_p(G) \}$ is weak$^\ast$ dense in $\operatorname{PM}_p(G)$, so that $x \operatorname{PM}_p(G) = \{ 0 \}$. Since $\operatorname{PM}_p(G)$ is unital, we conclude that $x = 0$, so that $A_p(G,\omega)$ is dense in $A_p(G)$ by the Hahn--Banach theorem. 
\par To see that $A_p(G,\omega)$ is multiplicatively closed, let $f,g \in A_p(G)$, let $x \in \operatorname{PM}_p(G)$, and note that \[ \begin{split} {{\mathrm{op}}eratorname{lan}}gle (\omega^{-1} f)(\omega^{-1} g),x {{\mathrm{op}}eratorname{ran}}gle & = {{\mathrm{op}}eratorname{lan}}gle (\omega^{-1} \otimes \omega^{-1})(f \otimes g), \theta \hat{\Gamma}_p x {{\mathrm{op}}eratorname{ran}}gle \\ & = {{\mathrm{op}}eratorname{lan}}gle (\omega^{-1} \otimes \omega^{-1}) \theta_\ast(f \otimes g), \hat{\Gamma}_p x {{\mathrm{op}}eratorname{ran}}gle \\ & = {{\mathrm{op}}eratorname{lan}}gle (\hat{\Gamma}_p \omega^{-1}) \Omega \theta_\ast(f \otimes g), \hat{\Gamma}_p x {{\mathrm{op}}eratorname{ran}}gle \\ & = {{\mathrm{op}}eratorname{lan}}gle \Omega \theta_\ast (f \otimes g), \hat{\Gamma}_p(x \omega^{-1}) {{\mathrm{op}}eratorname{ran}}gle \\ & = {{\mathrm{op}}eratorname{lan}}gle (\hat{\Gamma}_p)_\ast(\Omega \theta_\ast(f \otimes g)), x \omega^{-1} {{\mathrm{op}}eratorname{ran}}gle \\ & = {{\mathrm{op}}eratorname{lan}}gle \omega^{-1} (\hat{\Gamma}_p)_\ast(\Omega \theta_\ast(f \otimes g)), x {{\mathrm{op}}eratorname{ran}}gle, {\widehat{\mathrm{env}}}nd{split} \] i.e., \begin{equation} \label{BFTHprod} (\omega^{-1} f)(\omega^{-1} g) = \omega^{-1} (\hat{\Gamma}_p)_\ast(\Omega \theta_\ast(f \otimes g)) \in A_p(G,\omega). {\widehat{\mathrm{env}}}nd{equation} Hence, $A_p(G,\omega)$ is a subalgebra of $A_p(G)$. \par Since $\theta_\ast$, $(\hat{\Gamma}_p)_\ast$, and \[ A_p(G \times G) \to A_p(G \times G), \quad F \mapsto \Omega F \] are $p$-complete contractions, so is their composition, it follows from ({{\mathrm{op}}eratorname{Re}}f{BFTHprod}) that $A_p(G,\omega)$ is a $p$-completely contractive Banach algebra. {\widehat{\mathrm{env}}}nd{proof} \par If $G$ is a locally compact group and $\omega$ is a weight on $G$, then $L^1(G,\omega) = L^1(G)$ with equivalent norms if and only if $\omega$ is bounded, which is trivially satisfied for compact $G$. In view of the duality between $L^1$- and Fourier algebras, one should expect a similar result for Beurling--Fourier algebras which should always be true on discrete groups. Indeed, this holds even for Beurling--Fig\`a-Talamanca--Herz algebras: \begin{proposition} Let $G$ be a locally compact group, and let $\omega^{-1} \in {\mathbb M}hcal{M}(\mathrm{PF}_p(G))$ be a weight inverse. Then the following are equivalent: \begin{items} \item the inclusion map from $A_p(G,\omega)$ into $A_p(G)$ is surjective; \item the inclusion map from $A_p(G,\omega)$ into $A_p(G)$ is surjective and has a $p$-completely bounded inverse; \item $\omega^{-1}$ is left invertible in $\operatorname{PM}_p(G)$. {\widehat{\mathrm{env}}}nd{items} If $G$ is discrete, then $\omega^{-1}$ is automatically invertible in ${\mathbb M}hcal{M}(\mathrm{PF}_p(G))$, so that {\widehat{\mathrm{env}}}mph{(i)}, {\widehat{\mathrm{env}}}mph{(ii)}, and {\widehat{\mathrm{env}}}mph{(iii)} hold. {\widehat{\mathrm{env}}}nd{proposition} \begin{proof} (iii) $\Longrightarrow$ (ii) $\Longrightarrow$ (i) hold trivially. \par (i) $\Longrightarrow$ (iii): The composition of ({{\mathrm{op}}eratorname{Re}}f{injective}) with the canonical inclusion of $A_p(G,\omega)$ into $A_p(G)$ is \[ A_p(G) \to A_p(G), \quad f \mapsto \omega^{-1} f. \] If this map is bijective, then so is \[ \operatorname{PM}_p(G) \to \operatorname{PM}_p(G), \quad x \mapsto x \omega^{-1}. \] As $\operatorname{PM}_p(G)$ is unital, $\omega^{-1}$ must be left invertible in $\operatorname{PM}_p(G)$. 
\par If $G$ is discrete, then $\mathrm{PF}_p(G)$ is unital, so that $\mathcal{M}(\mathrm{PF}_p(G)) = \mathrm{PF}_p(G)$. From the density of the ranges of (\ref{denserange3}) and (\ref{denserange4}) in $\mathrm{PF}_p(G)$, it is clear that the left ideal $\{ x\omega^{-1} : x\in \mathrm{PF}_p(G) \}$ and the right ideal $\{ \omega^{-1} x : x \in \mathrm{PF}_p(G) \}$ are both dense in $\mathrm{PF}_p(G)$ and thus, by basic Banach algebra theory, all of $\mathrm{PF}_p(G)$. It follows that $\omega^{-1}$ is invertible in $\mathrm{PF}_p(G)$. \end{proof} \begin{remark} Suppose that $p =2$ and that $\omega^{-1} \in \mathcal{M}(C^\ast_r(G))$ is a weight inverse such that (\ref{rats}) is surjective (and thus, automatically, bijective). As $(\omega^{-1})^\ast = u | (\omega^{-1})^\ast|$ with $u \in \operatorname{VN}(G)$ unitary---see the proof of Proposition \ref{posweight}---, it follows that \[ A(G) \to A(G), \quad f \mapsto |( \omega^{-1})^\ast| f \] is also bijective. Hence, $|(\omega^{-1})^\ast|$ is left invertible in $\operatorname{VN}(G)$ and, being self-adjoint, actually invertible. This entails that $(\omega^{-1})^\ast$ is invertible in $\operatorname{VN}(G)$ and thus in $\mathcal{M}(C^\ast_r(G))$. In the $p=2$ situation, we thus have the equivalence of: \begin{items} \item the inclusion map from $A(G,\omega)$ into $A(G)$ is surjective; \item the inclusion map from $A(G,\omega)$ into $A(G)$ is surjective and has a completely bounded inverse; \item $\omega^{-1}$ is invertible in $\mathcal{M}(C^\ast_r(G))$. \end{items} \end{remark} \section{A weighted Leptin--Herz theorem} It is well known that, for any locally compact group $G$ and any weight $\omega \!: G \to [1,\infty)$, the Beurling algebra $L^1(G,\omega)$ has a bounded approximate identity (\cite[Proposition 3.7.7]{RSt}). On the other hand, H.\ Leptin proved in \cite{Lep} that a locally compact group $G$ is amenable if and only if $A(G)$ has a bounded approximate identity. This result was subsequently extended to Fig\`a-Talamanca--Herz algebras in \cite{Her2} by C.\ Herz, who claimed this extension to be folklore. \par In this section, we prove a weighted version of the Leptin--Herz theorem: a locally compact group $G$ is amenable if and only if, for all $p \in (1,\infty)$ and all weight inverses $\omega^{-1} \in \mathcal{M}(\mathrm{PF}_p(G))$, the algebra $A_p(G,\omega)$ has a bounded approximate identity. \par As $A_p(G,\omega)$ is dense in $A_p(G)$ with the inclusion being ($p$-completely) contractive, any bounded approximate identity for $A_p(G,\omega)$ is automatically an approximate identity for $A_p(G)$, thus forcing $G$ to be amenable. If one tries to adapt the proof in the unweighted case---via F{\o}lner type conditions---, difficulties show up immediately: in general, the functions in $A_p(G,\omega)$ with compact support need not be dense in $A_p(G,\omega)$. We thus pursue a different route, which is inspired by the theory of Kac algebras (see \cite{ES}). \begin{definition} \label{Ppdef} Let $G$ be a locally compact group, and let $p \in [1,\infty)$. We call a net $(\xi_\alpha )_\alpha$ of non-negative norm one functions in $L^p(G)$ a \emph{$(P_p)$-net} if \[ \sup_{x \in K} \| \lambda_p(x) \xi_\alpha - \xi_\alpha \|_p \to 0 \] for every compact $K \subset G$.
{\widehat{\mathrm{env}}}nd{definition} \begin{remark} The choice of terminology in Definition {{\mathrm{op}}eratorname{Re}}f{Ppdef} is, of course, due to property $(P_p)$ introduced by H.\ Reiter (see \cite[Definition 8.3.1]{RSt}). A locally compact group $G$ is amenable if and only if it has Property $(P_p)$ for one---and, equivalently, for all---$p \in [1,\infty)$ (\cite[Theorem 6.14]{Pie}), i.e., there is a $(P_p)$-net in $L^p(G)$. {\widehat{\mathrm{env}}}nd{remark} \par For any $p \in (1,\infty)$, the $(P_p)$-nets are defined in terms of an asymptotic invariance property. For the proof of our weighted Leptin--Herz theorem, we require three more such properties, which we formulate as three lemmas. \begin{lemma} \label{Pplem} Let $G$ be an amenable locally compact group, and let $p \in (1,\infty)$. Then: \begin{items} \item the augmentation character $1 \in L^\infty(G)$ on $L^1(G)$ extends uniquely to a multiplicative linear functional on ${\mathbb M}hcal{M}(\mathrm{PF}_p(G))$; \item for any $(P_p)$-net $( \xi_\alpha )_\alpha$ in $L^p(G)$, we have \begin{equation} \label{Ppeq} \| x \xi_\alpha - {{\mathrm{op}}eratorname{lan}}gle x,1 {{\mathrm{op}}eratorname{ran}}gle \xi_\alpha \|_p \to 0 \qquad (x \in {\mathbb M}hcal{M}(\mathrm{PF}_p(G)). {\widehat{\mathrm{env}}}nd{equation} {\widehat{\mathrm{env}}}nd{items} {\widehat{\mathrm{env}}}nd{lemma} \begin{proof} It follows from \cite[Theorem 5]{Cow} that $1$ extends (necessarily uniquely) to $\mathrm{PF}_p(G)$ as a (necessarily multiplicative) bounded linear functional. Fix $a \in \mathrm{PF}_p(G)$ with ${{\mathrm{op}}eratorname{lan}}gle a, 1 {{\mathrm{op}}eratorname{ran}}gle =1$, and define \[ \phi \!: {\mathbb M}hcal{M}(\mathrm{PF}_p(G)) \to {{\mathbb M}hbb C}, \quad x \mapsto {{\mathrm{op}}eratorname{lan}}gle xa, 1 {{\mathrm{op}}eratorname{ran}}gle. \] Clearly, $\phi$ is a continuous functional extending $1$. To see that $\phi$ is multiplicative, let $( e_\alpha )_\alpha$ be a bounded approximate identity for $L^1(G)$, so that $(\lambda_p(e_\alpha) )_\alpha$ is a bounded approximate identity for $\mathrm{PF}_p(G)$. We obtain for $x,y \in {\mathbb M}hcal{M}(\mathrm{PF}_p(G))$: \begin{multline*} {{\mathrm{op}}eratorname{lan}}gle xy, \phi {{\mathrm{op}}eratorname{ran}}gle = {{\mathrm{op}}eratorname{lan}}gle xya, 1 {{\mathrm{op}}eratorname{ran}}gle = \lim_\alpha {{\mathrm{op}}eratorname{lan}}gle x \lambda(e_\alpha) ya, 1 {{\mathrm{op}}eratorname{ran}}gle = \lim_\alpha {{\mathrm{op}}eratorname{lan}}gle x \lambda(e_\alpha) , 1 {{\mathrm{op}}eratorname{ran}}gle {{\mathrm{op}}eratorname{lan}}gle y a , 1 {{\mathrm{op}}eratorname{ran}}gle \\ = \lim_\alpha {{\mathrm{op}}eratorname{lan}}gle x \lambda(e_\alpha) a , 1 {{\mathrm{op}}eratorname{ran}}gle {{\mathrm{op}}eratorname{lan}}gle ya,1 {{\mathrm{op}}eratorname{ran}}gle = {{\mathrm{op}}eratorname{lan}}gle xa, 1 {{\mathrm{op}}eratorname{ran}}gle {{\mathrm{op}}eratorname{lan}}gle ya,1 {{\mathrm{op}}eratorname{ran}}gle = {{\mathrm{op}}eratorname{lan}}gle x, \phi {{\mathrm{op}}eratorname{ran}}gle {{\mathrm{op}}eratorname{lan}}gle y, \phi {{\mathrm{op}}eratorname{ran}}gle. {\widehat{\mathrm{env}}}nd{multline*} It is obvious that $\phi$ is the only multiplicative extension of $1$ from $\mathrm{PF}_p(G)$ to ${\mathbb M}hcal{M}(\mathrm{PF}_p(G))$ (for the sake of simplicity, we will also denote this extension by $1$). This proves (i). 
\par For the proof of (ii), first note that ({{\mathrm{op}}eratorname{Re}}f{Ppeq}) holds for $x \in \lambda_p(L^1(G))$: this is due to the fact the the functions with compact support are dense in $L^1(G)$. Due to the norm density of $\lambda(L^1(G))$ in $\mathrm{PF}_p(G)$, we obtain ({{\mathrm{op}}eratorname{Re}}f{Ppeq}) for $x \in \mathrm{PF}_p(G)$ as well. Finally, let $x \in {\mathbb M}hcal{M}(\mathrm{PF}_p(G))$ be arbitrary. Fix $a \in \mathrm{PF}_p(G)$ with ${{\mathrm{op}}eratorname{lan}}gle a, 1 {{\mathrm{op}}eratorname{ran}}gle = 1$, so that $\| a \xi_\alpha - \xi_\alpha \|_p \to 0$ and thus \[ \| x \xi_\alpha - xa \xi_\alpha \|_p \to 0. \] As ${{\mathrm{op}}eratorname{lan}}gle xa,1 {{\mathrm{op}}eratorname{ran}}gle = {{\mathrm{op}}eratorname{lan}}gle x, 1 {{\mathrm{op}}eratorname{ran}}gle$, we obtain \[ \| x \xi_\alpha - {{\mathrm{op}}eratorname{lan}}gle x,1 {{\mathrm{op}}eratorname{ran}}gle \xi_\alpha \|_p \leq \| x \xi_\alpha - xa \xi_\alpha \|_p + \| xa \xi - {{\mathrm{op}}eratorname{lan}}gle xa, 1 {{\mathrm{op}}eratorname{ran}}gle \xi_\alpha \|_p \to 0. \] This completes the proof. {\widehat{\mathrm{env}}}nd{proof} \par Let $p \in (1,\infty)$ be arbitrary. As in the case $p=2$, we define $W_p \in {\mathbb M}hcal{B}(L^p(G \times G))$ by letting \[ (W_p \boldsymbol{\xi})(x,y) := \boldsymbol{\xi}(x,xy) \qquad (\boldsymbol{\xi} \in L^p(G \times G), \, x,y \in G), \] Like in that case, we have \begin{equation} \label{comultdef} \hat{\Gamma}_p x = W^{-1}_p(x \otimes 1) W_p \qquad (x \in \operatorname{PM}_p(G \times G)). {\widehat{\mathrm{env}}}nd{equation} Observe also that, if $q \in (1,\infty)$ is dual to $p$, i.e., $\frac{1}{p} + \frac{1}{q} = 1$, then \begin{equation} \label{adjoints} W_p^\ast = W^{-1}_q \qquad\text{and}\qquad (W_p^{-1})^\ast = W_q. {\widehat{\mathrm{env}}}nd{equation} \begin{lemma} \label{Wpinv} Let $G$ be a locally compact group, let $p \in (1,\infty)$, and let $( \xi_\alpha )_\alpha$ be a $(P_p)$-net in $L^p(G)$. Then we have \begin{equation} \label{asy1} \| W_p({\widehat{\mathrm{env}}}ta \otimes \xi_\alpha) - {\widehat{\mathrm{env}}}ta \otimes \xi_\alpha \|_p \to 0 \qquad ({\widehat{\mathrm{env}}}ta \in L^p(G)) {\widehat{\mathrm{env}}}nd{equation} and \begin{equation} \label{asy2} \| W_p^{-1} ({\widehat{\mathrm{env}}}ta \otimes \xi_\alpha) - {\widehat{\mathrm{env}}}ta \otimes \xi_\alpha \| \to 0 \qquad ({\widehat{\mathrm{env}}}ta \in L^p(G)). {\widehat{\mathrm{env}}}nd{equation} {\widehat{\mathrm{env}}}nd{lemma} \begin{proof} If ${\widehat{\mathrm{env}}}ta$ has compact support, ({{\mathrm{op}}eratorname{Re}}f{asy1}) is immediate from Definition {{\mathrm{op}}eratorname{Re}}f{Ppdef}; the general case follows by the usual density argument. Clearly, ({{\mathrm{op}}eratorname{Re}}f{asy2}) follows from ({{\mathrm{op}}eratorname{Re}}f{asy1}). {\widehat{\mathrm{env}}}nd{proof} \begin{lemma} \label{asylem} Let $G$ be a locally compact group, let $p,q \in (1,\infty)$ be dual to each other, let $\omega^{-1} \in {\mathbb M}hcal{M}(\mathrm{PF}_p(G))$, let $\Omega \in \operatorname{PM}_p(G \times G)$ be as in {\widehat{\mathrm{env}}}mph{Definition {{\mathrm{op}}eratorname{Re}}f{pweightinversedef}(iii)}, and let $( \xi_\alpha )_\alpha$ be a $(P_q)$-net in $L^q(G)$. Then we have \begin{equation} \label{asy3} \| \Omega^\ast({\widehat{\mathrm{env}}}ta \otimes \xi_\alpha) - {\widehat{\mathrm{env}}}ta \otimes (\omega^{-1})^\ast \xi_\alpha \|_q \to 0 \qquad ({\widehat{\mathrm{env}}}ta \in L^p(G)). 
{\widehat{\mathrm{env}}}nd{equation} {\widehat{\mathrm{env}}}nd{lemma} \begin{proof} By Definition {{\mathrm{op}}eratorname{Re}}f{pweightinversedef}(iii) and ({{\mathrm{op}}eratorname{Re}}f{comultdef}), we have \[ \omega^{-1} \otimes \omega^{-1} = (\hat{\Gamma} \omega^{-1}) \Omega = W_p^{-1}(\omega^{-1} \otimes 1) W_p \Omega. \] Through taking adjoints---taking ({{\mathrm{op}}eratorname{Re}}f{adjoints}) into account---, we obtain \begin{equation} \label{omegaeq} (\omega^{-1})^\ast \otimes (\omega^{-1} )^\ast = \Omega^\ast W_q^{-1}((\omega^{-1})^\ast \otimes 1) W_q. {\widehat{\mathrm{env}}}nd{equation} As $( \xi_\alpha )_\alpha$ is a $(P_q)$-net, we obtain from Lemma {{\mathrm{op}}eratorname{Re}}f{Wpinv} that \[ \| \Omega^\ast W_q^{-1}((\omega^{-1})^\ast \otimes 1) W_q({\widehat{\mathrm{env}}}ta \otimes \xi_\alpha) - \Omega^\ast((\omega^{-1})^\ast {\widehat{\mathrm{env}}}ta \otimes \xi_\alpha) \|_q \to 0. \] In view of ({{\mathrm{op}}eratorname{Re}}f{omegaeq}), this yields ({{\mathrm{op}}eratorname{Re}}f{asy3}) in the case where ${\widehat{\mathrm{env}}}ta \in (\omega^{-1})^\ast L^q(G)$; the general case follows from the fact that $(\omega^{-1})^\ast L^q(G)$ is dense in $L^q(G)$. {\widehat{\mathrm{env}}}nd{proof} \par For our next result, the technical heart of our argument, recall the notion of a {\widehat{\mathrm{env}}}mph{weak approximate identity} of a Banach algebra $A$: this a net $( e_\alpha)_\alpha$ in $A$ such that \[ a e_\alpha \to a \quad\text{and}\quad e_\alpha a \to a \qquad (a \in A) \] in the weak topology of $A$ (see, for instance, \cite[Definition 11.3]{BD}). \begin{proposition} \label{weakbai} Let $G$ be a locally compact group, let $( \xi_\alpha )_{\alpha \in {\mathbb M}hcal{A}}$ be a $(P_1)$-net in $L^1(G)$, let $p,q \in (1,\infty)$ be dual to each other, and let the net $( e_\alpha )_{\alpha \in {\mathbb M}hbb{A}}$ in $A_p(G)$ be defined by \[ e_\alpha(x) := \left{{\mathrm{op}}eratorname{lan}}gle \lambda_p(x) \xi_\alpha^\frac{1}{p}, \xi_\alpha^\frac{1}{q}\right{{\mathrm{op}}eratorname{ran}}gle \qquad (x \in G, \, \alpha \in {\mathbb M}hbb{A}). \] Then, if $\omega^{-1} \in {\mathbb M}hcal{M}(\mathrm{PF}_p(G))$ is a weight inverse, the net $\left( {{\mathrm{op}}eratorname{lan}}gle \omega^{-1}, 1 {{\mathrm{op}}eratorname{ran}}gle^{-1} \omega^{-1} e_\alpha \right)_{\alpha \in {\mathbb M}hbb{A}}$ in $A_p(G,\omega)$ is a weak approximate identity for $A_p(G, \omega)$. {\widehat{\mathrm{env}}}nd{proposition} \begin{proof} It is clear that $( {{\mathrm{op}}eratorname{lan}}gle \omega^{-1}, 1 {{\mathrm{op}}eratorname{ran}}gle^{-1} \omega^{-1} e_\alpha )_{\alpha \in {\mathbb M}hbb{A}}$ is bounded in $A_p(G,\omega)$. Also note that \[ (\omega^{-1} e_\alpha)(x) = \left{{\mathrm{op}}eratorname{lan}}gle \lambda_p(x) \omega^{-1} \xi_\alpha^\frac{1}{p}, \xi_\alpha^\frac{1}{q}\right{{\mathrm{op}}eratorname{ran}}gle \qquad (x \in G, \, \alpha \in {\mathbb M}hbb{A}). \] \par Let $f \in A_p(G)$. Without loss of generality, suppose that there are ${\widehat{\mathrm{env}}}ta \in L^p(G)$ and $\zeta \in L^q(G)$ with ${{\mathrm{op}}eratorname{lan}}gle \omega^{-1} {\widehat{\mathrm{env}}}ta, \zeta {{\mathrm{op}}eratorname{ran}}gle =1$ such that $f(x) = {{\mathrm{op}}eratorname{lan}}gle \lambda_p(x) {\widehat{\mathrm{env}}}ta, \zeta {{\mathrm{op}}eratorname{ran}}gle$ for $x \in G$; this means that $(\omega^{-1}f)(x) = {{\mathrm{op}}eratorname{lan}}gle \lambda_p(x) \omega^{-1} {\widehat{\mathrm{env}}}ta, \zeta {{\mathrm{op}}eratorname{ran}}gle$ for $x \in G$. 
\par There is a canonical complete isomorphism $\kappa \!: \operatorname{PM}_p(G) \to A_p(G,\omega)^\ast$, given by \[ {{\mathrm{op}}eratorname{lan}}gle \omega^{-1} f, \kappa(x) {{\mathrm{op}}eratorname{ran}}gle = {{\mathrm{op}}eratorname{lan}}gle f,x {{\mathrm{op}}eratorname{ran}}gle \qquad (x \in \operatorname{PM}_p(G)). \] Fix $x \in \operatorname{PM}_p(G)$. \par From the proof of Theorem {{\mathrm{op}}eratorname{Re}}f{BFTHthm}, we see that for $\alpha \in {\mathbb M}hbb{A}$: \begin{equation} \label{ouch} \begin{split} \lefteqn{{{\mathrm{op}}eratorname{lan}}gle (\omega^{-1} f) {{\mathrm{op}}eratorname{lan}}gle\omega^{-1},1{{\mathrm{op}}eratorname{ran}}gle^{-1} \omega^{-1} e_\alpha, \kappa(x) {{\mathrm{op}}eratorname{ran}}gle} & \\ & = {{\mathrm{op}}eratorname{lan}}gle\omega^{-1},1{{\mathrm{op}}eratorname{ran}}gle^{-1} {{\mathrm{op}}eratorname{lan}}gle \omega^{-1} (\hat{\Gamma}_p)_\ast(\Omega \theta_\ast(f \otimes e_\alpha)), \kappa(x) {{\mathrm{op}}eratorname{ran}}gle \\ & = {{\mathrm{op}}eratorname{lan}}gle\omega^{-1},1{{\mathrm{op}}eratorname{ran}}gle^{-1} {{\mathrm{op}}eratorname{lan}}gle \Omega \theta_\ast(f \otimes e_\alpha), \hat{\Gamma}_p x {{\mathrm{op}}eratorname{ran}}gle \\ & = {{\mathrm{op}}eratorname{lan}}gle\omega^{-1},1{{\mathrm{op}}eratorname{ran}}gle^{-1} {{\mathrm{op}}eratorname{lan}}gle \theta_\ast(f \otimes e_\alpha), (\hat{\Gamma}_p x)\Omega {{\mathrm{op}}eratorname{ran}}gle \\ & = {{\mathrm{op}}eratorname{lan}}gle\omega^{-1},1{{\mathrm{op}}eratorname{ran}}gle^{-1} {{\mathrm{op}}eratorname{lan}}gle \theta_\ast(f \otimes e_\alpha), W_p^{-1}(x \otimes 1)W_p \Omega {{\mathrm{op}}eratorname{ran}}gle \\ & = {{\mathrm{op}}eratorname{lan}}gle\omega^{-1},1{{\mathrm{op}}eratorname{ran}}gle^{-1} \left{{\mathrm{op}}eratorname{lan}}gle \zeta \otimes \xi_\alpha^\frac{1}{q}, W_p^{-1}(x \otimes 1)W_p \Omega\left({\widehat{\mathrm{env}}}ta \otimes \xi_\alpha^\frac{1}{p} \right) \right{{\mathrm{op}}eratorname{ran}}gle \\ & = {{\mathrm{op}}eratorname{lan}}gle\omega^{-1},1{{\mathrm{op}}eratorname{ran}}gle^{-1} \left{{\mathrm{op}}eratorname{lan}}gle \Omega^\ast W_q^{-1}(x^\ast \otimes 1)W_q\left(\zeta \otimes \xi_\alpha^\frac{1}{q}\right), \left({\widehat{\mathrm{env}}}ta \otimes \xi_\alpha^\frac{1}{p} \right) \right{{\mathrm{op}}eratorname{ran}}gle. {\widehat{\mathrm{env}}}nd{split} {\widehat{\mathrm{env}}}nd{equation} As $( \xi_\alpha )_{\alpha \in {\mathbb M}hbb{A}}$ is a $(P_1)$ net, $\left( \xi_\alpha^\frac{1}{q} \right)_{\alpha \in {\mathbb M}hbb{A}}$ is a $(P_q)$-net. From Lemmas {{\mathrm{op}}eratorname{Re}}f{Wpinv} and {{\mathrm{op}}eratorname{Re}}f{asylem}, we conclude that \[ \left\| \Omega^\ast W_q^{-1}(x^\ast \otimes 1)W_q\left(\zeta \otimes \xi_\alpha^\frac{1}{q}\right) - x^\ast \zeta \otimes (\omega^{-1})^\ast \xi_\alpha \right\|_q \to 0 \] and thus \begin{multline*} \left{{\mathrm{op}}eratorname{lan}}gle \zeta \otimes \xi_\alpha^\frac{1}{q}, W_p^{-1}(x \otimes 1)W_p \Omega\left({\widehat{\mathrm{env}}}ta \otimes \xi_\alpha^\frac{1}{p} \right) \right{{\mathrm{op}}eratorname{ran}}gle - \left{{\mathrm{op}}eratorname{lan}}gle \zeta \otimes \xi_\alpha^\frac{1}{q}, x{\widehat{\mathrm{env}}}ta \otimes \omega^{-1} \xi_\alpha^\frac{1}{p}\right{{\mathrm{op}}eratorname{ran}}gle \\ = \left{{\mathrm{op}}eratorname{lan}}gle \Omega^\ast W_q^{-1}(x^\ast \otimes 1)W_q\left(\zeta \otimes \xi_\alpha^\frac{1}{q}\right) - x^\ast \zeta \otimes (\omega^{-1})^\ast \xi_\alpha, {\widehat{\mathrm{env}}}ta \otimes \xi_\alpha^\frac{1}{p} \right{{\mathrm{op}}eratorname{ran}}gle \to 0. 
\end{multline*} Together with (\ref{ouch}), this yields \begin{equation} \label{moreouch} \lim_\alpha \left( \langle (\omega^{-1} f) \langle\omega^{-1},1\rangle^{-1} \omega^{-1} e_\alpha, \kappa(x) \rangle - \langle\omega^{-1},1\rangle^{-1} \left\langle \zeta \otimes \xi_\alpha^\frac{1}{q}, x\eta \otimes \omega^{-1} \xi_\alpha^\frac{1}{p}\right\rangle \right) = 0. \end{equation} \par On the other hand, as $( \xi_\alpha )_{\alpha \in \mathbb{A}}$ is a $(P_1)$-net, $\left( \xi_\alpha^\frac{1}{p} \right)_{\alpha \in \mathbb{A}}$ is a $(P_p)$-net, so that \[ \left\| \omega^{-1}\xi_\alpha^\frac{1}{p} - \langle \omega^{-1},1 \rangle \xi_\alpha^\frac{1}{p} \right\|_p \to 0 \] by Lemma \ref{Pplem}(ii) and thus \begin{equation} \label{evenmoreouch} \left\langle \zeta \otimes \xi_\alpha^\frac{1}{q}, x\eta \otimes \omega^{-1} \xi_\alpha^\frac{1}{p}\right\rangle = \langle f, x \rangle \left\langle \xi_\alpha^\frac{1}{q}, \omega^{-1} \xi_\alpha^\frac{1}{p} \right\rangle \to \langle f, x \rangle \langle \omega^{-1}, 1 \rangle = \langle \omega^{-1}f, \kappa(x) \rangle \langle \omega^{-1}, 1 \rangle. \end{equation} \par Combined, (\ref{moreouch}) and (\ref{evenmoreouch}) yield \[ \lim_\alpha \langle (\omega^{-1} f) \langle\omega^{-1},1\rangle^{-1} \omega^{-1} e_\alpha - \omega^{-1} f, \kappa(x) \rangle = 0. \] As $x \in \operatorname{PM}_p(G)$ was arbitrary, this completes the proof. \end{proof} \par Summing everything up, we obtain: \begin{theorem} The following are equivalent for a locally compact group $G$: \begin{items} \item $G$ is amenable; \item for every $p \in (1,\infty)$ and for every weight inverse $\omega^{-1} \in \mathcal{M}(\mathrm{PF}_p(G))$, the Beurling--Fig\`a-Talamanca--Herz algebra $A_p(G,\omega)$ has a bounded approximate identity; \item there are $p \in (1,\infty)$ and a weight inverse $\omega^{-1} \in \mathcal{M}(\mathrm{PF}_p(G))$ such that the Beurling--Fig\`a-Talamanca--Herz algebra $A_p(G,\omega)$ has a bounded approximate identity. \end{items} \end{theorem} \begin{proof} (i) $\Longrightarrow$ (ii): Let $p \in (1,\infty)$, and let $\omega^{-1} \in \mathcal{M}(\mathrm{PF}_p(G))$ be a weight inverse. As $G$ is amenable, it has Reiter's property $(P_1)$ (\cite[Proposition 6.12]{Pie}), i.e., there is a $(P_1)$-net in $L^1(G)$. By Proposition \ref{weakbai}, this means that $A_p(G,\omega)$ has a weak bounded approximate identity.
By a standard Banach algebra result (see \cite[Proposition 11.4]{BD}, for example), this means that $A_p(G,\omega)$ already has a bounded approximate identity.
\par (ii) $\Longrightarrow$ (iii) is trivial.
\par (iii) $\Longrightarrow$ (i): As we remarked at the beginning of this section, the existence of a bounded approximate identity for $A_p(G,\omega)$ already implies the existence of one for $A_p(G)$. By the unweighted Leptin--Herz theorem (\cite[Theorem 10.4]{Pie}), this means that $G$ is amenable.
\end{proof}
\begin{thebibliography}{L--S--T}
\begin{small}
\bibitem[B--D]{BD} \textsc{F.\ F.\ Bonsall} and \textsc{J.\ Duncan}, \textit{Complete Normed Algebras}. Ergebnisse der Mathematik und ihrer Grenzgebiete \textbf{80}, Springer Verlag, 1973.
\bibitem[Cow]{Cow} \textsc{M.\ Cowling}, An application of Littlewood--Paley theory in harmonic analysis. \textit{Math.\ Ann.}\ \textbf{241} (1979), 83--96.
\bibitem[Daw]{Daw} \textsc{M.\ Daws}, $p$-operator spaces and Fig\`a-Talamanca--Herz algebras. \textit{J.\ Operator Theory} \textbf{63} (2010), 47--83.
\bibitem[E--R]{ER} \textsc{E.\ G.\ Effros} and \textsc{Z.-J.\ Ruan}, \textit{Operator Spaces}. London Mathematical Society Monographs (New Series) \textbf{23}, Clarendon Press, 2000.
\bibitem[E--S]{ES} \textsc{M.\ Enock} and \textsc{J.-M.\ Schwartz}, \textit{Kac Algebras and Duality of Locally Compact Groups}. Springer Verlag, 1992.
\bibitem[Eym 1]{Eym} \textsc{P.\ Eymard}, L'alg\`ebre de Fourier d'un groupe localement compact. \textit{Bull.\ Soc.\ Math.\ France} \textbf{92} (1964), 181--236.
\bibitem[Eym 2]{Eym2} \textsc{P.\ Eymard}, Alg\`ebres $A_p$ et convoluteurs de $L^p$. In: \textit{S\'eminaire Bourbaki, vol.\ 1969/70, Expos\'es 364--381}, Lecture Notes in Mathematics \textbf{180}, Springer Verlag, 1971.
\bibitem[F--T]{FT} \textsc{A.\ Fig\`a-Talamanca}, Translation invariant operators in $L^p$. \textit{Duke Math.\ J.}\ \textbf{32} (1965), 495--501.
\bibitem[Her 1]{Her1} \textsc{C.\ Herz}, The theory of $p$-spaces with an application to convolution operators. \textit{Trans.\ Amer.\ Math.\ Soc.}\ \textbf{154} (1971), 69--82.
\bibitem[Her 2]{Her2} \textsc{C.\ Herz}, Harmonic synthesis for subgroups. \textit{Ann.\ Inst.\ Fourier (Grenoble)} \textbf{23} (1973), 91--123.
\bibitem[Kan]{Kan} \textsc{E.\ Kaniuth}, \textit{A Course in Commutative Banach Algebras}. Graduate Texts in Mathematics \textbf{246}, Springer Verlag, 2009.
\bibitem[Lep]{Lep} \textsc{H.\ Leptin}, Sur l'alg\`ebre de Fourier d'un groupe localement compact. \textit{C.\ R.\ Acad.\ Sci.\ Paris}, S\'er.\ A \textbf{266} (1968), 1180--1182.
\bibitem[L--S]{LS} \textsc{H.\ H.\ Lee} and \textsc{E.\ Samei}, Beurling--Fourier algebras, operator amenability, and Arens regularity. \textit{J.\ Funct.\ Anal.}\ \textbf{262} (2012), 167--209.
\bibitem[L--S--T]{LST} \textsc{J.\ Ludwig}, \textsc{N.\ Spronk}, and \textsc{L.\ Turowska}, Beurling--Fourier algebras on compact groups: spectral theory. \textit{J.\ Funct.\ Anal.}\ \textbf{262} (2012), 463--499.
\bibitem[Pie]{Pie} \textsc{J.-P.\ Pier}, \textit{Amenable Locally Compact Groups}. Wiley-Interscience, 1984.
\bibitem[R--St]{RSt} \textsc{H.\ Reiter} and \textsc{J.\ D.\ Stegeman}, \textit{Classical Harmonic Analysis and Locally Compact Groups}. London Mathematical Society Monographs (New Series) \textbf{22}, Clarendon Press, 2000.
\bibitem[Rud]{Rud} \textsc{W.\ Rudin}, \textit{Fourier Analysis on Groups}. Wiley Classics Library, John Wiley \& Sons, 1990.
\bibitem[Spe]{Spe} \textsc{R.\ Spector}, Sur la structure locale des groupes ab\'eliens localement compacts. \textit{Bull.\ Soc.\ Math.\ France Suppl.\ M\'em.}\ \textbf{24} (1970).
\bibitem[Tak]{Tak} \textsc{M.\ Takesaki}, \textit{Theory of Operator Algebras}, I. Encyclopedia of Mathematical Sciences \textbf{124}, Springer Verlag, 2003.
\bibitem[Yos]{Yos} \textsc{K.\ Yosida}, \textit{Functional Analysis}. Grundlehren der mathematischen Wissenschaften \textbf{123}, Springer Verlag, 1980.
\end{small}
\end{thebibliography}
\begin{tabbing}
\textit{Second author's address}: \= Department of Mathematical and Statistical Sciences \kill
\textit{First author's address}: \> Department of Mathematics \\
\> Faculty of Science \\
\> Istanbul University \\
\> Istanbul \\
\> Turkey \\
\textit{E-mail}: \> \texttt{[email protected]} \\
\textit{Second author's address}: \> Department of Mathematical and Statistical Sciences \\
\> University of Alberta \\
\> Edmonton, Alberta \\
\> Canada T6G 2G1 \\
\textit{E-mail}: \> \texttt{[email protected]} \\
\textit{Third author's address}: \> Department of Pure Mathematics \\
\> University of Waterloo \\
\> Waterloo, ON \\
\> Canada N2L 3G1 \\
\textit{E-mail}: \> \texttt{[email protected]}
\end{tabbing}
\end{document}
\begin{document} \title{Random time-changes and asymptotic results for a class of continuous-time Markov chains on integers with alternating rates\thanks{The authors acknowledge the support of: GNAMPA and GNCS groups of INdAM (Istituto Nazionale di Alta Matematica); MIUR--PRIN 2017, Project ‘Stochastic Models for Complex Systems’ (no. 2017JFFHSH); MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata (CUP E83C18000100006).}} \author{Luisa Beghin\thanks{Dipartimento di Scienze Statistiche, Sapienza Universit\`{a} di Roma, Piazzale Aldo Moro 5, 00185 Rome, Italy. e-mail: \texttt{[email protected]}}\and Claudio Macci\thanks{Dipartimento di Matematica, Universit\`a di Roma Tor Vergata, Via della Ricerca Scientifica, 00133 Rome, Italy. e-mail: \texttt{[email protected]}}\and Barbara Martinucci\thanks{Dipartimento di Matematica, Universit\`{a} degli Studi di Salerno, Via Giovanni Paolo II n. 132, 84084 Fisciano, SA, Italy. e-mail: \texttt{[email protected]}}} \date{} \maketitle \begin{abstract} We consider continuous-time Markov chains on integers which allow transitions to adjacent states only, with alternating rates. We give explicit formulas for probability generating functions, and also for means, variances and state probabilities of the random variables of the process. Moreover we study independent random time-changes with the inverse of the stable subordinator, the stable subordinator and the tempered stable subordinator. We also present some asymptotic results in the fashion of large deviations. These results give some generalizations of those presented in \cite{DicrescenzoMacciMartinucci}.\\ \ \\ \emph{AMS Subject Classification:} 60F10; 60J27; 60G22; 60G52.\\ \emph{Keywords:} large deviations, moderate deviations, fractional process, tempered stable subordinator. \end{abstract} \section{Introduction} We consider a class of continuous-time Markov chains on integers which can have transitions to adjacent states only, and with alternating transition rates to their adjacent states; namely, we assume the same transition rates for all odd states, and the same transition rates for all even states. We recall that Markov chains with alternating rates are useful in the study of chain molecular diffusion; see e.g. \cite{TarabiaTakagiElbaz} and other references cited in \cite{DicrescenzoMacciMartinucci}. In this paper we also study independent random time-changes of these Markov chains with the inverse of the stable subordinator and the (possibly tempered) stable subordinator. We give a more rigorous presentation in terms of the generator.
In general we consider a continuous-time Markov chain $\{X(t):t\geq 0\}$ on $\mathbb{Z}$ (where $\mathbb{Z}$ is the set of integers), and we consider the state probabilities \begin{equation}\label{eq:pmf-notation} p_{k,n}(t):=P(X(t)=n|X(0)=k), \end{equation} which satisfy the condition $p_{k,n}(0)=1_{\{k=n\}}$; the generator $G=(g_{k,n})_{k,n\in\mathbb{Z}}$ of $\{X(t):t\geq 0\}$ is defined by $$g_{k,n}:=\lim_{t\to 0}\frac{p_{k,n}(t)-p_{k,n}(0)}{t}.$$ Then, for some $\alpha_1,\alpha_2,\beta_1,\beta_2>0$, we assume (see Figure \ref{fig1}) $$g_{k,n}:=\left\{\begin{array}{ll} \alpha_1&\ \mbox{if}\ n=k+1\ \mbox{and}\ k\ \mbox{is even}\\ \beta_1&\ \mbox{if}\ n=k+1\ \mbox{and}\ k\ \mbox{is odd}\\ \alpha_2&\ \mbox{if}\ n=k-1\ \mbox{and}\ k\ \mbox{is even}\\ \beta_2&\ \mbox{if}\ n=k-1\ \mbox{and}\ k\ \mbox{is odd}\\ 0&\ \mbox{otherwise} \end{array}\right.\ (\mbox{for}\ k\neq n);$$ therefore $$g_{n,n}=\left\{\begin{array}{ll} -(\alpha_1+\alpha_2)&\ \mbox{if}\ n\ \mbox{is even}\\ -(\beta_1+\beta_2)&\ \mbox{if}\ n\ \mbox{is odd}. \end{array}\right.$$ \begin{figure} \caption{Transition rate diagram of $\{X(t):t\geq 0\}$.} \label{fig1} \end{figure} We remark that this is a generalization of the model in \cite{DicrescenzoMacciMartinucci}; in fact we recover that model by setting $$\left\{\begin{array}{ll} \alpha_1=\lambda\eta+\mu(1-\eta)\\ \beta_1=\mu\eta+\lambda(1-\eta)\\ \alpha_2=\lambda\theta+\mu(1-\theta)\\ \beta_2=\mu\theta+\lambda(1-\theta) \end{array}\right.$$ for $\lambda,\mu>0$ and $\eta,\theta\in[0,1]$; moreover the case $(\theta,\eta)=(1,1)$ was studied in \cite{DicrescenzoIulianoMartinucci}, whereas the case $(\theta,\eta)=(0,1)$ identifies the model investigated in \cite{ConollyParthasarathyDharmaraja} and \cite{TarabiaTakagiElbaz}. In particular we extend the results in \cite{DicrescenzoMacciMartinucci} by giving explicit expressions of the probability generating function, mean and variance of $X(t)$ (for each fixed $t>0$), and we study the asymptotic behavior (as $t\to\infty$) in the fashion of large deviations. Here we also give explicit expressions of the state probabilities. Moreover we consider some random time-changes of the basic model $\{X(t):t\geq 0\}$, with independent processes. This is motivated by the great interest that the theory of random time-changes (and subordination) has been receiving since \cite{Bochner} (see also \cite{Schilling}). In particular this theory allows one to construct non-standard models which are useful for possible applications in different fields; indeed, in many circumstances, the process is more realistically assumed to evolve according to a random (so-called operational) time, instead of the usual deterministic one. A wide class of random time-changes concerns subordinators, namely nondecreasing L\'{e}vy processes (see, for example, \cite{Sato}, \cite{KumarNaneVellaisamy}, \cite{MeerschaertNaneVellaisamy} and \cite{OrsingherBeghin}, \cite{DicrescenzoMartinucciZacks}); recent works with different kinds of random time-changes are \cite{DingGieseckeTomecek}, \cite{BeghinOrsingherJoSP2016} and \cite{DovidioOrsingherToaldo}. The random time-changes of $\{X(t):t\geq 0\}$ studied in this paper are related to fractional differential equations and stable processes. More precisely we consider: \begin{enumerate} \item the inverse of the stable subordinator $\{T^\nu(t):t\geq 0\}$; \item the (possibly tempered) stable subordinator $\{\tilde{S}^{\nu,\mu}(t):t\geq 0\}$ for $\nu\in(0,1)$ and $\mu\geq 0$ (we have the tempered case when $\mu>0$).
\end{enumerate} In both cases, i.e. for both $\{X(T^\nu(t)):t\geq 0\}$ and $\{X(\tilde{S}^{\nu,\mu}(t)):t\geq 0\}$, we provide expressions for the state probabilities in terms of the generalized Fox-Wright function. We recall \cite{HoudreKawai}, \cite{Rosinski} and \cite{SabzikarMeerschaertChen} among the references with the tempered stable subordinator. Typically these two random time-changes are associated with some generalized derivative in the literature; namely the Caputo left fractional derivative (see, for example, (2.4.14) and (2.4.15) in \cite{KilbasSrivastavaTrujillo}) in the first case, and the shifted fractional derivative (see (6) in \cite{BeghinJCP}; see also (17) in \cite{BeghinJCP} for the connections with the fractional Riemann-Liouville derivative) in the second case. We also try to extend the large deviation results for $\{X(t):t\geq 0\}$ to the cases with a random time-change considered in this paper. It is useful to remark that all the large deviation principles in this paper are proved by applications of the G\"{a}rtner Ellis Theorem; moreover these large deviation principles yield the convergence (at least in probability) to the values at which the large deviation rate functions uniquely vanish. Thus, motivated by potential applications, when dealing with large deviation principles with the same speed function, we compare the rate functions to establish whether we have a faster or slower convergence (if they are comparable). In conclusion the evaluation of the rate functions can be an important task, in particular when they are given in terms of a variational formula (as happens with the application of the G\"{a}rtner Ellis Theorem). The applications of the G\"{a}rtner Ellis Theorem are based on suitable limits of moment generating functions. So, in view of the applications of this theorem, we study the probability generating functions of the random variables of the processes; in particular the formulas obtained for $\{X(T^\nu(t)):t\geq 0\}$ have some analogies with many results in the literature for other time-fractional processes (for instance the probability generating functions are expressed in terms of the Mittag-Leffler function), with both continuous and discrete state space (see, for example, \cite{MeerschaertNaneVellaisamy}, \cite{HahnKobayashiUmarov}, \cite{BeghinMacciJAP2014} and \cite{Iksanov-etal}). For $\{X(T^\nu(t)):t\geq 0\}$ we can consider large deviations only (the difficulties in obtaining a moderate deviation result are briefly discussed); moreover we compute (and plot) different large deviation rate functions for various choices of $\nu\in(0,1)$ and we conclude that the smaller $\nu$ is, the faster $\frac{X^\nu(t)}{t}$ converges to zero (as $t\to\infty$). For $\{X(\tilde{S}^{\nu,\mu}(t)):t\geq 0\}$ we can obtain large and moderate deviations for the tempered case $\mu>0$ only; in fact in this case we can apply the G\"{a}rtner Ellis Theorem because we have light-tailed random variables (namely the moment generating functions of the involved random variables are finite in a neighborhood of the origin). There are some references in the literature with applications of the G\"{a}rtner Ellis Theorem to time-changed processes. However there are very few cases where the random time-change is given by the inverse of the stable subordinator; see e.g. \cite{GajdaMagdziarz} and \cite{WangChang} where the time-changed processes are fractional Brownian motions.
We are not aware of any other references where the time-changed process takes values on $\mathbb{Z}$. We conclude with the outline of the paper. Section \ref{sec:preliminaries} is devoted to some preliminaries on large deviations. In Section \ref{sec:non-fractional} we present the results for the basic model, i.e. the (non-fractional) process $\{X(t):t\geq 0\}$. Finally we present some results for the process $\{X(t):t\geq 0\}$ with random time-changes: the case with the inverse of the stable subordinator is studied in Section \ref{sec:time-fractional}, the case with the (possibly tempered) stable subordinator is studied in Section \ref{sec:time-change-TSS}. The final appendix (Section \ref{sec:pmf-expressions}) is devoted to the expressions of the state probabilities. \section{Preliminaries on large deviations}\label{sec:preliminaries} Some results in this paper concern the theory of large deviations; so, in this section, we recall some preliminaries (see e.g. \cite{DemboZeitouni}, pages 4--5). A family of probability measures $\{\pi_t:t>0\}$ on a topological space $\mathcal{Y}$ satisfies the large deviation principle (LDP for short) with rate function $I$ and speed function $v_t$ if: $\lim_{t\to+\infty}v_t=+\infty$, $I:\mathcal{Y}\to[0,+\infty]$ is lower semicontinuous, $$\liminf_{t\to+\infty}\frac{1}{v_t}\log\pi_t(O)\geq -\inf_{y\in O}I(y)$$ for all open sets $O$, and $$\limsup_{t\to+\infty}\frac{1}{v_t}\log\pi_t(C)\leq -\inf_{y\in C}I(y)$$ for all closed sets $C$. A rate function is said to be good if all its level sets $\{\{y\in\mathcal{Y}:I(y)\leq\eta\}:\eta\geq 0\}$ are compact. We also present moderate deviation results. This terminology is used when, for each family of positive numbers $\{a_t:t>0\}$ such that $a_t\to 0$ and $ta_t\to\infty$, we have a family of laws of centered random variables (which depend on $a_t$) which satisfies the LDP with speed function $1/a_t$, and these LDPs are governed by the same quadratic rate function, which uniquely vanishes at zero (for every choice of $\{a_t:t>0\}$). More precisely we have a rate function $J(y)=\frac{y^2}{2\sigma^2}$, for some $\sigma^2>0$. Typically moderate deviations fill the gap between a convergence to zero of centered random variables, and a convergence in distribution to a centered Normal distribution with variance $\sigma^2$. The main large deviation tool used in this paper is the G\"{a}rtner Ellis Theorem (see e.g. Theorem 2.3.6 in \cite{DemboZeitouni}). \section{Results for the basic model (non-fractional case)}\label{sec:non-fractional} In this section we present the results for the basic model. Some of them will be used for the models with random time-changes in the next sections. We start with some non-asymptotic results, where $t$ is fixed, which concern probability generating functions, means and variances. In the second part we present the asymptotic results, namely large and moderate deviation results as $t\to\infty$. In particular the probability generating functions $\{F_k(\cdot,t):k\in\mathbb{Z},t\geq 0\}$ are important in both parts; they are defined by $$F_k(z,t):=\mathbb{E}\left[z^{X(t)}|X(0)=k\right]=\sum_{n=-\infty}^\infty z^np_{k,n}(t)\ (\mbox{for}\ k\in\mathbb{Z}),$$ where $\{p_{k,n}(t):k,n\in\mathbb{Z},t\geq 0\}$ are the state probabilities in \eqref{eq:pmf-notation}.
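\par For later reference we point out explicitly (this is an immediate consequence of the definition above, and is recorded here only for the reader's convenience) that, evaluating the probability generating function at $z=e^\gamma$, one obtains the conditional moment generating function used in the applications of the G\"{a}rtner Ellis Theorem below: $$\mathbb{E}\left[e^{\gamma X(t)}|X(0)=k\right]=\sum_{n=-\infty}^\infty e^{\gamma n}p_{k,n}(t)=F_k(e^\gamma,t)\ (\mbox{for all}\ \gamma\in\mathbb{R}).$$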
We also have to consider the function $\Lambda:\mathbb{R}\to\mathbb{R}$ defined by \begin{equation}\label{eq:def-Lambda} \Lambda(\gamma):=\frac{h(e^\gamma)}{e^\gamma}-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}, \end{equation} where \begin{equation}\label{eq:hz} \left.\begin{array}{l} h(z):=\frac{1}{2}\sqrt{\tilde{h}(z;\alpha_1,\alpha_2,\beta_1,\beta_2)},\ \mbox{where}\\ \tilde{h}(z;\alpha_1,\alpha_2,\beta_1,\beta_2):=(\alpha_1+\alpha_2-(\beta_1+\beta_2))^2z^2+4(\beta_1z^2+\beta_2)(\alpha_1z^2+\alpha_2). \end{array}\right. \end{equation} \begin{remark}\label{rem:exchange-parameters} The non-asymptotic results presented below depend on $k=X(0)$, and we have different formulations when $k$ is odd or even. In particular we can pass from one case to the other by exchanging $(\alpha_1,\alpha_2)$ and $(\beta_1,\beta_2)$. On the contrary, $k$ plays no role in the asymptotic results; in fact $\tilde{h}(z;\alpha_1,\alpha_2,\beta_1,\beta_2)=\tilde{h}(z;\beta_1,\beta_2,\alpha_1,\alpha_2)$, and we have an analogous property for the function $\Lambda$, for its first derivative $\Lambda^\prime$ and its second derivative $\Lambda^{\prime\prime}$. \end{remark} The function $\Lambda$ is the analogue of the function $\Lambda$ in equation (14) in \cite{DicrescenzoMacciMartinucci}, and plays a crucial role in the proofs of the large (and moderate) deviation results. However we refer to this function also for the non-asymptotic results in order to have simpler expressions; in particular we refer to the derivatives $\Lambda^\prime(0)$ and $\Lambda^{\prime\prime}(0)$ and therefore we present the following lemma. \begin{lemma}\label{lem:2-derivatives-Lambda-origin} Let $\Lambda$ be the function in \eqref{eq:def-Lambda}. Then we have $$\Lambda^\prime(0)=\frac{2(\alpha_1\beta_1-\alpha_2\beta_2)}{\alpha_1+\alpha_2+\beta_1+\beta_2}$$ and $$\Lambda^{\prime\prime}(0)=\frac{4(\alpha_1\beta_1+\alpha_2\beta_2)}{\alpha_1+\alpha_2+\beta_1+\beta_2} -\frac{8(\alpha_1\beta_1-\alpha_2\beta_2)^2}{(\alpha_1+\alpha_2+\beta_1+\beta_2)^3}.$$ Moreover $\Lambda^{\prime\prime}(0)>0$; in fact $$\Lambda^{\prime\prime}(0)=\frac{4\{(\alpha_1\beta_1+\alpha_2\beta_2)[(\alpha_1+\alpha_2)^2+(\beta_1+\beta_2)^2 +2\alpha_1\beta_2+2\alpha_2\beta_1]+8\alpha_1\alpha_2\beta_1\beta_2\}}{(\alpha_1+\alpha_2+\beta_1+\beta_2)^3}.$$ \end{lemma} \begin{proof} We have $$\Lambda^\prime(\gamma)=\frac{h^\prime(e^\gamma)e^{2\gamma}-e^\gamma h(e^\gamma)}{e^{2\gamma}}=h^\prime(e^\gamma)-e^{-\gamma}h(e^\gamma),$$ which yields $\Lambda^\prime(0)=h^\prime(1)-h(1)$, and $$\Lambda^{\prime\prime}(\gamma)=h^{\prime\prime}(e^\gamma)e^\gamma-(-e^{-\gamma}h(e^\gamma)+h^\prime(e^\gamma))= h^{\prime\prime}(e^\gamma)e^\gamma+e^{-\gamma}h(e^\gamma)-h^\prime(e^\gamma),$$ which yields $\Lambda^{\prime\prime}(0)=h^{\prime\prime}(1)+h(1)-h^\prime(1)=h^{\prime\prime}(1)-\Lambda^\prime(0)$. The desired equalities can be checked with some cumbersome computations.
In particular we can check that $$h(1)=\frac{1}{2}\sqrt{(\alpha_1+\alpha_2-(\beta_1+\beta_2))^2+4(\beta_1+\beta_2)(\alpha_1+\alpha_2)}=\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}$$ and \begin{align*} h^\prime(1)=\left.\frac{1}{2}\frac{2(\alpha_1+\alpha_2-(\beta_1+\beta_2))^2z+4[2z\beta_1(\alpha_1z^2+\alpha_2)+2z\alpha_1(\beta_1z^2+\beta_2)]} {2\sqrt{(\alpha_1+\alpha_2-(\beta_1+\beta_2))^2z^2+4(\beta_1z^2+\beta_2)(\alpha_1z^2+\alpha_2)}}\right|_{z=1}\\ =\frac{(\alpha_1+\alpha_2)^2+(\beta_1+\beta_2)^2+6\alpha_1\beta_1+2\alpha_1\beta_2+2\alpha_2\beta_1-2\alpha_2\beta_2}{2(\alpha_1+\alpha_2+\beta_1+\beta_2)}; \end{align*} moreover, if we use the symbol $g$ in place of $\tilde{h}(\cdot;\alpha_1,\alpha_2,\beta_1,\beta_2)$ in \eqref{eq:hz} for simplicity, we can check that $$g(1)=(\alpha_1+\alpha_2+\beta_1+\beta_2)^2,$$ $$g^\prime(1)=2[(\alpha_1+\alpha_2-(\beta_1+\beta_2))^2+8\alpha_1\beta_1+4\alpha_1\beta_2+4\alpha_2\beta_1],$$ $$g^{\prime\prime}(1)=2[(\alpha_1+\alpha_2-(\beta_1+\beta_2))^2+24\alpha_1\beta_1+4\alpha_1\beta_2+4\alpha_2\beta_1],$$ and \begin{multline*} h^{\prime\prime}(1)=\frac{2g(1)g^{\prime\prime}(1)-(g^\prime(1))^2}{8(g(1))^{3/2}}\\ =\frac{4(\alpha_1+\alpha_2+\beta_1+\beta_2)^2[(\alpha_1+\alpha_2-(\beta_1+\beta_2))^2+24\alpha_1\beta_1+4\alpha_1\beta_2+4\alpha_2\beta_1]} {8(\alpha_1+\alpha_2+\beta_1+\beta_2)^3}\\ -\frac{4[(\alpha_1+\alpha_2-(\beta_1+\beta_2))^2+8\alpha_1\beta_1+4\alpha_1\beta_2+4\alpha_2\beta_1]^2}{8(\alpha_1+\alpha_2+\beta_1+\beta_2)^3}\\ =\frac{(\alpha_1+\alpha_2-(\beta_1+\beta_2))^2+24\alpha_1\beta_1+4\alpha_1\beta_2+4\alpha_2\beta_1} {2(\alpha_1+\alpha_2+\beta_1+\beta_2)}\\ -\frac{[(\alpha_1+\alpha_2-(\beta_1+\beta_2))^2+8\alpha_1\beta_1+4\alpha_1\beta_2+4\alpha_2\beta_1]^2}{2(\alpha_1+\alpha_2+\beta_1+\beta_2)^3}. \end{multline*} Other details are omitted. \end{proof} \subsection{Non-asymptotic results} In this section we present explicit formulas for probability generating functions (see Proposition \ref{prop:pgf}), means and variances (see Proposition \ref{prop:mean-variance}). In all these propositions we can check what we said in Remark \ref{rem:exchange-parameters} about the exchange of $(\alpha_1,\alpha_2)$ and $(\beta_1,\beta_2)$. In view of this we present some preliminaries. It is known that the state probabilities solve the equations $$\left\{\begin{array}{ll} \dot{p}_{k,2n}(t)=\beta_1p_{k,2n-1}(t)-(\alpha_1+\alpha_2)p_{k,2n}(t)+\beta_2p_{k,2n+1}(t)\\ \dot{p}_{k,2n+1}(t)=\alpha_1p_{k,2n}(t)-(\beta_1+\beta_2)p_{k,2n+1}(t)+\alpha_2p_{k,2n+2}(t)\\ p_{k,n}(0)=1_{\{k=n\}} \end{array}\right.\ (\mbox{for}\ k\in\mathbb{Z}).$$ So, if we consider the decomposition \begin{equation}\label{eq:pgf-decomposition} F_k=G_k+H_k, \end{equation} where $G_k$ and $H_k$ are the generating functions defined by $$G_k(z,t):=\sum_{j=-\infty}^\infty z^{2j}p_{k,2j}(t)\ \mbox{and}\ H_k(z,t):=\sum_{j=-\infty}^\infty z^{2j+1}p_{k,2j+1}(t)=\sum_{j=-\infty}^\infty z^{2j-1}p_{k,2j-1}(t),$$ we have \begin{equation}\label{eq:pgf-equations} \left\{\begin{array}{ll} \frac{\partial G_k(z,t)}{\partial t}=z\beta_1H_k(z,t)-(\alpha_1+\alpha_2)G_k(z,t)+\frac{\beta_2}{z}H_k(z,t)\\ \frac{\partial H_k(z,t)}{\partial t}=z\alpha_1G_k(z,t)-(\beta_1+\beta_2)H_k(z,t)+\frac{\alpha_2}{z}G_k(z,t)\\ G_k(z,0)=z^k\cdot 1_{\{k\ \mathrm{is\ even}\}},\ H_k(z,0)=z^k\cdot 1_{\{k\ \mathrm{is\ odd}\}} \end{array}\right.\ (\mbox{for}\ k\in\mathbb{Z}). 
\end{equation} We remark that, if we consider the matrix \begin{equation}\label{eq:def-matrix-A} A:=\left(\begin{array}{cc} -(\alpha_1+\alpha_2)&z\beta_1+\frac{\beta_2}{z}\\ z\alpha_1+\frac{\alpha_2}{z}&-(\beta_1+\beta_2) \end{array}\right), \end{equation} the equations \eqref{eq:pgf-equations} can be rewritten as $$\left\{\begin{array}{ll} \frac{\partial}{\partial t}\left(\begin{array}{c} G_k(z,t)\\ H_k(z,t) \end{array}\right)=A\left(\begin{array}{c} G_k(z,t)\\ H_k(z,t) \end{array}\right)\\ G_k(z,0)=z^k\cdot 1_{\{k\ \mathrm{is\ even}\}},\ H_k(z,0)=z^k\cdot 1_{\{k\ \mathrm{is\ odd}\}} \end{array}\right.\ (\mbox{for}\ k\in\mathbb{Z}).$$ Thus \begin{equation}\label{eq:pgf-matrices-for-decomposition} \left(\begin{array}{c} G_{2k}(z,t)\\ H_{2k}(z,t) \end{array}\right)=e^{At}\left(\begin{array}{c} z^{2k}\\ 0 \end{array}\right)\ \mbox{and}\ \left(\begin{array}{c} G_{2k+1}(z,t)\\ H_{2k+1}(z,t) \end{array}\right)=e^{At}\left(\begin{array}{c} 0\\ z^{2k+1} \end{array}\right). \end{equation} We start with the probability generating functions. \begin{proposition}\label{prop:pgf} For $z>0$ we have $$F_k(z,t)=z^ke^{-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}t}\left(\cosh\left(\frac{th(z)}{z}\right)+ \frac{c_k(z)}{h(z)}\sinh\left(\frac{th(z)}{z}\right)\right),$$ where \begin{equation}\label{eq:coefficients} c_k(z):=\left\{\begin{array}{ll} \frac{(\beta_1+\beta_2-(\alpha_1+\alpha_2))z}{2}+\alpha_1z^2+\alpha_2&\ \mbox{if}\ k\ \mbox{is even}\\ \frac{(\alpha_1+\alpha_2-(\beta_1+\beta_2))z}{2}+\beta_1z^2+\beta_2&\ \mbox{if}\ k\ \mbox{is odd}. \end{array}\right. \end{equation} \end{proposition} \begin{proof} The main part of the proof consists of the computation of the exponential matrix $e^{At}$, where $A$ is the matrix in \eqref{eq:def-matrix-A}, and finally we easily conclude by taking into account \eqref{eq:pgf-decomposition} and \eqref{eq:pgf-matrices-for-decomposition}. The eigenvalues of $A$ are \begin{equation}\label{eq:eigenvalues} \hat{h}_\pm(z):=-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}\pm\frac{h(z)}{z} \end{equation} (where $h$ is defined by \eqref{eq:hz}), and it is known that we can find a matrix $S$ such that $$S\left(\begin{array}{cc} -\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}-\frac{h(z)}{z}&0\\ 0&-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}+\frac{h(z)}{z} \end{array}\right)S^{-1}=A;$$ in particular we can consider the matrix $$S:=\left(\begin{array}{cc} \frac{\beta_1+\beta_2-(\alpha_1+\alpha_2)}{2}-\frac{h(z)}{z}&\frac{\beta_1+\beta_2-(\alpha_1+\alpha_2)}{2}+\frac{h(z)}{z}\\ z\alpha_1+\frac{\alpha_2}{z}&z\alpha_1+\frac{\alpha_2}{z} \end{array}\right)$$ and its inverse is $$S^{-1}=-\frac{z}{2h(z)}\left(\begin{array}{cc} 1&\frac{-z[\beta_1+\beta_2-(\alpha_1+\alpha_2)]-2h(z)}{2(\alpha_1z^2+\alpha_2)}\\ -1&\frac{z[\beta_1+\beta_2-(\alpha_1+\alpha_2)]-2h(z)}{2(\alpha_1z^2+\alpha_2)}
\end{array}\right).$$ Then the desired exponential matrix is \begin{multline*} e^{At}=S\left(\begin{array}{cc} e^{(-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}-\frac{h(z)}{z})t}&0\\ 0&e^{(-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}+\frac{h(z)}{z})t} \end{array}\right)S^{-1}\\ =-\frac{z}{2h(z)}S\left(\begin{array}{cc} e^{\hat{h}_-(z)t}&e^{\hat{h}_-(z)t}\cdot\frac{-z[\beta_1+\beta_2-(\alpha_1+\alpha_2)]-2h(z)}{2(\alpha_1z^2+\alpha_2)}\\ -e^{\hat{h}_+(z)t}&e^{\hat{h}_+(z)t}\cdot\frac{z[\beta_1+\beta_2-(\alpha_1+\alpha_2)]-2h(z)}{2(\alpha_1z^2+\alpha_2)} \end{array}\right); \end{multline*} moreover, after some computations, we have $$e^{At}=\left(\begin{array}{cc} u_{11}(z,t)&u_{12}(z,t)\\ u_{21}(z,t)&u_{22}(z,t) \end{array}\right),$$ where \begin{multline*} u_{11}(z,t):=-\frac{z}{2h(z)}\left(e^{\hat{h}_-(z)t}\left(\frac{\beta_1+\beta_2-(\alpha_1+\alpha_2)}{2}-\frac{h(z)}{z}\right) -e^{\hat{h}_+(z)t}\left(\frac{\beta_1+\beta_2-(\alpha_1+\alpha_2)}{2}+\frac{h(z)}{z}\right)\right)\\ =\frac{(\beta_1+\beta_2-(\alpha_1+\alpha_2))z}{2h(z)}\cdot\frac{e^{\hat{h}_+(z)t}-e^{\hat{h}_-(z)t}}{2}+\frac{e^{\hat{h}_-(z)t}+e^{\hat{h}_+(z)t}}{2}, \end{multline*} $$u_{21}(z,t):=-\frac{z}{2h(z)}\left(z\alpha_1+\frac{\alpha_2}{z}\right)(e^{\hat{h}_-(z)t}-e^{\hat{h}_+(z)t}) =\frac{\alpha_1z^2+\alpha_2}{h(z)}\cdot\frac{e^{\hat{h}_+(z)t}-e^{\hat{h}_-(z)t}}{2},$$ \begin{multline*} u_{12}(z,t):=-\frac{z}{2h(z)}\left(\left(\frac{\beta_1+\beta_2-(\alpha_1+\alpha_2)}{2}-\frac{h(z)}{z}\right) e^{\hat{h}_-(z)t}\cdot\frac{-z[\beta_1+\beta_2-(\alpha_1+\alpha_2)]-2h(z)}{2(\alpha_1z^2+\alpha_2)}\right.\\ \left.+\left(\frac{\beta_1+\beta_2-(\alpha_1+\alpha_2)}{2}+\frac{h(z)}{z}\right) e^{\hat{h}_+(z)t}\cdot\frac{z[\beta_1+\beta_2-(\alpha_1+\alpha_2)]-2h(z)}{2(\alpha_1z^2+\alpha_2)}\right)\\ =-\frac{z}{2h(z)}\cdot\frac{z}{\alpha_1z^2+\alpha_2}\left(e^{\hat{h}_-(z)t}-e^{\hat{h}_+(z)t}\right) \left(\frac{h^2(z)}{z^2}-\frac{(\beta_1+\beta_2-(\alpha_1+\alpha_2))^2}{4}\right)\\ =-\frac{1}{h(z)(\alpha_1z^2+\alpha_2)}\left(h^2(z)-\frac{(\beta_1+\beta_2-(\alpha_1+\alpha_2))^2z^2}{4}\right) \frac{e^{\hat{h}_-(z)t}-e^{\hat{h}_+(z)t}}{2}\\ =\frac{\beta_1z^2+\beta_2}{h(z)}\cdot\frac{e^{\hat{h}_+(z)t}-e^{\hat{h}_-(z)t}}{2} \end{multline*} and \begin{multline*} u_{22}(z,t):=-\frac{z}{2h(z)}\left(z\alpha_1+\frac{\alpha_2}{z}\right)\\ \left(e^{\hat{h}_-(z)t}\cdot\frac{-z[\beta_1+\beta_2-(\alpha_1+\alpha_2)]-2h(z)}{2(\alpha_1z^2+\alpha_2)} +e^{\hat{h}_+(z)t}\cdot\frac{z[\beta_1+\beta_2-(\alpha_1+\alpha_2)]-2h(z)}{2(\alpha_1z^2+\alpha_2)}\right)\\ =\frac{e^{\hat{h}_-(z)t}}{2}\left(\frac{(\beta_1+\beta_2-(\alpha_1+\alpha_2))z}{2h(z)}+1\right) +\frac{e^{\hat{h}_+(z)t}}{2}\left(-\frac{(\beta_1+\beta_2-(\alpha_1+\alpha_2))z}{2h(z)}+1\right)\\ =\frac{(\beta_1+\beta_2-(\alpha_1+\alpha_2))z}{2h(z)}\cdot\frac{e^{\hat{h}_-(z)t}-e^{\hat{h}_+(z)t}}{2}+\frac{e^{\hat{h}_-(z)t}+e^{\hat{h}_+(z)t}}{2}.
\end{multline*} We complete the proof noting that, by \eqref{eq:pgf-decomposition} and \eqref{eq:pgf-matrices-for-decomposition}, we have $$F_{2k}(z,t)=z^{2k}(u_{11}(z,t)+u_{21}(z,t))$$ and $$F_{2k+1}(z,t)=z^{2k+1}(u_{12}(z,t)+u_{22}(z,t));$$ in fact these equalities yield \begin{multline*} F_{2k}(z,t)=z^{2k}\left(\frac{e^{\hat{h}_-(z)t}+e^{\hat{h}_+(z)t}}{2} +\frac{1}{h(z)}\left(\frac{\beta_1+\beta_2-(\alpha_1+\alpha_2)}{2}z+\alpha_1z^2+\alpha_2\right)\frac{e^{\hat{h}_+(z)t}-e^{\hat{h}_-(z)t}}{2}\right)\\ =z^{2k}e^{-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}t}\left(\cosh\left(\frac{th(z)}{z}\right) +\frac{c_{2k}(z)}{h(z)}\sinh\left(\frac{th(z)}{z}\right)\right) \end{multline*} and \begin{multline*} F_{2k+1}(z,t)=z^{2k+1}\left(\frac{e^{\hat{h}_-(z)t}+e^{\hat{h}_+(z)t}}{2} +\frac{1}{h(z)}\left(\frac{\alpha_1+\alpha_2-(\beta_1+\beta_2)}{2}z+\beta_1z^2+\beta_2\right)\frac{e^{\hat{h}_+(z)t}-e^{\hat{h}_-(z)t}}{2}\right)\\ =z^{2k+1}e^{-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}t}\left(\cosh\left(\frac{th(z)}{z}\right) +\frac{c_{2k+1}(z)}{h(z)}\sinh\left(\frac{th(z)}{z}\right)\right). \end{multline*} \end{proof} In the next proposition we give mean and variance; in particular we refer to $\Lambda^\prime(0)$ and $\Lambda^{\prime\prime}(0)$ given in Lemma \ref{lem:2-derivatives-Lambda-origin}. \begin{proposition}\label{prop:mean-variance} We have $$\mathbb{E}[X(t)|X(0)=k]=k+\Lambda^\prime(0)t+\left.\left(\frac{c_k(z)}{h(z)}\right)^\prime\right|_{z=1} \frac{1-e^{-(\alpha_1+\alpha_2+\beta_1+\beta_2)t}}{2},$$ where $$\left.\left(\frac{c_k(z)}{h(z)}\right)^\prime\right|_{z=1}=\left\{\begin{array}{ll} \frac{2(\alpha_1+\alpha_2)(\alpha_1-\alpha_2-\beta_1+\beta_2)}{(\alpha_1+\alpha_2+\beta_1+\beta_2)^2}&\ \mbox{if}\ k\ \mbox{is even}\\ \frac{2(\beta_1+\beta_2)(\beta_1-\beta_2-\alpha_1+\alpha_2)}{(\alpha_1+\alpha_2+\beta_1+\beta_2)^2}&\ \mbox{if}\ k\ \mbox{is odd}. \end{array}\right.$$ Moreover, if $k$ is even, we have \begin{multline*} \mathrm{Var}[X(t)|X(0)=k]=\Lambda^{\prime\prime}(0)t-e^{-2(\alpha_1+\alpha_2+\beta_1+\beta_2)t} \frac{(\alpha_1+\alpha_2)^2(\alpha_1-\alpha_2-\beta_1+\beta_2)^2}{(\alpha_1+\alpha_2+\beta_1+\beta_2)^4}\\ +\frac{e^{-(\alpha_1+\alpha_2+\beta_1+\beta_2)t}}{(\alpha_1+\alpha_2+\beta_1+\beta_2)^3} \left\{8t(\alpha_1+\alpha_2)(\alpha_1-\alpha_2-\beta_1+\beta_2)(\alpha_1\beta_1-\alpha_2\beta_2)\right.\\ +(\alpha_1+\alpha_2)(\alpha_1-\alpha_2-\beta_1+\beta_2)(\alpha_1+\alpha_2-\beta_1-\beta_2) -6(\alpha_2-\beta_1)(\alpha_1-\alpha_2-\beta_1+\beta_2)(\alpha_1+\alpha_2-\beta_1-\beta_2)\\ -2(7\alpha_2+\beta_1-2\beta_2)(\beta_1+\beta_2)(\alpha_1+\alpha_2-\beta_1-\beta_2) -4(\alpha_2-\beta_2)^2(\alpha_1+\alpha_2-\beta_1-\beta_2)+8\alpha_2(\beta_1+\beta_2)(\alpha_1+\alpha_2)\\ \left.-8\alpha_2(\beta_1+\beta_2)(\alpha_1-\alpha_2-\beta_1+\beta_2)+8\beta_1(\beta_1+\beta_2)^2 -\frac{16(\alpha_2+\beta_1)^2(\beta_1+\beta_2)^2}{\alpha_1+\alpha_2+\beta_1+\beta_2}\right\}\\ +\frac{1}{(\alpha_1+\alpha_2+\beta_1+\beta_2)^3}\left\{(-7\alpha_1+3\alpha_2+10\beta_1-4\beta_2)(\beta_1+\beta_2)(\alpha_1-\alpha_2-\beta_1+\beta_2)\right.\\ +4(\alpha_2+\alpha_1)(\alpha_2+2\beta_2)(\alpha_1-\alpha_2-\beta_1+\beta_2)+4(\alpha_2-\beta_2)^2(\alpha_1-\alpha_2-\beta_1+\beta_2)\\ +4(\alpha_2-\beta_2)(\alpha_2+\beta_1)(\beta_1+\beta_2)-10(\alpha_2+\beta_1)(\beta_1+\beta_2)^2\\ \left.+8(\alpha_2+\beta_1)(\alpha_2-\beta_2)^2\right\} +\frac{20(\alpha_2+\beta_1)^2(\beta_1+\beta_2)^2}{(\alpha_1+\alpha_2+\beta_1+\beta_2)^4}. 
\end{multline*} Finally, if $k$ is odd, $\mathrm{Var}[X(t)|X(0)=k]$ can be obtained by exchanging $(\alpha_1,\alpha_2)$ and $(\beta_1,\beta_2)$ in the last expression (we recall that, as pointed out in Remark \ref{rem:exchange-parameters}, $\Lambda^{\prime\prime}(0)$ does not change). \end{proposition} \begin{proof} The desired expressions of the mean and the variance can be obtained with suitable (well-known) formulas in terms of $\left.\frac{dF_k(z,t)}{dz}\right|_{z=1}$ and $\left.\frac{d^2F_k(z,t)}{dz^2}\right|_{z=1}$; these two values can be computed by considering the explicit formulas of $F_k(z,t)$ in Proposition \ref{prop:pgf}. The computations are cumbersome and we omit the details. \end{proof} \subsection{Asymptotic results} Here we present Propositions \ref{prop:LD} and \ref{prop:MD}, which are the generalizations of Propositions 3.1 and 3.2 in \cite{DicrescenzoMacciMartinucci}. In both cases we apply the G\"{a}rtner Ellis Theorem, and we use the probability generating function in Proposition \ref{prop:pgf}. Actually the proof of Proposition \ref{prop:MD} here is slightly different from the proof of Proposition 3.2 in \cite{DicrescenzoMacciMartinucci}. \begin{proposition}\label{prop:LD} For all $k\in\mathbb{Z}$, $\left\{P\left(\frac{X(t)}{t}\in\cdot\Big|X(0)=k\right):t>0\right\}$ satisfies the LDP with speed function $v_t=t$ and good rate function $\Lambda^*(y):=\sup_{\gamma\in\mathbb{R}}\{\gamma y-\Lambda(\gamma)\}$. \end{proposition} \begin{proof} We can simply adapt the proof of Proposition 3.1 in \cite{DicrescenzoMacciMartinucci}. The details are omitted. \end{proof} \begin{proposition}\label{prop:MD} Let $\{a_t:t>0\}$ be such that $a_t\to 0$ and $ta_t\to+\infty$ (as $t\to+\infty$). Then, for all $k\in\mathbb{Z}$, $\left\{P\left(\sqrt{ta_t}\frac{X(t)-\mathbb{E}[X(t)|X(0)=k]}{t}\in\cdot\Big|X(0)=k\right):t>0\right\}$ satisfies the LDP with speed function $v_t=\frac{1}{a_t}$ and good rate function $J(y):=\frac{y^2}{2\Lambda^{\prime\prime}(0)}$. \end{proposition} \begin{proof} We apply the G\"{a}rtner Ellis Theorem. More precisely we show that \begin{equation}\label{eq:GE-limit-MD-non-fractional} \lim_{t\to\infty}a_t\log\mathbb{E}\left[\exp\left(\frac{\gamma}{a_t}\sqrt{ta_t}\frac{X(t)-\mathbb{E}[X(t)|X(0)=k]}{t}\right)\Big|X(0)=k\right]= \frac{\gamma^2}{2}\Lambda^{\prime\prime}(0)\ (\mbox{for all}\ \gamma\in\mathbb{R}); \end{equation} in fact we can easily check that $J(y)=\sup_{\gamma\in\mathbb{R}}\left\{\gamma y-\frac{\gamma^2}{2}\Lambda^{\prime\prime}(0)\right\}$ (for all $y\in\mathbb{R}$). We remark that \begin{multline*} a_t\log\mathbb{E}\left[\exp\left(\frac{\gamma}{a_t}\sqrt{ta_t}\frac{X(t)-\mathbb{E}[X(t)|X(0)=k]}{t}\right)\Big|X(0)=k\right]\\ =a_t\left(\log\mathbb{E}\left[\exp\left(\frac{\gamma}{\sqrt{ta_t}}X(t)\right)\Big|X(0)=k\right] -\frac{\gamma}{\sqrt{ta_t}}\mathbb{E}[X(t)|X(0)=k]\right).
\end{multline*} As far as the right hand side is concerned, we take into account Proposition \ref{prop:pgf} for the moment generating function and Proposition \ref{prop:mean-variance} for the mean; then we get \begin{multline*} \lim_{t\to\infty}a_t\log\mathbb{E}\left[\exp\left(\frac{\gamma}{a_t}\sqrt{ta_t}\frac{X(t)-\mathbb{E}[X(t)|X(0)=k]}{t}\right)\Big|X(0)=k\right]\\ =\lim_{t\to\infty}a_t\left(k\frac{\gamma}{\sqrt{ta_t}}-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}t+t\frac{h(e^{\gamma/\sqrt{ta_t}})}{e^{\gamma/\sqrt{ta_t}}} -\frac{\gamma}{\sqrt{ta_t}}(k+\Lambda^\prime(0)t)\right) \end{multline*} and, by \eqref{eq:def-Lambda}, we obtain $$\lim_{t\to\infty}a_t\log\mathbb{E}\left[\exp\left(\frac{\gamma}{a_t}\sqrt{ta_t}\frac{X(t)-\mathbb{E}[X(t)|X(0)=k]}{t}\right)\Big|X(0)=k\right] =\lim_{t\to\infty}ta_t\left(\Lambda\left(\frac{\gamma}{\sqrt{ta_t}}\right)-\frac{\gamma}{\sqrt{ta_t}}\Lambda^\prime(0)\right).$$ Finally, if we consider the second order Taylor formula for the function $\Lambda$, we have $$\lim_{t\to\infty}ta_t\left(\Lambda\left(\frac{\gamma}{\sqrt{ta_t}}\right)-\frac{\gamma}{\sqrt{ta_t}}\Lambda^\prime(0)\right) =\lim_{t\to\infty}ta_t\left(\frac{\gamma^2}{2ta_t}\Lambda^{\prime\prime}(0)+o\left(\frac{\gamma^2}{ta_t}\right)\right)$$ for a remainder $o\left(\frac{\gamma^2}{ta_t}\right)$ such that $o\left(\frac{\gamma^2}{ta_t}\right)/\frac{\gamma^2}{ta_t}\to 0$, and \eqref{eq:GE-limit-MD-non-fractional} is checked. \end{proof} \begin{remark}\label{rem:alternative-proof-MD} The expressions of the mean and the variance in Proposition \ref{prop:mean-variance} yield the following limits: $$\lim_{t\to\infty}\frac{\mathbb{E}[X(t)|X(0)=k]}{t}=\Lambda^\prime(0);\ \lim_{t\to\infty}\frac{\mathrm{Var}[X(t)|X(0)=k]}{t}=\Lambda^{\prime\prime}(0).$$ These limits give a generalization of the analogous limits in \cite{DicrescenzoMacciMartinucci}. \end{remark} \section{Results with the inverse of the stable subordinator}\label{sec:time-fractional} In this section we consider the process $\{X^\nu(t):t\geq 0\}$, for $\nu\in(0,1)$, i.e. \begin{equation}\label{eq:time-change-representation-fractional} X^\nu(t):=X^1(T^\nu(t)), \end{equation} where $\{T^\nu(t):t\geq 0\}$ is the inverse of the stable subordinator, independent of a version of the non-fractional process $\{X^1(t):t\geq 0\}$ studied above. So we recall some preliminaries. We start with the definition of the Mittag-Leffler function (see e.g. \cite{Podlubny}, page 17): $$E_\nu(x):=\sum_{j\geq 0}\frac{x^j}{\Gamma(\nu j+1)}\ (\mbox{for all}\ x\in\mathbb{R}).$$ Then we have \begin{equation}\label{eq:mgf-fractional} \mathbb{E}[e^{\gamma T^\nu(t)}]=E_\nu(\gamma t^\nu). \end{equation} In some references this formula is stated assuming that $\gamma\leq 0$, but this restriction is not needed because we can refer to the analytic continuation of the Laplace transform with complex argument. We also recall that formula (24) in \cite{MainardiMuraPagnini} provides a version of \eqref{eq:mgf-fractional} for $t=1$ (in that formula there is $-s$ in place of $\gamma$, and $s\in\mathbb{C}$).
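\par As an elementary illustration (a simple check, included here only as an example), for $\nu=1$ the Mittag-Leffler function reduces to the exponential function: $$E_1(x)=\sum_{j\geq 0}\frac{x^j}{\Gamma(j+1)}=\sum_{j\geq 0}\frac{x^j}{j!}=e^x\ (\mbox{for all}\ x\in\mathbb{R}),$$ so that \eqref{eq:mgf-fractional} formally becomes $\mathbb{E}[e^{\gamma T^1(t)}]=e^{\gamma t}$, i.e. the case of a deterministic time-change; this is consistent with the fact that the formulas of this section reduce to the ones of Section \ref{sec:non-fractional} when $\nu=1$.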
\subsection{Non-asymptotic results} We start with Proposition \ref{prop:pgf-fractional}, which provides an expression for the probability generating functions $\{F_k^\nu(\cdot,t):k\in\mathbb{Z},t\geq 0\}$ defined by $$F_k^\nu(z,t):=\mathbb{E}\left[z^{X^\nu(t)}|X^\nu(0)=k\right] =\sum_{n=-\infty}^\infty z^n p_{k,n}^\nu(t)\ (\mbox{for}\ k\in\mathbb{Z}),$$ where $\{p_{k,n}^\nu(t):k,n\in\mathbb{Z},t\geq 0\}$ are the state probabilities defined by \begin{equation}\label{eq:pmf-notation-fractional} p_{k,n}^\nu(t):=P(X^\nu(t)=n|X^\nu(0)=k). \end{equation} Obviously Proposition \ref{prop:pgf-fractional} is the analogue of Proposition \ref{prop:pgf} (and we can recover it by setting $\nu=1$). \begin{proposition}\label{prop:pgf-fractional} For $z>0$ we have $$F_k^\nu(z,t)=z^k\left(\frac{E_\nu(\hat{h}_-(z)t^\nu)+E_\nu(\hat{h}_+(z)t^\nu)}{2} +\frac{c_k(z)}{h(z)}\cdot\frac{E_\nu(\hat{h}_+(z)t^\nu)-E_\nu(\hat{h}_-(z)t^\nu)}{2}\right),$$ where $c_k(z)$ is as in \eqref{eq:coefficients} and $\hat{h}_\pm(z)$ are the eigenvalues in \eqref{eq:eigenvalues}. \end{proposition} \begin{proof} We recall that $T^\nu(0)=0$. Then, if we refer to the expression of the probability generating functions $\{F_k(\cdot,t):k\in\mathbb{Z},t\geq 0\}$ in Proposition \ref{prop:pgf}, we have \begin{multline*} F_k^\nu(z,t)=\mathbb{E}\left[z^{X^1(T^\nu(t))}|X^1(0)=k\right]=\mathbb{E}\left[F_k(z,T^\nu(t))|X^1(0)=k\right]\\ =\mathbb{E}\left[z^ke^{-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}T^\nu(t)}\left(\cosh\left(\frac{T^\nu(t)h(z)}{z}\right)+ \frac{c_k(z)}{h(z)}\sinh\left(\frac{T^\nu(t)h(z)}{z}\right)\right)|X^1(0)=k\right]. \end{multline*} Then, by taking into account the moment generating function in \eqref{eq:mgf-fractional}, after some manipulations we get \begin{multline*} F_k^\nu(z,t)=z^k\left(\left(1+\frac{c_k(z)}{h(z)}\right)\frac{\mathbb{E}[e^{\hat{h}_+(z)T^\nu(t)}]}{2} +\left(1-\frac{c_k(z)}{h(z)}\right)\frac{\mathbb{E}[e^{\hat{h}_-(z)T^\nu(t)}]}{2}\right)\\ =z^k\left(\left(1+\frac{c_k(z)}{h(z)}\right)\frac{E_\nu(\hat{h}_+(z)t^\nu)}{2} +\left(1-\frac{c_k(z)}{h(z)}\right)\frac{E_\nu(\hat{h}_-(z)t^\nu)}{2}\right). \end{multline*} So we can immediately check that this coincides with the expression in the statement of the proposition. \end{proof} \subsection{Asymptotic results} Here we present Proposition \ref{prop:LD-fractional}, which is the analogue of Proposition \ref{prop:LD}. Unfortunately we cannot present a moderate deviation result, namely we cannot present the analogue of Proposition \ref{prop:MD}; see the discussion in Remark \ref{rem:MD-fractional-missing}. We conclude this section with Remark \ref{rem:comparison}, where we compare the convergence of processes for different values of $\nu\in (0,1)$. In fact, if we consider the framework of Proposition \ref{prop:LD-fractional} below, the rate function $\Lambda_\nu^*(y)$ uniquely vanishes at $y=0$, and therefore $\frac{X^\nu(t)}{t}$ converges to $0$ as $t\to\infty$ (we recall that, for $\nu=1$, $\frac{X^\nu(t)}{t}$ converges to $\Lambda^\prime(0)$ as $t\to\infty$); moreover, the larger $\Lambda_\nu^*(y)$ is around $y=0$, the faster the convergence of $\frac{X^\nu(t)}{t}$. In particular in Remark \ref{rem:comparison} we take $0<\nu_1<\nu_2<1$, and we get strict inequalities between $\Lambda_{\nu_1}^*(y)$ and $\Lambda_{\nu_2}^*(y)$ in a sufficiently small neighborhood of the origin $y=0$ (except at the origin itself because we have $\Lambda_{\nu_1}^*(0)=\Lambda_{\nu_2}^*(0)=0$).
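\par Before stating the result we work out a completely symmetric special case, only for illustration: if $\alpha_1=\alpha_2=\beta_1=\beta_2=\lambda$ for some $\lambda>0$, then \eqref{eq:hz} yields $\tilde{h}(z;\lambda,\lambda,\lambda,\lambda)=4\lambda^2(z^2+1)^2$, whence $h(z)=\lambda(z^2+1)$ and, by \eqref{eq:def-Lambda}, $$\Lambda(\gamma)=\lambda(e^\gamma+e^{-\gamma})-2\lambda=2\lambda(\cosh\gamma-1)\geq 0,$$ which vanishes only at $\gamma=0$; in particular $\Lambda^\prime(0)=0$ (in agreement with Lemma \ref{lem:2-derivatives-Lambda-origin}, because $\alpha_1\beta_1-\alpha_2\beta_2=0$), and the set $\{\gamma\in\mathbb{R}:\Lambda(\gamma)\leq 0\}$ appearing in the proof below reduces to the single point $\gamma=0$.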
\begin{proposition}\label{prop:LD-fractional} We set $$\Lambda_\nu(\gamma):=\left\{\begin{array}{ll} (\Lambda(\gamma))^{1/\nu}&\ \mbox{if}\ \Lambda(\gamma)\geq 0\\ 0&\ \mbox{if}\ \Lambda(\gamma)<0, \end{array}\right.$$ where $\Lambda$ is the function in \eqref{eq:def-Lambda}. Then, for all $k\in\mathbb{Z}$, $\left\{P\left(\frac{X^\nu(t)}{t}\in\cdot\Big|X^\nu(0)=k\right):t>0\right\}$ satisfies the LDP with speed function $v_t=t$ and good rate function $\Lambda_\nu^*(y):=\sup_{\gamma\in\mathbb{R}}\{\gamma y-\Lambda_\nu(\gamma)\}$. \end{proposition} \begin{proof} We want to apply the G\"{a}rtner Ellis Theorem and, for all $\gamma\in\mathbb{R}$, we have to take the limit of $\frac{1}{t}\log F_k^\nu(e^\gamma,t)$ (as $t\to\infty$). Obviously we consider the expression of the function $F_k^\nu(z,t)$ in Proposition \ref{prop:pgf-fractional}. Firstly, if $\nu\in(0,1)$, we have \begin{equation}\label{eq:GE-limit-fractional} \lim_{t\to\infty}\frac{1}{t}\log F_k^\nu(e^\gamma,t)=\Lambda_\nu(\gamma)\ (\mbox{for all}\ \gamma\in\mathbb{R}); \end{equation} this can be checked noting that $\hat{h}_-(z)<0$, $\hat{h}_+(e^\gamma)=\Lambda(\gamma)$ (for all $\gamma\in\mathbb{R}$), by taking into account the limit $$\lim_{t\to\infty}\frac{1}{t}\log E_\nu(ct^\nu)=\left\{\begin{array}{ll} 0&\ \mbox{if}\ c\leq 0\\ c^{1/\nu}&\ \mbox{if}\ c>0 \end{array}\right.$$ (this limit can be seen as a consequence of an expansion of the Mittag-Leffler function; see (1.8.27) in \cite{KilbasSrivastavaTrujillo} with $\alpha=\nu$ and $\beta=1$), and by considering a suitable application of Lemma 1.2.15 in \cite{DemboZeitouni}. Moreover the function $\Lambda_\nu$ in the limit \eqref{eq:GE-limit-fractional} is nonnegative and attains its minimum, equal to zero, at the points of the set $\{\gamma\in\mathbb{R}:\Lambda(\gamma)\leq 0\}$; we recall that this set can be reduced to the single point $\gamma=0$ if and only if $\Lambda^\prime(0)=0$. Thus we can apply the G\"{a}rtner Ellis Theorem (because the function in the limit is finite everywhere and differentiable), and the desired LDP holds. \end{proof} \begin{remark}\label{rem:MD-fractional-missing} We have some difficulties in getting the extension of Proposition \ref{prop:MD} for the time-fractional case. In fact, if a moderate deviation result holds, we expect that it is governed by the rate function $J_\nu(y):=\frac{y^2}{2\Lambda_\nu^{\prime\prime}(0)}$, where $\Lambda_\nu^{\prime\prime}(0)$ is the second derivative at the origin $\gamma=0$ of $\Lambda_\nu$, assuming that such a value exists and is finite. On the contrary $\Lambda_\nu^{\prime\prime}(0)$ exists only if $\nu\in(0,1/2]$, and it is equal to zero. So, in such a case, we should have $$J_\nu(y):=\left\{\begin{array}{ll} 0&\ \mbox{if}\ y=0\\ \infty&\ \mbox{if}\ y\neq 0, \end{array}\right.$$ and this rate function is not interesting; in fact it is the largest rate function that we have for a sequence that converges to zero (for instance this rate function comes up when we have constant random variables converging to zero). \end{remark} \begin{remark}\label{rem:comparison} We take $0<\nu_1<\nu_2<1$.
We recall that: \begin{itemize} \item for $\nu\in(0,1)$ and $y\in\mathbb{R}$, the equation $\Lambda_\nu^\prime(\gamma)=y$ admits a solution; for the case $y=0$ we have $$\{\gamma\in\mathbb{R}:\Lambda_\nu^\prime(\gamma)=0\}=\{\gamma\in\mathbb{R}:\Lambda(\gamma)\leq 0\},$$ and therefore we have a unique solution $\gamma=0$ if and only if $\Lambda^\prime(0)=0$; on the contrary, if $y\neq 0$, we have a unique solution $\gamma_{y,\nu}\in\mathbb{R}$, say; \item there exists $\delta>0$ such that, if $\inf\{|\gamma-\tilde{\gamma}|:\Lambda(\tilde{\gamma})\leq 0\}<\delta$, then $\Lambda(\gamma)\in(0,1)$, and therefore $0<\Lambda_{\nu_1}(\gamma)<\Lambda_{\nu_2}(\gamma)$. \end{itemize} Thus, by combining these two statements, there exists $\delta^\prime>0$ such that, for $0<|y|<\delta^\prime$, we have $$0<\Lambda_{\nu_2}^*(y)=\gamma_{y,\nu_2}y-\Lambda_{\nu_2}(\gamma_{y,\nu_2})<\gamma_{y,\nu_2}y-\Lambda_{\nu_1}(\gamma_{y,\nu_2}) \leq\sup_{\gamma\in\mathbb{R}}\{\gamma y-\Lambda_{\nu_1}(\gamma)\}=\Lambda_{\nu_1}^*(y)$$ (see Figure \ref{fig2} where $\Lambda^\prime(0)=0$ and we consider some specific values of $\nu$). In conclusion we can say that \begin{equation}\label{eq:comparison-statement} \mbox{$\frac{X^{\nu_1}(t)}{t}$ converges to zero faster than $\frac{X^{\nu_2}(t)}{t}$ (as $t\to\infty$).} \end{equation} We also remark that the statement \eqref{eq:comparison-statement} is not surprising if we take into account the time-change representation \eqref{eq:time-change-representation-fractional}. In fact, if we denote the stable subordinator by $\{S^\nu(t):t\geq 0\}$, we have that \begin{equation}\label{eq:fgm-stable-subordinator} \mathbb{E}[e^{\gamma S^\nu(t)}]=\left\{\begin{array}{ll} e^{-|\gamma|^\nu t}&\ \mbox{if}\ \gamma\leq 0\\ \infty&\ \mbox{otherwise}; \end{array}\right. \end{equation} thus, as $\nu\in(0,1)$ decreases, the increasing trend of $\{S^\nu(t):t\geq 0\}$ increases, and therefore the increasing trend of the inverse of the stable subordinator $\{T^\nu(t):t\geq 0\}$ decreases. Then, for $0<\nu_1<\nu_2<1$, the increasing trend of the random time-change $\{T^{\nu_1}(t):t\geq 0\}$ for $X(\cdot)$ is slower than the increasing trend of $\{T^{\nu_2}(t):t\geq 0\}$; so $\frac{X^1(T^{\nu_1}(t))}{t}$ converges to zero faster than $\frac{X^1(T^{\nu_2}(t))}{t}$ (as $t\to\infty$), and this statement meets \eqref{eq:comparison-statement}. \end{remark} \begin{figure} \caption{The rate function $\Lambda_\nu^*$ around $y=0$ for $\Lambda^\prime(0)=0$ (only in this case $\Lambda_\nu^*$ is differentiable everywhere; on the contrary, for $y=0$, left and right hand derivatives of $\Lambda_\nu^*(y)$ do not coincide) and some values for $\nu$: $\nu=1/4$ (dashed line), $\nu=1/2$ (continuous line) and $\nu=1$ (dotted line).} \label{fig2} \end{figure} \section{Results with the (possibly tempered) stable subordinator}\label{sec:time-change-TSS} In this section we consider the process $\{\tilde{X}^{\nu,\mu}(t):t\geq 0\}$, for $\nu\in(0,1)$ and $\mu\geq 0$, i.e. $$\tilde{X}^{\nu,\mu}(t):=X^1(\tilde{S}^{\nu,\mu}(t)),$$ where $\{\tilde{S}^{\nu,\mu}(t):t\geq 0\}$ is a (possibly tempered) stable subordinator, independent of a version of the non-fractional process $\{X^1(t):t\geq 0\}$ studied above. So we recall some preliminaries on $\{\tilde{S}^{\nu,\mu}(t):t\geq 0\}$. 
Firstly, for $t>0$, we have $$P(\tilde{S}^{\nu,\mu}(t)\in dx)=\underbrace{e^{-\mu x+\mu^\nu t}f_{S^\nu(t)}(x)}_{=:f_{\tilde{S}^{\nu,\mu}(t)}(x)}dx,$$ where $$P(S^\nu(t)\in dx)=f_{S^\nu(t)}(x)dx$$ and $\{S^\nu(t):t\geq 0\}$ is the stable subordinator; note that $\{\tilde{S}^{\nu,\mu}(t):t\geq 0\}$ with $\mu=0$ coincides with $\{S^\nu(t):t\geq 0\}$. Moreover we have \begin{equation}\label{eq:mgf-TSS} \mathbb{E}[e^{\gamma\tilde{S}^{\nu,\mu}(t)}]=e^{\mu^\nu t}\mathbb{E}[e^{(\gamma-\mu )S^\nu(t)}]=\left\{ \begin{array}{ll} e^{-t((\mu-\gamma)^\nu-\mu^\nu)}&\ \mbox{if}\ \gamma\leq\mu\\ \infty&\ \mbox{otherwise}, \end{array}\right. \end{equation} where we take into account \eqref{eq:fgm-stable-subordinator}. Moreover, for $\mu>0$, if we consider the function $\Psi_{\nu,\mu}$ defined by \begin{equation}\label{eq:def-Psi} \Psi_{\nu,\mu}(\gamma):=\left\{\begin{array}{ll} \mu^\nu-(\mu-\gamma)^\nu&\ \mbox{if}\ \gamma\leq\mu\\ \infty&\ \mbox{otherwise}, \end{array}\right. \end{equation} for all $t>0$ we have \begin{equation}\label{eq:Psi-means} \frac{\mathbb{E}[\tilde{S}^{\nu,\mu}(t)]}{t}=\frac{\nu\mu^{\nu-1}t}{t}=\nu\mu^{\nu-1}=\Psi_{\nu,\mu}^\prime(0) \end{equation} and \begin{equation}\label{eq:Psi-variances} \frac{\mathrm{Var}[\tilde{S}^{\nu,\mu}(t)]}{t}=\frac{-\nu(\nu-1)\mu^{\nu-2}t}{t}=-\nu(\nu-1)\mu^{\nu-2}=\Psi_{\nu,\mu}^{\prime\prime}(0) \end{equation} (actually, if $\mu=0$, the above formulas \eqref{eq:Psi-means} and \eqref{eq:Psi-variances} hold as left derivatives equal to infinity). \subsection{Non-asymptotic results} We start with Proposition \ref{prop:pgf-TSS}, which provides an expression for the probability generating functions $\{\tilde{F}_k^{\nu,\mu}(\cdot,t):k\in\mathbb{Z},t\geq 0\}$ defined by $$\tilde{F}_k^{\nu,\mu}(z,t):=\mathbb{E}\left[z^{\tilde{X}^{\nu,\mu}(t)}|\tilde{X}^{\nu,\mu}(0)=k\right] =\sum_{n=-\infty}^\infty z^n\tilde{p}_{k,n}^{\nu,\mu}(t)\ (\mbox{for}\ k\in\mathbb{Z}),$$ where $\{\tilde{p}_{k,n}^{\nu,\mu}(t):k,n\in\mathbb{Z},t\geq 0\}$ are the state probabilities defined by \begin{equation}\label{eq:pmf-notation-TSS} \tilde{p}_{k,n}^{\nu,\mu}(t):=P(\tilde{X}^{\nu,\mu}(t)=n|\tilde{X}^{\nu,\mu}(0)=k). \end{equation} Obviously Proposition \ref{prop:pgf-TSS} is the analogue of Propositions \ref{prop:pgf} and \ref{prop:pgf-fractional}. The condition $\hat{h}_+(z)\leq\mu$ will be discussed after the proof. \begin{proposition}\label{prop:pgf-TSS} For $z>0$ we have $$\tilde{F}_k^{\nu,\mu}(z,t)=\left\{\begin{array}{ll} z^k\left(\frac{e^{-t((\mu-\hat{h}_+(z))^\nu-\mu^\nu)}+e^{-t((\mu-\hat{h}_-(z))^\nu-\mu^\nu)}}{2}\right.\\ \ \ \ +\left.\frac{c_k(z)}{h(z)}\cdot\frac{e^{-t((\mu-\hat{h}_+(z))^\nu-\mu^\nu)}-e^{-t((\mu-\hat{h}_-(z))^\nu-\mu^\nu)}}{2}\right)&\ \mbox{if}\ \hat{h}_+(z)\leq\mu\\ \infty&\ \mbox{otherwise}, \end{array}\right.$$ where $c_k(z)$ is as in \eqref{eq:coefficients} and $\hat{h}_\pm(z)$ are the eigenvalues in \eqref{eq:eigenvalues}. \end{proposition} \begin{proof} We recall that $\tilde{S}^{\nu,\mu}(0)=0$. 
Then, if we refer to the expression of the probability generating functions $\{F_k(\cdot,t):k\in\mathbb{Z},t\geq 0\}$ in Proposition \ref{prop:pgf}, we have \begin{multline*} \tilde{F}_k^{\nu,\mu}(z,t)=\mathbb{E}\left[z^{X^1(\tilde{S}^{\nu,\mu}(t))}|X^1(0)=k\right]=\mathbb{E}\left[F_k(z,\tilde{S}^{\nu,\mu}(t))|X^1(0)=k\right]\\ =\mathbb{E}\left[z^ke^{-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}\tilde{S}^{\nu,\mu}(t)}\left(\cosh\left(\frac{\tilde{S}^{\nu,\mu}(t)h(z)}{z}\right)+ \frac{c_k(z)}{h(z)}\sinh\left(\frac{\tilde{S}^{\nu,\mu}(t)h(z)}{z}\right)\right)|X^1(0)=k\right]. \end{multline*} Then, by taking into account the moment generating function in \eqref{eq:mgf-TSS}, after some manipulations we get (we recall that $\hat{h}_-(z)<0$) \begin{multline*} \tilde{F}_k^{\nu,\mu}(z,t)=z^k\left(\left(1+\frac{c_k(z)}{h(z)}\right)\frac{\mathbb{E}[e^{\hat{h}_+(z)\tilde{S}^{\nu,\mu}(t)}]}{2} +\left(1-\frac{c_k(z)}{h(z)}\right)\frac{\mathbb{E}[e^{\hat{h}_-(z)\tilde{S}^{\nu,\mu}(t)}]}{2}\right)\\ =z^k\left(\left(1+\frac{c_k(z)}{h(z)}\right)\frac{e^{-t((\mu-\hat{h}_+(z))^\nu-\mu^\nu)}}{2} +\left(1-\frac{c_k(z)}{h(z)}\right)\frac{e^{-t((\mu-\hat{h}_-(z))^\nu-\mu^\nu)}}{2}\right) \end{multline*} if $\hat{h}_+(z)\leq\mu$ (and infinity otherwise). So we can easily check that this coincides with the expression in the statement of the proposition. \end{proof} We conclude this subsection with a brief discussion on the condition $\hat{h}_+(z)\leq\mu$ for $\mu\geq 0$. For $z>0$, by \eqref{eq:eigenvalues} and \eqref{eq:hz}, this condition reads $$\frac{\sqrt{(\alpha_1+\alpha_2-(\beta_1+\beta_2))^2z^2+4(\beta_1z^2+\beta_2)(\alpha_1z^2+\alpha_2)}}{z}-\frac{\alpha_1+\alpha_2+\beta_1+\beta_2}{2}\leq\mu.$$ Then, after some easy computations, one can check that this is equivalent to $$\alpha_1\beta_1z^4-(\mu^2+\mu(\alpha_1+\alpha_2+\beta_1+\beta_2)+\alpha_1\beta_1+\alpha_2\beta_2)z^2+\alpha_2\beta_2\leq 0;$$ in conclusion we have $\hat{h}_+(z)\leq\mu$ if and only if $\sqrt{m_-(\mu)}\leq z\leq\sqrt{m_+(\mu)}$, where \begin{multline*} m_\pm(\mu):=\frac{1}{2\alpha_1\beta_1}\left\{\mu^2+\mu(\alpha_1+\alpha_2+\beta_1+\beta_2)+\alpha_1\beta_1+\alpha_2\beta_2\right.\\ \left.\pm\sqrt{(\mu^2+\mu(\alpha_1+\alpha_2+\beta_1+\beta_2)+\alpha_1\beta_1+\alpha_2\beta_2)^2-4\alpha_1\alpha_2\beta_1\beta_2}\right\}. \end{multline*} In particular, for the case $\mu=0$, we have $\hat{h}_+(z)\leq 0$ if and only if $\sqrt{\min\left\{1,\frac{\alpha_2\beta_2}{\alpha_1\beta_1}\right\}}\leq z\leq\sqrt{\max\left\{1,\frac{\alpha_2\beta_2}{\alpha_1\beta_1}\right\}}$ because $$m_\pm(0)=\frac{\alpha_1\beta_1+\alpha_2\beta_2\pm |\alpha_1\beta_1-\alpha_2\beta_2|}{2\alpha_1\beta_1};$$ so we have $m_-(0)=1$ and/or $m_+(0)=1$, and they are both equal to 1 if and only if $\alpha_1\beta_1=\alpha_2\beta_2$ or, equivalently, $\Lambda^\prime(0)=0$ by Lemma \ref{lem:2-derivatives-Lambda-origin}. \subsection{Asymptotic results} Here we present Proposition \ref{prop:LD-TSS}, which is the analogue of Propositions \ref{prop:LD} and \ref{prop:LD-fractional}. In this case we have no restriction on the value of $\Lambda^\prime(0)$. Finally we present Proposition \ref{prop:MD-TSS}, which is the analogue of Proposition \ref{prop:MD}. In the proofs of Propositions \ref{prop:LD-TSS} and \ref{prop:MD-TSS} we apply the G\"{a}rtner Ellis Theorem, and the condition $\mu>0$ is required.
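\par Before presenting the results, we briefly recall why the condition $\mu>0$ matters (this is only a restatement, in the present notation, of the light-tail requirement mentioned in the Introduction): by \eqref{eq:mgf-TSS} the moment generating function of $\tilde{S}^{\nu,\mu}(t)$ is finite for all $\gamma\leq\mu$, and therefore in a neighborhood of the origin when $\mu>0$; on the contrary, for $\mu=0$, the probability generating function in Proposition \ref{prop:pgf-TSS} is infinite at $z=e^\gamma$ whenever $\Lambda(\gamma)>0$, and this happens in every neighborhood of the origin because $\Lambda(0)=0$ and $\Lambda^{\prime\prime}(0)>0$ (see Lemma \ref{lem:2-derivatives-Lambda-origin}).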
\begin{proposition}\label{prop:LD-TSS} Assume that $\mu>0$, and set $$\tilde{\Lambda}_{\nu,\mu}(\gamma):=\left\{\begin{array}{ll} \mu^\nu-(\mu-\Lambda(\gamma))^\nu&\ \mbox{if}\ \Lambda(\gamma)\leq\mu\\ \infty&\ \mbox{otherwise}, \end{array}\right.$$ where $\Lambda$ is the function in \eqref{eq:def-Lambda}. Then, for all $k\in\mathbb{Z}$, $\left\{P\left(\frac{\tilde{X}^{\nu,\mu}(t)}{t}\in\cdot\Big|\tilde{X}^{\nu,\mu}(0)=k\right):t>0\right\}$ satisfies the LDP with speed function $v_t=t$ and good rate function $\tilde{\Lambda}_{\nu,\mu}^*(y):=\sup_{\gamma\in\mathbb{R}}\{\gamma y-\tilde{\Lambda}_{\nu,\mu}(\gamma)\}$. \end{proposition} \begin{proof} We want to apply the G\"{a}rtner Ellis Theorem and, for all $\gamma\in\mathbb{R}$, we have to take the limit of $\frac{1}{t}\log\tilde{F}_k^{\nu,\mu}(e^\gamma,t)$ (as $t\to\infty$). Obviously we consider the expression of the function $\tilde{F}_k^{\nu,\mu}(z,t)$ in Proposition \ref{prop:pgf-TSS}. Firstly we have \begin{equation}\label{eq:GE-limit-TSS} \lim_{t\to\infty}\frac{1}{t}\log\tilde{F}_k^{\nu,\mu}(e^\gamma,t)=\tilde{\Lambda}_{\nu,\mu}(\gamma)\ (\mbox{for all}\ \gamma\in\mathbb{R}); \end{equation} this can be checked noting that $\hat{h}_-(z)<0$, $\hat{h}_+(e^\gamma)=\Lambda(\gamma)$ (for all $\gamma\in\mathbb{R}$), and by considering a suitable application of Lemma 1.2.15 in \cite{DemboZeitouni}. The function $\tilde{\Lambda}_{\nu,\mu}$ in the limit \eqref{eq:GE-limit-TSS} is essentially smooth (see e.g. Definition 2.3.5 in \cite{DemboZeitouni}); in fact it is finite in a neighborhood of the origin, differentiable in the interior of the set $\mathcal{D}:=\{\gamma\in\mathbb{R}:\tilde{\Lambda}_{\nu,\mu}(\gamma)<\infty\}$, and steep (namely $\tilde{\Lambda}_{\nu,\mu}^\prime(\gamma_n)\to\infty$ for every sequence $\{\gamma_n:n\geq 1\}$ in the interior of $\mathcal{D}$ which converges to a boundary point of the interior of $\mathcal{D}$) because, if $\gamma_0$ is such that $\Lambda(\gamma_0)=\mu$, we have $$\tilde{\Lambda}_{\nu,\mu}^\prime(\gamma)=\nu(\mu-\Lambda(\gamma))^{\nu-1}\Lambda^\prime(\gamma)\to\infty\ (\mbox{as}\ \gamma\to\gamma_0).$$ Then we can apply the G\"{a}rtner Ellis Theorem (in fact the function $\tilde{\Lambda}_{\nu,\mu}$ is also lower semi-continuous), and the desired LDP holds. \end{proof} In view of the next result on moderate deviations we compute $\tilde{\Lambda}_{\nu,\mu}^{\prime\prime}(0)$. We remark that, if we consider the function $\Psi_{\nu,\mu}$ in \eqref{eq:def-Psi}, we have $\tilde{\Lambda}_{\nu,\mu}(\gamma)=\Psi_{\nu,\mu}(\Lambda(\gamma))$ (for all $\gamma\in\mathbb{R}$). Thus we have $$\tilde{\Lambda}_{\nu,\mu}^\prime(\gamma)=\Psi_{\nu,\mu}^\prime(\Lambda(\gamma))\Lambda^\prime(\gamma),\ \tilde{\Lambda}_{\nu,\mu}^{\prime\prime}(\gamma)=\Psi_{\nu,\mu}^\prime(\Lambda(\gamma))\Lambda^{\prime\prime}(\gamma) +\Psi_{\nu,\mu}^{\prime\prime}(\Lambda(\gamma))(\Lambda^\prime(\gamma))^2$$ and therefore (for the second equality see \eqref{eq:Psi-means} and \eqref{eq:Psi-variances}) $$\tilde{\Lambda}_{\nu,\mu}^{\prime\prime}(0)=\Psi_{\nu,\mu}^\prime(0)\Lambda^{\prime\prime}(0) +\Psi_{\nu,\mu}^{\prime\prime}(0)(\Lambda^\prime(0))^2=\nu\mu^{\nu-1}\Lambda^{\prime\prime}(0)-\nu(\nu-1)\mu^{\nu-2}(\Lambda^\prime(0))^2.$$ We remark that $\tilde{\Lambda}_{\nu,\mu}^{\prime\prime}(0)>0$ because $\Lambda^{\prime\prime}(0)>0$ (see Lemma \ref{lem:2-derivatives-Lambda-origin}) and $\mu>0$. \begin{proposition}\label{prop:MD-TSS} Assume that $\mu>0$. Let $\{a_t:t>0\}$ be such that $a_t\to 0$ and $ta_t\to+\infty$ (as $t\to+\infty$).
Then, for all $k\in\mathbb{Z}$, $\left\{P\left(\sqrt{ta_t}\frac{\tilde{X}^{\nu,\mu}(t)-\mathbb{E}[\tilde{X}^{\nu,\mu}(t)|\tilde{X}^{\nu,\mu}(0)=k]}{t}\in\cdot \Big|\tilde{X}^{\nu,\mu}(0)=k\right):t>0\right\}$ satisfies the LDP with speed function $v_t=\frac{1}{a_t}$ and good rate function $J_{\nu,\mu}(y):=\frac{y^2}{2\tilde{\Lambda}_{\nu,\mu}^{\prime\prime}(0)}$. \end{proposition} \begin{proof} We apply the G\"{a}rtner-Ellis Theorem. More precisely we show that \begin{equation}\label{eq:GE-limit-MD-TSS} \lim_{t\to\infty}a_t\log\mathbb{E}\left[\exp\left(\frac{\gamma}{a_t}\sqrt{ta_t}\frac{\tilde{X}^{\nu,\mu}(t) -\mathbb{E}[\tilde{X}^{\nu,\mu}(t)|\tilde{X}^{\nu,\mu}(0)=k]}{t}\right)\Big|\tilde{X}^{\nu,\mu}(0)=k\right]= \frac{\gamma^2}{2}\tilde{\Lambda}_{\nu,\mu}^{\prime\prime}(0)\ (\mbox{for all}\ \gamma\in\mathbb{R}); \end{equation} in fact we can easily check that $J_{\nu,\mu}(y)=\sup_{\gamma\in\mathbb{R}}\left\{\gamma y-\frac{\gamma^2}{2}\tilde{\Lambda}_{\nu,\mu}^{\prime\prime}(0)\right\}$ (for all $y\in\mathbb{R}$). We remark that \begin{multline*} a_t\log\mathbb{E}\left[\exp\left(\frac{\gamma}{a_t}\sqrt{ta_t}\frac{\tilde{X}^{\nu,\mu}(t) -\mathbb{E}[\tilde{X}^{\nu,\mu}(t)|\tilde{X}^{\nu,\mu}(0)=k]}{t}\right)\Big|\tilde{X}^{\nu,\mu}(0)=k\right]\\ =a_t\left(\log\mathbb{E}\left[\exp\left(\frac{\gamma}{\sqrt{ta_t}}\tilde{X}^{\nu,\mu}(t)\right)\Big|\tilde{X}^{\nu,\mu}(0)=k\right] -\frac{\gamma}{\sqrt{ta_t}}\mathbb{E}[\tilde{X}^{\nu,\mu}(t)|\tilde{X}^{\nu,\mu}(0)=k]\right)\\ =a_t\left(\log\tilde{F}_k^{\nu,\mu}(e^{\gamma/\sqrt{ta_t}},t) -\frac{\gamma}{\sqrt{ta_t}}\mathbb{E}[\tilde{X}^{\nu,\mu}(t)|\tilde{X}^{\nu,\mu}(0)=k]\right), \end{multline*} where $\tilde{F}_k^{\nu,\mu}(z,t)$ is the probability generating function in Proposition \ref{prop:pgf-TSS}. Moreover, by Proposition \ref{prop:mean-variance} (together with a conditioning with respect to $\{\tilde{S}^{\nu,\mu}(t):t\geq 0\}$ and some properties of this process) we have $$\mathbb{E}[\tilde{X}^{\nu,\mu}(t)|\tilde{X}^{\nu,\mu}(0)=k]= k+\Lambda^\prime(0)\mathbb{E}[\tilde{S}^{\nu,\mu}(t)]+\mathbb{E}[b(\tilde{S}^{\nu,\mu}(t))],$$ where $b(r)=\left.\left(\frac{c_k(z)}{h(z)}\right)^\prime\right|_{z=1}\frac{1-e^{-(\alpha_1+\alpha_2+\beta_1+\beta_2)r}}{2}$ is a bounded function of $r\geq 0$; thus, by \eqref{eq:Psi-means}, we have $$\mathbb{E}[\tilde{X}^{\nu,\mu}(t)|\tilde{X}^{\nu,\mu}(0)=k]= k+\Lambda^\prime(0)\Psi_{\nu,\mu}^\prime(0)t+\mathbb{E}[b(\tilde{S}^{\nu,\mu}(t))].$$ Then, since $\hat{h}_-(e^\gamma)<0$ and $\hat{h}_+(e^\gamma)=\Lambda(\gamma)$ for all $\gamma\in\mathbb{R}$, we get \begin{multline*} \lim_{t\to\infty}a_t\left(\log\tilde{F}_k^{\nu,\mu}(e^{\gamma/\sqrt{ta_t}},t) -\frac{\gamma}{\sqrt{ta_t}}\mathbb{E}[\tilde{X}^{\nu,\mu}(t)|\tilde{X}^{\nu,\mu}(0)=k]\right)\\ =\lim_{t\to\infty}a_t\left(k\frac{\gamma}{\sqrt{ta_t}}+t\tilde{\Lambda}_{\nu,\mu}\left(\frac{\gamma}{\sqrt{ta_t}}\right) -\frac{\gamma}{\sqrt{ta_t}}\left(k+\Lambda^\prime(0)\Psi_{\nu,\mu}^\prime(0)t+\mathbb{E}[b(\tilde{S}^{\nu,\mu}(t))]\right)\right)\\ =\lim_{t\to\infty}ta_t\left(\tilde{\Lambda}_{\nu,\mu}\left(\frac{\gamma}{\sqrt{ta_t}}\right) -\frac{\gamma}{\sqrt{ta_t}}\Lambda^\prime(0)\Psi_{\nu,\mu}^\prime(0)\right); \end{multline*} in fact the term with $\mathbb{E}[b(\tilde{S}^{\nu,\mu}(t))]$ is negligible because the function $b(\cdot)$ is bounded.
Finally, if we consider the second order Taylor formula for the function $\tilde{\Lambda}_{\nu,\mu}$, we have \begin{multline*} \tilde{\Lambda}_{\nu,\mu}\left(\frac{\gamma}{\sqrt{ta_t}}\right) -\frac{\gamma}{\sqrt{ta_t}}\Lambda^\prime(0)\Psi_{\nu,\mu}^\prime(0)\\ =\frac{\gamma}{\sqrt{ta_t}}\tilde{\Lambda}_{\nu,\mu}^\prime(0)+\frac{\gamma^2}{2ta_t}\tilde{\Lambda}_{\nu,\mu}^{\prime\prime}(0) +o\left(\frac{\gamma^2}{ta_t}\right)-\frac{\gamma}{\sqrt{ta_t}}\Lambda^\prime(0)\Psi_{\nu,\mu}^\prime(0) =\frac{\gamma^2}{2ta_t}\tilde{\Lambda}_{\nu,\mu}^{\prime\prime}(0)+o\left(\frac{\gamma^2}{ta_t}\right) \end{multline*} for a remainder $o\left(\frac{\gamma^2}{ta_t}\right)$ such that $o\left(\frac{\gamma^2}{ta_t}\right)/\frac{\gamma^2}{ta_t}\to 0$, and \eqref{eq:GE-limit-MD-TSS} can be easily checked. \end{proof} \appendix \section{State probabilities}\label{sec:pmf-expressions} In this section we present some formulas for the state probabilities \eqref{eq:pmf-notation}, \eqref{eq:pmf-notation-fractional} and \eqref{eq:pmf-notation-TSS}. These formulas can be obtained by extracting suitable coefficients of the probability generating functions above; see Propositions \ref{prop:pgf}, \ref{prop:pgf-fractional} and \ref{prop:pgf-TSS}, respectively. Here, as usual, binomial coefficients with negative arguments are equal to zero. For each family of state probabilities we distinguish two cases, and we introduce a suitable auxiliary function: if $\alpha_1+\alpha_2\neq\beta_1+\beta_2$, \begin{eqnarray*} && \hspace*{-1.5cm} \vartheta^n_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2):=\left(\frac{4 \alpha_2 \beta_2}{ (\alpha_1+\alpha_2-\beta_1-\beta_2)^2}\right)^{s-r} \,\sum_{h=0}^{n-s+r} {n\choose h+s-r} \left(\frac{4 \alpha_1 \beta_2}{ (\alpha_1+\alpha_2-\beta_1-\beta_2)^2} \right)^{h} \nonumber \\ && \hspace*{-0.8cm} \times \sum_{l=0}^{h} {h+s-r \choose l} {h+s-r \choose h-l} \left( \frac{\alpha_2 \beta_1}{\alpha_1 \beta_2} \right)^{l}; \end{eqnarray*} if $\alpha_1+\alpha_2=\beta_1+\beta_2$, $$\eta^n_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2):=\left(\frac{\alpha_2}{\alpha_1}\right)^{s-r} (\alpha_1 \beta_2)^n \sum_{l=0}^{n-s+r} {n \choose l} {n \choose s-r+l} \left( \frac{\alpha_2 \beta_1}{\alpha_1 \beta_2} \right)^{l}.$$ \begin{proposition}\label{prop:pmf-expressions} Let $\{p_{k,n}(t):k,n\in\mathbb{Z},t\geq 0\}$ be as in \eqref{eq:pmf-notation}.\\ (i) Assume that $\alpha_1+\alpha_2\neq\beta_1+\beta_2$. Then, for all $s,r\in\mathbb{Z}$, we have the following four cases: \begin{eqnarray*} && \hspace*{-1.2cm} p_{2r,2s}(t) = {\rm e}^{-\frac{(\alpha_1+\alpha_2+\beta_1+\beta_2)}{2}\, t} \sum_{n=|s-r|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \\ && \hspace*{-1.2cm} \times \left[\frac{t^{2n}}{(2n)!} \left(\frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{2} \right)^{2 n}+\frac{t^{2n+1}}{(2n+1)!} \left(\frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{2} \right)^{2 n+1}\right] \cdot \vartheta^n_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2); \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} p_{2r,2s+1}(t) = {\rm e}^{-\frac{(\alpha_1+\alpha_2+\beta_1+\beta_2)}{2}\, t} \left\{ \alpha_1 \sum_{n=|s-r|}^{+\infty} \frac{t^{2n+1}}{(2n+1)!} \left(\frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{2} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r}\!\!\cdot \vartheta^n_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2) \right. \\ && \hspace*{-1.2cm} \left. 
+\alpha_2 \sum_{n=|s-r+1|}^{+\infty} \frac{t^{2n+1}}{(2n+1)!} \left(\frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{2} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r+1}\!\!\cdot \vartheta^n_{r,s+1}(\alpha_1,\alpha_2,\beta_1,\beta_2)\right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} p_{2r+1,2s}(t) = {\rm e}^{-\frac{(\alpha_1+\alpha_2+\beta_1+\beta_2)}{2}\, t} \left\{ \beta_2 \sum_{n=|s-r|}^{+\infty} \frac{t^{2n+1}}{(2n+1)!} \left(\frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{2} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r}\!\!\cdot \vartheta^n_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2) \right. \\ && \hspace*{-1.2cm} \left. +\beta_1 \sum_{n=|s-r-1|}^{+\infty} \frac{t^{2n+1}}{(2n+1)!} \left(\frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{2} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r-1}\!\!\cdot \vartheta^n_{r,s-1}(\beta_1,\beta_2,\alpha_1,\alpha_2)\right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-2cm} p_{2r+1,2s+1}(t) = {\rm e}^{-\frac{(\alpha_1+\alpha_2+\beta_1+\beta_2)}{2}\, t} \sum_{n=|s-r|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \\ && \hspace*{-2cm} \times \left[\frac{t^{2n}}{(2n)!} \left(\frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{2} \right)^{2 n}+\frac{t^{2n+1}}{(2n+1)!} \left(\frac{\alpha_1+\alpha_2-\beta_1-\beta_2}{2} \right)^{2 n+1}\right] \cdot \vartheta^n_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2). \end{eqnarray*} (ii) Assume that $\alpha_1+\alpha_2=\beta_1+\beta_2$ Then, for all $s,r\in\mathbb{Z}$, we have the following four cases: $$p_{2r,2s}(t) = {\rm e}^{-{(\alpha_1+\alpha_2)}\, t} \sum_{n=|s-r|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \frac{t^{2n}}{(2n)!} \cdot \eta^n_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2),$$ \begin{eqnarray*} && \hspace*{-1.2cm} p_{2r,2s+1}(t) = {\rm e}^{-{(\alpha_1+\alpha_2)}\, t} \left\{ \alpha_1 \sum_{n=|s-r|}^{+\infty} \frac{t^{2n+1}}{(2n+1)!} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r}\!\!\cdot \eta^n_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2) \right. \\ && \hspace*{-1.2cm} \left. +\alpha_2 \sum_{n=|s-r+1|}^{+\infty} \frac{t^{2n+1}}{(2n+1)!} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r+1}\!\!\cdot \eta^n_{r,s+1}(\alpha_1,\alpha_2,\beta_1,\beta_2)\right\}, \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} p_{2r+1,2s}(t) = {\rm e}^{-{(\alpha_1+\alpha_2)}\, t} \left\{ \beta_2 \sum_{n=|s-r|}^{+\infty} \frac{t^{2n+1}}{(2n+1)!} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r}\!\!\cdot \eta^n_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2) \right. \\ && \hspace*{-1.2cm} \left. +\beta_1 \sum_{n=|s-r-1|}^{+\infty} \frac{t^{2n+1}}{(2n+1)!} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r-1}\!\!\cdot \eta^n_{r,s-1}(\beta_1,\beta_2,\alpha_1,\alpha_2)\right\}, \end{eqnarray*} $$p_{2r+1,2s+1}(t) = {\rm e}^{-{(\alpha_1+\alpha_2)}\, t} \sum_{n=|s-r|}^{+\infty} \frac{t^{2n}}{(2n)!} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \!\!\cdot \eta^n_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2).$$ \end{proposition} \begin{remark}\label{rem:link-with-DIM} If $\alpha_1=\alpha_2=\lambda$ and $\beta_1=\beta_2=\mu$, then Proposition \ref{prop:pmf-expressions} coincides with Proposition 1 in \cite{DicrescenzoIulianoMartinucci} and corrects a misprint contained in formula (18). \end{remark} In view of the next propositions we recall the definition of the generalized Fox-Wright function (see e.g. (1.11.14) in \cite{KilbasSrivastavaTrujillo}). 
We have \begin{equation}\label{ppsiq} {}_{p}\psi_{q} \left[\begin{array}{cc} (a_l,\alpha_l)_{1,p} & \\ & ; z\\ (b_l,\beta_l)_{1,q} & \end{array}\right] =\sum_{n=0}^{+\infty}\frac{z^n}{n!} \frac{\prod_{j=1}^p \Gamma(a_j+\alpha_j n)} {\prod_{l=1}^q \Gamma(b_l+\beta_l n)}, \end{equation} where $z,a_j,b_l\in {\mathbb{C}}$ and $\alpha_j,\beta_l\in {\mathbb{R}}$. \begin{proposition}\label{prop:pmf-expressions-fractional} Let $\{p_{k,n}^\nu(t):k,n\in\mathbb{Z},t\geq 0\}$ be as in \eqref{eq:pmf-notation-fractional}.\\ (i) Assume that $\alpha_1+\alpha_2\neq\beta_1+\beta_2$. Then, for all $s,r\in\mathbb{Z}$, we have the following four cases: \begin{eqnarray*} && \hspace*{-1.2cm} p^{\nu}_{2r,2s}(t) =\sum_{k=|s-r|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \vartheta^k_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2) \left(\frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{2} \right)^{2 k} \\ && \hspace*{-1.2cm} \times \left\{ \frac{t^{2 k \nu} (\alpha_1+\alpha_2-\beta_1-\beta_2)}{ (\alpha_1+\alpha_2+\beta_1+\beta_2) (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! & \!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (0,2)\!\! & \!\!(2 k \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. +\frac{t^{2 k \nu}}{(2k)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! &\!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (1,2)\!\! &\!\! (2 k \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{t^{(2 k+1) \nu} (\alpha_1+\alpha_2-\beta_1-\beta_2)}{2 (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (1,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{t^{(2 k+1) \nu} (\alpha_1+\alpha_2+\beta_1+\beta_2)}{2 (2k)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ;\frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (2,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} p^{\nu}_{2r,2s+1}(t) = \alpha_1 \sum_{k=|s-r|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \vartheta^k_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2) \left(\frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{2} \right)^{2 k} \\ && \hspace*{-1.2cm} \times \left\{ \frac{t^{(2 k+1) \nu}} { (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (1,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{2 t^{2 k \nu}}{(\alpha_1+\alpha_2+\beta_1+\beta_2)(2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! &\!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (0,2)\!\! &\!\! (2 k \nu+1,2 \nu) & \end{array}\right] \right\} \\ && \hspace*{-1.2cm} +\alpha_2 \sum_{k=|s-r+1|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r+1} \vartheta^k_{r,s+1}(\alpha_1,\alpha_2,\beta_1,\beta_2) \left(\frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{2} \right)^{2 k} \\ && \hspace*{-1.2cm} \times \left\{ \frac{t^{(2 k+1) \nu}}{(2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (1,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. 
-\frac{2 t^{2 k \nu} }{(\alpha_1+\alpha_2+\beta_1+\beta_2) (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! & \!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (0,2)\!\! & \!\!(2 k \nu+1,2 \nu) & \end{array}\right] \right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} p^{\nu}_{2r+1,2s}(t) = \beta_2 \sum_{k=|s-r|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \vartheta^k_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2) \left(\frac{\alpha_1+\alpha_2-\beta_1-\beta_2}{2} \right)^{2 k} \\ && \hspace*{-1.2cm} \times \left\{ \frac{t^{(2 k+1) \nu}} { (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (1,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{2 t^{2 k \nu}}{(\alpha_1+\alpha_2+\beta_1+\beta_2)(2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! &\!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (0,2)\!\! &\!\! (2 k \nu+1,2 \nu) & \end{array}\right] \right\} \\ && \hspace*{-1.2cm} +\beta_1 \sum_{k=|s-r-1|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r-1} \vartheta^k_{r,s-1}(\beta_1,\beta_2,\alpha_1,\alpha_2) \left(\frac{\alpha_1+\alpha_2-\beta_1-\beta_2}{2} \right)^{2 k} \\ && \hspace*{-1.2cm} \times \left\{ \frac{t^{(2 k+1) \nu}}{(2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (1,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{2 t^{2 k \nu} }{(\alpha_1+\alpha_2+\beta_1+\beta_2) (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! & \!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (0,2)\!\! & \!\!(2 k \nu+1,2 \nu) & \end{array}\right] \right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} p^{\nu}_{2r+1,2s+1}(t) = \sum_{k=|s-r|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \vartheta^k_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2) \left(\frac{\alpha_1+\alpha_2-\beta_1-\beta_2}{2} \right)^{2 k}\\ && \hspace*{-1.2cm} \times \left\{- \frac{t^{2 k \nu} (\alpha_1+\alpha_2-\beta_1-\beta_2)}{ (\alpha_1+\alpha_2+\beta_1+\beta_2) (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! & \!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (0,2)\!\! & \!\!(2 k \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. +\frac{t^{2 k \nu}}{(2k)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! &\!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (1,2)\!\! &\!\! (2 k \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{t^{(2 k+1) \nu} (\alpha_1+\alpha_2+\beta_1+\beta_2)}{2 (2k)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (2,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. +\frac{t^{(2 k+1) \nu} (\alpha_1+\alpha_2-\beta_1-\beta_2)}{2 (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ; \frac{[t^\nu (\alpha_1+\alpha_2+\beta_1+\beta_2)]^2}{4} \\ (1,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right\}. \end{eqnarray*} (ii) Assume that $\alpha_1+\alpha_2=\beta_1+\beta_2$. 
Then, for all $s,r\in\mathbb{Z}$, we have the following four cases: \begin{eqnarray*} && \hspace*{-1.2cm} p^{\nu}_{2r,2s}(t)=\sum_{k=|s-r|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \eta^k_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2) \\ && \hspace*{-1.2cm} \times \left\{ \frac{t^{2 k \nu}}{(2k)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! & \!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2 \\ (1,2)\!\! & \!\!(2 k \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -(\alpha_1+\alpha_2) \frac{t^{(2 k+1) \nu}}{(2k)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! &\!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2 \\ (2,2)\!\! &\!\! ((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} p^{\nu}_{2r,2s+1}(t) =\alpha_1 \sum_{k=|s-r|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \eta^k_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2) \\ && \hspace*{-1.2cm} \times \left\{ \frac{t^{(2 k+1) \nu}} { (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2\\ (1,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{t^{2 k \nu}}{(\alpha_1+\alpha_2)(2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! &\!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2\\ (0,2)\!\! &\!\! (2 k \nu+1,2 \nu) & \end{array}\right] \right\} \\ && \hspace*{-1.2cm} +\alpha_2 \sum_{k=|s-r+1|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r+1} \eta^k_{r,s+1}(\alpha_1,\alpha_2,\beta_1,\beta_2) \\ && \hspace*{-1.2cm} \times \left\{ \frac{t^{(2 k+1) \nu}}{(2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2 \\ (1,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{t^{2 k \nu} }{(\alpha_1+\alpha_2) (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! & \!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2 \\ (0,2)\!\! & \!\!(2 k \nu+1,2 \nu) & \end{array}\right] \right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} p^{\nu}_{2r+1,2s}(t) = \beta_2 \sum_{k=|s-r|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \eta^k_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2) \\ && \hspace*{-1.2cm} \times \left\{ \frac{t^{(2 k+1) \nu}} { (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2\\ (1,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{ t^{2 k \nu}}{(\alpha_1+\alpha_2)(2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! &\!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2 \\ (0,2)\!\! &\!\! (2 k \nu+1,2 \nu) & \end{array}\right] \right\} \\ && \hspace*{-1.2cm} +\beta_1 \sum_{k=|s-r-1|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r-1} \eta^k_{r,s-1}(\beta_1,\beta_2,\alpha_1,\alpha_2) \\ && \hspace*{-1.2cm} \times \left\{ \frac{t^{(2 k+1) \nu}}{(2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! & \!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2\\ (1,2)\!\! & \!\!((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{t^{2 k \nu} }{(\alpha_1+\alpha_2) (2k+1)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! & \!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2\\ (0,2)\!\! 
& \!\!(2 k \nu+1,2 \nu) & \end{array}\right] \right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} p^{\nu}_{2r+1,2s+1}(t) = \sum_{k=|s-r|}^{+\infty} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \eta^k_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2) \\ && \hspace*{-1.2cm} \times \left\{\frac{t^{2 k \nu}}{ (2k)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+1,2)\!\! & \!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2\\ (1,2)\!\! & \!\!(2 k \nu+1,2 \nu) & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{t^{(2 k+1) \nu} (\alpha_1+\alpha_2)}{(2k)!} \,{}_{2}\psi_{2} \left[\begin{array}{ccc} (2k+2,2)\!\! &\!\! (1,1) & \\ & & ; [t^\nu (\alpha_1+\alpha_2)]^2\\ (2,2)\!\! &\!\! ((2 k+1) \nu+1,2 \nu) & \end{array}\right] \right\}. \end{eqnarray*} \end{proposition} \begin{remark}\label{rem:how-to-recover-the-nonfractional-case-fractional} If $\nu=1$, then Proposition \ref{prop:pmf-expressions-fractional} coincides with Proposition \ref{prop:pmf-expressions} noting that $${}_{2}\psi_{2} \left[\begin{array}{ccc} (\zeta_1,2) & (1,1) & \\ & & ; z\\ (\omega_1,2) & (\zeta_1,2) & \end{array}\right]=\left\{\begin{array}{ll} \sqrt{z} \sinh({\sqrt{z})} &\ \mbox{if}\ \omega_1=0\\ \cosh({\sqrt{z})} &\ \mbox{if}\ \omega_1=1\\ \frac{\sinh({\sqrt{z})}}{\sqrt{z}}&\ \mbox{if}\ \omega_1=2. \end{array}\right.$$ \end{remark} We conclude with final proposition and we refer again to the generalized Fox-Wright function in \eqref{ppsiq}. \begin{proposition}\label{prop:pmf-expressions-TSS} Let $\{\tilde{p}_{k,n}^{\nu,\mu}(t):k,n\in\mathbb{Z},t\geq 0\}$ be as in \eqref{eq:pmf-notation-TSS}.\\ (i) Assume that $\alpha_1+\alpha_2\neq\beta_1+\beta_2$. Then, for all $s,r\in\mathbb{Z}$, we have the following four cases: \begin{eqnarray*} && \hspace*{-1cm} \tilde p^{\nu,\mu}_{2r,2 s}(t)={\rm e}^{\mu^\nu t} \sum_{n=|s-r|}^{+\infty} \left( \frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{\alpha_1+\beta_1+\alpha_2+ \beta_2+2 \mu} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \vartheta^n_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2) \\ && \hspace*{-1.2cm} \times \left\{\frac{1}{(2n)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left(\frac{\alpha_1+\beta_1+\alpha_2+\beta_2}{2}+\mu \right)^{\nu} \\ (1-2n,\nu)\!\! & \end{array}\right] \right. \\ && \hspace*{-1.2cm} \left. -\frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{\alpha_1+\beta_1+\alpha_2+ \beta_2+2 \mu} \frac{1}{(2n+1)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left(\frac{\alpha_1+\beta_1+\alpha_2+\beta_2}{2}+\mu \right)^{\nu} \\ (-2n,\nu)\!\! & \end{array}\right]\right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} \tilde p^{\nu,\mu}_{2r,2 s+1}(t)=\frac{2 {\rm e}^{\mu^\nu t} }{\alpha_1+\alpha_2+\beta_1+\beta_2+2\mu} \left\{-\alpha_1 \sum_{n=|s-r|}^{+\infty} \left( \frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{\alpha_1+\beta_1+\alpha_2+ \beta_2+2 \mu} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \right. \\ && \hspace*{-1.2cm} \left. \times \vartheta^n_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2) \frac{1}{(2n+1)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left(\frac{\alpha_1+\beta_1+\alpha_2+\beta_2}{2}+\mu \right)^{\nu} \\ (-2n,\nu)\!\! & \end{array}\right]\right. \\ && \hspace*{-1.2cm} \left. 
-\alpha_2 \sum_{n=|s-r+1|}^{+\infty} \left( \frac{\beta_1+\beta_2-\alpha_1-\alpha_2}{\alpha_1+\beta_1+\alpha_2+ \beta_2+2 \mu} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r+1} \vartheta^n_{r,s+1}(\alpha_1,\alpha_2,\beta_1,\beta_2) \right. \\ && \hspace*{-1.2cm} \left. \times \frac{1}{(2n+1)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left(\frac{\alpha_1+\beta_1+\alpha_2+\beta_2}{2}+\mu \right)^{\nu} \\ (-2n,\nu)\!\! & \end{array}\right]\right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} \tilde p^{\nu,\mu}_{2r+1,2 s}(t)=\frac{2 {\rm e}^{\mu^\nu t} }{\alpha_1+\alpha_2+\beta_1+\beta_2+2\mu} \left\{-\beta_2 \sum_{n=|s-r|}^{+\infty} \left( \frac{\alpha_1+\alpha_2-\beta_1-\beta_2}{\alpha_1+\beta_1+\alpha_2+ \beta_2+2 \mu} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \right. \\ && \hspace*{-1.2cm} \left. \times \vartheta^n_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2) \frac{1}{(2n+1)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left(\frac{\alpha_1+\beta_1+\alpha_2+\beta_2}{2}+\mu \right)^{\nu} \\ (-2n,\nu)\!\! & \end{array}\right]\right. \\ && \hspace*{-1.2cm} \left. -\beta_1 \sum_{n=|s-r-1|}^{+\infty} \left( \frac{\alpha_1+\alpha_2-\beta_1-\beta_2}{\alpha_1+\beta_1+\alpha_2+ \beta_2+2 \mu} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r-1} \vartheta^n_{r,s-1}(\beta_1,\beta_2,\alpha_1,\alpha_2) \right. \\ && \hspace*{-1.2cm} \left. \times \frac{1}{(2n+1)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left(\frac{\alpha_1+\beta_1+\alpha_2+\beta_2}{2}+\mu \right)^{\nu} \\ (-2n,\nu)\!\! & \end{array}\right] \right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1cm} \tilde p^{\nu,\mu}_{2r+1,2 s+1}(t)={\rm e}^{\mu^\nu t} \sum_{n=|s-r|}^{+\infty} \left( \frac{\alpha_1+\alpha_2-\beta_1-\beta_2}{\alpha_1+\beta_1+\alpha_2+ \beta_2+2 \mu} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \vartheta^n_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2) \\ && \hspace*{-1.2cm} \times \left\{ \frac{1}{(2n)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left(\frac{\alpha_1+\beta_1+\alpha_2+\beta_2}{2}+\mu \right)^{\nu} \\ (1-2n,\nu)\!\! & \end{array}\right] -\frac{\alpha_1+\alpha_2-\beta_1-\beta_2}{\alpha_1+\beta_1+\alpha_2+ \beta_2+2 \mu} \right. \\ && \hspace*{-1.2cm} \left. \times \frac{1}{(2n+1)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left(\frac{\alpha_1+\beta_1+\alpha_2+\beta_2}{2}+\mu \right)^{\nu} \\ (-2n,\nu)\!\! & \end{array}\right]\right\}. \end{eqnarray*} (ii) Assume that $\alpha_1+\alpha_2=\beta_1+\beta_2$. Then, for all $s,r\in\mathbb{Z}$, we have the following four cases: \begin{eqnarray*} && \hspace*{-2cm} \tilde p^{\nu,\mu}_{2r,2 s}(t)={\rm e}^{\mu^\nu t} \sum_{n=|s-r|}^{+\infty} \left( \frac{1}{\alpha_1+\alpha_2+ \mu} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \eta^n_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2) \\ && \hspace*{-0.7cm} \times \frac{1}{(2n)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left( \alpha_1+\alpha_2 +\mu \right)^{\nu} \\ (1-2n,\nu)\!\! & \end{array}\right]; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} \tilde p^{\nu,\mu}_{2r,2 s+1}(t)={\rm e}^{\mu^\nu t} \left\{-\alpha_1 \sum_{n=|s-r|}^{+\infty} \left( \frac{1}{\alpha_1+\alpha_2+ \mu} \right)^{2 n+1} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \eta^n_{r,s}(\alpha_1,\alpha_2,\beta_1,\beta_2) \right. 
\\ && \hspace*{-1.2cm} \left. \times \frac{1}{(2n+1)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left( \alpha_1+\alpha_2 +\mu \right)^{\nu} \\ (-2n,\nu)\!\! & \end{array}\right] -\alpha_2 \sum_{n=|s-r+1|}^{+\infty} \left( \frac{1}{\alpha_1+\alpha_2+ \mu} \right)^{2 n+1} \right. \\ && \hspace*{-1.2cm} \times \left. \left(\frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r+1} \eta^n_{r,s+1}(\alpha_1,\alpha_2,\beta_1,\beta_2) \frac{1}{(2n+1)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left( \alpha_1+\alpha_2 +\mu \right)^{\nu} \\ (-2n,\nu)\!\! & \end{array}\right]\right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} \tilde p^{\nu,\mu}_{2r+1,2 s}(t)= {\rm e}^{\mu^\nu t} \left\{-\beta_2 \sum_{n=|s-r|}^{+\infty} \left(\frac{1}{\alpha_1+\alpha_2+ \mu} \right)^{2 n+1} \left(\frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \eta^n_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2) \right. \\ && \hspace*{-1.2cm} \times \left. \frac{1}{(2n+1)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left( \alpha_1+\alpha_2 +\mu \right)^{\nu} \\ (-2n,\nu)\!\! & \end{array}\right] -\beta_1 \sum_{n=|s-r-1|}^{+\infty} \left( \frac{1}{\alpha_1+\alpha_2+ \mu} \right)^{2 n+1} \right. \\ && \hspace*{-1.2cm} \times \left. \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2}\right)^{s-r-1} \eta^n_{r,s-1}(\beta_1,\beta_2,\alpha_1,\alpha_2) \frac{1}{(2n+1)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left( \alpha_1+\alpha_2 +\mu \right)^{\nu} \\ (-2n,\nu)\!\! & \end{array}\right]\right\}; \end{eqnarray*} \begin{eqnarray*} && \hspace*{-1.2cm} \tilde p^{\nu,\mu}_{2r+1,2 s+1}(t)={\rm e}^{\mu^\nu t} \sum_{n=|s-r|}^{+\infty} \left( \frac{1}{\alpha_1+\alpha_2+ \mu} \right)^{2 n} \left( \frac{\alpha_1 \beta_1}{\alpha_2 \beta_2} \right)^{s-r} \eta^n_{r,s}(\beta_1,\beta_2,\alpha_1,\alpha_2) \\ && \hspace*{-1.2cm} \times \frac{1}{(2n)!} \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,\nu)\!\! & \\ & ; -t \left( \alpha_1+\alpha_2 +\mu \right)^{\nu} \\ (1-2n,\nu)\!\! & \end{array}\right]. \end{eqnarray*} \end{proposition} \begin{remark}\label{rem:how-to-recover-the-nonfractional-case-TSS} If $\nu=\mu=1$, then Proposition \ref{prop:pmf-expressions-TSS} coincides with Proposition \ref{prop:pmf-expressions} noting that $$ \,{}_{1}\psi_{1} \left[\begin{array}{cc} (1,1)\!\! & \\ & ; -t \left(\frac{\alpha_1+\beta_1+\alpha_2 +\beta_2}{2}+1 \right) \\ (1-l,1)\!\! & \end{array}\right] =\frac{(-t)^l}{2^l} (2+\alpha_1+\alpha_2+\beta_1+\beta_2)^l {\rm e}^{-\frac{t}{2} (2+\alpha_1+\alpha_2+\beta_1+\beta_2)}.$$ \end{remark} \end{document}
\begin{document} \title[Inequalities on Bruhat graphs, $R$- and KL polynomials] {Inequalities on Bruhat graphs, $R$- and Kazhdan-Lusztig polynomials} \author{Masato Kobayashi} \thanks{This article will appear in Journal of Combinatorial Theory, Series A \textbf{120} (2013) no.2, 470--482.} \date{\today} \subjclass[2000]{Primary:20F55;\,Secondary:51F15} \keywords{Coxeter groups, Bruhat graphs, $R$-polynomials, Kazhdan-Lusztig polynomials} \address{Department of Mathematics\\ Tokyo Institute of Technology, 2-12-1 Ookayama, Tokyo 152-8551, Japan.} \email{[email protected]} \maketitle \begin{abstract} From a combinatorial perspective, we establish three inequalities on coefficients of $R$- and Kazhdan-Lusztig polynomials for crystallographic Coxeter groups: (1) nonnegativity of $(q-1)$-coefficients of $R$-polynomials, (2) a new criterion of rational singularities of Bruhat intervals by the sum of quadratic coefficients of $R$-polynomials, (3) existence of a certain strict inequality (coefficientwise) of Kazhdan-Lusztig polynomials. Our main idea is to understand Deodhar's inequality in connection with a sum of $R$-polynomials and edges of Bruhat graphs. \end{abstract} \tableofcontents \section{Introduction}\label{sintro} In 1979 \cite{KL}, Kazhdan and Lusztig discovered two families of polynomials (now known as \emph{$R$-} and \emph{Kazhdan-Lusztig polynomials}) in the course of studying Hecke algebras and Schubert varieties. These families of polynomials are indexed by pairs of elements in a Coxeter group, and the polynomials are in one variable and have integer coefficients. Because Coxeter groups are involved, \emph{Bruhat order} plays a central role in the theory. Bruhat order is locally \emph{Eulerian}. Eulerian posets have been of great importance in combinatorics; one particularly important example is that of the face lattices of convex polytopes, and there has been much study of their $f$- and $h$-vectors. We will not list the large number of classical references on this topic but instead refer to books by Stanley \cite{stanley4} and Ziegler \cite{ziegler} and the references therein. \\ \indent Recently, there has been work specifically on the $f$-vectors of lower Bruhat intervals. Bj\"{o}rner-Ekedahl \cite[Theorems A, E]{bjorner3} and Brion \cite[Corollary 2]{brion} have shown that certain unimodality properties hold for $f$-vectors of such intervals. Their approach is of a rather geometric flavor, using the theory of intersection cohomology.\\ \indent From a more combinatorial perspective, the \emph{Bruhat graph}, introduced by Dyer \cite{dyer1}, is one of the most powerful tools for encoding information about Bruhat intervals. Among Bruhat intervals are two classes of fundamental Eulerian structures: \emph{boolean} and \emph{dihedral} intervals. These coincide up to length 2; however, for length $\ge 3$, their graph structures are different (Figures \ref{ss3}, \ref{b3}). In particular, the graph in Figure \ref{ss3} contains an edge of length 3. This non-boolean structure leads to the study of labeled Bruhat paths on Bruhat graphs. Dyer \cite{dyer2} gave an interpretation of $\wt{R}$- (and $R$-)polynomials as the generating function of paths with increasing labels in an arbitrary reflection order. More recently, Billera \cite{billera} and Billera-Brenti \cite{billera2} studied Bruhat intervals using quasisymmetric functions that extend the flag $f$- and $h$-numbers.
They introduced the complete cd-index as a more sophisticated way to compute $R$- and Kazhdan-Lusztig polynomials.\\ \indent Bruhat graphs and these polynomials all come into play when we study \emph{rational smoothness} and \emph{singularities} of Bruhat intervals in crystallographic Coxeter groups. The terms ``rationally smooth" and ``singular" come from the geometry of Schubert varieties. There are many equivalent criteria \cite[Section 13.2]{billey1}: regular Bruhat graphs, trivial Kazhdan-Lusztig polynomials, a boolean-like sum of $R$-polynomials and palindromic Poincar\'{e} polynomials. Particularly important is \emph{Deodhar's inequality}, to which many researchers contributed in the 1990s: Billey \cite{billey2}, Carrell-Peterson \cite{carrell}, Dyer \cite{dyer4}, Kumar \cite{kumar1} and Polo \cite{polo2}.\\ \indent The motivation for this article was to understand Deodhar's inequality in a more explicit connection with a sum of $R$-polynomials: on the one hand, Deodhar's inequality guarantees nonnegativity of a certain integer; on the other hand, $R$-polynomials involve many negative coefficients. The key idea for our approach is to view $R$-polynomials as polynomials in $q-1$, not $q$. Then nonnegativity of $\wt{R}$-polynomials comes into the picture, as we shall see. Although this idea is simple, it is useful for analyzing coefficients not only of $R$-polynomials but also of Kazhdan-Lusztig polynomials.\\ \indent Our main result consists of three theorems on inequalities of $R$- and Kazhdan-Lusztig polynomials: \begin{itemize} \item nonnegativity of $(q-1)$-coefficients of $R$-polynomials (Theorem \ref{nth1}), \item a new criterion of singularities for Bruhat intervals (Theorem \ref{nth2}), \item the existence of a strict inequality of Kazhdan-Lusztig polynomials \\(Theorem \ref{nth3}). \end{itemize} Proofs are elementary throughout; nonetheless, we hope that these results will contribute to the analysis of such polynomials in the future. Here is the organization of the article:\,Sections \ref{snotn} and \ref{srpolys1} record fundamental terminology on Coxeter groups and $R$-polynomials. Section \ref{sr2} gives an explicit description of coefficients of $R$-polynomials using the idea of the absolute length on Bruhat graphs. In Section \ref{s2df}, we recall the notions of rational smoothness and singularity. In Section \ref{revi}, we give a new interpretation of Deodhar's inequality in terms of a sum of $R$-polynomials. After providing a definition and some background on Kazhdan-Lusztig polynomials in Section \ref{skl1}, we prove Theorem \ref{nth3} in Section \ref{skl2}. \begin{figure}[h!] \caption{Bruhat graph of dihedral interval of rank 3} \label{ss3} \[ \begin{xy} 0;<5mm,0mm>: ,0+(-12,0)*{\CIRCLE}="b1"*+++!U{} ,(-3,2)+(-12,0)*{\CIRCLE}="al1"*++!R{} ,(3,2)+(-12,0)*{\CIRCLE}="ar1"*++!L{} ,(-3,5)+(-12,0)*{\CIRCLE}="cl1"*++!R{} ,(3,5)+(-12,0)*{\CIRCLE}="cr1"*++!L{} ,(0,7)+(-12,0)*{\CIRCLE}="t1"*+++!D{} ,\ar@{->}"b1";"al1" ,\ar@{->}"b1";"ar1" ,\ar@{->}"al1";"cl1" ,\ar@{->}"ar1";"cr1" ,\ar@{->}"cl1";"t1" ,\ar@{->}"cr1";"t1" ,\ar@{->}"al1";"cr1" ,\ar@{->}"ar1";"cl1" ,\ar@{->}"b1";"t1" \end{xy}\] \end{figure} \begin{figure}[h!]
\caption{Bruhat graph of boolean poset of rank 3} \label{b3} \[ \begin{xy} 0;<5mm,0mm>: ,0+(-12,0)*{\CIRCLE}="b1"*+++!U{} ,0+(-12,2)*{\CIRCLE}="am"*+++!U{} ,0+(-12,5)*{\CIRCLE}="cm"*+++!U{} ,(-3,2)+(-12,0)*{\CIRCLE}="al1"*++!R{} ,(3,2)+(-12,0)*{\CIRCLE}="ar1"*++!L{} ,(-3,5)+(-12,0)*{\CIRCLE}="cl1"*++!R{} ,(3,5)+(-12,0)*{\CIRCLE}="cr1"*++!L{} ,(0,7)+(-12,0)*{\CIRCLE}="t1"*+++!D{} ,\ar@{->}"b1";"am" ,\ar@{->}"b1";"ar1" ,\ar@{->}"b1";"al1" ,\ar@{<-}"cr1";"am" ,\ar@{<-}"cm";"ar1" ,\ar@{->}"al1";"cl1" ,\ar@{->}"ar1";"cr1" ,\ar@{->}"cl1";"t1" ,\ar@{->}"cr1";"t1" ,\ar@{->}"cm";"t1" ,\ar@{->}"al1";"cm" ,\ar@{->}"am";"cl1" \end{xy}\] \end{figure} \section{Notation}\label{snotn} Throughout this article, we follow common notation in the context of Coxeter groups \cite{bjorner2,humphreys}. By $(W, S)$ (or simply $W$) we mean a Coxeter system with length function $\ell$. Unless otherwise specified, $u, v, w$ are elements of $W$ and $e$ is the unit. Let $T=\cup_{w\in W}\, w^{-1}Sw$ denote the set of reflections. Write $u\to w$ if $w=ut$ for some $t\in T$ and $\ell(u)<\ell(w)$. Define \emph{Bruhat order} $u\le w$ if there exist $v_1, \dots, v_n\in W$ such that $u\to v_1\to \cdots \to v_n=w$. For $u\le w$, let $[u, w]\overset{\textnormal{def}}{=}\{v\in W\mid u\le v\le w\}$ denote a \emph{Bruhat interval}. Often $\ell(u, w)\overset{\textnormal{def}}{=}\ell(w)-\ell(u)$ abbreviates the length of the interval.\\ \indent More notation on polynomials: As usual, the symbol $\mathbb{N}$ indicates the set of nonnegative integers and $\mathbb{Z}$ the integers. For nonzero $f\in \mathbb{Z}[q]$, say $f$ is \emph{palindromic} if $q^{\deg(f)}f(q^{-1})=f(q)$. Let $[q^n](f)$ denote the coefficient of $q^n$ in $f$. An inequality $f\le g$ (or $f\le_q g$) means $[q^n](f)\le [q^n](g)$ for all $n$. In addition, we use some special notation; see Remark \ref{div}. \section{$R$-polynomials}\label{srpolys1} Following \cite[Section 5.1]{bjorner2}, we first give a definition of $R$-polynomials. \begin{fact}There exists a unique family of polynomials $\{R_{uw}(q)\mid u, w \in W \} \subseteq \mathbb{Z}[q]$ (\emph{$R$-polynomials}) such that \begin{enumerate} \item $R_{uw}(q)=0$ \mbox{if $u \not \le w$}, \item $R_{uw}(q)=1$ \mbox{if $u=w$}, \item if $s\in S$ and $ws<w$, then \begin{align*} R_{uw}(q)= \begin{cases} R_{us, ws}(q) &\mbox{ if } us<u,\\ qR_{us, ws}(q)+(q-1)R_{u, ws}(q) &\mbox{ if } u<us. \end{cases} \end{align*} \end{enumerate} \end{fact} We can equivalently construct such polynomials from the Hecke algebra of $W$ as in \cite[Chapter 7]{humphreys}. But this definition is enough for our purpose.\\ \indent We will use the following properties of $R$-polynomials later. \begin{fact}{Let $u\le w$.\label{ff} \begin{enumerate}{\item $R_{uw}(q)$ is a monic polynomial of degree $\ell(u, w)$. \item \label{ntr}If $u\ne w$, then $q-1$ divides $R_{uw}(q)$, i.e., $R_{uw}(1)=0$. \item $R_{uw}(q)=R_{u^{-1}, w^{-1}}(q)$. \item We have\label{ralt} \[ \sum_{ u\le v \le w}(-1)^{\ell(u, v)}R_{uv}(q)R_{vw}(q)=\delta_{uw}\mbox{ (Kronecker delta)}.\] \item $q^{\ell(u, w)}R_{uw}(q^{-1})=(-1)^{\ell(u, w)}R_{uw}(q)$.\label{fff} }\end{enumerate} }\end{fact} $R$-polynomials involve many negative $q$-coefficients; however, once we regard them as $(q-1)$-polynomials, we can show the nonnegativity of such coefficients (Theorem \ref{nth1}).
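Since the defining recursion above is completely explicit, it can be checked by machine on small examples. The following Python sketch is our own illustration (it is not part of the article, and all helper names are ours): it computes $R_{uw}(q)$ in the symmetric group $S_3$ in one-line notation, testing Bruhat order by the standard tableau criterion; for instance it reproduces $R_{123,321}(q)=(q-1)(q^2-q+1)$, which reappears in Example \ref{neg} below.
\begin{verbatim}
# Illustration only: R-polynomials of S_3 from the defining recursion.
import sympy as sp

q = sp.symbols('q')

def bruhat_leq(u, w):
    # u <= w in Bruhat order iff, for every k, the sorted k-prefix of u is
    # entrywise <= the sorted k-prefix of w (tableau criterion for S_n)
    return all(a <= b for k in range(1, len(u) + 1)
               for a, b in zip(sorted(u[:k]), sorted(w[:k])))

def times_s(u, i):
    # right multiplication by the simple reflection s_i: swap positions i, i+1
    v = list(u); v[i], v[i + 1] = v[i + 1], v[i]
    return tuple(v)

def R(u, w):
    if not bruhat_leq(u, w):
        return sp.Integer(0)
    if u == w:
        return sp.Integer(1)
    # any right descent of w gives some s with ws < w
    i = next(j for j in range(len(w) - 1) if w[j] > w[j + 1])
    us, ws = times_s(u, i), times_s(w, i)
    if u[i] > u[i + 1]:                              # us < u
        return R(us, ws)
    return sp.expand(q * R(us, ws) + (q - 1) * R(u, ws))

print(sp.factor(R((1, 2, 3), (3, 2, 1))))            # (q - 1)*(q**2 - q + 1)
\end{verbatim}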
Next, following \cite[Section 5.3]{bjorner2}, we introduce another family of polynomials associated to $R$-polynomials. They have nonnegative integer coefficients: \begin{fact} There exists a unique family of polynomials $\{\widetilde{R}_{uw}(q)\mid u, w \in W \} \subseteq \mathbb{N}[q]$ (\emph{$\widetilde{R}$-polynomials}) such that \begin{enumerate} \item $\wt{R}_{uw}(q)=0$ \mbox{if $u \not \le w$}, \item $\wt{R}_{uw}(q)=1$ \mbox{if $u=w$}, \item if $s\in S$ and $ws<w$, then \begin{align*} \widetilde{R}_{uw}(q)= \begin{cases} \widetilde{R}_{us, ws}(q) &\mbox{ if } us<u,\\ \widetilde{R}_{us, ws}(q)+q\widetilde{R}_{u, ws}(q) &\mbox{ if } u<us, \end{cases} \end{align*} \item $\wt{R}_{uw}(q)$ ($u \le w$) is a monic polynomial of degree $\ell(u, w)$, \item $R_{uw}(q)=q^{\frac{\ell(u, w)}{2}}\wt{R}_{uw}(q^{\frac{1}{2}}-q^{-\frac{1}{2}})$. \end{enumerate} \end{fact} \section{Some nonnegativity of $R$-polynomials} \label{sr2} Now the main discussion begins with Bruhat graphs, our central idea. Recall that $u\to w$ means $w=ut$ for some $t\in T$ and $\ell(u)<\ell(w)$. \begin{defn}{The \emph{Bruhat graph} of $W$ is the directed graph whose vertices are the elements $w\in W$ and whose edges are the relations $u\to w$. We can also consider induced subgraphs for subsets of $W$. By a \emph{Bruhat path} we always mean a directed path (hence a strictly increasing chain) $u\to v_1 \to \dots \to v_n=w$ in the Bruhat graph of $W$. }\end{defn} \begin{defn}{Let $u\le w$. Define the \emph{absolute length} between $u$ and $w$ to be \[a(u, w)=\min\{n\ge 0 \mid u\to v_1 \to \dots \to v_n=w\}.\] }\end{defn} \begin{rmk}{Hence $u\to w$ is equivalent to $a(u, w)=1$. Note that we have $a(u, w)\le \ell(u, w)$ by the Chain Property \cite[Theorem 2.2.6]{bjorner2} and furthermore $(-1)^{a(u, w)}=(-1)^{\ell(u, w)}$ since $\ell(v_i, v_{i+1})$ is odd at each edge $v_i\to v_{i+1}$.}\end{rmk} \begin{fact}\cite[Exercise 35, Chapter 5]{bjorner2}{\label{d1} For all $u$ and $w$, we have \[ R'_{uw}(1)=\begin{cases} 1 & \mbox{if } u\to w,\\ 0 &\mbox{otherwise.} \end{cases} \] Here, $R'_{uw}(1)$ means $\frac{d}{dq}R_{uw}(q)\Bigr|_{q=1}$. }\end{fact} \begin{rmk}{\label{div}In the context of $R$-polynomials, we usually think of them as polynomials with integer coefficients. However, it is also possible to regard them as real polynomials so that we can speak of their derivatives. This idea is particularly helpful when we want to compute some specific coefficients:\, recall from calculus that for given $f(q)\in \mathbb{R}[q]$, $c\in \mathbb{R}$ and an expansion $f(q)= \sum_{n=0}^{d} a_n(q-c)^n$ with $a_n\in \mathbb{R}$, we have $a_n=f^{(n)}(c)/n!$ where $f^{(n)}$ means the $n$-th derivative. Below, we apply this idea to $R$-polynomials with $c=1$. For convenience, we adopt special notation: $[(q-1)^n](f)\overset{\textnormal{def}}{=} f^{(n)}(1)/n!$ for all nonnegative integers $n$. In addition, we write $f\le_{q-1} g$ to mean $[(q-1)^n](f)\le [(q-1)^n](g)$ for all $n$. }\end{rmk} As a consequence of Fact \ref{d1}, we have that if $u\to w$, then $q-1$ divides $R_{uw}(q)$ while $(q-1)^2$ does not. We may ask more: When does $(q-1)^2$ divide $R_{uw}(q)$ in general? What does the part of $R_{uw}(q)$ other than a power of $q-1$ look like? Theorem \ref{nth1} and Corollary \ref{mth1} below answer these questions. Here we need a lemma: \begin{lem}{Let $u<w$, \label{inc}$a=a(u, w)$ and $\ell=\ell(u, w)$.
Then there exist positive integers $c_\ell$, $c_{\ell-2}, \dots, c_a$ such that \[\wt{R}_{uw}(q)=c_\ell q^\ell+c_{\ell-2}q^{\ell-2}+\dots +c_aq^a. \] Consequently, we have \[R_{uw}(q)=\sum_{k=0}^{\frac{\ell-a}{2}} c_{a+2k}\,q^{\frac{\ell-a-2k}{2}} (q-1)^{a+2k}.\] }\end{lem} \begin{proof} For the first statement, refer to \cite[Theorem 2.5]{incitti4}. Then \begin{align*} R_{uw}(q)&=q^{\frac{\ell}{2}}\wt{R}_{uw}(q^{\frac{1}{2}}-q^{-\frac{1}{2}})\\ &=q^{\frac{\ell}{2}}\sum_{k=0}^{\frac{\ell-a}{2}}c_{a+2k}(q^{\frac{1}{2}}-q^{-\frac{1}{2}})^{a+2k}\\ &=q^{\frac{\ell}{2}}\sum_{k=0}^{\frac{\ell-a}{2}}c_{a+2k}(q^{-\frac{1}{2}}(q-1))^{a+2k}\\ &=\sum_{k=0}^{\frac{\ell-a}{2}}c_{a+2k}\,q^{\frac{\ell-a-2k}{2}}(q-1)^{a+2k}. \end{align*} \end{proof} \begin{main} \label{nth1} Let $u< w$ and $n$ be a nonnegative integer. If $n<a(u, w)$ or $n>\ell(u, w)$, then $[(q-1)^n](R_{uw})=0$. Otherwise, $[(q-1)^n](R_{uw})>0$. In particular, $(q-1)^{a(u, w)}$ is the largest power of $q-1$ that divides $R_{uw}(q)$. As a consequence, we have $R_{uw}(q)\ge_{q-1}0$ for all $u, w$. \end{main} \begin{proof} Let $a=a(u, w)$ and $\ell=\ell(u, w)$ for simplicity. Consider the expression of $R_{uw}(q)$ in Lemma \ref{inc}. Then $q^{\frac{\ell-a-2k}{2}}=\sum_{i\ge 0}\binom{(\ell-a-2k)/2}{i} (q-1)^{i}\ge_{q-1}0$ for all $k$. As a result, all terms $(q-1)^n$ ($a\le n\le \ell$) appear in the sum with positive coefficients. If $n>\ell$, then $[(q-1)^n](R_{uw})=0$ since $\deg R_{uw}(q)=\ell$. \end{proof} \begin{cor}\label{mth1} Let $u< w$ and $d=\ell(u, w)-a(u, w)$. Then there exist unique integers $f_{i-1}$, $h_i$ $(0\le i\le d)$ such that \[R_{uw}(q)=(q-1)^{a(u, w)}\left(\sum_{i=0}^df_{i-1}(q-1)^{d-i}\right)=(q-1)^{a(u, w)}\left(\sum_{i=0}^dh_iq^{d-i}\right), \] $f_{-1}=h_0=1$ and $f_{i-1}>0, h_i=h_{d-i}$ for all $i$. \end{cor} \begin{proof} Existence of such positive numbers $f_{i-1}$ follows from Theorem \ref{nth1}. Since $R_{uw}(q)$ is monic, we have $f_{-1}=h_0=1$. Observe next that \begin{align*} q^{d}\left(\sum_{i=0}^dh_i(q^{-1})^{d-i}\right)&=\frac{q^{\ell(u, w)} }{q^{a(u, w)}}\,\frac{R_{uw}(q^{-1})}{(q^{-1}-1)^{a(u, w)}}\\ &=\frac{(-1)^{\ell(u, w)}}{(-1)^{a(u, w)}}\,\frac{R_{uw}(q)}{(q-1)^{a(u, w)}} \textnormal{ (Fact \ref{ff} (\ref{fff}))}\\ &=\sum_{i=0}^dh_iq^{d-i}. \end{align*} The second factor is thus palindromic, i.e., $h_i=h_{d-i}$. It remains to show that $f_{i-1}$ and $h_i$ are all integers. For $f_{i-1}$, we argue by induction on $\ell(w)$: If $\ell(w)=1$, then $u=e, d=0$ so that $f_{-1}=1$. If $\ell(w)\ge 2$, by the recursive relations of $R$-polynomials, we may assume that $u<us$ and $ws<w$ for some $s\in S$. Now the inductive hypothesis shows that both $R_{us, ws}(q)$ and $R_{u, ws}(q)$ are $\ge_{q-1} 0$ with integer coefficients. Therefore so is $R_{uw}(q)$ since \begin{align*} R_{uw}(q)&=qR_{us, ws}(q)+(q-1)R_{u, ws}(q)\\ &=(q-1)R_{us, ws}(q)+R_{us, ws}(q)+(q-1)R_{u, ws}(q). \end{align*} All $h_i$ are also integers since there exist linear relations $h_i=\sum_{j=0}^i(-1)^{i-j}\binom{d-j}{d-i}f_{j-1}$. \end{proof} \begin{rmk}{Some $h_i$ can be negative (Example \ref{neg}). We hope to give a combinatorial interpretation of the positive integers $f_{i-1}$.
}\end{rmk} Brenti showed the following result \cite[Theorem 6.3]{brenti4}; however, the last statement of Theorem \ref{nth1} now gives a more direct proof. \begin{cor}{Let $u<w$. Then the following are equivalent:\label{c1} \begin{enumerate}{ \item $R_{uw}(q)=(q-1)^{\ell(u,w)}$, \item $a(u, w)=\ell(u, w)$. In other words, there do not exist $x, y\in [u, w]$ such that $x\to y$ and $\ell(x, y)= 3$. }\end{enumerate} In particular, if $[u, w]$ is boolean, then $R_{uw}(q)=(q-1)^{\ell(u,w)}$. }\end{cor} That is, whenever $[u, w]$ contains an edge of length $3$, $R_{uw}(q)$ has a factor other than $q-1$. We see a small example. \begin{ex}{\label{neg} Let $W=\textnormal{A}_2$, $u=123$ and $w=321$ (one-line notation). Figure \ref{ss3} shows the Bruhat graph of $[u, w]$. Observe that $u\to w$ with $\ell(u, w)=3$. As computed in \cite[Example 5.1.2]{bjorner2}, the $R$-polynomial of $[u, w]$ is $(q-1)(q^2-q+1)$. Since $q^2-q+1=(q-1)^2+(q-1)+1$, we have \[R_{uw}(q)=(q-1)(q^2-q+1)=(q-1)^3+(q-1)^2+(q-1)\ge_{q-1} 0.\] }\end{ex} We close this section with one more result; it bounds the coefficients of $R$-polynomials by binomial coefficients. \begin{prop}{ \label{bn} Let $w\in W$. Then for each $u<w$, we have \[(q-1)^{\ell}\le_{q-1} R_{uw}(q)\le_{q-1} q^{\ell}.\] }\end{prop} \begin{proof} The first inequality follows from Corollary \ref{mth1}. For the second, it is enough to show that $[(q-1)^n](R_{uw})\le \binom{\ell}{n}$ for all $n$, $0\le n\le \ell$. The proof proceeds by induction on $\ell(w)$: If $\ell(w)=1$, then $R_{uw}(q)=q-1$ so that $[q-1](R_{uw})=1$. Suppose next $\ell(w)\ge 2$. Choose $s\in S$ such that $ws<w$. If $us<u$, then $R_{uw}(q)=R_{us, ws}(q)$ in which case we are done by induction ($\ell(ws)<\ell(w)$). If $us>u$, then $R_{uw}(q)=qR_{us, ws}(q)+(q-1)R_{u, ws}(q)$ so that \begin{align*} [(q-1)^n](R_{uw})&=[(q-1)^n]((q-1)R_{us, ws}+R_{us, ws}+(q-1)R_{u, ws})\\ &\le \binom{\ell-2}{n-1}+\binom{\ell-2}{n}+\binom{\ell-1}{n-1}\mbox{\quad (induction)}\\ &= \binom{\ell-1}{n}+\binom{\ell-1}{n-1}\\ &= \binom{\ell}{n}. \end{align*} \end{proof} \begin{rmk}{Unfortunately, this is a little different from Brenti's Conjecture: $|[q^n](R_{uw})|\le \binom{\ell}{n}$ \cite[Problem 5.2]{brenti1}. The conjecture remains open at the time of writing (March 2012). We hope that our inequality above is helpful for proving it. See also Caselli \cite{caselli2} for some relations between $q$-coefficients of $R$-polynomials and binomial coefficients. }\end{rmk} \section{Rational smoothness and singularities}\label{s2df} In this section, we recall rational smoothness and singularities for Bruhat intervals. This is a key concept in the sequel. We begin with a convention: \begin{conv}\label{convv} \emph{In what follows we assume that $W$ is crystallographic}, i.e., its Coxeter graph has Coxeter labels only from $\{2, 3, 4, 6, \infty\}$. \end{conv} The reason for this assumption is to ensure the correctness of Definition \ref{nons} and Facts \ref{kl2} and \ref{kl22}. \begin{defn}{Let $u\le w$. Set \[\ol{N}(u, w)=\{v\in W \mid u\to v\le w\} \mbox{ and } \ol{\ell}(u, w)=|\ol{N}(u, w)|.\] }\end{defn} In words, $\ol{N}(u, w)$ is the neighborhood of the bottom vertex on the Bruhat graph of $[u, w]$ (Figure \ref{nuw}); $\ol{\ell}(u, w)$ is the number of those outgoing edges.
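To make the quantities $\ol{N}(u,w)$ and $\ol{\ell}(u,w)$ concrete, the following Python sketch (our own illustration, not part of the article; all helper names are ours) computes them for intervals in the symmetric group $S_4$, with $\ell$ the inversion number and the reflections the transpositions, and prints $\ol{\ell}(u,w)$ next to $\ell(u,w)$; the comparison of these two numbers is exactly the subject of the next definition and of Deodhar's inequality below.
\begin{verbatim}
# Illustration only: \bar N(u,w), \bar\ell(u,w) and \ell(u,w) in S_4.
from itertools import combinations

def length(u):
    # number of inversions = Coxeter length in S_n
    return sum(1 for i, j in combinations(range(len(u)), 2) if u[i] > u[j])

def bruhat_leq(u, w):
    # tableau criterion for Bruhat order in S_n
    return all(a <= b for k in range(1, len(u) + 1)
               for a, b in zip(sorted(u[:k]), sorted(w[:k])))

def N_bar(u, w):
    # {v : u -> v and v <= w}, where u -> v means v = u t for a transposition t
    # with length(v) > length(u)
    out = []
    for i, j in combinations(range(len(u)), 2):
        v = list(u); v[i], v[j] = v[j], v[i]; v = tuple(v)
        if length(v) > length(u) and bruhat_leq(v, w):
            out.append(v)
    return out

u, w = (1, 2, 3, 4), (3, 4, 1, 2)
print(len(N_bar(u, w)), length(w) - length(u))   # prints 5 and 4 for this pair
\end{verbatim}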
\begin{defn}{The \emph{defect} of $[u, w]$ is $\textnormal{df}(u, w)=\ol{\ell}(u, w)-\ell(u, w)$. }\end{defn} Nonnegativity of this integer is known: \begin{figure} \caption{$\ol{N}(u, w)$} \label{nuw} \[ \begin{xy} 0;<14mm,0mm>: ,(0,4)*+={\CIRCLE}*+++!D{w} ,(0,0)*{\CIRCLE}*+++!U{u} ,\ar@{->}(0,0);(1,1) ,\ar@{->}(0,0);(-1,1) ,\ar@{->}(0,0);(-0.8,3) ,\ar@{->}(0,0);(0,1) ,\ar@{->}(0,0);(0.8,3) ,\ar@{..}(0,1);(0,4) ,\ar@{..}(-1,1);(0,4) ,\ar@{..}(0,4);(-0.8,3) ,\ar@{..}(0,4);(1,1) ,\ar@{..}(0,4);(0.8,3) \end{xy}\] \end{figure} \begin{fact}{{\cite[Deodhar's inequality]{dyer4}}\label{deodd} $\textnormal{df}(u, w)\ge 0$. }\end{fact} \begin{defn}\cite[Section 13.2]{billey1}{\label{nons} Let $u\le w$. Say $[u, w]$ is \emph{rationally smooth} if we have the following equivalent conditions: \begin{enumerate}{\item $\sum_{x\le v\le w}R_{xv}(q)=q^{\ell(x, w)}$ for all $x$ with $u\le x<w$,\label{nn} \item $\textnormal{df}(x, w)=0$ for all $x$ with $u\le x<w$.}\end{enumerate} Otherwise, say $[u, w]$ is \emph{singular}.\label{sgg} }\end{defn} Recall from Theorem \ref{nth1} that $R_{xv}(q)\ge_{q-1} 0$ for all $x, v$. Hence a sum of such polynomials satisfies the same property. In this rationally smooth case, we can write the sum in this way: \[\sum_{x\le v\le w}R_{xv}(q)=q^\ell=(q-1+1)^\ell=\sum_{n=0}^{\ell} \binom{\ell}{n}(q-1)^n.\] In particular, taking $n=1$, $[q-1]\left(\sum_{x\le v\le w}R_{xv}\right)=\ell(x, w)$. In the next section, we establish two results on such coefficients from a more general point of view. They are stated in the same form. \section{Deodhar's inequality revisited}\label{revi} \begin{prop}{\label{dvc}Let $u\le w$. Then for all $x$ with $u\le x<w$, we have \[[q-1]\left(\sum_{x\le v\le w}R_{xv}\right)-\ell(x, w)\underset{(*)}{\ge} 0.\] Moreover, $[u, w]$ is singular if and only if \textnormal{($*$)} is strict for some $x$ with $u\le x<w$. }\end{prop} \begin{proof} Recall from Fact \ref{d1}: $[q-1](R_{xv})=R'_{xv}(1)=\begin{cases} 1&\mbox{if }x\to v,\\ 0 &\mbox{otherwise.}\end{cases}$ It follows that $[q-1]\left(\sum_{x\le v\le w}R_{xv}\right)=|\ol{N}(x, w)|=\ol{\ell}(x, w)$. Hence ($*$) is nothing but a rephrasing of Deodhar's inequality: $\textnormal{df}(x, w)=\ol{\ell}(x, w)-\ell(x, w)\ge 0$. Consequently, $[u, w]$ is singular if and only if $\textnormal{df}(x, w)>0$ for some $x$ with $u\le x<w$ if and only if $[q-1]\left(\sum_{x\le v\le w}R_{xv}\right)-\ell(x, w)>0$ for some $x$ with $u\le x<w$. \end{proof} \begin{main}\label{nth2} Let $u\le w$. Then for all $x$ with $u\le x<w$, we have \[[(q-1)^2]\left(\sum_{x\le v\le w}R_{xv}\right)- \binom{\ell(x, w)}{2} \underset{(*)}{\ge} 0.\] Moreover, $[u, w]$ is singular if and only if \textnormal{($*$)} is strict for some $x$ with $u\le x<w$. \end{main} We need three lemmas for the proof of Theorem \ref{nth2}. \begin{lem}{\cite[Lemma 12.2.12 ($\textnormal{b}_2$)]{kumar2} If $u\to w$, then $(q-1)^2$ divides $R_{uw}(q)-q^{\frac{\ell(u, w)-1}{2}}(q-1)$.\label{le1}}\end{lem} \begin{lem}{If $u\to w$, then $c_1=[q](\wt{R}_{uw})=1$ where $c_1$ is the integer given in Lemma \ref{inc}.\label{le2} }\end{lem} \begin{proof} Consider the expression of $R_{uw}(q)$ in Lemma \ref{inc} with $a=1$. Since $(q-1)^2$ divides $R_{uw}(q)-q^{\frac{\ell(u, w)-1}{2}}(q-1)$, we must have $c_1=1$. \end{proof} \begin{defn}{By $u\to\to w$ we mean $u<w$ and $a(u, w)=2$.
For such $(u, w)$, define $m(u, w)=|\{v\in [u, w]\mid u\to v\to w\}|$. }\end{defn} \begin{lem}{\label{le3} \begin{enumerate}{\item If $u\to w$, then $R_{uw}''(1)=\ell(u, w)-1$. \item If $u\to\to w$, then $R_{uw}''(1)=m(u, w)$. }\end{enumerate} }\end{lem} \begin{proof} (1):\, Suppose $u\to w $. Consider the expression of $R_{uw}(q)$ in Lemma \ref{inc}. Differentiate it twice and let $q=1$. Then all terms with $k\ge 1$ vanish so that only the $k=0$ term (with $c_1=1$ as above) survives: \begin{align*} R_{uw}''(1)&=\left(q^{\frac{\ell(u, w)-1}{2}}(q-1)\right)''\Bigr|_{q=1}= \frac{4\ell(u, w)-4}{4}=\ell(u, w)-1. \end{align*} (2):\,Suppose $u\to \to w$. Differentiate the equation in \mbox{Fact \ref{ff} (\ref{ralt})} \[ \sum_{u\le v\le w}(-1)^{\ell(u, v)}R_{uv}(q)R_{vw}(q)=\delta_{uw}=0 \label{del} \] twice. Then let $q=1$: \[\sum_{u\le v\le w}(-1)^{\ell(u, v)}(R''_{uv}(1)R_{vw}(1)+2R_{uv}'(1)R_{vw}'(1)+R_{uv}(1)R_{vw}''(1))=0.\] Note that $R''_{uv}(1)R_{vw}(1)$ is nonzero if and only if $a(u, v)\ge 2$ and $v=w$ (use Fact \ref{d1} and Corollary \ref{mth1}). Similarly $R_{uv}'(1)R_{vw}'(1)$ is nonzero (and must be 1) if and only if $u\to v\to w$. Also $R_{uv}(1)R_{vw}''(1)$ is nonzero if and only if $v=u$ and $a(v, w)\ge 2$. Computing signs, we have $R''_{uw}(1)-2m(u, w)+R''_{uw}(1)=0$. \end{proof} \begin{proof}[Proof of Theorem \ref{nth2}] Let $u\le x<w$. Since $[(q-1)^2](R_{xv}(q))=R_{xv}^{''}(1)/2$ (Remark \ref{div}) for all $v\in [x, w]$, it is enough to show that \[\sum_{x\le v\le w}\left(\frac{R_{xv}''(1)}{2}\right)-\binom{\ell(x, w)}{2}\ge 0.\] In the sum, we only need to consider $v\in [x, w]$ such that $a(x, v)\le 2 \le \ell(x, v)$ (otherwise $R_{xv}''(1)=0$ thanks to Theorem \ref{nth1}). Using Lemma \ref{le3}, write down the sum separately as \[\sum_{x\to v\le w}\frac{R_{xv}''(1)}{2}+\sum_{x\to\to y \le w}\frac{R_{xy}''(1)}{2}=\frac{\;1\;}{\;2\;}\left(\sum_{x\to v\le w}(\ell(x, v)-1) +\sum_{x\to\to y\le w}m(x, y) \right). \] Compute the second term as \[\sum_{x\to\to y\le w}m(x, y)=\sum_{x\to\to y\le w} |\{v\in [x, y]\mid x\to v\to y\}| =\sum_{x\to v\le w}\ol{\ell}(v, w). \] Now use Deodhar's inequality twice to obtain \begin{align*} [(q-1)^2]\left(\sum_{x\le v\le w}R_{xv}\right)&= \sum_{x\le v\le w}\frac{R_{xv}''(1)}{2}\\ &=\frac{\;1\;}{\;2\;}\sum_{x\to v\le w}\left(\ell(x, v)-1+\ol{\ell}(v, w)\right)\\ &\underset{(**)}{\ge} \frac{\;1\;}{\;2\;}\sum_{x\to v\le w}\left(\ell(x, v)-1+\ell(v, w)\right)\\ &= \frac{\;1\;}{\;2\;}\sum_{x\to v\le w}\left(\ell(x, w)-1\right)\\ &=\frac{\;1\;}{\;2\;}\,\ol{\ell}(x, w)(\ell(x, w)-1)\\ &\underset{(***)}{\ge}\frac{\;1\;}{\;2\;}\,\ell(x, w)(\ell(x, w)-1). \end{align*} We have thus confirmed the inequality ($*$) in Theorem \ref{nth2} for all $x$ with $u\le x<w$.\\ \indent Suppose moreover that $[u, w]$ is singular. Then $\ol{\ell}(x, w)>\ell(x, w)$ for some $x$ with $u\le x< w$ so that ($***$) is strict. Therefore, ($*$) must also be strict. Suppose, conversely, that ($*$) is strict for some $x$ with $u\le x<w$. Then ($**$) or ($***$) (or both) must be strict; equivalently, there exists some $v_0$ such that $x\to v_0\le w$ and $\ol{\ell}(v_0, w)>\ell(v_0, w)$ (hence $v_0\neq w$), or $\ol{\ell}(x, w)>\ell(x, w)$ (or both). Together, we have shown that $\ol{\ell}(z, w)>\ell(z, w)$ for some $z$ with $u\le z<w$. Hence $[u, w]$ is singular.
}\end{prop}f \section{KL polynomials}\label{skl1} We now turn to Kazhdan-Lusztig polynomials. Following \CIRCLEte[Theorem 5.1.4]{bjorner2}, we first give a definition. \begin{enumerate}gin{fact} \label{kr} There exists a unique family of polynomials $\{P_{uw}(q)\mid u, w \in W \} \subseteq \mathbb{Z}[q]$ (\emph{Kazhdan-Lusztig polynomials}) such that \begin{enumerate}{ \item $P_{uw}(q)=0$ if $u \not \le w$, \item $P_{uw}(q)=1$ if $u=w$, \item $\deg P_{uw}(q)\le (\end{lem}l(u, w)-1)/2$ if $u<w$, \item \label{kr4}if $u\le w$, then \begin{enumerate}gin{align*} q^{\end{lem}l(u, w)}P_{uw}(q^{-1})=\sum_{u\le v\le w} R_{uv}(q)P_{vw}(q), \end{align*} \item $[q^0](P_{uw})=1$ if $u\le w$, }\end{enumerate} \end{fact} \begin{defn}{\label{sg}Let $u\le w$. Say $u$ (or $[u, w]$) is \emph{singular} if $P_{uw}(q)>1$ where $>$ is the $q$-coefficientwise partial order in $\mathbb{Z}[q]$. Say $u$ is \emph{rationally smooth} if $P_{uw}(q)=1$. }\end{defn} \begin{rmk}{\label{rm1}\hf\begin{enumerate}{\item This definition is equivalent to Definition \ref{sgg}; see \CIRCLEte[Section 13.2]{billey1}. \item Since $[q^0](P_{uw})=1$ whenever $u\le w$, the condition ``$P_{uw}(q)>1$" is equivalent to $P_{uw}(q)= 1+a_jq^j+\cdots$ for some positive integers $j$ and $a_j$. \label{rm2}}\end{enumerate} }\end{rmk} Recall from Convention \ref{convv} that $W$ is crystallographic so that: \begin{fact}[Nonnegativity]{\label{kl2}All coefficients of Kazhdan-Lusztig polynomials in $W$ are nonnegative. }\end{fact} \begin{fact}[Monotonicity]{\label{kl22} If $u\le v\le w$ in $W$, then $P_{uw}(q)\ge P_{vw}(q)$; In other words, fixing the second index $w$, the function $P_{-, w}(q)$ on $[e, w]$ is \emph{weakly} monotonically decreasing. }\end{fact} Historically, these became known first for finite and affine Weyl groups $W$; See \CIRCLEte[Corollary 4]{irving} for Fact \ref{kl2} and \CIRCLEte[Corollary 3.7]{braden} for Fact \ref{kl22}. Further, for example, \CIRCLEte[Theorem 4.2]{bjorner3} says that these properties hold for all crystallographic $W$. Then a natural question arises: \begin{enumerate}gin{ques}\label{q1} Fix $w\in W$. For which pair $u<v$ in $[e, w]$, does a strict inequality $P_{uw}(q) > P_{vw}(q)$ occur?\end{ques} Unfortunately, Fact \ref{kl22} does not tell us anything about this. The idea is to consider $P_{uw}(1)$: \begin{prop}{\label{mono} Let $u<v\le w$. Then $P_{uw}(q)> P_{vw}(q) \iff P_{uw}(1)> P_{vw}(1)$.}\end{prop} \begin{prop}f{Suppose $u<v\le w$. Then we have the inequality $P_{uw}(q)\ge P_{vw}(q)$ as assumed above. Say $P_{uw}(q)=1+b_1q+\cdots + b_dq^d$, $P_{vw}(q)=1+a_1q+\cdots +a_dq^d$ with $a_i\le b_i$ for all $i$. If $P_{uw}(q)>P_{vw}(q)$, then $a_j<b_j$ for some $j$ ($1\le j\le d$). Then \[P_{uw}(1)-P_{vw}(1)=(b_1-a_1)+\cdots+( b_j-a_j )+ \cdots + (b_d-a_d)>0.\] In a similar fashion, we can show the converse.}\end{prop}f \begin{rmk}{In particular, $P_{uw}(1)\ge P_{ww}(1)=1>0$ whenever $u\le w$. These positive integers $\{P_{uw}(1)\}$ play an important role in representation theory of Verma modules. This is one of the reasons we want to study it. Here we refer to only \CIRCLEte[Chapter 8]{humphreys3} in this direction.}\end{rmk} Now, keeping Proposition \ref{mono} in mind, let us put Question \ref{q1} this way: \begin{enumerate}gin{ques}\label{q5} Fix $w\in W$. Further, let $u$ be an arbitrary but fixed element in $[e, w]$ such that $P_{uw}(1)>1$. Then, for which $v$ in $[u, w]$, does a strict inequality $P_{uw}(1) > P_{vw}(1)$ occur?\end{ques} Clearly, this is the case for $v=w$ since $P_{ww}(1)=1$. 
However, we would like to find some $v$ closer to $u$. Since Bruhat order is defined as the transitive closure of edge relations, it is meaningful to first consider vertices incident to $u$ in $[u, w]$ (Figure \ref{nuw}). For convenience, let us introduce the following definition: \begin{defn}{An edge $u\to v$ in $[u, w]$ is \emph{strict} if $P_{uw}(1)>P_{vw}(1)$. }\end{defn} Now, suppose $P_{uw}(1)>1$. Is $u$ incident to some strict edge? Theorem \ref{nth3} asserts that this is the case for \emph{every} singular vertex $u$ under $w$. \section{Existence of a strict inequality of KL polynomials} \label{skl2} Before Theorem \ref{nth3}, we need a lemma: \begin{lem}{\label{lm}Let $u\le w$. Then \begin{enumerate}{\item we have \[\end{lem}l(u, w)P_{uw}(1)-2P'_{uw}(1) =\sum_{v\in \ol{N}(u, w)}P_{vw}(1).\] \item if $u$ is singular, then $-2P'_{uw}(1)<0$. }\end{enumerate} }\end{lem} \begin{prop}f{(1) Differentiate the equation in Fact \ref{kr} (\ref{kr4}) once and let $q=1$. Then the right hand side is $\sum_{v\in \ol{N}(u, w)}R_{uv}'(1)P_{vw}(1)$ thanks to Fact \ref{d1}. (2) follows from nonnegativity of coefficients (Fact \ref{kl2}). }\end{prop}f \begin{enumerate}gin{main} \label{nth3} Let $u\le w$. If $P_{uw}(1)>1$, then there exists $t\in T$ such that \[P_{uw}(1)>P_{ut, w}(1)>0.\] \end{main} \begin{prop}f{ Let $n\overset{\textnormal{def}}{=}|\{v\in \ol{N}(u, w) \mid u\to v \textnormal{ is strict}\}|$. Suppose $n\le \textnormal{df}(u, w)$. Then Lemma \ref{lm} implies that \begin{enumerate}gin{align*} \end{lem}l(u, w)P_{uw}(1)-2P'_{uw}(1)&=\sum_{v\in \ol{N}(u, w)}P_{vw}(1)\\ &=\sum_{\substack{{v\in \ol{N}(u, w)}\\\textnormal{strict}}}P_{vw}(1)+(\ol{\end{lem}l}(u, w)-n)P_{uw}(1). \end{align*} Thus we have \begin{enumerate}gin{align*} \underbrace{-2P'_{uw}(1)}_{< 0}&=\sum_{\substack{{v\in \ol{N}(u, w)}\\\textnormal{strict}}} P_{vw}(1)+(\ol{\end{lem}l}(u, w)-n-\end{lem}l(u, w))P_{uw}(1)\\ &=\underbrace{\sum_{\substack{{v\in \ol{N}(u, w)}\\\textnormal{strict}}} P_{vw}(1)}_{\ge 0}+\underbrace{(\textnormal{df}(u, w)-n)P_{uw}(1)}_{\ge 0}, \end{align*} a contradiction. Therefore $n\ge \textnormal{df}(u, w)+1\ge 1$. }\end{prop}f We can repeat this argument as long as $P_{ut, w}(1)>1$ as in the following observation: \begin{enumerate}gin{cor} From every singular vertex $u$ under $w$, there exists a directed path \[u=v_0\to v_1\to v_2\to \cdots \to v_d \,\,(\le w)\] such that $d\ge 1$, all $v_i\to v_{i+1}$ are strict and $v_d$ is rationally smooth. \end{cor} \begin{prop}f{Suppose $u$ is singular under $w$. As shown in Theorem \ref{nth3}, there exists a strict edge under $w$, say $u\to v_1$. If $v_1$ is rationally smooth, then we are done. Otherwise find another strict edge, say $v_1\to v_2$. Continue this algorithm until our directed path arrives at some rationally smooth vertex. }\end{prop}f \begin{enumerate}gin{center} \textbf{Acknowledgments.}\\ \end{center} I thank the editor and anonymous referees for many helpful comments and suggestions to improve the manuscript. \end{document}
\begin{document} \title{Three-dimensional loops as sections in a four-dimensional solvable Lie group} \author{\'Agota Figula} \date{} \maketitle \begin{abstract} We classify all three-dimensional connected topological loops such that the group topologically generated by their left translations is the four-dimensional connected Lie group $G$ which has trivial center and precisely two one-dimensional normal subgroups. We show that $G$ is not the multiplication group of connected topological proper loops. \end{abstract} \noindent {\footnotesize {2010 {\em Mathematics Subject Classification:} 57M60, 20N05, 22E25, 22F30}} \noindent {\footnotesize {2010 {\em Keywords:} Topological loops, sharply transitive sections in groups, multiplication group of loops, solvable Lie groups.}} \noindent {\footnotesize {\em Acknowledgments:} Sincere thanks to the referee for the careful reading of the manuscript and for many suggestions. This paper was supported by the Hungarian Scientific Research Fund (OTKA) Grant PD 77392 and by the EEA and Norway Grants (Zolt\'an Magyary Higher Education Public Foundation). } \section{Introduction} A loop $(L, \cdot )$ is a quasigroup with identity element $e \in L$. The left translations $\lambda _a: x \mapsto a x$ and the right translations $\rho _a: x \mapsto x a$, $a \in L$, generate a permutation group on the set $L$. This group is called the multiplication group $Mult(L)$ of $L$ (cf. \cite{bruck}). The subgroup $G$ of $Mult(L)$ generated by all left translations of $L$ is the group of left translations of $L$. The stabilizer $Inn(L)$ of $e \in L$ in the group $Mult(L)$ is called the inner mapping group of $L$. The multiplication group and the inner mapping group of $L$ are essential tools for investigating the structure of the loop $L$. An important problem is to be analyzed under which circumstances a group is the multiplication group of a loop. The answer to this question was given in \cite{kepka} which says that we can use certain conditions for transversals. These conditions were applied to study the structures of the multiplication groups and the inner mapping groups of finite loops (cf. \cite{niem3}, \cite{niem4}, \cite{niem1}, \cite{niem2}, \cite{vesanen}). Another relevant problem is the classification of loops $L$ having a group $G$ as the group of the left translations of $L$. This problem is equivalent to the following problem in group theory: to find the subgroups $H$ of $G$ such that the core of $H$ in $G$ is trivial and after this to determine all sections $\sigma : G/H \to G$ such that $\sigma (H)=1 \in G$, $\sigma (G/H)$ generates $G$ and acts sharply transitively on the left cosets $x H$, $x \in G$. The last property means that for given cosets $g_1 H$, $g_2 H$ there exists precisely one $z \in \sigma (G/H)$ which satisfies the equation $z g_1 H=g_2 H$. In particular if $G$ is a connected Lie group and the section $\sigma $ is continuous with the above property (it is called continuous sharply transitive section), then we obtain all connected topological loops $L$ such that $G$ is the group of left translations of $L$ (cf. \cite{loops}, Section 1). In this paper we investigate connected topological loops. Concrete classifications of $2$-dimensional connected topological loops $L$ having a Lie group of dimension $3$ as the group of left translations of $L$ are given in \cite{loops}, Section 22 and 23, and in \cite{figula0}. The Lie groups which are multiplication groups of $2$-dimensional topological loops are determined in \cite{figula00}. 
In \cite{figula} we classify all $3$-dimensional simply connected topological loops $L$ having a $4$-dimensional nilpotent Lie group as the group of left translations of $L$. Moreover, we prove that these groups are not multiplication groups of $3$-dimensional topological loops. In recent research we deal with the class of solvable Lie groups and study which groups in this class occur as the group of left translations respectively as the multiplication group of $3$-dimensional topological loops. The solvable Lie groups with non-trivial centre play an important role in the investigation of the multiplication group of $3$-dimensional loops (cf. \cite{figula2}). In contrast to this here we consider the $4$-dimensional solvable Lie group $G$ which has trivial centre and precisely two $1$-dimensional normal subgroups. It depends on a real parameter $a \neq 0$. Its Lie algebra has two kinds of paracomplex structures (cf. \cite{andrada}, Section 2) and admits only for $a=1$ an invariant complex structure (cf. \cite{ovando}, Theorem 5, pp. 24-25). In this paper we classify the $3$-dimensional connected topological loops $L$ which are sections in the Lie group $G$. These loops $L$ are homeomorphic to $\mathbb R^3$ and their multiplications depend on one continuous real function of two or three variables (cf. Theorem \ref{Propelso}). We give an easy proof of the fact that $G$ is not the multiplication group of connected topological proper loops (cf. Theorem \ref{Propmasodik}). Later we intend to extend our research to other $4$-dimensional Lie groups. \section{Preliminaries} A binary system $(L, \cdot )$ is called a loop if there exists an element $e \in L$ such that $x=e \cdot x=x \cdot e$ holds for all $x \in L$ and the equations $a \cdot y=b$ and $x \cdot a=b$ have precisely one solution, which we denote by $y=a \backslash b$ and $x=b/a$. A loop $L$ is proper if it is not a group. The left and right translations $\lambda _a: y \mapsto a \cdot y:L \times L \to L$ and $\rho _a: y \mapsto y \cdot a: L \times L \to L$, $a \in L$, are bijections of $L$. The permutation group $Mult(L)$ generated by all left and right translations of the loop $L$ is called the multiplication group of $L$ and the stabilizer of $e \in L$ in the group $Mult(L)$ is called the inner mapping group $Inn(L)$ of $L$. Let $K$ be a group, let $S \le K$. Denote by $C_K(S)$ the core of $S$ in $K$ (the largest normal subgroup of $K$ contained in $S$). A group $K$ is isomorphic to the multiplication group of a loop if and only if there exists a subgroup $S$ with $C_K(S)=1$ and two left transversals $A$ and $B$ to $S$ in $K$ such that $a^{-1} b^{-1} a b \in S$ for every $a \in A$ and $b \in B$ and $K=\langle A, B \rangle $ (cf. Theorem 4.1 in \cite{kepka}, p. 118). Proposition 2.7 in \cite{kepka}, p. 114, yields the following \begin{lemma} \label{niemenmaa} Let $L$ be a loop with multiplication group $Mult(L)$ and inner mapping group $Inn(L)$. Then the normalizer $N_{Mult(L)}(Inn(L))$ is the direct product $Inn(L) \times Z(Mult(L))$, where $Z(Mult(L))$ is the center of the group $Mult(L)$. \end{lemma} A loop $L$ is called topological if $L$ is a topological space and the binary operations $(x,y) \mapsto x \cdot y, \ (x,y) \mapsto x \backslash y, (x,y) \mapsto y/x :L \times L \to L$ are continuous. Let $G$ be a connected Lie group, $H$ be a subgroup of $G$. Let $\sigma :G/H \to G$ be a continuous section with respect to the natural projection $G \to G/H$. 
The section $\sigma $ is called a continuous sharply transitive section if the set $\sigma (G/H)$ operates sharply transitively on $G/H$, which means that for any cosets $g_1 H$ and $g_2 H$ there exists precisely one $z \in \sigma (G/H)$ with $z g_1 H= g_2 H$. Every connected topological loop having a Lie group $G$ as the group topologically generated by its left translations is isomorphic to a loop $L$ realized on the factor space $G/H$, where $H$ is a closed subgroup of $G$ with $C_G(H)=1$ and $\sigma :G/H \to G$ is a continuous sharply transitive section with $\sigma (H)=1 \in G$ such that the subset $\sigma (G/H)$ generates $G$. The multiplication of $L$ on the manifold $G/H$ is defined by $x H \ast y H=\sigma (x H) y H$. Moreover, the subgroup $H$ is the stabilizer of the identity element $e \in L$ in the group $G$. \noindent Since there does not exist a multiplication with identity on the sphere $S^2$, a simply connected $3$-dimensional topological loop $L$ having a Lie group $G$ as the group topologically generated by its left translations is homeomorphic either to $\mathbb R^3$ or to $S^3$ (see \cite{gorbatsevich}, p. 210). If the group $G$ is solvable, then the simply connected loop $L$ must be homeomorphic to $\mathbb R^3$ because the sphere $S^3$ is not a solvmanifold (cf. Theorem 3.2 in \cite{gorbatsevich}, p. 208). \noindent The kernel of a homomorphism $\alpha :(L, \cdot ) \to (L', \ast )$ of a loop $L$ into a loop $L'$ is a normal subloop $N$ of $L$, i.e. a subloop of $L$ such that \[ x \cdot N=N \cdot x, \ \ (x \cdot N) \cdot y= x \cdot (N \cdot y), \ \ x \cdot ( y \cdot N)=(x \cdot y) \cdot N. \] A loop $L$ is solvable if it has a series $1=L_0 \le L_1 \le \cdots \le L_n=L$, where $L_{i-1}$ is normal in $L_i$ and $L_i/L_{i-1}$ is an abelian group, $i=1, \cdots ,n$. \section{Three-dimensional topological loops as sections\\ in a $4$-dimensional solvable Lie group} A list of the $4$-dimensional indecomposable Lie algebras can be found in \cite{patera}, Table I, p. 988. All these Lie algebras are solvable. Now we consider the Lie algebra $A_{4,2}^a$, $a \neq 0$, in Table I in \cite{patera}. This Lie algebra has a codimension $1$ abelian ideal and it has a representation as a subalgebra of ${\bf gl}_4(\mathbb R)$. In this section we classify the $3$-dimensional connected topological loops such that the Lie algebra of the group $G$ topologically generated by their left translations is $A_{4,2}^a$, $a \neq 0$. The multiplication of the loops in this class depends on a continuous real function of two or three variables. We prove that the group $G$ cannot be the multiplication group of connected topological proper loops. We often use the following lemma. \begin{lemma} \label{functional} Let $f: \mathbb R \to \mathbb R$ be a continuous function such that for all $z_1, z_2 \in \mathbb R$ we have \begin{equation} \label{equegyenlet} f(z_2)+ e^{-z_2} f(z_1) = f(z_1+z_2). \end{equation} Then $f(z)=K(1-e^{-z})$, where $K$ is a real constant. \end{lemma} \begin{proof} Interchanging $z_1$ and $z_2$ in (\ref{equegyenlet}) we get $f(z_1)+ e^{-z_1} f(z_2) = f(z_1+z_2)$. The right hand side of the last equation is equal to the right hand side of equation (\ref{equegyenlet}). Hence for $z_1 z_2 \neq 0$ one has $\frac{f(z_1)}{1-e^{-z_1}} = \frac{f(z_2)}{1-e^{-z_2}}= K$ for a suitable constant $K \in \mathbb R$, and the assertion follows by continuity.
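For completeness, we note that, conversely, every function of this form satisfies (\ref{equegyenlet}), since $f(z_2)+ e^{-z_2} f(z_1)=K(1-e^{-z_2})+K(e^{-z_2}-e^{-(z_1+z_2)})=K(1-e^{-(z_1+z_2)})=f(z_1+z_2)$.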
\end{proof} \begin{theorem} \label{Propelso} Let $G$ be the four-dimensional connected solvable Lie group with trivial center having precisely two one-dimensional normal subgroups and choose for $G$ the representation \[ G= \left\{ g(x_1,x_2,x_3,x_4)= \left( \begin{array}{cccc} e^{a x_4} & 0 & 0 & x_1 \\ 0 & e^{x_4} & x_4 e^{x_4} & x_2 \\ 0 & 0 & e^{x_4} & x_3 \\ 0 & 0 & 0 & 1 \end{array} \right), x_i \in \mathbb R, i=1,2,3,4 \right\} \] with fixed number $a \in \mathbb R \setminus \{ 0 \}$. Let $H$ be a one-dimensional subgroup of $G$ which is not normal in $G$. If $H$ is not contained in the commutator subgroup $G'$ of $G$, then using automorphism of $G$ we may choose $H= \{ g(0,0,0,x_4); x_4 \in \mathbb R \}$. There does not exist connected topological loop $L$ having $G$ as the group topologically generated by the left translations of $L$ and $H$ as the stabilizer of $e \in L$. \newline \noindent If $H$ is contained in the commutator subgroup of $G$ and $a \neq 1$, then using automorphisms of $G$ we may choose $H$ as one of the following subgroups: \[ H_1= \{ g(0,0,x_3,0); x_3 \in \mathbb R \}, \ H_2= \{ g(x_1,0,x_1,0); x_1 \in \mathbb R \}, \] \[ H_3=\{ g(x_1,x_1,0,0); x_1 \in \mathbb R \}. \] For $a=1$ using automorphisms of $G$ we may assume that $H=H_1$. \newline \noindent a) Every continuous sharply transitive section $\sigma : G/H_1 \to G$ with the properties that $\sigma (G/H_1)$ generates $G$ and $\sigma (H_1)=1$ is given by the map $\sigma _f: g(x,y,0,z) H_1 \mapsto g(x,y+z e^{z} f(x,z), e^{z} f(x,z),z)$, where $f: \mathbb R^2 \to \mathbb R$ is a continuous function with $f(0,0)=0$ such that the function $f$ does not fulfill the identities $f(x,0)=0$ and $f(0,z)=K(1-e^{-z})$, $K \in \mathbb R$, simultaneously. The multiplication of the loop $L_f$ corresponding to $\sigma _f$ can be written as \begin{equation} \label{elsoszorzas} (x_1,y_1,z_1) \ast (x_2,y_2,z_2)=(x_1+ e^{a z_1} x_2, y_1+y_2 e^{z_1}-z_2 e^{z_1} f(x_1,z_1), z_1+z_2). \end{equation} Every loop $L_f$ defined by (\ref{elsoszorzas}) is solvable. \newline \noindent b) Each continuous sharply transitive section $\sigma : G/H_2 \to G$ such that $\sigma (G/H_2)$ generates $G$ and $\sigma (H_2)=1$ has the form \[ \sigma _h: g(x,y,0,z) H_2 \mapsto g(x+e^{az} h(x,y,z),y+z e^{z} h(x,y,z), e^z h(x,y,z),z), \] $a \neq 1$, where $h: \mathbb R^3 \to \mathbb R$ is a continuous function with $h(0,0,0)=0$ such that $h$ does not satisfy the identities $h(x,y,0)=0$ and $h(0,0,z)=K(1-e^{-z})$, $K \in \mathbb R$, simultaneously and such that for all triples $(x_1,y_1,z_1)$ and $(x_2,y_2,z_2) \in \mathbb R^3$ the equations \begin{equation} \label{masodikequ} y= y_2- e^{z_2-z_1} y_1+ e^{z_2-z_1} z_1 h(x,y,z_2-z_1), \end{equation} \begin{equation} \label{harmadikequ} x= x_2 -x_1 e^{a(z_2-z_1)}+ e^{a z_2} (e^{-z_1}-e^{-a z_1}) h(x,y,z_2-z_1) \end{equation} have a unique solution $(x,y) \in \mathbb R^2$. The multiplication of the loop $L_h$ corresponding to $\sigma _h$ is determined by \begin{equation} (x_1,y_1,z_1) \ast (x_2,y_2,z_2)= \nonumber \end{equation} \begin{equation} \label{multiplication1} \big(x_1+ e^{a z_1}(x_2 + h(x_1,y_1,z_1)[1-e^{(a-1)z_2}]), y_1+ e^{z_1}(y_2 -z_2 h(x_1,y_1,z_1)),z_1+z_2 \big). 
\end{equation} \newline \noindent c) Every continuous sharply transitive section $\sigma : G/H_3 \to G$ with the properties $\sigma (G/H_3)$ generates $G$ and $\sigma (H_3)=1$ is given by the map \[ \sigma _f: g(x,0,y,z) H_3 \mapsto g(x+e^{az} f(x,y,z),e^z f(x,y,z),y,z), \ a \neq 1, \] where $f: \mathbb R^3 \to \mathbb R $ is a continuous function with $f(0,0,0)=0$ such that $f$ does not fulfill the identities $f(x,y,0)=-x$ and $f(0,0,z)= c(1 -e^{-a z})$, $c \in \mathbb R$, simultaneously and such that for all triples $(x_1,y_1,z_1)$, $(x_2,y_2,z_2) \in \mathbb R^3$ there is a unique $x \in \mathbb R$ satisfying the equation \begin{equation} \label{negyedikujequ} x= x_2- x_1 e^{a (z_2-z_1)}+ \nonumber \end{equation} \begin{equation} e^{a z_2-z_1}[y_1 (z_2-z_1)+ (1- e^{(1-a) z_1}) f(x,y_2-e^{z_2-z_1} y_1,z_2-z_1)]. \end{equation} The multiplication of the loop $L_f$ corresponding to $\sigma _f$ is defined by \begin{equation} \label{loopszorzasketto} (x_1,y_1,z_1) \ast (x_2,y_2,z_2)= \nonumber \end{equation} \begin{equation} \big(x_1+e^{a z_1}(x_2-y_2 z_1 e^{(a-1)z_2}+f(x_1,y_1,z_1)[1-e^{(a-1)z_2}]), y_1+ e^{z_1} y_2, z_1+z_2 \big). \end{equation} \end{theorem} \begin{proof} The linear representation of the Lie group $G$ is given in \cite{ghanam}, p. 164. The Lie algebra ${\bf g}$ of $G$ is given by the basis $\{ e_1,e_2,e_3,e_4 \}$ with $[e_1,e_4]=a e_1$, $[e_2,e_4]=e_2$, $[e_3,e_4]=e_2+e_3$, $a \neq 0$. As $\mathbb R e_1$ and $\mathbb R e_2$ are ideals of ${\bf g}$ the subalgebra ${\bf h}$ of the $1$-dimensional non-normal subgroup $H$ of $G$ does not contain $e_1$, $e_2$. First we assume that ${\bf h}$ is not contained in the commutator subalgebra ${\bf g}'= \langle e_1, e_2, e_3 \rangle $ of ${\bf g}$. Hence one has ${\bf h}= \mathbb R (a e_1+ b e_2+ c e_3+ e_4)$ with $a,b,c \in \mathbb R$. Using the automorphism $e_1 \mapsto e_1$, $e_2 \mapsto e_2$, $e_3 \mapsto e_3$ and $e_4 \mapsto e_4 + a e_1+ b e_2+ c e_3$ of ${\bf g}$ we can assume ${\bf h}= \mathbb R e_4$. Now we assume that there is a connected topological loop $L$ having the group $G$ as the group topologically generated by its left translations and $H =\{ \exp t e_4; t \in \mathbb R \}$ as the stabilizer of $e \in L$. As every $1$-dimensional subalgebra which is not contained in the commutator subalgebra of the Lie algebra ${\bf g}$ is conjugate to ${\bf h}$ every element of $G$ not contained in the commutator subgroup $G'$ of $G$ has a fixed point on $G/H$. As the set $\sigma (G/H)$ is the set of the left translations of the loop $L$ and the left translations have no fixed point $\sigma (G/H)$ should be contained in $G'$. This is a contradiction to the fact that the left translations of $L$ generate $G$ and the first assertion is proved. \newline \noindent If the subalgebra ${\bf h}$ of $H$ is contained in the commutator subalgebra ${\bf g}'$, then $H$ has the form $H=\exp \ t (b_1 e_3 + b_2 e_1+ b_3 e_2)$, $t \in \mathbb R$, with $b_1 \neq 0$ or $b_2 b_3 \neq 0$. If $a \neq 1$, then each automorphism $\varphi $ of ${\bf g}$ is given by $\varphi (e_1)= k e_1$, $\varphi (e_2)= l e_2$, $\varphi (e_3)= n e_2 + l e_3$, $\varphi (e_4)= f_1 e_1+ f_2 e_2 + f_3 e_3+ e_4$ with $kl \neq 0$, $k,l,n,f_1,f_2,f_3 \in \mathbb R$. If $a=1$, then the automorphism group of ${\bf g}$ consists of the linear mappings $\alpha (e_1)= k_1 e_1 + k_2 e_2$, $\alpha (e_2)= l e_2$, $\alpha (e_3)=n_1 e_1+ n_2 e_2 + l e_3$, $\alpha (e_4)= f_1 e_1+ f_2 e_2 + f_3 e_3+ e_4$ with $k_1 l \neq 0$, $k_1,k_2,l,n_1,n_2,f_1,f_2,f_3 \in \mathbb R$. 
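(For instance, one checks directly that the maps listed above respect the Lie brackets of ${\bf g}$: for $a \neq 1$ we have $[\varphi (e_3), \varphi (e_4)]=n[e_2,e_4]+l[e_3,e_4]=(n+l)e_2+l e_3=\varphi (e_2+e_3)=\varphi ([e_3,e_4])$, and the remaining brackets are verified in the same way.)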
If $b_1 \neq 0$ and $a \neq 1$, then we can change $H$ by an automorphism $\varphi $ of $G$ such that $H$ has one of the following forms \[H_1=\{ \exp t e_3; t \in \mathbb R \}, \quad H_2= \{ \exp t(e_3+e_1); t \in \mathbb R \}. \] If $b_1 \neq 0$ and $a=1$, then using an automorphism $\alpha $ of $G$ we may assume that $H=H_1$. If $b_1=0$ and $a \neq 1$, then we can change $H$ by an automorphism $\varphi $ of $G$ such that $H$ is the subgroup $H_3= \{ \exp t(e_1+e_2); t \in \mathbb R \}$. If $b_1=0$ and $a=1$, then for all $b_2, b_3 \in \mathbb R$ the subgroup $H= \{ \exp \ t (b_2 e_1+ b_3 e_2); t \in \mathbb R \}$ is normal in $G$ which is a contradiction. First we deal with the case that $H=H_1=\{ g(0,0,k,0); \ k \in \mathbb R \}$. Since all elements of $G$ have a unique decomposition as $g(x, y, 0, z) g(0, 0, k, 0)$, any continuous function $f: \mathbb R^3 \to \mathbb R; (x,y, z) \mapsto f(x,y,z)$ determines a continuous section $\sigma : G/H_1 \to G$ given by \[ \sigma: g(x,y,0,z) H_1 \mapsto g(x, y, 0, z) g(0, 0, f(x,y,z), 0) = \] \[ g(x,y+z e^z f(x,y,z), e^z f(x,y,z),z). \] The section $\sigma $ is sharply transitive if and only if for every triples $(x_1,y_1,z_1)$, $(x_2,y_2,z_2) \in \mathbb R^3$ there exists precisely one triple $(x,y,z) \in \mathbb R^3$ such that \[ g(x,y+z e^z f(x,y,z), e^z f(x,y,z), z) g(x_1, y_1, 0, z_1)= g(x_2, y_2, 0, z_2) g(0,0,t,0) \] holds with a suitable $t \in \mathbb R$. This gives the equations \[ z= z_2-z_1, \ x= x_2-x_1 e^{a (z_2-z_1)}, \ t= e^{-z_1} f(x_2-x_1 e^{a (z_2-z_1)},y,z_2-z_1), \] \begin{equation} \label{equelso} 0= y -y_2 +y_1 e^{z_2-z_1}- z_1 e^{z_2-z_1} f(x_2-x_1 e^{a (z_2-z_1)},y,z_2-z_1). \nonumber \end{equation} These are equivalent to the condition that for every $z_0=z_2-z_1$, $x_0= x_2-x_1 e^{a z_0}$ and $z_1 \in \mathbb R$ the function $g: y \mapsto y - z_1 e^{z_0} f(x_0, y, z_0): \mathbb R \to \mathbb R$ is a bijective mapping. Let be $\psi_1 < \psi_2 \in \mathbb R$ then $g(\psi_1) \neq g(\psi_2)$, e.g. $g(\psi _1) < g(\psi _2)$. We consider \[ 0 < g(\psi _2) - g(\psi _1)= \psi _2 - \psi _1 - z_1 e^{z_0}[f(x_0, \psi_2, z_0) - f(x_0, \psi_1, z_0)] \] as a linear function of $z_1 \in \mathbb R$. If $f(x_0, \psi_2, z_0) \neq f(x_0, \psi_1, z_0)$, then there exists a $z_1 \in \mathbb R$ such that $g(\psi _2) - g(\psi _1) =0$, which is a contradiction. Hence the function $f(x, y, z)=f(x,z)$ does not depend on $y$. In this case $g$ is a monotone function and every continuous function $f(x,z)$ with $f(0,0)=0$ determines a loop multiplication. \newline \noindent This loop is proper precisely if the set \[ \sigma (G/H_1)=\{ g(x,y+z e^z f(x,z), e^z f(x,z),z); \\ x,y,z \in \mathbb R \} \] generates the whole group $G$. The set $\sigma (G/H_1)$ contains the subgroup \[ G_1=\{ g(x, y, f(x,0), 0); \ x,y \in \mathbb R \} < G'=[G,G], \] and the subset $F_2=\{ g(0,z e^z f(0,z), e^z f(0,z),z); \ z \in \mathbb R \}$. As $G_1 \cap F_2= \{ 1 \}$ the set $\sigma (G/H_1)$ generates $G$ if the group $G_1$ has dimension $3$. This is the case if and only if the subgroup $G_2=\{ g(x,0,f(x,0),0); x \in \mathbb R \}$ is not a one-parameter subgroup. But $G_2$ is a one-parameter subgroup precisely if $f(x,0)= \lambda x$, $\lambda \in \mathbb R$. In this case the set $\sigma (G/H_1)$ generates $G$ if there exists an element $h \in F_2$ with $h^{-1} G_1 h \neq G_1$. For $h= g(0,z e^z f(0,z), e^z f(0,z),z) \in F_2$, where $z \neq 0$, we have \[ h^{-1} g(x, y, \lambda x, 0) h= g(x e^{-a z}, y e^{-z}-z e^{-z} \lambda x, e^{-z} \lambda x,0). 
\] Hence $h^{-1} G_1 h =G_1$ if and only if for $a \neq 1$ one has $f(x,0)=0$ and for $a=1$ we have $f(x,0)=\lambda x$, $\lambda \in \mathbb R$. Using this, for $a \neq 1$ the group $G_1$ reduces to $\widetilde{G_1}=\{ g(x,y,0,0); x,y \in \mathbb R \}$ and for $a=1$ to $G_1^{\ast }=\{ g(x,y, \lambda x,0); x,y \in \mathbb R \}$, $\lambda \in \mathbb R$. The group $\widetilde{G_1}$ and, for $a=1$, the group $G_1^{\ast }$ are normal subgroups of $G$. The set $\sigma (G/H_1)$ does not generate $G$ precisely if for $a \neq 1$ the set $F_2 \widetilde{G_1}/\widetilde{G_1}$, respectively for $a=1$ the set $F_2 G_1^{\ast }/G_1^{\ast }$, is a one-parameter subgroup of $G/\widetilde{G_1}$, respectively $G/G_1^{\ast }$. Since for $a \neq 1$ one has \[g(\mathbb R, \mathbb R, e^{z_1} f(0,z_1),z_1) g(\mathbb R, \mathbb R, e^{z_2} f(0,z_2),z_2)= \] \[g(\mathbb R, \mathbb R, e^{z_1+z_2} f(0,z_2)+e^{z_1} f(0,z_1), z_1+z_2) \] and for $a=1$ we have \[g(\mathbb R, \mathbb R, e^{z_1}( \lambda x_1 +f(0,z_1)),z_1) g(\mathbb R, \mathbb R, e^{z_2}( \lambda x_2+ f(0,z_2)),z_2)= \] \[g(\mathbb R, \mathbb R, e^{z_1}[e^{z_2}(\lambda x_2+ f(0,z_2))+ \lambda x_1+f(0,z_1)], z_1+z_2) \] the set $\sigma (G/H_1)$ does not generate $G$ if and only if in both cases $f(x,0)=0$ and for all $z_1, z_2 \in \mathbb R$ one has $f(0,z_2)+e^{-z_2} f(0,z_1)=f(0,z_1+z_2)$. Using Lemma \ref{functional} for the real function $f(0,z)$ we obtain $f(0,z)= K(1- e^{-z})$, where $K \in \mathbb R$. Hence the set $\sigma (G/H_1)$ does not generate $G$ if for all $x,z \in \mathbb R$ one has $f(x,0)=0$ and $f(0,z)=K(1- e^{-z})$, $K \in \mathbb R$. Now we represent the loop $L_f$ in the coordinate system $(x,y,z) \mapsto g(x,y,0,z)H_1$. Then the product $(x_1,y_1,z_1) \ast (x_2,y_2,z_2)$ will be determined if we apply $\sigma (g(x_1,y_1,0,z_1)H_1)=g(x_1,y_1+z_1 e^{z_1} f(x_1,z_1), e^{z_1} f(x_1,z_1),z_1)$ to the left coset $g(x_2,y_2,0,z_2)H_1$ and find in the image coset the element of $G$ which lies in the set $\{ g(x,y,0,z)H_1; \ x,y,z \in \mathbb R \}$. A direct computation gives multiplication (\ref{elsoszorzas}) in assertion a). The set $N=\{ (x,y,0); x,y \in \mathbb R \}$ is a normal subgroup of the loop $L_f$ isomorphic to $\mathbb R^2$, and one has $(0,0,z_1)N \ast (0,0,z_2)N= (0,0,z_1+z_2)N$. Hence the factor loop $L_f/N$ is isomorphic to the Lie group $\mathbb R$ and therefore the loop $L_f$ is solvable. \noindent Now we assume that $H=H_2=\{ g(k,0,k,0); \ k \in \mathbb R \}$. As all elements of $G$ can be written in a unique way as $g(x, y, 0, z) g(k, 0, k, 0)$, every continuous function $h: \mathbb R^3 \to \mathbb R; (x,y, z) \mapsto h(x,y,z)$ determines a continuous section $\sigma : G/H_2 \to G$ defined by \begin{equation} \label{section} \sigma_h: g(x,y,0,z) H_2 \mapsto g(x, y, 0, z) g(h(x,y,z), 0, h(x,y,z), 0) = \nonumber \end{equation} \begin{equation} g(x+e^{az} h(x,y,z),y+z e^{z} h(x,y,z), e^z h(x,y,z),z). \end{equation} The section $\sigma $ is sharply transitive if and only if for all triples $(x_1,y_1,z_1)$, $(x_2,y_2,z_2) \in \mathbb R^3$ there exists precisely one triple $(x,y,z) \in \mathbb R^3$ such that \begin{equation} \label{elsoequ} g(x+e^{az} h(x,y,z),y+z e^{z} h(x,y,z), e^z h(x,y,z),z) g(x_1, y_1, 0, z_1)= \nonumber \end{equation} \begin{equation} g(x_2, y_2, 0, z_2) g(t,0,t,0) \end{equation} for a suitable $t \in \mathbb R$.
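These and the subsequent computations use the multiplication rule of $G$, which follows by multiplying the matrices $g(x_1,x_2,x_3,x_4)$ given in Theorem \ref{Propelso} and which we record for the reader's convenience: \[ g(x_1,x_2,x_3,x_4)\, g(y_1,y_2,y_3,y_4)= g \big(x_1+e^{a x_4} y_1,\ x_2+e^{x_4} y_2+x_4 e^{x_4} y_3,\ x_3+e^{x_4} y_3,\ x_4+y_4 \big). \]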
Equation (\ref{elsoequ}) gives $z= z_2-z_1$, $t= e^{-z_1} h(x,y,z_2-z_1)$ and that equations (\ref{masodikequ}), (\ref{harmadikequ}) in assertion b) must have a unique solution $(x,y) \in \mathbb R^2$. \newline \noindent Now we investigate under which circumstances the set $\sigma (G/H_2)$ generates the group $G$. The set $\sigma (G/H_2)$ contains the subgroup \[ G_1=\{ g(x+h(x,y,0), y, h(x,y,0), 0); \ x,y \in \mathbb R \} < G'=[G,G], \] and the subset $F_2=\{ g(e^{az} h(0,0,z),z e^z h(0,0,z), e^z h(0,0,z),z); \ z \in \mathbb R \}$. One has $G_1 \cap F_2= \{ 1 \}$. The set $\sigma (G/H_2)$ generates $G$ if $\hbox{dim} \ G_1=3$. This happens if the group $G_2=\{ g(h(0,y,0), y, h(0,y,0), 0); y \in \mathbb R \}$ or the group $G_3=\{ g(x+h(x,0,0), 0, h(x,0,0), 0); x \in \mathbb R \}$ has dimension $2$. The group $G_2$, respectively $G_3$ is a one-parameter subgroup precisely if $h(0,y,0)=c y$, $c \in \mathbb R$, respectively $h(x,0,0)=b x$, $b \in \mathbb R$. As \[ g(x+h(x,0,0), 0, h(x,0,0), 0) g(h(0,y,0), y, h(0,y,0), 0)= \] \[ g(x+h(x,0,0)+h(0,y,0), y, h(x,0,0)+h(0,y,0), 0) \] the group $G_1$ has dimension $3$ if and only if the function $h(x,y,0)$ is different from $b x+c y$, $b,c \in \mathbb R$. Assuming this the set $\sigma (G/H_2)$ generates $G$ if there exists an element $h \in F_2$ such that $h^{-1} G_1 h \neq G_1$ holds. For $h= g(e^{az} h(0,0,z),z e^z h(0,0,z), e^z h(0,0,z),z) \in F_2$ with $z \neq 0$ one has \[ h^{-1} g(x+b x+c y, y, b x+c y, 0) h= \] \[ g([(b+1)x+cy] e^{-a z}, y e^{-z}-z (b x+c y) e^{-z}, (b x+c y) e^{-z},0). \] We obtain $h^{-1} G_1 h =G_1$ if and only if $b=c=0$. Then the group $G_1=\{ g(x, y, 0, 0); \ x,y \in \mathbb R \}$ is a normal subgroup of $G$. The set $\sigma (G/H_2)$ generates $G$, if the set $(F_2 G_1)/G_1$ is not a one-parameter subgroup of $G/G_1$. Because of \begin{equation} g(\mathbb R, \mathbb R, e^{z_1} h(0,0,z_1), z_1) g(\mathbb R , \mathbb R, e^{z_2} h(0,0,z_2), z_2)= \nonumber \end{equation} \begin{equation} g(\mathbb R, \mathbb R, e^{z_1+z_2} h(0,0,z_2)+e^{z_1} h(0,0,z_1),z_1+z_2), \nonumber \end{equation} the set $\sigma (G/H_2)$ does not generate $G$ precisely if for all $z_1, z_2 \in \mathbb R$ the equality $h(0,0,z_2)+e^{-z_2} h(0,0,z_1)= h(0,0,z_1+z_2)$ holds. Using Lemma \ref{functional} for the function $h(0,0,z)$ we get $h(0,0,z)= K(1 -e^{-z})$ with $K \in \mathbb R$. Therefore the loop $L_h$ is proper if and only if the function $h: \mathbb R^3 \to \mathbb R$ does not satisfy the identities $h(x,y,0)=0$ and $h(0,0,z)= K(1 -e^{-z})$ simultaneously. \newline \noindent The multiplication of the loop $L_h$ in the coordinate system $(x,y,z) \mapsto g(x,y,0,z)H_2$ is determined if we apply $\sigma _h(g(x_1,y_1,0,z_1) H_2)$ given by (\ref{section}) to the left coset $g(x_2,y_2,0,z_2)H_2$ and find in the image coset the element of $G$ which lies in the set $\{g(x,y,0,z)H_2; \ x,y,z \in \mathbb R \}$. A direct computation yields multiplication (\ref{multiplication1}) of assertion b). \noindent Now we consider the case that $H=H_3=\{ g(k,k,0,0); \ k \in \mathbb R \}$. As all elements of $G$ have a unique decomposition as $g(x,0,y,z) g(k,k,0,0)$, any continuous function $f: \mathbb R^3 \to \mathbb R; (x,y, z) \mapsto f(x,y,z)$ determines a continuous section $\sigma : G/H_3 \to G$ given by \[ \sigma : g(x,0,y,z) H_3 \mapsto g(x,0,y,z) g(f(x,y,z), f(x,y,z), 0, 0) = \] \[ g(x+e^{az} f(x,y,z),e^z f(x,y,z),y,z). 
\] The set $\sigma (G/H_3)$ acts sharply transitively on the factor space $G/H_3$ if and only if for every triples $(x_1,y_1,z_1)$, $(x_2,y_2,z_2) \in \mathbb R^3$ there exists precisely one triple $(x,y,z) \in \mathbb R^3$ such that \begin{equation} \label{negyedikequ} g(x+e^{az} f(x,y,z),e^z f(x,y,z),y,z) g(x_1,0,y_1,z_1)= \nonumber \end{equation} \begin{equation} g(x_2,0, y_2,z_2) g(t,t,0,0) \end{equation} for a suitable $t \in \mathbb R$. Equation (\ref{negyedikequ}) yields that $z= z_2-z_1$, $y= y_2-e^{z_2-z_1} y_1$, $t= y_1 (z_2-z_1) e^{-z_1}+ e^{-z_1} h(x,y_2-e^{z_2-z_1} y_1,z_2-z_1)$ and equation (\ref{negyedikujequ}) in assertion c) must have a unique solution $x \in \mathbb R$. \newline \noindent The set $\sigma (G/H_3)$ contains the subgroup \[ G_1=\{ g(x+f(x,y,0), f(x,y,0), y, 0); \ x,y \in \mathbb R \} < G'=[G,G], \] and the subset $F_2=\{ g(e^{az} f(0,0,z), e^z f(0,0,z), 0,z); \ z \in \mathbb R \}$. We have $G_1 \cap F_2= \{ 1 \}$. The set $\sigma (G/H_3)$ generates $G$ if $G_1$ is a $3$-dimensional group. This is the case if the subgroup \[G_2=\{ g(x+f(x,0,0), f(x,0,0), 0, 0); x \in \mathbb R \} \] or the subgroup \[G_3=\{ g(f(0,y,0), f(0,y,0), y, 0); y \in \mathbb R \} \] has dimension $2$. The group $G_2$, respectively $G_3$ is a one-parameter subgroup precisely if $f(x,0,0)=c x$, $c \in \mathbb R$, respectively $f(0,y,0)=d y$, $d \in \mathbb R$. As \[ g(x+f(x,0,0), f(x,0,0), 0, 0) g(f(0,y,0), f(0,y,0), y, 0)= \] \[ g(x+f(x,0,0)+f(0,y,0), f(x,0,0)+f(0,y,0), y, 0) \] we have $\hbox{dim} \ G_1=3$ if and only if the function $f(x,y,0)$ is different from $c x+d y$, $c,d \in \mathbb R$. In this case the set $\sigma (G/H_3)$ generates $G$ if there exists an element $h \in F_2$ such that $h^{-1} G_1 h \neq G_1$ holds. For $h= g(e^{az} f(0,0,z), e^z f(0,0,z), 0,z) \in F_2$ with $z \neq 0$ we have \[ h^{-1} g(x+c x+d y, c x+d y, y, 0) h= \] \[ g([(c+1)x + dy] e^{-a z}, (-z y + c x+d y)) e^{-z}, e^{-z} y, 0). \] Hence $h^{-1} G_1 h =G_1$ if and only if one has $c=-1$ and $d=0$. Then the group $G_1=\{ g(0, -x, y, 0); \ x,y \in \mathbb R \}$ is a normal subgroup of $G$ and the set $\sigma (G/H_3)$ generates $G$, if the set $(F_2 G_1)/G_1$ is not a one-parameter subgroup of $G/G_1$. Because of \begin{equation} g(e^{a z_1} f(0,0,z_1), \mathbb R, \mathbb R, z_1) g(e^{a z_2} f(0,0,z_2), \mathbb R, \mathbb R, z_2)= \nonumber \end{equation} \begin{equation} g(e^{a (z_1+z_2)} f(0,0,z_2)+ e^{a z_1} f(0,0,z_1), \mathbb R, \mathbb R, z_1+z_2), \nonumber \end{equation} the set $\sigma (G/H_3)$ does not generate $G$ precisely if for all $z_1, z_2 \in \mathbb R$ the identity $f(0,0,z_2)+e^{-a z_2} f(0,0,z_1)= f(0,0,z_1+z_2)$ holds. Using Lemma \ref{functional} for the function $f(0,0,z)$ we obtain $f(0,0,z)=c (1-e^{-a z})$ with a real constant $c \in \mathbb R$. A direct computation yields that the multiplication of the loop $L_f$ corresponding to the section $\sigma _f$ in the coordinate system $(x,y,z) \mapsto g(x,0,y,z) H_3$ is given by (\ref{loopszorzasketto}) and the assertion c) is proved. \end{proof} \begin{theorem} \label{Propmasodik} The $4$-dimensional connected solvable Lie group $G$ which has trivial center and precisely two one-dimensional normal subgroups is not the multiplication group of connected topological proper loops. \end{theorem} \begin{proof} Every $1$-dimensional connected topological loop $L$ having a Lie group as its multiplication group is already a Lie group (cf. Theorem 18.18 in \cite{loops}, p. 248). 
If the multiplication group of a $2$-dimensional connected topological proper loop is a Lie group, then it is nilpotent (cf. Theorem 1 in \cite{figula00}). Since the group $G$ has trivial center, every $3$-dimensional connected topological loop $L$ having the group $G$ as the group topologically generated by the left translations of $L$ is homeomorphic to $\mathbb R^3$ and its multiplication is given by (\ref{elsoszorzas}), (\ref{multiplication1}) or (\ref{loopszorzasketto}) in Theorem \ref{Propelso}. Denote by $H_i$, $i=1,2,3$, the subgroups of $G$ given in Theorem \ref{Propelso}. If the group $G$ is also the group generated by all left and right translations of such a $3$-dimensional topological loop $L$, then the inner mapping group of $L$ is the group $H_1$ if $L$ is defined by (\ref{elsoszorzas}), the group $H_2$ if $L$ is defined by (\ref{multiplication1}), and the group $H_3$ if $L$ is given by (\ref{loopszorzasketto}). Since the commutator subgroup $G'$ of $G$ is a $3$-dimensional abelian normal subgroup of $G$ and each subgroup $H_i$, $i=1,2,3$, is contained in $G'$, the normalizer of $H_i$ in $G$ coincides with the group $G'$. As the Lie algebra ${\bf g}$ of the group $G$ has trivial center, we obtain a contradiction to Lemma \ref{niemenmaa} and the assertion follows. \end{proof} Author's address: Institute of Mathematics, University of Debrecen,\\ Debrecen, H-4010, P.O.Box: 12, Hungary \\ E-mail: [email protected] \end{document}
\begin{document} \begin{abstract} For any $n\geq 3$, we explicitly construct smooth projective toric $n$-folds of Picard number $\geq 5$, where any nontrivial nef line bundles are big. \end{abstract} \maketitle \tableofcontents \section{Introduction} The following question is our main motivation of this note. \begin{ques} Are there any smooth projective toric varieties $X\not \simeq \mathbb P^n$ such that $$ \partial \Nef (X) \cap \partial \PE(X)=\{0\}\ ? $$ Here, $\Nef(X)$ is the nef cone of $X$ and $\PE(X)$ is the pseudo-effective cone of $X$. \end{ques} By definition, the nef cone $\Nef (X)$ is included in the pseudo-effective cone $\PE(X)$. We note that $ \partial \Nef (X) \cap \partial \PE(X)=\{0\}$ is equivalent to the condition that any nontrivial nef line bundles on $X$ are big when $X\not\simeq \mathbb P^n$. In this note, we explicitly construct smooth {\em{projective}} toric threefolds of Picard number $\geq 5$ on which any nontrivial nef line bundles are big. The main parts of this note are nontrivial examples given in Section \ref{sec4}. See Examples \ref{321} and \ref{43}. In general, it seems to be hard to find those examples. Therefore, it must be valuable to describe them explicitly here. This short note is a continuation and a supplement of the papers:~\cite{kleiman} and \cite{smooth}. Let us see the contents of this note. Section \ref{sec2} is a supplement to the toric Mori theory. We introduce the notion of \lq general\rq \ complete toric varieties. By the definition of \lq general\rq \ projective toric varieties, it is obvious that the final step of the MMP for a $\mathbb Q$-factorial \lq general\rq \ projective toric variety is a $\mathbb Q$-factorial projective toric variety of Picard number one. It is almost obvious if we understand Reid's combinatorial description of toric extremal contraction morphisms. Moreover, it is easy to check that any nontrivial nef line bundles on a \lq general\rq \ complete toric variety are always big. In Section \ref{sec3}, we recall the basic definitions and properties of {\em{primitive collections}} and {\em{primitive relations}} after Batyrev. By the result of Batyrev, any {\em smooth} projective toric variety is \lq general\rq \ if and only if it is isomorphic to the projective space. So, the results obtained in Section \ref{sec2} can not be used to construct examples in Section \ref{sec4}. The first author first considered that there are plenty of \lq general\rq \ smooth projective toric varieties. So, he thought that the examples in Section \ref{sec4} is worthless. Section \ref{sec4} is the main part of this note. We give smooth projective toric threefolds of Picard number $\geq 5$, where any nontrivial nef line bundles are always big. We note that this phenomenon does not occur for smooth projective toric surfaces. Let $X$ be a smooth projective toric surface. Then we can easily see that there exists a morphism $f:X\to \mathbb P^1$ if $X$ is not isomorphic to $\mathbb P^2$. So, the line bundle $f^*\mathcal O_{\mathbb P^1}(1)$ on $X$ is nef but not big. Let $X$ be a smooth projective toric variety and let $\Delta$ be the corresponding fan. If $\Delta$ is sufficiently complicated combinatorially in some sense, then any nontrivial nef line bundles are big. However, we do not know how to define \lq complicated\rq \ fans suitably. Therefore, the explicit examples in Section \ref{sec4} seem to be useful. We note that it is difficult to calculate nef cones or pseudo-effective cones for projective (not necessarily toric) varieties. 
In the final section:~Section \ref{sec5}, we collect miscellaneous results. We explain how to generalize examples in \cite{smooth} and in Section \ref{sec4} into dimension $n\geq 4$. We also treat $\mathbb Q$-factorial projective toric varieties with $\Nef(X)=\PE(X)$. Let us fix the notation used in this note. For the details, see \cite{reid} or \cite{intro}. For the basic results on the toric geometry, see the standard text books:~\cite{tata}, \cite{oda}, or \cite{fulton}. \begin{notation} We will work over some fixed field $k$ throughout this note. Let $X$ be a complete toric variety; a $1$-cycle of $X$ is a formal sum $\sum a_iC_i$ with complete curves $C_i$ on $X$, and $a_i\in \mathbb Z$. We put $$ Z_1(X):=\{1\text{-cycles of} \ X\}, $$ and $$ Z_1(X)_{\mathbb R}:= Z_1(X)\otimes \mathbb R. $$ There is a pairing $$ \Pic (X)\times Z_1(X)_{\mathbb R} \to \mathbb R $$ defined by $(\mathcal L, C)\mapsto \deg _C\mathcal L$, extended by bilinearity. Define $$ N^1(X):=(\Pic (X)\otimes \mathbb R)/\equiv $$ and $$ N_1(X):= Z_1(X)_{\mathbb R}/\equiv, $$ where the {\em numerical equivalence} $\equiv$ is by definition the smallest equivalence relation which makes $N^1$ and $N_1$ into dual spaces. Inside $N_1(X)$ there is a distinguished cone of effective $1$-cycles, $$ {\NE}(X)=\{\, Z\, | \ Z\equiv \sum a_iC_i \ \text{with}\ a_i\in \mathbb R_{\geq 0}\} \subset N_1(X). $$ It is known that $\NE(X)$ is a rational polyhedral cone. A subcone $F\subset {\NE}(X)$ is said to be {\em{extremal}} if $u,v\in {\NE}(X)$, $u+v\in F$ imply $u,v\in F$. The cone $F$ is also called an {\em{extremal face}} of ${\NE}(X)$. A one-dimensional extremal face is called an {\em{extremal ray}}. We define the {\em{Picard number}} $\rho(X)$ by $$ \rho (X):=\dim _{\mathbb R}N^1(X)< \infty. $$ An element $D\in N^1(X)$ is called {\em{nef}} if $D\geq 0$ on ${\NE}(X)$. We define the {\em{nef cone}} $\Nef (X)$, the {\em{ample cone}} $\Amp(X)$, and the {\em{pseudo-effective cone}} $\PE(X)$ in $N^1(X)$ as follows. $$ \Nef(X)=\{D\, |\, D \text{\ is nef}\}, $$ $$ \Amp(X)=\{D\, |\, D\text{\ is ample}\} $$ and $$ \PE(X)= $$ $$\{D\equiv \sum a_i D_i\,|\, \text{$D_i$ is an effective Weil divisor and $a_i\in \mathbb R_{\geq 0}$}\}. $$ It is not difficult to see that $\PE(X)$ is a rational polyhedral cone in $N^1(X)$ since $X$ is toric. For the usual definition of $\PE(X)$, see, for example, \cite[Definition 2.2.25]{lazarsfeld}. It is easy to see that $\Amp(X)\subset \Nef (X)\subset \PE(X)$. From now on, we assume that $X$ is projective. Let $D$ be an $\mathbb R$-Cartier divisor on $X$. Then $D$ is called {\em{big}} if $D\equiv A+E$ for an ample $\mathbb R$-divisor $A$ and an effective $\mathbb R$-divisor $E$. For the original definition of a big divisor, see, for example, \cite[2.2 Big Line Bundles and Divisors]{lazarsfeld}. We define the {\em{big cone}} $\xBig (X)$ in $N^1(X)$ as follows. $$ \xBig (X)=\{ D\, |\, D \, \text{is big}\}. $$ It is well known that the big cone is the interior of the pseudo-effective cone and the pseudo-effective cone is the closure of the big cone. See, for example, \cite[Theorem 2.2.26]{lazarsfeld}. \end{notation} In \cite{kleiman} and \cite{smooth}, we mainly treated {\em{non-projective}} toric varieties. In this note, we are interested in {\em{projective}} toric varieties. \section{Supplements to the toric Mori theory}\label{sec2} We introduce the following new notion. It will not be useful when we construct various examples of {\em{smooth}} projective toric varieties in Section \ref{sec4}. 
However, we include it here for future use. By the simple observations in this section, we know that most complete toric varieties have no nontrivial non-big nef line bundles. \begin{defn}\label{11} Let $X$ be a complete toric variety with $\dim X=n$. Let $\Delta$ be the fan corresponding to $X$. Let $G(\Delta)=\{v_1, \cdots, v_m\}$ be the set of all primitive vectors spanning one-dimensional cones in $\Delta$. If there exists a relation $$ a_{i_1}v_{i_1}+\cdots +a_{i_k}v_{i_k}=0 $$ such that $\{i_1, \cdots, i_k\}\subset \{1, \cdots, m\}$, $a_{i_j}\in \mathbb Z_{>0}$ for any $1\leq j\leq k$ with $k\leq n$, then $X$ is called {\em{\lq special\rq}}. If $X$ is not \lq special\rq, then $X$ is called {\em{\lq general\rq}}. \end{defn} \begin{ex} The projective space $\mathbb P^n$ is \lq general\rq \ in the sense of Definition \ref{11}. \end{ex} Let us prepare the following easy but useful lemmas for the toric Mori theory. The proofs are obvious, so we omit them. \begin{lem}\label{14} Let $X$ be a complete toric variety and let $\pi: \widetilde X\to X$ be a small projective toric $\mathbb Q$-factorialization $($cf.~\cite[Corollary 5.9]{fujino}$)$. Assume that $X$ is \lq general\rq\ $($resp.~\lq special\rq$)$. Then $\widetilde X$ is also \lq general\rq\ $($resp.~\lq special\rq$)$. \end{lem} More generally, we have the following lemma. \begin{lem}\label{15} Let $X$ and $X'$ be complete toric varieties and let $\varphi :X\dashrightarrow X'$ be a proper birational toric map. Assume that $\varphi$ is an isomorphism in codimension one. Then $X$ is \lq general\rq\ if and only if so is $X'$. \end{lem} \begin{lem}\label{16} Let $X$ and $Z$ be complete toric varieties and let $\pi:X\to Z$ be a birational toric morphism. Assume that $X$ is \lq general\rq. Then $Z$ is \lq general\rq. We note that $Z$ is not necessarily \lq special\rq\ even if $X$ is \lq special\rq. \end{lem} We have two elementary properties. \begin{prop}\label{18} Let $X$ be a complete toric variety and let $f:X\to Y$ be a proper surjective toric morphism onto $Y$. Assume that $X$ is \lq general\rq\ and that $\dim Y<\dim X$. Then $Y$ is a point. \end{prop} \begin{proof} It is obvious. \end{proof} \begin{cor}\label{19} Let $X$ be a complete toric variety. Assume that $X$ is \lq general\rq. Let $D$ be a nef Cartier divisor on $X$ such that $D\not \sim 0$. Then $D$ is big. \end{cor} \begin{proof} Since $D$ is nef, the linear system $|D|$ defines a proper surjective toric morphism $\Phi_{|D|}: X\to Z$. Apply Proposition \ref{18} to $\Phi_{|D|}:X\to Z$. Then we obtain $\dim Z=\dim X$. Therefore, $D$ is big. \end{proof} The next proposition is also obvious. We include it for the reader's convenience because it has not been stated explicitly in the literature. For the details of the toric Mori theory, see \cite[Section 5]{fujino} and \cite{intro}. \begin{prop}[MMP for \lq general\rq \ projective toric varieties]\label{17} Let $X$ be a $\mathbb Q$-factorial projective toric variety and let $B$ be a Cartier divisor on $X$ such that $B$ is not pseudo-effective. Assume that $X$ is \lq general\rq. We run the MMP with respect to $B$. Then we obtain a sequence of $B$-negative divisorial contractions and $B$-flips$:$ $$ X=X_0\dashrightarrow X_1\dashrightarrow \cdots \dashrightarrow X_i\dashrightarrow X_{i+1}\dashrightarrow \cdots \dashrightarrow X_l, $$ where $X_l$ is a $\mathbb Q$-factorial projective toric variety with $\rho (X_l)=1$.
\end{prop} \begin{proof} Run the MMP with respect to $B$, where $B$ is not pseudo-effective, for example, $B=K_X$. Since $B$ is not pseudo-effective, the final step is a Fano contraction $X_l\to Z$. Since $X$ is \lq general\rq, $X_l$ is also \lq general\rq \ by Lemmas \ref{15} and \ref{16}. Therefore, $Z$ must be a point by Corollary \ref{18}. This means that $X_l$ is a $\mathbb Q$-factorial projective toric variety with $\rho(X_l)=1$. \end{proof} We will see that any smooth projective toric variety $X$, which is not isomorphic to the projective space, is \lq special\rq\ by \cite{batyrev}. See Proposition \ref{25} below. So, the results in this section can not be applied to {\em{smooth}} projective toric varieties. \section{Primitive collections and relations}\label{sec3} Let us recall the notion of primitive collections and primitive relations introduced by Batyrev (cf.~\cite{batyrev}). It is very useful to compute some explicit examples of toric varieties. Note that this section is not indispensable for understanding the examples in Section \ref{sec4}. Let $\Delta$ be a complete non-singular $n$-dimensional fan and let $G(\Delta)$ be the set of all primitive generators of $\Delta$. \begin{defn}[Primitive collection] A non-empty subset $\mathcal P=\{v_1,$ $\cdots,v_k\}\subset G(\Delta)$ is called a {\em{primitive collection}} if for each generator $v_i\in \mathcal P$ the elements of $\mathcal P\setminus \{v_i\}$ generate a $(k-1)$-dimensional cone in $\Delta$, while $\mathcal P$ does not generate any $k$-dimensional cone in $\Delta$. \end{defn} \begin{defn}[Focus] Let $\mathcal P=\{v_1, \cdots, v_k\}$ be a primitive collection in $G(\Delta)$. Let $S(\mathcal P)$ denote $v_1+\cdots +v_k$. The {\em{focus}} $\sigma (\mathcal P)$ of $\mathcal P$ is the cone in $\Delta$ of the smallest dimension containing $S(\mathcal P)$. \end{defn} \begin{defn}[Primitive relation] Let $\mathcal P=\{v_1, \cdots, v_k\}$ be a primitive collection in $G(\Delta)$ and $\sigma(\mathcal P)$ its focus. Let $w_1, \cdots, w_m$ be the primitive generators of $\sigma (\mathcal P)$. Then there exists a unique linear combination $a_1w_1+\cdots +a_mw_m$ with positive integer coefficients $a_i$ which is equal to $v_1+\cdots+ v_k$. Then the linear relation $v_1+\cdots +v_k-a_1w_1-\cdots -a_mw_m=0$ is called the {\em{primitive relation associated with $\mathcal P$}}. \end{defn} Then we have the description of $\NE(X)$ by primitive relations. \begin{thm}[{cf.~\cite[2.15 Theorem]{batyrev}}] Let $\Delta$ be a projective non-singular fan and $X=X(\Delta)$ the corresponding toric variety. Then the Kleiman-Mori cone $\NE(X)$ is generated by all primitive relations. The primitive relation which spans an extremal ray of $\NE(X)$ is said to be {\em{extremal}}. \end{thm} Let $\Delta$ be a {\em{projective}} non-singular $n$-dimensional fan. Then, Batyrev obtained the following important result. \begin{prop}[{cf.~\cite[3.2 Proposition]{batyrev}}]\label{25} There exists a primitive collection $\mathcal P=\{v_1, \cdots, v_k\}$ in $G(\Delta)$ such that the associated primitive relation is of the form $$ v_1+\cdots +v_k=0. $$ In other words, the focus $\sigma(\mathcal P)=\{0\}$. \end{prop} We close this section with an elementary remark. \begin{rem} If $k=n+1$ in Proposition \ref{25}, then $X(\Delta)\simeq \mathbb P^n$. \end{rem} Therefore, a smooth projective toric variety $X$ is \lq general\rq\ if and only if $X$ is isomorphic to the projective space. 
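Indeed, by Proposition \ref{25} there exists a primitive collection $\mathcal P=\{v_1, \cdots, v_k\}$ in $G(\Delta)$ with $v_1+\cdots +v_k=0$. If $X$ is \lq general\rq, then Definition \ref{11} forces $k\geq n+1$, while a primitive collection in a non-singular $n$-dimensional fan has at most $n+1$ elements; hence $k=n+1$ and $X\simeq \mathbb P^n$ by the remark above. Conversely, $\mathbb P^n$ is \lq general\rq.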
By this reason, it is not so easy to construct smooth projective toric varieties on which any nontrivial nef line bundles are big. \section{Examples}\label{sec4} First, let us recall the following example, which is not a toric variety. For the details, see \cite{morimukai} and \cite[p.~67]{matsuki}. \begin{ex}[{\cite[no.~30 Table 2]{morimukai}}]\label{41} Let $X$ be the blowing-up of $\mathbb P^3_{\mathbb C}$ along a smooth conic. Then $X$ is a smooth Fano threefold with $\rho (X)=2$. It is known that $X$ has two extremal divisorial contractions. One contraction is the inverse of the blowing-up $X\to \mathbb P^3$. Another one is a contraction of $\mathbb P^2$ on $X$ into a smooth point. Therefore, it is not difficult to see that every nef Cartier divisor $D\not \equiv 0$ is big. \end{ex} The next example is the main theme of this short note. It is hard for the non-experts to find it. Therefore, we think it is worthwhile to describe it explicitly here. \begin{ex}\label{321} We put $v_1=(1,0,0), v_2=(0,1,0), v_3=(0,0,1)$, and $v_4=(-1, -1, -1)$. We consider the standard fan of $\mathbb P^3$ generated by $v_1, v_2, v_3$, and $v_4$. We subdivide the cone $\langle v_1, v_2, v_4\rangle$ as follows. Take a blow-up $X_1\to \mathbb P^3$ along the vector $v_5=(1, -1, -2)=3v_1+v_2+2v_4$. We take a blow-up $X_2\to X_1$ along the vector $v_6=(1, 0, -1)=\frac{1}{2}(v_1+v_2+v_5)$ and a blow-up $X_3\to X_2$ along $v_7=(0, -1, -2)=\frac{1}{3}(v_2+2v_4+2v_5)$. Finally, we take a blow-up $X_3$ along the vector $v_8=(0, 0, -1) =\frac{1}{2}(v_2+v_7)$ and obtain $X$. Then, it is obvious that $X$ is projective and $\rho (X)=5$. It is easy to see that $X$ is smooth. In this case, $\NE(X)$ is spanned by the following five extremal primitive relations, $v_1+v_2+v_5-2v_6=0$, $v_4+v_5+v_8-2v_7=0$, $v_2+v_7-2v_8=0$, $v_6+v_8-v_2-v_5=0$, and $v_3+v_5-2v_1-v_4=0$. This toric variety $X$ is nothing but the one labeled as [8-10] in \cite[Theorem 9.6]{tata}. The picture below helps us understand the combinatorial data of $X$. 
\begin{center} \scalebox{1.0}{ \unitlength 0.1in \begin{picture}( 36.1700, 29.4800)( 14.1000,-31.0600) \special{pn 20} \special{pa 3332 376} \special{pa 3332 376} \special{fp} \special{pa 3332 376} \special{pa 1732 3096} \special{fp} \special{pn 20} \special{pa 1732 3096} \special{pa 4932 3096} \special{fp} \special{pn 20} \special{pa 4932 3096} \special{pa 3332 376} \special{fp} \special{pn 20} \special{pa 3012 2136} \special{pa 3332 376} \special{fp} \special{pn 20} \special{pa 3012 2136} \special{pa 1732 3096} \special{fp} \special{pn 20} \special{pa 3012 2136} \special{pa 4932 3096} \special{fp} \special{pn 20} \special{pa 2372 2616} \special{pa 3332 376} \special{fp} \special{pn 20} \special{pa 2372 2616} \special{pa 3972 2616} \special{fp} \special{pn 20} \special{pa 3972 2616} \special{pa 3332 376} \special{fp} \special{pn 20} \special{pa 2372 2616} \special{pa 4932 3096} \special{fp} \special{pn 20} \special{pa 2852 2936} \special{pa 2372 2616} \special{fp} \special{pn 20} \special{pa 2852 2936} \special{pa 1732 3096} \special{fp} \special{pn 20} \special{pa 2852 2936} \special{pa 4932 3096} \special{fp} \put(50.2700,-31.9100){\makebox(0,0)[lb]{$\mathbf{v_2}$}} \put(22.3500,-25.5900){\makebox(0,0){$\mathbf{v_5}$}} \put(26.5900,-29.1900){\makebox(0,0)[rb]{$\mathbf{v_6}$}} \put(16.3500,-31.9100){\makebox(0,0){$\mathbf{v_1}$}} \put(33.3300,-2.4300){\makebox(0,0){$\mathbf{v_4}$}} \put(39.9300,-26.1300){\makebox(0,0)[lb]{$\mathbf{v_8}$}} \put(30.5300,-21.5300){\makebox(0,0)[lb]{$\mathbf{v_7}$}} \end{picture} } \end{center} \begin{claim} There are no projective surjective toric morphism $f:X\to Y$ with $\dim Y=1$ or $2$. \end{claim} \begin{proof} The variety $X$ is obtained by successive blowing-ups of $\mathbb P^3$ inside the cone $\langle v_1, v_2, v_4\rangle$. So, $X$ does not admit to a morphism to a curve. Thus, we have to consider the case when $Y$ is a surface. By considering primitive relations, $f:X\to Y$ must be induced by the projection $\mathbb Z^3\to \mathbb Z^2: (x, y, z)\mapsto (x, y)$ because $v_3+v_8=0$. The image of the cone $\langle v_2, v_5, v_8\rangle$ is the cone spanned by $(0, 1)$ and $(1, -1)$. On the other hand, the image of the cone $\langle v_1, v_4, v_5\rangle$ is the cone spanned by $(1, 0)$ and $(-1, -1)$. Therefore, there are no surjective morphisms $f:X\to Y$ with $\dim Y=2$. \end{proof} Thus, every nef divisor $D\not \sim 0$ is big, that is, $\partial \Nef (X)\cap \partial \PE(X) =\{0\}$. \end{ex} By the following example, the reader understands the advantage of using the toric geometry to construct examples. We do not know what happens if we take blow-ups of $X$ in Example \ref{41}. \begin{ex}\label{43} By taking blowing-ups inside the cone $\langle v_5, v_7, v_8\rangle$ in Example \ref{321}, we obtain a smooth projective toric threefold $X_k$ for any $k\geq 6$ such that $\rho (X_k)=k$ and $\partial \Nef (X_k)\cap \partial \PE(X_k)=\{0\}$, that is, every nef divisor $D\not \sim 0$ on $X_k$ is big. More explicitly, for example, $X_6$ is the blow-up of $X$ along $u_6=v_5+v_7+v_8$ and $X_{k+1}$ is the blow-up of $X_k$ along $u_{k+1}=v_5+v_7+u_k$ for $k\geq 6$. \end{ex} We can easily check that any smooth projective toric threefolds of Picard number $2\leq \rho \leq 4$ have some nontrivial non-big nef line bundles by the classification table in \cite[Theorem 9.6]{tata}. For smooth non-projective toric variety, the following example will help the reader. It is the most famous example of smooth complete non-projective toric threefold. 
\begin{ex} Let $\Delta$ be the fan whose rays are spanned by $v_1=(1, 0, 0)$, $v_2=(0, 1, 0)$, $v_3=(0, 0, 1)$, $v_4=(-1, -1, -1)$, $v_5=(0, -1, -1)$, $v_6=(-1, 0, -1)$, $v_7=(-1, -1, 0)$, and whose maximal cones are $\langle v_1, v_2, v_3\rangle$, $\langle v_4, v_5, v_6\rangle$, $\langle v_4, v_6, v_7\rangle$, $\langle v_4, v_5, v_7\rangle$, $\langle v_1, v_2, v_5\rangle$, $\langle v_2, v_5, v_6\rangle$, $\langle v_2, v_3, v_6\rangle$, $\langle v_3, v_6, v_7\rangle$, $\langle v_1, v_3, v_7\rangle$, $\langle v_1, v_5, v_7\rangle$. Then $X=X(\Delta)$ is the most famous non-projective smooth toric threefold with $\rho (X)=4$, constructed by Miyake and Oda. By removing three two-dimensional walls $\langle v_1, v_7\rangle$, $\langle v_2, v_5\rangle$, and $\langle v_3, v_6\rangle$ from $\Delta$, we obtain a flopping contraction $f:X\to Y$. It is easy to see that $Y$ is a projective toric threefold with $\rho (Y)=2$ and three ordinary double points. We can check that every nef divisor $D$ can be written as $D=f^*D'$ for some nef divisor $D'$ on $Y$. On the other hand, $\Nef (Y)$ is a two-dimensional cone and every nef divisor on $Y$ is big. Therefore, $\Nef (X)$ is also two-dimensional and all the nef divisors on $X$ are big. We note that $\Nef (X)$ is thin in $N^1(X)$, that is, it has empty interior, by Kleiman's ampleness criterion, since $X$ is a smooth complete non-projective variety. \end{ex} The reader can find many smooth complete non-projective toric threefolds $X$ with $\Nef (X)=\{0\}$ in \cite{smooth}. \section{Miscellaneous comments}\label{sec5} In this final section, we collect miscellaneous results. First, we explain how to generalize Examples \ref{321} and \ref{43} in dimension $\geq 4$. \begin{say}We put $v_1=(1, 0, \cdots, 0)$, $v_2=(0, 1, 0, \cdots, 0)$, $v_3=(0, 0, 1, 0, \cdots, 0)$, $v_4=(-1, -1, \cdots, -1)\in N=\mathbb Z^n$. We consider $w_1=(0, 0, 0, 1, 0, \cdots, 0)$, $w_2=(0, 0, 0, 0, 1, 0, \cdots, 0)$, $\cdots$, $w_{n-3}=(0, \cdots, 0, 1)\in N$. Using these vectors, we can construct the fan corresponding to $\mathbb P^n$ as usual. We take $v_5=3v_1+v_2+2v_4= (1, -1, -2, \cdots, -2)$, $v_6=\frac{1}{2}(v_1+v_2+v_5) =(1, 0, -1, \cdots, -1)$, $v_7=\frac{1}{3}(v_2+2v_4+2v_5) =(0, -1, -2, \cdots, -2)$, and $v_8 =\frac{1}{2}(v_2+v_7)=(0, 0, -1, \cdots, -1)$. We take a sequence of blow-ups $$ X\to X_3\to X_2\to X_1\to \mathbb P^n $$ as in Example \ref{321}. In this case, the center of each blow-up is $(n-3)$-dimensional. We can easily check that $X$ is a smooth projective toric $n$-fold. We note that $v_3+w_1+\cdots+w_{n-3}+v_8=0$. \begin{claim} If $f: X\to Y$ is a proper surjective toric morphism and $Y$ is not a point, then $\dim Y=n$. \end{claim} \begin{proof}[Proof of {\em{Claim}}] By considering linear relations among $v_1, v_2, \cdots, v_8, w_1,$ $\cdots, w_{n-3}$ as in Definition \ref{11}, $f$ must be induced by the projection $\mathbb Z^n\to \mathbb Z^2: (x_1, x_2, \cdots, x_n)\mapsto (x_1, x_2)$ if $\dim Y<n$. By the same arguments as in the proof of the Claim in Example \ref{321}, this cannot happen. Therefore, we obtain $\dim Y=n$. \end{proof} Thus, every nontrivial nef line bundle on $X$ is big. \end{say} So, for any $(n, \rho)$, where $n\geq 4$ and $\rho \geq 5$, we can construct a smooth projective toric $n$-fold $X$ with $\rho (X)=\rho$ on which every nontrivial nef line bundle is big (cf.~Example \ref{43}). We leave the details to the reader as an exercise. The next one is a higher-dimensional analogue of \cite{smooth}.
\begin{say}[Smooth complete toric varieties with no nontrivial nef line bundles] Let $X$ be a smooth complete toric variety with no nontrivial nef line bundles. We put $\mathcal E=\mathcal O_X^{\oplus k}\oplus \mathcal L$ for $k\geq 1$, where $\mathcal L$ is a nontrivial line bundle on $X$. We consider the $\mathbb P^k$-bundle $\pi:Y=\mathbb P_X(\mathcal E)\to X$. Then $Y$ is a $(\dim X+k)$-dimensional complete toric variety. It is easy to see that there are no nontrivial nef line bundles on $Y$. So, for $n\geq 4$, we can construct many $n$-dimensional smooth complete toric varieties of Picard number $\geq 6$ with no nontrivial nef line bundles by \cite{smooth}. \end{say} Finally, we close this note with an easy result. We treat the other extreme case:~$\Nef (X)=\PE(X)$. \begin{prop} Let $X$ be a $\mathbb Q$-factorial projective toric variety with $\rho (X)=\rho$. Assume that $\Nef (X)=\PE(X)$, that is, every effective divisor is nef. Then there is a finite toric morphism $\mathbb P^{n_1}\times \cdots \times \mathbb P^{n_\rho}\to X$ with $n_1+\cdots+ n_{\rho}=\dim X$. When $X$ is smooth, $X\simeq \mathbb P^{n_1}\times \cdots \times \mathbb P^{n_\rho}$ with $n_1+\cdots+ n_{\rho}=\dim X$. \end{prop} \begin{proof} The condition $\Nef (X)=\PE(X)$ implies that every extremal ray of $\NE(X)$ is of Fano type. First, we assume that $X$ is smooth. We obtain a Fano contraction $f:X\to Y$ with $\rho(Y)=\rho (X)-1$, where $Y$ is a smooth projective toric variety and $\Nef (Y)=\PE(Y)$. It is well known that $X$ is a projective space bundle over $Y$. By induction, we obtain $Y\simeq \mathbb P^{n_1} \times \cdots \times\mathbb P^{n_{\rho-1}}$. Therefore, we can easily check that $X\simeq \mathbb P^{n_1}\times \cdots \times \mathbb P^{n_\rho}$ and $f$ is the projection. Lemma \ref{key} may help the reader check it. Next, we just assume that $X$ is a $\mathbb Q$-factorial projective toric variety with $\Nef(X)=\PE(X)$. As above, we have a Fano contraction $f:X\to Y$ with $\rho (Y)=\rho (X)-1$. In this case, $Y$ is a $\mathbb Q$-factorial projective toric variety with $\Nef(Y)=\PE(Y)$. By applying the induction hypothesis, we have a finite toric surjective morphism $g:W'=\mathbb P^{n_1}\times \cdots \times \mathbb P^{n_{\rho-1}}\to Y$. If necessary, we take a higher model $W=\mathbb P^{n_1}\times \cdots \times \mathbb P^{n_{\rho-1}}\to W'\to Y$ and may assume that $V\to W$ is a fiber bundle, where $V$ is the normalization of $W\times _YX$. We note that $\Nef(V)=\PE(V)$. For any irreducible torus invariant closed subvariety $U$ on $V$ such that $\dim U=\dim W+1$ and that $U\to W$ is surjective, we can see that $U$ is a $\mathbb P^1$-bundle over $W$ and $\Nef(U)=\PE(U)$. Therefore, $U\simeq W\times \mathbb P^1$ and $U\to W$ is the first projection by the previous step. By these observations, we can see that $V\simeq W\times F$, where $F$ is a $\mathbb Q$-factorial projective toric variety with $\rho (F)=1$. Thus, we obtain the desired finite toric morphism $\mathbb P^{n_1}\times \cdots \times \mathbb P^{n_\rho}\to X$. \end{proof} The following is a key lemma. \begin{lem}\label{key} Let $X$ be a $\mathbb Q$-factorial projective toric variety with $\Nef (X)=\PE(X)$. Let $Z$ be any irreducible torus invariant closed subvariety of $X$. Then $Z$ is a $\mathbb Q$-factorial projective toric variety with $\Nef (Z)=\PE(Z)$. \end{lem} \begin{proof} It is obvious. \end{proof} \end{document}
\begin{document} \title{Bernoulli crossed products without almost periodic weights} \begin{abstract} We prove a classification result for a large class of noncommutative Bernoulli crossed products $(P,\phi)^\Lambda \rtimes \Lambda$ without almost periodic states. Our results improve the classification results from \cite{vaes-verraedt;classification-type-III-bernoulli-crossed-products}, where only Bernoulli crossed products built with almost periodic states could be treated. We show that the family of factors $(P,\phi)^\Lambda \rtimes \Lambda$ with $P$ an amenable factor, $\phi$ a weakly mixing state (i.e.~a state for which the modular automorphism group is weakly mixing) and $\Lambda$ belonging to a large class of groups, is classified by the group $\Lambda$ and the action $\Lambda \curvearrowright (P, \phi)^\Lambda$, up to state preserving conjugation of the action. \end{abstract} \section{Introduction} Distinguishing different von Neumann algebras is one of the main research goals in operator algebra theory. One of the main achievements in this direction was made by Popa, whose deformation/rigidity theory provided powerful classification results for type $\operatorname{I}I_1$ factors such as group measure space constructions $L^\infty(X)\rtimes \Gamma$ of free ergodic probability measure preserving actions $\Gamma \curvearrowright (X,\mu)$. Popa's deformation/rigidity theory has later been combined with modular theory to study type $\operatorname{I}II$ factors, such as group measure space constructions for nonsingular actions \cite{vaes-houdayer;type-III-unique-cartan}, Shlyakhtenko's free Araki-Woods factors \cite{houdayer;structural-results-free-araki-woods} and free quantum group factors \cite{isono;prime-factorization-free-quantum-group-factors}. In the same spirit, Stefaan Vaes and the author provided a classification result for noncommutative Bernoulli crossed products constructed with almost periodic states \cite{vaes-verraedt;classification-type-III-bernoulli-crossed-products}. However, obtaining a full classification for families of type $\operatorname{I}II$ factors without using the existence of almost periodic states, is extremely difficult. Even the free Araki-Woods factors are only fully classified when the underlying one-parameter group is almost periodic \cite{shlyakhtenko;free-quasi-free-states;pacific}. Likewise, the classification of noncommutative Bernoulli crossed products in \cite{vaes-verraedt;classification-type-III-bernoulli-crossed-products} depends heavily on the fact that they have a discrete decomposition $N \rtimes \Gamma$, with $\Gamma$ a discrete group acting on a type $\operatorname{I}I_\infty$ factor $N$. To go beyond almost periodic states, essentially new techniques are required. In the current paper, we provide a new argument showing classification results for Bernoulli crossed products with states that are not almost periodic. The noncommutative Bernoulli crossed products, introduced by Connes \cite{connes;almost-periodic-states}, are constructed as follows. Let $(P,\phi)$ be any von Neumann algebra equipped with a normal faithful state $\phi$, and let $\Lambda$ be a countably infinite group. Then take the infinite tensor product $P^\Lambda = \mathbin{\overline\bigotimes}_{g \in \Lambda} P$ indexed by $\Lambda$, with respect to the state $\phi$. The group $\Lambda$ acts on $P^\Lambda$ by shifting the tensor factors. 
This action is called a \emph{noncommutative Bernoulli action}, and the factor $P^\Lambda \rtimes \Lambda$ is called a \emph{Bernoulli crossed product}. In \cite{vaes-verraedt;classification-type-III-bernoulli-crossed-products} it was shown that if $\phi$ is an almost periodic state, then the factors $(P,\phi)^{\mathbb{F}_n} \rtimes \mathbb{F}_n$ are completely classified up to isomorphism by $n$ and by the subgroup of $\mathbb{R}^+_0$ generated by the point spectrum of the modular operator $\Delta_\phi$. In this paper, we are able to improve this classification and to also include Bernoulli crossed products for which the state $\phi$ is not almost periodic. In this setting, the Bernoulli crossed product $P^\Lambda \rtimes \Lambda$ is always of type $\operatorname{I}II_1$ (see \cref{lem.only-type-III1,lem.outer-core} below), and does not admit any almost periodic weight (see \cref{rem.no-almost-periodic}). Our classification thus yields many nonisomorphic factors of type $\operatorname{I}II_1$ without almost periodic weights. The first result that we obtain is a nonisomorphism result for Bernoulli crossed products $(P,\phi)^\Lambda \rtimes \Lambda$ where the state $\phi$ is weakly mixing, i.e.~the modular operator $\Delta_\phi$ of the state $\phi$ has no nontrivial finite-dimensional invariant subspaces. We denote by $\mathcal{C}$ the class of groups defined in \cite[Section 4]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}. All groups in the class $\mathcal{C}$ are nonamenable, and the class $\mathcal{C}$ contains all weakly amenable groups $\Gamma$ with $\beta_1^{(2)}(\Gamma) > 0$ \cite{popa-vaes;unique-cartan-decomposition-factors-free}, all weakly amenable, nonamenable, bi-exact groups \cite{popa-vaes;unique-cartan-decomposition-factors-hyperbolic} and all free product groups $\Gamma = \Gamma_1 \star \Gamma_2$ with $|\Gamma_1| \geq 2$ and $|\Gamma_2| \geq 3$ \cite{ioana;cartan-subalgebras-amalgamated-free-product-factors}. In particular, it contains all nonelementary hyperbolic groups, such as the free groups $\mathbb{F}_n, n\geq2$. Moreover, class $\mathcal{C}$ is closed under extensions and commensurability. \begin{thmstar}\label{corstar.B} The set of factors \begin{align*} \bigl\{(P,\phi)^\Lambda \rtimes \Lambda \bigm| &\;\;\text{$P$ a nontrivial amenable factor with a normal faithful weakly}\\ &\;\;\text{mixing state $\phi$, and $\Lambda$ an icc group in the class $\mathcal{C}$}\;\bigr\} \end{align*} is exactly classified, up to isomorphism, by the group $\Lambda$ and the action $\Lambda \curvearrowright (P,\phi)^\Lambda$ up to a state preserving conjugacy of the action. \end{thmstar} In particular, \cref{corstar.B} implies that the state $\phi^\Lambda$ on $P^\Lambda$ can be retrieved from the factor $(P,\phi)^\Lambda \rtimes \Lambda$. More generally, we also obtain the following optimal classification result for general states $\phi$ on the base algebra. For every nontrivial factor $(P, \phi)$ equipped with a normal faithful state, we denote by $P_{\phi,\text{ap}} \subset P$ the \emph{almost periodic part} of $P$, i.e.~$P_{\phi,\text{ap}}$ is the subalgebra spanned by the eigenvectors of $\Delta_\phi$, \begin{align*} P_{\phi,\text{ap}} = \big(\operatorname{span} \bigcup_{\mu \in \mathbb{R}^+_0} \{ x \in P \mid \sigma_t^\phi(x) = \mu^{\mathbf{i}t} x \}\big)''.
\end{align*} We obtain the following main theorem, classifying all Bernoulli crossed products with amenable factors $(P,\phi)$ as base algebra, under the assumption that the almost periodic part of the base algebra, $P_{\phi,\text{ap}}$, is a factor. This assumption is equivalent to the assumption that the centralizer $(P^I)_{\phi^I}$ of the infinite tensor product $P^I$ w.r.t.~$\phi^I$ is a factor, see \cref{lem.technical-condition} below. \begin{thmstar}\label{thmstar.A} Let $(P_0,\phi_0)$ and $(P_1,\phi_1)$ be nontrivial amenable factors equipped with normal faithful states, such that $(P_0)_{\phi_0,\text{ap}}$ and $(P_1)_{\phi_1,\text{ap}}$ are factors. Let $\Lambda_0$ and $\Lambda_1$ be icc groups in the class $\mathcal{C}$. The algebras $P_0^{\Lambda_0} \rtimes \Lambda_0$ and $P_1^{\Lambda_1} \rtimes \Lambda_1$ are isomorphic if and only if one of the following statements holds. \begin{enumerate}[\upshape (a),topsep=0mm] \item The states $\phi_0$ and $\phi_1$ are both tracial, and the actions $\Lambda_i \curvearrowright (P_i, \phi_i)^{\Lambda_i}$ are cocycle conjugate, modulo a group isomorphism $\Lambda_0 \cong \Lambda_1$. \item The states $\phi_0$ and $\phi_1$ are both nontracial, and there exist projections $p_i \in (P_i^{\Lambda_i})_{\phi_i^{\Lambda_i}}$ such that the reduced cocycle actions $(\Lambda_i \curvearrowright (P_i, \phi_i)^{\Lambda_i})^{p_i}$ are cocycle conjugate through a state preserving isomorphism, modulo a group isomorphism $\Lambda_0 \cong \Lambda_1$. \end{enumerate} \end{thmstar} Remark that the actions $\Lambda_i \curvearrowright P_i^{\Lambda_i}$ are outer, and under the conditions of the theorem, the centralizer of $P_i^{\Lambda_i}$ w.r.t.~$\phi_i^{\Lambda_i}$ is a factor. Hence, the reduced cocycle action $(\Lambda_i \curvearrowright (P_i, \phi_i)^{\Lambda_i})^{p_i}$ makes sense, and is well defined up to cocycle conjugation, see \cref{section.cocycle-actions}. Note that if the centralizers of $P_i^{\Lambda_i}$ w.r.t.~$\phi_i^{\Lambda_i}$ are trivial, then we automatically get conjugation of the two actions. In particular, \cref{corstar.B} is now a direct consequence of \cref{thmstar.A}, since if $\phi$ is a weakly mixing state on $P$, then $P_{\phi,\text{ap}} = \mathbb{C}$ is the trivial factor, and hence the centraliser of $P_i^{\Lambda_i}$ w.r.t.~$\phi_i^{\Lambda_i}$ is also trivial. More generally, if the group $\Lambda_i$ is a direct product of two icc groups in the class $\mathcal{C}$, we can apply Popa's cocycle superrigidity theorems and also get conjugation. We obtain the following result, analogous to Theorem B in \cite{vaes-verraedt;classification-type-III-bernoulli-crossed-products}. \begin{thmstar}\label{corstar.C} The set of factors \begin{align*} \bigl\{(P,\phi)^\Lambda \rtimes \Lambda \bigm| &\;\;\text{$P$ a nontrivial amenable factor with normal faithful state $\phi$}\\ &\;\;\text{such that $P_{\phi,\text{ap}}$ is a factor, and $\Lambda$ a direct product of two icc}\\ &\;\;\text{groups in the class $\mathcal{C}$}\;\bigr\} \end{align*} is exactly classified, up to isomorphism, by the group $\Lambda$ and the action $\Lambda \curvearrowright (P,\phi)^\Lambda$ up to a state preserving conjugacy of the action. \end{thmstar} For \emph{two-sided} Bernoulli actions $\Lambda \times \Lambda \curvearrowright P^\Lambda$, we get the following improvement of Theorem C in \cite{vaes-verraedt;classification-type-III-bernoulli-crossed-products}.
Note that even in the almost periodic case, we obtain a strengthening, as we can remove the condition that $\Lambda$ has no nontrivial \emph{central sequences}. A central sequence in a group $\Lambda$ is a sequence $g_n \in \Lambda$ such that for all $h \in \Lambda$, $g_n h = h g_n$ eventually. The absence of such sequences ensures that the crossed product $M = P^\Lambda \rtimes (\Lambda \times \Lambda)$ is a full factor, i.e.~that $\operatorname{I}nn(M) \subset \operatorname{Aut}(M)$ is closed for the topology given by $\alpha_n \to \alpha$ if and only if $\forall \psi \in M_\star : \|\psi \circ \alpha_n -\psi \circ \alpha\| \to 0$ (see \cite{connes;almost-periodic-states}). Since we do not require that $\Lambda$ has no nontrivial central sequences, the following result possibly yields a class of nonisomorphic type $\operatorname{I}II_1$ factors that are not full. However, there are probably no groups with central sequences in class $\mathcal{C}$. \begin{thmstar}\label{corstar.D} The set of factors \begin{align*} \bigl\{(P,\phi)^\Lambda \rtimes (\Lambda \times \Lambda) \bigm| &\;\;\text{$P$ a nontrivial amenable factor with normal faithful state $\phi$}\\ &\;\;\text{such that $P_{\phi,\text{ap}}$ is a factor, and $\Lambda$ an icc group in the class $\mathcal{C}$}\;\bigr\} \end{align*} is exactly classified, up to isomorphism, by the group $\Lambda$ and the pair $(P,\phi)$ up to a state preserving isomorphism. \end{thmstar} The proofs of the above nonisomorphism results all make use of Tomita-Takesaki's modular theory and the continuous core of type $\operatorname{I}II$ factors. We first use the main results of \cite{popa-vaes;unique-cartan-decomposition-factors-free,popa-vaes;unique-cartan-decomposition-factors-hyperbolic,ioana;cartan-subalgebras-amalgamated-free-product-factors} providing unique crossed product decomposition theorems for factors of the form $R \rtimes \Lambda$, where $\Lambda \curvearrowright R$ is an outer action on the hyperfinite $\operatorname{I}I_1$ factor $R$ of a group $\Lambda$ in class $\mathcal{C}$. We then obtain a cocycle conjugation isomorphism $\psi : P_0^\Lambda \rtimes \mathbb{R} \to P_1^\Lambda \rtimes \mathbb{R}$ between the induced actions $\Lambda \curvearrowright P_i^\Lambda \rtimes \mathbb{R}$ on the continuous cores. Using the spectral gap methods of \cite{popa;superrigidity-malleable-actions-spectral-gap}, we obtain mutual intertwining $\psi(L\mathbb{R}) \prec L\mathbb{R}$, $L\mathbb{R} \prec \psi(L\mathbb{R})$, see \cref{lem.pointwise-fixed-subalgebra-intertwines-in-R} below. Under the assumption that the almost periodic parts $(P_i)_{\phi_i,\text{ap}}$ are factors, we are able to deduce that the actions $\Lambda \curvearrowright P_i^\Lambda$ must then be cocycle conjugate, up to reductions, see \cref{lem.pointwise-fixed-subalgebra-intertwines-in-R-conclusion,lem.state-preserving-actions}. Recently, Houdayer and Isono \cite{houdayer-isono;unique-prime-factorization} introduced intertwining techniques in the general type $\operatorname{I}II$ setting. While it is tempting to use these interesting techniques to prove the nonisomorphism results in the current article, there are two obstructions to do so. Firstly, the main intertwining theorems for group actions on type $\operatorname{I}I_1$ factors \cite{popa-vaes;unique-cartan-decomposition-factors-free,popa-vaes;unique-cartan-decomposition-factors-hyperbolic,ioana;cartan-subalgebras-amalgamated-free-product-factors} do currently not have general analogues in the type $\operatorname{I}II$ setting. 
Secondly, it is unclear whether a direct type $\operatorname{I}II$ approach would allow to recover a state preserving conjugacy, as in the above theorems. After all, in our situation, we do have `favorite' states, in contrast to the unique prime factorization problem studied in \cite{houdayer-isono;unique-prime-factorization}. \section{Preliminaries} All mentioned von Neumann algebras are assumed to have separable predual. \subsection{Cocycle actions}\label{section.cocycle-actions} Let $M$ be a von Neumann algebra equipped with a normal semifinite faithful (n.s.f.) weight $\varphi$. We denote by $\operatorname{Aut}(M)$ the group of automorphisms of $M$, and by $\operatorname{Aut}(M, \varphi)$ the subgroup of weight scaling automorphisms of $M$, i.e.~automorphisms $\alpha$ for which there exists a $\lambda \in \mathbb{R}^+_0$ such that $\varphi \circ \alpha = \lambda \varphi$. Equipped with the topology where a net $\alpha_n \in \operatorname{Aut}(M)$ converges to $\alpha$ if and only if for every $\psi \in M_\star$, $\| \psi \circ \alpha_n - \psi \circ \alpha\|$ converges to zero, $\operatorname{Aut}(M)$ and $\operatorname{Aut}(M,\varphi)$ become Polish groups. Let $G$ now be a locally compact group. A \emph{cocycle action} of $G$ on $(M,\varphi)$ is a continuous map $ \alpha : G \to \operatorname{Aut}(M,\varphi) : g \mapsto \alpha_g$ and a continuous map $v : G \times G \to \mathcal{U}(M_\varphi)$ such that: \begin{align*} \alpha_e = \text{id}, \qquad \alpha_{g} \circ \alpha_{h} &= \operatorname{Ad} v_{g,h} \circ \alpha_{gh}, \qquad\forall g,h,k\in G, \\ v_{g,h}v_{gh,k} &= \alpha_g (v_{h,k}) v_{g,hk}. \end{align*} Here, $M_\varphi$ denotes the \emph{centraliser} of $\varphi$. Such an action is denoted by $G \curvearrowright^{\alpha,v} (M,\varphi)$. A strongly continuous map $v$ satisfying the above relations is called a 2-\emph{cocycle} for $\alpha$. If $v=1$, $\alpha$ is called an \emph{action}. Two cocycle actions $G \curvearrowright^{\alpha,v} (M_1,\varphi_1)$ and $G \curvearrowright^{\beta,w} (M_2,\varphi_2)$ are \emph{cocycle conjugate through a weight preserving isomorphism} if there exists a strongly continuous map $u : G \to \mathcal{U}( (M_2)_{\varphi_2})$ and an isomorphism $\psi : (M_1,\varphi_1) \to (M_2,\varphi_2)$ satisfying $\varphi_2 \circ \psi = \varphi_1$ and \begin{align*} \psi \circ \alpha_g &= \operatorname{Ad} u_g \circ \beta_g \circ \psi,\qquad\forall g,h \in G, \\ \psi(v_{g,h})&=u_g \beta_g(u_h) w_{g,h} u^\star_{gh}. \end{align*} If $\delta : G_1 \to G_2$ is a continuous isomorphism, we say that the cocycle actions $G_1 \curvearrowright^{\alpha,v} (M_1,\varphi_1)$ and $G_2 \curvearrowright^{\beta,w} (M_2,\varphi_2)$ are cocycle conjugate \emph{modulo $\delta$}, if the actions $\alpha$ and $\beta \circ \delta$ of $G_1$ are cocycle conjugate. An automorphism $\alpha$ of $M$ is called \emph{properly outer} if there exists no nonzero element $y \in M$ such that $y \alpha(x)=xy$ for all $x \in M$. A cocycle action $\Gamma \curvearrowright^{\alpha,v} (M,\varphi)$ of a discrete group is called \emph{properly outer} if $\alpha_g$ is properly outer for all $g \in \Gamma, g\not=e$. Assume now that $(M,\varphi)$ is a von Neumann algebra with an n.s.f.~weight for which $M_\varphi$ is a factor. Consider a properly outer cocycle action $\Gamma \curvearrowright^{\alpha,v}(M,\varphi)$ of a discrete group $\Gamma$, that preserves the weight $\varphi$. 
Suppose that $p \in M_\varphi$ is a nonzero projection with $\varphi(p) < \infty$, and choose partial isometries $w_g \in M_\varphi$ such that $p = w_gw_g^\star$, $\alpha_g(p) = w_g^\star w_g$, $w_e=p$. Let $\alpha^p : \Gamma \to \operatorname{Aut}(pM p, \varphi_p)$ be defined by $ \alpha_g^p(pxp) = w_g\alpha_g(pxp)w_g^\star$, for $x \in M$, where $\varphi_p(pxp) = \varphi(pxp)/\varphi(p)$. Denote $v_{g,h}^p = w_g \alpha_g(w_h) v_{g,h} w_{gh}^\star \in pM_\varphi p$, for $g,h\in \Gamma$. Then $\Gamma \curvearrowright^{\alpha^p,v^p}(pM p,\varphi_p)$ is a properly outer cocycle action, which does not depend on the choice of the partial isometries $w_g \in M_\varphi$, up to state preserving cocycle conjugacy. We call this action the \emph{reduced cocycle action of $\alpha$ by $p$}. We say that two properly outer cocycle actions $\Gamma \curvearrowright^{\alpha,v} (M_1,\varphi_1)$ and $\Gamma \curvearrowright^{\beta,w} (M_2,\varphi_2)$ of a discrete group $\Gamma$ are, \emph{up to reductions}, cocycle conjugate through a state preserving isomorphism, if there exist projections $p_i \in (M_i)_{\varphi_i}$ such that the reduced cocycle actions $\alpha^{p_0}$ and $\beta^{p_1}$ are cocycle conjugate through a state preserving isomorphism. \subsection{Popa's theory of intertwining-by-bimodules} Let $(M,\tau)$ be any tracial von Neumann algebra, and let $P \subset 1_P M 1_P$ and $Q \subset 1_Q M 1_Q$ be von Neumann subalgebras. Following \cite{popa;strong-rigidity-malleable-actions-I}, we write $P \prec_M Q$ if there exist a projection $p \in M_n(\mathbb{C}) \otimes Q$, a normal unital $\star$-homomorphism $\theta : P \to p(M_n(\mathbb{C}) \otimes Q)p$ and a non-zero partial isometry $v \in M_{1,n}(\mathbb{C}) \otimes 1_P M 1_Q$ satisfying $a v = v \theta(a)$ for all $a \in P$. We recall from \cite{vaes-verraedt;classification-type-III-bernoulli-crossed-products} the definition of the class of groups $\mathcal{C}$. Let $(M,\tau)$ be a tracial von Neumann algebra. For every von Neumann subalgebra $P\subset M$, we consider the \emph{normaliser} $\mathcal{N}_M(P) = \{u \in \mathcal{U}(M) \mid uPu^\star = P\}$. The \emph{Jones index} of a von Neumann subalgebra $P \subset M$ is defined as the $P$-dimension of $L^2(M)$ as a $P$-module, computed using the given trace $\tau$. An inclusion $P \subset M$ is said to be \emph{essentially of finite index} if there exist projections $p \in P' \cap M$ that lie arbitrarily close to $1$ such that $P p \subset pMp$ has finite Jones index. We call $A \subset M$ a \emph{virtual core subalgebra} if $A' \cap M = \mathcal{Z}(A)$ and if the inclusion $\mathcal{N}_M(A)^{\prime\prime} \subset M$ is essentially of finite index. \begin{definition}[\cite{vaes-verraedt;classification-type-III-bernoulli-crossed-products}]\label{definition:class-C} We say that a countably infinite group $\Gamma$ belongs to class $\mathcal{C}$ if for every trace preserving cocycle action $\Gamma \curvearrowright (B,\tau)$ and every amenable, virtual core subalgebra $A \subset p (B \rtimes \Gamma)p$, we have that $A \prec B$.
\end{definition} Recall that all groups in the class $\mathcal{C}$ are nonamenable, and that class $\mathcal{C}$ contains all weakly amenable groups $\Gamma$ with $\beta_1^{(2)}(\Gamma) > 0$ \cite{popa-vaes;unique-cartan-decomposition-factors-free}, all weakly amenable, nonamenable, bi-exact groups \cite{popa-vaes;unique-cartan-decomposition-factors-hyperbolic} and all free product groups $\Gamma = \Gamma_1 \star \Gamma_2$ with $|\Gamma_1| \geq 2$ and $|\Gamma_2| \geq 3$ \cite{ioana;cartan-subalgebras-amalgamated-free-product-factors}. Moreover, class $\mathcal{C}$ is closed under extensions and commensurability. We refer to section 4 of \cite{vaes-verraedt;classification-type-III-bernoulli-crossed-products} for more details. We will also need the following elementary lemma. \begin{lemma} Let $N$ be a finite von Neumann algebra. If $A,B \subset N$ are abelian subalgebras with $A = (A' \cap N)' \cap N$, $B = (B' \cap N)' \cap N$ and $A \prec_N B$, then there exists a nonzero partial isometry $w \in N$ with $ww^\star \in A' \cap N$, $w^\star w \in B' \cap N$ and $A w \subset w B$. \label{lemma:intertwining-abelian-subalg} \end{lemma} \begin{proof} Let $A,B \subset N$ as in the statement of the lemma. Denote by $P$ and $Q$ the relative commutants of $A$ and $B$, i.e.~$P = A' \cap N$ and $Q = B' \cap N$. Since $A \prec_N B$, we also have that $Q \prec_N P$, see \cite[Lemma 3.5]{vaes;explicit-computations-finite-index-bimodules}. Hence, we find projections $p \in P, q \in Q$, a partial isometry $v \in N$ with $vv^\star \leq q$, $v^\star v \leq p$ and a normal unital $\star$-homomorphism $\theta : qQq \to pPp$ satisfying \begin{align*} x v = v \theta(x) \quad\text{ for all}\quad x \in qQq. \end{align*} Note that $vv^\star \in (qQq)' \cap qNq = qB$. Write now $D = \theta(qQq)' \cap pNp$ and $f = v^\star v \in D$, then by spatiality, we have \begin{align*} f D f = (\theta(qQq)f)' \cap fNf = (v^\star Q v)' \cap fNf = v^\star (qB) v. \end{align*} Hence $f$ is an abelian projection of $D$. Note that $pA \subset D$ is an abelian subalgebra. Take now $C \subset D$ a maximal abelian subalgebra satisfying $pA \subset C$, and observe that necessarily $C \subset pPp$. Since $D$ is a finite von Neumann algebra, we find a partial isometry $v_1 \in D$ such that $v_1v_1^\star = f$ and $v_1^\star D v_1 \subset C$, see e.g.~\cite[Lemma C.2]{vaes;rigidity-results-bernoulli}. Put $w = vv_1$, then we still have \begin{align*} xw = w \theta(x) \quad\text{ for all}\quad x \in qQq, \end{align*} since $v_1$ commutes with $\theta(qQq)$, and $ww^\star = vfv^\star = vv^\star \in qB \subset qQq$. Moreover, now we obtain $w^\star w = v_1^\star f v_1 \in C \subset pPp$. We get that $w^\star (qQq) w \subset pPp$, hence $ww^\star Q ww^\star \subset wPw^\star$. Taking the relative commutants in $ww^\star N ww^\star$, we obtain that $w A w^\star \subset ww^\star B$, hence $w^\star$ is the desired partial isometry. \end{proof} \subsection{Takesaki's modular theory} Let $M$ be a von Neumann algebra equipped with a normal semifinite faithful (n.s.f.) weight $\varphi$. Denote by $\sigma^\varphi : \mathbb{R} \curvearrowright M$ the modular automorphism group associated to $\varphi$. The associated crossed product $M \rtimes \mathbb{R}$ of $M$ with the action $\sigma^\varphi$ is called the \emph{continuous core}. 
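To fix ideas, we note the following elementary special case (a remark added here only for orientation): if $\varphi$ is a normal faithful trace, then $\sigma^\varphi$ is the trivial action, so that
\begin{align*}
M \rtimes_{\sigma^\varphi} \mathbb{R} \;=\; M \mathbin{\overline{\otimes}} L\mathbb{R} \;\cong\; M \mathbin{\overline{\otimes}} L^\infty(\widehat{\mathbb{R}}),
\end{align*}
the last identification being given by the Fourier transform. In the nontracial situations studied below, $\sigma^\varphi$ is nontrivial and the continuous core is genuinely a crossed product.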
We denote by $\pi_\sigma : M \to M\rtimes \mathbb{R}$ the canonical embedding, and by $(\lambda(t))_{t \in \mathbb{R}}$ the canonical group of unitaries such that $\pi_\sigma(\sigma^\varphi_t(x)) = \lambda(t) \pi_\sigma(x)\lambda(t)^\star$. Consider $\mathbb{R}^+_0$ to be the dual of $\mathbb{R}$ under the pairing $\langle t, \mu\rangle=\mu^{\mathbf{i}t}$ for $t \in \mathbb{R}$, $\mu \in \mathbb{R}^+_0$, and let $\widehat{\sigma}^\varphi : \mathbb{R}^+_0 \curvearrowright M \rtimes \mathbb{R}$ be the dual action to $\sigma^\varphi$. We have by Takesaki duality \cite[Theorem X.2.3]{takesaki;theory-operator-algebras-II} that \begin{align*} (M \rtimes \mathbb{R}) \rtimes \mathbb{R}^+_0 \cong M \mathbin{\overline{\otimes}} B(L^2(\mathbb{R})) \end{align*} by the isomorphism $\Phi : (M \rtimes \mathbb{R}) \rtimes \mathbb{R}^+_0 \to M \mathbin{\overline{\otimes}} B(L^2(\mathbb{R}))$ given by \begin{align} \{\Phi\big(\pi_{\widehat\sigma} \circ \pi_\sigma(x)\big)\xi\}(t) &= (\sigma_t^\varphi)^{-1}(x)\xi(t),&&x \in M, \quad \xi \in L^2(\mathbb{R}, L^2(M)), \quad t \in \mathbb{R},\nonumber\\ \{\Phi\big(\pi_{\widehat\sigma} \circ \lambda(s)\big)\xi\}(t) &= \xi(t-s), &&s\in \mathbb{R},\label{eq:takesakiduality}\\ \{\Phi\big(\lambda(\mu)\big)\xi\}(t) &=\overline{\langle t, \mu\rangle}\xi(t), &&\mu \in \mathbb{R}^+_0. \nonumber \end{align} Here $(\lambda(\mu))_{\mu \in \mathbb{R}^+_0}$ is the canonical group of unitaries in the second crossed product. There exists an n.s.f.~trace $\operatorname{Tr}_\varphi$ on $M\rtimes \mathbb{R}$ such that \begin{align*} \operatorname{Tr}_\varphi \circ \widehat\sigma^\varphi_{\mu} = \mu^{-1} \operatorname{Tr}_\varphi \text{ for all }\mu \in \mathbb{R}^+_0, \end{align*} and such that the dual weight $\widetilde{\operatorname{Tr}}_\varphi$ (see \cite[Definition X.1.16]{takesaki;theory-operator-algebras-II}) on $(M\rtimes \mathbb{R})\rtimes \mathbb{R}^+_0$ corresponds under the duality $\Phi$ to the weight $\varphi \otimes \operatorname{Tr}(h \mathop{\cdot})$ on $M \mathbin{\overline{\otimes}} B(L^2(\mathbb{R}))$. Here $h$ is the hermitian operator given by $h^{\mathbf{i}t} = \lambda(t) \in B(L^2(\mathbb{R}))$, with $\lambda(t)$ the left regular representation defined by $\big(\lambda(t)f\big)(s) = f(s-t)$ for $f \in L^2(\mathbb{R})$. The following lemma is well known. For a proof, see e.g.~\cite[Proposition 2.4]{houdayer-ricard;free-araki-woods-factors}. \begin{lemma}\label{lem.commutant} Let $(M,\varphi)$ be a von Neumann algebra equipped with an n.s.f.~weight. Then $(L\mathbb{R})' \cap M \rtimes_{\sigma^\varphi} \mathbb{R} = M_\varphi \mathbin{\overline{\otimes}} L\mathbb{R}$. \end{lemma} \subsection{Structural properties of infinite tensor products} Let $(P,\phi)$ be a von Neumann algebra with a normal faithful state $\phi$. Whenever $I$ is a countable set, we write $P^I$ for the tensor product of $P$ indexed by $I$ with respect to $\phi$. The canonical product state on $P^I$ will be denoted by $\phi^I$. In this section, we study the structure of the infinite tensor product $(P^I, \phi^I)$. The first result we get is a type classification for these infinite tensor products. We show in particular that such tensor products never yield a factor of type $\operatorname{I}II_0$. This result is probably well known, but we could not find a reference in the literature. \begin{lemma}\label{lem.only-type-III1} Let $(P,\phi)$ be a nontrivial factor equipped with a normal faithful state, and let $I$ be a countable infinite set.
The factor $(P,\phi)^I$ is of type \begin{itemize}[align=left,leftmargin=1.5em,topsep=0mm]\parskip2pt \item[$\operatorname{I}I_1$] if $\phi$ is tracial, \item[$\operatorname{I}II_{\lambda}, \lambda \in (0,1)$] if $\phi$ is nontracial and periodic with period $\frac{2\pi}{|\log \lambda|}$, and \item[$\operatorname{I}II_1$] if $\phi$ is not periodic. \end{itemize} Moreover, denoting by $P^I \rtimes \mathbb{R}$ the crossed product with the modular action of $\phi^I$, we have that $\mathcal{Z}(P^I \rtimes \mathbb{R}) = L(G)$, where $G < \mathbb{R}$ is the subgroup given by $G = \{t \in \mathbb{R} \mid \sigma^\phi_t = \mathord{\operatorname{id}}\}$. \end{lemma} \begin{proof} Put $\varphi = \phi^I$. If $\phi$ is tracial, then clearly $\varphi$ is a trace on $(P,\phi)^I$, hence $(P,\phi)^I$ is a type $\operatorname{I}I_1$ factor and clearly $\mathcal{Z}(P^I \rtimes \mathbb{R}) = L(\mathbb{R})$. Assume now that $\phi$ is not tracial. Then $(P,\phi)^I$ cannot be semifinite and thus is a type $\operatorname{I}II$ factor, see e.g.~\cite[Theorem XIV.1.4]{takesaki;theory-operator-algebras-III}. Identify $I = \mathbb{N}$ and let $N = P^\mathbb{N} \rtimes \mathbb{R}$ denote the crossed product w.r.t.~the modular group $\sigma^{\phi^I}$. For every $n \in \mathbb{N}$, let $\alpha_n$ denote the $\star$-isomorphism $N \to (P \mathbin{\overline{\otimes}} P^\mathbb{N}) \rtimes \mathbb{R}$ defined by \begin{align*} \alpha_n\big( \pi_\sigma(\otimes_k x_k) \big) &= \pi_\sigma\big( x_n \otimes \big(x_0 \otimes x_1 \otimes \cdots \otimes x_{n-1} \otimes x_{n+1} \otimes \cdots \big)\big), &&\text{ for }x_k \in P, \\ \alpha_n( \lambda(t)) &= \lambda(t), &&\text{ for }t \in \mathbb{R}. \end{align*} Let $\iota : N \to (1 \otimes P^\mathbb{N}) \rtimes \mathbb{R} \subset (P \mathbin{\overline{\otimes}} P^\mathbb{N})\rtimes \mathbb{R}$ be the canonical embedding, and remark that for all $x \in N$, $\alpha_n(x) \to \iota(x)$ $\star$-strongly. Then for every element $x \in \mathcal{Z}(N)$, we have that $\iota(x) \in \mathcal{Z}((P \mathbin{\overline{\otimes}} P^\mathbb{N})\rtimes\mathbb{R})$, and in particular $[\iota(x), \pi_\sigma(a \otimes 1)] = 0$ for all $a \in P$. For any $\omega \in P_\star$, denote by $\text{ev}_\omega: L^2(\mathbb{R}, P \mathbin{\overline{\otimes}} P^\mathbb{N}) \to L^2(\mathbb{R}, P^\mathbb{N})$ the linear map given by applying $\omega \otimes \mathord{\operatorname{id}}$ to $P \mathbin{\overline{\otimes}} P^\mathbb{N}$ diagonally. Also denote by $\widehat\iota : L^2(\mathbb{R}, P^\mathbb{N}) \to L^2(\mathbb{R},P \mathbin{\overline{\otimes}} P^\mathbb{N})$ the map induced by $\iota$. The above now means that for $a \in P$, $\text{ev}_\omega \circ \pi_\sigma(a \otimes 1) \circ \widehat\iota$ commutes with $\mathcal{Z}(N)$, when acting on $L^2(N)$. But note that \begin{align*} \{\big(\text{ev}_\omega \circ \pi_\sigma(a \otimes 1) \circ \widehat\iota\big) \xi\}(s) = \omega(\sigma^\phi_{-s}(a))\xi(s) \quad\text{for }\xi \in L^2(\mathbb{R}, L^2(P^\mathbb{N})). \end{align*} This means that $\mathcal{Z}(N) \subset (1 \otimes D)' \cap B(L^2(N))$, where $D \subset L^\infty(\mathbb{R})$ is the von Neumann subalgebra generated by the functions $t \mapsto \omega(\sigma^\phi_{-t}(a))$ for $\omega \in P_\star, a \in P$. On the other hand, we have by \cref{lem.commutant} that $\mathcal{Z}(N) \subset (L\mathbb{R})' \cap N = (P^\mathbb{N})_\varphi \mathbin{\overline{\otimes}} L\mathbb{R}$.
Combining both facts, we obtain that $\mathcal{Z}(N) \subset (P^\mathbb{N})_\varphi \mathbin{\overline{\otimes}} L\mathbb{R} \cap (1 \otimes D)' = (P^\mathbb{N})_\varphi \mathbin{\overline{\otimes}} (L\mathbb{R} \cap D')$. We now make the following case distinction. \textbf{Case 1:} The state $\phi$ is not periodic. In this case, the functions $t \mapsto \omega(\sigma^\phi_{-t}(a))$ for $\omega \in P_\star,a \in P$ separate points of $\mathbb{R}$, and hence we have that $D = L^\infty(\mathbb{R})$. In particular, $L\mathbb{R} \cap D' = \mathbb{C}$, meaning that $\mathcal{Z}(N) \subset (P^\mathbb{N})_\varphi$. As $P^\mathbb{N}$ is a factor, we conclude that $N$ is also a factor, and hence $P^\mathbb{N}$ is of type $\operatorname{I}II_1$. \textbf{Case 2:} The state $\phi$ is periodic with period $T > 0$. Then $D$ consists of all bounded functions $f \in L^\infty(\mathbb{R})$ that are $T$-periodic, i.e.~$f(s)=f(s+T)$ for all $s \in \mathbb{R}$. In particular, $D$ contains the function $f : t \mapsto \exp( \frac{2\pi \mathbf{i}t}{T})$. Using the Fourier transform $L\mathbb{R} \cong L^\infty(\widehat{\mathbb{R}})$, it is now easy to see that $L\mathbb{R} \cap D' \subset L\mathbb{R} \cap\{M_f\}' = L(T\mathbb{Z})$, and hence $\mathcal{Z}(N) \subset (P^\mathbb{N})_\varphi \mathbin{\overline{\otimes}} L(T\mathbb{Z})$. Using the Fourier decomposition of elements in $P^\mathbb{N} \rtimes T\mathbb{Z}$, we can now conclude that $\mathcal{Z}(N) = L(T\mathbb{Z})$, and hence $P^\mathbb{N}$ is a factor of type $\operatorname{I}II_\lambda$ with $\lambda = e^{- \frac{2\pi}{T}}$. \end{proof} For $\mu \in \mathbb{R}^+_0$, we denote by $P_{\phi,\mu}$ the eigenvectors of $\Delta_\phi$ for $\mu$, i.e.~ \begin{align*} P_{\phi,\mu} = \{ x \in P \mid \Delta_\phi \widehat{x} = \mu \widehat{x} \} = \{ x \in P \mid \sigma^\phi_t(x) = \mu^{\mathbf{i}t} x \}. \end{align*} Note that $\overline{P_{\phi,\mu}}^{\|\mathop{\cdot}\|_\phi} = \{ \xi \in L^2(P,\phi) \mid \Delta_\phi \xi = \mu \xi\}$. Recall that a state $\phi$ is called \emph{almost periodic} if $\Delta_\phi$ is diagonalizable. In general, we define $P_{\phi,\text{ap}} \subset P$ as the subalgebra spanned by the eigenvectors of $\Delta_\phi$, i.e.~ \begin{align*} P_{\phi,\text{ap}} = \big(\operatorname{span} \bigcup_{\mu \in \mathbb{R}^+_0} P_{\phi,\mu} \big)''. \end{align*} The notation $P_{\phi,\text{ap}}$ stands for the \emph{almost periodic part} of $P$, and this name is justified since $P_{\phi,\text{ap}}$ is the maximal subalgebra $Q \subset P$ with \emph{$\phi$-preserving conditional expectation}, such that the restriction of $\phi$ to $Q$ is almost periodic. Here, a subalgebra $Q \subset P$ is said to be with $\phi$-preserving conditional expectation, if there exists a conditional expectation $E : P \to Q$ satisfying $\phi \circ E = \phi$. In \cref{lem.technical-condition} below, we will show that if $(P,\phi)$ is a nontrivial factor equipped with a normal faithful state and $I$ is an infinite set, then the centralizer $(P^I)_{\phi^I}$ of the infinite tensor product is a factor if and only if $P_{\phi,\text{ap}}$ is a factor. To show this, we need the following elementary lemma. \begin{lemma} Let $(M,\psi)$, $(N,\varphi)$ be factors equipped with normal faithful states.
For every $\mu \in \mathbb{R}^+_0$, it holds that \begin{align}\label{eq:almost-periodic-part-tensor-product} (M \mathbin{\overline{\otimes}} N)_{\psi \otimes \varphi,\mu} \subset \bigoplus_{t \in \mathbb{R}^+_0} \overline{M_{\psi,\mu t^{-1}} \otimes N_{\varphi,t}}^{\|\mathop{\cdot}\|_{\psi \otimes \varphi}}, \end{align} as subsets of $L^2(M \mathbin{\overline{\otimes}} N, \psi\otimes \varphi)$. In particular, $(M \mathbin{\overline{\otimes}} N)_{\psi \otimes \varphi, \text{ap}} = M_{\psi,\text{ap}} \mathbin{\overline{\otimes}} N_{\varphi,\text{ap}}$. \label{lem.almost-periodic-part-tensor-product} \end{lemma} \begin{proof} Let $(M,\psi)$ and $(N,\varphi)$ be factors with normal faithful states, and fix $\mu \in \mathbb{R}^+_0$. In this proof, we will write $L^2(M)$ and $L^2(N)$ for $L^2(M,\psi)$ and $L^2(N,\varphi)$ respectively. Assume that $x \in (M\mathbin{\overline{\otimes}} N)_{\psi \otimes \varphi, \mu}$, and consider $\widehat{x} \in L^2(M)\otimes L^2(N)$. Note that $(\Delta_\psi \otimes \Delta_\varphi)\widehat{x} = \mu \widehat{x}$, hence $\widehat{x} = \sum_{k \in \mathbb{N}} \xi_k \otimes \eta_k$, where $\xi_k \in L^2(M)$ and $\eta_k \in L^2(N)$ are eigenvectors for $\Delta_\psi$ and $\Delta_\varphi$ respectively, such that the product of the eigenvalues of $\xi_k$ and $\eta_k$ equals $\mu$. Now \eqref{eq:almost-periodic-part-tensor-product} follows from the observation that for every von Neumann algebra $(P,\phi)$ equipped with a normal faithful state, and for every $\mu \in \mathbb{R}^+_0$, $\overline{P_{\phi,\mu}}^{\|\mathop{\cdot}\|_\phi} = \{\xi \in L^2(P) \mid \Delta_\phi \xi = \mu \xi\}$. It follows immediately from \eqref{eq:almost-periodic-part-tensor-product} that $(M \mathbin{\overline{\otimes}} N)_{\psi \otimes \varphi, \mu} \subset L^2(M_{\psi,\text{ap}} \mathbin{\overline{\otimes}} N_{\varphi,\text{ap}}, \psi \otimes \varphi)$, and hence for every $x \in (M\mathbin{\overline{\otimes}} N)_{\psi \otimes \varphi,\mu}$ we get $E_{M_{\psi,\text{ap}} \mathbin{\overline{\otimes}} N_{\varphi,\text{ap}}}(x) = x$. This demonstrates that indeed $(M \mathbin{\overline{\otimes}} N)_{\psi \otimes \varphi, \text{ap}} = M_{\psi,\text{ap}} \mathbin{\overline{\otimes}} N_{\varphi,\text{ap}}$. \end{proof} \begin{lemma} Let $(P,\phi)$ be a nontrivial factor equipped with a normal faithful state, and let $I$ be a countable infinite set. Then it holds that $(P^I)_{\phi^I} = (P_{\phi,\text{ap}}^I)_{\phi^I}$. In particular, the following two statements are equivalent: \begin{enumerate}[(i)] \item $(P^I)_{\phi^I}$ is a factor. \item $P_{\phi,\text{ap}}$ is a factor. \end{enumerate} \label{lem.technical-condition} \end{lemma} \begin{proof} Let $P$, $I$ be as in the statement, and put $Q = P_{\phi,\text{ap}}$ and $\varphi = \phi^I$. The inclusion $(Q^I)_{\varphi} \subset (P^I)_{\varphi}$ being obvious, take $x \in (P^I)_{\varphi}$; we will show that $x \in Q^I$. Note that for all finite subsets $\mathcal{F} \subset I$, $E_{P^\mathcal{F}}(x) \in (P^\mathcal{F})_{\varphi}$, and by \cref{lem.almost-periodic-part-tensor-product}, we have that \begin{align*} E_{P^\mathcal{F}}(x) \in (P^\mathcal{F})_{\varphi} \subset (P^\mathcal{F})_{\varphi,\text{ap}} = Q^\mathcal{F} \subset Q^I. \end{align*} As $x$ is a limit point of $\{E_{P^\mathcal{F}}(x) \mid \mathcal{F} \subset I \text{ finite}\}$ in the strong topology, it follows that also $x \in Q^I$.
The implication (ii) $\Rightarrow$ (i) now follows directly from \cite[Lemma 2.4]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}. For the reverse implication, note that $\mathcal{Z}(P_{\phi,\text{ap}}) \subset P_\phi$. Thus, if $x \in \mathcal{Z}(P_{\phi,\text{ap}})$ is a nontrivial element, then $x \otimes 1 \otimes 1 \otimes \cdots$ belongs to $(P^I)_\varphi$, and commutes with all elements of $(P_{\phi,\text{ap}})^I$. In particular, $(P^I)_\varphi = (P_{\phi,\text{ap}}^I)_\varphi$ is not a factor. \end{proof} We recall the definition of the (generalized) Bernoulli action. Let $(P,\phi)$ be a von Neumann algebra with a normal faithful state $\phi$, and let $I$ be a countable set. Consider a countable group $\Lambda$ that acts on $I$, and let $\Lambda$ act on $P^I$ by the Bernoulli action \begin{align*} \rho(s)\big(\otimes_{k \in I} a_k\big) &= \otimes_{k \in I} a_{s^{-1} \cdot k},&\text{for }s \in \Lambda, a_k \in P. \end{align*} The von Neumann algebra $(P,\phi)$ is called the \emph{base algebra} for the Bernoulli action, and the crossed product $P^I \rtimes \Lambda$ is called the \emph{Bernoulli crossed product}. We now study when a Bernoulli action $\Lambda \curvearrowright P^I$ and its associated action on the continuous core $\Lambda \curvearrowright P^I \rtimes \mathbb{R}$ are properly outer. \begin{remark}\label{remark.shift-outer} Let $(P,\phi)$ be a nontrivial factor equipped with a normal faithful state, and let $I$ be a countable infinite set. Denote by $P^I$ the infinite tensor product w.r.t.~$\phi$, and by $\pi_k : P \to P^I$ the embedding at position $k$. Let $\alpha : I \to I$ be any nontrivial permutation, and let $\widehat\alpha : P^I \to P^I$ denote the induced automorphism given by \begin{align*} \widehat\alpha(\pi_{k}(x)) &= \pi_{\alpha(k)}(x), \quad \text{for } x \in P, k \in I. \end{align*} Then $\widehat\alpha$ is (properly) outer, unless $P$ is of type $\operatorname{I}$ and $\{k \in I \mid \alpha(k)\not=k\}$ is finite. Indeed, it is well known that for any nontrivial permutation $\sigma$ on $n$ elements, the induced flip automorphism on $P \mathbin{\overline{\otimes}} \cdots \mathbin{\overline{\otimes}} P$ given by $x_1 \otimes \cdots \otimes x_n \mapsto x_{\sigma(1)} \otimes \cdots \otimes x_{\sigma(n)}$ is inner if and only if $P$ is a type $\operatorname{I}$ factor, see e.g.~\cite[Theorem 5]{sakai;automorphisms-tensor-products}. If $\alpha$ moves infinitely many points of $I$, the result is the content of \cite[Lemma 2.5]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}. \end{remark} If $(P,\phi)$ is a nontrivial factor equipped with a normal faithful state that is not almost periodic, and $\Lambda \curvearrowright I$ is any faithful action, the induced action $\Lambda \curvearrowright P^I \rtimes \mathbb{R}$ on the continuous core of $P^I$ is always (properly) outer, as stated in the next result. Combined with \cref{lem.only-type-III1}, we get in particular that $P^I \rtimes \Lambda$ is always a type $\operatorname{I}II_1$ factor, even if $\Lambda$ is not icc, as its continuous core $(P^I \rtimes \Lambda) \rtimes \mathbb{R} = (P^I \rtimes \mathbb{R}) \rtimes \Lambda$ is a factor. \begin{lemma} \label{lem.outer-core} Let $(P,\phi)$ be a nontrivial factor equipped with a normal faithful state that is not almost periodic, and let $I$ be a countable set. Let $\alpha : I \to I$ be any nontrivial permutation. Denote by $P^I \rtimes \mathbb{R}$ the crossed product with the modular action of $\phi^I$.
Then the induced automorphism $\widehat\alpha \in \operatorname{Aut}(P^I \rtimes \mathbb{R})$ on the continuous core, given by \begin{align*} \widehat\alpha(\pi_{k}(x)) &= \pi_{\alpha(k)}(x), \quad \text{for } x \in P, k \in I,\\ \widehat\alpha(\lambda(t)) &= \lambda(t), \qquad\:\:\: \text{for }t \in \mathbb{R}, \end{align*} is (properly) outer. Here, $\pi_k : P \to P^I \rtimes \mathbb{R}$ denotes the embedding at position $k$. \end{lemma} \begin{proof} Denote $\varphi = \phi^I$, $Q = P_{\phi,\text{ap}}$, $N = P^I \rtimes \mathbb{R}$, and denote by $\widetilde\varphi$ the dual weight on $N$ w.r.t.~$\varphi$. For every finite nonempty subset $\mathcal{F} \subset I$, let $K_\mathcal{F} \subset L^2(N, \widetilde\varphi) = L^2(\mathbb{R}, L^2(P^I, \varphi))$ be the subspace defined as \begin{align*} K_\mathcal{F} &= L^2\big(\mathbb{R}, L^2( Q^I (P \ominus Q)^\mathcal{F} Q^I, \varphi)\big). \end{align*} By \cref{lem.almost-periodic-part-tensor-product}, we get the orthogonal decomposition \begin{align*} L^2(N \ominus (P^I)_{\varphi,\text{ap}} \rtimes \mathbb{R}, \widetilde\varphi) = L^2\big(\mathbb{R}, L^2(P^I \ominus (P^I)_{\varphi,\text{ap}}, \varphi)\big) = \bigoplus_{\mathcal{F} \in J} K_{\mathcal{F}}, \end{align*} where $J$ denotes the set of all finite nonempty subsets of $I$. Note that since $\phi$ is not almost periodic, $K_{\mathcal{F}}$ is nonzero for every $\mathcal{F} \in J$. Now suppose that $\alpha : I \to I$ is a nontrivial permutation, and denote by $\widehat\alpha \in \operatorname{Aut}(N)$ the induced automorphism. Assume that $v \in N$ is a unitary such that $vx = \widehat\alpha(x)v$ for all $x \in N$. In particular, $v$ commutes with $L\mathbb{R}$ and hence $v \in (P^I)_\varphi \mathbin{\overline{\otimes}} L\mathbb{R} \subset Q^I \rtimes \mathbb{R}$ by \cref{lem.commutant}. Therefore, the map on $L^2(N, \widetilde\varphi)$ induced by the automorphism $\operatorname{Ad} v$ on $N$ leaves all subspaces $K_\mathcal{F}$ invariant, whereas $\widehat\alpha$ sends $K_\mathcal{F}$ to $K_{\alpha(\mathcal{F})}$ for all finite nonempty subsets $\mathcal{F} \subset I$. This is absurd, as $\alpha$ is a nontrivial permutation. We conclude that $\widehat\alpha$ is outer. \end{proof} If however $(P,\phi)$ is a nontrivial factor equipped with an almost periodic state, and $\Lambda \curvearrowright I$ is any faithful action, then the induced action $\Lambda \curvearrowright P^I \rtimes \mathbb{R}$ on the continuous core of $P^I$ is not always properly outer. For example, if $P$ is a type $\operatorname{I}$ factor and $g \in \Lambda$ moves only a finite number of points of $I$, i.e.~the set $\{ k \in I \mid g \cdot k \not= k\}$ is finite, then the automorphism of $P^I \rtimes \mathbb{R}$ induced by $g$ is implemented by a unitary. However, if we require that every nontrivial element moves infinitely many points of $I$, the induced action $\Lambda \curvearrowright P^I \rtimes \mathbb{R}$ is always properly outer, due to the following result. \begin{lemma} Let $(P,\phi)$ be a nontrivial factor equipped with a normal faithful almost periodic state, and let $I$ be a countable infinite set. Let $\alpha : I \to I$ be any nontrivial permutation. Denote by $P^I \rtimes \mathbb{R}$ the crossed product with the modular action of $\phi^I$.
Then the induced automorphism $\widehat\alpha \in \operatorname{Aut}(P^I \rtimes \mathbb{R})$ on the continuous core, given by \begin{align*} \widehat\alpha(\pi_{k}(x)) &= \pi_{\alpha(k)}(x), \quad \text{for } x \in P, k \in I,\\ \widehat\alpha(\lambda(t)) &= \lambda(t), \qquad\:\:\: \text{for }t \in \mathbb{R}, \end{align*} is properly outer, unless $P$ is a type $\operatorname{I}$ factor and the set $\{k \in I \mid \alpha(k)\not=k\}$ is finite. Here, $\pi_k : P \to P^I \rtimes \mathbb{R}$ denotes the embedding at position $k$. \label{lem.outer-core-ap} \end{lemma} \begin{proof} Denote $\varphi = \phi^I$, $N = P^I \rtimes \mathbb{R}$, and assume that $v \in N$ is a nonzero element such that $vx = \widehat\alpha(x)v$ for all $x \in N$. Since $L\mathbb{R}$ is fixed under $\widehat\alpha$, $v$ commutes with $L\mathbb{R}$ and hence $v \in (P^I)_\varphi \mathbin{\overline{\otimes}} L\mathbb{R}$. Putting $Q = (P^I)_\varphi$, we get in particular that the restricted automorphism $\widehat\alpha|_{Q \mathbin{\overline{\otimes}} L\mathbb{R}}$ also satisfies $vx=\widehat\alpha|_{Q \mathbin{\overline{\otimes}} L\mathbb{R}}(x)v$ for all $x \in Q \mathbin{\overline{\otimes}} L\mathbb{R}$. Taking an appropriate $\omega \in (L\mathbb{R})_\star$, we get a nonzero element $w = (1\otimes \omega)(v) \in Q$ satisfying $wx=\widehat\alpha(x)w$ for all $x \in Q$. Note that $Q$ is a factor by \cite[Lemma 2.4]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}, hence $w$ is a multiple of a unitary, and by normalising, we may assume that $w$ is a unitary. Denote by $\Gamma$ the countable subgroup of $\mathbb{R}^+_0$ generated by the point spectrum of $\Delta_\phi$, endowed with the discrete topology. Put $G = \widehat\Gamma$ and let $\sigma : G \curvearrowright P^I$ denote the extension of $\sigma^\varphi$ to an action of the compact group $G$, as in section 2.2 of \cite{vaes-verraedt;classification-type-III-bernoulli-crossed-products}. By \cite[Lemma 2.1]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}, it follows that $\widehat\alpha = \operatorname{Ad} w \circ \sigma_s$ on $P^I$ for some $s \in G$, and by \cite[Lemma 2.5]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}, it follows that the set $\{k \in I \mid \alpha(k)\not=k\}$ is then necessarily finite. Moreover, fixing $k \in I$ such that $\alpha(k)\not= k$, we get that $w \pi_k(P) = \pi_{\alpha(k)}(P) w$ as subalgebras of $P^I$. This is only possible if $P$ is a type $\operatorname{I}$ factor. For the converse, note that if $P$ is a type $\operatorname{I}$ factor and $\alpha : I \to I$ is a permutation such that the set $\{k \in I \mid \alpha(k) \not=k \}$ is finite, then there exists a unitary $u \in P^I$ such that $u\pi_k(x)u^\star = \pi_{\alpha(k)}(x)$ for all $k \in I, x \in P$. Since $\varphi \circ \operatorname{Ad} u= \varphi$, we have $u \in (P^I)_{\varphi}$, and in particular it follows that $u\lambda(t)u^\star = \lambda(t)$ for all $t \in \mathbb{R}$. We have shown that $\widehat\alpha = \operatorname{Ad} u$ on $P^I \rtimes \mathbb{R}$, thus $\widehat\alpha$ is not properly outer. \end{proof} We saw earlier that if $(P,\phi)$ is a nontrivial factor equipped with a normal faithful state that is not almost periodic, and $\Lambda$ is a countable infinite group, then $P^\Lambda \rtimes \Lambda$ is a type $\operatorname{I}II_1$ factor. The next remark shows that if $\Lambda$ is nonamenable, $P^\Lambda \rtimes \Lambda$ does not admit any almost periodic weight.
Recall that a von Neumann algebra $M$ is full if $\operatorname{Inn}(M) \subset \operatorname{Aut}(M)$ is closed for the topology given by $\alpha_n \to \alpha$ if and only if $\forall \psi \in M_\star : \|\psi \circ \alpha_n -\psi \circ \alpha\| \to 0$, and that if $M$ is full, Connes' $\tau$-invariant of $M$ is given by the weakest topology on $\mathbb{R}$ making $\mathbb{R} \to \operatorname{Out}(M) : t \mapsto \sigma^{\varphi}_t$ continuous, where $\varphi$ is any n.s.f.~weight on $M$. \begin{remark}\label{rem.no-almost-periodic} Let $(P,\phi)$ be a nontrivial factor equipped with a normal faithful state that is not almost periodic, and $\Lambda$ a countable infinite nonamenable group. By \cite[Lemma 2.7]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}, $P^\Lambda \rtimes \Lambda$ is a full factor, and Connes' $\tau$-invariant is the weakest topology on $\mathbb{R}$ making $\mathbb{R} \to \operatorname{Aut}(P) : t \mapsto \sigma^\phi_t$ continuous. Since $\phi$ is not almost periodic, the completion of this topology cannot be compact. It follows that $P^\Lambda \rtimes \Lambda$ has no almost periodic weights, see the proof of \cite[Corollary 5.3]{connes;almost-periodic-states}. \end{remark} \section{A technical lemma} Let $(P,\phi)$ be a factor equipped with a normal faithful state, and let $\Lambda$ be a countable group acting on an infinite set $I$, such that the action $\Lambda \curvearrowright I$ has no invariant mean. Put $\varphi = \phi^I$ and $N = P^I \rtimes_{\sigma_\varphi} \mathbb{R}$. Denote the action $\Lambda \curvearrowright N$, induced by the generalized Bernoulli action of $\Lambda$ on $P^I$, by $\alpha$. The main result of this section is the following technical lemma, allowing us to locate fixed point subalgebras of $N$ w.r.t.~actions which are outer conjugate to the Bernoulli action $\alpha$. As in the proof of \cite[Lemma 3.3]{brothier-vaes;prescribed-fundamental-group}, locating such subalgebras allows us to deduce cocycle conjugacies between two Bernoulli actions. The proof of \cref{lem.pointwise-fixed-subalgebra-intertwines-in-R} follows the lines of the proof of \cite[Theorem 4.1]{popa;superrigidity-malleable-actions-spectral-gap}. \begin{lemma}\label{lem.pointwise-fixed-subalgebra-intertwines-in-R} Let $p \in L\mathbb{R} \subset N$ be a projection with $0 < \operatorname{Tr}(p) < \infty$. Assume that $(V_g)_{g \in \Lambda}$ are unitaries in $\mathcal{U}(pNp)$, and that $\Omega : \Lambda \times \Lambda \to \mathbb{T}$ is a map such that $V_g \alpha_g(V_h) = \Omega(g,h) V_{gh}$ for all $g,h \in \Lambda$. Assume that $Q \subset pNp$ is a subalgebra such that for all $x \in Q$ and for all $g \in \Lambda$, $V_g \alpha_g(x) V_g^\star = x$, and assume that $q \in Q' \cap pNp$ is a projection such that for all $g \in \Lambda$, $V_g\alpha_g(q)V_g^\star \sim q$ inside $Q' \cap pNp$. Then it follows that $qQ \prec_{pNp} pL\mathbb{R}$. Put $M = N \cap (L\mathbb{R})' = (P^I)_\varphi \mathbin{\overline{\otimes}} L\mathbb{R}$. If moreover it holds that $q \in Q' \cap pMp$ and $qL\mathbb{R} \subset qQ \subset qMq$, then $qQ \prec_{pMp} pL\mathbb{R}$. \end{lemma} See \cref{lem.commutant} for the equality $N \cap (L\mathbb{R})' = (P^I)_\varphi \mathbin{\overline{\otimes}} L\mathbb{R}$. We use the spectral gap methods of \cite{popa;superrigidity-malleable-actions-spectral-gap}, applied to the following variant of Popa's malleable deformation \cite{popa;rigidity-non-commutative-bernoulli}, due to Ioana \cite{ioana;rigidity-results-wreath-product}.
Let $\widetilde{P} = P \star L\mathbb{Z}$ the free product with respect to the state $\phi$ and the natural trace on $L\mathbb{Z}$, and denote the induced state also by $\phi$. Denote by $u_n, n \in \mathbb{Z}$ the canonical unitaries of $L\mathbb{Z}$. Let $f : \mathbb{T} \to (-\pi,\pi]$ be the unique function determined by $t = \exp(i f(t)), t \in \mathbb{T}$, and let $h =f(u_1)$ be the hermitian operator such that $u_1 = \exp(i h)$. Define $u_t = \exp(i th)$ for every $t \in \mathbb{R}$. Equip $\widetilde{P}^I$ with the one-parameter group of state preserving automorphisms $\theta_t$ given by the infinite tensor product of $\operatorname{Ad} u_t$, for $t \in \mathbb{R}$. Define the period 2 automorphism $\gamma$ of $\widetilde{P}^I$ as the infinite tensor product of the automorphism of $\widetilde{P}$ satisfying $x \mapsto x$ for $x \in P$ and $u_1 \mapsto u_{-1}$. Put $\mathbb{N}til = \widetilde{P}^I \rtimes_{\sigma_\varphi} \mathbb{R}$, and denote also the action $\Lambda \curvearrowright \mathbb{N}til$, induced by the generalized Bernoulli action of $\Lambda$ on $P^I$, by $\alpha$. The automorphisms $\theta_t$ and $\gamma$ naturally extend to $\mathbb{N}til$, by acting as the identity on $L\mathbb{R}$. The condition that $\Lambda \curvearrowright I$ has no invariant mean, implies the following result, which is very similar to \cite[Lemma 3.4]{brothier-vaes;prescribed-fundamental-group}. \begin{lemma}\label{lem.rho-not-amenable} Assume that $(V_g)_{g \in \Lambda}$ are unitaries in $\mathcal{U}(N)$, and $\Omega : \Lambda \times \Lambda \to \mathbb{T}$ is a map such that $V_g \alpha_g(V_h) = \Omega(g,h) V_{gh}$ for all $g,h \in \Lambda$. The unitary representation \begin{align*} \rho : \Lambda \to \mathcal{U}(L^2(\mathbb{N}til \ominus N, \mathbb{T}r)) : \rho_g(\xi) = V_g \alpha_g(\xi) V_g^\star \end{align*} does not weakly contain the trivial representation. \end{lemma} \begin{proof} Denote by $\widetilde\varphi$ be the dual weight on $\mathbb{N}til$ and $N$ w.r.t.~the weight $\varphi$ on $\widetilde{P}^I$ and $P^I$ respectively. Define for every finite subset $\mathcal{F} \subset I$, the subspace $K_\mathcal{F} \subset L^2(\widetilde N,\widetilde\varphi)$ as the $\|\mathop{\cdot}\|_2$-closure of the linear span of $P^I(\widetilde{P} \ominus P)^\mathcal{F} P^I \mathfrak{K}(\mathbb{R})$, where $\mathfrak{K}(\mathbb{R})$ denotes the set of compactly supported continuous functions on $\mathbb{R}$, and $\mathfrak{K}(\mathbb{R})\subset L\mathbb{R}$ as convolution operators. Note that if $x \in P^I(\widetilde{P} \ominus P)^\mathcal{F} P^I \mathfrak{K}(\mathbb{R}), y \in P^I(\widetilde{P} \ominus P)^\mathcal{K} P^I \mathfrak{K}(\mathbb{R})$ for $\mathcal{F}\not=\mathcal{K}$, then $E_{L\mathbb{R}}(y^\star x)=0$. In particular, we have the orthogonal decomposition \begin{equation*} L^2(\mathbb{N}til \ominus N, \mathbb{T}r) = \bigoplus_{\mathcal{F} \in J} K_{\mathcal{F}}. \end{equation*} Remark that $\rho_g(K_\mathcal{F}) = K_{g\mathcal{F}}$ for all $g \in \Lambda$. Denote by $p_\mathcal{F}$ the orthogonal projection onto $K_\mathcal{F}$. Assume that $\rho$ does weakly contain the trivial representation, then we find an $\operatorname{Ad} \rho(\Lambda)$-invariant mean $\mu$ on $B(H)$. Define the map $\mathbb{T}heta : \ell^\infty(J) \to B(H)$ given by $\mathbb{T}heta(F) = \sum_{\mathcal{F} \in J} F(\mathcal{F})p_\mathcal{F}$. Since $\mathbb{T}heta(g \cdot F) = \rho_g \mathbb{T}heta(F) \rho_g^\star$, the composition $\mu \circ \mathbb{T}heta$ yields a $\Lambda$-invariant mean on $J$. 
But then $\Lambda \curvearrowright I$ admits an invariant mean, by \cite[Lemma 2.6]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}. This is absurd, hence $\rho$ does not weakly contain the trivial representation. \end{proof} \begin{proof}[Proof of \mathbb{C}ref{lem.pointwise-fixed-subalgebra-intertwines-in-R}] Let $Q \subset pNp$ a subalgebra such that for all $x \in Q$ and for all $g \in \Lambda$, $V_g \alpha_g(x)V_g^\star = x$, and let $q \in Q' \cap pNp$ be a projection with $V_g \alpha_g(q) V_g^\star \sim_{(Q' \cap pNp)} q$ for all $g \in \Lambda$. Define as above the malleable deformation $(\theta_t)_{t \in \mathbb{R}}$ of $\mathbb{N}til$, and remark that $\theta_t(p) = p$ for all $t \in \mathbb{R}$. Let $\rho$ be the unitary representation of $\Lambda$ on $L^2(p(\mathbb{N}til \ominus N)p, \mathbb{T}r)$ given by $\rho_g(\xi) = V_g \alpha_g(\xi) V_g^\star$. By \cref{lem.rho-not-amenable}, we find a constant $\kappa > 0$ and a finite subset $\mathcal{F} \subset \Lambda$ such that \begin{equation}\label{eq.use-nonamenable} \|\xi\|_2 \leq \kappa \sum_{g \in \mathcal{F}} \|\rho_g(\xi)-\xi\|_2 \quad\text{for all }\xi \in L^2(p(\mathbb{N}til \ominus N)p, \mathbb{T}r). \end{equation} Put $\epsilon = \|q\|_2/6$ and $\delta = \epsilon/(2 \kappa |\mathcal{F}|)$. Take an integer $n_0$ large enough such that $t = 2^{-n_0}$ satisfies \begin{align*} \| (V_g - \theta_t(V_g))p\|_2 &\leq \delta \quad\text{for all }g \in \mathcal{F},\\ \| q - \theta_t(q)\|_2 &\leq \epsilon. \end{align*} For every $x \in Q$ we have $V_g \alpha_g(x) V_g^\star = x$ for all $g \in \Lambda$, and therefore \begin{equation}\label{eq.uniform-bound} \|V_g \alpha_g(\theta_t(x)) V_g^\star - \theta_t(x)\|_2 \leq 2\delta \quad\text{for all }g \in \mathcal{F}\text{ and all }x \in Q\text{ with }\|x\|\leq 1. \end{equation} Denote by $E : \mathbb{N}til \to N$ the unique trace preserving conditional expectation. Whenever $x \in Q$ with $\|x\| \leq 1$, we put $\xi = \theta_t(x) - E(\theta_t(x))$ and conclude from \eqref{eq.use-nonamenable} and \eqref{eq.uniform-bound} that \begin{align*} \|\xi\|_2 \leq \kappa \sum_{g \in \mathcal{F}} \| \rho_g(\xi)-\xi\|_2 \leq 2\kappa |\mathcal{F}| \delta = \epsilon. \end{align*} A direct computation shows that $(\theta_t)$ satisfies the following transversality property in the sense of \cite[Lemma 2.1]{popa;superrigidity-malleable-actions-spectral-gap}: \begin{align*} \| x- \theta_t(x)\|_2 \leq \sqrt{2} \|\theta_t(x) - E(\theta_t(x))\|_2 \quad\text{for all } x \in pNp. \end{align*} We conclude that $\|x - \theta_t(x)\|_2 \leq 2\epsilon$ for all $x \in Q$ with $\|x\|\leq 1$, hence $\|y-\theta_t(y)\|_2 \leq 3\epsilon$ for all $y \in qQ$. It follows that for all $y \in \mathcal{U}(qQ)$, \begin{align*} | \mathbb{T}r( y \theta_t(y^\star)) - \mathbb{T}r(yy^\star) | \leq \|y\|_2 \| y - \theta_t(y)\|_2 \leq \|q\|_2 3\epsilon = \mathbb{T}r(q)/2, \end{align*} and hence $\mathbb{T}r(y \theta_t(y^\star)) \geq \mathbb{T}r(q)/2$ for all $y \in \mathcal{U}(qQ)$. Let $W \in q\mathbb{N}til \theta_t(q)$ be the unique element of minimal $\|\mathop{\cdot}\|_2$ in the weakly closed convex hull of $\{y \theta_t(y^\star) \mid y \in \mathcal{U}(qQ)\}$. Then $\mathbb{T}r(W) \geq \mathbb{T}r(q)/2$ and $xW = W \theta_t(x)$ for all $x \in Q$. In particular, $WW^\star$ commutes with $Q$. Put now for $g \in \Lambda$, $W_g = V_g \alpha_g(W) \theta_t(V_g^\star)$. Since $Q$ is pointwise fixed under $\operatorname{Ad} V_g \circ \alpha_g$, we then also have $x W_g = W_g \theta_t(x)$ for all $x \in Q$. 
The join of the left support projections of all $W_g$, $g \in \Lambda$, is a projection $q_0 \in p \mathbb{N}til p \cap Q'$ that satisfies $q_0 = V_g \alpha_g(q_0) V_g^\star$ for all $g \in \Lambda$. By \cref{lem.rho-not-amenable}, $q_0 \in N$, and in particular, $\gamma(q_0) = q_0$. We claim that we can find some $g \in \Lambda$ such that $\theta_t(\gamma(W^\star) W_g) \not= 0$. To prove the claim, assume that for all $g \in \Lambda$, $\gamma(W^\star)W_g = 0$. Since the join of the left support projections of all $W_g$, $g \in \Lambda$ equals $q_0$, we get $\gamma(W^\star)q_0 = \gamma(W^\star q_0) = 0$, contradiction. Fix now $g \in \Lambda$ such that $\gamma(W^\star)W_g \not= 0$. Take $u \in \mathcal{U}(Q' \cap pNp)$ such that $uq=V_g \alpha_g(q)V_g^\star u$, and put $W' \mathrel{\mathop:}= \theta_t(\gamma(W^\star) W_g) \theta_{2t}(u)$. Then $W' \in q\mathbb{N}til \theta_{2t}(q)$ is a nonzero element satisfying $x W' = W' \theta_{2t}(x)$ for all $x \in Q$. Repeating the same argument $n_0$ times, we obtain a nonzero element $W \in q \mathbb{N}til \theta_1(q)$ such that $WW^\star \in Q' \cap q \mathbb{N}til q$ and \begin{equation}\label{eq:W} x W = W \theta_1(x) \quad\text{for all }x \in qQ. \end{equation} Observe that if $q \in (L\mathbb{R})'$ and $qL\mathbb{R} \subset qQ$, then $W$ commutes with $L\mathbb{R}$. For every $\mathcal{F} \subset I$, define $N(\mathcal{F}) = P^\mathcal{F} \rtimes_{\sigma_\varphi} \mathbb{R}$. We claim that there exists a finite subset $\mathcal{F} \subset I$ such that $qQ \prec_{pNp} pN(\mathcal{F})p$. Suppose the claim fails, and let $x_n \in \mathcal{U}(qQ)$ be a sequence of unitaries satisfying \begin{align*} \| E_{pN(\mathcal{F})p}(ax_nb^\star)\|_2 \to 0 \quad\text{for all }a,b\in pNq \text{ and all finite subsets }\mathcal{F} \subset I. \end{align*} We claim that \begin{equation}\label{eq.theta1ofQ-weakly-embeds} \|E_{pNp}(a \theta_1(x_n) b^\star)\|_2 \to 0 \quad \text{for all }a,b \in p \mathbb{N}til p. \end{equation} Since the linear span of all $p N \widetilde{P}^\mathcal{F} p$, $\mathcal{F} \subset I$ finite, is $\|\mathop{\cdot}\|_2$-dense in $p\mathbb{N}til p$, it suffices to prove \eqref{eq.theta1ofQ-weakly-embeds} for all $a,b \in pN\widetilde{P}^\mathcal{F} p$ and all finite subsets $\mathcal{F} \subset I$. But for all $a,b \in pN\widetilde{P}^\mathcal{F} p$, we have \begin{align}\label{eq:hulp} E_{pNp}(a \theta_1(x_n) b^\star) = E_{pNp}\big(a \theta_1(E_{pN(\mathcal{F})p}(x_n)) b^\star\big), \end{align} thus the conclusion follows from the choice of the sequence $(x_n)$. So \eqref{eq.theta1ofQ-weakly-embeds} is proven, and since $x_n \in \mathcal{U}(qQ)$ are unitaries, it follows in particular that \begin{align*} \|E_{pNp}(WW^\star)\|_2 = \|E_{pNp}(WW^\star)q\|_2 = \|E_{pNp}(WW^\star) x_n\|_2 = \| E_{pNp}(W \theta_1(x_n) W^\star)\|_2 \to 0. \end{align*} As $W$ is nonzero, we reached a contradiction, and we have proven the existence of a finite subset $\mathcal{F} \subset I$ such that $qQ \prec_{pNp} pN(\mathcal{F})p$. Since $qQ \prec_{pNp} pN(\mathcal{F})p$, we find $n \in \mathbb{N}$, a projection $p' \in pN(\mathcal{F})p \otimesimes M_n(\mathbb{C})$, a unital normal homomorphism $\psi_1 : qQ \to p' \big(pN(\mathcal{F})p \otimesimes M_n(\mathbb{C})\big) p'$ and a partial isometry $v \in qNp \otimesimes M_{1,n}(\mathbb{C})$ such that $v^\star v \leq p'$ and $x v = v\psi_1(x)$ for all $x \in qQ$. For every $g \in \Lambda$, choose $u_g \in \mathcal{U}(Q' \cap pNp)$ such that $u_g V_g \alpha_g(q)V_g^\star = q u_g$. 
Put $\mathcal{G} \subset \mathcal{U}(Q'\cap pNp)$ the subgroup of all unitaries $u \in Q' \cap pNp$ satisfying $uq = qu$. Let $\mathcal{K} \subset \Lambda$ and $\mathcal{V} \subset \mathcal{G}$ be finite sets such that \begin{align*} \mathbb{T}r\left(\bigvee_{g \in \mathcal{K}, u \in \mathcal{V}} u u_g V_g \alpha_g(vv^\star)V_g^\star u_g^\star u^\star\right) > \frac12 \mathbb{T}r\left(\bigvee_{g \in \Lambda, u \in \mathcal{G}} uu_g V_g \alpha_g(vv^\star)V_g^\star u_g^\star u^\star\right). \end{align*} Remark that for all $g \in \mathcal{K}$, all $u \in \mathcal{V}$, and all $x \in qQ$, we have that \begin{align*} x u u_g V_g \alpha_g(v) = u u_g V_g \alpha_g(v)\, \alpha_g(\psi_1(x)). \end{align*} Replacing $\mathcal{F}$ by $\mathcal{K}\cdot\mathcal{F}$, $\psi_1$ by the direct sum of the maps $\alpha_g\circ \psi_1$ for $g \in \mathcal{K}$, $u \in \mathcal{V}$ and $v$ by the polar part of the direct sum of the $uu_g V_g \alpha_g(v)$'s, we obtain a unital normal $\star$-homomorphism $\psi_1 : qQ \to p' \big(pN(\mathcal{F})p \otimesimes M_n(\mathbb{C})\big) p'$ and a partial isometry $v \in qNp \otimesimes M_{1,n}(\mathbb{C})$ such that $v^\star v \leq p'$, \begin{align}\label{eq.action-does-not-make-v-zero} u_g V_g \alpha_g(vv^\star) V_g^\star u_g^\star \not\perp vv^\star \text{ for all }g \in G, \quad\text{and}\quad x v = v\psi_1(x) \text{ for all }x \in qQ. \end{align} Moreover, we may assume that the support projection of $E_{p N(\mathcal{F}) p \otimesimes M_n(\mathbb{C})}(v^\star v)$ equals $p'$. Since the action $\Lambda \curvearrowright I$ has no invariant mean, a fortiori all orbits are infinite and thus there exists some $g \in \Lambda$ such that $g \mathcal{F} \cap \mathcal{F} = \emptyset$ (see e.g. \cite[Lemma 2.4]{popa-vaes;strong-rigidity-generalized-bernoulli-actions} ). Putting $\psi_2 : qQ \to \alpha_g(p')\big(p N(g\mathcal{F})p \otimesimes M_n(\mathbb{C})\big)\alpha_g(p')$ given by $\psi_2 = \alpha_g \circ \psi_1$, we get that \begin{equation}\label{eq.intertwines} \psi_1(x)\: v^\star u_gV_g \alpha_g(v) = v^\star u_gV_g \alpha_g(v)\: \psi_2(x) \quad\text{for all }x\in qQ. \end{equation} By the first part of \eqref{eq.action-does-not-make-v-zero}, $v^\star u_gV_g \alpha_g(v)\not=0$. We claim that $\psi_1(qQ) \prec_{pNp \otimesimes M_n(\mathbb{C})} pN(\emptyset)p \otimesimes M_n(\mathbb{C}) = pL\mathbb{R} \otimesimes M_n(\mathbb{C})$. Suppose that this is not the case, then we find a sequence $x_n \in qQ$ such that $\psi_1(x_n) \in \mathcal{U}(\psi_1(qQ))$ and \begin{align*} \|E_{L\mathbb{R} \otimesimes M_n(\mathbb{C})}(a \psi_1(x_n) b)\|_2 \to 0 \quad\text{for all }a,b \in pNp \otimesimes M_n(\mathbb{C}). \end{align*} Note that for all $a,b \in pN(\mathcal{F})p \otimesimes M_n(\mathbb{C})$ and all $a',b' \in pN(g\mathcal{F})p \otimesimes M_n(\mathbb{C})$, \begin{align*} E_{pN(g \mathcal{F})p \otimesimes M_n(\mathbb{C})}(a' a \psi_1(x_n) b b') = a' E_{L\mathbb{R} \otimesimes M_n(\mathbb{C})}\big(a \psi_1(x_n) b\big) b', \end{align*} and by density and \eqref{eq.intertwines}, it follows that $\|E_{p N(g\mathcal{F})p \otimesimes M_n(\mathbb{C})}(y \psi_1(x_n)z)\|_2\to 0$ for all $y,z \in pNp \otimesimes M_n(\mathbb{C})$. 
Putting $w = v^\star u_gV_g \alpha_g(v)$, we have that \begin{align*} 0 \leftarrow \|E_{pN(g \mathcal{F})p \otimesimes M_n(\mathbb{C})}( w^\star \psi_1(x_n) w) \|_2 &= \|E_{pN(g \mathcal{F})p \otimesimes M_n(\mathbb{C})}( \psi_2(x_n)\, w^\star w) \|_2\\ &= \|\psi_2(x_n)\,E_{pN(g \mathcal{F})p \otimesimes M_n(\mathbb{C})}( w^\star w) \|_2\\ &\geq \|w\psi_2(x_n)\,E_{pN(g \mathcal{F})p \otimesimes M_n(\mathbb{C})}( w^\star w) \|_2\displaybreak[0]\\ &= \|\psi_1(x_n)w\,E_{pN(g \mathcal{F})p \otimesimes M_n(\mathbb{C})}( w^\star w) \|_2\\ &= \|w\,E_{pN(g \mathcal{F})p \otimesimes M_n(\mathbb{C})}( w^\star w) \|_2\\ &\geq \|w^\star w \,E_{pN(g \mathcal{F})p \otimesimes M_n(\mathbb{C})}(w^\star w )\|_2. \end{align*} But this is absurd, as $w^\star w$ is nonzero. We conclude that $\psi_1(qQ) \prec pL\mathbb{R} \otimesimes M_n(\mathbb{C})$ inside $pNp \otimesimes M_n(\mathbb{C})$. Since the support of $E_{p N(\mathcal{F}) p \otimesimes M_n(\mathbb{C})}(v^\star v)$ equals $p'$, this intertwining can be combined with the intertwining given by $v$, and we obtain that $qQ \prec_{pNp} pL\mathbb{R}$. Assume now that $q \in Q' \cap (L\mathbb{R})' \cap N$ and $qL\mathbb{R} \subset qQ \subset qMq$, where $M = N \cap (L\mathbb{R})' = (P^I)_\varphi \mathbin{\overline{\otimesimes}} L\mathbb{R}$. By \eqref{eq:W}, we obtain a nonzero element $W \in q \mathbb{N}til \theta_1(q) \cap (L\mathbb{R})'$ such that $x W = W \theta_1(x)$ for all $x \in qQ$. For every $\mathcal{F} \subset I$, define $M(\mathcal{F}) = M \cap N(\mathcal{F}) = (P^\mathcal{F})_\varphi \mathbin{\overline{\otimesimes}} L\mathbb{R}$. We claim that there exists a finite subset $\mathcal{F} \subset I$ such that $qQ \prec_{pMp} pM(\mathcal{F})p$. If the claim fails, then there exists a sequence of unitaries $x_n \in \mathcal{U}(qQ)$ such that \begin{align*} \| E_{pM(\mathcal{F})p}(ax_nb^\star)\|_2 \to 0 \quad\text{for all }a,b\in pMq \text{ and all finite subsets }\mathcal{F} \subset I. \end{align*} By \eqref{eq:hulp} and using that $E_{pM(\mathcal{F})p}(x_n) = E_{pN(\mathcal{F})p}(x_n)$, it follows that $\|E_{pNp}(a \theta_1(x_n) b^\star)\|_2 \to 0$ for all $a,b \in p \mathbb{N}til p \cap (pL\mathbb{R})'$, and thus $\|E_{pNp}(WW^\star)\|_2 \to 0$, contradiction. Hence the claim is proven. Replacing $N$ by $M$ and $N(\mathcal{F})$ by $M(\mathcal{F})$ in the last part of the proof of the general case, we then obtain that $qQ \prec_{pMp} pL\mathbb{R}$. We need $qL\mathbb{R} \subset qQ$ to establish that $uu_gV_g \alpha_g(q) \in (L\mathbb{R})'$ for all $u \in \mathcal{G}, g \in \Lambda$. \end{proof} Using \cref{lem.pointwise-fixed-subalgebra-intertwines-in-R}, we obtain the following result. Note that this result follows immediately from \cite[Theorem A.1]{popa;class-factors-betti-number-invariants} if both centralizers $(P_i^{I_i})_{\phi_i^{I_i}}$ are trivial, since then $L\mathbb{R} \subset P_i^{I_i} \rtimes \mathbb{R}$ is a maximal abelian subalgebra. \begin{lemma} \label{lem.pointwise-fixed-subalgebra-intertwines-in-R-conclusion} Let $(P_0,\phi_0)$ and $(P_1,\phi_1)$ be nontrivial factors equipped with normal faithful nonperiodic states. Let $\Lambda$ be a countable group acting on infinite sets $I_0$, $I_1$, such that the actions $\Lambda \curvearrowright I_i$ do not admit invariant means. Denote by $\varphi_i = \phi_i^{I_i}$ the canonical state on $P_i^{I_i}$. Assume that the centralizers $(P_i^{I_i})_{\varphi_i}$ are factors. 
Assume that $\psi : P_0^{I_0} \rtimes_{\sigma^{\varphi_0}} \mathbb{R} \to P_1^{I_1} \rtimes_{\sigma^{\varphi_1}}\mathbb{R}$ is an isomorphism such that the induced actions $\Lambda \curvearrowright P_i^{I_i} \rtimes_{\sigma^{\varphi_i}} \mathbb{R}$ are cocycle conjugate through $\psi$. Then there exists a nonzero partial isometry $w \in P_1^{I_1} \rtimes \mathbb{R}$ such that $w w^\star \in \psi(L\mathbb{R})'$, $w^\star w \in (L\mathbb{R})'$ and $\psi(L\mathbb{R})w = wL\mathbb{R}$. \end{lemma} \begin{proof} Let $\psi$ be an isomorphism as in the statement. Put $N_i = P_i^{I_i}\rtimes \mathbb{R}$ and denote the action of $\Lambda$ on $N_i$ by $\alpha^i$. Identify $\mathbb{R}dual$ as the dual of $\mathbb{R}$ using the pairing $\inp{t}{\mu} = \mu^{\mathbf{i}{t}}$ for $t\in\mathbb{R},\mu\in\mathbb{R}dual$, and denote by $\widehat\sigma^{\varphi_i} : \mathbb{R}dual \curvearrowright N_i$ the dual action w.r.t.~$\sigma^{\varphi_i}$. Note that $\widehat\sigma^{\varphi_0}_\mu(L\mathbb{R}) = L\mathbb{R}$ for all $\mu \in \mathbb{R}dual$ and that $\widehat\sigma^{\varphi_0}$ commutes with the action of $\Lambda$ on $N_0$, hence we may replace $\psi$ by $\psi \circ \widehat\sigma^{\varphi_0}_\mu$ for an appropriate $\mu \in\mathbb{R}dual$ and assume that $\psi$ is a trace preserving isomorphism implementing the cocycle conjugation. This will ease the notation.

{\bf Step 1.} First, we prove that for every nonzero finite trace projection $q \in L\mathbb{R}$, there exists a nonzero partial isometry $w \in \psi(q) N_1 q$ such that $ww^\star \in \psi(qL\mathbb{R})'$, $w^\star w \in (qL\mathbb{R})'$ and $\psi(L\mathbb{R})w \subset wL\mathbb{R}$. Note that $N_1$ is a factor by \cref{lem.only-type-III1}, since $\phi_1$ is nonperiodic. Let $q \in L\mathbb{R}$ be any nonzero finite trace projection, and take $u \in \mathcal{U}(N_1)$ such that $u \psi(q) u^\star = q$. Then $\operatorname{Ad} u \circ \psi$ also defines a cocycle conjugation between the actions $\Lambda \curvearrowright qN_iq$, hence we find a 1-cocycle $V_g, g \in \Lambda$ for $\alpha^1$ such that \begin{align*} (\operatorname{Ad} u \circ \psi) \circ \alpha^0_g = \operatorname{Ad} V_g \circ \alpha^1_g \circ (\operatorname{Ad} u \circ \psi), \quad\text{for all }g\in \Lambda. \end{align*} Denote $\psi_u = \operatorname{Ad} u \circ \psi$. By construction, $\psi_u(q L\mathbb{R})$ is an abelian subalgebra of $q N_1 q$, such that for all $x \in \psi_u(q L\mathbb{R})$ and for all $g \in \Lambda$, we have that $V_g \alpha_g^1(x)V_g^\star = x$. By \cref{lem.pointwise-fixed-subalgebra-intertwines-in-R}, $\psi_u(q L\mathbb{R}) \prec_{qN_1q} qL\mathbb{R}$. Note that the relative commutant of $q L\mathbb{R}$ inside $q N_i q$ is given by $(qL\mathbb{R})' \cap qN_iq = q((L\mathbb{R})' \cap N_i)q = (P_i^{I_i})_{\varphi_i} \mathbin{\overline{\otimesimes}} q L\mathbb{R}$, see \cref{lem.commutant}. Put $Q_i = (P_i^{I_i})_{\varphi_i} \mathbin{\overline{\otimesimes}} q L\mathbb{R}$, and observe that $Q_i' \cap qN_iq= q L\mathbb{R}$ by the assumption that $(P_i^{I_i})_{\varphi_i}$ is a factor. Since $\psi_u(q L\mathbb{R}) \prec_{qN_1q} qL\mathbb{R}$, it follows from \cref{lemma:intertwining-abelian-subalg} that there exists a partial isometry $w_0 \in qN_1q$ such that $w_0w_0^\star \in \psi_u(qL\mathbb{R})'$, $w_0^\star w_0 \in (qL\mathbb{R})'$ and $\psi_u(qL\mathbb{R}) w_0 \subset w_0 (qL\mathbb{R})$. Then $w = u^\star w_0$ is the desired partial isometry.
{\bf Step 2.} Fix now a nonzero finite trace projection $q \in L\mathbb{R}$, and let $w \in \psi(q)N_1q$ be a nonzero partial isometry such that $ww^\star \in \psi(qL\mathbb{R})'$, $w^\star w \in (qL\mathbb{R})'$ and $\psi(L\mathbb{R})w \subset wL\mathbb{R}$. Put $p_0 = \psi^{-1}(ww^\star)$, $p_1 = w^\star w$, and put $M_i = p_i(P_i^{I_i} \rtimes \mathbb{R}) p_i$. Denote by $\theta : M_0 \to M_1$ the isomorphism given by $\theta = \operatorname{Ad} w^\star \circ \psi$, and note that $\theta(p_0 L\mathbb{R}) \subset p_1 L\mathbb{R}$, hence, taking the relative commutants, \begin{align}\label{eq:inclusions} p_0 L\mathbb{R} \subset \theta^{-1}(p_1L\mathbb{R}) \subset p_0( (P_0^{I_0})_{\varphi_0} \mathbin{\overline{\otimesimes}} qL\mathbb{R})p_0. \end{align} Let $u \in \mathcal{U}(\psi(q)N_1q)$ be a unitary such that $u p_1 = w$, and put $Q = (\operatorname{Ad} u^\star \circ \psi)^{-1}(qL\mathbb{R})$. Since $\operatorname{Ad} u^\star \circ \psi$ is a cocycle conjugation between the actions $\Lambda \curvearrowright qN_iq$, we find a 1-cocycle $V_g, g \in \Lambda$ for $\alpha^0$ such that \begin{align*} \alpha_g^1 \circ (\operatorname{Ad} u^\star \circ \psi) = (\operatorname{Ad} u^\star \circ \psi) \circ \operatorname{Ad} V_g \circ \alpha_g^0. \end{align*} In particular, for all $x \in Q$ we have $x = V_g \alpha^0_g(x)V_g^\star$. Furthermore, \eqref{eq:inclusions} means that $p_0 L\mathbb{R} \subset p_0 Q \subset p_0 (qN_0q \cap (L\mathbb{R})') p_0$. Also note that for every $g \in \Lambda$, the projections $p_1$ and $\alpha_g^1(p_1)$ both belong to $(P_1^{I_1})_{\varphi_1} \mathbin{\overline{\otimesimes}} qL\mathbb{R}$, and the central traces of these projections inside $(P_1^{I_1})_{\varphi_1} \mathbin{\overline{\otimesimes}} qL\mathbb{R}$ coincide. Indeed, $(P_1^{I_1})_{\varphi_1}$ is a factor and the central trace is therefore given by the conditional expectation $E : (P_1^{I_1})_{\varphi_1} \mathbin{\overline{\otimesimes}} qL\mathbb{R} \to qL\mathbb{R}$, which satisfies $E = E \circ \alpha_g^1$. In particular, the projections $p_1$ and $\alpha_g^1(p_1)$ are equivalent, and so are $p_0$ and $V_g\alpha^0_g(p_0)V_g^\star$ inside $Q'\cap qN_0q$. By \cref{lem.pointwise-fixed-subalgebra-intertwines-in-R}, we deduce that $\theta^{-1}(p_1L\mathbb{R}) \prec_{(P_0^{I_0})_{\varphi_0} \mathbin{\overline{\otimesimes}} qL\mathbb{R}} q L\mathbb{R}$. Since $\theta^{-1}(p_1L\mathbb{R})$ and $qL\mathbb{R}$ are abelian, we then find a partial isometry $v \in (P_0^{I_0})_{\varphi_0} \mathbin{\overline{\otimesimes}} qL\mathbb{R}$ with $vv^\star \leq p_0$, such that $\theta^{-1}(p_1L\mathbb{R})v \subset vL\mathbb{R}$. But note that $v$ commutes with $L\mathbb{R}$, hence \begin{align*} v^\star v L\mathbb{R} = v^\star (p_0L\mathbb{R})v \subset v^\star \theta^{-1}(p_1L\mathbb{R}) v \subset v^\star v L\mathbb{R}. \end{align*} Putting $q_0 = \theta(vv^\star) \leq p_1$, it follows that $q_0 \in \theta(p_0L\mathbb{R})'$, $q_0 \in (p_1L\mathbb{R})'$ and $q_0 \theta(p_0L\mathbb{R}) = q_0 (p_1 L\mathbb{R})$. Then $w_0 = wq_0 \in \psi(q)N_1 q$ is a nonzero partial isometry such that $w_0 w_0^\star = wq_0w^\star = \psi(vv^\star) \in \psi(qL\mathbb{R})'$, $w_0^\star w_0 = q_0 \in (qL\mathbb{R})'$, and moreover \begin{align*} w_0^\star w_0 L\mathbb{R} =q_0 (p_1 L\mathbb{R}) q_0 = q_0 \theta(p_0 L\mathbb{R}) q_0 = w_0^\star \psi(L\mathbb{R})w_0. \end{align*} We conclude that $\psi(L\mathbb{R})w_0 = w_0 L\mathbb{R}$, which completes the proof.
\end{proof} \section{A non-isomorphism result for generalized Bernoulli crossed products} Recall that for any factor $(P,\phi)$ equipped with a normal faithful state, $P_{\phi,\text{ap}} \subset P$ denotes the subalgebra spanned by the eigenvectors of $\operatorname{D}elta_\phi$, i.e.~ \begin{align*} P_{\phi,\text{ap}} = \big(\operatorname{span} \bigcup_{\mu \in \mathbb{R}dual} P_{\phi,\mu} \big)''. \end{align*} The following theorem provides a non-isomorphism result for all generalized Bernoulli crossed products, with amenable factors $(P,\phi)$ as base algebras, for which the almost periodic part $P_{\phi,\text{ap}}$ is again a factor. In particular, it will provide a proof of \cref{thmstar.A}. \begin{theorem}\label{thm.non-isomorphism-all-generalized-bernoulli} Let $(P_0,\phi_0)$ and $(P_1,\phi_1)$ be nontrivial amenable factors equipped with normal faithful states, such that $(P_0)_{\phi_0,\text{ap}}$ and $(P_1)_{\phi_1,\text{ap}}$ are factors. Let $\Lambda_0$ and $\Lambda_1$ be icc groups in the class $\mathcal{C}$ that act on infinite sets $I_0$ and $I_1$ respectively. Assume for $i=0,1$ that the action $\Lambda_i \curvearrowright I_i$ has no invariant mean, and that for every $g \in \Lambda_i - \{e\}$, the set $\{k \in I_i \mid g \cdot k \not= k \}$ is infinite. The algebras $P_0^{I_0} \rtimes \Lambda_0$ and $P_1^{I_1} \rtimes \Lambda_1$ are isomorphic if and only if one of the following statements holds. \begin{enumerate}[\upshape (a),topsep=0mm] \item The states $\phi_0$ and $\phi_1$ are both tracial, and the actions $\Lambda_i \curvearrowright (P_i, \phi_i)^{\Lambda_i}$ are cocycle conjugate, modulo a group isomorphism $\Lambda_0 \cong \Lambda_1$. \item The states $\phi_0$ and $\phi_1$ are both nontracial, and the actions $\Lambda_i \curvearrowright (P_i, \phi_i)^{\Lambda_i}$ are, up to reductions and modulo a group isomorphism $\Lambda_0 \cong \Lambda_1$, cocycle conjugate through a state preserving isomorphism. \end{enumerate} \end{theorem} \begin{proof} If (a) or (b) holds, then it is clear that the crossed products $P_i^{I_i} \rtimes \Lambda_i$ are isomorphic. For the reverse implication, let $\psi : P_0^{I_0} \rtimes \Lambda_0 \to P_1^{I_1} \rtimes \Lambda_1$ be a $\star$-isomorphism. Denote by $\varphi_i = \phi_i^{I_i}$ be the product state on $P_i^{I_i}$. By \cref{lem.only-type-III1}, we may distinguish the following three distinct cases. {\bf Case 1.}~{\em One of the states $\phi_i$ is tracial.}\\ If one of the states $\phi_i$ is tracial, $P_{0}^{I_{0}}$, $P_{0}^{I_{0}} \rtimes \Lambda_0$, $P_{1}^{I_{1}}$ and $P_{1}^{I_{1}} \rtimes \Lambda_1$ are all necessarily of type $\operatorname{I}I_1$. Since the groups $\Lambda_i$ are icc and belong to the class $\mathcal{C}$, it follows from \cite[Lemma 8.4]{ioana-peterson-popa;amalgamated-free-product} (see also \cite[lemma 4.1]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}) that $\psi(P_0^{I_0})$ and $P_1^{I_1}$ are unitarily conjugate inside $P_1^{I_1} \rtimes \Lambda_1$. Since the actions $\Lambda_i \curvearrowright P_i^{I_i}$ are outer, this means that $\Lambda_0 \cong \Lambda_1$ and that the actions $\Lambda_i \curvearrowright P_i^{I_i}$ are cocycle conjugate. {\bf Case 2.}~{\em One of the states $\phi_i$ is periodic.}\\ Note that under the extra assumption that the $P_i^{I_i} \rtimes \Lambda_i$ are full factors, the result directly follows from the proof of \cite[Theorem 6.1]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}. We now provide a proof of the general situation. 
First note that since $\Lambda_i$ is icc and $\mathcal{Z}(P_i^{I_i} \rtimes \mathbb{R}) \subset L\mathbb{R}$ by \cref{lem.only-type-III1}, we get that $\mathcal{Z}((P_i^{I_i} \rtimes \mathbb{R}) \rtimes \Lambda_i) = \mathcal{Z}(P_i^{I_i}\rtimes \mathbb{R})$, and hence $P_i^{I_i} \rtimes \Lambda_i$ and $P_i^{I_i}$ are of the same type. In particular, it follows from the type classification (see \cref{lem.only-type-III1}) that since one of the states $\phi_i$ is periodic, also the other is, and that the periods of both states $\phi_i$ are equal. Let $T >0 $ denote this period. Put $G = \mathbb{R}/T\mathbb{Z}$, let $\sigma^i : G \curvearrowright P_i^{I_i}$ denote the actions induced by $\sigma^{\varphi_i}$, and put $N_i = P_i^{I_i} \rtimes G$. By Connes' Radon-Nykodym cocycle theorem for modular automorphism groups, $\psi$ is a cocycle conjugacy between the modular actions $\sigma^i$ of $G$ and therefore extends to a $\star$-isomorphism $Psi : M_0 \to M_1$ between the crossed products $M_i = (P_i^{I_i} \rtimes \Lambda_i) \rtimes G = N_i \rtimes \Lambda_i$. Note that $P_i^{I_i}$ has a factorial discrete decomposition \cite[Lemma 2.4]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}, hence $N_i$ is the hyperfinite $\operatorname{I}I_\infty$ factor. Also note that since $\Lambda_i$ has trivial center, the action $\Lambda_i \curvearrowright N_i$ is outer \cite[Lemmas 2.2 and 2.5]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}. Proceeding exactly as in the proof of \cite[Theorem 5.1]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}, we obtain that the action $\Lambda_0 \curvearrowright (P_0^{I_0}, \varphi_0)$ is cocycle conjugate to the reduced cocycle action $(\Lambda_1 \curvearrowright (P_1^{I_1}, \varphi_1))^p$ for some projection $p \in (P_1^{I_1})_{\varphi_1}$. {\bf Case 3.}~{\em Both states $\phi_i$ are nonperiodic.}\\ Let $N_i = P_i^{I_i} \rtimes \mathbb{R}$ denote the crossed product of $P_i^{I_i}$ with the modular action of $\varphi_i$. Since $\phi_i$ is not periodic, $P_i^{I_i}$ is a type $\operatorname{I}II_1$ factor by \cref{lem.only-type-III1}, and hence $N_i$ is a factor. It follows from either \cref{lem.outer-core} or from \cref{lem.outer-core-ap} that the induced action $\Lambda_i \curvearrowright N_i$ is outer. By Connes' Radon-Nykodym cocycle theorem for modular automorphism groups, $\psi$ is a cocycle conjugacy between the modular automorphism groups $(\sigma^{\varphi_0}_t)_{t \in \mathbb{R}}$ and $(\sigma^{\varphi_1}_t)_{t \in \mathbb{R}}$ and therefore extends to a $\star$-isomorphism $Psi : M_0 \to M_1$ between the crossed products $M_i = (P_i^{I_i} \rtimes \Lambda_i) \rtimes \mathbb{R} = N_i \rtimes \Lambda_i$. Since the actions $\Lambda_i \curvearrowright P_i^{I_i}$ are state preserving, the action of $\Lambda_i$ on $N_i$ equals the identity on $L\mathbb{R} \subset N_i$. We claim that $Psi(N_0)$ and $N_1$ are unitarily conjugate inside $M_1$. Take a projection $p_0 \in L\mathbb{R}$ of finite trace. Then $Psi(p_0)$ is a projection of finite trace in the $\operatorname{I}I_\infty$ factor $N_1 \rtimes \Lambda_1$. After a unitary conjugacy of $Psi$, we may assume that $Psi(p_0) = p_1 \in L\mathbb{R}$. Since the projections $p_i$ are $\Lambda_i$-invariant, we have \begin{align*} p_i M_i p_i = p_i N_i p_i \rtimes \Lambda_i \; . \end{align*} The restriction of $Psi$ to $p_0 M_0 p_0$ thus yields a $\star$-isomorphism of $p_0 N_0 p_0 \rtimes \Lambda_0$ onto $p_1 N_1 p_1 \rtimes \Lambda_1$. 
Because the groups $\Lambda_i$ are icc and belong to the class $\mathcal{C}$, it follows from \cite[Lemma 8.4]{ioana-peterson-popa;amalgamated-free-product} that $Psi(p_0 N_0 p_0)$ is unitarily conjugate to $p_1N_1p_1$. Since the $N_i$ are $\operatorname{I}I_\infty$ factors, the claim follows. By the claim in the previous paragraph, we can choose a unitary $u \in M_1$ such that $u Psi(N_0)u^\star = N_1$. In particular, $\Lambda_0 \cong \Lambda_1$ and $\operatorname{Ad} u \circ Psi$ is a cocycle conjugacy between $\Lambda_0 \curvearrowright N_0$ and $\Lambda_1 \curvearrowright N_1$. Identify $\mathbb{R}dual$ as the dual of $\mathbb{R}$ using the pairing $\inp{t}{\mu} = \mu^{\mathbf{i}t}$ for $t \in \mathbb{R},\mu \in \mathbb{R}dual$, and denote by $\widehat{\sigma}^{\varphi_i} : \mathbb{R}dual \curvearrowright N_i \rtimes \Lambda_i$ the dual action w.r.t.~$\sigma^{\varphi_i}$. We will now extend the obtained cocycle conjugacy to a cocycle conjugacy between the actions $\Lambda_i \times \mathbb{R}dual \curvearrowright N_i$. By construction, $Psi \circ \widehat{\sigma}^{\varphi_0}_\mu = \widehat{\sigma}^{\varphi_1}_\mu \circ Psi$ for all $\mu \in \mathbb{R}dual$. Therefore, $Psi$ further extends to a $\star$-isomorphism $\widetildePsi : M_0 \rtimes \mathbb{R}dual \to M_1 \rtimes \mathbb{R}dual$ satisfying $\widetildePsi(\lambda(\mu)) = \lambda(\mu)$ for all $\mu \in \mathbb{R}dual$. Note that we can view $M_i \rtimes \mathbb{R}dual$ as $N_i \rtimes (\Lambda_i \times \mathbb{R}dual)$. Putting $\mathbb{T}heta = \operatorname{Ad} u \circ \widetildePsi$, we get an isomorphism $N_0 \rtimes (\Lambda_0 \times \mathbb{R}dual)\to N_1\rtimes (\Lambda_1 \times \mathbb{R}dual)$ satisfying \begin{align*} \mathbb{T}heta(N_0) &= N_1, & \mathbb{T}heta(N_0 \rtimes \Lambda_0) &= N_1 \rtimes \Lambda_1, & \mathbb{T}heta(\lambda(\mu)) = u\widehat{\sigma}^{\varphi_1}_\mu(u^\star) \lambda(\mu) \quad\text{for }\mu \in \mathbb{R}dual. \end{align*} Using that the actions $\Lambda_i \curvearrowright N_i$ are outer, that elements in $N_i \rtimes \Lambda_i$ have a unique Fourier decomposition, and that $u\widehat{\sigma}^{\varphi_1}_\mu(u^\star) \in \mathcal{N}_{N_1\rtimes \Lambda_1}(N_1)$, this implies that the restriction of $\mathbb{T}heta$ to $N_0$ is a cocycle conjugacy between the actions $\Lambda_i \times \mathbb{R}dual \curvearrowright N_i$, modulo a continuous group homomorphism $\delta : \Lambda_0 \times \mathbb{R}dual \to \Lambda_1 \times \mathbb{R}dual$ satisfying $\delta(\Lambda_0) = \Lambda_1$ and $\delta(e,\mu) \in \Lambda_1 \times \{\mu\}$. Since $\Lambda_i$ has trivial center, this means that $\delta(g,\mu) = (\delta_0(g),\mu)$ for all $g \in \Lambda_0, \mu \in \mathbb{R}dual$, and a group isomorphism $\delta_0 : \Lambda_0 \to \Lambda_1$. Denoting by $\alpha_i : \Lambda_i \curvearrowright I_i$ the given actions, and by $\widehat\alpha_i : \Lambda_i \to \operatorname{Aut}(P_i^{I_i}\rtimes \mathbb{R})$ the induced actions on the continuous cores, we now have obtained that the actions $\widehat\alpha_0 : \Lambda_0 \curvearrowright N_0$ and $\widehat\alpha_1 \circ \delta_0 : \Lambda_0 \curvearrowright N_1$ are cocycle conjugate. Note that for $i=0,1$, the centralizer of $P_i^{I_i}$ w.r.t.~$\phi_i$ is a factor, by \cref{lem.technical-condition}. 
By \cref{lem.pointwise-fixed-subalgebra-intertwines-in-R-conclusion} with $\Lambda = \Lambda_0$ acting on $I_0$ by $\alpha_0$, and on $I_1$ by $\alpha_1 \circ \delta_0$, we find a partial isometry $w \in N_1$ such that $ww^\star \in \mathbb{T}heta(L\mathbb{R})'$ and $w^\star w \in (L\mathbb{R})'$, satisfying $\mathbb{T}heta(L\mathbb{R})w = wL\mathbb{R}$. By \cref{lem.state-preserving-actions} below, we conclude that a reduction of the action $\alpha^0 : \Lambda_0\curvearrowright P_0^{I_0}$ is cocycle conjugate to a reduction of $\alpha^1 \circ \delta_0: \Lambda_0\curvearrowright P_1^{I_i}$, through a state preserving isomorphism. \end{proof} \begin{lemma}\label{lem.state-preserving-actions} Let $(P_0,\varphi_0)$, $(P_1,\varphi_1)$ be von Neumann algebras with normal faithful states and separable preduals. Let $\Lambda$ be a countable group, with state preserving actions $\Lambda \curvearrowright^{\alpha^i} (P_i,\varphi_i)$, such that the centralizers $(P_i)_{\varphi_i}$ are factors. Denote by $P_i \rtimes \mathbb{R}$ the crossed product of $P_i$ with the modular action of $\varphi_i$. Let $\Lambda \curvearrowright^{\widetilde \alpha^i} P_i \rtimes \mathbb{R}$ denote the induced action given by $\widetilde \alpha^i_s (\pi_{\sigma^{\varphi_i}}(x)) = \pi_{\sigma^{\varphi_i}}(\alpha^i_s(x))$ for $x \in P_i, s \in \Lambda$ and $\widetilde \alpha^i_s(\lambda(t)) = \lambda(t)$ for $t \in \mathbb{R}$, and let $\mathbb{R}dual \curvearrowright P_i \rtimes \mathbb{R}$ denote the dual action w.r.t.~$\sigma^{\varphi_i}$. Assume that $\psi : P_0 \rtimes \mathbb{R} \to P_1 \rtimes \mathbb{R}$ is an isomorphism such that the induced actions $\Lambda \times \mathbb{R}dual \curvearrowright P_i \rtimes \mathbb{R}$ are cocycle conjugate through $\psi$, and that $w \in P_1 \rtimes \mathbb{R}$ is a partial isometry such that $ww^\star \in \psi(L\mathbb{R})'$, $w^\star w \in (L\mathbb{R})'$ and $\psi(L\mathbb{R})w = wL\mathbb{R}$. Then the actions $\Lambda \curvearrowright (P_i, \varphi_i)$ are, up to reductions, cocycle conjugate through a state preserving isomorphism. \end{lemma} \begin{proof} Put $N_i = P_i \rtimes \mathbb{R}$, and denote by $\psi : N_0 \to N_1$ the given isomorphism and by $w\in N_1$ the partial isometry such that $ww^\star \in \psi(L\mathbb{R})'$, $w^\star w \in (L\mathbb{R})'$ and $\psi(L\mathbb{R})w = wL\mathbb{R}$. Let $V_{g,\mu} \in \mathcal{U}(N_1)$ be a 1-cocycle for the action $\Lambda \times \mathbb{R}dual \curvearrowright N_1$ such that \begin{align}\label{eq:coco} \psi \circ \alpha_g^0 \circ \widehat{\sigma}^{\varphi_0}_\mu = \operatorname{Ad} V_{g, \mu} \circ \alpha_{g}^1 \circ \widehat{\sigma}^{\varphi_1}_\mu \circ \psi \qquad \text{for all }g \in \Lambda, \mu \in \mathbb{R}dual. \end{align} Identify $\mathbb{R}dual$ as the dual of $\mathbb{R}$ using the pairing $\inp{t}{\mu} = \mu^{\mathbf{i}t}$ for $t \in \mathbb{R},\mu \in \mathbb{R}dual$, and denote by $\widehat{\sigma}^{\varphi_i} : \mathbb{R}dual \curvearrowright N_i$ the dual action w.r.t.~$\sigma^{\varphi_i}$. Extend $\psi$ to an isomorphism $Psi : N_0 \rtimes \mathbb{R}^+_0 \to N_1 \rtimes \mathbb{R}dual$ by putting $Psi(x) = \psi(x)$ for $x \in P_0 \rtimes \mathbb{R}$ and $Psi(\lambda(\mu)) = V_{1,\mu} \lambda(\mu)$. 
Put $\kappa \in \mathbb{R}dual$ such that $\mathbb{T}r_{\varphi_1} \circ \psi = \kappa \mathbb{T}r_{\varphi_0}$, then $Psi$ scales the dual weights of $\mathbb{T}r_{\varphi_i}$ by the same factor $\kappa$, i.e.~$\widetilde \mathbb{T}r_{\varphi_1} \circ \widetildePsi = \kappa \widetilde \mathbb{T}r_{\varphi_0}$. Identify $N_i \rtimes \mathbb{R}dual$ with $P_i \mathbin{\overline{\otimesimes}} B(L^2(\mathbb{R}dual))$ through the isomorphism \begin{align*} Phi_i^\mathfrak{F} : N_i \rtimes \mathbb{R}dual \to P_i \mathbin{\overline{\otimesimes}} B(L^2(\mathbb{R}dual)),\quad Phi_i^\mathfrak{F} = \operatorname{Ad}\: (1 \otimesimes \mathfrak{F}) \circ Phi_i, \end{align*} where $Phi_i : N_i \rtimes \mathbb{R}dual \to P_i \mathbin{\overline{\otimesimes}} B(L^2(\mathbb{R}))$ denotes the Takesaki duality isomorphism as defined in \cite[Theorem X.2.3]{takesaki;theory-operator-algebras-II}, and $\mathfrak{F} : L^2(\mathbb{R}) \to L^2(\mathbb{R}dual)$ is the Fourier transform given by \begin{align*} \mathfrak{F}(f)(\mu) = \frac1{\sqrt{2\pi}} \int_{t \in \mathbb{R}} f(t) \mu^{-\mathbf{i}t} dt, \quad f \in L^1(\mathbb{R}) \cap L^2(\mathbb{R}). \end{align*} Note that the duality isomorphism $Phi_i$ maps $\lambda(t) \in L\mathbb{R}$ to $\lambda(t) \in B(L^2(\mathbb{R}))$, and hence $Phi_i^\mathfrak{F}(L\mathbb{R}) = \operatorname{Ad} \mathfrak{F}\, (L\mathbb{R}) = L^\infty(\mathbb{R}dual)$. Also, $x \in (P_i)_{\varphi_i}$ is mapped to $Phi_i^\mathfrak{F}(x) = x \otimesimes 1$. Let $M$ be the operator affiliated with $L^\infty(\mathbb{R}dual)$ given by $M(f)(t) = t f(t)$, and let $\omega$ be the weight on $B(L^2(\mathbb{R}dual))$ given by $\omega = \mathbb{T}r(M \cdot)$. Observe that $\widetilde\mathbb{T}r_{\varphi_i} = (\varphi_i \otimesimes \omega) \circ Phi_i^\mathfrak{F}$. Putting $\mathbb{T}heta = Phi_1^\mathfrak{F} \circ Psi \circ (Phi_0^\mathfrak{F})^{-1}$ and $\widehat{w} = Phi_1^\mathfrak{F}(w)$, we get an isomorphism \begin{align*} \mathbb{T}heta : P_0 \mathbin{\overline{\otimesimes}} B(L^2(\mathbb{R}dual)) \to P_1 \mathbin{\overline{\otimesimes}} B(L^2(\mathbb{R}dual)) \end{align*} satisfying \begin{align*} (\varphi_1 \otimesimes \omega) \circ \mathbb{T}heta &= \kappa (\varphi_0 \otimesimes \omega),\\ \mathbb{T}heta \circ (\alpha^0_g \otimesimes \mathord{\operatorname{id}}) &= \operatorname{Ad} Phi_1^\mathfrak{F}(V_{g,1}) \circ (\alpha^1_g \otimesimes \mathord{\operatorname{id}}) \circ \mathbb{T}heta \quad\text{for all }g \in \Lambda,\\ \mathbb{T}heta(L^\infty(\mathbb{R}dual))\,\widehat{w} &=\widehat{w}\, L^\infty(\mathbb{R}dual). \end{align*} Here, the second equality follows from \eqref{eq:coco}. Put $p_0 = \mathbb{T}heta^{-1}(\widehat{w}\widehat{w}^\star)$, $p_1 = \widehat{w}^\star \widehat{w}$. Taking the relative commutants of the last equality, we obtain that $\mathbb{T}heta(p_0 (P_0\mathbin{\overline{\otimesimes}} L^\infty(\mathbb{R}dual))p_0)\, \widehat{w} = \widehat{w}\, (p_1 (P_1 \mathbin{\overline{\otimesimes}} L^\infty(\mathbb{R}dual)) p_1)$. Denote by $M_i = p_i(P_i \mathbin{\overline{\otimesimes}} L^\infty(\mathbb{R}dual))p_i$ and let $\theta : M_0 \to M_1$ be the isomorphism given by $\theta = \operatorname{Ad} \widehat{w}^\star \circ \mathbb{T}heta$. 
For $i=0,1$ and $g \in \Lambda$, note that the projections $p_i$ and $\alpha_g^i(p_i)$ both belong to $(P_i)_{\varphi_i} \mathbin{\overline{\otimesimes}} L^\infty(\mathbb{R}dual)$, since \begin{align*} p_i \in Psi_i^{\mathfrak{F}}(N_i \cap (L\mathbb{R})') = Psi_i^{\mathfrak{F}}((P_i)_{\varphi_i} \mathbin{\overline{\otimesimes}} L\mathbb{R}) = (P_i)_{\varphi_i} \mathbin{\overline{\otimesimes}} L^\infty(\mathbb{R}dual). \end{align*} Moreover, the central traces of these projections inside $(P_i)_{\varphi_i} \mathbin{\overline{\otimesimes}} L^\infty(\mathbb{R}dual)$ coincide, since $(P_i)_{\varphi_i}$ is a factor and the central trace is thus given by the conditional expectation $E : (P_i)_{\varphi_i} \mathbin{\overline{\otimesimes}} L^\infty(\mathbb{R}dual) \to L^\infty(\mathbb{R}dual)$, which satisfies $E = E \circ \alpha_g^i$. Therefore, we find a partial isometry $v_{i,g} \in (P_i)_{\varphi_i} \mathbin{\overline{\otimesimes}} L^\infty(\mathbb{R}dual)$ such that $p_i = v_{i,g}v_{i,g}^\star$ and $\alpha_g(p_i) = v_{i,g}^\star v_{i,g}$. Also choose $v_{i,e} = p_i$. Then the formula $\alpha_g^{p_i}(pxp) = v_{i,g}(\alpha_g^i \otimesimes \mathord{\operatorname{id}})(pxp)v_{i,g}^\star$ for $x \in M_i$ defines a cocycle action $\alpha^{p_i} : \Lambda \curvearrowright (p_iM_ip_i, \varphi_i^p)$, where $\varphi_i^{p_i}$ is the state given by $\varphi_i^{p_i}(pxp) = \varphi_i(pxp)/\varphi(p_i)$ for $x \in M_i$. Putting $W_g =\widehat{w}^\star \mathbb{T}heta(v_{0,g}) Phi_1^\mathfrak{F}(V_{g,1}) \alpha_g^1(\widehat{w}) v_{1,g}^\star \in p_1Phi^\mathfrak{F}_1(N_1)p_1 \subset p_1(P_1 \mathbin{\overline{\otimesimes}} B(L^2\mathbb{R}dual))p_1$, we now obtain that, as isomorphisms $M_0 \to M_1$, \begin{align}\label{eq:conj-theta} \theta \circ \alpha_g^{p_0} = \operatorname{Ad} W_g \circ \alpha_g^{p_1} \circ \theta \quad\text{for all }g\in \Lambda. \end{align} Note that by construction, $\alpha_g^{p_i}$ is the trivial action on $p_iL^\infty(\mathbb{R}dual)$, and $\theta(p_0L^\infty(\mathbb{R}dual)) = p_1L^\infty(\mathbb{R}dual)$. Thus we have that $W_g \in p_1Phi_1^\mathfrak{F}(N_1)p_1 \cap (p_1L^\infty(\mathbb{R}dual))' = p_1( (P_1)_{\varphi_1} \mathbin{\overline{\otimesimes}} L^\infty(\mathbb{R}dual))p_1$. For $i = 0, 1$, consider the integral decomposition \begin{align*} L^2(P_i,\varphi_i) \mathbin{\overline{\otimesimes}} L^2(\mathbb{R}dual) = \int_{\mathbb{R}dual}^\oplus L^2(P_i,\varphi_i) ds, \quad (P_i \mathbin{\overline{\otimesimes}} L^\infty(\mathbb{R}dual), \varphi_i \otimesimes \omega) = \int_{\mathbb{R}dual}^\oplus (P_i, \varphi_i)\, ds. \end{align*} Note that $ds$ satisfies $\int_{\mu_1}^{\mu_2} ds = \int_{\log \mu_1}^{\log \mu_2} e^t d\lambda(t)$, with $d\lambda$ the Lebesgue measure on $\mathbb{R}$. In this disintegration, we can write $p_i = \int_\mathbb{R}dual^\oplus p_i(s) ds$ for a measurable field of projections $s \mapsto p_i(s)$ in $P_i$. Then we find as disintegration of $M_i$, \begin{align*} M_i = p_i(P_i \mathbin{\overline{\otimesimes}} L^\infty(\mathbb{R}dual))p_i = \int_\mathbb{R}dual^\oplus p_i(s)P_i p_i(s) ds. \end{align*} Also choose measurable fields of partial isometries $s \mapsto v_{i,g}(s)$ in $P_i$, and $s \mapsto W_g(s)$ in $P_1$, such that $v_{i,g} = \int_\mathbb{R}dual^\oplus v_{i,g}(s)ds$ and $W_g = \int_\mathbb{R}dual^\oplus W_g(s)ds$. 
Fix a countable set of measurable fields $\mathfrak{X} = \{s \mapsto x(s)\}$, corresponding to a countable dense subset of $M_0$, such that for almost every $s$, the set $\{x(s) | x \in \mathfrak{X}\}$ is dense in $p_0(s)P_0 p_0(s)$, and such that for all $x \in \mathfrak{X}$ and all $g \in \Lambda$, also the measurable field $s \mapsto v_{0,g}(s) \alpha_g^0(x(s))v_{0,g}(s)^\star$ belongs to $\mathfrak{X}$. Since $\theta : M_0 \to M_1$ is an isomorphism and by uniqueness of disintegrations, we find for almost all $s \in \mathbb{R}dual$ an isomorphism $\theta_s : (p_0(s)P_0p_0(s), \varphi_0^{p_0(s)}) \to (p_1(s)P_1p_1(s), \varphi_1^{p_1(s)})$, such that \begin{align}\label{eq:disint-theta} \theta\left( \int_\mathbb{R}dual^\oplus x(s)ds \right) = \int_\mathbb{R}dual^\oplus \theta_s(x(s))ds \qquad \text{for all measurable fields $x \in \mathfrak{X}$}. \end{align} Combining \eqref{eq:conj-theta} and \eqref{eq:disint-theta}, we get that for all measurable fields $s \mapsto x(s)$ in $\mathfrak{X}$, \begin{align*} \int_\mathbb{R}dual^\oplus \theta_s\big( v_{0,g}(s) \alpha_g^0(x(s)) v_{0,g}(s)^\star \big) ds = \int_\mathbb{R}dual^\oplus W_g(s) v_{1,g}(s) \alpha_g^1(\theta_s(x(s))) v_{1,g}(s)^\star W_g(s)^\star ds. \end{align*} We then can find a conull subset $S \subset L\mathbb{R}dual$ such that for all $s \in S$, $\theta_s$ is defined; $\{x(s)\mid x \in \mathfrak{X} \}$ is dense in $p_0(s)P_0p_0(s)$; $v_{i,g}(s)$ is a partial isometry with left support $p_i(s)$ and right support $\alpha_g(p_i(s))$, fixed by all $\{\sigma_t^{\varphi_i} \mid t \in \mathbb{Q}\}$; $W_g(s)$ is a unitary in $p_1(s) P_1 p_1(s)$ fixed by all $\{\sigma_t^{\varphi_1}\mid t \in \mathbb{Q}\}$; and such that for all $x \in \mathfrak{X}$, \begin{align*} \theta_s\big( v_{0,g}(s) \alpha_g^0(x(s)) v_{0,g}(s)^\star \big) = W_g(s) v_{1,g}(s) \alpha_g^1(\theta_s(x(s))) v_{1,g}(s)^\star W_g(s)^\star. \end{align*} In particular, it follows for all $s \in S$ and for all $g \in \Lambda$, that $v_{i,g}(s) \in (P_i)_{\varphi_i}$, $W_g(s) \in (P_1)_{\varphi_1}$, and that \begin{align*} \theta_s \circ \operatorname{Ad} v_{0,g}(s) \circ \alpha_g^0 = \operatorname{Ad} W_g(s) \circ \operatorname{Ad} v_{1,g}(s) \circ \alpha_g^1 \circ \theta_s. \end{align*} Choosing some $s \in S$ for which $p_0(s)$ is nonzero, we obtain that the actions $\alpha^i : \Lambda\curvearrowright P_i$ are, up to reductions, cocycle conjugate through a state preserving isomorphism. \end{proof} \section{Proofs of \texorpdfstring{\cref{thmstar.A,corstar.B,corstar.C,corstar.D}}{Theorems A-D}} \mathbb{C}ref{thmstar.A,corstar.B} follow now easily from \cref{thm.non-isomorphism-all-generalized-bernoulli}. Nevertheless, we give a detailed proof for the convenience of the reader. \begin{proof}[Proof of \cref{thmstar.A}] Let $\Lambda_i$ be icc groups in the class $\mathcal{C}$, and $(P_i,\phi_i)$ be nontrivial amenable factors with normal faithful states such that $(P_i)_{\phi_i,\text{ap}}$ are factors. Since the class $\mathcal{C}$ does not contain amenable groups, the action $\Lambda_i \curvearrowright \Lambda_i$ has no invariant mean. It is obvious that every nontrivial $g \in \Lambda_i$ moves infinitely many points of $\Lambda_i$. The result follows now directly from \cref{thm.non-isomorphism-all-generalized-bernoulli} with $I_i = \Lambda_i$. \end{proof} \begin{proof}[Proof of \cref{corstar.B}] For $i=0,1$, let $\Lambda_i \in \mathcal{C}$ be an icc group, and let $(P_i,\phi_i)$ be a nontrivial amenable factor with a normal faithful weakly mixing state $\phi_i$. 
Obviously, if $\Lambda_0 \cong \Lambda_1$ and the actions $\Lambda_i \curvearrowright (P_i,\phi_i)^{\Lambda_i}$ are conjugate, then the von Neumann algebras $P_i^{\Lambda_i} \rtimes \Lambda_i$ are isomorphic. Assume conversely that $(P_0,\phi_0)^{\Lambda_0} \rtimes \Lambda_0$ and $(P_1,\phi_1)^{\Lambda_1} \rtimes \Lambda_1$ are isomorphic. Since $\phi_i$ is weakly mixing, $\operatorname{D}elta_{\phi_i}$ has no eigenvalues, and hence we have that $(P_i)_{\phi_i,\text{ap}} = \mathbb{C}$ for $i=0,1$. By \cref{lem.technical-condition} and putting $\varphi_i = \phi_i^{\Lambda_i}$, we then also get that $(P_i^{\Lambda_i})_{\varphi_i} = \mathbb{C}$. Applying \cref{thmstar.A}, we get that the groups $\Lambda_i$ are isomorphic, and that the actions $\Lambda_i \curvearrowright P_i^{\Lambda_i}$ are conjugate modulo the isomorphism $\Lambda_0 \cong \Lambda_1$, through a state preserving isomorphism. \end{proof} For the proof of \cref{corstar.C,corstar.D}, we recall the notion of a \emph{generalized 1-cocycle}. Let $\alpha: G \to \operatorname{Aut}(M,\varphi)$ be an action of a locally compact group on a von Neumann algebra $(M,\varphi)$ with an n.s.f.~weight $\varphi$. A \emph{generalized 1-cocycle} for $\alpha$ with \emph{support projection} $p \in M_\varphi$ is a continuous map $w:G \to M_\varphi$ such that $w_g \in p M_\varphi \alpha_g(p)$ is a partial isometry with $p=w_g w_g^\star$ and $\alpha_g(p) = w_g^\star w_g$, and \begin{align*} w_{gh} = \Omega(g,h) w_g \alpha_g(w_h) \quad\text{ for all }g,h \in G, \end{align*} where $\Omega(g,h)$ is a scalar 2-cocycle. \begin{proof}[Proof of \cref{corstar.C}] For $i=0,1$, let $\Lambda_i$ be a direct product of two icc groups in the class $\mathcal{C}$, and let $(P_i,\phi_i)$ be a nontrivial amenable factor with a normal faithful state $\phi_i$ such that $(P_i)_{\phi_i,\text{ap}}$ is a factor. Assume that $(P_0,\phi_0)^{\Lambda_0} \rtimes \Lambda_0$ and $(P_1,\phi_1)^{\Lambda_1} \rtimes \Lambda_1$ are isomorphic. Putting $\varphi_i = \phi_i^{\Lambda_i}$, we get by \cref{thmstar.A} that there exist projections $p_i \in (P_i^{\Lambda_i})_{\varphi_i}$ such that the reductions of $\Lambda_i \curvearrowright P_i^{\Lambda_i}$ by $p_i$ are cocycle conjugate modulo the isomorphism $\Lambda_0 \cong \Lambda_1$ in a state preserving way. By amplifying both actions, we may assume that either $p_0 = 1$ or $p_1 = 1$. By interchanging $P_0$ and $P_1$ if necessary, we now assume that $p_0 = 1$. Identifying $\Lambda = \Lambda_0 = \Lambda_1$, writing $N_i = P_i^{\Lambda_i}$ and denoting the action $\Lambda \curvearrowright N_i$ by $\alpha^i$, this now means that there exists a state preserving isomorphism $\psi : N_0 \to p_1N_1p_1$ and a generalized 1-cocycle $(v_g)_{g \in \Lambda} \in (N_1)_{\varphi_1}$ with support projection $p_1$ for $\alpha^1 : \Lambda\curvearrowright N_1$, such that $\psi \circ \alpha^0_g = \operatorname{Ad} v_g \circ \alpha^1_g \circ \psi$ for all $g \in \Lambda$. Put $M = (P_1)_{\phi_1,\text{ap}}^{\Lambda_1}$ and note that $(N_1)_{\varphi_1} = M_{\varphi_1}$ by \cref{lem.technical-condition}. In particular, $v_g$ is a generalized 1-cocycle for the action $\Lambda \curvearrowright M$.
Since the action $\Lambda \curvearrowright P_0^{\Lambda_0}$ has no nontrivial globally invariant subspaces, neither does $\operatorname{Ad} v_g \circ \alpha^1_g$ on $M$, and it follows from \cite[Corollary 7.3]{vaes-verraedt;classification-type-III-bernoulli-crossed-products} that $v_g = \chi(g) v^\star \alpha_g^1(v)$ for all $g \in \Lambda$, where $\chi : \Lambda \to \mathbb{T}$ is a character and $v \in M_{\varphi_1,\lambda}$ satisfies $v^\star v = p_1$ and $vv^\star = 1$. Then $\operatorname{Ad} v \circ \psi : N_0 \to N_1$ is a state preserving isomorphism implementing a conjugation between the actions $\Lambda \curvearrowright P_i^{\Lambda_i}$. \end{proof} \begin{proof}[Proof of \cref{corstar.D}] Let $\Lambda_i$ be icc groups in the class $\mathcal{C}$, and $(P_i,\phi_i)$ be nontrivial amenable factors with normal faithful states such that $(P_i)_{\phi_i,\text{ap}}$ are factors. We only need to show that if the algebras $P_i^{\Lambda_i} \rtimes (\Lambda_i \times \Lambda_i)$ are isomorphic, then the groups $\Lambda_i$ and the pairs $(P_i,\phi_i)$ are isomorphic. Assume now that $(P_0,\phi_0)^{\Lambda_0} \rtimes (\Lambda_0 \times \Lambda_0)$ and $(P_1,\phi_1)^{\Lambda_1} \rtimes (\Lambda_1 \times \Lambda_1)$ are isomorphic. Applying \cref{thm.non-isomorphism-all-generalized-bernoulli}, and proceeding exactly as in the proof of \cref{corstar.C}, using in particular \cite[Corollary 7.3]{vaes-verraedt;classification-type-III-bernoulli-crossed-products} and the weak mixingness of $\Lambda_0 \times \Lambda_0 \curvearrowright P_0^{\Lambda_0}$, we obtain an isomorphism $\delta : \Lambda_0 \times \Lambda_0 \to \Lambda_1 \times \Lambda_1$ and a state preserving isomorphism $\psi : P_0^{\Lambda_0} \to P_1^{\Lambda_1}$ satisfying $\psi \circ \alpha^0_g = \alpha^1_{\delta(g)} \circ \psi$ for all $g \in \Lambda_0 \times \Lambda_0$. Here we denoted the Bernoulli action $\Lambda_i \times \Lambda_i \curvearrowright P_i^{\Lambda_i}$ by $\alpha^i$. An argument from \cite[Proof of Theorem 5.4]{popa-vaes;strong-rigidity-generalized-bernoulli-actions} (see also the last two paragraphs of the proof of \cite[Theorem C]{vaes-verraedt;classification-type-III-bernoulli-crossed-products}) now shows the desired result. \end{proof} \end{document}
\begin{document} \title{Multiplexed Memory-Insensitive Quantum Repeaters} \date{\today } \author{O. A. Collins, S. D. Jenkins, A. Kuzmich, and T. A. B. Kennedy} \affiliation{School of Physics, Georgia Institute of Technology, Atlanta, Georgia 30332-0430} \pacs{42.50.Dv,03.65.Ud,03.67.Mn} \begin{abstract}Long-distance quantum communication via distant pairs of entangled quantum bits (qubits) is the first step towards more secure message transmission and distributed quantum computing. To date, the most promising proposals require quantum repeaters to mitigate the exponential decrease in communication rate due to optical fiber losses. However, these are exquisitely sensitive to the lifetimes of their memory elements. We propose a multiplexing of quantum nodes that should enable the construction of quantum networks that are largely insensitive to the coherence times of the quantum memory elements. \end{abstract} \maketitle Quantum communication, networking, and computation schemes utilize entanglement as their essential resource. This entanglement enables phenomena such as quantum teleportation and perfectly secure quantum communication \cite{bb84}. The generation of entangled states, and the distance over which we may physically separate them, determines the range of quantum communication devices. To overcome the exponential decay in signal fidelity over the communication length, Briegel {\it et al.} \cite{briegel} proposed an architecture for noise-tolerant quantum repeaters, using an entanglement connection and purification scheme to extend the overall entanglement length using several pairs of quantum memory elements, each previously entangled over a shorter fundamental segment length. A promising approach utilizes atomic ensembles, optical fibers and single photon detectors \cite{duan,matsukevich,chaneliere}. The difficulty in implementing a quantum repeater is connected to short atomic memory coherence times and large optical transmission loss rates. In this Letter we propose a new entanglement generation and connection architecture using a real-time reconfiguration of multiplexed quantum nodes, which improves communication rates dramatically for short memory times. A generic quantum repeater consisting of $2^N+1$ distinct nodes is shown in Fig. 1a. The first step generates entanglement between adjacent memory elements in successive nodes with probability $P_0$. An entanglement connection process then extends the entanglement lengths from $L_0$ to $2L_0$, using either a parallelized (Fig. 1b), or multiplexed (Fig. 1c) architecture. This entanglement connection succeeds with probability $P_1$, followed by subsequent entanglement-length doublings with probabilities $P_2$,...,$P_N$, until the terminal quantum memory elements, separated by $L={2^N}L_0$, are entangled. \begin{figure} \caption{(a) Processes of an $N=3$ multiplexed quantum repeater. In addition to two terminal nodes the network has seven internal nodes consisting of two quantum memory sites containing $n$ independent memory elements. Entanglement generation proceeds with probability $P_0$, creating 8 entanglement lengths of $L_0$. In the lowest panel, shaded memory sites indicate successfully entangled segments. The $N=1$ level entanglement connection proceeds with probability $P_1$, producing four entangled segments of length $2L_0$. Nodes reset to their vacuum states by the connection are blank. The $N=2$ and $N=3$ levels proceed with probabilities $P_2$ and $P_3$. 
Each stage results in entanglement-length doubling, until an $N=3$ success entangles the terminal nodes. (b) and (c) show the topology of the $n$ memory element sets. The parallel architecture (b) connects entanglement only between memory elements with the same address. In contrast, multiplexing (c) uses a fast sequential scanning of all memory element addresses to connect any available memory elements.} \label{Figure 1} \end{figure} For the simplest case of entanglement-length doubling with a single memory element per site ($N=n=1$), we calculate the average time to successful entanglement connection for both ideal (infinite) and finite quantum memory lifetimes. This basic process is fundamental to the more complex $N$-level quantum repeaters, since an $N$-level quantum repeater is the entanglement-length doubling of two $(N-1)$-level systems. {\it{Entanglement-length doubling with ideal memory elements}}.--- Define a random variable $Z$ as the waiting time for an entanglement connection attempt (all times are measured in units of $L_0/c$, where the speed of light $c$ includes any material refractive index). Let $Y\equiv 1$ if the entanglement connection succeeds, and zero otherwise. Entanglement generation attempts take one time unit. The time to success is the sum of the waiting time between connection attempts and the one time unit of classical information transfer during each connection attempt, \begin{eqnarray} &T &=(Z_1+1)Y_1 + (Z_1+Z_2+2)(1-Y_1)Y_2 + \nonumber\\ &&(Z_1+Z_2+Z_3+3)(1-Y_1)(1-Y_2)Y_3 + ..., \end{eqnarray} and since the $Z_i$ and $Y_i$ are independent random variables, it follows that \begin{equation} \langle T\rangle =\frac{\langle Z\rangle + 1}{P_1}. \end{equation} In the infinite memory time limit, $Z$ is simply the waiting time until entanglement is present in both segments, i.e., $Z=\max\{A,B\}$, where $A$ and $B$ are random variables representing the entanglement generation waiting times in the left and right segments. As each trial is independent of prior trials, $A$ and $B$ are geometrically distributed with success probability $P_0$. The mean of a geometric random variable with success probability $p$ is $1/p$, and the minimum of $j$ independent, identically distributed geometric random variables is itself geometric with success probability $1-(1-p)^j$. From these properties it follows that \begin{equation} \langle T\rangle _{\infty} =\frac{3-P_0^2}{P_0P_1(2-P_0)}. \end{equation} {\it{Entanglement-length doubling with finite memory elements}}.--- For finite quantum memory lifetimes, Eqs. (1) and (2) still hold, but $Z$ is no longer simply $\max\{A,B\}$. Rather it is the time until both segments are entangled within $\tau$ time units of each other, where $\tau$ is the memory lifetime. For simplicity, we assume entanglement is unaffected for $\tau$ time units and destroyed thereafter. Define a new random variable $M\equiv 1$ if $|A-B| < \tau$, and zero otherwise. Due to the memoryless nature of the geometric distribution, \begin{eqnarray} Z &=& \max\{A_1,B_1\} M_1 + \left( \min\{A_1,B_1\}+\tau \right. \nonumber \\ &+& \max\{A_2,B_2\})(1-M_1)M_2 + ... \end{eqnarray} From this and Eq. (2) it follows that \cite{note} \begin{eqnarray} &\langle Z\rangle_\tau &=\frac{1}{P_0(2-P_0-2q_0^{\tau+1})}+ \frac{2\tau q_0^{\tau+1}}{2-P_0-2q_0^{\tau+1}} \nonumber\\ && +\frac{2q_0(1-q_0^\tau(1+\tau P_0))}{P_0(2-P_0-2q_0^{\tau+1})}, \nonumber\\ &\langle T\rangle_\tau&=\frac{\langle T\rangle _{\infty}- (\frac{1+P_0}{P_0P_1})\frac{q_0^{\tau+1}}{1-P_0/2}}{1-\frac{q_0^{\tau+1}}{1-P_0/2}}, \end{eqnarray} where $q_0 \equiv 1-P_0$.
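To make the step from the two quoted properties of geometric random variables to Eq. (3) explicit (a short verification, not spelled out above): since $\max\{A,B\}=A+B-\min\{A,B\}$,
\[
\langle Z\rangle_{\infty} = \langle\max\{A,B\}\rangle = \frac{2}{P_0}-\frac{1}{1-(1-P_0)^2} = \frac{3-2P_0}{P_0(2-P_0)},
\]
and Eq. (2) then gives $\langle T\rangle_{\infty}=(\langle Z\rangle_{\infty}+1)/P_1 = (3-P_0^2)/[P_0P_1(2-P_0)]$, which is Eq. (3). The same value is recovered from Eq. (5) in the limit $\tau\rightarrow\infty$, where $q_0^{\tau+1}\rightarrow 0$.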
Typically $P_0$ is small compared to $P_1$, as the former includes transmission losses. The terms in $\langle Z\rangle_\tau$ are, respectively, the time spent (I) waiting for entanglement in either segment starting from unexcited nodes, (II) fruitlessly attempting entanglement generation until the quantum memory in the first segment expires, and (III) on successful entanglement generation in the other segment. When $P_0 \ll 1/(\tau +1)$, memory times are much smaller than the entanglement generation time, $\langle Z\rangle_\tau \approx 1/[P_0^2(1+2\tau)] + 2\tau/[P_0(1+2\tau)] + 2 \tau^2(1-P_0)/(1+2\tau)$, and term (I) dominates the entanglement-length doubling time. Fig. 2 shows the sharp increase in $\langle T\rangle_\tau$ for small $\tau$ characteristic of term (I). \begin{figure} \caption{Plot of $\langle T\rangle_\tau$ against $\tau$, for $P_0=0.01$, $P_1=0.5$. The minimum possible success time of 2 imposes a similar minimum on the quantum memory element lifetimes. } \label{Figure 2} \end{figure} {\it{Parallelization and multiplexing}}.--- Long memory coherence times remain an outstanding technical challenge, motivating the exploration of approaches that mitigate the poor low-memory scaling. One strategy is to engineer a system that compensates for low success rates by increasing the number of trials, replacing single memory elements with $n>1$ element arrays. In a parallel scheme, the $i^{th}$ memory element pair in one node interacts only with the $i^{th}$ pair in other nodes, Fig. 1b. Thus, a parallel repeater with $n2^{N+1}$ memory elements acts as $n$ independent $2^{N+1}$-element repeaters and connects entanglement $n$ times faster. A better approach is to dynamically reconfigure the connections between nodes, using information about entanglement successes to determine which nodes to connect. In this multiplexed scheme the increased number of node states that allow entanglement connection, compared to parallelizing, improves the entanglement connection rate between the terminal nodes. We now calculate the entanglement connection rate $f_{\tau}$ of an $N=1$ multiplexed system. Unlike the parallel scheme, however, the entanglement connection rate is no longer simply $1/\langle T\rangle_{\tau}$. When one segment has more entangled pairs than its partner, connection attempts do not reset the repeater to its vacuum state and there is residual entanglement. Simultaneous successes and residual entanglement produce average times between successes smaller than $\langle T\rangle_{\tau}$. We approximate the resulting repeater rates when residual entanglement is significantly more probable than simultaneous successes. This is certainly the case both in the low memory time limit and whenever $nP_0 \ll 1$. Our approximation modifies Eq. (4) by including cases where the waiting time is zero due to residual entanglement. In $Z$ of Eq. (4), the $\min\{ A_i,B_i\}$ terms represent the waiting time to an entanglement generation success starting from the vacuum state. Multiplexing modifies Eq. (4) in the following way: for each $i=1,\ldots,\infty$ we replace $(A_i,B_i)$ $\rightarrow$ $ (\min\{A_{i,j}\},\min\{B_{i,k}\})$, where $j,k =1,\ldots,n$. The effect of the residual entanglement is approximated by the factor $\alpha$: $\min\{ A_i,B_i\}\rightarrow\alpha \min\{ \min\{A_{i,j}\},\min\{B_{i,k}\}\}$, where $1-\alpha$ is the probability of residual entanglement. Eq. (4) now approximates the average time between successes. Using Eqs. 
(2), (4) and the distributions of $\min\{A_{i,j}\}$ and $\min\{B_{i,k}\}$, the resulting rate is \begin{eqnarray} &\langle f\rangle_{\tau,n} &= \frac{P_1(1-q_0^n)(1+q_0^n-2q_0^{n(\tau+1)})}{1+2q_0^n-q_0^{2n}-4q_0^{n(\tau+1)}+2q_0^{n(\tau+2)} + \alpha},\\ &\alpha &= {\frac{q_0^{n-1}(1-q_0^{n})(1-q_0^{2n-1}+2q_0^{3n-2}(1- q_0^{\tau(2n-1)}))}{(1-q_0^{2n-1})(1+q_0^n-2q_0^{(\tau+1)n})}}. \nonumber \end{eqnarray} When $n=1$, $\alpha =1$, as required. Further, as $nP_0$ and $\tau$ become large, $\alpha \rightarrow 0$, showing the expected breakdown of the approximation; as $n\rightarrow \infty$, $\alpha$ should approach $1/2$. Fig. 3(a) demonstrates that, as expected, multiplexed connection rates exceed those of parallelized repeaters. The improvement from multiplexing in the infinite memory case is comparatively modest. However, the multiplexed connection rates are dramatically less sensitive to decreasing memory lifetimes. Note that the performance of multiplexing $n=5$ exceeds that of parallelizing $n=10$, reflecting a fundamental difference in their dynamics and scaling behavior. Fig. 3(b) further illustrates the memory insensitivity of multiplexed repeaters by displaying the fractional rate $f_\tau/f_\infty$. As parallelized rates scale by the factor $n$, such repeaters all follow the same curve for any $n$. By contrast, multiplexed repeaters become less sensitive to coherence times as $n$ increases. This improved performance in the low memory limit is a characteristic feature of the multiplexed architecture. \begin{figure} \caption{Comparison of the entanglement connection rates $f_{\tau}$ of multiplexed and parallelized repeaters: (a) absolute rates, (b) fractional rates $f_\tau/f_\infty$.} \label{Figure 3} \end{figure} {\it{$N$-level quantum repeaters}}.--- For $N>1$ repeaters we proceed by direct computer simulation, requiring a specific choice of entanglement connection probabilities. We choose the implementation proposed by Duan, Lukin, Cirac, and Zoller (DLCZ) \cite{duan}. The DLCZ protocol requires a total distance $L$, the number of segments $2^N$, the loss $\gamma$ of the fiber connection channels, and the efficiency $\eta$ of retrieving and detecting an excitation created in the quantum memory elements. Let $P_0 = \eta_0 \exp(-{\gamma L_0}/2)$, where $\eta_0$ is related to the fidelity $F\approx 2^N(1-\eta_0)$ \cite{duan}. Recursion relations give the connection probabilities: $P_i =({\eta}/(c_{i-1}+1))(1-{\eta}/({2 \beta (c_{i-1}+1)}))$, $c_i \equiv 2c_{i-1}+1-\eta/\beta$, $i =1,\ldots,N$. Neglecting detector dark counts, $c_0=0$. For photon-number-resolving detectors (PNRDs), $\beta =1$ \cite{duan}; for non-photon-resolving detectors (NPRDs), $\beta =2$. For values of $\eta < 1$, photon losses result in a vacuum component of the connected state in either case. For NPRDs, the indistinguishability of one- and two-photon pulses requires a final projective measurement, which succeeds with probability $\epsilon = 1/(c_3+1)$; see Ref. \cite{duan} for a detailed discussion. Consider a 1000 km communication link. Assume a fiber loss of $10\gamma/\ln10 =0.16$ dB/km, $\eta_0=0.01$, and $\eta=0.9$. Taking $N=3$ ($L_0=125$ km) gives $P_0=0.001$. For concreteness we treat the NPRD case, producing connection probabilities: $P_1=0.698$, $P_2=0.496$, $P_3=0.311$, and $\epsilon=0.206$. Fig. 3 demonstrates agreement with the exact predictions for $n=1$ and the approximate predictions for $n>1$. The slight discrepancies for long memory times with larger $n$ are uniform and understood from the simultaneous connection successes neglected in Eq. (6). 
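As a check on the quoted numbers, the recursion above can be evaluated directly. The following is a minimal sketch, not part of the original Letter, that simply transcribes the stated relations $P_i=(\eta/(c_{i-1}+1))(1-\eta/(2\beta(c_{i-1}+1)))$ and $c_i=2c_{i-1}+1-\eta/\beta$ with the parameters given in the text (the function name is ours); it reproduces $P_0=0.001$, $P_1=0.698$, $P_2=0.496$, $P_3=0.311$, and $\epsilon=0.206$ for the NPRD case.
\begin{verbatim}
import math

def dlcz_probabilities(N=3, L=1000.0, loss_db_per_km=0.16, eta0=0.01, eta=0.9, beta=2):
    """Return P0, [P1..PN] and the final projection probability eps = 1/(c_N+1)."""
    L0 = L / 2**N                          # fundamental segment length (125 km here)
    gamma = loss_db_per_km * math.log(10) / 10.0
    P0 = eta0 * math.exp(-gamma * L0 / 2)  # entanglement generation probability
    c, Ps = 0.0, []                        # c_0 = 0 (detector dark counts neglected)
    for _ in range(N):
        Ps.append((eta / (c + 1)) * (1 - eta / (2 * beta * (c + 1))))
        c = 2 * c + 1 - eta / beta
    return P0, Ps, 1.0 / (c + 1)

P0, Ps, eps = dlcz_probabilities()
print(round(P0, 4), [round(p, 3) for p in Ps], round(eps, 3))
# expected output: 0.001 [0.698, 0.496, 0.311] 0.206  (NPRD case, beta = 2)
\end{verbatim}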
An $N$-level quantum repeater succeeds in entanglement distribution when it entangles the terminal nodes with each other. Fig. 4 shows the entanglement distribution rate of a 1000 km $N=3$ quantum repeater as a function of the quantum memory lifetime. Remarkably, for multiplexing with $n\gtrsim10$ the rate is essentially constant for coherence times over 100 ms, while for the parallel systems it decreases by two orders of magnitude. For memory coherence times of less than 250 ms, one achieves higher entanglement distribution rates by multiplexing ten memory element pairs per segment than by parallelizing 1000. In the extreme limit of minimally-sufficient memory coherence times set by the light-travel time between nodes, each step must succeed the first time. The probabilities of entanglement distribution scale as $nP_0^{2^N}$ (parallelized) and $(nP_0)^{2^N}$ (multiplexed), for $nP_0 \ll 1$. \begin{figure} \caption{Simulated entanglement distribution over $1000$ km for multiplexed (solid) and parallel (dashed) $N=3$ quantum repeaters. For $\tau \geq 100$ msec, the multiplexed distribution rate is almost flat for $n \gtrsim 10$; parallel rates decrease by two orders of magnitude. For low memory times a multiplexed $n=10$ repeater outperforms an $n=1000$ parallelization.} \label{Figure 4} \end{figure} {\it{Communication and cryptography rates}}.--- The DLCZ protocol requires two separate entanglement distributions and two local measurements to communicate a single quantum bit. The entanglement coincidence requirement and finite-efficiency qubit measurements result in communication rates less than $f_{\tau}$. Error correction/purification protocols, via linear-optics-based techniques, will further reduce the rate and may require somewhat lower values of $\eta_0$ than the one used in Fig. 4, to maintain sufficiently high fidelity of the final entangled qubit pair \cite{duan,bennett1}. We emphasize that it is the greatly enhanced entanglement distribution rates with multiplexing that make implementation of such techniques feasible. {\it{Multiplexing with atomic ensembles}}.--- A multiplexed quantum repeater could be implemented using cold atomic ensembles as the quantum memory elements, subdividing the atomic gas into $n$ independent, individually addressable memory elements, Fig. 1c. Dynamic addressing can be achieved by fast (sub-microsecond), two-dimensional scanning using acousto-optic modulators, coupling each memory element to the same single-mode optical fiber. Consider a cold atomic sample 400 $\mu$m in cross-section in a far-detuned optical lattice. If the addressing beams have waists of 20 $\mu$m, a multiplexing of $n>100$ is feasible. To date, the longest single photon storage time is 30 $\mu$s, limited by Zeeman energy shifts of the unpolarized, unconfined atomic ensemble in the residual magnetic field \cite{dspg}. Using magnetically-insensitive atomic clock transitions in an optically confined sample, it should be possible to extend the storage time to tens of milliseconds, which should be sufficient for multiplexed quantum communication over 1000 km. {\it{Summary}}.--- Multiplexing offers only marginal advantage over parallel operation in the long memory time limit. In the opposite, minimal memory limit, multiplexing is $n^{2^N-1}$ times faster, yet the rates are practically useless. Crucially, in the intermediate memory time regime multiplexing produces useful rates when parallelization cannot. 
As a consequence, multiplexing translates each incremental advance in storage times into significant extensions in the range of quantum communication devices. The improved scaling outperforms massive parallelization with ideal detectors, independent of the entanglement generation and connection protocol used. Ion-, atom-, and quantum dot-based systems should all benefit from multiplexing. We are particularly grateful to T. P. Hill for advice on statistical methods. We thank T. Chaneli\`{e}re and D. N. Matsukevich for discussions and C. Simon for a communication. This work was supported by NSF, ONR, NASA, Alfred P. Sloan and Cullen-Peck Foundations. \end{document}
\begin{document} \title{Is magnetic flux quantized inside a solenoid?} \begin{abstract} In some textbooks on quantum mechanics, the description of flux quantization in a superconductor ring based on the Aharonov-Bohm effect may lead some readers to the (wrong) conclusion that flux quantization occurs as well for a long solenoid, with the same quantization condition in which the charge $2e$ of a Cooper pair is replaced by the charge $e$ of one electron. It is shown how this confusion arises and how one can avoid it. \end{abstract} \section{Introduction} The Aharonov-Bohm effect is a quantum mechanical phenomenon in which a charged particle in a region free of magnetic field is affected by the vector potential that produces that field \cite{1}. One immediate consequence of this effect is an explanation of magnetic flux quantization in a superconductor ring, based on the uniqueness of the wave function after a $2\pi$ rotation around the direction of the magnetic field. One might think that such a description also applies to a magnetic field trapped inside a long solenoid, so that the corresponding flux is quantized as in the superconductor case. However, it is easily shown that if the flux in a long solenoid used in the Aharonov-Bohm setup were quantized in this way, we would not observe the shift of the interference pattern in this effect. In this short article, we try to understand the reasons for flux quantization in the superconductor ring and flux non-quantization in a long solenoid. \section{Aharonov-Bohm effect} The Schr\"odinger equation for a charged particle $e$ in the presence of an electromagnetic field $(\textbf{A}, \phi)$ is given by \begin{equation} \frac{1}{2m}(-i\hbar \nabla -\frac{e}{c}\textbf{A})^2 \psi+e\phi \psi =E \psi. \label{1} \end{equation} This equation is invariant under the gauge transformation \begin{equation} \textbf{A} \rightarrow \textbf{A}+\nabla \lambda, \label{2} \end{equation} where $\lambda$ is an arbitrary function of space and time. This invariance causes the wave function to acquire an extra phase, in passing along a path through a region free of magnetic field, according to \begin{equation} \phi=\frac{e}{\hbar c} \int_p \textbf{A}\cdot\textbf{dx}. \label{3} \end{equation} The phase difference between the two cases where the charged particle is moved along two different paths with the same endpoints encompassing the magnetic flux $\Phi$ is given by \begin{equation} \Delta \phi =\frac{e}{\hbar c} \Phi. \label{4} \end{equation} One may observe this phase difference by placing a long solenoid carrying a variable magnetic flux between the two slits of the two-slit experiment. In fact, the relative phase difference of the wave functions at a given point on the screen, when the particle (electron) passes the first or the second slit, depends on the magnetic flux encompassed by the long solenoid, and any change in the flux results in a shift of the interference pattern, which is observable as the Aharonov-Bohm effect. \section{Flux quantization in a superconductor} Consider a superconductor ring containing a trapped magnetic flux with a constant number of flux lines. The Schr\"odinger equation for the charge carriers, namely Cooper pairs of charge $2e$, is the same as (\ref{1}) with $e$ replaced by $2e$. The phase acquired by the wave function of such a pair after a $2\pi$ rotation around the direction of the magnetic field inside the ring is given by \begin{equation} \phi=\frac{2e}{\hbar c} \int_p \textbf{A}\cdot\textbf{dx}=\frac{2e}{\hbar c} \Phi. 
\label{5} \end{equation} The uniqueness of the wave function then requires this phase to be an integer multiple of $2 \pi$, so that the magnetic flux is quantized as follows \begin{equation} \Phi=\frac{2 \pi \hbar c}{2e}n, \:\:\: n=0, \pm1, \pm2, ....\label{6} \end{equation} In other words, the trapped flux can only be an integer multiple of $\frac{2 \pi \hbar c}{2e}$. Flux quantization in a superconductor was experimentally observed in 1961 by Deaver and Fairbank \cite{2}. \section{Flux non-quantization in a long solenoid} In some standard textbooks on quantum mechanics \cite{3}, the subjects of flux quantization and the Aharonov-Bohm effect are presented so vaguely that one may mistakenly conclude that flux quantization is a property of any magnetic flux, such as the one inside a long solenoid, whereas this is not the case and flux quantization occurs only inside a superconductor ring. On the other hand, in some other books the flux non-quantization in a long solenoid is explicitly mentioned without a detailed explanation \cite{4}. Consider a two-slit experiment with a long solenoid, carrying a magnetic flux $\Phi$, located between the two slits. At a given point on the screen, the superposition of the two wave functions of the charged particle received through slits $1, 2$ in the presence of the long solenoid is given by \begin{equation} \psi=(\psi_1 e^{ie\Phi/\hbar c}+\psi_2)e^{{ie/\hbar c}\int_2 \textbf{A}\cdot\textbf{dx}}. \label{7} \end{equation} The magnetic flux $\Phi$ is then responsible for a relative phase difference between the two wave components received through slits $1, 2$. This relative phase may shift the interference pattern, and this is experimentally observed. It is immediately seen that if the magnetic flux were quantized according to \begin{equation} \Phi=\frac{2 \pi \hbar c}{e}n, \:\:\: n=0, \pm1, \pm2, ....\label{8} \end{equation} the above phase difference would vanish and no shift of the interference pattern would be observed, a result which is in sharp contrast with observation. Therefore, we conclude that flux quantization does not occur inside a long solenoid. However, a question is immediately raised: why is it that the uniqueness of the wave function of the charged particle $e$ after a $2 \pi$ rotation around the long solenoid does not lead to the flux quantization (\ref{8})? \section{Simply and non-simply connected regions} The wave function of the Cooper pairs inside a superconductor ring should vanish on the interior walls of the ring, which simply means that there are no Cooper pairs outside the ring, in particular in the interior hole encompassing the magnetic flux. Since there is no wave function outside the ring, the closed path integral around the magnetic flux is confined to the interior region of the ring where the Cooper pairs reside. This means the relevant region is not simply connected, so that one cannot continuously contract the closed path to a point. Therefore, the phase acquired by the wave function in this case is a topological one, and the uniqueness condition of the wave function leads the magnetic flux to be quantized according to (\ref{6}). In the case of a long solenoid, however, there is no such confinement of the closed integral taken over a path around the solenoid, because there is no restriction on the presence of the electron in the region of magnetic flux inside the solenoid. This is because, contrary to the superconductor ring, there is no boundary condition imposed on the wave function of the electron. 
In principle, after a $2 \pi$ rotation of an electron around the long solenoid one can continuously contract the resulting closed path (and hence the total phase) to zero with no difficulty, because the space in this case is simply connected: the electron can, in principle, be present everywhere in space. Therefore, the total phase is no longer a topological phase, and so one cannot expect the quantization condition (\ref{8}); rather, we observe a continuous magnetic flux whose continuous variation leads to the Aharonov-Bohm effect in the two-slit experiment with a long solenoid containing magnetic flux. It is worth noticing that although the magnetic flux in the superconductor ring is quantized, the Aharonov-Bohm effect is observed for a superconductor ring as well. This is because the flux is an integer multiple of $\frac{\pi \hbar c}{e}$, and substituting this into the relative phase in (\ref{7}) shows that a phase shift occurs for odd numbers of flux quanta. \section*{Acknowledgment} The author would like to thank very much Prof. M. Berry for his useful comments. \end{document}
\begin{document} \title{The Sasaki Join, Hamiltonian 2-forms, and Sasaki-Einstein Metrics} \author{Charles P. Boyer and Christina W. T{\o}nnesen-Friedman}\thanks{Both authors were partially supported by grants from the Simons Foundation, CPB by (\#245002) and CWT-F by (\#208799)} \address{Charles P. Boyer, Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131.} \email{[email protected]} \address{Christina W. T{\o}nnesen-Friedman, Department of Mathematics, Union College, Schenectady, New York 12308, USA } \email{[email protected]} \keywords{Sasaki-Einstein metrics, join construction, Hamiltonian 2-form} \subjclass[2000]{Primary: 53D42; Secondary: 53C25} \maketitle \markboth{Sasaki Join, K\"ahler Admissible}{Charles P. Boyer and Christina W. T{\o}nnesen-Friedman} \begin{abstract} By combining the join construction \cite{BGO06} from Sasakian geometry with the Hamiltonian 2-form construction \cite{ApCaGa06} from K\"ahler geometry, we recover Sasaki-Einstein metrics discovered by physicists \cite{GMSW04b}. Our geometrical approach allows us to give an algorithm for computing the topology of these Sasaki-Einstein manifolds. In particular, we explicitly compute the cohomology rings for several cases of interest and give a formula for homotopy equivalence in one particular 7-dimensional case. We also show that our construction gives at least a two-dimensional cone of both Sasaki-Ricci solitons and extremal Sasaki metrics. \end{abstract} \section{Introduction} Recently the authors have been able to obtain many new results on extremal Sasakian geometry \cite{BoTo12b,BoTo11,BoTo13} by giving a geometric construction that combines the `join construction' of \cite{BG00a,BGO06} with the `admissible construction' of Hamiltonian 2-forms for extremal K\"ahler metrics described in \cite{ApCaGa06,ACGT04,ACGT08,ACGT08c}. In particular, we have proven the existence of Sasaki metrics of constant scalar curvature in a countable number of contact structures of Sasaki type with 2-dimensional Sasaki cones on $S^3$-bundles over Riemann surfaces of positive genus. In this case the contact bundle ${\mathcal D}$ satisfies $c_1({\mathcal D})\neq 0$. These are all the 2-dimensional Sasaki cones on $S^3$-bundles over Riemann surfaces that one can construct through our method. In the present paper we consider the case of certain lens space bundles over a compact positive K\"ahler-Einstein manifold whose contact bundle has vanishing first Chern class. In this case a constant scalar curvature Sasaki metric (CSC) implies the existence of a Sasaki-Einstein metric in the CSC ray. Earlier, physicists \cite{GHP03,GMSW04a,GMSW04b,CLPP05,MaSp05b} working on the AdS/CFT correspondence had discovered a method of constructing Sasaki-Einstein metrics in dimension $2n+3$ from a K\"ahler-Einstein metric on a $2n$-dimensional manifold. Their method is very closely related to the Hamiltonian 2-form approach of \cite{ApCaGa06} (cf. Section 4.3 of \cite{Spa10}). In the present paper we show that the physicists' results fit naturally into our geometric construction. Furthermore, our geometric approach leads naturally to an algorithm for computing the cohomology ring of these $(2n+3)$-manifolds. In particular, we explicitly compute the cohomology ring of all such examples in dimension $7$, showing that there are a countably infinite number of distinct homotopy types of such manifolds. 
Our procedure involves taking a join of a regular Sasaki-Einstein manifold $M$ with the weighted 3-sphere $S^3_{\bf w}$, that is, $S^3$ with its standard contact structure, but with a weighted contact 1-form whose Reeb vector field generates rotations with different weights for the two complex coordinates $z_1,z_2$ of $S^3\subset \mathbbc^2$. We call this the $S^3_{\bf w}$-join. \begin{theorem}\label{admjoinse} Let $M_{l_1,l_2,{\bf w}}=M\star_{l_1,l_2}S^3_{\bf w}$ be the $S^3_{\bf w}$-join with a regular Sasaki manifold $M$ which is an $S^1$-bundle over a compact positive K\"ahler-Einstein manifold $N$ with a primitive K\"ahler class $[\omega_N]\in H^2(N,\mathbbz)$. Assume that the relatively prime positive integers $(l_1,l_2)$ are the relative Fano indices given explicitly by $$l_{1}=\frac{{\mathcal I}_N}{\gcd(w_1+w_2,{\mathcal I}_N)},\qquad l_2=\frac{w_1+w_2}{\gcd(w_1+w_2,{\mathcal I}_N)},$$ where ${\mathcal I}_N$ denotes the Fano index of $N$. Then for each vector ${\bf w}=(w_1,w_2)\in \mathbbz^+\times\mathbbz^+$ with relatively prime components satisfying $w_1>w_2$ there exists a Reeb vector field $\xi_{\bf v}$ in the 2-dimensional ${\bf w}$-Sasaki cone on $M_{l_1,l_2,{\bf w}}$ such that the corresponding Sasakian structure ${\oldmathcal S}=(\xi_{\bf v},\eta_{\bf v},\Phi,g)$ is Sasaki-Einstein. Additionally (up to isotopy) the Sasakian structure associated to every single ray, $\xi_{\bf v}$, in the ${\bf w}$-Sasaki cone is a Sasaki-Ricci soliton as well as extremal. \end{theorem} By the ${\bf w}$-Sasaki cone we mean the subcone of the Sasaki cone spanned by the 2-vector ${\bf w}$. Note also that if $N$ has no Hamiltonian vector fields, then the ${\bf w}$-Sasaki cone is the full Sasaki cone, so in this case the Sasaki cone is exhausted by extremal Sasaki metrics as well as Sasaki-Ricci solitons. Most of the SE structures in Theorem \ref{admjoinse} are irregular. Such structures have irreducible transverse holonomy \cite{HeSu12b}, implying there can be no generalization of the join procedure to the irregular case. We must deform within the Sasaki cone to obtain them. Furthermore, it follows from \cite{RoTh11,CoSz12} that constant scalar curvature Sasaki metrics (hence, SE) imply a certain K-semistability. An easy case in every possible dimension is joining with the standard odd-dimensional sphere, in which case we obtain: \begin{theorem}\label{setop} For each pair of ordered ($w_1>w_2$) relatively prime positive integers $(w_1,w_2)$ there is a $(2r+3)$-dimensional Sasaki-Einstein manifold $M^{2r+3}_{l_1,l_2,{\bf w}}$ whose cohomology ring is $$\mathbbz[x,y]/(w_1w_2l_1^2x^2,x^{r+1},x^2y,y^2)$$ where $x,y$ are classes of degree $2$ and $2r+1$, respectively, and $(l_1,l_2)$ are given by $$(l_1,l_2)=\bigl(\frac{r+1}{\gcd(w_1+w_2,r+1)},\frac{w_1+w_2}{\gcd(w_1+w_2,r+1)}\bigr).$$ \end{theorem} This theorem provides us with many examples of Sasaki-Einstein manifolds with isomorphic cohomology rings. We have \begin{corollary}\label{cor1} Let $k$ denote the length of the prime decomposition of $w_1w_2$; then there are $2^{k-1}$ simply connected Sasaki-Einstein manifolds of dimension $2r+3$ determined by Theorem \ref{setop} with isomorphic cohomology rings such that $H^4$ has order $w_1w_2l_1^2$. \end{corollary} For the manifolds $M^{7}_{l_1,l_2,{\bf w}}$ of dimension 7 ($r=2$) much more is known about the topology. These are special cases of what are called generalized Witten spaces in \cite{Esc05}. 
In particular, the homotopy type was given in \cite{Kru97}, while the homeomorphism and diffeomorphism type was given in \cite{Esc05}. For our subclass admitting Sasaki-Einstein metrics we give necessary and sufficient conditions on ${\bf w}$ for homotopy equivalence when the order of $H^4$ is odd in Proposition \ref{homequivprop} below. Thus, we answer in the affirmative the question of existence of Einstein metrics on certain generalized Witten manifolds. Aside from the ${\bf w}$-join with the 5-sphere, for dimension 7 our method produces Sasaki-Einstein manifolds $M_{k,{\bf w}}^7$ on lens space bundles over all del Pezzo surfaces $\mathbbc\mathbbp^2\# k\overline{\mathbbc\mathbbp}^2$ that admit a K\"ahler-Einstein metric. In particular \begin{theorem}\label{delPezzo} For each relatively prime pair $(w_1,w_2)$ there exist Sasaki-Einstein metrics on the 7-manifolds $M^7_{k,{\bf w}}$ with cohomology ring $$H^q(M^7_{k,{\bf w}},\mathbbz)\approx \begin{cases} \mathbbz & \text{if $q=0,7$;} \\ \mathbbz^{k+1} & \text{if $q=2,5$;} \\ \mathbbz^k_{w_1+w_2}\times \mathbbz_{w_1w_2} & \text{if $q=4;$} \\ 0 & \text{if otherwise}, \end{cases}$$ with the ring relations determined by $\alpha_i\cup \alpha_j=0$, $w_1w_2s^2=0$, $(w_1+w_2)\alpha_i\cup s=0$, where $\alpha_i,s$ are the $k+1$ classes of degree two with $i=1,\cdots,k$ and $k=3,\cdots,8$. Furthermore, when $4\leq k\leq 8$ the local moduli space of Sasaki-Einstein metrics has real dimension $4(k-4)$. \end{theorem} The paper is organized as follows. Section 2 gives a brief review of ruled manifolds that are the projectivization of a complex rank 2 vector bundle of the form $S=\mathbbp(\BOne\oplus L)$ over a K\"ahler-Einstein manifold $N$. These admit Hamiltonian 2-forms that give rise to the K\"ahler admissible construction that is necessary for our procedure. In the somewhat long Section 3 we describe our join construction, in particular, the join with the weighted 3-sphere, $S^3_{\bf w}$. We then discuss in detail the orbit structure of quasi-regular Reeb vector fields in the ${\bf w}$-Sasaki cone. Generally, the quotients appear as orbifold log pairs $(S,\Delta)$ which fiber over $N$ with fiber an orbifold of the form $\mathbbc\mathbbp^1[v_1,v_2]/\mathbbz_m$, where $\Delta$ is the branch divisor, and $\mathbbc\mathbbp^1[v_1,v_2]$ is a weighted projective space. Moreover, we give conditions for the existence of a regular Reeb vector field, that is, when the quotient has a trivial orbifold structure. In Section 4 we discuss the topology of the joins, giving an algorithm for computing the integral cohomology ring. We work out the details in several cases, which give the proofs of Theorems \ref{setop}, \ref{delPezzo}, and Corollary \ref{cor1}. Finally, in Section 5 we present the details of the admissibility conditions that prove Theorem \ref{admjoinse}. \begin{ack} The authors would like to thank David Calderbank for discussions and Matthias Kreck for providing us with a proof that the first Pontrjagin class is a homeomorphism invariant. \end{ack} \section{Ruled Manifolds} In this section we consider ruled manifolds of the following form. Let $(N,\omega_N)$ be a compact K\"ahler manifold with primitive integer K\"ahler class $[\omega_N]$, that is, a Hodge manifold. Consider a rank two complex vector bundle of the form $E=\BOne\oplus L$ where $L$ is a complex line bundle on $N$ and $\BOne$ denotes the trivial bundle. 
By a ruled manifold we shall mean the projectivization $S=\mathbbp(\BOne\oplus L)$. We can view $S$ as a compactification of the complex line bundle $L$ on $N$ by adding the `section at infinity'. For $x\in N$ we let $(u,v)$ denote a point of the fiber $E_x=\BOne\oplus L_x$. There is a natural action of $\mathbbc^*$ (hence, $S^1$) on $E$ given by $(c,z)\mapsto (c,\lambda z)$ with $\lambda\in \mathbbc^*$. The action $z\mapsto \lambda z$ is a complex irreducible representation of $\mathbbc^*$ determined by the line bundle $L$. Such representations (characters) are labeled by the integers $\mathbbz$. Thus, we write $L=L_n$ for $n\in \mathbbz$ and refer to $n$ as the `degree' of $L$. \subsection{A Construction of Ruled Manifolds}\label{ruledsec} We now give a construction of such manifolds. Let $S^1\ra{1.6} M\ra{1.6} N$ be the circle bundle over $N$ determined by the class $[\omega_N]\in H^2(N,\mathbbz)$. We denote the $S^1$-action by $(x,u)\mapsto (x,e^{i\theta}u)$. Now represent $S^3\subset \mathbbc^2$ as $|z_1|^2+|z_2|^2=1$ and consider an $S^1$-action on $M\times S^3$ given by $(x,u;z_1,z_2)\mapsto (x,e^{i\theta}u;z_1,e^{in\theta}z_2)$. There is also the standard $S^1$-action on $S^3$ given by $(z_1,z_2)\mapsto (e^{i\chi}z_1,e^{i\chi}z_2)$ giving a $T^2$-action on $M\times S^3$ defined by \begin{equation}\label{T2act} (x,u;z_1,z_2)\mapsto (x,e^{i\theta}u;e^{i\chi}z_1,e^{i(\chi+n\theta)}z_2). \end{equation} \begin{lemma}\label{T2quot} The quotient by the $T^2$-action of Equation (\ref{T2act}) is the projectivization $S_n=\mathbbp(\BOne\oplus L_n)$. \end{lemma} \begin{proof} First we see from (\ref{T2act}) that the action is free, so there is a natural bundle projection $(M\times S^3)/T^2\ra{1.6} N$ defined by $\pi(x,[u;z_1,z_2])=x$ where the bracket denotes the $T^2$ equivalence class. The fiber is $\pi^{-1}(x)=[u;z_1,z_2]$ which, since $u$ parameterizes a circle, is identified with $S^3/S^1=\mathbbc\mathbbp^1$. This bundle is trivial if and only if $n=0$, and $n$ labels the irreducible representation of $S^1$ on the line bundle $L_n$. \end{proof} We can take the line bundle $L_1$ to be any primitive line bundle in ${\rm Pic}(N)$. In particular, we are interested in taking $L_1$ to be the line bundle associated to the primitive cohomology class $[\omega_N]\in H^2(N,\mathbbz)$. Then we have \begin{lemma}\label{c1L} The following relation holds: $c_1(L_n)=n[\omega_N]$. \end{lemma} \begin{proof} Equation (\ref{T2act}) implies that the $S^1$-action on the line bundle $L_n$ is given by $z\mapsto e^{in\theta}z$. But we know from the definition of $M$ that it is the unit sphere in the line bundle over $N$ corresponding to $n=1$, and this corresponds to the class $[\omega_N]$, that is, $c_1(L_1)=[\omega_N]$. Thus, $c_1(L_n)=n[\omega_N]$. \end{proof} \subsection{The Admissible Construction}\label{adm-basics} Let us now suppose that $(N,\omega_N)$ is K\"ahler-Einstein with K\"ahler metric $g_N$ and Ricci form $\rho_N = 2\pi{\mathcal I}_N \omega_N$, where ${\mathcal I}_N$ denotes the Fano index. We will also assume that $n$ from Section \ref{ruledsec} is non-zero. Then $(\omega_{N_n},g_{N_n}):=(2n\pi \omega_N, 2n\pi g_N)$ satisfies that $( g_{N_n}, \omega_{N_n})$ or $(- g_{N_n}, -\omega_{N_n})$ is a K\"ahler structure (depending on the sign of $n$). 
In either case, we let $(\pm g_{N_n}, \pm \omega_{N_n})$ refer to the K\"ahler structure. We denote the real dimension of $N$ by $2 d_{N}$ and write the scalar curvature of $\pm g_{N_n}$ as $\pm 2 d_{N_n} s_{N_n}$. [So, if e.g. $-g_{N_n}$ is a K\"ahler structure with positive scalar curvature, $s_{N_n}$ would be negative.] Since the (scale invariant) Ricci form is given by $\rho_N=s_{N_n}\omega_{N_n}$, it is easy to see that $s_{N_n}={\mathcal I}_N/n$. Now Lemma \ref{c1L} implies that $c_{1}(L_n)= [\omega_{N_n}/2\pi]$. Then, following \cite{ACGT08}, the total space of the projectivization $S_n=\mathbbp(\BOne\oplus L_n)$ is called {\it admissible}. On these manifolds, a particular type of K\"ahler metric on $S_n$, also called {\it admissible}, can now be constructed \cite{ACGT08}. We shall describe this construction in Section \ref{KE} where we will use it to prove Theorem \ref{admjoinse}. An admissible K\"ahler manifold is a special case of a K\"ahler manifold admitting a so-called Hamiltonian $2$-form. Let $(S,J,\omega,g)$ be a K\"ahler manifold of real dimension $2n$. Recall \cite{ApCaGa06} that on $(S,J,\omega,g)$ a {\it Hamiltonian 2-form} is a $J$-invariant 2-form $\phi$ that satisfies the differential equation \begin{equation}\label{ham2form} 2\nabla_X\phi = d{\rm tr}~\phi\wedge (JX)^\flat-d^c{\rm tr}~\phi\wedge X^\flat \end{equation} for any vector field $X$. Here $X^\flat$ indicates the 1-form dual to $X$, and ${\rm tr}~\phi$ is the trace with respect to the K\"ahler form $\omega$, i.e. ${\rm tr}~\phi=g(\phi,\omega)$ where $g$ is the K\"ahler metric. Note that if $\phi$ is a Hamiltonian $2$-form, then so is $\phi_{a,b} = a\phi + b\omega$ for any constants $a,b \in \mathbbr$. A Hamiltonian $2$-form $\phi$ induces an isometric Hamiltonian $l$-torus action on $S$ for some $0\leq l \leq n$. To see this, we follow \cite{ApCaGa06} and use the K\"ahler form $\omega$ to identify $\phi$ with a Hermitian endomorphism. Consider now the elementary symmetric functions $\sigma_1,\ldots,\sigma_n$ of its $n$ eigenvalues. The Hamiltonian vector fields $K_i=J{\rm grad}\,\sigma_i$ are Killing with respect to the K\"ahler metric $g$. Moreover, the Poisson brackets $\{\sigma_i,\sigma_j\}$ all vanish and so, in particular, the vector fields $K_1,...,K_n$ commute. In the case where $K_1,...,K_n$ are independent, we have that $(S,J,\omega,g)$ is toric. In fact, it is a very special kind of toric manifold, namely {\em orthotoric} \cite{ApCaGa06}. In general, it is proved in \cite{ApCaGa06} that there exists a number $0 \leq l \leq n$ such that the span of $K_1,...,K_n$ is everywhere at most $l$-dimensional and that on an open dense set $S^0$ the fields $K_1,...,K_l$ are linearly independent. Now $l$ is called the {\it order} of $\phi$; we have $0\leq l \leq n$. The admissible metrics as described in Section \ref{KE} admit a Hamiltonian $2$-form of order one (see Remark \ref{hamiltonian2form}). \section{The Join Construction} The join construction was first introduced in \cite{BG00a} for Sasaki-Einstein manifolds, and later generalized to any quasi-regular Sasakian manifolds in \cite{BGO06} (see also Section 7.6.2 of \cite{BG05}). However, as pointed out in \cite{BoTo11} it is actually a construction involving the orbifold Boothby-Wang construction \cite{BoWa,BG00a}, and so applies to quasi-regular strict contact structures. Although it is quite natural to do so, we do not need to fix the transverse (almost) complex structure. 
Moreover, in \cite{BoTo13} it was shown that in the special case of $S^3$-bundles over Riemann surfaces a twisted transverse complex structure on a regular Sasakian manifold can be realized by a product transverse complex structure on a certain quasi-regular Sasakian structure in the same Sasaki cone. We are interested in exploring just how far this analogy generalizes. The general join construction proceeds as follows. Given compact quasi-regular contact manifolds $(M_i,\eta_i)$ where ${\oldmathcal S}_i$ is a Sasakian structure with contact forms $\eta_i$ and $\dim M_i=2n_i+1$, together with a pair of relatively prime integers $(l_1,l_2)$, we construct new contact orbifolds $M_1\star_{l_1,l_2}M_2$ of dimension $2(n_1+n_2)+1$ as follows: the Reeb vector fields $\xi_i$ of $\eta_i$ generate a locally free circle action whose quotient ${\oldmathcal Z}_i$ is a symplectic orbifold with symplectic form $\omega_i$ satisfying $\pi_i^*\omega_i=d\eta_i$ where $\pi_i:M_i\ra{1.6} {\oldmathcal Z}_i$ is the natural quotient projection. The product orbifold ${\oldmathcal Z}_1\times {\oldmathcal Z}_2$ has symplectic structures $\omega_{l_1,l_2}=l_1\omega_1+l_2\omega_2$ where the forms $\omega_i$ define primitive cohomology classes in $H^2_{orb}({\oldmathcal Z}_i,\mathbbz)$. This implies that the cohomology class $[\omega_{l_1,l_2}]$ is also primitive. The orbifold Boothby-Wang construction \cite{BG00a} then gives a contact structure with contact form $\eta_{l_1,l_2}$ on the total space $M_1\star_{l_1,l_2}M_2$ of the $S^1$-orbibundle over ${\oldmathcal Z}_1\times{\oldmathcal Z}_2$ which satisfies $d\eta_{l_1,l_2}=\pi^*\omega_{l_1,l_2}$ where $\pi:M_1\star_{l_1,l_2}M_2\ra{1.6} {\oldmathcal Z}_1\times{\oldmathcal Z}_2$ is the natural orbibundle projection. We also note that $\eta_{l_1,l_2}$ is a connection 1-form in this orbibundle. The orbifold $M_1\star_{l_1,l_2}M_2$ can also be given as the quotient space of a locally free circle action on the manifold $M_1\times M_2$. Let $\xi_i$ denote the Reeb vector field of the contact manifold $(M_i,\eta_i)$. We consider $\eta_{l_1,l_2}=l_1\eta_1+l_2\eta_2$ as a 1-form on the product $M_1\times M_2$. From the Reeb vector fields $\xi_i$ we construct the vector field $L_{l_1,l_2}=\frac{1}{2l_1}\xi_1-\frac{1}{2l_2}\xi_2$ which induces a locally free circle action on $M_1\times M_2$. This action is free if $\gcd(l_2\upsilon_1,l_1\upsilon_2)=1$ in which case the quotient is a smooth manifold. It is called the {\it $(l_1,l_2)$-join} of $M_1$ and $M_2$ and denoted by $M_1\star_{l_1,l_2}M_2$. Here $\upsilon_i$ denotes the {\it order} of $M_i$, that is, the lcm of the orders of the isotropy groups of the locally free action. So far nothing has been said about Sasakian or K\"ahlerian structures. However, given any symplectic structure, one can choose a compatible almost complex structure, giving an almost K\"ahler manifold. Similarly, given a quasi-regular contact structure one can choose a compatible transverse almost complex structure giving a K-contact manifold. Furthermore, the transverse almost complex structure on $M$ is the horizontal lift of the almost complex structure on $N$, and when these structures are integrable $M$ is Sasakian and $N$ is K\"ahlerian. We have the following easily verified facts about the Sasaki cone and the joins of Sasaki manifolds: \begin{itemize} \item The join of extremal Sasaki metrics gives an extremal Sasaki metric. 
\item The join of CSC Sasaki metrics gives a CSC Sasaki metric. \item Let $(M_i,{\oldmathcal S}_i)$ be quasi-regular Sasakian manifolds with Sasaki cones $\kappa_i$, respectively. Let $\kappa_\star$ denote the Sasaki cone of the join ${\oldmathcal S}_1\star_{l_1,l_2}{\oldmathcal S}_2$. Then we have $\dim\kappa_\star=\dim\kappa_1+\dim\kappa_2-1$. \item If the dimension of the Sasaki cone $\kappa({\oldmathcal S})$ is greater than $1$ then ${\oldmathcal S}$ is of positive or indefinite type. \item If $\dim\mathfrak{aut}({\oldmathcal S})>1$, then $\dim\kappa({\oldmathcal S})>1$. \end{itemize} \subsection{The $S^3_{\bf w}$-Join} Here we apply the $(l_1,l_2)$-join construction to the case at hand, namely, the $(l_1,l_2)$-join of a regular Sasaki manifold $M$ with the weighted 3-sphere $S^3_{\bf w}$. Here $M$ is the total space of an $S^1$-bundle over a smooth projective algebraic variety $N$ with K\"ahler form $\omega_N$. The Reeb vector field $\xi_2$ of the weighted sphere $S^3_{\bf w}$ is given by $\xi_{\bf w}=\sum_{i=1}^2w_iH_i$ where $H_i$ is the vector field on $S^3$ induced by $y_i\partial_{x_i}-x_i\partial_{y_i}$ on $\mathbbr^4$, and $w_1,w_2$ are relatively prime positive integers. In this case the smoothness condition becomes $\gcd(l_2,l_1w_1w_2)=1$ which, since $\gcd(l_1,l_2)=1$, is equivalent to $\gcd(l_2,w_i)=1$ for $i=1,2$. We consider the quotient $M\star_{l_1,l_2}S^3_{\bf w}$ of $M\times S^3$ by the $S^1$-action generated by the vector field $L_{l_1,l_2}$. Moreover, the 1-form $\eta_{l_1,l_2}$ passes to the quotient and gives it a contact structure. The Reeb vector field of $\eta_{l_1,l_2}$ is the vector field \begin{equation}\label{Reebjoin} \xi_{l_1,l_2}=\frac{1}{2l_1}\xi_1+\frac{1}{2l_2}\xi_2, \end{equation} and we have the commutative diagram \begin{equation}\label{s2comdia} \begin{matrix} M\times S^3 &&& \\ &\searrow\pi_L && \\ \decdnar{\pi_{12}} && M\star_{l_1,l_2}S^3_{\bf w} &\\ &\swarrow\pi && \\ N\times\mathbbc\mathbbp^1[{\bf w}] &&& \end{matrix} \end{equation} where the $\pi$s are the obvious projections, and $\mathbbc\mathbbp^1[{\bf w}]$ is the weighted projective space usually written as $\mathbbc\mathbbp(w_1,w_2)$. \subsection{The Cohomological Einstein Condition} Let us compute the first Chern class of our induced contact structure ${\mathcal D}_{l_1,l_2,{\bf w}}$ on $M\star_{l_1,l_2}S^{3}_{\bf w}$. For this we compute the orbifold first Chern class of $N\times \mathbbc\mathbbp^1[{\bf w}]$, viz. \begin{equation}\label{c1N} c_1^{orb}(N\times \mathbbc\mathbbp^1[{\bf w}])=c_1(N)+\frac{|{\bf w}|}{w_1w_2}PD(E) \end{equation} as an element of $H^2(N\times \mathbbc\mathbbp^1[{\bf w}],\mathbbq)\approx H^2(N,\mathbbq)\oplus H^2(\mathbbc\mathbbp^1[{\bf w}],\mathbbq)$. 
Now the K\"ahler form on $N\times \mathbbc\mathbbp^1[\betafw]$ is ${\mathfrak a}mmaro_{l_1,l_2}=l_1{\mathfrak a}mmaro_N+ l_2{\mathfrak a}mmaro_\betafw$ where ${\mathfrak a}mmaro_\betafw$ is the standard K\"ahler form on $\mathbbc\mathbbp^1[\betafw]$ such that $p^*[{\mathfrak a}mmaro_\betafw]$ is a positive generator in $H^2_{orb}(\mathbbc\mathbbp^1[\betafw],\mathbbz)$ where $p:\mathsf{B}\mathbbc\mathbbp^1[\betafw]\ra{1.6} \mathbbc\mathbbp^1[\betafw]$ is the natural orbifold classifying projection, that is, $\mathsf{B}\mathbbc\mathbbp^1[\betafw]$ is the classfying space associated to any groupoid representing the orbifold $ \mathbbc\mathbbp^1[\betafw]$ \cite{Hae84}. Pulling ${\mathfrak a}mmaro_{l_1,l_2}$ back to the join $M_{l_1,l_2,\betafw}= M\star_{l_1,l_2}S^{3}_\betafw$ we have $\pi^*{\mathfrak a}mmaro_{l_1,l_2}=d\eta_{l_1,l_2,\betafw}$ implying that $l_1\pi^*[{\mathfrak a}mmaro_N]+l_2\pi^*[{\mathfrak a}mmaro_\betafw]=0$ in $H^2(M_{l_1,l_2,\betafw},\mathbbz)$. So letting ${\mathfrak a}mmaI$ denote the ideal generated by $l_1\pi^*[{\mathfrak a}mmaro_N]+l_2\pi^*[{\mathfrak a}mmaro_\betafw]$ we have \betaegin{equation}\label{c1cald} c_1({\mathcal D}_{l_1,l_2,\betafw})=\betaigl(\pi^*c_1(N)+|\betafw|\pi^*[{\mathfrak a}mmaro_\betafw]\betaigr)/{\mathfrak a}mmaI. \end{equation} Now we assume that $N$ is K\"ahler-Einstein with K\"ahler form ${\mathfrak a}mmaro_N$ which for convenience we assume to define a primitive class $[{\mathfrak a}mmaro_N]\in H^2(N,\mathbbz)$. We also assume that $(N,{\mathfrak a}mmaro_N)$ is monotone, that is, that $c_1(N)={\mathcal I}_N[{\mathfrak a}mmaro_N]$ where ${\mathcal I}_N$ is the Fano index of $N$. So Equation (\ref{c1cald}) becomes \betaegin{equation}\label{c1cald2} c_1({\mathcal D}_{l_1,l_2,\betafw})=\betaigl({\mathcal I}_N\pi^*[{\mathfrak a}mmaro_N]+|\betafw|\pi^*[{\mathfrak a}mmaro_\betafw]\betaigr)/{\mathfrak a}mmaI. \end{equation} The cohomological Einstein condition is $c_1({\mathcal D}_{l_1,l_2,\betafw})=0$ or equivalently that $\betaigl({\mathcal I}_N\pi^*[{\mathfrak a}mmaro_N]+|\betafw|\pi^*[{\mathfrak a}mmaro_\betafw]\betaigr)$ lies in the ideal ${\mathfrak a}mmaI$. This implies the condition $l_2{\mathcal I}_N=|\betafw|l_{1}$. We have arrived at: \betaegin{lemma}\label{c10} Necessary conditions for the Sasaki manifold $M_{l_1,l_2,\betafw}$ to admit a Sasaki-Einstein metric is that ${\mathcal I}_N>0$, and that $$l_2=\frac{|\betafw|}{{\mathfrak a}mmacd(|\betafw|,{\mathcal I}_N)},\qquad l_{1}=\frac{{\mathcal I}_N}{{\mathfrak a}mmacd(|\betafw|,{\mathcal I}_N)}.$$ \end{lemma} \betaegin{remark}\label{relindex} The integers $l_1,l_2$ in Lemma \ref{c10} were called {\it relative Fano indices} in \cite{BG00a}. For the remainder of the paper we assume that these integers take the values given by Lemma \ref{c10}. \end{remark} \subsection{The Tori Actions} Consider the action of the $3$-dimensional torus $T^{3}$ on the product $M\times S^{3}_\betafw$ defined by \betaegin{equation}\label{Taction} (x,u;z_1,z_2)\mapsto (x,e^{il_2\tilde{h}eta}u;e^{i(\phi_1-l_1w_1\tilde{h}eta)}z_1,e^{i(\phi_2-l_1w_2\tilde{h}eta)}z_2). \end{equation} The Lie algebra ${\mathfrak a}mmat_{3}$ of $T^{3}$ is generated by the vector fields $L_{l_1,l_2,\betafw},H_1,H_2$. Our join manifold $M_{l_1,l_2,\betafw}=M\star_{l_1,l_2}S^{3}_\betafw$ is the quotient by the circle subgroup $S^1_\tilde{h}eta$ obtained by setting $\phi_i=0$ for $i=1,2$ in Equation (\ref{Taction}). We can realize the quotient procedure in two stages. 
First, divide by the cyclic subgroup $\mathbbz_{l_2}\subset S^1_\theta$ to get $M\times L(l_2;l_1{\bf w})$ and then divide by the quotient $S^1_\theta/\mathbbz_{l_2}$ to realize $M_{l_1,l_2,{\bf w}}$ as an associated fiber bundle to the principal $S^1$-bundle over $N$ with fiber $L(l_2;l_1{\bf w})$. Here $L(l_2;l_1{\bf w})$ is the lens space $L(l_2;l_1w_1,l_1w_2)$. The infinitesimal generator of the $S^1_\theta$ action is given by \begin{equation}\label{infq+2action} L_{l_1,l_2}=\frac{1}{2l_1}\xi_1-\sum_{j=1}^{2}\frac{1}{2l_2}w_jH_j, \end{equation} and we denote its Lie subalgebra by $\{L_{l_1,l_2,{\bf w}}\}$ to indicate its dependence on the weight vector ${\bf w}$. We have an exact sequence of Abelian Lie algebras \begin{equation}\label{Liealgexact} 0\ra{2.5}\{L_{l_1,l_2,{\bf w}}\}\ra{2.5} {\mathfrak t}_{3}\ra{2.5} {\mathfrak t}_{2}\ra{2.5} 0. \end{equation} The quotient algebra ${\mathfrak t}_{2}$ is generated by $H_1,H_2$. So as in \cite{BoTo13} the Sasaki cone ${\mathfrak t}_{2}^+$ on $M_{l_1,l_2,{\bf w}}$ is inherited from the Sasaki cone on $S^{3}$. We refer to this 2-dimensional cone as the {\it ${\bf w}$-Sasaki cone} or just {\it ${\bf w}$-cone}. We remark that ${\mathfrak t}_{2}^+$ may not be the full Sasaki cone, since the Sasakian structure on $M$ may have a Sasaki automorphism group whose dimension is greater than one. However, here we shall only work with the restricted Sasaki cone ${\mathfrak t}_{2}^+$. Next we consider the 2-torus action generated by setting $\phi_i=v_i\phi$ in Equation (\ref{Taction}) where for now $v_1,v_2$ are relatively prime positive integers. As in the proof of Proposition 3.8 of \cite{BoTo13} we get a commutative diagram \begin{equation}\label{comdia1} \begin{matrix} M\times S^{3}_{\bf w} &&& \\ &\searrow && \\ \decdnar{\pi_B} && M_{l_1,l_2,{\bf w}} &\\ &\swarrow && \\ B_{l_1,l_2,{\bf v},{\bf w}} &&& \end{matrix} \end{equation} where $B_{l_1,l_2,{\bf v},{\bf w}}$ is a bundle over $N$ with fiber a weighted projective space, and $\pi_B$ denotes the quotient projection by $T^2$. The Lie algebra of this $T^2$ is generated by $L_{l_1,l_2,{\bf w}},\sum_jv_jH_j$. The $T^2$ action describing the $\pi_B$ quotient of Diagram (\ref{comdia1}) is given in this case, as in Equation (\ref{Taction}), by \begin{equation}\label{T2action2} (x,u;z_1,z_2)\mapsto (x,e^{i\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)}\theta}u;e^{i(v_1\phi-\frac{{\mathcal I}_N}{\gcd(|{\bf w}|,{\mathcal I}_N)}w_1\theta)}z_1,e^{i(v_2\phi-\frac{{\mathcal I}_N}{\gcd(|{\bf w}|,{\mathcal I}_N)}w_2\theta)}z_2). \end{equation} First divide by the finite subgroup $\mathbbz_{l_2}$ of $S^1_\theta$ giving $M\times L(l_2;l_1w_1,l_1w_2)$, where $L(l_2;l_1w_1,l_1w_2)$ is the lens space. Then the residual $S^1_\theta/\mathbbz_{l_2}\approx S^1$ action is \begin{equation}\label{resS1act} (x,u;z_1,z_2)\mapsto (x,e^{i\theta}u;[e^{-i\frac{l_1w_1}{l_2}\theta}z_1,e^{-i\frac{l_1w_2}{l_2}\theta}z_2]). \end{equation} This describes $$M_{l_1,l_2,{\bf w}}=M\times_{S^1}L(l_2;l_1w_1,l_1w_2)$$ as an associated fiber bundle to the principal $S^1$-bundle $M\ra{1.5} N$ with fiber $L(l_2;l_1w_1,l_1w_2)$ over the K\"ahler manifold $N$. The brackets in Equation (\ref{resS1act}) denote the equivalence class defined by $(z'_1,z'_2)\sim (z_1,z_2)$ if $(z'_1,z'_2)=(\lambda^{l_1w_1}z_1,\lambda^{l_1w_2}z_2)$ for $\lambda^{l_2}=1$. 
We now turn to the $T^2$ action of $S^1_\phi\times (S^1_\theta/\mathbbz_{l_2})$ on $M\times L(l_2;l_1w_1,l_1w_2)$ given by \begin{equation}\label{T2action3} (x,u;z_1,z_2)\mapsto (x,e^{i\theta}u;[e^{i(v_1\phi-\frac{l_1w_1}{l_2}\theta)}z_1,e^{i(v_2\phi-\frac{l_1w_2}{l_2}\theta)}z_2]). \end{equation} This gives rise to the analogue of Diagram (\ref{comdia1}), \begin{equation}\label{comdia2} \begin{matrix} M\times L(l_2;l_1w_1,l_1w_2) &&& \\ &\searrow && \\ \decdnar{\pi_B} && M_{l_1,l_2,{\bf w}} &\\ &\swarrow && \\ B_{l_1,l_2,{\bf v},{\bf w}} &&& \end{matrix} \end{equation} Let us analyze the behavior of the $T^2$ action given by Equation (\ref{T2action3}). We shall see that it is not generally effective. First we notice that the $S^1_\theta$ action is free since it is free on the first factor. Next we look for fixed points under a subgroup of the circle $S^1_\phi$. Thus, we impose $$(e^{iv_1\phi}z_1,e^{iv_2\phi}z_2)=(e^{-2\pi\frac{{\mathcal I}_Nw_1}{|{\bf w}|}ri}z_1,e^{-2\pi\frac{{\mathcal I}_Nw_2}{|{\bf w}|}ri}z_2)$$ for some $r=0,\ldots,\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)}-1$. If $z_1z_2\neq 0$ we must have \begin{equation}\label{phisoln} v_1\phi=2\pi(-\frac{{\mathcal I}_Nw_1r}{|{\bf w}|} +k_1),\qquad v_2\phi=2\pi(-\frac{{\mathcal I}_Nw_2r}{|{\bf w}|} +k_2) \end{equation} for some integers $k_1,k_2$, which in turn implies $${\mathcal I}_Nr(w_2v_1-w_1v_2)=|{\bf w}|(k_2v_1-k_1v_2).$$ This gives \begin{equation}\label{reqn} r=\frac{|{\bf w}|}{{\mathcal I}_N}\frac{k_2v_1-k_1v_2}{w_2v_1-w_1v_2} \end{equation} which must be a nonnegative integer less than $\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)}$. We can also solve Equations (\ref{phisoln}) for $\phi$ by eliminating $\frac{{\mathcal I}_Nr}{|{\bf w}|}$ giving \begin{equation}\label{phisoln2} \phi =2\pi \frac{k_1w_2-k_2w_1}{w_2v_1-w_1v_2}. \end{equation} Next we write (\ref{reqn}) as \begin{equation}\label{reqn3} r=\Bigl(\frac{\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)}}{\gcd(|w_2v_1-w_1v_2|,\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)})}\Bigr)\Bigl(\frac{k_2v_1-k_1v_2}{\frac{{\mathcal I}_N}{\gcd(|{\bf w}|,{\mathcal I}_N)}\frac{w_2v_1-w_1v_2}{\gcd(|w_2v_1-w_1v_2|,\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)})}}\Bigr). \end{equation} Since $v_1$ and $v_2$ are relatively prime, we can choose $k_1$ and $k_2$ so that the term in the last parentheses is $1$. This determines $r$ as \begin{equation}\label{reqn4} r=\frac{\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)}}{\gcd(|w_2v_1-w_1v_2|,\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)})}. \end{equation} Now suppose that $z_2=0$. Then generally we have $e^{iv_1\phi}=e^{-2\pi\frac{{\mathcal I}_Nw_1}{|{\bf w}|}ri}$ for some $r=0,\ldots,\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)}-1$ or equivalently $r=1,\ldots,\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)}$. This gives \begin{equation}\label{z20} \phi=2\pi(-\frac{{\mathcal I}_Nw_1r}{v_1|{\bf w}|}+\frac{k}{v_1}). \end{equation} A similar computation at $z_1=0$ gives \begin{equation}\label{z10} \phi=2\pi(-\frac{{\mathcal I}_Nw_2r'}{v_2|{\bf w}|}+\frac{k'}{v_2}). \end{equation} We are interested in when regularity can occur. For this we need the minimal angle at the two endpoints to be equal. 
This gives $$-\frac{{\mathcal I}_Nw_2r'}{v_2|{\bf w}|}+\frac{k'}{v_2}=-\frac{{\mathcal I}_Nw_1r}{v_1|{\bf w}|}+\frac{k}{v_1}$$ for some choice of integers $k,k'$ and nonnegative integers $r,r'<\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)}$, which gives \begin{equation}\label{endpteqn} \frac{-{\mathcal I}_Nw_2r'+k'|{\bf w}|}{v_2}=\frac{-{\mathcal I}_Nw_1r+k|{\bf w}|}{v_1}. \end{equation} \subsection{Periods of Reeb Orbits} We assume that ${\bf w}\neq (1,1)$. We want to determine the periods of the orbits of the flow of the Reeb vector field defined by the weight vector ${\bf v}=(v_1,v_2)$. In particular, we want to know when there is a regular Reeb vector field in the ${\bf w}$-Sasaki cone. Let us now generally determine the minimal angle, hence the generic period of the Reeb orbits, on the dense open subset $Z$ defined by $z_1z_2\neq 0$. For convenience we set $$ s=\gcd\Bigl(|w_2v_1-w_1v_2|,\frac{|{\bf w}|}{\gcd(|{\bf w}|,{\mathcal I}_N)}\Bigr), \qquad t=\frac{{\mathcal I}_N}{\gcd(|{\bf w}|,{\mathcal I}_N)}.$$ \begin{lemma}\label{generic3} The minimal angle on $Z$ is $\frac{2\pi}{s}$. Thus, $S^1_\phi/\mathbbz_s$ acts freely on the dense open subset $Z$. \end{lemma} \begin{proof} We choose $k_1,k_2$ in Equation (\ref{reqn3}) so that the last parentheses equals $1$. This gives $$t\frac{w_2v_1-w_1v_2}{s}=k_2v_1-k_1v_2.$$ Rearranging, this becomes $$(sk_2-tw_2)v_1=(sk_1-tw_1)v_2.$$ Since $v_1$ and $v_2$ are relatively prime this equation implies $sk_i=tw_i+mv_i$ for $i=1,2$ and some integer $m$. Putting this into Equation (\ref{phisoln2}) gives $\phi=\frac{2\pi m}{s}$, so the minimal angle is $\frac{2\pi}{s}$. \end{proof} We now investigate the endpoints defined by $z_2=0$ and $z_1=0$. \begin{proposition}\label{regularprop} The following hold: \begin{enumerate} \item The period on $Z$, namely $\frac{2\pi}{s}$, is an integral multiple of the periods at the endpoints. Hence, $S^1_\phi/\mathbbz_s$ acts effectively on $M_{l_1,l_2,{\bf w}}$. \item The period at the endpoint $z_j=0$ is $2\pi\frac{\gcd({\mathcal I}_N, |{\bf w}|)}{v_i|{\bf w}|}$ where $i\equiv j+1\mod 2$. So the endpoints have equal periods if and only if ${\bf v}=(1,1)$. \item The ${\bf w}$-Sasaki cone contains a regular Reeb vector field if and only if ${\bf v}=(1,1)$ and $\frac{w_1+w_2}{\gcd({\mathcal I}_N,w_1+w_2)}$ divides $w_1-w_2$. \end{enumerate} \end{proposition} \begin{proof} A Reeb vector field will be regular if and only if the period of its orbit is the same at all points. We know that it is $\frac{2\pi}{s}$ on $Z$. We need to determine the minimal angle at the endpoints. From Equation (\ref{z20}) the angle at $z_2=0$ is $$\phi=2\pi(\frac{-{\mathcal I}_Nw_1r+k|{\bf w}|}{v_1|{\bf w}|})=2\pi\gcd({\mathcal I}_N,|{\bf w}|)\Bigl(\frac{\frac{|{\bf w}|k}{\gcd({\mathcal I}_N,|{\bf w}|)}-\frac{{\mathcal I}_Nw_1r}{\gcd({\mathcal I}_N,|{\bf w}|)}}{v_1|{\bf w}|}\Bigr). $$ Now $$\gcd\Bigl(\frac{|{\bf w}|}{\gcd({\mathcal I}_N,|{\bf w}|)},\frac{{\mathcal I}_Nw_1}{\gcd({\mathcal I}_N,|{\bf w}|)}\Bigr)=1,$$ so we can choose $k$ and $r$ such that the numerator of the term in the large parentheses is $1$. This gives period $2\pi\frac{\gcd({\mathcal I}_N, |{\bf w}|)}{v_1|{\bf w}|}$. Similarly, at $z_1=0$ we have the period $2\pi\frac{\gcd({\mathcal I}_N, |{\bf w}|)}{v_2|{\bf w}|}$. 
So the period is the same at the endpoints if and only if $v_1=v_2$ which is equivalent to $\betafv=(1,1)$ since $v_1$ and $v_2$ are relatively prime which proves $(2)$. Moreover, the period is the same at all points if and only if \betaegin{equation}\label{eqper} \betafv=(1,1),\qquad \frac{|\betafw|}{{\mathfrak a}mmacd({\mathcal I}_N,|\betafw|)}=s={\mathfrak a}mmacd(|w_2v_1-w_1v_2|,\frac{|\betafw|}{{\mathfrak a}mmacd(|\betafw|,{\mathcal I}_N)}). \end{equation} But the last equation holds if and only if $\frac{|\betafw|}{{\mathfrak a}mmacd({\mathcal I}_N,|\betafw|)}$ divides $w_1-w_2$ proving $(3)$. (1) follows from the fact that for each $i=1,2$, $\frac{v_i|\betafw|}{{\mathfrak a}mmacd({\mathcal I}_N,|\betafw|)}$ is an integral multiple of ${\mathfrak a}mmacd(|w_2v_1-w_1v_2|,\frac{|\betafw|}{{\mathfrak a}mmacd(|\betafw|,{\mathcal I}_N)})=s$. \end{proof} We have an immediate \betaegin{corollary}\label{cali1cor} If ${\mathcal I}_N=1$ there are no regular Reeb vector fields in any $\betafw$-Sasaki cone with $\betafw\nablaeq (1,1)$. \end{corollary} Actually, one can say much more. \betaegin{proposition}\label{numreg} Assume $\betafw\nablaeq (1,1)$. There are exactly $K-1$ different $\betafw$-Sasaki cones that have a regular Reeb vector field. These are given by \betaegin{equation}\label{regw} \betafw =\Bigl(\frac{K+n}{{\mathfrak a}mmacd(K+n,K-n)},\frac{K-n}{{\mathfrak a}mmacd(K+n,K-n)}\Bigr) \end{equation} where $K={\mathfrak a}mmacd({\mathcal I}_N,|\betafw|)$ and $1\leq n<K$. \end{proposition} \betaegin{proof} By Proposition \ref{regularprop} a $\betafw$-Sasaki cone contains a regular Reeb vector field if and only if there is $n\in \mathbbz^+$ such that $$w_1-w_2=n\frac{w_1+w_2}{{\mathfrak a}mmacd({\mathcal I}_N,w_1+w_2)}.$$ Clearly, for a solution we must have $n<{\mathfrak a}mmacd({\mathcal I}_N,w_1+w_2)$. Then we have a solution if and only if $$(K-n)w_1=(K+n)w_2$$ for all $1\leq n<K$. Since $w_1>w_2$ and they are relatively prime we have the unique solution Equation \eqref{regw} for each integer $1\leq n<K$. \end{proof} \betaegin{example}\label{Findex} Let us determine the $\betafw$-joins with regular Reeb vector field for ${\mathcal I}_N=2,3$. For example, if ${\mathcal I}_N=2$ for a solution to Equation \eqref{regw} we must have $K=2$ which gives $n=1$ and $\betafw=(3,1)$. This has as a consequence Corollary \ref{Ypqcor} below. Similarly if ${\mathcal I}_N=3$ we must have $K=3$, which gives two solutions $\betafw=(2,1)$ and $\betafw=(5,1)$. \end{example} \betaegin{example}\label{Ypq} Recall the contact structures $Y^{p,q}$ on $S^2\times S^3$ first studied in the context of Sasaki-Einstein metrics in \cite{GMSW04a}, where $p$ and $q$ are relatively prime positive integers satisfying $p>1$ and $q<p$. Since in this case the manifold $M_{l_1,l_2,\betafw}=M^3\star_{l_1,l_2}S^3_\betafw$ is $S^2\times S^3$, we have $N=S^2$ with its standard (Fubini-Study) K\"ahler structure. Hence, ${\mathcal I}_N=2$. This was treated in Example 4.7 of \cite{BoPa10} although the conventions\footnote{In particular, there we chose $w_1\leq w_2$; whereas, here we use the opposite convention, $w_1{\mathfrak a}mmaeq w_2$.} are slightly different. Here we set \betaegin{equation}\label{pqw} \betafw=\frac{1}{{\mathfrak a}mmacd(p+q,p-q)}\betaigl(p+q,p-q\betaigr). \end{equation} Note that the conditions on $p,q$ eliminate the case $\betafw=(1,1)$. We claim that the following relations hold: \betaegin{equation}\label{pqrel} l_1={\mathfrak a}mmacd(p+q,p-q),\qquad l_2=p. 
\end{equation} To see this we first notice that Lemma \ref{c10} implies that to have a Sasaki-Einstein metric we must have $l_1=1$ and $l_2=\frac{|\betafw|}{2}$ if $|\betafw|$ is even, and $l_1=2$ and $l_2=|\betafw|$ if $|\betafw|$ is odd. Thus, the second of Equations (\ref{pqrel}) follows from the first and Equation (\ref{pqw}). To prove the first equation we first notice that since $p$ and $q$ are relatively prime, ${\mathfrak a}mmacd(p+q,p-q)$ is either $1$ or $2$. Next from Equation (\ref{pqw}) we note that $2p={\mathfrak a}mmacd(p+q,p-q)|\betafw|$. So if $|\betafw|$ is odd, then ${\mathfrak a}mmacd(p+q,p-q)$ must be even, hence $2$. So the first equation holds in this case. This also shows that if $|\betafw|$ is odd then $p=|\betafw|$ is also odd. Now if $|\betafw|$ is even then both $p+q$ and $p-q$ must be odd. But then ${\mathfrak a}mmacd(p+q,p-q)$ must be $1$. In this case $p=\frac{|\betafw|}{2}$ which can be either odd or even. Then from Proposition \ref{regularprop} we recover the following result of \cite{BoPa10}: \betaegin{corollary}\label{Ypqcor} For $Y^{p,q}$ the $\betafw$-Sasaki cone has a regular Reeb vector field if and only if $p=2,q=1$. \end{corollary} For the general $Y^{p,q}$ the Reeb vector field of Theorem 4.2 of \cite{BoPa10} is equivalent to choosing $\betafv=(1,1)$ here. However, as stated in Corollary \ref{Ypqcor}, it is regular only when $p=2,q=1$. Otherwise, it is quasi-regular with ramification index $m=m_1=m_2=p$ if $p$ is odd, and $m=\frac{p}{2}$ if $p$ is even. We remark that the quotient of $Y^{2,1}$ by the regular Reeb vector field is $\mathbbc\mathbbp^2$ blown-up at a point; whereas, we have arrived at it from the $\betafw$-Sasaki cone of an $S^1$ orbibundle over $S^2\times\mathbbc\mathbbp^1[3,1]$. \end{example} \subsection{$B_{l_1,l_2,\betafv,\betafw}$ as a Log Pair} We follow the analysis in Section 3 of \cite{BoTo13}. We have the action of the 2-torus $S^1_\phi/\mathbbz_s\times (S^1_\tilde{h}eta/\mathbbz_{l_2})$ on $M\times L(\frac{|\betafw|}{{\mathfrak a}mmacd(|\betafw|,{\mathcal I}_N)};l_1w_1,l_1w_2)$ given by Equation (\ref{T2action3}) whose quotient space is $B_{l_1,l_2,\betafv,\betafw}$. It follows from Equation (\ref{T2action3}) that $B_{l_1,l_2,\betafv,\betafw}$ is a bundle over $N$ with fiber a weighted projective space of complex dimension one. By (1) of Proposition \ref{regularprop} the generic period is an integral multiple, say $m_i$, of the period at the divisor $D_i$. Thus, for $i=1,2$ we have \betaegin{equation}\label{ramind} m_i=v_i\frac{|\betafw|}{s{\mathfrak a}mmacd(|\betafw|,{\mathcal I}_N)}=v_im. \end{equation} Note that from its definition $m=\frac{l_2}{s}=\frac{|\betafw|}{s{\mathfrak a}mmacd(|\betafw|,{\mathcal I}_N)}$, so $m_i$ is indeed a positive integer. It is the ramification index of the branch divisor $D_i$. We think of $D_1$ as the zero section and $D_2$ as the infinity section of the bundle $B_{l_1,l_2,\betafv,\betafw}$. Thus, $B_{l_1,l_2,\betafv,\betafw}$ is a fiber bundle over $N$ with fiber $\mathbbc\mathbbp^1[v_1,v_2]/\mathbbz_m\alphapprox \mathbbc\mathbbp^1$. The isomorphism is simply $[z_1,z_2]\mapsto [z_1^{m_2},z_2^{m_1}]$ where the brackets denote the obvious equivalence classes on $\mathbbc\mathbbp^1[v_1,v_2]/\mathbbz_m$. The complex structure of $B_{l_1,l_2,\betafv,\betafw}$ is the projection of the transverse complex structure on $M_{l_1,l_2,\betafw}$ which in turn is the lift of the product complex structure on $N\times \mathbbc\mathbbp^1[\betafw]$. 
However, $B_{l_1,l_2,\betafv,\betafw}$ is not generally a product as a complex orbifold, nor even topologically. Now we can follow the analysis leading to Lemma 3.14 of \cite{BoTo13}. So we define the map $$\tilde{h}_\betafv:M\times L(l_2;l_1w_1,l_1w_2)\ra{1.6} M\times L(l_2;l_1w_1v_2,l_1w_2v_1)$$ by \betaegin{equation}\label{th} \tilde{h}_\betafv(x,u;[z_1,z_2])=(x,u;[z_1^{m_2},z_2^{m_1}]). \end{equation} It is a $mv_1v_2$-fold covering map. Similar to \cite{BoTo13} we get a commutative diagram: \betaegin{equation}\label{actcomdia} \betaegin{matrix} M\times L(l_2;l_1w_1,l_1w_2) &\fract{{\mathcal A}_{\betafv,l,\betafw}({\mathfrak a}mmarl,{\mathfrak a}mmart)}{\ra{2.5}} & M\times L(l_2;l_1w_1,l_1w_2) \\ \partialecdnar{\tilde{h}_\betafv} && \partialecdnar{\tilde{h}_\betafv} \\ M\times L(l_2;l_1w'_1,l_1w'_2) & \fract{{\mathcal A}_{1,l,\betafw'}({\mathfrak a}mmarl,{\mathfrak a}mmart^{mv_1v_2})}{\ra{2.5}} & M\times L(l_2;l_1w'_1,l_1w'_2), \end{matrix} \end{equation} where $\betafw'=(v_2w_1,v_1w_2)$. So $B_{l_1,l_2,\betafv,\betafw}$ is the log pair $(S_n,{\mathfrak a}mmarD)$ with branch divisor \betaegin{equation}\label{branchdiv} {\mathfrak a}mmarD=(1-\frac{1}{m_1})D_1+ (1-\frac{1}{m_2})D_2, \end{equation} where $S_n$ is a smooth $\mathbbc\mathbbp^1$-bundle over $N$, that is a ruled manifold as described in Section \ref{ruledsec}. Now $B_{l_1,l_2,\betafv,\betafw}$ is the quotient orbifold $$\betaigl(M\times L(l_2;l_1w_1,l_1w_2)\betaigr)/{\mathcal A}_{\betafv,l,\betafw}({\mathfrak a}mmarl,{\mathfrak a}mmart),$$ and $B_{l_1,l_2,1,\betafw'}$ is the quotient $\betaigl(M\times L(l_2;l_1w'_1,l_1w'_2)\betaigr)/{\mathcal A}_{1,l,\betafw'}({\mathfrak a}mmarl,{\mathfrak a}mmart^{v_1v_2})$. So $\tilde{h}_\betafv$ induces a map $h_\betafv:B_{l_1,l_2,\betafv,\betafw}\ra{1.6}S_n$ defined by \betaegin{equation}\label{hquot} h_\betafv([x,u;[z_1,z_2]])=[x,u;[z_1^{m_2},z_2^{m_1}]], \end{equation} where the outer brackets denote the equivalence class with respect to the corresponding $T^2$ action. We have \betaegin{lemma}\label{biholo} The map $h_\betafv:B_{l_1,l_2,\betafv,\betafw}\ra{1.6}B_{l_1,l_2,1,\betafw'}$ defined by Equation (\ref{hquot}) is a biholomorphism. \end{lemma} \betaegin{proof} The map is ostensibly holomorphic. Now $\tilde{h}_\betafv$ is the identity map on $M$ and a $v_1v_2$-fold covering map on the corresponding lens spaces. From the commutative diagram (\ref{actcomdia}) the induced map $h_\betafv$ is fiber preserving and is a bijection on the fibers with holomorphic inverse. \end{proof} Lemma \ref{biholo} allows us to consider the orbifold $B_{l_1,l_2,\betafv,\betafw}$ as the log pair $(B_{l_1,l_2,1,\betafw'},{\mathfrak a}mmarD)$ where ${\mathfrak a}mmarD$ is given by Equation \eqref{branchdiv}. Notice that even when $\betafv=(1,1)$ the orbifold structure can be non-trivial, namely, $B_{l_1,l_2,(1,1),\betafw}=(S_n,{\mathfrak a}mmarD)$ where $m_1=m_2=m=\frac{|\betafw|}{s{\mathfrak a}mmacd(|\betafw|,{\mathcal I}_N)}$ and $${\mathfrak a}mmarD=(1-\frac{1}{m})(D_1+D_2).$$ The $T^2$ action ${\mathcal A}_{1,\betafl,\betafw'}:M\times L(l_2;l_1w'_1,l_1w'_2)\ra{1.5} M\times L(l_2;l_1w'_1,l_1w'_2)$ is given by \betaegin{equation}\label{T2action4} (x,u;z_1,z_2)\mapsto (x,e^{i\tilde{h}eta}u;[e^{i(\phi-\frac{l_1w'_1}{l_2}\tilde{h}eta)}z_1,e^{i(\phi-\frac{l_1w'_2}{l_2}\tilde{h}eta)}z_2]), \end{equation} Defining $\chi=\phi-\frac{l_1w'_1}{l_2}\tilde{h}eta$ gives \betaegin{equation}\label{T2action5} (x,u;z_1,z_2)\mapsto (x,e^{i\tilde{h}eta}u;[e^{i\chi}z_1,e^{i(\chi+\frac{l_1}{l_2}(w'_1-w'_2)\tilde{h}eta)}z_2]). 
\end{equation} The analysis above shows that this action is generally not free, but has branch divisors at the zero ($z_2=0$) and infinity ($z_1=0$) sections with ramification indices both equal to $m$. Equation (\ref{T2action5}) tells us that the $T^2$-quotient space $B_{l_1,l_2,1,\betafw'}$ is the projectivization of the holomorphic rank two vector bundle $E=\BOne \oplus L_n$ over $N$ where $\BOne$ denotes the trivial line bundle and $L_n$ is a line bundle of `degree' $n=\frac{l_1}{s}(w_1v_2-w_2v_1)$ with $s={\mathfrak a}mmacd(|w_1v_2-w_2v_1|,l_2)$. So $S_n=\mathbbp(\BOne\oplus L_n)$ is a smooth projective algebraic variety. Next we identify $N$ with the zero section $D_1$ of $L_n$, and note that $c_1(L_n)$ is just the restriction of the Poincar\'e dual of $D_1$ to $D_1$, i.e. $PD(D_1)|_{D_1}=c_1(L_n)$. Summarizing, we have \betaegin{theorem}\label{preSE} Let $M_{l_1,l_2,\betafw}=M\star_{l_1,l_2}S^3_\betafw$ be the join as described in the beginning of the section with the induced contact structure ${\mathcal D}_{l_1,l_2,\betafw}$ satisfying $c_1({\mathcal D}_{l_1,l_2,\betafw})=0$. Let $\betafv=(v_1,v_2)$ be a weight vector with relatively prime integer components and let $\xi_\betafv$ be the corresponding Reeb vector field in the Sasaki cone ${\mathfrak a}mmat^+_2$. Then the quotient of $M_{l_1,l_2,\betafw}$ by the flow of the Reeb vector field $\xi_\betafv$ is a projective algebraic orbifold written as the log pair $(S_n,{\mathfrak a}mmarD)$ where $S_n$ is the total space of the projective bundle $\mathbbp(\BOne\oplus L_n)$ over the Fano manifold $N$ with $n=\frac{l_1}{s}(w_1v_2-w_2v_1)$, ${\mathfrak a}mmarD$ the branch divisor \betaegin{equation}\label{branchdiv2} {\mathfrak a}mmarD=(1-\frac{1}{m_1})D_1+ (1-\frac{1}{m_2})D_2, \end{equation} with ramification indices $m_i=v_i\frac{|\betafw|}{s{\mathfrak a}mmacd(|\betafw|,{\mathcal I}_N)}=v_im$ and divisors $D_1$ and $D_2$ given by the zero section $\BOne\oplus 0$ and infinity section $0\oplus L_n$, respectively. Here ${\mathcal I}_N$ is the Fano index of $N$, $s={\mathfrak a}mmacd(|w_1v_2-w_2v_1|,\frac{|\betafw|}{{\mathfrak a}mmacd({\mathcal I}_N,|\betafw|)})$, and $l_i$ are the relative indices given by $l_1=\frac{{\mathcal I}_N}{{\mathfrak a}mmacd({\mathcal I}_N,|\betafw|)}$ and $l_2=\frac{|\betafw|}{{\mathfrak a}mmacd({\mathcal I}_N,|\betafw|)}$. The fiber of the orbifold $(S_n,{\mathfrak a}mmarD)$ is the orbifold $\mathbbc\mathbbp^1[v_1,v_2]/\mathbbz_m$. \end{theorem} Notice that in this case we have $\pi_1^{orb}(S_n,{\mathfrak a}mmarD)=\mathbbz_m$. \betaegin{example} Consider $\betafw=(11,1)$ and $\betafv=(4,5)$. Let us take ${\mathcal I}_N=1$. So we have $l_1=1,l_2=12,$ and $s={\mathfrak a}mmacd(12,55-4)=3$. Thus, the generic period of the Reeb vector field $\xi_\betafv$ is $\frac{2\pi}{3}$. The period at the endpoint $z_2=0$ (on the branch divisor $D_1$) is $\frac{2\pi}{48}$, and at the endpoint $z_1=0$ (on $D_2$) it is $\frac{2\pi}{60}$. So there is an effective action of $S^1_\phi/\mathbbz_3$. This gives an isotropy of $\mathbbz_{16}$ on the branch divisor $D_1$, and an isotropy of $\mathbbz_{20}$ on $D_2$. The ramification indices are then $m_1=16$, $m_2=20$, and $m=\frac{12}{3}=4$. The projective bundle $S_{n}=\mathbbp(\BOne\oplus L_n)$ has $n=\frac{51}{3}=17$. So our log pair is $(S_{17},{\mathfrak a}mmarD)$ with $${\mathfrak a}mmarD=\frac{15}{16}D_1+\frac{19}{20}D_2.$$ The fiber $F$ is a quotient of the orbifold weighted projective space $\mathbbc\mathbbp^1[4,5]$, namely $F=\mathbbc\mathbbp^1[4,5]/\mathbbz_4$. 
Here $\pi_1^{orb}(F)=\mathbbz_4$. Now let us consider $\betafw=(11,1)$ and $\betafv=(1,1)$. Again take ${\mathcal I}_N=1$, so $l_1=1, l_2=12$, and $s={\mathfrak a}mmacd(12,11-1)=2.$ So the generic period of $\xi_\betafv$ is $\frac{2\pi}{2}=\pi$. The period on the branch divisors $D_1$ and $D_2$ are both $\frac{2\pi}{12}=\frac{\pi}{6}$, and the ramification index $m=6$. The projective bundle $S_n=\mathbbp(\BOne\oplus L_n)$ has $n=5$, so our log pair is $(S_5,{\mathfrak a}mmarD)$ with branch divisor $${\mathfrak a}mmarD= \frac{5}{6}(D_1+D_2).$$ In this case the fiber is a global quotient (developable) orbifold, namely $F=\mathbbc\mathbbp^1/\mathbbz_6$ with $\pi_1^{orb}(F)=\mathbbz_6$. \end{example} \betaegin{example}\label{Ypq2} This is a continuation of Example \ref{Ypq}. We take $\betafv=(1,1)$. Then $$s={\mathfrak a}mmacd(l_2,w_1-w_2)={\mathfrak a}mmacd(p,\frac{2q}{l_1}).$$ So $s=1$ if $l_1=2$ that is when $|\betafw|$ is odd which also implies that $p$ is odd. Whereas, if $|\betafw|$ is even, $l_1=1$, so $s={\mathfrak a}mmacd(p,2q)$ which is $2$ if $p$ is even and $1$ if $p$ is odd. Now consider $n$. We have $$n=\frac{l_1}{s}(w_1-w_2)=\frac{2q}{s}.$$ Thus, $n=2q$ when $p$ is odd, and $n=q$ when $p$ is even in which case $q$ must be odd. Moreover, from Equation (\ref{ramind}) we have $$m=\frac{|\betafw|}{s{\mathfrak a}mmacd(|\betafw|,{\mathcal I}_N)}= \betaegin{cases} p~ \text{if $p$ is odd} \\ \frac{p}{2} ~\text{if $p$ is even.} \end{cases}$$ So in this case our log pair is $(S_{2q},{\mathfrak a}mmarD)$ with $${\mathfrak a}mmarD=\betaigl(1-\frac{1}{p}\betaigr)(D_1+D_2)$$ if $p$ is odd, and $(S_q,{\mathfrak a}mmarD)$ with $q$ odd and $${\mathfrak a}mmarD=\betaigl(1-\frac{2}{p}\betaigr)(D_1+D_2)$$ if $p$ is even. Here $S_{2q}$ and $S_q$ are the even and odd Hirzebruch surfaces, respectively. Note for $Y^{2,1}$ there is no branch divisor, so it is regular as we know from Corollary \ref{Ypqcor} and $S_1$ is $\mathbbc\mathbbp^2$ blown-up at a point. \end{example} \subsection{Examples with $N$ a del Pezzo Surface} The Fano manifolds of complex dimension $2$ are usually called {\it del Pezzo surfaces}. They are exactly $\mathbbc\mathbbp^2,\mathbbc\mathbbp^1\times \mathbbc\mathbbp^1$, and $\mathbbc\mathbbp^2$ blown-up at $k$ generic points with $1\leq k\leq 8$. The join will be a Sasaki 7-manifold for these cases. \betaegin{example}\label{Ncp2} For $N=\mathbbc\mathbbp^2$ with its standard Fubini-Study K\"ahlerian structure, we have ${\mathcal I}_N=3$. From Example \ref{Findex} we see that we have a regular Reeb vector field in the $\betafw$-Sasaki cone in precisely two cases, either $\betafw=(2,1)$, or $\betafw=(5,1)$. In the first case the relative Fano indices are $(l_1,l_2)=(1,1)$ while in the second case they are $(l_1,l_2)=(1,2)$. In the former case our 7-manifold $M^7_{(2,1)}=S^5\star_{1,1}S^3_{(2,1)}$ is an $S^3$-bundle over $\mathbbc\mathbbp^2$; whereas, in the latter case the 7-manifold $M^7_{(5,1)}=S^5\star_{1,2}S^3_{(5,1)}$ is an $L(2;5,1)$ bundle over $\mathbbc\mathbbp^2$. Moreover, it follows from standard lens space theory that $L(2;5,1)$ is diffeomorphic to the real projective space $\mathbbr\mathbbp^3$. \end{example} \betaegin{example}\label{2cp1} For $N=\mathbbc\mathbbp^1\times \mathbbc\mathbbp^1$ with its standard Fubini-Study K\"ahlerian structure, we have ${\mathcal I}_N=2$. There is only one case with a regular Reeb vector field, and that is $\betafw=(3,1)$. Here the relative Fano indices are $(1,2)$. 
In this case the 7-manifold is $(S^2\times S^3)\star_{1,2}S^3_{(3,1)}$ which can be realized as an $L(2;3,1)\alphapprox \mathbbr\mathbbp^3$ lens space bundle over $\mathbbc\mathbbp^1\times\mathbbc\mathbbp^1$. \end{example} \betaegin{example}\label{blowups} We take $N$ to be $\mathbbc\mathbbp^2$ blown-up at $k$ generic points where $k=1,\ldots,8$, or equivalently $N=N_k=\mathbbc\mathbbp^2\#k\overline{\mathbbc\mathbbp}^2$. All the K\"ahler structures have an extremal representative, but for $k=1,2$ they are not CSC. However, for $k=3,\ldots,8$ they are CSC, and hence, K\"ahler-Einstein. Notice that when $4\leq k\leq 8$ the complex automorphism group has dimension $0$, so the $\betafw$-Sasaki cone is the entire Sasaki cone. Moreover, if $5\leq k\leq 8$ the local moduli space has positive dimension, and we can choose any of the complex structures. By a theorem of Kobayashi and Ochiai \cite{KoOc73} we have ${\mathcal I}_{N_k}=1$ for all $k=1,\ldots,8$. So $l_1=1,l_2=|\betafw|$, and by Corollary \ref{cali1cor} there are no regular Reeb vector fields in the $\betafw$-Sasaki cone with $\betafw\nablaeq (1,1)$. In particular, if $4\leq k\leq 8$, there are no regular Reeb vector fields in the Sasaki cone. Generally, these are $L(|\betafw|;w_1,w_2)$ lens space bundles over $N_k$. Of course, the case $\betafw=(1,1)$ is just an $S^1$-bundle over $N_k\times \mathbbc\mathbbp^1$ with the product complex structure which is automatically regular. These were studied in \cite{BG00a}. \end{example} \section{The Topology of the Joins} Since we are interested in compact Sasaki-Einstein manifolds, which have finite fundamental group, we shall assume that the Sasaki manifold $M$ is simply connected. It is then easy to construct examples with cyclic fundamental group. From the homotopy exact sequence of the fibration $S^1\ra{1.5}M\times S^3\ra{1.5} M_{l_1,l_2,\betafw}$ we have \betaegin{proposition}\label{simcon} If $M$ is simply connected, then so is $M_{l_1,l_2,\betafw}$. Moreover, if $M$ is 2-connected, $\pi_2(M_{l_1,l_2,\betafw})\alphapprox \mathbbz$. \end{proposition} We now describe our method for computing the cohomology ring of the join $M_{l_1,l_2,\betafw}$. \subsection{The Method} Our approach uses the spectral sequence method employed in \cite{WaZi90,BG00a} (see also Section 7.6.2 of \cite{BG05}). The fibration $\pi_L$ in Diagram (\ref{s2comdia}) together with the torus bundle with total space $M\times S^3_\betafw$ gives the commutative diagram of fibrations \betaegin{equation}\label{orbifibrationexactseq} \betaegin{matrix}M\times S^3_\betafw &\ra{2.6} &M_{l_1,l_2,\betafw}&\ra{2.6} &\mathsf{B}S^1 \\ \partialecdnar{=}&&\partialecdnar{}&&\partialecdnar{\psi}\\ M\times S^3_\betafw&\ra{2.6} & N\times\mathsf{B}\mathbbc\mathbbp^1[\betafw]&\ra{2.6} &\mathsf{B}S^1\times \mathsf{B}S^1\, \end{matrix} \qquad \qquad \end{equation} where $\mathsf{B}G$ is the classifying space of a group $G$ or Haefliger's classifying space \cite{Hae84} of an orbifold if $G$ is an orbifold. Note that the lower fibration is a product of fibrations. 
In particular, the fibration \betaegin{equation}\label{cporbfib} S^3_\betafw \ra{2.6} \mathsf{B}\mathbbc\mathbbp^1[\betafw]\ra{2.6} \mathsf{B}S^1 \end{equation} is rationally equivalent to the Hopf fibration, so over $\mathbbq$ the only non-vanishing differentials in its Leray-Serre spectral sequence are $d_4({\mathfrak a}mmarb)=s^2$ where ${\mathfrak a}mmarb$ is the orientation class of $S^3$ and $s$ is a basis in $H^2( \mathsf{B}S^1,\mathbbq)\alphapprox \mathbbq$ and those induced from $d_4$ by naturality. However, we want the cohomology over $\mathbbz$. \betaegin{lemma}\label{cporbcoh} For $w_1$ and $w_2$ relatively prime positive integers we have $$H^r_{orb}(\mathbbc\mathbbp^1[\betafw],\mathbbz)=H^r( \mathsf{B}\mathbbc\mathbbp^1[\betafw],\mathbbz)= \betaegin{cases} \mathbbz &\text{for $r=0,2$,}\\ \mathbbz_{w_1w_2} &\text{for $r>2$ even,}\\ 0 &\text{for $r$ odd.} \end{cases}$$ \end{lemma} \betaegin{proof} As in \cite{BG05} we cover the $\mathsf{B}\mathbbc\mathbbp^1[\betafw]$ with two overlapping open sets $p^{-1}(U_i)\alphapprox \tilde{U}_i\times_{{\mathfrak a}mmarG_i}EO$ where $U_i$ is $\mathbbc\mathbbp^1\setminus \{0\}$ and $\mathbbc\mathbbp^1\setminus \{\infty\}$ for $i=1,2$, respectively. The Mayer-Vietoris sequence is $$\ra{.8}H^r(\mathsf{B}\mathbbc\mathbbp^1[\betafw],\mathbbz)\ra{1.0} H^r(p^{-1}(U_1),\mathbbz)\oplus H^r(p^{-1}(U_2)\ra{1.0} H^r(p^{-1}(U_1)\cap p^{-1}(U_2),\mathbbz)\ra{.4}\cdots$$ Now $p^{-1}(U_i)\alphapprox \tilde{U}_i\times_{{\mathfrak a}mmarG_i}EO$ is the Eilenberg-MacLane space $K(\mathbbz_{w_i},1)$ whose cohomology is the group cohomology $$H^r(\mathbbz_{w_i},\mathbbz)=\betaegin{cases} \mathbbz &\text{for $r=0$,}\\ \mathbbz_{w_i} &\text{for $r>0$ even,}\\ 0 &\text{for $r$ odd.} \end{cases}$$ Moreover, $p^{-1}(U_1)\cap p^{-1}(U_2)=\tilde{U}_1\cap\tilde{U}_2\times_{{\mathfrak a}mmarG_1\cap{\mathfrak a}mmarG_2}EO$ and since $w_1,w_2$ are relatively prime ${\mathfrak a}mmarG_1\cap{\mathfrak a}mmarG_2=\mathbbz_{w_1}\cap\mathbbz_{w_2}=\{\BOne\}$. So for $r=2$ the Mayer-Vietoris sequence becomes \betaegin{equation}\label{LS2} 0\ra{1.8}\mathbbz\fract{j}{\ra{1.8}}H^2(\mathsf{B}\mathbbc\mathbbp^1[\betafw],\mathbbz)\ra{1.8}\mathbbz_{w_1w_2}\ra{1.8} 0. \end{equation} From the $E_2$ term of the Leray-Serre spectral sequence of the fibration (\ref{cporbfib}), we see that the map $j$ in (\ref{LS2}) must be multiplication by $w_1w_2$ implying that $H^2(\mathsf{B}\mathbbc\mathbbp^1[\betafw],\mathbbz)\alphapprox \mathbbz$. For $r>2$ even the sequence gives $H^r(\mathsf{B}\mathbbc\mathbbp^1[\betafw],\mathbbz)\alphapprox \mathbbz_{w_1}\oplus \mathbbz_{w_2}\alphapprox \mathbbz_{w_1w_2}$, whereas for $r$ odd $H^r(\mathsf{B}\mathbbc\mathbbp^1[\betafw],\mathbbz)\alphapprox 0$. \end{proof} One now easily sees that \betaegin{lemma}\label{LS} The only non-vanishing differentials in the Leray-Serre spectral sequence of the fibration (\ref{cporbfib}) are those induced naturally by $d_4({\mathfrak a}mmara)= w_1w_2s^2$ for $s\in H^2(\mathsf{B}S^1,\mathbbz)\alphapprox \mathbbz[s]$ and ${\mathfrak a}mmara$ the orientation class of $S^3$. \end{lemma} Now the map $\psi$ of Diagram (\ref{orbifibrationexactseq}) is that induced by the inclusion $e^{i\tilde{h}eta}\mapsto (e^{il_2\tilde{h}eta},e^{-il_1\tilde{h}eta})$. So noting $$H^*(\mathsf{B}S^1\times \mathsf{B}S^1,\mathbbz)=\mathbbz[s_1,s_2]$$ we see that $\psi^*s_1=l_2s$ and $\psi^*s_2=-l_1s$. 
This together with Lemma \ref{LS} gives $d_4({\mathfrak a}mmara)=w_1w_2l_1^2s^2$ in the Leray-Serre spectral sequence of the top fibration in Diagram (\ref{orbifibrationexactseq}). Further analysis depends on the differentials in the spectral sequence of the fibration \betaegin{equation}\label{MNspec} M\ra{1.5}N\ra{1.5}\mathsf{B}S^1. \end{equation} \betaegin{algorithm} Given the differentials in the spectral sequence of the fibration (\ref{MNspec}), one can use the commutative diagram (\ref{orbifibrationexactseq}) to compute the cohomology ring of the join manifold $M_{l_1,l_2,\betafw}$. \end{algorithm} \subsection{Examples in General Dimension} One case that is particularly easy to describe in all odd dimensions is when $M$ is the odd-dimensional sphere $S^{2r+1}$ with $r=2,3,\ldots,$. Here we have $N=\mathbbc\mathbbp^r$ in which case we have ${\mathcal I}_N=r+1$. So the relative Fano indices are \betaegin{equation}\label{relFanind} (l_1(\betafw),l_2(\betafw))=\betaigl(\frac{r+1}{{\mathfrak a}mmacd(|\betafw|,r+1)},\frac{|\betafw|}{{\mathfrak a}mmacd(|\betafw|,r+1)}\betaigr). \end{equation} Since $l_1,l_2$ are uniquely determined by $r$ and $\betafw$, we write our joins as $M^{2r+3}_\betafw$. We have\betaegin{theorem}\label{topcpr} The join $M^{2r+3}_{\betafw}=S^{2r+1}\star_{l_1,l_2}S^3_\betafw$ with relative Fano indices given by Equation (\ref{relFanind}) has integral cohomology ring given by $$\mathbbz[x,y]/(w_1w_2l_1(\betafw)^2x^2,x^{r+1},x^2y,y^2)$$ where $x,y$ are classes of degree $2$ and $2r+1$, respectively. \end{theorem} \betaegin{proof} The $E_2$ term of the Leray-Serre spectral sequence of the top fibration of diagram (\ref{orbifibrationexactseq}) is $$E^{p,q}_2=H^p(\mathsf{B}S^1,H^q(S^{2r+1}\times S^3_\betafw,\mathbbz))\alphapprox \mathbbz[s]\otimes{\mathfrak a}mmarL[{\mathfrak a}mmara,{\mathfrak a}mmarb],$$ where ${\mathfrak a}mmara$ is a $3$-class and ${\mathfrak a}mmarb$ is a $2r+1$ class. By the Leray-Serre Theorem this converges to $H^{p+q}(M_{\betafw}^{2r+3},\mathbbz)$. From the usual Hopf fibration and Lemma \ref{LS} the only non-zero differentials in the Leray-Serre spectral sequence of the bottom fibration in Diagram (\ref{orbifibrationexactseq}) are $d_4({\mathfrak a}mmara)=w_1w_2s^2_2$ and $d_{2r+2}({\mathfrak a}mmarb)=s^{r+1}_1$. By naturality the differentials of the top fibration of (\ref{orbifibrationexactseq}) are $d_4({\mathfrak a}mmara)=w_1w_2(-l_1s)^2$ and $d_{2r+2}({\mathfrak a}mmarb)=(l_2s)^{r+1}$. It follows that $H^{p}(M_{\betafw}^{2r+3},\mathbbz)$ has an element $x$ of degree $2$ with $w_1w_2l_1^2x^2$ vanishing, and since $l_2$ is relatively prime to $w_1w_2l_1^2$, $x^p$ vanishes for $p{\mathfrak a}mmaeq r+1$. Similarly, for dimensional reasons there is an element $y$ of degree $2r+1$ such that $y^2$ and $x^2y$ vanish. \end{proof} The connected component ${\mathfrak a}mmaA{\mathfrak a}mmau{\mathfrak a}mmat_0(M^{2r+3}_{\betafw})$ of the Sasaki automorphism group ${\mathfrak a}mmaA{\mathfrak a}mmau{\mathfrak a}mmat(M^{2r+3}_{\betafw})$ is $SU(r+1)\times T^2$, so these are toric Sasaki manifolds. However, our methods only make essential use of the 2-dimensional $\betafw$-subtorus. We shall make use of the following \betaegin{lemma}\label{H4lem} If $H^4(M^{2r+3}_\betafw,\mathbbz)=H^4(M^{2r+3}_{\betafw'},\mathbbz)$, then $w'_1w'_2=w_1w_2$ and $l_1(\betafw')=l_1(\betafw)$. 
\end{lemma} \betaegin{proof} The equality of the 4th cohomology groups together with the definition of $l_1$ implies $$w'_1w'_2{\mathfrak a}mmacd(|\betafw|,r+1)^2=w_1w_2{\mathfrak a}mmacd(|\betafw'|,r+1)^2.$$ Set $g_\betafw={\mathfrak a}mmacd(|\betafw|,r+1)$ and $g_{\betafw'}={\mathfrak a}mmacd(|\betafw'|,r+1)$. Assume $g_{\betafw'}>1$. Since ${\mathfrak a}mmacd(w'_1,w'_2)=1$, $g_{\betafw'}$ does not divide $w'_1w'_2$. Thus, $g_{\betafw'}^2$ divides $g_\betafw^2$. Interchanging the roles of $\betafw'$ and $\betafw$ gives $g_{\betafw'}=g_\betafw$ which implies $l_1(\betafw')=l_1(\betafw)$, and hence, the lemma in the case that $g_{\betafw'}>1$. Now assume $g_{\betafw'}=1$. Then we have $w_1w_2=w'_1w'_2g_\betafw^2$ which implies that $g_\betafw$ divides $w_1w_2$. But then since $w_1,w_2$ are relatively prime, we must have $g_\betafw=1$. \end{proof} Let us set $W=w_1w_2$, and write the prime decomposition of $W=w_1w_2=p_1^{a_1}\cdots p_k^{a_k}$. Let $P_k$ be the number of partitions of $W$ into the product $w_1w_2$ of unordered relatively prime integers, including the pair $(w_1w_2,1)$. Then a counting argument gives $P_k=2^{k-1}$. Once counted we then order the pair $w_1>w_2$ as before. Let ${\mathcal P}_W$ denote the set of $(2r+3)$-manifolds $M^{2r+3}_\betafw$ with isomorphic cohomology rings. Then Lemma \ref{H4lem} implies that the cardinality of ${\mathcal P}_W$ is $P_k=2^{k-1}$. This proves Corollary \ref{cor1} in the Introduction. Generally we can construct the join of any Sasaki-Einstein manifold with the standard $S^3$ to obtain new Sasaki-Einstein manifolds as done in \cite{BG00a}. In the present paper we take the join with a weighted $S^3_\betafw$ and then deform in the Sasaki cone. From the topological viewpoint this usually adds torsion coming from the effect of Lemmas \ref{cporbcoh} and \ref{LS}. However, in the simplest example which occurs in dimension $5$ the differentials in the spectral sequence conspire to cancel the occurrence of torsion. Of course, in this dimension the only positive K\"ahler-Einstein $2$-manifold is $N=\mathbbc\mathbbp^1$ with its standard Fubini-Study K\"ahler-Einstein structure. Then our procedure gives the $5$-manifolds $Y^{p,q}$ discovered by the physicists \cite{GMSW04a} which are diffeomorphic to $S^2\times S^3$ for all relatively prime positive integers $p,q$ such that $1<q<p$. This case has been well studied from various perspectives \cite{Boy11,BoPa10,MaSp05b,CLPP05}. The $7$-dimensional case is quite amenable to further study, and we shall concentrate our efforts in this direction. It is worth mentioning that the finiteness of deformation types of smooth Fano manifolds implies a bound on the Betti numbers of the join which only depends on dimension. This gives a Betti number bound on the manifolds obtained from our construction. In particular, in dimension seven $b_2(M^7_{k,\betafw})\leq 9$ as seen explicitly above, whereas, in dimension nine we have the bound $b_2(M^9_{k,\betafw})\leq 10$ \cite{BG00a}. \subsection{Examples in Dimension $7$} Here we consider circle bundles over the del Pezzo surfaces $\mathbbc\mathbbp^2,\mathbbc\mathbbp^1\times \mathbbc\mathbbp^1$ and $\mathbbc\mathbbp^2\# k\overline{\mathbbc\mathbbp}^2$ for $k=1,\ldots, 8$. As shown in the next section, in each case the Sasaki cone admits a Sasaki-Einstein metric for all admissible $\betafw$. \subsubsection{$M=S^5$} In this case $N=\mathbbc\mathbbp^2$ and $S^5\ra{1.6} \mathbbc\mathbbp^2$ is the standard Hopf fibration and ${\mathcal I}_N=3$. 
This is a special case of Theorem \ref{topcpr}. \betaegin{proposition}\label{topcp2} Let $M^7_{\betafw}$ be a simply connected 7-manifold of Theorem \ref{topcpr}. There are two cases: \betaegin{enumerate} \item $3$ divides $|\betafw|$ which implies $l_2=\frac{|\betafw|}{3}$ and $l_1=1$. \item $3$ does not divide $|\betafw|$ in which case $l_2=|\betafw|$ and $l_1=3$. \end{enumerate} In both cases the cohomology ring is given by $$\mathbbz[x,y]/(w_1w_2l_1^2x^2,x^3,x^2y,y^2)$$ where $x,y$ are classes of degree $2$ and $5$, respectively. \end{proposition} Notice that in case (1) of Proposition \ref{topcp2}, $H^4(M^7_\betafw,\mathbbz)=\mathbbz_{w_1w_2}$, whereas in case (2) we have $H^4(M^7_{\betafw},\mathbbz)=\mathbbz_{9w_1w_2}$. Since $3$ must divide $w_1+w_2$ in the first case and $w_1,w_2$ are relatively prime, their cohomology rings are never isomorphic. \betaegin{remark} Let us make a brief remark about the homogeneous case $\betafw=(1,1)$ with symmetry group $SU(3)\times SU(2)\times U(1)$. There is a unique solution with a Sasaki-Einstein metric as shown in \cite{BG00a}. However, dropping both the Einstein and Sasakian conditions, Kreck and Stolz \cite{KS88} gave a diffeomorphism and homeomorphism classification. Furthermore, using the results of \cite{WaZi90}, they show that in certain cases each of the 28 diffeomorphism types admits an Einstein metric. If we drop the Einstein condition and allow contact bundles with non-trivial $c_1$ we can apply the classification results of \cite{KS88} to the Sasakian case. This will be studied elsewhere. \end{remark} For dimension 7 we see from Proposition \ref{topcp2} that if $3$ divides $w_1+w_2$ then the order of $H^4$ is $W$. However, if $3$ does not divide $w_1+w_2$ then the order of $H^4$ is $9W$. So by Lemma \ref{H4lem} ${\mathcal P}_W$ splits into two cases, ${\mathcal P}_W^0$ if $W+1$ is divisible by $3$, and ${\mathcal P}_W^1$ if $W+1$ is not divisible by $3$. Of course, in either case the cardinality of ${\mathcal P}_W$ is $2^{k-1}$ where $k$ is the number of prime powers in the prime decomposition of $W$. \betaegin{proposition}\label{homequivprop} Suppose the order of $H^4$ is odd. The elements $M^7_\betafw$ and $M^7_{\betafw'}$ in ${\mathcal P}_W^0$ are homotopy equivalent if and only if \betaegin{equation*} \betaigl(\frac{w'_1+w'_2}{3}\betaigr)^3\equiv \pm\betaigl(\frac{w_1+w_2}{3}\betaigr)^3 \mod W. \end{equation*} The elements $M^7_\betafw$ and $M^7_{\betafw'}$ in ${\mathcal P}_W^1$ are homotopy equivalent if and only if \betaegin{equation*} (w'_1+w'_2)^3 \equiv\pm (w_1+w_2)^3 \mod 9W. \end{equation*} \end{proposition} \betaegin{proof} For $r=2$ consider the $E_6$ differential $d_6({\mathfrak a}mmarb)=l_2(\betafw)^3s^3$ in the spectral sequence of Theorem \ref{topcpr}. Since $l_2$ is relatively prime to $l_1(\betafw)^2w_1w_2$, this takes values in the multiplicative group $\mathbbz_{l_1^2W}^*$ of units in $\mathbbz_{l_1^2W}$. Taking into account the choice of generators, it takes its values in $\mathbbz_{l_1^2W}^*/\{\pm 1\}$. According to Theorem 5.1 of \cite{Kru97} $M^7_\betafw,M^7_{\betafw'}\in{\mathcal P}_W$ are homotopy equivalent if and only if $l_2(\betafw')^3= l_2(\betafw)^3$ in $\mathbbz^*_{l_1^2W}/\{\pm 1\}$. Of course, this means that $l_2(\betafw')^3=\pm l_2(\betafw)^3$ in $\mathbbz^*_{l_1^2W}$. Note that the other two conditions of Theorem 5.1 of \cite{Kru97} are automatically satisfied in our case. 
\end{proof} Using a Maple program we have checked a number of examples for homotopy equivalence, which appears to be quite rare. So far we have not found any examples of a homotopy equivalence. However, we have not done a systematic computer search which we leave for future work. \betaegin{example}\label{ex1} Our first example is an infinite sequence of pairs with the same cohomology ring. Set $W=3p$ with $p$ an odd prime not equal to $3$, which gives $P_k=2$. Then for each odd prime $p\nablaeq 3$ there are two manifolds in ${\mathcal P}_W^1$, namely $M^7_{(3p,1)}$ and $M^7_{(p,3)}$. The order of $H^4$ is $27p$. We check the conditions of Proposition \ref{homequivprop}. We find $$(3p+1)^3\equiv 9p+1 \mod 27p, \qquad (p+3)^3\equiv p^3+9p^2+27 \mod 27p.$$ First we look for integer solutions of $p^3+9p^2-9p+26\equiv 0 \mod 27p.$ By the rational root test the solutions could only be $p=2,13,26$, none of which are solutions. Next we check the second condition of Proposition \ref{homequivprop}, namely, $p^3+9p^2+9p+28\equiv 0 \mod 27p.$ Again by the rational root test we find the only possibilities are $p=2,7,14,28$, from which we see that there are no solutions. Thus, we see that $M^7_{(3p,1)}$ and $M^7_{(p,3)}$ are not homotopy equivalent for any odd $p\nablaeq 3$. By the same arguments one can also show that the infinite sequence of pairs of the form $M^7_{(9p,1)}$ and $M^7_{(p,9)}$, with $p$ an odd prime relatively prime to $3$, are never homotopy equivalent. \end{example} \betaegin{remark}\label{pairrem} In Example \ref{ex1} we do not need to have $p$ a prime, but we do need it to be relatively prime to $3$. In this more general case, there will be more elements in ${\mathcal P}_W^1$. For example, if $p=55$ we have $P_k=4$ and the pair $(M^7_{(165,1)},M^7_{(55,3)})$ has the same cohomology ring as $M^7_{(33,5)}$ and $M^7_{(15,11)}$. However, $M^7_{(33,5)}$ and $M^7_{(15,11)}$ are not homotopy equivalent to either member of the pair, nor to each other. \end{remark} \betaegin{example}\label{ex2} A somewhat more involved example is obtained by setting $W=5\cdot 7\cdot 11\cdot 17$. Here $P_k=8$, so this gives eight 7-manifolds in ${\mathcal P}_W^0$, namely, $$M^7_{(6545,1)},M^7_{(1309,5)},M^7_{(935,7)},M^7_{(595,11)},M^7_{(385,17)},M^7_{(187,35)},M^7_{(119,55)},M^7_{(85,77)}. $$ One can check that these do not satisfy the conditions for homotopy equivalence of Proposition \ref{homequivprop}. So they are all homotopy inequivalent. \end{example} It is easy to get a necessary condition for homeomorphism. \betaegin{proposition} Suppose $w_1'w_2'=w_1w_2$ is odd and that $M^7_\betafw$ and $M^7_{\betafw'}$ are homeomorphic. Then in addition to the conditions of Proposition \ref{homequivprop}, we must have $$2(w'_1+w'_2)^2\equiv 2(w_1+w_2)^2 \mod 3w_1w_2.$$ \end{proposition} \betaegin{proof} This is because the first Pontrjagin class $p_1$ is actually a homeomorphism invariant\footnote{This appears to be a folklore result with no proof anywhere in the literature. It is stated without proof on page 2828 of \cite{Kru97} and on page 31 of \cite{KrLu05}. We thank Matthias Kreck for providing us with a proof that $p_1$ is a homeomorphism invariant.}. From Kruggel \cite{Kru97} we see that if $3$ does not divide $|\betafw|$, then \betaegin{equation}\label{p1} p_1(M^7_\betafw)\equiv 3|\betafw|^2-9w_1^2-9w_2^2\equiv -6|\betafw|^2 \mod 9w_1w_2, \end{equation} which implies the result in this case. 
If $3$ divides $|\betafw|$ we have \betaegin{equation}\label{p1'} p_1(M^7_\betafw)\equiv -6\Bigl(\frac{|\betafw|}{3}\Bigr)^2 \mod w_1w_2 \end{equation} and this implies the same result. \end{proof} Note that Equations \eqref{p1} and \eqref{p1'} both imply the third condition of Theorem 5.1 in \cite{Kru97} holds in our case. To determine a full homeomorphism and diffeomorphism classification requires the Kreck-Stolz invariants \cite{KS88} $s_1,s_2,s_3\in \mathbbq/\mathbbz$. These can be determined as functions of $\betafw$ in our case by using the formulae in \cite{Esc05,Kru05}; however, they are quite complicated and the classification requires computer programing which we leave for future work. It is interesting to compare the Sasaki-Einstein 7-manifolds described by Theorem \ref{setop} with the 3-Sasakian 7-manifolds studied in \cite{BGM94,BG99} for their cohomology rings have the same form. Seven dimensional manifolds whose cohomology rings are of this type were called 7-manifolds of type $r$ in \cite{Kru97} where $r$ is the order of $H^4$. First recall that the 3-Sasakian 7-manifolds in \cite{BGM94} are given by a triple of pairwise relatively prime positive integers $(p_1,p_2,p_3)$ and $H^4$ is isomorphic to $\mathbbz_{{\mathfrak a}mmars_2(\betafp)}$ where ${\mathfrak a}mmars_2(\betafp)=p_1p_2+p_1p_3+p_2p_3$ is the second elementary symmetric function of $\betafp=(p_1,p_2,p_3)$. It follows that ${\mathfrak a}mmars_2$ is odd. The following theorem is implicit in \cite{Kru97}, but we give its simple proof here for completeness. \betaegin{theorem}\label{pwnothomequiv} The 7-manifolds $M^7_\betafp$ and $M^7_\betafw$ are not homotopy equivalent for any admissible $\betafp$ or $\betafw$. \end{theorem} \betaegin{proof} These manifolds are distinguished by $\pi_4$. Our manifolds $M^7_\betafw$ are quotients of $S^5\times S^3$ by a free $S^1$-action, whereas, the manifolds $M^7_\betafp$ of \cite{BGM94} are free $S^1$ quotients of $SU(3)$. So from their long exact homotopy sequences we have $\pi_i(M^7_\betafw)\alphapprox \pi_i(S^5\times S^3)$ and $\pi_i(M^7_\betafp)\alphapprox \pi_i(SU(3))$ for all $i>2$. But it is known that $\pi_4(SU(3))\alphapprox 0$ whereas, $\pi_4(S^5\times S^3)\alphapprox \mathbbz_2$. \end{proof} \subsubsection{$M=S^2\times S^3, N=\mathbbc\mathbbp^1\times \mathbbc\mathbbp^1$} We have ${\mathcal I}_N=2$, so there are two cases: $|\betafw|$ is odd impying $l_2=|\betafw|$ and $l_1=2$; and $|\betafw|$ is even with $l_2=\frac{|\betafw|}{2}$ and $l_1=1$. In both cases the smoothness condition ${\mathfrak a}mmacd(l_2,l_1w_i)=1$ is satisfied. The $E_2$ term of the Leray-Serre spectral sequence of the top fibration of diagram (\ref{orbifibrationexactseq}) is $$E^{p,q}_2=H^p(\mathsf{B}S^1,H^q(S^2\times S^3\times S^3_\betafw,\mathbbz))\alphapprox \mathbbz[s]\otimes\mathbbz[{\mathfrak a}mmara]/({\mathfrak a}mmara^2)\otimes {\mathfrak a}mmarL[{\mathfrak a}mmarb,{\mathfrak a}mmarg],$$ which by the Leray-Serre Theorem converges to $H^{p+q}(M_{l_1,l_2,\betafw},\mathbbz)$. Here ${\mathfrak a}mmara$ is a 2-class and ${\mathfrak a}mmarb,{\mathfrak a}mmarg$ are 3-classes. From the bottom fibration in Diagram (\ref{orbifibrationexactseq}) we have $d_2({\mathfrak a}mmarb)={\mathfrak a}mmara\otimes s_1$ and $d_4({\mathfrak a}mmarg)=w_1w_2s^2_2$. 
From the commutativity of diagram \eqref{orbifibrationexactseq} we have $d_2({\mathfrak a}mmarb)=l_2s$ and $d_4({\mathfrak a}mmarg_\betafw)=w_1w_2l_1^2s^2$ which gives $E_4^{4,0}\alphapprox \mathbbz_{w_1w_2l_1^2}$, $E_4^{0,3}\alphapprox \mathbbz$, $E_4^{2,2}\alphapprox \mathbbz_{l_2}$, and $E_\infty^{0,3}=0$. Then using Poincar\'e duality and universal coefficients we obtain \betaegin{proposition}\label{MNprop} In this case $M^7_{l_1,l_2,\betafw}$ with either $(l_1,l_2)=(2,|\betafw|)$ or $(1,\frac{|\betafw|}{2})$ has the cohomology ring given by $$H^*(M^7_{l_1,l_2,\betafw},\mathbbz)=\mathbbz[x,y,u,z]/(x^2,l_2xy,w_1w_2l_1^2y^2,z^2,u^2,zu,zx,ux,uy)$$ where $x,y$ are 2-classes, and $z,u$ are 5-classes. \end{proposition} \subsubsection{$M=k(S^2\times S^3), N=N_k=\mathbbc\mathbbp^2\# k\overline{\mathbbc\mathbbp}^2, k=1,\ldots,8$} Let ${\oldmathcal S}_k$ denote the total space of the principal $S^1$-bundle over $N_k$ corresponding to the anticanonical line bundle $K^{-1}$ on $N_k$. By a well-known result of Smale ${\oldmathcal S}_k$ is diffeomorphic to the $k$-fold connected sum $k(S^2\times S^3)$. We consider the join ${\oldmathcal S}_k\star_{1,|\betafw|}S^3_\betafw$. The case $\betafw=(1,1)$ was studied in \cite{BG00a} where it is shown to have a Sasaki-Einstein metric when $3\leq k\leq 8$. Moreover, in this case we have determined the integral cohomology ring (see Theorem 5.4 of \cite{BG00a}). Here we generalize this result. \betaegin{theorem}\label{Sk7man} The integral cohomology ring of the 7-manifolds $M^7_{k,\betafw}={\oldmathcal S}_k\star_{1,|\betafw|} S^3_\betafw$ is given by $$H^q(M^7_{k,\betafw},\mathbbz)\alphapprox \betaegin{cases} \mathbbz & \text{if $q=0,7$;} \\ \mathbbz^{k+1} & \text{if $q=2,5$;} \\ \mathbbz^k_{|\betafw|}\times \mathbbz_{w_1w_2} & \text{if $q=4;$} \\ 0 & \text{if otherwise}, \end{cases}$$ with the ring relations determined by ${\mathfrak a}mmara_i\cup {\mathfrak a}mmara_j=0, w_1w_2s^2=0, |\betafw|{\mathfrak a}mmara_i\cup s=0,$ where ${\mathfrak a}mmara_i,s$ are the $k+1$ two classes with $i=1,\cdots k.$ \end{theorem} \betaegin{proof} As before the $E_2$ term of the Leray-Serre spectral sequence of the top fibration of diagram (\ref{orbifibrationexactseq}) is $$E^{p,q}_2=H^p(\mathsf{B}S^1,H^q({\oldmathcal S}_k\times S^3_\betafw,\mathbbz))\alphapprox \mathbbz[s]\otimes\prod_i{\mathfrak a}mmarL[{\mathfrak a}mmara_i,{\mathfrak a}mmarb_i,{\mathfrak a}mmarg]/ {\mathfrak a}mmaI,$$ where ${\mathfrak a}mmara_i,{\mathfrak a}mmarb_j,{\mathfrak a}mmarg$ have degrees $2,3,3$, respectively, and ${\mathfrak a}mmaI$ is the ideal generated by the relations ${\mathfrak a}mmara_i\cup {\mathfrak a}mmarb_i={\mathfrak a}mmara_j\cup {\mathfrak a}mmarb_j,{\mathfrak a}mmara_i\cup{\mathfrak a}mmara_j={\mathfrak a}mmarb_i\cup{\mathfrak a}mmarb_j=0$ for all $i,j$, ${\mathfrak a}mmara_i\cup{\mathfrak a}mmarb_j=0$ for $i\nablaeq j$ and ${\mathfrak a}mmarg^2=0$. Consider the lower product fibration of diagram (\ref{orbifibrationexactseq}). As in the previous case the first non-vanishing differential of the second factor is $d_4$, and as in that case $d_4({\mathfrak a}mmarg)=w_1w_2s^2_2$. For the first factor we know from Smale's classification of simply connected spin 5-manifolds that ${\oldmathcal S}_k$ is diffeomorphic to the $k$-fold connected sum $k(S^2\times S^3)$. 
Moreover, since $N=\mathbbc\mathbbp^2\# k\overline{\mathbbc\mathbbp}^2$, the first factor fibration is $$k(S^2\times S^3)\ra{1.8} \mathbbc\mathbbp^2\# k\overline{\mathbbc\mathbbp}^2\ra{1.8} \mathsf{B}S^1.$$ Here the first non-vanishing differential is $d_2({\mathfrak a}mmarb_i)={\mathfrak a}mmara_i\otimes s$. Again from the commutativity of diagram (\ref{orbifibrationexactseq}) for the top fibration we have $d_2({\mathfrak a}mmarb_i)=|\betafw|{\mathfrak a}mmara_i\otimes s$ at the $E_2$ level and $d_4({\mathfrak a}mmarg)=w_1w_2s^2$ at the $E_4$ level. One easily sees that the $k+1$ $2$-classes ${\mathfrak a}mmara_i\in E_2^{2,0}$ and $s\in E_2^{0,2}$ live to $E_\infty$ and there is no torsion in degree $2$. Moreover, there is nothing in degree $1$, and the $3$-classes ${\mathfrak a}mmarb_i\in E_2^{3,0}$ and ${\mathfrak a}mmarg\in E_4^{3,0}$ die, so there is nothing in degree $3$. However, there is torsion in degree $4$, namely $\mathbbz_{|\betafw|}^k\times \mathbbz_{w_1w_2}$. The remainder follows from Poincar\'e duality and dimensional considerations. \end{proof} This generalizes Theorem 5.4 of \cite{BG00a} where the case $\betafw=(1,1)$ is treated. \betaegin{remark} Since $|\betafw|$ and $w_1w_2$ are relatively prime, $H^4(M^7_{k,\betafw},\mathbbz)\alphapprox \mathbbz^{k-1}_{|\betafw|}\times \mathbbz_{w_1w_2|\betafw|}$. We can ask the question: when can $M^7_{k,\betafw}$ and $M^7_{k',\betafw'}$ have isomorphic cohomology rings? It is interesting and not difficult to see that there is only one possibility, namely $M^7_{1,(3,2)}$ and $M^7_{1,(5,1)}$ in which case $H^4\alphapprox \mathbbz_{30}$. \end{remark} \section{Admissible KE constructions}\label{KE} We now pick up the thread from Section \ref{adm-basics} and describe the construction (see also \cite{ACGT08}) of admissible K\"ahler metrics on $S_n$ (in fact, more generally on log pairs $(S_n, \Delta)$). Consider the circle action on $S_n$ induced by the natural circle action on $L_n$. It extends to a holomorphic $\mathbb{C}^*$ action. The open and dense set ${S_n}_0\subset S_n$ of stable points with respect to the latter action has the structure of a principal circle bundle over the stable quotient. The hermitian norm on the fibers induces, via a Legendre transform, a function ${\mathfrak a}mmaz:{S_n}_0\rightarrow (-1,1)$ whose extension to $S_n$ consists of the critical manifolds ${\mathfrak a}mmaz^{-1}(1)=P(\BOne\oplus 0)$ and ${\mathfrak a}mmaz^{-1}(-1)=P(0 \oplus L_n)$. Letting $\tilde{h}eta$ be a connection one form for the Hermitian metric on ${S_n}_0$, with curvature $d\tilde{h}eta = \omega_{N_n}$, an admissible K\"ahler metric and form are given up to scale by the respective formulas \betaegin{equation}\label{g} g=\frac{1+r{\mathfrak a}mmaz}{r}g_{N_n}+\frac {d{\mathfrak a}mmaz^2} {\Theta ({\mathfrak a}mmaz)}+\Theta ({\mathfrak a}mmaz)\tilde{h}eta^2,\quad \omega = \frac{1+r{\mathfrak a}mmaz}{r}\omega_{N_n} + d{\mathfrak a}mmaz \wedge \tilde{h}eta, \end{equation} valid on ${S_n}_0$. Here $\Theta$ is a smooth function with domain containing $(-1,1)$ and $r$, is a real number of the same sign as $g_{N_n}$ and satisfying $0 < |r| < 1$. The complex structure yielding this K\"ahler structure is given by the pullback of the base complex structure along with the requirement $Jd{\mathfrak a}mmaz = \Theta \tilde{h}eta$. The function ${\mathfrak a}mmaz$ is hamiltonian with $K= J\,grad\, {\mathfrak a}mmaz$ a Killing vector field. 
In fact, ${\mathfrak a}mmaz$ is the moment map on $S_n$ for the circle action, decomposing $S_n$ into the free orbits ${S_n}_0 = {\mathfrak a}mmaz^{-1}((-1,1))$ and the special orbits $D_1= {\mathfrak a}mmaz^{-1}(1)$ and $D_2={\mathfrak a}mmaz^{-1}(-1)$. Finally, $\tilde{h}eta$ satisfies $\tilde{h}eta(K)=1$. \betaegin{remark}\label{hamiltonian2form} Note that $$\phi := \frac{-(1+r {\mathfrak a}mmaz)}{r^2} \omega_{N_n} + {\mathfrak a}mmaz d{\mathfrak a}mmaz \wedge \tilde{h}eta$$ is a Hamiltonian $2$-form of order one. \end{remark} \betaigskip We can now interpret $g$ as a metric on the log pair $(S_n,\Delta)$ with $${\mathfrak a}mmarD= (1-1/m_1)D_1+(1-1/m_2)D_2$$ if $\Theta$ satisfies the positivity and boundary conditions \betaegin{equation} \label{positivity} \betaegin{array}{l} \Theta({\mathfrak a}mmaz) > 0, \quad -1 < {\mathfrak a}mmaz <1,\\ \\ \Theta(\pm 1) = 0,\\ \\ \Theta'(-1) = 2/m_2\quad \quad \Theta'(1)=-2/m_1. \end{array} \end{equation} \betaegin{remark} This construction is based on the symplectic viewpoint where different choices of $\Theta$ yields different complex structures all compatible with the same fixed symplectic form $\omega$. However, for each $\Theta$ there is an $S^1$-equivariant diffeomorphism pulling back $J$ to the original fixed complex structure on $S_n$ in such a way that the K\"ahler form of the new K\"ahler metric is in the same cohomology class as $\omega$ \cite{ACGT08}. Therefore, with all else fixed, we may view the set of the functions $\Theta$ satisfying \eqref{positivity} as parametrizing a family of K\"ahler metrics within the same K\"ahler class of $(S_n,\Delta)$. \end{remark} \betaigskip The K\"ahler class $\Omega_{\mathbf r} = [\omega]$ of an admissible metric is also called {\it admissible} and is uniquely determined by the parameter $r$, once the data associated with $S_n$ (i.e. $d_N$, $s_{N_n}$, $g_{N_n}$ etc.) is fixed. In fact, \betaegin{equation}\label{admKahclass} \Omega_{\mathbf r} = [\omega_{N_n}]/r + 2 \pi PD[D_1+D_2], \end{equation} where $PD$ denotes the Poincar\'e dual. The number $r$, together with the data associated with $S_n$ will be called {\it admissible data}. Define a function $F({\mathfrak a}mmaz)$ by the formula $\Theta({\mathfrak a}mmaz)=F({\mathfrak a}mmaz)/{\mathfrak a}mmap({\mathfrak a}mmaz)$, where ${\mathfrak a}mmap({\mathfrak a}mmaz) =(1 + r {\mathfrak a}mmaz)^{d_{N}}$. Since ${\mathfrak a}mmap({\mathfrak a}mmaz)$ is positive for $-1\leq {\mathfrak a}mmaz \leq1$, conditions \eqref{positivity} are equivalent to the following conditions on $F({\mathfrak a}mmaz)$: \betaegin{equation} \label{positivityF} \betaegin{array}{l} F({\mathfrak a}mmaz) > 0, \quad -1 < {\mathfrak a}mmaz <1,\\ \\ F(\pm 1) = 0,\\ \\ F'(- 1) = 2{\mathfrak a}mmap(-1)/m_2 \quad \quad F'( 1) =-2{\mathfrak a}mmap(1)/m_1. \end{array} \end{equation} \subsection{The Einstein Conditions} A K\"ahler metric is KE if and only if $$\rho - \lambda \omega=0$$ for some constant $\lambda$. From \cite{ApCaGa06} we have that the Ricci form of an admissible metric given by \eqref{g} equals \betaegin{equation}\label{rho} \rho = \rho_{N} - \frac{1}{2} d d^c \log F =s_{N_n}\omega_{N_n} - \frac{1}{2}\frac{F'({\mathfrak a}mmaz)}{{\mathfrak a}mmap({\mathfrak a}mmaz)} \omega_{N_n} -\frac{1}{2}\Bigl(\frac{F'({\mathfrak a}mmaz)}{{\mathfrak a}mmap({\mathfrak a}mmaz)}\Bigr)'({\mathfrak a}mmaz) d{\mathfrak a}mmaz \wedge \tilde{h}eta. 
\end{equation} Thus the KE condition is equivalent to the ODE \betaegin{equation} \label{KEodes} \frac{F'({\mathfrak a}mmaz)}{{\mathfrak a}mmap({\mathfrak a}mmaz)} = 2s_{N_n} - 2 \lambda ({\mathfrak a}mmaz + 1/r). \end{equation} Now \eqref{positivityF} implies the necessary conditions $$ \betaegin{array}{ccl} s_{N_n} - \lambda (-1 + 1/r)& = & 1/m_2\\ \\ s_{N_n}- \lambda (1 + 1/r)&=& -1/m_1, \end{array} $$ which are equivalent to \betaegin{equation}\label{fano} \betaegin{array}{ccl} 2\lambda& = & 1/m_2+1/m_1\\ \\ 2s_{N_n}r &=& (1+r)/m_2 + (1-r)/m_1. \end{array} \end{equation} Since $s_{N_n}r >0$ we see that the base manifold $N$ (not surprisingly) must have positive scalar curvature. If \eqref{fano} is satisfied, then \eqref{KEodes} is equivalent to the ODE: \betaegin{equation} \label{KEode} \frac{F'({\mathfrak a}mmaz)}{{\mathfrak a}mmap({\mathfrak a}mmaz)} = (1-{\mathfrak a}mmaz)/m_2 -(1+{\mathfrak a}mmaz)/m_1 \end{equation} Now it is easy to see that for a solution satisfying \eqref{positivityF} to exist we need \betaegin{equation}\label{KEintegral} \int_{-1}^1 \left((1-{\mathfrak a}mmaz)/m_2 -(1+{\mathfrak a}mmaz)/m_1\right) {{\mathfrak a}mmap({\mathfrak a}mmaz)} d{\mathfrak a}mmaz = 0. \end{equation} On the other hand, if this is satisfied, \betaegin{equation}\label{KEmetricF} F({\mathfrak a}mmaz) := \int_{-1}^{\mathfrak a}mmaz \left((1-t)/m_2 -(1+t)/m_1\right) {{\mathfrak a}mmap(t)} dt \end{equation} would yield a solution of \eqref{KEodes} satisfying all the conditions of \eqref{positivityF}. Setting $s_{N_n}={\mathcal I}_N/n$ in the second equation of \eqref{fano} we have the following result. \betaegin{proposition}\label{KEprop} Given admissible data and a choice of $m_1,m_2$ as above, the admissible metric \eqref{g}, with $\Theta({\mathfrak a}mmaz) = \frac{F({\mathfrak a}mmaz)}{{\mathfrak a}mmap({\mathfrak a}mmaz)}$ and $F({\mathfrak a}mmaz)$ given by \eqref{KEmetricF}, is KE iff $$2r{\mathcal I}_N/n = (1+r)/m_2 + (1-r)/m_1$$ and \eqref{KEintegral} are both satisfied. \end{proposition} \betaegin{lemma} For the log pair $(S_n,\Delta)$ with $${\mathfrak a}mmarD= (1-1/m_1)D_1+(1-1/m_2)D_2$$ the orbifold first Chern class equals $$c_1^{orb}(S_n,\Delta) = c_1(N) + \frac{1}{m_1}PD(D_1)+\frac{1}{m_2}PD(D_2),$$ where $c_1(N)$ is viewed as a pull-back. \end{lemma} \betaegin{proof} The usual argument gives that $$c_1^{orb}(S_n,\Delta) = c_1(S_n) + (\frac{1}{m_1}-1)PD(D_1)+(\frac{1}{m_2}-1)PD(D_2)$$ and the lemma now follows from the fact that $$c_1(S_n) = c_1(N) + PD(D_1)+PD(D_2).$$ One can verify the last fact by using the explicit Ricci form above for some convenient choice of admissible metric (e.g. take $F({\mathfrak a}mmaz)=(1-{\mathfrak a}mmaz^2){\mathfrak a}mmap({\mathfrak a}mmaz)$) in the case $m_1=m_2=1$, but it should also follow from general principles. \end{proof} In order to prove Theorem \ref{admjoinse}, we now assume that $(S_n,\Delta)$ arises as the quotient of the flow of a quasi-regular Reeb vector field, $\xi_\betafv$, as in Theorem \ref{preSE}. Since $c_1({\mathcal D})=0$, the K\"ahler class of the quotient metric must be a multiple of $c_1^{orb}(S_n,\Delta)$. Since $2\pi c_1(N) = s_{N_n}[\omega_{N_n}] = {\mathcal I}_N [\omega_{N_n}] /n$ and $2\pi(PD(D_1)-PD(D_2))=[\omega_{N_n}]$ (see e.g. 
Section 1.3 in \cite{ACGT08}) we see that $$4\pi c_1^{orb}(S_n,\Delta) = (2{\mathcal I}_N/n + 1/m_1-1/m_2)[\omega_{N_n}] + (1/m_1+1/m_2)2\pi(PD(D_1)+PD(D_2))$$ and thus $\Omega_{\mathbf r}$ is a multiple of $c_1^{orb}(S_n,\Delta)$ precisely when the first condition of Proposition \ref{KEprop}, namely $2r{\mathcal I}_N/n = (1+r)/m_2 + (1-r)/m_1$, is satisfied. Using the formulas for $n$, $m_1$ and $m_2$ in Theorem \ref{preSE} we get $$ r = \frac{w_1v_2-w_2v_1}{w_1v_2+w_2v_1}.$$ \betaegin{remark} As can be seen in our paper \cite{BoTo14a}, the factor relating the K\"ahler class of the quotient metric and the K\"ahler class $\Omega_{\mathbf r}$ only depends on the initial choice of $l_2$ (to be precise the factor is $l_2/4\pi$), so for simplicity we will simply ignore it from here on out. \end{remark} Canceling out common factors, Proposition \ref{KEprop} implies that (up to isotopy) the Sasaki structure associated to $\xi_\betafv$ is $\eta$-Einstein (and thus, up to transverse homothety, SE) iff \betaegin{equation}\label{KEintegral2} \int_{-1}^1 \left((v_1-v_2)-(v_1+v_2){\mathfrak a}mmaz \right)((w_1v_2+w_2v_1)+ (w_1v_2-w_2v_1){\mathfrak a}mmaz)^{d_N}d{\mathfrak a}mmaz = 0. \end{equation} For convenience we set $w_2/w_1=t$ and $v_2/v_1=c$. For the admissible set-up to make sense we assume $c\nablaeq t$. We also assume $0<t< 1$ (i.e. $w_1>w_2$). Now equation \eqref{KEintegral2} is equivalent to \betaegin{equation}\label{KEintegral3} \int_{-1}^1 \left((1-c)-(1+c){\mathfrak a}mmaz \right)((c+t)+ (c-t){\mathfrak a}mmaz)^{d_N}d{\mathfrak a}mmaz = 0. \end{equation} Let $f(c)$ denote the left hand side of \eqref{KEintegral3} and assume $t\in (0,1)\cap{\mathbb Q}$ is fixed. Now it is easy to check that $$f(t) >0\quad\quad\text{and}\quad\quad \lim_{c\rightarrow +\infty}f(c) =-\infty.$$ Thus $\exists c_{t} \in (t,+\infty)$ such that \eqref{KEintegral3} is solved. Usually this $c_t$ will be irrational. As discussed in Remark 5.4 of \cite{BoTo13}, this will correspond to an irregular SE structure. In order to understand this we give the admissible construction on the Sasaki level. Let $M_0$ denote the subspace of $M_{l_1,l_2,\betafw}$ where $\xi_\betafv$ acts freely. Note that $M_0$ is independent of $\betafv$ and is the total space of a circle bundle over $S_{n0}$. Moreover, for any $\betafv$ we have \betaegin{equation}\label{M0} M_{l_1,l_2,\betafw}/M_0=\pi_\betafv^*D_1\betaigsqcup \pi_\betafv^*D_2. \end{equation} We can now pullback the admissible K\"ahler class given by \eqref{admKahclass} to give an admissible transverse K\"ahler class ${\mathfrak a}mmarO_r^T\in H^{1,1}_B({\mathcal F}_{\xi_\betafv})$, as well as the admissible data defined in Equations \eqref{positivity} and \eqref{g} to give the admissible Sasakian data in the quasi-regular case. Explicitly, we have the transverse K\"ahler metric and form \betaegin{equation}\label{gT} g^T=\frac{1+r{\mathfrak a}mmatz}{r}\pi_\betafv^*g_{N_n}+\frac {d{\mathfrak a}mmatz^2} {\Theta ({\mathfrak a}mmatz)}+\Theta ({\mathfrak a}mmatz)(\pi_\betafv^*\tilde{h}eta)^2,\quad d\eta_\betafv = \frac{1+r{\mathfrak a}mmatz}{r}\pi_\betafv^*\omega_{N_n} + d{\mathfrak a}mmatz \wedge \pi_\betafv^*\tilde{h}eta, \end{equation} where ${\mathfrak a}mmatz:M_{l_1,l_2,\betafw}\ra{1.6} [-1,1]$ is the moment map of the lifted circle action, that is, the lift of the moment map ${\mathfrak a}mmaz$. 
Furthermore, $\Theta({\mathfrak a}mmatz)$ satisfies its previous conditions, and it is important to realize that the only dependence of the admissible Sasakian data on $\betafv$ is through the boundary conditions $$ \Theta'(-1) = 2/m_2,\quad \quad \Theta'(1)=-2/m_1, \qquad m_i=v_im.$$ We then get a Sasaki metric in the usual way, namely $g_\betafv=g^T+\eta_\betafv\otimes \eta_\betafv$ together with the full Sasakian structure ${\oldmathcal S}_\betafv=(\xi_\betafv,\eta_\betafv,\Phi_\betafv,g_v)$. Although this construction was done for a pair of relatively prime positive integers $v_1,v_2$ we see that by continuity all the data in the construction makes perfect sense on $M_{l_1,l_2,\betafw}$ for any pair of positive real numbers $v_1,v_2$. This defines the admissible Sasaki data in the case of irregular Sasakian structures. Thus, an irregular solution to Equation \eqref{KEintegral3} gives a positive $\eta$-Einstein metric on the Sasaki manifold and hence an SE metrics by a transverse homothety. \betaegin{example} Although the majority of the SE structures obtained in this paper are irregular, we can, however, produce many quasi-regular SE cases as follows: Set $c=kt$. Then \eqref{KEintegral3} is equivalent with \betaegin{equation}\label{KEintegral4} \int_{-1}^1 \left((1-kt)-(1+kt){\mathfrak a}mmaz) \right)((k+1)+ (k-1){\mathfrak a}mmaz)^{d_N}d{\mathfrak a}mmaz = 0. \end{equation} or \betaegin{equation}\label{KEintegral5} t= \frac{\int_{-1}^1 \left(1-{\mathfrak a}mmaz \right)((k+1)+ (k-1){\mathfrak a}mmaz)^{d_N}d{\mathfrak a}mmaz}{k\int_{-1}^1 \left(1+{\mathfrak a}mmaz \right)((k+1)+ (k-1){\mathfrak a}mmaz)^{d_N}d{\mathfrak a}mmaz}. \end{equation} \betaegin{lemma} For $k>1$, $$0<\int_{-1}^1 \left(1-{\mathfrak a}mmaz \right)((k+1)+ (k-1){\mathfrak a}mmaz)^{d_N}d{\mathfrak a}mmaz<\int_{-1}^1 \left(1+{\mathfrak a}mmaz \right)((k+1)+ (k-1){\mathfrak a}mmaz)^{d_N}d{\mathfrak a}mmaz.$$ \end{lemma} \betaegin{proof} The first inequality is obvious and the next is equivalent to $$\int_{-1}^{1} {\mathfrak a}mmaz ((k+1)+ (k-1){\mathfrak a}mmaz)^{d_N}d{\mathfrak a}mmaz >0.$$ By integrating, this in turn is equivalent to $$-d_{N_n} + (2+d_{N_n})k - (2+d_{N_n})k^{d_{N_n}+1} + d_{N_n}k^{d_{N_n}+2} >0.$$ Setting $p(k) = -d_{N_n} + (2+d_{N_n})k - (2+d_{N_n})k^{d_{N_n}+1} + d_{N_n}k^{d_{N_n}+2}$ we observe that $p(1)=p'(1)=0$ while $p\,''(k) > 0$ for all $k>1$. Thus $p(k)>0$ for all $k>1$ and hence the inequality holds. \end{proof} Now it follows that for any given $k\in (1,+\infty)\cap {\mathbb Q}$, $\exists t \in (0,1)\cap {\mathbb Q}$ (determined by \eqref{KEintegral5}) such that if the co-prime integers $w_1$ and $w_2$ are such that $w_2/w_1=t$ and then co-prime integers $v_1$ and $v_2$ are such that $v_2/v_1= kt$ (and $l_1$ and $l_2$ are chosen according to Lemma \ref{c10}) then the ray determined by $(v_1,v_2)$ in the $\betafw$-Sasaki cone contains a quasi-regular SE structure. Note that when $d_N=1$, equation \eqref{KEintegral5} is $$t=\frac{2+k}{k(1+2k)}.$$ The quasi-regular SE solutions \cite{GMSW04a} for $Y^{p,q}$, as described in Example \ref{Ypq}, are recovered by choosing $$k= \frac{q+\sqrt{4p^2-3q^2}}{2(p-q)},$$ assuming, as is prescribed by e.g. Theorem 11.4.5 in \cite{BG05}, that $4p^2-3q^2=n^2$, for some $n \in \mathbbz$. Note that, conversely for $k=a/b$ with co-prime $a>b \in \mathbbz^+$, we have $p=a b + a^2+b^2$ and $q=a^2-b^2$. \end{example} In general, the following result follows from Theorem \ref{admjoinse}. 
\betaegin{proposition}\label{ypqse} The Reeb vector field of the unique Sasaki-Einstein metric of $Y^{p,q}$ lies in the $\betafw$-Sasaki cone with $\betafw$ determined by Equations (\ref{pqw}). \end{proposition} \subsection{Sasaki-Ricci solitons and extremal Sasaki metrics} Although our main focus in this paper has been on SE structures, it is natural to note that generalizing \eqref{KEodes} to \betaegin{equation} \label{KRSode} \frac{F'({\mathfrak a}mmaz)}{{\mathfrak a}mmap({\mathfrak a}mmaz)} - a \frac{F'({\mathfrak a}mmaz)}{{\mathfrak a}mmap({\mathfrak a}mmaz)}= 2s_{N_n} - 2 \lambda ({\mathfrak a}mmaz + 1/r), \end{equation} where $a\in\mathbbr$ is some constant, corresponds to generalizing the KE equation $\rho -\lambda \omega=0$ to the K\"ahler Ricci soliton (KRS) equation $$\rho -\lambda \omega = {\mathcal L}_V\omega,$$ with $V = \frac{a}{2} grad_g {\mathfrak a}mmaz$. By following e.g. Section 3 in \cite{ACGT08b} and adapting it to our more general endpoint conditions \eqref{positivityF} (but letting $d_0=d_\infty=0$), it is now straightforward and completely standard to verify that Proposition \ref{KEprop} generalizes with ``KE'' replaced by ``KRS'', \eqref{KEintegral} replaced by \betaegin{equation}\label{KRSintegral} \int_{-1}^1 e^{-a\, {\mathfrak a}mmaz}\left((1-{\mathfrak a}mmaz)/m_2 -(1+{\mathfrak a}mmaz)/m_1\right) {{\mathfrak a}mmap({\mathfrak a}mmaz)} d{\mathfrak a}mmaz = 0, \end{equation} and \eqref{KEmetricF} replaced by \betaegin{equation}\label{KRSmetricF} F({\mathfrak a}mmaz) :=e^{a\,{\mathfrak a}mmaz} \int_{-1}^{\mathfrak a}mmaz e^{-a\,t} \left((1-t)/m_2 -(1+t)/m_1\right) {{\mathfrak a}mmap(t)} dt. \end{equation} Moreover, equation \eqref{KRSintegral} can always be solved for some $a\in \mathbbr$. Thus we realize that (up to isotopy) the Sasaki structure associated to every single ray, $\xi_\betafv$, in our $\betafw$-Sasaki cone is a Sasaki-Ricci soliton (as defined in \cite{FOW06}). We mention also that Sasaki-Ricci solitons on toric 5-manifolds were studied in \cite{LeTo13}. Our set-up (starting from a join construction) allows for cases where no regular ray in the $\betafw$-Sasaki cone exists. If, however, the given $\betafw$-Sasaki cone does admit a regular ray, then the transverse K\"ahler structure is a smooth K\"ahler Ricci soliton and the existence of an SE metric in some ray of the Sasaki cone is predicted by the work of \cite{MaNa13}. Another generalization of \eqref{KEodes} would be to require that \betaegin{equation}\label{extremal1} F''({\mathfrak a}mmaz) = (1+r {\mathfrak a}mmaz)^{d_N-1} P({\mathfrak a}mmaz), \end{equation} where $P({\mathfrak a}mmaz)$ is a polynomial of degree $2$ satisfying that \betaegin{equation}\label{extremal2} P(-1/r) = 2 d_N s_{N_n} r. \end{equation} It is well known that this corresponds to extremal K\"ahler metrics (see e.g. \cite{ACGT08}). Moreover, similarly to the smooth case, one easily sees that \eqref{extremal1} with \eqref{extremal2} has a unique solution $F({\mathfrak a}mmaz)$ satisfying the endpoint conditions of \eqref{positivityF}. Finally, since $N$ is here a {\em positive} K\"ahler-Einstein metric, this polynomial $F({\mathfrak a}mmaz)$ also satisfies the positivity condition of \eqref{positivityF} by the standard root-counting argument introduced by Hwang \cite{Hwa94} and Guan \cite{Gua95}. Thus (up to isotopy) the Sasakian structure associated to every single ray, $\xi_\betafv$, in our $\betafw$-Sasaki cone is extremal. 
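
\begin{remark}
For the reader's convenience we record the elementary computation behind the $d_N=1$ case of \eqref{KEintegral5} and indicate how the root $c_t$ of \eqref{KEintegral3} may be located in practice. Since
$$\int_{-1}^1 \left(1-{\mathfrak z}\right)\left((k+1)+(k-1){\mathfrak z}\right)d{\mathfrak z} = 2(k+1)-\tfrac{2}{3}(k-1)=\tfrac{4}{3}(k+2)$$
and
$$\int_{-1}^1 \left(1+{\mathfrak z}\right)\left((k+1)+(k-1){\mathfrak z}\right)d{\mathfrak z} = 2(k+1)+\tfrac{2}{3}(k-1)=\tfrac{4}{3}(2k+1),$$
equation \eqref{KEintegral5} with $d_N=1$ indeed reduces to $t=\frac{2+k}{k(1+2k)}$. For general $d_N$ the left hand side $f(c)$ of \eqref{KEintegral3} is a polynomial in $c$, and the root $c_t\in(t,+\infty)$ can be found numerically. The following short script is only a minimal illustration of this (it is not part of the construction; the helper names are ours, and it relies on the standard SciPy routines \texttt{quad} and \texttt{brentq}):
\begin{verbatim}
# Locate the root c_t of the integral condition (KEintegral3)
# for a fixed t in (0,1) and a fixed integer d_N >= 1.
from scipy.integrate import quad
from scipy.optimize import brentq

def f(c, t, dN):
    # left hand side of the integral condition, as a function of c
    integrand = lambda z: ((1 - c) - (1 + c) * z) * ((c + t) + (c - t) * z) ** dN
    value, _ = quad(integrand, -1.0, 1.0)
    return value

def c_t(t, dN, c_max=1.0e6):
    # f(t) > 0 while f(c) -> -infinity as c -> +infinity,
    # so there is a sign change, hence a root, in (t, c_max)
    return brentq(lambda c: f(c, t, dN), t, c_max)

print(c_t(0.4, 1))   # ~0.8, i.e. c = k t with k = 2
\end{verbatim}
For instance, with $d_N=1$ and $t=2/5$ the script returns $c_t=4/5=kt$ with $k=2$, in agreement with the quasi-regular discussion above.
\end{remark}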
\partialef$'${$'$} \partialef$'${$'$} \partialef$'${$'$} \partialef$'${$'$} \partialef$'${$'$} \partialef$'${$'$} \partialef$'${$'$} \partialef$'${$'$} \partialef$''$} \def\cprime{$'$} \def\cprime{$'$} \def\cprime{$'${$''$} \partialef$'${$'$} \partialef$'${$'$} \partialef$'${$'$} \partialef$'${$'$} \providecommand{\betaysame}{\leavevmode\hbox to3em{\hrulefill}\tilde{h}inspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \betaegin{thebibliography}{ACGTF08b} \betaibitem[ACG06]{ApCaGa06} Vestislav Apostolov, David M.~J. Calderbank, and Paul Gauduchon, \emph{Hamiltonian 2-forms in {K}\"ahler geometry. {I}. {G}eneral theory}, J. Differential Geom. \textbf{73} (2006), no.~3, 359--412. \MR{2228318 (2007b:53149)} \betaibitem[ACGTF04]{ACGT04} V.~Apostolov, D.~M.~J. Calderbank, P.~Gauduchon, and C.~W. T{\o}nnesen-Friedman, \emph{Hamiltonian $2$-forms in {K}\"ahler geometry. {II}. {G}lobal classification}, J. Differential Geom. \textbf{68} (2004), no.~2, 277--345. \MR{2144249} \betaibitem[ACGTF08a]{ACGT08c} Vestislav Apostolov, David M.~J. Calderbank, Paul Gauduchon, and Christina~W. T{\o}nnesen-Friedman, \emph{Extremal {K}\"ahler metrics on ruled manifolds and stability}, Ast\'erisque (2008), no.~322, 93--150, G{\'e}om{\'e}trie diff{\'e}rentielle, physique math{\'e}matique, math{\'e}matiques et soci{\'e}t{\'e}. II. \MR{2521655 (2010h:32029)} \betaibitem[ACGTF08b]{ACGT08} \betaysame, \emph{Hamiltonian 2-forms in {K}\"ahler geometry. {III}. {E}xtremal metrics and stability}, Invent. Math. \textbf{173} (2008), no.~3, 547--601. \MR{MR2425136 (2009m:32043)} \betaibitem[ACGTF08c]{ACGT08b} \betaysame, \emph{Hamiltonian 2-forms in {K}\"ahler geometry. {IV}. {W}eakly {B}ochner-flat {K}\"ahler manifolds}, Comm. Anal. Geom. \textbf{16} (2008), no.~1, 91--126. \MR{2411469 (2010c:32043)} \betaibitem[BG99]{BG99} C.~P. Boyer and K.~Galicki, \emph{3-{S}asakian manifolds}, Surveys in differential geometry: essays on Einstein manifolds, Surv. Differ. Geom., VI, Int. Press, Boston, MA, 1999, pp.~123--184. \MR{2001m:53076} \betaibitem[BG00]{BG00a} \betaysame, \emph{On {S}asakian-{E}instein geometry}, Internat. J. Math. \textbf{11} (2000), no.~7, 873--909. \MR{2001k:53081} \betaibitem[BG08]{BG05} Charles~P. Boyer and Krzysztof Galicki, \emph{Sasakian geometry}, Oxford Mathematical Monographs, Oxford University Press, Oxford, 2008. \MR{MR2382957 (2009c:53058)} \betaibitem[BGM94]{BGM94} Charles~P. Boyer, Krzysztof Galicki, and Benjamin~M. Mann, \emph{The geometry and topology of {$3$}-{S}asakian manifolds}, J. Reine Angew. Math. \textbf{455} (1994), 183--220. \MR{MR1293878 (96e:53057)} \betaibitem[BGO07]{BGO06} Charles~P. Boyer, Krzysztof Galicki, and Liviu Ornea, \emph{Constructions in {S}asakian geometry}, Math. Z. \textbf{257} (2007), no.~4, 907--924. \MR{MR2342558 (2008m:53103)} \betaibitem[Boy11]{Boy11} Charles~P. Boyer, \emph{Completely integrable contact {H}amiltonian systems and toric contact structures on {$S^2\times S^3$}}, SIGMA Symmetry Integrability Geom. Methods Appl. \textbf{7} (2011), Paper 058, 22. \MR{2861218} \betaibitem[BP14]{BoPa10} Charles~P. Boyer and Justin Pati, \emph{On the equivalence problem for toric contact structures on ${S}^3$-bundles over ${S}^2$}, Pac. Jour. of Math. \textbf{267} (2014), no.~2, 277--324. \betaibitem[BTF13a]{BoTo11} Charles~P. Boyer and Christina~W. 
T{\o}nnesen-Friedman, \emph{Extremal {S}asakian geometry on {$T^2\times S^3$} and related manifolds}, Compos. Math. \textbf{149} (2013), no.~8, 1431--1456. \MR{3103072} \betaibitem[BTF13b]{BoTo12b} \betaysame, \emph{Sasakian manifolds with perfect fundamental groups}, Afr. Diaspora J. Math. \textbf{14} (2013), no.~2, 98--117. \MR{3093238} \betaibitem[BTF14a]{BoTo13} \betaysame, \emph{Extremal {S}asakian geometry on ${ S}^3$-bundles over {R}iemann surfaces}, Int. Math. Res. Not. IMRN (2014), doi:10.1093/139. \betaibitem[BTF14b]{BoTo14a} \betaysame, \emph{The {S}asaki join, {H}amiltonian 2-forms, and constant scalar curvature}, preprint; arXiv:1402.2546 Math.DG (2014). \betaibitem[BW58]{BoWa} W.~M. Boothby and H.~C. Wang, \emph{On contact manifolds}, Ann. of Math. (2) \textbf{68} (1958), 721--734. \MR{22 \#3015} \betaibitem[CLPP05]{CLPP05} M.~Cveti{\v{c}}, H.~L{\"u}, D.~N. Page, and C.~N. Pope, \emph{New {E}instein-{S}asaki spaces in five and higher dimensions}, Phys. Rev. Lett. \textbf{95} (2005), no.~7, 071101, 4. \MR{2167018} \betaibitem[CS12]{CoSz12} Tristan Collins and Gabor Sz\'ekelyhidi, \emph{K-semistability for irregular {S}asakian manifolds}, preprint; arXiv:math.DG/1204.2230 (2012). \betaibitem[Esc05]{Esc05} C.~M. Escher, \emph{A diffeomorphism classification of generalized {W}itten manifolds}, Geom. Dedicata \textbf{115} (2005), 79--120. \MR{2180043 (2006i:57058)} \betaibitem[FOW09]{FOW06} Akito Futaki, Hajime Ono, and Guofang Wang, \emph{Transverse {K}\"ahler geometry of {S}asaki manifolds and toric {S}asaki-{E}instein manifolds}, J. Differential Geom. \textbf{83} (2009), no.~3, 585--635. \MR{MR2581358} \betaibitem[GHP03]{GHP03} G.~W. Gibbons, S.~A. Hartnoll, and C.~N. Pope, \emph{Bohm and {E}instein-{S}asaki metrics, black holes, and cosmological event horizons}, Phys. Rev. D (3) \textbf{67} (2003), no.~8, 084024, 24. \MR{1995313 (2004i:83070)} \betaibitem[GMSW04a]{GMSW04b} J.~P. Gauntlett, D.~Martelli, J.~Sparks, and D.~Waldram, \emph{A new infinite class of {S}asaki-{E}instein manifolds}, Adv. Theor. Math. Phys. \textbf{8} (2004), no.~6, 987--1000. \MR{2194373} \betaibitem[GMSW04b]{GMSW04a} \betaysame, \emph{Sasaki-{E}instein metrics on {$S^2\times S^3$}}, Adv. Theor. Math. Phys. \textbf{8} (2004), no.~4, 711--734. \MR{2141499} \betaibitem[Gua95]{Gua95} Daniel Guan, \emph{Existence of extremal metrics on compact almost homogeneous {K}\"ahler manifolds with two ends}, Trans. Amer. Math. Soc. \textbf{347} (1995), no.~6, 2255--2262. \MR{1285992 (96a:58059)} \betaibitem[Hae84]{Hae84} A.~Haefliger, \emph{Groupo\"\i des d'holonomie et classifiants}, Ast\'erisque (1984), no.~116, 70--97, Transversal structure of foliations (Toulouse, 1982). \MR{86c:57026a} \betaibitem[HS12]{HeSu12b} Weiyong He and Song Sun, \emph{The generalized {F}rankel conjecture in {S}asaki geometry}, preprint; arXiv:math.DG/1209.4026 (2012). \betaibitem[Hwa94]{Hwa94} Andrew~D. Hwang, \emph{On existence of {K}\"ahler metrics with constant scalar curvature}, Osaka J. Math. \textbf{31} (1994), no.~3, 561--595. \MR{1309403 (96a:53061)} \betaibitem[KL05]{KrLu05} Matthias Kreck and Wolfgang L{\"u}ck, \emph{The {N}ovikov conjecture}, Oberwolfach Seminars, vol.~33, Birkh\"auser Verlag, Basel, 2005, Geometry and algebra. \MR{2117411 (2005i:19003)} \betaibitem[KO73]{KoOc73} S.~Kobayashi and T.~Ochiai, \emph{Characterizations of complex projective spaces and hyperquadrics}, J. Math. Kyoto Univ. \textbf{13} (1973), 31--47. 
\MR{47 \#5293} \betaibitem[Kru97]{Kru97} B.~Kruggel, \emph{A homotopy classification of certain {$7$}-manifolds}, Trans. Amer. Math. Soc. \textbf{349} (1997), no.~7, 2827--2843. \MR{97m:55012} \betaibitem[Kru05]{Kru05} \betaysame, \emph{Homeomorphism and diffeomorphism classification of {E}schenburg spaces}, Q. J. Math. \textbf{56} (2005), no.~4, 553--577. \MR{MR2182466 (2006h:53045)} \betaibitem[KS88]{KS88} M.~Kreck and S.~Stolz, \emph{A diffeomorphism classification of {$7$}-dimensional homogeneous {E}instein manifolds with {${\rm SU}(3)\times{\rm SU}(2)\times{\rm U}(1)$}-symmetry}, Ann. of Math. (2) \textbf{127} (1988), no.~2, 373--388. \MR{89c:57042} \betaibitem[LTF13]{LeTo13} Eveline Legendre and Christina~W. T{\o}nnesen-Friedman, \emph{Toric generalized {K}\"ahler-{R}icci solitons with {H}amiltonian 2-form}, Math. Z. \textbf{274} (2013), no.~3-4, 1177--1209. \MR{3078263} \betaibitem[MN13]{MaNa13} Toshiki Mabuchi and Yasuhiro Nakagawa, \emph{New examples of {S}asaki-{E}instein manifolds}, Tohoku Math. J. (2) \textbf{65} (2013), no.~2, 243--252. \MR{3079287} \betaibitem[MS05]{MaSp05b} D.~Martelli and J.~Sparks, \emph{Toric {S}asaki-{E}instein metrics on {$S^2\times S^3$}}, Phys. Lett. B \textbf{621} (2005), no.~1-2, 208--212. \MR{2152673} \betaibitem[RT11]{RoTh11} Julius Ross and Richard Thomas, \emph{Weighted projective embeddings, stability of orbifolds, and constant scalar curvature {K}\"ahler metrics}, J. Differential Geom. \textbf{88} (2011), no.~1, 109--159. \MR{2819757} \betaibitem[Spa11]{Spa10} James Sparks, \emph{Sasaki-{E}instein manifolds}, Surveys in differential geometry. {V}olume {XVI}. {G}eometry of special holonomy and related topics, Surv. Differ. Geom., vol.~16, Int. Press, Somerville, MA, 2011, pp.~265--324. \MR{2893680 (2012k:53082)} \betaibitem[WZ90]{WaZi90} M.~Y. Wang and W.~Ziller, \emph{Einstein metrics on principal torus bundles}, J. Differential Geom. \textbf{31} (1990), no.~1, 215--248. \MR{91f:53041} \end{thebibliography} \end{document}
\begin{document} \centerline{} \centerline{} \centerline {\Large{\bf Local null controllability of the N-dimensional}} \centerline{} \centerline{\Large{\bf Navier-Stokes system with N-1 scalar controls}} \centerline{} \centerline{\Large{\bf in an arbitrary control domain}} \centerline{} \centerline{\bf {Nicol\'as Carre\~no}} \centerline{} \centerline{Universit\'e Pierre et Marie Curie-Paris 6} \centerline{UMR 7598 Laboratoire Jacques-Louis Lions, Paris, F-75005 France} \centerline{[email protected]} \centerline{} \centerline{\bf {Sergio Guerrero}} \centerline{} \centerline{Universit\'e Pierre et Marie Curie-Paris 6} \centerline{UMR 7598 Laboratoire Jacques-Louis Lions, Paris, F-75005 France} \centerline{[email protected]} \newtheorem{Theorem}{\quad Theorem}[section] \newtheorem{Definition}[Theorem]{\quad Definition} \newtheorem{Corollary}[Theorem]{\quad Corollary} \newtheorem{Proposition}[Theorem]{\quad Proposition} \newtheorem{Lemma}[Theorem]{\quad Lemma} \newtheorem{Example}[Theorem]{\quad Example} \newtheorem{Remark}[Theorem]{\quad Remark} \numberwithin{equation}{section} \begin{abstract} In this paper we deal with the local null controllability of the $N-$ dimensional Navier-Stokes system with internal controls having one vanishing component. The novelty of this work is that no condition is imposed on the control domain. \end{abstract} {\bf Subject Classification:} 35Q30, 93C10, 93B05 \\ {\bf Keywords:} Navier-Stokes system, null controllability, Carleman inequalities \section{Introduction} Let $\Om$ be a nonempty bounded connected open subset of ${\bf R}^N$ ($N=2$ or $3$) of class $C^{\infty}$. Let $T>0$ and let $\om\subset\Om$ be a (small) nonempty open subset which is the control domain. We will use the notation $Q=\Om\times(0,T)$ and $\Sigma=\partial\Om\times (0,T)$. We will be concerned with the following controlled Navier-Stokes system: \begin{equation}\label{eq:NS} \left\lbrace \begin{array}{ll} y_t - \displaystyleelta y + (y\cdot \nabla)y + \nabla p = v\1_{\om} & \mbox{ in }Q, \\ \nabla\cdot y = 0 & \mbox{ in }Q, \\ y = 0 & \mbox{ on }\Sigma, \\ y(0) = y^0 & \mbox{ in }\Om, \end{array}\right. \end{equation} where $v$ stands for the control which acts over the set $\om$. The main objective of this work is to obtain the local null controllability of system \eqref{eq:NS} by means of $N-1$ scalar controls, i.e., we will prove the existence of a number $\delta>0$ such that, for every $y^0\in X$ ($X$ is an appropriate Banach space) satisfying $$\|y^0\|_X \leq \delta,$$ and every $i\in\{1,\dots,N\}$, we can find a control $v$ in $L^2(\om\times(0,T))^N$ with $v_i\equiv 0$ such that the corresponding solution to \eqref{eq:NS} satisfies $$y(T)=0 \mbox{ in }\Om.$$ This result has been proved in \cite{E&S&O&P-N-1} when $\overline{\om}$ intersects the boundary of $\Om$. Here, we remove this geometric assumption and prove the null controllability result for any nonempty open set $\om\subset\Om$. A similar result was obtained in \cite{CorGue} for the Stokes system. Let us recall the definition of some usual spaces in the context of incompressible fluids: $$V=\{y\in H^1_0(\Om)^N:\,\nabla\cdot y=0 \mbox{ in }\Om \}$$ and $$H=\{y\in L^2(\Om)^N:\,\nabla\cdot y=0 \mbox{ in }\Om,\,y\cdot n =0 \mbox{ on }\partial\Om\}.$$ Our main result is given in the following theorem: \begin{Theorem}\label{teo:nullcontrol} Let $i\in\{1,\dots,N\}$. 
Then, for every $T>0$ and $\om\subset\Om$, there exists $\delta>0$ such that, for every $y^0\in V$ satisfying $$\|y^0\|_V\leq \delta,$$ we can find a control $v\in L^2(\om\times(0,T))^N$, with $v_i\equiv 0$, and a corresponding solution $(y,p)$ to \eqref{eq:NS} such that $$y(T)=0,$$ i.e., the nonlinear system \eqref{eq:NS} is locally null controllable by means of $N-1$ scalar controls for an arbitrary control domain. \end{Theorem} \begin{Remark}\label{rem1} For the sake of simplicity, we have taken the initial condition in a more regular space than usual. However, following the same arguments as in \cite{E&S&O&P} and \cite{E&S&O&P-N-1}, we can get the same result by considering $y^0\in H$ for $N=2$ and $y^0\in H\cap L^4(\Om)^3$ for $N=3$. \end{Remark} To prove Theorem \ref{teo:nullcontrol}, we follow a standard approach (see for instance \cite{OlegN-S},\cite{E&S&O&P} and \cite{E&S&O&P-N-1}). We first deduce a null controllability result for a linear system associated to \eqref{eq:NS}: \begin{equation}\label{eq:Stokes} \left\lbrace \begin{array}{ll} y_t - \displaystyleelta y + \nabla p = f + v\1_{\om} & \mbox{ in }Q, \\ \nabla\cdot y = 0 & \mbox{ in }Q, \\ y = 0 & \mbox{ on }\Sigma, \\ y(0) = y^0 & \mbox{ in }\Om, \end{array}\right. \end{equation} where $f$ will be taken to decrease exponentially to zero in $T$. We first prove a suitable Carleman estimate for the adjoint system of \eqref{eq:Stokes} (see \eqref{eq:adj-Stokes} below). This will provide existence (and uniqueness) to a variational problem, from which we define a solution $(y,p,v)$ to \eqref{eq:Stokes} such that $y(T)=0$ in $\Om$ and $v_i=0$. Moreover, this solution is such that $e^{C/(T-t)}(y,v)\in L^2(Q)^N\times L^2(\om\times (0,T))^N$ for some $C>0$. Finally, by means of an inverse mapping theorem, we deduce the null controllability for the nonlinear system. This paper is organized as follows. In section 2, we establish all the technical results needed to deal with the controllability problems. In section 3, we deal with the null controllability of the linear system \eqref{eq:Stokes}. Finally, in section 4 we give the proof of Theorem \ref{teo:nullcontrol}. \section{Some previous results} In this section we will mainly prove a Carleman estimate for the adjoint system of \eqref{eq:Stokes}. In order to do so, we are going to introduce some weight functions. Let $\om_0$ be a nonempty open subset of ${\bf R}^N$ such that $\overline{\om_0}\subset \om$ and $\eta\in C^2(\overline{\Om})$ such that \begin{equation} |\nabla \eta|>0 \mbox{ in }\overline{\Om}\setminus\om_0,\, \eta>0 \mbox{ in }\Om \mbox{ and } \eta \equiv 0 \mbox{ on }\partial\Om. \end{equation} The existence of such a function $\eta$ is given in \cite{FurIma}. Let also $\ell\in C^{\infty}([0,T])$ be a positive function satisfying \begin{equation} \begin{split} &\ell(t) = t \quad \forall t \in [0,T/4],\,\ell(t) = T-t \quad \forall t \in [3T/4,T],\\ & \ell(t)\leq \ell(T/2),\, \forall t\in [0,T]. \end{split} \end{equation} Then, for all $\lambda\geq 1$ we consider the following weight functions: \begin{equation}\label{pesos} \begin{split} &\alpha(x,t) = \dfrac{e^{2\lambda\|\eta\|_{\infty}}-e^{\lambda\eta(x)}}{\ell^{8}(t)},\, \xi(x,t)=\dfrac{e^{\lambda\eta(x)}}{\ell^8(t)},\\ &\alpha^*(t) = \max_{x\in\overline{\Om}} \alpha(x,t),\, \xi^*(t) = \min_{x\in\overline{\Om}} \xi(x,t),\\ &\widehat\alpha(t) = \min_{x\in\overline{\Om}} \alpha(x,t),\, \widehat\xi(t) = \max_{x\in\overline{\Om}} \xi(x,t). 
\end{split} \end{equation} These exact weight functions were considered in \cite{ImaPuelYam}. We consider now a backwards nonhomogeneous system associated to the Stokes equation: \begin{equation}\label{eq:adj-Stokes} \left\lbrace \begin{array}{ll} -\varphi_t - \displaystyleelta \varphi + \nabla \pi = g & \mbox{ in }Q, \\ \nabla\cdot \varphi = 0 & \mbox{ in }Q, \\ \varphi = 0 & \mbox{ on }\Sigma, \\ \varphi(T) = \varphi^T & \mbox{ in }\Om, \end{array}\right. \end{equation} where $g\in L^2(Q)^N$ and $\varphi^T \in H$. Our Carleman estimate is given in the following proposition. \begin{Proposition}\label{prop:Carleman} There exists a constant $\lambda_0$, such that for any $\lambda>\lambda_0$ there exist two constants $C(\lambda)>0$ and $s_0(\lambda)>0$ such that for any $i\in\{1,\dots,N\}$, any $g\in L^2(Q)^N$ and any $\varphi^T \in H$, the solution of \eqref{eq:adj-Stokes} satisfies \begin{equation}\label{eq:Carleman} \begin{split} s^4\iint\limits_{Q} e^{-5s\alpha^*} (\xi^*)^4 |\varphi|^2 dx\,dt &\leq C \left( \iint\limits_{Q} e^{-3s \alpha^*}|g|^2 dx\,dt \right. \\ &\left.+ s^7 \sum_{j=1, j\neq i}^{N} \int\limits_0^T\int\limits_{\om}e^{-2s\widehat\alpha - 3s\alpha^*}(\widehat\xi)^7|\varphi_j|^2 dx\,dt \right) \end{split} \end{equation} for every $s\geq s_0$. \end{Proposition} The proof of inequality \eqref{eq:Carleman} is based on the arguments in \cite{CorGue}, \cite{E&S&O&P} and a Carleman inequality for parabolic equations with non-homogeneous boundary conditions proved in \cite{ImaPuelYam}. In \cite{CorGue}, the authors take advantange of the fact that the laplacian of the pressure is zero, but this is not the case here. Some arrangements of equation \eqref{eq:adj-Stokes} have to be made in order to follow the same strategy. More details are given below. Before giving the proof of Proposition \ref{prop:Carleman}, we present some technical results. We first present a Carleman inequality proved in \cite{ImaPuelYam} for parabolic equations with nonhomogeneous boundary conditions. To this end, let us introduce the equation \begin{equation}\label{eq:heatnonhom} u_t - \displaystyleelta u = f_0 + \sum_{j=1}^N \partial_j f_j \mbox{ in }Q, \end{equation} where $f_0,f_1,\dots,f_N\in L^2(Q)$. We have the following result. \begin{Lemma}\label{teo:Cnonhom} There exists a constant $\widehat{\lambda_0}$ only depending on $\Om$, $\om_0$, $\eta$ and $\ell$ such that for any $\lambda>\widehat{\lambda_0}$ there exist two constants $C(\lambda)>0$ and $\widehat{s}(\lambda)$, such that for every $s\geq \widehat{s}$ and every $u\in L^2(0,T;H^1(\Om))\cap H^1(0,T;H^{-1}(\Om))$ satisfying \eqref{eq:heatnonhom}, we have \begin{multline}\label{eq:Cnonhom} \dfrac{1}{s}\iint\limits_Q e^{-2s\alpha} \dfrac{1}{\xi}|\nabla u|^2 dx\,dt + s\iint\limits_Q e^{-2s\alpha} \xi |u|^2 dx\,dt \\ \leq C\left( s^{-\frac{1}{2}} \|e^{-s\alpha}\xi^{-\frac{1}{4}}u\|^2_{H^{\frac{1}{4},\frac{1}{2}}(\Sigma)} + s^{-\frac{1}{2}} \|e^{-s\alpha}\xi^{-\frac{1}{8}}u\|^2_{L^2(\Sigma)} \right.\\ + \frac{1}{s^2}\iint\limits_Q e^{-2s\alpha}\frac{|f_0|^2}{\xi^2}dx\,dt +\sum_{j=1}^N\iint\limits_Q e^{-2s\alpha}|f_j|^2 dx\,dt \\ \left.+ s\int\limits_0^T\int\limits_{\om_0}e^{-2s\alpha}\xi|u|^2 dx\,dt \right). \end{multline} \end{Lemma} Recall that $$\|u\|_{H^{\frac{1}{4},\frac{1}{2}}(\Sigma)}=\left(\|u\|^2_{H^{1/4}(0,T;L^2(\partial\Om))} + \|u\|^2_{L^{2}(0,T;H^{1/2}(\partial\Om))} \right)^{1/2}.$$ The next technical result is a particular case of Lemma 3 in \cite{CorGue}. 
\begin{Lemma}\label{lemma1} There exists $C>0$ depending only on $\Om$, $\om_0$, $\eta$ and $\ell$ such that, for every $T>0$ and every $u\in L^2(0,T;H^1(\Om))$, \begin{multline}\label{eq:lemma1} s^3\lambda^2\iint\limits_Q e^{-2s\alpha}\xi^3|u|^2dx\,dt \\ \leq C \left( s \iint\limits_Q e^{-2s\alpha}\xi|\nabla u|^2dx\,dt + s^3\lambda^2 \int\limits_0^T\int\limits_{\om_0} e^{-2s\alpha}\xi^3|u|^2dx\,dt \right), \end{multline} for every $\lambda\geq \widehat{\lambda_1}$ and every $s\geq C$. \end{Lemma} \begin{Remark} In \cite{CorGue}, slightly different weight functions are used to prove Lemma \ref{lemma1}. Namely, the authors take $\ell(t)=t(T-t)$. However, this does not change the result since the important property is that $\ell$ goes to $0$ polynomially when $t$ tends to $0$ and $T$. \end{Remark} The next lemma can be readily deduced from the corresponding result for parabolic equations in \cite{FurIma}. \begin{Lemma}\label{lemma2} Let $\zeta(x)=\exp(\lambda\eta(x))$ for $x\in\Om$. Then, there exists $C>0$ depending only on $\Om$, $\om_0$ and $\eta$ such that, for every $u\in H^1_0(\Om)$, \begin{multline}\label{eq:lemma2} \tau^6\lambda^8\int\limits_{\Om}e^{2\tau\zeta}\zeta^6|u|^2 dx + \tau^4\lambda^6\int\limits_{\Om}e^{2\tau\zeta}\zeta^4|\nabla u|^2 dx \\ \leq C \left( \tau^3\lambda^4\int\limits_{\Om}e^{2\tau\zeta}\zeta^3|\displaystyleelta u|^2 dx + \tau^6\lambda^8\int\limits_{\om_0}e^{2\tau\zeta}\zeta^6| u|^2 dx \right), \end{multline} for every $\lambda\geq \widehat{\lambda_2}$ and every $\tau\geq C$. \end{Lemma} The final technical result concerns the regularity of the solutions to the Stokes system that can be found in \cite{Lady} (see also \cite{Temam}). \begin{Lemma} For every $T>0$ and every $f\in L^2(Q)^N$, there exists a unique solution $u\in L^2(0,T;H^2(\Om)^N)\cap H^1(0,T;H)$ to the Stokes system \begin{equation*} \left\lbrace \begin{array}{ll} u_t - \displaystyleelta u + \nabla p = f & \mbox{ in }Q, \\ \nabla\cdot u = 0 & \mbox{ in }Q, \\ u = 0 & \mbox{ on }\Sigma, \\ u(0) = 0 & \mbox{ in }\Om, \end{array}\right. \end{equation*} for some $p\in L^2(0,T;H^1(\Om))$, and there exists a constant $C>0$ depending only on $\Om$ such that \begin{equation}\label{eq:regularity1} \|u\|^2_{L^2(0,T;H^2(\Om)^N)} + \|u\|^2_{H^1(0,T;L^2(\Om)^N)}\leq C \| f\|^2_{L^2(Q)^N}. \end{equation} Furthermore, if $f\in L^2(0,T;H^2(\Om)^N)\cap H^1(0,T;L^2(\Om)^N)$, then\\ $u\in L^2(0,T;H^4(\Om)^N)\cap H^1(0,T;H^2(\Om)^N)$ and there exists a constant $C>0$ depending only on $\Om$ such that \begin{equation}\label{eq:regularity2} \begin{split} \|u\|^2_{L^2(0,T;H^4(\Om)^N)} &+ \|u\|^2_{H^1(0,T;H^2(\Om)^N)} \\ &\leq C( \| f\|^2_{L^2(0,T;H^2(\Om)^N)} + \| f\|^2_{H^1(0,T;L^2(\Om)^N)}). \end{split} \end{equation} \end{Lemma} \subsection{Proof of Proposition \ref{prop:Carleman}} Without any lack of generality, we treat the case of $N=2$ and $i=2$. The arguments can be easily extended to the general case. We follow the ideas of \cite{CorGue}. In that paper, the arguments are based on the fact that $\displaystyleelta \pi=0$, which is not the case here (recall that $\pi$ appears in \eqref{eq:adj-Stokes}). For this reason, let us first introduce $(w,q)$ and $(z,r)$, the solutions of the following systems: \begin{equation}\label{eq:u1} \left\lbrace \begin{array}{ll} -w_t - \displaystyleelta w + \nabla q = \rho g & \mbox{ in }Q, \\ \nabla\cdot w = 0 & \mbox{ in }Q, \\ w = 0 & \mbox{ on }\Sigma, \\ w(T) = 0 & \mbox{ in }\Om, \end{array}\right. 
\end{equation} and \begin{equation}\label{eq:u2} \left\lbrace \begin{array}{ll} -z_t - \displaystyleelta z + \nabla r = -\rho' \varphi & \mbox{ in }Q, \\ \nabla\cdot z = 0 & \mbox{ in }Q, \\ z = 0 & \mbox{ on }\Sigma, \\ z(T) = 0 & \mbox{ in }\Om, \end{array}\right. \end{equation} where $\rho(t)=e^{-\frac{3}{2}s\alpha^*}$. Adding \eqref{eq:u1} and \eqref{eq:u2}, we see that $(w+z,q+r)$ solves the same system as $(\rho \varphi,\rho \pi)$, where $(\varphi,\pi)$ is the solution to \eqref{eq:adj-Stokes}. By uniqueness of the Stokes system we have \begin{equation}\label{u1+u2} \rho\varphi= w + z \mbox{ and }\rho\pi = q + r. \end{equation} For system \eqref{eq:u1} we will use the regularity estimate \eqref{eq:regularity1}, namely \begin{equation}\label{eq:regularity} \|w\|^2_{L^2(0,T;H^2(\Om)^2)} + \|w\|^2_{H^1(0,T;L^2(\Om)^2)}\leq C \|\rho g\|^2_{L^2(Q)^2}, \end{equation} and for system \eqref{eq:u2} we will use the ideas of \cite{CorGue}. Using the divergence free condition on the equation of \eqref{eq:u2}, we see that $$\displaystyleelta r=0 \mbox{ in }Q.$$ Then, we apply the operator $\nabla\displaystyleelta=(\partial_1\displaystyleelta,\partial_2\displaystyleelta)$ to the equation satisfied by $z_1$ and we denote $\psi:=\nabla\displaystyleelta z_1$. We then have $$-\psi_t - \displaystyleelta \psi= -\nabla(\displaystyleelta(\rho'\varphi_1))\mbox{ in }Q.$$ We apply Lemma \ref{teo:Cnonhom} to this equation and we obtain \begin{multline}\label{eq:Carl1} I(s;\psi):=\dfrac{1}{s}\iint\limits_Q e^{-2s\alpha} \dfrac{1}{\xi}|\nabla \psi|^2 dx\,dt + s\iint\limits_Q e^{-2s\alpha} \xi |\psi|^2 dx\,dt \\ \leq C\left( s^{-\frac{1}{2}} \|e^{-s\alpha}\xi^{-\frac{1}{4}}\psi\|^2_{H^{\frac{1}{4},\frac{1}{2}}(\Sigma)^2} + s^{-\frac{1}{2}} \|e^{-s\alpha}\xi^{-\frac{1}{8}}\psi\|^2_{L^2(\Sigma)^2} \right.\\ \left. + \iint\limits_Q e^{-2s\alpha} |\rho'|^2|\displaystyleelta \varphi_1|^2 dx\,dt + s\int\limits_0^T\int\limits_{\om_0}e^{-2s\alpha}\xi|\psi|^2 dx\,dt \right), \end{multline} for every $\lambda \geq \widehat{\lambda_0}$ and $s\geq \widehat{s}$. We divide the rest of the proof in several steps: \begin{itemize} \item In Step 1, using Lemmas \ref{lemma1} and \ref{lemma2}, we estimate global integrals of $z_1$ and $z_2$ by the left-hand side of \eqref{eq:Carl1}. \item In Step 2, we deal with the boundary terms in \eqref{eq:Carl1}. \item In Step 3, we estimate all the local terms by a local term of $\varphi_1$ and $\epsilon\, I(s;\varphi)$ to conclude the proof. \end{itemize} Now, let us choose $\lambda_0 = \max\{\widehat{\lambda_0},\widehat{\lambda_1},\widehat{\lambda_2}\}$ so that Lemmas \ref{lemma1} and \ref{lemma2} can be applied and fix $\lambda\geq \lambda_0$. In the following, $C$ will denote a generic constant depending on $\Om$, $\om$ and $\lambda$. \textbf{Step 1.} \underline{\textit{Estimate of $z_1$}}. We use Lemma \ref{lemma1} with $u=\displaystyleelta z_1$: \begin{multline}\label{eq:step1-1} s^3\iint\limits_Q e^{-2s\alpha}\xi^3|\displaystyleelta z_1|^2dx\,dt \\ \leq C \left( s \iint\limits_Q e^{-2s\alpha}\xi|\psi|^2dx\,dt + s^3 \int\limits_0^T\int\limits_{\om_0} e^{-2s\alpha}\xi^3|\displaystyleelta z_1|^2dx\,dt \right), \end{multline} for every $s\geq C$. 
Now, we apply Lemma \ref{lemma2} with $u=z_1\in H^1_0(\Om)$ and we get: \begin{multline*} \tau^6\int\limits_{\Om}e^{2\tau\zeta}\zeta^6|z_1|^2 dx + \tau^4\int\limits_{\Om}e^{2\tau\zeta}\zeta^4|\nabla z_1|^2 dx \\ \leq C \left( \tau^3\int\limits_{\Om}e^{2\tau\zeta}\zeta^3|\displaystyleelta z_1|^2 dx + \tau^6\int\limits_{\om_0}e^{2\tau\zeta}\zeta^6| z_1|^2 dx \right), \end{multline*} for every $\tau\geq C$. Now we take $$\tau=\frac{s}{\ell^8(t)}$$ for $s$ large enough so we have $\tau\geq C$. This yields to \begin{multline*} s^6\int\limits_{\Om}e^{2s\xi}\xi^6|z_1|^2 dx + s^4\int\limits_{\Om}e^{2s\xi}\xi^4|\nabla z_1|^2 dx \\ \leq C \left( s^3\int\limits_{\Om}e^{2s\xi}\xi^3|\displaystyleelta z_1|^2 dx + s^6\int\limits_{\om_0}e^{2s\xi}\xi^6| z_1|^2 dx \right),\,t\in(0,T), \end{multline*} for every $s\geq C$. We multiply this inequality by $$\exp\left( -2s\frac{e^{2\lambda\|\eta\|_{\infty}}}{\ell^8(t)} \right),$$ and we integrate in $(0,T)$ to obtain \begin{multline*} s^6\iint\limits_{Q}e^{-2s\alpha}\xi^6|z_1|^2 dx\,dt + s^4\iint\limits_{Q}e^{-2s\alpha}\xi^4|\nabla z_1|^2 dx\,dt \\ \leq C \left( s^3\iint\limits_{Q}e^{-2s\alpha}\xi^3|\displaystyleelta z_1|^2 dx\,dt + s^6\int\limits_0^T\int\limits_{\om_0}e^{-2s\alpha}\xi^6| z_1|^2 dx\,dt \right), \end{multline*} for every $s\geq C$. Combining this with \eqref{eq:step1-1} we get the following estimate for $z_1$: \begin{multline}\label{eq:step1-2} s^6\iint\limits_{Q}e^{-2s\alpha}\xi^6|z_1|^2 dxdt + s^4\iint\limits_{Q}e^{-2s\alpha}\xi^4|\nabla z_1|^2 dxdt + s^3\iint\limits_Q e^{-2s\alpha}\xi^3|\displaystyleelta z_1|^2 dxdt \\ \leq C \left( s \iint\limits_Q e^{-2s\alpha}\xi|\psi|^2dx\,dt + s^3 \int\limits_0^T\int\limits_{\om_0} e^{-2s\alpha}\xi^3|\displaystyleelta z_1|^2dx\,dt \right.\\ \left. + s^6\int\limits_0^T\int\limits_{\om_0}e^{-2s\alpha}\xi^6| z_1|^2 dx\,dt\right), \end{multline} for every $s\geq C$. \underline{\textit{Estimate of $z_2$}}. Now we will estimate a term in $z_2$ by the left-hand side of \eqref{eq:step1-2}. From the divergence free condition on $z$ we find \begin{equation}\label{eq:step1-3} \begin{split} s^4\iint\limits_Q e^{-2s\alpha^*}(\xi^*)^4 |\partial_2 z_2|^2 dx\,dt &= s^4\iint\limits_Q e^{-2s\alpha^*}(\xi^*)^4 |\partial_1 z_1|^2 dx\,dt \\ &\leq s^4\iint\limits_Q e^{-2s\alpha}\xi^4 |\nabla z_1|^2 dx\,dt. \end{split} \end{equation} Since $z_2|_{\partial\Om}=0$ and $\Om$ is bounded, we have that $$\int\limits_{\Om} |z_2|^2 dx\leq C(\Om)\int\limits_{\Om} |\partial_2 z_2| dx,$$ and because $\alpha^*$ and $\xi^*$ do not depend on $x$, we also have $$s^4\iint\limits_Q e^{-2s\alpha^*}(\xi^*)^4 | z_2|^2 dx\,dt\leq C(\Om) s^4\iint\limits_Q e^{-2s\alpha^*}(\xi^*)^4 |\partial_2 z_2|^2 dx\,dt.$$ Combining this with \eqref{eq:step1-3} we obtain \begin{equation}\label{eq:step1-4} s^4\iint\limits_Q e^{-2s\alpha^*}(\xi^*)^4 | z_2|^2 dx\,dt\leq C s^4\iint\limits_Q e^{-2s\alpha}\xi^4 |\nabla z_1|^2 dx\,dt. \end{equation} Now, observe that by \eqref{u1+u2}, \eqref{eq:regularity} and the fact that $s^2 e^{-2s\alpha} (\xi^*)^{9/4}$ is bounded we can estimate the third term in the right-hand side of \eqref{eq:Carl1}. 
Indeed, \begin{multline*} \iint\limits_Q e^{-2s\alpha} |\rho'|^2|\displaystyleelta \varphi_1|^2 dx\,dt = \iint\limits_Q e^{-2s\alpha} |\rho'|^2 |\rho|^{-2} |\displaystyleelta (\rho\varphi_1)|^2 dx\,dt \\ \leq C \left( s^2\iint\limits_Q e^{-2s\alpha} (\xi^*)^{9/4}|\displaystyleelta w_1| dx\,dt + s^2\iint\limits_Q e^{-2s\alpha} (\xi^*)^{9/4}|\displaystyleelta z_1| dx\,dt \right) \\ \leq C \left( \|\rho g\|^2_{L^2(Q)^2} + s^2\iint\limits_Q e^{-2s\alpha} (\xi^*)^{3}|\displaystyleelta z_1| dx\,dt \right). \end{multline*} Putting together \eqref{eq:Carl1}, \eqref{eq:step1-2}, \eqref{eq:step1-4} and this last inequality we have for the moment \begin{multline}\label{eq:endstep1} s^6\iint\limits_{Q}e^{-2s\alpha}\xi^6|z_1|^2 dxdt + s^4\iint\limits_{Q}e^{-2s\alpha^*}(\xi^*)^4| z_2|^2 dxdt + s^3\iint\limits_Q e^{-2s\alpha}\xi^3|\displaystyleelta z_1|^2 dxdt \\ +\dfrac{1}{s}\iint\limits_Q e^{-2s\alpha} \dfrac{1}{\xi}|\nabla \psi|^2 dx\,dt + s\iint\limits_Q e^{-2s\alpha} \xi |\psi|^2 dx\,dt \\ \leq C \left( s^{-\frac{1}{2}} \|e^{-s\alpha}\xi^{-\frac{1}{4}}\psi\|^2_{H^{\frac{1}{4},\frac{1}{2}}(\Sigma)^2} + s^{-\frac{1}{2}} \|e^{-s\alpha}\xi^{-\frac{1}{8}}\psi\|^2_{L^2(\Sigma)^2} \right.\\ + \|\rho g\|^2_{L^2(Q)^2} + s\int\limits_0^T\int\limits_{\om_0}e^{-2s\alpha}\xi|\psi|^2 dx\,dt \\ \left. + s^3 \int\limits_0^T\int\limits_{\om_0} e^{-2s\alpha}\xi^3|\displaystyleelta z_1|^2dx\,dt+ s^6\int\limits_0^T\int\limits_{\om_0}e^{-2s\alpha}\xi^6| z_1|^2 dx\,dt\right), \end{multline} for every $s\geq C$. \textbf{Step 2.} In this step we deal with the boundary terms in \eqref{eq:endstep1}. First, we treat the second boundary term in \eqref{eq:endstep1}. Notice that, since $\alpha$ and $\xi$ coincide with $\alpha^*$ and $\xi^*$ respectively on $\Sigma$, \begin{equation*} \begin{split} \|e^{-s\alpha^*}\psi\|^2_{L^2(\Sigma)^2} &\leq C \|s^{\frac{1}{2}}e^{-s\alpha^*}(\xi^*)^{\frac{1}{2}}\psi\|_{L^2(Q)^2}\|s^{-\frac{1}{2}}e^{-s\alpha^*}(\xi^*)^{-\frac{1}{2}}\nabla \psi\|_{L^2(Q)^2} \\ & \leq C\left( s\iint\limits_Q e^{-2s\alpha^*}\xi^*|\psi|^2dx\,dt + \frac{1}{s}\iint\limits_Q e^{-2s\alpha^*}\frac{1}{\xi^*}|\nabla \psi|^2 dx\,dt \right), \end{split} \end{equation*} so $\|e^{-s\alpha^*}\psi\|^2_{L^2(\Sigma)^2}$ is bounded by the left-hand side of \eqref{eq:endstep1}. On the other hand, $$s^{-\frac{1}{2}}\|e^{-s\alpha}\xi^{-\frac{1}{8}}\psi\|^2_{L^2(\Sigma)^2}\leq C s^{-\frac{1}{2}} \|e^{-s\alpha}\psi\|^2_{L^2(\Sigma)^2},$$ and we can absorb $s^{-\frac{1}{2}}\|e^{-s\alpha}\psi\|^2_{L^2(\Sigma)^2}$ by taking $s$ large enough. Now we treat the first boundary term in the right-hand side of \eqref{eq:endstep1}. We will use regularity estimates to prove that $z_1$ multiplied by a certain weight function is regular enough. First, let us observe that from \eqref{u1+u2} we readily have \begin{multline*} s^4 \iint\limits_Q e^{-2s\alpha^*}(\xi^*)^4|\rho|^2|\varphi|^2 dx\,dt \\ \leq 2 s^4 \iint\limits_Q e^{-2s\alpha^*}(\xi^*)^4|w|^2 dx\,dt + 2 s^4 \iint\limits_Q e^{-2s\alpha^*}(\xi^*)^4|z|^2 dx\,dt. \end{multline*} Using the regularity estimate \eqref{eq:regularity} for $w$ we have \begin{multline}\label{eq:step2-2} s^4 \iint\limits_Q e^{-2s\alpha^*}(\xi^*)^4|\rho|^2|\varphi|^2 dx\,dt \\ \leq C \left( \|\rho g\|^2_{L^2(Q)^2} + s^4 \iint\limits_Q e^{-2s\alpha^*}(\xi^*)^4|z|^2 dx\,dt\right), \end{multline} thus the term $\|s^2e^{-s\alpha^*}(\xi^*)^2\rho\varphi\|^2_{L^2(Q)^2}$ is bounded by the left-hand side of \eqref{eq:endstep1} and $\|\rho g\|^2_{L^2(Q)^2}$. 
We define now $$\widetilde{z}:=se^{-s\alpha^*}(\xi^*)^{7/8}z,\,\widetilde{r}:=se^{-s\alpha^*}(\xi^*)^{7/8}r.$$ From \eqref{eq:u2} we see that $(\widetilde{z},\widetilde{r})$ is the solution of the Stokes system: \begin{equation*} \left\lbrace \begin{array}{ll} -\widetilde{z}_t - \displaystyleelta \widetilde{z} + \nabla \widetilde{r} = -se^{-s\alpha^*}(\xi^*)^{7/8}\rho' \varphi - (se^{-s\alpha^*}(\xi^*)^{7/8})_t z & \mbox{ in }Q, \\ \nabla\cdot \widetilde{z} = 0 & \mbox{ in }Q, \\ \widetilde{z} = 0 & \mbox{ on }\Sigma, \\ \widetilde{z}(T) = 0 & \mbox{ in }\Om. \end{array}\right. \end{equation*} Taking into account that $$|\alpha^*_t| \leq C (\xi^*)^{9/8},\,|\rho'|\leq C s\rho (\xi^*)^{9/8}$$ and the regularity estimate \eqref{eq:regularity1} we have \begin{multline*} \|\widetilde{z}\|^2_{L^2(0,T;H^2(\Om)^2)\cap H^1(0,T;L^2(\Om)^2)} \\ \leq C \left( \|s^2e^{-s\alpha^*}(\xi^*)^2\rho\varphi\|^2_{L^2(Q)^2} + \|s^2e^{-s\alpha^*}(\xi^*)^2 z\|^2_{L^2(Q)^2} \right), \end{multline*} thus, from \eqref{eq:step2-2}, $\|se^{-s\alpha^*}(\xi^*)^{7/8}z\|^2_{L^2(0,T;H^2(\Om)^2)\cap H^1(0,T;L^2(\Om)^2)}$ is bounded by the left-hand side of \eqref{eq:endstep1} and $\|\rho g\|^2_{L^2(Q)^2}$. From \eqref{u1+u2}, \eqref{eq:regularity} and this last inequality we have that \begin{multline*} \|se^{-s\alpha^*}(\xi^*)^{7/8}\rho\varphi\|^2_{L^2(0,T;H^2(\Om)^2)\cap H^1(0,T;L^2(\Om)^2)} \\ \leq C \left( \|\rho g\|^2_{L^2(Q)^2} + \|\widetilde{z}\|^2_{L^2(0,T;H^2(\Om)^2)\cap H^1(0,T;L^2(\Om)^2)} \right), \end{multline*} and thus $\|se^{-s\alpha^*}(\xi^*)^{7/8}\rho\varphi\|^2_{L^2(0,T;H^2(\Om)^2)\cap H^1(0,T;L^2(\Om)^2)}$ is bounded by the left-hand side of \eqref{eq:endstep1} and $\|\rho g\|^2_{L^2(Q)^2}$. Next, let $$\widehat{z}:=e^{-s\alpha^*}(\xi^*)^{-1/4}z,\,\widehat{r}:=e^{-s\alpha^*}(\xi^*)^{-1/4}r.$$ From \eqref{eq:u2}, $(\widehat{z},\widehat{r})$ is the solution of the Stokes system: \begin{equation*} \left\lbrace \begin{array}{ll} -\widehat{z}_t - \displaystyleelta \widehat{z} + \nabla \widehat{r} = -e^{-s\alpha^*}(\xi^*)^{-1/4}\rho' \varphi - (e^{-s\alpha^*}(\xi^*)^{-1/4})_t z & \mbox{ in }Q, \\ \nabla\cdot \widehat{z} = 0 & \mbox{ in }Q, \\ \widehat{z} = 0 & \mbox{ on }\Sigma, \\ \widehat{z}(T) = 0 & \mbox{ in }\Om. \end{array}\right. \end{equation*} From the previous estimates, it is not difficult to see that the right-hand side of this system is in $L^2(0,T;H^2(\Om)^2)\cap H^1(0,T;L^2(\Om)^2)$, and thus, using the regularity estimate \eqref{eq:regularity2}, we have \begin{multline*} \|\widehat{z}\|^2_{L^2(0,T;H^4(\Om)^2)\cap H^1(0,T;H^2(\Om)^2)} \\ \leq C \left( \|se^{-s\alpha^*}(\xi^*)^{7/8}\rho\varphi\|^2_{L^2(0,T;H^2(\Om)^2)\cap H^1(0,T;L^2(\Om)^2)} \right. \\ \left.+ \|se^{-s\alpha^*}(\xi^*)^{7/8} z\|^2_{L^2(0,T;H^2(\Om)^2)\cap H^1(0,T;L^2(\Om)^2)} \right). \end{multline*} In particular, $e^{-s\alpha^*}(\xi^*)^{-1/4}\psi \in L^2(0,T;H^1(\Om)^2)\cap H^1(0,T;H^{-1}(\Om)^2)$ (recall that $\psi=\nabla\displaystyleelta z_1$) and \begin{equation}\label{eq:step2-1} \| e^{-s\alpha^*}(\xi^*)^{-1/4}\psi \|^2_{L^2(0,T;H^1(\Om)^2)} \mbox{ and } \| e^{-s\alpha^*}(\xi^*)^{-1/4}\psi \|^2_{H^1(0,T;H^{-1}(\Om)^2)} \end{equation} are bounded by the left-hand side of \eqref{eq:endstep1} and $\|\rho g\|^2_{L^2(Q)^2}$. 
To end this step, we use the following trace inequality \begin{equation*} \begin{split} &s^{-1/2}\|e^{-s\alpha}\xi^{-\frac{1}{4}}\psi\|^2_{H^{\frac{1}{4},\frac{1}{2}}(\Sigma)^2} = s^{-1/2}\|e^{-s\alpha^*}(\xi^*)^{-\frac{1}{4}}\psi\|^2_{H^{\frac{1}{4},\frac{1}{2}}(\Sigma)^2} \\ & \leq C\,s^{-1/2} \left( \| e^{-s\alpha^*}(\xi^*)^{-1/4}\psi \|^2_{L^2(0,T;H^1(\Om)^2)} + \| e^{-s\alpha^*}(\xi^*)^{-1/4}\psi \|^2_{H^1(0,T;H^{-1}(\Om)^2)} \right). \end{split} \end{equation*} By taking $s$ large enough in \eqref{eq:endstep1}, the boundary term $s^{-1/2}\|e^{-s\alpha}\xi^{-\frac{1}{4}}\psi\|^2_{H^{\frac{1}{4},\frac{1}{2}}(\Sigma)^2}$ can be absorbed by the terms in \eqref{eq:step2-1} and step 2 is finished. Thus, at this point we have \begin{multline}\label{eq:endstep2-2} s^4\iint\limits_{Q}e^{-2s\alpha^*}(\xi^*)^4 |\rho|^2 |\varphi|^2 dx\,dt + s^3\iint\limits_Q e^{-2s\alpha}\xi^3|\displaystyleelta z_1|^2dx\,dt \\ +\dfrac{1}{s}\iint\limits_Q e^{-2s\alpha} \dfrac{1}{\xi}|\displaystyleelta^2 z_1|^2 dx\,dt + s\iint\limits_Q e^{-2s\alpha} \xi |\nabla\displaystyleelta z_1|^2 dx\,dt \\ \leq C \left( \|\rho g\|^2_{L^2(Q)^2} + s^6\int\limits_0^T\int\limits_{\om_0}e^{-2s\alpha}\xi^6| z_1|^2 dx\,dt \right. \\ \left. + s\int\limits_0^T\int\limits_{\om_0}e^{-2s\alpha}\xi|\nabla\displaystyleelta z_1|^2 dx\,dt + s^3 \int\limits_0^T\int\limits_{\om_0} e^{-2s\alpha}\xi^3|\displaystyleelta z_1|^2dx\,dt \right), \\ \end{multline} for every $s\geq C$. \textbf{Step 3.} In this step we estimate the two last local terms in the right-hand side of \eqref{eq:endstep2-2} in terms of local terms of $z_1$ and the left-hand side of \eqref{eq:endstep2-2} multiplied by small constants. Finally, we make the final arrangements to obtain \eqref{eq:Carleman}. We start with the term $\nabla\displaystyleelta z_1$ and we follow a standard approach. Let $\om_1$ be an open subset such that $\om_0\Subset\om_1\Subset\om$ and let $\rho_1 \in C^2_c(\om_1)$ with $\rho_1\equiv 1$ in $\om_0$ and $\rho_1\geq 0$. Then, by integrating by parts we get \begin{equation*} \begin{split} &s\int\limits_0^T\int\limits_{\om_0}e^{-2s\alpha}\xi|\nabla\displaystyleelta z_1|^2 dx\,dt \leq s\int\limits_0^T\int\limits_{\om_1}\rho_1 e^{-2s\alpha}\xi|\nabla\displaystyleelta z_1|^2 dx\,dt\\ &= -s\int\limits_0^T\int\limits_{\om_1}\rho_1 e^{-2s\alpha}\xi \displaystyleelta^2 z_1 \displaystyleelta z_1 dx\,dt + \frac{s}{2} \int\limits_0^T\int\limits_{\om_1} \displaystyleelta(\rho_1 e^{-2s\alpha}\xi) |\displaystyleelta z_1|^2 dx\,dt. \end{split} \end{equation*} Using Cauchy-Schwarz's inequality for the first term and $$|\displaystyleelta(\rho_1 e^{-2s\alpha}\xi)|\leq C s^2e^{-2s\alpha}\xi^3,\, s\geq C$$ for the second one, we obtain for every $\epsilon > 0$ \begin{equation*} \begin{split} &s\int\limits_0^T\int\limits_{\om_0}e^{-2s\alpha}\xi|\nabla\displaystyleelta z_1|^2 dx\,dt \\ &\leq \frac{\epsilon}{s} \int\limits_0^T\int\limits_{\om_1} e^{-2s\alpha}\frac{1}{\xi} |\displaystyleelta^2 z_1|^2 dx\,dt + C(\epsilon)s^3 \int\limits_0^T\int\limits_{\om_1} e^{-2s\alpha}\xi^3 |\displaystyleelta z_1|^2 dx\,dt, \end{split} \end{equation*} for every $s \geq C$. Let us now estimate $\displaystyleelta z_1$. Let $\rho_2 \in C^2_c(\om)$ with $\rho_2\equiv 1$ in $\om_1$ and $\rho_2\geq 0$. 
Then, by integrating by parts we get \begin{multline*} s^3\int\limits_0^T\int\limits_{\om_1}e^{-2s\alpha}\xi^3 |\displaystyleelta z_1|^2 dx\,dt \leq s^3\int\limits_0^T\int\limits_{\om}\rho_2 e^{-2s\alpha}\xi^3|\displaystyleelta z_1|^2 dx\,dt\\ = 2s^3\int\limits_0^T\int\limits_{\om} \nabla(\rho_2 e^{-2s\alpha}\xi^3) \nabla\displaystyleelta z_1 \cdot z_1 dx\,dt +s^3\int\limits_0^T\int\limits_{\om}\displaystyleelta(\rho_2 e^{-2s\alpha}\xi^3)\displaystyleelta z_1 \cdot z_1 dx\,dt \\ +s^3\int\limits_0^T\int\limits_{\om}\rho_2 e^{-2s\alpha}\xi^3 \displaystyleelta^2 z_1 \cdot z_1 dx\,dt. \end{multline*} Using $$|\nabla(\rho_2 e^{-2s\alpha}\xi^3)|\leq C se^{-2s\alpha}\xi^4,\, s\geq C,$$ for the first term in the right-hand side of this last inequality, $$|\displaystyleelta(\rho_2 s^3e^{-2s\alpha}\xi^3)|\leq C s^5e^{-2s\alpha}\xi^5,\, s\geq C,$$ for the second one and Cauchy-Schwarz's inequality we obtain for every $\epsilon > 0$ \begin{equation*} \begin{split} &s^3\int\limits_0^T\int\limits_{\om_1}e^{-2s\alpha}\xi^3|\displaystyleelta z_1|^2 dx\,dt \\ &\leq \epsilon \,\left( \frac{1}{s} \int\limits_0^T\int\limits_{\om} e^{-2s\alpha} \frac{1}{\xi} | \displaystyleelta^2 z_1|^2 dx\,dt + s \int\limits_0^T\int\limits_{\om} e^{-2s\alpha}\xi |\nabla \displaystyleelta z_1|^2 dx\,dt \right. \\ &\left.+ s^3 \int\limits_0^T\int\limits_{\om} e^{-2s\alpha}\xi^3 |\displaystyleelta z_1|^2 dx\,dt\right) + C(\epsilon)s^7 \int\limits_0^T\int\limits_{\om} e^{-2s\alpha}\xi^7 | z_1|^2 dx\,dt, \end{split} \end{equation*} for every $s \geq C$. Finally, from \eqref{u1+u2} and \eqref{eq:regularity} we readily obtain \begin{equation*} \begin{split} & s^7 \int\limits_0^T\int\limits_{\om} e^{-2s\alpha}\xi^7 |z_1|^2 dx\,dt \\ & \leq 2 s^7 \int\limits_0^T\int\limits_{\om} e^{-2s\alpha}\xi^7 |\rho|^2 |\varphi_1|^2 dx\,dt + 2 s^7 \int\limits_0^T\int\limits_{\om} e^{-2s\alpha}\xi^7 |w_1|^2 dx\,dt \\ & \leq 2 s^7 \int\limits_0^T\int\limits_{\om} e^{-2s\alpha}\xi^7 |\rho|^2 |\varphi_1|^2 dx\,dt + C \|\rho g\|^2_{L^2(Q)^2}. \end{split} \end{equation*} This concludes the proof of Proposition \ref{prop:Carleman}. \section{Null controllability of the linear system} Here we are concerned with the null controllability of the system \begin{equation}\label{eq:Stokes2} \left\lbrace \begin{array}{ll} y_t - \displaystyleelta y + \nabla p = f + v\1_{\om} & \mbox{ in }Q, \\ \nabla\cdot y = 0 & \mbox{ in }Q, \\ y = 0 & \mbox{ on }\Sigma, \\ y(0) = y^0 & \mbox{ in }\Om, \end{array}\right. \end{equation} where $y^0\in V$, $f$ is in an appropiate weighted space and the control $v\in L^2(\om\times (0,T))^N$ is such that $v_i=0$ for some $i\in \{1,\dots,N\}$. Before dealing with the null controllability of \eqref{eq:Stokes2}, we will deduce a new Carleman inequality with weights not vanishing at $t=0$. To this end, let us introduce the following weight functions: \begin{equation}\label{pesos2} \begin{split} &\beta(x,t) = \dfrac{e^{2\lambda\|\eta\|_{\infty}}-e^{\lambda\eta(x)}}{\tilde{\ell}^{8}(t)},\, \gamma(x,t)=\dfrac{e^{\lambda\eta(x)}}{\tilde{\ell}^8(t)},\\ &\beta^*(t) = \max_{x\in\overline{\Om}} \beta(x,t),\, \gamma^*(t) = \min_{x\in\overline{\Om}} \gamma(x,t),\\ &\widehat{\beta}(t) = \min_{x\in\overline{\Om}} \beta(x,t),\, \widehat{\gamma}(t) = \max_{x\in\overline{\Om}} \gamma(x,t), \end{split} \end{equation} where \begin{equation*} \tilde{\ell}(t)= \left\lbrace \begin{array}{ll} \|\ell\|_{\infty} & 0\leq t \leq T/2, \\ \ell(t) & T/2< t \leq T. \end{array} \right. 
\end{equation*} \begin{Lemma}\label{lemma:Carleman2} Let $i\in\{1,\dots,N\}$ and let $s$ and $\lambda$ be like in Proposition \ref{prop:Carleman}. Then, there exists a constant $C>0$ (depending on $s$ and $\lambda$) such that every solution $\varphi$ of \eqref{eq:adj-Stokes} satisfies: \begin{multline}\label{eq:Carleman2} \iint\limits_Q e^{-5s\beta^*}(\gamma^*)^4|\varphi|^2 dx\,dt + \|\varphi(0)\|^2_{L^2(\Om)^N} \\ \leq C \left( \iint\limits_Q e^{-3s\beta^*}|g|^2 dx\,dt + \sum_{j=1,j\neq i}^N\int\limits_0^T\int\limits_{\om}e^{-2s\widehat{\beta}-3s\beta^*}\widehat \gamma^7|\varphi_j|^2 dx\,dt \right). \end{multline} \end{Lemma} \textbf{Proof:} We start by an a priori estimate for the Stokes system \eqref{eq:adj-Stokes}. To do this, we introduce a function $\nu\in C^1([0,T])$ such that $$\nu \equiv 1 \mbox{ in }[0,T/2],\, \nu \equiv 0 \mbox{ in } [3T/4,T].$$ We easily see that $(\nu\varphi,\nu\pi)$ satisfies \begin{equation*} \left\lbrace \begin{array}{ll} -(\nu\varphi)_t - \displaystyleelta (\nu\varphi) + \nabla (\nu\varphi) = \nu g - \nu'\varphi& \mbox{ in }Q, \\ \nabla\cdot (\nu\varphi) = 0 & \mbox{ in }Q, \\ (\nu\varphi) = 0 & \mbox{ on }\Sigma, \\ (\nu\varphi)(T) = 0 & \mbox{ in }\Om, \end{array}\right. \end{equation*} thus we have the energy estimate \begin{equation*} \|\nu\varphi\|^2_{L^2(0,T;H^1(\Om)^N)} + \|\nu\varphi\|^2_{L^{\infty}(0,T;L^2(\Om)^N)}\leq C( \|\nu g\|^2_{L^2(Q)^N} + \|\nu' \varphi\|^2_{L^2(Q)^N}), \end{equation*} from which we readily obtain \begin{equation*} \|\varphi \|^2_{L^2(0,T/2;L^2(\Om)^N)} + \|\varphi(0)\|^2_{L^2(\Om)^N} \leq C( \| g\|^2_{L^2(0,3T/4;L^2(\Om)^N)} + \|\varphi\|^2_{L^2(T/2,3T/4;L^2(\Om)^N)}). \end{equation*} From this last inequality, and the fact that $$e^{-3s\beta^*}\geq C>0, \,\forall t\in [0,3T/4] \mbox{ and }e^{-5s\alpha^*}(\xi^*)^4\geq C >0,\, \forall t\in [T/2,3T/4] $$ we have \begin{multline}\label{eq:proofCarl2-1} \int\limits_0^{T/2}\int\limits_{\Om} e^{-5s\beta^*}(\gamma^*)^4 |\varphi|^2 dx\,dt +\|\varphi(0)\|^2_{L^2(\Om)^N}\\ \leq C\left( \int\limits_0^{3T/4}\int\limits_{\Om} e^{-3s\beta^*}|g|^2 dx\,dt + \int\limits_{T/2}^{3T/4}\int\limits_{\Om} e^{-5s\alpha^*}(\xi^*)^4|\varphi|^2 dx\,dt \right). \end{multline} Note that, since $\alpha=\beta$ in $\Om\times (T/2,T)$, we have: \begin{equation*} \begin{split} \int\limits_{T/2}^T\int\limits_{\Om} e^{-5s\beta^*}(\gamma^*)^4 |\varphi|^2 dx\,dt &= \int\limits_{T/2}^T\int\limits_{\Om} e^{-5s\alpha^*}(\xi^*)^4 |\varphi|^2 dx\,dt \\ &\leq C \iint\limits_Q e^{-5s\alpha^*}(\xi^*)^4|\varphi|^2dx\,dt, \end{split} \end{equation*} and by the Carleman inequality of Proposition \ref{prop:Carleman} \begin{multline*} \int\limits_{T/2}^{T}\int\limits_{\Om} e^{-5s\beta^*}(\gamma^*)^4 |\varphi|^2 dx\,dt \\ \leq C\left( \iint\limits_Q e^{-3s\alpha^*}|g|^2 dx\,dt +\sum_{j=1,j\neq i}^N \int\limits_{0}^{T}\int\limits_{\om} e^{-2s\widehat{\alpha}-3s\alpha^*}(\widehat \xi)^7|\varphi_j|^2 dx\,dt \right). \end{multline*} Since $$e^{-3s\beta^*}, e^{-2s\widehat{\beta}-3s\beta^*}\widehat{\gamma}^7 \geq C>0, \,\forall t\in [0,T/2],$$ we can readily get \begin{multline*} \int\limits_{T/2}^{T}\int\limits_{\Om} e^{-5s\beta^*}(\gamma^*)^4 |\varphi|^2 dx\,dt \\ \leq C\left( \iint\limits_Q e^{-3s\beta^*}|g|^2 dx\,dt + \sum_{j=1,j\neq i}^N\int\limits_{0}^{T}\int\limits_{\om} e^{-2s\widehat{\beta}-3s\beta^*}\widehat{\gamma}^7|\varphi_j|^2 dx\,dt \right), \end{multline*} which, together with \eqref{eq:proofCarl2-1}, yields \eqref{eq:Carleman2}. \vskip 1cm Now we will prove the null controllability of \eqref{eq:Stokes2}. 
Actually, we will prove the existence of a solution for this problem in an appropriate weighted space. Let us set \begin{equation*}\label{operatorL} Ly=y_t-\displaystyleelta y \end{equation*} and let us introduce the space, for $N=2 \mbox{ or }3$ and $i\in \{1,\dots,N\}$, \begin{equation*} \begin{array}{l} E_N^i=\{\, (y,p,v): e^{3/2s\beta^*}\,y,\,e^{s\widehat{\beta}+3/2s\beta^*}\widehat \gamma^{-7/2}\,v\1_{\om}\in L^2(Q)^N,\,v_i\equiv 0, \\ \noalign{ }\hskip1cm e^{3/2 s\beta^*}(\gamma^*)^{-9/8}y \in L^2(0,T;H^2(\Om)^N)\cap L^{\infty}(0,T;V),\\ \noalign{ }\hskip1cm \, e^{5/2s\beta^*}(\gamma^*)^{-2}(Ly+\nabla p-v\1_{\om}) \in L^2(Q)^N\,\}. \end{array} \end{equation*} It is clear that $E_N^i$ is a Banach space for the following norm: $$ \begin{array}{l} \displaystyle \|(y,p,v)\|_{E_N^i}=\left( \|e^{3/2s\beta^*}\,y\|^2_{L^2(Q)^N} +\|e^{s\widehat{\beta}+3/2s\beta^*}\widehat\gamma^{-7/2}\,v\1_{\om}\|^2_{L^2(Q)^N}\right.\\ + \|e^{3/2 s\beta^*}(\gamma^*)^{-9/8}\,y\|^2_{L^2(0,T;H^2(\Om)^N)} \displaystyle +\|e^{3/2 s\beta^*}(\gamma^*)^{-9/8}y\|^2_{L^\infty(0,T;V)}\\ \left.+\|e^{5/2s\beta^*}(\gamma^*)^{-2}(Ly+\nabla p -v\1_{\om})\|^2_{L^2(Q)^N} \right)^{1/2} \end{array} $$ \begin{Remark} Observe in particular that $(y,p,v)\in E^i_N$ implies $y(T)=0$ in $\Om$. Moreover, the functions belonging to this space posses the interesting following property: $$e^{5/2s\beta^*}(\gamma^*)^{-2}(y\cdot \nabla)y\in L^2(Q)^N.$$ \end{Remark} \begin{Proposition}\label{prop:null} Let $i\in \{1,\dots,N\}$. Assume that \begin{equation*} y^0\in V \mbox{ and }e^{5/2s\beta^*}(\gamma^*)^{-2}f\in L^2(Q)^N. \end{equation*} Then, we can find a control $v$ such that the associated solution $(y,p)$ to \eqref{eq:Stokes2} satisfies $(y,p,v)\in E_N^i$. In particular, $v_i\equiv 0$ and $y(T)=0$. \end{Proposition} \noindent {\bf Sketch of the proof:} The proof of this proposition is very similar to the one of Proposition~2 in~\cite{E&S&O&P} and Proposition 1 in \cite{E&S&O&P-N-1}, so we will just give the main ideas. Following the arguments in~\cite{FurIma} and~\cite{OlegN-S}, we introduce the space $$P_0=\{\,(\chi,\sigma)\in C^2(\overline Q)^{N+1}:\nabla\cdot \chi=0,\ \chi=0\ \hbox{on}\ \Sigma\,\}$$ and we consider the following variational problem: \begin{equation}\label{48p} a((\widehat \chi,\widehat \sigma),(\chi,\sigma))=\langle G,(\chi,\sigma)\rangle \quad \forall (\chi,\sigma) \in P_0, \end{equation} where we have used the notations $$\begin{array}{l} \displaystyle a((\widehat \chi,\widehat \sigma),(\chi,\sigma))=\iint\limits_Q e^{-3s\beta^*}\,(L^*\widehat \chi+\nabla \widehat \sigma)\cdot(L^*\chi+\nabla \sigma)\,dx\,dt \\ \noalign{ }\displaystyle \qquad+\sum_{j=1,j\neq i}^N\int\limits_0^T\int\limits_{\om} e^{-2s\widehat{\beta}-3s\beta^*}\widehat \gamma^7\,\widehat \chi_j\,\chi_j\,dx\,dt, \end{array}$$ $$\langle G,(\chi,\sigma)\rangle =\iint\limits_Q f\cdot \chi \,dx\,dt+ \int\limits_{\Omega}y^0\cdot \chi(0)\,dx$$ and $L^*$ is the adjoint operator of $L$, i.e. $$ L^*\chi = -\chi_t - \displaystyleelta \chi. $$ It is clear that $a(\cdot\,,\cdot):P_0\times P_0\mapsto{\bf R}$ is a symmetric, definite positive bilinear form on $P_0$. We denote by $P$ the completion of $P_0$ for the norm induced by $a(\cdot\,,\cdot)$. Then $a(\cdot\,,\cdot)$ is well-defined, continuous and again definite positive on $P$. Furthermore, in view of the Carleman estimate \eqref{eq:Carleman2}, the linear form $(\chi,\sigma) \mapsto \langle G,(\chi,\sigma)\rangle$ is well-defined and continuous on $P$. 
Hence, from Lax-Milgram's lemma, we deduce that the variational problem \begin{equation}\label{59} \left\{ \begin{array}{l} \displaystyle a((\widehat \chi,\widehat \sigma),(\chi,\sigma))=\langle G,(\chi,\sigma)\rangle \\ \noalign{ }\displaystyle \forall (\chi,\sigma) \in P, \quad (\widehat \chi,\widehat \sigma) \in P, \end{array} \right. \end{equation} possesses exactly one solution $(\widehat \chi,\widehat \sigma)$. Let $\widehat y$ and $\widehat v$ be given by \begin{equation*} \left\{\begin{array}{ll} \displaystyle \widehat y=e^{-3s\beta^*}(L^*\widehat \chi+\nabla \widehat \sigma),&\mbox{ in }Q, \\ \noalign{ }\displaystyle \widehat v_j=-e^{-2s\widehat{\beta}-3s\beta^*}\widehat\gamma^7\,\widehat \chi_j\quad (j\neq i),\quad \widehat v_i\equiv0&\mbox{ in }\om\times (0,T). \end{array}\right. \end{equation*} Then, it is readily seen that they satisfy $$ \iint\limits_{Q}e^{3s\beta^*} |\widehat y|^2dxdt +\sum_{j=1,j\neq i}^N\int\limits_0^T\int\limits_{\om} e^{2s\widehat{\beta}+3s\beta^*}\widehat \gamma^{-7} |\widehat v_j|^2dxdt = a((\widehat \chi,\widehat \sigma),(\widehat \chi,\widehat \sigma))<+\infty $$ and also that $\widehat y$ is, together with some pressure $\widehat p$, the weak solution (belonging to $L^2(0,T;V)\cap L^{\infty}(0,T;H)$) of the Stokes system \eqref{eq:Stokes2} for $v=\widehat v$. It only remains to check that $$e^{3/2s\beta^*}(\gamma^*)^{-9/8}\widehat y\in L^2(0,T;H^2(\Om)^N)\cap L^{\infty}(0,T;V).$$ To this end, we define the functions $$y^*=e^{3/2 s\beta^*}(\gamma^*)^{-9/8}\,\widehat y, \,p^*=e^{3/2 s\beta^*}(\gamma^*)^{-9/8}\,\widehat p$$ and $$f^*=e^{3/2 s\beta^*}(\gamma^*)^{-9/8}(f+\widehat v\1_{ \om}).$$ Then $(y^*,p^*)$ satisfies \begin{equation}\label{ystar} \left\{\begin{array}{ll} \displaystyle Ly^*+\nabla p^*=f^*+(e^{3/2 s\beta^*}(\gamma^*)^{-9/8})_t\,\widehat y&\mbox{ in }Q,\\ \nabla\cdot y^*=0&\mbox{ in }Q,\\ \noalign{ }\displaystyle y^*=0&\mbox{ on }\Sigma, \\ \noalign{ }\displaystyle y^*(0)=e^{3/2s\beta^*(0)}(\gamma^*(0))^{-9/8}y^0&\mbox{ in }\Omega. \end{array}\right. \end{equation} From the fact that $f^*+(e^{3/2 s\beta^*}(\gamma^*)^{-9/8})_t\,\widehat y \in L^2(Q)^N$ and $y^0\in V$, we have indeed $$ y^*\in L^2(0,T;H^2(\Om)^N)\cap L^{\infty}(0,T;V) $$ (see \eqref{eq:regularity1}). This ends the sketch of the proof of Proposition \ref{prop:null}. \section{Proof of Theorem \ref{teo:nullcontrol}} In this section we give the proof of Theorem \ref{teo:nullcontrol} using similar arguments to those in \cite{OlegN-S} (see also \cite{E&S&O&P} and \cite{E&S&O&P-N-1}). The result of null controllability for the linear system \eqref{eq:Stokes2} given by Proposition \ref{prop:null} will allow us to apply an inverse mapping theorem. Namely, we will use the following theorem (see \cite{ATF}). \begin{Theorem}\label{teo:invmap} Let $B_1$ and $B_2$ be two Banach spaces and let $\mathcal{A}:B_1 \to B_2$ satisfy $\mathcal{A}\in C^1(B_1;B_2)$. Assume that $b_1\in B_1$, $\mathcal{A}(b_1)=b_2$ and that $\mathcal{A}'(b_1):B_1 \to B_2$ is surjective. Then, there exists $\delta >0$ such that, for every $b'\in B_2$ satisfying $\|b'-b_2\|_{B_2}< \delta$, there exists a solution of the equation $$\mathcal{A} (b) = b',\quad b\in B_1.$$ \end{Theorem} We apply this theorem setting, for some given $i\in \{1,\dots,N\}$, $$B_1 = E_N^i,$$ $$B_2 = L^2(e^{5/2 s\beta^*}(\gamma^*)^{-2}(0,T);L^2(\Om)^N) \times V$$ and the operator $$\mathcal{A}(y,p,v) = (Ly + (y\cdot \nabla)y +\nabla p - v\1_{\om},y(0)) $$ for $(y,p,v)\in E_N^i$. 
In order to apply Theorem \ref{teo:invmap}, it remains to check that the operator $\mathcal{A}$ is of class $C^1(B_1;B_2)$. Indeed, notice that all the terms in $\mathcal{A}$ are linear, except for $(y\cdot \nabla)y$. We will prove that the bilinear operator $$((y_1,p_1,v_1),(y_2,p_2,v_2))\to(y_1\cdot \nabla)y_2$$ is continuous from $B_1\times B_1$ to $ L^2(e^{5/2 s\beta^*}(\gamma^*)^{-2}(0,T);L^2(\Om)^N)$. To do this, notice that $e^{3/2 s\beta^*}(\gamma^*)^{-9/8}y \in L^2(0,T;H^2(\Om)^N)\cap L^{\infty}(0,T;V)$ for any $(y,p,v)\in B_1$, so we have $$e^{3/2 s\beta^*}(\gamma^*)^{-9/8}y \in L^2(0,T;L^{\infty}(\Om)^N)$$ and $$\nabla (e^{3/2 s\beta^*}(\gamma^*)^{-9/8}y) \in L^{\infty}(0,T;L^2(\Om)^N).$$ Consequently, we obtain \begin{equation*} \begin{split} &\|e^{5/2 s\beta^*}(\gamma^*)^{-2}(y_1\cdot \nabla)y_2\|_{L^2(Q)^N} \\ &\leq C \|(e^{3/2 s\beta^*}(\gamma^*)^{-9/8}\,y_1\cdot \nabla)e^{3/2 s\beta^*}(\gamma^*)^{-9/8}\,y_2\|_{L^2(Q)^N} \\ &\leq C \|e^{3/2 s\beta^*}(\gamma^*)^{-9/8}y_1\|_{L^2(0,T;L^{\infty}(\Om)^N)}\, \|e^{3/2 s\beta^*}(\gamma^*)^{-9/8}y_2\|_{L^{\infty}(0,T;H^1(\Om)^N)}. \end{split} \end{equation*} Notice that $\mathcal{A}'(0,0,0):B_1\to B_2$ is given by $$\mathcal{A}'(0,0,0)(y,p,v) = (Ly + \nabla p,y(0)),\, \forall (y,p,v)\in B_1,$$ so this functional is surjective in view of the null controllability result for the linear system \eqref{eq:Stokes2} given by Proposition \ref{prop:null}. We are now able to apply Theorem \ref{teo:invmap} for $b_1=(0,0,0)$ and $b_2=(0,0)$. In particular, this gives the existence of a positive number $\delta$ such that, if $\|y(0)\|_V\leq \delta$, then we can find a control $v$ satisfying $v_i\equiv 0$, for some given $i\in \{1,\dots,N\}$, such that the associated solution $(y,p)$ to \eqref{eq:NS} satisfies $y(T)=0$ in $\Om$. This concludes the proof of Theorem \ref{teo:nullcontrol}. \end{document}
\begin{document} \title{Entanglement-enhanced measurement of a completely unknown phase} \author{G.~Y. Xiang} \affiliation{Centre for Quantum Computer Technology, Centre for Quantum Dynamics, Griffith University, Brisbane, 4111, Australia} \author{B.~L. Higgins} \affiliation{Centre for Quantum Computer Technology, Centre for Quantum Dynamics, Griffith University, Brisbane, 4111, Australia} \author{D.~W. Berry} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \author{H.~M. Wiseman} \affiliation{Centre for Quantum Computer Technology, Centre for Quantum Dynamics, Griffith University, Brisbane, 4111, Australia} \author{G.~J. Pryde} \email{[email protected]} \affiliation{Centre for Quantum Computer Technology, Centre for Quantum Dynamics, Griffith University, Brisbane, 4111, Australia} \begin{abstract} The high-precision interferometric measurement of an unknown phase is the basis for metrology in many areas of science and technology. Quantum entanglement provides an increase in sensitivity, but present techniques have only surpassed the limits of classical interferometry for the measurement of small variations about a known phase. Here we introduce a technique that combines entangled states with an adaptive algorithm to precisely estimate a completely unspecified phase, obtaining more information per photon that is possible classically. We use the technique to make the first \textit{ab initio} entanglement-enhanced optical phase measurement. This approach will enable rapid, precise determination of unknown phase shifts using interferometry. \end{abstract} \pacs{03.65.Ta, 42.50.St, 03.67.-a} \maketitle Precise interferometric measurement is vital to many scientific and technological applications. The use of quantum entanglement allows interferometric sensitivity that surpasses the standard quantum limit (SQL)~\cite{Giovannetti2004,WisMil2010}. Experimental demonstrations of entanglement-enhanced sub-SQL interferometry~\cite{Meyer2001,Leibfried2005,Nagata2007,Okamoto2008}, and most theoretical treatments~\cite{Caves1981,Yurke1986,SumPeg90,Holland1993,Sanders1995,Lee2002,Steuernagel2002,Hofmann2006,Cable2007,Dowling2008}, address the goal of obtaining an increased interference fringe gradient. This is suitable for sensing small variations about an already known phase, but does not give a self-contained measurement of an unknown phase anywhere in $[0,2\pi)$. Both tasks are important~\cite{WisMil2010}, but not equivalent, and to move from the phase-\emph{sensing} regime to the phase-\emph{measurement} regime requires one of several nontrivial measurement algorithms~\cite{Higgins2009,Berry2009}. Here, we demonstrate the first sub-SQL \emph{measurement} of an unknown phase using entanglement-enhanced optical interferometry. Our technique uses a ``bottom-up'' approach, making optimal use of whatever (typically imperfect) entanglement is available to obtain the phase estimate most efficiently. Obtaining phase sensitivity by using entanglement yields an in-principle advantage in bandwidth over recent demonstrations of sub-SQL phase measurement using sequences of multiple passes of single photons~\cite{Higgins2007,Higgins2009}. Although such techniques avoid the complexities of generating entangled states, and are suitable for measuring static phase shifts, they are unsuitable for fast measurement because the time $t$ to complete a measurement scales as the total number of photon passes $N$. 
Applications like the measurement of rapidly varying phase shifts, or rapid measurement of multiple samples, require a technique where increasing precision does not significantly decrease bandwidth. This can only be achieved by entangled states. A suitable technique for achieving sub-SQL phase measurement using entangled states is to apply the measurement algorithm of Ref.~\cite{Higgins2007} to a sequence of entangled $n$-photon ``NOON'' states~\cite{Dowling2008,Walther2004,Mitchell2004}, which have optimal phase sensitivity for a given $n$. In this case the measurement time $t$ scales as $\log N$, as opposed to $N$ for the multipass implementation. NOON states, however, are notoriously difficult to generate, even for moderate $n$. Previous investigations into exploiting entanglement-enhanced sensitivity have employed a ``top-down'' approach, starting with a theoretical knowledge of the optimal states and determining how to approximate these experimentally by constructing complex circuits to filter them from more easily produced states, and using only some measurement results. By contrast, we adopt a ``bottom-up'' approach by taking available entangled states and using all measurement results to obtain the most phase information. Our scheme uses Bayesian analysis and optimized adaptive feedback~\cite{BerWis00,Higgins2007}. In contrast to the algorithm of Ref.~\cite{Higgins2007}, we use a general approach that can be applied to any entangled state, including NOON states (should efficient production become available in the future). \begin{figure} \caption{\label{fig:interferometer} \label{fig:interferometer} \end{figure} In this experiment, we consider the states produced by $n$-photon dual Fock state inputs, i.e.\ states of the form $\ket{n/2,n/2}_{a,b}$, to the first beam splitter of an interferometer, as shown in Fig.~\ref{fig:interferometer}. These states have been shown to be capable of phase sensing at the Heisenberg limit, that is, with Fisher length scaling as $1/N$ \cite{Holland1993,Berry2001,Okamoto2008}. We generate these states by spontaneous parametric down-conversion (SPDC) and post-selection on counting a given total number of photons. However any source of pairs of indistinguishable single-mode $n$-photon states could be employed. In our experiment, we use both the 2-photon state $\ket{1,1}$ and the 4-photon state $\ket{2,2}$ (as well as the single-photon input $\ket{1,0}$) at different stages of the measurement protocol. Nonclassical interference of the dual Fock states at the first beam splitter produces photon number entanglement in the two arms of the interferometer, $c$ and $d$. For a $\ket{1,1}_{a,b}$ input, the state inside the interferometer is $(\ket{2,0}_{c,d} + \ket{0,2}_{c,d})/\sqrt{2}$. This is an $n=2$ NOON state, and corresponds to the well-known Hong-Ou-Mandel effect~\cite{Hong1987}. With the unknown random phase shift $\phi$ in one arm of the interferometer, and a controllable ``feedback'' phase shift $\theta$ in the other arm, this state evolves to $(e^{2i\phi}\ket{2,0}_{e,f} + e^{2i\theta}\ket{0,2}_{e,f})/\sqrt{2}$. The phase factor $e^{2i\phi}$ demonstrates phase \emph{super-resolution} \cite{Eisenberg2005,Sun2006,Resch2007}---in contrast to a single photon input $\ket{1,0}_{a,b}$, for which the state acquires a phase factor of only $e^{i\phi}$. Generally, $n$-photon NOON states exhibit $n$-fold super-resolution, and it is this super-resolution that gives such states (in principle) the best possible phase sensitivity. 
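This two-photon case can be checked in one line from the beam-splitter input--output relations. Adopting the symmetric convention $a^\dagger \to (c^\dagger + i\,d^\dagger)/\sqrt{2}$, $b^\dagger \to (i\,c^\dagger + d^\dagger)/\sqrt{2}$ (other conventions change only overall and relative phases, which are irrelevant for the argument), we find
$$
\ket{1,1}_{a,b}=a^\dagger b^\dagger\ket{0}\;\to\;\tfrac{i}{2}\left(c^{\dagger 2}+d^{\dagger 2}\right)\ket{0}
=\tfrac{i}{\sqrt{2}}\left(\ket{2,0}_{c,d}+\ket{0,2}_{c,d}\right),
$$
i.e.\ the amplitude for one photon in each arm vanishes---the Hong--Ou--Mandel effect---and the surviving terms form the $n=2$ NOON state quoted above.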
While nonclassical interference acting on an $\ket{n/2,n/2}_{a,b}$ state continues to generate entangled states as $n$ increases, these states are not NOON states for $n > 2$. The 4-photon $\ket{2,2}_{a,b}$ input results in a state inside the interferometer of $\sqrt{3/8} (\ket{4,0}_{c,d} + \ket{0,4}_{c,d}) - \ket{2,2}_{c,d}/2$, which evolves to $\sqrt{3/8} (e^{4i\phi}\ket{4,0}_{e,f} + e^{4i\theta}\ket{0,4}_{e,f}) - e^{2i(\phi+\theta)}\ket{2,2}_{e,f}/2$. While this state is entangled, and exhibits components with a 4-fold increase in phase resolution, it also contains an extra term ($\ket{2,2}_{e,f}$) with only a 2-fold increase. Interestingly, it was theoretically shown~\cite{Steuernagel2002} and subsequently experimentally demonstrated~\cite{Nagata2007} that NOON-like 4-photon phase super-resolution can be extracted from the state generated by the $n=4$ dual Fock input if post-selection is employed. Refs.~\cite{Nagata2007} and \cite{Okamoto2008}---the latter using an improved experiment and more thorough analysis---show that phase \emph{sensing} below the SQL is possible by this method, even taking into account the discarding of certain results. However, it is obviously not optimal to deliberately throw away phase information. Here, we do not select a subset of output components---instead, we use the full phase information encoded in the state. Our scheme is as follows. We employ a sequence of entangled states---a phase shift $\phi$ is measured using $M_k$ instances of each entangled $n_k$-photon state, where $n_k=2^k$ and $k \in \{0,1,2\}$. We begin with a flat phase probability distribution $P(\phi)=1/2\pi$, but after each measurement is performed, knowledge about the phase is updated by applying Bayes' theorem to $P(\phi)$. The feedback phase $\theta$ is initially random, but after a detection it is always set to minimize the \emph{expected} phase variance after the subsequent detection, following the algorithm of Ref.~\cite{BerWis00}. The total resources used are quantified by the total photon number, $N=\sum_k 2^k M_k$. We perform an exhaustive numerical search to determine the optimal (or near optimal) $M_k$ and $n_k$ for a given $N$. We examine the expected behaviour of an ideal implementation for the various $n$-photon inputs. Single photons ($n_k = 1$) incident on the first beam splitter of the interferometer are sufficient to generate the (trivial) $n=1$ NOON state. The photon number difference $\Delta$ between the two outputs of the final beamsplitter of the interferometer can take two possible values, $\Delta = \pm 1$, which we rewrite as $\Delta = -1+2x$, where $x\in\{0,1\}$. The probabilities for these two outcomes are \begin{equation} P_1 \left( \Delta = -1 + 2x \, | \, \phi, \theta \right) = A_{x,0} + A_{x,1} \cos \left( \phi - \theta \right), \label{eq:P1} \end{equation} where the $2\times2$-matrix $A$ is defined as \begin{equation} A = \frac{1}{2}\left[\matrix{1 & 1 \cr 1 & -1 \cr}\right]. \end{equation} For the $\ket{1,1}_{a,b}$ input ($n_k = 2$), which produces a 2-photon NOON state inside the interferometer, the probabilities for photon detection at the outputs are \begin{equation} P_2 \left( \left| \Delta \right| = 2x \, | \, \phi, \theta \right) = B_{x,0} + B_{x,1} \cos \left[ 2 \left( \phi - \theta \right) \right] \end{equation} where the matrix $B=A$ in this ideal case, and $x\in\{0,1\}$ as above. (In general the sign of $\Delta$ matters only for odd $n$.) 
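To illustrate how the Bayesian updating and the adaptive choice of $\theta$ can be realized numerically, we give a minimal sketch below (in Python, on a discretized phase grid). The grid size, the use of the posterior ``sharpness'' $|\langle e^{i\phi}\rangle|$ as the quantity to be maximized---a standard proxy for minimizing the expected Holevo phase variance when implementing the criterion of Ref.~\cite{BerWis00}---and all function names are our own choices for this illustration, not a description of the actual control software. The same form of likelihood covers the single-photon matrix $A$, the two-photon matrix $B$, and the four-photon matrix $\Gamma$ introduced below.
\begin{verbatim}
import numpy as np

# Phase grid for the discretized posterior P(phi) (grid size is our choice).
PHI = np.linspace(0.0, 2*np.pi, 1024, endpoint=False)

# Ideal coefficient matrices from the text; an experiment would use A', B', Gamma'.
A = 0.5*np.array([[1.0,  1.0], [1.0, -1.0]])        # n = 1, harmonic step m = 1
B = A.copy()                                         # n = 2, harmonic step m = 2
GAMMA = (1/32)*np.array([[11, 12,  9],
                         [12,  0, -12],
                         [ 9, -12,  3]])             # n = 4, harmonic step m = 2

def likelihood(C, m, x, theta):
    """P(x | phi, theta) = sum_y C[x, y] cos(m*y*(phi - theta)) on the grid."""
    y = np.arange(C.shape[1])
    return C[x] @ np.cos(m*np.outer(y, PHI - theta))

def bayes_update(post, C, m, x, theta):
    """One application of Bayes' theorem after detecting outcome x."""
    post = post * likelihood(C, m, x, theta)
    return post / post.sum()

def next_theta(post, C, m, n_candidates=64):
    """Pick theta maximizing the expected posterior sharpness |<e^{i phi}>|
    after the next detection (a proxy for minimizing the expected Holevo
    phase variance, cf. Berry & Wiseman)."""
    best, best_s = 0.0, -np.inf
    for theta in np.linspace(0, 2*np.pi, n_candidates, endpoint=False):
        s = 0.0
        for x in range(C.shape[0]):
            joint = post * likelihood(C, m, x, theta)   # P(x, phi | theta)
            s += abs(np.sum(joint * np.exp(1j*PHI)))
        if s > best_s:
            best, best_s = theta, s
    return best

# Example run with the N = 37 sequence quoted later in the text:
# eight 2-photon states, nine single photons, three 4-photon states.
rng = np.random.default_rng(0)
phi_true = rng.uniform(0, 2*np.pi)
post = np.full(PHI.size, 1.0/PHI.size)               # flat prior P(phi) = 1/(2 pi)
theta = rng.uniform(0, 2*np.pi)                      # initially random feedback
for C, m, reps in [(B, 2, 8), (A, 1, 9), (GAMMA, 2, 3)]:
    for _ in range(reps):
        probs = np.array([np.interp(phi_true, PHI, likelihood(C, m, x, theta))
                          for x in range(C.shape[0])])
        x = rng.choice(C.shape[0], p=probs/probs.sum())  # simulated detection
        post = bayes_update(post, C, m, x, theta)
        theta = next_theta(post, C, m)
phi_est = np.angle(np.sum(post*np.exp(1j*PHI))) % (2*np.pi)
\end{verbatim}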
For the $\ket{2,2}_{a,b}$ input ($n_k = 4$), the probability for each combination of number states at the outputs of the interferometer can be written as \begin{equation} P_4\left( \left| \Delta \right| = 2x \, | \, \phi, \theta \right) = \sum_{y=0}^2 \Gamma_{x,y} \cos \left[ 2y \left( \phi - \theta \right) \right] \end{equation} where $x \in \{0,1,2\}$ and \begin{equation} \Gamma = \frac{1}{32} \left[\matrix{11 & 12 & 9 \cr 12 & 0 & -12 \cr 9 & -12 & 3 \cr}\right]. \label{eq:Gamma} \end{equation} Equations~(\ref{eq:P1})--(\ref{eq:Gamma}) define the probability functions that allow us to construct the Bayesian updating protocol. \begin{figure} \caption{\label{fig:layout} \label{fig:layout} \end{figure} Our experimental demonstration uses a common-spatial-mode polarization interferometer, as in Fig.~\ref{fig:layout}. A type-I BBO crystal is pumped by a frequency-doubled mode-locked Ti:Sapphire laser and coupled to polarization-maintaining optical fibres. The resulting spontaneous parametric down-conversion supplies the interferometer with pairs of 820~nm single photons and pairs of biphotons. One horizontally polarized mode and one vertically polarized mode are combined into a single spatial mode using a polarizing beam splitter. The right- and left-circular polarization modes of this single spatial mode constitute the arms of the interferometer, and contain the 2- and 4-photon entangled states. Phase shifts between these circular polarizations are performed using half-wave plates, implementing the unknown $\phi$ and controllable $\theta$ phases. We implement photon number detection at the outputs of the interferometer by evenly splitting each beam into an array of single-photon detectors. For measurements with single photons, one output arm of the SPDC source is guided directly to a detector, and the single photon is heralded by detection coincident with that detector. While theoretical analyses typically assume idealized states, imperfections in the experimental apparatus lead to non-idealities in the real photon states that are generated. To demonstrate the full power of our approach, we include knowledge of these non-idealities in our Bayesian updating mechanism. Obtaining this knowledge requires careful characterization of our apparatus, which we do by least-squares fits to phase fringe data collected with the system phase $\phi$ absent. From this (see Appendix A for details) we arrive at the experimental detection probability coefficient matrices: $$ A' = \frac{1}{2} \left[\matrix{0.999 & 0.976 \cr 1.001 & -0.976 \cr}\right] \quad B' = \frac{1}{2} \left[\matrix{ 0.989 & 0.940 \cr 1.011 & -0.940 \cr}\right] $$ \begin{equation} \Gamma' = \frac{1}{32} \left[\matrix{ 11.206 & 9.829 & 7.596 \cr 12.901 & 0.595 & -10.192 \cr 7.893 & 10.423 & 2.596 \cr }\right]. \label{eq:ExptChar} \end{equation} It is these experimentally determined coefficients which we use to determine the optimal sequences of input configurations $(n_k, M_k)$. For example, we find that for $N=37$ resources the optimal sequence adaptively measures eight biphotons, followed by nine single photons, and finally three 4-photon states. We experimentally demonstrate our algorithm for a representative sample set of $N\in \{4,9,15,25,37,48\}$ resources---the full set of $(n_k, M_k)$ configurations used for these $N$ can be found in Appendix B. \begin{figure} \caption{\label{fig:data} \label{fig:data} \end{figure} The results of our phase measurements are plotted in Fig.~\ref{fig:data}, together with theoretical predictions. 
Each data point represents 1000 estimates, with error bars showing 95\% confidence intervals calculated using a bootstrap sampling method with $\approx10^6$ samples~\cite{Davison1997}. For comparison, we have also demonstrated a standard-quantum-limited phase measurement scheme with the same experimental layout. This scheme uses only single-photon states, with the (initially random) controllable phase $\theta$ incremented nonadaptively by $\pi/N$ after each measurement. With perfect single-photon visibility this scheme defines the SQL; the imperfect single-photon visibility of the experiment ($\approx 97.6\%$) means that our experimental implementation yields phase uncertainties above the SQL, in agreement with theoretical predictions. The adaptive scheme clearly operates with phase uncertainty below the SQL, with an improvement that increases monotonically with total photon number $N$. For the highest $N$ demonstrated (48) we experimentally measure uncertainty more than 1.86~dB better than the theoretical SQL, and more than 2.85~dB better than the corresponding measured uncertainty of the single-photon scheme for this interferometer visibility. The unknown phase $\phi$ is fixed, but our choice of an initially random $\theta$ ensures equivalence to measuring an unknown phase $\phi \in [0,2\pi)$. Thus these results constitute the first demonstration of a scheme that beats the SQL for the measurement of a random phase using entangled states. It is important to note that, in principle, our bottom-up approach can consider not just entangled states produced from $\ket{n/2,n/2}$ inputs, but any photon number state inside the interferometer, including ideal $n$-photon NOON states. Given such NOON states with $n_k \in \{1,...,2^K\}$, the optimization procedure used here will find a sequence that performs at least as well as the sequence of Ref.~\cite{Higgins2007}, which achieved scaling of phase uncertainty at the fundamental limit of precision due to Heisenberg's uncertainty principle. Other phase-sensitive states~\cite{Chiruvelli2009}, including loss-resistant states \cite{Kacprowicz2009,Dorner2009,Lee2009}, might also be considered using our approach. We have proposed and demonstrated a powerful and general bottom-up approach to the measurement of a random optical phase $\phi \in [0, 2\pi)$, employing Bayesian analysis and optimal adaptive feedback to make the best use of available photon states. This is the first demonstration of sub-SQL measurement of a random phase using entangled states, which can potentially achieve high bandwidth in quantum-enhanced phase measurements, with a wide range of metrological applications. \acknowledgments We thank Jeremy O'Brien for helpful discussions. This work was supported by the Australian Research Council. \appendix \section{Appendix A: Photon number detection} We use two five-detector arrays of single-photon counting modules to implement number-resolved detection. Each output mode $e$ and $f$ is split into 5 separate spatial modes, one for each single-photon detector, using half-wave plates and polarising beam splitters. The half-wave plates are set such that an equal proportion of an output mode is incident on each photon detector for that mode. The layout of the single-photon detectors is asymmetric for logistical reasons. $n$-photon states ($n>1$) are signaled by coincident detection of $n = n_e + n_f$ photons across the detectors, where $n_e$ and $n_f$ represent the number of photons detected in the respective output modes.
With 5 detectors in each of the two output arms of the interferometer, there are a total of $^{10}C_n$ possible coincidence detection patterns that describe an $n$-photon output state---for 4-photon states this gives 210 patterns. The projection probability, that is, the probability that a particular photon output state $\ket{n_e, n_f}_{e,f}$ will be successfully resolved, depends on $n_e$ and $n_f$ even if the individual detectors are unit-efficiency (but not photon-number resolving) photodetectors. For example, in this unit-efficiency case the 4-photon $\ket{2,2}_{e,f}$ state has a projection probability of 0.64 with this detection scheme, whereas the $\ket{4,0}_{e,f}$ state has a projection probability of only 0.096. Like many other experiments, we do not consider loss in our calculation of resources $N$. However, we require that the probability of projection is independent of the particular output state of the interferometer. In addition, a technical limitation means that we can only consider a maximum of 128 patterns at once. For these reasons, we consider only a limited set of patterns, randomly chosen for each measurement result, such that the ultimate probability of detection for each result is approximately independent of the state. We use as many patterns as we can up to the limit of our electronics. To address the remaining discrepancy, we randomly discard a certain small proportion of measurement results in software, before the result can be used in the algorithm. This is equivalent to introducing a controlled state-dependent loss. We emphasize that this solution is a consequence of the imperfect number detection mechanism we use, necessary to simulate perfect detectors, and is not fundamental to our approach. We determine the appropriate proportion of introduced state-dependent loss from our phase fringe characterization of the experiment, which is done with the system phase $\phi$ absent, and given the limited set of detection patterns. From least squares fits to the count rates obtained with $\theta$ varied over the range $[-\pi, \pi]$, we derive three matrices $J$ similar to those of Eq.~\ref{eq:ExptChar}. By taking the first column of the inverse of each matrix we obtain the state-dependent loss probabilities: \begin{center} \begin{tabular}{|c|c|} \hline Detected State & Loss Probability \\ \hline \hline $\ket{1,0}_{e,f}$ & 0 \\ $\ket{0,1}_{e,f}$ & 0.1276 \\ \hline $\ket{1,1}_{e,f}$ & 0.1975 \\ $\ket{2,0}_{e,f}$ or $\ket{0,2}_{e,f}$ & 0 \\ \hline $\ket{2,2}_{e,f}$ & 0.2304 \\ $\ket{3,1}_{e,f}$ or $\ket{1,3}_{e,f}$ & 0.3395 \\ $\ket{4,0}_{e,f}$ or $\ket{0,4}_{e,f}$ & 0 \\ \hline \end{tabular} \end{center} Doing so also ensures the detection probability is independent of the phase, which is a necessary condition of the Bayesian algorithm. We can then apply these probabilities to the fit parameter matrices $J$ to obtain the values of Eq.~\ref{eq:ExptChar}. \section{Appendix B: Sequence configurations} Our approach determines the optimal sequence of $M_k$-many $n_k$-photon states using an exhaustive numerical search. 
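As a rough indication of what such a search involves, the schematic below (our own illustration, with a hypothetical cost function \texttt{simulate\_variance}; it is not the script actually used) enumerates all configurations $(M_0,M_1,M_2)$ of 1-, 2- and 4-photon states with $M_0+2M_1+4M_2=N$ and would rank them---and, since the protocol is adaptive, the orderings of interest within each configuration---by the phase variance estimated from repeated simulation of the measurement.
\begin{verbatim}
def resource_partitions(N, photon_numbers=(1, 2, 4)):
    """Yield all (M_0, M_1, M_2) with 1*M_0 + 2*M_1 + 4*M_2 = N."""
    for m2 in range(N // 4 + 1):
        for m1 in range((N - 4*m2) // 2 + 1):
            m0 = N - 4*m2 - 2*m1
            yield (m0, m1, m2)

# The exhaustive search would simulate the adaptive protocol (see the Bayesian
# sketch above) many times for each candidate configuration -- and for each
# left-to-right ordering of interest, since the scheme is adaptive -- and keep
# the configuration with the smallest estimated phase variance.
best = {}
for N in (4, 9, 15, 25, 37, 48):
    candidates = list(resource_partitions(N))
    # best[N] = min(candidates, key=lambda c: simulate_variance(c, N))  # hypothetical
\end{verbatim}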
The sequences we demonstrate are: \begin{center} \begin{tabular}{|c||c||c|c||c|c||c|c|c||c|c|c||c|c|c|} \hline $N$ & 4 & \multicolumn{2}{|c||}{9} & \multicolumn{2}{|c||}{15} & \multicolumn{3}{|c||}{25} & \multicolumn{3}{|c||}{37} & \multicolumn{3}{|c|}{48} \\ \hline \hline $M_k$ & 4 & 7 & 1 & 9 & 3 & 13 & 4 & 1 & 8 & 9 & 3 & 10 & 8 & 5 \\ \hline $n_k$ & 1 & 1 & 2 & 1 & 2 & 1 & 2 & 4 & 2 & 1 & 4 & 2 & 1 & 4 \\ \hline \end{tabular} \end{center} Note that, as our scheme is adaptive, the left-to-right ordering of sequences is significant. \section{Appendix C: Fisher information} The Fisher information generated by a phase-sensitive measurement is defined by \begin{equation} F(\phi) = \sum_{x} \frac{1}{P(x|\phi)} \left(\frac{\partial P(x|\phi)}{\partial\phi}\right)^2, \end{equation} where $P(x|\phi)$ is the probability of measurement result $x$ given that the true system phase is $\phi$. The Fisher information places a lower bound on the smallest possible \emph{shift} $\delta \phi$ in the phase away from $\phi$ that can be reliably detected from a large number $M$ of repeated measurements, via the Cram\'{e}r-Rao inequality, \begin{equation} \delta \phi \le 1/\sqrt{M \times F(\phi)}. \end{equation} This motivates defining the Fisher length as $1/\sqrt{F}$. For ideal measurements on an $N$-photon NOON state the Fisher information is $N^2$ and Fisher length is $1/N$, independent of the system phase. Thus the Fisher information for a 4-photon NOON state, for example, is $16$. It is additive for independent measurements on two separate states, so the Fisher information for two 2-photon NOON states is $8$, half that for a single 4-photon NOON state. \begin{figure} \caption{\label{fig:fisher} \label{fig:fisher} \end{figure} For the 4-photon states that we generate, the Fisher information is less than that of a 4-photon NOON state, and is equal to $12$ for ideal measurements. With the experimental detection probability matrix $\Gamma'$ given in Eq.~\ref{eq:ExptChar} for our 4-photon input state $\ket{2,2}_{a,b}$, the maximum Fisher information is $8.6$. This is above the value of $8$ for two independent 2-photon ideal NOON states, and well above the value for two 2-photon NOON states with our experimentally measured visibilities, which is at most $7.1$. With the experimental $\Gamma'$ matrix, the Fisher information is no longer independent of $\phi$, and has the dependence shown in Fig.~\ref{fig:fisher}. With the addition of the controllable phase $\theta$, the Fisher information is a function of $\phi-\theta$. The sensitive dependence on the system phase is likely to be the reason why it is often optimal to perform the 4-photon measurements last---the system phase must already be known quite accurately in order to adjust the feedback phase to maximise the Fisher information. \end{document}
\begin{document} \title{ An extension of the projected gradient method to a Banach space setting with application in structural topology optimization.} \author{Luise Blank, Christoph Rupprecht} \date{} \mathfrak maketitle \begin{abstract} For the minimization of a nonlinear cost functional $j$ under convex constraints the relaxed projected gradient process \begin{align*} \varphi_{k+1} = \varphi_{k} + \alpha_k(P_H(\varphi_{k}-\lambda_k \nabla_H j(\varphi_{k}))-\varphi_{k}) \end{align*} as formulated e.g. in \cite{DemyanovRubinov} is a well known method. The analysis is classically performed in a Hilbert space $H$. We generalize this method to functionals $j$ which are differentiable in a Banach space. Thus it is possible to perform e.g. an $L^2$ gradient method if $j$ is only differentiable in $L^\infty$. We show global convergence using Armijo backtracking in $\alpha_k$ and allow the inner product and the scaling $\lambda_k$ to change in every iteration. As application we present a structural topology optimization problem based on a phase field model, where the reduced cost functional $j$ is differentiable in $H^1\cap L^\infty$. The presented numerical results using the $H^1$ inner product and a pointwise chosen metric including second order information show the expected mesh independency in the iteration numbers. The latter yields an additional, drastic decrease in iteration numbers as well as in computation time. Moreover we present numerical results using a BFGS update of the $H^1$ inner product for further optimization problems based on phase field models. \end{abstract} \noindent {\bf Key words:}{ projected gradient method, variable metric method, convex constraints, shape and topology optimization, phase field approach.} \noindent {\bf AMS subject classification:} 49M05, 49M15, 65K, 74P05, 90C. \section{Introduction} Let $j$ be a functional on a Hilbert space $H$ with inner product $(.,.)_H$ and induced norm $\|.\|_H$ and let $\Phi_{ad}\subseteq H$ be a non-empty, convex and closed subset. We consider the optimization problem \begin{align}\label{eq:optprobl} \mathfrak min j(\varphi)\ \text{ subject to } \varphi\in \Phi_{ad}. \end{align} If $j$ is Fréchet{} differentiable with respect to $\|.\|_H$, the classical projected gradient method introduced in Hilbert space in \cite{goldstein1964} and \cite{levitin1966constrained} can be applied, which moves in the direction of the negative $H$-gradient $-\nabla_H j\in H$, which is characterized by the equality $(\nabla_Hj(\varphi),\eta)_H = \spr{j'(\varphi),\eta}_{H^*,H}$ $\forall\eta\in H$ and orthogonally projects the result back on $\Phi_{ad}$ to stay feasible, i.e. \begin{align}\label{pro1} \varphi_{k+1} = P_H(\varphi_{k}-\lambda_k\nabla_H j(\varphi_{k})). \end{align} To obtain global convergence $\lambda_k$ has to be chosen according to some step length rule, which results in a gradient path method, or one can perform a line search along the descent direction $v_k = P_H(\varphi_{k}-\lambda_k\nabla_H j(\varphi_{k})) - \varphi_k$. A typical application is $H= L^2(\Omega)$, see e.g. \cite{KelleySachs92}. In this paper we consider the case that $j$ is differentiable with respect to a norm which is not induced by a inner product. Hence no $H$-gradient $\nabla_H j$ exists. However, in Section \ref{sec:VMPT} we reformulate the method such that it is well defined under weaker conditions. We show global convergence when Armijo backtracking is applied along $v_k$ and allow the inner product and the scaling $\lambda_k$ to change in every iteration. 
We call this generalization `variable metric projection' type (VMPT) method. In Section \ref{sec:Application} we study the applicability of the method to a structural topology optimization problem, namely the mean compliance minimization in linear elasticity based on a phase field model. Then the reduced cost functional is differentiable only in $H^1\cap L^\infty$. In the last section we show numerical results for this mean compliance problem. As expected choosing the $H^1$ metric leads to mesh independent iteration numbers in contrast to the $L^2$ metric. We also present the choice of a variable metric using second order information and the choice of a BFGS update of the $H^1$ metric. This reduces the iteration numbers to less than a hundreth. Moreover, we give additional numerical examples for the successful application of the VMPT method. These include a problem of compliant mechanism, drag minimization of the Stokes flow and an inverse problem. \section{Variable metric projection type (VMPT) method}\label{sec:VMPT} \subsection{Generalization of the projected gradient method} The orthogonal projection $P_H(\varphi_{k}-\lambda_k\nabla_H j(\varphi_{k}))$ employed in \eqref{pro1} is the unique solution of \begin{align*} \mathfrak min_{y \in\Phi_{ad}}\frac{1}{2}\|(\varphi_{k}-\lambda_k\nabla_H j(\varphi_{k})) - y \|^2_{H}, \end{align*} which is equivalent to the problem \begin{align}\label{eq:proj2} \mathfrak min_{ y \in\Phi_{ad}}\frac{1}{2}\| y -\varphi_{k}\|^2_{H} +\lambda_k Dj(\varphi_{k}, y -\varphi_{k}), \end{align} since $(\nabla_H j(\varphi_{k}),y -\varphi_{k})_H = j'(\varphi_{k})(y -\varphi_{k}) = Dj(\varphi_{k},y -\varphi_{k})$ where the last denotes the directional derivative of $j$ at $\varphi_{k}$ in direction $ y -\varphi_{k}$. If e.g. $ Dj(\varphi_{k}, y)$ is linear and continuous with respect to $y\in H$ the cost functional of (\ref{eq:proj2}) is strictly convex, continuous and coercive in $H$, and hence \eqref{eq:proj2} has a unique solution $\bar\varphi_k$ \cite{Dacorogna}. In the formulation \eqref{eq:proj2} the existence of the gradient $\nabla_H j$ is not required. Even G\^ateaux differentiability can be omitted.\\ In the following we formulate an extension of the projected gradient method where $P_H(\varphi_{k}-\lambda_k\nabla_H j(\varphi_{k}))$ is replaced by the solution $\bar\varphi_k$ of \eqref{eq:proj2}. First we drop the requirement of a gradient as mentioned above. We assume that the admissible set $\Phi_{ad} $ is a subset of an intersection of Banach spaces $\mathfrak mathbbm{X}\cap \mathbbm D$, where $\mathfrak mathbbm{X}$ and $\mathbbm D$ have certain properties (see \ref{ass:LimitsUnique}), which are e.g. fulfilled for $\mathfrak mathbbm{X}=H^1(\Omega)$ or $\mathfrak mathbbm{X}=L^2(\Omega)$ and $\mathbbm D= L^\infty(\Omega)$. Furthermore assume that $j$ is continuously Fréchet{} differentiable on $\Phi_{ad} $ with respect to the norm $\|.\|_{\mathfrak mathbbm{X}\cap \mathbbm D}:= \|.\|_{\mathfrak mathbbm{X}}+\|.\|_{\mathbbm D}$. The Fréchet{} derivative of $j$ at $\varphi$ is denoted by $j'(\varphi)\in (\mathfrak mathbbm{X}\cap \mathbbm D)^*$ and we write $\spr{.,.}$ for the dual paring in the space $\mathfrak mathbbm{X}\cap \mathbbm D$. Moreover, we use $C$ as a positive universal constant throughout the paper. Secondly, we also allow the norm $\|.\|_H$ in (\ref{eq:proj2}) to change in every iteration. 
Therefore, we consider a sequence $\{a_k\}_{k\geq 0}$ of symmetric positive definite bilinear forms inducing norms $\|.\|_{a_k} $ on $\mathfrak mathbbm{X}\cap\mathbbm D$ . This approach falls into the class of variable metric methods and includes the choice of Newton and Quasi-Newton based search directions (see for example \cite{Bertsekas, Dunn1980} and \cite{gruver1981algorithmic} for the unconstrained case). In \cite{Bertsekas} these methods are called scaled gradient projection methods and in the case of $a_k = j''(\varphi_k)$ also constrained Newton's method. In finite dimension $a_k$ is given by $a_k(p,v):= p^T B_k v$ where $B_k$ can be the Hessian at $\varphi_k$ or an approximation of it. Hence, in each step of the VMPT method the projection type subproblem \begin{align} \mathfrak min_{y \in\Phi_{ad}} \quad&\frac{1}{2}\| y -\varphi_k\| _{a_k}^2 + \lambda_k \spr{j'(\varphi_k),y -\varphi_k} \label{eq:projproblvm} \end{align} with some scaling parameter $\lambda_k >0$ has to be solved. Problem \eqref{eq:projproblvm} is formally equivalent to the projection $P_{a_k}(\varphi_{k}-\lambda_k\nabla_{a_k} j(\varphi_{k}))$. However, $j$ is not necessarily differentiable with respect to $\|.\|_{a_k}$ and $\mathfrak mathbbm{X}\cap \mathbbm D$ endowed with $a_k(.,.)$ is only a pre-Hilbert space. Hence $\nabla_{a_k} j(\varphi_{k})$ does not need to exist. For globalization of the method we perform a line search based on the widely used Armijo back tracking, which results in Algorithm \ref{algorithm1}. In the next section it is shown that the algorithm is well defined under certain assumptions and in particular that a unique solution $\bar\varphi_k$ of \eqref{eq:projproblvm} exists, together with the proof of convergence. We denote the solution of \eqref{eq:projproblvm} also by $\mathfrak mathcal P_k(\varphi_k)$ due to the connection to a projection. \begin{algo}[VMPT method]\label{algorithm1} \quad \begin{algorithmic}[1] \STATE Choose $0<\beta < 1$, $0<\sigma<1$ and $\varphi_0 \in \Phi_{ad}$. \STATE $k := 0$ \WHILE{$k \leq k_{\textrm{max}}$} \STATE Choose $\lambda_k$ and $a_k$. \STATE \label{choiceofoverlinevarphi} Calculate the minimum $\overline\varphi_k=\mathfrak mathcal P_k(\varphi_k)$ of the subproblem \eqref{eq:projproblvm}. \STATE Set the search direction $v_k := \overline\varphi_k - \varphi_k$ \IF{$\|{\bm{v}}_k\|_\mathfrak mathbbm{X} \leq \textrm{tol}$} \mathfrak mathbbm{R}ETURN \mathcal{E}NDIF \STATE Determine the step length $\alpha_k:= \beta^{m_k}$ with minimal $m_k\in\mathfrak mathbbm{N}_0$ such that \\ \label{eq:armijo} {\begin{center} $j(\varphi_k + \alpha_k v_k) \leq j(\varphi_k) + \alpha_k \sigma \spr{j'(\varphi_k),v_k}$.\end{center}} \STATE Update $\varphi_{k+1} := \varphi_k + \alpha_kv_k$ \STATE $k:=k+1$ \mathcal{E}NDWHILE \end{algorithmic} \end{algo} The stopping criterion $\|v_k\|_\mathfrak mathbbm{X} \leq tol$ is motivated by the fact that $\varphi_k$ is a stationary point of $j$ if and only if $v_k=0$ and $v_k \rightarrow 0 $ in $\mathfrak mathbbm{X}$, cf. Corollary \ref{cor:vkzstat} and Theorem \ref{thm:GlobalConvvm}.\\ We would like to mention, that this algorithm is not a line search along the {\em gradient path} , which is widely used (e.g. in \cite{Bertsekas,Dunn81,Dunn87,GawandeDunn,goldstein1964,gruver1981algorithmic,hinze2008optimization,KelleySachs92,Trol}) and which requires to solve a projection type subproblem like \eqref{pro1} in each line search iteration. This can be unwanted if calculating the projection is expensive compared to the evaluation of $j$. 
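For illustration only, a minimal finite-dimensional sketch of Algorithm \ref{algorithm1} is given below (in Python). The box constraints, the choice of a fixed \emph{diagonal} metric---which makes the projection-type subproblem \eqref{eq:projproblvm} explicitly solvable---and all names are simplifications made for this sketch and are not part of the method's specification; in the application of Section \ref{sec:Application} the metric is e.g.\ the $H^1$ inner product or a pointwise metric with second order information, and the subproblem then requires an inner solver. Note that the subproblem is solved once per outer iteration, whereas $j$ may be evaluated several times inside the Armijo loop.
\begin{verbatim}
import numpy as np

def vmpt(j, grad_j, phi0, lo=0.0, hi=1.0, d=None, lam=1.0,
         beta=0.5, sigma=1e-4, tol=1e-8, kmax=500):
    """Minimal sketch of the VMPT iteration for a box-constrained problem.

    For simplicity the variable metric a_k is a fixed *diagonal* SPD matrix
    with diagonal d, so the projection-type subproblem has the closed-form
    solution below; with a non-diagonal metric (e.g. the H^1 inner product)
    the subproblem is a small quadratic programme solved by an inner method.
    """
    phi = np.clip(np.asarray(phi0, dtype=float), lo, hi)
    d = np.ones_like(phi) if d is None else d           # diagonal of a_k
    for k in range(kmax):
        g = grad_j(phi)
        # Subproblem: min_y 1/2 ||y - phi||_{a_k}^2 + lam <j'(phi), y - phi>
        # over the box; for a diagonal metric this is a clipped scaled step.
        phi_bar = np.clip(phi - lam*g/d, lo, hi)
        v = phi_bar - phi                                # search direction v_k
        if np.linalg.norm(v) <= tol:
            return phi, k
        # Armijo backtracking along v (alpha_k = beta**m_k with minimal m_k).
        slope = g @ v                                    # <j'(phi_k), v_k> < 0
        alpha, jk = 1.0, j(phi)
        while j(phi + alpha*v) > jk + sigma*alpha*slope:
            alpha *= beta
        phi = phi + alpha*v
    return phi, kmax

# Tiny usage example: j(phi) = 1/2 |phi - z|^2 with z partly outside the box.
z = np.array([1.5, -0.3, 0.4])
sol, its = vmpt(lambda p: 0.5*np.sum((p - z)**2), lambda p: p - z,
                phi0=np.zeros(3))
# sol is approximately [1.0, 0.0, 0.4], the projection of z onto the box.
\end{verbatim}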
To avoid this we perform a line search along the descent direction $v_k$, which is suggested e.g. in finite dimension or in Hilbert spaces in \cite{Bertsekas,gruver1981algorithmic,rustem1984} and is also used in \cite{Dunn1980}. To include the idea of the gradient path approach, we imbed the possibility to vary the scaling factor $\{\lambda_k\}_{k\geq 0}$ for the formal gradient in \eqref{eq:projproblvm} in each iteration. The parameter $\lambda_k$ can be put into $a_k$ by dividing the cost in \eqref{eq:projproblvm} by $\lambda_k$. However, we treat it as a separate parameter since this reflects the case where $a_k$ is fixed for all iterations. Note that under the assumptions used in this paper a line search along the gradient path is not possible since not even the existence of a positive step length can be shown, cf. Remark \ref{rem:gradpathnotpossible}.\\ Moreover, there is a clear connection to sequential quadratic programming, considering that $\mathfrak mathcal P_k(\varphi_k)$ is the solution of the quadratic approximation of $\mathfrak min_{\varphi\in\Phi_{ad}} j(\varphi)$ with \begin{align*} \mathfrak min_{y\in\Phi_{ad}} j(\varphi_k) + \spr{j'(\varphi_k),y-\varphi_k} + \frac 1 2 a_k(y-\varphi_k,y-\varphi_k). \end{align*} However, the global convergence result is analysed by means of projected gradient theory. \subsection{Global convergence result} We perform the analysis of the method with respect to two norms in the spaces $\mathfrak mathbbm{X}$ and $\mathbbm D$, which we assume to have the following properties: \newcounter{AssCount} \renewcommand{\textbf{(A{\arabic{AssCount}})}}{\textbf{(A{\arabic{AssCount}})}} \begin{list}{\textbf{(A{\arabic{AssCount}})}}{\usecounter{AssCount}}\setlength{\itemsep}{0pt} \item\label{ass:LimitsUnique} $\mathfrak mathbbm{X}$ is a reflexive Banach space. $\mathbbm D$ is isometrically isomorphic to $\mathfrak mathbbm B^*$, where $\mathfrak mathbbm B$ is a separable Banach space. Moreover, for any sequence $\{\varphi_i\}$ in $\mathfrak mathbbm{X}\cap \mathbbm D$ with $\varphi_i\to\varphi$ weakly in $\mathfrak mathbbm{X}$ and $\varphi_i\to\tilde\varphi$ weakly-* in $\mathbbm D$, it holds $\varphi = \tilde \varphi$. \end{list} We identify $\mathbbm D$ and $\mathfrak mathbbm B^*$ and say that a sequence converges weakly-* in $\mathbbm D$ if it converges weakly-* in $\mathfrak mathbbm B^*$. The separability of $\mathfrak mathbbm B$ is used to get weak-* sequential compactness in $\mathbbm D$. We would like to mention that the results hold also if $\mathbbm D $ is a reflexive Banach space, in particular if $\mathbbm D$ is an Hilbert space. In this case weak-* convergence has to be replaced by weak convergence throughout the paper. However, in the application we are interested in $\mathbbm D = L^\infty (\Omega)$.\\ In case of the Sobolev space $\mathfrak mathbbm{X} = W^{k,p}(\Omega)$ and $\mathbbm D = L ^q(\Omega)$ where $\Omega\subseteq \mathfrak mathbbm{R}^d$ is a bounded domain, $k\geq 0$, $1< p< \infty$ and $1< q\leq \infty$ the above assumption is fulfilled. In addition to the above conditions on $\mathfrak mathbbm{X}$ and $\mathbbm D$ let the following assumptions hold for the problem (\ref{eq:optprobl}): \begin{list}{\textbf{(A{\arabic{AssCount}})}}{\usecounter{AssCount}}\setlength{\itemsep}{0pt}\setcounter{AssCount}{1} \item\label{ass:Convex} \label{ass:Closed} $\Phi_{ad}\subseteq \mathfrak mathbbm{X} \cap \mathbbm D $ is convex, closed in $\mathfrak mathbbm{X}$ and non-empty. \item\label{ass:Bdd} $\Phi_{ad}$ is bounded in $\mathbbm D$. 
\item\label{ass:BddBelow} $j(\varphi) \geq -C> -\infty$ for some $C>0$ and all $\varphi\in\Phi_{ad}$. \item\label{ass:Diff} $j$ is continuously differentiable in a neighbourhood of $\Phi_{ad}\subseteq\mathfrak mathbbm{X}\cap\mathbbm D$. \item\label{ass:WeakDiff} For each $\varphi\in \Phi_{ad}$ and for each sequence $\{\varphi_i\}\subseteq \mathfrak mathbbm{X}\cap \mathbbm D$ with $\varphi_i\to 0$ weakly in $\mathfrak mathbbm{X}$ and weakly-* in $\mathbbm D$ it holds $\spr{j'(\varphi),\varphi_i}\to 0$ as $i\to\infty$. \end{list} Moreover, we request for the parameters $a_k$ and $\lambda_k$ of the algorithm that: \begin{list}{\textbf{(A{\arabic{AssCount}})}}{\usecounter{AssCount}}\setlength{\itemsep}{0pt}\setcounter{AssCount}{6} \item \label{ass:aInnerProduct} $\{a_k\}$ is a sequence of symmetric positive definite bilinear forms on $\mathfrak mathbbm{X}\cap\mathbbm D$. \item\label{ass:aCoercive} It exists $c_1 > 0$ such that $c_1\|p\|^2_{\mathfrak mathbbm{X}} \leq \|p\|^2_{a_k}$ for all $p\in\mathfrak mathbbm{X}\cap\mathbbm D$ and $k\in\mathfrak mathbbm{N}_0$. \item\label{ass:aBdd} For all $k\in\mathfrak mathbbm{N}_0$ it exists $c_2(k)$ such that $\|p\|^2_{a_k}\leq c_2\|p\|^2_{\mathfrak mathbbm{X}\cap\mathbbm D}$ for all $p\in\mathfrak mathbbm{X}\cap\mathbbm D$. \item\label{ass:aWeakDiff} For all $k\in\mathfrak mathbbm{N}_0$, $p\in \Phi_{ad}$ and for each sequence $\{y_i\}\subseteq \Phi_{ad}$ where there exists some $y \in \mathfrak mathbbm{X}\cap \mathbbm D $ with $y_i\to y$ weakly in $\mathfrak mathbbm{X}$ and weakly-* in $\mathbbm D$ it holds $a_k(p,y_i)\to a_k(p,y)$ as $i\to\infty$. \item\label{ass:aToZ} For each subsequence $\{ \varphi_{k_i}\}_i$ of the iterates given by Algorithm \ref{algorithm1} converging in $\mathfrak mathbbm{X}\cap \mathbbm D$, the corresponding subsequence $\{ a_{k_i}\}_i$ has the property that $a_{k_i}(p_i,y_i)\to 0$ for any sequences $\{p_i\},\{y_i\}\subseteq \mathfrak mathbbm{X}\cap \mathbbm D$ with $p_i\to 0$ strongly in $\mathfrak mathbbm{X}$ and weakly-* in $\mathbbm D$ and $\{y_i\}$ converging in $\mathfrak mathbbm{X}\cap\mathbbm D$. \item\label{ass:Lambda} It holds $0<\lambda_{min}\leq \lambda_k\leq \lambda_{max}$ for all $k\in\mathfrak mathbbm{N}_0$. \end{list} \ref{ass:LimitsUnique}-\ref{ass:Lambda} are assumed throughout this paper if not mentioned otherwise.\\ Assumption \ref{ass:aToZ} reflects the possibility of a point based choice of $a_k$, e.g. dependent on the Hessian $D^2 j(\varphi_{k})$ or on an approximation of the Hessian. Note that \ref{ass:aBdd}-\ref{ass:aToZ} is weaker than the assumption $\|p\|^2_{a_k}\leq c_2\|p\|^2_{\mathfrak mathbbm{X}}$. In \eqref{eq:akso} an example of $a_k$ is given, which only fulfills these weaker assumptions. Also \ref{ass:aCoercive} is weaker than $c_1\|u\|^2_{\mathfrak mathbbm{X}\cap\mathbbm D} \leq \|u\|^2_{a_k}$. The main result of the paper is the following, which is proved in Section \ref{sec:Proof}. \begin{theorem}\label{thm:GlobalConvvm} Let $\{\varphi_k\}\subseteq\Phi_{ad}$ be the sequence generated by the VMPT method (Algorithm \ref{algorithm1}) with $tol=0$ and let the assumptions \ref{ass:LimitsUnique}-\ref{ass:Lambda} hold, then: \begin{enumerate} \item $\lim_{k\to\infty}j(\varphi_k)$ exists. \item Every accumulation point of $\{\varphi_k\}$ in $\mathfrak mathbbm{X}\cap \mathbbm D$ is a stationary point of $j$. 
\item For all subsequences with $\varphi_{k_i}\to \varphi$ in $\mathfrak mathbbm{X}\cap \mathbbm D$ where $\varphi$ is stationary, the subsequence $\{ v_{k_i}\}_i $ converges strongly in $\mathfrak mathbbm{X}$ to zero. \item If additionally $j\in C^{1,\gamma}(\Phi_{ad})$ with respect to $\|.\|_{\mathfrak mathbbm{X}\cap \mathbbm D} $ for some $0<\gamma\leq 1$ then the whole sequence $\{ v_k \}_k$ converges to zero in $\mathfrak mathbbm{X}$. \end{enumerate} \end{theorem} In the classical Hilbert space setting, i.e. $\mathbbm D = \mathfrak mathbbm{X} = H$ for some Hilbert space $H$, the assumption \ref{ass:Bdd} can be dropped. Also assumption \ref{ass:WeakDiff} is trivial because of \ref{ass:Diff}. Moreover, assumptions \ref{ass:aInnerProduct}-\ref{ass:aToZ} are fulfilled for the choice $a_k(p,v) = (p,A_kv)_H$ where $A_k\in \mathfrak mathcal L(H)$ is a self-adjoint linear operator with $m\|p\|_H^2\leq (p,A_kp)_h \leq M\|p\|_H^2$ and $M\geq m>0$ independent of $k$. This is e.g. assumed in the local convergence theory in \cite{Dunn87,GawandeDunn} and in finite dimension for global convergence in \cite{Bertsekas,rustem1984}. For the special choice $a_k(p,v) = (p,v)_H$, global convergence is shown in \cite{gruver1981algorithmic} and for the case of a line search along the gradient path in \cite{Dunn81}. Result 4. of Theorem \ref{thm:GlobalConvvm} is shown in \cite{hinze2008optimization} in case of a line search along the gradient path under the same assumption $j\in C^{1,\gamma}$. Thus the presented method is a generalization of the classical method in Hilbert space.\\ We would also like to mention the following: \begin{remark} If there exists $C>0$ such that $\|p\|_\mathbbm D \leq C\|p\|_\mathfrak mathbbm{X}$ for all $p\in\mathfrak mathbbm{X}\cap\mathbbm D$, assumption \ref{ass:Bdd} can be omitted.\\ If $\mathfrak mathbbm{X}$ is a Hilbert space, the choice $a_k(u,v) = (u,v)_H$ fulfills all assumptions \ref{ass:aInnerProduct}-\ref{ass:aToZ}. \end{remark} \subsection{Analysis and proof of the convergence result of the VMPT method}\label{sec:Proof} We first show the existence and uniqueness of $\overline\varphi_k=\mathfrak mathcal P_k(\varphi_k)$ based on the direct method in the calculus of variations using the following Lemma and assumptions \ref{ass:Convex}, \ref{ass:Bdd} and \ref{ass:Diff}-\ref{ass:aWeakDiff}. Note that the standard proof cannot be applied, since $a_k$ is indeed $\mathfrak mathbbm{X}$-coercive, but $a_k$ and $\spr{j'(\varphi_k), \cdot }$ are not $\mathfrak mathbbm{X}$-continuous. Another difficulty is that $\mathfrak mathbbm{X}\cap\mathbbm D$ is not necessarily reflexive. \begin{lemma}\label{lem:wsconv} Let $\{p_k\}\subseteq\Phi_{ad}$ with $p_k\to p$ weakly in $\mathfrak mathbbm{X}$ for some $p\in\Phi_{ad}$. Then $p_{k}\to p$ weakly-* in $\mathbbm D$. \end{lemma} \begin{proof} Since $\Phi_{ad} $ is bounded in $\mathbbm D$ and the closed unit ball of $\mathbbm D$ is weakly-* sequentially compact due to the separability of $\mathfrak mathbbm B$, we can extract from any subsequence of $\{p_k\} \subseteq \Phi_{ad}$ another subsequence $\{p_{k_i}\}$ with $p_{k_i} \rightarrow \tilde p$ weakly-* in $\mathbbm D$ for some $\tilde p\in\mathbbm D$. Due to the required unique limit in $\mathfrak mathbbm{X}$ and $\mathbbm D$ we have $\tilde p =p$. Since for any subsequence we find a subsequence converging to the same $p$, we have that the whole sequence converges to $p$. 
\end{proof} \begin{theorem} \label{thm:Projection} For any $k\in\mathfrak mathbbm{N}_0$ and $\varphi \in\Phi_{ad}$, the problem \begin{align} \mathfrak min_{y \in\Phi_{ad}} \quad& \frac{1}{2}\| y -\varphi \| _{a_k}^2 + \lambda_k \spr{j'(\varphi),y -\varphi} \label{eq:projproblvm2} \end{align} admits a unique solution $\bar\varphi := \mathfrak mathcal P_k(\varphi)$, which is given by the unique solution of the variational inequality \begin{align}\label{var} a_k(\bar\varphi-\varphi,\eta- \bar\varphi) +\lambda_k \spr{j'(\varphi),\eta - \bar\varphi } \geq 0 \qquad \forall \eta \in \Phi_{ad}. \end{align} \end{theorem} \begin{proof} Let $k\in\mathfrak mathbbm{N}_0$ and $\varphi \in \Phi_{ad}$ arbitrary. Problem \eqref{eq:projproblvm2} is equivalent to \begin{align}\label{eq:projprobl2vm} \mathfrak min_{y \in \Phi_{ad} } \quad& g_k( y ) := \tfrac{1}{2}a_{k}( y , y ) + \spr{b_k, y } \end{align} where $\spr{b_k, y } := \lambda_k \spr{j'(\varphi),y } - a_{k}(\varphi, y )$ and $b_k \in (\mathfrak mathbbm{X}\cap \mathbbm D)^*$ due to \ref{ass:Diff} and \ref{ass:aBdd}. By \ref{ass:Bdd} and \ref{ass:aCoercive} we get for any $y \in\Phi_{ad}$ with some generic $C>0$ \begin{align} g_k(y ) &\geq \frac{c_1}{2}\| y \|^2_{\mathfrak mathbbm{X}} - \|b_k\|_{(\mathfrak mathbbm{X}\cap \mathbbm D)^*} (\| y \|_{\mathfrak mathbbm{X}}+\underbrace{\| y \|_{\mathbbm D}}_{\leq C}) \geq -C. \label{eq:gbddbelowvm} \end{align} Thus $g_k$ is $\mathfrak mathbbm{X}$-coercive and bounded from below on $\Phi_{ad}$. Hence we can choose an infimizing sequence $\varphi_i\in \Phi_{ad}$, such that $g_k(\varphi_i)\xrightarrow{i\to\infty} \inf_{y \in\Phi_{ad}} g_k(y )$. From the estimate \eqref{eq:gbddbelowvm} we conclude that $\{\varphi_i\}_i$ is bounded in $\mathfrak mathbbm{X}$. Therefore, we can extract a subsequence (still denoted by $\varphi_i$) which converges weakly in $\mathfrak mathbbm{X}$ to some $\bar \varphi\in \mathfrak mathbbm{X}$. Since $\Phi_{ad}$ is convex and closed in $\mathfrak mathbbm{X}$, it is also weakly closed in $\mathfrak mathbbm{X}$ and thus $\bar \varphi \in \Phi_{ad}$. By Lemma \ref{lem:wsconv} we also get $\varphi_i\to \bar \varphi$ weakly-* in $\mathbbm D$. Finally we show $g_k(\bar \varphi)=\inf_{y\in \Phi_{ad}} g_k(y)$. Using \ref{ass:WeakDiff}, \ref{ass:aCoercive} and \ref{ass:aWeakDiff} one can show that $\liminf_i a_k(\varphi_i,\varphi_i) \geq a_k(\bar \varphi,\bar \varphi)$ and $\lim_i \spr{b_k,\varphi_i}=\spr{b_k,\bar \varphi}$, thus $\liminf_i g_k(\varphi_i) \geq g_k(\bar \varphi)$. We conclude \begin{align*} \inf_{y \in \Phi_{ad}} g_k(y ) \leq g_k(\bar \varphi) \leq \liminf_i g_k(\varphi_i) = \inf_{y \in \Phi_{ad}} g_k(y), \end{align*} which shows the existence of a minimizer of \eqref{eq:projprobl2vm}. Using \ref{ass:aCoercive}, the uniqueness follows from strict convexity of $g_k$.\\ Due to \ref{ass:Diff} and \ref{ass:aBdd}, we have that $g_k$ is differentiable in $\mathfrak mathbbm{X}\cap\mathbbm D$, where its directional derivative at $\bar \varphi$ in direction $\eta-\bar \varphi$ for arbitrary $\eta\in\Phi_{ad}$ is given by \begin{align*} \spr{g_k'(\bar \varphi),\eta-\bar \varphi} &= a_{k}(\bar \varphi-\varphi,\eta-\bar \varphi) + \lambda_k\spr{j'(\varphi),\eta-\bar \varphi} \;. \end{align*} Since the problem \eqref{eq:projproblvm2} is convex, it is equivalent to the first order optimality condition, which is given by the variational inequality \eqref{var}, see \cite{Trol}. \end{proof} We see that $\varphi\in\Phi_{ad}$ is a stationary point of $j$, i.e. 
$\spr{j'(\varphi),\eta-\varphi}\geq 0$ $\forall\eta\in\Phi_{ad}$, if and only if $\overline\varphi=\varphi$ is the solution of \eqref{var}, i.e. the fixed point equation $\varphi=\mathfrak mathcal P_k(\varphi)$ is fulfilled. This leads to the classical view of the method as a fixed point iteration $\varphi_{k+1}=\mathfrak mathcal P_k(\varphi_k)$ in the case that $\mathfrak mathcal P_k$ is independent of $k$ and $\alpha_k=1$ is chosen. \begin{cor}\label{cor:vkzstat} If there exists some $k\in \mathfrak mathbbm{N}_0$ with $\mathfrak mathcal P_k(\varphi) = \varphi$ then $\varphi$ is a stationary point of $j$. On the other hand, if $\varphi\in\Phi_{ad}$ is a stationary point of $j$ then the fix point equation $\mathfrak mathcal P_k(\varphi) = \varphi$ holds for all $k\in\mathfrak mathbbm{N}_0$. In particular, an iterate $\varphi_k$ of the algorithm is a stationary point of $j$ if and only if $v_k= \mathfrak mathcal P_k(\varphi_k)-\varphi_k = 0$. \end{cor} The variational inequality (\ref{var}) tested with $\eta= \varphi\in \Phi_{ad}$ together with \ref{ass:aCoercive} and \ref{ass:Lambda} yields that $\mathfrak mathcal P_k(\varphi)-\varphi$ is a descent direction for $j$: \begin{lemma}\label{lem:descdirvm} Let $k\in\mathfrak mathbbm{N}_0$, $\varphi\in\Phi_{ad}$ and $v:=\mathfrak mathcal P_k(\varphi)-\varphi$. Then it holds \begin{align}\label{eq:descdirvm} \spr{j'(\varphi),v}\ \leq -\frac {c_1} {\lambda_{max}} \|v\|^2_{\mathfrak mathbbm{X}}. \end{align}\qed \end{lemma} \begin{comment} \begin{proof} Let $\bar\varphi_k:=\mathfrak mathcal P_\mathfrak mathbbm{X}(\varphi_k,k)$. Taking $\eta = \varphi_k\in \Phi_{ad}$ in the variational inequality for $g_k$ , we get by \ref{ass:aCoercive} \begin{align*} 0 &\leq \spr{g_k'(y_k),\varphi_k-y_k} = a_{k}(y_k, \varphi_k-y_k) + \lambda_k \spr{j'(\varphi_k),\varphi_k-y_k} - a_{k}(\varphi_k,\varphi_k-y_k) =\\ &= -a_{k}(y_k-\varphi_k, y_k-\varphi_k) + \lambda_k \spr{j'(\varphi_k),\varphi_k-y_k} \leq -C\|v_k\|^2_{\mathfrak mathbbm{X}} - \lambda_k \spr{j'(\varphi_k),v_k} \end{align*} and finally using \ref{ass:Lambda} \begin{align*} \spr{j'(\varphi_k),v_k}\ \leq -\frac C {\lambda_k} \|v_k\|^2_{\mathfrak mathbbm{X}} \leq -\frac {C} {\lambda_{max}} \|v_k\|^2_{\mathfrak mathbbm{X}}. \end{align*} \end{proof} \end{comment} Note that \eqref{eq:descdirvm} does not hold in the $\mathfrak mathbbm{X}\cap\mathbbm D$-norm.\\ Due to $\spr{j'(\varphi),v}<0$ for $v\neq 0$ the step length selection by the Armijo rule (see step \ref{eq:armijo} in Algorithm \ref{algorithm1}) is well defined, which can be shown as in \cite{Bertsekas}. \begin{remark}\label{rem:gradpathnotpossible} For the existence of a step length and for the global convergence proof we exploit that the path $\alpha\mathfrak mapsto \varphi_k + \alpha v_k$ is continuous in $\mathfrak mathbbm{X}\cap \mathbbm D$. Thus, also the mapping $\alpha\mathfrak mapsto j(\varphi_k + \alpha v_k)$ is continuous. On the other hand, this does not hold for the gradient path. Backtracking along the gradient path or projection arc means that $\alpha_k$ is set to 1, whereas $\lambda_k=\beta^{m_k}$ is chosen with $m_k\in\mathfrak mathbbm{N}_0$ minimal such that the Armijo condition \begin{align*} j(\overline\varphi_k(\lambda_k))\leq j(\varphi_k) + \sigma \spr{j'(\varphi_k),\overline\varphi_k(\lambda_k)-\varphi_k} \end{align*} is satisfied, see for instance \cite{KelleySachs92}. By the notation $\overline\varphi_k(\lambda_k)$ we emphasize that the solution of the subproblem \eqref{eq:projproblvm} depends on $\lambda_k$. 
However, with the above assumptions it cannot be shown that there exists such a $\lambda_k$. The reason is that due to \ref{ass:aCoercive} the gradient path $\lambda\mathfrak mapsto \overline\varphi_k(\lambda)$ is continuous with respect to the $\mathfrak mathbbm{X}$-norm, whereas $j$ is due to \ref{ass:Diff} only differentiable with respect to the $\mathfrak mathbbm{X}\cap\mathbbm D$-norm. Thus, $j$ along the gradient path, i.e. the mapping $\lambda\mathfrak mapsto j(\overline\varphi_k(\lambda))$, may be discontinuous. \end{remark} To prove statement 2. of Theorem \ref{thm:GlobalConvvm} we use, as in \cite{Bertsekas} for finite dimensions, that $v_k$ is gradient related. This is weaker than the common angle condition. Therefor we need the following two lemmata: \begin{lemma}\label{lem:Tech} For $\{\varphi_k\}_k \subseteq\Phi_{ad}$ with $\varphi_k\to\varphi$ in $\mathfrak mathbbm{X}\cap\mathbbm D$ and $\{p_k\}_k \subseteq \mathfrak mathbbm{X}\cap\mathbbm D$ with $p_k\to p$ weakly in $\mathfrak mathbbm{X}$ and weakly-* in $\mathbbm D$ for some $\varphi,p\in\mathfrak mathbbm{X}\cap\mathbbm D$ it holds $\spr{j'(\varphi_k),p_k}\to \spr{j'(\varphi),p}$. \end{lemma} \begin{proof} We use \ref{ass:Diff} and \ref{ass:WeakDiff} and obtain \begin{align*} &|\spr{j'(\varphi_k),p_k}- \spr{j'(\varphi),p}|\leq |\spr{j'(\varphi_k)-j'(\varphi),p_k}| + | \spr{j'(\varphi),p_k-p}| \leq\\ &\leq \underbrace{\|j'(\varphi_k)-j'(\varphi)\|_{(\mathfrak mathbbm{X}\cap \mathbbm D)^*}}_{\to 0}\underbrace{\|p_k\|_{\mathfrak mathbbm{X}\cap \mathbbm D}}_{\leq C} + \underbrace{| \spr{j'(\varphi),p_k-p}|}_{\to 0}\to 0. \qedhere \end{align*} \end{proof} The preceding lemma is also needed in the proof of Theorem \ref{thm:GlobalConvvm}. \begin{lemma}\label{thm:PHBdd} Let for a sequence $\{\varphi_i\}_i\subseteq \Phi_{ad}$ hold $\varphi_i\to \varphi$ in $\mathfrak mathbbm{X}\cap \mathbbm D$ for some $\varphi\in\mathfrak mathbbm{X}\cap\mathbbm D$. Then there exists $C>0$ such that $\|\mathfrak mathcal P_{k}(\varphi_i)\|_{\mathfrak mathbbm{X}\cap\mathbbm D}\leq C$ for all $i,k\in\mathfrak mathbbm{N}_0$. \end{lemma} \begin{proof} Lemma \ref{lem:descdirvm} yields together with \ref{ass:Bdd} and \ref{ass:Diff} the estimate \begin{align*} \tfrac{c_1}{\lambda_{max}}\|\mathfrak mathcal P_{k}(\varphi_i)-\varphi_i\|^2_\mathfrak mathbbm{X}&\leq -\spr{j'(\varphi_i),\mathfrak mathcal P_{k}(\varphi_i)-\varphi_i}\\ &\leq \|j'(\varphi_i)\|_{(\mathfrak mathbbm{X}\cap\mathbbm D)^*}(\|\mathfrak mathcal P_{k}(\varphi_i)-\varphi_i\|_\mathfrak mathbbm{X} + \|\mathfrak mathcal P_{k}(\varphi_i)-\varphi_i\|_\mathbbm D)\\ &\leq C(\|\mathfrak mathcal P_{k}(\varphi_i)-\varphi_i\|_\mathfrak mathbbm{X}+1), \end{align*} thus $\|\mathfrak mathcal P_{k}(\varphi_i)-\varphi_i\|_\mathfrak mathbbm{X} \leq C$ and hence $\|\mathfrak mathcal P_{k}(\varphi_i)\|_\mathfrak mathbbm{X}\leq C$. Due to \ref{ass:Bdd} we finally get $\|\mathfrak mathcal P_{k}(\varphi_i)\|_{\mathfrak mathbbm{X}\cap\mathbbm D}\leq C$ independent of $i$ and $k$. \end{proof} \begin{lemma}\label{thm:gradrelated} Let $\{\varphi_k\}$ be the sequence generated by Algorithm \ref{algorithm1}, then $\{ v_k\}_k$ is gradient related, i.e.: for any subsequence $\{ \varphi_{k_i} \}_i$ which converges in $\mathfrak mathbbm{X}\cap \mathbbm D$ to a nonstationary point $\varphi \in \Phi_{ad}$ of $j$, the corresponding subsequence of search directions $\{ v_{k_i}\}_i$ is bounded in $\mathfrak mathbbm{X}\cap \mathbbm D$ and $\limsup_i \spr{j'(\varphi_{k_i}),v_{k_i}} < 0$ is satisfied. 
Moreover, it holds $\liminf_i \|v_{k_i}\|_{\mathfrak mathbbm{X}}> 0$. \end{lemma} \begin{proof} Let $\varphi_{k_i}\to \varphi$ in $\mathfrak mathbbm{X}\cap\mathbbm D$, where $\varphi$ is nonstationary. Lemma \ref{thm:PHBdd} provides that $\{v_{k_i}\}_i $ is bounded in $\mathfrak mathbbm{X}\cap\mathbbm D$. With \eqref{eq:descdirvm}, the statement $\limsup_i \spr{j'(\varphi_{k_i}),v_{k_i}} < 0$ follows from $\liminf_i \|v_{k_i}\|_{\mathfrak mathbbm{X}}= C> 0$, which we show by contradiction.\\ Assume $\liminf_i \|v_{k_i}\|_{\mathfrak mathbbm{X}} = 0$, thus there is a subsequence again denoted by $\{v_{k_i}\}_i $ such that $v_{k_i}\to 0$ in $\mathfrak mathbbm{X}$. Using \eqref{var} for $\bar\varphi_k:= \mathfrak mathcal P_{k}(\varphi_k)$, the positive definiteness of $a_k$ and \ref{ass:Lambda}, it follows for all $\eta \in \Phi_{ad} $ \begin{align} \spr{j'(\varphi_{k}),\eta-\bar\varphi_{k}} &\geq \tfrac{1}{\lambda_k} (a_k(v_k,v_k) + a_k(v_k,\bar\varphi_k-v_k -\eta))\nonumber \\ &\geq -\tfrac{1}{\lambda_{min}} |a_k(v_k,\bar\varphi_k-v_k -\eta)| \; . \label{eq:fstorderki} \end{align} Moreover, $\bar \varphi_{k_i} = v_{k_i}+\varphi_{k_i}\to \varphi$ in $\mathfrak mathbbm{X}$ and also weakly-* in $\mathbbm D$ according to Lemma \ref{lem:wsconv}. From Lemma \ref{lem:Tech} we get $\spr{j'(\varphi_{k_i}),\eta-\bar\varphi_{k_i}}\to \spr{j'(\varphi),\eta-\varphi}$. From \ref{ass:aToZ} we get $a_{{k_i}}(\bar \varphi_{k_i}-\varphi_{k_i}, \varphi_{k_i}-\eta)\to 0$ and we derive from \eqref{eq:fstorderki} that \begin{align*} \spr{j'(\varphi),\eta-\varphi} \geq 0\quad \forall \eta\in\Phi_{ad}, \end{align*} which shows that $\varphi$ is stationary, which is a contradiction. \end{proof} \begin{comment} MOTIVATION \begin{lemma}\label{thm:bootstrap} Let $\{ x_k\} $, $\{ \alpha_k\} $ be real sequences which fulfill $x_k\geq 0$, $\alpha_k\to 0$, $x_k\alpha_k\to 0$ and $x_k\leq c (x_k^{\frac{1+\gamma}{2}}\alpha_k^\gamma + \alpha_k^\gamma)\quad$ for some $c>0$ and $0<\gamma\leq 1$. Then $x_k\to 0$. \end{lemma}\begin{proof} Assume there exists a subsequence still denoted by $\{ x_k\} $ for which $x_k \geq \bar c$ holds for some positive $\bar c$. Rearranging the last inequality gives\\ $x_k^{\frac{1-\gamma}{2}}(x_k^{\frac{1+\gamma}{2}}- c x_k^{\gamma}\alpha_k^\gamma) \leq c \alpha_k^\gamma$. We have $x_k^{\frac{1+\gamma}{2}}- c x_k^{\gamma}\alpha_k^\gamma\geq \tfrac12 \bar c >0$ for all $k$ large enough due to $x_k\alpha_k\to 0$ and $x_k \geq \bar c>0 $, thus \begin{align*} x_k^{\frac{1-\gamma}{2}} \leq \frac{c \alpha_k^\gamma} {x_k^{\frac{1+\gamma}{2}}- c x_k^{\gamma}\alpha_k^\gamma}\leq 2\frac{c}{\bar c} \; \alpha_k^\gamma\to 0, \end{align*} which is a contradiction. \end{proof}\\[3mm] Using $f(u+v)-f(u)=\int_0^1 \tfrac{d}{dt} f(u+t v) dt$ we obtain \begin{lemma}\label{thm:HölderEstimate} Let $X$ be a Banach space, $U\subset X$ convex and $f\in C^{1,\alpha}(U)$ with modulus $L$ for some $0<\alpha\leq 1$. Let $u\in U$ and $v\in X$ such that $u+v\in U$. Then it holds the estimate \begin{align*} f(u+v) - f(u) \leq \spr{f'(u),v} + \frac{L}{1+\alpha}\|v\|^{1+\alpha}. \end{align*} \end{lemma} \end{comment} \label{pageofproof}\begin{proof}[Proof of Theorem \ref{thm:GlobalConvvm}]\ \\ Because of Corollary \ref{cor:vkzstat} we can assume $v_k\neq 0$ and $\alpha_k>0$ for all $k$.\\ 1.) From the Armijo rule and since $v_k$ is a descent direction we get \begin{align}\label{eq:armijo2} j(\varphi_{k+1})-j(\varphi_k) \leq \alpha_k \sigma \spr{j'(\varphi_k),v_k} < 0, \end{align} thus $j(\varphi_k)$ is monotonically decreasing. 
Since $j$ is bounded from below we get convergence $j(\varphi_k)\to j^*$ for some $j^*\in\mathfrak mathbbm{R}$, which proves 1. \\[2mm] 2.) The proof is by contradiction and similar to the finite-dimensional argument in \cite{Bertsekas}. Let $\varphi$ be an accumulation point, with a convergent subsequence $\varphi_{k_i}\to \varphi$ in $\mathfrak mathbbm{X}\cap\mathbbm D$. The continuity of $j$ on $\Phi_{ad}$ then yields $j^* = j(\varphi)$ and \eqref{eq:armijo2} leads to $\alpha_k \spr{j'(\varphi_k),v_k}\to 0$. Assuming now that $\varphi$ is nonstationary we have $\left|\spr{j'(\varphi_{k_i}),v_{k_i}}\right| \geq C > 0$, since $\{v_k\}$ is gradient related by Lemma \ref{thm:gradrelated}, and thus $\alpha_{k_i}\to 0$. So there exists some $\bar i\in\mathfrak mathbbm{N}$ such that $\alpha_{k_i}/\beta\leq 1$ for all $i\geq \bar i$, and thus the step length $\alpha_{k_i}/\beta$ does not fulfill the Armijo rule due to the minimality of $m_k$. Applying the mean value theorem to the difference $j\left(\varphi_{k_i} + \tfrac{\alpha_{k_i}}{\beta} v_{k_i}\right)-j(\varphi_{k_i})$, we have for some nonnegative $\tilde\alpha_{k_i} \leq \frac{\alpha_{k_i}}{\beta} $ and all $ i\geq\bar i$ that \begin{align} \tfrac{\alpha_{k_i}}{\beta}\spr{j'\left(\varphi_{k_i} + \tilde\alpha_{k_i} v_{k_i}\right),v_{k_i}} = j\left(\varphi_{k_i} + \tfrac{\alpha_{k_i}}{\beta} v_{k_i}\right)-j(\varphi_{k_i}) > \tfrac{\alpha_{k_i}}{\beta} \sigma \spr{j'(\varphi_{k_i}),v_{k_i}} \label{eq:ineq1} \end{align} holds. Since, by Lemma \ref{thm:gradrelated}, $\{v_{k_i}\}_i$ is bounded in $\mathfrak mathbbm{X}\cap \mathbbm D$ and $\tilde\alpha_{k_i}\to 0$, we have that $\varphi_{k_i} + \tilde\alpha_{k_i} v_{k_i}\to \varphi$ in $\mathfrak mathbbm{X}\cap \mathbbm D$. Also $\bar\varphi_{k_i} = \varphi_{k_i} + v_{k_i}$ is uniformly bounded in $\mathfrak mathbbm{X}\cap\mathbbm D$ and thus there exists a subsequence, again denoted by $\{\bar\varphi_{k_i}\}$, which converges to some $y\in\Phi_{ad}$ weakly in $\mathfrak mathbbm{X}$ and weakly-* in $\mathbbm D$. Hence we have that $v_{k_i} = \bar\varphi_{k_i}-\varphi_{k_i}\to \bar v := y-\varphi$ weakly in $\mathfrak mathbbm{X}$ and weakly-* in $\mathbbm D$. According to Lemma \ref{lem:Tech} we can take the limit of both sides of the inequality \eqref{eq:ineq1}, which leads to $ \spr{j'\left(\varphi\right),\bar v} \geq \sigma \spr{j'\left(\varphi\right),\bar v}, $ and $\sigma < 1$ yields $ \spr{j'\left(\varphi\right),\bar v} \geq 0 \; .$ This contradicts $ \spr{j'\left(\varphi\right),\bar v} = \limsup_i\spr{j'(\varphi_{k_i}),v_{k_i}} < 0,$ which is a consequence of Lemma \ref{thm:gradrelated}.\\[2mm] 3.) By showing that from any subsequence of $\spr{j'(\varphi_{k_i}),v_{k_i}}$ we can extract a further subsequence which converges to $0$, we can conclude that $\spr{j'(\varphi_{k_i}),v_{k_i}}\to 0$, which yields $\| v_{k_i} \|_\mathfrak mathbbm{X} \to 0$ by \eqref{eq:descdirvm}. With Lemma \ref{thm:PHBdd}, we get by the same arguments as in 2. that $v_{k_i} \to y-\varphi$ weakly in $\mathfrak mathbbm{X}$ and weakly-* in $\mathbbm D$ for a subsequence and for some $y\in \Phi_{ad}$, thus $\spr{j'(\varphi_{k_i}),v_{k_i}}\to \spr{j'(\varphi),y-\varphi}$ due to Lemma \ref{lem:Tech}. Since $v_{k_i}$ are descent directions for $j$ at $\varphi_{k_i}$ and $\varphi$ is stationary we have $\spr{j'(\varphi),y-\varphi}=0$. \\[2mm] 4.) As in 3.) we prove by a subsequence argument that $\spr{j'(\varphi_k),v_k}\to 0$. For an arbitrary subsequence, which we also denote by index $k$, \eqref{eq:armijo2} yields $\alpha_k\spr{j'(\varphi_k),v_k}\to 0 $. If $\alpha_k\geq c> 0$ for all $k$, the assertion follows immediately.
Otherwise there exists a subsequence (again denoted by index $k$) such that $\beta \geq \alpha_k\to 0$ and thus the step length $\alpha_k/\beta$ does not fulfill the Armijo condition. Since $j'$ is Hölder continuous with exponent $\gamma$ and modulus $L$ we obtain \begin{align*} \sigma\tfrac{\alpha_k}{\beta}\spr{j'(\varphi_k),v_k} &< j(\varphi_k+\tfrac{\alpha_k}{\beta}v_k)-j(\varphi_k) = \int_0^1 \tfrac{d}{dt} j(\varphi_k+t \tfrac{\alpha_k}{\beta}v_k) dt \nonumber \\ &\leq \tfrac{\alpha_k}{\beta}\spr{j'(\varphi_k),v_k}+\tfrac{L}{1+\gamma} \left(\tfrac{\alpha_k}{\beta}\right)^{1+\gamma}\|v_k\|^{1+\gamma}_{\mathfrak mathbbm{X}\cap\mathbbm D}. \end{align*} It holds $\|v_k\|_{\mathbbm D}\leq C$ due to \ref{ass:Bdd} and employing \eqref{eq:descdirvm} we obtain \begin{align*} 0<(\sigma-1)\spr{j'(\varphi_k),v_k}< C\tfrac{L}{1+\gamma} (\tfrac{\alpha_k}{\beta})^\gamma(\|v_k\|^{1+\gamma}_{\mathfrak mathbbm{X}}+ 1 ) \leq C \alpha_k^\gamma ( |\spr{j'(\varphi_k),v_k}|^{\frac{1+\gamma}{2}}+ 1 ). \end{align*} We get $x_k:= | \spr{j'(\varphi_k),v_k}|\to 0$. Otherwise there exists a subsequence still denoted by $\{ x_k \} $ with $x_k \to \bar c>0 $. Rearranging the last inequality gives $1 < C\alpha_k^\gamma ( x_k^{\frac{-1+\gamma}{2}}+ x_k^{-1} )\to 0$, which is a contradiction. \end{proof} \begin{remark} Statements 1. and 2. of Theorem \ref{thm:GlobalConvvm} require only that $\overline\varphi_k\in\Phi_{ad}$ is chosen such that the search directions $v_k = \overline\varphi_k-\varphi_k$ are gradient related descent directions, as can be seen in the proof above. Hence $\overline \varphi_k$ does not have to be $\mathfrak mathcal P_k(\varphi_k)$ in Algorithm \ref{algorithm1}. In this case assumption \ref{ass:Bdd} is also not required. \end{remark} \section{An application in structural topology optimization based on a phase field model}\label{sec:Application} In this section we give an example of an optimization problem described in \cite{bfgs2013}, which is not differentiable in a Hilbert space, so the classical projected gradient method cannot be applied, but the assumptions for the VMPT method are fulfilled.\\ We consider the problem of distributing $N$ materials, each with different elastic properties and fixed volume fraction, within a design domain $\Omega\subseteq \mathfrak mathbbm{R}^d$, $d\in\mathfrak mathbbm{N}$, such that the mean compliance $\int_{\Gamma_g}{\bm{g}}\cdot \boldsymbol{u}$ is minimal under the external force ${\bm{g}}$ acting on $\Gamma_g\subseteq\partial\Omega$. The displacement field $\boldsymbol{u}: \Omega\to \mathfrak mathbbm{R}^d$ is given as the solution of the equations of linear elasticity \eqref{equ:Elasticity}. To obtain a well posed problem a perimeter penalization is typically used. Using phase fields in topology optimization was introduced by Bourdin and Chambolle \cite{BourdinChambolle}. Here, the $N$ materials are described by a vector valued phase field ${\bm{v}}arphi:\Omega\to\mathfrak mathbbm{R}^N$ with ${\bm{v}}arphi\geq 0$ and $\sum_i \varphi_i = 1$, which is able to handle topological changes implicitly. The $i$th material is characterized by $\{{\bm{v}}arphi_i=1\}$ and the different materials are separated by a thin interface, whose thickness is controlled by the phase field parameter $\varepsilon>0$. In the phase field setting the perimeter is approximated by the Ginzburg Landau energy. In \cite{bghr2014} it is shown that the given problem for $N=2$ converges as $\varepsilon\to 0$ in the sense of $\Gamma$-convergence. 
For further details about the model we refer the reader to \cite{bfgs2013}. The resulting optimal control problem reads with $E(\bm{\varphi}):=\int_\Omega\left\{ \frac{\varepsilon}{2}|\nabla {\bm{v}}arphi|^2 + \frac{1}{\varepsilon}\psi_0({\bm{v}}arphi)\right\}$ \begin{align}\label{equ:MultPhaseMCP} \mathfrak min \tilde J({\bm{v}}arphi,\boldsymbol{u}) := & \int_{\Gamma_g}{\bm{g}}\cdot \boldsymbol{u} + \gamma E({\bm{v}}arphi)\\ {\bm{v}}arphi\in H^1(\Omega)^N,\ &\boldsymbol{u} \in H^1_D := \{H^1(\Omega)^d\mathfrak mid {\bm{\xi}}|_{\Gamma_D}=0\} \notag\\ \text{subject to }\quad \int_\Omega {\bm{C}}({\bm{v}}arphi)\mathcal{E}(\boldsymbol{u}):\mathcal{E}({\bm{\xi}}) &= \int_{\Gamma_g}{\bm{g}}\cdot {\bm{\xi}} \quad\forall {\bm{\xi}}\in H^1_D \label{equ:Elasticity}\\ \strokedint_\Omega {\bm{v}}arphi = \mathfrak m, \quad {\bm{v}}arphi &\geq 0, \quad \sum_{i=1}^{N}\varphi^i \equiv 1 ,\label{equ:Constraints} \end{align} where $\gamma>0$ is a weighting factor, $\strokedint_\Omega {\bm{v}}arphi:= \frac{1}{|\Omega|}\int_\Omega {\bm{v}}arphi$, $\psi_0:\mathfrak mathbbm{R}^N\to \mathfrak mathbbm{R}$ is the smooth part of the potential forcing the values of ${\bm{v}}arphi$ to the standard basis $\bm e_i\in \mathfrak mathbbm{R}^N$, and $A:B := \sum_{i,j=1}^d A_{ij}B_{ij}$ for $A,B\in\mathfrak mathbbm{R}^{d\times d}$. The materials are fixed on the Dirichlet domain $\Gamma_D\subseteq\partial\Omega$. The tensor valued mapping ${\bm{C}}:\mathfrak mathbbm{R}^N\to \mathfrak mathbbm{R}^{d\times d}\otimes (\mathfrak mathbbm{R}^{d\times d})^*$ is a suitable interpolation of the stiffness tensors ${\bm{C}}(\bm e_i)$ of the different materials and $\mathcal{E}(\boldsymbol{u}) := \frac 1 2 (\nabla\boldsymbol{u} + \nabla \boldsymbol{u}^T)$ is the linearized strain tensor. The prescribed volume fraction of the $i$th material is given by $\mathfrak mathfrak m_i$. For examples of the functions $\psi_0$ and ${\bm{C}}$ we refer to \cite{bfgrs2014,bfgs2013}. Existence of a minimizer of the problem \eqref{equ:MultPhaseMCP} as well as the unique solvability of the state equation \eqref{equ:Elasticity} is shown in \cite{bfgs2013} under the following assumptions, which we claim also in this paper. \newcounter{AssCountP} \renewcommand{\textbf{(A{\arabic{AssCount}})}P}{\textbf{(AP)}} \begin{list}{\textbf{(A{\arabic{AssCount}})}P}{\usecounter{AssCountP}}\setlength{\itemsep}{0pt} \item \label{ass:psiC11}\label{ass:CC11}\label{ass:Csym}\label{ass:Ccoercive} $\Omega\subseteq\mathfrak mathbbm{R}^d$ is a bounded Lipschitz domain; $\Gamma_D,\Gamma_g\subseteq\partial\Omega$ with $\Gamma_D\cap \Gamma_g = \emptyset$ and $\mathfrak mathcal H^{d-1}(\Gamma_D)>0$. Moreover, ${\bm{g}} \in L^2(\Gamma_g)^d$ and $\psi_0\in C^{1,1}(\mathfrak mathbbm{R}^N)$ as well as $\mathfrak m\geq 0$, $\sum_{i=1}^N \mathfrak mathfrak m_i = 1$. For the stiffness tensor we assume ${\bm{C}} = (C_{ijkl})_{i,j,k,l=1}^d$ with $C_{ijkl}\in C^{1,1}(\mathfrak mathbbm{R}^N)$ and $C_{ijkl}=C_{jikl}=C_{klij}$ and that there exist $a_0, a_1, C >0$, s.t. $a_0|\bm A|^2\leq {\bm{C}}({\bm{v}}arphi)\bm A:\bm A\leq a_1|\bm A|^2$ as well as $|{\bm{C}}'({\bm{v}}arphi)|\leq C$ holds for all symmetric matrices $\bm A\in\mathfrak mathbbm{R}^{d\times d}$ and for all ${\bm{v}}arphi\in\mathfrak mathbbm{R}^N$. \end{list} The state $\boldsymbol{u}$ can be eliminated using the control-to-state operator $S$, resulting in the reduced cost functional $\tilde j({\bm{v}}arphi) := \tilde J({\bm{v}}arphi,S({\bm{v}}arphi))$. 
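The structure of the derivative of $\tilde j$ can be anticipated by the following short formal computation for the compliance part (this is only a sketch; the rigorous differentiability result is quoted from \cite{bfgs2013} below). Differentiating the state equation \eqref{equ:Elasticity} at ${\bm{v}}arphi$ in a direction $\bm v$ and testing with ${\bm{\xi}}=\boldsymbol{u}=S({\bm{v}}arphi)$ formally gives \begin{align*} \int_\Omega {\bm{C}}({\bm{v}}arphi)\mathcal{E}(S'({\bm{v}}arphi)\bm v):\mathcal{E}(\boldsymbol{u}) = -\int_\Omega {\bm{C}}'({\bm{v}}arphi)\bm v\,\mathcal{E}(\boldsymbol{u}):\mathcal{E}(\boldsymbol{u}), \end{align*} while testing the state equation itself with ${\bm{\xi}}=S'({\bm{v}}arphi)\bm v$ and using the symmetry of ${\bm{C}}$ yields $\int_{\Gamma_g}{\bm{g}}\cdot S'({\bm{v}}arphi)\bm v = \int_\Omega {\bm{C}}({\bm{v}}arphi)\mathcal{E}(\boldsymbol{u}):\mathcal{E}(S'({\bm{v}}arphi)\bm v)$. Combining both identities, the derivative of the compliance $\int_{\Gamma_g}{\bm{g}}\cdot S({\bm{v}}arphi)$ in the direction $\bm v$ is formally $-\int_\Omega {\bm{C}}'({\bm{v}}arphi)\bm v\,\mathcal{E}(\boldsymbol{u}):\mathcal{E}(\boldsymbol{u})$, which is exactly the elasticity-related term in \eqref{eq:jderiv} below; in particular, no adjoint equation is needed for this cost functional.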
In \cite{bfgs2013} it is also shown that $\tilde j:H^1(\Omega)^N\cap L^\infty(\Omega)^N\to \mathfrak mathbbm{R}$ is everywhere Fréchet{} differentiable with derivative \begin{align}\label{eq:jderiv} \tilde j'(\bm{\varphi})\bm v = \gamma \int_\Omega \{ \varepsilon \nabla \bm{\varphi}:\nabla \bm v + \frac{1}{\varepsilon}{ \psi_0'}(\bm{\varphi})\bm v \} - \int_\Omega {\bm{C}}'(\bm{\varphi})\bm v\mathcal{E}(\boldsymbol{u}):\mathcal{E}(\boldsymbol{u}) \end{align} for all ${\bm{v}}arphi, \bm v\in H^1(\Omega)^N\cap L^\infty(\Omega)^N$, where $\boldsymbol{u}=S({\bm{v}}arphi)$ and $S:L^\infty(\Omega)^N\to H^1(\Omega)^d$ is Fréchet{} differentiable. By the techniques in \cite{bfgs2013} one can also show that $S'$ is continuous. In \cite{bfgs2013,bgsssv2010} the problem is solved numerically by a pseudo time stepping method with fixed time step, which results from an $L^2$-gradient flow approach. An $H^{-1}$ gradient flow approach is also considered in \cite{bgsssv2010}. The drawbacks of these methods are that no convergence results to a stationary point exist, and hence also no appropriate stopping criteria are known. In addition, the methods are typically very slow, i.e. many time steps are needed until the changes in the solution ${\bm{v}}arphi$ or in $j$ are small. Here we apply the VMPT method, which does not have these drawbacks and which can additionally incorporate second order information. Since $H^1(\Omega)^N\cap L^\infty(\Omega)^N$ is not a Hilbert space the classical projected gradient method cannot be applied. In the following we show that problem \eqref{equ:MultPhaseMCP} fulfills the assumptions of the VMPT method. Among other choices, we use the inner product $a_k({\bm{f}},{\bm{g}}) = \int_\Omega \nabla {\bm{f}}:\nabla {\bm{g}}$. To guarantee positive definiteness of this $a_k$ we first have to translate the problem by a constant to obtain $\int_\Omega{\bm{v}}arphi = 0$, which allows us to apply a Poincaré{} inequality. Therefore we perform a change of coordinates $\tilde{\bm{v}}arphi = {\bm{v}}arphi-\mathfrak m$ and obtain the following problem in the transformed coordinates. \begin{align}\label{equ:MultPhaseMCPTrans} \mathfrak min j({\bm{v}}arphi) := \int_{\Gamma_g}{\bm{g}}\cdot S({\bm{v}}arphi+\mathfrak m) + \gamma E({\bm{v}}arphi+\mathfrak m)\\ {\bm{v}}arphi\in \Phi_{ad} := \left\{{\bm{v}}arphi\in H^1(\Omega)^N\mathrel{}\middle|\mathrel{} \strokedint_\Omega {\bm{v}}arphi = 0, \quad {\bm{v}}arphi \geq -\mathfrak m, \quad \sum_{i=1}^{N}\varphi^i \equiv 0\right\}.\notag \end{align} On the transformed problem \eqref{equ:MultPhaseMCPTrans} we apply the VMPT method in the spaces \begin{align*} \mathfrak mathbbm{X} &:= \left\{{\bm{v}}arphi\in H^1(\Omega)^N\mathrel{}\middle|\mathrel{} \strokedint_\Omega {\bm{v}}arphi = \bm 0\right\},\quad \mathbbm D := L^\infty(\Omega)^N. \end{align*} The space of mean value free functions $\mathfrak mathbbm{X}$ becomes a Hilbert space with the inner product $(\bm f,\bm g)_\mathfrak mathbbm{X}:=(\nabla \bm f,\nabla \bm g)_{L^2}$ and $\|.\|_\mathfrak mathbbm{X}$ is equivalent to the $H^1$-norm \cite{AltFunct}. \begin{theorem}\label{thm:C1} The reduced cost functional $j : \mathfrak mathbbm{X}\cap \mathbbm D\to \mathfrak mathbbm{R}$ is continuously Fréchet{} differentiable and $j'$ is Lipschitz continuous on $\Phi_{ad}$. \end{theorem} \begin{proof} The Fréchet{} differentiability of $j$ on $\mathfrak mathbbm{X}\cap \mathbbm D$ is shown in \cite{bfgs2013}.
Let $\bm{\eta},{\bm{v}}arphi_i\in \mathfrak mathbbm{X}\cap \mathbbm D$ and $\boldsymbol{u}_i = S({\bm{v}}arphi_i)$, $i=1,2$. Then with \eqref{eq:jderiv}, $\psi_0\in C^{1,1}(\mathfrak mathbbm{R}^N)$, $C_{ijkl}\in C^{1,1}(\mathfrak mathbbm{R}^N)$ and $|{\bm{C}}'({\bm{v}}arphi)|\leq C$ $\forall{\bm{v}}arphi\in\mathfrak mathbbm{R}^N$ we get \begin{align} |(j'({\bm{v}}arphi_1)- j'({\bm{v}}arphi_2))\bm{\eta}| &\leq \gamma\varepsilon\|{\bm{v}}arphi_1-{\bm{v}}arphi_2\|_{H^1}\|\bm{\eta}\|_{H^1} + C\frac{\gamma}{\varepsilon}\|{\bm{v}}arphi_1-{\bm{v}}arphi_2\|_{L^2}\|\bm{\eta}\|_{L^2} \notag\\ &\phantom{=\ }+ |\begingroup\textstyle \int\endgroup_\Omega({\bm{C}}'(\mathfrak m+{\bm{v}}arphi_1)-{\bm{C}}'(\mathfrak m+{\bm{v}}arphi_2))(\bm{\eta})\mathcal{E}(\boldsymbol{u}_1):\mathcal{E}(\boldsymbol{u}_1)|\notag\\ &\phantom{=\ }+|\begingroup\textstyle \int\endgroup_\Omega{\bm{C}}'(\mathfrak m+{\bm{v}}arphi_2)(\bm{\eta})\mathcal{E}(\boldsymbol{u}_1-\boldsymbol{u}_2):\mathcal{E}(\boldsymbol{u}_1)|\notag\\ &\phantom{=\ } + |\begingroup\textstyle \int\endgroup_\Omega{\bm{C}}'(\mathfrak m+{\bm{v}}arphi_2)(\bm{\eta})\mathcal{E}(\boldsymbol{u}_2):\mathcal{E}(\boldsymbol{u}_1-\boldsymbol{u}_2)|\notag\\ &\leq C\|{\bm{v}}arphi_1-{\bm{v}}arphi_2\|_{H^1}\|\bm{\eta}\|_{H^1}\notag\\ &\phantom{=\ } + \|({\bm{C}}'(\mathfrak m+{\bm{v}}arphi_1)-{\bm{C}}'(\mathfrak m+{\bm{v}}arphi_2))\bm{\eta}\|_{L^\infty}\|\boldsymbol{u}_1\|^2_{H^1}\notag\\ &\phantom{=\ } + C \|\bm{\eta}\|_{L^\infty}\|\boldsymbol{u}_1-\boldsymbol{u}_2\|_{H^1}(\|\boldsymbol{u}_1\|_{H^1}+\|\boldsymbol{u}_2\|_{H^1})\notag\\ &\leq C\|\bm{\eta}\|_{H^1\cap L^\infty}\{\|{\bm{v}}arphi_1-{\bm{v}}arphi_2\|_{H^1} + \|{\bm{v}}arphi_1-{\bm{v}}arphi_2\|_{L^\infty}\|\boldsymbol{u}_1\|^2_{H^1} \notag\\ &\phantom{=\ } + \|\boldsymbol{u}_1-\boldsymbol{u}_2\|_{H^1}(\|\boldsymbol{u}_1\|_{H^1}+\|\boldsymbol{u}_2\|_{H^1})\}. \label{eq:derivestim} \end{align} To show the continuity of $j'$, let ${\bm{v}}arphi_n,{\bm{v}}arphi\in \mathfrak mathbbm{X}\cap\mathbbm D$ for $n\in\mathfrak mathbbm{N}$ with ${\bm{v}}arphi_n\to {\bm{v}}arphi$ in $\mathfrak mathbbm{X}\cap\mathbbm D$. Using \eqref{eq:derivestim} yields \begin{multline*} \|j'({\bm{v}}arphi_n)- j'({\bm{v}}arphi)\|_{(H^1\cap L^\infty)^*} \\ \leq C(\|{\bm{v}}arphi_n-{\bm{v}}arphi\|_{H^1\cap L^\infty}(1+\|\boldsymbol{u}_n\|_{H^1}^2)+\|\boldsymbol{u}_n-\boldsymbol{u}\|_{H^1}(\|\boldsymbol{u}_n\|_{H^1}+\|\boldsymbol{u}\|_{H^1})), \end{multline*} where $\boldsymbol{u}_n = S({\bm{v}}arphi_n)$ and $\boldsymbol{u} = S({\bm{v}}arphi)$. From the continuity of $S$ we get that $\|\boldsymbol{u}_n\|_{H^1}$ is bounded and that $\|\boldsymbol{u}_n-\boldsymbol{u}\|_{H^1}\to 0$ as $n\to \infty$. This implies \begin{align*} \|j'({\bm{v}}arphi_n)- j'({\bm{v}}arphi)\|_{(H^1\cap L^\infty)^*} \to 0 \end{align*} and thus $j\in C^1(\mathfrak mathbbm{X}\cap\mathbbm D)$.\\ For the Lipschitz continuity of $j'$ we employ estimate \eqref{eq:derivestim} with ${\bm{v}}arphi_i\in \Phi_{ad}$, $i=1,2$. Since $\Phi_{ad}$ is bounded in $L^\infty$, we get that $S$ is Lipschitz continuous on $\Phi_{ad}$ and that $\|S({\bm{v}}arphi)\|_{H^1}\leq C$, independent of ${\bm{v}}arphi\in\Phi_{ad}$, see \cite{bfgs2013}. This yields \begin{align*} \|j'({\bm{v}}arphi_1)- j'({\bm{v}}arphi_2)\|_{(H^1\cap L^\infty)^*} \leq C \|{\bm{v}}arphi_1-{\bm{v}}arphi_2\|_{H^1\cap L^\infty}, \end{align*} which proves the Lipschitz continuity of $j'$ on $\Phi_{ad}$.
\end{proof} \begin{cor}\label{cor:jPhiadassum} The spaces $\mathfrak mathbbm{X}$ and $\mathbbm D$, together with $j$ and $\Phi_{ad}$ given in \eqref{equ:MultPhaseMCPTrans} fulfill the assumptions \ref{ass:LimitsUnique}-\ref{ass:WeakDiff} of the VMPT method. \end{cor} \begin{proof} Given the choices for $\mathfrak mathbbm{X}$ and $\mathbbm D$ \ref{ass:LimitsUnique} is fulfilled. For ${\bm{v}}arphi\in\Phi_{ad}$ we have \begin{align*} \bm {-1}\leq-\mathfrak m\leq {\bm{v}}arphi \leq \bm 1-\mathfrak m\leq \bm 1\quad \forall {\bm{v}}arphi\in\Phi_{ad} \end{align*} almost everywhere in $\Omega$. Thus it holds \ref{ass:Bdd} and $\Phi_{ad}\subseteq\mathfrak mathbbm{X}\cap\mathbbm D$. Moreover, $\bm 0\in \Phi_{ad}$, $\Phi_{ad}$ is convex, and since $\Phi_{ad}$ is closed in $L^2(\Omega)^N$, it is also closed in $\mathfrak mathbbm{X}\hookrightarrow L^2(\Omega)^N$. Thus \ref{ass:Closed} holds.\\ Assumption \ref{ass:BddBelow} is shown in \cite{bfgs2013} and Theorem \ref{thm:C1} provides \ref{ass:Diff}.\\ Given \begin{align*} \spr{j'({\bm{v}}arphi),{\bm{v}}arphi_i} &= \int_\Omega\{\gamma \varepsilon \nabla{\bm{v}}arphi: \nabla{\bm{v}}arphi_i + (\tfrac \gamma \varepsilon \nabla \psi_0({\bm{v}}arphi+\mathfrak m) -\nabla{\bm{C}}({\bm{v}}arphi+\mathfrak m)\mathcal{E}(\boldsymbol{u}):\mathcal{E}(\boldsymbol{u}))\cdot {\bm{v}}arphi_i\} \end{align*} the first term converges to $0$ if ${\bm{v}}arphi_i\to 0$ weakly in $H^1$. With \ref{ass:psiC11} and $\boldsymbol{u} \in H^1_D$ we have that $\tfrac \gamma \varepsilon\nabla \psi_0({\bm{v}}arphi+\mathfrak m)-\nabla{\bm{C}}({\bm{v}}arphi+\mathfrak m)\mathcal{E}(\boldsymbol{u}):\mathcal{E}(\boldsymbol{u})\in L^1(\Omega)^N$. Hence the remaining term converges to $0$ if ${\bm{v}}arphi_i\to 0$ weakly-* in $L^\infty$, which proves that \ref{ass:WeakDiff} is fulfilled. \end{proof} \mathfrak medskip Possible choices of the inner product $a_k$ for the VMPT method are the inner product on $\mathfrak mathbbm{X}$, i.e. \begin{align} a_k(\bm p, \bm y) &= (\bm p, \bm y)_\mathfrak mathbbm{X} = \int_\Omega\nabla \bm p:\nabla \bm y \label{eq:akH1} \end{align} and the scaled version $ a_k(\bm p, \bm y) = \gamma\varepsilon(\bm p, \bm y)_\mathfrak mathbbm{X} $. Both fulfill the assumptions \ref{ass:aInnerProduct}-\ref{ass:aToZ}. We also give an example of a pointwise choice of an inner product, which includes second order information. Since this choice is not continuous in $\mathfrak mathbbm{X}$, it is not obvious that it fulfills the assumptions. To motivate the choice of this inner product we look at the second order derivative of $j$, which is formally given by \begin{align*} j''({\bm{v}}arphi_k)[\bm p,\bm y] &= \int_\Omega\{\gamma\varepsilon \nabla\bm p:\nabla\bm y - 2({\bm{C}}'(\mathfrak m+{\bm{v}}arphi_k)(\bm y)\mathcal{E}(S'({\bm{v}}arphi_k)\bm p):\mathcal{E}(\boldsymbol{u}_k)) + \\ &\phantom{=\ } + \frac{\gamma}{\varepsilon}\nabla^2\psi_0(\mathfrak m+{\bm{v}}arphi_k)\bm p\cdot\bm y - {\bm{C}}''(\mathfrak m+{\bm{v}}arphi_k)[\bm p,\bm y]\mathcal{E}(\boldsymbol{u}_k):\mathcal{E}(\boldsymbol{u}_k)\}. 
\end{align*} In \cite{bfgs2013} it is shown that ${\bm{z}}_p:=S'({\bm{v}}arphi_k)\bm p\in H^1_D$ is the unique weak solution of the linearized state equation \begin{align}\label{eq:linearized} \int_\Omega {\bm{C}}(\mathfrak m+{\bm{v}}arphi_k)\mathcal{E}(\bm z_p):\mathcal{E}(\bm{\eta}) = -\int_\Omega {\bm{C}}'(\mathfrak m+{\bm{v}}arphi_k)\bm p\mathcal{E}(\boldsymbol{u}_k):\mathcal{E}(\bm{\eta})\quad\forall \bm{\eta}\in H^1_D \end{align} and that $\|{\bm{z}}_p\|_{H^1}\leq C\|\bm p\|_{L^\infty}$ holds. Since the first two terms in $j''$ define an inner product (see proof of Theorem \ref{thm:akso}), we use \begin{align}\label{eq:akso} a_k(\bm p, \bm y) &= \gamma\varepsilon (\bm p,\bm y)_\mathfrak mathbbm{X} - 2 \int_\Omega {\bm{C}}'(\mathfrak m+\bm{\varphi}_k)(\bm y) \mathcal{E}(\bm z_p):\mathcal{E}(\boldsymbol{u}_k) \end{align} as an approximation of $j''({\bm{v}}arphi_k)$. Testing equation \eqref{eq:linearized} for ${\bm{z}}_y=S'({\bm{v}}arphi_k)\bm y$ with ${\bm{z}}_p $ we can equivalently write \begin{align}\label{eq:akso2} a_k(\bm p, \bm y)= \gamma\varepsilon (\bm p,\bm y)_\mathfrak mathbbm{X} + 2\int_\Omega {\bm{C}}(\mathfrak m+{\bm{v}}arphi_k)\mathcal{E}(\bm z_p):\mathcal{E}(\bm z_y). \end{align} We would like to mention that the $C^2$-regularity of $j$ is not necessary for this definition of $a_k$. \begin{theorem}\label{thm:akso} The bilinear form $a_k$ given in \eqref{eq:akso} fulfills the assumptions \ref{ass:aInnerProduct}-\ref{ass:aToZ}. \end{theorem} \begin{proof} Due to \ref{ass:Ccoercive} and \eqref{eq:akso2} we have \begin{align*} a_k(\bm p,\bm p) \geq \gamma\varepsilon \|\bm p\|^2_{\mathfrak mathbbm{X}}. \end{align*} Thus, \ref{ass:aInnerProduct} and \ref{ass:aCoercive} is fulfilled. Furthermore, \ref{ass:aBdd} holds due to \begin{align*} a_k({\bm{p}},\bm y)&\leq \gamma\varepsilon \|{\bm{p}}\|_{H^1}\|\bm y\|_{H^1}+C\|{\bm{z}}_p\|_{H^1}\|{\bm{z}}_y\|_{H^1}\\ &\leq \gamma\varepsilon \|{\bm{p}}\|_{H^1}\|\bm y\|_{H^1}+C\|{\bm{p}}\|_{L^\infty}\|\bm y\|_{L^\infty}\leq C\|{\bm{p}}\|_{\mathfrak mathbbm{X}\cap\mathbbm D}\|\bm y\|_{\mathfrak mathbbm{X}\cap\mathbbm D}. \end{align*} \ref{ass:aWeakDiff} is proved as in Corollary \ref{cor:jPhiadassum}.\\ Finally we prove \ref{ass:aToZ}. For $\bm y_k\to 0$ and $\bm p_k\to \bm p$ in $\mathfrak mathbbm{X}$ we have $(\bm y_k,\bm p_k)_\mathfrak mathbbm{X}\to 0$ for $k\to\infty$. With ${\bm{v}}arphi_k\to{\bm{v}}arphi$, $\bm p_k\to \bm p$ in $\mathbbm D=L^\infty(\Omega)^N$ and $S:L^\infty(\Omega)^N\to H^1(\Omega)^N$ continuously Fréchet{} differentiable, we have $\boldsymbol{u}_k=S({\bm{v}}arphi_k) \to S({\bm{v}}arphi)=:\boldsymbol{u}$ in $H^1_D$ and ${\bm{z}}_{p_k} = S'({\bm{v}}arphi_k){\bm{p}}_k\to S'({\bm{v}}arphi){\bm{p}} =: {\bm{z}}_p$ in $H^1_D$. In particular, the sequences are bounded in the corresponding norms, including $\|\bm y_k\|_{L^\infty}\leq C$ if $\bm y_k\to\bm y$ weakly-* in $L^\infty$. 
Using the Lipschitz continuity and boundedness of ${\bm{C}}'$ and $\nabla{\bm{C}}(\mathfrak m+{\bm{v}}arphi)\mathcal{E}({\bm{z}}_{p}):\mathcal{E}(\boldsymbol{u})\in L^1(\Omega)^N$ we have \begin{align*} &|\begingroup\textstyle \int\endgroup_\Omega{\bm{C}}'(\mathfrak m+{\bm{v}}arphi_k)\bm y_k\mathcal{E}({\bm{z}}_{p_k}):\mathcal{E}(\boldsymbol{u}_k)| \\ &\leq |\begingroup\textstyle \int\endgroup_\Omega({\bm{C}}'(\mathfrak m+{\bm{v}}arphi_k)-{\bm{C}}'(\mathfrak m+{\bm{v}}arphi))\bm y_k\mathcal{E}({\bm{z}}_{p_k}):\mathcal{E}(\boldsymbol{u}_k)|\\ &\phantom{\leq\ } + |\begingroup\textstyle \int\endgroup_\Omega{\bm{C}}'(\mathfrak m+{\bm{v}}arphi)\bm y_k\mathcal{E}({\bm{z}}_{p_k}-{\bm{z}}_{p}):\mathcal{E}(\boldsymbol{u}_k)|\\ &\phantom{\leq\ } + |\begingroup\textstyle \int\endgroup_\Omega{\bm{C}}'(\mathfrak m+{\bm{v}}arphi)\bm y_k\mathcal{E}({\bm{z}}_{p}):\mathcal{E}(\boldsymbol{u}_k-\boldsymbol{u})| + |\begingroup\textstyle \int\endgroup_\Omega{\bm{C}}'(\mathfrak m+{\bm{v}}arphi)\bm y_k\mathcal{E}({\bm{z}}_{p}):\mathcal{E}(\boldsymbol{u})|\\ &\leq L\|{\bm{v}}arphi_k-{\bm{v}}arphi\|_{L^\infty}\|\bm y_k\|_{L^\infty}\|{\bm{z}}_{p_k}\|_{H^1}\|\boldsymbol{u}_k\|_{H^1} \\ &\phantom{\leq\ } +\|{\bm{C}}'(\mathfrak m+{\bm{v}}arphi)\|_{L^\infty}\|\bm y_k\|_{L^\infty}\|{\bm{z}}_{p_k}-{\bm{z}}_{p}\|_{H^1}\|\boldsymbol{u}_k\|_{H^1}\\ &\phantom{\leq\ } + \|{\bm{C}}'(\mathfrak m+{\bm{v}}arphi)\|_{L^\infty}\|\bm y_k\|_{L^\infty}\|{\bm{z}}_{p}\|_{H^1}\|\boldsymbol{u}_k-\boldsymbol{u}\|_{H^1} \\ &\phantom{\leq\ } + |\begingroup\textstyle \int\endgroup_\Omega (\nabla{\bm{C}}(\mathfrak m+{\bm{v}}arphi)\mathcal{E}({\bm{z}}_{p}):\mathcal{E}(\boldsymbol{u}))\cdot \bm y_k| \to 0, \end{align*} which gives \ref{ass:aToZ}. \end{proof} Hence with $0<\lambda_{min}\leq \lambda_k\leq \lambda_{max}$, all assumptions of Theorem \ref{thm:GlobalConvvm} are fulfilled and we get global convergence in the space $H^1(\Omega)^N\cap L^\infty(\Omega)^N$. \section{Numerical results} We discretize the structural topology optimization problem \eqref{equ:MultPhaseMCP}-\eqref{equ:Constraints} using standard piecewise linear finite elements for the control ${\bm{v}}arphi$ and the state variable $\boldsymbol{u}$. The projection type subproblem \eqref{eq:projproblvm} is solved by a primal dual active set (PDAS) method similar to the method described in \cite{bgss2011}. Many numerical examples for this problem can be found in \cite{bfgrs2014, bghr2014}, e.g. for cantilever beams with up to three materials in two or three space dimensions and for an optimal material distribution within an airfoil. In \cite{bfgrs2014} the choice of the potential $\psi$ as an obstacle potential and the choice of the tensor interpolation ${\bm{C}}$ is discussed. Also the inner products $(.,.)_\mathfrak mathbbm{X}$ and $\gamma\varepsilon(.,.)_\mathfrak mathbbm{X}$ for fixed scaling parameter $\lambda_k=1$ are compared, where both give rise to a mesh independent method and the latter leads to a large speed up. Note that the choice of $(.,.)_\mathfrak mathbbm{X}$ with $\lambda_k = (\gamma\varepsilon)^{-1}$ leads to the same iterates than choosing $\gamma\varepsilon(.,.)_\mathfrak mathbbm{X}$ and $\lambda_k=1$. 
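This equivalence can be seen directly if one assumes, as in the derivation of \eqref{eq:fstorderki}, that the projection $\overline{\bm{\varphi}}_k=\mathfrak mathcal P_{k}(\bm{\varphi}_k)$ is characterized by the variational inequality \begin{align*} a_k(\overline{\bm{\varphi}}_k-\bm{\varphi}_k,\eta-\overline{\bm{\varphi}}_k) + \lambda_k \spr{j'(\bm{\varphi}_k),\eta-\overline{\bm{\varphi}}_k}\geq 0\quad\forall \eta\in\Phi_{ad}. \end{align*} Multiplying this inequality by $\gamma\varepsilon$ shows that the pair $(.,.)_\mathfrak mathbbm{X}$, $\lambda_k=(\gamma\varepsilon)^{-1}$ and the pair $\gamma\varepsilon(.,.)_\mathfrak mathbbm{X}$, $\lambda_k=1$ determine the same $\overline{\bm{\varphi}}_k$, and hence the same search directions $v_k$ and the same iterates.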
Furthermore, it is discussed in \cite{bfgrs2014} that the choice of $\gamma\varepsilon(.,.)_\mathfrak mathbbm{X}$ can be motivated using $j''(\bm{\varphi})$ or by the fact that for the minimizers $\{\bm{\varphi}_\varepsilon\}_{\varepsilon>0}$ the Ginzburg-Landau energy converges to the perimeter as $\varepsilon\to 0$ and hence $\gamma\varepsilon\|\bm{\varphi}_\varepsilon\|^2_\mathfrak mathbbm{X}\approx const$ independent of $\varepsilon\ll 1$. However, since this holds for the iterates $\bm{\varphi}_k$ only when the phases are separated and the interfaces are present with thickness proportional to $\varepsilon$, we suggest adapting $\lambda_k$ accordingly. The following update strategy for $\lambda_k$ is applied: start with $\lambda_0 = 0.005(\gamma\varepsilon)^{-1}$; then, if $\alpha_{k-1}=1$, set $\tilde\lambda_k = \lambda_{k-1}/0.75$, else $\tilde\lambda_k = 0.75\lambda_{k-1}$, and finally $\lambda_k = \mathfrak max\{\lambda_{min},\mathfrak min\{\lambda_{max},\tilde \lambda_k\}\}$. The last adjustment ensures that \ref{ass:Lambda} is fulfilled. Numerical experiments in \cite{bfgrs2014} show that this in fact produces for the choice $(.,.)_\mathfrak mathbbm{X}$ a scaling with $\lambda_k\approx (\gamma\varepsilon)^{-1}$ for large $k$.\\ In \cite{bfgrs2014, bghr2014} the effect of obtaining various local minima of the nonconvex optimization problem \eqref{equ:MultPhaseMCP}-\eqref{equ:Constraints} by choosing different initial guesses $\bm{\varphi}_0$ can be seen. However, the other parameters also have an influence.\\ In this paper we concentrate on comparing different choices of the inner products $a_k$ and use for this purpose the cantilever beam described in \cite{bfgrs2014} with $\psi_0({\bm{v}}arphi) = \tfrac 1 2 (1-\bm{\varphi}\cdot\bm{\varphi})$ and a quadratic interpolation of the stiffness tensors ${\bm{C}}({\bm{v}}arphi)$. The computations are performed on a personal computer with a 3GHz processor and 4GB RAM. First we discuss the choice of $(.,.)_{L^2}$ versus $(.,.)_\mathfrak mathbbm{X}$. The choice of the $L^2$-inner product leads to the commonly used projected $L^2$-gradient method. However, $(.,.)_{L^2}$ does not fulfill the assumptions of the VMPT method, since $j$ is not differentiable in $L^2(\Omega)^N$ or $L^2(\Omega)^N\cap L^\infty(\Omega)^N$. Thus, global convergence is given for the discretized, finite dimensional problem but not in the continuous setting. In contrast to the choice of $(.,.)_\mathfrak mathbbm{X}$, this leads to mesh dependent iteration numbers for the $L^2$-gradient method, which can be seen in Table \ref{tab:L2H1}. The values in Table \ref{tab:L2H1} were computed for different uniform mesh sizes $h$ with the parameters $\varepsilon=0.04$, $\gamma=0.5$, $\bm{\varphi}_0 \equiv \mathfrak m$ and $tol=10^{-5}$ for the stopping criterion $\sqrt{\gamma\varepsilon}\|\nabla\bm{\varphi}_k\|_{L^2}\leq tol$. The behaviour of the iteration numbers as $h\to 0$ is in accordance with our analytical results in function spaces. Furthermore, numerical results not listed here show that we obtain for $(.,.)_\mathfrak mathbbm{X}$ and large $k$ scalings $\lambda_k \approx (\gamma\varepsilon)^{-1}$ independent of the mesh parameter $h$, whereas the $L^2$-inner product produces $\lambda_k$ scaled with $h^2$. Since the algorithm using the $L^2$-inner product is equivalent to the explicit time discretization of the $L^2$-gradient flow, i.e.
of the Allen-Cahn variational inequality coupled with elasticity, with time step size $\Delta t = \lambda_k$, the scaling $\lambda_k = \mathcal{O}(h^2)$ reflects the known stability condition $\Delta t = \mathcal{O}(h^2)$ for explicit time discretizations of parabolic equations.\\ \begin{table}[h]\centering \begin{tabular}{|l||r|r|r|r|r|} \hline $ h$ & $2^{-4}$ & $2^{-5}$ & $2^{-6}$ & $2^{-7}$ & $2^{-8}$\\ \hline\hline $(.,.)_{L^2}$ & 323 & 5015 & 18200 & 57630 & 172621\\ $(.,.)_\mathfrak mathbbm{X}$ & 111 & 407 & 320 & 275 & 269\\ \hline \end{tabular} \caption{Comparison of iteration numbers for $(.,.)_{L^2}$ and $(.,.)_\mathfrak mathbbm{X}$.}\label{tab:L2H1} \end{table} Next we compare $(.,.)_\mathfrak mathbbm{X}$ with $a_k$ given in \eqref{eq:akso}, which incorporates second order information. As a test case we again use the cantilever beam in \cite{bfgrs2014}, now with $\varepsilon = 0.001$, $\gamma = 0.002$, $tol = 10^{-4}$ and random initial guess ${\bm{v}}arphi_0$ together with an adaptive mesh, which is fine on the interface with $h_{max} = 2^{-6}$ and $h_{min} = 2^{-11}$. The parameter $\lambda_k$ is updated as described above. The computational cost of one iteration with $a_k$ given in \eqref{eq:akso} is significantly higher, since the calculation of $\mathfrak mathcal P_k(\bm{\varphi}_k)$ requires the solution of a quadratic optimization problem with $\bm{\varphi}\in\Phi_{ad}$ and in addition with the linearized state equation \eqref{eq:linearized} as constraints. However, within the PDAS iterations solving the subproblem for fixed $k$, only the right hand side of \eqref{eq:linearized} changes, namely $\bm p$. We factorize the matrix in the discrete equation once such that for each $\bm p$ only a cheap forward and backward substitution has to be done. In Table \ref{tab:akCompare} the corresponding iteration numbers, the total CPU time, the values of the combined cost functional $j(\bm{\varphi}^*)$ as well as of its parts, i.e. the mean compliance and the Ginzburg-Landau energy, are listed. One observes the drastic reduction in iteration numbers using second order information. Due to the mentioned higher costs of calculating the search directions the total CPU-time is only halved. Nevertheless, this can possibly be improved using a more sophisticated solver for $\mathfrak mathcal P_k(\bm{\varphi}_k)$. It can also be observed that the cost $j(\bm{\varphi}^*)$ and, probably more interesting, the mean compliance are lower. Hence, the different inner products result in different local minima, which are shown in Figure \ref{fig:SmallGamma}. The inner product given in \eqref{eq:akso} yields a finer structure. In other experiments we also observed a local minimum with a lower cost value for this choice of $a_k$. \begin{figure} \caption{Local minima for the cantilever beam.} \label{fig:SmallGamma} \end{figure} \begin{table}[h]\centering \begin{tabular}{|l||r|r|r|r|r|} \hline inner product & iterations & CPU time & $j({\bm{v}}arphi^*)$ & $\int_{\Gamma_g}{\bm{g}}\cdot\boldsymbol{u}^*$ & $E({\bm{v}}arphi^*)$ \\ \hline\hline $(.,.)_\mathfrak mathbbm{X}$ & 11189 & 42h 12min & 15.07 & 15.03 & 20.79\\ $a_k$ in \eqref{eq:akso} & 851 & 19h & 14.99 & 14.93 & 30.12\\ \hline \end{tabular} \caption{Comparison of two different inner products.}\label{tab:akCompare} \end{table} We also successfully applied an L-BFGS update of the metric $a_k$ in function spaces (see e.g. \cite{gruver1981algorithmic} for the unconstrained case in Hilbert space), i.e.
starting with $a_0(\bm u,\bm v) = \gamma\varepsilon(\bm u,\bm v)_\mathfrak mathbbm{X}$ we use the update \\ \centerline{$ a_{k+1}(\bm u,\bm v) = a_k(\bm u,\bm v) - \frac {a_k(\bmsub p_k,\bmsub u) a_k(\bmsub p_k,\bmsub v)}{a_k(\bmsub p_k,\bmsub p_k)} + \frac{\spr{y_k,\bmsub u}\spr{y_k,\bmsub v}}{\spr{y_k,\bmsub p_k}} $ }\\ in case that $\spr{y_k,\bm p_k}>0$, where $\bm p_k := \bm{\varphi}_{k+1}-\bm{\varphi}_k$ and $y_k := j'(\bm{\varphi}_{k+1})-j'(\bm{\varphi}_k)$, which performs very well, especially for small $\gamma$. Note that -- as in the finite dimensional case -- assumption \ref{ass:aCoercive} cannot be shown for this sequence of inner products, but numerical experiments show that the discretized method is mesh independent, see Table \ref{tab:BFGS} for the above cantilever beam example, where the maximal recursion depth is set to 10. \begin{table}[h]\centering \begin{tabular}{|r||r|r|r|} \hline $h$ & $2^{-5}$ & $2^{-6}$ & $2^{-7}$ \\ \hline\hline $H^1$-BFGS iterations & 85 & 88 & 86\\ \hline \end{tabular} \caption{Mesh independent iteration numbers for the $H^1$-BFGS method.}\label{tab:BFGS} \end{table} The following compliant mechanism problem \begin{align*} \mathfrak min\; \; &\frac 1 2\int_{\Omega_{obs}} (1-\varphi^N)|\boldsymbol{u}-\boldsymbol{u}_\Omega|^2 + \gamma E(\bm{\varphi}), \end{align*} where the elasticity equation \eqref{equ:Elasticity} and the constraints \eqref{equ:Constraints} have to hold, is more difficult. In our numerical experiments the solution process is more sensitive to the choice of $a_k$. Here the above $H^1$-BFGS approach enables us to solve the problem in an acceptable time. The calculation of the material distribution in Figure \ref{fig:Cruncher} took 22 hours until $\gamma\varepsilon\|\nabla v_k\|_{L^2}\leq tol=10^{-4}$ was reached. The mechanism aims to crunch a nut in the middle of the left boundary when the force acts on the right hand side from above and below, and it is supported on the left boundary. Moreover, we also successfully applied the VMPT method to the following drag minimization problem for the Stokes flow using a phase field approach, which is analysed in \cite{HechtStokesEnergy}: \begin{align*} \mathfrak min \int_\Omega \frac{1}{2}|\nabla \boldsymbol{u}|^2 + &\frac 1 2 \alpha_\varepsilon(\varphi)|\mathfrak mathbf{u}|^2 + \gamma E(\varphi)\\ \int_\Omega \alpha_\varepsilon(\varphi)\boldsymbol{u} \mathfrak mathbf{v} + \int_\Omega \nabla\boldsymbol{u}\cdot\nabla \mathfrak mathbf{v} &= \mathfrak mathbf{0} \quad\forall \mathfrak mathbf{v}\in H^1_{0,div}(\Omega)\\ \mathfrak mathbf{u}|_{\partial\Omega} \equiv \left( 1, 0 \right)^T,\quad \strokedint \varphi &= 0.75,\quad -1\leq \varphi \leq 1. \end{align*} We applied a nested approach in $h$ and $\varepsilon$ as well as an adaptive grid. As inner products we used the above $H^1$-BFGS update and obtained the result in Figure \ref{fig:Stokes} after 188 iterations to reach $tol=10^{-3}$, which took 17 minutes. A different type of optimization problem is the inverse problem for a discontinuous diffusion coefficient, where the discontinuous coefficient $a$ is smoothed by a phase field approach and no mass conservation is used \cite{DeEllSty2015}: \begin{align*} & \qquad \mathfrak min\; \; \frac 1 2\int_{\Omega} |u-u_{obs}|^2 + \gamma E(\varphi) \\ \text{s.t.}\quad& \int_\Omega a(\varphi)\nabla u\cdot \nabla\xi = \int_{\Gamma} g \xi \quad\forall \xi\in H^1\quad \text{and}\quad \int_\Omega u = \int_\Omega u_{obs},\quad -1 \leq \varphi \leq 1 .
\end{align*} We choose $u_{obs}$ as solution of the state equation for $\varphi$ shown in the upper part of Figure \ref{fig:InvProbl} with added noise of 5\% and obtain the solution shown in the lower part of Figure \ref{fig:InvProbl}. \begin{figure} \caption{Successful applications of the VMPT method.} \label{fig:Cruncher} \label{fig:Stokes} \label{fig:InvProbl} \end{figure} The VMPT method can also be used for image inpainting using a phase field approach by considering \begin{align*} \mathfrak min\; \; &\tfrac 1 2\|\varphi-f\|^2_{H(\Omega\setminus D)} + \gamma E(\varphi) \end{align*} such that $\varphi$ fulfills \eqref{equ:Constraints}, where $f$ is the given image and the inpainting is performed in $D$ \cite{BuHeSch09}. The method can adjust to the chosen metric $H(\Omega\setminus D)$ and for this problem a line search with exact step length can be applied \cite{KiesMasterThesis}.\\ The last four mentioned application examples are preliminary results and are under further studies. To our knowledge the VMPT-method outperforms the existing applied optimization algorithms in these cases. \end{document}
\begin{document} \title{On two-sided monogenic functions of axial type\thanks{accepted for publication in Moscow Mathematical Journal}} \author{Dixan Pe\~na Pe\~na$^{\text{a}}$\\ \small{e-mail: [email protected]} \and Irene Sabadini$^{\text{a}}$\\ \small{e-mail: [email protected]} \and Frank Sommen$^{\text{b}}$\\ \small{e-mail: [email protected]}} \date{\small{$^\text{a}$Dipartimento di Matematica, Politecnico di Milano\\Via E. Bonardi 9, 20133 Milano, Italy\\ $^{\text{b}}$Clifford Research Group, Department of Mathematical Analysis\\Faculty of Engineering and Architecture, Ghent University\\Galglaan 2, 9000 Gent, Belgium}} \maketitle \begin{abstract} \noindent In this paper we study two-sided (left and right) axially symmetric solutions of a generalized Cauchy-Riemann operator. We present three methods to obtain special solutions: via the Cauchy-Kowalevski extension theorem, via plane wave integrals and Funk-Hecke's formula and via primitivation. Each of these methods is effective enough to generate all the polynomial solutions. \\ \noindent\textit{Keywords}: Two-sided monogenic functions; plane waves; Vekua systems; Funk-Hecke's formula. \\ \textit{Mathematics Subject Classification}: 30G35, 33C10, 44A12. \end{abstract} \section{Introduction} Let $\mathbb{R}_{0,m}$ be the real Clifford algebra generated by the canonical basis $\{e_1,\ldots,e_m\}$ of the Euclidean space $\mathbb R^m$ (see \cite{Cl,Lo}). It is an associative algebra in which the multiplication has the property $\underline x^2=-\vert\underline x\vert^2=-\sum_{j=1}^mx_j^2$ for any $\underline x=\sum_{j=1}^mx_je_j\in\mathbb R^m$. This requirement clearly implies the following multiplication rules \[e_je_k+e_ke_j=-2\delta_{jk},\quad j,k\in\{1,\dots,m\}.\] Any Clifford number $a\in\mathbb R_{0,m}$ may thus be written as \[a=\sum_Aa_Ae_A,\quad a_A\in\mathbb R,\] using the basis elements $e_A=e_{j_1}\dots e_{j_k}$ defined for every subset $A=\{j_1,\dots,j_k\}$ of $\{1,\dots,m\}$ with $j_1<\dots<j_k$ (for $A=\emptyset$ one puts $e_{\emptyset}=1$). Conjugation in $\mathbb R_{0,m}$ is given by $\overline a=\sum_Aa_A\overline e_A$, where $\overline e_A=\overline e_{j_k}\dots\overline e_{j_1}$, $\overline e_j=-e_j$, $j=1,\dots,m$. It is easy to check that \begin{equation}\label{revconj} \overline{ab}=\overline b\overline a,\quad a,b\in\mathbb R_{0,m}. \end{equation} For each $\ell\in\{0,1,\dots,m\}$ we call \[\mathbb R_{0,m}^{(\ell)}=\text{span}_{\mathbb R}\big(e_A:\;\vert A\vert=\ell\big)\] the subspace of $\ell$-vectors, i.e. the subspace spanned by the products of $\ell$ different basis vectors. Thus, every element $a\in\mathbb R_{0,m}$ admits the so-called multivector decomposition \[a=\sum_{\ell=0}^m[a]_{\ell},\] where $[a]_{\ell}$ denotes the projection of $a$ on $\mathbb R_{0,m}^{(\ell)}$. Observe that the product of two Clifford vectors $\underline x=\sum_{j=1}^mx_je_j$ and $\underline y=\sum_{j=1}^my_je_j$ splits into a scalar part and a 2-vector part \begin{equation*} \underline x\,\underline y=\underline x\bullet\underline y+\underline x\wedge\underline y\in\mathbb R_{0,m}^{(0)}\oplus\mathbb R_{0,m}^{(2)}, \end{equation*} where \[\underline x\bullet\underline y=-\left\langle\underline x,\underline y\right\rangle=-\sum_{j=1}^mx_jy_j\] equals, up to a minus sign, the standard Euclidean inner product between $\underline x$ and $\underline y$, while \[\underline x\wedge\underline y=\sum_{j=1}^m\sum_{k=j+1}^me_je_k(x_jy_k-x_ky_j)\] represents the standard outer (or wedge) product between them. 
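As a simple illustration of this splitting, take $m=2$, $\underline x=e_1$ and $\underline y=e_1+e_2$. Then \[\underline x\,\underline y=e_1(e_1+e_2)=e_1^2+e_1e_2=-1+e_1e_2,\] where $-1=\underline x\bullet\underline y=-\left\langle\underline x,\underline y\right\rangle$ is the scalar part and $e_1e_2=\underline x\wedge\underline y$ is the $2$-vector part.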
One natural way to extend the theory of holomorphic functions of a complex variable to higher dimensions is to consider the null solutions of the so-called generalized Cauchy-Riemann operator in $\mathbb R^{m+1}$, given by \[\partial_{x_0}+\partial_{\underline x},\] where $\partial_{\underline x}=\sum_{j=1}^me_j\partial_{x_j}$ is the Dirac operator in $\mathbb R^m$ (see \cite{BDS,CS4,DSS,GM,GuSp}). \begin{defn} A function $f:\Omega\rightarrow\mathbb{R}_{0,m}$ defined and continuously differentiable in an open set $\Omega$ in $\mathbb R^{m+1}$ is said to be left {\rm(}resp. right{\rm)} monogenic in $\Omega$ if $(\partial_{x_0}+\partial_{\underline x})f=0$ {\rm(}resp. $f(\partial_{x_0}+\partial_{\underline x})=0${\rm)} in $\Omega$. Moreover, functions which are both left and right monogenic, i.e. functions satisfying the overdetermined system \begin{equation}\label{TSidedEq} (\partial_{x_0}+\partial_{\underline x})f=f(\partial_{x_0}+\partial_{\underline x})=0, \end{equation} are called two-sided monogenic. \end{defn} Monogenicity with respect to the Dirac operator $\partial_{\underline x}$ is defined in a similar fashion. Note that the differential operator $\partial_{x_0}+\partial_{\underline x}$ provides a factorization of the Laplacian in the sense that \[\Delta=\sum_{j=0}^m\partial_{x_j}^2=(\partial_{x_0}+\partial_{\underline x})(\partial_{x_0}-\partial_{\underline x})\] and hence monogenic functions are harmonic. One basic yet fundamental result in Clifford analysis is the Cauchy-Kowalevski extension theorem, which states that every monogenic function in $\mathbb R^{m+1}$ is determined by its restriction to $\mathbb R^m$ (see \cite{So1}). \begin{thm}[Cauchy-Kowalevski extension theorem]\label{CKextThm} Every function $g(\underline x)$ analytic in the open set $\,\underline\Omega\subset\mathbb R^m$ has a unique left monogenic extension given by \begin{equation*}\label{CKf} \mathsf{CK}[g(\underline x)](x_0,\underline x)=\sum_{n=0}^\infty\frac{(-x_0)^n}{n!}\,\partial_{\underline x}^ng(\underline x), \end{equation*} and defined in an open neighbourhood $\Omega\subset\mathbb R^{m+1}$ of $\,\underline\Omega$. \end{thm} This result leads to the construction of special monogenic functions depending on the choice of the initial function $g(\underline x)$. For instance, if $g$ is a function of the variable $\langle\underline x,\underline t\rangle$ with $\underline t\in\mathbb R^m$ fixed, then $\mathsf{CK}[g(\langle\underline x,\underline t\rangle)]$ will produce a so-called monogenic plane wave function (see \cite{So2,So3}). Let us denote by $\mathsf{M}_{l}(k)$ (resp. $\mathsf{M}_{r}(k)$) the set of all left (resp. right) monogenic homogeneous polynomials of degree $k$ in $\mathbb R^m$. Another class of special monogenic functions we shall deal with in this paper is the class of axial left monogenic functions (see \cite{LB,S1,S2,S3}). They are left monogenic functions of the form \begin{equation}\label{AxialLMF} \left(M(x_0,r)+\frac{\underline x}{r}\,N(x_0,r)\right)P_k(\underline x),\quad r=\vert\underline x\vert, \end{equation} where $M$, $N$ are $\mathbb R$-valued continuously differentiable functions depending on the two variables $(x_0,r)$ and $P_k(\underline x)$ belongs to $\mathsf{M}_{l}(k)$. It can be easily shown that $M$ and $N$ must satisfy the following Vekua-type system (see \cite{Ve}) \begin{equation}\label{VeEq} \left\{\begin{aligned} \partial_{x_0}M-\partial_rN&=\frac{2k+m-1}{r}N\\ \partial_rM+\partial_{x_0}N&=0. \end{aligned}\right.
\end{equation} One may prove that every left monogenic homogeneous polynomial $M_k(x_0,\underline x)$ of degree $k$ in $\mathbb R^{m+1}$ can be expressed as a finite sum of axial left monogenic functions, i.e. \begin{equation}\label{maindecomp} M_k(x_0,\underline x)=\sum_{n=0}^k\mathsf{CK}\left[\underline x^nP_{k-n}(\underline x)\right](x_0,\underline x),\quad P_{k-n}(\underline x)\in\mathsf{M}_{l}(k-n), \end{equation} and thus showing that the axial left monogenic functions are in fact the building blocks of the solutions of the equation $(\partial_{x_0}+\partial_{\underline x})f=0$. The analogues of functions (\ref{AxialLMF}) for the case of two-sided monogenicity were introduced in \cite{DSo} and are defined as follows. \begin{defn}\label{a2sidedm} Let $P_{k,\ell}(\underline x)$ be an $\mathbb R_{0,m}^{(\ell)}$-valued polynomial belonging to $\mathsf{M}_{l}(k)$ $(1\le\ell\le m-1)$. A function is called axial two-sided monogenic if it is two-sided monogenic and is of the form \begin{equation}\label{AxialTSMF} A(x_0,r)P_{k,\ell}(\underline x)+B(x_0,r)\underline xP_{k,\ell}(\underline x)+C(x_0,r)P_{k,\ell}(\underline x)\underline x+D(x_0,r)\underline x P_{k,\ell}(\underline x)\underline x, \end{equation} where $r=\vert\underline x\vert$ and $A$, $B$, $C$, $D$ are $\mathbb R$-valued continuously differentiable functions in some open subset of $\,\mathbb R^2_+=\{(x_1,x_2)\in\mathbb R^2:\;x_2>0\}$. \end{defn} In order to allow for explicit computations we assume that $P_{k,\ell}$ takes values in the subspace of $\ell$-vectors. Note that this assumption implies that $P_{k,\ell}$ is two-sided monogenic. Indeed, from $\partial_{\underline x}P_{k,\ell}=0$ and using (\ref{revconj}) we obtain \[0=\overline{P_{k,\ell}}\partial_{\underline x}=(-1)^{\frac{\ell(\ell+1)}{2}}P_{k,\ell}\partial_{\underline x}.\] It thus follows that $P_{k,\ell}\partial_{\underline x}=0$. The consideration of functions (\ref{AxialTSMF}) leads to a system of first-order partial differential equations with variable coefficients (see \cite{DSo}). \begin{prop}\label{caract1} A function is axial two-sided monogenic if and only if $C=B$ and \begin{equation}\label{Veq2sided} \left\{\begin{aligned} \partial_{x_0}A-r\partial_rB&=\left(2k+m-\mu_{\ell}\right)B\\ \partial_{x_0}B+\frac{1}{r}\partial_rA&=\mu_{\ell}D\\ \partial_{x_0}B-r\partial_rD&=(2k+m+2)D\\ \partial_{x_0}D+\frac{1}{r}\partial_rB&=0, \end{aligned}\right. \end{equation} where $\mu_{\ell}=(-1)^{\ell}(2\ell-m)$. \end{prop} In this paper we study axial two-sided monogenic functions in a neighbourhood of the origin. Each such function admits a Taylor series decomposition in terms of two-sided monogenic polynomials that are of axial type. In Section \ref{secc2} we give a characterization of such two-sided monogenic polynomials in terms of the Cauchy-Kowalevski extension theorem. In particular we characterize those polynomials for which the CK-extension will be axial two-sided monogenic and prove that this class of polynomials spans the space of all polynomials two-sided monogenics. In Section \ref{secc3} we consider two-sided monogenic plane waves. They depend on a parameter $\underline t\in S^{m-1}$ and after integrating over the unit sphere $S^{m-1}$ and applying Funk-Hecke's formula one obtains axial two-sided monogenics. We show that all polynomial axial two-sided monogenics may be obtained as integrals of such plane waves. We also construct axial two-sided monogenics that are expressed in terms of Bessel functions. 
In the final Section \ref{secc4} we start from the simple observation that if \[f(x_0,\underline x)=\left(M(x_0,r)+\displaystyle{\frac{\underline x}{r}}\,N(x_0,r)\right)P_{k,\ell}(\underline x)\] is axial left monogenic, then $f(x_0,\underline x)(\partial_{x_0}-\partial_{\underline x})$ is axial two-sided monogenic. We prove that all axial two-sided monogenics may locally be obtained in this way. So we have several methods to obtain polynomials solutions. Of course one can also consider axial two-sided monogenics in more general domains with possible singularities on the axis or in the origin. It remains to be studied how such solutions might be obtained from the methods exposed here. A method for obtaining polynomial solutions to the Hodge-de Rham system was obtained in \cite{DLaS}. Although the Hodge-de Rham system can be seen as a two-sided monogenic system with respect to the Dirac operator $\partial_{\underline x}$, the authors do not use Vekua systems (see \cite{DSo}), Bessel functions and plane wave integrals. \section{Homogeneous two-sided monogenic polynomials in $\mathbb R^{m+1}$}\label{secc2} The aim of this section is to prove an analogue of the decomposition (\ref{maindecomp}) for the case of two-sided monogenic homogeneous polynomials in $\mathbb R^{m+1}$. We begin by observing that \[e_je_Ae_j=\left\{\begin{array}{ll}(-1)^{\vert A\vert}e_A&\text{for}\quad j\in A,\\(-1)^{\vert A\vert+1}e_A&\text{for}\quad j\notin A,\end{array}\right.\] which clearly yields $\sum_{j=1}^me_je_Ae_j=(-1)^{\vert A\vert}(2\vert A\vert-m)e_A$. Therefore for every $a\in\mathbb R_{0,m}^{(\ell)}$ the following equality holds \begin{equation}\label{trickyeq} \sum_{j=1}^me_jae_j=\mu_{\ell}a,\quad\mu_{\ell}=(-1)^{\ell}(2\ell-m). \end{equation} The fact that polynomial $P_{k,\ell}(\underline x)$ in Definition \ref{a2sidedm} is two-sided monogenic remains valid for every left monogenic function $F(\underline x)$ with values in $\mathbb R_{0,m}^{(\ell)}$. We can say even more: $F(\underline x)$ is two-sided monogenic if and only if $[F(\underline x)]_{\ell}$ is left monogenic for $\ell=0,\dots,m$ (see e.g. \cite{ABoDS}). For the sake of completeness we include a proof here. \begin{prop}\label{deco2sided} Consider the multivector decomposition of function $F(\underline x)$, i.e. \[F=\sum_{\ell=0}^m[F]_{\ell}.\] Then $F$ is two-sided monogenic if and only if each $[F]_{\ell}$ is left monogenic. \end{prop} \begin{proof} We have already seen that the condition is sufficient so we have to prove only the necessity. Put $F_{\ell}=[F]_{\ell}$. Observe that $\partial_{\underline x}F_{\ell}$ decomposes into a $(\ell-1)$-vector and a $(\ell+1)$-vector, i.e. \[\partial_{\underline x}F_{\ell}=\left[\partial_{\underline x}F_{\ell}\right]_{\ell-1}+\left[\partial_{\underline x}F_{\ell}\right]_{\ell+1}.\] Hence $F$ satisfies $\partial_{\underline x}F=0$ if and only if \begin{equation}\label{condizq} \left[\partial_{\underline x}F_{\ell-1}\right]_{\ell}+\left[\partial_{\underline x}F_{\ell+1}\right]_{\ell}=0,\quad\ell=0,\dots,m, \end{equation} with $F_{-1}=F_{m+1}=0$. 
Similarly, $F$ is right monogenic if and only if \[\left[F_{\ell-1}\partial_{\underline x}\right]_{\ell}+\left[F_{\ell+1}\partial_{\underline x}\right]_{\ell}=0,\quad\ell=0,\dots,m,\] or equivalently \begin{equation}\label{condder} \left[\partial_{\underline x}F_{\ell-1}\right]_{\ell}-\left[\partial_{\underline x}F_{\ell+1}\right]_{\ell}=0,\quad\ell=0,\dots,m, \end{equation} where we have used the identities \[\left[F_{\ell-1}\partial_{\underline x}\right]_{\ell}=(-1)^{\ell-1}\left[\partial_{\underline x}F_{\ell-1}\right]_{\ell},\quad\left[F_{\ell+1}\partial_{\underline x}\right]_{\ell}=(-1)^{\ell}\left[\partial_{\underline x}F_{\ell+1}\right]_{\ell}.\] It follows from (\ref{condizq}) and (\ref{condder}) that $\left[\partial_{\underline x}F_{\ell-1}\right]_{\ell}=\left[\partial_{\underline x}F_{\ell+1}\right]_{\ell}=0$. This clearly ensures that each $F_{\ell}$ is left monogenic. \end{proof} \begin{rem} The scalar part $[F]_{0}$ and the pseudoscalar part $[F]_{m}$ of a two-sided monogenic function defined in an open connected subset of $\,\mathbb R^m$ are constants. \end{rem} In what follows, we recall some essential identities. Let $A$, $B$, $C$, $D$ and $P_{k,\ell}$ be as in Definition \ref{a2sidedm}. It is easily seen that \[\partial_{\underline x}A=\sum_{j=1}^me_j\partial_{x_j}A=\sum_{j=1}^me_j(\partial_rA)(\partial_{x_j}r)=\frac{\partial_rA}{r}\,\underline x\] and therefore \begin{equation}\label{ident1} \partial_{\underline x}\big(AP_{k,\ell}\big)=(\partial_{\underline x}A)P_{k,\ell}+A\partial_{\underline x}P_{k,\ell}=\frac{\partial_rA}{r}\underline xP_{k,\ell}. \end{equation} Using the identity $\partial_{\underline x}(\underline xf)=-mf-2\sum_{j=1}^mx_j\partial_{x_j}f-\underline x\partial_{\underline x}f$ and Euler's theorem for homogeneous functions, we also obtain that \begin{multline}\label{ident2} \partial_{\underline x}\big(B\underline xP_{k,\ell}\big)=(\partial_rB)\frac{\underline x^2}{r}P_{k,\ell}-B\Big(mP_{k,\ell}+2\sum_{j=1}^mx_j\partial_{x_j}P_{k,\ell}+\underline x\partial_{\underline x}P_{k,\ell}\Big)\\ =-\big((2k+m)B+r\partial_rB\big)P_{k,\ell}. \end{multline} On account of (\ref{trickyeq}) we get \[\partial_{\underline x}\big(P_{k,\ell}\underline x\big)=\left(\partial_{\underline x}P_{k,\ell}\right)\underline x+\sum_{j=1}^me_jP_{k,\ell}(\partial_{x_j}\underline x)=\mu_{\ell}P_{k,\ell}.\] This gives \begin{align} \partial_{\underline x}\big(CP_{k,\ell}\underline x\big)&=\mu_{\ell}CP_{k,\ell}+\frac{\partial_rC}{r}\underline xP_{k,\ell}\underline x,\\ \partial_{\underline x}\big(D\underline xP_{k,\ell}\underline x\big)&=-\mu_{\ell}D\underline xP_{k,\ell}-\big((2k+m+2)D+r\partial_rD\big)P_{k,\ell}\underline x.\label{ident3-4} \end{align} In the same way we can deduce identities for \[\big(AP_{k,\ell}\big)\partial_{\underline x},\quad\big(B\underline xP_{k,\ell}\big)\partial_{\underline x},\quad\big(CP_{k,\ell}\underline x\big)\partial_{\underline x}\quad\text{and}\quad\big(D\underline xP_{k,\ell}\underline x\big)\partial_{\underline x}.\] \begin{lem}\label{lemfund} Assume that $R_n(\underline x),S_n(\underline x)\in\mathsf{M}_{l}(n)\cap\mathsf{M}_{r}(n)$ for $n=0,\dots,k$ and let $S_k=0$. 
If \begin{equation}\label{FraIguald} \sum_{\substack{n=0\\n\;{\rm even}}}^k\left(\vert\underline x\vert^{n}R_{k-n}+\vert\underline x\vert^{n-2}\underline xS_{k-n}\underline x\right)+\sum_{\substack{n=1\\n\;{\rm odd}}}^k\vert\underline x\vert^{n-1}\left(\underline xR_{k-n}+S_{k-n}\underline x\right)=0, \end{equation} then all polynomials $R_n,S_n$ are identically equal to zero, except possibly $R_0$ and $S_0$. More precisely \begin{alignat*}{2} R_n&=S_n=0,&\quad&n=1,\dots,k,\\ [R_0]_{\ell}&=[S_0]_{\ell}=0,&\quad&\ell=1,\dots,m-1,\\ [R_0]_{0}&=(-1)^k[S_0]_{0},&\quad&[R_0]_{m}=(-1)^{m+k-1}[S_0]_{m}. \end{alignat*} \end{lem} \begin{proof} We shall prove the assertion by induction. When $k=1$ we have \[R_1+\underline xR_0+S_0\underline x=0,\] from which we obtain \begin{align*} 0&=\partial_{\underline x}(R_1+\underline xR_0+S_0\underline x)=-mR_0+\sum_{\ell=0}^m\mu_{\ell}[S_0]_{\ell},\\ 0&=(R_1+\underline xR_0+S_0\underline x)\partial_{\underline x}=\sum_{\ell=0}^m\mu_{\ell}[R_0]_{\ell}-mS_0 \end{align*} and hence \begin{equation*} \left\{\begin{array}{ll}m[R_0]_{\ell}-\mu_{\ell}[S_0]_{\ell}&=0\\\mu_{\ell}[R_0]_{\ell}-m[S_0]_{\ell}&=0.\end{array}\right. \end{equation*} It thus follows that \begin{alignat*}{2} [R_0]_{\ell}&=[S_0]_{\ell}=0,&\quad&\ell=1,\dots,m-1,\\ [R_0]_{0}&=-[S_0]_{0},&\quad&[R_0]_{m}=(-1)^{m}[S_0]_{m}, \end{alignat*} showing also that $\underline xR_0+S_0\underline x=0$ and therefore $R_1=0$. The statement is then true for $k=1$. Now we proceed to show that if the assertion holds for some positive integer $k\ge1$, then it also holds $k+1$. First, note that for $k+1$ equality (\ref{FraIguald}) may be rewritten as \[R_{k+1}+\sum_{\substack{n=0\\n\;\text{even}}}^k\vert\underline x\vert^{n}\left(\underline xR_{k-n}+S_{k-n}\underline x\right)+\sum_{\substack{n=1\\n\;\text{odd}}}^k\left(\vert\underline x\vert^{n+1}R_{k-n}+\vert\underline x\vert^{n-1}\underline xS_{k-n}\underline x\right)=0.\] Letting the Dirac operator $\partial_{\underline x}$ act from the left on the last equality, we obtain \begin{multline*} \sum_{\substack{n=0\\n\;\text{even}}}^k\bigg(\vert\underline x\vert^{n}\sum_{\ell}\Big(\mu_{\ell}[S_{k-n}]_{\ell}-(2k+m-n)[R_{k-n}]_{\ell}\Big)+n\vert\underline x\vert^{n-2}\underline xS_{k-n}\underline x\bigg)\\ +\sum_{\substack{n=1\\n\;\text{odd}}}^k\vert\underline x\vert^{n-1}\bigg(\underline x\sum_{\ell}\Big((n+1)[R_{k-n}]_{\ell}-\mu_{\ell}[S_{k-n}]_{\ell}\Big)-(2k+m-n+1)S_{k-n}\underline x\bigg)=0, \end{multline*} where we have used identities (\ref{ident1})-(\ref{ident3-4}). On account of Proposition \ref{deco2sided} and since we have assumed that the assertion is true for $k$, it easily follows from the last equality that \begin{equation}\label{twinid} (2k+m)[R_k]_{\ell}-\mu_{\ell}[S_k]_{\ell}=0 \end{equation} and \begin{alignat*}{2} R_n&=S_n=0,&\quad&n=1,\dots,k-1,\\ [R_0]_{\ell}&=[S_0]_{\ell}=0,&\quad&\ell=1,\dots,m-1,\\ [R_0]_{0}&=(-1)^{k+1}[S_0]_{0},&\quad&[R_0]_{m}=(-1)^{m+k}[S_0]_{m}. \end{alignat*} These equalities imply that $R_{k+1}+\underline x R_k+S_k\underline x=0$. If we now let $\partial_{\underline x}$ act from the right, then we get \[\mu_{\ell}[R_k]_{\ell}-(2k+m)[S_k]_{\ell}=0,\] which together with (\ref{twinid}) clearly implies that $[R_k]_{\ell}=[S_k]_{\ell}=0$ and hence $R_{k+1}=R_k=S_k=0$. \end{proof} We next recall two fundamental decompositions for homogeneous polynomials. 
The first one is the classical Fischer decomposition in terms of harmonic homogeneous polynomials while the second one is given using two-sided monogenic homogeneous polynomials (see e.g. \cite{DSS}). \begin{thm}[Fischer decompositions]\label{Fisdecomp} Let $\mathsf{P}(k)$ be the set of all homogeneous polynomials of degree $k$ in $\mathbb R^m$. By $\mathsf{H}(k)$ we denote the polynomials in $\mathsf{P}(k)$ which are harmonic. If $P_k(\underline x)\in\mathsf{P}(k)$, then the following two decompositions hold: \begin{align*} P_k&=H_k+\vert\underline x\vert^2P_{k-2},\quad H_k\in\mathsf{H}(k),\;P_{k-2}\in\mathsf{P}(k-2),\\ P_k&=M_k+\underline xP_{k-1}+Q_{k-1}\underline x,\quad M_k\in\mathsf{M}_{l}(k)\cap\mathsf{M}_{r}(k),\;P_{k-1},Q_{k-1}\in\mathsf{P}(k-1). \end{align*} \end{thm} \noindent Before proving the main result of the section it is useful to notice the following. \begin{rem} An analytic function $g(\underline x)$ has a two-sided monogenic extension if and only if it satisfies the condition $\partial_{\underline x}g=g\partial_{\underline x}$. Indeed, if $f(x_0,\underline x)$ is a two-sided monogenic extension of $g$, then from {\rm(}\ref{TSidedEq}{\rm)} it follows that $\partial_{\underline x}f=f\partial_{\underline x}$ and hence $\partial_{\underline x}g=g\partial_{\underline x}$. Finally, observe that this condition implies that $\mathsf{CK}[g(\underline x)]$ is two-sided monogenic. \end{rem} \begin{thm}\label{desc2sided} Suppose that $M_k(x_0,\underline x)$ is a two-sided monogenic homogeneous polynomial of degree $k$ in $\mathbb R^{m+1}$. Then there exist polynomials $S_n(\underline x)\in\mathsf{M}_{l}(n)\cap\mathsf{M}_{r}(n)$, $n=0,\dots,k$, such that \begin{multline*} M_k(x_0,\underline x)=S_k(\underline x)+\sum_{\substack{n=1\\n\;{\rm odd}}}^k\mathsf{CK}\Big[\vert\underline x\vert^{n-1}\big(\underline xS_{k-n}(\underline x)+S_{k-n}(\underline x)\underline x\big)\Big](x_0,\underline x)\\ +\sum_{\substack{n=2\\n\;{\rm even}}}^k\sum_{\ell}\mathsf{CK}\Big[\lambda_{n,\ell}\vert\underline x\vert^{n}[S_{k-n}(\underline x)]_{\ell}+\vert\underline x\vert^{n-2}\underline x[S_{k-n}(\underline x)]_{\ell}\underline x\Big](x_0,\underline x), \end{multline*} where $\lambda_{n,\ell}=-\displaystyle{\frac{(2k+m-n-\mu_{\ell})}{n}}$. \end{thm} \begin{proof} By Theorem \ref{CKextThm} we have that $M_k(x_0,\underline x)=\mathsf{CK}[M_k(0,\underline x)](x_0,\underline x)$. As $M_k(0,\underline x)\in\mathsf{P}(k)$ it follows from the second Fischer decomposition of Theorem \ref{Fisdecomp} that \[M_k(0,\underline x)=\sum_{n_1=0}^k\sum_{n_2=0}^{n_1}\underline x^{n_1-n_2}M_{k-n_1,n_2}(\underline x)\underline x^{n_2},\] where $M_{k-n_1,n_2}(\underline x)\in\mathsf{M}_{l}(k-n_1)\cap\mathsf{M}_{r}(k-n_1)$. 
Observe that $\underline x^{n_1-n_2}M_{k-n_1,n_2}\underline x^{n_2}$ may be rewritten as \[(-1)^{\frac{n_1}{2}}\vert\underline x\vert^{n_1}M_{k-n_1,n_2}\quad\text{or}\quad(-1)^{\frac{n_1-2}{2}}\vert\underline x\vert^{n_1-2}\underline xM_{k-n_1,n_2}\underline x,\] for $n_1$ even, while for $n_1$ odd $\underline x^{n_1-n_2}M_{k-n_1,n_2}\underline x^{n_2}$ equals \[(-1)^{\frac{n_1-1}{2}}\vert\underline x\vert^{n_1-1}\underline xM_{k-n_1,n_2}\quad\text{or}\quad(-1)^{\frac{n_1-1}{2}}\vert\underline x\vert^{n_1-1}M_{k-n_1,n_2}\underline x.\] Therefore, there exist $R_n(\underline x),S_n(\underline x)\in\mathsf{M}_{l}(n)\cap\mathsf{M}_{r}(n)$ so that \begin{multline*} M_k(0,\underline x)=R_k(\underline x)+\sum_{\substack{n=1\\n\;\text{odd}}}^k\vert\underline x\vert^{n-1}\big(\underline xR_{k-n}(\underline x)+S_{k-n}(\underline x)\underline x\big)\\ +\sum_{\substack{n=2\\n\;\text{even}}}^k\big(\vert\underline x\vert^{n}R_{k-n}(\underline x)+\vert\underline x\vert^{n-2}\underline xS_{k-n}(\underline x)\underline x\big). \end{multline*} Note that $M_k(0,\underline x)$ must satisfy the condition $\partial_{\underline x}M_k(0,\underline x)=M_k(0,\underline x)\partial_{\underline x}$ since $M_k(x_0,\underline x)$ is two-sided monogenic. We thus get \begin{multline*} \sum_{\substack{n=0\\n\;\text{even}}}^{k-1}\bigg(\vert\underline x\vert^{n}\sum_{\ell}a_{n,\ell}\Big([S_{k-n-1}]_{\ell}-[R_{k-n-1}]_{\ell}\Big)+n\vert\underline x\vert^{n-2}\underline x\Big(S_{k-n-1}-R_{k-n-1}\Big)\underline x\bigg)\\ +\sum_{\substack{n=1\\n\;\text{odd}}}^{k-1}\vert\underline x\vert^{n-1}\bigg(\underline x\sum_{\ell}\Big((n+1)[R_{k-n-1}]_{\ell}+b_{n,\ell}[S_{k-n-1}]_{\ell}\Big)\\ -\sum_{\ell}\Big((n+1)[R_{k-n-1}]_{\ell}+b_{n,\ell}[S_{k-n-1}]_{\ell}\Big)\underline x\bigg)=0 \end{multline*} where $a_{n,\ell}=2k+m+\mu_{\ell}-n-2$ and $b_{n,\ell}=2k+m-\mu_{\ell}-n-1$. Lemma \ref{lemfund} now yields \begin{alignat*}{2} R_{k-n}&=S_{k-n},&\quad& n\;\text{odd}\\ [R_{k-n}]_{\ell}&=\lambda_{n,\ell}[S_{k-n}]_{\ell},&\quad& n\;\text{even}, \end{alignat*} for $n=1,\dots,k-1$. These relations may also be assumed in the case $n=k$. This leads to the desired result. \end{proof} \begin{cor}\label{CKxPx} Let $k$ and $n$ denote non-negative integers. Every two-sided monogenic homogeneous polynomial in $\mathbb R^{m+1}$ can always be written as a finite sum of axial two-sided monogenic polynomials of the form \begin{equation}\label{buildblocks2sided} \mathsf{CK}\big[\alpha_{n,\ell}\vert\underline x\vert^{2n}P_{k,\ell}(\underline x)+\vert\underline x\vert^{2n-2}\underline xP_{k,\ell}(\underline x)\underline x\big],\quad\mathsf{CK}\big[\vert\underline x\vert^{2n}\left(\underline xP_{k,\ell}(\underline x)+P_{k,\ell}(\underline x)\underline x\right)\big], \end{equation} where $P_{k,\ell}$ is an $\mathbb R_{0,m}^{(\ell)}$-valued polynomial belonging to $\mathsf{M}_{l}(k)$ $(0\le\ell\le m)$ and \[\alpha_{n,\ell}=\displaystyle{-\frac{2k+2n+m-\mu_{\ell}}{2n}}.\] \end{cor} \begin{proof} Observe that Theorem \ref{desc2sided} actually shows that any two-sided monogenic homogeneous polynomial in $\mathbb R^{m+1}$ can be decomposed as a finite sum of left monogenic polynomials of the form (\ref{buildblocks2sided}). These polynomials are in fact two-sided monogenic, since their restrictions to $\mathbb R^{m}$ satisfy the condition $\partial_{\underline x}g=g\partial_{\underline x}$.
Finally, with the help of identities (\ref{ident1})-(\ref{ident3-4}), it is easily seen that they are of the form (\ref{AxialTSMF}) and hence are axial two-sided monogenic polynomials. \end{proof} \section{Monogenic plane waves leading to axial two-sided monogenics}\label{secc3} Let $h(x,y)=u(x,y)+iv(x,y)$ be a holomorphic function and assume that $\underline t\in S^{m-1}$ is a fixed unit vector. It is easy to verify that \[(\partial_{x_0}+\partial_{\underline x})h(x_0,\theta)=\partial_{x_0}h(x_0,\theta)+\underline t\,\partial_{\theta}h(x_0,\theta)=(1+i\underline t)\partial_{x_0}h(x_0,\theta),\] where $\theta=\langle\underline x,\underline t\rangle$. Using now the fact that $1+i\underline t$ and $1-i\underline t$ are zero divisors, we get \[(\partial_{x_0}+\partial_{\underline x})\big((1-i\underline t)h(x_0,\theta)\big)=(1+i\underline t)(1-i\underline t)\partial_{x_0}h(x_0,\theta)=0,\] which implies that $(1-i\underline t)h(x_0,\theta)$ is a monogenic plane wave. Starting with these monogenic plane waves and using Funk-Hecke's formula we will be able to devise a method for constructing axial two-sided monogenic functions. For the reader's convenience we first recall: \begin{thm}[Funk-Hecke's formula \cite{Hoch}] Suppose that $\displaystyle{\int_{-1}^1\vert F(t)\vert(1-t^2)^{(m-3)/2}dt<\infty}$ and let $\underline\xi\in S^{m-1}$. If $Y_k(\underline x)$ is a spherical harmonic of degree $k$ in $\mathbb R^m$, then \[\int_{S^{m-1}}F(\langle\underline\xi,\underline\eta\rangle)Y_k(\underline\eta)dS(\underline\eta)=\sigma_{m-1}C_k(1)^{-1}Y_{k}(\underline\xi)\int_{-1}^1F(t)C_k(t)\left(1-t^2\right)^{(m-3)/2}dt,\] where $C_k(t)$ denotes the Gegenbauer polynomial $C^{\nu}_k(t)$ with $\nu=(m-2)/2$ and $\sigma_{m-1}$ is the surface area of the unit sphere $S^{m-2}$ in $\mathbb R^{m-1}$. \end{thm} Let $\Delta_{\underline x}=\sum_{j=1}^m\partial_{x_j}^2$ be the Laplacian in $\mathbb R^{m}$ and assume that $P_{k,\ell}$ is an $\mathbb R_{0,m}^{(\ell)}$-valued polynomial belonging to $\mathsf{M}_{l}(k)$. Applying the following identity \[\Delta_{\underline x}(fg)=(\Delta_{\underline x}f)g+2\sum_{j=1}^m(\partial_{x_j}f)(\partial_{x_j}g)+f(\Delta_{\underline x}g),\] one can easily check that polynomials $\underline xP_{k,\ell}, P_{k,\ell}\underline x$ are harmonic and that \begin{align*} \Delta_{\underline x}(\underline xP_{k,\ell}\underline x)&=2\partial_{\underline x}(P_{k,\ell}\underline x)=2\mu_{\ell}P_{k,\ell}\\ \Delta_{\underline x}\left(\vert\underline x\vert^2P_{k,\ell}\right)&=\left(\Delta_{\underline x}\vert\underline x\vert^2\right)P_{k,\ell}+4\sum_{j=1}^mx_j\partial_{x_j}P_{k,\ell}=2(2k+m)P_{k,\ell}. \end{align*} The last two equalities enable us to get the classical Fischer decomposition of $\underline xP_{k,\ell}\underline x$, namely: \begin{equation}\label{Fisch2term} \underline xP_{k,\ell}\underline x=\left(\underline xP_{k,\ell}\underline x-\vert\underline x\vert^2\frac{\mu_{\ell}}{2k+m}P_{k,\ell}\right)+\vert\underline x\vert^2\frac{\mu_{\ell}}{2k+m}P_{k,\ell}. 
\end{equation} \begin{thm}\label{PlaWavMeth} The function defined by \[I_h(x_0,\underline x)=\frac{1}{\sigma_{m-1}}\int_{S^{m-1}}h(x_0,\langle\underline x,\underline t\rangle)(1-i\underline t)P_{k,\ell}(\underline t)(1-i\underline t)dS(\underline t)\] is axial two-sided monogenic with \begin{multline*} A_h(x_0,r)=\frac{r^{-k}}{2k+m}\left((2k+m-\mu_{\ell})C_k(1)^{-1}\int_{-1}^1h(x_0,rt)C_k(t)\left(1-t^2\right)^{(m-3)/2}dt\right.\\ \left.+\mu_{\ell}\,C_{k+2}(1)^{-1}\int_{-1}^1h(x_0,rt)C_{k+2}(t)\left(1-t^2\right)^{(m-3)/2}dt\right), \end{multline*} \[B_h(x_0,r)=C_h(x_0,r)=-ir^{-k-1}C_{k+1}(1)^{-1}\int_{-1}^1h(x_0,rt)C_{k+1}(t)\left(1-t^2\right)^{(m-3)/2}dt,\] \[D_h(x_0,r)=-r^{-k-2}C_{k+2}(1)^{-1}\int_{-1}^1h(x_0,rt)C_{k+2}(t)\left(1-t^2\right)^{(m-3)/2}dt.\] \end{thm} \begin{proof} It is clear that for any $\underline t\in S^{m-1}$ the function $h(x_0,\langle\underline x,\underline t\rangle)(1-i\underline t)P_{k,\ell}(\underline t)(1-i\underline t)$ is two-sided monogenic and hence so is the function $I_h(x_0,\underline x)$. We thus only need to show that it may be written as \[I_h=A_hP_{k,\ell}+B_h\underline xP_{k,\ell}+C_hP_{k,\ell}\underline x+D_h\underline x P_{k,\ell}\underline x.\] In order to perform this task we must compute integrals of the form \[\frac{1}{\sigma_{m-1}}\int_{S^{m-1}}h(x_0,\langle\underline x,\underline t\rangle)F(\underline t)dS(\underline t),\] where $F(\underline t)$ can be equal to $P_{k,\ell}(\underline t)$, $\underline tP_{k,\ell}(\underline t)$, $P_{k,\ell}(\underline t)\underline t$ or $\underline tP_{k,\ell}(\underline t)\underline t$. These integrals shall be denoted by $I_1$, $I_2$, $I_3$ and $I_4$. We then have that \[I_h=I_1-iI_2-iI_3-I_4.\] The first three integrals may be computed directly by applying Funk-Hecke's formula since $P_{k,\ell}(\underline t)$, $\underline tP_{k,\ell}(\underline t)$ and $P_{k,\ell}(\underline t)\underline t$ are harmonic polynomials. Indeed, writing $\underline x$ in polar coordinates, i.e. $\underline x=r\underline\omega$, we obtain \begin{align*} I_1&=P_{k,\ell}(\underline\omega)C_k(1)^{-1}\int_{-1}^1h(x_0,rt)C_k(t)\left(1-t^2\right)^{(m-3)/2}dt\\ I_2&=\underline\omega P_{k,\ell}(\underline\omega)C_{k+1}(1)^{-1}\int_{-1}^1h(x_0,rt)C_{k+1}(t)\left(1-t^2\right)^{(m-3)/2}dt,\\ I_3&=P_{k,\ell}(\underline\omega)\underline\omega C_{k+1}(1)^{-1}\int_{-1}^1h(x_0,rt)C_{k+1}(t)\left(1-t^2\right)^{(m-3)/2}dt. \end{align*} Finally, from (\ref{Fisch2term}) and using Funk-Hecke's formula we also get \begin{multline*} I_4=\left(\underline\omega P_{k,\ell}(\underline\omega)\underline\omega-\frac{\mu_{\ell}}{2k+m}P_{k,\ell}(\underline\omega)\right)C_{k+2}(1)^{-1}\\ \times\int_{-1}^1h(x_0,rt)C_{k+2}(t)\left(1-t^2\right)^{(m-3)/2}dt\\ +\frac{\mu_{\ell}}{2k+m}P_{k,\ell}(\underline\omega)C_k(1)^{-1}\int_{-1}^1h(x_0,rt)C_k(t)\left(1-t^2\right)^{(m-3)/2}dt, \end{multline*} which completes the proof. \end{proof} \noindent In the next examples we compute $I_h$ for the cases $h(x,y)=e^{x+iy}$ and $h(x,y)=(x+iy)^n$. \noindent {\bf Example 1.} An axial two-sided monogenic function of exponential type was obtained in \cite{DSo} by assuming the existence of a solution of (\ref{Veq2sided}) of the form \[A(x_0,r)=e^{x_0}a(r),\;B(x_0,r)=e^{x_0}b(r),\;D(x_0,r)=e^{x_0}d(r).\] This assumption led to an ordinary differential equation of second order for $b(r)$ which could be solved by means of the Bessel function of the first kind $J_{k+m/2}(r)$, namely \begin{equation*} b(r)=r^{-k-\frac{m}{2}}J_{k+\frac{m}{2}}(r). 
\end{equation*} From this it easily follows that \begin{align*} d(r)&=r^{-k-\frac{m}{2}-1}J_{k+\frac{m}{2}+1}(r),\\ a(r)&=\big(2k+m-\mu_{\ell}\big)b(r)-r^2d(r). \end{align*} We will now show that this particular solution of system (\ref{Veq2sided}) can be derived from Theorem \ref{PlaWavMeth} by assuming $h(x,y)=e^{x+iy}$. In order to do this we shall use the following equalities \begin{equation*} C_k^\nu(1)=\frac{\Gamma(2\nu+k)}{k!\,\Gamma(2\nu)},\quad\Gamma\left(\frac{n}{2}\right)=\sqrt{\pi}\frac{(n-2)!!}{2^{(n-1)/2}}, \end{equation*} \[\int_{-1}^1e^{iat}C_k^\nu(t)\left(1-t^2\right)^{\nu-1/2}dt=\frac{\pi\,2^{1-\nu}i^k\Gamma(2\nu+k)}{k!\,\Gamma(\nu)}a^{-\nu}J_{k+\nu}(a),\] where $\Gamma$ denotes the Gamma function and $n!!$ the double factorial of $n$ (see e.g. \cite{GraRy}). It follows that \begin{multline*} r^{-k}C_k(1)^{-1}\int_{-1}^1e^{irt}C_k(t)\left(1-t^2\right)^{(m-3)/2}dt\\ =\sqrt{2\pi}\,(m-3)!!\,i^kr^{-(k+m/2-1)}J_{k+m/2-1}(r), \end{multline*} from which we immediately get \[B_h(x_0,r)=\sqrt{2\pi}\,(m-3)!!\,i^ke^{x_0}b(r),\quad D_h(x_0,r)=\sqrt{2\pi}\,(m-3)!!\,i^ke^{x_0}d(r).\] For computing $A_h(x_0,r)$ we also need the recurrence relation \[\frac{2\nu}{r}J_\nu(r)=J_{\nu-1}(r)+J_{\nu+1}(r)\] to obtain $A_h(x_0,r)=\sqrt{2\pi}\,(m-3)!!\,i^ke^{x_0}a(r)$. Therefore \begin{multline*} I_h(x_0,\underline x)=\sqrt{2\pi}\,(m-3)!!\,i^ke^{x_0}\Big(a(r)P_{k,\ell}(\underline x)+b(r)\underline xP_{k,\ell}(\underline x)\\ +b(r)P_{k,\ell}(\underline x)\underline x+d(r)\underline x P_{k,\ell}(\underline x)\underline x\Big) \end{multline*} for $h(x,y)=e^{x+iy}$. \noindent {\bf Example 2.} Two other interesting choices of $h$ are provided by the holomorphic functions \[h(x,y)=(x+iy)^{k+2n},\quad h(x,y)=(x+iy)^{k+2n+1}\] because they yield the basic axial two-sided monogenic polynomials (\ref{buildblocks2sided}). Let us first consider the case $h(x,y)=(x+iy)^{k+2n}$. Note that in this case $h(0,rt)C_{k+1}(t)$ is odd as a function of $t$ and therefore $B_h(0,r)=C_h(0,r)=0$. For the computation of $A_h(0,r)$ and $D_h(0,r)$ we use the following identity \[\int_{0}^1t^{k+2\rho}C_k^\nu(t)\left(1-t^2\right)^{\nu-1/2}dt=\frac{\Gamma(2\nu+k)\Gamma(2\rho+k+1)\Gamma\left(\nu+\frac{1}{2}\right)\Gamma\left(\rho+\frac{1}{2}\right)}{2^{k+1}\Gamma(2\nu)\Gamma(2\rho+1)\,k!\,\Gamma(k+\nu+\rho+1)}\] and we can conclude that \begin{multline*} I_h(x_0,\underline x)=\frac{(-1)^{n+1}\sqrt{2\pi}\,(k+2n)!(m-3)!!\,i^k}{(2n-2)!!(2k+2n+m)!!}\\ \times\mathsf{CK}\big[\alpha_{n,\ell}\vert\underline x\vert^{2n}P_{k,\ell}(\underline x)+\vert\underline x\vert^{2n-2}\underline xP_{k,\ell}(\underline x)\underline x\big](x_0,\underline x). \end{multline*} A similar analysis can be made for the case $h(x,y)=(x+iy)^{k+2n+1}$ to obtain \begin{multline*} I_h(x_0,\underline x)=\frac{(-1)^{n}\sqrt{2\pi}\,(k+2n+1)!(m-3)!!\,i^k}{(2n)!!(2k+2n+m)!!}\\ \times\mathsf{CK}\big[\vert\underline x\vert^{2n}\left(\underline xP_{k,\ell}(\underline x)+P_{k,\ell}(\underline x)\underline x\right)\big](x_0,\underline x). \end{multline*} \begin{rem} In view of Corollary \ref{CKxPx} it follows that every two-sided monogenic homogeneous polynomial in $\mathbb R^{m+1}$ can always be written as a finite sum of functions $I_h$ where $h(x,y)=(x+iy)^{k+2n}$ or $h(x,y)=(x+iy)^{k+2n+1}$. \end{rem} \section{A characterization in terms of derivatives of axial left monogenic functions}\label{secc4} Proposition \ref{caract1} gives a characterization of the axial two-sided monogenic functions.
The goal in this section is to offer an alternative description by showing the connection between these functions and the axial left monogenic functions. Suppose that $P_{k,\ell}$ is an $\mathbb R_{0,m}^{(\ell)}$-valued polynomial belonging to $\mathsf{M}_{l}(k)$. If \[\left(M(x_0,r)+\displaystyle{\frac{\underline x}{r}}\,N(x_0,r)\right)P_{k,\ell}(\underline x)\] is axial left monogenic, then \[\left[\left(M(x_0,r)+\frac{\underline x}{r}\,N(x_0,r)\right)P_{k,\ell}(\underline x)\right](\partial_{x_0}-\partial_{\underline x})\] is two-sided monogenic: it is left monogenic because $\partial_{x_0}+\partial_{\underline x}$ acting from the left commutes with $\partial_{x_0}-\partial_{\underline x}$ acting from the right, and it is right monogenic because applying $\partial_{x_0}-\partial_{\underline x}$ and then $\partial_{x_0}+\partial_{\underline x}$ from the right produces the Laplacian of a (necessarily harmonic) left monogenic function. This function is moreover of the form (\ref{AxialTSMF}) with \begin{equation}\label{rleft2sided} A=\partial_{x_0}M-\mu_{\ell}\frac{N}{r},\quad B=\frac{\partial_{x_0}N}{r},\quad C=-\frac{\partial_{r}M}{r},\quad D=-\frac{\partial_{r}\left(N/r\right)}{r} \end{equation} and hence is axial two-sided monogenic. Observe that $B=C$, which follows from the second equation of (\ref{VeEq}). It is natural to ask whether every axial two-sided monogenic function can be obtained in this way. \begin{thm} Let $F=AP_{k,\ell}+B\underline xP_{k,\ell}+CP_{k,\ell}\underline x+D\underline x P_{k,\ell}\underline x$ be an axial two-sided monogenic function defined in an open neighbourhood of \[\Omega=\left\{(x_0,\underline x)\in\mathbb R^{m+1}:\;(x_0,r)\in [a_1,b_1]\times[a_2,b_2]\subset\mathbb R^2,\;a_2>0\right\}.\] There exists an axial left monogenic function $\left(M+\displaystyle{\frac{\underline x}{r}}\,N\right)P_{k,\ell}$ such that \[F(x_0,\underline x)-\left[\left(M(x_0,r)+\frac{\underline x}{r}\,N(x_0,r)\right)P_{k,\ell}(\underline x)\right](\partial_{x_0}-\partial_{\underline x})=cP_{k,\ell}(\underline x),\] where $c$ is a real constant. \end{thm} \begin{proof} On account of (\ref{rleft2sided}) we need to find solutions $M$, $N$ to the system \begin{align*} \partial_{r}M&=-rB\\ \partial_{r}\left(N/r\right)&=-rD \end{align*} that satisfy the Vekua system (\ref{VeEq}). Thus we have \[M(x_0,r)=-\int_{a_2}^rtB(x_0,t)dt+\alpha(x_0),\] \[N(x_0,r)=r\left(-\int_{a_2}^rtD(x_0,t)dt+\beta(x_0)\right).\] Using the last two equations of (\ref{Veq2sided}) we obtain \begin{align*} \partial_{x_0}M=-\int_{a_2}^r\left(t^2\partial_{t}D(x_0,t)+(2k+m+2)tD(x_0,t)\right)dt+\alpha^{\prime}(x_0)\\ =-(2k+m)\int_{a_2}^rtD(x_0,t)dt-\big(t^2D(x_0,t)\big)\big\vert_{t=a_2}^{t=r}+\alpha^{\prime}(x_0), \end{align*} \[\partial_{x_0}N=r\left(\int_{a_2}^r\partial_{t}B(x_0,t)dt+\beta^{\prime}(x_0)\right)=r\left(B(x_0,t)\big\vert_{t=a_2}^{t=r}+\beta^{\prime}(x_0)\right).\] Hence \[\partial_{x_0}M-\partial_{r}N=\frac{2k+m-1}{r}N+\alpha^{\prime}(x_0)-(2k+m)\beta(x_0)+a_2^2D(x_0,a_2)\] and \[\partial_rM+\partial_{x_0}N=r\left(\beta^{\prime}(x_0)-B(x_0,a_2)\right).\] Therefore, $M$ and $N$ satisfy the Vekua system (\ref{VeEq}) if and only if \begin{align*} \alpha^{\prime}(x_0)-(2k+m)\beta(x_0)&=-a_2^2D(x_0,a_2)\\ \beta^{\prime}(x_0)&=B(x_0,a_2). \end{align*} Thus, it is possible to find an axial left monogenic function $\left(M+\displaystyle{\frac{\underline x}{r}}\,N\right)P_{k,\ell}$ such that \[F(x_0,\underline x)-\left[\left(M(x_0,r)+\frac{\underline x}{r}\,N(x_0,r)\right)P_{k,\ell}(\underline x)\right](\partial_{x_0}-\partial_{\underline x})=c(x_0,r)P_{k,\ell}(\underline x),\] where $c(x_0,r)$ is an $\mathbb R$-valued function. The monogenicity of the left-hand side of the last equality implies that the function $c(x_0,r)$ is constant. \end{proof} \subsection*{Acknowledgments} D.
Pe\~na Pe\~na acknowledges the support of a Postdoctoral Fellowship given by Istituto Nazionale di Alta Matematica (INdAM) and cofunded by Marie Curie actions. \end{document}
\begin{document}
\title[Categorification of cell modules]{Categorification of (induced) cell modules and the rough structure of generalized Verma modules}
\author{Volodymyr Mazorchuk}
\address{V. M.: Department of Mathematics, Uppsala University (Sweden).}
\email{mazor\symbol{64}math.uu.se}
\thanks{The first author was supported by STINT, the Royal Swedish Academy of Sciences, and the Swedish Research Council; the second author was supported by EPSRC grant 32199.}
\author{Catharina Stroppel}
\address{C. S.: Department of Mathematics, University of Glasgow (United Kingdom).}
\email{c.stroppel\symbol{64}maths.gla.ac.uk}
\numberwithin{equation}{section}
\newtheorem{proposition}{Proposition}
\newtheorem{lemma}[proposition]{Lemma}
\newtheorem{corollary}[proposition]{Corollary}
\newtheorem{theorem}[proposition]{Theorem}
\newtheorem{definition}[proposition]{Definition}
\newtheorem{conjecture}[proposition]{Conjecture}
\newtheorem{example}[proposition]{Example}
\newtheorem{remark}[proposition]{Remark}
\newtheorem{theoremintro}{Theorem}
\renewcommand{\thetheoremintro}{\Roman{theoremintro}}
\font\sc=rsfs10 at 12 pt \font\scs=rsfs10 at 10 pt \font\scb=rsfs10 at 16 pt \font\scbb=rsfs10 at 18 pt
% [A block of shorthand macro definitions from the original preamble is garbled here: their names were destroyed in extraction (each now reads like \newcommand{X}{X}). They abbreviate standard notation such as \lambda, \operatorname, \oplus, \otimes, \longrightarrow, \twoheadrightarrow, \mathbb{C}, \mathbb{R}, \mathbb{N}, \mathbb{Z}, \mathbb{Q}, \mathbb{F}, \mathbb{S}, \mathfrak{g}, \mathfrak{p}, \mathfrak{h}, \mathfrak{n}, \mathfrak{a}, \mathfrak{b}, \mathbf{a}, \mathbf{m}, \mathbf{n}, \mathcal{A}, \mathcal{B}, \mathcal{L}, \mathcal{U}, \mathcal{F}, \mathcal{O}, \mathscr{C}, \mathscr{S}, \mathcal, \mathfrak, \mathbb, \operatorname{Ext}, \operatorname{End}, \operatorname{add}, \operatorname{Ann}, \textrm{Hom}, \mathrm{-mod}, \mathrm{-gmod}, \mathfrak{sl}(n).]
\def\drawing#1{\begin{center} \epsfig{file=#1} \end{center}}
\def\yesnocases#1#2#3#4{\left\{ \begin{array}{ll} #1 & #2 \\ #3 & #4 \end{array} \right. }
\newcommand{\define}{\stackrel{\mbox{\scriptsize{def}}}{=}}
\def\hsm{\hspace{0.05in}}
\begin{abstract}
This paper presents categorifications of (right) cell modules and induced cell modules for Hecke algebras of finite Weyl groups. In type $A$ we show that these categorifications depend only on the isomorphism class of the cell module, not on the cell itself. Our main application is multiplicity formulas for parabolically induced modules over a reductive Lie algebra of type $A$, which finally determines the so-called rough structure of generalized Verma modules. On the way we present several categorification results and give a positive answer to Kostant's problem from \cite{Jo} in many cases. We also give a general setup of decategorification, precategorification and categorification.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
The Weyl group acts via (exact) translation functors on the principal block of the Bernstein-Gelfand-Gelfand category $\mathcal{O}$ associated with a semi-simple complex finite-dimensional Lie algebra, see \cite{BG}. On the level of the Grothendieck group, this action becomes the regular representation of the Weyl group. The nature of translation functors is such that they obviously preserve several classes of modules, for example projective, injective or tilting modules. This naturally leads to the question whether the isomorphism classes of such modules, considered as elements of the Grothendieck group, can be interpreted in terms of the representation theory of the Weyl group, in particular in terms of its regular representation. One of the most remarkable breakthrough results in the theory of semisimple complex Lie algebras is that such an interpretation actually exists. The connection is given by the so-called {\em Kazhdan-Lusztig theory}, which first `upgrades' the Weyl group to the corresponding Hecke algebra, and also the corresponding category $\mathcal{O}$ to its graded version, and then says that the isomorphism classes of the graded indecomposable projective modules in the regular block of the category $\mathcal{O}$ descend (on the level of the Grothendieck group) to what is now known as the {\em Kazhdan-Lusztig basis} of the Hecke algebra. The introduction of this Kazhdan-Lusztig basis together with the Kazhdan-Lusztig conjecture (\cite[Conjecture~1.5]{KLCoxeter}) was a milestone in combinatorial representation theory which finally turned the computation of the character of any simple highest weight module for a complex semisimple Lie algebra into a purely combinatorial task.
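As a minimal illustration (not needed in the sequel, and anticipating the normalization of the Kazhdan-Lusztig basis which is only fixed in Subsection~\ref{s25.2} below), consider the smallest case $W=S_2=\{e,s\}$. There one has
\[
\underline{H}_e=H_e,\qquad \underline{H}_s=H_s+vH_e,
\]
so the only non-trivial Kazhdan-Lusztig polynomial is $h_{e,s}=v$ and the base change from the standard basis $\{H_e,H_s\}$ to the Kazhdan-Lusztig basis $\{\underline{H}_e,\underline{H}_s\}$ is unitriangular. Under the categorification just mentioned, these two elements correspond to the classes of the two indecomposable projective modules in the (graded) principal block of the category $\mathcal{O}$ for $\mathfrak{sl}_2$.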
One main idea in this combinatorial representation theory showed up already before \cite{KLCoxeter}, namely the idea of (left or right) {\it cells} for finite Weyl group, in particular for the symmetric group. The latter was first studied by combinatorialists (see e.g. \cite{Knuth}) and afterwards introduced into representation theory (\cite{Jopreprint}, \cite{V}). A natural consequence of the theory of cells is the definition of a special class of modules for the Hecke algebra, namely the {\em cell modules}. In type $A$ these modules contribute an exhaustive list of all irreducible modules. For other types however, they are not irreducible in general. The first objective of the present paper is to give a categorical version of (right) cell modules. To each cell in the Weyl group $W$ we associate a certain quotient category of some subcategory of the category $\mathfrak{a}thcal{O}$ (of the corresponding semisimple Lie algebra $\mathfrak{a}thfrak{g}$) which is stable under the action of translation functors. The categories used for this categorification are indecomposable. When passing to the Grothendieck group we obtain the cell module corresponding to our chosen cell. In other words: we categorify cell modules for the Hecke algebra. Note that two different cells might have isomorphic cell modules. In type $A$ the isomorphism classes of cell modules are exactly the isomorphism classes of irreducible modules. We show that the categorical picture is the same: \begin{theoremintro}[Uniqueness theorem]\lbraceambdabel{thmintro1} Assume that $W$ is of type $A$. Then if two cell modules are isomorphic then their categorifications are equivalent. \end{theoremintro} We will make this equivalence concrete by giving an explicit functor which naturally commutes with the functorial action of the Hecke algebra. This is what we call the `uniqueness' of categorifications (Theorem~\rbraceef{thm6}). As a result, we therefore have to each right cell $\mathfrak{a}thbf{R}$ a categorification $\mathfrak{a}thscr{C}_{\mathfrak{a}thbf{R}}$ together with an equivalence $\Phi:\mathfrak{a}thscr{C}_{\mathfrak{a}thbf{R}_1}\rbraceightarrow\mathfrak{a}thscr{C}_{\mathfrak{a}thbf{R}_2}$ whenever the cell modules corresponding to $\mathfrak{a}thbf{R}_1$ and $\mathfrak{a}thbf{R}_2$ are isomorphic (i.e. $\mathfrak{a}thbf{R}_1$ and $\mathfrak{a}thbf{R}_2$ are in the same double cell). The Kazhdan-Lusztig cell theory equips the cell modules with a distinct basis which corresponds in the categorification to the isomorphism classes of indecomposable projective modules. Given a parabolic subgroup $W'$ of $W=S_n$, a right cell $\mathfrak{a}thbf{R}'$ of $W'$ and the corresponding cell module $\mathfrak{a}thscr{C}_{\mathfrak{a}thbf{R}'}$ of its Hecke algebra $\mathfrak{a}thds{H}(W')$ there is the induced cell module $\mathfrak{a}thscr{C}_{\mathfrak{a}thbf{R}'}\otimestimes_{\mathfrak{a}thds{H}(W')} \mathfrak{a}thds{H}(W)$. 
To these data we associate a certain category $\mathfrak{a}thscr{X}=\mathfrak{a}thscr{X} (W,W',\mathfrak{a}thbf{R}')$ of $\mathfrak{a}thfrak{g}$-modules (in fact a subcategory of the category $\mathcal{O}$) such that the following holds (for details see Theorem~\rbraceef{thm53}, Proposition~\rbraceef{cunique}, Theorem~\rbraceef{combinatorics}): \begin{theoremintro}\lbraceambdabel{thm2} \begin{enumerate}[(i)] \item\lbraceambdabel{thm2.1} The category $\mathfrak{a}thscr{X}$ is a categorification of $\mathfrak{a}thscr{C}_{\mathfrak{a}thbf{R}'}\otimestimes_{\mathfrak{a}thds{H}(W')}\mathfrak{a}thds{H}(W)$, with the $\mathfrak{a}thds{H}$-action given by translation functors. \item\lbraceambdabel{thm2.2} Up to equivalence $\mathfrak{a}thscr{X}$ only depends on the isomorphism class of $\mathfrak{a}thscr{C}_{\mathfrak{a}thbf{R}'}$, not on $\mathfrak{a}thbf{R}'$ itself. \item\lbraceambdabel{thm2.3} There is a combinatorial description of $\mathfrak{a}thscr{X}$ in terms of Kazhdan-Lusztig polynomials in the following sense: the module $\mathfrak{a}thscr{C}_{\mathfrak{a}thbf{R}'}\otimestimes_{\mathfrak{a}thds{H}(W')}\mathfrak{a}thds{H}(W)$ is equipped with four natural bases corresponding to four natural classes of modules in $\mathfrak{a}thscr{X}$. \end{enumerate} \end{theoremintro} A consequence of the (now proved) \cite[Conjecture 1.5]{KLCoxeter} is that the Kazhdan-Lusztig basis of the Hecke algebra turns the problem of finding multiplicities of composition factors of Verma modules into a purely combinatorial statement: the multiplicities are given by evaluating the corresponding Kazhdan-Lusztig polynomials. Verma modules are a special sort of induced modules obtained by inducing one-dimensional (irreducible) modules over a Borel subalgebra. In general, one would like to understand the structure of modules obtained by inducing from an arbitrary irreducible module over a parabolic subalgebra, ideally with a combinatorial description similar to the case of Verma modules. This is however a very difficult task because of at least two reasons: Firstly, there is no classification or reasonable understanding of simple modules for finite dimensional complex Lie algebras available (except for the Lie algebra $\mathfrak{a}thfrak{sl}_2$, see \cite{Bl}), hence the starting point for the induction process is not understood at all. Secondly, it might happen that the induced modules are of infinite length (due to a result of Stafford on existence of non-holonomic simple modules over the Weyl algebra and $U(\mathfrak{a}thfrak{sl}_2\times \mathfrak{a}thfrak{sl}_2)$, see \cite{Stafford}). Nevertheless, our paper goes a big step further in solving these problems. The principal idea is that we realize the induced module we are interested in, as a (proper) standard object in some category which is equivalent to some $\mathfrak{a}thscr{X}$ as above. Then the Kazhdan-Lusztig theory together with Theorem~\rbraceef{thm2}\eqref{thm2.3} provides the necessary combinatorics and as a result we can describe the so-called {\it rough structure} of parabolically induced arbitrary simple modules. One of the difficulties is actually to give a precise definition of what is meant by {\it rough structure} (this is the topic of the last section of the articles). In this introduction we just try to give the main idea. To do so let $\mathfrak{a}thfrak{g}$ be a Lie algebra with triangular decomposition. 
Let $\mathfrak{a}thfrak{p}$ be a parabolic subalgebra of $\mathfrak{a}thfrak{g}$, and $V$ a simple module over the reductive part of $\mathfrak{a}thfrak{p}$. Then $V$ trivially extends to a simple $\mathfrak{a}thfrak{p}$-module, and the corresponding induced module \begin{displaymath} \Delta(\mathfrak{a}thfrak{p},V)=U(\mathfrak{a}thfrak{g})\otimestimes_{U(\mathfrak{a}thfrak{p})}V \end{displaymath} is called a {\em generalized Verma module}. We want to describe the composition factors of generalized Verma modules. Our main result is the following statement (for details and notation see Section~\rbraceef{s9}, in particular Theorem~\rbraceef{s9.5-cor1}): \begin{theoremintro}\lbraceambdabel{thmintro3} Assume that the reductive part of $\mathfrak{a}thfrak{p}$ is of type $A$. For $X,Y\in \otimesperatorname{Irr}^{\mathfrak{a}thfrak{g}}\big(\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p},\otimesperatorname{Coker} (\otimesverline{N}\otimestimes E)\}_{\otimesperatorname{int}}\big)$ we have the following multiplicity formula in the category of $\mathfrak{a}thfrak{g}$-modules: \begin{equation}\lbraceambdabel{blabla} [\Delta(\mathfrak{a}thfrak{p},V_X):L(\mathfrak{a}thfrak{p},V_Y)]= [\Delta(\mathfrak{a}thfrak{p},V_{\mathfrak hat{\xi}(X)}): L(\mathfrak{a}thfrak{p},V_{\mathfrak hat{\xi}(Y)})]. \end{equation} \end{theoremintro} Here, the generalized Verma module $\Delta(\mathfrak{a}thfrak{p},V_X)$ is the one we are interested in, that means we want to describe the multiplicities of the left hand side of the equation \eqref{blabla}. On the other hand, $\Delta(\mathfrak{a}thfrak{p},V_{\mathfrak hat{\xi}(X)})$ is a generalized Verma module induced from a simple {\em highest weight} module, and hence is easier to understand. We will prove that the multiplicity on the right hand side is given by Kazhdan-Lusztig combinatorics of a certain category $\mathfrak{a}thscr{X}$ as in Theorem~\rbraceef{thm2} above. In fact, the module $\Delta(\mathfrak{a}thfrak{p},V_{\mathfrak hat{\xi}(X)})$ belongs to one of the four classes from Theorem~\rbraceef{thm2}\eqref{thm2.3}. Therefore, it becomes in principle possible to compute the multiplicities completely. The only problem here is that simple subquotients of the form $L(\mathfrak{a}thfrak{p},V_Y)$, $Y\in \otimesperatorname{Irr}^{\mathfrak{a}thfrak{g}}(\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p},\otimesperatorname{Coker} (\otimesverline{N}\otimestimes E)\}_{\otimesperatorname{int}})$, do not exhaust all simple subquotients of $\Delta(\mathfrak{a}thfrak{p},V_X)$. Roughly speaking, Theorem~\rbraceef{thmintro3} gives information only about simple subquotients having small enough annihilator. It turns out that the number and multiplicities of such subquotients are always finite. {\em Knowing all the multiplicities for these `allowed' simple subquotients is what we call `knowing the rough structure of a generalized Verma module'}. To prove Theorem~\rbraceef{thmintro3} we use the approach of \cite{MiSo} together with \cite{KM2}, which associates to simple modules of the form $L(\mathfrak{a}thfrak{p},V_Y)$ certain simple objects of some $\otimesperatorname{Coker}$-category. Without any restriction on the simple module $L$ to start with, our result seems to be the best possible, since all we know in general about $L$ is its annihilator. The complete (i.e. fine) structure of $\Delta(\mathfrak{a}thfrak{p},V_X)$ depends heavily on $V_X$, not just on its annihilator. 
This becomes transparent by comparing for instance the structure of generalized Verma modules induced from Gelfand-Zetlin-modules on the one hand with generalized Verma modules induced from simple Verma modules on the other hand. In the first case the rough structure always coincides with the fine structure (see for example \cite{MO-0}), whereas in the second case the fine structure is different from the rough structure already in the case of the algebra $\mathfrak{a}thfrak{sl}_3$ (this follows for example from \cite[Theorem~7.6.23]{Di}). Let now $\mathfrak{a}$ be the semisimple part of $\mathfrak p$ and $W'$ the corresponding Weyl group. As a consequence of Theorem~\rbraceef{thmintro3}, we are able to deduce a criterion for the irreducibility of the generalized Verma module $\Delta(\mathfrak{a}thfrak{p},L)$, where $L$ is an {\it arbitrary} simple $\mathfrak{a}$-module (we formulate the statement in the case when $L$ has trivial central character, however, standard arguments extend this to the arbitrary type $A$ case, see Remark~\rbraceef{simplicity}): We first associate in a combinatorial way to $L$ a pair $(x,w)\in W'\times W$ (see Section~\rbraceef{s9}) and then deduce the following result: \begin{theoremintro}\lbraceambdabel{thmintro4} The module $\Delta(\mathfrak{a}thfrak{p},L)$ is irreducible if and only if $w$ belongs to the same coset in $W'\setminus W$ as the longest element $w_0$ of $W$. \end{theoremintro} An essential part of the approach from \cite{MiSo} is the study (including an answer in special cases) of the so-called {\em Kostant's problem} from \cite{Jo}. If $M$ is a $\mathfrak{a}thfrak{g}$-module and $\mathfrak{a}thrm{Ann}(M)$ is the annihilator of $M$ in $U(\mathfrak{a}thfrak{g})$, then the vector space $U(\mathfrak{a}thfrak{g})/\mathfrak{a}thrm{Ann}(M)$ canonically embeds into the vector space of all $\mathfrak{a}thbb{C}$-linear automorphisms of $M$, which are locally finite with respect to the adjoint action of $\mathfrak{a}thfrak{g}$. The question, which was called {\em Kostant's problem for $M$} in \cite{Jo}, is to determine for which modules $M$ the canonical injection above is in fact an isomorphism. We answer this question for several modules $M$. In particular, we prove the following statement: \begin{theoremintro}\lbraceambdabel{thmintro5} Let $\mathfrak{a}thfrak{g}$ be of type $A$, and $x,y\in W=S_n$ be elements in the same left cell. Then Kostant's problem has a positive answer for the simple highest weight module $L(x\cdot 0)$ if and only if it has a positive answer for the simple highest weight module $L(y\cdot 0)$. \end{theoremintro} We believe that our approach will finally lead to a complete positive answer of this problem in type $A$. For other types our approach fails, but the answer to Kostant's problem is negative as well (as was shown in \cite{Jo}). A detailed analysis of the problem, our partial solutions, and the obstacles for other types are given in Subsection~\rbraceef{s9.2}. On the way to our main results we also obtain several categorification results which we think are of interest on their own. We also obtain some unexpected applications of the categorification procedure, in particular we define a canonical filtration on integral permutation and induced cell modules for the symmetric group $S_n$. \mathfrak noindent {\bf A structural overview.} The paper starts with a general discussion on the notion of {\em precategorification} and {\em categorification} in Section~\rbraceef{s2}. 
In Section~\rbraceef{s25} we give a brief summary of the known categorifications of the regular representations of the Hecke algebra. A categorification of the Kazhdan-Lusztig (right) cell modules is given in Section~\rbraceef{s3}. The categories appearing in this categorification are not very well understood. They are defined as quotients of certain subcategories of the category $\mathcal{O}$. In general they are not highest weight categories and have infinite homological dimension (see Subsection~\rbraceef{s4.4}). From our uniqueness result it follows that the categorifications of the cell modules are certain module categories over (in general non-commutative) symmetric algebras including as a special case Khovanov's algebra $\mathfrak{a}thcal{H}^n$ (from \cite{Kh}, \cite{St3}); and the uniqueness result together with \cite{Br}, \cite{St2} shows that the centres of these categories are isomorphic to the cohomology ring of a certain Springer fiber, that means the fixed point variety of the flag variety $GL(n,\mathfrak{a}thbb{C})/B$ under a nilpotent matrix $N$ (in Jordan normal form). For the standard examples of induced modules, namely the induced sign or induced trivial module, categorifications in terms of parabolic category $\mathcal{O}$ (see e.g. \cite{SoKipp}, \cite{StDuke}) and Harish-Chandra bimodules (see e.g. \cite{MS}) are well-known and well-studied. Our categories corresponding to induced modules are generalizations of both, parabolic category $\mathcal{O}$ and certain categories of Harish-Chandra bimodules. A short summary can be found in Section~\rbraceef{s5}. In an induced cell module for the Iwahori-Hecke algebra, we have four special bases which will have a very natural categorical interpretation (Theorem~\rbraceef{combinatorics}) in terms of isomorphism classes of projective modules, simple modules, standard modules (which are induced projective modules from the categorification of the cell module) and proper standard modules (which are induced simple modules). The categorifications for induced cell modules are stratified in the sense of \cite{CPS}, and even weakly properly stratified in the sense of \cite{Fr}. The latter structure plays an important role in several parts of the paper. In Section~\rbraceef{s8} we study properties of the categories used to categorify induced cell modules (in type $A$). Generalizing Irving's results from \cite{Irself}, we classify all projective modules which are also injective (Theorem~\rbraceef{irving}) and then deduce a double centralizer property (Theorem~\rbraceef{pr991}) which generalizes Soergel's original Struktursatz from \cite{Sperv} highly non-trivially. Maybe the most surprising result here is the description of the center of these induced categories (Theorem~\rbraceef{pcentre}): the center is isomorphic to the center of a certain parabolic category $\mathcal{O}$. Therefore, we again have the explicit description of the center as given in \cite{Br}. Moreover, the categories categorifying induced cell modules are all Ringel self-dual (Theorem~\rbraceef{trsd}), which means that there is an equivalence between the additive subcategory of all projective modules and the additive subcategory of all tilting modules. The categorifications of induced cell modules will finally be used to describe the best possible general result about generalized Verma modules, that means parabolically induced {\it arbitrary} simple modules. 
The generalized Verma modules as briefly explained above appear as the so-called (proper) standard objects in our categorifications. Our combinatorial description can then be used to deduce at least the multiplicity of certain composition factors (namely the one which can be seen in our categories), and leads to what is called the `rough structure' of generalized Verma modules. In this rough structure all the multiplicities become finite. A very special case of our setup was already considered in \cite{MiSo} and \cite{KM2}. \mathfrak noindent {\bf General terminology.} A {\it ring} always means an associative unitary ring. {\it Graded} always means $\mathfrak{a}thbb{Z}$-graded. For a ring $R$ we denote by $R\otimesperatornameeratorname{-mod}$ and $\otimesperatornameeratorname{mod-}R$ the categories of finitely generated left and right $R$-modules respectively. If $R$ is graded then we denote by $R\mathfrak{a}thrm{-gmod}$ and $\otimesperatornameeratorname{gmod-}R$ the categories of finitely generated graded left and right $R$-modules respectively. Inclusions are denoted by $\subset$. If it is necessary to point out that some inclusion is proper we use the symbol $\subsetneq$. Let $\mathfrak{a}thbb{F}$ be a commutative ring. We denote by $\mathfrak{a}thbb{F}[v,v^{-1}]$ and $\mathfrak{a}thbb{F}((v))$ the rings of Laurent polynomials and formal Laurent series in the variable $v$ with coefficients in $\mathfrak{a}thbb{F}$ respectively. In the paper we usually work over $\mathfrak{a}thbb{Z}$ or over $\mathfrak{a}thbb{C}$. We abbreviate $\otimestimes_{\mathfrak{a}thbb{C}}$ as $\otimestimes$. \mathfrak noindent {\bf Acknowledgments.} We would like to thank Henning Haahr Andersen, Roman Bezrukavnikov, Oleksandr Khomenko, Ryszard Rubinsztein and David Vogan for useful and stimulating discussions. \section{Decategorification, precategorification and categorification}\lbraceambdabel{s2} In this section we define a general algebraic notion of categorification. The definition is based on and further develops the ideas of \cite{KMS}, \cite{KMS2}, \cite{MS}. \subsection{Ordinary setup}\lbraceambdabel{s2.1} Let $\mathfrak{a}thscr{C}$ be a category. If $\mathfrak{a}thscr{C}$ is abelian or triangulated, we denote by $\otimesperatorname{Gr}(\mathfrak{a}thscr{C})$ the {\it Grothendieck group} of $\mathfrak{a}thscr{C}$. The latter one is by definition the free abelian group generated by the isomorphism classes $[M]$ of objects $M$ of $\mathfrak{a}thscr{C}$ modulo the relation $[C]=[A]+[B]$ whenever there is a short exact sequence $A\mathfrak hookrightarrow C\twoheadrightarrow B$ if $\mathfrak{a}thscr{C}$ is abelian; and whenever there is a triangle $(A,C,B,f,g,h)$ if $\mathfrak{a}thscr{C}$ is triangulated. If $\mathfrak{a}thscr{C}$ is additive, we denote by $\otimesperatorname{Gr}(\mathfrak{a}thscr{C})_{\otimesperatornamelus}$ the {\em split Grothendieck group} of $\mathfrak{a}thscr{C}$, which is by definition the free abelian group generated by the isomorphism classes $[M]$ of objects $[M]$ of $\mathfrak{a}thscr{C}$ modulo the relation $[C]=[A]+[B]$ whenever $C\cong A\otimesperatornamelus B$. For $M\in\mathfrak{a}thscr{C}$ we denote by $[M]$ the image of $M$ in the (split) Grothendieck group. Let $\mathfrak{a}thbb{F}$ be a commutative ring with $1$. \begin{definition}{\rbracem Let $\mathfrak{a}thscr{C}$ be an abelian or triangulated, respectively additive, category. 
Then the {\em $\mathfrak{a}thbb{F}$-decategorification} of $\mathfrak{a}thscr{C}$ is the $\mathfrak{a}thbb{F}$-module $[\mathfrak{a}thscr{C}]^{\mathfrak{a}thbb{F}}:= \mathfrak{a}thbb{F}\otimestimes_{\mathfrak{a}thbb{Z}} \otimesperatorname{Gr}(\mathfrak{a}thscr{C})$ (resp. $[\mathfrak{a}thscr{C}]^{\mathfrak{a}thbb{F}}_{\otimesperatornamelus}:= \mathfrak{a}thbb{F}\otimestimes_{\mathfrak{a}thbb{Z}} \otimesperatorname{Gr}(\mathfrak{a}thscr{C})_{\otimesperatornamelus}$). } \end{definition} The element $1\otimestimes [M]$ of the $\mathfrak{a}thbb{F}$-decategorification is abbreviated as $[M]$ as well. We set $[\mathfrak{a}thscr{C}]:=[\mathfrak{a}thscr{C}]^{\mathfrak{a}thbb{Z}}$ and $[\mathfrak{a}thscr{C}]_{\otimesperatornamelus}:= [\mathfrak{a}thscr{C}]^{\mathfrak{a}thbb{Z}}_{\otimesperatornamelus}$. \begin{definition}{\rbracem Let $V$ be an $\mathfrak{a}thbb{F}$-module. An {\em $\mathfrak{a}thbb{F}$-precategorification} $(\mathfrak{a}thscr{C},\varphi)$ of $V$ is an abelian (resp. triangulated or additive) category $\mathfrak{a}thscr{C}$ with a fixed monomorphism $\varphi$ from $V$ to the $\mathfrak{a}thbb{F}$-decategorification of $\mathfrak{a}thscr{C}$. If $\varphi$ is an isomorphism, then $(\mathfrak{a}thscr{C},\varphi)$ is called an {\em $\mathfrak{a}thbb{F}$-categorification} of $V$. } \end{definition} Hence categorification is in some sense the `inverse' of decategorification. Whereas the latter is uniquely defined, there are usually several different categorifications. In case $\mathfrak{a}thbb{F}=\mathfrak{a}thbb{Z}$ and $V$ is torsion-free there is always the (trivial) categorification given by a semisimple category of the appropriate size. \begin{definition}{\rbracem Let $V$ be an $\mathfrak{a}thbb{F}$-module and $f:V\rbraceightarrow V$ be an $\mathfrak{a}thbb{F}$-endomorphism. Given an $\mathfrak{a}thbb{F}$-precategorification $(\mathfrak{a}thscr{C},\varphi)$ of $V$, an {\em $\mathfrak{a}thbb{F}$-categorification} of $f$ is an exact (resp. triangulated or additive) functor $F:\mathfrak{a}thscr{C}\rbraceightarrow \mathfrak{a}thscr{C}$ such that $[F]\circ \varphi=\varphi\circ f$, where $[F]$ denotes the endomorphism of $[\mathfrak{a}thscr{C}]^{\mathfrak{a}thbb{F}}$ (or $[\mathfrak{a}thscr{C}]_{\otimesperatornamelus}^{\mathfrak{a}thbb{F}}$ if $\mathfrak{a}thscr{C}$ is abelian) induced by $F$. In other words, the following diagram commutes: \begin{displaymath} \xymatrix{ V\ar[rr]^{f}\ar[d]_{\varphi}&& V\ar[d]^{\varphi}\\ [\mathfrak{a}thscr{C}]_{(\otimesperatornamelus)}^{\mathfrak{a}thbb{F}}\ar[rr]^{[F]} && [\mathfrak{a}thscr{C}]_{(\otimesperatornamelus)}^{\mathfrak{a}thbb{F}} } \end{displaymath} } \end{definition} \begin{definition} {\rbracem Assume $A$ is some $\mathfrak{a}thbb{F}$-algebra defined by generators $a_1,\dots,a_k$ and relations $R_j$, $j\in J$. Given an $A$-module $M$, each generator $a_i$ of $A$ defines a linear endomorphism, $f_i$, of $M$. A {\em very weak $\mathfrak{a}thbb{F}$-(pre)categorification} of $M$ is a (pre)categorification $(\mathfrak{a}thscr{C},\varphi)$ of the vector space $M$ together with a categorification $F_i$, $i=1,\dots,k$, of each $f_i$.} \end{definition} If there is an `interpretation' of the relations $R_j$ between the generators of $A$ in terms of isomorphisms of functors, we will call $(\mathfrak{a}thscr{C},\varphi,F_1,\dots,F_k)$ a {\em (pre)ca\-te\-go\-ri\-fi\-ca\-ti\-on} of the $A$-module $M$. The interpretation of the relations will depend on the example. 
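Before turning to the examples which are actually used later on, the following elementary illustration of the last two definitions may be helpful (it only records a standard fact and is not needed in the sequel). Let $\mathscr{C}$ be the category of finite-dimensional complex vector spaces. Then $\operatorname{Gr}(\mathscr{C})\cong\mathbb{Z}$ via $[M]\mapsto\dim M$, so $\mathscr{C}$ is a $\mathbb{Z}$-categorification of $\mathbb{Z}$. The exact endofunctor $F$ given by $F(M)=M\oplus M$ satisfies
\[
[F][M]=[M\oplus M]=2[M],
\]
hence $F$ is a categorification of the endomorphism of $\mathbb{Z}$ given by multiplication by $2$; similarly, the $n$-fold direct sum categorifies multiplication by $n$.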
\begin{example}{\rbracem Let $R=\mathfrak{a}thbb{C}[x]/(x^2)$ and $\mathfrak{a}thscr{C}=R\mathfrak{a}thrm{-mod}$. Then $\otimesperatorname{Gr}(\mathfrak{a}thscr{C})\cong\mathfrak{a}thbb{Z}$, generated by the isomorphism class $[\mathfrak{a}thbb{C}]$ of the unique simple $R$-module, and $[\mathfrak{a}thscr{C}]\cong\mathfrak{a}thbb{Z}$. Thus $\mathfrak{a}thscr{C}$ is a $\mathfrak{a}thbb{Z}$-categorification of $\mathfrak{a}thbb{Z}$.} \end{example} \subsection{Graded setup}\lbraceambdabel{s2.2} If $\mathfrak{a}thscr{C}$ is equivalent to a category of modules over a graded ring, then $\otimesperatorname{Gr}(\mathfrak{a}thscr{C})$ (or $\otimesperatorname{Gr}(\mathfrak{a}thscr{C})_{\otimesperatornamelus}$) becomes a $\mathfrak{a}thbb{Z}[v,v^{-1}]$-module via $v^{i}[M]=[M\lbraceambdangle i\rbraceangle]$ for any $M\in\mathfrak{a}thscr{C}$, $i\in\mathfrak{a}thbb{Z}$, where $M\lbraceambdangle i\rbraceangle$ is the module $M$, but in the grading shifted by $i$ such that $(M\lbraceambdangle i\rbraceangle)_j=M_{j-i}$. To define the notion of a decategorification for a category of graded modules (or complexes of graded modules) let $\mathfrak{a}thbb{F}$ be a commutative ring with $1$ and $\iota:\mathfrak{a}thbb{Z}[v,v^{-1}]\rbraceightarrow \mathfrak{a}thbb{F}$ be a fixed homomorphism of unitary rings. Then $\iota$ defines on $\mathfrak{a}thbb{F}$ the structure of a (right) $\mathfrak{a}thbb{Z}[v,v^{-1}]$-module. \begin{definition}{\rbracem The {\em $(\mathfrak{a}thbb{F},\iota)$-decategorification} of $\mathfrak{a}thscr{C}$ is the $\mathfrak{a}thbb{F}$-module \begin{displaymath} [\mathfrak{a}thscr{C}]^{(\mathfrak{a}thbb{F},\iota)}:= \mathfrak{a}thbb{F}\otimestimes_{\mathfrak{a}thbb{Z}[v,v^{-1}]} \otimesperatorname{Gr}(\mathfrak{a}thscr{C})\quad (\text{resp. }[\mathfrak{a}thscr{C}]^{(\mathfrak{a}thbb{F},\iota)}_{\otimesperatornamelus}:= \mathfrak{a}thbb{F}\otimestimes_{\mathfrak{a}thbb{Z}[v,v^{-1}]} \otimesperatorname{Gr}(\mathfrak{a}thscr{C})_{\otimesperatornamelus}). \end{displaymath} } \end{definition} In most of our examples the homomorphism $\iota:\mathfrak{a}thbb{Z}[v,v^{-1}]\rbraceightarrow \mathfrak{a}thbb{F}$ will be the obvious canonical inclusion. In such cases we will omit $\iota$ in the notation. We set \begin{displaymath} [\mathfrak{a}thscr{C}]:=[\mathfrak{a}thscr{C}]^{(\mathfrak{a}thbb{Z}[v,v^{-1}],\mathfrak{a}thrm{id})},\quad [\mathfrak{a}thscr{C}]_{\otimesperatornamelus}:=[\mathfrak{a}thscr{C}]^{(\mathfrak{a}thbb{Z}[v,v^{-1}],\mathfrak{a}thrm{id})}_{\otimesperatornamelus}. \end{displaymath} \begin{definition}{\rbracem Let $V$ be an $\mathfrak{a}thbb{F}$-module. A {\em $\iota$-precategorification} $(\mathfrak{a}thscr{C},\varphi)$ of $V$ is an abelian or triangulated, respectively additive, category $\mathfrak{a}thscr{C}$ with a fixed free action of $\mathfrak{a}thbb{Z}$ and a fixed monomorphism $\varphi$ from $V$ to the $(\mathfrak{a}thbb{F},\iota)$-decategorification of $\mathfrak{a}thscr{C}$. If $\varphi$ is an isomorphism, $(\mathfrak{a}thscr{C},\varphi)$ is called a {\em $\iota$-categorification} of $V$. } \end{definition} The definitions of a {\em $\iota$-categorification} of an endomorphism, $f:V\rbraceightarrow V$, and of a {\em $\iota$-(pre)categorification} of a module over some $\mathfrak{a}thbb{F}$-algebra are completely analogous to the corresponding definitions from the previous subsection. \begin{example}{\rbracem Let $R=\mathfrak{a}thbb{C}[x]/(x^2)$. 
Consider $R$ as a graded ring (we usually consider it as a cohomology ring and put $x$ in degree two), and take $\mathscr{C}=R\mathrm{-gmod}$. Then $[\mathscr{C}]\cong\mathbb{Z}[v,v^{-1}]$ as a $\mathbb{Z}[v,v^{-1}]$-module, hence the graded category $\mathscr{C}$ is a $(\mathbb{Z}[v,v^{-1}],\mathrm{id})$-categorification of $\mathbb{Z}[v,v^{-1}]$. Note that $\mathscr{C}$, considered just as an abelian category, is also a $\mathbb{Z}[v,v^{-1}]$-categorification of $\mathbb{Z}[v,v^{-1}]$. } \end{example} \section{The Hecke algebra as a bimodule over itself and its categorifications}\label{s25} In this section we recall the definition of Hecke algebras and give several examples of categorifications of regular (bi)modules over these algebras. We refer the reader to \cite{KMS2} for more examples of categorifications. From now on we fix a finite Weyl group $W$ with identity element $e$, set of simple reflections $S$, and length function $l$. Denote by $w_0$ the longest element of $W$. Let further $\leq$ be the Bruhat order on $W$. With respect to this order the element $e$ is the minimal and $w_0$ is the maximal element. Our main example will be $W=S_n$, the symmetric group on $n$ elements, and $S=\{(i,i+1),i=1,\dots,n-1\}$, the set of elementary transpositions. \subsection{The Hecke algebra}\label{s25.2} Denote by $\mathds{H}=\mathds{H}(W,S)$ the {\it Hecke algebra} associated with $W$ and $S$; that is the $\mathbb{Z}$-algebra which is a free $\mathbb{Z}[v,v^{-1}]$-module with basis $\{H_x\mid x\in W\}$ and multiplication given by \begin{equation} \label{eqhecke} H_xH_y=H_{xy}\, \text{if $l(x)+l(y)=l(xy),\,\,$ and}\,\, H_s^2=H_e+(v^{-1}-v)H_s\, \text{for $s\in S$}. \end{equation} The algebra $\mathds{H}$ is a deformation of the group algebra $\mathbb{Z}[W]$. As a $\mathbb{Z}[v,v^{-1}]$-algebra it is generated by $\{H_s\mid s\in S\}$, or (which will turn out to be more convenient) by the set $\{\underline{H}_s=H_s+vH_e\mid s\in S\}$. Note that $\underline{H}_s$ is fixed under the involution ${}^-$, which maps $v\mapsto v^{-1}$ and $H_s\mapsto (H_s)^{-1}$, i.e., $\underline{H}_s$ is a Kazhdan-Lusztig basis element. More generally, for $w\in W$ we denote by $\underline{H}_w$ the corresponding element from the Kazhdan-Lusztig bases for $\mathds{H}$ in the normalization of \cite{SoKipp}. The Kazhdan-Lusztig polynomials $h_{x,y}\in \mathbb{Z}[v]$ are defined via $\underline{H}_x=\sum_{y\in W}h_{y,x}H_y$. With respect to the generators $\underline{H}_s$, $s\in S$, we have the following set of defining relations (in the case $W=S_n$): \begin{eqnarray} \label{eqhecke2} \underline{H}_s^2&=&(v+v^{-1})\underline{H}_s;\\ \nonumber \underline{H}_s\underline{H}_t&=&\underline{H}_t\underline{H}_s, \quad\quad\quad\quad\, \text{ if } ts=st;\\ \nonumber \underline{H}_s\underline{H}_t\underline{H}_s+ \underline{H}_t&=&\underline{H}_t\underline{H}_s\underline{H}_t +\underline{H}_s, \,\, \text{ if }\, ts\neq st. \end{eqnarray} Let $\mathbb{F}$ be any commutative ring and $\iota:\mathbb{Z}[v,v^{-1}]\rightarrow \mathbb{F}$ be a homomorphism of unitary rings.
Then we have the {\em specialized} Hecke algebra $\mathds{H}^{(\mathbb{F},\iota)}=\mathbb{F}\otimes_{\mathbb{Z}[v,v^{-1}]} \mathds{H}$. Again, if $\iota$ is clear from the context (for instance if $\iota$ is the natural inclusion), we will omit it in the notation.

\begin{example}\label{ex6}
{\rm Let again $R=\mathbb{C}[x]/(x^2)$. Putting $x$ in degree two induces a grading on $R$, and $B_s:=R\langle-1\rangle$ becomes a graded $R$-bimodule. Let $\mathscr{S}$ be the additive category generated by the graded left $R$-modules $\mathbb{C}\langle j\rangle$ and $B_s\langle j\rangle$, $j\in\mathbb{Z}$. Then $\operatorname{Gr}(\mathscr{S})_\oplus$ is a free $\mathbb{Z}[v,v^{-1}]$-module of rank two, and is isomorphic to $\mathds{H}(S_2, \{s\})$ via $[\mathbb{C}\langle l\rangle]\mapsto v^{-l}H_e$, $[B_s\langle l\rangle]\mapsto v^{-l}\underline{H}_s$. The functor $F_s^l=B_s\otimes_{\mathbb{C}} -$ satisfies the condition $F_s^l\circ F_s^l\cong F_s^l\langle 1\rangle\oplus F_s^l\langle -1\rangle$, which is an interpretation of the first relation in \eqref{eqhecke2}. Hence we get a categorification of the left regular $\mathds{H}(S_2, \{s\})$-module. This example generalizes to arbitrary finite Weyl groups as we will describe in the next subsection.
}
\end{example}

\subsection{Special bimodules}\label{s25.3}

Associated with $W$ we have the additive category given by the so-called {\em special bimodules} $B_w$, $w\in W$, introduced by Soergel in \cite{SHC}, see also \cite{SKLP}. To define these bimodules we consider the geometric representation $(V_{\mathbb{R}},\varphi)$ of $W$ and its complexification $(V,\varphi)$, see \cite[4.2]{BjBr}. Let $R$ be the ring of regular functions on $V$ with its natural $W$-action. This ring becomes graded by putting $V^*$ in degree $2$. For any $s\in S$ let $R^s$ be the subring of $s$-invariants in $R$. Note that this is in fact a graded subring of $R$. Given $w\in W$ with a fixed reduced expression $[w]=s_1s_2\cdot\ldots\cdot s_k$ define the graded $R$-bimodule $R_{[w]}$ as follows:
\begin{displaymath}
R_{[w]}=R\otimes_{R^{s_1}}R\otimes_{R^{s_2}}\dots \otimes_{R^{s_k}}R \langle -l(w)\rangle.
\end{displaymath}
Following \cite{SKLP} we define $B_w$ as the unique indecomposable direct summand of $R_{[w]}$ which is not isomorphic to a direct summand of any $R_{[x]}$ with $l(x)<l(w)$. Let $\mathscr{S}$ be the smallest additive category which contains all special bimodules and is closed under taking direct sums and graded shifts. There is a unique isomorphism $\mathcal{E}$ of $\mathbb{Z}[v,v^{-1}]$-modules which satisfies
\begin{eqnarray*}
\mathcal{E}: \quad \mathds{H} & \overset{\sim}{\longrightarrow} & \left[\mathscr{S}\right]_{\oplus} \\
\underline{H}_w & \mapsto & \left[B_w\right].
\end{eqnarray*}
For any $s\in S$ we have the additive endofunctors $\mathrm{F}_s^l= B_s\otimes_{R}{}_-$ and $\mathrm{F}_s^r={}_-\otimes_{R}B_s$ of $\mathscr{S}$.
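On the level of split Grothendieck groups these functors will realize multiplication with $\underline{H}_s$. As an elementary consistency check, added here only for the reader's convenience, note that the first relation in \eqref{eqhecke2}, which was categorified in Example~\ref{ex6}, follows directly from \eqref{eqhecke}:
\begin{displaymath}
\underline{H}_s^2=(H_s+vH_e)^2=H_e+(v^{-1}-v)H_s+2vH_s+v^2H_e=(1+v^2)H_e+(v+v^{-1})H_s=(v+v^{-1})\underline{H}_s.
\end{displaymath}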
Altogether we get a categorification of the regular Hecke module as follows (see \cite[Theorem~1.10]{SKLP}, \cite[Satz~7.9]{Ha} and \cite[Theorem~1]{SHC}):
\begin{proposition}\label{prop1}
\begin{enumerate}[(i)]
\item\label{prop1.1} $(\mathscr{S},\mathcal{E},\{\mathrm{F}_s^r\}_{s\in S})$ is a categorification of the right regular representation of $\mathds{H}$ with respect to the generators $\underline{H}_s$, $s\in S$.
\item\label{prop1.2} $(\mathscr{S},\mathcal{E},\{\mathrm{F}_t^l\}_{t\in S})$ is a categorification of the left regular representation of $\mathds{H}$ with respect to the generators $\underline{H}_t$, $t\in S$.
\end{enumerate}
\end{proposition}
The interpretation of the relations \eqref{eqhecke2} is given by the existence (see \cite[Theorem~1]{SHC}) of isomorphisms of functors as follows (in case $W=S_n$):
\begin{eqnarray*}
(\mathrm{F}_s^{\sharp})^2&\cong&\mathrm{F}_s^{\sharp}\langle 1\rangle \oplus\mathrm{F}_s^{\sharp}\langle-1\rangle;\\
\nonumber \mathrm{F}_s^{\sharp}\mathrm{F}_t^{\sharp}&\cong& \mathrm{F}_t^{\sharp}\mathrm{F}_s^{\sharp}, \quad\quad\quad\quad\, \text{ if } ts=st;\\
\nonumber \mathrm{F}_s^{\sharp}\mathrm{F}_t^{\sharp}\mathrm{F}_s^{\sharp}\oplus \mathrm{F}_t^{\sharp}&\cong&\mathrm{F}_t^{\sharp} \mathrm{F}_s^{\sharp}\mathrm{F}_t^{\sharp} \oplus\mathrm{F}_s^{\sharp}, \,\, \text{ if }\, ts\neq st,
\end{eqnarray*}
where $\sharp$ is either $l$ or $r$. For other types the interpretation is similar.
\begin{remark}\label{rem1}
{\rm
\begin{enumerate}
\item\label{rem1.1} The functors $\mathrm{F}_s^r$ and $\mathrm{F}_t^l$ naturally commute (with each other), hence the parts \eqref{prop1.1} and \eqref{prop1.2} of Proposition~\ref{prop1} together give a categorification of the regular Hecke {\bf bimodule}.
\item\label{rem1.2} The above categorification is not completely satisfactory, mostly because it is given by an additive category which is not abelian. As a consequence, we cannot see the standard basis of the Hecke module in this categorification, hence we will present a categorification given by an abelian category. This will be done in the next subsection.
\item\label{rem1.3} The proof of Proposition~\ref{prop1} given in \cite[Theorem~1]{SHC} is quite involved and uses the full power of Kazhdan-Lusztig theory (or the decomposition theorem \cite[Theorem~6.2.5]{BBD}).
\item\label{rem1.4} If one prefers to work with finite-dimensional algebras and modules, one could replace the polynomial ring $R$ with the coinvariant ring $C$, which is the quotient of $R$ modulo the ideal generated by homogeneous $W$-invariant polynomials of positive degree. One can define the {\em special $C$-bimodules} $B_w\otimes_R C$ and obtains a completely analogous result to Proposition~\ref{prop1} and to part \eqref{rem1.1} of this remark, see \cite[Theorem~2]{SHC}.
\item\label{rem1.5} We could also define the {\em special right $C$-modules} $\overline{B}_w=\mathbb{C}\otimes_R B_w\otimes_R C$.
Since they are preserved by the functors $\mathrm{F}_s^r$, $s\in S$, Proposition~\ref{prop1}\eqref{prop1.1} provides another categorification of the regular right $\mathds{H}$-module, \cite[Zerlegungssatz~1 and Section~2.6]{Sperv}.
\end{enumerate}
}
\end{remark}

\subsection{Harish-Chandra bimodules}\label{s25.4}

In this section we would like to improve Proposition~\ref{prop1} and work with abelian categories. We start with introducing the setup, which then will also be used in the next subsection. Let $\mathfrak{g}$ be a reductive finite-di\-men\-sio\-nal complex Lie algebra associated with the Weyl group $W$. Let $U(\mathfrak g)$ be the universal enveloping algebra of $\mathfrak g$ with its center $Z(\mathfrak g)$. Fix a triangular decomposition $\mathfrak g=\mathfrak n_-\oplus\mathfrak h\oplus\mathfrak n_+$, where $\mathfrak h$ is a fixed Cartan subalgebra of $\mathfrak g$ contained in the Borel subalgebra $\mathfrak{b}= \mathfrak h\oplus\mathfrak n_+$. For $\lambda\in \mathfrak h^*$ we denote by $M(\lambda)$ the Verma module with highest weight $\lambda$. Let $\rho$ be the half-sum of all positive roots. Define $\mathfrak h^*_{dom}:=\{\lambda\in\mathfrak h^*\,:\,\lambda+\rho \text{ is dominant}\}$, which is the dominant Weyl chamber with respect to the {\em dot-action} of $W$ on $\mathfrak h^*$ given by $w\cdot \lambda=w(\lambda+\rho)-\rho$. Denote by $\mathcal{H}$ the category of Harish-Chandra bimodules for $\mathfrak g$, that is the category of finitely generated $U(\mathfrak g)$-bimodules of finite length which are locally finite with respect to the adjoint action of $\mathfrak g$ (which is defined for a bimodule $M$ as $x.m=xm-mx$ for any $x\in\mathfrak{g}$ and $m\in M$). The action of the center defines the following block decomposition of $\mathcal{H}$:
\begin{displaymath}
\mathcal{H}=\bigoplus_{\mathbf m,\mathbf n\in \text{Max}Z(\mathfrak g)} {}_{\mathbf m}\mathcal{H}_{\mathbf n}, \quad\text{where}\quad {}_{\mathbf m}\mathcal{H}_{\mathbf n}= \left\{ M\in \mathcal{H}\,|\, \exists\, k\in \mathbb N:\mathbf m^k M=0=M\mathbf n^k \right\}.
\end{displaymath}
Note that $Z(\mathfrak g)\cong R$ (via the Harish-Chandra isomorphism and \cite[18-1]{Kane}), hence it is positively graded. Let $\mathbf{0}\in \text{Max}Z(\mathfrak g)$ denote the annihilator (in $Z(\mathfrak g)$) of the trivial $U(\mathfrak g)$-module. Consider the block ${}_{\mathbf{0}}\mathcal{H}_{\mathbf{0}}$. Tensoring with finite-dimensional left and right $U(\mathfrak g)$-modules gives endofunctors of $\mathcal{H}$, and their direct sums and direct summands are called {\em projective functors}. Indecomposable projective functors were classified in \cite[Theorem~3.3]{BG}. It turns out that these summands are naturally labelled by the elements of $W$. For $w\in W$ we denote by $\theta_w^l$ the indecomposable projective endofunctor of ${}_{\mathbf{0}}\mathcal{H}_{\mathbf{0}}$ corresponding to $w$ and induced by tensoring with a finite-dimensional {\it left} $\mathfrak{g}$-module (as the superscript $l$ indicates). Similarly, we can consider projective functors given by tensoring with finite-dimensional {\it right} $U(\mathfrak g)$-modules and obtain the corresponding functors $\theta_w^r$.
For two $\mathfrak g$-modules $M$ and $N$ we denote by $\mathscr{L}(M,N)$ the largest $\mathrm{ad}(\mathfrak g)$-finite submodule of $\mathrm{Hom}_{\mathbb{C}}(M,N)$, see \cite[1.7.9]{Di}. The classes $[\mathscr{L}(M(0),M(w\cdot 0))]$, $w\in W$, form a basis of $[{}_{\mathbf{0}}\mathcal{H}_{\mathbf{0}}]$, see \cite{BG}, \cite[6.15]{Ja2}. Following \cite[Theorem~2]{SHC} we form the positively graded algebra
\begin{displaymath}
A^{\infty}=\mathrm{End}_{R-R}(\oplus_{w\in W}B_w)
\end{displaymath}
and we have an equivalence (see \cite[Theorem~3]{SHC}) of categories
\begin{displaymath}
{}_{\mathbf{0}}\mathcal{H}_{\mathbf{0}}\cong \mathrm{nil-}A^{\infty},
\end{displaymath}
where $\mathrm{nil-}A^{\infty}$ is the category of all finite-dimensional right $A^{\infty}$-modules $M$ satisfying $M\,A^{\infty}_i=0$ for all $i\gg 0$ (for example, this is obviously satisfied for any finite-dimensional {\em gradable} $A^{\infty}$-module). We consider the category $\mathrm{gmod-}A^{\infty}$ of all finite-dimensional graded right $A^{\infty}$-modules. The functors $\theta_w^l$ and $\theta_w^r$ lift to endofunctors of $\mathrm{gmod-}A^{\infty}$, see \cite[Appendix]{MO}. The modules $\mathscr{L}(M(0),M(w\cdot 0))$ admit graded lifts as well and we fix standard lifts $\tilde{M}_w$ such that their heads are concentrated in degree $0$. Let $\tilde{\mathcal{E}}$ be the unique isomorphism of $\mathbb{Z}[v,v^{-1}]$-modules such that
\begin{eqnarray*}
\tilde{\mathcal{E}}: \quad \mathds{H} & \overset{\sim}{\longrightarrow} & \left[\mathrm{gmod-}A^{\infty}\right] \\
H_w & \mapsto &\left[\tilde{M}_w\right] .
\end{eqnarray*}
\begin{proposition}\label{prop2}
\begin{enumerate}[(i)]
\item\label{prop2.1} $(\mathrm{gmod-}A^{\infty},\tilde{\mathcal{E}}, \{\theta_s^l\}_{s\in S})$ is a categorification of the right regular representation of $\mathds{H}$ with respect to the generators $\underline{H}_s$, $s\in S$.
\item\label{prop2.2} $(\mathrm{gmod-}A^{\infty},\tilde{\mathcal{E}}, \{\theta_t^r\}_{t\in S})$ is a categorification of the left regular representation of $\mathds{H}$ with respect to the generators $\underline{H}_t$, $t\in S$.
\end{enumerate}
\end{proposition}
This statement can be found for example in \cite{StThesis} and \cite{Kh2}. Basically, it follows from \cite{SHC}. We would like to emphasize the difference between Proposition~\ref{prop2} and Proposition~\ref{prop1}: in Proposition~\ref{prop1} the {\em right} regular representation of $\mathds{H}$ was categorified using the functors $\mathrm{F}_s^r$ of tensoring from the {\em right}, while in Proposition~\ref{prop2} the {\em right} regular representation of $\mathds{H}$ is categorified using the {\em left} translation functors $\theta_s^l$. The interpretation of the relations \eqref{eqhecke2} is similar to the one given after Proposition~\ref{prop1}. Again, the functors $\theta_s^l$ and $\theta_t^r$ naturally commute with each other and hence the parts \eqref{prop2.1} and \eqref{prop2.2} of Proposition~\ref{prop2} together give a categorification of the regular Hecke bimodule. The connection to Subsection~\ref{s25.3} is given by \cite[Section~3]{SHC}.
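For instance (spelled out here only for the reader's convenience, as the graded analogue of the isomorphisms displayed after Proposition~\ref{prop1}), the first relation in \eqref{eqhecke2} corresponds to an isomorphism of endofunctors of $\mathrm{gmod-}A^{\infty}$
\begin{displaymath}
\theta_s^{\sharp}\circ\theta_s^{\sharp}\cong\theta_s^{\sharp}\langle 1\rangle\oplus\theta_s^{\sharp}\langle -1\rangle,\qquad s\in S,\ \sharp\in\{l,r\},
\end{displaymath}
which decategorifies to $\underline{H}_s^2=(v+v^{-1})\underline{H}_s$.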
\subsection{Category $\mathcal{O}$}\label{s25.5}

We stick to the setup at the beginning of the previous subsection. Consider the BGG category $\mathcal{O}=\mathcal{O}(\mathfrak g,\mathfrak{b})$ (\cite{BGG2}) with its block decomposition
\begin{displaymath}
\mathcal{O}=\bigoplus_{\lambda\in \mathfrak h^*_{dom}}\mathcal{O}_{\lambda}, \quad\text{where}\quad \mathcal{O}_{\lambda}=\left\{ M\in \mathcal{O}\,|\, \exists\, k\in \mathbb N:(\mathrm{Ann}_{Z(\mathfrak g)}(M(\lambda)))^k M=0 \right\}.
\end{displaymath}
For $\lambda\in\mathfrak h^*$ let $P(\lambda)$ be the projective cover of $M(\lambda)$ and $L(\lambda)$ be the simple quotient of $P(\lambda)$. Following \cite{Sperv} we form the graded algebra $A=\mathrm{End}_{C}(\oplus_{w\in W}\overline{B}_w)$ (see Remark~\ref{rem1}) and obtain an equivalence of categories between $\mathcal{O}_0$ and $\mathrm{mod-}A$, the category of finite-dimensional right $A$-modules. We denote by $\mathcal{O}_0^{\mathbb{Z}}$ the category of finite-dimensional {\em graded} right $A$-modules. To connect this with the previous subsection let ${}_{\mathbf{0}}\mathcal{H}_{\mathbf{0}}^1$ denote the full subcategory of ${}_{\mathbf{0}}\mathcal{H}_{\mathbf{0}}$ consisting of all bimodules which are annihilated by $\mathbf{0}$ from the right hand side. Then there is an equivalence of categories ${}_{\mathbf{0}}\mathcal{H}_{\mathbf{0}}^1\cong \mathcal{O}_0$, see \cite[Theorem~5.9]{BG}. Via this equivalence the functors $\theta_w^l$, $w\in W$, restrict to exact endofunctors of $\mathcal{O}_0$, which admit graded lifts. Unfortunately, the functors $\theta_w^r$ do not preserve $\mathcal{O}_0$. However, for $s\in S$ there is a unique up to scalar natural transformation $\mathrm{Id}\langle 1\rangle\rightarrow \theta_s^r$, whose cokernel we denote by $\mathrm{T}_s$. These are the so-called {\em twisting functors} on $\mathcal{O}_0$, see \cite{AS} and \cite{KM}. Each $\mathrm{T}_s$ preserves $\mathcal{O}_0$ and has a graded lift by definition, but it is only right exact. Therefore we consider $\mathcal{D}^b(\mathcal{O}_0^{\mathbb{Z}})$, the bounded derived category of the category of finite-dimensional graded right $A$-modules, with shift functor $\llbracket\cdot \rrbracket$. Let $\mathcal{L}\mathrm{T}_s$ be the left derived functor of $\mathrm{T}_s$. For $w\in W$ we abbreviate $\Delta(w)=M(w\cdot 0)$, $L(w)=L(w\cdot 0)$ and $P(w)=P(w\cdot 0)$. All simple, standard and projective modules in $\mathcal{O}_0$ have standard graded lifts (i.e. their heads are concentrated in degree zero), which we will denote by the same symbols.
We fix the unique isomorphism of $\mathbb{Z}[v,v^{-1}]$-modules such that
\begin{eqnarray*}
\hat{\mathcal{E}}: \quad \mathds{H} & \overset{\sim}{\longrightarrow} & \left[\mathcal{D}^b(\mathcal{O}_0^{\mathbb{Z}})\right] \\
H_w & \mapsto & \left[\Delta(w)\right]
\end{eqnarray*}
and obtain the following well-known result:
\begin{proposition}\label{prop3}
\begin{enumerate}[(i)]
\item\label{prop3.1} $(\mathcal{D}^b(\mathcal{O}_0^{\mathbb{Z}}), \hat{\mathcal{E}},\{\theta_s^l\}_{s\in S})$ is a categorification of the right regular representation of $\mathds{H}$ with respect to the generators $\underline H_s$, $s\in S$.
\item\label{prop3.2} $(\mathcal{D}^b(\mathcal{O}_0^{\mathbb{Z}}), \hat{\mathcal{E}}, \{\mathcal{L}\mathrm{T}_t\}_{t\in S})$ is a categorification of the left regular representation of $\mathds{H}$ with respect to the generators $H_t$, $t\in S$.
\item\label{prop3.3} $\hat{\mathcal{E}}(\underline{H}_w)=[P(w)]$ for all $w\in W$.
\end{enumerate}
\end{proposition}
\begin{proof}
The ungraded resp. graded cases of \eqref{prop3.1} are treated in \cite[Theorem~3.4(iv)]{BG} and \cite[Theorem~7.1]{Stgrad}. The ungraded resp. graded cases of \eqref{prop3.2} follow from \cite[(2.3) and Theorem~3.2]{AS} and \cite[Appendix]{MO}. The claim \eqref{prop3.3} follows from \cite[Theorem~3.11.4(i) and (iv)]{BGS}.
\end{proof}
For Proposition~\ref{prop3}\eqref{prop3.1}, the interpretation of the relations \eqref{eqhecke2} is similar to the one given after Proposition~\ref{prop1}. We note that the statements \eqref{prop3.1} and \eqref{prop3.3} of Proposition~\ref{prop3} can be formulated entirely using the underlying abelian category, whereas the statement \eqref{prop3.2} cannot. The functors $\mathcal{L}\mathrm{T}_t$, $t\in S$, satisfy braid relations (this can be proved analogously to \cite[Proposition~11.1]{MS3} using \cite[Theorem~2.2]{AS} and \cite[Section~6]{KM}), but we do not know any functorial interpretation for the relation $H_s^2=H_e+(v^{-1}-v)H_s$. Hence (at least for the moment) Proposition~\ref{prop3}\eqref{prop3.2} gives only a {\em very weak} categorification of the left regular representation of $\mathds{H}$, but a categorification (in the stronger sense) of the underlying representation of the braid group, see \cite{Rouquier}. The functors $\theta_s^l$ and $\mathcal{L}\mathrm{T}_t$ naturally commute with each other and hence the parts \eqref{prop3.1} and \eqref{prop3.2} of Proposition~\ref{prop3} together give a (very weak) categorification of the regular Hecke bimodule. The connection to Remark~\ref{rem1}\eqref{rem1.5} is given by Soergel's functor $\mathbb{V}$, see \cite{Sperv}.

\subsubsection{$\mathfrak{gl}_2$-example, the basis given by standard modules}\label{s2.6}

Consider the case $W=S_2=\{e,s\}$. In this case the category $\mathcal{O}_0$ is equivalent to the category of finite-dimensional right $A$-modules, where $A$ is the path algebra of the following quiver with relations:
\begin{displaymath}
\xymatrix{ s\ar@/^/[rr]^{\alpha} && e\ar@/^/[ll]^{\beta} }, \quad\quad \alpha\beta=0.
\end{displaymath}
The algebra $A$ is graded with respect to the length of paths.
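Explicitly (a small addendum for orientation), $A$ is five-dimensional: the degree zero part is spanned by the two trivial paths at $s$ and $e$, the degree one part by $\alpha$ and $\beta$, and the degree two part by the unique length-two loop which is not killed by the relation $\alpha\beta=0$; all longer paths vanish.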
The algebra $A$ has a simple preserving duality and hence the categories of finite-dimensional right and left $A$-modules are equivalent. Working with left $A$-modules reflects better the natural $\mathfrak{gl}_2$-weight picture, so we will use them. The category $A\mathrm{-mod}$ has $5$ indecomposable objects, namely,
\begin{displaymath}
\Delta(s)=L(s):\quad\xymatrix{ \mathbb{C}\ar@/^/[rr]^{0} && 0\ar@/^/[ll]^{0} }\quad\quad\quad L(e):\quad\xymatrix{ 0\ar@/^/[rr]^{0} && \mathbb{C}\ar@/^/[ll]^{0} }
\end{displaymath}
\begin{displaymath}
P(s):\quad\xymatrix{ \mathbb{C}\oplus\mathbb{C} \ar@/^/[rr]^{ \text{\tiny $\left(\begin{array}{cc}1&0\end{array}\right)$} } && \mathbb{C} \ar@/^/[ll]^{ \text{\tiny $\left(\begin{array}{c}0\\1\end{array}\right)$}} }\quad\quad\quad \Delta(e)=P(e):\quad\xymatrix{ \mathbb{C}\ar@/^/[rr]^{0} && \mathbb{C}\ar@/^/[ll]^{\mathrm{id}} }
\end{displaymath}
\begin{displaymath}
I(e):\quad\xymatrix{ \mathbb{C}\ar@/^/[rr]^{\mathrm{id}} && \mathbb{C}\ar@/^/[ll]^{0} }.
\end{displaymath}
Let $f_s$ and $f_e$ denote the primitive idempotents of $A$ corresponding to the vertices $s$ and $e$. Then the functor $\theta_s^l$ is given by tensoring with the bimodule $Af_s\otimes_{\mathbb{C}} f_sA$, and the functor $\mathrm{T}_s$ is given by tensoring with the bimodule $Af_sA$. We have $\mathcal{L}_i\mathrm{T}_s=0$, $i>1$. The values of $\theta_s^l$, $\mathrm{T}_s$ and $\mathcal{L}_1\mathrm{T}_s$ on the indecomposable objects from $\mathcal{O}_0^{\mathbb{Z}}$ are:
\begin{displaymath}
\begin{array}{|c||c|c|c|c|c|}
\hline
M & L(s) & L(e) & P(s) & P(e) & I(e)\\
\hline \hline
\theta_s^l M & P(s)\langle -1\rangle& 0 & P(s)\langle -1\rangle\oplus P(s)\langle 1\rangle & P(s) & P(s)\langle -2\rangle\\
\hline
\mathrm{T}_s M & I(e) & 0 & P(s)\langle -1\rangle & L(s) & I(e)\langle -1\rangle\\
\hline
\mathcal{L}_1\mathrm{T}_s M & 0 & L(e)\langle 1\rangle & 0 & 0& L(e)\langle 1\rangle\\
\hline
\end{array}
\end{displaymath}
There are several bases for the Grothendieck group; the {\it standard} choice is given by the isomorphism classes of the standard modules $\Delta(w)$, $w=e,s$. In this basis, the action of our functors is as follows:
\begin{displaymath}
\begin{array}{rcclccl}
\left[\theta_s^l \Delta(e)\right] &=& v & \left[\Delta(e)\right]& +& &\left[\Delta(s)\right];\\
\left[\theta_s^l \Delta(s)\right] &=& &\left[\Delta(e)\right]&+ &v^{-1}& \left[\Delta(s)\right];\\
\left[\mathcal{L}\mathrm{T}_s \Delta(e)\right] &=& &&&&\left[\Delta(s)\right]; \\
\left[\mathcal{L}\mathrm{T}_s \Delta(s)\right] &=& &\left[\Delta(e)\right]&+&(v^{-1}-v)& \left[\Delta(s)\right].\\
\end{array}
\end{displaymath}
This is a categorification of the regular Hecke bimodule in the standard basis.

\subsubsection{$\mathfrak{gl}_3$-example, the basis given by simple modules}\label{s2.7}

Consider the case $W=S_3=\{e,s,t,st,ts,sts=tst=w_0\}$.
In this case the category $\mathcal{O}_0$ has infinitely many indecomposable objects (and is in fact wild, see \cite{FNP}). However, one can still compute the actions of $\theta_s^l$, $\theta_t^l$, $\mathrm{T}_s$ and $\mathrm{T}_t$ in various bases using known properties of these functors. The easiest basis is given by standard modules; here, however, we present the answer for $\theta_s^l$, $\theta_t^l$ in the most natural basis, namely the one given by simple modules. To shorten the notation we will denote our simple modules just by the corresponding elements of the Weyl group. Here are the graded filtrations of the values of the translation functors $\theta_s^l$ and $\theta_t^l$ applied to simple modules:
\begin{displaymath}
\begin{array}{|c||c|c|c|c|c|c|}
\hline
M & e & s & t & st & ts & w_0 \\
\hline \hline
\theta_s^l M & 0 & \begin{array}{ccc}&s& \\ st& & e\\ &s&\end{array}& 0 & 0 & \begin{array}{c}ts \\ t \\ ts\end{array}& \begin{array}{c}w_0 \\ st \\ w_0\end{array} \\
\hline
\theta_t^l M & 0 &0 & \begin{array}{ccc}&t& \\ ts& & e\\ &t&\end{array} & \begin{array}{c}st \\ s \\ st\end{array}& 0& \begin{array}{c}w_0 \\ ts \\ w_0\end{array} \\
\hline
\end{array}
\end{displaymath}
From this we can draw the following graph which shows all the non-zero coefficients of the action of $\theta_s^l$ (indicated by solid arrows) and $\theta_t^l$ (indicated by dotted arrows) in the basis of simple modules:
\hspace{2mm}
\begin{equation}\label{figuresl3}
\xymatrix{ & s\ar[dl]_1\ar@/^/[rr]^1\ar@(u,l)[]_{v+v^{-1}} && st\ar@{.>}@/^/[ll]^1\ar@(r,u)@{.>}[]_{v+v^{-1}} & \\ e & && & w_0\ar[ul]_1\ar@{.>}[dl]^1\ar@(u,r)[]^{v+v^{-1}} \ar@(d,r)@{.>}[]_{v+v^{-1}}\\ & t\ar@{.>}[ul]^1\ar@{.>}@/^/[rr]^1\ar@(d,l)@{.>}[]^{v+v^{-1}} && ts\ar@/^/[ll]^1\ar@(r,d)[]^{v+v^{-1}} & \\ }
\end{equation}
\hspace{2mm}
The graph \eqref{figuresl3} should be compared for example with \cite[Figure~6.2]{BjBr} (in order to get \cite[Figure~6.2]{BjBr} one should formally evaluate $v=1$ and subtract the identity from $\theta_s^l$ and $\theta_t^l$). From \eqref{figuresl3} one can deduce immediately the existence of the following flag of $\mathds{H}$-submodules inside our regular $\mathds{H}$-module:
\begin{displaymath}
\langle[e]\rangle\subset \langle[e],[s],[st]\rangle\subset \langle[e],[s],[st],[t],[ts]\rangle\subset \langle[e],[s],[st],[t],[ts],[w_0]\rangle.
\end{displaymath}
The subquotients of this flag are the Kazhdan-Lusztig cell modules for $\mathds{H}$. As we will show later on, this can be extended to an explicit categorification of these cell modules via some subcategories of $\mathcal{O}_0$. The definition of a left cell and the categorification of cell modules is the topic of the next section.

\section{Categorifications of Cell and Specht modules}\label{s3}

In this section we will introduce two categorifications of cell modules - one which we believe is `the correct one' and one which seems more canonical, easier, and straightforward at first sight, but turns out to be less natural in the end. We do not know if the associated categories are in fact derived equivalent.

\subsection{Kazhdan-Lusztig's cell theory}\label{s3.1}

In this subsection we recall some facts from the Kazhdan-Lusztig cell theory. Our main references here are \cite{KLCoxeter} and \cite{BjBr} and we refer the reader to these papers for details.
We will use the notation from \cite{SoKipp}. If $x\leq y$ then denote by $\mu(x,y)$ the coefficient of $v$ in the Kazhdan-Lusztig polynomial $h_{x,y}$ and extend it to a symmetric function $\mu:W\times W\rightarrow \mathbb{Z}$. In our normalization the formula \cite[(1.0.a)]{KLCoxeter} then reads as follows:
\begin{equation}\label{formula1}
\underline{H}_x\underline{H}_s=
\begin{cases}
\underline{H}_{xs}+\sum_{y<x,ys<y}\mu(y,x)\underline{H}_y, & xs>x;\\
(v+v^{-1})\underline{H}_{x}, & xs<x.
\end{cases}
\end{equation}
In particular, $\mu(x,xs)=\mu(xs,x)=1$ for any $x\in W$ and $s\in S$. For $w\in W$ define the {\em left} and the {\em right descent} sets of $w$ as follows:
\begin{displaymath}
D_\mathsf{L}(w):=\{s\in S\,:\, sw<w\},\quad D_\mathsf{R}(w):=\{s\in S\,:\, ws<w\}.
\end{displaymath}
Now for $x,y\in W$ we write $x\rightarrow_\mathsf{L} y$ provided that $\mu(x,y)\neq 0$ and there is some $s\in S$ such that $s\in D_{\mathsf{L}}(x)$ and $s\not\in D_\mathsf{L}(y)$. Denote by $\geq_{\mathsf{L}}$ the transitive closure of the relation $\rightarrow_{\mathsf{L}}$. The relation $\geq_\mathsf{L}$ is called the {\em left} pre-order on $W$. The equivalence classes with respect to $\geq_\mathsf{L}$ are called the {\em left cells}. The fact that $x,y\in W$ belong to the same left cell will be denoted $x\sim_{\mathsf{L}}y$. The {\it right} versions $\geq_\mathsf{R}$ and $\sim_{\mathsf{R}}$ of the above are obtained by applying the involution $x\mapsto x^{-1}$, which yields the notion of {\em right cells}. Given a right cell $\mathbf{R}\subset W$, the $\mathbb{C}[v,v^{-1}]$-span $X$ of $\underline{H}_x$, $x\geq_{\mathsf{R}} \mathbf{R}$, carries a natural structure of a right $\mathds{H}$-module via \eqref{formula1}. The $\mathbb{C}[v,v^{-1}]$-span $Y$ of $\underline{H}_x$, $x>_\mathsf{R} \mathbf{R}$, is a submodule of $X$. The $\mathds{H}$-module $X/Y$ is called the {\em (right) cell module} associated with $\mathbf{R}$ and will be denoted by $S(\mathbf{R})$. We leave it as an exercise to the reader to verify that our definition of a cell module in fact agrees with the one from \cite{KLCoxeter}.

\subsection{Presentable modules}\label{s3.105}

Here we would like to recall the construction of the category of presentable modules from \cite{Au}, a basic construction which will be crucial in the sequel. Let $\mathscr{A}$ be an abelian category and $\mathscr{B}$ be a full additive subcategory of $\mathscr{A}$. Denote by $\overline{\mathscr{B}}$ the full subcategory of $\mathscr{A}$ which consists of all $M\in \mathscr{A}$ for which there is an exact sequence $N_1\rightarrow N_0\rightarrow M\rightarrow 0$ with $N_1,N_0\in \mathscr{B}$. This exact sequence is called a $\mathscr{B}$-presentation of $M$. In the special case when $\mathscr{B}=\mathrm{add}(P)$ for some projective object $P\in \mathscr{A}$ we have that $\overline{\mathscr{B}}$ is equivalent to the category of right $\mathrm{End}_{\mathscr{A}}(P)$-modules, see \cite[Section~5]{Au}. In particular, $\overline{\mathrm{add}(P)}$ is abelian.
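To give a small illustration (an added example, using only the $\mathfrak{gl}_2$-algebra from Subsection~\ref{s2.6}): take $\mathscr{A}=A\mathrm{-mod}$ with $A$ the path algebra described there and $P=P(s)$. Then $\mathrm{End}_{\mathscr{A}}(P(s))\cong\mathbb{C}[x]/(x^2)$, so $\overline{\mathrm{add}(P(s))}$ is equivalent to the category of finite-dimensional $\mathbb{C}[x]/(x^2)$-modules; this is an ungraded shadow of the simplest instance ($W=S_2$, $\mathbf{R}=\{w_0\}$) of the categories $\mathscr{C}^{\mathbf{R}}$ constructed below.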
\subsection{Categorification of cell modules}\label{s3.2}

Let $\mathbf{R}$ be a right cell of $W$. Set
\begin{displaymath}
\mathbf{\hat{R}}=\{w\in W\,:\,w\leq_\mathsf{R} x \text{ for some }x\in \mathbf{R}\}.
\end{displaymath}
Let $\mathcal{O}_0^{\mathbf{\hat{R}}}$ denote the full subcategory of $\mathcal{O}_0$, whose objects are all $M\in \mathcal{O}_0$ such that each composition subquotient of $M$ has the form $L(w)$, $w\in \mathbf{\hat{R}}$. For example if $\mathbf{R}=\{e\}$, the category $\mathcal{O}_0^{\mathbf{\hat{R}}}$ contains only finite direct sums of copies of the trivial $\mathfrak{g}$-module. In any case, the inclusion functor $\mathfrak{i}^{\mathbf{\hat{R}}}: \mathcal{O}_0^{\mathbf{\hat{R}}}\hookrightarrow \mathcal{O}_0$ is exact and has as left adjoint the functor $\mathrm{Z}^{\mathbf{\hat{R}}}$ which picks out the maximal quotient contained in $\mathcal{O}_0^{\mathbf{\hat{R}}}$. In particular, the indecomposable projective modules in $\mathcal{O}_0^{\mathbf{\hat{R}}}$ are the $P^{\mathbf{\hat{R}}}(w)=\mathrm{Z}^{\mathbf{\hat{R}}}P(w)$, $w\in \mathbf{\hat{R}}$.
\begin{remark}\label{rem2}
{\rm If $\mathbf{R}$ contains $w_0^{\mathfrak{p}}w_0$, where $w_0^{\mathfrak{p}}$ is the longest element in the parabolic subgroup of $W$ corresponding to a parabolic subalgebra $\mathfrak{p}\supset\mathfrak{b}$ of $\mathfrak{g}$, then $\mathcal{O}_0^{\mathbf{\hat{R}}}=\mathcal{O}_0^{\mathfrak{p}}$, the principal block of the parabolic category $\mathcal{O}$ in the sense of \cite{RC}. This follows from \cite[Proposition~6.2.7]{BjBr} and the fact that all simple modules in $\mathcal{O}_0^{\mathfrak{p}}$ can be obtained as subquotients of translations of the simple tilting module in $\mathcal{O}_0^{\mathfrak{p}}$ (as shown in \cite{CI}).
}
\end{remark}
\begin{proposition}\label{prop5}
Let $\mathbf{R}$ be a right cell of $W$.
\begin{enumerate}[(i)]
\item\label{prop5.1} The category $\mathcal{O}_0^{\mathbf{\hat{R}}}$ is stable under $\theta_s^l$, $s\in S$.
\item\label{prop5.2} The additive category generated by $P^{\mathbf{\hat{R}}}(w)$, $w\in \mathbf{R}$, is stable under $\theta_s^l$, $s\in S$.
\end{enumerate}
\end{proposition}
\begin{proof}
To prove \eqref{prop5.1} it is enough to show that $\theta_s^lL(w)\in \mathcal{O}_0^{\mathbf{\hat{R}}}$ for all $w\in \mathbf{\hat{R}}$. For $z\in W$ using the self-adjointness of $\theta_s^l$, equation \eqref{formula1} and Proposition~\ref{prop3}\eqref{prop3.3} we have:
\begin{eqnarray*}
\mathrm{Hom}_{\mathcal{O}}(P(z),\theta_s^l L(w)) &=&\mathrm{Hom}_{\mathcal{O}}(\theta_s^l P(z),L(w))\\
&=&
\begin{cases}
\mathrm{Hom}_{\mathcal{O}}(P(z)\oplus P(z),L(w)), & zs<z;\\
\mathrm{Hom}_{\mathcal{O}}(P(zs)\oplus {\displaystyle\bigoplus_{y<z,ys<y}}P(y)^{\mu(y,z)},L(w)), & zs>z.
\end{cases}
\end{eqnarray*}
The latter space can be non-zero only in the following cases: $z=w$ or $zs=w>z$, or, finally, $w<z$ where $ws<w$ and $\mu(w,z)\neq 0$. In all these cases $w\in \mathbf{\hat{R}}$ implies $z\in \mathbf{\hat{R}}$ and \eqref{prop5.1} follows. To prove \eqref{prop5.2} we use \eqref{prop5.1} and note that $\theta_s^l$ maps projectives from $\mathcal{O}_0^{\mathbf{\hat{R}}}$ to projectives from $\mathcal{O}_0^{\mathbf{\hat{R}}}$ since it is self-adjoint. Now take $x\in \mathbf{R}$. Then $\theta_s^lP^{\mathbf{\hat{R}}}(x)$ is a direct sum of some $P^{\mathbf{\hat{R}}}(y)$'s. The possible $y$'s to occur are given by \eqref{formula1}, hence either $y=x$ or $y\in \mathbf{R}$, or $y=xs>x$. In the last case we have either $y\in \mathbf{R}$ or $y\not\in \mathbf{\hat{R}}$, which is not possible since $\theta_s^l$ preserves $\mathcal{O}_0^{\mathbf{\hat{R}}}$ by \eqref{prop5.1}. This completes the proof.
\end{proof}
We know already that the indecomposable projective module $P(x)\in\mathcal{O}_0$ has a {\em standard} graded lift $\mathtt{P}(x)$ for all $x\in W$ (for the definition of a graded lift we refer to \cite[Section 3]{Stgrad}; here and further a {\em standard} graded lift of a projective or simple or standard module is the lift in which the top of the module is concentrated in degree zero). Now for $x\in \mathbf{\hat{R}}$ the module $P^{\mathbf{\hat{R}}}(x)=\mathrm{Z}^{\mathbf{\hat{R}}}P(x)$ is the quotient of $P(x)$ modulo the trace of all $P(y)$ such that $y\not \leq_{\mathsf{R}}x$. The corresponding quotient $\mathtt{P}^{\mathbf{\hat{R}}}(x)$ of $\mathtt{P}(x)$ is then a standard graded lift of $P^{\mathbf{\hat{R}}}(x)$. Let $\mathcal{P}^{\mathbf{R}}$ be the additive category, closed under grading shifts, and generated by $\mathtt{P}^{\mathbf{\hat{R}}}(w)$, $w\in \mathbf{R}$. This category is the graded version of the additive category from Proposition~\ref{prop5}\eqref{prop5.2}. Set $\mathscr{C}^{\mathbf{R}}=\overline{\mathcal{P}^{\mathbf{R}}}$ (see Subsection~\ref{s3.105}), which is equivalent to the category of graded finite-dimensional right modules over the algebra $B^{\mathbf{R}}:= \mathrm{End}_{\mathcal{O}_0} (\oplus_{w\in \mathbf{R}} P^{\mathbf{\hat{R}}}(w))$, which inherits a grading from the algebra $A$ (Subsection~\ref{s25.5}). From Proposition~\ref{prop5}\eqref{prop5.2} it follows that $\mathscr{C}^{\mathbf{R}}$ is closed under $\theta_s^l$, $s\in S$.
Our first result is the following statement (we recall that $\mathbb{Z}((v))$ denotes the ring of formal Laurent series in $v$ with integer coefficients):
\begin{theorem}[Categorification of cell modules]\label{thm5}
{\tiny .}
\begin{enumerate}[(i)]
\item\label{thm5.0} There is a unique monomorphism of $\mathds{H}$-modules such that
\begin{eqnarray*}
\mathcal{E}^{\mathbf{R}}:\quad\quad\quad S(\mathbf{R}) & \longrightarrow & \left[ \mathscr{C}^{\mathbf{R}} \right] \\
\underline{H}_w & \mapsto & \left[\mathtt{P}^{\mathbf{\hat{R}}}(w)\right].
\end{eqnarray*}
\item\label{thm5.1} The monomorphism $\mathcal{E}^{\mathbf{R}}$ defines a precategorification $(\mathscr{C}^{\mathbf{R}},\mathcal{E}^{\mathbf{R}}, \{\theta_s^l\}_{s\in S})$ and induces a categorification $(\mathcal{P}^{\mathbf{R}},\mathcal{E}^{\mathbf{R}}, \{\theta_s^l\}_{s\in S})$ of the right cell $\mathds{H}$-module $S(\mathbf{R})$ with respect to the generators $H_s$, $s\in S$.
\item\label{thm5.2} The monomorphism $\mathcal{E}^{\mathbf{R}}$ from \eqref{thm5.0} extends uniquely to a $\mathbb{Z}((v))$-ca\-te\-go\-ri\-fi\-ca\-tion $(\mathscr{C}^{\mathbf{R}},\mathcal{E}^{\mathbf{R}},\{\theta_s^l\}_{s\in S})$ of the right cell $\mathds{H}^{\mathbb{Z}((v))}$-module $S(\mathbf{R})^{\mathbb{Z}((v))}$ with respect to the generators $H_s$, $s\in S$.
\end{enumerate}
\end{theorem}
\begin{proof}
The statement \eqref{thm5.0} follows from Proposition~\ref{prop3}\eqref{prop3.1}, Proposition~\ref{prop5}\eqref{prop5.2} and the definitions. The statement \eqref{thm5.1} follows from \eqref{thm5.0}. Note that $B^{\mathbf{R}}$ has infinite homological dimension in general. Hence the statement \eqref{thm5.2} follows from \eqref{thm5.1} as the extension of scalars from $\mathbb{Z}[v,v^{-1}]$ to $\mathbb{Z}((v))$ allows one to work with infinite projective resolutions.
\end{proof}

\subsection{Remarks on another categorification of cell modules}\label{s3.244}

Formula \eqref{formula1} suggests another way to categorify cell modules. For a right cell $\mathbf{R}$ of $W$ set
\begin{displaymath}
\mathbf{\check{R}}=\{w\in W\,:\,x\leq_\mathsf{R} w \text{ for some }x\in \mathbf{R}\}
\end{displaymath}
(note the difference to $\mathbf{\hat{R}}$). Let $\mathscr{A}$ denote the additive category generated by $P(w)$, $w\in \mathbf{\check{R}}$. Denote also by $\mathscr{A}'$ the additive category generated by $P(w)$, $w\in \mathbf{\check{R}}\setminus \mathbf{R}$. Consider the categories $\mathcal{O}_0^{\mathbf{\check{R}}}=\overline{\mathscr{A}}$ and $\tilde{\mathcal{O}}_0^{\mathbf{\check{R}}}=\overline{\mathscr{A}'}$.
Note that if $\mathbf{R}$ contains $w_0^{\mathfrak{p}}$, where $w_0^{\mathfrak{p}}$ is the longest element in the parabolic subgroup $W_{\mathfrak{p}}$ of $W$ corresponding to a parabolic subalgebra $\mathfrak{p}\supset\mathfrak{b}$ of $\mathfrak{g}$, then $\mathcal{O}_0^{\mathbf{\check{R}}}$ coincides with the category of $\mathfrak{p}$-presentable modules in $\mathcal{O}_0$ (\cite[Section~2]{MS}) and is equivalent to ${}_{\mathbf{0}}\mathcal{H}^1_{\lambda}$, where $\lambda\in\mathfrak h^*_{dom}$ is integral and has stabilizer $W_{\mathfrak{p}}$ (\cite[Theorem~5.9(ii)]{BG}). Formula \eqref{formula1} and Proposition~\ref{prop3}\eqref{prop3.3} immediately imply that both the category $\mathcal{O}_0^{\mathbf{\check{R}}}$ and the category $\tilde{\mathcal{O}}_0^{\mathbf{\check{R}}}$ are stable under $\theta_s^l$, $s\in S$, and the `quotient' should be exactly the cell module. To define this `quotient' we let $\mathcal{Q}^{\mathbf{R}}$ denote the additive category, closed under grading shifts, and generated by $\mathtt{P}(w)$, $w\in \mathbf{R}$. Set $\mathscr{D}^{\mathbf{R}}= \overline{\mathcal{Q}^{\mathbf{R}}}$. The functors $\theta_s^l$, $s\in S$, do not preserve $\mathcal{Q}^{\mathbf{R}}$ unless $\mathbf{R}=\{w_0\}$. However, one can use them to define right exact functors $\tilde{\theta}_s^l$ on $\mathscr{D}^{\mathbf{R}}$ as follows. First we define the functor $\tilde{\theta}_s^l$ on the indecomposable projective module $\mathtt{P}(x)$. Let $s\in S$ and $x\in \mathbf{R}$. If $\theta_s^l\mathtt{P}(x)\in \mathcal{Q}^{\mathbf{R}}$, we set $\tilde{\theta}_s^l\mathtt{P}(x)=\theta_s^l\mathtt{P}(x)$, otherwise \eqref{formula1} gives
\begin{displaymath}
\theta_s^l\mathtt{P}(x)=\mathtt{P}(xs)\oplus \bigoplus_{y<x,ys<y}\mathtt{P}(y)^{\mu(y,x)}.
\end{displaymath}
This decomposition into two summands is unique since the first summand coincides with the trace of the module $\mathtt{P}(xs)$ in $\theta_s^l\mathtt{P}(x)$ and the second summand coincides with the trace of the module $\oplus_{w\in \mathbf{R}}\mathtt{P}(w)$ in $\theta_s^l\mathtt{P}(x)$. Hence we can define $\tilde{\theta}_s^l\mathtt{P}(x)= \bigoplus_{y<x,ys<y}\mathtt{P}(y)^{\mu(y,x)}$ and define $\tilde{\theta}_s^l$ on morphisms via restriction. In the standard way $\tilde{\theta}_s^l$ extends uniquely to a right exact endofunctor on $\mathscr{D}^{\mathbf{R}}$. We do not know if $\tilde{\theta}_s^l$ is exact. By \eqref{formula1}, the action of $\tilde{\theta}_s^l$, $s\in S$, on the Grothendieck group of $\mathcal{D}^b(\mathscr{D}^{\mathbf{R}})$ coincides with the action of $\underline{H}_s$ on $S(\mathbf{R})$ and hence we obtain another categorification of the cell module $S(\mathbf{R})$. We do not know whether this categorification is (derived) equivalent to the one constructed in Theorem~\ref{thm5} or not. The principal disadvantage of this categorification is that we do not know to which extent our uniqueness result from Subsection~\ref{s4.1} holds in this setup.
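To illustrate the difference between $\mathbf{\hat{R}}$ and $\mathbf{\check{R}}$ (an added example; compare with the Hasse diagram and the table in the next subsection): for $W=S_3$ and $\mathbf{R}=\{s,st\}$ one obtains $\mathbf{\hat{R}}=\{e,s,st\}$, whereas $\mathbf{\check{R}}=\{s,st,w_0\}$.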
\subsection{$\mathfrak{gl}_3$-example}\label{s3.245}

Let $W=\langle s,t\rangle\cong S_3$. Then there are four right cells and the Hasse diagram of the right order is as follows:
\begin{displaymath}
\xymatrix@!=0.6pc{ &\{e\}\ar@{-}[dl]_{\geq_\mathsf{R}}\ar@{-}[dr]^{\leq_\mathsf{R}} &\\ \{s,st\}\ar@{-}[dr]_{\leq_\mathsf{R}}&&\{t,ts\}\ar@{-}[dl]^{\geq_\mathsf{R}}\\ &\{w_0\}.& }
\end{displaymath}
Consider first the case $\mathbf{R}=\{w_0\}$, where we have $\mathcal{O}_0^{\widehat{\{w_0\}}}= \mathcal{O}_0$. It contains all simple modules $L(w)$, $w\in S_3$. The presentation of this category as a module category over a finite-dimensional algebra can be found in \cite[5.1.2]{St3}. The graded filtrations of the indecomposable projective modules (with indicated Verma subquotients) in this case are shown on Figure~\ref{figwam}.
\begin{figure}
\caption{Indecomposable projectives in $\mathcal{O}_0$ with their graded filtrations (Verma subquotients indicated).}\label{figwam}
\end{figure}
The category $\mathcal{P}^{\{w_0\}}$ contains (up to grading shift) a unique indecomposable module, namely $P^{\{w_0\}}(w_0)$. The algebra $B^{\{w_0\}}=\operatorname{End}_\mathfrak{g}(P^{\{w_0\}}(w_0))$ is the coinvariant algebra of $W$, see \cite[Endomorphismensatz]{Sperv}. Below we collect the analogous information for the three other choices of right cells; in particular, we present all the algebras which appear there in terms of quivers and relations.
{\tiny
\begin{displaymath}
\begin{array}{|c||c|c|c|}
\hline
\mathbf{R} & \{e\} & \{s,st\} & \{t,ts\} \\
\hline
\text{Simple modules:}& e & e, s, st & e, t, ts\\
\hline
\text{Projective modules:}& e & \begin{array}{c|c|c} P(e) & P(s) & P(st)\\ \hline \begin{array}{c}e\\s\\\text{\hspace{2mm}} \end{array}& \begin{array}{ccc}&s&\\st&&e\\&s&\end{array}& \begin{array}{c}st\\s\\st\end{array} \end{array}& \begin{array}{c|c|c} P(e) & P(t) & P(ts)\\ \hline \begin{array}{c}e\\t\\\text{\hspace{2mm}} \end{array}& \begin{array}{ccc}&t&\\ts&&e\\&t&\end{array}& \begin{array}{c}ts\\t\\ts\end{array} \end{array} \\
\hline
\text{Quiver of $\mathcal{O}_0^{\mathbf{\hat{R}}}$:}& e & \begin{array}{c}\xymatrix{st\ar@/^/[r]^{\alpha}& s\ar@/^/[r]^{\gamma}\ar@/^/[l]^{\beta}&e\ar@/^/[l]^{\delta}} \\ \beta\delta=\gamma\alpha=\gamma\delta=0\\ \alpha\beta=\delta\gamma\end{array} &\begin{array}{c}\xymatrix{ts\ar@/^/[r]^{\alpha}& t\ar@/^/[r]^{\gamma}\ar@/^/[l]^{\beta}&e\ar@/^/[l]^{\delta}} \\ \beta\delta=\gamma\alpha=\gamma\delta=0\\ \alpha\beta=\delta\gamma\end{array} \\
\hline
\text{Quiver of $\mathscr{C}^{\mathbf{R}}$:}& e & \xymatrix{st\ar@/^/[r]^{\alpha}&s\ar@/^/[l]^{\beta}} \begin{array}{l}\alpha\beta\alpha=\\\beta\alpha\beta=0\end{array}& \xymatrix{ts\ar@/^/[r]^{\alpha}&t\ar@/^/[l]^{\beta}} \begin{array}{l}\alpha\beta\alpha=\\\beta\alpha\beta=0\end{array}\\
\hline
\end{array}
\end{displaymath}
}
In the above example the category $\mathcal{O}^{\mathbf{\hat{R}}}_0$ always coincides with some parabolic category $\mathcal{O}^{\mathfrak{p}}_0$. This is not the case in general. The smallest such example is the right cell $\{s_1s_3,s_1s_3s_2\}$ of $S_4$.
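As a consistency check (an added remark): the split Grothendieck groups of the categories $\mathcal{P}^{\mathbf{R}}$ appearing above are free over $\mathbb{Z}[v,v^{-1}]$ of ranks $1,2,2,1$, matching the number of elements of the respective right cells and hence the ranks of the cell modules $S(\mathbf{R})$; the two rank-two cells lie in the same two-sided cell and give isomorphic cell modules, in accordance with the next subsection.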
\subsection{Specht modules}\label{s3.3}

In the special case $W=S_n$ we denote $\mathds{H}=\mathds{H}_n$. The (right) cell modules are exactly the irreducible $\mathds{H}_n$-modules, \cite[Theorem~1.4]{KLCoxeter}. However, cell modules for different right cells (namely if they are in the same double cell) might be isomorphic. Theorem~\ref{thm5} therefore gives (several) categorifications for each irreducible $\mathds{H}$-module. If we specialize $v=1$ (i.e. we forget the grading) and work over a field of characteristic zero, the irreducible modules for the Hecke algebra specialize to irreducible modules for the symmetric group (for an explicit description see for example \cite{Na}), hence we get categorifications of Specht modules. In the special situation of Remark~\ref{rem2} we obtain the categorification of Specht modules constructed in \cite{KMS}. Every cell module has a symmetric, non-degenerate, $\mathds{H}_n$-invariant bilinear form $\langle\cdot,\cdot\rangle$ with values in $\mathbb{Z}[v,v^{-1}]$, which is unique up to a scalar, see \cite[page~114]{Murphy}. There is a categorical interpretation of this form as follows. For any $\mathbb Z$-graded complex vector space $M=\oplus_{j\in\mathbb Z} M^j$ let $h(M)=\sum_{j\in\mathbb Z}( \dim_{\mathbb{C}} M^j)v^{j}\in\mathbb Z[v,v^{-1}]$ be the corresponding Hilbert polynomial. For all $M$, $N\in\mathscr{C}^{\mathbf{R}}$ and all $i\in\mathbb{Z}$ the vector space $E^i(M,N):= \operatorname{Ext}^i_{\mathscr{C}^{\mathbf{R}}}(M,N)$ is $\mathbb Z$-graded in the natural way. Set $h(E(M,N))=\sum_{i\in\mathbb Z}(-1)^i h(E^i(M,N))$. Let $\operatorname{d}$ denote the graded lift of the standard duality on $\mathcal{O}_0$, restricted to the category $\mathscr{C}^{\mathbf{R}}$.
\begin{proposition}\label{bilinearform}
The form
\begin{displaymath}
\beta(\cdot,\cdot):=h(E(\cdot,\operatorname{d}(\cdot))): \mathscr{C}^{\mathbf{R}}\times \mathscr{C}^{\mathbf{R}} \rightarrow\mathbb Z((v))
\end{displaymath}
descends to a symmetric, non-degenerate, $\mathds{H}_n^{\mathbb Z((v))}$-invariant bilinear form $\langle\cdot,\cdot\rangle$ on the $\mathds{H}_n^{\mathbb Z((v))}$-module $[\mathscr{C}^{\mathbf{R}}]^{\mathbb Z((v))}$. The restriction of this form to $\left[\mathcal{P}^{\mathbf{R}}\right]_{\oplus}$ has values in $\mathbb{Z}[v,v^{-1}]$.
\end{proposition}
\begin{proof}
The same as the proof of \cite[Proposition~4]{KMS}.
\end{proof}

\section{Uniqueness of the categorification for type $A$}\label{s4}

In this section we stick to the case where $W=S_n$. In the previous section we constructed various categorifications for each single Specht module via cell modules. In this section we will show that all these categorifications are in fact equivalent. In particular, one can consider the categorification from \cite{KMS} as a kind of `universal one'.

\subsection{Equivalence of categories}\label{s4.1}

\begin{theorem}[Uniqueness Theorem]\label{thm6}
Let $\mathbf{R}_1$ and $\mathbf{R}_2$ be two right cells of $W=S_n$, which belong to the same double cell.
Then there is an equivalence of categories
\begin{displaymath}
\Phi=\Phi_{\mathbf{R}_1}^{\mathbf{R}_2}:\quad \mathscr{C}^{\mathbf{R}_1}\overset{\sim}{\rightarrow} \mathscr{C}^{\mathbf{R}_2},
\end{displaymath}
which (naturally) commutes with projective functors and induces an isomorphism of $\mathds{H}$-modules $[\mathscr{C}^{\mathbf{R}_1}]\cong [\mathscr{C}^{\mathbf{R}_2}]$.
\end{theorem}
We will only prove the ungraded version of this theorem. The graded version follows by standard arguments. For our proof we will need several new definitions and more notation. For any right cell $\mathbf{R}$ let $\mathscr{P}(\mathbf{R})$ denote the full additive subcategory of $\mathcal{O}$ generated by all indecomposable direct summands of the modules $E\otimes P^{\mathbf{\hat{R}}}(w)$, $w\in \mathbf{R}$, where $E$ runs through all finite-dimensional $\mathfrak{g}$-modules. Analogously we define $\mathscr{P}(\mathbf{\hat{R}})$ using the condition $w\in \mathbf{\hat{R}}$. Set $\mathcal{O}^{\mathbf{R}}= \overline{\mathscr{P}(\mathbf{R})}$ and $\mathcal{O}^{\mathbf{\hat{R}}}= \overline{\mathscr{P}(\mathbf{\hat{R}})}$. Denote by $\mathcal{O}_{\operatorname{int}}$ the full subcategory of $\mathcal{O}$ which consists of all modules with integral support (i.e. those modules $M$ such that each weight of $M$ is also a weight of some finite-dimensional module). Further, for $s\in S$ we denote by $\mathcal{O}^s_{\operatorname{int}}$ the integral part of the $s$-parabolic category, that is, the full subcategory of $\mathcal{O}_{\operatorname{int}}$ which consists of all modules which have only composition factors of the form $L(w\cdot\lambda)$, where $\lambda$ is an integral weight in $\mathfrak{h}_{dom}^*$, $sw\cdot \lambda\neq w\cdot \lambda$, and $sw>w$. For these categories we have the natural inclusion $\mathrm{i}_s:\mathcal{O}^s_{\operatorname{int}}\hookrightarrow \mathcal{O}_{\operatorname{int}}$ and we denote by $\mathrm{Z}_s$ and $\hat{\mathrm{Z}}_s$ the left and the right adjoint to this inclusion, respectively. These are the classical {\em Zuckerman functors}. If $\mathbf{R}$ is a right cell such that $\mathbf{R}\leq_\mathsf{R} sw_0$, then we have the natural inclusion $\mathrm{i}_s^{\mathbf{\hat{R}}}: \mathcal{O}^{\mathbf{\hat{R}}}\hookrightarrow \mathcal{O}^s_{\operatorname{int}}$ and we denote by $\mathrm{Z}_s^{\mathbf{\hat{R}}}$ and $\hat{\mathrm{Z}}_s^{\mathbf{\hat{R}}}$ respectively the left and the right adjoint to this inclusion. Let now $\mathbf{R}_1$ and $\mathbf{R}_2$ be two right cells. Assume that (see \cite[Proof of Theorem~1.4]{KLCoxeter})
\begin{equation}\label{Lcondition}
\exists\, s,t\in S\text{ and } w\in \mathbf{R}_1\text{ such that } (st)^3=e, sw\geq w, tw\leq w, tw\in \mathbf{R}_2.
\end{equation}
In this case we have the following picture:
\begin{displaymath}
\xymatrix{ \mathcal{D}^b(\mathcal{O}^s_{\operatorname{int}}) \ar@/^/[rr]^{\mathrm{i}_s} &&\mathcal{D}^b(\mathcal{O}_{\operatorname{int}}) \ar@/^/[rr]^{\mathcal{L}\mathrm{Z}_t\llbracket-1\rrbracket} \ar@/^/[ll]^{\mathcal{R}\hat{\mathrm{Z}}_s} && \mathcal{D}^b(\mathcal{O}^t_{\operatorname{int}}) \ar@/^/[ll]^{\mathrm{i}_{t}\llbracket1\rrbracket} }
\end{displaymath}
For this diagram we denote by $\mathrm{F}$ the composition from the left to the right and by $\mathrm{G}$ the composition from the right to the left. Directly from the definitions we have that $(\mathrm{F},\mathrm{G})$ is an adjoint pair of functors. Furthermore, there are adjoint pairs $(\mathrm{i}_s^{\mathbf{\hat{R}}_1},\hat{\mathrm{Z}}_s^{\mathbf{\hat{R}}_1})$ and $(\mathrm{Z}_t^{\mathbf{\hat{R}}_2},\mathrm{i}_t^{\mathbf{\hat{R}}_2})$ as follows:
\begin{displaymath}
\xymatrix{ \mathcal{O}^{\mathbf{\hat{R}}_1}\ar@/^/[rr]^{\mathrm{i}_s^{\mathbf{\hat{R}}_1}} && \mathcal{O}^s_{\operatorname{int}}\ar@/^/[ll]^{\hat{\mathrm{Z}}_s^{\mathbf{\hat{R}}_1}} }\quad\quad\quad\quad \xymatrix{ \mathcal{O}^t_{\operatorname{int}}\ar@/^/[rr]^{\mathrm{Z}_t^{\mathbf{\hat{R}}_2}} && \mathcal{O}^{\mathbf{\hat{R}}_2}\ar@/^/[ll]^{\mathrm{i}_t^{\mathbf{\hat{R}}_2}} }
\end{displaymath}
\begin{lemma}\label{l21}
The functors $\mathrm{F}$, $\mathrm{G}$, $\mathrm{i}_s^{\mathbf{\hat{R}}_1}$, $\hat{\mathrm{Z}}_s^{\mathbf{\hat{R}}_1}$, $\mathrm{i}_t^{\mathbf{\hat{R}}_2}$ and $\mathrm{Z}_t^{\mathbf{\hat{R}}_2}$ commute with the functors of tensoring with finite-\-di\-men\-si\-o\-nal $\mathfrak{g}$-modules, in particular with projective functors.
\end{lemma}
\begin{proof}
Since all involved categories are stable under tensoring with finite-di\-men\-si\-o\-nal $\mathfrak{g}$-modules by definition, all involved inclusions commute with these functors. We will show how one derives from this that $\mathrm{Z}_t^{\mathbf{\hat{R}}_2}$ commutes with tensoring with finite-dimensional $\mathfrak{g}$-modules. For all other functors the arguments are similar and therefore omitted. Let $E$ be a finite-dimensional $\mathfrak{g}$-module. For each $M\in \mathcal{O}^t_{\operatorname{int}}$, from the definition of $\mathrm{Z}_t^{\mathbf{\hat{R}}_2}$ we have the canonical projection $M\twoheadrightarrow \mathrm{Z}_t^{\mathbf{\hat{R}}_2}M$ with kernel $K_M$. Denote $\theta:=E\otimes{}_-$, and let $P$ be a projective module in $\mathcal{O}^t_{\operatorname{int}}$ and $f\in \mathrm{End}_{\mathfrak{g}}(P)$.
Consider the following diagram:
\begin{equation}\label{eqses1}
\xymatrix{
&\theta K_{P}\ar@{^{(}->}[rr]\ar[ld]_{\overline{\theta f}}
\ar[dd]_>>>>>>>{\varphi}&&
\theta P\ar@{->>}[rr]\ar@{=}[dd]\ar[ld]_{\theta f}&&
\theta \mathrm{Z}_t^{\mathbf{\hat{R}}_2}P
\ar[ld]^{\theta\mathrm{Z}_t^{\mathbf{\hat{R}}_2} f}
\ar[dd]_>>>>>>>{\varphi'}\\
\theta K_{P}\ar@{^{(}->}[rr]\ar[dd]_{\varphi} &&
\theta P\ar@{->>}[rr]\ar@{=}[dd]&&\theta\mathrm{Z}_t^{\mathbf{\hat{R}}_2}P
\ar[dd]_>>>>>>>{\varphi'}& \\
&K_{\theta P}\ar@{^{(}->}[rr]\ar[ld]_{\overline{\theta f}}&&
\theta P\ar@{->>}[rr]\ar[ld]_{\theta f}&&
\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\theta P
\ar[ld]^{\mathrm{Z}_t^{\mathbf{\hat{R}}_2} \theta f} \\
K_{\theta P}\ar@{^{(}->}[rr] &&
\theta P\ar@{->>}[rr]&&\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\theta P&
}
\end{equation}
Both modules, $\theta\mathrm{Z}_t^{\mathbf{\hat{R}}_2} P$ and $\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\theta P$, are obviously projective in $\mathcal{O}^{\mathbf{\hat{R}}_2}$. Let $\theta'$ be the adjoint of $\theta$. Then for any simple module $L\in \mathcal{O}^{\mathbf{\hat{R}}_2}$ we have
\begin{eqnarray*}
\mathrm{Hom}_{\mathfrak{g}}(\theta\mathrm{Z}_t^{\mathbf{\hat{R}}_2} P,L)&=&
\mathrm{Hom}_{\mathfrak{g}}(\mathrm{Z}_t^{\mathbf{\hat{R}}_2} P,\theta' L)\\
&=&\mathrm{Hom}_{\mathfrak{g}}(P,\theta' L)\\
&=&\mathrm{Hom}_{\mathfrak{g}}(\theta P, L)\\
&=&\mathrm{Hom}_{\mathfrak{g}}(\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\theta P, L).
\end{eqnarray*}
Hence $\theta\mathrm{Z}_t^{\mathbf{\hat{R}}_2} P\cong \mathrm{Z}_t^{\mathbf{\hat{R}}_2}\theta P$. In particular, by definition of $\mathrm{Z}_t^{\mathbf{\hat{R}}_2}$, we have that $\theta K_{P}$ coincides with the maximal submodule of $\theta P$, whose head consists only of simple modules not in $\mathcal{O}^{\mathbf{\hat{R}}_2}$. In particular, the identity map on $\theta P$ restricts to an isomorphism $\varphi:\theta K_{P}\overset{\sim}{\longrightarrow} K_{\theta P}$, and induces the isomorphism $\varphi':\theta\mathrm{Z}_t^{\mathbf{\hat{R}}_2} P \overset{\sim}{\longrightarrow} \mathrm{Z}_t^{\mathbf{\hat{R}}_2}\theta P$. It follows that the cube on the left and the front, back, top and bottom faces of \eqref{eqses1} commute. Therefore the face pointing to the right commutes as well. This implies $\theta\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\cong \mathrm{Z}_t^{\mathbf{\hat{R}}_2}\theta$ since both functors are right exact.
\end{proof}

\begin{proposition}\label{p22}
Assume that $\mathscr{P}(\mathbf{R}_1)$ has a simple projective module $L$. Then $\mathscr{P}(\mathbf{R}_2)$ has a simple projective module $L'$ given by $\mathrm{Z}_t^{\mathbf{\hat{R}}_2} \mathrm{F}\mathrm{i}_s^{\mathbf{\hat{R}}_1}L$.
\end{proposition}

To prove Proposition~\ref{p22} we will need a series of auxiliary statements. We start with verifying that the expression $\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\,\mathrm{F}\,\mathrm{i}_s^{\mathbf{\hat{R}}_1}L$ makes sense, i.e. that it gives a module:

\begin{lemma}\label{p22-l1}
Let $X=L$ or $X=L(x)$ for some $x\in\mathbf{R}_1$. Then $\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\,\mathrm{F}\,\mathrm{i}_s^{\mathbf{\hat{R}}_1}X\in \mathcal{O}^{\mathbf{\hat{R}_2}}$.
\end{lemma}

\begin{proof}
The module $X$ does not belong to $\mathcal{O}^t_{\operatorname{int}}$ because of the conditions \eqref{Lcondition}. Hence by \cite[Proposition~4.2]{EW} we have $\mathcal{L}_i\mathrm{Z}_t\, X=0$ for $i=0,2$ and $\mathcal{L}_1\mathrm{Z}_t\, X\in \mathcal{O}^{t}_{\operatorname{int}}$. Thus $\mathrm{F}X\in \mathcal{O}^{t}_{\operatorname{int}}$ and hence $\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\,\mathrm{F}\,\mathrm{i}_s^{\mathbf{\hat{R}}_1}X \in \mathcal{O}^{\mathbf{\hat{R}_2}}$.
\end{proof}

\begin{lemma}\label{p22-l2}
\begin{enumerate}[(i)]
\item\label{p22-l2.1} $L':=\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\,\mathrm{F}\,\mathrm{i}_s^{\mathbf{\hat{R}}_1}L$ is a simple module.
\item\label{p22-l2.2} For each $L(x)$, $x\in\mathbf{R}_1$, the module $\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\,\mathrm{F}\,\mathrm{i}_s^{\mathbf{\hat{R}}_1}L(x)$ is simple and has the form $L(y)$ for some $y\in \mathbf{R}_2$. Moreover, the map $\varphi:x\mapsto y$ is a bijection from $\mathbf{R}_1$ to $\mathbf{R}_2$.
\end{enumerate}
\end{lemma}

\begin{proof}
Let $L(z)\in \mathcal{O}_0$ be the (unique) simple module which translates to $L\in\mathscr{P}(\mathbf{R}_1)$ via translations to walls (see e.g. \cite[4.12 (3)]{Ja2}). By \cite[Theorem~2]{MS3}, \cite[Theorem~6.3]{AS} and \cite[Theorem~7.8]{AS} we have
\begin{displaymath}
\mathcal{L}_1\mathrm{Z}_t\, L(z)\cong L(tz)\oplus\bigoplus_y L(y)^{a_y},
\end{displaymath}
where $tz\in\mathbf{R}_2$, and $a_y\neq 0$ implies that $y\neq tz$ but both $y$ and $tz$ belong to the same left cell. Since the intersection of a left and a right cell inside a common two-sided cell consists of exactly one element (by the Robinson-Schensted correspondence, see e.g. \cite[3.1]{Sa}), the latter restrictions give that $a_y\neq 0$ implies $y\not\in \mathbf{R}_2$. Hence $\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\mathcal{L}_1\mathrm{Z}_t\, L(z)$ is a simple module. Translating this onto the walls we obtain that the module $L'$ is simple. This proves \eqref{p22-l2.1} and also \eqref{p22-l2.2} for the module $L(z)$. For other $x\in \mathbf{R}_1$ the proof is just the same as for $L(z)$. The fact that $\varphi:\mathbf{R}_1\rightarrow\mathbf{R}_2$ is a bijection follows from \cite[Section~4]{KLCoxeter}.
\end{proof}

\begin{lemma}\label{p22-l3}
\begin{enumerate}[(i)]
\item\label{p22-l3.1} $L=\hat{\mathrm{Z}}_s^{\mathbf{\hat{R}}_1}\,\mathrm{G}\,\mathrm{i}_t^{\mathbf{\hat{R}}_2}\,L'$.
\item\label{p22-l3.2} For any $x\in \mathbf{R}_1$ we have $L(x)=\hat{\mathrm{Z}}_s^{\mathbf{\hat{R}}_1}\,\mathrm{G}\,\mathrm{i}_t^{\mathbf{\hat{R}}_2}\,L(\varphi(x))$.
\end{enumerate}
\end{lemma}

\begin{proof}
Analogous to the proof of Lemma~\ref{p22-l2}.
\end{proof}

As $L\in\mathscr{P}(\mathbf{R}_1)$, the category $\mathscr{P}(\mathbf{R}_1)$ is equivalent to the additive closure of the category with objects $L\otimes E$, where $E$ runs through all finite-dimensional $\mathfrak{g}$-modules. Set $\tilde{\mathrm{F}}=\mathrm{Z}_t^{\mathbf{\hat{R}}_2}\mathrm{F}\mathrm{i}_s^{\mathbf{\hat{R}}_1}$, $\tilde{\mathrm{G}}=\hat{\mathrm{Z}}_s^{\mathbf{\hat{R}}_1}\mathrm{G}\mathrm{i}_t^{\mathbf{\hat{R}}_2}$ and $\mathscr{Q}=\tilde{\mathrm{F}}\mathscr{P}(\mathbf{R}_1)$.

\begin{lemma}\label{l23}
\begin{enumerate}[(i)]
\item\label{l23.1} The functors $\tilde{\mathrm{F}}$ and $\tilde{\mathrm{G}}$ define mutually inverse equivalences between $\mathscr{P}(\mathbf{R}_1)$ and $\mathscr{Q}$.
\item\label{l23.2} $\mathscr{Q}$ is equivalent to the additive closure of the category with objects $L'\otimes E$, where $E$ runs through all finite-dimensional $\mathfrak{g}$-modules.
\end{enumerate}
\end{lemma}

\begin{proof}
We have already seen that $\tilde{\mathrm{F}}L=L'$ and $\tilde{\mathrm{G}}L'=L$. By Lemma~\ref{l21} we thus have that
\begin{equation}\label{eq12321}
\tilde{\mathrm{G}}\tilde{\mathrm{F}}\,(E\otimes L)\cong E\otimes L\quad
\text{ and }\quad
\tilde{\mathrm{F}}\tilde{\mathrm{G}}\,(E\otimes L')\cong E\otimes L'
\end{equation}
for any finite-dimensional $\mathfrak{g}$-module $E$. By definition, we have the adjoint pair $(\tilde{\mathrm{F}},\tilde{\mathrm{G}})$. Consider the adjunction morphisms $\mathrm{adj}:\tilde{\mathrm{F}}\tilde{\mathrm{G}}\rightarrow\mathrm{ID}$ and $\overline{\mathrm{adj}}:\mathrm{ID}\rightarrow \tilde{\mathrm{G}}\tilde{\mathrm{F}}$. Then the adjunction property says that $\mathrm{adj}_{\tilde{\mathrm{F}}()}\circ \tilde{\mathrm{F}}(\overline{\mathrm{adj}})=\mathrm{id}$. In particular $\mathrm{adj}_{E\otimes L'}$ must be surjective, hence an isomorphism by \eqref{eq12321}. Similarly $\overline{\mathrm{adj}}_{E\otimes L}$ is an isomorphism. This proves statement \eqref{l23.1}, and statement \eqref{l23.2} then follows from \eqref{l23.1} and Lemma~\ref{l21}.
\end{proof}

Let now $\mathscr{Y}_1$ denote the full subcategory of $\mathcal{O}_0$, whose objects are the $P^{\mathbf{\hat{R}_1}}(x)$ and the $L(x)$, $x\in \mathbf{R}_1$.
Denote further by $\mathscr{Y}_2$ the full subcategory of $\mathcal{O}_0$ whose objects are $\tilde{\mathrm{F}}\, P^{\mathbf{\hat{R}_1}}(x)$, $x\in \mathbf{R}_1$, and $L(y)$, $y\in \mathbf{R}_2$. Lemma~\ref{l23} can be refined as follows:

\begin{lemma}\label{p22-l4}
The functors $\tilde{\mathrm{F}}$ and $\tilde{\mathrm{G}}$ induce mutually inverse equivalences of categories between $\mathscr{Y}_1$ and $\mathscr{Y}_2$.
\end{lemma}

\begin{proof}
By definition and Lemma~\ref{l23}, $\tilde{\mathrm{F}}\, P^{\mathbf{\hat{R}_1}}(x)\in \mathscr{Y}_2$ for all $x\in \mathbf{R}_1$, and $\tilde{\mathrm{G}}\tilde{\mathrm{F}}\, P^{\mathbf{\hat{R}_1}}(x)\in \mathscr{Y}_1$ for all $x\in \mathbf{R}_1$. Analogously to the proof of Lemma~\ref{p22-l2} one shows that for each $x\in \mathbf{R}_1$ we have $\tilde{\mathrm{F}}\, L(x)\cong L(y)$ for some $y\in \mathbf{R}_2$, and that for each $y\in \mathbf{R}_2$ we have $\tilde{\mathrm{G}}\, L(y)\cong L(x)$ for some $x\in \mathbf{R}_1$. Hence $\tilde{\mathrm{F}}:\mathscr{Y}_1\rightarrow \mathscr{Y}_2$ and $\tilde{\mathrm{G}}:\mathscr{Y}_2\rightarrow \mathscr{Y}_1$. That these functors are mutually inverse equivalences is proved in the same way as in Lemma~\ref{l23}.
\end{proof}

For $x\in \mathbf{R}_1$ set $N_x=\tilde{\mathrm{F}}\, P^{\mathbf{\hat{R}_1}}(x)$.

\begin{corollary}\label{p22-c5}
For every $x\in \mathbf{R}_1$ we have $P^{\mathbf{\hat{R}_2}}(\varphi(x)) \twoheadrightarrow N_x$.
\end{corollary}

\begin{proof}
Using Lemmas~\ref{p22-l1}--\ref{p22-l4}, for any $x\in \mathbf{R}_1$ and $y\in \mathbf{R}_2$ we have
\begin{displaymath}
\begin{array}{rcl}
\mathrm{Hom}_{\mathfrak{g}}(N_x,L(y))&=&
\mathrm{Hom}_{\mathfrak{g}}(\tilde{\mathrm{F}}\, P^{\mathbf{\hat{R}_1}}(x),L(y))\\
&=&\mathrm{Hom}_{\mathfrak{g}}( P^{\mathbf{\hat{R}_1}}(x), \tilde{\mathrm{G}}\,L(y))\\
&=&\mathrm{Hom}_{\mathfrak{g}}( P^{\mathbf{\hat{R}_1}}(x), L(\varphi^{-1}(y)))\\
&=& \begin{cases} \mathbb{C},&\varphi(x)=y;\\0,& \text{otherwise},\end{cases}
\end{array}
\end{displaymath}
and the claim follows.
\end{proof}

\begin{lemma}\label{p22-l6}
\begin{enumerate}[(i)]
\item\label{p22-l6.1} For $x,w\in W$ we have $\theta_w^l\theta_x^l\cong \oplus_{y\geq_{\mathsf{L}}w}(\theta_y^l)^{m_y}$.
\item\label{p22-l6.2} Let $x,w\in W$ be such that $x<_{\mathsf{R}}w$. Then $\theta_w^l L(x)=0$.
\item\label{p22-l6.3} For each $w\in W$ there exists $x\in W$ such that $x\sim_{\mathsf{R}}w$ and $\theta^l_w L(x)\neq 0$.
\end{enumerate}
\end{lemma}

\begin{proof}
To prove the first statement we use some ideas from the proof of \cite[Theorem~11]{Ma3}. Denote by $\sigma$ the unique anti-automorphism of $\mathds{H}$, which maps $H_w$ to $H_{w^{-1}}$ (and hence $\underline{H}_w$ to $\underline{H}_{w^{-1}}$) for each $w\in W$.
Using \eqref{formula1} we have:
\begin{multline*}
\underline{H}_w\underline{H}_x=
\sigma(\sigma(\underline{H}_w\underline{H}_x))
=\sigma(\sigma(\underline{H}_x)\sigma(\underline{H}_w))=\\=
\sigma(\underline{H}_{x^{-1}}\underline{H}_{w^{-1}})
=\sum_{y^{-1}\geq_{\mathsf{R}}w^{-1}}\sigma(a_y\underline{H}_{y^{-1}})
=\sum_{y\geq_{\mathsf{L}}w}a_y\underline{H}_{y}.
\end{multline*}
Now \eqref{p22-l6.1} follows from Proposition~\ref{prop3}.

Let $x,w\in W$ be such that $x<_{\mathsf{R}}w$. We have $P(x)\cong \theta^l_x\Delta(e)\twoheadrightarrow L(x)$. Using \eqref{p22-l6.1} we have $\theta^l_wP(x)\cong \theta^l_w\theta^l_x\Delta(e)\cong \oplus_{y\geq_{\mathsf{L}}w}P(y)^{m_y}$. At the same time by Proposition~\ref{prop5}\eqref{prop5.1}, the head of $\theta^l_w L(x)$ can contain only $L(y)$ such that $y\leq_{\mathsf{R}}x$. Hence we have $y\leq_{\mathsf{R}}x<_{\mathsf{R}}w\leq_{\mathsf{L}}y$, which is not possible. Therefore $\theta^l_w L(x)=0$, proving \eqref{p22-l6.2}.

Let $\mathbf{R}$ be the right cell of $w$. Using Lemma~\ref{l21}, we have
\begin{displaymath}
\theta^l_w\mathrm{Z}^{\mathbf{\hat{R}}}\Delta(e)\cong
\mathrm{Z}^{\mathbf{\hat{R}}}\theta^l_w\Delta(e)\cong
\mathrm{Z}^{\mathbf{\hat{R}}}P(w)\cong P^{\mathbf{\hat{R}}}(w)\neq 0.
\end{displaymath}
Hence $\theta^l_w L(x)\neq 0$ for some simple subquotient $L(x)$ of $\mathrm{Z}^{\mathbf{\hat{R}}}\Delta(e)$. In particular, $x\leq_{\mathsf{R}}w$ (thanks to the definition of $\mathrm{Z}^{\mathbf{\hat{R}}}$) and then $x\sim_{\mathsf{R}}w$ follows from \eqref{p22-l6.2}.
\end{proof}

\begin{lemma}\label{p22-l8}
There exists $z\in \mathbf{R}_1$ such that $N_z\in \mathscr{P}(\mathbf{R}_2)$.
\end{lemma}

\begin{proof}
We choose $w,y\in \mathbf{R}_2$ such that $\theta_w^l L(y)\neq 0$ (see Lemma~\ref{p22-l6}\eqref{p22-l6.3}). By Corollary~\ref{p22-c5}, there exists some $x\in \mathbf{R}_1$ such that $P^{\mathbf{\hat{R}_2}}(y)\twoheadrightarrow N_x$. Let $K$ be the kernel of the latter map. Consider the short exact sequence $K'\hookrightarrow K\twoheadrightarrow K''$, where $K''$ is the maximal quotient of $K$, which contains only simple subquotients of the form $L(z)$, $z<_{\mathsf{R}}w$. By Lemma~\ref{p22-l6}\eqref{p22-l6.2} we have $\theta_w^l K'\cong \theta_w^l K$. Hence we have the short exact sequence of the form
\begin{equation}\label{p22-l8-e1}
\theta_w^l K'\hookrightarrow \theta_w^lP^{\mathbf{\hat{R}_2}}(y)
\twoheadrightarrow \theta_w^lN_x.
\end{equation}
Note that $\theta_w^lN_x\neq 0$ since $\theta$ is exact, $\theta_w^lL(y)\neq 0$ and $L(y)$ is the head of $N_x$. If $\theta_w^l K'=0$, we immediately get that $0\neq \theta_w^lN_x\in \mathscr{P}(\mathbf{R}_2)$. But the additive category, generated by indecomposable modules $N_z$, $z\in\mathbf{R}_1$, is stable with respect to projective functors by Lemma~\ref{l21}. This implies that $N_z\in \mathscr{P}(\mathbf{R}_2)$ for some $z\in \mathbf{R}_1$.
Assume hence that $\theta_w^l K'\neq 0$ and consider an arbitrary short exact sequence of the form $M'\hookrightarrow \theta_w^l K'\twoheadrightarrow M''$ such that $M''$ is simple. Then $M''\cong L(v)$ for some $v\in\mathbf{R}_2$. If we factor $M'$ out in \eqref{p22-l8-e1} we obtain the short exact sequence
\begin{equation}\label{p22-l8-e2}
L(v)\hookrightarrow X \twoheadrightarrow \theta_w^lN_x,
\end{equation}
where $X=\theta_w^lP^{\mathbf{\hat{R}_2}}(y)/M'$. By Corollary~\ref{p22-c5}, the heads of $X$ and $\theta_w^lN_x$ are isomorphic. Hence the sequence \eqref{p22-l8-e2} is not split. Apply now the functor $\tilde{\mathrm{G}}$ to the sequence \eqref{p22-l8-e2}, which basically reduces to the application of the functor $\mathcal{L}_1\mathrm{Z}_t$ because of the definition of $\tilde{\mathrm{G}}$. As $\mathcal{L}_2\mathrm{Z}_t\theta_w^lN_x=0$ and $\mathcal{L}_0\mathrm{Z}_tL(v)=0$ (this follows for example from Lemma~\ref{p22-l4} and the definition of $\tilde{\mathrm{G}}$), we obtain a short exact sequence
\begin{equation}\label{p22-l8-e3}
\tilde{\mathrm{G}}L(v)\hookrightarrow \tilde{\mathrm{G}}X
\twoheadrightarrow \tilde{\mathrm{G}}\theta_w^lN_x,
\end{equation}
in particular, $\tilde{\mathrm{G}}X\in \mathcal{O}^{\mathbf{\hat{R}_1}}$. Analogously one shows that $\tilde{\mathrm{F}}\tilde{\mathrm{G}}X\in \mathcal{O}^{\mathbf{\hat{R}_2}}$, which, together with Lemma~\ref{p22-l4}, implies that the adjunction morphism induces an isomorphism $\tilde{\mathrm{F}}\tilde{\mathrm{G}}X\cong X$, and thus the sequence \eqref{p22-l8-e2} is obtained from the sequence \eqref{p22-l8-e3} by applying $\tilde{\mathrm{F}}$. However, the sequence \eqref{p22-l8-e3} splits as $\tilde{\mathrm{G}}\theta_w^lN_x$ is projective in $\mathcal{O}^{\mathbf{\hat{R}_1}}$. Therefore \eqref{p22-l8-e2} must be split as well, a contradiction. Hence $\theta_w^l K'\neq 0$ is not possible. This completes the proof.
\end{proof}

\begin{proof}[Proof of Proposition~\ref{p22}.]
To prove Proposition~\ref{p22} it is enough to show that $\mathscr{Q}=\mathscr{P}(\mathbf{R}_2)$. Let $\mathscr{Q}_0$ and $\mathscr{P}(\mathbf{R}_2)_0$ denote the intersections of $\mathcal{O}_0$ with $\mathscr{Q}$ and $\mathscr{P}(\mathbf{R}_2)$ respectively. The definition of $\mathscr{P}(\mathbf{R}_2)$ and Lemma~\ref{l23} imply that it is even enough to show that $\mathscr{Q}_0=\mathscr{P}(\mathbf{R}_2)_0$. From Lemma~\ref{p22-l8} we know that $\mathscr{Q}_0\cap \mathscr{P}(\mathbf{R}_2)_0$ is not trivial. As $\mathscr{Q}_0$ is additively closed by Lemma~\ref{l23}\eqref{l23.2}, we have that $\mathscr{Q}_0$ contains some indecomposable projective from $\mathscr{P}(\mathbf{R}_2)_0$. Applying projective functors and Theorem~\ref{thm5} we get that $\mathscr{Q}_0$ must contain all indecomposable projectives from $\mathscr{P}(\mathbf{R}_2)_0$.
But by Lemma~\ref{l23} the categories $\mathscr{Q}_0$ and $\mathscr{P}(\mathbf{R}_2)_0$ contain the same number of pairwise non-isomorphic indecomposable modules. Hence $\mathscr{Q}_0=\mathscr{P}(\mathbf{R}_2)_0$. This completes the proof.
\end{proof}

Now we are prepared to prove Theorem~\ref{thm6}.

\begin{proof}[Proof of Theorem~\ref{thm6}.]
Assume first that $\mathbf{R}_1$ is of the form described in Remark~\ref{rem2}. Then $\mathscr{P}(\mathbf{R}_1)$ has a simple projective module by \cite[Section~3.1]{IS}. Let now $\mathbf{R}_2$ be any other right cell in the same two-sided cell as $\mathbf{R}_1$. By \cite[Proof of Theorem~1.4]{KLCoxeter} there is a sequence $\mathbf{R}_1=\mathbf{R}^{(0)}$, $\mathbf{R}^{(1)}$, \dots, $\mathbf{R}^{(k)}=\mathbf{R}_2$, such that $(\mathbf{R}^{(i)},\mathbf{R}^{(i+1)})$ satisfies the condition \eqref{Lcondition} for each $i=0,\dots,k-1$. Inductively applying Lemma~\ref{l23} and Proposition~\ref{p22} provides an equivalence between $\mathscr{P}(\mathbf{R}_1)$ and $\mathscr{P}(\mathbf{R}_2)$. This of course induces an equivalence of abelian categories.
\end{proof}

\subsection{Consequences}\label{s4.2}

Let $\mathbf{R}$ be a right cell of $S_n$. From Theorem~\ref{thm6} and Remark~\ref{rem2} one can deduce the following facts:
\begin{enumerate}[(I)]
\item\label{cons1} The Koszul grading on the algebra $A$ (\cite{SoKipp}) turns $\mathrm{End}_{\mathcal{O}_0}(\oplus_{w\in \mathbf{R}}P^{\mathbf{\hat{R}}}(w))$ into a positively graded self-injective symmetric algebra, \cite[Theorem~5.4]{MS2}.
\item\label{cons2} The center of $\mathrm{End}_{\mathcal{O}_0}(\oplus_{w\in \mathbf{R}}P^{\mathbf{\hat{R}}}(w))$ is isomorphic to the cohomology algebra of the associated Springer fiber, see \cite[Theorem~2]{Br} and \cite[Theorem~4.1.1]{St2}.
\item\label{cons3} For each $w\in \mathbf{R}$ there is a finite-dimensional $\mathfrak{g}$-module $E$ such that each $P^{\mathbf{\hat{R}}}(x)$, $x\in \mathbf{R}$, is a direct summand of $E\otimes L(w)$. This follows from \cite[Proposition~4.3(ii)]{Irself}.
\item\label{cons4} The projective modules in $\mathscr{P}(\mathbf{R})$ all have the same Loewy length (\cite[Theorem~5.2]{MS2}).
\end{enumerate}

\subsection{Counter-examples}\label{s4.4}

Perhaps the most remarkable feature of Theorem~\ref{thm6} is that there is no way to extend this result to the categories $\mathcal{O}_0^{\mathbf{\hat{R}}}$. For two right cells satisfying the condition of Remark~\ref{rem2} this was already pointed out in \cite[Proposition~6]{Kh}. At the same time, in \cite[Proposition~7]{Kh}, it was shown that the corresponding $\mathcal{O}_0^{\mathbf{\hat{R}}}$'s are derived equivalent. Even this weaker statement is not true in the general case. For example, take $W=S_4$, generated by the simple reflections $s,t,r$ such that $sr=rs$. Take the two right cells $\mathbf{R}_1=\{sr,srt\}$ and $\mathbf{R}_2=\{tsr,tsrt\}$.
Then we have $\mathbf{\hat{R}}_1=\{e,s,r,ts,tr,sr,rts,str,srt\}$ whereas $\mathbf{\hat{R}}_2=\{e,t,ts,tr,tsr,tsrt\}$. In particular, the categories $\mathcal{O}_0^{\mathbf{\hat{R}}_1}$ and $\mathcal{O}_0^{\mathbf{\hat{R}}_2}$ have different numbers of simple modules; hence they cannot be derived equivalent.

For right cells ${\mathbf{\hat{R}}}$ satisfying the condition of Remark~\ref{rem2}, the categories $\mathcal{O}^{\mathbf{\hat{R}}}$ are special amongst the categories associated with right cells: they are equivalent to the principal block of some parabolic category $\mathcal{O}$, in particular they are highest weight categories (i.e. described by quasi-hereditary algebras), see \cite{RC}. This is not true for arbitrary right cells. The smallest such example is again the case $W=S_4$ with $\mathbf{R}=\{t,ts,tr\}$. In this case $\mathbf{\hat{R}}=\{e,t,ts,tr\}$ and we have the following graded filtrations of projective and standard modules in $\mathcal{O}_0^{\mathbf{\hat{R}}}$:
\begin{displaymath}
\begin{array}{|c||c|c|c|c|}
\hline
w & e & t & ts & tr\\
\hline\hline
P(w)&
\begin{array}{c}e\\t\\\text{\hspace{2mm}}\end{array}&
\begin{array}{ccc}&t&\\ts&e&tr\\&t&\end{array}&
\begin{array}{c}ts\\t\\ts\end{array}&
\begin{array}{c}tr\\t\\tr\end{array} \\
\hline
\Delta(w)&
\begin{array}{c}e\\t\end{array}&
\begin{array}{ccc}&t&\\ts&&tr\end{array}&
\begin{array}{c}ts\\\text{\hspace{2mm}}\end{array}&
\begin{array}{c}tr\\\text{\hspace{2mm}}\end{array}\\
\hline
\end{array}
\end{displaymath}
We see that not all projective modules have standard filtrations and hence $\mathcal{O}_0^{\mathbf{\hat{R}}}$ is not a highest weight category.

\section{Tensor products and parabolic induction}\label{s5}

In this section we show how one can categorify some standard representation theoretical operations like tensor products and parabolic induction. As an application we categorify induced cell modules. Up to equivalence, the resulting categories depend only on the isomorphism class of the cell module, not on the actual cell module itself.

\subsection{Inner and outer tensor products}\label{s5.1}

Let $W$ and $W'$ be arbitrary finite Weyl groups with sets of simple reflections $S$ and $S'$. Let $\mathds{H}$, $\mathds{H}'$ be the corresponding Hecke algebras. If $M$ is a right $\mathds{H}$-module and $M'$ is a right $\mathds{H}'$-module then the {\it outer tensor product} $M\boxtimes M'$ is the right $\mathds{H}\otimes\mathds{H}'$-module whose underlying space is $M\otimes M'$ and the module structure is given by $m\otimes m'(h\otimes h')=mh\otimes m'h'$ for $m\in M$, $m'\in M'$, $h\in\mathds{H}$ and $h'\in\mathds{H}'$. Let now $X$ and $Y$ be right $\mathds{H}$-modules. Then the {\it inner tensor product} $X\otimes Y$ is the right $\mathds{H}$-module whose underlying space is $X\otimes Y$ and the module structure is given by $(x\otimes y).h=x.h\otimes y.h$ for $x\in X$, $y\in Y$ and $h\in\mathds{H}$.
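As a minimal illustration of the outer tensor product (a sketch only, assuming the normalization $\underline{H}_s=H_s+vH_e$, which is the one compatible with the action formulas recalled in Subsection~\ref{s5.2} below), take $W=W'=S_2$ with simple reflections $s$ and $s'$, and let $M$ and $M'$ be the one-dimensional modules on which $H_s$ acts by $-v$ and $H_{s'}$ acts by $v^{-1}$, respectively. Then on $M\boxtimes M'$ one computes
\begin{displaymath}
(1\otimes 1)(\underline{H}_s\otimes \underline{H}_{s'})
=\bigl(1.(H_s+v)\bigr)\otimes\bigl(1.(H_{s'}+v)\bigr)
=0\otimes (v+v^{-1})=0,
\end{displaymath}
and at the specialization $v=1$ this agrees with the action of $(s+e)\otimes(s'+e)$ on the $S_2\times S_2$-representation $\mathrm{sgn}\boxtimes\mathrm{triv}$.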
Given two categories $\mathscr{C}_1$ and $\mathscr{C}_2$ let $\mathscr{C}_1\oplus\mathscr{C}_2$ be the category with objects being pairs $(C_1,C_2)$, where $C_i$ is an object in $\mathscr{C}_i$, and the morphisms from an object $(A_1,A_2)$ to an object $(B_1,B_2)$ being pairs of morphisms $(f_1,f_2)$, where $f_i:A_i\rightarrow B_i$ for $i=1,2$. We assume that each of these categories is either equivalent to a module category over some finite dimensional algebra $A$ or at least equivalent to its (bounded) derived category. Then $\operatorname{Gr}(\mathscr{C}_1\otimes\mathscr{C}_2) \cong \operatorname{Gr}(\mathscr{C}_1)\otimes_{\mathbb{Z}} \operatorname{Gr}(\mathscr{C}_2)$ and hence also $[\mathscr{C}_1\oplus\mathscr{C}_2]\cong [\mathscr{C}_1]\otimes [\mathscr{C}_2]$. Given two functors $F_i:\mathscr{C}_i\rightarrow\mathscr{C}_i$, $i=1,2$, we denote by $F_1\boxtimes F_2$ the endofunctor of $\mathscr{C}_1\oplus\mathscr{C}_2$ which maps $(A_1,A_2)$ to $(F_1(A_1),F_2(A_2))$ and $(f_1,f_2)$ to $(F_1(f_1),F_2(f_2))$.

The following result gives a categorification of the outer and inner tensor products:

\begin{proposition}[Tensor products]\label{tensorproducts}
Assume we are given a right $\mathds{H}$-module $M$ and a right $\mathds{H}'$-module $M'$ together with a categorification $(\mathscr{S},\mathcal{E},\{\mathrm{F}_s\}_{s\in S})$ of $M$ with respect to the generators $\underline{H}_s$, $s\in S$, of $\mathds{H}$; and a categorification $(\mathscr{S}',\mathcal{E}', \{\mathrm{F}'_{s'}\}_{s'\in S'})$ of $M'$ with respect to the generators $\underline{H}_{s'}$, where $s'\in S'$, of $\mathds{H}'$. Then we have:
\begin{enumerate}[(i)]
\item\label{tensorproducts.1} The tuple
\begin{displaymath}
(\mathscr{S}\oplus\mathscr{S}', \mathcal{E}\otimes\mathcal{E}',
\{\mathrm{F}_s\boxtimes\mathrm{F}'_{s'}\}_{s\in S,s'\in S'})
\end{displaymath}
is a categorification of $M\boxtimes M'$ with respect to the generators $\underline{H}_{s}\otimes \underline{H}_{s'}$, $s\in S$, $s'\in S'$.
\item\label{tensorproducts.2} If both $M$ and $M'$ are right $\mathds{H}$-modules then
\begin{displaymath}
(\mathscr{S}\oplus\mathscr{S}',\mathcal{E}\otimes\mathcal{E}',
\{\mathrm{F}_{s}\boxtimes\mathrm{F}'_{s}\}_{s\in S})
\end{displaymath}
is a categorification of $M\otimes M'$ with respect to the generators $\underline{H}_s$, where $s\in S$, of $\mathds{H}$.
\end{enumerate}
\end{proposition}

\begin{proof}
This follows directly from the definitions.
\end{proof}

\subsection{Examples of parabolic induction}\label{s5.2}

Let now $W'$ be a parabolic subgroup of $W$ which corresponds to a subset $S'\subset S$. Let $\mathds{H}'=\mathds{H}(W',S')$ be the corresponding subalgebra of $\mathds{H}$, and let $M$ be a (right) $\mathds{H}'$-module.
The purpose of this section is to give a categorification of the induced module $\mathrm{Ind}_{\mathds{H}'}^{\mathds{H}} M= M\otimes_{\mathds{H}'}\mathds{H}$, where $M$ is a cell module over $\mathds{H}'$. We start by recalling examples from the literature.

\subsubsection{Sign parabolic module}

The assignment $H_s\mapsto -v$ for all $s\in S'$ defines a surjection $\mathds{H}'\twoheadrightarrow \mathbb{Z}[v,v^{-1}]$ and hence defines on $\mathbb{Z}[v,v^{-1}]$ the structure of an $\mathds{H}'$--bimodule. Consider the {\em sign parabolic} $\mathds{H}$-module $\mathcal{N}= \mathbb{Z}[v,v^{-1}]\otimes_{\mathds{H}'}\mathds{H}$. The set $\{N_x=1\otimes H_x\}$, where $x$ runs through the set $(W'\backslash W)_{short}$ of shortest coset representatives in $W'\backslash W$, forms a basis of $\mathcal{N}$. The action of $\underline{H}_s$, $s\in S$, in this basis is given by (see \cite[Section~3]{SoKipp}):
\begin{displaymath}
N_x\underline{H}_s=
\begin{cases}
N_{xs}+vN_x, & \text{ if } xs\in (W'\backslash W)_{short}, xs>x;\\
N_{xs}+v^{-1}N_x, & \text{ if } xs\in (W'\backslash W)_{short}, xs<x;\\
0, & \text{ if } xs\not\in (W'\backslash W)_{short}.
\end{cases}
\end{displaymath}
It is easy to see that the specialization $v=1$ gives the $W$-module $\mathrm{Ind}_{W'}^W M$, where $M$ is the {\em sign} $W'$-module, that is $M=\mathbb{Z}$ with the alternating action $1\, s=-1$ for all $s\in S'$.

\subsubsection*{Its categorification}

Let $\mathfrak{p}\supseteq \mathfrak{b}$ be the parabolic subalgebra of $\mathfrak{g}$ corresponding to $S'$. Let further $\mathcal{O}_0^{\mathfrak{p}}$ be the locally $\mathfrak{p}$-finite part of $\mathcal{O}_0$ (in the sense of \cite{RC}). This is the full extension closed subcategory of $\mathcal{O}_0$, generated by the simple modules $L(w)$, $w\in (W'\backslash W)_{short}$. Finally, let $\mathcal{O}_0^{\mathfrak{p},\mathbb{Z}}$ be the graded version of $\mathcal{O}_0^{\mathfrak{p}}$ (as defined in \cite{BGS}). Let $\Delta^{\mathfrak{p}}(w)$ denote the corresponding standard graded lift of the generalized Verma module, i.e. the corresponding standard module in $\mathcal{O}_0^{\mathfrak{p},\mathbb{Z}}$ with head concentrated in degree $0$. The category $\mathcal{O}_0^{\mathfrak{p}}$ has finite homological dimension, and hence we have a unique isomorphism $\mathcal{E}^{\mathfrak{p}}$ of $\mathbb{Z}[v,v^{-1}]$-modules as follows:
\begin{eqnarray*}
\mathcal{E}^{\mathfrak{p}}: \quad \mathcal{N} & \overset{\sim}{\longrightarrow} &
\left[\mathcal{O}_0^{\mathfrak{p},\mathbb{Z}}\right] \\
N_w & \mapsto & \left[\Delta^{\mathfrak{p}}(w)\right].
\end{eqnarray*}
The following result is well-known (see for example \cite[Proposition~1.5]{StDuke}):

\begin{proposition}\label{p31}
$(\mathcal{O}_0^{\mathfrak{p},\mathbb{Z}},\mathcal{E}^{\mathfrak{p}}, \{\theta_s^l\}_{s\in S})$ is a categorification of $\mathcal{N}$ with respect to the generators $\underline{H}_s$, $s\in S$.
\end{proposition}

\subsubsection{Permutation parabolic module}\label{permmodule}

The assignment $H_s\mapsto v^{-1}$ for all $s\in S'$ defines a surjection $\mathds{H}'\twoheadrightarrow \mathbb{Z}[v,v^{-1}]$ and hence determines on $\mathbb{Z}[v,v^{-1}]$ the structure of an $\mathds{H}'$--bimodule. The {\em permutation parabolic} $\mathds{H}$-module is defined as follows: $\mathcal{M}= \mathbb{Z}[v,v^{-1}]\otimes_{\mathds{H}'}\mathds{H}$. There is the standard basis of $\mathcal{M}$ given by $\{M_x=1\otimes H_x\}$, where $x$ runs through $(W'\backslash W)_{short}$. The action of $\underline{H}_s$, $s\in S$, in this basis is given as follows (see \cite[Section~3]{SoKipp}):
\begin{displaymath}
M_x\underline{H}_s=
\begin{cases}
M_{xs}+vM_x, & \text{ if } xs\in (W'\backslash W)_{short}, xs>x;\\
M_{xs}+v^{-1}M_x, & \text{ if } xs\in (W'\backslash W)_{short}, xs<x;\\
(v+v^{-1})M_x, & \text{ if } xs\not\in (W'\backslash W)_{short}.
\end{cases}
\end{displaymath}
It is easy to see that the specialization $v=1$ gives the $W$-module $\mathrm{Ind}_{W'}^W M$, where $M$ is the {\em trivial} $W'$-module, that is $M=\mathbb{Z}$ with the trivial action $1\, s=1$ for all $s\in S'$. The module $\mathrm{Ind}_{W'}^W M$ is usually called the {\em permutation module}, see \cite[2.1]{Sa}.

\subsubsection*{Its categorification}

Let $\mathfrak{p}$ be as in the previous example. Let $\mathscr{P}(\mathfrak{p})$ be the additive category, closed with respect to the shift of grading, and generated by the indecomposable projective modules $\mathtt{P}(w)\in\mathcal{O}_0^{\mathbb{Z}}$, where $w$ runs through the set $(W'\backslash W)_{long}$ of longest coset representatives in $W'\backslash W$. The category $\mathcal{O}_0^{\mathfrak{p}-pres,\mathbb{Z}}= \overline{\mathscr{P}(\mathfrak{p})}$ is the graded version of the category $\mathcal{O}_0^{\mathfrak{p}-pres}$ from \cite{MS} (see also Subsection~\ref{s3.244}). The simple objects of $\mathcal{O}_0^{\mathfrak{p}-pres}$ are in a natural bijection with $w\in (W'\backslash W)_{long}$. For $w\in (W'\backslash W)_{long}$ denote by $\Delta^{\mathfrak{p}-pres}(w)$ the standard object of $\mathcal{O}_0^{\mathfrak{p}-pres,\mathbb{Z}}$ corresponding to $w$ and with the head concentrated in degree $0$ (\cite[Theorem 2.16, Lemma 7.2]{MS}). Let $w'_0$ be the longest element of $W'$.
All this defines a unique homomorphism $\mathcal{E}^{\mathfrak{p}-pres}$ of $\mathbb{Z}[v,v^{-1}]$-modules as follows:
\begin{eqnarray*}
\mathcal{E}^{\mathfrak{p}-pres}: \quad \mathcal{M} & \overset{\sim}{\longrightarrow} &
\left[\mathcal{O}_0^{\mathfrak{p}-pres,\mathbb{Z}}\right] \\
M_{w'_0w} & \mapsto & \left[\Delta^{\mathfrak{p}-pres}(w)\right].
\end{eqnarray*}
The category $\mathcal{O}_0^{\mathfrak{p}-pres,\mathbb{Z}}$ does not have finite homological dimension in general; however, the projective dimension of all standard modules is finite. Hence we have $\left[\Delta^{\mathfrak{p}-pres}(w)\right]\in [\mathscr{P}(\mathfrak{p})]_{\oplus}$ for all $w\in (W'\backslash W)_{long}$. This can be extended to the following (see for example \cite[Theorem~7.7]{MS}):

\begin{proposition}\label{p32}
\begin{enumerate}[(i)]
\item\label{p32.1} $(\mathcal{O}_0^{\mathfrak{p}-pres,\mathbb{Z}}, \mathcal{E}^{\mathfrak{p}-pres}, \{\theta_s^l\}_{s\in S})$ is a precategorification whereas $(\mathscr{P}(\mathfrak{p}),\mathcal{E}^{\mathfrak{p}-pres}, \{\theta_s^l\}_{s\in S})$ is a categorification of $\mathcal{M}$ with respect to the generators $\underline{H}_s$, $s\in S$.
\item\label{p32.2} The homomorphism $\mathcal{E}^{\mathfrak{p}-pres}$ extends uniquely to the ${\mathbb{Z}((v))}$-categorification $(\mathcal{O}_0^{\mathfrak{p}-pres,\mathbb{Z}}, \mathcal{E}^{\mathfrak{p}-pres},\{\theta_s^l\}_{s\in S})$ of $\mathcal{M}^{\mathbb{Z}((v))}$ with respect to the generators $\underline{H}_s$, $s\in S$.
\end{enumerate}
\end{proposition}

\subsection{The `unification': The category $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$}\label{s5.3}

In Subsection~\ref{s5.2} we used certain parabolic categories to categorify the sign parabolic module, but also used categories of presentable modules to categorify the permutation parabolic modules. Both depend on a fixed parabolic $\mathfrak{p}\subset\mathfrak{g}$. In this section we actually want to put these two approaches under one roof using a series $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ of categories, depending on the (fixed) $\mathfrak{p}$, $\mathfrak{g}$ and a (varying) category $\mathscr{A}$. The categorifications from Subsection~\ref{s5.2} will then emerge for a special choice of $\mathscr{A}$.

The categories $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ were first introduced in \cite{FKM} as certain parabolic generalizations of the category $\mathcal{O}$ which led to properly stratified algebras. The setup was afterwards extended in \cite[6.2]{Ma} to include general stratified algebras (in the sense of \cite{CPS}). Here we present a slight variation of the original definition. This variation seems to be more natural for us, and is better adapted to the examples we work with.

Let $\tilde{\mathfrak{a}}$ be a reductive complex finite dimensional Lie algebra with semisimple part $\mathfrak{a}$ and center $\mathfrak{z}(\tilde{\mathfrak{a}})$.
Let $\mathscr{A}$ be a full subcategory of the category of finitely generated $\tilde{\mathfrak{a}}$-modules. Then $\mathscr{A}$ is called an {\it admissible} category (of $\tilde{\mathfrak{a}}$-modules) if the following holds:
\begin{enumerate}[(L1)]
\item\label{lll2} $\mathscr{A}$ is stable under $E\otimes _-$ for each simple finite dimensional $\tilde{\mathfrak{a}}$-module $E$;
\item\label{lll3} the action of $Z(\tilde{\mathfrak{a}})$ gives a decomposition of $\mathscr{A}$ into a direct sum of full subcategories, each of which is equivalent to a module category over a finite-dimensional self-injective associative algebra;
\item\label{lll4} the action of $\mathfrak{z}(\tilde{\mathfrak{a}})$ on any object $M$ from $\mathscr{A}$ is diagonalizable.
\end{enumerate}
Since the functors $E\otimes _-$ and $E^*\otimes _-$ are both left and right adjoint to each other on the category of all $\tilde{\mathfrak{a}}$-modules, (L\ref{lll2}) implies that $E\otimes _-$ is in fact exact (as endo-functor of $\mathscr{A}$). (L\ref{lll3}) guarantees that $\mathscr{A}$ is abelian, has enough projectives (which are also injective) and that each object of $\mathscr{A}$ has finite length (with respect to the abelian structure of $\mathscr{A}$, but not as a $\tilde{\mathfrak{a}}$-module in general).

Given an admissible $\mathscr{A}$, we can construct a series of categories $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ as follows: We take a semisimple (or reductive) Lie algebra $\mathfrak{g}$ with a chosen Borel subalgebra $\mathfrak{b}$, and require that $\mathfrak{p}\supset \mathfrak{b}$ is a parabolic subalgebra of $\mathfrak{g}$ such that $\mathfrak{p}=\tilde{\mathfrak{a}}\oplus \mathfrak{n}_{\mathfrak{p}}$ is the Levi decomposition of $\mathfrak{p}$. Given these data it makes sense to make the following definition:

\begin{definition}\label{defopl}
{\rm The category $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ is the full subcategory of the category of $\mathfrak{g}$-modules given by all objects which are
\begin{enumerate}[(PL1)]
\item\label{OPLone} finitely generated,
\item\label{OPLtwo} locally $\mathfrak{n}_{\mathfrak{p}}$-finite,
\item\label{OPLthree} direct sums of objects from $\mathscr{A}$ when viewed as $\tilde{\mathfrak{a}}$-modules.
\end{enumerate}
}
\end{definition}

\subsection{Special cases of $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$}\label{special}

\subsubsection*{Category $\mathcal{O}$}

If $\mathfrak{p}=\mathfrak{b}$ then $\tilde{\mathfrak{a}}=\mathfrak{h}$ is abelian. Let $\mathscr{A}$ be the category of all finite dimensional semisimple $\mathfrak{h}$-modules. This category is obviously admissible. The category $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ in this case is nothing else than the usual category $\mathcal{O}=\mathcal{O}(\mathfrak{g},\mathfrak{b})$.
Note that the property (PL\ref{OPLthree}) in this case just means that the modules from $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ have a weight space decomposition. The category $\mathcal{O}$ is a highest weight category with standard modules given by the Verma modules.

\subsubsection*{The parabolic category $\mathcal{O}^{\mathfrak{p}}$}

If $\mathfrak{p}$ is any parabolic and $\mathscr{A}$ is the category of finite dimensional semi-simple $\tilde{\mathfrak{a}}$-modules, then $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ is the parabolic category $\mathcal{O}^{\mathfrak{p}}$. The category $\mathcal{O}^{\mathfrak{p}}$ is a highest weight category with standard modules given by the parabolic Verma modules.

\subsubsection*{The category $\mathcal{O}_0^{\mathfrak{p}-pres}$}\label{3}

Let $\mathfrak{p}$ be any parabolic subalgebra with Weyl group $W'$ and longest element $w'_0$. Consider the corresponding indecomposable projective module $P^{\mathfrak{a}}(w'_0\cdot0)$ in the category $\mathcal{O}(\mathfrak{a}, \mathfrak{a}\cap\mathfrak{b})$ corresponding to $\mathfrak{a}$. Let $\mathscr{A}$ be the smallest abelian category which contains this $P^{\mathfrak{a}}(w'_0\cdot0)$ and is closed under tensoring with finite-dimensional simple $\mathfrak{a}$-modules and taking quotients. Extend $\mathscr{A}$ to a category of $\tilde{\mathfrak{a}}$-modules by allowing diagonalizable action of $\mathfrak{z}(\tilde{\mathfrak{a}})$. Then the category $\mathscr{A}$ is admissible and $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ is the category of modules which are presentable by the $P(w\cdot \lambda)\in\mathcal{O}$, where $w$ runs through $(W'\backslash W)_{long}$ and $\lambda$ is an integral weight in $\mathfrak{h}^*_{dom}$ (for details see \cite{MS}). This category is also equivalent to the category of Harish-Chandra bimodules with generalized trivial integral central character from the left hand side and the singular central character given by $W'$ from the right hand side (for details see e.g. \cite{BG}, \cite[Kapitel 6]{Ja2}). This category $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ is not a highest weight category in general, but still equivalent to a module category over a so-called properly stratified algebra, see \cite{MS}.

\subsection{From highest weight categories to stratified algebras}

As usual, the category $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ decomposes into a direct sum of full subcategories, each of which is equivalent to a module category over a finite-dimensional associative algebra. Any block of the (parabolic) category $\mathcal{O}$ is a highest weight category, hence the associated algebra is quasi-hereditary. In general, this is not true for a block of $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ (see for example \cite{MS}). The algebras which appear from blocks of $\mathcal{O}\{\mathfrak{p}, \mathscr{A}\}$ are however always {\em weakly properly stratified} in the sense of \cite[Section~2]{Fr}. The proof of this fact is completely analogous to the properly stratified case, and we refer to \cite[Section 3]{FKM} for details.
A weakly properly stratified structure on an algebra means the following: the isomorphism classes of simple modules are indexed by a partially pre-ordered set $I$ and we have so-called {\em standard} and {\em proper standard} modules (both indexed by $I$ again) such that projective modules have standard filtrations, i.e. filtrations with subquotients isomorphic to standard modules, and standard modules have proper standard filtrations. Which subquotients are allowed to occur in the above filtrations and in the Jordan-H{\"o}lder filtrations of proper standard modules is given by the partial pre-order (for a precise definition we refer to \cite{Fr}).

The modules defining the stratified structure are given in terms of parabolically induced modules as follows: If $V$ is any $\tilde{\mathfrak{a}}$-module, we consider $V$ as a $\mathfrak{p}$-module by letting $\mathfrak{n}_{\mathfrak{p}}$ act trivially and define the {\it parabolically induced module}
\begin{displaymath}
\Delta(\mathfrak{p}, V):=U(\mathfrak{g})\otimes_{U(\mathfrak{p})}V.
\end{displaymath}
If $V$ is a simple object of $\mathscr{A}$ then $\Delta(\mathfrak{p},V)$ is a {\em proper standard} module; if $V$ is projective then $\Delta(\mathfrak{p},V)$ is a {\em standard} module. The dual construction (using coinduction) gives rise to the {\it (proper) costandard modules}. If $V$ is a simple $\tilde{\mathfrak{a}}$-module, then $\Delta(\mathfrak{p},V)$ is usually called a {\em generalized Verma module}, or simply a {\em GVM}.

Let $\mathscr{F}(\Delta)$ denote the full subcategory of $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ given by all modules which admit a standard filtration, that is a filtration whose subquotients are standard modules. Analogously one defines $\mathscr{F}(\overline{\Delta})$ for modules with a proper standard filtration, $\mathscr{F}(\nabla)$ for modules with a costandard filtration, and $\mathscr{F}(\overline{\nabla})$ for modules with a proper costandard filtration. In this notation the property of being weakly properly stratified is equivalent to the claim that all projective modules in $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$ belong to both $\mathscr{F}(\Delta)$ and $\mathscr{F}(\overline{\Delta})$.

We would like to point out that weakly properly stratified algebras form a strictly bigger class than properly stratified algebras, as the classes of simple modules might be only partially pre-ordered, not partially ordered. As a consequence there could be non-isomorphic standard modules $\Delta_1$ and $\Delta_2$ such that $\textrm{Hom}(\Delta_1,\Delta_2)\not=0\not=\textrm{Hom}(\Delta_2,\Delta_1)$ (which will in fact be the case in almost all the examples occurring from now on in this paper).

\subsection{Parabolic induction via $\mathcal{O}\{\mathfrak{p},\mathscr{A}\}$}\label{s5.4}

Let us return to the case $W=S_n$ with some fixed parabolic subgroup $W'=S_{i_1}\times S_{i_2}\times\cdots\times S_{i_r}$, where $i_1+i_2+\dots +i_r=n$. Let $\mathds{H}$ and $\mathds{H}'$ be the corresponding Hecke algebras.
Assume we are given a right cell $\mathbf{R}'$ of $W'$. Then $\mathbf{R}'=\mathbf{R}'_{i_1}\times \mathbf{R}'_{i_2}\times\cdots\times \mathbf{R}'_{i_r}$ for some right cells $\mathbf{R}'_{i_j}$ in $S_{i_j}$. Recall from Theorem~\ref{thm6} the categorification $\mathscr{C}^{\mathbf{R}'_{i_j}}$ of the right cell module associated with $\mathbf{R}'_{i_j}$. From Subsection~\ref{s5.1} we deduce that the outer tensor product, call it $\mathscr{C}^{\mathbf{R}'}$, of these categories categorifies the cell module corresponding to $\mathbf{R}'$. The objects of $\mathscr{C}^{\mathbf{R}'}$ are certain $\tilde{\mathfrak{a}}:=\mathfrak{gl}_{i_1}\oplus \mathfrak{gl}_{i_2}\oplus\cdots\oplus \mathfrak{gl}_{i_r}$-modules. Let $\mathscr{P}$ denote the additive closure of the category of all modules which have the form $E\otimes P$, where $P\in \mathscr{C}^{\mathbf{R}'}$ is projective and $E$ is a simple finite-dimensional $\tilde{\mathfrak{a}}$-module. Set $\mathscr{A}^{\mathbf{R}'}= \overline{\mathscr{P}}$.

\begin{lemma}\label{lem51}
$\mathscr{A}^{\mathbf{R}'}$ is admissible.
\end{lemma}

\begin{proof}
As translations are exact, condition (L\ref{lll2}) is satisfied by definition. Condition (L\ref{lll4}) follows again from the definitions as the action of $\mathfrak{z}(\tilde{\mathfrak{a}})$ on any simple finite-dimensional $\tilde{\mathfrak{a}}$-module is diagonalizable. It is left to check (L\ref{lll3}). By definition, $\mathscr{A}^{\mathbf{R}'}$ is a subcategory of $\mathcal{O}(\tilde{\mathfrak{a}}, \tilde{\mathfrak{a}}\cap\mathfrak{b})$. The block decomposition of the latter (with respect to the action of $Z(\tilde{\mathfrak{a}})$) induces a block decomposition of the former. Since translations are exact and send projectives to projectives, $\mathscr{A}^{\mathbf{R}'}$ has enough projectives. These projective modules are also injective by \eqref{cons1} from Subsection~\ref{s4.2}. Therefore the condition (L\ref{lll3}) follows from the definitions and \cite[Section~5]{Au}.
\end{proof}

By Lemma~\ref{lem51}, the category $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}$ is defined. By construction, it is a subcategory of $\mathcal{O}$ and hence inherits a decomposition from the block decomposition of $\mathcal{O}$, which we call the block decomposition of $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}$. Denote by $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0$ the block of $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}$ corresponding to the trivial central character. We note that one can show that $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0$ is indecomposable by invoking Theorem~\ref{thm6}. We omit a proof, since the result will not be relevant for the following.
From the definition of $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}$ we have that the simple objects in $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0$ are in a natural bijection with the elements from $W$ of the form $xw$, where $w\in (W'\backslash W)_{short}$ and $x\in \mathbf{R}'$. Denote by $\Delta({\mathfrak{p}},xw)$ and $\overline{\Delta}(\mathfrak{p},xw)$ the standard, respectively proper standard, module in $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0$ corresponding to $xw$. Dually we also have the (proper) costandard module $\nabla({\mathfrak{p}},xw)$ ($\overline{\nabla}(\mathfrak{p},xw)$). For details see \cite[Section~2]{Fr}. Finally, let $P(\mathfrak{p},xw)$ be the projective cover of $\Delta(\mathfrak{p},xw)$ in $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}$. Set
\begin{gather*}
\mathds{I}(\mathbf{R}')=\{(x,w)\,:\,x\in \mathbf{R}', w\in (W'\backslash W)_{short}\},\\
\mathds{J}(\mathbf{R}')=\{y\in W\,:\, y\geq_{\mathsf{R}}\mathbf{R}', y\neq xw \text{ for any }(x,w)\in \mathds{I}(\mathbf{R}')\}.
\end{gather*}
In particular, by the above, the set $\mathds{I}(\mathbf{R}')$ indexes bijectively the isomorphism classes of indecomposable projective modules in $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0$. From Subsection~\ref{s3.2} and the definitions it follows that for any $(x,w)\in \mathds{I}(\mathbf{R}')$ the module $P(\mathfrak{p},xw)$ is the quotient of $P(xw)$ modulo the trace of all $P(y)$, $y\in \mathds{J}(\mathbf{R}')$. In particular, all projectives in $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0$ are gradable and hence the endomorphism ring $B$ of a minimal projective generator of $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0$ inherits a $\mathbb{Z}$-grading from the ring $A$ from Subsection~\ref{s25.5}. We denote by $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0^{\mathbb{Z}}$ the category of finite-dimensional graded $B$-modules.

Let $S(\mathbf{R}')\otimes_{\mathds{H}'}\mathds{H}$ be the induced cell module. By definition, it has a $\mathbb{Z}[v,v^{-1}]$-basis given by $\Delta_{x,w}:= \underline{H}_x\otimes H_w$, where $(x,w)\in \mathds{I}(\mathbf{R}')$. Hence we can define a homomorphism of $\mathbb{Z}[v,v^{-1}]$-modules as follows:
\begin{eqnarray}\label{Psi}
\Psi_{\mathbf{R}'}:S(\mathbf{R}')\otimes_{\mathds{H}'}\mathds{H}
&\overset{\sim}{\longrightarrow}&\left[\mathcal{O}\{\mathfrak{p}, \mathscr{A}^{\mathbf{R}'}\}_0^{\mathbb{Z}}\right]\\
\Delta_{x,w} &\mapsto &\left[\Delta({\mathfrak{p}},xw)\right].
\nonumber \end{eqnarray} Let $\mathscr{P}(\mathfrak{p},\mathscr{A}^{\mathbf{R}'})$ denote the additive category of all (graded) projective modules in $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0$. We obtain the following main result: \begin{theorem}[Categorification of induced cell modules]\label{thm53} \begin{enumerate}[(i)] \item\label{thm53.0} The map $\Psi_{\mathbf{R}'}$ is a homomorphism of $\mathds{H}$-modules. \item\label{thm53.1} $\big(\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0^{\mathbb{Z}}\;, \Psi_{\mathbf{R}'},\{\theta_s^l\}_{s\in S}\big)$ is a precategorification of the induced (right) cell $\mathds{H}$-module $S(\mathbf{R}')\otimes_{\mathds{H}'}\mathds{H}$, whereas $\big(\mathscr{P}(\mathfrak{p},\mathscr{A}^{\mathbf{R}'}), \Psi_{\mathbf{R}'},\{\theta_s^l\}_{s\in S}\big)$ is a categorification of this module with respect to the generators $\underline{H}_s$, $s\in S$. \item\label{thm53.2} The map $\Psi_{\mathbf{R}'}$ defines a $\mathbb{Z}((v))$-categorification $\big(\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0^{\mathbb{Z}}\;, \Psi_{\mathbf{R}'},\{\theta_s^l\}_{s\in S}\big)$ of the induced cell $\mathds{H}^{\mathbb{Z}((v))}$-module $S(\mathbf{R}')^{\mathbb{Z}((v))}\otimes_{(\mathds{H}')^{\mathbb{Z}((v))}} \mathds{H}^{\mathbb{Z}((v))}$ with respect to the generators $\underline{H}_s$, $s\in S$. \end{enumerate} \end{theorem} \begin{proof} In order to prove the theorem we only have to show that $\Psi_{\mathbf{R}'}$ is a homomorphism of $\mathds{H}$-modules. In other words, we have to compare the combinatorics of the action of $\mathds{H}$ on $S(\mathbf{R}')\otimes_{\mathds{H}'}\mathds{H}$ with the combinatorics of the action of $\{\theta_s^l\}_{s\in S}$ on $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0^{\mathbb{Z}}$. Fix $(x,w)\in \mathds{I}(\mathbf{R}')$ and $s\in S$. If $ws\in (W'\backslash W)_{short}$ then the definition of $\mathds{H}$ (see Subsection~\ref{s25.2}) gives \begin{displaymath} \Delta_{x,w}\underline{H}_s= \begin{cases} \Delta_{x,ws} +v \Delta_{x,w}, & \text{ if } ws>w;\\ \Delta_{x,ws} +v^{-1} \Delta_{x,w}, & \text{ if } ws<w. \end{cases} \end{displaymath} If $ws\not\in (W'\backslash W)_{short}$ we have that $ws=s'w$ for some $s'\in S\cap W'$. In particular $ws>w$ and the definition of $\mathds{H}$ gives $H_w\underline{H}_s=\underline{H}_{s'}H_w$. Therefore \begin{equation}\label{eqthm53.1} \Delta_{x,w}\underline{H}_s= (\underline{H}_x\otimes H_w)\underline{H}_s= \underline{H}_x\otimes\underline{H}_{s'}H_w= \underline{H}_x\underline{H}_{s'}\otimes H_w, \end{equation} and $\underline{H}_x\underline{H}_{s'}$ can be computed using the definition of $S(\mathbf{R}')$, i.e. \eqref{formula1}. Now let us compare this with the combinatorics of the translation functors. Assume first that $ws=s'w\not\in (W'\backslash W)_{short}$.
If $M$ is a $\tilde{\mathfrak{a}}$-module and $E$ is a finite-dimensional $\mathfrak{g}$-module (which we can also view as a finite-dimensional $\tilde{\mathfrak{a}}$-module), then the Poincar{\'e}-Birkhoff-Witt Theorem implies the so-called tensor identity $U(\mathfrak{g})\otimes_{U(\mathfrak{p})}(E\otimes M)\cong E\otimes U(\mathfrak{g})\otimes_{U(\mathfrak{p})}M$ as $\tilde{\mathfrak{a}}$-modules (in both cases the $\tilde{\mathfrak{a}}$-module structure is given by restriction). This implies that the computation of $[\theta_s^l\Delta({\mathfrak{p}},xw)]$ reduces to the computation of $[\theta_{s'}^l V]$, where $V$ is the indecomposable projective module in $\mathscr{A}^{\mathbf{R}'}$ such that $\Delta(\mathfrak{p},xw)= U(\mathfrak{g})\otimes_{U(\mathfrak{p})}V$. From Theorem~\ref{thm5} it follows that the result is given by \eqref{formula1}; hence it agrees with the computation in \eqref{eqthm53.1}. Finally, let us assume that $ws\in (W'\backslash W)_{short}$. We have to show that in this case \begin{equation}\label{eqthm53.2} [\theta_s^l\Delta({\mathfrak{p}},xw)]= \begin{cases} [\Delta({\mathfrak{p}},xws)]+v[\Delta({\mathfrak{p}},xw)],& ws>w;\\ [\Delta({\mathfrak{p}},xws)]+v^{-1}[\Delta({\mathfrak{p}},xw)],& ws<w. \end{cases} \end{equation} Since all (proper) standard modules are parabolically induced, from our observation about the parabolic induction and projective functors above it follows that projective functors preserve both $\mathscr{F}(\Delta)$ and $\mathscr{F}(\overline{\Delta})$. By duality, projective functors also preserve both $\mathscr{F}(\nabla)$ and $\mathscr{F}(\overline{\nabla})$. Hence $\theta_s^l\Delta({\mathfrak{p}},xw)\in \mathscr{F}(\Delta)$, and we only have to compute which standard modules occur in the standard filtration of $\theta_s^l\Delta({\mathfrak{p}},xw)$ and with which multiplicity. From \cite[4.1]{Fr} it follows that the multiplicity of $\Delta({\mathfrak{p}},y)\langle k\rangle$ in the standard filtration of $\theta_s^l\Delta({\mathfrak{p}},xw)$ equals the dimension of \begin{displaymath} \mathrm{Hom}_{\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'} \}_0^{\mathbb{Z}}}(\theta_s^l\Delta({\mathfrak{p}},xw), \overline{\nabla}({\mathfrak{p}},y)\langle k\rangle). \end{displaymath} Write $\theta_s^l=\theta_s^{out}\theta_s^{on}$, where $\theta_s^{on}$ and $\theta_s^{out}$ are the graded translations onto and out of the $s$-wall (see \cite[Corollary 8.3]{Stgrad}).
Adjunction properties \cite[Theorem 8.4]{Stgrad} give \begin{eqnarray*} &&\mathrm{Hom}_{\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'} \}_0^{\mathbb{Z}}} (\theta_s^{out}\theta_s^{on}\Delta({\mathfrak{p}},xw), \overline{\nabla}({\mathfrak{p}},y)\langle k\rangle)\\ &=& \mathrm{Hom}_{\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'} \}_0^{\mathbb{Z}}} (\theta_s^{on}\Delta({\mathfrak{p}},xw), \theta_s^{on}\overline{\nabla}({\mathfrak{p}},y)\langle k+1\rangle). \end{eqnarray*} A character argument shows that $\theta_s^{on}\Delta({\mathfrak{p}},xw)$ is a graded lift of a standard module and $\theta_s^{on}\overline{\nabla}({\mathfrak{p}},y)$ is a graded lift of a proper standard module on the wall, and a direct calculation (using \cite[Theorem 8.1]{Stgrad}) gives \begin{displaymath} \mathrm{Hom}_{\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'} \}_0^{\mathbb{Z}}} (\theta_s^{on}\Delta({\mathfrak{p}},xw), \theta_s^{on}\overline{\nabla}({\mathfrak{p}},y)\langle k+1\rangle)= \begin{cases} \mathbb{C},& y=xws, k=0;\\ \mathbb{C},& y=xw, k=1, ws>w;\\ \mathbb{C},& y=xw, k=-1, ws<w;\\ 0,&\text{ otherwise. } \end{cases} \end{displaymath} Formula \eqref{eqthm53.2} follows and the proof is complete. \end{proof} \subsection{Uniqueness of categorification} Assume that we are still in the situation of Subsection~\ref{s5.4}. \begin{proposition}\label{cunique} Let $\mathbf{R}'_1$ and $\mathbf{R}'_2$ be two right cells of $W'$ inside the same two-sided cell. Then the categories $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}_1'}\}$ and $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'_2}\}$ are equivalent. \end{proposition} \begin{proof} The equivalence between $\mathscr{A}^{\mathbf{R}'_1}$ and $\mathscr{A}^{\mathbf{R}'_2}$ constructed in Theorem~\ref{thm6} extends in a straightforward way to an equivalence between the categories $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'_1}\}$ and $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'_2}\}$. \end{proof} \section{Combinatorics and filtrations of induced cell modules} In this section we first introduce a non-degenerate bilinear form on the induced cell modules and establish a categorical version of it. As a result we get four different distinguished bases in any induced cell module, which we then interpret via four distinguished classes of objects in the corresponding categorification. Afterwards we describe the resulting refined Kazhdan-Lusztig combinatorics and also introduce a natural filtration on induced cell modules which is induced from a natural counterpart on their categorifications. \subsection{Different bases and the combinatorics of induced cell modules} Assume that we are still in the situation of Subsection~\ref{s5.4}. Any cell module $S(\mathbf{R}')$ has a non-degenerate symmetric $\mathds{H}'$-invariant bilinear form $\langle\cdot,\cdot\rangle$, which is unique up to a scalar.
We normalize this form such that its categorification is given by Proposition~\rbraceef{bilinearform}. We first state the following easy lemma: \begin{lemma} The induced module $S(\mathfrak{a}thbf{R}')\otimestimes_{\mathfrak{a}thds H'}\mathfrak{a}thds{H}$ has a non-degenerate symmetric $\mathfrak{a}thds{H}$-invariant bilinear form $(\cdot, \cdot)$ with values in $\mathfrak{a}thbb{Z}[v,v^{-1}]$ given by \begin{eqnarray*} (m\otimestimes H_x, n\otimestimes H_y)=\delta_{x,y}\lbraceambdangle m,n\rbraceangle, \end{eqnarray*} for any $x$, $y\in (W'\mathfrak{a}thbf{a}ckslash W)_{short}$ and $m$, $n\in S(\mathfrak{a}thbf{R}')$. \end{lemma} \begin{proof} The form is obviously symmetric and non-degenerate, since so is $\lbraceambdangle\cdot, \cdot\rbraceangle$. It is left to show the $\mathfrak{a}thds{H}$-invariance. Let $s\in S\subset W$ and $m, n,x,y$ as above. For the rest of the proof it would be convenient to set $X=(m\otimestimes H_xH_s, n\otimestimes H_y)$ and $Y=(m\otimestimes H_x, n\otimestimes H_yH_s)$. Assume first that $xs, ys\in (W'\mathfrak{a}thbf{a}ckslash W)_{short}$. If $xs>x$ and $ys>y$ then $xs\mathfrak neq y$, $x\mathfrak neq ys$ and hence $X=(m\otimestimes H_{xs}, n\otimestimes H_y)=\delta_{xs,y}\lbraceambdangle m,n\rbraceangle=0$, and $Y=(m\otimestimes H_x, n\otimestimes H_{ys})=\delta_{x,ys}\lbraceambdangle m,n\rbraceangle=0$. If $xs>x$ and $ys<y$ then $X=(m\otimestimes H_{xs}, n\otimestimes H_y)=\delta_{xs,y}\lbraceambdangle m,n\rbraceangle$, and $Y=(m\otimestimes H_x, n\otimestimes H_{ys}+(v^{-1}-v)H_{y})=\delta_{x,ys}\lbraceambdangle m,n\rbraceangle=\delta_{xs,y}\lbraceambdangle m,n\rbraceangle$ (as $x\mathfrak neq y$, and $xs=y$ if and only if $x=ys$). If $xs<x$ and $ys>y$ then the argument is analogous (by symmetry). If $xs<x$ and $ys<y$ then $X=(m\otimestimes H_{xs}+(v^{-1}-v)H_{x}, n\otimestimes H_y)=(v^{-1}-v)\delta_{x,y}\lbraceambdangle m,n\rbraceangle$, and $Y=(m\otimestimes H_x, n\otimestimes H_{ys}+(v^{-1}-v)H_{y})=(v^{-1}-v)\delta_{x,y}\lbraceambdangle m,n\rbraceangle$. Now let us assume $xs\mathfrak notin (W'\mathfrak{a}thbf{a}ckslash W)_{short}$, $ys\in (W'\mathfrak{a}thbf{a}ckslash W)_{short}$. We write $xs=s'x$ and get $X=(m H_{s'}\otimestimes H_{x}, n\otimestimes H_y)=\delta_{x,y}\lbraceambdangle m,n\rbraceangle=0$. On the other hand, $Y=(m\otimestimes H_x, n\otimestimes H_{y}H_s)=0$ as $x\mathfrak not\in{y,ys}$. Finally let us assume $xs\mathfrak notin (W'\mathfrak{a}thbf{a}ckslash W)_{short}$, $ys\mathfrak notin (W'\mathfrak{a}thbf{a}ckslash W)_{short}$. We write $xs=s'x$ and $ys=ty$, where $s', t\in W'$ are simple reflections. Then $X=(mH_{s'}\otimestimes H_x, n\otimestimes H_y)\mathfrak not=0$ implies $x=y$, and then also $s'=t$. The same holds if $Y=(m\otimestimes H_x, nH_t\otimestimes H_y)\mathfrak not=0$; and both terms have the same value, namely $\lbraceambdangle mH_{t},n\rbraceangle=\lbraceambdangle m,nH_{t}\rbraceangle$, since $\lbraceambdangle\cdot,\cdot\rbraceangle$ is $\mathfrak{a}thds{H}'$-invariant. The statement of the lemma follows. \end{proof} The involution $h'\mathfrak{a}psto \otimesverline{h'}$ on $\mathfrak{a}thds{H}'$ restricts to an involution on any right cell module and is on the other hand itself the restriction of the involution $h\mathfrak{a}psto\otimesverline{h}$ on $\mathfrak{a}thds{H}$. 
Therefore, we get an involution \begin{eqnarray*} S(\mathfrak{a}thbf{R}')\otimestimes_{\mathfrak{a}thds H'}\mathfrak{a}thds{H}&\rbraceightarrow& S(\mathfrak{a}thbf{R}')\otimestimes_{\mathfrak{a}thds H'}\mathfrak{a}thds{H} \\ m\otimestimes h&\mathfrak{a}psto&\otimesverline{m\otimestimes h}:=\otimesverline{m}\otimestimes\otimesverline{h}. \end{eqnarray*} For $(x,w)\in\mathfrak{a}thds{I}(\mathfrak{a}thbf{R}')$ we define the {\it Kazhdan-Lusztig element} $\underline{H}_x\boxdot{H}_w$ as the unique self-dual element in $S(\mathfrak{a}thbf{R}')\otimestimes_{\mathfrak{a}thds H'}\mathfrak{a}thds{H}$, satisfying \begin{displaymath} \underline{H}_x\boxdot{H}_w\in \underline{H}_x\otimestimes {H}_w + \sum_{(x',w')\in\mathfrak{a}thds{I}(\mathfrak{a}thbf{R}')}v\mathfrak{a}thbb{Z}[v] \underline{H}_{x'}\otimestimes {H}_{w'}. \end{displaymath} The existence and uniqueness of such elements is obtained by standard arguments (see e.g. \cite[Theorem~2.1]{SoKipp}). The induced module $S(\mathfrak{a}thbf{R}')\otimestimes_{\mathfrak{a}thds H'}\mathfrak{a}thds{H}$ has then four distinguished bases: \begin{itemize} \item the {\it Kazhdan-Lusztig basis} (or short {\it KL basis}) given by the Kazhdan-Lusztig elements ${\underline{H}_x\boxdot{H}_w}$, where $x\in \mathfrak{a}thbf{R}'$ and $w\in (W'\mathfrak{a}thbf{a}ckslash W)_{short}$. \item the {\it Kazhdan-Lusztig-standard basis} (or short {\it KL-s basis}) given by the elements $\Delta_{x,w}=\underline{H}_x\otimestimes H_w$, where $x\in \mathfrak{a}thbf{R}'$ and $w\in (W'\mathfrak{a}thbf{a}ckslash W)_{short}$. \item the {\it dual Kazhdan-Lusztig basis} (or short {\it dual KL basis}) which is the dual of the KL-basis with respect to the form $(\cdot, \cdot)$. \item the {\it dual Kazhdan-Lusztig-standard basis} (or short {\it dual KL-s basis}). \end{itemize} These bases have the following categorical interpretation: \begin{theorem}[Combinatorics]\lbraceambdabel{combinatorics} The isomorphism $\Psi_{\mathfrak{a}thbf{R}'}$ from \eqref{Psi} defines the following correspondences: \begin{tabular}[thb]{ccc} \\ \text{KL-s basis}&$\lbraceeftrightarrow$& \text{isoclasses of standard lifts of standard modules}\\ \\ \text{KL basis}&$\lbraceeftrightarrow$& \text{isoclasses of standard lifts of indecomposable projectives}\\ \\ \text{dual KL-s basis}&$\lbraceeftrightarrow$& \text{isoclasses of standard lifts of proper standard modules}\\ \\ \text{dual KL basis}&$\lbraceeftrightarrow$& \text{isoclasses of standard lifts of simple modules}. \\ \end{tabular} \end{theorem} \begin{proof} Let $(x,w)\in\mathfrak{a}thds{I}(\mathfrak{a}thbf{R}')$. The isomorphism class $\lbraceeft[\Delta({\mathfrak{a}thfrak{p}},xw)\rbraceight]$ is mapped to $\Delta_{x,w}$ by definition, hence the first statement of the theorem is obvious. Note that for $w=e$, the module $\Delta({\mathfrak{a}thfrak{p}},x)$ is always projective and $\Delta_{x,e}=\underline{H}_x\boxdot{H}_e= \underline{H}_x\otimestimes {H}_e$. This provides the starting point for an induction argument which proves the remaining part of the theorem. Before we do the induction argument we have to recall a few facts. First recall that for $s\in S$ the functor $\theta_s^l$ sends projectives to projectives, since it is left adjoint to an exact functor. 
The usual weight argument also shows that if $ws\in (W'\backslash W)_{short}$ and $ws>w$ then \begin{eqnarray} \label{eq:dec} \theta_s^lP(\mathfrak{p},xw)\cong P(\mathfrak{p},xws)\oplus\bigoplus_{(y,z)\not=(x,w)} a_{y,z}P(\mathfrak{p},yz), \end{eqnarray} at least if we forget the grading. Since the category $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}$ is by definition a subcategory of $\mathcal{O}$, we can take the projective cover $P(xw)\in\mathcal{O}_0^{\mathbb{Z}}$ of $P(\mathfrak{p},xw)\in \mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0^{\mathbb{Z}}$, and the decomposition \eqref{eq:dec} is controlled by that of $\theta_s^lP(xw)$. In particular, it is of the form stated in \eqref{eq:dec} (even as graded modules). Assume now that the statement of the theorem is true for some $(x,w)\in \mathds{I}(\mathbf{R}')$ and let $s\in S$ be such that $ws>w$ and $ws\in (W'\backslash W)_{short}$. From Theorem~\ref{thm53} we know that $\theta_s^lP(\mathfrak{p},xw)$ corresponds to $H:=(\underline{H}_x\boxdot{H}_w) \underline{H}_s$ under $\Psi_{\mathbf{R}'}$. In particular $H=\underline H_{x}\otimes {H}_{ws}+\sum_{(x',w')\not= (x,ws)}\beta_{x',w'}(v)\underline{H}_{x'}\otimes {H}_{w'}$, where $\beta_{x',w'}(v)\in\mathbb{Z}[v]$. From \eqref{eq:dec} and the explanation afterwards we then get that $P(\mathfrak{p},xws)$ corresponds to \begin{displaymath} H':=(\underline H_x\boxdot H_w)\underline{H}_s-\sum_{(x',w')\not=(x,ws)}\beta_{x',w'}(0) \underline{H}_{x'}\otimes H_{w'}. \end{displaymath} Note that $H'$ can be characterized as the unique self-dual element with the property that $H'\in \underline{H}_x\otimes {H}_{ws}+\sum_{(x',w')}v\mathbb{Z}[v] \underline{H}_{x'}\otimes{H}_{w'}$. The same characterization holds for the element $\underline H_x\boxdot{H}_{ws}$. Hence $\underline{H}_x\boxdot{H}_{ws}$ is mapped to $\left[P(\mathfrak{p},xws)\right]$ under the isomorphism $\Psi_{\mathbf{R}'}$ and the second part of the theorem follows. It is not difficult to verify that the bilinear form $(\cdot,\cdot)$ has again a categorical version as in Proposition~\ref{bilinearform}. In particular, the isomorphism classes of simple modules are dual to the ones of indecomposable projective modules. Finally, the proper standard modules form a dual basis to the costandard modules thanks to the duality on $\mathcal{O}\{\mathfrak{p}, \mathscr{A}^{\mathbf{R}'}\}$ and the usual Ext-orthogonality between standard and proper costandard modules (\cite[Theorem 3]{Fr}). The theorem follows. \end{proof} \begin{example} {\rm Let $W=S_3=\langle s,t\rangle$ and $W'=\langle s\rangle\cong S_2\times S_1$, and choose the right cell $\mathbf{R}'$ of $W'$ corresponding to the (longest) element $s$. Then $(W'\backslash W)_{short}=\{e,t,ts\}$. The categorification $\mathscr{C}^{\mathbf{R}'}$ is then equivalent to the category of graded $R$-modules, where $R=\mathbb{C}[x]/(x^{2})=\operatorname{End}_{\mathfrak{gl}(2)}(\mathtt{P}(s\cdot0))$ as in Example~\ref{ex6}, and $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\} \cong\mathcal{O}_0^{\mathfrak{p}\text{-pres}}$ from Subsection~\ref{special}.
The module $\Delta(\mathfrak{a}thfrak{p},se)$ is projective, hence $\Delta(\mathfrak{a}thfrak{p},se)= P(\mathfrak{a}thfrak{p},se)$. A direct calculation shows that the projective module $P(\mathfrak{a}thfrak{p},st)$ has a standard-filtration of length two, with $\Delta(\mathfrak{a}thfrak{p},st)$ as a quotient, and $\Delta(\mathfrak{a}thfrak{p},se)$ as a submodule; whereas $P(\mathfrak{a}thfrak{p},sts)$ has a standard filtration with $\Delta(\mathfrak{a}thfrak{p},sts)$ occurring as a quotient, $\Delta(\mathfrak{a}thfrak{p},st)$ as a subquotient, and $\Delta(\mathfrak{a}thfrak{p},se)$ as a submodule (see the detailed example in \cite[Section 9]{MS}). On the combinatorial side, the standard basis element $\underline{H}_s\otimestimes H_e$ is a self-dual KL-basis element. The element $\underline{H}_s\otimestimes H_t+v\underline{H}_s\otimestimes H_e$ is a KL-basis element. Now, $\underline{H}_s\otimestimes \underline{H}_{ts}= \underline{H}_s\otimestimes(H_{ts}+v(H_t+H_s)+v^2H_e)$ is self-dual and equal to $\underline{H}_s\otimestimes H_{ts}+v \underline{H}_s\otimestimes H_t+ \underline{H}_s \otimestimes H_e+v^2\underline{H}_s\otimestimes H_e.$ Hence subtracting $\underline{H}_s\otimestimes H_e$ gives $\underline{H}_s\boxdot H_{ts}=\underline{H}_s\otimestimes H_{ts}+v\underline{H}_s\otimestimes H_{t}+v^2\underline{H}_s\otimestimes H_{e}.$ } \end{example} \subsection{Stratifications of induced modules}\lbraceambdabel{s5.5} Let us come back to the examples in Subsection~\rbraceef{s5.2} and assume $W=S_n$ with parabolic subgroup $W'$. Let $\mu$ be the composition of $n$ which defines $W'$ and let $\lbraceambda$ be the corresponding partition. Consider again the permutation module $\mathfrak{a}thcal{M}=\mathfrak{a}thcal{M}^\lbraceambda$ and the irreducible cell module $S(\lbraceambda)$, which specializes to the irreducible Specht module $S^\lbraceambda$ corresponding to $\lbraceambda$. This is naturally a submodule of $\mathfrak{a}thcal{M}^\lbraceambda$. Over the complex numbers, however, $\mathfrak{a}thcal{M}^\lbraceambda$ is completely reducible and contains $S(\lbraceambda)^{\mathfrak{a}thbb{C}}$ as a unique direct summand. Furthermore, over the complex numbers, any finite dimensional (right) $\mathfrak{a}thds{H}^{\mathfrak{a}thbb{C}}$-module $M$ has a decomposition into isotypic components. This special feature is however not independent of the ground field (as Specht module are only indecomposable but not irreducible in general), in particular it is not an integral phenomenon. However, there is a natural filtration of $\mathfrak{a}thcal{M}$ by Specht modules which always exists (see e.g. \cite[4.10 Corollary]{Mathas}). The purpose of this subsection is to give a very natural categorical construction of a somewhat rougher filtration on all induced cell modules. The idea is to use the notion of Gelfand-Kirillov-dimension. Consider the category $\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p},\mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'}\}_0^{\mathfrak{a}thbb{Z}}$. The objects of this category are certain $\mathfrak{a}thfrak{gl}_n$-modules. Any such module $M$ has a well-defined {\it Gelfand-Kirillov-dimension} $\otimesperatornameeratorname{GKdim}(M)$. Recall the following easy facts: \begin{lemma} \lbraceambdabel{GK} \begin{enumerate} \item For any $s\in S\subset W$ we have $\otimesperatornameeratorname{GKdim}(\theta_s^lM)\lbraceeq\otimesperatornameeratorname{GKdim}(M)$. 
\item $\otimesperatornameeratorname{GKdim}(M)=\otimesperatornameeratorname{max}\{\otimesperatornameeratorname{GKdim}(L_j)\}$, where $L_j$ runs through the composition factors of $M$. \end{enumerate} \end{lemma} \begin{proof} See for example \cite[Lemmas 8.6, 8.8 and 8.7(1)]{Ja2}. \end{proof} For any positive integer $j$ we define $(\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p},\mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'}\}_0^{\mathfrak{a}thbb{Z}})_{\lbraceeq j}$ to be the full subcategory of $\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p},\mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'}\}_0^{\mathfrak{a}thbb{Z}}$ consisting of all modules which have Gelfand-Kirillov dimension at most $j$. From the Lemma above it follows that this subcategory is closed under taking submodules, quotients and extensions, and also stable under translations through walls. Therefore, we have a filtration of the $\mathfrak{a}thds{H}$-module $\lbraceeft[\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p}, \mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'}\}_0^{\mathfrak{a}thbb{Z}}\rbraceight]$. For simplicity we relabel the filtration such that we have: \begin{multline*}\lbraceambdabel{gkdime1} \{0\}\subsetneq \lbraceeft[(\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p},\mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'} \}_0^{\mathfrak{a}thbb{Z}})_1\rbraceight] \subsetneq \lbraceeft[(\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p}, \mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'}\}_0^{\mathfrak{a}thbb{Z}})_2\rbraceight] \subsetneq \cdots\\ \cdots\subsetneq\lbraceeft[(\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p},\mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'} \}_0^{\mathfrak{a}thbb{Z}})_r\rbraceight]= \lbraceeft[\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p}, \mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'} \}_0^{\mathfrak{a}thbb{Z}}\rbraceight] \end{multline*} The set of partitions of $n$ is ordered via the so-called dominance ordering which we denote by $\unrhd$. Given two partitions $\mathfrak nu=\mathfrak nu_1\mathfrak geq \mathfrak nu_2\lbracedots$ and $\mu=\mu_1\mathfrak geq \mu_2\lbracedots$ we have $\mathfrak nu\unrhd\mu$ if and only if $\sum_{j=1}^i\mathfrak nu_j\mathfrak geq\sum_{j=1}^i\mu_j$ for any $i\mathfrak geq 1$. The simple composition factors of the module $\mathfrak{a}thcal{M}^\lbraceambda$ are all of the form $S(\mu)$, where $\lbraceambda\unlhd\mu$ (see e.g. \cite[4.10, Exercise 1]{Mathas} or \cite[Corollary~2.4.7]{Sa}). The following result is the technical formulation of a fact which is quite easy to describe: For every induced cell module $\mathfrak{a}thds{H}$-module $S(\mathfrak{a}thbf{R}')\otimestimes_{\mathfrak{a}thds H'}\mathfrak{a}thds{H}$ we have a corresponding categorification, hence an attached category $\mathscr{C}$, of modules over some Lie algebra. The Gelfand-Kirillov dimension induces a filtration on $\mathscr{C}$ that corresponds to a filtration of $S(\mathfrak{a}thbf{R}')\otimestimes_{\mathfrak{a}thds H'}\mathfrak{a}thds{H}$ which is an analogue of the Specht filtration of the induced cell module given by the dominance ordering. More precisely we have the following: \begin{theorem}\lbraceambdabel{GKdim} Assume that we are in the setup of Subsection~\rbraceef{s5.4}. For $i\mathfrak geq 0$ set \begin{displaymath} Q_i=\{v\in S(\mathfrak{a}thbf{R}')\otimestimes_{\mathfrak{a}thds{H}'}\mathfrak{a}thds{H}\,:\, \Psi_{\mathfrak{a}thbf{R}'}(v)\in \lbraceeft[(\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p}, \mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'}\}_0^{\mathfrak{a}thbb{Z}})_i\rbraceight]\}. 
\end{displaymath} Then we have: \begin{enumerate}[(i)] \item \label{GKdim.1} $Q_0\subsetneq Q_1\subsetneq\dots\subsetneq Q_r= S(\mathbf{R}')\otimes_{\mathds{H}'}\mathds{H}$ is a filtration of the induced cell module $S(\mathbf{R}')\otimes_{\mathds{H}'}\mathds{H}$. \item \label{GKdim.2} Assume that $S(\lambda)$ occurs in the $i$-th and $S(\mu)$ in the $j$-th step of the filtration \eqref{GKdim.1}, respectively. Then $\lambda\lhd\mu$ implies $i<j$ (in other words: if $\lambda\lhd\mu$ then $S(\lambda)$ occurs earlier than $S(\mu)$ as a subquotient of \eqref{GKdim.1}). \item \label{GKdim.3} All subquotients of the filtration \eqref{GKdim.1} are direct sums of Specht modules. \item \label{GKdim.4} In the permutation module $\mathcal{M}^{\lambda}$ the Specht submodule $S(\lambda)$ coincides with $Q_1$ (i.e. it is given by the subcategory of modules of the minimal possible Gelfand-Kirillov dimension from $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0^{\mathbb{Z}}$). \end{enumerate} \end{theorem} \begin{proof} The statement \eqref{GKdim.1} follows from Theorem~\ref{thm53} and the definitions. To prove \eqref{GKdim.2} recall that there is Joseph's explicit formula (see e.g. \cite[10.11 (2)]{Ja2}) \begin{equation}\label{gkdimf1} 2\operatorname{GKdim} (L(w\cdot0))=n(n-1)-\sum_{i}\mu_i(\mu_i-1), \end{equation} where $\mu$ is the shape of the tableau associated with $w\in S_n$ via the Robinson-Schensted correspondence (in particular, simple modules in the same right cell have the same Gelfand-Kirillov dimension). Hence the statement \eqref{GKdim.2} follows from Lemma~\ref{question} below, since for two partitions $\mu$ and $\nu$ of $n$ we have $\sum_{i}\mu_i(\mu_i-1)< \sum_{i}\nu_i(\nu_i-1)$ if and only if $\sum_{i}\mu_i^2<\sum_{i}\nu_i^2$. \begin{lemma}\label{question} Let $\mu$ and $\nu$ be partitions of $n$ and let $l$ be the maximum of the lengths of these partitions. Then $\mu\lhd\nu$ implies $\sum_{i=1}^n\mu_i^2<\sum_{i=1}^n\nu_i^2$. \end{lemma} \begin{proof} If $l=2$ then $2(\mu_1^2+\mu_2^2)=(\mu_1+\mu_2)^2+(\mu_1-\mu_2)^2< (\nu_1+\nu_2)^2+(\nu_1-\nu_2)^2=2(\nu_1^2+\nu_2^2)$. We will do induction on $l$. Without loss of generality assume $\mu_i\not=\nu_i$ for $1\leq i\leq l$. Choose now $i$ minimal such that $\mu_i<\nu_i$, but $\mu_{i+1}>\nu_{i+1}$, and set $m:=\min\{\mu_i-\nu_i, \nu_{i+1}-\mu_{i+1}\}$. It is easy to check that we get a new partition $\sigma$, where $\sigma_i=\mu_i-m$, $\sigma_{i+1}=\mu_{i+1}+m$ and $\sigma_j=\mu_j$ for all other $j$. Note that $\sigma_k=\nu_k$ for some $k\in\{i,i+1\}$. So we may apply the induction hypothesis to the partitions $\sigma$ and $\nu$ with the common part removed. On the other hand $(\mu_i,\mu_{i+1})\lhd(\sigma_{i}, \sigma_{i+1})$ satisfies the assumption of the lemma, hence $\mu_i^2+\mu_{i+1}^2<\sigma_i^2+\sigma_{i+1}^2$ and so $\sum_{j=1}^l\mu_j^2<\sum_{j=1}^l\sigma_j^2<\sum_{j=1}^l\nu_j^2$. \end{proof} From \eqref{GKdim.2} and \cite[Theorem~5.1]{Ge} it follows that the indexing partitions of the Specht modules occurring in a fixed subquotient of the filtration from \eqref{GKdim.1} are not comparable in the right order.
This implies \eqref{GKdim.3}. The claim \eqref{GKdim.4} follows immediately from \eqref{GKdim.2} and \cite[Corollary~2.4.7]{Sa} (see the remark before the formulation of the theorem). \end{proof} \begin{remark}\lbraceambdabel{gkremark} {\rbracem Using \cite[Corollary~2.4.7]{Sa} and \cite[4.10 Corollary]{Mathas} one can construct the following natural integral filtration of the permutation module $\mathfrak{a}thcal{M}^{\lbraceambdambda}$: For the first step of the filtration we take the submodule $S(\lbraceambda)$ (note again that $\lbraceambda$ is the minimal partition, with respect to the dominance order, amongst the partitions indexing the subquotients of $\mathfrak{a}thcal{M}^{\lbraceambdambda}$). To construct the second step in the quotient we take the direct sum of all Specht modules, whose partitions are minimal elements in the dominance order among all other partitions which occur; and so on. For $n\lbraceeq 6$ the constructed {\em dominance order filtration} will coincide with the one given by Theorem~\rbraceef{GKdim}\eqref{GKdim.1}. However, already for $n=7$ one gets that the filtration given by Theorem~\rbraceef{GKdim}\eqref{GKdim.1} is a proper refinement of the dominance order filtration. For example if we take $n=7$ and the permutation module corresponding to the sign representation (this permutation module is isomorphic to the regular representation of the Hecke algebra), it turns out that the `dominance order filtration' contains a step in which the subquotients are Specht modules corresponding to partitions $(5,1,1)$ and $(4,3)$. However, $5^2+1^2+1^2=27\mathfrak neq 25=4^2+3^2$ and hence these Specht modules occur in different layers of the filtration given by Theorem~\rbraceef{GKdim}\eqref{GKdim.1} because of \eqref{gkdimf1}. } \end{remark} \section{An alternative categorification of the permutation module}\lbraceambdabel{s6} In this section we propose an alternative categorification of the permutation parabolic modules. The connection to the categorification from Subsection~\rbraceef{permmodule} is not completely obvious (but can be made precise using \cite[6.4-6.5]{MOS}). Let $W'$ be a subgroup of $W$ and let $\lbraceambda\in\mathfrak h^*_{dom}$ be an integral weight with stabilizer $W'$ with respect to the dot-action. The isomorphism classes of the Verma modules in $\mathfrak{a}thcal{O}_\lbraceambda$ are exactly given by the $M(x\cdot\lbraceambda)$, where $x\in (W/W')_{short}$. For any simple reflection $s\in S$, the {\it twisting functor} $T_s:\mathcal{O}\rbraceightarrow \mathcal{O}$ (see Subsection~\rbraceef{s25.5}) preserves blocks, in particular induces $T_s:\mathcal{O}_\lbraceambda\rbraceightarrow{}\mathcal{O}_\lbraceambda$. The most convenient description (for our purposes) of these functors is given in \cite{KM} in terms of partial coapproximation: Let $M\in{}\mathcal{O}_\lbraceambda$ be projective. Let $M'\subset M$ be the smallest submodule such that $M/M'$ has only composition factors of the form $L(x\cdot\lbraceambda)$, where $sx>x$. Then $M\mathfrak{a}psto M'$ defines a functor $T_s$ from the additive category of projective modules in ${}\mathcal{O}_\lbraceambda$ to ${}\mathcal{O}_\lbraceambda$. This functor extends in a unique way to a right exact functor $T_s:\mathcal{O}_\lbraceambda\rbraceightarrow{}\mathcal{O}_\lbraceambda$ (for details see \cite{KM}). From this definition of $T_s$ it is immediately clear that this functor is gradable. 
More precisely, we have the following: \begin{lemma}\label{twist}{\rm (\cite[Proposition 5.1]{FKS})} For any simple reflection $s\in W$ and integral weight $\lambda\in\mathfrak h^*_{dom}$, the twisting functor $T_s:{}\mathcal{O}_\lambda\rightarrow{}\mathcal{O}_\lambda$ is gradable. A graded lift is unique up to isomorphism and shift in the grading. \end{lemma} \begin{proposition}\label{SchurWeyl} Let $s\in S$. \begin{enumerate} \item The twisting functor $T_s$ is right exact, and exact when restricted to the subcategory $\mathcal{V}_\lambda$ of $\mathcal{O}_\lambda$ of modules having a filtration with subquotients isomorphic to Verma modules. \item One can choose a graded lift ${\bf T}_s$ satisfying the following properties: \begin{align} \label{Rrel} \begin{split} &\big[{\bf T}_s M(x\cdot\lambda)\big]\\ &=\begin{cases} [M(sx\cdot\lambda)]+(v^{-1}-v)[M(x\cdot\lambda)]& \text{ if $sx<x$, $sx\in (W/W')_{short}$},\\ [M(sx\cdot\lambda)]&\text{ if $sx>x$, $sx\in (W/W')_{short}$},\\ v^{-1}[M(x\cdot\lambda)]&\text{ if $sx\notin (W/W')_{short}$.} \end{cases} \end{split} \end{align} \item There is an isomorphism of (left) $\mathbb{Z}[W]$-modules \begin{eqnarray*} \Psi_\lambda:\mathbb{Z}[W]\otimes_{\mathbb{Z}[W']}\mathbb{Z}&\longrightarrow& [\mathcal{D}^b(\mathcal{O}_\lambda)]\\ x\otimes 1&\longmapsto&\left[ M(x\cdot\lambda)\right], \end{eqnarray*} where the $\mathbb{Z}[W']$-structure on $\mathbb{Z}$ is trivial, and the $\mathbb{Z}[W]$-structure on the right hand side is induced by the action of the left derived twisting functors $\mathcal{L} T_s$. \end{enumerate} \end{proposition} \begin{proof} The first statement follows directly from \cite[Lemma 2.1]{AS}. If we forget the grading (and put $v=1$) then the second statement follows directly from \cite[Theorem 6.2, Definition 5.1 (ii)]{AL} and implies the last statement. For the graded setup we refer to the proof of \cite[Proposition 5.2]{FKS}. \end{proof} \section{Remarks on Schur-Weyl dualities}\label{s7} For completeness we would like to formulate here a categorical version of the Schur-Weyl duality generalizing the approach of \cite{FKS}. Complete proofs and also a geometric interpretation in terms of generalized Steinberg varieties will appear in \cite{SS}. \subsection{For permutation parabolic modules}\label{s7.1} We assume again the setup of Subsection~\ref{s5.2}. Let $\lambda, \mu\in\mathfrak h^*_{dom}$ be integral. If $F:\mathcal{O}_\lambda\rightarrow\mathcal{O}_\mu$ is a projective functor then it induces a homomorphism $F^G:[\mathcal{O}_\lambda]\rightarrow [\mathcal{O}_\mu]$. Since finite direct sums of projective functors are again projective functors, they form a monoid. On the other hand, the composition of two projective functors (if defined) is again a projective functor. The same holds if we work in the graded setup with graded translation functors between the graded versions $\mathcal{O}_\lambda^{\mathbb{Z}}$ and $\mathcal{O}_\mu^{\mathbb{Z}}$ of $\mathcal{O}_\lambda$ and $\mathcal{O}_\mu$ (see \cite{BGS}).
This means we have the additive category of (graded) projective functors from $ \mathcal{O}_\lbraceambda^{\mathfrak{a}thbb{Z}}$ to $\mathcal{O}_\mu^{\mathfrak{a}thbb{Z}}$ with its complexified split Grothendieck group $\lbraceeft[\text{ projective functors: }\mathcal{O}_\lbraceambda\rbraceightarrow\mathcal{O}_\mu\rbraceight]_{\otimesperatornamelus}^{\mathfrak{a}thbb{C}}$. \begin{theorem} With the notation from Subsection~\rbraceef{s5.2} we have the following: There is an isomorphism of $\mathfrak{a}thbb{C}[v,v^{-1}]$-modules \begin{eqnarray*} \Psi_{\lbraceambda,\mu}:\quad\lbraceeft[\text{ projective functors: }\mathcal{O}_\lbraceambda\rbraceightarrow\mathcal{O}_\mu\rbraceight]_{\otimesperatornamelus}^{\mathfrak{a}thbb{C}}&\cong& \mathfrak{a}thrm{Hom}_{\mathfrak{a}thds{H}^{\mathfrak{a}thbb{C}}}(\mathfrak{a}thcal{M}^\lbraceambda,\mathfrak{a}thcal{M}^\mu)\\ F&\mathfrak{a}psto&\Psi_\mu^{-1} F^G\Psi_\lbraceambda. \end{eqnarray*} \end{theorem} The latter result is true for any reductive complex Lie algebra $\mathfrak{a}thfrak{g}$. In the following we assume however $\mathfrak{a}thfrak{g}=\mathfrak{a}thfrak{sl}_n$. For any Young subgroup $S_\lbraceambda$ of $S_n$ we {\bf pick} some integral weight $\lbraceambda\in\mathfrak h^*_{dom}$ where $W_\lbraceambda\cong S_\lbraceambda$. Let $\Lambda$ be the set of all these $\lbraceambdambda$'s. For any positive integer $d$ let $\Lambda(d)$ denote the subset of $\Lambda$ whose elements correspond to partitions with at most $d$ rows. The complexified Grothendieck group of all projective functors from $\otimesperatornamelus_{\lbraceambda\in\Lambda(d)}\mathcal{O}_\lbraceambdambda^{\mathfrak{a}thbb{Z}}$ to $\otimesperatornamelus_{\lbraceambda\in\Lambda(d)}\mathcal{O}_\lbraceambdambda^{\mathfrak{a}thbb{Z}}$ has also a multiplication induced from the composition of projective functors which induces a ring structure. Let $\otimesperatorname{Func}(d)$ denote the complexification of this Grothendieck ring of all projective functors from $\otimesperatornamelus_{\lbraceambda\in\Lambda(d)}\mathcal{O}_\lbraceambdambda^{\mathfrak{a}thbb{Z}}$ to $\otimesperatornamelus_{\lbraceambda\in\Lambda(d)}\mathcal{O}_\lbraceambdambda^{\mathfrak{a}thbb{Z}}$. Finally let $\mathfrak{a}thbf{S}^{\mathfrak{a}thbb{C}}_{\mathfrak{a}thbb{Z},v}(d,n)=\otimesperatornameeratorname{End}_{\mathfrak{a}thds{H}}(\otimesperatornamelus_{\lbraceambda\in \Lambda(d)}M^\lbraceambda)$ be the (generic) Schur algebra attached to the numbers $d$, $n$. Then the following holds \begin{theorem} \lbraceambdabel{Schuralgebra} There is an isomorphism of $\mathfrak{a}thbb{C}[v,v^{-1}]$-algebras \begin{eqnarray*} \otimesperatornameeratorname{Func}(d)\;\cong \mathfrak{a}thbf{S}_{\mathfrak{a}thbb{Z},v}^{\mathfrak{a}thbb{C}}(d,n). \end{eqnarray*} \end{theorem} The double centralizer property (see \cite[Theorem 4.19]{Mathas}) of the Hecke algebra $\mathfrak{a}thds{H}^{\mathfrak{a}thbb{C}}$ for the symmetric group $S_n$ is an isomorphism \begin{eqnarray*} \mathfrak{a}thds{H}^{\mathfrak{a}thbb{C}}\cong\otimesperatornameeratorname{End}_{\mathfrak{a}thbf{S}^{\mathfrak{a}thbb{C}}_{\mathfrak{a}thbb{Z},v}(d,n)} (\otimesperatornamelus_{\lbraceambda\in\Lambda}\mathfrak{a}thcal{M}^\lbraceambda). \end{eqnarray*} It is well-known (see \cite[Theorem 3.2]{AS}) that twisting functors commute naturally (in the sense of \cite{Khom}) with translation functors. 
From Proposition~\ref{SchurWeyl} we know that the permutation parabolic modules can be categorified via certain singular blocks of category $\mathcal{O}$ together with the action of the twisting functors. Together with the remarks of this section one can deduce the following categorical version of the double centralizer property: {\it The left derived functors of the graded versions of twisting functors categorify the above action of the Schur algebra and commute naturally with projective functors.} \subsection{For sign parabolic modules}\label{s7.2} Here we get the analogous result using Koszul duality (\cite{BGS}). Translation functors should be replaced by the so-called Zuckerman functors and twisting functors should be replaced by Irving's shuffling functors. For the Koszul duality of these functors see \cite[Section 6]{MOS} and \cite{Steen}. \section{Properties of $\mathscr{X}:=\mathcal{O}\{\mathfrak{p}, \mathscr{A}^{\mathbf{R}'}\}$ in the case of type $A$}\label{s8} This section describes in more detail the categories $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}$, which were used to categorify induced cell modules, in the special case where $\mathfrak{g}=\mathfrak{sl}_n$. We will describe projective-injective modules and the associated Serre functor, and show that the categories are always Ringel self-dual. From now on we assume that we are in the situation of Subsection~\ref{s5.4} and will use the notation introduced there. Additionally we assume that the Lie algebra $\mathfrak{g}$ is of type $A$. We fix a right cell $\mathbf{R}'$ of $W'$ and for simplicity put \begin{displaymath} \mathscr{X}:=\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}_0. \end{displaymath} For $(x,w)\in \mathds{I}(\mathbf{R}')$ we denote by $L^{\mathscr{X}}(xw)$ the simple object of $\mathscr{X}$ which corresponds to $x$ and $w$. We also have the corresponding standard module $\Delta^{\mathscr{X}}(xw)$, proper standard module $\overline{\Delta}^{\mathscr{X}}(xw)$, indecomposable projective module $P^{\mathscr{X}}(xw)$, and indecomposable injective module $I^{\mathscr{X}}(xw)$. We further denote by $T^{\mathscr{X}}(xw)$ the indecomposable tilting module in $\mathscr{X}$ whose standard filtration starts with a submodule $\Delta^{\mathscr{X}}(xw)$ (see \cite[4.2]{Fr} for its existence and properties). We denote by $w_0'$ the longest element in $W'\subset W$ and by $\overline{w}=w'_0w_0$ the longest element in $(W'\backslash W)_{short}$. \subsection{Irving-type properties}\label{s8.1} The following theorem is a generalization of both \cite[Main result]{Irself} and \cite[Proposition~4.3]{Irself}. \begin{theorem}\label{irving} Let $(x,w)\in \mathds{I}(\mathbf{R}')$. Then the following conditions are equivalent: \begin{enumerate}[(a)] \item \label{irving.1} $L^{\mathscr{X}}(xw)$ occurs in the socle of some standard module from $\mathscr{X}$. \item \label{irving.2} $L^{\mathscr{X}}(xw)$ occurs in the socle of some proper standard module from $\mathscr{X}$. \item \label{irving.25} $L^{\mathscr{X}}(xw)$ occurs in the socle of some tilting module from $\mathscr{X}$.
\item \label{irving.3} $P^{\mathscr{X}}(xw)$ is injective. \item \label{irving.4} $P^{\mathscr{X}}(xw)$ is tilting. \item \label{irving.5} $xw\in W$ belongs to the same right cell $\tilde{\mathbf{R}}$ of $W$ as $x\overline{w}$. \end{enumerate} \end{theorem} \begin{remark}\label{rem070106-1} {\rm As $\mathbf{R}'$ is a right cell of $W'$, we have that with our fixed $\overline{w}$ all the $y\overline{w}$, where $y$ runs through $\mathbf{R}'$, are in the same right cell of $W$. We denote this right cell by $\tilde{\mathbf{R}}$, see the condition \eqref{irving.5} above. } \end{remark} \begin{proof} Since parabolic induction is exact, Consequence \eqref{cons1} from Subsection~\ref{s4.2} and the definition of proper standard modules as induced simple modules imply that $\overline{\Delta}^{\mathscr{X}}(xw)$ is a submodule of $\Delta^{\mathscr{X}}(xw)$, hence \eqref{irving.2}$\Rightarrow$\eqref{irving.1}. Since any standard module has a proper standard filtration we also have \eqref{irving.1}$\Rightarrow$\eqref{irving.2}. Analogously, as $\Delta^{\mathscr{X}}(xw)\subset T^{\mathscr{X}}(xw)$ and tilting modules have standard filtrations, the equivalence \eqref{irving.1}$\Leftrightarrow$ \eqref{irving.25} is clear. Consequence \eqref{cons1} from Subsection~\ref{s4.2} implies that for each $x\in \mathbf{R}'\subset W'\subset W$ the module $\Delta^{\mathscr{X}}(x\overline{w})$ is both standard and costandard, hence tilting, and that the socle of $\Delta^{\mathscr{X}}(x\overline{w})$ is isomorphic to $L^{\mathscr{X}}(x\overline{w})$. Let $\theta$ be a projective functor and $\theta'$ be its adjoint. For any $(y,w)\in \mathds{I}(\mathbf{R}')$ we have \begin{displaymath} \mathrm{Hom}_{\mathscr{X}}(L^{\mathscr{X}}(yw),\theta \Delta^{\mathscr{X}}(x\overline{w}))= \mathrm{Hom}_{\mathscr{X}}(\theta' L^{\mathscr{X}}(yw), \Delta^{\mathscr{X}}(x\overline{w})). \end{displaymath} Since projective functors respect the right order (Proposition~\ref{prop5}), the latter space can be non-zero only if $yw\geq_{\mathsf{R}}x\overline{w}$ in the right order. Since $\overline{w}$ is the longest element in $(W'\backslash W)_{short}$ and $\mathbf{R}'$ is a right cell, it follows that $yw$ is in the same right cell as $x\overline{w}$. From the proof of Theorem~\ref{thm53} (namely from the formula \eqref{eqthm53.2}) it follows that, translating the tilting module $\Delta^{\mathscr{X}}(x\overline{w})$ inductively through the walls, we obtain, as direct summands, all indecomposable tilting modules in $\mathscr{X}$. The equivalence \eqref{irving.25}$\Leftrightarrow$\eqref{irving.5} follows. A module which is projective and injective is in particular tilting (since it has a standard and a proper costandard filtration). On the other hand, a tilting module has by definition a standard filtration and a proper costandard filtration, but by the construction described above even a costandard filtration and a proper standard filtration. Hence the dual module of a tilting module is again tilting. By weight arguments, it is isomorphic to the original tilting module.
Hence a projective tilting module is also injective and so \eqref{irving.3}$\Leftrightarrow$\eqref{irving.4}. By Proposition~\rbraceef{p22}, the category $\mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'}$ has a simple projective module. Using this and \cite[Theorem~1]{DOF} one shows that $\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p},\mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'}\}$ has a simple projective module (this statement also follows from Consequence~\eqref{cons3} and \cite[3.1]{IS}). Translating this module out of the wall one gets that there is at least one indecomposable projective module in $\mathfrak{a}thscr{X}$ which is also injective. As we have seen already, this module must be then of the form $P^{\mathfrak{a}thscr{X}}(xw)$ for some $(x,w)\in\mathfrak{a}thds{I}(\mathfrak{a}thbf{R}')$ such that $xw\in\tilde{\mathfrak{a}thbf{R}}$. Applying to $P^{\mathfrak{a}thscr{X}}(xw)$ projective functors we get that $P^{\mathfrak{a}thscr{X}}(yu)$ is both projective and injective for all $(y,u)\in\mathfrak{a}thds{I}(\mathfrak{a}thbf{R}')$ such that $yu\in\tilde{\mathfrak{a}thbf{R}}$. Hence, finally, \eqref{irving.3}$\Leftrightarrow$\eqref{irving.5}. \end{proof} \begin{remark} {\rbracem The following statements from Theorem~\rbraceef{irving} do not require the additional assumption that $\mathfrak{a}thfrak{g}$ is of type $A$: $\eqref{irving.1} \Leftrightarrow\eqref{irving.2}\Leftrightarrow\eqref{irving.25}\Leftrightarrow \eqref{irving.5}$, $\eqref{irving.3}\Leftrightarrow\eqref{irving.4} \mathbb Rightarrow\eqref{irving.25}$. We use that $\mathfrak{a}thfrak{g}$ is of type $A$ when we refer to \cite[3.1]{IS} in the last paragraph of the proof (in particular, using \cite{IS} the complete statement of Theorem~\rbraceef{irving} extends to some other special cases treated in \cite{IS}, but not to the general case because of the counterexample from \cite[5.1]{IS}). We believe, however, that the whole Theorem~\rbraceef{irving} holds for arbitrary type, but do not have a complete argument. Basically, to complete the proof for arbitrary type one has to show that $\mathfrak{a}thscr{X}$ always contains a projective-injective module. } \end{remark} \subsection{Double centralizer and the center}\lbraceambdabel{s8.2} Recall that an algebra $R$ has the {\it double centralizer property} with respect to an $R$-module $M$, if there is an algebra isomorphism \begin{displaymath} R\cong\mathfrak{a}thrm{End}_{\mathfrak{a}thrm{End}_{R}(M)}(M). \end{displaymath} If now $R$ has a double centralizer property with respect to a module $M$ and $\mathfrak{a}thscr{C}\cong \mathfrak{a}thrm{mod-}R$, then we also say that $\mathfrak{a}thscr{C}$ has the double centralizer property (with respect to the image of $M$ under the equivalence). We call a module $M\in \mathfrak{a}thrm{mod-}R$ projective-injective, if it is both projective and injective. If it is a direct sum of non-isomorphic indecomposable projective-injective modules, exactly one from each isomorphism class, then we call the module a {\it full basic projective-injective module}. The following statement is a generalization of \cite[Theorem~5.2(ii)]{MS2}: \begin{proposition}\lbraceambdabel{pr991} The category $\mathfrak{a}thscr{X}$ satisfies the double centralizer property with respect to any full basic projective-injective module. \end{proposition} To prove the statement we first need a generalization of \cite[Lemma~4.7]{MS2}: \begin{lemma}\lbraceambdabel{l992} Let $\mathfrak{a}thbf{R'}$ be a right cell in $W'\subset W$ and $x\in \mathfrak{a}thbf{R'}$. 
Then the socle of $\Delta^{\mathscr{X}}(xe)\in \mathscr{X}$ is simple. \end{lemma} \begin{proof} Theorem~\ref{irving} ensures the existence of projective-injective tilting modules in $\mathscr{X}$. Translation functors preserve the category of projective-injective tilting modules. Any translation of a module with standard filtration has a standard filtration. Further, from the combinatorics in the proof of Theorem~\ref{thm53} it follows that any standard module can be translated to a module whose standard filtration contains $\Delta^{\mathscr{X}}(xe)$ as a subquotient. Therefore, any projective-injective tilting module can be translated to some projective-injective tilting module $T$ whose standard filtration contains $\Delta^{\mathscr{X}}(xe)$ as a subquotient. The module $T$ contains $\Delta^{\mathscr{X}}(xe)$ as a submodule since $\Delta^{\mathscr{X}}(xe)$ is projective, and hence $T^{\mathscr{X}}(xe)$ is a direct summand of $T$. Thus $T^{\mathscr{X}}(xe)$ is projective-injective and, in particular, has simple socle. As $\Delta^{\mathscr{X}}(xe)\hookrightarrow T^{\mathscr{X}}(xe)$, the claim follows. \end{proof} \begin{proof}[Proof of Proposition~\ref{pr991}] Let $x\in\mathbf{R}'$. Then the inclusion $\Delta^{\mathscr{X}}(xe)\hookrightarrow T^{\mathscr{X}}(xe)$ extends to a short exact sequence of the following form: \begin{equation}\label{eq991-1} 0\rightarrow \Delta^{\mathscr{X}}(xe)\rightarrow T^{\mathscr{X}}(xe)\rightarrow K\rightarrow 0, \end{equation} where $K\in \mathscr{F}(\Delta^{\mathscr{X}})$. The module $T^{\mathscr{X}}(xe)$ is projective-injective by (the proof of) Lemma~\ref{l992}. Projective functors are exact and preserve $\mathscr{F}(\Delta^{\mathscr{X}})$. Hence, applying to \eqref{eq991-1} appropriate projective functors and taking the direct sum over all $(y,w)\in \mathds{I}(\mathbf{R}')$, we get an exact sequence \begin{displaymath} 0\rightarrow P^{\mathscr{X}}\rightarrow M_1\rightarrow M_2\rightarrow 0, \end{displaymath} where $P^{\mathscr{X}}$ is a projective generator for $\mathscr{X}$, while $M_1$ is projective-injective and $M_2\in \mathscr{F}(\Delta^{\mathscr{X}})$. By Theorem~\ref{irving}, the injective envelope of $M_2$ is projective. The statement now follows from \cite[Theorem~2.8]{KSX}. \end{proof} \begin{corollary}\label{c993} Let $Q^{\mathscr{X}}$ denote a full basic projective-injective module of $\mathscr{X}$. Then the centers of $\mathscr{X}$ and $\mathrm{End}_{\mathscr{X}}(Q^{\mathscr{X}})$ are isomorphic. \end{corollary} \begin{proof} The center of $\mathscr{X}$ is isomorphic to the center of $\mathrm{End}_{\mathscr{X}}(P^{\mathscr{X}})$, where $P^{\mathscr{X}}$ is a projective generator. Thanks to Proposition~\ref{pr991}, the centers of $\mathrm{End}_{\mathscr{X}}(P^{\mathscr{X}})$ and $\mathrm{End}_{\mathscr{X}}(Q^{\mathscr{X}})$ are isomorphic (see \cite[Theorem~5.2(ii)]{MS2} for details).
\end{proof} Because of Proposition~\ref{cunique} we can now assume that there is a parabolic subgroup $W''$ of $W'$ such that $\mathbf{R}'$ contains the element $w''_0w'_0$, where $w'_0$ and $w''_0$ denote the longest elements in $W'$ and $W''$, respectively. Set $S''=W''\cap S'$. Let $\mathfrak{q}$ denote the parabolic subalgebra of $\mathfrak{g}$ which contains $\mathfrak{b}$ and is such that the Weyl group of its Levi factor is $W''$. Both $\mathscr{X}$ and $\mathcal{O}_0^{\mathfrak{q}}$ are subcategories of the category $\mathcal{O}_0$ for $\mathfrak{g}$ and we have the following result: \begin{lemma}\label{Tscoincide} \begin{enumerate}[(i)] \item\label{Tscoincide.1} $\mathscr{X}$ is a subcategory of $\mathcal{O}_0^{\mathfrak{q}}$. \item\label{Tscoincide.2} The projective-injective modules in $\mathscr{X}$ and $\mathcal{O}_0^{\mathfrak{q}}$, considered as objects in $\mathcal{O}_0$, coincide. \end{enumerate} \end{lemma} \begin{proof} For $s\in S''$ we obviously have $sw''_0w'_0>w''_0w'_0$. As $w''_0w'_0\in \mathbf{R}'$ and $\mathbf{R}'$ is a right cell, it follows that $sxw>xw$ for all $s\in S''$ and $(x,w)\in\mathds{I}(\mathbf{R}')$. In particular, $L^{\mathscr{X}}(xw)\in \mathcal{O}_0^{\mathfrak{q}}$ for all $(x,w)\in\mathds{I}(\mathbf{R}')$, which implies \eqref{Tscoincide.1}. Consider now the element $w''_0w'_0\overline{w}=w''_0w_0'w_0'w_0=w_0''w_0$. Then the module $P^\mathfrak{q}(w_0''w_0)$ is projective-injective in $\mathcal{O}_0^{\mathfrak{q}}$ (see \cite{KMS} for details). As a $\mathfrak{g}$-module, the module $P^{\mathscr{X}}(w''_0w'_0\overline{w})$ has simple top $L(w_0''w_0\cdot 0)$. Hence, as $P^{\mathscr{X}}(w''_0w'_0\overline{w})\in \mathcal{O}_0^{\mathfrak{q}}$ by \eqref{Tscoincide.1}, we get that $P^{\mathscr{X}}(w''_0w'_0\overline{w})$ is a quotient of $P^\mathfrak{q}(w_0''w_0)$. On the other hand, from the existence of a simple projective module in $\mathcal{O}^{\mathfrak{q}}$ (see \cite[3.1]{IS}) it follows that $P^\mathfrak{q}(w_0''w_0)$ is a direct summand of some translation of $L(w_0''w_0)$ (see Consequence~\ref{cons3} in Subsection~\ref{s4.2}), which, in turn, is the simple quotient of $P^{\mathscr{X}}(w''_0w'_0\overline{w})$. Hence $P^\mathfrak{q}(w_0''w_0)$ is a quotient of some translation of $P^{\mathscr{X}}(w''_0w'_0\overline{w})$. As $P^\mathfrak{q}(w_0''w_0)$ has simple top, it follows that the only possibility is that $P^\mathfrak{q}(w_0''w_0)$ is a quotient of $P^{\mathscr{X}}(w''_0w'_0\overline{w})$. The above implies that the $\mathfrak{g}$-modules $P^\mathfrak{q}(w_0''w_0)$ and $P^{\mathscr{X}}(w_0''w_0'\overline{w})$ are isomorphic, and the claim \eqref{Tscoincide.2} follows by applying projective functors. \end{proof} Lemma~\ref{Tscoincide} implies the following result: \begin{proposition}\label{pcentre} The algebra $\mathrm{End}_{\mathscr{X}}(Q^{\mathscr{X}})$ is symmetric. The center of $\mathscr{X}$ is isomorphic to the center of $\mathcal{O}_0^{\mathfrak{q}}$.
\end{proposition} \begin{proof} By Lemma~\ref{Tscoincide}, the first statement is nothing else than \cite[Theorem 4.6]{MS2}. The second statement is given by Corollary~\ref{c993}. \end{proof} \begin{remark}\label{rem994} {\rm Recall our assumption that $\mathfrak{g}$ is of type $A$. In this case the center of $\mathcal{O}^{\mathfrak{q}}_0$ has a nice geometric description: it is isomorphic to the cohomology algebra of a certain Springer fiber. This is described in \cite{Br} and \cite{St3}. } \end{remark} \subsection{The Serre functor for $\mathcal{D}^p(\mathscr{X})$}\label{s8.3} Let $\mathcal{D}^p(\mathscr{X})$ denote the full subcategory of $\mathcal{D}^b(\mathscr{X})$ given by perfect complexes, that is complexes which are quasi-isomorphic to finite complexes of projective objects from $\mathscr{X}$. Recall that if $\mathscr{C}$ is a $\Bbbk$-linear additive category with finite-dimensional homomorphism spaces, then a {\em Serre functor} on $\mathscr{C}$ is an auto-equivalence $\mathrm{F}$ of $\mathscr{C}$ such that the bifunctors $(X,Y)\mapsto\mathscr{C}(X,\mathrm{F}Y)$ and $(X,Y)\mapsto\mathscr{C}(Y,X)^*$ are isomorphic (here, $*$ denotes the ordinary duality of vector spaces). Denote by $\mathrm{Coapp}_{\mathbf{R}'}:\mathscr{X}\rightarrow\mathscr{X}$ the functor of partial coapproximation with respect to a fixed full basic projective-injective module $Q^{\mathscr{X}}$. It is constructed as follows (see \cite[2.5]{KM} for details): If $M\in \mathscr{X}$ then $\mathrm{Coapp}_{\mathbf{R}'}(M)$ is obtained from $M$ by first maximally extending $M$ using simple modules, which do not occur in the top of $Q^{\mathscr{X}}$, and afterwards deleting all occurrences of such modules in the top part. \begin{proposition}\label{prserre} The functor $\mathcal{R}\,\mathrm{Coapp}_{\mathbf{R}'}^2$ is a Serre functor for $\mathcal{D}^{p}(\mathscr{X})$. \end{proposition} \begin{proof} Thanks to Proposition~\ref{pr991} and Proposition~\ref{pcentre}, we are in the situation of \cite[Theorem~3.7]{MS2}, except that the category $\mathscr{X}$ usually does not have finite global dimension. Using \cite[Proposition~20.5.5(i)]{Gi} (see \cite[4.3]{MS2} for details), one can get rid of the assumption of finite global dimension by working with the category of perfect complexes instead of the bounded derived category. \end{proof} \subsection{Ringel self-duality of $\mathscr{X}$}\label{s8.4} Consider the module \begin{displaymath} T^{\mathscr{X}}=\bigoplus_{(x,w)\in\mathds{I}(\mathbf{R}')}T^{\mathscr{X}}(xw). \end{displaymath} Based on \cite{Ri}, the algebra $\mathrm{End}_{\mathscr{X}}(T^{\mathscr{X}})$ is called the {\em Ringel dual} of the algebra $\mathrm{End}_{\mathscr{X}}(P^{\mathscr{X}})$, see \cite{Fr}. If $\mathbf{R}'=\{e\}$, the category $\mathscr{X}$ is {\it Ringel self-dual} (that is $\mathrm{End}_{\mathscr{X}}(P^{\mathscr{X}}) \cong \mathrm{End}_{\mathscr{X}}(T^{\mathscr{X}})$) by \cite[Section~7]{So4}.
If $\mathbf{R}'=\{w'_0\}$, the category $\mathscr{X}$ is Ringel self-dual by \cite[Theorem~3]{FKM2}, see also \cite[Proposition~4.9]{MS2}. The following theorem generalizes both these results: \begin{theorem}\label{trsd} The category $\mathscr{X}$ is Ringel self-dual for each $\mathbf{R}'$. \end{theorem} \begin{proof} We retain all assumptions and notation from Subsection~\ref{s8.2} (especially the ones before Lemma~\ref{Tscoincide}). To prove this statement we will construct an endofunctor $\mathrm{F}=\mathrm{F}_2\mathrm{F}_1$ on $\mathcal{O}$ which maps $P^{\mathscr{X}}$ to $T^{\mathscr{X}}$ preserving the endomorphism ring. The functor $\mathrm{F}_2$ is an auto-equivalence of $\mathcal{O}$ which is easy to describe: Since $\mathfrak{g}$ is assumed to be of type $A$, the Dynkin diagram has an involution which is on any $A_n$-component just the flip mapping the $i$-th vertex to the $(n+1-i)$-th vertex. This involution induces an automorphism $\phi$ of $\mathfrak{g}$, and $\mathrm{F}_2$ maps a module $M$ to $M^\phi$, the same vector space with the $\mathfrak{g}$-action twisted by $\phi$. The functor $\mathrm{F}_1$ is more complicated. Let $w_0=s_{i_1}s_{i_2}\cdots s_{i_{l(w_0)}}$ be a reduced expression. Consider the twisting functor \begin{displaymath} \mathrm{T}:=\mathrm{T}_{i_1}\mathrm{T}_{i_2}\cdots \mathrm{T}_{i_{l(w_0)}} :\mathcal{O}\rightarrow \mathcal{O} \end{displaymath} (see Subsection~\ref{s25.5} and then \cite{AS}, \cite{So4} for details). This functor is right exact, commutes with projective functors, and $\mathcal{L}\mathrm{T}$ is a self-equivalence of $\mathcal{D}^b(\mathcal{O})$, see \cite{AS}. We define $\mathrm{F}_1= \mathcal{L}_{l(w''_0)}\mathrm{T}$ and claim that $\mathrm{F}=\mathrm{F}_2\mathrm{F}_1$ does the required job. The arguments to deduce this are very much along the lines of \cite[Proposition~4.4]{MS2}. Here we just outline the arguments, leaving it to the reader to work out the details (following \cite[Proposition~4.4]{MS2}). Denote by $\mathfrak{k}$ the semi-simple part of the Levi factor of $\mathfrak{q}$. Then each finite-dimensional simple $\mathfrak{k}$-module $M$ comes along with its so-called BGG-resolution (see \cite{BGG}), that is, a resolution by (direct sums of) Verma modules. The (exact) parabolic induction functor from $\mathfrak{k}$ to $\mathfrak{a}$ can be applied to the BGG-resolution, and we obtain a resolution of $M'=\mathcal{U}(\mathfrak{a})\otimes_{\mathcal{U}(\mathfrak{k}+(\mathfrak{b}\cap\mathfrak{a}))}M$ by (direct sums of) Verma $\mathfrak{a}$-modules. From the uniqueness result, Remark~\ref{rem2} and \cite[3.1]{IS}, there is a simple, projective object in $\mathscr{A}^{\mathbf{R}'}$ which is parabolically induced from a simple finite-dimensional $\mathfrak{k}$-module (see also Subsection~\ref{s9.3} for details). This is the $M'$ we want to consider.
Its resolution by (direct sums of) Verma $\mathfrak{a}$-modules gives rise to a resolution of the projective module $\Delta(\mathfrak{p},M'):=U(\mathfrak{g})\otimes_{\mathcal{U}(\mathfrak{p})}M'$ by (direct sums of) Verma $\mathfrak{g}$-modules. Then $\mathcal{L}\mathrm{T} \Delta(\mathfrak{p},M')= \mathcal{L}_{l(w''_0)}\mathrm{T}\Delta(\mathfrak{p},M')$ (following the arguments in \cite{MS2}), and the latter becomes a dual parabolic Verma module. From the construction of $\mathscr{X}$ we know that each projective in $\mathscr{X}$ can be obtained as a direct summand of some translation of $\Delta(\mathfrak{p},M')$. The previous paragraph says that $\Delta(\mathfrak{p},M')$ has an (explicitly given) resolution by (direct sums of) Verma $\mathfrak{g}$-modules. From \cite{AL} and \cite{AS} (see also Proposition~\ref{SchurWeyl}), we have explicit formulas for the action of the functor $\mathrm{T}$ on Verma modules. Using these formulas one shows by a direct computation that $\mathrm{F}$ maps $\Delta(\mathfrak{p},M')$ to a tilting module from $\mathcal{O}\{\mathfrak{p},\mathscr{A}^{\mathbf{R}'}\}$. As $\mathrm{F}$ commutes (up to the automorphism defining $\mathrm{F}_2$) with projective functors, it follows that $\mathrm{F}$ sends projective modules from $\mathscr{X}$ to tilting modules from $\mathscr{X}$. Finally, as both $\mathrm{F}_2$ and $\mathrm{T}$ are equivalences, it also follows that $\mathrm{F}$ preserves the endomorphism ring. This completes the proof. \end{proof} \section{The rough structure of generalized Verma modules}\label{s9} In this section we want to apply the results of the paper to determine the `rough structure' of generalized Verma modules. We will start by giving some background information. \subsection{Basic questions}\label{s9.1} Let $\mathfrak{g}$ be a Lie algebra with the triangular decomposition $\mathfrak{g}=\mathfrak{n}_-\oplus\mathfrak{h}\oplus \mathfrak{n}_+$. Let $\mathfrak{p}\supset \mathfrak{h}\oplus \mathfrak{n}_+$ be a parabolic subalgebra of $\mathfrak{g}$, and $V$ a simple $\mathfrak{p}$-module, annihilated by the nilpotent radical of $\mathfrak{p}$. The module \begin{displaymath} \Delta(\mathfrak{p},V)=U(\mathfrak{g})\otimes_{U(\mathfrak{p})}V \end{displaymath} is usually called the {\em generalized Verma module} (or simply GVM) associated with $\mathfrak{p}$ and $V$. If $\mathfrak{p}=\mathfrak{b}$ and $V$ is one-dimensional then we get an ordinary Verma module. If $\mathfrak{p}$ is arbitrary, but $V$ still finite dimensional, then the resulting module is a parabolic generalized Verma module as studied for example in \cite{Ja}. The most basic questions about GVMs are: \begin{itemize} \item In which case is $\Delta(\mathfrak{p},V)$ irreducible? \item If $\Delta(\mathfrak{p},V)$ is not irreducible: which simple $\mathfrak{g}$-modules occur as subquotients of $\Delta(\mathfrak{p},V)$, and what are their multiplicities (in case this makes sense at all)?
\end{itemize} These questions were studied in special cases by many authors; we refer the reader to \cite[Introduction]{KM2} for a more detailed survey. The answer to the questions above is also of interest in theoretical physics, since the structure of generalized Verma modules determines the structure of Verma modules for (super)algebras appearing in conformal field theory (see for example \cite{Se} for an affine setup). The most general known facts in the theory of generalized Verma modules are the main results of \cite{KM2} (based on \cite{MiSo}) under the assumption that the module $V$ has minimal possible annihilator: \cite[Theorem~22]{KM2} gives an explicit criterion for the irreducibility of $\Delta(\mathfrak{p},V)$; and \cite[Theorem~23]{KM2} describes what is called the {\it rough} structure of $\Delta(\mathfrak{p},V)$, defined as follows: each $\Delta(\mathfrak{p},V)$ has a unique simple quotient, denoted by $L(\mathfrak{p},V)$. If $V'$ is another simple $\mathfrak{p}$-module with minimal annihilator then \cite[Theorem~23]{KM2} says that the multiplicity $[\Delta(\mathfrak{p},V):L(\mathfrak{p},V')]$ is well-defined (in particular, it is always finite); an explicit formula for its computation in terms of Kazhdan-Lusztig polynomials is also provided. In general, this does not describe the structure of $\Delta(\mathfrak{p},V)$ completely: $\Delta(\mathfrak{p},V)$ may have many other subquotients, and it might even be of infinite length (because of the example due to Stafford, see \cite[Theorem 4.1]{Stafford}). No reasonable information about this so-called {\it fine structure} of $\Delta(\mathfrak{p},V)$ is known so far. In what follows we want to explain how one can drop the restriction on the minimality of the annihilator of $V$ by applying the techniques we have developed so far in this paper. Following the approach proposed in \cite{MiSo} and developed further in \cite{KM2}, an essential part of the argument is an improved answer to the so-called `Kostant's problem' for certain simple and induced modules. \subsection{Kostant's problem}\label{s9.2} Let $\mathfrak{g}$ be a complex reductive finite-dimensional Lie algebra. For every $\mathfrak{g}$-module $M$ we have the bimodule $\mathscr{L}(M,M)$ of all $\mathbb{C}$-linear endomorphisms of $M$, on which the adjoint action of $U(\mathfrak{g})$ is locally finite (that means any vector $f\in\mathscr{L}(M,M)$ lies inside a finite-dimensional subspace which is stable under the adjoint action, defined as $x.f(m)=x(f(m))-f(xm)$ for $x\in\mathfrak{g}$, $m\in M$). Initiated by \cite{Jo}, {\it Kostant's problem} became the standard terminology for the following question concerning an arbitrary $\mathfrak{g}$-module $M$: {\em Is the natural injection $U(\mathfrak{g})/\mathrm{Ann}(M)\hookrightarrow \mathscr{L}(M,M)$ surjective? } Although there are several classes of modules for which the answer is known to be positive (see \cite{Jo}, \cite{Ma2} and references therein), a complete answer to this problem seems to be far away; the problem is not even solved for all simple highest weight modules. There is even an instance of a simple highest weight module for which the answer is negative. The details of such an example (which was first mentioned in \cite[9.5]{Jo}) will be discussed in Subsection~\ref{B2}.
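As an elementary illustration of the question (included here only for orientation and not needed in the sequel), consider a finite-dimensional simple $\mathfrak{g}$-module $M$. In this case every endomorphism of $M$ is automatically locally finite for the adjoint action, so $\mathscr{L}(M,M)=\mathrm{End}_{\mathbb{C}}(M)$, and the Jacobson density theorem implies that the natural map \begin{displaymath} U(\mathfrak{g})/\mathrm{Ann}(M)\longrightarrow \mathrm{End}_{\mathbb{C}}(M)=\mathscr{L}(M,M) \end{displaymath} is surjective. Hence the answer to Kostant's problem is positive in this degenerate case; the difficulty of the problem lies with infinite-dimensional modules such as the simple highest weight modules considered below.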
In the following we will show that for certain simple and induced modules which appeared already in the present paper, the answer to Kostant's problem is positive. We conjecture that this is always the case for simple highest weight modules for Lie algebras of type $A$. Theorem~\ref{cor2-061107} shows that the answer only depends on the left cell associated with a simple highest weight module. \subsection{General assumptions}\label{ass} For the rest of the paper let $\mathfrak{g}$ be an arbitrary complex reductive finite-dimensional Lie algebra with a fixed triangular decomposition $\mathfrak{g}=\mathfrak{n}_-\oplus\mathfrak{h}\oplus\mathfrak{n}_+$. Let $\mathfrak{p}\supset \mathfrak{h}\oplus\mathfrak{n}_+$ be a parabolic subalgebra of $\mathfrak{g}$ with Levi factor $\mathfrak{a}'$ and nilpotent radical $\mathfrak{n}$. Finally, denote by $\mathfrak{a}$ the semi-simple part of $\mathfrak{a}'$. Then $\mathfrak{a}$ is a semi-simple finite-dimensional Lie algebra with induced triangular decomposition. We {\bf assume} that $\mathfrak{a}$ is of type $A$, that means: {\em We assume that $\mathfrak{a}\cong\bigoplus_{i\in I}\mathfrak{sl}_{k_i}$, where $I$ is some finite set and $k_i\in\{2,3,\dots\}$. } \subsection{Kostant's problem for the IS-module}\label{s9.3} We have to start with some technical statements which involve explicit definitions of certain weights. Assume $\mathfrak{a}\cong\mathfrak{sl}_{k}$ for some $k\geq 2$. We consider $\mathfrak{a}$ as a subalgebra of $\mathfrak{gl}_{k}$ in the canonical way. In particular, all simple highest weight $\mathfrak{gl}_{k}$-modules are simple highest weight $\mathfrak{a}$-modules via restriction. Let $\alpha_i$ ($i=1,\dots,k-1$) be the list of simple roots of $\mathfrak{a}$ in the usual ordering. As before we denote the Weyl group of $\mathfrak{a}$ by $W'$ and note that $W'\cong S_k$. Let $r$ be a partition of $k$ of length $s$, that is $r=(r_1,\dots,r_s)\in\mathbb{N}^s$, $r_1+\dots+r_s=k$ and $r_1\geq r_2\geq\dots$. Set $r_0=0$. Depending on $r$, we define $\pi=\{\alpha_i\,:i\in I\}$, where {\small \begin{displaymath} I=\{1,2,\dots,r_1-1\}\cup \{r_1+1,\dots, r_1+r_2-1\}\cup\dots\cup\{r_1+\dots+r_{s-1}+1,\dots, r_1+\dots+r_{s}-1\} \end{displaymath} } and then the $\mathfrak{gl}_{k}$-weight $\nu$ as \begin{displaymath} \nu=(b_1,\dots,b_k),\quad b_{r_{j-1}+m}=r_j-m, \text{ for } m\in\{1,\dots,r_{j}\}. \end{displaymath} In \cite[Proposition~3.1]{IS}, it is shown that $\nu$ is the only $\pi$-dominant weight in $W'\nu$ and hence the corresponding simple highest weight module $L(\nu-\rho)$ is a projective simple module in the parabolic category $\mathcal{O}$ associated with $\pi$. This is what we call the {\it simple projective IS-module}. Denote by $\mu$ the weight such that $\mu+\rho$ is the dominant weight in $W'\nu$. To proceed we have to construct Weyl group elements $x_\nu$, $x_\mu$ such that $L(\nu-\rho)$ is the translation of $L(x_\nu\cdot0)$ to $\mathcal{O}_\mu$, and $L(\mu)$ is the translation of $L(x_\mu\cdot0)$ to $\mathcal{O}_\mu$.
Let $\xi=(\xi_1,\dots, \xi_k)$ be a $k$-tuple of non-negative integers. We convert the coordinates of $\xi$ into the sequence $(\eta_1,\dots,\eta_k)$ without repetitions, which differs from $(k-1,k-2,\dots,0)$ only by a permutation, and satisfies $\eta_j<\eta_k$ if $j<k$ or $\xi_j<\xi_k$ (in practice we first replace all occurring zeros from the left to the right by $0$, $1,\dots, m_0$, where $m_0+1$ is the total number of zeros in $\xi$, then all occurring ones by $m_0+1$, $m_0+2$ etc.). Applying this procedure to $\nu$ and $\mu+\rho$ we obtain weights $\nu'+\rho$ and $\mu'+\rho$ from the orbit $W'(k-1,k-2,\dots,0)$. Then $\nu'+\rho=x_\nu(k-1,k-2,\dots,0)$ and $\mu'+\rho=x_\mu(k-1,k-2,\dots,0)$ for some $x_\nu$, $x_\mu\in W'$. By construction, $L(x_\nu\cdot0)$ and $L(x_\mu\cdot0)$ are simple highest weight modules with the desired properties described above. \begin{example}{\rm Consider the case where $\mathfrak{a}=\mathfrak{sl}_4$ with the three simple reflections $s_1$, $s_2$, $s_3$, where $s_1$ and $s_3$ commute. The partition $r=(2,2)$ gives $\pi=\{\alpha_1,\alpha_3\}$ and $\nu=(1,0,1,0)$. Then $\mu+\rho=(1,1,0,0)$, $\nu'+\rho=(2,0,3,1)$ and $\mu'+\rho=(2,3,0,1)$. Hence $x_\nu=s_2s_1s_3$, $x_\mu=s_1s_3$. } \end{example} Our crucial technical observation is the following: \begin{lemma}\label{lem1-061107} $x_\nu$ and $x_\mu$ belong to the same left cell. \end{lemma} \begin{proof} We prove this by induction on $k$. If $k=2$, there is nothing to prove. If $r_1>r_2$, then in both $\nu'+\rho$ and $\mu'+\rho$ the element $k-1$ stays at the leftmost place, and the induction hypothesis applies to the remaining parts of $\nu'+\rho$ and $\mu'+\rho$. The only tricky part is therefore the case $r_1=r_2$, which may in fact easily be reduced inductively to the case $r_1=r_2=\dots=r_s$. Consider first the case $s=2$. Then $x_\nu$ is the following permutation on $\{0,\dots,k-1\}$, which we consider as an element of $W'$: {\small \begin{displaymath} x_\nu= \left( \begin{array}{ccccccccc} 0&1&\dots&r_1-1&r_1& r_1+1& r_1+2&\dots&m\\ m-1& m-3&\dots&2& 0&m &m-2 &\dots&1 \end{array} \right), \end{displaymath} } where $m=r_1+r_2-1$. Since $0<2<m$, we can apply a Knuth transformation (see \cite[Definition~3.6.8]{Sa}) to interchange $0$ and $m$ in the second row of the above permutation. This can be continued until $m$ appears at the second position from the left, where the procedure stops. Since the Knuth transformations preserve left cells (\cite[Lemma~3.6.9]{Sa}), the new permutation $\sigma$ will be in the same left cell as $x_\nu$. Now in $\sigma$ and $x_\mu$ the first two elements coincide. So, applying the induction hypothesis to the remaining parts, we get that $x_\mu$ and $x_\nu$ are in the same left cell. The case $s>2$ follows now inductively. We omit the details. \end{proof} The following result is crucial, and its proof is based on the categorification results from Section~\ref{s6} and Section~\ref{s7}: \begin{theorem}\label{cor2-061107} \begin{enumerate}[(i)] \item\label{cor2-061107.1} The modules $L(\nu-\rho)$ and $L(\mu)$ have the same annihilator.
\item\label{cor2-061107.2} For any projective functor $\theta$ we have \begin{displaymath} \dim\mathrm{Hom}_{\mathfrak{a}}(L(\nu-\rho),\theta L(\nu-\rho))= \dim\mathrm{Hom}_{\mathfrak{a}}(L(\mu),\theta L(\mu)). \end{displaymath} \item\label{cor2-061107.3} Kostant's problem has a positive answer for $L(\mu)$ and $L(\nu-\rho)$. \item \label{cor2-061107.2a} For any projective functor $\theta$ we have \begin{displaymath} \dim\mathrm{Hom}_{\mathfrak{a}}(L(x\cdot0),\theta L(x\cdot0))= \dim\mathrm{Hom}_{\mathfrak{a}}(L(y\cdot0),\theta L(y\cdot0)) \end{displaymath} whenever $x$ and $y$ are in the same left cell of $W'$. In particular, Kostant's problem has a positive answer for $L(x\cdot0)$ if and only if it has a positive answer for $L(y\cdot0)$. \end{enumerate} \end{theorem} \begin{proof} The annihilators of the modules $L(\nu')$ and $L(\mu')$ coincide since $x_\nu$ and $x_\mu$ belong to the same left cell by Lemma~\ref{lem1-061107}. The statement \eqref{cor2-061107.1} is now obtained by translating to the wall and applying \cite[5.4 (3)]{Ja2}. We will see later that the statement \eqref{cor2-061107.2} follows from \eqref{cor2-061107.2a}. To prove \eqref{cor2-061107.2a} we have to work much harder. The principal idea is the following: given two simple modules in the same block, indexed by elements in the same left cell, Proposition~\ref{SchurWeyl} tells us that they are connected via twisting functors. These twisting functors commute with projective functors and therefore can be used to obtain estimates for the dimensions of homomorphism spaces, which will result in \eqref{cor2-061107.2a}. Let us make this idea precise. Assume $x\in W$ and $s$ is a simple reflection such that $sx<x$, and the elements $sx$ and $x$ belong to the same left cell. For the twisting functor $\mathrm{T}_s:\mathcal{O}\rightarrow\mathcal{O}$ we have: \begin{eqnarray} \label{Ts} \mathrm{Hom}_{\mathfrak{a}}(\theta L(x\cdot0),L(x\cdot0))&\cong& \mathrm{Hom}_{\mathfrak{a}}(\mathcal{L}\mathrm{T}_s\theta L(x\cdot0), \mathcal{L}\mathrm{T}_s L(x\cdot0))\\ &\cong& \mathrm{Hom}_{\mathfrak{a}}(\theta\mathcal{L}\mathrm{T}_s L(x\cdot0), \mathcal{L}\mathrm{T}_s L(x\cdot0))\nonumber\\ &\cong& \mathrm{Hom}_{\mathfrak{a}}(\theta\mathrm{T}_s L(x\cdot0), \mathrm{T}_s L(x\cdot0))\nonumber\\ &\cong&\mathrm{Hom}_{\mathfrak{a}}(\mathrm{T}_s\theta L(x\cdot0), \mathrm{T}_s L(x\cdot0))\nonumber \end{eqnarray} by \cite[Corollary 4.2, Theorem 2.2, Theorem 6.1, Theorem 3.2]{AS}. Moreover, we also have $\mathrm{T}_sL(x\cdot0)\not=0$ and a short exact sequence \begin{eqnarray} 0\rightarrow U\longrightarrow\mathrm{T}_s L(x\cdot0) \overset{\mathrm{nat}}{\longrightarrow} L(x\cdot0)\rightarrow 0, \end{eqnarray} where $\mathrm{nat}$ is the evaluation at $L(x\cdot0)$ of the natural transformation from $\mathrm{T}_s$ to the identity functor, given by \cite[Theorem~4]{KM}, and $U$ is the kernel of $\mathrm{nat}$. Further, the module $U$ is semi-simple, and has $L(sx\cdot0)$ as a simple subquotient with multiplicity one by \cite[Section~7]{AS}.
As $L(x\cdot0)$ is simple and $s$-infinite, the module $U$ coincides with the maximal $s$-finite submodule of $\mathrm{T}_sL(x\cdot0)$, see \cite[Proposition~5.4]{AS} and \cite[2.5 and Theorem~10]{KM}. Analogously we have a short exact sequence \begin{eqnarray} 0\rightarrow U'\longrightarrow\mathrm{T}_s \theta L(x\cdot0) \overset{\mathrm{nat}}{\longrightarrow} \theta L(x\cdot0)\rightarrow 0, \end{eqnarray} where $U'$ is just the kernel of $\mathrm{nat}$. As all simple submodules of the socle of $\theta L(x\cdot0)$ are $s$-infinite, the module $U'$ again coincides with the maximal $s$-finite submodule of $\mathrm{T}_s\theta L(x\cdot0)$. This implies $U'\cong\theta U$. Now, any non-zero homomorphism $f\in \mathrm{Hom}_{\mathfrak{a}}(\theta L(x\cdot0),L(x\cdot0))$ is automatically surjective and gives rise to a diagram as follows (in which the square of solid arrows commutes): \begin{eqnarray*} \xymatrix{ \theta U\ar@{^{(}->}[r] \ar@{.>}[d]^{f'}&\mathrm{T}_s\theta L(x\cdot0) \ar@{->>}[r]^{\mathrm{nat}} \ar@{->>}[d]^{\mathrm{T}_s f}&\theta L(x\cdot0)\ar@{->>}[d]^{f}\\ U\ar@{^{(}->}[r]&\mathrm{T}_s L(x\cdot0) \ar@{->>}[r]^{\mathrm{nat}}&L(x\cdot0), } \end{eqnarray*} inducing the map $f'$. We claim that the map $f'$ restricts to a non-zero map \begin{eqnarray*} \overline{f}'\in\mathrm{Hom}_{\mathfrak{a}}(\theta L(sx\cdot0),L(sx\cdot0)). \end{eqnarray*} We first claim that the cokernel of $f'$ does not contain any simple module $L(z)$, where $z$ is in the same left cell as $x$. By the Snake Lemma, the cokernel of $f'$ embeds into $\theta L(x\cdot0)$. From Proposition~\ref{prop5}\eqref{prop5.1} it follows that $\theta L(x\cdot0)$ only has composition factors indexed by $z$'s either in the same right cell as $x$ or in smaller right cells. From the Robinson-Schensted algorithm it is directly clear that smaller right cells intersect the left cell of $x$ trivially. Robinson-Schensted also implies that any given left and right cell inside the same two-sided cell intersect in exactly one point; so the only possible $z$ is $z=x$. Since $L(x\cdot0)$ is not a composition factor of $U$ by \cite[Theorem 6.3 (ii)]{AS}, the claim follows. In particular, $L(sx\cdot0)$ occurs in the image of $f'$. Let now $L(z\cdot0)$ be a simple subquotient of $U$. If $z$ belongs to a smaller two-sided cell than $sx$, then the arguments of Theorem~\ref{GKdim} imply that the GK-dimension of $L(z\cdot0)$ is strictly smaller than that of $L(sx\cdot0)$. Hence $\mathrm{Hom}_{\mathfrak{a}}(\theta L(z\cdot0),L(sx\cdot0))=0$. If $z$ is in the same left cell as $sx$ then by Proposition~\ref{prop5}\eqref{prop5.1} and Robinson-Schensted the non-vanishing $\mathrm{Hom}_{\mathfrak{a}}(\theta L(z\cdot0),L(sx\cdot0))\not=0$ is possible only for $z=sx$. This implies $\overline{f}'\not=0$ and the claim follows. Hence we get the inequality \begin{equation}\label{abs123} \dim\mathrm{Hom}_{\mathfrak{a}}(\theta L(x\cdot0),L(x\cdot0))\leq \dim\mathrm{Hom}_{\mathfrak{a}}(\theta L(sx\cdot0),L(sx\cdot0)).
\end{equation} Analogously one obtains the inequality \begin{equation}\label{abs1234} \dim\mathrm{Hom}_{\mathfrak{a}}(\theta L(x\cdot0),L(x\cdot0))\leq \dim\mathrm{Hom}_{\mathfrak{a}}(\theta L(tx\cdot0),L(tx\cdot0)) \end{equation} in the case when $sx$ belongs to a smaller left cell than $x$ and $t$ is a simple reflection such that $(st)^3=e$ and the element $tx$ belongs to the same left cell as $x$. Since left cell modules are irreducible, using \eqref{abs123} and \eqref{abs1234} inductively one also obtains the opposite inequalities, which implies that \begin{equation}\label{abs123456} \mathrm{Hom}_{\mathfrak{a}}(\theta L(x\cdot0),L(x\cdot0))= \mathrm{Hom}_{\mathfrak{a}}(\theta L(y\cdot0),L(y\cdot0)) \end{equation} if $x$ and $y$ are in the same left cell. By \cite[6.8(3)]{Ja2}, for any $\mathfrak{a}$-module $M$ and any simple finite-dimensional $\mathfrak{a}$-module $F$ we have \begin{equation}\label{abs12345} [\mathscr{L}(M,M):F]= \mathrm{Hom}_{\mathfrak{a}}(F\otimes M,M). \end{equation} Since the modules $L(x\cdot0)$ and $L(y\cdot0)$ have the same annihilator by \cite[5.25]{Ja2}, the formulas \eqref{abs12345} and \eqref{abs123456} imply that Kostant's problem has a positive answer either for both $L(x\cdot 0)$ and $L(y\cdot 0)$ or for none of them. This proves \eqref{cor2-061107.2a}. By Lemma~\ref{lem1-061107}, both $L(\nu-\rho)$ and $L(\mu)$ are obtained by translating two simple modules, indexed by elements from the same left cell, from $\mathcal{O}_0$ to a fixed singular block. Hence, using Proposition~\ref{SchurWeyl}, the statement \eqref{cor2-061107.2} is proved in just the same way as the statement \eqref{cor2-061107.2a} above. As in \eqref{cor2-061107.2a}, the statement \eqref{cor2-061107.2} implies that Kostant's problem has a positive answer either for both $L(\nu-\rho)$ and $L(\mu)$ or for none of them. Since $\mu$ is dominant, Kostant's problem has a positive answer for $L(\mu)$ by \cite[6.9]{Ja2}. Hence the answer to Kostant's problem for $L(\nu-\rho)$ is positive as well. This completes the proof. \end{proof} \subsection{A negative answer to Kostant's problem: type $B_2$}\label{B2} The answer to Kostant's problem is not positive in general. The following negative example was constructed first in \cite[9.5]{Jo}: Let \begin{displaymath} W'=\{e,s,t,st,ts,sts,tst,stst=tsts\} \end{displaymath} be the Weyl group of type $B_2$ with the two simple reflections $s$ and $t$ as generators. Then the surjectivity required in Kostant's problem fails for $L(ts)$ and $L(st)$. What goes wrong in our arguments? The right cells of $W'$ are $\{e\}$, $\{s,st,sts\}$, $\{t,ts,tst\}$, $\{stst\}$. In particular, the elements $s$ and $sts$ are both in the same left and in the same right cell. In our arguments we used several times that the intersection of a given left cell with a given right cell contains at most one element. An easy direct calculation also shows that $\mathrm{Hom}_{\mathcal{O}}(\theta L(s),L(s))=\mathbb{C}= \mathrm{Hom}_{\mathcal{O}}(\theta L(sts),L(sts))$, whereas $\mathrm{Hom}_{\mathcal{O}}(\theta L(ts),L(ts))=\mathbb{C}^2$ for $\theta=\theta_s\theta_t\theta_s$. Hence Theorem~\ref{cor2-061107}\eqref{cor2-061107.2a} fails in this case.
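To make this explicit, note that the left cells can be read off from the right cells listed above via the standard fact that $x$ and $y$ lie in the same left cell if and only if $x^{-1}$ and $y^{-1}$ lie in the same right cell: the left cells of $W'$ are \begin{displaymath} \{e\},\quad \{s,ts,sts\},\quad \{t,st,tst\},\quad \{stst\}. \end{displaymath} In particular, \begin{displaymath} \{s,st,sts\}\cap\{s,ts,sts\}=\{s,sts\} \end{displaymath} is an intersection of a right cell with a left cell inside the same two-sided cell which contains two elements. It is precisely this phenomenon, which is impossible in type $A$ by the Robinson-Schensted correspondence, that invalidates the counting arguments used in the proof of Theorem~\ref{cor2-061107}.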
\subsection{Coker-categories and their equivalence} Recall the setup from Subsection~\ref{ass}, in particular that $\mathfrak{a}$ is assumed to be of type $A$. Let $V$ be an arbitrary simple $\mathfrak{a}'$-module. Let $L$ be the $\mathfrak{a}$-module obtained by restriction. For simplicity \begin{eqnarray}\label{easylife} \begin{array}{c} \text{\it we assume for the moment that $L$ has}\\ \text{\it integral and regular central character} \end{array} \end{eqnarray} and refer to Remark~\ref{simplicity} for the general case. Now, $V$ is determined uniquely by the underlying simple $\mathfrak{a}$-module $L$ and some functional $\eta$ on the center of $\mathfrak{a}'$. We first construct an admissible category attached to this data: Let $L(x\cdot 0)$ be a simple highest weight module with the same annihilator as $L$. Without loss of generality we assume that $x$ is contained in a right cell associated with a parabolic subalgebra $\mathfrak{p}$ as in Remark~\ref{rem2} (the latter is possible as $x$ can be chosen arbitrarily in its left cell by \cite[5.25]{Ja2}). By \cite[3.1]{IS}, there is a block $\mathcal{O}_\mu^{\mathfrak{p}}$ (for some integral weight $\mu\in\mathfrak{h}^*_{dom}$) which contains exactly one simple (highest weight) module $L(y\cdot\mu)$, and this module is also projective. We assume that $ys<y$ for any simple reflection $s$ such that $s\cdot \mu=\mu$. The module $L(y\cdot\mu)$ is a tensor product of simple highest weight modules over all simple components of $\mathfrak{a}$. Each of the factors has the form $L(\nu)$, where $\nu$ is as in Subsection~\ref{s9.3}. Because of our assumptions we also have that $L(y\cdot\mu)$ is the translation of $L(y\cdot 0)$ to the $\mu$-wall, and that $x$ and $y$ belong to the same right cell (Consequence~\ref{cons3} in Subsection~\ref{s4.2}). \begin{proposition}\label{F} There is some projective functor $F:\mathcal{O}(\mathfrak{a},\mathfrak{a}\cap\mathfrak{b})\rightarrow \mathcal{O}(\mathfrak{a},\mathfrak{a}\cap\mathfrak{b})$ such that $FL(x\cdot 0)\cong \bigoplus_{i=1}^k L(y\cdot\mu)$ for some finite number $k>0$. \end{proposition} \begin{proof} First we claim that there is a projective functor $\theta$ such that $L(y\cdot0)$ occurs as a composition factor in $\theta L(x\cdot0)$. Indeed, recall that the elements $x$ and $y$ are in the same right cell of $W'$. Consider the basis of simple modules for the categorification of the cell module (corresponding to $x$ and $y$) given by Theorem~\ref{thm5}. As cell modules are irreducible in type $A$, there is a projective functor $\theta$ such that $[\theta L(x\cdot0)]$ has a non-zero coefficient at $[L(y\cdot0)]$, when expressed with respect to the basis of simple modules. This means exactly that $L(y\cdot0)$ occurs as a composition factor in $\theta L(x\cdot0)$. Let $\theta'$ be the translation to the $\mu$-wall. Then the functor $F=\theta'\theta$ satisfies the requirement of the proposition, as the module $L(y\cdot\mu)$ is simple projective and is the unique simple module in its parabolic block (see the definition of $L(y\cdot\mu)$ and \cite[3.1]{IS}). \end{proof} We fix $F$ as in Proposition~\ref{F} and set $N:=FL$. \begin{lemma}\label{Nnotzero} \begin{enumerate}[(i)] \item\label{Nnotzero.1} $N=FL\not=0$.
\item\label{Nnotzero.2} $\operatorname{Ann}_{U(\mathfrak{a})}N=\operatorname{Ann}_{U(\mathfrak{a})}L(y\cdot\mu)= \operatorname{Ann}_{U(\mathfrak{a})}L(\mu)$. \end{enumerate} \end{lemma} \begin{proof} Since $\operatorname{Ann} L=\operatorname{Ann} L(x\cdot0)$, we have \begin{displaymath} \operatorname{Ann}_{U(\mathfrak{a})}N=\operatorname{Ann}_{U(\mathfrak{a})} F L= \operatorname{Ann}_{U(\mathfrak{a})} F L(x\cdot0) \end{displaymath} (see \cite[5.4]{Ja2}). The second statement follows then directly from Proposition~\ref{F}, Theorem~\ref{cor2-061107} and the definition of $L(y\cdot\mu)$. Since $FL(x\cdot0)\not=0$, we also have $FL\not=0$. \end{proof} If $\mathfrak{g}$ is any complex Lie algebra and $Q$ a $\mathfrak{g}$-module, then we denote by $\operatorname{Coker}(Q\otimes E)$ the full subcategory of $\mathfrak{g}\mathrm{-mod}$, which consists of all modules $M$ having a presentation $X\rightarrow Y\twoheadrightarrow M$, where both $X$ and $Y$ are direct summands of $Q\otimes E$ for some finite dimensional module $E$. In particular, if we choose the Lie algebra to be $\mathfrak{a}$, then we have the two categories $\operatorname{Coker}(L(y\cdot\mu)\otimes E)$ and $\operatorname{Coker}(N\otimes E)$. \begin{lemma}\label{projLy} $L(y\cdot\mu)$ is projective in $\operatorname{Coker}(L(y\cdot\mu)\otimes E)$. \end{lemma} \begin{proof} Of course $L(y\cdot\mu)$ is contained in $\operatorname{Coker}(L(y\cdot\mu)\otimes E)$. On the other hand, $L(y\cdot\mu)\in\mathcal{O}_\mu^{\mathfrak{p}}$ is projective. It is even projective in $\mathcal{O}^{\mathfrak{p}}$. The latter is stable under tensoring with finite dimensional modules, hence contains $\operatorname{Coker}(L(y\cdot\mu)\otimes E)$ as a full subcategory. The statement follows. \end{proof} Unfortunately, we do not know how to prove directly that the module $N$ is semi-simple. To get around this problem we have to make sure that there is a `nice' simple subquotient $\overline{N}$ of $N$. Let $G$ be the adjoint functor of $F$. Then the adjunction morphism $a: L\rightarrow GFL$ is injective, since $L$ is simple. Since $G$ is exact, there must therefore be a simple subquotient $\overline{N}$ of $N$ such that $G\overline{N}$ contains $L$ as a quotient. Let $\chi_\mu=\operatorname{Ann}_{Z(\mathfrak{a})}M(\mu)$ be the central character of the Verma module with highest weight $\mu$. Then we denote by $\mathcal{M}(\mu)$ the category of all $\mathfrak{a}$-modules $M$ such that $(\chi_\mu)^nM=0$ for some $n$ (depending on $M$), i.e. $M$ has generalized central character $\chi_\mu$. With this notation the following holds: \begin{lemma}\label{projfuncN} Let $\theta:\mathcal{M}(\mu)\rightarrow\mathcal{M}(\mu)$ be an indecomposable projective functor which is not the identity. Then $\theta L(\mu)=0$, $\theta N=0$ and $\theta\overline{N}=0$. \end{lemma} \begin{proof} We first show the statement for $L(\mu)$: if $\theta:\mathcal{O}_\mu\rightarrow\mathcal{O}_\mu$ is an indecomposable projective functor then $\theta L(\mu)\not=0$ means that $\theta$ is the identity functor. To see this, take the projective cover $P(\mu)$ of $L(\mu)$.
Then \begin{eqnarray}\label{hom} \textrm{Hom}_{\mathfrak{a}}(P(\mu),\theta L(\mu))&=&\textrm{Hom}_{\mathfrak{a}}(\theta' P(\mu),L(\mu)), \end{eqnarray} where $\theta'$ is the adjoint functor of $\theta$. Note that $\theta'$ is an indecomposable projective functor if so is $\theta$. The classification theorem of projective functors gives $\theta' M(\mu)=P(\zeta)$ for some $\zeta$. If we assume the space \eqref{hom} to be non-trivial then we have $\zeta=\mu$, which forces (by the classification theorem again) $\theta'$ to be the identity functor, and then $\theta$ is the identity functor as well. Assume therefore that \eqref{hom} is trivial, but $\theta L(\mu)\not=0$. Recall the categorification result of Proposition~\ref{SchurWeyl} and extend the scalars to $\mathbb{C}$. Together with Theorem~\ref{Schuralgebra} we get that $\theta$ induces an endomorphism of the complexified Grothendieck group of $\mathcal{O}_\mu$. The module $L(\mu)$ has minimal Gelfand-Kirillov dimension and is contained in the categorification of the irreducible (Specht) submodule of $\mathcal{M}^\mu$ corresponding to the partition given by $\mu$. The endomorphism of the parabolic permutation module given by $\theta$ is a homomorphism of modules over the symmetric group which underlies the Hecke algebra $\mathbb{H}$, and it restricts to an endomorphism of the irreducible submodule, which has to be a multiple $c\in \mathbb{C}$ of the identity. But since $0\not=\theta L(\mu)$ has at most the same Gelfand-Kirillov dimension as $L(\mu)$ (by Lemma~\ref{GK}), we deduce that $c\not=0$. On the other hand, the fact that both sides of the equality \eqref{hom} are equal to $0$ is equivalent to the statement that $L(\mu)$ does not occur as a composition factor in $\theta L(\mu)$, a contradiction. In particular, $c\not=0$ forces $\theta$ to be the identity functor. Hence the claim is true for $L(\mu)$. Assume again that $\theta$ is not the identity functor. To see that $\theta N=\theta FL=0$ we consider the annihilator $\operatorname{Ann} \theta N=\operatorname{Ann}_{\mathcal{U}(\mathfrak{a})}(\theta N)$. By \cite[6.35 (1)]{Ja2} we have \begin{eqnarray}\label{eq:Ann} \mathcal{U}(\mathfrak{a})/\operatorname{Ann} \theta N&=&\theta^{l}\theta^{r}(\mathcal{U}(\mathfrak{a})/\operatorname{Ann} N), \end{eqnarray} where $\mathcal{U}(\mathfrak{a})/\operatorname{Ann} N$ is considered as a $\mathcal{U}(\mathfrak{a})$-bimodule, and $\theta^{l}$ is the projective functor $\theta$ when considering left $\mathcal{U}(\mathfrak{a})$-modules, whereas $\theta^{r}$ is the projective functor $\theta$ when considering right $\mathcal{U}(\mathfrak{a})$-modules (see also Subsection~\ref{s25.4}). On the other hand we have an equality of bimodules \begin{eqnarray}\label{eq:anns} \mathcal{U}(\mathfrak{a})/\operatorname{Ann} N&=&\mathcal{U}(\mathfrak{a})/\operatorname{Ann} L(\mu)=\mathscr{L}(L(\mu),L(\mu)). \end{eqnarray} Here the first equality is \cite[5.4]{Ja2}. The second equality is given by the natural map, since Kostant's problem has a positive answer in this case (Theorem~\ref{cor2-061107}\eqref{cor2-061107.3}).
Putting everything together we get \begin{eqnarray*} \mathcal{U}(\mathfrak{a})/\operatorname{Ann} \theta N&=&\theta^{l}\theta^{r}(\mathcal{U}(\mathfrak{a})/\operatorname{Ann} N)=\theta^{l}\theta^{r}\mathscr{L}(L(\mu),L(\mu))\\ &\cong&\mathscr{L}(\theta L(\mu),\theta L(\mu))=0. \end{eqnarray*} For the penultimate isomorphism we refer to \cite[6.33(6)]{Ja2}. It follows that $\theta N=0$. As $\theta$ is exact and $\overline{N}$ is a subquotient of $N$, we also get that $\theta\overline{N}=0$. \end{proof} \begin{proposition}\label{smallN} The following holds: \begin{enumerate}[(i)] \item\label{smallN1} $\operatorname{Ann}_{U(\mathfrak{a})}\overline{N}= \operatorname{Ann}_{U(\mathfrak{a})}L(y\cdot\mu)= \operatorname{Ann}_{U(\mathfrak{a})}L(\mu)$. \item\label{smallN2} $\overline{N}$ is projective in $\operatorname{Coker}(\overline{N}\otimes E)$. \item\label{smallN3} Kostant's problem has a positive solution for the module $\overline{N}$. \end{enumerate} \end{proposition} \begin{proof} Of course, $\operatorname{Ann}_{U(\mathfrak{a})}\overline{N}\supseteq \operatorname{Ann}_{U(\mathfrak{a})}{N}$. Let us assume that $\operatorname{Ann}_{U(\mathfrak{a})}\overline{N}$ is strictly bigger than $\operatorname{Ann}_{U(\mathfrak{a})}{N}$. Choose $z$ such that $\operatorname{Ann}_{U(\mathfrak{a})}\overline{N}=\operatorname{Ann}_{U(\mathfrak{a})}L(z\cdot\mu)$. Since $\operatorname{Ann}_{U(\mathfrak{a})}N=\operatorname{Ann}_{U(\mathfrak{a})}L(y\cdot\mu)$ (Lemma~\ref{Nnotzero}), it follows that $z$ is strictly smaller than $y$ in the left order, hence also strictly smaller than $x$ in the left order. On the other hand (by definition of the modules) $L(y\cdot0)$ can be obtained as a subquotient in a translation of $L(z\cdot0)$. Proposition~\ref{prop5}~\eqref{prop5.1} then tells us that $y\leq_{\mathsf{R}}z$. In particular, $y$ is smaller than or equal to $x$ in the two-sided order. This contradicts the fact that $z$ should be strictly smaller than $y$ in the left order. The first statement follows. By definition $\overline{N}\in \operatorname{Coker}(\overline{N}\otimes E)$. Moreover, $\overline{N}\in\mathcal{M}(\mu)$ as $\overline{N}$ is simple (\cite[Proposition~2.6.8]{Di}). Hence we can apply Lemma~\ref{projfuncN} and obtain that the intersection of $\mathcal{M}(\mu)$ with the additive closure of $\overline{N}\otimes E$ consists just of direct sums of copies of $\overline{N}$. Since $\overline{N}$ is simple, the cokernel of any homomorphism between direct sums of copies of $\overline{N}$ is isomorphic to a direct sum of copies of $\overline{N}$ as well (\cite[Proposition~2.6.5(iii)]{Di}). Hence the intersection of $\mathcal{M}(\mu)$ with $\operatorname{Coker}(\overline{N}\otimes E)$ also consists just of direct sums of copies of $\overline{N}$. This implies that $\overline{N}$ is projective in $\operatorname{Coker}(\overline{N}\otimes E)$. By Theorem~\ref{cor2-061107} we know that Kostant's problem has a positive answer for the module $L(y\cdot\mu)$.
So, by part \eqref{smallN1}, it suffices to show that \begin{displaymath} \dim\mathrm{Hom}_{\mathfrak{a}}(\overline{N},\theta \overline{N})= \dim\mathrm{Hom}_{\mathfrak{a}}(L(y\cdot\mu),\theta L(y\cdot\mu)) \end{displaymath} for all indecomposable projective functors $\theta$. This is true if $\theta$ is not the identity functor (Lemma~\ref{projfuncN}); otherwise Schur's Lemma (\cite[2.6.5]{Di}) does the job. \end{proof} Finally we get the following main result: \begin{theorem}\label{thm-eqv1} With the notation from above there is an equivalence of categories \begin{eqnarray*} \operatorname{Coker}(L(y\cdot\mu)\otimes E)\cong \operatorname{Coker}(\overline{N}\otimes E). \end{eqnarray*} \end{theorem} \begin{proof} By Lemma~\ref{projLy} and Proposition~\ref{smallN}, $L(y\cdot\mu)$ is projective in $\operatorname{Coker}(L(y\cdot\mu)\otimes E)$ and $\overline{N}$ is projective in $\operatorname{Coker}(\overline{N}\otimes E)$. By Theorem~\ref{cor2-061107}\eqref{cor2-061107.3}, Kostant's problem has a positive solution for $L(y\cdot\mu)$. By Proposition~\ref{smallN}, Kostant's problem has a positive solution for $\overline{N}$. Hence the claim follows from Proposition~\ref{smallN} and \cite[Theorem~5]{KM2}. \end{proof} \subsection{Categories of induced modules and their equivalence}\label{s9.4} We extend the categories $\operatorname{Coker}(L(y\cdot\mu)\otimes E)$ and $\operatorname{Coker}(\overline{N}\otimes E)$ of $\mathfrak{a}$-modules to categories of $\mathfrak{a}'$-modules by allowing arbitrary scalar actions of the center of $\mathfrak{a}'$. Abusing notation, we denote the resulting categories by the same symbols. \begin{lemma}\label{s9.2-lem1} $\operatorname{Coker}(L(y\cdot\mu)\otimes E)$ and $\operatorname{Coker}(\overline{N}\otimes E)$ are both admissible. \end{lemma} \begin{proof} The conditions (L\ref{lll2}) and (L\ref{lll4}) are clear by definition, so we only have to check the condition (L\ref{lll3}). By Lemma~\ref{projLy}, $L(y\cdot\mu)$ is projective in $\operatorname{Coker}(L(y\cdot\mu)\otimes E)$, and $\overline{N}$ is projective in $\operatorname{Coker}(\overline{N}\otimes E)$ by Proposition~\ref{smallN}. In particular, all modules of the form $L(y\cdot\mu)\otimes E$ and $\overline{N}\otimes E$, where $E$ is finite-dimensional, are projective in the corresponding categories. It follows that both categories $\operatorname{Coker}(L(y\cdot\mu)\otimes E)$ and $\operatorname{Coker}(\overline{N}\otimes E)$ have enough projectives. Now the condition (L\ref{lll3}) follows for instance from \cite[Section~5]{Au}. \end{proof} Lemma~\ref{s9.2-lem1} allows us to consider the category $\mathcal{O}\{\mathfrak{p},\operatorname{Coker}(L(y\cdot\mu)\otimes E)\}$ and the category $\mathcal{O}\{\mathfrak{p},\operatorname{Coker}(\overline{N}\otimes E)\}$. Both categories have a block decomposition with respect to central characters. By \cite[Theorem~6.1]{Ma}, these blocks are equivalent to module categories over some standardly stratified algebras (it is easy to see that these algebras are even weakly properly stratified in the sense of \cite{Fr}).
Denote by $\mathcal{O}\{\mathfrak{p},\operatorname{Coker}(L(y\cdot\mu)\otimes E)\}_{\operatorname{int}}$ and $\mathcal{O}\{\mathfrak{p},\operatorname{Coker}(\overline{N}\otimes E)\}_{\operatorname{int}}$ the direct sums of all blocks corresponding to integral central characters. The main result of this section is the following statement: \begin{theorem}\label{s9.2-thm2} There is a blockwise equivalence of categories \begin{eqnarray*} \xi:\quad\mathcal{O}\{\mathfrak{p},\operatorname{Coker}(\overline{N}\otimes E)\}_{\operatorname{int}} &\cong& \mathcal{O}\{\mathfrak{p}, \operatorname{Coker}(L(y\cdot\mu)\otimes E)\}_{\operatorname{int}}, \end{eqnarray*} which sends proper standard modules to proper standard modules. \end{theorem} \begin{proof} To construct a blockwise equivalence it is enough to verify the assumptions of \cite[Theorem~5]{KM2}. The module $L(y\cdot\mu)$ is projective in $\mathscr{C}:=\operatorname{Coker}(L(y\cdot\mu)\otimes E)$ (Lemma~\ref{projLy}). Hence the induced module $\Delta(\mathfrak{p},L(y\cdot\mu))$ is both standard and proper standard in $\mathcal{O}\{\mathfrak{p}, \mathscr{C}\}$ for any linear functional on the center of $\mathfrak{a}'$ which extends the $\mathfrak{a}$-action on $L(y\cdot\mu)$. We pick the linear functional such that the module $\Delta(\mathfrak{p},L(y\cdot\mu))$ is projective in some regular block of $\mathcal{O}\{\mathfrak{p},\mathscr{C}\}_{\operatorname{int}}$. It is easy to see that all projective modules in $\mathcal{O}\{\mathfrak{p},\mathscr{C}\}_{\operatorname{int}}$ can be obtained by translating $\Delta(\mathfrak{p},L(y\cdot\mu))$. In particular, we have \begin{displaymath} \mathcal{O}\{\mathfrak{p},\mathscr{C}\}_{\operatorname{int}}\cong \operatorname{Coker}(\Delta(\mathfrak{p},L(y\cdot\mu))\otimes E). \end{displaymath} Analogously \begin{displaymath} \mathcal{O}\{\mathfrak{p}, \operatorname{Coker}(\overline{N}\otimes E)\}_{\operatorname{int}}\cong \operatorname{Coker}(\Delta(\mathfrak{p},\overline{N})\otimes E) \end{displaymath} for the same linear functional. To be able to apply \cite[Theorem~5]{KM2} we just have to verify that Kostant's problem has a positive answer for the modules $\Delta(\mathfrak{p},\overline{N})$ and $\Delta(\mathfrak{p},L(y\cdot\mu))$. This will be done in Lemmas~\ref{s9.2-lem3} and \ref{s9.2-lem4} below. Hence there is an equivalence of categories $\xi$; it is left to show that it sends proper standard objects to proper standard objects. The partial ordering on the simple modules in $\mathcal{O}\{\mathfrak{p}, \mathscr{C}\}_{\operatorname{int}}$ induces a partial ordering on the simple modules in $\mathcal{O}\{\mathfrak{p}, \operatorname{Coker}(\overline{N}\otimes E)\}_{\operatorname{int}}$, which defines a stratified structure. Since proper standard modules have a categorical definition, they will be sent to proper standard modules by any blockwise equivalence. \end{proof} For a finite dimensional $\mathfrak{g}$-module $E$ we denote by $\tilde{E}$ its underlying $\mathfrak{a}$-module.
Let $\tilde{E}_0$ be the direct sum of the finite-dimensional $\mathfrak{a}$-submodules of $\tilde{E}$ on which the center of the reductive Lie algebra $\mathfrak{a}'$ acts trivially. \begin{lemma}\label{s9.2-lem3a} Kostant's problem has a positive answer for $\Delta(\mathfrak{p},L(\mu))$. \end{lemma} \begin{proof} The module $\Delta(\mathfrak{p},L(\mu))$ is a quotient of the dominant Verma module and therefore the answer to Kostant's problem is affirmative by \cite[6.9 (10)]{Ja2}. \end{proof} \begin{lemma}\label{s9.2-lem3} Kostant's problem has a positive answer for $\Delta(\mathfrak{p},L(y\cdot\mu))$. \end{lemma} \begin{proof} For any simple finite-dimensional $\mathfrak{g}$-module $E$ we have \small \begin{eqnarray} \label{multi} &&[U(\mathfrak{g})/\mathrm{Ann}_{U(\mathfrak{g})}(\Delta(\mathfrak{p},L(\mu))):E]= \dim\mathrm{Hom}_{\mathfrak{g}} (\Delta(\mathfrak{p},L(\mu))\otimes E,\Delta(\mathfrak{p},L(\mu))) \end{eqnarray} \normalsize by Lemma~\ref{s9.2-lem3a} and \cite[6.8(3)]{Ja2}. Since for $\zeta\in\{\mu, y\cdot\mu\}$ the module $\Delta(\mathfrak{p},L(\zeta))$ is a projective standard module in its corresponding $\operatorname{Coker}$-category, the standard adjointness gives \begin{equation}\label{firsteq} \begin{array}{rcl} \mathrm{Hom}_{\mathfrak{g}} (\Delta(\mathfrak{p},L(\zeta))\otimes E,\Delta(\mathfrak{p},L(\zeta)))&=& \mathrm{Hom}_{\mathfrak{g}} (\Delta(\mathfrak{p},L(\zeta)),\Delta(\mathfrak{p},L(\zeta))\otimes E^*)\\ &=&\mathrm{Hom}_{\mathfrak{a}}(L(\zeta),L(\zeta)\otimes E_0^*)\\ &=&\mathrm{Hom}_{\mathfrak{a}}(L(\zeta)\otimes E_0,L(\zeta)). \end{array} \end{equation} The latter is however independent of the choice of $\zeta$ by Theorem~\ref{cor2-061107}, and therefore \small \begin{equation}\label{s9.2-eq11} \mathrm{Hom}_{\mathfrak{g}} (\Delta(\mathfrak{p},L(\mu))\otimes E,\Delta(\mathfrak{p},L(\mu)))= \mathrm{Hom}_{\mathfrak{g}}(\Delta(\mathfrak{p},L(y\cdot\mu))\otimes E, \Delta(\mathfrak{p},L(y\cdot\mu))). \end{equation} \normalsize The modules $L(y\cdot\mu)$ and $L(\mu)$ have the same annihilator (by Theorem~\ref{cor2-061107} again); therefore the modules $\Delta(\mathfrak{p},L(y\cdot\mu))$ and $\Delta(\mathfrak{p},L(\mu))$ have the same annihilator by \cite[Proposition~5.1.7]{Di}. Together with \eqref{s9.2-eq11} and Lemma~\ref{s9.2-lem3a} we deduce that Kostant's problem has a positive answer for $\Delta(\mathfrak{p},L(y\cdot\mu))$. \end{proof} \begin{lemma}\label{s9.2-lem4} Kostant's problem has a positive answer for $\Delta(\mathfrak{p},\overline{N})$. \end{lemma} \begin{proof} Since $\Delta(\mathfrak{p},\overline{N})$ is a projective standard module in the corresponding $\operatorname{Coker}$-category, as in \eqref{firsteq} we have \begin{displaymath} \mathrm{Hom}_{\mathfrak{g}} (\Delta(\mathfrak{p},\overline{N}),\Delta(\mathfrak{p},\overline{N})\otimes E)= \mathrm{Hom}_{\mathfrak{a}}(\overline{N},\overline{N}\otimes E_0).
\end{displaymath} Recall that $\overline{N}$ and $L(\mu)$ have the same annihilator (Proposition~\ref{smallN}), and Kostant's map is surjective in both cases (Theorem~\ref{cor2-061107} and Proposition~\ref{smallN}). Together with \eqref{multi} we have \begin{displaymath} \mathrm{Hom}_{\mathfrak{g}} (\Delta(\mathfrak{p},\overline{N}),\Delta(\mathfrak{p},\overline{N}) \otimes E)\cong \mathrm{Hom}_{\mathfrak{g}}(\Delta(\mathfrak{p},L(y\cdot\mu)), \Delta(\mathfrak{p},L(y\cdot\mu))\otimes E). \end{displaymath} Now, $\Delta(\mathfrak{p},L(y\cdot\mu))$ and $\Delta(\mathfrak{p},\overline{N})$ have the same annihilator (Proposition~\ref{smallN} and \cite[Proposition 5.1.7]{Di}). So, the latter equality and the fact that Kostant's problem has a positive answer for $\Delta(\mathfrak{p},L(y\cdot\mu))$ imply that Kostant's problem has a positive answer for $\Delta(\mathfrak{p},\overline{N})$. This completes the proof. \end{proof} \subsection{The rough structure of generalized Verma modules: main results}\label{s9.5} The equivalence $\xi$ from Theorem~\ref{s9.2-thm2} induces a bijection between the sets of isomorphism classes of indecomposable projective modules in the categories \begin{displaymath} \mathscr{Y}_{\overline{N}}=\mathcal{O}\{\mathfrak{p},\operatorname{Coker} (\overline{N}\otimes E)\}_{\operatorname{int}}\quad\text{ and }\quad \mathscr{Y}_{L(y\cdot\mu)}=\mathcal{O}\{\mathfrak{p}, \operatorname{Coker}(L(y\cdot\mu)\otimes E)\}_{\operatorname{int}}. \end{displaymath} Therefore $\xi$ also induces a bijection \begin{displaymath} \overline{\xi}:\quad\operatorname{Irr}(\mathscr{Y}_{L(y\cdot\mu)})\rightarrow \operatorname{Irr}(\mathscr{Y}_{\overline{N}}) \end{displaymath} between the sets of isomorphism classes of simple objects in $\mathscr{Y}_{L(y\cdot\mu)}$ and $\mathscr{Y}_{\overline{N}}$, respectively. This induces moreover a bijection \begin{displaymath} \hat{\xi}:\quad\operatorname{Irr}^{\mathfrak{g}}(\mathscr{Y}_{\overline{N}})\rightarrow \operatorname{Irr}^{\mathfrak{g}}(\mathscr{Y}_{L(y\cdot\mu)}) \end{displaymath} between the sets of isomorphism classes of the simple quotients, as $\mathfrak{g}$-modules, of the modules from $\operatorname{Irr}(\mathscr{Y}_{\overline{N}})$ and $\operatorname{Irr}(\mathscr{Y}_{L(y\cdot\mu)})$, respectively. Each module $X\in \operatorname{Irr}^{\mathfrak{g}}(\mathscr{Y}_{\overline{N}})$ or $\operatorname{Irr}^{\mathfrak{g}}(\mathscr{Y}_{L(y\cdot\mu)})$ has the form $L(\mathfrak{p},V_X)$ for a uniquely defined simple $\mathfrak{a}'$-module $V_X$. As a consequence of Theorem~\ref{s9.2-thm2} we obtain the following result: \begin{theorem}\label{s9.5-cor1} For $X,Y\in \operatorname{Irr}^{\mathfrak{g}}(\mathscr{Y}_{\overline{N}})$ we have the following multiplicity formula in the category of $\mathfrak{g}$-modules: \begin{displaymath} [\Delta(\mathfrak{p},V_X):L(\mathfrak{p},V_Y)]= [\Delta(\mathfrak{p},V_{\hat{\xi}(X)}): L(\mathfrak{p},V_{\hat{\xi}(Y)})].
\end{displaymath} \end{theorem} \begin{proof} Let $P(X)\in\mathfrak{a}thscr{Y}_{\otimesverline{N}}$ be an indecomposable projective, whose head (as a $\mathfrak{a}thfrak{g}$-module) is isomorphic to $X$. Then $[\Delta(\mathfrak{a}thfrak{p},V_X):L(\mathfrak{a}thfrak{p},V_Y)]$ is just the dimension of the homomorphism space from $P(X)$ to the proper standard module in $\mathfrak{a}thscr{Y}_{\otimesverline{N}}$ corresponding to $X$ (see \cite[Section~5]{KM2}). Exactly the same holds if we replace $X$ by $\mathfrak hat{\xi}(X)$ and work with the category $\mathfrak{a}thscr{Y}_{L(y\cdot\mu)}$ instead of $\mathfrak{a}thscr{Y}_{\otimesverline{N}}$. Since $\xi$ is an equivalence of categories (Theorem~\rbraceef{s9.2-thm2}) sending proper standard objects to proper standard objects, the claim follows. \end{proof} \begin{remark}[Additional remarks to Theorem~\rbraceef{s9.5-cor1}]\lbraceambdabel{finalremarks} {\rbracem Theorem~\rbraceef{s9.5-cor1} describes only multiplicities of certain simple subquotients of $\Delta(\mathfrak{a}thfrak{p},V_X)$, namely multiplicities of those simple subquotients, which occur as heads of indecomposable projectives in $\mathfrak{a}thscr{Y}_{\otimesverline{N}}$. Following \cite{KM2} we call this the {\em rough structure} of $\Delta(\mathfrak{a}thfrak{p},V_X)$. The theorem reduces the question about the rough structure of the module $\Delta(\mathfrak{a}thfrak{p},V_X)$ to the analogous question for the module $\Delta(\mathfrak{a}thfrak{p},V_{\mathfrak hat{\xi}(X)})$. The latter module is an object of $\mathfrak{a}thcal{O}$ and hence the problem can be solved inductively using the Kazhdan-Lusztig combinatorics. } \end{remark} Let $L$ be as in Subsection~\rbraceef{s9.3}. Then the module $\Delta(\mathfrak{a}thfrak{p},L)$ has generalized trivial integral central character, and $L(\mathfrak{a}thfrak{p},L)$ is the simple top of some indecomposable projective module, $P$ say, in $\mathfrak{a}thscr{Y}_{\otimesverline{N}}$. Let $\mathfrak{a}thscr{X}$ be the block of $\mathfrak{a}thscr{Y}_{\otimesverline{N}}$ corresponding to the trivial central character. It contains $P$ by construction. By Theorem~\rbraceef{s9.2-thm2} and Subsection~\rbraceef{s5.4}, simple modules in $\mathfrak{a}thscr{X}$ are (bijectively) indexed by $(x,w)\in\mathfrak{a}thds{I}(\mathfrak{a}thbf{R}')$. Therefore, there is a pair $(x,w)$ for each $\Delta(\mathfrak{a}thfrak{p},L)$ in $\mathfrak{a}thscr{X}$. Theorem~\rbraceef{s9.5-cor1} allows us to formulate the following irreducibility criterion for generalized Verma modules: \begin{theorem}\lbraceambdabel{s9.5-cor2} Let $(x,w)$ be the pair associated with $\Delta(\mathfrak{a}thfrak{p},L)$. Then the module $\Delta(\mathfrak{a}thfrak{p},L)$ is irreducible if and only if $w=\otimesverline{w}$. \end{theorem} \begin{proof} Theorem~\rbraceef{s9.5-cor1} reduces this to the category $\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p},\mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'}\}$ from Subsection~\rbraceef{s5.4}. For the category $\mathfrak{a}thcal{O}\{\mathfrak{a}thfrak{p},\mathfrak{a}thscr{A}^{\mathfrak{a}thbf{R}'}\}$ the statement follows from the proof of Theorem~\rbraceef{thm53}. \end{proof} \begin{remark}[Unnecessary restrictions]\mathfrak hfill\\ \lbraceambdabel{simplicity} {\rbracem \begin{enumerate}[(i)] \item The restriction of integrability for the central character is not really essential and can be taken away using methods proposed by Soergel in \cite[Bemerkung~1]{Sperv} on the reduction of the Kazhdan-Lusztig conjecture to the integral case. 
\item In this paper we only worked with the trivial central character to avoid even more notation. The singular case follows by translation to the regular case, using our results there and translating back (invoking the fact that the composition of these translation functors is just a multiple of the identity). \end{enumerate} } \end{remark} \end{document}
\begin{document} \allowdisplaybreaks \newcommand{1910.08393}{1910.08393} \renewcommand{\arabic{footnote}}{} \renewcommand{113}{113} \FirstPageHeading \ShortArticleName{$q$-Difference Systems for the Jackson Integral of Symmetric Selberg Type} \ArticleName{$\boldsymbol{q}$-Difference Systems for the Jackson Integral\\ of Symmetric Selberg Type\footnote{This paper is a~contribution to the Special Issue on Elliptic Integrable Systems, Special Functions and Quantum Field Theory. The full collection is available at \href{https://www.emis.de/journals/SIGMA/elliptic-integrable-systems.html}{https://www.emis.de/journals/SIGMA/elliptic-integrable-systems.html}}} \Author{Masahiko ITO} \AuthorNameForHeading{M.~Ito} \Address{Department of Mathematical Sciences, University of the Ryukyus, Okinawa 903-0213, Japan} \Email{\href{mailto:[email protected]}{[email protected]}} \ArticleDates{Received April 29, 2020, in final form October 29, 2020; Published online November 08, 2020} \Abstract{We provide an explicit expression for the first order $q$-difference system for the Jackson integral of symmetric Selberg type. The $q$-difference system gives a generalization of $q$-analog of contiguous relations for the Gauss hypergeometric function. As a basis of the system we use a set of the symmetric polynomials introduced by Matsuo in his study of the $q$-KZ equation. Our main result is an explicit expression for the coefficient matrix of the $q$-difference system in terms of its Gauss matrix decomposition. We introduce a~class of symmetric polynomials called {\it interpolation polynomials}, which includes Matsuo's polynomials. By repeated use of three-term relations among the interpolation polynomials we compute the coefficient matrix.} \Keywords{$q$-difference equations; Selberg type integral; contiguous relations; Gauss decomposition} \Classification{33D60; 39A13} \renewcommand{\arabic{footnote}}{\arabic{footnote}} \setcounter{footnote}{0} \section{Introduction} The Gauss hypergeometric function \begin{equation*} _2F_1 \left( \begin{matrix} a,b \\ c \end{matrix} ;x \right) =\frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)}\int_0^1 z^{a-1}(1-z)^{c-a-1}(1-xz)^{-b}\,{\rm d}z, \end{equation*} where $\operatorname{Re}c>\operatorname{Re}a>0$ and $|x|<1$, satisfies the contiguous relations \begin{equation} \langlebel{eq:conti1} _2F_1 \left( \begin{matrix} a,b \\ c \end{matrix} ;x \right) ={}_2F_1 \left( \begin{matrix} a,b+1 \\ c+1 \end{matrix} ;x \right) -x\frac{a(c-b)}{c(c+1)} {}_2F_1 \left( \begin{matrix} a+1,b+1 \\ c+2 \end{matrix} ;x \right) \end{equation} and \begin{equation} \langlebel{eq:conti2} _2F_1 \left( \begin{matrix} a,b \\ c \end{matrix} ;x \right) ={}_2F_1 \left( \begin{matrix} a+1,b \\ c+1 \end{matrix} ;x \right) -x\frac{b(c-a)}{c(c+1)} {}_2F_1 \left( \begin{matrix} a+1,b+1 \\ c+2 \end{matrix} ;x \right). \end{equation} These contiguous relations for the Gauss hypergeometric function are extended to a difference system for a function defined by multivariable integral with respect to the Selberg type kernel~\cite{V2} \begin{equation*} \Psi(z):=\prod_{i=1}^n z_i^{\alpha-1}(1-z_i)^{\beta-1}(x-z_i)^{\gamma-1} \prod_{1\le j< k\le n}|z_j-z_k|^{2\tau}. 
\end{equation*} For the integral \begin{equation*} \langle e_i \rangle:=\int_C e_i(z)\Psi(z)\,{\rm d}z_1\cdots {\rm d}z_n, \qquad i=0,1,\ldots,n, \end{equation*} where $e_i(z)$ is the function specified by \begin{equation*} e_i(z):=\prod_{j=1}^{n-i}(x-z_j)\prod_{k=n-i+1}^n(1-z_k) \end{equation*} and $C$ is some suitable region, the $(n+1)$-tuple $(\langle e_0\rangle,\langle e_1\rangle,\ldots,\langle e_n\rangle)$ satisfies the following difference system. Let $\delta_{ij}$ be the symbol of Kronecker's delta. \begin{Proposition}[{\cite[Theorem~2.2]{FI10}}] \langlebel{prop:FI10} Let $T_\alpha$ be the shift operator with respect to $\alpha\to \alpha+1$, i.e., $T_\alpha f(\alpha)=f(\alpha+1)$ for an arbitrary function $f\colon \mathbb{C}\to \mathbb{C}$. Then \begin{equation} \langlebel{eq:system1} T_\alpha(\langle e_0\rangle,\langle e_1\rangle,\ldots,\langle e_n\rangle)=(\langle e_0\rangle,\langle e_1\rangle,\ldots,\langle e_n\rangle)M, \end{equation} where the $(n+1)\times(n+1)$ matrix $M$ is written in terms of its Gauss matrix decomposition as \begin{equation} \langlebel{eq:system1-1} M=LDU=U'D'L'. \end{equation} Here $L=(l_{ij})_{0\le i,j\le n}$, $D=(d_{j}\delta_{ij})_{0\le i,j\le n}$, $U=(u_{ij})_{0\le i,j\le n}$ are the lower triangular, diagonal, upper triangular matrices, respectively, given by \begin{gather*} l_{ij}=(-x)^{i-j} {n-j\choose n-i} \frac{(\gamma+j\tau;\tau)_{i-j}} {(\alpha+\gamma+ 2j\tau;\tau)_{i-j}} ,\\ d_{j}= \frac{x^j (\alpha;\tau)_{j}(\alpha+\gamma+2j\tau;\tau)_{n-j}} {(\alpha+\gamma+(j-1)\tau;\tau)_{j}(\alpha+\beta+\gamma+(n+j-1)\tau;\tau)_{n-j}},\\ u_{ij}=(-1)^{j-i} {j\choose i} \frac{(\beta+(n-j)\tau;\tau)_{j-i}} {(\alpha+\gamma+ 2i\tau;\tau)_{j-i}}, \end{gather*} and $U'=(u'_{ij})_{0\le i,j\le n}$, $D'=(d'_{j}\delta_{ij})_{0\le i,j\le n}$, $L'=(l'_{ij})_{0\le i,j\le n}$ are the upper triangular, diagonal, lower triangular matrices, respectively, given by \begin{gather*} u'_{ij}=(-x^{-1})^{j-i}{j\choose i} \frac{(\beta+(n-j)\tau;\tau)_{j-i} }{(\alpha+\beta+2(n-j)\tau;\tau)_{j-i}} ,\\ d'_{j}=\frac{x^j(\alpha+\beta+2(n-j)\tau;\tau)_j(\alpha;\tau)_{n-j} }{(\alpha+\beta+\gamma+(2n-j-1)\tau;\tau)_j(\alpha+\beta+(n-j-1)\tau;\tau)_{n-j}} ,\\ l'_{ij}=(-1)^{i-j}{n-j\choose n-i}\frac{(\gamma +j\tau;\tau)_{i-j}} {(\alpha+\beta+2(n-i)\tau;\tau)_{i-j}} ,\end{gather*} where $(x;\tau)_0:=1$ and $(x;\tau)_i:=x(x+\tau)(x+2\tau)\cdots (x+(i-1)\tau)$ for $i=1,2,\ldots$. \end{Proposition} In particular, when $n=1$ the system (\ref{eq:system1}) is given by \begin{subequations} \begin{gather} T_\alpha(\langle e_0\rangle,\langle e_1\rangle) =(\langle e_0\rangle,\langle e_1\rangle) \left(\!\! \begin{array}{cc} \alpha+\gamma & 0 \\ -x\gamma & x\alpha \end{array} \!\right) \begin{pmatrix} \alpha+\beta+\gamma & \beta \\ 0 & \alpha+\gamma \end{pmatrix}^{-1}, \langlebel{eq:system1-2a}\\ T_\alpha(\langle e_0\rangle,\langle e_1\rangle) =(\langle e_0\rangle,\langle e_1\rangle) \begin{pmatrix} \alpha &-\beta \\ 0 & x(\alpha+\beta) \end{pmatrix} \begin{pmatrix} \alpha+\beta & 0 \\ \gamma & \alpha+\beta+\gamma \end{pmatrix}^{-1}. \langlebel{eq:system1-2b} \end{gather} \end{subequations} The system (\ref{eq:system1-2b}) can be rewritten as the following system of three-term equations \begin{equation}\langlebel{eq:conti3} (\alpha+\beta+\gamma)T_\alpha\langle e_{1}\rangle=-\beta \langle e_{0}\rangle+x(\alpha+\beta)\langle e_{1}\rangle, \end{equation} and \begin{equation}\langlebel{eq:conti4} \gamma T_\alpha \langle e_{1}\rangle+(\alpha+\beta)T_\alpha \langle e_{0}\rangle=\alpha \langle e_{0}\rangle. 
\end{equation} Since, for $n=1$ \begin{equation*} \langle e_0\rangle=\int_0^1 z^{\alpha-1}(1-z)^{\beta-1}(x-z)^{\gamma}\,{\rm d}z \qquad\mbox{and}\qquad \langle e_1\rangle=\int_0^1 z^{\alpha-1}(1-z)^{\beta}(x-z)^{\gamma-1}\,{\rm d}z, \end{equation*} under the conditions $\operatorname{Re}\alpha>0$, $\operatorname{Re}\beta>0$ and $|x|>1$ we see that the equations (\ref{eq:conti3}) and (\ref{eq:conti4}) exactly coincide with the contiguous relations~(\ref{eq:conti1}) and (\ref{eq:conti2}), respectively, after the substitutions $\alpha \to a$, $\beta\to c-a$, $\gamma\to -b$ and $x\to 1/x$. Therefore the difference system~(\ref{eq:system1}) expressed in terms of Gauss matrix decomposition (\ref{eq:system1-1}) can be regarded as a natural extension of the contiguous relations~(\ref{eq:conti1}) and~(\ref{eq:conti2}). For further applications of the difference system~(\ref{eq:system1}) in random matrix theory, see~\cite{FI10}. In the discussion of the result being a generalization of contiguous relations of the Gauss hypergeometric function, the analog of the equation~\eqref{eq:conti1} for the Selberg integral can be found in~\cite{FW08, Kan93}. Next we would like to discuss a $q$-analogue of the difference system~(\ref{eq:system1}) in Proposition~\ref{prop:FI10}. This is one of the aims of this paper. For an arbitrary $c\in \mathbb{C}^*$ we use the {\em $c$-shifted factorial} for $x\in \mathbb{C}$ \begin{equation*} (x;c)_i= \begin{cases} (1-x)(1-cx)\cdots\big(1-c^{i-1}x\big) & \mbox{if}\quad i=1,2,\ldots , \\ 1 & \mbox{if}\quad i=0, \\ \displaystyle \frac{1}{\big(1-c^{-1}x\big)\big(1-c^{-2}x\big)\cdots\big(1-c^{i}x\big)} & \mbox{if}\quad i=-1,-2,\ldots, \end{cases} \end{equation*} and the {\em $c$-binomial coefficient} \begin{equation*} \qbin{i}{j}{c}=\frac{(c;c)_i}{(c;c)_{i-j}(c;c)_j}. \end{equation*} We also use the symbol $(x;c)_\infty:=\prod_{i=0}^\infty\big(1-c^{i}x\big)$ for $|c|<1$. Throughout this paper we fix $q\in\mathbb{C}^*$ with $|q|<1$. For a point $\xi=(\xi_1,\ldots,\xi_n)\in (\mathbb{C}^*)^n$ and a function $f(z)=f(z_1,\ldots,z_n)$ on~$(\mathbb{C}^*)^n$ we define the following sum over the lattice $\mathbb{Z}^n$ by \begin{equation}\langlebel{eq:00jac4} \int_0^{\xi\infty}f(z) \frac{{\rm d}_qz_1}{z_1}\wedge\cdots\wedge\frac{{\rm d}_qz_n}{z_n} :=(1-q)^n\sum_{(\nu_1,\ldots,\nu_n)\in \mathbb{Z}^n}f\big(\xi_1 q^{\nu_1},\ldots,\xi_n q^{\nu_n}\big), \end{equation} if it converges. We call it the {\it Jackson integral of $f(z)$}. By definition the Jackson integral (\ref{eq:00jac4}) is invariant under the $q$-shift $\xi_i\to q\xi_i$ ($i=1,\ldots,n$). Let $\Phi_{n,m}(z)$ and $\Delta(z)$ be the functions on~$(\mathbb{C}^*)^n$ specified by \begin{gather}\langlebel{eq:Phi} \Phi_{n,m}(z) :=\prod_{i=1}^n \left\{z_i^\alpha \prod_{r=1}^m \frac{\big(qa_r^{-1}z_i;q\big)_\infty} {(b_rz_i;q)_\infty}\right\} \prod_{1\le j<k\le n}z_j^{2\tau-1}\frac{\big(qt^{-1} z_k/z_j;q\big)_\infty}{(t z_k/z_j;q)_\infty}, \\ \Delta(z):=\prod_{1\le i<j\le n}(z_i-z_j),\nonumber \end{gather} where $t=q^\tau$. For a point $\xi=(\xi_1,\ldots,\xi_n)\in (\mathbb {C}^*)^n$ and an arbitrary symmetric function $\phi(z)=\phi(z_1,\ldots,z_n)$ on $(\mathbb {C}^*)^n$ we set \begin{equation*} \langle \phi,\xi\rangle:=\int_{0}^{\xi\infty}\phi(z)\Phi_{n,m}(z)\Delta(z) \frac{{\rm d}_qz_1}{z_1}\wedge\cdots\wedge\frac{{\rm d}_qz_n}{z_n}, \end{equation*} which we call the {\it Jackson integral of symmetric Selberg type}. 
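The $c$-shifted factorial and the $c$-binomial coefficient introduced above are the building blocks of all the explicit matrix entries appearing later in the paper. As a minimal numerical sketch (written in Python for this exposition; the helper names and the sample parameter values are chosen here for illustration only), they can be coded directly from the definitions, and one can check, for instance, the elementary splitting identity $(x;c)_{i+j}=(x;c)_i\big(c^ix;c\big)_j$ and the symmetry of the $c$-binomial coefficient.
\begin{verbatim}
# Minimal sketch (not from the paper): the c-shifted factorial (x;c)_i for
# integer i, and the c-binomial coefficient, implemented from the definitions.
def c_shifted(x, c, i):
    """(x;c)_i = (1-x)(1-cx)...(1-c^{i-1}x) for i >= 0, reciprocal product for i < 0."""
    if i >= 0:
        result = 1.0
        for k in range(i):
            result *= 1.0 - (c ** k) * x
        return result
    result = 1.0
    for k in range(1, -i + 1):
        result *= 1.0 - x / (c ** k)
    return 1.0 / result

def c_binom(i, j, c):
    """c-binomial coefficient: (c;c)_i / ((c;c)_{i-j} (c;c)_j)."""
    return c_shifted(c, c, i) / (c_shifted(c, c, i - j) * c_shifted(c, c, j))

if __name__ == "__main__":
    x, c = 0.3, 0.55                       # arbitrary sample values with |c| < 1
    for i, j in [(2, 3), (4, 1), (0, 5)]:
        lhs = c_shifted(x, c, i + j)
        rhs = c_shifted(x, c, i) * c_shifted((c ** i) * x, c, j)
        print(i, j, abs(lhs - rhs) < 1e-12)               # splitting identity
    print(abs(c_binom(5, 2, c) - c_binom(5, 3, c)) < 1e-12)  # symmetry in j <-> i-j
\end{verbatim}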
In the study of $q$-difference de Rham cohomology associated with Jackson integrals~\cite{Ao90,AK91}, Aomoto and Kato~\cite{AK93-1} showed that the Jackson integral of symmetric Selberg type satisfies $q$-difference systems of rank ${n+m-1\choose m-1}$ when the parameters are generic. When $m=1$ the Jackson integral of symmetric Selberg type is equivalent to the $q$-Selberg integral introduced by Askey~\cite{As80}, whose evaluation was proved by others; see~\cite{Ao98,Ev92,Ha88,Kad88} for instance. See also the recent references \cite[Section~2.3]{FW08} and~\cite{IF2017}. The $q$-Selberg integral is a very active area of research with important connections to special functions, combinatorics, mathematical physics and orthogonal polynomials (see \cite{ARW,KO,KS,KY,RTVZ,W2005} and \cite[Section~5]{IZ}). Using the Jackson integral of symmetric Selberg type for $m=2$, Matsuo \cite{ma1,ma2} constructed a set of solutions of the $q$-KZ equation. Varchenko~\cite{V1} extended Matsuo's construction to a more general setting of the $q$-KZ equation using the Jackson integral of symmetric Selberg type for general $m$. Writing $a_r=x_r$, $b_r=q^{\beta_r}x_r^{-1}$ in \eqref{eq:Phi}, the $q$-KZ equation they studied can be regarded as the $q$-difference system with respect to the $q$-shift $x_r\to qx_r$ $(r=1,\ldots,m)$. In another context, writing $a_r=qx_r^{-1}$, $b_r=q^{\mu_r}x_r$ in \eqref{eq:Phi}, Kaneko \cite{Kan96} gave an explicit expression for the $q$-difference system with respect to the $q$-shift $x_r\to qx_r$ $(r=2,\ldots,m)$ satisfied by the Jackson integral of symmetric Selberg type for general $m$ under the special constraints $\mu_2=\cdots=\mu_m=1$ or $\mu_2=\cdots=\mu_m=-\tau$. With these constraints the $q$-difference system degenerates to a very simple form and can also be regarded as a generalization of the second order $q$-difference equation satisfied by Heine's $_2\phi_1$ $q$-hypergeometric function.

In this paper, we fix $m=2$ in \eqref{eq:Phi}, and study two types of $q$-difference systems for the Jackson integral of symmetric Selberg type for $\Phi(z)=\Phi_{n,2}(z)$. One is the $q$-difference system with respect to the shift $\alpha\to \alpha +1$, and the other is the system with respect to the $q$-shifts $a_i \to qa_i$ and $b_i \to q^{-1}b_i$ simultaneously. For these purposes, we define the set of symmetric polynomials $\{e_i(a,b;z)\,|\, i=0,1,\ldots,n\}$, where
\begin{equation}\label{eq:matsuo}
e_i(a,b;z):=\frac{1}{\Delta(z)}\times {\cal A}\left(
\prod_{j=1}^{n-i}(1-bz_j)\prod_{j=n-i+1}^n\big(1-a^{-1}z_j\big)
\prod_{1\le k<l\le n}\big(z_k-t^{-1}z_l\big)
\right),
\end{equation}
which we call {\it Matsuo's polynomials}. The symbol $\cal A$ denotes the skew-symmetrization (see the definition~\eqref{eq:00Af} of $\cal A$ in Section~\ref{section02}). With these symmetric polynomials, we denote
\begin{equation}\label{eq:<e_i(a,b),x>}
\langle e_i(a,b),\xi\rangle
:=\int_0^{\xi\infty}
e_i(a,b;z)\Phi(z)\Delta(z)
\frac{{\rm d}_qz_1}{z_1}\wedge\cdots\wedge\frac{{\rm d}_qz_n}{z_n}.
\end{equation}
We assume that
\begin{equation*}
\big|qa_1^{-1}a_2^{-1}b_1^{-1}b_2^{-1} \big|<\big|q^\alpha\big|<1\qquad\mbox{and}\qquad
\big|qa_1^{-1}a_2^{-1}b_1^{-1}b_2^{-1} \big|<\big|q^\alpha t^{2n-2}\big|<1
\end{equation*}
for the convergence of the Jackson integrals \eqref{eq:<e_i(a,b),x>}. (See \cite[Lemma~3.1]{IN2018} for details of convergence.)
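To see the definition \eqref{eq:matsuo} in action, the following short SymPy sketch (a toy computation written for this exposition, using the case $n=2$ and function names chosen here; it assumes only the displayed definition) builds $e_i(a,b;z)$ by skew-symmetrizing over $S_n$ and dividing by the difference product $\Delta(z)$, and then verifies that each $e_i$ is a symmetric polynomial in $z_1,z_2$.
\begin{verbatim}
# Toy SymPy computation (not from the paper) of Matsuo's polynomials
# e_i(a,b;z) for n = 2, straight from the skew-symmetrized definition.
import itertools
import sympy as sp

n = 2
a, b, t = sp.symbols('a b t')
z = sp.symbols('z1:%d' % (n + 1))                    # (z1, z2)

def product(factors):
    out = sp.Integer(1)
    for f in factors:
        out *= f
    return out

def sign(perm):                                      # signature of a permutation
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def skew_symmetrize(expr):                           # the operator 'A'
    total = sp.Integer(0)
    for perm in itertools.permutations(range(n)):
        swap = {z[i]: z[perm[i]] for i in range(n)}
        total += sign(perm) * expr.subs(swap, simultaneous=True)
    return sp.expand(total)

delta = product(z[k] - z[l] for k in range(n) for l in range(k + 1, n))

def matsuo_e(i):
    core = product(1 - b * z[j] for j in range(n - i)) \
         * product(1 - z[j] / a for j in range(n - i, n)) \
         * product(z[k] - z[l] / t for k in range(n) for l in range(k + 1, n))
    return sp.cancel(skew_symmetrize(core) / delta)  # a polynomial, by construction

for i in range(n + 1):
    ei = matsuo_e(i)
    swapped = ei.subs({z[0]: z[1], z[1]: z[0]}, simultaneous=True)
    print(i, sp.factor(ei), sp.simplify(ei - swapped) == 0)   # symmetric in z1, z2
\end{verbatim}
For $n=2$ and $i=0$, for example, the script returns $(1+t^{-1})(1-bz_1)(1-bz_2)$, which is visibly symmetric.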
For the polynomials \eqref{eq:matsuo} let $R$ be the $(n+1)\times(n+1)$ matrix defined by \begin{gather} \big(e_n(a_2,b_1;z),e_{n-1}(a_2,b_1;z),\ldots,e_0(a_2,b_1;z)\big)\nonumber\\ \qquad {}=\big(e_0(a_1,b_2;z),e_1(a_1,b_2;z),\ldots,e_n(a_1,b_2;z)\big)R.\langlebel{eq:R} \end{gather} The transition matrix $R$ is called the {\it $R$-matrix} in the context of~\cite{ma2}. Matsuo~\cite{ma2} gave the $q$-difference system with respect to the $q$-shifts $a_i \to qa_i$ and $b_i \to q^{-1}b_i$ simultaneously, using Matsuo's polynomials as follows. \begin{Proposition}[Matsuo]\langlebel{prop:Matsuo} Let $T_{q,u}$ be the $q$-shift operator with respect to $u\to qu$, and $T_{q,b_i}^{-1}T_{q,a_i}$ $(i=1,2)$ denote the $q$-shift operator with respect to $a_i\to qa_i$ and $b_i\to q^{-1}b_i$ simultaneously. Then, the Jackson integrals of symmetric Selberg type satisfy the $q$-difference system with respect to $T_{q,b_i}^{-1}T_{q,a_i}$ $(i=1,2)$ given by \begin{gather} T_{q,b_i}^{-1}T_{q,a_i}\big(\langle e_n(a_2,b_1),\xi\rangle,\langle e_{n-1}(a_2,b_1),\xi\rangle,\ldots,\langle e_0(a_2,b_1),\xi\rangle\big)\nonumber\\ \qquad=\big(\langle e_n(a_2,b_1),\xi\rangle,\langle e_{n-1}(a_2,b_1),\xi\rangle,\ldots,\langle e_0(a_2,b_1),\xi\rangle\big) K_i,\langlebel{eq:Ki} \end{gather} whose coefficient matrices $K_i$ are expressed as $K_1=R^{-1}D_1$ and $K_2=D_2\big(T_{q,b_2}^{-1}T_{q,a_2}R\big)$, where~$R$ is the $(n+1)\times(n+1)$ matrix given by~\eqref{eq:R}, and $D_1$, $D_2$ are the diagonal matrices given by \begin{equation*} D_1=\big(\big(q^\alpha t^{n-1}\big)^{n-i}\delta_{ij}\big)_{0\le i,j\le n},\qquad D_2=\big(\big(q^\alpha t^{n-1}\big)^{i} \delta_{ij}\big)_{0\le i,j\le n}. \end{equation*} \end{Proposition} \begin{Remark} If we replace $a_i$ and $b_i$ as $a_i=x_i$ and $b_i=q^{\beta_i}x_i^{-1}$, respectively, then \eqref{eq:Ki} simplifies to the case considered by Matsuo, and then $T_{q,b_i}^{-1}T_{q,a_i}$ in \eqref{eq:Ki} becomes the single $q$-shift operator $T_{q,x_i}$, and the system \eqref{eq:Ki} coincides with the $q$-KZ equation (see \cite{ma1,ma2,V1}). For the problem of finding the explicit form of the coefficient matrix $K_i$ for the system \eqref{eq:Ki} Aomoto and Kato used the information of a connection matrix \cite{AK95} between two kinds of fundamental solutions of \eqref{eq:Ki} specified by their asymptotic behaviors. Based on Birkhoff's classical theory they introduced a way to derive the explicit form of the coefficient matrix for a linear ordinary $q$-difference system from its connection matrix. They call their method the {\it Riemann--Hilbert approach for $q$-difference equations from connection matrices} \cite{Ao95}, and they presented $K_i$ explicitly when $n=1$ and $2$ as an example of their method (see \cite[p.~272, examples]{AK98}). The problem of finding the explicit form of the coefficient matrix for the equivalent $q$-difference system with respect to $T_{q,x_i}$ was also studied by Mimachi~\cite{mi1,mi2}. As a basis of the system Mimachi~\cite{mi2} introduced a family of Schur polynomials different from Matsuo's polynomials, and he calculated the entries of the coefficient matrix explicitly when $n=1,2$ and $3$. \end{Remark} From Proposition \ref{prop:Matsuo}, if we want to know the coefficient matrices $K_i$ of the above $q$-difference systems, it suffices to give the explicit expression for the transition matrix $R$ or its inverse $R^{-1}$. 
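As a sanity check on the defining relation \eqref{eq:R}, the entries of $R$ can also be computed directly for very small $n$ by matching coefficients of $z$. The following SymPy sketch (a toy computation for $n=1$ written for this exposition; the variable names are chosen here) solves the resulting linear system and prints the $2\times 2$ matrix $R$; for $n=1$ the output agrees with the Gauss decomposition stated in the theorem below.
\begin{verbatim}
# Toy computation (not from the paper): the transition matrix R of (eq:R)
# for n = 1, found by matching coefficients of z in the defining relation.
import sympy as sp

a1, a2, b1, b2, z = sp.symbols('a1 a2 b1 b2 z')

def e(i, a, b):          # Matsuo's polynomials for n = 1: e_0 = 1 - b z, e_1 = 1 - z/a
    return 1 - b * z if i == 0 else 1 - z / a

old_basis = [e(0, a1, b2), e(1, a1, b2)]   # (e_0(a_1,b_2;z), e_1(a_1,b_2;z))
new_basis = [e(1, a2, b1), e(0, a2, b1)]   # (e_n(a_2,b_1;z), ..., e_0(a_2,b_1;z)), n = 1

R = sp.Matrix(2, 2, lambda i, j: sp.Symbol('r%d%d' % (i, j)))
equations = []
for j in range(2):
    diff = sp.expand(new_basis[j] - sum(R[i, j] * old_basis[i] for i in range(2)))
    equations += [diff.coeff(z, k) for k in range(2)]   # coefficients of z^0 and z^1
solution = sp.solve(equations, list(R))
print(sp.simplify(R.subs(solution)))
\end{verbatim}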
\begin{Theorem}[\cite{BM,KOY}]\langlebel{thm:R=LDU} The matrix $R$ is written in terms of its Gauss matrix decomposition as \begin{equation}\langlebel{eq:R=LDU} R=L_{\mbox{\tiny $R$}}\,D_{\mbox{\tiny $R$}}\,U_{\mbox{\tiny $R$}} =U'{}_{\!\!\!{\mbox{\tiny $R$}}}\,D'{}_{\!\!{\mbox{\tiny $R$}}}\,L'{}_{\!\!{\mbox{\tiny $R$}}}, \end{equation} where $L_{\mbox{\tiny $R$}}=\big(l^{\mbox{\tiny $R$}}_{ij}\big)_{0\le i,j\le n}$, $D_{\mbox{\tiny $R$}}=\big(d^{\mbox{\tiny $R$}}_j \delta_{ij}\big)_{0\le i,j\le n}$, $U_{\mbox{\tiny $R$}}=\big(u^{\mbox{\tiny $R$}}_{ij}\big)_{0\le i,j\le n}$ are the lower triangular, diagonal, upper triangular matrices, respectively, given by \begin{subequations} \begin{gather} l^{\mbox{\tiny $R$}}_{ij}= \qbin{n-j}{n-i}{t^{-1}} \frac{(-1)^{i-j}t^{-{i-j\choose 2}}\big(a_2b_2t^{j};t\big)_{i-j}}{\big(a_1^{-1}a_2 t^{-(n-2j-1)};t\big)_{i-j}}, \langlebel{eq:lRij}\\ d^{\mbox{\tiny $R$}}_j=\frac{\big(a_1a_2^{-1}t^{-j};t\big)_{n-j}(a_2b_1;t)_{j}}{(a_1b_2;t)_{n-j}\big(a_1^{-1}a_2t^{-(n-j)};t\big)_{j}}, \langlebel{eq:dRij}\\ u^{\mbox{\tiny $R$}}_{ij}=\qbin{j}{i}{t^{-1}} \frac{\big(a_1b_1t^{n-j};t\big)_{j-i}}{\big(a_1a_2^{-1}t^{n-i-j};t\big)_{j-i}}, \langlebel{eq:uRij} \end{gather} \end{subequations} and $U'{}_{\!\!\!{\mbox{\tiny $R$}}}=\big(u^{{\mbox{\tiny $R$}}\,\prime}_{ij}\big)_{0\le i,j\le n}$, $D'{}_{\!\!{\mbox{\tiny $R$}}}=\big(d^{{\mbox{\tiny $R$}}\,\prime}_{j} \delta_{ij}\big)_{0\le i,j\le n}$, $L'{}_{\!\!{\mbox{\tiny $R$}}}=\big(l^{{\mbox{\tiny $R$}}\,\prime}_{ij}\big)_{0\le i,j\le n}$ are the upper triangular, diagonal, lower triangular matrices, respectively, given by \begin{subequations} \begin{gather} u^{{\mbox{\tiny $R$}}\,\prime}_{ij}= \qbin{j}{i}{t} \frac{(-1)^{j-i}t^{{j-i\choose 2}}\big(a_1^{-1}b_1^{-1}t^{-(n-i-1)};t\big)_{j-i}}{\big(b_1^{-1}b_2t^{i+j-n};t\big)_{j-i}},\langlebel{eq:u'Rij}\\ d^{{\mbox{\tiny $R$}}\,\prime}_{j}=\frac{\big(b_1b_2^{-1}t^{n-2j+1};t\big)_{j}\big(a_2^{-1}b_1^{-1}t^{-(n-j-1)};t\big)_{n-j}} {\big(a_1^{-1}b_2^{-1}t^{-(j-1)};t\big)_{j}\big(b_1^{-1}b_2t^{-(n-2j-1)};t\big)_{n-j}},\langlebel{eq:d'Rij}\\ l^{{\mbox{\tiny $R$}}\,\prime}_{ij}= \qbin{n-j}{n-i}{t} \frac{\big(a_2^{-1}b_2^{-1}t^{-(i-1)};t\big)_{i-j}}{\big(b_1b_2^{-1} t^{n-2i+1};t\big)_{i-j}}.\langlebel{eq:l'Rij} \end{gather} \end{subequations} \end{Theorem} One of the main aims of this paper is to give a proof of the above result, which we will do in Section \ref{section06}. The explicit expression for $R^{-1}$ in terms of its Gauss matrix decomposition is also presented as Corollary \ref{cor:inverseR} in Section \ref{section06}. \begin{Remark}After completing of earlier version of this paper, the author was informed that Theorem~\ref{thm:R=LDU} previously appeared implicitly in \cite[Section~5]{BM} and~\cite{KOY}. Our proof of the Theorem is, however, very different to that of~\cite{BM} and~\cite{KOY}. \end{Remark} From Theorem \ref{thm:R=LDU} we immediately obtain a closed-form expression for the determinant of $R$ (or $K_i$). \begin{Corollary} The determinant of the transition matrix $R$ evaluates as \begin{equation*} \det R =d^{\mbox{\tiny $R$}}_0\,d^{\mbox{\tiny $R$}}_1\cdots d^{\mbox{\tiny $R$}}_n=\big({-}a_1a_2^{-1}\big)^{{n+1\choose 2}} \prod_{i=1}^n\frac{(a_2b_1;t)_i}{(a_1b_2;t)_i}. 
\end{equation*} The determinants of the coefficient matrices $K_1$ and $K_2$ given in \eqref{eq:Ki} evaluate as \begin{equation*} \det K_1=\det \big(R^{-1}D_1\big)= \big({-}a_2a_1^{-1}q^\alpha t^{n-1}\big)^{{n+1\choose 2}}\prod_{i=1}^n\frac{(a_1b_2;t)_i}{(a_2b_1;t)_i} \end{equation*} and \begin{equation*} \det K_2=\det \big(D_2\big(T_{q,b_2}^{-1}T_{q,a_2}R\big)\big)= \big({-}a_1a_2^{-1}q^{\alpha-1} t^{n-1}\big)^{{n+1\choose 2}} \prod_{i=1}^n\frac{(qa_2b_1;t)_i}{\big(q^{-1}a_1b_2;t\big)_i}. \end{equation*} \end{Corollary} Next, we focus on the $q$-difference system with respect to the shift $\alpha\to \alpha+1$ for the Jackson integral of symmetric Selberg type. Using Matsuo's polynomials $\{e_i(a_1,b_2;z)\,|\, i=0,1,\ldots,n\}$, this $q$-difference system is given explicitly in terms of its Gauss matrix decomposition. \begin{Theorem}\langlebel{thm:main} Let $T_\alpha$ be the shift operator with respect to $\alpha\to \alpha+1$, i.e., $T_\alpha f(\alpha)=f(\alpha+1)$ for an arbitrary function $f(\alpha)$ of $\alpha\in \mathbb{C}$. Then \begin{gather} T_{\alpha}\big(\langle e_0(a_1,b_2),\xi\rangle,\langle e_1(a_1,b_2),\xi\rangle,\ldots,\langle e_n(a_1,b_2),\xi\rangle\big)\nonumber\\ \qquad{}=\big(\langle e_0(a_1,b_2),\xi\rangle,\langle e_1(a_1,b_2),\xi\rangle,\ldots,\langle e_n(a_1,b_2),\xi\rangle\big)A,\langlebel{eq:main} \end{gather} where the coefficient matrix $A$ is written in terms of its Gauss matrix decomposition as \begin{equation*} A=L_{\mbox{\tiny $A$}}\,D_{\mbox{\tiny $A$}}\,U_{\mbox{\tiny $A$}} =U'{}_{\!\!\!{\mbox{\tiny $A$}}}\,D'{}_{\!\!{\mbox{\tiny $A$}}}\,L'{}_{\!\!{\mbox{\tiny $A$}}}. \end{equation*} Here $L_{\mbox{\tiny $A$}}=\big(l^{\mbox{\tiny $A$}}_{ij}\big)_{0\le i,j\le n}$, $D_{\mbox{\tiny $A$}}=\big(d^{\mbox{\tiny $A$}}_j\,\delta_{ij}\big)_{0\le i,j\le n}$, $U_{\mbox{\tiny $A$}}=\big(u^{\mbox{\tiny $A$}}_{ij}\big)_{0\le i,j\le n}$ are the lower triangular, diagonal, upper triangular matrices, respectively, given by \begin{subequations} \begin{gather} l^{\mbox{\tiny $A$}}_{ij} =(-1)^{i-j} t^{{n-i\choose 2}-{n-j\choose 2}} \qbin{n-j}{n-i}{t} \frac{\big(a_2b_2 t^j;t\big)_{i-j}} {\big(q^\alpha a_2b_2 t^{2j};t\big)_{i-j}},\langlebel{eq:lAij} \\ d^{\mbox{\tiny $A$}}_j= a_1^{n-j}a_2^jt^{{j\choose 2}+{n-j\choose 2}} \frac{\big(q^\alpha;t\big)_{j}\big(q^\alpha a_2b_2 t^{2j};t\big)_{n-j}} {\big(q^\alpha a_2b_2 t^{j-1};t\big)_{j}\big(q^\alpha a_1a_2b_1b_2 t^{n+j-1};t\big)_{n-j}}, \langlebel{eq:dAij}\\ u^{\mbox{\tiny $A$}}_{ij} =\big({-}q^\alpha a_1^{-1}a_2\big)^{j-i}t^{{j\choose 2}-{i\choose 2}} \qbin{j}{i}{t} \frac{\big(a_1b_1t^{n-j};t\big)_{j-i}} {\big(q^\alpha a_2b_2 t^{2i};t\big)_{j-i}},\langlebel{eq:uAij} \end{gather} \end{subequations} and $U'{}_{\!\!\!{\mbox{\tiny $A$}}}=\big(u^{{\mbox{\tiny $A$}}\,\prime}_{ij}\big)_{0\le i,j\le n}$, $D'{}_{\!\!{\mbox{\tiny $A$}}}=\big(d^{{\mbox{\tiny $A$}}\,\prime}_{j}\,\delta_{ij}\big)_{0\le i,j\le n}$, $L'{}_{\!\!{\mbox{\tiny $A$}}}=\big(l^{{\mbox{\tiny $A$}}\,\prime}_{ij}\big)_{0\le i,j\le n}$ are the upper triangular, diagonal, lower triangular matrices, respectively, given by \begin{subequations} \begin{gather} u^{{\mbox{\tiny $A$}}\,\prime}_{ij} =\big({-}q^\alpha\big)^{j-i}t^{{n-i\choose 2}-{n-j\choose 2}} \qbin{j}{i}{t} \frac{\big(a_1b_1t^{n-j};t\big)_{j-i} }{\big(q^\alpha a_1b_1 t^{2(n-j)};t\big)_{j-i}},\langlebel{eq:u'Aij}\\ d^{{\mbox{\tiny $A$}}\,\prime}_{j} =a_1^{n-j}a_2^jt^{{j\choose 2}+{n-j\choose 2}} \frac{\big(q^\alpha a_1b_1 t^{2(n-j)};t\big)_j\big(q^\alpha;t\big)_{n-j} }{\big(q^\alpha a_1a_2b_1b_2 
t^{2n-j-1};t\big)_j\big(q^\alpha a_1b_1 t^{n-j-1};t\big)_{n-j}},\langlebel{eq:d'Aij}\\ l^{{\mbox{\tiny $A$}}\,\prime}_{ij}=\big({-}a_1a_2^{-1}\big)^{i-j}t^{{j\choose 2}-{i\choose 2}} \qbin{n-j}{n-i}{t}\frac{\big(a_2b_2t^j;t\big)_{i-j}} {\big(q^\alpha a_1b_1 t^{2(n-i)};t\big)_{i-j}}.\langlebel{eq:l'Aij} \end{gather} \end{subequations} \end{Theorem} The first part of Theorem \ref{thm:main} will be proved in Section~\ref{section05}, while the latter part of Theorem~\ref{thm:main} will be explained in the Appendix. Note that, from this theorem we immediately have the following. \begin{Corollary}The determinant of the coefficient matrix $A$ evaluates as \begin{equation*} \det A=d^{\mbox{\tiny $A$}}_0\,d^{\mbox{\tiny $A$}}_1\cdots d^{\mbox{\tiny $A$}}_n= (a_1a_2)^{{n+1\choose 2}}t^{2{n+1\choose 3}}\prod_{i=1}^n \frac{\big(q^\alpha;t\big)_{i}}{\big(q^\alpha a_1a_2b_1b_2 t^{2n-i-1};t\big)_i}. \end{equation*} \end{Corollary} \begin{Remark}The expression \eqref{eq:main} for the $q$-difference equation is equivalent to \begin{gather*} T_{\alpha}\big(\langle e_0(a_1,b_2),\xi\rangle,\langle e_1(a_1,b_2),\xi\rangle,\ldots,\langle e_n(a_1,b_2),\xi\rangle\big) U_{\mbox{\tiny $A$}}^{-1}\nonumber\\ \qquad{}=\big(\langle e_0(a_1,b_2),\xi\rangle,\langle e_1(a_1,b_2),\xi\rangle,\ldots,\langle e_n(a_1,b_2),\xi\rangle\big)L_{\mbox{\tiny $A$}} D_{\mbox{\tiny $A$}} \end{gather*} or \begin{gather*} T_{\alpha}\big(\langle e_0(a_1,b_2),\xi\rangle,\langle e_1(a_1,b_2),\xi\rangle,\ldots,\langle e_n(a_1,b_2),\xi\rangle\big) L'{}_{\!\!{\mbox{\tiny $A$}}}^{-1} \nonumber\\ \qquad=\big(\langle e_0(a_1,b_2),\xi\rangle,\langle e_1(a_1,b_2),\xi\rangle,\ldots,\langle e_n(a_1,b_2),\xi\rangle\big)U'{}_{\!\!\!{\mbox{\tiny $A$}}}D'{}_{\!\!{\mbox{\tiny $A$}}}. \end{gather*} Here the entries of $U_{\mbox{\tiny $A$}}^{-1}$ and $L'{}_{\!\!{\mbox{\tiny $A$}}}^{-1}$ are also factorized into binomials like $U_{\mbox{\tiny $A$}}$ and $L'{}_{\!\!{\mbox{\tiny $A$}}}$, respectively. We will see the explicit expression for $U_{\mbox{\tiny $A$}}^{-1}$ in Section~\ref{section04} as Proposition~\ref{prop:inverseU_A}. For the explicit expression for $L'{}_{\!\!{\mbox{\tiny $A$}}}$, see Proposition~\ref{prop:inverseL'_A}. \end{Remark} \begin{Remark} If we consider the $q\to 1$ limit after replacing $a_i$ and $b_i$ as $a_1=1$, $a_2=x$, $b_1= q^\beta$ and $b_2= q^{\gamma}x^{-1}$ on Theorem \ref{thm:main}, then $e_i\big(1,q^{\gamma}x^{-1};z\big)\Phi(z)\Delta(z)\frac{{\rm d}_qz_1}{z_1}\cdots \frac{{\rm d}_qz_n}{z_n}$ tends to $e'_i(z)\Psi'(z){\rm d}z_1\cdots {\rm d}z_n$, where \begin{gather*} \Psi'(z) =\prod_{i=1}^n z_i^{\alpha-1}(1-z_i)^{\beta-1} (1-z_i/x)^{\gamma-1} \prod_{1\le j<k\le n}(z_j-z_k)^{2\tau},\\ e'_i(z) ={\cal A}\left(\prod_{j=1}^{n-i}(1-z_j/x)\prod_{j=n-i+1}^n(1-z_j)\right). \end{gather*} This confirms that Theorem~\ref{thm:main} in the $q\to 1$ limit is consistent with the result presented in Proposition~\ref{prop:FI10}. \end{Remark} The paper is organized as follows. After defining some basic terminology in Section~\ref{section02}, we characterize in Section~\ref{section03} Matsuo's polynomials by their vanishing property (Proposition~\ref{prop:matsuo2}), and define a family of symmetric polynomials of higher degree, which includes Matsuo's polynomials. We call such polynomials the {\it interpolation polynomials}, which are inspired from Aomoto's method~\cite[Section~8]{AAR}, \cite{Ao87}, which is a technique to obtain difference equations for the Selberg integrals (see also~\cite{IF2017} for a $q$-analogue of Aomoto's method). 
We state several vanishing properties for the interpolation polynomials, which are used in subsequent sections. In Section~\ref{section04} we present three-term relations (Lemma~\ref{lem:3term1st}) among the interpolation polynomials. These are key equations for obtaining the coefficient matrix of the $q$-difference system with respect to the shift $\alpha\to \alpha+1$. By repeated use of these three-term relations we obtain a proof of Theorem~\ref{thm:main}. Section~\ref{section05} is devoted to the proof of Lemma~\ref{lem:3term1st}. In Section \ref{section06} we explain the Gauss decomposition of the transition matrix~$R$. For this purpose, we introduce another set of symmetric polynomials called the {\it Lagrange interpolation polynomials of type A} in~\cite{IN2018}, which are different from Matsuo's polynomials. Both upper and lower triangular matrices in the decomposition can be understood as transition matrices between Matsuo's polynomials and the other polynomials. In the Appendix we explain the proof of the latter part of Theorem~\ref{thm:main}.

Finally we would like to make some remarks about the original motivation for the current paper. Although the author already knew the results of this paper before publishing~\cite{FI10}, many years have passed since then. The author recently learned of an interesting application of the $q$-difference systems of this paper in collaboration with Yasuhiko Yamada. They intend to publish the details in a forthcoming paper.

\section{Notation}\label{section02}

Let $S_n$ be the symmetric group on $\{1,2,\ldots, n\}$. For a function $f\colon (\mathbb{C}^*)^n\to \mathbb{C}$ we define an action of the symmetric group $S_n$ on $f$ by
\begin{equation*}
(\sigma f)(z):=f\big(\sigma^{-1}(z)\big)=f(z_{\sigma(1)},z_{\sigma(2)},\ldots,z_{\sigma(n)})
\qquad\text{for}\quad \sigma\in S_n.
\end{equation*}
We say that a function $f(z)$ on $({\mathbb{C}^*})^n$ is {\it symmetric} or {\it skew-symmetric} if $\sigma f(z)=f(z)$ or $\sigma f(z)=(\operatorname{sgn}\sigma ) f(z)$ for all $\sigma \in S_n$, respectively. We denote by ${\cal A} f(z)$ the alternating sum over~$S_n$ defined by
\begin{equation}\label{eq:00Af}
({\cal A} f)(z):=\sum_{\sigma\in S_n}(\operatorname{sgn} \sigma) (\sigma f)(z),
\end{equation}
which is skew-symmetric. Let $P$ be the set of partitions defined by
\begin{equation*}
P:=\big\{(\lambda_1,\lambda_2,\ldots,\lambda_n)\in {\mathbb Z}^n \,|\,\lambda_1\ge \lambda_2\ge \cdots \ge \lambda_n \ge 0 \big\}.
\end{equation*}
We define the {\em lexicographic ordering} $<$ on $P$ as follows. For $\lambda, \mu\in P$, we write $\lambda < \mu$ if there exists a positive integer $k$ such that $\lambda_i=\mu_i$ for all $i<k$ and $\lambda_k<\mu_k$. For $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_n) \in {\mathbb Z}^n$, we denote by $z^\lambda$ the monomial $z_1^{\lambda_1}z_2^{\lambda_2}\cdots z_n^{\lambda_n}$. For $\lambda\in P$ the {\it monomial symmetric polynomials} $m_\lambda (z)$ are defined by
\begin{equation*}
m_\lambda (z):=\sum_{\mu\in S_n\lambda}z^{\mu},
\end{equation*}
where $S_n\lambda:=\{\sigma\lambda\,|\, \sigma\in S_n\}$ is the $S_n$-orbit of $\lambda$. For $\lambda\in P$, we denote by $m_i$ the multiplicity of $i$ in $\lambda$, i.e., $m_i=\#\{j\,|\,\lambda_j=i\}$; see \cite{Mac} for instance.
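For concreteness, here is a very small Python/SymPy sketch (written for this exposition; nothing in it is specific to this paper beyond the definition just given) of the monomial symmetric polynomial $m_\lambda(z)$ as the sum of $z^\mu$ over the $S_n$-orbit of $\lambda$, for three variables.
\begin{verbatim}
# Small sketch (not from the paper): monomial symmetric polynomials m_lambda
# as the sum of z^mu over the S_n-orbit of lambda, here with n = 3 variables.
import itertools
import sympy as sp

n = 3
z = sp.symbols('z1:%d' % (n + 1))           # (z1, z2, z3)

def m(lam):
    orbit = set(itertools.permutations(lam))        # distinct rearrangements of lambda
    return sp.Add(*[sp.Mul(*[zi ** e for zi, e in zip(z, mu)]) for mu in orbit])

print(m((1, 1, 1)))    # the single monomial z1*z2*z3
print(m((2, 1, 0)))    # the six monomials z1**2*z2, z1**2*z3, ..., z2*z3**2
\end{verbatim}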
It is convenient to use the notation $\lambda=\big(1^{m_1}2^{m_2}\cdots r^{m_r}\cdots\big)$: for example, $\big(1^32^2\big)=(2,2,1,1,1,0)$ and $z^{(1^32^2)}=z_1^2z_2^2z_3z_4z_5$.

\section{Interpolation polynomials}\label{section03}

In this section we define a family of symmetric functions which extends Matsuo's polynomials. For $a, b\in \mathbb{C}^*$ and $z=(z_1,\ldots,z_n)\in (\mathbb{C}^*)^n$ let $E_{k,i}(a,b;z)$ $(k,i=0,1,\ldots,n)$ be functions specified by
\begin{equation}\label{eq:Eki1}
E_{k,i}(a,b;z):=z_1z_2\cdots z_k\Delta(t;z)
\prod_{j=1}^{n-i}(1-bz_j)\prod_{j=n-i+1}^n\big(1-a^{-1}z_j\big),
\end{equation}
where
\begin{equation*}
\Delta(t;z):=\prod_{1\le i<j\le n}\big(z_i-t^{-1}z_j\big)=t^{-{n\choose 2}}\prod_{1\le i<j\le n}(tz_i-z_j),
\end{equation*}
and let $\tilde{E}_{k,i}(a,b;z)$ $(k,i=0,1,\ldots,n)$ be the symmetric functions of $z\in (\mathbb{C}^*)^n$ specified by
\begin{equation}
\label{eq:Eki2}
\tilde{E}_{k,i}(a,b;z):=\frac{{\cal A}E_{k,i}(a,b;z)}{\Delta(z)},
\end{equation}
which, in particular, satisfy
\begin{equation*}
\tilde{E}_{0,i}(a,b;z)=e_i(a,b;z)\qquad\text{and}\qquad
\tilde{E}_{n,i}(a,b;z)=z_1z_2\cdots z_n e_i(a,b;z),
\end{equation*}
as special cases. We sometimes abbreviate $\tilde{E}_{k,i}(a,b;z)$ to $\tilde{E}_{k,i}(z)$. The leading term of the symmetric polynomial $\tilde{E}_{k,i}(z)$ is $m_{(1^{n-k}2^k)}(z)$, i.e.,
\begin{equation*}
\tilde{E}_{k,i}(z)=C_{ki} m_{(1^{n-k}2^k)}(z)+\text{lower order terms},
\end{equation*}
where the coefficient $C_{ki}$ of the monomial $m_{(1^{n-k}2^k)}(z)$ is expressed as
\begin{equation*}
C_{ki}=(-1)^n\frac{\big(t^{-1};t^{-1}\big)_k\big(t^{-1};t^{-1}\big)_{n-k}}{a^{i}b^{-(n-i)}\big(1-t^{-1}\big)^n}.
\end{equation*}
For arbitrary $x,y\in {\mathbb C}^*$, we set
\begin{equation}
\zeta_j(x,y):=\big(\underbrace{yt^{-(j-1)},yt^{-(j-2)},\ldots,yt^{-1},y\phantom{\Big|}\!\!}_j,
\underbrace{x,xt,xt^2,\ldots,xt^{n-j-1}\phantom{\Big|}\!\!}_{n-j}\big)\in ({\mathbb C}^*)^n.\label{eq:zeta(xy)}
\end{equation}
The following gives another characterization of Matsuo's polynomials $e_i(a,b;z)=\tilde{E}_{0,i}(z)$.

\begin{Proposition}\label{prop:matsuo2}
The leading term of the function $\tilde{E}_{0,i}(z)$ is $m_{(1^n)}(z)$ up to a multiplicative constant. The functions $\tilde{E}_{0,i}(z)$, $i=0,1,\ldots,n$, satisfy
\begin{equation}\label{eq:matsuo2}
\tilde{E}_{0,i}\big(\zeta_j\big(a,b^{-1}\big)\big)=c_i\delta_{ij},
\end{equation}
where the constant $c_i$ is given by
\begin{subequations}
\begin{align}
c_i&=\big(abt^{i};t\big)_{n-i}\big(a^{-1}b^{-1}t^{-(i-1)};t\big)_{i}
\frac{(t;t)_i(t;t)_{n-i}}
{t^{n\choose 2}(1-t)^n} \label{eq:c-i-a}\\
&= (ab;t)_{n-i}\big(a^{-1}b^{-1}t^{-(n-1)};t\big)_{i}
\frac{\big(t^{-1};t^{-1}\big)_i\big(t^{-1};t^{-1}\big)_{n-i}}
{\big(1-t^{-1}\big)^n}.\label{eq:c-i-b}
\end{align}
\end{subequations}
\end{Proposition}

\begin{Remark}
The set of symmetric functions $\big\{\tilde{E}_{0,i}(z)\,|\,i=0,1,\ldots,n\big\}$ forms a basis of the linear space spanned by $\{m_\lambda(z)\,|\,\lambda\le (1^n)\}$. Conversely, such a basis satisfying the condition~(\ref{eq:matsuo2}) is uniquely determined. Thus we can take Proposition~\ref{prop:matsuo2} as a definition of Matsuo's polynomials, instead of~(\ref{eq:matsuo}).
\end{Remark}

\begin{proof}
By definition $\tilde{E}_{0,i}(\zeta_j(a,b^{-1}))=0$ if $i\ne j$.
$\tilde{E}_{0,i}(\zeta_i(a,b^{-1}))$ evaluates as \begin{equation*} \tilde{E}_{0,i}(\zeta_i(a,b^{-1}))= \frac{E_{0,i}\big(at^{n-i-1},\ldots,at^2,at,a,b^{-1},b^{-1}t^{-1},\ldots,b^{-1}t^{-(i-1)}\big)} {\Delta\big(at^{n-i-1},\ldots,at^2,at,a,b^{-1},b^{-1}t^{-1},\ldots,b^{-1}t^{-(i-1)}\big)}, \end{equation*} which coincides with (\ref{eq:c-i-a}) and (\ref{eq:c-i-b}). \end{proof} \begin{Lemma}[triangularity]\langlebel{lem:triangularity} Suppose \begin{equation*} \xi_j:=\big(\underbrace{b^{-1}t^{-(j-1)},b^{-1}t^{-(j-2)},\ldots,b^{-1}t^{-1},b^{-1}\phantom{\Big|}\!\!}_j,z_1,z_2,\ldots,z_{n-j}\big)\in ({\mathbb C}^*)^n. \end{equation*} Then \begin{equation}\langlebel{eq:tri-xi1} \tilde{E}_{k,i}(\xi_j)=0\qquad\text{if}\quad 0\le i<j\le n. \end{equation} Moreover, $\tilde{E}_{0,i}(\xi_i)$ evaluates as \begin{align} \tilde{E}_{0,i}(\xi_i) &= \frac{t^{-i(n-i)}\big(t^{-1};t^{-1}\big)_i\big(t^{-1};t^{-1}\big)_{n-i}} {\big(1-t^{-1}\big)^n}\big(a^{-1}b^{-1}t^{-(i-1)};t\big)_{i} \prod_{j=1}^{n-i}\big(1-z_jbt^i\big)\nonumber\\ &=\frac{(t;t)_i(t;t)_{n-i}} {t^{n\choose 2}(1-t)^n} \big(a^{-1}b^{-1}t^{-(i-1)};t\big)_{i} \prod_{j=1}^{n-i}\big(1-z_jbt^i\big).\langlebel{eq:tri-xi2} \end{align} On the other hand, if $\eta_j:=\big(z_1,z_2,\ldots,z_j,\underbrace{a,at,at^2,\ldots,at^{n-j-1}\phantom{\Big|}\!\!}_{n-j}\big)\in ({\mathbb C}^*)^n,$ then \begin{equation}\langlebel{eq:tri-eta1} \tilde{E}_{k,i}(\eta_j)=0\qquad\text{if}\quad 0\le j<i\le n. \end{equation} Moreover, $\tilde{E}_{0,i}(\eta_i)$ evaluates as \begin{equation} \langlebel{eq:tri-eta2} \tilde{E}_{0,i}(\eta_i)=\frac{\big(t^{-1};t^{-1}\big)_i\big(t^{-1};t^{-1}\big)_{n-i}}{\big(1-t^{-1}\big)^n}(ab;t)_{n-i} \prod_{j=1}^i \big(1-z_j/at^{n-i}\big). \end{equation} \end{Lemma} \begin{proof}By the definition (\ref{eq:Eki1}) of $\tilde{E}_{k,i}(z)$ it immediately follows that $\tilde{E}_{k,i}(\xi_j)=0$ if $i<j$, and $\tilde{E}_{k,i}(\eta_j)=0$ if $j<i$. If we put $z_j=b^{-1}t^{-i}$ $(j=1,2,\ldots,n-i)$ in the polynomial $\tilde{E}_{k,i}(\xi_i)$, then we have $\tilde{E}_{k,i}(\xi_i)=0$ because $\tilde{E}_{k,i}(\xi_i)$ satisfies the condition of~(\ref{eq:tri-xi1}). This implies that $\tilde{E}_{k,i}(\xi_i)$ is divisible by $\prod_{j=1}^{n-i}(1-z_jbt^i)$ up to a constant. Thus we write $\tilde{E}_{k,i}(\xi_i)=c\prod_{j=1}^{n-i}(1-z_jbt^i)$, where $c$ is some constant independent of $z_1,\ldots,z_{n-i}$. Next we determine the explicit form of $c$. If we put $z_1=a, z_2=a t, \ldots, z_{n-i}=a t^{n-i-1}$ in $\tilde{E}_{k,i}(\xi_i)$, then $\tilde{E}_{k,i}(\xi_i)=\tilde{E}_{0,i}\big(\zeta_i\big(a,b^{-1}\big)\big)$. From~(\ref{eq:matsuo2}), we have $c_i=c(abt^i;t)_{n-i}$, where $c_i$ is given by~(\ref{eq:c-i-b}). Therefore the constant~$c$ is evaluated as $c=c_i/(abt^i;t)_{n-i}$, i.e., we obtain the expression~(\ref{eq:tri-xi2}) for $\tilde{E}_{k,i}(\xi_i)$. The evaluation (\ref{eq:tri-eta2}) is carried out in the same way as above. \end{proof} \begin{Lemma}\langlebel{lem:tri-zeta(xb)} For $0\le j\le n$, let $\zeta_j\big(x,b^{-1}\big)\in ({\mathbb C}^*)^n$ be the point specified by {\rm (\ref{eq:zeta(xy)})} with $y=b^{-1}$. Then \begin{equation}\langlebel{eq:tri-zeta(xb)2} \tilde{E}_{0,i}(\zeta_{j}(x,b^{-1}))= \frac{\big(xbt^{i};t\big)_{n-i}\big(xa^{-1};t\big)_{i-j}\big(a^{-1}b^{-1}t^{-(j-1)};t\big)_{j}(t;t)_n} {t^{n\choose 2}(1-t)^n} \frac{\qbin{i}{j}{t}}{\qbin{n}{j}{t}}. 
\end{equation} Moreover, if $i+k\le n$ $($i.e., $k\le n-i\le n-j)$, then \begin{equation}\langlebel{eq:tri-zeta(xb)3} \tilde{E}_{k,i}\big(\zeta_{j}\big(x,b^{-1}\big)\big) =x^kt^{(n-j)k-{k+1\choose 2}}\tilde{E}_{0,i}\big(\zeta_{j}\big(x,b^{-1}\big)\big). \end{equation} \end{Lemma} \begin{proof} If $i\le j$, $\tilde{E}_{k,i}\big(\zeta_{j}\big(x,b^{-1}\big)\big)=0$ is a special case of~(\ref{eq:tri-xi1}). Suppose $j\le i$. If we put $x=b^{-1}t^{-i-k} $ $(k=0,1,\ldots,n-i-1)$, then the polynomial $\tilde{E}_{0,i}\big(\zeta_j\big(x,b^{-1}\big)\big)$ satisfies the condition (\ref{eq:tri-xi1}) of Lemma~\ref{lem:triangularity}, so that it is equal to zero, which implies $\tilde{E}_{0,i}\big(\zeta_j\big(x,b^{-1}\big)\big)$ is divisible by $(xbt^{i};t)_{n-i}$. If we put $x=at^{-k} $ $(k=0,1,\ldots,i-j-1)$, then the polynomial $\tilde{E}_{0,i}\big(\zeta_j\big(x,b^{-1}\big)\big)$ satisfies the condition~(\ref{eq:tri-eta1}) of Lemma~\ref{lem:triangularity}, so that it is also equal to zero, which implies $\tilde{E}_{0,i}\big(\zeta_j\big(x,b^{-1}\big)\big)$ is divisible by $\big(xa^{-1};t\big)_{i-j}$. Therefore we have \begin{equation}\langlebel{eq:E0i-zeta(xb)} \tilde{E}_{0,i}\big(\zeta_j\big(x,b^{-1}\big)\big)=c\big(xbt^{i};t\big)_{n-i}\big(xa^{-1};t\big)_{i-j}, \end{equation} where $c$ is some constant independent of $x$. Next we determine the explicit form of~$c$. Put $x=b^{-1}t^{-(i-1)}$ in (\ref{eq:E0i-zeta(xb)}). Then, using~(\ref{eq:tri-xi2}), the left-hand side of~(\ref{eq:E0i-zeta(xb)}) is written as \begin{align} \tilde{E}_{0,i}\big(\zeta_j\big(x,b^{-1}\big)\big)\Big|_{x=b^{-1}t^{-(i-1)}} &=\tilde{E}_{0,i}\big(\zeta_i\big(x,b^{-1}\big)\big)\Big|_{x=b^{-1}t^{-(j-1)}}\nonumber\\ &=\frac{(t;t)_i(t;t)_{n-i}(t;t)_{n-j}} {t^{n\choose 2}(1-t)^n(t;t)_{i-j}} \big(a^{-1}b^{-1}t^{-(i-1)};t\big)_{i},\langlebel{eq:E0i-zeta(xb)l} \end{align} while the right-hand side of (\ref{eq:E0i-zeta(xb)}) is \begin{equation}\langlebel{eq:E0i-zeta(xb)r} c\big(xbt^{i};t\big)_{n-i}\big(xa^{-1};t\big)_{i-j}\Big|_{x=b^{-1}t^{-(i-1)}} =c(t;t)_{n-i}\big(a^{-1}b^{-1}t^{-(i-1)};t\big)_{i-j}. \end{equation} Comparing with (\ref{eq:E0i-zeta(xb)l}) and (\ref{eq:E0i-zeta(xb)r}), we have \begin{equation*} c=\frac{(t;t)_i(t;t)_{n-j}} {t^{n\choose 2}(1-t)^n(t;t)_{i-j}} \big(a^{-1}b^{-1}t^{-(j-1)};t\big)_{j}. \end{equation*} Therefore we obtain (\ref{eq:tri-zeta(xb)2}). Moreover, if $i+k\le n$, by definition we have \begin{equation*} \tilde{E}_{k,i}\big(\zeta_{j}\big(x,b^{-1}\big)\big) = \big(xt^{(n-j-1)}\big)\big(xt^{(n-j-2)}\big)\cdots\big(xt^{(n-j-k)}\big) \tilde{E}_{0,i}\big(\zeta_{j}\big(x,b^{-1}\big)\big), \end{equation*} which coincides with (\ref{eq:tri-zeta(xb)3}). \end{proof} As a counterpart of Lemma \ref{lem:tri-zeta(xb)}, we have the following. \begin{Lemma}\langlebel{lem:tri-zeta(ay)} For $0\le j\le n$, let $\zeta_j(a,y)\in ({\mathbb C}^*)^n$ be the point specified by \eqref{eq:zeta(xy)} with $x=a$. Then \begin{equation}\langlebel{eq:tri-zeta(ay)2} \tilde{E}_{0,i}(\zeta_j(a,y))= \frac{\big(ybt^{-(j-i-1)};t\big)_{j-i}\big(y/at^{n-1};t\big)_i(ab;t)_{n-j}\big(t^{-1};t^{-1}\big)_{n}}{\big(1-t^{-1}\big)^n} \frac{\qbin{n-i}{n-j}{t^{-1}}} {\qbin{n}{j}{t^{-1}}}. \end{equation} Moreover, if $n\le i+k$ $($i.e., $n-j\le n-i\le k)$, then \begin{equation*} \tilde{E}_{k,i}(\zeta_j(a,y)) =y^{k+j-n}a^{n-j}t^{{n-j\choose 2 }-{k+j-n\choose 2 }} \tilde{E}_{0,i}(\zeta_j(a,y)). \end{equation*} \end{Lemma} \begin{proof} This lemma can be proved in the same way as Lemma~\ref{lem:tri-zeta(xb)} using Lemma~\ref{lem:triangularity}. 
\end{proof} \section{Three-term relations}\langlebel{section04} In this section we fix $\tilde{E}_{k,i}(z)=\tilde{E}_{k,i}(a_1, b_2;z)$. In particular we have $e_i(a_1, b_2;z)=\tilde{E}_{0,i}(a_1, b_2;z)$. The following lemma is a technical key for computing the coefficient matrix $A$ of (\ref{eq:main}) in Theorem \ref{thm:main}. We abbreviate $\big\langle \tilde{E}_{k,i},x\big\rangle$ to $\big\langle \tilde{E}_{k,i}\big\rangle$ throughout this section. \begin{Lemma}[three-term relations] \langlebel{lem:3term1st} Suppose $i+k\le n$. Then, \begin{gather}\langlebel{eq:3term01} \frac{1-q^\alpha a_1 a_2 b_1 b_2 t^{2n-k-1}}{a_1t^{n-i-k}} \big\langle \tilde{E}_{k,i}\big\rangle=\big(1-q^\alpha a_2 b_2 t^{n+i-k}\big)\big\langle \tilde{E}_{k-1,i}\big\rangle-\big(1-a_2 b_2 t^{i}\big)\big\langle \tilde{E}_{k-1,i+1}\big\rangle.\!\!\!\! \end{gather} On the other hand, if $i+k\ge n$, then, \begin{gather} t^{n-k}\big(1-q^\alpha t^{n-k}\big)\big\langle \tilde{E}_{k-1,i+1}\big\rangle\nonumber\\ \qquad{}= a_1^{-1} q^\alpha t^{n+i-k}\big(1- a_1 b_1 t^{n-i-1}\big) \big\langle \tilde{E}_{k,i}\big\rangle+ a_2^{-1}\big(1-q^\alpha a_2 b_2 t^{n+i-k}\big) \big\langle \tilde{E}_{k,i+1}\big\rangle.\langlebel{eq:3term02} \end{gather} \end{Lemma} The proof of this lemma will be given in Section~\ref{section05}. The rest of this section is devoted to computing the Gauss matrix decomposition of $A$ in Theorem~\ref{thm:main} using Lemma~\ref{lem:3term1st}. By repeated use of Lemma \ref{lem:3term1st}, we have the following. \begin{Corollary}\langlebel{cor:<Eki>} Suppose $i+k\le n$. For $0\le l\le k$, $\big\langle\tilde{E}_{k,i}\big\rangle$ is expressed as \begin{gather}\langlebel{eq:<Eki>01} \big\langle \tilde{E}_{k,i}\big\rangle=\sum_{j=0}^l L_{k-l,i+j}^{k,i}\big\langle \tilde{E}_{k-l,i+j}\big\rangle, \end{gather} where the coefficients $L_{k-l,i+j}^{k,i}$ is expressed as \begin{equation*} L_{k-l,i+j}^{k,i} = \qbin{l}{j}{t} \frac{(-1)^j\big(a_1t^{n-i-k}\big)^lt^{{l-j\choose 2}} \big(a_2b_2 t^i;t\big)_{j}\big(q^\alpha a_2b_2 t^{n+i+j-k};t\big)_{l-j}} {\big(q^\alpha a_1a_2b_1b_2 t^{2n-k-1};t\big)_{l}}. \end{equation*} On the other hand, if $i+k\ge n$, then, \begin{equation}\langlebel{eq:<Eki>02} \big\langle \tilde{E}_{k,i}\big\rangle=\sum_{j=0}^lU^{k,i}_{k-l+j,i-j}\big\langle \tilde{E}_{k-l+j,i-j}\big\rangle, \end{equation} where the coefficient $U^{k,i}_{k-l+j,i-j}$ is expressed as \begin{equation*} U^{k,i}_{k-l+j,i-j} =\qbin{l}{j}{t} \frac{\big({-}q^\alpha a_1^{-1}t^{i-l}\big)^j\big(a_2t^{n-k}\big)^lt^{{l\choose 2}}\big(a_1b_1t^{n-i};t\big)_j\big(q^\alpha t^{n-k};t\big)_{l-j}} {\big(q^\alpha a_2b_2 t^{n+i-k-j-1};t\big)_{l-j}\big(q^\alpha a_2b_2 t^{n+i-k-2j+l};t\big)_{j}}. \end{equation*} \end{Corollary} \begin{proof}\eqref{eq:<Eki>01} and \eqref{eq:<Eki>02} follow by double induction on $l$ and $j$ using \eqref{eq:3term01} and \eqref{eq:3term02}, respecti\-ve\-ly. \end{proof} In particular, we immediately have the following as a special case of Corollary~\ref{cor:<Eki>}. 
\begin{Corollary} For $0\le j\le n$, $\langle E_{n-j,j}\rangle$ is expressed as \begin{equation}\langlebel{eq:<En-j,j>} \big\langle \tilde{E}_{n-j,j}\big\rangle=\sum_{i=j}^n\tilde{l}_{ij}\big\langle \tilde{E}_{0,i}\big\rangle, \end{equation} where the coefficients $\tilde{l}_{ij}$ is expressed as \begin{equation}\langlebel{eq:<En-j,j>c} \tilde{l}_{ij}=L_{0,i}^{n-j,j} = \qbin{n-j}{n-i}{t} \frac{(-1)^{i-j}a_1^{n-j}t^{{n-i\choose 2}} \big(a_2b_2 t^j;t\big)_{i-j}\big(q^\alpha a_2b_2 t^{i+j};t\big)_{n-i}} {\big(q^\alpha a_1a_2b_1b_2 t^{n+j-1};t\big)_{n-j}}, \end{equation} while, for $0\le j\le n$, $\langle E_{n,j}\rangle$ is expressed as \begin{equation}\langlebel{eq:<En,j>} \big\langle \tilde{E}_{n,j}\big\rangle=\sum_{i=0}^j\tilde{u}_{ij}\big\langle \tilde{E}_{n-i,i}\big\rangle, \end{equation} where the coefficients $\tilde{u}_{ij}$ is expressed as \begin{equation}\langlebel{eq:<En,j>c} \tilde{u}_{ij}=U^{n,j}_{n-i,i}=\qbin{j}{i}{t} \frac{\big({-}q^\alpha a_1^{-1}\big)^{j-i}a_2^jt^{{j\choose 2}} \big(q^\alpha;t\big)_{i}\big(a_1b_1t^{n-j};t\big)_{j-i}} {\big(q^\alpha a_2b_2 t^{i-1};t\big)_{i}\big(q^\alpha a_2b_2 t^{2i};t\big)_{j-i}}. \end{equation} \end{Corollary} We now give the explicit expression for $A$ in terms of Gauss decomposition. \begin{proof}[Proof of Theorem \ref{thm:main}] From (\ref{eq:<En-j,j>}), we have \begin{equation*} \big(\big\langle \tilde{E}_{n,0}\big\rangle,\big\langle \tilde{E}_{n-1,1}\big\rangle,\ldots,\big\langle \tilde{E}_{1,n-1}\big\rangle,\big\langle \tilde{E}_{0,n}\big\rangle\big) =\big(\big\langle \tilde{E}_{0,0}\big\rangle,\big\langle \tilde{E}_{0,1}\big\rangle,\ldots,\big\langle \tilde{E}_{0,n-1}\big\rangle,\big\langle \tilde{E}_{0,n}\big\rangle\big)\tilde{L}, \end{equation*} where the matrix $\tilde{L}=\big(\tilde{l}_{ij}\big)_{0\le i,j\le n}$ is defined by \eqref{eq:<En-j,j>c}. Moreover, from \eqref{eq:<En,j>} we have \begin{subequations} \begin{align} \big(\big\langle \tilde{E}_{n,0}\big\rangle,\big\langle \tilde{E}_{n,1}\big\rangle,\ldots,\big\langle \tilde{E}_{n,n-1}\big\rangle,\big\langle \tilde{E}_{n,n}\big\rangle\big) &= \big(\big\langle \tilde{E}_{n,0}\big\rangle,\big\langle \tilde{E}_{n-1,1}\big\rangle,\ldots,\big\langle \tilde{E}_{1,n-1}\big\rangle,\big\langle \tilde{E}_{0,n}\big\rangle\big)\tilde{U} \langlebel{eq:-U}\\ &= \big(\big\langle \tilde{E}_{0,0}\big\rangle,\big\langle \tilde{E}_{0,1}\big\rangle,\ldots,\big\langle \tilde{E}_{0,n-1}\big\rangle,\big\langle \tilde{E}_{0,n}\big\rangle\big)\tilde{L}\tilde{U},\langlebel{eq:-L-U} \end{align} \end{subequations} where the matrix $\tilde{U}=\big(\tilde{u}_{ij}\big)_{0\le i,j\le n}$ is defined by (\ref{eq:<En,j>c}). Since $T_{\alpha}\Phi(z)=z_1z_2\cdots z_n\Phi(z)$ and $z_1z_2\cdots z_n \tilde{E}_{0,i}(z)=\tilde{E}_{n,i}(z)$, we have $T_{\alpha}\big\langle \tilde{E}_{0,i}\big\rangle=\big\langle \tilde{E}_{n,i}\big\rangle$, i.e., \begin{equation} \langlebel{eq:T-a()} T_{\alpha}\big(\big\langle \tilde{E}_{0,0}\big\rangle,\big\langle \tilde{E}_{0,1}\big\rangle,\ldots,\big\langle \tilde{E}_{0,n-1}\big\rangle,\big\langle \tilde{E}_{0,n}\big\rangle\big) =\big(\big\langle \tilde{E}_{n,0}\big\rangle,\big\langle \tilde{E}_{n,1}\big\rangle,\ldots,\big\langle \tilde{E}_{n,n-1}\big\rangle,\big\langle \tilde{E}_{n,n}\big\rangle\big). 
\end{equation} From (\ref{eq:-L-U}) and (\ref{eq:T-a()}), we obtain the difference system \begin{equation*} T_{\alpha}\big(\big\langle \tilde{E}_{0,0}\big\rangle,\big\langle \tilde{E}_{0,1}\big\rangle,\ldots,\big\langle \tilde{E}_{0,n-1}\big\rangle,\big\langle \tilde{E}_{0,n}\big\rangle\big) =\big(\big\langle \tilde{E}_{0,0}\big\rangle,\big\langle \tilde{E}_{0,1}\big\rangle,\ldots,\big\langle \tilde{E}_{0,n-1}\big\rangle,\big\langle \tilde{E}_{0,n}\big\rangle\big)\tilde{L}\tilde{U}. \end{equation*} Comparing this with (\ref{eq:main}), we therefore obtain $A=\tilde{L}\tilde{U}=L_{\mbox{\tiny $A$}} D_{\mbox{\tiny $A$}} U_{\mbox{\tiny $A$}}$, i.e., \begin{equation*} l^{\mbox{\tiny $A$}}_{ij}=\frac{\tilde{l}_{ij}}{\tilde{l}_{jj}}, \qquad d^{\mbox{\tiny $A$}}_{j}=\tilde{l}_{jj}\tilde{u}_{jj}, \qquad u^{\mbox{\tiny $A$}}_{ij}=\frac{\tilde{u}_{ij}}{\tilde{u}_{ii}}. \end{equation*} Corollary~\ref{cor:<Eki>} implies that $l^{\mbox{\tiny $A$}}_{ij}$, $d^{\mbox{\tiny $A$}}_{j}$ and $u^{\mbox{\tiny $A$}}_{ij}$ coincide with~\eqref{eq:lAij}, \eqref{eq:dAij} and~\eqref{eq:uAij}, respectively. The Gauss decomposition of $A$ in the opposite direction, i.e., $A=U'{}_{\!\!\!{\mbox{\tiny $A$}}}D'{}_{\!\!{\mbox{\tiny $A$}}}L'{}_{\!\!{\mbox{\tiny $A$}}}$ can also be given in the same way as above. This will be explained in the Appendix. \end{proof} Finally we state the explicit forms for $U_{\mbox{\tiny $A$}}^{-1}$. \begin{Proposition}\langlebel{prop:inverseU_A} The inverse matrix $U_{\mbox{\tiny $A$}}^{-1}=\big(u^{{\mbox{\tiny $A$}} *}_{ij}\big)_{0\le i,j\le n}$ is upper triangular, and is written as \begin{equation}\langlebel{eq:inverseU_A} u^{{\mbox{\tiny $A$}} *}_{ij}= \big(q^\alpha a_1^{-1}a_2t^{j-1}\big)^{j-i} \qbin{j}{i}{t}\frac{\big(a_1b_1 t^{n-j};t\big)_{j-i}} {\big(q^\alpha a_2b_2 t^{j+i-1};t\big)_{j-i}}. \end{equation} \end{Proposition} \begin{proof}Since $A=\tilde{L}\tilde{U}=L_{\mbox{\tiny $A$}} D_{\mbox{\tiny $A$}} U_{\mbox{\tiny $A$}}$, we have $\tilde{U}=D_{\tilde{U}}U_{\mbox{\tiny $A$}}$, where $D_{\tilde{U}}$ is the diagonal matrix defined by the diagonal elements of $\tilde{U}=\big(\tilde{u}_{ij}\big)_{0\le i,j\le n}$, i.e., $ D_{\tilde{U}}=\big(\tilde{u}_{ii}\delta_{ij}\big)_{0\le i,j\le n} $, where $\tilde{u}_{ij}$ is given by \eqref{eq:<En,j>c}. We first compute the explicit expression for $\tilde{U}^{-1}$. From \eqref{eq:-U}, $\tilde{U}^{-1}$ is regarded as the transition matrix \begin{gather}\langlebel{E=EU^{-1}} (\langle E_{n,0}\rangle,\langle E_{n-1,1}\rangle,\ldots,\langle E_{1,n-1}\rangle,\langle E_{0,n}\rangle)= (\langle E_{n,0}\rangle,\langle E_{n,1}\rangle,\ldots,\langle E_{n,n-1}\rangle,\langle E_{n,n}\rangle)\tilde{U}^{-1}, \end{gather} namely, if we write $\tilde{U}^{-1}=\big(\tilde{v}_{ij}\big)_{0\le i,j\le n}$, then \eqref{E=EU^{-1}} is equivalent to $\big\langle \tilde{E}_{n-j,j}\big\rangle=\sum_{i=0}^j\tilde{v}_{ij}\big\langle \tilde{E}_{n,i}\big\rangle$. 
Similar to Corollary \ref{cor:<Eki>}, by repeated use of the three-term relation \eqref{eq:<Eki>02} inductively, $\langle\tilde{E}_{k,i}\rangle$ is generally expressed as \begin{equation*} \big\langle \tilde{E}_{k,i}\big\rangle=\sum_{j=0}^l V_{k+l,i-j}^{k,i}\big\langle \tilde{E}_{k+l,i-j}\big\rangle, \end{equation*} where the coefficients $V_{k+l,i-j}^{k,i}$ is given by \begin{equation*} V_{k+l,i-j}^{k,i} = \qbin{l}{j}{t} \frac{\big(q^\alpha a_1^{-1}t^{i-1}\big)^j t^{{l-j\choose 2}-{j\choose 2}} \big(a_1b_1 t^{n-i};t\big)_{j}\big(q^\alpha a_2b_2 t^{n+i-k-l-1};t\big)_{l-j}} {\big(a_2t^{n-k-1}\big)^{l-j}\big(q^\alpha t^{n-k-l};t\big)_{l}}. \end{equation*} In particular, the entries $\tilde{v}_{ij}$ of $\tilde{U}^{-1}$ are explicitly expressed as \begin{equation}\langlebel{eq:tilde{v}_{ij}} \tilde{v}_{ij}=V_{n,i}^{n-j,j}= \qbin{j}{i}{t} \frac{\big(q^\alpha a_1^{-1}t^{j-1}\big)^{j-i} t^{{i\choose 2}-{j-i\choose 2}} \big(a_1b_1 t^{n-j};t\big)_{j-i}\big(q^\alpha a_2b_2 t^{j-1};t\big)_{i}} {\big(a_2t^{j-1}\big)^{i}\big(q^\alpha;t\big)_{j}}. \end{equation} Next we compute $U_{\mbox{\tiny $A$}}^{-1}=\big(u^{{\mbox{\tiny $A$}} *}_{ij}\big)_{0\le i,j\le n}$. Since $U_{\mbox{\tiny $A$}}^{-1}$ is expressed as $U_{\mbox{\tiny $A$}}^{-1}=\tilde{U}^{-1}D_{\tilde{U}}$, using \eqref{eq:<En,j>c} and \eqref{eq:tilde{v}_{ij}}, we obtain \begin{gather*} u^{{\mbox{\tiny $A$}} *}_{ij}=\tilde{v}_{ij}\tilde{u}_{jj}\\ \hphantom{u^{{\mbox{\tiny $A$}} *}_{ij}}{} = \qbin{j}{i}{t} \frac{\big(q^\alpha a_1^{-1}t^{j-1}\big)^{j-i} t^{{i\choose 2}-{j-i\choose 2}} \big(a_1b_1 t^{n-j};t\big)_{j-i}\big(q^\alpha a_2b_2 t^{j-1};t\big)_{i}} {\big(a_2t^{j-1}\big)^{i}\big(q^\alpha;t\big)_{j}} \frac{ a_2^jt^{{j\choose 2}} \big(q^\alpha;t\big)_{j}} {\big(q^\alpha a_2b_2 t^{j-1};t\big)_{j}}, \end{gather*} which coincides with \eqref{eq:inverseU_A}. \end{proof} \section{Proof of Lemma \ref{lem:3term1st}}\langlebel{section05} The aim of this section is to give a proof of Lemma \ref{lem:3term1st}. Throughout this section we fix $\tilde{E}_{k,i}(z)=\tilde{E}_{k,i}(a_1, b_2;z)$. For $\Phi(z)=\Phi_{n,2}$, let $\nabla$ be the operator specified by \begin{equation*} (\nabla\varphi)(z):=\varphi(z)-\frac{T_{q,z_1}\Phi(z)}{\Phi(z)}T_{q,z_1}\varphi(z), \end{equation*} where $T_{q,z_1}$ is the $q$-shift operator with respect to $z_1\to qz_1$, i.e., $T_{q,z_1} f(z_1,z_2,\ldots,z_n)=f(qz_1,z_2,\ldots,z_n)$ for an arbitrary function $f(z_1,z_2,\ldots,z_n)$. Here the ratio $T_{q,z_1}\Phi(z)/\Phi(z)$ is expressed explicitly as \begin{equation*} \frac{T_{q,z_1}\Phi(z)}{\Phi(z)}= \frac{q^\alpha t^{2(n-1)}(1-b_1z_1) (1-b_2z_1)}{\big(1-qa_1^{-1}z_1\big)\big(1-qa_2^{-1}z_1\big)} \prod_{j=2}^n \frac{z_1-t^{-1}z_j}{qz_1-tz_j} =\frac{G_1(z)}{T_{z_1}F_1(z)}, \end{equation*} where \begin{gather*} F_1(z) =\big(1-a_1^{-1}z_1\big)\big(1-a_2^{-1}z_1\big)\prod_{j=2}^n(z_1-tz_j),\\ G_1(z) =q^\alpha t^{2(n-1)}(1-b_1z_1)(1-b_2z_1)\prod_{j=2}^n\big(z_1-t^{-1}z_j\big). \end{gather*} \begin{Lemma}\langlebel{lem:nabla=0}Suppose that $\int_0^{x\infty}\Phi(z)\varphi(z)\varpi_q$ converges for a meromorphic function $\varphi(z)$, then \begin{equation*} \int_0^{x\infty}\Phi(z)\nabla\varphi(z)\varpi_q=0. \end{equation*} Moreover, \begin{equation*} \int_0^{x\infty}\Phi(z){\cal A}\nabla\varphi(z)\varpi_q=0. \end{equation*} \end{Lemma} \begin{proof}See Lemma~5.3 in \cite{IN2018}. \end{proof} The rest of this section is devoted to the proof of Lemma~\ref{lem:3term1st}. We show a further lemma before proving Lemma~\ref{lem:3term1st}. 
For this purpose we abbreviate $\tilde{E}_{k,i}(a_1,b_2;z)$ to $\tilde{E}_{k,i}(z)$. When we need to specify the number of variables $z_1,\ldots,z_n$, we use the notation $\tilde{E}_{k,i}^{(n)}(z)=\tilde{E}_{k,i}(z)$ and $\Delta^{\!(n)}(z)=\Delta(z)$. We set $\varphi_{k,i}(z):=F_1(z)E_{k-1,i}^{(n-1)}(z_2,\ldots,z_n)$. Then \begin{equation*} \nabla\varphi_{k,i}(z)= (F_1(z)-G_1(z))E_{k-1,i}^{(n-1)}(z_2,\ldots,z_n). \end{equation*} Let $\tilde{\varphi}_{k,i}(z)$ be the skew-symmetrization of $\nabla\varphi_{k,i}(z)$, i.e., \begin{equation} \langlebel{eq:tildevarphi_k,i} \tilde{\varphi}_{k,i}(z):={\cal A}\nabla\varphi_{k,i}(z)=\sum_{j=1}^n(-1)^{j-1} (F_j(z)-G_j(z)) \tilde{E}_{k-1,i}^{(n-1)}(\widehat{z}_{j})\Delta^{\!(n-1)}(\widehat{z}_{j}), \end{equation} where $\widehat{z}_{j}:=(z_1,\ldots,z_{j-1},z_{j+1},\ldots,z_n)\in (\mathbb{C}^*)^{n-1}$ for $j=1,\ldots,n$, and \begin{gather*} F_i(z)=\big(1-a_1^{-1}z_i\big)\big(1-a_2^{-1}z_i\big) \prod_{\substack{1\le k\le n\\[1pt] k\ne i} }(z_i-tz_k),\\ G_i(z)=q^\alpha t^{2(n-1)}(1-b_1z_i) (1-b_2z_i) \prod_{\substack{1\le k\le n\\[1pt] k\ne i}}\big(z_i-t^{-1}z_k\big), \end{gather*} which satisfy the following vanishing property at the point $z=\zeta_j\big(x,b_2^{-1}\big)$ or $z=\zeta_j(a_1,y)$. \begin{Lemma}\langlebel{lem:vanishingFG}If $i\ne 0$ and $i\ne j$, then $F_{i+1}\big(\zeta_j\big(x,b_2^{-1}\big)\big)=0$. If $i\ne n$, then $G_i\big(\zeta_j\big(x,b_2^{-1}\big)\big)=0$. Otherwise, \begin{subequations} \begin{gather} F_{1}\big(\zeta_j\big(x,b_2^{-1}\big)\big)= \frac{\big(1-a_1^{-1}b_2^{-1}t^{-(j-1)}\big)\big(1-a_2^{-1}b_2^{-1}t^{-(j-1)}\big)}{\big(b_2t^{j-1}\big)^{n-1}(1-t)} (t;t)_j\big(xb_2t^{j};t\big)_{n-j}, \langlebel{eq:vanishingFG-1}\\ F_{j+1}\big(\zeta_j\big(x,b_2^{-1}\big)\big)=(-1)^j\frac{\big(1-a_1^{-1}x\big)\big(1-a_2^{-1}x\big)x^{n-j-1}} {b_2^jt^{{j-1\choose 2}-1}(1-t)}(t,t)_{n-j}\big(xb_2t^{-1};t\big)_{j}, \langlebel{eq:vanishingFG-2}\\ G_n\big(\zeta_j\big(x,b_2^{-1}\big)\big)= q^\alpha t^{2(n-1)}\big(1-b_1xt^{n-j-1}\big)\big(1-b_2xt^{n-j-1}\big)\nonumber\\ \hphantom{G_n\big(\zeta_j\big(x,b_2^{-1}\big)\big)=}{} \times\big({-}b_2^{-1}\big)^jt^{-{j+1\choose 2}}\big(xbt^{n-j};t\big)_j\big(xt^{n-j-1}\big)^{n-j-1}\frac{\big(t^{-1};t^{-1}\big)_{n-j}}{1-t^{-1}}, \langlebel{eq:vanishingFG-3} \end{gather} \end{subequations} while, if $i\ne 1$, then $F_i(\zeta_j(a_1,y))=0$. If $i\ne n$ and $i\ne j$, then $G_i(\zeta_j(a_1,y))=0$. Otherwise, \begin{subequations} \begin{gather} F_1(\zeta_j(a_1,y)) =\big(1-ya_2^{-1}t^{-(j-1)}\big)(-yt)^{j-1}(-a_1t)^{n-j}t^{{n-j\choose 2}-{j-1\choose 2}} \nonumber\\ \hphantom{F_1(\zeta_j(a_1,y)) =}{} \times \big(ya_1^{-1}t^{-(n-1)};t\big)_{n-j+1} \frac{\big(t^{-1};t^{-1}\big)_j}{1-t^{-1}}, \langlebel{eq:vanishingFG-4}\\ G_j(\zeta_j(a_1,y)) =q^\alpha t^{2(n-1)}(1-yb_1)(1-yb_2) \nonumber\\ \hphantom{G_j(\zeta_j(a_1,y)) =}{} \times y^{j-1}(-a_1t^{-1})^{n-j}t^{{n-j\choose 2}} \big(ya_1^{-1}t^{-(n-j-2)};t\big)_{n-j}\frac{\big(t^{-1};t^{-1}\big)_j}{1-t^{-1}}, \langlebel{eq:vanishingFG-5}\\ G_n(\zeta_j(a_1,y)) =q^\alpha t^{2(n-1)}\big(1-a_1b_1t^{n-j-1}\big)\big(1-a_1b_2t^{n-j-1}\big) \nonumber\\ \hphantom{G_n(\zeta_j(a_1,y)) =}{} \times\big(a_1t^{n-j-1}\big)^{n-1}\big(ya_1^{-1}t^{-(n-1)};t\big)_j \frac{\big(t^{-1};t^{-1}\big)_{n-j}}{1-t^{-1}}.\langlebel{eq:vanishingFG-6} \end{gather} \end{subequations} \end{Lemma} \begin{proof} The proof follows by direct computation and we omit the details. 
\end{proof} Since the leading term of the symmetric polynomial $\tilde{\varphi}_{k,i}(z)/\Delta^{\!(n)}(z)$ is equal to $m_{(1^{n-k}2^k)}(z)$ up to a multiplicative constant, $\tilde{\varphi}_{k,i}(z)/\Delta^{\!(n)}(z)$ is expressed as the linear combination of the symmetric polynomials $\tilde{E}_{l,j}^{(n)}(z)$ in the following two ways: \begin{equation}\langlebel{eq:expand02} \frac{\tilde{\varphi}_{k,i}(z)}{\Delta^{\!(n)}(z)}= \sum_{l=0}^{k}\sum_{j=0}^{n-l}c_{lj}\tilde{E}_{l,j}^{(n)}(z) =\sum_{l=0}^{k}\sum_{j=n-l}^{n}d_{lj}\tilde{E}_{l,j}^{(n)}(z), \end{equation} where $c_{lj}$ and $d_{lj}$ are some coefficients. \begin{Lemma}\langlebel{lem:3term1st-01} Suppose $i+k\le n$. Then, \eqref{eq:expand02} is written as \begin{equation}\langlebel{eq:expand03} \frac{\tilde{\varphi}_{k,i}(z)}{\Delta^{\!(n)}(z)} =c_{k,i}\tilde{E}_{k,i}^{(n)}(z)+c_{k-1,i}\tilde{E}_{k-1,i}^{(n)}(z)+c_{k-1,i+1}\tilde{E}_{k-1,i+1}^{(n)}(z), \end{equation} where \begin{subequations} \begin{gather} c_{k,i}=-a_1^{-1}a_2^{-1}b_2^{-1}t^{k-1}\big(1-q^\alpha a_1 a_2 b_1 b_2 t^{2n-k-1}\big) =q^\alpha b_1 t^{2n-2}-a_1^{-1}a_2^{-1}b_2^{-1}t^{k-1}, \langlebel{eq:ck,i}\\ c_{k-1,i}=a_2^{-1}b_2^{-1}t^{n-i-1}\big(1-q^\alpha a_2 b_2 t^{n+i-k}\big) =a_2^{-1}b_2^{-1}t^{n-i-1}-q^\alpha t^{2n-k-1}, \langlebel{eq:ck-1,i}\\ c_{k-1,i+1}=-a_2^{-1}b_2^{-1}t^{n-i-1}\big(1-a_2 b_2 t^{i}\big) =t^{n-1}\big(1-a_2^{-1}b_2^{-1}t^{-i}\big). \langlebel{eq:ck-1,i-1} \end{gather} \end{subequations} Suppose $i+k\ge n$. Then, \eqref{eq:expand02} is written as \begin{equation}\langlebel{eq:expand04} \frac{\tilde{\varphi}_{k,i}(z)}{\Delta^{\!(n)}(z)} =d_{k,i}\tilde{E}_{k,i}^{(n)}(z)+d_{k,i+1}\tilde{E}_{k,i+1}^{(n)}(z)+d_{k-1,i+1}\tilde{E}_{k-1,i+1}^{(n)}(z), \end{equation} where \begin{subequations} \begin{gather} d_{k,i} =-a_1^{-1}q^\alpha t^{n+i-1}\big(1- a_1 b_1 t^{n-i-1}\big)=q^\alpha b_1 t^{2n-2}-q^\alpha a_1^{-1}t^{n+i-1},\langlebel{eq:dk,i}\\ d_{k,i+1} =-a_2^{-1}t^{k-1}\big(1-q^\alpha a_2 b_2 t^{n+i-k}\big)=q^\alpha b_2t^{n+i-1}-a_2^{-1}t^{k-1},\langlebel{eq:dk,i+1}\\ d_{k-1,i+1} =t^{n-1}\big(1-q^\alpha t^{n-k}\big)=t^{n-1}-q^\alpha t^{2n-k-1}.\langlebel{eq:dk-1,i+1} \end{gather} \end{subequations} \end{Lemma} \begin{Remark}Given Lemma \ref{lem:3term1st-01}, Lemma \ref{lem:3term1st} immediately follows by Lemma \ref{lem:nabla=0}. Instead of proving Lemma \ref{lem:3term1st} it thus suffices to prove Lemma \ref{lem:3term1st-01}. \end{Remark} Before proving Lemma \ref{lem:3term1st-01} we show it holds for the following specific cases. \begin{Lemma}\langlebel{lem:3term1st-02} If $i+k\le n$, then the equation \eqref{eq:expand03} holds for the points $z=\zeta_j\big(x,b_2^{-1}\big)$ $(j=0,1,\ldots,n)$, while if $i+k\ge n$, then the equation~\eqref{eq:expand04} holds for the points $z=\zeta_j(a_1,y)$ $(j=0,1,\ldots,n)$. \end{Lemma} \begin{proof}Suppose $i+k\le n$. 
If $z=\zeta_j\big(x,b_2^{-1}\big)$, then the right-hand side of~(\ref{eq:expand03}) with coefficients given by (\ref{eq:ck,i})--(\ref{eq:ck-1,i-1}) can be written as \begin{gather} c_{k,i}\tilde{E}_{k,i}^{(n)}\big(\zeta_j\big(x,b_2^{-1}\big)\big)+c_{k-1,i}\tilde{E}_{k-1,i}^{(n)}\big(\zeta_j\big(x,b_2^{-1}\big)\big) +c_{k-1,i+1}\tilde{E}_{k-1,i+1}^{(n)}\big(\zeta_j\big(x,b_2^{-1}\big)\big) \nonumber\\ \qquad{}=\big(c_{k,i}xt^{n-j-k}+c_{k-1,i}\big) \tilde{E}_{k-1,i}^{(n)}\big(\zeta_j\big(x,b_2^{-1}\big)\big) + c_{k-1,i+1}\tilde{E}_{k-1,i+1}^{(n)}\big(\zeta_j\big(x,b_2^{-1}\big)\big)\nonumber\\ \qquad{}=\big[\big(1-a_1^{-1}xt^{i-j}\big)a_2^{-1}b_2^{-1}t^{n-i-1}-q^\alpha t^{2n-k-1}\big(1-xb_1t^{n-j-1}\big)\big] \tilde{E}_{k-1,i}^{(n)}\big(\zeta_j\big(x,b_2^{-1}\big)\big)\nonumber\\[2pt] \qquad\quad {}+ t^{n-1}\big(1-a_2^{-1}b_2^{-1}t^{-i}\big)\tilde{E}_{k-1,i+1}^{(n)}\big(\zeta_j\big(x,b_2^{-1}\big)\big)\nonumber\\ \qquad{}=\left[\frac{\big(1-xb_2 t^i\big)\big(1-t^{i-j+1}\big)}{a_2b_2 t^i\big(1-t^{i+1}\big) }+\big(1-a_2^{-1}b_2^{-1}t^{-i}\big)\right]t^{n-1}\tilde{E}_{k-1,i+1}^{(n)}\big(\zeta_j\big(x,b_2^{-1}\big)\big)\nonumber\\ \qquad\quad {}-q^\alpha t^{2n-k-1}\big(1-xb_1t^{n-j-1}\big) \tilde{E}_{k-1,i}^{(n)}\big(\zeta_j\big(x,b_2^{-1}\big)\big).\langlebel{eq:expand05} \end{gather} The final equality follows from the relation \begin{gather*} \big(1-a_1^{-1}xt^{i-j}\big)\big(1-t^{i+1}\big)\tilde{E}_{k-1,i}^{(n)}\big(\zeta_j\big(x,b_2^{-1}\big)\big)= \big(1-xb_2 t^i\big)\big(1-t^{i-j+1}\big)\tilde{E}_{k-1,i+1}^{(n)}\big(\zeta_j\big(x,b_2^{-1}\big)\big), \end{gather*} which follows from (\ref{eq:tri-zeta(xb)2}) and (\ref{eq:tri-zeta(xb)3}). On the other hand, using \eqref{eq:tildevarphi_k,i} and \eqref{eq:vanishingFG-1}--\eqref{eq:vanishingFG-3}, the left-hand side of~(\ref{eq:expand03}) at $z=\zeta_j\big(x,b_2^{-1}\big)$ can be written as \begin{gather} \frac{\tilde{\varphi}_{k,i}\big(\zeta_{j}\big(x,b^{-1}\big)\big)}{\Delta^{\!(n)}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)} =F_1\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)\tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j-1}^{(n-1)}\big(x,b^{-1}\big)\big) \frac{\Delta^{\!(n-1)}\big(\zeta_{j-1}^{(n-1)}\big(x,b^{-1}\big)\big)} {\Delta^{\!(n)}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)} \nonumber\\ \qquad {}+(-1)^{j}F_{j+1}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)\tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j}^{(n-1)}\big(xt,b^{-1}\big)\big) \frac{\Delta^{\!(n-1)}\big(\zeta_{j}^{(n-1)}\big(xt,b^{-1}\big)\big)} {\Delta^{\!(n)}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)} \nonumber\\ \qquad {}+(-1)^{n}G_{n}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)\tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j}^{(n-1)}\big(x,b^{-1}\big)\big) \frac{\Delta^{\!(n-1)}\big(\zeta_{j}^{(n-1)}\big(x,b^{-1}\big)\big)} {\Delta^{\!(n)}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)}.\langlebel{eq:expand06} \end{gather} Since we can compute \begin{subequations} \begin{gather} \tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j-1}^{(n-1)}\big(x,b^{-1}\big)\big)= \frac{t^{n-1}(1-t)(1-xbt^i)\tilde{E}_{k-1,i+1}^{(n)}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)}{\big(1-xbt^{n-1}\big) \big(1-a^{-1}b^{-1}t^{-(j-1)}\big)\big(1-t^{i+1}\big)}, \langlebel{eq:E-E(zeta(xb))1}\\ \tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j}^{(n-1)}\big(xt,b^{-1}\big)\big)= \frac{t^{n-1}(1-t)(1-t^{i-j+1})\tilde{E}_{k-1,i+1}^{(n)}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)}{\big(1-xa^{-1}\big)\big(1-t^{i+1}\big) \big(1-t^{n-j}\big)}, \langlebel{eq:E-E(zeta(xb))2}\\ \tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j}^{(n-1)}\big(x,b^{-1}\big)\big) 
=\frac{t^{n-k}(1-t)\tilde{E}_{k-1,i}^{(n)}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)}{\big(1-xbt^{n-1}\big)\big(1-t^{n-j}\big)}, \langlebel{eq:E-E(zeta(xb))3} \end{gather} and \begin{gather} \frac{\Delta^{\!(n-1)}\big(\zeta_{j-1}^{(n-1)}\big(x,b^{-1}\big)\big)}{\Delta^{\!(n)}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)} =\frac{\big(b_2t^{j-1}\big)^{n-1}}{(t;t)_{j-1}\big(xbt^{j-1};t\big)_{n-j}}, \langlebel{eq:D-D(zeta(xb))1}\\ \frac{\Delta^{\!(n-1)}\big(\zeta_{j}^{(n-1)}\big(xt,b^{-1}\big)\big)}{\Delta^{\!(n)}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)} =\frac{x^{-(n-j-1)}b_2^jt^{{j\choose 2}}}{(t;t)_{n-j-1}(xb;t)_j} \langlebel{eq:D-D(zeta(xb))2}\\ \frac{\Delta^{\!(n-1)}\big(\zeta_{j}^{(n-1)}\big(xb^{-1}\big)\big)}{\Delta^{\!(n)}\big(\zeta_{j}^{(n)}\big(x,b^{-1}\big)\big)} =\frac{x^{-(n-j-1)}b_2^jt^{{j\choose 2}-{n-j-1\choose 2}}}{(t;t)_{n-j-1}\big(xbt^{n-j-1};t\big)_{j}}, \langlebel{eq:D-D(zeta(xb))3} \end{gather} \end{subequations} applying (\ref{eq:E-E(zeta(xb))1})--(\ref{eq:D-D(zeta(xb))3}) to (\ref{eq:expand06}), the left-hand side of~(\ref{eq:expand03}) at $z=\zeta_{j}\big(x,b_2^{-1}\big)$ can be expressed as \begin{gather} \frac{\tilde{\varphi}_{k,i}\big(\zeta_{j}\big(x,b_2^{-1}\big)\big)}{\Delta^{\!(n)}\big(\zeta_{j}^{(n)}\big(x,b_2^{-1}\big)\big)} =\bigg[\frac{\big(1-a_2^{-1}b_2^{-1}t^{-(j-1)}\big)\big(1-t^j\big)\big(1-xb_2t^{i}\big)}{\big(1-xb_2t^{j-1}\big)\big(1-t^{i+1}\big)}\nonumber\\ \hphantom{\frac{\tilde{\varphi}_{k,i}\big(\zeta_{j}\big(x,b_2^{-1}\big)\big)}{\Delta^{\!(n)}\big(\zeta_{j}^{(n)}\big(x,b_2^{-1}\big)\big)=}}{} +\frac{\big(1-a_2^{-1}x\big)\big(1-xb_2t^{-1}\big)\big(1-t^{i-j+1}\big)}{\big(1-xb_2t^{j-1}\big)\big(1-t^{i+1}\big)}t^{j}\bigg] t^{n-1}\tilde{E}_{k-1,i+1}^{(n)}\big(\zeta_{j}^{(n)}\big(x,b_2^{-1}\big)\big) \nonumber\\ \hphantom{\frac{\tilde{\varphi}_{k,i}\big(\zeta_{j}\big(x,b_2^{-1}\big)\big)}{\Delta^{\!(n)}\big(\zeta_{j}^{(n)}\big(x,b_2^{-1}\big)\big)=}}{} -q^\alpha t^{2n-k-1}\big(1-xb_1t^{n-j-1}\big)\tilde{E}_{k-1,i}^{(n)}\big(\zeta_{j}^{(n)}\big(x,b_2^{-1}\big)\big).\langlebel{eq:expand07} \end{gather} Comparing (\ref{eq:expand05}) with (\ref{eq:expand07}), the claim of the lemma is proved if we can check the identity \begin{gather*} \frac{\big(1-xb_2 t^i\big)\big(1-t^{i-j+1}\big)}{a_2b_2 t^i(1-t^{i+1}) }+\big(1-a_2^{-1}b_2^{-1}t^{-i}\big)\\ \qquad{}= \frac{\big(1-a_2^{-1}b_2^{-1}t^{-(j-1)}\big)\big(1-t^j\big)\big(1-xb_2t^{i}\big)}{\big(1-xb_2t^{j-1}\big)\big(1-t^{i+1}\big)} +\frac{\big(1-a_2^{-1}x\big)\big(1-xb_2t^{-1}\big)\big(1-t^{i-j+1}\big)}{\big(1-xb_2t^{j-1}\big)\big(1-t^{i+1}\big)}t^{j}, \end{gather*} which is confirmed by direct computation. Next suppose $i+k\ge n$. 
If $z=\zeta_j(a_1,y)$, then the right-hand side of (\ref{eq:expand04}) with coefficients given by (\ref{eq:dk,i})--(\ref{eq:dk-1,i+1}) can be written as \begin{gather} d_{k,i}\tilde{E}_{k,i}^{(n)}(\zeta_j(a_1,y)) +d_{k,i+1}\tilde{E}_{k,i+1}^{(n)}(\zeta_j(a_1,y))+d_{k-1,i+1}\tilde{E}_{k-1,i+1}^{(n)}(\zeta_j(a_1,y)) \nonumber\\ {}=d_{k,i}\tilde{E}_{k,i}^{(n)}(\zeta_j(a_1,y))+\big[ d_{k,i+1}yt^{-(k+j-n-1)}+d_{k-1,i+1}\big]\tilde{E}_{k-1,i+1}^{(n)}(\zeta_j(a_1,y)) \nonumber\\ {}=-q^\alpha a_1^{-1}t^{n+i-1}\big(1-a_1b_1t^{n-i-1}\big)\tilde{E}_{k,i}^{(n)}(\zeta_j(a_1,y)) \nonumber\\ \quad{} +\big[t^{n-1}\big(1-ya_2^{-1}t^{-(j-1)}\big)-q^\alpha t^{2n-k-1}\big(1-yb_2 t^{-(j-i-1)}\big)\big]\tilde{E}_{k-1,i+1}^{(n)}(\zeta_j(a_1,y)) \nonumber\\ {}= -q^\alpha t^{n+i-1}\left[ a_1^{-1}\big(1-a_1b_1t^{n-i-1}\big) +\frac{\big(1-ya_1^{-1}t^{-(n-i-1)}\big)\big(1-t^{-(j-i)}\big)}{yt^{-(j-i-1)}\big(1-t^{-(n-i)}\big)}\right] \tilde{E}_{k,i}^{(n)}(\zeta_j(a_1,y)) \nonumber\\ \quad{} +t^{n-1}\big(1-ya_2^{-1}t^{-(j-1)}\big)\tilde{E}_{k-1,i+1}^{(n)}(\zeta_j(a_1,y)) \nonumber\\ {}= -q^\alpha t^{n+j-2}\left[ \frac{1-a_1b_1t^{n-i-1}}{a_1t^{j-i-1}} +\frac{\big(1-ya_1^{-1}t^{-(n-i-1)}\big)\big(1-t^{-(j-i)}\big)}{y\big(1-t^{-(n-i)}\big)}\right]\tilde{E}_{k,i}^{(n)}(\zeta_j(a_1,y)) \nonumber\\ \quad{} +t^{n-1}\big(1-ya_2^{-1}t^{-(j-1)}\big)\tilde{E}_{k-1,i+1}^{(n)}(\zeta_j(a_1,y)). \langlebel{eq:expand08} \end{gather} On the other hand, using \eqref{eq:tildevarphi_k,i} and \eqref{eq:vanishingFG-4}--\eqref{eq:vanishingFG-6}, the left-hand side of (\ref{eq:expand04}) at $z=\zeta_j(a_1,y)$ can be written as \begin{gather} \frac{\tilde{\varphi}_{k,i}(\zeta_{j}(a_1,y))}{\Delta^{\!(n)}\big(\zeta_{j}^{(n)}(a_1,y)\big)} =F_1(\zeta_{j}^{(n)}(a_1,y))\tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j-1}^{(n-1)}(a_1,y)\big) \frac{\Delta^{\!(n-1)}\big(\zeta_{j-1}^{(n-1)}(a_1,y)\big)} {\Delta^{\!(n)}\big(\zeta_{j}^{(n)}(a_1,y)\big)} \nonumber\\ \qquad{} -(-1)^{j-1}G_{j}\big(\zeta_{j}^{(n)}(a_1,y)\big)\tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j-1}^{(n-1)}\big(a_1,yt^{-1}\big)\big) \frac{\Delta^{\!(n-1)}\big(\zeta_{j-1}^{(n-1)}\big(a_1,yt^{-1}\big)\big)} {\Delta^{\!(n)}\big(\zeta_{j}^{(n)}(a_1,y)\big)} \nonumber\\ \qquad {}-(-1)^{n-1}G_{n}\big(\zeta_{j}^{(n)}(a_1,y)\big)\tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j}^{(n-1)}(a_1,y)\big) \frac{\Delta^{\!(n-1)}\big(\zeta_{j}^{(n-1)}(a_1,y)\big)} {\Delta^{\!(n)}\big(\zeta_{j}^{(n)}(a_1,y)\big)}. 
\langlebel{eq:expand09} \end{gather} Since we can compute \begin{subequations} \begin{gather} \tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j-1}^{(n-1)}(a,y)\big)= \frac{\big(1-t^{-1}\big)\tilde{E}_{k-1,i+1}^{(n)}\big(\zeta_{j}^{(n)}(a,y)\big)} {\big(1-t^{-j}\big)\big(1-ya^{-1}t^{-(n-1)}\big)}, \langlebel{eq:E-E(zeta(ay))1}\\ \tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j-1}^{(n-1)}\big(a,yt^{-1}\big)\big)= \frac{\big(1-t^{-1}\big)\big(1-t^{-(j-i)}\big)\tilde{E}_{k,i}^{(n)}\big(\zeta_{j}^{(n)}(a,y)\big)}{\big(1-t^{-j}\big)\big(1-t^{-(n-i)}\big)(1-yb)}, \\ \tilde{E}_{k-1,i}^{(n-1)}\big(\zeta_{j}^{(n-1)}(a,y)\big) =\frac{\big(1-t^{-1}\big)\big(1-ya^{-1}t^{-(n-i-1)}\big)\tilde{E}_{k,i}^{(n)}\big(\zeta_{j}^{(n)}(a,y)\big)} {at^{n-j-1}\big(1-t^{-(n-i)}\big)\big(1-ya^{-1}t^{-(n-1)}\big)\big(1-abt^{n-j-1}\big)}, \end{gather} and \begin{gather} \frac{\Delta^{\!(n-1)}\big(\zeta_{j-1}^{(n-1)}(a_1,y)\big)}{\Delta^{\!(n)}\big(\zeta_{j}^{(n)}(a_1,y)\big)} =\frac{(-1)^{n-j}} {y^{j-1}a_1^{n-j}t^{{n-j\choose 2}-{j-1\choose 2}}\big(ya_1^{-1}t^{-(n-2)};t\big)_{n-j}\big(t^{-1};t^{-1}\big)_{j-1}},\\ \frac{\Delta^{\!(n-1)}\big(\zeta_{j-1}^{(n-1)}(a_1,yt^{-1})\big)}{\Delta^{\!(n)}\big(\zeta_{j}^{(n)}(a_1,y)\big)} =\frac{(-1)^{n-1}} {y^{j-1}a_1^{n-j}t^{{n-j\choose 2}}\big(ya_1^{-1}t^{-(n-j-1)};t\big)_{n-j}\big(t^{-1};t^{-1}\big)_{j-1}},\\ \frac{\Delta^{\!(n-1)}\big(\zeta_{j}^{(n-1)}(a_1,y)\big)} {\Delta^{\!(n)}\big(\zeta_{j}^{(n)}(a_1,y)\big)} =\frac{(-1)^{n-1}}{\big(a_1t^{n-j-1}\big)^{n-1}\big(ya_1^{-1}t^{-(n-2)};t\big)_j\big(t^{-1};t^{-1}\big)_{n-j-1}}, \langlebel{eq:D-E(zeta(ay))3} \end{gather} \end{subequations} applying (\ref{eq:E-E(zeta(ay))1})--(\ref{eq:D-E(zeta(ay))3}) to (\ref{eq:expand09}), the left-hand side of~(\ref{eq:expand04}) at $z=\zeta_{j}(a,y)$ can be expressed as \begin{gather} \frac{\tilde{\varphi}_{k,i}(\zeta_{j}(a_1,y))}{\Delta^{\!(n)}\big(\zeta_{j}^{(n)}(a_1,y)\big)} =t^{n-1}\big(1-ya_2^{-1}t^{-(j-1)}\big)\tilde{E}_{k-1,i+1}^{(n)}(\zeta_j(a_1,y))\nonumber\\ \qquad{}-q^\alpha t^{n+j-2}\bigg[\frac{(1-yb_1)\big(1-ya^{-1}t\big)\big(1-t^{-(j-i)}\big)}{y\big(1-ya^{-1}t^{-(n-j-1)}\big)\big(1-t^{-(n-i)}\big)}\nonumber\\ \qquad{}+\big(1-a_1b_1 t^{n-j-1}\big)\frac{t\big(1-ya^{-1}t^{-(n-i-1)}\big) \big(1-t^{-(n-j)}\big)}{a\big(1-ya^{-1}t^{-(n-j-1)}\big)\big(1-t^{-(n-i)}\big)}\bigg] \tilde{E}_{k,i}^{(n)}(\zeta_j(a_1,y)). \langlebel{eq:expand10} \end{gather} Comparing with (\ref{eq:expand08}) and (\ref{eq:expand10}), the claim of the lemma is proved if we can check the identity \begin{gather*} \frac{1-a_1b_1t^{n-i-1}}{a_1t^{j-i-1}} +\frac{\big(1-ya_1^{-1}t^{-(n-i-1)}\big)\big(1-t^{-(j-i)}\big)}{y\big(1-t^{-(n-i)}\big)} =\frac{(1-yb_1)\big(1-ya^{-1}t\big)\big(1-t^{-(j-i)}\big)}{y\big(1-ya^{-1}t^{-(n-j-1)}\big)\big(1-t^{-(n-i)}\big)} \\ \qquad{}+\big(1-a_1b_1 t^{n-j-1}\big)\frac{t\big(1-ya^{-1}t^{-(n-i-1)}\big)\big(1-t^{-(n-j)}\big)}{a\big(1-ya^{-1}t^{-(n-j-1)}\big)\big(1-t^{-(n-i)}\big)}, \end{gather*} which follows from direct computation. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:3term1st-01}] Set $D_j=\big\{(l,i)\in \mathbb{Z}^2\,|\, j\le i, 0\le l, i+l\le n\big\}$, which satisfies $D_0\supset D_1\supset\cdots\supset D_n=\{(0,n)\}$. The set $\big\{\tilde{E}_{k,i}(z)\,|\, (k,i)\in D_0\big\}$ forms a basis for the linear space spanned by $\{m_\langlembda(z)\,|\, \langlembda\le (2^n)\}$. 
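For orientation only, in the case $n=2$ these index sets read
\begin{gather*}
D_0=\{(0,0),(1,0),(2,0),(0,1),(1,1),(0,2)\},\\
D_1=\{(0,1),(1,1),(0,2)\},\qquad D_2=\{(0,2)\},
\end{gather*}
which makes the filtration $D_0\supset D_1\supset D_2$ explicit.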
If we put \begin{equation*} \psi(z):= \frac{\tilde{\varphi}_{k,i}(z)}{\Delta(z)} -\big(c_{k,i}\tilde{E}_{k,i}(z)+c_{k-1,i}\tilde{E}_{k-1,i}(z)+c_{k-1,i+1}\tilde{E}_{k-1,i+1}(z)\big), \end{equation*} where $c_{k,i}$, $c_{k-1,i}$ and $c_{k-1,i+1}$ are specified by (\ref{eq:ck,i})--(\ref{eq:ck-1,i-1}), then the symmetric polynomial $\psi(z)$ is expressed as a linear combination of $\tilde{E}_{k,i}(z)$, $(k,i)\in D_0$, i.e., \begin{equation}\langlebel{eq:psi(z)} \psi(z)=\sum_{(l,m)\in D_0}c'_{lm}\tilde{E}_{l,m}(z), \end{equation} where the coefficients $c'_{lm}$ are some constants. We now prove $\psi(z)=0$ identically, i.e., $c'_{lm}=0$ for all $(l,m)\in D_0$ inductively. Namely, we prove that, if $c'_{lm}=0$ for $(l,m)\in D_{j+1}$, then $c'_{lm}=0$ for $(l,m)\in D_{j}.$ First we show that $c'_{0n}=0$ as the starting point of induction. Using Lemma~\ref{lem:tri-zeta(xb)} for (\ref{eq:psi(z)}) at $z=\zeta_n\big(x,b_2^{-1}\big)$ we have $\psi\big(\zeta_n\big(x,b_2^{-1}\big)\big)=c'_{0n}\tilde{E}_{0,n}\big(\zeta_n\big(x,b_2^{-1}\big)\big)$. From Lemma~\ref{lem:3term1st-02} we have $\psi\big(\zeta_n(x,b_2^{-1}\big)\big)=0$, while $\tilde{E}_{0,n}\big(\zeta_n\big(x,b_2^{-1}\big)\big)\ne 0$. Therefore $c'_{0n}=0$. Next suppose that $c'_{lm}=0$ for $(l,m)\in D_{j+1}$. Then using Lemma~\ref{lem:tri-zeta(xb)} for (\ref{eq:psi(z)}) at $z=\zeta_j\big(x,b_2^{-1}\big)$ we have \begin{equation*} \psi\big(\zeta_j\big(x,b_2^{-1}\big)\big)=\sum_{l=0}^{n-j}c'_{lj}\tilde{E}_{l,j}\big(\zeta_j\big(x,b_2^{-1}\big)\big)= \left(\sum_{l=0}^{n-j}c'_{lj}x^lt^{l(n-j)-{l+1\choose 2}}\right)\tilde{E}_{0,j}\big(\zeta_j\big(x,b_2^{-1}\big)\big). \end{equation*} From Lemma \ref{lem:3term1st-02} $\psi\big(\zeta_j\big(x,b_2^{-1}\big)\big)$ vanishes as a function of $x$, while $\tilde{E}_{0,j}\big(\zeta_j\big(x,b_2^{-1}\big)\big)\ne 0$. Thus, $\sum_{l=0}^{n-j}c'_{lj}x^lt^{l(n-j)-{l+1\choose 2}}=0$, i.e., the coefficient $c'_{lj}t^{l(n-j)-{l+1\choose 2}}$ of $x^l$ vanishes for $0\le l\le n-j$. Therefore $c'_{lj}=0$ for $0\le l\le n-j$. This implies $c'_{lm}=0$ for $(l,m)\in D_{j}$. On the other hand, we prove \eqref{eq:expand04} of Lemma \ref{lem:3term1st-01}. Set $D'_j=\big\{(l,i)\in \mathbb{Z}^2\,|\, n\le i+l,0\le l\le n$, $0\le i\le j\big\}$, which satisfies $\{(n,0)\}=D'_0\subset D'_1\subset\cdots\subset D'_n$. The set $\{\tilde{E}_{k,i}(z)\,|\, (k,i)\in D'_n\}$ also forms a basis for the linear space spanned by $\{m_\langlembda(z)\,|\, \langlembda\le (2^n)\}$. If we put \begin{equation*} \psi'(z):= \frac{\tilde{\varphi}_{k,i}(z)}{\Delta(z)} -\big(d_{k,i}\tilde{E}_{k,i}(z)+d_{k,i+1}\tilde{E}_{k,i+1}(z)+d_{k-1,i+1}\tilde{E}_{k-1,i+1}(z)\big), \end{equation*} where $d_{k,i}, d_{k,i+1}$ and $d_{k-1,i+1}$ are specified by (\ref{eq:dk,i})--(\ref{eq:dk-1,i+1}), then the symmetric polynomial $\psi'(z)$ is expressed as a linear combination of $\tilde{E}_{k,i}(z)$, $(k,i)\in D'_n$, i.e., \begin{equation}\langlebel{eq:psi'(z)} \psi'(z)=\sum_{(l,m)\in D'_n}d'_{lm}\tilde{E}_{l,m}(z), \end{equation} where the coefficients $d'_{lm}$ are some constants. We now prove $\psi'(z)=0$ identically, i.e., $d'_{lm}=0$ for all $(l,m)\in D'_n$ inductively. Namely, we prove that, if $d'_{lm}=0$ for $(l,m)\in D'_{j-1}$, then $d'_{lm}=0$ for $(l,m)\in D'_{j}.$ First we show that $d'_{n0}=0$ as the starting point of induction. Using Lemma \ref{lem:tri-zeta(ay)} for~(\ref{eq:psi'(z)}) at $z=\zeta_0((a_1,y))$ we have $\psi'(\zeta_0(a_1,y))=d'_{n0}\tilde{E}_{n,0}(\zeta_0(a_1,y))$. 
From Lemma~\ref{lem:3term1st-02} we have $\psi'(\zeta_0(a_1,y))\allowbreak =0$, while $\tilde{E}_{n,0}(\zeta_0(a_1,y))\ne 0$. Therefore $d'_{n0}=0$. Next suppose that $d'_{lm}=0$ for $(l,m)\in D'_{j-1}$. Then using Lemma~\ref{lem:tri-zeta(ay)} for~(\ref{eq:psi'(z)}) at $z=\zeta_j(a_1,y)$ we have \begin{gather*} \psi'(\zeta_j(a_1,y))=\sum_{l=n-j}^nd'_{lj}\tilde{E}_{l,j}(\zeta_j(a_1,y))\\ \hphantom{\psi'(\zeta_j(a_1,y))}{} = \left(\sum_{l=n-j}^nd'_{lj}y^{l+j-n}a_1^{n-j}t^{{n-j\choose 2}-{l+j-n\choose 2}}\right)\tilde{E}_{0,j}(\zeta_j(a_1,y)). \end{gather*} From Lemma \ref{lem:3term1st-02} $\psi'(\zeta_j(a_1,y))$ vanishes as a function of~$y$, while $\tilde{E}_{0,j}(\zeta_j(a_1,y))\ne 0$. Thus, $\sum_{l=n-j}^nd'_{lj}y^{l+j-n}a_1^{n-j}t^{{n-j\choose 2}-{l+j-n\choose 2}}=0$, i.e., the coefficient $d'_{lj}a_1^{n-j}t^{{n-j\choose 2}-{l+j-n\choose 2}}$ of $y^{l+j-n}$ va\-nishes for $n-j\le l\le n$. Therefore $d'_{lj}=0$ for $n-j\le l\le n$. This implies $d'_{lm}=0$ for $(l,m)\in D'_{j}$. \end{proof} \section[The transition matrix $R$]{The transition matrix $\boldsymbol{R}$}\langlebel{section06} In this section we give a proof of Theorem \ref{thm:R=LDU}. Before proving Theorem \ref{thm:R=LDU}, we will show the results deduced from Theorem \ref{thm:R=LDU}. By the definition \eqref{eq:R} of the transition matrix $R$, we have \begin{equation*} R^{-1}=J\bar{R}J, \end{equation*} where the symbol $\bar{R}$ is the matrix $R$ after the interchange $(a_1,b_1)\leftrightarrow (a_2,b_2)$ and $J$ is the matrix specified by \begin{equation*} J=\begin{pmatrix} & & &1 \\ & &1 & \\ & \cdots & & \\ 1 & & & \end{pmatrix}. \end{equation*} The explicit form of the inverse matrix of $R$ is given by \begin{Corollary} \langlebel{cor:inverseR} The inverse matrix $R^{-1}$ is written as Gauss matrix decomposition \begin{equation*} R^{-1}=U_{\mbox{\tiny $R$}}^{-1}D_{\mbox{\tiny $R$}}^{-1}L_{\mbox{\tiny $R$}}^{-1}=L'{}_{\!\!{\mbox{\tiny $R$}}}^{-1}D'{}_{\!\!{\mbox{\tiny $R$}}}^{-1}U'{}_{\!\!\!{\mbox{\tiny $R$}}}^{-1}, \end{equation*} where the inverse matrices $L_{\mbox{\tiny $R$}}^{-1}=\big(l^{{\mbox{\tiny $R$}} *}_{ij}\big)_{0\le i,j\le n}$, $D_{\mbox{\tiny $R$}}^{-1}=\big(d^{{\mbox{\tiny $R$}} *}_{j}\delta_{ij}\big)_{0\le i,j\le n}$, $U_{\mbox{\tiny $R$}}^{-1}=\big(u^{{\mbox{\tiny $R$}} *}_{ij}\big)_{0\le i,j\le n}$ are lower triangular, diagonal, upper triangular, respectively, given by \begin{subequations} \begin{gather} l^{{\mbox{\tiny $R$}} *}_{ij}=\overline{u^{\mbox{\tiny $R$}}_{n-i,n-j}}= \qbin{n-j}{n-i}{t^{-1}} \frac{\big(a_2b_2t^{j};t\big)_{i-j}}{\big(a_2a_1^{-1}t^{i+j-n};t\big)_{i-j}},\langlebel{eq:l*Rij}\\ d^{{\mbox{\tiny $R$}} *}_{j}=\overline{d^{\mbox{\tiny $R$}}_{n-j}}= \frac{\big(a_2a_1^{-1}t^{-(n-j)};t\big)_{j}(a_1b_2;t)_{n-j}}{(a_2b_1;t)_{j}\big(a_2^{-1}a_1t^{-j};t\big)_{n-j}}, \langlebel{eq:d*Rij}\\ u^{{\mbox{\tiny $R$}} *}_{ij}=\overline{l^{\mbox{\tiny $R$}}_{n-i,n-j}}= (-1)^{j-i}t^{-{j-i\choose 2}} \qbin{j}{i}{t^{-1}} \frac{\big(a_1b_1t^{n-j};t\big)_{j-i}}{\big(a_2^{-1}a_1 t^{n-2j+1};t\big)_{j-i}},\langlebel{eq:u*Rij} \end{gather} \end{subequations} and the inverse matrices $U'{}_{\!\!\!{\mbox{\tiny $R$}}}^{-1}=\big(u^{{\mbox{\tiny $R$}}\,\prime*}_{ij}\big)_{0\le i,j\le n}$, $D'{}_{\!\!{\mbox{\tiny $R$}}}^{-1}=\big(d^{{\mbox{\tiny $R$}}\,\prime*}_{j}\delta_{ij}\big)_{0\le i,j\le n}$, $L'{}_{\!\!{\mbox{\tiny $R$}}}^{-1}=\big(l^{{\mbox{\tiny $R$}}\,\prime*}_{ij}\big)_{0\le i,j\le n}$ are upper triangular, diagonal, lower triangular, respectively, given by \begin{subequations} \begin{gather} u^{{\mbox{\tiny 
$R$}}\,\prime*}_{ij}=\overline{l^{{\mbox{\tiny $R$}}\,\prime}_{n-i,n-j}}= \qbin{j}{i}{t} \frac{\big(a_1^{-1}b_1^{-1}t^{-(n-i-1)};t\big)_{j-i}}{\big(b_2b_1^{-1}t^{-(n-2i-1)};t\big)_{j-i}},\langlebel{eq:u'*Rij}\\ d^{{\mbox{\tiny $R$}}\,\prime*}_{j}=\overline{d^{{\mbox{\tiny $R$}}\,\prime}_{n-j}}=\frac{\big(b_2b_1^{-1}t^{-(n-2j-1)};t\big)_{n-j} \big(a_1^{-1}b_2^{-1}t^{-(j-1)};t\big)_j} {\big(a_2^{-1}b_1^{-1}t^{-(n-j-1)};t\big)_{n-j}\big(b_2^{-1}b_1t^{n-2j+1};t\big)_j},\langlebel{eq:d'*Rij}\\ l^{{\mbox{\tiny $R$}}\,\prime*}_{ij}=\overline{u^{{\mbox{\tiny $R$}}\,\prime}_{n-i,n-j}} =(-1)^{i-j}t^{i-j\choose 2}\qbin{n-j}{n-i}{t} \frac{\big(a_2^{-1}b_2^{-1}t^{-(i-1)};t\big)_{i-j}}{\big(b_2^{-1}b_1t^{n-i-j};t\big)_{i-j}}.\langlebel{eq:l'*Rij} \end{gather} \end{subequations} \end{Corollary} \begin{proof} Since $R^{-1}=J\bar{R}J$, we have $R^{-1}=U_{\mbox{\tiny $R$}}^{-1}D_{\mbox{\tiny $R$}}^{-1}L_{\mbox{\tiny $R$}}^{-1}$, where $L_{\mbox{\tiny $R$}}^{-1}=J\overline{U_{\mbox{\tiny $R$}}} J$, $D_{\mbox{\tiny $R$}}^{-1}=J\overline{D_{\mbox{\tiny $R$}}}J$ and $U_{\mbox{\tiny $R$}}^{-1}=J\overline{L_{\mbox{\tiny $R$}}}J$. Thus we immediately have the expressions $l^{{\mbox{\tiny $R$}} *}_{ij}=\overline{u^{\mbox{\tiny $R$}}_{n-i,n-j}}$, $d^{{\mbox{\tiny $R$}} *}_{j}=\overline{d^{\mbox{\tiny $R$}}_{n-j}}$ and $u^{{\mbox{\tiny $R$}} *}_{ij}=\overline{l^{\mbox{\tiny $R$}}_{n-i,n-j}}$. From Theorem \ref{thm:R=LDU} this gives the explicit forms \eqref{eq:l*Rij}, \eqref{eq:d*Rij} and \eqref{eq:u*Rij}. On the other hand, we also have $R^{-1}=L'{}_{\!\!{\mbox{\tiny $R$}}}^{-1}D'{}_{\!\!{\mbox{\tiny $R$}}}^{-1}U'{}_{\!\!\!{\mbox{\tiny $R$}}}^{-1}$, where $U'{}_{\!\!\!{\mbox{\tiny $R$}}}^{-1}=J\overline{L'{}_{\!\!{\mbox{\tiny $R$}}}}J$, $D'{}_{\!\!{\mbox{\tiny $R$}}}^{-1}=J\overline{D'{}_{\!\!{\mbox{\tiny $R$}}}}J$ and $L'{}_{\!\!{\mbox{\tiny $R$}}}^{-1}=J\overline{U'{}_{\!\!\!{\mbox{\tiny $R$}}}}J$. Therefore $u^{{\mbox{\tiny $R$}}\,\prime*}_{ij}=\overline{l^{{\mbox{\tiny $R$}}\,\prime}_{n-i,n-j}}$, $d^{{\mbox{\tiny $R$}}\,\prime*}_{j}=\overline{d^{{\mbox{\tiny $R$}}\,\prime}_{n-j}}$, and $l^{{\mbox{\tiny $R$}}\,\prime*}_{ij}=\overline{u^{{\mbox{\tiny $R$}}\,\prime}_{n-i,n-j}}$. Thus we obtain the expressions \eqref{eq:u'*Rij}, \eqref{eq:d'*Rij} and \eqref{eq:l'*Rij}. \end{proof} The rest of this section is devoted to the proof of Theorem \ref{thm:R=LDU}. For this purpose we introduce another set of symmetric polynomials different from Matsuo's polynomials. For $0\le r\le n$, let $f_r(a_1,a_2;t;z)$ be (symmetric) polynomials specified by \begin{equation}\langlebel{eq:fr(a1,a2;t;z)01} f_r(a_1,a_2;t;z) := \sum_{\substack{I\subseteq\{1,\ldots,n\}\\[1pt] |I|=r}}\, \prod_{k=1}^r\frac{z_{i_k}-a_2t^{i_k-k}}{a_1t^{k-1}-a_2t^{i_k-k}} \prod_{l=1}^{n-r}\frac{z_{j_l}-a_1t^{j_l-l}}{a_2t^{l-1}-a_1t^{j_l-l}}, \end{equation} where the summation is over all $r$-subsets $I$ of $\{1,\ldots,n\}$, and $I=\{i_1<\cdots<i_r\}$, $J=\{1,\ldots,n\}\backslash I=\{ j_1<\cdots<j_{n-r}\}$. In particular, \begin{equation*} f_0(a_1,a_2;t;z)=\prod_{i=1}^n\frac{z_i-a_1}{a_2t^{i-1}-a_1}, \qquad f_n(a_1,a_2;t;z)=\prod_{i=1}^n\frac{z_i-a_2}{a_1t^{i-1}-a_2}. \end{equation*} We remark that the polynomials $f_r(a_1,a_2;t;z)$ are called the {\it Lagrange interpolation polynomials of type $A$} and their properties are discussed in \cite[Appendix B]{IN2018}. By definition the polynomial $f_i(a_1,a_2;t;z)$ satisfies \begin{equation} \langlebel{eq:f_i=f_n-i} f_i(a_1,a_2;t;z)=f_{n-i}(a_2,a_1;t;z). 
\end{equation} When we need to specify the number of variables $z_1,\ldots,z_n$, we use the notation $f_i^{(n)}(a_1,a_2;t;z)\allowbreak =f_i(a_1,a_2;t;z)$. \begin{Lemma}[recurrence relation]\langlebel{lem:rec}The polynomials \eqref{eq:fr(a1,a2;t;z)01} satisfy the following recurrence relations: \begin{gather*} f_i^{(n)}(a_1,a_2;t;z) =\frac{z_n-a_2t^{n-i}}{a_1t^{i-1}-a_2t^{n-i}} f_{i-1}^{(n-1)}(a_1,a_2;t; \widehat{z}_n) +\frac{z_n-a_1t^i}{a_2t^{n-i-1}-a_1t^i} f_{i}^{(n-1)}(a_1,a_2;t; \widehat{z}_n) \end{gather*} for $i=0,1,\ldots,n$, where $ \widehat{z}_n =(z_1,\ldots,z_{n-1})\in (\mathbb{C}^*)^{n-1}$. \end{Lemma} \begin{proof}The lemma follows from a direct computation and we omit the detail. \end{proof} For arbitrary $x,y\in \mathbb{C}^*$ we define \begin{equation}\langlebel{eq:xi_j(x,y;t)} \xi_j(x,y;t):=\big(\underbrace{x,x t,\ldots,x t^{j-1}\phantom{\Big|}\!\!}_j, \underbrace{y,y t,\ldots,y t^{n-j-1}\phantom{\Big|}\!\!}_{n-j}\big)\in (\mathbb{C}^*)^n \end{equation} for $j=0,1,\ldots, n$. \begin{Proposition}\langlebel{prop:vanishing-f} The polynomial $f_i(a_1,a_2;t;z)$ is symmetric in the variables $z=(z_1,\ldots,z_n)$. The leading term of $f_i(a_1,a_2;t;z)$ is $m_{(1^n)}(z)$ up to a multiplicative constant. The functions $f_i(a_1,a_2;t;z)$ $(i=0,1,\ldots,n)$ satisfy \begin{equation} \langlebel{eq:vanishing-f} f_i(a_1,a_2;t;\xi_j(a_1,a_2;t))=\delta_{ij}. \end{equation} \end{Proposition} \begin{proof}See \cite[Example 4.3 and equation~(4.7)]{IN2018}. Otherwise, using Lemma~\ref{lem:rec} we can also prove this proposition directly by induction on~$n$. \end{proof} \begin{Remark}The set of symmetric polynomials $\{f_i(a_1,a_2;t;z)\,|\,i=0,1,\ldots,n\}$ forms a basis of the linear space spanned by $\{m_\langlembda(z)\,|\,\langlembda\le (1^n)\}$. Conversely such basis satisfying the condition~\eqref{eq:vanishing-f} is uniquely determined. Thus we can take Proposition~\ref{prop:vanishing-f} as a definition of the polynomials $f_i(a_1,a_2;t;z)$, instead of~(\ref{eq:fr(a1,a2;t;z)01}). \end{Remark} \begin{Lemma}[triangularity]\langlebel{lem:Triangularity-f} Suppose that \begin{equation*} \xi_j(a_1):=\big(\underbrace{a_1,a_1 t,\ldots,a_1 t^{j-1}\phantom{\Big|}\!\!}_{j},z_1,z_2,\ldots,z_{n-j}\big) \in (\mathbb{C}^*)^n. \end{equation*} If $i<j$, then \begin{equation}\langlebel{eq:f_i=0xi} f_i(a_1,a_2;t;\xi_j(a_1))=0. \end{equation} Moreover, $f_i(a_1,a_2;t;\xi_i(a_1))$ evaluates as \begin{equation}\langlebel{eq:fi(xi_i(a_1))} f_i(a_1,a_2;t;\xi_i(a_1))=\prod_{l=1}^{n-i}\frac{z_l-a_1t^{i}}{a_2t^{l-1}-a_1t^{i}} =\frac{\prod_{l=1}^{n-i}\big(1-z_la_1^{-1}t^{-i}\big)}{\big(a_2a_1^{-1}t^{-i};t\big)_{n-i}}. \end{equation} On the other hand, suppose that \begin{equation*} \eta_j(a_2):=\big(z_1,z_2,\ldots,z_j, \underbrace{a_2,a_2 t,\ldots,a_2 t^{n-j-1}\phantom{\Big|}\!\!}_{n-j}\big) \in (\mathbb{C}^*)^n. \end{equation*} If $i>j$, then \begin{equation}\langlebel{eq:f_i=0eta} f_i(a_1,a_2;t;\eta_j(a_2))=0. \end{equation} Moreover, $f_i(a_1,a_2;t;\eta_i(a_2))$ evaluates as \begin{equation}\langlebel{eq:fi(eta_i(a_2))} f_i(a_1,a_2;t;\eta_i(a_2))=\prod_{l=1}^i\frac{z_l-a_2t^{n-i}}{a_1t^{l-1}-a_2t^{n-i}} =\frac{\prod_{l=1}^i\big(1-z_la_2^{-1}t^{-(n-i)}\big)}{\big(a_1a_2^{-1}t^{-(n-i)};t\big)_i}. \end{equation} \end{Lemma} \begin{proof}First we show \eqref{eq:f_i=0eta} by induction on $n$. For simplicity we write $\eta_i(a_2)$ as $\eta_i$. Suppose $i>j$. 
Using Lemma~\ref{lem:rec} we have \begin{gather*} f_i^{(n)}(a_1,a_2;t;\eta_j) =\frac{a_2t^{n-j-1}-a_2t^{n-i}}{a_1t^{i-1}-a_2t^{n-i}} f_{i-1}^{(n-1)}\big(a_1,a_2;t;\eta_j^{(n-1)}\big)\\ \hphantom{f_i^{(n)}(a_1,a_2;t;\eta_j) =}{} +\frac{a_2t^{n-j-1}-a_1t^i}{a_2t^{n-i-1}-a_1t^i} f_{i}^{(n-1)}\big(a_1,a_2;t;\eta_j^{(n-1)}\big), \end{gather*} where $\eta_j^{(n-1)}=\big(z_1,z_2,\ldots,z_j, a_2,a_2 t,\ldots,a_2 t^{n-j-2} \big)\in (\mathbb{C}^*)^{n-1}$. Since $f_{i}^{(n-1)}\big(a_1,a_2;t;\eta_j^{(n-1)}\big)=0$ by the induction hypothesis, we have \begin{equation*} f_i^{(n)}(a_1,a_2;t;\eta_j) =\frac{a_2t^{n-j-1}-a_2t^{n-i}}{a_1t^{i-1}-a_2t^{n-i}} f_{i-1}^{(n-1)}\big(a_1,a_2;t;\eta_j^{(n-1)}\big). \end{equation*} If $i-1>j$, then $f_{i-1}^{(n-1)}\big(a_1,a_2;t;\eta_j^{(n-1)}\big)=0$ by the induction hypothesis, while if $i-1=j$, then $a_2t^{n-j-1}-a_2t^{n-i}=0$. In any case we obtain $f_i^{(n)}(a_1,a_2;t;\eta_j)=0$, which is the claim of~\eqref{eq:f_i=0eta}. Next we show \eqref{eq:fi(eta_i(a_2))}. If we put $z_l=a_2 t^{n-i}$ for any $l\in \{1,\ldots,i\}$ in the polynomial $f_i(a_1,a_2;t;\allowbreak\eta_i(a_2))$ of $z_1,\ldots, z_i$, then we have $f_i(a_1,a_2;t; \eta_i(a_2))=0$ because $f_i(a_1,a_2;t;\eta_i(a_2))|_{z_l=a_2t^{n-i}}$ satisfies the condition of~\eqref{eq:f_i=0eta}. This implies $f_i(a_1,a_2;t;\eta_i(a_2))$ is divisible by $\prod_{l=1}^i\big(z_l-a_2t^{n-i}\big)$, so that we have $f_i(a_1,a_2;t;\eta_i(a_2))=c\prod_{l=1}^i\big(z_l-a_2t^{n-i}\big)$, where $c$ is some constant. Thus \begin{equation*} f_i(a_1,a_2;t;\eta_i(a_2))\Big|_{(z_1,\ldots,z_i)=(a_1,a_1t,\ldots,a_1t^{i-1})} =c\prod_{l=1}^i\big(a_1t^{l-1}-a_2t^{n-i}\big). \end{equation*} On the other hand, \eqref{eq:vanishing-f} implies that \begin{equation*} f_i(a_1,a_2;t;\eta_i(a_2))\Big|_{(z_1,\ldots,z_i)=(a_1,a_1t,\ldots,a_1t^{i-1})} =f_i(a_1,a_2;t;\xi_i(a_1,a_2;t))=1. \end{equation*} We therefore obtain $c=1/\prod_{l=1}^i\big(a_1t^{l-1}-a_2t^{n-i}\big)$, which implies~\eqref{eq:fi(eta_i(a_2))}. Finally we show \eqref{eq:f_i=0xi} and \eqref{eq:fi(xi_i(a_1))}. From \eqref{eq:f_i=f_n-i} we have \begin{equation*} f_i(a_1,a_2;t;\xi_j(a_1))= f_{n-i}(a_2,a_1;t;\xi_j(a_1))= f_{n-i}(a_2,a_1;t;\eta_{n-j}(a_1)). \end{equation*} If $i<j$ (i.e., $n-i>n-j$), then using \eqref{eq:f_i=0eta} we see that the right-hand side of the above is equal to zero. Moreover, using \eqref{eq:fi(eta_i(a_2))} we obtain \begin{equation*} f_i(a_1,a_2;t;\xi_i(a_1))=f_{n-i}(a_2,a_1;t;\eta_{n-i}(a_1)) =\prod_{l=1}^{n-i}\frac{z_l-a_1t^{i}}{a_2t^{l-1}-a_1t^{i}} =\frac{\prod_{l=1}^{n-i}\big(1-z_la_1^{-1}t^{-i}\big)}{\big(a_2a_1^{-1}t^{-i};t\big)_{n-i}}, \end{equation*} which completes the proof. \end{proof} \begin{Corollary}\langlebel{cor:Triangularity-f2} Let $\xi_j(x,a_2;t)\in (\mathbb{C}^*)^n$ be the point specified by~\eqref{eq:xi_j(x,y;t)} with $y=a_2$. Then $f_i(a_1,a_2;t;\xi_j(x,a_2;t))$ evaluates as \begin{equation}\langlebel{eq:f_i(xi_j(x,a_2;t))} f_i(a_1,a_2;t;\xi_j(x,a_2;t)) = \qbin{j}{i}{t^{-1}} \frac{\big(xa_1^{-1};t\big)_{j-i}\big(xa_2^{-1}t^{-(n-j)};t\big)_i} {\big(a_1^{-1}a_2t^{n-j-i};t\big)_{j-i}\big(a_1a_2^{-1}t^{-(n-i)};t\big)_i}. \end{equation} \end{Corollary} \begin{proof}If $i>j$, then $f_i(a_1,a_2;t;\xi_j(x,a_2;t))=0$ is a special case of \eqref{eq:f_i=0eta} in Lemma~\ref{lem:Triangularity-f}. Now we assume that $i\le j$. If we put $x=a_2t^{n-j-k}$ $(k=0,1,\ldots, i-1)$, then from~\eqref{eq:f_i=0eta} we have $f_i(a_1,a_2;t;\xi_j(x,a_2;t))=0$.
If we put $x=a_1t^{-k}$ $(k=0,1,\ldots, j-i-1)$, then from~\eqref{eq:f_i=0xi} we also have $f_i(a_1,a_2;t;\xi_j(x,a_2;t))=0$. This implies that $f_i(a_1,a_2;t;\xi_j(x,a_2;t))$ as a polynomial in $x$ is divisible by $\big(xa_1^{-1};t\big)_{j-i}\big(xa_2^{-1}t^{-(n-j)};t\big)_i$. Since the degree of $f_i(a_1,a_2;t;\xi_j(x,a_2;t))$ as a function of $x$ is equal to $j$, the function $f_i(a_1,a_2;t;\xi_j(x,a_2;t))$ can be expressed as \begin{equation*} f_i(a_1,a_2;t;\xi_j(x,a_2;t))=c\big(xa_1^{-1};t\big)_{j-i}\big(xa_2^{-1}t^{-(n-j)};t\big)_i, \end{equation*} where $c$ is some constant. In order to determine the constant $c$, we put $x=a_2t^{n-j-i}$ in the above equation. Then \begin{equation*} f_i(a_1,a_2;t;\xi_j(x,a_2;t))\Big|_{x=a_2t^{n-j-i}}=c\big(a_1^{-1}a_2t^{n-j-i};t\big)_{j-i}\big(t^{-i};t\big)_i, \end{equation*} while, from \eqref{eq:fi(eta_i(a_2))} we have \begin{gather*} f_i(a_1,a_2;t;\xi_j(x,a_2;t)) \Big|_{x=a_2t^{n-j-i}} = f_i(a_1,a_2;t;\xi_i(x,a_2;t)) \Big|_{x=a_2t^{n-j-i}} =\frac{\big(t^{-j};t\big)_i}{\big(a_1a_2^{-1}t^{-(n-i)};t\big)_i}. \end{gather*} The constant $c$ can be explicitly computed as \begin{align*} c&=\frac{\big(t^{-j};t\big)_i}{\big(a_1^{-1}a_2t^{n-j-i};t\big)_{j-i}\big(a_1a_2^{-1}t^{-(n-i)};t\big)_i\big(t^{-i};t\big)_i}\\ &=\frac{\big(t^{-1};t^{-1}\big)_j}{\big(a_1^{-1}a_2t^{n-j-i};t\big)_{j-i}\big(a_1a_2^{-1}t^{-(n-i)};t\big)_i \big(t^{-1};t^{-1}\big)_{j-i}\big(t^{-1};t^{-1}\big)_i}. \end{align*} We therefore obtain \eqref{eq:f_i(xi_j(x,a_2;t))}. \end{proof} \begin{Lemma}\langlebel{lem:(e)=(f)tildeU} Suppose that $\tilde{U}_{\mbox{\tiny $R$}}$ is the $(n+1)\times(n+1)$ matrix satisfying \begin{gather} \big(e_n(a_2,b_1;z),e_{n-1}(a_2,b_1;z),\ldots,e_0(a_2,b_1;z)\big)\nonumber\\ \qquad{}=\big(f_n(a_1,a_2;t;z),f_{n-1}(a_1,a_2;t;z),\ldots,f_0(a_1,a_2;t;z)\big)\tilde{U}_{\mbox{\tiny $R$}}.\langlebel{eq:(e)=(f)tildeU} \end{gather} Then $\tilde{U}_{\mbox{\tiny $R$}}=\big(\tilde{u}^{\mbox{\tiny $R$}}_{ij}\big)_{0\le i,j\le n}$ is an upper triangular matrix with entries given by \begin{equation}\langlebel{eq:tilde u} \tilde{u}^{\mbox{\tiny $R$}}_{ij} = \frac{\big(a_1b_1t^{n-j};t\big)_{j-i}\big(a_1a_2^{-1}t^{-i};t\big)_{n-j}(a_2b_1;t)_i\big(t^{-1};t^{-1}\big)_{n}} {\big(1-t^{-1}\big)^n}\frac{\qbin{j}{i}{t^{-1}}}{\qbin{n}{i}{t^{-1}}}. \end{equation} Suppose that $\tilde{L}_{\mbox{\tiny $R$}}$ is the $(n+1)\times(n+1)$ matrix satisfying \begin{gather} \big(f_n(a_1,a_2;t;z),f_{n-1}(a_1,a_2;t;z),\ldots,f_0(a_1,a_2;t;z)\big)\nonumber\\ \qquad=\big(e_0(a_1,b_2;z),e_1(a_1,b_2;z),\ldots,e_n(a_1,b_2;z)\big)\tilde{L}_{\mbox{\tiny $R$}}.\langlebel{eq:(f)=(e)tildeL} \end{gather} Then $\tilde{L}_{\mbox{\tiny $R$}}=\big(\tilde{l}^{\mbox{\tiny $R$}}_{ij}\big)_{0\le i,j\le n}$ is a lower triangular matrix with entries given by \begin{gather}\langlebel{eq:tilde l} \tilde{l}^{\mbox{\tiny $R$}}_{ij} = \frac{(-1)^{i-j}t^{-{i-j\choose 2}}\big(a_2b_2t^j;t\big)_{i-j}\big(1-t^{-1}\big)^n} {\big(a_1^{-1}a_2t^{-(n-2j-1)};t\big)_{i-j}(a_1b_2;t)_{n-j}\big(a_1^{-1}a_2t^{-(n-j)};t\big)_j\big(t^{-1};t^{-1}\big)_{n}} \qbin{n}{i}{t^{-1}}\qbin{i}{j}{t^{-1}}.\!\!\!
\end{gather} \end{Lemma} \begin{proof}Since both $\{e_i(a_2,b_1;z)\,|\,i=0,1,\ldots,n\}$ and $\{f_i(a_1,a_2;t;z)\,|\,i=0,1,\ldots,n\}$ form bases of the linear space spanned by $\{m_\langlembda(z)\,|\,\langlembda<(1^n)\}$, the polynomial $e_i(a_2,b_1;z)$ is expressed as a~linear combination of $f_i(a_1,a_2;t;z)$ $(i=0,1,\ldots,n)$, i.e., \begin{equation*} e_{n-j}(a_2,b_1;z)=\sum_{i=0}^{n} \tilde{u}^{\mbox{\tiny $R$}}_{ij} f_{n-i}(a_1,a_2;t;z), \end{equation*} where $\tilde{u}^{\mbox{\tiny $R$}}_{ij}$ are some constants. From the vanishing property~\eqref{eq:vanishing-f}, the coefficient~$\tilde{u}^{\mbox{\tiny $R$}}_{ij}$ is given by \begin{equation}\langlebel{eq:tilde u-2} \tilde{u}^{\mbox{\tiny $R$}}_{ij} =e_{n-j}(a_2,b_1;\xi_{n-i}(a_1,a_2;t)) =e_{n-j}(a_2,b_1;\zeta_{n-i}(a_2,y))\Big|_{y=a_1t^{n-i-1}}. \end{equation} From \eqref{eq:tri-zeta(ay)2} in Lemma \ref{lem:tri-zeta(ay)}, $e_{n-j}(a_2,b_1;\zeta_{n-i}(a_2,y))$ evaluates as \begin{gather*} e_{n-j}(a_2,b_1;\zeta_{n-i}(a_2,y))\\ \qquad{} = \big(yb_1t^{-(j-i-1)};t\big)_{j-i}\big(ya_2^{-1}t^{-(n-1)};t\big)_{n-j}(a_2b_1;t)_i \frac{\big(t^{-1};t^{-1}\big)_{n-i}\big(t^{-1};t^{-1}\big)_{j}}{\big(t^{-1};t^{-1}\big)_{j-i}\big(1-t^{-1}\big)^n}. \end{gather*} Combining this and \eqref{eq:tilde u-2}, we obtain the expression~\eqref{eq:tilde u}. Since $\{e_i(a_1,b_2;z)\,|\,i=0,1,\ldots,n\}$ is also a basis of the linear space spanned by $\{m_\langlembda(z)\,|\,\langlembda<(1^n)\}$, the polynomial $f_i(a_1,a_2;t;z)$ is expressed as a linear combination of $e_i(a_1,b_2;z)$ $(i=0,1,\ldots,n)$, i.e., \begin{equation*} f_{n-j}(a_1,a_2;t;z)=\sum_{i=0}^{n}\tilde{l}^{\mbox{\tiny $R$}}_{ij}e_{i}(a_1,b_2;z), \end{equation*} where $\tilde{l}^{\mbox{\tiny $R$}}_{ij}$ are some constants. From \eqref{eq:matsuo2}, the coefficient $\tilde{l}^{\mbox{\tiny $R$}}_{ij}$ is written as \begin{equation}\langlebel{eq:tilde l-2} \tilde{l}^{\mbox{\tiny $R$}}_{ij} =\frac{f_{n-j}\big(a_1,a_2;t;\zeta_i\big(a_1,b_2^{-1}\big)\big)}{c_i}= c_i^{-1}f_{n-j}(a_1,a_2;t;\xi_{n-i}(a_1,x))\Big|_{x=b_2^{-1}t^{-(i-1)}}, \end{equation} where $c_i$ is given explicitly in \eqref{eq:c-i-b} as \begin{equation}\langlebel{eq:c-i-2} c_i= (a_1b_2;t)_{n-i}\big(a_1^{-1}b_2^{-1}t^{-(n-1)};t\big)_{i} \frac{\big(t^{-1};t^{-1}\big)_i\big(t^{-1};t^{-1}\big)_{n-i}} {\big(1-t^{-1}\big)^n}. \end{equation} Using \eqref{eq:f_i(xi_j(x,a_2;t))} in Corollary~\ref{cor:Triangularity-f2}, we have \begin{align*} f_{n-j}(a_1,a_2;t;\xi_{n-i}(a_1,x))&= f_{j}(a_2,a_1;t;\xi_{n-i}(a_1,x))=f_{j}(a_2,a_1;t;\xi_{i}(x,a_1))\\ &= \qbin{i}{j}{t^{-1}} \frac{\big(xa_2^{-1};t\big)_{i-j}\big(xa_1^{-1}t^{-(n-i)};t\big)_j} {\big(a_2^{-1}a_1t^{n-i-j};t\big)_{i-j}\big(a_2a_1^{-1}t^{-(n-j)};t\big)_j}. \end{align*} Combining this, \eqref{eq:tilde l-2} and \eqref{eq:c-i-2}, we therefore obtain the expression \eqref{eq:tilde l}. 
\end{proof} \begin{Lemma}\langlebel{lem:(e)=(f)tildeL'} Suppose that $\tilde{L}'{}_{\!\!{\mbox{\tiny $R$}}}$ is the $(n+1)\times(n+1)$ matrix satisfying \begin{gather} \big(e_n(a_2,b_1;z),e_{n-1}(a_2,b_1;z),\ldots,e_0(a_2,b_1;z)\big)\nonumber\\ \qquad{} =\big(f_n\big(b_1^{-1},b_2^{-1};t^{-1};z\big),f_{n-1}\big(b_1^{-1},b_2^{-1};t^{-1};z\big),\ldots, f_0\big(b_1^{-1},b_2^{-1};t^{-1};z\big)\big) \tilde{L}'{}_{\!\!{\mbox{\tiny $R$}}}.\langlebel{eq:(e)=(f)tildeL'} \end{gather} Then $\tilde{L}'{}_{\!\!{\mbox{\tiny $R$}}}=\big(\tilde{l}^{{\mbox{\tiny $R$}}\,\prime}_{ij}\big)_{0\le i,j\le n}$ is a lower triangular matrix with entries given by \begin{equation}\langlebel{eq:c-(e)=(f)tildeL'} \tilde{l}^{{\mbox{\tiny $R$}}\,\prime}_{ij} = \frac{\big(a_2^{-1}b_2^{-1}t^{-(i-1)};t\big)_{i-j}\big(a_2^{-1}b_1^{-1}t^{-(n-i-1)};t\big)_{n-i}\big(b_1b_2^{-1}t^{n-i-j+1};t\big)_{j} (t;t)_n}{t^{{n\choose 2}}(1-t)^n} \frac{\qbin{i}{j}{t}}{\qbin{n}{j}{t}}. \end{equation} Suppose that $\tilde{U}'{}_{\!\!\!{\mbox{\tiny $R$}}}$ is the $(n+1)\times(n+1)$ matrix satisfying \begin{gather} \big(f_n\big(b_1^{-1},b_2^{-1};t^{-1};z\big),f_{n-1}\big(b_1^{-1},b_2^{-1};t^{-1};z\big),\ldots,f_0\big(b_1^{-1},b_2^{-1};t^{-1};z\big)\big) \nonumber\\ \qquad=\big(e_0(a_1,b_2;z),e_1(a_1,b_2;z),\ldots,e_n(a_1,b_2;z)\big) \tilde{U}'{}_{\!\!\!{\mbox{\tiny $R$}}},\langlebel{eq:(f)=(e)tildeU'} \end{gather} then $\tilde{U}'{}_{\!\!\!{\mbox{\tiny $R$}}}=\big(\tilde{u}^{{\mbox{\tiny $R$}}\,\prime}_{ij}\big)_{0\le i,j\le n}$ is an upper triangular matrix with entries given by \begin{gather} \tilde{u}^{{\mbox{\tiny $R$}}\,\prime}_{ij}= \frac{(-1)^{j-i}t^{{j-i\choose 2}+{n\choose 2}}\big(a_1^{-1}b_1^{-1}t^{-(n-i-1)};t\big)_{j-i}(1-t)^n} {\big(b_1^{-1}b_2t^{-(n-i-j)};t\big)_{j-i}\big(b_1^{-1}b_2t^{-(n-2j-1)};t\big)_{n-j}\big(a_1^{-1}b_2^{-1}t^{-(j-1)};t\big)_j(t;t)_n}\nonumber\\ \hphantom{\tilde{u}^{{\mbox{\tiny $R$}}\,\prime}_{ij}=}{}\times \qbin{n}{j}{t}\qbin{j}{i}{t}.\langlebel{eq:c-(f)=(e)tildeU'} \end{gather} \end{Lemma} \begin{proof}Since both $\{e_i(a_2,b_1;z)\,|\,i=0,1,\ldots,n\}$ and $\big\{f_i\big(b_1^{-1},b_2^{-1};t^{-1};z\big)\,|\,i=0,1,\ldots,n\big\}$ form bases of the linear space spanned by $\{m_\langlembda(z)\,|\,\langlembda<(1^n)\}$, the polynomial $e_i(a_2,b_1;z)$ is expressed as a linear combination of $f_i\big(b_1^{-1},b_2^{-1};t^{-1};z\big)$ $(i=0,1,\ldots,n)$, i.e., \begin{equation*} e_{n-j}(a_2,b_1;z)=\sum_{i=0}^{n}\tilde{l}^{{\mbox{\tiny $R$}}\,\prime}_{ij} f_{n-i}\big(b_1^{-1},b_2^{-1};t^{-1};z\big), \end{equation*} where $\tilde{l}^{{\mbox{\tiny $R$}}\,\prime}_{ij}$ are some constants. From~\eqref{eq:vanishing-f} we have \begin{equation}\langlebel{eq:tilde l'2} \tilde{l}^{{\mbox{\tiny $R$}}\,\prime}_{ij}=e_{n-j}\big(a_2,b_1;\xi_{n-i}\big(b_1^{-1},b_2^{-1};t^{-1}\big)\big) =e_{n-j}\big(a_2,b_1;\zeta_{n-i}\big(x,b_1^{-1}\big)\big)\Big|_{x=b_2^{-1}t^{-(i-1)}}. \end{equation} Using \eqref{eq:tri-zeta(xb)2} in Lemma \ref{lem:tri-zeta(xb)} we have \begin{gather*} e_{n-j}\big(a_2,b_1;\zeta_{n-i}\big(x,b_1^{-1}\big)\big)=\big(xb_1t^{n-j};t\big)_j \big(xa_2^{-1};t\big)_{i-j}\big(a_2^{-1}b_1^{-1}t^{-(n-i-1)};t\big)_{n-i}\\ \hphantom{e_{n-j}\big(a_2,b_1;\zeta_{n-i}\big(x,b_1^{-1}\big)\big)=}{}\times \frac{(t;t)_{n-j}(t;t)_{i}}{t^{{n\choose 2}}(1-t)^n(t;t)_{i-j}}. \end{gather*} Combining this and \eqref{eq:tilde l'2}, we obtain \eqref{eq:c-(e)=(f)tildeL'}. 
On the other hand, since $\{e_i(a_1,b_2;z)\,|\,i=0,1,\ldots,n\}$ is also a basis of the linear space spanned by $\{m_\langlembda(z)\,|\,\langlembda<(1^n)\}$, the polynomial $f_i\big(b_1^{-1},b_2^{-1};t^{-1};z\big)$ is expressed as a linear combination of $e_i(a_1,b_2;z)$ $(i=0,1,\ldots,n)$, i.e., \begin{equation*} f_{n-j}\big(b_1^{-1},b_2^{-1};t^{-1};z\big) =\sum_{i=0}^{n} \tilde{u}^{{\mbox{\tiny $R$}}\,\prime}_{ij} e_i(a_1,b_2;z), \end{equation*} where $\tilde{u}^{{\mbox{\tiny $R$}}\,\prime}_{ij}$ are some constants. From~\eqref{eq:matsuo2} we have \begin{gather} \tilde{u}^{{\mbox{\tiny $R$}}\,\prime}_{ij} =\frac{f_{n-j}\big(b_1^{-1},b_2^{-1};t^{-1};\zeta_i\big(a_1,b_2^{-1}\big)\big)}{c_i}\nonumber\\ \hphantom{\tilde{u}^{{\mbox{\tiny $R$}}\,\prime}_{ij}}{} =\frac{f_{n-j}\big(b_1^{-1},b_2^{-1};t^{-1};\xi_{n-i}\big(x,b_2^{-1};t^{-1}\big)\big)}{c_i}\Big|_{x=a_1t^{n-i-1}},\langlebel{eq:tilde u'2} \end{gather} where $c_i$ is the constant given in \eqref{eq:c-i-a} as \begin{equation}\langlebel{eq:c-i-3} c_i=\big(a_1b_2t^{i};t\big)_{n-i}\big(a_1^{-1}b_2^{-1}t^{-(i-1)};t\big)_{i} \frac{(t;t)_i(t;t)_{n-i}}{t^{n\choose 2}(1-t)^n}. \end{equation} From \eqref{eq:f_i(xi_j(x,a_2;t))} in Corollary \ref{cor:Triangularity-f2} we have \begin{gather*} f_{n-j}\big(b_1^{-1},b_2^{-1};t^{-1};\xi_{n-i}\big(x,b_2^{-1};t^{-1}\big)\big)\\ \qquad{} = \frac{\big(xb_1;t^{-1}\big)_{j-i}\big(xb_2t^i;t^{-1}\big)_{n-j}(t;t)_{n-i}} {\big(b_1b_2^{-1}t^{-(i+j-n)};t^{-1}\big)_{j-i}\big(b_1^{-1}b_2t^j;t^{-1}\big)_{n-j}(t;t)_{j-i}(t;t)_{n-j}}. \end{gather*} Combining this, \eqref{eq:tilde u'2} and \eqref{eq:c-i-3}, we therefore obtain the expression~\eqref{eq:c-(f)=(e)tildeU'}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:R=LDU}] From \eqref{eq:(e)=(f)tildeU} and \eqref{eq:(f)=(e)tildeL} in Lemma \ref{lem:(e)=(f)tildeU}, we have \begin{gather} \big(e_n(a_2,b_1;z),e_{n-1}(a_2,b_1;z),\ldots,e_0(a_2,b_1;z)\big)\nonumber\\ \qquad{}=\big(e_0(a_1,b_2;z),e_1(a_1,b_2;z),\ldots,e_n(a_1,b_2;z)\big)\tilde{L}_{\mbox{\tiny $R$}}\tilde{U}_{\mbox{\tiny $R$}},\langlebel{eq:(e)=(e)tildeLU} \end{gather} where $\tilde{L}_{\mbox{\tiny $R$}}=\big(\tilde{l}^{\mbox{\tiny $R$}}_{ij}\big)_{0\le i,j\le n}$ and $\tilde{U}_{\mbox{\tiny $R$}}=\big(\tilde{u}^{\mbox{\tiny $R$}}_{ij}\big)_{0\le i,j\le n}$ are the matrices given by~\eqref{eq:tilde u} and~\eqref{eq:tilde l}, respectively. Comparing \eqref{eq:(e)=(e)tildeLU} with~\eqref{eq:R=LDU}, we obtain $R=\tilde{L}_{\mbox{\tiny $R$}}\tilde{U}_{\mbox{\tiny $R$}}=L_{\mbox{\tiny $R$}} D_{\mbox{\tiny $R$}} U_{\mbox{\tiny $R$}}$, i.e., \begin{equation*} l^{\mbox{\tiny $R$}}_{ij}=\frac{\tilde{l}^{\mbox{\tiny $R$}}_{ij}}{\tilde{l}^{\mbox{\tiny $R$}}_{jj}} , \qquad d^{\mbox{\tiny $R$}}_{j}=\tilde{l}^{\mbox{\tiny $R$}}_{jj}\tilde{u}^{\mbox{\tiny $R$}}_{jj}, \qquad u^{\mbox{\tiny $R$}}_{ij}=\frac{\tilde{u}^{\mbox{\tiny $R$}}_{ij}}{\tilde{u}^{\mbox{\tiny $R$}}_{ii}}. \end{equation*} Lemma \ref{lem:(e)=(f)tildeU} implies that $l^{\mbox{\tiny $R$}}_{ij}$, $d^{\mbox{\tiny $R$}}_{j}$ and $u^{\mbox{\tiny $R$}}_{ij}$ above coincide with~\eqref{eq:lRij}, \eqref{eq:dRij} and~\eqref{eq:uRij}, respectively. 
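As a quick consistency check, not needed for the proof, the diagonal case $i=j$ of \eqref{eq:tilde u} and \eqref{eq:tilde l} gives \begin{equation*} d^{\mbox{\tiny $R$}}_{j}=\tilde{l}^{\mbox{\tiny $R$}}_{jj}\tilde{u}^{\mbox{\tiny $R$}}_{jj} =\frac{\big(a_1a_2^{-1}t^{-j};t\big)_{n-j}(a_2b_1;t)_j}{(a_1b_2;t)_{n-j}\big(a_1^{-1}a_2t^{-(n-j)};t\big)_j}, \end{equation*} which is the reciprocal of $d^{{\mbox{\tiny $R$}} *}_{j}$ in \eqref{eq:d*Rij}, as required by $D_{\mbox{\tiny $R$}}^{-1}=\big(d^{{\mbox{\tiny $R$}} *}_{j}\delta_{ij}\big)_{0\le i,j\le n}$ in Corollary~\ref{cor:inverseR}.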
On the other hand, from~\eqref{eq:(e)=(f)tildeL'} and \eqref{eq:(f)=(e)tildeU'} in Lemma~\ref{lem:(e)=(f)tildeL'}, we have \begin{gather} \big(e_n(a_2,b_1;z),e_{n-1}(a_2,b_1;z),\ldots,e_0(a_2,b_1;z)\big)\nonumber\\ \qquad{}=\big(e_0(a_1,b_2;z),e_1(a_1,b_2;z),\ldots,e_n(a_1,b_2;z)\big) \tilde{U}'{}_{\!\!\!{\mbox{\tiny $R$}}}\tilde{L}'{}_{\!\!{\mbox{\tiny $R$}}},\langlebel{eq:(e)=(e)tildeU'L'} \end{gather} where $\tilde{U}'{}_{\!\!\!{\mbox{\tiny $R$}}}=\big(\tilde{u}^{{\mbox{\tiny $R$}}\,\prime}_{ij}\big)_{0\le i,j\le n}$ and $\tilde{L}'{}_{\!\!{\mbox{\tiny $R$}}}=\big(\tilde{l}^{{\mbox{\tiny $R$}}\,\prime}_{ij}\big)_{0\le i,j\le n}$ are the matrices given by~\eqref{eq:c-(e)=(f)tildeL'} and~\eqref{eq:c-(f)=(e)tildeU'}, respectively. Comparing \eqref{eq:(e)=(e)tildeU'L'} with \eqref{eq:R=LDU}, we obtain $R=\tilde{U}'{}_{\!\!\!{\mbox{\tiny $R$}}}\tilde{L}'{}_{\!\!{\mbox{\tiny $R$}}} =U'{}_{\!\!\!{\mbox{\tiny $R$}}}D'{}_{\!\!{\mbox{\tiny $R$}}}L'{}_{\!\!{\mbox{\tiny $R$}}}$, i.e., \begin{equation*} u^{{\mbox{\tiny $R$}}\,\prime}_{ij}=\frac{\tilde{u}^{{\mbox{\tiny $R$}}\,\prime}_{ij}}{\tilde{u}^{{\mbox{\tiny $R$}}\,\prime}_{jj}}, \qquad d^{{\mbox{\tiny $R$}}\,\prime}_{j}=\tilde{u}^{{\mbox{\tiny $R$}}\,\prime}_{jj}\,\tilde{l}^{{\mbox{\tiny $R$}}\,\prime}_{jj},\qquad l^{{\mbox{\tiny $R$}}\,\prime}_{ij}=\frac{\tilde{l}^{{\mbox{\tiny $R$}}\,\prime}_{ij}}{\tilde{l}^{{\mbox{\tiny $R$}}\,\prime}_{ii}}. \end{equation*} Lemma \ref{lem:(e)=(f)tildeL'} implies that $u^{{\mbox{\tiny $R$}}\,\prime}_{ij}$, $d^{{\mbox{\tiny $R$}}\,\prime}_{j}$ and $l^{{\mbox{\tiny $R$}}\,\prime}_{ij}$ above coincide with~\eqref{eq:u'Rij}, \eqref{eq:d'Rij} and~\eqref{eq:l'Rij}, respectively. \end{proof} \appendix \section{Appendix}\langlebel{sectionA} In this appendix we consider the Gauss decomposition part $A=U'{}_{\!\!\!{\mbox{\tiny $A$}}}D'{}_{\!\!{\mbox{\tiny $A$}}}L'{}_{\!\!{\mbox{\tiny $A$}}}$ of Theorem~\ref{thm:main}. Since the method to compute $A=U'{}_{\!\!\!{\mbox{\tiny $A$}}}D'{}_{\!\!{\mbox{\tiny $A$}}}L'{}_{\!\!{\mbox{\tiny $A$}}}$ is almost the same as that to compute $A=L_{\mbox{\tiny $A$}} D_{\mbox{\tiny $A$}} U_{\mbox{\tiny $A$}}$, we only give the outline of the proof. For this purpose, we define another family of interpolation polynomials $\tilde{E}'_{k,i}(a,b;z)$ slightly different from (\ref{eq:Eki2}). Set \begin{equation*} \tilde{E}'_{k,i}(z)=\tilde{E}'_{k,i}(a,b;z):={\cal A}E'_{k,i}(a,b;z)/\Delta(z), \end{equation*} where \begin{equation*} E'_{k,i}(a,b;z):=\underbrace{z_{n-k+1}z_{n-k+2}\cdots z_n\phantom{\Big|}\!\!}_k \Delta(t;z) \prod_{j=1}^{n-i}(1-bz_j)\prod_{j=n-i+1}^n\big(1-a^{-1}z_j\big). \end{equation*} We now specify $a=a_1$, $b=b_2$, i.e., we set $\tilde{E}'_{k,i}(z)=\tilde{E}'_{k,i}(a_1,b_2;z)$ throughout this section. \begin{Lemma}[three-term relations] Suppose $k\le i$. 
Then, \begin{gather} a_2^{-1}\big(1-q^\alpha a_1a_2b_1b_2 t^{2n-k-1}\big)\big\langle\tilde{E}'_{k,i}\big\rangle\nonumber\\ \qquad{} =t^{k-1}\big(1-q^\alpha a_1b_1t^{2n-k-i}\big)\big\langle\tilde{E}'_{k-1,i}\big\rangle - q^\alpha t^{n-1}\big(1-a_1b_1t^{n-i}\big)\big\langle\tilde{E}'_{k-1,i-1}\big\rangle.\langlebel{eq:3term03} \end{gather} On the other hand, if $k\ge i$, then, \begin{gather} a_1^{-1}\big(1-q^\alpha a_1b_1t^{2n-k-i}\big)\big\langle\tilde{E}'_{k,i-1}\big\rangle\nonumber\\ \qquad{} = t^{k-i}\big(1-q^\alpha t^{n-k}\big)\big\langle\tilde{E}'_{k-1,i-1}\big\rangle -a_2^{-1}t^{-(i-1)}\big(1-a_2b_2 t^{i-1}\big)\big\langle\tilde{E}'_{k,i}\big\rangle.\langlebel{eq:3term04} \end{gather} \end{Lemma} \begin{proof}Put $ \tilde{\varphi}'_{k,i-1}(z) :={\cal A}\nabla_{\!1}\varphi'_{k,i-1}(z) $, where \begin{equation*} \varphi'_{k,i-1}(z):=\big(1-a_1^{-1}z_1\big)\big(1-a_2^{-1}z_1\big)\prod_{j=2}^n(z_1-tz_j) \times E_{k-1,i-1}^{(n-1)}(z_2,\ldots,z_n). \end{equation*} Then, by a similar argument to that used in the proof of Lemma~\ref{lem:3term1st-01}, it follows that, if $k\le i$, the polynomial $\tilde{\varphi}'_{k,i-1}(z)$ satisfies \begin{equation} \langlebel{eq:3term03phi} \frac{\tilde{\varphi}'_{k,i-1}(z)}{\Delta(z)} =c'_{k,i}\tilde{E}'_{k,i}(z)+c'_{k-1,i}\tilde{E}'_{k-1,i}(z)+c'_{k-1,i-1}\tilde{E}'_{k-1,i-1}(z), \end{equation} where \begin{gather*} c'_{k,i}=-a_2^{-1}t^{n-1}\big(1-q^\alpha a_1a_2b_1b_2 t^{2n-k-1}\big),\\ c'_{k-1,i}=t^{n+k-2}\big(1-q^\alpha a_1b_1t^{2n-k-i}\big),\\ c'_{k-1,i-1}=-q^\alpha t^{2n-2}\big(1-a_1b_1t^{n-i}\big), \end{gather*} while, if $i\le k$, then $\tilde{\varphi}'_{k,i-1}(z)$ satisfies \begin{equation}\langlebel{eq:3term04phi} \frac{\tilde{\varphi}'_{k,i-1}(z)}{\Delta(z)}=d'_{k,i}\tilde{E}'_{k,i}(z)+d'_{k,i-1}\tilde{E}'_{k,i-1}(z) +d'_{k-1,i-1}\tilde{E}'_{k-1,i-1}(z), \end{equation} where \begin{gather*} d'_{k,i}=-a_2^{-1}t^{n-1}\big(1-a_2b_2 t^{i-1}\big),\\ d'_{k,i-1}=-a_1^{-1}t^{n+i-2}\big(1-q^\alpha a_1b_1t^{2n-k-i}\big),\\ d'_{k-1,i-1}=t^{n+k-2}\big(1-q^\alpha t^{n-k}\big). \end{gather*} Using the expressions (\ref{eq:3term03phi}) and (\ref{eq:3term04phi}) for $\tilde{\varphi}_{k,i-1}(z)$, (\ref{eq:3term03}) and~(\ref{eq:3term04}) follow by application of Lemma~\ref{lem:nabla=0}. \end{proof} By repeated use of the three-term relations~\eqref{eq:3term03} and~\eqref{eq:3term04}, we obtain the following. \begin{Lemma} If $k\le i$, then \begin{equation*} \big\langle\tilde{E}'_{k,i}\big\rangle=\sum_{j=0}^l U'^{k,i}_{k-l,i-j}\big\langle\tilde{E}'_{k-l,i-j}\big\rangle, \end{equation*} where \begin{equation*} U'^{k,i}_{k-l,i-j} =\big({-}q^\alpha t^{n-k+l-1}\big)^j \big(a_2t^{k-l}\big)^lt^{{l-j \choose 2}} \qbin{l}{j}{t} \frac{\big(a_1b_1t^{n-i};t\big)_j\big(q^\alpha a_1b_1 t^{2n-k-i+j};t\big)_{l-j} }{\big(q^\alpha a_1a_2b_1b_2 t^{2n-k-1};t\big)_l}, \end{equation*} while, if $k\ge i$, then \begin{equation*} \big\langle\tilde{E}'_{k,i}\big\rangle=\sum_{j=0}^l L'^{k,i}_{k-l+j,i+j}\big\langle\tilde{E}'_{k-l+j,i+j}\big\rangle, \end{equation*} where \begin{equation*} L'^{k,i}_{k-l+j,i+j} = \qbin{l}{j}{t} \frac{\big({-}a_2^{-1}t^{-(k-1)}\big)^j\big(a_1t^{k-i-1}\big)^lt^{-{l\choose 2}}\big(q^\alpha t^{n-k};t\big)_{l-j}\big(a_2b_2t^i;t\big)_j} {\big(q^\alpha a_1b_1 t^{2n-k-i-j-1};t\big)_{l-j}\big(q^\alpha a_1b_1 t^{2n-k-i-2j+l};t\big)_j}. \end{equation*} \end{Lemma} As a special case of the above lemma we immediately have the following. 
\begin{Lemma}\langlebel{lem:u'l'-A} For $0\le j\le n$, $\big\langle \tilde{E}'_{j,j}\big\rangle$ is expressed as \begin{equation}\langlebel{eq:<E'j,j>} \big\langle\tilde{E}'_{j,j}\big\rangle=\sum_{i=0}^j \tilde{u}'_{ij}\big\langle\tilde{E}'_{0,i}\big\rangle, \end{equation} where \begin{equation}\langlebel{eq:<E'j,j>c} \tilde{u}'_{ij}=U'^{j,j}_{0,i}=\big({-}q^\alpha t^{n-1}\big)^{j-i}a_2^jt^{{i\choose 2}} \qbin{j}{i}{t} \frac{\big(a_1b_1t^{n-j};t\big)_{j-i}\big(q^\alpha a_1b_1 t^{2n-i-j};t\big)_i }{\big(q^\alpha a_1a_2b_1b_2 t^{2n-j-1};t\big)_j}, \end{equation} while, for $0\le j\le n$, $\big\langle \tilde{E}'_{n,j}\big\rangle$ is expressed as \begin{equation}\langlebel{eq:<E'n,j>} \big\langle\tilde{E}'_{n,j}\big\rangle=\sum_{i=j}^n \tilde{l}'_{ij}\big\langle\tilde{E}'_{i,i}\big\rangle, \end{equation} where \begin{equation}\langlebel{eq:<E'n,j>c} \tilde{l}'_{ij}=L'^{n,j}_{i,i}= (-1)^{i-j} \qbin{n-j}{n-i}{t} \frac{a_1^{n-j}a_2^{-(i-j)}t^{{n-i\choose 2}+{j\choose 2}-{i\choose 2}}\big(q^\alpha;t\big)_{n-i}\big(a_2b_2t^j;t\big)_{i-j}} {\big(q^\alpha a_1b_1 t^{n-i-1};t\big)_{n-i}\big(q^\alpha a_1b_1 t^{2(n-i)};t\big)_{i-j}}. \end{equation} \end{Lemma} \begin{proof}[Proof of \eqref{eq:u'Aij}--\eqref{eq:l'Aij} in Theorem \ref{thm:main}] From (\ref{eq:<E'j,j>}), we have \begin{equation*} \big(\big\langle \tilde{E}'_{0,0}\big\rangle,\big\langle \tilde{E}'_{1,1}\big\rangle,\ldots,\big\langle \tilde{E}'_{n-1,n-1}\big\rangle,\big\langle \tilde{E}'_{n,n}\big\rangle\big) =\big(\big\langle \tilde{E}'_{0,0}\big\rangle,\big\langle \tilde{E}'_{0,1}\big\rangle,\ldots,\big\langle \tilde{E}'_{0,n-1}\big\rangle,\big\langle \tilde{E}'_{0,n}\big\rangle\big)\tilde{U}', \end{equation*} where the matrix $\tilde{U}'=\big(\tilde{u}'_{ij}\big)_{0\le i,j\le n}$ is defined by~(\ref{eq:<E'j,j>c}). Moreover, from~(\ref{eq:<E'n,j>}) we have \begin{align} \big(\big\langle \tilde{E}'_{n,0}\big\rangle,\big\langle \tilde{E}'_{n,1}\big\rangle,\ldots,\big\langle \tilde{E}'_{n,n-1}\big\rangle, \big\langle \tilde{E}'_{n,n}\big\rangle\big) &=\big(\big\langle \tilde{E}'_{0,0}\big\rangle,\big\langle \tilde{E}'_{1,1}\big\rangle,\ldots,\big\langle \tilde{E}'_{n-1,n-1}\big\rangle,\big\langle \tilde{E}'_{n,n}\big\rangle\big)\tilde{L}' \nonumber\\ &=\big(\big\langle \tilde{E}'_{0,0}\big\rangle,\big\langle \tilde{E}'_{0,1}\big\rangle,\ldots,\big\langle \tilde{E}'_{0,n-1}\big\rangle,\big\langle \tilde{E}'_{0,n}\big\rangle\big)\tilde{U}'\tilde{L}',\langlebel{eq:-U-L} \end{align} where the matrix $\tilde{L}'=\big(\tilde{l}'_{ij}\big)_{0\le i,j\le n}$ is defined by (\ref{eq:<E'n,j>c}). Since $T_{\alpha}\Phi(z)=z_1z_2\cdots z_n\Phi(z)$ and $z_1z_2\cdots z_n \tilde{E}'_{0,i}(z)=\tilde{E}'_{n,i}(z)$, we have $T_{\alpha}\big\langle \tilde{E}'_{0,i}\big\rangle=\big\langle \tilde{E}'_{n,i}\big\rangle$, i.e., \begin{equation}\langlebel{eq:T-a(E')} T_{\alpha}\big(\big\langle \tilde{E}'_{0,0}\big\rangle,\big\langle \tilde{E}'_{0,1}\big\rangle,\ldots,\big\langle \tilde{E}'_{0,n-1}\big\rangle,\big\langle \tilde{E}'_{0,n}\big\rangle\big) =\big(\big\langle \tilde{E}'_{n,0}\big\rangle,\big\langle \tilde{E}'_{n,1}\big\rangle,\ldots,\big\langle \tilde{E}'_{n,n-1}\big\rangle,\big\langle \tilde{E}'_{n,n}\big\rangle\big). 
\end{equation} From (\ref{eq:-U-L}) and (\ref{eq:T-a(E')}), we obtain the difference system \begin{equation*} T_{\alpha}\big(\big\langle \tilde{E}'_{0,0}\big\rangle,\big\langle \tilde{E}'_{0,1}\big\rangle,\ldots,\big\langle \tilde{E}'_{0,n-1}\big\rangle,\big\langle \tilde{E}'_{0,n}\big\rangle\big) =\big(\big\langle \tilde{E}'_{0,0}\big\rangle,\big\langle \tilde{E}'_{0,1}\big\rangle,\ldots,\big\langle \tilde{E}'_{0,n-1}\big\rangle,\big\langle \tilde{E}'_{0,n}\big\rangle\big)\tilde{U}'\tilde{L}'. \end{equation*} Comparing this with (\ref{eq:main}), we therefore obtain $A=\tilde{U}'\tilde{L}' =U'{}_{\!\!\!{\mbox{\tiny $A$}}}D'{}_{\!\!{\mbox{\tiny $A$}}}L'{}_{\!\!{\mbox{\tiny $A$}}}, $ i.e., \begin{equation*} u^{{\mbox{\tiny $A$}}\,\prime}_{ij}=\frac{\tilde{u}'_{ij}}{\tilde{u}'_{jj}} , \qquad d^{{\mbox{\tiny $A$}}\,\prime}_{j}=\tilde{u}'_{jj}\tilde{l}'_{jj}, \qquad l^{{\mbox{\tiny $A$}}\,\prime}_{ij}=\frac{\tilde{l}'_{ij}}{\tilde{l}'_{ii}} . \end{equation*} Lemma \ref{lem:u'l'-A} implies that $u^{{\mbox{\tiny $A$}}\,\prime}_{ij}$, $d^{{\mbox{\tiny $A$}}\,\prime}_{j}$ and $l^{{\mbox{\tiny $A$}}\,\prime}_{ij}$ above coincide with \eqref{eq:u'Aij}, \eqref{eq:d'Aij} and \eqref{eq:l'Aij}, respectively, which completes the proof. \end{proof} Finally we give an explicit forms for ${L'_A}^{\!-1}$. \begin{Proposition}\langlebel{prop:inverseL'_A} The inverse matrix $L'{}_{\!\!{\mbox{\tiny $A$}}}^{-1}=\big(l^{{\mbox{\tiny $A$}}\,\prime *}_{ij}\big)_{0\le i,j\le n}$ is lower triangular and is written as \begin{equation*} l^{{\mbox{\tiny $A$}}\,\prime *}_{ij} = \qbin{n-j}{n-i}{t} \frac{\big(a_1a_2^{-1}t^{-j}\big)^{i-j}\big(a_2b_2t^j;t\big)_{i-j}} {\big(q^\alpha a_1b_1 t^{2n-i-j-1};t\big)_{i-j}}. \end{equation*} \end{Proposition} \begin{proof}Using \eqref{eq:3term04} we can calculate the entries $l^{{\mbox{\tiny $A$}}\,\prime *}_{ij}$ of the lower triangular matrix $L'{}_{\!\!{\mbox{\tiny $A$}}}^{-1}$ by completely the same way as Proposition~\ref{prop:inverseU_A}. We omit the details. \end{proof} \LastPageEnding \end{document}
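The explicit entries above lend themselves to a quick numerical test. The following short sketch is an editorial verification aid, not part of the argument: the helper names (poch, qbin, l_tilde, l_star) and the rational parameter values standing in for $a_1,a_2,b_1,b_2,t,q^\alpha$ are ours. It builds the unit lower-triangular matrix $L'_{\!A}$ from the entries $\tilde{l}'_{ij}$ and multiplies it by the matrix of the last proposition; if the two displays have been transcribed faithfully, the product is the identity.
\begin{verbatim}
from fractions import Fraction as Fr

def poch(a, t, k):
    """q-shifted factorial (a; t)_k = prod_{m=0}^{k-1} (1 - a t^m)."""
    out = Fr(1)
    for m in range(k):
        out *= 1 - a * t**m
    return out

def qbin(l, j, t):
    """t-binomial coefficient [l choose j]_t."""
    return poch(t, t, l) / (poch(t, t, j) * poch(t, t, l - j))

def c2(m):                       # binomial(m, 2)
    return m * (m - 1) // 2

n = 4
a1, a2, b1, b2 = Fr(2), Fr(3), Fr(3), Fr(7)
t, qa = Fr(1, 2), Fr(1, 7)       # qa stands in for q^alpha

def l_tilde(i, j):               # tilde-l'_{ij}; nonzero only for i >= j
    if i < j:
        return Fr(0)
    num = ((-1) ** (i - j) * qbin(n - j, n - i, t) * a1 ** (n - j) * a2 ** (j - i)
           * t ** (c2(n - i) + c2(j) - c2(i))
           * poch(qa, t, n - i) * poch(a2 * b2 * t**j, t, i - j))
    den = (poch(qa * a1 * b1 * t ** (n - i - 1), t, n - i)
           * poch(qa * a1 * b1 * t ** (2 * (n - i)), t, i - j))
    return num / den

def l_star(i, j):                # claimed entry of the inverse of L'_A
    if i < j:
        return Fr(0)
    num = (qbin(n - j, n - i, t) * (a1 / a2 * t ** (-j)) ** (i - j)
           * poch(a2 * b2 * t**j, t, i - j))
    return num / poch(qa * a1 * b1 * t ** (2 * n - i - j - 1), t, i - j)

L = [[l_tilde(i, j) / l_tilde(i, i) for j in range(n + 1)] for i in range(n + 1)]
Linv = [[l_star(i, j) for j in range(n + 1)] for i in range(n + 1)]
prod = [[sum(L[i][k] * Linv[k][j] for k in range(n + 1)) for j in range(n + 1)]
        for i in range(n + 1)]
assert all(prod[i][j] == (1 if i == j else 0)
           for i in range(n + 1) for j in range(n + 1))
print("L'_A times the claimed inverse is the identity for n =", n)
\end{verbatim}
Exact rational arithmetic is used throughout, so the identity can be tested by equality rather than within a numerical tolerance.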
\begin{document} \begin{abstract} We study definable sets in power series fields with perfect residue fields. We show that certain `one-dimensional' definable sets are in fact existentially definable. This allows us to apply results from \cite{Anscombe?2} about existentially definable sets to one-dimensional definable sets. More precisely, let $F$ be a perfect field and let $\mathbf{a}$ be a tuple from $F((t))$ of transcendence degree $1$ over $F$. Using the description of $F$-automorphisms of $F((t))$ given by Schilling, in \cite{Schilling44}, we show that the orbit of $\mathbf{a}$ under $F$-automorphisms is existentially definable in the ring language with parameters from $F(t)$. We deduce the following corollary. Let $X$ be an $F$-definable subset of $F((t))$ which is not contained in $F$, then the subfield generated by $X$ is equal to $F((t^{p^{n}}))$, for some $n<\omega$. \end{abstract} \maketitle Let $F$ be a fixed \bf perfect \rm field and let $v$ denote the $t$-adic valuation on the power series field $F((t))$. The valuation ring of $v$ is $F[[t]]$ and the maximal ideal is $tF[[t]]$. Let $\mathcal{U}$ denote the set of uniformisers in $F((t))$, i.e. those elements of value $1$. Let $\mathcal{L}_{\mathrm{ring}}:=\{+,\cdot,0,1\}$ be the \em language of rings \rm and let $\mathcal{L}_{vf}:=\mathcal{L}_{\mathrm{ring}}\cup\{O\}$ be the \em language of valued fields, \rm which is an expansion of $\mathcal{L}_{\mathrm{ring}}$ by a unary predicate $O$ (intended to be interpreted as the valuation ring). Let $\mathcal{L}_{\mathrm{ring}}(F)$ and $\mathcal{L}_{vf}(F)$ denote the expansion of each language by constants for elements of $F$. For a tuple $\mathbf{a}\subseteq F((t))$, we let $\mathrm{Orb}(\mathbf{a})$ denote the orbit of $\mathbf{a}$ under the $\mathcal{L}_{vf}(F)$-automorphisms of $F((t))$, and let $\mathrm{tp}(\mathbf{a})$ denote the $\mathcal{L}_{vf}(F)$-type of $\mathbf{a}$. In a slight abuse of notation, we write $(F((t)),v)$ in place of the $\mathcal{L}_{vf}$-structure $(F((t)),F[[t]])$. Let $p$ be the characteristic exponent of $F$, i.e if $\mathrm{char}(F)>0$ then $p:=\mathrm{char}(F)$, and otherwise $p:=1$. The well-known theorem of Ax-Kochen/Ershov (see for example Theorem 3, \cite{Ax-Kochen66}) gives an axiomatisation of the $\mathcal{L}_{vf}$-theory of $(F((t)),v)$ in the case that $\mathrm{char}(F)=0$. However, there is no corresponding known axiomatisation for the theory of $(F((t)),v)$ if $\mathrm{char}(F)>0$; neither is there a description of the definable sets in this structure. This note provides a small step forward by studying the `one-dimensional' $F$-definable subsets of $F((t))$, i.e. those $F$-definable sets which contain a tuple of transcendence degree $1$ over $F$. Since the $\mathrm{char}(F)=0$ case is so well-understood, the reader might like to focus on the case $\mathrm{char}(F)>0$, although our results hold for arbitrary characteristic. In \autoref{section:theorem} we prove the following theorem. \begin{theorem}\label{thm:1-dim} Let $\mathbf{a}$ be a tuple from $F((t))$ of transcendence degree $1$ over $F$. Then $\mathrm{Orb}(\mathbf{a})$ is \begin{enumerate} \item $\exists$-$\mathcal{L}_{\mathrm{ring}}(F(t))$-definable (i.e. definable by an existential $\mathcal{L}_{\mathrm{ring}}(F(t))$-formula), \item $\mathcal{L}_{\mathrm{ring}}(F)$-definable, and \item equal to the type $\mathrm{tp}(\mathbf{a})$ of $\mathbf{a}$ over $F$. 
\end{enumerate} \end{theorem} \noindent By combining this with work from \cite{Anscombe?2}, in \autoref{section:corollary} we are able to deduce the following corollary. \begin{corollary}\label{cor:subfields} Let $X\subseteq F((t))$ be an $\mathcal{L}_{vf}(F)$-definable subset. Then either $X\subseteq F$ or there exists $n<\omega$ such that $$(X)=F((t^{p^{n}})),$$ where $(X)$ denotes the subfield of $F((t))$ generated by $X$. \end{corollary} \noindent Finally, we give corollaries about subfields generated by $\mathcal{L}_{vf}$-definable subsets of $\mathbb{F}_{p}((t))$ and $\mathbb{F}_{p}((t))^{\mathrm{perf}}$. \section{$F$-automorphisms of $F((t))$} \label{section:automorphisms} Schilling gives, in \cite{Schilling44}, a description of the $\mathcal{L}_{vf}(F)$-automorphisms of $F((t))$, and their representation as substitutions $t\longmapsto s$ for $s\in\mathcal{U}$. In Lemma 1 of \cite{Schilling44}, Schilling shows that all $\mathcal{L}_{\mathrm{ring}}$-automorphisms are in fact $\mathcal{L}_{vf}$-automorphisms. Let $\mathbf{G}$ denote the group of $\mathcal{L}_{vf}(F)$-automorphisms of $F((t))$. For $b\in F((t))$, let $\mathrm{Orb}(b)$ denote the orbit of $b$ under the action of $\mathbf{G}$. \begin{fact}\rm[\bf Theorem 1, \cite{Schilling44}\rm] \em Let $\circ:F((t))\times\mathcal{M}\longrightarrow F((t))$ denote the composition map. It is continuous map. The restriction of $\circ$ to $\mathcal{U}\times\mathcal{U}$ is associative, $t$ is the identity element, and every element is invertible. For each $s\in\mathcal{M}$, the map given by $x\longmapsto x\circ s$ is a ring homomorphism. Thus $(\mathcal{U},\circ)$ is a group which acts on $F((t))$ as a group of $F$-automorphisms. The corresponding representation $(\mathcal{U},\circ)\longrightarrow\mathbf{G}$ is an isomorphism. \end{fact} In particular, we have the following. \begin{fact}\label{fact:orb(t)} $\mathcal{U}=\mathrm{Orb}(t)$. \end{fact} For $n>1$, let $\mathbf{G}_{n}$ denote the subgroup of $\mathbf{G}$ of those automorphisms corresponding to substitutions $t\longmapsto s$, for $s\in t+\mathcal{M}^{n}$. In Theorem 3 of \cite{Schilling44}, Schilling proves that these groups are the same as the \em pseudo-ramification groups \rm of MacLane, see Section 9 of \cite{MacLane39iii}. For $b\in F((t))$ and $n>1$, let $\mathrm{Orb}_{n}(b)$ denote the orbit of $b$ under the action of $\mathbf{G}_{n}$. Recall that $f\in F[[t]]$ may also be thought of as a function $$\begin{array}{rll} f:tF[[t]]&\longrightarrow&F[[t]]\\ x&\longmapsto&f(x):=f\circ x \end{array}.$$ \begin{fact}\label{fact:orb(f)} Let $f\in F[[t]]$ and let $n>1$. Then $f(t+\mathcal{M}^{n})=\mathrm{Orb}_{n}(f(t))$. \end{fact} \section{A Hensel-like Lemma} \label{section:hensel.like} In this section we prove a `Hensel-like' Lemma (\autoref{prp:Hensel-like}) in the ring $F[[t]]$ of formal power series over an \em arbitrary \rm field $F$. \autoref{prp:Hensel-like} can be deduced from a version of Newton's Lemma for power series, but we give a direct proof. For $N\in\mathbb{Z}$, let $$B(N,a):=\{b\in F((t))\;|\;v(b-a)>N\}$$ denote the \em open ball of radius $m$ around $a$\rm. For tuples $\mathbf{N}=(N_{1},...,N_{s})\subseteq\mathbb{Z}$ and $\mathbf{a}=(a_{1},...,a_{s})\subseteq F((t))$, we write $$\mathbf{B}(\mathbf{N};\mathbf{a}):=B(N_{1};a_{1})\times...\times B(N_{s};a_{s}).$$ \begin{proposition}\label{prp:Hensel-like} Suppose that $f\in F[[t]]\setminus F[[t]]^{p}$. 
Then, for each $n<\omega$, there exists $N<\omega$ such that$$B(N;f(t))\subseteq f(t+\mathcal{M}^{n}).$$ \end{proposition} The rough idea is as follows. Let $(y_{j})_{j\geq1}$ be `formal indeterminates' over $F$ (e.g. algebraically independent over $F$ in a field extension linearly disjoint from $F((t))/F$) and let $y:=\sum_{j\geq1}y_{j}t^{j}$. We study the power series $$f(y)\in F[(y_{j})_{j\geq1}][[t]].$$ The coefficients of $f(y)$ are polynomials in finitely many of the variables $(y_{j})$. We show that the $h$-th coefficient is a polynomial in the variables $(y_{1},...,y_{h'})$, where the function $h\longmapsto h'$ is eventually strictly increasing. This allows us to choose $N$ so that we may recursively define solutions to the equations $f(y)=b$, for $b\in B(N;f(t))$.\\\\ \noindent Without further comment, we shall assume that $f,b,y$ are written as: $$\begin{array}{lll} f=\sum_{i}a_{i}t^{i},&b=\sum_{h}b_{h}t^{h},&y=\sum_{j\geq1}y_{j}t^{j}. \end{array}$$ For $i<\omega$, we write $y^{i}=\sum_{j}y^{(i)}_{j}t^{j}$. \begin{lemma}\label{lem:y^i_j.1} Let $i,j<\omega$ be such that $p\nmid i$. Then there exists $Y^{(i)}_{j}\in\mathbb{F}[y_{1},...,y_{j-i}]$ such that$$y^{(i)}_{j}=Y^{(i)}_{j}+iy_{1}^{i-1}y_{j-i+1}.$$In particular, $y^{(i)}_{j}\in\mathbb{F}[y_{1},...,y_{j-i+1}]$. \end{lemma} \begin{proof} We have that $$y_{j}^{(i)}=\sum_{\sum_{r=1}^{i}j_{r}=j}\bigg(\prod_{r=1}^{i}y_{j_{r}}\bigg).$$ We observe that $y_{j}^{(i)}$ is a polynomial in the variables $(y_{j})_{j<\omega}$. The variable with the highest index that occurs nontrivially is $y_{j-i+1}$ and the only term in which $y_{j-i+1}$ appears is $iy_{1}^{i-1}y_{j-i+1}$. \end{proof} \begin{lemma}\label{lem:y^i_j.2} Let $i,j,k,l<\omega$ be such that $i=kp^{l}$ and $p\nmid k$. Then $$y_{j}^{(i)}=\left\{ \begin{array}{ll} 0&\text{if }p^{l}\nmid j\\ Y_{jp^{-l}}^{(k)}+ky_{1}^{k-1}y_{jp^{-l}-k+1}&\text{if }p^{l}\mid j. \end{array}\right.$$ In particular, $y^{(i)}_{j}\in\mathbb{F}[y_{1},...,y_{jp^{-l}-k+1}]$. \end{lemma} \begin{proof} First we note that $y^{(i)}_{j}=y^{(k)}_{jp^{-l}}$. Then the conclusion is immediate from \autoref{lem:y^i_j.1}. \end{proof} \autoref{lem:y^i_j.2} motivates the study of the functions $h\longmapsto hp^{-l}-k+1$. \begin{definition} Let $i_{0}:=\mathrm{min}\{i\;|\;a_{i}\neq0\text{ and }p\nmid i\}$ and let$$N':=\left\lceil\mathrm{max}\bigg\{\frac{i_{0}-k}{1-p^{-1}}\;\bigg|\;k<i_{0}\bigg\}\right\rceil.$$ \end{definition} Note that $i_{0}$ is \bf not \rm a valuation and it is well-defined by our assumption that $f\notin F[[x]]$. \begin{lemma}\label{lem:monomial-in-chief} Let $h<\omega$ be such that $N'<h$; and let $i<\omega$ be such that $i\neq i_{0}$ and $a_{i}\neq0$. Choose $k,l<\omega$ such that $i=kp^{l}$ and $p\nmid k$. Then we have$$hp^{-l}-k+1<h-i_{0}+1.$$ \end{lemma} \begin{proof} First, suppose that $k<i_{0}$. Then we must have $0<l$, by definition of $i_{0}$. We have $$\begin{array}{lllll} \frac{i_{0}-k}{1-p^{-1}}&\leq&N'&<&h. \end{array}$$ A simple rearrangement gives: $$hp^{-1}-k+1<h-i_{0}+1.$$ Thus $$\begin{array}{ll} hp^{-l}-k+1&\leq hp^{-1}-k+1\\ &<h-i_{0}+1, \end{array}$$ as required. On the other hand, suppose that $i_{0}\leq k$. Then $$\begin{array}{ll} hp^{-l}-k+1&\leq h-k+1\\ &\leq h-i_{0}+1. \end{array}$$ It is clear that equality holds if and only if $i_{0}=k$ and $l=0$; i.e. if and only if $i=i_{0}$. \end{proof} For $h<\omega$, let $C_{h}(\sum_{i}a_{i}t^{i}):=a_{h}$. \begin{lemma}\label{lem:combined.coef.functions} Let $h<\omega$ be such that $h>N'$. 
Then there exists $Z_{h}\in\mathbb{F}(a_{i})_{i<\omega}[y_{1},...,y_{h-i_{0}}]$ such that$$C_{h}(f(y))=Z_{h}+a_{i_{0}}i_{0}y_{1}^{i_{0}-1}y_{h-i_{0}+1}.$$In particular, $C_{h}(f(y))\in\mathbb{F}(a_{i})_{i<\omega}[y_{1},...,y_{h-i_{0}+1}]$. \end{lemma} \begin{proof} Combining \autoref{lem:monomial-in-chief} and \autoref{lem:y^i_j.2}, we have that $\sum_{i\neq i_{0}}a_{i}y_{h}^{(i)}\in\mathbb{F}(a_{i})_{i<\omega}[y_{1},...,y_{h-i_{0}}]$. Another application of \autoref{lem:y^i_j.2} gives that there exists $Y_{h}^{(i_{0})}\in\mathbb{F}(a_{i})_{i<\omega}[y_{1},...,y_{h-i_{0}}]$ such that $$y_{h}^{(i_{0})}=Y_{h}^{(i_{0})}+i_{0}y_{1}^{i_{0}-1}y_{h-i_{0}+1}.$$ Now: $$\begin{array}{ll} C_{h}(f(y))&=C_{h}(\sum_{i}a_{i}y^{i})\\ &=C_{h}(\sum_{i}a_{i}\sum_{j}y^{(i)}_{j}t^{j})\\ &=\sum_{i}a_{i}C_{h}(\sum_{j}y^{(i)}_{j}t^{j})\\ &=\sum_{i}a_{i}y^{(i)}_{h}\\ &=\sum_{i\neq i_{0}}a_{i}y^{(i)}_{h}+a_{i_{0}}(Y_{h}^{(i_{0})}+i_{0}y_{1}^{i_{0}-1}y_{h-i_{0}+1})\\ &=Z_{h}+a_{i_{0}}i_{0}y_{1}^{i_{0}-1}y_{h-i_{0}+1}, \end{array}$$ where $Z_{h}:=\sum_{i\neq i_{0}}a_{i}y^{(i)}_{h}+a_{i_{0}}Y_{h}^{(i_{0})}\in\mathbb{F}(a_{i})_{i<\omega}[y_{1},...,y_{h-i_{0}}]$. \end{proof} \subsection{The proof of Proposition \ref{prp:Hensel-like}} Choose $N<\omega$ such that $N\geq N'$ and $N-i_{0}+1\geq n-1$. Let $b\in B(N;f(t))$. We seek $y\in t+\mathcal{M}^{n}$ such that$$f(y)=b.$$We rephrase this goal: we seek $(y_{j})_{j<\omega}\subseteq F$ such that: \begin{enumerate} \item $\sum_{j<\omega}y_{j}t^{j}\in t+\mathcal{M}^{n}$ and \item for each $h<\omega$, we have $C_{h}(f(\sum y_{j}t^{j}))=b_{h}$. \end{enumerate} Set \begin{enumerate} \item $y_{1}:=1$ and \item $y_{i}:=0$, for $i\in\{2,...,N-i_{0}+1\}$. \end{enumerate} Then $\sum_{j=1}^{N-i_{0}+1}y_{j}t^{j}=t$. Trivially we have: \begin{enumerate} \item $t\in t+\mathcal{M}^{n}$ and \item $C_{h}(f(t))=b_{h}$, for all $h\leq N$. \end{enumerate} We now recursively define $y_{j}$, for $j>N-i_{0}+1$. Let $H>N$ and suppose that we have defined $y_{j}$ for $j<H-i_{0}+1$ such that$$C_{h}(f(d))=b_{h},$$ for all $h<H$, where $d:=\sum_{j=1}^{H-i_{0}}y_{j}t^{j}$. By rearranging the formula in \autoref{lem:combined.coef.functions}, we may choose $y_{H-i_{0}+1}\in F$ such that $$C_{H}(f(e))=b_{H},$$ where $e:=d+y_{H-i_{0}+1}t^{H-i_{0}+1}$. It is also clear that $$C_{h}(f(e))=C_{h}(f(d))=b_{h},$$for $h<H$. Then $y:=\sum_{j<\omega}y_{j}t^{j}$ is as required. This completes the proof of \autoref{prp:Hensel-like}. \section{Orbits are `nearly open'} \begin{lemma}\label{lem:nearly.open.1} Let $a\in F((t))\setminus F((t))^{p}$ and let $n<\omega$. Then there exists $N<\omega$ such that $$B(N;a)\subseteq\mathrm{Orb}_{n}(a).$$ \end{lemma} \begin{proof} First we suppose that $a\in F[[t]]$. By applying \autoref{prp:Hensel-like} to $f:=a$, there exists $N<\omega$ such that $B(N;f(t))\subseteq f(t+\mathcal{M}^{n})$. By \autoref{fact:orb(f)}, $f(t+\mathcal{M}^{n})=\mathrm{Orb}_{n}(a)$. If, on the other hand, $a\notin F[[t]]$ then $a^{-1}\in F[[t]]$, so there exists $N<\omega$ such that $B(N;a^{-1})\subseteq\mathrm{Orb}_{n}(a^{-1})$. Since the map $x\longmapsto x^{-1}$ is continuous, there exists $N'<\omega$ such that $B(N';a)\subseteq\mathrm{Orb}_{n}(a^{-1})^{-1}=\mathrm{Orb}_{n}(a)$, as required. \end{proof} We now extend \autoref{lem:nearly.open.1} to elements of $F((t))^{p}\setminus F$. \begin{lemma}\label{lem:nearly.open.2} Let $b\in F((t))\setminus F$ and let $n\in\mathbb{N}$. 
There exists $l,N<\omega$ such that$$B(N;b)\cap F((t))^{p^{l}}\subseteq\mathrm{Orb}_{n}(b).$$ \end{lemma} \begin{proof} Let $l\in\mathbb{N}$ be such that $b\in F((t))^{p^{l}}\setminus F((t))^{p^{l+1}}$. Set $a:=b^{p^{-l}}$. By \autoref{lem:nearly.open.1}, there exists $N'<\omega$ such that $B(N';a)\subseteq\mathrm{Orb}_{n}(a)$. Let $N:=p^{l}(N'+1)-1$. For $x\in F((t))$ we have $$\begin{array}{lll} v(x-a)>N'&\text{ iff }&v(x-a)\geq N'+1\\ &\text{ iff }&v((x-a)^{p^{l}})\geq p^{l}(N'+1)\\ &\text{ iff }&v(x^{p^{l}}-b)>p^{l}(N'+1)-1=N. \end{array}$$ Thus $x\in B(N';a)$ if and only if $x^{p^{l}}\in B(N;b)$. Therefore $B(N;b)\cap F((t))^{p^{l}}\subseteq\mathrm{Orb}_{n}(b)$. \end{proof} \begin{lemma}\label{lem:continuity} Let $c\in F((t))\setminus F$ and let $N<\omega$. Then there exists $n<\omega$ such that $\mathrm{Orb}_{n}(c)\subseteq B(N;c)$. \end{lemma} \begin{proof} This follows from the continuity of the map $u\longmapsto c\circ u$. \end{proof} \section{A description of orbits of one-dimensional tuples} \label{section:1-dim.orbits} For an $\mathbf{x}$-tuple $\mathbf{a}\subseteq F((t))$, we let $\mathrm{locus}(\mathbf{a})$ denote the $F((t))$-rational points of the smallest Zariski-closed set which is defined over $F$ and contains $\mathbf{a}$. Equivalently, $\mathrm{locus}(\mathbf{a})$ is the set of those $\mathbf{x}$-tuples $\mathbf{a}'\subseteq F((t))$ which are zeroes of all polynomials (with coefficients from $F$) which are zero at $\mathbf{a}$. For $l\in\mathbb{N}$, let $P_{l}:=\{(y,\mathbf{z})\;|\;y\in F((t))^{p^{l}}\}$. \begin{lemma}\label{lem:local.1} Let $\mathbf{a}$ be a tuple from $F((t))$ of transcendence degree $1$ over $F$. Then there exist $l<\omega$ and a tuple $\mathbf{N}\subseteq\omega$ such that $$\mathrm{locus}(\mathbf{a})\cap B(\mathbf{N};\mathbf{a})\cap P_{l}\subseteq\mathrm{Orb}(\mathbf{a}).$$ \end{lemma} \begin{proof} Since $F((t))/F$ is separable, we may re-write the tuple $\mathbf{a}$ as a $(y,\mathbf{z})$-tuple $(b,\mathbf{c})$ such that $\mathbf{c}$ is separably algebraic over $F(b)$ and $b$ is transcendental over $F$; i.e. $b$ is a separating transcendence base for $\mathbf{a}$ over $F$. By Theorem 7.4 of \cite{Prestel-Ziegler78}, a field admitting a nontrivial henselian valuation (such as $F((t))$) satisfies the `Implicit Function Theorem' (for polynomials). By an easy elaboration of the Implicit Function Theorem (as given in \cite{Anscombe?2}), and since $\mathbf{c}$ is separably algebraic over $F(b)$, there exist $N_{1},\mathbf{N}_{2}\in\mathbb{Z}$ such that$$\mathrm{locus}(b,\mathbf{c})\cap B(N_{1},\mathbf{N}_{2};b,\mathbf{c})$$is the graph of a continuous function$$B(N_{1};b)\longrightarrow B(\mathbf{N}_{2};\mathbf{c}).$$ By \autoref{lem:continuity}, we may choose $n<\omega$ so that$$\mathrm{Orb}_{n}(\mathbf{c})\subseteq B(\mathbf{N}_{2};\mathbf{c});$$and, by \autoref{lem:nearly.open.2}, there exists $l,N_{1}'<\omega$ such that $N_{1}'\geq N_{1}$ and$$B(N_{1}';b)\cap F((t))^{p^{l}}\subseteq\mathrm{Orb}_{n}(b).$$Our aim is to show that$$\mathrm{locus}(b,\mathbf{c})\cap B(N_{1}',\mathbf{N}_{2};b,\mathbf{c})\cap P_{l}\subseteq\mathrm{Orb}(b,\mathbf{c}).$$The result then follows from setting $\mathbf{N}:=(N_{1}',\mathbf{N}_{2})$. Let $(y,\mathbf{z})\in\mathrm{locus}(b,\mathbf{c})\cap B(N_{1}',\mathbf{N}_{2};b,\mathbf{c})\cap P_{l}$. Then $y\in B(N_{1}';b)\cap F((t))^{p^{l}}\subseteq\mathrm{Orb}_{n}(b)$. Thus there exists $s\in t+\mathcal{M}^{n}$ (corresponding to the automorphism $\sigma$) such that $y=\sigma(b)$. 
By our choice of $n$, we have that $\sigma(\mathbf{c})\in B(\mathbf{N}_{2};\mathbf{c})$. Thus $(y,\sigma(\mathbf{c}))\in B(N_{1}',\mathbf{N}_{2};b,\mathbf{c})$. Since $\sigma$ is an automorphism, we also have that $(y,\sigma(\mathbf{c}))=\sigma(b,\mathbf{c})\in\mathrm{locus}(b,\mathbf{c})$. Therefore both tuples $(y,\sigma(\mathbf{c}))$ and $(y,\mathbf{z})$ are members of $\mathrm{locus}(b,\mathbf{c})\cap B(N_{1}',\mathbf{N}_{2};b,\mathbf{c})$, which is the graph of a function. Thus $\sigma(\mathbf{c})=\mathbf{z}$. We have shown that $$\begin{array}{ll} (y,\mathbf{z})&=\sigma(b,\mathbf{c})\\ &\in\mathrm{Orb}_{m}(b,\mathbf{c})\\ &\subseteq\mathrm{Orb}(b,\mathbf{c}), \end{array}$$ as required. \end{proof} \begin{lemma}\label{lem:local.2} Let $\mathbf{a}$ be a tuple from $F((t))$ of transcendence degree $1$ over $F$, and choose $l<\omega$ and $\mathbf{N}\subseteq\omega$ as in \autoref{lem:local.1}. Then $$\mathrm{locus}(\mathbf{a})\cap\mathbf{U}\cap P_{l}=\mathrm{Orb}(\mathbf{a}),$$ where $\mathbf{U}$ is the open set $\bigcup_{\sigma\in\mathbf{G}}B(\mathbf{N};\sigma(\mathbf{a}))$. \end{lemma} \begin{proof} First we note that $\mathrm{locus}(\mathbf{a})$ and $P_{l}$ are closed set-wise under automorphisms from $\mathbf{G}$. ($\subseteq$) This follows immediately from \autoref{lem:local.1} noting that $\mathrm{Orb}(\mathbf{a})$ is closed under automorphisms. ($\supseteq$) We note that $\mathbf{a}\in\mathrm{locus}(\mathbf{a})\cap B(\mathbf{N};\mathbf{a})\cap P_{l}$. The result then follows by applying automorphisms. \end{proof} \section{Definability of orbits of one-dimensional tuples} \label{section:theorem} \begin{lemma}\label{lem:rational.functions.dense} Let $B(\mathbf{N};\mathbf{a})$ be as in \autoref{lem:local.1}. There exists a tuple $\mathbf{f}\subseteq F(t)$ of rational functions such that $B(\mathbf{N};\mathbf{a})=B(\mathbf{N};\mathbf{f}(t))$. \end{lemma} \begin{proof} This follows from the fact that $F(t)$ is $t$-adically dense in $F((t))$. \end{proof} \begin{theorem.1} Let $\mathbf{a}$ be a tuple from $F((t))$ of transcendence degree $1$ over $F$. Then $\mathrm{Orb}(\mathbf{a})$ is \begin{enumerate} \item $\exists$-$\mathcal{L}_{\mathrm{ring}}(F(t))$-definable, \item $\mathcal{L}_{\mathrm{ring}}(F)$-definable, and \item equal to the type $\mathrm{tp}(\mathbf{a})$ of $\mathbf{a}$ over $F$. \end{enumerate} \end{theorem.1} \begin{proof} Let notation be as in \autoref{lem:local.2}. In particular, there is a variable $y$ in the tuple $\mathbf{x}$ and $P_{l}=\{\mathbf{x}\;|\;y\in F((t))^{p^{l}}\}$. \begin{enumerate} \item Let $I$ be the ideal in $F[\mathbf{x}]$ of polynomials which are zero on $\mathbf{a}$. Since $F[\mathbf{x}]$ is Noetherian, there is a tuple $\mathbf{g}=(g_{1},...,g_{r})$ of polynomials which generates $I$. Let $\phi(\mathbf{x})$ be the formula$$\bigwedge_{i=1}^{r}g_{i}(\mathbf{x})=0;$$then $\phi(\mathbf{x})$ defines $\mathrm{locus}(\mathbf{a})$. Note that $\phi(\mathbf{x})$ is a quantifier-free $\mathcal{L}_{\mathrm{ring}}(F)$-formula. Let $\psi(\mathbf{x})$ be the formula$$\exists w\;w^{p^{l}}=y;$$then $\psi(\mathbf{x})$ defines $P_{l}$. Note that $\psi(\mathbf{x})$ is an $\exists$-$\mathcal{L}_{\mathrm{ring}}$-formula. Our next task is to define $B(\mathbf{N};\sigma(\mathbf{a}))$, uniformly for $\sigma\in\mathbf{G}$. Write $\mathbf{N}=(N_{1},...,N_{s})$, $\mathbf{a}=(a_{1},...,a_{s})$, $\mathbf{f}=(f_{1},...,f_{s})$, and $\mathbf{x}=(x_{1},...,x_{s})$. 
Then $$\begin{array}{ll} B(\mathbf{N};\mathbf{a})&=B(N_{1};a_{1})\times...\times B(N_{s};a_{s})\\ &=B(N_{1};f_{1}(t))\times...\times B(N_{s};f_{s}(t)). \end{array}$$ For $j\in\{1,...,s\}$, let $\chi_{j}(x;t)$ be the formula$$\exists y_{1}...\exists y_{(N_{j}+1)}\;\bigg(x-f_{j}(t)=\prod_{k=1}^{N_{j}+1}y_{k}\wedge\bigwedge_{k=1}^{N_{j}+1}C(y_{k};t)\bigg);$$then $\chi_{j}(x;t)$ defines $B(N_{j};a_{j})$. Let $\chi(\mathbf{x};t)$ be the formula$$\bigwedge_{j=1}^{s}\chi_{j}(x_{j};t);$$then $\chi(\mathbf{x};t)$ defines $B(\mathbf{N};\mathbf{a})$. For $\sigma\in\mathbf{G}$, we have that $\chi(\mathbf{x};\sigma(t))$ defines $B(\mathbf{N};\sigma(\mathbf{a}))$. Let $G(x;t)$ be the $\exists$-$\mathcal{L}_{\mathrm{ring}}(t)$-formula which defines $\mathcal{U}$, as in \autoref{lem:L_ring(t).definitions}. Let $\alpha(\mathbf{x};t)$ be the formula$$\exists u\;(G(u;t)\wedge\chi(\mathbf{x};u));$$then $\alpha(\mathbf{x};t)$ defines $\mathbf{U}$. Note that $\alpha(\mathbf{x};t)$ is an $\exists$-$\mathcal{L}_{\mathrm{ring}}(F(t))$-formula. Finally, let $\beta(\mathbf{x};t)$ be the formula$$(\phi(\mathbf{x})\wedge\psi(\mathbf{x})\wedge\alpha(\mathbf{x};t));$$then $\beta(\mathbf{x};t)$ defines $\mathrm{Orb}(\mathbf{a})$, by \autoref{lem:local.2}. Note that $\beta(\mathbf{x};t)$ is an $\exists$-$\mathcal{L}_{\mathrm{ring}}(F(t))$-formula. \item Let $H''(u)$ be the $\mathcal{L}_{\mathrm{ring}}$-formula which defines $\mathcal{U}$, as in \autoref{lem:L_ring.definitions}. Let $\gamma(\mathbf{x})$ be the formula$$\exists u\;(H''(u)\wedge\beta(\mathbf{x};u));$$then $\gamma(\mathbf{x})$ defines $\mathrm{Orb}(\mathbf{a})$. Note that $\gamma(\mathbf{x})$ is an $\mathcal{L}_{\mathrm{ring}}$-formula. \item It is a basic fact of Model Theory that the $\mathcal{L}_{vf}(F)$-type $\mathrm{tp}(\mathbf{a})$ is closed under $\mathcal{L}_{vf}(F)$-automorphisms. Thus $\mathrm{Orb}(\mathbf{a})\subseteq\mathrm{tp}(\mathbf{a})$. By the second part of this theorem, $\mathrm{Orb}(\mathbf{a})$ is $\mathcal{L}_{\mathrm{ring}}(F)$-definable; thus $\mathrm{tp}(\mathbf{a})\subseteq\mathrm{Orb}(\mathbf{a})$. \end{enumerate} \end{proof} \section{Subsets and subfields of $F((t))$} \label{section:corollary} Suppose that $X$ is an $F$-definable \bf subset \rm of $F((t))$, i.e. $X\subseteq F((t))$, and let $(X)$ denote the subfield of $F((t))$ generated by $X$. \begin{proposition}\label{prp:subsets} If $X\not\subseteq F$ then $X\setminus F$ is a union of infinite $\exists$-$\mathcal{L}_{\mathrm{ring}}(F(t))$-definable sets. \end{proposition} \begin{proof} Let $a\in X\setminus F$. Then $a$ is of transcendence degree $1$ over $F$. By \autoref{thm:1-dim}, $\mathrm{Orb}(a)\subseteq X$ is infinite and $\exists$-$\mathcal{L}_{\mathrm{ring}}(F(t))$-definable. \end{proof} \begin{corollary.2} If $X\not\subseteq F$ then there exists $n\in\mathbb{N}$ such that $(X)=F((t^{p^{n}}))$. \end{corollary.2} \begin{proof} In \cite{Anscombe?2} it was shown that this result holds for existentially definable sets (even with parameters). By \autoref{prp:subsets}, $X$ contains an infinite existentially definable set. \end{proof} In particular, if the field of constants is finite, we have the following corollary. \begin{corollary}\label{cor:finite.constants} If $F=\mathbb{F}_{p}$ then either $(X)=\mathbb{F}_{p}$ or there exists $n\in\mathbb{N}$ such that $(X)=\mathbb{F}_{p}((t^{p^{n}}))$. \end{corollary} \subsection{Subsets and subfields of $F((t))^{\mathrm{perf}}$} Suppose now that $X$ is an $F$-definable subset of $F((t))^{\mathrm{perf}}$, i.e. 
$X\subseteq F((t))^{\mathrm{perf}}$. \begin{corollary}\label{cor:perfect.constants} If $X\not\subseteq F$ then $(X)=F((t))^{\mathrm{perf}}$. \end{corollary} \begin{proof} Let $x\in X\setminus F$. Let $n\in\mathbb{Z}$ be chosen maximal such that $x\in F((t^{p^{n}}))$. Then $x\notin F((t^{p^{n+1}}))$. Let $s:=t^{p^{n}}$. The set $X\cap F((s))$ is invariant under $F$-automorphisms of $F((s))$, since automorphisms of $F((s))$ extend to automorphisms of $F((s))^{\mathrm{perf}}$. By \autoref{prp:subsets}, $X\cap F((s))$ contains an infinite $\exists$-$F(s)$-definable set. As before we apply the result from \cite{Anscombe?2}, thus the field generated by $X\cap F((s))$ is $F((s))$. In particular, $F((s))\subseteq(X)$. Now consider the automorphism $f$ of $F((t))^{\mathrm{perf}}$ that fixes $F$ pointwise and sends $t\longmapsto t^{1/p}$. The set $X$ is closed under $f$. Thus $(X)=F((t))^{\mathrm{perf}}$, as required. \end{proof} \begin{remark} These results can be seen in the context of Corollary 5.6, from \cite{Junker-Koenigsmann10}, in which it is shown that a henselian field of characteristic zero has no proper parameter-definable subfields; and Question 10 of \cite{Junker-Koenigsmann10} in which Junker and Koenigsmann ask whether $\mathbb{F}_{p}((t))^{\mathrm{perf}}$ is very slim (see Definition 1.1 in \cite{Junker-Koenigsmann10}). If $\mathbb{F}_{p}((t))^{\mathrm{perf}}$ were very slim then in particular it would have no infinite proper parameter-definable subfields. \autoref{cor:perfect.constants} shows that $\mathbb{F}_{p}((t))^{\mathrm{perf}}$ has no infinite proper subfields which are $\emptyset$-definable but at present we are not able to extend our methods to study sets definable with parameters. \end{remark} \section{Appendix: definability of certain subsets of $F((t))$} The following well-known fact is based on an old result of Julia Robinson about the $p$-adic numbers. The original statement can be found in Section 2 of \cite{Robinson65}. \begin{fact}[Folklore, based on \cite{Robinson65}]\label{fact:Robinson.definitions} The valuation ring $F[[t]]$ is defined by the $\mathcal{L}_{\mathrm{ring}}(t)$-formulas: \begin{enumerate} \item $A(x;t):=\exists y\;1+x^{l}t=y^{l}$ (for some prime $l\neq p$), and \item $B(x;t):=\neg\exists z\;(xzt=1\wedge A(z;t))$. \end{enumerate} \end{fact} The next lemma collects together several well-known and easy consequences of \autoref{fact:Robinson.definitions}. For the convenience of the reader, in the following two lemmas we write `($\exists$)' or `($\forall$)' after each formula to denote whether the formula is existential of universal in the given language. 
\begin{lemma}[Definitions in $\mathcal{L}_{\mathrm{ring}}(t)$]\label{lem:L_ring(t).definitions} We have that $\mathcal{M}$ is defined by the $\mathcal{L}_{\mathrm{ring}}(t)$-formulas: \begin{enumerate} \setcounter{enumi}{4} \item $C(x;t):=\exists y\;(x=0\vee(xy=1\wedge\neg B(y;t)))$ \rm($\exists$)\em, and \item $D(x;t):=\neg\exists y\;(xy=1\wedge A(y;t))$ \rm($\forall$)\em; \end{enumerate} and $\mathcal{O}^{\times}$ is defined by the $\mathcal{L}_{\mathrm{ring}}(t)$-formulas: \begin{enumerate} \setcounter{enumi}{6} \item $E(x;t):=\exists y\;(A(x;t)\wedge A(y;t)\wedge xy=1)$ \rm($\exists$)\em, and \item $F(x;t):=\neg\exists y\;(C(x;t)\vee(yx=1\wedge C(y;t))$ \rm($\forall$)\em; \end{enumerate} and $\mathcal{U}$ is defined by the $\mathcal{L}_{\mathrm{ring}}(t)$-formulas: \begin{enumerate} \setcounter{enumi}{9} \item $G(x;t):=\exists y\;(E(y;t)\wedge x=yt)$ \rm($\exists$)\em, and \item $H(x;t):=\forall y\forall z\;(D(x;t)\wedge\neg(x=yz\wedge C(y;t)\wedge C(z;t)))$ \rm($\forall$)\em. \end{enumerate} \end{lemma} For convenience, this next lemma collects some well-known facts about some $\mathcal{L}_{vf}$-definable subsets of $F((t))$. \begin{lemma}[Definitions in $\mathcal{L}_{vf}$]\label{lem:L_vf.definitions} We have that $\mathcal{M}$ is defined by the $\mathcal{L}_{vf}$-formulas: \begin{enumerate} \item $C'(x):=\exists y\;(x=0\vee(xy=1\wedge y\notin\mathcal{O}))$ \rm($\exists$)\em, and \item $D'(x):=\neg\exists y\;(xy=1\wedge y\in\mathcal{O})$ \rm($\forall$)\em; \end{enumerate} and $\mathcal{O}^{\times}$ is defined by the $\mathcal{L}_{vf}$-formulas: \begin{enumerate} \setcounter{enumi}{2} \item $E'(x):=\exists y\;(x\in\mathcal{O}\wedge y\in\mathcal{O}\wedge xy=1)$ \rm($\exists$)\em, and \item $F'(x):=\neg\exists y\;(C'(x)\vee(yx=1\wedge C'(y))$ \rm($\forall$)\em; \end{enumerate} and $\mathcal{U}$ is defined by the $\mathcal{L}_{vf}$-formula: \begin{enumerate} \setcounter{enumi}{8} \item $H'(x):=\forall y\forall z\;(D'(x)\wedge\neg(x=yz\wedge C(y)\wedge C(z)))$ \rm($\forall$)\em; \end{enumerate} \end{lemma} The following fact is due to James Ax and is found in the proof of the Theorem in \cite{Ax65}. \begin{fact}[\cite{Ax65}]\label{fact:Ax.definition} The valuation ring $F[[t]]$ is defined by the $\mathcal{L}_{\mathrm{ring}}$-formula: $$\begin{array}{lr} A''(x):=&\exists w\exists y\forall u\forall x_{1}\forall x_{2}\exists z\forall y_{1}\forall y_{2}\;\big((z^{m}=1+wx_{1}^{m}x_{2}^{m}\vee y_{1}^{m}\neq1\\ &+wx_{1}^{m}\vee y_{2}^{m}\neq1+wx_{2}^{m})\wedge u^{m}\neq w\wedge y^{m}=1+wx^{m}\big). \end{array}$$ \end{fact} \begin{lemma}[Definitions in $\mathcal{L}_{\mathrm{ring}}$]\label{lem:L_ring.definitions} $\mathcal{M}$, $\mathcal{O}^{\times}$, and $\mathcal{U}$ are $\mathcal{L}_{\mathrm{ring}}$-definable. \end{lemma} \begin{proof} Let $A''(x)$ be as in \autoref{fact:Ax.definition}. For any variable $u$ we replace the atomic formula $u\in\mathcal{O}$ with $A''(u)$ in the $\mathcal{L}_{vf}$-formulas $C'(x)$, $E'(x)$, and $H'(x)$ to obtain $\mathcal{L}_{\mathrm{ring}}$-formulas $C''(x)$, $E''(x)$, and $H''(x)$. \end{proof} \end{document}
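The coefficient-by-coefficient recursion used to prove the `Hensel-like' proposition of Section 3 is effective, and it can be instructive to run it on a concrete instance. The sketch below is an editorial illustration only: the choices $F=\mathbb{F}_3$, $f=t^2+t^4+t^9$, the truncation precision and all names are ours. Starting from $y=t$, it solves $f(y)=b$ one coefficient at a time for a $b$ that is $t$-adically close to $f(t)$, and then checks that $f(y)=b$ holds to the working precision and that $y\in t+\mathcal{M}^n$.
\begin{verbatim}
p = 3                      # characteristic of F = F_p
PREC = 40                  # work modulo t^(PREC+1)

def mul(u, w):
    """Product of two truncated series given as coefficient lists mod p."""
    out = [0] * (PREC + 1)
    for i, ui in enumerate(u):
        if ui:
            for j, wj in enumerate(w):
                if i + j <= PREC:
                    out[i + j] = (out[i + j] + ui * wj) % p
    return out

def compose(f, y):
    """f(y) truncated at degree PREC, for f and y given as coefficient lists."""
    out = [0] * (PREC + 1)
    power = [0] * (PREC + 1)
    power[0] = 1                         # y^0
    for i, ai in enumerate(f):
        if i > 0:
            power = mul(power, y)        # y^i
        if ai:
            out = [(o + ai * c) % p for o, c in zip(out, power)]
    return out

# f = t^2 + t^4 + t^9 in F_3[[t]]; i_0 = 2 is the least i with a_i != 0 and p not dividing i.
f = [0] * (PREC + 1)
f[2] = f[4] = f[9] = 1
i0 = 2
n = 5                                    # we want y in t + M^n
Nprime = -(-p * i0 // (p - 1))           # N' of the proof; the max over k < i_0 is at k = 0
N = max(Nprime, n + i0 - 2)              # ensures N >= N' and N - i0 + 1 >= n - 1

# a target b with v(b - f(t)) > N
b = compose(f, [0, 1] + [0] * (PREC - 1))
b[N + 1] = (b[N + 1] + 1) % p
b[N + 3] = (b[N + 3] + 2) % p

# recursion: y_1 = 1, y_j = 0 for 2 <= j <= N - i0 + 1; then coefficient H of f(y)
# is affine in y_{H-i0+1} with slope a_{i0} * i0 * y_1^{i0-1}, so solve for it mod p.
y = [0] * (PREC + 1)
y[1] = 1
slope_inv = pow(f[i0] * i0 % p, p - 2, p)   # invertible since p does not divide i0
for H in range(N + 1, PREC + 1):
    cur = compose(f, y)[H]
    y[H - i0 + 1] = (b[H] - cur) * slope_inv % p

assert compose(f, y) == b                   # f(y) = b modulo t^(PREC+1)
assert all(c == 0 for c in y[2:n])          # y lies in t + M^n
print("solved f(y) = b with y =", y)
\end{verbatim}
For this $f$ one has $i_0=2$, so each step determines $y_{H-1}$ from coefficient $H$ of $f(y)$, the dependence being affine with the invertible slope $a_{i_0}i_0=2$ in $\mathbb{F}_3$, exactly as in the lemma on $C_h(f(y))$.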
\begin{document} \publicationdetails{}{}{}{}{} \maketitle \begin{abstract} In this paper we prove the following new sufficient condition for a digraph to be Hamiltonian: {\it Let $D$ be a 2-strong digraph of order $n\geq 9$. If $n-1$ vertices of $D$ have degrees at least $n+k$ and the remaining vertex has degree at least $n-k-4$, where $k$ is a non-negative integer, then $D$ is Hamiltonian}. This is an extension of Ghouila-Houri's theorem for 2-strong digraphs and a generalization of an early result of the author (DAN Arm. SSR 91(2):6-8, 1990). The result obtained is best possible in the sense that for $k=0$ there is a digraph of order $n=8$ (respectively, $n=9$) with minimum degree $n-4=4$ (respectively, $n-5=4$) whose $n-1$ vertices have degrees at least $n-1$, but which is not Hamiltonian. We also give a new sufficient condition for a 3-strong digraph to be Hamiltonian-connected. \end{abstract} \section{Introduction} In this paper, we consider finite digraphs without loops and multiple arcs. We assume that the reader is familiar with the standard terminology on digraphs and refer the reader to \cite{[5]}. Every cycle and path is assumed to be simple and directed. A cycle (a path) in a digraph $D$ is called {\it Hamiltonian} (a {\it Hamiltonian path}) if it includes all the vertices of $D$. A digraph $D$ is {\it Hamiltonian} if it contains a Hamiltonian cycle. Hamiltonicity is one of the most central topics in graph theory, and it has been extensively studied by numerous researchers. The problem of deciding Hamiltonicity of a graph (digraph) is $NP$-complete, but there are numerous sufficient conditions which ensure the existence of a Hamiltonian cycle in a digraph (see \cite{[5], [6], [16], [18]}). Among them are the following classical sufficient conditions for a digraph to be Hamiltonian.\\ \noindent\textbf{Theorem 1.1} (\cite{[20]}). {\it Let $D$ be a digraph of order $n\geq 2$. If for every vertex $x$ of $D$, $d^+(x)\geq n/2$ and $d^-(x)\geq n/2$, then $D$ is Hamiltonian.}\\ \noindent\textbf{Theorem 1.2} (\cite{[14]}). {\it Let $D$ be a strong digraph of order $n\geq 2$. If for every vertex $x$ of $D$, $d(x)\geq n$, then $D$ is Hamiltonian.}\\ \noindent\textbf{Theorem 1.3} (\cite{[27]}). {\it Let $D$ be a digraph of order $n\geq 2$. If $d^+(x)+d^-(y)\geq n$ for all pairs of distinct vertices $x$ and $y$ of $D$ such that there is no arc from $x$ to $y$, then $D$ is Hamiltonian.}\\ \noindent\textbf{Theorem 1.4} (\cite{[19]}). {\it Let $D$ be a strong digraph of order $n\geq 2$. If $d(x)+d(y)\geq 2n-1$ for all pairs of non-adjacent distinct vertices $x$ and $y$ of $D$, then $D$ is Hamiltonian.}\\ It is known that all the lower bounds in the above theorems are tight. Notice that for strong digraphs, Meyniel's theorem is a generalization of the theorems of Nash-Williams, Ghouila-Houri and Woodall. A beautiful short proof of the latter can be found in the paper by \cite{[7]}. \cite{[20]} suggested the problem of characterizing all the strong digraphs of order $n$ and minimum degree $n-1$ that have no Hamiltonian cycle. As a partial solution of this problem, \cite{[22]} in his excellent paper proved a structural theorem on the extremal digraphs. An analogous problem for Meyniel's theorem was considered by \cite{[8]}, who proved a structural theorem on the strong non-Hamiltonian digraphs $D$ of order $n$ satisfying $d(x)+d(y)\geq 2n-2$ for every pair of non-adjacent distinct vertices $x,y$. This improves the corresponding structural theorem of Thomassen.
\cite{[8]} also proved that if $m$ is the length of a longest cycle in $D$, then $D$ contains cycles of all lengths $k=2,3,\ldots ,m$. \cite{[22]} and \cite{[9]} described all the extremal digraphs for the Nash-Williams theorem when the order of the digraph is odd and when it is even, respectively. Here we combine them in the following theorem.\\ \noindent\textbf{Theorem 1.5} (\cite{[22]} and \cite{[9]}). {\it Let $D$ be a digraph of order $n\geq 4$ with minimum degree $n-1$. If for every vertex $x$ of $D$, $d^+(x)\geq n/2-1$ and $d^-(x)\geq n/2-1$, then $D$ is Hamiltonian, apart from some exceptional digraphs, which are completely characterized.}\\ \cite{[15]} relaxed the condition of the Ghouila-Houri theorem by proving the following theorem.\\ \noindent\textbf{Theorem 1.6} (\cite{[15]}). {\it Let $D$ be a strong digraph of order $n\geq 2$. If $n-1$ vertices of $D$ have degrees at least $n$ and the remaining vertex has degree at least $n-1$, then $D$ is Hamiltonian.}\\ Note that Theorem 1.6 is an immediate consequence of Theorem 1.4. For any $n\geq 5$, \cite{[15]} presented two examples of non-Hamiltonian strong digraphs of order $n$ such that: (i) in the first example, $n-2$ vertices have degrees equal to $n+1$ and the other two vertices have degrees equal to $n-1$; (ii) in the second example, $n-1$ vertices have degrees at least $n$ and the remaining vertex has degree equal to $n-2$.\\ \textbf{Remark 1.} It is worth mentioning that \cite{[22]} constructed a strong non-Hamiltonian digraph of order $n$ with only two vertices of degree $n-1$ in which all other $n-2$ vertices have degrees at least $(3n-5)/2$.\\ \cite{[28]} reduced the lower bound in Theorem 1.3 by 1, and proved that the conclusion still holds, with only a few exceptional cases that can be clearly characterized. \cite{[11]} announced that the following theorem holds.\\ \noindent\textbf{Theorem 1.7} (\cite{[11]}). {\it Let $D$ be a 2-strong digraph of order $n\geq 9$ such that $n-1$ of its vertices have degrees at least $n$ and the remaining vertex has degree at least $n-4$. Then $D$ is Hamiltonian.}\\ The proof of Theorem 1.7 has never been published. G. Gutin suggested that I publish the proof of this theorem. Recently, \cite{[12]} presented a new proof of the first part of Theorem 1.7 by proving the following.\\ \noindent\textbf{Theorem 1.8} (\cite{[12]}). {\it Let $D$ be a 2-strong digraph of order $n\geq 9$ such that $n-1$ of its vertices have degrees at least $n$ and the remaining vertex $z$ has degree at least $n-4$. If $D$ contains a cycle of length $n-2$ through $z$, then $D$ is Hamiltonian.}\\ \cite{[12]} also proposed the following conjecture. \textbf{Conjecture 1}. {\it Let $D$ be a 2-strong digraph of order $n$. Suppose that $n-1$ vertices of $D$ have degrees at least $n+k$ and the remaining vertex has degree at least $n-k-4$, where $k\geq 0$ is an integer. Then $D$ is Hamiltonian.}\\ Note that for $k=0$ this conjecture is Theorem 1.7. By inspecting the proof of Theorem 1.8 and the handwritten proof of Theorem 1.7, and using similar arguments, we settle Conjecture 1 by proving the following theorem.\\ \noindent\textbf{Theorem 1.9}. {\it Let $D$ be a 2-strong digraph of order $n\geq 9$. If $n-1$ vertices of $D$ have degrees at least $n+k$ and the remaining vertex $z$ has degree at least $n-k-4$, where $k\geq 0$ is an integer, then $D$ is Hamiltonian.}\\ \cite{[13]} presented the proof of the first part of Conjecture 1 for any $k\geq 1$; we formulate it as Theorem 3.6 (Section 3).
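Stated operationally, the hypothesis shared by Conjecture 1 and Theorem 1.9 is a degree profile: all vertices but one must have degree at least $n+k$, and the exceptional vertex must have degree at least $n-k-4$. The following minimal sketch is an editorial illustration (the function name is ours); it tests this profile for a digraph given by a vertex list and an arc set, and does not check 2-strongness.
\begin{verbatim}
def satisfies_degree_hypothesis(vertices, arcs, k):
    """Degree profile of Theorem 1.9 / Conjecture 1 (2-strongness not checked)."""
    n = len(vertices)
    # d(v) = d^+(v) + d^-(v): number of arcs incident with v (no loops assumed)
    degrees = sorted(sum(1 for a in arcs if v in a) for v in vertices)
    return degrees[0] >= n - k - 4 and all(d >= n + k for d in degrees[1:])

# Example: a complete digraph on 9 vertices satisfies the hypothesis for k = 0.
V = list(range(9))
A = {(u, w) for u in V for w in V if u != w}
assert satisfies_degree_hypothesis(V, A, 0)
\end{verbatim}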
The goal of this article is to present in full the second part of the proof of Theorem 1.9 and to show that this theorem is best possible in the sense that for $k=0$ there is a 2-strong digraph of order $n=8$ (respectively, $n=9$) with minimum degree $n-4=4$ (respectively, $n-5=4$) whose $n-1$ vertices have degrees at least $n$, but which is not Hamiltonian. To see that the theorem is best possible, it suffices to consider the digraphs defined in Examples 1 and 2; see Figure 1. In the figures, an undirected edge represents two directed arcs of opposite directions.\\ \textbf{Example 1}. Let $D_8$ be a digraph of order 8 with vertex set $V(D_8)=\{x_1,x_2,x_3,x_4,y_1,y_2,y_3,z\}$ and arc set $A(D_8)$, which satisfies the following conditions: $D_8\langle \{y_1,y_2,y_3\}\rangle$ is a complete digraph, $x_4\rightarrow \{y_1,y_2,y_3\}\rightarrow x_1$, $x_2\rightarrow \{y_1,y_2,y_3\}\rightarrow x_2$, and $D_8$ contains the following 2-cycles and arcs: $x_i\leftrightarrow x_{i+1}$ for all $i\in [1,3]$, $x_1\leftrightarrow x_{3}$, $x_3\leftrightarrow z$, $x_4\leftrightarrow x_{2}$, $x_4\rightarrow x_1$, $x_4\rightarrow z$ and $z\rightarrow x_1$. $A(D_8)$ contains no other arcs.\\ \textbf{Example 2}. Let $D_9$ be a digraph of order 9 with vertex set $V(D_9)=\{x_1,x_2,x_3,x_4,x_5,y_1,y_2,y_3,z\}$ and arc set $A(D_9)$, which satisfies the following conditions: $D_9\langle \{y_1,y_2,y_3\}\rangle$ is a complete digraph, $x_5\rightarrow \{y_1,y_2,y_3\}\rightarrow x_1$, $x_3\rightarrow \{y_1,y_2,y_3\}\rightarrow \{x_1,x_2,x_3\}$, and $D_9$ contains the following 2-cycles and arcs: $x_i\leftrightarrow x_{i+1}$ for all $i\in [1,4]$, $x_1\leftrightarrow x_{4}$, $x_3\leftrightarrow x_5$, $x_4\leftrightarrow x_{2}$, $x_4\leftrightarrow z$, $x_5\rightarrow z$, $z\rightarrow x_1$ and $x_5\rightarrow x_1$. $A(D_9)$ contains no other arcs.\\ Observe that every vertex other than $z$ in $D_8$ (in $D_9$) has degree at least $|V(D_8)|=8$ (at least $|V(D_9)|=9$) and $d(z)=4$ in both digraphs $D_8$ and $D_9$. It is not hard to check that for every $u\in V(D_8)$ ($u\in V(D_9)$), $D_8-u$ ($D_9-u$) is strong, i.e., $D_8$ and $D_9$ are both 2-strong. To see this, it suffices to consider a longest cycle in $D_8-u$ (in $D_9-u$) and apply the following well-known proposition.\\ \textbf{Proposition 1} (see Exercise 7.26 in \cite{[5]}). Let $D$ be a $k$-strong ($k\geq1$) digraph, let $x$ be a new vertex, and let $D'$ be the digraph obtained from $D$ and $x$ by adding $k$ arcs from $x$ to distinct vertices of $D$ and $k$ arcs from distinct vertices of $D$ to $x$. Then $D'$ is also $k$-strong.\\ Let $D'_9$ be the digraph obtained from $D_9$ by adding the arcs $x_3x_1$ and $x_5x_2$. Now we will show that $D'_9$ is not Hamiltonian. Assume that this is not the case. Let $R$ be an arbitrary Hamiltonian cycle in $D'_9$. Then $R$ necessarily contains either the arc $x_4z$ or the arc $x_5z$. If $x_4z\in A(R)$, then it is not difficult to see that either $R[x_4,y_i]=x_4zx_1x_2x_3y_i$ or $R[x_4,y_i]=x_4zx_1x_2x_3x_5y_i$, which is impossible since $N^+(y_i, \{x_1,x_2, \ldots , x_5\})=\{x_1,x_2,x_3\}$. We may therefore assume that $x_5z\in A(R)$. Then necessarily $R$ contains the arc $x_3y_i$ and either the path $x_5zx_1$ or the path $x_5zx_4$. It is easy to check that either $x_2x_3\in A(R)$ or $x_4x_3\in A(R)$.
If $x_5zx_4$ is in $R$, then $R[x_5,y_i]=x_5zx_4x_j\ldots x_3y_i$, where $j\in [1,3]$; and if $x_5zx_1$ is in $R$, then $R[x_5,y_i]$ is one of the following paths: $x_5zx_1x_2x_3y_i$, $x_5zx_1x_2x_4x_3y_i$, $x_5zx_1x_4x_3y_i$ and $x_5zx_1x_4x_2x_3y_i$. Both alternatives are impossible since $N^+(y_i, \{x_1,x_2, \ldots , x_5\})=\{x_1,x_2,x_3\}$. So, in all cases we have a contradiction. Therefore, $D'_9$ is not Hamiltonian, which in turn implies that the digraphs $D_9$, $D_9+\{(x_3x_1)\}$ and $D_9+\{(x_5x_2)\}$ are also not Hamiltonian. By a similar argument we can show that $D_8$ is also not Hamiltonian.\\ \begin{figure*} \caption{The digraphs $D_8$ and $D_9$.} \label{Fig.1} \end{figure*} A digraph $D$ is {\it Hamiltonian-connected} if for any pair of distinct vertices $x,y$, $D$ has a Hamiltonian path from $x$ to $y$. \cite{[21]} proved the following sufficient condition for a digraph to be Hamiltonian-connected.\\ \noindent\textbf{Theorem 1.10} (\cite{[21]}). {\it Let $D$ be a 2-strong digraph of order $n\geq 3$ such that, for each pair of non-adjacent distinct vertices $x,y$, we have $d(x)+d(y)\geq 2n+1$. Then for each pair of distinct vertices $u,v$ with $d^+(u)+d^-(v)\geq n+1$ there is a Hamiltonian $(u,v)$-path.}\\ Let $D$ be a digraph of order $n\geq 3$ and let $u$ and $v$ be two distinct vertices in $V(D)$. Following \cite{[21]}, we define a new digraph $H_D(u,v)$ as follows: $$V(H_D(u,v))=V(D-\{u,v\})\cup \{z\} \,\, (z \,\, \hbox{a new vertex}),$$ $$A(H_D(u,v))= A(D-\{u,v\})\cup \{zy\, |\, y\in N^+_{D-v}(u)\} \cup \{yz\, |\, y\in N^-_{D-u}(v)\}.$$ Now, using Theorem 1.9, we will prove the following theorem, which is an analogue of the Overbeck-Larisch theorem.\\ \noindent\textbf{Theorem 1.11}. {\it Let $D$ be a 3-strong digraph of order $n+1\geq 10$ with minimum degree at least $n+k+2$. If for two distinct vertices $u,v$, $d^+_D(u)+d^-_D(v)\geq n-k-2$ or $d^+_D(u)+d^-_D(v)\geq n-k-4$ with $uv\notin A(D)$, then there is a Hamiltonian $(u,v)$-path in $D$.} \textbf{Proof:} Let $D$ be a 3-strong digraph of order $n+1\geq 10$ and let $u, v$ be two distinct vertices in $V(D)$. Suppose that $D$ and $u,v$ satisfy the degree conditions of the theorem. Now we consider the digraph $H:=H_D(u,v)$ of order $n\geq 9$. By an easy computation, we obtain that the minimum degree of $H$ is at least $n-k-4$, and $H$ has $n-1$ vertices of degrees at least $n+k$. Moreover, we know that $H$ is 2-strong (see \cite{[10]}). Thus, the digraph $H$ satisfies the conditions of Theorem 1.9. Therefore, $H$ is Hamiltonian, which in turn implies that there is a Hamiltonian $(u,v)$-path in $D$. $\Box$\\\\ There are a number of sufficient conditions, depending on degrees or degree sums, for Hamiltonicity of bipartite digraphs. Here we combine several of them in the following theorem.\\ \noindent\textbf{Theorem 1.12}. {\it Let $D$ be a balanced bipartite strong digraph of order $2a\geq 6$. Then $D$ is Hamiltonian provided one of the following holds: (a) (\cite{[3]}). $d^+(x)+d^-(y)\geq a+2$ for every pair of vertices $x$, $y$ such that $x$, $y$ belong to different partite sets and $xy\notin A(D)$. (b) (\cite{[4]}). $d(x)+d(y)\geq 3a$ for every pair of non-adjacent distinct vertices $x,y$. (c) (\cite{[1]}). $d(x)+d(y)\geq 3a$ for every pair of vertices $x, y$ with a common in-neighbour or a common out-neighbour. (d) (\cite{[2]}). $d(x)+d(y)\geq 3a+1$ for every pair of vertices $x,y$ with a common out-neighbour.}\\ All the lower bounds in Theorem 1.12 are best possible.
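Before continuing, we remark that the claims made in Examples 1 and 2 are small enough to be verified mechanically. The sketch below is an editorial verification aid, not part of the proofs: the arc list is our transcription of Example 1 and all function names are ours. It rebuilds $D_8$ and checks by brute force that $d(z)=4$, that every other vertex has degree at least $8$, that $D_8-u$ is strong for every vertex $u$ (so $D_8$ is 2-strong), and that $D_8$ has no Hamiltonian cycle; replacing the arc list by that of Example 2 treats $D_9$ in the same way.
\begin{verbatim}
from itertools import permutations

V = ["x1", "x2", "x3", "x4", "y1", "y2", "y3", "z"]
A = set()
ys = ["y1", "y2", "y3"]
for u in ys:                                  # D_8<{y1,y2,y3}> is a complete digraph
    for w in ys:
        if u != w:
            A.add((u, w))
for y in ys:                                  # x4 -> {y1,y2,y3} -> x1 and x2 <-> {y1,y2,y3}
    A |= {("x4", y), (y, "x1"), ("x2", y), (y, "x2")}
for u, w in [("x1", "x2"), ("x2", "x3"), ("x3", "x4"),   # 2-cycles x_i <-> x_{i+1}
             ("x1", "x3"), ("x3", "z"), ("x4", "x2")]:   # and x1<->x3, x3<->z, x4<->x2
    A |= {(u, w), (w, u)}
A |= {("x4", "x1"), ("x4", "z"), ("z", "x1")}            # remaining single arcs

def degree(v):
    """d(v) = d^+(v) + d^-(v); no loops, so each incident arc counts once."""
    return sum(1 for a in A if v in a)

def is_strong(vertices, arcs):
    """Every vertex reaches every vertex inside the given vertex set."""
    def reach(s):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for (a, b) in arcs:
                if a == u and b in vertices and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen
    return all(reach(s) >= set(vertices) for s in vertices)

def has_hamiltonian_cycle(vertices, arcs):
    first, rest = vertices[0], vertices[1:]
    for perm in permutations(rest):
        cyc = (first,) + perm
        if all((cyc[i], cyc[(i + 1) % len(cyc)]) in arcs for i in range(len(cyc))):
            return True
    return False

assert degree("z") == 4
assert all(degree(v) >= 8 for v in V if v != "z")
assert all(is_strong([u for u in V if u != v], A) for v in V)   # D_8 - v strong for all v
assert not has_hamiltonian_cycle(V, A)
print("all claims of Example 1 check out for this transcription of D_8")
\end{verbatim}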
However, \cite{[23]},\cite{[26]}, \cite{[25]} and \cite{[24]} reduced all the lower bounds in Theorem 1.12 by 1, and completely described all non-Hamiltonian bipartite digraphs, that is the extremal bipartite digraphs for Theorem 1.12. Motivated by Theorems 1.9, 1.12 and Remark 1, it is natural to suggest the following problems. \textbf{Problem 1}. Suppose that $D$ is a $k$-strong balanced bipartite digraph of order $2a\geq 6$. Let $\{x_0,y_0\}$ be a pair of distinct vertices in $V(D)$ such that $d(x_0)+d(y_0)\geq 3a-l$, where $l\geq 1$ is an integer. Find the minimum value of $k$ and the maximum value of $l$ such that $D$ is Hamiltonian provided one of the following holds: (i) $x_0$ and $y_0$ are not adjacent and $d(x)+d(y)\geq 3a$ for every pair $\{x,y\}$ of non-adjacent vertices $x$, $y$ other than $\{x_0,y_0\}$. (ii) $\{x_0,y_0\}$ is a pair with a common out-neighbour and $d(x)+d(y)\geq\ 3a$ for every pair $\{x,y\}$ of vertices $x$, $y$ with a common out-neighbour such that $\{x,y\}\not= \{x_0,y_0\}$.\\ \textbf{Problem 3}. Suppose that $D$ is a $k$-strong balanced bipartite digraph of order $2a\geq 6$. Let $u_0$ and $v_0$ be two vertices from different partite sets such that $u_0 \nrightarrow v_0$ and $d^+(u_0)+d^-(y_0)\geq a+2-l$, where $l\geq 2$ is an integer. Find the minimum value of $k$ and the maximum value of $l$ such that $D$ is Hamiltonian provided that the following holds: $d^+(u)+d^-(v)\geq a+2$ for all vertices $u$ and $v$ from different partite sets such that $\{u,v\}\not=\{u_0,v_0\}$ and $u\nrightarrow v$.\\\\ \section{Terminology and notation} In this paper, we consider finite digraphs without loops and multiple arcs. For the terminology not defined in this paper, the reader is referred to the book \cite{[5]}. The vertex set and the arc set of a digraph $D$ are denoted by $V(D)$ and $A(D)$, respectively. The {\it order} of $D$ is the number of its vertices. For any $x,y\in V(D)$, if $xy\in A(D)$, we also write $x\rightarrow y$, and say that $x$ {\it dominates} $y$ or $y$ is {\it dominated} by $x$. The notion $x y\notin A(D)$ means that $xy\notin A(D)$. If $x\rightarrow y$ and $y\rightarrow x$ we shall use the notation $x\leftrightarrow y$ ($x\leftrightarrow y$ is called 2-{\it cycle}). If $x\rightarrow y$ and $y\rightarrow z$, we write $x\rightarrow y\rightarrow z$. Let $A$ and $B$ be two disjoint subsets of $V(D)$. The notation $A\rightarrow B$ means that every vertex of $A$ dominates every vertex of $B$. We define $A_D(A\rightarrow B)=\{xy\in A(D)\, |\, x\in A, y\in B\}$ and $A_D(A,B)=A_D(A\rightarrow B)\cup A_D(B\rightarrow A)$. If $x\in V(D)$ and $A=\{x\}$ we sometimes write $x$ instead of $\{x\}$. The {\it converse digraph} of $D$ is the digraph obtained from $D$ by reversing the direction of all arcs, and is denoted by $D^{rev}$. Let $N_D^+(x)$, $N_D^-(x)$ denote the set of out-neighbors, respectively the set of in-neighbors of a vertex $x$ in a digraph $D$. If $A\subseteq V(D)$, then $N_D^+(x,A)= A \cap N_D^+(x)$ and $N_D^-(x,A)=A\cap N_D^-(x)$. The {\it out-degree} of $x$ is $d_D^+(x)=|N_D^+(x)|$ and $d_D^-(x)=|N_D^-(x)|$ is the {\it in-degree} of $x$. Similarly, $d_D^+(x,A)=|N_D^+(x,A)|$ and $d_D^-(x,A)=|N_D^-(x,A)|$. The {\it degree} of the vertex $x$ in $D$ is defined as $d_D(x)=d_D^+(x)+d_D^-(x)$ (similarly, $d_D(x,A)=d_D^+(x,A)+d_D^-(x,A)$). We omit the subscript if the digraph is clear from the context. The subdigraph of $D$ induced by a subset $A$ of $V(D)$ is denoted by $D\langle A\rangle$ and $D-A$ is the subdigraph induced by $V(D)\setminus A$, i.e. 
$D-A=D\langle V(D)\setminus A\rangle$. For integers $a$ and $b$, $a\leq b$, let $[a,b]$ denote the set $\{x_a,x_{a+1},\ldots , x_b\}$. If $j<i$, then $\{x_i,\ldots , x_j\}=\emptyset$. A path is a digraph with vertex set $\{x_1,x_2,\ldots , x_k\}$ and arc set $\{x_1x_2,x_2x_3,\ldots , x_{k-1}x_k\}$, and is denoted by $x_1x_2\ldots x_k$. This is also called an $(x_1,x_k)$--path or a path from $x_1$ to $x_k$. If we add the arc $x_kx_1$ to the above, we obtain a cycle $x_1x_2\ldots x_kx_1$. The {\it length} of a cycle or a path is the number of its arcs. If a digraph $D$ contains a path from a vertex $x$ to a vertex $y$ we say that $y$ is {\it reachable} from $x$ in $D$. In particular, $x$ is reachable from itself. If $P$ is a path containing a subpath from $x$ to $y$, we let $P[x,y]$ denote that subpath. Similarly, if $C$ is a cycle containing vertices $x$ and $y$, $C[x,y]$ denotes the subpath of $C$ from $x$ to $y$. For a cycle $C$, a $C$-bypass is an $(x,y)$-path $P$ of length at least two such that $V(P)\cap V(C)=\{x,y\}$. The {\em flight} of $C$-bypass $P$ respect to $C$ is $|V(C[x,y])|-2$. For integers $a$ and $b$, $a\leq b$, let $[a,b]$ denote the set of all integers, which are not less than $a$ and are not greater than $b$. The path (respectively, the cycle) consisting of the distinct vertices $x_1,x_2,\ldots ,x_m$ ($m\geq 2 $) and the arcs $x_ix_{i+1}$, $i\in [1,m-1]$ (respectively, $x_ix_{i+1}$, $i\in [1,m-1]$, and $x_mx_1$), is denoted by $x_1x_2\cdots x_m$ (respectively, $x_1x_2\cdots x_mx_1$). We say that $x_1x_2\cdots x_m$ is a path from $x_1$ to $x_m$ or is an $(x_1,x_m)$-{\it path}. Let $x$ and $y$ be two distinct vertices of a digraph $D$. Cycle that passing through $x$ and $y$ in $D$, we denote by $C(x,y)$. Let $D$ be a digraph and $z\in V(D)$. By $C_m(x)$ (respectively, $C(x)$) we denote a cycle in $D$ of length $m$ through $x$ (respectively, a cycle through $x$). Similarly, we denote by $C_k$ a cycle of length $k$. By $K_n^*$ is denoted the complete digraph of order $n$. Let $D$ be a digraph of order $n$. If $E$ is a set of arcs in $K_n^*$, then we denote by $D+E$ the digraph obtained from $D$ by adding all arcs of $E$. A digraph $D$ is {\it strongly connected} (or, just, {\it strong}) if there exists a path from $x$ to $y$ and a path from $y$ to $x$ for every pair of distinct vertices $x,y$. A digraph $D$ is {\it $k$-strongly} connected (or {\it $k$-strong}), where $k\geq 1$, if $|V(D)|\geq k+1$ and $D- A$ is strongly connected for any subset $A\subset V(D)$ of at most $k-1$ vertices. Two distinct vertices $x$ and $y$ are {\it adjacent} if $xy\in A(D)$ or $yx\in A(D) $ (or both). We will use {\it the principle of digraph duality}: Let $D$ be a digraph, then $D$ contains a subdigraph $H$ if and only if $D^{rev}$ contains the subdigraph $H^{rev}$. \section{Preliminaries} In our proofs we extensively will use the following well-known simple lemmas. \noindent\textbf{Lemma 3.1} (\cite{[17]}). {\it Let $D$ be a digraph of order $n\geq 3$ containing a cycle $C_m$, $m\in [2,n-1]$. Let $x$ be a vertex not contained in this cycle. If $d(x,V(C))\geq m+1$, then $D$ contains a cycle $C_k$ for every $k\in [2,m+1]$.}\\ The next lemma is a slight modification of a lemma by \cite{[7]} it is very useful and will be used extensively throughout this paper.\\ \noindent\textbf{Lemma 3.2.} Let $D$ be a digraph of order $n\geq 3$ containing a path $P:=x_1x_2\ldots x_m$, $m\in [1,n-1]$. Let $x$ be a vertex not contained in this path. 
If one of the following condition holds: (i) $d(x,V(P))\geq m+2$, (ii) $d(x,V(P))\geq m+1$ and $x\nrightarrow x_1$ or $x_m\nrightarrow x$, (iii) $d(x,V(P))\geq m$, $x\nrightarrow x_1$ and $x_m\nrightarrow x$, then there is an $i\in [1,m-1]$ such that $x_i\rightarrow x\rightarrow x_{i+1}$, i.e., $D$ contains a path $x_1x_2\ldots x_ixx_{i+1}\ldots x_m$ of length $m$ (we say that $x$ can be inserted into $P$).\\ We note that in the above Lemma 3.2 as well as throughout the whole paper we allow paths of length 0, i.e., paths that have exactly one vertex. Using Lemma 3.2, it is not difficult to prove the following lemma.\\ \noindent\textbf{Lemma 3.3.} {\it Let $D$ be a digraph of order $n\geq 4$. Suppose that $P:=x_1x_2\ldots x_m$, $m\in [2,n-2]$, is a longest path from $x_1$ to $x_m$ in $D$ and $V(D)\setminus V(P)$ contains two distinct vertices $y_1$, $y_2$ such that $d(y_1,V(P))=d(y_2,V(P))=m+1$. If in subdigraph $D\langle V(D)\setminus V(P)\rangle$ there exists a path from $y_1$ to $y_2$ and a path from $y_2$ to $y_1$, then there is an integer $l\in [1, m]$ such that for every $i\in [1,2]$ $$ O(y_i,V(P))=\{x_1,x_2,\ldots ,x_l\} \quad \hbox{and} \quad I(y_i,V(P))=\{x_l,x_{l+1},\ldots , x_m\}. $$ } \noindent\textbf{Theorem 3.4} (\cite{[10]}). {\it Let $D$ be a strong digraph of order $n\geq 2$. Suppose that $d(x)+d(y)\geq 2n-1$ for all pairs of non-adjacent vertices $x, y\in V(D)\setminus \{z\}$, where $z$ is an arbitrary fixed vertex in $V(D)$. Then $D$ contains a cycle of length is at least $n-1$.}\\ From Theorem 3.4 it follows that the following corollary is true.\\ \noindent\textbf{Corollary 1}. (\cite{[10]}). Let $D$ be a strong digraph of order $n\geq 2$. Suppose that $n-1$ vertices of $D$ have degrees at least $n$. Then $D$ either is Hamiltonian or contains a cycle of length $n-1$ (in fact $D$ has a cycle that contains all the vertices with degree at least $n$).\\ \noindent\textbf{Lemma 3.5} (\cite{[12]}). {\it Let $D$ be a digraph of order $n\geq 4$ such that for any vertex $x\in V(D)\setminus \{z\}$, $d(x)\geq n$, where $z$ is an arbitrary fixed vertex in $V(D)$. Moreover, $d(z)\leq n-2$. Suppose that $C_m(z)=x_1x_2\ldots x_mx_1$, $m\leq n-1$, is a cycle of length $m$ through $z$ and $C_m(z)$ has an $(x_i,x_j)$-bypass such that $z\notin V(C_m(z)[x_{i+1},x_{j-1}])$. Then $D$ has a cycle, say $Q$, of length at least $m+1$ such that $V(C_m(z))\subset V(Q)$.}\\ \noindent\textbf{Theorem 3.6} (\cite{[12]}). {\it Let $D$ be a 2-strong digraph of order $n\geq 9$ such that $n-1$ vertices of $D$ have degrees at least $ n+k$ and the remaining vertex $z$ has degree at least $n-k-4$, where $k\geq 0$ is an integer. If the length of a longest cycle through $z$ is at least $n-k-2$, then $D$ is Hamiltonian.} \section{Proof of Theorem 1.9} \noindent\textbf{Theorem 1.9.} {\it Let $D$ be a 2-strong digraph of order $n\geq 9$. If $n-1$ vertices of $D$ have degrees at least $n+k$ and the remaining vertex $z$ has degree at least $n-k-4$, where $k\geq 0$ is an integer, then $D$ is Hamiltonian.} \begin{proof} By contradiction, suppose that $D$ is not Hamiltonian. Then from Theorem 3.6 it follows that $D$ has no $C(z)$-cycle of length greater than $n-k-3$. By Corollary 1, $D$ contains a cycle of length $n-1$. Let $C_{n-1}:=x_1x_2\ldots x_{n-1}x_1$ be an arbitrary cycle in $D$. By Lemma 3.1, $z\notin V(C_{n-1})$. Since $D$ is 2-strong, there are two distinct vertices, say $x_1$ and $x_{n-d-1}$, such that $x_{n-d-1}\rightarrow z\rightarrow x_1 $ and $d(z,\{x_{n-d},x_{n-d+1}, \ldots , x_{n-1}\})=0$. 
Without loss of generality, assume that the flight $d:=|\{x_{n-d}, x_{n-d+1},\ldots , x_{n-1}\}|$ of $z$ respect to $C_{n-1}$ is smallest possible over all the cycles of length $n-1$ in $D$. For any $i\in [1,d]$, let $y_i=x_{n-d-1+i}$ and $Y=\{y_1, y_2, \ldots , y_d\}$. Note that $y_1y_2\ldots y_d$ is a path in $D\langle Y\rangle$. Since $z$ cannot be inserted into $C_{n-1}$, using Lemma 3.2, we obtain $n-k-4\leq d(z)\leq n-d$. Hence, $d\leq k+4$. On the other hand, $n-d\leq n-k-3$, i.e., $d\geq k+3$, since $zx_1x_2\ldots x_{n-d-1}z$ is a $C(z)$-cycle of length $n-d$. From now on, by $P$ we denote the path $x_1x_2\ldots x_{n-d-1}$ (see Figure 1). \begin{figure*} \caption{The cycles $C_{n-1} \label{Fig.2} \end{figure*} In order to prove the theorem, it is convenient for the digraph $D$ and the path $P$ to prove the following Claims 1-4 below.\\ \textbf{Claim 1.} {\em Suppose that $D\langle Y\rangle$ is strong and each vertex $y_j$ of $Y$ cannot be inserted into $P$. If $d^+(x_i,Y)\geq 1$ with $i\in [1,n-d-2]$, then $A(Y\rightarrow \{x_{i+1},x_{i+2}, \ldots , x_{n-d-1}\})=\emptyset$}. \begin{proof} By contradiction, suppose that there are vertices $x_s, x_q$ with $1\leq s<q\leq n-d-1$ and $u,v\in Y$ such that $x_s\rightarrow u$, $v\rightarrow x_q$. Since $D\langle Y\rangle$ is strong, it contains a $(u,v)$-path, and let $Q$ be such a longest path. We may assume that $A(Y,\{x_{s+1},\ldots , x_{q-1}\})=\emptyset$. Since $D\langle Y\rangle$ is strong and every vertex $y_j$ cannot be inserted into $P$, using the fact that $D$ has no $C(z)$-cycle of length at least $n-k-2$, we obtain that $q-s\geq 2$. We now extend the path $x_qx_{q+1}\ldots x_{n-d-1}zx_1x_{2}\ldots x_{s}$ with vertices $x_{s+1},x_{s+2},\ldots , x_{q-1}$ as much as possible. Then some vertices $z_1,z_2, \ldots , z_m\in \{x_{s+1},x_{s+2},\ldots , x_{q-1}\}$, where $0\leq m\leq q-s-1$, are not on the obtained extended path, say $R$. We consider the cases $m\geq 1$ and $m=0$ separately. Assume first that $m\geq 1$. Since every vertex $y_j$ cannot be inserted into $P$ and $d(y_j,\{z,x_{s+1},\\x_{s+2},\ldots ,x_{q-1}\})=0$, using Lemma 3.2(i), we obtain $$ n+k\leq d(y_j)=d(y_j,Y)+d(y_j,\{x_1,x_2,\ldots , x_s\})+d(y_j,\{x_q,x_{q+1},\ldots , x_{n-d-1}\}) $$ $$\leq 2d-2+(s+1)+(n-d-1-q+2)=n+s+d-q \quad \hbox{and} $$ $$ n+k\leq d(z_i)=d(z_i,V(R))+d(z_i, \{z_1,z_2,\ldots , z_m\})\leq |V(R)|+1+2m-2 $$ $$= n-d-m+1+2m-2=n+m-d-1. $$ Therefore, $$ 2n+2k\leq d(z_i)+d(y_j)\leq n+m-d-1+n+s+d-q= 2n+m-1+s-q$$ $$ \leq 2n-1+q-s-1+s-q=2n-2, $$ which is a contradiction since $k\geq 0$. Assume next that $m=0$. This means that $D$ contains an $(x_q,x_s)$-path with vertex set $\{z\}\cup V(P)$. This and the fact that $D$ contains no cycle of length at least $n-k-2$ through $z$ imply that $d=k+4$, $|V(Q)|=1$, i.e., $u=v$, and $ A(x_s\rightarrow Y\setminus \{u\})=A(Y\setminus \{u\}\rightarrow x_q)=\emptyset $. Since any vertex of $Y$ cannot be inserted into $P$, using Lemma 3.2(ii), for each $y\in Y\setminus \{u\}$ we obtain $$ n+k\leq d(y)=d(y,Y)+d(y,\{x_1,x_2,\ldots , x_s\})+ d(y,\{x_q,x_{q+1},\ldots , x_{n-k-5}\})$$ $$\leq 2k+6+s+n-k-5-q+1= n+k+2-(q-s). 
$$ This means that all the inequalities used in the last expression are actually equalities, i.e., $q-s= 2$, $d(y,Y)=2k+6$, i.e., $D\langle Y\rangle$ is a complete digraph, and $$ d(y,\{x_1,x_2,\ldots , x_s\})=s,\,\, d(y,\{x_q,x_{q+1},\ldots , x_{n-k-5}\})=n-k-q-4.$$ Again using Lemma 3.2(ii), from the last two equalities and $ A(x_s\rightarrow Y\setminus \{u\})=A(Y\setminus \{u\}\rightarrow x_q)=\emptyset$ we obtain and $x_{n-k-5}\rightarrow Y\setminus \{u\}\rightarrow x_1$. We claim that $x_{s+1}$ can be inserted into $x_1x_2\ldots x_s$ or $x_qx_{q+1}\ldots x_{n-k-5}$. Assume that this is not the case. Then by Lemma 3.2(i), $$ n+k\leq d(x_{s+1})=d(x_{s+1},\{x_1,x_2,\ldots , x_s\})+ d(x_{s+1},\{x_q=x_{s+2},x_{s+3},\ldots , x_{n-k-5}\})$$ $$+d(x_{s+1},\{z\})\leq s+1+n-k-5-s-1+1+2=n-k-(q-s)=n-k-2, $$ which is a contradiction. This contradiction shows that there is either an $(x_1,x_s)$-path, say $R_1$, with vertex set $\{x_1,x_2,\ldots , x_s,x_{s+1}\}$ or an $(x_q,x_{n-k-5})$-path, say $R_2$, with vertex set $\{x_{s+1},x_{s+2},\ldots , x_{n-k-5}\}$. Let $H$ be a Hamiltonian path in $D\langle Y\setminus \{u\}\rangle$. We know that $d(z,V(H))=0$, $|V(H)|=k+3$ and $x_{n-k-5}\rightarrow Y\setminus \{u\}\rightarrow x_1$. Therefore, $F_1:=x_1R_1ux_q\ldots x_{n-k-5}Hx_1$ or $F_2:=x_1\ldots x_suR_2x_{n-k-5}Hx_1$, is a cycle of length $n-1$. We have that the flight of $z$ respect to $F_1$ (or $F_2$) is equal to $k+3$, which contradicts the minimality of $d=k+4$ and the choice of the cycle $C_{n-1}$ of length $n-1$. This completes the proof of the claim. \end{proof} \textbf{Claim 2.} {\it If $x_j\rightarrow z$ with $j\in [1,n-d-2]$, then $A(z\rightarrow \{x_{j+1},x_{j+2},\ldots , x_{n-d-1}\})=\emptyset$.} \begin{proof} By contradiction, suppose that $x_j\rightarrow z$ with $j\in [1,n-d-2]$ and $z\rightarrow x_l$ with $l\in [j+1,n-d-1]$. We may assume that $d(z,\{x_{j+1},\ldots , x_{l-1}\})=0$. Since $D$ contains no $C(z)$-cycle of length at least $n-k-2$ and $C_{n-l+j+1}(z):=x_1x_2\ldots x_jzx_l\ldots x_{n-d-1}y_1y_2\ldots y_dx_1$, it follows that $l\geq j+k+4$. Then, since $z$ cannot be inserted into $P$, by Lemma 3.2(i), we have $$ n-k-4\leq d(z)=d(z,\{x_1,x_2,\ldots , x_j\})+d(z,\{x_l,x_{l+1},\ldots , x_{n-d-1}\})$$ $$\leq (j+1)+(n-d-1-l+2)=n+2+j-d-l$$ $$\leq n+2+(l-k-4)-d-l=n-k-2-d, $$ i.e., $d\leq 2$, which contradicts that $d\geq k+3$. Claim 2 is proved. \end{proof} Since $D$ is 2-strong, we have $d^-(z)\geq 2$ and $d^+(z)\geq 2$. From this and Claim 2 it follows that there exists an integer $t\in [2,n-d-2]$ such that $x_t\rightarrow z$ and $$ d^-(z,\{x_1,x_2,\ldots , x_{t-1}\})= d^+(z,\{x_{t+1},x_{t+2},\ldots , x_{n-d-1}\})=0. \eqno (1) $$ From (1) and $d(z)\geq n-k-4$ it follows that if $d=k+4$, then $n-d-1=n-k-5$ and $$ N^+(z)=\{x_1,x_2,\ldots , x_{t}\} \quad \hbox{and} \quad N^-(z)=\{x_{t},x_{t+1},\ldots , x_{n-k-5}\}. \eqno (2) $$ \textbf{Claim 3.} {\it Suppose that there is an integer $l\in [2,n-d-2]$ such that $$ A(\{x_1,x_2, \ldots , x_{l-1}\} \rightarrow Y)=A(Y\rightarrow\{x_{l+1},x_{l+2}, \ldots , x_{n-d-1}\}) = \emptyset. $$ Then for every $j\in [2,n-d-2]$, $$ A(\{x_1,x_2, \ldots , x_{j-1}\}\rightarrow\{x_{j+1},x_{j+2}, \ldots , x_{n-d-1}\}) \not= \emptyset. $$} \begin{proof} Suppose, on the contrary, that for some $j\in [2,n-d-2]$, $ A(\{x_1,x_2, \ldots , x_{j-1}\}\rightarrow\{x_{j+1},x_{j+2}, \\\ldots , x_{n-d-1}\}) = \emptyset $. Without loss of generality, we may assume that $j\leq l$. 
If $d^-(z,\{x_1,x_2,\ldots , x_{j-1}\})=0$, then by the suppositions of the claim, we have $$ A(\{x_1,x_2, \ldots , x_{j-1}\}\rightarrow Y\cup \{z, x_{j+1},x_{j+2}, \ldots , x_{n-d-1}\}) = \emptyset. $$ If $d^-(z,\{x_1,x_2,\ldots , x_{j-1}\})\geq 1$, then by Claim 2, $d^+(z,\{x_{j+1},x_{j+2},\ldots , x_{n-d-1}\})=0$. This together with the supposition of the claim implies that $$ A(\{z,x_1,x_2, \ldots , x_{j-1}\} \rightarrow Y\cup \{x_{j+1},x_{j+2}, \ldots , x_{n-d-1}\}) = \emptyset. $$ Thus, in both cases, $D-x_j$ is not strong, which is a contradiction. Claim 3 is proved. \end{proof} \textbf{Claim 4}. {\em Any vertex $y_j$ with $j\in [1,d]$ cannot be inserted into $P$.} \begin{proof} By contradiction, suppose that there is a vertex $y_p$ with $p\in [1,d]$ and an integer $s\in [1,n-d-2]$ such that $x_s\rightarrow y_p\rightarrow x_{s+1}$. Then $R(z):=x_1x_2\ldots x_sy_px_{s+1}\ldots x_{n-d-1}zx_1$ is a cycle of length $n-d+1$. Since $D$ contains no $C(z)$-cycle of length at least $n-k-2$, it follows that $n-d+1\leq n-k-3$, i.e., $d\geq k+4$. Therefore, $d=k+4$ since $d\leq k+4$. It is easy to see that any vertex $y_i$ other than $y_p$ cannot be inserted into $P$. Note that (2) holds since $d=k+4$. We will consider the cases $p\in [2,k+3]$ and $p=1$ separately. Note that if $p=k+4$, then in the converse digraph of $D$ we have case $p=1$. \\ \textbf{Case 1.} $p\in [2,k+3]$. If $y_{p-1}\rightarrow y_{p+1}$, then the cycle $x_1x_2\ldots x_sy_px_{s+1}\ldots x_{n-k-5}y_1\ldots y_{p-1}y_{p+1}\ldots y_{k+4}x_1$ is a cycle of length $n-1$ and the flight of $z$ respect to this cycle is equal to $k+3$, which is a contradiction. We may therefore assume that $y_{p-1}\nrightarrow y_{p+1}$. Since both $y_{p-1}$ and $y_{p+1}$ cannot be inserted into $R(z)$, using Lemma 3.2(i), we obtain $d(y_{p-1},V(R(z)))\leq n-k-3$ and $d(y_{p+1}, V(R(z)))\leq n-k-3$. These together with $d(y_{p-1})\geq n+k$ and $d(y_{p+1})\geq n+k$ imply that $d(y_{p-1}, Y\setminus \{y_p\})\geq 2k+3$ and $d(y_{p+1}, Y\setminus \{y_p\})\geq 2k+3$. Hence, it is easy to see that $y_{p+1}\rightarrow y_{p-1}$ and $d^+(x_s, Y\setminus \{y_p\}) =d^-(x_{s+1}, Y\setminus \{y_p\})=0$ (for otherwise $D$ contains a $C(z)$-cycle of length at least $n-k-2$, a contradiction). Since every vertex of $Y\setminus \{y_p\}$ cannot be extended into $P$, using Lemma 3.2 and the last equalities, we obtain that if $u\in \{y_{p-1}, y_{p-1}\}$, then $$ n+k\leq d(u)=d(u,Y)+ d(u,\{x_1,x_2,\ldots , x_{s}\})+d(u,\{x_{s+1},x_{s+2},\ldots , x_{n-k-5}\})$$ $$\leq 2k+5+s+(n-5-k-s)=n+k. $$ From this, in particular, we have $d(u,Y)=2k+5$, $d(u,\{x_1,x_2,\ldots , x_s\})= s$ and $d(u,\{x_{s+1},x_{s+2},\\\ldots , x_{n-k-5}\})=n-k-5-s$. Again using Lemma 3.2(ii), we obtain that $x_{n-k-5}\rightarrow \{y_{p-1}, y_{p+1}\}\rightarrow x_1$. From $d(u, Y)=2k+5$ it follows that $u\leftrightarrow Y\setminus \{y_{p-1}, y_{p+1}\}$ since $y_{p-1}y_{p+1}\notin A(D)$. Hence it is not difficult to see that in $D\langle Y\setminus \{y_p\}\rangle$ there is a $(y_{p-1},y_{k+4})$- or $(y_{p-1},y_{p+1})$-Hamiltonian path, say $H$. Thus $x_1x_2\ldots x_sy_px_{s+1}\ldots x_{n-k-5}Hx_1$ is a cycle of length $n-1$ and the flight of $z$ respect to this cycle is equal to $k+3$, a contradiction.\\ \textbf{Case 2.} $p=1$, i.e., $x_s\rightarrow y_1 \rightarrow x_{s+1}$. Observe that $d^-(x_{s+1},\{y_2,y_3,\ldots ,y_{k+4}\})=0$ and $R(z)$ is a longest cycle through $z$ in $D$, which has length $n-k-3$. 
For Case 2 we will prove the following proposition.\\ \textbf{Proposition 1.} Suppose that for $j$, $j\in [2,k+4]$, in $Q:=D\langle \{y_2, y_3,\ldots , y_{k+4},x_1\}\rangle$ there is a Hamiltonian $(y_j,x_1)$-path, say $H^j$. Then $x_{n-k-5}y_j\notin A(D)$. In particular, $x_{n-k-5}y_2\notin A(D)$. \begin{proof} Suppose that the claim is not true, that is $x_{n-k-5}\rightarrow y_j$ with $j\in [2,k+4]$ and $Q$ has a Hamiltonian $(y_j,x_1)$-path, say $H^j$. Then $x_1x_2\ldots x_sy_1x_{s+1}\ldots x_{n-k-5}H^jx_1$ is a cycle of length $n-1$ and the flight of $z$ respect to this cycle is equal to $k+3$, a contradiction. Thus $x_{n-k-5}\nrightarrow y_j$. It is easy to see that $H^2=y_2y_3\ldots y_{k+4}x_1$ is a Hamiltonian path in $Q$. Therefore by the first part of this proposition, $x_{n-k-5}\nrightarrow y _2)$. \end{proof} To complete the proof of Claim 4, we will consider the cases $x_s\nrightarrow y _2$, $x_s\rightarrow y_2$ separately. \textbf{Subcase 2.1.} $x_sy_2\notin A(D)$. We know that $y_2x _{s+1}\notin A(D)$ and $x_{n-k-5}y_2\notin A(D)$. Now, since $y_2$ cannot be inserted into $P$, using Lemmas 3.2(ii) and 3.2(iii), we obtain $$ n+k\leq d(y_2)=d(y_2,Y)+d(y_2,\{x_1,x_2,\ldots , x_s\})$$ $$ +d(y_2,\{x_{s+1},x_{s+2},\ldots , x_{n-k-5}\})\leq 2k+6+s+(n-k-s-6)=n+k. $$ This implies that $d(y_2,Y)=2k+6$, i.e., $y_2\leftrightarrow Y\setminus \{y_2\}$, in particular, $y_2\leftrightarrow y_1$ and $D\langle Y\rangle$ is strong, and $$ d(y_2,\{x_1,x_2, \ldots ,\\ x_s\})=s \,\, \hbox{and} \,\, d(y_2,\{x_{s+1},x_{s+2}, \ldots , x_{n-k-5}\})=n-k-s-6. \eqno (3) $$ Thus, for the longest cycle $R(z)$ we have that $V(D)\setminus V(R(z))=\{y_2,y_3,\ldots ,y_{k+4}\}$, $D\langle V(D)\setminus V(R(z))\rangle$ is strong and $y_2\leftrightarrow y_1$. Therefore by Lemma 3.5, $$ A(\{y_2,y_3,\ldots , y_{k+4}\}\rightarrow \{x_{s+1},x_{s+2}, \ldots , x_{n-k-5}\})= A(\{x_1,x_2, \ldots , x_{s}\}\rightarrow \{y_2,y_{3}, \ldots , y_{k+4} \}) = \emptyset.\eqno (4) $$ This together with $x_{n-k-5}\nrightarrow y_2$ and (3) implies that $y_2$ and $x_{n-k-5}$ are not adjacent and $$ N^+(y_2, V(P))=\{x_1,x_2, \ldots , x_{s}\} \,\, \hbox{and} \,\, N^-(y_2,V(P))=\{x_{s+1},x_{s+2}, \ldots , x_{n-k-6}\}). $$ By the above arguments, we have that $H^3=y_3y_4\ldots y_{k+4}y_2x_1$ is a $(y_3,x_1)$-Hamiltonian path in $Q$. Therefore by Proposition 1, $x_{n-k-5}\nrightarrow y_3$. This together with (4) implies that $x_{n-k-5}$ and $y_3$ are not adjacent. As for $y_2$, for $y_3$ we obtain that $y_3\leftrightarrow Y\setminus \{y_3\}$ and $$ N^+(y_3, V(P))=\{x_1,x_2, \ldots , x_{s}\} \,\, \hbox{and} \,\, N^-(y_3,V(P))=\{x_{s+1},x_{s+2}, \ldots , x_{n-k-6}\}). $$ Proceeding in the same manner, we obtain that $d(x_{n-k-5},\{y_2,y_3, \ldots , y_{k+4}\})=0$, $D\langle Y \rangle$ is a complete digraph and $$ \{x_{s+1},x_{s+2}, \ldots , x_{n-k-6}\}\rightarrow Y\setminus \{y_1\} \rightarrow \{x_1,x_2, \ldots , x_{s}\}. \eqno (5) $$ If $s=n-k-6$, then from (4) and $d(x_{n-k-5},\{y_2,y_3,\ldots , y_{k+4}\})=0$ it follows that $A(V(P)\cup \{z\}\rightarrow Y\setminus \{y_1\})=\emptyset$, i.e., $D-y_1$ is not strong, a contradiction. Therefore, we may assume that $s\leq n-k-7$. Let $s=1$. Since $D\langle Y\rangle$ is strong, from (5) it follows that $A(Y\rightarrow \{x_{3},x_4,\ldots , x_{n-k-5}\})=\emptyset$. (for otherwise, $y_1\rightarrow x_i$ with $i\in [3,n-k-5]$ and $C_{n-k-2}(z) =x_1x_2\ldots x_{i-1}y_2y_1x_i\ldots x_{n-k-5}zx_1$, a contradiction). 
If $d^+(x_1,\{x_3,x_4,\ldots , x_{n-k-5}\})=0$, then $A(\{x_1\}\cup Y\rightarrow \{z,x_3,x_4, \ldots , x_{n-k-5}\})=\emptyset$, i.e., $D-x_2$ is not strong, a contradiction. So, we can assume that for some $b\in [3,n-k-5]$, $x_1\rightarrow x_b$. By (5) and (2), respectively, we have $x_{b-1}\rightarrow y_2$ and $z\rightarrow x_2$. Therefore, $C_{n-1}(z):=x_1x_b\ldots x_{n-k-5}zx_2\ldots x_{b-1}y_2y_3\ldots y_{k+4}x_1$, a contradiction. Let finally $2\leq s\leq n-k-7$. It is easy to see that $A(\{x_1,x_2,\ldots , x_{s-1}\}\rightarrow \{x_{s+1},x_{s+2}, \ldots ,x_{n-k-5} \})\not=\emptyset$ (for otherwise, using the fact that $A(\{x_1,x_2,\ldots , x_{s-1}\}\rightarrow Y)=\emptyset$, Claim 2 and (2), it is not difficult to show that $D-x_s$ is not strong, a contradiction). Let $x_a\rightarrow x_b$ with $a\in [1,s-1]$ and $b\in [s+1,n-k-5]$. Then by (4), $y_2\rightarrow x_{a+1}$, and by (2), either $z\rightarrow x_{a+1}$ or $x_{b-1}\rightarrow z$. By (4), we also have that $x_{b-1}\rightarrow y_2$ or $x_{b-1}\rightarrow y_1$ when $b=s+1$. Therefore, $C(z)=x_1x_2\ldots x_ax_b\ldots x_{n-k-5}zx_{a+1}\ldots x_{b-1}(y_1\, or \,y_2)y_2y_3\ldots y_{k+4}x_1$ is a cycle of length at least $n-1$ or $C_{n-k-2}(z)=x_1x_2\ldots x_ax_b\ldots x_{n-k-5}y_1y_2x_{a+1}\ldots x_{b-1}zx_1$, respectively, for $z\rightarrow x_{a+1}$ and for $x_{b-1}\rightarrow z$. Thus, for any possible case we have a contradiction. This completes the discussion of Subcase 2.1. \textbf{Subcase 2.2.} $x_s\rightarrow y_2$. Using Lemma 3.5 and the fact that $R(z)$ is a longest cycle of length $n-k-3$ through $z$, we obtain $$ A(Y\setminus \{y_1\}\rightarrow \{x_{s+1},x_{s+2},\ldots , x_{n-k-5}\})=\emptyset. \eqno (6) $$ Since $x_s\rightarrow y_1\rightarrow x_{s+1}$, it follows that in $D\langle Y\rangle$ there is no $(y_2,y_1)$-path, i.e., $d^-(y_1,\{y_2,y_3,\ldots ,\\ y_{k+4}\})=0$ (for otherwise $D$ has a cycle of length at least $n-k-2$ through $z$, which is a contradiction). This implies that for all $i\in [1,k+4]$, $d(y_i,Y)\leq 2k+5$. Recall that $x_{n-k-5}\nrightarrow y_2$ (Proposition 1). Therefore, since $y_2$ cannot be inserted into $P$ and $y_2\nrightarrow x_{s+1}$, using Lemma 3.2, we obtain $$ n+k\leq d(y_2)=d(y_2,Y)+d(y_2,\{x_1,x_2,\ldots , x_s\})+d(y_2,\{x_{s+1},x_{s+2},\ldots , x_{n-k-5}\})$$ $$\leq 2k+5+s+1+(n-k-s-6)=n+k. $$ Therefore, $y_2\leftrightarrow Y\setminus \{y_1,y_2\}$, in particular, $D\langle Y\setminus \{y_1\}\rangle$ is strong, $$ d(y_2,\{x_1,x_2,\ldots , x_s\})=s+1 \,\, \hbox{and} \,\, d(y_2,\{x_{s+1},x_{s+2}, \ldots , x_{n-k-5}\})=n-k-s-6. \eqno (7) $$ From (6) and $x_{n-k-5} y_2\notin A(D)$ it follows that $y_2$ and $x_{n-k-5}$ are not adjacent. Therefore by (7) and (6), $\{x_{s+1},x_{s+2}, \ldots , x_{n-k-6}\}\rightarrow y_2$, and by Lemma 3.2, $y_2\rightarrow x_1$. Note that $H^3=y_3y_4\ldots y_{k+4}y_2x_1$ is a Hamiltonian $(y_3,x_1)$-path in $Q$. Therefore by Proposition 1, $x_{n-k-5}y_3\notin A(D)$, which together with (6) implies that $y_3$ and $x_{n-k-5}$ are not adjacent. Now by the same arguments, as for $y_2$, we obtain that $y_3\leftrightarrow Y\setminus \{y_1,y_3\}$, $$ d(y_3,\{x_1,x_2,\ldots , x_s\})=s+1 \,\, \hbox{and} \,\, d(y_3,\{x_{s+1},x_{s+2}, \ldots , x_{n-k-6}\})=n-k-s-6. \eqno (8) $$ Now by (8) and (6), $\{x_{s+1},x_{s+2}, \ldots , x_{n-k-6}\}\rightarrow y_3$. We know that $P_1:=x_1x_2\ldots x_s$ is a longest $(x_1,x_s)$-path in $D\langle V(P_1)\cup Y\setminus \{y_1\}\rangle$. 
Therefore, since $d(y_2,V(P_1))=d(y_3,V(P_1))= s+1$, by Lemma 3.3, there exists an integer $q\in [1,s]$ such that for every $j\in [2,3]$ $$ N^+(y_j,V(P_1))=\{x_1,x_2,\ldots , x_q\} \,\, \hbox{and}\,\, N^-(y_j,V(P_1))=\{x_q,x_{q+1},\ldots , x_s\}. $$ Proceeding in the same manner, we conclude that $\{x_{s+1},x_{s+2}, \ldots , x_{n-k-6}\}\rightarrow Y\setminus \{y_1\}$ and for all $j\in[2,k+d]$, $$ N^+(y_j,V(P_1))=\{x_1,x_2,\ldots , x_q\} \,\, \hbox{and}\,\, N^-(y_j,V(P_1))=\{x_q,x_{q+1},\ldots , x_s\}. \eqno (9) $$ If $q=1$, then $A(\{y_2,y_3,\ldots , y_{k+4}\}\rightarrow \{z,y_1,x_2,x_3,\ldots , x_{n-k-6}\})=\emptyset$, which implies that $D- x_1$ is not strong, a contradiction. Therefore, we may assume that $q\geq 2$, i.e., $q\in [2,s]$. If $x_i\rightarrow y_1$ with $i\in [1,q-1]$ then by (9), $C_n(z)=x_1x_2\ldots x_iy_1y_2\ldots y_{k+4}x_{i+1}x_{i+2}\ldots x_{n-k-5}zx_1$, a contradiction. We may therefore assume that $d^-(y_1,\{x_1,x_2,\ldots, x_{q-1}\})=0$. This together with (9) implies that $A(\{x_1,x_2,\ldots, x_{q-1}\}\rightarrow Y)=\emptyset$. Since $D$ is 2-strong, the last equality and (2) imply that there are integers $a\in [1,q-1]$ and $b\in [q+1,n-k-5]$ such that $x_a\rightarrow x_b$, for otherwise it is easy to see that $D-x_q$ is not strong. By (9) and (2), we have $y_{k+4}\rightarrow x_{a+1}$, $x_{b-1}\rightarrow y_2$ and $z\rightarrow x_{a+1}$ or $x_{b-1}\rightarrow z$. Therefore, if $z\rightarrow x_{a+1}$, then $C_{n-1}(z)=x_1x_2\ldots x_ax_b\ldots x_{n-k-5} zx_{a+1}\ldots x_{b-1}y_2\ldots y_{k+4}x_1$, and if $x_{b-1}\rightarrow z$, then $C_{n}(z)=x_1x_2\ldots x_ax_b\ldots x_{n-k-5}y_1\ldots y_{k+4}x_{a+1} \ldots x_{b-1}zx_1$. So, in any case we have a contradiction. Claim 4 is proved. \end{proof} For any $j\in [1,d]$, we have $$n+k\leq d(y_j)=d(y_j,V(P))+d(y_j,Y)\leq d(y_j,V(P))+2d-2.$$ From this, $d(y_j,V(P))\geq n+k-2d+2$. On the other hand, by Lemma 3.2 and Claim 4, $d(y_j, V(P))\leq n-d$. Therefore, $$ n+k-2d+2\leq d(y_j, V(P))\leq n-d \quad \hbox{and} \quad d+k\leq d(y_j,Y)\leq 2d-2. \eqno (10) $$ We distinguish two cases according to the subdigraph $D\langle Y\rangle$ is strong or not.\\ \textbf{Case A.} $D\langle Y\rangle$ is strong. In this case, by Claim 4, the suppositions of Claim 1 hold. Therefore, if for some $$ i\in [1,n-d-2] \,\, \hbox{and}\,\, d^+(x_i,Y)\geq 1,\,\, \hbox{then}\,\, A(Y\rightarrow \{x_{i+1},x_{i+2}, \ldots , x_{n-d-1}\})=\emptyset. \eqno (11) $$ Since $D$ is 2-strong, (11) implies that $d^+(x_1,Y)=d^-(x_{n-d-1},Y)=0$, there exists $l\in [2,n-d-2]$ such that $d^+(x_l,Y)\geq 1$ and $$ A(\{x_1,x_2,\ldots , x_{l-1}\}\rightarrow Y)=A(Y\rightarrow \{x_{l+1},x_{l+2},\ldots , x_{n-d-1}\})=\emptyset. \eqno (12) $$ From this we see that the supposition of Claim 3 holds. Therefore, for all $j\in [2,n-d-2]$, $$ A(\{x_1,x_2, \ldots , x_{j-1}\}\rightarrow\{x_{j+1},x_{j+2}, \ldots , x_{n-d-1}\}) \not= \emptyset. \eqno(13) $$ For Case A, we will prove the following two claims below.\\ \textbf{Claim 5.} (i) {\em $A(D)$ contains every arc of the forms $z\rightarrow x_i$ and $x_j\rightarrow z$, where $i\in [1,t]$ and $j\in [t,n-d-1]$, maybe except one when $d=k+3$. (ii) For every $i\in [1,d]$, $A(D$) contains every arc of the forms $y_i\rightarrow x_q$ and $x_j\rightarrow y_i$ where $q\in [1,l]$ and $j\in [l,n-d-1]$, maybe except one when $d=k+3$ or except two when $d=k+4$.} \begin{proof} (i) If $d=k+4$, then Claim 5(i) is an immediate consequence of (2). Assume that $d=k+3$. 
Then by (1), we have $$ n-k-4\leq d(z)=d^+(z,\{x_1,x_2,\ldots , x_{t-1}\})+ d(z,\{x_t\})+ d^-(z,\{x_{t+1}, x_{t+2},\ldots , x_{n-k-4}\})$$ $$ \leq t-1+2+ n-k-4-t=n-k-3.$$ Now, it is easy to see that Claim 5(i) is true. (ii) By (10) and (12) we have $$ n+k-2d+2\leq d(y_i,V(P))=d^+(y_i,\{x_1,x_2,\ldots , x_{l-1}\})+ d(y_i,\{x_l\})$$ $$+ d^-(y_i,\{x_{l+1},x_{l+2},\ldots , x_{n-d-1}\})\leq l-1+2+n-d-1-l=n-d. $$ Now, considering the cases $d=k+3$ and $d=k+4$ separately, it is not difficult to see that Claim 5(ii) also is true. Claim 5 is proved. \end{proof} \textbf{Claim 6}. {\em If $D\langle Y\rangle$ is strong and $z\rightarrow x_{a+1}$, then $d^+(x_{b-1},Y)=0$.} {\begin{proof} Suppose, on the contrary, that is $D\langle Y\rangle$ is strong, $z\rightarrow x_{a+1}$ and $d^+(x_{b-1}, Y)\geq 1$. Let $x_{b-1}\rightarrow y_i$, where $i\in [1,d]$. Recall that $k+3\leq d\leq k+4$. If $i\in [1,k+3]$, then the cycle $C(z)=x_1x_2\ldots x_ax_b\ldots x_{n-d-1}zx_{a+1}\ldots x_{b-1}y_i\ldots y_dx_1$ has length at least $n-k-2$, which is a contradiction. Therefore, we may assume that $d^+(x_{b-1},\{y_1,y_2,\ldots , y_{k+3}\})=0$. Then from $d^+(x_{b-1},Y)\geq 1$ it follows that $d=k+4$ and $x_{b-1}\rightarrow y_{k+4}$. Hence by (11), $A(Y\rightarrow \{x_{b},x_{b+1},\ldots , x_{n-k-5}\})=\emptyset$. Note that for each $i\in [1,k+3]$, $D\langle Y\rangle$ contains a $(y_{k+4},y_i)$-path since $D\langle Y\rangle$ is strong. Hence it is not difficult to see that if $d^-(x_1, \{y_1,y_2,\ldots , y_{k+3}\})\geq 1$, then $D$ contains a $C(z)$-cycle of length at least $n-k-2$, a contradiction. Therefore, we may assume that $d^-(x_1, \{y_1,y_2,\ldots , y_{k+3}\})= 0$. This together with $d^+(x_1,Y)=0$ implies that $d(x_1, \{y_1,y_2,\ldots ,y_{k+3}\})= 0$. Now using Lemma 3.2, Claim 4, $A(Y\rightarrow \{x_b,x_{b+1},\ldots , x_{n-k-5}\})=\emptyset$ and $d^+(x_{b-1},\{y_1,y_2,\ldots , y_{k+3}\})=0$, for any $i\in [1,k+3]$ we obtain, $$ n+k\leq d(y_i)=d(y_i, Y)+d(y_i,\{x_2, x_3,\ldots , x_{b-1}\})+d^-(y_i,\{x_b,x_{b+1},\ldots , x_{n-k-5}\})$$ $$ \leq 2k+6+(b-2)+(n-k-5-b+1)=n+k. $$ This means that all inequalities which were used in the last expression in fact are equalities, i.e., for any $i\in [1,k+3]$, $d(y_i, Y)=2k+6$ (i.e., $D\langle Y\rangle$ is a complete digraph), and $d(y_i,\{x_2,x_3, \ldots , x_{b-1}\})=b-2$. Therefore, since any vertex $y_i$ with $i\in [1,k+3]$ cannot be inserted into $P$ (Claim 4), $d(y_i,\{x_2,x_3, \ldots ,\\ x_{b-1}\})=b-2$ and $x_{b-1}\nrightarrow y_i$, using Lemma 3.2, we obtain that $y_i\rightarrow x_2$. Hence, if $a\geq 2$, then $C_{n-1}(z)=x_2\ldots x_ax_b\ldots x_{n-k-5}zx_{a+1}\ldots x_{b-1}Lx_2$, where $L$ is a Hamiltonian $(y_{k+4},y_{k+3})$-path in $D\langle Y\rangle$, a contradiction. Therefore, we may assume that $a=1$. Recall that $d=k+4$, which implies that (2) holds, in particular, $x_{b-1}\rightarrow z$. Therefore, $C_{n-1}(z)=x_1x_b\ldots x_{n-k-5}y_1y_2\ldots y_{k+3}x_2\ldots x_{b-1}zx_1$, a contradiction. Claim 6 is proved. \end{proof} Now using the digraph duality, we prove that it suffices to consider only the case $t\geq l$. \begin{proof} Indeed, assume that $l\geq t+1$ and consider the converse digraph $D^{rev}$ of $D$. Let $V(D^{rev})=\{u_1,u_2,\ldots , u_{n-d-1},v_1,v_2,\ldots , v_d,z\}$, where $u_i:=x_{n-d-i}$ and $v_j:=y_{d+1-j}$ for all $i\in [1,n-d-1]$ and $j\in [1,d]$, in particular, $x_l=u_{n-d-l}$ and $x_t=u_{n-d-t}$. Let $p:=n-d-l$ and $q:=n-d-t$. Note that $q\geq p+1$ and $\{v_1,v_2, \ldots , v_d\}=Y$. 
Observe that from the definitions of $l, t, p$ and $q$ it follows that $d^-_{D^{rev}}(u_p,Y)\geq 1$ $zu_q\in A(D^{rev})$, $d^-_{D^{rev}}(z,\{u_1,u_2,\ldots , u_{q-1}\})=0$ and $A_{D^{rev}}(\{u_1,u_2,\ldots , u_{p-1}\}\rightarrow Y)=\emptyset$. Now using Claim 5(i), we obtain that $d^-_{D^{rev}}(z,\{u_q,u_{q+1}\})\geq 1$ and $A_{D^{rev}}(\{u_p,u_{p+1}\}\rightarrow Y)\not=\emptyset$ when $d=k+3$ and $A_{D^{rev}}(\{u_p,u_{p+1}, u_{p+2}\}\rightarrow Y)\not=\emptyset$ when $d=k+4$. Let $u_{t'}z\in A(D^{rev})$, $d^+_{D^{rev}}(u_{l'},Y)\geq 1$ and $t', l'$ are minimal with these properties. It is clear that $t'\in [q,q+1]$ and $l'\in [p,p+2]$. We claim that $t'\geq l'$. Assume that this is not the case, i.e., $t'\leq l'-1$. Then it is not difficult to see that $t'\leq l'-1$ is possible when $l'=p+2$ and $t'=p+1=q$. By Claim 5(ii), $d=k+4$ and $2\leq p=q-1\leq n-k-7$. Therefore, in $D^{rev}$ the following hold: $$ d_{D^{rev}}(u_{p+1},Y)=0, \,\, \{u_{q+1},u_{q+2},\ldots , u_{n-k-5}\}\rightarrow Y \rightarrow \{u_{1},u_2,\ldots , u_{p}\}, $$ $$N^+_{D^{rev}}(z)\\=\{u_{1},u_2,\ldots , u_{q}\}\,\, \hbox{and} \,\, N^-_{D^{rev}}(z)=\{u_{q},u_{q+1},\ldots , u_{n-k-5}\}.$$ Since $D^{rev}$ is 2-strong and $A_{D^{rev}}(\{u_{1},u_2,\ldots , u_{p}\}\rightarrow \{z\}\cup Y)=\emptyset$, it follows that there are $r\in [1,p]$ and $s\in [p+2,n-k-5]$ such that $u_ru_s\in A(D^{rev})$. Taking into account the above observation, it is not difficult to show that if $r\leq p-1$, then $C_n(z)=u_1u_2\ldots u_ru_s\ldots u_{n-k-5}v_1v_2\ldots v_{k+4}u_{r+1}\ldots \\ u_{s-1}zu_1$ is a Hamiltonian cycle in $D^{rev}$, and if $s\geq p+3=q+2$, then $C_n(z)=u_1u_2\ldots u_ru_s\ldots \\ u_{n-k-5}zu_{r+1}\ldots u_{s-1}v_1v_2\ldots v_{k+4}u_1$ is a Hamiltonian cycle in $D^{rev}$, which contradicts that $D$ is not Hamiltonian. We may therefore assume that $r=p$ and $s=p+2$. This means that $A_{D^{rev}}(\{u_1,u_{2},\ldots ,\\ u_{p-1}\}\rightarrow \{u_{p+2},u_{p+3},\ldots , u_{n-k-5}\})=\emptyset$. Therefore, since $D^{rev}$ is 2-strong, for some $i\in [1,p-1]$, $u_iu_{p+1}\in A(D^{rev})$. Hence, $u_1u_2\ldots u_iu_{p+1}zx_{i+1}\ldots u_pu_{p+2}\ldots u_{n-k-5}v_1v_2\ldots v_{k+4}u_1$ is a Hamiltonian cycle in $D^{rev}$, which is a contradiction. Therefore, the case $t\leq l-1$ is equivalent to the case $t\geq l$. \end{proof} From now on, we assume that $l\leq t$. Note that from (13) it follows that there are $a\in [1,t-1]$ and $b\in [t+1,n-d-1]$ such that $x_a\rightarrow x_b$.\\ \textbf{Subcase A.1.} $z\rightarrow x_{a+1}$. Recall that $a\in [1,t-1]$ and $b\in [t+1,n-d-1]$. By Claim 6, we have that $d^+(x_{b-1},Y)=0$.\\ \textbf{Subcase A.1.1}. $z\rightarrow x_{a+1}$ and $b\geq t+2$. Then $b-2\geq t\geq l$. If $x_{b-2}\rightarrow y_i$ with $i\in [1,2]$, then the cycle $C(z)=x_1x_2\ldots x_ax_b\ldots x_{n-d-1}\\zx_{a+1}\ldots x_{b-2}y_i \ldots y_dx_1$ has length at least $n-2$, a contradiction. We may therefore assume that $d^+(x_{b-2},\{y_1,y_2\})=0$. This together with Claim 6 implies that $A(\{x_{b-2}, x_{b-1}\}\rightarrow \{y_1,y_2\})=\emptyset$. Therefore by Claim 5(ii) and $l\leq t$, we have that $d=k+4$, in particular, (2) holds. If $b\geq t+3$, then from $d^-(y_1,\{ x_{b-2}, x_{b-1}\})=0$ and Claim 5(ii) it follows that $x_{b-3}\rightarrow y_1$ and $C_{n-2}(z)=x_1x_2\ldots x_ax_b\ldots x_{n-k-5}zx_{a+1}\ldots x_{b-3}y_1y_2\ldots y_{k+4}x_1$, a contradiction. Therefore, we may assume that $b=t+2$. 
If $x_{t}\rightarrow y_3$, then $C_{n-3}(z)=x_1x_2\ldots x_ax_{t+2}\ldots x_{n-k-5}zx_{a+1}\ldots x_{t}y_3\ldots y_{k+4}x_1$ and the subdigraph $D\langle \{x_{t+1},y_1,y_2\}\rangle $ is not strong since $d^+(x_{t+1},\{y_1,y_2\})=0$, which is a contradiction. Therefore, $d^-(y_3,\{x_t,x_{t+1}\})=0$. Hence, if $l=t$, then $y_3\rightarrow x_{a+1}$ (Claim 5(ii)) and $C_{n-k-2}(z)=x_1x_2\ldots x_ax_{t+2}\ldots x_{n-k-5}y_1y_2y_3x_{a+1}\ldots x_tzx_1$, a contradiction. We may assume that $l\leq t-1$. If $a\leq t-2$, then from $d^-(y_1,\{x_t,x_{t+1}\})=0$ and Claim 5(ii), we have $x_{t-1}\rightarrow y_1$ and $C_{n-2}(z)=x_1x_2\ldots x_ax_{t+2}\ldots x_{n-k-5}zx_{a+1}\ldots x_{t-1}y_1y_2\ldots y_{k+4}x_1$, a contradiction. We may therefore assume that $a=t-1$. From $l\leq t-1$ and $l\geq 2$ it follows that $t\geq 3$. Thus we have that $a=t-1\geq 2$ and $b=t+2$, which mean that $A(\{x_1,x_2,\ldots ,x_{a-1}=x_{t-2}\}\rightarrow \{x_{t+2},x_{t+3},\ldots , x_{n-k-5}\})=\emptyset$. This together with (13) implies that for some $i\in [1,t-2]$ and $j\in [t,t+1]$, $x_i\rightarrow x_j$. Recall that $z\rightarrow x_{i+1}$ and $x_{t+1}\rightarrow z$ because of (2). Therefore, $C(z)=x_1x_2\ldots x_ix_{j} x_{t+1}zx_{i+1}\ldots x_{t-1}x_{t+2}\ldots x_{n-k-5}y_1y_2\ldots y_{k+4}x_1$ is cycle of length at least $n-1$, a contradiction. This completes the discussion of Sabcase A.1.1.\\ \textbf{Subcase A.1.2}. $z\rightarrow x_{a+1}$ and $b= t+1$. Since $b-1=t$, $d^+(x_{b-1}, Y)=0$ (Claim 6) and $d^+(x_l, Y)\geq 1$, we have $d^+(x_t, Y)=0$, $t-1\geq l\geq 2$. Assume first that $t+1\leq n-d-2$. Taking into account Subcase A.1.1 and $b= t+1$, we may assume that $A(\{x_1,x_2,\ldots , x_{t-1}\}\rightarrow \{x_{t+2},x_{t+3},\ldots , x_{n-d-1}\})=\emptyset$. This together with (13) implies that there is $j\in[t+2,n-d-1]$ such that $x_t\rightarrow x_j$. If $x_{j-1}\rightarrow z$, then $C_n(z)= x_1x_2\ldots x_ax_{t+1}\ldots x_{j-1}zx_{a+1}\ldots x_t\\x_j\ldots x_{n-d-1}y_1y_2\ldots y_dx_1$, a contradiction. Therefore, we may assume that $x_{j-1}\nrightarrow z$. This together with (2) implies that $d=k+3$. If $j\geq t+3$, then $x_{j-2}\rightarrow z$ (Claim 5(i)) and $C_{n-1}(z)= x_1x_2\ldots x_ax_{t+1}\ldots x_{j-2}zx_{a+1}\ldots x_tx_j\ldots x_{n-k-4}y_1y_2\ldots y_{k+3}x_1$, a contradiction. Assume that $j=t+2$. Since $d^+(x_t, Y)=0$, $d^+(x_l, Y)\geq 1$, $d=k+3$ and $l\leq t-1$, by Claim 5(ii) we have $\{x_l,x_{l+1},\ldots ,x_{t-1}\}\rightarrow y_1$. If $a\leq t-2$, then $x_{t-1}\rightarrow y_1$ and $C_{n-1}(z)=x_1x_2\ldots x_ax_{t+1}\ldots x_{n-k-4}z\\x_{a+1}\ldots x_{t-1}y_1y_2\ldots y_{k+3}x_1$, a contradiction. Assume that $a=t-1$. From $t\geq 3$ and $a=t-1$ it follows that $ A(\{x_1,x_2,\ldots , x_{t-2}\}\rightarrow \{x_{t+1},x_{t+2},\ldots , x_{n-k-4}\})=\emptyset$. This together with (13) implies that for some $s\in [1,t-2]$, $x_s\rightarrow x_t$. Since $d=k+3$ and $x_{j-1}\nrightarrow z$, from Claim 5(i) it follows that $z\rightarrow x_{s+1}$. Therefore, $C(z)=x_1x_2\ldots x_sx_{t}zx_{s+1}\ldots x_{t-1}x_{t+1}\ldots x_{n-k-4}y_1y_2\ldots y_{k+3}x_1$ is a Hamiltonian cycle in $D$, a contradiction. Assume next that $t+1=n-d-1$. Recall that $b=t+1$ and $d^+(x_t,Y)=0$. Let $a\leq t-2$. If $x_{t-1}\rightarrow y_i$ with $i\in [1,2]$, then $C(z)=x_1x_2\ldots x_ax_{n-d-1}zx_{a+1}\ldots x_{t-1}y_iy_2\ldots \\y_dx_1$ is a cycle of length at least $n-2$, a contradiction. Therefore, we may assume that for every $i\in [1,2]$, $d^-(y_i,\{x_{t-1}, x_t\})=0$. 
This together with Claim 5(ii) implies that $d=k+4$, which in turn implies that (2) holds, in particular, $z\rightarrow x_t$. If $l=t-1$, then from $d^-(y_2,\{x_{t-1},x_t\})=0$ and Claim 5(ii) it follows that $y_2\rightarrow x_{a+1}$ and $C_{n-k-2}(z)=x_1x_2\ldots x_ax_{n-k-5}y_1y_2x_{a+1}\ldots x_tzx_1$, a contradiction. Therefore, we may assume that $l\leq t-2$. It is easy to see that $a=t-2$ (for otherwise $a\leq t-3$, $x_{t-2}\rightarrow y_1$ and $C_{n-2}(z)=x_1x_2\ldots x_ax_{n-k-5}zx_{a+1}\ldots x_{t-2}y_1y_2\ldots y_{k+4}x_1$, a contradiction). Using Claim 5(ii) and the facts that $a=t-2\geq l\geq 2$, $d^-(y_1, \{x_{t-1},x_{t}\})=0$, it is easy to see that $x_a\rightarrow y_1$. From $a=t-2\geq 2$ it follows that $d^-(x_{n-k-5}, \{x_{1},x_2,\ldots , x_{a-1}\})=0$. Therefore by (13), there exist $s\in [1,a-1]$ and $j\in [t-1,t]$ such that $x_s\rightarrow x_j$. Then by (2), $z\rightarrow x_{s+1}$ and $C(z)=x_1x_2\ldots x_sx_{j}x_tx_{n-k-5}zx_{s+1}\ldots x_a y_1y_2\ldots y_{k+4}x_1$ is a cycle of length at least $n-1$, a contradiction. Let now $a=t-1$. Recall that $b=t+1=n-d-1$. Then from $a=t-1\geq 2$ we have that $d^-(x_{n-d-1}, \{x_{1},x_2,\ldots , x_{a-1}\})=0$. This together with (13) implies that for some $s\in [1,t-2]$, $x_s\rightarrow x_t$. It is easy to see that $z\nrightarrow x _{s+1}$ (for otherwise, $z\rightarrow x_{s+1}$ and $C_n(z)=x_1x_2\ldots x_sx_{t}zx_{s+1}\ldots x_{t-1}x_{t+1} y_1y_2\ldots y_dx_1$, a contradiction). From (2), Claim 5(ii) and $z\nrightarrow x_{s+1}$ it follows that $d=k+3$ and $x_{t-1}\rightarrow y_1$. If $s\leq t-3$, then $z\rightarrow x_{s+2}$ and $C_{n-1}(z)=x_1x_2\ldots x_sx_{t}z x_{s+2}\ldots x_{t-1}x_{t+1} y_1y_2\ldots y_dx_1$, a contradiction. Thus, we may assume that $s=t-2$. If $t-2\geq 2$, then we have that $A(\{x_1,x_2,\ldots , x_{t-2}\} \rightarrow \{x_t,x_{t+1}=x_{n-k-4}\})=\emptyset$. Therefore by (13), there is $p\in [1,t-3]$ such that $x_p\rightarrow x_{t-1}$ and $z\rightarrow x_{p+1}$. If $l\leq t-2$, then $x_{t-2}\rightarrow y_1$ and $C_{n}(z)=x_1x_2\ldots x_px_{t-1} x_{n-k-4}zx_{p+1}\ldots x_{t-2} y_1y_2\ldots y_{k+3}x_1$, a contradiction. Assume that $l=t-1$. Then $y_1\rightarrow x_{p+1}$ and $C_{n-k-2}=x_1x_2\ldots x_px_{t-1}x_{t+1}y_1x_{p+1}\ldots x_{t-2}x_tzx_1$, a contradiction. Finally assume that $t-2=1$. Then $n-k-4=4$ and $d(x_3, Y)=0$. Therefore, $n+k\leq d(x_3)\leq 8$ and $n\leq 8$, which contradicts that $n\geq 9$. This completes the discussion of Subcase A.1.2. \\ \textbf{Subcase A.2.} $z\nrightarrow x_{a+1}$. From $z\nrightarrow x_{a+1}$, Claim 5(i), (1) and (2) it follows that $d=k+3$, $x_{b-1}\rightarrow z$ and $$ \{x_{t},x_{t+1},\ldots , x_{n-k-4}\}\rightarrow z\rightarrow \{x_1,x_2,\ldots , x_a, x_{a+2},x_{a+3}, \ldots , x_t\}. \eqno (14) $$ Assume first that $$ A(\{x_1,x_2,\ldots , x_{t-2}\}\rightarrow \{x_{t+1},x_{t+2},\ldots , x_{n-k-4}\})=\emptyset. \eqno (15) $$ Then $a=t-1$, i.e., $x_{t-1}\rightarrow x_b$. Using (14), $d^+(z)\geq 2$ and Claim 2, we obtain that $t-1\geq 2$. From (13) and (15) it follows that there exists $s\in [1,t-2]$ such that $x_s\rightarrow x_t$. Then, since $z\rightarrow x_{s+1}$, $C_n(z)=x_1x_2\ldots x_{s}x_t\ldots x_{b-1}zx_{s+1}\ldots x_{t-1}x_b\ldots x_{n-k-4}y_1y_2\ldots y_{k+3}x_1$, a contradiction. Assume next that (15) is not true. Then we may assume that $a\leq t-2$. Note that $z\rightarrow \{x_{a+2},\ldots , x_t\}$ (by (14)). 
If $y_i\rightarrow x_{a+1}$ with $i\in [1,k+3]$, then the cycle $C(z)=x_1x_2\ldots x_{a}x_b\ldots x_{n-k-4}y_1\ldots y_ix_{a+1}\\ \ldots x_{b-1}zx_1$ has length at least $n-k-2$, a contradiction. We may therefore assume that $d^-(x_{a+1},Y)=0$. Let $b\geq t+2$. Then, since $d=k+3$ and $t\geq l$, from Claim 5(ii) it follows that for some $j\in [b-2,b-1]$, $x_j\rightarrow y_1$. Then the cycle $C(z)=x_1x_2\ldots x_{a}x_b\ldots x_{n-k-4}zx_{a+2}\ldots x_jy_1y_2\ldots y_{k+3}x_1$ has length at least $n-2$, a contradiction. Let now $b=t+1$. We claim that $l\leq t-1$. Assume that this is not the case, i.e., $l=t$. Then using Claim 5(ii) and the facts that $d=k+3$, $d^-(x_{a+1}, Y)=0$, we obtain that $x_t\rightarrow y_1$. Therefore, $C_{n-1}(z)=x_1x_2\ldots x_ax_{t+1}\ldots x_{n-k-4}zx_{a+2}\ldots x_{t}y_1\ldots y_{k+3}x_1$, a contradiction. This shows that $l\leq t-1$. From $a\leq t-2$, $l\leq t-1$, $x_t\nrightarrow y_1$ and Claim 5(ii) it follows that $x_{t-1}\rightarrow y_1$. Therefore, if $a\leq t-3$, then $C_{n-2}(z) =x_1x_2\ldots x_ax_{t+1}\ldots x_{n-k-4}zx_{a+2}\ldots x_{t-1}y_1y_2\ldots y_{k+3}x_1$, a contradiction. We may therefore assume that $a=t-2$. Assume first that $a\geq 2$. Since $a=t-2$, we have $$ A(\{x_1,x_2,\ldots , x_{t-3}\}\rightarrow \{x_{t+1},x_{t+2},\ldots , x_{n-k-4}\})=\emptyset. $$ This together with (13) implies that there exist $s\in [1,a-1=t-3]$ and $p\in [t-1,t]$ such that $x_s\rightarrow x_p$. Then by (14), the cycle $C(z)=x_1x_2\ldots x_sx_px_tzx_{s+1}\ldots x_{t-2}x_{t+1}\ldots x_{n-k-4}y_1y_2\ldots y_{k+3}x_1$ has length at least $n-1$, a contradiction. Assume next that $a=1$. Then $t=3$. Let $t+1\leq n-k-5$. Since $b=t+1$, we have $d^+(x_{1}, \{x_{t+2},x_{t+3},\ldots , x_{n-k-4}\})=0$. Again using (13), we obtain that there exist $p\in [t-1,t]$ and $q\in [t+2,n-k-4]$ such that $x_p\rightarrow x_q$. Recall that $z\rightarrow x_t$ and $x_{q-1}\rightarrow \{z,y_1\}$. Therefore, if $p=t$, then $C_{n-1}(z)=x_1x_{t+1}\ldots x_{q-1}zx_tx_q\ldots x_{n-k-4}y_1y_2\ldots y_{k+3}x_1$, and if $p=t-1$, then $C_{n}(z)=x_1x_2\ldots x_{t-1}x_q\ldots x_{n-k-4}zx_t\ldots x_{q-1}y_1y_2\ldots y_{k+3}x_1$. Thus, in both cases, we have a contradiction. This completes the discussion of Subcase A.2, and also completes the proof of the theorem when $D\langle Y\rangle$ is strong.\\ \textbf{Subcase B.} $D\langle Y\rangle$ is not strong. Since $y_1y_2\ldots y_{d}$ is a path in $D\langle Y\rangle$ and $k+3\leq d\leq k+4$, using the fact that every vertex $y_i$ with $i\in [1,d]$ cannot be inserted into $P$ (Claim 4) and Lemma 3.2, we obtain $d(y_i, V(P))\leq n-d$, $d(y_i, Y)\geq d+k$, $d=k+4$ and $k=0$. Therefore, $d(y_i, V(P))\leq n-4$ and $d(y_i, Y)\geq 4$. Since $D\langle Y\rangle$ is not strong and $y_1y_2y_3y_4$ is a path in $D\langle Y\rangle$, it is not difficult to check that for all $i\in [1,4]$, $d(y_i,Y)=4$, $d(y_i,V(P))=n-4$,the arcs $y_1y_3, y_1y_4,y_2y_1, y_2y_4,y_4y_3$ also are in $A(D)$ and $A(\{y_3,y_4\}\rightarrow \{y_1,y_2\})=\emptyset$. Since $D$ has no cycle of length at least $n-2$ and any vertex $y_i$ with $i\in [1,4]$ cannot be inserted into $P=x_1x_2\ldots x_{n-5}$, using Lemma 3.3, it is not difficult to show that there are two integers $l_1$ and $l_2$ with $2\leq l_1, l_2\leq n-6$ such that $$ \left\{ \begin{array}{lc}\{x_{l_1},\ldots ,x_{n-5}\}\rightarrow \{y_1,y_2\}\rightarrow \{x_1,\ldots , x_{l_1}\}, \\ \{x_{l_2},\ldots ,x_{n-5}\}\rightarrow \{y_3,y_4\} \rightarrow \{x_1,\ldots , x_{l_2}\}. \\ \end{array} \right. 
\eqno (16) $$ It is easy to see that $l_1\geq l_2$. Indeed, if $l_1\leq l_2-1$, then from (16) it follows that $x_{l_1}\rightarrow y_1$, $y_4\rightarrow x_{l_1+1}$ and hence, $C_n(z)=x_1\ldots x_{l_1}y_1y_2y_3y_4x_{l_1+1}\ldots x_{n-5}zx_1$, a contradiction. Since $D$ is 2-strong, (16) together with (2) implies that there are two integers $p\in [1,l_2-1]$ and $q\in [l_2+1,n-5]$ such that $x_p\rightarrow x_q$ (for otherwise $D-x_{l_2}$ is not strong). Assume first that $l_2\leq t$. Then from (2) and (16), respectively, we have $z\rightarrow x_{p+1}$ and $x_{q-1}\rightarrow y_3$. Therefore, $C_{n-2}(z)=x_1\ldots x_px_q\ldots x_{n-5}zx_{p+1}\ldots x_{q-1}y_3y_4x_1$, a contradiction. Assume next that $l_2\geq t+1$. Then by (16), $y_4\rightarrow x_{p+1}$, and by (2), $x_{q-1}\rightarrow z$. Therefore, $C(z)=x_1x_2\ldots x_px_q\ldots x_{n-5}y_1\ldots y_4x_{p+1}\ldots x_{q-1}zx_1$ is a Hamiltonian cycle in $D$, which is a contradiction. This completes the discussion of Case B. The theorem is proved.} \end{proof} \textbf{Acknowledgements} I am grateful to Professor Gregory Gutin for motivating me to present the complete proof of Theorem 1.7. I also thank Dr. Parandzem Hakobyan for formatting the manuscript of this paper. \end{document}
\begin{document} \begin{center} \textbf{A~CHARACTERIZATION OF~ROOT CLASSES OF~GROUPS} \end{center} \begin{center} \textsc{E.~V.~Sokolov} \end{center} \begin{abstract} We prove that a~class of~groups is root in~the~sense of~K.~W.~Gruenberg if, and~only if, it is closed under subgroups and~Cartesian wreath products. Using this result we obtain a~condition which is sufficient for the~generalized free product of~two nilpotent groups to~be residually solvable. \footnotetext{\textit{Key words and phrases:} root class of~groups, generalized free product, residual solvability, nilpotent group.} \footnotetext{\textit{2010 Mathematics Subject Classification:} primary 20E22; secondary 20E26, 20E06, 20F18.} \end{abstract} Recall that a~class of~groups $\mathcal{C}$ is called root if~it satisfies the~following conditions: 1)\hspace{0.5em}an~arbitrary subgroup of~a \hbox{$\mathcal{C}$-}group is also a~\hbox{$\mathcal{C}$-}group; 2)\hspace{0.5em}the~direct product of~any two \hbox{$\mathcal{C}$-}groups belongs to~$\mathcal{C}$; 3)\hspace{0.5em}the~Gruenberg condition: for any group~$X$ and~for any subnormal sequence $Z \leqslant Y \leqslant X$ with~factors in~$\mathcal{C}$, there exists a~normal subgroup~$T$ of~$X$ such that $T \leqslant Z$ and~$X/T \in \mathcal{C}$. It is easy to~see that the~classes of~all finite groups, of~all finite $p$-groups, and~of~all solvable groups are root. The class of~all nilpotent groups fails to~be root because it does not satisfy the~Gruenberg condition. The notion of~the root class was introduced by~K.~W.~Gruenberg~\cite{li01} in~connection with~the~study of~residual properties of~solvable groups. D.~N.~Azarov and~D.~Tieudjo~\cite{li02} proved that every free group is residually a~\hbox{$\mathcal{C}$-}group for any nontrivial (i.~e., containing at least one group with more than one element) root class of~groups. It follows from this statement and~the~results of~K.~W.~Gruenberg~\cite{li01} that, for every nontrivial root class of~groups $\mathcal{C}$, the~free product of~an arbitrary number of~residually \hbox{$\mathcal{C}$-}groups is itself residually a~\hbox{$\mathcal{C}$-}group. The property `to~be residually a~\hbox{$\mathcal{C}$-}group', where $\mathcal{C}$~is an~arbitrary root class of~groups, was studied in~\cite{li03} and~\cite{li04} in~relation to~generalized free products and~HNN-extensions. The results of~these papers show that many conditions known to~be necessary or~sufficient for a~given group to~be residually a~\hbox{$\mathcal{C}$-}group for some concrete root class~$\mathcal{C}$ remain true as well in~the~case when $\mathcal{C}$~is an~arbitrary root class of~groups. The~definition given by~K.~W.~Gruenberg does not make it easy to~describe all root classes of~groups explicitly. The~main aim of~this paper is to~give another, simpler, characterization of~root classes. It is easy to~see that the~second condition in~the~definition of~the root class follows from the~first and~the~third and, thus, is redundant. As~for the~Gruenberg condition, it can be replaced by~a~clearer criterion, as~the~next theorem shows. \textbf{Theorem~1.}~Let $\mathcal{C}$ be a~class of~groups closed under taking subgroups. Then the~following statements are equivalent: 1.\hspace{0.5em}$\mathcal{C}$ satisfies the~Gruenberg condition (and, hence, is root). 2.\hspace{0.5em}$\mathcal{C}$ is closed under Cartesian wreath products.
3.\hspace{0.5em}$\mathcal{C}$ is closed under extensions and, for any two groups $X, Y \in \mathcal{C}$, contains the~Cartesian product $\prod_{y \in Y}X_{y}$, where $X_{y}$~is an~isomorphic copy of~$X$ for every $y \in Y$. The next three statements follow immediately from Theorem~1. \textbf{Corollary~1} \cite{li05}. If a~class of~groups consists only of~finite groups, then it is root if, and~only if, it is closed under subgroups and~extensions.~$\square$ \textbf{Corollary~2}. The~intersection of~any two root classes of~groups is again a~root class of~groups.~$\square$ \textbf{Corollary~3.} If~$\mathcal{C}$ is a~root class of~groups, then the~class $\mathcal{C}_{0}$ of~all torsion-free \hbox{$\mathcal{C}$-}groups is root too.~$\square$ The theorem given below serves as an~example of~application of~Theorem~1 to~the~study of~residual properties of~generalized free products of~groups. \textbf{Theorem~2.} Let $\mathcal{C}$ be a~nontrivial root class of~groups closed under taking quotient groups. Let also $G = \langle A * B;\ H = K,\ \varphi \rangle$ be the~free product of~nilpotent \hbox{$\mathcal{C}$-}groups~$A$ and~$B$ with~subgroups $H \leqslant A$ and~$K \leqslant B$ amalgamated according to~an~isomorphism~$\varphi$. Suppose that $A$ and~$B$ possess central series $$ 1 = A_{0} \leqslant A_{1} \leqslant \ldots \leqslant A_{n} = A,\ 1 = B_{0} \leqslant B_{1} \leqslant \ldots \leqslant B_{n} = B $$ such that $(A_{i} \cap H)\varphi = B_{i} \cap K$ for every $i \in \{1, 2, \ldots, n\}$. Then there exists a~homomorphism of~$G$ onto~a~solvable \hbox{$\mathcal{C}$-}group which is injective on~$A$ and~$B$. In~particular, $G$ is residually a~\hbox{$\mathcal{CS}$-}group, where $\mathcal{CS}$ is the~class of~all solvable \hbox{$\mathcal{C}$-}groups. We note that Theorem~2 strengthens and~generalizes Theorem~8 from \cite{li06}, which states that $G$ is poly-(residually solvable). \textit{Proof of~Theorem~1.} $1 \Rightarrow 2$. Let $X$, $Y$ be arbitrary \hbox{$\mathcal{C}$-}groups, and~let $B$ be the~Cartesian product of~isomorphic copies of~$X$ indexed by~the~elements of~$Y$ (i.~e. $B$ is the~set of~all functions mapping $Y$ into~$X$ with~the~coordinate-wise multiplication). Let also $W = X \wr Y$ be the~Cartesian wreath product of~$X$ and~$Y$. We need to~show that $W \in \mathcal{C}$. Recall that $W$ is the~set $Y \cdot B$ with~the~operation defined by~the~rule $y_{1}b_{1}y_{2}b_{2} = y_{1}y_{2}b_{1}^{y_2}b_{2}$, where $b_{1}^{y_2} \in B$ is the~function mapping $y$ to~$b_{1}(y_{2}y)$ for every $y \in Y$. From this definition it follows that $B$ is normal in~$W$ and~$W/B \cong Y$. Let $A = \{b \in B \mid b(1) = 1\}$. Then $A$ is normal in~$B$ and~$B/A \cong X$. Since $W/B \cong Y$, $A \leqslant B \leqslant W$ is a~subnormal sequence with~\hbox{$\mathcal{C}$-}factors, and, by~the~Gruenberg condition, there exists a~normal subgroup~$T$ of~$W$ such that $T \leqslant A$ and~$W/T \in \mathcal{C}$. As~$T$ is normal in~$W$, it is contained in~the~subgroup $$ A^{y} = \{b \in B \mid b(y) = 1\} $$ for any $y \in Y$. But~$\bigcap_{y \in Y}A^{y} = 1$, hence, $T = 1$ and~$W \in \mathcal{C}$ as~required. $2 \Rightarrow 3$. Let $X$, $Y$ be arbitrary \hbox{$\mathcal{C}$-}groups, $W = X \wr Y$, and~$B = \prod_{y \in Y}X_{y}$, where $X_{y}$ is an~isomorphic copy of~$X$ for every $y \in Y$. Then $W \in \mathcal{C}$, $B \leqslant W$, and~$B \in \mathcal{C}$ since $\mathcal{C}$ is closed under subgroups.
Further, if~$Z$ is an~extension of~$X$ by~$Y$, then, by~the~theorem of~Frobenius, $Z$ embeds in~$W$ and,~therefore, belongs to~$\mathcal{C}$. $3 \Rightarrow 1$. Let~$X$ be a~group, and~let $Z \leqslant Y \leqslant X$ be a~subnormal sequence with~\hbox{$\mathcal{C}$-}factors. We put $T = \bigcap_{s \in S}Z^{s}$, where $S$ is a~system of~coset representatives of~$Y$ in~$X$, and~show that~$T$ is the~required subgroup. It is obvious that~$T$ is a~normal subgroup of~$X$ lying in~$Z$. The quotient group $Y/T$ embeds in~the~Cartesian product $P$ of~the quotient groups $Y/Z^{s}$, $s \in S$, by~the~theorem of~Remak. Each of~the~groups $Y/Z^{s}$ is isomorphic to~the~\hbox{$\mathcal{C}$-}group~$Y/Z$. Therefore, $P \in \mathcal{C}$, and~$Y/T \in \mathcal{C}$ since $\mathcal{C}$ is closed under subgroups. Thus, $Y/T \in \mathcal{C}$, $X/Y \in \mathcal{C}$, and~$X/T \in \mathcal{C}$ because $\mathcal{C}$ is closed under extensions.~$\square$ \textit{Proof of~Theorem~2.} We proceed by~induction on~$n$. If~$n = 1$, then there exists a~homomorphism of~$G$ onto~the~generalized direct product $P = \langle A \times B;\ H = K,\ \varphi \rangle$ extending the~natural inclusions of~$A$ and~$B$, and~this homomorphism is the~required one. Indeed, $P$ is isomorphic to~the~quotient group of~the direct product $A \times B$ by~the~subgroup $gp\{h(h\varphi)^{-1} \mid h \in H\}$. Since $\mathcal{C}$ is a~root class, $A \times B \in \mathcal{C}$. It remains to~note that $\mathcal{C}$ is closed under quotient groups and~so $P \in \mathcal{C}$. Let now $n > 1$, and~let $\bar\varphi\colon HA_{1}/A_{1} \to KB_{1}/B_{1}$ be a~map such that $(hA_{1})\bar\varphi = (h\varphi)B_{1}$ for every $h \in H$. It follows from the~equality $(A_{1} \cap H)\varphi = B_{1} \cap K$ that $\bar\varphi$ is a~well-defined isomorphism of~subgroups. Therefore, we can consider the~generalized free product $\bar G = \langle \bar A * \bar B;\ \bar H = \bar K,\ \bar\varphi \rangle$, where $\bar A = A/A_{1}$, $\bar B = B/B_{1}$, $\bar H = HA_{1}/A_{1}$, and~$\bar K = KB_{1}/B_{1}$. Since $\mathcal{C}$ is closed under quotient groups, $\bar A, \bar B \in \mathcal{C}$. It is easy to~see also that the~series \begin{gather*} 1 = A_{1}/A_{1} \leqslant A_{2}/A_{1} \leqslant \ldots \leqslant A_{n}/A_{1} = \bar A,\\ 1 = B_{1}/B_{1} \leqslant B_{2}/B_{1} \leqslant \ldots \leqslant B_{n}/B_{1} = \bar B \end{gather*} are $\bar\varphi$-compatible. Hence, $\bar G$ satisfies the~conditions of~the theorem and, by~the~induction hypothesis, there exists a~homomorphism of~this group onto~a~solvable \hbox{$\mathcal{C}$-}group~$Y$ which is injective on~$\bar A$ and~$\bar B$. Since $\mathcal{C}$ is closed under subgroups, $A_{1}, B_{1} \in \mathcal{C}$. Therefore, the~group \hbox{$G_{1} = \langle A_{1} * B_{1};\ H_{1} = K_{1},\ \varphi_{1} \rangle$,} where $H_{1} = H \cap A_{1}$, $K_{1} = K \cap B_{1}$, and \hbox{$\varphi_{1} = \varphi\vert_{H_1}$,} satisfies the~conditions of~the theorem too. Again by~the~induction hypothesis, there exists a~homomorphism of~this group onto~a~solvable \hbox{$\mathcal{C}$-}group~$X$ which is injective on~$A_{1}$ and~$B_{1}$. Now we use Lemma~2 from~\cite{li07}. By this lemma, there exists a~homomorphism~$\rho$ of~$G$ onto~$X \wr Y$ which is injective on~$A$ and~$B$. $X \wr Y$ is a~solvable group and, by~Theorem~1, belongs to~$\mathcal{C}$. Hence, $\rho$ is the~required homomorphism. By the~condition of~the theorem, $\mathcal{C}$ contains at least one nontrivial group. All cyclic subgroups of~this group and~their quotient groups belong to~$\mathcal{C}$ too.
Hence, $\mathcal{C}$ includes at least one cyclic group of~prime order, say~$p$. It is well known that every finite $p$-group possesses a~normal series with~factors of~order~$p$. Taking into~account that $\mathcal{C}$ is closed under extensions, we conclude that all finite $p$-groups are contained in~$\mathcal{C}$. Let $N = \ker\rho$. Since $N \cap A = N \cap B = 1$, by~the~theorem of~H.~Neumann~\cite{li08}, $N$ is free. As is well known, free groups are residually $p$-finite for any prime~$p$. Hence, $N$ is residually a~\hbox{$\mathcal{CS}$-}group. Thus, $G$ is an~extension of~the residually \hbox{$\mathcal{CS}$-}group~$N$ by~the~\hbox{$\mathcal{CS}$-}group $G/N$. By Corollary~2, the~class of~all solvable \hbox{$\mathcal{C}$-}groups is root. Therefore, $G$ is a~residually \hbox{$\mathcal{CS}$-}group by~Lemma~1.5 from~\cite{li01}.~$\square$ \end{document}
\begin{document} \title{Surfaces and their Profile Curves} \date{\today} \author{Joel Hass} \address{University of California, Davis, California 95616} \email{[email protected]} \thanks{This work was carried out while the author was visiting the Oxford Mathematical Institute and ISTA, and was a Christensen Fellow at St. Catherine's College. Research partially supported by NSF grant DMS:FRG 1760485 and BSF grant 2018313} \subjclass{Primary 57M50; Secondary 57M25, 65D17, 68U05} \keywords{profile curve, contour generator, silhouette, apparent contour, knot, 3-manifold, knotted surface} \begin{abstract} This paper examines the relationship between the profile curves of a surface in ${\mathbb{R}}^3$ and the isotopy class of the surface. \end{abstract} \maketitle \section{Profile curves} The points on a smoothly embedded surface in ${\mathbb{R}}^3$ that have vertical tangent planes form a collection of {\em profile curves}. These curves are often the most prominent features of a surface image, as indicated in Figure~\ref{profileseg}, and play an important role in image analysis and surface reconstruction. \begin{figure} \caption{Profile curves can be dominant features of a surface image.} \label{profileseg} \end{figure} In this paper we study the relationship between a profile curve and the surface on which it lies. Profile curves occur where the orthogonal projection of a smooth embedded surface to the $xy$-plane fails to be an immersion. Whitney determined the local singularities of maps from a surface to the plane \cite{Whitney}. Haefliger then determined when such a singularity set results from following an immersion of a surface into ${\mathbb{R}}^3$ by orthogonal projection to the plane \cite{Haefliger}, see also \cite{PlantingaVegter}. For an open, everywhere-dense subset of surface maps to the plane, the singularities of the projection consist of points where the map has either a fold along a smooth arc, or a cusp at an isolated point, as shown in Figure~\ref{fig:cusp}. Cusps are discussed further in Section~\ref{curvescusps}. We call surface projections with these properties {\em generic}. Any smoothly embedded (or immersed) surface in ${\mathbb{R}}^3$ can be perturbed slightly so that its projection to the $xy$-plane is generic. \begin{figure} \caption{The local singularities of generic surface projections, consisting of folds and cusps, occur along smooth profile curves on the surface. These project to piecewise-smooth curves in the plane.} \label{fig:cusp} \end{figure} Profile curves play an important role in the imaging, visualization and reconstruction of an object in three dimensions. They are referred to by a variety of terms, including the {\em contour generator}, {\em critical set}, {\em rim}, and {\em outline} of a surface. In some contexts, such as with images of transparent objects, the profile curves are clearly visible while the rest of the surface is not clearly seen. Thus profile curves play an important role in image analysis and surface reconstruction. The {\em profile curve projection}, the projection to the plane of a profile curve, is variously called the {\em apparent contour}, {\em occluding contour}, {\em silhouette} or {\em extremal boundary} of the surface \cite{BoyerBerger, CipollaBlake, FukudaYamamoto, Giblin, Pignoni, PlantingaVegter, VaillantFaugeras}. The {\em reconstruction problem} seeks to reconstruct a surface in ${\mathbb{R}}^3$ (up to some equivalence) from the full collection of profile curve projections.
Generally this cannot be done without additional data, such as a labeling indicating the number of sheets above each arc, and then the problem can be solved algorithmically \cite{Bellettini}. Profile curves also play a role in understanding surfaces in ${\mathbb{R}}^4$, which can be projected first to ${\mathbb{R}}^3$ and then to the plane \cite{Carter}. Our focus is rather different, centering on the relationship between a profile curve and the isotopy type of its generating surface. We work in two settings, topological and geometric. We first ask what surfaces can generate a profile curve with a given knot type. As an example, the knotted torus in Figure~\ref{trefoils} has two parallel profile curves, each a trefoil knot. This might suggest that knotted surfaces always generate knotted profile curves, but this is far from true. In Section~\ref{knotted} we show that any genus-$g$ surface, no matter how convoluted its embedding in ${\mathbb{R}}^3$, is isotopic to a surface whose profile curves form an unlink with $g+1$ components. So any surface can be deformed so that it generates a collection of profile curves forming an unlink. \begin{figure} \caption{A knotted torus in ${\mathbb{R}}^3$ with two parallel profile curves, each a trefoil knot.} \label{trefoils} \end{figure} We then ask if it is possible for unknotted surfaces to generate knotted profile curves. In Section~\ref{unknotted} we show that this is always possible, and can even be achieved with a surface of smallest possible genus. Theorem~\ref{writhe} states that given a smooth curve $\gamma$ embedded on a surface in ${\mathbb{R}}^3$, we can isotop the curve and the surface so that the curve becomes a profile curve. Moreover this can be done without moving the curve $\gamma$ if and only if a simple obstruction vanishes. Thus there is no general connection between the isotopy class of a profile curve and that of a surface that generates it, but there is an obstruction to obtaining a particular geometric realization of a curve. In Section~\ref{curvescusps} we extend this result to curves whose projections have cusps. In Section~\ref{restrictions} we give some applications of these results. As a consequence of Theorem~\ref{writhe}, we show that the unknotted curve in Figure~\ref{figeight}, which can be embedded on a sphere and also on a knotted torus, cannot be a profile curve for either of these surfaces. \begin{figure} \caption{This curve lies on a sphere, but cannot be the profile curve of a sphere or a knotted torus projected to the viewing plane.} \label{figeight} \end{figure} In contrast, the knotted curve in Figure~\ref{profiletrefoil} can be realized as a profile curve for either a knotted or an unknotted torus. \begin{figure} \caption{A knotted profile curve generated by an unknotted torus.} \label{profiletrefoil} \end{figure} Observations of this type are of interest when studying surfaces of unknown topology using images that reveal their profile curves. For example, if an image seen through a microscope exhibits a profile curve of the type shown in Figure~\ref{figeight}, then it follows that the generating surface is not homeomorphic to a 2-sphere. In addition, we can conclude that the generating surface is not a knotted torus. While we look at individual profile curves generated by a surface, for some purposes it makes sense to look at the entire collection of profile curves. Note that this full collection of profile curves is not by itself sufficient to determine the topology of the surface generating the curves.
Figure~\ref{profilet} shows two tori that have identical profile curves, shown at right, when projected in the indicated direction. The first torus is knotted while the second is unknotted. The same pair of profile curves is also generated by a pair of disjoint 2-spheres. \begin{figure} \caption{A knotted and unknotted torus sharing the same pair of profile curves, shown at right.} \label{profilet} \end{figure} To use profile curves to reconstruct surfaces in ${\mathbb{R}}^3$ requires additional assumptions, or additional geometric or topological information. Algorithms for reconstructing surfaces in ${\mathbb{R}}^3$ from the planar projections of their profile curves are discussed in \cite{Bellettini}, using labels on arcs of the projection curves that indicate surface multiplicities. See also \cite{Hacon, Hacon2, MenascoNichols}. \section{Any surface generates a trivial profile curve link after isotopy} \label{knotted} We say that a genus-$g$ surface is {\em standardly embedded} if it is formed by taking a tubular neighborhood of a planar graph in the $xy$-plane. The profile curve projections form an unlink consisting of $g+1$ disjoint embedded loops. A surface in ${\mathbb{R}}^3$ is {\em unknotted} if it is isotopic to a standardly embedded surface. Figure~\ref{fig:profile1} shows a standardly embedded torus and the projections of its profile curves. The profile curves of a standardly embedded genus-$g$ surface form an unlink with $g+1$ components. \begin{figure} \caption{Profile curves of a standardly embedded torus and their planar projections.} \label{fig:profile1} \end{figure} Our first result shows that, after an isotopy, any surface can be positioned so that it generates a collection of profile curves that form an unlink with $g+1$ components. The isotopy class of the full collection of profile curves carries no information about the knotting of its generating surface. \begin{theorem} An embedded genus-$g$ surface $F \subset {\mathbb{R}}^3$ can be isotoped so that its profile curves form an unlink with $g+1$ components. \end{theorem} \begin{proof} If $F$ is a 2-sphere then it is isotopic to a round sphere and the result follows, so we assume the genus of $F$ is positive. The Loop Theorem implies that $F$ is compressible \cite{Papakyriakopoulos}. It follows that a genus $g$ surface $F$ can be compressed repeatedly until it is reduced to a collection of 2-spheres. Reversing this process, a surface isotopic to $F$ can be constructed by starting with a collection of disjoint round 2-spheres, each located so that its equator is a profile curve lying in the $xy$-plane, and then successively adding thin tubes to create a surface $F_1$. Each of the added tubes is a neighborhood of a near horizontal arc that starts and ends on the equator of a 2-sphere. These tubes can lie on either side of a 2-sphere, and tubes can run through previously constructed tubes. The resulting surface $F_1$ is isotopic to $F$, and its profile curves are formed from arcs on the equators of the 2-spheres joined to two arcs for each tube, as in Figure~\ref{standard}. \begin{figure} \caption{Any surface can be isotoped so that its profile curves are equators of spheres banded together along near horizontal bands.
The curves run along tubes that can start on either side of the 2-spheres, and one tube can run through a second tube.} \label{standard} \end{figure} We claim that at each stage of the construction, a connected component of genus $g$ has $g+1$ profile curves, and that each tube has two profile curves running over it, one of which is {\em special}, in that it runs over no other tube. Initially there is a single component, a sphere with one equatorial profile curve, so the number of profile curves equals $1$, with $g=0$ being the genus of the component, and our claim holds. Adding a disjoint 2-sphere creates a new component, and each component still satisfies the claim. If we add a tube that starts and ends on the same component, then an isotopy slides the two attaching points of the arc defining the tube so that they are adjacent on the equator of a sphere and the arc is nearly horizontal. The new tube then generates a new profile curve that runs once over it, and is disjoint from other tubes. Moreover previous special profile curves are not changed by this tube addition. The genus of the component with the added tube is increased by one and its number of profile curves increases by one, so a component of genus $g$ again has $g+1$ profile curves. Finally we consider the effect of adding a tube connecting two distinct components of genus $n_1$ and $n_2$. The tube is a neighborhood of an arc running from the equator of one sphere to that of a second sphere, away from the special profile curves. This tube addition joins two profile curves and so decreases the number of profile curves by one, while the genus of the two components adds. The resulting new connected component has $(n_1 +1) + (n_2 +1) -1 = (n_1+n_2)+1$ profile curves and genus $n_1 + n_2$, so the claim continues to hold. When all tubes are added, the profile curves of the final connected surface form a link with $g+1$ components, and this link contains $g$ special profile curves, each running once over a tube. We now describe an isotopy that transforms the profile curves of $F_1$ to an unlink. This isotopy is supported in a neighborhood of a meridian of the $g$ tubes that have special profile curves running over them. The isotopy starts by taking meridian curves for each of the $g$ tubes in turn, and rotating the tube near the meridian, along with its interior and any tubes going through it, to create a horizontal neck. The resulting surface has a new profile curve going around this neck, as in Figure~\ref{move}. \begin{figure} \caption{A local isotopy and its effect on a surface's profile curves.} \label{move} \end{figure} During this isotopy two profile curves going over the tube have been combined and one new profile curve has been created, so the total number of profile curves remains at $g+1$. This isotopy is repeated in turn for each of the $g$ tubes. If other tubes go through the interior of this tube, they can be positioned so that they remain near horizontal after the isotopy and so that the profile curves that run along them are moved by an isotopy. Repeating this process for the selected $g$ tubes results in a surface isotopic to $F$ whose profile curves form a link with $g+1$ components. To see that the resulting link is trivial, start with an innermost tube. The profile curve arcs on this tube can be isotoped inside the tube till they lie on the 2-sphere to which the tube is attached.
This can be done successively on a sequence of tubes which are innermost relative to the link of profile curves, until the entire collection of profile curves has been isotoped to a union of disjoint curves lying on spheres. These form an unlink. \end{proof} \section{Curve to Profile} \label{unknotted} In this section we determine when an isotopy of a surface can turn a curve on the surface into a profile curve. An example of this question is whether a curve representing a trefoil knot can be a profile curve for an unknotted torus. We fix some terminology. Let $F \subset {\mathbb{R}}^3$ be a smooth embedded orientable surface with normal unit vector field $\nu_F$. If $\gamma$ is an oriented regular curve on $F$, then the {\em surface normal framing} of $\gamma$ is the restriction of $\nu_F$ to $\gamma$. This framing can be used to produce a push-off curve in ${\mathbb{R}}^3$, $ \gamma_+ = \gamma + \epsilon \nu_F$ for $\epsilon>0$ a small constant. Define the {\em surface linking number} $\lambda (\gamma, F)$ of $\gamma$ in $F$ to be the linking number of $\gamma_+$ and $\gamma$. The value of $\lambda (\gamma, F)$ is independent of the orientation of $F$ or $\gamma$, and of $\epsilon$ for $\epsilon$ sufficiently small. Moreover $\lambda (\gamma, F)$ is preserved by an isotopy of $F$, since linking number is preserved by isotopy. Now consider an oriented smooth curve $\gamma$ in ${\mathbb{R}}^3$ with a finite number of vertical tangencies, at which the curvature and torsion are non-zero. Profile curves of generic surfaces satisfy these properties. The {\em blackboard framing } of $\gamma$ is the unit horizontal vector field $\beta$ along $\gamma$ that is normal to $\dot \gamma$. It is obtained by lifting to $\gamma$ a continuous unit planar vector field normal to the projection $\pi(\gamma)$ in the $xy$-plane. Note that at a cusp the projection $\pi_*(\beta) $ flips from one side of $\pi(\gamma)$ to the other, as in Figure~\ref{pushoff}. \begin{figure} \caption{The projection of a curve $\gamma$ near a cusp, and a planar vector field that lifts to the blackboard framing.} \label{pushoff} \end{figure} Let $\gamma_+ = \gamma + \epsilon \beta$, for $\epsilon >0$, be a push-off of $\gamma$ relative to this framing. The linking number of $\gamma_+$ and $\gamma$ is an integer $w(\gamma)$, called the {\em writhe}, and is independent of $\epsilon$ for $\epsilon$ sufficiently small. The writhe is not an isotopy invariant. We now show that starting with a surface $F_0$ and a smooth, embedded curve $\gamma \subset F_0$, there exists an isotopy, fixing $\gamma$, from $F_0$ to a surface $F_1$ for which $\gamma$ is a profile curve if and only if the writhe of $\gamma$ is equal to $\lambda (\gamma, F)$. We first establish this for curves whose projections have no cusps. \begin{theorem} \label{writhe} Let $\gamma$ be a smooth embedded curve on an embedded surface $F_0 \subset {\mathbb{R}}^3$ with $\dot \gamma$ not vertical. Then there is an isotopy from $F_0$ to a surface $F_1$ for which $\gamma$ is a profile curve, with $\gamma$ left fixed by this isotopy, if and only if $w(\gamma) = \lambda (\gamma, F_0)$. \end{theorem} \begin{proof} Suppose that there is an isotopy from $F_0$ to a surface $F_1$ that fixes $\gamma$ and that $\gamma$ is a profile curve of $F_1$. With appropriate orientations, the blackboard framing of $\gamma$ and the surface normal framing exactly coincide along a profile curve, with each a horizontal unit vector field along $\gamma$ normal to $F_1$. 
It follows that the writhe of $\gamma$ is equal to $\lambda (\gamma, F_1)$. Moreover an isotopy from $F_0$ to $F_1$ that fixes $\gamma$ preserves both invariants, so that $w(\gamma) = \lambda (\gamma, F_0)$. Now assume that $w(\gamma) = \lambda (\gamma, F_0)$. We construct an isotopy from $F_0$ to $F_1$ with support on an annular neighborhood $A$ of $\gamma$ that turns $\gamma$ into a profile curve. The annulus can be parametrized as $\{ A(t,s) , ~ 0 \le t \le 1, ~ -1 \le s \le 1 \}$ with $A(t,0) = \gamma(t)$ and $A(0,s) = A(1,s)$. The isotopy twists a neighborhood of $\gamma$ to make the surface normals along $\gamma$ horizontal, matching the blackboard framing, while leaving $\gamma$ and $\partial A$ fixed. Moreover in $F_1$ the transverse arc $\alpha_t(s) = \{ A(t, s), ~ -1 \le s \le 1 \} $ satisfies $\ddot \alpha_t(0) \cdot \nu_F >0 $ for $0 \le t \le 1$, so that $\alpha_t(s)$ is always convex in the same direction relative to the surface normal. After this isotopy, as $\gamma$ is traversed till it returns to its starting point in $F_1$, the annulus around $\gamma$ is rotated some number of times relative to its initial position, with the total number of rotations equal to the difference between the writhe and the surface linking number. When these two agree, $A(0,s)$ matches up with $A(1,s)$ and the boundary of the twisted annular neighborhood of $\gamma$ matches up with the rest of the surface. The isotopy fixes $\gamma$ throughout, and $\gamma$ ends up being a profile curve of $F_1$. \end{proof} The twisting process of the annular neighborhood of $\gamma$ taking place in Theorem~\ref{writhe} is illustrated in Figure~\ref{twist} for a (1,1) curve on an unknotted torus. In this example the writhe is zero and the surface linking number is one, so the conditions of the theorem do not apply. The initial arc $A(0,s)$ does not match up with the final arc $A(1,s)$, and the twisted annulus has a boundary that does not match up with the torus. The torus cannot be isotoped so that this (1,1) curve is a profile curve. However by isotoping the (1,1) curve on the torus to change its writhe to equal one, we can arrange for Theorem~\ref{writhe} to apply. \begin{figure} \caption{A (1,1) curve on a torus has an annular neighborhood along it twisted so that its tangent plane is always vertical. Five cross-sections indicate twisting of an annular neighborhood to turn the curve into a profile curve. The last does not match the first, due to the addition of a full twist while traversing the curve. An isotopy of the curve that changes its writhe, as in the second torus, resolves this issue.} \label{twist} \end{figure} \begin{corollary} \label{afterisotopy} Let $\gamma_0$ be a smooth embedded curve on an embedded surface $F_0 \subset {\mathbb{R}}^3$. Then $\gamma_0$ is isotopic in $F_0$ to a curve $\gamma_1$ which is a profile curve of a surface $F_1$ that is smoothly isotopic to $F_0$. \end{corollary} \begin{proof} Suppose $\gamma_0$ has surface linking number $\lambda$ and writhe $w$. Let $\gamma_1$ be obtained from $\gamma_0$ by an isotopy that performs $\lambda-w$ positive Reidemeister I moves on $\gamma_0$, if $\lambda \ge w$, or $ w-\lambda$ negative Reidemeister I moves if $w >\lambda $, as in Figure~\ref{twist}. Such an isotopy always exists, and does not change the knot type of $\gamma_0$, but the resulting curve $\gamma_1$ has writhe equal to its surface linking number.
Theorem~\ref{writhe} now applies, and there is an isotopy of the surface that fixes $\gamma_1$ and ends with $\gamma_1$ a profile curve. \end{proof} The {\em surface genus} of a curve $\gamma$ is the smallest genus of an unknotted surface in ${\mathbb{R}}^3$ that contains a curve isotopic to $\gamma$. \begin{corollary} A knot $K$ with surface genus $g(K)$ can be realized as a profile curve of a standardly embedded surface $F \subset {\mathbb{R}}^3$ having genus $g(K)$. \end{corollary} \begin{example} \label{multi} Any torus knot is realized as a profile curve generated by an unknotted torus. \end{example} \section{Curves with cusps} \label{curvescusps} In this section we extend Theorem~\ref{writhe} to curves whose projections have cusps. We start by reviewing the definition of a cusp. It follows from the work of Whitney and Haefliger that a generic smooth surface $F \subset {\mathbb{R}}^3$ has profile curves that are smooth and regular, so that the tangent vectors $\dot \gamma$ are never zero. Moreover $\dot \gamma$ is vertical at only a finite number of points, and these project to isolated cusps \cite{Whitney, Haefliger}, see also \cite{Arnold}, \cite{PlantingaVegter}. The local behavior of the projection of $F$ near $ \gamma \subset {\mathbb{R}}^3$ is modeled by two sheets that fold along the curve, except at the points where the profile curve is vertical. At a vertical tangency the projection of the surface has an ordinary cusp singularity, meaning that $\gamma(t)$ is modeled by the parametric curve $\gamma(t) = (t^2, t^3, t)$ with projection to the $xy$-plane given by $\pi(\gamma)(t) = (t^2, t^3)$. This curve has an ordinary cusp singularity at $t=0$. When we say that $\gamma$ projects to an ordinary cusp at a point where $\dot \gamma$ is vertical, we mean that after a smooth change of coordinates it has this form. We will drop the term ordinary since we only consider this type of cusp. The notions of writhe and surface linking number extend without change to curves whose projections have a finite number of cusps. We say that the {\em chirality} of a curve at a cusp is {\em right-handed} if the projection curve takes a right turn when proceeding through the cusp, and {\em left-handed} otherwise, as in Figure~\ref{cuspparity}. This notion is based on the orientation of the plane, not that of a surface $F$. \begin{figure} \caption{Right--handed and left-handed cusps in a profile curve projection.} \label{cuspparity} \end{figure} We now look at a smooth curve in ${\mathbb{R}}^3$ whose projection has a finite number of cusps. We consider the number and the chirality of the cusps that can occur if the curve is a profile curve generated by some surface. Haefliger considered a map from a surface to the plane and a planar curve that is a component of the image of the singular points of this map. He showed that such a map can be factored through an immersion of the surface into ${\mathbb{R}}^3$ followed by projection to the plane if the curve has an annular neighborhood on the surface and has an even number of cusps, or if it is orientation reversing on the surface and has an odd number of cusps \cite{Haefliger}. The following lemma considers a related, but somewhat different scenario. We start with a curve in ${\mathbb{R}}^3$, not initially associated to any surface map. We show that for this curve to be the profile curve of a projection of some surface requires the conditions specified by Haefliger and an additional condition involving cusp chirality. 
\begin{lemma} \label{chirality} A smooth curve $\gamma \subset {\mathbb{R}}^3$ is a profile curve generated by an annular neighborhood $A$ of $\gamma$ if and only if $\gamma$ has an even number of points that project to cusps, and each of these projects to a cusp with the same chirality. Similarly, $\gamma$ is a profile curve generated by a Mobius strip neighborhood of $\gamma$ if and only if it has an odd number of points that project to cusps, and each of these projects to a cusp with the same chirality. \end{lemma} \begin{proof} Suppose $\gamma$ is a profile curve with an annular neighborhood $A$. There are transverse arcs $ \alpha_t \subset A$ normal to $\gamma$ at $\gamma(t)$ for $\{ t_0 - \delta \le t \le t_0 + \delta\}$ a neighborhood of the cusp at $\gamma(t_0)$. Each transverse arc $\alpha_t$ has non-zero curvature except for $\alpha_{t_0}$, which passes through the cusp vertically. The planar projection of the curvature vector $\pi_*(\ddot \alpha_t)$ points towards the cusp (zero-angle) side of $\pi(\gamma)$. See Figure~\ref{forbidden}. \begin{figure} \caption{A cross section of a surface $F$ indicating the curvature vector of arcs transverse to $\gamma$ near a cusp. The projected curvature vector points towards the zero-angle side of the curve near a cusp. This forces all cusps in a profile curve projection to have the same chirality. Two adjacent cusps with opposite chirality, as shown on the curve at right, cannot occur in the projection of a profile curve.} \label{forbidden} \end{figure} The planar projection of the curvature vector of $\alpha_t$ flips direction relative to $\gamma$ when passing through the cusp, so that in both the incoming and outgoing arcs, $\pi_*(\ddot \alpha_t)$ points towards the cusp side of $\pi(\gamma)$. Since this flip occurs only at a cusp and the curvature matches up after a full traversal of $\gamma$, there must be an even number of cusps on each profile curve when $\gamma$ is orientation preserving, and an odd number otherwise. Two successive cusps must have the same chirality, as otherwise the projected curvature vector of $\alpha_t$ would point away from the cusp side as it approached one of them, as indicated in the rightmost configuration in Figure~\ref{forbidden}. It follows that all cusps on the projection of $\gamma$ have the same chirality. \end{proof} This has implications for image analysis. \begin{example} \label{multi2} If a curve in the plane contains a right-handed and a left-handed cusp then it is not the projection of a profile curve generated by an immersed surface in ${\mathbb{R}}^3$. \end{example} \begin{proof} Lemma~\ref{chirality} implies that the chirality of all cusps must be the same for the projection of a profile curve. \end{proof} An example of such a curve is shown in Figure~\ref{forbidden}. We now extend Theorem~\ref{writhe} to curves with cusps. \begin{theorem} \label{writhecusps} Let $\gamma$ be a piecewise-smooth embedded curve on an embedded surface $F_0 \subset {\mathbb{R}}^3$ that is smooth and non-vertical except at an even number of points which project to cusps, all of the same chirality. Then the following two statements are equivalent. \begin{enumerate} \item $F_0$ is isotopic to a surface $F_1$ for which $\gamma$ is a profile curve, with $\gamma$ fixed by this isotopy. \item The writhe of $\gamma$ is equal to its surface linking number in $F_0$. \end{enumerate} \end{theorem} \begin{proof} The proof is similar to that of Theorem~\ref{writhe}.
When isotoping the surface along $\gamma$ to turn it into a profile curve, it may be necessary to twist it above a cusp, but this will always be through an integer multiple of $\pi$ since the tangent plane of $F_0$ is already vertical at a point that projects to a cusp, so the local structure will be preserved. The total number of twists of the surface relative to the blackboard framing will be zero when $w(\gamma) = \lambda (\gamma, F_0)$. The convexity of the surface along the transverse arcs $\alpha_t$ will match at $t=0$ and $t=1$ because $F_0$ is oriented and there are an even number of points projecting to cusps along $\gamma$, at which the convexity direction flips. \end{proof} \section{Geometric obstructions} \label{restrictions} In this section we give examples showing restrictions imposed on a surface $F$ by the condition that it contains a given curve $\gamma$ as a profile curve. \begin{example} The writhe 1 curve $\gamma$ shown in Figure~\ref{figeight} cannot be a profile curve generated by a sphere. \end{example} \begin{proof} The surface linking number determined by the sphere for any curve lying on it is zero, since the curve bounds a disk on the sphere disjoint from a normal push-off. Thus the profile curves of a sphere cannot include a curve whose writhe is non-zero. \end{proof} An embedded planar loop can be a profile curve generated by either a knotted or unknotted torus. In contrast, a curve with a single crossing can only be generated as a profile curve by an unknotted torus. \begin{example} The curve $\gamma$ in Figure~\ref{figeight} cannot be a profile curve generated by a knotted torus. \end{example} \begin{proof} Suppose that the curve $\gamma$ is a profile curve. It does not bound a disk on the torus, since it has writhe one and a curve bounding a disk on a torus has surface linking number zero. Similarly $\gamma$ cannot represent a torus meridian, which also has surface linking number equal to zero. Thus $\gamma$ must represent a homotopically nontrivial, non-meridional curve on a possibly knotted torus. Since $\gamma$ is unknotted, this is possible only if the torus is unknotted and $\gamma$ is either a $(p,1)$ or a $(1,q)$ curve. \end{proof} The restrictions on profile curves due to Theorem~\ref{writhe} can apply even when knowing the writhe only up to sign. This situation occurs in photographic images of surfaces, where it is often unclear which profile curve arc is in front and which is behind when their projections cross. \begin{example} The immersed planar figure-eight curve, obtained by ignoring the over and under-crossing information in Figure~\ref{figeight}, cannot be the projection to the plane of a profile curve generated by a sphere or a knotted torus. \end{example} \begin{proof} The planar figure-eight curve is a projection of a curve whose writhe is either $+1$ or $-1$. Theorem~\ref{writhe} rules out either possibility for a profile curve on the stated surfaces. \end{proof} \begin{example} If a curve $\gamma$ isotopic to a trefoil is a profile curve for a torus and if the number of crossings and cusps in the projection of $\gamma$ is less than six, then the torus is knotted. \end{example} \begin{proof} There are at least three crossings, so the condition on the number of crossings and cusps implies that the writhe of $\gamma$ is non-zero (mod $6$). The surface linking number of a trefoil on the surface of an unknotted torus is zero (mod $6$).
Since these are not equal, Theorem~\ref{writhe} implies that $\gamma$ cannot be a profile curve for an unknotted torus. \end{proof} \end{document}
\begin{document} \title{AdaSplit: Adaptive Trade-offs for Resource-constrained Distributed Deep Learning} \author{Ayush Chopra} \affiliation{ \institution{MIT} \country{Cambridge, MA}} \author{Surya Kant Sahu} \affiliation{ \institution{BIT} \country{Durg, India}} \author{Abhishek Singh} \affiliation{ \institution{MIT} \country{Cambridge, MA}} \author{Abhinav Java} \affiliation{ \institution{DTU} \country{Delhi, India}} \author{Praneeth Vepakomma} \affiliation{ \institution{MIT} \country{Cambridge, MA}} \author{Vivek Sharma} \affiliation{ \institution{MIT} \country{Cambridge, MA}} \author{Ramesh Raskar} \affiliation{ \institution{MIT} \country{Cambridge, MA} } \begin{abstract} Distributed deep learning frameworks like federated learning (FL) and its variants are enabling personalized experiences across a wide range of web clients and mobile/IoT devices. However, these FL-based frameworks are constrained by computational resources at clients due to the exploding growth of model parameters (e.g., billion-parameter models). Split learning (SL), a recent framework, reduces client compute load by \textit{splitting} the model training between client and server. This flexibility is extremely useful for low-compute setups but is often achieved at the cost of increased bandwidth consumption and may result in sub-optimal convergence, especially when client data is heterogeneous. In this work, we introduce \textit{AdaSplit} which enables efficient scaling of SL to low-resource scenarios by reducing bandwidth consumption and improving performance across heterogeneous clients. To capture and benchmark this multi-dimensional nature of distributed deep learning, we also introduce \textit{C3-Score}, a metric to evaluate performance under resource budgets. We validate the effectiveness of \textit{AdaSplit} under limited resources through extensive experimental comparison with strong federated and split learning baselines. We also present a sensitivity analysis of key design choices in AdaSplit which validates the ability of \textit{AdaSplit} to provide adaptive trade-offs across variable resource budgets. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} Distributed machine (deep) learning is characterized by a setting where many clients (web browsers, mobile/IoT devices) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. As strict regulations for data capture and storage such as GDPR ~\cite{gdpr} and CCPA ~\cite{ccpa} emerge, distributed deep learning is being used to enable privacy-aware personalization across a wide range of web clients and smart edge devices with varying resource constraints. For instance, distributed deep learning is replacing third-party cookies in the Chrome \textit{browser} for ad-personalization ~\cite{epasto2021massively,charles2021large}, enabling next-word prediction on \textit{mobile devices} ~\cite{next-word-fl}, speaker verification on \textit{smart home assistants} ~\cite{alexa-speech}, HIPAA-compliant diagnosis on \textit{clinical devices} ~\cite{fl-health-intro} and real-time navigation in \textit{vehicles} ~\cite{fl-self-driving}. A general distributed deep learning pipeline involves multiple rounds of \textit{training} and \textit{synchronization} steps where, in each round, a model is \textit{trained} with local client data and updates across multiple clients are \textit{synchronized} by the server into a global model.
Techniques have been proposed with the goal of maximizing accuracy under constraints on resource (bandwidth, compute) consumption. Figure ~\ref{fig:intro-figure} compares our proposed \textit{AdaSplit} (in \textcolor{yellow}{yellow}) with strong baselines ~\cite{fedavg, fedprox, scaffold, fednova,gupta2018distributed, thapa2020splitfed} along these dimensions. \begin{figure} \caption{AdaSplit achieves improved accuracy under limited resources (bandwidth $\&$ compute) and can adapt to variable resource budgets. Results on \textit{Mixed-NonIID}.} \label{fig:intro-figure} \end{figure} Federated learning (FL) ~\cite{fedavg} is one of the most widely studied frameworks ~\cite{fedavg, fedprox, fednova, he2020fedml, cheng2017survey}. In each round of FL, \textit{first}, all clients train a copy of the model locally on their device for several iterations and \textit{then} communicate the final model parameters to the server, which \textit{synchronizes} updates across clients by \textit{averaging} all clients' model parameters and shares back the unified global model for the next training round. This is summarized in Figure ~\ref{fig:overall-arch}. With entire model training on-client, federated learning is challenged by the \textit{compute budgets} of client devices. \textit{First}, on-client model training needs resource-intensive clients (with high-performance GPUs to avoid stragglers) and is increasingly becoming impractical due to exploding growth in model sizes (e.g., billion-parameter models for language and image modeling ~\cite{gpt, bert, vit}). \textit{Second}, as the number of clients (and/or model sizes) scales, bandwidth requirements for the system may worsen as entire models need to be communicated between client and server. \textit{Furthermore}, at inference, storing the entire trained model on-client can often have intellectual property implications that limit real-world usability. More recently, the split learning (SL) framework ~\cite{gupta2018distributed, thapa2020splitfed, poirot2019split, splitlearning2, vepakomma2018split} has emerged to alleviate some of these concerns in federated learning. SL reduces client computation load by involving the server in the training process. In each round, clients take turns to interact with the server for multiple iterations where they update parameters of a local model on the client and a (shared) global model residing on the server. Specifically, at each iteration, the client model generates input activations that are communicated to the server which uses them to compute gradients that are used to train the server model and transmitted to the client to train the client model. This is summarized in Figure ~\ref{fig:overall-arch}. While client computation is significantly lower in SL than in FL, this comes at the cost of an increase in client-server communication and often inefficient convergence. \textit{First}, since the client is dependent upon the server for the training gradient, required \textit{communication budgets} increase as the client interacts with the server in every iteration of a round (vs once-per-round in FL). The server is also blocked, since it must train synchronously with each client. \textit{Second,} since clients sequentially update shared parameters on the server, convergence may be inefficient or sub-optimal, especially when client data is heterogeneous. Alleviating these concerns is the focus of this work. We introduce \textit{AdaSplit}, which enables split learning to scale to low-resource scenarios.
\textit{First}, a key insight in AdaSplit is to eliminate client dependence on the server gradient, which reduces communication cost and also enables asynchronous (and reduced) computation. \textit{Next}, motivated by the fact that neural networks are vastly overparameterized, AdaSplit is able to improve performance by constraining heterogeneous clients to update sparse partitions of the server model. As shown in Figure ~\ref{fig:intro-figure}, this enables AdaSplit to not only achieve improved performance under fixed resources (higher accuracy when similar bandwidth and compute) but also adapt to variable resource budgets (the trade-off curve). Furthermore, to capture and benchmark this multi-dimensional nature of distributed deep learning, we also introduce \textit{C3-Score}, a metric to evaluate performance under resource budgets. The contributions of this work can be summarized as: \begin{itemize} \item We introduce \textit{AdaSplit}, an architecture for distributed deep learning that can adapt to and improve performance across variable resource constraints. \item We introduce \textit{C3-Score}, a metric to benchmark and compare distributed deep learning techniques. \item We validate the effectiveness of \textit{AdaSplit} through experimental comparison with state-of-the-art methods and a sensitivity analysis of different design choices. \end{itemize} \begin{figure*} \caption{Training protocols with N=3 clients for federated learning (FL), split learning (SL) and our proposed AdaSplit, which builds upon the split learning framework.} \label{fig:overall-arch} \end{figure*} \section{Preliminaries and Motivation} \label{sec:prelim} \textit{First}, we formalize the protocol and notation for the split learning (SL) framework. This is also visualized in Figure ~\ref{fig:overall-arch}. For completeness, we also visualize the FL protocol in the same figure. While the training protocols may appear different, we unify their design choices along key dimensions. \textit{Next}, we formulate three key design dimensions and contextualize specific choices of FL and SL. This helps motivate our proposed AdaSplit technique. \subsection{Split Learning} \label{sec:prelim-sl} Consider a distributed learning setup with $N$ participating clients and $1$ coordinating server. The key idea of split learning (SL) is to distribute (or \textit{split}) the parameters of the training model across client and server. Each client $i$, for $i \in [1,2,...,N]$, is characterized by a local client dataset $D_{i}$, a local client model $M_{i}^{c}$ and a single server model $M^{s}$ which is updated by all the clients. The training protocol is executed over $R$ rounds of $T$ iterations each. In each round, the $N$ clients sequentially obtain access to interact with the server for model training over $T$ iterations. In each iteration $j$ (for $j \in [1,2,...,T]$), client $i$ updates the parameters of $M^{s}$ and $M^{c}_{i}$. \textit{First}, a mini-batch ($x_{i}, y_{i}$) is sampled from $D_{i}$ and passed through the layers of the client model $M^{c}_{i}$ to generate activations $a_{i}$ ($= M^{c}_{i}(x_{i})$). In this document, we may refer to $a_{i}$ as \textit{split activations}. \textit{Second}, the pair ($a_{i}, y_{i}$) is transmitted to the server. \textit{Third}, at the server, $a_{i}$ is passed through the layers of the server model $M^{s}$ to generate predictions $\hat{y_{i}}$ ($=M^{s}(a_{i})$).
The loss function $L(y_{i}, \hat{y_{i}})$ is computed to generate gradients, which are used to locally update the parameters of $M^{s}$ and are then transmitted to the client to update the parameters of $M^{c}_{i}$. In the classical setup, clients follow a round-robin mechanism where client $i+1$ can start interacting with the server only after client $i$ has completed its $T$ iterations for the round. The global model is synchronized implicitly across clients by updating weights of the shared server model $M^{s}$. Furthermore, in some variants, client models are transmitted between pairs of clients during a round ~\cite{gupta2018distributed} or averaged over all clients after the round ends ~\cite{thapa2020splitfed}. Extensive work has been conducted to establish privacy in split learning and, while beyond the scope of this paper, we briefly discuss that in Section ~\ref{sec:related-work}. \subsection{Design Dimensions: 3C's} \label{sec:prelim-design-dim} We define three key design dimensions which focus on how i) the model is trained on local client data (\textit{Computation}) and ii) updates across the clients are synchronized, via the server, into a global model (\textit{Communication} and \textit{Collaboration}). \textbf{\underline{1. Computation:}} This governs how the processing of data at each client is executed between client and server. Hence, the computation cost can be defined as the total sum of floating-point operations (FLOPs) executed on the client and server. For $N$ clients, this cost ($C1$) can be represented as: \begin{equation} C1 = \sum_{i=1}^{N} R*(F^{c}_{i}*T^{c}_{i} + F^{s}_{i}*T^{s}_{i}) \end{equation} where $F^{c}_{i}$ is the number of FLOPs executed on the client for $T^{c}_{i}$ iterations, $F^{s}_{i}$ is the number of FLOPs executed on the server for $T^{s}_{i}$ iterations when training with data of client $i$, and $R$ is the number of rounds. $F^{c}_{i}$ and $F^{s}_{i}$ increase (or decrease) monotonically with an increase (or decrease) in the size of the client model $M^{c}_{i}$ and the server model $M^{s}$, respectively. In federated learning, $F^{s}_{i} = 0$ and $T^{s}_{i} = 0$ since the entire model is executed on the client device ($M^{s} = 0$). In contrast, split learning allows splitting the model and distributing $F^{c}_{i}$ and $F^{s}_{i}$ between client and server, based on resource availability. This flexibility of split learning allows scaling to low-resource setups where clients are compute constrained (but servers may scale horizontally) and is a key aspect of the design of AdaSplit. However, we also note that this classical split learning framework increases compute load on the server and also blocks the server, which must train synchronously with each client. \textbf{\underline{2. Communication:}} This governs how client and server interact with each other. Hence, the communication cost can be defined as the total payload that is transmitted between each of the $N$ client-server pairs over multiple rounds of training. Federated and Split Learning differ based on the modality of the payload and frequency of interaction. However, without loss of generality, this cost ($C2$) can be represented as: \begin{equation} \label{eq:comm-prelim} C2 = \sum_{i=1}^{N}\sum_{j=1}^{R}\sum_{k=1}^{T} (P_{is} + P_{si})*\sigma(i,j,k) \end{equation} where $N$ is the number of clients, $R$ is the number of training rounds and $T$ is the number of iterations per round. $P_{is}$ is the payload transmitted from a given client $i$ to server $s$ and $P_{si}$ is the payload transmitted from server $s$ to client $i$.
$\sigma(i,j,k)$ denotes whether client $i$ interacts with the server during iteration $k$ of round $j$. In federated learning, client and server interact using model weights once per round. Hence, the size of each $P_{is}, P_{si}$ is the size of the total model and $\sigma(i,j,k)= 1$ \textit{only} for $k=T$ (the last iteration of every round). In split learning, $P_{is}$ and $P_{si}$ are the size of a batch of activations and gradients, respectively, and $\sigma(i,j,k)= 1$ $\forall i,j,k$ since the client depends upon the server for the gradient. We note that, even though the size of the payload is relatively smaller for split learning (one activation batch vs. the full model), the high frequency of communication may result in more bandwidth consumption than federated learning. \textbf{\underline{3. Collaboration:}} This governs how learning (or updates) from local data across the clients is synchronized in the global model. Unlike communication and computation, the cost is non-trivial to define but the impact is measured from the converged accuracy. If the client datasets $D_{i}$ for $i \in [1,2,..,N]$ could be centralized, the unified dataset $D$ ($={D_{1} \cup D_{2} ... \cup D_{N}}$) can be used to train a performant model with gradient descent by sampling iid batches $b \sim D$. Federated and split learning require mechanisms to achieve convergence when this data is decentralized. Abstractly, federated learning executes this by averaging client model parameters (or gradients) on the server after each round, and split learning executes this by requiring all clients to (sequentially) update shared parameters of the server during the round. In federated training, the global model in a round $r$, and consequently the updated client models ($M^{c}_{i}$), are obtained as: \begin{equation} \label{ref:eqn-client-avg} \begin{gathered} M^{g} = \sum_{i=1}^{N}(M^{c}_{i}*p_{i}^{r}) \\ M^{c}_{i} = M^{g} \quad \forall i \in [1,2,...,N] \end{gathered} \end{equation} where $p_{i}^{r}$ is a weight assigned to client $i$ in round $r$. In split training, during each round $r$ the server model ($M^{s}$) is updated sequentially by every client $i$, for $i \in [1, 2,...,N]$, as: \begin{equation} \begin{gathered} M^{s} = M^{s} - \alpha*\triangledown \hat{L}( M^{s}(a_{i}), y_{i}) \end{gathered} \end{equation} In some variants of split learning such as ~\cite{thapa2020splitfed}, local client models may also be synchronized at the end of each round, similar to federated learning, using equation ~\ref{ref:eqn-client-avg}. Then, the global model is obtained by stacking the server and client models. We note that when data across clients are non-iid (which is common in real-world distributed setups), inefficient or sub-optimal converged accuracy is observed in $M^{s,r}$. In split learning, we posit that this happens since gradients from non-iid activations sequentially update the same parameters which violates the assumptions of empirical risk minimization ~\cite{vapnik1992principles}. \section{AdaSplit} \label{sec:method} In this section, we delineate the design choices of AdaSplit along each of the three dimensions. We also discuss corresponding trade-offs that enable AdaSplit to adapt to variable resource constraints. Unless specified otherwise, we follow the same notation as defined in Section ~\ref{sec:prelim}. The architecture is visualized in Figure ~\ref{fig:overall-arch}. \subsection{Computation} \label{ref:slpp-compute} The training model is split between the client and server.
Following split learning, each client is characterized by a local client model $M^{c}_{i}$ and a global server model $M^{s}$ that is shared across all clients. This flexibility to distribute the model allows scaling split learning (and AdaSplit) to low resource setups. Recall from Section ~\ref{sec:prelim-design-dim} that in classical split learning this increases the computation load on the server and also blocks the server, which must train synchronously with each client model that depends upon it for the gradient. AdaSplit alleviates these concerns by i) eliminating the dependence of the client model on the server gradient and ii) \textit{only} training the server intermittently. This further lowers the total computation cost by decreasing $T^{s}_{i}$ (compute iterations on the server) and also unblocks the server to execute asynchronously from the client. \textbf{Local Client Gradient:} \textit{First}, AdaSplit generates the gradient for training the client model on-client itself, using a local objective function $L_{client}$ which is a supervised version of the NT-Xent loss \cite{NIPS2016_6b180037}. Given an input batch $b \sim D_{i}$, for each input $(x_{i}, y_{i}) \sim b$, $L_{client}$ is applied on a projection ($H(.)$) of the activations $a_{i}$ generated by the client model ($=M^{c}_{i}(x_{i})$). Let $q_i = H(a_i)$ be the corresponding embedding of an input $x_i$, and let $Q^{i}_{+}$ be the set of embeddings of other inputs with the same class as $x_i$ in the batch $b$; the loss can then be written as: \begin{equation} L_{client} = \sum^{|b|}_{i=1} \sum_{q_{+} \in Q^{i}_{+}} -\log \frac{\exp(q_i \cdot q_{+}/\tau)}{\sum^{|b|}_{j \ne i} \exp(q_i \cdot q_j/\tau)} \end{equation} Here, $\tau$ is a hyperparameter which controls the ``margin'' of closeness between embeddings. We set $\tau = 0.07$ in all our experiments. The pairs (anchor $q_{i}$, positive inputs $q_{+}$) required in $L_{client}$ are sampled using the ground truth labels ($y_{i}$) locally on the client. \textbf{Intermittent Server Training:} \textit{Second}, AdaSplit also \textit{splits} the $R$-round training into two phases: A) \underline{\textit{Local Phase}} and B) \underline{\textit{Global Phase}}. The \textit{Local Phase} lasts for the first $\kappa$ rounds, during which only the client model trains, asynchronously and without interacting with the server, using $L_{client}$. After $\kappa$ rounds (until the end of training), the \textit{Global Phase} starts, where the client interacts with the server by transmitting activations. The server model \textit{only now} starts training on data from the clients. The server model $M^{s}$ is optimized using a server loss function ($L_{server}$) which is cross-entropy ($L_{ce}$) for classification tasks. We note that, even in the global phase, the client model continues to train using $L_{client}$ and \textit{does not} receive any gradient from the server. Essentially in AdaSplit, client models leverage compute resources of the server only when required. AdaSplit can adapt to variable computation budgets by regulating two key hyperparameters: i) the size of the client model ($\mu$) (for client compute), and ii) the duration of the local phase ($\kappa$) (for server compute). We study the specific impact of these design choices in Section ~\ref{sec:ablations}. Also, in practice, we observe considerable reductions in total computation since $\kappa$ can assume large values (0.8*R), where $R$ is the total number of training rounds, without significant loss of performance. We corroborate this with results in Section ~\ref{sec:results}.
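For concreteness, a minimal PyTorch-style sketch of this client objective is given below. It assumes the batch of projected embeddings $q_i = H(M^{c}_{i}(x_{i}))$ has already been computed; the cosine normalization of the embeddings and the averaging over positives are illustrative assumptions rather than a description of the exact implementation used in our experiments.

\begin{verbatim}
import torch
import torch.nn.functional as F

def client_contrastive_loss(q, y, tau=0.07):
    # q: (B, d) projected activations H(a_i); y: (B,) integer class labels.
    # Positives for each anchor are the other samples in the batch with the
    # same label; the denominator runs over all other samples (j != i).
    q = F.normalize(q, dim=1)
    sim = torch.matmul(q, q.t()) / tau
    self_mask = torch.eye(len(y), dtype=torch.bool, device=q.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask
    pos_count = pos.sum(1)
    per_anchor = -(log_prob.masked_fill(~pos, 0.0)).sum(1)
    valid = pos_count > 0          # anchors with at least one positive
    return (per_anchor[valid] / pos_count[valid]).mean()
\end{verbatim}

During the local phase this loss is the only training signal on the client, so no gradient needs to be received from the server.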
\subsection{Communication} \label{ref:slpp-comm} Recall from Section ~\ref{sec:prelim-design-dim} that in classical split learning, the high frequency of client-server interaction can be prohibitive in terms of communication cost. AdaSplit alleviates this problem by i) decreasing the payload size and ii) decreasing the frequency of communication. \textbf{Smaller Payload:} \textit{First,} we note that eliminating client dependence on the server gradient also significantly reduces communication cost, apart from decreasing computation. In AdaSplit, $P_{si} = 0$ (from equation ~\ref{eq:comm-prelim} in Section ~\ref{sec:prelim-design-dim}) throughout training for each client $i$. Through a sensitivity analysis in Section ~\ref{sec:ablations}, we validate that this design choice marginally drops the performance while significantly reducing communication. \textbf{Infrequent Transmission:} \textit{Second}, we note that two-phase training is also beneficial for communication. In the \textit{Local Phase}, there is no client-to-server communication, with the payload $P_{is} = 0$ for all clients $i$ (from equation ~\ref{eq:comm-prelim} in Section ~\ref{sec:prelim-design-dim}). In the \textit{Global Phase}, clients may start transmitting activations to the server. In this phase, only a subset of clients communicates with the server in each round. Specifically, we introduce an \textit{Orchestrator} (\textit{O}) which resides on the server and uses a running statistic of local client losses to select ($\eta N$) clients in each iteration that communicate with the server. In AdaSplit, \textit{O} uses a UCB ~\cite{ucb} strategy to prioritize clients who need the server model to improve performance on their data (exploitation) while also ensuring that the final model can generalize well to different client data distributions (exploration). Let $S^{t}_{i}$ be a binary flag denoting whether client $i$ is selected at iteration $t$, and let $L_{i}^{t}$ denote the server loss from activations ($a_{i}$) for the iteration. At each iteration $t$, selected clients (i.e. $S_{i}^{t}=1$) transmit input activations to update the server model and the loss $L_{i}^{t}$ is stored. For unselected clients (i.e. $S_{i}^{t} = 0$), $L_{i}^{t}$ is defined as the average of their loss values in the previous iterations ($L_{i}^{t} = \frac{L^{t-1}_{i} + L^{t-2}_{i}}{2}$). Here, we note that $L_{i}^{t}$ is only used for selection and the client model continues to train locally with $L_{client}$. \textit{Finally,} \textit{O} assigns a new score to each client using the advantage function and the clients with the top-$\eta N$ scores are selected for the next iteration. The advantage function ($A_{i}$) from ~\cite{ucb} is defined below: \begin{equation} \begin{gathered} A_i = \frac{l_i}{s_i} + \sqrt{\frac{2 \log T}{s_i}} \end{gathered} \end{equation} where $l_i = \sum^{T}_{t=0}\gamma^{T-1-t} \cdot L^{t}_{i}$, $s_i = \sum^{T}_{t=0}\gamma^{T-1-t} \cdot S^{t}_{i}$, and $T$ is the total number of iterations in the round. $\gamma \in [0, 1]$ is a hyperparameter that determines the importance of historical losses. We initialize $L^{t}_i = 100$ for all clients for $t=0$ and $t=1$. Before proceeding, we make a few statements here. \textit{First,} we note that subset selection has previously been used in FL to regulate communication cost ~\cite{fedavg, fedprox, gaurijoshi-fl} where the global model after a round may be obtained from few clients only (see variable $p_{i}^{r}$ in equation ~\ref{ref:eqn-client-avg}).
\textit{However}, classical split learning does not have a similar infrastructure since each client is entirely dependent on the server for gradient during training. Eliminating client dependence on the server gradient in AdaSplit helps realize this benefit. \textit{Finally}, we highlight that the design of the orchestrator is specialized for AdaSplit, where it needs to be invoked in each iteration (vs. once per round in FL) and selects clients for training (vs. model averaging in FL). AdaSplit can adapt to variable communication budgets by regulating two key hyperparameters: i) the number of selected clients ($\eta$), ii) the duration of the local phase ($\kappa$). We study the specific impact of these design choices in Section ~\ref{sec:ablations}. In practice, we observe considerable reductions in communication cost since $\kappa$, $\eta$ can assume large values ($\kappa = 0.8*R, \eta = 0.6*N$) without significant loss of performance. We corroborate this with results in Section ~\ref{sec:results}. \subsection{Collaboration} \label{ref:slpp-collab} AdaSplit, like split learning, synchronizes updates in the global model by requiring clients to sequentially update shared server model parameters. Recall from Section ~\ref{sec:prelim-design-dim} that when inter-client data is heterogeneous, this often results in the global model converging to sub-optimal accuracy. To alleviate this, the key idea in AdaSplit is to have each client update only a partition of the server model ($M_{s}$) parameters. The motivating insight is to take advantage of the fact that neural network models are vastly over-parameterized ~\cite{cl-vnp-14} and only a small proportion of the parameters can learn each (client's) task with little loss in performance~\cite{cl-vnp-13, cl-vnp, cl-vnp-11, cl-vnp-18}. \textbf{Update Sparse Partitions of Server Model:} During the \textit{global phase}, we add an $L^{1}$ weight regularizer to promote sparsity in the server model $M^{s}$. Specifically, instead of pruning the network, we learn a client ($i$) specific multiplicative mask $m_{i}$ which constrains the set of $M^{s}$ parameters client $i$ can update. Given a batch of activations $a_{i}$ from client $i$, the server model $M^{s}$ is updated as: \begin{equation} M^{s} = M^{s} - \alpha*\hat{m}_{i}*\triangledown \hat{L}( M^{s}(a_{i}), y_{i}) \end{equation} This simulates relative sparsity (for each client) in $M^{s}$ without pruning any parameters since the goal is to increase server model capacity (to accommodate many diverse clients) rather than achieving compression. Here, $m_{i}$ evolves during training and is forced to be extremely sparse by optimizing the following objective on the server for each client $i$: \begin{equation} L_{server} = L_{ce}(\hat{y_{i}}, y_{i}) + \lambda*\omega(m_{i}) \end{equation} where $\omega(\cdot)$ is an $L^{1}$ regularizer and $\hat{y_{i}} = M^{s}(M^{c}_{i}(x_{i}))$. The $\lambda$ hyperparameter promotes sparsity of the masks and can be intuitively visualized as controlling the extent of collaboration between clients on the server. At inference, the effective server model for client $i$ is $M^{s}*m_{i}$ where $m_{i}$ is a highly sparse binary mask and may be stored on the client. Results in Section ~\ref{sec:results} show that this strategy of regulating collaboration significantly improves performance. Finally, we note similarities between \textit{each round} of collaboration in AdaSplit and continual learning, albeit AdaSplit works in activation space and is iterative.
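As an illustration, the sketch below renders one such gated server update in PyTorch-style code. It assumes the mask is stored as one tensor per server parameter, and it omits how the mask itself is learned under the $L^{1}$-regularized objective $L_{server}$, so it should be read as a simplified rendering of the update rule rather than the exact implementation.

\begin{verbatim}
import torch
import torch.nn.functional as F

def masked_server_update(server_model, acts_i, y_i, mask_i, alpha=1e-3):
    # One gated update of the shared server model for client i, following
    #   M^s <- M^s - alpha * m_i * grad L(M^s(a_i), y_i).
    # mask_i: list of tensors, one per server parameter, standing in for
    # the client-specific multiplicative mask m_i (its training is not shown).
    loss = F.cross_entropy(server_model(acts_i), y_i)
    grads = torch.autograd.grad(loss, list(server_model.parameters()))
    with torch.no_grad():
        for p, g, m in zip(server_model.parameters(), grads, mask_i):
            p -= alpha * m * g   # only coordinates allowed by m_i move
    return loss.item()
\end{verbatim}

In this view, each client's mask plays a role loosely analogous to the task-specific parameter partitions used in parameter-isolation approaches to continual learning.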
However, we anticipate exploring this connection may present interesting directions of future work. \section{Experimental Setup} \textit{First,} we formalize the experimental protocol with datasets and baselines. \textit{Next,} we define the evaluation protocol and introduce the \textit{C3-Score} as a unified metric to benchmark and compare the efficiency of distributed deep learning techniques. \textit{Finally}, we summarize implementation details for results presented in this work. \subsection{Datasets} To robustly validate the efficacy of AdaSplit, we conduct extensive experiments on benchmark datasets and simulate varying levels of inter-client heterogeneity. Specifically, we design two experimental protocols, as described next: \textbf{a) Mixed-CIFAR:} We divide the 10 classes of CIFAR-10 ~\cite{cifar} into 5 subsets of 2 distinct classes each. Every client is assigned data from one of the 5 subsets. In this protocol, there is low and consistent heterogeneity between data across all pairs of clients. \textbf{b) Mixed-NonIID:} We use 5 benchmark datasets: i) MNIST \cite{mnist} ii) CIFAR-10 ~\cite{cifar} iii) FMNIST ~\cite{fmnist} iv) CIFAR-100 ~\cite{cifar} v) Not-MNIST ~\cite{not-mnist} and each client receives samples from exactly one dataset. In this protocol, there is high and variable inter-client heterogeneity between client pairs. For instance, clients with FMNIST and MNIST have lower pairwise-heterogeneity between each other and high pairwise heterogeneity with clients containing CIFAR-100. For experiments with both protocols, input images are scaled to 32x32x3 and grayscale MNIST/FMNIST images are transformed by stacking along channels. We will release the evaluation protocol with the code for use by the research community. \subsection{Baselines} We compare state-of-the-art split learning and federated learning techniques. Specifically, for split learning, we compare with SL-basic ~\cite{gupta2018distributed} and SplitFed ~\cite{thapa2020splitfed}. To ensure validity of analysis and also highlight efficacy of results, we also compare with state-of-the-art federated learning techniques: FedAvg ~\cite{fedavg}, FedNova ~\cite{fednova}, Scaffold ~\cite{scaffold} and FedProx ~\cite{fedprox}. These techniques are specially designed for heterogenous (non-iid) setups and provide strong benchmarking for the efficacy of AdaSplit. \begin{table*}[] \centering \tabcolsep=0.45cm \begin{tabular}{l|c|c|c||c} \toprule Method & Accuracy & Bandwidth (GB) & Compute (TFLOPS) & \textbf{C3-Score} \\ \midrule SL-basic ~\cite{gupta2018distributed} & $84.65 \pm 0.32$ & 84.54 & 3.76 (15.14) & 0.72 \\ SplitFed ~\cite{thapa2020splitfed} & $84.67 \pm 0.24$ & 84.64 & 3.76 (15.14) & 0.73 \\ \midrule FedAvg ~\cite{fedavg} & $82.21 \pm 0.19$ & 2.39 & 17.13 (17.13) & 0.72 \\ FedProx ~\cite{fedprox} & \textbf{85.09 $\pm$ 0.29} & 2.39 & 17.13 (17.13) & 0.75 \\ Scaffold ~\cite{scaffold} & $84.73 \pm 0.17$ & 4.78 & 17.13 (17.13) & 0.74 \\ FedNova ~\cite{fednova} & $82.71 \pm 0.27$ & 2.39 & 17.13 (17.13) & 0.73 \\ \midrule \textbf{AdaSplit ($\kappa$=0.6, $\eta$=0.6)} & \textbf{88.88 $\pm$ 0.27} & 9.71 & 5.38 (8.82) & \textbf{0.85} \\ \textbf{AdaSplit($\kappa$=0.75, $\eta$=0.6)} & \textbf{87.11 $\pm$ 0.59} & 2.43 & 5.38 (10.88) & \textbf{0.83} \\ \bottomrule \end{tabular} \caption{Results on \textbf{Mixed-NonIID} dataset. AdaSplit achieves improved performance while reducing resource (bandwidth, compute) consumption. This is corroborated by the \textit{C3-Score} (higher is better). 
} \label{tab:non-iid} \end{table*} \begin{table*}[] \centering \tabcolsep=0.45cm \begin{tabular}{l|c|c|c||c} \toprule Method & Accuracy & Bandwidth (GB) & Compute (TFLOPS) & \textbf{C3-Score} \\ \midrule SL-basic ~\cite{gupta2018distributed} & $67.90 \pm 3.52$ & 34.88 & 1.66 (13.76) & 0.59 \\ SplitFed ~\cite{thapa2020splitfed} & $71.46 \pm 2.13$ & 35.94 & 1.66 (13.76) & 0.62 \\ \midrule FedAvg ~\cite{fedavg} & $91.31 \pm 0.49$ & 2.39 & 11.77 (11.77) & 0.79 \\ FedProx ~\cite{fedprox} & \textbf{92.54 $\pm$ 0.48} & 2.39 & 11.77 (11.77) & 0.81 \\ Scaffold ~\cite{scaffold} & $87.30 \pm 1.36$ & 4.79 & 11.77 (11.77) & 0.76 \\ FedNova ~\cite{fednova} & $88.94 \pm 0.32$ & 2.39 & 11.77 (11.77) & 0.77 \\ \midrule \textbf{AdaSplit ($\kappa$=0.6, $\eta$=0.6)} & \textbf{91.92 $\pm$ 1.88} & 2.85 & 2.38 (4.81) & \textbf{0.89} \\ \textbf{AdaSplit ($\kappa$=0.3, $\eta$=0.6)} & \textbf{92.91 $\pm$ 0.91} & 6.57 & 2.38 (6.63) & \textbf{0.88} \\ \bottomrule \end{tabular} \caption{Results on \textbf{Mixed-CIFAR} dataset. AdaSplit achieves improved performance while reducing resource (bandwidth, compute) consumption. This is corroborated by the \textit{C3-Score} (higher is better). } \label{tab:split-cifar} \end{table*} \subsection{Evaluation Metrics: C3-Score} We evaluate performance both independently along each of the dimensions as well as using a unified metric. To evaluate along design dimensions, we report \textit{Accuracy}, \textit{Bandwidth} and \textit{Compute}. Accuracy is measured as mean and standard deviation over multiple independent runs with different seeds. \textit{Bandwidth} is measured in GB and \textit{Compute} is measured in TFLOPS. We note that, in real-world cases, servers may scale horizontally and the bottleneck is often at the client. For completeness, we report both client compute and total (client+server) compute. \textbf{C3-Score:} For an efficient method, the goal is to maximize accuracy while minimizing resource (bandwidth, compute) consumption. We introduce \textit{\textbf{C3-score}}, a metric to evaluate the joint performance of any distributed deep learning technique in this multi-dimensional setup. Let $B_{max}, C_{max}$ be the maximum resource budgets for bandwidth and client compute. We note that $A_{max} = 100\%$ (max achievable accuracy) for predictive tasks. Consider a given method $m$ with mean accuracy $A_{m}$, bandwidth consumption $B_{m}$ and client compute consumption $C_{m}$. Then, the \textit{C3-Score} is defined as below: \begin{equation} \text{C3-Score}(A_{m}, B_{m}, C_{m}) = \hat{A}_{m}*e^{-( \hat{B}_{m} + \hat{C}_{m} )/T} \end{equation} where $\hat{B}_{m} = B_{m} / B_{max}, \hat{C}_{m} = C_{m} / C_{max}, \hat{A}_{m} = A_{m} / A_{max}$ and $T$ is a scaling temperature. The \textit{C3-Score} always lies between 0 and 1 with a higher score representing a better (more efficient) method. Trivially, C3-Score $\to 0$ as $\hat{B}_{m} \to \infty$ or $\hat{C}_{m} \to \infty$ (as consumption increases or budget decreases). Here, we make a few comments about the metric. \textit{First,} considering resource budgets is useful for real-world use and can also enable directly eliminating methods that exceed the budget. We take motivation from work in differential privacy (privacy budgets) which contextualizes comparisons between methods. Also, normalizing cost by the budget yields a bounded metric, which is useful for comparison and benchmarking. Finally, we assume a multiplicative form for the metric to enable an easy extension to other resource dimensions in the future.
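As a reference, a minimal sketch of the metric computation is given below; the scaling temperature $T$ and the budget values are inputs to the metric, and, because of the multiplicative form, further normalized resource terms could simply be multiplied in the same way.

\begin{verbatim}
import math

def c3_score(acc, bandwidth, compute, b_max, c_max, temp):
    # acc in percent (A_max = 100); bandwidth/compute are consumed
    # resources, b_max/c_max the corresponding budgets; temp is the
    # scaling temperature T. Higher scores indicate better trade-offs.
    a_hat = acc / 100.0
    b_hat = bandwidth / b_max
    c_hat = compute / c_max
    return a_hat * math.exp(-(b_hat + c_hat) / temp)
\end{verbatim}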
Specifically, incorporating privacy budgets is an interesting direction of future work. \subsection{Implementation Details} All experiments are trained for $R=20$ rounds with 1 epoch per round using the same convolutional (LeNet) backbone. Results are reported for $N=5$ clients. For the federated learning baselines, we use open-source implementations provided in ~\cite{niid-bench}. For robust comparison, we also tuned the parameters of these baselines; some performance gain over the default values was observed, and the tuned results are used for comparison. For all split learning baselines, we set the default client model size to 20\% ($\mu = 0.2$) and use the Adam optimizer with a learning rate of 1e-3. This same optimizer configuration is used for both client and server loss in AdaSplit. In AdaSplit, the default parameters are: a) $\kappa$ = 0.6, $\eta$ = 0.6, $\gamma$ = 0.87, $\lambda$ = 1e-5 (for Mixed-CIFAR) and 1e-3 (for Mixed-NonIID). For experiments on \textbf{Mixed-CIFAR} and \textbf{Mixed-NonIID}, the communication and computation budgets are chosen to be the consumption of the worst-performing baselines on the corresponding dataset, respectively. Experiments are implemented in PyTorch, executed on 1 NVIDIA RTX-3060 GPU and reported over 5 independent runs. \section{Results} \label{sec:results} We report performance on ~\textbf{Mixed-CIFAR} in Table ~\ref{tab:split-cifar} and ~\textbf{Mixed-NonIID} in Table ~\ref{tab:non-iid}. For \textit{C3-Score}, we set the bandwidth and computation budgets to be $B_{max} = 35.94$ GB and $C_{max} = 11.77$ TFLOPS on \textit{Mixed-CIFAR} and $B_{max} = 84.64$ GB and $C_{max} = 17.13$ TFLOPS on \textit{Mixed-NonIID}. These values correspond to the highest bandwidth and computation cost among all the methods for the corresponding datasets. The results on both datasets consistently support the following key observations: \circled{1} \textbf{AdaSplit} \textit{outperforms other split learning techniques} and achieves significantly better accuracy while also reducing bandwidth consumption. For instance, on \textit{Mixed-CIFAR} (Table ~\ref{tab:split-cifar}), in comparison to SL-basic, AdaSplit \textbf{improves performance by 24\%} and consumes \textbf{89\% lower} bandwidth. This is corroborated by an increase in C3-Score from 0.59 for SL-basic ~\cite{gupta2018distributed} to 0.89 for AdaSplit. Furthermore, a similar trend is observed on \textit{Mixed-NonIID} (Table ~\ref{tab:non-iid}) as well. Specifically, AdaSplit achieves an accuracy of 88.88\% against 84.67\% for SplitFed while consuming \textit{75 GB less bandwidth}. The corresponding trend is also captured by the \textit{C3-Score} which is 0.85 for AdaSplit against 0.73 for SplitFed ~\cite{thapa2020splitfed}. \circled{2} \textbf{AdaSplit} \textit{makes} \textit{split learning a competitive alternative to federated learning}. On both datasets, we observe that AdaSplit consistently achieves higher (or similar) accuracy with significantly lower client compute and similar bandwidth. For instance, on \textit{Mixed-NonIID}, AdaSplit achieves 87.11\% accuracy with 2.43 GB bandwidth and 5.38 TFLOPS compute. In comparison, the closest federated learning baseline, FedProx, achieves 85\% accuracy but consumes 17.13 TFLOPS (3x more than AdaSplit) with similar bandwidth (2.39 GB). This is corroborated with an improved C3-Score of 0.85 for AdaSplit and 0.75 for FedProx. \circled{3} \textbf{AdaSplit} \textit{consistently provides the \textbf{best trade-off among all} of federated and split learning baselines}.
For instance, on \textit{Mixed-CIFAR}, AdaSplit achieves a \textit{C3-Score} of 0.89, while the closest federated learning baseline, FedProx ~\cite{fedprox}, is at 0.81, FedAvg ~\cite{fedavg} at 0.79 and SplitFed at 0.62. Furthermore, a similar trend is observed on \textit{Mixed-NonIID} as well. Specifically, AdaSplit achieves a \textit{C3-Score} of 0.85, with the closest baseline FedProx at 0.75, Scaffold ~\cite{scaffold} at 0.74 and SL-basic ~\cite{gupta2018distributed} at 0.72. \circled{4} \textbf{AdaSplit} \textit{can adapt to variable resource budgets}. From results on \textbf{Mixed-NonIID} (Table ~\ref{tab:gradient-flow}), we can see that given a higher communication budget (13.36 GB), AdaSplit can further improve accuracy to 89.77\%, which corresponds to a 5\% improvement over FedProx ~\cite{fedprox}. Figure ~\ref{fig:intro-figure} shows the trade-offs that AdaSplit can achieve for accuracy by varying the bandwidth (and compute) budget. We note that the trade-off curves for bandwidth (and compute) are obtained while keeping the compute (and bandwidth) budget fixed, respectively. We discuss this in more detail in Section ~\ref{sec:ablations}.
\section{Discussion} \label{sec:ablations} In this section, we conduct a sensitivity analysis of key design choices in AdaSplit and analyze the consequent impact on accuracy and resource consumption. The results validate the ability of AdaSplit to efficiently adapt to variable resource budgets. Unless specified otherwise, the hyperparameters used are $\kappa=0.6$, $\eta=0.6$, $\mu=0.2$. \circled{1} \textbf{Varying Size of Client Model:} Table ~\ref{tab:client-size-cifar} presents results from varying the number of client layers on the \textit{Mixed-CIFAR10} dataset. We observe that \textit{Computation} on the client increases monotonically with the number of client layers. We also observe a decrease in \textit{Communication} cost, as evident from the lower bandwidth. This can be attributed to the convolutional design of the model, where the \textit{split activations} become smaller with depth (reducing payload $P_{is}$). Also, we note a marginal gain in performance for a larger server model since it provides more parameters for \textit{Collaboration}. We observe similar trends on \textit{Mixed-NonIID} and include results in the appendix. Hence, \textit{AdaSplit adapts to variable client computation budgets}. \begin{table}[] \centering \tabcolsep=0.19cm \begin{tabular}{l|c|c|c} \toprule $\mu$ & Accuracy & Bandwidth (GB) & Compute (TFLOPS) \\ \midrule 0.2 & 91.92 $\pm$ 1.88 & 2.85 & 2.38 (4.81) \\ 0.4 & 92.12 $\pm$ 1.61 & 1.18 & 9.04 (9.85) \\ 0.6 & 86.37 $\pm$ 6.74 & 1.08 & 11.58 (11.68) \\ 0.8 & 90.14 $\pm$ 2.80 & 1.05 & 11.95 (11.97) \\ \bottomrule \end{tabular} \caption{Results on \textit{Mixed-CIFAR10}. Varying the number of client layers ($\mu$) enables AdaSplit to adapt to variable client computation budgets. } \label{tab:client-size-cifar} \end{table} \circled{2} \textbf{Varying Duration of Local Phase:} Table ~\ref{tab:interrupt-duration} presents results from varying $\kappa$ on the \textit{Mixed-CIFAR10} dataset. We observe that the \textit{Communication} cost decreases as $\kappa$ increases. This is because $P_{is} = 0$ for all rounds $r < \kappa$ on a given client $i$. The \textit{Computation} cost of the server also decreases with increasing $\kappa$, though client compute is unchanged. We note a marginal decrease in accuracy since a higher $\kappa$ allows for fewer \textit{collaboration} iterations on the server.
Specifically, on \textit{Mixed-NonIID} (Table ~\ref{tab:gradient-flow}), increasing $\kappa$ from $0.3$ to $0.9$ decreases accuracy from 89.80\% to 87.11\%, while bandwidth falls drastically from $17.22$ GB to $2.43$ GB; a similar trend holds on \textit{Mixed-CIFAR10} (Table ~\ref{tab:interrupt-duration}). Hence, \textit{AdaSplit adapts to variable communication and server computation budgets}. \begin{table}[b] \centering \tabcolsep=0.19cm \begin{tabular}{l|c|c|c} \toprule $\kappa$ & Accuracy & Bandwidth (GB) & Compute (TFLOPS) \\ \midrule 0.3 & 92.91 $\pm$ 0.91 & 6.57 & 2.38 (6.63) \\ 0.45 & 90.97 $\pm$ 1.02 & 4.72 & 2.38 (5.72) \\ 0.6 & 89.77 $\pm$ 1.62 & 3.56 & 2.38 (4.81) \\ 0.75 & 88.62 $\pm$ 3.68 & 2.15 & 2.38 (3.90) \\ 0.90 & 88.02 $\pm$ 0.91 & 0.89 & 2.38 (2.98) \\ \bottomrule \end{tabular} \caption{ Results on \textit{Mixed-CIFAR10}. Varying the duration of the local phase ($\kappa$) enables AdaSplit to adapt to variable communication and server computation budgets. } \label{tab:interrupt-duration} \end{table} \circled{3} \textbf{Eliminating Gradient Dependence:} Table ~\ref{tab:gradient-flow} studies the impact of training the client model without gradients from the server on the \textit{Mixed-NonIID} dataset. We observe that the \textit{Communication} cost decreases significantly, with bandwidth reduced by about one-half. We observe that accuracy is generally insensitive, though there is a slight increase in standard deviation. Hence, \textit{AdaSplit adapts to variable communication budgets} and provides consistent performance. \begin{table}[t] \begin{tabular}{l|c|c} \toprule $\kappa$ & Accuracy & Bandwidth (GB) \\ \midrule \multirow{2}{*}{0.3} & 89.80 $\pm$ 0.38 & 17.22 \\ & 89.96 $\pm$ 0.23 & 34.84 \\ \midrule \multirow{2}{*}{0.45} & 89.77 $\pm$ 0.34 & 13.36 \\ & 89.47 $\pm$ 0.21 & 27.18 \\ \midrule \multirow{2}{*}{0.60} & 89.08 $\pm$ 0.38 & 9.65 \\ & 89.03 $\pm$ 0.28 & 19.79 \\ \midrule \multirow{2}{*}{0.75} & 88.17 $\pm$ 0.59 & 6.10 \\ & 88.31 $\pm$ 0.40 & 12.06 \\ \midrule \multirow{2}{*}{0.90} & 87.11 $\pm$ 0.45 & 2.43 \\ & 87.05 $\pm$ 0.39 & 4.89 \\ \bottomrule \end{tabular} \caption{Results on \textit{Mixed-NonIID}. In each Accuracy cell, Row-1 trains the client with $L_{client}$ and Row-2 trains the client with $L_{client} + L_{server}$. Accuracy is largely insensitive to the server gradient across various $\kappa$.} \label{tab:gradient-flow} \end{table} \circled{4} \textbf{Further Reducing Payload Size:} While we sparsify server model parameters to improve collaboration in \textit{AdaSplit}, here we also consider sparsification of the split activations to reduce the communication payload. Specifically, we train the client model with an additional $L^{1}$ regularizer that regulates the magnitude of the split activations, as sketched below. Results are presented on \textit{Mixed-CIFAR10} in Table ~\ref{tab:activation-sparsity}. \textit{Computation} remains unchanged. \textit{Communication} decreases as the payload ($P_{is}$) becomes sparse. This results in a fall in accuracy, which reflects worsening collaboration. For instance, AdaSplit can train with only 0.76 GB of bandwidth and achieve 85.79\% accuracy. From Table ~\ref{tab:non-iid}, FedProx achieves 85.09\% and consumes a 2.39 GB budget. Hence, \textit{AdaSplit adapts to extremely low communication budgets}.
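As an illustration of this design choice, the snippet below sketches how such an activation-sparsity penalty could be added to the client objective in PyTorch. The function and argument names are placeholders, the local loss is left generic, and the snippet is a hedged sketch rather than our released implementation.
\begin{verbatim}
import torch

def client_step(client_model, local_loss_fn, x, y, beta=1e-6):
    """One client update with an L1 penalty on the split activations.

    A larger beta yields sparser activations, i.e. a smaller payload
    sent to the server, at the risk of degrading collaboration
    (and hence accuracy), mirroring the trend in the table above.
    """
    activations = client_model(x)               # split activations P_is
    task_loss = local_loss_fn(activations, y)   # client-side loss
    sparsity_loss = beta * torch.mean(torch.abs(activations))
    loss = task_loss + sparsity_loss
    loss.backward()
    return loss.detach(), activations.detach()
\end{verbatim}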
\begin{table}[b] \centering \tabcolsep=0.19cm \begin{tabular}{l|c|c} \toprule $\beta$ & Accuracy & Bandwidth (GB) \\ \midrule 0 & 91.09 $\pm$ 1.48 & 3.45 \\ 1e-7 & 90.52 $\pm$ 2.16 & 3.25 \\ 1e-6 & 91.92 $\pm$ 1.89 & 2.85 \\ 5e-6 & 87.6 $\pm$ 4.82 & 1.19 \\ 1e-5 & 85.79 $\pm$ 4.10 & 0.76 \\ 0.0001 & 79.18 $\pm$ 4.81 & 0.08 \\ 0.1 & 51.00 $\pm$ 0.42 & 0.0044 \\ \bottomrule \end{tabular} \caption{Results on the \textit{Mixed-CIFAR10} dataset. Sparsification of the split activations enables AdaSplit to adapt to extremely low communication budgets. } \label{tab:activation-sparsity} \end{table}
\section{Related Work} \label{sec:related-work} In this section, we review the literature on distributed and collaborative deep learning broadly, as well as specific research in split learning. \paragraph{\textbf{Parallelizable Deep Learning}} For centralized machine learning, several parallelization methods have been proposed to enable training on large-scale data sources. \textbf{Data Parallelism}-based distributed ML ~\citep{hillis1986data} simulates large mini-batch training by splitting data among multiple identical models and training each model on a shard of the data independently. The key challenge is to ensure consistency of the global model by synchronizing across the multiple model copies. This is achieved via i) \textit{Synchronous Optimization}, which synchronizes after every minibatch ~\cite{das2016distributed, chen2016revisiting}, resulting in high communication overhead, and ii) \textit{Asynchronous Optimization}, which performs global synchronization less frequently ~\cite{dean2012large}, improving communication but at the cost of poorer sample efficiency due to staleness of the model copies. \textbf{Model Parallelism} splits the model parameters over multiple processors, either to improve parallelization~\cite{ben2019demystifying} or when the model is too large to fit in the memory of a single device ~\cite{shazeer2018mesh, harlap2018pipedream, lepikhin2020gshard}. This is increasingly common as the size of state-of-the-art models is exploding (to billions of parameters ~\cite{brown2020language, radford2019language}), which can be challenging for data parallelism. However, model parallelism has its own challenges, including i) \textit{high communication costs}, due to activations communicated between devices for each mini-batch, and ii) \textit{device under-utilization}, due to the inter-device control-flow dependency during forward and backward propagation~\cite{ben2018torsten}. \textbf{Local Parallelism} alleviates the staleness issue by eliminating the control-flow dependency between devices. The key idea is to perform training using intermediate local objectives. Various local learning configurations have been proposed, where a local objective can be used greedily for each layer ~\cite{lowe-2019,belilovsky-2020}, for a set of layers on each device ~\cite{laskin2020parallel}, or for layers across multiple devices ~\cite{gomez2020interlocking}. From a comparison standpoint, the federated learning setup is close to data parallelism, split learning extends model parallelism to data-parallel setups, and the proposed AdaSplit extends local parallelism to data-parallel setups. \paragraph{\textbf{Distributed Deep Learning}} This broadly involves training deep networks with data that is distributed across multiple devices (or clients) and a coordinating server. The two main paradigms for collaborative learning are Federated Learning (FL) ~\cite{fed-avg,kairouz2019advances} and Split Learning (SL) ~\cite{gupta2018distributed,poirot2019split}.
This work is focused on the split learning paradigm, with a particular interest in mechanisms to improve the efficiency of communication and collaboration. Here, we contextualize relevant literature in both federated and split learning along the key design choices introduced in Sec~\ref{sec:prelim}. \textbf{D1: Computation} FL generalizes data parallelism (to each client), and the entire model is trained locally on-device at the data-owning client. While the computation at the server involves averaging of model parameters, computation on the client side requires training the full model at every round. This can be a bottleneck for clients that are limited by computation. Some recent works have attempted to address this problem by training on a subset of parameters~\cite{diao2020heterofl} or performing pruning~\cite{li2022hermes,zhou2021adaptcl} on the trained model. All such methods are limited by the extent of pruning that can be performed and require the full model to be trained on the device. In contrast, SL distributes the model parameters between client and server, which allows a reduced as well as tunable distribution of the computation load between them. Several architectures for parameter distribution are discussed in ~\cite{splitlearning2}. Our proposed AdaSplit further improves computation efficiency by eliminating gradient flow and training the server intermittently. \textbf{D2: Communication} In FL, the client and server communicate once every training round by exchanging the model weights or gradients of the locally trained model copies. This communication cost scales with the size of the model and the number of clients in the system, and is a major challenge for deployment in resource-constrained setups. Various methods have been proposed to reduce this cost through model compression on the client~\cite{konevcny2016federated,malekijoo2021fedzip,hamer2020fedboost}, client subset selection~\cite{cho2020bandit,nishio2019client,balakrishnan2020diverse}, as well as greedy federated training of client models ~\cite{nishio2019client,mo2021ppfl}. In SL, the client and server communicate in every training iteration using minibatch activations. While each payload is small (and relatively independent of model size), the high frequency of communication is a challenge. In this work, locally-parallel AdaSplit reduces communication costs by alleviating the need for high-frequency server-to-client gradient flow. \textbf{D3: Collaboration} For Federated Learning, while FedAvg ~\cite{fedavg} performs poorly with non-iid clients, various techniques have been proposed for working in such heterogeneous setups ~\cite{zhu2021federated, li2019fedmd, zhao2018federated}. Similarly, SL-basic and SplitFed perform poorly in non-iid setups, resulting in worsened communication cost and accuracy. To the best of our knowledge, no method has been proposed to tackle this issue. AdaSplit alleviates this through parameter sparsification on the server. \paragraph{\textbf{Split Learning: Research and Applications}} While most of the recent works~\citep{kairouz2019advances} in federated learning try to address the aforementioned D1, D2, and D3 aspects, the clients are still restricted by the requirement of executing the complete computation graph on the device. This restriction is circumvented by split learning~\citep{gupta2018distributed}, which revamps the federated learning architecture by operationalizing model parallelism for multi-institution collaborative learning.
In split learning, clients share the activations produced by a neural network instead of the weights or gradients as done in federated learning. Different architectures for forward and backward propagation are illustrated in \cite{vepakomma2018split}. This architecture adds flexibility to the client-side computation by distributing it between the client and the centralized server. A rigorous comparison between FL and SL has been made across several dimensions by \cite{thapa2021advancements}, and other aspects have been explored such as communication~\citep{singh2019detailed}, healthcare applications~\citep{gawali2021comparison}, privacy~\citep{kim2020multiple}, IoT~\citep{gao2020end} and frameworks~\citep{vepakomma2018no}. Several recent works have integrated the federated and split learning architectures, such as SplitFed~\citep{thapa2020splitfed}, SplitFedV3~\citep{gawali2021comparison} and FedSL~\citep{abedi2020fedsl}. In addition to the computational and communication benefits, split learning allows distributed and privacy-preserving prediction that is not possible under the federated learning framework. Consequently, several works have used split learning for inference to build defense~\citep{maxentropy, shredder, osia2020hybrid, xiang2019interpretable, vepakomma2020nopeek, bertran2019adversarially, liu2019privacy, li2019deepobfuscator, singh2021disco, samragh2020private} and attack mechanisms~\citep{he2019model, pasquini2020unleashing, madaan2021vulnerability}. Some of these benefits have led to applied evaluation of split learning for mobile phones~\citep{palanisamy2021spliteasy}, IoT~\citep{park2021communication, koda2020distributed, koda2020communication}, model selection~\citep{sharma2019expertmatcher1,sharma2019expertmatcher2} and healthcare~\citep{poirot2019split}.
\section{Conclusion} We introduce \textit{AdaSplit}, a technique for scaling distributed deep learning to low-resource scenarios. \textit{AdaSplit} builds upon the split learning framework and a) reduces computation by eliminating the client's dependence on server gradients and training the server intermittently, b) reduces bandwidth by reducing the payload size and communication frequency between client and server, and c) improves performance by constraining each client to update sparse partitions of the server model. To capture and benchmark this multi-dimensional nature of distributed deep learning, we also introduce the \textit{C3-Score}, a metric to evaluate performance under resource budgets. We validate the effectiveness of \textit{AdaSplit} under limited resources through extensive experimental comparison with strong federated and split learning baselines. We also present a sensitivity analysis of key design choices in AdaSplit, which validates the ability of \textit{AdaSplit} to provide adaptive trade-offs across variable resource budgets. \end{document}
\begin{document} \title{Distributed Multi-Agent Optimization with State-Dependent Communication\footnote{ This research was partially supported by the National Science Foundation under Career grant DMI-0545910, the DARPA ITMANET program, and the AFOSR MURI R6756-G2.}} \author{Ilan Lobel\footnote{Microsoft Research New England Lab and Stern School of Business, New York University, [email protected]}, Asuman Ozdaglar\footnote{Laboratory for Information and Decision Systems, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, [email protected]} and Diego Feijer\footnote{Laboratory for Information and Decision Systems, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, [email protected]}} \maketitle {\it This paper is dedicated to the memory of Paul Tseng, a great researcher and friend.} \thispagestyle{empty}
\begin{abstract} We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a {\it state-dependent communication model} over this topology: communication is Markovian with respect to the states of the agents, and the probability with which the links are available depends on the states of the agents. In this paper, we study a {\it projected multi-agent subgradient algorithm} under state-dependent communication. The algorithm involves each agent performing a local averaging to combine his estimate with the other agents' estimates, taking a subgradient step along his local objective function, and projecting the estimates on his local constraint set. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm, when used with a constant stepsize, may cause the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a ``disagreement metric'' between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence. \end{abstract} \setcounter{page}{1}
\section{Introduction} Due to computation, communication, and energy constraints, several control and sensing tasks are currently performed collectively by a large network of autonomous agents. Applications are vast, including a set of sensors collecting and processing information about a time-varying spatial field (e.g., to monitor temperature levels or chemical concentrations), a collection of mobile robots performing dynamic tasks spread over a region, mobile relays providing wireless communication services, and a set of humans aggregating information and forming beliefs about social issues over a network.
These problems motivated a large literature focusing on the design of optimization, control, and learning methods that can operate using local information and are robust to dynamic changes in the network topology. The standard approach in this literature involves considering ``consensus-based'' schemes, in which agents exchange their local estimates (or states) with their neighbors with the goal of aggregating information over an {\it exogenous} (fixed or time-varying) network topology. In many of the applications, however, the relevant network topology is configured {\it endogenously as a function of the agent states}; for example, the communication network varies as the locations of mobile robots change in response to the objective they are trying to achieve. A related set of problems arises when the current information of decentralized agents influences their potential communication pattern, which is relevant in the context of sensing applications and in social settings where disagreement between the agents would put constraints on the amount of communication among them. In this paper, we propose a general framework for the design and analysis of distributed multi-agent optimization algorithms with state-dependent communication. Our model involves a network of $m$ agents, each endowed with a local objective function $f_i:\mathbb{R}^n\to \mathbb{R}$ and a local constraint set $X_i\subseteq \mathbb{R}^n$ that are private information, i.e., each agent only knows its own objective and constraint. The goal is to design distributed algorithms for solving a global constrained optimization problem whose objective function is the sum of the local agent objective functions and whose constraint set is the intersection of the local constraint sets of the agents. These algorithms involve each agent maintaining an estimate (or state) of the optimal solution of the global optimization problem and updating this estimate based on local information and processing, and information obtained from the other agents. We assume that agents communicate over a network with randomly varying topology. Our random network topology model has two novel features: First, we assume that the communication at each time instant $k$ (represented by a {\it communication matrix} $A(k)$ with positive entries denoting the availability of the links between agents) is Markovian on the states of the agents. This captures the time correlation of communication patterns among the agents.\footnote{Note that our model can easily be extended to model Markovian dependence on other stochastic processes, such as channel states, to capture time correlation due to global network effects. We do not do so here for notational simplicity.} The second, more significant feature of our model is that the probability of communication between any two agents at any time is a function of the agents' states, i.e., the closer the states of the two agents, the more likely they are to communicate. As outlined above, this feature is essential in problems where the state represents the position of the agents in sensing and coordination applications, or the beliefs of agents in social settings. For this problem, we study a {\it projected multi-agent subgradient algorithm}, which involves each agent performing a local averaging to combine his estimate with the other agents' estimates he has access to, taking a subgradient step along his local objective function, and projecting the estimates on his local constraint set.
We represent these iterations as stochastic linear time-varying update rules that involve the agent estimates, subgradients and projection errors explicitly. With this representation, the evolution of the estimates can be written in terms of stochastic transition matrices $\Phi(k,s)$ for $k\ge s\ge 0$, which are products of communication matrices $A(t)$ over a window from time $s$ to time $k$. The transition matrices $\Phi(k,s)$ represent the aggregation of information over the network as a result of local exchanges among the agents, i.e., in the long run, it is desirable for the transition matrices to converge to a uniform distribution, hence aligning the estimates of the agents with uniform weights given to each (ensuring that the information of each agent affects the resulting estimate uniformly). As a result, the analysis of our algorithm involves studying convergence properties of transition matrices, understanding the limiting behavior of projection errors, and finally studying the algorithm as an ``approximate subgradient algorithm'' with bounds on errors due to averaging and projections. In view of the dependence of information exchange on the agent estimates, it is not possible to decouple the effect of stepsizes and subgradients from the convergence of the transition matrices. We illustrate this point by first presenting an example in which the projected multi-agent subgradient algorithm is used with a constant stepsize $\alpha(k)=\alpha$ for all $k\ge 0$. We show that in this case, agent estimates and the corresponding global objective function values may diverge with probability one for any constant value of the stepsize. This is in contrast to the analysis of multi-agent algorithms over exogenously varying network topologies, where it is possible to provide error bounds on the difference between the limiting objective function values of agent estimates and the optimal value as a function of the constant stepsize $\alpha$ (see \cite{Ilanoptim}). We next adopt an assumption on the stepsize sequence $\{\alpha(k)\}$ (see Assumption \ref{ass. small steps}), which ensures that $\alpha(k)$ decreases to zero sufficiently fast, while satisfying the conditions $\sum_{k=0}^\infty \alpha(k)=\infty$ and $\sum_{k=0}^\infty \alpha^2(k)<\infty$. Under this assumption, we provide a bound on the expected value of the disagreement metric, defined as $\max_{i,j} |[\Phi(k,s)]_{ij}-{1\over m}|$. Our analysis is novel and involves constructing and bounding (uniformly) the probability of a hierarchy of events, the length of which is specifically tailored to grow faster than the stepsize sequence, to ensure propagation of information across the network before the states drift away too much from each other. In contrast to exogenous communication models, our bound is {\it time-nonhomogeneous}, i.e., it depends on the initial starting time $s$ as well as the time difference $(k-s)$. We also consider the case where the agent constraint sets $X_i$ are compact, in which case we can provide a bound on the disagreement metric without any assumptions on the stepsize sequence. Our next set of results studies the convergence behavior of agent estimates under different conditions on the constraint sets and stepsize sequences. We first study the case when the local constraint sets of the agents are the same, i.e., for all $i$, $X_i=X$ for some nonempty closed convex set.
In this case, using the time-nonhomogeneous contraction provided on the disagreement metric, we show that agent estimates reach almost sure consensus under the assumption that the stepsize sequence $\{\alpha(k)\}$ converges to 0 sufficiently fast (as stated in Assumption \ref{ass. small steps}). Moreover, we show that under the additional assumption $\sum_{k=0}^\infty \alpha(k)=\infty$, the estimates converge to the same optimal point of the global optimization problem with probability one. We then consider the case when the constraint sets of the agents $X_i$ are different convex compact sets and present convergence results both in terms of almost sure consensus of agent estimates and almost sure convergence of the agent estimates to an optimal solution under weaker assumptions on the stepsize sequence. Our paper contributes to the growing literature on multi-agent optimization, control, and learning in large-scale networked systems. Most work in this area builds on the seminal work by Tsitsiklis \cite{johnthes} and Bertsekas and Tsitsiklis \cite{distbook} (see also Tsitsiklis {\it et al.}\ \cite{distasyn}), which developed a general framework for parallel and distributed computation among different processors. Our work is related to different strands of the literature in this area. One strand focuses on reaching consensus on a particular scalar value or computing exact averages of the initial values of the agents, as natural models of cooperative behavior in networked systems (for deterministic models, see \cite{vicsek}, \cite{ali}, \cite{reza}, \cite{spielman}, \cite{alexCDC}, and \cite{alexlong}; for randomized models, where the randomness may be due to the choice of the randomized communication protocol or due to the unpredictability in the environment in which the information exchange takes place, see \cite{boyd}, \cite{mesbahi}, \cite{wu}, \cite{alireza-one}, \cite{alireza-two}, and \cite{fagnani}). Another recent strand of the literature studies optimization of more general objective functions using subgradient algorithms and consensus-type mechanisms (see \cite{nedic-ozdaglar}, \cite{quantization}, \cite{constconsoptim}, \cite{Ilanoptim}, \cite{baras}, \cite{raminc}, \cite{martinez}). Of particular relevance to our work are the papers \cite{Ilanoptim} and \cite{constconsoptim}. In \cite{Ilanoptim}, the authors studied a multi-agent unconstrained optimization algorithm over a random network topology which varies independently over time and established convergence results for diminishing and constant stepsize rules. The paper \cite{constconsoptim} considered multi-agent optimization algorithms under deterministic assumptions on the network topology and with constraints on agent estimates. It provided a convergence analysis for the case when the agent constraint sets are the same. A related, but somewhat distinct, literature uses consensus-type schemes to model opinion dynamics over social networks (see \cite{golub}, \cite{golub-two}, \cite{misinformation}, \cite{krause}, \cite{opinion-dyn}). Among these papers, most related to our work are \cite{krause} and \cite{opinion-dyn}, which studied dynamics with opinion-dependent communication, but without any optimization objective. The rest of the paper is organized as follows: in Section \ref{sec:model}, we present the optimization problem, the projected subgradient algorithm and the communication model.
We also show a counterexample that demonstrates that there are problem instances where this algorithm, with a constant stepsize, does not solve the desired problem. In Section \ref{sec:stochastic}, we introduce and bound the disagreement metric $\rho$, which determines the spread of information in the network. In Section \ref{sec:optim}, we build on the earlier bounds to show the convergence of the projected subgradient methods. Section \ref{sec:conclusions} concludes. \vskip 2pc \noindent {\bf Notation and Basic Relations:} \vskip .5pc A vector is viewed as a column vector, unless clearly stated otherwise. We denote by $x_i$ or $[x]_i$ the $i$-th component of a vector $x$. When $x_i\ge 0$ for all components $i$ of a vector $x$, we write $x\ge 0$. For a matrix $A$, we write $A_{ij}$ or $[A]_{ij}$ to denote the matrix entry in the $i$-th row and $j$-th column. We denote the nonnegative orthant by $\mathbb{R}^n_+$, i.e., $\mathbb{R}^n_+ = \{x\in \mathbb{R}^n\mid x\ge 0\}$. We write $x'$ to denote the transpose of a vector $x$. The scalar product of two vectors $x,y\in\mathbb{R}^n$ is denoted by $x'y$. We use $\|x\|$ to denote the standard Euclidean norm, $\|x\|=\sqrt{x'x}$. A vector $a\in\mathbb{R}^m$ is said to be a {\it stochastic vector} when its components $a_i$, $i=1,\ldots,m$, are nonnegative and their sum is equal to 1, i.e., $\sum_{i=1}^m a_i =1$. A square $m\times m$ matrix $A$ is said to be a {\it stochastic matrix} when each row of $A$ is a stochastic vector. A square $m\times m$ matrix $A$ is said to be a {\it doubly stochastic} matrix when both $A$ and $A'$ are stochastic matrices. For a function $F:\mathbb{R}^n\to(-\infty,\infty]$, we denote the domain of $F$ by ${\rm dom}(F)$, where \[{\rm dom}(F)=\{x\in\mathbb{R}^n \mid F(x)<\infty\}.\] We use the notion of a subgradient of a {\it convex} function $F(x)$ at a given vector $\bar x\in {\rm dom}(F)$. We say that $s_F(\bar x)\in\mathbb{R}^n$ {\it is a subgradient of the function $F$ at $\bar x\in{\rm dom}(F)$} when the following relation holds: \begin{equation} F(\bar x) + s_F(\bar x)'(x-\bar x)\le F(x)\qquad \hbox{for all }x\in{\rm dom}(F). \label{sgdconvdef} \end{equation} The set of all subgradients of $F$ at $\bar x$ is denoted by $\partial F(\bar x)$ (see \cite{ourbook}). In our development, the properties of the projection operation on a closed convex set play an important role. We write ${\rm dist}(\bar x, X)$ to denote the standard Euclidean distance of a vector $\bar x$ from a set $X$, i.e., $${\rm dist}(\bar x,X) =\inf_{x\in X}\|\bar x - x\|.$$ Let $X$ be a nonempty closed convex set in $\mathbb{R}^n$. We use $P_X[\bar x]$ to denote the projection of a vector $\bar x$ on the set $X$, i.e., \[P_X[\bar x]=\arg\min_{x\in X}\|\bar x-x\|.\] We will use the standard non-expansiveness property of projection, i.e., \begin{equation}\label{nonexpan} \|P_X[x] - P_X[y]\|\le \|x-y\|\qquad\hbox{for any }x \hbox{ and } y.
\end{equation} We will also use the following relation between the projection error vector and the feasible directions of the convex set $X$: for any $x\in \mathbb{R}^n$, \begin{equation}\|P_X[x]-y\|^2\le \|x-y\|^2 - \|P_X[x]-x\|^2 \qquad\hbox{for all $y\in X$}.\label{projerror-feasdir}\end{equation}
\section{The Model}\label{sec:model} \subsection{Optimization Model} We consider a network that consists of a set of nodes (or agents) $\mathcal{M}=\{1,\dots,m\}$. We assume that each agent $i$ is endowed with a local objective (cost) function $f_i$ and a local constraint set $X_i$, and this information is distributed among the agents, i.e., each agent knows only his own cost and constraint component. Our objective is to develop distributed algorithms that can be used by these agents to cooperatively solve the following constrained optimization problem: \begin{eqnarray} & \text{minimize}\quad\; \sum_{i=1}^m f_i(x) \label{optim-prob}\\ & \text{subject to}\quad x\in\cap_{i=1}^m X_i,\nonumber \end{eqnarray} where each $f_i:\mathbb{R}^n\rightarrow\mathbb{R}$ is a convex (not necessarily differentiable) function, and each $X_i \subseteq \mathbb{R}^n$ is a closed convex set. We denote the intersection set by $X=\cap_{i=1}^m X_i$ and assume that it is nonempty throughout the paper. Let $f$ denote the global objective, that is, $f(x) = \sum_{i=1}^m f_i(x)$, and $f^*$ denote the optimal value of problem (\ref{optim-prob}), which we assume to be finite. We also use $X^*=\{x\in X:f(x)=f^*\}$ to denote the set of optimal solutions and assume throughout that it is nonempty. We study a distributed multi-agent subgradient method, in which each agent $i$ maintains an {\it estimate} of the optimal solution of problem (\ref{optim-prob}) (which we also refer to as the {\it state of agent $i$}), and updates it based on his local information and information exchange with other neighboring agents. Every agent $i$ starts with some initial estimate $x_i(0)\in X_i$. At each time $k$, agent $i$ updates its estimate according to the following: \begin{equation} x_i(k+1) = P_{X_i}\left[\sum_{j=1}^m a_{ij}(k)x_j(k) - \alpha(k)d_i(k)\right], \label{eq.update_rule} \end{equation} where $P_{X_i}$ denotes the projection on agent $i$'s constraint set $X_i$, the vector $[a_{ij}(k)]_{j\in\mathcal{M}}$ is a vector of weights for agent $i$, the scalar $\alpha(k)>0$ is the stepsize at time $k$, and the vector $d_i(k)$ is a subgradient of agent $i$'s objective function $f_i(x)$ at his estimate $v_i(k) = \sum_{j=1}^m a_{ij}(k)x_j(k)$. Hence, in order to generate a new estimate, each agent combines the most recent information received from other agents with a step along the subgradient of its own objective function, and projects the resulting vector on its constraint set to maintain feasibility. We refer to this algorithm as the {\it projected multi-agent subgradient algorithm}.\footnote{See also \cite{constconsoptim} where this algorithm is studied under deterministic assumptions on the information exchange model and the special case $X_i=X$ for all $i$.} Note that when the objective functions $f_i$ are identically zero and the constraint sets $X_i=\mathbb{R}^n$ for all $i\in\mathcal{M}$, then the update rule (\ref{eq.update_rule}) reduces to the classical averaging algorithm for {\it consensus} or {\it agreement} problems, as studied in \cite{multiagent} and \cite{ali}.
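For concreteness, the following Python sketch implements one iteration of the update rule (\ref{eq.update_rule}) for generic user-supplied weights, subgradient oracles, and projection operators. It is purely illustrative and is not the implementation underlying our analysis; the function and argument names are our own.
\begin{verbatim}
import numpy as np

def projected_subgradient_step(x, A, subgrads, projections, alpha):
    """One step of the projected multi-agent subgradient algorithm.

    x           : (m, n) array, row i is agent i's estimate x_i(k)
    A           : (m, m) doubly stochastic weight matrix A(k)
    subgrads    : list of m functions; subgrads[i](v) returns some
                  subgradient d_i of f_i at v
    projections : list of m functions; projections[i](z) = P_{X_i}[z]
    alpha       : stepsize alpha(k)
    """
    v = A @ x                                  # local averaging step
    x_next = np.empty_like(x)
    for i in range(x.shape[0]):
        d_i = subgrads[i](v[i])                # subgradient at v_i(k)
        x_next[i] = projections[i](v[i] - alpha * d_i)
    return x_next
\end{verbatim}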
In the analysis of this algorithm, it is convenient to separate the effects of the different operations used in generating the new estimate in the update rule (\ref{eq.update_rule}). In particular, we rewrite the relation in Eq.\ (\ref{eq.update_rule}) equivalently as follows: \begin{eqnarray} v_i(k) &=& \sum_{j=1}^m a_{ij}(k)x_j(k),\label{convex-comb}\\ x_i(k+1) &=& v_i(k) -\alpha(k)d_i(k) +e_i(k),\label{subgradient-step}\\ e_i(k) &=& P_{X_i}[v_i(k) -\alpha(k)d_i(k)] - \Big(v_i(k) -\alpha(k)d_i(k)\Big).\label{proj-error} \end{eqnarray} This decomposition allows us to generate the new estimate using a {\it linear update rule} in terms of the other agents' estimates, the subgradient step, and the projection error $e_i$. Hence, the nonlinear effects of the projection operation are represented by the projection error vector $e_i$, which can be viewed as a perturbation of the subgradient step of the algorithm. In the sequel, we will show that under some assumptions on the agent weight vectors and the subgradients, we can provide upper bounds on the projection errors as a function of the stepsize sequence, which enables us to study the update rule (\ref{eq.update_rule}) as an approximate subgradient method. We adopt the following standard assumption on the subgradients of the local objective functions $f_i$. \begin{assumption}(Bounded Subgradients) The subgradients of each of the $f_i$ are uniformly bounded, i.e., there exists a scalar $L>0$ such that for every $i\in\mathcal{M}$ and any $x\in\mathbb{R}^n$, we have \[\|d\|\leq L \qquad \hbox{for all } d\in\partial f_i(x).\] \label{ass.bounded_subgrad} \end{assumption} \subsection{Network Communication Model} We define the {\it communication matrix} for the network at time $k$ as $A(k)=[a_{ij}(k)]_{i,j\in\mathcal{M}}$. We assume a probabilistic communication model, in which the sequence of communication matrices $A(k)$ is assumed to be Markovian on the {\it state variable} $x(k)=[x_i(k)]_{i\in\mathcal{M}}\in\mathbb{R}^{n\times m}$. Formally, let $\{n(k)\}_{k \in \mathbb{N}}$ be an independent sequence of random variables defined on a probability space $(\Omega, \mathcal{F},P) = \prod_{k=0}^\infty (\Omega', \mathcal{F}',P')_k$, where $\{(\Omega', \mathcal{F}',P')_k\}_{k \in \mathbb{N}}$ constitutes a sequence of identical probability spaces. We assume there exists a function $\psi:\mathbb{R}^{n\times m}\times\Omega' \rightarrow\mathbb{R}^{m\times m}$ such that \[ A(k)=\psi(x(k),n(k)). \] This Markovian communication model enables us to capture settings where the agents' ability to communicate with each other depends on their current estimates. We assume there exists some underlying communication graph $(\mathcal{M},\mathcal{E})$ that represents a `backbone' of the network. That is, for each edge $e \in \mathcal{E}$, the two agents linked by $e$ systematically attempt to communicate with each other [see Eq.\ (\ref{eq.prob_model}) for the precise statement]. We do not make assumptions on the communication (or lack thereof) between agents that are not adjacent in $(\mathcal{M},\mathcal{E})$. We make the following connectivity assumption on the graph $(\mathcal{M},\mathcal{E})$. \begin{assumption}[Connectivity] The graph $(\mathcal{M},\mathcal{E})$ is strongly connected.
\label{connectivity} \end{assumption} The central feature of the model introduced in this paper is that the probability of communication between two agents is potentially small if their estimates are far apart. We formalize this notion as follows: for all $(j,i) \in \mathcal{E}$, all $k\geq 0$ and all $\overline{x}\in \mathbb{R}^{m \times n}$, \begin{equation} P(a_{ij}(k)\geq\gamma \mid x(k)=\overline{x}) \geq \min\left\{\delta,\frac{K}{\|\overline{x}_i-\overline{x}_j\|^C}\right\}, \label{eq.prob_model} \end{equation} where $K$ and $C$ are real positive constants, and $\delta\in (0,1]$. We include the parameter $\delta$ in the model to cap the bound when $\|\overline{x}_i-\overline{x}_j\|^C$ is small. This model states that, for any two nodes $i$ and $j$ with an edge between them, if the estimates $x_i(k)$ and $x_j(k)$ are close to each other, then there is a probability at least $\delta$ that they communicate at time $k$. However, if the two agents are far apart, the probability that they communicate can only be bounded by the inverse of a polynomial of the distance between their estimates $\| x_i(k) - x_j(k)\|$. If the estimates were to represent physical locations of wireless sensors, then this bound would capture fading effects in the communication channel. We make two more technical assumptions to guarantee, respectively, that the communication between the agents preserves the average of the estimates, and that the agents do not discard their own information. \begin{assumption}[Doubly Stochastic Weights] The communication matrix $A(k)$ is doubly stochastic for all $k\geq 0$, i.e., with probability one, $a_{ij}(k)\geq 0$ for all $i,j\in\mathcal{M}$, $\sum_{i=1}^m a_{ij}(k)=1$ for all $j\in\mathcal{M}$, and $\sum_{j=1}^m a_{ij}(k)=1$ for all $i\in\mathcal{M}$. \label{ass.stoch_weights} \end{assumption} \begin{assumption}[Self Confidence] There exists $\gamma>0$ such that $a_{ii}\geq\gamma$ for all agents $i\in\mathcal{M}$ with probability one. \label{ass.self_conf} \end{assumption} The double stochasticity assumption on the matrices $A(k)$ is satisfied when agents coordinate their weights when exchanging information, so that $a_{ij}(k)=a_{ji}(k)$ for all $i,j\in\mathcal{M}$ and $k\geq 0$.\footnote{This will be achieved when agents exchange information about their estimates and ``planned'' weights simultaneously and set their actual weights as the minimum of the planned weights; see \cite{nedic-ozdaglar} where such a coordination scheme is described in detail.} The self-confidence assumption states that each agent gives a significant weight to its own estimate.
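The following sketch (Python, for illustration only) samples a symmetric communication matrix consistent with the lower bound in Eq.\ (\ref{eq.prob_model}): each backbone edge is activated with probability $\min\{\delta, K/\|x_i-x_j\|^C\}$, activated edges receive weight $\gamma$, and the diagonal absorbs the remaining mass. The equal-weight rule is only one of many admissible choices satisfying the assumptions above, and the function name is our own.
\begin{verbatim}
import numpy as np

def sample_communication_matrix(x, edges, gamma, delta, K, C, rng):
    """Sample A(k) given the current states x (shape (m, n)).

    edges : list of backbone pairs (i, j) from the graph (M, E).
    Each edge is activated with prob. min{delta, K / ||x_i - x_j||^C};
    activated edges get weight gamma and each diagonal entry absorbs
    the remaining mass, so A is doubly stochastic
    (this assumes gamma * max degree <= 1 so that a_ii >= 0).
    """
    m = x.shape[0]
    A = np.zeros((m, m))
    for (i, j) in edges:
        dist = np.linalg.norm(x[i] - x[j])
        p = delta if dist == 0 else min(delta, K / dist**C)
        if rng.random() < p:
            A[i, j] = A[j, i] = gamma          # symmetric off-diagonal weights
    np.fill_diagonal(A, 1.0 - A.sum(axis=1))   # self-confidence weights a_ii
    return A
\end{verbatim}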
\subsection{A Counterexample} In this subsection, we construct an example to demonstrate that the algorithm defined in Eqs.\ (\ref{convex-comb})-(\ref{proj-error}) does not necessarily solve the optimization problem given in Eq.\ (\ref{optim-prob}). The following proposition shows that there exist problem instances where Assumptions \ref{ass.bounded_subgrad}-\ref{ass.self_conf} hold and $X_i = X$ for all $i \in \mathcal{M}$, however the sequence of estimates $x_i(k)$ (and the sequence of function values $f(x_i(k))$) diverge for some agent $i$ with probability one.
\begin{proposition}\label{prop:counter} Let Assumptions \ref{ass.bounded_subgrad}, \ref{connectivity}, \ref{ass.stoch_weights} and \ref{ass.self_conf} hold and let $X_i = X$ for all $i \in \mathcal{M}$. Let $\{x_i(k)\}$ be the sequences generated by the algorithm (\ref{convex-comb})-(\ref{proj-error}). Let $C > 1$ in Eq.\ (\ref{eq.prob_model}) and let the stepsize be a constant value $\alpha$. Then, there does not exist a bound $M(m,L,\alpha) < \infty$ such that \[ \liminf_{k \to \infty}|f(x_i(k)) - f^*| \leq M(m,L,\alpha)\] with probability 1, for all agents $i \in \mathcal{M}$. \end{proposition} \begin{proof} Consider a network consisting of two agents solving a one-dimensional minimization problem. The first agent's objective function is $f_1(x) = -x$, while the second agent's objective function is $f_2(x) = 2x$. Both agents' feasible sets are equal to $X_1 = X_2 = [0,\infty)$. Let $x_1(0) \geq x_2(0) \geq 0$. The elements of the communication matrix are given by $$ a_{1,2}(k) = a_{2,1}(k) = \left\{ \begin{array}{ll} \gamma, & \hbox{with probability}\qquad \min\left\{\delta,\frac{1}{|x_1(k)-x_2(k)|^C}\right\}; \\ 0, & \hbox{with probability}\qquad 1- \min\left\{\delta,\frac{1}{|x_1(k)-x_2(k)|^C}\right\}, \end{array} \right.$$ for some $\gamma \in (0,1/2]$ and $\delta \in [1/2,1)$. The optimal solution set of this multi-agent optimization problem is the singleton $X^* = \{0\}$ and the optimal value is $f^* = 0$. We now prove that $\lim_{k \to \infty} x_1(k) = \infty$ with probability 1, implying that $\lim_{k \to \infty} |f(x_1(k))-f^*| = \infty$. From the iteration in Eq.\ (\ref{eq.update_rule}), we have that for any $k$, \begin{eqnarray} x_1(k+1) &=& a_{1,1}(k)x_1(k) + a_{1,2}(k)x_2(k) + \alpha \label{eq:it counter1}\\ x_2(k+1) &=& \max\{0, a_{2,1}(k)x_1(k) + a_{2,2}(k)x_2(k) - 2\alpha\} \label{eq:it counter2}. \end{eqnarray} We do not need to project $x_1(k+1)$ onto $X_1 = [0,\infty)$ because $x_1(k+1)$ is non-negative if $x_1(k)$ and $x_2(k)$ are both non-negative. Note that since $\gamma \leq 1/2$, this iteration preserves $x_1(k) \geq x_2(k) \geq 0$ for all $k \in \mathbb{N}$. We now show that for any $k \in \mathbb{N}$ and any $x_1(k) \geq x_2(k) \geq 0$, there is probability at least $\epsilon > 0$ that the two agents will never communicate again, i.e., \begin{equation} P(a_{1,2}(k') = a_{2,1}(k') = 0 \hbox{ for all } k' \geq k| x(k)) \geq \epsilon > 0.\label{eq:inf prod}\end{equation} If the agents do not communicate on periods $k, k+1,...,k+j-1$ for some $j \geq 1$, then \begin{eqnarray*} x_1(k+j) - x_2(k+j) &=& (x_1(k+j) - x_1(k)) + (x_1(k) - x_2(k)) + (x_2(k) - x_2(k+j))\\ &\geq& \alpha j + 0 + 0, \end{eqnarray*} from Eqs.\ (\ref{eq:it counter1}) and (\ref{eq:it counter2}) and the fact that $x_1(k) \geq x_2(k)$.
Therefore, the communication probability at period $k+j$ can be bounded by \[ P(a_{1,2}(k+j) =0| x(k), a_{1,2}(k') = 0 \hbox{ for all } k' \in \{k,...,k+j-1\}) \geq 1 - \min\{\delta,(\alpha j)^{-C}\}.\] Applying this bound recursively for all $j \geq 0$, we obtain \begin{eqnarray*}&& P(a_{1,2}(k') = 0 \hbox{ for all } k' \geq k| x(k))\\&&\qquad = \prod_{j=0}^\infty P(a_{1,2}(k+j) = 0 | x(k), a_{1,2}(k') = 0 \hbox{ for all } k' \in \{k,...,k+j-1\})\\&& \qquad \geq \prod_{j=0}^\infty \left(1 - \min\{\delta,(\alpha j)^{-C}\}\right)\end{eqnarray*} for all $k$ and all $x_1(k) \geq x_2(k)$. We now show that $\prod_{j=0}^\infty \left(1 - \min\{\delta,(\alpha j)^{-C}\}\right) > 0$ if $C>1$. Define the constant $\overline{K} = \left\lceil\frac{2^{1 \over C}}{\alpha}\right\rceil$. Since $\delta \geq 1/2$, we have that $(\alpha j)^{-C} \leq \delta$ for $j \geq \overline{K}$. Hence, we can separate the infinite product into two components: \[ \prod_{j=0}^\infty \left(1 - \min\{\delta,(\alpha j)^{-C}\}\right) \geq \left[\prod_{j<\overline{K}} \left(1 - \min\{\delta,(\alpha j)^{-C}\}\right)\right]\left[ \prod_{j\geq\overline{K}} \left(1 - (\alpha j)^{-C}\right)\right].\] Note that the term in the first brackets in the equation above is a product of a finite number of strictly positive numbers and, therefore, is a strictly positive number. We, thus, have to show only that $\prod_{j\geq\overline{K}} \left(1 - (\alpha j)^{-C}\right) > 0$. We can bound this product by \begin{eqnarray*}&& \prod_{j\geq\overline{K}} \left(1 - (\alpha j)^{-C}\right) = \exp\left( \log \left(\prod_{j\geq\overline{K}} \left(1 - (\alpha j)^{-C}\right)\right)\right)\\&&\qquad = \exp \left( \sum_{j\geq\overline{K}} \log \left(1 - (\alpha j)^{-C}\right)\right) \geq \exp\left( \sum_{j\geq\overline{K}} -(\alpha j)^{-C}\log(4)\right),\end{eqnarray*} where the inequality follows from $\log(x) \geq (x-1)\log(4)$ for all $x \in [1/2,1]$. Since $C>1$, the sum $\sum_{j\geq\overline{K}} (\alpha j)^{-C}$ is finite and $\prod_{j=0}^\infty \left(1 - \min\{\delta,(\alpha j)^{-C}\}\right) > 0$, yielding Eq.\ (\ref{eq:inf prod}). Let $K^*$ be the (random) set of periods when the agents communicate, i.e., $a_{1,2}(k) = a_{2,1}(k) = \gamma$ if and only if $k \in K^*$. For any value $k \in K^*$ and any $x_1(k) \geq x_2(k)$, there is probability at least $\epsilon$ that the agents do not communicate after $k$. Conditionally on the state, this is an event independent of the history of the algorithm by the Markov property. If $K^*$ has infinitely many elements, then by the Borel-Cantelli Lemma we obtain that, with probability 1, for infinitely many $k$'s in $K^*$ there is no more communication between the agents after period $k$. This contradicts the infinite cardinality of $K^*$. Hence, the two agents only communicate finitely many times and $\lim_{k \to \infty} x_1(k) = \infty$ with probability 1. \end{proof}
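A short simulation in the spirit of this construction is sketched below (Python, purely illustrative; the parameter values are arbitrary choices satisfying the proposition's hypotheses with $K=1$). With a constant stepsize and $C>1$, the realized trajectory of $x_1(k)$ typically grows without bound once communication stops.
\begin{verbatim}
import numpy as np

def simulate_counterexample(alpha=0.1, gamma=0.5, delta=0.5, C=2.0,
                            steps=10000, seed=0):
    """Two agents, f_1(x) = -x, f_2(x) = 2x, X_1 = X_2 = [0, inf)."""
    rng = np.random.default_rng(seed)
    x1, x2 = 1.0, 0.0
    for _ in range(steps):
        gap = abs(x1 - x2)
        p = delta if gap == 0 else min(delta, 1.0 / gap**C)
        a12 = gamma if rng.random() < p else 0.0       # link activation
        x1_new = (1 - a12) * x1 + a12 * x2 + alpha     # subgradient of -x is -1
        x2_new = max(0.0, a12 * x1 + (1 - a12) * x2 - 2 * alpha)
        x1, x2 = x1_new, x2_new
    return x1, x2
\end{verbatim}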
The proposition above shows that the algorithm given by Eqs.\ (\ref{convex-comb})-(\ref{proj-error}) does not, in general, solve the global optimization problem (\ref{optim-prob}). However, there are two important caveats when considering the implications of this negative result. The first one is that the proposition only applies if $C > 1$. We leave it as an open question whether the same proposition would hold if $C \leq 1$. The second and more important caveat is that we considered only a constant stepsize in Proposition \ref{prop:counter}. The stepsize is typically a design choice and, thus, could be chosen to be diminishing in $k$ rather than a constant. In the subsequent sections, we prove that the algorithm given by Eqs.\ (\ref{convex-comb})-(\ref{proj-error}) does indeed solve the optimization problem of Eq.\ (\ref{optim-prob}), under appropriate assumptions on the stepsize sequence.
\section{Analysis of Information Exchange}\label{sec:stochastic} \subsection{The Disagreement Metric} In this section, we consider how some information that a given agent $i$ obtains at time $s$ affects a different agent $j$'s estimate $x_j(k)$ at a later time $k \geq s$. In particular, we introduce a disagreement metric $\rho(k,s)$ that establishes how far some information obtained by a given agent at time $s$ is from being completely disseminated in the network at time $k$. The two propositions at the end of this section provide bounds on $\rho(k,s)$ under two different sets of assumptions. In view of the linear representation in Eqs.\ (\ref{convex-comb})-(\ref{proj-error}), we can express the evolution of the estimates using products of matrices: for any $s\geq 0$ and any $k\geq s$, we define the {\it transition matrices} as \[ \Phi(k,s)= A(s)A(s+1)\cdots A(k-1)A(k)\qquad \hbox{for all $s$ and $k$ with $k\ge s$}. \] Using the transition matrices, we can relate the estimates at time $k$ to the estimates at time $s<k$ as follows: for all $i$, and all $k$ and $s$ with $k>s$, \begin{eqnarray} x_i(k+1) = \sum_{j=1}^m[\Phi(k,s)]_{ij}x_j(s) &-& \sum_{r=s+1}^k\sum_{j=1}^m [\Phi(k,r)]_{ij}\alpha(r-1)d_j(r-1) - \alpha(k)d_i(k)\nonumber\\ & +& \sum_{r=s+1}^k \sum_{j=1}^m [\Phi(k,r)]_{ij} e_j(r-1) + e_i(k).\label{evolution-est} \end{eqnarray} Observe from the iteration above that $[\Phi(k,s)]_{ij}$ determines how the information agent $i$ obtains at period $s-1$ impacts agent $j$'s estimate at period $k+1$. If $[\Phi(k,s)]_{ij} = 1/m$ for all agents $j$, then the information agent $i$ obtained at period $s-1$ is evenly distributed in the network at time $k+1$. We, therefore, introduce the {\it disagreement metric} $\rho$, \begin{equation}\label{eq:def-over-rho} \rho(k,s) = \max_{i,j\in\mathcal{M}}\left|[\Phi(k,s)]_{ij} - \frac{1}{m}\right| \qquad \hbox{ for all } k\geq s\geq 0, \end{equation} which, when close to zero, establishes that all information obtained at time $s-1$ by all agents is close to being evenly distributed in the network by time $k+1$.
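The quantities $\Phi(k,s)$ and $\rho(k,s)$ can be computed directly from a realized sequence of communication matrices, as in the following sketch (Python, for illustration only; the function names are our own).
\begin{verbatim}
import numpy as np

def transition_matrix(A_seq, k, s):
    """Phi(k, s) = A(s) A(s+1) ... A(k) for a list A_seq of matrices."""
    m = A_seq[0].shape[0]
    Phi = np.eye(m)
    for t in range(s, k + 1):
        Phi = Phi @ A_seq[t]
    return Phi

def disagreement(A_seq, k, s):
    """rho(k, s) = max_{i,j} | [Phi(k,s)]_{ij} - 1/m |."""
    Phi = transition_matrix(A_seq, k, s)
    m = Phi.shape[0]
    return np.max(np.abs(Phi - 1.0 / m))
\end{verbatim}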
\subsection{Propagation of Information} The analysis in the rest of this section is intended to produce upper bounds on the disagreement metric $\rho(k,s)$. We start our analysis by establishing an upper bound on the maximum distance between the estimates of any two agents at any time $k$. In view of our communication model [cf.\ Eq.\ (\ref{eq.prob_model})], this bound will be essential in constructing positive probability events that ensure information gets propagated across the agents in the network.
\begin{lemma} Let Assumptions \ref{ass.bounded_subgrad} and \ref{ass.stoch_weights} hold. Let $x_i(k)$ be generated by the update rule in (\ref{eq.update_rule}). Then, we have the following upper bound on the norm of the difference between the agent estimates: for all $k\ge 0$, \[ \max_{i,h\in\mathcal{M}}\|x_i(k)-x_h(k)\|\leq\Delta+2mL\sum_{r=0}^{k-1}\alpha(r)+ 2\sum_{r=0}^{k-1} \sum_{j=1}^m \|e_j(r)\|, \]where $\Delta=2m\max_{j\in\mathcal{M}}\|x_j(0)\|$, and $e_j(k)$ denotes the projection error. \label{lem.max_distance} \end{lemma} \begin{proof} Letting $s=0$ in Eq.\ (\ref{evolution-est}) yields \begin{eqnarray*} &&x_i(k) = \sum_{j=1}^m[\Phi(k-1,0)]_{ij}x_j(0)\\&&\qquad\qquad - \sum_{r=1}^{k-1}\sum_{j=1}^m [\Phi(k-1,r)]_{ij}\alpha(r-1)d_j(r-1) - \alpha(k-1)d_i(k-1)\nonumber\\&&\qquad\qquad + \sum_{r=1}^{k-1} \sum_{j=1}^m [\Phi(k-1,r)]_{ij} e_j(r-1) + e_i(k-1).\end{eqnarray*} Since the matrices $A(k)$ are doubly stochastic with probability one for all $k$ (cf.\ Assumption \ref{ass.stoch_weights}), it follows that the transition matrices $\Phi(k,s)$ are doubly stochastic for all $k\ge s\ge0$, implying that every entry $[\Phi(k,s)]_{ij}$ belongs to $[0,1]$ with probability one. Thus, for all $k$ we have \begin{eqnarray*} \|x_i(k)\|\leq\sum_{j=1}^m\|x_j(0)\|&+& \sum_{r=1}^{k-1}\sum_{j=1}^m\alpha(r-1)\|d_j(r-1)\| + \alpha(k-1)\|d_i(k-1)\| \\&+& \sum_{r=1}^{k-1} \sum_{j=1}^m \|e_j(r-1)\| + \|e_i(k-1)\|. \end{eqnarray*} Using the bound $L$ on the subgradients, this implies \[\|x_i(k)\|\leq\sum_{j=1}^m\|x_j(0)\| + \sum_{r=0}^{k-1}mL\alpha(r) + \sum_{r=0}^{k-1} \sum_{j=1}^m \|e_j(r)\|.\] Finally, the fact that $\|x_i(k)-x_h(k)\|\leq\|x_i(k)\|+\|x_h(k)\|$ for every $i,h\in\mathcal{M}$ establishes the desired result. \end{proof} The lemma above establishes a bound on the distance between the agents' estimates that depends on the projection errors $e_j$, which are endogenously determined by the algorithm. However, if there exists some $M > 0$ such that $\|e_i(k)\| \leq M \alpha(k)$ for all $i\in \mathcal{M}$ and all $k \geq 0$, then the lemma above implies that, with probability 1, $\max_{i,h\in\mathcal{M}}\|x_i(k)-x_h(k)\|\leq\Delta+2m(L+M)\sum_{r=0}^{k-1}\alpha(r)$. Under the assumption that such an $M$ exists, we define the following set for each $k \in \mathbb{N}$, \begin{equation}\label{eq:def-R-M} R_M(k) = \left\{ x \in \mathbb{R}^{m \times n}\ |\ \max_{i,h\in\mathcal{M}}\|x_i(k)-x_h(k)\|\leq\Delta+2m(L+M)\sum_{r=0}^{k-1}\alpha(r)\right\}. \end{equation} This set represents the set of agent states which can be reached when the agents use the projected subgradient algorithm. We next construct a sequence of events, denoted by $G(\cdot)$, whose individual occurrence implies that information has been propagated from one agent to all other agents, therefore implying a contraction of the disagreement metric $\rho$.
We say a link $(j,i)$ is {\it activated at time $k$} when $a_{ij}(k)\gammaeq\gammaamma$, and we denote by $\muathcal{E}(k)$ the set of such edges, i.e., \[\muathcal{E}(k) = \{(j,i)\ |\ a_{ij}(k)\gammaeq\gammaamma\}.\] Here we construct an event in which the edges of the graphs ${\cal E}(k)$ are activated sequentially over time $k$, so that information propagates from every agent to every other agent in the network. To define this event, we fix a node $w\in\muathcal{M}$ and consider {\it two directed spanning trees} rooted at $w$ in the graph $(\muathcal{M},{\cal E})$: an in-tree $T_{in,w}$ and an out-tree $T_{out,w}$. In $T_{in,w}$ there exists a directed path from every node $i\ne w$ to $w$; while in $T_{out,w}$, there exists a directed path from $w$ to every node $i\ne w$. The strong connectivity assumption imposed on $(\muathcal{M},{\cal E})$ guarantees that these spanning trees exist and each contains $m-1$ edges (see \cite{LP}). We order the edges of these spanning trees in a way such that on any directed path from a node $i\ne w$ to node $w$, edges are labeled in nondecreasing order. Let us represent the edges of the two spanning trees with the order described above as \betaegin{equation}T_{in,w}=\{e_1,e_2,\lambdadots,e_{m-1}\},\qquad T_{out,w}=\{f_1,f_2,\lambdadots,f_{m-1}\}.\lambdaabel{treeorder}\epsilonnd{equation} For the in-tree $T_{in,w}$, we pick an arbitrary leaf node and label the adjacent edge as $e_1$; then we pick another leaf node and label the adjacent edge as $e_2$; we repeat this until all leaves are picked. We then delete the leaf nodes and the adjacent edges from the spanning tree $T_{in,w}$, and repeat the same process for the new tree. For the out-tree $T_{out,w}$, we proceed as follows: pick a directed path from node $w$ to an arbitrary leaf and sequentially label the edges on that path from the root node $w$ to the leaf; we then consider a directed path from node $w$ to another leaf and label the unlabeled edges sequentially in the same fashion; we continue until all directed paths to all the leaves are exhausted. For all $l=1,\lambdadots,m-1$, and any time $k\gammaeq 0$, consider the events \betaegin{align} & B_l(k) = \{\omega \in \Omegamega\ |\ a_{e_l}(k+l-1)\gammae\gammaamma\},\lambdaabel{eq.in_event}\\ & D_l(k) = \{\omega \in \Omegamega\ |\ a_{f_l}(k+(m-1)+l-1)\gammae\gammaamma\}\lambdaabel{eq.out_event}, \epsilonnd{align} and define, \betaegin{equation} G(k) = \betaigcap_{l=1}^{m-1} \mathcal{B}ig(B_l(k)\cap D_l(k)\mathcal{B}ig). \lambdaabel{eq.G_event} \epsilonnd{equation} For all $l=1,\lambdadots,m-1$, $B_l(k)$ denotes the event that edge $e_l\in T_{in,w}$ is activated at time $k+l-1$, while $D_l(k)$ denotes the event that edge $f_l\in T_{out,w}$ is activated at time $k+(m-1)+l-1$. Hence, $G(k)$ denotes the event in which each edge in the spanning trees $T_{in,w}$ and $T_{out,w}$ is activated sequentially following time $k$, in the order given in Eq.\ (\muathbb{R}f{treeorder}). The following result establishes a bound on the probability of occurrence of such a $G(\cdot)$ event. It states that the probability of an event $G(\cdot)$ can be bounded as if the link activations were independent and each link activation had probability at least \[\muin\lambdaeft\{\deltaelta,\frac{K}{(\Delta+2m(L+M)\sigmaum_{r=1}^{k+2m-3}\alphalpha(r))^C}\right\},\] where the $k+2m-3$ follows from the fact that event $G(\cdot)$ is an intersection of $2(m-1)$ events occurring consecutively starting at period $k$.
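The event $G(k)$ can be checked directly on a realized sequence of weight matrices once the two ordered spanning trees are fixed. The sketch below does this for given edge lists; the trees, the threshold $\gamma$, and the matrices are illustrative inputs, and an edge $(j,i)$ is taken to be activated at time $t$ when $a_{ij}(t)\ge\gamma$, as above.
\begin{verbatim}
def edge_activated(A_seq, t, edge, gamma):
    # Edge (j, i) is activated at time t when a_{ij}(t) >= gamma.
    j, i = edge
    return A_seq[t][i, j] >= gamma

def event_G_occurs(A_seq, k, in_tree, out_tree, gamma):
    # in_tree  = [e_1, ..., e_{m-1}], out_tree = [f_1, ..., f_{m-1}],
    # ordered as described in the text.  A_seq must contain at least
    # k + 2(m-1) matrices, since G(k) spans 2(m-1) consecutive periods.
    m_minus_1 = len(in_tree)
    # B_l(k): edge e_l activated at time k + l - 1.
    for l, e in enumerate(in_tree, start=1):
        if not edge_activated(A_seq, k + l - 1, e, gamma):
            return False
    # D_l(k): edge f_l activated at time k + (m-1) + l - 1.
    for l, f in enumerate(out_tree, start=1):
        if not edge_activated(A_seq, k + m_minus_1 + l - 1, f, gamma):
            return False
    return True
\end{verbatim}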
\betaegin{lemma} Let Assumptions \muathbb{R}f{ass.bounded_subgrad}, \muathbb{R}f{connectivity} and \muathbb{R}f{ass.stoch_weights} hold. Let $\Delta$ denote the constant defined in Lemma \muathbb{R}f{lem.max_distance}. Moreover, assume that there exists $M>0$ such that $\|e_i(k)\|\lambdaeq M\alphalpha(k)$ for all $i$ and $k\gammaeq 0$. Then, \betaegin{enumerate} \item[(a)] For all $s\in\muathbb{N}$, $k\gammaeq s$, and any state $\overline x\in R_M(s)$, \[ P(G(k)|x(s)=\overline x) \gammaeq \muin\lambdaeft\{\deltaelta,\frac{K}{(\Delta+2m(L+M)\sigmaum_{r=1}^{k+2m-3}\alphalpha(r))^C}\right\}^{2(m-1)}. \] \item[(b)] For all $k\gammaeq 0$, $u\gammaeq 1$, and any state $\overline x\in R_M(k)$, \betaegin{align*} P&\lambdaeft(\betaigcup_{l=0}^{u-1} G(k+2(m-1)l)\mathcal{B}igg|x(k)=\overline{x}\right)\\&\gammaeq 1-\lambdaeft(1-\muin\lambdaeft\{\deltaelta,\frac{K}{(\Delta+2m(L+M)\sigmaum_{r=1}^{k+2(m-1)u-1}\alphalpha(r))^C}\right\}^{2(m-1)}\right)^u. \epsilonnd{align*} \epsilonnd{enumerate} \lambdaabel{lem.bound_prob_G} \epsilonnd{lemma} \betaegin{proof} (a) The proof is based on the fact that the communication matrices $A(k)$ are Markovian on the state $x(k)$, for all time $k\gammaeq 0$. First, note that \betaegin{align} P(G(k)|x(s)=\overline{x}) &= P\lambdaeft(\betaigcap_{l=1}^{m-1} \mathcal{B}ig(B_l(k)\cap D_l(k)\mathcal{B}ig)\mathcal{B}igg|x(s)=\overline{x}\right)\nonumber\\&= P\lambdaeft(\betaigcap_{l=1}^{m-1} B_l(k)\mathcal{B}igg|x(s)=\overline{x}\right)P\lambdaeft(\betaigcap_{l=1}^{m-1} D_l(k)\mathcal{B}igg|\betaigcap_{l=1}^{m-1} B_l(k),x(s)=\overline{x}\right).\lambdaabel{eq.prob_G_split} \epsilonnd{align} To simplify notation, let $W = 2m(L + M)$. We show that for all $k\gammaeq s$, \betaegin{equation}\lambdaabel{eq:markov-G} \inf_{\overline{x} \in R_M(s)} P\lambdaeft(\betaigcap_{l=1}^{m-1} B_l(k)\mathcal{B}igg|x(s)=\overline{x}\right)\gammaeq \muin\lambdaeft\{\deltaelta,\frac{K}{(\Delta+ W\sigmaum_{r=1}^{k+2m-3}\alphalpha(r))^C}\right\}^{(m-1)}. \epsilonnd{equation} We skip the proof of the equivalent bound for the second term in Eq.\ (\muathbb{R}f{eq.prob_G_split}) to avoid repetition. By conditioning on $x(k)$ we obtain for all $k \gammaeq s$, \betaegin{eqnarray*} &&\inf_{\overline{x} \in R_M(s)} P\lambdaeft(\betaigcap_{l=1}^{m-1} B_l(k)\muiddle|x(s)=\overline{x}\right) =\\&& \qquad \inf_{\overline{x} \in R_M(s)}\int_{x' \in \muathbb{R}^{m \tauimes n}} P\lambdaeft(\betaigcap_{l=1}^{m-1} B_l(k)\muiddle|x(k)=x', x(s)=\overline{x}\right)dP(x(k)=x'|x(s)=\overline{x}). \epsilonnd{eqnarray*} Using the Markov Property, we see that conditional on $x(s)$ can be removed from the right-hand side probability above, since $x(k)$ already contains all relevant information with respect to $\cap_{l=1}^{m-1} B_l(k)$. By the definition of $R_M(\cdot)$ [see Eq.\ (\muathbb{R}f{eq:def-R-M})], if $x(s) \in R_M(s)$, then $x(k) \in R_M(k)$ for all $k \gammaeq s$ with probability 1. Therefore, \betaegin{equation}\lambdaabel{eq:tranfs-s-k} \inf_{\overline{x} \in R_M(s)} P\lambdaeft(\betaigcap_{l=1}^{m-1} B_l(k)\muiddle|x(s)=\overline{x}\right) \gammaeq \inf_{\overline{x} \in R_M(k)} P\lambdaeft(\betaigcap_{l=1}^{m-1} B_l(k)\muiddle|x(k)=x'\right). 
\epsilonnd{equation} By the definition of $B_1(k)$, \betaegin{eqnarray} \lambdaabel{eq.probC_event}&& \inf_{\overline{x} \in R_M(k)}P\lambdaeft(\betaigcap_{l=1}^{m-1} B_l(k)\muiddle|x(k)=\overline{x}\right) =\\ &&\qquad \inf_{\overline{x} \in R_M(k)}P(a_{e_1}(k)\gammaeq\gammaamma|x(s)=\overline{x})P\lambdaeft(\betaigcap_{l=2}^{m-1} B_l(k)\muiddle|a_{e_1}(k)\gammaeq\gammaamma,x(k)=\overline{x}\right).\nonumber \epsilonnd{eqnarray} Define \[ Q(k)=\muin\lambdaeft\{\deltaelta,\frac{K}{\lambdaeft(\Delta+W\sigmaum_{r=1}^{k}\alphalpha(r)\right)^C}\right\}, \] and note that, in view of the assumption imposed on the norm of the projection errors and based on Lemma \muathbb{R}f{lem.max_distance}, we get \[ \muax_{i,h\in\muathcal{M}}\|x_i(k)-x_h(k)\|\lambdaeq\Delta+W\sigmaum_{r=0}^{k-1}\alphalpha(r). \] Hence, from Eq.\ (\muathbb{R}f{eq.prob_model}) we have \betaegin{equation} P(a_{ij}(k)\gammaeq\gammaamma | x(k)=\overline{x})\gammaeq Q(k). \lambdaabel{eq.bound_Pk} \epsilonnd{equation} Thus, combining Eqs.\ (\muathbb{R}f{eq.probC_event}) and (\muathbb{R}f{eq.bound_Pk}) we obtain, \betaegin{equation} \inf_{\overline{x} \in R_M(k)}P\lambdaeft(\betaigcap_{l=1}^{m-1} B_l(k)\mathcal{B}igg|x(k)=\overline{x}\right) \gammaeq Q(k)\inf_{\overline{x} \in R_M(k)}P\lambdaeft(\betaigcap_{l=2}^{m-1} B_l(k)\mathcal{B}igg|a_{e_1}(k)\gammaeq\gammaamma,x(k)=\overline{x}\right). \lambdaabel{eq.Prob_split} \epsilonnd{equation} By conditioning on the state $x(k+1)$, and repeating the use of the Markov property and the definition of $R_M(k+1)$, we can bound the right-hand side of the equation above, \betaegin{eqnarray} \nonumber&&\inf_{\overline{x} \in R_M(k)}P\lambdaeft(\betaigcap_{l=2}^{m-1} B_l(k)\mathcal{B}igg|a_{e_1}(k)\gammaeq\gammaamma,x(k)=\overline{x}\right) \\\nonumber &=&\inf_{\overline{x} \in R_M(k)} \int_{x'} P\lambdaeft(\betaigcap_{l=2}^{m-1} B_l(k)\muiddle|x(k+1)=x'\right)dP(x(k+1)=x'|a_{e_1}(k)\gammaeq\gammaamma,x(k)=\overline{x})\\ &\gammaeq& \inf_{x' \in R_M(k+1)} P\lambdaeft(\betaigcap_{l=2}^{m-1} B_l(k)\muiddle|x(k+1)=x'\right).\lambdaabel{eq:markov-repeat} \epsilonnd{eqnarray} Combining Eqs.\ (\muathbb{R}f{eq.probC_event}), (\muathbb{R}f{eq.Prob_split}) and (\muathbb{R}f{eq:markov-repeat}), we obtain \[ \inf_{\overline{x} \in R_M(k)}P\lambdaeft(\betaigcap_{l=1}^{m-1} B_l(k)\muiddle|x(k)=\overline{x}\right) \gammaeq Q(k) \inf_{\overline{x} \in R_M(k+1)} P\lambdaeft(\betaigcap_{l=2}^{m-1} B_l(k)\muiddle|x(k+1)=x'\right).\] Repeating this process for all $l=1,...,m-1$, this yields \[ \inf_{\overline{x} \in R_M(k)}P\lambdaeft(\betaigcap_{l=1}^{m-1} B_l(k)\muiddle|x(k)=\overline{x}\right) \gammaeq \Psirod_{l=1}^{m-1} Q(k+l-1).\] Since $Q$ is a decreasing function, $\Psirod_{l=1}^{m-1} Q(k+l-1) \gammaeq Q(k+2m-3)^{m-1}$. Combining with Eq.\ (\muathbb{R}f{eq:tranfs-s-k}), we have that for all $k \gammaeq s$ \[ \inf_{\overline{x} \in R_M(s)}P\lambdaeft(\betaigcap_{l=1}^{m-1} B_l(k)\muiddle|x(s)=\overline{x}\right) \gammaeq Q(k+2m-3)^{m-1},\] producing the desired Eq.\ (\muathbb{R}f{eq:markov-G}).\\ (b) Let $G^c(k)$ represent the complement of $G(k)$. Note that \betaegin{align*} P\lambdaeft(\betaigcup_{l=0}^{u-1} G(k+2(m-1)l)\mathcal{B}igg|x(k)=\overline{x}\right) = 1 - P\lambdaeft(\betaigcap_{l=0}^{u-1} G^c(k+2(m-1)l)\mathcal{B}igg|x(k)=\overline{x}\right). 
\epsilonnd{align*} By conditioning on $G^c(k)$, we obtain \betaegin{eqnarray*} &&P\lambdaeft(\betaigcap_{l=0}^{u-1} G^c(k+2(m-1)l)\mathcal{B}igg|x(k)=\overline{x}\right) =\\&&\qquad P\lambdaeft(G^c(k)\muiddle|x(k)=\overline{x}\right)P\lambdaeft(\betaigcap_{l=1}^{u-1} G^c(k+2(m-1)l)\mathcal{B}igg|G^c(k), x(k)=\overline{x}\right).\epsilonnd{eqnarray*} We bound the term $P\lambdaeft(G^c(k)\muiddle|x(k)=\overline{x}\right)$ using the result from part $(a)$. We bound the second term in the right-hand side of the equation above using the Markov property and the definition of $R_M(\cdot)$, which is the same technique from part $(a)$, \betaegin{eqnarray*} \nonumber&&\sigmaup_{\overline{x} \in R_M(k)}P\lambdaeft(\betaigcap_{l=1}^{u-1} G^c(k+2(m-1)l)\muiddle|G^c(k),x(k)=\overline{x}\right) \\\nonumber&&\qquad =\sigmaup_{\overline{x} \in R_M(k)} \int_{x'} P\lambdaeft(\betaigcap_{l=1}^{u-1} G^c(k+2(m-1)l)\muiddle|x(k+2(m-1))=x'\right)~\tauimes\\&&\qquad\qquad\qquad\qquad\qquad dP(x(k+2(m-1))=x'|G^c(k),x(k)=\overline{x})\\\nonumber &&\qquad \lambdaeq \sigmaup_{\overline{x} \in R_M(k+2(m-1))} P\lambdaeft(\betaigcap_{l=1}^{u-1} G^c(k+2(m-1)l)\muiddle|x(k+2(m-1))=x'\right). \epsilonnd{eqnarray*} The result follows by repeating the bound above $u$ times. \epsilonnd{proof} The previous lemma bounded the probability of an event $G(\cdot)$ occurring. The following lemma shows the implication of the event $G(\cdot)$ for the disagreement metric. \betaegin{lemma} Let Assumptions \muathbb{R}f{connectivity}, \muathbb{R}f{ass.stoch_weights} and \muathbb{R}f{ass.self_conf} hold. Let $t$ be a positive integer, and let there be scalars $s<s_1<s_2<\cdots <s_t<k$, such that $s_{i+1}-s_i\gammaeq 2(m-1)$ for all $i=1,\deltaots,t-1$. For a fixed realization $\omega \in \Omegamega$, suppose that events $G(s_i)$ occur for each $i=1,\deltaots,t$. Then, \[ \rho(k,s)\lambdaeq 2\lambdaeft(1+\frac{1}{\gammaamma^{2(m-1)}}\right)\lambdaeft(1-\gammaamma^{2(m-1)}\right)^t. \] \lambdaabel{lem.metric_bound} \epsilonnd{lemma} We skip the proof of this lemma since it would mirror the proof of Lemma 6 in \cite{Ilanoptim}. \sigmaubsection{Contraction Bounds} In this subsection, we obtain two propositions that establish contraction bounds on the disagreement metric based on two different sets of assumptions. For our first contraction bound, we need the following assumption on the sequence of stepsizes. \betaegin{assumption}\lambdaabel{ass. small steps} \epsilonmph{(Limiting Stepsizes)} The sequence of stepsizes $\{\alphalpha(k)\}_{k \in \muathbb{N}}$ satisfies \[ \lambdaim_{k\rightarrow\infty}k \lambdaog^p(k) \alphalpha(k)=0 \qquad \hbox{ for all $p < 1$}. \] \epsilonnd{assumption} The following lemma highlights two properties of stepsizes that satisfy Assumption \muathbb{R}f{ass. small steps}: they are always square summable and they are not necessarily summable. The convergence results in Section \muathbb{R}f{sec:optim} require stepsizes that are, at the same time, not summable and square summable. \betaegin{lemma}\lambdaabel{stepsizeprop} Let $\{\alphalpha(k)\}_{k \in \muathbb{N}}$ be a stepsize sequence that satisfies Assumption \muathbb{R}f{ass. small steps}. Then, the stepsizes are square summable, i.e, $\sigmaum_{k=0}^\infty \alphalpha^2(k) < \infty$. Moreover, there exists a sequence of stepsizes $\{\overline{\alphalpha}(k)\}_{k \in \muathbb{N}}$ that satisfies Assumption \muathbb{R}f{ass. small steps} and is not summable, i.e., $\sigmaum_{k=0}^\infty \overline{\alphalpha}(k) = \infty$. 
\epsilonnd{lemma} \betaegin{proof} From Assumption \muathbb{R}f{ass. small steps}, with $p=0$, we obtain that there exists some $\overline{K} \in \muathbb{N}$ such that $\alphalpha(k) \lambdaeq 1/k$ for all $k \gammaeq \overline{K}$. Therefore, \[ \sigmaum_{k=0}^\infty \alphalpha^2(k) \lambdaeq \sigmaum_{k=0}^{\overline{K}-1} \alphalpha^2(k) + \sigmaum_{k=\overline{K}}^\infty \frac{1}{k^2} \lambdaeq \overline{K}\muax_{k \in \{0,...,\overline{K}-1\}} \alphalpha^2(k) + \frac{\Psii^2}{6} < \infty.\] Hence, $\{\alphalpha(k)\}_{k \in \muathbb{N}}$ is square summable. Now, let $\overline{\alphalpha}(k) = \frac{1}{(k+2)\lambdaog(k+2)}$ for all $k \in \muathbb{N}$. This sequence of stepsizes satisfies Assumption \muathbb{R}f{ass. small steps} and is not summable since for all $K' \in \muathbb{N}$ \[ \sigmaum_{k=0}^{K'} \overline{\alphalpha}(k) \gammaeq \lambdaog(\lambdaog(K'+2))\] and $\lambdaim_{K' \tauo \infty} \lambdaog(\lambdaog(K'+2)) = \infty$. \epsilonnd{proof} The following proposition is one of the central results in our paper. It establishes, first, that for any fixed $s$, the expected disagreement metric $E[\rho(k,s)|x(s)=\overline{x}]$ decays at a rate of $e^{-\muu\sigmaqrt{k-s}}$, for some $\muu>0$, as $k$ goes to infinity. Importantly, it also establishes that, as $s$ grows, the contraction bound for a fixed distance $k-s$ degrades only slowly in $s$. This slow degradation is quantified by a function $\betaeta(s)$ that grows to infinity slower than the polynomial $s^q$ for any $q > 0$. \betaegin{proposition} Let Assumptions \muathbb{R}f{ass.bounded_subgrad}, \muathbb{R}f{connectivity}, \muathbb{R}f{ass.stoch_weights}, \muathbb{R}f{ass.self_conf}, and \muathbb{R}f{ass. small steps} hold. Assume also that there exists some $M>0$ such that $\|e_i(k)\|\lambdaeq M\alphalpha(k)$ for all $i \in \muathcal{M}$ and $k \in \muathbb{N}$. Then, there exists a scalar $\muu > 0$, an increasing function $\betaeta(s):\muathbb{N}\tauo\muathbb{R}_+$ and a function $S(q):\muathbb{N}\tauo\muathbb{N}$ such that \betaegin{equation} \betaeta(s) \lambdaeq s^q \qquad \hbox{ for all $q > 0$ and all $s \gammaeq S(q)$}\lambdaabel{eq:def beta} \epsilonnd{equation}\betaegin{equation} \hbox{and } \qquad E[\rho(k,s)|x(s)=\overline{x}] \lambdaeq \betaeta(s) e^{-\muu\sigmaqrt{k-s}}\quad\mubox{for all } k\gammaeq s \gammaeq 0, ~ \overline{x} \in R_M(s). \lambdaabel{eq:contraction}\epsilonnd{equation}\lambdaabel{prop:contraction} \epsilonnd{proposition} \betaegin{proof} \epsilonmph{Part 1.} The first step of the proof is to define two functions, $g(k)$ and $w(k)$, that respectively bound the sum of the stepsizes up to time $k$ and the inverse of the probability of communication at time $k$, and prove some limit properties of the functions $g(k)$ and $w(k)$ [see Eqs.\ (\muathbb{R}f{eq:limit-g}) and (\muathbb{R}f{eq:limit-w})]. Define $g(k): \muathbb{R}_+ \tauo \muathbb{R}_+$ to be the linear interpolation of $\sigmaum_{r=0}^{\lambdafloor k\rfloor} \alphalpha(r)$, i.e., \[g(k) = \sigmaum_{r=0}^{\lambdafloor k \rfloor} \alphalpha(r) + (k - \lambdafloor k \rfloor) \alphalpha(\lambdafloor k \rfloor + 1).\] Note that $g$ is differentiable everywhere except at integer points and $g'(k) = \alphalpha(\lambdafloor k \rfloor + 1) = \alphalpha(\lambdaceil k \rceil)$ at $k \notin \muathbb{N}$. We thus obtain from Assumption \muathbb{R}f{ass.
small steps} that for all $p < 1$, \betaegin{equation}\lambdaabel{eq:limit-g}\lambdaim_{k\rightarrow\infty, k \notin \muathbb{N}}k \lambdaog^p(k) g'(k) = \lambdaim_{k\rightarrow\infty}\lambdaceil k \rceil \lambdaog^p(\lambdaceil k \rceil) \alphalpha(\lambdaceil k \rceil) = 0.\epsilonnd{equation} Define $w(k)$ according to \betaegin{equation}\lambdaabel{eq:def-w} w(k) = \frac{(\Delta + 2m(L+M)g(k))^{2(m-1)C}}{K^{2(m-1)}},\epsilonnd{equation} where $\Delta = 2m \muax_{j \in \muathcal{M}} \|x_j(0)\|$ and $K$ and $C$ are parameters of the communication model [see Eq.\ (\muathbb{R}f{eq.prob_model})]. We now show that for any $p < 1$, \betaegin{equation}\lambdaabel{eq:limit-w}\lambdaim_{k\rightarrow\infty, k \notin \muathbb{N}}k \lambdaog^p(k) w'(k) = 0.\epsilonnd{equation} If $\lambdaim_{k \tauo \infty} w(k) < \infty$, then the equation above holds immediately from Eq.\ (\muathbb{R}f{eq:limit-g}). Therefore, assume $\lambdaim_{k \tauo \infty} w(k) = \infty$. By L'Hospital's Rule, for any $q > 0$, \betaegin{equation}\lambdaabel{eq:lhospital} \lambdaim_{k \tauo \infty, k \notin \muathbb{N}} \frac{w(k)}{\lambdaog^q(k)} = \frac{1}{q}\lambdaim_{k\rightarrow\infty, k \notin \muathbb{N}}\frac{k w'(k)}{\lambdaog^{q-1}(k)}.\epsilonnd{equation} At the same time, if we take $w(k)$ to the power $\frac{1}{2(m-1)C}$ before using L'Hospital's Rule, we obtain that for any $q > 0$, \betaegin{eqnarray*} \lambdaim_{k \tauo \infty, k \notin \muathbb{N}} \lambdaeft(\frac{w(k)}{\lambdaog^q(k)}\right)^{\frac{1}{2(m-1)C}} &=&\frac{1}{K^{1/C}}\lambdaim_{k\rightarrow\infty, k \notin \muathbb{N}}\frac{\Delta + 2m(L+M) g(k)}{\lambdaog^{\frac{q}{2(m-1)C}}(k)}\\&=& \frac{4m(m-1)(L+M)C}{K^{1/C}q}\lambdaim_{k\rightarrow\infty, k \notin \muathbb{N}}\frac{k g'(k)}{\lambdaog^{\frac{q}{2(m-1)C}-1}(k)} = 0,\epsilonnd{eqnarray*} where the last equality follows from Eq.\ (\muathbb{R}f{eq:limit-g}). From the equation above, we obtain \betaegin{equation}\lambdaabel{eq:limit-w2}\lambdaim_{k \tauo \infty, k \notin \muathbb{N}} \frac{w(k)}{\lambdaog^q(k)} = \lambdaeft[\lambdaim_{k \tauo \infty, k \notin \muathbb{N}} \lambdaeft(\frac{w(k)}{\lambdaog^q(k)}\right)^{\frac{1}{2(m-1)C}}\right]^{2(m-1)C} = 0,\epsilonnd{equation} which combined with Eq.\ (\muathbb{R}f{eq:lhospital}), yields the desired Eq.\ (\muathbb{R}f{eq:limit-w}) for any $p = 1-q < 1$.\\ \epsilonmph{Part 2.} The second step of the proof involves defining a family of events $\{H_i(s)\}_{i,s}$ that occur with probability at least $\Psihi > 0$. We will later prove that an occurrence of $H_i(s)$ implies a contraction of the distance between the estimates. Let $h_i(s) = i + \lambdaceil w(2s)\rceil$ for any $i, s \in \muathbb{N}$, where $w(\cdot)$ is defined in Eq.\ (\muathbb{R}f{eq:def-w}). We say the event $H_i(s)$ occurs if one out of a sequence of $G$-events [see definition in Eq.\ (\muathbb{R}f{eq.G_event})] starting after $s$ occurs. In particular, $H_i(s)$ is the union of $h_i(s)$ $G$-events and is defined as follows, \[ H_i(s) = \betaigcup_{j=1}^{h_i(s)} G\lambdaeft(s + 2(m-1)\lambdaeft(j-1+\sigmaum_{r=1}^{i-1} h_r(s)\right)\right) \qquad \hbox{ for all $i$, $s \in \muathbb{N}$},\] where $\sigmaum_{r=1}^0 (\cdot) = 0$. See Figure 1 for a graphic representation of the $H_i(s)$ events. 
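The nested construction of the $H_i(s)$ events can be made explicit by listing the starting times of the $G$-events that compose each $H_i(s)$. The sketch below simply unrolls the definitions of $h_i(s)$ and $H_i(s)$ given above; the function $w(\cdot)$ is supplied as a placeholder (the paper's $w$ is defined via $g$ and the communication-model constants). Note that each $G$-event spans $2(m-1)$ periods, so consecutive starting times within one $H_i(s)$ differ by exactly $2(m-1)$.
\begin{verbatim}
import math

def h(i, s, w):
    # h_i(s) = i + ceil(w(2s))
    return i + math.ceil(w(2 * s))

def H_event_starts(i, s, m, w):
    # Starting times of the h_i(s) consecutive G-events that make up H_i(s):
    # G( s + 2(m-1) * (j - 1 + sum_{r=1}^{i-1} h_r(s)) ),  j = 1, ..., h_i(s).
    offset = sum(h(r, s, w) for r in range(1, i))
    return [s + 2 * (m - 1) * (j - 1 + offset)
            for j in range(1, h(i, s, w) + 1)]

# Placeholder w(.), only for illustration.
w = lambda t: math.log(t + 2)
print(H_event_starts(i=2, s=3, m=4, w=w))
\end{verbatim}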
\betaegin{figure} \centering \includegraphics[width=6.5in]{events.pdf} \caption{The figure illustrates the three levels of probabilistic events considered in the proof: the events $B_l(s)$ and $D_l(s)$, which represent the occurrence of communication over a link (edge of the in-tree and out-tree, respectively); the events $G(s)$ as defined in (\muathbb{R}f{eq.G_event}), with length $2(m-1)$ and whose occurrence dictates the spread of information from any agent to every other agent in the network; the events $H_i(s)$ constructed as the union of an increasing number of events $G(s)$ so that their probability of occurrence is guaranteed to be uniformly bounded away from zero. The occurrence of an event $H_i(s)$ also implies the spread of information from one agent to the entire network and, as a result, leads to a contraction of the distance between the agents' estimates.} \lambdaabel{fig.events} \epsilonnd{figure} We now show that $P(H_i(s)|x(s) = \overline{x})$ is uniformly bounded away from zero for all $i,s \in \muathbb{N}$ and all $\overline{x} \in R_M(s)$ [see definition of $R_M(s)$ in Eq.\ (\muathbb{R}f{eq:def-R-M})]. From Lemma \muathbb{R}f{lem.bound_prob_G}(a) and the definition of $w(\cdot)$, we obtain that for all $\overline{x} \in R_M(s)$, \[ P(G(s)|x(s)=\overline{x}) \gammaeq \muin\lambdaeft\{\deltaelta^{2(m-1)},\frac{1}{w(s+2m-3)}\right\}.\] Then, for all $s,i \in \muathbb{N}$ and all $\overline{x} \in R_M(s)$, \betaegin{eqnarray*} P(H_i(s)|x(s)=\overline{x}) &=& P\lambdaeft(\betaigcup_{j=1}^{h_i(s)} G\lambdaeft(s + 2(m-1)\lambdaeft(j-1+\sigmaum_{r=1}^{i-1} h_r(s)\right)\right)\muiddle|x(s)=\overline{x}\right)\\ &\gammaeq& 1 - \lambdaeft(1 - \muin\lambdaeft\{\deltaelta^{2(m-1)}, \frac{1}{w\lambdaeft(s + 2(m-1)\sigmaum_{r=1}^{i} h_r(s)\right)}\right\}\right)^{h_i(s)}, \epsilonnd{eqnarray*} where the inequality follows from Lemma \muathbb{R}f{lem.bound_prob_G}(b) and the fact that $w(\cdot)$ is a non-decreasing function. Note that $h_r(s) \gammaeq r$ for all $r$ and $s$, so that $s + 2(m-1)\sigmaum_{r=1}^{i} h_r(s) \gammaeq i^2$. Let $\hat{I}$ be the smallest $i$ such that $w(i^2) \gammaeq \deltaelta^{-2(m-1)}$. We then have that for all $i \gammaeq \hat{I}$, all $s$ and all $\overline{x} \in R_M(s)$, \betaegin{eqnarray*} P(H_i(s)|x(s)=\overline{x}) &\gammaeq& 1 - \lambdaeft(1 - \frac{1}{w\lambdaeft(s + 2(m-1)\sigmaum_{r=1}^{i} h_r(s)\right)}\right)^{h_i(s)}. \epsilonnd{eqnarray*} Let $\tauilde{I}$ be the maximum between $\hat{I}$ and the smallest $i$ such that $w(i^2) > 1$. Using the inequality $(1 - 1/x)^x \lambdaeq e^{-1}$ for all $x \gammaeq 1$, and multiplying and dividing the exponent in the equation above by $w\lambdaeft(s + 2(m-1)\sigmaum_{r=1}^{i} h_r(s)\right)$ we obtain \betaegin{eqnarray*} P(H_i(s)|x(s)=\overline{x}) \gammaeq 1 - e^{-\frac{h_i(s)}{w\lambdaeft(s + 2(m-1)\sigmaum_{r=1}^{i} h_r(s)\right)}} \epsilonnd{eqnarray*} for all $i \gammaeq \tauilde{I}$, all $s$ and all $\overline{x} \in R_M(s)$. By bounding $h_r(s) \lambdaeq h_i(s)$ and replacing $h_i(s) = i + \lambdaceil w(2s)\rceil$, we obtain \betaegin{eqnarray*} P(H_i(s)|x(s)=\overline{x}) \gammaeq 1 - e^{-\frac{i + \lambdaceil w(2s)\rceil}{w\lambdaeft(s + 2(m-1)(i^2 + i \lambdaceil w(2s)\rceil)\right)}} \gammaeq 1 - e^{-\frac{i + w(2s)}{w\lambdaeft(s + 2(m-1)(i^2 + i w(2s) +i)\right)}}.
\epsilonnd{eqnarray*} We now show there exists some $\overline{I}$ such that \betaegin{equation}\lambdaabel{eq:increasing} 1 - e^{-\frac{i + w(2s)}{w\lambdaeft(s + 2(m-1)(i^2 + i w(2s) + i)\right)}} \qquad \hbox{is increasing in $i$ for all $i \gammaeq \overline{I}$, $s \in \muathbb{N}$.}\epsilonnd{equation} The function above is increasing in $i$ if $\frac{i + w(2s)}{w\lambdaeft(s + 2(m-1)(i^2 + i w(2s) + i)\right)}$ is increasing in $i$. The partial derivative of this function with respect to $i$ is positive if \betaegin{eqnarray*} &&w\lambdaeft(s + 2(m-1)(i^2 + i w(2s) + i)\right) - \\&& 2(m-1)(2i^2+i+3iw(2s)+w(2s)+w^2(2s))w'\lambdaeft(s + 2(m-1)(i^2 + i w(2s) + i)\right) > 0\epsilonnd{eqnarray*} at all points where the derivative $w'(\cdot)$ exists, that is, at non-integer values. If $i \gammaeq \tauilde{I}$, then $w\lambdaeft(s + 2(m-1)(i^2 + i w(2s) + i)\right) > 1$ for all $s$ and it is thus sufficient to show \[2(m-1)(2i^2+i+3iw(2s)+w(2s)+w^2(2s))w'\lambdaeft(s + 2(m-1)(i^2 + i w(2s) + i)\right) \lambdaeq 1\] in order to prove that Eq.\ (\muathbb{R}f{eq:increasing}) hold. The equation above holds if \[2(m-1)(3i^2+4iw(2s)+w^2(2s))w'\lambdaeft(2(m-1)(i^2 + i w(2s)) + s\right) \lambdaeq 1.\] From Eq.\ (\muathbb{R}f{eq:limit-w}) with $p=1/2$, we have that there exists some $N$ such that for all $x\gammaeq N$, $w'(x) \lambdaeq \frac{1}{4x\sigmaqrt{\lambdaog(x)}}$. For $i^2 \gammaeq N$ and any $s \in \muathbb{N}$, \betaegin{eqnarray*}&&2(m-1)(3i^2+4iw(2s)+w^2(2s))w'\lambdaeft(2(m-1)(i^2 + i w(2s)) + s\right)\\&& \qquad \lambdaeq \frac{2(m-1)(3i^2+4iw(2s)+w^2(2s))}{(2(m-1)(4i^2 + 4 iw(2s)) + 4s)\sigmaqrt{\lambdaog(2(m-1)(i^2 + iw(2s) + s)}}\\&&\qquad \lambdaeq \frac{3i^2+4iw(2s)+w^2(2s)}{4i^2 + 4 iw(2s) + \frac{2}{m-1}s\sigmaqrt{\lambdaog(i^2)}}.\epsilonnd{eqnarray*} The term above is less than or equal to 1 if we select $i$ large enough such that $\frac{2}{m-1}s\sigmaqrt{\lambdaog(i^2)} \gammaeq w^2(2s)$ for all $s \in \muathbb{N}$ [see Eq.\ (\muathbb{R}f{eq:limit-w2}) with $q < 1/2$], thus proving there exists some $\overline{I}$ such that Eq.\ (\muathbb{R}f{eq:increasing}) holds. Hence, we obtain that for all $i, s \in \muathbb{N}$ and all $\overline{x} \in R_M(s)$, \betaegin{eqnarray*} &&P(H_i(s)|x(s)=\overline{x})\gammaeq \\&& \qquad \muin_{j \in \{1,...,\overline{I}\}} \lambdaeft\{ 1 - \lambdaeft(1 - \muin\lambdaeft\{\deltaelta^{2(m-1)}, \frac{1}{w\lambdaeft(s + 2(m-1)\sigmaum_{r=1}^{j} h_r(s)\right)}\right\}\right)^{h_j(s)}\right\}. \epsilonnd{eqnarray*} Since $P(H_i(s)|x(s)=\overline{x}) > 0$ for all $i, s \in \muathbb{N}$ and all $\overline{x} \in R_M(s)$, to obtain the uniform lower bound on $P(H_i(s)|x(s)=\overline{x}) \gammaeq \Psihi > 0$, it is sufficient to show that for all $i \in \{1,...,\overline{I}\}$ and all $\overline{x} \in R_M(s)$, \[ \lambdaim_{s \tauo \infty} P(H_i(s)|x(s)=\overline{x}) > 0.\] Repeating the steps above, but constraining $s$ to be large enough instead of $i$, we obtain there exists some $\tauilde{S}$ such that for all $s \gammaeq \tauilde{S}$, all $i \in \muathbb{N}$ and $\overline{x} \in R_M(s)$, \betaegin{eqnarray*} P(H_i(s)|x(s)=\overline{x}) \gammaeq 1 - e^{-\frac{i + w(2s)}{w\lambdaeft(s + 2(m-1)(i^2 + i w(2s) +i)\right)}}. 
\epsilonnd{eqnarray*} Since there exists some $\hat{S}$ such that $w(2s) \lambdaeq \lambdaog(2s)$ for all $s \gammaeq \hat{S}$ [see Eq.\ (\muathbb{R}f{eq:limit-w2}) with $q=1$], we obtain \betaegin{eqnarray*} P(H_i(s)|x(s)=\overline{x}) \gammaeq 1 - e^{-\frac{i + w(2s)}{w\lambdaeft(s + 2(m-1)(i^2 + i \lambdaog(2s) +i)\right)}}. \epsilonnd{eqnarray*} for $s \gammaeq \muax\{\hat{S},\tauilde{S}\}$ and all $i \in \muathbb{N}$ and $\overline{x} \in R_M(s)$. Note that for every $i$, there exists some $\overline{S}(i)$ such that for all $s \gammaeq \overline{S}(i)$ the numerator is greater than the denominator in the exponent above. Therefore, for all $i \in \muathbb{N}$ and $\overline{x} \in R_M(s)$, \betaegin{eqnarray*} \lambdaim_{s \tauo \infty} P(H_i(s)|x(s)=\overline{x}) \gammaeq 1 - e^{-1}. \epsilonnd{eqnarray*} Hence, there indeed exists some $\Psihi >0$ such that $P(H_i(s)|x(s)=\overline{x}) \gammaeq \Psihi$ for all $i, s \in \muathbb{N}$ and $\overline{x} \in R_M(s)$.\\ \epsilonmph{Part 3.} In the previous step, we defined an event $H_i(s)$ and proved it had probability at least $\Psihi > 0$ of occurrence for any $i$ and $s$. We now determine a lower bound on the number of possible $H$-events in an interval $\{s,...,k\}$. The maximum number of possible $H$ events in the interval $\{s,...,k\}$ is given by \[ u(k,s) = \muax\lambdaeft\{t \in \muathbb{N}\ |\ s + 2(m-1)\sigmaum_{i=1}^t h_i(s) \lambdaeq k\right\}.\] Recall that $h_i(s) = i+\lambdaceil w(2s)\rceil \lambdaeq i + w(2s) + 1$ to obtain \[u(k,s) \gammaeq \muax\lambdaeft\{t \in \muathbb{N}\ |\ \sigmaum_{i=1}^t (i+ w(2s)+1) \lambdaeq \frac{k-s}{2(m-1)}\right\}.\] By expanding the sum and adding $\lambdaeft(\frac{3}{2}+w(2s)\right)^2$ to the left-hand side of the equation inside the maximization above, we obtain the following bound \betaegin{eqnarray*} u(k,s) &\gammaeq& \muax\lambdaeft\{t \in \muathbb{N}\ |\ t^2+ 3t +2w(2s)t +\lambdaeft(\frac{3}{2}+w(2s)\right)^2 \lambdaeq \frac{k-s}{m-1}\right\}\\ &=& \muax\lambdaeft\{t \in \muathbb{N}\ |\ t+\frac{3}{2}+w(2s)\lambdaeq \sigmaqrt{\frac{k-s}{m-1}}\right\}, \epsilonnd{eqnarray*} which yields the desired bound on $u(k,s)$, \betaegin{equation}\lambdaabel{eq:bound-u} u(k,s) \gammaeq \sigmaqrt{\frac{k-s}{m-1}}-\frac{5}{2}-w(2s).\epsilonnd{equation} \epsilonmph{Part 4.} We now complete the proof of the proposition. The following argument shows there is a high probability that several $H$-events occur in a given $\{s,...,k\}$ interval and, therefore, we obtain the desired contraction. Let $I_i(s)$ be the indicator variable of the event $H_i(s)$, that is $I_i(s) = 1$ if $H_i(s)$ occurs and $I_i(s) = 0$ otherwise. For any $k \gammaeq s \gammaeq 0$, any $\overline{x} \in R_M(s)$ and any $\deltaelta >0$, the disagreement metric $\rho$ satisfies \betaegin{eqnarray*} &&E[\rho(k,s)|x(s)=\overline{x}] =\\&&\quad E\lambdaeft[\rho(k,s)\muiddle|x(s)=\overline{x}, \sigmaum_{i=1}^{u(k,s)} I_i(s) > \deltaelta u(k,s)\right] P\lambdaeft(\sigmaum_{i=1}^{u(k,s)} I_i(s) > \deltaelta u(k,s)\muiddle|x(s)=\overline{x}\right)+\\&&\quad E\lambdaeft[\rho(k,s)\muiddle|x(s)=\overline{x}, \sigmaum_{i=1}^{u(k,s)} I_i(s) \lambdaeq \deltaelta u(k,s)\right] P\lambdaeft(\sigmaum_{i=1}^{u(k,s)} I_i(s) \lambdaeq \deltaelta u(k,s)\muiddle|x(s)=\overline{x}\right). 
\epsilonnd{eqnarray*} Since all the terms on the right-hand side of the equation above are less than or equal to 1, we obtain \betaegin{eqnarray} &&\lambdaabel{eq:over-rho}E[\rho(k,s)|x(s)=\overline{x}] \lambdaeq\\&&\quad E\lambdaeft[\rho(k,s)\muiddle|x(s)=\overline{x}, \sigmaum_{i=1}^{u(k,s)} I_i(s) > \deltaelta u(k,s)\right]+ P\lambdaeft(\sigmaum_{i=1}^{u(k,s)} I_i(s) \lambdaeq \deltaelta u(k,s)\muiddle|x(s)=\overline{x}\right).\nonumber \epsilonnd{eqnarray} We now bound the two terms in the right-hand side of Eq.\ (\muathbb{R}f{eq:over-rho}). Consider initially the first term. If $\sigmaum_{i=1}^{u(k,s)} I_i(s) > \deltaelta u(k,s)$, then at least $\deltaelta u(k,s)$ $H$-events occur, which by the definition of $H_i(s)$ implies that at least $\deltaelta u(k,s)$ $G$-events occur. From Lemma \muathbb{R}f{lem.metric_bound}, we obtain \betaegin{equation}\lambdaabel{eq:over-rho1} E\lambdaeft[\rho(k,s)\muiddle|x(s)=\overline{x}, \sigmaum_{i=1}^{u(k,s)} I_i(s) > \deltaelta u(k,s)\right]\lambdaeq 2\lambdaeft(1+\frac{1}{\gammaamma^{2(m-1)}}\right)\lambdaeft(1-\gammaamma^{2(m-1)}\right)^{\deltaelta u(k,s)} \epsilonnd{equation} for all $\deltaelta>0$. We now consider the second term in the right-hand side of Eq.\ (\muathbb{R}f{eq:over-rho}). The events $\{H_i(s)\}_{i=1,...,u(k,s)}$ all have probability at least $\Psihi > 0$ conditional on any $x(s) \in R_M(s)$, but they are not independent. However, given any $x\lambdaeft(s+2(m-1)\sigmaum_{r=1}^{j-1} h_r(s)\right) \in R_M\lambdaeft(s+2(m-1)\sigmaum_{r=1}^{j-1} h_r(s)\right)$, the event $H_j(s)$ is independent from the set of events $\{H_i(s)\}_{i=1,...,j-1}$ by the Markov property. Therefore, we can define a sequence of independent indicator variables $\{J_i(s)\}_{i=1,...,u(k,s)}$ such that $P(J_i(s) = 1) = \Psihi$ and $J_i(s) \lambdaeq I_i(s)$ for all $i \in \{1,...,u(k,s)\}$ conditional on $x(s) \in R_M(s)$. Hence, \betaegin{equation}\lambdaabel{eq:over-rho2}P\lambdaeft(\sigmaum_{i=1}^{u(k,s)} I_i(s) \lambdaeq \deltaelta u(k,s)\muiddle|x(s)=\overline{x}\right) \lambdaeq P\lambdaeft(\sigmaum_{i=1}^{u(k,s)} J_i(s) \lambdaeq \deltaelta u(k,s)\right),\epsilonnd{equation} for any $\deltaelta > 0$ and any $\overline{x} \in R_M(s)$. By selecting $\deltaelta = \frac{\Psihi}{2}$ and using Hoeffding's Inequality, we obtain \betaegin{equation}\lambdaabel{eq:over-rho3}P\lambdaeft(\frac{1}{u(k,s)}\sigmaum_{i=1}^{u(k,s)} J_i(s) \lambdaeq \frac{\Psihi}{2}\right) \lambdaeq e^{-2\frac{\Psihi^2}{2^2}u(k,s)}.\epsilonnd{equation} Plugging Eqs.\ (\muathbb{R}f{eq:over-rho1}), (\muathbb{R}f{eq:over-rho2}) and (\muathbb{R}f{eq:over-rho3}), with $\deltaelta = \Psihi/2$, into Eq.\ (\muathbb{R}f{eq:over-rho}), we obtain \betaegin{eqnarray*} E[\rho(k,s)|x(s)=\overline{x}] \lambdaeq 2\lambdaeft(1+\frac{1}{\gammaamma^{2(m-1)}}\right)\lambdaeft(1-\gammaamma^{2(m-1)}\right)^{\frac{\Psihi}{2} u(k,s)} + e^{-\frac{\Psihi^2}{2}u(k,s)}, \epsilonnd{eqnarray*} for all $k\gammaeq s \gammaeq 0$ and all $\overline{x} \in R_M(s)$. This implies there exist some $\overline{\muu}_0, \overline{\muu}_1 > 0$ such that $E[\rho(k,s)|x(s)=\overline{x}]\lambdaeq \overline{\muu}_0 e^{-\overline{\muu}_1 u(k,s)}$ and, combined with Eq.\ (\muathbb{R}f{eq:bound-u}), we obtain there exist some $K, \muu > 0$ such that \[ E[\rho(k,s)|x(s)=\overline{x}] \lambdaeq K e^{\muu\lambdaeft(w(2s)-\sigmaqrt{k-s}\right)} \qquad \hbox{ for all $k \gammaeq s \gammaeq 0, ~ x(s) \in R_M(s)$}. \] Let $\betaeta(s) = Ke^{\muu w(2s)}$. Note that $\betaeta(\cdot)$ is an increasing function since $w(\cdot)$ is an increasing function.
To complete the proof we need to show that $\betaeta$ satisfies the condition stipulated in Eq.\ (\muathbb{R}f{eq:def beta}). From Eq.\ (\muathbb{R}f{eq:limit-w2}), with $q=1$, we obtain that \[\lambdaim_{s \tauo \infty, s \notin \muathbb{N}} \frac{w(s)}{\lambdaog(s)} = 0.\] Note that since $w(\cdot)$ is a continuous function, the limit above also applies over the integers, i.e., $\lambdaim_{s \tauo \infty} \frac{w(s)}{\lambdaog(s)} = 0$. Since $\lambdaim_{s \tauo \infty}\lambdaog(s) = \infty$ and $\betaeta(s) = Ke^{\muu w(2s)}$, for any $q>0$, we have \[0 = \lambdaim_{s \tauo \infty} \frac{w(2s)}{\lambdaog(2s)} = \lambdaim_{s \tauo \infty} \frac{\lambdaog(K) + \muu w(2s)}{q(-\lambdaog(2) + \lambdaog(2s))} = \lambdaim_{s \tauo \infty} \frac{\lambdaog(\betaeta(s))}{\lambdaog(s^q)}. \] Let $S(q)$ be a scalar such that $\frac{\lambdaog(\betaeta(s))}{\lambdaog(s^q)} \lambdaeq 1$ for all $s \gammaeq S(q)$. We thus obtain that $\betaeta(s) \lambdaeq s^q$ for all $s\gammaeq S(q)$, completing the proof of the proposition. \epsilonnd{proof} The above proposition yields the desired contraction of the disagreement metric $\rho$, but it assumes there exists some $M>0$ such that $\|e_i(k)\|\lambdaeq M\alphalpha(k)$ for all $i \in \muathcal{M}$ and $k \in \muathbb{N}$. In settings where we do not have a guarantee that this assumption holds, we use the proposition below. Proposition \muathbb{R}f{prop:compact-contraction} instead requires that the sets $X_i$ be compact for each agent $i$. With compact feasible sets, the contraction bound on the disagreement metric follows not from the prior analysis in this paper, but from the analysis of information exchange as if the link activations were independent across time. \betaegin{proposition} Let Assumptions \muathbb{R}f{connectivity}, \muathbb{R}f{ass.stoch_weights} and \muathbb{R}f{ass.self_conf} hold. Assume also that the sets $X_i$ are compact for all $i \in \muathcal{M}$. Then, there exist scalars $\kappaappa, \muu > 0$ such that for all $\overline{x} \in \Psirod_{i \in \muathcal{M}} X_i$, \betaegin{equation} E[\rho(k,s)|x(s)=\overline{x}] \lambdaeq \kappaappa e^{-\muu(k-s)}\quad\mubox{for all } k\gammaeq s \gammaeq 0.
\lambdaabel{eq:compact-contraction}\epsilonnd{equation}\lambdaabel{prop:compact-contraction} \epsilonnd{proposition} \betaegin{proof} From Assumption \muathbb{R}f{connectivity}, we have that there exists a set of edges $\muathcal{E}$ of the strongly connected graph $(\muathcal{M}, \muathcal{E})$ such that for all $(j,i) \in \muathcal{E}$, all $k\gammaeq 0$ and all $\overline{x}\in\muathbb{R}^{m \tauimes n}$, \[ P(a_{ij}(k)\gammaeq\gammaamma | x(k)=\overline{x}) \gammaeq \muin\lambdaeft\{\deltaelta,\frac{K}{\|\overline{x}_i-\overline{x}_j\|^C}\right\}.\] The function $\muin\lambdaeft\{\deltaelta,\frac{K}{\|\overline{x}_i-\overline{x}_j\|^C}\right\}$ is continuous and, therefore, it attains its optimum when minimized over the compact set $\Psirod_{i\in\muathcal{M}} X_i$, i.e., \[\inf_{\overline{x} \in \Psirod_{i\in\muathcal{M}} X_i} \muin\lambdaeft\{\deltaelta,\frac{K}{\|\overline{x}_i-\overline{x}_j\|^C}\right\} = \muin_{\overline{x} \in \Psirod_{i\in\muathcal{M}} X_i} \muin\lambdaeft\{\deltaelta,\frac{K}{\|\overline{x}_i-\overline{x}_j\|^C}\right\}.\] Since the function $\muin\lambdaeft\{\deltaelta,\frac{K}{\|\overline{x}_i-\overline{x}_j\|^C}\right\}$ is strictly positive for any $\overline{x} \in \muathbb{R}^{m \tauimes n}$, we obtain that there exists some positive $\epsilonpsilon$ such that \[\epsilonpsilon = \inf_{\overline{x} \in \Psirod_{i\in\muathcal{M}} X_i} \muin\lambdaeft\{\deltaelta,\frac{K}{\|\overline{x}_i-\overline{x}_j\|^C}\right\} > 0.\] Hence, for all $(j,i) \in \muathcal{E}$, all $k\gammaeq 0$ and all $\overline{x}\in\Psirod_{i\in\muathcal{M}} X_i$, \betaegin{equation}\lambdaabel{eq:rep-lemma7} P(a_{ij}(k)\gammaeq\gammaamma | x(k)=\overline{x}) \gammaeq \epsilonpsilon.\epsilonnd{equation} Since there is a uniform bound on the probability of communication for any given edge in $\muathcal{E}$ that is independent of the state $x(k)$, we can use an extended version of Lemma 7 from \cite{Ilanoptim}. In particular, Lemma 7 as stated in \cite{Ilanoptim} requires the communication probability along edges to be independent of $x(k)$ which does not apply here, however, it can be extended with straightforward modifications to hold if the independence assumption were to be replaced by the condition specified in Eq.\ (\muathbb{R}f{eq:rep-lemma7}), implying the desired result. \epsilonnd{proof} \sigmaection{Analysis of the Distributed Subgradient Method}\lambdaabel{sec:optim} In this section, we study the convergence behavior of the agent estimates $\{x_i(k)\}$ generated by the projected multi-agent subgradient algorithm (\muathbb{R}f{eq.update_rule}). We first focus on the case when the constraint sets of agents are the same, i.e., for all $i$, $X_i=X$ for some closed convex nonempty set. In this case, we will prove almost sure consensus among agent estimates and almost sure convergence of agent estimates to an optimal solution when the stepsize sequence converges to 0 sufficiently fast (as stated in Assumption \muathbb{R}f{ass. small steps}). We then consider the case when the constraint sets of the agents $X_i$ are different convex compact sets and present convergence results both in terms of almost sure consensus of agent estimates and almost sure convergence of the agent estimates to an optimal solution under weaker assumptions on the stepsize sequence. We first establish some key relations that hold under general stepsize rules that are used in the analysis of both cases. 
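For reference in the convergence analysis that follows, the sketch below spells out one step of the projected multi-agent subgradient iteration in the form used throughout the proofs: the averaging step $v_i(k)=\sum_{j=1}^m a_{ij}(k)x_j(k)$, the projected subgradient step $x_i(k+1)=P_{X_i}[v_i(k)-\alpha(k)d_i(k)]$, and the resulting projection error $e_i(k)=x_i(k+1)-v_i(k)+\alpha(k)d_i(k)$. This is only an illustrative sketch: the projection operator, subgradient oracle, and weight matrix are supplied by the caller, and the $\ell_1$ objective and unit-ball constraint in the usage lines are placeholders, not part of the paper.
\begin{verbatim}
import numpy as np

def pms_step(x, A, alpha_k, subgrad, proj):
    # x: m-by-n array of current estimates; A: m-by-m weight matrix A(k);
    # subgrad(i, v) returns d_i(k) at v; proj(i, y) returns P_{X_i}[y].
    m = x.shape[0]
    x_next = np.empty_like(x)
    errors = np.empty_like(x)
    for i in range(m):
        v_i = A[i, :] @ x                    # v_i(k) = sum_j a_ij(k) x_j(k)
        d_i = subgrad(i, v_i)                # subgradient of f_i at v_i(k)
        x_next[i] = proj(i, v_i - alpha_k * d_i)
        errors[i] = x_next[i] - v_i + alpha_k * d_i   # e_i(k)
    return x_next, errors

# Placeholder problem: f_i(x) = ||x - c_i||_1, X_i = unit Euclidean ball.
c = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
subgrad = lambda i, v: np.sign(v - c[i])
proj = lambda i, y: y / max(1.0, np.linalg.norm(y))
x = np.zeros((3, 2))
A = np.full((3, 3), 1.0 / 3.0)               # doubly stochastic weights
for k in range(200):
    x, e = pms_step(x, A, alpha_k=1.0 / (k + 2), subgrad=subgrad, proj=proj)
print(x)  # estimates should cluster around a common point in the unit ball
\end{verbatim}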
\sigmaubsection{Preliminary Relations} The first relation measures the ``distance" of the agent estimates to the intersection set $X=\cap_{i=1}^m X_i$. It will be key in studying the convergence behavior of the projection errors and the agent estimates. The properties of projection on a closed convex set, subgradients, and doubly stochasticity of agent weights play an important role in establishing this relation. \betaegin{lemma} Let Assumption \muathbb{R}f{ass.stoch_weights} hold. Let $\{x_i(k)\}$ and $\{e_i(k)\}$ be the sequences generated by the algorithm (\muathbb{R}f{convex-comb})-(\muathbb{R}f{proj-error}). For any $z\in X=\cap_{i=1}^m X_i$, the following hold: \betaegin{itemize} \item[(a)] For all $k\gammae 0$, we have \betaegin{eqnarray*}\sigmaum_{i=1}^m\|x_i(k+1)-z\|^2 &\lambdae& \sigmaum_{i=1}^m\|x_i(k)-z\|^2 +\alpha^2(k) \sigmaum_{i=1}^m \|d_i(k)\|^2\\ &&\ -2\alpha(k)\sigmaum_{i=1}^m\lambdaeft(d_i(k)'(v_i(k)-z)\right) -\sigmaum_{i=1}^m\|e_i(k)\|^2.\epsilonnd{eqnarray*} \item[(b)] Let also Assumption \muathbb{R}f{ass.bounded_subgrad} hold. For all $k\gammae 0$, we have \betaegin{equation}\sigmaum_{i=1}^m\|x_i(k+1)-z\|^2 \lambdae \sigmaum_{i=1}^m\|x_i(k)-z\|^2 +\alpha^2(k)m L^2 -2\alpha(k) \sigmaum_{i=1}^m (f_i(v_i(k))-f_i(z)).\lambdaabel{fvi-rel}\epsilonnd{equation} Moreover, for all $k\gammae 0$, it also follows that \betaegin{eqnarray} \sigmaum_{j=1}^m\|x_j(k+1)-z\|^2 &\lambdae& \sigmaum_{j=1}^m\|x_j(k)-z\|^2 +\alpha^2(k) mL^2 +2\alpha(k) L\sigmaum_{j=1}^m \|x_j(k)- y(k)\|\cr &&-2\alpha(k)\lambdaeft(f(y(k))-f(z)\right),\lambdaabel{fy-rel} \epsilonnd{eqnarray} \epsilonnd{itemize} \lambdaabel{key-rel} \epsilonnd{lemma} \betaegin{proof} (a)\ Since $x_i(k+1)=P_{X_i} [v_i(k)-\alpha(k) d_i(k)],$ it follows from the property of the projection error $e_i(k)$ in Eq.\ (\muathbb{R}f{projerror-feasdir}) that for any $z\in X,$ \[\|x_i(k+1)-z\|^2\lambdae \|v_i(k)-\alpha(k) d_i(k)-z\|^2-\|e_i(k)\|^2.\] By expanding the term $\|v_i(k)-\alpha(k) d_i(k)-z\|^2$, we obtain \[\|v_i(k)-\alpha(k) d_i(k)-z\|^2=\|v_i(k)-z\|^2 +\alpha^2(k) \|d_i(k)\|^2 -2\alpha(k) d_i(k)'(v_i(k)-z).\] Since $v_i(k)=\sigmaum_{j=1}^m a_{ij}(k) x_j(k)$, using the convexity of the norm square function and the stochasticity of the weights $a_{ij}(k)$, $j=1, \lambdadots,m$, it follows that \[\|v_i(k)-z\|^2\lambdae \sigmaum_{j=1}^m a_{ij}(k) \|x_j(k)-z\|^2.\] Combining the preceding relations, we obtain \betaegin{eqnarray*} \|x_i(k+1)-z\|^2 &\lambdae& \sigmaum_{j=1}^m a_{ij}(k) \|x_j(k)-z\|^2+\alpha^2(k) \|d_i(k)\|^2\\ && -2\alpha(k)d_i(k)'(v_i(k)-z) -\|e_i(k)\|^2. \epsilonnd{eqnarray*} By summing the preceding relation over $i=1,\lambdadots,m,$ and using the doubly stochasticity of the weights, i.e., \[\sigmaum_{i=1}^m \sigmaum_{j=1}^m a_{ij}(k) \|x_j(k)-z\|^2 =\sigmaum_{j=1}^m\lambdaeft(\sigmaum_{i=1}^m a_{ij}(k)\right) \|x_j(k)-z\|^2 =\sigmaum_{j=1}^m \|x_j(k)-z\|^2,\] we obtain the desired result. \vskip .5pc \noindent (b)\ Since $d_i(k)$ is a subgradient of $f_i(x)$ at $x=v_i(k)$, we have \[d_i(k)'(v_i(k)-z)\gammae f_i(v_i(k))-f_i(z).\] Combining this with the inequality in part (a), using subgradient boundedness and dropping the nonpositive projection error term on the right handside, we obtain \[\sigmaum_{i=1}^m\|x_i(k+1)-z\|^2 \lambdae \sigmaum_{i=1}^m\|x_i(k)-z\|^2 +\alpha^2(k)m L^2 -2\alpha(k) \sigmaum_{i=1}^m (f_i(v_i(k))-f_i(z)),\] proving the first claim. 
This relation implies that \betaegin{eqnarray} \sigmaum_{j=1}^m\|x_j(k+1)-z\|^2 &\lambdae& \sigmaum_{j=1}^m\|x_j(k)-z\|^2 +\alpha^2(k) mL^2 -2\alpha(k)\sigmaum_{i=1}^m\lambdaeft(f_i(v_i(k))-f_i(y(k))\right)\cr &&-2\alpha(k)\lambdaeft(f(y(k))-f(z)\right).\qquad \lambdaabel{eqn:mid} \epsilonnd{eqnarray} In view of the subgradient boundedness and the stochasticity of the weights, it follows \[|f_i(v_i(k))-f_i(y(k))|\lambdae L\|v_i(k)-y(k)\| \lambdae L \sigmaum_{j=1}^m a_{ij}(k) \|x_j(k)-y(k)\|,\] implying, by the doubly stochasticity of the weights, that \[\sigmaum_{i=1}^m\lambdaeft|f_i(v_i(k))-f_i(y(k))\right|\lambdae L \sigmaum_{j=1}^m\lambdaeft(\sigmaum_{i=1}^m a_{ij}(k)\right) \|x_j(k)-y(k)\| = L \sigmaum_{j=1}^m \|x_j(k)-y(k)\|.\] By using this in relation (\muathbb{R}f{eqn:mid}), we see that for any $z\in X$, and all $i$ and $k,$ \betaegin{eqnarray*} \sigmaum_{j=1}^m\|x_j(k+1)-z\|^2 &\lambdae& \sigmaum_{j=1}^m\|x_j(k)-z\|^2 +\alpha^2(k) mL^2 +2\alpha(k) L\sigmaum_{j=1}^m \|x_j(k)- y(k)\|\cr &&-2\alpha(k)\lambdaeft(f(y(k))-f(z)\right). \epsilonnd{eqnarray*} \epsilonnd{proof} Our goal is to show that the agent disagreements $\|x_i(k)-x_j(k)\|$ converge to zero. To measure the agent disagreements $\|x_i(k)-x_j(k)\|$, we consider their average $\frac{1}{m}\sigmaum_{j=1}^m x_j(k)$, and consider the disagreement of agent estimates with respect to this average. In particular, we define \betaegin{equation}y(k)=\frac{1}{m}\sigmaum_{j=1}^m x_j(k)\qquad\hbox{for all }k.\lambdaabel{yxaver}\epsilonnd{equation} We have \[y(k+1)=\frac{1}{m}\sigmaum_{i=1}^m v_i(k) -\frac{\alpha(k) }{m}\sigmaum_{i=1}^m d_i(k) +\frac{1}{m}\sigmaum_{i=1}^m e_i(k).\] When the weights are doubly stochastic, since $v_i(k)=\sigmaum_{j=1}^m a_{ij}(k) x_j(k)$, it follows that \betaegin{equation} y(k+1)=y(k) -\frac{\alpha(k) }{m}\sigmaum_{i=1}^m d_i(k) +\frac{1}{m}\sigmaum_{i=1}^m e_i(k). \lambdaabel{y_evol} \epsilonnd{equation} Under our assumptions, the next lemma provides an upper bound on the agent disagreements, measured by $\mathcal{B}ig\{\|x_i(k)-y(k)\|\mathcal{B}ig\}$ for all $i$, in terms of the subgradient bounds, projection errors and the disagreement metric $\rho(k,s)$ defined in Eq.\ (\muathbb{R}f{eq:def-over-rho}). \betaegin{lemma} Let Assumptions \muathbb{R}f{ass.bounded_subgrad} and \muathbb{R}f{ass.stoch_weights} hold. Let $\{x_i(k)\}$ be the sequence generated by the algorithm (\muathbb{R}f{convex-comb})-(\muathbb{R}f{proj-error}), and $\{y(k)\}$ be defined in Eq.\ (\muathbb{R}f{y_evol}). Then, for all $i$ and $k\gammae 2$, an upper bound on $\|x_i(k)-y(k)\|$ is given by \betaegin{eqnarray*} \|x_i(k)-y(k)\| &\lambdae & m\rho(k-1,0)\sigmaum_{j=1}^m \|x_j(0)\| + mL \sigmaum_{r=0}^{k-2} \rho(k-1,r+1)\alpha(r) + 2\alpha(k-1) L\cr && + \sigmaum_{r=0}^{k-2} \rho(k-1,r+1)\sigmaum_{j=1}^m \| e_j(r)\| + \|e_i(k-1)\|+\frac{1}{m}\sigmaum_{j=1}^m \|e_j(k-1)\|. 
\epsilonnd{eqnarray*}\lambdaabel{est-xkyk} \epsilonnd{lemma} \betaegin{proof} From Eq.\ (\muathbb{R}f{evolution-est}), we have for all $i$ and $k\gammae s$, \betaegin{eqnarray*}x_i(k+1) = \sigmaum_{j=1}^m[\mathbb{P}hi(k,s)]_{ij}x_j(s) &-& \sigmaum_{r=s}^{k-1}\sigmaum_{j=1}^m [\mathbb{P}hi(k,r+1)]_{ij}\alphalpha(r)d_j(r) - \alphalpha(k)d_i(k)\nonumber\\ & +& \sigmaum_{r=s}^{k-1} \sigmaum_{j=1}^m [\mathbb{P}hi(k,r+1)]_{ij} e_j(r) + e_i(k).\epsilonnd{eqnarray*} Similarly, using relation (\muathbb{R}f{y_evol}), we can write for $y(k+1)$ and for all $k$ and $s$ with $k\gammae s,$ \betaegin{eqnarray*} y(k+1)= y(s)- \frac{1}{m}\sigmaum_{r=s}^{k-1} \sigmaum_{j=1}^m \alpha(r) d_j(r) - \frac{\alpha(k)}{m}\sigmaum_{i=1}^m d_i(k) + \frac{1}{m}\sigmaum_{r=s}^{k-1} \sigmaum_{j=1}^m e_j(r) + \frac{1}{m}\sigmaum_{j=1}^m e_j(k). \epsilonnd{eqnarray*} Therefore, since $y(s)=\frac{1}{m} \sigmaum_{j=1}^m x_j(s)$, we have for $s=0,$ \betaegin{eqnarray*} \|x_i(k)-y(k)\| &\lambdae & \sigmaum_{j=1}^m \lambdaeft|[\mathbb{P}hi(k-1,0)]_{ij} -\frac{1}{m}\right|\,\|x_j(0)\| \cr &&+ \sigmaum_{r=0}^{k-2} \sigmaum_{j=1}^m \lambdaeft|[\mathbb{P}hi(k-1,r+1)]_{ij}- \frac{1}{m}\right| \, \alpha(r) \| d_j(r)\|\cr &&+ \alpha(k-1)\|d_i(k-1)\| +\frac{\alpha(k-1)}{m}\sigmaum_{j=1}^m \| d_j(k-1)\|\cr && + \sigmaum_{r=0}^{k-2} \sigmaum_{j=1}^m \lambdaeft|[\mathbb{P}hi(k-1,r+1)]_{ij}-\frac{1}{m}\right| \| e_j(r)\| \cr &&+ \|e_i(k-1)\|+\frac{1}{m}\sigmaum_{j=1}^m \|e_j(k-1)\|. \epsilonnd{eqnarray*} Using the metric $\rho(k,s) = \muax_{i,j \in {\cal M}} \lambdaeft|[\mathbb{P}hi(k,s)]_{ij}-{1\over m}\right|$ for $k\gammae s\gammae 0$ [cf.\ Eq.\ (\muathbb{R}f{eq:def-over-rho})], and the subgradient boundedness, we obtain for all $i$ and $k\gammae2,$ \betaegin{eqnarray*} \|x_i(k)-y(k)\| &\lambdae & m\rho(k-1,0)\sigmaum_{j=1}^m \|x_j(0)\| + mL \sigmaum_{r=0}^{k-2} \rho(k-1,r+1)\alpha(r) + 2\alpha(k-1) L\cr && + \sigmaum_{r=0}^{k-2} \rho(k-1,r+1)\sigmaum_{j=1}^m \| e_j(r)\| + \|e_i(k-1)\|+\frac{1}{m}\sigmaum_{j=1}^m \|e_j(k-1)\|, \epsilonnd{eqnarray*} completing the proof. \epsilonnd{proof} In proving our convergence results, we will often use the following result on the infinite summability of products of positive scalar sequences with certain properties. This result was proven for geometric sequences in \cite{constconsoptim}. Here we extend it for general summable sequences. \betaegin{lemma} \lambdaabel{lemma:seq} Let $\{\betaeta_l\}$ and $\{\gammaamma_k\}$ be positive scalar sequences, such that $\sigmaum_{l=0}^{\infty}\betaeta_l<\infty$ and $\lambdaim_{k\tauo\infty}\gammaamma_k=0.$ Then, \[\lambdaim_{k\tauo\infty}\sigmaum_{\epsilonll=0}^k \betaeta_{k-\epsilonll}\gammaamma_\epsilonll=0.\] In addition, if $\sigmaum_{k=0}^\infty \gammaamma_k<\infty,$ then \[\sigmaum_{k=0}^\infty \sigmaum_{\epsilonll=0}^k \betaeta_{k-\epsilonll}\gammaamma_\epsilonll<\infty.\] \epsilonnd{lemma} \betaegin{proof} Let $\epsilonpsilon>0$ be arbitrary. Since $\gammaamma_k\tauo0$, there is an index $K$ such that $\gammaamma_k\lambdae\epsilonpsilon$ for all $k\gammae K$. 
For all $k\gammae K+1$, we have \[\sigmaum_{\epsilonll=0}^k\betaeta_{k-\epsilonll}\gammaamma_\epsilonll= \sigmaum_{\epsilonll=0}^K\betaeta_{k-\epsilonll}\gammaamma_\epsilonll + \sigmaum_{\epsilonll=K+1}^k\betaeta_{k-\epsilonll}\gammaamma_\epsilonll \lambdae \muax_{0\lambdae t\lambdae K}\gammaamma_t \sigmaum_{\epsilonll=0}^K\betaeta_{k-\epsilonll} +\epsilonpsilon\sigmaum_{\epsilonll=K+1}^k\betaeta_{k-\epsilonll}.\] Since $\sigmaum_{l=0}^{\infty}\betaeta_l<\infty$, there exists $B>0$ such that $\sigmaum_{\epsilonll=K+1}^k\betaeta_{k-\epsilonll}=\sigmaum_{\epsilonll=0}^{k-K-1}\betaeta_{\epsilonll}\lambdae B$ for all $k\gammae K+1$. Moreover, since $\sigmaum_{\epsilonll=0}^K\betaeta_{k-\epsilonll}=\sigmaum_{\epsilonll=k-K}^{k}\betaeta_{\epsilonll}$, it follows that for all $k\gammae K+1$, \[\sigmaum_{\epsilonll=0}^k\betaeta_{k-\epsilonll}\gammaamma_\epsilonll \lambdae \muax_{0\lambdae t\lambdae K}\gammaamma_t \sigmaum_{\epsilonll=k-K}^{k}\betaeta_{\epsilonll} +\epsilonpsilon B.\] Therefore, using $\sigmaum_{l=0}^{\infty}\betaeta_l<\infty$, we obtain \[\lambdaimsup_{k\tauo\infty}\sigmaum_{\epsilonll=0}^k\betaeta_{k-\epsilonll}\gammaamma_\epsilonll \lambdae \epsilonpsilon B.\] Since $\epsilonpsilon$ is arbitrary, we conclude that $\lambdaimsup_{k\tauo\infty}\sigmaum_{\epsilonll=0}^k\betaeta_{k-\epsilonll}\gammaamma_\epsilonll=0$, implying \[\lambdaim_{k\tauo\infty}\sigmaum_{\epsilonll=0}^k\betaeta_{k-\epsilonll}\gammaamma_\epsilonll=0.\] Suppose now $\sigmaum_k\gammaamma_k<\infty$. Then, for any integer $M\gammae1$, we have \[\sigmaum_{k=0}^M \lambdaeft(\sigmaum_{\epsilonll=0}^k\betaeta_{k-\epsilonll}\gammaamma_\epsilonll\right) =\sigmaum_{\epsilonll=0}^M\gammaamma_\epsilonll\sigmaum_{t=0}^{M-\epsilonll}\betaeta_t \lambdae \sigmaum_{\epsilonll=0}^M\gammaamma_\epsilonll B,\] implying that \[\sigmaum_{k=0}^\infty \lambdaeft(\sigmaum_{\epsilonll=0}^k\betaeta_{k-\epsilonll}\gammaamma_\epsilonll\right) \lambdae B\sigmaum_{\epsilonll=0}^\infty\gammaamma_\epsilonll<\infty.\] \epsilonnd{proof} \sigmaubsection{Convergence Analysis when $X_i=X$ for all $i$}\lambdaabel{sec:sameconst} In this section, we study the case when agent constraint sets $X_i$ are the same. We study the asymptotic behavior of the agent estimates generated by the algorithm (\muathbb{R}f{eq.update_rule}) using Assumption \muathbb{R}f{ass. small steps} on the stepsize sequence. The next assumption formalizes our condition on the constraint sets. \betaegin{assumption} The constraint sets $X_i$ are the same, i.e., $X_i=X$ for a closed convex set X.\lambdaabel{sameconst} \epsilonnd{assumption} We show first that under this assumption, we can provide an upper bound on the norm of the projection error $\|e_i(k)\|$ as a function of the stepsize $\alpha(k)$ for all $i$ and $k\gammae 0$. \betaegin{lemma} Let Assumptions \muathbb{R}f{ass.bounded_subgrad} and \muathbb{R}f{sameconst} hold. Let $\{e_i(k)\}$ be the projection error defined by (\muathbb{R}f{proj-error}). 
Then, for all $i$ and $k\gammae 0$, the $e_i(k)$ satisfy \[\|e_i(k)\|\lambdae 2 L \alphalpha(k).\]\lambdaabel{bdprojerror} \epsilonnd{lemma} \betaegin{proof} Using the definition of projection error in Eq.\ (\muathbb{R}f{proj-error}), we have \[e_i(k) = x_i(k+1) - v_i(k) + \alphalpha(k)d_i(k).\] Taking the norms of both sides and using subgradient boundedness, we obtain \[\|e_i(k)\| \lambdae \|x_i(k+1) - v_i(k)\| + \alphalpha(k)L.\] Since $v_i(k) = \sigmaum_{j=1}^m a_{ij}(k) x_j(k)$, the weight vector $a_i(k)$ is stochastic, and $x_j(k)\in X_j=X$ (cf.\ Assumption \muathbb{R}f{sameconst}), it follows that $v_i(k)\in X$ for all $i$. Using the nonexpansive property of projection operation [cf.\ Eq.\ (\muathbb{R}f{nonexpan})] in the preceding relation, we obtain \[\|e_i(k)\| \lambdae \|v_i(k)-\alphalpha(k)d_i(k) - v_i(k)\| + \alphalpha(k)L \lambdae 2\alphalpha(k)L,\] completing the proof. \epsilonnd{proof} This lemma shows that the projection errors are bounded by the scaled stepsize sequence under Assumption \muathbb{R}f{sameconst}. Using this fact and an additional assumption on the stepsize sequence, we next show that the expected value of the sequences $\{\|x_i(k)-y(k)\|\}$ converge to zero for all $i$, thus establishing mean consensus among the agents in the limit. The proof relies on the bound on the expected disagreement metric $\rho(k,s)$ established in Proposition \muathbb{R}f{prop:contraction}. The mean consensus result also immediately implies that the agent estimates reach almost sure consensus along a particular subsequence. \betaegin{proposition} Let Assumptions \muathbb{R}f{ass.bounded_subgrad}, \muathbb{R}f{connectivity}, \muathbb{R}f{ass.stoch_weights}, \muathbb{R}f{ass.self_conf}, and \muathbb{R}f{sameconst} hold. Assume also that the stepsize sequence $\{\alpha(k)\}$ satisfies Assumption \muathbb{R}f{ass. small steps}. Let $\{x_i(k)\}$ be the sequence generated by the algorithm (\muathbb{R}f{convex-comb})-(\muathbb{R}f{proj-error}), and $\{y(k)\}$ be defined in Eq.\ (\muathbb{R}f{y_evol}). Then, for all $i$, we have \[\lambdaim_{k\tauo \infty} E[\|x_i(k)-y(k)\|] = 0,\quad \hbox{and}\]\[\lambdaiminf_{k\tauo \infty} \|x_i(k)-y(k)\| = 0\qquad \hbox{with probability one.}\] \lambdaabel{mean-consensus-sameset} \epsilonnd{proposition} \betaegin{proof} From Lemma \muathbb{R}f{est-xkyk}, we have the following for all $i$ and $k\gammae 2$, \betaegin{eqnarray*} \|x_i(k)-y(k)\| &\lambdae & m\rho(k-1,0)\sigmaum_{j=1}^m \|x_j(0)\| + mL \sigmaum_{r=0}^{k-2} \rho(k-1,r+1)\alpha(r) + 2\alpha(k-1) L\cr && + \sigmaum_{r=0}^{k-2} \rho(k-1,r+1)\sigmaum_{j=1}^m \| e_j(r)\| + \|e_i(k-1)\|+\frac{1}{m}\sigmaum_{j=1}^m \|e_j(k-1)\|. \epsilonnd{eqnarray*} Using the upper bound on the projection error from Lemma \muathbb{R}f{bdprojerror}, $\|e_i(k)\|\lambdae 2\alphalpha(k)L$ for all $i$ and $k$, this can be rewritten as \betaegin{eqnarray} \|x_i(k)-y(k)\| \lambdae m\rho(k-1,0)\sigmaum_{j=1}^m \|x_j(0)\| &+& 3mL \sigmaum_{r=0}^{k-2} \rho(k-1,r+1)\alpha(r) \cr &+& 6\alpha(k-1) L.\lambdaabel{bdcont} \epsilonnd{eqnarray} Under Assumption \muathbb{R}f{ass. 
small steps} on the stepsize sequence, Proposition \muathbb{R}f{prop:contraction} implies the following bound for the disagreement metric $\rho(k,s)$ (Lemma \muathbb{R}f{bdprojerror} ensures that the projection errors satisfy $\|e_i(k)\|\lambdae 2L\alphalpha(k)$, so the proposition applies with $M=2L$ and, by Lemma \muathbb{R}f{lem.max_distance}, $x(s)\in R_{2L}(s)$ with probability one): for all $k\gammae s\gammae 0$, \[E[\rho(k,s)]\lambdae \betaeta(s)e^{-\muu \sigmaqrt{k-s}},\] where $\muu$ is a positive scalar and $\betaeta(s)$ is an increasing sequence such that \betaegin{equation}\betaeta(s)\lambdaeq s^q \qquad \hbox{ for all $q > 0$ and all $s \gammaeq S(q)$},\lambdaabel{betabd}\epsilonnd{equation} for some integer $S(q)$, i.e., for all $q>0$, $\betaeta(s)$ is bounded by a polynomial $s^q$ for sufficiently large $s$ (where the threshold on $s$, $S(q)$, depends on $q$). Taking the expectation in Eq.\ (\muathbb{R}f{bdcont}) and using the preceding estimate on $\rho(k,s)$, we obtain \betaegin{eqnarray*} E[\|x_i(k)-y(k)\|] \lambdae m\betaeta(0)e^{-\muu\sigmaqrt{k-1}}\sigmaum_{j=1}^m \|x_j(0)\| &+& 3mL \sigmaum_{r=0}^{k-2} \betaeta(r+1)e^{-\muu \sigmaqrt{k-r-2}}\alpha(r)\\ &+& 6\alpha(k-1) L. \epsilonnd{eqnarray*} We can bound $\betaeta(0)$ by $\betaeta(0)\lambdae S(1)$ by using Eq.\ (\muathbb{R}f{betabd}) with $q=1$ and the fact that $\betaeta$ is an increasing sequence. Therefore, by taking the limit superior in the preceding relation and using $\alphalpha(k)\tauo 0$ as $k\tauo \infty$, we have for all $i$, \betaegin{eqnarray*} \lambdaimsup_{k\tauo \infty}E[\|x_i(k)-y(k)\|] \lambdae 3mL \lambdaimsup_{k\tauo \infty}\sigmaum_{r=0}^{k-2} \betaeta(r+1)e^{-\muu \sigmaqrt{k-r-2}}\alpha(r). \epsilonnd{eqnarray*} Finally, note that \[\lambdaim_{k\tauo \infty} \betaeta(k+1)\alphalpha(k)\lambdae \lambdaim_{k\tauo \infty} (k+1) \alphalpha(k) = 0,\] where the inequality holds by using Eq.\ (\muathbb{R}f{betabd}) with $q=1$ and the equality holds by Assumption \muathbb{R}f{ass. small steps} on the stepsize. Since we also have $\sigmaum_{k=0}^\infty e^{-\muu \sigmaqrt{k}}<\infty$, Lemma \muathbb{R}f{lemma:seq} applies implying that \[\lambdaim_{k\tauo \infty} \sigmaum_{r=0}^{k-2} \betaeta(r+1)e^{-\muu \sigmaqrt{k-r-2}}\alpha(r)=0.\] Combining the preceding relations, we have \[\lambdaim_{k\tauo \infty}E[\|x_i(k)-y(k)\|]=0.\] Using Fatou's Lemma (which applies since the random variables $\|y(k) - x_i(k)\|$ are nonnegative for all $i$ and $k$), we obtain \[ 0\lambdae E\mathcal{B}ig[\lambdaiminf_{k\tauo \infty}\|y(k) - x_i(k)\|\mathcal{B}ig] \lambdae \lambdaiminf_{k\tauo \infty} E[\|y(k) - x_i(k)\|] \lambdae 0.\] Thus, the nonnegative random variable $\lambdaiminf_{k\tauo \infty}\|y(k) - x_i(k)\|$ has expectation 0, which implies that \[\lambdaiminf_{k\tauo \infty}\|y(k) - x_i(k)\|=0\qquad \hbox{with probability one}.\] \epsilonnd{proof} The preceding proposition shows that the agent estimates reach a consensus in the expected sense. We next show that under Assumption \muathbb{R}f{sameconst}, the agent estimates in fact converge to an almost sure consensus in the limit. We rely on the following standard convergence result for sequences of random variables, which is an immediate consequence of the supermartingale convergence theorem (see Bertsekas and Tsitsiklis \cite{distbook}). \betaegin{lemma}Consider a probability space $(\Omegamega, F,P)$ and let $\{F(k)\}$ be an increasing sequence of $\sigmaigma$-fields contained in $F$.
Let $\{V(k)\}$ and $\{Z(k)\}$ be sequences of nonnegative random variables (with finite expectation) adapted to $\{F(k)\}$ that satisfy \[E[V(k+1)\ |\ F(k)]\lambdae V(k) + Z(k),\] \[\sigmaum_{k=1}^\infty E[Z(k)] <\infty.\] Then, $V(k)$ converges with probability one, as $k\tauo \infty$.\lambdaabel{stocapproxlemma} \epsilonnd{lemma} \betaegin{proposition} Let Assumptions \muathbb{R}f{ass.bounded_subgrad}, \muathbb{R}f{connectivity}, \muathbb{R}f{ass.stoch_weights}, \muathbb{R}f{ass.self_conf}, and \muathbb{R}f{sameconst} hold. Assume also that the stepsize sequence $\{\alpha(k)\}$ satisfies Assumption \muathbb{R}f{ass. small steps}. Let $\{x_i(k)\}$ be the sequence generated by the algorithm (\muathbb{R}f{convex-comb})-(\muathbb{R}f{proj-error}), and $\{y(k)\}$ be defined in Eq.\ (\muathbb{R}f{y_evol}). Then, for all $i$, we have: \betaegin{itemize} \item[(a)]\quad $\sigmaum_{k=2}^\infty \alpha(k)\|x_i(k)-y(k)\| <\infty$ with probability one. \item[(b)]\quad $\lambdaim_{k\tauo \infty} \|x_i(k)-y(k)\| = 0$ with probability one. \epsilonnd{itemize} \lambdaabel{as-consensus} \epsilonnd{proposition} \betaegin{proof} (a)\ Using the upper bound on the projection error from Lemma \muathbb{R}f{bdprojerror}, $\|e_i(k)\|\lambdae 2\alphalpha(k)L$ for all $i$ and $k$, in Lemma \muathbb{R}f{est-xkyk}, we have for all $i$ and $k\gammae 2$, \betaegin{eqnarray*} \|x_i(k)-y(k)\| \lambdae m\rho(k-1,0)\sigmaum_{j=1}^m \|x_j(0)\| + 3mL \sigmaum_{r=0}^{k-2} \rho(k-1,r+1)\alpha(r) + 6\alpha(k-1) L. \epsilonnd{eqnarray*} By multiplying this relation with $\alphalpha(k)$, we obtain \betaegin{eqnarray*} \alphalpha(k)\|x_i(k)-y(k)\| \lambdae m\alpha(k)\rho(k-1,0)\sigmaum_{j=1}^m \|x_j(0)\| &+& 3mL \sigmaum_{r=0}^{k-2} \rho(k-1,r+1)\alpha(k)\alpha(r)\\ &+& 6\alpha(k)\alpha(k-1) L. \epsilonnd{eqnarray*} Taking the expectation and using the estimate from Proposition \muathbb{R}f{prop:contraction}, i.e., \[E[\rho(k,s)]\lambdae \betaeta(s)e^{-\muu \sigmaqrt{k-s}}\qquad \hbox{for all }k\gammae s\gammae 0,\] where $\muu$ is a positive scalar and $\betaeta(s)$ is a increasing sequence such that \betaegin{equation}\betaeta(s)\lambdaeq s^q \qquad \hbox{ for all $q > 0$ and all $s \gammaeq S(q)$},\lambdaabel{as-betabd}\epsilonnd{equation} for some integer $S(q)$, we have \betaegin{eqnarray*} E[\alphalpha(k)\|x_i(k)-y(k)\|] &\lambdae& m\alpha(k)\betaeta(0)e^{-\muu \sigmaqrt{k-1}}\sigmaum_{j=1}^m \|x_j(0)\| \\&& + 3mL \sigmaum_{r=0}^{k-2} \betaeta(r+1)e^{-\muu \sigmaqrt{k-r-2}}\alpha(k)\alpha(r)+ 6\alpha(k)\alpha(k-1) L. \epsilonnd{eqnarray*} Let $\xi(r) = \betaeta(r+1) \alphalpha(r)$ for all $r\gammae 0$. Using the relations $\alpha(k)\xi(r)\lambdae \alpha^2(k)+\xi^2(r)$ and $2\alpha(k)\alpha(k-1)\lambdae \alpha^2(k)+\alpha^2(k-1)$ for any $k$ and $r$, the preceding implies that \betaegin{eqnarray*} E[\alphalpha(k)\|x_i(k)-y(k)\|] &\lambdae& m\alpha(k)\betaeta(0)e^{-\muu \sigmaqrt{k-1}}\sigmaum_{j=1}^m \|x_j(0)\| + 3mL\sigmaum_{r=0}^{k-2} e^{-\muu \sigmaqrt{k-r-2}}\xi^2(r)\\ &&\qquad + 3L\alpha^2(k) \mathcal{B}ig(m \sigmaum_{r=0}^{k-2}e^{-\muu \sigmaqrt{k-r-2}} +1\mathcal{B}ig) +3\alpha^2(k-1) L. 
\epsilonnd{eqnarray*} Summing over $k\gammae 2$, we obtain \betaegin{eqnarray*} \sigmaum_{k=2}^\infty E[\alphalpha(k)\|x_i(k)-y(k)\|] &\lambdae& m\sigmaum_{j=1}^m \|x_j(0)\| \betaeta(0) \sigmaum_{k=2}^\infty \alpha(k)e^{-\muu \sigmaqrt{k-1}} \\ && \quad + 3L\sigmaum_{k=2}^\infty \lambdaeft(\mathcal{B}ig(m \sigmaum_{r=0}^{k-2} e^{-\muu \sigmaqrt{k-r-2}}+1\mathcal{B}ig)\alpha^2(k) + \alpha^2(k-1)\right) \\ && \quad + 3mL\sigmaum_{k=2}^\infty \sigmaum_{r=0}^{k-2}e^{-\muu \sigmaqrt{k-r-2}}\xi^2(r). \epsilonnd{eqnarray*} We next show that the right-hand side of the above inequality is finite: Since $\lambdaim_{k\tauo \infty}\alpha(k)= 0$ (cf.\ Assumption \muathbb{R}f{ass. small steps}), $\betaeta(0)$ is bounded, and $\sigmaum_k e^{-\muu \sigmaqrt{k}}< \infty$, Lemma \muathbb{R}f{lemma:seq} implies that the first term is bounded. The second term is bounded since $\sigmaum_k \alpha^2(k)<\infty$ by Assumption \muathbb{R}f{ass. small steps} and Lemma \muathbb{R}f{stepsizeprop}. Since $\xi(r) = \betaeta(r+1)\alphalpha(r)$, we have for some small $\epsilonpsilon>0$ and all $r$ sufficiently large \[\xi^2(r) = \betaeta^2(r+1)\alphalpha^2(r)\lambdae (r+1)^{2/3} \alphalpha^2(r)\lambdae (r+1)^{2/3} \frac{\epsilonpsilon}{r^2},\] where the first inequality follows using the estimate in Eq.\ (\muathbb{R}f{as-betabd}) with $q=1/3$ and the second inequality follows from Assumption \muathbb{R}f{ass. small steps}. This implies that $\sigmaum_k \xi^2(k)<\infty$, which combined with Lemma \muathbb{R}f{lemma:seq} implies that the third term is also bounded. Hence, we have \[\sigmaum_{k=2}^\infty E[\alpha(k)\|x_i(k)-y(k)\|] <\infty.\] By the monotone convergence theorem, this implies that \[E\mathcal{B}ig[\sigmaum_{k=2}^\infty \alphalpha(k)\|y(k) - x_i(k)\|\mathcal{B}ig]<\infty,\] and therefore \[\sigmaum_{k=2}^\infty \alphalpha(k)\|y(k) - x_i(k)\|<\infty \qquad \hbox{with probability }1,\] concluding the proof of this part. \vskip .5pc \noindent (b)\ Using the iterations (\muathbb{R}f{subgradient-step}) and (\muathbb{R}f{y_evol}), we obtain for all $k\gammae 1$ and $i$, \betaegin{eqnarray*} y(k+1)-x_i(k+1) = \mathcal{B}ig(y(k)-\sigmaum_{j=1}^m a_{ij}(k) x_j(k)\mathcal{B}ig)& -& \alphalpha(k)\mathcal{B}ig({1\over m} \sigmaum_{j=1}^m d_j(k) -d_i(k)\mathcal{B}ig)\\& +& \mathcal{B}ig({1\over m} \sigmaum_{j=1}^m e_j(k) - e_i(k)\mathcal{B}ig). \epsilonnd{eqnarray*} By the stochasticity of the weights $a_{ij}(k)$ and the subgradient boundedness, this implies that \[\|y(k+1)-x_i(k+1)\| \lambdae \sigmaum_{j=1}^m a_{ij}(k)\|y(k)-x_j(k)\| + 2L\alphalpha(k) + \|e_i(k)\| + {1\over m}\sigmaum_{j=1}^m \|e_j(k)\|.\] Using the bound on the projection error from Lemma \muathbb{R}f{bdprojerror}, we can simplify this relation as \[\|y(k+1)-x_i(k+1)\| \lambdae \sigmaum_{j=1}^m a_{ij}(k)\|y(k)-x_j(k)\| + 6L\alphalpha(k).\] Taking the square of both sides and using the convexity of the squared-norm function $\|\cdot\|^2$, this yields \[\|y(k+1)-x_i(k+1)\|^2\lambdae \sigmaum_{j=1}^m a_{ij}(k)\|y(k)-x_j(k)\|^2 + 12L \alphalpha(k) \sigmaum_{j=1}^m a_{ij}(k)\|y(k)-x_j(k)\| + 36L^2 \alphalpha(k)^2.\] Summing over all $i$ and using the doubly stochasticity of the weights $a_{ij}(k)$, we have for all $k\gammae 1$, \[\sigmaum_{i=1}^m\|y(k+1)-x_i(k+1)\|^2\lambdae \sigmaum_{i=1}^m \|y(k)-x_i(k)\|^2 + 12L \alphalpha(k) \sigmaum_{i=1}^m \|y(k)-x_i(k)\| + 36L^2 m \alphalpha(k)^2.\] By part (a) of this proposition, we have $\sigmaum_{k=1}^\infty \alphalpha(k)\|y(k) - x_i(k)\|<\infty$ with probability one.
Since, we also have $\sigmaum_k \alphalpha^2(k) <\infty$ (cf.\ Lemma \muathbb{R}f{stepsizeprop}), Lemma \muathbb{R}f{stocapproxlemma} applies and implies that $\sigmaum_{i=1}^m\|y(k)-x_i(k)\|^2$ converges with probability one, as $k\tauo \infty$. By Proposition \muathbb{R}f{mean-consensus-sameset}, we have \[\lambdaiminf_{k\tauo \infty} \|x_i(k)-y(k)\| = 0\qquad \hbox{with probability one.}\] Since $\sigmaum_{i=1}^m\|y(k)-x_i(k)\|^2$ converges with probability one, this implies that for all $i$, \[\lambdaim_{k\tauo \infty}\|x_i(k)-y(k)\|=0\qquad \hbox{with probability one},\] completing the proof. \epsilonnd{proof} We next present our main convergence result under Assumption \muathbb{R}f{ass. small steps} on the stepsize and Assumption \muathbb{R}f{sameconst} on the constraint sets. \betaegin{theorem} Let Assumptions \muathbb{R}f{ass.bounded_subgrad}, \muathbb{R}f{connectivity}, \muathbb{R}f{ass.stoch_weights}, \muathbb{R}f{ass.self_conf} and \muathbb{R}f{sameconst} hold. Assume also that the stepsize sequence $\{\alpha(k)\}$ satisfies $\sigmaum_{k=0}^\infty \alphalpha(k)=\infty$ and Assumption \muathbb{R}f{ass. small steps}. Let $\{x_i(k)\}$ be the sequence generated by the algorithm (\muathbb{R}f{convex-comb})-(\muathbb{R}f{proj-error}). Then, there exists an optimal solution $x^*\in X^*$ such that for all $i$ \[\lambdaim_{k\tauo \infty} x_i(k) = x^*\qquad \hbox{with probability one}.\] \epsilonnd{theorem} \betaegin{proof} From Lemma \muathbb{R}f{key-rel}(b), we have for some $z^*\in X^*$ (i.e., $f(z^*)=f^*$), \betaegin{eqnarray} \sigmaum_{j=1}^m\|x_j(k+1)-z^*\|^2 &\lambdae& \sigmaum_{j=1}^m\|x_j(k)-z^*\|^2 +\alpha^2(k) mL^2 +2\alpha(k) L\sigmaum_{j=1}^m \|x_j(k)- y(k)\|\cr &&-2\alpha(k)\lambdaeft(f(y(k))-f^*\right),\lambdaabel{iterates-rel} \epsilonnd{eqnarray} [see Eq.\ (\muathbb{R}f{fy-rel})]. Rearranging the terms and summing these relations over $k=0,\lambdadots,K$, we obtain \betaegin{eqnarray*} 2 \sigmaum_{k=0}^K \alpha(k)\lambdaeft(f(y(k))-f^*\right) &\lambdae& \sigmaum_{j=1}^m\|x_j(0)-z^*\|^2 - \sigmaum_{j=1}^m\|x_j(K+1)-z^*\|^2 \\&&\ +mL^2 \sigmaum_{k=0}^K \alpha^2(k) +2L \sigmaum_{k=0}^K \alpha(k)\sigmaum_{j=1}^m \|x_j(k)- y(k)\|. \epsilonnd{eqnarray*} By letting $K\tauo \infty$ in this relation and using $\sigmaum_{k=0}^\infty \alphalpha^2(k)<\infty$ (cf.\ Lemma \muathbb{R}f{stepsizeprop}) and $\sigmaum_{k=0}^\infty \alpha(k) \sigmaum_{j=1}^m \|x_j(k)-y(k)\|<\infty$ with probability one, we obtain \[ \sigmaum_{k=0}^K \alpha(k)\lambdaeft(f(y(k))-f^*\right) <\infty\qquad \hbox{with probability one}.\] Since $x_i(k)\in X$ for all $i$, we have $y(k)\in X$ [cf.\ Eq.\ (\muathbb{R}f{yxaver})] and therefore $f(y(k))\gammae f^*$ for all $k$. Combined with the assumption $\sigmaum_{k=0}^\infty \alpha(k)=\infty$, the preceding relation implies \betaegin{equation}\lambdaiminf_{k\tauo \infty} f(y(k))=f^*.\lambdaabel{yvalue}\epsilonnd{equation} By dropping the nonnegative term $2\alpha(k)\lambdaeft(f(y(k))-f^*\right)$ in Eq.\ (\muathbb{R}f{iterates-rel}), we have \betaegin{eqnarray} \sigmaum_{j=1}^m\|x_j(k+1)-z^*\|^2 \lambdae\sigmaum_{j=1}^m\|x_j(k)-z^*\|^2 +\alpha^2(k) mL^2 +2\alpha(k) L\sigmaum_{j=1}^m \|x_j(k)- y(k)\|. \epsilonnd{eqnarray} Since $\sigmaum_{k=0}^\infty \alpha^2(k)<\infty$ and $\sigmaum_{k=0}^\infty \alpha(k) \sigmaum_{j=1}^m \|x_j(k)-y(k)\|<\infty$ with probability one, Lemma \muathbb{R}f{stocapproxlemma} applies and implies that $\sigmaum_{j=1}^m\|x_j(k)-z^*\|^2$ is a convergent sequence with probability one for all $z^*\in X^*$. 
By Proposition \muathbb{R}f{as-consensus}(b), we have $\lambdaim_{k\tauo \infty} \|x_i(k)-y(k)\|=0$ with probability one, and therefore the sequence $\|y(k)-z^*\|$ is also convergent. Since $y(k)$ is bounded, it must have a limit point. By Eq.\ (\muathbb{R}f{yvalue}) and the continuity of $f$ (due to convexity of $f$ over $\muathbb{R}^n$), this implies that one of the limit points of $\{y(k)\}$ must belong to $X^*$; denote this limit point by $x^*$. Since the sequence $\{\|y(k)-x^*\|\}$ is convergent, it follows that $y(k)$ can have a unique limit point, i.e., $\lambdaim_{k\tauo \infty} y(k)=x^*$ with probability one. This and $\lambdaim_{k\tauo\infty}\|x_i(k)-y(k)\|=0$ with probability one imply that each of the sequences $\{x_i(k)\}$ converges to the same $x^*\in X^*$ with probability one. \epsilonnd{proof} \sigmaubsection{Convergence Analysis for Different Constraint Sets}\lambdaabel{sec:difconst} In this section, we provide our convergence analysis for the case when the constraint sets $X_i$ are not necessarily identical. We show that even when the constraint sets of the agents are different, the agent estimates converge almost surely to an optimal solution of problem (\muathbb{R}f{optim-prob}) under some conditions. In particular, we adopt the following assumption on the constraint sets. \betaegin{assumption} \lambdaabel{compactconst} For each $i$, the constraint set $X_i$ is a convex and compact set. \epsilonnd{assumption} An important implication of the preceding assumption is that for each $i$, the subgradients of the function $f_i$ at all points $x\in X_i$ are uniformly bounded, i.e., there exists some scalar $L>0$ such that for all $i$, \[\|d\| \lambdae L \qquad \hbox{for all }d\in \Psiartial f_i(x)\hbox{ and all }x\in X_i.\] Our first lemma shows that with different constraint sets and a stepsize that goes to zero, the projection error $e_i(k)$ converges to zero for all $i$ along all sample paths. \betaegin{lemma} Let Assumptions \muathbb{R}f{ass.stoch_weights} and \muathbb{R}f{compactconst} hold. Let $\{x_i(k)\}$ and $\{e_i(k)\}$ be the sequences generated by the algorithm (\muathbb{R}f{convex-comb})-(\muathbb{R}f{proj-error}). Assume that the stepsize sequence satisfies $\alphalpha(k)\tauo 0$ as $k$ goes to infinity. \betaegin{itemize} \item[(a)] For any $z\in X$, the scalar sequence $\sigmaum_{i=1}^m \|x_i(k)-z\|^2$ is convergent.
\item[(b)] The projection errors $e_i(k)$ converge to zero as $k\tauo \infty$, i.e., \[\lambdaim_{k\tauo \infty} \|e_i(k)\|=0\qquad \hbox{for all }i.\] \epsilonnd{itemize} \lambdaabel{error-behav} \epsilonnd{lemma} \betaegin{proof} (a)\ Using subgradient boundedness and the relation $|d_i(k)'(v_i(k)-z)|\lambdae \|d_i(k)\|\|v_i(k)-z\|$ in part (a) of Lemma \muathbb{R}f{key-rel}, we obtain \[\sigmaum_{i=1}^m\|x_i(k+1)-z\|^2 \lambdae \sigmaum_{i=1}^m\|x_i(k)-z\|^2 +\alpha^2(k) mL^2 +2\alpha(k)L \sigmaum_{i=1}^m \|v_i(k)-z\| -\sigmaum_{i=1}^m\|e_i(k)\|^2.\] Since $v_i(k)=\sigmaum_{j=1}^m a_{ij}(k) x_j(k)$, using doubly stochasticity of the weights, we have $\sigmaum_{i=1}^m \|v_i(k)-z\|\lambdae \sigmaum_{i=1}^m \|x_i(k)-z\|$, which when combined with the preceding yields for any $z\in X$ and all $k\gammae 0$, \betaegin{equation}\sigmaum_{i=1}^m\|x_i(k+1)-z\|^2 \lambdae \sigmaum_{i=1}^m\|x_i(k)-z\|^2 +\alpha^2(k) mL^2 +2\alpha(k)L \sigmaum_{i=1}^m \|x_i(k)-z\| -\sigmaum_{i=1}^m\|e_i(k)\|^2.\lambdaabel{reduced-rel}\epsilonnd{equation} Since $x_i(k)\in X_i$ for all $i$ and $X_i$ is compact (cf.\ Assumption \muathbb{R}f{compactconst}), it follows that the sequence $\{x_i(k)\}$ is bounded for all $i$, and therefore the sequence $\sigmaum_{i=1}^m \|x_i(k)-z\|$ is bounded. Since $\alpha(k)\tauo 0$ as $k\tauo \infty$, by dropping the nonnegative term $\sigmaum_{i=1}^m\|e_i(k)\|^2$ in Eq.\ (\muathbb{R}f{reduced-rel}), it follows that \betaegin{eqnarray*} \lambdaimsup_{k\tauo \infty}\sigmaum_{i=1}^m\|x_i(k+1)-z\|^2 &\lambdae& \lambdaiminf_{k\tauo \infty}\sigmaum_{i=1}^m\|x_i(k)-z\|^2\\ &&\ + \lambdaim_{k\tauo \infty}\lambdaeft(\alpha^2(k) mL^2 +2\alpha(k)L \sigmaum_{i=1}^m \|x_i(k)-z\|\right)\\ & =& \lambdaiminf_{k\tauo \infty}\sigmaum_{i=1}^m\|x_i(k)-z\|^2. \epsilonnd{eqnarray*} Since the sequence $\sigmaum_{i=1}^m \|x_i(k)-z\|^2$ is bounded, the preceding relation implies that the scalar sequence $\sigmaum_{i=1}^m \|x_i(k)-z\|^2$ is convergent. \vskip .5pc \noindent (b)\ From Eq.\ (\muathbb{R}f{reduced-rel}), for any $z\in X$, we have \[\sigmaum_{i=1}^m \|e_i(k)\|^2 \lambdae \sigmaum_{i=1}^m\|x_i(k)-z\|^2 - \sigmaum_{i=1}^m\|x_i(k+1)-z\|^2 +\alpha^2(k) mL^2 +2\alpha(k)L \sigmaum_{i=1}^m \|x_i(k)-z\|.\] Taking the limit superior as $k\tauo \infty$, we obtain \betaegin{eqnarray*} \lambdaimsup_{k\tauo \infty}\sigmaum_{i=1}^m \|e_i(k)\|^2 &\lambdae& \lambdaim_{k\tauo \infty }\lambdaeft(\sigmaum_{i=1}^m\|x_i(k)-z\|^2 - \sigmaum_{i=1}^m\|x_i(k+1)-z\|^2\right)\\ && +\lambdaim_{k\tauo \infty}\mathcal{B}ig(\alpha^2(k) mL^2 +2\alpha(k)L \sigmaum_{i=1}^m \|x_i(k)-z\|\mathcal{B}ig),\epsilonnd{eqnarray*} where the first term on the right handside is equal to zero by the convergence of the sequence $\sigmaum_{i=1}^m\|x_i(k)-z\|^2$, and the second term is equal to zero by $\lambdaim_{k\tauo \infty}\alpha(k)=0$ and the boundedness of the sequence $\sigmaum_{i=1}^m \|x_i(k)-z\|$, completing the proof. \epsilonnd{proof} The preceding lemma shows the interesting result that the projection errors $\|e_i(k)\|$ converge to zero along all sample paths even when the agents have different constraint sets under the compactness conditions of Assumption \muathbb{R}f{compactconst}. Similar to the case with $X_i=X$ for all $i$, we next establish mean consensus among the agent estimates. The proof relies on the convergence of projection errors to zero and the bound on the disagreement metric $\rho(k,s)$ from Proposition \muathbb{R}f{prop:compact-contraction}. 
Note that this result holds for all stepsizes $\alpha(k)$ with $\alpha(k)\tauo 0$ as $k\tauo \infty$. \betaegin{proposition} Let Assumptions \muathbb{R}f{connectivity}, \muathbb{R}f{ass.stoch_weights}, \muathbb{R}f{ass.self_conf} and \muathbb{R}f{compactconst} hold. Let $\{x_i(k)\}$ be the sequence generated by the algorithm (\muathbb{R}f{convex-comb})-(\muathbb{R}f{proj-error}), and $\{y(k)\}$ be defined in Eq.\ (\muathbb{R}f{y_evol}). Assume that the stepsize sequence satisfies $\alphalpha(k)\tauo 0$ as $k$ goes to infinity. Then, for all $i$, we have \[\lambdaim_{k\tauo \infty} E[\|x_i(k)-y(k)\|] = 0,\quad \hbox{and}\]\[\lambdaiminf_{k\tauo \infty} \|x_i(k)-y(k)\| = 0\qquad \hbox{with probability one.}\] \lambdaabel{mean-consensus-diffset} \epsilonnd{proposition} \betaegin{proof} From Lemma \muathbb{R}f{est-xkyk}, we have \betaegin{eqnarray*} \|x_i(k)-y(k)\| &\lambdae & m\rho(k-1,0)\sigmaum_{j=1}^m \|x_j(0)\| + mL \sigmaum_{r=0}^{k-2} \rho(k-1,r+1)\alpha(r) + 2\alpha(k-1) L\cr && + \sigmaum_{r=0}^{k-2} \rho(k-1,r+1)\sigmaum_{j=1}^m \| e_j(r)\| + \|e_i(k-1)\|+\frac{1}{m}\sigmaum_{j=1}^m \|e_j(k-1)\|. \epsilonnd{eqnarray*} Taking the expectation of both sides and using the estimate for the disagreement metric $\rho(k,s)$ from Proposition \muathbb{R}f{prop:compact-contraction}, i.e., for all $k\gammae s\gammae 0$, \[E[\rho(k,s)]\lambdae \kappaappa e^{-\muu(k-s)},\] for some scalars $\kappaappa,\muu>0$, we obtain \betaegin{eqnarray*} E[\|x_i(k)-y(k)\|] &\lambdae & m \kappaappa e^{-\muu(k-1)}\sigmaum_{j=1}^m \|x_j(0)\| + mL\kappaappa \sigmaum_{r=0}^{k-2} e^{-\muu(k-r-2)}\alpha(r) + 2\alpha(k-1) L\cr && + \kappaappa\sigmaum_{r=0}^{k-2} e^{-\muu(k-r-2)}\sigmaum_{j=1}^m \| e_j(r)\| + \|e_i(k-1)\|+\frac{1}{m}\sigmaum_{j=1}^m \|e_j(k-1)\|. \epsilonnd{eqnarray*} By taking the limit superior in the preceding relation and using the facts that $\alphalpha(k)\tauo 0$, and $\|e_i(k)\|\tauo 0$ for all $i$ as $k\tauo \infty$ (cf.\ Lemma \muathbb{R}f{error-behav}(b)), we have for all $i$, \betaegin{eqnarray*} \lambdaimsup_{k\tauo \infty}E[\|x_i(k)-y(k)\|] \lambdae mL \kappaappa \sigmaum_{r=0}^{k-2} e^{-\muu(k-r-2)} \alpha(r) \ + \kappaappa \sigmaum_{r=0}^{k-2} e^{-\muu(k-r-2)}\sigmaum_{j=1}^m \| e_j(r)\|. \epsilonnd{eqnarray*} Finally, since $\sigmaum_{k=0}^\infty e^{-\muu k}< \infty$ and both $\alphalpha(k)\tauo 0$ and $\|e_i(k)\|\tauo 0$ for all $i$, by Lemma \muathbb{R}f{lemma:seq}, we have \[\lambdaim_{k\tauo \infty}\sigmaum_{r=0}^{k-2} e^{-\muu(k-r-2)}\alpha(r)=0\quad \hbox{and} \quad \lambdaim_{k\tauo \infty} \sigmaum_{r=0}^{k-2} e^{-\muu(k-r-2)} \sigmaum_{j=1}^m \| e_j(r)\|=0.\] Combining the preceding two relations, we have \[\lambdaim_{k\tauo \infty}E[\|x_i(k)-y(k)\|]=0.\] The second part of proposition follows using Fatou's Lemma and a similar argument used in the proof of Proposition \muathbb{R}f{mean-consensus-sameset}. \epsilonnd{proof} The next proposition uses the compactness of the constraint sets to strengthen this result and establish almost sure consensus among the agent estimates. \betaegin{proposition} Let Assumptions \muathbb{R}f{connectivity}, \muathbb{R}f{ass.stoch_weights}, \muathbb{R}f{ass.self_conf} and \muathbb{R}f{compactconst} hold. Let $\{x_i(k)\}$ be the sequence generated by the algorithm (\muathbb{R}f{convex-comb})-(\muathbb{R}f{proj-error}), and $\{y(k)\}$ be defined in Eq.\ (\muathbb{R}f{y_evol}). Assume that the stepsize sequence satisfies $\alphalpha(k)\tauo 0$. 
Then, for all $i$, we have \[\lambdaim_{k\tauo \infty} \|x_i(k)-y(k)\| = 0 \qquad \hbox{with probability one}.\] \lambdaabel{difconstasconsensus} \epsilonnd{proposition} \betaegin{proof} Using the iterations (\muathbb{R}f{subgradient-step}) and (\muathbb{R}f{y_evol}), we obtain for all $k\gammae 1$ and $i$, \betaegin{eqnarray*} y(k+1)-x_i(k+1) = \mathcal{B}ig(y(k)-\sigmaum_{j=1}^m a_{ij}(k) x_j(k)\mathcal{B}ig)& -& \alphalpha(k)\mathcal{B}ig({1\over m} \sigmaum_{j=1}^m d_j(k) -d_i(k)\mathcal{B}ig)\\& +& \mathcal{B}ig({1\over m} \sigmaum_{j=1}^m e_j(k) - e_i(k)\mathcal{B}ig). \epsilonnd{eqnarray*} Using the doubly stochasticity of the weights $a_{ij}(k)$ and the subgradient boundedness (which holds by Assumption \muathbb{R}f{compactconst}), this implies that \betaegin{equation}\sigmaum_{i=1}^m \|y(k+1)-x_i(k+1)\| \lambdae \sigmaum_{i=1}^m \|y(k)-x_i(k)\| + 2Lm\alphalpha(k) + 2\sigmaum_{i=1}^m \|e_i(k)\|.\lambdaabel{convseq}\epsilonnd{equation} Since $\alpha(k)\tauo 0$, it follows from Lemma \muathbb{R}f{error-behav}(b) that $\|e_i(k)\|\tauo 0$ for all $i$. Eq.\ (\muathbb{R}f{convseq}) then yields \betaegin{eqnarray*} \lambdaimsup_{k\tauo \infty}\sigmaum_{i=1}^m \|y(k+1)-x_i(k+1)\| &\lambdae& \lambdaiminf_{k\tauo \infty}\sigmaum_{i=1}^m \|y(k)-x_i(k)\| \\ &&\ + \lambdaim_{k\tauo \infty} \mathcal{B}ig(2Lm\alphalpha(k) + 2\sigmaum_{i=1}^m \|e_i(k)\|\mathcal{B}ig)\\ &=& \lambdaiminf_{k\tauo \infty}\sigmaum_{i=1}^m \|y(k)-x_i(k)\|. \epsilonnd{eqnarray*} Using $x_i(k)\in X_i$ for all $i$ and $k$, it follows from Assumption \muathbb{R}f{compactconst} that the sequence $\{x_i(k)\}$ is bounded for all $i$. Therefore, the sequence $\{y(k)\}$ [defined by $y(k) = {1\over m}\sigmaum_{i=1}^m x_i(k)$, see Eq.\ (\muathbb{R}f{yxaver})], and also the sequences $\|y(k)-x_i(k)\|$ are bounded. Combined with the preceding relation, this implies that the scalar sequence $\sigmaum_{i=1}^m \|y(k)-x_i(k)\|$ is convergent. By Proposition \muathbb{R}f{mean-consensus-diffset}, we have \[\lambdaiminf_{k\tauo \infty} \|x_i(k)-y(k)\| = 0\qquad \hbox{with probability one.}\] Since $\sigmaum_{i=1}^m \|y(k)-x_i(k)\|$ converges, this implies that for all $i$, \[\lambdaim_{k\tauo \infty}\|x_i(k)-y(k)\|=0\qquad \hbox{with probability one},\] completing the proof. \epsilonnd{proof} The next theorem states our main convergence result for agent estimates when the constraint sets are different under some assumptions on the stepsize rule. \betaegin{theorem} Let Assumptions \muathbb{R}f{connectivity}, \muathbb{R}f{ass.stoch_weights}, \muathbb{R}f{ass.self_conf} and \muathbb{R}f{compactconst} hold. Let $\{x_i(k)\}$ be the sequence generated by the algorithm (\muathbb{R}f{convex-comb})-(\muathbb{R}f{proj-error}). Assume that the stepsize sequence satisfies $\sigmaum_k\alphalpha(k)=\infty$ and $\sigmaum_k \alphalpha^2(k)<\infty$. 
Then, there exists an optimal solution $x^*\in X^*$ such that for all $i$ \[\lambdaim_{k\tauo \infty} x_i(k) = x^*\qquad \hbox{with probability one}.\] \epsilonnd{theorem} \betaegin{proof} From Lemma \muathbb{R}f{key-rel}(b), we have for some $z^*\in X^*$, \betaegin{equation}\sigmaum_{i=1}^m\|x_i(k+1)-z^*\|^2 \lambdae \sigmaum_{i=1}^m\|x_i(k)-z^*\|^2 +\alpha^2(k) \sigmaum_{i=1}^m \|d_i(k)\|^2 -2\alpha(k) \sigmaum_{i=1}^m (f_i(v_i(k))-f_i(z^*)).\lambdaabel{window-rel}\epsilonnd{equation} We show that the preceding implies that \betaegin{equation}\lambdaiminf_{k\tauo \infty} \sigmaum_{i=1}^m f_i(v_i(k))\lambdae f(z^*)=f^*.\lambdaabel{optvalue}\epsilonnd{equation} To arrive at a contradiction, suppose that $\lambdaiminf_{k\tauo \infty} \sigmaum_{i=1}^m f_i(v_i(k))>f^*$. This implies that there exist some $K$ and $\epsilonpsilon>0$ such that for all $k\gammae K$, we have \[\sigmaum_{i=1}^m f_i(v_i(k))> f^* +\epsilonpsilon.\] Summing the relation (\muathbb{R}f{window-rel}) over a window from $K$ to $N$ with $N>K$, we obtain \[\sigmaum_{i=1}^m\|x_i(N+1)-z^*\|^2 \lambdae \sigmaum_{i=1}^m\|x_i(K)-z^*\|^2 +m L^2\sigmaum_{k=K}^N \alpha^2(k) -2\epsilonpsilon \sigmaum_{k=K}^N\alpha(k).\] Letting $N\tauo \infty$, and using $\sigmaum_k\alphalpha(k)=\infty$ and $\sigmaum_k \alphalpha^2(k)<\infty$, this yields a contradiction and establishes the relation in Eq.\ (\muathbb{R}f{optvalue}). By Proposition \muathbb{R}f{difconstasconsensus}, we have \betaegin{equation}\lambdaim_{k\tauo \infty} \|x_i(k)-y(k)\|=0\qquad \hbox{with probability one}.\lambdaabel{est-cons}\epsilonnd{equation} Since $v_i(k)=\sigmaum_{j=1}^m a_{ij}(k) x_j(k)$, using the stochasticity of the weight vectors $a_i(k)$, this also implies \betaegin{equation}\lambdaim_{k\tauo \infty} \|v_i(k)-y(k)\|\lambdae \lambdaim_{k\tauo \infty} \sigmaum_{j=1}^m a_{ij}(k) \|x_j(k)-y(k)\|=0\qquad \hbox{with probability one}.\lambdaabel{vconv}\epsilonnd{equation} Combining Eqs.\ (\muathbb{R}f{optvalue}) and (\muathbb{R}f{vconv}), we obtain \betaegin{equation}\lambdaiminf_{k\tauo \infty} f(y(k)) \lambdae f^*\qquad \hbox{with probability one}.\lambdaabel{limopt}\epsilonnd{equation} From Lemma \muathbb{R}f{error-behav}(a), the sequence $\{\sigmaum_{i=1}^m\|x_i(k)-z^*\|^2\}$ is convergent for all $z^*\in X^*$. Combined with Eq.\ (\muathbb{R}f{est-cons}), this implies that the sequence $\{\|y(k)-z^*\|\}$ is convergent with probability one. Therefore, $y(k)$ is bounded and it must have a limit point. Moreover, since $x_i(k) \in X_i$ for all $k\gammae 0$ and $X_i$ is a closed set, all limit points of the sequence $\{x_i(k)\}$ must lie in the set $X_i$ for all $i$. In view of Eq.\ (\muathbb{R}f{est-cons}), this implies that all limit points of the sequence $\{y(k)\}$ belong to the set $X$. Hence, from Eq.\ (\muathbb{R}f{limopt}), we have \[\lambdaiminf_{k\tauo \infty} f(y(k)) = f^*\qquad \hbox{with probability one}.\] Using the continuity of $f$ (due to convexity of $f$ over $\muathbb{R}^n$), this implies that one of the limit points of $\{y(k)\}$ must belong to $X^*$; denote this limit point by $x^*$. Since the sequence $\{\|y(k)-x^*\|\}$ is convergent, it follows that $y(k)$ can have a unique limit point, i.e., $\lambdaim_{k\tauo \infty} y(k)=x^*$ with probability one. This and $\lambdaim_{k\tauo\infty}\|x_i(k)-y(k)\|=0$ with probability one imply that each of the sequences $\{x_i(k)\}$ converges to the same $x^*\in X^*$ with probability one.
\epsilonnd{proof} \sigmaection{Conclusions}\lambdaabel{sec:conclusions} We studied distributed algorithms for multi-agent optimization problems over randomly-varying network topologies. We adopted a state-dependent communication model, in which the availability of links in the network is probabilistic with the probability dependent on the agent states. This is a good model for a variety of applications in which the state represents the position of the agents (in sensing and communication settings), or the beliefs of the agents (in social settings) and the distance of the agent states affects the communication and information exchange among the agents. We studied a projected multi-agent subgradient algorithm for this problem and presented a convergence analysis for the agent estimates. The first step of our analysis establishes convergence rate bounds for a disagreement metric among the agent estimates. This bound is time-nonhomogeneous in that it depends on the initial time. Despite this, under the assumption that the stepsize sequence decreases sufficiently fast, we proved that agent estimates converge to an almost sure consensus and also to an optimal point of the global optimization problem under some assumptions on the constraint sets. The framework introduced in this paper suggests a number of interesting further research directions. One future direction is to extend the constrained optimization problem to include both local and global constraints. This can be done using primal algorithms that involve projections, or using primal-dual algorithms where dual variables are used to ensure feasibility with respect to global constraints. Another interesting direction is to consider different probabilistic models for state-dependent communication. Our current model assumes the probability of communication is a continuous function of the $l_2$ norm of agent states. Considering other norms and discontinuous functions of agent states is an important extension which is relevant in a number of engineering and social settings. \betaibliographystyle{amsplain} \betaibliography{distributed} \epsilonnd{document}
\betagin{document} \tildetle{Twisted Modules over Lattice Vertex Algebras} \author{Bojko Bakalov } \address{Department of Mathematics, NCSU, Raleigh, NC 27695, USA} \email{bojko\[email protected]} \author{Victor G.~Kac } \address{Department of Mathematics, MIT, Cambridge, MA 02139, USA} \email{[email protected]} \partialate{February 19, 2004} \title{Twisted Modules over Lattice Vertex Algebras} \partialedicatory{\textit{ Dedicated to Professor Ivan Todorov on the occasion of his 70th birthday. }} \section{Introduction}\lbb{sintro} A vertex algebra is essentially the same as a chiral algebra in conformal field theory \cite{BPZ, G}. Vertex algebras arose naturally in the representation theory of infinite-dimensional Lie algebras and in the construction of the ``moonshine module'' for the Monster finite simple group \cite{B1, FLM}. If $V$ is a vertex algebra and $Gamma$ is a finite group of automorphisms of $V$, the subalgebra $V^Gamma$ of $Gamma$-invariant elements in $V$ is called an \emph{orbifold vertex algebra}. Orbifolds play an important role in string theory; in the physics literature they were introduced in one of the earliest papers on conformal field theory \cite{DVVV, DHVW}. Recently, there have been numerous mathematical papers on orbifolds. All these papers are concerned in some way or another with the problem of describing the representations of $V^Gamma$ in terms of the vertex algebra $V$ and the group $Gamma$. However, the solution is known only in very special cases and it is highly nontrivial. Let $Q$ be an integral lattice. Then one can construct a vertex algebra $V_Q$ called a \emph{lattice vertex algebra} \cite{B1, FLM, K}. If $\barGamma$ is a finite group of isometries of $Q$, its elements can be lifted to automorphisms of $V_Q$. One obtains a group $Gamma\subset\mathcal{A}ut V_Q$, which is a central extension of $\barGamma$. In \cite{BKT}, we construct a collection of $V_Q^Gamma$-modules, and we compute their characters and modular transformations. Some of these modules play an important role in the attempts of a conformal field theory understanding of the fractional quantum Hall effect (see \cite{CGT} and the references therein). The present paper is the first step in the construction of the $V_Q^Gamma$-modules \cite{BKT}. Here we classify the so-called \emph{twisted $V_Q$-modules}; our main result is \thetaref{ttwvqm}. In the case when $Q$ is a root lattice of a simple finite-dimensional Lie algebra and $\sigma$ is an element of its Weyl group, our results agree with those of \cite{KP, KT}. Some of our results were obtained independently in \cite{R}, and some special cases were studied earlier in \cite{D, X}. {\flushleft{\bf{Acknowledgments.}}} We are grateful to Ivan Todorov for many stimulating discussions and for collaboration on \cite{BKT}. We thank the organizers of the Varna Workshop for inviting us to present our results and for the inspiring workshop. We also acknowledge the hospitality of the Erwin Schr\"odinger Institute, where some of this work was done. The first author was supported in part by the Miller Institute for Basic Research in Science. The second author was supported in part by NSF grant DMS-9970007. \section{Preliminaries on Vertex Algebras}\lbb{sprva} In this section we recall the definition of a vertex algebra and some of its properties, following the book~\cite{K}. Below $z,w,\partialots$ denote formal commuting variables. All vector spaces are over the field $\mathbb{C}$ of complex numbers. 
\subsection{Local Fields}\lbb{slocf} Let $V$ be a vector space. A {\em field\/} on $V$ is a formal power series in $z,z^{-1}$ of the form \betagin{equation}\lbb{field} \varphii(z)=\sum_{m{\mathrm{i}}n\mathbb{Z}}\varphii_{(m)} z^{-m-1}, \qquad \varphii_{(m)}{\mathrm{i}}n\End(V), \end{equation} such that \betagin{equation}\lbb{gg} \varphii_{(m)} a = 0 \quad\text{for all }a{\mathrm{i}}n V\text{ and }m{\mathfrak{g}}g0. \end{equation} Note that all fields form a vector space invariant under $\partial_z$. Two fields $\varphii(z)$ and $\psi(z)$ are called {\em local\/} with respect to each other if \betagin{equation}\lbb{lcl} (z-w)^N\bigl[\varphii(z),\psi(w)\bigr]=0 \quad\text{for }N{\mathfrak{g}}g0 \end{equation} as a formal power series in $z,z^{-1},w,w^{-1}$. We will also say that the pair $(\varphii,\psi)$ is local. Obviously, if $(\varphii,\psi)$ is local, then so are $(\psi,\varphii)$ and $(\partial\varphii,\psi)$. The locality \eqref{lcl} is equivalent to the commutator formula \betagin{equation}\lbb{lcl2} \bigl[\varphii(z),\psi(w)\bigr] = \sum_{n{\mathfrak{g}}e0} \bigl(\varphii(w)_{(n)}\psi(w)\bigr) \partial_w^{(n)}\partiale(z-w), \end{equation} where $\partial_w^{(n)} = \partial_w^n/n!$, \betagin{equation}\lbb{del} \partiale(z-w)=\sum_{k{\mathrm{i}}n\mathbb{Z}} z^k w^{-k-1} \end{equation} is the formal $\partiale$-function, and \betagin{equation}\lbb{nprod} \varphii(w)_{(n)}\psi(w) = \Res_z (z-w)^n [\varphii(z),\psi(w)\bigr], \qquad n{\mathfrak{g}}e0, \end{equation} is the $n$th {\em product\/} of the fields $\varphii$, $\psi$. Here, as usual, $\Res_z$ denotes the coefficient at $z^{-1}$. Note that the sum in \eqref{lcl2} is finite. One can define $n$th product of fields for any $n{\mathrm{i}}n\mathbb{Z}$: \betagin{equation}\lbb{nprod2} \betagin{split} \varphii(w)_{(n)}\psi(w) &= \Res_z \varphii(z)\psi(w) \, i_{z,w} (z-w)^n \\ &-\Res_z \psi(w)\varphii(z) \, i_{w,z} (z-w)^n. \end{split} \end{equation} Here $i_{z,w}$ means that we expand in the domain $|z|>|w|$, i.e., \betagin{equation}\lbb{izw} i_{z,w} (z-w)^n = \sum_{k=0}^{\mathrm{i}}nfty \binom{n}{k} z^{n-k} (-w)^k, \end{equation} while \betagin{equation}\lbb{iwz} i_{w,z} (z-w)^n = \sum_{k=0}^{\mathrm{i}}nfty \binom{n}{k} z^k (-w)^{n-k}. \end{equation} In particular, $\partiale(z-w) = i_{z,w}(z-w)^{-1} - i_{w,z}(z-w)^{-1}$. The $(-1)$st product is called the {\em normally ordered product\/} and is denoted by $\nop{\varphii\psi}$. One has \betagin{equation}\lbb{nop1} \nop{\varphii(w)\psi(w)} = \varphii(w)^+\psi(w) + \psi(w)\varphii(w)^-, \end{equation} where \betagin{equation}\lbb{nop2} \varphii(w)^+ = \sum_{m<0}\varphii_{(m)} w^{-m-1}, \quad \varphii(w)^- = \sum_{m{\mathfrak{g}}e0}\varphii_{(m)} w^{-m-1}. \end{equation} One also has \betagin{equation}\lbb{nprod3} \varphii(w)_{(-n-1)}\psi(w) = \nop{(\partial_w^{(n)}\varphii(w))\psi(w)}, \qquad n{\mathfrak{g}}e0. 
\end{equation} \subsection{Vertex Algebras}\lbb{sdva} \betagin{definition}\lbb{dva} A {\em vertex algebra\/} is a vector space $V$ (space of states) endowed with a vector $|0\rangle{\mathrm{i}}n V$ (vacuum vector), an endomorphism $T$ (infinitesimal translation operator) and a linear map {}from $V$ to the space of fields on $V$ (state-field correspondence) \betagin{equation}\lbb{y} a\mapsto Y(a,z)=\sum_{m{\mathrm{i}}n\mathbb{Z}}a_{(m)}z^{-m-1}, \end{equation} such that the following properties hold: locality axiom: all fields $Y(a,z)$ are local with respect to each other, translation covariance: \betagin{equation}\lbb{t} \bigl[T,Y(a,z)\bigr]=\partial_z Y(a,z), \end{equation} vacuum axioms: \betagin{align} \lbb{vac1} &Y(|0\rangle,z)={\mathrm{i}}d_V, \;\;\; T|0\rangle=0, \\ \lbb{vac2} &Y(a,z)|0\rangle-a {\mathrm{i}}n zV[[z]]. \end{align} \end{definition} Here are some corollaries of the definition: \betagin{equation}\lbb{taa} Ta=a_{(-2)}|0\rangle, \quad Y(Ta,z)=\partial_zY(a,z). \end{equation} We also have the skew-symmetry \betagin{equation}\lbb{sks} Y(a,z)b=e^{zT}Y(b,-z)a. \end{equation} The most important property of a vertex algebra $V$ is the following {\em Borcherds identity\/} (which along with the vacuum axioms provides an equivalent definition of a vertex algebra): \betagin{equation}\lbb{bor} \betagin{split} \Res_{z-w} Y(Y(a,z-w)b,w) \, i_{w,z-w}&F(z,w) \\ =\Res_z Y(a,z)Y(b,w) \, i_{z,w}&F(z,w) \\ -\Res_z Y(b,w)Y(a,z) \, i_{w,z}&F(z,w) \end{split} \end{equation} for any $a,b{\mathrm{i}}n V$ and any rational function $F(z,w)$ with poles only at $z=0$, $w=0$ or $z=w$. Let us give some consequences of the Borcherds identity. Taking $F(z,w)=(z-w)^n\partiale(u-z)$, viewed as a series in $u,u^{-1}$, we obtain \betagin{equation}\lbb{bor2} \betagin{split} Y(a,u)&Y(b,w) \, i_{u,w}(u-w)^n -Y(b,w)Y(a,u) \, i_{w,u}(u-w)^n \\ &=\sum_{m=0}^{\mathrm{i}}nfty Y(a_{(m+n)}b,w) \, \partial_w^{(m)}\partiale(u-w). \end{split} \end{equation} For $n=0$ this gives the commutator formula \betagin{equation}\lbb{comm} \bigl[Y(a,u),Y(b,w)\bigr] = \sum_{m=0}^{\mathrm{i}}nfty Y(a_{(m)}b,w) \, \partial_w^{(m)}\partiale(u-w), \end{equation} (which implies locality). Taking $\Res_u$ of \eqref{bor2}, we get \betagin{equation}\lbb{npr} Y(a,w)_{(n)}Y(b,w)=Y(a_{(n)}b,w), \qquad n{\mathrm{i}}n\mathbb{Z}. \end{equation} \subsection{Conformal Vertex Algebras}\lbb{sconfdva} \betagin{definition}\lbb{dconfva} A vertex algebra $V$ is {\em conformal\/} of central charge (or rank) $c{\mathrm{i}}n\mathbb{C}$ if there exists a vector $\nu{\mathrm{i}}n V$ (the {conformal vector}) with the following properties: (i) The modes $L_n$, $n{\mathrm{i}}n\mathbb{Z}$, of the field $L(z) \equiv Y(\nu,z)=\sum_{n{\mathrm{i}}n\mathbb{Z}}L_n z^{-n-2}$ give a representation of the Virasoro algebra with central charge $c$. (The field $L(z)$ is called the energy-momentum field in CFT.) (ii) $L_{-1}$ is the infinitesimal translation operator $T$. (iii) The operator $L_0$ is diagonalizable with a non-negative spectrum. ($L_0$ is called the energy operator or Hamiltonian.) \end{definition} In a conformal vertex algebra $V$, \eqref{comm} implies \betagin{equation}\lbb{l02} [L_0,Y(a,z)] = zY(L_{-1}a,z) + Y(L_0a,z). \end{equation} The operator $L_0$ defines a gradation of $V$, $V=\bigoplus_{\Delta{\mathfrak{g}}e0}V(\Delta)$, such that \betagin{equation}\lbb{l0} L_0|_{V(\Delta)} = \Delta{\mathrm{i}}d_{V(\Delta)}. 
\end{equation} For $a{\mathrm{i}}n V(\Deltalta)$, the field $Y(a,z)$ is of {\em conformal weight\/} $\Deltalta$, i.e., $\partialeg a_{(m)}=\Deltalta-m-1$ for all $m{\mathrm{i}}n\mathbb{Z}$. The field $L(z)$ is of conformal weight $2$ and $\nu=L_{-2}|0\rangle$. \section{Twisted Modules over Vertex Algebras}\lbb{stwmod} In this section we study twisted modules over vertex algebras (cf.\ \cite{DL, FLM}). It seems that some of our results are new even in the untwisted case. \subsection{Definition in Terms of Borcherds Identity}\lbb{sdbid} Let $V$ be a vertex algebra and $\sigma$ be an automorphism of $V$ of finite order $N$. We let $\varepsilons=e^{2\pi{\mathrm{i}}/N}$ and $V_{j} = \{a{\mathrm{i}}n V \; | \; \sigma a=\varepsilons^{-j}a \}$, $0\le j\le N-1$. An {\em $N$-twisted field\/} $\varphii(z)$ on a vector space $M$ is a formal power series in $z^{1/N},z^{-1/N}$ of the form \betagin{equation}\lbb{twfield} \varphii(z)=\sum_{m{\mathrm{i}}n\frac1N\mathbb{Z}}\varphii_{(m)} z^{-m-1}, \qquad \varphii_{(m)}{\mathrm{i}}n\End(M), \end{equation} such that \betagin{equation}\lbb{twgg} \varphii_{(m)} v = 0 \quad\text{for all }v{\mathrm{i}}n M\text{ and }m{\mathfrak{g}}g0. \end{equation} For any integer $k$, we will denote by $\varphii(e^{2\pi{\mathrm{i}} k} z)$ the field obtained from $\varphii(z)$ by substituting $z^{1/N}$ with $\varepsilons^k z^{1/N}$, i.e., $\varphii(e^{2\pi{\mathrm{i}} k} z) = \sum_{m{\mathrm{i}}n\frac1N\mathbb{Z}}\varphii_{(m)} e^{-2\pi{\mathrm{i}} km} z^{-m-1}$. \betagin{definition}\lbb{dvrep} A {\em $\sigma$-twisted $V$-module\/} is a vector space $M$ endowed with a linear map {}from $V$ to the space of $N$-twisted fields on $M$, \betagin{align} \lbb{ym} a\mapsto Y^M(a,z)&=\sum_{m{\mathrm{i}}n\frac1N\mathbb{Z}}a^M_{(m)}z^{-m-1}, \\ {\mathrm{i}}ntertext{such that for all $a{\mathrm{i}}n V$} \lbb{ginv} Y^M(\sigma a,z) &= Y^M(a,e^{2\pi{\mathrm{i}}}z), \\ \lbb{vacm} Y^M(|0\rangle,z) &= {\mathrm{i}}d_M, \end{align} and the following twisted Borcherds identity holds for any $a{\mathrm{i}}n V_j$, $b{\mathrm{i}}n V$, $0\le j\le N-1$, and any rational function $F(z,w)$ with poles only at $z=0$, $w=0$ or $z=w$: \betagin{equation}\lbb{twbor} \betagin{split} \Res_{z-w} Y^M(Y(a,z-w)b,w) \, i_{w,z-w} \, z^{j/N} &F(z,w) \\ =\Res_z Y^M(a,z)Y^M(b,w) \, i_{z,w} \, z^{j/N} &F(z,w) \\ -\Res_z Y^M(b,w)Y^M(a,z) \, i_{w,z} \, z^{j/N} &F(z,w). \end{split} \end{equation} Of course, for $\sigma=1$, a $1$-twisted module is called just a {\em module\/}. \end{definition} \betagin{remark}\lbb{rtwbor} The Borcherds identity \eqref{twbor} is equivalent to the following collection of identities ($a{\mathrm{i}}n V_j$, $b{\mathrm{i}}n V$, $c{\mathrm{i}}n M$, $m{\mathrm{i}}n\frac jN+\mathbb{Z}$, $n{\mathrm{i}}n\mathbb{Z}$, $k{\mathrm{i}}n\frac1N\mathbb{Z}$): \betagin{equation}\lbb{twboco} \betagin{split} \sum_{i=0}^{\mathrm{i}}nfty \binom{m}{i} (a_{(n+i)}b)^M_{(m+k-i)}c = \sum_{i=0}^{\mathrm{i}}nfty (-1)^i &\binom{n}{i} a^M_{(m+n-i)}(b^M_{(k+i)}c) \\ - \sum_{i=0}^{\mathrm{i}}nfty (-1)^{i+n} &\binom{n}{i} b^M_{(k+n-i)}(a^M_{(m+i)}c). \end{split} \end{equation} \end{remark} When $V$ is a conformal vertex algebra we will assume that the automorphism $\sigma$ preserves the conformal vector: $\sigma(\nu)=\nu$. Then, for any $\sigma$-twisted $V$-module $M$, the modes $L_n^M$ of $Y^M(\nu,z)=\sum_{n{\mathrm{i}}n\mathbb{Z}}L_n^M z^{-n-2}$ give a representation in $M$ of the Virasoro algebra with central charge $c=\rank V$. 
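As a simple illustration of the component form \eqref{twboco}, recorded here for convenience, setting $n=0$ there gives the commutator of two twisted modes: for $a\in V_j$, $b\in V$, $m\in\frac jN+\mathbb{Z}$ and $k\in\frac1N\mathbb{Z}$, \[ \bigl[a^M_{(m)},b^M_{(k)}\bigr] = \sum_{i=0}^{\infty}\binom{m}{i}\,\bigl(a_{(i)}b\bigr)^M_{(m+k-i)}, \] where the sum is finite on every vector of $M$ because $a_{(i)}b=0$ for $i$ sufficiently large. This is the mode form of the twisted commutator formula \eqref{twcomm} derived in the next subsection.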
\betagin{definition}\lbb{dvrep2} A {\em $\sigma$-twisted module over a conformal vertex algebra $V$\/} is a $\sigma$-twisted $V$-module $M$ in the sense of \partialeref{dvrep} satisfying the additional requirement that the operator $L_0^M$ be diagonalizable with finite-dimensional eigenspaces. \end{definition} \subsection{Consequences of the Definition}\lbb{subconsd} Let $M$ be a $\sigma$-twisted $V$-module. In this subsection we will derive some consequences of \partialeref{dvrep}. First of all, note that, by \eqref{ginv}, $Y^M(a,z)z^{j/N}$ contains only integer powers of $z$ for $a{\mathrm{i}}n V_j$. For $0\le j\le N-1$, we introduce the $N$-twisted $\partiale$-functions \betagin{equation}\lbb{twdel} \partiale_j(z-w) = z^{j/N}w^{-j/N}\partiale(z-w) = \sum_{k{\mathrm{i}}n\frac jN+\mathbb{Z}} z^k w^{-k-1} . \end{equation} Then \betagin{equation}\lbb{twdel2} \Res_z Y^M(a,z) \partiale_j(w-z) = Y^M(\pi_j a,w), \end{equation} where \betagin{equation}\lbb{pij} \pi_j = \frac1N \sum_{k=0}^{N-1} \varepsilons^{kj} \sigma^k \end{equation} is the projection onto $V_{j}$. Putting $F(z,w)=z^{-j/N}(z-w)^n\partiale_j(u-z)$ in the twisted Borcherds identity \eqref{twbor}, we obtain a twisted version of \eqref{bor2}: \betagin{equation}\lbb{twbor2} \betagin{split} Y^M&(a,u)Y^M(b,w) \, i_{u,w}(u-w)^n -Y^M(b,w)Y^M(a,u) \, i_{w,u}(u-w)^n \\ &=\sum_{m=0}^{\mathrm{i}}nfty Y^M(a_{(m+n)}b,w) \, \partial_w^{(m)}\partiale_j(u-w), \qquad a{\mathrm{i}}n V_j, n{\mathrm{i}}n\mathbb{Z}. \end{split} \end{equation} The collection of identities \eqref{twbor2} for all $n{\mathrm{i}}n\mathbb{Z}$ is equivalent to \eqref{twbor}. Note that, as in the untwisted case, all the fields $Y^M(a,z)$ are local with respect to each other, since, letting $n=0$ in \eqref{twbor2}, we get the twisted commutator formula $(a{\mathrm{i}}n V_j)$: \betagin{equation}\lbb{twcomm} \bigl[Y^M(a,u),Y^M(b,w)\bigr] = \sum_{m=0}^{\mathrm{i}}nfty Y^M(a_{(m)}b,w) \, \partial_w^{(m)}\partiale_j(u-w) . \end{equation} Let us define the $n$th product ($n{\mathrm{i}}n\mathbb{Z}$) of two twisted fields $Y^M(a,w)$, $Y^M(b,w)$ for $a{\mathrm{i}}n V_j$ as $\Res_u u^{j/N}w^{-j/N}$ of the left hand side of \eqref{twbor2}. For $n=-1$, this definition coincides with \eqref{nop1}, \eqref{nop2}, but for $n<-1$ it differs from \eqref{nprod3}. By \eqref{twbor2}, we have for $a{\mathrm{i}}n V_j$ \betagin{equation}\lbb{twnpr} Y^M(a,w)_{(n)}Y^M(b,w) = \sum_{m=0}^{\mathrm{i}}nfty \binom{j/N}{m} w^{-m} \, Y^M(a_{(m+n)}b,w) . \end{equation} In particular, letting $b=|0\rangle$ and $n=-2$ in \eqref{twnpr}, we get \betagin{equation}\lbb{twtaa} Y^M(Ta,z)=\partial_zY^M(a,z). \end{equation} When $V$ is a conformal vertex algebra, the commutator formula \eqref{twcomm} implies: \betagin{align} \lbb{tm} [T^M,Y^M(a,z)] &= \partial_z Y^M(a,z), \\ \lbb{l0m} [L_0^M,Y^M(a,z)] &= z\partial_z Y^M(a,z) + Y^M(L_0^M a,z), \end{align} where $T^M := L_{-1}^M$. \subsection{Associativity of Twisted Fields }\lbb{sass} We will try to invert formula \eqref{twnpr}, i.e., to express $Y^M(a_{(n)}b,w)$ in terms of $Y^M(a,w)_{(n)}Y^M(b,w)$. To this end, we multiply both sides of \eqref{twnpr} by $z^{-n-1}w^{j/N}$ and sum over $n{\mathrm{i}}n\mathbb{Z}$. Using the properties of the $\partiale$-function and Taylor's formula, we get \betagin{align} \notag i_{w,z} &(z+w)^{j/N} \, Y^M(Y(a,z)b,w) = i_{z,w} (z+w)^{j/N} \, Y^M(a,z+w)Y^M(b,w) \\ \lbb{as1} &- \sum_{k=0}^{\mathrm{i}}nfty Y^M(b,w) a^M_{(k+j/N)} \, (-\partial_w)^{(k)}\partiale(z-(-w)), \qquad a{\mathrm{i}}n V_j. 
\end{align} Of course, we cannot divide \eqref{as1} by $(z+w)^{j/N}$. Note, however, that the sum in \eqref{as1} becomes finite when applied to any $v{\mathrm{i}}n M$. Using that $(z+w)^n (-\partial_w)^{(k)}\partiale(z-(-w)) = 0$ for $n>k$, we obtain the {\em associativity\/} of twisted fields \betagin{equation}\lbb{as2} \betagin{split} i_{w,z} (z&+w)^{n+j/N} \, Y^M(Y(a,z)b,w)v \\ &= i_{z,w} (z+w)^{n+j/N} \, Y^M(a,z+w)Y^M(b,w)v, \\ &\qquad\qquad\qquad a{\mathrm{i}}n V_j, b{\mathrm{i}}n V, v{\mathrm{i}}n M, n{\mathfrak{g}}g0. \end{split} \end{equation} Here we can take the minimal $n{\mathfrak{g}}e0$ such that $a^M_{(k+j/N)}v=0$ for $k{\mathfrak{g}}e n$. Note that the powers of $z$ and $w$ in both sides of \eqref{as2} are bounded from below. Therefore, we can multiply both sides by $i_{w,z}(z+w)^{-n-j/N}$ to get ($a{\mathrm{i}}n V_j, b{\mathrm{i}}n V, v{\mathrm{i}}n M$): \betagin{equation}\lbb{as3} \betagin{split} &Y^M(Y(a,z)b,w)v \\ &= (i_{w,z}(z+w)^{-n-j/N}) \, i_{z,w} (z+w)^{n+j/N} \, Y^M(a,z+w)Y^M(b,w)v. \end{split} \end{equation} \subsection{Definition in Terms of Associativity}\lbb{sdass} We will show that the associativity \eqref{as2}, together with \eqref{ginv}, \eqref{vacm}, implies the twisted Borcherds identity \eqref{twbor}. First let $b=|0\rangle$ in \eqref{as2}. Using \eqref{vacm} and the identity $Y(a,z)|0\rangle = e^{zT}a$ (cf.\ \eqref{sks}), we get: \betagin{equation*} i_{w,z} (z+w)^{n+j/N} \, Y^M(e^{zT}a,w)v = i_{z,w} (z+w)^{n+j/N} \, Y^M(a,z+w)v. \end{equation*} Notice that the right-hand side contains only non-negative integer powers of $z+w$. This implies $Y^M(e^{zT}a,w) = i_{w,z} Y^M(a,z+w)$ and, in particular, Eq.~\eqref{twtaa}. Now let $a{\mathrm{i}}n V_j, b{\mathrm{i}}n V_k, v{\mathrm{i}}n M$, and let $n,p{\mathrm{i}}n\mathbb{Z}$ be such that $a^M_{(m+j/N)}v=0$ for $m{\mathfrak{g}}e n$, $b^M_{(m+k/N)}v=0$ for $m{\mathfrak{g}}e p$. Replacing in \eqref{as2} $b$ with $e^{uT}b$, we obtain: \betagin{equation}\lbb{as4} \betagin{split} i_{z,w} i_{w,u} & (z+w)^{n+j/N} \, Y^M(a,z+w) Y^M(b,u+w) v \\ &= i_{w,z} i_{w,u} i_{z,u} (z+w)^{n+j/N} \, Y^M(Y(a,z-u)b,u+w) v. \end{split} \end{equation} Note that, if we multiply the left-hand side by $i_{w,u} (u+w)^{p+k/N}$, it will contain only non-negative integer powers of $u+w$ and hence of $w$. Therefore it makes sense to put $w=0$: \betagin{equation}\lbb{as5} \betagin{split} z^{n+j/N} u^{p+k/N} \, & Y^M(a,z) Y^M(b,u) v \\ = & i_{w,z} i_{w,u} i_{z,u} (z+w)^{n+j/N} (u+w)^{p+k/N} \\ &\tildemes Y^M(Y(a,z-u)b,u+w) v \big|_{w=0}. \end{split} \end{equation} Interchanging the roles of $a$ and $b$ and using \eqref{sks}, we get: \betagin{equation}\lbb{as6} \betagin{split} z^{n+j/N} u^{p+k/N} \, & Y^M(b,u) Y^M(a,z) v \\ = & i_{w,z} i_{w,u} i_{u,z} (z+w)^{n+j/N} (u+w)^{p+k/N} \\ &\tildemes Y^M(Y(a,z-u)b,u+w) v \big|_{w=0}. \end{split} \end{equation} Notice that \betagin{equation*} i_{w,z} (z+w)^{n+j/N} = i_{w,u} \sum_{i{\mathrm{i}}n\mathbb{Z}_+} \binom{n+j/N}{i} (u+w)^{n-i+j/N} (z-u)^i \end{equation*} and \betagin{equation*} i_{z,u} (z-u)^{-m-1} - i_{u,z} (z-u)^{-m-1} = \partial_u^{(m)} \partiale(z-u), \qquad m{\mathfrak{g}}e0. 
\end{equation*} Using this and \eqref{as5}, \eqref{as6}, we get for $l{\mathrm{i}}n\mathbb{Z}$: \betagin{equation*} \betagin{split} i_{z,u} (z-u)^l & z^{n+j/N} u^{p+k/N} \, Y^M(a,z) Y^M(b,u) v \\ &- i_{u,z} (z-u)^l z^{n+j/N} u^{p+k/N} \, Y^M(b,u) Y^M(a,z) v \\ =i_{w,u} & \sum_{\substack{ i,m{\mathrm{i}}n\mathbb{Z} \\ i{\mathfrak{g}}e0, m{\mathfrak{g}}e i+l }} \binom{n+j/N}{i} (u+w)^{n+p-i+(j+k)/N} \\ &\tildemes Y^M(a_{(m)}b,u+w) v \, \partial_u^{(m-i-l)} \partiale(z-u) \big|_{w=0}. \end{split} \end{equation*} {}From here it is easy to deduce \eqref{twbor2}, which implies \eqref{twbor}. Therefore, we have the following equivalent definition of a $\sigma$-twisted $V$-module. \betagin{proposition}\lbb{pdefas} A $\sigma$-twisted $V$-module is the same as a vector space $M$ endowed with a linear map \eqref{ym} {}from $V$ to the space of $N$-twisted fields on $M$, satisfying \eqref{ginv}, \eqref{vacm} and \eqref{as2}. \end{proposition} \section{Twisted Modules over a Lattice Vertex Algebra}\lbb{stmvq} In the first subsection we introduce the main object of our study: the lattice vertex algebra $V_Q$. The remainder of the section is devoted to the classification of all $\sigma$-twisted $V_Q$-modules, where $\sigma$ is an automorphism of the lattice $Q$. \subsection{Lattice Vertex Algebras}\lbb{slva} The purpose of this subsection is to fix the notation and review some properties of lattice vertex algebras. For more details, see~\cite{K}. Let $Q$ be an integral lattice of rank $l$. We denote the bilinear form on $Q$ by $(\cdot|\cdot)$, and write $|\alpha|^2=(\alpha|\alpha)$ for $\alpha{\mathrm{i}}n Q$. We extend the bilinear form to ${\mathfrak{h}} = \mathbb{C}\otimes_\mathbb{Z} Q$ by $\mathbb{C}$-bilinearity. There exists a bimultiplicative function $\varepsilon\colon Q\tildemes Q\to\{\pm1\}$ satisfying \betagin{equation}\lbb{epal} \varepsilon(\alpha,\alpha) = (-1)^{|\alpha|^2(|\alpha|^2+1)/2}, \qquad \alpha{\mathrm{i}}n Q. \end{equation} Then by bimultiplicativity \betagin{equation}\lbb{epalbe} \varepsilon(\alpha,\beta)\varepsilon(\beta,\alpha) = (-1)^{(\alpha|\beta) + |\alpha|^2|\beta|^2}, \qquad \alpha,\beta{\mathrm{i}}n Q. \end{equation} Introduce the twisted group algebra $\mathbb{C}_\varepsilon[Q]$: it has a basis $\{e^\alpha\}_{\alpha{\mathrm{i}}n Q}$ over $\mathbb{C}$ and multiplication \betagin{equation}\lbb{ealebe} e^\alpha e^\beta = \varepsilon(\alpha,\beta) e^{\alpha+\beta}, \qquad \alpha,\beta{\mathrm{i}}n Q. \end{equation} Let ${\mathfrak{h}}at{\mathfrak{h}} = {\mathfrak{h}}[t,t^{-1}]{\mathrm{op}}lus\mathbb{C} K$ be the {\em Heisenberg current algebra}; this is a Lie algebra with the bracket \betagin{equation}\lbb{htht} [ht^m, h't^n] = m\partiale_{m,-n}(h|h')K, \quad [ht^m,K]=0, \qquad h,h'{\mathrm{i}}n{\mathfrak{h}}. \end{equation} It has a unique irreducible representation of level $1$ (i.e., with $K=1$) on the Fock space $S=S({\mathfrak{h}}[t^{-1}]t^{-1})$ such that ${\mathfrak{h}}[t^{-1}]t^{-1}$ acts by multiplication and ${\mathfrak{h}}[t]1=0$. This representation extends to the space $V_Q=S\otimes\mathbb{C}_\varepsilon[Q]$ by \betagin{equation}\lbb{hteal} (ht^m)(s\otimes e^\alpha) = \bigl( ht^m + \partiale_{m,0}(h|\alpha) \bigr) s\otimes e^\alpha \qquad\text{for}\;\; m{\mathfrak{g}}e0. \end{equation} We define a representation of the algebra $\mathbb{C}_\varepsilon[Q]$ in $V_Q$ by left multiplication: \betagin{equation}\lbb{egaeal1} e^{\mathfrak{g}}amma(s\otimes e^\alpha) = \varepsilon({\mathfrak{g}}amma,\alpha)s\otimes e^{\alpha+{\mathfrak{g}}amma}. 
\end{equation} This gives rise to a representation in $V_Q$ of the associative algebra $\mathcal{A}_Q = U({\mathfrak{h}}at{\mathfrak{h}})\otimes\mathbb{C}_\varepsilon[Q]$, which is a ``twisted'' tensor product of the universal enveloping algebra $U({\mathfrak{h}}at{\mathfrak{h}})$ of ${\mathfrak{h}}at{\mathfrak{h}}$ and the algebra $\mathbb{C}_\varepsilon[Q]$ by the relation \betagin{equation}\lbb{ealhtm} e^\alpha(ht^m) = (ht^m-\partiale_{m,0}(\alpha|h)) e^\alpha. \end{equation} Here and further $e^\alpha$ (respectively $ht^m$) stands for $1\otimes e^\alpha$ (respectively $ht^m \otimes1$). The algebra $\mathcal{A}_Q$ has a $\mathbb{Z}_2$-gradation (i.e., it is an associative superalgebra) defined by \betagin{equation}\lbb{pueal1} p(u\otimes e^\alpha) = |\alpha|^2 \mod 2\mathbb{Z}. \end{equation} This induces a $\mathbb{Z}_2$-gradation on $V_Q$: \betagin{equation}\lbb{pueal2} p(s\otimes e^\alpha) = |\alpha|^2 \mod 2\mathbb{Z}. \end{equation} Introduce the following fields on $V_Q$ (called {\em currents}): \betagin{equation}\lbb{hz} h(z) = \sum_{m{\mathrm{i}}n\mathbb{Z}} (ht^m)z^{-m-1}, \qquad h{\mathrm{i}}n{\mathfrak{h}}. \end{equation} Then the commutation relations \eqref{htht} for $K=1$ can be rewritten as \betagin{equation}\lbb{hzhz} [h(z),h'(w)] = (h|h')\,\partial_w\partiale(z-w), \qquad h,h'{\mathrm{i}}n{\mathfrak{h}}. \end{equation} Hence all the fields $h(z)$ are local with respect to each other. For $\alpha{\mathrm{i}}n Q$, introduce the {\em vertex operator\/} \betagin{equation}\lbb{gaal} \betagin{split} Y_\alpha(z) &= e^\alpha \,\nop{\exp\tildent\alpha(z)} \\ &\equiv e^\alpha z^\alpha \exp\Bigl( \sum_{n<0} (\alpha t^n)\frac{z^{-n}}{-n} \Bigr) \exp\Bigl( \sum_{n>0} (\alpha t^n)\frac{z^{-n}}{-n} \Bigr) \end{split} \end{equation} of parity $|\alpha|^2 \mod 2\mathbb{Z}$. The vertex operator $Y_\alpha(z)$ is a field on $V_Q$, and it is local with respect to $h(z)$ because \betagin{equation}\lbb{gaalhz} [h(z),Y_\alpha(w)] = (h|\alpha)Y_\alpha(w)\partiale(z-w), \qquad h{\mathrm{i}}n{\mathfrak{h}},\alpha{\mathrm{i}}n Q. \end{equation} Moreover, the fields $Y_\alpha(z)$ are local among themselves. This follows from \eqref{epalbe} and the formula \betagin{equation}\lbb{gaalgabe} Y_\alpha(z)Y_\beta(w) = \varepsilon(\alpha,\beta) (z-w)^{(\alpha|\beta)} \, e^{\alpha+\beta} \, \nop{\exp\bigl( \tildent\alpha(z)+\tildent\beta(w) \bigr)} \,. \end{equation} \betagin{theorem}\lbb{pvq} The fields $Y(ht^{-1},z)=h(z)$ $(h{\mathrm{i}}n{\mathfrak{h}})$, of parity $0$, and $Y(e^\alpha,z)=Y_\alpha(z)$ $(\alpha{\mathrm{i}}n Q)$, of parity $p(e^\alpha)=|\alpha|^2 \mod 2\mathbb{Z}$, generate a vertex algebra structure on $V_Q=S\otimes\mathbb{C}_\varepsilon[Q]$ with the vacuum vector $|0\rangle=1\otimes1$ and the operator $T$ defined by \betagin{equation}\lbb{tvq} [T,ht^m]=-m \, ht^{m-1}, \quad Te^\alpha=(\alpha t^{-1})e^\alpha, \qquad h{\mathrm{i}}n{\mathfrak{h}},\alpha{\mathrm{i}}n Q. \end{equation} This vertex algebra is conformal of rank $l=\rank Q$ with the conformal vector \betagin{equation}\lbb{nuvq} \nu = \frac12 \sum_{i=1}^l (a^it^{-1})(b^it^{-1}), \end{equation} where $\{a^i\}$, $\{b^i\}$ are dual bases of\/ ${\mathfrak{h}}$. \end{theorem} Let $\sigma$ be an automorphism of the lattice $Q$ of finite order $N$. For $0\le j\le N-1$, let \betagin{equation}\lbb{hj} {\mathfrak{h}}_j = \{h{\mathrm{i}}n{\mathfrak{h}} \; | \; \sigma h=\varepsilons^{-j}h \}, \qquad \varepsilons=e^{2\pi{\mathrm{i}}/N}. 
\end{equation} Then ${\mathfrak{h}}_j$ and ${\mathfrak{h}}_k$ are orthogonal unless $j+k\equiv 0\mod N\mathbb{Z}$. Since there exists a unique, up to equivalence, $\pm1$-valued $2$-cocycle $\varepsilon$ on $Q$ satisfying \eqref{epal}, the cocycles $\varepsilon(\alpha,\beta)$ and $\varepsilon(\sigma\alpha,\sigma\beta)$ are equivalent. Hence there exists a function $\eta\colon Q\to\{\pm1\}$ such that \betagin{equation}\lbb{etaep} \eta(\alpha)\eta(\beta)\varepsilon(\alpha,\beta) = \eta(\alpha+\beta)\varepsilon(\sigma\alpha,\sigma\beta), \qquad \alpha,\beta{\mathrm{i}}n Q. \end{equation} Moreover, $\eta$ can be chosen in such a way that \betagin{equation}\lbb{etah0} \eta(\alpha)=1 \quad\text{for}\;\; \alpha{\mathrm{i}}n Q\cap{\mathfrak{h}}_0. \end{equation} \betagin{proposition}\lbb{pautvq} Any automorphism $\sigma$ of $Q$ can be lifted to an automorphism of the vertex algebra $V_Q$ so that \betagin{equation}\lbb{autvq} \sigma(ht^m)=\sigma(h)t^m, \quad \sigma(e^\alpha)=\eta(\alpha)^{-1}e^{\sigma\alpha}, \qquad h{\mathrm{i}}n{\mathfrak{h}},\alpha{\mathrm{i}}n Q. \end{equation} It fixes the conformal vector $\nu${\rm:} $\sigma(\nu)=\nu$. \end{proposition} \betagin{proof} Follows from the observation that $\sigma$ defines an automorphism of the associative superalgebra $\mathcal{A}_Q$. \end{proof} \betagin{remark}\lbb{rordsi} If $\sigma\colon Q\to Q$ is an automorphism of order $N$, then its lifting $\sigma\colon V_Q\to V_Q$ defined by \eqref{autvq} has order $N$ or $2N$. \end{remark} \subsection{Twisted Vertex Operators}\lbb{sstwvo} Let $\sigma$ be an automorphism of $Q$ lifted to an automorphism of $V_Q$, as in \prref{pautvq}. We use the notation from \seref{stwmod} for $V=V_Q$. Let $M$ be a $\sigma$-twisted $V_Q$-module (see \partialeref{dvrep2}). We will study the fields $Y^M(ht^{-1},z)$ and $Y^M(e^\alpha,z)$ for $h{\mathrm{i}}n{\mathfrak{h}},\alpha{\mathrm{i}}n Q$. Comparing the commutator formulas \eqref{comm} and \eqref{twcomm}, we see that Eqs.\ \eqref{hzhz}, \eqref{gaalhz} immediately imply \betagin{align} \lbb{twc1} [Y^M(ht^{-1},z),Y^M(h't^{-1},w)] &= (h|h')\,\partial_w\partiale_j(z-w), \\ \lbb{twc2} [Y^M(ht^{-1},z),Y^M(e^\alpha,w)] &= (h|\alpha)\, Y^M(e^\alpha,w)\partiale_j(z-w), \\ \notag &\qquad\qquad h{\mathrm{i}}n{\mathfrak{h}}_j,h'{\mathrm{i}}n{\mathfrak{h}}, \; \alpha{\mathrm{i}}n Q. \end{align} These formulas can be restated as: \betagin{align} \lbb{twc3} [h^M_{(m)},(h')^M_{(n)}] &= (\pi_{Nm} h|h') \, m\partiale_{m,-n}, \\ \lbb{twc4} [h^M_{(m)},Y^M(e^\alpha,w)] &= (\pi_{Nm} h|\alpha) \, w^m Y^M(e^\alpha,w), \\ \notag &\qquad\qquad h,h'{\mathrm{i}}n{\mathfrak{h}}, \; m,n{\mathrm{i}}n\tfrac1N\mathbb{Z}, \; \alpha{\mathrm{i}}n Q, \end{align} where $\pi_j$ is the projection of ${\mathfrak{h}}$ onto ${\mathfrak{h}}_{j\!\!\!\mod N\mathbb{Z}}$. (Note that this $\pi_j$ is the restriction to $({\mathfrak{h}} t^{-1})|0\rangle$ of the $\pi_j$ defined by \eqref{pij}). In the sequel we will use the notation $h_0=\pi_0 h$ for $h{\mathrm{i}}n{\mathfrak{h}}$. Also note that, for $a=ht^{-1}$, \eqref{ginv} is equivalent to \betagin{equation}\lbb{twc5} (\sigma h)^M_{(m)} = h^M_{(m)} \, e^{-2\pi{\mathrm{i}} m}, \qquad h{\mathrm{i}}n{\mathfrak{h}}, \; m{\mathrm{i}}n\tfrac1N\mathbb{Z}. 
\end{equation} \betagin{lemma}\lbb{lymea} There exist operators $U^M_\alpha$ $(\alpha{\mathrm{i}}n Q)$ on $M$ such that \betagin{equation}\lbb{ymeal} Y^M(e^\alpha,z) = z^{b_\alpha} U^M_\alpha E^M_\alpha(z) \end{equation} where \betagin{equation}\lbb{dal} b_\alpha = (|\alpha_0|^2 - |\alpha|^2)/2 \end{equation} and \betagin{equation}\lbb{emal} \betagin{split} E^M_\alpha(z) &= \nop{\exp\tildent Y^M(\alpha t^{-1},z)} \\ &\equiv z^{ \alpha^M_{(0)} } \, \exp\Bigl( \sum_{n{\mathrm{i}}n\frac1N\mathbb{Z}_{<0}} \alpha^M_{(n)}\frac{z^{-n}}{-n} \Bigr) \exp\Bigl( \sum_{n{\mathrm{i}}n\frac1N\mathbb{Z}_{>0}} \alpha^M_{(n)}\frac{z^{-n}}{-n} \Bigr). \end{split} \end{equation} The operators $U^M_\alpha$ satisfy \betagin{align} \lbb{hual} [h^M_{(m)},U^M_\alpha] &= \partiale_{m,0}(h_0|\alpha) \, U^M_\alpha, \qquad h{\mathrm{i}}n{\mathfrak{h}}, \; m{\mathrm{i}}n\tfrac1N\mathbb{Z}, \\ \lbb{gual} U^M_{\sigma\alpha} &= \eta(\alpha) U^M_\alpha e^{ 2\pi{\mathrm{i}}(b_\alpha+\alpha^M_{(0)}) }. \end{align} \end{lemma} \betagin{proof} Define the operators \betagin{equation*} U^M_\alpha(z) = \exp\Bigl( \sum_{n{\mathrm{i}}n\frac1N\mathbb{Z}_{<0}} \alpha^M_{(n)}\frac{z^{-n}}{n} \Bigr) \, Y^M(e^\alpha,z) \, z^{ -\alpha^M_{(0)} } \exp\Bigl( \sum_{n{\mathrm{i}}n\frac1N\mathbb{Z}_{>0}} \alpha^M_{(n)}\frac{z^{-n}}{n} \Bigr). \end{equation*} Then, by \eqref{twc3}, \eqref{twc4}, \betagin{equation*} [h^M_{(m)},U^M_\alpha(z)] = \partiale_{m,0}(h_0|\alpha) \, U^M_\alpha(z), \qquad h{\mathrm{i}}n{\mathfrak{h}}. \end{equation*} In particular, $U^M_\alpha(z)$ commutes with $\alpha^M_{(n)}$ for $n<0$; hence, we have \betagin{equation}\lbb{ymeal2} Y^M(e^\alpha,z) = U^M_\alpha(z) E^M_\alpha(z). \end{equation} We will deduce Eqs.\ \eqref{ymeal} and \eqref{hual} once we show that $U^M_\alpha := z^{ -b_\alpha} U^M_\alpha(z)$ is independent of $z$. To this end we use the translation invariance \eqref{twtaa}. By \eqref{tvq}, we have $Te^\alpha=(\alpha t^{-1})e^\alpha = \alpha_{(-1)}e^\alpha$. Using \eqref{twtaa}, \eqref{tvq} and \eqref{twnpr}, we find \betagin{align*} \partial_z Y^M(e^\alpha,z) &= Y^M(T e^\alpha,z) = Y^M(\alpha_{(-1)}e^\alpha,z) \\ &= \nop{ Y^M(\alpha t^{-1},z)Y^M(e^\alpha,z) } - \sum_{j=0}^{N-1} \frac jN (\pi_j\alpha|\alpha) z^{-1} Y^M(e^\alpha,z). \end{align*} Therefore \betagin{equation*} \partial_z U^M_\alpha(z) = -\sum_{j=0}^{N-1} \frac jN (\pi_j\alpha|\alpha) z^{-1} U^M_\alpha(z). \end{equation*} Using that $(\pi_j\alpha|\alpha) = (\pi_{N-j}\alpha|\alpha)$ for $1\le j\le N-1$, it is easy to see that \betagin{equation}\lbb{xalal} \sum_{j=0}^{N-1} \frac jN (\pi_j\alpha|\alpha) = -b_\alpha. \end{equation} This proves that $U^M_\alpha = z^{ -b_\alpha} U^M_\alpha(z)$ is independent of $z$. Finally, to prove \eqref{gual}, we apply \eqref{ginv} for $a=e^\alpha$, using that $\sigma(e^\alpha)=\eta(\alpha)^{-1}e^{\sigma\alpha}$ and that by \eqref{twc5} we have $E^M_{\sigma\alpha}(z) = e^{-2\pi{\mathrm{i}}\alpha^M_{(0)}} E^M_\alpha(e^{2\pi{\mathrm{i}}}z)$. \end{proof} \betagin{lemma}\lbb{lyalbe1} We have \betagin{equation}\lbb{yalbe1} Y^M(e^\alpha,z)Y^M(e^\beta,w) = i_{z,w} f_{\alpha,\beta}(z,w) \, z^{b_\alpha} w^{b_\beta} \, U^M_\alpha U^M_\beta \, E^M_{\alpha,\beta}(z,w), \end{equation} where \betagin{align} \lbb{falbe} f_{\alpha,\beta}(z,w) &= \prod_{k=0}^{N-1} \bigl(z^{1/N} - \varepsilons^k w^{1/N}\bigr)^{ (\sigma^k\alpha|\beta) }, \\ \lbb{ealbe} E^M_{\alpha,\beta}(z,w) &= \nop{\exp\bigl( \tildent Y^M(\alpha t^{-1},z) +\tildent Y^M(\beta t^{-1},w) \bigr)}. 
\end{align} \end{lemma} \betagin{proof} Standard exercise, using the fact that $ e^Ae^Be^{-A} = e^{\ad A}e^B = e^{[A,B]}e^B $ for any two operators $A,B$ commuting with $[A,B]$. \end{proof} \betagin{lemma}\lbb{lyalbe2} We have \betagin{align}\lbb{yalbe2} &Y^M(Y(e^\alpha,z)e^\beta,w) \\ \notag &= \varepsilon(\alpha,\beta) B_{\alpha,\beta}^{-1} \, i_{w,z} f_{\alpha,\beta}(z+w,w) \, (z+w)^{b_\alpha} w^{b_\beta} \, U^M_{\alpha+\beta} \, E^M_{\alpha,\beta}(z+w,w), \end{align} where \betagin{equation}\lbb{balbe} B_{\alpha,\beta} = \frac{ f_{\alpha,\beta}(z,w) }{ (z-w)^{(\alpha|\beta)} } \Big|_{ z^{1/N}=w^{1/N}=1 } = N^{ -(\alpha|\beta) } \prod_{k=1}^{N-1} \bigl(1 - \varepsilons^k \bigr)^{ (\sigma^k\alpha|\beta) }. \end{equation} \end{lemma} \betagin{proof} We use the same argument as in the proof of \leref{lymea}. First, using \eqref{twcomm}, \eqref{twc4} and \eqref{hteal}, we compute the commutator ($h{\mathrm{i}}n{\mathfrak{h}}_j$, $n{\mathrm{i}}n\frac jN +\mathbb{Z}$): \betagin{align*} [h^M_{(n)}, \, & Y^M(Y(e^\alpha,z)e^\beta,w)] \\ &= \sum_{m=0}^{\mathrm{i}}nfty \binom nm w^{n-m} \, Y^M\bigl( h_{(m)}(Y(e^\alpha,z)e^\beta) ,w \bigr) \\ &= \sum_{m=0}^{\mathrm{i}}nfty \binom nm w^{n-m} \bigl( (h|\alpha)z^m + \partiale_{m,0} (h|\beta) \bigr) Y^M(Y(e^\alpha,z)e^\beta,w) \\ &= \bigl( i_{w,z}(z+w)^n (h|\alpha) + w^n (h|\beta) \bigr) Y^M(Y(e^\alpha,z)e^\beta,w). \end{align*} It follows that \betagin{equation*} Y^M(Y(e^\alpha,z)e^\beta,w) = U^M_{\alpha,\beta}(z,w) E^M_{\alpha,\beta}(z+w,w), \end{equation*} where the operator $U^M_{\alpha,\beta}(z,w)$ satisfies \betagin{equation*} [h^M_{(n)},U^M_{\alpha,\beta}(z,w)] = \partiale_{n,0}(h_0|\alpha+\beta) \, U^M_{\alpha,\beta}(z,w). \end{equation*} Next, we note that (by \eqref{tvq}, \eqref{twtaa}, \eqref{t}) \betagin{align*} Y^M\bigl(Y(e^\alpha,z)(\beta_{(-1)}e^\beta),w\bigr) &= Y^M\bigl(Y(e^\alpha,z)(Te^\beta),w\bigr) \\ &= (\partial_w-\partial_z) Y^M(Y(e^\alpha,z)e^\beta,w). \end{align*} A similar computation as above, using \eqref{twnpr} for $n=-1$, shows that for $h{\mathrm{i}}n{\mathfrak{h}}_j$ one has \betagin{multline*} Y^M\bigl(Y(e^\alpha,z)(h_{(-1)}e^\beta),w\bigr) = \nop{ Y^M(ht^{-1},w) Y^M(Y(e^\alpha,z)e^\beta,w) } \\ - \Bigl( z^{-1} i_{w,z} \Bigl(1+\frac zw\Bigr)^{j/N} (h|\alpha) + \frac jN w^{-1} (h|\beta) \Bigr) \, Y^M(Y(e^\alpha,z)e^\beta,w). \end{multline*} Therefore \betagin{align*} (\partial_w-\partial_z) & U^M_{\alpha,\beta}(z,w) \\ &= -\sum_{j=0}^{N-1} \Bigl( z^{-1} i_{w,z} \Bigl(1+\frac zw\Bigr)^{j/N} (\pi_j\beta|\alpha) + \frac jN w^{-1} (\pi_j\beta|\beta) \Bigr) \, U^M_{\alpha,\beta}(z,w). \end{align*} With some more computation, we see that \betagin{align*} - \sum_{j=0}^{N-1} z^{-1} i_{w,z} \Bigl(1+\frac zw\Bigr)^{j/N} (\pi_j\beta|\alpha) &= i_{w,z} i_{w,z+w} \sum_{k=0}^{N-1} \frac{ (1/N)\varepsilons^k w^{1/N-1} (\sigma^k\alpha|\beta) }{ (z+w)^{1/N} - \varepsilons^k w^{1/N} } \\ &= i_{w,z} \frac{ (\partial_w-\partial_z) f_{\alpha,\beta}(z+w,w) }{ f_{\alpha,\beta}(z+w,w) } \end{align*} On the other hand, by \eqref{xalal}, we have \betagin{equation*} -\sum_{j=0}^{N-1} \frac jN w^{-1} (\pi_j\beta|\beta) = (\partial_w-\partial_z) w^{b_\beta} / w^{b_\beta} . \end{equation*} It follows that \betagin{equation*} U^M_{\alpha,\beta}(z,w) / i_{w,z} f_{\alpha,\beta}(z+w,w) w^{b_\beta} \end{equation*} depends only on $z+w$. 
Finally, note that \betagin{align*} Y^M(Y(e^\alpha,z)e^\beta,w) &= z^{(\alpha|\beta)} \varepsilon(\alpha,\beta) Y^M(e^{\alpha+\beta},w) + \text{higher powers of $z$}, \\ {\mathrm{i}}ntertext{while} i_{w,z} f_{\alpha,\beta}(z+w,w) &= z^{(\alpha|\beta)} B_{\alpha,\beta} \, w^{ (\alpha_0-\alpha|\beta) } + \text{higher powers of $z$}. \end{align*} Since $(\alpha_0-\alpha|\beta) = b_{\alpha+\beta} - b_\alpha - b_\beta$, this completes the proof. \end{proof} \betagin{corollary}\lbb{cualube} In any $\sigma$-twisted $V_Q$-module $M$, one has \betagin{equation}\lbb{ualube} U^M_\alpha U^M_\beta = \varepsilon(\alpha,\beta) B_{\alpha,\beta}^{-1} \, U^M_{\alpha+\beta}, \qquad \alpha,\beta{\mathrm{i}}n Q. \end{equation} \end{corollary} \betagin{proof} Follows immediately from \eqref{yalbe1}, \eqref{yalbe2} and the associativity \eqref{as2}. \end{proof} \betagin{remark}\lbb{rpfcom} In the proofs of Lemmas \ref{lyalbe1} and \ref{lyalbe2}, we used only the commutator formulas \eqref{twc3}, \eqref{twc4}, the translation invariance \eqref{twtaa}, and formula \eqref{twnpr} for $n=-1$, $a=ht^{-1}$, $b=e^\beta$. \end{remark} \subsection{The Heisenberg Pair $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$}\lbb{shchg} The results of the previous subsection motivate the following definitions. The {\em $\sigma$-twisted current algebra} ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$ consists of all $\sigma$-invariant elements from $\mathbb{C} K{\mathrm{op}}lus{\mathfrak{h}}[t^{1/N},t^{-1/N}]$, where $\sigma$ acts as \betagin{equation}\lbb{ght} \sigma(h t^m) = \sigma(h) e^{2\pi{\mathrm{i}} m}t^m, \;\; \sigma(K)=K, \qquad h{\mathrm{i}}n{\mathfrak{h}}, \; m{\mathrm{i}}n\tfrac1N\mathbb{Z}. \end{equation} In other words, ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$ is spanned over $\mathbb{C}$ by $K$ and the elements $ht^m$ such that $h{\mathrm{i}}n{\mathfrak{h}}_j$, $m{\mathrm{i}}n\frac jN +\mathbb{Z}$. This is a Lie algebra with bracket \eqref{htht}. Let $G = \mathbb{C}^\tildemes \tildemes \exp{\mathfrak{h}}_0 \tildemes Q$ be the set consisting of elements $c \, e^h U_\alpha$ ($c{\mathrm{i}}n\mathbb{C}^\tildemes$, $h{\mathrm{i}}n{\mathfrak{h}}_0$, $\alpha{\mathrm{i}}n Q$). We define a multiplication in $G$ by the formulas: \betagin{align} \lbb{tig1} e^h e^{h'} &= e^{h+h'}, \\ \lbb{tig2} e^h U_\alpha e^{-h} &= e^{(h|\alpha)} U_\alpha, \\ \lbb{tig3} U_\alpha U_\beta &= \varepsilon(\alpha,\beta) B_{\alpha,\beta}^{-1} \, U_{\alpha+\beta}. \end{align} Then $G$ is a group. {}From \eqref{tig3} we get the commutator \betagin{equation}\lbb{tig4} C_{\alpha,\beta} := U_\alpha U_\beta U_\alpha^{-1} U_\beta^{-1} = (-1)^{|\alpha|^2|\beta|^2} \prod_{k=0}^{N-1} \bigl(-\varepsilons^k \bigr)^{ -(\sigma^k\alpha|\beta) }. \end{equation} We give another expression for $C_{\alpha,\beta}$ which will be useful in the sequel: \betagin{equation}\lbb{calbe} \betagin{split} C_{\alpha,\beta} &= (-1)^{|\alpha|^2|\beta|^2} e^{\pi{\mathrm{i}}(\alpha_0|\beta)} e^{2\pi{\mathrm{i}}(\alpha_*|\beta)} \\ &\qquad\qquad\text{for}\;\; \alpha=\alpha_0+(1-\sigma)\alpha_*, \; \alpha_0{\mathrm{i}}n{\mathfrak{h}}_0, \alpha_*{\mathrm{i}}n{\mathfrak{h}}_0^\perp. \end{split} \end{equation} Denote by $Q_{\mathrm{ev}}$ the sublattice of $Q$ consisting of all even elements, i.e., $\alpha$ such that $|\alpha|^2 {\mathrm{i}}n 2\mathbb{Z}$. 
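Both the identity \eqref{xalal} and the expression \eqref{calbe} for the commutator $C_{\alpha,\beta}$ are easy to test numerically on examples. The following Python sketch is an illustration only: the lattice $Q=\mathbb{Z}^3$ with the standard bilinear form and the cyclic permutation $\sigma$ of order $N=3$ are sample choices (not taken from the text), and the projection onto ${\mathfrak{h}}_j$ is realized by the usual averaging over the powers of $\sigma$.
\begin{verbatim}
# Numerical check of (xalal) and (calbe) for a sample lattice automorphism.
# Sample data (an assumption, for illustration): Q = Z^3 with the standard
# bilinear form, sigma = cyclic permutation of the coordinates, order N = 3.
import numpy as np

N = 3
sigma = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [1., 0., 0.]])            # order-3 automorphism of Z^3
gram = np.eye(3)                            # the bilinear form ( | )
eps = np.exp(2j * np.pi / N)

form = lambda x, y: x @ gram @ y
power = lambda k: np.linalg.matrix_power(sigma, k)
pi = lambda j: sum(eps**(j * k) * power(k) for k in range(N)) / N
# pi(j) projects h onto h_j = {h : sigma h = eps^{-j} h}

alpha = np.array([2., -1., 3.])
beta  = np.array([1.,  0., -2.])
alpha0  = (pi(0) @ alpha).real              # component of alpha in h_0
b_alpha = (form(alpha0, alpha0) - form(alpha, alpha)) / 2        # eq. (dal)

# check of (xalal):  sum_j (j/N)(pi_j alpha | alpha) = -b_alpha
lhs = sum((j / N) * form(pi(j) @ alpha, alpha) for j in range(N))
print(abs(lhs.real + b_alpha) < 1e-10)      # expect True

# check of (calbe): product formula (tig4) vs. exponential expression
n = lambda k: int(round(form(power(k) @ alpha, beta)))
sign = (-1) ** int(round(form(alpha, alpha) * form(beta, beta)))
C_prod = sign * np.prod([(-eps**k) ** (-n(k)) for k in range(N)])
alpha_star = np.linalg.pinv(np.eye(3) - sigma) @ (alpha - alpha0)  # in h_0-perp
C_exp = sign * np.exp(1j * np.pi * form(alpha0, beta)) \
             * np.exp(2j * np.pi * form(alpha_star, beta))
print(abs(C_prod - C_exp) < 1e-8)           # expect True
\end{verbatim}
Both comparisons print \texttt{True} for this sample choice.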
\betagin{lemma}\lbb{lcentg} The center $Z(G)$ of $G$ consists of all elements of the form $c \, e^{2\pi{\mathrm{i}}\lambda_0} U_{(1-\sigma)\lambda}$, where $c{\mathrm{i}}n\mathbb{C}^\tildemes$ and $\lambda{\mathrm{i}}n (Q_{\mathrm{ev}})^*$ is such that $\alpha:=(1-\sigma)\lambda{\mathrm{i}}n Q$ and $\lambda{\mathrm{i}}n Q^*$ if $\alpha{\mathrm{i}}n Q_{\mathrm{ev}}$. \end{lemma} \betagin{proof} If $e^{2\pi{\mathrm{i}} h} U_\alpha$ is in the center, then \eqref{tig2} implies $\alpha_0=0$. Then $\alpha=(1-\sigma)\alpha_*$ for some uniquely defined $\alpha_*{\mathrm{i}}n{\mathfrak{h}}_0^\perp$. Letting $\lambda=h+\alpha_*$, we get $e^{2\pi{\mathrm{i}} h} U_\alpha = e^{2\pi{\mathrm{i}}\lambda_0} U_{(1-\sigma)\lambda}$. Using \eqref{tig2} and \eqref{calbe}, we see that $e^{2\pi{\mathrm{i}}\lambda_0} U_{(1-\sigma)\lambda}$ commutes with $U_\beta$ iff \betagin{equation}\lbb{labe} (\lambda|\beta)+|\alpha|^2|\beta|^2/2 {\mathrm{i}}n\mathbb{Z} \quad \text{for $\beta{\mathrm{i}}n Q$}. \end{equation} Since \betagin{equation*} |\alpha|^2 = ((1-\sigma)\alpha_* | (1-\sigma)\alpha_*) = 2 (\alpha_*|\alpha_*) - 2 (\alpha_*|\sigma\alpha_*) = 2 (\alpha_*|\alpha) = 2 (\lambda|\alpha), \end{equation*} equation \eqref{labe} is equivalent to $\lambda{\mathrm{i}}n Q^*$ if $\alpha{\mathrm{i}}n Q_{\mathrm{ev}}$ and to $\lambda{\mathrm{i}}n (Q_{\mathrm{ev}})^*$ if $\alpha{\mathrm{i}}n Q\setminus Q_{\mathrm{ev}}$. \end{proof} In particular, by \leref{lcentg}, all elements of the form $e^{2\pi{\mathrm{i}}\alpha_0} U_{(1-\sigma)\alpha}$ ($\alpha{\mathrm{i}}n Q$) are central in $G$. We let $G_\sigma$ be the factor of $G$ over the central subgroup \betagin{equation}\lbb{tig5} N_\sigma := \{ \eta(\alpha) U_{\sigma\alpha}^{-1} U_\alpha e^{ 2\pi{\mathrm{i}}(b_\alpha+\alpha_0) } \; | \; \alpha{\mathrm{i}}n Q \} \, . \end{equation} Note that $N_\sigma\cap\mathbb{C}^\tildemes = \{1\}$. We endow $Q$ with the discrete topology so that $G$ and $G_\sigma$ are Lie groups with a Lie algebra $\mathbb{C}{\mathrm{op}}lus{\mathfrak{h}}_0$. By \eqref{tig5}, \eqref{dal} and \eqref{etah0}, we have \betagin{equation}\lbb{tig6} e^{ 2\pi{\mathrm{i}}\alpha } = 1 \quad\text{in $G_\sigma$ for $\alpha{\mathrm{i}}n Q\cap{\mathfrak{h}}_0$}. \end{equation} It is easy to see that the connected component of the unit in $G_\sigma$ is equal to $\mathbb{C}^\tildemes$ times the torus \betagin{equation}\lbb{tsi} T_\sigma := \exp 2\pi{\mathrm{i}}({\mathfrak{h}}_0 / Q\cap{\mathfrak{h}}_0). \end{equation} The group $G$ acts on ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$ by conjugation: \betagin{equation}\lbb{tig7} (c e^h U_\alpha) (h't^m + c'K) (c e^h U_\alpha)^{-1} = h't^m + \partiale_{m,0}(h'_0|\alpha)K + c'K. \end{equation} This action is compatible with the adjoint action of $\mathbb{C}{\mathrm{op}}lus{\mathfrak{h}}_0$ on ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$ (which is trivial), hence, $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G)$ is a {\em Heisenberg pair\/} in the sense of \cite{FK}. The same is true for $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$ because $N_\sigma$ acts trivially on ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$. A module $M$ over ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$ or over $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$ will be called {\em restricted\/} if the action of ${\mathfrak{h}}_0$ is diagonalizable and for any $v{\mathrm{i}}n M$, $(ht^m)v=0$ for $h{\mathrm{i}}n{\mathfrak{h}}$ and sufficiently large $m{\mathrm{i}}n\tfrac1N\mathbb{Z}$. Now we can summarize the results of \seref{sstwvo} as follows. 
\betagin{proposition}\lbb{ptvqm} Any $\sigma$-twisted $V_Q$-module $M$ is naturally a restricted module over the Heisenberg pair $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$ of level $1$ $($i.e., both $K{\mathrm{i}}n{\mathfrak{h}}at{\mathfrak{h}}_\sigma$ and $1{\mathrm{i}}nG_\sigma$ act as $1)$. Conversely, any restricted $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$-module of level $1$ can be endowed with the structure of a $\sigma$-twisted $V_Q$-module. This establishes an equivalence of the corresponding abelian categories. \end{proposition} \betagin{proof} 1. Let $M$ be a $\sigma$-twisted $V_Q$-module. By definition, the action of $L_0^M$ is diagonalizable with finite-dimensional eigenspaces. Since ${\mathfrak{h}}_0$ commutes with $L_0^M$, its action is diagonalizable too. By \eqref{twc3}, the modes $h^M_{(m)}$ ($h{\mathrm{i}}n{\mathfrak{h}}$, $m{\mathrm{i}}n\frac1N\mathbb{Z}$) provide a restricted representation of ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$ of level $1$. The action of $U_\alpha{\mathrm{i}}nG_\sigma$ is given by the operator $U^M_\alpha$, see \eqref{hual}, \eqref{gual}, \eqref{ualube}. 2. Conversely, let $M$ be a restricted $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$-module of level $1$. Denote the image of $ht^m$ in $\End M$ by $h^M_{(m)}$, and that of $U_\alpha$ by $U^M_\alpha$. This allows us to define the fields $Y^M(ht^{-1},z)$ and then $Y^M(e^\alpha,z)$ by \eqref{ymeal} ($h{\mathrm{i}}n{\mathfrak{h}},\alpha{\mathrm{i}}n Q$), They satisfy \eqref{ginv}, \eqref{twcomm}; in particular, they are local. The map $Y^M$ can be extended uniquely to the whole $V_Q$ by applying \eqref{as3} repeatedly for $a{\mathrm{i}}n{\mathfrak{h}} t^{-1}$. Note that \eqref{as3} implies the translation invariance \eqref{twtaa} for $a{\mathrm{i}}n{\mathfrak{h}} t^{-1}$. Then the proof of \leref{lymea} shows that \eqref{twtaa} holds for $a=e^\alpha$, and hence for any $a{\mathrm{i}}n V_Q$. It follows from \reref{rpfcom} that Lemmas \ref{lyalbe1} and \ref{lyalbe2} hold. This implies the associativity \eqref{as2}. By \prref{pdefas}, $M$ is a $\sigma$-twisted $V_Q$-module. \end{proof} \subsection{The Groups $G^\perp$ and $G_\sigma^\perp$}\lbb{sgsgsp} Before we proceed to the classification of all restricted $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$-modules of level $1$, we need to study the groups $G$ and $G_\sigma$ in more detail. Let $G^\perp \subset G$ be the subgroup of $G$ consisting of all $c \, U_\alpha$ with $c{\mathrm{i}}n\mathbb{C}^\tildemes$, $\alpha{\mathrm{i}}n Q\cap{\mathfrak{h}}_0^\perp$. Clearly, the centralizer of ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$ in $G$ equals $\exp{\mathfrak{h}}_0 \tildemes G^\perp$ (cf.\ \eqref{tig7}). In other words, $G^\perp$ is the outer centralizer of the torus $\exp{\mathfrak{h}}_0$ in~$G$. Denote by $G_\sigma^\perp$ the image of $G^\perp$ in $G_\sigma$. It can be described as the factor of $G^\perp$ over the central subgroup (cf.\ \eqref{tig5}, \eqref{dal}) \betagin{equation}\lbb{tigp} N_\sigma^\perp := N_\sigma\capG^\perp = \{ \eta(\alpha)(-1)^{|\alpha|^2} U_{\sigma\alpha}^{-1} U_\alpha \; | \; \alpha{\mathrm{i}}n Q\cap{\mathfrak{h}}_0^\perp \} \, . \end{equation} The centralizer of ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$ (and of $T_\sigma$) in $G_\sigma$ is equal to $T_\sigma \tildemes G_\sigma^\perp$. Notice that $G_\sigma^\perp$ is a central extension (by $\mathbb{C}^\tildemes$) of the finite abelian group $(Q\cap{\mathfrak{h}}_0^\perp) / (1-\sigma)(Q\cap{\mathfrak{h}}_0^\perp)$. 
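For a concrete pair $(Q,\sigma)$ the order of this finite abelian group is the index of $(1-\sigma)(Q\cap{\mathfrak{h}}_0^\perp)$ in $Q\cap{\mathfrak{h}}_0^\perp$, i.e., the absolute value of the determinant of $1-\sigma$ written in a lattice basis of $Q\cap{\mathfrak{h}}_0^\perp$. Below is a minimal Python sketch for the sample choice $Q=\mathbb{Z}^3$ with $\sigma$ the cyclic permutation of order $3$ (an illustration only; here a lattice basis of $Q\cap{\mathfrak{h}}_0^\perp$ is written down by hand, while for general data one would compute it by integer linear algebra, e.g. via the Smith normal form).
\begin{verbatim}
# Order of (Q cap h_0-perp) / (1 - sigma)(Q cap h_0-perp) for a sample choice:
# Q = Z^3 with the standard form, sigma = cyclic permutation of order 3,
# so h_0 = C(1,1,1) and L = Q cap h_0-perp has Z-basis (1,-1,0), (0,1,-1).
import numpy as np

sigma = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]])
L_basis = np.array([[1, -1, 0],
                    [0, 1, -1]]).T           # columns: Z-basis of L

image = (np.eye(3, dtype=int) - sigma) @ L_basis   # (1 - sigma) on that basis
# express the image vectors in the basis of L:  L_basis @ X = image
X = np.linalg.lstsq(L_basis.astype(float), image.astype(float), rcond=None)[0]
print(round(abs(np.linalg.det(X))))          # |L / (1 - sigma)L| = 3 here
\end{verbatim}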
\betagin{definition}\lbb{dzsi} \textup{(i)} Let $P_\sigma$ be the set of all $\lambda$ that appear in \leref{lcentg}, i.e., the set of all $\lambda{\mathrm{i}}n (Q_{\mathrm{ev}})^*$ such that $(1-\sigma)\lambda{\mathrm{i}}n Q$ and $\lambda{\mathrm{i}}n Q^*$ if $(1-\sigma)\lambda{\mathrm{i}}n Q_{\mathrm{ev}}$. Note that $P_\sigma$ is a sublattice of $(Q_{\mathrm{ev}})^*$ containing $Q$. \textup{(ii)} Let $Q_\sigma = (1-\sigma)P_\sigma \subset Q$. \textup{(iii)} Let $Z_\sigma = P_\sigma/Q$ be the subgroup of $((Q_{\mathrm{ev}})^*/Q)^\sigma$ consisting of classes $\lambda+Q$ such that $\lambda{\mathrm{i}}n Q^*$ if $(1-\sigma)\lambda{\mathrm{i}}n Q_{\mathrm{ev}}$. In particular, when the lattice $Q$ is even, $Z_\sigma = (Q^*/Q)^\sigma$ is the group of $\sigma$-invariant elements in $Q^*/Q$. \end{definition} Similarly to \leref{lcentg}, we can describe the centers of $G_\sigma$, $G^\perp$ and~$G_\sigma^\perp$. \betagin{lemma}\lbb{lcgg} \textup{(i)} $Z(G_\sigma) \sigmameq Z(G)/N_\sigma \sigmameq \mathbb{C}^\tildemes \tildemes Z_\sigma$. \textup{(ii)} $Z(G^\perp) = \{ c \, U_\alpha \; | \; c{\mathrm{i}}n\mathbb{C}^\tildemes, \alpha{\mathrm{i}}n Q_\sigma \} \sigmameq \mathbb{C}^\tildemes \tildemes Q_\sigma$. \textup{(iii)} $Z(G_\sigma^\perp) \sigmameq Z(G^\perp) / N_\sigma^\perp \sigmameq \mathbb{C}^\tildemes \tildemes Q_\sigma / (1-\sigma)(Q\cap{\mathfrak{h}}_0^\perp)$. \end{lemma} \betagin{proof} (i) follows from \leref{lcentg}, \eqref{calbe} and the fact that $N_\sigma\cap\mathbb{C}^\tildemes = \{1\}$. (ii) A similar argument as in the proof of \leref{lcentg} shows that the center of $G^\perp$ consists of all elements of the form $c \, U_{(1-\sigma)\lambda}$, where $c{\mathrm{i}}n\mathbb{C}^\tildemes$ and $\lambda{\mathrm{i}}n (Q_{\mathrm{ev}}\cap{\mathfrak{h}}_0^\perp)^*$ is such that $\alpha:=(1-\sigma)\lambda{\mathrm{i}}n Q$ and $\lambda{\mathrm{i}}n (Q\cap{\mathfrak{h}}_0^\perp)^*$ if $\alpha{\mathrm{i}}n Q_{\mathrm{ev}}$. Next, we use the following lemma. \betagin{lemma}\lbb{lproj*} Let ${\mathfrak{h}}'$ be a subspace of ${\mathfrak{h}}$ such that the restriction of the bilinear form on it is nondegenerate. Denote by $\pi'$ the orthogonal projection of ${\mathfrak{h}}$ onto ${\mathfrak{h}}'$. Then for any lattice $L\subset{\mathfrak{h}}$, one has $L^*\cap{\mathfrak{h}}' = (\pi' L)^{*'}$, where $*'$ means that the dual is taken in ${\mathfrak{h}}'$. \end{lemma} \betagin{proof} Follows from the fact that $(h|\alpha)=(h|\pi'\alpha)$ for $h{\mathrm{i}}n{\mathfrak{h}}'$, $\alpha{\mathrm{i}}n L$. \end{proof} Now, by \leref{lproj*}, $(Q\cap{\mathfrak{h}}_0^\perp)^* = \pi_\perp (Q^*)$, and similarly for $Q_{\mathrm{ev}}$, where $\pi_\perp$ is the orthogonal projection from ${\mathfrak{h}}$ to ${\mathfrak{h}}_0^\perp$. Noting that $(1-\sigma) \pi_\perp = 1-\sigma$ completes the proof of part (ii). Part (iii) follows from part (ii), \eqref{calbe} and the fact that $N_\sigma^\perp\cap\mathbb{C}^\tildemes = \{1\}$. \end{proof} \betagin{corollary}\lbb{ccgg} There is a natural exact sequence \betagin{equation}\lbb{zseq} 1 \to \exp 2\pi{\mathrm{i}} (Q^*\cap{\mathfrak{h}}_0) \to Z(G) \overlineerset{p}\to Z(G^\perp) \to 1 \, , \end{equation} where the homomorphism $p$ is given by \betagin{equation}\lbb{zseq2} c \, e^{2\pi{\mathrm{i}}\lambda_0} U_{(1-\sigma)\lambda} \overlineerset{p}\mapsto c \, U_{(1-\sigma)\lambda} \, , \qquad c{\mathrm{i}}n\mathbb{C}^\tildemes \, , \; \lambda{\mathrm{i}}n P_\sigma \, . 
\end{equation} The sequence \eqref{zseq} splits, so we have a non-canonical isomorphism $Z(G) \sigmameq \exp 2\pi{\mathrm{i}} (Q^*\cap{\mathfrak{h}}_0) \tildemes Z(G^\perp)$. \end{corollary} \betagin{proof} Clearly, the kernel of $p$ consists of $e^{2\pi{\mathrm{i}}\lambda_0}$ with $\lambda{\mathrm{i}}n Q^*$ such that $(1-\sigma)\lambda = 0$. Let $s\colon \pi_\perp(P_\sigma) \to P_\sigma$ be a linear section of the projection $\pi_\perp$. For $\lambda'{\mathrm{i}}n\pi_\perp(P_\sigma)$, let $\lambda=s(\lambda'){\mathrm{i}}n P_\sigma$. Then $\lambda'=\pi_\perp(\lambda)$ and $(1-\sigma)\lambda = (1-\sigma)\lambda'$. The map $c \, U_{(1-\sigma)\lambda'} \mapsto c \, e^{2\pi{\mathrm{i}} \lambda_0} U_{(1-\sigma)\lambda}$ is a splitting of \eqref{zseq}. \end{proof} Since $G_\sigma^\perp$ is a central extension of a finite abelian group, its representations are completely reducible and the irreducible ones are classified by the characters of $Z(G_\sigma^\perp)$. We will consider only representations on which $1{\mathrm{i}}nG_\sigma^\perp$ acts as the identity operator. The irreducible ones are classified by the finite abelian group $Q_\sigma / (1-\sigma)(Q\cap{\mathfrak{h}}_0^\perp)$. All of them have the same dimension $d(\sigma)$, which satisfies \betagin{equation}\lbb{ddef1} d(\sigma)^2 = |G_\sigma^\perp/ Z(G_\sigma^\perp)| = |(Q\cap{\mathfrak{h}}_0^\perp) / Q_\sigma| . \end{equation} \betagin{definition}\lbb{ddefect} The non-negative integer $d(\sigma)$ is called the {\em defect\/} of $\sigma$ (cf.~\cite{KP}). \end{definition} \subsection{Representations of\/ $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$}\lbb{srephg} In this subsection we show that the category of all restricted $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$-modules of level $1$ is semisimple, and we classify the irreducible ones. Let ${\mathfrak{h}}at{\mathfrak{h}}_\sigma^-$ (resp.\ ${\mathfrak{h}}at{\mathfrak{h}}_\sigma^+$) be the subalgebra of ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$ consisting of all elements $ht^m$ with $m>0$ (resp.\ $m<0$). It is well known (see, e.g., \cite{K1}) that any restricted ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$-module $M$ of level $1$ is induced from its vacuum subspace \betagin{equation}\lbb{omm} \Omega_M = \{ v{\mathrm{i}}n M \; | \; {\mathfrak{h}}at{\mathfrak{h}}_\sigma^- v=0 \}. \end{equation} More precisely, \betagin{equation}\lbb{omm2} M \sigmameq \Ind^{{\mathfrak{h}}at{\mathfrak{h}}_\sigma}_{{\mathfrak{h}}_0{\mathrm{op}}lus{\mathfrak{h}}at{\mathfrak{h}}_\sigma^-} \Omega_M \sigmameq S({\mathfrak{h}}at{\mathfrak{h}}_\sigma^+)\otimes\Omega_M . \end{equation} The subalgebra ${\mathfrak{h}}_0$ acts on $\Omega_M$ diagonally, and the ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$-module $M$ is completely reducible (it is irreducible iff $\partialim\Omega_M=1$). Now assume that $M$ is a restricted $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$-module of level $1$. It follows from \eqref{tig7} that $\Omega_M$ is a $G_\sigma$-module. By definition, $1{\mathrm{i}}nG_\sigma$ acts as $1$ and the torus $T_\sigma$ acts diagonally (cf.\ \eqref{tsi}). We will call such $G_\sigma$-modules {\em restricted\/}. If $\Omega$ is a restricted $G_\sigma$-module, it has a compatible ${\mathfrak{h}}_0$-action, because ${\mathfrak{h}}_0$ is the Lie algebra of the torus $T_\sigma$. 
We let ${\mathfrak{h}}at{\mathfrak{h}}_\sigma^-$ act trivially on $\Omega$ and form the induced ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$-module $M(\Omega) = \Ind^{{\mathfrak{h}}at{\mathfrak{h}}_\sigma}_{{\mathfrak{h}}_0{\mathrm{op}}lus{\mathfrak{h}}at{\mathfrak{h}}_\sigma^-}\Omega$. Using \eqref{tig7}, we can extend the action of $G_\sigma$ from $\Omega$ to $M(\Omega)$. Then $M(\Omega)$ becomes a restricted $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$-module of level $1$. \betagin{proposition}\lbb{pmomm} The functors $M\mapsto\Omega_M$ and $\Omega\mapsto M(\Omega)$ establish an equivalence of abelian categories between the category of restricted $({\mathfrak{h}}at{\mathfrak{h}}_\sigma,G_\sigma)$-modules of level $1$ and the category of restricted $G_\sigma$-modules. \end{proposition} Therefore we are left with describing restricted $G_\sigma$-modules. \betagin{proposition}\lbb{padmgg} Any restricted $G_\sigma$-module is completely reducible and is determined by the action of the center of $G_\sigma$. Isomorphism classes of restricted irreducible $G_\sigma$-modules are parameterized by the {\rm(}finite{\rm)} set $Z_\sigma$. \end{proposition} Let $\Omega$ be a restricted $G_\sigma$-module. For $\mu{\mathrm{i}}n{\mathfrak{h}}_0$, we denote by $\Omega_\mu$ the weight $\mu$ subspace of $\Omega:$ \betagin{equation}\lbb{ommu} \Omega_\mu := \{ v{\mathrm{i}}n\Omega \; | \; e^h v = e^{(h|\mu)} v \;\;\text{for}\;\; h{\mathrm{i}}n{\mathfrak{h}}_0 \}. \end{equation} Then $(\mu|\alpha){\mathrm{i}}n\mathbb{Z}$ for $\alpha{\mathrm{i}}n Q\cap{\mathfrak{h}}_0$, i.e., $\mu{\mathrm{i}}n(Q\cap{\mathfrak{h}}_0)^* = \pi_0(Q^*)$ by \leref{lproj*}. \betagin{lemma}\lbb{ladmgg1} \textup{(i)} $U_\alpha\Omega_\mu = \Omega_{\mu+\pi_0\alpha}$ for all $\alpha{\mathrm{i}}n Q$, $\mu{\mathrm{i}}n\pi_0(Q^*)$. In particular, the subgroup $G_\sigma^\perp \subset G_\sigma$ preserves each $\Omega_\mu$. \textup{(ii)} Let $\Omega_{\mu+\pi_0(Q)} = \sum_{\alpha{\mathrm{i}}n Q} \Omega_{\mu+\pi_0\alpha} \subset \Omega$. Then $\Omega_{\mu+\pi_0(Q)}$ is a $G_\sigma$-submodule of~$\,\Omega$. \textup{(iii)} The $G_\sigma$-module $\Omega_{\mu+\pi_0(Q)}$ is irreducible if and only if the $G_\sigma^\perp$-module $\Omega_\mu$ is irreducible. \end{lemma} \betagin{proof} (i) It follows from \eqref{tig2} that $U_\alpha\Omega_\mu \subset \Omega_{\mu+\pi_0\alpha}$ for any $\alpha{\mathrm{i}}n Q$. Since by \eqref{tig3} $U_\alpha^{-1}$ is proportional to $U_{-\alpha}$\,, we get $U_\alpha\Omega_\mu = \Omega_{\mu+\pi_0\alpha}$\,. (ii) follows from (i) and the definition of $G_\sigma$ (see \eqref{tig1}--\eqref{tig3}). (iii) First, let $\Omega_\mu$ be an irreducible $G_\sigma^\perp$-module. Assume that $\Lambda$ is a nontrivial $G_\sigma$-submodule of $\Omega_{\mu+\pi_0(Q)}$. Using the action of $T_\sigma$, we can write $\Lambda = \sum_{\alpha{\mathrm{i}}n Q} \Lambda_{\mu+\pi_0\alpha}$ where each $\Lambda_{\mu+\pi_0\alpha} \subset \Omega_{\mu+\pi_0\alpha}$\,. Moreover, $\Lambda_{\mu+\pi_0\alpha} = U_\alpha \Lambda_\mu$. In particular, $\Lambda_\mu$ is a $G_\sigma^\perp$-submodule of $\Omega_\mu$. But $\Omega_\mu$ is irreducible; hence, $\Lambda_\mu = \Omega_\mu$ and $\Lambda = \Omega_{\mu+\pi_0(Q)}$. Conversely, assume that the $G_\sigma^\perp$-module $\Omega_\mu$ is not irreducible. Since $G_\sigma^\perp$ is a central extension of a finite abelian group, its representations are completely reducible. If $\Omega_\mu = \bigoplus_i L_i$ as a $G_\sigma^\perp$-module, let $L^i = \sum_{\alpha{\mathrm{i}}n Q} U_\alpha L_i$. 
Then each $L^i$ is a $G_\sigma$-submodule of $\Omega_{\mu+\pi_0(Q)}$ and $\Omega_{\mu+\pi_0(Q)} = \bigoplus_i L^i$ as a $G_\sigma$-module. \end{proof} \betagin{remark}\lbb{romind} $\Omega_{\mu+\pi_0(Q)}$ is isomorphic to the induced module $\Ind^{G_\sigma}_{T_\sigma \tildemes G_\sigma^\perp} \Omega_\mu$. \end{remark} {}From \leref{ladmgg1}(iii) and its proof, we see that the $G_\sigma$-module $\Omega_{\mu+\pi_0(Q)}$ is completely reducible. Since $\Omega = \bigoplus_{ [\mu]{\mathrm{i}}n\pi_0 (Q^*)/\pi_0(Q) } \Omega_{[\mu]}$, it is also completely reducible. Now let $\Omega$ be an irreducible $G_\sigma$-module. Then $\Omega=\Omega_{\mu+\pi_0(Q)}$ for some $\mu{\mathrm{i}}n\pi_0 (Q^*)$ and the $G_\sigma^\perp$-module $\Omega_\mu$ is irreducible. Any irreducible $G_\sigma^\perp$-module is completely determined by the action of the center $Z(G_\sigma^\perp)$. Let $\zeta\colon Z(G_\sigma^\perp) \to \mathbb{C}^\tildemes$ be the central character of $\Omega_\mu$. We can view $\Omega$ as a $G$-module on which $N_\sigma$ acts trivially, and similarly $\Omega_\mu$ as a $G^\perp$-module with a trivial action of $N_\sigma^\perp$. Recall that, by \leref{lcgg}(iii), $Z(G_\sigma^\perp) \sigmameq Z(G^\perp) / N_\sigma^\perp$, so we can extend $\zeta$ to a character of $Z(G^\perp)$. If $\mu'=\mu+\pi_0\alpha$ for some $\alpha{\mathrm{i}}n Q$, then $\Omega_{\mu'}=U_\alpha\Omega_\mu$ and $\Omega_{\mu+\pi_0(Q)}=\Omega_{\mu'+\pi_0(Q)}$. For $v{\mathrm{i}}n\Omega_\mu$, $U_\beta{\mathrm{i}}n Z(G^\perp)$, $\beta{\mathrm{i}}n Q_\sigma$ (see \leref{lcgg}(ii)), we have: $U_\beta U_\alpha v = C_{\alpha,\beta}^{-1} U_\alpha U_\beta v = C_{\alpha,\beta}^{-1} \zeta(U_\beta) U_\alpha v$ where $C_{\alpha,\beta}$ is given by \eqref{tig4}. Hence, two pairs $(\mu,\zeta)$ and $(\mu',\zeta')$ correspond to the same irreducible $G_\sigma$-module if and only if they are related by: \betagin{equation}\lbb{muze} \mu'=\mu+\pi_0\alpha \,, \quad \zeta'(U_\beta) = C_{\alpha,\beta}^{-1} \zeta(U_\beta) \,, \qquad \alpha{\mathrm{i}}n Q \,, \; \beta{\mathrm{i}}n Q_\sigma \,. \end{equation} For $\lambda{\mathrm{i}}n P_\sigma$ the element $e^{2\pi{\mathrm{i}}\lambda_0} U_{(1-\sigma)\lambda} {\mathrm{i}}n Z(G)$ acts on $\Omega_\mu$ as the scalar \linebreak $e^{2\pi{\mathrm{i}}(\lambda_0|\mu)} \zeta(U_{(1-\sigma)\lambda})$ (cf.\ Lemmas \ref{lcentg} and \ref{lcgg}(ii)). Using \coref{ccgg} and \leref{lproj*}, it is easy to see that the action of $Z(G)$ on $\Omega$ determines uniquely the equivalence class of $(\mu,\zeta)$ under \eqref{muze}, and hence it determines the isomorphism class of the $G_\sigma$-module $\Omega$. Conversely, different pairs $(\mu,\zeta)$ give rise to different actions of $Z(G)$ on the corresponding modules $\Omega_{\mu+\pi_0(Q)}$. This completes the proof of \prref{padmgg}. \subsection{Classification of $\sigma$-Twisted $V_Q$-Modules}\lbb{sclastwm} Combining Propositions \ref{ptvqm}, \ref{pmomm} and \ref{padmgg}, we obtain the main result of the paper. \betagin{theorem}\lbb{ttwvqm} The category of $\sigma$-twisted $V_Q$-modules is a semisimple abelian category with finitely many isomorphism classes of simple objects, parameterized by the set $Z_\sigma$. 
\end{theorem} \betagin{remark}\lbb{racth} The irreducible $\sigma$-twisted $V_Q$-module corresponding to $\lambda+Q{\mathrm{i}}n Z_\sigma$ is isomorphic as an ${\mathfrak{h}}at{\mathfrak{h}}_\sigma$-module to $S({\mathfrak{h}}at{\mathfrak{h}}_\sigma^+) \otimes e^{\lambda_0}\mathbb{C}[\pi_0 Q] \otimes\mathbb{C}^{d(\sigma)}$, where $\mathbb{C}$ carries the zero action and $d(\sigma)$ is the defect of $\sigma$. \end{remark} \betagin{thebibliography}{FLM} \bibitem{BKT} B.~Bakalov, V.~G.~Kac, and I.~T.~Todorov, {\textit{Orbifolds of lattice vertex algebras}}, in preparation. \bibitem{BPZ} A.~A.~Belavin, A.~M.~Polyakov, and A.~B.~Zamolodchikov, {\em{Infinite conformal symmetry in two-dimensional quantum field theory}}, Nuclear Phys. {\bf B 241} (1984), no. 2, 333--380. \bibitem{B1} R.~E.~Borcherds, {\em{Vertex algebras, Kac--Moody algebras, and the Monster}}, Proc. Nat. Acad. Sci. U.S.A. {\bf 83} (1986), no. 10, 3068--3071. \bibitem{CGT} A.~Cappelli, L.~S.~Georgiev, and I.~T.~Todorov, {\textit{ A unified conformal field theory description of paired quantum Hall states}}, Comm. Math. Phys. {\bf 205} (1999), 657--689. \bibitem{DVVV} R.~Dijkgraaf, C.~Vafa, E.~Verlinde, and H.~Verlinde, {\textit{The operator algebra of orbifold models}}, Comm. Math. Phys. {\bf 123} (1989), 485--526. \bibitem{DHVW} L.~Dixon, J.~A.~Harvey, C.~Vafa, and E.~Witten, {\textit{String on orbifolds}}, Nucl. Phys. {\bf B261} (1985), 620--678; {\textit{String on orbifolds. II}}, Nucl. Phys. {\bf B274} (1986), 285--314. \bibitem{D} C.~Dong, {\em{Twisted modules for vertex algebras associated with even lattices}}, J. Algebra {\bf 165} (1994), 91--112. \bibitem{DL} C.~Dong and J.~Lepowsky, {\em{Generalized vertex algebras and relative vertex operators}}, Progress in Math., 112. Birkh\"auser Boston, 1993. \bibitem{FK} I.~B.~Frenkel and V.~G.~Kac, {\em{Basic representations of affine Lie algebras and dual resonance models}}, Invent. Math. {\bf 62} (1980/81), no. 1, 23--66. \bibitem{FLM} I.~B.~Frenkel, J.~Lepowsky and A.~Meurman, {\em{Vertex operator algebras and the Monster}}, Pure and Appl. Math., vol. 134, Academic Press, Boston, 1988. \bibitem{G} P.~Goddard, {\textit{Meromorphic conformal field theory}}, in ``Infinite-dimensional Lie algebras and groups'' (Luminy-Marseille, 1988), 556--587, Adv. Ser. Math. Phys., vol.\ 7, World Sci. Publishing, Teaneck, NJ, 1989. \bibitem{K1} V.~G.~Kac, {\textit{Infinite-dimensional Lie algebras}}, 3rd edition, Cambridge University Press, Cambridge, 1990. \bibitem{K} V.~G.~Kac, {\em{Vertex algebras for beginners}}, AMS University Lecture Series, vol. 10, 1996. 2nd edition, 1998. \bibitem{KP} V.~G.~Kac and D.~H.~Peterson, {\em{$112$ constructions of the basic representation of the loop group of $E_8$}}, Symposium on anomalies, geometry, topology (Chicago, Ill., 1985), 276--298, World Sci. Publishing, Singapore, 1985. \bibitem{KT} V.~G.~Kac and I.~T.~Todorov, {\em{Affine orbifolds and rational conformal field theory extensions of $W_{1+{\mathrm{i}}nfty}$}}, Comm. Math. Phys. {\bf 190} (1997), 57--111. \bibitem{R} M.~Roitman, {\em{On twisted representations of vertex algebras}}, Adv. Math. {\bf 176} (2003), 53--88. \bibitem{X} X.~Xu, {\em{Twisted modules of coloured lattice vertex operators superalgebras}}, Quart. J. Math. Oxford Ser. (2) {\bf 47} (1996), 233--259. \end{thebibliography} \end{document}
\begin{document} \begin{abstract} Interest in Conformal Field Theories and Quantum Field Theory led physicists to consider configuration spaces of marked points on the complex projective line, $Conf_{0,d}(\mathbb{P})$. In this paper, a real semi-algebraic stratification of $Conf_{0,d}(\mathbb{C})$, invariant under a Coxeter-Weyl group, is constructed, using the natural relation of this configuration space with the space $\mathop{{}^{\textsc{D}}\text{Pol}}_d$ of complex monic degree $d>0$ polynomials in one variable with simple roots. This decomposition relies on subsets of $\mathop{{}^{\textsc{D}}\text{Pol}}_d$ forming a good cover in the sense of \v Cech of $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$ and such that each piece of the decomposition is a set of polynomials, indexed by a decorated graph reminiscent of {\it Grothendieck's dessins d'enfants}. This decomposition in Coxeter-Weyl chambers brings to light a very deep interaction between the {\it real locus} of the moduli space $\overline{\mathcal{M}}_{0,d}(\mathbb{R})$ and {\it the complex} one $\overline{\mathcal{M}}_{0,d}(\mathbb{C})$. Using this decomposition, the existence of geometric invariants of those configuration spaces is shown. Many examples are provided. Applications of these results in braid theory are discussed, namely for the braid operad. \end{abstract} \maketitle \tableofcontents \section{Introduction} Interest in Conformal Field Theories and Quantum Field Theory led physicists to consider configuration spaces of marked points on the complex projective line, $Conf_{0,d}(\mathbb{P})$, and the moduli spaces of marked points on genus $0$ curves. The properties of moduli spaces of genus 0 curves with $d$ unordered marked points $\mathcal{M}_{0,d}(\mathbb{C})$ are still an important subject of investigation. Those spaces lie at the heart of challenging problems related to the calculation of Gromov-Witten invariants, which is why particular importance is given to them. In this paper, a Coxeter-Weyl semi-algebraic stratification of this moduli space is given, highlighting the existence of new topological invariants of the space of configurations of $n$ marked points on the complex plane. We use the natural relation between the moduli space $\mathcal{M}_{0,d}(\mathbb{C})$ and the space $\mathop{{}^{\textsc{D}}\text{Pol}}_d$ of complex, monic, degree $d>0$ polynomials in one variable, having simple roots with sum equal to zero (i.e. Tschirnhausen polynomials)~\cite{C0}: $\mathcal{M}_{0,[d]}$ is the quotient of the $d$-th {\it unordered} configuration space on the complex plane modulo the group $PSL_{2}(\mathbb{C})$, and the configuration space can be considered as the space $\mathop{{}^{\textsc{D}}\text{Pol}}_d$, due to the fundamental theorem of algebra. This approach gives a new insight into $\mathcal{M}_{0,[d]}(\mathbb{C})$ and $\mathcal{M}_{0,[d]}(\mathbb{R})$, since new geometric invariants of configurations of points on $\mathbb{C}$ are given. Moreover, since this decomposition is invariant under a Coxeter-Weyl group, it brings to light a very deep relation between the approach used to consider the real locus of the moduli space of marked points on the sphere $\mathcal{M}_{0,d}(\mathbb{R})$ and the classical complex one $\mathcal{M}_{0,d}(\mathbb{C})$. As an application to braid theory, it gives an alternative way to describe braid generators and to construct the braid operad.
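The relation with $\mathop{{}^{\textsc{D}}\text{Pol}}_d$ mentioned above is completely explicit: a configuration of $d$ distinct points of $\mathbb{C}$ determines the monic polynomial having these points as simple roots, and translating the variable so that the sum of the roots vanishes (equivalently, so that the coefficient of $z^{d-1}$ is zero) produces the associated Tschirnhausen polynomial. The following Python sketch illustrates this normalisation on an arbitrary sample configuration (the numerical values are not taken from the text):
\begin{verbatim}
# From a configuration of d distinct points in C to its Tschirnhausen polynomial:
# take the monic polynomial with these roots, then translate the variable so that
# the sum of the roots (the coefficient of z^{d-1}, up to sign) vanishes.
import numpy as np

config = np.array([1 + 2j, -0.5 + 0.3j, 2 - 1j])   # an arbitrary configuration,
                                                    # d = 3
shift = config.sum() / len(config)                  # barycentre of the roots
roots = config - shift                              # now sum(roots) == 0

coeffs = np.poly(roots)                             # monic, highest degree first
print(np.round(coeffs, 10))
assert abs(coeffs[1]) < 1e-12                       # coefficient of z^{d-1} is 0
\end{verbatim}
Permuting the points of the configuration does not change the polynomial, which is one reason to work with unordered configurations.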
To prove the existence of new geometric invariants, we introduce a decomposition of $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$, which leads to the construction of a good cover in the sense of \v Cech. The nerve of this covering provides the new {\it geometric invariant}. It turns out that those geometric invariants have many symmetries, in particular polyhedral ones. This decomposition is based on the Tits-Bruhat-Deligne theory of chambers and galleries. In the following, by {\it Weyl-Coxeter chamber}~\cite{De},~\cite{Bki} (Chap. IX, Sect. 5.2) we mean a fundamental domain, along with reflection hyperplanes. The method we present has the following advantageous property: namely, it allows one to define any braid relation in $B_{d}$ as a path in a {\it Weyl-Coxeter gallery}. The study of the configuration spaces with $d$ marked points in the complex plane was initiated early on, in 1960-1970, by V. Arnold and further developed by the Russian school: V. Arnold, V. Goryunov, O. Lyashko, A. Vassiliev, D. Fuchs, S. Chmutov, S. Duzhin, J. Mostovoy~\cite{Ar70,AVG1,AVG2,CH,Mo}. During the last 50 years, the compactified moduli space $\overline{\mathcal{M}}_{0,[n]}(\mathbb{C})$ has been studied essentially from a complex geometry point of view. This point of view was introduced by Deligne-Mumford~\cite{DeM}, then developed by Knudsen~\cite{Knu} and Keel~\cite{Kee}. Other versions were provided, for instance by Kapranov~\cite{Ka93v1}, using Tits-Bruhat buildings, and by Fulton-MacPherson~\cite{FM}. Approaches using real geometry have until now been considered only in the case of real {\it ordered} points, i.e. for $\overline{\mathcal{M}}_{0,n}(\mathbb{R})$. Namely, in~\cite{Dev}, Devadoss introduces his mosaic tessellation in order to construct the mosaic operad. In~\cite{EHKR}, the real cell decomposition is used to consider the cohomology of the real locus of those moduli spaces. Some other results concerning this real locus are due to Khovanov~\cite{Kho96} and Davis, Januszkiewicz and Scott~\cite{DJS}. The decomposition we provide, using the real-algebraic geometry point of view, allows a rich description of the open algebraic variety $\mathcal{M}_{0,[n]}(\mathbb{C})$, which is {\it not} visible in the framework of the previous decompositions. However, interesting connections exist with the mosaic tessellation of $\overline{\mathcal{M}}_{0,n}(\mathbb{R})$ and with the one given by Kapranov~\cite{Ka93v1,Ka93}. Namely, the generic cells of our decomposition are in bijective correspondence with pairs of all $d$ bracketings and are, therefore, related to the associahedron and to the Stasheff polytope~\cite{Sta}. Note that the main difference in the combinatorial structure is due to the fact that the letters in the bracketings do not matter in our new decomposition, since we consider the unordered points case. The present paper grew out of the previous works~\cite{NAC,C0,C1,Co1}, where we brought to light the existence of a topological real algebraic stratification, obtained through the notion of {\sl drawings} $\mathcal{C}_{P}$ of a polynomial $P$. Such an object is reminiscent of Grothendieck's {\it dessins d'enfants}~\cite{Gro84} in the sense that we consider the inverse image of the real and imaginary axes under a complex polynomial.
The {\sl drawing} associated to a complex polynomial is, by convention, a system of blue and red curves properly embedded in the complex plane, namely the inverse image under the polynomial $P$ of the union of the real axis (colored in blue) and the imaginary axis (colored in red)~\cite{C1}, $\mathcal{C}_{P}=P^{-1}(\mathbb{R}\cup \imath \mathbb{R})$. For a polynomial $P\in \mathop{{}^{\textsc{D}}\text{Pol}}_{d}$, the drawing contains $d$ blue and $d$ red curves, each blue curve intersecting exactly one red curve. The entire drawing forms a forest (in terms of graphs), whose leaves (terminal vertices) go to infinity in the asymptotic directions of the angle $\pi/2d$. The idea of considering the configuration space as a space of $\mathbb{C}$-polynomials goes back to V. Arnold~\cite{Ar70}. In 1992, S. Barannikov~\cite{Ba} restudied problems concerning this space of polynomials in the light of works of Gauss~\cite{Gauss}. More precisely, he used Gauss's approach to the study of complex polynomials (a polynomial is uniquely determined by taking the inverse image under this polynomial of the real and imaginary axes). This approach was recently used by E. Ghys (see~\cite{GhysA,Ghys}, p. 72) to discuss a question asked by M. Kontsevitch in 2009 on the intersection of polynomials, which led to the following theorem: four polynomials $P_1,P_2,P_3,P_4$ of a real variable $x$ cannot satisfy: \begin{itemize} \item $P_1(x)<P_2(x)<P_3(x)<P_4(x)$ for small $x<0$, \item $P_2(x)<P_4(x)<P_1(x)<P_3(x)$ for small $x>0$. \end{itemize} N. A'Campo introduces in~\cite{NAC} the idea of using the combinatorics of bi-colored forests to construct a real semi-algebraic decomposition of the space of complex polynomials with distinct roots. A'Campo's construction is based on equivalence classes (in the sense of Barannikov) of Gauss drawings. He also shows that each equivalence class is represented by a polynomial and vice versa, and proves that such an equivalence class forms a contractible set in $\mathop{{}^{\textsc{D}}\text{Pol}}_d$. In this paper, the isotopy classes of polynomial drawings, relative to the $4d$ asymptotic directions, generate our construction. In order to avoid any confusion, we call these isotopy classes {\it elementa}. This decomposition into elementa allows us not only to define a {\it topological stratification} of $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$ but also to construct a good \v Cech cover of $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$~\cite{C1,Co1}. This approach can be applied to braid theory. It is known that the fundamental group of the configuration space of $d$ unordered marked points on the complex plane is the braid group on $d$ strands. Therefore, any braid relation can be investigated in a new manner, using the decomposition in Weyl-Coxeter chambers. In particular, we bring a new insight into braid towers, obtained using the natural inclusions $B_{d}\to B_{d+1}$. In addition, this gives a new insight into the braid operad. An operad $O_{*} = \{O_{k},k \geq 1\}$ is a collection of spaces together with composition maps $ O_{n} \times O_{i_{1}} \times \dots \times O_{i_{n}} \to O_{i} $ (where $i = \sum_{k=1}^{n} i_{k}$) satisfying certain axioms; for braid groups, cabling \[ B_d \times B_{i_{1}} \times\dots\times B_{i_{d}}\to B_{i}, \qquad i=\sum_{k=1}^{d} i_k,\] serves as the composition. Moreover, this decomposition is also good in the sense of \v Cech. So, as a second application, one can explicitly calculate the cohomology of the braid group with values in a sheaf.
This information turns out to be important since, by~\cite{KSV}, it is known that monoidal functors preserve operads; hence the homology of an operad in spaces is an operad in graded modules. The paper is composed of four main parts. In section 2, we introduce the notions of elementa, signatures and strata, as well as their properties. We also discuss the Whitehead moves on signatures. In section 3, we investigate the classification of generic signatures. We show that there exist four classes of generic signatures: $M,F,S$ and $FS$. Two elementa are incident if the signature of one can be obtained from the other one by a so-called {\it half-Whitehead move} (this is a topological operation on the red (or blue) edges of a given signature, modifying one signature into the other one). This incidence relation on elementa is deeply connected to the topological closure of each topological stratum. In section 4, we prove that the elementa are endowed with an incidence relation, which forms a strictly partially ordered set. We introduce the notion of inclusion diagram of this partially ordered set and show that this inclusion diagram is the \v Cech nerve~\cite{Ce,C1}. This gives a geometric invariant. The construction is based on properties of generic elementa, in such a way that each vertex corresponds to a generic elementum, each edge corresponds to an elementum of codimension 1 and each 2-face to an elementum of codimension 2 (etc). The incidence relations between the elementa are preserved on such a diagram. The construction of the inclusion diagram is illustrated for low degrees: $d=2,3,4$. In particular, we pay attention to the case $d=4$, which carries a very rich geometrical structure. In section 5, we prove the main statement: the decomposition is invariant under a Coxeter group, and we present a method of construction. We prove the main theorem: \begin{theorem}[Main theorem] Let $d>3$. The decomposition of $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$ into $4d$ Weyl-Coxeter chambers, induced by the topological stratification of $\mathop{{}^{\textsc{D}}\text{Pol}}_d$ into elementa, is invariant under the Coxeter group given by the presentation: \[W=\langle r, s_1,...,s_{2d+1} | r^2= Id, (rs_j)^{2p}=Id, p|d\rangle, p\in \mathbb{N}^{*}\] \end{theorem} Note that this structure is reminiscent of translation surfaces. As an application to braid theory, we show that: \begin{theorem} Any braid relation following from the natural inclusion of braids $i^{\star}: B_d \hookrightarrow B_{d+1}$ can be described using the decomposition in $Q$-pieces. \end{theorem} Braids and the natural inclusions $i^{\star}: B_d \hookrightarrow B_{d+1}$ can be read from this geometric decomposition in chambers and galleries. Moreover, this method can be used to give an alternative description of the braid operad. \section{Elementa, signatures and classes of polynomials} The isotopy classes of drawings of polynomials of $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$, relative to their $4d$ asymptotic directions, are the basic objects of our decomposition of this space. So, in the following, we call them {\it elementa}. Since to each drawing corresponds a unique polynomial $P\in \mathop{{}^{\textsc{D}}\text{Pol}}_{d}$, to an elementum $A$ corresponds a subset of $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$, which will be denoted by the same symbol.
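Before turning to the combinatorial definitions, let us note that the drawing $\mathcal{C}_{P}=P^{-1}(\mathbb{R}\cup \imath \mathbb{R})$ of a given polynomial can be visualised directly: the blue curves form the zero set of $Im(P)$ and the red curves the zero set of $Re(P)$. The following Python sketch plots such a drawing; the cubic polynomial below is an arbitrary choice, used only as an illustration:
\begin{verbatim}
# Plot the drawing C_P = P^{-1}(R union iR) of a polynomial P:
# blue curves: {Im P = 0} (preimage of the real axis),
# red curves:  {Re P = 0} (preimage of the imaginary axis).
import numpy as np
import matplotlib.pyplot as plt

def P(z):                                   # an arbitrary cubic with simple roots
    return (z + 0.5 - 0.5j) * (z + 0.5 + 0.5j) * (z - 0.2 - 0.6j)

x = np.linspace(-2, 2, 800)
y = np.linspace(-2, 2, 800)
X, Y = np.meshgrid(x, y)
W = P(X + 1j * Y)

plt.contour(X, Y, W.imag, levels=[0], colors="blue")   # blue curves of the drawing
plt.contour(X, Y, W.real, levels=[0], colors="red")    # red curves of the drawing
plt.gca().set_aspect("equal")
plt.show()
\end{verbatim}
Deforming the coefficients of $P$ and watching how these curves recombine gives a direct way of observing the Whitehead moves discussed below.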
\begin{definition} A chord diagram of degree $d$ consists of an oriented circle with $2d$ distinct points in counterclockwise order and a distinguished set of $d$ disjoint pairs of points. Every two points belonging to the same pair are joined by a chord (diagonal). A forest-chord diagram is a chord diagram such that the intersection of chords cannot form any $k$-gon in the disc, but only a vertex of valency $2k$. \end{definition} \begin{definition}[Signature]~\label{De:1} A degree $d$ signature is a forest embedded in a unit disc $\mathbb{D}$, with points (the terminal vertices) $S=\{0,1,2,.., 4d-1\}$ on $\partial\mathbb{D}$ lying on the $4d$ roots of unity, which is obtained by a superposition of two degree $d$ forest-chord diagrams: one on the even points $B=\{0,2,\dots,4d-2\}$, called the blue forest-chord diagram, and the other one on the odd points $R=\{1,3,\dots,4d-1\}$, called the red forest-chord diagram, with the constraint that red and blue forest chords intersect at $d$ points, in such a way that each blue chord intersects exactly one red chord. \end{definition} Let us notice that a signature of degree $d$ has three types of vertices: \[\mathcal{V}=\{v, \bar{v},\bar{\bar{v}}\mid\, v\in\partial{\mathbb{D}},\,\bar{v},\bar{\bar{v}}\in int(\mathbb{D}),\, |v|=4d,\, |\bar{v}|=d,\, 0\leq |\bar{\bar{v}}|\leq d-1\},\] with $ deg(v)=1,\, deg(\bar v)=4,\, 0\leq deg(\bar{\bar{v}})\leq 2d.$ \begin{figure} \caption{An example of a signature of degree 6. The vertices $v$ are indexed by numbers, $\bar v$ by black points, and $\bar{\bar{v}}$ are the remaining inner vertices.} \end{figure} Notice that drawings are geometrical objects, while signatures are combinatorial ones. Indeed, a drawing $\mathcal{C}_{P}$ characterizes the polynomial $P$ completely: the coordinates of its roots, of its critical points, and the singularities of its real and imaginary parts. A signature characterizes only the isotopy class; to each signature $\sigma$ corresponds an elementum, denoted $A_{\sigma}$. In a signature, the blue and red lines (drawn in a curved way) capture only the asymptotic directions of the drawing and the localisation of the critical points and critical values of the classes of polynomials indexed by this signature. \begin{definition}[Short and long chords]~\label{D:diagonalsij} Let $\sigma$ be a signature. A chord (diagonal) connecting a pair of terminal vertices $i,j\in \{0,...,4d-1\}$ of the same parity is denoted by $(i,j)$. \begin{itemize} \item A chord is short if $|i-j|=2$. \item A chord is long if it is not short. \end{itemize} \end{definition} \begin{figure} \caption{An example of a long blue chord $(0,10)$, intersecting a short red chord $(1,15)$.} \end{figure} \begin{definition} A signature is generic if it does not contain any vertex $\bar{\bar{v}}$. An elementum $A_{\sigma}$ is generic if the signature $\sigma$ is generic. \end{definition} \begin{definition}[Codimension]\label{D:cod} The {\rm real codimension} of an elementum $A_{\sigma}$ is the sum of the local indices of all the vertices $\bar{\bar{v}}$ of the signature $\sigma$. The local index $Ind_{loc}(\bar{\bar{v}})$ at a vertex $\bar{\bar{v}}$ is the number $Ind_{loc}(\bar{\bar{v}})=2deg(\bar{\bar{v}})-3$. \end{definition} All the generic elementa are of codimension 0. \begin{definition}\label{D:suc} Let $\sigma$ be a signature. \begin{itemize} \item A 2-cell in $\mathbb{D}\setminus\sigma$ contains in its boundary a set of edges of a tree. Such a tree is said to bound this 2-cell. \item Two trees are adjoining if they lie in the boundary of the same 2-cell in $\mathbb{D}\setminus \sigma$.
\item Consider a pair of chords of the same color in a generic signature: $(i,j)$ and $(k,l)$. We call them successive if their terminal vertices satisfy one of the following conditions: $|i-k|=2$ or $|i-l|=2$ or $|j-k|=2$ or $|j-l|=2$. \end{itemize} \end{definition} \begin{proposition}[\cite{C0,CoJ}] For every $d$, the number of elementa is finite. \end{proposition} Let us introduce the topological operations on signatures, the half-Whitehead moves, which allow us to define the topological closure of $A_{\sigma}$. \begin{definition}[\cite{C0}]~\label{D:2.18} A half-Whitehead move is a topological operation on the chords of a signature, carried out in the following way: \begin{enumerate} \item A {\it contracting half-Whitehead move} on a signature $\sigma_1$ is a gluing of $m>0$ chords of the same color at one point $\bar{\bar{v}}$ such that the obtained diagram is a signature $\sigma$. \item A {\it smoothing half-Whitehead move} on a signature $\sigma$ is an unsticking of $m>0$ chords at a point $\bar{\bar{v}}$ such that the obtained diagram is a signature $\sigma_{2}$. \end{enumerate} \end{definition} \begin{definition}\label{D:band} Let $\sigma_1$ and $\sigma_2$ be two different signatures of the same codimension. A {\it Whitehead move} $\delta$ is the composition of a contracting and a smoothing half-Whitehead move, starting at $\sigma_1$ and ending at $\sigma_2$. Two signatures differing by a Whitehead move are called adjacent signatures and are denoted $\sigma_1\leftrightarrow \sigma_2$. \end{definition} Notice that $\leftrightarrow$ is reflexive but not necessarily transitive. Let us illustrate below a composition of a contracting and a smoothing half-Whitehead move for a pair of chords. \begin{figure} \caption{Whitehead move as a contracting half-Whitehead move and a smoothing half-Whitehead move} \end{figure} \begin{example} We illustrate a Whitehead move (cf. Definition~\ref{D:band}) on a pair of red chords. \begin{itemize} \item The two generic signatures for $d=2$ (on the left and on the right of Figure~\ref{F:WHR}) are incident to a codimension 1 signature (in the middle of the figure). \begin{figure} \caption{Whitehead move on a pair of red chords} \label{F:WHR} \end{figure} \item The Whitehead move corresponds to a deformation of the coefficients of a given polynomial of degree 3. In the following example, we consider the deformation of the blue curves ($Im(P)=0$) of the drawing of the polynomial $P_{1}$: \[\begin{aligned} P_{1}&=(z+0.5-0.5\imath)(z+0.5+0.5\imath)(z-0.2-0.6\imath),\\ P_{2}&=(z+0.5-0.5\imath)(z+0.5+0.5\imath)(z+0.6-0.4\imath),\\ P_{3}&=(z+0.5-0.5\imath)(z+0.5+0.5\imath)(z+0.5-0.1\imath) \end{aligned}\] \begin{figure} \caption{Whitehead move of $Im(P)=0$ of monic polynomials} \end{figure} \end{itemize} \end{example} \section{Classification of signatures and adjacency relations} \subsection{Matrix notation of signatures and classification of elementa} For convenience, we shall represent the diagrams and signatures {\it algebraically}, using a matrix notation. Without loss of generality we focus on generic signatures, and the matrix notation will be defined only for generic diagrams. Note that in that case, signatures are formed from $d$ trees, each having one inner node of valency 4, whose four incident edges are colored respectively red, blue, red, blue. The matrix has 2 lines and at most $d$ columns; each column consists of one pair $(i,j)$, indicating the existence of a chord connecting the terminal vertices $i$ and $j$.
Chords having terminal vertices of the same parity lie on the same line in the matrix. For instance, the matrix $\left[\begin{smallmatrix}i,j\\k,l\end{smallmatrix}\right]$ indicates that there exists a chord of a given color, connecting the terminal vertices $i$ with $j$ and another chord of the other color, connecting the terminal vertices $k$ with $l$. In case we have one short chord in the diagram $(j-1,j+1)$ intersecting a chord $(i,j)$ of the opposite color, we write it as $\left|\begin{smallmatrix}j\\i\end{smallmatrix}\right|$. \begin{remark} Each signature is associated to one unique matrix. \end{remark} \subsubsection{Generic elementa} The generic elementa can be classified by the signatures in the following way~\cite{C0}: \begin{enumerate} \item Trees of signatures consisting of two short (red and blue) diagonals are said to be of type $M$. An $M$ tree is denoted by $\left|\begin{smallmatrix}i\\i+2\end{smallmatrix}\right|$, where the number on the first line indicates that there exists a short diagonal $(i+1,i+3)$ crossing the diagonal $(i,i+2)$. A signature with only $M$ trees is called an $M$-signature. Note, that {\it there exist only four} $M$-signatures, for any $d$: \begin{enumerate} \item $M_1\leftrightarrow$ $\left|\begin{smallmatrix}3\\1\end{smallmatrix}\right|\left|\begin{smallmatrix}7\\5\end{smallmatrix}\right|...\left|\begin{smallmatrix}4d-1\\4d-3\end{smallmatrix}\right|$. \item $M_2 \leftrightarrow$ $\left|\begin{smallmatrix}1\\3\end{smallmatrix}\right|\left|\begin{smallmatrix}5\\7\end{smallmatrix}\right|...\left|\begin{smallmatrix}4d-3\\4d-1\end{smallmatrix}\right|$. \item $M_3 \leftrightarrow$ $\left|\begin{smallmatrix}3\\5\end{smallmatrix}\right|\left|\begin{smallmatrix}7\\9\end{smallmatrix}\right|...\left|\begin{smallmatrix}1\\4d-1\end{smallmatrix}\right|$. \item $M_4 \leftrightarrow$ $\left|\begin{smallmatrix}5\\3\end{smallmatrix}\right|\left|\begin{smallmatrix}9\\7\end{smallmatrix}\right|...\left|\begin{smallmatrix}4d-1\\1\end{smallmatrix}\right|$. \end{enumerate} \item Trees consisting of one short and one long diagonal are of type $F$. An $F$ tree is denoted by $\left|\begin{smallmatrix}j\\i\end{smallmatrix}\right|$ where $j$ and $i$ are labels of the terminal vertices of the long diagonal; the number $j$, indicates that there exists a short diagonal of the opposite color joining the vertex $j-1$ to $j+1$. Two $F$ signatures are opposite if they have opposite $F$-trees: i.e. if their indexes are switched. For instance $F^{+}=\left|\begin{smallmatrix}j\\i\end{smallmatrix}\right|$ and $F^{-}=\left|\begin{smallmatrix}i\\j\end{smallmatrix}\right|$ are opposite. A signature with only $M$ and $F$ trees is called an $F$-signature. A signature of type $F^{\otimes m}$ has exactly $m$ trees of type $F$ (and other of type $M$). \item Trees consisting of two long diagonals are of type $S$. An $S$ tree is denoted by $\left[\begin{smallmatrix}i,j\\k,l\end{smallmatrix}\right]$, where the first line gives the coordinates of a long diagonal $(i,j)$, the second line gives the coordinates of the second long diagonal $(k,l)$. A signature with only $M$ and $S$ trees is called an $S$-signature. As for the previous family of signatures, a signature is of type $S^{\otimes m}$ if there exists $m$ trees of type $S$, the other are $M$ trees. We focus on $S$ trees given by pairs of diagonals $(i,j)$ and $(i+1,j+1)$ or $(i,j)$ and $(i-1,j-1)$ and call the {\it narrow} $S$ trees. If we have a signature $S^{\otimes d-2}$ then all the $S$ trees are narrow. 
A pair of $S$-trees are opposite if the matrices are of type $S^{+}=\left[\begin{smallmatrix}i,j\\i+1,j-1\end{smallmatrix}\right]$ and $S^{-}=\left[\begin{smallmatrix}i,j\\i-1,j+1\end{smallmatrix}\right]$. \item The combination of $F$, $S$ and $M$ trees gives an $FS$-signature. \end{enumerate} The figure~\ref{F:MFS} presents examples of signatures of type $M, F, S$ and $FS$, for $d=4$. Each signature is indexed by its algebraic notation and by its contracted notation. In the contracted notation the parenthesis $\left(\begin{smallmatrix}j\\j\pm1\end{smallmatrix}\right)$ replaces the sequence of repetitive motifs and the orientation. \begin{figure} \caption{Examples of $M, F,S$ and $FS$ trees and their matrices for $d=4$} \label{F:MFS} \end{figure} \subsubsection{Classification of the signatures for $d=3$} In this subsection we consider for the case of $d=3$ not only the generic signatures but also the signatures of non zero codimension. For each codimension, we give the number of generic signatures and classified by characteristic paterns. \noindent{\bf Signatures of codimension 0:} In the decomposition by signatures of $\mathop{{}^{\textsc{D}}\text{Pol}}_3$, there exist 22 generic signatures among which there are four $M$ signatures, six $S$ signatures (containing only one $S$ tree) and twelve $F$ signatures (six with one red long diagonal and six with one long blue diagonal): \hspace{2cm}{ \begin{tikzpicture}[scale=0.8] \newcommand{\degree}[0]{6} \newcommand{\last}[0]{5} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[ red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[ blue](i-1) circle (0.05)} \newcommand{\XDOT}[0]{[black](i-1) circle (0.045)} \draw (-6,-0.3) node {\bf $\bullet$ 4 signatures of type M:}; \draw (0,0) circle (1) ; \draw[blue, name path=B4B6](\B{4}:1) .. controls(\B{4}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[blue, name path=B8B10](\B{8}:1) .. controls(\B{8}:0.7)and (\B{10}:0.7) .. (\B{10}:1) ; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[red, name path=R1R3](\R{1}:1) .. controls(\R{1}:0.7)and (\R{3}:0.7) .. (\R{3}:1) ; \draw[red, name path=R5R7](\R{5}:1) .. controls(\R{5}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \draw[red, name path=R9R11](\R{9}:1) .. controls(\R{9}:0.7)and (\R{11}:0.7) .. (\R{11}:1) ; \fill[name intersections={of=B4B6 and R5R7,name=i}] \XDOT ; \fill[name intersections={of=B8B10 and R9R11,name=i}] \XDOT ; \fill[name intersections={of=B0B2 and R1R3,name=i}] \XDOT ; \end{tikzpicture} } \hspace{2cm}{ \begin{tikzpicture}[scale=0.8] \newcommand{\degree}[0]{6} \newcommand{\last}[0]{5} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[ red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[ blue](i-1) circle (0.05)} \newcommand{\XDOT}[0]{[black](i-1) circle (0.045)} \draw (-6,-0.3) node {\bf $\bullet$ 6 signatures of type S:}; \draw (0,0) circle (1) ; \draw[blue, name path=B6B8](\B{6}:1) .. controls(\B{6}:0.7)and (\B{8}:0.7) .. (\B{8}:1) ; \draw[blue, name path=B4B10](\B{4}:1) .. controls(\B{4}:0.7)and (\B{10}:0.7) .. (\B{10}:1) ; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[red, name path=R7R9](\R{7}:1) .. controls(\R{7}:0.7)and (\R{9}:0.7) .. (\R{9}:1) ; \draw[red, name path=R5R11](\R{5}:1) .. controls(\R{5}:0.7)and (\R{11}:0.7) .. (\R{11}:1) ; \draw[red, name path=R1R3](\R{1}:1) .. controls(\R{1}:0.7)and (\R{3}:0.7) .. 
(\R{3}:1) ; \fill[name intersections={of=B6B8 and R7R9,name=i}] \XDOT ; \fill[name intersections={of=B4B10 and R5R11,name=i}] \XDOT ; \fill[name intersections={of=B0B2 and R1R3,name=i}] \XDOT ; \end{tikzpicture} } \hspace{2cm}{ \begin{tikzpicture}[scale=0.8] \newcommand{\degree}[0]{6} \newcommand{\last}[0]{5} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[ red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[ blue](i-1) circle (0.05)} \newcommand{\XDOT}[0]{[black](i-1) circle (0.045)} \draw (-6,-0.3) node {\bf $\bullet$ 12 signatures of type F:}; \draw (0,0) circle (1) ; \draw[blue, name path=B8B10](\B{8}:1) .. controls(\B{8}:0.7)and (\B{10}:0.7) .. (\B{10}:1) ; \draw[blue, name path=B4B6](\B{4}:1) .. controls(\B{4}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[red, name path=R7R9](\R{7}:1) .. controls(\R{7}:0.7)and (\R{9}:0.7) .. (\R{9}:1) ; \draw[red, name path=R5R11](\R{5}:1) .. controls(\R{11}:0.7)and (\R{5}:0.7) .. (\R{11}:1) ; \draw[red, name path=R1R3](\R{1}:1) .. controls(\R{1}:0.7)and (\R{3}:0.7) .. (\R{3}:1) ;\ \fill[name intersections={of=B8B10 and R7R9,name=i}] \XDOT ; \fill[name intersections={of=B4B6 and R5R11,name=i}] \XDOT ; \fill[name intersections={of=B0B2 and R1R3,name=i}] \XDOT ; \end{tikzpicture} } In addition to generic classes there exist \begin{itemize} \item 48 signatures of codimension 1, four families of twelve signatures which are equivalent up to rotation; \item 30 signatures of codimension 2; \item 4 signatures of codimension 3. \end{itemize} Let us notice that the alternating sum of the number $N_{k}$ of k-codimemsional signatures $ \sum_{k=0}^{3} (-1)^{k}N_{k}=0$. \noindent{\bf Signatures of codimension 1:} \begin{center} \begin{tikzpicture}[scale=0.8] \newcommand{\degree}[0]{6} \newcommand{\last}[0]{5} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[ red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[ blue](i-1) circle (0.05)} \newcommand{\XDOT}[0]{[black](i-1) circle (0.045)} \draw (0,0) circle (1) ; \draw[blue, name path=B8B10](\B{8}:1) .. controls(\B{8}:0.7)and (\B{10}:0.7) .. (\B{10}:1) ; \draw[blue, name path=B4B6](\B{4}:1) .. controls(\B{4}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[red, name path=R7R9](\R{7}:1) .. controls(\R{7}:0.7)and (\R{9}:0.7) .. (\R{9}:1) ; \draw[red, name path=R3R11](\R{3}:1) .. controls(\R{3}:0.7)and (\R{11}:0.7) .. (\R{11}:1) ; \draw[red, name path=R1R5](\R{1}:1) .. controls(\R{1}:0.7)and (\R{5}:0.7) .. (\R{5}:1) ; \draw[name intersections={of=R3R11 and R1R5,name=i}] \RDOT ; \draw[name intersections={of=R1R5 and R3R11,name=i}] \RDOT ; \fill[name intersections={of=B8B10 and R7R9,name=i}] \XDOT ; \fill[name intersections={of=B4B6 and R1R5,name=i}] \XDOT ; \fill[name intersections={of=B0B2 and R1R5,name=i}] \XDOT ; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.8] \newcommand{\degree}[0]{6} \newcommand{\last}[0]{5} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[ red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[ blue](i-1) circle (0.05)} \newcommand{\XDOT}[0]{[black](i-1) circle (0.045)} \draw (0,0) circle (1) ; \draw[blue, name path=B6B10](\B{6}:1) .. controls(\B{6}:0.7)and (\B{10}:0.7) .. (\B{10}:1) ; \draw[blue, name path=B4B8](\B{4}:1) .. controls(\B{4}:0.7)and (\B{8}:0.7) .. 
(\B{8}:1) ; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[red, name path=R7R9](\R{7}:1) .. controls(\R{7}:0.7)and (\R{9}:0.7) .. (\R{9}:1) ; \draw[red, name path=R5R11](\R{5}:1) .. controls(\R{5}:0.7)and (\R{11}:0.7) .. (\R{11}:1) ; \draw[red, name path=R1R3](\R{1}:1) .. controls(\R{1}:0.7)and (\R{3}:0.7) .. (\R{3}:1) ; \draw[name intersections={of=B6B10 and B4B8,name=i}] \BDOT ; \draw[name intersections={of=B4B8 and B6B10,name=i}] \BDOT ; \fill[name intersections={of=B4B8 and R7R9,name=i}] \XDOT ; \fill[name intersections={of=B4B8 and R5R11,name=i}] \XDOT ; \fill[name intersections={of=B0B2 and R1R3,name=i}] \XDOT ; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.8] \newcommand{\degree}[0]{6} \newcommand{\last}[0]{5} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[ red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[ blue](i-1) circle (0.05)} \newcommand{\XDOT}[0]{[black](i-1) circle (0.045)} \draw (0,0) circle (1) ; \draw[blue, name path=B6B10](\B{6}:1) .. controls(\B{6}:0.7)and (\B{10}:0.7) .. (\B{10}:1) ; \draw[blue, name path=B4B8](\B{4}:1) .. controls(\B{4}:0.7)and (\B{8}:0.7) .. (\B{8}:1) ; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[red, name path=R5R7](\R{5}:1) .. controls(\R{5}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \draw[red, name path=R3R9](\R{3}:1) .. controls(\R{3}:0.7)and (\R{9}:0.7) .. (\R{9}:1) ; \draw[red, name path=R1R11](\R{1}:1) .. controls(\R{1}:0.7)and (\R{11}:0.7) .. (\R{11}:1) ; \draw[name intersections={of=B6B10 and B4B8,name=i}] \BDOT ; \draw[name intersections={of=B4B8 and B6B10,name=i}] \BDOT ; \fill[name intersections={of=B6B10 and R5R7,name=i}] \XDOT ; \fill[name intersections={of=B6B10 and R3R9,name=i}] \XDOT ; \fill[name intersections={of=B0B2 and R1R11,name=i}] \XDOT ; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.8] \newcommand{\degree}[0]{6} \newcommand{\last}[0]{5} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[ red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[ blue](i-1) circle (0.05)} \newcommand{\XDOT}[0]{[black](i-1) circle (0.045)} \draw (0,0) circle (1) ; \draw[blue, name path=B8B10](\B{8}:1) .. controls(\B{8}:0.7)and (\B{10}:0.7) .. (\B{10}:1) ; \draw[blue, name path=B4B6](\B{4}:1) .. controls(\B{4}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[red, name path=R7R11](\R{7}:1) .. controls(\R{7}:0.7)and (\R{11}:0.7) .. (\R{11}:1) ; \draw[red, name path=R5R9](\R{5}:1) .. controls(\R{5}:0.7)and (\R{9}:0.7) .. (\R{9}:1) ; \draw[red, name path=R1R3](\R{1}:1) .. controls(\R{1}:0.7)and (\R{3}:0.7) .. (\R{3}:1) ; \draw[name intersections={of=R5R9 and R7R11,name=i}] \RDOT ; \fill[name intersections={of=B8B10 and R5R9,name=i}] \XDOT ; \fill[name intersections={of=B4B6 and R5R9,name=i}] \XDOT ; \fill[name intersections={of=B0B2 and R1R3,name=i}] \XDOT ; \end{tikzpicture} \end{center} \noindent {\bf Diagrams of codimension 2:} \hspace{2cm}{ \begin{tikzpicture}[scale=0.8] \newcommand{\degree}[0]{6} \newcommand{\last}[0]{5} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[ red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[ blue](i-1) circle (0.05)} \newcommand{\XDOT}[0]{[black](i-1) circle (0.045)} \draw (-6,-0.3) node {\bf $\bullet$ 1 family of order 6}; \draw (0,0) circle (1) ; \draw[blue, name path=B6B8](\B{6}:1) .. 
controls(\B{6}:0.7)and (\B{8}:0.7) .. (\B{8}:1) ; \draw[blue, name path=B4B10](\B{4}:1) .. controls(\B{4}:0.7)and (\B{10}:0.7) .. (\B{10}:1) ; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[red, name path=R5R9](\R{5}:1) .. controls(\R{5}:0.7)and (\R{9}:0.7) .. (\R{9}:1) ; \draw[red, name path=R3R11](\R{3}:1) .. controls(\R{3}:0.7)and (\R{11}:0.7) .. (\R{11}:1); \draw[red, name path=R1R7](\R{1}:1) .. controls(\R{1}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \draw[name intersections={of=R5R9 and R1R7,name=i}] \RDOT ; \draw[name intersections={of=R3R11 and R1R7,name=i}] \RDOT ; \draw[name intersections={of=R1R7 and R5R9,name=i}] \RDOT ; \fill[name intersections={of=B6B8 and R1R7,name=i}] \XDOT ; \fill[name intersections={of=B4B10 and R1R7,name=i}] \XDOT ; \fill[name intersections={of=B0B2 and R1R7,name=i}] \XDOT ; \end{tikzpicture} } \hspace{1cm} \hspace{2cm}{ \begin{tikzpicture}[scale=0.8] \newcommand{\degree}[0]{6} \newcommand{\last}[0]{5} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[ red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[ blue](i-1) circle (0.05)} \newcommand{\XDOT}[0]{[black](i-1) circle (0.045)} \draw (-6,-0.3) node {\bf $\bullet$ 2 families of order 12}; \draw (0,0) circle (1) ; \draw[blue, name path=B6B10](\B{6}:1) .. controls(\B{6}:0.7)and (\B{10}:0.7) .. (\B{10}:1) ; \draw[blue, name path=B4B8](\B{4}:1) .. controls(\B{4}:0.7)and (\B{8}:0.7) .. (\B{8}:1) ; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[red, name path=R7R9](\R{7}:1) .. controls(\R{7}:0.7)and (\R{9}:0.7) .. (\R{9}:1) ; \draw[red, name path=R3R11](\R{3}:1) .. controls(\R{3}:0.7)and (\R{11}:0.7) .. (\R{11}:1) ; \draw[red, name path=R1R5](\R{1}:1) .. controls(\R{1}:0.7)and (\R{5}:0.7) .. (\R{5}:1) ; \draw[name intersections={of=B6B10 and B4B8,name=i}] \BDOT ; \draw[name intersections={of=B4B8 and B6B10,name=i}] \BDOT ; \draw[name intersections={of=R3R11 and R1R5,name=i}] \RDOT ; \draw[name intersections={of=R1R5 and R3R11,name=i}] \RDOT ; \fill[name intersections={of=B4B8 and R7R9,name=i}] \XDOT ; \fill[name intersections={of=B4B8 and R1R5,name=i}] \XDOT ; \fill[name intersections={of=B0B2 and R1R5,name=i}] \XDOT ; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.8] \newcommand{\degree}[0]{6} \newcommand{\last}[0]{5} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[ red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[ blue](i-1) circle (0.05)} \newcommand{\XDOT}[0]{[black](i-1) circle (0.045)} \draw (0,0) circle (1) ; \draw[blue, name path=B8B10](\B{8}:1) .. controls(\B{8}:0.7)and (\B{10}:0.7) .. (\B{10}:1) ; \draw[blue, name path=B4B6](\B{4}:1) .. controls(\B{4}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[blue,name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[red, name path=R7R11](\R{7}:1) .. controls(\R{7}:0.7)and (\R{11}:0.7) .. (\R{11}:1) ; \draw[red, name path=R3R9](\R{3}:1) .. controls(\R{3}:0.7)and (\R{9}:0.7) .. (\R{9}:1) ; \draw[red, name path=R1R5](\R{1}:1) .. controls(\R{1}:0.7)and (\R{5}:0.7) .. 
(\R{5}:1) ; \draw[name intersections={of=R7R11 and R3R9,name=i}] \RDOT ; \draw[name intersections={of=R3R9 and R7R11,name=i}] \RDOT ; \draw[name intersections={of=R3R9 and R1R5,name=i}] \RDOT ; \draw[name intersections={of=R1R5 and R3R9,name=i}] \RDOT ; \fill[name intersections={of=B8B10 and R3R9,name=i}] \XDOT ; \fill[name intersections={of=B4B6 and R1R5,name=i}] \XDOT ; \fill[name intersections={of=B0B2 and R1R5,name=i}] \XDOT ; \end{tikzpicture} } \noindent {\bf Signatures of codimension 3:} \hspace{2cm}{ \begin{tikzpicture}[scale=0.8] \newcommand{\degree}[0]{6} \newcommand{\last}[0]{5} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[ red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[ blue](i-1) circle (0.05)} \newcommand{\XDOT}[0]{[black](i-1) circle (0.045)} \draw (-6,-0.3) node {\bf $\bullet$ 1 familiy of order 4}; \draw (0,0) circle (1) ; \draw[blue, name path=B8B10](\B{8}:1) .. controls(\B{8}:0.7)and (\B{10}:0.7) .. (\B{10}:1) ; \draw[blue, name path=B4B6](\B{4}:1) .. controls(\B{4}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[red, name path=R5R11](\R{5}:1) .. controls(\R{5}:0.7)and (\R{11}:0.7) .. (\R{11}:1) ; \draw[red, name path=R3R9](\R{3}:1) .. controls(\R{3}:0.7)and (\R{9}:0.7) .. (\R{9}:1) ; \draw[red, name path=R1R7](\R{1}:1) .. controls(\R{1}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \draw[name intersections={of=R5R11 and R3R9,name=i}] \RDOT ; \draw[name intersections={of=R5R11 and R1R7,name=i}] \RDOT ; \draw[name intersections={of=R3R9 and R5R11,name=i}] \RDOT ; \fill[name intersections={of=B8B10 and R3R9,name=i}] \XDOT ; \fill[name intersections={of=B4B6 and R5R11,name=i}] \XDOT ; \fill[name intersections={of=B0B2 and R1R7,name=i}] \XDOT ; \end{tikzpicture} } \begin{lemma}\label{L:Diago1} Let $\sigma$ be a generic signature, having all red (resp. blue) diagonals short. Consider a pair of adjoining $F$ trees, of long blue (resp. red) diagonals $(i,j)(j+2,k)$ and deform them by a Whitehead move. If $k= i-2 \mod 4d$, then the number of blue (resp. red) short diagonals is increased by two. Otherwise, the number is increased by one. \end{lemma} \begin{proof} Consider the first case. Applying a Whitehead move onto this pair of diagonals induces a new signature, where the new pair of diagonals is $(i,i-2)(j, j+2)$, which are both short in the sense of the definition~\ref{D:diagonalsij}. So, the number of short blue (resp. red) diagonals is increased by two. Concerning the second case, the Whitehead move applied to the pair of diagonals $(i,j)(i-2,k)$ induces $(i,i-2)(j,k)$, where $(i,i-2)$ is a short diagonal. So, the number of short blue (resp. red) diagonals is increased by one. \end{proof} \begin{corollary} Consider a generic signature $\sigma$ having only short red (resp. blue) diagonals. The repetitive application of Whitehead moves onto pairs of blue (resp. red) diagonals induces an $M$-signature, in a finite number of Whitehead moves. \end{corollary} \begin{lemma}\label{L:Diago2} Let $\sigma$ be a generic signature having all red (resp. blue) short diagonals. Consider a pair of non-successive, adjoining blue diagonals in $\sigma$. Then, applying a Whitehead move onto this pair of diagonals we have one of the following situation: \begin{enumerate} \item the number of long blue (resp. red) diagonals increases by two, if both diagonals are short, \item the number of long blue (resp. 
red) diagonals increases by one, if one of the diagonals is short, \item if both diagonals are long, a new pair of long diagonals appears. \end{enumerate} \end{lemma} \begin{proof} Let $(i,j)$ and $(k,l)$ be the pair of blue (resp. red) diagonals. \begin{itemize} \item If both diagonals are short then $|i-j|=|l-k|=2$, where $k\neq j+2 \mod 4d$ and $l\neq i-2\mod 4d$ (using definition~\ref{D:diagonalsij} and definition~\ref{D:suc}). So, applying the Whitehead move onto the pair $(i,j)$ and $(k,l)$ induces the new pair of diagonals $(i,k)(j,l)$, where $|i-l|\neq 2$ and $|l-j|\neq 2$. \item Let us suppose, without loss of generality, that $(k,l)$ is short. Applying the Whitehead move onto $(i,j)$ and $(k,k+2)$ gives the pair of diagonals $(i,k+2)(j,k)$. Since $(i,j)$ and $(k,l)$ are non-successive, we have $l\neq i-2$ and $k\neq j+2$. Therefore, $|i-k-2|\neq 2$ and $|j-k|\neq 2$. Since both resulting diagonals are long, the number of long blue (resp. red) diagonals is increased by one. \item Let us apply a Whitehead move onto the pair of diagonals $(i,j)$ and $(k,l)$. Since $(i,j)$ and $(k,l)$ are disjoint and belong to a given signature $\sigma$, their terminal vertices verify $k\equiv i\equiv 1\mod 4$ and $j\equiv l\equiv 3 \mod 4$ (see definition~\ref{De:1}). This pair is first modified by a contracting half-Whitehead move into a pair of diagonals meeting at one point: $(i,k)(j,l)$. This pair is then modified by a smoothing half-Whitehead move, which induces the unique possible pair of diagonals $(i,l)(j,k)$. \end{itemize} \end{proof} \subsection{Adjacence relations} We recall a few results from~\cite{C0}. \begin{theorem}[Adjacence theorem]\label{T:Adj} Let $d>3$. The relations between generic signatures obtained in one Whitehead move are the following: \begin{enumerate} \item $M$-signatures are connected only to $d(d-1)$ $F$-signatures ($\binom{d}{2}$ red and $\binom{d}{2}$ blue); \item $F$-signatures are connected to $M$- and $S$-signatures; \item $S$-signatures are connected only to $FS$-signatures or $S$-signatures. \end{enumerate} \end{theorem} \begin{proof} \ \begin{itemize} \item Consider an $M$ tree. By lemma~\ref{L:Diago2}, we know that one Whitehead move applied onto a pair of short diagonals yields a signature having a pair of long diagonals. This is an $F$-signature. There exist $d$ blue and $d$ red diagonals, and choosing a pair of blue (or red) diagonals gives $\binom{d}{2}$ possibilities for each color. Therefore an $M$-signature has $d(d-1)$ adjacent $F$-signatures. \item Consider an $F$-signature. One Whitehead move applied onto a pair of short red diagonals in a pair of adjoining $F$ trees gives $S$ trees. If there exist no other long diagonals in the signature, then this is an $S$-signature. Otherwise, it is an $FS$-signature. From (1) we know that $F$-signatures are connected to $M$-signatures. \item Consider two adjoining trees, one of those trees being an $S$ tree. If the other one is an $M$ tree, then applying a Whitehead move to a pair of adjoining diagonals gives an $FS$-signature (this follows from lemma~\ref{L:Diago2}). Otherwise, we have a couple of adjoining $F$ trees.
\end{itemize} \end{proof} \begin{center} \begin{tikzpicture}[scale=0.5] \node (M2) at (0,0) {$\scriptstyle \left(\begin{smallmatrix}1\\3\end{smallmatrix}\right)$}; \node (F17) at (2.5,2.5) {$\scriptstyle\left|\begin{smallmatrix}1\\7\end{smallmatrix}\right|$}; \node (F11193) at (3.6,1.1) {$\scriptstyle\left|\begin{smallmatrix}1\\11\end{smallmatrix}\right|\left|\begin{smallmatrix}9\\3\end{smallmatrix}\right|$}; \node (F133) at (1.0,3.2) {$\scriptstyle\left|\begin{smallmatrix}13\\3\end{smallmatrix}\right|$}; \node (F515137)at (-3.6,1.1) {$\scriptstyle \left|\begin{smallmatrix}5\\15\end{smallmatrix}\right| \left|\begin{smallmatrix}13\\7\end{smallmatrix}\right|$}; \node (F511) at (-2.5,2.5) {$\scriptstyle\left|\begin{smallmatrix}5\\11\end{smallmatrix}\right|$} ; \node (F915) at (-1.0,3.2) {$\scriptstyle\left|\begin{smallmatrix}9\\15\end{smallmatrix}\right|$} ; \node (F60) at (2.5,-2.5){$\scriptstyle\left|\begin{smallmatrix}6\\0\end{smallmatrix}\right|$}; \node (F28100) at (3.6,-1.2) {$\scriptstyle \left|\begin{smallmatrix}2\\8\end{smallmatrix}\right| \left|\begin{smallmatrix}10\\0\end{smallmatrix}\right|$}; \node (F212) at (1.0,-3.2) {$\scriptstyle\left|\begin{smallmatrix}2\\12\end{smallmatrix}\right|$}; \node (F144612) at (-3.6,-1.1) {$\scriptstyle \left|\begin{smallmatrix}14\\4\end{smallmatrix}\right| \left|\begin{smallmatrix}6\\12\end{smallmatrix}\right|$}; \node (F104) at (-2.5,-2.5) {$\displaystyle\left|\begin{smallmatrix}10\\4\end{smallmatrix}\right|$}; \node (F148) at (-1.0,-3.2) {$\displaystyle\left|\begin{smallmatrix}14\\8\end{smallmatrix}\right|$}; \draw[->, very thick] (M2) -- (F17); \draw[->,very thick] (M2) -- (F11193); \draw[->,very thick] (M2) -- (F133); \draw[->,very thick] (M2) -- (F515137); \draw[->,very thick] (M2) -- (F511); \draw[->,very thick] (M2) -- (F915); \draw[->,very thick] (M2)--(F60); \draw[->,very thick] (M2)--(F28100); \draw[->,very thick] (M2)--(F212); \draw[->,very thick] (M2)--(F144612); \draw[->,very thick] (M2)--(F104); \draw[->,very thick] (M2)--(F148); \draw (0,-5) node { Deformation of the $M_{2}$-signature $\scriptstyle\left(\begin{smallmatrix}1\\3\end{smallmatrix}\right)$} ; \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[scale=0.5] \node (F0-10) at (0,0) { $\scriptstyle\left|\begin{smallmatrix}0\\10\end{smallmatrix}\right|$}; \node (M3) at (0,3) { $\scriptstyle M_{3}$} ; \node (S1) at (0,-3) { $\scriptstyle\left|\begin{smallmatrix}1,11\\0,10\end{smallmatrix}\right|$} ; \node (F0-6) at (-3,0) {$\scriptstyle\left|\begin{smallmatrix}0\\6\end{smallmatrix}\right|$} ; \node (F4-10) at (3,0) { $\scriptstyle\left|\begin{smallmatrix}4\\10\end{smallmatrix}\right|$} ; \node (FF2) at (2.5,2.5) {$\scriptstyle\left|\begin{smallmatrix}0\\10\end{smallmatrix}\right|\left|\begin{smallmatrix}8\\2\end{smallmatrix}\right|$} ; \node (FF3) at (-2.5,2.5) { $\scriptstyle\left|\begin{smallmatrix}0\\10\end{smallmatrix}\right|\left|\begin{smallmatrix}3\\9\end{smallmatrix}\right|$} ; \node (FS1) at (-2.5,-2.5) {$\left|\begin{smallmatrix}15,9\\0,10\end{smallmatrix}\right|\left|\begin{smallmatrix}7\\1\end{smallmatrix}\right|$} ; \node (S2) at (2.5,-2.5) {$\scriptstyle\left|\begin{smallmatrix}15,5\\0,10\end{smallmatrix}\right|$} ; \draw[->,very thick] (F0-10) -- (M3); \draw[->,very thick] (F0-10) -- (F0-6); \draw[->,very thick] (F0-10) -- (F4-10); \draw[->,very thick] (F0-10) -- (FS1); \draw[->,very thick] (F0-10) -- (FF2); \draw[->,very thick] (F0-10)-- (FF3); \draw[->,very thick] (F0-10) -- (S1); \draw[->,very thick] (F0-10) -- (S2); \draw (0,-5) node { 
Deformation of the $F$-signature $\scriptstyle\left|\begin{smallmatrix}0\\10\end{smallmatrix}\right|$} ; \end{tikzpicture} \begin{tikzpicture}[scale=0.5] \node (a) at (0,0) {$\scriptstyle \left[\begin{smallmatrix}1,11\\10,0\end{smallmatrix}\right]$} ; \node (c) at (2.7,2.2) {$\scriptstyle\left|\begin{smallmatrix}3\\9\end{smallmatrix}\right|\left[\begin{smallmatrix}1,11\\10,0\end{smallmatrix}\right]$} ; \node (i) at (2.7,-2.2) {$\scriptstyle\left|\begin{smallmatrix}8\\2\end{smallmatrix}\right|\left[\begin{smallmatrix}1,11\\10,0\end{smallmatrix}\right]$} ; \node (e) at (-2.7,2.2) {$\scriptstyle\left|\begin{smallmatrix}7\\1\end{smallmatrix}\right|\left|\begin{smallmatrix}10\\0\end{smallmatrix}\right|$} ; \node (g) at (-2.7,-2.2) { $\scriptstyle\left|\begin{smallmatrix}4\\10\end{smallmatrix}\right|\left|\begin{smallmatrix}1\\11\end{smallmatrix}\right|$} ; \node (d) at (0,3) {$\scriptstyle\left|\begin{smallmatrix}11\\1\end{smallmatrix}\right|$} ; \node (h) at (0,-3) { $\scriptstyle\left|\begin{smallmatrix}0\\10\end{smallmatrix}\right|$} ; \node (b) at (4,0) {$\scriptstyle\left|\begin{smallmatrix}1,11\\0,6\end{smallmatrix}\right|$} ; \node (f) at (-4,0) { $\scriptstyle\left|\begin{smallmatrix}0,10\\11,5\end{smallmatrix}\right|$} ; \draw[->,very thick] (a) -- (d); \draw[->,very thick] (a) -- (b); \draw[->,very thick] (a) -- (c); \draw[->,very thick] (a) -- (d); \draw[->,very thick] (a) -- (e); \draw[->,very thick] (a) -- (f); \draw[->,very thick] (a) -- (g); \draw[->,very thick] (a) -- (h); \draw[->,very thick] (a) -- (i); \draw (0,-5) node { Deformation of the $S$-signature $\scriptstyle \left[\begin{smallmatrix}1,11\\10,0\end{smallmatrix}\right]$} ; \end{tikzpicture} \end{center} \begin{figure} \caption{ Example of deformations of an $M,F$ and $S$-signatures} \end{figure} \begin{theorem}~\cite{C0}\label{Th:Path} Let $\sigma\in \Sigma_{d}$ be a generic signature. Then, for every $\sigma$ there exists a sequence of Whitehead moves starting at $\sigma$ and ending on an $M$-signature. \end{theorem} \begin{proof} The proof is by induction on the number of long blue diagonals. Let $\sigma$ be generic signature such that on the left side of a long blue diagonal there exist only short diagonals of red and blue color. \begin{enumerate} \item {\it Base case}. Let $\sigma$ have only one long blue diagonal. Then two cases are discussed: \begin{enumerate} \item The red diagonals are all short. \item Not all red diagonals are short. \end{enumerate} Consider the first case. Let us apply one of the lemma~\ref{L:Diago1} onto the long blue diagonal and the blue short diagonals on its right side. Each such Whitehead move step increases by one the number of short blue diagonals. So, we proceed using this method until there are $d$ short diagonals in the signature: this is an $M$ signature. Consider the second case, where we have $k$ red long diagonals. Then, the union of the long blue diagonal and of these $k$ long red diagonals compartment the signature into $q$ disjoint adjacent 2-cells lying in $\mathbb{C}\setminus\sigma$. If we consider each of these regions independently from the signature, we can interpret them as local $M$-signatures of smaller degree than $d$. Apply lemma~\ref{L:Diago1} to the long blue diagonal lying in one compartment a finite number of times (Whitehead moves on the long blue diagonal and short blue diagonals in the compartment). This compartment is thus, locally, an $M$-signature. Let $L$ be the new long blue diagonal, obtained from this procedure. 
This diagonal $L$ now intersects an adjacent compartment. As previously, we apply lemma~\ref{L:Diago1} a finite number of times onto $L$ and the short blue diagonals in the adjacent compartment, in order to obtain a local $M$-signature. Proceeding in this way with the long blue diagonal through all adjacent compartments finally leaves only short blue diagonals in the signature, which defines an $F$-signature. The remaining step is to apply the same procedure, carried out for the long blue diagonals, to the long red diagonals: after a finite number of Whitehead moves all the red diagonals are short, and this defines an $M$-signature.\\ \item {\it Induction case.} Suppose that for a signature with $m$ long blue diagonals there exists a path from the signature to an $M$-signature. Let us show that for $m+1$ long diagonals this statement is also true. Take a block of $m$ adjacent long diagonals and apply the induction hypothesis to it. Then there exists a finite number of deformations after which these $m$ long blue and red diagonals are all short, leaving only one long blue diagonal in the signature. We can thus apply case (1) from the discussion above. \end{enumerate} \end{proof} \begin{remark} Theorem~\ref{Th:Path} above can be interpreted as the path connectedness of $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$, from which we recover the fact that $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$ is connected since, as the complement of a hyperplane arrangement, it is an open subset of $\mathbb{C}^{d}$. \end{remark} \section{Inclusion diagrams}\label{S:incl} In this section we show the existence of geometric invariants of configuration spaces. These geometric invariants (that we call {\it inclusion diagrams}) are obtained from the topological stratification. These objects are constructed by studying the incidence and adjacence relations between strata. As a corollary of previous works~\cite{Co1}, we show that these geometric objects are in bijection with the nerve of the \v Cech cover. For simplicity, we define those geometric invariants directly as the nerve of this cover. \subsection{Combinatorial closure of a signature} A contracting half-Whitehead move defines a strict partial order ($\prec$) on the set $\Sigma_{d}$ of all signatures. For $\sigma, \tau \in\Sigma_{d}$, we say that $\sigma \prec \tau$ if there exists a sequence of signatures $\sigma=\sigma_{1} \prec \sigma_{2} \prec ... \prec\sigma_{n}=\tau$, in which each signature is obtained from the previous one by a contracting half-Whitehead move. The symbol $\prec$ is an incidence relation between those signatures. \vskip.1cm \begin{lemma} The incidence relation on the set $\Sigma_{d}$ forms a strict partial order. \end{lemma} \begin{proof} Clearly, we have an irreflexive relation, since $\sigma\prec \sigma$ does not hold for any $\sigma$ in $\Sigma_{d}$, and a transitive relation. We prove that the relation is antisymmetric: if one has $\sigma\prec\tau$ then $codim(\tau) > codim(\sigma)$, so $\sigma\prec\tau$ and $\tau\prec\sigma$ cannot hold simultaneously. This strict order $\prec$ induces the partial order $\preceq$ defined by \[\sigma \preceq \tau \iff \begin{cases}\sigma \prec \tau &\text{if } codim(\tau) > codim(\sigma),\\ \sigma =\tau &\text{if } codim(\tau)= codim(\sigma). \end{cases}\] \end{proof} \begin{definition} The set consisting of $\sigma$ and of all signatures incident to $\sigma$ will be denoted by $\overline{\sigma}$. By abuse of notation, we call $\overline{\sigma}$ the combinatorial closure of $\sigma$.
\end{definition} \vskip.1cm From~\cite{C0,C1} we know that the combinatorial closure of an elementa coincides with its topological closure, i.e.: \[A_{\overline{\sigma}}= \bigcup_{\tau\in \bar \sigma}A_{\tau}=\overline{A_{\sigma}}.\] \subsection{Properties of the inclusion diagram} Let us consider the union of the topological closure $\overline{A_{\sigma}}$ and of its tubular neighborhood $Tub$~\cite{C0,C1}; we call $A^{+}_{\sigma}=\overline{A_{\sigma}}\cup Tub$ the thickened elementa. The set of thickened elementa $A^{+}_{\sigma_{g}}$ for generic $\sigma_{g}$ forms a good cover of $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$, in the sense of \v Cech (i.e. multiple intersections are either empty or contractible). The {\it nerve} $\mathcal{N}$ of this \v Cech covering is the cell complex on the vertex set $\Sigma_{G}$ of generic signatures, consisting of those finite non-empty subsets $I$ of $\Sigma_{G}$ such that $\bigcap_{i\in I} A^{+}_{\sigma_{i}}\neq \emptyset$. \begin{definition}\label{D:dual} We call {\it inclusion diagram} the cell complex $(\mathcal{W},\subset)$ whose set of $k$-faces is in bijection with the set of codimension $k$ elementa, the bijection being order-preserving. \end{definition} So, if a couple $(W_{\sigma}, W_{\tau})$ in the inclusion diagram verifies $W_{\sigma}\subset \overline{W_{\tau}}$, then $\sigma\prec \tau$. Likewise, suppose that an elementa $A_{\mu}$ of codimension $k$ is incident to a collection of elementa $A_{\sigma_{1}},...,A_{\sigma_{m}}$ of smaller codimensions, such that $A_{\mu} \subset \bigcap_{i=1}^{m} \overline{A_{\sigma_{i}}}$. Then, in the inclusion diagram, the collection of faces $W_{\sigma_{1}},...,W_{\sigma_{m}}$ lies in the boundary of a $k$-dimensional face $W_{\mu}$. \begin{corollary} The inclusion diagram is isomorphic to the nerve $\mathcal{N}$ of the cover $(A^{+}_{\sigma_{g}})_{\sigma_{g}\in\Sigma_{G}}$. \end{corollary} \begin{lemma}[Edges and 2-faces of the inclusion diagram]\label{Lem:quadra} Let $(\mathcal{W},\subset)$ be the inclusion diagram associated to $(A_{\sigma})_{\sigma\in {\rm \Sigma}_d}$. Then: \begin{enumerate} \item each 1-dimensional face in $\mathcal{W}$ is bounded by 2 vertices; \item each 2-dimensional face in $\mathcal{W}$ is bounded by 4 vertices and 4 edges, and forms a quadrangle. \end{enumerate} \end{lemma} \begin{proof} Statement (1): Consider a signature $\beta$ of codimension 1. Then, by definition~\ref{D:cod}, there exists a pair of intersecting chords of the same color. Suppose that the set of indices of the terminal vertices of those diagonals is $\{i,j,k,l\}$, where $i<j<k<l$. Those numbers $i,j,k,l$ are of the same parity and, by definition~\ref{De:1}, verify $i\equiv k\equiv 1\mod 4$ and $j\equiv l\equiv 3\mod 4$ (resp. $i\equiv k\equiv 2\mod 4$ and $j\equiv l\equiv 0\mod 4$). Again, from definition~\ref{De:1}, we know that in a generic signature each terminal vertex congruent to $1 \mod 4$ (resp. $2 \mod 4$) is attached by an edge to a terminal vertex which is congruent to $3 \mod 4$ (resp. $0 \mod 4$). So, using a smoothing half-Whitehead move, the intersection point is smoothed and we obtain two different possible pairs of diagonals: $(i,j)(k,l)$ or $(i,l)(j,k)$, with all the other diagonals of the signature remaining invariant. So, applying definition~\ref{D:dual} to construct the inclusion diagram, we have that each 1-dimensional face (corresponding to a codimension 1 signature) in $\mathcal{W}$ is bounded by exactly 2 vertices (corresponding to the signatures obtained by smoothing the meeting point in $\beta$).
Statement (2): Consider a signature of codimension 2, denoted by $\omega$. By definition~\ref{D:cod}, there exist two critical points. Applying the smoothing half-Whitehead move onto one of the critical points gives two different possible signatures of codimension 1 (this last statement follows from the first point above). So, applying the same arguments to the second critical point, implies that there exist four signatures of codimension 1, incident to $\omega$. In other words: there exist $\{\beta_{0},\beta_{1},\beta_{2},\beta_{3}\}\prec\omega$, where $codim(\beta_{i})=1$ and $i\in \{0,...,3\}$. The smoothing modification applied simultaneously to both critical points, gives four codimension 0 signatures, all incident to $\omega$: $\{\sigma_{0},\sigma_{1},\sigma_{2},\sigma_{3}\}\prec \omega$. Applying (1) to every diagram of codimension 1 implies that there exist two signatures of codimension 0 , which are incident to each signature of codimension 1. So, we obtain the following relations: \[\{\sigma_{0},\sigma_{1}\}\prec \beta_{0},\] \[\{ \sigma_{1},\sigma_{2}\}\prec \beta_{1},\] \[ \{\sigma_{2},\sigma_{3}\}\prec \beta_{2},\] \[ \{\sigma_{3},\sigma_{0}\}\prec \beta_{3}\] The construction of the inclusion diagram $\mathcal{W}$ from definition~\ref{D:dual}, implies that we have a quadrangle. \end{proof} \begin{example} An explicit construction of the inclusion diagrams is given below, for $d=2$. We illustrate the relations between the diagrams. There exist 4 generic signatures and 4 signatures of codimension 1, illustrated on the figure below. \begin{center}~\label{F:d=2} \begin{tikzpicture}[scale=.8] \node (a) at (-4,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node { \tiny \k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node { \tiny \k} ; }; \draw[blue, name path=B0B6](\B{0}:1) .. controls(\B{0}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[blue, name path=B2B4](\B{2}:1) .. controls(\B{2}:0.7)and (\B{4}:0.7) .. (\B{4}:1) ; \draw[red, name path=R1R3](\R{1}:1) .. controls(\R{1}:0.7)and (\R{3}:0.7) .. (\R{3}:1) ; \draw[red, name path=R5R7](\R{5}:1) .. controls(\R{5}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \end{tikzpicture} }; \node (b) at (0,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[blue](i-1) circle (0.05)} \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node {\tiny \k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node {\tiny\k} ; } \draw[blue, name path=B0B4](\B{0}:1) .. controls(\B{0}:0.7)and (\B{4}:0.7) .. (\B{4}:1) ; \draw[blue, name path=B2B6](\B{2}:1) .. controls(\B{2}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[red, name path=R1R3](\R{1}:1) .. controls(\R{1}:0.7)and (\R{3}:0.7) .. (\R{3}:1) ; \draw[red,,name path=R5R7](\R{5}:1) .. controls(\R{5}:0.7)and (\R{7}:0.7) .. 
(\R{7}:1) ; \end{tikzpicture} }; \node (c) at (4,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[blue](i-1) circle (0.05)} \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node {\tiny \k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node {\tiny \k} ; }; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[blue, name path=B4B6](\B{4}:1) .. controls(\B{4}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[red, name path=R5R7](\R{5}:1) .. controls(\R{5}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \draw[red,name path=R1R3](\R{1}:1) .. controls(\R{1}:0.7)and (\R{3}:0.7) .. (\R{3}:1) ; \end{tikzpicture} }; \draw[<->,thick] (b) -- (a); \draw[<->,thick] (b) -- (c); \end{tikzpicture} \begin{tikzpicture}[scale=.8] \node (a) at (-4,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[blue](i-1) circle (0.05)} \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node { \tiny\k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node {\tiny\k} ; } \draw[blue, name path=B0B6](\B{0}:1) .. controls(\B{0}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[blue, name path=B2B4](\B{2}:1) .. controls(\B{2}:0.7)and (\B{4}:0.7) .. (\B{4}:1) ; \draw[red, name path=R1R7](\R{1}:1) .. controls(\R{1}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \draw[red, name path=R3R5](\R{3}:1) .. controls(\R{3}:0.7)and (\R{5}:0.7) .. (\R{5}:1) ; \end{tikzpicture} }; \node (b) at (0,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[blue](i-1) circle (0.05)} \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node {\tiny \k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node {\tiny \k} ; }; \draw[blue, name path=B0B4](\B{0}:1) .. controls(\B{0}:0.7)and (\B{4}:0.7) .. (\B{4}:1) ; \draw[blue, name path=B2B6](\B{2}:1) .. controls(\B{2}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[red, name path=R1R7](\R{1}:1) .. controls(\R{1}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \draw[red, name path=R3R5](\R{3}:1) .. controls(\R{3}:0.7)and (\R{5}:0.7) .. (\R{5}:1) ; \end{tikzpicture} }; \node (c) at (4,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[blue](i-1) circle (0.05)} \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node {\tiny \k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node {\tiny\k} ; }; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[blue, name path=B4B6](\B{4}:1) .. controls(\B{4}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[red, name path=R1R7](\R{1}:1) .. controls(\R{1}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \draw[red, name path=R3R5](\R{3}:1) .. controls(\R{3}:0.7)and (\R{5}:0.7) .. 
(\R{5}:1) ; \end{tikzpicture} }; \draw[<->,thick] (b) -- (a); \draw[<->,thick] (b) -- (c); \end{tikzpicture} \begin{tikzpicture}[scale=.8] \node (a) at (-4,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[blue](i-1) circle (0.05)} \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node {\tiny\k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node {\tiny\k} ; }; \draw[blue, name path=B0B6](\B{0}:1) .. controls(\B{0}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[blue, name path=B2B4](\B{2}:1) .. controls(\B{2}:0.7)and (\B{4}:0.7) .. (\B{4}:1) ; \draw[red, name path=R1R3](\R{1}:1) .. controls(\R{1}:0.7)and (\R{3}:0.7) .. (\R{3}:1) ; \draw[red, name path=R5R7](\R{5}:1) .. controls(\R{5}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \end{tikzpicture} }; \node (b) at (0,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[blue](i-1) circle (0.05)} \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node {\tiny\k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node {\tiny\k} ; }; \draw[blue, name path=B0B6](\B{0}:1) .. controls(\B{0}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[blue, name path=B2B4](\B{2}:1) .. controls(\B{2}:0.7)and (\B{4}:0.7) .. (\B{4}:1) ; \draw[red, name path=R1R5](\R{1}:1) .. controls(\R{1}:0.7)and (\R{5}:0.7) .. (\R{5}:1) ; \draw[red,,name path=R3R7](\R{3}:1) .. controls(\R{3}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \end{tikzpicture} }; \node (c) at (4,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[blue](i-1) circle (0.05)} \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node {\tiny \k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node {\tiny\k} ; }; \draw[blue, name path=B0B6](\B{0}:1) .. controls(\B{0}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[blue, name path=B2B4](\B{2}:1) .. controls(\B{2}:0.7)and (\B{4}:0.7) .. (\B{4}:1) ; \draw[red, name path=R5R7](\R{5}:1) .. controls(\R{5}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \draw[red,name path=R1R3](\R{1}:1) .. controls(\R{1}:0.7)and (\R{3}:0.7) .. (\R{3}:1) ; \end{tikzpicture} }; \draw[<->,thick] (b) -- (a); \draw[<->,thick] (b) -- (c); \end{tikzpicture} \begin{tikzpicture}[scale=.8] \node (a) at (-4,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[blue](i-1) circle (0.05)} \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node { \tiny\k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node {\tiny\k} ; }; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[blue, name path=B4B6](\B{4}:1) .. controls(\B{4}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[red, name path=R1R3](\R{1}:1) .. controls(\R{1}:0.7)and (\R{3}:0.7) .. (\R{3}:1) ; \draw[red, name path=R5R7](\R{5}:1) .. controls(\R{5}:0.7)and (\R{7}:0.7) .. 
(\R{7}:1) ; \end{tikzpicture} }; \node (b) at (0,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[blue](i-1) circle (0.05)} \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node {\tiny \k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node {\tiny \k} ; }; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[blue, name path=B4B6](\B{4}:1) .. controls(\B{4}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[red, name path=R1R5](\R{1}:1) .. controls(\R{1}:0.7)and (\R{5}:0.7) .. (\R{5}:1) ; \draw[red, name path=R3R7](\R{3}:1) .. controls(\R{3}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \end{tikzpicture} }; \node (c) at (4,0) { \begin{tikzpicture}[scale=.6] \newcommand{\degree}[0]{4} \newcommand{\B}[1]{#1*180/\degree} \newcommand{\R}[1]{#1*180/\degree } \newcommand{\RDOT}[0]{[red](i-1) circle (0.05)} \newcommand{\BDOT}[0]{[blue](i-1) circle (0.05)} \draw[black,thick] (0,0) circle (1) ; \foreach \k in {0,2,4,6} { \draw[blue] (\B{\k}:1.15) node {\tiny \k};} ; \foreach \k in {1,3,5,7} { \draw[ red] (\R{\k}:1.15) node {\tiny \k} ; }; \draw[blue, name path=B0B2](\B{0}:1) .. controls(\B{0}:0.7)and (\B{2}:0.7) .. (\B{2}:1) ; \draw[blue, name path=B4B6](\B{4}:1) .. controls(\B{4}:0.7)and (\B{6}:0.7) .. (\B{6}:1) ; \draw[red, name path=R3R5](\R{3}:1) .. controls(\R{3}:0.7)and (\R{5}:0.7) .. (\R{5}:1) ; \draw[red,name path=R1R7](\R{1}:1) .. controls(\R{1}:0.7)and (\R{7}:0.7) .. (\R{7}:1) ; \end{tikzpicture} }; \draw[<->,thick] (b) -- (a); \draw[<->,thick] (b) -- (c); \end{tikzpicture} \end{center} \begin{figure} \caption{Relations between diagrams for $d=2$} \end{figure} Note that in this case ($d=2$) there are {\it no} 2-faces. Indeed, composing two Whitehead moves on the pairs of red and blue chords gives a superimposition of chord diagrams which is not compatible with definition~\ref{De:1}. Therefore, the inclusion diagram $\mathcal{W}$ for the $d=2$ case is a quadrangle (see figure~\ref{F:2-face}) consisting of: \begin{enumerate} \item four vertices, corresponding to the generic signatures, \item four edges, corresponding to the codimension 1 signatures. \end{enumerate} \begin{figure} \caption{Inclusion diagram for $d=2$} \label{F:2-face} \end{figure} \end{example} \begin{corollary} \ \begin{itemize} \item Let $\sigma_0$ and $\sigma_1$ be two generic signatures. If $\sigma_0$ and $\sigma_1$ are both incident to a signature of codimension 1, then this signature of codimension 1 is unique. \item Let $\sigma_0,\sigma_1,\sigma_2,\sigma_3$ be four generic signatures. If these signatures are incident to a signature of codimension 2, then this signature of codimension 2 is unique. \end{itemize} \end{corollary} \subsection{Structure of inclusion diagrams} \begin{theorem} Let $d>2$. The inclusion diagram is a cell-complex. \end{theorem} \begin{proof} Let $X^{n}$ be the set of faces of dimension $n$. We know that each $n$-face in $X^{n}$ is a topological ball of dimension $n$. Indeed, for $n=0$: the set $X^{0}$ is the set of vertices of the inclusion diagram. Now, for any $n$, the set $X^{n}$ is in bijection with the set of codimension $n$ signatures, which are the indices of elementa of codimension $n$. Those elementa are known to be topological balls of codimension $n$, from~\cite{C0,C1}. So, the faces of $X^{n}$ are $n$-balls.
The glueing between faces is done by the following criterion: if two signatures verify $\tau\prec \tau'$ with $k=codim(\tau)<codim(\tau')=k+1$, then $A_{\tau'}\subset A_{\bar{\tau}}$. In particular, the $k$-face $f_{\tau}$ in $X^{k}$ (corresponding to $\tau$) lies in the boundary of the face $f_{\tau'}$ (corresponding to $\tau'$), where $dim(f_{\tau})+1=dim(f_{\tau'})$. \end{proof} For any $d>2$, the inclusion diagram contains two main parts: a part that we call the {\it exterior} part and a part that we call the {\it interior} part. \begin{enumerate} \item The \underline{exterior part} is formed from a necklace of four beads. Those beads, denoted $NC_{d}$, are obtained from a union of cells of the cell complex, forming a connected component. A structure $NC_{d}$ is a linearly ordered subset of $(\Sigma_{d},\prec)$ with one upper bound $\sigma$ having $d$ blue (resp. red) chords intersecting at one point. The notation $NC_{d}$ (or $NC$ in short) is due to the relation with the non-crossing partitions of the set of $d$ elements $\{1,2,...,d\}$~\cite{Kre72}. Indeed, there is a bijection between the set of vertices in an $NC_{d}$ structure and the set of non-crossing partitions of $\{1,2,...,d\}$. Consider the upper bound of an $NC$ structure: it is a signature with $d$ short diagonals of a given color and $d$ long diagonals of the other color intersecting in one vertex $\bar{\bar{v}}$. The vertex is of valency $2d$. Two $NC$ structures are said to be of the same color if the intersecting chords of the upper bound signature are of the same color. In each $NC$ there exist two $M$-signatures. Those $NC$ structures are connected one to another by the $M$-signatures (see an example of one of the $NC_{4}$ structures in figures~\ref{F:NC} and~\ref{F:NC12}). \begin{figure} \caption{Exterior structure: $NC_{d}$} \label{F:NC} \end{figure} In the case of $d=4$, the connections between $M_{1}$ and $M_{3}$ are represented in figure~\ref{F:NC12}. The other connections in figure~\ref{F:NC} are of the same type. \begin{figure} \caption{One of the $NC_{4}$ structures} \label{F:NC12} \end{figure} The edges in figure~\ref{F:NC12} correspond to signatures of codimension 1. The signatures of codimensions 3, 4 and 5, corresponding to the faces of higher dimensions, are depicted below. \begin{figure} \caption{Diagrams of codimensions 3, 4 and 5 between $M_{1}$ and $M_{3}$} \label{F:NC13} \end{figure} \item The \underline{interior part} is divided into two structures: bridges $B$ and open books $O$. \begin{itemize} \item A {\it bridge} $B$ is a linearly ordered subset with one upper bound. This upper bound is incident to a couple of opposite $F$-signatures, respectively lying in $NC$ structures of the same color. In figure~\ref{F:NC}, one vertical (resp. horizontal) bridge structure is drawn as a red (resp. green) line. \begin{figure} \caption{Example of bridge structures between two opposite $NC_{4}$ structures} \label{F:B4} \end{figure} \item An {\it open book} $O$ is a linearly ordered subset having one upper bound incident to generic signatures lying in two adjacent $NC$ structures of the opposite color. This upper bound is a signature of codimension $2d-4$ having at least two intersection points of different colors. Notice that $O$ substructures exist only for $d>3$.
\end{itemize} \end{enumerate} \begin{proposition} For any $d>1$ there exist four structures $NC_{d}$ which are glued one to another by their $M$-signatures, so that each $M$-signature is incident to only two $NC_{d}$ structures of the opposite colors and a pair of $NC_{d}$ structures has at most one $M$-signature in common. \end{proposition} \begin{proof} We know that, for any $d>1$, there exist four $NC_{d}$ structures in the inclusion diagram, each $NC_{d}$ structure containing a pair of $M$-signatures among its set of vertices. The vertices along which the $NC_{d}$ are glued to each other are the $M$-signatures. An $M$-signature is a vertex of valency $2\binom{d}{2}$, since there exist $\binom{d}{2}$ possibilities to make one Whitehead move starting from an $M$-signature for the red (resp. blue) diagonals (theorem~\ref{T:Adj}). So, this argument shows that an $M$-signature is the intersection of a pair of $NC_{d}$ structures, one of each color. Suppose that this common $M$-signature is denoted by $M_1$. Note that there still remain two $M$-signatures in this pair of $NC_{d}$ structures. We will show that the two remaining $M$-signatures are different. Indeed, in one $NC_{d}$ structure only the red diagonals were modified; in the other $NC_{d}$ structure only the blue diagonals were modified. Therefore, in the first $NC_{d}$ structure, the ending $M$-signature has different blue diagonals than in $M_1$, and in the second $NC_{d}$ structure the $M$-signature has different red diagonals than in $M_1$. Now, since there exist four $M$-signatures, the four $NC_{d}$ structures are glued to each other by their $M$-signatures, and a pair of $NC_{d}$ structures has at most one $M$-signature in common. \end{proof} \begin{example}[Inclusion diagram for $d=3$] Figure~\ref{F:graph3} illustrates the inclusion diagram for $d=3$, using the matrix notation. \begin{figure} \caption{Inclusion diagram for $d=3$} \label{F:graph3} \end{figure} The inclusion diagram contains two distinct parts: \begin{enumerate} \item The four substructures in black are the $NC_{3}$ substructures. They connect pairs of $M$-signatures having the same short diagonals of a given color. Apart from the $M$-signatures, these black substructures contain three vertices corresponding to $F$-signatures. The 3-face of the black substructure corresponds to the codimension 3 signature and it is incident to three 2-faces which correspond to the codimension 2 signatures. Those signatures have two inner vertices incident to four edges of the same color and three inner vertices which are incident to four edges of alternating colors. \\ \item The substructure with colored edges. This colored part of the inclusion diagram corresponds to the parts which appear in the construction given in figure~\ref{F:Q3}. In particular, the vertices in this colored part are the $S$-signatures. The 2-faces in the interior part correspond to codimension 2 signatures. Those signatures of codimension 2, in addition to the three inner nodes with incident edges of alternating color, have one inner vertex incident to 4 blue edges and the other one incident to 4 red edges. \end{enumerate} This inclusion diagram is summarized by the construction in Appendix~\ref{A: Incldiag3}: Figure~\ref{F:2Fincldiag} and Figure~\ref{F: Incldiag3}.
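As a consistency check (combining the counts of signatures for $d=3$ given in the previous section with definition~\ref{D:dual}), this inclusion diagram has 22 vertices, 48 edges, 30 two-dimensional faces and 4 three-dimensional faces, corresponding respectively to the generic signatures and to the signatures of codimension 1, 2 and 3; in particular, the alternating sum of these face numbers vanishes: \[22-48+30-4=0.\]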
\begin{itemize} \item The first quadrangle 2-face, connecting the two $F$-signatures $\left|\begin{smallmatrix}2\\8\end{smallmatrix}\right|$, $\left|\begin{smallmatrix}8\\2\end{smallmatrix}\right|$ and the two $S$-signatures $\left[\begin{smallmatrix}1,7\\2,8\end{smallmatrix}\right]$ and $\left[\begin{smallmatrix}2,8\\3,9\end{smallmatrix}\right]$, corresponds in figure~\ref{F:graph3} to the blue vertical cycle. The $F$-signatures have one long red diagonal. \\ \item The second quadrangle 2-face, connecting the two $F$-signatures $\left|\begin{smallmatrix}9\\3\end{smallmatrix}\right|$, $\left|\begin{smallmatrix}3\\9\end{smallmatrix}\right|$ and the two $S$-signatures $\left[\begin{smallmatrix}2,8\\3,9\end{smallmatrix}\right]$, $\left[\begin{smallmatrix}3,9\\4,10\end{smallmatrix}\right]$, corresponds in figure~\ref{F:graph3} to the blue horizontal cycle. The $F$-signatures have two long blue diagonals. \end{itemize} \end{example} \subsection{$Q$-diagrams and $Q$-pieces} In the following, for $d\geq 3$, we introduce the notions of $Q$-diagrams and $Q$-pieces. \begin{definition}\label{D:Q} \begin{enumerate} \item A {\bf $Q$-diagram} is a signature in $\Sigma_d$ having $(d-2)$ $S$-trees. This $Q$-diagram is a superimposition of two monochromatic diagrams (one blue, one red), both having $d-2$ long diagonals; the red (resp. blue) diagram is turned through $\frac{\pi}{4d}$, relative to the center of the diagram, with respect to the blue (resp. red) diagram. By $\mathcal{L}$ we denote the reflection axis of the blue (resp. red) diagram. It is parallel to the long diagonals. \item A {\bf $Q$-piece} is the union of the elementa $A_{\sigma}$ indexed by those signatures which are adjacent to a $Q$-diagram and have at least one long diagonal parallel to the ones of the $Q$-diagram. To this set of signatures, we add those $M$-signatures that are adjacent to them in a minimal number of Whitehead moves. \item A pair of {\bf adjacent $Q$-pieces}, denoted $(Q_{i},Q_{i+1})$, verifies the following properties: \begin{enumerate} \item \underline{(Copy and Paste)}: There exists an identity map from the set of blue (resp. red) monochromatic diagrams in the $Q_{i}$-piece to the set of blue (resp. red) monochromatic diagrams in the $Q_{i+1}$-piece. \item \underline{(Copy/Paste and Rotate)}: There exists an identity map from the set of red (resp. blue) monochromatic diagrams in the $Q_{i}$-piece to the set of red (resp. blue) monochromatic diagrams in the $Q_{i+1}$-piece, composed with a rotation of $-\frac{\pi}{2d}$ about the center of the polygon. \item One signature in a $Q_i$-piece is bijectively mapped to another one in the $Q_{i+1}$-piece if their blue (resp. red) monochromatic diagrams are identical and if their red (resp. blue) monochromatic diagrams are symmetric to each other about the reflection axis $\mathcal{L}$ of the blue (resp. red) monochromatic $Q$-diagram. Any such pair of signatures is said to be {\bf consecutive}. \end{enumerate} \item A {\bf connection piece} is the union of elementa which glue a pair of adjacent $Q$-pieces together. \end{enumerate} \end{definition} \vskip.1cm A $Q$-diagram is associated to a matrix $\left[\begin{smallmatrix}L_{0} \\L_{1} \end{smallmatrix}\right]$, where $L_{0}$ (resp. $L_{1}$) describes the monochromatic blue (resp. red) diagram in terms of pairs of indices of terminal vertices.
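For instance (a purely illustrative reading of this notation, anticipating the example of adjacent $Q$-pieces for $d=4$ treated below), in the $Q$-diagram whose long diagonals are $(1,11)$, $(0,10)$, $(3,9)$ and $(2,8)$, the column containing the pairs $(1,11)$ and $(0,10)$ records the $S$ tree formed by these two crossing long diagonals, and the column containing $(3,9)$ and $(2,8)$ records the second $S$ tree.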
A pairing of integers $i, j$, denoted by $(i,j)$, corresponds, in the signature, to a diagonal connecting the terminal vertices labeled respectively by $i$ and $j$. The integers in $L_{0}$ and $L_{1}$ belong respectively to the sets $\{0,2,\dots,4d-2\}$ and $\{1,3,\dots,4d-1\}$. \begin{definition} The matrix of the $Q$-diagram satisfies the following conditions: \begin{itemize} \item the matrix is of size $2\times (d-2)$, with one row per color, and each entry is a pairing of integers of the same parity; \item in each column, the two pairs of integers correspond to intersecting diagonals of opposite colors; \item the first and the last columns of the matrix contain the pairs of labels of the terminal vertices of the shortest long diagonals; \item paired integers lying in adjacent columns correspond to a pair of adjoining diagonals (i.e. diagonals lying in the boundary of the same 2-cell of $\mathbb{D}\setminus \sigma$). \end{itemize} \end{definition} \begin{lemma}~\label{L:rotateM} Consider the regular $2d$-gon formed by the blue (resp. red) terminal vertices in a signature. Let $r$ be the rotation through $\frac{\pi}{2d}$ about the center of the polygon. Let $(i,j)$ be a diagonal of the polygon. Then $r^{k}\colon (i,j)\mapsto (i+2k,j+2k)\mod 4d$. \end{lemma} \begin{proof} Let us proceed by induction on the angle $\frac{k\pi}{2d}$. \begin{enumerate} \item {\sl Base case.} Let $k=1$. Let us rotate the diagonal $(i,j)$ by $r$, the rotation through $\frac{\pi}{2d}$. Since $i,j\in\{1,3,\dots,4d-1\}$ (resp. $\{2,4,\dots,4d\}$) are the vertices of a regular $2d$-gon, the rotation maps the diagonal $(i,j)$ to the diagonal $(i+2,j+2)$. \item {\it Induction case.} Suppose that for a given $k$ in $\{1,...,2d\}$ the statement is true: the diagonal $(i,j)$ rotated by the angle $\frac{k\pi}{2d}$, i.e. by $r^{k}$, is mapped onto the diagonal $(i+2k,j+2k)$. Let us show that the statement is true for $k+1$. We rotate the diagonal $(i,j)$ by the angle $\frac{(k+1)\pi}{2d}$. By the induction hypothesis, rotating by $\frac{k\pi}{2d}$ maps $(i,j)$ onto $(i+2k,j+2k)$. Rotating the diagonal $(i+2k,j+2k)$ by an angle of $\frac{\pi}{2d}$ maps it, by the base case, onto the diagonal $(i+2k+2,j+2k+2)$ {\it i.e.} $(i+2(k+1),j+2(k+1))$.\end{enumerate} \end{proof} \begin{example}{Adjacent $Q$-pieces of the inclusion diagram for $d=4$} In this example, we detail the construction by induction of the inclusion diagram for $d=4$. \begin{itemize} \item There are eight $Q$-pieces. Each $Q$-piece satisfies the relations in figure~\ref{F:2Qp}. \item Each pair of adjacent $Q$-pieces is connected by a connection piece. 
\item Two consecutive $Q$-diagrams are related by the following commutative diagram: \begin{center} \begin{tikzpicture}[scale=0.6] \node(a) at (0,2){$\scriptstyle\left[\begin{smallmatrix}i,i+6\\i+15,i+5\end{smallmatrix}\right]\left[\begin{smallmatrix}i+14,i+8\\i+13,i+7\end{smallmatrix}\right]$}; \node(b1) at (-6,1) {$\scriptstyle\left|\begin{smallmatrix}i+14\\i+8\end{smallmatrix}\right|\scriptstyle\left|\begin{smallmatrix}i+6\\i\end{smallmatrix}\right|$}; \node(b2) at (6,1){$\scriptstyle\left|\begin{smallmatrix}i+8\\i+14\end{smallmatrix}\right|\left[\begin{smallmatrix}i,i+6\\i+5,i+5\end{smallmatrix}\right]$}; \node(c1) at (-6,-1){$\scriptstyle\left|\begin{smallmatrix}i+4\\i+8\end{smallmatrix}\right|\left[\begin{smallmatrix}i+1,i+7\\i,i+6\end{smallmatrix}\right]$}; \node(c2) at (6,-1){$\scriptstyle\left|\begin{smallmatrix}i\\i+6\end{smallmatrix}\right|\left|\begin{smallmatrix}i+8\\i+14\end{smallmatrix}\right|$}; \node(d) at (0,-2){$\scriptstyle\left|\begin{smallmatrix}i,i+6\\i+1,i+7\end{smallmatrix}\right|\left|\begin{smallmatrix}i+8,1+14\\i+9,i+15\end{smallmatrix}\right|$}; \draw (a)--(b1); \draw (a)--(b2); \draw (b1)--(c1); \draw (b2)--(c2); \draw (d)--(c1); \draw (d)--(c2); \end{tikzpicture} \end{center} \end{itemize} In the following paragraph, the explicit construction of the first two $Q$-pieces is done. An illustration of this construction is in Figure~\ref{F:2Qp}. \begin{enumerate} \item Let us start with the first $Q$-piece, having the $Q$ diagram $\left[\begin{smallmatrix}1,11&3,9 \\0,10 &2,8 \end{smallmatrix}\right]$. \begin{enumerate} \item Apply a Whitehead-move to the pair of blue diagonals $(2,8),(4,6)$, where $(2,8)$ belongs to the $Q$-diagram and $(4,6)$ is a short adjacent diagonal, in order to obtain $\left[\begin{smallmatrix}1,11 \\0,10 \end{smallmatrix}\right]\left|\begin{smallmatrix}9\\3\end{smallmatrix}\right|$. So, one obtains an $FS$-signature. Let us deform the long red diagonal in the $F$-tree $\left|\begin{smallmatrix}9\\3\end{smallmatrix}\right|$ with the short red diagonal in the $M$ tree $\left|\begin{smallmatrix}5\\7\end{smallmatrix}\right|$: it gives an $M$ tree. So, there remains only one $S$ diagram: $\left[\begin{smallmatrix}1,11 \\0,10 \end{smallmatrix}\right]$. The $F$-signatures, obtained from this $S$-signature by a minimal number of Whitehead moves are: $\left|\begin{smallmatrix}7\\1\end{smallmatrix}\right|,\left|\begin{smallmatrix}0\\10\end{smallmatrix}\right|, \left|\begin{smallmatrix}10\\0\end{smallmatrix}\right|,\left|\begin{smallmatrix}11\\1\end{smallmatrix}\right|,\left|\begin{smallmatrix}1\\11\end{smallmatrix}\right|,\left|\begin{smallmatrix}4\\10\end{smallmatrix}\right|$. \item Deform the pair of blue diagonals $(0,10)$,$(12,14)$: this gives an $SF$-signature $\left|\begin{smallmatrix}11\\1\end{smallmatrix}\right|\left[\begin{smallmatrix}3,9 \\2,8 \end{smallmatrix}\right]$. Deform the long red diagonal in the $F$-tree $\left|\begin{smallmatrix}11\\1\end{smallmatrix}\right|$ with the short red one, in the $M$ tree $\left|\begin{smallmatrix}13\\15\end{smallmatrix}\right|$. This gives also an $M$ tree and so, there remains one $S$ diagram: $\left[\begin{smallmatrix}3,9 \\2,8 \end{smallmatrix}\right]$. 
The $F$-signatures which are obtained from this $S$ diagram by a minimal number of deformation operations are $\left|\begin{smallmatrix}15\\9\end{smallmatrix}\right|,\left|\begin{smallmatrix}2\\8\end{smallmatrix}\right|, \left|\begin{smallmatrix}8\\2\end{smallmatrix}\right|,\left|\begin{smallmatrix}12\\2\end{smallmatrix}\right|,\left|\begin{smallmatrix}3\\9\end{smallmatrix}\right|,\left|\begin{smallmatrix}9\\3\end{smallmatrix}\right|$. \end{enumerate} \item The generic signatures of an adjacent $Q$-piece to the previous one are described below. This construction is done in two steps. \begin{enumerate} \item The adjacent $Q$-piece contains the following $Q$-diagram $\left[\begin{smallmatrix}0,10 &2,8 \\ 15,9&1,7 \end{smallmatrix}\right]$ and the $S$-signature $\left[\begin{smallmatrix}1,7\\2,8\end{smallmatrix}\right]$, which is obtained in one Whitehead moves, vie two possible ways. The first possibility is to deform a pair of blue diagonals giving the $FS$-signature $\left|\begin{smallmatrix}10\\0\end{smallmatrix}\right|\left[\begin{smallmatrix}2,8 \\1,7 \end{smallmatrix}\right]$. The second possibility is to deform the pair of red diagonals giving an $FS$-signature $\left|\begin{smallmatrix}15\\9\end{smallmatrix}\right|\left[\begin{smallmatrix}2,8 \\1,7 \end{smallmatrix}\right]$. The $S$ signature $\left[\begin{smallmatrix}2,8 \\1,7 \end{smallmatrix}\right]$ is adjacent after one Whitehead move to the following $F$-signatures: $\left|\begin{smallmatrix}15\\9\end{smallmatrix}\right|$, $\left|\begin{smallmatrix}9\\15\end{smallmatrix}\right|$, $\left|\begin{smallmatrix}11\\1\end{smallmatrix}\right|$, $\left|\begin{smallmatrix}14\\8\end{smallmatrix}\right|$, $\left|\begin{smallmatrix}2\\8\end{smallmatrix}\right|$, $\left|\begin{smallmatrix}8\\2\end{smallmatrix}\right|$. \item The $Q$-diagram $\left[\begin{smallmatrix}0,10 &2,8 \\ 15,9&1,7 \end{smallmatrix}\right]$ is also adjacent to the $S$-signature $\left[\begin{smallmatrix}15,9\\0,10\end{smallmatrix}\right]$. This is obtained by deforming a pair of red diagonals giving the $FS$-signature $\left|\begin{smallmatrix}10\\0\end{smallmatrix}\right|\left[\begin{smallmatrix}15,9 \\0,10 \end{smallmatrix}\right]$, or a pair of blue diagonals giving an $FS$-signature $\left|\begin{smallmatrix}8\\2\end{smallmatrix}\right|\left[\begin{smallmatrix}2,8 \\1,7 \end{smallmatrix}\right]$ . The $S$ signature $\left[\begin{smallmatrix}15,9 \\0,10 \end{smallmatrix}\right]$ is adjacent after one Whitehead move to the following $F$-signatures $\left|\begin{smallmatrix}7\\1\end{smallmatrix}\right|,\left|\begin{smallmatrix}1\\7\end{smallmatrix}\right|, \left|\begin{smallmatrix}6\\0\end{smallmatrix}\right|,\left|\begin{smallmatrix}3\\9\end{smallmatrix}\right|,$ $ \left|\begin{smallmatrix}0\\10\end{smallmatrix}\right|$, $\left|\begin{smallmatrix}10\\0\end{smallmatrix}\right|$. \end{enumerate} \end{enumerate} \begin{figure} \caption{The first two adjacent $Q$-pieces for $d=4$} \label{F:2Qp} \end{figure} \begin{lemma} Let $d=4$ and $i\in\{0,..,2d-1\}$. Any pair of adjacent $Q$-pieces $(Q_i,Q_{i+1})$ can be constructed from the first two $(Q_1,Q_2)$. \end{lemma} \begin{proof} Any $Q$-piece in $\mathop{{}^{\textsc{D}}\text{Pol}}_4$ can be constructed by the following method. Let $k\in\{0,1,...,4d-1\}$. By lemma~\ref{L:rotateM}, adding $2k$ to all the pairs of integers of the matrix is equivalent to rotating the diagram by $\frac{k\pi}{2d}$ about the center of the disc. 
\begin{enumerate} \item Consider the $Q$-diagram $\left[\begin{smallmatrix}1+2k,11+2k&3+2k,9+2k \\0+2k,10+2k &2+2k,8+2k \end{smallmatrix}\right]$. Deform the pair of diagonals blue diagonals $(2+2k,8+2k),(4+2k,6+2k)$. This gives $\left[\begin{smallmatrix}1+2k,11+2k \\0+2k,10+2k \end{smallmatrix}\right]\left|\begin{smallmatrix}9+2k\\3+2k\end{smallmatrix}\right|$. This is an $FS$-signature. Reciprocally, deform the long red diagonal in $\left|\begin{smallmatrix}9+2k\\3+2k\end{smallmatrix}\right|$ with the short red diagonal in the $M$ tree $\left|\begin{smallmatrix}5+2k\\7+2k\end{smallmatrix}\right|$. It gives an $M$ tree, leaving the diagram to be an $S$-signature $\left[\begin{smallmatrix}1+2k,11+2k \\0+2k,10+2k \end{smallmatrix}\right]$. It is easy to obtain the $F$ signatures, obtained from the $S$-signature in a minimal number of Whitehead moves. Those signatures are: $\left|\begin{smallmatrix}7+2k\\1+2k\end{smallmatrix}\right|$, $\left|\begin{smallmatrix}0+2k\\10+2k\end{smallmatrix}\right|,$$\left|\begin{smallmatrix}10+2k\\0+2k\end{smallmatrix}\right|,$ $\left|\begin{smallmatrix}11+2k\\1+2k\end{smallmatrix}\right|$,$\left|\begin{smallmatrix}1+2k\\11+2k\end{smallmatrix}\right|,\left|\begin{smallmatrix}4+2k\\10+2k\end{smallmatrix}\right|$. \item Deform the pair of blue diagonals $(0+2k,10+2k),(12+2k,14+2k)$ in order to obtain $\left|\begin{smallmatrix}11+2k\\1+2k\end{smallmatrix}\right|\left[\begin{smallmatrix}3+2k,9+2k \\2+2k,8+2k \end{smallmatrix}\right]$. So, one obtains an $SF$-signature. Let us deform the long blue diagonal in the tree $\left|\begin{smallmatrix}11+2k\\1+2k\end{smallmatrix}\right|$ with the short blue diagonal in the $M$ tree $\left|\begin{smallmatrix}13+2i\\15+2i\end{smallmatrix}\right|$. This turns it into an $M$ tree and so, we have the $S$-signature $\left[\begin{smallmatrix}3+2k,9+2k \\2+2k,8+2k \end{smallmatrix}\right]$. The $F$-signatures which are obtained from this $S$-signature by a minimal number of deformation operations are $\left|\begin{smallmatrix}15+2k\\9+2k\end{smallmatrix}\right|,\left|\begin{smallmatrix}2+2k\\8+2k\end{smallmatrix}\right|, \left|\begin{smallmatrix}8+2k\\2+2k\end{smallmatrix}\right|,\left|\begin{smallmatrix}12+2k\\2+2k\end{smallmatrix}\right|,\left|\begin{smallmatrix}3+2k\\9+2k\end{smallmatrix}\right|,\left|\begin{smallmatrix}9+2k\\3+2k\end{smallmatrix}\right|$. \item The adjacent $Q$-piece to the previous one, contains the following $Q$-diagram: $\left[\begin{smallmatrix}0+2k,10+2k &2+2k,8+2k \\ 15+2k,9+2k &1+2k ,7+2k \end{smallmatrix}\right]$. The adjacent $S$-signature, via Whitehead move, is $\left[\begin{smallmatrix}1+2k,7+2k \\2+2k ,8+2k \end{smallmatrix}\right]$. It is obtained by deforming a pair of red diagonals giving the $FS$ diagram $\left|\begin{smallmatrix}10+2k \\0+2k \end{smallmatrix}\right|\left[\begin{smallmatrix}2+2k ,8+2k \\1+2k ,7+2k \end{smallmatrix}\right]$, or a pair of blue diagonals giving an $FS$-signature $\left|\begin{smallmatrix}15+2k \\9+2k \end{smallmatrix}\right|\left[\begin{smallmatrix}2+2k ,8+2k \\1+2k ,7+2k \end{smallmatrix}\right]$ . 
The $S$-signature $\left[\begin{smallmatrix}2+2k ,8+2k \\1+2k ,7+2k \end{smallmatrix}\right]$ is adjacent after one deformation operation to the following $F$-signatures: $\left|\begin{smallmatrix}15+2k \\9+2k \end{smallmatrix}\right|$, $\left|\begin{smallmatrix}9+2k \\15+2k \end{smallmatrix}\right|$, $\left|\begin{smallmatrix}11+2k \\1+2k \end{smallmatrix}\right|$, $\left|\begin{smallmatrix}14+2k \\8+2k \end{smallmatrix}\right|$, $\left|\begin{smallmatrix}2+2k \\8+2k \end{smallmatrix}\right|$, $\left|\begin{smallmatrix}8+2k \\2+2k \end{smallmatrix}\right|$. \item The $Q$-piece contains the $Q$-diagram $\left[\begin{smallmatrix}0+2k ,10+2k &2+2k ,8+2k \\ 15+2k ,9+2k &1+2k ,7+2k \end{smallmatrix}\right]$ and an adjacent $S$-signature $\left[\begin{smallmatrix}15+2k ,9+2k \\0+2k ,10+2k \end{smallmatrix}\right]$, obtained by deforming a pair of blue diagonals giving the $FS$ diagram $\left|\begin{smallmatrix}10+2k \\0+2k \end{smallmatrix}\right|\left[\begin{smallmatrix}15+2k ,9+2k \\0+2k ,10+2k \end{smallmatrix}\right]$, or a pair of red diagonals giving an $FS$-signature $\left|\begin{smallmatrix}8+2k \\2+2k \end{smallmatrix}\right|\left[\begin{smallmatrix}2+2k,8+2k \\1+2k ,7+2k \end{smallmatrix}\right]$. The $S$-signature $\left[\begin{smallmatrix}15+2k ,9+2k \\0+2k ,10+2k \end{smallmatrix}\right]$ is adjacent after one deformation operation to the following $F$-signatures: $\left|\begin{smallmatrix}7+2k \\1+2k \end{smallmatrix}\right|$, $\left|\begin{smallmatrix}1+2k \\7+2k \end{smallmatrix}\right|$, $\left|\begin{smallmatrix}6+2k \\0+2k \end{smallmatrix}\right|$, $\left|\begin{smallmatrix}3+2k \\9+2k \end{smallmatrix}\right|$, $\left|\begin{smallmatrix}0+2k \\10+2k \end{smallmatrix}\right|$, $\left|\begin{smallmatrix}10+2k \\0+2k \end{smallmatrix}\right|$. \end{enumerate} Note that here we only use the generic signatures and the codimension 1 signatures. To obtain the higher codimension signatures, we determine among the decomposition in $Q$-pieces the $B$, $O$ and $NC$ structures. \end{proof} \end{example} \section{Decomposition invariant under Coxeter groups} \subsection{Adjacency, chambers and galleries} In this section, we show that the decomposition in elementa (discussed previously) is invariant under a Coxeter group. In order to prove the main statement, we use the geometric properties of the $Q$-piece decomposition and the fact that signatures are invariant under polyhedral groups, since they are superimpositions of diagrams having a dihedral symmetry. Via this method of construction, we show explicitly the existence of chambers and galleries in the decomposition. \begin{lemma} The group of rotations acting on the $Q$-diagram is of order $2d$. \end{lemma} \begin{proof} Consider separately the blue and red monochromatic diagrams in the $Q$-diagram. The blue (resp. red) diagram has a mirror line $\mathcal{L}$ parallel to the long diagonals. Indexing the long diagonals from 1 to $d-2$, note that if $d$ is even then this mirror line lies between the diagonals $\frac{d-2}{2}$ and $\frac{d}{2}$; if $d$ is odd, then the mirror line lies on the diagonal $\frac{d-1}{2}$. Therefore, there exists a finite group acting independently on both monochromatic diagrams, which is of order $2d$ and is defined by $\langle r \mid r^{2d}=Id\rangle$, where $r$ is the rotation through $\frac{\pi}{2d}$. The group of rotations acting on the $Q$-diagram is therefore also of order $2d$. Thus, rotating the $Q$-diagram by an angle $\pi$ (i.e. $2d$ rotations by an angle $\frac{\pi}{2d}$) gives the identity. 
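As an illustration (our own arithmetic check, not needed for the argument): by lemma~\ref{L:rotateM}, the power $r^{2d}$ shifts every label by $2\cdot 2d=4d\equiv 0 \pmod{4d}$ and hence fixes every diagonal. For $d=4$, the orbit of the blue diagonal $(0,10)$ under $r$ is $(0,10)\mapsto(2,12)\mapsto(4,14)\mapsto(6,0)\mapsto(8,2)\mapsto(10,4)\mapsto(12,6)\mapsto(14,8)\mapsto(0,10)$, of length $2d=8$.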
\end{proof} \begin{lemma} For any $d>3$, there exist $2d$ $Q$-pieces in the decomposition. \end{lemma} \begin{proof} Apply Whitehead moves onto the $Q$-diagram such that in each of the new generic signatures there remains at least one long blue (or red) diagonal parallel to the mirror line. In one Whitehead move, one obtains the $M$ diagrams from an $F$ signature. The union of all the strata indexed by those signatures defines a $Q$-piece of the stratification. There are $2d$ such $Q$-pieces, since there exist $2d$ rotations of a $Q$-diagram by an angle of $\frac{\pi}{2d}$. \end{proof} \begin{example} Detail of a $Q$-piece for $d=4$ is given in figure~\ref{detailLego4}. \begin{figure} \caption{Some $Q$-piece details for $d=4$} \label{detailLego4} \end{figure} \end{example} To prove that the stratification is invariant under a Coxeter group, we describe the construction using an inductive procedure. \begin{lemma}\label{L:consec} Let $Q_1$ and $Q_2$ be two $Q$-diagrams, lying in a pair of adjacent $Q$-pieces. Then, their associated matrices satisfy \[Q_1=\left[\begin{smallmatrix}L_{0} \\L_{1} \end{smallmatrix}\right ],Q_2=\left[\begin{smallmatrix}L_{1} \\L_{2}:=L_{1}-1\end{smallmatrix}\right],\] where the symbol $L_{1}-1$ means that 1 is subtracted from each integer in $L_{1}$ modulo $4d$. \end{lemma} \begin{proof} Suppose that $Q_1=\left[\begin{smallmatrix}L_{0} \\L_{1} \end{smallmatrix}\right ]$. By hypothesis, since $Q_{2}$ belongs to an adjacent $Q$-piece, it is the superimposition of the blue (resp. red) monochromatic diagram of $Q_{1}$ and of a red monochromatic diagram which is the rotation by $-\frac{\pi}{2d}$, about the center of the polygon, of the red one of $Q_1$. Suppose that $L_{1}$ is the pairing of indices of terminal vertices (a diagonal is an edge connecting a pair of labelled end-vertices) of the monochromatic diagram common to $Q_{1}$ and $Q_{2}$. From lemma~\ref{L:rotateM} we have that $Q_{2}=\left[\begin{smallmatrix}L_{1} \\L_{0}-2 \end{smallmatrix}\right ]$. Now, since the monochromatic diagrams differ by a rotation about $\frac{-\pi}{4d}$, we have that $L_{1}=L_{0}-1$ and thus $L_{2}=L_{0}-2=L_{1}-1$, modulo $4d$. \end{proof} \begin{corollary}\label{C:cons} Any pair of consecutive diagrams lying in adjacent $Q$-pieces has respectively the matrices: \[Q_1=\left[\begin{smallmatrix}L_{0} \\L_{1} \end{smallmatrix}\right ],Q_2=\left[\begin{smallmatrix}L_{1} \\L_{2}:=L_{1}-1\end{smallmatrix}\right],\] where the symbol $L_{1}-1$ means that 1 is subtracted from each integer in $L_{1}$ modulo $4d$. \end{corollary} \begin{example} In the case of $d=3$, the adjacent $Q$-diagrams forming a spine are illustrated in Appendix A. The adjacent $Q$-diagrams for $d=4$ are given in the appendix, Fig.~\ref{F:lego4}. \end{example} In the following part, one Whitehead move is an operation on only one pair of diagonals of the same color. This corresponds to the modification of one generic signature into another one. \begin{lemma} Let $Q_1$ and $Q_2$ be two $Q$-diagrams, lying in adjacent $Q$-pieces. Then, these diagrams are glued to each other via an $F^{\otimes{d-2}}$ diagram. \end{lemma} \begin{proof} We induct on $d>3$ (lower degrees are irrelevant, since there the $Q$-diagrams do not exist). \begin{itemize} \item {\it Base case} $d=4$. The $Q$ diagram $[\begin{smallmatrix}L_{0} \\L_{1} \end{smallmatrix}]$ has two long blue diagonals and two long red diagonals. The pair of red and blue long diagonals bound the same 2-face in $\mathbb{D}$. 
Let us modify the red diagonals by a Whitehead move: this operation leaves the (blue) long diagonals fixed and thus gives a signature of type $F^{\otimes{2}}$. Therefore, a $Q$-diagram is connected to $F^{\otimes{2}}$ in one Whitehead move. Consider in the signature $F^{\otimes{2}}$ two pairs of adjoining short red diagonals. In each of those pairs, only one short diagonal intersects a long blue diagonal. Deform each pair simultaneously by a Whitehead move. One obtains a signature of type $SS$, with matrix $[\begin{smallmatrix}L_{1} \\L_{1}-1\end{smallmatrix}]$. \\ Before we consider the general induction case, we enumerate the types of generic signatures obtained in one Whitehead move from a $Q$-diagram. One deformation of the $Q$-diagram gives a signature belonging to one of the following three types: \begin{enumerate} \item $S^{\otimes{d-2}}$ is modified in one Whitehead move into a signature $S^{\otimes{d-3}}F$ (or to $FS^{\otimes{d-3}}$). This is obtained by modifying a pair of diagonals (one in an $M$ tree and one in an adjoining $S$ tree, both of the same color). \\ \item $S^{\otimes{d-2}}$ is deformed in one deformation step to $S^{\otimes{d-4}}FF$ (or to $FFS^{\otimes{d-4}}$). This is obtained by modifying a pair of long diagonals: one being the shortest long diagonal of the signature, the other one adjoining it. \\ \item $S^{\otimes{d-2}}$ is deformed in one deformation step to $S\cdots SFFS\cdots S$, where the number of $S$ trees is $d-4$. This is obtained by deforming a pair of adjoining long diagonals (which are not the shortest long diagonals of the signature). \end{enumerate} \item {\it Induction case}. Consider a $Q$-diagram in $\Sigma_{d+1}$. Assume that the statement is true for a pair of adjacent $Q$-pieces where the $Q$-diagrams belong to $\Sigma_{d}$ and are of type $S^{\otimes{d-2}}$. We will prove that for two signatures of type $S^{\otimes{d-1}}$ (i.e. with $d-1$ $S$ trees) lying in adjacent $Q$-pieces, there exists a sequence of deformations containing a signature of type $F^{\otimes{d-1}}$. Consider the signature $S^{\otimes{d-1}}$: it contains one $S$-tree added at the right (or left) of $S^{\otimes{d-2}}$ in the signature. Using the induction hypothesis for $S^{\otimes{d-2}}$, it is known that there exists a sequence of deformations from $S^{\otimes{d-1}}$ to a signature $SF^{\otimes{d-2}}$ (resp. $F^{\otimes{d-2}}S$), where $S$ is a tree given by the crossing of the shortest long diagonals in the signature. It remains to deform a pair of diagonals lying in the $S$ tree and in its adjoining $M$ tree, so as to obtain a signature $F^{\otimes{d-1}}$. Now, starting from $F^{\otimes{d-1}}$, there exists again, by the induction hypothesis, a sequence of Whitehead moves giving the signature $S^{\otimes{d-2}}F$ (resp. $FS^{\otimes{d-2}}$). Finally, one more Whitehead move, applied to the pair of short diagonals lying in the adjoining trees $F$ and $M$, induces the $S^{\otimes{d-1}}$ signature. \end{itemize} \end{proof} \subsection{Main theorems and their proofs} The next result allows a decomposition in chambers and galleries of the configuration space. \begin{lemma}\label{L:O2} The group of symmetries of a $Q$-piece is $\langle r | r^{2}= Id \rangle$. \end{lemma} \begin{proof} Let us describe the construction of a $Q$-piece. Recall, from the previous lemmas, that the monochromatic diagrams in a $Q$-diagram have a mirror line parallel to the long diagonals. 
The set of possible Whitehead moves applied to the set of pairs of diagonals on the right hand side of the reflection line of the $Q$-diagram, is the isomorphic to the set of possible Whitehead moves applied to the set of pairs of diagonals in the left hand side of the reflection line. Therefore, the set of adjacent signatures to the $Q$-diagram, via Whitehead moves, forming a $Q$-piece is invariant under the group $\langle r | r^{2}= Id \rangle$. \end{proof} \begin{lemma} If $d$ is even, then the $Q$-piece has two reflections, in the sense of Coxeter groups. \end{lemma} \begin{proof} From lemma~\ref{L:O2}, it is known that in a $Q$-piece there exists one reflection hyperplane, coming across the $Q$-diagram. Now, consider $F$ signatures in the $Q$-piece such that there is one $F$ tree of long diagonal $(i,j)$ where $j=i+6\mod 4d$. Its reflection gives an $F$ signature with one $F$ tree of long diagonal $(i+2d,j+2d) \mod 4d$. Notice that the short diagonals of the opposite color are the same if $d$ is even. We modify $(i,j)$ by a Whitehead move with the diagonal $(i+2,i+4)\mod 4d$, so that it becomes $(i,i+2)$ $(i+4,i+6)$. By symmetry, we proceed on $ (i+2d,j+2d)$ and $(i+2+2d,i+4+2d)$ so, that the Whitehead operation gives the pair $(i+2d,i+2+2d) (i+4+2d,i+6+2d) \mod 4d$. Hence, we obtain the same $M$ signatures. Therefore, there is a second reflection hyperplane coming across the $M$ signatures in the $Q$-piece. \end{proof} \begin{lemma}\label{Pr:G} Let $Z$ be a couple of adjacent $Q$-pieces and let $Y$ be the stratification. Then ${\bf p}:Y\to Z$ is a Galois covering, with Galois group of order $d$. \end{lemma} \begin{proof} Let us define ${\bf p}:Y\to Z$ where $Y=\cup_{\rho \in G}Z^{\rho}$ and $G$ is a cyclic group of finite order $d$, where $Y$ is connected. Each inverse image ${\bf p}^{-1}(A_{\sigma})$ of $A_{\sigma}\in Z$ is constituted from classes, having signatures which are equivalent up to a rotation of $\sigma$. So, any action $\rho \in Aut_{Z}(Y)$ on $\sigma$ gives a signature $\sigma'$ which belongs to ${\bf p}^{-1}(A_{\sigma})$. We show that for $Y\times_{Z} Y =\{(z,z')\in Y\times Y, {\bf p}(z)={\bf p}(z')\}$, the map \[\phi: G\times Y \to Y\times_{Z} Y, \] \[(g,z)\mapsto (z,gz)\] is a homeomorphism. \begin{itemize} \item The map is a bijection. First let us show that the map is injective. Consider $\phi(g,z)=\phi(g',z')$ i.e. $(z,gz)=(z',g'z')\in Y\times_{Z} Y$. Then, $z=z'$ and $g'=g$ and in particular $(g,z)=(g',z')$ in $G\times Y$, so the map is injective. The map is surjective since for every $(z,gz)\in Y\times_{Z} Y$, there exists at least one element in $(g,z)\in G\times Y$ such that $\phi((g,z))=(z,gz)$. \item The map is bicontinuous because the group $G$ continuously acts on $Y$. \end{itemize} So, the map ${\bf p}:Y\to Z$ satisfies the definition of a Galois covering for an order $d$ Galois group. \end{proof} \begin{lemma}\label{L:rotate} Let $S_1=[\begin{smallmatrix}L_{0}+2d \\L_{0}+2d-1 \end{smallmatrix}]$ and $S_2=[\begin{smallmatrix}L_{0}-2d \\L_{0}-2d-1 \end{smallmatrix}]$ be two $S$ signatures. Then, $S_1=S_2$. \end{lemma} \begin{proof} Consider any diagonal $(i,j)\in L_0$, where $i,j$ are integers of the same parity modulo $4d$. We have $i\equiv i \mod 4d\iff i+4d\equiv i\mod 4d \iff i+2d\equiv i-2d\mod 4d$. Proceeding similarly for $j$ we have that $S_1=S_2$. \end{proof} \noindent \underline{{\bf Decomposition in $Q$-pieces}} Consider a $Q$-piece. Let LHS (resp. RHS) denote the left (resp. 
right) hand side of the $Q$-piece about its reflection axis; let $S_1$ (resp. $S_2$) be a signature in the LHS (resp. RHS) of the $Q$-piece, such that $S_1=[\begin{smallmatrix}L_{0} \\L_{0}-1 \end{smallmatrix}],S_2=[\begin{smallmatrix}L_{0}+2d \\L_{0}-1+2d \end{smallmatrix}]$. Let us discuss the relations between each pair of adjacent $Q$-pieces. \begin{enumerate} \item Consider a $Q_1$-piece. Any signature $\sigma_1$ in the LHS is associated to $[\begin{smallmatrix}L_0\\L_0-1\end{smallmatrix}]$. Its reflection $r(\sigma_1)$ in the RHS is $\sigma_1$ rotated by an angle $\pi$ and is given by $[\begin{smallmatrix} L_0+2d\\ L_0-1+2d \end{smallmatrix}]$. Consider the adjacent $Q_2$-piece. Applying lemma~\ref{L:consec}, any signature $\sigma_2$ of the $Q_2$-piece is obtained by the copy-paste and turn step. Therefore, if $\sigma_2$ lies in the LHS part of the $Q_2$-piece, then its matrix is $[\begin{smallmatrix}L_0-1\\L_0-2\end{smallmatrix}]$. Its reflection in the RHS part is $[\begin{smallmatrix} L_0+2d-1\\ L_0-1+2d-2 \end{smallmatrix}].$ \\ \item More generally, take any signature in the LHS of the $i$-th $Q$-piece. Its matrix is $[\begin{smallmatrix}L_0-i+1\\L_0-i\end{smallmatrix}]$ and its reflection in the RHS has matrix $[\begin{smallmatrix} L_0-i+1+2d\\ L_0-i+2d \end{smallmatrix}]$. Applying lemma~\ref{L:consec}, the consecutive signatures are respectively of type $[\begin{smallmatrix}L_0-i\\L_0-(i+1)\end{smallmatrix}]$ and $[\begin{smallmatrix} L_0-i+2d\\L_0-(i+1)+2d \end{smallmatrix}]$. \\ \item For the $(2d-1)$-th $Q_{2d-1}$-piece, we have $[\begin{smallmatrix}L_0-(2d-1)\\L_0-2d\end{smallmatrix}]$ and $[\begin{smallmatrix} L_0+2d-(2d-2)\\ L_0+2d-1-(2d-1) \end{smallmatrix}]$. The consecutive signatures are respectively $[\begin{smallmatrix}L_0-(2d-1)\\L_0-2d \end{smallmatrix}]$ and $[\begin{smallmatrix} L_0+2d-2d\\ L_0+2d-1-2d \end{smallmatrix}]$. \begin{remark} Note that after $4d$ iterations of this procedure the identity is obtained. \end{remark} \end{enumerate} Below we present a part of the collection of glued $Q$-pieces; the connections are represented by a thin vertical double arrow, deformations by a thick horizontal double arrow. The final structure is obtained by gluing the top and the bottom of Fig.~\ref{F:Q}, along the long and thin horizontal arrows. \begin{figure} \caption{Detail of the collection of glued $Q$-pieces.} \label{F:Q} \end{figure} \begin{theorem}\label{Th:mob} The decomposition of $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$ into $4d$ ($d>2$) Weyl-Coxeter chambers is induced by the topological stratification of $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$ in elementa and is invariant under the Coxeter group given by the presentation: \[W=\langle r,s_1,...,s_{2d} | r^2=Id, (rs_j)^{2p}=Id, p|d\rangle, p\in \mathbb{N}^{*}.\] \end{theorem} \begin{proof} Let us recall from lemma~\ref{L:O2} that each $Q$-piece is invariant under an automorphism group of order 2. Moreover, it follows from lemma~\ref{Pr:G} that the stratification is invariant under a cyclic group of order $2d$. We show that the stratification is invariant under the Coxeter group $W=\langle r,s_1,...,s_{2d} | r^2=Id, (rs_j)^{2p}=Id, p|d\rangle$, using the $Q$-procedure described previously. Between any pair of adjacent $Q$-pieces there exists a reflection hyperplane. Let us consider the pair of $i$-th and $(i+1)$-th adjacent $Q$-pieces, where $i\in \{0,...,2d-1\}$. Denote this pair by $(Q_{i},Q_{i+1})$. The set of blue (resp. 
red) monochromatic diagrams in the $Q_{i}$-piece is bijectively mapped, by the identity, to the set of blue (resp. red) monochromatic diagrams in the $Q_{i+1}$-piece: this is the copy and paste step. As for the set of red (resp. blue) monochromatic diagrams, they are bijectively mapped to the set of red (resp. blue) monochromatic diagrams in the $Q_{i+1}$-piece as follows: after making a copy and paste step, the red (resp. blue) diagrams are rotated by $\pi$ about the center of the diagram. In other words, each red monochromatic diagram in the $Q_{i+1}$-piece is symmetric to its pre-image in the $Q_{i}$-piece, about the vertical reflection axis of the blue (resp. red) monochromatic diagram in the $Q$-diagram. Therefore, we have an order-preserving bijection from the set of signatures in the $Q_i$-piece to the set of signatures in the $Q_{i+1}$-piece: the incidence relations in both $Q$-pieces are preserved. From definition~\ref{D:Q} of adjacent $Q$-pieces it follows that the adjacent $Q$-pieces are symmetric about a reflection hyperplane. The next $Q$-piece, indexed $(i+2)$, is obtained by making a copy and paste step from the set of signatures in the $i$-th $Q$-piece and by rotating those signatures by $\frac{\pi}{2d}$ about the center of the signature. Using the same argument as previously, we thus have $2d$ reflection hyperplanes. To conclude, this implies that there exist $2d$ horizontal reflection hyperplanes and one vertical reflection hyperplane. \end{proof} \begin{corollary} For $d>2$, there exist $4d$ chambers in this stratification, in the sense of Weyl-Coxeter. One $Q$-piece is the union of two chambers. \end{corollary} \begin{corollary}\label{Th:S} Let $\mathcal{W}$ be the inclusion diagram. Then $\mathcal{W}$ is invariant under the Klein group $\mathbb{Z}_{2}\rtimes\mathbb{Z}_{2}$. \end{corollary} \subsection{Braid towers} We have shown the existence of a stratification of $\mathop{{}^{\textsc{D}}\text{Pol}}_d$ which forms a decomposition of this space that is invariant under a Coxeter group. Since $\mathop{{}^{\textsc{D}}\text{Pol}}_d$ is isomorphic to the configuration space of $d$ marked points on the complex plane, denoted $Conf(\mathbb{C})_d$, we therefore have a stratification of this configuration space which is invariant under this polyhedral Coxeter group. On the other hand, configuration spaces are related to the theory of braids. By a result of Fadell, Fox and Neuwirth~\cite{FaN62, FoN62}, it is well known that the braid group with $d$ strands is the fundamental group of the configuration space $Conf(\mathbb{C})_d$. It is interesting to explain our geometric approach to braid theory. \begin{theorem} Any braid relations following from the natural inclusion of braids $i^{\star}: B_d \hookrightarrow B_{d+1}$ can be described using the decomposition in $Q$-pieces. \end{theorem} \begin{proof} \hspace{1cm}(i) The first application of the decomposition in $Q$-pieces appears in the case of the embedding $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}\hookrightarrow \mathop{{}^{\textsc{D}}\text{Pol}}_{d+1}$ (table~\ref{T:embed}). More concretely, this $Q$-decomposition lists and enumerates all the possible intertwinings of the roots in $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}$ in an explicit and geometric way. Indeed, starting with a given configuration of polynomial roots (corresponding to the one in a $Q$-diagram), we obtain all the possible intertwinings that may happen between the roots. 
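In computational terms, the chains of $Q$-diagrams recorded in table~\ref{T:embed} can be generated by iterating the rule of corollary~\ref{C:cons}, namely $L_{i+1}=L_{i}-1$ modulo $4d$. The following small sketch (our own illustration, with an ad hoc encoding of the matrices as tuples of pairs of labels; it is not part of the construction) produces the successive matrices and checks the remark above that $4d$ iterations return to the starting $Q$-diagram.
\begin{verbatim}
def next_q(matrix, d):
    # Corollary C:cons: the next Q-diagram has the old bottom row as its
    # top row, and the old bottom row with 1 subtracted from every label
    # (modulo 4d) as its bottom row.
    top, bottom = matrix
    new_bottom = tuple(tuple((x - 1) % (4 * d) for x in pair)
                       for pair in bottom)
    return (bottom, new_bottom)

# Example for d = 3: the starting Q-diagram [1,7 ; 12,6] of the table,
# written with 12 = 0 modulo 12.
d = 3
start = (((1, 7),), ((0, 6),))
q = start
for _ in range(4 * d):
    q = next_q(q, d)
assert q == start   # after 4d iterations the identity is obtained
\end{verbatim}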
\small{\begin{table}[h] \begin{center} \renewcommand{1.5}{1.5} \begin{tabular}{|c|c|c|c|c|c|c|c|}\hline d &3&4&5&6&7 \\ \hline mod&12&16&20&24&28 \\ &$0\sim 12$&$0\sim 16$&$0\sim 20$&$0\sim 24$&$0\sim 28$ \\ \hline &&&&&$\left[\begin{smallmatrix} 21,15&23,13&25,11&27,9\,&1,7\, \\20,14&22,12&24,10&26,8&28,6\,\end{smallmatrix}\right]$ \\ &&&&&$\left[\begin{smallmatrix} 20,14&22,12&24,10&26,8\,&28,6\,\\19,13&21,11&23,9\,&25,7\,&27,5\,\end{smallmatrix}\right]$ \\\cline{5-5} &&&&$\left[\begin{smallmatrix} 19,13&21,11&23,13&\,1,7\ \\18,12&20,10&22,12&24,6\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 19,13&21,11&23,9\,&25,7\,&27,5\,\\18,12&20,10&22,8\,&24,6\,&26,4\,\end{smallmatrix}\right]$\\ &&&&$\left[\begin{smallmatrix} 18,12&20,10&22,12&24,6\,\\17,11&19,9\, &21,7\, &23,5\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 18,12&20,10&22,8\,&24,6\,&26,4\,\\17,11&19,9\ &21,7\,&23,5\, &25,3\,\end{smallmatrix}\right]$ \\ \cline{4-4} &&&$\left[\begin{smallmatrix} 17,11&19,9\ &\,1,7\,\\16,10&18,8\ &20,6\,\end{smallmatrix}\right]$ &$\left[\begin{smallmatrix} 17,11&19,9\,&21,7\,&23,5\,\\16,10&18,8\,&20,6\,&22,4\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 17,11&19,9\,&21,7\ ,&23,5\,&25,3\,\\16,10&18,8\,&20,6\ &22,4\ &24,2\,\end{smallmatrix}\right]$ \\ &&&$\left[\begin{smallmatrix} 16,10\,&18,8\,&20,6\,\\15,9\,&17,7\,&19,5\, \end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 16,10&18,8\,&20,6\,&22,4\,\\15,9\,&17,7\,&19,5\,&21,3\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 16,10&18,8\,&20,6\,&22,4\, &24,2\,\\15,9&17,7\,&19,5\ &21,3\ &23,1\,\end{smallmatrix}\right]$ \\ \cline{3-3} &&$\left[\begin{smallmatrix}15,9\,&\,1,7\,\\14,8\,&16,6\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 15,9\ &17,7\,&19,5\,\\14,8\,Ê&16,6\,Ê&18,4\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 15,9\,&17,7\,&19,5\,&21,3\,\\14,8\,&16,6\,&18,4\,&20,2\,\end{smallmatrix}\right]$ &$\left[\begin{smallmatrix} 15,9\,&17,7\,\ &19,5\,&21,3\ &23,1\,\\14,8\,&16,6\ &18,4\,&20,2\ &22,28\,\end{smallmatrix}\right]$ \\ &&$\left[\begin{smallmatrix}14,8\,&16,6\,\\13,7\,&15,5\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 14,8\,&16,6\,&18,4\,\\13,7\,&15,5\,&17,3\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 14,8\,&16,6\,&18,4\,&20,2\,\\13,7\,&15,5\,&17,3\,&19,1\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 14,8\,&16,6\ &18,4\,&20,2\ &22,28\,\\13,7\,&15,5\ &17,3\,&19,1\ &21,27\end{smallmatrix}\right]$\\ \cline{2-2} &$\left[\begin{smallmatrix}\,1,7\,\\12,6\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix}13,7\,&15,5\,\\12,6\,&14,4\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 13,7\,&15,5\,&17,3\,\\12,6\,&14,4\,&16,2\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 13,7\,&15,5\,&17,3\,&19,1\,\\12,6\,&14,4\,&16,2\,&18,24\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 13,7\,&15,5\,&17,3\,&19,1\, &21,27\\12,6\,&14,4\,&16,2\,&18,28\ &20,26\end{smallmatrix}\right]$ \\ &$\left[\begin{smallmatrix}12,6\\11,5\\\end{smallmatrix}\right]$&$\left[\begin{smallmatrix}12,6\,&14,4\,\\11,5\,&13,3\,\\\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 12,6\,&14,4\,&16,2\,\\11,5\,\,&13,3\,&15,1\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 12,6\,&14,4\,&16,2\,&18,24\\11,5\,&13,3\,&15,1\,&17,23\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 12,6\,&14,4\, &16,2\,&18,28\ &20,26\\11,5\,&13,3\,&15,1\,&17,27\ &19,25\end{smallmatrix}\right]$\\ 
&$\left[\begin{smallmatrix}11,5\,\\10,4\,\\\end{smallmatrix}\right]$&$\left[\begin{smallmatrix}11,5\,&13,3\,\\10,4\,&12,2\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 11,5\,&13,5\,&15,1\,\\10,4\,&12,2\,&14,20\\\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 11,5\,&13,3\,&15,1\,&17,23\\10,4\,&12,2\,&14,24&16,22\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 11,5\,&13,3\,&15,1\,&17,27\ &19,25\\10,4\,&12,2\,&14,28\,&16,26\ &18,24\end{smallmatrix}\right]$\\ &$\left[\begin{smallmatrix}10,4\,\\\,9,3\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix}10,4&12,2\\9,3&11,1\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 10,4\,&12,4\,&14,20\\9,3\,&11,1\,&13,19\\\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 10,4\,&12,2\,&14,24&16,22\\9,3\,&11,1\,&13,23&15,21\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 10,4\ &12,2\, &14,28&16,26&18,24\\ \,9,3\,&11,1\, &13,27&15,25&17,23\end{smallmatrix}\right]$\\ &$\left[\begin{smallmatrix} 9,3\,&\\ \, 8,2\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} \,9,3\,&11,1\,\\ \,8,2\,&10,16\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 9,3\,&11,1\,&13,19\\8,2\,&10,20&12,18\\\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 9,3\,&11,1\,&13,23&15,21\\ 8,2\,&10,24&12,22&14,20\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} \,\,9,3\,&11,1\, &13,27&15,25&17,23 \\ \,8,2\,&\,10,28\, &12,26&14,24&16,22\end{smallmatrix}\right]$\\ &$\left[\begin{smallmatrix}\, 8,2\,\\ \,7,1\,\end{smallmatrix}\right]$&$\left[\begin{smallmatrix}\, 8,2\,&10,16\,\\\,7,1\,&\,9,15\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} 8,2\,\,&10,20&12,18\\7,1&9,19&11,17\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} \,8,2\,&10,24&12,22&14,20\\ \,7,1\,&\,9,23&11,21&13,19\end{smallmatrix}\right]$&$\left[\begin{smallmatrix} \,8,2\,&\,10,28&12,26&14,24&16,22\\ \,7,1\,&\,9,27&11,25&13,23&15,21\end{smallmatrix}\right]$\\ \hline \end{tabular} \end{center} \caption{Table of inclusions}\label{T:embed} \end{table} } More precisely, a pair of intertwining roots corresponds to a set of 4 generic signatures - connected between each other by 4 codimension 1 signatures, forming a quadrangle, in the nerve (see figure~\ref{F:lego4}, for $d=4$ ). The natural embedding $Q(d) \hookrightarrow Q(d+1)$ gives an inductive and explicit method of the embedding $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}\hookrightarrow \mathop{{}^{\textsc{D}}\text{Pol}}_{d+1}$. Indeed, the procedure of $Q(d) \hookrightarrow Q(d+1)$ is equivalent to adding a tree in the $Q(d)$ in trigonometric way. As a remark, note that the embedding $\mathop{{}^{\textsc{D}}\text{Pol}}_{d} \hookrightarrow \mathop{{}^{\textsc{D}}\text{Pol}}_{d+1}$ extends to the embedding $ j: \mathbb{C}^{d} \hookrightarrow \mathbb{C}^{d+1}$, defined as follows. Take a polynomial $P$ in $\mathbb{C}^{d}$ with $d$ roots (possibly with some multiplicities), $ z_1,\dots,z_d$. The roots of the polynomial $j\circ P$ are given by $ z_1,\dots,z_d,z_{d+1}$ where $ z_1,\dots,z_d$ are roots of $P$ and $z_{d+1}= z_0+\max_{1 \leq i < d}\,|z_i-z_0| +1$, with $z_0$ is the arithmetic mean of the roots $z_1,\dots,z_d$, for more details see~\cite{Ar70}(page 44). \hspace{1cm}(ii)This application of the construction has implications for the injective homomorphism of groups (reciprocally for the projection map $B_{d+1} \hookrightarrow B_{d}$). Indeed, each quadrangle is an intertwining of roots, therefore it corresponds to an elementary braid. 
Therefore, the algorithmic method and the induction explained above illustrate the inclusion $i^{\star}: B_d \hookrightarrow B_{d+1}$. \hspace{1cm}(iii) Finally, using this construction we have a geometric interpretation of the increasing chain of braid groups. More precisely, there exists a natural embedding $Q(d) \hookrightarrow Q(d+1)$ between $Q(d)$-diagrams of polynomials of degree $d$ and $Q(d+1)$-diagrams of polynomials of degree $d+1$, as shown in table~\ref{T:embed}. The construction gives an explicit presentation of the embedding $\mathop{{}^{\textsc{D}}\text{Pol}}_{d} \hookrightarrow \mathop{{}^{\textsc{D}}\text{Pol}}_{d+1}$. This property holds in a more general framework: for $Q(d) \hookrightarrow Q(d+k)$, where $k\geq1$. The decomposition formed from the union of all $Q$-pieces presents, in an explicit way, all the possible intertwinings of the roots. Thus, considering a decomposition for the $d+1$ marked points, we have an explicit illustration of the embedding $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}\hookrightarrow \mathop{{}^{\textsc{D}}\text{Pol}}_{d+1}$ and thus of $j: \mathbb{C}^{d} \hookrightarrow \mathbb{C}^{d+1}$. Since the intertwining of the roots corresponds to an elementary braid, this construction shows the natural inclusion of braids. This inclusion is an injective homomorphism of groups $i^{\star}: B_{d} \to B_{d+1}$, with $i^{\star}(\sigma^{(d)}_i) = \sigma^{(d+1)}_{i}$ for $i\in\{1,\dots,d-1\}$. So there exist \begin{itemize} \item an embedding $\mathop{{}^{\textsc{D}}\text{Pol}}_{d}\hookrightarrow \mathop{{}^{\textsc{D}}\text{Pol}}_{d+1}$ given explicitly by $Q$-pieces; \item an increasing chain of braid groups $B_1 \subset B_2\subset B_3\subset\dots\subset B_d \subset B_{d+1}$ associated to $Q$-pieces. \end{itemize} In $\pi_1(\mathop{{}^{\textsc{D}}\text{Pol}}_{d})$ there exist paths given by ``quadrangles'' forming a loop~\cite{C0}, corresponding to four elementa, adjacent two by two, which are generic and of codimension 1. Therefore, the $Q$-pieces allow a description of the braid relations. In particular, an explicit description of the inclusion of braids is thus given. \end{proof} \begin{corollary} Any braid relations following from the projection $B_{d+1}\rightarrow B_{d}$ can be described using $Q$-pieces. \end{corollary} \begin{corollary} The braid operad can be described using $Q$-pieces. \end{corollary} \section{Concluding remark} In this paper, we have investigated a new cell decomposition of $\mathcal{M}_{0,[d]}$, using its natural relation with the space of complex monic degree $d>0$ polynomials in one variable with simple roots, which is the $d$-th unordered configuration space of the complex plane. The decomposition of $\mathop{{}^{\textsc{D}}\text{Pol}}_d$ has been done considering {\it drawings of polynomials}, objects reminiscent of Grothendieck's {\it dessins d'enfants}. These objects are decorated graphs, properly embedded in the complex plane, obtained by taking the inverse image under a polynomial $P$ of the real and imaginary axes. A detailed study of those objects has been given in section 2. A stratum (called {\it elementa}) of the decomposition is defined as the set of polynomials whose drawings belong to the same isotopy class, relative to the asymptotic directions, and is described by a signature (for simplicity, signatures are represented as circular diagrams). 
An important step towards the stratification is that the topological closure of a stratum indexed by the signature $\sigma$ is given by the combinatorial closure of the signature $\sigma$~\cite{C0}, which motivated our investigations in this paper concerning the combinatorial closure of an elementa. The study of this stratification led to the introduction of the notion of an inclusion diagram, which is a valuable tool describing geometrically not only the neighborhood of each stratum but the entire stratification itself. It is also known that this inclusion diagram is the nerve, in the sense of \v Cech, of the cover in~\cite{C0,C1}. We showed that the stratification for $d>2$ is invariant under a Coxeter group with $4d$ chambers, $2d$ horizontal reflection hyperplanes and one vertical reflection hyperplane. To conclude, the paths and loops in $\mathop{{}^{\textsc{D}}\text{Pol}}_d$ have been defined in a new way, using diagrams. Moreover, from the deep relation $\pi_1(\mathop{{}^{\textsc{D}}\text{Pol}}_d)=B_d$ (\cite{Bri71,FaN62,FoN62}) we showed that a braid can be defined using a sequence of signatures, obtained one from another via Whitehead moves. From the results shown in this paper, a braid can be defined as a path (loop) in one chamber. This result is also very advantageous for the calculation of the \v Cech cohomology of this space, since it significantly reduces the complexity. On the other hand, it allows an alternative description of the braid operad. \eject \appendix \section{Inclusion diagram for $d=3$}~\label{A: Incldiag3} \begin{figure} \caption{The 2-faces of the inclusion diagram for $d=3$} \label{F:2Fincldiag} \end{figure} \begin{figure} \caption{Inclusion diagram for $d=3$} \label{F: Incldiag3} \end{figure} \section{The adjacent $Q$-pieces for $d=4$}~\label{A:tower4} \begin{figure} \caption{$Q$-pieces for $d=4$} \label{F:lego4} \end{figure} \eject \end{document}
\begin{document} \title{A New Weak Choice Principle} \begin{abstract} \noindent For every natural number $n$ we introduce a new weak choice principle $\mathrm{nRC_{fin}}$: \begin{addmargin}[27pt]{27pt}\textit{Given any infinite set $x$, there is an infinite subset $y\subseteq x$ and a selection function $f$ that chooses an $n$-element subset from every finite $z\subseteq y$ containing at least $n$ elements.} \end{addmargin} By constructing new permutation models built on a set of atoms obtained as Fra\"iss\'e limits, we will study the relation of $\mathrm{nRC_{fin}}$ to the weak choice principles $\mathrm{RC_m}$ (that has already been studied in \cite{lorenz} and \cite{montenegro}): \begin{addmargin}[27pt]{27pt} \textit{Given any infinite set $x$, there is an infinite subset $y\subseteq x$ with a choice function $f$ on the family of all $m$-element subsets of $y$.} \end{addmargin} Moreover, we prove a stronger analogue of the results in \cite{montenegro} when we study the relation between $\mathrm{nRC_{fin}}$ and $\mathrm{kC_{fin}^-}$ which is defined by: \begin{addmargin}[27pt]{27pt} \textit{Given any infinite family $\mathcal{F}$ of finite sets of cardinality greater than $k$, there is an infinite subfamily $\mathcal{A}\subseteq \mathcal{F}$ with a selection function $f$ that chooses a $k$-element subset from each $A\in\mathcal{A}$.} \end{addmargin} \end{abstract} \keywords{weak forms of the Axiom of Choice, consistency results, Ramsey Choice, Fraenkel-Mostowski permutation models of ZFA+$\neg$AC, Pincus’ transfer theorems, partial $n$-selection for infinite families of finite sets}\\ \\ \mathclass{\textbf{03E25} 03E35} \section{Notation and Choice Principles} In this paper we will use the following terminology: \begin{itemize} \item By $\omega$ we denote the set of all natural numbers $\{0,1,2,\dots\}$ and $\mathrm{fin}(\omega)$ denotes the set of finite subsets of $\omega$. \item Given a set $x$ and a natural number $n$, $[x]^n$ is defined as the set of \emph{all\/} the subsets of $x$ with cardinality $n$. Similarly, $[x]^{>n}$ is the set of all the \emph{finite\/} subsets of $x$ with cardinality greater than $n$. \item Given a permutation model $\mathcal{M}$ and a statement $\phi$, we will write $\mathcal{M}\models\phi$ to indicate that $\phi$ holds in $\mathcal{M}$. \item $\mathrm{BFM}$ is the well known Basic Fraenkel Model. \end{itemize} Furthermore, we shall use the following notation for weak choice principles: \begin{itemize} \item $\mathrm{RC_n}$ is the following axiom: given any infinite set $x$, there exists an infinite subset $y\subseteq x$ with a choice function $f\colon[y]^n\to y$ such that, for all $z\in[y]^n$, $f(z)\in z$. \item $\mathrm{nRC_{fin}}$ is the following axiom: given any infinite set $x$, there exists an infinite subset $y\subseteq x$ with a selection function $f\colon[y]^{>n}\to[y]^n$ such that, for all $z\in[y]^{>n}$, $f(z)\subseteq z$. \item $\mathrm{C_n}$ is the following axiom: any infinite family $\mathcal{A}$ of sets of cardinality $n$ has a choice function $f\colon\mathcal{A}\to\bigcup\mathcal{A}$ such that, for all $A\in\mathcal{A}$, $f(A)\in A$. \item $\mathrm{C^-_n}$ is the following axiom: given any infinite family $\mathcal{F}$ of non-empty sets with cardinality $n$, there exists an infinite subfamily $\mathcal{A}\subseteq\mathcal{F}$ which has a choice function $f\colon\mathcal{A}\to\bigcup\mathcal{A}$ such that, for all $A\in\mathcal{A}$, $f(A)\in A$. 
\item $\mathrm{nC_{fin}}$ is the following axiom: any infinite family $\mathcal{A}$ of finite sets with cardinality greater than $n$ has a selection function $f\colon\mathcal{A}\to[\bigcup\mathcal{A}]^n$ such that, for all $A\in\mathcal{A}$, $f(A)\subseteq A$. \item $\mathrm{nC^-_{fin}}$ is the following axiom: given any infinite family $\mathcal{F}$ of finite sets with cardinality greater than $n$, there exists an infinite subfamily $\mathcal{A}\subseteq\mathcal{F}$ which has a selection function $f\colon\mathcal{A}\to[\bigcup\mathcal{A}]^n$ such that, for all $A\in\mathcal{A}$, $f(A)\subseteq A$. \item $\mathrm{ACF^-}$ is the following axiom: given any infinite family $\mathcal{F}$ of non-empty finite sets, there exists an infinite subfamily $\mathcal{A}\subseteq\mathcal{F}$ which has a choice function $f\colon\mathcal{A}\to\bigcup\mathcal{A}$ such that, for all $A\in\mathcal{A}$, $f(A)\in A$. \end{itemize} \section{Introduction} Following the terminology used in \cite{reduced}, we introduce a new class of diminished choice principles $\mathrm{nRC_{fin}}$ and study its relation with the two classes $\mathrm{RC_n}$ and $\mathrm{nC^-_{fin}}$, which have in general inspired the new one; we will indeed obtain analogous results to \cite{lorenz} and \cite{salome}. The title of each section refers to the following diagram. The labels of the arrows indicate in which section we analyze that specific implication. Here, n,k,m,j stand all for natural numbers. \begin{center} \begin{tikzcd}[column sep=7em, row sep=4em] \mathrm{nRC_{fin}} \arrow [loop left, lu, "\mathrm{Sec.\,8}"] \arrow[r, shift left, "\mathrm{Sec.\,6}"] \arrow[d, shift left, "\mathrm{Sec.\,9}"] & \mathrm{kC^-_{fin}} \arrow[shift left, l, "\mathrm{Sec.\,5}"] \arrow[d, shift left, "\scriptsize{\cite{salome}}"]\\ \mathrm{RC_m} \arrow[shift left, u, "\mathrm{Sec.\,4}"] \arrow[r, shift left, "\scriptsize{\cite{lorenz}}"] & \mathrm{C^-_j} \end{tikzcd} \end{center} \noindent To be more precise, we will prove the following results: \begin{itemize} \item Relation between $\mathrm{nRC_{fin}}$ and $\mathrm{RC_m}$: \begin{itemize} \item For each $n\in\omega$, $\mathrm{RC_n}\nRightarrow\mathrm{nRC_{fin}}$ in ZF+$\neg$AC. \item For all $k,n\in\omega$, $\mathrm{nRC_{fin}}\Rightarrow \mathrm{RC_{kn+1}}$. \item $\mathrm{4RC_{fin}}\Rightarrow \mathrm{RC_n}$ whenever $n$ is odd and greater than $4$. \end{itemize} \item Relation between $\mathrm{nRC_{fin}}$ and $\mathrm{kC_{fin}^-}$: \begin{itemize} \item For each $n\in\omega$, $\mathrm{nC_{fin}^-}\nRightarrow\mathrm{nRC_{fin}}$ in ZF+$\neg$AC. \item For all $n\in\{2,3,4,6\}$, $\mathrm{nRC_{fin}}\Rightarrow \mathrm{nC_{fin}^-}$. \item For all primes $p$ and all $k\in\omega$ we have that $\mathrm{p^kRC_{fin}}\Rightarrow \mathrm{p^kWOC_{fin}^-}$. \end{itemize} \item A relation between $\mathrm{nRC_{fin}}$ and $\mathrm{C_k^-}$: \begin{itemize} \item $\mathrm{4RC_{fin}}\Rightarrow \mathrm{C_3^-}$. \end{itemize} \item Relation between $\mathrm{nRC_{fin}}$ and $\mathrm{kRC_{fin}}$: \begin{itemize} \item For all $k,n\in\omega$, $\mathrm{nRC_{fin}}\Rightarrow \mathrm{knRC_{fin}}$. \item Let $k,n\in \omega$ with $k>n$. If $k$ is not a multiple of $n$, then $\mathrm{nRC_{fin}}\nRightarrow\mathrm{kRC_{fin}}$ in ZF+$\neg$AC. \end{itemize} \end{itemize} \section{Approach and Transferability} We will prove independence between choice principles in $\mathrm{ZF}$ via permutation models. 
In a few words, we can say that a permutation model is built from a ground model, which is a model of $\mathrm{ZFA}$: a variation of $\mathrm{ZF}$ set theory in which the axiom of extensionality is weakened in order to allow the existence of new objects (called atoms) containing no elements, but which are still distinct from the empty set. From this ground model (which satisfies $\mathrm{AC}$), one can extract a submodel of $\mathrm{ZFA}$ in which $\mathrm{AC}$ fails. For details regarding this construction, see, for example, \cite{libro}. We will simply denote a permutation model by the structure of the set of atoms $A$, the normal ideal $I$ on $A$ and the group of permutations $G$: the normal filter on $G$ will always be the one generated by $I$. Given a permutation model, we will get conclusion regarding $\mathrm{ZF}$ in the following way: Suppose we manage to build a permutation model in which a certain choice principle $\mathrm{Ax1}$ holds and some other $\mathrm{Ax2}$ fails. Using the results of \cite{pincus}, we can conclude that if $\mathrm{Ax1}$ and $\mathrm{Ax2}$ both belong to a certain class of statements (in which case the statements are said to be injectively boundable), then there is a model of $\mathrm{ZF}$ in which $\mathrm{Ax1}$ holds, $\mathrm{Ax2}$ fails, and both, $\mathrm{Ax1}$ and $\mathrm{Ax2}$, have the same $\textit{meaning}$ as in the permutation model, i.e., cardinalities and cofinalities remain unchanged between the two models. For the definition of injectively boundable, see \cite{pincus} or \cite{libro2}. Once done, it is not hard to see that all the choice principles we will consider are injectively boundable: an injection of $\omega$ in an infinite set gives an infinite subset of which the power set admits a choice function. \section{Vertical Upward} In this section we show that for any positive $m\in\omega$, $(\forall n\in\omega\:\mathrm{RC}_n)$ does not imply $m\mathrm{RC}_\mathrm{fin}$. To this end, we use the model which in \cite{salome} is called $\mathcal{V}_\mathrm{fin}$. The results contained in this section are not new and can be found stated in \cite{libro2} and proved in \cite{levy}. The model $\mathcal{V}_\mathrm{fin}$ is constructed from a countable set of atoms $A$ partitioned in a well ordered family of blocks $\{B_i:i\in\omega\}$, such that for every $i\in\omega$, $B_i$ has cardinality $p_i$, where $p_i$ is the $i$-th prime number. For each $i\in\omega$, fix a cyclic permutation $\varphi_i$ on $B_i$ that has no fixed points. The considered group of permutations $G$ is given by all the permutations $\varphi$ on $A$ that move only finitely many atoms and such that, for every $i\in\omega$, $\varphi$ restricted to $B_i$ equals some power of $\varphi_i$. The corresponding normal filter is generated by the normal ideal of all finite subsets of $A$. \begin{theorem} We have that $\mathcal{V}_\mathrm{fin}\models\forall n\in\omega\:(\mathrm{C_n}\land\lnot\mathrm{nRC_{fin}})$. \end{theorem} Since evidently $\mathrm{C_n}$ implies $\mathrm{RC_n}$, the theorem proves what we claimed in this section. \section{Horizontal Left} In this section we briefly mention that any conjunction of $\mathrm{mC^-_{fin}}$ does not imply any $\mathrm{nRC_{fin}}$. \begin{theorem} We have that $\mathrm{BFM}\models\forall n\in\omega\:(\mathrm{nC^-_{fin}}\land\lnot\mathrm{nRC_{fin}})$. 
\end{theorem} \begin{proof} It is known (see, e.g., \cite{libro2}) that $\mathrm{ACF^-}$ holds in $\mathrm{BFM}$, and it is easy to see that by $n$ consecutive applications of $\mathrm{ACF^-}$ one obtains $\mathrm{nC^-_{fin}}$. The conclusion follows from noticing that any $\mathrm{nRC_{fin}}$ fails on the set of atoms. \end{proof} \section{Horizontal Right} \subsection{Positive} This subsection starts with the very few cases in which we have a full positive answer. \begin{lemma} For $n\in\{2,3\}$ we have $\mathrm{nRC_{fin}}\Rightarrow\mathrm{nC_{fin}^-}$. \end{lemma} \begin{proof} We prove each case separately. Let $n=2$ and $\mathcal{A}=\{A_j:j\in J\}$ be an infinite family of pairwise disjoint finite sets (we can assume they are disjoint by replacing each $A_i$ with the unique function $A_i^*:A_i\to\{A_i\}$). Set $x=\bigcup\mathcal{A}$ and apply $\mathrm{2RC_{fin}}$ to get an infinite $y\subseteq x$ and a function $g:[y]^{>2}\to[y]^2$ such that, for all $Y\in[y]^{>2}$, $g(Y)\subseteq Y$. Since every element of $\mathcal{A}$ is finite, there must be an infinite subset $I$ of $J$ such that, for all $i\in I$, $A_i\cap y\neq\emptyset$. If $|A_i\cap y|=2$ for infinitely many $i\in I$, the claim is obvious; likewise, if $|A_i\cap y|>2$ for infinitely many $i\in I$, then we are done by defining, for every such $i\in I$, the function $f:A_i\mapsto g(A_i\cap y)$. If that is not the case, apply $\mathrm{2RC_{fin}}$ a second time to $\bigcup\{A_i:i\in I\}\setminus y$, to get another infinite subset $z\subseteq x$ with $z\cap y=\emptyset$. If, again, $\{i\in I: |A_i\cap z|\geq2\}$ is finite, then we get that $K=\{i\in I: |A_i\cap y|=|A_i\cap z|=1\}$ is infinite, together with the obvious function $f:A_k\mapsto A_k\cap(y\cup z)$, for all $k\in K$. For the other case we start similarly: let $n=3$ and $\mathcal{A}=\{A_j:j\in J\}$ be an infinite family of pairwise disjoint finite sets. Set $x=\bigcup\mathcal{A}$ and apply $\mathrm{3RC_{fin}}$ to get an infinite $y\subseteq x$ and a function $g:[y]^{>3}\to[y]^3$ such that, for all $Y\in[y]^{>3}$, $g(Y)\subseteq Y$. Since every element of $\mathcal{A}$ is finite, there must be an infinite subset $I$ of $J$ such that, for all $i\in I$, $A_i\cap y\neq\emptyset$. If $\{i\in I: |A_i\cap y|=1\:\mathrm{or}\:|A_i\cap y|\geq 3\}$ is infinite, we get the conclusion with a perfectly analogous approach to the previous case. Otherwise, for all but finitely many $i\in I$, $|A_i\cap y|=2$. At this point we use Montenegro's result that $\mathrm{RC_4}$ implies $\mathrm{C_4^-}$, which in turn implies $\mathrm{C_2^-}$. Since, by Lemma [9.1], $\mathrm{3RC_{fin}}$ implies $\mathrm{RC_4}$, by applying $\mathrm{C_2^-}$ to $\{A_i: |A_i\cap y|=2\}$ we get to a case which has already been solved, namely the one in which $\{i\in I: |A_i\cap y|=1\}$ is infinite. \end{proof} In the proofs of the following two theorems we use the ideas Montenegro needed in \cite{montenegro} to show the implication $\mathrm{RC_4}\implies\mathrm{C^-_4}$. \begin{theorem} $\mathrm{4RC_{fin}}\Rightarrow\mathrm{4C_{fin}^-}$. \end{theorem} \begin{proof} Let $\mathcal{A}=\{A_j:j\in J\}$ be an infinite family of pairwise disjoint finite sets. Set $x=\bigcup\mathcal{A}$ and apply $\mathrm{4RC_{fin}}$ to get an infinite $y\subseteq x$ and a function $g\colon[y]^{>4}\to[y]^4$ such that, for all $Y\in[y]^{>4}$, $g(Y)\subseteq Y$. Since every element of $\mathcal{A}$ is finite, there must be an infinite subset $I$ of $J$ such that, for all $i\in I$, $A_i\cap y\neq\emptyset$. 
With perfectly analogous arguments as in the previous lemma, it is easy to see that the only difficult case is when, for all $i\in I$, $|A_i\cap y|=3$. The following part of the proof shows that $\mathrm{4RC_{fin}}$ implies $\mathrm{C_3^-}$. For all $i\in I$, set $B_i=A_i\cap y$. We define a directed graph $G\subseteq I^2$ on $I$: let $(i,j)$ be an edge if and only if $B_j\nsubseteq g(B_i\cup B_j)$. The idea behind this definition is that every time $(i,j)$ is an edge, then the function $g$ selects one element from $B_j$ whenever considered together with $B_i$ (we choose the element in $B_i\setminus g(B_i\cup B_j)$ if $\lvert B_i\cup g(B_i\cup B_j)\rvert=2$). We say that an $i\in I$ has outdegree $k$ whenever $|\{j\in I: (i,j)\in G\}|=k$. Notice that we can assume that no $i\in I$ has infinite outdegree, otherwise we could easily select one element from infinitely many $B_i$. Now we claim that for each $k\in\omega$ there are only finitely many $i\in I$ such that $i$ has outdegree $k$. To prove this, assume towards a contradiction that there exists some $k'\in\omega$ and $\widetilde{I}\subseteq I$ such that $|\widetilde{I}|=2k'+3$ and that for all $i\in\widetilde{I}$, $i$ has outdegree $k'$. By construction, if $n$ is the number of edges contained in $\widetilde{I}^2$, then $\binom{|\widetilde{I}|}{2}\leq n\leq|\widetilde{I}|k'$, from which follows $k'+1\leq k'$, a contradiction. We have obtained a well ordered partition of $I$ into finite classes according to the outdegree of every $i\in I$. In symbols: $$ I_k=\{i\in I: i\textrm{ has outdegree }k\}\text{ for every }k\in\omega. $$ Applying $\mathrm{4RC_{fin}}$ to $I$, we extract at most $4$ elements from each class, and for some $1\leq m\leq 4$, we get exactly $m$ elements from infinitely many classes, so we can assume that we get $m$ elements from every class. Write $f(I_k)$ for the $m$ extracted elements from $I_k$. We finish the proof by analyzing each of these cases separately. If $m=1$, then, for all $k\in\omega$, there is at least one edge between the element of $f(I_{2k})$ and the one of $f(I_{2k+1})$, and this allows us to select one element from $B_{2k}$ or $B_{2k+1}$. If $m=2$, we just consider, for all $k\in\omega$, the edges (there must be at least one) between the two elements of $f(I_k)$, and conclude the proof as in the previous case. If $m=3$, consider again, for all $k\in\omega$, all the inner edges contained in $f(I_k)^2$. Since each $i\in f(I_k)$ can be chosen at most $2$ times and there are at least $3$ inner edges, we are always able to choose one element from some $B_j$ with $j\in f(I_k)$. If $m=4$, consider, for all $k\in\omega$, $g(\cup_{i\in f(I_k)}B_i)$. If it selects less than $4$ elements from $f(I_k)$, we are in one of the previous cases and if it selects exactly one element from each $B_i$, with $i\in f(I_k)$, we are also done. This concludes the proof. \end{proof} In the proof of the following theorem we will also use, without going into details, techniques and arguments which were carefully explained in the two preceding proofs. \begin{theorem} $\mathrm{6RC_{fin}}\Rightarrow\mathrm{6C^-_{fin}}$. \end{theorem} \begin{proof} Let $\mathcal{A}=\{A_j:j\in J\}$ be an infinite family of pairwise disjoint finite sets. Set $x=\bigcup\mathcal{A}$ and apply $\mathrm{6RC_{fin}}$ to get an infinite $y\subseteq x$ and a function $g\colon[y]^{>6}\to[y]^6$. As usual, we can assume $|A_j\cap y|<6$ for all $j\in J$. Moreover, it is possible to assume $|A_j\cap y|<4$ for all $j\in J$, as well.
To see it, take for instance the case in which $|A_i\cap y|=5$ for all the infinitely many $i\in I\subseteq J$. As in the previous theorem, define an oriented graph $G\subseteq I^2$ and let $(i,j)$ be an edge if and only if $A_j\nsubseteq g(A_i\cup A_j)$. This way, we obtain a well ordered partition of $I$ into finite classes $I_k$, for $k\in\omega$, according to the outdegree of each $i\in I$. Apply $\mathrm{6RC_{fin}}$ to $I$ and extract a finite set $f(I_k)$ of at most $6$ elements from each class $I_k$. Then extract again at most $6$ elements from each $\cup_{i\in f(I_k)}(A_i\cap y)$. The only case which is not solved by the last $\textit{extraction}$ is when $|f(I_k)|=1$ for all $k\in\omega$, but this is easily handled as the case $m=1$, at the end of the previous proof. The case when $|A_i\cap y|=4$ for infinitely many $i\in I\subseteq J$ can be solved in the same way. Now, given $\mathcal{A}=\{A_j:j\in J\}$ and $y\subseteq\bigcup\mathcal{A}$, let $I=\{i\in J:A_i\cap y\neq\emptyset\}$. Apply $\mathrm{6RC_{fin}}$ to $\cup_{i\in I}A_i\setminus y$ to get an infinite $z\subseteq \bigcup\mathcal{A}\setminus y$. Similarly, if $K=\{k\in I:A_k\cap z\neq\emptyset\}$, apply $\mathrm{6RC_{fin}}$ to $\cup_{k\in K}A_k\setminus(y\cup z)$ to get an infinite $w\subseteq \bigcup\mathcal{A}\setminus (y\cup z)$. A straightforward analysis shows that the only non trivial case is given, modulo symmetries, by the one in which $$ |A_j\cap y|=3,\,|A_j\cap z|=|A_j\cap w|=2\textrm{ and }|A_j|=7,\textrm{ for all }j\in J.$$ Our goal is to select one element either from $A_j\cap y$ or from $A_j\cap z$, for infinitely many $j\in J$. In order to do this, we consider the family of edges $\mathcal{E}=\{E_j\coloneqq(A_j\cap y)\times(A_j\cap z):j\in J\}$ and the corresponding partitions $$ F^j_a=\{e\in(A_j\cap y)\times(A_j\cap z):e(1)=a\},\,a\in A_j\cap y, $$ $$ G^j_b=\{e\in(A_j\cap y)\times(A_j\cap z):e(2)=b\},\,b\in A_j\cap z. $$ Notice that for all $j\in J$, $|F^j_a|=2$ and $|G^j_b|=3$. It is easy to see that whenever we select a proper subset of $E_j$ for some $j\in J$, we are able to select one element from $A_j\cap y$ or from $A_j\cap z$. Also for this reason, when applying $\mathrm{6RC_{fin}}$ to $E\coloneqq\bigcup\mathcal{E}$, we can assume that we get a selection function $f$ on the set of all edges $E$. To simplify the notation, let $\widetilde{f}$ be defined as $\widetilde{f}\colon[E]^7\to E$, $\widetilde{f}\colon S\mapsto S\setminus f(S)$. Now, for $j\in J$ and $b\in A_j\cap z$, define the degree $$\mathrm{deg}(G^j_b)=|\{F^i_a\cup F^i_{a'}:i\in J\wedge a,a'\in A_i\cap y\wedge \widetilde{f}(G^j_b\cup F^i_a\cup F^i_{a'})\in F^i_a\cup F^i_{a'}\}|.$$ We can assume that every $G_b^j$ has finite degree, since we would be otherwise able to select a proper subset from infinitely many $E_j$. In addition, assume that for some $k_0\in\omega$ there are infinitely many $G^j_b$ with degree equal to $k_0$. Then order $k_0+1$ distinct $4$-element sets of the form $F^i_a\cup F^i_{a'}$ for some $i\in J$ and $a,a'\in A_i\cap y$. For each $G^j_b$, there must be a first of these $k_0+1$ sets with the property that $\widetilde{f}(G^j_b\cup F^i_a\cup F^i_{a'})\in G^j_b$, but this fact allows us to select one edge from each $G^j_b$ with degree equal to $k_0$. Thus, assume that for each $k\in\omega$ there are only finitely many $G^j_b$ with degree $k$. This gives us a well ordered partition of $\{G^j_b:j\in J\wedge b\in A_j\cap z\}$ into finite subclasses. Explicitly, into the subclasses $$ H_k=\{G^j_b:\mathrm{deg}(G^j_b)=k\}.
$$ Apply one last time the function $f$ to each $\bigcup H_k$ and notice that the only case in which we are not able to select a proper subset from infinitely many $E_j$, is when, for all but finitely many $k\in\omega$, $f(\bigcup H_k)=E_i$ for some $i\in J$. We conclude the proof by solving this last case. Suppose that for infinitely many $k\in\omega$, given $f(\bigcup H_k)=E_i$, there is at least one $G^i_b\subseteq E_i$ such that for an $l\in J$ and a $k'\in\omega$, with $f(\bigcup H_{k'})=E_l$, it is possible to select an element from $G^i_b$ by considering the set $$ \mathrm{sel}(G_b^i,l)\coloneqq\{\widetilde{f}(G^i_b\cup F^l_a\cup F^l_{a'}):|F^l_a\cup F^l_{a'}|=4\wedge F^l_a\cup F^l_{a'}\subseteq E_l\}. $$ Then we can conclude by choosing, for each such $k\in\omega$, the first $k'\in\omega$ with the mentioned property. If that is not the case, fix $k_1\in\omega$ with $f(\bigcup H_{k_1})=E_i$ such that for infinitely many $j\in J$ and $k\in\omega$ with $f(\bigcup H_k)=E_j$ we have that $$ |\mathrm{sel}(G_b^i,j)|=3\textrm{ for both }b\in A_i\cap z, $$ and conclude by fixing some $b_0\in A_i\cap z$ and $a_0\in A_i\cap y$. This selects a proper subset from infinitely many $E_j$, namely that unique $F^j_a\cup F^j_{a'}\subseteq E_j$ such that $$\widetilde{f}(G^i_b\cup F^j_a\cup F^j_{a'})=(a_0,b_0).$$ \end{proof} We conclude the subsection with an example of how it is possible to obtain a weaker implication than $\mathrm{nRC_{fin}}\implies\mathrm{nC^-_{fin}}$ for some infinite class of cases. $\mathrm{p^kWOC^-_{fin}}$ is essentially the same axiom as $\mathrm{p^kC^-_{fin}}$. The only difference is that we require the family of finite sets to be well-ordered. \begin{theorem} For all primes $p$ and all natural numbers $k$, $\mathrm{p^kRC_{fin}}\Rightarrow\mathrm{p^kWOC^-_{fin}}$. \end{theorem} \begin{proof} Let $\mathcal{A}=\{A_i:i\in\omega\}$ be a well-ordered family of finite sets such that $|A_i|>p^k$ for all $i\in\omega$. $\mathrm{p^kWOC^-_{fin}}$ is basically obtained by repeated applications of $\mathrm{p^kRC_{fin}}$ to $\bigcup\mathcal{A}$, together with the following two considerations: The first is that if a finite sum $\sum a$ of divisors of $p^k$ is such that $\sum a>p^k$, then it is possible to extract a subsum $\sum a'$ such that $\sum a'=p^k$. The second consideration, which allows us to conclude the proof, is the following: Given a well-ordered family $\mathcal{B}=\{B_i:i\in B\}$ of finite sets of the same size $m\nmid p^k$, with $\mathrm{p^kRC_{fin}}$ we can extract a family of subsets $\mathcal{B'}=\{B'_i:i\in B'\subseteq B\}$ such that for every $i\in B'$, $\emptyset\subsetneq B'_i\subsetneq B_i$. To see this, it is enough to apply $\mathrm{p^kRC_{fin}}$ to $\bigcup\mathcal{B}$ and, if needed, to choose $p^k$ elements from the union of the first $l$ sets, where $l$ is the least natural number such that $lm>p^k$, and repeat for every next block of $l$ elements of the family $\mathcal{B}$. \end{proof} \subsection{Negative} A partial negative answer is provided by the models $\mathcal{V}_n$, introduced and used in \cite{lorenz}, to which we refer for more detailed explanations. In general, the model $\mathcal{V}_n$ has a countable set of atoms $A$ partitioned in blocks $A_i=\{a^i_1,\dots,a^i_n\},i\in\mathbb{Q},$ of size $n$ which are linearly ordered isomorphically to $\mathbb{Q}$.
The normal ideal is the one given by the finite subsets and the permutation group $G$ is the one generated by all those permutations $\varphi_i$ on $A$ that act as the identity on $A\setminus A_i$ and as the cycle $(a^i_1,\dots,a^i_n)$ on $A_i$, for some $i\in\mathbb{Q}$. $\mathcal{V}_n$ is generalized to $\mathcal{V}_{n_1,\dots,n_l}$, which is built basically in the same way, but in which the set of atoms is partitioned in $l$ distinct and disjoint $\mathbb{Q}$-lines of blocks. We have the following result. \begin{theorem} Let $l\in\omega$, $p_1,\dots,p_l$ be distinct primes and $a_1,\dots,a_l$ natural numbers greater than $0$. Then, we have that $$\mathcal{V}_{p_1^{a_1},\dots,p_l^{a_l}}\models\mathrm{nRC_{fin}}\iff\mathcal{V}_{p_1^{a_1},\dots,p_l^{a_l}}\models\mathrm{nC^-_{fin}}\iff n\textrm{ is a multiple of }\prod_{k=1}^lp_k^{a_k}.$$ \end{theorem} \begin{proof} It suffices to prove the theorem for $l=1$. The general case then follows from $$ \mathcal{V}_{p_1^{a_1},\dots,p_l^{a_l}}\models\mathrm{nRC_{fin}}\iff\bigwedge_{k=1}^l\mathcal{V}_{p_k^{a_k}}\models\mathrm{nRC_{fin}}, $$ and $$ \mathcal{V}_{p_1^{a_1},\dots,p_l^{a_l}}\models\mathrm{nC^-_{fin}}\iff\bigwedge_{k=1}^l\mathcal{V}_{p_k^{a_k}}\models\mathrm{nC^-_{fin}}. $$ In \cite[Proposition 5.3, Lemma 5.4]{salome} it is shown that $$ \mathcal{V}_{p_1^{a_1}}\models\mathrm{nC^-_{fin}}\iff n\textrm{ is a multiple of }p_1^{a_1}. $$ It remains to show that the same holds for $\mathrm{nRC_{fin}}$. We start with showing that $\mathrm{p_1^{a_1}RC_{fin}}$ holds. In order to do so, we use the construction from \cite[Fact 4]{lorenz} , which we now briefly recall. To help the reader, we use the same notation. Let $x$ be an infinite not well-orderable set with support $E$ and $z\in x$ an element with support $E_z$ which is not supported by $E$. Let $A_r$ be a block of atoms included in $E_z$ but not in $E$. Then, if we define the set $f$ as $$f=\{(\varphi(z),\varphi(A_r)):\varphi\in\mathrm{fix}_G(E_z\setminus A_r)\},$$ the following statements hold: \begin{itemize} \item $f$ is supported by $E_z\setminus A_r$; \item $f$ is a function with $\mathrm{dom}(f)\subseteq x$ and $\mathrm{ran}(f)=\{A_q:q\in I\}$ for some possibly unbounded interval $I\subseteq\mathbb{Q}$; \item if $y=\mathrm{dom}(f)$ and $\mathcal{Y}=\{f^{-1}(A_q):q\in I\}$, then $\mathcal{Y}$ is a linearly orderable partition of $y$; \item the elements of $\mathcal{Y}$ are finite sets all having the same cardinality, which has to be a divisor of $p_1^{a_1}$; \item we can write $\mathcal{Y}=\{U_\varphi:\varphi\in\mathrm{fix}_G(E_z\setminus A_r)\}$, where for $\varphi\in\mathrm{fix}_G(E_z\setminus A_r)$, $$ U_\varphi=\{\eta z:\eta\in\mathrm{fix}_G(E_z\setminus A_r),\varphi^{-1}\eta(A_r)=A_r\}. $$ \end{itemize} Consider now the orbits $O_s=\{\varphi(s):s\in[y]^{>p_1^{a_1}},\varphi\in\mathrm{fix}_G(E_z\setminus A_r)\}$ and write $\mathcal{O}=\{O_s:s\in[y]^{>p_1^{a_1}}\}$. The goal is to show that it is possible to choose for each $O_s$ a subset $\tilde{s}\subsetneq s$ such that $|\tilde{s}|=p_1^{a_1}$ and if $O_s=O_t$ with $\varphi(s)=t$, then $\varphi(\tilde{s})=\tilde{t}.$ Notice that this is equivalent to requiring that every time $\varphi(s)=s$ for some $\varphi\in\mathrm{fix}_G(E_z\setminus A_r)$, then $\varphi(\tilde{s})=\tilde{s}.$ Now, fix an $O_s\in\mathcal{O}$. Notice that if $s$ is a union $s=\bigcup\{U_\varphi:\varphi\in P_s\}$ for some subset $P_s\subseteq\mathrm{fix}_G(E_z\setminus A_r)$ the conclusion is trivial. 
To deal with the other cases, once more we will fully rely on the fact that if a sum of divisors of $p_1^{a_1}$ is greater than $p_1^{a_1}$, then there is a subsum equal to $p_1^{a_1}$. Indeed, notice that for all $a\in s$, the cardinality of $\{\varphi(a):\varphi\in\mathrm{fix}_G(E_z\setminus A_r),\varphi(s)=s\}$ has to be a divisor of $p_1^{a_1}$. The conclusion is given by the last claim together with the fact that if $\tilde{s}\subseteq s$ is a union of orbits in the form $\{\varphi(a):\varphi\in\mathrm{fix}_G(E_z\setminus A_r),\varphi(s)=s\}$, then $\varphi(s)=s$ implies $\varphi(\tilde{s})=\tilde{s}$. To finish the proof we have to show that $\mathrm{nRC_{fin}}$ is false in $\mathcal{V}_{p_1^{a_1}}$ whenever $n$ is not a multiple of $p_1^{a_1}$. But this can easily be shown on the set of all atoms. \end{proof} \section{Intermezzo: A new model} Fix a positive integer $n$ and let $\mathcal{L}_n$ be the signature containing an $(m+n)$-place relation symbol $\mathrm{Sel_m}$ for each $m\in\omega$ with $m>n$. Let $\mathrm{T_n}$ be the $\mathcal{L}_n$-theory containing the following axiom schema: \begin{addmargin}[27pt]{27pt} \textit{For each $m\in\omega$ with $m>n$, we have $$\mathrm{Sel_m}(x_1,\ldots,x_m,\,x_1',\ldots,x_n')$$ if and only if the following holds: \begin{itemize} \item $\bigwedge_{1\le i<j\le m} x_i\neq x_j\;\wedge \bigwedge_{1\le i<j\le n} x_i'\neq x_j'$ \item For each $1\le j\le n$ there is a $1\le i\le m$ such that $x_j'=x_i$. \item For any $m$ pairwise distinct elements $x_1,\ldots,x_m$ there are $x_1',\ldots,x_n'$ such that $\mathrm{Sel_m}(x_1,\ldots,x_m,\,x_1',\ldots,x_n')$. \item If\/ $\mathrm{Sel_m}(x_1,\ldots,x_m,\,x_1',\ldots,x_n')$ and $\rho$ is a permutation of\/ $\{1,\ldots,m\}$, then\/ $\mathrm{Sel_m}(x_{\rho(1)},\ldots,x_{\rho(m)},\,x_1',\ldots,x_n')$ \end{itemize} } \end{addmargin} \noindent In any model of the theory $\mathrm{T_n}$, the set of all the relations $\mathrm{Sel_m}$ is equivalent to a function $\mathrm{Sel}$ which assigns an $n$-element subset to any finite and big enough set. So, for the sake of simplicity we shall write $\mathrm{Sel}(\{x_1,\ldots,x_m\})=\{x_1',\ldots,x_n'\}$ instead of $\mathrm{Sel_m}(x_1,\ldots,x_m,\,x_1',\ldots,x_n')$. \newline \noindent For a model $\mathbf{M}$ of $\mathrm{T_n}$ with domain $M$, we will simply write $M\models\mathrm{T_n}$. Let $$\widetilde{C}=\{M:M\in\mathrm{fin}(\omega)\wedge M\models\mathrm{T_n}\}.$$ Evidently $\widetilde{C}\neq\emptyset$. Partition $\widetilde{C}$ into maximal isomorphism classes and let $C$ be a set of representatives. We proceed with the construction of the set of atoms for our permutation model. The next theorem and its proof are taken from \cite[Ch.\,8]{libro}, with a minor difference which will play an essential role in our work. \begin{theorem} For any positive integer $n$ there exists a model $\mathbf{F}\models\mathrm{T_n}$ with domain $\omega$ such that: \begin{itemize} \item Given a non empty $M\in C$, $\mathbf{F}$ admits infinitely many submodels isomorphic to~$M$. \item Any isomorphism between two finite submodels of\/ $\mathbf{F}$ can be extended to an automorphism of\/ $\mathbf{F}$. \end{itemize} \end{theorem} \begin{proof} The construction of $\mathbf{F}$ is by induction on $\omega$. Let $F_0=\emptyset$. $F_0$ is trivially a model of $\mathrm{T_n}$ and, for every element $M$ of $C$ with $|M|\leq0$, $F_0$ contains a submodel isomorphic to $M$. 
Let $F_s$ be a model of $\mathrm{T_n}$ with a finite initial segment of $\omega$ as domain and such that for every $M\in C$ with $|M|\leq s$, $F_s$ contains a submodel isomorphic to $M$. Let \begin{itemize} \item $\{A_i:i\leq p\}$ be an enumeration of $[F_s]^{\leq n}$, \item $\{R_k: k\leq q\}$ be an enumeration of all $M\in C$ with $1\leq|M|\leq s+1$, \item $\{j_l:l\leq u\}$ be an enumeration of all the embeddings $j_l:F_s|_{A_i}\xhookrightarrow{} R_k$, where $i\leq p$, $k\leq q$ and $|R_k|=|A_i|+1$. \end{itemize} For each $l\leq u$, let $a_l\in\omega$ be the least natural number such that $a_l\notin F_s\cup\{a_{l'}:l'<l\}$. The idea is to add $a_l$ to $F_s$, extending $F_s|_{A_i}$ to a model $F_s|_{A_i}\cup\{a_l\}$ isomorphic to $R_k$, where $j_l:F_s|_{A_i}\xhookrightarrow{} R_k$. Define $F_{s+1}:=F_s\cup\{a_l:l\leq u\}$. In \cite{libro}, $F_{s+1}$ is made into a model of $\mathrm{T_n}$ in a non-controlled way, while here we impose the following: Let $\{x_1,\dots,x_{m^\prime}\}$ be a subset of $F_{s+1}$ from which we have not already chosen an $n$-element subset. Suppose $m^{\prime}>n$ and that $i>j$ implies $x_i>x_j$ (recall that $F_{s+1}$ is a subset of $\omega$). Then we simply impose $\mathrm{Sel}(\{x_1,\dots,x_{m^{\prime}}\})=\{x_{m^{\prime}-n+1},\dots,x_{m^{\prime}}\}$. The desired model is finally given by $\mathbf{F}=\bigcup_{s\in\omega}F_s$. \noindent We conclude by showing that every isomorphism between finite submodels can be extended to an automorphism of $\mathbf{F}$. Let $i_0:M_1\to M_2$ be an isomorphism of $\mathrm{T_n}$-models. Let $a_1$ be the least natural number in $\omega\setminus(M_1\cup M_2)$. Then $M_1\cup M_2\cup\{a_1\}$ is contained in some $F_n$ and by construction we can find some $a'_1\in\omega$ such that $\mathbf{F}|_{M_1\cup\{a_1\}}$ is isomorphic to $\mathbf{F}|_{M_2\cup\{a'_1\}}$. Extend $i_0$ to $i_1:M_1\cup\{a_1\}\to M_2\cup\{a'_1\}$ by imposing $i_1(a_1)=a'_1$. Let $a_2$ be the least integer in $\omega\setminus(M_1\cup M_2\cup\{a_1,a'_1\})$ and repeat the process. The desired automorphism of $\mathbf{F}$ is $i=\bigcup_{t\in\omega}i_t$. \end{proof} \begin{defi}{\rm Let us fix some notations and terminology. The elements of the model~$\mathbf{F}$ above constructed will be the atoms of our permutation model. Since for each atom $a$ there is a unique triple $s,i,k$ such that $F_s|_{A_i}\cup\{a\}$ is isomorphic to $R_k$, each atom $a$ corresponds to a unique embedding $j_a:F_s|_{A_i}\xhookrightarrow{} R_k$. We shall call the domain of the embedding $j_a$ the $\emph{ground}$ of $a$. Furthermore, given two atoms $a$ and $b$, we say that $a<b$ in case $a<_\omega b$ according to the natural ordering. Notice that this well ordering of the atoms does not exist in the permutation model.} \end{defi} Let $A$ be the domain of the model $\mathbf{F}$ of the theory $\mathrm{T_n}$. Then the permutation model $\mathrm{MOD_n}$ is built as follows: Consider the normal ideal given by all the finite subsets of $A$ and the group of permutations $G$ defined by$$\pi\in G\iff \forall\,X\in[\omega]^{\mathrm{fin}},\pi(\mathrm{Sel}(X))=\mathrm{Sel}(\pi X).$$ \begin{theorem} \label{thm:7.2} $\mathrm{MOD_n}\models\mathrm{nRC_{fin}}$. \end{theorem} \begin{proof} Firstly, notice that because for any $m>n$ the function $\mathrm{Sel}$ selects an $n$-element set from each $m$-element set of atoms, $\mathrm{nRC_{fin}}$ holds in $\mathrm{MOD_n}$ for any infinite set of atoms. 
So, for an infinite set $X$ in $\mathrm{MOD_n}$, it is enough to construct a bijection between an infinite set of atoms and a subset of $X$\,---\,the function $\mathrm{Sel}$ on the finite sets of atoms will then induce a selection function on the finite subsets of some infinite subset of~$X$. Let $X$ be an infinite set in $\mathrm{MOD_n}$ with support $S'$. If $X$ is well ordered, the conclusion is trivial, so let $x_0\in X$ be an element not supported by $S'$ and let $S$ be a support of $x_0$ with $S'\subseteq S$. Let $a_0\in S\setminus S'$. If $\mathrm{fix_G}(S\setminus\{a_0\})\subseteq\mathrm{Sym_G}(x_0)$ then $S\setminus\{a_0\}$ is a support of $x_0$, so by iterating the process finitely many times we can assume that there exists a permutation $\tau\in\mathrm{fix_G}(S\setminus\{a_0\})$ such that $\tau(x_0)\neq x_0$. Our conclusion will follow by showing that there is a bijection between an infinite set of atoms and a subset of $X$, namely between $\{\pi(a_0):\pi\in\mathrm{fix_G}(S\setminus\{a_0\})\}$ and $\{\pi(x_0):\pi\in\mathrm{fix_G}(S\setminus\{a_0\})\}$. Suppose towards a contradiction that there are two permutations $\sigma,\sigma'\in\mathrm{fix_G}(S\setminus\{a_0\})$ such that $\sigma(x_0)=\sigma'(x_0)$ but $\sigma(a_0)\neq\sigma'(a_0)$. Then, by direct computation, the permutation $\sigma^{-1}\sigma'$ is such that $\sigma^{-1}\sigma'(a_0)\neq a_0$ and $\sigma^{-1}\sigma'(x_0)=x_0$. Let $b=\sigma^{-1}\sigma'(a_0)$. Then $\{b\}\cup (S\setminus\{a_0\})$ is a support of $x_0$. By construction, the set $\{\pi(a_0):\pi\in\mathrm{fix_G}(\{b\}\cup (S\setminus\{a_0\}))\}$ is infinite, from which we deduce that also the set $$ L=\big{\{}a\in A: \exists\,\pi\in\mathrm{fix_G}(S\setminus\{a_0\})\textrm{ such that $\pi(x_0)=x_0$ and $\pi(a_0)=a$}\big{\}} $$ is infinite. Now, by assumption there is a permutation $\tau\in\mathrm{fix_G}(S\setminus\{a_0\})$ such that $\tau(x_0)\neq x_0$. Let $y_0:=\tau(x_0)$. Then a standard argument shows that also $$ R=\big{\{}a\in A: \exists\,\pi\in\mathrm{fix_G}(S\setminus\{a_0\})\textrm{ such that $\pi(x_0)=y_0$ and $\pi(a_0)=a$}\big{\}} $$ must be infinite.\newline \noindent First note that in $L$ (and similarly also in $R$) there are infinitely many elements with ground $S\setminus \{a_0\}$. This is because $(S\setminus \{a_0\})\cup \{a_0\}\subseteq \textbf{F}$ is a finite model of $\mathrm{T_n}$ and in the construction of our permutation model we add infinitely many atoms $a_l$ (where from outside, $a_l\in\omega$), such that $(S\setminus \{a_0\})\cup \{a_l\}$ and $(S\setminus \{a_0\})\cup \{a_0\}$ are isomorphic via an isomorphism $\delta$ with $\delta\vert_{S\setminus\{a_0\}}=\operatorname{id}\vert_{S\setminus\{a_0\}}$ and $\delta(a_l)=a_0$. We can extend $\delta$ to an automorphism $\delta\in\mathrm{fix_G}(S\setminus \{a_0\})$. By definition of $L$ we have that $a_l\in L$. \newline \noindent Let $r\in R$ and $p,l\in L$ all having the same ground $S\setminus \{a_0\}$ such that $r\geq p$, $l\geq p$ and $\min(\{p,l,r\})>\max(S\setminus\{a_0\})$. We want to show that every map $$ \gamma:(S\setminus \{a_0\})\cup \{p\}\cup\{l\}\to (S\setminus \{a_0\})\cup \{p\}\cup \{r\} $$ with $\gamma\vert_{(S\setminus \{a_0\})\cup \{p\}}=\operatorname{id}_{(S\setminus \{a_0\})\cup \{p\}}$ and $\gamma(l)=r$ is an isomorphism of $\mathrm{T_n}$-models. Let $X\subseteq (S\setminus \{a_0\})\cup \{p\}\cup\{l\}$. If $\{p,l\}\cap X=\emptyset$ we have that $\gamma(\mathrm{Sel}(X))=\mathrm{Sel}(\gamma(X))$.
If $l\in X$ and $p\notin X$ let $\pi_l,\pi_r\in\mathrm{fix_G}(S\setminus\{a_0\})$ with $\pi_l(a_0)=l$ and $\pi_r(a_0)=r$. Then $\pi_r\circ\pi_l^{-1}\vert_X=\gamma\vert_X$. So since $\pi_r\circ\pi_l^{-1}\in G$ we have $\gamma(\mathrm{Sel}(X))=\mathrm{Sel}(\gamma(X))$. In the last case, when $\{p,l\}\subseteq X$, the selection function picks the $n$ biggest elements, because of the particular care we took in the construction of the selection function on the set of atoms and since $p,r$ and $l$ have ground $S\setminus \{a_0\}$; hence $\gamma(\mathrm{Sel}(X))=\mathrm{Sel}(\gamma(X))$ in this case as well. So we can extend $\gamma$ to a function $\tau^{\prime}\in\mathrm{fix_G}(\{p\}\cup (S\setminus \{a_0\}))$ with $\tau^{\prime}(l)=r$.\newline \noindent Let $\pi_r\in\mathrm{fix_G}(S\setminus\{a_0\})$ such that $\pi_r(a_0)=r$ and $\pi_r(x_0)=y_0$. Let $\pi_l\in\mathrm{fix_G}(S\setminus\{a_0\})$ with $\pi_l(a_0)=l$ and $\pi_l(x_0)=x_0$. Then we have that $\pi_r^{-1}\circ\tau^{\prime}\circ\pi_l(a_0)=a_0$ which implies that $\pi_r^{-1}\circ\tau^{\prime}\circ\pi_l(x_0)=x_0$ because the function fixes $S$. So \begin{equation} \label{eq:1} \tau^{\prime}(x_0)=\tau^{\prime}\circ\pi_{l}(x_0)=\pi_r(x_0)=y_0. \end{equation} Now let $\pi_p\in \mathrm{fix_G}(S\setminus\{a_0\})$ with $\pi_p(a_0)=p$ and $\pi_p(x_0)=x_0$. Since $S$ is a support of $x_0$, $\pi_p(S)=\{p\}\cup (S\setminus\{a_0\})$ is also a support of $\pi_p(x_0)=x_0$. Therefore, $$ \tau^{\prime}(x_0)=x_0. $$ This is a contradiction to (\ref{eq:1}). So we showed that for all $\sigma,\sigma'\in\mathrm{fix_G}(S\setminus\{a_0\})$, $\sigma(x_0)=\sigma'(x_0)$ implies $\sigma(a_0)=\sigma'(a_0)$, from which we get the desired bijection. \end{proof} Due to the following theorem, the class of models $\mathrm{MOD_n}$ will not tell us anything about the horizontal implications in the diagram. \begin{theorem} For each $n\in\omega$, $\mathrm{MOD_n}\models\mathrm{ACF^-}$. \end{theorem} \begin{proof} Fix $n\in\omega$ and let $\mathcal{A}=\{A_i:i\in I\}$ be a family of finite sets. By applying $\mathrm{nRC_{fin}}$ to $\bigcup\mathcal{A}$, it is enough to show that for all $m\leq n$, $\mathrm{C^-_m}$ holds in $\mathrm{MOD_n}$. Fix $m\in\omega$ with $m\leq n$ and suppose $\mathcal{A}=\{A_i:i\in I\}$ is a family of $m$-element sets, and let $P$ be a support of $\mathcal{A}$. If $\bigcup\mathcal{A}$ is well-orderable we are done, so let $x\in\bigcup\mathcal{A}$ be an element which is not supported by $P$, let $S'$ be a support of $x$ and $a\in S'\setminus P$ an atom such that for some $\pi\in\mathrm{fix}_G(P\cup (S'\setminus \{a\}))$, we have that $\pi(x)\neq x$, as in the previous proof. Set $S=P\cup (S'\setminus \{a\})$ and $X=\{\pi(x):\pi\in\mathrm{fix}_G(S)\}$. Now, we can replace $\mathcal{A}$ with $\{A_i\cap X:i\in I\}$ since a choice function on this last set gives a choice function on the previous $\mathcal{A}$ as well, and let us assume that $\mathcal{A}$ is a family of $m^\prime$-element sets for some $m^\prime\leq m$. As in the proof of Theorem \ref{thm:7.2} we can show that there is a bijection between the infinite set $X$ and the set of atoms $Y:=\{\pi(a)\mid \pi\in \mathrm{fix}_G(S)\}$. So we can without loss of generality assume that $\mathcal{A}$ is a family of $m^\prime$-element subsets of the atoms. Let $A_i\in\mathcal{A}$ with $A_i\cap S=\emptyset$, let $a_0\in A_i$ and let $R^\prime\subseteq A\setminus (S\cup A_i)$ be an $(n-1)$-element set.
By construction of the permutation model, we can find an $r_0\in A\setminus (S\cup A_i\cup R^\prime)$ such that $$ \forall a\in A_i\setminus \{a_0\}~\left(\mathrm{Sel}(R^\prime\cup\{r_0\}\cup \{a\})=R^\prime\cup \{r_0\}\right) $$ and $$ \mathrm{Sel}(R^\prime\cup\{r_0\}\cup\{a_0\})=R^\prime\cup \{a_0\}. $$ Define $R:=R^\prime\cup\{r_0\}$. Again by construction of the permutation model, we can find infinitely many $b_0\in A$ that behave the same way as $a_0$ with respect to $R\cup S\cup (A_i\setminus\{a_0\})$. In other words, if $\mathrm{repl}$ is the function that replaces $a_0$ by $b_0$, i.e. \begin{align*} \mathrm{repl}:A&\to A\\ x&\mapsto \begin{cases} a_0 &\text{ if } x=b_0;\\ b_0 & \text{ if } x=a_0;\\ x &\text{ otherwise,} \end{cases} \end{align*} we have that for all $X\subseteq R\cup S\cup (A_i\setminus\{a_0\})$ \begin{equation} \label{eq:2} \mathrm{repl}(\mathrm{Sel}(X\cup\{a_0\}))=\mathrm{Sel}(\mathrm{repl}(X\cup\{a_0\})). \end{equation} Define $$ \gamma: S\cup R\cup A_i\to S\cup R\cup (A_i\setminus\{a_0\})\cup\{b_0\} $$ by $ \gamma:=\mathrm{repl}\vert_{S\cup R\cup A_i}$. With (\ref{eq:2}) we see that $\gamma$ is an isomorphism of $\mathrm{T_n}$-models because for all $X\subseteq R\cup S\cup A_i$ $$ \gamma(\mathrm{Sel}(X))=\mathrm{Sel}(\gamma(X)). $$ So we can extend $\gamma$ to the whole model $\mathbf{F}$. Since $\gamma\in \mathrm{fix_G}(S\cup R),$ $\gamma(A_i)\in \mathcal{A}$. So there are infinitely many $A_j\in\mathcal{A}$ such that there is exactly one element $a\in A_j$ with $a\in\mathrm{Sel}(R\cup\{a\})$. Choose this element $a$. This gives a choice function with support $R\cup S$. \end{proof} We just mention the fact that all of $\mathrm{C_n}$ and $\mathrm{nC_{fin}}$ for $n\in\omega$ are false in every $\mathrm{MOD_m}$: it is enough to consider the families of all sets of atoms of the corresponding cardinalities. \section{Loop} \subsection{Positive} In this subsection we only need to record the following straightforward observation: \begin{lemma} \label{lemma} For all $k,n\in\omega$, $\mathrm{nRC_{fin}}\Rightarrow\mathrm{knRC_{fin}}$. \end{lemma} \subsection{Negative} \begin{theorem} Let $m,n\in\omega$ with $n>m$. For every $n$ which is not a multiple of $m$, $\mathrm{MOD_m}\not\models\mathrm{nRC_{fin}}$. \end{theorem} \begin{proof} Consider the set of atoms and suppose that there is an infinite subset $A$ with a function $f$ which selects $n$ elements from every finite and large enough subset of $A$. Let $S$ be a support of $f$. Let $M$ be any model of the theory $\mathrm{T}_m$ with cardinality $|M|=mk$ for $k\in\omega$ such that $m(k-1)<n<mk$. Then it is possible to find an $mk$-element subset $N=\{x_1,\dots,x_{mk}\}\subseteq \omega$ such that: \begin{enumerate} \item $N$ and $M$ are isomorphic as models of $\mathrm{T}_m$; \item $\mathrm{Sel}(Z)$ can be fixed arbitrarily whenever $Z\subseteq S\cup N$ with $\lvert Z\cap S\rvert\geq 1$ and $\lvert Z\cap N\rvert\geq 1$; \item $\mathrm{Sel}(\{x_{im+1},\dots,x_{mk}\})=\{x_{im+1},\dots, x_{(i+1)m}\}$ holds for all $i< k$. \end{enumerate} Notice that condition 3 is only a matter of reordering. Consider the following permutation of $N$, written as a finite product of finite cycles: $$ \widetilde{\pi}=\prod_{i< k}(x_{im+1},x_{im+2},\dots,x_{(i+1)m}). $$ Our conclusion will follow by showing that there is a model $M$ of $\mathrm{T_m}$, a corresponding subset $N\subseteq C$ and a permutation $\pi\in\mathrm{fix_G}(S)$ such that $\pi$ acts on $N$ exactly as $\widetilde{\pi}$ acts on $M$.
First we want to find a $\mathrm{T_m}$-model $M=\{x_1,\dots, x_{mk}\}$ such that $M$ and $\widetilde{\pi}M$ are isomorphic as $\mathrm{T_m}$-models. Naturally we first impose condition 3, namely that for all $i< k$ $$ \mathrm{Sel}(\{x_{im+1},x_{im+2},\dots,x_{mk}\})=\{x_{im+1},x_{im+2},\dots,x_{(i+1)m}\}. $$ The main idea of the proof is the following: Let $L$ be a subset of $M$ with $|L|>m$ and $L\neq\{x_{im+1},x_{im+2},\dots,x_{mk}\}$ for every $i\in k$. Consider the orbit $\{\widetilde{\pi}^lL:l\in\omega\}$. Now we choose an $m$-element subset $L^\prime\subseteq L$ and define $\mathrm{Sel}(L):=L^\prime$. Extend this choice to the whole orbit by defining $$ \mathrm{Sel}(\widetilde{\pi}^lL):=\widetilde{\pi}^l(\mathrm{Sel}(L)). $$ The choice of $\mathrm{Sel}(L)$ has to be suitable in the sense that $\widetilde{\pi}^jL=L$ must imply $\widetilde{\pi}^j(\mathrm{Sel}(L))=\mathrm{Sel}(L)$. \begin{itemize} \item First of all assume that for some $I\subseteq k$, $$ |L\cap(\bigcup_{i\in I}\{x_{im+1},\dots,x_{(i+1)m}\})|=m. $$ Then a suitable choice for $\mathrm{Sel}(L)$ is given by $(\bigcup_{i\in I}\{x_{im+1},\dots,x_{(i+1)m}\})\cap L$. \item Otherwise, let $J\subseteq k$ be the set of indices $j$ such that $\widetilde{\pi}^s$ fixes $$ L\cap\{x_{jm+1},\dots,x_{(j+1)m}\} $$ only if $s$ is a multiple of $m$. If $|L\setminus\bigcup_{j\in J}\{x_{jm+1},\dots,x_{(j+1)m}\}|\leq m$, then a suitable choice for $\mathrm{Sel}(L)$ is given by any $m$-element subset of $L$ which includes $L\setminus\bigcup_{j\in J}\{x_{jm+1},\dots,x_{(j+1)m}\}$. \item Let $J\subseteq k$ be as above and suppose that $m <|L\setminus\bigcup_{j\in J}\{x_{jm+1},\dots,x_{(j+1)m}\}|$. By replacing $L$ by $L\setminus\bigcup_{j\in J} \{x_{jm+1},\dots, x_{(j+1)m}\}$ we can assume that for each $i < k$ there exists a $1< s< m$ such that $\widetilde{\pi}^s$ fixes $L\cap\{x_{im+1},\dots,x_{(i+1)m}\}$. Our goal is now to get rid of the case in which, for some $i< k$, $0\neq|L\cap\{x_{im+1},\dots,x_{(i+1)m}\}|\nmid m$. Fix such an $i'< k$ and let $s'\in\omega$ be the least integer greater than $1$ for which $\widetilde{\pi}^{s'}$ fixes $L\cap\{x_{i'm+1},\dots,x_{(i'+1)m}\}$. Then the cardinality $|L\cap\{x_{i'm+1},\dots,x_{(i'+1)m}\}|$ must be a multiple of $\frac{m}{s'}$. Indeed, $\frac{m}{s'}$ is the cardinality of each orbit $$ \{(\widetilde{\pi}^{s'})^s(x):x\in L\cap\{x_{i'm+1},\dots,x_{(i'+1)m}\}\land s\in\omega\}. $$ In the next step we can consider each of these orbits as different subsets of the form $L\cap \{x_{im+1},\dots, x_{(i+1)m}\}$. So we can without loss of generality assume that $\lvert L\cap \{x_{im+1},\dots, x_{(i+1)m}\}\rvert$ divides $m$ for all $i<k$ and that $\widetilde{\pi}^s$ fixes $L\cap \{x_{im+1},\dots, x_{(i+1)m}\}$ for some $1<s<m$. \item Finally, choose $K\subseteq k$ such that \begin{enumerate} \item $|L\cap(\bigcup_{j\in K}\{x_{jm+1},\dots,x_{(j+1)m}\})|\geq m$ is minimal and \item $|L\cap\{x_{jm+1},\dots,x_{(j+1)m}\}|\mid m$ for all $j\in K$. \end{enumerate} Replace $L$ by $L\cap\left(\bigcup_{j\in K}\{x_{jm+1},\dots,x_{(j+1)m}\}\right)$. Set $a_j=|L\cap\{x_{jm+1},\dots,x_{(j+1)m}\}|$ for each $j\in K$. By writing $\sum_{j\in K}{a_j}=m+(|L|-m)$, we can see that $\gcd_{j\in K}(a_j)\mid (|L|-m)$. Now, notice that in order for a power $\widetilde{\pi}^s$ to fix $L\cap\{x_{jm+1},\dots,x_{(j+1)m}\}$ for some $j\in K$, $s$ has to be a multiple of $\frac{m}{a_j}$.
It follows that, in order for a power $\widetilde{\pi}^s$ to fix $L$, $s$ has to be a multiple of $m'=\mathrm{lcm}_{j\in K}(\frac{m}{a_j})=\frac{m}{\gcd_{j\in K}(a_j)}$. Summarizing: \begin{enumerate} \item $\gcd_{j\in K}(a_j)\mid (|L|-m)$. \item $\widetilde{\pi}^s$ fixes $L$ if and only if $s$ is a multiple of $m'=\frac{m}{\gcd_{j\in K}(a_j)}$. \item $|L|-m<a_j$, for all $j\in K$. \end{enumerate} Fix a $j\in K$. The conclusion will follow by finding an $F\subseteq L\cap\{x_{jm+1},\dots,x_{(j+1)m}\}$ of cardinality $|L|-m$ such that whenever some $\widetilde{\pi}^s$ fixes $L$, then $\widetilde{\pi}^s$ fixes $F$ as well. We can find such a set $F$ through the following procedure: Start with $F=\emptyset$. Let $x\in (L\cap\{x_{jm+1},\dots,x_{(j+1)m}\})\setminus F$, and replace $F$ by $F\cup\{(\widetilde{\pi}^{m'})^t(x):t\in\omega\}$, noticing that the cardinality of the orbit is exactly $\gcd_{j\in K}(a_j)$. If $|F|=|L|-m$ we are done, otherwise repeat the procedure with some $y\in (L\cap\{x_{jm+1},\dots,x_{(j+1)m}\})\setminus F$. After a finite number of repetitions we get $\lvert F\rvert=\lvert L\rvert-m$. \end{itemize} Now we can show that $S$ is not a support of the selection function $f$ we chose at the beginning of the proof. Let $M$ be the $\mathrm{T_m}$-model we constructed above that satisfies $\widetilde{\pi}M=M$. Let $N\subseteq \omega$ be a $\mathrm{T_m}$-model that is isomorphic to $M$ and satisfies conditions 1, 2 and 3. The proof above shows that $\pi(\mathrm{Sel}(L))=\mathrm{Sel}(\pi(L))$ for all $L\subseteq N$. Moreover, condition 2 says that $N$ can even be chosen such that $\pi(\mathrm{Sel}(L))=\mathrm{Sel}(\pi(L))$ for all $L\subseteq N\cup S$. So $\pi$ can be extended to a function $\pi\in \mathrm{fix_G}(S)$ on the whole model $\mathbf{F}$. Note that $\pi(N)=N$, while no $n$-element subset of $N$ is fixed setwise by $\pi$. So $S$ is indeed not a support of the selection function $f$. This is a contradiction. \end{proof} \section{Vertical Downward} \subsection{Positive} As an immediate consequence of Lemma \ref{lemma}, we get the following. \begin{lemma} \label{lem:9.1} For all $k,n\in\omega$, $\mathrm{nRC_{fin}}\Rightarrow\mathrm{RC}_{kn+1}$. \end{lemma} It is interesting to notice that Lemma \ref{lem:9.1} and the next theorem are here proven using qualitatively the same approach. Despite this fact, the forthcoming proof is more complex than the other. \begin{theorem} \label{thm:9.2} $\mathrm{4RC_{fin}}\Rightarrow\mathrm{RC_7}$. \end{theorem} \begin{proof} Let $A$ be an infinite set and apply $\mathrm{4RC_{fin}}$ to get an infinite subset $B\subseteq A$ with a function $\widetilde{f}\colon[B]^{>4}_\mathrm{fin}\to[B]^4$. Let $S$ be a $7$-element subset of $B$. In this proof we are going to consider all the possible ways in which the function $\widetilde{f}$ can act on the subsets of $S$ in order to show that it is always possible to choose a particular element of $S$, and hence to verify $\mathrm{RC_7}$. Though making use of symmetries in a few passages, it will substantially be a case-by-case analysis. Let $S=\{x,y,z,a,b,c,d\}$ with $\widetilde{f}(S)=\{a,b,c,d\}$. To simplify the notation, define the two functions \begin{itemize} \item $f\colon[S]^{>4}\to\mathscr{P}(S)$ given by $f\colon T\mapsto T\setminus\widetilde{f}(T)$; \item $g\colon[S]^{<3}\to [S]^{<3}$ given by $g\colon T\mapsto f(S\setminus T)$. \end{itemize} For simplicity we will write, for instance, $g(a)$ instead of $g(\{a\})$.
We can assume that for all $l\in \widetilde{f}(S)=\{a,b,c,d\}$ we have that $g(l)\cap \{x,y,z\}=\emptyset$. Otherwise, there is a natural way to choose an element from $S$. Now we build, step by step, all the possibilities for $\{g(a),g(b),g(c),g(d)\}$ which do not allow us to immediately choose an element from $S$. \begin{enumerate} \item By symmetry, we can fix $g(d)=f(x,y,z,a,b,c)=\{a,b\}$. \item There are now only two non-equivalent cases:\begin{enumerate} \item $g(d)=\{a,b\}$ and $g(c)=\{a,b\}$; \item $g(d)=\{a,b\}$ and $g(c)=\{a,d\}$, which is equivalent to the third possible choice $g(c)=\{b,d\}$. \end{enumerate} \item The two cases now branch into five: \begin{enumerate} \item $g(d)=\{a,b\}$, $g(c)=\{a,b\}$ and $g(b)=\{a,c\}$. This is symmetric to $g(d)=\{a,b\}$, $g(c)=\{a,b\}$ and $g(b)=\{a,d\}$. \item $g(d)=\{a,b\}$, $g(c)=\{a,b\}$ and $g(b)=\{c,d\}$. \item $g(d)=\{a,b\}$, $g(c)=\{a,d\}$ and $g(b)=\{a,c\}$. \item $g(d)=\{a,b\}$, $g(c)=\{a,d\}$ and $g(b)=\{c,d\}$. \item $g(d)=\{a,b\}$, $g(c)=\{a,d\}$ and $g(b)=\{a,d\}$. \end{enumerate} \end{enumerate} Notice that option 3.c can be ignored, since it allows us to choose $a$ in $S$ independently from $g(a)$. With similar arguments we can show that the only four non-symmetric choices for $g(a)$ in which we cannot immediately choose an element from $S$ are: \begin{enumerate} \item $g(d)=\{a,b\}$, $g(c)=\{a,b\}$, $g(b)=\{a,c\}$ and $g(a)=\{b,d\}$; \item $g(d)=\{a,b\}$, $g(c)=\{a,b\}$, $g(b)=\{c,d\}$ and $g(a)=\{c,d\}$; \item $g(d)=\{a,b\}$, $g(c)=\{a,d\}$, $g(b)=\{c,d\}$ and $g(a)=\{b,c\}$; \item $g(d)=\{a,b\}$, $g(c)=\{a,d\}$, $g(b)=\{a,d\}$ and $g(a)=\{c,d\}$. \end{enumerate} For each of the above cases we can check that the only permutations on $\{a,b,c,d\}$ that preserve $g$ are given by \begin{enumerate} \item $(a,b)(c,d)$; \item $(a,b)$, $(c,d)$, $(a,b)(c,d)$, $(a,c)(b,d)$, $(a,d)(b,c)$; \item $(a,c)(b,d)$; \item $(a,d)(b,c)$. \end{enumerate} In each of these cases, it is possible to select a particular double transposition (in case 2, pick $(a,b)(c,d)$). Last, consider how $g$ acts on the six distinct pairs included in $\{a,b,c,d\}$. A double transposition selects exactly two of these pairs: for instance $(a,b)(c,d)$ selects $\{a,b\}$ and $\{c,d\}$. We conclude the proof by considering the uniquely determined $g(g(a,b)\cup g(c,d))$. \end{proof} \begin{corollary} $\mathrm{4RC_{fin}}$ implies $\mathrm{RC_n}$ whenever $n$ is odd and greater than $4$. \end{corollary} \begin{proof} We have that either $n=1+4k$ or $n=3+4k$ for a $k\in\omega$. The first case follows directly by Lemma \ref{lem:9.1}. In the second case let $x$ be an infinite set and apply $\mathrm{4RC_{fin}}$ to get an infinite subset $y\subseteq x$ with a selection function $f:[y]^{>4}_{\mathrm{fin}}\to [y]^4$. Let $z\subseteq y$ be an $n$-element subset. Apply $f$ exactly $k$ times, each time removing the four selected elements (i.e., iterate $z\mapsto z\setminus f(z)$), to find a $3$-element subset $z_0$ of $z$. Then $\lvert z_0\cup f(z)\rvert=7$ and we can use Theorem \ref{thm:9.2}. \end{proof} \subsection{Negative} \begin{theorem} Let\/ $m,n\in\omega\setminus \{0\}$. Then\/ $\mathrm{MOD_m}\not\models\mathrm{RC_n}$ whenever, for some prime divisor $p$ of $m$, $n\not\equiv 1\Mod{p}$. \end{theorem} \begin{proof} Let $n,m\in\omega\setminus\{0\}$ and let $p$ be a prime divisor of $m$ such that $n\not\equiv 1 \Mod{p}$. Consider the set of atoms and suppose that there is an infinite subset $A$ with a function $f$ which selects an element from every $n$-element subset of $A$. Let $S$ be a support of $f$.
Let $M$ be any $\mathrm{T_m}$-model with cardinality $|M|=n$ and write $n=pk+r$ for unique $k,r\in\omega$, with $1< r< p$. Then it is possible to find an $n$-element subset $N=\{x_1,\dots,x_n\}$ of $C$ such that: \begin{enumerate} \item $N$ and $M$ are isomorphic as models of $\mathrm{T_m}$; \item $\mathrm{Sel}(Z)$ can be arbitrarily fixed whenever $Z\subseteq S\cup N$ with $\lvert Z\cap S\rvert\geq 1$ and $\lvert Z\cap N\rvert\geq 1$; \item $\mathrm{Sel}(\{x_{im+1},\dots,x_n\})=\{x_{im+1},\dots, x_{(i+1)m}\}$ holds for all $i< k$. \end{enumerate} Notice that condition 3 is only a matter of reordering. Consider the following permutation of $N$, written as a finite product of finite cycles: $$\widetilde{\pi}=(x_{pk+1},x_{pk+2},\dots,x_n)\prod_{i=0}^{k-1}(x_{pi+1},x_{pi+2},\dots,x_{p(i+1)})$$ Our conclusion will follow by showing that there is a model $M$ of $\mathrm{T}_m$, a corresponding subset $N\subseteq C$ and a permutation $\pi\in\mathrm{fix_G}(S)$ such that $\pi$ acts on $N$ exactly as $\widetilde{\pi}$ acts on $M$. Notice that every cycle in the definition of $\widetilde{\pi}$ is non-trivial if and only if $r\neq 1$. First we want to find a $\mathrm{T_m}$-model $M=\{x_1,\dots, x_n\}$ such that $M$ and $\widetilde{\pi}M$ are isomorphic as $\mathrm{T_m}$-models. Naturally we first impose condition 3, namely that for all $i<k$ $$ \mathrm{Sel}(\{x_{im+1},x_{im+2},\dots,x_n\})=\{x_{im+1},x_{im+2},\dots,x_{(i+1)m}\}. $$ The main idea of the proof is the following: Let $L$ be a subset of $M$ with $|L|>m$, $L\neq\{x_{im+1},x_{im+2},\dots,x_n\}$ for every $i< k$. Consider the orbit $\{\widetilde{\pi}^lL:l\in\omega\}$. Now we will choose an $m$-element subset $L^\prime\subseteq L$ and define $\mathrm{Sel}(L):=L^\prime$. Extend this choice to the whole orbit by defining $$ \mathrm{Sel}(\widetilde{\pi}^lL)=\widetilde{\pi}^l(\mathrm{Sel}(L)). $$ The choice of $\mathrm{Sel}(L)$ has to be suitable in the sense that $\widetilde{\pi}^jL=L$ must imply $\widetilde{\pi}^j(\mathrm{Sel}(L))=\mathrm{Sel}(L)$. \begin{itemize} \item First of all assume that for some $I\subseteq k$, $$ |L\cap(\bigcup_{i\in I}\{x_{pi+1},\dots,x_{p(i+1)}\})|\in\{m,m-|L\cap\{x_{pk+1},\dots,x_n\}|\}. $$ Then a suitable choice for $\mathrm{Sel}(L)$ is given by either $L\cap(\bigcup_{i\in I}\{x_{pi+1},\dots,x_{p(i+1)}\})$ or by $L\cap(\bigcup_{i\in I}\{x_{pi+1},\dots,x_{p(i+1)}\}\cup\{x_{pk+1},\dots,x_n\})$. \item Otherwise, let $J\subseteq k$ be the set of indices $j$ such that $0<|L\cap\{x_{jp+1},\dots,x_{(j+1)p}\}|\neq p$. Moreover, replace $J$ by $J\cup\{k\}$ if $|L\cap\{x_{kp+1},\dots,x_{n}\}|$ either is $1$ or does not divide $r$. For the sake of notation, let us write $\{x_{kp+1},\dots,x_{(k+1)p}\}$ instead of $\{x_{kp+1},\dots,x_n\}$. If $|L\setminus\bigcup_{j\in J}\{x_{jp+1},\dots,x_{(j+1)p}\}|\leq m$, then we claim that a suitable choice for $\mathrm{Sel}(L)$ is given by any $m$-element subset of $L$ which includes $L\setminus\bigcup_{j\in J}\{x_{jp+1},\dots,x_{(j+1)p}\}$. The claim follows from the fact that, given a set $\{y_1,\dots,y_{p'}\}$ for some prime $p'\in\omega$, if $\tau$ is the permutation $(y_1,\dots,y_{p'})$ and some power $\tau^a$ fixes a non-empty proper subset $H\subsetneq\{y_1,\dots,y_{p'}\}$, then $\tau^a$ is the identity on $\{y_1,\dots,y_{p'}\}$. \end{itemize} Note that we covered every possible case. Indeed, if we are not in the last case, then for some $k',r'\in\omega$ with $r'\leq r$ it is true that $m< k'p+r'$. Then, since $r< p$ and $p\mid m$, we are actually in the first case.
Now we can show that $S$ is not a support of the selection function $f$ we chose at the beginning of the proof. Let $M$ be the $\mathrm{T_m}$-model we constructed above that satisfies $\widetilde{\pi}M=M$. Let $N\subseteq \omega$ be a $\mathrm{T_m}$-model that is isomorphic to $M$ and satisfies conditions 1, 2 and 3. With the proof above and condition 2 we can choose $N$ such that $\pi(\mathrm{Sel}(L))=\mathrm{Sel}(\pi(L))$ for all $L\subseteq N\cup S$. So $\pi$ can be extended to a function $\pi\in \mathrm{fix_G}(S)$ on the whole model $\mathbf{F}$. Note that $\pi(N)=N$, while $\pi$ moves every element of $N$. So $S$ is indeed not a support of the selection function $f$. This is a contradiction. \end{proof} With the same arguments it is possible to emulate the previous result in the following way. \begin{theorem} Let $m\in\omega$ be greater than $2$. Then for all $1< n< m$, $\mathrm{MOD_m}\not\models\mathrm{RC_n}$. \end{theorem} \begin{proof} Exactly as in the previous theorem: just consider the permutation $$ \widetilde{\pi}=(x_1,\dots,x_n) $$ and impose that $\mathrm{Sel}(L)\supset L\cap N$ whenever $L\cap N\neq\emptyset$, with $L\subseteq S\cup N$. \end{proof} As an immediate consequence of the last results, we get the following Corollary: \begin{corollary} Let $k\in\omega\setminus\{0\}$, $\{p_1,\dots,p_k\}$ be distinct prime numbers and $n=\prod_{i=1}^{k}p_i$. Then\/ $\mathrm{nRC_{fin}}\Rightarrow\mathrm{RC_m}$ if and only if\/ $m\equiv 1\Mod{n}$. \end{corollary} \section{Open Questions} \begin{itemize} \item For $n\in\{2,3,4,6\}$ we have that $\mathrm{nRC_{fin}}\Rightarrow \mathrm{nC_{fin}^-}$. Does this implication hold for $n=5$? Or more generally: For which $n\in\omega$ does this implication hold? \item Write a natural number as a product of powers of distinct primes, $n=\prod_{i=1}^{k}p_i^{m_i}$. Is it the case that\/ $\mathrm{nRC_{fin}}\Rightarrow\mathrm{RC_m}$ if and only if\/ $m>n$ and $m\equiv1\Mod{\prod_{i=1}^{k}p_i}$? \end{itemize} \end{document}
\begin{document} \title{\center{Wave functions in the neighborhood of a toroidal surface;\\ hard vs. soft constraint}} \author{Mario Encinosa } \author{Lonnie Mott} \author{Babak Etemadi} \affiliation{ Florida A\&M University, Department of Physics, Tallahassee FL 32307} \begin{abstract} The curvature potential arising from confining a particle initially in three-dimensional space onto a curved surface is normally derived in the hard constraint $q \rightarrow 0$ limit, with $q$ the degree of freedom normal to the surface. In this work the hard constraint is relaxed, and eigenvalues and wave functions are numerically determined for a particle confined to a thin layer in the neighborhood of a toroidal surface. The hard constraint and finite layer (or soft constraint) quantities are comparable, but both differ markedly from those of the corresponding two dimensional system, indicating that the curvature potential continues to influence the dynamics when the particle is confined to a finite layer. This effect is potentially of consequence to the modelling of curved nanostructures. \end{abstract} \pacs{03.65Ge, 68.65.-k} \maketitle \section{\label{sec:level1}Introduction} The existence of a potential $V_C$ in the Schrodinger equation which stems from constraining a particle to a one- or two-dimensional surface embedded in three dimensions has a long history \cite{jenskoppe,dacosta1,dacosta2,exnerseba,matsutani,burgsjens}. The manifestations of $V_C$ have been investigated through formal and numerical means \cite{duclosexner,bindscatt,goldjaffe,ouyang,popov,midgwang,clarbrac,schujaff,ee1, ee2,ieee}, motivated recently in part by the sophistication with which nanostructures can be fabricated. The physics of objects with novel geometries is increasingly relevant to the modelling of real devices, hence substantial effort has been directed towards understanding the physics of bent tubes and wires, as well as more complicated shapes \cite{lin,chapblic,qu,nils,mott,halberg1,halberg2}. Consider a surface $\Sigma(u,v)$ with $(u,v)$ surface coordinates and define $q$ as the coordinate labelling the degree of freedom normal to $\Sigma(u,v)$. Generally $V_C$ (as detailed in the following section) is derived by imposing a hard constraint on the particle wherein a $q \rightarrow 0$ limit is taken along with a wave function re-scaling such that the norm is preserved. Here, rather than imposing a hard constraint, the particle will be confined to a thin layer in the neighborhood of a toroidal surface. The extent to which the hard constraint mirrors the more physically realizable soft constraint is then determined by calculating some low-lying eigenvalues and eigenfunctions of the system. There are several reasons to investigate these ideas with a toroidal structure: \vskip 6pt \noindent 1. The symmetry of the torus reduces computational intensiveness, but, because a torus has non-trivial mean and Gaussian curvatures, curvature effects remain important \cite{encmott}. \vskip 6pt \noindent 2. The spectrum and eigenfunctions for a particle on a toroidal surface have been determined \cite{fpl}, so comparisons can be made between the finite layer system and the two dimensional system both with and without $V_C$ present. \vskip 6pt \noindent 3. Toroidal structures have been fabricated and calculations addressing their transport \cite{shea,sano,latge,sasaki} and magnetic \cite{liu} properties have been performed.
Toroidal structures are novel because unlike a bulk sample, conductivity through the device is anticipated to be dominated by azimuthal modes. \vskip 6pt The remainder of this paper is organized as follows: in section II, $H_q$, the Hamiltonian for a particle near a toroidal surface is derived. The hard constraint $q \rightarrow 0$ limit of that Hamiltonian is then taken; under the requirement that the norm of the wave function be preserved, $H_C$ obtains. Finally, the ab initio $q = 0$ Hamiltonian $H_0$ is written. In section III the computational method employed to generate eigenvalues and wave functions is presented. Section IV gives results and section V is reserved for conclusions. \section{The toroidal Schrodinger equations} To restate, there are three Hamiltonians relevant to this work: \vskip 6pt \noindent 1. $H_q$ will be the Hamiltonian for a particle allowed to move in a thin layer normal to $T^2$, where again the normal degree of freedom will be labelled by $q$. \vskip6pt \noindent 2. $H_C$ will be the Hamiltonian derived from $H_q$ after imposing the $q \rightarrow 0$ hard constraint, and \vskip 6pt \noindent 3. $H_0$ the Hamiltonian for a particle restricted ab initio to $T^2$, i.e., $q = 0$ at the onset of the derivation of $H_0$. \vskip 6pt It is best to begin with the most general case, that of $H_q$, and later take the appropriate limits to obtain $H_C$ and $H_0$. Points near a toroidal surface of major radius $R$ and minor radius $a$ may be parameterized in terms of cylindrical coordinate unit vectors and a vector $\mathbf{\hat{n}}$ normal to the surface by \cite{fpl} $$ {\bf r}(\theta,\phi,q)=(R + a \ {\rm cos} \theta )\mathbf{\hat{\rho}} +a\ {\rm sin} \theta\mathbf{\hat {k}} + q\mathbf{\hat{n}}. \eqno(1) $$ Applying $d$ to Eq. (1) gives $$ d{\bf r}= (a+q)d\theta\mathbf{ \hat{\theta}}+(R + (a+q) \ {\rm cos} \theta)d\phi\mathbf{ \hat{\phi}}+dq\mathbf{ \hat{n}} \eqno(2) $$ with $\mathbf{\hat{n}} \equiv \mathbf{ \hat{\phi}} \times \mathbf{ \hat{\theta}}$, and $\mathbf{\hat{\theta}} =-\rm sin \theta \mathbf{\hat{\rho}}+\rm cos \theta \mathbf{\hat {k}}$. The metric elements $g_{ij}$ can be read off of $$ d{\bf r}\cdot d{\bf r}=(a+q)^2 d\theta^2+ (R+(a+q){\rm cos}\theta)^2d\phi^2 + dq^2 \eqno(3) $$ and the Laplacian derived from $$ \nabla^2= g^{-{1 \over 2}}{\partial \over \partial q^i} \bigg [ g^{1 \over 2}\ g^{ij}{\partial \over \partial q^j} \bigg ]. \eqno(4) $$ Setting $a_q = a+q$ and $F_q = R+ (a+q) \rm cos \theta$ yields the $H_q$ Schrodinger equation \begin{widetext} $$ {1 \over {a_q^2}}{\partial^2 \psi \over \partial \theta^2} - {{\rm sin} \theta \over a_q F_q} {\partial \psi \over \partial \theta} +{1 \over F_q^2}{\partial^2 \psi \over \partial \phi^2} + 2h {\partial \psi \over \partial q} + {\partial^2 \psi \over \partial q^2} - 2V_n(q)\psi + 2E\psi =0 \eqno(5) $$ \end{widetext} with $h$ the mean curvature given by $$ h \equiv {1 \over 2}(k_1+k_2) = {1 \over 2}\bigg[{1 \over a_q}+ {{\rm cos}\theta \over F_q}\bigg]. \eqno(6) $$ It is convenient to also define the Gaussian curvature $k$, $$ k \equiv k_1k_2 = {1 \over a_q} {{\rm cos}\theta \over F_q}. \eqno(7) $$ To derive the Schrodinger equation appropriate to $H_C$, $V_n(q)$ must be chosen to drive the particle arbitrarily close to the $q = 0$ limit{\cite{jenskoppe,dacosta1,dacosta2}}. As $q \rightarrow 0$ the wave function is expected to decouple into surface and normal parts as $$ \psi(\theta,\phi,q) \rightarrow \chi_s(\theta,\phi)\chi_n(q).
\eqno(8) $$ The norm must be conserved, leading to \cite{jenskoppe,dacosta1,dacosta2,matsutani2,kaplan} $$ |\psi|^2 WdSdq=|\chi_s|^2|\chi_n|^2dSdq \eqno(9) $$ or $$ \psi =\chi_s \chi_n W^{-{1\over2}} \eqno(10) $$ where $W=1+2qh+q^2k$ and $dS$ the surface measure. Performing the differentiations and letting $q\rightarrow 0$ gives the pair of equations $$ {\partial^2 \psi \over \partial \theta^2} - {\alpha \ { \rm sin}\ \theta \over F} {\partial \psi \over \partial \theta} +{{\alpha}^2 \over F^2}{\partial^2 \psi \over \partial \phi^2} -2a^2 V_C\psi +\beta \psi =0, \eqno(11) $$ $$ -{1 \over 2}{\partial^2 \chi_n \over \partial q^2} + V_n(q) \chi_n =E_n \chi_n \eqno(12) $$ with $\alpha = a/R, \beta = 2Ea^2$ and $F=1+\alpha \ \rm cos\theta$. The curvature potential $V_C$ is $$ V_C = -{1\over 8 a^2} {1\over F^2}. \eqno(13) $$ Making the standard ansatz for the azimuthal part of the eigenfunction $\chi(\phi) = {\rm exp} \ [im\phi]$ reduces Eq. (11) to $$ {\partial^2 \psi \over \partial \theta^2} - {\alpha \ {\rm sin}\ \theta \over [1 + \alpha \ \cos\theta]}{\partial \psi \over \partial \theta} -{(m^2 \alpha^2- {1\over4}) \over [1 + \alpha \ \cos\theta]^2}\psi +\beta\psi = 0. \eqno(14) $$ Eq. (14) is the Schrodinger equation that corresponds to $H_C$. It is the analog to Eq. (5) wherein the $q$ dependence has decoupled from the angular part of the kinetic energy operator and a curvature potential $V_C$ results from insisting upon conservation of the norm. The Hamiltonian $H_0$ for a particle that lives on the surface may be obtained by the method employed to derive $H_q$ by setting $q = 0$ in Eq. (1) from which $$ d{\bf r}\cdot d{\bf r}=a^2 d\theta^2+ (R+a\ {\rm cos}\theta)^2d\phi^2. \eqno(15) $$ The resulting expression is simple; $H_0$ is Eq. (11) with $V_C$ omitted \cite{fpl}. It should be emphasized that for more complicated surfaces the kinetic energy operator will have terms depending on the surface curvature not present here because of the azimuthal symmetry of the torus \cite{ee1}. The normalization of an eigenfunction is determined by $$ \int^{q_f}_{q_i} \int^{2\pi}_0\int^{2\pi}_0 \psi^*(q, \theta,\phi)\psi(q,\theta,\phi) M(\theta, q) d\theta d\phi dq= 1. \eqno(16) $$ with $$ M(\theta,q) = a_q F_q \eqno(17) $$ when $q \neq 0$. Wave functions obtained from $H_{0,C}$ are normalized with $M(\theta,0)$ and the $q$ integration omitted. \section{Computational method} The goal is to obtain eigenvalues/functions of $H_q$ that can be compared to those of $H_C$ and $H_0$. A procedure for determining the low-lying eigenvalues and eigenfunctions of $H_0$ has been given in \cite{fpl} and applied to $H_C$ in \cite{encmott}, so the focus here may be placed on the method employed for solving Eq. (5). In the $q \rightarrow 0$ limit the surface solutions are independent of the specific choice of $V_n(q)$, but for finite $q$ a form or forms for $V_n(q)$ must be settled upon. Two convenient choices are hard wall confinement with the walls at $\pm {L / 2}$ and an oscillator potential $V_n(q)= \omega^2 q^2/2$. The main complication in solving Eq. (5) ensues from the integration measure for the geometry described by Eq. (3), which precludes adopting a simple basis set of trigonometric functions in $\theta$ since they are not in general orthogonal over $M(\theta,q)$. It should also be noted that for finite $q$ it is not possible to recover orthogonality by re-scaling the basis states because the resulting Hamiltonian matrix is not Hermitian.
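The loss of orthogonality is easy to exhibit numerically. The Python fragment below is a minimal illustrative sketch (it is not the code used to produce the results of Sec. IV; only the radii and layer width quoted there are reused) that evaluates the overlaps of three unorthogonalized trial functions of the form ${\rm cos}(\pi q/L)\,{\rm cos}(n\theta)$ with respect to the measure $M(\theta,q)$ of Eq. (17):
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

# Geometry (Angstroms), taken from the values quoted in Sec. IV.
R, a, L = 500.0, 250.0, 25.0

def M(theta, q):
    # Integration measure of Eq. (17): M(theta,q) = (a+q)(R + (a+q)cos(theta)).
    return (a + q) * (R + (a + q) * np.cos(theta))

def phi(n):
    # Unorthogonalized trial function cos(pi q / L) cos(n theta).
    return lambda theta, q: np.cos(np.pi * q / L) * np.cos(n * theta)

def overlap(f, g):
    # <f|g> = int dtheta dq f g M(theta,q), theta in [0,2pi), q in [-L/2, L/2].
    integrand = lambda q, theta: f(theta, q) * g(theta, q) * M(theta, q)
    val, _ = dblquad(integrand, 0.0, 2.0 * np.pi,
                     lambda th: -L / 2.0, lambda th: L / 2.0)
    return val

S = np.array([[overlap(phi(i), phi(j)) for j in range(3)] for i in range(3)])
print(S / S[0, 0])  # off-diagonal entries do not vanish
\end{verbatim}
The non-vanishing off-diagonal entries of the matrix printed above are precisely the difficulty referred to in the preceding paragraph.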
These difficulties can be avoided if one performs a two-variable Gram-Schmidt procedure. Because the interest here is focused on issues other than generating a large number of eigenfunctions, two basis functions in $q$ and three in $\theta$ were used. This is a reasonable ansatz; the $q$ motion will produce an energy spectrum with much larger spacing than the $\theta,\phi$ motion, so two functions in $q$ are sufficient to ensure nothing is missed. Additionally, it has been shown previously \cite{fpl} that only a few trigonometric functions in general are necessary to accurately describe an eigenfunction on $T^2$. For hard walls the basis functions with $n = (0,1,2)$ are $$ \Phi^{0n}_{hw}={\rm cos}({\pi q \over L}){\rm cos}(n\theta) \eqno(18) $$ $$ \Phi^{1n}_{hw}={\rm sin}({2\pi q \over L}){\rm cos}(n\theta) \eqno(19) $$ and for oscillator confinement $$ \Phi^{0n}_{osc}=e^{-\omega q^2}{\rm cos}(n\theta) \eqno(20) $$ $$ \Phi^{1n}_{osc}=e^{-\omega q^2}H_1(\sqrt \omega q){\rm cos}(n\theta). \eqno(21) $$ For each case the six states computed from the Gram-Schmidt procedure are employed to construct the matrix $$ H^{ijmn}_{hw,osc}= \big<\Phi^{im}_{hw,osc}|H_q|\Phi^{jn}_{hw,osc} \big > \eqno(22) $$ that yields eigenvalues and wave functions. \section{Results} Toroidal radii $R = 500 \AA$ and $a = 250 \AA$ were chosen to be on the order of structures that have been synthesized \cite{garsia, lorke,zhang}, and the surface layer widths set by $L$ and $\omega$ were kept within realistic values for confinement regions. It should be emphasized that the results which follow are representative; the trends exhibited below were found to hold for larger values of $R, a,$ and $L$ as well as for $m \neq 0$ and negative parity states. In table I the spectra for hard wall and oscillator confinement potentials with $L = 25, 10$ $\AA$ and $\omega = .05,.1$ $\AA^{-2}$, respectively, are shown. The dimensionless eigenvalues $\beta_i$ are found by subtracting the $q$ degree of freedom energy ($\pi^2/2L^2$ or $\omega /2$) from the eigenvalues obtained from the Hamiltonian matrix defined through Eq. (22) and multiplying by $2a^2$. The $\beta_i$ are compared to those found in \cite{encmott} where $V_C$ was included in the $T^2$ Hamiltonian and in \cite{fpl} where it was not. These results indicate that the soft constraint quantities are relatively insensitive to differing $L$ and $\omega$, and are better matched by the spectra of \cite{encmott}. In tables II and III the ground and first excited state wave functions for the six cases described above are shown. The results illustrate that hard constraint eigenvalues and eigenfunctions are very good approximations to the physically realistic soft constraint values, at least for cases where the length scale that determines the surface energies of the system is near the curvature length scale of the device. Here that scale is set by the minor radius $a$; however, in general as the length scale that sets local curvature becomes small, $V_C$ increases such that $\big < i|V_C|j \big >$ matrix elements may become comparable to the largest energy in the system. For a disk or strip structure the scales can be very different. In the case of a disk, for example, the energy scale is set by the radius of the disk, but a bump or ripples can be placed on the disk at much smaller scales \cite{ee2,ieee}. Although the results here are relatively independent of whether hard wall or oscillator confinement was used, it was found that some care must be taken with the choice of $V_n(q)$.
If instead of using hard walls at $\pm L/2$, the walls are placed at $0$ and $L$, agreement with the hard constraint spectrum is lessened, though only by roughly ten percent in both the eigenvalues and the wave function expansion coefficients. A possible explanation for this is that the ${\rm sin} (n\pi q/L)$ functions always vanish on the $q = 0$ surface, so that some terms that multiply curvature functions are zero there. The basis set expansion employed here comprises two functions in the $q$ degree of freedom; for the sake of brevity, only angular eigenvalues and eigenfunctions which belong to the $q$ ground state wave function have been reported. The surface states that correspond to excited normal modes lie much higher in energy than the low-lying surface excitations dealt with here, but may prove important to device modelling as the $q$-motion becomes more diffusive. \section{Conclusions} The main result of this paper is the good agreement between the low-lying spectra and eigenfunctions resulting from $H_C$ and those that emerge from $H_q$, and, as importantly, the relative disagreement that the eigenfunctions of $H_q$ display when contrasted to those of $H_0$. If a two-dimensional approximation is to be adopted for curved nanostructures, the results here indicate that for cases where the normal excitations are unimportant the physics would be better captured with $H_0+V_C$ than with $H_0$. \begin{table} \caption{Ground, first and second excited state eigenvalues $\beta_i$ for the six Hamiltonians relevant to this paper with $R = 500 \AA$ and $a = 250\AA$. } \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \ \ & $L =25 \AA$ & $L = 10 \AA$ & $\omega = .05\AA^{-2} $& $\omega = .1 \AA^{-2}$& Ref.\protect\cite{encmott} & Ref.\protect\cite{fpl} \\ \hline $\beta_0$ & -.3405 & -.3406 & -.3489 & -.3488 & -.3511 & .0 \\ $\beta_1$ & .6618 & .6610 & .6515 & .6446 & .6386 & 1.1223\\ $\beta_2$ & 3.7919 & 3.7886 & 3.7800 & 3.7876 & 3.6529 & 4.0520\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Ground state wave functions; coefficients are normalized to the constant term in the series to facilitate comparisons. Terms not shown are at least an order of magnitude smaller than those given.} \begin{center} \begin{tabular}{|l|l|} \hline & \qquad \qquad \ \ $\psi_0(\theta,q)$\\ \hline $L=25 \AA$ & $(1- .3676 \ \rm cos \theta+.0693 \ \rm cos2 \theta){\rm cos {\pi q \over 25}}$\\ $L=10\AA $ & $(1- .3675 \ \rm cos \theta+.0693 \ \rm cos2 \theta){\rm cos {\pi q \over 10}} $ \\ $\omega = .05\AA^{-2}$ & $(1- .3580 \ \rm cos \theta+.0669 \ \rm cos2 \theta )e^{-.025q^2}$ \\ $\omega = .1\AA^{-2}$ & $(1- .3567 \ \rm cos \theta+.0654\ \rm \ cos2 \theta)e^{-.05q^2} $ \\ Ref.\protect\cite{encmott} & $1- .3679 \ \rm cos \theta+.0784 \ \rm cos2 \theta$ \\ Ref.\protect\cite{fpl} & $1$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{First excited state wave functions; coefficients are normalized to the dominant $\rm cos\theta$ term in the series to facilitate comparisons.
Terms not shown are at least an order of magnitude smaller than those given.} \begin{center} \begin{tabular}{|l|l|} \hline & \qquad \qquad \ \ $\psi_1(\theta,q)$\\ \hline $L=25\AA $ & $(-.0842+ \rm cos \theta-.1369 \ \rm cos2 \theta){\rm cos {\pi q \over 25}}$\\ $L=10\AA $ & $(-.0842+\rm cos \theta- .1370 \ \rm cos2 \theta){\rm cos {\pi q \over 10}} $ \\ $\omega = .05\AA^{-2}$ & $(-.0879 + \rm cos \theta-.1358 \ \rm cos2 \theta )e^{-.025q^2}$ \\ $\omega = .1\AA^{-2}$ & $(-.0877 + \rm cos \theta-.1362 \ \rm cos2 \theta)e^{-.05q^2} $ \\ Ref.\protect\cite{encmott} & $-.0851 + \rm cos \theta - .1540 \ \rm cos2 \theta$ \\ Ref.\protect\cite{fpl} & $-.2500+ \rm cos \theta -.0820 \ \rm cos 2 \theta$ \\ \hline \end{tabular} \end{center} \end{table} \begin{thebibliography}{36} \expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi \bibitem[{\citenamefont{Jensen and Koppe}(1971)}]{jenskoppe} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Jensen}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Koppe}}, \bibinfo{journal}{Ann. of Phys.} \textbf{\bibinfo{volume}{63}}, \bibinfo{pages}{586} (\bibinfo{year}{1971}). \bibitem[{\citenamefont{da~Costa}(1981)}]{dacosta1} \bibinfo{author}{\bibfnamefont{R.~C.~T.} \bibnamefont{da~Costa}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{23}}, \bibinfo{pages}{1982} (\bibinfo{year}{1981}). \bibitem[{\citenamefont{da~Costa}(1982)}]{dacosta2} \bibinfo{author}{\bibfnamefont{R.~C.~T.} \bibnamefont{da~Costa}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{25}}, \bibinfo{pages}{2893} (\bibinfo{year}{1982}). \bibitem[{\citenamefont{Exner and Seba}(1989)}]{exnerseba} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Exner}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Seba}}, \bibinfo{journal}{J. Math. Phys.} \textbf{\bibinfo{volume}{30}}, \bibinfo{pages}{2574} (\bibinfo{year}{1989}). \bibitem[{\citenamefont{Matusani}(1991)}]{matsutani} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Matusani}}, \bibinfo{journal}{J. Phys. Soc. Jap.} \textbf{\bibinfo{volume}{61}}, \bibinfo{pages}{55} (\bibinfo{year}{1991}). \bibitem[{\citenamefont{Burgess and Jensen}(1993)}]{burgsjens} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Burgess}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Jensen}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{48}}, \bibinfo{pages}{1861} (\bibinfo{year}{1993}). \bibitem[{\citenamefont{Duclos and Exner}(1995)}]{duclosexner} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Duclos}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Exner}}, \bibinfo{journal}{Rev. Math. Phys.} \textbf{\bibinfo{volume}{7}}, \bibinfo{pages}{73} (\bibinfo{year}{1995}). \bibitem[{\citenamefont{Londergan et~al.}(1999)\citenamefont{Londergan, Carini, and Murdock}}]{bindscatt} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Londergan}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Carini}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Murdock}}, \emph{\bibinfo{title}{Binding and scattering in two dimensional systems; applications to quantum wires, waveguides, and photonic crystals}} (\bibinfo{publisher}{Springer-Verlag}, \bibinfo{address}{Berlin}, \bibinfo{year}{1999}). 
\bibitem[{\citenamefont{Goldstone and Jaffe}(1991)}]{goldjaffe} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Goldstone}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.~L.} \bibnamefont{Jaffe}}, \bibinfo{journal}{Phys. Rev. B} \textbf{\bibinfo{volume}{45}}, \bibinfo{pages}{14100} (\bibinfo{year}{1991}). \bibitem[{\citenamefont{Ouyang et~al.}(1998)\citenamefont{Ouyang, Mohta, and Jaffe}}]{ouyang} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Ouyang}}, \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Mohta}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.~L.} \bibnamefont{Jaffe}}, \bibinfo{journal}{Ann. of Phys.} \textbf{\bibinfo{volume}{275}}, \bibinfo{pages}{297} (\bibinfo{year}{1998}). \bibitem[{\citenamefont{Popov}(2000)}]{popov} \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Popov}}, \bibinfo{journal}{Phys. Lett. A} \textbf{\bibinfo{volume}{269}}, \bibinfo{pages}{148} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Midgley and Wang}(2000)}]{midgwang} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Midgley}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Wang}}, \bibinfo{journal}{Aus. J. Phys.} \textbf{\bibinfo{volume}{53}}, \bibinfo{pages}{77} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Clark and Bracken}(1996)}]{clarbrac} \bibinfo{author}{\bibfnamefont{I.~J.} \bibnamefont{Clark}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.~J.} \bibnamefont{Bracken}}, \bibinfo{journal}{J. Phys. A} \textbf{\bibinfo{volume}{29}}, \bibinfo{pages}{4527} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Schuster and Jaffe}(2003)}]{schujaff} \bibinfo{author}{\bibfnamefont{P.~C.} \bibnamefont{Schuster}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.~L.} \bibnamefont{Jaffe}}, \bibinfo{journal}{Ann. Phys.} \textbf{\bibinfo{volume}{307}}, \bibinfo{pages}{132} (\bibinfo{year}{2003}). \bibitem[{\citenamefont{Encinosa and Etemadi}(1998{\natexlab{a}})}]{ee1} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Encinosa}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Etemadi}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{58}}, \bibinfo{pages}{77} (\bibinfo{year}{1998}{\natexlab{a}}). \bibitem[{\citenamefont{Encinosa and Etemadi}(1998{\natexlab{b}})}]{ee2} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Encinosa}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Etemadi}}, \bibinfo{journal}{Physica B} \textbf{\bibinfo{volume}{266}}, \bibinfo{pages}{361} (\bibinfo{year}{1998}{\natexlab{b}}). \bibitem[{\citenamefont{Encinosa}(2000)}]{ieee} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Encinosa}}, \bibinfo{journal}{IEEE Trans. Elec.} \textbf{\bibinfo{volume}{47}}, \bibinfo{pages}{878} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Lin and Jaffe}(1996)}]{lin} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Lin}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.~L.} \bibnamefont{Jaffe}}, \bibinfo{journal}{Phys. Rev. B} \textbf{\bibinfo{volume}{54}}, \bibinfo{pages}{5757} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Chaplik and Blick}(2004)}]{chapblic} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Chaplik}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.~H.} \bibnamefont{Blick}}, \bibinfo{journal}{New J. Phys.} \textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{33} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{Qu and Geller}(2004)}]{qu} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Qu}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Geller}}, \bibinfo{journal}{Phys. Rev. 
B} \textbf{\bibinfo{volume}{70}}, \bibinfo{pages}{085414} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{Nilsson}()}]{nils} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Nilsson}}, \bibinfo{note}{\texttt{cond-mat/0103029}}. \bibitem[{\citenamefont{Mott et~al.}()\citenamefont{Mott, Encinosa, and Etemadi}}]{mott} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Mott}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Encinosa}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Etemadi}}, \bibinfo{note}{\texttt {quant-ph /0406074}, accepted for publication in Physica E}. \bibitem[{\citenamefont{Schulze-Halberg}(2003)}]{halberg1} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Schulze-Halberg}}, \bibinfo{journal}{Found. Phys. Lett.} \textbf{\bibinfo{volume}{17}}, \bibinfo{pages}{677} (\bibinfo{year}{2003}). \bibitem[{\citenamefont{Schulze-Halberg}(2004)}]{halberg2} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Schulze-Halberg}}, \bibinfo{journal}{Modern Phys. Lett. A} \textbf{\bibinfo{volume}{19}}, \bibinfo{pages}{1759} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{Encinosa and L.Mott}(2003)}]{encmott} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Encinosa}} \bibnamefont{and} \bibinfo{author}{\bibnamefont{L.Mott}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{68}}, \bibinfo{pages}{014102} (\bibinfo{year}{2003}). \bibitem[{\citenamefont{Encinosa and Etemadi}(2003)}]{fpl} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Encinosa}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Etemadi}}, \bibinfo{journal}{Found. Phys. Lett.} \textbf{\bibinfo{volume}{16}}, \bibinfo{pages}{403} (\bibinfo{year}{2003}). \bibitem[{\citenamefont{H.R.Shea et~al.}(2000)\citenamefont{H.R.Shea, Martel, and Avouris}}]{shea} \bibinfo{author}{\bibnamefont{H.R.Shea}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Martel}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Avouris}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{84}}, \bibinfo{pages}{4441} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Sano et~al.}(2003)\citenamefont{Sano, Kamino, Okamura, and Shinkai}}]{sano} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Sano}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Kamino}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Okamura}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Shinkai}}, \bibinfo{journal}{Science} \textbf{\bibinfo{volume}{293}}, \bibinfo{pages}{1299} (\bibinfo{year}{2003}). \bibitem[{\citenamefont{Lat$\acute{\rm g}$e et~al.}(2003)\citenamefont{Lat$\acute{\rm g}$e, C.G.Rocha, L.A.L.Wanderley, M.Pacheco, P.Orellana, and Z.Barticevic}}]{latge} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Lat$\acute{\rm g}$e}}, \bibinfo{author}{\bibnamefont{C.G.Rocha}}, \bibinfo{author}{\bibnamefont{L.A.L.Wanderley}}, \bibinfo{author}{\bibnamefont{M.Pacheco}}, \bibinfo{author}{\bibnamefont{P.Orellana}}, \bibnamefont{and} \bibinfo{author}{\bibnamefont{Z.Barticevic}}, \bibinfo{journal}{Phys. Rev. B} \textbf{\bibinfo{volume}{67}}, \bibinfo{pages}{155413} (\bibinfo{year}{2003}). \bibitem[{\citenamefont{Sasaki et~al.}(2004)\citenamefont{Sasaki, Kawazoe, and Saito}}]{sasaki} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Sasaki}}, \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Kawazoe}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Saito}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{321}}, \bibinfo{pages}{369} (\bibinfo{year}{2004}). 
\bibitem[{\citenamefont{Liu et~al.}(2002)\citenamefont{Liu, Guo, Jayanthi, and Wu}}]{liu} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Liu}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Guo}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Jayanthi}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Wu}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{88}}, \bibinfo{pages}{217206} (\bibinfo{year}{2002}). \bibitem[{\citenamefont{Matsutani}(1999)}]{matsutani2} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Matsutani}}, \bibinfo{journal}{Rev. Math. Phys.} \textbf{\bibinfo{volume}{11}}, \bibinfo{pages}{171} (\bibinfo{year}{1999}). \bibitem[{\citenamefont{L.Kaplan et~al.}(1992)\citenamefont{L.Kaplan, Maitra, and Heller}}]{kaplan} \bibinfo{author}{\bibnamefont{L.Kaplan}}, \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Maitra}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Heller}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{56}}, \bibinfo{pages}{2592} (\bibinfo{year}{1992}). \bibitem[{\citenamefont{García et~al.}(1997)\citenamefont{García, Medeiros-Ribeiro, Schmidt, Ngo, Feng, Lorke, Kotthaus, and Petroff}}]{garsia} \bibinfo{author}{\bibfnamefont{J.~M.} \bibnamefont{García}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Medeiros-Ribeiro}}, \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Schmidt}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Ngo}}, \bibinfo{author}{\bibfnamefont{J.~L.} \bibnamefont{Feng}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Lorke}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Kotthaus}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.~M.} \bibnamefont{Petroff}}, \bibinfo{journal}{App. Phys. Lett.} \textbf{\bibinfo{volume}{71}}, \bibinfo{pages}{2014} (\bibinfo{year}{1997}). \bibitem[{\citenamefont{Lorke et~al.}(2000)\citenamefont{Lorke, Luyken, Govorov, and Kotthaus}}]{lorke} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Lorke}}, \bibinfo{author}{\bibfnamefont{R.~J.} \bibnamefont{Luyken}}, \bibinfo{author}{\bibfnamefont{A.~O.} \bibnamefont{Govorov}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~P.} \bibnamefont{Kotthaus}}, \bibinfo{journal}{Phys. Rev. Lett} \textbf{\bibinfo{volume}{84}}, \bibinfo{pages}{2223} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Zhang et~al.}(2003)\citenamefont{Zhang, Chung, and Mirkin}}]{zhang} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Zhang}}, \bibinfo{author}{\bibfnamefont{S.~W.} \bibnamefont{Chung}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.~A.} \bibnamefont{Mirkin}}, \bibinfo{journal}{Nano. Lett.} \textbf{\bibinfo{volume}{3}}, \bibinfo{pages}{43} (\bibinfo{year}{2003}). \end{thebibliography} \end{document}
\begin{document} \numberwithin{equation}{section} \newtheorem{definition}[equation]{Definition} \newtheorem{theorem}[equation]{Theorem} \newtheorem{lemma}[equation]{Lemma} \newtheorem{corollary}[equation]{Corollary} \newtheorem{proposition}[equation]{Proposition} \newtheorem{remark}[equation]{Remark} \newtheorem{remarks}[equation]{Remarks} \newtheorem{example}[equation]{Example} \newtheorem{conjecture}[equation]{Conjecture} \newtheorem{problem}[equation]{Problem} \newtheorem{note}[equation]{Note} \newcommand{\ad} {\hbox{\rm ad\,}} \newcommand\g {\mathfrak{g}} \setcounter{secnumdepth}{2} \title[]{\large The Equitable Basis for $\mathfrak{sl}_2$} \author[]{Georgia Benkart$^{\star}$} \address{Department of Mathematics \\ University of Wisconsin \\ Madison, WI 53706, USA} \email{[email protected]} \author[]{Paul Terwilliger \address{ \\ }} \email{[email protected]} \thanks{$^{\star}$Support from NSF grant \#{}DMS--0245082 is gratefully acknowledged. \hfil \break {\bf Keywords}: equitable basis, modular group, Kac-Moody algebra, Cartan matrix \hfil\break \noindent {\bf 2000 Mathematics Subject Classification}: 17B37} \date{January 6, 2010} \maketitle \begin{abstract} This article contains an investigation of the equitable basis for the Lie algebra $\mathfrak{sl}_2$. Denoting this basis by $\lbrace x,y,z\rbrace$, we have \begin{eqnarray*} [x,y] = 2x + 2y, \qquad [y,z] = 2y + 2z, \qquad [z, x] = 2z + 2x. \end{eqnarray*} We determine the group of automorphisms $G$ generated by $\exp(\ad x^*)$, \,$\exp(\ad y^*)$, \,$\exp(\ad z^*)$, where $\lbrace x^*,y^*,z^*\rbrace $ is the basis for $\mathfrak{sl}_2$ dual to $\lbrace x,y,z\rbrace $ with respect to the trace form $(u,v) = \hbox{\rm tr}(uv)$ and study the relationship of $G$ to the isometries of the lattices $L={\mathbb Z}x \oplus {\mathbb Z}y\oplus {\mathbb Z}z$ and $L^* ={\mathbb Z}x^* \oplus {\mathbb Z}y^*\oplus {\mathbb Z}z^*$. The matrix of the trace form is a Cartan matrix of hyperbolic type, and we identify the equitable basis with a set of simple roots of the corresponding Kac-Moody Lie algebra $\mathfrak{g}$, so that $L$ is the root lattice and $\frac{1}{2} L^*$ is the weight lattice of $\mathfrak g$. The orbit $G(x)$ of $x$ coincides with the set of real roots of $\mathfrak g$. We determine the isotropic roots of $\mathfrak g$ and show that each isotropic root has multiplicity 1. We describe the finite-dimensional $\mathfrak{sl}_2$-modules from the point of view of the equitable basis. In the final section, we establish a connection between the Weyl group orbit of the fundamental weights of $\mathfrak{g}$ and Pythagorean triples. \end{abstract} \section{ Introduction} The purpose of this article is to investigate systematically a certain basis, called the equitable basis, for the Lie algebra $\mathfrak{sl}_2$ of $2 \times 2$ trace zero matrices over a field $\mathbb F$ of characteristic zero. This basis has already appeared in the theory of tridiagonal pairs \cite{H1}, \cite{H2} and of the three-point loop algebra $\mathfrak{sl}_2 \otimes \mathbb F[t,t^{-1}, (t-1)^{-1}]$ (\cite{HT}, \cite{BT}, see also \cite{ITW}).
As we will show, it exhibits many striking features and has connections with the theory of Kac-Moody Lie algebras. In Section 2, we introduce the equitable basis $\lbrace x,y,z\rbrace$ and its dual basis $\lbrace x^*,y^*,z^*\rbrace$ with respect to the trace form $(u,v) = \hbox{\rm tr}(uv)$ on $\mathfrak{sl}_2$. In Section 3, we study the group $G$ generated by $\exp(\ad x^*)$, \,$\exp(\ad y^*)$, \,$\exp(\ad z^*)$ and show in Theorem \ref{thm1} that $G$ is isomorphic to the modular group $\hbox{\rm PSL}_2({\mathbb Z})$. We then turn our attention to the lattices $L={\mathbb Z}x \oplus {\mathbb Z}y\oplus {\mathbb Z}z$ and $L^* ={\mathbb Z}x^* \oplus {\mathbb Z}y^*\oplus {\mathbb Z}z^*$, and in Theorem \ref{thm2}, give a characterization of the orbit $G(x)$ as the elements $u \in L$ with $(u,u) = 2$. Since there is an automorphism of $\mathfrak{sl}_2$ in $G$ which cyclically permutes $x,y,z$, this is the same as the orbit of $y$ and of $z$. The next two sections are devoted to a study of the isometries, automorphisms, and antiautomorphisms of $L$ and $L^*$. Theorems \ref{thm:isostab}-\ref{thm:autL} and \ref{thm:starsame} give the precise relationship between (i) the group $G$, \ (ii) the group of automorphisms for $\mathfrak{sl}_2$ that preserve $L$, (iii) the group of automorphisms and antiautomorphisms for $\mathfrak{sl}_2$ that preserve $L$, and (iv) the group of isometries $\hbox{\rm Isom}_{\mathbb Z}(L)$ for $(\,,\,)$ that preserve $L$, as well as analogous results for the lattice $L^* ={\mathbb Z}x^* \oplus {\mathbb Z}y^*\oplus {\mathbb Z}z^*$. In Section 7, we make explicit the connections between the equitable basis and the hyperbolic Kac-Moody Lie algebra $\mathfrak g$ corresponding to the Cartan matrix $$\left[\begin{array}{ccc} \ \ 2 & -2 & -2 \\ -2 & \ \ 2 & -2 \\ -2 & -2 &\ \ 2 \end{array} \right].$$ The simple roots of $\mathfrak g$ can be identified with the elements of the equitable basis, and the real roots with the orbit $G(x)$ (see Theorem \ref{thm3}). In Proposition \ref{prop:iso2} we show that the set of isotropic roots can be identified with $\bigcup_{n \in \mathbb Z, n \neq 0} 2nG (z^*)$, which is precisely the set of nonzero nilpotent matrices in $\mathfrak{sl}_2(\mathbb Z)$, and prove that each isotropic root has multiplicity 1 (see Corollary \ref{cor:isomult}). We determine the relationship between the Weyl group $W$ of $\mathfrak g$ and the group $\hbox{\rm Isom}_{\mathbb Z}(L)$ in Proposition \ref{prop:isoWeyl}. Section 8 studies the finite-dimensional representations of $\mathfrak{sl}_2$ from the equitable point of view. Then starting with the equitable picture of the adjoint representation for $\mathfrak{sl}_2$, in the final section we apply reflections in $W$ to obtain the Poincar\'e disk. The equitable basis enables us to connect the Weyl group orbit of the fundamental weights of $\mathfrak{g}$ with Pythagorean triples. \section{The equitable basis} Throughout, $\{e,f,h\}$ will denote the basis for $\mathfrak{sl}_2$ given by $$ e = \left[\begin{array}{cc}0 & 1 \\ 0 & 0 \end{array}\right], \quad f = \left[\begin{array}{cc}0 & 0 \\ 1 & 0 \end{array}\right], \quad h = \left[\begin{array}{cc}1 &\ \ 0 \\ 0 & -1 \end{array}\right], $$ \noindent and having products \ $[e,f] = h, \ [h,e] = 2e, \ [h,f] = -2f$. 
The {\it equitable basis} $\{x,y,z\}$ for $\mathfrak{sl}_2$ consists of the matrices \begin{eqnarray} x &=& \left[\begin{array}{cc}1 & \ \ 0 \\ 0 & -1 \end{array}\right] = h, \nonumber \\ y &=& \left[\begin{array}{cc} -1 & 2 \\ \ \ 0 & 1 \end{array}\right] = 2e - h, \label{eqbas} \\ z &=& \left[\begin{array}{cc} -1 & 0 \\ -2 & 1 \end{array}\right] = -2f - h, \nonumber \end{eqnarray} \noindent whose products satisfy \begin{equation}\label{eq:mult} [x,y] = 2x + 2y, \qquad [y,z] = 2y + 2z, \qquad [z, x] = 2z + 2x. \end{equation} {F}rom this it follows that there is a Lie algebra automorphism $\varrho$ of $\mathfrak{sl}_2$ of order $3$ such that \begin{equation}\label{eq:rho} \varrho(x) = y, \qquad \varrho(y) = z, \qquad \varrho(z) = x. \end{equation} Note that $$ y = \exp(\ad e)(-h), \qquad \qquad z = \exp(\ad f)(-h),$$ where $\ad u (v) = [u,v]$ and $\exp (w) =\sum_{n=0}^\infty w^n/n!$. We will relate the automorphisms $\exp(\ad e)$ and $\exp(\ad f)$ to $\varrho$ in Section 3. In our work we will use the trace form $(u,v) := \hbox{\rm tr}(uv)$ for $u,v \in \mathfrak{sl}_2$. We could use instead the Killing form $\kappa(u,v) := \hbox{\rm tr}(\ad u\, \ad v) = 4(u,v)$, but the trace form has some aesthetic advantages. Relative to the equitable basis, the matrix of the trace form is given by \begin{equation}\label{eq:CM}\mathcal A = \left[\begin{array}{ccc} \ \ 2 & -2 & -2 \\ -2 & \ \ 2 & -2 \\ -2 & -2 &\ \ 2 \end{array} \right]. \end{equation} This is a (generalized) Cartan matrix as defined in (\cite[\S1.1]{K}, \cite[\S 3.4]{MP}); the corresponding Kac-Moody Lie algebra will be related to the equitable basis in Section 7. Let $\{x^*, y^*, z^*\}$ denote the basis for $\mathfrak{sl}_2$ that is dual to the equitable basis in the sense that $(u,v^*) = 2 \delta_{u,v}$ for all $u,v \in \{x,y,z\}$ (the factor of $2$ is inessential but convenient). Then \begin{equation} x+y=-2z^*, \qquad y+z=-2x^*,\qquad z+x=-2y^* \label{eq:1p5} \end{equation} and \begin{equation} \varrho(x^*) = y^*, \qquad \varrho(y^*) = z^*, \qquad \varrho(z^*) = x^*. \end{equation} Relative to the basis $\lbrace x^*,y^*,z^*\rbrace $ the matrix of the trace form is \begin{equation}\label{eq:CMinv} 4\mathcal A^{-1} = \left[\begin{array}{ccc} \ \ 0 & -1 & -1 \\ -1 & \ \ 0 & -1 \\ -1 & -1 &\ \ 0 \end{array} \right]. \end{equation} The equitable basis and its dual are related by the following multiplication tables: \begin{equation}\label{eq:tab1-2} \begin{tabular}[t]{|c||c|c|c|} \hline $[\, ,\,]$ & $x^*$ & $y^*$ & $z^*$ \\ \hline \hline $x^*$ & $0$ & $z$ & $-y$ \\ \hline $y^*$ & $-z$ & $0$ & $x$ \\ \hline $z^*$ & $y$ & $-x$ & $0$ \\ \hline \end{tabular} \qquad \qquad \begin{tabular}[t]{|c||c|c|c|} \hline $[\, ,\,]$ & $x$ & $y$ & $z$ \\ \hline \hline $x$ & $0$ & $-4z^*$ & $4y^*$ \\ \hline $y$ & $4z^*$ & $0$ & $-4x^*$ \\ \hline $z$ & $-4y^*$ & $4x^*$ & $0$ \\ \hline \end{tabular} \end{equation} We also have \begin{equation} \label{tablexxs} \begin{tabular}[t]{|c||c|c|c|} \hline $[\, ,\,]$ & $x$ & $y$ & $z$ \\ \hline \hline $x^*$ & $y-z$ & $y+z$ & $-y-z$ \\ \hline $y^*$ & $-z-x$ & $z-x$ & $z+x$ \\ \hline $z^*$ & $x+y$ & $-x-y$ & $x-y$ \\ \hline \end{tabular} \qquad \quad \end{equation} and \begin{eqnarray} && x-y=2(x^*-y^*), \qquad y-z=2(y^*-z^*), \qquad z-x=2(z^*-x^*), \nonumber \\ && x^*+y^*=z^*-z,\qquad y^*+z^*=x^*-x,\qquad z^*+x^*=y^*-y, \label{eq:3sum} \\ && \qquad \qquad \qquad \qquad x+y+z=-x^*-y^*-z^*.\nonumber \end{eqnarray} By (\ref{eqbas}), each of the matrices $x,y,z$ is semisimple (diagonalizable) with eigenvalues $1$ and $-1$.
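The relations \eqref{eq:mult}, the Gram matrix \eqref{eq:CM}, and the eigenvalue assertion above are all routine $2\times 2$ matrix computations. For the reader who wishes to check them by machine, the following small computer-algebra sketch (an illustration only, not part of any proof) verifies them directly from the matrices in \eqref{eqbas}:
\begin{verbatim}
import sympy as sp

e = sp.Matrix([[0, 1], [0, 0]])
f = sp.Matrix([[0, 0], [1, 0]])
h = sp.Matrix([[1, 0], [0, -1]])

# Equitable basis: x = h, y = 2e - h, z = -2f - h
x, y, z = h, 2*e - h, -2*f - h
br = lambda u, v: u*v - v*u                 # the Lie bracket [u, v]

assert br(x, y) == 2*x + 2*y                # [x,y] = 2x + 2y
assert br(y, z) == 2*y + 2*z                # [y,z] = 2y + 2z
assert br(z, x) == 2*z + 2*x                # [z,x] = 2z + 2x

# Matrix of the trace form (u, v) = tr(uv) relative to {x, y, z}
B = [x, y, z]
A = sp.Matrix(3, 3, lambda i, j: (B[i] * B[j]).trace())
assert A == sp.Matrix([[2, -2, -2], [-2, 2, -2], [-2, -2, 2]])

# Each of x, y, z is semisimple with eigenvalues 1 and -1
assert all(set(m.eigenvals()) == {1, -1} for m in B)
print("all identities verified")
\end{verbatim}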
Since \begin{equation} x^* = h-e+f, \qquad y^* = f, \qquad z^* = -e, \label{eq:shownil} \end{equation} \noindent each of the dual basis elements $u\in \{x^*, y^*, z^*\}$ is nilpotent with $u^2 = 0$ and $(\ad u)^3 = 0$. \section{Connections with the modular group} Let $G$ denote the subgroup of the automorphism group $\hbox{\rm Aut}_{\mathbb F}(\mathfrak {sl}_2)$ generated by $\exp(\ad x^*)$, \,$\exp(\ad y^*)$, \,$\exp(\ad z^*)$. In this section we will prove that $G$ is isomorphic to the modular group $\hbox{\rm PSL}_2(\mathbb Z)$. Recall that $\hbox{\rm PSL}_2(\mathbb Z)$ is obtained from the group $\hbox{\rm SL}_2(\mathbb Z)$ of $2 \times 2$ integral matrices of determinant 1 by factoring out the subgroup consisting of the matrices $\pm I$. It is a free product of a cyclic group of order 2 and a cyclic group of order 3 (see, for example, \cite{A1}). To establish the isomorphism with $G$, we first locate generators for $G$ of order 2 and 3. \begin{definition} \label{def1} Let $\sigma_x$, $\sigma_y$, $\sigma_z$ be the automorphisms of $\mathfrak{sl}_2$ defined by $$ \sigma_x = \exp(\ad x^*), \qquad \sigma_y = \exp(\ad y^*), \qquad \sigma_z = \exp(\ad z^*).$$ \end{definition} Using the table in \eqref{tablexxs} we obtain \begin{lemma}\label{lem0} The matrices of $\sigma_x,\sigma_y,\sigma_z$ relative to the equitable basis are given by \begin{equation*} \sigma_x \rightarrow \left[\begin{array}{ccc} 1 & 0 & \ \ 0 \\ 2 & 2 & -1 \\ 0 & 1 & \ \ 0 \end{array}\right], \qquad \sigma_y \rightarrow \left[\begin{array}{ccc} \ \ 0 & 0 & 1 \\ \ \ 0 & 1 & 0 \\ -1 & 2 & 2 \end{array}\right], \qquad \sigma_z \rightarrow \left[\begin{array}{ccc} 2 & -1 & 2 \\ 1 & \ \ 0 & 0 \\ 0 & \ \ 0 & 1 \end{array}\right]. \end{equation*} \end{lemma} \begin{lemma}\label{lem1} \begin{itemize} \item[(i)] $\varrho \sigma_x \varrho^{-1} = \sigma_y$, \qquad $\varrho \sigma_y \varrho^{-1} = \sigma_z$, \qquad $\varrho \sigma_z \varrho^{-1} = \sigma_x$; \item [(ii)] $\varrho$ is equal to each of the products $\sigma_x \sigma_y$, $\sigma_y \sigma_z$, $\sigma_z \sigma_x$. In particular $\varrho \in G$. \end{itemize} \end{lemma} \noindent {\it Proof:} Part (i) follows from the well-known identity $$\varphi \exp(\ad u) \varphi^{-1} = \exp(\ad \varphi(u))$$ which holds for all $\varphi \in \hbox{\rm Aut}_{\mathbb F}(\mathfrak{sl}_2)$ and all nilpotent $u \in \mathfrak {sl}_2$. To see that $\varrho= \sigma_x\sigma_y$, use \eqref{eq:rho} and Lemma \ref{lem0} to verify that $\varrho$ and $\sigma_x \sigma_y$ agree on the elements of the equitable basis. To obtain the other two expressions for $\varrho$, apply part (i) above. \qed \begin{lemma}\label{lem2} For $\sigma_x$, $\sigma_y$, $\sigma_z$ as in Definition \ref{def1}, we have the following: \begin{itemize} \item[(a)] $\sigma_x = \sigma_y \sigma_z \sigma_y^{-1}$; \item[(b)] $\sigma_x = \sigma_z^{-1} \sigma_y \sigma_z$; \item[(c)] $(\sigma_y \sigma_z)^3 = 1$; \item[(d)] $\sigma_y \sigma_z \sigma_y = \sigma_z \sigma_y \sigma_z$; \item[(e)] $(\sigma_y \sigma_z \sigma_y)^2 = 1$; \item[(f)] $G$ is generated by $\sigma_y$ and $\sigma_z$. \end{itemize} \end{lemma} \noindent {\it Proof:} These properties can be deduced from Lemma \ref{lem1}.
\qed \begin{definition}\label{def2} Let $\tau_x$, $\tau_y$, $\tau_z$ denote the elements of $G$ defined by \begin{eqnarray*} \tau_x &=& \sigma_y \sigma_z \sigma_y = \exp(\ad y^*) \exp(\ad z^*) \exp(\ad y^*) \\ &=& \sigma_z \sigma_y \sigma_z = \exp(\ad z^*) \exp(\ad y^*) \exp (\ad z^*), \\ \tau_y &=& \sigma_z \sigma_x \sigma_z = \exp(\ad z^*) \exp(\ad x^*) \exp(\ad z^*) \\ &=& \sigma_x \sigma_z \sigma_x = \exp(\ad x^*) \exp(\ad z^*) \exp (\ad x^*), \\ \tau_z &=& \sigma_x \sigma_y \sigma_x = \exp(\ad x^*) \exp(\ad y^*) \exp(\ad x^*) \\ &=& \sigma_y \sigma_x \sigma_y = \exp(\ad y^*) \exp(\ad x^*) \exp (\ad y^*). \end{eqnarray*} \end{definition} \begin{lemma}\label{lem3} For $\tau_x$, $\tau_y$, $\tau_z$ as in Definition \ref{def2}, the following relations hold: \begin{itemize} \item[(a)] $\varrho \tau_x \varrho^{-1} = \tau_y, \qquad \varrho \tau_y \varrho^{-1} = \tau_z, \qquad \varrho \tau_z \varrho^{-1} = \tau_x$; \item[(b)] $\tau_x^2 = \tau_y^2 = \tau_z^2 = 1$; \item[(c)] $\sigma_z = \tau_x \varrho^{-1}$ and $\sigma_y = \varrho^{-1} \tau_x$; \item[(d)] $\tau_x(x) = -x$, \ $\tau_x(y) = 2x + z$, \ $\tau_x(z) = 2x+y$. \end{itemize} \end{lemma} \noindent {\it Proof:} Part (a) follows from Lemma \ref{lem1}\,(i), while (b) comes from (a) and Lemma \ref{lem2}\,(e). Concerning (c), the first (resp. second) equation follows from $\varrho=\sigma_y\sigma_z$ and $\tau_x=\sigma_z\sigma_y\sigma_z$ (resp. $\tau_x=\sigma_y\sigma_z\sigma_y$). To get (d), use $\tau_x=\sigma_z \varrho$ together with \eqref{eq:rho} and Lemma \ref{lem0}. \qed Combining Lemma \ref{lem3} with Lemma \ref{lem2}\,(f), we have \begin{corollary}\label{cor1} Each of the following is a generating set for the group $G$. $\hbox{\rm (i)} \ \varrho,\, \tau_x$; \qquad \hbox{\rm (ii)} $\varrho, \, \tau_y$; \qquad \hbox{\rm (iii) }$\varrho, \tau_z$. \end{corollary} For $\theta \in \hbox{\rm SL}_2(\mathbb Z)$, conjugation by $\theta$ determines an automorphism $\widehat \theta$ of $\mathfrak{sl}_2$: $$\widehat \theta: u \mapsto \theta u \theta^{-1}.$$ \noindent The map \begin{equation}\label{eq:thetahat} \begin{array}{ccc} \hbox{\rm SL}_2(\mathbb Z) &\rightarrow& \hbox{\rm Aut}_{\mathbb F}(\mathfrak{sl}_2) \\ \theta &\mapsto& \widehat \theta \end{array} \end{equation} is a group homomorphism with kernel $\{\pm I\}$. This map induces an embedding \begin{equation} \imath: \hbox{\rm PSL}_2(\mathbb Z) \rightarrow \hbox{\rm Aut}_{\mathbb F}(\mathfrak{sl}_2). \end{equation} \begin{theorem} \label{thm1} The image of the embedding $ \imath: \hbox{\rm PSL}_2(\mathbb Z) \rightarrow \hbox{\rm Aut}_{\mathbb F}(\mathfrak{sl}_2) $ coincides with $G$. Therefore $G$ is isomorphic to $\hbox{\rm PSL}_2(\mathbb Z)$. \end{theorem} \noindent {\it Proof:} The matrices \begin{equation} A=\left [\begin{array}{cc} 0& -1 \\ 1 & \ 1 \end{array} \right], \qquad B=\left [\begin{array}{cc} 0 & -1 \\ 1 & \ 0 \end{array} \right], \qquad C=\left [\begin{array}{cc} 1& -1 \\ 1 & \ 0 \end{array} \right] \end{equation} are in $\hbox{\rm SL}_2(\mathbb Z)$ and satisfy $C=BAB^{-1}$. Let $a, b, c$ denote the images of $A,B,C$ respectively under the canonical homomorphism $\hbox{\rm SL}_2(\mathbb Z) \to \hbox{\rm PSL}_2(\mathbb Z)$, and note that $c=bab^{-1}$. By \cite{A2}, the elements $a,b$ generate $\hbox{\rm PSL}_2(\mathbb Z)$, so $b,c$ generate $\hbox{\rm PSL}_2(\mathbb Z)$. One checks that ${\widehat B}=\tau_x$ and ${\widehat C}=\varrho$ so $\imath (b)=\tau_x$ and $\imath (c)=\varrho$. The result then follows in view of Corollary \ref{cor1}\,(i). 
\qed \section{The $G$-orbit of $x$} In this section we describe the orbit $G(x)$ of $x$ under the group $G $ generated by $\exp(\ad x^*), \, \exp(\ad y^*), \, \exp(\ad z^*)$. Since $\varrho$ belongs to $G$ and cyclically permutes the elements of the equitable basis, $G(x)$ coincides with the $G$-orbit of $y$ and the $G$-orbit of $z$. Later in the paper we relate $G(x)$ to the set of real roots for the Kac-Moody Lie algebra associated with the Cartan matrix $\mathcal A$ from \eqref{eq:CM}. We begin by determining the stabilizer of $x$ in $G$. \begin{lemma}\label{lem4} Suppose $g \in G$ and $g(x) = x$. Then $g = 1$. \end{lemma} \noindent {\it Proof:} By Theorem \ref{thm1} and the paragraph preceding it, there exists $\theta \in \hbox{\rm SL}_2(\mathbb Z)$ such that ${\widehat \theta}=g$. Therefore $\theta x \theta^{-1} = g(x)=x$ gives $\theta x = x \theta$, and this along with the fact that $x=\mbox{\rm diag}(1,-1)$ implies $\theta$ is diagonal. The diagonal entries of $\theta$ are integers whose product is 1, so they are both 1 or both $-1$; thus $\theta=\pm I$ so $g=1$. \qed \begin{corollary}\label{cor2} The map \begin{equation*} \begin{array}{ccc} G &\rightarrow& G(x) \\ g& \mapsto& g(x) \end{array} \end{equation*} is a bijection. \end{corollary} We turn our attention now to the lattice \begin{eqnarray}\label{eq:lattice} L: = \mathbb Z x \oplus \mathbb Z y \oplus \mathbb Z z. \end{eqnarray} Our goal is to prove that $$ G(x) = \{u \in L \mid (u,u) = 2\}, $$ \noindent but this necessitates a few comments about $L$. Note that $L$ is closed under the Lie bracket and invariant under the group $G$. Further observe that \begin{equation}\label{eq:mod2} L = \left \{ \left [\begin{array}{cc} p & \ \ q \\ r & -p \end{array} \right] \, \Bigg | \,p,q,r \in \mathbb Z,\quad q,r \; {\rm even} \right \}. \end{equation} \noindent This realization of $L$ shows that it is the Lie algebra analogue of the congruence subgroup of $\hbox{\rm PSL}_2(\mathbb Z)$. Recall that an {\it isometry} of $\mathfrak{sl}_2$ is an $\mathbb F$-linear bijection $\varphi:\mathfrak{sl}_2 \to \mathfrak{sl}_2$ such that $(\varphi(u),\varphi(v))=(u,v)$ for all $u,v\in \mathfrak{sl}_2$. \begin{lemma}\label{eq:ginv} Each automorphism of $\mathfrak{sl}_2$ is an isometry of $\mathfrak{sl}_2$. \end{lemma} \noindent {\it Proof:} Each automorphism of a finite-dimensional Lie algebra is an isometry relative to the Killing form. Since the trace form is a multiple of the Killing form, the result is apparent. \qed The next lemma provides some useful formulae for the square norm $(u,u)$ of an element $u \in \mathfrak{sl}_2$. \begin{lemma} \label{lem:4a} For $u = \alpha x +\beta y+\gamma z \in \mathfrak{sl}_2$, the expression $(u,u)/2$ is equal to each of the following: \begin{eqnarray*} &&\alpha^2 + \beta^2 +\gamma^2 -2 (\alpha\beta + \beta\gamma +\gamma\alpha), \\ &&2\big(\alpha^2 +\beta^2 +\gamma^2\big) - (\alpha+\beta+\gamma)^2, \\ && \big(\alpha +\beta -\gamma\big)^2 - 4\alpha \beta, \\ && \big(\beta +\gamma -\alpha \big)^2 - 4\beta \gamma, \\ && \big(\gamma+\alpha-\beta\big)^2 - 4\gamma \alpha. \end{eqnarray*} \end{lemma} \noindent {\it Proof:} The above five scalars are mutually equal; this can be checked by algebraic manipulation. Observe that $(u,u)=(\alpha,\beta,\gamma){\mathcal A}(\alpha,\beta,\gamma)^t$ where $\mathcal A$ is from \eqref{eq:CM}. Evaluating this triple product by matrix multiplication, we find that $(u,u)/2$ is equal to the first expression above, so the result follows.
\qed \begin{definition} \label{def:r} \rm Define $R = \{u \in L \mid (u,u) = 2\}$. We note that $R$ is $G$-invariant by Lemma \ref{eq:ginv}. \end{definition} \begin{lemma}\label{lem5} For $u=\alpha x +\beta y +\gamma z$ in the set $R$ in Definition \ref{def:r}, the coefficients $\alpha,\beta,\gamma$ are either all nonnegative or all are nonpositive. \end{lemma} \noindent {\it Proof:} We assume the result is false and reach a contradiction. There exists a pair of coefficients having opposite signs; without loss in generality we may assume they are $\alpha$ and $\beta$. Thus $\alpha \beta \leq -1$. Using $(u,u)=2$ and the third expression in Lemma \ref{lem:4a}, we find that $$-4 \geq 4 \alpha \beta = (\alpha + \beta -\gamma)^2 - 1 \geq -1.$$ This is a contradiction, so the result must be true. \qed \begin{definition} \rm For the set $R$ in Definition \ref{def:r} and for $u = \alpha x + \beta y + \gamma z \in R$, we define the {\it height} of $u$ to be the sum $\hbox{\rm ht}(u) = \alpha+\beta+\gamma$. Let $R^+ = \{ u \in R \mid \hbox{\rm ht}(u) > 0\}$ and $R^- = \{ u \in R \mid \hbox{\rm ht}(u) < 0\}$. \end{definition} By Definition \ref{def:r} and Lemma \ref{lem5} we have $R = R^+ \cup R^-,$ $R^- = -R^+$, and $\varrho (R^\pm) = R^\pm$. Next we describe how the automorphisms $\tau_x$, $\tau_y$, $\tau_z$ act on the set $R^+$. \begin{lemma}\label{lem6} For $u \in \lbrace x,y,z\rbrace $, the map $\tau_u$ sends $u$ to $-u$ and permutes the elements of $R^+ \setminus \{u\}$. \end{lemma} \noindent {\it Proof:} There is no loss in generality in assuming $u=x$. Recall that $\tau_x(x) = -x$ by Lemma \ref{lem3}\,(d). Now suppose we are given $v = \alpha x + \beta y + \gamma z \in R^+$ such that $\tau_x(v) \in R^-$. It suffices to argue that $v = x$. Using Lemma \ref{lem3}(d), we have \begin{eqnarray*} \tau_x(v) &=& \alpha(-x) + \beta(2x+z) + \gamma(2x+y) \\ &=& (2\beta+2\gamma-\alpha)x+ \gamma y + \beta z. \end{eqnarray*} Observe $\beta\geq 0$ since $v \in R^+$ and $\beta \leq 0$ since $\tau_x(v) \in R^-$ so $\beta=0$. Similarly $\gamma=0$. Now $\alpha=1$ since $(v,v) = 2$. Therefore $v=x$ and the result follows. \qed \begin{theorem} \label{thm2} $G(x) = R = \{u \in L \mid (u,u) = 2\}$. \end{theorem} \noindent {\it Proof:} The set $R$ contains $x$ and is $G$-invariant so $G(x)\subseteq R$. To show equality holds, we assume there exists $u \in R \setminus G(x)$ and arrive at a contradiction. Without loss we may further assume that $u \in R^+$ and that $u$ has minimal height with this property. Note that $u$ is not one of $x,y,z$ as they are in $G(x)$. Write $u=\alpha x + \beta y + \gamma z$. By Lemma \ref{lem6}, each of $\tau_x(u)$, $\tau_y(u)$, $\tau_z(u)$ belongs to $R^+ \setminus G(x)$. By our minimality assumption all these elements have height at least $\hbox{\rm ht}(u)$. Evaluating these inequalities we determine that \begin{eqnarray*} \alpha \leq \beta +\gamma, \qquad \beta\leq \gamma +\alpha, \qquad \gamma\leq \alpha + \beta. \end{eqnarray*} Since the situation is cyclically symmetric, we may assume that $\alpha \geq \beta$. Using $(u,u)=2$ and the third expression in Lemma \ref{lem:4a} we see that \begin{equation*} (\alpha +\beta-\gamma)^2 = 1+4\alpha \beta > 4 \beta^2, \end{equation*} so that $\alpha + \beta -\gamma > 2 \beta$ and then $\alpha > \beta + \gamma$. This is a contradiction, so it must be that $G(x)=R$. 
\qed \begin{note}\label{cor3} \rm Combining Theorem \ref{thm1}, Corollary \ref{cor2}, and Theorem \ref{thm2} and using Lemma \ref{lem:4a}, we get a bijection between $\hbox{\rm PSL}_2(\mathbb Z)$ and the set of integral solutions $(\alpha,\beta,\gamma)$ to the quadratic equation $$2\big(\alpha^2 +\beta^2 +\gamma^2\big) - (\alpha+\beta+\gamma)^2 = 1.$$ \end{note} We close this section with a result about the coefficients of elements of $G(x)$. \begin{proposition}\label{prop4} For $\alpha x + \beta y + \gamma z \in R$, exactly one of the coefficients $\alpha,\beta,\gamma$ is odd. \end{proposition} \noindent {\it Proof:} Let $S$ denote the set of elements in $G(x)$ with exactly one odd coefficient. Note that $S$ contains $x$. For $u =\alpha x + \beta y + \gamma z \in S$ we have $\tau_x(u) = (2\beta+2\gamma-\alpha)x + \gamma y + \beta z$, and modulo 2, this element has the same coefficients as $u$. Therefore $\tau_x(u) \in S$. Since $\varrho(u) \in S$ also, we see that $S$ is $G$-invariant. Consequently $G(x) \subseteq S$ so $G(x)=S$. \qed \section{Automorphisms, antiautomorphisms, and isometries} In this section we continue our study of the lattice $L$ from \eqref{eq:lattice}. We use the equitable basis to determine the precise relationship between the following four groups: (i) the group $G$ from Section 3, (ii) the group of automorphisms of $\mathfrak{sl}_2$ that preserve $L$, (iii) the group of automorphisms and antiautomorphisms of $\mathfrak{sl}_2$ that preserve $L$, and (iv) the group of isometries of $\mathfrak{sl}_2$ that preserve $L$. By an {\it antiautomorphism} of $\mathfrak{sl}_2$ we mean an $\mathbb F$-linear bijection $\phi: \mathfrak{sl}_2\to \mathfrak{sl}_2$ such that $\phi(\lbrack u,v\rbrack)= \lbrack \phi(v),\phi(u)\rbrack$ for $u,v \in \mathfrak{sl}_2$. Here are some examples. The map $-1:u\to -u$ is an antiautomorphism of $\mathfrak{sl}_2$ (in fact of any Lie algebra). For distinct $u,v \in \lbrace x,y,z\rbrace$, the $\mathbb F$-linear map $(u\,v):\mathfrak{sl}_2 \to \mathfrak{sl}_2$ that interchanges $u,v$ and fixes the remaining element in $\lbrace x,y,z\rbrace $ is an antiautomorphism of $\mathfrak{sl}_2$. Let $\hbox{\rm AAut}_{\mathbb F}(\mathfrak {sl}_2)$ denote the group consisting of the automorphisms and antiautomorphisms of $\mathfrak {sl}_2$. Then $\hbox{\rm Aut}_{\mathbb F}(\mathfrak {sl}_2)$ is a normal subgroup of $\hbox{\rm AAut}_{\mathbb F}(\mathfrak {sl}_2)$ of index 2, and \begin{eqnarray} \hbox{\rm AAut}_{\mathbb F}(\mathfrak{sl}_2) = \{\pm 1\} \ltimes \hbox{\rm Aut}_{\mathbb F}(\mathfrak{sl}_2). \label{eq:aainfo} \end{eqnarray} Define \begin{eqnarray*}\label{eq:autz} \hbox{\rm AAut}_{\mathbb Z}(L) &=& \{ \varphi \in \hbox{\rm AAut}_{\mathbb F}(\mathfrak {sl}_2) \mid \varphi(L) = L\}, \\ \hbox{\rm Aut}_{\mathbb Z}(L) &=& \{ \varphi \in \hbox{\rm Aut}_{\mathbb F}(\mathfrak {sl}_2) \mid \varphi(L) = L\} \end{eqnarray*} and note that \begin{eqnarray} \label{eq:normal} \hbox{\rm AAut}_{\mathbb Z}(L) = \{\pm 1\} \ltimes \hbox{\rm Aut}_{\mathbb Z}(L). \end{eqnarray} We remark that $\hbox{\rm AAut}_{\mathbb Z}(L)$ is the group of automorphisms and antiautomorphisms of $L$, viewed as a Lie algebra over $\mathbb Z$, since every such map over $\mathbb Z$ can be extended linearly to an automorphism or antiautomorphism in $\hbox{\rm AAut}_{\mathbb F}(\mathfrak {sl}_2)$. By construction $G \subseteq \hbox{\rm Aut}_{\mathbb Z}(L)$.
Let $\mbox{\rm Isom}_{\mathbb F}(\mathfrak{sl}_2)$ denote the group of all isometries of $\mathfrak{sl}_2$ and define \begin{eqnarray*} \mbox{\rm Isom}_{\mathbb Z}(L) = \lbrace \varphi \in \mbox{\rm Isom}_{\mathbb F}(\mathfrak{sl}_2) \;|\;\varphi(L)=L \rbrace. \end{eqnarray*} Since $-1$ is an isometry of $\mathfrak{sl}_2$, by Lemma \ref{eq:ginv} we have $\hbox{\rm AAut}_{\mathbb Z}(L)\subseteq \hbox{Isom}_{\mathbb Z}(L)$. So far we know that \begin{eqnarray*} G \subseteq \hbox{\rm Aut}_{\mathbb Z}(L) \subseteq \hbox{\rm AAut}_{\mathbb Z}(L) \subseteq \hbox{\rm Isom}_{\mathbb Z}(L). \label{eq:chain} \end{eqnarray*} Before describing this chain in more detail we compute the stabilizer of $x$ in $\hbox{\rm Isom}_{\mathbb Z}(L)$. In what follows $\langle S \rangle $ means the group generated by the set $S$. \begin{lemma} \label{lem:isostab} The stabilizer of $x$ in $\hbox{\rm Isom}_{\mathbb Z}(L)$ is $\langle (y\,z), -\tau_x\rangle $ where $\tau_x$ is from Definition \ref{def2}. Hence, this stabilizer is isomorphic to ${\mathbb Z}_2 \times {\mathbb Z}_2$. \end{lemma} \noindent {\it Proof:} Concerning the first assertion, one inclusion is clear since each of the maps $(y\,z)$, $-\tau_x$ fixes $x$ and is contained in $\hbox{\rm Isom}_{\mathbb Z}(L)$. To obtain the other inclusion, we pick $\varphi \in \hbox{\rm Isom}_{\mathbb Z}(L)$ such that $\varphi(x)=x$ and show $\varphi \in \langle (y\,z), -\tau_x\rangle $. The subspace $\hbox{\rm Span}_{\mathbb F}\lbrace y^*,z^*\rbrace $ is the orthogonal complement of $x$ relative to the trace form, so this subspace is $\varphi$-invariant. Write $\varphi(y^*)=ay^*+bz^*$ and $\varphi(z^*)=cy^*+dz^*$. Using $\varphi(x)=x$ and $x+z=-2y^*$, $x+y=-2z^*$, we have that $x+\varphi(z)=a(x+z)+b(x+y)$; this shows $a,b \in {\mathbb Z}$ since $\varphi(z) \in L$. Similarly $c,d \in {\mathbb Z}$. By \eqref{eq:CMinv} the matrix representing the trace form relative to $\lbrace y^*, z^*\rbrace $ is $\left [\begin{array}{cc} 0& -1 \\ -1 & \ 0 \end{array} \right]$. Since $\varphi$ is an isometry, \begin{eqnarray*} \left [\begin{array}{cc} a& b \\ c & d \end{array} \right] \left [\begin{array}{cc} 0& -1 \\ -1 & 0 \end{array} \right] \left [\begin{array}{cc} a& c \\ b & d \end{array} \right] = \left [\begin{array}{cc} 0& -1 \\ -1 & 0 \end{array} \right] \end{eqnarray*} and this yields $ab=0$, $cd=0$, $ad+bc=1$ after a brief calculation. By these equations $\left [\begin{array}{cc} a& c \\ b & d \end{array} \right]$ is one of \begin{eqnarray*} \left [\begin{array}{cc} 1& 0 \\ 0 & 1 \end{array} \right], \qquad \left [\begin{array}{cc} 0& 1 \\ 1 & 0 \end{array} \right], \qquad \left [\begin{array}{cc} 0& -1 \\ -1 & 0 \end{array} \right],\qquad \left [\begin{array}{cc} -1& 0 \\ 0& -1 \end{array} \right] \end{eqnarray*} and these solutions correspond to $\varphi=1$, $\varphi=(y\,z)$, $\varphi=-\tau_x$, $\varphi=-(y\,z)\tau_x$ respectively. In any event, $\varphi \in \langle (y\,z), -\tau_x\rangle $ and the first assertion follows. The second assertion is a direct consequence of the first. \qed \begin{theorem} \label{thm:isostab} $\hbox{\rm Isom}_{\mathbb Z}(L)$ is equal to each of the groups \begin{eqnarray*} \langle (x\,y),-1 \rangle \ltimes G, \qquad \langle (y\,z), -1 \rangle \ltimes G, \qquad \langle (z\,x), -1 \rangle \ltimes G. \end{eqnarray*} In particular $\hbox{\rm Isom}_{\mathbb Z}(L)$ is isomorphic to $({\mathbb Z}_2 \times {\mathbb Z}_2) \ltimes G$. 
\end{theorem} \noindent {\it Proof:} We first show that $\hbox{\rm Isom}_{\mathbb Z}(L)=\langle (y\,z), -1 \rangle \ltimes G$. Certainly $-1$ normalizes $G$ since $-1$ commutes with everything in $G$. The element $(y\,z)$ also normalizes $G$ since $(y\,z)\tau_x =\tau_x (y\,z)$, $(y\,z) \varrho = \varrho^2 (y\,z)$, and $\tau_x, \varrho$ together generate $G$. The group $G$ has trivial intersection with $\langle (y\,z), -1 \rangle$ because $G$ has trivial intersection with $\langle (y\,z), -\tau_x \rangle$ by Lemma \ref{lem4} and Lemma \ref{lem:isostab}, and because $\tau_x \in G$. To see that $\hbox{\rm Isom}_{\mathbb Z}(L)$ is generated by $G$, $(y\,z)$, $-1$, choose $\varphi \in \hbox{\rm Isom}_{\mathbb Z}(L)$. Since $\varphi$ is an isometry of $\mathfrak{sl}_2$, the set $R$ from Definition \ref{def:r} is $\varphi$-invariant. Recall that $x \in R$, so $\varphi(x) \in R$. But $R=G(x)$ so there exists $g \in G$ such that $\varphi(x)=g(x)$. Now $g^{-1}\varphi (x)=x$, so $g^{-1}\varphi \in \langle (y\,z), -\tau_x \rangle$ in view of Lemma \ref{lem:isostab}. Thus, $\varphi$ is in the subgroup of $\hbox{\rm Isom}_{\mathbb Z}(L)$ generated by $G$, $(y\,z)$, $-\tau_x$. But $\tau_x \in G$ so $\varphi$ belongs to the subgroup of $\hbox{\rm Isom}_{\mathbb Z}(L)$ generated by $G$, $(y\,z)$, $-1$. Therefore $\hbox{\rm Isom}_{\mathbb Z}(L)$ is generated by $G$, $(y\,z)$, $-1$. By the above comments, $\hbox{\rm Isom}_{\mathbb Z}(L)= \langle (y\,z), -1 \rangle \ltimes G$. The other assertions follow by symmetry or a routine argument. \qed \begin{theorem} \label{thm:equal} $\hbox{\rm AAut}_{\mathbb Z}(L)=\hbox{\rm Isom}_{\mathbb Z}(L)$. \end{theorem} \noindent {\it Proof:} We know already that $\hbox{\rm AAut}_{\mathbb Z}(L)\subseteq \hbox{\rm Isom}_{\mathbb Z}(L)$. By Theorem \ref{thm:isostab} we have $\hbox{\rm Isom}_{\mathbb Z}(L) = \langle (y\,z),-1 \rangle \ltimes G$. But $(y\,z)$, $-1$, $G$ are all contained in $\hbox{\rm AAut}_{\mathbb Z}(L)$, so $ \hbox{\rm Isom}_{\mathbb Z}(L) \subseteq \hbox{\rm AAut}_{\mathbb Z}(L)$ holds as well. \qed \begin{theorem}\label{thm:autL} $\hbox{\rm Aut}_{\mathbb Z}(L)$ is equal to each of the groups \begin{eqnarray*} \langle -(x\,y) \rangle \ltimes G, \qquad \langle -(y\,z) \rangle \ltimes G, \qquad \langle -(z\,x) \rangle \ltimes G. \end{eqnarray*} In particular $\hbox{\rm Aut}_{\mathbb Z}(L)$ is isomorphic to ${\mathbb Z}_2 \ltimes G$. \end{theorem} \noindent {\it Proof:} Combine \eqref{eq:normal}, Theorem \ref{thm:isostab}, and Theorem \ref{thm:equal}. \qed \section {The lattice $L^* = \mathbb Z x^* \oplus \mathbb Z y^* \oplus \mathbb Z z^*$} In previous sections we have discussed the lattice $L = \mathbb Z x \oplus \mathbb Z y \oplus \mathbb Z z$. Here we consider the lattice \begin{eqnarray}\label{eq:ls} L^*: = \mathbb Z x^* \oplus \mathbb Z y^* \oplus \mathbb Z z^* \end{eqnarray} from a similar point of view. By \eqref{eq:shownil}, $L^*$ is equal to $\mathfrak{sl}_2(\mathbb Z)$ and contains $L$. Regarding $L$ and $L^*$ as free abelian groups, we see {f}rom \eqref{eq:mod2} that $L$ has index 4 in $L^*$ and $L^*/L \cong \mathbb Z_2 \times \mathbb Z_2$. The duality between $\{x,y,z\}$ and $\{x^*, y^*, z^*\}$ gives \begin{eqnarray} \label{eq:lsdef} L^* &=& \lbrace u \in \mathfrak{sl}_2 \,\vert \, (u,v) \in 2 \mathbb Z\quad \forall v \in L\rbrace, \\ \label{eq:ldef} L &=& \lbrace u \in \mathfrak{sl}_2 \,\vert \, (u,v) \in 2 \mathbb Z\quad \forall v \in L^*\rbrace. 
\end{eqnarray} Define \begin{eqnarray*} \hbox{\rm Aut}_{\mathbb Z}(L^*) &=& \{ \varphi \in \hbox{\rm Aut}_{\mathbb F}(\mathfrak {sl}_2) \mid \varphi(L^*) = L^*\}, \\ \hbox{\rm AAut}_{\mathbb Z}(L^*) &=& \{ \varphi \in \hbox{\rm AAut}_{\mathbb F}(\mathfrak {sl}_2) \mid \varphi(L^*) = L^*\}, \\ \hbox{\rm Isom}_{\mathbb Z}(L^*) &=& \{ \varphi \in \hbox{\rm Isom}_{\mathbb F}(\mathfrak {sl}_2) \mid \varphi(L^*) = L^*\}. \end{eqnarray*} \begin{theorem} \label{thm:starsame} We have \begin{itemize} \item[(i)] $\hbox{\rm Aut}_{\mathbb Z}(L^*) =\hbox{\rm Aut}_{\mathbb Z}(L)$, \item[(ii)] $ \hbox{\rm AAut}_{\mathbb Z}(L^*) =\hbox{\rm AAut}_{\mathbb Z}(L), $ \item[(iii)] $\hbox{\rm Isom}_{\mathbb Z}(L^*) =\hbox{\rm Isom}_{\mathbb Z}(L)$. \end{itemize} \end{theorem} \noindent {\it Proof:} By (\ref{eq:lsdef}) and (\ref{eq:ldef}), for each isometry $\sigma$ of $\mathfrak{sl}_2$ we have \begin{eqnarray} \sigma(L)=L \quad \Longleftrightarrow \quad \sigma(L^*)=L^*. \label{eq:iff} \end{eqnarray} Part (iii) is immediate from this. Parts (i) and (ii) also follow, since by Lemma \ref{eq:ginv} and (\ref{eq:aainfo}) each of the groups $\hbox{\rm Aut}_{\mathbb F}(\mathfrak {sl}_2) $, $\hbox{\rm AAut}_{\mathbb F}(\mathfrak {sl}_2)$ is contained in $\hbox{\rm Isom}_{\mathbb F}(\mathfrak {sl}_2)$. \qed Since $\varrho \in G$ cyclically permutes $x^*, y^*, z^*$, it follows that $G(x^*) = G(y^*) = G(z^*)$. We will describe the orbit $G(z^*)$ after first determining the stabilizer of $z^*$ in $G$. \begin{theorem} \label{thm:z*stab} The stabilizer of $z^*$ in $G$ is the subgroup generated by $\exp(\ad z^*)$. \end{theorem} \noindent {\it Proof:} The subgroup of $G$ generated by $\exp(\ad z^*)$ is clearly contained in the stabilizer of $z^*$. For the reverse containment, we observe by \eqref{eq:thetahat} that to any $g \in G$ there corresponds $\theta \in \hbox{\rm SL}_2(\mathbb Z)$ such that ${\widehat \theta}=g$. Writing $\theta= \left [\begin{array}{cc} m& p \\ n & q \end{array} \right] $ and using $z^* = \left [\begin{array}{cc} 0& -1 \\ 0 & \ \ 0 \end{array} \right] $, we see that \begin{equation}\label{eq:gz*} g(z^*) = \theta z^* \theta^{-1} = \left [\begin{array}{cc} mn& -m^2 \\ n^2 & -mn \end{array} \right]. \end{equation} When $g(z^*) = z^*$, we must have $n=0$, which implies that $mq = \det(\theta) = 1$. Hence $m=q \in \lbrace 1,-1\rbrace$. Then either $\theta$ or $-\theta$ is an integral power of $\exp(z^*)$, putting $g$ in the subgroup generated by $\exp(\ad z^*)$. \qed \begin{theorem}\label{thm:z*orbstab} \begin{itemize} \item[(i)] The orbit $G(z^*)$ consists of $y^*, z^*$ together with all matrices of the form $ \left[\begin{array}{cc} mn & -m^2 \\ n^2 & -mn \end{array} \right]$ where $m,n \in {\mathbb Z}$ are nonzero and relatively prime. \item[(ii)] $G(z^*)$ consists of $x^*, y^*, z^*$ together with all vectors of the form \begin{equation*}- \frac{1}{2}(a^2 x + b^2y + c^2 z),\end{equation*} where $a,b,c$ are relatively prime positive integers such that one of the relations $a = b+c$, $b = c+a$, $c = a+b$ holds. \end{itemize} \end{theorem} \noindent {\it Proof:} For statement (i), we assume $g \in G$ and $g(z^*)$ is as in \eqref{eq:gz*} above. Since $\det(\theta) = mq-np=1$, either $m,n$ are nonzero and relatively prime, or one of integers $m,n$ is zero. If $m=0$ then $n=-p \in \lbrace 1,-1\rbrace$ and $g(z^*)=y^*$. We have seen in the proof of Theorem \ref{thm:z*stab} that when $n = 0$ then $g(z^*) = z^*$. Hence (i) holds. Part (ii) expresses a matrix from part (i) in terms of the equitable basis. 
This conversion to a combination of $x,y,z$ just amounts to the identity \begin{equation} \label{eq:orbelt} \left[\begin{array}{cc} mn & -m^2 \\ n^2 & -mn \end{array} \right] = -\frac{1}{2} \Big ( (m-n)^2 x + m^2 y + n^2 z\Big). \end{equation} \qed \begin{theorem}\label{thm:z*autoorb} The sets $G(z^*)$ and $-G(z^*)$ are disjoint and the following sets coincide: \begin{itemize} \item[{\rm (i)}] $G(z^*) \cup \big (- G(z^*)\big)$ \item[{\rm (ii)}] the orbit of $z^*$ under $\hbox{\rm Aut}_{\mathbb Z}(L^*)$ \item[{\rm (iii)} ] the orbit of $z^*$ under $\hbox{\rm AAut}_{\mathbb Z}(L^*) = \hbox{\rm Isom}_{\mathbb Z}(L^*)$. \end{itemize} \end{theorem} \noindent {\it Proof:} That $G(z^*)$ and $-G(z^*)$ are disjoint can be seen from Theorem \ref{thm:z*orbstab}. The fact that the sets in (i)-(iii) are all equal is a direct consequence of Theorems \ref{thm:isostab}, \ref{thm:equal}, \ref{thm:autL}, and \ref{thm:starsame}. \qed \section{Connections with a hyperbolic Kac-Moody Lie algebra} In this section we make explicit the relationship between the equitable basis for $\mathfrak{sl}_2$ and the Kac-Moody Lie algebra $\mathfrak{g}=\mathfrak{g}({\mathcal A})$ over $\mathbb F$ associated with the Cartan matrix $\mathcal A$ from \eqref{eq:CM}. (All the terminology and necessary background material used here can be found in \cite[Ch.~1]{K}.) The Coxeter-Dynkin diagram corresponding to $\mathcal A$ is \begin{center}\begin{pspicture}(-2,-1)(1,1) \psset{xunit=.15cm,yunit=.15cm} {\psline{-}(-5,-3.2)(.2,6.3)} {\psline{-}(-5.8,-3.1)(-.5,6.5)} {\psline{-}(5,-3.2)(-.2,6.3)} {\psline{-}(5.8,-3.1)(.5,6.5)} {\psline{-}(-5.4,-3)(5.4,-3)} {\psline{-}(-5.8,-3.6)(5.8,-3.6)} \rput(-5.4,-3.2){{$\bullet$}} \rput(5.4,-3.2){{$\bullet$}} \rput(0,6.3){{$\bullet$}} \end{pspicture} \end{center} Since each subdiagram is the diagram of a Cartan matrix of finite type $\hbox{\rm A}_1$ or of affine type $\hbox{\rm A}_1^{(1)}$, the matrix $\mathcal A$ is of hyperbolic type (see \cite[\S 4.10]{K}). We adopt the point of view that $x,y,z$ are the simple roots for $\mathfrak{g}$ and that there is a symmetric bilinear form $(\, ,\,)$ on $\hbox{\rm Span}_{\mathbb F}\lbrace x,y,z\rbrace$ whose values on these simple roots are specified by $\mathcal A$. Since $\mathcal A$ is nonsingular, the simple roots are linearly independent and the bilinear form is nondegenerate. The root lattice may be identified with $L= \mathbb Z x \oplus \mathbb Z y \oplus \mathbb Z z$ in our earlier notation. Note that $L$ comes equipped with the Lie product $\lbrack \ ,\ \rbrack$. Note also that the group ${\rm Isom}_{\mathbb Z}(L)$ makes sense in the present context. Three elements belonging to this group are the simple reflections $r_x$, $r_y$, $r_z$. For $u \in \lbrace x,y,z\rbrace$ we have $(u,u)=2$, so the reflection $r_u$ is given by \begin{equation} r_u(v) = v - \frac{2(u,v)}{(u,u)} u = v - (u,v) u \end{equation} for all $v \in L$. For example, \begin{eqnarray*} r_x(x) = -x, \qquad \qquad r_x(y) = y + 2x, \qquad \qquad r_x(z) = z + 2x. \end{eqnarray*} Comparing this with Lemma \ref{lem3}\,(d) and using symmetry, we have \begin{equation}\label{eq:reflects} r_x = (y\,z)\tau_x, \qquad \qquad r_y = (z\,x)\tau_y, \qquad \qquad r_z = (x\,y)\tau_z. \end{equation} The subgroup $W$ of ${\rm Isom}_{\mathbb Z}(L)$ generated by the reflections $r_x$, $r_y$, $r_z$ is the {\it Weyl group}. There is another subgroup of ${\rm Isom}_{\mathbb Z}(L)$ that comes up naturally here. 
Observe that $(x\,y)$, $(y\,z)$, $(z\,x)$ and $1$, $\varrho$, $\varrho^2$ together form a subgroup of ${\rm Isom}_{\mathbb Z}(L)$ that is isomorphic to the symmetric group $S_3$; we identify this group with $S_3$ for the rest of the paper. In what follows, we adopt $\pm S_3$ as a shorthand for $\langle \pm 1\rangle \times S_3$. We note that $S_3$ is the group of diagram automorphisms associated with $\mathcal A$ in the sense of \cite[p.~68]{K}. The following theorem is implied by \cite[Cor.~5.10\,(b)]{K}. \begin{theorem} \label{thm:kacc} For the Cartan matrix $\mathcal A$ in \eqref{eq:CM}, \begin{eqnarray*} {\rm Isom}_{\mathbb Z}(L)= \pm S_3 \ltimes W. \end{eqnarray*} \end{theorem} We now describe how $W$ is related to $G$. Instead of doing this directly, we will relate them both to a certain normal subgroup of $W$ denoted $W^+$. Recall that for $w \in W$ the {\it length} of $w$ is the number of factors $r_x, r_y, r_z$ in a reduced expression for $w$. Let $W^+$ denote the subgroup of $W$ consisting of the elements of even length. Then $W^+$ is a normal subgroup of $W$ with index 2. \begin{proposition}\label{prop:w+} $W^+=W \cap {\rm Aut}_{\mathbb Z}(L)$. Moreover $W^+ $ is a normal subgroup of ${\rm Isom}_{\mathbb Z}(L)$ with index 24. \end{proposition} \noindent {\it Proof:} To get the first assertion, note that each of $r_x, r_y, r_z$ is an antiautomorphism of $\mathfrak{sl}_2$ preserving $L$. The second assertion follows from the first, using Theorem \ref{thm:kacc} and the fact that ${\rm Aut}_{\mathbb Z}(L)$ is normal in ${\rm Isom}_{\mathbb Z}(L)$ with index 2. \qed The following result is immediate from the definition of $W^+$. \begin{proposition} $W $ is equal to each of the groups \begin{eqnarray*} \langle r_x \rangle \ltimes W^+, \qquad \langle r_y \rangle \ltimes W^+, \qquad \langle r_z \rangle \ltimes W^+. \end{eqnarray*} In particular $W$ is isomorphic to ${\mathbb Z}_2 \ltimes W^+$. \end{proposition} \begin{proposition} $W^+$ is a normal subgroup of $G$, and the cosets of $W^+$ in $G$ are \begin{eqnarray} \label{eq:cosets} W^+, \quad \tau_x W^+, \quad \tau_y W^+, \quad \tau_z W^+, \quad \varrho W^+, \quad \varrho^2 W^+. \end{eqnarray} The quotient group $G/W^+$ is isomorphic to $S_3$. \end{proposition} \noindent {\it Proof:} The group $W^+$ is contained in $G$ since \begin{equation}\label{eq:even} r_xr_y = \varrho \tau_z \tau_y, \qquad \quad r_y r_z = \varrho \tau_x \tau_z, \qquad \quad r_z r_x = \varrho \tau_y \tau_x. \end{equation} Moreover, $W^+$ is normal in $G$ because $W^+$ is normal in ${\rm Isom}_{\mathbb Z}(L)$. The index of $W^+$ in $G$ is 6, since the index of $W^+$ in ${\rm Isom}_{\mathbb Z}(L)$ is 24 and the index of $G$ in ${\rm Isom}_{\mathbb Z}(L)$ is 4. Using (\ref{eq:reflects}) and (\ref{eq:even}), one can readily verify that the list (\ref{eq:cosets}) consists of the cosets of $W^+$ in $G$, and that $G/W^+ \simeq S_3$. \qed \begin{proposition}\label{prop:isoWeyl} $\hbox{\rm Isom}_{\mathbb Z}(L)/W^+ \cong \langle \pm 1 \rangle \times D$, where $D$ is the dihedral group of order 12. \end{proposition} \noindent {\it Proof:} Let $E = \hbox{\rm Isom}_{\mathbb Z}(L)/W^+$. To argue that $E \cong \langle \pm 1 \rangle \times D$, we will produce elements $\eta, \vartheta \in E$ such that $\eta$ has order 2, $\vartheta$ has order 6, and \begin{equation}\label{eq:1} \eta \vartheta \eta = \vartheta^{-1}.\end{equation} \noindent Set $\eta = (y \, z)W^+$ and $\vartheta = (y\,z)\tau_yW^+ = \tau_z(y \, z)W^+$.
Note that $\eta$ has order 2, and since $$(y \, z)\Big( (y \, z) \tau_y \Big)(y \, z) = \tau_y (y\, z) = \Big((y \, z) \tau_y \Big)^{-1},$$ \noindent equation \eqref{eq:1} holds. Moreover, \begin{eqnarray*} \Big ((y \, z) \tau_y\Big)^2 &=& (y \, z) \tau_y (y \, z) \tau_y \\ &=& \tau_z \tau_y \\ &=&(x \, y) r_z (z\,x) r_y \\ &=&(x \, y)(z\,x) r_x r_y \\ &=& \varrho^2 r_x r_y \\ & \equiv & \varrho^2 \quad \mod W^+ \end{eqnarray*} \noindent so that $\vartheta^6 = (\varrho^2W^+)^3 = W^+$. Thus, $\vartheta$ has order 6. Together $\eta, \vartheta$ generate a subgroup of $E$ isomorphic to $D$. Since $\langle \pm 1\,W^+ \rangle$ is a central subgroup of $E$ and $|\,E\,| = 24$ by Proposition \ref{prop:w+}, we have that $E \cong \langle \pm 1 \rangle \times D$, as claimed. \qed Let $\Delta$ denote the set of roots attached to the Cartan matrix $\mathcal A$. Then $\Delta = \Delta_+ \cup \Delta_-$ where $\Delta_+$ (resp. $\Delta_-$) is the set of roots that are nonnegative (resp. nonpositive) integral linear combinations of the simple roots $x,y,z$. We have $\Delta_-=-\Delta_+$. A root is {\it real} if it lies in the $W$-orbit of a simple root; otherwise it is {\it imaginary}. Thus $\Delta$ decomposes into the disjoint union of the sets of real and imaginary roots: $\Delta = \Delta^{\hbox{\rm re}} \cup \Delta^{\hbox{\rm im}}$. The following theorem is implied by \cite[Prop. 5.10\,(a)]{K}. \begin{theorem}\label{thm3} For the Cartan matrix $\mathcal A$ in \eqref{eq:CM} the set of real roots $\Delta^{\hbox{\rm re}}$ coincides with the set $R$ from Definition \ref{def:r}. \end{theorem} With the above theorem in mind, we have the following result which gives an interpretation of Proposition \ref{prop4}. \begin{proposition}\label{prop5} For the Cartan matrix $\mathcal A$ from \eqref{eq:CM}, the corresponding Weyl group $W$ and simple roots satisfy \begin{itemize} \item[(i)] $W(x) = \{ \alpha x + \beta y + \gamma z \in \Delta^{\hbox{\rm re}} \mid \alpha \equiv 1, \ \beta \equiv 0, \ \gamma \equiv 0 \mod 2\}$; \item[(ii)] $W(y) = \{ \alpha x + \beta y + \gamma z \in \Delta^{\hbox{\rm re}} \mid \beta \equiv 1, \ \alpha\equiv 0, \ \gamma \equiv 0 \mod 2\}$; \item[(iii)] $W(z) = \{ \alpha x + \beta y + \gamma z \in \Delta^{\hbox{\rm re}} \mid \gamma \equiv 1, \ \alpha\equiv 0, \ \beta \equiv 0 \mod 2\}$. \end{itemize} \end{proposition} \noindent {\it Proof:} Each element in $W(x)$ is obtained from $x$ by applying a product of reflections in the simple roots. The root $x$ belongs to the set on the right-hand side of (i), and whenever $u = \alpha x + \beta y + \gamma z$ belongs to that set, then so do \begin{eqnarray*} r_x(u) &=& (2\beta + 2 \gamma -\alpha)x + \beta y + \gamma z, \\ r_y(u) &=& \alpha x + (2 \alpha + 2 \gamma - \beta)y + \gamma z, \\ r_z(u) &=& \alpha x + \beta y + (2 \alpha +2 \beta - \gamma)z. \end{eqnarray*} Consequently $W(x)$ is contained in the right side of (i). Similar results apply in parts (ii) and (iii). Thus $\Delta^{\hbox{\rm re}}= W(x) \cup W(y) \cup W(z)$ is contained in the (disjoint) union of the three sets on the right, which forces equality to hold in each case. \qed Consider the set of imaginary roots $\Delta^{\hbox{\rm im}}$ associated with the Cartan matrix $\mathcal A$ from \eqref{eq:CM}. It follows from \cite{M} or \cite[Prop.~5.2]{K} that \begin{equation} \Delta^{\hbox{\rm im}} = \{ u \in \Delta \mid (u,u) \leq 0\}. 
\end{equation} An important special case is the set of isotropic roots \begin{equation} \Delta^0 = \{u \in \Delta \mid (u,u) = 0\}.\end{equation} This set will be our focus for the rest of this section. Each of the following four propositions contains a characterization of $\Delta^0$. \begin{proposition} \label{prop:iso1pre} {\rm \cite[Prop. 5.10\,(c)]{K}} The set of isotropic roots corresponding to the Cartan matrix $\mathcal A$ from \eqref{eq:CM} is given by \begin{eqnarray*} \Delta^0 = \lbrace u \in L\setminus \lbrace 0 \rbrace \;|\; (u,u)=0\rbrace. \end{eqnarray*} \end{proposition} The following result is implied by \cite[Prop.~5.7]{K}. \begin{proposition}\label{prop:iso1} For the Cartan matrix $\mathcal A$ in \eqref{eq:CM}, a vector is an isotropic root if and only if it is $W$-equivalent to a nonzero integer multiple of at least one of \begin{eqnarray*} 2x^* = -(y+z), \qquad 2y^* = -(z+x), \qquad 2z^* = -(x+y). \end{eqnarray*} \end{proposition} \begin{proposition}\label{prop:iso2} For the Cartan matrix $\mathcal A$ in \eqref{eq:CM}, a vector is an isotropic root if and only if it is $\hbox{\rm Isom}_{\mathbb Z}(L)$-equivalent to a positive even integer multiple of $z^*$ if and only if it is $G$-equivalent to a nonzero even integer multiple of $z^*$. Thus, the set of isotropic roots is given by $$\Delta^0 = \bigcup_{n \in \mathbb Z, n \neq 0} 2nG (z^*).$$ \end{proposition} \noindent {\it Proof:} This follows from Proposition \ref{prop:iso1}, from the fact that $\hbox{\rm Isom}_{\mathbb Z}(L) = \pm S_3 \ltimes W = \langle (y\,z), -1 \rangle \ltimes G$, and from the fact that $\Delta^0$ is $G$-invariant by Proposition \ref{prop:iso1pre}. \qed \begin{proposition}\label{prop:iso3} The isotropic roots corresponding to the Cartan matrix $\mathcal A$ in \eqref{eq:CM} are precisely the vectors of the form $n( a^2 x + b^2 y + c^2 z)$ for some $n \in \mathbb Z \setminus \{0\}$ and $a,b,c \in \mathbb Z_{\geq 0}$ (not all 0) such that at least one of the relations \begin{eqnarray} \label{eq:3pos} a=b+c, \qquad \qquad b=c+a,\qquad \qquad c=a+b \end{eqnarray} holds. \end{proposition} \noindent {\it Proof:} Combine Theorem \ref{thm:z*orbstab}\,(ii) and Proposition \ref{prop:iso2}. \qed Suppose now that $e_i,f_i,h_i \ (1 \leq i \leq 3)$ are the Chevalley generators for the Kac-Moody Lie algebra $\mathfrak g$ over $\mathbb F$ corresponding to the Cartan matrix $\mathcal A$. They satisfy the Serre relations (see \cite[(0.3.1)]{K}). The Lie algebra has a decomposition $\g = \mathfrak h \oplus \bigoplus_{u \in \Delta} \mathfrak g_u$, where $\mathfrak h$ is the span of the $h_i$. For $u = \alpha x + \beta y + \gamma z \in \Delta_+$, $\mathfrak g_u$ is the subspace of $\mathfrak g$ spanned by all iterated brackets of the $e_i$ in which $e_1$ appears $\alpha$ times, $e_2$ appears $\beta$ times, and $e_3$ appears $\gamma$ times. A similar statement applies for $u \in \Delta_-$ with $f_i$'s replacing the $e_i$'s. Each subspace $\mathfrak g_u$ is finite-dimensional, and its dimension is said to be the {\it multiplicity} of the root $u$. There is an automorphism of $\mathfrak g$ interchanging $e_i$ and $f_i$ and sending $h_i$ to $-h_i$. Thus, the multiplicities of $u$ and $-u$ are the same. Since \begin{eqnarray*} \dim \mathfrak g_{\sigma u} = \dim \mathfrak g_{u} \end{eqnarray*} \noindent holds for all roots $u$ and all $\sigma \in W$, it follows that the multiplicity of each real root is 1. For the isotropic roots, we conclude the following.
\begin{corollary}\label{cor:isomult} For the Cartan matrix $\mathcal A$ in \eqref{eq:CM}, each isotropic root has multiplicity 1. \end{corollary} \noindent {\it Proof:} Let $u \in \Delta^0$. By Proposition \ref{prop:iso1}, we may assume that $u$ is a nonzero integer multiple of $2x^*$, $2y^*$, or $2z^*$. Since the situation is cyclically symmetric, and since $u$ and $-u$ have the same multiplicity, there is no loss in generality in assuming $u = n(x+y)$ for $n \in \mathbb Z_{>0}$. Thus, elements in $\mathfrak g_u$ are commutators with $n$ factors equal to $e_1$ and $n$ factors equal to $e_2$. Since no $e_3,f_3,h_3$ are involved and since the relations governing $e_i,f_i,h_i$ for $i=1,2$ are the same as for the affine Lie algebra $\mathfrak g(\hbox{\rm A}_1^{(1)})$ corresponding to the matrix $\left[\begin{array}{cc} \ \ 2 & -2 \\ -2 & \ \ 2 \end{array}\right ],$ the multiplicity of $u$ is the same as the multiplicity of $n(x+y)$ in $\mathfrak g(\hbox{\rm A}_1^{(1)})$, which is known to be 1 (see \cite[Cor.~7.4]{K}, for example). \qed \section{$\mathfrak{sl}_2$-modules and the equitable basis} Let $\mathcal V$ denote a finite-dimensional $\mathfrak{sl}_2$-module. Let $\varphi: \mathfrak{sl}_2 \rightarrow \mathfrak{gl}(\mathcal V)$ be the representation afforded by $\mathcal V$, so that $\varphi(u)(v) = u.v$ for all $u \in \mathfrak{sl}_2$ and $v \in \mathcal V$. For all nilpotent $u \in \mathfrak{sl}_2$, the map $\varphi (u) :{\mathcal V} \to {\mathcal V}$ is nilpotent; therefore the map $\exp\big(\varphi (u)\big) = \sum_{n=0}^\infty \varphi (u)^n/n!$ is well-defined. We note that $\exp\big(\varphi (u)\big)$ is invertible with inverse $\exp\big(-\varphi (u)\big)$, and that \begin{equation}\label{eq:cong} \exp\big(\varphi (u)\big) \varphi (t) \exp\big(\varphi (u)\big)^{-1} = \exp\big(\ad \varphi (u)\big)\big(\varphi(t)\big) \end{equation} for all $t \in \mathfrak{sl}_2$. Using the dual basis $\{x^*,y^*,z^*\}$ for $\mathfrak{sl}_2$, we define the maps \begin{eqnarray*} \label{eq:PT} P &=& \exp\big(\varphi (x^*)\big)\exp\big(\varphi (y^*)\big),\nonumber \\ T_x &=& \exp\big(\varphi (y^*)\big)\exp\big(\varphi (z^*)\big) \exp\big(\varphi (y^*)\big), \nonumber \\ T_y &=& \exp\big(\varphi (z^*)\big)\exp\big(\varphi (x^*)\big) \exp\big(\varphi (z^*)\big),\nonumber \\ T_z&=& \exp\big(\varphi (x^*)\big)\exp\big(\varphi (y^*)\big) \exp\big(\varphi (x^*)\big). \nonumber \end{eqnarray*} If $\varphi$ is the adjoint representation, then these are just the maps $\varrho$, $\tau_x$, $\tau_y$, $\tau_z$ from Section 3. The next results say that analogues of the relations in Lemmas \ref{lem1} and \ref{lem2} hold for arbitrary representations $\varphi$. \begin{lemma} \label{lem7} Let $\varphi: \mathfrak{sl}_2 \rightarrow \mathfrak{gl}(\mathcal V)$ be a finite-dimensional $\mathfrak{sl}_2 $-representation. Then the corresponding maps $P, T_x,T_y, T_z$ satisfy {\rm (i)}--{\rm (v)} below. 
\begin{itemize} \item [(i)] $P\varphi(x)P^{-1} = \varphi(y), \ \ P\varphi(y) P^{-1} = \varphi(z), \ \ P\varphi(z)P^{-1} =\varphi(x)$; \item [(ii)] $P\varphi(x^*)P^{-1} = \varphi(y^*), \ \ P\varphi(y^*)P^{-1} = \varphi(z^*), \ \ P\varphi(z^*)P^{-1} =\varphi(x^*)$; \item [(iii)] $P \exp\left(\varphi(x^*)\right) P^{-1} = \exp\left(\varphi(y^*)\right), \ \ P \exp\left(\varphi(y^*)\right) P^{-1} = \exp\left(\varphi(z^*)\right), \\ P \exp\left(\varphi(z^*)\right) P^{-1} = \exp\left(\varphi(x^*)\right)$; \item [(iv)] $PT_xP^{-1} =T_y, \ \ PT_yP^{-1} = T_z, \ \ PT_zP^{-1} =T_x$; \item [(v)] $T_x \varphi(x) T^{-1}_x = -\varphi(x)$, \ \ $T_x \varphi(y)T^{-1}_x = 2 \varphi(x)+\varphi(z)$, \ \ $T_x \varphi(z)T^{-1}_x = 2 \varphi(x)+\varphi(y)$. \end{itemize} \end{lemma} \noindent {\it Proof:} Using \eqref{eq:cong}, we have \begin{eqnarray*} P \varphi (x)P^{-1} &=& \exp\big(\ad \varphi(x^*)\big)\exp\big(\ad \varphi(y^*)\big)\big(\varphi(x)\big) \\ &=& \varphi \big(\exp (\ad x^*)\exp (\ad y^*)(x)\big). \end{eqnarray*} Recall $\exp (\ad x^*)\exp (\ad y^*) = \varrho$ by Lemma \ref{lem1}(ii) and $\varrho(x)=y$ by \eqref{eq:rho} so $P\varphi(x)P^{-1} = \varphi(y)$. The remaining assertions can be deduced from similar arguments. \qed \begin{lemma} \label{cor5} Let $\varphi: \mathfrak{sl}_2 \rightarrow \mathfrak{gl}(\mathcal V)$ be a finite-dimensional $\mathfrak{sl}_2 $-representation. Then the corresponding maps $P, T_x,T_y, T_z$ satisfy {\rm (i)}--{\rm (vi)} below. \begin{itemize} \item[(i)] $P = \exp\big(\varphi(x^*)\big)\exp\big(\varphi(y^*)\big) = \exp\big(\varphi(y^*)\big)\exp\big(\varphi(z^*)\big)$ $= \exp\big(\varphi(z^*)\big)\exp\big(\varphi(x^*)\big)$; \item[(ii)] $T_x = \exp\big(\varphi(y^*)\big)\exp\big(\varphi(z^*)\big)\exp\big(\varphi(y^*)\big)$ $\ = \exp\big(\varphi(z^*)\big)\exp\big(\varphi(y^*)\big) \exp\big(\varphi(z^*)\big)$; \item[(iii)] $T_y = \exp\big(\varphi(z^*)\big) \exp\big(\varphi(x^*)\big) \exp\big(\varphi(z^*)\big)$ $\ = \exp\big(\varphi(x^*)\big) \exp\big(\varphi(z^*)\big) \exp\big(\varphi(x^*)\big)$; \item[(iv)] $T_z = \exp\big(\varphi(x^*)\big) \exp\big(\varphi(y^*)\big) \exp\big(\varphi(x^*)\big)$ $\ = \exp\big(\varphi(y^*)\big) \exp\big(\varphi(x^*)\big) \exp\big(\varphi(y^*)\big)$; \item[(v)] $P^3 = T_x^2 = T_y^2 = T_z^2$; \item[(vi)] Let $C$ denote the map in {\rm (v)}. Then $C$ commutes with $\varphi(u)$ for all $u \in \mathfrak {sl}_2$. \end{itemize} \end{lemma} \noindent {\it Proof:} Parts (i)--(iv) follow easily from Lemma \ref{lem7}\,(iii), so consider part (v). To prove that $P^3=T_x^2$, in the left-hand side of $T_x T_x=T_x^2$ evaluate the first factor (resp. second factor) using the first (resp. second) equation in (ii) above, and simplify the result using $P=\exp\big(\varphi(y^*)\big)\exp\big(\varphi(z^*)\big)$. The equations $P^3=T_y^2$ and $P^3=T_z^2$ are obtained similarly. Concerning (vi), note by Lemma \ref{lem7}(i) that $P^3$ commutes with each of $\varphi(x)$, $\varphi(y)$, $\varphi(z)$ and recall $x,y,z$ is a basis for $\mathfrak{sl}_2$. \qed \begin{remark} \rm Let $\hbox{\rm B}_3$ be Artin's braid group given by generators $s_1, s_2$ and the relation\, $s_1 s_2 s_1 = s_2 s_1 s_2$. 
It can be seen from Lemma \ref{cor5} that each finite-dimensional $\mathfrak{sl}_2$-representation $\varphi: \mathfrak{sl}_2 \rightarrow \mathfrak{gl}(\mathcal V)$ determines a representation of $\hbox{\rm B}_3$ given by $$s_1 \mapsto \exp\big(\varphi(x^*)\big), \qquad \qquad s_2 \mapsto \exp\big(\varphi(y^*)\big).$$ (Of course we could replace the pair $x,y$ by $y,z$ or $z,x$ in the above line.) The center of $\hbox{\rm B}_3$ is generated by $(s_1 s_2)^3$ and this maps to $P^3 = T_x^2 = T_y^2 = T_z^2$. In Corollary \ref{cor:pttt} below, we will show that $P^3$ must act as a scalar multiple of the identity when $\mathcal V$ is an irreducible $\mathfrak {sl}_2$-module, and we will determine the exact value of that scalar. More general information on irreducible $\hbox{\rm B}_3$-modules, particularly those of dimension $\leq 5$, can be found in \cite{TW}. \end{remark} For each integer $d\geq 0$, there exists a unique irreducible $\mathfrak{sl}_2$-module of dimension $d+1$ up to isomorphism. This module, which we denote ${\mathcal V}(d)$, has a basis $\lbrace v_i\rbrace_{i=0}^d$ such that \begin{eqnarray*} h.v_i &=& (d-2i) v_i, \\ f.v_i &=& (i+1) v_{i+1}, \\ e.v_i &=& (d-i+1)v_{i-1} \end{eqnarray*} for $0 \leq i \leq d$, where $v_{-1}=0$ and $v_{d+1}=0$. We call $\lbrace v_i\rbrace_{i=0}^d$ a {\it standard basis} of $\mathcal V(d)$. By \eqref{eqbas} we have \begin{eqnarray*}\label{eq:eqact} x.v_i &=& (d-2i) v_i, \\ y.v_i &=& 2(d-i+1) v_{i-1} + (2i-d)v_i, \nonumber \\ z.v_i &=& (2i-d)v_i -2(i+1)v_{i+1} \nonumber \end{eqnarray*} for $0 \leq i \leq d$, and by \eqref{eq:shownil} we have \begin{eqnarray*}\label{eq:staract} x^*.v_i &=& (i-d-1)v_{i-1}+(d-2i) v_i + (i+1)v_{i+1}, \\ y^*.v_i &=& (i+1) v_{i+1}, \nonumber \\ z^*.v_i &=& (i-d-1)v_{i-1} \nonumber \end{eqnarray*} for $0 \leq i \leq d$. Let $\varphi_d$ denote the $\mathfrak{sl}_2$ representation afforded by ${\mathcal V}(d)$. \begin{lemma} \label{lem:expfacts} With respect to a standard basis for ${\mathcal V}(d)$, \begin{itemize} \item[(i)] the matrix representing $\exp\big(\varphi_d(y^*)\big)$ is lower triangular with $(i,j)$ entry ${i \choose j}$ for $0 \leq j\leq i\leq d$; \item[(ii)] the matrix representing $\exp\big(-\varphi_d(y^*)\big)$ is lower triangular with $(i,j)$ entry $(-1)^{i-j}{i \choose j}$ for $0 \leq j\leq i\leq d$; \item[(iii)] the matrix representing $\exp\big(\varphi_d(z^*)\big)$ is upper triangular with $(i,j)$ entry $(-1)^{j-i}{d-i \choose j-i}$ for $0 \leq i\leq j\leq d$; \item[(iv)] the matrix representing $\exp\big(-\varphi_d(z^*)\big)$ is upper triangular with $(i,j)$ entry ${d-i \choose j-i}$ for $0 \leq i\leq j\leq d$. \end{itemize} \end{lemma} \noindent {\it Proof:} This is a routine calculation using the definition of the exponential and the actions of $y^*$, $z^*$ on the standard basis. \qed \begin{lemma} \label{lem:txaction} For a standard basis $\lbrace v_i\rbrace_{i=0}^d$ of ${\mathcal V}(d)$, \begin{eqnarray} T_x v_i = (-1)^i v_{d-i} \qquad \qquad (0 \leq i \leq d). \label{eq:txact} \end{eqnarray} \end{lemma} \noindent {\it Proof:} By construction $v_i$ is an eigenvector for $x$ with eigenvalue $d-2i$. This combined with the first equation of Lemma \ref{lem7}\,(v) implies that $T_x v_i$ is an eigenvector for $x$ with eigenvalue $2i-d$. Therefore there exists $\alpha_i \in {\mathbb F}$ such that $T_x v_i=\alpha_i v_{d-i}$. We show $\alpha_i = (-1)^i$. One readily checks that $\alpha_0=1$ using the definition of $T_x$ and the data in Lemma \ref{lem:expfacts}. 
For $1 \leq i \leq d$, we apply the second equation of Lemma \ref{lem7}\,(v) to the vector $T_x v_i$; this yields $\alpha_i=-\alpha_{i-1}$ after a brief calculation. By the above comments $\alpha_i=(-1)^i$ for $0 \leq i \leq d$ and the result follows. \qed \begin{corollary} \label{cor:pttt} For the maps $P$, $T_x$, $T_y$, $T_z$ corresponding to $\varphi_d$, \begin{eqnarray} P^3= T^2_x = T^2_y= T^2_z=(-1)^dI. \label{eq:minusone} \end{eqnarray} \end{corollary} \noindent {\it Proof:} Let $\lbrace v_i \rbrace_{i=0}^d$ denote a standard basis for ${\mathcal V}(d)$. By Lemma \ref{lem:txaction}, for $0 \leq i \leq d$ we see that $T_x v_i=(-1)^iv_{d-i}$ and $T_x v_{d-i}=(-1)^{d-i}v_i $ so $T_x^2 v_i=(-1)^d v_i$. Therefore $T_x^2= (-1)^dI$. The result follows in view of Lemma \ref{cor5}\,(v). \qed Starting with a standard basis $\lbrace v_i\rbrace_{i=0}^d$ of ${\mathcal V}(d)$ and using the map $P$ corresponding to $\varphi_d$, we obtain three different bases for ${\mathcal V}(d)$: \begin{eqnarray} \label{eq:3bases} \lbrace v_i\rbrace_{i=0}^d, \quad \qquad \lbrace Pv_i\rbrace_{i=0}^d, \quad \qquad \lbrace P^2v_i\rbrace_{i=0}^d. \end{eqnarray} One significance of these bases is that for $0 \leq i \leq d$ the vector $v_i$ (resp. $Pv_i$, resp. $P^2v_i$) is an eigenvector for $x$ (resp. $y$, resp. $z$) with eigenvalue $d-2i$; this can be checked using Lemma \ref{lem7}\,(i). Our next goal is to describe how the three bases (\ref{eq:3bases}) are related. To this end the following lemma will be useful. \begin{lemma} For a standard basis $\lbrace v_i\rbrace_{i=0}^d$ of ${\mathcal V}(d)$ and for the map $P$ associated with $\varphi_d$, \begin{eqnarray} \exp\big(\varphi_d(y^*)\big)v_i &=& (-1)^{d-i}P^2 v_{d-i}, \label{eq:p2one} \\ \label{eq:p2two} \exp\big(-\varphi_d(z^*)\big)v_i &=& (-1)^{d-i}Pv_{d-i} \end{eqnarray} for $0 \leq i \leq d$. \end{lemma} \noindent {\it Proof:} Equation \eqref{eq:p2one} follows from \begin{eqnarray*} P^{-2}\exp\big(\varphi_d(y^*)\big)v_i &=& (-1)^d P\exp\big(\varphi_d(y^*)\big)v_i \\ &=& (-1)^d \exp\big(\varphi_d(y^*)\big) \exp\big(\varphi_d(z^*)\big) \exp\big(\varphi_d(y^*)\big) v_i \\ &=& (-1)^d T_xv_i \\ &=& (-1)^{d-i}v_{d-i} \end{eqnarray*} and line \eqref{eq:p2two} can be derived similarly. \qed \begin{corollary} \label{cor:threesum} For a standard basis $\lbrace v_i\rbrace_{i=0}^d$ of ${\mathcal V}(d)$ and for the map $P$ associated with $\varphi_d$, \begin{eqnarray} Pv_0= \sum_{i=0}^d v_i, \qquad P^2v_0= \sum_{i=0}^d Pv_i, \qquad (-1)^d v_0= \sum_{i=0}^d P^2v_i. \label{eq:threesum} \end{eqnarray} \end{corollary} \noindent {\it Proof:} To derive the equation on the left in \eqref{eq:threesum}, set $i=d$ in \eqref{eq:p2two} and evaluate the result using Lemma \ref{lem:expfacts}(iv). The other two relations in \eqref{eq:threesum} can be obtained similarly using $P^3=(-1)^dI$. \qed \begin{proposition}\label{prop3} For a standard basis $\lbrace v_i\rbrace_{i=0}^d$ of $\mathcal V(d)$ and for the map $P$ corresponding to $\varphi_d$ the following {\rm (i)}--{\rm (iii)} hold for $0 \leq i \leq d$. \begin{itemize} \item[(i)] The image of $\big(\varphi_d(z^*)\bigr)^{d-i}$ on ${\mathcal V}(d)$ is equal to each of the subspaces \begin{eqnarray*} \hbox{\rm Span}_{\mathbb F}\{v_0, \dots, v_i \}, \qquad \hbox{\rm Span}_{\mathbb F}\{Pv_d,\dots, Pv_{d-i}\}. 
\end{eqnarray*} \item[(ii)] The image of $\bigl(\varphi_d(x^*)\bigr)^{d-i}$ on ${\mathcal V}(d)$ is equal to each of the subspaces \begin{eqnarray*} \hbox{\rm Span}_{\mathbb F}\{Pv_0, \dots, Pv_i \}, \qquad \hbox{\rm Span}_{\mathbb F}\{P^2v_d,\dots, P^2v_{d-i}\}. \end{eqnarray*} \item[(iii)] The image of $\bigl(\varphi_d(y^*)\bigr)^{d-i}$ on ${\mathcal V}(d)$ is equal to each of the subspaces \begin{eqnarray*} \hbox{\rm Span}_{\mathbb F}\{P^2v_0, \dots, P^2v_i \}, \qquad \hbox{\rm Span}_{\mathbb F}\{v_d,\dots, v_{d-i}\}. \end{eqnarray*} \end{itemize} \end{proposition} \noindent {\it Proof:} (i) \ Recall $z^*.v_i=(i-d-1)v_{i-1}$ so the image of $\big(\varphi_d(z^*)\bigr)^{d-i}$ on ${\mathcal V}(d)$ is $\hbox{\rm Span}_{\mathbb F}\{v_0, \dots, v_i \}$. Also $\hbox{\rm Span}_{\mathbb F}\{v_0, \dots, v_i \}= \hbox{\rm Span}_{\mathbb F}\{Pv_d,\dots, Pv_{d-i}\}$ in view of Lemma \ref{lem:expfacts}(iv) and \eqref{eq:p2two}. (ii), (iii): Apply $P$ and $P^2$ to the equations in part (i). \qed We have described how the three bases \eqref{eq:3bases} are related. To visualize this description it is helpful to draw some diagrams. In these pictures, the following convention will be adopted. Given bases $\lbrace w_i\rbrace_{i=0}^d$ and $\lbrace w'_i\rbrace_{i=0}^d$ for $\mathcal V(d)$, the display below will mean that $\hbox{\rm Span}_{\mathbb F}\{w_0,\dots, w_i\} = \hbox{\rm Span}_{\mathbb F}\{w_d', \dots, w_{d-i}'\}$ for $0 \leq i \leq d$: \begin{pspicture}(-4,-1)(4,4) \psset{xunit=.5cm,yunit=.5cm} {\psline{-}(0,0)(4,6.9)} {\psline{-}(0,0)(8,0)} \rput(8.1,-.5){${\black {\boldsymbol {w_d}}}$} \rput(6.6,-.5){${\black{\boldsymbol {w_{d-1}}}}$} \rput(4,-.5){${\black{\boldsymbol {\cdots}}}$} \rput(1.5,-.5){${\black{\boldsymbol {w_1}}}$} \rput(0,-.5){${\black{\boldsymbol {w_0}}}$} \rput(8.1,0){${\black {\boldsymbol {\bullet}}}$} \rput(6.5,0){${\black{\boldsymbol {\bullet}}}$} \rput(1.6,0){${\black{\boldsymbol {\bullet}}}$} \rput(0,0){${\black{\boldsymbol {\bullet}}}$} \rput(2.8,6.8){${\black {\boldsymbol {w_0'}}}$} \rput(2.15,5.5){${\black{\boldsymbol {w_{1}'}}}$} \rput(.8,3){${\black{\boldsymbol {\cdot} }}$} \rput(1.1,3.5){${\black{\boldsymbol {\cdot }}}$} \rput(1.4,4){${\black{\boldsymbol {\cdot} }}$} \rput(-.2,1.7){${\black{\boldsymbol {w_{d-1}'}}}$} \rput(-.8,.5){${\black{\boldsymbol {w_d'}}}$} \rput(4,6.9){${\black {\boldsymbol {\bullet}}}$} \rput(3.2,5.5){${\black{\boldsymbol {\bullet}}}$} \rput(.9,1.5){${\black{\boldsymbol {\bullet}}}$} \end{pspicture} \noindent With this convention in mind, Proposition \ref{prop3} tells us that \begin{pspicture}(-4,-.5)(4,4.5) \psset{xunit=.6cm,yunit=.6cm} {\psline{-}(0,0)(4,6.9)} {\psline{-}(4,6.9)(8,0)} {\psline{-}(0,0)(8,0)} \rput(8,-.5){${\black {\boldsymbol {v_d}}}$} \rput(6.8,-.5){${\black{\boldsymbol {v_{d-1}}}}$} \rput(3.5,-.5){${\black{\boldsymbol {\cdot}}}$} \rput(4,-.5){${\black{\boldsymbol {\cdot}}}$} \rput(4.5,-.5){${\black{\boldsymbol {\cdot}}}$} \rput(1.5,-.5){${\black{\boldsymbol {v_1}}}$} \rput(0,-.5){${\black{\boldsymbol {v_0}}}$ \rput(7.7,.6){${\black {\boldsymbol {\bullet }}}$} \rput(6.1,.6){${\black{\boldsymbol {\bullet}}}$} \rput(1.3,.6){${\black{\boldsymbol {\bullet}}}$} \rput(-.2,.6){${\black{\boldsymbol {\bullet}}}$ } \rput(2.8,7.5){${\black {\boldsymbol {Pv_0}}}$} \rput(3.68,7.55){${\black {\boldsymbol {\bullet}}}$} \rput(2,6){${\black{\boldsymbol {Pv_1}}}$} \rput(2.75,6){${\black{\boldsymbol {\bullet}}}$} \rput(.8,3.3){${\black{\boldsymbol {\cdot} }}$} \rput(1.1,3.8){${\black{\boldsymbol {\cdot }}}$} \rput(1.4,4.3){${\black{\boldsymbol 
{\cdot} }}$} \rput(-.55,2.2){${\black{\boldsymbol {Pv_{d-1}}}}$} \rput(.6,2.2){${\black{\boldsymbol {\bullet}}}$} \rput(-1.2,.9){${\black{\boldsymbol {Pv_d}}}$} \rput(4.7,7.5){${\black {\boldsymbol {P^2v_d}}}$} \rput(5.9,6){${\black{\boldsymbol {P^2v_{d-1}}}}$} \rput(4.6,6){${\black{\boldsymbol {\bullet}}}$} \rput(6.6,3.3){${\black{\boldsymbol {\cdot} }}$} \rput(6.3,3.8){${\black{\boldsymbol {\cdot} }}$} \rput(6,4.3){${\black{\boldsymbol {\cdot}}}$} \rput(7.9,2.2){${\black{\boldsymbol {P^2v_1}}}$} \rput(6.75,2.2){${\black{\boldsymbol {\bullet}}}$} \rput(8.7,.9){${\black{\boldsymbol {P^2v_0}}}$}} \end{pspicture} \noindent By Corollary \ref{cor:threesum}, the sum of the vectors on each edge of the triangle is a scalar multiple of the vector at the opposite vertex. For the special case of the adjoint module $\mathcal V(2)$, the elements \begin{eqnarray*} v_0 = z^* = -e, \qquad v_1 = x = h, \qquad v_2 = y^* = f \end{eqnarray*} form a standard basis. In this case the picture is \begin{pspicture}(-4,0)(4,4) \psset{xunit=.5cm,yunit=.5cm} {\psline{-}(0,0)(4,6.9)} {\psline{-}(4,6.9)(8,0)} {\psline{-}(0,0)(8,0)} \rput(8.6,.1){${\black {\boldsymbol {y^*}}}$} \rput(4,-.6){${\black{\boldsymbol {x}}}$} \rput(-.7,.1){${\black{\boldsymbol {z^*}}}$ \rput(8.2,.25){${\black {\boldsymbol {\bullet }}}$} \rput(4.3,.25){${\black {\boldsymbol {\bullet }}}$} \rput(.35,.25){${\black{\boldsymbol {\bullet}}}$} \rput(1.7,3.9 ){${\black{\boldsymbol {y} }}$} \rput(6.8,3.9 ){${\black{\boldsymbol {z} }}$} \rput(4.5,7.8 ){${\black{\boldsymbol {x^*} }}$} \rput(2.4,3.8){${\black{\boldsymbol {\bullet} }}$} \rput(4.3,7.2){${\black{\boldsymbol {\bullet} }}$} \rput(6.2,3.8){${\black{\boldsymbol {\bullet} }}$} } \end{pspicture} \noindent By \eqref{eq:3sum} we have \begin{eqnarray*} z^* + x + y^* = x^*, \qquad x^* + y + z^* = y^*, \qquad y^* + z + x^* = z^*, \end{eqnarray*} which is just \eqref{eq:threesum} in this special case. It is well known that each irreducible $\mathfrak{sl}_2$-module $\mathcal V(d)$ can be realized explicitly as the space of homogeneous polynomials over $\mathbb F$ of total degree $d$ in two variables. The symmetry in the equitable basis for $\mathfrak{sl}_2$ is reflected in how it acts on these polynomials, as we now discuss. The action of $\mathfrak{sl}_2$ on $\mathcal V(1)$ extends to an action of $\mathfrak{sl}_2$ on the symmetric algebra $\mathcal S: = S(\mathcal V(1))$ by derivations, so that $d.(uv) = (d.u)v + u(d.v)$ holds for all $d \in \mathfrak{sl}_2$ and all $u,v \in \mathcal S$. We regard $\mathcal S$ as the polynomial algebra $\mathbb F[s,t]$ in two commuting indeterminates $s,t$ and set $r = -s-t$. Then $$\mathcal S = \mathbb F[r,s] = \mathbb F[s,t] = \mathbb F[t,r] \quad \hbox{and} \quad r+s+t = 0.$$ We identify the variables $t,s$ with the standard basis elements $v_0,v_1$ respectively, and get the actions \begin{eqnarray*}\label{eq:sact} \begin{array}{ccccccccc} x:&r\mapsto& s-t \quad \quad &y:&s\mapsto& t-r \quad \quad &z:&t\mapsto& r-s \\ &s\mapsto& -s \quad \quad &&t\mapsto& -t \quad \quad &&r\mapsto& -r \\ &t\mapsto& t \quad\quad &&r\mapsto& r\quad \quad &&s \mapsto& s, \end{array} \end{eqnarray*} \begin{eqnarray*}\label{eq:sact2} \begin{array}{ccccccccc} x^*: &r\mapsto&\ 0 \quad \qquad &y^*: &s\mapsto&\ 0\quad \qquad&z^*:&t\mapsto&\ 0 \\ &s\mapsto&\ r \quad \qquad &&t\mapsto&\ s \quad \qquad&&r \mapsto&\ t \\ &t\mapsto&-r \quad\qquad &&r\mapsto& -s \quad \qquad&&s\mapsto& -t. 
\end{array} \end{eqnarray*} \begin{proposition}\label{prop:symact} For an integer $d \geq 0$ the following hold. \begin{itemize} \item [(i)] The homogeneous polynomials of degree $d$ form an $\mathfrak{sl}_2$-submodule of $\mathcal S$ that is isomorphic to $\mathcal V(d)$; \item [(ii)] each of the sets $\lbrace s^i t^{d-i}\rbrace_{i=0}^d$, $\lbrace t^i r^{d-i}\rbrace_{i=0}^d$, $\lbrace r^i s^{d-i}\rbrace_{i=0}^d$ is a basis for this submodule; \item [(iii)] for $0 \leq i \leq d$ the vector $s^{i}t^{d-i}$ (resp. $t^{i}r^{d-i}$, resp. $r^{i}s^{d-i}$) is an eigenvector for $x$ (resp. $y$, resp. $z$) with eigenvalue $d-2i$. \end{itemize} \end{proposition} For $d=3$ these three bases appear on the perimeter of the triangle below. \vskip .7 truein \begin{pspicture}(-4,0)(4,4) \psset{xunit=.6cm,yunit=.6cm} {\psline{-}(0,0)(4.5,7.8)} {\psline{-}(4.5,7.8)(9,0)} {\psline{-}(0,0)(9,0)} \rput(9.3,-.5){${\black {\boldsymbol {s^3}}}$} \rput(6,-.5){${\black {\boldsymbol {s^2t}}}$} \rput(3,-.5){${\black{\boldsymbol {st^2}}}$} \rput(-.3,-.5){${\black{\boldsymbol {t^3}}}$} \rput(9,0){${\black {\boldsymbol {\bullet }}}$} \rput(6,0){${\black {\boldsymbol {\bullet }}}$} \rput(3,0){${\black {\boldsymbol {\bullet }}}$} \rput(0,0){${\black{\boldsymbol {\bullet }}}$} \rput(.8,2.7 ){${\black{\boldsymbol {t^2r}}}$} \rput(1.5,2.6){${\black{\boldsymbol {\bullet}}}$} \rput(3,5.2){${\black{\boldsymbol {\bullet} }}$} \rput(2.3,5.3 ){${\black{\boldsymbol {tr^2}}}$} \rput(4.5,7.8){${\black{\boldsymbol {\bullet} }}$} \rput(4.7,8.4 ){${\black{\boldsymbol {r^3}}}$} \rput(6,5.2){${\black{\boldsymbol {\bullet} }}$} \rput(6.7,5.3 ){${\black{\boldsymbol {r^2s}}}$} \rput(7.5,2.6){${\black{\boldsymbol {\bullet} }}$} \rput(8.2,2.7 ){${\black{\boldsymbol {rs^2 }}}$} {\psline{-}(1.5,2.6)(7.5,2.6)} {\psline{-}(1.5,2.6)(3,0)} {\psline{-}(3,0)(6,5.2)} {\psline{-}(6,0)(3,5.2)} {\psline{-}(6,0)(7.5,2.6)} {\psline{-}(6,5.2)(3,5.2)} \rput(4.5,2.6){${\black{\boldsymbol {\bullet }}}$} \rput(5.2,2.9){${\black{\boldsymbol {rst }}}$} \end{pspicture} \section{Connections with the Poincar\'e Disk and Pythagorean Triples} Recall that $(v,w^*) = 2\delta_{v,w}$ for $v,w \in \{x,y,z\}$. Thus, the elements $\frac{1}{2} x^*$, $\frac{1}{2} y^*$, $\frac{1}{2} z^*$ are the {\it fundamental weights} of the Kac-Moody algebra $\mathfrak g$, and the lattice $\frac{1}{2} L^* = \mathbb Z(\frac{1}{2} x^*) \oplus \mathbb Z(\frac{1}{2} y^*) \oplus \mathbb Z(\frac{1}{2} z^*)$ is the {\it weight lattice}. The elements in the set $\Omega := W(x^*) \cup W(y^*) \cup W(z^*)$ are the Weyl group images of twice the fundamental weights. Here we consider the set $\Omega$, and relate it to Pythagorean triples. By Proposition \ref{prop:iso1}, Proposition \ref{prop:iso3}, and Theorem \ref{thm:z*orbstab}, the elements $u \in \Omega$ have the form $u = -\frac{1}{2}\big( a^2 x + b^2 y + c^2 z\big)$ where $a,b,c \in \mathbb Z_{\geq 0}$ (not all 0) and at least one of the following holds: \begin{eqnarray*} a=b+c, \qquad \qquad b=c+a, \qquad \qquad c=a+b. \end{eqnarray*} Each element in $\Omega$ corresponds to a Pythagorean triple $(\alpha,\beta, \gamma)$ in the following way. When $c = a+b$, the Pythagorean triple $(\alpha, \beta,\gamma)$ can be obtained from the matrix equation $M(a^2,b^2,c^2)^{\mathfrak t} = (\alpha,\beta,\gamma)^{\mathfrak t}$ where \begin{equation}\label{eq:changer} M = \left[\begin{array}{ccc} -1 & \ \ 1 & 1 \\ \ \ 0 & -1 & 1 \\ \ \ 0 & \ \ 1 & 1 \end{array} \right ], \end{equation} and $\gamma^2 = \alpha^2 + \beta^2$. 
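To illustrate the correspondence, take $(a,b,c)=(1,2,3)$, so that $c=a+b$. Then $M(1,4,9)^{\mathfrak t}=(12,5,13)^{\mathfrak t}$, and indeed $12^2+5^2=169=13^2$. More generally, when $c=a+b$ a direct computation gives \begin{eqnarray*} \alpha = -a^2+b^2+c^2 = 2bc, \qquad \beta = c^2-b^2, \qquad \gamma = c^2+b^2, \end{eqnarray*} so that $\gamma^2-\beta^2 = 4b^2c^2 = \alpha^2$.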
Similarly, when $a = b+c$, then for $(\beta,\gamma,\alpha)^{\mathfrak t}:= M(b^2,c^2,a^2)^{\mathfrak t}$ we have $\alpha^2 = \beta^2 + \gamma^2$, and when $b = c+a$ then for $(\gamma, \alpha, \beta)^{\mathfrak t}: = M(c^2,a^2,b^2)^{\mathfrak t}$ we have $\beta^2 = \gamma^2 + \alpha^2$. There is a beautiful way to visualize the set $\Omega$ that brings together many of the ideas in this paper. Starting with the picture for $\mathcal V(2)$ in Section 8, and applying reflections in $W$ we obtain the Poincar\'e disk $\mathcal P$. We have displayed a portion of the disk in Figure 1 below and have given the corresponding triple $a^2, b^2, c^2$ for each displayed point on the circumference between $x^*$ and $y^*$. For each of these points, $c = a +b$. The other labels can be obtained by permuting $x^*,y^*,z^*$ and $x,y,z$. The vertices on the perimeter are the elements of $\Omega$; that is, they are the Weyl group reflections of the fundamental weights of the hyperbolic Kac-Moody algebra, after each one is multiplied by a factor of $2$. The group ${\rm Isom}_{\mathbb Z}(L) = \langle (y\, z), -1 \rangle \ltimes G = \pm S_3 \ltimes W$ is the group of automorphisms $\hbox{Aut}(\mathcal P)$ of the disk. Indeed, the Weyl group $W$ permutes the triangles. By multiplying any $\xi \in \hbox{Aut}(\mathcal P)$ by an appropriate element of $W$, we can assume that $\xi$ maps the central triangle to itself. Such an automorphism can be seen to belong to $\pm S_3$. For the other realization of $ \hbox{Aut}(\mathcal P)$, observe that by multiplying an automorphism $\xi \in \hbox{Aut}(\mathcal P)$ by a suitable element of $G$, we can assume that $\xi$ fixes the edge labeled by $x$ and $-x$. Since the stabilizer of $x$ in $G$ is trivial according to Lemma \ref{lem4}, such an automorphism must belong to $\langle (y\, z), -1 \rangle$. 
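As an aid in reading Figure 1, we note how the elements of $\Omega$ are expressed in the dual basis. Using the relations $z^* + x + y^* = x^*$, $x^* + y + z^* = y^*$, $y^* + z + x^* = z^*$ from Section 8, or equivalently $x = x^*-y^*-z^*$, $y = y^*-z^*-x^*$, $z = z^*-x^*-y^*$, a short computation shows that $u = -\frac{1}{2}\big(a^2 x + b^2 y + c^2 z\big)$ becomes \begin{eqnarray*} u = \frac{1}{2}\Big((b^2+c^2-a^2)\,x^* + (c^2+a^2-b^2)\,y^* + (a^2+b^2-c^2)\,z^*\Big), \end{eqnarray*} which, when $c=a+b$, simplifies to $u = bc\,x^* + ca\,y^* - ab\,z^*$. For instance, $(a,b,c)=(1,2,3)$ gives $u = 6x^*+3y^*-2z^*$, one of the labels displayed in Figure 1.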
\eject {\hskip 1.4 truein {\Large Figure 1}} \vskip .5 truein \begin{pspicture}(-4.5,-6)(10,10) \psset{xunit=1cm,yunit=1cm} {\cnode(0,0){8}{b}} \rput(0,8){{$\bullet$}} \rput(8,0){{$\bullet$}} \rput(0,-8){{$\bullet$}} \rput(-8,0){{$\bullet$}} \rput(7.75,2){{$\bullet$}} \rput(-7.75,2){{$\bullet$}} \rput(7.75,-2){{$\bullet$}} \rput(-7.75,-2){{$\bullet$}} \rput(6.9,4){{$\bullet$}} \rput(-6.9,4){{$\bullet$}} \rput(6.9,-4){{$\bullet$}} \rput(-6.9,-4){{$\bullet$}} \rput(5.6,5.6){{$\bullet$}} \rput(-5.6,5.6){{$\bullet$}} \rput(5.6,-5.6){{$\bullet$}} \rput(-5.6,-5.6){{$\bullet$}} \rput(4,6.9){{$\bullet$}} \rput(-4,6.9){{$\bullet$}} \rput(4,-6.9){{$\bullet$}} \rput(-4,-6.9){{$\bullet$}} \rput(2,7.75){{$\bullet$}} \rput(2,-7.75){{$\bullet$}} \rput(-2,7.75){{$\bullet$}} \rput(-2,-7.75){{$\bullet$}} {\pscurve{-}(6.9,-4)(2.4,1.4)(0,8)} {\pscurve{-}(-6.9,-4)(-2.4,1.4)(0,8)} {\pscurve{-}(-6.9,-4)(0,-2.8)(6.9,-4)} \rput(2.1,1.2){\Large $\boldsymbol{z}$} \rput(-2.1,1.2){\Large $\boldsymbol{y}$} \rput(0,-2.3){\Large $\boldsymbol{x}$} {\pscurve{-}(6.9,4)(2.3,5.1)(0,8)} {\pscurve{-}(6.9,4)(5.5,0)(6.9,-4)} \rput (2.7,1.6){\Large {$\boldsymbol{-z}$}} {\pscurve{-}(-6.9,4)(-2.3,5.1)(0,8)} {\pscurve{-}(-6.9,4)(-5.5,0)(-6.9,-4)} \rput (-2.8,1.6){\Large {$\boldsymbol{-y}$}} {\pscurve{-}(-6.9,-4)(-2.3,-5.1)(0,-8)} {\pscurve{-}(6.9,-4)(2.3,-5.1)(0,-8)} \rput(-.1,-3.2){\Large {$\boldsymbol{-x}$}} {\pscurve{-}(-6.9,4)(-5,5)(-4,6.9)} {\pscurve{-}(6.9,4)(5,5)(4,6.9)} {\pscurve{-}(-6.9,-4)(-5,-5)(-4,-6.9)} {\pscurve{-}(6.9,-4)(5,-5)(4,-6.9)} {\pscurve{-}(0,-8)(2.1,-6.7)(4,-6.9)} {\pscurve{-}(0,-8)(-2.1,-6.7)(-4,-6.9)} {\pscurve{-}(0,8)(-2.1,6.7)(-4,6.9)} {\pscurve{-}(0,8)(2.1,6.7)(4,6.9)} {\pscurve{-}(8,0)(6.7,2.1)(6.9,4)} {\pscurve{-}(8,0)(6.7,-2.1)(6.9,-4)} {\pscurve{-}(-8,0)(-6.7,2.1)(-6.9,4)} {\pscurve{-}(-8,0)(-6.7,-2.1)(-6.9,-4)} {\pscurve{-}(8,0)(7.5,1)(7.75,2)} {\pscurve{-}(-8,0)(-7.5,1)(-7.75,2)} {\pscurve{-}(-8,0)(-7.5,-1)(-7.75,-2)} {\pscurve{-}(8,0)(7.5,-1)(7.75,-2)} {\pscurve{-}(7.75,2)(6.95,3)(6.9,4)} {\pscurve{-}(-7.75,2)(-6.95,3)(-6.9,4)} {\pscurve{-}(-7.75,-2)(-6.95,-3)(-6.9,-4)} {\pscurve{-}(7.75,-2)(6.95,-3)(6.9,-4)} {\pscurve{-}(6.9,4)(5.8,4.8)(5.6,5.6)} {\pscurve{-}(6.9,-4)(5.8,-4.8)(5.6,-5.6)} {\pscurve{-}(-6.9,4)(-5.8,4.8)(-5.6,5.6)} {\pscurve{-}(-6.9,-4)(-5.8,-4.8)(-5.6,-5.6)} {\pscurve{-}(5.6,5.6)(4.8,5.8)(4,6.9)} {\pscurve{-}(5.6,-5.6)(4.8,-5.8)(4,-6.9)} {\pscurve{-}(-5.6,5.6)(-4.8,5.8)(-4,6.9)} {\pscurve{-}(-5.6,-5.6)(-4.8,-5.8)(-4,-6.9)} {\pscurve{-}(4,6.9)(3,6.95) (2,7.75)} {\pscurve{-}(4,-6.9)(3,-6.95) (2,-7.75)} {\pscurve{-}(-4,6.9)(-3,6.95) (-2,7.75)} {\pscurve{-}(-4,-6.9)(-3,-6.95) (-2,-7.75)} {\pscurve{-}(2,7.75)(1,7.5)(0,8)} {\pscurve{-}(2,-7.75)(1,-7.5)(0,-8)} {\pscurve{-}(-2,7.75)(-1,7.5)(0,8)} {\pscurve{-}(-2,-7.75)(-1,-7.5)(0,-8)} \rput(0.1,8.4){\large{$\boldsymbol{\blue x^*}$}} \rput (0,9.4){\Large{$\boldsymbol{\red 0^2}$}} \rput(0,10) {\Large{$\boldsymbol{\red 1^2}$}} \rput(0,10.6) {\Large{$\boldsymbol{\red 1^2}$}} \rput(2.5,9.1){\Large{$\boldsymbol{\red 1^2}$}} \rput(2.7,9.7){\Large{$\boldsymbol{\red 3^2}$}} \rput(2.9,10.3){\Large{$\boldsymbol{\red 4^2}$}} \rput(3.3,7.6){\large{$\boldsymbol{\blue 6}$}} \rput(3.65,7.55){\large{$\boldsymbol{\blue x^*}$}} \rput(3.9,7.35){\large{$\boldsymbol{\blue +}$}} \rput(4.2,7.15){\large{$\boldsymbol{\blue 3}$}} \rput(4.5,7.05){\large{$\boldsymbol{\blue y^*}$}} \rput(4.7,6.8){\large{$\boldsymbol{\blue -}$}} \rput(5.0,6.62){\large{$\boldsymbol{\blue 2}$}} \rput(5.3,6.6){\large{$\boldsymbol{\blue z^*}$}} \rput(5.1,8.3) {\Large{$\boldsymbol{\red 
1^2}$}} \rput(5.5,8.9) {\Large{$\boldsymbol{\red 2^2}$}} \rput(5.9,9.5) {\Large{$\boldsymbol{\red 3^2}$}} \rput(6.8,6.8){\Large{$\boldsymbol{\red 2^2}$}} \rput(7.3,7.4){\Large{$\boldsymbol{\red 3^2}$}} \rput(7.8,8){\Large{$\boldsymbol{\red 5^2}$}} \rput(6.55,5){\large{$\boldsymbol{\blue 2}$}} \rput(6.9,4.95){\large{$\boldsymbol{\blue x^*}$}} \rput(7,4.6){\large{$\boldsymbol{\blue +}$}} \rput(7.2,4.2){\large{$\boldsymbol{\blue 2}$}} \rput(7.5,4.15){\large{$\boldsymbol{\blue y^*}$}} \rput(7.5,3.75){\large{$\boldsymbol{\blue -}$}} \rput(7.9,3.7){\large{$\boldsymbol{\blue z^*}$}} \rput(-7.5,3.5){\large{$\boldsymbol{\blue z^*}$}} \rput(-7.35,3.9){\large{$\boldsymbol{\blue +}$}} \rput(-7.8,3.5){\large{$\boldsymbol{\blue 2}$}} \rput(-7.1,4.3){\large{$\boldsymbol{\blue x^*}$}} \rput(-7.4,4.3){\large{$\boldsymbol{\blue 2}$}} \rput(-7,4.75){\large{$\boldsymbol{\blue -}$}} \rput(-6.55,4.8){\large{$\boldsymbol{\blue y^*}$}} \rput(8.4,5){\Large{$\boldsymbol{\red 1^2}$}} \rput(9,5.5){\Large{$\boldsymbol{\red 1^2}$}} \rput(9.7,6){\Large{$\boldsymbol{\red 2^2}$}} \rput(9.1,2.6){\Large{$\boldsymbol{\red 3^2}$}} \rput(9.8,3){\Large{$\boldsymbol{\red 2^2}$}} \rput(10.5,3.4){\Large{$\boldsymbol{\red 5^2}$}} \rput(8.7,.25){\large{$\boldsymbol{\blue 3x^*}$}} \rput(9.6,.25){\large{$\boldsymbol{\blue+6y^*}$}} \rput(10.6,.25){\large{$\boldsymbol{\blue-2z^*}$}} \rput (9.3,-.25){\Large{$\boldsymbol{\red 2^2}$}} \rput (10.1,-.25){\Large{$\boldsymbol{\red 1^2}$}} \rput (10.8,-.25){\Large{$\boldsymbol{\red 3^2}$}} \rput(9.1,-2.2){\Large{$\boldsymbol{\red 3^2}$}} \rput(9.9,-2.4){\Large{$\boldsymbol{\red 1^2}$}} \rput(10.7,-2.6){\Large{$\boldsymbol{\red 4^2}$}} \rput(7.4,-4.2){\large{$\boldsymbol{\blue y^*}$}} \rput(-7.4,-4.2){\large{$\boldsymbol{\blue z^*}$}} \rput(8.2,-4.6) {\Large{$\boldsymbol{\red 1^2}$}} \rput(8.9,-5) {\Large{$\boldsymbol{\red 0^2}$}} \rput(9.6,-5.5) {\Large{$\boldsymbol{\red 1^2}$}} \rput(-.9,-8.3){\large{$\boldsymbol{\blue 2y^*}$}} \rput(0,-8.3){\large{$\boldsymbol{\blue+2z^*}$}} \rput(.9,-8.3){\large{$\boldsymbol{\blue-x^*}$}} \end{pspicture} \eject \end{document}
\begin{document} \eqnsection \newcommand{\ffrac}[2] {\left( \frac{#1}{#2} \right)} \newcommand{\dfn}{\stackrel{\triangle}{=}} \def\squarebox#1{\hbox to #1{ \vbox to #1{ }}} \renewcommand{\qed}{\hspace*{\fill} \vbox{\hrule\hbox{\vrule\squarebox{.667em}\vrule}\hrule} } \newcommand{\half}{\frac{1}{2}\:} \title[Tightness for the GFF] {Tightness of the recentered maximum of the two--dimensional discrete Gaussian Free Field} \author[Maury Bramson\,, Ofer Zeitouni] {Maury Bramson$^*$\,\, Ofer Zeitouni$^\S$} \date{September 16, 2010. \newline\indent $^*$Research partially supported by NSF grant number CCF-0729537. \newline\indent $^\S$Research partially supported by NSF grant number DMS-0804133 and by the Herman P. Taubman chair of Mathematics at the Weizmann Institute.} \begin{abstract} \noindent We consider the maximum of the discrete two dimensional Gaussian free field (GFF) in a box, and prove that its maximum, centered at its mean, is tight, settling a long--standing conjecture. The proof combines a recent observation of \cite{BDZ} with elements from \cite{bramson2} and comparison theorems for Gaussian fields.
An essential part of the argument is the precise evaluation, up to an error of order $1$, of the expected value of the maximum of the GFF in a box. Related Gaussian fields, such as the GFF on a two--dimensional torus, are also discussed. \end{abstract} \maketitle \section{Introduction} \label{sec-introduction} We consider the discrete Gaussian Free Field (GFF) in a two-dimensional box of side $N$, with Dirichlet boundary conditions. That is, let $V_N=([0,N-1]\cap \mathbb{Z})^2$ and $V_N^o=((0,N-1)\cap \mathbb{Z})^2$, and let $\{w_m\}_{m\geq 0}$ denote a simple random walk started in $V_N^o$ and killed at $\tau=\min\{m: w_m\in \partial V_N\}$ (that is, killed upon hitting the boundary $\partial V_N=V_N\setminus V_N^o$). For $x,y\in V_N$, define $G_N(x,y)=E^x(\sum_{m=0}^\tau {\bf 1}_{w_m=y})$, where $E^x$ denotes expectation with respect to the random walk started at $x$. The GFF is the zero-mean Gaussian field $\{{\mathcal X}_z^N\}_z$ indexed by $z\in V_N$ with covariance $G_N$. Let ${\mathcal X}_N^*=\max_{z\in V_N} {\mathcal X}_z^N$. It was proved in \cite{BDG} that ${\mathcal X}_N^*/(\log N)\to c$ with $c=2 \sqrt{2/\pi}$; the proof is closely related to the proof of the law of large numbers for the maximal displacement of a branching random walk in $\mathbb{R}$. Let $M_N:={\mathcal X}_N^*-E{\mathcal X}_N^*$. The goal of this paper is to prove the following. \begin{theorem} \label{theo-1} The sequence of random variables $\{M_N\}_{N\geq 1}$ is tight. \end{theorem} The statement in Theorem \ref{theo-1} has been a ``folklore'' conjecture for some time, and appears in print, e.g., as open problem \#4 in \cite{sourav} (for an earlier appearance in print of a related conjecture, see \cite{CLD}). To the best of our knowledge, prior to the current paper, the sharpest result in this direction is due to \cite{sourav}, who shows that the variance of $M_N$ is $o(\log N)$, and to \cite{BDZ}, who show, building on an argument of \cite{DH91}, that Theorem \ref{theo-1} holds if one replaces $N$ by an appropriate deterministic sequence $\{N_k\}_{k\geq 1}$. In the same paper \cite{BDZ}, it is shown that Theorem \ref{theo-1} holds as soon as one proves that, for an appropriate constant $C$, $E {\mathcal X}_{2N}^*\leq E{\mathcal X}_N^*+C$ for all $N=2^n$ with $n$ integer. Theorem \ref{theo-1} thus follows immediately from the following theorem, which is our main result. \begin{theorem} \label{theo-2} With notation as above, \begin{equation} \label{eq-main} E{\mathcal X}_{2^n}^*= c_1n - c_2 \log n+O(1), \end{equation} with $c_1=2\sqrt{2/\pi}\log 2$ and $c_2=(3/4)\sqrt{2/\pi}$. \end{theorem} One should note the striking similarity with the behavior of branching random walks (BRW), see \cite{bramson2} (where branching Brownian motions are considered) and \cite{ABR}. The relation with (an imbedded) BRW is already apparent in \cite{BDG}, but the argument there is not sharp enough to allow for a control of the $\log n$ term in \eqref{eq-main}. Our approach to the proof of Theorem \ref{theo-2} involves two main components.
The first is a comparison argument (based on the Sudakov-Fernique inequality, see Lemma \ref{sud-fer} below), which will allow us to quickly prove an upper bound in \eqref{eq-main}, and to relate $E{\mathcal X}_{2^n}^*$ to the expectation of the maximum of other Gaussian fields (and, in particular, to a version of the Gaussian Free Field on the torus, denoted $\{{\mathcal Y}_x^N\}_{x\in V_N}$ below, as well as to a modified version of branching random walk, denoted $\{{\mathcal S}_x^N\}_{x\in V_N}$ below). The second step consists of the analysis of the modified branching random walk ${\mathcal S}_x^N$, by properly modifying the second moment argument in \cite{bramson2} (see also \cite{ABR}). Our results also provide an analog of Theorem \ref{theo-2} for the torus GFF $\{{\mathcal Y}_x^N\}_{x\in V_N}$, in Propositions \ref{prop-tgffmbrw} and \ref{prop-lb}, which is of interest in its own right. Intuitively, the model is the natural counterpart of the modified branching random walk $\{{\mathcal S}_x^N\}_{x\in V_N}$, which plays a central role in the proof of Theorem \ref{theo-2}. We have not proved the analog of Theorem \ref{theo-1} for the torus GFF, which requires a modification of the argument in \cite{BDZ}. The paper is structured as follows. In the next section, we recall a fundamental comparison between maxima of Gaussian fields; we then introduce the torus GFF, branching random walk, and modified branching random walk, and estimate their covariances. Section \ref{sec-ub} is devoted to the proof of the upper bound in Theorem \ref{theo-2}. The rest of the paper deals with the lower bound in Theorem \ref{theo-2}. Section \ref{sec-lbpre} reduces the proof of the lower bound to a lower bound on the maximum of a truncated version of the modified branching random walk introduced in Section \ref{sec-prelim}. Section \ref{LBMBRW} reduces the proof of the latter to a lower bound on the maximum of the modified branching random walk over a subset of $V_N$. The proof of this bound is given in Section \ref{sec-6}, using the second moment method. The proofs of some technical estimates, closely related to estimates in \cite{bramson2}, are sketched in the appendix.\\ \noindent {\bf Notation:} throughout, the letter $C$ indicates a positive constant, independent of $N$, whose value may change from line to line. Positive constants that are fixed once and for all are denoted by the lower case $c$ with a subscript, for example $c_5$ or $c_X$. \section{Preliminaries and approximations} \label{sec-prelim} In this section, we recall a comparison tool between the maxima of different Gaussian fields and introduce Gaussian fields that approximate the GFF. \subsection{The Sudakov--Fernique inequality} The following inequality allows for the comparison of the expectation of the maxima of different Gaussian fields. For a proof, see \cite{sudfer}. \begin{lemma}[Sudakov--Fernique] \label{sud-fer} Let ${\AA}$ denote an arbitrary (finite) set, let $\{G_\alpha^i\}_{\alpha\in {\AA}}$, $i=1,2$, denote two zero mean Gaussian fields and set $G^*_i=\max_{\alpha\in {\AA}}G_\alpha^{i}$. If \begin{equation} E(G_\alpha^1-G_\beta^1)^2\geq E(G_\alpha^2-G_\beta^2)^2\,, \quad \mbox{\rm for all $\alpha,\beta\in {\AA}$}\,, \end{equation} then \begin{equation} \label{eq-sf} E G^*_1\geq EG^*_2\,.
\end{equation} \end{lemma} In particular, if $\{G_\alpha\}_{\alpha\in {\AA}}$ and $\{g_\alpha\}_{\alpha \in {\AA}}$ are independent centered Gaussian fields, then one sees that $$E (\max_{\alpha\in \AA} (G_\alpha+g_\alpha))\geq E(\max_{\alpha\in \AA} G_\alpha)\,,$$ a fact that is also easy to check without the Gaussian assumption. \subsection{The Torus GFF, Branching Random Walks, and Modified Branching Random Walks} We introduce several Gaussian fields with index set $V_N$ that will play a role in the proof of Theorem \ref{theo-2}. \subsubsection{The Torus GFF} One of the drawbacks of working with the GFF is that its variance is not the same at all points of $V_N$. The Torus GFF (TGFF) $\{{\mathcal Y}_z^N\}_{z\in V_N}$ is a Gaussian field whose correlation structure resembles that of the GFF, but has the additional property that its variance is constant across $V_N$. To define it formally, for $x,y\in \Z^2$, write $x\ \!\! \sim_N \ \!\! y$ if $x-y\in (N\Z)^2$. Similarly, for $B,B'\subset V_N$, write $B\sim_N B'$ if there exist integers $i,j$ so that $B'=B+(iN,jN)$. Let $\tau'$ denote an exponential random variable of parameter $1/N^2$ and, with $\{w_m\}_{m\geq 0}$ denoting a simple random walk independent of $\tau'$, define, for $x,y\in V_N,$ $$ \bar G_N(x,y)=E^x(\sum_{m=0}^{\tau'} {\bf 1}_{w_m\ \!\! \sim_N\ \!\!y})\,,$$ where $E^x$ denotes expectation over both $\tau'$ and the random walk started at $x$. That is, $\bar G_N$ is the Green function of a simple random walk on the torus of side $N$, killed at the independent exponential time $\tau'$. The TGFF is the centered Gaussian process $\{{\mathcal Y}_z^N\}_{z\in V_N}$ with covariance $\bar G_N$. By construction, for $x,y\in V_N$, $E(({\mathcal Y}_x^N)^2)=E(({\mathcal Y}_y^N)^2)$, and an easy computation, using known properties of the Green function of two dimensional simple random walk, see, e.g., \cite{lawler}, reveals that \begin{equation} \label{eq-vartgff} |E(({\mathcal Y}_x^N)^2)-\frac{2}{\pi} \log N|\leq C \,. \end{equation} (Recall that, by our convention on constants, $C$ in \eqref{eq-vartgff} does not depend on $N$.) We define ${\mathcal Y}^*_N=\max_{z\in V_N}{\mathcal Y}_z^N$. \subsubsection{Branching Random Walks} In what follows, we consider $N=2^n$ for some positive integer $n$. For $k=0,1,\ldots,n$, let $\BB_k$ denote the collection of subsets of $\Z^2$ consisting of squares of side $2^k$ with corners in $\Z^2$, and let $\BB{\mathcal D}D_k$ denote the subset of $\BB_k$ consisting of squares of the form $([0,2^k-1]\cap \Z)^2+(i2^k,j2^k)$. Note that the collection $\BB{\mathcal D}D_k$ partitions $\Z^2$ into disjoint squares. For $x\in V_N$, let $\BB_k(x)$ denote those elements $B\in \BB_k$ with $x\in B$. Define similarly $\BB{\mathcal D}D_k(x)$. Note that the set $\BB{\mathcal D}D_k(x)$ contains exactly one element, whereas $\BB_k(x)$ contains $2^{2k}$ elements. Let $\{a_{k,B}\}_{k\geq 0, B\in \BB{\mathcal D}D_k}$ denote an i.i.d. family of standard Gaussian random variables. The BRW $\{{\mathcal R}_z^N\}_{z\in V_N}$ is defined by $${\mathcal R}_z^N=\sum_{k=0}^n \sum_{B\in \BB{\mathcal D}D_k(z)} a_{k,B}\,.$$ We again define ${\mathcal R}^*_N=\max_{z\in V_N}{\mathcal R}_z^N$. \subsubsection{Modified Branching Random Walks} We continue to consider $N=2^n$ for some positive integer $n$ and again employ the notation $\BB_k$ and $\BB_k(x)$. Let $\BB_k^N$ denote the collection of subsets of $\Z^2$ consisting of squares of side $2^k$ with lower left corner in $V_N$. Let $\{b_{k,B}\}_{k\geq 0, B\in \BB_k^N}$ denote an i.i.d.
family of centered Gaussian random variables of variance $2^{-2k}$, and define $$ b_{k,B}^N=\left\{\begin{array}{ll} b_{k,B},& B\in \BB_k^N,\\ b_{k,B'},& B\sim_N B'\in \BB_k^N\,. \end{array} \right. $$ The modified branching random walk (MBRW) $\{{\mathcal S}_z^N\}_{z\in V_N}$ is defined by $${\mathcal S}_z^N=\sum_{k=0}^n \sum_{B\in \BB_k(z)} b_{k,B}^N\,.$$ Note that, by construction, $E(({\mathcal R}_z^N)^2) =E(({\mathcal S}_z^N)^2)=n+1$. We again define ${\mathcal S}^*_N=\max_{z\in V_N}{\mathcal S}_z^N$. \subsubsection{Geometric distances} The following are several notions of distances between points in $V_N$. First, $\|\cdot\|$ denotes the Euclidean norm, while $\|\cdot\|_\infty$ denotes the $\ell^\infty$ norm. Thus, for $x,y\in V_N$, $\|x-y\|$ and $\|x-y\|_\infty$ induce metrics with $$\|x-y\|_\infty\leq \|x-y\|\leq \sqrt{2}\|x-y\|_\infty\,.$$ We also need to consider distances on the torus determined by $V_N$. Those are defined by $$ d^N(x,y)=\min_{z: \ z\ \!\!\sim_N \ \!\!y}\ \|x-z\|\,,\quad d^N_\infty(x,y)=\min_{z:\ z\ \!\!\sim_N \ \!\!y} \ \|x-z\|_\infty\,. $$ \subsection{Covariance comparisons} We collect in this subsection some basic facts concerning the covariances of the Gaussian fields introduced earlier. For a centered Gaussian field $\{G_z\}$, we write $\RRR_G(x,y)=E(G_xG_y)$ for its covariance function. Thus, for example, the covariance function of the GFF (on $V_N$) is denoted by $\RRR_{{\mathcal X}^N}$. The following is an estimate on $\RRR_{{\mathcal Y}^N}$, $\RRR_{{\mathcal S}^N}$ and $\RRR_{{\mathcal X}^N}$. \begin{lemma} \label{lem-mbrwcov} There exists a constant $C$ so that, with $N=2^n$, the following estimates hold: for any $x,y\in V_N$, \begin{equation} \label{eq-comp1} |\RRR_{{\mathcal Y}^N}(x,y)-\frac{2\log 2}{\pi}(n-\log_2 d^N(x,y))| \leq C \end{equation} and \begin{equation} \label{eq-comp2} |\RRR_{{\mathcal S}^N}(x,y)-(n-\log_2 d^N(x,y))| \leq C\,. \end{equation} Further, for any $x,y\in V_N+(2N,2N)$, \begin{equation} \label{eq-comp1a} |\RRR_{{\mathcal X}^{4N}}(x,y)-\frac{2\log 2}{\pi}(n-(\log_2 \|x-y\|)_+) |\leq C\,. \end{equation} \end{lemma} \proof We begin with the estimate \eqref{eq-comp2} concerning the MBRW. For $x=(x_1,x_2)$ and $y=(y_1,y_2)$, write, for $i=1,2$, $t_i(x,y)=\min(|x_i-y_i|,|x_i-y_i-N|,|x_i-y_i+N|)$. One then has \begin{eqnarray} \RRR_{{\mathcal S}^N}(x,y)&=& \sum_{k=\lceil \log_2 (d^N_\infty(x,y)+1)\rceil}^n 2^{-2k} \left[2^k-t_1(x,y)\right]\cdot \left[2^k-t_2(x,y)\right]\nonumber\\ &=& \sum_{k=\lceil \log_2 (d^N_\infty(x,y)+1)\rceil}^n \left(1-\frac{t_1(x,y)}{2^k}-\frac{t_2(x,y)}{2^k} +\frac{t_1(x,y)t_2(x,y)}{4^k}\right)\,. \label{eq-280710d} \end{eqnarray} Because $a+b-ab\geq 0$ for $0\leq a,b\leq 1$, we get that \begin{equation} \label{eq-240710a} \RRR_{{\mathcal S}^N}(x,y)\leq n- \log_2 (d^N_\infty(x,y)+1)+2\leq n-\log_2 (d^N(x,y)+1)+3\,. \end{equation} On the other hand, using that $a+b-ab\leq a+b$ for $a,b\geq 0$, we get that \begin{eqnarray} \label{eq-240710b} \RRR_{{\mathcal S}^N}(x,y)&\geq &n- \log_2 (d^N_\infty(x,y)+1)- \sum_{k=\lceil \log_2 (d^N_\infty(x,y)+1)\rceil}^n 2^{-(k-1)} d^N_\infty (x,y)\nonumber\\ &\geq& n-\log_2 (d^N(x,y)+1)-C\,. \end{eqnarray} Combining \eqref{eq-240710a} and \eqref{eq-240710b} yields the claimed estimate on $\RRR_{{\mathcal S}^N}$. We next prove the estimate \eqref{eq-comp1a}.
Note that for $x,y\in V_N+(2N,2N)$, \begin{equation} \label{eq-270710d} \RRR_{{\mathcal X}^{4N}}(x,y)= P^x(\tau_y\leq \tau_{4N})\RRR_{{\mathcal X}^{4N}}(y,y)\,, \end{equation} where $\tau_y=\min\{m\geq 0: w_m=y\}$ and $\tau_{4N}=\min\{m\geq 0: w_m\not\in V_{4N}\}$. Using, e.g., \cite[Exercise 1.6.8]{lawler}, $$ P^x(\tau_y\leq \tau_{4N})= \left(1-\frac{(\log \|x-y\|)_+}{n\log 2}\right)+\frac{O(1)}{n}\,. $$ Moreover, for $x\in V_N+(2N,2N)$, $$\RRR_{{\mathcal X}^{4N}}(x,x)=\frac{2\log 2}{\pi}n+O(1)\,,$$ see, e.g., \cite{sourav}. Combining these estimates yields \eqref{eq-comp1a}. The estimate on $\RRR_{{\mathcal Y}^N}$ in \eqref{eq-comp1} requires more work but is still straightforward. Recall the simple random walk $\{w_m\}$ and, for $y\in V_N$, denote by $[y]_N=\{z\in \Z^2:\ z\ \!\!\sim_N \ \!\!y\}$ the collection of points in $\Z^2$ identified with $y$ for the torus. Then, by the Markov property and the memoryless property of the exponential distribution, \begin{eqnarray} \label{eq-240710c} \RRR_{{\mathcal Y}^N}(x,y)&=& E ({\mathcal Y}^N_y)^2 P^x(\{w_m\}\ \mbox{\rm hits $[y]_N$ before $\tau'$})\nonumber\\ &=& \frac{2\log 2}{\pi} n P^x(\{w_m\}\ \mbox{\rm hits $[y]_N$ before $\tau'$})+O(1)\,, \end{eqnarray} where we recall that $\tau'$ denotes an exponential random variable of mean $N^2$ and we used \eqref{eq-vartgff} in the second equality. Let $\eta$ denote the hitting time of the boundary of a (Euclidean) ball of radius $N/2$ around $x$, that is $$\eta=\min\{m\geq 0: \|w_m-x\|\geq N/2\}\,.$$ Let $\tau_y$ denote the hitting time of $[y]_N$, that is $$\tau_y=\min\{m\geq 0: w_m\in [y]_N\}\,.$$ Note that the probability in the right side of \eqref{eq-240710c} is $P^x(\tau_y<\tau')$. We have \begin{eqnarray} \label{eq-260710a} P^x(\tau_y<\tau')&=&P^x(\tau_y<\eta)+ P^x(\tau_y<\tau', \tau_y\geq \eta)- P^x(\tau_y<\eta, \tau_y\geq \tau')\nonumber\\ &=:&P_1+P_2-P_3\,. \end{eqnarray} By standard estimates for two-dimensional simple random walk, see again e.g., \cite[Exercise 1.6.8]{lawler}, \begin{equation} \label{p1est} |P_1-[n-\log_2\|x-y\|]_+/n|\leq C/n \end{equation} and, using the memoryless property of the exponential distribution, $$P_2\leq \max_{z:\ \|z-[y]_N\|\geq N/4} P^z(\tau_y\leq \tau')\leq C/n\,.$$ To estimate $P_3$, we use the fact (see e.g., \cite[Lemma 10.4]{sourav}) that, for all $m\geq 1$, \begin{equation} \label{eq-250710b} P^x(w_m\in [y]_N)\leq \left\{\begin{array}{ll} \frac{C}{m}e^{-(d^N(x,y))^2/4m}\,,& m\leq N^2,\\ &\\ \frac{C}{N^2}\,,& m>N^2\,. \end{array} \right. \end{equation} Write $P_m(x,z):=P^x(w_m\in [z]_N)$. Then, again using the memoryless property of the exponential distribution and the Markov property of the simple random walk, \begin{equation} \label{eq-250710c} P_3=P^x(\tau'\leq \tau_y<\eta)\leq \frac{C}{N^2}\sum_{m=1}^\infty e^{-m/N^2}\sum_{z\in V_N} P_m(x,z)P^z(\tau_y<\eta)\,. \end{equation} We split the sum in the right side of \eqref{eq-250710c} into three parts, according to the range of $m$ in the summation, writing $P_3=P_{3,1}+P_{3,2}+P_{3,3}$, with the terms in the right side determined according to $m\leq N^2/n$, $m\in (N^2/n, N^2)$ or $m\geq N^2$. We have $$P_{3,1}= \frac{C}{N^2}\sum_{m=1}^{N^2/n} e^{-m/N^2}\sum_{z\in V_N} P_m(x,z)P^z(\tau_y<\eta) \leq \frac{C}{N^2}\sum_{m=1}^{N^2/n} e^{-m/N^2} \leq \frac{C}{n}\,.
$$ Next, consider $P_{3,3}$: we have, using \eqref{eq-250710b} in the first inequality and standard estimates for simple random walk in the second, see \cite[Exercise 1.6.8]{lawler}, \begin{eqnarray*} P_{3,3}&=& \frac{C}{N^2}\sum_{m=N^2}^{\infty} e^{-m/N^2}\sum_{z\in V_N} P_m(x,z)P^z(\tau_y<\eta)\\ &\leq & \frac{C}{N^4} \sum_{m=N^2}^{\infty} e^{-m/N^2}\sum_{z\in V_N} P^z(\tau_y<\eta)\\ &\leq & \frac{C}{N^4} \sum_{m=N^2}^{\infty} e^{-m/N^2}\sum_{z\in V_N} \left(\frac{n-\log_2 d^N(z,y)}{n}\right)\\ &\leq & \frac{C}{N^2} \sum_{z\in V_N} \left(\frac{n-\log_2 d^N(z,y)}{n}\right)\\ &\leq & \frac{C}{N^2} \sum_{r=1}^n 2^{2r} \left(1-\frac{r}{n}\right) \\ &=& C\sum_{r=1}^n 2^{2(r-n)} \left(1-\frac{r}{n}\right)=\frac{C}{n}\sum_{k=1}^n k2^{-2k}\leq \frac{C}{n} \,. \end{eqnarray*} It remains to estimate $P_{3,2}$. Note first that, for $x$ and $y$ fixed and for each fixed value of the integer part of $d^N(x,z)$, there are at most $Cr$ points $z\in V_N$ with $d^N(y,z)\in [r,2r]$. Also, due to \eqref{eq-250710b}, we can write \begin{eqnarray*} P_{3,2}&\leq& \frac{C}{n}+ \frac{C}{N^2}\sum_{m=N^2/n}^{N^2} e^{-m/N^2}\sum_{z\in V_N, d^N(x,z) \leq d_m} P_m(x,z)P^z(\tau_y<\eta)\\ &=:& \frac{C}{n}+\frac{C}{N^2}\sum_{m=N^2/n}^{N^2} e^{-m/N^2} P_{3,2,m}\,,\end{eqnarray*} where $d_m=\sqrt{m\log\log m}\wedge \sqrt{2}N$. By summing radially (so that the index $k$ runs over the possible integer parts of $d^N(x,z)$ and $n-\ell$ runs over the possible integer parts of $\log_2 d^N(y,z)$), we can estimate $P_{3,2,m}$ (using \eqref{eq-250710b} for the estimate $P_m(x,z)\leq Cm^{-1}e^{-k^2/4m}$ and \eqref{p1est} for the estimate $P^z(\tau_y<\eta)\leq C \ell/n$) by $$ \frac{C}{nm}\sum_{k=1}^{d_m} e^{-k^2/4m} \sum_{\ell=1}^n \ell 2^{n-\ell} \leq \frac{CN}{nm}\sum_{k=1}^{d_m} e^{-k^2/4m} \leq \frac{CN}{n\sqrt{m}}\,.$$ Substituting in the expression for $P_{3,2}$, we get $$P_{3,2} \leq \frac{C}{n}+\frac{CN}{N^2 n} \sum_{m=N^2/n}^{N^2} \frac{e^{-m/N^2}}{\sqrt{m}} \leq \frac{C}{n}\,.$$ Combining the estimates on $P_{3,1}, P_{3,2}$ and $P_{3,3}$ and substituting in \eqref{eq-250710c} shows that $P_3\leq C/n$. Together with \eqref{eq-240710c}, \eqref{eq-260710a} and the estimates on $P_1$ and $P_2$, this completes the proof of the claimed estimate on $\RRR_{{\mathcal Y}^N}$ and hence of the lemma. \qed \section{The upper bound} \label{sec-ub} Our goal in this section is to provide the upper bound in Theorem \ref{theo-2}; this is achieved in Proposition \ref{prop-tgffmbrw} below. We begin by relating the maxima of the GFF and the TGFF with the MBRW. \begin{lemma} \label{lem-compgfftgffmbrw} Let $\{g_z\}_{z\in V_N}$ denote a collection of i.i.d. standard Gaussian random variables. Then, there exists a constant $C_1$ so that \begin{equation} \label{eq-270710c} \max(E {\mathcal X}_N^*,E{\mathcal Y}_N^*) \leq \sqrt{\frac{2\log 2}{\pi}} E(\max_{z\in V_N}({\mathcal S}_z^N + C_1g_z))\,. \end{equation} \end{lemma} \proof We give the argument for the GFF; the argument for the TGFF is similar. Note first that, by the definitions and an application of Lemma \ref{sud-fer}, \begin{equation} \label{eq-270710f} E( {\mathcal X}_N^*)\leq E(\max_{z\in V_N+(2N,2N)} {\mathcal X}_z^{4N})\,.
\end{equation} On the other hand, writing $x_N=x+(2N,2N)$, $y_N=y+(2N,2N)$ for $x,y\in V_N$ and using \eqref{eq-comp1a} of Lemma \ref{lem-mbrwcov}, we have \begin{equation} \label{eq-290710c} E(({\mathcal X}^{4N}_{x_N}-{\mathcal X}^{4N}_{y_N})^2)\leq \frac{2\log 2}{\pi}E(({\mathcal S}^{N}_x-{\mathcal S}^{N}_y)^2)+C\,. \end{equation} Another application of Lemma \ref{sud-fer}, together with \eqref{eq-270710f}, completes the proof of \eqref{eq-270710c} for the GFF. \qed It follows from \cite{bramson2} and \cite{ABR} that \begin{equation} \label{eq-270710b} E {\mathcal R}_N^*=2\sqrt{\log 2}\, n -\frac{3}{4\sqrt{\log 2}} \log n+O(1)\,. \end{equation} (The statement in \cite{bramson2} is given for branching Brownian motion, but the argument given there applies to our BRW as well.) This fact, together with Lemmas \ref{sud-fer} and \ref{lem-compgfftgffmbrw}, yields the following upper bound on the GFF and the TGFF. \begin{proposition} \label{prop-tgffmbrw} There exists a constant $C_2$ such that \begin{equation} \label{eq-270710a} \max(E{\mathcal X}_N^*,E{\mathcal Y}_N^*)\leq 2\log 2\sqrt{\frac{2}{\pi}}n- \frac{3}{4}\sqrt{\frac{2}{\pi}}\log n+C_2\,. \end{equation} \end{proposition} \proof By construction, for $x,y\in V_N$, $E(({\mathcal R}_x^N)^2)=E( ({\mathcal S}_x^N)^2)$ and $\RRR_{{\mathcal R}^N}(x,y)\leq \RRR_{{\mathcal S}^N}(x,y)+C$. By Lemma \ref{sud-fer}, this yields the existence of a positive integer $\bar C_1$ such that, with $C_1$ and $g_z$ as in Lemma \ref{lem-compgfftgffmbrw}, \begin{equation} \label{eq-280710a} E(\max_{z\in V_N} ({\mathcal S}_z^N+C_1 g_z))\leq E(\max_{z\in V_N}({\mathcal R}_z^N+\bar C_1g_z))\,. \end{equation} Note however that, by construction, $$E(\max_{z\in V_N}({\mathcal R}_z^N+\bar C_1g_z))\leq E({\mathcal R}_{2^{ {\bar C_1}^2} N}^*)\,.$$ Combining this with \eqref{eq-270710b}, \eqref{eq-280710a} and Lemma \ref{lem-compgfftgffmbrw} completes the proof of \eqref{eq-270710a} and hence of Proposition \ref{prop-tgffmbrw}. \qed \section{The lower bound: preliminaries} \label{sec-lbpre} In this section, we bound from below the expected maxima of the GFF and the TGFF by an appropriate truncation of the MBRW. An analysis of the latter, provided in Sections \ref{LBMBRW} and \ref{sec-6}, will then complete the proof of Theorem \ref{theo-2}. We begin by introducing the truncation of the MBRW alluded to above. Recall that $${\mathcal S}_z^N=\sum_{k=0}^n \sum_{B\in \BB_k(z)} b_{k,B}^N\,.$$ For a non-negative integer $k_0\leq n$, define $${\mathcal S}_z^{N,k_0}=\sum_{k=k_0}^n \sum_{B\in \BB_k(z)} b_{k,B}^N\,,$$ and write ${\mathcal S}_{N,k_0}^*=\max_{z\in V_N} {\mathcal S}_z^{N,k_0}$. Clearly, ${\mathcal S}^{N,0}={\mathcal S}^N$. Define, for $x,y\in V_N$, $\rho_{N,k_0}(x,y)= E( ({\mathcal S}_x^{N,k_0}-{\mathcal S}_y^{N,k_0})^2)$. The following are basic properties of $\rho_{N,k_0}$. \begin{lemma} \label{lem-k0} The function $\rho_{N,k_0}$ has the following properties.\\ \begin{eqnarray} &&\rho_{N,k_0}(x,y)\ \mbox{\rm decreases in $k_0$}.\label{k0prop1}\\ && \limsup_{k_0\to\infty} \limsup_{N\to\infty} \sup_{x,y\in V_N: d^N(x,y)\leq 2^{\sqrt{k_0}}} \rho_{N,k_0}(x,y)=0\,. \label{k0prop2}\\ &&\mbox{\rm There is a function $g:\Z_+\to \R_+$ so that $g(k_0)\to_{k_0\to \infty} \infty$ } \nonumber\\ && \mbox{\rm and, for $x,y\in V_N$ with $d^N(x,y)\geq 2^{\sqrt{k_0}}$,} \label{k0prop3} \\ && \rho_{N,k_0}(x,y)\leq \rho_{N,0}(x,y)-g(k_0)\,, \quad n>k_0.
\nonumber \end{eqnarray} \end{lemma} \proof As in \eqref{eq-280710d} and employing the same notation, we have, for $x\neq y$, \begin{eqnarray} \label{eq-280710h} &&\rho_{N,k_0}(x,y)\\ &=& \sum_{k=\lceil \log_2 (d^N_\infty(x,y)+1)\rceil\vee k_0}^n 2\left(\frac{t_1(x,y)}{2^k}+\frac{t_2(x,y)}{2^k} -\frac{t_1(x,y)t_2(x,y)}{4^k}\right)\nonumber\\ &&\quad+2(\lceil \log_2 (d_\infty^N(x,y)+1) \rceil-k_0)_+ \,. \nonumber \end{eqnarray} All properties follow at once from this representation. Indeed, \eqref{k0prop1} and \eqref{k0prop2} are immediate whereas, to see \eqref{k0prop3}, note that, for $ \log_2 d_\infty^N(x,y)\geq \sqrt{k_0}-1$, $$\rho_{N,0}(x,y)-\rho_{N,k_0}(x,y)\geq \sqrt{k_0}-1\,. $$ \qed An immediate corollary of Lemmas \ref{sud-fer}, \ref{lem-mbrwcov} and \ref{lem-k0} is the following domination by the TGFF of a truncated MBRW. \begin{corollary} \label{cor-tgffmbrw} There exists a constant $k_0$ such that, for all $N=2^n$ large and all $x,y\in V_N$, \begin{equation} \label{eq-290710a} \frac{2\log 2}{\pi} \rho_{N,k_0}(x,y)\leq E( ({\mathcal Y}_x^N-{\mathcal Y}_y^N)^2)\,. \end{equation} In particular, \begin{equation} \label{eq-290710aa} E{\mathcal Y}_N^*\geq \sqrt{\frac{2\log 2}{\pi}}E{\mathcal S}_{N,k_0}^*\,. \end{equation} \end{corollary} We also need a comparison between the maxima of the GFF and of the MBRW. Note that \begin{equation} \label{eq-290710b} {\mathcal X}_N^*\geq \max_{z\in V_{N/4}+(N/2,N/2)} {\mathcal X}_z^N\,. \end{equation} On the other hand, for $x,y\in V_{N/4}$, we have, with the same proof as that of \eqref{eq-290710c}, \begin{equation} \label{eq-290710d} E(({\mathcal X}^{N}_{x+(N/2,N/2)}-{\mathcal X}^{N}_{y+(N/2,N/2)})^2)\geq \frac{2\log 2}{\pi}E(({\mathcal S}^{N/4}_x-{\mathcal S}^{N/4}_y)^2)-C\,. \end{equation} Using again Lemmas \ref{sud-fer} and \ref{lem-k0}, we get the following domination by the GFF of a truncated MBRW. \begin{corollary} \label{cor-gffmbrw} There exists a constant $k_0$ such that, for all $N=2^n$ large, \begin{equation} \label{eq-290710ab} E{\mathcal X}_N^*\geq \sqrt{\frac{2\log 2}{\pi}}E{\mathcal S}_{N/4,k_0}^*\,. \end{equation} \end{corollary} \section{A lower bound for the truncated MBRW} \label{LBMBRW} In this section, we present the proof of the lower bound in Theorem \ref{theo-2} and an analogous bound for the TGFF. That is, we prove the following. \begin{proposition} \label{prop-lb} The following holds: \begin{equation} \label{eq-mainilb} E{\mathcal X}_{2^n}^*\geq c_1n - c_2 \log n+O(1)\,,\quad E{\mathcal Y}_{2^n}^*\geq c_1n - c_2 \log n+O(1)\,, \end{equation} with $c_1=2\sqrt{2/\pi}\log 2$ and $c_2=(3/4)\sqrt{2/\pi}$. \end{proposition} In view of Corollaries \ref{cor-tgffmbrw} and \ref{cor-gffmbrw}, it is immediate that Proposition \ref{prop-lb} follows from the following proposition, whose proof will take the rest of this section and the next one. \begin{proposition} \label{proplbmbrw} There exists a function $f:\Z_+\to \R_+$ such that, for all $N\geq 2^{2 k_0}$, \begin{equation} \label{eq-290710e} E{\mathcal S}_{N,k_0}^*\geq (2 \sqrt{\log 2})n-(3/(4\sqrt{\log 2})) \log n-f(k_0)\,. \end{equation} \end{proposition} When proving Proposition \ref{proplbmbrw}, it will be more convenient to restrict the maximum in the definition of ${\mathcal S}^*_{N,k_0}$ to a subset of $V_N$.
Toward this end, set $V_N'=V_{N/2}+(N/4,N/4)\subset V_N$ and define $$\tilde {\mathcal S}^*_{N,k_0}=\max_{z\in V_N'} {\mathcal S}_z^{N,k_0}\,,\quad \tilde {\mathcal S}^*_N=\tilde {\mathcal S}^*_{N,0}\,.$$ The main ingredient in the proof of Proposition \ref{proplbmbrw} is a lower bound on the upper tail of the distribution of $\tilde {\mathcal S}_{N,0}^*$, as given in Proposition \ref{proplbmbrw1} below. \begin{proposition} \label{proplbmbrw1} Let $A_n=(2\sqrt{\log 2})n-(3/(4\sqrt{\log 2}))\log n$. There exists a constant $\delta_0\in (0,1)$ such that, for all $N$, \begin{equation} \label{eq-300710a} P(\tilde {\mathcal S}_{N}^*\geq A_n)\geq \delta_0\,. \end{equation} \end{proposition} The proof of Proposition \ref{proplbmbrw1} is technically involved and is deferred to Section \ref{sec-6}. In the rest of this section, we show how Proposition \ref{proplbmbrw} follows from Proposition \ref{proplbmbrw1}.\\ \noindent {\it Proof of Proposition \ref{proplbmbrw} (assuming Proposition \ref{proplbmbrw1}).} Our plan is to show that the left tail of $\tilde {\mathcal S}_{N}^{*}$ decays exponentially fast; together with the bound \eqref{eq-300710a}, this will imply \eqref{eq-290710e} with $k_0=0$. At the end of the proof, we show how the bound for $k_0>0$ follows from the case $k_0=0$. In order to show the exponential decay, we compare $\tilde{{\mathcal S}}_{N}^{*}$, after appropriate truncation, to four independent copies of the maximum over smaller boxes, and then iterate. For $i=1,2,3,4$, introduce the four sets $W_{N,i}=[0,N/32)^2+z_i$ where $z_1=(N/4,N/4)$, $z_2=(23N/32,N/4)$, $z_3=(N/4,23N/32)$ and $z_4=(23N/32,23N/32)$. (We have used here that $3/4-1/32= 23/32$.) Note that $\cup_i W_{N,i} \subset V_N$, and that these sets are $N/4$-separated, that is, for $i\neq j$, $$\min_{z\in W_{N,i}, z'\in W_{N,j}} d_\infty^N(z,z')> N/4\,.$$ Recall the definition of ${\mathcal S}_z^{N}$ and define, for $n>6$, $$\bar{\mathcal S}_z^{N}=\sum_{k=0}^{n-6} \sum_{B\in \BB_k(z)} b_{k,B}^N\,;$$ note that $${\mathcal S}_z^{N}-\bar{\mathcal S}_z^{N}= \sum_{j=0}^5 \sum_{B\in \BB_{n-j}(z)} b_{n-j,B}^N \,.$$ Our first task is to bound the probability that $\max_{z\in V_N}({\mathcal S}_z^{N}-\bar{\mathcal S}_z^{N})$ is large. This will be achieved by applying Fernique's criterion in conjunction with Borell's inequality. We introduce some notation. Let $m(\cdot)=m_N(\cdot)$ denote the uniform probability measure on $V_N$ (i.e., the counting measure normalized by $|V_N|$) and let $g:(0,1]\to \R_+$ be the function defined by $$ g(t)=\left( \log (1/t)\right)^{1/2}\,.$$ Set $\G_z^{N} ={\mathcal S}_z^{N}-\bar{\mathcal S}_z^{N}$ and $$B(z,\varepsilon)=\{z'\in V_N: E( (\G_z^N-\G_{z'}^N)^2) \leq \varepsilon^2\}\,.$$ Then, Fernique's criterion, see \cite[Theorem 4.1]{adler}, implies that, for some universal constant $K\in (1,\infty)$, \begin{equation} \label{eq-fer} E(\max_{z\in V_N}\G_z^N)\leq K\sup_{z\in V_N} \int_0^\infty g(m(B(z,\varepsilon))) d\varepsilon\,.
\end{equation} For $n\geq 6$, we have, in the notation of Lemma \ref{lem-k0}, $$ E((\G_z^N-\G_{z'}^N)^2)=\rho_{N,n-5}(z,z')\,.$$ Therefore, employing \eqref{eq-280710h}, there exists a constant $C$ such that, for $\varepsilon\geq 0$, $$ \{z'\in V_N: d^N_\infty(z,z')\leq \varepsilon^2 N/C\}\subset B(z,\varepsilon)\,.$$ In particular, for $z\in V_N$ and $\varepsilon>0$, $$ m(B(z,\varepsilon))\geq ( (\varepsilon^4 /C^2)\vee (1/N^2) )\wedge 1\,.$$ Consequently, $$\int_0^\infty g(m(B(z,\varepsilon))) d\varepsilon \leq \int_0^{\sqrt{C/N}} \sqrt{\log(N^2)} d\varepsilon+ \int_{\sqrt{C/N}}^{\sqrt{C}} \sqrt{\log(C^2/\varepsilon^4)} d\varepsilon<C_4\,,$$ for some constant $C_4$. Applying Fernique's criterion \eqref{eq-fer}, we deduce that $$E (\max_{z\in V_N} ({\mathcal S}_z^{N}-\bar{\mathcal S}_z^{N}))\leq C_4K\,.$$ The expectation $E(({\mathcal S}_z^{N}-\bar{\mathcal S}_z^{N})^2)$ is bounded in $N$. Therefore, using Borell's inequality, see, e.g., \cite[Theorem 2.1]{adler}, it follows that, for some constant $C_5$ and all $\beta>0$, \begin{equation} \label{eq-300710b} P( \max_{z\in V_N} ({\mathcal S}_z^{N}-\bar{\mathcal S}_z^{N})\geq C_4K+\beta)\leq 2e^{-C_5 \beta^2}\,. \end{equation} We also note the following bound, which is obtained similarly: there exist constants $C_6,C_7$ such that, for all $\beta>0$, \begin{equation} \label{eq-300710bb} P( \max_{z\in V_{N/16}'} (\bar{\mathcal S}_z^{N}-{\mathcal S}_z^{N/16})\geq C_6+\beta)\leq 2e^{-C_7 \beta^2}\,. \end{equation} The advantage of working with $\bar {\mathcal S}^{N}$ instead of ${\mathcal S}^{N}$ is that the fields $\{\bar{\mathcal S}_z^{N}\}_{z\in W_{N,i}}$ are independent for $i=1,\ldots,4$. For every $\alpha,\beta> 0$, we have the bound \begin{eqnarray} \label{eq-300710e} &&P(\tilde {\mathcal S}_{N}^*\geq A_n-\alpha)\\ &\geq & P(\max_{z\in V_N'} \bar{\mathcal S}^{N}_z\geq A_n+C_4-\alpha+\beta)- P( \max_{z\in V_N'} ({\mathcal S}_z^{N}-\bar{\mathcal S}_z^{N})\geq C_4+\beta)\nonumber\\ &\geq& P(\max_{z\in V_N'}\bar{\mathcal S}^{N}_z\geq A_n+C_4-\alpha+\beta)- 2e^{-C_5 \beta^2}\,, \nonumber \end{eqnarray} where \eqref{eq-300710b} was used in the last inequality. On the other hand, for any $\gamma,\gamma'>0$, \begin{eqnarray*} &&P(\max_{z\in V_N'}\bar{\mathcal S}_z^{N}\geq A_n-\gamma)\geq P(\max_{i=1}^4\max_{z\in W_{N,i}} \bar{\mathcal S}_z^{N} \geq A_n-\gamma)\\ &=&1-( P(\max_{z\in W_{N,1}} \bar{\mathcal S}_z^{N} <A_n-\gamma))^4\\ &\geq& 1-\left( P(\max_{z\in V_{N/16}'} {\mathcal S}_z^{N/16} <A_n-\gamma+C_6+\gamma')+2e^{-C_7(\gamma')^2}\right)^4 \,, \end{eqnarray*} where \eqref{eq-300710bb} was used in the inequality. Combining this estimate with \eqref{eq-300710e}, we get that, for any $\alpha,\beta,\gamma'>0$, \begin{eqnarray} \label{eq-300710f} &&P(\tilde {\mathcal S}_{N}^*\geq A_n-\alpha) \\ &\geq& 1-2e^{-C_5 \beta^2}\nonumber\\ && \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! -\left( P(\max_{z\in V_{N/16}'} {\mathcal S}_z^{N/16} <A_n+C_4+C_6+\beta+\gamma'-\alpha)+2e^{-C_7(\gamma')^2}\right)^4\,. \nonumber \end{eqnarray} We now iterate the last estimate.
Let $\eta_0=1-\delta_0<1$ and, for $j\geq 1$, choose a constant $C_8=C_8(\delta_0)>0$ so that, for $\beta_j=\gamma_j'=C_8 \sqrt{\log (1/\eta_j)}$, $$ \eta_{j+1}=2e^{-C_5 \beta_j^2}+(\eta_j+2e^{-C_7 (\gamma_j')^2})^4$$ satisfies $\eta_{j+1}<\eta_j(1-\delta_0)$. (It is not hard to verify that such a choice is possible.) With this choice of $\beta_j$ and $\gamma_j'$, set $\alpha_0=0$ and $\alpha_{j+1}=\alpha_j+C_4+C_6+\beta_j+ \gamma_j'$, noting that $\alpha_j\leq C_9 \sqrt{\log(1/\eta_j)}$ for some $C_9=C_9(\delta_0)$. Substituting in \eqref{eq-300710f} and using Proposition \ref{proplbmbrw1} to start the recursion, we get that \begin{equation} \label{eq-020910a} P(\tilde {\mathcal S}_{N}^*\geq A_n-\alpha_{j+1}) \geq 1-\eta_{j+1}\,. \end{equation} Therefore, \begin{eqnarray*} E\tilde{\mathcal S}_{N}^*&\geq& A_n-\int_0^\infty P(\tilde{\mathcal S}_{N}^*\leq A_n-\theta)d\theta\\ &\geq& A_n-\sum_{j=0}^\infty \alpha_j P(\tilde{\mathcal S}_{N}^*\leq A_n-\alpha_j)\\ &\geq &A_n- C_9 \sum_{j=0}^\infty \eta_j\sqrt{\log(1/\eta_j)}\,. \end{eqnarray*} Since $\eta_j\leq (1-\delta_0)^j$, it follows that there exists a constant $C_{10}>0$ so that \begin{equation} \label{eq-020910g} E{\mathcal S}_{N}^* \geq E\tilde {\mathcal S}_{N}^* \geq A_n-C_{10}\,. \end{equation} This completes the proof of Proposition \ref{proplbmbrw} in the case $k_0=0$. To consider the case $k_0>0$, define $$\hat {\mathcal S}^*_{N,k_0}=\max_{z\in V_N'\cap 2^{k_0}\Z^2} {\mathcal S}_z^{N,k_0}\,. $$ Then, $\hat {\mathcal S}^*_{N,k_0}\leq \tilde {\mathcal S}^*_{N,k_0}$. On the other hand, $\hat {\mathcal S}^*_{N,k_0}$ has, by construction, the same distribution as $\tilde {\mathcal S}^*_{2^{-k_0}N,0}=\tilde {\mathcal S}^*_{2^{-k_0}N}$. Therefore, for any $y\in \R$, $$ P(\tilde {\mathcal S}^*_{N,k_0}\geq y)\geq P(\hat {\mathcal S}^*_{N,k_0}\geq y) \geq P(\tilde {\mathcal S}^*_{2^{-k_0}N}\geq y)\,. $$ We conclude that $$ E{\mathcal S}^*_{N,k_0}\geq E\tilde {\mathcal S}^*_{N,k_0}\geq E\tilde {\mathcal S}^*_{2^{-k_0}N}\,.$$ Application of \eqref{eq-020910g} completes the proof of Proposition \ref{proplbmbrw}. \qed \section{Proof of Proposition \ref{proplbmbrw1}} \label{sec-6} The proof is based on the second moment method and is very similar to the argument in \cite[Section 6]{bramson2}. We begin by introducing some notation. Recall that, for $z\in V_N$, $$ {\mathcal S}_z^{N}=\sum_{k=0}^n \sum_{B\in \BB_k(z)} b_{k,B}^N\,.$$ We introduce a time parameter, setting \begin{equation} \label{eq-mbrwtime} {\mathcal S}_z^N(j)=\sum_{k=n-j}^n \sum_{B\in \BB_k(z)} b_{k,B}^N\, \end{equation} for $j=0,\ldots,n$. Fix a (large) constant $c_5$ and introduce the function $L_n(j)$, $j=0,1,\ldots,n$, with $L_n(0)=L_n(n)=0$ and $$ L_n(j)=\left\{\begin{array}{ll} c_5 \log j,& j=1,\ldots,\lfloor n/2\rfloor,\\ c_5 \log (n-j),& j=\lfloor n/2\rfloor+1,\ldots,n-1\,. \end{array} \right. $$ We next introduce events involving the path ${\mathcal S}_z^N(\cdot)$. Recall that $A_n=(2\sqrt{\log 2})n- (3/(4\sqrt{\log 2}))\log n$. For $z\in V_N$, define the event $$ \CC_z=\{{\mathcal S}_z^N(j)\leq \frac{j}{n} (A_n+1)-L_n(j)+1, j=0,1,\ldots,n, {\mathcal S}_z^N\in [A_n,A_n+1]\}\,.$$ Define $$ h=\sum_{z\in V_N'} {\bf 1}_{\CC_z}\,.$$ We have the following proposition.
\begin{proposition} \label{prop-m1} There exists a constant $C_{11}>0$ such that \begin{equation} \label{eq-260810a} Eh\geq C_{11}^{-1} \end{equation} and \begin{equation} \label{eq-260810b} Eh^2\leq C_{11}\,. \end{equation} \end{proposition} Proposition \ref{proplbmbrw1} follows at once from Proposition \ref{prop-m1}, the Cauchy--Schwarz inequality $P(h\geq 1)\geq (Eh)^2/E(h^2)$, and the inequality $$ P(\tilde {\mathcal S}^*_{N}\geq A_n)\geq P(h\geq 1)\,.$$ The rest of this section is devoted to the proof of Proposition \ref{prop-m1}. In the proof, certain crucial estimates (Lemmas \ref{lem-mauryinsist} and \ref{lem-11}) are discrete analogues of corresponding results in \cite{bramson2}. We provide in the appendix some detail on the proof of these lemmas.\\ \noindent {\bf Proof of Proposition \ref{prop-m1}} We begin with the following lemma; the appendix supplies details on the proof. \begin{lemma} \label{lem-mauryinsist} For some $C_{12},C_{13}>0$, \begin{equation} \label{eq-260810c} C_{12}N^{-2}\geq P(\CC_z)\geq C_{13} N^{-2}\,. \end{equation} \end{lemma} It follows from Lemma \ref{lem-mauryinsist} that $$ Eh\geq C_{13} |V_N'|/|V_N|= C_{13}/4\,,$$ proving \eqref{eq-260810a}. To compute the second moment in \eqref{eq-260810b}, we first set $$r(z,z')=n-\lceil \log_2 (d^N_\infty(z,z')+1)\rceil\,,$$ for $z,z'\in V_N'$. A crucial observation is that, by construction, \begin{eqnarray} \label{eq-270810a} && \mbox{\rm the process $\{{\mathcal S}_{z'}^N(\ell+r(z,z'))-{\mathcal S}_{z'}^N(r(z,z'))\}_{\ell\geq 0}$}\nonumber\\ && \mbox{\rm is independent of the sigma algebra generated by}\\ &&\mbox{\rm the processes $\{{\mathcal S}_z^N(j)\}_{j\geq 0}$ and $\{{\mathcal S}_{z'}^N(j)\}_{j\leq r(z,z')}$.}\nonumber \end{eqnarray} (Note that the boxes involved in the construction of the first process are disjoint from those of the other two processes.) We employ the decomposition \begin{eqnarray*} Eh^2 &=&\sum_{z,z'\in V_N'} P(\CC_z, \CC_{z'})\\ &=& \sum_{z,z': r(z,z')<n/2} P(\CC_z, \CC_{z'}) +\sum_{z,z': r(z,z')\geq n/2} P(\CC_z, \CC_{z'}) =: Q_1+Q_2\,.\end{eqnarray*} We begin by considering $Q_2$. For this, we introduce the event $$ \tilde \CC_{z,z'}=\{ {\mathcal S}_{z'}^N(r(z,z'))\leq \frac{r(z,z')}{n} (A_n+1)-L_n(r(z,z'))+1, {\mathcal S}_{z'}^N\in [A_n,A_n+1]\}\,,$$ noting that $\CC_{z'}\subset \tilde\CC_{z,z'}$. It follows from \eqref{eq-270810a} that \begin{eqnarray} \label{eq-270810b} P(\CC_z,\CC_{z'})&\leq & P(\CC_z,\tilde \CC_{z,z'})\nonumber\\ &\leq& P(\CC_z)P\left(G_{z,z'} \geq \left(1-\frac{r(z,z')}{n}\right)(A_n+1)+L_n(r(z,z'))\right)\,, \end{eqnarray} where $G_{z,z'}$ is a centered Gaussian random variable of variance $$n-r(z,z')=\lceil \log_2 (d^N_\infty(z,z')+1)\rceil=: u(z,z').$$ Therefore, using \eqref{eq-260810c}, \begin{eqnarray} \label{eq-270810d} \nonumber P(\CC_z,\CC_{z'})&\leq& C_{14}N^{-2} \exp\left(-((A_n/n)u(z,z')+ L_n(r(z,z')))^2/2 u(z,z')\right)\\ &\leq &C_{15} 2^{-2n - 2\log_2 d^N_{\infty}(z,z')} e^{3 \log n} e^{-(A_n/n)L_n(r(z,z'))}\,. \end{eqnarray} Since the number of points $z'$ with $d^N_\infty(z,z')\in [2^k,2^{k+1}]$ is bounded by a constant multiple of $2^{2k}$, we conclude from \eqref{eq-270810d} that $$Q_2 \leq C_{16}\sum_{k=n/2}^n (n-k)^{-c_5A_n/n}n^{3}<C_{17}\,,$$ if $c_5$ is chosen large enough. It thus remains to handle $Q_1$.
Introduce the events $${\mathcal D}_{z,z'}^{(1)}=\{ {\mathcal S}_{z'}^N(r(z,z'))\leq \frac{r(z,z')}{n} (A_n+1)-L_n(r(z,z'))+1 \}$$ and, for $w\in \R$, \begin{eqnarray*} {\mathcal D}_{z,z',w}^{(2)}&=&\{ {\mathcal S}_{z'}^N(j)-{\mathcal S}_{z'}^N(r(z,z'))\leq \frac{j}{n} (A_n+1)-w+1, j=r(z,z'),\ldots,n,\\ &&{\mathcal S}_{z'}^N(n)-{\mathcal S}_{z'}^N(r(z,z'))\in [A_n-w,A_n+1-w]\}\,. \end{eqnarray*} It follows again from \eqref{eq-270810a} that \begin{eqnarray} \label{eq-270810bnew} P(\CC_z,\CC_{z'})&\leq & P(\CC_z,{\mathcal D}_{z,z'}^{(1)},{\mathcal D}_{z,z',{\mathcal S}_{z'}^N(r(z,z'))}^{(2)})\nonumber\\ &\leq& P(\CC_z)\max_{w\leq r(z,z') (A_n+1)/n-L_n(r(z,z'))+1} P({\mathcal D}_{z,z',w}^{(2)})\,. \end{eqnarray} To analyze $P({\mathcal D}_{z,z',w}^{(2)})$, we employ the following lemma. (Details of the proof are given in the appendix.) \begin{lemma} \label{lem-11} With notation as above, there exist constants $C_{19}$ and $C_{20}$ so that, if $r(z,z')\leq n/2$ and $w\leq r(z,z') (A_n+1)/n-L_n(r(z,z'))+1$, then \begin{equation} \label{eq-270810f} P({\mathcal D}_{z,z',w}^{(2)})\leq C_{20} (L_n(r(z,z'))+1) \cdot 2^{-2\log_2 d^N_\infty(z,z')} \cdot e^{- C_{19}L_n(r(z,z'))}\,. \end{equation} \end{lemma} Substituting (\ref{eq-270810f}) of the lemma into (\ref{eq-270810bnew}), we conclude that $$ Q_1\leq C_{20} \sum_{k=0}^{n/2} (L_n(k)+1)e^{-C_{19} L_n(k)}\leq C_{21}\,.$$ This completes the proof of Proposition \ref{prop-m1}. \qed \section{Appendix} \begin{appendix} \setcounter{equation}{0} \renewcommand{\theequation}{A.\arabic{equation}} In this appendix, we provide more detail on the bounds in \eqref{eq-260810c} of Lemma \ref{lem-mauryinsist} and \eqref{eq-270810f} of Lemma \ref{lem-11}, which in both cases are very similar to material in \cite{bramson2}. \subsubsection*{Bounds in Lemma \ref{lem-mauryinsist}} Using the Brownian bridge $\frak{z}(s)$, $s\in [0,n]$, that is, standard Brownian motion $\frak{x}(s)$, $s\ge 0$, conditioned on $\frak{x}(n)=0$, one has \begin{equation} \label{eqA1} P(\frak{z}(s)<2, s=0,\ldots,n) \ge P(\CC_z) / K(n) \ge P(\frak{z}(s) < 1-L_n(s), s=0,\ldots,n), \end{equation} where $K(n) = P(\frak{x}(n) \in [A_n, A_n+1])$. One can check that $(N^2 / n)K(n) \in [C_{22},C_{23}]$, for $0 < C_{22}< C_{23} < \infty$. So, in order to demonstrate Lemma \ref{lem-mauryinsist}, it suffices to show the bounds on each side of (\ref{eqA1}) are of order $1/n$. By \cite[Proposition 2$^\prime$ on page 555]{bramson2}, \begin{equation} \label{eqA2} P(\frak{z}(s) < 1-L_n(s), s\in [0,n]) \ge C_{24} /n, \end{equation} which gives the desired lower bound. One obtains the analogous upper bound $C_{25} /n$ for the left side of (\ref{eqA1}) by applying the reflection principle to the Brownian bridge; see also \cite[Lemma 9]{bramson2}. (The bound in discrete time is the same as that in continuous time, up to the constant $C_{25}$, since the ``overshoot'' of the normal past a boundary, over a unit time interval, has bounded second moment.) \subsubsection*{Bound in Lemma \ref{lem-11}} The bound \eqref{eq-270810f} is obtained in the same manner as are parts (a) and (b) of Lemma 11 on page 565 of \cite{bramson2}. One can apply the reflection principle as in part (a), but for discrete time instead of continuous time. (As in the derivation of the upper bound of (\ref{eq-260810c}), the ``overshoot'' of the normal only affects the constant in front.)
One obtains with a little work that, for $w\le r(z,z^{\prime})(A_{n}+1)/n - L_{n}(r(z,z^{\prime}))+1$, \begin{equation} \label{eqA3} P({\mathcal D}_{z,z^{\prime},w}^{(2)}) \le C_{26} n^{-\frac{3}{2}}(-w + \frac{r(z,z^{\prime})}{n}A_n + 2) \exp\{-(A_n - w)^2 / 2u(z,z^{\prime})\}. \end{equation} As in part (b) of Lemma 11 of \cite{bramson2}, for the above range of $w$, the right side of (\ref{eqA3}) is maximized at the boundary $w = r(z,z^{\prime})(A_{n}+1)/n - L_{n}(r(z,z^{\prime}))+1$. Plugging this value of $w$ into the right side of (\ref{eqA3}) yields (\ref{eq-270810f}). \end{appendix} \begin{thebibliography}{9999999} \bibitem[ABR09]{ABR} L. Addario-Berry and B. Reed, Minima in branching random walks, {\it Ann. Probab.} {\bf 37} (2009), pp. 1044--1079. \bibitem[Ad90]{adler} R. Adler, {\it An Introduction to Continuity, Extrema and Related Topics for General Gaussian Processes}, Lecture Notes -- Monograph Series, Institute of Mathematical Statistics, Hayward, CA (1990). \bibitem[BDG01]{BDG} E. Bolthausen, J.-D. Deuschel and G. Giacomin, Entropic repulsion and the maximum of the two-dimensional harmonic crystal, {\it Ann. Probab.} {\bf 29} (2001), pp. 1670--1692. \bibitem[BDZ10]{BDZ} E. Bolthausen, J.-D. Deuschel and O. Zeitouni, Recursions and tightness for the maximum of the discrete, two dimensional Gaussian Free Field. Submitted.\\ {http://arxiv.org/pdf/1005.5417v1} \bibitem[Br78]{bramson2} M. Bramson, Maximal displacement of branching Brownian motion, {\it Comm. Pure Appl. Math.} {\bf 31} (1978), pp. 531--581. \bibitem[Br83]{bramson3} M. Bramson, Convergence of Solutions of the Kolmogorov Equation to Travelling Waves, {\it Mem. Amer. Math. Soc.} {\bf 44} (1983), \#285. \bibitem[CLD01]{CLD} D. Carpentier and P. Le Doussal, Glass transition for a particle in a random potential, front selection in nonlinear renormalization group, and entropic phenomena in Liouville and Sinh-Gordon models, {\it Phys. Rev. E} {\bf 63} (2001), 026110. \bibitem[Ch08]{sourav} S. Chatterjee, {\it Chaos, concentration, and multiple valleys}, arXiv:0810.4221v2. \bibitem[DH91]{DH91} F. M. Dekking and B. Host, Limit distributions for minimal displacement of branching random walks, {\it Probab. Theory Relat. Fields} {\bf 90} (1991), pp. 403--426. \bibitem[Fe75]{sudfer} X. Fernique, Regularit\'{e} des trajectoires des fonctions al\'{e}atoires gaussiennes, Ecole d'Et\'{e} de Probabilit\'{e}s de Saint-Flour, IV-1974, pp. 1--96. {\it Lecture Notes in Math.} {\bf 480}, Springer, Berlin (1975). \bibitem[La91]{lawler} G. Lawler, {\it Intersections of Random Walks}, Birkh\"{a}user, Boston (1991). \end{thebibliography} \begin{tabular}{llll} & Maury Bramson&& Ofer Zeitouni\\ & School of Mathematics && School of Mathematics\\ &University of Minnesota&& University of Minnesota\\ &206 Church St. SE && 206 Church St. SE\\ &Minneapolis, MN 55455&& Minneapolis, MN 55455\\ &[email protected]&& \; and \\ & && Faculty of Mathematics\\ &&& Weizmann Institute of Science\\ &&& Rehovot 76100, Israel\\ &&& [email protected] \end{tabular} \end{document}
\begin{document} \title*{Networks, Random Graphs and Percolation} \author{Philippe Deprez and Mario V.~W\"uthrich} \institute{Philippe Deprez \at ETH Zurich, RiskLab, Department of Mathematics, R\"amistrasse 101, 8092 Zurich, \email{[email protected]} \and Mario V.~W\"uthrich \at Swiss Finance Institute SFI Professor, ETH Zurich, RiskLab, Department of Mathematics, R\"amistrasse 101, 8092 Zurich, \email{[email protected]}} \maketitle \abstract*{The theory of random graphs goes back to the late 1950s when Paul Erd\H{o}s and Alfr\'ed R\'enyi introduced the Erd\H{o}s-R\'enyi random graph. Since then many models have been developed, and the study of random graph models has become popular for real-life network modelling such as social networks and financial networks. The aim of this overview paper is to review relevant random graph models for real-life network modelling. Therefore, we analyse their properties in terms of stylised facts of real-life networks.} \abstract{The theory of random graphs goes back to the late 1950s when Paul Erd\H{o}s and Alfr\'ed R\'enyi introduced the Erd\H{o}s-R\'enyi random graph. Since then many models have been developed, and the study of random graph models has become popular for real-life network modelling such as social networks and financial networks. The aim of this overview is to review relevant random graph models for real-life network modelling. Therefore, we analyse their properties in terms of stylised facts of real-life networks.} \section{Stylised facts\index{stylised facts} of real-life networks} \label{section introduction} A network is a set of particles that may be linked to each other. The particles represent individual network participants and the links illustrate how they interact among each other, for an example see Fig.~\ref{ER_and_NSW} below. Such networks appear in many real-life situations, for instance, there are virtual social networks with different users that communicate with (are linked to) each other, see Newman et al.~\cite{NSW2}, or there are financial networks such as the banking system that exchange lines of credits with each other, see Amini et al.~\cite{Cont1} and Cont et al.~\cite{Cont2}. These two examples represent rather recently established real-life networks that originate from new technologies and industries but, of course, the study of network models is much older motivated by studies in sociology or questions about interacting particle systems in physics. Such real-life networks, in particular social networks, have been studied on many different empirical data sets. These studies have raised several stylised facts about large real-life networks that we would briefly like to enumerate, for more details see Newman et al.~\cite{NSW2} and Section 1.3 in Durrett \cite{Durrett} and the references therein. \begin{enumerate} \item Many pairs of distant particles are connected by a very short chain of links. This is sometimes called the ``small-world'' effect\index{small-world effect}. Another interpretation of the small-world effect is the observation that the typical distance of any two particles in real-life networks is at most six links, see Watts \cite{Watts} and Section 1.3 in Durrett \cite{Durrett}. The work of Watts \cite{Watts} was inspired by the statement of his father saying that ``he is only six handshakes away from the president of the United States''. For other interpretations of the small-world effect we refer to Newman et al.~\cite{NSW2}. 
\item The clustering property\index{clustering property} of real-life networks is often observed which means that linked particles tend to have common friends. \item The distribution of the number of links\index{degree!distribution} of a single particle is heavy-tailed, i.e.~its survival probability has a power law decay. In many real-life networks the power law constant (tail parameter) $\tau$ is estimated between 1 and 2 (finite mean and infinite variance, see also (\ref{NSW degree}) below). Section 1.4 in Durrett \cite{Durrett} presents the following examples: \begin{itemize} \item number of oriented links on web pages: $\tau \approx 1.5$, \item routers for e-mails and files: $\tau \approx 1.2$, \item movie actor network: $\tau \approx 1.3$, \item citation network Physical Review D: $\tau \approx 1.9$. \end{itemize} Typical real-life networks are heavy-tailed in particular if maintaining links is free of costs. \end{enumerate} Since real-life networks are too complex to be modelled particle by particle and link by link, researchers have developed many models in random graph theory that help to understand the geometry of such real-life networks. The aim of this overview paper is to review relevant models in random graph theory, in particular, we would like to analyse whether these models fulfil the stylised facts. Standard literature on random graph and percolation theory is Bollob\'{a}s \cite{Bollobas}, Durrett \cite{Durrett}, Franceschetti-Meester \cite{Franceschetti}, Grimmett \cite{Grimmett1, Grimmett2} and Meester-Roy \cite{Meester}. \section{Erd\H{o}s-R\'enyi random graph\index{random graph!Erd\H{o}s-R\'enyi}} We choose a set of particles $V_n=\{1,\ldots, n\}$ for fixed $n\in \mathbb{N}$. Thus, $V_n$ contains $n$ particles. The Erd\H{o}s-R\'enyi (ER) random graph introduced in the late 1950s, see \cite{ER}, attaches to every pair of particles $x,y\in V_n$, $x\neq y$, independently an edge with fixed probability\index{edge probability} $p \in (0,1)$, i.e., \begin{equation}\label{Erdos-Renyi} \eta_{x,y}=\eta_{y,x} = \left\{ \begin{array}{ll} 1 \qquad & \text{ with probability $p$,}\\ 0 \qquad & \text{ with probability $1-p$,} \end{array} \right. \end{equation} where $\eta_{x,y}=1$ means that there is an edge between $x$ and $y$, and $\eta_{x,y}=0$ means that there is {\it no} edge between $x$ and $y$. Identity $\eta_{x,y}=\eta_{y,x}$ illustrates that we have an undirected random graph. We denote this random graph model by ${\rm ER}(n,p)$. In Fig.~\ref{ER_and_NSW} (lhs) we provide an example for $n=12$, observe that this realisation of the ER random graph has one isolated particle and the remaining ones lie in the same connected component. We say that $x$ and $y$ are {\it adjacent}\index{adjacent} if $\eta_{x,y}=1$. We say that $x$ and $y$ are {\it connected}\index{connected} if there exists a path of adjacent particles from $x$ to $y$. We define the {\it degree}\index{degree} ${\cal D}(x)$ of particle $x$ to be the number of adjacent particles of $x$ in $V_n$. Among others, general random graph theory is concerned with the limiting behaviour of the ER random graph ${\rm ER}(n,p_n)$ for $p_n=\vartheta/n$, $\vartheta>0$, as $n\to\infty$. Observe that for $k\in \{0,\ldots, n-1\}$ we have, see for instance Lemma 2.9 in \cite{W}, \begin{equation}\label{ER degree} g_k=g^{(n)}_k={\mathbb P}\left[{\cal D}(x)=k\right] =\binom{n-1}{k} p_n^k \left(1-p_n\right)^{n-1-k} \quad \rightarrow \quad e^{-\vartheta} \frac{\vartheta^k}{k!}, \end{equation} as $n\to \infty$. 
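To illustrate the Poisson limit in (\ref{ER degree}) numerically, the following minimal Python sketch (illustrative only and not part of the model description; the helper names \texttt{er\_degree\_sample} and \texttt{poisson\_pmf} are ours) simulates the degree of a fixed particle in ${\rm ER}(n,p_n)$ with $p_n=\vartheta/n$ and compares the empirical frequencies with the limiting Poisson weights $e^{-\vartheta}\vartheta^k/k!$.
\begin{verbatim}
import math
import random

def er_degree_sample(n, theta, trials=2000):
    """Sample the degree of a fixed particle in ER(n, theta/n); the degree is
    Binomial(n-1, theta/n), simulated here by n-1 Bernoulli trials."""
    p = theta / n
    return [sum(random.random() < p for _ in range(n - 1)) for _ in range(trials)]

def poisson_pmf(k, theta):
    """Limiting Poisson(theta) probability of observing degree k."""
    return math.exp(-theta) * theta ** k / math.factorial(k)

if __name__ == "__main__":
    random.seed(1)
    n, theta = 2000, 1.5
    degrees = er_degree_sample(n, theta)
    for k in range(6):
        empirical = degrees.count(k) / len(degrees)
        print(k, round(empirical, 3), round(poisson_pmf(k, theta), 3))
\end{verbatim}
Already for moderate $n$ the empirical frequencies are close to the Poisson weights, which illustrates why the ER degree distribution is light-tailed.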
We see that the degree distribution of a fixed particle $x \in V_n$ with edge probability $p_n$ converges for $n\to \infty$ to a Poisson distribution with parameter $\vartheta>0$. In particular, this limiting distribution is light-tailed and, therefore, the ER graph does not fulfil the stylised fact of having a power law decay of the degree distribution. The ER random graph has a phase transition\index{phase transition} at $\vartheta=1$, reflecting different regimes for the size of the largest connected component in the ER random graph. For $\vartheta<1$, all connected components\index{connected component} are small, the largest being of order $\mathcal{O}(\log n)$, as $n\to \infty$. For $\vartheta >1$, there is a constant $\chi(\vartheta)>0$ such that the largest connected component of the ER random graph is of order $\chi(\vartheta)n$, as $n\to\infty$, and all other connected components are small, see Bollob\'{a}s \cite{Bollobas} and Chapter 2 in Durrett \cite{Durrett}. At criticality ($\vartheta=1$) the largest connected component is of order $n^{2/3}$; however, this analysis is rather sophisticated, see Section 2.7 in Durrett \cite{Durrett}. Moreover, the ER random graph has only very few complex connected components such as cycles (see Section 2.6 in Durrett \cite{Durrett}): for $\vartheta \neq 1$ most connected components are trees, only a few connected components have triangles and cycles, and only the largest connected component (for $\vartheta>1$) is more complicated. At criticality the situation is more complex: a few large connected components emerge and finally merge to the largest connected component as $n\to \infty$. \begin{figure} \caption{{\bf lhs:} a realisation of the ER random graph for $n=12$; {\bf rhs:} a realisation of the NSW random graph for $n=12$.} \label{ER_and_NSW} \end{figure} \section{Newman-Strogatz-Watts random graph\index{random graph!Newman-Strogatz-Watts}} \label{Section NSW graph} The approach of Newman-Strogatz-Watts (NSW) \cite{NSW1, NSW2} aims at directly describing the degree distribution $(g_k)_{k \ge 0}$ of ${\cal D}(x)$ for a given particle $x \in V_n$ ($n\in \mathbb{N}$ being large). The aim is to modify the degree distribution in (\ref{ER degree}) so that we obtain a power law distribution. Assume that any particle $x \in V_n$ has a degree distribution\index{degree!distribution} of the form $g_0=0$ and \begin{equation}\label{NSW degree} g_k={\mathbb P}\left[{\cal D}(x)=k\right] \sim c k^{-(\tau+1)}, \qquad \text{ as $k\to \infty$,} \end{equation} for given tail parameter $\tau>0$ and $c>0$. Note that $\sum_{k\ge 1}k^{-(\tau+1)}<1+1/\tau$, which implies that $c>0$ is admissible. By definition the survival probability of this degree distribution has a power law with tail parameter $\tau>0$. However, this choice (\ref{NSW degree}) does not explain how one obtains an explicit graph from the degrees ${\cal D}(x)$, $x\in V_n$. The graph construction is done by the Molloy-Reed \cite{MR} algorithm\index{Molloy-Reed algorithm}: attach to each particle $x \in V_n$ exactly ${\cal D}(x)$ ends of edges and then choose these ends randomly in pairs (with a small modification if the total number of ends is odd). This will provide a random graph with the desired degree distribution. In Fig.~\ref{ER_and_NSW} (rhs) we provide an example for $n=12$; observe that this realisation of the NSW random graph has two connected components. The Molloy-Reed construction may provide multiple edges and self-loops, but if ${\cal D}(x)$ has finite second moment ($\tau>2$) then there are only a few multiple edges and self-loops, as $n\to \infty$, see Theorem 3.1.2 in Durrett \cite{Durrett}.
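The Molloy-Reed pairing of edge ends is easy to simulate. The following Python sketch is a minimal illustration (the helpers \texttt{power\_law\_degree} and \texttt{molloy\_reed} are our own names; the degree sampler is just one possible choice whose survival probability decays like $k^{-\tau}$): it pairs the ends uniformly at random and counts the self-loops that the construction may produce.
\begin{verbatim}
import random

def power_law_degree(tau, kmax=10 ** 5):
    """One possible degree sampler with P[D >= k] ~ k^(-tau) for k >= 1."""
    u = random.random()
    return min(int(u ** (-1.0 / tau)), kmax)

def molloy_reed(degrees):
    """Molloy-Reed pairing: attach D(x) edge ends to particle x and match all
    ends uniformly at random; multiple edges and self-loops may occur."""
    stubs = [x for x, d in enumerate(degrees) for _ in range(d)]
    if len(stubs) % 2 == 1:   # small modification if the total number of ends is odd
        stubs.append(random.randrange(len(degrees)))
    random.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

if __name__ == "__main__":
    random.seed(1)
    degrees = [power_law_degree(tau=1.5) for _ in range(1000)]
    edges = molloy_reed(degrees)
    loops = sum(a == b for a, b in edges)
    print(len(edges), "edges,", loops, "self-loops")
\end{verbatim}
For heavy-tailed degree choices the number of self-loops and multiple edges observed in such simulations is typically not negligible.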
However, in view of real-life networks we are rather interested in tail parameters $\tau \in (1,2)$ for which we so far have no control on multiple edges and self-loops. Newman et al.~\cite{NSW1, NSW2} have analysed this random graph by basically considering cluster growth in a two-step branching process. Define the probability generating function of the first generation by \begin{equation*} G_0(z) = {\mathbb E} \left[z^{{\cal D}(x)}\right] = \sum_{k \ge 1} g_k z^k,\qquad \text{ for $z\in {\mathbb R}$.} \end{equation*} Note that we have $G_0(1)=1$ and $\mu={\mathbb E}[{\cal D}(x)]=G'_0(1)$ (provided that the latter exists). The second generation then has probability generating function given by \begin{equation*} G_1(z) = \sum_{k\ge 0} \frac{(k+1)g_{k+1}}{\mu}~ z^{k} = \sum_{k\ge 1} \frac{kg_k}{\mu}~ z^{k-1},\qquad \text{ for $z\in {\mathbb R}$,} \end{equation*} where the probability weights are specified by $\widetilde{g}_k=(k+1)g_{k+1}/\mu$ for $k\ge 0$. For $\tau>2$ the second generation has finite mean given by \begin{equation*} \vartheta= \sum_{k\ge 0} k \widetilde{g}_k = \sum_{k\ge 0} k~\frac{(k+1)g_{k+1}}{\mu} =\frac{1}{\mu} \sum_{k\ge 1} (k-1)kg_k. \end{equation*} Note that the probability generating functions are related to each other by $G_0'(z)= \mu G_1(z) = G_0'(1)G_1(z)$. Similar to the ER random graph there is a phase transition\index{phase transition} in this model. It is determined by the mean $\vartheta$ of the second generation, see (5)--(6) in Newman et al.~\cite{NSW2} and Theorems 3.1.3 and 3.2.2 in Durrett \cite{Durrett}: for $\vartheta>1$ the largest connected component has size of order $\chi(\vartheta)n$, as $n\to \infty$. The fraction $\chi(\vartheta)=1-G_0(z_0)$ is found by choosing $z_0$ to be the smallest fixed point of $G_1$ in $[0,1]$. Moreover, no other connected component has size of order larger than $\mathcal{O}(\log n)$. Note that we require finite variance, i.e.~$\tau>2$, for $\vartheta$ to exist. If $\vartheta <1$, the size of the connected component of a fixed particle converges in distribution to a limit with mean $1+\mu/(1-\vartheta)$, as $n\to \infty$, see Theorem 3.2.1 in Durrett \cite{Durrett}. The size of the largest connected component in this case ($\tau>2$ and $\vartheta<1$) is conjectured to be of order $n^{1/\tau}$: the survival probability of the degree distribution has asymptotic behaviour of order $k^{-\tau}$, therefore the largest degree of $n$ independent degrees has size of order $n^{1/\tau}$, which leads to the same conjecture for the largest connected component, see also Conjecture 3.3.1 in Durrett \cite{Durrett}. From a practical point of view the interesting regime is $1<\tau<2$ because many real-life networks have such a tail behaviour, see Section 1.4 in Durrett \cite{Durrett}. In this case we have $\vartheta=\infty$ and an easy consequence is that the largest connected component grows proportionally to $n$ (because this model dominates a model with finite second moment and mean of the second generation being bigger than 1). In this regime $1<\tau<2$ we can study the graph distance\index{graph distance} of two randomly chosen particles (counting the number of edges connecting them) in the largest connected component, see Section 4.5 in Durrett \cite{Durrett}. In the Chung-Lu model \cite{CL0, CL}, which uses a variant of the Molloy-Reed \cite{MR} algorithm, it is proved that this graph distance behaves as ${\mathcal O}(\log\log n)$, see Theorem 4.5.2 in Durrett \cite{Durrett}.
Van der Hofstad et al.~\cite{Remco2} obtain the same asymptotic behaviour ${\mathcal O}(\log\log n)$ for the NSW random graph in the case $1<\tau<2$. Moreover, in their Theorem 1.2 \cite{Remco2} they also state that this graph distance behaves as ${\mathcal O}(\log n)$ for $\tau>2$. These results on the graph distances can be interpreted as the small-world effect because two randomly chosen particles in $V_n$ are connected by very few edges. We conclude that NSW random graphs have heavy tails for the degree distribution choices according to (\ref{NSW degree}). Moreover, the graph distances have a behaviour that can be interpreted as the small-world effect. Less desirable features of NSW random graphs are that they may have self-loops and multiple edges. Moreover, the NSW random graph is expected to be locally rather sparse, leading to locally tree-like structures, see also Hurd-Gleeson \cite{Hurd1}. That is, we do not expect to get a reasonable local graph geometry and the required clustering property. Variations that allow for statistical interpretations in terms of likelihoods include the works of Chung-Lu \cite{CL0, CL} and Olhede-Wolfe \cite{OW}. \section{Nearest-neighbour bond percolation\index{percolation!nearest-neighbour bond}} \label{nearest-neighbour bond percolation} As a next step we would like to embed the previously introduced random graphs and the corresponding particles into Euclidean space. This will have the advantage of obtaining a natural distance function between particles, and it will allow us to compare Euclidean distance to graph distance between particles (counting the number of edges connecting two distinct particles). Before giving the general random graph model we restrict ourselves to the nearest-neighbour bond percolation model on the lattice ${\mathbb Z}^d$ because this model is the basis for many derivations. More general and flexible random graph models are provided in the subsequent sections. Percolation theory was first presented by Broadbent-Hammersley \cite{BH}. It was mainly motivated by questions from physics, but these days percolation models are recognised to be very useful in several fields. Key monographs on nearest-neighbour bond percolation theory are Kesten \cite{Kesten} and Grimmett \cite{Grimmett1, Grimmett2}. Choose a fixed dimension $d\in \mathbb{N}$ and consider the square lattice ${\mathbb Z}^d$. The vertices of this square lattice are the particles and we say that two particles $x,y\in {\mathbb Z}^d$ are nearest-neighbour particles if $\|x-y\|=1$ (where $\|\cdot\|$ denotes the Euclidean norm). We attach edges at random to nearest-neighbour particles $x,y \in {\mathbb Z}^d$, independently of all other edges, with a fixed edge probability\index{edge probability} $p\in [0,1]$, that is, \begin{equation}\label{percolation} \eta_{x,y}=\eta_{y,x} = \left\{ \begin{array}{ll} 1_{\{\|x-y\|=1\}} \qquad & \text{ with probability $p$,}\\ 0 \qquad & \text{ with probability $1-p$,} \end{array} \right. \end{equation} where $\eta_{x,y}=1$ means that there is an edge between $x$ and $y$, and $\eta_{x,y}=0$ means that there is {\it no} edge between $x$ and $y$. The resulting graph is called nearest-neighbour (bond) random graph in ${\mathbb Z}^d$, see Fig.~\ref{NNBP_and_HomLRP} (lhs) for an illustration. Two particles $x, y \in {\mathbb Z}^d$ are connected if there exists a path of nearest-neighbour edges connecting $x$ and $y$.
It is immediately clear that this random graph does not fulfil the small-world effect because one needs at least $\|x-y\|$ edges to connect $x$ and $y$, i.e.~the number of edges grows at least linearly in the Euclidean distance between particles $x,y\in {\mathbb Z}^d$. The degree is bounded because there are at most $2d$ nearest-neighbour edges; more precisely, the degree has a binomial distribution with parameters $2d$ and $p$. We present this square lattice model because it is an interesting basis for the development of more complex models. Moreover, this model is at the heart of many proofs in percolation problems which are based on so-called renormalisation techniques, see Sect.~\ref{Renormalisation techniques} below for a concrete example. In percolation theory, the object of main interest is the connected component\index{connected component} of a given particle $x\in {\mathbb Z}^d$, which we denote by \begin{equation*} {\cal C}(x) = \left\{ y \in {\mathbb Z}^d:~\text{$x$ and $y$ are connected by a path of nearest-neighbour edges}\right\}. \end{equation*} By translation invariance it suffices to define the percolation probability\index{percolation!probability} at the origin \begin{equation*} \theta(p)= {\mathbb P}_p \left[ |{\cal C}(0)|=\infty \right], \end{equation*} where $|{\cal C}(0)|$ denotes the size of the connected component of the origin and ${\mathbb P}_p$ is the product measure on the possible nearest-neighbour edges with edge probability $p\in [0,1]$, see Grimmett \cite{Grimmett1}, Section 2.2. The {\it critical probability}\index{critical probability} $p_c=p_c({\mathbb Z}^d)$ is then defined by \begin{equation*} p_c = \inf \left\{ p\in (0,1]:~\theta(p)>0\right\}. \end{equation*} Since the percolation probability $\theta(p)$ is non-decreasing, the critical probability is well-defined. We have the following result, see Theorem 3.2 in Grimmett \cite{Grimmett1}. \begin{theorem} \label{bernoulli theo 1} For nearest-neighbour bond percolation in ${\mathbb Z}^d$ we have \begin{itemize} \item[(a)] for $d=1$: $p_c({\mathbb Z})=1$; and \item[(b)] for $d\ge 2$: $p_c({\mathbb Z}^d) \in (0,1)$. \end{itemize} \end{theorem} This theorem says that there is a non-trivial phase transition\index{phase transition} in ${\mathbb Z}^d$, $d\ge 2$. This needs to be considered together with the following result which goes back to Aizenman et al.~\cite{AKN}, Gandolfi et al.~\cite{GGR} and Burton-Keane \cite{BK}. Denote by ${\cal I}$ the number of infinite connected components. Then we have the following statement, see Theorem 7.1 in Grimmett \cite{Grimmett1}. \begin{theorem}\label{bernoulli theo 2} For any $p\in (0,1)$ either ${\mathbb P}_p[{\cal I}=0]=1$ or ${\mathbb P}_p[{\cal I}=1]=1$. \end{theorem} Theorems \ref{bernoulli theo 1} and \ref{bernoulli theo 2} imply that there is a {\it unique} infinite connected component for $p>p_c({\mathbb Z}^d)$, a.s. This motivates the notation ${\cal C}_\infty$ for the unique infinite connected component\index{connected component!infinite} for the given edge configuration $(\eta_{x,y})_{x,y}$ in the case $p>p_c({\mathbb Z}^d)$. ${\cal C}_\infty$ may be considered as an infinite (nearest-neighbour) network on the particle system ${\mathbb Z}^d$ and we can study its geometrical and topological properties. Using a duality argument, Kesten \cite{Kesten12} proved that $p_c({\mathbb Z}^2)=1/2$ and monotonicity then provides $p_c({\mathbb Z}^{d+1})\le p_c({\mathbb Z}^{d})\le p_c({\mathbb Z}^{2})=1/2$ for $d\ge 2$.
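To get a feeling for the phase transition and for the value $p_c({\mathbb Z}^2)=1/2$, one can run a simple Monte Carlo experiment. The following Python sketch is a finite-box heuristic only, not an estimator of $\theta(p)$ itself; the helper name \texttt{origin\_reaches\_boundary} is ours. It opens the nearest-neighbour bonds of the box $[-L,L]^2$ with probability $p$ and estimates the probability that the open cluster of the origin reaches the boundary of the box.
\begin{verbatim}
import random
from collections import deque

def origin_reaches_boundary(L, p):
    """Open each nearest-neighbour bond of the box [-L, L]^2 with probability p
    (lazily, when first inspected) and check via breadth-first search whether
    the open cluster of the origin reaches the boundary of the box."""
    bonds = {}

    def is_open(x, y):
        key = (min(x, y), max(x, y))  # canonical key for the unordered bond {x, y}
        if key not in bonds:
            bonds[key] = random.random() < p
        return bonds[key]

    seen = {(0, 0)}
    queue = deque([(0, 0)])
    while queue:
        x = queue.popleft()
        if max(abs(x[0]), abs(x[1])) == L:
            return True
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y = (x[0] + dx, x[1] + dy)
            if y not in seen and is_open(x, y):
                seen.add(y)
                queue.append(y)
    return False

if __name__ == "__main__":
    random.seed(1)
    L, trials = 30, 200
    for p in (0.40, 0.45, 0.50, 0.55, 0.60):
        hits = sum(origin_reaches_boundary(L, p) for _ in range(trials))
        print(p, hits / trials)
\end{verbatim}
For $p$ well below $1/2$ the estimated crossing probability drops rapidly as $L$ grows, whereas for $p$ above $1/2$ it stabilises at a positive value; this is only a numerical illustration and not a substitute for Theorem \ref{bernoulli theo 1}.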
One object of interest is the so-called {\it graph distance} (chemical distance)\index{graph distance}\index{chemical distance} between $x,y\in {\mathbb Z}^d$, which, for a given edge configuration, is defined by \begin{eqnarray*} d(x,y)&=& \text{minimal length of path connecting $x$ and $y$ by} \\&&\hspace{3cm} \text{nearest-neighbour edges $\eta_{z_1,z_2}=1$}, \end{eqnarray*} where this is defined to be infinite if there is no nearest-neighbour path connecting $x$ and $y$ for the given edge configuration. We have already mentioned that $d(x,y)\ge \|x-y\|$ because this is the minimal number of nearest-neighbour edges we need to cross from $x$ to $y$. Antal-Pisztora \cite{AP} have proved the following upper bound. \begin{theorem} \label{Antal theorem} Choose $p>p_c({\mathbb Z}^d)$. There exists a positive constant $c=c(p,d)$ such that, a.s., \begin{equation*} \limsup_{\|x\|\to\infty} \frac{1}{\|x\|}d(0,x)1_{\{\text{\rm 0 and $x$ are connected}\}} \le c. \end{equation*} \end{theorem} \begin{figure} \caption{{\bf lhs:} realisation of nearest-neighbour bond percolation; {\bf rhs:} realisation of homogeneous long-range percolation.} \label{NNBP_and_HomLRP} \end{figure} \section{Homogeneous long-range percolation}\index{percolation!homogeneous long-range} \label{homogeneous long-range percolation} Long-range percolation is the first extension of nearest-neighbour bond percolation. It allows for edges between any pair of particles $x,y\in {\mathbb Z}^d$. Long-range percolation was originally introduced by Schulman \cite{Schulman} in one dimension. Existence and uniqueness of the infinite connected component in long-range percolation was proved by Schulman \cite{Schulman} and Newman-Schulman \cite{NS} for $d=1$ and by Gandolfi et al.~\cite{GKN} for $d\ge 2$. Consider again the percolation model on the lattice ${\mathbb Z}^d$, but we now choose the edges differently. Choose $p\in [0,1]$, $\lambda>0$ and $\alpha>0$ fixed and define the edge probabilities\index{edge probability} for $x,y\in {\mathbb Z}^d$ by \begin{equation}\label{long-range percolation} p_{x,y} = \left\{ \begin{array}{ll} p \qquad & \text{ if $\|x-y\|=1$,}\\ 1-\exp(-\lambda \|x-y\|^{-\alpha}) \qquad & \text{ if $\|x-y\|>1$.} \end{array} \right. \end{equation} Between any pair $x,y\in {\mathbb Z}^d$ we attach an edge, independently of all other edges, as follows \begin{equation*} \eta_{x,y}=\eta_{y,x} = \left\{ \begin{array}{ll} 1 \qquad & \text{ with probability $p_{x,y}$,}\\ 0 \qquad & \text{ with probability $1-p_{x,y}$.} \end{array} \right. \end{equation*} We denote the resulting product measure on the edge configurations by ${\mathbb P}_{p,\lambda, \alpha}$. Figure \ref{NNBP_and_HomLRP} (rhs) shows part of a realised configuration. We say that the particles $x$ and $y$ are {\it adjacent} if there is an edge $\eta_{x,y}=1$ between $x$ and $y$. We say that $x$ and $y$ are {\it connected} if there exists a path of adjacent particles in ${\mathbb Z}^d$ that connects $x$ and $y$. The connected component\index{connected component} of $x$ is given by \begin{equation*} {\cal C}(x)=\left\{ y \in {\mathbb Z}^d:~\text{$x$ and $y$ are connected} \right\}. \end{equation*} We remark that the edge probabilities $p_{x,y}$ used in the literature have a more general form. Since for many results only the asymptotic behaviour of $p_{x,y}$ as $\|x-y\|\to \infty$ is relevant, we have decided to choose the explicit (simpler) form (\ref{long-range percolation}) because this also fits our next models. 
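To get a feeling for the effect of the long-range edges, the following sketch (illustrative only; the finite box and the parameter values are ad hoc choices) samples an edge configuration with the edge probabilities (\ref{long-range percolation}) and computes the graph distance from the origin by breadth-first search, so that it can be compared with the Euclidean distance.

\begin{verbatim}
# Illustrative sketch: homogeneous long-range percolation on {-N,...,N}^2 with
# p_{x,y} = p for |x-y| = 1 and p_{x,y} = 1 - exp(-lambda |x-y|^(-alpha)) for
# |x-y| > 1, followed by the graph distance d(0,x) computed by BFS.
import math, random
from collections import deque

def sample_graph(N, p, lam, alpha, seed=0):
    rng = random.Random(seed)
    sites = [(x, y) for x in range(-N, N + 1) for y in range(-N, N + 1)]
    adj = {s: [] for s in sites}
    for i, u in enumerate(sites):
        for v in sites[i + 1:]:
            r = math.hypot(u[0] - v[0], u[1] - v[1])
            prob = p if r <= 1.0 else 1.0 - math.exp(-lam * r ** (-alpha))
            if rng.random() < prob:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def graph_distances_from_origin(adj):
    dist, queue = {(0, 0): 0}, deque([(0, 0)])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

if __name__ == "__main__":
    adj = sample_graph(N=25, p=0.6, lam=1.0, alpha=3.0)   # d=2, alpha in (d,2d)
    dist = graph_distances_from_origin(adj)
    x = (25, 25)
    if x in dist:
        print("Euclidean distance", round(math.hypot(*x), 1),
              "is crossed in", dist[x], "edges")
\end{verbatim}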
Asymptotically we have the following power law \begin{equation*} p_{x,y}~ \sim~ \lambda \|x-y\|^{-\alpha}, \qquad \text{ as $\|x-y\|\to \infty$.} \end{equation*} Theorem \ref{bernoulli theo 1} (b) immediately implies that we have percolation in ${\mathbb Z}^d$, $d\ge 2$, for $p$ sufficiently close to 1. We have the following theorem, see Theorem 1.2 in Berger \cite{Berger1}. \begin{theorem} \label{theorem infinite cluster} For long-range percolation in ${\mathbb Z}^d$ we have, in an a.s.~sense,\index{phase transition} \begin{itemize} \item[(a)] for $\alpha \le d$: there is an infinite connected component; \item[(b)] for $d\ge 2$ and $\alpha > d$: for $p$ sufficiently close to 1 there is an infinite connected component; \item[(c)] for $d=1$: \begin{itemize} \item[(1)] $\alpha>2$: there is no infinite connected component; \item[(2)] $1<\alpha<2$: for $p$ sufficiently close to 1 there is an infinite connected component; \item[(3)] $\alpha=2$ and $\lambda>1$: for $p$ sufficiently close to 1 there is an infinite connected component; \item[(4)] $\alpha=2$ and $\lambda\le 1$: there is no infinite connected component. \end{itemize} \end{itemize} \end{theorem} The case $\alpha\le d$ follows from an infinite degree distribution for a given particle, i.e.~for $\alpha \le d$ we have, a.s., \begin{equation}\label{homogeneous lattice} {\cal D}(0)=\left|\left\{ x \in{\mathbb Z}^d:~\text{$0$ and $x$ are adjacent}\right\}\right|=\infty, \end{equation} and for $\alpha>d$ the degree distribution is light-tailed (we give a proof in the continuum space model in Sect.~\ref{homogeneous Poisson model}, because the proof turns out to be straightforward in continuum space). Interestingly, we now also obtain a non-trivial phase transition in the one dimensional case $d=1$ once long-range edges are sufficiently likely, i.e.~$\alpha$ is sufficiently small. At criticality $\alpha=2$ also the decay scaling constant $\lambda>0$ matters. The case $d\ge 2$ is less interesting because it is in line with nearest-neighbour bond percolation. The main interest of adding long-range edges is the study of the resulting geometric properties of connected components ${\cal C}(x)$. We will state below that there are three different regimes: \begin{itemize} \item $\alpha \le d$ results in an infinite degree distribution, a.s., see (\ref{homogeneous lattice}); \item $d < \alpha < 2d$ has finite degrees but is still in the regime of small-world behaviour; \item $\alpha >2d$ behaves as nearest-neighbour bond percolation. \end{itemize} We again focus on the graph distance \begin{equation}\label{definition of graph distance d} d(x,y)= \text{minimal number of edges that connect $x$ and $y$}, \end{equation} where this is defined to be infinite if $x$ and $y$ do not belong to the same connected component, i.e.~$y \notin {\cal C}(x)$. For $\alpha<d$ we have infinite degrees and the infinite connected component ${\cal C}_\infty$ contains all particles of ${\mathbb Z}^d$, a.s. Moreover, Benjamini et al.~\cite{BKPS} prove in Example 6.1 that the graph distance is bounded, a.s., by \begin{equation*} \left\lceil \frac{d}{d-\alpha} \right\rceil . \end{equation*} The case $\alpha \in (d,2d)$ is considered in Biskup \cite{Biskup}, Theorem 1.1, and in Trapman \cite{Trapman}. They have proved the following result: \begin{theorem}\label{theorem homogeneous 1} Choose $\alpha \in (d,2d)$ and assume, a.s., that there exists a unique infinite connected component ${\cal C}_\infty$. 
Then for all $\epsilon>0$ we have \begin{equation*} \lim_{\|x\|\to\infty} {\mathbb P}_{p,\lambda,\alpha} \left[\left.\Delta -\epsilon \le \frac{\log d(0,x)}{\log \log \|x\|} \le \Delta +\epsilon\right|0,x\in {\cal C}_\infty\right]=1, \end{equation*} where $\Delta^{-1}=\log_2 (2d/\alpha)$. \end{theorem} This result says that the graph distance $d(0,x)$ is roughly of order $(\log \|x\|)^\Delta$ with $\Delta=\Delta(\alpha, d)>1$. Unfortunately, the known bounds are not sufficiently sharp to give more precise asymptotic statements. Theorem \ref{theorem homogeneous 1} can be interpreted as small-world effect since it tells us that long Euclidean distances can be crossed by a few edges. For instance, $d=2$ and $\alpha=2.5$ provide $\Delta= 1.47$ and we get $(\log \|x\|)^\Delta=26.43$ for $\|x\|=10,000$, i.e.~a Euclidean distance of 10,000 is crossed in roughly 26 edges. The case $\alpha>2d$ is considered in Berger \cite{Berger2}. \begin{theorem} \label{theorem homogeneous 2} If $\alpha>2d$ we have, a.s., \begin{equation*} \liminf_{\|x\|\to \infty} \frac{d(0,x)}{\|x\|}>0. \end{equation*} \end{theorem} This result proves that for $\alpha>2d$ the graph distance behaves as in nearest-neighbour bond percolation, because it grows linearly in $\|x\|$. The proof of an upper bound is still open, but we expect a result similar to Theorem \ref{Antal theorem} in nearest-neighbour bond percolation, see Conjecture 1 of Berger \cite{Berger2}. We conclude that this model has a small-world effect for $\alpha <2d$. It also has some kind of clustering property because particles that are close share an edge more commonly, which gives a structure that is locally more dense, see Corollary 3.4 in Biskup \cite{Biskup}. But the degree distribution is light-tailed which motivates to extend the model by an additional ingredient. This is done in the next section. \section{Heterogeneous long-range percolation\index{percolation!heterogeneous long-range}} Heterogeneous long-range percolation extends the previously introduced long-range percolation models on the lattice ${\mathbb Z}^d$. Deijfen et al.~\cite{Remco} have introduced this model under the name of scale-free percolation\index{percolation!scale-free}. The idea is to place additional weights\index{weights} $W_x$ to the particles $x\in {\mathbb Z}^d$ which determine how likely a particle may play the role of a hub\index{hub} in the resulting network. Consider again the percolation model on the lattice ${\mathbb Z}^d$. Assume that $(W_x)_{x\in {\mathbb Z}^d}$ are i.i.d.~Pareto distributed with threshold parameter 1 and tail parameter $\beta>0$, i.e.~for $w\ge 1$ \begin{equation}\label{definition of Pareto} {\mathbb P}\left[ W_x \le w \right] = 1 - w^{-\beta}. \end{equation} Choose $\alpha>0$ and $\lambda>0$ fixed. Conditionally given $(W_x)_{x\in {\mathbb Z}^d}$, we consider the edge probabilities\index{edge probability} for $x,y\in {\mathbb Z}^d$ given by \begin{equation}\label{heterogeneous long-range percolation} p_{x,y} = 1-\exp(-\lambda W_xW_y\|x-y\|^{-\alpha}). \end{equation} Between any pair $x,y\in {\mathbb Z}^d$ we attach an edge, independently of all other edges, as follows \begin{equation*} \eta_{x,y}=\eta_{y,x} = \left\{ \begin{array}{ll} 1 \qquad & \text{ with probability $p_{x,y}$,}\\ 0 \qquad & \text{ with probability $1-p_{x,y}$.} \end{array} \right. \end{equation*} We denote the resulting probability measure on the edge configurations by ${\mathbb P}_{\lambda, \alpha, \beta}$. 
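The following sketch is again purely illustrative: the Pareto marks are sampled by the inverse transform $W=(1-U)^{-1/\beta}$ for $U$ uniform on $(0,1)$, and all parameter values are examples. It samples the weights and the edges incident to the origin according to (\ref{heterogeneous long-range percolation}) on a finite box and records the degree of the particle at the origin over independent repetitions; with $d=2$, $\alpha=3$ and $\beta=1.2$ one is in the infinite variance regime $\tau=\beta\alpha/d=1.8$ discussed below.

\begin{verbatim}
# Illustrative sketch: heterogeneous long-range percolation on {-N,...,N}^2.
# Pareto(threshold 1, tail beta) weights via W = (1-U)^(-1/beta), edges with
# p_{x,y} = 1 - exp(-lambda W_x W_y |x-y|^(-alpha)); we record the degree of
# the particle at the origin.  Parameter values are examples only.
import math, random

def degree_of_origin(N, lam, alpha, beta, rng):
    W0 = (1.0 - rng.random()) ** (-1.0 / beta)      # weight of the origin
    deg = 0
    for x in range(-N, N + 1):
        for y in range(-N, N + 1):
            if (x, y) == (0, 0):
                continue
            Wv = (1.0 - rng.random()) ** (-1.0 / beta)
            r = math.hypot(x, y)
            p_edge = 1.0 - math.exp(-lam * W0 * Wv * r ** (-alpha))
            deg += (rng.random() < p_edge)
    return deg

if __name__ == "__main__":
    rng = random.Random(2)
    # d = 2, alpha = 3, beta = 1.2, hence tau = beta * alpha / d = 1.8
    degrees = sorted(degree_of_origin(N=30, lam=0.5, alpha=3.0, beta=1.2, rng=rng)
                     for _ in range(200))
    print("median degree:", degrees[100], " maximal degree:", degrees[-1])
\end{verbatim}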
In contrast to (\ref{long-range percolation}) we have additional weights $W_x$ and $W_y$ in (\ref{heterogeneous long-range percolation}). The bigger these weights, the more likely an edge between $x$ and $y$ becomes. Thus, particles $x\in {\mathbb Z}^d$ with a big weight $W_x$ will have many adjacent particles $y$ (i.e.~particles $y\in {\mathbb Z}^d$ with $\eta_{x,y}=1$). Such particles $x$ will play the role of hubs in the network system. Figure \ref{HetLRP_and_RCM} (lhs) shows part of a realised edge configuration. \begin{figure} \caption{{\bf lhs:} realisation of heterogeneous long-range percolation; {\bf rhs:} realisation of the continuum space long-range percolation model.} \label{HetLRP_and_RCM} \end{figure} The first interesting result is that this model provides a heavy-tailed degree distribution, see Theorems 2.1 and 2.2 in Deijfen et al.~\cite{Remco}. Denote again by ${\cal D}(0)$ the number of particles of ${\mathbb Z}^d$ that are adjacent to 0; then we have the following result. \begin{theorem} \label{infinite variance case} Fix $d\ge 1$. We have the following two cases for the degree distribution: \begin{itemize} \item for $\min\{\alpha, \beta \alpha\} \le d$, a.s., ${\cal D}(0)=\infty$; \item for $\min\{\alpha, \beta \alpha\} > d$ set $\tau=\beta\alpha/d$, then \begin{equation*} {\mathbb P}_{\lambda, \alpha,\beta}\left[{\cal D}(0) > k \right] =k^{-\tau} \ell(k), \end{equation*} for some function $\ell(\cdot)$ that is slowly varying at infinity. \end{itemize} \end{theorem} We observe that the heavy-tailedness of the weights $W_x$ induces heavy-tailedness in the degree distribution which is similar to choice (\ref{NSW degree}) in the NSW random graph model of Sect.~\ref{Section NSW graph}. For $\alpha>d$ there are three different regimes: (i) $\beta\alpha \le d$ implies infinite degree, a.s.; (ii) for $d<\beta\alpha <2d$ the degree distribution has finite mean but infinite variance because $1<\tau<2$; (iii) for $\beta\alpha>2d$ the degree distribution has finite variance because $\tau>2$. We will see that the distinction of the latter two cases also has implications for the behaviour of the percolation properties and the graph distances, similar to the considerations in NSW random graphs. Note that from a practical point of view the interesting regime is (ii). We again consider the connected component of a given particle $x\in {\mathbb Z}^d$ denoted by ${\cal C}(x)$ and we define the percolation probability (for given $\alpha$ and $\beta$) \begin{equation*} \theta(\lambda)= {\mathbb P}_{\lambda, \alpha, \beta} \left[ |{\cal C}(0)|=\infty \right]. \end{equation*} The {\it critical percolation value}\index{percolation!critical value} $\lambda_c$ is then defined by \begin{equation*} \lambda_c = \inf \left\{ \lambda>0:~\theta(\lambda)>0\right\}. \end{equation*} We have the following result, see Theorem 3.1 in Deijfen et al.~\cite{Remco}. \begin{theorem} \label{theo 31} Fix $d\ge 1$. Assume $\min \{ \alpha, \beta \alpha \} > d$. \begin{itemize} \item[(a)] If $d\ge 2$, then $\lambda_c<\infty$. \item[(b)] If $d=1$ and $\alpha \in (1,2]$, then $\lambda_c<\infty$. \item[(c)] If $d=1$ and $\min \{ \alpha, \beta \alpha \} > 2$, then $\lambda_c=\infty$. \end{itemize} \end{theorem} \noindent This result is in line with Theorem \ref{theorem infinite cluster}. Since $W_x\ge 1$, a.s., an edge configuration from edge probabilities $p_{x,y}$ defined in (\ref{heterogeneous long-range percolation}) stochastically dominates an edge configuration with edge probabilities $1- \exp(-\lambda \|x-y\|^{-\alpha})$. 
The latter is similar to the homogeneous long-range percolation model on ${\mathbb Z}^d$ and the results of the above theorem directly follow from Theorem \ref{theorem infinite cluster}. For part (c) of the theorem we also refer to Theorem 3.1 of Deijfen et al.~\cite{Remco}. The next theorem follows from Theorems 4.2 and 4.4 of Deijfen et al.~\cite{Remco}. \begin{theorem} \label{theo 32} Fix $d\ge 1$. Assume $\min \{ \alpha, \beta \alpha \} > d$. \begin{itemize} \item[(a)] If $\beta \alpha <2d$, then $\lambda_c = 0$. \item[(b)] If $\beta \alpha >2d$, then $\lambda_c > 0$. \end{itemize} \end{theorem} \noindent Theorems \ref{theo 31} and \ref{theo 32} give the phase transition\index{phase transition} pictures for $d\ge1$, see Fig.~\ref{Picture: Phase} for an illustration. They differ for $d=1$ and $d\ge 2$ in that the former has a region where $\lambda_c=\infty$ and the latter does not, see also the distinction in Theorem \ref{theorem infinite cluster}. The most interesting case from a practical point of view is the infinite variance case, $1<\tau<2$ and $d<\beta\alpha<2d$, respectively, which provides percolation for any $\lambda>0$. It follows from Gandolfi et al.~\cite{GKN} that there is only one infinite connected component $\mathcal{C}_\infty$ whenever $\lambda>\lambda_c$, a.s. A difficult question to answer is what happens at criticality for $\lambda_c>0$. There is the following partial result, see Theorem 3 in Deprez et al.~\cite{HW}: for $\alpha \in (d,2d)$ and $\beta \alpha > 2d$, there does not exist an infinite connected component at criticality $\lambda_c>0$. The case $\min\{\alpha,\beta\alpha\}>2d$ is still open. \begin{figure} \caption{phase transition picture for $d\ge1$} \label{Picture: Phase} \end{figure} Next we consider the graph distance $d(x,y)$, see also (\ref{definition of graph distance d}). We have the following result, see Deijfen et al.~\cite{Remco} and Theorem 8 in Deprez et al.~\cite{HW}. \begin{theorem}\label{graph distance} Assume $\min \{ \alpha, \beta \alpha \} > d$. \begin{itemize} \item[(a)] (infinite variance of degree distribution $1<\tau<2$). Assume $d<\beta\alpha<2d$. For any $\lambda>\lambda_c=0$ there exists $\eta_1> 0$ such that for all $\epsilon>0$ \begin{equation*} \lim_{\|x\|\to \infty} {\mathbb P}_{\lambda, \alpha, \beta} \left[ \left. \eta_1\le \frac{d(0,x)}{\log \log \|x\|}\le \frac{2}{|\log(\beta\alpha/d-1)|}+\epsilon \right|0,x\in {\cal C}_\infty\right]=1. \end{equation*} \item[(b1)] (finite variance of degree distribution $\tau>2$ case 1). Assume that $\beta\alpha >2d$ and $\alpha \in (d,2d)$. For any $\lambda>\lambda_c$ there exists $\eta_2\ge 1 $ such that for all $\epsilon>0$ \begin{equation*} \lim_{\|x\|\to \infty} {\mathbb P}_{\lambda, \alpha, \beta} \left[ \left. \eta_2-\epsilon \le \frac{\log d(0,x)}{\log\log \|x\|}\le \Delta+\epsilon \right|0,x\in {\cal C}_\infty\right]=1, \end{equation*} where $\Delta$ was defined in Theorem \ref{theorem homogeneous 1}. \item[(b2)] (finite variance of degree distribution $\tau>2$ case 2). Assume $\min\{\alpha,\beta\alpha\} >2d$. There exists $\eta_3>0$ such that \begin{equation*} \lim_{\|x\|\to \infty} {\mathbb P}_{\lambda, \alpha, \beta} \left[\eta_3< \frac{ d(0,x)}{ \|x\|} \right]=1. \end{equation*} \end{itemize} \end{theorem} Compare Theorem \ref{graph distance} (heterogeneous case) to Theorems \ref{theorem homogeneous 1} and \ref{theorem homogeneous 2} (homogeneous case). 
We observe that in the finite variance cases (b1)--(b2), i.e.~for $\tau=\beta\alpha/d>2$, we obtain the same behaviour for heterogeneous and homogeneous long-range percolation models. The infinite variance case (a) of the degree distribution, i.e.~$1<\tau <2$ and $d<\beta\alpha<2d$, respectively, is new. This infinite variance case provides a much slower decay of the graph distance, that is, $d(0,x)$ is of order $\log \log \|x\|$ as $\|x\|\to \infty$. This is a pronounced version of the small-world effect, and this behaviour is similar to the NSW random graph model. Recall that empirical studies often suggest a tail parameter $\tau$ between 1 and 2 which corresponds to the infinite variance regime of the degree distribution. In Fig.~\ref{Picture: Distances} we illustrate Theorem \ref{graph distance} and we complete the picture about the chemical distances with the corresponding conjectures. \begin{figure} \caption{chemical distances according to Theorem \ref{graph distance} and the corresponding conjectures} \label{Picture: Distances} \end{figure} We conclude that this model fulfils all three stylised facts: the small-world effect, the clustering property (which is induced by the Euclidean distance in the probability weights (\ref{heterogeneous long-range percolation})), and the heavy-tailedness of the degree distribution. \section{Continuum space long-range percolation model\index{percolation!continuum space long-range}} \label{homogeneous Poisson model} The model of the last section is restricted to the lattice ${\mathbb Z}^d$. A straightforward modification is to replace the lattice ${\mathbb Z}^d$ by a homogeneous Poisson point process $X$ in ${\mathbb R}^d$. In comparison to the lattice model, some of the proofs simplify because we can apply classical integration in ${\mathbb R}^d$; other proofs become more complicated because one needs to make sure that the realisation of the Poisson point process is sufficiently regular in space. As in Deprez-W\"uthrich \cite{DW} we consider a homogeneous marked Poisson point process in ${\mathbb R}^d$, where \begin{itemize} \item $X$ denotes the spatially homogeneous Poisson point process\index{Poisson point process} in ${\mathbb R}^d$ with constant intensity $\nu>0$. The individual particles of $X$ are denoted by $x\in X \subset {\mathbb R}^d$; \item $W_x$, $x\in X$, are i.i.d.~marks\index{marks} having a Pareto distribution with threshold parameter 1 and tail parameter $\beta>0$, see (\ref{definition of Pareto}). \end{itemize} Choose $\alpha>0$ and $\lambda>0$ fixed. Conditionally given $X$ and $(W_x)_{x\in X}$, we consider the edge probabilities\index{edge probability} for $x,y\in X$ given by \begin{equation}\label{Poisson long-range percolation} p_{x,y} = 1-\exp(-\lambda W_xW_y\|x-y\|^{-\alpha}). \end{equation} Between any pair $x,y\in X$ we attach an edge, independently of all other edges, as follows \begin{equation*} \eta_{x,y}=\eta_{y,x} = \left\{ \begin{array}{ll} 1 \qquad & \text{ with probability $p_{x,y}$,}\\ 0 \qquad & \text{ with probability $1-p_{x,y}$.} \end{array} \right. \end{equation*} We denote the resulting probability measure on the edge configurations by ${\mathbb P}_{\nu,\lambda, \alpha, \beta}$. Figure \ref{HetLRP_and_RCM} (rhs) shows part of a realised configuration. We have the following result for the degree distribution, see Proposition 3.2 and Theorem 3.3 in Deprez-W\"uthrich \cite{DW}. \begin{theorem} \label{infinite variance case Poisson} Fix $d\ge 1$. 
We have the following two cases for the degree distribution: \begin{itemize} \item for $\min\{\alpha, \beta \alpha\} \le d$, a.s., ${\mathbb P}_0[{\cal D}(0)=\infty|W_0]=1$; \item for $\min\{\alpha, \beta \alpha\} > d$ set $\tau=\beta\alpha/d$, then \begin{equation*} {\mathbb P}_{0}\left[{\cal D}(0) > k \right] =k^{-\tau} \ell(k), \end{equation*} for some function $\ell(\cdot)$ that is slowly varying at infinity. \end{itemize} \end{theorem} \noindent {\it Remarks.} \begin{itemize} \item Note that the previous statement needs some care because we need to make sure that there is a particle at the origin. This is not straightforward in the Poisson case and ${\mathbb P}_0$ can be understood as the conditional distribution, conditioned on having a particle at the origin. The formally precise construction is known as the Palm distribution\index{Palm distribution}, which considers distributions shifted by the particles in the Poisson cloud $X$. \item In analogy to the homogeneous long-range percolation model in ${\mathbb Z}^d$ we could also consider continuum space homogeneous long-range percolation in ${\mathbb R}^d$. This is achieved by setting $W_x=W_y=1$, a.s., in (\ref{Poisson long-range percolation}). In this case the proof of the statement equivalent to (\ref{homogeneous lattice}) becomes rather easy. We briefly give the details in the next lemma, see also proof of Lemma 3.1 in Deprez-W\"uthrich \cite{DW}. \end{itemize} \begin{lemma} \label{lemma Poisson degrees} Choose $W_x=W_y=1$, a.s., in (\ref{Poisson long-range percolation}). For $\alpha\le d$ we have, a.s., ${\cal D}(0)=\infty$; for $\alpha>d$ the degree ${\cal D}(0)$ has a Poisson distribution. \end{lemma} \noindent \begin{Proof}[of Lemma \ref{lemma Poisson degrees} and (\ref{homogeneous lattice}) in continuum space]~ Let $X$ be a Poisson cloud with $0\in X$ and denote by $X(A)$ the number of particles in $X\cap A$ for $A\subset{\mathbb R}^d$. Every particle $x\in X\setminus\{0\}$ is now independently from the others removed from the Poisson cloud with probability $1-p_{0,x}$. The resulting process $\widetilde{X}$ is a thinned Poisson cloud having intensity function $x\mapsto\nu p_{0,x}=\nu(1-\exp(-\lambda\|x\|^{-\alpha}))\sim\nu\lambda\|x\|^{-\alpha}$ as $\|x\|\to \infty$. Since $\mathcal{D}(0)=\tilde X({\mathbb R}^d\setminus\{0\})$ it follows that $\mathcal{D}(0)$ is infinite, a.s., if $\alpha\le d$ and that $\mathcal{D}(0)$ has a Poisson distribution otherwise. To see this let $\mu$ denote the Lebesgue measure in ${\mathbb R}^d$ and choose a finite Borel set $A \subset {\mathbb R}^d$ containing the origin. Since $A$ contains the origin, we have $X(A) \ge \widetilde{X}(A)\ge 1$. This motivates for $k\in \mathbb{N}_0$ to study \begin{eqnarray*} {\mathbb P}_0\left[\widetilde{X}(A)=k+1 \right] &=& \sum_{i\ge k} {\mathbb P}_0\left[\left.\widetilde{X}(A)=k+1\right|X(A)=i+1 \right] {\mathbb P}_0\left[{X}(A)=i+1 \right]. \end{eqnarray*} Since $A$ contains the origin, the case $i=0$ is trivial, i.e.~${\mathbb P}_0[\widetilde{X}(A)=1 |X(A)=1 ]=1$. There remains $i\ge 1$. Conditionally on $\{{X}(A)=i+1\}$, the $i$ particles (excluding the origin) are independent and uniformly distributed in $A$. The conditional moment generating function for $r\in {\mathbb R}$ is then given by \begin{eqnarray*} &&\hspace{-1cm} {\mathbb E}_0 \left[\left. 
\exp \left\{ r\left(\widetilde{X}(A)-1\right)\right\}\right|X(A)=i+1 \right] \\&&=~ \frac{1}{\mu (A)^i} \int_{A\times \cdots \times A} {\mathbb E}_0 \left[ \exp \left\{ r \sum_{l=1}^i \eta_{0,x_l} \right\} \right] dx_1\cdots dx_i \\ &&=~ \frac{1}{\mu (A)^i} \int_{A\times \cdots \times A} \prod_{l=1}^i {\mathbb E}_0 \left[ \exp \left\{ r \eta_{0,x_l} \right\} \right] dx_1\cdots dx_i \\&&=~ \left(\frac{1}{\mu (A)} \int_{A} {\mathbb E}_0 \left[ \exp \left\{ r \eta_{0,x} \right\} \right] dx\right)^i. \end{eqnarray*} We calculate the integral for $W_0=W_{x}=1$, a.s., in (\ref{Poisson long-range percolation}) \begin{eqnarray*} \frac{1}{\mu (A)} \int_{A} {\mathbb E}_0 \left[ \exp \left\{ r \eta_{0,x} \right\} \right] dx &=& \frac{1}{\mu (A)} \int_{A} e^r p_{0,x}+ (1- p_{0,x})~ dx \\&=& e^rp(A) + \left(1- p(A)\right), \end{eqnarray*} with $p(A)=\mu (A)^{-1} \int_{A} p_{0,x} dx \in (0,1)$. Thus, conditionally on $\{X(A)=i+1\}$, $\widetilde{X}(A)-1$ has a binomial distribution with parameters $i$ and $p(A)$. This implies that \begin{eqnarray*} {\mathbb P}_0\left[\widetilde{X}(A)=k+1 \right] &=&\sum_{i\ge k} \binom{i}{k}~ p(A)^k \left(1-p(A)\right)^{i-k}~ {\mathbb P}_0\left[{X}(A)=i+1 \right] \\&=&\sum_{i\ge k} \frac{p(A)^k}{k!} \frac{\left(1-p(A)\right)^{i-k}}{(i-k)!}~ \exp \{-\nu \mu(A)\}(\nu \mu(A))^i \\&=&\frac{(\nu \mu(A)p(A))^k}{k!} \sum_{i\ge k} \frac{\left(\nu \mu(A)(1-p(A))\right)^{i-k}}{(i-k)!} \exp \{-\nu \mu(A)\} \\&=& \exp \left\{-\nu \mu(A)p(A)\right\} \frac{(\nu \mu(A)p(A))^k}{k!} \\&=& \exp \left\{-\nu\int_{A} p_{0,x} dx\right\} \frac{\left( \nu\int_{A} p_{0,x} dx\right)^k}{k!}. \end{eqnarray*} This implies that $\widetilde{X}$ is a non-homogeneous Poisson point process with intensity function \begin{equation*} x\mapsto \nu p_{0,x} =\nu \left(1-\exp(-\lambda \|x\|^{-\alpha} )\right) \sim \nu \lambda \|x\|^{-\alpha}, \qquad \text{ as $\|x\|\to \infty$.} \end{equation*} But this immediately implies that the degree distribution ${\cal D}(0)=\widetilde{X}({\mathbb R}^d\setminus\{0\})$ is infinite, a.s., if $\alpha \le d$, and that it has a Poisson distribution otherwise. This finishes the proof. \qed \end{Proof} ~ \noindent We now switch back to the heterogeneous long-range percolation model (\ref{Poisson long-range percolation}). We consider the connected component $\mathcal{C}(0)$ of a particle in the origin under the Palm distribution ${\mathbb P}_0$. We define the percolation probability\index{percolation!probability} \begin{equation*} \theta(\lambda)= {\mathbb P}_{0} \left[ |{\cal C}(0)|=\infty \right]. \end{equation*} The {\it critical percolation value}\index{percolation!critical value} $\lambda_c$ is then defined by \begin{equation*} \lambda_c = \inf \left\{ \lambda>0:~\theta(\lambda)>0\right\}. \end{equation*} We have the following results, see Theorem 3.4 in Deprez-W\"uthrich \cite{DW}. \begin{theorem} \label{theo 31 DW} Fix $d\ge 1$. Assume $\min \{ \alpha, \beta \alpha \} > d$. \begin{itemize} \item[(a)] If $d\ge 2$, then $\lambda_c<\infty$. \item[(b)] If $d=1$ and $\alpha \in (1,2]$, then $\lambda_c<\infty$. \item[(c)] If $d=1$ and $\min \{ \alpha, \beta \alpha \} > 2$, then $\lambda_c=\infty$. \end{itemize} \end{theorem} \begin{theorem} \label{theo 32 DW} Fix $d\ge 1$. Assume $\min \{ \alpha, \beta \alpha \} > d$. \begin{itemize} \item[(a)] If $\beta \alpha <2d$, then $\lambda_c = 0$. \item[(b)] If $\beta \alpha >2d$, then $\lambda_c > 0$. 
\end{itemize} \end{theorem} These are the continuum space analogues to Theorems \ref{theo 31} and \ref{theo 32}; for an illustration see also Fig.~\ref{Picture: Phase}. The analysis of the graph distances in the continuum space long-range percolation model is still work in progress, but we expect similar results to the ones in Theorem \ref{graph distance}, see also Fig.~\ref{Picture: Distances}. However, proofs in the continuum space model are more sophisticated due to the randomness of the positions of the particles. The advantage of the latter continuum space model (with homogeneous marked Poisson point process) is that it can be extended to non-homogeneous Poisson point processes. For instance, if certain areas are more densely populated than others we can achieve such a non-homogeneous space model by modifying the constant intensity $\nu$ to a space-dependent density function $\nu(\cdot):{\mathbb R}^d \to {\mathbb R}_+$. \section{Renormalisation techniques} \label{Renormalisation techniques} In this section we present a crucial technique that is used in many of the proofs of the previous statements. These proofs are often based on renormalisation techniques. That is, one collects particles in boxes. These boxes are defined to be either {\it good} (having a certain property) or {\it bad} (not possessing this property). These boxes are then again merged into bigger good or bad boxes. These scalings\index{scaling} and renormalisations are done over several generations of box sizes, see Fig.~\ref{Figure: renormalisation} for an illustration. The purpose of these rescalings is that one arrives at a certain generation of box sizes that possesses certain characteristics to which classical site-bond percolation results apply. We exemplify this with a particular example. \begin{figure} \caption{Example of the renormalisation technique. Define inductively the box lengths $m_n=2m_{n-1}$.} \label{Figure: renormalisation} \end{figure} \subsection{Site-bond percolation\index{percolation!site-bond} } Though we will not directly use site-bond percolation, we start with the description of this model because it is often useful. Site-bond percolation in ${\mathbb Z}^d$ is a modification of homogeneous long-range percolation introduced in Sect.~\ref{homogeneous long-range percolation}. Choose a fixed dimension $d\ge 1$ and consider the square lattice ${\mathbb Z}^d$. Assume that every site $x\in {\mathbb Z}^d$ is occupied independently with probability $r^\ast\in [0,1]$ and every bond between $x$ and $y$ in ${\mathbb Z}^d$ is occupied independently with probability \begin{equation}\label{site-bond percolation} p^\ast_{x,y} = 1-\exp(-\lambda^\ast \|x-y\|^{-\alpha}), \end{equation} for given parameters $\lambda^\ast>0$ and $\alpha>0$. The connected component ${\cal C}^\ast(x)$ of a given site $x\in {\mathbb Z}^d$ is then defined to be the set of all occupied sites $y\in {\mathbb Z}^d$ such that $x$ and $y$ are connected by a path only running through occupied sites and occupied bonds (if $x$ is not occupied then ${\cal C}^\ast(x)$ is the empty set). We can interpret this as follows: we place particles at sites $x\in {\mathbb Z}^d$ at random with probability $r^\ast$. This defines a (random) subset of ${\mathbb Z}^d$ and then we consider long-range percolation on this random subset, i.e.~this corresponds to a thinning of homogeneous long-range percolation in ${\mathbb Z}^d$. 
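A minimal numerical sketch of this site-bond model reads as follows (illustrative only; the finite box and the parameter values are ad hoc, and the origin's component is the empty set whenever the origin is not occupied, as in the definition above).

\begin{verbatim}
# Illustrative sketch: site-bond percolation on {-N,...,N}^2.  Sites are
# occupied independently with probability r_star; bonds between occupied sites
# x, y are occupied with p*_{x,y} = 1 - exp(-lambda_star |x-y|^(-alpha)).
# If the origin is not occupied, its connected component is the empty set.
import math, random
from collections import deque

def site_bond_cluster_of_origin(N, r_star, lam_star, alpha, seed=0):
    rng = random.Random(seed)
    sites = [(x, y) for x in range(-N, N + 1) for y in range(-N, N + 1)]
    occupied = [s for s in sites if rng.random() < r_star]
    if (0, 0) not in occupied:
        return set()
    adj = {s: [] for s in occupied}
    for i, u in enumerate(occupied):
        for v in occupied[i + 1:]:
            r = math.hypot(u[0] - v[0], u[1] - v[1])
            if rng.random() < 1.0 - math.exp(-lam_star * r ** (-alpha)):
                adj[u].append(v)
                adj[v].append(u)
    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

if __name__ == "__main__":
    C0 = site_bond_cluster_of_origin(N=20, r_star=0.9, lam_star=1.0, alpha=3.0)
    print("the cluster of the origin contains", len(C0), "occupied sites")
\end{verbatim}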
We can then study the percolation properties of this site-bond percolation model; some results are presented in Lemma 3.6 of Biskup \cite{Biskup} and in the proof of Theorem 2.5 of Berger \cite{Berger1}. The aim in many proofs in percolation theory is to define different generations of box sizes using renormalisations, see Fig.~\ref{Figure: renormalisation}. We perform these renormalisations until we arrive at a generation of box sizes for which good boxes occur sufficiently often. If this is the case and if all the necessary dependence assumptions are fulfilled we can apply classical site-bond percolation results. In order to simplify our outline we use a modified version of the homogeneous long-range percolation\index{percolation!modified homogeneous long-range} model (\ref{long-range percolation}) of Sect.~\ref{homogeneous long-range percolation}. We set $p=1-\exp(-\lambda)$ and obtain the following model. \begin{model}[modified homogeneous long-range percolation] \label{long-range percolation model} Fix $d\ge 2$. Choose $\alpha>0$ and $\lambda>0$ fixed and define the edge probabilities\index{edge probability} for $x,y \in {\mathbb Z}^d$ by \begin{equation*} p_{x,y} = 1-\exp(-\lambda \|x-y\|^{-\alpha}). \end{equation*} Then edges between all pairs of particles $x,y \in {\mathbb Z}^d$ are attached independently with edge probability $p_{x,y}$ and the probability measure of the resulting edge configurations $\eta=(\eta_{x,y})_{x,y\in {\mathbb Z}^d}$ is denoted by ${\mathbb P}_{\lambda, \alpha}$. \end{model} Note that this model is a special case of site-bond percolation with $r^\ast=1$ and $\lambda^\ast=\lambda$ in (\ref{site-bond percolation}). \subsection{Largest semi-clusters\index{semi-cluster}} In order to demonstrate the renormalisation technique we repeat the proof of Lemma 2.3 of Berger \cite{Berger1} in the modified homogeneous long-range percolation Model \ref{long-range percolation model}, see Theorem \ref{connected component} below. This proof is rather sophisticated because it needs a careful treatment of dependence, and we revisit the second version of the proof of Lemma 2.3 provided in Berger \cite{Berger11}. Fix $\alpha\in(d,2d)$ and choose $\lambda>0$ so large that there exists a unique infinite connected component, a.s., having density $\kappa>0$ (which exists due to Theorem \ref{theorem infinite cluster}). Choose $M \ge 1$ and $K\ge 0$ integer valued. For $v\in {\mathbb Z}^d$ we define box\index{box} $B_v$ and its $K$-enlargement\index{$K$-enlargement} $B^{(K)}_v$ by \begin{equation*} B_v = Mv + [0,M-1]^d \qquad \text{ and } \qquad B^{(K)}_v = Mv + [-K,M+K-1]^d. \end{equation*} For every box $B_v$ we define an $\ell$-\textit{semi-cluster} to be a set of at least $\ell$ sites in $B_v$ which are connected within $B_v^{(K)}$. For any $\varepsilon>0$ there exists $M'\ge 1$ such that for all $M\ge M'$ and some $K\ge 0$ we have \begin{eqnarray}\label{good probability} {\mathbb P}_{\lambda, \alpha} [\text{at least $M^d\kappa/2$ sites of $B_v$ belong to the infinite connected } ~ \\\text{component and these sites are connected within $B^{(K)}_v$}] &\ge& 1- \varepsilon/2. \nonumber \end{eqnarray} Existence of $M'\ge 1$ follows from the ergodic theorem and existence of $K$ from the fact that the infinite connected component is unique, a.s., and therefore all sites in $B_v$ belonging to the infinite connected component need to be connected within a certain $K$-enlargement of $B_v$. 
Formula (\ref{good probability}) says that we have a $(M^d\kappa/2)$-semi-cluster in $B_v$ with at least probability $1-\varepsilon/2$. We first show uniqueness of large semi-clusters. \begin{lemma} \label{uniqueness of semi-clusters} Choose $\xi \in (\alpha/d, 2)$ and $\gamma \in (0,1)$ with $18\gamma>16+\xi$. There exist $\varphi=\varphi(\xi,\gamma)>0$ and $M'=M'(\xi,\gamma)\ge 1$ such that for all $M\ge M'$ and all $K\ge 0$ we have \begin{equation*} {\mathbb P}_{\lambda, \alpha} \left[\text{there is at most one $M^{d\gamma}$-semi-cluster in $B_v$}\right] > 1-M^{-d\varphi}, \end{equation*} where by ``at most one'' we mean that there is no second $M^{d\gamma}$-semi-cluster in $B_v$ which is not connected to the first one within $B_v^{(K)}$. \end{lemma} \noindent \begin{Proof}[of Lemma \ref{uniqueness of semi-clusters}] The proof uses the notion of inhomogeneous random graphs\index{random graph!inhomogeneous} as defined in Aldous \cite{Aldous}. An inhomogeneous random graph $H(N,\xi)$ with size $N$ and parameter $\xi$ is a set of particles $\{1,\ldots,k\}$ and corresponding masses $s_1,\ldots,s_k$ such that $N=\sum_{i=1}^ks_i$; and any $i\neq j$ are connected independently with probability $1-\exp\left(-s_is_jN^{-\xi}\right)$. From Lemma 2.5 of Berger \cite{Berger11} we know that for any $1<\xi<2$ and $0<\gamma<1$ with $18\gamma>16+\xi$, there exist $\varphi=\varphi(\xi,\gamma)>0$ and $N'=N'(\xi,\gamma)\ge 1$ such that for all $N\ge N'$ and every inhomogeneous random graph with size $N$ and parameter $\xi$ we have \begin{eqnarray}\label{Equation: Lemma 2.5} &&\hspace{-.5cm}\nonumber {\mathbb P}\left[ \text{$H(N,\xi)$ contains more than one connected component $C$ with $\sum_{i\in C}s_i \ge N^\gamma$} \right] \\&&<~ N^{-\varphi}. \end{eqnarray} We now show uniqueness of $M^{d\gamma}$-semi-clusters in $B_v$. Choose $\xi \in (\alpha/d, 2)$ and $\gamma \in (0,1)$ such that $18\gamma>16+\xi$. For any $x,y\in B_v$ we have $\|x-y\| \le \sqrt{d}M$. Choose $M$ so large that $\lambda(\sqrt{d}M)^{-\alpha}>M^{-d\xi}$ and choose $K\ge 0$ arbitrarily. Particles $x,y\in B_v$ are then attached with probability $p_{x,y}$ uniformly bounded by \begin{eqnarray*} p_{x,y} &=& 1-\exp(-\lambda \|x-y\|^{-\alpha}) \\&\ge& 1- \exp\left(-\lambda\left(\sqrt{d}M\right)^{-\alpha}\right) > 1- \exp\left(-M^{-d\xi}\right) = \nu>0, \end{eqnarray*} where the last equality defines $\nu$. This allows to decouple the sampling of edges $\eta=(\eta_{x,y})_{x,y\in {\mathbb Z}^d}$ in $B_v$. For every $x,y\in B_v$, define $p'_{x,y}\in (0,1)$ by \begin{equation*} p_{x,y}= p'_{x,y} + \nu - \nu p'_{x,y}. \end{equation*} We now sample $\eta=(\eta_{x,y})_{x,y\in{\mathbb Z}^d}$ in two steps. We first sample $\eta'$ according to Model \ref{long-range percolation model} but with edge probabilities $p'_{x,y}$ if $x,y \in B_v$ and with edge probabilities $p_{x,y}$ otherwise. Secondly, we sample $\eta''$ as an independent configuration on $B_v$ where there is an edge between $x$ and $y$ with edge probability $\nu$ for $x,y \in B_v$. By definition of $p'_{x,y}$ we get that $\eta'\vee\eta'' \stackrel{\rm (d)}{=}\eta$. Let $S_{1},S_2 \subset B_v$ be two disjoint maximal sets of sites in $B_v$ that are $\eta'$-connected within $B_v^{(K)}$, i.e.~$S_1$ and $S_2$ are two disjoint maximal semi-clusters in $B_v$ for given edge configuration $\eta'$. Note that by maximality \begin{eqnarray*} &&\hspace{-1cm} {\mathbb P}\left[\left. \text{there is an $\eta$-edge between $S_1$ and $S_2$}\right|\eta' \right] \\&&= {\mathbb P}\left[\left. 
\text{there is an $\eta''$-edge between $S_1$ and $S_2$}\right|\eta' \right] \\&&=1-(1-\nu)^{|S_{1}||S_{2}|}~=~ 1- \exp\left(-|S_{1}||S_{2}|M^{-d\xi}\right). \end{eqnarray*} If we denote by $S_1, \ldots, S_k$ all disjoint maximal semi-clusters in $B_v$ for given edge configuration $\eta'$ then we see that these maximal semi-clusters form an inhomogeneous random graph of size $\sum_{i=1}^k |S_i| =M^d$ and parameter $\xi$. Therefore, there exist $\varphi>0$ and $M'\ge 1$ such that for all $M\ge M'$ and all $K\ge 0$ we have from (\ref{Equation: Lemma 2.5}) \begin{eqnarray*} &&\hspace{-.1cm} {\mathbb P}\Bigg[ \text{$H(M^d,\xi)$ contains more than one connected component $C$} ~ \\&&\hspace{6cm}\text{with $\sum_{i\in C}|S_i| \ge M^{d\gamma}$}\Bigg|\eta' \Bigg] <M^{-d\varphi}. \end{eqnarray*} Note that this bound is uniform in $\eta'$ and $K\ge 0$. Therefore, the probability of having at least two $M^{d\gamma}$-semi-clusters in $B_v$ which are not connected within $B_v^{(K)}$ is bounded by $M^{-d\varphi}$. \qed \end{Proof} ~ \noindent We can now combine (\ref{good probability}) and Lemma \ref{uniqueness of semi-clusters}. Choose $\varepsilon>0$. For all $M$ sufficiently large and $K\ge 0$ such that (\ref{good probability}) holds we have \begin{equation}\label{exaclty one large semi-cluster} {\mathbb P}_{\lambda, \alpha} \left[\text{there is exactly one $(M^{d}\kappa/2)$-semi-cluster in $B_v$}\right] \ge 1-\varepsilon, \end{equation} where by ``exactly one'' we mean that there is no other $(M^{d}\kappa/2)$-semi-cluster in $B_v$ which is not connected to the first one within $B_v^{(K)}$. This follows because of $\gamma<1$, which implies that $M^{d\gamma}\le M^d \kappa/2$ for all $M$ sufficiently large, and because $M^{-d\varphi}<\varepsilon/2$ for all $M$ sufficiently large. \subsection{Renormalisation} Choose $\varepsilon>0$ fixed, and $M> 1$ and $K\ge 0$ such that (\ref{exaclty one large semi-cluster}) holds. For $v\in {\mathbb Z}^d$ we say that box $B_v$ is {\it good} if there is exactly one $(M^d\kappa/2)$-semi-cluster in $B_v$ (where ``exactly one'' is meant in the sense above). Therefore, on good boxes there are at least $M^d\kappa/2$ sites in $B_v$ that are connected within $B^{(K)}_v$ and we have \begin{equation}\label{good probability 2} {\mathbb P}_{\lambda, \alpha} [ \text{$B_v$ is good} ] \ge 1- \varepsilon. \end{equation} Note that the goodness properties of $B_{v_1}$ and $B_{v_2}$ for $v_1\neq v_2\in {\mathbb Z}^d$ are not necessarily independent because their $K$-enlargements $B^{(K)}_{v_1}$ and $B^{(K)}_{v_2}$ may overlap. Now, we define renormalisation over different generations $n\in \mathbb{N}_0$; the terminology {\it $n$-stage}\index{box!$n$-stage} refers to the $n$-th generation. Choose an integer valued sequence $a_n>1$, $n\in \mathbb{N}_0$, with $a_0=M$ and define the box lengths $(M_n)_{n\in \mathbb{N}_0}$ as follows: set $M_0=a_0=M$ and for $n\in\mathbb{N}$ \begin{equation*} M_n= a_nM_{n-1}=M_0\prod_{i=1}^n a_i=\prod_{i=0}^n a_i. \end{equation*} Define the $n$-stage boxes, $n\in \mathbb{N}_0$, by \begin{equation*} B_{n,v} = M_nv + [0,M_n-1]^d \qquad \text{ with $v\in {\mathbb Z}^d$.} \end{equation*} Note that $n$-stage boxes $B_{n,v}$ have volume $M^d_n= a_n^dM^d_{n-1}=\prod_{i=0}^n a^d_i$ and every $n$-stage box $B_{n,v}$ contains $a_n^d$ of $(n-1)$-stage boxes $B_{n-1,x}\subset B_{n,v}$, and $(M_n/a_0)^d= \prod_{i=1}^n a^d_i$ of $0$-stage boxes $B_{x}=B_{0,x}\subset B_{n,v}$, see also Fig.~\ref{Figure: renormalisation}. 
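The bookkeeping behind these nested boxes can be made concrete with a few lines of code; the following tiny sketch (example values of $d$, $M$ and $\delta$ only; the choice $a_n=(n+1)^\delta$ anticipates the explicit sequences used in the proof of Theorem \ref{connected component} below) prints the box lengths $M_n$ and the number of $(n-1)$-stage and $0$-stage boxes contained in one $n$-stage box.

\begin{verbatim}
# Tiny numerical illustration of the renormalisation bookkeeping: box lengths
# M_n = a_n * M_{n-1} and the number of sub-boxes of an n-stage box in Z^d.
# The values of d, M and delta below are examples only.
d, M, delta = 2, 10, 2
a = lambda n: (n + 1) ** delta       # example choice of the sequence a_n

M_n = M                              # M_0 = a_0 = M
for n in range(1, 5):
    M_n *= a(n)
    print("n =", n,
          " M_n =", M_n,
          " (n-1)-stage boxes inside:", a(n) ** d,
          " 0-stage boxes inside:", (M_n // M) ** d)
\end{verbatim}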
~ \noindent {\bf Renormalisation.\index{renormalisation}} We define goodness of $n$-stage boxes $B_{n,v}$ recursively for a given sequence $\kappa_n \in (0,1)$, $n\in \mathbb{N}_0$, of densities, where we initialise $\kappa_0 = \kappa/2$. ~ \noindent {\it (i) Initialisation $n=0$.} We say that $0$-stage box $B_{0,v}$, $v\in {\mathbb Z}^d$, is good\index{box!good} if it contains exactly one $(\kappa_0a_0^d)$-semi-cluster. Due to our choices of $M> 1$ and $K\ge 0$ we see that the goodness of $0$-stage box $B_{0,v}$ occurs with at least probability $1-\varepsilon$, see (\ref{good probability 2}). ~ \noindent {\it (ii) Iteration $n-1 \to n$.} Choose $n\in \mathbb{N}$ and assume that goodness of $(n-1)$-stage boxes $B_{n-1,v}$, $v\in {\mathbb Z}^d$, has been defined. For $v\in {\mathbb Z}^d$ we say that $n$-stage box $B_{n,v}$ is good if the event $A_{n,v}= A^{(a)}_{n,v}\cap A^{(b)}_{n,v}$ occurs, where \begin{itemize} \item[(a)] $A^{(a)}_{n,v}=\{\text{at least $\kappa_{n}a_{n}^d$ of the $(n-1)$-stage boxes $B_{n-1,x}\subset B_{n,v}$ are good}\}$; and \item[(b)] $A^{(b)}_{n,v}=\{\text{all $(\prod_{i=0}^{n-1}\kappa_ia_i^d)$-semi-clusters of all good $(n-1)$-stage boxes }$\\ $\text{\hspace{6cm} in $B_{n,v}$ are connected within $B^{(K)}_{n,v}$}\}$. \end{itemize} \qed ~ \noindent Observe that on event $A_{n,v}$ the $n$-stage box $B_{n,v}$ contains at least $\prod_{i=0}^{n}\kappa_ia_i^d$ sites that are connected within the $K$-enlargement $B^{(K)}_{n,v}$ of $B_{n,v}$. We set density $u_n=\prod_{i=0}^n\kappa_i$ which gives \begin{equation*} \prod_{i=0}^n\kappa_i a_i^d=M_n^d\prod_{i=0}^n\kappa_i=M_n^d u_n. \end{equation*} Therefore, good $n$-stage boxes contain $(M_n^d u_n)$-semi-clusters. Our next aim is to calculate the probability $p_n$ of having a good $n$-stage box. The case $n=0$ follows from (\ref{good probability 2}), i.e.~for any $\varepsilon>0$ and any $M$ sufficiently large there exists $K\ge 0$ such that \begin{equation*} p_0~=~{\mathbb P}_{\lambda, \alpha} [ \text{$B_{0,v}$ is good} ] ~=~{\mathbb P}_{\lambda, \alpha} [ \text{$B_v$ is good} ] ~\ge~ 1-\varepsilon. \end{equation*} \begin{theorem}\label{connected component} Assume $\alpha \in (d,2d)$. Choose $\lambda>0$ so large that we have a unique infinite connected component, a.s., having density $\kappa>0$. For every $\varepsilon' \in (0,1)$ there exists $N_0\ge 1$ such that for all $N\ge N_0$ \begin{equation*} {\mathbb P}_{\lambda, \alpha}\left[|C_N| \ge N^{\alpha/2}\right]\ge 1-\varepsilon', \end{equation*} where $C_N$ is the largest connected component\index{connected component!largest} in $[0,N-1]^d$. \end{theorem} Note that for density $\kappa>0$ of the infinite connected component we expect roughly $\kappa N^d$ sites in box $[0,N-1]^d$ belonging to the infinite connected component. The above theorem, however, says that at least $N^{\alpha/2}$ sites in $[0,N-1]^d$ are connected {\it within that box}. That is, here we do not need any $K$-enlargements as in (\ref{good probability}) and, therefore, this event is independent for different disjoint boxes $vN+[0,N-1]^d$ and we may apply classical site-bond percolation results. ~ \noindent \begin{Proof}[of Theorem \ref{connected component}] Choose $\alpha\in(d, 2d)$ and $\varepsilon' \in (0,1)$ fixed. As in Lemma 2.3 of Berger \cite{Berger11} we now make a choice of parameters and sequences which will provide the statement of Theorem \ref{connected component}. Choose $\xi \in (\alpha/d, 2)$ and $\gamma \in (0,1)$ such that $18\gamma>16+\xi$. 
Choose $\delta>\vartheta>1$ with $2\vartheta<\delta(2d-\alpha)$ and $d\delta-\vartheta>d\gamma \delta$. Note that this is possible because it requires that $\delta \min\{1,(2d-\alpha)/2,d(1-\gamma)\}>\vartheta>1$. Define for $n\in \mathbb{N}$ \begin{equation}\label{sequences} \kappa_n=(n+1)^{-\vartheta} \qquad \text{ and } \qquad a_n=(n+1)^\delta. \end{equation} For simplicity, we assume that $\delta$ is an integer which implies that also $a_n>1$ is integer valued, and $\kappa_n \in (0,1)$ will play the role of densities introduced above. Observe that for $\vartheta >1$ we have for all $n\ge 1$ \begin{equation}\label{c_1} \prod_{l=1}^n (1+3\kappa_l)~ \le~ \lim_{n\to \infty}\prod_{l=1}^n (1+3\kappa_l)~=~c_1 \in (1,\infty). \end{equation} Choose $\varepsilon \in (0,\varepsilon'/c_1)\subset(0,1)$ fixed. There still remains the choice of $a_0=M\ge 1$ and $\kappa_0\in (0,1)$. We set $\kappa_0=\kappa/2$. Note that choices (\ref{sequences}) imply \begin{equation*} (2M_{n-1}^d)^{\gamma}=2^\gamma M^{d\gamma} (n!)^{d\gamma\delta} \qquad \text{ and } \qquad M_{n-1}^d u_{n-1}=\frac{\kappa}{2}M^d (n!)^{d\delta-\vartheta}. \end{equation*} Therefore, \begin{equation}\label{rhs uniformly bounded in n} \frac{M_{n-1}^d u_{n-1}}{(2M_{n-1}^d)^{\gamma}}=\frac{\kappa}{2^{1+\gamma}}M^{d(1-\gamma)} (n!)^{d\delta-\vartheta-d\gamma\delta}. \end{equation} Because of $d\delta-\vartheta>d\gamma\delta$ the right-hand side of (\ref{rhs uniformly bounded in n}) is uniformly bounded from below in $n\ge 1$ and for $M$ sufficiently large the right-hand side of (\ref{rhs uniformly bounded in n}) is strictly bigger than 1 for all $n\ge 1$. Therefore, there exists $m_1\ge 1$ such that for all $M\ge m_1$ and all $n\ge 1$ we have \begin{equation}\label{crucial inequality 1} (2M_{n-1}^d)^{\gamma}<M_{n-1}^d u_{n-1}. \end{equation} Next we are going to bound for $n\in \mathbb{N}_0$ the probabilities \begin{equation*} p_n~=~{\mathbb P}_{\lambda, \alpha} [ \text{$B_{n,v}$ is good} ] ~=~{\mathbb P}_{\lambda, \alpha} [ A_{n,v} ]. \end{equation*} We have for $n\ge 1$ \begin{eqnarray} 1-p_n&=&{\mathbb P}_{\lambda, \alpha} \left[A^{\rm c}_{n,v}\right]\nonumber \\&=& {\mathbb P}_{\lambda, \alpha} \left[(A^{(a)}_{n,v}\cap A^{(b)}_{n,v})^{\rm c}\right] \label{bound one} \le {\mathbb P}_{\lambda, \alpha} \left[(A^{(a)}_{n,v})^{\rm c} \right] +{\mathbb P}_{\lambda, \alpha}\left[ (A^{(b)}_{n,v})^{\rm c}\right]. \end{eqnarray} For the first term in (\ref{bound one}) we have, using Markov's inequality and translation invariance, \begin{eqnarray*} {\mathbb P}_{\lambda, \alpha} \left[ (A^{(a)}_{n,v})^{\rm c}\right] &=& {\mathbb P}_{\lambda, \alpha} \left[\sum_{B_{n-1,x}\subset B_{n,v}} 1_{A_{n-1,x}} < \kappa_{n}a_{n}^d \right] \\&=& {\mathbb P}_{\lambda, \alpha} \left[\sum_{B_{n-1,x}\subset B_{n,v}} 1_{A_{n-1,x}^{\rm c}} > (1-\kappa_{n})a_{n}^d \right] \\&\le& \frac{1}{(1-\kappa_{n})a_{n}^d}~ \sum_{B_{n-1,x}\subset B_{n,v}} {\mathbb P}_{\lambda, \alpha} \left[A_{n-1,x}^{\rm c}\right] \\&=& \frac{1}{1-\kappa_{n}}~ {\mathbb P}_{\lambda, \alpha} \left[A_{n-1,v}^{\rm c}\right]~=~\frac{1-p_{n-1}}{1-\kappa_{n}}. \end{eqnarray*} The second term in (\ref{bound one}) is more involved due to possible dependence in the $K$-enlargements. Choose $\varphi=\varphi(\xi,\gamma)>0$ and $M'(\xi,\gamma)\ge 1$ as in Lemma \ref{uniqueness of semi-clusters}. 
On event $(A^{(b)}_{n,v})^{\rm c}$ there exist at least two $(M_{n-1}^d u_{n-1})$-semi-clusters in good $(n-1)$-stage boxes $B_{n-1,v_1}$ and $B_{n-1,v_2}$ in $B_{n,v}$ that are not connected within the $K$-enlargement $B_{n,v}^{(K)}$. Define $B=B_{n-1,v_1} \cup B_{n-1,v_2}$. Note that $B$ has volume $2M_{n-1}^d$ and that any $x,y \in B_{n,v}$ have maximal distance $\sqrt{d}M_n$. We analyse the following ratio \begin{equation*} \frac{\lambda(\sqrt{d}M_n)^{-\alpha}}{(2M^d_{n-1})^{-\xi}} =\lambda d^{-\alpha/2}2^{\xi}~M^{d \xi-\alpha} (n!)^{(d \xi-\alpha)\delta} ~(n+1)^{-\alpha \delta}. \end{equation*} Note that $d\xi>\alpha$. This implies that the right-hand side of the previous equality is uniformly bounded from below in $n\ge 1$. Therefore, there exists $m_2\ge m_1$ such that for all $M\ge m_2$ and all $n\ge 1$ inequality (\ref{crucial inequality 1}) holds and \begin{equation} \label{rhs uniformly bounded in n 2} \lambda(\sqrt{d}M_n)^{-\alpha}>(2M^d_{n-1})^{-\xi}. \end{equation} This choice implies that for any $x,y \in B$ we have \begin{eqnarray*} p_{x,y} &=& 1-\exp(-\lambda \|x-y\|^{-\alpha}) \\&\ge& 1- \exp\left(-\lambda\left(\sqrt{d}M_n\right)^{-\alpha}\right) > 1- \exp\left(-(2M^d_{n-1})^{-\xi}\right) = \nu_n>0, \end{eqnarray*} where the last equality defines $\nu_n$. We now proceed as in Lemma \ref{uniqueness of semi-clusters}. Decouple the sampling of edges $\eta=(\eta_{x,y})_{x,y\in {\mathbb Z}^d}$ in $B$. For every $x,y\in B$, define $p'_{x,y}\in (0,1)$ by \begin{equation*} p_{x,y}= p'_{x,y} + \nu_n - \nu_n p'_{x,y}. \end{equation*} We again sample $\eta=(\eta_{x,y})_{x,y\in{\mathbb Z}^d}$ in two steps. We first sample $\eta'$ according to Model \ref{long-range percolation model} but with edge probabilities $p'_{x,y}$ if $x,y \in B$ and with edge probabilities $p_{x,y}$ otherwise. Secondly, we sample $\eta''$ as an independent configuration on $B$ where there is an edge between $x$ and $y$ with edge probability $\nu_n$ for $x,y \in B$. By definition of $p'_{x,y}$ we get that $\eta'\vee\eta'' \stackrel{\rm (d)}{=}\eta$. Let $S_{1},S_2 \subset B$ be two disjoint maximal sets of sites in $B$ that are $\eta'$-connected within $B_{n,v}^{(K)}$, i.e.~$S_1$ and $S_2$ are two disjoint maximal semi-clusters in $B$ for given edge configuration $\eta'$. Note that by maximality \begin{eqnarray*} &&\hspace{-1cm} {\mathbb P}\left[\left. \text{there is an $\eta$-edge between $S_1$ and $S_2$}\right|\eta' \right] \\&&= {\mathbb P}\left[\left. \text{there is an $\eta''$-edge between $S_1$ and $S_2$}\right|\eta' \right] \\&&=~1-(1-\nu_n)^{|S_{1}||S_{2}|}~=~ 1- \exp\left(-|S_{1}||S_{2}|(2M^d_{n-1})^{-\xi}\right). \end{eqnarray*} If we denote by $S_1, \ldots, S_k$ all disjoint maximal semi-clusters in $B$ for given edge configuration $\eta'$ then we see that these maximal semi-clusters form an inhomogeneous random graph of size $\sum_{i=1}^k |S_i| =2M_{n-1}^d \ge 2 M^d$ and parameter $\xi$. Therefore, for choices $\varphi=\varphi(\xi,\gamma)>0$ and $m_3 \ge \max\{m_2, M'(\xi,\gamma)\}$ (where $\varphi(\xi,\gamma)$ and $M'(\xi,\gamma)$ were given by Lemma \ref{uniqueness of semi-clusters}) we have that for all $M\ge m_3$ and all $n\ge 1$ inequality (\ref{crucial inequality 1}) holds, and for all $K\ge 0$ we have from (\ref{Equation: Lemma 2.5}) \begin{eqnarray*} &&\hspace{-.2cm} {\mathbb P}\Bigg[ \text{$H(2M_{n-1}^d,\xi)$ contains more than one connected component $C$ } ~ \\&&\hspace{5.2cm}\text{with $\sum_{i\in C}|S_i| \ge (2M_{n-1}^d)^{\gamma}$}\Bigg|\eta' \Bigg] < (2M_{n-1}^d)^{-\varphi}. 
\end{eqnarray*} Note that this bound is uniform in $\eta'$ and $K\ge 0$ and holds for all $n\ge 1$. Therefore, the probability of having at least two $(2M_{n-1}^d)^{\gamma}$-semi-clusters in $B$ which are not connected within $B_{n,v}^{(K)}$ is bounded by $(2M_{n-1}^d)^{-\varphi}$. Next we use that for all $M\ge m_3$ inequality (\ref{crucial inequality 1}) holds. Therefore, we get for all $M\ge m_3$, all $n\ge 1$ and all $K\ge 0$ \begin{equation*} {\mathbb P}_{\lambda, \alpha} \left[ \text{there are at least two $(M_{n-1}^d u_{n-1})$-semi-clusters in $B$}\right] < (2M_{n-1}^d)^{-\varphi}. \end{equation*} Note that $B_{n,v}$ contains $a_n^d$ disjoint $(n-1)$-stage boxes; therefore we get for all $M\ge m_3$, all $n\ge 1$ and all $K\ge 0$ \begin{equation*} {\mathbb P}_{\lambda,\alpha}\left[ (A^{(b)}_{n,v})^{\rm c}\right] ~\le~ \binom{a_n^d}{2}\left(2M_{n-1}^{d}\right)^{-\varphi} ~\le~ a_n^{2d}M_{n-1}^{-d\varphi}. \end{equation*} This implies for all $M\ge m_3$, all $n\ge 1$ and all $K\ge 0$ \begin{eqnarray*} 1-p_n&=&{\mathbb P}_{\lambda, \alpha} [ \text{$B_{n,v}$ is not good} ] \\&=&{\mathbb P}_{\lambda, \alpha} [ A_{n,0}^c ]\le \frac{1-p_{n-1}}{1-\kappa_{n}}+a_n^{2d}M_{n-1}^{-d\varphi} \le (1-p_{n-1})(1+2\kappa_n)+a_n^{2d}M_{n-1}^{-d\varphi}. \end{eqnarray*} Consider \begin{equation*} \frac{a_n^{2d}M_{n-1}^{-d\varphi}}{\varepsilon \kappa_n}= \frac{(n+1)^{2d\delta}M^{-d\varphi}(n!)^{-d\delta\varphi}}{\varepsilon (n+1)^{-\vartheta}} =\varepsilon^{-1}M^{-d\varphi} (n+1)^{2d\delta+\vartheta}(n!)^{-d\delta\varphi}. \end{equation*} Note that this is uniformly bounded from above in $n$. Therefore, there exists $m_4\ge m_3$ such that for all $M\ge m_4$, all $n\ge 1$ and all $K\ge 0$ \begin{equation*} 1-p_n \le (1-p_{n-1})(1+2\kappa_n)+\varepsilon \kappa_n \le (1+3\kappa_n)\max\{1-p_{n-1},\varepsilon\}. \end{equation*} Applying induction we obtain for all $M\ge m_4$, all $n\ge 1$ and all $K\ge 0$ \begin{equation*} 1-p_n \le \max\{\varepsilon, 1-p_0\} \prod_{i=1}^n(1+3\kappa_i). \end{equation*} Choose $m_5 \ge m_4$ such that for all $M\ge m_5$ there exists $K=K(M)\ge 0$ such that (\ref{exaclty one large semi-cluster}) and (\ref{good probability 2}) hold. These choices imply that $p_0\ge 1-\varepsilon$. Therefore, for all $M\ge m_5$, $K(M)$ such that (\ref{good probability 2}) holds, and all $n\ge 1$ \begin{equation*} 1-p_n \le \varepsilon \prod_{i=1}^n(1+3\kappa_i)\le \varepsilon c_1 < \varepsilon', \end{equation*} where $c_1\in (1,\infty)$ was defined in (\ref{c_1}). Thus, for all $M\ge m_5$, $K(M)$ such that (\ref{good probability 2}) holds, and for all $n\ge 0$ \begin{eqnarray}\label{lower bound provides} &&\hspace{-1cm} {\mathbb P}_{\lambda,\alpha}\left[\text{there are at least $M_n^du_n$ sites in box $B_{n,0}$ connected within $B_{n,0}^{(K)}$}\right] \\&&\ge~ {\mathbb P}_{\lambda,\alpha} [A_{n,0}] \ge 1- \varepsilon',\nonumber \end{eqnarray} note that $B_{n,0}=[0,M_n-1]^d$. Note that the explicit choices (\ref{sequences}) provide \begin{equation*} M_n =M ((n+1)!)^\delta \qquad \text{ and } \qquad u_n = \kappa_0 ((n+1)!)^{-\vartheta}. \end{equation*} The edge length of $B_{n,0}^{(K)}$ is given by $M_n + 2 K = M_n + 2 K(M) = M ((n+1)!)^\delta + 2 K(M)$. Therefore, for all $M\ge 1$ there exists $n_0\ge 1$ such that for all $n\ge n_0$ \begin{equation*} M_n + 2 K \le N=N(M,n)=2 M ((n+1)!)^\delta, \end{equation*} where the last identity is the definition of $N=N(M,n)$. 
For all $n \ge n_0$ the number of connected vertices in $B_{n,0}^{(K)}$ under (\ref{lower bound provides}) is at least \begin{equation*} M^d_n u_n = \kappa_0 M^d((n+1)!)^{-\vartheta+d\delta} =2^{\vartheta/\delta-d}\kappa_0M^{\vartheta/\delta}N^{d-\vartheta/\delta}. \end{equation*} Note that the choices of $\delta$ and $\vartheta$ are such that $d-\vartheta/\delta>\alpha/2>0$. Therefore, there exists $n_1\ge n_0$ such that for all $n\ge n_1$ we have \begin{equation*} M^d_n u_n \ge N^{\alpha/2}. \end{equation*} This implies for all $n\ge n_1$, see (\ref{lower bound provides}), \begin{eqnarray*} &&\hspace{-.5cm} {\mathbb P}_{\lambda,\alpha}\left[|C_N| \ge N^{\alpha/2}\right] \\&&\ge {\mathbb P}_{\lambda,\alpha}\left[\text{there are at least $N^{\alpha/2}$ sites in box $B_{n,0}$ connected within $B_{n,0}^{(K)}$}\right] \\&&\ge {\mathbb P}_{\lambda,\alpha}\left[\text{there are at least $M_n^du_n$ sites in box $B_{n,0}$ connected within $B_{n,0}^{(K)}$}\right] \\&&\ge~1- \varepsilon', \end{eqnarray*} where $N=N(M,n)\ge N(M,n_1)=2M((n_1+1)!)^\delta$. This proves the claim on the grid $N(M,n_1),N(M,n_1+1),\ldots, $ with $N(M,n+1)=N(M,n)(n+2)^\delta$ for $n\ge n_1$. For $n' \in [N(M,n), N(M,n+1))$ we have on the set $\{|C_{N(M,n)}| \ge \rho_0 N(M,n)^{d-\vartheta/\delta}\}$ with $\rho_0=2^{\vartheta/\delta-d}\kappa_0M^{\vartheta/\delta}$ \begin{eqnarray*} |C_{n'}| &\ge& |C_{N(M,n)}| ~\ge~ \rho_0 N(M,n)^{d-\vartheta/\delta} \\&=& \rho_0 N(M,n)^{d-\vartheta/\delta-\alpha/2} \left(\frac{N(M,n)}{n'}\right)^{\alpha/2} (n')^{\alpha/2} \\&\ge&\rho_0 N(M,n)^{d-\vartheta/\delta-\alpha/2} \left(\frac{N(M,n)}{N(M,n+1)}\right)^{\alpha/2} (n')^{\alpha/2} \\&=&\rho_0 ~\frac{N(M,n)^{d-\vartheta/\delta-\alpha/2}} {\left(n+2\right)^{\delta\alpha/2}}~ (n')^{\alpha/2}~\ge~ (n')^{\alpha/2}, \end{eqnarray*} for all $n$ sufficiently large. This finishes the proof of Theorem \ref{connected component}.\qed \end{Proof} ~ \noindent {\bf Conclusion.} Theorem \ref{connected component} defines good boxes $[0,N-1]^d$ on a new scale, i.e.~these are boxes that contain sufficiently large connected components $C_N$. The latter occurs with probability $1-\varepsilon'\ge r^\ast$, for small $\varepsilon'$. If we can prove that such large connected components in disjoint boxes are connected by an occupied edge with probability bounded below by (\ref{site-bond percolation}), then we are in the set-up of a site-bond percolation model. This is exactly what is used in Theorem 3.2 of Biskup \cite{Biskup} in order to prove that (i) large connected components are percolating, a.s.; and (ii) $|C_N|$ is even of order $\rho N^{d}$ for an appropriate positive constant $\rho>0$, which improves Theorem \ref{connected component}. \end{document}
\begin{document} \title{\uppercase{Measure of a 2-component link}} \begin{abstract} A two-component link produces a torus as the product of the component knots in a two-point configuration space of a three-sphere. This space can be identified with a cotangent bundle and also with an indefinite Grassmannian. We show that the integral of the absolute value of the canonical symplectic form is equal to the area of the torus with respect to the pseudo-Riemannian structure, and that it attains the minimum only at the ``best'' Hopf links. \end{abstract} {\small {\it Key words and phrases}. Energy, link, symplectic measure, M\"obius geometry, pseudo-Riemannian geometry.} {\small 2000 {\it Mathematics Subject Classification.} Primary 57M25; Secondary 53A30} \section{Introduction} Since the energy of knots was introduced in \cite{OH1} about twenty years ago, aiming at producing an optimal knot for each knot type as an energy minimizer, a lot of related works have appeared, which form the so-called {\sl geometric knot theory} (see, for example, \cite{CMR,CMRS,idealknots}). The present paper deals with the same type of topic. We introduce a functional on the space of $2$-component links such that the absolute minimum is attained only at ``best'' Hopf links, not at trivial links. Let $C_1\cup C_2$ be a $2$-component link in $\vect{S}^3$. The value of our functional $A(C_1,C_2)$ can be interpreted in the following two ways. Observe that the link produces a torus $C_1\times C_2$ in $\vect{S}^3\times\vect{S}^3\setminus\Delta$, where $\Delta$ is the diagonal set. First, there is a natural identification between $\vect{S}^3\times\vect{S}^3\setminus\Delta$ and the total space of the cotangent bundle $T^\ast\vect{S}^3$. The pull-back $\omega$ of the canonical symplectic form of $T^\ast\vect{S}^3$ to $\vect{S}^3\times\vect{S}^3\setminus\Delta$ is the unique $2$-form (up to multiplication by a constant) which is invariant under the diagonal action of the M\"obius group. The $2$-form $\omega$ can also be considered as a natural symplectic form on the space of geodesics in a hyperbolic $4$-space $H^4$. As $\omega$ is exact, $\int_{C_1\times C_2}\omega$ vanishes, but $\int_{C_1\times C_2}|\omega|$ need not; this latter integral is $A(C_1,C_2)$. In this sense, it can be considered as an ``{\sl absolute symplectic measure}'' of the torus $C_1\times C_2$ in $T^\ast\vect{S}^3$. Second, from a M\"obius geometric viewpoint, $\vect{S}^3\times\vect{S}^3\setminus\Delta$ can be identified with the Grassmannian manifold $SO(4,1)/SO(1,1)\times SO(3)$ of oriented time-like $2$-dimensional vector subspaces in the $5$-dimensional Minkowski space $\mathbb{R}^5_1$. By taking a pseudo-orthogonal complement of an oriented time-like $2$-dimensional vector subspace, we can identify this space with the Grassmannian manifold of oriented space-like $3$-dimensional vector subspaces in $\mathbb{R}^5_1$. It has a natural pseudo-Riemannian structure which is compatible with the action of the Lorentz group, which induces the diagonal action of the M\"obius group on $\vect{S}^3\times\vect{S}^3\setminus\Delta$. Then $A(C_1,C_2)$ is equal to the measure (area) of the torus $C_1\times C_2$ with respect to the pseudo-Riemannian metric. 
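As a quick sanity check on this identification (using only the dimensions of the groups involved), the homogeneous space above satisfies
\begin{equation*}
\dim\bigl(SO(4,1)/SO(1,1)\times SO(3)\bigr)=10-(1+3)=6=\dim\bigl(\vect{S}^3\times\vect{S}^3\setminus\Delta\bigr),
\end{equation*}
so the Grassmannian of oriented time-like $2$-planes indeed has the same dimension as the two-point configuration space it is identified with.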
The key of the proof is that both the pull-back $\omega$ of the canonical symplectic form and the ``{\sl imaginary signed area element}'' with respect to the pseudo-Riemannian structure coincide with the real part of the {\em infinitesimal cross ratio}, which \j{is a ``complex valued $2$-form'' on $C_1\times C_2$} used in the joint paper with Langevin \cite{La-OH1}. \j{Geometrically, it can be considered as} the cross ratio of $x, x+dx, y$ and $y+dy$, where these four points are considered as complex numbers by identifying a sphere through them with the Riemann sphere $\mathbb{C}\cup\{\infty\}$. \j{Some remarks on the result on the energy of links \cite{AMN}, which is another characterization of the ``best'' Hopf link, will be given in Subsection \ref{AMN}.} Throughout the paper, a link means a smooth (or at least of class $C^1$) $2$-component link. {\bf Acknowledgment}. The author thanks deeply R\'emi Langevin, Masahiko Kanai and Luisa Paoluzzi for helpful suggestions. He also thanks the referee for a helpful suggestion concerning the symplectic form. \section{Two structures on {\boldmath $\vect{S}^3\times \vect{S}^3\setminus\Delta$}}\label{sec_S03} We introduce two structures on $\vect{S}^3\times \vect{S}^3\setminus\Delta$, the symplectic structure and the pseudo-Riemannian structure, both compatible with M\"obius transformations. It is easy to see that both can be naturally generalized to $\vect{S}^n\times \vect{S}^n\setminus\Delta$ for any $n$. \subsection{Symplectic structure of $\vect{S}^3\times \vect{S}^3\setminus\Delta$}\label{subsect_sympl_str} \subsubsection{Via hyperbolic space} As $\vect{S}^3$ can be considered as the boundary of $4$-dimensional hyperbolic space $H^4$, $\vect{S}^3\times \vect{S}^3\setminus\Delta$ can be considered as the space of oriented geodesics in $H^4$, which is denoted by ${\mathcal{G}}$. The tangent space $T_\gamma {\mathcal{G}}$ along a geodesic $\gamma$ is the space of Jacobi fields along $\gamma$. Let $\nabla$ denote the Levi-Civita connection. Then, if we put $$\omega_g(\xi,\eta)=(\xi(t),\nabla_{\!\primet{\gamma}}\,\eta(t))-(\eta(t),\nabla_{\!\primet{\gamma}}\,\xi(t)) \hspace{0.5cm}(t\in\Re\mathfrak{e} \,R)$$ for $\xi,\eta\in T_\gamma {\mathcal{G}}$, where $(\>,\>)$ denotes the standard inner product on $T_{\gamma(t)} H^4$, then $\omega_g$ is an isometry-invariant symplectic form on ${\mathcal{G}}$ (see \cite[2C]{B}, \cite[3.1]{Kl}). Since an isometry of $H^4$ induces a M\"obius transformation of the boundary sphere $\vect{S}^3$, $\omega_g$ defines a symplectic form on $\vect{S}^3\times \vect{S}^3\setminus\Delta$ which is invariant under the diagonal action of the M\"obius group. \subsubsection{Via cotangent bundle} It is known that the space $\mathcal{G}$ of geodesics in $H^4$ is symplectomorphic to the cotangent bundle $T^\alphast \vect{S}^3$ \cite{F}. Let us give an identification between $\vect{S}^3\times \vect{S}^3\setminus\Delta$ and $T^\alphast \vect{S}^3$ explicitly. Assume $\vect{S}^3$ is the unit sphere in $\mathbb{R}^{4}$. Let $\vect x$ be a point in $\vect{S}^3$ and $p_{\mbox{\scriptsize \boldmath$x$}}\colon\vect{S}^3\setminus\{\vect x\}\to(\textrm{Span}\langle\vect x\rangle)^\primeerp$ be a stereographic projection. 
By identifying $(\textrm{Span}\langle\vect x\rangle)^\primeerp$ with $T_{\mbox{\scriptsize \boldmath$x$}}\vect{S}^3\cong T_{\mbox{\scriptsize \boldmath$x$}}^{\alphast}\vect{S}^3$, we obtain a bijection $$ \varphi_{\mbox{\scriptsize \boldmath$x$}}\colon\vect{S}^3\setminus\{\vect x\}\ni\vect y\mapsto \left(T_{\mbox{\scriptsize \boldmath$x$}}\vect{S}^3\ni\vect v\mapsto p_{\mbox{\scriptsize \boldmath$x$}}(\vect y)\cdot \vect v \in\mathbb{R}\right)\in T_{\mbox{\scriptsize \boldmath$x$}}^{\alphast}\vect{S}^3, $$ where $\cdot$ denotes the standard inner product in $\mathbb R^{4}$. It induces a bijection \begin{equation}\label{phi_conf_sp_cot_bdl} \varphi\colon \vect{S}^3\times \vect{S}^3\setminus\Delta \ni (\vect x,\vect y) \,\mapsto\, \left(\vect x, \varphi_{\mbox{\scriptsize \boldmath$x$}}(\vect y)\right) \in T^{\alphast}\vect{S}^3. \end{equation} Let $\omegasub{S^3}$ be the canonical symplectic form of the cotangent bundle $T^{\alphast}\vect{S}^3$. Put $\omega=\varphi^\alphast\omegasub{S^3}$. In \cite{La-OH1}, we showed that $\omega$ is invariant under the diagonal action of the M\"obius group. The converse is also true. Namely, if a $2$-form $\rho$ is invariant under the diagonal action of the M\"obius group, then $\rho=c\,\omega$ for some $c\in\Re\mathfrak{e} \,R$ (Proposition \ref{prop_kanai} in Appendix). Therefore, we can see that $\omega$ coincides with $\omega_g$ mentioned above up to a constant factor. \subsection{Pseudo-Riemannian structure of $\vect{S}^3\times \vect{S}^3\setminus\Delta$} The {\em Minkowski space} $\mathbb{R}^5_1$ is $\mathbb{R}^5$ with the indefinite inner product $$\langle \vect x, \vect y \rangle\!=\!-x_0y_0+x_1y_1+\cdots+x_4y_4.$$ The set of light-like vectors and the origin $\vect L=\left\{\vect v\in\mathbb{R}^{5}_1 \, ;\, \langle\vect v,\vect v\rangle=0\right\}$ is called the {\em light cone}. The $3$-sphere can be considered as the projectivization $\mathbb{P}\vect L$ of the light cone. It can also be identified isometrically with the intersection of the light cone and a hyperplane given by $\{\vect x\,;\,\langle\vect x, \vect n\rangle=-1\}$, where $\vect n$ is a unit time-like vector. A $2$-dimensional vector subspace $\varPi$ of $\mathbb{R}^{5}_{1}$ is said to be time-like if $\langle\,,\,\rangle|_{\varPi}$ is non-degenerate and indefinite, namely, if $\varPi$ intersects the light cone transversely. A pair of points in $\vect{S}^3$ can be considered as the intersection of $\vect{S}^3$ and a $2$-dimensional time-like subspace of $\Re\mathfrak{e} \,R^5_1$. Therefore, if we also take the order of the points into account, $\vect{S}^3\times \vect{S}^3\setminus\Delta$ can be identified with the Grassmannian manifold $\wedgeidetilde{\textrm{\rm Gr}}_-(2;\mathbb{R}^{5}_1)$ of oriented $2$-dimensional time-like subspaces of $\mathbb{R}^{5}_1$, i.e., a homogeneous space $SO(4,1)/SO(3)\times SO(1,1).$ Let $\varPi$ be an oriented time-like $2$-dimensional plane spanned by an ordered basis $\{\vect u, \vect v\}$. Then $\varPi$ corresponds to a pure $2$-vector $\vect u\wedgeedge\vect v\in\stackrel{2}{\mbox{$\bigwedge$}}\,\mathbb{R}^{5}_1$, which is determined by $\varPi$ up to a positive factor. As is stated on page 280 of \cite{hertrich}, $\vect u\wedge\vect v$ is time-like, i.e., $\langle \vect u\wedge\vect v, \vect u\wedge\vect v\rangle<0$, where the indefinite inner product on $\stackrel{2}{\mbox{$\bigwedge$}}\,\mathbb{R}^{5}_1$ is given by \[ \langle \vect u^1\wedgeedge \vect u^2, \vect v^1\wedgeedge \vect v^2\rangle=\det\big(\langle \vect u^i, \vect v^j\rangle\big). 
\] On the other hand, it is known that a pure $2$-vector determines a $2$-plane. Thus the Grassmannian manifold $\wedgeidetilde{\textrm{\rm Gr}}_-(2;\mathbb{R}^{5}_1)$ of oriented $2$-dimensional time-like subspaces of $\mathbb{R}^{5}_1$ can be identified with the set of unit time-like pure $2$-vectors in $\stackrel{2}{\mbox{$\bigwedge$}}\,\mathbb{R}^{5}_1$, where the norm of $\stackrel{2}{\mbox{$\bigwedge$}}\,\mathbb{R}^{5}_1$ is given by $\|\vect v\|=\sqrt{|\langle\vect v, \vect v\rangle|}$. It is a $6$-dimensional pseudo-Riemannian manifold with index $3$. By taking a pseudo-orthogonal complement of an oriented time-like $2$-dimensional vector subspace, we can identify $\wedgeidetilde{\textrm{\rm Gr}}_-(2;\mathbb{R}^{5}_1)$ with the Grassmannian manifold $\wedgeidetilde{\textrm{\rm Gr}}_+(3;\mathbb{R}^{5}_1)$ of oriented space-like $3$-dimensional vector subspaces in $\Re\mathfrak{e} \,R^5_1$, which, in turn, can be identified with the set $\Theta(0,3)$ of unit space-like pure $3$-vectors in $\stackrel{3}{\mbox{$\bigwedge$}}\,\mathbb{R}^{5}_1$. Through the identifications mentioned above, the bijection from $\wedgeidetilde{\textrm{\rm Gr}}_-(2;\mathbb{R}^{5}_1)$ to $\wedgeidetilde{\textrm{\rm Gr}}_+(3;\mathbb{R}^{5}_1)$ is equal to the minus of the restriction of the Hodge $\star$ which is an isomorphism from $\stackrel{2}{\mbox{$\bigwedge$}}\,\mathbb{R}^{5}_1$ to $\stackrel{3}{\mbox{$\bigwedge$}}\,\mathbb{R}^{5}_1$ given by \[ \vect a\wedgeedge \star\vect b=\langle\vect a, \vect b\rangle\, e_0\wedgeedge e_1 \wedgeedge \cdots \wedgeedge e_4 \hspace{0.6cm}\big(\vect a,\vect b\in \stackrel{2}{\mbox{$\bigwedge$}}\,\mathbb{R}^{5}_1\,\big) \] (see \cite[p.288]{hertrich}). Let $\vect u$ and $\vect v$ be light-like vectors in $\mathbb R^5_1$. Put $\vect u\times \vect v=-\star(\vect u\wedgeedge\vect v)\in \stackrel{3}{\mbox{$\bigwedge$}}\,\mathbb{R}^{5}_1$. Since the Hodge $\star$ satisfies $\langle \star \vect a, \star \vect b\rangle =-\langle \vect a, \vect b\rangle$, where $\vect a,\vect b\in \stackrel{2}{\mbox{$\bigwedge$}}\,\mathbb{R}^{5}_1$, we have \begin{equation}\label{f_<>_cross_prod} \langle \vect u^1\times \vect u^2, \vect v^1\times \vect v^2\rangle=-\det\big(\langle \vect u^i, \vect v^j\rangle\big). \end{equation} Thus we have a bijection \begin{equation}\label{def_psi} \primesi\colon\vect{S}^3\times \vect{S}^3\setminus\Delta\ni(\vect x, \vect y)\mapsto \frac{\vect x\times\vect y}{\,\|\vect x\times\vect y\|\,}\in\Theta(0,3). \end{equation} Since the indefinite inner product in \eqref{f_<>_cross_prod} is invariant under the action of the Lorentz group $O(4,1)$, the pseudo-Riemannian structure on $\vect{S}^3\times \vect{S}^3\setminus\Delta$ induced by $\primesi$ is invariant under the diagonal action of the M\"obius group. \section{Measure of a 2-component link}\label{sec_main_thm} All the pairs of points $\{(x,y)\,;\,x\in C_1, y\in C_2\}$ form a torus in $\vect{S}^3\times \vect{S}^3\setminus\Delta$. Let us call it the {\em product torus} of a $2$-component link $L=C_1\cup C_2$. \subsection{Area of the product torus of a link} Let $\sigma$ be the composite of maps: \[\sigma\colon C_1\times C_2\stackrel{\iota}{\hookrightarrow} \vect{S}^3\times \vect{S}^3\setminus\Delta\,\spbmapright{\cong}{\primesi}\,\Theta(0,3).\] We identify $\sigma(C_1\times C_2)$ with $C_1\times C_2$ in what follows. 
The {\em area element} $dv$ of $C_1\times C_2$ associated with the pseudo-Riemannian structure of $\Theta(0,3)$ is given by $$ dv=\sqrt{\,\left|\det\left(\!\!\begin{array}{cc} \langle \sigma_x, \sigma_x \rangle & \langle \sigma_x, \sigma_y \rangle \\ \langle \sigma_y, \sigma_x \rangle & \langle \sigma_y, \sigma_y \rangle \end{array}\!\!\right)\right|\,}\,dx\wedge dy\,, $$ where $\sigma_x$ and $\sigma_y$ denote ${\partial \sigma}/{\partial x}(x,y)$ and ${\partial \sigma}/{\partial y}(x,y)$ in $T_{\sigma(x,y)}\Theta(0,3)$, respectively. \begin{definition} \rm Define the {\em measure} of a $2$-component link $L=C_1\cup C_2$ by the area of the product torus $$ A(C_1,C_2)=\int_{C_1\times C_2}dv =\int_{C_1\times C_2}\sqrt{\,\left|\det\left(\!\!\begin{array}{cc} \langle \sigma_x, \sigma_x \rangle & \langle \sigma_x, \sigma_y \rangle \\ \langle \sigma_y, \sigma_x \rangle & \langle \sigma_y, \sigma_y \rangle \end{array}\!\!\right)\right|\,}\,dx\wedge dy. $$ \end{definition} \subsection{Main Theorem} \begin{theorem}\label{main_theorem} \begin{enumerate} \item The measure of a $2$-component link satisfies \begin{equation}\label{measure=symplectic} A(C_1,C_2)=\int_{C_1\times C_2}|\iota^\ast\omega|, \end{equation} where $\iota$ is the inclusion from $C_1\times C_2$ into $\vect{S}^3\times\vect{S}^3\setminus\Delta$ and $\omega$ is the pull-back of the canonical symplectic form of $T^\ast\vect{S}^3$ to $\vect{S}^3\times\vect{S}^3\setminus\Delta$. \item The measure of a $2$-component link takes its minimum value $0$ if and only if $L$ is the image of the ``best'' Hopf link \begin{equation}\label{f_standard_Hopf_link} \{(z,w)\in\mathbb{C}^2;|z|=1, w=0\}\cup \{(z,w)\in\mathbb{C}^2;z=0, |w|=1\}\subset\vect{S}^3 \end{equation} by a M\"obius transformation. \end{enumerate} \end{theorem} The equation \eqref{measure=symplectic} implies that the area of the product torus can also be called the ``{\sl absolute symplectic measure}'' of it. We prove the theorem in the next section. \subsection{Area element of a product torus in $\vect{S}^3\times\vect{S}^3\setminus\Delta$}\label{subsec_area_element} \begin{lemma}\label{lem_null-vector_0-sphere} Both $\sigma_x$ and $\sigma_y$ are null vectors, i.e., $\langle \sigma_x, \sigma_x \rangle=\langle \sigma_y, \sigma_y \rangle=0$. Therefore the area element $dv$ is given by $\sigma^{\ast}dv=|\langle \sigma_x, \sigma_y \rangle|\,dx\wedge dy.$ \end{lemma} \begin{proof} Suppose $\vect{S}^3$ is embedded in $\mathbb{R}^5_1$, and points in $C_1$ and $C_2$ are expressed by $\bar{x}(s)$ and $\bar{y}(t)$, respectively. Put $p(s,t)=\bar{x}(s)\times\bar{y}(t)$ and $\tilde\sigma(s,t)=\sigma(\bar x(s), \bar y(t))$. Then $\tilde\sigma$ is given by \[\tilde\sigma(s,t)=\displaystyle \frac{p(s,t)}{{\langle p(s,t), p(s,t)\rangle}^{1/2}}\,.\] Since $\bar x$ and $\bar y$ are light-like vectors, the formula (\ref{f_<>_cross_prod}) implies $$ \langle {p}, {p}\rangle =\langle \bar{x}, \bar{y}\rangle^2, \>\> \langle {p}, {p}_s\rangle = \langle \bar{x}, \bar{y}\rangle \langle \bar{x}_s, \bar{y}\rangle, \>\> \langle {p}_s, {p}_s\rangle = \langle \bar{x}_s, \bar{y}\rangle^2. $$ Therefore \[\langle \tilde\sigma_s, \tilde\sigma_s\rangle = \displaystyle \frac{\langle {p}, {p}\rangle \langle {p}_s, {p}_s\rangle-\langle {p}, {p}_s\rangle^2}{\langle {p}, {p}\rangle^2}=0. \] The same computation with $t$ in place of $s$ gives $\langle \tilde\sigma_t, \tilde\sigma_t\rangle=0$. \end{proof} We also give a geometric explanation in Subsection \ref{subs_appendix_geom} in the Appendix. 
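Lemma \ref{lem_null-vector_0-sphere} and the equality case of Theorem \ref{main_theorem} (2) can also be checked numerically from the inner product formula (\ref{f_<>_cross_prod}) alone. The following minimal sketch (in Python with NumPy; the helper names are ad hoc and nothing here is part of the argument) samples the ``best'' Hopf link \eqref{f_standard_Hopf_link}, using the light-cone lift $x\mapsto(1,x)$, and confirms that $\langle\sigma_x,\sigma_x\rangle$ and the area integrand $|\langle\sigma_x,\sigma_y\rangle|$ vanish at every sample point.
\begin{verbatim}
import numpy as np

def mink(a, b):
    # Minkowski inner product on R^5_1, signature (-,+,+,+,+)
    return -a[0] * b[0] + np.dot(a[1:], b[1:])

def cross_ip(u1, u2, v1, v2):
    # <u1 x u2, v1 x v2> = -det(<u_i, v_j>), cf. (f_<>_cross_prod)
    return -(mink(u1, v1) * mink(u2, v2) - mink(u1, v2) * mink(u2, v1))

def lift(pt):
    # light-cone lift (1, pt) of a point pt on S^3 in R^4
    return np.concatenate(([1.0], pt))

def tangent(v):
    # lift of a tangent vector of S^3 (first coordinate 0)
    return np.concatenate(([0.0], v))

# the "best" Hopf link of (f_standard_Hopf_link)
x  = lambda s: np.array([np.cos(s), np.sin(s), 0.0, 0.0])
dx = lambda s: np.array([-np.sin(s), np.cos(s), 0.0, 0.0])
y  = lambda t: np.array([0.0, 0.0, np.cos(t), np.sin(t)])
dy = lambda t: np.array([0.0, 0.0, -np.sin(t), np.cos(t)])

def sigma_ip(s, t):
    # returns (<sigma_s,sigma_s>, <sigma_s,sigma_t>) for sigma = p/<p,p>^{1/2}
    xb, yb = lift(x(s)), lift(y(t))
    xs, yt = tangent(dx(s)), tangent(dy(t))
    pp  = cross_ip(xb, yb, xb, yb)   # <p,p> = <x,y>^2 > 0
    pps = cross_ip(xb, yb, xs, yb)   # <p, p_s>,  p_s = x_s x y
    ppt = cross_ip(xb, yb, xb, yt)   # <p, p_t>,  p_t = x x y_t
    pss = cross_ip(xs, yb, xs, yb)   # <p_s, p_s>
    pst = cross_ip(xs, yb, xb, yt)   # <p_s, p_t>
    ss = (pp * pss - pps ** 2) / pp ** 2    # <sigma_s, sigma_s>
    st = (pp * pst - pps * ppt) / pp ** 2   # <sigma_s, sigma_t>
    return ss, st

grid = np.linspace(0.0, 2 * np.pi, 40)
vals = np.array([sigma_ip(s, t) for s in grid for t in grid])
print(np.max(np.abs(vals)))  # 0.0 up to rounding: null tangent, vanishing integrand
\end{verbatim}
For other parametrized configurations the same routine gives $\langle\tilde\sigma_s,\tilde\sigma_t\rangle$, and integrating its absolute value over $[0,2\pi]^2$ with any quadrature rule approximates $A(C_1,C_2)$.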
Let us call $\langle \sigma_x, \sigma_y \rangle\,dx\wedge dy$ the {\em imaginary signed area element} of a product torus $C_1\times C_2$. \section{Proof of the Main Theorem} \subsection{The infinitesimal cross ratio} We assume that both components $C_1$ and $C_2$ are oriented. Suppose $x\in C_1$ and $y\in C_2$. Let $\varGamma(x,x,y)$ be the circle which is tangent to $C_1$ at $x$ and passes through $y$, oriented by the tangent vector to $C_1$ at $x$. Let $\theta$ $(0\le\theta\le\pi)$ be the angle between $\varGamma(x,x,y)$ and the tangent vector to $C_2$ at $y$. We call it the {\em conformal angle} between $x$ and $y$ and denote it by $\theta_L(x,y)$. It was introduced by Doyle and Schramm. Let $\Omega_L$ be a complex valued $2$-form on $C_1\times C_2$ given by \begin{equation}\label{f_inf_cr} \displaystyle \Omega_L(x,y)=e^{i\theta_L(x,y)}\frac{dx\wedge dy}{|x-y|^2} \end{equation} (see \cite{La-OH1}). As both the conformal angle $\theta_L$ and the $2$-form ${dxdy}/{|x-y|^2}$ are equivariant under the diagonal action of a M\"obius transformation $T$, so is $\Omega_{L}$, namely, $(T\times T)^{\ast}\Omega_{T(L)}=\Omega_{L}$ \cite{La-OH1}. Let us give a geometric interpretation of $\Omega_L$. Let $\varSigma_L(x,y)$ be a sphere that passes through four points $x, x+dx, y$ and $y+dy$, i.e., a sphere which is tangent to $C_1$ at $x$ and to $C_2$ at $y$. Let $p$ be a stereographic projection from $\varSigma_L(x,y)$ to $\mathbb{C}\cup\{\infty\}$ and $\tilde x$, $\tilde x+\widetilde{dx}$, $\tilde y$ and $\tilde y+\widetilde{dy}$ the images by $p$ of the four points $x, x+dx, y$ and $y+dy$, respectively. Then $\Omega_L(x,y)$ is equal to the cross ratio $(\tilde x+\widetilde{dx}, \tilde y ; \tilde x, \tilde y+\widetilde{dy})$: \begin{equation}\label{f_def_inf_cr} \Omega_L(x,y)=\frac{\widetilde{dx}\widetilde{dy}}{(\tilde x-\tilde y)^2} \sim\frac{(\tilde x+\widetilde{dx})-\tilde x}{(\tilde x+\widetilde{dx}) -(\tilde y+\widetilde{dy})} :\frac{\tilde y-\tilde x}{\tilde y-(\tilde y+\widetilde{dy})}. \end{equation} This is why we call $\Omega_L$ the {\em infinitesimal cross ratio}. We remark that the cross ratio does not depend on the stereographic projection $p$. \begin{remark}\label{remark31}\rm \j{The form $dzdw/(z-w)^2$ on $\mathbb C\times \mathbb C\setminus\Delta$, which has been used in complex analysis, can also be obtained as the cross ratio of $w, w+dw, z$ and $z+dz$, as was mentioned by Rob Kusner, for example. In this sense, the infinitesimal cross ratio can be considered as a generalization of $dzdw/(z-w)^2$ to a complex valued $2$-form on $C_1\times C_2$, or in general, $C\times C\setminus\Delta$, where $C$ is a union of space curves. In fact, when $C$ is a {\em plane} curve, the infinitesimal cross ratio can be obtained by restricting $dzdw/(z-w)^2$ to $C\times C\setminus\Delta$. In this case, it was used by H\'elein \cite{H} to show the isoperimetric inequality. } \j{However, there are difficulties for {\em space} curves. First, $dzdw/(z-w)^2$ cannot be generalized to a $2$-form on the ambient space $\vect{S}^3\times\vect{S}^3\setminus\Delta$, so the restriction procedure that works in the planar case does not apply. To be precise, while the real part of $dzdw/(z-w)^2$ can be generalized to a $2$-form on $\vect{S}^n\times\vect{S}^n\setminus\Delta$ as we will see in the next subsection, the imaginary part cannot when $n\ge3$ as we will see in Proposition \ref{prop_kanai2}. 
} \j{Secondly, even if we try to use the cross ratio to define the $2$-form, the cross ratio of four points in $\mathbb{R}^n$ $(n\ge3)$ is not as well-behaved as in the planar case. This might be a reason why Ahlfors studied only the {\em absolute} cross ratio for the points in $\mathbb{R}^n$ $(n\ge3)$ \cite{A}. When we want to define the cross ratio of (ordered) four points in $\mathbb{R}^3$, we need the orientation of the sphere through the four points to avoid the ambiguity of complex conjugacy. There is a way to assign the orientations continuously to {\em all} the spheres given by the sets of ordered four points in $\mathbb{R}^3$, i.e., there is a continuous map from ${\left(\mathbb{R}^3\right)}^4\setminus\Delta$, where $\Delta$ is the big diagonal set, to the set of oriented $2$-spheres in $\mathbb{R}^3$, which can be identified with the de Sitter space in $5$-dimensional Minkowski space $\mathbb{R}^5_1$. However, according to this method, the imaginary part of the cross ratio of any four points in $\mathbb{R}^3$ is always non-negative (or, always non-positive according to the choice of a continuous map from ${\left(\mathbb{R}^3\right)}^4\setminus\Delta$ to the de Sitter space). } The reader is referred to \cite{OH2} for the details. \j{As a result, the imaginary part of the infinitesimal cross ratio may have singularities where it vanishes, just as the absolute value of a smooth function does. Anyway, we do not use the imaginary part in this paper. } \end{remark} \subsection{The real part of the infinitesimal cross ratio} In \cite{La-OH1} we showed that the pull-back of the canonical symplectic form of $T^\ast\vect{S}^3$ to $C_1\times C_2$ coincides with the real part of the infinitesimal cross ratio up to a constant; \begin{equation}\label{real_part_of_inf_cr} \iota^\ast\omega=\iota^{\ast}\varphi^{\ast}\omegasub{S^3}=-2\,\mathrm{Re}\,\Omega_L =-2\,\frac{\cos\theta_L(x,y)\,dx\wedge dy}{|x-y|^2}. \end{equation} \j{It seems that this fact in the case of $\vect{S}^2$ is well known in symplectic geometry.} \begin{lemma}\label{thm_Re_inf_cr=signed_area_element} The imaginary signed area element of $C_1\times C_2$ with respect to the pseudo-Riemannian structure coincides with the real part of the infinitesimal cross ratio up to a constant; $$\langle \sigma_x, \sigma_y \rangle \, dx\wedge dy=2\,\mathrm{Re}\,\Omega_{L}.$$ \end{lemma} \begin{proof} Suppose points in $C_1$ and $C_2$ are expressed as $x(s)$ and $y(t)$. Suppose $\vect{S}^3$ is embedded in $\mathbb{R}^5_1$ as the intersection of the light cone and a level hyperplane $\{x_0=1\}$. Let $\bar{x}$ and $\bar{y}$ be points in $\mathbb R^5_1$ corresponding to $x(s)$ and $y(t)$, i.e., $\bar{x}(s)=(1, x(s))$ and $\bar{y}(t)=(1, y(t))$. Put $\tilde\sigma(s,t)=\sigma(\bar x(s), \bar y(t))$ as before. The pull-back of the real part of the infinitesimal cross ratio is given by \begin{equation}\label{f1_re_infcr} \left((x\times y)^{\ast}\mathrm{Re}\,\Omega_{L}\right)(s,t)=\frac{\cos\theta_{L}(x(s),y(t))}{|x(s)-y(t)|^2}\,|x^{\prime}(s)||y^{\prime}(t)|\,ds\wedge dt. \end{equation} On the other hand, the pull-back of the imaginary signed area element is given by \begin{equation}\label{f2_signed_area_form} \left((x\times y)^{\ast}\left(\langle \sigma_x, \sigma_y \rangle \, dx\wedge dy\right)\right)(s,t)=\langle\tilde\sigma_s, \tilde\sigma_t\rangle(s, t)\,ds\wedge dt. \end{equation} Fix any $(s_0, t_0)$. 
The M\"obius invariance of the both sides allows us to assume that $x(s_0)$ and $y(t_0)$ are antipodal. Then, at $(s_0, t_0)$, $$ \langle \bar x, \bar x\rangle=\langle \bar y, \bar y\rangle=0, \>\> \langle \bar x, \bar y\rangle=-2. $$ Therefore, by the formula (\ref{f_<>_cross_prod}), at $(s_0, t_0)$ there holds $$ \langle p, p\rangle=4, \>\> \langle p, p_s\rangle=0, \>\> \langle p_s, p_t\rangle=-2x^{\prime}(s_0)\cdot y^{\prime}(t_0), $$ which implies $$ \langle\tilde\sigma_s, \tilde\sigma_t\rangle=\frac{\langle p, p\rangle \langle p_s, p_t\rangle-\langle p, p_s\rangle \langle p, p_t\rangle}{\langle p, p\rangle^2} =-\frac12\,x^{\prime}(s_0)\cdot y^{\prime}(t_0). $$ Since $x_0=x(s_0)$ and $y_0=y(t_0)$ are antipodal, we have (Figure \ref{pi-conf_angle}) \[\theta_L(x(s_0), y(t_0))=\primei-\alphangle x^{\prime}(s_0)\cdot y^{\prime}(t_0). \] \begin{figure}\label{pi-conf_angle} \end{figure} \noindent It follows that \[\langle\tilde\sigma_s, \tilde\sigma_t\rangle(s_0, t_0)=-\frac12\,x^{\prime}(s_0)\cdot y^{\prime}(t_0) =2\frac{|x^{\prime}(s_0)||y^{\prime}(t_0)|}{|x(s_0)-y(t_0)|^2}\cos\theta_L(x(s_0), y(t_0)),\] which implies that the right-hand sides of \eqref{f1_re_infcr} and \eqref{f2_signed_area_form} coincide. \end{proof} We remark that an alternative geometric proof can be obtained if we use pseudo-orthonormal basis of $\vect{S}^3\times \vect{S}^3\setminus\Delta$ illustrated in Figure \ref{orthonormal_basis_S^3-2}. This is because $$ \langle \tilde\sigma_s+\tilde\sigma_t, \tilde\sigma_s+\tilde\sigma_t\rangle(s_0, t_0) =-x^{\prime}(s_0)\cdot y^{\prime}(t_0) $$ implies $\langle\tilde\sigma_s, \tilde\sigma_t\rangle(s_0, t_0)=-(1/2)\,x^{\prime}(s_0)\cdot y^{\prime}(t_0).$ \begin{corollary}\label{cor_signed_area_form=sympl_form} The imaginary signed area element of a product torus $C_1\times C_2$ with respect to the pseudo-Riemannian structure is equal to minus the pull-back of the canonical symplectic form: $$ \langle \sigma_x, \sigma_y \rangle \, dx\wedgeedge dy=-\iota^{\alphast}\varphi^{\alphast}\omegasub{S^3}. $$ \end{corollary} This completes the proof of Theorem \ref{main_theorem} (1). We remark that a statement similar to that of the above corollary does not hold for a general surface in $\vect{S}^3\times\vect{S}^3\setminus\Delta$ as we will see in Subsection \ref{signed_area_form_not=sympl_form} in Appendix. \subsection{Proof of Theorem \ref{main_theorem} (2)} As \setlength\alpharraycolsep{1pt} \begin{equation}\label{f_area} A(C_1,C_2)=2\int_{C_1\times C_2}\frac{|\cos\theta_L(x,y)|}{|x-y|^2}\,dx\,dy\,, \end{equation} it is equal to $0$ if and only if the conformal angle $\theta_L(x,y)$ is equal to ${\primei}/2$ for any $x\in C_1$ and $y\in C_2$. Suppose $A(C_1,C_2)=0$. Let $x$ be a point in $C_1$. Let $\mathcal{C}_{x}$ be the set of the circles which are tangent to $C_1$ at $x$. Then $C_2$ can intersect circles in $\mathcal{C}_{x}$ only at a right angle (Figure \ref{circles}). \begin{figure} \caption{The circles of $\mathcal{C} \label{circles} \caption{The image by $\primei$} \label{c1c2-2} \end{figure} Consider a stereographic projection $\primei$ from $\vect{S}^3\setminus\{x\}$ to $\Re\mathfrak{e} \,R^3$. It maps $\mathcal{C}_{x}$ to the set of parallel lines. Since $\primei(C_2)$ can intersect lines of $\primei(\mathcal{C}_{x})$ only at a right angle, $\primei(C_2)$ is contained in a $2$-plane which is orthogonal to the lines in $\primei(C_x)$ (Figure \ref{c1c2-2}). 
Therefore, $C_2$ is contained in a sphere $\varSigma_x$ which intersects $C_1$ at a right angle at $x$ (Figure \ref{area_link_sphere}). \begin{figure}\end{figure} Let $x^{\prime}$ be a point of $C_1$ close to $x$. As $C_1$ intersects $\varSigma_x$ orthogonally at $x$, we can take $x^{\prime}$ outside $\varSigma_x$. Therefore, $\varSigma_x\ne\varSigma_{x^{\prime}}$. Since $C_2$ is contained in the intersection $\varSigma_x\cap\varSigma_{x^{\prime}}$, $C_2$ must be a circle (Figure \ref{area_link_sphere2}). The same argument shows that $C_1$ is also a circle. Consider the stereographic projection $\primei$ again. Since $C_1$ is a circle, $\primei(C_1)$ is a line. Then $\primei(C_2)$ is the intersection of two spheres which intersect the line $\primei(C_1)$ at a right angle (Figure \ref{standard_Hopf_link}). Therefore, $\primei(C_2)$ is symmetric in the line $\primei(C_1)$. It follows that $\primei(C_1)\cup \primei(C_2)$ is an image of the standard Hopf link (Figure \ref{standard_Hopf_link}). This completes the proof of Theorem \ref{main_theorem} (2). \subsection{Corollary and Conjecture} Let $[L]$ denote an isotopy class of a link $L$. Define \[A([L])=\inf_{C_1^{\prime}\cup C_2^{\prime}\in[L]}A(C_1^{\prime},C_2^{\prime}).\] \begin{corollary} If $L$ is a separable link or a satellite link of a Hopf link, then $\mbox{\rm Area}\,([L])=0$. \end{corollary} \begin{figure} \caption{A satellite link of a Hopf link} \label{satellite_Hopf_link} \end{figure} \begin{proof} Suppose $L=C_1\cup C_2$ is a separable link in $\Re\mathfrak{e} \,R^3$. We can make $|x-y|$ $(x\in C_1, y\in C_2)$ as big as we like. Now the conclusion follows from the formula (\ref{f_area}). Suppose $L=C_1\cup C_2$ is a satellite link of a Hopf link. Then, after an ambient isotopy, it can be contained in a very thin tubular neighbourhood of the standard Hopf link given by (\ref{f_standard_Hopf_link}). Furthermore, for any positive constants $\delta_1$ and $\delta_2$, the link can be placed so that, outside a small region of $C_1\times C_2$ whose measure is $\delta_1\textsl{Length}(C_1)\cdot\textsl{Length}(C_2)$, the conformal angle satisfies $|\theta_L-{\primei}/2|\le\delta_2$. Then the formula (\ref{f_area}) implies the assertion of the corollary since $|x-y|$ $(x\in C_1, y\in C_2)$ is bounded below. \end{proof} \begin{conjecture} \rm We conjecture that $A([L])$ does not always vanish. For example, if $L=C_1\cup C_2$ is a hyperbolic link each component of which is a non-trivial knot, then there is no solid torus $H_1$ so that $C_1$ is contained in $H_1$ and $C_2$ in $\Re\mathfrak{e} \,R^2\setminus H_1$. We conjecture that $A([L])$ is positive for such a link type. \end{conjecture} \section{Appendix}\label{appendix} \subsection{Diagonal M\"obius invariance characterizes {$\vect{\omega}$}}\label{appendix1} \begin{proposition}\label{prop_kanai} Suppose $\rho$ is a $2$-form on $\vect{S}^n\times \vect{S}^n\setminus\Delta$ which is invariant under the diagonal action of M\"obius transformations. Then $\rho=c\,\omega$ for some constant $c$, where $\omega$ is the pull-back of the canonical symplectic form of $T^\alphast \vect{S}^n$ by the bijection from $\vect{S}^n\times \vect{S}^n\setminus\Delta$ to $T^\alphast \vect{S}^n$ given by \eqref{phi_conf_sp_cot_bdl}. \end{proposition} This fact has been mentioned in \cite{Kanai} in a more general form (see $\S$ 3.2). We put the proof here since the author could not find it in the literature. \begin{proof} We will show the equivalent statement for $\Re\mathfrak{e} \,R^n$. 
First note that the pull-back $\omegasub{{\Re\mathfrak{e} \,R^n}}$ of $\omega$ by a map $p^{-1}\times p^{-1}$ from $\Re\mathfrak{e} \,R^n\times \Re\mathfrak{e} \,R^n\setminus\Delta$ to $\vect{S}^n\times \vect{S}^n\setminus\Delta$, where $p$ is a stereographic projection from $\vect{S}^n$ to $\Re\mathfrak{e} \,R^n$, is given by \begin{equation}\label{omegasub_R} \begin{array}{rcl} \omegasub{{\Re\mathfrak{e} \,R^n}} &=&\displaystyle{2\left(\frac{\sum_{i=1}^{n} dX_i\wedge dY_i}{|\vect X-\vect Y|^2} -2\frac{\big(\sum_{i=1}^{n} (X_i-Y_i)\,dX_i\big)\wedge \big(\sum_{j=1}^{n} (X_j-Y_j)\,dY_j\big)}{|\vect X-\vect Y|^4}\right)} \end{array} \end{equation} (see \cite{La-OH1}). Let $\rho$ be a $2$-form on $\Re\mathfrak{e} \,R^n\times\Re\mathfrak{e} \,R^n\setminus\Delta$. Assume that $\rho$ is invariant under the diagonal action of M\"obius transformations. The invariance under the diagonal action of parallel translations implies that $\rho$ can be expressed as \begin{equation}\label{f_rho} \rho=\sum_{i,j}f_{ij}(\vect x-\vect y)\,dx_i\wedge dy_j+\sum_{i<j}g_{ij}(\vect x-\vect y)\,dx_i\wedge dx_j+\sum_{i<j}h_{ij}(\vect x-\vect y)\,dy_i\wedge dy_j \end{equation} for some functions $f_{ij}, g_{ij}$ and $h_{ij}$. (i) Let us show $g_{ij}=h_{ij}=0$ $(i<j)$. Suppose $(i,j)=(1,2)$. The invariance under the diagonal action of rotation in $\mbox{\rm Span}\langle \vect e_1, \vect e_2\rangle$ shows \begin{equation}\label{219-g12-1} \begin{array}{l} g_{12}(v_1,\dots,v_n) =\displaystyle g_{12}\Big(\sqrt{v_1^2+v_2^2}, \, 0, \,v_3,\dots,v_n\Big). \nonumber \end{array} \end{equation} On the other hand, the invariance under the diagonal action of reflection in the hyperplane $\{v_2=0\}$ shows \begin{equation}\label{219-g12-2} g_{12}(v_1,\dots,v_n) =\displaystyle -g_{12}(v_1,-v_2,v_3\dots,v_n). \nonumber \end{equation} The above two equations imply $g_{12}=0$. (ii) Assume $n=2$. We show the statement using complex coordinates $w=x_1+ix_2$ and $z=y_1+iy_2$. The above argument shows that $\rho$ can be expressed as $$ \rho=F_1(w-z)\,dw\wedge dz+F_2(w-z)\,d\wedgeb w\wedge d\bb z+F_3(w-z)\,d w\wedge d\bb z+F_4(w-z)\,d\wedgeb w\wedge dz $$ for some functions $F_i$. \begin{itemize} \item[(a)] The invariance of $\rho$ under $(w,z)\mapsto(\beta w, \beta z)$ $(\beta\in\mathbb{C}^\times)$ shows that $$ \left(\begin{array}{c} F_1(\beta\zeta)\\ F_2(\beta\zeta)\\ F_3(\beta\zeta)\\ F_4(\beta\zeta) \end{array}\right) =\left( \begin{array}{cc} \begin{array}{cc} \beta^{-2}&\\ &{\mb{\beta}}^{\,-2} \end{array} & \mbox{\Lambdarge\bf$0$} \\ \mbox{\Lambdarge\bf$0$}& \begin{array}{cc} |\beta|^{-2}&\\ &|\beta|^{-2} \end{array} \end{array} \right) \left(\begin{array}{c} F_1(\zeta)\\ F_2(\zeta)\\ F_3(\zeta)\\ F_4(\zeta) \end{array}\right). 
$$ \item[(b)] The invariance of $\rho$ under $(w,z)\mapsto(\wedgeb{w},\bb{z})$ shows that \setlength\alpharraycolsep{3pt} \begin{equation}\label{F_bar} \left(\begin{array}{c} F_1\big(\mb{\zeta}\big)\\ F_2\big(\mb{\zeta}\big)\\ F_3\big(\mb{\zeta}\big)\\ F_4\big(\mb{\zeta}\big) \end{array}\right) =\left( \begin{array}{cc} \begin{array}{cc} 0&1\primehantom{\mb{\zeta}}\hspace{-0.15cm}\\ 1&0\primehantom{\mb{\zeta}}\hspace{-0.15cm} \end{array} & \mbox{\Lambdarge\bf$0$} \\ \mbox{\Lambdarge\bf$0$}& \begin{array}{cc} 0&1\primehantom{\mb{\zeta}}\hspace{-0.15cm}\\ 1&0\primehantom{\mb{\zeta}}\hspace{-0.15cm} \end{array} \end{array} \right) \left(\begin{array}{c} F_1(\zeta)\primehantom{\mb{\zeta}}\hspace{-0.15cm}\\ F_2(\zeta)\primehantom{\mb{\zeta}}\hspace{-0.15cm}\\ F_3(\zeta)\primehantom{\mb{\zeta}}\hspace{-0.15cm}\\ F_4(\zeta)\primehantom{\mb{\zeta}}\hspace{-0.15cm} \end{array}\right). \end{equation} \setlength\alpharraycolsep{1pt} \end{itemize} Note that $$ \begin{array}{c} \displaystyle |\zeta|^2F_3(\zeta)\stackrel{(\mbox{\scriptsize \j{a}})}=F_3(1)\stackrel{(\mbox{\scriptsize \j{b}})}=F_4(1)\stackrel{(\mbox{\scriptsize \j{a}})}=|\zeta|^2F_4(\zeta),\\[1mm] \zeta^2F_1(\zeta)\stackrel{(\mbox{\scriptsize \j{a}})}=F_1(1)\stackrel{(\mbox{\scriptsize \j{b}})}=F_2(1)\stackrel{(\mbox{\scriptsize \j{a}})}={\bb{\zeta}}^{\,2}F_2(\zeta). \end{array} $$ Therefore, putting $a=F_1(1)$ and $b=F_3(1)$ we have $$ \rho=a\left(\frac{dw\wedge dz}{(w-z)^2}+\frac{d\wedgeb w\wedge d\bb z}{(\wedgeb w-\bb z)^2}\right) +b\frac{d w\wedge d\bb z+d\wedgeb w\wedge dz}{|w-z|^2}. $$ \begin{itemize} \item[(c)] Finally, the invariance of $\rho$ under $(w,z)\mapsto(1/w,1/z)$ implies $b=0$. \end{itemize} Since $$ \frac{dw\wedge dz}{(w-z)^2}+\frac{d\wedgeb w\wedge d\bb z}{(\wedgeb w-\bb z)^2}=2\Re\mathfrak{e} \, \left(\frac{dw\wedge dz}{(w-z)^2}\right) =-\omegasub{{\Re\mathfrak{e} \,R^2}}, $$ it completes the proof when $n=2$. (iii) Assume $n\ge3$. Put $c=-(1/2)f_{11}(\vect e_1)$, where $f_{11}$ is the function used in \eqref{f_rho} and $e_1$ is the first unit vector of $\Re\mathfrak{e} \,R^n$. If we apply the previous argument in (ii) to the $2$-planes $\textrm{Span}\langle \vect e_1, \vect e_j\rangle$ $(j\ne1)$ first and then to $\textrm{Span}\langle \vect e_i, \vect e_j\rangle$, we see that $\rho$ and $c\,\omegasub{\Re\mathfrak{e} \,R^n}$ have the same coefficients of $dx_i\wedge dy_j$ for all $i,j$, which completes the proof. \end{proof} \j{We give another proposition which was announced in Remark \ref{remark31}. } \j{ \begin{proposition}\label{prop_kanai2} Suppose $\rho$ is a $2$-form on $\vect{S}^n\times \vect{S}^n\setminus\Delta$ that satisfies $(T\times T)^\alphast \rho=(\textrm{\rm sgn}\, T) \rho,$ where $\textrm{\rm sgn}\, T$ is the signature of the Jacobian of $T$. Then $\rho=0$ if $n\ge3$, and $\rho=c\, \Im\mathfrak{m} \left(dz\wedgeedge dw/(w-z)^2\right)$ for some constant $c$ if $n=2$ under the identification $\vect{S}^2\cong \mathbb C\cup\{\infty\}$. \end{proposition} } \begin{proof} \j{The proof goes somehow parallel to that of Proposition \ref{prop_kanai}. Suppose $\rho$ is expressed as \eqref{f_rho}. } \j{(i) We can show $g_{ij}\equiv0$ and $h_{ij}\equiv0$ using the invariance up to sign under rotations and inversions in spheres. } \j{(ii) Assume $n=2$. 
Then, the right-hand side of the formula \eqref{F_bar} is replaced by the minus of it, which implies that $\rho$ is a multiple of \[\frac{dw\wedge dz}{(w-z)^2}-\frac{d\wedgeb w\wedge d\bb z}{(\wedgeb w-\bb z)^2}=2\Im\frak{m} \left(\frac{dw\wedge dz}{(w-z)^2}\right).\]} \j{(iii) Assume $n\ge3$. Let $(\vect x,\vect y)\in\Re\mathfrak{e} \,R^n\times\Re\mathfrak{e} \,R^n\setminus\Delta$, $\vect u\in T_x\mathbb R^n$, and $\vect v\in T_y\Re\mathfrak{e} \,R^n$. There is an orientation preserving M\"obius transformation $T$ such that $T(\vect x)=\vect 0, T(\vect y)=(1,0,\dots,0)$, and $T_\alphast(\vect u)=\vect e_1$. Put $\wedgeidetilde{\vect v}=2\left(T_\alphast(\vect v),\vect e_1\right)\vect e_1-T_\alphast(\vect v)$. Then, as there is a rotation around $\vect e_1$-axis preserving $T(\vect x), T(\vect y)$ and $T_\alphast(\vect u)$ that sends $\wedgeidetilde{\vect v}$ to $T_\alphast(\vect v)$, we have $\rho(T_\alphast(\vect u), T_\alphast(\vect v))=\rho(T_\alphast(\vect u), \wedgeidetilde{\vect v})$. On the other hand, as there is a reflection preserving $T(\vect x), T(\vect y)$ and $T_\alphast(\vect u)$ that sends $\wedgeidetilde{\vect v}$ to $T_\alphast(\vect v)$, we have $\rho(T_\alphast(\vect u), T_\alphast(\vect v))=-\rho(T_\alphast(\vect u), \wedgeidetilde{\vect v})$. Hence $\rho\equiv0$. } \end{proof} \j{It follows that the imaginary part of $dz\wedgeedge dw/(w-z)^2$ cannot be generalized to $\vect{S}^n\times \vect{S}^n\setminus\Delta$ when $n\ge3$. In fact, it can naturally be generalized to a K\"ahler form on $SO(n+1,1)/SO(2)\times SO(n-1,1)$, which is the space of oriented codimension $2$ spheres in $\vect{S}^n$. } \subsection{Pseudo-orthogonal basis of $\vect{\vect{S}^3\times\vect{S}^3\setminus\Delta}$}\label{subs_appendix_geom} Let us start with a baby case $\vect{S}^1\times\vect{S}^1\setminus\Delta$. It can be identified with the set of oriented time-like planes in the $3$-dimensional Minkowski space $\Re\mathfrak{e} \,R^3_1$. By taking a positive unit normal vector to each of these planes, $\vect{S}^1\times\vect{S}^1\setminus\Delta$ can be identified with the $2$-dimensional de Sitter space $\Lambda=\{\vect{x}\in\Re\mathfrak{e} \,R^3_1\,;\,\langle\vect x, \vect x\rangle=1\}$. Let $\varSigma=\{x,y\}$ be a pair of points in $\vect{S}^1\cong\primeartial \mathbb{H}^2$. Let $l$ denote the geodesic in $\mathbb{H}^2$ which joins $x$ and $y$. Take a point $M$ on $l$ (Figure \ref{pencil_S^0_in_S^1}), then it determines two pencils as follows. Let $\vect a$ and $\vect b$ be the ``end points" of the geodesic in $\mathbb{H}^2$ which is orthogonal to $l$ at point $M$ (the third of Figure \ref{pencil_S^0_in_S^1}). Let $\mathcal{P}_+$ be a pencil obtained by rotating the geodesic $l$ around $M$ and $\mathcal{P}_-$ the Poncelet pencil with limit points $\vect a$ and $\vect b$. Then $\mathcal{P}_+$ and $\mathcal{P}_-$ can be considered as geodesics in $\Lambda$, namely, the intersections with $\Lambda$ and space-like and time-like $2$-planes $\Pi_\primem$. A pair of the unit tangent vectors to $\mathcal{P}_+$ and $\mathcal{P}_-$ at $\sigma$ can serve as a pseudo-orthonormal basis of $T_{\sigma}\Lambda$, where $\sigma$ is a point in $\Lambda$ that corresponds to $\varSigma$. These vectors can be obtained in $\Pi_\primem$ by rotation and Lorentz boost (hyperbolic rotation) of $\sigma$. 
The corresponding vectors in $\vect{S}^1\times\vect{S}^1\setminus\Delta$ are illustrated as the second and the last of Figure \ref{pencil_S^0_in_S^1} \begin{figure} \caption{Pseudo-orthogonal basis of a tangent space of $\vect{S} \label{pencil_S^0_in_S^1} \end{figure} Suppose $\{\vect u, \vect v\}$ is a pseudo-orthonormal basis of $T_{\sigma}\Lambda$. Then we have another basis, $\left\{{(\vect u+\vect v)}/{\sqrt 2}, {(\vect u-\vect v)}/{\sqrt 2}\right\}$ consisting of two light-like vectors (Figure \ref{pencil_S^0_in_S^1-l}). \begin{figure} \caption{Light-like basis of a tangent space of $\vect{S} \label{pencil_S^0_in_S^1-l} \end{figure} This illustrates why $\sigma_x$ and $\sigma_y$ in Subsection \ref{subsec_area_element} are null vectors. The pseudo-orthonormal basis of ${\vect{S}^3\times\vect{S}^3\setminus\Delta}$ can be given by that of $\vect{S}^1\times\vect{S}^1\setminus\Delta$. In fact, we can consider three mutually orthogonal circles through a given pair of points, and take a pseudo-orthonormal basis in each circle as illustrated in Figure \ref{orthonormal_basis_S^3-2}. \begin{figure} \caption{Pseudo-orthogonal basis of a tangent space of $\vect{S} \label{orthonormal_basis_S^3-2} \end{figure} \subsection{The imaginary signed area element and the symplectic form}\label{signed_area_form_not=sympl_form} Corollary \ref{cor_signed_area_form=sympl_form} does not necessarily hold for a surface in $\vect{S}^3\times\vect{S}^3\setminus\Delta$ which is not the product of two curves in $\vect{S}^3$. Let us show it in $\Re\mathfrak{e} \,R^3\times\Re\mathfrak{e} \,R^3\setminus\Delta$, fixing a stereographic projection $p$ from $\vect{S}^3$ to $\Re\mathfrak{e} \,R^3\cup\{\infty\}$. Suppose a pair of points in $\Re\mathfrak{e} \,R^3$ are expressed by $X(s,t)$ and $Y(s,t)$. Let $M$ be a surface $\{(X(s,t), Y(s,t))\}_{(s,t)\in D}$ in $\Re\mathfrak{e} \,R^3\times\Re\mathfrak{e} \,R^3\setminus\Delta$, where $D$ is a domain in $\mathbb R^2$. Put $X_s={\primeartial X}/{\primeartial s}, X_t={\primeartial X}/{\primeartial t}$, and $$ \wedgeidetilde X_s=2\left(X_s,\, \frac{X-Y}{|X-Y|}\right)\frac{X-Y}{|X-Y|}-X_s, \> \wedgeidetilde X_t=2\left(X_t,\, \frac{X-Y}{|X-Y|}\right)\frac{X-Y}{|X-Y|}-X_t. $$ Then $\wedgeidetilde X_s$ is the tangent vector at $Y$ to a circle which is tangent to $X_s$ at $X$ that passes through $Y$ with $\big|\wedgeidetilde X_s\big|=|X_s|$. The same interpretation also holds for $\wedgeidetilde X_t$. The pull-back of the canonical symplectic form of $T^\alphast\Re\mathfrak{e} \,R^3\cong\Re\mathfrak{e} \,R^3\times\Re\mathfrak{e} \,R^3\setminus\Delta$ is given by $$ (X\times Y)^\alphast\omegasub{\Re\mathfrak{e} \,R^n}=-2\left(\wedgeidetilde X_s\cdot Y_t-\wedgeidetilde X_t\cdot Y_s\right)\frac{ds\wedgeedge dt}{|X-Y|^2}\,, $$ where $\omegasub{\Re\mathfrak{e} \,R^n}$ is given by \eqref{omegasub_R}. This can be verified by showing that the both sides coincide when $X$ and $Y$ are located on specific positions, say $X(s_0,t_0)=(1,0,0)$ and $Y(s_0,t_0)=(-1,0,0)$ because the both sides are equivariant under the diagonal action of M\"obius transformations. On the other hand, the ``{\sl signed area element}'' $\alpha_{\mbox{\tiny $M$}}$ of $M$ associated with the pseudo-Riemannian structure of $\Theta(0,3)$ can be given as follows. 
Let $\hat \sigma$ be the composite \[\hat \sigma:D\, \smash{ \mathop{\hbox to 1.2cm{\rightarrowfill}} \limits^{\displaystyle {X\times Y}}_{\displaystyle {}}}\, M\hookrightarrow\mathbb{R}^3\times \mathbb{R}^3\setminus\Delta\,\spbmapright{p^{-1}\times p^{-1}}{}\, \vect{S}^3\times \vect{S}^3\setminus\Delta\,\spbmapright{\cong}{\psi}\,\Theta(0,3).\] Using the pseudo-orthonormal basis illustrated in Figure \ref{orthonormal_basis_S^3-2} and the M\"obius invariance, we have \setlength\arraycolsep{1pt} \[\begin{array}{rcl} (X\times Y)^\ast \alpha_{\mbox{\tiny $M$}} &=&\displaystyle \sqrt{\,\det\left(\!\begin{array}{cc} \langle \hat \sigma_s, \hat \sigma_s \rangle \>&\> \langle \hat \sigma_s, \hat \sigma_t \rangle \\ \langle \hat \sigma_t, \hat \sigma_s \rangle \>&\> \langle \hat \sigma_t, \hat \sigma_t \rangle \end{array}\!\right)}\,ds\wedge dt \\[4mm] &=&\displaystyle 2\sqrt{\,\det\left(\!\begin{array}{cc} 2\widetilde X_s\cdot Y_s \>\>&\>\> \widetilde X_s\cdot Y_t+\widetilde X_t\cdot Y_s \\ \widetilde X_s\cdot Y_t+\widetilde X_t\cdot Y_s \>\>&\>\> 2\widetilde X_t\cdot Y_t \end{array}\!\right)}\>\frac{ds\wedge dt}{|X-Y|^2}\,. \end{array}\] Therefore, the imaginary signed area element $\sqrt{-1}\,\alpha_M$ coincides with the pull-back of the canonical symplectic form $\omegasub{\mathbb{R}^3}\big|_{C_1\times C_2}$ up to sign if and only if $ (\widetilde X_s\cdot Y_t)(\widetilde X_t\cdot Y_s)=(\widetilde X_s\cdot Y_s)(\widetilde X_t\cdot Y_t), $ which holds if and only if $\widetilde X_s\times \widetilde X_t\perp Y_s\times Y_t.$ It does not hold in general. We remark that this condition does not necessarily imply that the surface is a product of two curves. We also remark that the above condition is always satisfied for a surface in $\vect{S}^1\times\vect{S}^1\setminus\Delta$. \j{ \subsection{Remark on energy minimizing Hopf links}\label{AMN} There is another variational characterization of the ``best'' Hopf link. } \j{The {\em M\"obius cross energy} \cite{FHW} of a $2$-component link $C_1\cup C_2$, which is a generalization of the energy of knots defined by the author \cite{OH1}, is given by \[E(C_1,C_2)=\int_{C_1\times C_2}\frac{dxdy}{|x-y|^2}.\] This energy is also invariant under M\"obius transformations. Recently, Agol, Marques and Neves \cite{AMN} proved Freedman-He-Wang's conjecture, namely, they showed that if the linking number of $C_1$ and $C_2$ is equal to $\pm1$, then $E(C_1,C_2)\ge2\pi^2$, and that the equality holds if and only if $C_1\cup C_2$ is an image of the ``best'' Hopf link by a M\"obius transformation. This is a much more difficult problem, and was proved using min-max theory which has also been used in the proof of the Willmore conjecture \cite{MN}. } \j{The formula \eqref{f_area} implies $E(C_1,C_2)\ge(1/2)A(C_1,C_2)$. To be more precise, equality does not occur, since the conformal angle between different components of a link cannot be identically $0$ or $\pi$. It might be interesting to point out that the infimum of $A(C_1,C_2)$ over all the $2$-component links is attained not at trivial links, but at the ``best'' Hopf link and its conformal images, whereas the infimum of $E(C_1,C_2)$ over all the $2$-component links is not attained, as $E(C_1,C_2)$ tends to $+0$ as the distance between $C_1$ and $C_2$ tends to $+\infty$. } \par\noindent Department of Mathematics, \\ Tokyo Metropolitan University, \\ 1-1 Minami-Ohsawa, Hachiouji-Shi, \\ Tokyo 192-0397, Japan. 
\\ \noindent {[email protected]} \end{document}
\begin{document} \maketitle \centerline{\scshape Ahmad El Soufi } {\footnotesize \centerline{Laboratoire de Math\'ematiques et Physique Th\'eorique, UMR CNRS 7350} \centerline{Universit\'e Fran\c{c}ois Rabelais de Tours, Parc de Grandmont, F-37200 Tours, France}} \centerline{\scshape Evans M. Harrell II} {\footnotesize \centerline{School of Mathematics} \centerline{Georgia Institute of Technology, Atlanta GA 30332-0160, USA} } \begin{abstract} We prove that among all doubly connected domains of $\mathbb R^n$ bounded by two spheres of given radii, $Z(t)$, the trace of the heat kernel with Dirichlet boundary conditions, achieves its minimum when the spheres are concentric (i.e., for the spherical shell). The supremum is attained when the interior sphere is in contact with the outer sphere. This is shown to be a special case of a more general theorem characterizing the optimal placement of a spherical obstacle inside a convex domain so as to maximize or minimize the trace of the Dirichlet heat kernel. In this case the minimizing position of the center of the obstacle belongs to the ``heart'' of the domain, while the maximizing situation occurs either in the interior of the heart or at a point where the obstacle is in contact with the outer boundary. Similar statements hold for the optimal positions of the obstacle for any spectral property that can be obtained as a positivity-preserving or positivity-reversing transform of $Z(t)$, including the spectral zeta function and, through it, the regularized determinant. \end{abstract} \section {Introduction and statement of results}\label{1} Let $\Omega\subset \mathbb R^n$ be a bounded $C^2$ Euclidean domain and let $$\lambda_1(\Omega) < \lambda_2(\Omega) \le \lambda_3(\Omega) \le \cdots \le \lambda_i(\Omega) \le \cdots\rightarrow \infty ,$$ be the sequence of eigenvalues of the Dirichlet realization of the Laplacian $-\Delta$ in $\Omega$, where each eigenvalue is repeated according to its multiplicity. The corresponding ``heat operator" $e^{t\Delta}$ has finite trace for all $t>0$ (known in physical literature as the ``partition function''), which we denote \begin{equation}\label{Zdef} Z_{\Omega}(t) = \sum_{k\ge 1} e^{- \lambda_k(\Omega) t}. \end{equation} Let $\zeta_{\Omega}$ be the zeta function, defined as the meromorphic extension to the entire complex plane of $ \sum_{k=1}^{\infty}\lambda_k(\Omega)^{-s}$, which is known to be convergent and holomorphic on $\{\mathrm{Re} \ s >\frac n 2 \} $. Following \cite{RS}, we denote by $\det(\Omega)$ the regularized determinant of the Dirichlet Laplacian in $\Omega$ defined by \begin{equation}\label{detdef} \det(\Omega)=\exp \left(-\zeta_{\Omega}'(0)\right). \end{equation} Eigenvalue optimization problems date from Rayleigh's ``Theory of Sound'' (1877), where it was suggested that the disk should minimize the first eigenvalue $\lambda_1$ among all planar domains of given measure. Rayleigh's conjecture was proved in any dimension independently by Faber \cite{F} and Krahn \cite{Kr}. Later, Luttinger \cite{Lut} proved an isoperimetric result analogous to Faber-Krahn for $Z(t)$, considered as a functional on the set of bounded Euclidean domains, that is, for any bounded domain $\Omega\subset \mathbb{R}^n$ and any $t>0$, Luttinger showed that $$Z_{\Omega}(t)\le Z_{\Omega^*}(t),$$ where $\Omega^*$ is a Euclidean ball whose volume is equal to that of $\Omega$. 
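As a concrete illustration of the three quantities just introduced (a standard one-dimensional computation, recalled here only for orientation), take $\Omega=(0,L)\subset\mathbb R$, so that $\lambda_k=\pi^2k^2/L^2$. Then \eqref{Zdef} and the Jacobi theta transformation give
\begin{equation*}
Z_{(0,L)}(t)=\sum_{k\ge1}e^{-\pi^2k^2t/L^2}=\frac{L}{\sqrt{4\pi t}}-\frac12+O\bigl(e^{-L^2/t}\bigr)\qquad (t\to0^+),
\end{equation*}
while $\zeta_{(0,L)}(s)=(L/\pi)^{2s}\zeta_R(2s)$, with $\zeta_R$ the Riemann zeta function. Using $\zeta_R(0)=-\tfrac12$ and $\zeta_R'(0)=-\tfrac12\log(2\pi)$ one finds $\zeta_{(0,L)}'(0)=-\log(2L)$, so that \eqref{detdef} yields $\det\bigl((0,L)\bigr)=2L$.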
A similar property was proved in \cite{OPS} for the regularized determinant of the Laplacian in two dimensions (see \cite {LaMo, R} for other examples of results in this direction). The case of multiply connected planar domains, i.e. whose boundary admits more than one component, was first considered by Hersch. Using the method of interior parallels, in \cite{H2} Hersch proved the following extremal property of annular membranes:\\ ``\emph{A doubly connected fixed membrane, bounded by two circles of given radii, has maximum $\lambda_1$ when the circles are concentric}''. Hersch's result has been extended to a wider class of domains in any dimension by Harrell, Kr\"oger and Kurata \cite{HKK} and Kesavan \cite{K}, whereby the authors consider a fixed domain $D$ from which an ``obstacle'' of fixed shape, usually spherical, has been excised. The position of the obstacle is allowed to vary, and the problem is to maximize or minimize $\lambda_1$. The critical assumption on the domain $D$ in \cite{HKK} is an ``interior symmetry property,'' and with this assumption the authors further proved that, for the special case of two balls, $\lambda_1$ decreases when the center of the small ball (the obstacle) moves away from the center of the large ball, using a technique of domain reflection. For a wider class of domains containing obstacles, it was shown in \cite{HKK} that the maximizing position of the obstacle resides in a special subset of $D$, which in the case where $D$ is convex corresponds to what has later come to be called the {\em heart} of $D$ in \cite{BrMa, BrMaSa}, denoted $\heartsuit(D)$ (see the definition below). El Soufi and Kiwan \cite {EK1, EK2} have moreover proved other extensions of Hersch's result including one valid for the second eigenvalue $\lambda_2$. The main aim of this paper is to establish a Hersch-type extremal result for the heat trace, the spectral zeta function, and the determinant of the Laplacian, as well as suitable generalizations for more general outer domains. We begin by stating the special case of domains bounded by balls: Given two positive numbers $R>r$ and a point ${\bf x}\in \mathbb R^n$, $|{\bf x}|<R-r$, we denote by $\Omega({\bf x})$ the domain of $\mathbb R^n$ obtained by removing the ball $B({\bf x},r)$ of radius $r$ centered at ${\bf x}$ from within the ball of radius $R$ centered at the origin. \begin{theorem}\label{ball} {Let $\Omega({\bf x})$ be the domain bounded by balls as in the preceding paragraph.} (i) For every $t>0$, the heat trace $Z_{\Omega({\bf x})}(t)$ is nondecreasing as the point ${\bf x}$ moves from the origin {directly} towards the boundary of the larger ball. In particular, $Z_{\Omega({\bf x})}(t)$ is minimal when the balls are concentric (${\bf x}=O$) and maximal when the small ball is in contact with the boundary of the larger ball ($|{\bf x}|=R-r$). (ii) For every $s>0$, the zeta function $\zeta_{\Omega({\bf x})}(s)$ increases as the point ${\bf x}$ moves from the origin {directly} towards the boundary of the larger ball. In particular, $\zeta_{\Omega({\bf x})}$ is minimal when the balls are concentric and maximal in the limiting situation when the small ball approaches the boundary of the larger ball. (iii) The determinant of the Laplacian $\det(\Omega({\bf x}))$ decreases as the point ${\bf x}$ moves from the origin {directly} towards the boundary of the larger ball. 
In particular, $\det(\Omega({\bf x}))$ is maximal when the balls are concentric and minimal in the limiting situation when the small ball approaches the boundary of the larger ball. \end{theorem} Let us clarify what we mean by the ``limiting situation when the ball approaches the boundary'' in this and in the more general Theorem \ref{heart}. Our conclusion in these cases is derived by contradiction. By assuming that the obstacle is in the interior of the domain, extremality can be excluded. If the function under consideration is continuous, then, since the set of configurations including the cases where the obstacle touches the boundary is compact, the extremum is attained there. Since the heat trace is continuous with respect to translations of the obstacle, claim (i) is not problematic, and the same is true for claim (ii) when $s > \frac n 2$. For smaller real values of $s$ the spectral zeta function is defined by analytic continuation, and the determinant of the Laplacian is defined by \eqref{detdef}. In the proof these quantities will be shown to be continuous as a function of the position of the obstacle as it moves to the boundary, and their limits are what we describe as the ``limiting situations'' of the theorems. The subtlety here is that when the obstacle is in contact with the boundary, a cusp is formed, as a consequence of which the heat-trace asymptotics are not necessarily known well enough to allow a direct definition of $\zeta(s)$, $s\le\frac n2$, by analytic continuation, cf. \cite{Vas}. Since $e^{- \lambda_1(\Omega) t}$ is the leading term in $Z_{\Omega}(t)$ as $t$ goes to infinity, it is clear that Theorem \ref{ball} implies the optimization result mentioned above for $\lambda_1$. In order to state the more general theorem of which Theorem \ref{ball} is a special case, we recall some definitions. \begin{definition} Let $P$ be a hyperplane in $\mathbb R^n$ which intersects $D$ so that $D\setminus P$ is the union of two open subsets located on either side of $P$. According to \cite{HKK}, the domain $D$ is said to have the \emph{interior reflection property} with respect to $P$ if the reflection through $P$ of one of these subsets, denoted $D_s$, is contained in $D$. Any such P will be called a \emph{hyperplane of interior reflection} for $D$. The subdomain $D_s$ will be called the {\em small side} of $D$ (with respect to $P$) and {\bf its complement } $D_b=D\setminus D_s$ will be called the {\em big side}. When $D$ is convex, the \emph{heart} of $D$ is defined as the intersection of all the big sides with respect to the hyperplanes of interior reflection of $D$. We denote it $\heartsuit(D)$. \end{definition} This definition of $\heartsuit(D)$ is equivalent to that introduced in \cite{BrMa, BrMaSa}, where several properties of the heart of a convex domain are investigated. A point ${\bf x}$ belongs to $\heartsuit(D)$ if either there is no hyperplane of interior reflection passing through ${\bf x}$ or if any hyperplane of interior reflection passing through ${\bf x}$ is such that the reflection of $\partial D_s\setminus P$ touches $\partial D_b$. The first situation occurs when ${\bf x}$ is an interior point of $\heartsuit(D)$ while the latter is characteristic of the boundary points of $\heartsuit(D)$. By construction the heart of a bounded convex domain $D$ is a nonempty relatively closed subset of $D$. Moreover, if $D$ is strictly convex and bounded, then ${\rm dist} (\heartsuit(D), \partial D)>0$. 
We observe that for the ball and for many other domains with sufficient symmetry to identify an unambiguous center point, the heart is simply the center. It is, however, shown in \cite{BrMa} that without reflection symmetries the typical heart has non-empty interior, even for simple polygons. For instance, for an asymmetric acute triangle, it is a quadrilateral bounded by two angle bisectors and two perpendicular axes, while for an asymmetric obtuse triangle, it can be either a quadrilateral or a pentagon. \begin{theorem}\label{heart} Let $D$ be a bounded $C^2$ convex domain of $\mathbb R^n$ and let $r>0$ be such that $D_r=\{{\bf x}\in D : {\rm dist}({\bf x}, \partial D) > r\}\ne \emptyset$. For every ${\bf x}\in \bar D_r$ we set $\Omega ( {\bf x} ) =D\setminus B({\bf x},r)$. \noindent(i) For each fixed $t>0$, the function ${\bf x}\in \bar D_r\mapsto Z_{\Omega({\bf x})}(t)$ achieves its minimum at a point ${\bf x}_0(t) \in \heartsuit(D) $, while the maximum is achieved either at an interior point ${\bf x}_1(t)$ of $\heartsuit(D)$ or in the limiting situation when the ball approaches the boundary of $D$. \noindent(ii) For each fixed $s>0$ and $ {\bf x} \in D_r\setminus \heartsuit(D)$, the zeta function satisfies \begin{equation}\label{supzeta} \zeta_{\Omega({\bf x})}(s)>\inf_{{\bf y}\in \heartsuit(D)\cap D_r} \zeta_{\Omega({\bf y})}(s), \end{equation} and $\zeta_{\Omega({\bf x})}(s)$ is less than the supremum of all the values attained by the zeta function in the limiting situations when the ball approaches the boundary of $D$. Moreover, if $r< {\rm dist}(\heartsuit(D), \partial D)$, then $ {\bf x} \in D_r\mapsto \zeta_{\Omega({\bf x})}(s)$ achieves its infimum at a point ${\bf x}_0(s) \in \heartsuit(D)$, while the supremum is reached either at an interior point ${\bf x}_1(s)$ of $\heartsuit(D)$ or in a limiting situation when the ball approaches the boundary of $D$. \noindent(iii) The regularized determinant of the Laplacian satisfies \begin{equation}\label{supdet} \det (\Omega ( {\bf x} ))<\sup_{{\bf y}\in \heartsuit(D)\cap D_r}\det (\Omega ( {\bf y} )) \end{equation} for every $ {\bf x} \in D_r\setminus \heartsuit(D)$, and $\det (\Omega ( {\bf x} ))$ is greater than the infimum of all the values attained by the determinant in the limiting situations when the ball approaches the boundary of $D$. Moreover, if $r< {\rm dist}(\heartsuit(D), \partial D)$, then the function $ {\bf x} \in D_r\mapsto \det (\Omega ( {\bf x} ))$ achieves its supremum at a point ${\bf x'}_0 \in \heartsuit(D)$, while the infimum is achieved either at an interior point ${\bf x'}_1$ of $\heartsuit(D)$ or in a limiting situation when the ball approaches the boundary of $D$. \end{theorem} We remark that a straightforward consideration of the limit $t \to \infty$ leads back to related results of \cite{HKK}. As in that article, it is not difficult to extend Theorem \ref{heart} to many nonconvex domains, at the price of entering into the sometimes complex nature of $\heartsuit(D)$. For simplicity we limit the present article to the case of convex $D$. We conjecture that the minimum of $Z_{\Omega({\bf x})}(t)$ is never achieved outside $\heartsuit(D)$, and that the maximum of $Z_{\Omega({\bf x})}(t)$ and resp. the maximum of $\zeta_{\Omega({\bf x})}(s)$ and the minimum of $\det (\Omega ( {\bf x} ))$), are achieved only in the limiting situation where $B({\bf x},r)$ touches the boundary of $D$. 
This is certainly the case for example when a convex domain $D$ admits a hyperplane of symmetry since then, ${\rm int }(\heartsuit(D))=\emptyset$. \begin{remark} We shall approach the analysis of the spectral zeta function and the regularized determinant through order-preserving integral transforms relating them to the heat trace. As in \cite{HaHe}, transform theory can similarly be used to obtain corollaries for many further functions, e.g., Riesz means, with respect to the optimal position of an obstacle. \end{remark} A main ingredient of the proof is the following Hadamard-type formula for the first variation of $Z_\Omega(t)$ with respect to a deformation $\Omega_\varepsilon=f_\varepsilon(\Omega)$ of the domain : \begin{equation}\label{hadamard-heat} \frac{\partial}{\partial \varepsilon} Z_{\Omega_{\varepsilon}}(t)\big|_ {\varepsilon=0} = -{t\over 2} \int_{\partial\Omega} \Delta K(t,{\bf x},{\bf x}) v({\bf x}) dx, \end{equation} where $v= X\cdot \nu$ is the component of the deformation vectorfield $X=\frac{df_\varepsilon}{d\varepsilon}\vert_{\varepsilon=0}$ in the direction of the inward unit normal $\nu$, and $K$ is the heat kernel (cf. \cite[Theorem 4.1]{EI1}). Notice that this formula coincides with that given by Ozawa in \cite[Theorem 4]{oza} for deformations of the form $f_\varepsilon({\bf x}) ={\bf x} +\varepsilon \rho ({\bf x})\nu({\bf x})$ along the boundary, where $\rho$ is a smooth function on $\partial\Omega$. Indeed, it is easy to check that for all $ {\bf x}\in\partial\Omega$, $v({\bf x})=\rho ({\bf x}) $ and (using \eqref{heatkerseries} below) $ \Delta K(t,{\bf x},{\bf x}) = {2} \sum_{k=0}^\infty e^{-\lambda_k t} \vert\nabla u_k({\bf x})\vert^2= {2} \sum_{k=0}^\infty e^{-\lambda_k t} \vert\frac{\partial u_k}{\partial\nu}({\bf x})\vert^2$. For more information about Hadamard deformations we refer to \cite{EI2, EI1, Gar, GaSc, Henry, oza, RS}.) \section {Proof of results}\label{2} Let $\Omega\subset \mathbb R^n$ be a domain of the form $\Omega=D\setminus B$, where $D$ is a bounded domain and $B$ is a convex domain such that the closure of $B$ is contained in $D$. {(For simplicity our theorems have been stated for the case of a spherical obstacle $B$, but the essential argument requires only a lower degree of symmetry.)} Let us start by establishing how the zeta function and the Laplacian determinant are related to the heat trace in our situation. Indeed, the following formula is valid for every complex number $s$ with $\mathrm{Re} \ s>\frac n2$: $$ \zeta_{\Omega} (s):=\sum_{k \geq 1}{}\ {1\over{\lambda_k^s(\Omega})}= {1\over \Gamma(s)} \int_{0}^{\infty} Z_{\Omega}(t) t^{s-1}dt.$$ It is well known that the function $Z_{\Omega}$ satisfies \begin{equation}\label{ZetaAsym} Z_{\Omega}(t)\sim \sum_{k \geq 0} a_{k}\ t^{(k-n) \over 2}\qquad {\rm as }\ t\to 0, \end{equation} where $a_k$ is a sequence of real numbers that depend only on the geometry of the boundary of $\Omega$ (see e.g. \cite{BrGi}). In particular, {\bf the coefficients $a_k$ are independent of the position of $B$ within $D$}. We set $$ \tilde Z_{\Omega}(t) = Z_{\Omega}(t) - \sum_{k =0 }^{n} a_{k}\ t^{(k-n) \over 2},$$ so that $\tilde Z_{\Omega}(t)/\sqrt t $ is a bounded function in a neighborhood of $t=0$. We also introduce the meromorphic function $$ R(s)= {1\over \Gamma(s)}\sum_{k =0 }^{n}a_{k}\int_{0}^{1} t^{s-1+(k-n)/2}dt = {1\over \Gamma(s)}\sum_{k =0 }^{n} \frac{a_{k}}{s-(n-k)/ 2}, $$ which has poles at $1/2, 1, 3/ 2, 2, \cdots, n/2$. 
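To make the role of $R$ concrete, consider for instance the planar case $n=2$ with smooth boundary. The classical values of the first heat coefficients (see e.g. \cite{BrGi}) are $a_0=\frac{\vert\Omega\vert}{4\pi}$, $a_1=-\frac{\vert\partial\Omega\vert}{8\sqrt{\pi}}$ and $a_2=\frac{\chi(\Omega)}{6}$, where $\chi(\Omega)$ denotes the Euler characteristic (which vanishes for the doubly connected domains $\Omega=D\setminus B$ considered here), so that $$ R(s)= {1\over \Gamma(s)}\left(\frac{\vert\Omega\vert}{4\pi}\,{1\over s-1}-\frac{\vert\partial\Omega\vert}{8\sqrt{\pi}}\,{1\over s-\frac12}+\frac{\chi(\Omega)}{6}\,{1\over s}\right). $$ Each of these quantities depends only on $\vert\Omega\vert$, $\vert\partial\Omega\vert$ and the topology of $\Omega$, so $R$ indeed does not change when the obstacle $B$ is translated inside $D$.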
(Note that $s=0$ is not a pole since $1\over s\Gamma(s)$ is a holomorphic function on $\mathbb C$.) Consequently, for every $s\in \mathbb R^+$, \begin{equation}\label{zeta} \zeta_{\Omega} (s)= R(s)+ {1\over \Gamma(s)} \int_{0}^{1} \tilde Z_{\Omega}(t) t^{s-1}dt + {1\over \Gamma(s)} \int_{1}^{\infty} Z_{\Omega}(t) t^{s-1}dt. \end{equation} where the last term is an entire function of $s$, since $ Z_{\Omega}(t) $ behaves as $ e^{-\lambda_1(\Omega) t}$ when $t\to +\infty$. On the other hand, the reciprocal gamma function $f(s):={1\over \Gamma(s)}$ vanishes at $s=0$ and satisfies $f'(0)=1$. Therefore, \begin{equation}\label{zetaprime} \zeta'_{\Omega} (0)= R'(0) + \int_{0}^{1} \tilde Z_{\Omega}(t) t^{-1}dt + \int_{1}^{\infty} Z_{\Omega}(t) t^{-1}dt . \end{equation} Assume that the domain $D$ has the interior reflection property with respect to a hyperplane $P$ about which the set $B$ is reflection-symmetric. (Here we do not need to restrict to convex $D$.) Our strategy is to consider a displacement of the obstacle by $\varepsilon$ in a certain direction and to show that $Z_{\Omega_{\varepsilon}}(t)$ is monotonically increasing in that direction. Thus let $V$ be the unit vector perpendicular to $P$ and pointing in the direction of the small side $D_s$. For small $\varepsilon >0$, we translate $B$ by a distance $\varepsilon$ in the direction of $V$ and set $B_{\varepsilon}:=B+\varepsilon V$ and $\Omega_{\varepsilon}:=D\setminus B_{\varepsilon}$. The results of this paper rely on the following proposition. \begin{proposition}\label{det} Assume that the domain $D$ has the interior reflection property with respect to a hyperplane $P$ about which the set $B$ is reflection-symmetric. Consider displacements as described above. Then, \begin{equation}\label{derivZ} \frac{\partial}{\partial \varepsilon} Z_{\Omega_{\varepsilon}}(t)\big|_ {\varepsilon=0} >0, \end{equation} {except possibly for a finite set of values $t$ in any interval $[\tau,\infty)$ with $\tau > 0$. Moreover, for each $s > 0$,} \begin{equation}\label{derivzeta} \frac{d}{d \varepsilon} \zeta_{\Omega_{\varepsilon}}(s)\big|_ {\varepsilon=0} >0, \end{equation} and \begin{equation}\label{derivdet} \frac{d}{d \varepsilon} \det({\Omega_{\varepsilon}})\big|_ {\varepsilon=0} <0. \end{equation} \end{proposition} \begin{proof} The heat kernel $K$ on $\Omega$ under the Dirichlet boundary condition is defined as the fundamental solution of the heat equation, that is $$\left\{ \begin{array}{l} ({\partial\over \partial t } - \Delta_y) K(t,{\bf x},{\bf y}) =0 \;\; \hbox{in}\; \Omega\\ \\ K(0^+,{\bf x},{\bf y})=\delta_{\bf x}({\bf y})\\ \\ K(t,{\bf x},{\bf y}) =0 \;\; \forall {\bf y}\in \partial \Omega. \; \end{array} \right.$$ The relationship between the heat kernel and the spectral decomposition of the Dirichlet Laplacian in $\Omega$ is given by \begin{equation}\label{heatkerseries} K(t,{\bf x},{\bf y})=\sum_{k\ge 1} e^{- \lambda_k(\Omega) t} u_k ({\bf x}) u_k ({\bf y}), \end{equation} where $(u_k)_{k\ge1}$ is an $L^2(\Omega)$-orthonormal family of eigenfunctions satisfying $$\left\{ \begin{array}{l} -\Delta u_k=\lambda_k (\Omega) u_k\;\; \hbox{in}\; \Omega\\ \\ u_k =0 \;\; \hbox{on}\; \partial \Omega.\\ \end{array} \right.$$\\ The heat trace is then given by $$Z_{\Omega}(t) = \int_\Omega K(t,{\bf x},{\bf x}) dx= \sum_{k\ge 1} e^{- \lambda_k (\Omega) t}.$$ Let $X$ be a smooth vectorfield such that $X$ vanishes on $\partial D$ and coincides with the vector $V$ on $\partial B$. 
For sufficiently small $\varepsilon$, one has $\Omega_{\varepsilon} = f_\varepsilon (\Omega)$ where $f_\varepsilon ({\bf x})={\bf x}+\varepsilon X({\bf x})$. The Hadamard-type formula \eqref{hadamard-heat} gives : \begin{eqnarray*} \frac{\partial}{\partial \varepsilon} Z_{\Omega_{\varepsilon}}(t)\big|_ {\varepsilon=0} &=& -{t\over 2} \int_{\partial\Omega} \Delta K(t,{\bf x},{\bf x}) \left(X\cdot \nu\right)({\bf x}) dx\\ &=& -{t\over 2} \int_{\partial B} \Delta K(t,{\bf x},{\bf x}) \, V\cdot\nu ({\bf x}) dx. \end{eqnarray*} Let $B_s$ be the half of $B$ contained in the small side $D_s$ of $D$ and $(\partial B)_s=\partial B\cap D_s$. Here, we assume without loss of generality that $D_s$ is connected (otherwise, $B_s$ is contained in one connected component of $D_s$ and we concentrate our analysis on this single component). Using the symmetry assumption on $B$ with respect to $P$ we obtain \begin{equation}\label{hadamard} \frac{\partial}{\partial \varepsilon} Z_{\Omega_{\varepsilon}}(t)\big|_ {\varepsilon=0} = -{t\over 2} \int_{(\partial B)_s} \left(\Delta K(t,{\bf x},{\bf x})-\Delta K(t,{\bf x}^*,{\bf x}^*)\right) \, V\cdot\nu ({\bf x}) dx \end{equation} where ${\bf x}^*$ stands for the reflection of ${\bf x}$ through $P$. Define the function $\phi(t,{\bf x},{\bf y})=K(t,{\bf x},{\bf y})-K(t,{\bf x}^*,{\bf y}^*)$ on $(0,\infty)\times \Omega_s\times\Omega_s$ with $\Omega_s=D_s\setminus B_s$. \textit{Claim }: For all $(t,{\bf x},{\bf y})\in (0,\infty)\times \Omega_s\times\Omega_s$, $\phi(t,{\bf x},{\bf y}) \le 0$. \noindent Let us check the sign of $\phi(t,{\bf x},{\bf y})$ on $(0,\infty)\times \partial\Omega_s\times\partial \Omega_s$. Notice that $\partial\Omega_s$ is the union of three components : $(\partial D)_s$, $(\partial B)_s$ and $\Omega \cap P$. First, from the boundary conditions, if ${\bf x}\in (\partial B)_s$ or ${\bf y}\in (\partial B)_s$, then $K(t,{\bf x},{\bf y})=K(t,{\bf x}^*,{\bf y}^*)=0$ and, hence, $\phi(t,{\bf x},{\bf y}) =0$. On the other hand, $K(t,{\bf x},{\bf y})$ vanishes as soon as ${\bf x}\in (\partial D)_s$ or ${\bf y}\in (\partial D)_s$, which implies $ \phi(t,{\bf x},{\bf y})=-K(t,{\bf x}^*,{\bf y}^*)\le 0$. It remains to consider the case where both ${\bf x}$ and ${\bf y}$ belong to $\Omega \cap P$. In this case we have ${\bf x}^*={\bf x}$, ${\bf y}^*={\bf y}$ and $\phi(t,{\bf x},{\bf y}) =0$. Observe next that for all ${\bf x}\in \bar\Omega_s$, the function $(t,{\bf y})\mapsto \phi(t,{\bf x},{\bf y})$ is a solution of the following parabolic problem : $$(*)\left\{ \begin{array}{l} ({\partial\over \partial t } - \Delta_y) \phi(t,{\bf x},{\bf y}) =0 \;\; \hbox{in}\; \Omega_s\\ \\ \phi(0^+,{\bf x},{\bf y})=0. \end{array} \right.$$ Given any ${\bf x}\in\partial\Omega_s$, the parabolic maximum principle (see e.g., \cite{Eva}, {\S 7.1}) tells us that, since $(t,{\bf y})\mapsto \phi(t,{\bf x},{\bf y})$ is nonpositive on the boundary of the cylinder $(0,\infty)\times \Omega_s$, it follows that $ \phi(t,{\bf x},{\bf y})\le 0$ for all $t>0$ and all ${\bf y}\in \bar\Omega_s$. Now, from the symmetry of $\phi$ with respect to ${\bf x}$ and ${\bf y}$, the function $(t,{\bf x})\mapsto \phi(t,{\bf x},{\bf y})$ satisfies the same parabolic system as $(*)$. 
Since we have established that $\forall {\bf y}\in \bar\Omega_s$, the function $(t,{\bf x})\mapsto \phi(t,{\bf x},{\bf y})$ is everywhere nonpositive on the boundary of the cylinder $(0,\infty)\times \Omega_s$, the parabolic maximum principle then implies that $\phi(t,{\bf x},{\bf y}) $ is nonpositive in the whole cylinder $(0,\infty)\times \Omega_s\times\Omega_s$. \textit{Claim }: $\Delta \phi (t,{\bf x},{\bf x})\le0$ for all $(t,{\bf x})\in (0,\infty)\times (\partial B)_s$. For a sufficiently small $\delta >0$, let $V=\{\psi({\bf z},\rho):={\bf z}+\rho\ \nu({\bf z}) \; ; \; {\bf z}\in\partial B\mbox{ and } 0\le\rho < \delta\}$ be the 1-sided $\delta$-tubular neighborhood of $\partial B$. The Euclidean metric $g_{E}$ can be expressed in $V$ with respect to so-called Fermi coordinates $({\bf z},\rho)\in\partial B \times (0,\delta)$ as follows (see for instance \cite[Lemma 3.1]{pac}) : $$g_E= d\rho^2+g_\rho,$$ where $g_\rho $ is a Riemannian metric on the hypersurface $ \Gamma_\rho= \{{\bf z}+\rho\ \nu({\bf z}) \; ; \; {\bf z}\in\partial B\}$. Consequently, the Euclidean Laplacian in $V$ takes on the following form with respect to Fermi coordinates : $$\Delta= \frac{\partial^2}{\partial \rho^2} - H_\rho \frac{\partial}{\partial \rho} + \Delta_{g_\rho},$$ where $H_\rho$ is the mean curvature of $ \Gamma_\rho$ and $\Delta_{g_\rho}$ is the Laplace-Beltrami operator of $(\Gamma_\rho, g_\rho)$. Now, $ K(t,{\bf x},{\bf x})=\sum_{k\ge 1} e^{- \lambda_k(\Omega) t} u_k ({\bf x})^2$, and it is known that for $C^2$ domains, $\|\nabla u_k\|_\infty$ is bounded by a constant times a finite power of $\lambda_k$ (see \cite{Gri,HaTa}). {Hence, the functions $ K(t,{\bf x},{\bf x})$ and consequently $\phi (t,{\bf x},{\bf x})$ vanish} quadratically on $(\partial B)_s$. Thus, for any point $ {\bf z}=\psi({\bf z}, 0) \in (\partial B)_s $, $$ \frac{\partial }{\partial \rho}\phi (t,\psi({\bf z}, \rho),\psi({\bf z}, \rho)) \big\vert_{\rho=0}=0 \quad \mbox{and}\quad \Delta_{g_\rho}\phi (t,\psi({\bf z}, \rho),\psi({\bf z}, \rho)) \big\vert_{\rho=0} =0.$$ Therefore, $$ \Delta \phi (t,{\bf z},{\bf z})= \frac{\partial^2}{\partial \rho^2}\phi (t,\psi({\bf z}, \rho),\psi({\bf z}, \rho))\big\vert_{\rho=0}, $$ which is nonpositive since $ \frac{\partial }{\partial \rho}\phi (t,\psi({\bf z}, \rho),\psi({\bf z}, \rho)) \big\vert_{\rho=0}=0 $ and, according to what we proved in the previous claim, the function $\rho\in[0,\delta)\mapsto \phi (t,\psi({\bf z}, \rho),\psi({\bf z}, \rho))$ achieves its maximum at $\rho=0$. \textit{Claim }: Let $\tau $ be any positive real number. Except possibly for a finite set of values of $t$ in $[\tau,\infty)$, $$ \frac{\partial}{\partial \varepsilon} Z_{\Omega_{\varepsilon}}(t)\big|_ {\varepsilon=0} >0.$$ From the assumptions that $B$ is convex and symmetric with respect to the hyperplane $P$, it follows that the product $V\cdot\nu({\bf x})$ is positive at {almost} every point ${\bf x}$ of $(\partial B)_s$. From equation \eqref{hadamard} and the previous claim we then deduce that $\forall t>0$, $$ \frac{\partial}{\partial \varepsilon} Z_{\Omega_{\varepsilon}}(t)\big|_ {\varepsilon=0} \ge 0. $$ To show that this quantity cannot vanish at more than a finite set of values of $t\in [\tau,\infty)$, we shall show that it is analytic as a function of $t$ in the open right half plane, and positive for real values of $t$ sufficiently large. By the unique continuation theorem an analytic function that vanishes on a set with a point of accumulation is identically zero, which would pose a contradiction. 
To establish {the analytic properties} of $Z_{\Omega_{\varepsilon}}(t)$, we argue as follows. Observe first that the deformation $\Omega_\varepsilon$ depends analytically on $\varepsilon$ {in a neighborhood of $0$}, since $\Omega_\varepsilon=f_\varepsilon (\Omega)$ with $f_\varepsilon ({\bf x})={\bf x}+\varepsilon X({\bf x})$. As in the proof of Lemma 3.1 in \cite{EI1}, the {Dirichlet Laplacian in $\Omega_\varepsilon$ is an analytic family of operators in the sense of Kato \cite{Kat} with respect to the parameter $\varepsilon$.} Because each eigenvalue of the Laplacian is at most finitely degenerate, according to {\cite[p. 425]{Kat},} there is a numbering of the eigenvalues $\{\lambda_k(\Omega_\varepsilon)\} \to \{\Lambda_k(\varepsilon)\}$ for which each $ \Lambda_k(\varepsilon)$ is analytic in $\varepsilon$ in a neighborhood of $\varepsilon=0$. (Using this numbering, which is important only in a neighborhood of a degenerate eigenvalue, does not alter $Z(t)$ as defined in \eqref{Zdef}.) In consequence of the Hadamard formula for the derivative of an eigenvalue, $\frac{\partial \Lambda_k}{\partial \varepsilon} |_{\varepsilon=0}$ is dominated in norm by the integral of the square of the normal derivative of an associated $L^2$ normalized eigenfunction $u_k$ over the boundary of the obstacle. {We again call upon estimates for $C^2$ domains, by which both} $\|u_k\|_\infty$ and $\|\nabla u_k\|_\infty$ are bounded by constants times finite powers of $\lambda_k$ \cite{Gri,HaTa}, which in turn $\sim k^{\frac{2}{n}}$ by the Weyl law. It follows that both the series $\sum_{k\ge 1} e^{- \lambda_k(\Omega_{\varepsilon}) t}$ and its term-by-term derivative with respect to $\varepsilon$ converge uniformly on each set of the form $\{{\rm Re }\ t \ge \tau > 0\}$, and are therefore analytic on such sets. To finish the argument, we observe that by differentiating $ Z_{\Omega_{\varepsilon}}(t) = \sum_{k\ge 1} e^{- \lambda_k(\Omega_{\varepsilon}) t}$, $$ \left.\frac{\partial}{\partial \varepsilon} Z_{\Omega_{\varepsilon}}(t)\right|_ {\varepsilon=0} = e^{- \lambda_1(\Omega) t}\left( - \left.\frac{\partial \lambda_1(\Omega_\varepsilon)}{\partial \varepsilon}\right|_ {\varepsilon=0} + 0(e^{- (\lambda_2 -\lambda_1)t})\right). $$ This is positive for large $t$ because $\lambda_1$ is nondegenerate and $\frac{\partial \lambda_1(\Omega_\varepsilon)}{\partial \varepsilon}\big|_ {\varepsilon=0} < 0$ by \cite{HKK}. This completes the proof of \eqref{derivZ}. The proof of {\eqref{derivzeta} and} \eqref{derivdet} relies on the formulae \eqref{zeta} and \eqref{zetaprime} that give for every $s\in \mathbb R^+$ and every $\varepsilon\ne 0$ sufficiently small, \begin{equation}\label{zeta_epsilon} \zeta_{\Omega_\varepsilon} (s)= R(s)+ {1\over \Gamma(s)} \int_{0}^{1} \tilde Z_{\Omega_\varepsilon}(t) t^{s-1}dt + {1\over \Gamma(s)} \int_{1}^{\infty} Z_{\Omega_\varepsilon}(t) t^{s-1}dt \end{equation} and \begin{equation}\label{zetaprime_epsilon} \zeta'_{\Omega_\varepsilon} (0)= R'(0) + \int_{0}^{1} \tilde Z_{\Omega_\varepsilon}(t) t^{-1}dt + \int_{1}^{\infty} Z_{\Omega_\varepsilon}(t) t^{-1}dt , \end{equation} with $\det (\Omega_\varepsilon ) = e^{-\zeta'_{\Omega_\varepsilon} (0)}$. \end{proof} \begin{proof}[Proof of Theorem \ref{heart}] Let $D$ be a bounded convex domain of $\mathbb R^n$ and let $r>0$ be less than the inradius of $D$. Observe first that for every $t>0$, the function ${\bf x}\mapsto Z_{\Omega({\bf x})}(t)$ is continuous on $ \bar D_r=\{ {\bf x} \in D : {\rm dist}({\bf x}, \partial D) \ge r \} $. 
{Indeed, we know that $$\lambda_k(\Omega({\bf x}))\ge \frac {n}{n+2} C_n \left(\frac k{\vert\Omega({\bf x})\vert }\right)^{\frac2n}$$ (see \cite{LY}), where} $C_n$ is the constant appearing in Weyl's asymptotic formula and $\vert\Omega({\bf x})\vert$ is the volume of $\Omega({\bf x})$. Since $\vert\Omega({\bf x})\vert$ does not depend on ${\bf x}$, we deduce that the series $\sum e^{-\lambda_k(\Omega({\bf x})) t}$ converges uniformly on $ \bar D_r$ and that its sum $Z_{\Omega({\bf x})}(t)$ depends continuously on ${\bf x}$. (The continuity of eigenvalues $\lambda_k(\Omega({\bf x})) $ can be derived in several ways from standard continuity results cited in \cite[Section 2.3.3]{He}. In particular, see Remark 6.2 of \cite{Dan}.) Consequently, $Z_{\Omega({\bf x})}(t)$ achieves its extremal values in $ \bar D_r$. Let ${\bf x}\in D_r =\{ {\bf x} \in D : {\rm dist}({\bf x}, \partial D) > r \} $ be a point such that ${\bf x} \notin \heartsuit(D)$. From the definition of $\heartsuit(D)$, there exists a hyperplane of interior reflection $P$ of $D$ passing through ${\bf x}$. Moreover, since the reflection of $\partial D_s\setminus P$ is disjoint from $\partial D_b$, there exists $\delta>0$ such that $\forall \varepsilon\in[0,\delta]$, the hyperplane $P_\varepsilon$ parallel to $P$ and passing through ${\bf x_\varepsilon} = {\bf x}-\varepsilon V$ is a hyperplane of interior reflection, where $V$ is the unit vector perpendicular to $P$ and pointing in the direction of $D_s$. Applying Proposition \ref{det}, we see that the function $\varepsilon\mapsto Z_{\Omega({\bf x_\varepsilon})}(t) $ is monotonically nonincreasing (notice that the variation formula \eqref{derivZ} is given for a displacement into the small side $D_s$. Here, the obstacle moves in the opposite direction, that of $-V$, which has the effect of changing the sign of the derivative.) At the same time, the distance ${\rm dist}({\bf x}_\varepsilon,\heartsuit(D) ) $ is clearly decreasing since ${\bf x}_\varepsilon$ moves into the big side. It follows that the set of points where ${\bf x}\mapsto Z_{\Omega({\bf x})}(t)$ achieves its minimum cannot be disjoint from $\heartsuit(D)$. The minimum is therefore achieved at a point ${\bf x}_0(t)\in \heartsuit(D)$. Similarly, if a point ${\bf x}\in D_r $ does not belong to the interior of $\heartsuit(D)$, then there exists a hyperplane of interior reflection passing through ${\bf x}$ so that it is possible to move the obstacle into the small side $D_s$ along a line segment perpendicular to $P$. The function $Z_{\Omega({\bf x})}(t) $ is monotonically nondecreasing along such displacement while the obstacle approaches the boundary of $D$. Again, this proves that if the set of points where ${\bf x}\mapsto Z_{\Omega({\bf x})}(t)$ achieves its maximum is not contained in the interior of $\heartsuit(D)$, then it must hit $\{{\bf x}\in D\ :\ {\rm dist}({\bf x}, \partial D)=r\}$. 
The continuity of the zeta function and of the determinant in $D_r$ derive from the continuity of the heat trace, through \eqref{zeta} and \eqref{zetaprime}, according to which, for every $s\in \mathbb R^+$ and every ${\bf x}\in D_r$, \begin{equation}\label{zeta_x} \zeta_{\Omega({\bf x})} (s)= R(s)+ {1\over \Gamma(s)} \int_{0}^{1} \tilde Z_{{\Omega({\bf x})}}(t) t^{s-1}dt + {1\over \Gamma(s)} \int_{1}^{\infty} Z_{{\Omega({\bf x})}}(t) t^{s-1}dt, \end{equation} and \begin{equation}\label{zetaprime_x} \zeta'_{{\Omega({\bf x})}} (0)= R'(0) + \int_{0}^{1} \tilde Z_{{\Omega({\bf x})}}(t) t^{-1}dt + \int_{1}^{\infty} Z_{{\Omega({\bf x})}}(t) t^{-1}dt, \end{equation} where $R(s)$ is a function that does not depend on ${\bf x}$. These formulae are not necessarily valid in the situation where the ball $B({\bf x},r)$ touches the boundary of $D$ since, due to the cuspidal singularity that the domain $\Omega({\bf x})$ will then present, in consequence of which the function $\tilde Z_{\Omega({\bf x})}(t)/\sqrt t $ may cease to be bounded in the neighborhood of $t=0$. Let ${\bf x}\in D_r $ be a point lying outside $\heartsuit(D)$ and let $s\in\mathbb R^+$. As before, using the definition of $\heartsuit(D)$ and Proposition \ref{det}, we see that it is possible to move $B({\bf x},r) $ towards the heart so as to decrease $ \zeta_{\Omega({\bf x})} (s)$. This enables us to construct a (possibly finite) sequence of points converging to a point ${\bf y}\in \heartsuit(D)$, along which the zeta function is decreasing. Thus, $ \zeta_{\Omega({\bf x})} (s)< \zeta_{\Omega({\bf y})} (s)$, which leads to \eqref{supzeta}. Similarly, it is possible to move $B({\bf x},r) $ in the direction of the boundary so as to increase $ \zeta_{\Omega({\bf x})} (s)$, which implies that $ \zeta_{\Omega({\bf x})} (s)$ is less than a limiting value of $ \zeta_{\Omega({\bf y})} (s)$ as $B({\bf y},r) $ approaches the boundary of $D$.\\ Now, when $r< {\rm dist}(\heartsuit(D),\partial D)$, the heart is contained in $D_r$ and, consequently, ${\bf x}\mapsto \zeta_{\Omega({\bf x})} (s)$ is continuous on $\heartsuit(D)$ (which is a compact set) and achieves its minimum at a point ${\bf x}_0 (s)\in \heartsuit(D)$. Similarly, ${\bf x}\in\heartsuit(D) \mapsto \zeta_{\Omega({\bf x})} (s)$ achieves its maximum at a point which belongs to the interior of $\heartsuit(D)$, since a ball of radius $r< {\rm dist}(\heartsuit(D),\partial D)$ centered at the boundary of $\heartsuit(D)$ can be moved away from $\heartsuit(D)$ so as to increase $\zeta$. The statement concerning the determinant can be proved using the same arguments. \end{proof} {\bf Acknowledgments} The authors gratefully note that much of this work was done at the Centro de Giorgi in Pisa and while the second author was a visiting professor at the Universit\'e Fran\c{c}ois Rabelais, Tours, France. We also wish to thank the referee for pertinent comments. \def$'${$'$} \end{document}
\begin{document} \title{On conjectures of network distance measures by using graph spectra} \begin{abstract} In this note we resolve three conjectures from [M. Dehmer, S. Pickl, Y. Shi, G. Yu, \emph{New inequalities for network distance measures by using graph spectra}, Discrete Appl. Math. 252 (2019), 17--27] on the comparison of distance measures based on graph spectra, by constructing families of counterexamples and using computer search. \end{abstract} {\bf Key words}: Distance measures; Graph spectra \vskip 0.1cm {{\bf AMS Classifications:} 05C05.} \vskip 0.1cm \section{Introduction} Eigenvalues of various graph-theoretical matrices often reflect structural properties of graphs meaningfully \cite{cvetkovic1}. In fact, studying eigenvalues of graphs and then characterizing those graphs based on certain properties of their eigenvalues has a long-standing history, see \cite{cvetkovic1}. Eigenvalues have also been used for characterizing graphs quantitatively, in terms of defining graph complexity as well as similarity measures \cite{randic_2001_2,DeEm14}. An analysis revealed that eigenvalue-based graph measures tend to be quite unique, i.e., they are able to discriminate graphs uniquely \cite{dehmer_grabner_2012_2}. Some of the studied measures even outperformed measures from the family of the so-called Molecular ID Numbers, see \cite{dehmer_grabner_2012_2}. In this short paper, we further investigate an approach from \cite{DeEm14} where the authors explored inequalities for graph distance measures. These are based on topological indices defined via eigenvalues of the adjacency matrix, the Laplacian matrix and the signless Laplacian matrix. The graph distance measure is defined as $$ d_I(G, H) = d(I(G), I(H)) = 1 - e^{- \left( \frac{I(G) - I(H)}{\sigma} \right) ^2}, $$ where $G$ and $H$ are two graphs and $I(G)$ and $I(H)$ are the values of a topological index $I$ applied to $G$ and $H$, respectively. In this short note, we disprove three conjectures proposed in Dehmer et al. \cite{DePi19}, by constructing families of counterexamples and using computer search. \section{Main result} Let $G$ be a simple connected graph on $n$ vertices. Let $\lambda_1$ be the largest eigenvalue of the adjacency matrix of $G$, and $q_1$ be the largest eigenvalue of the Laplacian matrix of $G$. The authors of \cite{DePi19} proposed the following conjectures and stated that deeper results from matrix theory and from the theory of graph spectra are likely needed to prove them. \begin{conjecture} Let $T$ and $T'$ be two trees on $n$ vertices. Then $$ d_{q_1} (T, T') \geq d_{\lambda_1} (T, T'). $$ \end{conjecture} We disprove the above conjecture by providing a family of counterexamples for which $$ 0 = d_{q_1} (T, T') < d_{\lambda_1} (T, T'), $$ or in other words $q_1(T) = q_1(T')$ and $\lambda_1 (T) \neq \lambda_1 (T')$. In \cite{Os13}, the author proved that almost all trees have a cospectral mate with respect to the Laplacian matrix; the construction relies on the following result. \begin{theorem} Given fixed rooted graphs $(G, u)$ and $(H, v)$ and an arbitrary rooted graph $(K, w)$, if $(G, u)$ and $(H, v)$ are Laplacian (signless Laplacian, normalized Laplacian, adjacency) cospectrally rooted, then $G\cdot K$ and $H \cdot K$ are cospectral with respect to the Laplacian (signless Laplacian, normalized Laplacian, adjacency) matrix. \end{theorem} Starting from the Laplacian cospectrally rooted trees shown in Figure 1, one can construct many such pairs by choosing arbitrary trees $K$.
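Such pairs, and the counterexamples to the second conjecture below, are also easy to find by exhaustive search over small trees. The following minimal Python sketch illustrates how such a check can be carried out; it assumes the \texttt{networkx} and \texttt{numpy} libraries, the helper names are ours, and this is not the search code used in \cite{DePi19} (for larger $n$ one would generate the trees with \texttt{Nauty} instead). Since $d_I(T,T')$ is a strictly increasing function of $|I(T)-I(T')|$, it suffices to compare absolute differences of the invariants.

\begin{verbatim}
# Brute-force check of Conjectures 5.1 and 5.2 over all trees on n vertices.
import itertools
import networkx as nx
import numpy as np

def invariants(T):
    A = nx.to_numpy_array(T)
    L = np.diag(A.sum(axis=1)) - A
    lam1 = np.linalg.eigvalsh(A).max()      # adjacency spectral radius
    q1 = np.linalg.eigvalsh(L).max()        # Laplacian spectral radius
    F2 = sum(d * d for _, d in T.degree())  # degree powers F_2
    return lam1, q1, F2

def count_counterexamples(n, tol=1e-9):
    data = [invariants(T) for T in nx.nonisomorphic_trees(n)]
    c51 = c52 = 0
    for (l1, q1, F2), (l1b, q1b, F2b) in itertools.combinations(data, 2):
        if abs(q1 - q1b) < abs(l1 - l1b) - tol:   # d_{q_1} < d_{lambda_1}
            c51 += 1
        if abs(F2 - F2b) < abs(q1 - q1b) - tol:   # d_{F_2} < d_{q_1}
            c52 += 1
    return c51, c52

for n in range(4, 11):
    print(n, count_counterexamples(n))
\end{verbatim}

The counts obtained in this way can be compared with the tables reported below.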
\begin{figure} \caption{Rooted Laplacian cospectral trees.} \end{figure} By direct calculation, we get that these trees are not adjacency cospectral and therefore the adjacency spectral radii are different ($2.0684$ vs. $2.0743$). We reran the same simulation for trees using \texttt{Nauty} \cite{Mc81} on $n = 10$ vertices, as discussed in \cite{DePi19}. Based on this computer search, the smallest counterexample occurs on $n = 8$ vertices. \begin{center} \begin{tabular}{ |r|r|r| } \hline $n$ & tree pairs & Conjecture 5.1 counterexamples \\ \hline 4 & 3 & 0 \\ 5 & 6 & 0 \\ 6 & 21 & 0 \\ 7 & 66 & 0 \\ 8 & 276 & 2 \\ 9 & 1128 & 11 \\ 10 & 5671 & 89 \\ 11 & 27730 & 568 \\ 12 & 152076 & 3532 \\ 13 & 846951 & 21726 \\ 14 & 4991220 & 138080 \\ 15 & 29965411 & 877546 \\ 16 & 186640860 & 5725833 \\ \hline \end{tabular} \end{center} The degree powers, also known as the zeroth Randi\'{c} index, are defined as $$ F_k = \sum_{v \in V} \deg^k (v). $$ \begin{conjecture} Let $T$ and $T'$ be two trees on $n$ vertices. Then $$ d_{F_2} (T, T') \geq d_{q_1} (T, T'). $$ \end{conjecture} We reran the same computer simulation and found many pairs of trees for which $$ |F_2 (T) - F_2(T')| < |q_1(T) - q_1(T')| $$ holds, and consequently $d_{F_2} (T, T') < d_{q_1} (T, T')$. In particular, the smallest counterexample is on $n=6$ vertices and is shown in Figure 2: clearly $F_2(T) = F_2(T') = 20$ and $$4.214320 = q_1(T) < q_1(T') = 4.302776.$$ This disproves the above conjecture and corrects the results from \cite{DePi19}. \begin{center} \begin{tabular}{ |r|r|r| } \hline $n$ & tree pairs & Conjecture 5.2 counterexamples \\ \hline 4 & 3 & 0 \\ 5 & 6 & 0 \\ 6 & 21 & 1 \\ 7 & 66 & 5 \\ 8 & 276 & 28 \\ 9 & 1128 & 117 \\ 10 & 5671 & 577 \\ 11 & 27730 & 2672 \\ 12 & 152076 & 13805 \\ 13 & 846951 & 72801 \\ 14 & 4991220 & 405454 \\ 15 & 29965411 & 2312368 \\ 16 & 186640860 & 13713949 \\ \hline \end{tabular} \end{center} \begin{figure} \caption{Two trees with $F_2(T) = F_2(T')$ and $q_1(T) \neq q_1 (T')$.} \end{figure} To conclude, these results disprove Conjecture 5.1 and Conjecture 5.2 from \cite{DePi19}, while Conjecture 5.3, on the relationship between $\lambda_1$ and $F_2$ \cite{DePi19}, directly follows from these. \end{document}
\begin{document} \title[Locally harmonic Maass forms]{Locally harmonic Maass forms and the kernel of the Shintani lift} \author{Kathrin Bringmann} \address{Mathematical Institute\\University of Cologne\\ Weyertal 86-90 \\ 50931 Cologne \\Gammaermany} \email{[email protected]} \author{Ben Kane} \address{Mathematical Institute\\University of Cologne\\ Weyertal 86-90 \\ 50931 Cologne \\Gammaermany} \email{[email protected]} \author{Winfried Kohnen} \address{Mathematisches Institut\\Universit\"at Heidelberg\\ INF 288\\ 69210, Heidelberg\\ Germany} \email{[email protected]} \dedicatory{In memory of Marvin Knopp} \date{\today} \thanks{The research of the first author was supported by the Alfried Krupp Prize for Young University Teachers of the Krupp Foundation and by the Deutsche Forschungsgemeinschaft (DFG) Grant No. BR 4082/3-1} \subjclass[2010] {11F37, 11F11, 11F25, 11E16} \keywords{hyperbolic Poincar\'e series, harmonic weak Maass forms, cusp forms, lifting maps, Shimura lift, Shintani lift, rational periods, wall crossing} \begin{abstract} In this paper we define a new type of modular object and construct explicit examples of such functions. Our functions are closely related to cusp forms constructed by Zagier \cite{ZagierRQ} which played an important role in the construction by Kohnen and Zagier \cite{KohnenZagier} of a kernel function for the Shimura and Shintani lifts between half-integral and integral weight cusp forms. Although our functions share many properties in common with harmonic weak Maass forms, they also have some properties which strikingly contrast those exhibited by harmonic weak Maass forms. As a first application of the new theory developed in this paper, one obtains a new proof of the fact that the even periods of Zagier's cusp forms are rational as an easy corollary. \end{abstract} \maketitle \section{Introduction and statement of results} For an integer $k>1$ and a discriminant $D>0$, define \begin{equation}\lambdabel{eqn:fkDdef} f_{k,D}\left(\tau\right):=\frac{D^{k-\frac{1}{2}}}{\binom{2k-2}{k-1}\pi} \sum_{\substack{a,b,c\in \mathbb{Z}\\ b^2-4ac=D}} \left(a\tau^2+b\tau+c\right)^{-k}, \end{equation} where $\tau\in \mathbb{H}$. This function was introduced by Zagier \cite{ZagierRQ} in connection with the Doi-Naganuma lift (between modular forms and Hilbert modular forms) and lies in the space $S_{2k}$ of (classical, holomorphic) cusp forms of weight $2k$ for $\Gammaamma_1:={\text {\rm SL}}_2(\mathbb{Z})$. More recently, generalizations of $f_{k,D}$ (where the form in the denominator is no longer quadratic) as well as the case when $D<0$ (resulting in meromorphic modular forms) were elegantly investigated by Bengoechea in her Ph.D. thesis \cite{Bengoechea}. Katok \cite{Katok} also realized $f_{k,D}$ as a certain linear combination of hyperbolic Poincar\'e series whose original construction is due to Petersson \cite{Petersson}. A good overview on hyperbolic Poincar\'e series and their relationship with $f_{k,D}$ was given by Imamo{$\overline{\text{g}}$}lu and O'Sullivan \cite{ImOS} (see also \cite{DITrational}). The functions $f_{k,D}$ (and certain variations of them) play an important role in the theory of modular forms of half-integral weight. Indeed, as shown in \cite{KohnenZagier} and later in \cite{KohnenCoeff}, they are the Fourier coefficients of holomorphic kernel functions for the Shimura \cite{Shimura} (resp. Shintani \cite{Shintani}) lifts between half-integral and integral weight cusp forms. 
More precisely, for $\tau,z\in\mathbb{H}$, define \begin{equation}\lambdabel{eqn:Omegadef} \Omega\left(\tau,z\right):=\sum_{0<D\equiv 0,1\pmod{4}} f_{k,D}\left(\tau\right) e^{2\pi i Dz}. \end{equation} Then $\Omega$ is a modular form of weight $2k$ in the variable $\tau$ and weight $k+\frac{1}{2}$ in the variable $z$. Furthermore, integrating $\Omega$ against a cusp form $f$ of weight $2k$ (resp. $k+\frac{1}{2}$) with respect to the first (resp. second) variable is the Shintani (resp. Shimura) lift of $f$. In a different way, the function $f_{k,D}$ also give important examples of modular forms with rational periods. These were studied in \cite{KohnenZagierRational} and have appeared more recently in work of Duke, Imamo{$\overline{\text{g}}$}lu, and T\'oth \cite{DITrational}, where they were shown to be related to be the error to modularity of certain fascinating holomorphic functions which are defined via cycle integrals. We elaborate further upon the interrelation between their interesting work and the results in this paper in Section \textnormal{Re}f{sec:DIT}. This paper is the first in a series of papers introducing and investigating a new type of modular object. In this paper, we construct an infinite family of functions of this new type and prove that they both closely resemble and are connected to $f_{k,D}$ through differential operators which naturally occur in the theory of harmonic weak Maass forms (see Theorem \textnormal{Re}f{thm:xiDk-1}). The resulting functions also give a new explanation of the rationality of the even periods of $f_{k,D}$ for $k$ even (see Theorem \textnormal{Re}f{thm:ratperiod}). We expect that these new objects will have further important applications to the theory of modular forms. Before introducing these new modular objects, we first recall that a weight $2-2k$ \begin{it}harmonic weak Maass form\end{it} is a real analytic function $\mathcal{F}$ which satisfies weight $2-2k$ modularity, is annihilated by the weight $2-2k$ \begin{it}hyperbolic Laplacian\end{it} $$ \Delta_{2-2k}:=-y^2\left(\frac{\partial^2}{\partial x^2} +\frac{\partial^2}{\partial y^2}\right) +i\left(2-2k\right)y\left(\frac{\partial}{\partial x} + i \frac{\partial}{\partial y}\right) $$ and has at most exponential growth at $i\infty$. Here and throughout $\tau\in \mathbb{H}$ is written as $\tau=x+iy$, $x,y\in \mathbb{R}$ with $y>0$. The theory of harmonic weak Maass forms has proven useful in many areas including combinatorics, number theory, physics, Lie theory, probability theory, and knot theory. To name a few examples, harmonic weak Maass forms have played a role in understanding Ramanujan's mock theta functions \cite{ZwegersThesis}, in proving asymptotics and congruences in partition theory \cite{BringmannOnoAnnals,BruinierOno,Rhoades}, in relating character formulas of Kac and Wakimoto \cite{KacWakimoto} to automorphic forms \cite{BringmannOnoKac,Folsom}, in the study of metastability thresholds for bootstrap percolation models \cite{AndrewsPercolation}, in the quantum theory of black holes \cite{Quantum,Manschot}, in studying the elliptic genera of $K3$ surfaces \cite{EguchiHikami,ManschotMoore}, and in the study of central values of $L$-series and their derivatives \cite{BruinierOnoAnnals}. 
Bruinier and Funke \cite{BruinierFunke} have shown that for every $f\in S_{2k}$, there exists a weight $2-2k$ harmonic weak Maass form $\mathcal{F}$ which is related to $f$ through the anti-holomorphic operator $\xi_{2-2k}:=2iy^{2-2k} \overline{\frac{d}{d\overline{\tau}}}$ by $\xi_{2-2k}\left(\mathcal{F}\right)=f$. Such an $\mathcal{F}$ may be constructed via parabolic Poincar\'e series (for the foundations of this approach, see \cite{Fay}). Although an algorithm exists to construct $\mathcal{F}$ for a given form, this approach would not seem to yield a universal treatment of all $f_{k,D}$. A more universal approach was undertaken by Duke, Imamo{$\overline{\text{g}}$}lu, and T\'oth \cite{DITrational}, who constructed a natural holomorphic function $F_k(\tau,Q)$ coefficient-wise via cycle integrals and related it to $f_{k,D}$, which we further explain in Section \textnormal{Re}f{sec:DIT}. However, their construction is (coefficient-wise) via cycle integrals and does not seem to yield an immediate connection with hyperbolic Poincar\'e series. Therefore, even though we know that \rm a lift of $f_{k,D}$ exists and a related harmonic Maass form was constructed in \cite{DITrational}, it is still desirable to constuct a particular lift which resembles the shape \eqref{eqn:fkDdef} and is also related to hyperbolic Poincar\'e series. The construction of such a function analogous to \eqref{eqn:fkDdef} leads to a new class of automorphic objects which are the topic of this paper. To describe the resulting object, we first require some notation. Let \begin{equation}\lambdabel{eqn:psidef} \psi\left(v\right):=\frac{1}{2}\beta\left(v;k-\frac{1}{2},\frac{1}{2}\right) \end{equation} be a special value of the incomplete $\beta$-function, which is defined for $s,w\in \mathbb{C}$ satisfying $\textnormal{Re}\left(s\right)$, $\textnormal{Re}\left(w\right)>0$ by $\beta\left(v;s,w\right):=\int_{0}^v u^{s-1}\left(1-u\right)^{w-1}du$ (for some properties, see p. 263 and p. 944 of \cite{AS}). The function $\psi$ may be written in a variety of forms, but we choose this representation because it generalizes to other weights (see \eqref{eqn:varphibeta} for another useful representation). Denote the set of integral binary quadratic forms $[a,b,c](X,Y):=aX^2+bXY+cY^2$ of discriminant $D$ by $\mathbb{Q}D:=\left\{ [a,b,c]: b^2-4ac=D,\ a,b,c\in\mathbb{Z}\right\}$. Since we want the occurring cycle integrals to be geodesics, we restrict in the following to the case where $D$ is a non-square discriminant. For $\tau\in \mathbb{H}$ we set \begin{equation}\lambdabel{eqn:PPkDdef} \mathcal{F}_{1-k,D}\left(\tau\right):=\frac{D^{\frac{1}{2}-k}}{\binom{2k-2}{k-1}\pi}\sum_{Q=\left[a,b,c\right]\in \mathbb{Q}D}\operatorname{sgn}\left(a\left|\tau\right|^2 + bx +c\right) Q\left(\tau,1\right)^{k-1} \psi\left(\frac{Dy^2}{\left|Q\left(\tau,1\right)\right|^2_{\phantom{-}}}\right). \end{equation} \begin{remark} After presenting the results of this paper, Zagier has informed us that he has independently investigated (in unpublished work) examples similar to \eqref{eqn:PPkDdef} for some small $k$ (in cases where there are no cusp forms in $S_{2k}$). In these cases, as we see in Theorem \textnormal{Re}f{thm:PPkDexpansion}, the function \eqref{eqn:PPkDdef} is locally equal to a polynomial. Zagier's investigation of these functions was initiated by a question posed by physicists. It would be interesting to investigate what our new theory yields in physics. 
After viewing a preliminary version of this paper, Bruinier pointed out to the authors that his Ph.D. student Martin H\"ovel \cite{Hoevel} is also studying a related function in his upcoming thesis. H\"ovel's construction appears to have connections to the case when $k=1$ (i.e., weight $0$) which is excluded in our study. \end{remark} Before relating $\mathcal{F}_{1-k,D}$ and $f_{k,D}$, we investigate the functions $\mathcal{F}_{1-k,D}$ themselves a bit closer. We put \begin{equation}\lambdabel{eqn:EDdef} E_D:=\left\{ \tau=x+iy\in \mathbb{H}: \exists a,b,c\in \mathbb{Z},\ b^2-4ac=D,\ a\left|\tau\right|^2+bx+c=0\right\}. \end{equation} The group $\Gammaamma_1$ acts on this set, and $E_D$ is a union of closed geodesics (Heegner cycles) projecting down to finitely many on the compact modular curve. The set $E_D$ naturally partitions $\mathbb{H}$ into (open) connected components (see Lemma \textnormal{Re}f{lem:neighbor}). Owing to the sign in the definition of $\mathcal{F}_{1-k,D}$, the functions $\mathcal{F}_{1-k,D}$ exhibit discontinuities when crossing from one connected component to another, with the value of the limits from either side differing by a polynomial. The functions $\mathcal{F}_{1-k,D}$ hence exhibit what is known as wall crossing behavior. Wall crossing behavior has recently been extensively studied due to its appearance in the quantum theory of black holes in physics (see e.g. \cite{Quantum}). Although $\mathcal{F}_{1-k,D}$ is not a harmonic weak Maass form, it exhibits many similar properties. Outside of the exceptional set $E_D$, the functions $\mathcal{F}_{1-k,D}$ are locally annihilated by $\Delta_{2-2k}$ and satisfy weight $2-2k$ modularity. We hence call them \begin{it}locally harmonic Maass forms\end{it} with exceptional set $E_D$ (see Section \textnormal{Re}f{sec:prelim} for a full definition). \begin{theorem}\lambdabel{thm:converge} For $k>1$ and $D>0$ a non-square discriminant, the function $\mathcal{F}_{1-k,D}$ is a weight $2-2k$ locally harmonic Maass form with exceptional set $E_D$. \end{theorem} Although $\mathcal{F}_{1-k,D}$ exhibits some behavior which is similar to harmonic weak Maass forms, it also has some other surprising properties. The differential operator $\mathcal{D}^{2k-1}$ (where $\mathcal{D}:=\frac{1}{2\pi i} \frac{d }{d \tau}$) also plays a central role in the theory of harmonic weak Maass forms (see e.g., \cite{BruinierOnoRhoades}). However, a harmonic weak Maass form cannot map to a cusp form under both $\xi_{2-2k}$ and $\mathcal{D}^{2k-1}$, as is well known. Due to discontinuities along the exceptional set $E_D$, our function $\mathcal{F}_{1-k,D}$ is actually allowed to (locally) map to a constant multiple of $f_{k,D}$ under both operators. \begin{theorem}\lambdabel{thm:xiDk-1} Suppose that $k>1$ and $D>0$ is a non-square discriminant. Then for every $\tau\in \mathbb{H}\setminusE_D$, the function $\mathcal{F}_{1-k,D}$ satisfies \begin{eqnarray*} \xi_{2-2k}\left(\mathcal{F}_{1-k,D}\right)\left(\tau\right) &=& D^{\frac{1}{2}-k}f_{k,D}\left(\tau\right),\\ \mathcal{D}^{2k-1}\left(\mathcal{F}_{1-k,D}\right)\left(\tau\right) &=& -\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}}D^{\frac{1}{2}-k} f_{k,D}\left(\tau\right). \end{eqnarray*} \end{theorem} \begin{remark} The excluded case $k=1$ of Theorem \textnormal{Re}f{thm:xiDk-1} is a consequence of results in the thesis of H\"ovel \cite{Hoevel}. 
\end{remark} The aforementioned discontinuities of $\mathcal{F}_{1-k,D}$ along $E_D$ are captured by very simple functions, which are given piecewise as polynomials. The functions $\mathcal{F}_{1-k,D}$ are formed by adding these (piecewise) polynomials to real analytic functions which induce the image of $\mathcal{F}_{1-k,D}$ under the operators $\xi_{2-2k}$ and $\mathcal{D}^{2k-1}$ given in Theorem \textnormal{Re}f{thm:xiDk-1}. Indeed, in the theory of harmonic weak Maass forms, the function $f_{k,D}$ has a natural (real analytic) preimage under $\xi_{2-2k}$ (resp. $\mathcal{D}^{2k-1}$) called the non-holomorphic (resp. holomorphic) Eichler integral. To be more precise, as in \cite{Zagier}, for $f\left(\tau\right) = \sum_{n=1}^{\infty} a_n q^n\in S_{2k}$ ($q=e^{2\pi i \tau}$) we define the \begin{it}non-holomorphic Eichler integral \cite{Eichler} of $f$\end{it} by \begin{equation}\lambdabel{eqn:f*def} f^*\left(\tau\right):=\left(2i\right)^{1-2k}\int_{-\overline{\tau}}^{i\infty} f^{c}\left(z\right)\left(z+\tau\right)^{2k-2} dz, \end{equation} where $f^c\left(\tau\right):=\overline{f\left(-\overline{\tau}\right)}$ is the cusp form whose Fourier coefficients are the conjugates of the coefficients of $f$. We likewise define the \begin{it}(holomorphic) Eichler integral of $f$\end{it} by \begin{equation}\lambdabel{eqn:Eichdef} \mathcal{E}_{f}\left(\tau\right):=\sum_{n=1}^{\infty} \frac{a_n}{n^{2k-1}} q^n. \end{equation} Eichler \cite{Eichler} and Knopp \cite{Knopp} independently showed that the error to modularity of Eichler integrals are polynomials of degree at most $2k-2$ whose coefficients are related to the periods of the corresponding cusp forms. Hence, combining Theorem \textnormal{Re}f{thm:xiDk-1} with the wall crossing behavior mentioned earlier in the introduction, we are able to obtain a certain type of expansion for $\mathcal{F}_{1-k,D}$. \begin{theorem}\lambdabel{thm:PPkDexpansion} Suppose that $k>1$, $D>0$ is a non-square discriminant, and $\mathcal{C}$ is one of the connected components partitioned by $E_D$. Then there exists a polynomial $P_{\mathcal{C}}$ of degree at most $2k-2$ such that for all $\tau\in \mathcal{C}$, $$ \mathcal{F}_{1-k,D}\left(\tau\right)=P_{\mathcal{C}}\left(\tau\right) + D^{\frac{1}{2}-k}f_{k,D}^*\left(\tau\right) - D^{\frac{1}{2}-k}\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}}\mathcal{E}_{f_{k,D}}\left(\tau\right). $$ \end{theorem} \begin{remarks} \noindent \noindent \begin{enumerate} \item The local polynomial can be explicitly determined using \eqref{PCexplicit}. \item According to \cite{KohnenThesis}, one can obtain an exact formula for the coefficients of $f_{k,D}$ in terms of infinite sums involving Sali\'e sums and $J$-Bessel functions. For more details of the proof, see Theorem 3.1 of \cite{Parson}. \end{enumerate} \end{remarks} The polynomials $P_{\mathcal{C}}$ occurring in Theorem \textnormal{Re}f{thm:PPkDexpansion} are closely related to the even part of the period polynomial of $f_{k,D}$, which we denote by $r^+\left(f_{k,D};X\right)$ (see Section \textnormal{Re}f{sec:periodpoly} for a full definition). Kohnen and Zagier \cite{KohnenZagierRational} computed this even part in order to prove rationality of the even periods of $f_{k,D}$. Supplementary to the recent appearance of $r^+\left(f_{k,D};X\right)$ in the theory of harmonic weak Maass forms \cite{DITrational}, the polynomials $P_{\mathcal{C}}$ give a new perspective on the following theorem of Kohnen and Zagier \cite{KohnenZagierRational}. 
\begin{theorem}\lambdabel{thm:ratperiod} Suppose that $D>0$ is a non-square discriminant and $k>1$ is even. Then the even part of the period polynomial of $f_{k,D}$ satisfies \begin{equation}\lambdabel{eqn:ratperiod} r^+\left(f_{k,D};X\right)\equiv 2\sum_{\substack{\left[a,b,c\right]\in\mathbb{Q}D \\ a<0<c}}\left(aX^2+bX+c\right)^{k-1}\ \left(\bmod{\ \left(X^{2k-2}-1\right)}\right). \end{equation} \end{theorem} \begin{remarks} \noindent \begin{enumerate} \item By the congruence we mean that the left and right hand sides differ by a constant multiple of $X^{2k-2}-1$. The theorem of the third author and Zagier explicitly supplies the implied constant, which is a ratio of Bernoulli numbers times a certain class number. We also note that the sum in \eqref{eqn:ratperiod} is finite, which follows from reduction theory. \item It would be interesting to further investigate the relation between the (modular completion of the) holomorphic functions in \cite{DITrational} and the functions $\mathcal{F}_{1-k,D}$. \item The right-hand side of \eqref{eqn:ratperiod} precisely runs over those $Q\in \mathbb{Q}D$ with $a<0$ for which the corresponding semi-circles $S_Q$ contain $0$ in their interior. The appearance of these binary quadratic forms is explained by our proof of Theorem \textnormal{Re}f{thm:ratperiod}. In particular, the polynomial part $P_{\mathcal{C}}$ in Theorem \textnormal{Re}f{thm:PPkDexpansion} may be computed by comparing the polynomials in adjacent connected components. From this perspective, one obtains a contribution to $P_{\mathcal{C}}$ by crossing precisely those $S_Q$ which circumscribe $\mathcal{C}$. For the connected component containing $0$, this is precisely those $S_Q$ which have $0$ in their interior. \end{enumerate} \end{remarks} The Hecke algebra naturally decomposes $S_{2k}$ into one dimensional simultaneous eigenspaces for all Hecke operators. The action of the Hecke operators on $f_{k,D}$ is easily computed and strikingly simple \cite{Parson}, namely, for a prime $p$ $$ f_{k,D}\Big|_{2k}T_p = f_{k,Dp^2}+p^{k-1}\left(\frac{D}{p}\right)f_{k,D}+p^{2k-1}f_{k,\frac{D}{p^2}}, $$ where $T_p$ is the $p$-th Hecke operator acting on translation invariant functions (see \eqref{eqn:Heckedef} for a definition). Note that the right hand side of the above formula reflects the action of the half-integral weight Hecke operator $T_{p^2}$ (when the subscript $D$ is taken to denote the $D$-th coefficient). This is no accident, owing to the fact that $f_{k,D}$ is the $D$-th Fourier coefficient of the kernel function $\Omega$ (defined in \eqref{eqn:Omegadef}) in the $z$ variable and the Hecke operators commute with the Shimura and Shintani lifts. This connection between the integral and half-integral weight Hecke operators on the functions $f_{k,D}$ extends to the functions $\mathcal{F}_{1-k,D}$. \begin{theorem}\lambdabel{thm:Hecke} Suppose that $k>1$, $D>0$ is a non-square discriminant, and $p$ is a prime. Then \begin{equation}\lambdabel{eqn:Hecke} \mathcal{F}_{1-k,D}\Big|_{2-2k}T_p = \mathcal{F}_{1-k,Dp^2} + p^{-k}\left(\frac{D}{p}\right)\mathcal{F}_{1-k,D} +p^{1-2k}\mathcal{F}_{1-k,\frac{D}{p^2}}, \end{equation} where $\mathcal{F}_{1-k,\frac{D}{p^2}}=0$ if $p^2\nmid D$. 
\end{theorem} \begin{remark} The fact that the right hand side of \eqref{eqn:Hecke} looks like the formula for the half-integral weight $\frac{3}{2}-k$ Hecke operator hints towards a connection between integral weight $2-2k$ and half-integral weight $\frac{3}{2}-k$ objects, mirroring the behavior for weight $2k$ and $k+\frac{1}{2}$ cusp forms coming from the Shintani and Shimura lifts. In light of this, there could be some relation with the results in \cite{DIT} in the case $k=1$, which is not considered in this paper. \end{remark} The paper is organized as follows. In Section \textnormal{Re}f{sec:prelim} we give some background and a formal definition of locally harmonic Maass forms. In Section \textnormal{Re}f{sec:hyperbolic} we explain the interpretation of $\mathcal{F}_{1-k,D}$ as a (linear combination of) hyperbolic Poincar\'e series. We next show compact convergence in Section \textnormal{Re}f{sec:converge}. Section \textnormal{Re}f{sec:exceptional} is devoted to a discussion about the exceptional set $E_D$. Section \textnormal{Re}f{sec:xiDk-1} is devoted to proving Theorem \textnormal{Re}f{thm:xiDk-1}. The expansion given in Theorem \textnormal{Re}f{thm:PPkDexpansion} is proven in Section \textnormal{Re}f{sec:expansion}. Combining this with the results of the previous sections, we conclude Theorem \textnormal{Re}f{thm:converge}. In Section \textnormal{Re}f{sec:periodpoly} we connect the polynomials $P_{\mathcal{C}}$ from Theorem \textnormal{Re}f{thm:PPkDexpansion} to the period polynomial of $f_{k,D}$ in order to prove Theorem \textnormal{Re}f{thm:ratperiod}. We conclude the paper with the proof of Theorem \textnormal{Re}f{thm:Hecke} in Section \textnormal{Re}f{sec:Hecke} followed by a discussion about the interrelation with the results of \cite{DITrational} in Section \textnormal{Re}f{sec:DIT}. \section{Harmonic weak Maass forms and locally harmonic Maass forms}\lambdabel{sec:prelim} In this section, we recall the definition of harmonic weak Maass forms and introduce a formal definition of locally harmonic Maass forms. A good background reference for harmonic weak Maass forms is \cite{BruinierFunke}. As usual, we let $|_{2k}$ denote the \begin{it}weight $2k\in 2\mathbb{Z}$ slash-operator\end{it}, defined for $f:\mathbb{H}\to \mathbb{C}$ and $\gammaamma=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in \Gammaamma_1$ by $$ f\Big|_{2k}\gammaamma\left(\tau\right):=\left(c\tau+d\right)^{-2k} f\left(\gammaamma \tau\right), $$ where $\gammaamma\tau:=\frac{a\tau+b}{c\tau+d}$ is the action by fractional linear transformations. For $k\in \mathbb{N}$, a \begin{it}harmonic weak Maass form\end{it} $\mathcal{F}:\mathbb{H}\to\mathbb{C}$ of weight $2-2k$ for $\Gammaamma_1$ is a real analytic function satisfying: \noindent \noindent \begin{enumerate} \item $\mathcal{F}|_{2-2k} \gammaamma\left(\tau\right) = \mathcal{F}\left(\tau\right)$ for every $\gammaamma\in \Gammaamma_1$, \item $\Delta_{2-2k}\left(\mathcal{F}\right)=0$, \item $\mathcal{F}$ has at most linear exponential growth at $i\infty$. \end{enumerate} As noted in the introduction, the differential operators $\xi_{2-2k}$ and $\mathcal{D}^{2k-1}$ naturally occur in the theory of harmonic weak Maass forms. More precisely, for a harmonic weak Maass form $\mathcal{F}$, one has $\xi_{2-2k}\left(\mathcal{F}\right), \mathcal{D}^{2k-1}\left(\mathcal{F}\right)\in M_{2k}^!$, the space of weight $2k$ weakly holomorphic modular forms (i.e., those meromorphic modular forms whose poles occur only at the cusps). 
It is well known that the operator $\xi_{2-2k}$ commutes with the group action of ${\text {\rm SL}}_2\left(\mathbb{R}\right)$. Moreover, by Bol's identity (\cite{Poincare}, see also \cite{Eichler} or \cite{BruinierOnoRhoades}, for a more modern usage), the operator $\mathcal{D}^{2k-1}$ also commutes with the group action of ${\text {\rm SL}}_2\left(\mathbb{R}\right)$. Furthermore, a direct calculation shows that \begin{equation}\lambdabel{eqn:Deltaxigen} \Delta_{2-2k}=-\xi_{2k}\xi_{2-2k}. \end{equation} Each harmonic weak Maass form $\mathcal{F}$ naturally splits into a holomorphic part and a non-holomorphic part. Indeed, in the special case that $\xi_{2-2k}\left(\mathcal{F}\right)=f\in S_{2k}$ (which is the only case relevant to this paper), one can show that $\mathcal{F}-f^*$ is holomorphic on $\mathbb{H}$, where $f^*$ was defined in \eqref{eqn:f*def}. We hence call $f^*$ the \begin{it}non-holomorphic part\end{it} of $\mathcal{F}$ and $\mathcal{F}-f^*$ the \begin{it}holomorphic part.\end{it} While the holomorphic part is obviously annihilated by $\xi_{2-2k}$, an easy calculation shows that the non-holomorphic part is annihilated by $\mathcal{D}^{2k-1}$. From this one also immediately sees that $\mathcal{D}^{2k-1}\left(\mathcal{F}\right)=\mathcal{D}^{2k-1}\left(\mathcal{F}-f^*\right)$ is holomorphic. We next define the new automorphic objects which we investigate in this paper. A weight $2-2k$ \begin{it}locally harmonic Maass form\end{it} for $\Gammaamma_1$ with \begin{it}exceptional set\end{it} $E_D$ (defined in \eqref{eqn:EDdef}) is a function $\mathcal{F}:\mathbb{H}\to \mathbb{C}$ satisfying: \noindent \noindent \begin{enumerate} \item For every $\gammaamma\in \Gammaamma_1$, $\mathcal{F}\big|_{2-2k}\gammaamma = \mathcal{F}$. \item For every $\tau\in \mathbb{H} \setminus E_D$, there is a neighborhood $N$ of $\tau$ in which $\mathcal{F}$ is real analytic and $\Delta_{2-2k}\left(\mathcal{F}\right)=0$. \item For $\tau\inE_D$ one has $$ \mathcal{F}\left(\tau\right) =\frac{1}{2}\lim_{w\to 0^+}\left(\mathcal{F}\left(\tau+iw\right) + \mathcal{F}\left(\tau-iw\right)\right) \qquad (w\in \mathbb{R}). $$ \item The function $\mathcal{F}$ exhibits at most polynomial growth towards $i\infty$. \end{enumerate} Since the theory of harmonic weak Maass forms has proven so fruitful, it might be interesting to further investigate the properties of functions in the space of locally harmonic Maass forms. \section{Locally harmonic Maass forms and hyperbolic Poincar\'e series}\lambdabel{sec:hyperbolic} In this section, we define Petersson's more general hyperbolic Poincar\'e series \cite{Petersson}, which span the space $S_{2k}$, and describe their connection to \eqref{eqn:fkDdef}. In addition, we define a weight $2-2k$ locally harmonic hyperbolic Poincar\'e series which basically maps to Petersson's hyperbolic Poincar\'e series under both $\xi_{2-2k}$ and $\mathcal{D}^{2k-1}$ (see Proposition \textnormal{Re}f{prop:xiDk-1}). Suppose that $D>0$ is a non-square discriminant and $\mathcal{A}\subseteq \mathbb{Q}D$ is a \begin{it}narrow equivalence class\end{it} of integral binary quadratic forms (that is, there exists $Q_0\in\mathbb{Q}D$ such that $\mathcal{A}=:\left[Q_0\right]$ consists of precisely those $Q\in \mathbb{Q}D$ which are $\Gammaamma_1$-equivalent to $Q_0$). One defines \begin{equation}\lambdabel{eqn:fkAdef} f_{k,D,\narrow}\left(\tau\right):=\frac{\left(-1\right)^k D^{k-\frac{1}{2}}}{\binom{2k-2}{k-1}\pi}\sum_{\left[a,b,c\right]\in \mathcal{A}} \left(a\tau^2+b\tau+c\right)^{-k}\in S_{2k}. 
\end{equation} These functions were also studied by Kohnen and Zagier \cite{KohnenZagierRational} and Kramer \cite{Kramer} proved that they generate the entire space $S_{2k}$. In the spirit of \eqref{eqn:PPkDdef}, we define \begin{equation}\lambdabel{eqn:hypPoincMaass3} f_{k,D,\narrow}M\left(\tau\right):=\frac{\left(-1\right)^{k}D^{\frac{1}{2}-k}}{\binom{2k-2}{k-1}\pi}\sum_{Q=\left[a,b,c\right]\in \mathcal{A}} \operatorname{sgn}\left(a\left|\tau\right|^2 + bx +c\right) Q\left(\tau,1\right)^{k-1} \psi\left(\frac{Dy^2}{\left|Q\left(\tau,1\right)\right|^2_{\phantom{-}}}\right), \end{equation} where $\psi$ was given in \eqref{eqn:psidef}. We see in Theorem \textnormal{Re}f{thm:localMaass} that $f_{k,D,\narrow}M$ is a locally harmonic Maass form with exceptional set $E_D$. As alluded to in the introduction, \eqref{eqn:fkAdef} is not the definition given by Petersson (in fact, the definition \eqref{eqn:fkAdef} was given in \cite{KohnenCoeff,KohnenZagier}). Since we make use of Petersson's definition repeatedly throughout the paper, we now describe Petersson's construction and give the link between the two definitions. Let $\eta, \eta'$ be real conjugate \begin{it}hyperbolic fixed points\end{it} of ${\text {\rm SL}}_2\left(\mathbb{R}\right)$ (that is, there exists a matrix $\gammaamma\in {\text {\rm SL}}_2\left(\mathbb{R}\right)$ fixing $\eta$ and $\eta'$). We call such a pair of points a \begin{it}hyperbolic pair.\end{it} Denote the group of matrices in $\Gammaamma_1$ fixing $\eta$ and $\eta'$ by $\Gammaamma_{\eta}$. The group $\Gammaamma_{\eta}/\left\{\pm I\right\}$ is an infinite cyclic subgroup of $\Gammaamma_1/\left\{\pm I\right\}$ and is generated by $$ g_{\eta}:=\pm \left(\begin{matrix} \frac{t+bu}{2}& cu\\ -au & \frac{t- bu}{2}\end{matrix}\right), $$ where $\eta=\frac{-b\pm \sqrt{b^2-4ac}}{2a}$ and $t,u\in \mathbb{N}$ give the smallest solution to the Pell equation $t^2-D u^2=4$. For $Q=\left[a,b,c\right]$, the subgroup $\Gammaamma_{\eta}$ furthermore preserves the geodesic \begin{equation}\lambdabel{eqn:SQdef} S_Q:=\left\{ \tau\in \mathbb{H}: a\left|\tau\right|^2 + b\textnormal{Re}\left(\tau\right) + c =0\right\}, \end{equation} which is important in our study since the exceptional set $E_D$ (defined in \eqref{eqn:EDdef}) decomposes as $E_D=\bigcup_{Q\in \mathbb{Q}D} S_Q$. These semi-circles have played an important role in the interrelation between integral and half-integral weight modular forms \cite{KohnenCoeff,Shintani}. Let $A\in {\text {\rm SL}}_2\left(\mathbb{R}\right)$ satisfy $A\eta=\infty$ and $A\eta'=0$. We note that one may choose \begin{equation}\lambdabel{eqn:AQdef} A=A_{\eta}:= \pm \frac{1}{\sqrt{\left|\eta-\eta'\right|}} \left( \begin{matrix}1 & -\eta'\\ -\operatorname{sgn}\left(\eta-\eta'\right) &\operatorname{sgn}\left(\eta-\eta'\right) \eta\end{matrix}\right)\in{\text {\rm SL}}_2\left(\mathbb{R}\right). \end{equation} Since $g_{\eta}$ preserves the semi-circle $S_Q$, $A_{\eta}g_{\eta} A_{\eta}^{-1}$ is a scaling matrix $\left(\begin{smallmatrix} \zeta & 0 \\ 0 & \zeta^{-1}\end{smallmatrix}\right)$ for some $\zeta\in \mathbb{R}$ (see \cite{ImOS} for further details and helpful diagrams). 
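To make these notions concrete, consider the smallest non-square discriminant (this example is included purely for illustration and is not used later). For $D=5$ and $Q_0=\left[1,1,-1\right]$ we have $\eta=\frac{-1+\sqrt{5}}{2}$ and $\eta'=\frac{-1-\sqrt{5}}{2}$, the smallest solution to $t^2-5u^2=4$ is $\left(t,u\right)=\left(3,1\right)$, and hence
$$
g_{\eta}=\pm\left(\begin{matrix} 2 & -1\\ -1 & 1\end{matrix}\right),
$$
whose fixed points are precisely the roots of $Q_0\left(\tau,1\right)=\tau^2+\tau-1$. The corresponding geodesic $S_{Q_0}=\left\{\tau\in\mathbb{H}:\left|\tau\right|^2+\textnormal{Re}\left(\tau\right)-1=0\right\}$ is the semi-circle centered at $-\frac{1}{2}$ of radius $\frac{\sqrt{5}}{2}$, and \eqref{eqn:AQdef} gives $A_{\eta}=\pm 5^{-\frac{1}{4}}\left(\begin{matrix}1 & \frac{1+\sqrt{5}}{2}\\ -1 & \frac{\sqrt{5}-1}{2}\end{matrix}\right)$, which indeed maps $\eta\mapsto\infty$ and $\eta'\mapsto 0$.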
For $h_k\left(\tau\right):=\tau^{-k}$ (the constant term of the hyperbolic expansion of a modular form), we now define Petersson's classical hyperbolic Poincar\'e series \cite{Petersson} \begin{equation}\lambdabel{eqn:hypPoinc} f_{k,D,\narrow}eta\left(\tau\right):=\sum_{\gammaamma\in \Gammaamma_{\eta}\backslash \Gammaamma_1} h_k\Big|_{2k} A\gammaamma\left(\tau\right), \end{equation} which converges compactly for $k>1$. By construction, $f_{k,D,\narrow}eta$ satisfies weight $2k$ modularity and is holomorphic. Petersson proved that indeed $f_{k,D,\narrow}eta$ is a cusp form and it was later shown that \begin{equation}\lambdabel{eqn:PPetaPP} f_{k,D,\narrow}eta=\binom{2k-2}{k-1}\pi D^{\frac{1-k}{2}}f_{k,D,\narrow} \end{equation} for $\mathcal{A}=\left[Q_0\right]$, where $Q_0$ has roots $\eta$, $\eta'$ \cite{Katok}. We move on to our construction of a weight $2-2k$ hyperbolic Poincar\'e series. Define \begin{equation}\lambdabel{eqn:varphidef} \varphi\left(v\right):=\int_{0}^{v}\sin\left(u\right)^{2k-2}du. \end{equation} Noting that $$ \left|a\tau^2+b\tau+c\right|^2 = Dy^2+\left(a\left|\tau\right|^2+bx+c\right)^2, $$ we see that $$ \arcsin\left(\frac{\sqrt{D}y}{\left|a\tau^2+b\tau+c\right|}\right) =\arctan\left|\frac{\sqrt{D}y}{a\left|\tau\right|^2+bx+c}\right|. $$ Therefore, using the fact that $\cos\left(\theta\right)\gammaeq 0$ for $0\leq \theta\leq \frac{\pi}{2}$, the change of variables $u=\sin\left(\theta\right)^2$ in the definition of the incomplete $\beta$-function yields (recall definition \eqref{eqn:psidef}) \begin{equation}\lambdabel{eqn:varphibeta} \psi\left(\frac{Dy^2}{\left|Q\left(\tau,1\right)\right|^2_{\phantom{-}}}\right) = \frac{1}{2}\beta\left(\frac{Dy^2}{|a\tau^2+b\tau+c|^2};k-\frac{1}{2},\frac{1}{2}\right) = \varphi\left(\arctan\left|\frac{\sqrt{D}y}{a\left|\tau\right|^2+bx+c}\right|\right), \end{equation} where we understand the arctangent to be equal to $\frac{\pi}{2}$ if $a\left|\tau\right|^2+bx+c=0$. Following our construction in the introduction, we set \begin{equation}\lambdabel{eqn:varphi*def} \widehat{\varphi}\left(\tau\right):= \tau^{k-1} \operatorname{sgn}\left(x\right) \varphi\left(\arctan\left|\frac{y}{x}\right|\right). \end{equation} We now define the weight $2-2k$ \begin{it}locally harmonic hyperbolic Poincar\'e series\end{it} by \begin{equation}\lambdabel{eqn:hypPoincMaass} f_{k,D,\narrow}Meta\left(\tau\right):=\sum_{\gammaamma\in \Gammaamma_{\eta}\backslash \Gammaamma_1} \widehat{\varphi}\Big |_{2-2k} A\gammaamma\left(\tau\right). \end{equation} We show in Proposition \textnormal{Re}f{prop:converge} that $f_{k,D,\narrow}Meta$ converges compactly for $k>1$. We want to show that $f_{k,D,\narrow}Meta$ and $f_{k,D,\narrow}M$ are connected in a way which is similar to the relation \eqref{eqn:PPetaPP} between $f_{k,D,\narrow}eta$ and $f_{k,D,\narrow}$. For a hyperbolic pair $\eta, \eta'\in \mathbb{R}$ with generator $g_{\eta}=\left(\begin{smallmatrix}\alpha&\beta\\ \gammaamma&\delta\end{smallmatrix}\right)$ of $\Gammaamma_{\eta}$, chosen so that $\operatorname{sgn}\left(\gammaamma\right)=\operatorname{sgn}\left(\eta-\eta'\right)$, we define $$ Q_{\eta}(\tau,w):= \gammaamma \tau^2+\left(\delta-\alpha\right) \tau w -\beta w^2. $$ Conversely, for $Q=[a,b,c]\in\mathbb{Q}D$, we choose the roots $\eta_Q=\frac{-b+\sqrt{D}}{2a}$, $\eta_Q'=\frac{-b-\sqrt{D}}{2a}$ and use the fact that $Q=Q_{\eta_Q}$ to obtain a correspondence. Note that $\operatorname{sgn}(\eta_Q-\eta_Q')=\operatorname{sgn}(a)$. 
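Continuing the illustration with $D=5$ and $Q_0=\left[1,1,-1\right]$ (again only as a consistency check): choosing the sign of $g_{\eta}$ so that its lower-left entry has the same sign as $\eta-\eta'$, that is $g_{\eta}=\left(\begin{smallmatrix}-2&1\\ 1&-1\end{smallmatrix}\right)$, one computes $Q_{\eta}\left(\tau,w\right)=\tau^2+\tau w-w^2$, recovering $Q_0$; conversely, $\eta_{Q_0}=\frac{-1+\sqrt{5}}{2}=\eta$ and $\operatorname{sgn}\left(\eta_{Q_0}-\eta_{Q_0}'\right)=1=\operatorname{sgn}\left(a\right)$, as claimed.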
We furthermore define $A_{Q}:=A_{\eta_Q}$, where $A_{\eta}$ was defined in \eqref{eqn:AQdef}. For $Q\in \mathbb{Q}D$, we denote the action of $\gammaamma\in \Gammaamma_1$ on $Q$ by $Q\circ \gammaamma$. We first need to relate $A_{\eta}\gammaamma$ and $A_Q$. \begin{lemma}\lambdabel{lem:AGamAQ} For a hyperbolic pair $\eta,\eta'$, $\gammaamma=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in \Gammaamma_1$, and $Q=Q_{\eta}\circ \gammaamma$, there exists a constant $r\in \mathbb{R}^+$ so that \begin{equation}\lambdabel{eqn:scale} A_{\eta}\gammaamma = \left(\begin{matrix}\sqrt{r} &0\\ 0 &\frac{1}{\sqrt{r}}\end{matrix}\right) A_Q \end{equation} and hence in particular $$ \arg\left(A_{\eta}\gammaamma\tau\right) = \arg\left(A_Q\tau\right)\qquad\text{ and }\qquad \operatorname{sgn}\left(\textnormal{Re}\left(A_{\eta}\gammaamma\tau\right)\right)=\operatorname{sgn}\left(\textnormal{Re}\left(A_Q\tau\right)\right). $$ Moreover, \begin{equation}\lambdabel{eqn:AQrel} \tau\Big|_{-2} A_{\eta}\gammaamma\left(\tau\right)=\tau\Big|_{-2} A_Q\left(\tau\right) = \frac{-Q\left(\tau,1\right)}{\sqrt{D}}. \end{equation} \end{lemma} \begin{proof} A direct calculation, using \eqref{eqn:AQdef}, yields \begin{equation}\lambdabel{eqn:AGamAQ} A_{\eta}\gammaamma \tau=\operatorname{sgn}(\eta-\eta')\frac{a-c\eta'}{a-c\eta} \left(\frac{\tau - \gammaamma^{-1}\eta'}{-\tau + \gammaamma^{-1}\eta}\right) . \end{equation} Denote $Q_{\eta}=[\alpha,\beta,\delta]$ and $Q=\left[a_Q,b_Q,c_Q\right]$ and recall that we have chosen $Q_{\eta}$ (resp. $\eta_Q$) such that $\operatorname{sgn}\left(\alpha\right)=\operatorname{sgn}\left(\eta-\eta'\right)$ (resp. $\operatorname{sgn}(\eta_Q-\eta_Q')=\operatorname{sgn}(a_Q)$). Hence $\eta-\eta'= \frac{\sqrt{D}}{\alpha}$ and one now concludes the second identity of \eqref{eqn:AQrel} after noting that $$ j\left(A_{\eta},\tau\right)=\mp \frac{\operatorname{sgn}\left(\eta-\eta'\right)}{\sqrt{\left|\eta-\eta'\right|}}\left(\tau - \eta\right). $$ and applying \eqref{eqn:AGamAQ} with $\eta=\eta_Q$ and $\gammaamma=I$. Since $Q=Q_{\eta}\circ \gammaamma$, $\gammaamma$ sends the roots of $Q$ to the roots of $Q_{\eta}$ and hence either $\gammaamma^{-1}\eta=\eta_Q$ or $\gammaamma^{-1}\eta'=\eta_Q$. Since $\eta_Q,\eta_Q'$ are ordered by $\operatorname{sgn}(\eta_Q-\eta_Q')=\operatorname{sgn}(a_Q)$, the identity $\gammaamma^{-1}\eta=\eta_Q$ is verified by \begin{multline*} \operatorname{sgn}\left(a_Q\right)=\operatorname{sgn}\left(Q_{\eta}\left(a,c\right)\right)= \operatorname{sgn}\left(\alpha\right)\operatorname{sgn}\left(\left(a-c\eta\right)\left(a-c\eta'\right)\right)\\ =\operatorname{sgn}\left(\frac{\eta-\eta'}{\left(a-c\eta\right)\left(a-c\eta'\right)}\right) = \operatorname{sgn}\left(\gammaamma^{-1}\eta-\gammaamma^{-1}\eta'\right). \end{multline*} Denoting $r:=\left|\frac{a-c\eta'}{a-c\eta}\right|$ and comparing \eqref{eqn:AGamAQ} with the definition \eqref{eqn:AQdef} of $A_Q$ yields $$ A_{\eta}\gammaamma \tau = rA_Q\tau. $$ One concludes \eqref{eqn:scale} from the fact that $A_{\eta}\gammaamma$ and $A_Q$ both have determinant $1$. Since $\tau$ is invariant by slashing with a scaling matrix in weight $-2$, the second identity of \eqref{eqn:AQrel} follows, completing the proof. 
\end{proof} We now use Lemma \textnormal{Re}f{lem:AGamAQ} to show that under the natural correspondence between narrow classes $\mathcal{A}\subseteq \mathbb{Q}D$ and hyperbolic pairs $\eta,\eta'\in \mathbb{R}$ given above, one has: \begin{lemma}\lambdabel{lem:PPMQ} For every hyperbolic pair $\eta,\eta'$ and $\mathcal{A}=\left[Q_{\eta}\right]\subseteq\mathbb{Q}D$, one has $$ f_{k,D,\narrow}Meta=\binom{2k-2}{k-1}\pi D^{\frac{k}{2}}f_{k,D,\narrow}M. $$ \end{lemma} \begin{proof} By Lemma \textnormal{Re}f{lem:AGamAQ}, \eqref{eqn:hypPoincMaass} may be rewritten as \begin{equation}\lambdabel{eqn:hypPoincMaass2} f_{k,D,\narrow}Meta\left(\tau\right)=\frac{(-1)^{k-1}}{D^{\frac{k-1}{2}}}\sum_{Q\in \mathcal{A}} \operatorname{sgn}\left(\textnormal{Re}\left(A_Q\tau\right)\right) Q\left(\tau,1\right)^{k-1} \varphi\left(\arctan\left|\frac{\textnormal{Im}\left(A_Q\tau\right)}{\textnormal{Re}\left(A_Q\tau\right)}\right|\right). \end{equation} We first note that $a\neq 0$ (since $D$ is not a square, by assumption). From \eqref{eqn:AGamAQ} with $\eta=\eta_Q$ and $\gammaamma=I$, one concludes \begin{equation} \lambdabel{eqn:AQre}\textnormal{Re}\left(A_Q\tau\right) = -\frac{a\left|\tau\right|^2+b x +c}{|a| \left|-\tau+\eta_Q\right|^2},\qquad\qquad \textnormal{Im}\left(A_Q \tau\right) = \frac{y\sqrt{D}}{|a|\left|-\tau+\eta_Q\right|^2}. \end{equation} This allows one to rewrite $\arctan\left|\frac{\textnormal{Im}\left(A_Q\tau\right)}{\textnormal{Re}\left(A_Q\tau\right)}\right|$. Using \eqref{eqn:varphibeta}, it follows that \eqref{eqn:hypPoincMaass2} equals \eqref{eqn:hypPoincMaass3}. \end{proof} \section{Convergence of $f_{k,D,\narrow}M$}\lambdabel{sec:converge} In this section we prove the convergence needed to show Theorem \textnormal{Re}f{thm:converge}. We need the following simple property of $\arctan\left|z\right|$ for $z\in \mathbb{C}$: \begin{equation}\lambdabel{eqn:arctanbnd} \arctan\left|z\right| \leq \min\left\{ \left|z\right|, \frac{\pi}{2}\right\}. \end{equation} For a convergence estimate, we also employ the following formula of Zagier (\cite{ZagierMFwhose}, Prop. 3). For a discriminant $0<D=\Delta f^2$ with $\Delta$ a fundamental discriminant and $\textnormal{Re}(s)>1$, one has \begin{equation}\lambdabel{eqn:Zagier} \sum_{a\in \mathbb{N}} \sum_{\substack{0\leq b<2a\\ b^2\equiv D\pmod{4a}}} a^{-s} = \frac{\zeta(s)}{\zeta\left(2s\right)}L_{\Delta}(s)\sum_{d\mid f}\mu(d)\chi_{\Delta}\left(d\right)d^{-s}\sigma_{1-2s}\left(\frac{f}{d}\right), \end{equation} where $L_{\Delta}(s):=L\left(s,\chi_{\Delta}\right)$ is the Dirichlet $L$-series associated to the quadratic character $\chi_{\Delta}(n):=\left(\frac{\Delta}{n}\right)$, $\mu$ is the M\"obius function, and $\sigma_s(n):=\sum_{d\mid n} d^{s}$. \begin{proposition}\lambdabel{prop:converge} For $k>1$, $f_{k,D,\narrow}M$ converges compactly on $\mathbb{H}$. \end{proposition} \begin{proof} Assume that $\tau=x+iy$ is contained in a compact subset $\mathscr{C}\subset\mathbb{H}$. We note that although we unjustifiably reorder the summation multiple times before showing convergence, in the end we show that the resulting sum converges absolutely, hence validating the legality of this reordering. Taking the absolute value of each term in \eqref{eqn:hypPoincMaass3} and extending the sum to all $Q\subset\mathbb{Q}D$, we obtain (noting \eqref{eqn:varphibeta}) $$ \frac{D^{\frac{1}{2}-k}}{\binom{2k-2}{k-1}\pi}\sum_{Q=[a,b,c]\in \mathbb{Q}D}\left|Q\left(\tau,1\right)^{k-1} \varphi\left(\arctan\left|\frac{\sqrt{D} y}{a\left|\tau\right|^2 + bx +c}\right|\right)\right|. 
$$ We may assume that $a>0$, since the case $a<0$ is treated by changing $Q\to -Q$. We next rewrite $b$ as $b+2an$ with $0\leq b<2a$ and $n\in \mathbb{Z}$ and then split the sum into those summands with $|n|$ ``large'' and those with $|n|$ ``small.'' We first consider the case of large $n$, i.e., $\left|n\right|> 8\left(\left|\tau\right|+\sqrt{D}\right)$ and denote the corresponding sum by $f_{k,D,\narrow}b$. One easily sees that \begin{equation}\lambdabel{eqn:Qbndbig} \left|Q\left(\tau,1\right)\right|\ll an^2, \end{equation} where here and throughout the implied constant depends only on $k$ unless otherwise noted. By estimating $\left|x\right|<\left|\tau\right|<\frac{\left|n\right|}{8}$ and $b<2a$, one obtains (noting that $\left|n\right|>8$) $$ \left|a\left|\tau\right|^2 +\left(b+2an\right)x +c\right| \gammaeq \left|c\right|- \left|\left(b+2an\right)x\right|-a\left|\tau\right|^2 \gammaeq \left|c\right|-\frac{a}{4}\left(\left|n\right|+1\right)\left|n\right|-\frac{an^2}{64}\gammaeq \left|c\right|-\frac{19}{64}an^2. $$ However, $c=\frac{\left(b+2an\right)^2-D}{4a}$, so that the bounds $\left|n\right|>8$ and $D<\frac{n^2}{64}$ yield $$ \left|c\right|\gammaeq a\left(\left|n\right|-1\right)^2-\frac{n^2}{256a}\gammaeq \frac{3}{4}an^2. $$ Therefore \begin{equation} \left|a\left|\tau\right|^2 +\left(b+2an\right)x +c\right| \gammag an^2, \end{equation} and hence by \eqref{eqn:arctanbnd} one concludes $$ \arctan\left|\frac{\sqrt{D}y}{a\left|\tau\right|^2+\left(b+2an\right)x + c }\right|\leq \left|\frac{\sqrt{D}y}{a\left|\tau\right|^2+\left(b+2an\right)x + c }\right|\ll \frac{\sqrt{D}y}{a n^2}, $$ Using \eqref{eqn:varphidef} and \eqref{eqn:varphibeta}, one obtains the estimate $$ \int_0^{\arctan\left|\frac{\sqrt{D}y}{a\left|\tau\right|^2+bx + c }\right|} \sin(u)^{2k-2}du \ll \int_{0}^{\frac{\sqrt{D}y}{an^2}} \left|\sin(u)\right|^{2k-2}du. $$ Since $\left|\sin(u)\right|\leq u$ for $u\gammaeq 0$, we conclude that \begin{equation}\lambdabel{eqn:intbndbig} \int_{0}^{\frac{\sqrt{D}y}{an^2}} \left|\sin(u)\right|^{2k-2}du \leq \int_{0}^{\frac{\sqrt{D}y}{an^2}} u^{2k-2} du = \frac{1}{2k-1} \left(\frac{\sqrt{D}y}{an^2}\right)^{2k-1}. \end{equation} Combining \eqref{eqn:Qbndbig} and \eqref{eqn:intbndbig} and noting that all bounds are independent of $b$ yields \begin{equation}\lambdabel{eqn:PPbbnd} f_{k,D,\narrow}b\left(\tau\right)\ll y^{2k-1}D^{k-\frac{1}{2}}\sum_{a\in \mathbb{N}}\sum_{\substack{0\leq b<2a\\ b^2\equiv D\pmod{4a}}}a^{-k} \sum_{n> 8\left(\left|\tau\right|+\sqrt{D}\right)} n^{-2k}\ll \left(\frac{y\sqrt{D}}{\left|\tau\right|+\sqrt{D}}\right)^{2k-1}\ll_{\mathscr{C}, D} 1, \end{equation} where we have estimated the inner sum against the corresponding integral and evaluated the outer two sums with \eqref{eqn:Zagier}. Since $y$ (resp. $\left|\tau\right|$) may be bounded from above (resp. below) by a constant depending only on $\mathscr{C}$, it follows that $f_{k,D,\narrow}b$ converges uniformly on $\mathscr{C}$. We now move on to the case when $|n|\leq 8 \left(\left|\tau\right|+\sqrt{D}\right)$ and denote the corresponding sum by $f_{k,D,\narrow}a$. As in the case for $n$ large, one easily estimates \begin{equation}\lambdabel{eqn:Qbndsmall} \left|Q\left(\tau,1\right)\right|\ll a\left(\left|\tau\right|+\sqrt{D}\right)^{2}\ll_{\mathscr{C},D} a. \end{equation} We further split the sum over $a\in \mathbb{N}$. For $a>\frac{\sqrt{D}}{y}$ we have \begin{equation} \left|a\left|\tau\right|^2 + \left(b+2an\right)x +c\right| = \left|ay^2 + a\left(x+n+\frac{b}{2a}\right)^2 -\frac{D}{4a}\right|\gammag a y^2. 
\end{equation} Hence for the terms $a>\frac{\sqrt{D}}{y}$, we use \eqref{eqn:arctanbnd} to obtain \begin{equation}\lambdabel{eqn:varphibnda} \int_0^{\arctan\left|\frac{\sqrt{D}y}{a\left|\tau\right|^2+bx + c }\right|} \sin(u)^{2k-2}du \ll \int_{0}^{\frac{\sqrt{D}}{ay}} u^{2k-2}du =\frac{1}{2k-1} \left(\frac{\sqrt{D}}{ay}\right)^{2k-1}. \end{equation} For $a\leq \frac{\sqrt{D}}{y}$ we simply note that by \eqref{eqn:arctanbnd} we may trivially bound $\arctan\left|\frac{\sqrt{D}y}{a\left|\tau\right|^2+bx + c }\right|\leq \frac{\pi}{2}$ and, since $\sin(u)\gammaeq 0$ for $0\leq u\leq \pi$, we may trivially estimate the remaining terms by the constant \begin{equation}\lambdabel{eqn:varphibndasmall} \int_0^{\frac{\pi}{2}} \sin(u)^{2k-2}du. \end{equation} Bounding the sum over $n$ trivially and using \eqref{eqn:Qbndsmall}, \eqref{eqn:varphibnda}, and \eqref{eqn:varphibndasmall} yields \begin{multline}\lambdabel{eqn:PPabnd} f_{k,D,\narrow}a\left(\tau\right)\ll\left(|\tau|+\sqrt{D}\right)^{2k-1}\sum_{a\leq\frac{\sqrt{D}}{y}} \sum_{\substack{0\leq b<2a\\ b^2\equiv D\pmod{4a}}}a^{k-1}\\ + D^{k-\frac{1}{2}}\left(\frac{|\tau|+\sqrt{D}}{y}\right)^{2k-1}\sum_{a>\frac{\sqrt{D}}{y}}\sum_{\substack{0\leq b<2a\\ b^2\equiv D\pmod{4a}}}a^{-k}\ll \left(|\tau|+\sqrt{D}\right)^{2k-1} \frac{D^{\frac{k+1}{2}}}{y^{k+1}}. \end{multline} Here we have employed \eqref{eqn:Zagier} for large $a$ and used trivial estimates for all other sums, completing the proof. \end{proof} \section{Values at exceptional points}\lambdabel{sec:exceptional} In this section, we describe the behavior of $f_{k,D,\narrow}M$ along the circles of discontinuity $E_D$ (defined in \eqref{eqn:EDdef}). For each $Q$, $S_Q$ (defined in \eqref{eqn:SQdef}) partitions $\mathbb{H}\setminus S_Q$ into two open connected components (one ``above'' and one ``below'' $S_Q$), which, for $\varepsilon =\pm$, we denote by \begin{equation}\lambdabel{eqn:HPdef} \mathcal{C}_Q^{\varepsilon}:=\left\{\tau\in \mathbb{H}: \varepsilon \operatorname{sgn}\left(\left|\tau +\frac{b}{2a}\right|-\frac{\sqrt{D}}{2|a|}\right) =1 \right\}. \end{equation} For each $\tau\in \mathbb{H}$, we further define \begin{equation}\lambdabel{eqn:bddef} \mathscr{B}_{\tau}=\mathscr{B}_{\tau,D}:= \left\{Q\in\mathbb{Q}D: \tau\in S_Q^{\phantom{-} }\right\}. \end{equation} In order for the second condition in the definition of locally harmonic Maass forms to be meaningful, it is first necessary to show that the set $E_D$ is nowhere dense in $\mathbb{H}$ and hence $E_D$ partitions $\mathbb{H}\setminusE_D$ into (open) connected components. \begin{lemma}\lambdabel{lem:neighbor} Suppose that $D>0$ is a non-square discriminant. For every $\tau_0=x_0+iy_0\in \mathbb{H}$, the following hold: \noindent \noindent \begin{enumerate} \item For all but finitely many $Q\in \mathbb{Q}D$, we have that $\tau_0\in \mathcal{C}_{Q}^+$. In particular, $\mathscr{B}_{\tau_0}$ is finite. \item There exists a neighborhood $N$ of $\tau_0$ so that for every $[a,b,c]\notin \mathscr{B}_{\tau_0}$ and $\tau=x+iy\in N$, $$ \operatorname{sgn}\left(a\left|\tau\right|^2+bx+ c\right) = \operatorname{sgn}\left(a\left|\tau_0\right|^2+bx_0+ c\right)\neq 0. $$ \end{enumerate} \end{lemma} \begin{proof} (1) We define the open set $$ N_1:=\left\{ \tau=x+iy\in \mathbb{H}: \left|x-x_0\right|<1, y> \frac{y_0}{2}\right\}. $$ If $\left|a\right|>\frac{\sqrt{D}}{y_0}$ and $\tau\in N_1$, then the inequality $$ \left|\tau + \frac{b}{2a}\right|\gammaeq y > \frac{y_0}{2}>\frac{\sqrt{D}}{2|a|} $$ implies that $\tau\in \mathcal{C}_Q^+$. 
Moreover, for $$ \left|b\right|>2|a|\max\Big\{ \left|x_0-1\right|,\left|x_0+1\right|\Big\} +\sqrt{D}, $$ we have $$ \left|\tau+\frac{b}{2a}\right| > \left| \frac{2a x +b}{2a}\right|\gammaeq \frac{|b|-2|a| |x|}{2|a|}> \frac{ 2|a|\left(\max\Big\{\left|x_0-1\right|,\left|x_0+1\right|\Big\} - |x|\right)+\sqrt{D}}{2|a|}. $$ One immediately concludes that \begin{equation}\lambdabel{eqn:CQ+} N_1\subseteq \mathcal{C}_Q^+ \end{equation} for all but finitely many $Q\in \mathbb{Q}D$. In particular, this proves the first statement. \noindent (2) In order to prove the second statement, for $a,b,c\in \mathbb{Z}$, we define $$ N_{a,b,c}:=\left\{ \tau=x+iy\in N_1: \operatorname{sgn}\left(a\left|\tau\right|^2+bx + c\right) = \operatorname{sgn} \left(a\left|\tau_0\right|^2+b x_0+ c\right)\right\}. $$ We denote the intersection of these open sets by $$ N=N_Q:=\bigcap_{\left[a,b,c\right]\in \mathbb{Q}D\setminus \mathscr{B}_{\tau_0}} N_{a,b,c}, $$ which we now prove is a neighborhood of $\tau_0$ satisfying the second statement of the lemma. A short calculation shows that \begin{equation}\lambdabel{eqn:sgnrewrite} \operatorname{sgn}\left(a\left|\tau\right|^2+bx + c\right)=\operatorname{sgn}\left(a\right)\operatorname{sgn}\left(\left|\tau + \frac{b}{2a}\right|- \frac{\sqrt{D}}{2|a|}\right), \end{equation} so that $N_{a,b,c}=N_1\cap C_Q^{\varepsilon}$ with $\varepsilon$ chosen such that $\tau_0\in \mathcal{C}_Q^{\varepsilon}$. Hence by \eqref{eqn:CQ+}, we conclude that $N_{a,b,c}=N_1$ for all but finitely many $\left[a,b,c\right]\in \mathbb{Q}D$. Therefore $N$ is the intersection of finitely many $N_{a,b,c}$. Hence $N$ is open and every $\tau\in N$ satisfies the conditions of the second statement, completing the proof. \end{proof} We are now ready to describe the value $f_{k,D,\narrow}M\left(\tau\right)$ whenever $\tau\in S_Q$ for some $Q\in \mathbb{Q}D$. \begin{proposition}\lambdabel{prop:boundary} If $\tau\in E_D$, then $$ f_{k,D,\narrow}M\left(\tau\right) = \frac{1}{2}\lim_{w\to 0^+}\left(f_{k,D,\narrow}M\left(\tau+iw\right) + f_{k,D,\narrow}M\left(\tau-iw\right)\right). $$ \end{proposition} \begin{proof} We first split the sum \eqref{eqn:hypPoincMaass3} defining $f_{k,D,\narrow}M$ into $Q\in \mathscr{B}_{\tau}$ and $Q\notin \mathscr{B}_{\tau}$ (defined in \eqref{eqn:bddef}). Due to local uniform convergence, we may interchange the limit $w\to 0^+$ with the sum. Since $\beta\left(t;k-\frac{1}{2},\frac{1}{2}\right)$ is continuous as a function of $0<t\leq 1$, one obtains \begin{multline}\lambdabel{eqn:circavg} \frac{1}{2}\lim_{w\to 0^+}\left(f_{k,D,\narrow}M\left(\tau+iw\right) + f_{k,D,\narrow}M\left(\tau-iw\right)\right)\\ =\frac{\left(-1\right)^kD^{\frac{1}{2}-k}}{\binom{2k-2}{k-1}\pi} \sum_{Q=[a,b,c]\notin \mathscr{B}_{\tau}} \operatorname{sgn}\left(a\left|\tau\right|^2 + b x+c\right) Q\left(\tau,1\right)^{k-1} \varphi\left(\arctan\left|\frac{\sqrt{D}y}{a\left|\tau\right|^2 + b x+c}\right|\right)\\ + \frac{\left(-1\right)^kD^{\frac{1}{2}-k}}{2\pi \binom{2k-2}{k-1}}\sum_{\substack{Q=[a,b,c]\in \mathscr{B}_{\tau}\\ \varepsilon\in \left\{\pm\right\}}} \lim_{w\to 0^+}\Bigg( \operatorname{sgn}\left(a\left|\tau+\varepsilon iw\right|^2 + b x+c\right) Q\left(\tau+\varepsilon iw,1\right)^{k-1}\\ \times\varphi\left(\arctan\left|\frac{\sqrt{D}\left(y+\varepsilon w\right)}{a\left|\tau+\varepsilon iw\right|^2 + b x+c}\right|\right)\Bigg). 
\end{multline} For each $Q=[a,b,c]\in \mathscr{B}_{\tau}$ and $0<w<y$, one concludes, since $\frac{b}{2a}$ is real, that \begin{equation}\lambdabel{eqn:circavg2} \left|\tau-iw+\frac{b}{2a}\right|-\frac{\sqrt{D}}{2\left|a\right|}<\left|\tau+\frac{b}{2a}\right|-\frac{\sqrt{D}}{2\left|a\right|}=0 <\left|\tau+iw+\frac{b}{2a}\right|-\frac{\sqrt{D}}{2\left|a\right|}. \end{equation} It follows from \eqref{eqn:sgnrewrite} that the $\pm$ terms on the right hand side of \eqref{eqn:circavg} have opposite signs. Since $\varphi$ is continuous, one concludes that the sum over $Q\in \mathscr{B}_{\tau}$ vanishes, completing the proof. \end{proof}
\section{Action of $\xi_{2-2k}$ and $\mathcal{D}^{2k-1}$}\lambdabel{sec:xiDk-1} In this section, we determine the action of the operators $\xi_{2-2k}$ and $\mathcal{D}^{2k-1}$ on $f_{k,D,\narrow}M$ (and $\mathcal{F}_{1-k,D}$). We prove the following proposition, which immediately implies Theorem \textnormal{Re}f{thm:xiDk-1}. \begin{proposition}\lambdabel{prop:xiDk-1} Suppose that $k>1$, $D>0$ is a non-square discriminant, and $\mathcal{A}\subseteq\mathbb{Q}D$ is a narrow class of binary quadratic forms. Then for every $\tau\in \mathbb{H}\setminus E_D$, the function $f_{k,D,\narrow}M$ satisfies \begin{eqnarray*} \xi_{2-2k}\left(f_{k,D,\narrow}M\right)\left(\tau\right) &=& D^{\frac{1}{2}-k}f_{k,D,\narrow}\left(\tau\right),\\ \mathcal{D}^{2k-1}\left(f_{k,D,\narrow}M\right)\left(\tau\right) &=& -D^{\frac{1}{2}-k}\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} f_{k,D,\narrow}\left(\tau\right). \end{eqnarray*} In particular, we have that \begin{equation}\lambdabel{eqn:Deltaxi} \Delta_{2-2k}\left(f_{k,D,\narrow}M\right)\left(\tau\right)=0. \end{equation} \end{proposition} \begin{remark} As mentioned in the introduction, the case $k=1$ is addressed in H\"ovel's thesis. His method is based on theta lifts and differs greatly from the argument given here. \end{remark}
\begin{proof} Assume that $\tau \in \mathbb{H}\setminus E_D$. By Lemma \textnormal{Re}f{lem:neighbor}, there is a neighborhood containing $\tau$ for which \eqref{eqn:hypPoincMaass3} is continuous and real differentiable. Inside this neighborhood, we use Lemma \textnormal{Re}f{lem:PPMQ} to rewrite $f_{k,D,\narrow}M$ in terms of $f_{k,D,\narrow}Meta$ for some hyperbolic pair $\eta, \eta'$ and then act by $\xi_{2-2k}$ and $\mathcal{D}^{2k-1}$ termwise on the expansion \eqref{eqn:hypPoincMaass}. However, the operator $\xi_{2-2k}$ (resp. $\mathcal{D}^{2k-1}$) commutes with the group action of ${\text {\rm SL}}_2\left(\mathbb{R}\right)$, so it suffices to compute the action of $\xi_{2-2k}$ (resp. $\mathcal{D}^{2k-1}$) on $\widehat{\varphi}$ (defined in \eqref{eqn:varphi*def}). By Lemma \textnormal{Re}f{lem:AGamAQ} and \eqref{eqn:AQre}, the assumption that $\tau\in\mathbb{H}\setminus E_D$ is equivalent to the restriction that $x\neq 0$ before slashing by $A\gammaamma$. For $x\neq 0$, we use \begin{equation}\lambdabel{eqn:sinarctan} \sin\left(\arctan\left|\frac{y}{x}\right|\right) = \frac{|y|}{\sqrt{x^2+y^2}} \end{equation} to evaluate \begin{equation}\lambdabel{eqn:xitau^k} \xi_{2-2k}\left(\widehat{\varphi}\right)\left(\tau\right)=i y^{2-2k} \operatorname{sgn}(x)\overline{\tau}^{k-1} \sin\left(\arctan\left|\frac{y}{x}\right|\right)^{2k-2} \left(-\frac{y\operatorname{sgn}(x)}{x^2+y^2} -i\frac{x\operatorname{sgn}(x)}{x^2+y^2}\right)=\tau^{-k}.
\end{equation} Using Lemma \textnormal{Re}f{lem:PPMQ} and \eqref{eqn:PPetaPP}, on $\mathbb{H}\setminus E_D$ it follows that $$ \xi_{2-2k}\left(f_{k,D,\narrow}M\right)=\frac{D^{-\frac{k}{2}}}{\binom{2k-2}{k-1}\pi}\xi_{2-2k}\left(f_{k,D,\narrow}Meta\right)=\frac{D^{-\frac{k}{2}}}{\binom{2k-2}{k-1}\pi} f_{k,D,\narrow}eta=D^{\frac{1}{2}-k}f_{k,D,\narrow}. $$ Since $\xi_{2-2k}\left(f_{k,D,\narrow}M\right)$ is holomorphic in some neighborhood of $\tau$, one immediately obtains \eqref{eqn:Deltaxi} after using \eqref{eqn:Deltaxigen} to rewrite $\Delta_{2-2k}$. We next consider $\mathcal{D}^{2k-1}$. We first show that for $n\gammaeq 0$ and $x\neq 0$ we have \begin{equation}\lambdabel{eqn:PPMDn} \left(2\pi i\right)^n \mathcal{D}^{n}\left(\widehat{\varphi}\right)\left(\tau\right) =\frac{\Gammaamma\left(k\right)}{\Gammaamma\left(k-n\right)} \operatorname{sgn}(x)\tau^{k-1-n} \varphi\left(\arctan\left|\frac{y}{x}\right|\right) + \frac{P_n\left(x,y\right)}{\tau^n \overline{\tau}^{k-1}}, \end{equation} where $P_n\left(x,y\right)$ is the homogeneous polynomial of degree $2k-2$ defined inductively by $P_0(x,y):=0$ and \begin{equation}\lambdabel{eqn:Pndef} P_{n+1}\left(x,y\right):=\frac{-i}{2}\frac{\Gammaamma\left(k\right)}{\Gammaamma\left(k-n\right)} y^{2k-2} + \tau \frac{d}{d\tau}\left(P_n\left(x,y\right)\right) - n P_n\left(x,y\right) \end{equation} for $n\gammaeq 0$. The statement for $n=0$ is simply definition \eqref{eqn:varphi*def} of $\widehat{\varphi}$. We then use induction and apply \eqref{eqn:sinarctan} to establish \eqref{eqn:PPMDn} for $n\gammaeq 0$. In particular, for $n=2k-1$ the first term in \eqref{eqn:PPMDn} vanishes and thus we have $$ \mathcal{D}^{2k-1}\left(\widehat{\varphi}\right)\left(\tau\right)=\frac{P_{2k-1}\left(x,y\right)}{\left(2\pi i \right)^{2k-1}\tau^{2k-1}\overline{\tau}^{k-1}}. $$ However, in some neighborhood of $\tau$, \eqref{eqn:Deltaxi} implies that $\widehat{\varphi}$ is harmonic and hence $\mathcal{D}^{2k-1}\left(\widehat{\varphi}\right)$ is holomorphic. Thus $$ P_{2k-1}\left(x,y\right)=\overline{\tau}^{k-1}P\left(\tau\right) $$ for some polynomial $P\in \mathbb{C}[X]$. However, since $P_{2k-1}\left(x,y\right)$ is homogeneous of degree $2k-2$, it follows that $$ P_{2k-1}\left(x,y\right) =C\left|\tau\right|^{2k-2} =Cx^{2k-2}+O_y\left(x^{2k-3}\right) $$ for some constant $C\in \mathbb{C}$. In order to compute the constant, we note that, by \eqref{eqn:Pndef}, one easily inductively shows that for $n\gammaeq 1$ $$ P_{n+1}\left(x,y\right) = \frac{-i}{2}x^{n}\frac{d^{n}}{d\tau^{n}}\left(y^{2k-2}\right) + O_y\left(x^{n-1}\right). $$ We use this with $n=2k-2$ to obtain that $$ C=-\left(\frac{i}{2}\right)^{2k-1}\left(2k-2\right)!. $$ Hence it follows that $$ \mathcal{D}^{2k-1}\left(\widehat{\varphi}\right)\left(\tau\right) = -\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}}\tau^{-k}. $$ Therefore, using Lemma \textnormal{Re}f{lem:PPMQ} and \eqref{eqn:PPetaPP} to rewrite $f_{k,D,\narrow}Meta$ and $f_{k,D,\narrow}eta$, we complete the proof with $$ \mathcal{D}^{2k-1}\left(f_{k,D,\narrow}M\right)\left(\tau\right) = -D^{\frac{1}{2}-k}\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} f_{k,D,\narrow}\left(\tau\right). $$ \end{proof} \section{The expansion of $f_{k,D,\narrow}M$}\lambdabel{sec:expansion} In this section we investigate the ``shape'' of $f_{k,D,\narrow}M$. We are then able to prove that $f_{k,D,\narrow}M$ is a locally harmonic Maass form, completing the proof of Theorem \textnormal{Re}f{thm:converge}. 
To describe the expansion of $f_{k,D,\narrow}M$, we first need some notation. Recall that for $\textnormal{Re}\left(s\right),\textnormal{Re}\left(w\right)>0$, we have (for example, see (6.2.2) of \cite{AS}) \begin{equation}\lambdabel{eqn:betacompdef} \beta\left(s,w\right):=\beta\left(1;s,w\right)=\int_{0}^{1}u^{s-1}\left(1-u\right)^{w-1}du = \frac{\Gammaamma\left(s\right)\Gammaamma\left(w\right)}{\Gammaamma\left(s+w\right)}. \end{equation} In particular, by the duplication formula, one has \begin{equation}\lambdabel{eqn:betacompdef2} \beta\left(k-\frac{1}{2},\frac{1}{2}\right) = \frac{\Gammaamma\left(k-\frac{1}{2}\right)\Gammaamma\left(\frac{1}{2}\right)}{\Gammaamma\left(k\right)} = \binom{2k-2}{k-1}2^{2-2k}\pi. \end{equation} For $a>0$, $b\in \mathbb{Z}$, and a narrow equivalence class $\mathcal{A}\subseteq \mathbb{Q}D$, denote $$ r_{a,b}\left(\mathcal{A}\right):= \begin{cases} 1+\left(-1\right)^{k} & \text{if }\left[a,b,\frac{b^2-D}{4a}\right]\in \mathcal{A} \text{ and }\left[-a,-b,-\frac{b^2-D}{4a}\right]\in \mathcal{A},\vphantom{\begin{array}{l} \end{array}}\\ 1& \text{if }\left[a,b,\frac{b^2-D}{4a}\right]\in \mathcal{A}\text{ and }\left[-a,-b,-\frac{b^2-D}{4a}\right]\notin \mathcal{A},\vphantom{\begin{array}{l} \end{array}}\\ \left(-1\right)^{k} & \text{if }\left[a,b,\frac{b^2-D}{4a}\right]\notin \mathcal{A}\text{ and }\left[-a,-b,-\frac{b^2-D}{4a}\right]\in \mathcal{A},\vphantom{\begin{array}{l} \end{array}}\\ 0 & \text{otherwise}. \end{cases} $$ We define the constants \begin{eqnarray}\lambdabel{eqn:cinftyA} c_{\infty}\left(\mathcal{A}\right)&:=&-\frac{1}{2^{2k-2}\left(2k-1\right)\binom{2k-2}{k-1}} \sum_{a\in \mathbb{N}}a^{-k}\sum_{\substack{b\pmod{2a}\\ \substack{b^2\equiv D\pmod{4a}}}} r_{a,b}\left(\mathcal{A}\right),\\ \nonumber c_{\infty}&:=&-\frac{1}{2^{2k-2}\left(2k-1\right)\binom{2k-2}{k-1}} \frac{\zeta(k)}{\zeta\left(2k\right)}L_{\Delta}(k)\sum_{d\mid f}\mu(d)\chi_{\Delta}\left(d\right)d^{-k}\sigma_{1-2k}\left(\frac{f}{d}\right), \end{eqnarray} where $D=\Delta f^2$ and $\Delta$ is a fundamental discriminant. They play an important role in the expansions of $f_{k,D,\narrow}M$ and $\mathcal{F}_{1-k,D}$, respectively. By Proposition 3 of \cite{ZagierMFwhose}, the constant $c_{\infty}$ may also be written in terms of the zeta functions $$ \zeta\left(s,D\right):=\sum_{Q\in \mathbb{Q}D/\Gammaamma_1}\sum_{\substack{(m,n)\in \Gammaamma_Q\backslash \mathbb{Z}^2\\ Q(m,n)>0}}\frac{1}{Q(m,n)^s}, $$ where $\Gammaamma_Q\subset\Gammaamma_1$ is the stablizer of $Q$. To be more precise, we have $$ c_{\infty} =-\frac{1}{2^{2k-2}\left(2k-1\right)\binom{2k-2}{k-1}} \frac{\zeta\left(k,D\right)}{\zeta\left(2k\right)}. $$ These zeta functions, and hence the constant $c_{\infty}$, are also closely related to the coefficients of Cohen's Eisenstein series \cite{Cohen}, modular forms of weight $k+\frac{1}{2}$. Before we state the theorem, we refer the reader back to the definitions of $f_{k,D,\narrow}^*$ and $\mathcal{E}_{f_{k,D,\narrow}}$, given in \eqref{eqn:f*def} and \eqref{eqn:Eichdef}, respectively. \begin{theorem}\lambdabel{thm:expansion} Suppose that $k>1$, $D>0$ is a non-square discriminant, and $\mathcal{A}\subseteq\mathbb{Q}D$ is a narrow equivalence class. 
Then, for every connected component $\mathcal{C}$ of $\mathbb{H}\setminus \bigcup_{Q\in \mathcal{A}} S_Q$, there exists a polynomial $P_{\mathcal{C},\mathcal{A}}\in \mathbb{C}[X]$ of degree at most $2k-2$ such that \begin{equation}\lambdabel{eqn:PPMPoly} f_{k,D,\narrow}M\left(\tau\right) =D^{\frac{1}{2}-k} f_{k,D,\narrow}^*\left(\tau\right) -D^{\frac{1}{2}-k}\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} \mathcal{E}_{f_{k,D,\narrow}}\left(\tau\right) +P_{\mathcal{C},\mathcal{A}}\left(\tau\right) \end{equation} for every $\tau\in \mathcal{C}$. This polynomial is explicitly given by \begin{equation}\lambdabel{eqn:polycomp} P_{\mathcal{C},\mathcal{A}}\left(\tau\right)=c_{\infty}\left(\mathcal{A}\right) + \left(-1\right)^k 2^{3-2k}D^{\frac{1}{2}-k}\sum_{ \substack{Q=\left[a,b,c\right]\in \mathcal{A}\\ a|\tau|^2+bx+c>0>a}}Q\left(\tau,1\right)^{k-1}. \end{equation} \end{theorem} \begin{remark} In particular, for every $\tau\in \mathbb{H}$ with $y>\frac{\sqrt{D}}{2}$, $f_{k,D,\narrow}M$ has the Fourier expansion \begin{equation}\lambdabel{eqn:PPMFourier} f_{k,D,\narrow}M\left(\tau\right) = D^{\frac{1}{2}-k}f_{k,D,\narrow}^*\left(\tau\right) -D^{\frac{1}{2}-k}\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} \mathcal{E}_{f_{k,D,\narrow}}\left(\tau\right) + c_{\infty}\left(\mathcal{A}\right). \end{equation} One now concludes Theorem \textnormal{Re}f{thm:PPkDexpansion} immediately by summing over all narrow classes $\mathcal{A}\subseteq\mathbb{Q}D$. \end{remark} Before proving Theorem \textnormal{Re}f{thm:expansion}, we note an immediate corollary which is useful in computing the periods of $f_{k,D}$. In order to state this corollary, we abuse notation to denote by $\mathcal{C}_{\alpha}$ the (unique) connected component containing $\alpha\in \mathbb{Q}\cup\left\{i\infty\right\}$ on its boundary. This connected component is unique because the set $$ \left\{ \tau=x+iy\in \mathbb{H} : y>\frac{\sqrt{D}}{2}\right\}\subseteq \mathcal{C}_{i\infty} $$ and $\alpha = \gammaamma \left(i\infty\right)$ for some $\gammaamma\in \Gammaamma_1$. \begin{corollary}\lambdabel{cor:polyPPkD} Suppose that $k$ is even. Then for every $\tau\in \mathcal{C}_0$, $$ \mathcal{F}_{1-k,D}\left(\tau\right)=D^{\frac{1}{2}-k}f_{k,D}^*\left(\tau\right) -D^{\frac{1}{2}-k}\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} \mathcal{E}_{f_{k,D,\narrow}}\left(\tau\right) + P_{\mathcal{C}_0}\left(\tau\right), $$ where \begin{equation}\lambdabel{eqn:PC0def} P_{\mathcal{C}_0}\left(\tau\right):=c_{\infty} + 2^{3-2k}D^{\frac{1}{2}-k}\sum_{\substack{Q=[a,b,c]\in \mathbb{Q}D\\ a<0<c}}Q\left(\tau,1\right)^{k-1}. \end{equation} \end{corollary} A key step in determining the constant term of \eqref{eqn:polycomp} lies in computing the integral $$ \mathcal{I}_{a,D,k}\left(y\right):= \int_{-\infty}^{\infty} \left(a\left(w+iy\right)^2 -\frac{D}{4a}\right)^{k-1} \varphi\left(\arctan\left(\frac{\sqrt{D} y}{a\left(w^2+y^2\right)-\frac{D}{4a}}\right)\right) dw, $$ which is defined for $y>0$, $a\in\mathbb{N}$, $k\in \mathbb{N}$, and $D>0$ a non-square discriminant. \begin{lemma}\lambdabel{lem:Ival} For $a\in \mathbb{N}$, $D$ a non-square discriminant, and $k>1$, we have $$ \mathcal{I}_{a,D,k}\left(y\right) = \left(-1\right)^{k+1}\frac{D^{k-\frac{1}{2}}}{a^{k}2^{2k-2}\left(2k-1\right)}\pi. $$ \end{lemma} Due to the technical nature of the proof of Lemma \textnormal{Re}f{lem:Ival}, we first assume its statement and move its proof to the end of the section. \begin{proof}[Proof of Theorem \textnormal{Re}f{thm:expansion}] Suppose that $\tau\in \mathcal{C}$. 
As described when defining $f^*$ in \eqref{eqn:f*def}, we have \begin{align}\lambdabel{eqn:PP*xi} \xi_{2-2k}\left(f_{k,D,\narrow}^*\right)\left(\tau\right) &= f_{k,D,\narrow}\left(\tau\right),\\ \lambdabel{eqn:PP*Dk-1} \mathcal{D}^{2k-1}\left(f_{k,D,\narrow}^*\right)\left(\tau\right) &= 0. \end{align} Since $\mathcal{D}\left(q^n\right) = nq^n$, one easily computes \begin{equation}\lambdabel{eqn:PPEDk-1} \mathcal{D}^{2k-1}\left(\mathcal{E}_{f_{k,D,\narrow}}\right)\left(\tau\right) = f_{k,D,\narrow}\left(\tau\right), \end{equation} where $\mathcal{E}_f$ ($f\in S_{2k}$) was defined in \eqref{eqn:Eichdef}. Moreover, since $\mathcal{E}_{f_{k,D,\narrow}}$ is holomorphic, \begin{equation}\lambdabel{eqn:PPExi} \xi_{2-2k}\left(\mathcal{E}_{f_{k,D,\narrow}}\right)\left(\tau\right) = 0. \end{equation} From \eqref{eqn:PP*xi}, \eqref{eqn:PPExi}, and Proposition \textnormal{Re}f{prop:xiDk-1}, it follows that $$ \xi_{2-2k}\left(f_{k,D,\narrow}M-D^{\frac{1}{2}-k}f_{k,D,\narrow}^*+D^{\frac{1}{2}-k}\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} \mathcal{E}_{f_{k,D,\narrow}}\right)\left(\tau\right)= 0, $$ and hence $$ P_{\mathcal{C},\mathcal{A}}\left(\tau\right):=f_{k,D,\narrow}M\left(\tau\right)-D^{\frac{1}{2}-k}f_{k,D,\narrow}^*\left(\tau\right)+ D^{\frac{1}{2}-k} \frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} \mathcal{E}_{f_{k,D,\narrow}}\left(\tau\right) $$ is holomorphic in $\mathcal{C}$. However, from \eqref{eqn:PP*Dk-1}, \eqref{eqn:PPEDk-1}, and Proposition \textnormal{Re}f{prop:xiDk-1}, we conclude that $$ \mathcal{D}^{2k-1}\left(P_{\mathcal{C},\mathcal{A}}\right)=0. $$ It follows that $P_{\mathcal{C},\mathcal{A}}$ defines a polynomial of degree at most $2k-2$ inside $\mathcal{C}$, establishing \eqref{eqn:PPMPoly}. We move on to the specific form of $P_{\mathcal{C},\mathcal{A}}$. We rewrite the conditions $a|\tau|^2+bx+c>0>a$ in each connected component $\mathcal{C}$ of $\mathbb{H}\setminusE_D$ so that the sum \eqref{eqn:polycomp} runs over those $[a,b,c]\in \mathcal{A}$ with $a<0$ in the set $$ \mathscr{B}d_{\mathcal{C}}=\mathscr{B}d_{\mathcal{C},\mathcal{A}}:=\left\{ Q\in \mathcal{A} : \tau\in \mathcal{C}_Q^{-}\text{ for all }\tau\in \mathcal{C}\right\}, $$ where $\mathcal{C}_Q^-$ was given in \eqref{eqn:HPdef}. The set $\mathscr{B}d_{\mathcal{C}}$ consists of precisely those $Q\in \mathcal{A}$ for which $S_Q$ (defined in \eqref{eqn:SQdef}) circumscribes $\mathcal{C}$ and it is finite by Lemma \textnormal{Re}f{lem:neighbor}. To be more precise, a direct calculation yields \begin{equation}\lambdabel{eqn:bddrewrite} \sum_{\substack{Q=\left[a,b,c\right]\in \mathcal{A}\\ a|\tau|^2+bx+c>0>a}}Q\left(\tau,1\right)^{k-1}=\sum_{\substack{Q=[a,b,c]\in \mathscr{B}d_{\mathcal{C}}\\ a<0}}Q\left(\tau,1\right)^{k-1} \end{equation} Since $\mathscr{B}d_{\mathcal{C}}$ is finite, we may prove the claim by induction on $\# \mathscr{B}d_{\mathcal{C}}$. We begin with the case $\#\mathscr{B}d_{\mathcal{C}}=0$, which is precisely the case that $\mathcal{C}=\mathcal{C}_{i\infty}$. Note that for $\tau=x+iy$, the equation $a\left|\tau\right|^2 + bx +\frac{b^2-D}{4a}=0$ gives the circle centered at $-\frac{b}{2a}$ of radius $\frac{\sqrt{D}}{2|a|}<\frac{\sqrt{D}}{2}$. Hence every $\tau\in \mathbb{H}$ with $\textnormal{Im}\left(\tau\right)>\frac{\sqrt{D}}{2}$ is in the same connected component $\mathcal{C}_{i\infty}$. It follows that $P_{\mathcal{C}_{i\infty},\mathcal{A}}$ is fixed under translations and hence is a constant which we now show agrees with $c_{\infty}\left(\mathcal{A}\right)$. 
For $y>\frac{\sqrt{D}}{2}$, we use Poisson summation on \eqref{eqn:hypPoincMaass3}. One may restrict to $a>0$ by the change of variables $a\to -a$ and $b\to -b$. Rewrite $b$ as $b+2an$ and note that \begin{align*} a\left|\tau\right|^2 + \left(b+2an\right)x + \frac{\left(b+2an\right)^2-D}{4a}&=a\left|\tau+n\right|^2 + b\left(x+n\right) +\frac{b^2-D}{4a},\\ a \tau ^2 + \left(b+2an\right)\tau + \frac{\left(b+2an\right)^2-D}{4a}&=a\left(\tau+n\right)^2+b\left(\tau+n\right)+\frac{b^2-D}{4a}, \end{align*} and that the $\operatorname{sgn}$ term in \eqref{eqn:hypPoincMaass3} is always positive for $y>\frac{\sqrt{D}}{2}$. Hence \eqref{eqn:hypPoincMaass3} becomes \begin{multline*} f_{k,D,\narrow}M\left(\tau\right) =\frac{\left(-1\right)^{k}D^{\frac{1}{2}-k}}{\binom{2k-2}{k-1}\pi}\sum_{a\in \mathbb{N}}\sum_{\substack{b\pmod{2a}\\ \substack{b^2\equiv D\pmod{4a}\\ Q=\left[a,b,\frac{b^2-D}{4a}\right]}}} \mathfrak{h}space{-.12in} r_{a,b}\left(\mathcal{A}\right) \sum_{n\in \mathbb{Z}} Q\left(\tau+n,1\right)^{k-1}\\ \times \varphi\left(\arctan\left|\frac{\sqrt{D} y}{a\left|\tau+n\right|^2 + b\left(x+n\right) +\frac{b^2-D}{4a}}\right|\right). \end{multline*} Applying Poisson summation to the inner sum and using the change of variables $w\to w-\frac{b}{2a}+iy$, the associated constant term becomes $$ \int_{-\infty+iy}^{\infty+iy} Q\left(w,1\right)^{k-1} \varphi\left(\arctan\left(\frac{\sqrt{D} y}{a\left|w\right|^2 + b\textnormal{Re}\left(w\right) +c}\right)\right) dw=\mathcal{I}_{a,D,k}\left(y\right). $$ We immediately conclude \eqref{eqn:PPMFourier} by Lemma \textnormal{Re}f{lem:Ival}, establishing the case when $\mathscr{B}d_{\mathcal{C}}=\emptyset$. Next suppose that $\#\mathscr{B}d_{\mathcal{C}}=n>0$ and choose $Q_0\in \mathscr{B}d_{\mathcal{C}}$. Since two circles intersect at most twice and $\mathscr{B}d_{\mathcal{C}}$ is finite by Lemma \textnormal{Re}f{lem:neighbor}, it follows that there exists an (open) neighborhood $N$ containing an arc along the geodesic $S_{Q_0}$ (defined in \eqref{eqn:SQdef}) which does not intersect any other geodesics $S_Q$ for $Q\in \mathbb{Q}D$. In other words, there exists $\tau_0\in S_{Q_0}$ and a neighborhood $N$ of $\tau_0$ for which $$ N_1:=N\cap E_D\subset S_{Q_0}. $$ Thus $N_1$ is on the boundary of precisely two connected components, $\mathcal{C}$ and another connected component, which we denote $\mathcal{C}_1$. Then $\mathcal{C}_1$ contains those $\tau\in N$ for which $\tau=\tau_1+iw$ for some $\tau_1\in N_1$ and $w>0$ and $\mathcal{C}$ contains those for which $\tau=\tau_1-iw$. Our goal is to show (the analytic continuation of) identity \eqref{eqn:polycomp} for every $\tau\in N_1$, hence concluding the result by the identity theorem. One sees immediately that $\mathscr{B}d_{\mathcal{C}_{1}}\subsetneq \mathscr{B}d_{\mathcal{C}}$, since $Q\notin \mathscr{B}d_{\mathcal{C}_1}$. Hence by induction, we have \begin{equation}\lambdabel{eqn:polyCC1} P_{\mathcal{C}_{1},\mathcal{A}}\left(\tau\right)=c_{\infty}\left(\mathcal{A}\right) - \left(-1\right)^k 2^{2-2k}D^{\frac{1}{2}-k}\sum_{Q=\left[a,b,c\right]\in \mathscr{B}d_{\mathcal{C}_{1}}}\operatorname{sgn}(a)Q\left(\tau,1\right)^{k-1}. \end{equation} Since each summand in \eqref{eqn:PPMPoly} is piecewise continuous, for $\tau_1\in N_1$, we have $$ \lim_{w\to 0^+} \left(f_{k,D,\narrow}M\left(\tau-iw\right) - f_{k,D,\narrow}M\left(\tau+iw\right)\right)= P_{\mathcal{C},\mathcal{A}}\left(\tau\right) - P_{\mathcal{C}_1,\mathcal{A}}\left(\tau\right). 
$$ However, arguing as in \eqref{eqn:circavg} and \eqref{eqn:circavg2}, we may rewrite the limit to obtain, for every $\tau\in N_1$, \begin{multline}\lambdabel{eqn:polydiff} P_{\mathcal{C},\mathcal{A}}\left(\tau\right) - P_{\mathcal{C}_1,\mathcal{A}}\left(\tau\right) = \lim_{r\to 0^+} \left(f_{k,D,\narrow}M\left(\tau-ir\right) - f_{k,D,\narrow}M\left(\tau+ir\right)\right) \\ =-\frac{\left(-1\right)^{k}D^{\frac{1}{2}-k}}{\binom{2k-2}{k-1}\pi}\sum_{Q=[a,b,c]\in \mathscr{B}_{\tau,\mathcal{A}}} \operatorname{sgn}(a)Q\left(\tau,1\right)^{k-1}\beta\left(\frac{Dy^2}{\left|Q\left(\tau,1\right)\right|^2}; k-\frac{1}{2},\frac{1}{2}\right), \end{multline} where $\mathscr{B}_{\tau,\mathcal{A}}:=\left\{ Q\in \mathcal{A}: \tau\in S_Q\right\}$. By the definition of $N_1$, we know that $\mathscr{B}_{\tau,\mathcal{A}}\subseteq \left\{Q_0,-Q_0\right\}$, because $S_{Q}=S_{\widetilde{Q}}$ if and only if $\widetilde{Q}=Q$ or $\widetilde{Q}=-Q$. Moreover, $\left|Q\left(\tau,1\right)\right|^2 = Dy^2$ for every $\tau\in N_1$. Since $\mathscr{B}d_{\mathcal{C}}=\mathscr{B}d_{\mathcal{C}_1}\cup\left( \left\{\pm Q_0\right\} \cap \mathcal{A}\right)$, we may hence combine definition \eqref{eqn:betacompdef} of $\beta\left(k-\frac{1}{2},\frac{1}{2}\right)$ with \eqref{eqn:polydiff} and \eqref{eqn:polyCC1} to obtain (for every $\tau\in N_1$) \begin{equation}\lambdabel{PCexplicit} P_{\mathcal{C},\mathcal{A}}\left(\tau\right) = c_{\infty}\left(\mathcal{A}\right) -\frac{\left(-1\right)^kD^{\frac{1}{2}-k}}{\binom{2k-2}{k-1}\pi}\beta\left(k-\frac{1}{2},\frac{1}{2}\right) \sum_{Q\in \mathscr{B}d_{\mathcal{C}}}\operatorname{sgn}(a)Q\left(\tau,1\right)^{k-1}. \end{equation} The result follows by \eqref{eqn:betacompdef2}. \end{proof} \begin{proof}[Proof of Corollary \textnormal{Re}f{cor:polyPPkD}] The polynomial $P_{\mathcal{C}_0}$ is obtained by $$ P_{\mathcal{C}_0} =\sum_{\mathcal{A}} P_{\mathcal{C}_0,\mathcal{A}}, $$ where the sum runs over all narrow classes of discriminant $D$. However, each $Q\in \mathbb{Q}D$ is contained in precisely one narrow class $\mathcal{A}$, and hence, plugging in \eqref{eqn:polycomp} and \eqref{eqn:bddrewrite}, one obtains $$ P_{\mathcal{C}_0}\left(\tau\right)=\sum_{\mathcal{A}} P_{\mathcal{C}_0,\mathcal{A}}\left(\tau\right)=\sum_{\mathcal{A}} c_{\infty}\left(\mathcal{A}\right)- 2^{2-2k}D^{\frac{1}{2}-k}\sum_{Q=\left[a,b,c\right]\in \bigcup_{\mathcal{A}} \mathscr{B}d_{\mathcal{C}_0,\mathcal{A}}} Q\left(\tau,1\right)^{k-1}. $$ Comparing \eqref{eqn:cinftyA} (with $k$ even) and \eqref{eqn:Zagier}, we have $$ \sum_{\mathcal{A}} c_{\infty}\left(\mathcal{A}\right)=c_{\infty}, $$ and it remains to compute $\bigcup_{\mathcal{A}} \mathscr{B}d_{\mathcal{C}_0,\mathcal{A}}$. This set consists of precisely those $Q=\left[a,b,c\right]\in\mathbb{Q}D$ for which one root is positive and one root is negative, or in other words, $\operatorname{sgn}\left(ac\right)=-1$. By the change of variables $Q\to -Q$, we may assume that $a<0<c$. The corollary now follows. \end{proof} \begin{proof}[Proof of Lemma \textnormal{Re}f{lem:Ival}] We first set $\widetilde{y}:= \frac{2a}{\sqrt{D}}y$ and make the change of variables $u=\frac{2a}{\sqrt{D}} w$, from which we obtain $$ \mathcal{I}_{a,D,k}\left(y\right) = \frac{D^{k-\frac{1}{2}}}{a^{k}2^{2k-1}} \int_{-\infty}^{\infty} \left(\left(u+i\widetilde{y}\right)^2-1\right)^{k-1} \varphi\left(\arctan\left(\frac{2\widetilde{y}}{u^2+\widetilde{y}^2-1}\right)\right) du. 
$$ Now define \begin{equation}\lambdabel{eqn:Ikdef} \mathcal{I}_k\left(\widetilde{y}\right):=\int_{-\infty}^{\infty} \left(\left(u+i\widetilde{y}\right)^2-1\right)^{k-1} \varphi\left(\arctan\left(\frac{2\widetilde{y}}{u^2+\widetilde{y}^2-1}\right)\right) du. \end{equation} We next show that $\mathcal{I}_{k}\left(\widetilde{y}\right)$ is independent of $\widetilde{y}>1$ (or equivalently $y>\frac{\sqrt{D}}{2a}$). Note that, for $a\in \mathbb{N}$ and $b\pmod{2a}$ ($b^2\equiv D\pmod{4a}$) fixed, either every $Q=\left[a,b,c\right]$ is an element of $\mathcal{A}$ or none of them are, because translations always give two equivalent quadratic forms. Recall that $\xi_{2-2k}\left(f_{k,D,\narrow}M\right)=f_{k,D,\narrow}$ and $D^{2k-1}\left(f_{k,D,\narrow}M\right) = cf_{k,D,\narrow}$, for some constant $c\in \mathbb{C}$, were shown termwise. Hence, arguing as before, but with $a$ fixed, the polynomial in the connected component including $i\infty$ must be constant and hence we get independence of $y>\frac{\sqrt{D}}{2a}$, because no discontinuities exist for $y>\frac{\sqrt{D}}{2a}$. Thus, \eqref{eqn:Ikdef} is constant for $\widetilde{y}>1$. Since \eqref{eqn:Ikdef} is continuous for $\widetilde{y}>0$, (although only constant for $\widetilde{y}\gammaeq 1$) for any $\widetilde{y}\gammaeq 1$ we have that \eqref{eqn:Ikdef} agrees with $$ \lim_{\widetilde{y}\to 1^+} \mathcal{I}_k\left(\widetilde{y}\right)=\mathcal{I}_k\left(1\right)=\int_{-\infty}^{\infty} \left(\left(u+i\right)^2-1\right)^{k-1} \varphi\left(\arctan\left(\frac{2}{u^2}\right)\right)du. $$ It hence suffices to prove \begin{equation}\lambdabel{eqn:Iksuff} \mathcal{I}_k:=\mathcal{I}_k\left(1\right) = \left(-1\right)^{k-1}\frac{2\pi}{2k-1}. \end{equation} We first expand \begin{equation}\lambdabel{eqn:zeros} \left(u+i\right)^2-1 =\left(u-\sqrt{2}\zeta_8^{-1}\right)\left(u-\sqrt{2}\zeta_8^{-3}\right), \end{equation} where $\zeta_n:=e^{\frac{2\pi i}{n}}$. Now rewrite \begin{equation}\lambdabel{eqn:varphi} \sin\left(u\right)^{2k-2} =-\left(-1\right)^{k}2^{2-2k} \sum_{m=0}^{2k-2} \binom{2k-2}{m}\left(-1\right)^{m} e^{i\left(2m-\left(2k-2\right)\right)u}. \end{equation} We may then explicitly integrate \eqref{eqn:varphi} as in definition \eqref{eqn:varphidef} of $\varphi$, yielding $$ \varphi\left(v\right) = -\left(-1\right)^k 2^{2-2k}\left(\binom{2k-2}{k-1}\left(-1\right)^{k-1} v-i \sum_{m\neq k-1} \frac{\binom{2k-2}{m}\left(-1\right)^m}{2m+2-2k} \left(e^{i\left(2m+2-2k\right)v}-1\right)\right). $$ We then use $e^{i\theta}=\cos\left(\theta\right)+i\sin\left(\theta\right)$ and \eqref{eqn:sinarctan} to expand \begin{multline}\lambdabel{eqn:varphiwrite} \varphi\left(\arctan\left(\frac{2}{u^2}\right)\right)=\frac{1}{2^{2k-2}}\Bigg(\binom{2k-2}{k-1}\arctan\left(\frac{2}{u^2}\right) + \left(-1\right)^{k}i\sum_{m\neq k-1} \frac{\binom{2k-2}{m}\left(-1\right)^m}{2m+2-2k} \\ \times \left(\left(\cos\left(\arctan\left(\frac{2}{u^2}\right)\right) + i\sin\left(\arctan\left(\frac{2}{u^2}\right)\right)\right)^{2m+2-2k}-1\right)\Bigg) \\ =\frac{1}{2^{2k-2}}\Bigg( \binom{2k-2}{k-1}\arctan\left(\frac{2}{u^2}\right) + \left(-1\right)^{k}i\sum_{m\neq k-1} \frac{\binom{2k-2}{m}\left(-1\right)^{m}}{2m+2-2k} \left(\frac{u^2+2i}{u^2-2i}\right)^{m+1-k}, \end{multline} since the sum involving $-1$ vanishes. 
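To see that the sum involving $-1$ indeed vanishes, note that the substitution $m\mapsto 2k-2-m$ leaves $\binom{2k-2}{m}$ and $\left(-1\right)^{m}$ unchanged while replacing the denominator $2m+2-2k$ by its negative, so that the terms with $m\neq k-1$ cancel in pairs.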
We now note that $$ f\left(z\right):=-i\left(1-\left(z+i\right)^2\right)^{k-1}\sum_{m\neq k-1} \frac{\binom{2k-2}{m}\left(-1\right)^m}{2m+2-2k} \left(\frac{z^2+2i}{z^2-2i}\right)^{m+1-k} $$ is a meromorphic function in $z$ with no poles in the lower half plane (because the poles at $\sqrt{2}\zeta_8^{-1}$ and $\sqrt{2}\zeta_8^{-3}$ are cancelled by the zeros of order $k-1$ of $\left(\left(z+i\right)^2-1\right)^{k-1}$ from \eqref{eqn:zeros}). In order to evaluate $\mathcal{I}_k$, for $R>0$ we let $C_R$ denote the path from $-R$ to $R$ followed by the semi-circle in the lower half plane from $R$ to $-R$. Define $$ g^{\pm}\left(z\right):= \frac{i}{2}\log\left(\frac{z-\sqrt{2}\zeta_8^{\pm 1}}{z-\sqrt{2}\zeta_8^{\pm 3}}\right), $$ where $\log\left(z\right)$ is the principal branch. One easily checks that the branch cuts for $g^{\pm}$ are the the lines connecting $\zeta_8^{\pm 1}$ and $\zeta_8^{\pm 3}$ and the branch cuts for $\log\left(\frac{z^2-2i}{z^2+2i}\right)$ are those lines radially from the point $0$ to $\sqrt{2}\zeta_8^{2j-1}$ ($1\leq j\leq 4$). Hence the sum of the logarithms equals the logarithm of the product for every $z\in C_R$ by the identity theorem (since they agree when the parameter is real). Therefore, for all $z\in C_R$, we have (see (4.4.31) of \cite{AS}) $$ g^+\left(z\right)-g^-\left(z\right) =\frac{i}{2}\log\left(\frac{z^2-2i}{z^2+2i}\right) = \operatorname{arccot}\left(\frac{z^2}{2}\right)= \arctan\left(\frac{2}{z^2}\right). $$ We may henceforth interchange between the original definition of $\varphi\left(\operatorname{arccot}\left(\frac{z^2}{2}\right)\right)$ and that involving logarithms (in particular, in \eqref{eqn:varphiwrite}). We hence evaluate $$ \int_{C_R} \left(f(z) + 2^{2-2k}\binom{2k-2}{k-1} \left(\left(z+i\right)^2-1\right)^{k-1}\left(g^+\left(z\right)-g^{-}\left(z\right)\right)\right) dz. $$ Using \eqref{eqn:sinarctan}, for those $z$ on the semi-circle, one easily obtains $$ \left|\left(\left(z+i\right)^2-1\right)^{k-1}\varphi\left(\operatorname{arccot}\left(\frac{z^2}{2}\right)\right)\right|\ll R^{-2k}\to 0. $$ Hence the integral along the semi-circle vanishes for $R\to\infty$. Therefore $$ \mathcal{I}_k = \lim_{R\to\infty} \int_{C_{R}} \left(f(z)+2^{2-2k}\binom{2k-2}{k-1} \left(\left(z+i\right)^2-1\right)^{k-1}\left(g^+\left(z\right)-g^{-}\left(z\right)\right)\right)dz. $$ Since $f\left(z\right)$ and $\left(\left(z+i\right)^2-1\right)^{k-1}g^+\left(z\right)$ are holomorphic in the lower half plane, the Residue Theorem yields $$ \int_{C_R} \left(f\left(z\right)+ 2^{2-2k} \binom{2k-2}{k-1} \left(\left(z+i\right)^2-1\right)^{k-1}g^{+}\left(z\right)\right) dz =0. $$ Using integration by parts, one obtains \begin{multline}\lambdabel{eqn:g-simp} \int_{C_{R}} \left(\left(z+i\right)^2-1\right)^{k-1}g^-\left(z\right)dz=\frac{i}{2}\int_{C_{R}} \left(\left(z+i\right)^2-1\right)^{k-1}\log\left(\frac{z-\sqrt{2}\zeta_8^{-1}}{z-\sqrt{2}\zeta_8^{-3}}\right)dz\\ =-\frac{i}{2}\int_{C_{R}} \left(\int_0^z \left(\left(u+i\right)^2-1\right)^{k-1}du\right)\left(\frac{1}{z-\sqrt{2}\zeta_8^{-1}} - \frac{1}{z-\sqrt{2}\zeta_{8}^{-3}}\right)dz. 
\end{multline} Applying the Residue Theorem to \eqref{eqn:g-simp} (noting simple poles and a minus sign from taking the integral clockwise) and recalling the identity \eqref{eqn:betacompdef}, we obtain \begin{align*} \mathcal{I}_k &=2^{2-2k}\pi\binom{2k-2}{k-1}\int_{\sqrt{2}\zeta_8^{-3}}^{\sqrt{2}\zeta_8^{-1}} \left(\left(u+i\right)^2-1\right)^{k-1}du\\ &=2\pi \left(-1\right)^{k-1} \binom{2k-2}{k-1}\int_{0}^1 \left(u\left(1-u\right)\right)^{k-1} du = 2\pi \left(-1\right)^{k-1} \binom{2k-2}{k-1}\beta\left(k,k\right) = \frac{2\pi\left(-1\right)^{k-1}}{2k-1}, \end{align*} where $u\to 2u+\sqrt{2}\zeta_8^{-3}$ in the second identity. This is the desired equality \eqref{eqn:Iksuff}. \end{proof} We are finally ready to prove Theorem \textnormal{Re}f{thm:converge}. By taking linear combinations of the $f_{k,D,\narrow}M$, it suffices to show the following. \begin{theorem}\lambdabel{thm:localMaass} For $k>1$, $D$ a non-square discriminant, and $\mathcal{A}\subset \mathbb{Q}D$ a narrow class, the function $f_{k,D,\narrow}M$ is a weight $2-2k$ locally harmonic Maass form with exceptional set $E_D$. \end{theorem} \begin{proof} Suppose that $\gammaamma_1\in\Gammaamma_1$. By Lemma \textnormal{Re}f{lem:PPMQ}, we may choose a hyperbolic pair $\eta,\eta'$ so that $$ f_{k,D,\narrow}M\Big|_{2-2k}\gammaamma_1 = \frac{D^{-\frac{k}{2}}}{\binom{2k-2}{k-1}\pi}f_{k,D,\narrow}Meta\Big|_{2-2k}\gammaamma_1 = \frac{D^{-\frac{k}{2}}}{\binom{2k-2}{k-1}\pi}\sum_{\gammaamma\in \Gammaamma_{\eta}\backslash \Gammaamma_1} \widehat{\varphi}\Big |_{2-2k} A\gammaamma\gammaamma_1. $$ Due to the absolute convergence proven in Proposition \textnormal{Re}f{prop:converge}, we may rearrange the sum, from which we conclude weight $2-2k$ modularity. The local harmonicity of $f_{k,D,\narrow}M$ was shown in \eqref{eqn:Deltaxi}. Condition 3 is precisely Proposition \textnormal{Re}f{prop:boundary}. The functions $\mathcal{E}_{f_{k,D,\narrow}}$ and $f_{k,D,\narrow}^*$ decay towards $i\infty$. Thus, using \eqref{eqn:polycomp} with $\mathcal{C}=\mathcal{C}_{i\infty}$, \eqref{eqn:PPMPoly} implies that $f_{k,D,\narrow}M$ is bounded towards $i\infty$. \end{proof} \section{Relations to period polynomials}\lambdabel{sec:periodpoly} The main goal of this section is to use Corollary \textnormal{Re}f{cor:polyPPkD} to supply a different perspective on Theorem \textnormal{Re}f{thm:ratperiod}, i.e., the fact that the even periods of $f_{k,D}$ are rational. We begin by giving a formal definition of periods and period polynomials. For $f\in S_{2k}$ and $0\leq n\leq 2k-2$, the \begin{it}$n$-th period of $f$\end{it} is defined by (see Section 1.1 of \cite{KohnenZagierRational}) \begin{equation}\lambdabel{eqn:nthperiod} r_n\left(f\right):=\int_{0}^{\infty} f\left(it\right)t^n dt = n!\left(2\pi\right)^{-n-1}L\left(f,n+1\right), \end{equation} where $L\left(f,s\right)$ is the $L$-series associated to $f$. These can be nicely packaged into a \begin{it}period polynomial\end{it} $$ r\left(f;X\right):=\int_{0}^{i\infty} f\left(z\right) \left(X-z\right)^{2k-2}dz = \sum_{n=0}^{2k-2} i^{1-n}\binom{2k-2}{n}r_n\left(f\right) X^{2k-2-n} $$ and we denote the even part of the period polynomial by $$ r^+\left(f;X\right):=\sum_{\substack{0\leq n\leq 2k-2\\ n\text{ even}}} \left(-1\right)^{\frac{n}{2}}\binom{2k-2}{n}r_n\left(f\right) X^{2k-2-n}. $$ We now describe how the polynomials $P_{\mathcal{C},\mathcal{A}}$ in Theorem \textnormal{Re}f{thm:expansion} are related to period polynomials. 
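For completeness, we note that the coefficient formula above is simply the binomial expansion of the defining integral: substituting $z=it$ (so that $dz=i\,dt$) and expanding $\left(X-it\right)^{2k-2}$ yields
$$
r\left(f;X\right)=\sum_{n=0}^{2k-2}\binom{2k-2}{n}\left(-i\right)^{n}i\,X^{2k-2-n}\int_{0}^{\infty}f\left(it\right)t^{n}dt=\sum_{n=0}^{2k-2}i^{1-n}\binom{2k-2}{n}r_n\left(f\right)X^{2k-2-n},
$$
since $\left(-i\right)^{n}i=i^{1-n}$.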
We note that while neither $f_{k,D,\narrow}^*$ nor $\mathcal{E}_{f_{k,D,\narrow}}$ satisfy modularity, up to the constant term they are the non-holomorphic and holomorphic parts of certain harmonic weak Maass forms, respectively. This follows because the operator $\xi_{2-2k}$ is surjective by work of Bruinier and Funke \cite{BruinierFunke} and $\mathcal{D}^{2k-1}$ is surjective by work of Bruinier, Ono, and Rhoades \cite{BruinierOnoRhoades}. For $\gammaamma\in \Gammaamma_1$, $f_{k,D,\narrow}^*$ and $\mathcal{E}_{f_{k,D,\narrow}}$ satisfy \begin{align} \lambdabel{eqn:rpoly} f_{k,D,\narrow}^*\Big|_{2-2k}\gammaamma\left(\tau\right) &= f_{k,D,\narrow}^* + r_{\gammaamma}\left(\tau\right),\\ \lambdabel{eqn:Rpoly} \mathcal{E}_{f_{k,D,\narrow}}\Big|_{2-2k}\gammaamma\left(\tau\right) &= \mathcal{E}_{f_{k,D,\narrow}} + R_{\gammaamma}\left(\tau\right) \end{align} for certain period polynomials $r_{\gammaamma}$ and $R_{\gammaamma}$ (each is of degree at most $2k-2$). However, it is known that there exists $C\in \mathbb{C}$ such that \begin{equation}\lambdabel{eqn:Knopp} -\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1 }}R_{\gammaamma}\left(\tau\right) =r_{\gammaamma}^{c}\left(\tau\right) + C\left(j\left(\gammaamma,\tau\right)^{2k-2}-1\right), \end{equation} where $P^{c}\in \mathbb{C}[X]$ is the polynomial whose coefficients are the complex conjugates of the coefficients of $P\in \mathbb{C}[X]$ \cite{Knopp}. The following proposition relates the period polynomials to the polynomials $P_{\mathcal{C},\mathcal{A}}$ from the previous section. \begin{proposition} Suppose that $D>0$ is a non-square discriminant, $\mathcal{A}\subseteq\mathbb{Q}D$ is a narrow class, $\mathcal{C}$ is a connected component of $\mathbb{H}\setminus E_D$, $\tau\in \mathcal{C}$, and $\gammaamma\in \Gammaamma_1$. Then $$ P_{\mathcal{C},\mathcal{A}}\left(\tau\right) = D^{\frac{1}{2}-k}r_{\gammaamma}\left(\tau\right)- D^{\frac{1}{2}-k}\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} R_{\gammaamma}\left(\tau\right) + P_{\gammaamma \mathcal{C},\mathcal{A}}\left(\gammaamma\tau\right) j\left(\gammaamma,\tau\right)^{2k-2}. $$ In particular, if $\gammaamma\mathcal{C}=\mathcal{C}_{i\infty}$, then \begin{equation}\lambdabel{eqn:periodinfty} P_{\mathcal{C},\mathcal{A}}\left(\tau\right) = D^{\frac{1}{2}-k}r_{\gammaamma}\left(\tau\right)- D^{\frac{1}{2}-k}\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} R_{\gammaamma}\left(\tau\right)+c_{\infty}\left(\mathcal{A}\right) j\left(\gammaamma,\tau\right)^{2k-2}. \end{equation} \end{proposition} \begin{proof} By the modularity of $f_{k,D,\narrow}M$, we have $$ 0=f_{k,D,\narrow}M\Big|_{2-2k}\gammaamma\left(\tau\right)-f_{k,D,\narrow}M\left(\tau\right). $$ However, plugging in \eqref{eqn:PPMPoly} and definitions \eqref{eqn:rpoly} and \eqref{eqn:Rpoly} of the period polynomials, this becomes $$ 0 = D^{\frac{1}{2}-k}r_{\gammaamma}\left(\tau\right) -D^{\frac{1}{2}-k}\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} R_{\gammaamma}\left(\tau\right)+P_{\gammaamma \mathcal{C},\mathcal{A}}\left(\gammaamma\tau\right) j\left(\gammaamma,\tau\right)^{2k-2} - P_{\mathcal{C},\mathcal{A}}\left(\tau\right). $$ This yields the first statement of the proposition. The second statement simply follows from the fact that $P_{\mathcal{C}_{i\infty},\mathcal{A}}=c_{\infty}\left(\mathcal{A}\right)$ by \eqref{eqn:polycomp}. 
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:ratperiod}] In order to get information about the even periods, we first show that \begin{equation}\label{eqn:r+} r\left(f_{k,D};\tau\right)-r^c\left(f_{k,D};\tau\right)=2ir^+\left(f_{k,D};\tau\right). \end{equation} To see this, note that $f_{k,D}\left(iy\right)$ is real because the change of variables $b\to -b$ yields $$ \sum_{Q=\left[a,b,c\right]\in \mathbb{Q}D} \left(-ay^2+iby+c\right)^{-k} = \overline{\sum_{Q=\left[a,b,c\right]\in \mathbb{Q}D} \left(-ay^2+iby+c\right)^{-k}}. $$ The integral \eqref{eqn:nthperiod} defining $r_{n}\left(f\right)$ is hence also real, from which \eqref{eqn:r+} follows. Plugging $\gamma=S$ into \eqref{eqn:periodinfty} and summing over all narrow classes, we obtain \begin{equation}\label{eqn:PCreal} P_{\mathcal{C}_{0}}\left(\tau\right) = D^{\frac{1}{2}-k}r_S\left(\tau\right)- D^{\frac{1}{2}-k}\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} R_{S}\left(\tau\right)+c_{\infty}\tau^{2k-2}, \end{equation} where $P_{\mathcal{C}_0}$ was defined in \eqref{eqn:PC0def}. However, it can be proven (see (1.13) of \cite{BringmannGuerzhoyKentOno}) that \begin{equation}\label{eqn:RSpoly} R_S\left(\tau\right) = -\frac{\left(2\pi i\right)^{2k-1}}{\left(2k-2\right)!} r\left(f_{k,D};\tau\right). \end{equation} Hence by \eqref{eqn:Knopp} and \eqref{eqn:r+}, we may rewrite \eqref{eqn:PCreal} as \begin{multline}\label{eqn:PC0rewrite} P_{\mathcal{C}_{0}}\left(\tau\right) = -2^{1-2k}iD^{\frac{1}{2}-k}\left( -r^c\left(f_{k,D};\tau\right) + r\left(f_{k,D};\tau\right)\right) + C\left(\tau^{2k-2}-1\right) + c_{\infty}\tau^{2k-2}\\ =2^{2-2k}D^{\frac{1}{2}-k} r^+\left(f_{k,D};\tau\right) + C\left(\tau^{2k-2}-1\right) + c_{\infty}\tau^{2k-2} \end{multline} for some constant $C$. We now use Corollary \ref{cor:polyPPkD} to rewrite the left-hand side, obtaining $$ c_{\infty} + 2^{3-2k}D^{\frac{1}{2}-k}\sum_{\substack{Q=[a,b,c]\in \mathbb{Q}D\\ a<0<c}}Q\left(\tau,1\right)^{k-1} = 2^{2-2k}D^{\frac{1}{2}-k}r^+\left(f_{k,D};\tau\right) + C\left(\tau^{2k-2}-1\right) + c_{\infty}\tau^{2k-2}. $$ Rearranging yields \eqref{eqn:ratperiod}, completing the proof. \end{proof} \begin{remark} We note that the above method may also be applied to reprove the rationality of the even periods of $f_{k,D,\mathcal{A}}+f_{k,D,-\mathcal{A}}$ (cf. Theorem 5 of \cite{KohnenZagierRational}). Note that a symmetrization is made here so that a statement similar to \eqref{eqn:r+} holds. Without this symmetrization, one would only obtain rationality for the imaginary part of the periods of $f_{k,D,\mathcal{A}}$. \end{remark} \section{Hecke operators}\label{sec:Hecke} In this section, we investigate the action of the Hecke operators on $\mathcal{F}_{1-k,D}$, proving Theorem \ref{thm:Hecke}. We closely follow the argument of Parson \cite{Parson} used to compute the action of the Hecke operators on $f_{k,D}$. For a prime $p$, recall that the weight $2-2k$ Hecke operator $T_p$ acts on a translation invariant function $f:\mathbb{H}\to \mathbb{C}$ by \begin{equation}\label{eqn:Heckedef} f\Big|_{2-2k}T_p\left(\tau\right) := p^{1-2k} f\left(p\tau\right)+ p^{-1}\sum_{r\pmod{p}} f\left(\frac{\tau+r}{p}\right).
\end{equation} In order to prove Theorem \textnormal{Re}f{thm:Hecke}, we first compute the action of $T_p$ on the intermediary function $$ f_{k,D,\narrow}MD{D}\left(\tau\right):=\frac{D^{\frac{1-k}{2}}}{\binom{2k-2}{k-1}\pi}\sum_{Q=\left[a,b,c\right]\in \mathbb{Q}D'}\operatorname{sgn}\left(a\left|\tau\right|^2 + bx +c\right) Q\left(\tau,1\right)^{k-1} \psi\left(\frac{Dy^2}{\left|Q\left(\tau,1\right)\right|^2_{\phantom{-}}}\right), $$ where $\mathbb{Q}D'$ denotes the set of primitive $Q=[a,b,c]\in \mathbb{Q}D$ (i.e., those with $\left(a,b,c\right)=1$). \begin{proof}[Proof of Theorem \textnormal{Re}f{thm:Hecke}] We first prove that \begin{equation}\lambdabel{eqn:Heckeprim} f_{k,D,\narrow}MD{D}\Big|_{2-2k} T_p = \begin{cases} p^{-k} f_{k,D,\narrow}MD{Dp^2} + p^{-k}\left(1+\left(\frac{D}{p}\right)\right)f_{k,D,\narrow}MD{D}& \text{if }p^2\nmid D_{\vphantom{\substack{A\\A}}},\\ p^{-k}f_{k,D,\narrow}MD{Dp^2} +p^{-k} \left(p-\left(\frac{D/p^{2}}{p}\right)\right)f_{k,D,\narrow}MD{\frac{D}{p^{2}}}& \text{if }p^2\mid D. \end{cases} \end{equation} We define the multiset $$ \mathcal{B}:=\left\{\left[ap^2,bp,c\right], \Big[a,bp+2ar,ar^2+bpr+cp^2\Big]: 0\leq r\leq p-1,\ a>0,\ \left[a,b,c\right]\in \mathbb{Q}D'\right\} $$ and for $g\in \mathbb{N}$, we define the set $$ \mathcal{B}\left(g\right):=\left\{[\mathsf{A},\mathsf{B},\mathsf{C}]\in \mathbb{Q}Dp: \left(\mathsf{A},\mathsf{B},\mathsf{C}\right)=g\right\}. $$ We first note that all $Q\in\mathcal{B}$ have discriminant $Dp^2$. A direct calculation yields $$ f_{k,D,\narrow}MD{D}\Big|_{2-2k} T_p\left(\tau\right) = \sum_{Q\in \mathcal{B}} \operatorname{sgn}\left(a\left|\tau\right|^2 + bx +c\right) Q\left(\tau,1\right)^{k-1}\varphi\left(\arctan\left|\tfrac{\sqrt{D} y}{a\left|\tau\right|^2 + bx +c}\right|\right). $$ In determining the action of the Hecke operators on the classical hyperbolic Poincar\'e series, Parson \cite{Parson} determined precisely how many choices of primitive $\left[a,b,c\right]\in \mathbb{Q}D$ yield a representation of each $[\mathsf{A},\mathsf{B},\mathsf{C}]\in\mathcal{B}\left(g\right)$ with $g\in \left\{1,p,p^2\right\}$. Then \eqref{eqn:Heckeprim} follows from this enumeration and the fact that each summand in \eqref{eqn:PPkDdef} is homogeneous of degree $k-1$ in the variables $a,b,c$. Denote $D=\Delta f^2$ with $\Delta$ a fundamental discriminant. We make use of the identity $$ \mathcal{F}_{1-k,D}D{D} = D^{-\frac{k}{2}}\sum_{g\mid f} f_{k,D,\narrow}MD{\Delta g^2} $$ and apply \eqref{eqn:Heckeprim} to $f_{k,D,\narrow}MD{\Delta g^2}$. This yields \begin{multline}\lambdabel{eqn:Heckewprim} \mathcal{F}_{1-k,D}D{D}\Big|_{2-2k}{T_p} =D^{-\frac{k}{2}}\sum_{g^2\mid D} f_{k,D,\narrow}MD{\Delta g^2}\Big|_{2-2k}T_p\\ = \left(Dp^2\right)^{-\frac{k}{2}}\sum_{g\mid f,\;p\nmid g}\left( f_{k,D,\narrow}MD{\Delta \left(gp\right)^2} + \left(1+\left(\tfrac{\Delta g^2}{p}\right)\right)f_{k,D,\narrow}MD{\Delta g^2}\right) \\ +\left(Dp^2\right)^{-\frac{k}{2}} \sum_{g\mid f,\;p\mid g} \left(f_{k,D,\narrow}MD{\Delta \left(gp\right)^2} + \left(p-\left(\tfrac{\Delta \left(g/p\right)^2}{p}\right)\right)f_{k,D,\narrow}MD{\Delta\left(\frac{g}{p}\right)^{2}}\right). 
\end{multline} We next combine $$ \sum_{g\mid f,\;p\nmid g}\left(f_{k,D,\narrow}MD{\Delta \left(gp\right)^2} + f_{k,D,\narrow}MD{\Delta g^2}\right) + \sum_{p\mid g\mid f } f_{k,D,\narrow}MD{\Delta \left(gp\right)^2} = \sum_{g\mid fp} f_{k,D,\narrow}MD{\Delta g^2} =\left(Dp^2\right)^{\frac{k}{2}}\mathcal{F}_{1-k,D}D{Dp^2} $$ and $$ \sum_{g\mid f,\;p\mid g}f_{k,D,\narrow}MD{\Delta\left(\frac{g}{p}\right)^{2}}=D^{\frac{k}{2}}p^{-k}\mathcal{F}_{1-k,D}D{\frac{D}{p^{2}}} $$ to rewrite the right hand side of \eqref{eqn:Heckewprim} as $$ \mathcal{F}_{1-k,D}D{Dp^2} + p^{1-2k}\mathcal{F}_{1-k,D}D{\frac{D}{p^{2}}} + p^{-k}D^{-\frac{k}{2}}\left(\sum_{g\mid f,\;p\nmid g}\left(\tfrac{\Delta g^2}{p}\right)f_{k,D,\narrow}MD{\Delta g^2} - \sum_{g\mid f,\;p\mid g}\left(\tfrac{\Delta \left(g/p\right)^2}{p}\right)f_{k,D,\narrow}MD{\Delta\left(\frac{g}{p}\right)^{2}}\right). $$ If $p\nmid f$, then \eqref{eqn:Hecke} follows by noting that $\left(\frac{\Delta f^2}{p}\right)=\left(\frac{\Delta g^2}{p}\right)$ for every $g\mid f$. If $p\mid f$, then we note that $\left(\frac{\Delta \left(g/p\right)^2}{p}\right)=0$ unless $p\| g$. In this case, the two remaining sums cancel by making the change of variables $g\to gp$ in the last sum. Hence when $p\mid f$ one obtains $$ \mathcal{F}_{1-k,D}D{D}\Big|_{2-2k}T_p = \mathcal{F}_{1-k,D}D{Dp^2} + p^{1-2k}\mathcal{F}_{1-k,D}D{\frac{D}{p^{2}}}, $$ from which \eqref{eqn:Hecke} follows because $\left(\frac{D}{p}\right)=0$. This completes the proof. \end{proof} \section{A lift of $f_{k,D}$ from \cite{DITrational}}\lambdabel{sec:DIT} As alluded to in the introduction, the functions $F_k(\tau, Q)$ constructed coefficient-wise in \cite{DITrational} via cycle integrals are closely connected to harmonic weak Maass forms related to $f_{k,D}$. In this section, we explicitly investigate this connection, using their functions to universally construct a lift of $f_{k,D}$. This leads to an intriguing relation to the locally harmonic Maass forms $\mathcal{F}_{1-k,D}$. In order to state this connection, we define $$ \mathcal{H}_k(\tau):=f_{k,D}^*\left(\tau\right) -\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} \mathcal{E}_{f_{k,D}}\left(\tau\right). $$ Although the following proposition is almost certainly known to the authors of \cite{DITrational}, they do not explicitly state it. We do so here for the benefit of the reader. \begin{proposition}\lambdabel{prop:lift} There exists a constant $C\in \mathbb{C}$ such that $$ \sum_{Q\in \mathbb{Q}D/\Gammaamma_1}F_k(\tau,Q) + 2^{2k-2}\mathcal{H}_k(\tau) + C $$ is a weight $2-2k$ harmonic weak Maass form. \end{proposition} \begin{remark} Suppose that $C\in \mathbb{C}$ satisfies the conditions of the lemma. Then $$ \mathcal{G}_{1-k,D}(\tau):=\sum_{Q\in \mathbb{Q}D/\Gammaamma_1}F_k(\tau,Q) + 2^{2k-2} \mathcal{H}_k(\tau) + C $$ is a harmonic weak Maass form for which, by \eqref{eqn:PP*xi} and \eqref{eqn:PPExi}, we have $$ \xi_{2-2k}\left(\mathcal{G}_{1-k,D}\right)=2^{2k-2} f_{k,D}. $$ In particular, $2^{2-2k}\mathcal{G}_{1-k,D}$ is a lift of $f_{k,D}$. \end{remark} \begin{proof} By Theorem 3 of \cite{DITrational}, we have \begin{multline}\lambdabel{eqn:Fkmodular} \sum_{Q\in \mathbb{Q}D/\Gammaamma_1}F_k(\tau,Q)\Big|_{2-2k}S(\tau)- \sum_{Q\in \mathbb{Q}D/\Gammaamma_1}F_k\left(\tau,Q\right) =- \sum_{\substack{[a,b,c]\in \mathbb{Q}D\\ ac<0}}\operatorname{sgn}(c) \left(a\tau^2+b\tau+c\right)^{k-1}\\ =-2 \sum_{\substack{[a,b,c]\in \mathbb{Q}D\\ a<0<c}} \left(a\tau^2+b\tau+c\right)^{k-1}. 
\end{multline} By \eqref{eqn:rpoly}, \eqref{eqn:Rpoly}, and \eqref{eqn:Knopp}, there exists a constant $C_1\in \mathbb{C}$ such that $$ \mathcal{H}_k\Big|_{2-2k}S(\tau)-\mathcal{H}_k(\tau) = r_{S}(\tau) -\frac{\left(2k-2\right)!}{\left(4\pi\right)^{2k-1}} R_S(\tau)= r_{S}(\tau) +r_S^{c}(\tau) +C_1\left(\tau^{2k-2}-1\right). $$ Using \eqref{eqn:RSpoly}, \eqref{eqn:Knopp}, and \eqref{eqn:r+} (as in the computation for \eqref{eqn:PC0rewrite}), we obtain \begin{equation}\lambdabel{eqn:Hkmodular} \mathcal{H}_k\Big|_{2-2k}S(\tau)-\mathcal{H}_k(\tau) = 2^{2-2k}r^+\left(f_{k,D};\tau\right) + C_1\left(\tau^{2k-2}-1\right). \end{equation} By Theorem 4 of \cite{KohnenZagierRational} (see also Theorem \textnormal{Re}f{thm:ratperiod}), there exists a constant $C_2\in \mathbb{C}$ (given explicitly in \cite{KohnenZagierRational}) such that $$ r^+\left(f_{k,D};\tau\right)=2\sum_{\substack{\left[a,b,c\right]\in\mathbb{Q}D \\ a<0<c}}\left(a\tau^2+b\tau+c\right)^{k-1} + C_2\left(\tau^{2k-2}-1\right). $$ Setting $C:=-2^{2k-2}C_1-C_2$ and combining \eqref{eqn:Fkmodular} and \eqref{eqn:Hkmodular} hence yields \begin{multline}\lambdabel{eqn:HFrel} \mathcal{H}_{k}\Big|_{2-2k}S(\tau)-\mathcal{H}_{k}(\tau)=-2^{2-2k}\left(\sum_{Q\in \mathbb{Q}D/\Gammaamma_1}F_k(\tau,Q)\Big|_{2-2k}S(\tau)- \sum_{Q\in \mathbb{Q}D/\Gammaamma_1}F_k\left(\tau,Q\right)\right)\\ -2^{2-2k} C\left(\tau^{2k-2}-1\right). \end{multline} The claim follows by computing the action of $S$ on the constant function. \end{proof} Combining Proposition \textnormal{Re}f{prop:lift} with Theorem \textnormal{Re}f{thm:PPkDexpansion} yields a surprising relationship between the functions $F_k(\tau,Q)$ and the local polynomial $P_{\mathcal{C}}$ (explicitly given via \eqref{PCexplicit}) which may warrant further investigation. \begin{proposition} There exists a constant $C\in \mathbb{C}$ such that $$ \sum_{Q\in \mathbb{Q}D/\Gammaamma_1}F_k(\tau,Q) - 2^{2k-2} D^{k-\frac{1}{2}} P_{\mathcal{C}}(\tau)+C $$ satisfies weight $2-2k$ modularity on $\Gammaamma_1$. \end{proposition} \begin{remark} The function given in the proposition is locally holomorphic, and is hence a very special kind of locally harmonic Maass form. \end{remark} \begin{proof} By Theorem \textnormal{Re}f{thm:PPkDexpansion} and the modularity of $\mathcal{F}_{1-k,D}$, we have $$ P_{\mathcal{C}}\Big|_{2-2k}S(\tau) - P_{\mathcal{C}}(\tau) =-D^{\frac{1}{2}-k} \left(\mathcal{H}_{k}\Big|_{2-2k}S(\tau)-\mathcal{H}_{k}(\tau)\right). $$ Plugging in Proposition \textnormal{Re}f{prop:lift} (or \eqref{eqn:HFrel}) yields the claim. \end{proof} \end{document}
\begin{document} \allowdisplaybreaks \mathrm{d}ate{\today} \subjclass[]{Primary: 35Q35, 35B40; Secondary: 35Q83.} \keywords{Cucker-Smale model, presureless Euler system, classical solution, time delay, asymptotic behavior, flocking} \begin{abstract} We study a hydrodynamic Cucker-Smale-type model with time delay in communication and information processing, in which agents interact with each other through normalized communication weights. The model consists of a pressureless Euler system with time delayed non-local alignment forces. We resort to its Lagrangian formulation and prove the existence of its global in time classical solutions. Moreover, we derive a sufficient condition for the asymptotic flocking behavior of the solutions. Finally, we show the presence of a critical phenomenon for the Eulerian system posed in the spatially one-dimensional setting. \varepsilonnd{abstract} \maketitle \centerline{\mathrm{d}ate} \tableofcontents \section{Introduction} We study the existence of global classical solutions and asymptotic behavior of the following system of pressureless Euler equations with time delayed non-local alignment forces: \begin{eqnarray} \label{Eul1} \partial_t \rho_t + \nabla \cdot (\rho_t u_t) &=& 0, \\ \partial_t (\rho_t u_t) + \nabla \cdot (\rho_t u_t \otimes u_t) &=& \rho_t\frac{\int_{\mathbb R^d} \psi(x-y) \rho_{t-\tau}(y)u_{t-\tau}(y)\,dy}{\int_{\mathbb R^d} \psi(x-y) \rho_{t-\tau}(y)\,dy} - \rho_t u_t, \label{Eul2} \end{eqnarray} for $t\geq 0$ and $x\in\mathbb R^d$ with $d\in\mathbb{N}$ the space dimension. The constant $\tau \geq 0$ denotes the fixed delay in communication and information processing. Here and in the sequel we denote by the subscript $\{\cdot\}_t$ the time-dependence of the respective variable. The \varepsilonmph{influence function} $\psi: \mathbb R^d \to \mathbb R_+$ satisfies the following set of assumptions: \begin{assumption}\label{ass:psi} The influence function $\psi$ is continuous and continuously differentiable on $\mathbb R^d$ with uniformly bounded derivatives up to order $\varepsilonll \in \mathbb{N} \cup \{0\}$. Moreover, it is radially symmetric, i.e., there exists a function $\widetilde\psi: [0,+\infty) \to (0,+\infty)$ such that \begin{eqnarray*} \psi(x) = \widetilde\psi(|x|)\qquad\mathcal{B}ox{for all }x\in\mathbb R^d. \end{eqnarray*} The function $\widetilde\psi$ is nonincreasing, positive and uniformly bounded on $[0,+\infty)$. Without loss of generality, we assume $\widetilde\psi(0)=1$. \varepsilonnd{assumption} Let us note that the (rescaled) influence function introduced in the seminal papers by Cucker and Smale \cite{CS1, CS2}, namely \begin{equation}\label{psi_cs} \psi(x) = \frac{1}{(1 + |x|^2)^\beta} \varepsilonnd{equation} with $\beta \geq 0$, satisfies the above set of assumptions. Associated to the fluid velocity $u_t$, we define the characteristic flow $\varepsilonta_t: \mathbb R^d \to \mathbb R^d$ by \begin{eqnarray}\label{eta_flow} \frac{\mathrm{d} \varepsilonta_t(x)}{\mathrm{d} t} = u_t(\varepsilonta_t(x)) \quad\mathcal{B}ox{for } t \geq -\tau, \qquad \mathcal{B}ox{subject to } \varepsilonta_{0}(x) = x \in \mathbb R^d. \end{eqnarray} Note that, emanating from $\varepsilonta_{0}(x) = x$ at $t=0$, we solve \varepsilonnd{equation}ref{eta_flow} both forward and backward in time to obtain the characteristics for $t \geq -\tau$. Let us denote the time-varying set $\Omegaega_t := \{x \in \mathbb R^d\,:\, \rho_t(x) \neq 0 \}$ for given initially bounded open set $\Omegaega_0$. 
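Let us also record, for later reference, the standard transport identity along the characteristics; this is a purely formal computation, assuming that $\rho_t$ and $u_t$ are smooth and that $\eta_t$ is a diffeomorphism. The continuity equation \eqref{Eul1} gives
$$
\frac{\mathrm{d}}{\mathrm{d} t}\,\rho_t(\eta_t(x)) = \bigl(\partial_t \rho_t + u_t\cdot\nabla\rho_t\bigr)(\eta_t(x)) = -\rho_t(\eta_t(x))\,(\nabla\cdot u_t)(\eta_t(x)),
$$
while Liouville's formula yields $\frac{\mathrm{d}}{\mathrm{d} t}\det(\nabla\eta_t(x)) = (\nabla\cdot u_t)(\eta_t(x))\,\det(\nabla\eta_t(x))$. Hence $\rho_t(\eta_t(x))\,\det(\nabla\eta_t(x)) = \rho_0(x)$ for $t\geq 0$ and $x\in\Omega_0$, which is precisely the relation \eqref{Lagr2} used in the Lagrangian formulation below.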
The system \varepsilonnd{equation}ref{Eul1}--\varepsilonnd{equation}ref{Eul2} is considered subject to the initial data \begin{eqnarray}\label{EulIC} (\rho_s(x), u_s(x)) = (\bar \rho_s(x),\bar u_s(x)) \qquad\mathcal{B}ox{for } (s,x) \in [-\tau,0]\times \Omegaega_s. \end{eqnarray} Since the total mass is conserved in time, without loss of generality, we may assume that $\rho_t$ is a probability density function, i.e., $\|\rho_t \|_{L^1} = 1$ for all $t \geq 0$ by assuming $\|\rho_0\|_{L^1} = 1$. The system \varepsilonnd{equation}ref{Eul1}--\varepsilonnd{equation}ref{Eul2} can be formally derived from the kinetic Cucker-Smale type model, introduced and studied in \cite{Choi-Haskovec}, \begin{eqnarray} \label{kin_mt} \partialrtial_t f_t + v \cdot \nabla_x f_t + \nabla_v \cdot (F[f_{t-\tau}]f_t) = 0 \qquad\mathcal{B}ox{for } (x,v) \in \mathbb R^d \times \mathbb R^d, \end{eqnarray} with \begin{eqnarray*} F[f_{t-\tau}](x,v) := \mathrm{d}isplaystyle \frac{\int_{\mathbb R^{2d}} \psi(x-y)(w-v)f(y,w,t-\tau)\,\mathrm{d} y\mathrm{d} w}{\int_{\mathbb R^{2d}} \psi(x-y)f(y,w,t-\tau)\,\mathrm{d} y\mathrm{d} w}. \end{eqnarray*} Here the one-particle distribution function $f_t=f_t(x,v)$ is a time-dependent probability measure on the phase space $\mathbb R^d\times\mathbb R^d$, describing the probability of finding a particle at time $t\geq 0$ located at $x\in\mathbb R^d$ and having velocity $v\in\mathbb R^d$. The normalization in the expression for the interaction force $F[f_{t-\tau}]$ is similar to the one introduced in \cite{MT}. We define the mass and momentum densities by \begin{eqnarray*} \rho_t(x) := \int_{\mathbb R^d} f_t(x,v) \,\mathrm{d} v,\qquad \rho_t(x)u_t(x) := \int_{\mathbb R^d} v f_t(x,v) \,\mathrm{d} v. \end{eqnarray*} Then, \varepsilonnd{equation}ref{Eul1} is obtained directly by integrating the Vlasov equation \varepsilonnd{equation}ref{kin_mt} with respect to $v$, while \varepsilonnd{equation}ref{Eul2} follows from taking the first-order moment with respect to $v$ and adopting the monokinetic closure $f_t(x,v) = \rho_t(x) \mathrm{d}elta(v-u_t(x))$. In \cite{Choi-Haskovec} we proved the global existence and uniqueness of measure-valued solutions of \varepsilonnd{equation}ref{kin_mt}. Moreover, we provided a stability estimate in terms of the Monge-Kantorowich-Rubinstein distance, and, as a direct consequence, an asymptotic flocking result for the kinetic system. Here we shall extend these results for the hydrodynamic system \varepsilonnd{equation}ref{Eul1}--\varepsilonnd{equation}ref{Eul2}. In particular, we shall study the existence of global classical solutions, their asymptotic behavior (commonly referred to as \varepsilonmph{flocking}) and propagation of smoothness. We refer to \cite{CCP, CHL} for a recent overview of emergent dynamics of the Cucker-Smale model and its variants. 
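For the reader's convenience, here is a brief formal sketch of the closure computation mentioned above (all manipulations are purely formal). Under the monokinetic ansatz $f_t(x,v) = \rho_t(x)\,\delta(v-u_t(x))$, the $(y,w)$-integrals defining $F[f_{t-\tau}]$ collapse to
$$
F[f_{t-\tau}](x,v) = \frac{\int_{\mathbb R^d}\psi(x-y)\bigl(u_{t-\tau}(y)-v\bigr)\rho_{t-\tau}(y)\,\mathrm{d} y}{\int_{\mathbb R^d}\psi(x-y)\rho_{t-\tau}(y)\,\mathrm{d} y},
$$
and, multiplying \eqref{kin_mt} by $v$ and integrating in $v$ (with an integration by parts in the force term), one finds
$$
\int_{\mathbb R^d} v\otimes v\, f_t\,\mathrm{d} v = \rho_t u_t\otimes u_t,\qquad
\int_{\mathbb R^d} F[f_{t-\tau}](x,v)\, f_t(x,v)\,\mathrm{d} v = \rho_t(x)\,\frac{\int_{\mathbb R^d}\psi(x-y)\rho_{t-\tau}(y)u_{t-\tau}(y)\,\mathrm{d} y}{\int_{\mathbb R^d}\psi(x-y)\rho_{t-\tau}(y)\,\mathrm{d} y} - \rho_t(x) u_t(x),
$$
which is exactly the right-hand side of \eqref{Eul2}.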
\begin{definition}\label{def:classical} We call $(\rho_t, u_t)$ a classical solution of the system \eqref{Eul1}--\eqref{Eul2} on $[0,T)$, subject to the initial datum \eqref{EulIC}, if $\rho_t$ and $u_t$ are continuously differentiable functions on the set $\{(t,x)\in [0,T)\times\Omega_t\}$, the characteristics $\eta_t = \eta_t(x)$ defined by \eqref{eta_flow} are diffeomorphisms for all $t\in [0,T)$, and $\rho_t$ and $u_t$ satisfy the equations \eqref{Eul1}--\eqref{Eul2} pointwise in $\{(t,x)\in [0,T)\times\Omega_t\}$, with the initial datum \eqref{EulIC}. The time derivative at $t=0$ has to be understood as a one-sided derivative. \end{definition} The aim of this paper is to prove the existence of classical solutions of the system \eqref{Eul1}--\eqref{Eul2} and to study their asymptotic behavior for large times; in particular, we shall give sufficient conditions that lead to \emph{asymptotic flocking} in the sense of the definition of Cucker and Smale \cite{CS1, CS2}, see statement \eqref{statement2} of Theorem \ref{thm:flocking}. We carry out this program by resorting to the Lagrangian formulation of the system \eqref{Eul1}--\eqref{Eul2}. This is derived in Section \ref{sec:main}, where we also state our main results. For the global existence of solutions, we provide two different strategies. The first one is based on the Cauchy-Lipschitz theory, the standard tool for constructing solutions of systems of differential equations. It gives the global existence of smooth solutions of the Lagrangian system for merely continuous initial data; no further smoothness or smallness of the initial datum is required. However, unfortunately, this argument cannot be applied to the case $\tau = 0$, i.e., without time delay. Furthermore, it is not clear how to transfer the existence result from the Lagrangian system to the Eulerian system. The second strategy is based on the energy method combined with the large-time behavior estimate of solutions. It also does not require any smallness assumption on the initial data, and it shows that the initial regularity persists globally in time. However, compared to the first strategy, we need the additional assumptions used for the large-time behavior estimate, see Theorem \ref{thm_main}. Despite these additional assumptions, this strategy can be applied directly to the case of no time delay, $\tau = 0$, and it also yields the global existence of classical solutions to the Eulerian system \eqref{Eul1}-\eqref{Eul2} if we further assume that the initial data are small enough. It is worth mentioning that the smallness assumption on the data is not needed to construct the global-in-time classical solutions of the Lagrangian system \eqref{Lagr1}-\eqref{Lagr2}. The rest of the paper is organized as follows. As mentioned above, in Section \ref{sec:main} we discuss the derivation of the Lagrangian system from the Eulerian system \eqref{Eul1}-\eqref{Eul2} and present our main results on the global existence of solutions, the large-time behavior of solutions, and the critical phenomenon for the Eulerian system in the one-dimensional case.
Continuous solutions of the Lagrangian formulation are constructed in Section \ref{sec:existence}. In Section \ref{sec:flocking} we study the asymptotic flocking behavior of the solutions, and in Section \ref{sec:classical} we prove the existence of global classical solutions of the system \varepsilonnd{equation}ref{Eul1}--\varepsilonnd{equation}ref{Eul2} in the sense of Definition \ref{def:classical}. Finally, in Section \ref{sec:critical} we show the presence of a critical phenomenon for the Eulerian system posed in the spatially one-dimensional setting. \section{Lagrangian formulation and main results}\label{sec:main} In the sequel we shall work with the Lagrangian formulation of the system \varepsilonnd{equation}ref{Eul1}--\varepsilonnd{equation}ref{Eul2}. For $x\in\Omegaega_0$ we introduce the functions \begin{eqnarray} \label{hv} h_t(x):= \rho_t(\varepsilonta_t(x)),\qquad v_t(x) := u_t(\varepsilonta_t(x)), \end{eqnarray} where $\varepsilonta_t$ is the characteristic flow defined in \varepsilonnd{equation}ref{eta_flow}. Then, we formally rewrite the system \varepsilonnd{equation}ref{Eul1}--\varepsilonnd{equation}ref{Eul2} as \begin{align}\label{Lagr1} \begin{aligned} \frac{\mathrm{d} \varepsilonta_t(x)}{\mathrm{d} t} &= v_t(x),\\ \frac{\mathrm{d} v_t(x)}{\mathrm{d} t} &= \frac{\int_{\Omega_0} \psi(\varepsilonta_t(x) - \varepsilonta_{t-\tau}(y)) \rho_0(y)v_{t-\tau}(y)\,dy}{\int_{\Omega_0} \psi(\varepsilonta_t(x) - \varepsilonta_{t-\tau}(y)) \rho_0(y)\,dy} - v_t(x), \varepsilonnd{aligned} \varepsilonnd{align} and \begin{eqnarray} \label{Lagr2} h_t(x) = \rho_0(x) \mathrm{d}et(\nabla \varepsilonta_t(x))^{-1}. \end{eqnarray} We refer to \cite{DS, HKK15} for details. The system \varepsilonnd{equation}ref{Lagr1} is subject to the initial datum \begin{eqnarray} \label{LagrIC} v_s(x) := \bar u_s(\varepsilonta_s(x))\qquad\mathcal{B}ox{for } s\in[-\tau,0],\; x\in\Omegaega_0. \end{eqnarray} Classical solutions of the system \varepsilonnd{equation}ref{Lagr1}--\varepsilonnd{equation}ref{LagrIC} on $[0,T)\times\Omegaega_0$ are defined analogously to Definition \ref{def:classical}. Then, as long as the characteristic flow $\varepsilonta_t$ given by \varepsilonnd{equation}ref{eta_flow} is a diffeomorphism between $\Omegaega_0$ and $\Omegaega_t$, \varepsilonnd{equation}ref{hv} defines an equivalence between the classical solutions of the Eulerian system \varepsilonnd{equation}ref{Eul1}--\varepsilonnd{equation}ref{EulIC} and the classical solutions of the Lagrangian formulation \varepsilonnd{equation}ref{Lagr1}--\varepsilonnd{equation}ref{LagrIC}. Observe that the equation \varepsilonnd{equation}ref{Lagr2} for the mass density $h_t$ is decoupled from the system \varepsilonnd{equation}ref{Lagr1} for $(\varepsilonta_t, v_t)$. Therefore, our first main result establishes the global in time existence of solutions of \varepsilonnd{equation}ref{Lagr1}. The mass density $h_t$ is then calculated as a post-processing step, assuming that $\varepsilonta_t$ is a diffeomorphism, i.e., that the matrix $\nabla\varepsilonta_t$ is invertible. \begin{theorem}\label{thm:existence} Let Assumption \ref{ass:psi} be verified and $\tau > 0$. Suppose that the initial datum $(\varepsilonta_s, v_s)\in\mathcal C([-\tau,0]\times\overline\Omegaega_0)$. 
Then there exists a unique global in time solution $\varepsilonta_t\in\mathcal C^1([0,\infty); \mathcal C(\overline{\Omegaega_0}))$, $v_t\in\mathcal C([0,\infty)\times\overline\Omegaega_0)$ of the system \varepsilonnd{equation}ref{Lagr1}, satisfying \begin{eqnarray} \label{global_v} \mathbb{N}orm{v_t}_{L^\infty([0,\infty)\times \Omegaega_0)} \leq \max_{s\in[-\tau,0]} \mathbb{N}orm{v_s}_{L^\infty(\Omegaega_0)}. \end{eqnarray} \varepsilonnd{theorem} \begin{remark}\label{rmk_21}Applying a bootstrapping argument to Theorem \ref{thm:existence} actually yields the global existence of classical solutions to the Lagrangian system \varepsilonnd{equation}ref{Lagr1}. \varepsilonnd{remark} Our second result deals with the asymptotic behavior of the solutions of \varepsilonnd{equation}ref{Lagr1} for large times. In particular, we prove that under additional assumptions on the initial velocity distribution and the influence function, the system exhibits the so-called \varepsilonmph{flocking behavior} \cite{CS1, CS2}, where the velocities converge to a common consensus value, while the mutual distances stay uniformly bounded. Let us introduce the notation for the spatial and velocity diameters of the solution, \begin{eqnarray} \label{dXdV} d_X(t) := \max_{x,y \in \overline{\Omegaega}_0} |\varepsilonta_t(x) - \varepsilonta_t(y)|, \qquad d_V(t) := \max_{x,y \in \overline{\Omegaega}_0} |v_t(x) - v_t(y)|. \end{eqnarray} \begin{theorem}\label{thm:flocking} Let Assumption \ref{ass:psi} be verified and $(\varepsilonta_t, v_t)\in\mathcal C^1([0,\infty); \mathcal C(\overline{\Omegaega_0}))$ be a solution of the system \varepsilonnd{equation}ref{Lagr1}. Suppose that the initial datum $v_s\in\mathcal C([-\tau,0]\times\overline\Omegaega_0)$ and the influence function $\widetilde\psi$ satisfy the following conditions: \begin{eqnarray} \label{R_V} \max_{s \in [-\tau,0]}\max_{x \in \overline \Omegaega_0} |v_s(x)| =: R_V < +\infty, \end{eqnarray} and \begin{eqnarray} \label{iii} d_V(0) + \int_{-\tau}^0 d_V(s)\mathrm{d} s < \int_{d_X(-\tau) + R_V \tau}^\infty \widetilde\psi(s) \ ds, \end{eqnarray} with $d_X$ and $d_V$ defined in \varepsilonnd{equation}ref{dXdV}. Then the spatial diameter $d_X$ of the solution of \varepsilonnd{equation}ref{Lagr1} is uniformly bounded and the velocity diameter $d_V$ decays exponentially in time, \begin{eqnarray} \label{statement2} \sup_{t\geq 0} d_X(t) < +\infty, \qquad d_V(t) \leq \left( \max_{s\in[-\tau,0]} d_V(s) \right) e^{-C t} \quad \mathcal{B}ox{for } t \geq 0, \end{eqnarray} for a suitable constant $C>0$ independent of time. \varepsilonnd{theorem} The assumption \varepsilonnd{equation}ref{iii} can be understood, for a fixed integrable influence function $\widetilde\psi$, as a condition for smallness of the delay $\tau$. Indeed, considering a fixed initial datum with $d_V(s)\varepsilonnd{equation}uiv: \bar d_V >0$ constant for $s\in [-\tau,0]$ and $d_X(-\tau)\varepsilonnd{equation}uiv: \bar d_X\geq 0$, then \varepsilonnd{equation}ref{iii} reads \begin{eqnarray*} (1+\tau) \bar d_V < \int_{\bar d_X + R_v \tau}^\infty \widetilde\psi(s)\,ds. \end{eqnarray*} Clearly, the left-hand side increases with increasing $\tau$, while the right-hand side decreases. So, generically, it is necessary to choose $\tau$ sufficiently small in order to satisfy the flocking condition. This is often the case in alignment models with delay, see, e.g., \cite{EHS}. 
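To make this smallness condition concrete, consider, purely as an illustration, the influence function \eqref{psi_cs} with $\beta=1$, i.e., $\widetilde\psi(s) = (1+s^2)^{-1}$. Then $\int_a^\infty \widetilde\psi(s)\,\mathrm{d} s = \frac{\pi}{2} - \arctan(a)$, and the above condition becomes
$$
(1+\tau)\,\bar d_V < \frac{\pi}{2} - \arctan\bigl(\bar d_X + R_V\tau\bigr),
$$
which indeed fails once $\tau$ (or $\bar d_V$, or $\bar d_X$) is too large.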
On the other hand, if the influence function $\widetilde\psi$ has a heavy tail, i.e., \begin{eqnarray*} \int^\infty \widetilde\psi(s)\,\mathrm{d} s = +\infty, \end{eqnarray*} then assumption \eqref{iii} is satisfied for any initial datum and any $\tau \geq 0$, a situation usually called \emph{unconditional flocking}, see, e.g., \cite{CS1, CS2}. Let us note that if the influence function $\widetilde\psi$ is of the commonly used form \begin{eqnarray*} \widetilde\psi(s) = \frac{1}{(1+s^2)^{\beta}}, \end{eqnarray*} then unconditional flocking takes place for $\beta \in [0,1/2]$, since in this case both Assumption \ref{ass:psi} and condition \eqref{iii} are satisfied. Our third and final main result is concerned with the existence and uniqueness of global classical solutions of the system \eqref{Lagr1}. This result is based on proving sufficient regularity of the solutions constructed in Theorem \ref{thm:existence}, for which we will need the estimates derived in Theorem \ref{thm:flocking}. Thus the result below adopts the assumptions of Theorem \ref{thm:flocking}. \begin{theorem}\label{thm_main} Let Assumption \ref{ass:psi} be verified with some $\ell > \frac{d}2+1$, and let \eqref{R_V} and \eqref{iii} hold. Moreover, we assume that the initial datum satisfies the regularity \begin{eqnarray*} (\bar\rho_s, \bar u_s) \in \mathcal C([-\tau,0];H^\ell(\Omega_s)) \times \mathcal C([-\tau,0];H^{\ell+1}(\Omega_s)). \end{eqnarray*} Then the system \eqref{Lagr1} admits a unique global classical solution $(\eta_t, v_t) \in \mathcal C^1([-\tau,\infty);H^{\ell+1}(\Omega_0)) \times \mathcal C([-\tau,\infty);H^{\ell+1}(\Omega_0))$. \end{theorem} For notational simplicity, we denote by $\|f\|_{L^p}$ the usual $L^p(\Omega_0)$-norm of a function $f(x)$ if there is no confusion, unless otherwise specified. \begin{remark} Note that for $\ell > d/2 + 1$ we have the embedding of the Sobolev space $H^\ell(\Omega)$ into the space $\mathcal C^1(\Omega)$ of continuously differentiable functions. Thus the existence of solutions required for the large-time behavior estimate \eqref{statement2} is also provided by Theorem \ref{thm_main}. \end{remark} So far, we have established the global regularity of solutions for the Cauchy problem in the Lagrangian coordinates, without taking into account the equation \eqref{Lagr2}. In that case, we do not need any smallness assumptions on the initial data, see Theorems \ref{thm:existence} and \ref{thm_main}. However, in order to go back to the Eulerian variables and study the global regularity for the Cauchy problem \eqref{Eul1}-\eqref{Eul2}, a smallness assumption on the initial data is required. In fact, we will show that smooth solutions can blow up in finite time when the initial data are not small, see Theorem \ref{thm_cri} below for details. Note that if the characteristic flow defined in \eqref{eta_flow} is a diffeomorphism, i.e., $\det\nabla \eta_t > 0$ for all $t \geq 0$, then we can consider the Cauchy problem in the Eulerian coordinates. The theorem below shows that if the initial datum is small enough, then the flow $\eta_t$ is indeed a diffeomorphism. \begin{theorem} \label{thm:Eulerian} Let the same assumptions as in Theorem \ref{thm_main} be verified.
Moreover, suppose that the initial data $\bar u_s \in \mathcal C([-\tau,0];H^{\ell+1}(\Omega_s))$ satisfy $\|\nabla \bar u_0\|_{L^2} + \max_{s \in [-\tau,0]}d_V(s) \leq \varepsilon$ for sufficiently small $\varepsilon > 0$. Then we have the global existence and uniqueness of classical solutions to the system \eqref{Eul1}-\eqref{Eul2}. \end{theorem} Finally, we show that the spatially one-dimensional version of the system \eqref{Eul1}--\eqref{EulIC} exhibits a critical threshold in terms of the derivative of the initial datum $\bar u_0$. In particular, if $\partial_x \bar u_0(x)$ is negative enough for some $x\in\Omega_0$, then the corresponding solution blows up in finite time. This is due to the fact that $\eta_t$ ceases to be a diffeomorphism. The critical threshold phenomena for flocking models are studied in \cite{CCTT, TT}. \begin{theorem}\label{thm_cri} Consider the system \eqref{Eul1}--\eqref{Eul2} with $d=1$. Let Assumption \ref{ass:psi} be verified with some $\ell \geq 1$. Moreover, we assume that the influence function $\psi$ satisfies $|\psi'| \leq C|\psi|$ for some positive constant $C$. Let $\overline C = 2CR_V$ with $R_V$ as in \eqref{R_V}. \begin{itemize} \item If $\overline C \leq 1$ and $\partial_x u_0(x) \geq - \left(1 + \sqrt{1 - 4\overline C}\right)/2$ for all $x \in \mathbb R$, then the system has a global classical solution. \item If there exists an $x\in\mathbb R$ such that $\partial_x u_0(x) < -\left( 1 + \sqrt{1 + \overline C}\right)/2$, then the solution blows up in finite time. \end{itemize} \end{theorem} \begin{remark}It is easy to check that the influence function given in \eqref{psi_cs} satisfies $|\psi'| \leq \beta|\psi|$. \end{remark} \begin{remark}The condition $|\psi'| \leq C|\psi|$ can be replaced by the assumptions \eqref{R_V} and \eqref{iii} used for the large-time behavior estimate. In fact, in that case, the constant $\overline C$ is given by \begin{eqnarray*} \overline C = \frac{R_V \|\psi'\|_{L^\infty(0,\infty)}}{\psi(d_X^M)} \left( 1 + \frac{1}{\psi(d_X^M)} \right), \end{eqnarray*} where $d_X^M = \sup_{t\geq -\tau} d_X(t)$. \end{remark} \begin{convention} In the rest of the paper, generic, not necessarily equal, constants will be denoted by $C$. \end{convention} \section{Existence of solutions for the Lagrangian system - proof of Theorem \ref{thm:existence}} \label{sec:existence} We start by proving the following technical lemma. \begin{lemma}\label{lem:growth} Let $u=u(t)$ be a nonnegative, continuous and piecewise $\mathcal C^1$-function satisfying the inequality \begin{eqnarray} \label{growth_ineq} \tot{}{t} u(t) \leq C_1 + C_2 \int_0^t u(s) \mathrm{d} s\qquad\mbox{for almost all } t > 0, \end{eqnarray} with some constants $C_1, C_2 > 0$. Then \begin{eqnarray} \label{growth_claim} u(t) \leq \left(u(0) + \frac{C_1}{\sqrt{C_2}} \right) e^{\sqrt{C_2}\, t} \qquad\mbox{for all } t > 0. \end{eqnarray} \end{lemma} \begin{proof} We integrate \eqref{growth_ineq} on $(0,t)$, \begin{eqnarray} \label{growth_int} u(t) \leq u(0) + C_1 t + C_2 \int_0^t \int_0^s u(r) \mathrm{d} r\mathrm{d} s.
\end{eqnarray} Let us denote \begin{eqnarray*} T := \sup \{ t>0;\, \eqref{growth_claim} \mbox{ holds on } [0,t] \}. \end{eqnarray*} Since $C_1>0$ and due to the continuity of $u=u(t)$, we have $T>0$. For contradiction, assume that $T<+\infty$. Then, obviously, $u(T) = \left(u(0) + \frac{C_1}{\sqrt{C_2}} \right) e^{\sqrt{C_2}\, T}$, and inserting this into \eqref{growth_int} gives \begin{eqnarray*} \left(u(0) + \frac{C_1}{\sqrt{C_2}} \right) e^{\sqrt{C_2}\, T} &\leq& u(0) + C_1 T + C_2 \int_0^T \int_0^s u(r) \mathrm{d} r\mathrm{d} s \\ &\leq& u(0) + C_1 T + \left(u(0) + \frac{C_1}{\sqrt{C_2}} \right) \left( e^{\sqrt{C_2}\, T} - 1- T\sqrt{C_2} \right). \end{eqnarray*} This further implies \begin{eqnarray*} 0 \leq - \frac{C_1}{\sqrt{C_2}} - T \sqrt{C_2} u(0), \end{eqnarray*} a contradiction to the assumption $u(0) \geq 0$ and the positivity of $C_1$ and $C_2$. \end{proof} Before proceeding with the proof of Theorem \ref{thm:existence}, let us make a remark about the structure of the system \eqref{Lagr1}. Although the equation for $v_t$ formally is an integro-differential equation, observe that only $v_{t-\tau}(y)$ appears in the integrand on its right-hand side, while $\eta_t$ appears as $\eta_t(x)$ (while the integration is performed with respect to $y$). Consequently, employing the method of steps, see, e.g., \cite{Smith}, we shall prove the existence of solutions inductively on the intervals $[0,\tau]$, $[\tau, 2\tau]$, etc. On each of these intervals, \eqref{Lagr1} is merely a family of ordinary differential equations for $(\eta_t, v_t)$, parametrized by $x\in\Omega_0$, with $v_{t-\tau}$ taken from the previous step. Therefore, we can employ the classical Cauchy-Lipschitz theorem for proving local in time existence of solutions for each fixed $x\in\Omega_0$. A suitable a priori estimate, uniform in $t$ and $x$, will then provide the global in time existence. \begin{proof}[Proof of Theorem \ref{thm:existence}] We proceed inductively on time intervals of length $\tau > 0$. For a prescribed $(\eta_s,v_s)\in\mathcal C([-\tau,0]\times\overline\Omega_0)$ and for $t\in(0,\tau)$, we denote \begin{eqnarray*} F_t[\eta] := \int_{\Omega_0} \psi(\eta - \eta_{t-\tau}(y)) \rho_0(y)v_{t-\tau}(y)\mathrm{d} y,\qquad G_t[\eta] := \int_{\Omega_0} \psi(\eta - \eta_{t-\tau}(y)) \rho_0(y)\mathrm{d} y. \end{eqnarray*} Then, the system \eqref{Lagr1} is written as \begin{eqnarray} \frac{\mathrm{d} \eta_t(x)}{\mathrm{d} t} &=& v_t(x), \label{ODEs1}\\ \frac{\mathrm{d} v_t(x)}{\mathrm{d} t} &=& \frac{F_t[\eta_t(x)]}{G_t[\eta_t(x)]} - v_t(x), \label{ODEs2} \end{eqnarray} which is a family of ODE systems on $(0,\tau)$, parametrized by $x\in\Omega_0$, subject to the initial datum $\eta_0(x) = x$ and $v_0(x)$ given by the value of $v_s(x)$ at $s=0$. For any fixed $x\in\Omega_0$, we will show local in time existence of solutions of \eqref{ODEs1}--\eqref{ODEs2} employing the classical Cauchy-Lipschitz theorem. We only need to prove that the expression $\frac{F_t[\eta]}{G_t[\eta]}$ is locally Lipschitz-continuous in $\eta$, uniformly in $t\in[0,\tau]$. For simplicity, we will omit the explicit notation of time dependence in $F[\cdot]$ and $G[\cdot]$.
For any $\varepsilonta^1, \varepsilonta^2\in B_R^d$, where $B_R$ is the ball of radius $R>0$ in $\mathbb R^d$, we have \begin{eqnarray*} \left| \frac{F[\varepsilonta^1]}{G[\varepsilonta^1]} - \frac{F[\varepsilonta^2]}{G[\varepsilonta^2]} \right| \leq \left| \frac{F[\varepsilonta^1]-F[\varepsilonta^2]}{G[\varepsilonta^1]}\right| + |F[\varepsilonta^2]| \left| \frac{G[\varepsilonta^1]-G[\varepsilonta^2]}{G[\varepsilonta^1]G[\varepsilonta^2]} \right|. \end{eqnarray*} Since for $i=1,2$, $|\varepsilonta^i-\varepsilonta_{t-\tau}| \leq R + \max_{s\in[-\tau,0]} \mathbb{N}orm{\varepsilonta_s}_{L^\infty(\Omegaega_0)} < +\infty$, and due to the monotonicity of the influence function $\psi$, we have \begin{eqnarray*} \psi(\varepsilonta^i - \varepsilonta_{t-\tau}(y)) \geq \psi(R + \max_{s\in[-\tau,0]} \mathbb{N}orm{\varepsilonta_s}_{L^\infty(\Omegaega_0)}) =: \psi_R > 0. \end{eqnarray*} Therefore, $G[\varepsilonta^i] \geq \psi_R$. Due to the assumption $0 <\psi \leq 1$, we have the bound \begin{eqnarray*} |F[\varepsilonta^2]| \leq \max_{s\in[-\tau,0]} \mathbb{N}orm{v_s}_{L^\infty(\Omegaega_0)} < +\infty. \end{eqnarray*} Moreover, \begin{eqnarray*} |F[\varepsilonta^1]-F[\varepsilonta^2]| \leq L_\psi |\varepsilonta^1-\varepsilonta^2| \max_{s\in[-\tau,0]} \mathbb{N}orm{v_s}_{L^\infty(\Omegaega_0)}, \end{eqnarray*} where $L_\psi$ is the Lipschitz constant of the influence function $\psi$; a similar estimate obviously holds for $|G[\varepsilonta^1]-G[\varepsilonta^2]|$. Note that the Lipschitz continuity of $\psi$ follows from Assumption \ref{ass:psi}. Putting the above estimates together, we conclude that there exists a constant $C_R$, independent of $t\in(0,\tau)$, such that \begin{eqnarray*} \left| \frac{F[\varepsilonta^1]}{G[\varepsilonta^1]} - \frac{F[\varepsilonta^2]}{G[\varepsilonta^2]} \right| \leq C_R |\varepsilonta^1-\varepsilonta^2|. \end{eqnarray*} Consequently, the Cauchy-Lipschitz theorem provides the existence of a unique solution $(\varepsilonta_t(x), v_t(x))$ of the system \varepsilonnd{equation}ref{ODEs1}--\varepsilonnd{equation}ref{ODEs2} on the time interval $(0,T_x)$ for some $0 < T_x < \tau$, for all $x\in\Omegaega_0$. Next, still for a fixed but arbitrary $x\in\Omegaega_0$, we derive an a-priori bound on $(\varepsilonta_t(x), v_t(x))$. We multiply \varepsilonnd{equation}ref{ODEs2} by $v_t(x)$, \begin{eqnarray} \frac12 \frac{\mathrm{d} |v_t(x)|^2}{\mathrm{d} t} &=& \frac{F_t[\varepsilonta_t(x)]}{G_t[\varepsilonta_t(x)]}\cdot v_t(x) - |v_t(x)|^2 \nonumber \\ &\leq& |v_t| \max_{s\in[-\tau,0]} \mathbb{N}orm{v_s}_{L^\infty(\Omegaega_0)} - |v_t(x)|^2, \label{est_v} \end{eqnarray} where we used the trivial inequality \begin{eqnarray*} \left| \frac{F_t[\varepsilonta_t(x)]}{G_t[\varepsilonta_t(x)]} \right| \leq \mathbb{N}orm{v_{t-\tau}}_{L^\infty(\Omegaega_0)} \leq \max_{s\in[-\tau,0]} \mathbb{N}orm{v_s}_{L^\infty(\Omegaega_0)}. \end{eqnarray*} Clearly, \varepsilonnd{equation}ref{est_v} implies that \begin{eqnarray*} |v_t(x)| \leq \max_{s\in[-\tau,0]} \mathbb{N}orm{v_s}_{L^\infty(\Omegaega_0)} \qquad\mathcal{B}ox{for } t\in(0,T_x), \end{eqnarray*} and from \varepsilonnd{equation}ref{ODEs1} we have \begin{eqnarray*} |\varepsilonta_t(x)| &\leq& |x| + T_x \max_{s\in[-\tau,0]} \mathbb{N}orm{v_s}_{L^\infty(\Omegaega_0)} \\ &\leq& \max_{y\in\Omegaega_0} |y| + \tau \max_{s\in[-\tau,0]} \mathbb{N}orm{v_s}_{L^\infty(\Omegaega_0)}\qquad\mathcal{B}ox{for } t\in(0,T_x). 
\end{eqnarray*} Consequently, the solution constructed above is uniformly bounded on $[0,T_x]$ and can be extended to the whole interval $[0,\tau]$. Since the bound is independent of $x\in\Omegaega_0$, we have \begin{eqnarray*} \max_{t\in[0,\tau]} \mathbb{N}orm{v_t}_{L^\infty(\Omegaega_0)} \leq \max_{s\in[-\tau,0]} \mathbb{N}orm{v_s}_{L^\infty(\Omegaega_0)}, \end{eqnarray*} and we can re-iterate the above procedure to construct a family (parametrized by $x\in\Omegaega_0$) of unique, global in time solutions of the system \varepsilonnd{equation}ref{ODEs1}--\varepsilonnd{equation}ref{ODEs2}, satisfying \varepsilonnd{equation}ref{global_v}. Finally, we show that the continuity of the initial datum is propagated in time. Let us fix $\varepsilonps>0$ and $x, z\in\Omegaega_0$ such that $|x-z|\leq\varepsilonps$. Then $|\varepsilonta_0(x)-\varepsilonta_0(z)| = |x-z| \leq\varepsilonps$ and there exists a $\mathrm{d}elta>0$ such that $|v_0(x)-v_0(z)| \leq \mathrm{d}elta$. An easy calculation gives \begin{eqnarray} \label{est1} |\varepsilonta_t(x)-\varepsilonta_t(z)| \leq |x-z| + \int_0^t |v_s(x)-v_s(z)| \mathrm{d} s, \end{eqnarray} and, for almost all $t\in[0,T]$, \begin{eqnarray} \label{est2} \frac{\mathrm{d}}{\mathrm{d} t} |v_t(x)-v_t(z)| \leq \left| \frac{F_t[\varepsilonta_t(x)]}{G_t[\varepsilonta_t(x)]} - \frac{F_t[\varepsilonta_t(z)]}{G_t[\varepsilonta_t](z)} \right| - |v_t(x)-v_t(z)|. \end{eqnarray} By a similar procedure as above we conclude that there exists a constant $C_1>0$, depending on $L_\psi$, the initial datum and $T>0$, such that \begin{eqnarray*} \left| \frac{F_t[\varepsilonta_t(x)]}{G_t[\varepsilonta_t(x)]} - \frac{F_t[\varepsilonta_t(z)]}{G_t[\varepsilonta_t](z)} \right| \leq C_1 |\varepsilonta_t(x)-\varepsilonta_t(z)| \qquad \mathcal{B}ox{for all } t\in[0,T]. \end{eqnarray*} Inserting into \varepsilonnd{equation}ref{est2} gives \begin{eqnarray*} \frac{\mathrm{d}}{\mathrm{d} t} |v_t(x)-v_t(z)| \leq C_1 |x-z| + C_1 \int_0^t |v_s(x)-v_s(z)| \mathrm{d} s - |v_t(x)-v_t(z)| \end{eqnarray*} for almost all $t\in[0,T]$. Lemma \ref{lem:growth} implies then \begin{eqnarray*} |v_t(x)-v_t(z)| \leq \left( |v_0(x)-v_0(z)| + |x-z| \right) e^{\sqrt{C_1} t} \leq (\mathrm{d}elta + \varepsilonps) e^{\sqrt{C_1}T}. \end{eqnarray*} Continuity in time is proved analogously, using again the Lipschitz continuity of $\psi$. Consequently, $v_t \in \mathcal C([0,T] \times \Omegaega_0)$ for any $T>0$. Continuous differentiability in time and continuity in space of $\varepsilonta_t$ on $[0,T] \times \Omegaega_0$ follows directly from \varepsilonnd{equation}ref{est1}. \varepsilonnd{proof} \section{Large time behavior - proof of Theorem \ref{thm:flocking}} \label{sec:flocking} In this Section we derive asymptotic estimates describing the large-time behavior of the solutions $(\varepsilonta_t, v_t)$ of the system \varepsilonnd{equation}ref{Lagr1}, \varepsilonnd{equation}ref{LagrIC}, which can be constructed by Theorem \ref{thm:existence}. For this whole Section we adopt the assumptions of Theorem \ref{thm:flocking} and Assumption \ref{ass:psi} with $\varepsilonll = 0$, in particular, the validity of the formulae \varepsilonnd{equation}ref{R_V} and \varepsilonnd{equation}ref{iii}. \begin{lemma}\label{lem_spt} Let $(\varepsilonta_t, v_t)\in\mathcal C^1([0,\infty); \mathcal C(\overline{\Omegaega_0}))$ be a solution of the system \varepsilonnd{equation}ref{Lagr1}. 
Then we have \begin{equation}\label{est_spt} \max_{x \in \overline\Omegaega_0} |v_t(x)| \leq R_V \qquad \mathcal{B}ox{for } t\geq 0, \varepsilonnd{equation} with $R_V$ defined in \varepsilonnd{equation}ref{R_V}. \varepsilonnd{lemma} \begin{proof} We fix an $\varepsilon > 0$, set $R_V^\varepsilon := R_V + \varepsilon$ and \begin{eqnarray*} \mathcal{A}^\varepsilon := \left\{t > 0:\, \max_{x \in \overline \Omega_0}|v_s(x)| < R^\varepsilon_V \quad \mathcal{B}ox{for } s \in [0,t) \right\}. \end{eqnarray*} Then, by assumption \varepsilonnd{equation}ref{R_V} and the continuity of the solution, we have $\mathcal{A}^\varepsilon \neq \varepsilonmptyset$ and $T^\varepsilon_* := \sup \mathcal{A}^\varepsilon > 0$. For a contradiction, let us assume that $T^\varepsilon_* < +\infty$. Then we have \begin{eqnarray} \label{limit} \lim_{t \to T^\varepsilon_* -}\, \max_{x \in \overline \Omega_0} |v_t(x)| = R_V^\varepsilon. \end{eqnarray} On the other hand, for $t < T^\varepsilon_*$ we calculate $$\begin{aligned} \frac12 \frac{\mathrm{d}}{\mathrm{d} t}|v_t(x)|^2 &\leq \frac{\int_{\Omega_0} \psi(\varepsilonta_t(x) - \varepsilonta_{t-\tau}(y))|v_{t-\tau}(y)|\rho_0(y)\mathrm{d} y}{\int_{\Omega_0} \psi(\varepsilonta_t(x) - \varepsilonta_{t-\tau}(y))\rho_0(y)\mathrm{d} y} |v_t(x)|- |v_t(x)|^2\cr &\leq \max_{y \in \overline \Omega_0}|v_{t-\tau}(y)||v_t(x)| - |v_t(x)|^2\cr &\leq R^\varepsilon_V|v_t(x)| - |v_t(x)|^2. \varepsilonnd{aligned}$$ Consequently, \begin{eqnarray*} \frac{\mathrm{d}}{\mathrm{d} t}|v_t(x)| \leq R^\varepsilon_V - |v_t(x)| \qquad \mathcal{B}ox{for almost all } t\in (0,T^\varepsilon_*), \end{eqnarray*} which further implies \begin{eqnarray*} |v_t(x)| \leq (|v_0(x)| - R^\varepsilon_V)e^{-t} + R^\varepsilon_V \qquad \mathcal{B}ox{for } t\in (0,T^\varepsilon_*). \end{eqnarray*} Thus, we have \begin{eqnarray*} \lim_{t \to T^\varepsilon_* -}\, \max_{x \in \overline \Omega_0} |v_t(x)| \leq (\max_{x \in \overline \Omega_0}|v_0(x)| - R^\varepsilon_V)e^{-T^\varepsilon_*} + R^\varepsilon_V \leq - \varepsilonps e^{-T^\varepsilon_*} + R_V^\varepsilon, \end{eqnarray*} which is a contradiction to \varepsilonnd{equation}ref{limit}. Hence we have $T^\varepsilon_* = +\infty$, and by taking the limit $\varepsilon \to 0$ we conclude \varepsilonnd{equation}ref{est_spt}. \varepsilonnd{proof} For $t\geq 0$ we define the quantities \begin{eqnarray} X(t) &:=& d_X(0) + \int_0^t d_V(s) \mathrm{d} s, \label{X} \\ V(t) &:=& d_V(0) e^{-t} + \int_0^t [1 - \psi(X(s-\tau) + R_V \tau)] d_V(s-\tau) e^{s-t} \mathrm{d} s, \label{V} \end{eqnarray} with $d_X$ and $d_V$ defined in \varepsilonnd{equation}ref{dXdV} and $R_V$ defined in \varepsilonnd{equation}ref{R_V}. Moreover, for $t\in[-\tau,0]$ we set $$X(t):= d_X(t),\qquad V(t):=d_V(t),$$ so that both $X(t)$ and $V(t)$ are continuous on $[-\tau,\infty)$. \begin{lemma}\label{main_prop} Let $(\varepsilonta_t, v_t)\in\mathcal C^1([0,\infty); \mathcal C(\overline{\Omegaega_0}))$ be a solution of the system \varepsilonnd{equation}ref{Lagr1}. Then, for all $t>0$ we have \begin{eqnarray} \frac{\mathrm{d}}{\mathrm{d} t} X(t) &=& d_V(t), \nonumber \\ \frac{\mathrm{d}}{\mathrm{d} t} V(t) &\leq& -V(t) + [1 - \psi(X(t-\tau) + R_V \tau)]V(t-\tau). \label{ddtV} \end{eqnarray} \varepsilonnd{lemma} \begin{proof} While the first claim follows directly from \varepsilonnd{equation}ref{X}, for the second we take the time derivative in \varepsilonnd{equation}ref{V}, \begin{eqnarray*} \frac{\mathrm{d}}{\mathrm{d} t} V(t) = -V(t) + [1 - \psi(X(t-\tau) + R_V \tau)] \frac{\mathrm{d}}{\mathrm{d} t} X(t-\tau). 
\end{eqnarray*} Then, due to the inequality $\frac{\mathrm{d}}{\mathrm{d} t} X(t) \leq d_V(t)$ for all $t > -\tau$ and the assumption $0 \leq \psi(s) \leq 1$ for $s\in [0,+\infty)$, we need to prove that \begin{eqnarray} \label{dVV} d_V(t) \leq V(t) \qquad\mbox{for } t\geq -\tau. \end{eqnarray} For $x$, $z\in\Omega_0$ and $t\geq 0$ set \begin{eqnarray*} \phi_t(x,z) := \frac{\psi(\eta_t(x) - \eta_{t-\tau}(z))}{\int_{\Omega_0} \psi(\eta_t(x) - \eta_{t-\tau}(y))\rho_0(y)\mathrm{d} y}. \end{eqnarray*} Then, for any $x$, $y\in\Omega_0$ and $t>0$, we calculate $$\begin{aligned} \frac12\frac{\mathrm{d}}{\mathrm{d} t}|v_t(x) - v_t(y)|^2 &= \left(v_t(x) - v_t(y) \right) \cdot \left(\partial_t v_t(x) - \partial_t v_t(y) \right)\cr &= \left(v_t(x) - v_t(y) \right) \cdot \int_{\Omega_0} \left(\phi_t(x,z) - \phi_t(y,z) \right) v_{t-\tau}(z)\rho_0(z)\mathrm{d} z\cr &\quad - |v_t(x) - v_t(y)|^2\cr &\leq |v_t(x) - v_t(y)| \left|\int_{\Omega_0} \left(\phi_t(x,z) - \phi_t(y,z)\right) v_{t-\tau}(z)\rho_0(z)\mathrm{d} z \right| \cr &\quad - |v_t(x) - v_t(y)|^2. \end{aligned}$$ We denote \begin{eqnarray*} \widetilde \phi_t(x,y;z) := \min \left\{ \phi_t(x,z), \phi_t(y,z)\right\} \qquad \mbox{and} \qquad \Phi_t(x,y) := \int_{\Omega_0} \widetilde\phi_t(x,y;z)\rho_0(z)\mathrm{d} z. \end{eqnarray*} Then, by definition, $0 \leq \Phi_t(x,y) \leq 1$ for all $x$, $y\in\Omega_0$ and $t\geq 0$. Consequently, \begin{eqnarray*} \frac{\phi_t(x,z) - \widetilde\phi_t(x,y;z)}{1 - \Phi_t(x,y)} \geq 0 \qquad \mbox{and} \qquad \int_{\Omega_0} \left(\frac{\phi_t(x,z) - \widetilde\phi_t(x,y;z)}{1 - \Phi_t(x,y)} \right) \rho_0(z)\mathrm{d} z = 1, \end{eqnarray*} which further implies \begin{eqnarray*} \int_{\Omega_0} \left(\frac{\phi_t(x,z) - \widetilde\phi_t(x,y;z)}{1 - \Phi_t(x,y)} \right) v_{t-\tau}(z)\rho_0(z)\mathrm{d} z \in \mbox{conv} \left\{v_{t-\tau}(z),\, z \in \Omega_0 \right\}, \end{eqnarray*} where conv $\mathcal{S}$ denotes the convex hull of the set $\mathcal{S}$. Therefore, $$\begin{aligned} &\left|\int_{\Omega_0} \left(\phi_t(x,z) - \phi_t(y,z)\right) v_{t-\tau}(z)\rho_0(z)\mathrm{d} z \right| \cr &\quad \leq \left(1 - \Phi_t(x,y) \right) \Bigg|\int_{\Omega_0} \left(\frac{\phi_t(x,z) - \widetilde\phi_t(x,y;z)}{1 - \Phi_t(x,y)} \right) v_{t-\tau}(z)\rho_0(z)\mathrm{d} z \cr &\qquad \qquad \qquad \qquad \qquad \qquad - \int_{\Omega_0} \left(\frac{\phi_t(y,z) - \widetilde\phi_t(x,y;z)}{1 - \Phi_t(x,y)} \right) v_{t-\tau}(z)\rho_0(z)\mathrm{d} z\Bigg|\cr &\quad \leq \left(1 - \Phi_t(x,y) \right) d_V(t-\tau). \end{aligned}$$ On the other hand, it follows from \eqref{eta_flow} and Lemma \ref{lem_spt} that \begin{eqnarray*} |\eta_t(x)-\eta_{t-\tau}(z)| = \left|\eta_{t-\tau}(x) - \eta_{t-\tau}(z) + \int_{t-\tau}^t \frac{\mathrm{d}}{\mathrm{d} s}\eta_s(x)\mathrm{d} s \right| \leq \left|\eta_{t-\tau}(x) - \eta_{t-\tau}(z)\right| + R_V\tau, \end{eqnarray*} and with \eqref{dXdV} we have \begin{eqnarray*} \label{ineq_eta_xz} |\eta_t(x)-\eta_{t-\tau}(z)| \leq d_X(t-\tau) + R_V \tau.
\end{eqnarray*} Due to the assumption $0 \leq \psi(s) \leq 1$ for $s\in [0,+\infty)$, we have \begin{eqnarray*} \int_{\Omegaega_0} \psi(\varepsilonta_t(x) - \varepsilonta_{t-\tau}(y))\rho_0(y)\mathrm{d} y \leq 1, \end{eqnarray*} and since $\psi$ is a nonincreasing function, \begin{eqnarray*} \phi_t(x,z) \geq \psi(\varepsilonta_t(x) - \varepsilonta_{t-\tau}(z)) \geq \psi(d_X(t-\tau) + R_V\tau) \end{eqnarray*} for all $x, z\in\Omegaega_0$. This implies \begin{eqnarray*} \mathcal{P}hi_t(x,y) \geq \psi(d_X(t-\tau) + R_V\tau) \end{eqnarray*} for all $x, y\in\Omegaega_0$. Combining the above estimates, we arrive at \begin{eqnarray*} \frac12\frac{\mathrm{d}}{\mathrm{d} t}|v_t(x) - v_t(y)|^2 \leq \bigl([1 - \psi(d_X(t-\tau) + R_V \tau)]d_V(t-\tau) - |v_t(x) - v_t(y)| \bigr) |v_t(x) - v_t(y)|. \end{eqnarray*} We divide by $|v_t(x) - v_t(y)|$ and integrate in time, which gives \begin{eqnarray*} |v_t(x) - v_t(y)| \leq |v_0(x) - v_0(y)|e^{-t} + \int_0^t \left[(1 - \psi(d_X(s-\tau) + R_V \tau)\right]d_V(s-\tau) e^{s-t} \mathrm{d} s, \end{eqnarray*} and taking the maximum over $x$, $y\in\overline\Omegaega_0$, \begin{eqnarray*} d_V(t) \leq d_V(0) e^{-t} + \int_0^t \left[(1 - \psi(d_X(s-\tau) + R_V \tau)\right]d_V(s-\tau) e^{s-t} \mathrm{d} s. \end{eqnarray*} Since, as can be easily proven, $d_X(t) \leq X(t)$, the monotonicity property of the influence function $\psi$ finally implies \begin{eqnarray*} d_V(t) \leq d_V(0) e^{-t} + \int_0^t \left[(1 - \psi(X(s-\tau) + R_V \tau)\right]d_V(s-\tau) e^{s-t} \mathrm{d} s = V(t). \end{eqnarray*} \varepsilonnd{proof} We next recall a Gronwall-type estimate for time-delayed differential inequalities whose proof can be found in \cite[Lemma 2.4]{Choi-Haskovec}. \begin{lemma}\label{lem_gron} Let $u$ be a nonnegative, continuous and piecewise $\mathcal C^1$-function satisfying, for some constant $0 < a < 1$, the differential inequality \begin{eqnarray*} \frac{\mathrm{d}}{\mathrm{d} t} u(t) \leq (1-a) u(t -\tau) - u(t) \qquad\mathcal{B}ox{for almost all } t>0. \end{eqnarray*} Then there exists a constant $0 < C < 1$ satisfying the equation \begin{eqnarray*} 1 - C = (1-a)e^{C\tau}, \end{eqnarray*} such that the estimate holds \begin{eqnarray*} u(t) \leq \left( \max_{s\in[-\tau,0]} u(s) \right) e^{-Ct} \qquad \mathcal{B}ox{for all } t \geq 0. \end{eqnarray*} \varepsilonnd{lemma} We are now ready to prove the main result of this section. \begin{lemma}\label{prop_lt} Let $(\varepsilonta_t, v_t)\in\mathcal C^1([0,\infty); \mathcal C(\overline{\Omegaega_0}))$ be a solution of the system \varepsilonnd{equation}ref{Lagr1}. Assume that \varepsilonnd{equation}ref{R_V} and \varepsilonnd{equation}ref{iii} are verified. Then we have \begin{eqnarray*} d_V(t) \leq \left( \max_{s\in[-\tau,0]} d_V(s) \right) e^{-C_1t} \quad \mathcal{B}ox{for all } t>0, \quad \mathcal{B}ox{and} \quad \sup_{t>0} d_X(t) < C_2, \end{eqnarray*} where $C_1$, $C_2$ are positive constants independent of $t$. \varepsilonnd{lemma} \begin{proof} We introduce the following Lyapunov functional for $t \in (0,T]$, \begin{eqnarray*} \mathcal{L}(t) := V(t) + \int_{X(- \tau) + R_v \tau}^{X(t - \tau) + R_v \tau} \psi(s)\,\mathrm{d} s + \int_{t-\tau}^t V(s)\,\mathrm{d} s. 
\end{eqnarray*} Using Lemma \ref{main_prop}, we obtain that for almost all $t > 0$, $$ \begin{aligned} \frac{\mathrm{d}}{\mathrm{d} t}\mathcal{L}(t) &= \frac{\mathrm{d}}{\mathrm{d} t} V(t) + \psi(X(t - \tau) + R_V \tau) \frac{\mathrm{d}}{\mathrm{d} t} X(t - \tau) + V(t) - V(t - \tau)\cr &\leq - V(t) + \left[1 - \psi(X(t-\tau) + R_V \tau)\right] V(t - \tau) \cr &\quad + \psi(X(t - \tau) + R_V \tau) d_V(t- \tau) + V(t) - V(t - \tau)\cr &\leq 0, \end{aligned} $$ where we used the inequality $d_V(t) \leq V(t)$ for $t\geq -\tau$, i.e., \eqref{dVV} from the proof of Lemma \ref{main_prop}. Integrating over the time interval $(0,t)$ yields \begin{eqnarray} \label{zwischenstep} V(t) + \int_{X(- \tau) + R_V \tau}^{X(t - \tau) + R_V \tau} \psi(s)\,\mathrm{d} s + \int_{t-\tau}^t V(s)\,\mathrm{d} s \leq V(0) + \int_{-\tau}^0 V(s)\,\mathrm{d} s. \end{eqnarray} On the other hand, assumption \eqref{iii} implies that there exists a $d_* > 0$ such that \begin{eqnarray*} d_V(0) + \int_{-\tau}^0 d_V(s)\,\mathrm{d} s = \int_{d_X(-\tau) + R_V \tau}^{d_*}\psi(s)\,\mathrm{d} s. \end{eqnarray*} Since, by definition, $V(t) = d_V(t)$ for $t\in[-\tau,0]$, we have \begin{eqnarray*} d_V(0) + \int_{-\tau}^0 d_V(s)\,\mathrm{d} s = V(0) + \int_{-\tau}^0 V(s)\,\mathrm{d} s. \end{eqnarray*} This together with \eqref{zwischenstep} implies \begin{eqnarray*} \int_{X(- \tau) + R_V \tau}^{X(t - \tau) + R_V \tau} \psi(s)\,\mathrm{d} s \leq \int_{d_X(-\tau) + R_V \tau}^{d_*}\psi(s)\,\mathrm{d} s, \end{eqnarray*} and, since $X(- \tau) = d_X(-\tau)$, \begin{eqnarray*} 0 \leq \int_{X(t -\tau) + R_V \tau}^{d_*} \psi(s)\,\mathrm{d} s. \end{eqnarray*} With the inequality $d_X(t) \leq X(t)$ for $t\geq -\tau$, this implies \begin{eqnarray*} d_X(t -\tau) + R_V \tau \leq X(t -\tau) + R_V \tau \leq d_* \quad \mbox{for } t > 0. \end{eqnarray*} Using this in \eqref{ddtV}, we arrive at \begin{eqnarray*} \frac{\mathrm{d}}{\mathrm{d} t} V(t) \leq -V(t) + (1 - \psi_*) V(t - \tau) \end{eqnarray*} for all $t>0$, where $\psi_* := \psi(d_*)$. We finally apply Lemma \ref{lem_gron} and the inequality \eqref{dVV} to complete the proof. \end{proof} \begin{remark} In the above proof we proceeded along the lines of \cite{Choi-Haskovec}, where a similar statement has been proved in the discrete setting. However, in our setting the proof becomes slightly more involved. Indeed, in the discrete setting the time axis can be divided into an at most countable system of disjoint intervals $[t_k, t_{k+1})$ such that the velocity diameter $d_V$ is realized by a fixed pair of particles on each such interval, and one can calculate the time derivative of $d_V$ there. This is obviously not possible in the continuum setting. Consequently, we had to introduce the functions $X$ and $V$ in \eqref{X}--\eqref{V} and estimate their time derivatives in Lemma \ref{main_prop}. This is the main difference with respect to the approach taken in \cite{Choi-Haskovec}. \end{remark} \section{Existence of solutions for the Eulerian system - proof of Theorem \ref{thm:Eulerian}}\label{sec:classical} In this section we prove the global-in-time existence and uniqueness of classical solutions of the system \eqref{Eul1}--\eqref{Eul2}. We first briefly establish the existence of local-in-time solutions in Lemma \ref{lem_local} and then derive suitable a priori estimates that allow us to establish a global-in-time result.
\begin{lemma}\label{lem_local} Let Assumption \ref{ass:psi} be verified with some $\ell > \frac{d}2+1$. Moreover, assume that the initial datum satisfies the regularity \begin{eqnarray*} (\bar\rho_s, \bar u_s) \in \mathcal C([-\tau,0];H^\ell(\Omega_s)) \times \mathcal C([-\tau,0];H^{\ell+1}(\Omega_s)). \end{eqnarray*} Then, there exists a $T > 0$ such that the system \eqref{Lagr1} has a unique classical solution $(\eta_t, v_t) \in \mathcal C^1([-\tau,T];H^{\ell+1}(\Omega_0)) \times \mathcal C([-\tau,T];H^{\ell+1}(\Omega_0))$. \end{lemma} \begin{proof} Even though we are dealing with the effect of time delay in the alignment force, the existence and uniqueness of local-in-time classical solutions can be obtained by an argument similar to that of \cite[Appendix A]{CCZ}, in which the one-dimensional pressureless Euler-Poisson equations are considered. Thus we omit the details here. \end{proof} For the global regularity, we use the large-time estimates established in Section \ref{sec:flocking}, which play an important role in constructing the global-in-time classical solutions. Similar ideas are used in \cite{CK16, HKK15} to prevent the formation of finite-time singularities in pressureless Eulerian dynamics. We derive uniform a priori estimates for $\|v_t\|_{H^{\ell+1}}$ for the equation $\eqref{Lagr1}_2$. For this, we recall several Sobolev inequalities which will be used in the rest of this paper. \begin{lemma}\label{lem_sob} Let $k \geq 1$. \begin{itemize} \item[(i)] For any pair of functions $f,g \in H^k \cap L^\infty$, we obtain \begin{eqnarray*} \|\nabla^k (fg)\|_{L^2} \lesssim \|f\|_{L^\infty}\|\nabla^k g\|_{L^2} + \|\nabla^k f\|_{L^2}\|g\|_{L^\infty}. \end{eqnarray*} Here $f \lesssim g$ means that there exists a positive constant $C>0$ such that $f \leq Cg$. Furthermore, if $\nabla f \in L^\infty$, we have \begin{eqnarray*} \|\nabla^k (fg) - f \nabla^k g\|_{L^2} \lesssim \|\nabla f\|_{L^\infty}\|\nabla^{k-1}g\|_{L^2} + \|\nabla^k f\|_{L^2}\|g\|_{L^\infty}. \end{eqnarray*} \item[(ii)] For $f \in H^{[d/2]+1}$, we have \begin{eqnarray*} \|f\|_{L^\infty} \lesssim \|\nabla f\|_{H^{[d/2]}}. \end{eqnarray*} \item[(iii)] For $f \in H^k \cap L^\infty$, let $p \in [1,\infty]$, and $h \in \mathcal C^k(B(0,\|f\|_{L^\infty}))$ where $B(0,R)$ denotes the ball of radius $R>0$ centered at the origin in $\mathbb R^d$. Then there exists a positive constant $C = C(k,p,h)$ such that \begin{eqnarray*} \|\nabla^k h(f)\|_{L^p} \leq C(1 + \|f\|_{L^\infty})^{k-1}\|\nabla^k f\|_{L^p}. \end{eqnarray*} \end{itemize} \end{lemma} For notational simplicity, we set \begin{eqnarray*} \label{ppsi} \psi_{t,\tau}[\eta](x,y) := \psi(\eta_t(x) - \eta_{t-\tau}(y)). \end{eqnarray*} In the lemma below, we provide the $H^k$-estimate of the influence function $\psi_{t,\tau}[\eta]$ by directly using Lemma \ref{lem_sob}. \begin{lemma}\label{lem_useful0}Let $\eta_t \in \mathcal C([-\tau,T];H^{\ell+1}(\Omega_0))$ for some $T>0$ and $\ell\in\mathbb{N}$. Then, for $1 \leq k \leq \ell + 1$, we have \begin{eqnarray*} \|\nabla^k_x \psi_{t,\tau}[\eta](\cdot,y)\|_{L^2} \leq C(1 + d_X(t-\tau) + R_V \tau)^{k-1}\|\nabla^k_x \eta_t(\cdot)\|_{L^2}.
\end{eqnarray*} In particular, we have \begin{eqnarray*} \|\nabla^k_x \psi_{t,\tau}[\eta](\cdot,\cdot)\|_{L^2 \times L^\infty} \leq C(1 + d_X(t-\tau) + R_V \tau)^{k-1}\|\nabla^k_x \eta_t(\cdot)\|_{L^2}. \end{eqnarray*} \end{lemma} \begin{remark}\label{rmk_useful0}Due to the smoothness of the influence function $\psi$, we easily get \begin{eqnarray*} \|\nabla_x \psi_{t,\tau}[\eta](\cdot,\cdot)\|_{L^\infty \times L^\infty} \leq C\|\nabla_x \eta_t(\cdot)\|_{L^\infty}. \end{eqnarray*} \end{remark} \begin{lemma}\label{lem_useful} Let $\ell > d/2 + 1$ and $T > 0$. Suppose that the assumptions given in Theorem \ref{thm_main} hold. Then we have \begin{eqnarray*} \int_{\Omega_0} \psi_{t,\tau}[\eta](x,y) \rho_0(y)\,dy \geq \psi(C_2 + R_V\tau) =: \psi_m > 0 \quad \mbox{for all } x\in\Omega_0 \mbox{ and } t \in [0,T], \end{eqnarray*} with $C_2 > 0$ given in Lemma \ref{prop_lt}. Furthermore, if there exists a positive constant $M > 0$ such that $\|v\|_{L^\infty((-\tau,T); H^{\ell + 1}(\Omega_0))} \leq M$, we have \begin{eqnarray*} \|\nabla^k \eta_t\|_{L^2(\Omega_0)} \leq C(1 + M t) \quad \mbox{for } 1 \leq k \leq \ell+1 \mbox{ and } t \in [0,T], \end{eqnarray*} for some $C>0$ independent of $t$. \end{lemma} \begin{proof} It follows from the monotonicity of the influence function $\psi$, Lemma \ref{prop_lt}, and the inequality \eqref{ineq_eta_xz} given in the proof of Lemma \ref{main_prop} that \begin{eqnarray*} |\eta_t(x)-\eta_{t-\tau}(y)| \leq d_X(t-\tau) + R_V \tau \leq C_2 + R_V \tau. \end{eqnarray*} Thus we obtain \begin{eqnarray*} \int_{\Omega_0} \psi_{t,\tau}[\eta](x,y) \rho_0(y)\,dy \geq \psi(C_2 + R_V\tau)\int_{\Omega_0} \rho_0(y)\,dy = \psi_m. \end{eqnarray*} We now assume that $\|v\|_{L^\infty((-\tau,T); H^{\ell + 1}(\Omega_0))} \leq M$ for some $M > 0$. Taking the $k$-th derivative of $\eqref{Lagr1}_1$ yields \begin{eqnarray*} \nabla^k \eta_t = \delta_{k,1} \mathbb{I} + \int_0^t \nabla^k v_s\mathrm{d} s \quad \mbox{for} \quad 1 \leq k \leq \ell + 1, \end{eqnarray*} where $\mathbb{I}$ is the identity matrix. This yields \begin{eqnarray*} \|\nabla^k \eta_t\|_{L^2(\Omega_0)} \leq C \left(1 + \int_0^t \|\nabla^k v_s\|_{L^2(\Omega_0)}\mathrm{d} s\right) \leq C(1 + Mt). \end{eqnarray*} \end{proof} \begin{remark}\label{rmk_gd} It follows from Lemmas \ref{lem_useful0} and \ref{lem_useful}, Remark \ref{rmk_useful0}, and the Sobolev embedding $H^{\ell-1}(\Omega_0) \hookrightarrow L^\infty(\Omega_0)$ for $\ell > d/2 + 1$ that \begin{eqnarray*} \|\nabla^k_x \psi_{t,\tau}[\eta](\cdot,\cdot)\|_{L^2 \times L^\infty} + \|\nabla_x \psi_{t,\tau}[\eta](\cdot,\cdot)\|_{L^\infty \times L^\infty} \leq C(1+Mt), \end{eqnarray*} for $1 \leq k \leq \ell+1$. \end{remark} We are now in a position to provide the uniform a priori estimate of $\|v_t\|_{H^{\ell+1}}$ in the lemma below. \begin{lemma}\label{prop_apriori} Let $\ell > d/2 + 1$ and $T > 0$. Suppose that the assumptions given in Theorem \ref{thm_main} hold. Let $M > 0$ be any positive constant.
Then if $\|v\|_{L^\infty((-\tau,T); H^{\ell + 1}(\Omega_0))} \leq M$, we have \begin{eqnarray*} \|v_t\|_{L^\infty((0,T);H^{\ell+1}(\Omega_0))} \leq C_0\|v_s\|_{L^\infty((-\tau,0);H^{\ell+1}(\Omega_0))}, \end{eqnarray*} where $C_0 >0$ is a constant independent of $T$. \end{lemma} \begin{remark} The constant $M>0$ appearing in Lemma \ref{prop_apriori} does not need to be small. \end{remark} \begin{proof}[Proof of Lemma \ref{prop_apriori}] We start by estimating the $L^2(\Omega_0)$-norm of $v_t$ for a fixed $t\in (0,T)$. We calculate $$\begin{aligned} \frac12\frac{\mathrm{d}}{\mathrm{d} t} \int_{\Omega_0} |v_t|^2\mathrm{d} x &= \int_{\Omega_0} v_t \cdot \left(\frac{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y) v_{t-\tau}(y)\rho_0(y)\mathrm{d} y}{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y)\rho_0(y)\mathrm{d} y} - v_t\right)\mathrm{d} x\cr &\leq \|v_t\|_{L^1(\Omega_0)}\|v_{t-\tau}\|_{L^\infty(\Omega_0)} - \|v_t\|_{L^2(\Omega_0)}^2\cr &\leq {|\Omega_0|}^{1/2}\|v_t\|_{L^2(\Omega_0)}\|v_s\|_{L^\infty((-\tau,0); L^\infty(\Omega_0))} - \|v_t\|_{L^2(\Omega_0)}^2\cr &\leq -\frac12\|v_t\|_{L^2(\Omega_0)}^2 + C\|v_s\|_{L^\infty((-\tau,0);H^{\ell+1}(\Omega_0))}^2, \end{aligned}$$ where we used Lemma \ref{lem_spt} and the Cauchy-Schwarz inequality, and $C > 0$ only depends on $|\Omega_0|$ and the space dimension $d\in\mathbb{N}$. An application of the Gronwall lemma then gives \begin{eqnarray*} \sup_{0 \leq t \leq T}\|v_t\|_{L^2(\Omega_0)}^2 \leq \|v_0\|_{L^2(\Omega_0)}^2 + C\|v_s\|_{L^\infty((-\tau,0);H^{\ell+1}(\Omega_0))}^2 \leq C\|v_s\|_{L^\infty((-\tau,0);H^{\ell+1}(\Omega_0))}^2. \end{eqnarray*} Next, we estimate the $H^1(\Omega_0)$-norm of $v_t$, $$\begin{aligned} \frac12\frac{\mathrm{d}}{\mathrm{d} t}\int_{\Omega_0}|\nabla v_t|^2\mathrm{d} x &= \int_{\Omega_0} \nabla v_t \cdot \nabla \left( \frac{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y) v_{t-\tau}(y)\rho_0(y)\mathrm{d} y}{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y)\rho_0(y)\mathrm{d} y} - v_t \right) \mathrm{d} x \\ & =: - \|\nabla v_t\|_{L^2(\Omega_0)}^2 + I_1. \end{aligned}$$ Note that $$\begin{aligned} &\left|\nabla_x \left( \frac{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y) v_{t-\tau}(y)\rho_0(y)\mathrm{d} y}{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y)\rho_0(y)\mathrm{d} y}\right)\right|\cr &\quad = \left|\frac{\iint_{\Omega_0 \times \Omega_0} (\nabla_x \psi_{t,\tau}[\eta](x,y)) \psi_{t,\tau}[\eta](x,z)(v_{t-\tau}(y) - v_{t-\tau}(z))\rho_0(y)\rho_0(z)\mathrm{d} y\mathrm{d} z}{\left(\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y)\rho_0(y)\mathrm{d} y\right)^2}\right|\cr &\quad \leq \frac{1}{\psi_m^2}d_V(t-\tau)\left(\iint_{\Omega_0\times\Omega_0}|\nabla (\psi_{t,\tau}[\eta](x,y))|^2 \rho_0(y) \rho_0(z) \mathrm{d} y\mathrm{d} z\right)^{1/2} \\ &\qquad\qquad\qquad\times \left(\iint_{\Omega_0\times\Omega_0}|\psi_{t,\tau}[\eta](x,y)|^2 \rho_0(y) \rho_0(z) \mathrm{d} y\mathrm{d} z\right)^{1/2}\cr &\quad \leq \frac{1}{\psi_m^2}d_V(t-\tau)\left(\int_{\Omega_0}|\nabla (\psi_{t,\tau}[\eta](x,y))|^2 \rho_0(y)\mathrm{d} y\right)^{1/2}, \end{aligned}$$ due to Lemma \ref{lem_useful}, $\|\psi\|_{L^\infty} \leq 1$ and the normalization $\int_{\Omega_0} \rho_0(y) \mathrm{d} y = 1$.
The Cauchy-Schwarz inequality yields \begin{eqnarray*} |I_1| \leq \frac{1}{\psi_m^2} d_V(t-\tau)\|\nabla v_t\|_{L^2(\Omega_0)} \left(\iint_{\Omega_0 \times \Omega_0}|\nabla_x \psi_{t,\tau}[\eta](x,y)|^2 \rho_0(y)\mathrm{d} x\mathrm{d} y \right)^{1/2}, \end{eqnarray*} and using further the estimate \eqref{statement2} on $d_V(t-\tau)$ together with Remark \ref{rmk_gd}, we obtain $$\begin{aligned} |I_1| &\leq \frac{C}{\psi_m^2} \left( \max_{s \in [-\tau,0]}d_V(s) \right) e^{-C (t-\tau)}\|\nabla v_t\|_{L^2(\Omega_0)}(1 + Mt) |\Omega_0|^{1/2} \cr &\leq C\|v_s\|_{L^\infty((-\tau,0)\times\Omega_0)}\|\nabla v_t\|_{L^2(\Omega_0)}\cr &\leq \frac12\|\nabla v_t\|_{L^2(\Omega_0)}^2 + C\|v_s\|_{L^\infty((-\tau,0)\times\Omega_0)}^2, \end{aligned}$$ where we used the elementary inequality $e^{-C t}(1 + M t) \leq C_{M}$ for all $t\geq 0$, with $C_{M} > 0$ independent of $T$. Therefore, we have \begin{eqnarray*} \frac12\frac{\mathrm{d}}{\mathrm{d} t} \|\nabla v_t\|_{L^2(\Omega_0)}^2 \leq - \frac12 \|\nabla v_t\|_{L^2(\Omega_0)}^2 + C\|v_s\|_{L^\infty((-\tau,0)\times\Omega_0)}^2, \end{eqnarray*} which implies \begin{eqnarray*} \sup_{0 \leq t \leq T}\|\nabla v_t\|_{L^2(\Omega_0)}^2 \leq \|\nabla v_0\|_{L^2(\Omega_0)}^2 + C\|v_s\|_{L^\infty((-\tau,0); H^{\ell+1}(\Omega_0))}^2 \leq C\|v_s\|_{L^\infty((-\tau,0); H^{\ell+1}(\Omega_0))}^2, \end{eqnarray*} where we used the embedding $H^{\ell+1}(\Omega_0) \hookrightarrow L^\infty(\Omega_0)$. Finally, we derive the estimate of the $H^k$-norm of $v_t$ for general $k\in\mathbb{N}$. We first notice that for $1 \leq k \leq \ell$, $$\begin{aligned} &\nabla^{k+1}_x \left( \frac{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y) v_{t-\tau}(y)\rho_0(y)\mathrm{d} y}{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y)\rho_0(y)\mathrm{d} y}\right)\cr &= \nabla^{k}_x\left( \frac{\iint_{\Omega_0 \times \Omega_0} \nabla_x (\psi_{t,\tau}[\eta](x,y)) \psi_{t,\tau}[\eta](x,z)(v_{t-\tau}(y) - v_{t-\tau}(z))\rho_0(y)\rho_0(z)\mathrm{d} y\mathrm{d} z}{\left(\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y)\rho_0(y)\mathrm{d} y\right)^2}\right)\cr &=: \sum_{1 \leq k' \leq k-1}\binom{k}{k'} \nabla^{k'}_x I_2(x) \nabla^{k-k'}_x I_3(x) (1 - \delta_{k,1}) + I_2(x) \nabla_x^k I_3(x) + \nabla_x^k I_2(x) I_3(x)\cr &=: J_1(x) + J_2(x) + J_3(x), \end{aligned}$$ where $$\begin{aligned} I_2(x) &= \left( \int_{\Omega_0} \psi_{t,\tau}[\eta](x,y)\rho_0(y)\mathrm{d} y\right)^{-2},\cr I_3(x) &= \iint_{\Omega_0 \times \Omega_0} (\nabla_x \psi_{t,\tau}[\eta](x,y)) \psi_{t,\tau}[\eta](x,z)(v_{t-\tau}(y) - v_{t-\tau}(z))\rho_0(y)\rho_0(z)\mathrm{d} y\mathrm{d} z. \end{aligned}$$ Note that, due to Lemma \ref{lem_useful}, \begin{eqnarray*} I_2(x) \leq \psi_m^{-2} \qquad \mbox{for } x \in \Omega_0.
\end{eqnarray*} Furthermore, for $1 \leq k \leq \ell$, $$\begin{aligned} |\nabla_x^k I_2| &\lesssim \left| \int_{\Omega_0} \nabla^k (\psi_{t,\tau}[\eta] (x,y))\rho_0(y)\mathrm{d} y \right| \cr &\quad + (1 - \delta_{k,1}) \sum_{ \substack{\alpha+ \beta = k \\ \alpha,\beta \geq 1}}\left| \int_{\Omega_0} \nabla_x^\alpha (\psi_{t,\tau}[\eta] (x,y))\rho_0(y)\mathrm{d} y \right| \left| \int_{\Omega_0} \nabla_x^\beta (\psi_{t,\tau}[\eta] (x,y))\rho_0(y)\mathrm{d} y \right|\cr &=: I_2^1 + I_2^2, \end{aligned}$$ where the $L^2$-norm of $I_2^1$ is easily estimated as $$\begin{aligned} \int_{\Omega_0} |I_2^1|^2 \,\mathrm{d} x &\lesssim \int_{\Omega_0} \left| \int_{\Omega_0} |\nabla^k (\psi_{t,\tau}[\eta] (x,y))|\rho_0(y)\mathrm{d} y\right|^2\,dx \cr &\lesssim \left(\int_{\Omega_0} \|\nabla_x^k \psi_{t,\tau}[\eta](\cdot,y)\|_{L^2(\Omega_0)}\rho_0(y)\mathrm{d} y\right)^2\cr &\lesssim \int_{\Omega_0} \|\nabla_x^k \psi_{t,\tau}[\eta](\cdot,y)\|_{L^2(\Omega_0)}^2\rho_0(y)\mathrm{d} y\cr &\lesssim \|\nabla_x^k \psi_{t,\tau}[\eta](\cdot,\cdot)\|_{L^2 \times L^\infty}^2, \end{aligned}$$ due to the Minkowski integral inequality and the Cauchy-Schwarz inequality (recall the normalization $\int_{\Omega_0}\rho_0(y)\mathrm{d} y = 1$). For the estimate of $I_2^2$, we again use the Minkowski integral inequality together with Moser-type inequalities to obtain $$\begin{aligned} &\int_{\Omega_0} | I_2^2|^2\,\mathrm{d} x \cr &\quad \lesssim \sum_{ \substack{\alpha+ \beta = k \\ \alpha,\beta \geq 1}}\int_{\Omega_0}\left| \iint_{\Omega_0 \times \Omega_0}|\nabla^\alpha_x(\psi_{t,\tau}[\eta](x,y))||\nabla^\beta_x(\psi_{t,\tau}[\eta](x,z))| \rho_0(y)\rho_0(z)\mathrm{d} y\mathrm{d} z\right|^2\,dx \cr &\quad \lesssim \sum_{ \substack{\alpha+ \beta = k \\ \alpha,\beta \geq 1}} \left(\iint_{\Omega_0 \times \Omega_0} \bigl\lVert |\nabla_x^\alpha \psi_{t,\tau}[\eta](\cdot,y)| |\nabla_x^\beta \psi_{t,\tau}[\eta](\cdot,z)| \bigr\rVert_{L^2(\Omega_0)} \rho_0(y)\rho_0(z)\mathrm{d} y\mathrm{d} z\right)^2\cr &\quad \lesssim \sum_{ \substack{\alpha+ \beta = k \\ \alpha,\beta \geq 1}} \iint_{\Omega_0 \times \Omega_0} \bigl\lVert |\nabla_x^\alpha \psi_{t,\tau}[\eta](\cdot,y)| |\nabla_x^\beta \psi_{t,\tau}[\eta](\cdot,z)| \bigr\rVert_{L^2(\Omega_0)}^2 \rho_0(y)\rho_0(z)\mathrm{d} y\mathrm{d} z\cr &\quad \lesssim \sum_{ \substack{\alpha+ \beta = k \\ \alpha,\beta \geq 1}} \iint_{\Omega_0 \times \Omega_0} \| \nabla_x^\alpha \psi_{t,\tau}[\eta](\cdot,y)\|_{H^1(\Omega_0)}^2 \|\nabla_x^\beta \psi_{t,\tau}[\eta](\cdot,z)\|_{H^1(\Omega_0)}^2 \rho_0(y)\rho_0(z)\mathrm{d} y\mathrm{d} z\cr &\quad \lesssim \|\nabla_x \psi_{t,\tau}[\eta](\cdot,\cdot)\|_{H^k \times L^\infty}^4. \end{aligned}$$ Thus we obtain \begin{eqnarray*} \|\nabla I_2\|_{H^{k-1}} \leq C\left( \|\nabla_x^k \psi_{t,\tau}[\eta](\cdot,\cdot)\|_{L^2 \times L^\infty}^2 + \|\nabla_x \psi_{t,\tau}[\eta](\cdot,\cdot)\|_{H^k \times L^\infty}^4\right) \leq C(1 + M t)^4, \end{eqnarray*} for $1 \leq k \leq \ell$. We also easily find from Remark \ref{rmk_gd} that \begin{eqnarray*} |I_3(x)| \leq d_V(t-\tau)\|\nabla \psi_{t,\tau}[\eta](\cdot,\cdot)\|_{L^\infty \times L^\infty} \leq Cd_V(t-\tau)(1 + M t).
\end{eqnarray*} In a similar fashion as before, we get that for $1 \leq k \leq \ell$ $$\begin{aligned} &\int_{\Omega_0} |\nabla^k I_3|^2\,dx \cr &\quad \leq d_V^2(t-\tau)\int_{\Omega_0} \left(\iint_{\Omega_0 \times \Omega_0} \left|\nabla^k \left(\nabla(\psi_{t,\tau}[\eta](x,y)) \psi_{t,\tau}[\eta](x,z) \right)\right| \rho_0(y)\rho_0(z)\,dydz\right)^2 dx\cr &\quad \leq d_V^2(t-\tau)\left(\iint_{\Omega_0 \times \Omega_0} \|\nabla^k \left(\nabla(\psi_{t,\tau}[\eta](\cdot,y)) \psi_{t,\tau}[\eta](\cdot,z) \right)\|_{L^2} \rho_0(y)\rho_0(z)\,dydz\right)^2\cr &\quad \leq Cd_V^2(t-\tau)\left(\iint_{\Omega_0 \times \Omega_0} \left(\|\nabla (\psi_{t,\tau}[\eta](\cdot,y))\|_{L^\infty}\|\nabla^k (\psi_{t,\tau}[\eta](\cdot,z))\|_{L^2} \right) \rho_0(y)\rho_0(z)\,dydz\right)^2\cr &\qquad + Cd_V^2(t-\tau)\left(\int_{\Omega_0} \|\nabla^{k+1} (\psi_{t,\tau}[\eta](\cdot,z))\|_{L^2} \rho_0(z)\,dz\right)^2\cr &\quad \leq Cd_V^2(t-\tau)(1 + M t)^4, \end{aligned}$$ and, subsequently, this implies \begin{eqnarray*} \|\nabla I_3\|_{H^{k-1}} \leq Cd_V(t-\tau)(1 + M t)^2. \end{eqnarray*} Using the above estimates, we have that for $1 \leq k \leq \ell$ $$\begin{aligned} \|J_1\|_{L^2} &\leq C(1 - \delta_{k,1})\left(\int_{\Omega_0}\left|\sum_{1 \leq k' \leq k-1}\nabla^{k'}I_2(x) \nabla^{k-k'}I_3(x)\right|^2 dx \right)^{1/2}\cr &\leq C(1 - \delta_{k,1})\sum_{1 \leq k' \leq k-1} \| |\nabla^{k'} I_2| |\nabla^{k-k'}I_3|\|_{L^2}\cr &\leq C(1 - \delta_{k,1})\sum_{1 \leq k' \leq k-1}\|\nabla^{k'}I_2\|_{H^1}\|\nabla^{k-k'}I_3\|_{H^1}\cr &\leq C\|\nabla I_2\|_{H^{k-1}}\|\nabla I_3\|_{H^{k-1}} \leq Cd_V(t-\tau)(1 + M t)^4,\cr \|J_2\|_{L^2} &\leq \|I_2\|_{L^\infty}\|\nabla^k I_3\|_{L^2} \leq Cd_V(t-\tau)(1 + M t)^4,\cr \|J_3\|_{L^2} &\leq \|I_3\|_{L^\infty}\|\nabla^k I_2\|_{L^2} \leq Cd_V(t-\tau)(1 + M t)^6. \end{aligned}$$ This yields $$\begin{aligned} &\left\|\nabla^{k+1} \left( \frac{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y) v_{t-\tau}(y)\rho_0(y)\,dy}{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y)\rho_0(y)\,dy}\right) \right\|_{L^2} \cr &\qquad \leq Cd_V(t-\tau)(1 + M t)^8 \leq C\|v_s\|_{L^\infty(-\tau,0;H^{\ell+1})}, \end{aligned}$$ for $1 \leq k \leq \ell$. Finally, we have $$\begin{aligned} &\frac12\frac{d}{dt}\int_{\Omega_0}|\nabla^{k+1} v_t|^2\,dx \cr &\quad = - \|\nabla^{k+1} v_t\|_{L^2}^2 + \int_{\Omega_0} \nabla^{k+1} \left( \frac{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y) v_{t-\tau}(y)\rho_0(y)\,dy}{\int_{\Omega_0} \psi_{t,\tau}[\eta](x,y)\rho_0(y)\,dy}\right) \cdot \nabla^{k+1} v_t(x)\,dx\cr &\quad \leq - \frac12\|\nabla^{k+1} v_t\|_{L^2}^2 + C\|v_s\|_{L^\infty(-\tau,0;H^{\ell+1})}^2, \end{aligned}$$ for $1 \leq k \leq \ell$. Hence we conclude that \begin{eqnarray*} \|\nabla^2 v_t\|_{H^{\ell-1}} \leq C\|v_s\|_{L^\infty(-\tau,0;H^{\ell+1})}. \end{eqnarray*} This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm_main}] The global existence and uniqueness of classical solutions follow from Lemma \ref{lem_local} and Lemma \ref{prop_apriori} by a standard continuation argument, since the a priori bound of Lemma \ref{prop_apriori} is independent of $T$. Once the global-in-time classical solution is obtained, all computations in Section \ref{sec:flocking} are justified. This completes the proof. \end{proof} As mentioned in Section \ref{sec:main}, in order to return to the Eulerian variables from the Lagrangian formulation, we need to show that the characteristic flow $\eta_t$ defined in \eqref{eta_flow} is a diffeomorphism.
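For the reader's convenience, we briefly indicate why this is the key point; here we assume, as is customary for Lagrangian formulations of the type \eqref{Lagr1}, that the Eulerian solution is recovered from the Lagrangian one via the push-forward relations \begin{eqnarray*} u_t(\eta_t(x)) = v_t(x) \qquad \mbox{and} \qquad \rho_t(\eta_t(x))\,\det(\nabla_x \eta_t(x)) = \rho_0(x) \qquad \mbox{for } x\in\Omega_0. \end{eqnarray*} These relations define $\rho_t$ and $u_t$ on $\Omega_t = \eta_t(\Omega_0)$, with the regularity inherited from $(\eta_t, v_t)$, precisely when $\eta_t$ is invertible with a nondegenerate Jacobian; a uniform positive lower bound on $\det(\nabla_x\eta_t)$ is therefore the quantitative ingredient needed below.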
Thus, in the rest of this section, we provide the estimate of $\det \nabla \eta_t$. For this, we first need to estimate the exponential decay of $\nabla v_t$ in the $L^2$-norm. \begin{lemma}\label{lem_decay2} Let Assumption \ref{ass:psi} be verified with some $\ell > \frac{d}2+1$, and let \eqref{R_V} and \eqref{iii} hold. Then we have \begin{eqnarray*} \|\nabla v_t\|_{L^2(\Omega_0)} \leq \|\nabla v_0\|_{L^2(\Omega_0)}e^{-t} + C\left( \max_{s \in [-\tau,0]}d_V(s) \right) e^{-at} \quad \mbox{for} \quad t \geq 0, \end{eqnarray*} for some $a > 0$. \end{lemma} \begin{proof} Similarly as in Lemma \ref{prop_apriori}, we get \begin{eqnarray*} \frac12\frac{\mathrm{d}}{\mathrm{d} t}\int_{\Omega_0}|\nabla v_t|^2\mathrm{d} x \leq - \|\nabla v_t\|_{L^2(\Omega_0)}^2 + C\left( \max_{s \in [-\tau,0]}d_V(s) \right) e^{-C (t-\tau)}\|\nabla v_t\|_{L^2(\Omega_0)}(1 + t). \end{eqnarray*} This yields \begin{eqnarray*} \frac{d}{dt} \|\nabla v_t\|_{L^2(\Omega_0)} \leq -\|\nabla v_t\|_{L^2(\Omega_0)} + C\left( \max_{s \in [-\tau,0]}d_V(s) \right)(1 + t)e^{-Ct}. \end{eqnarray*} Applying Gronwall's inequality gives \begin{eqnarray*} \|\nabla v_t\|_{L^2(\Omega_0)} \leq \|\nabla v_0\|_{L^2(\Omega_0)}e^{-t} + C\left( \max_{s \in [-\tau,0]}d_V(s) \right) e^{-at} \quad \mbox{for} \quad t \geq 0, \end{eqnarray*} for some $a > 0$. \end{proof} We are now ready to give the details of the proof of Theorem \ref{thm:Eulerian}. \begin{proof}[Proof of Theorem \ref{thm:Eulerian}]It suffices to show that $\det(\nabla \eta_t) > 0$ for all $t \geq 0$. It follows from \eqref{Lagr1} that \begin{eqnarray*} \nabla_x\eta_t(x) = \mathbb{I} + \int_0^t \nabla_x v_s(x)\,ds. \end{eqnarray*} Using the Gagliardo-Nirenberg-Sobolev inequality together with the decay estimate in Lemma \ref{lem_decay2} yields \begin{eqnarray*} \|\nabla_x v_t\|_{L^\infty} \leq C\|\nabla_x v_t\|_{H^s} \leq C\|\nabla_x v_t\|_{L^2}^{1-\beta}\|\nabla_x v_t\|_{H^{s+1}}^\beta \leq C\varepsilon e^{-bt}, \end{eqnarray*} for some $b >0$ and $\beta \in (0,1)$, where $s > \frac d2$. Note that $\det(\mathbb{I} + \varepsilon_0 A) = 1 + \varepsilon_0\,\mathrm{tr}(A) + \mathcal{O}(\varepsilon_0^2)$ for small $\varepsilon_0> 0$ and \begin{eqnarray*} \left|\int_0^t \nabla_x v_s(x)\,ds\right| \leq C\varepsilon\int_0^t e^{-bs}\,ds \leq C\varepsilon. \end{eqnarray*} This provides \begin{eqnarray*} \det(\nabla_x\eta_t(x)) = \det\left(\mathbb{I} + \int_0^t \nabla_x v_s(x)\,ds \right) \geq 1 - C\varepsilon - \mathcal{O}(\varepsilon^2). \end{eqnarray*} Hence, by choosing $\varepsilon > 0$ small enough, we conclude the desired result. \end{proof} \section{Critical threshold phenomenon in one dimension - Proof of Theorem \ref{thm_cri}} \label{sec:critical} We conclude the paper by showing a critical threshold phenomenon in the spatially one-dimensional version ($d=1$) of the system \eqref{Eul1}--\eqref{Eul2}, which can be rewritten as \begin{eqnarray} \partial_t \rho_t + \partial_x(\rho_t u_t) &=& 0, \label{crit1} \\ \partial_t u_t + u_t \partial_x u_t &=& \frac{\int_\mathbb R \psi(x-y) u_{t-\tau}(y) \rho_{t-\tau}(y)\mathrm{d} y}{\int_\mathbb R \psi(x-y) \rho_{t-\tau}(y)\mathrm{d} y} - u_t. \label{crit2} \end{eqnarray} We proceed in the spirit of \cite{CCTT, CCZ, TT} and set $w_t(x) := \partial_x u_t(x)$ and introduce the differential operator $D_t := \partial_t + u_t \partial_x$.
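Before differentiating, we record for completeness the elementary identity behind the next computation (a short calculus exercise, requiring no assumptions beyond smoothness of $u_t$): since $w_t = \partial_x u_t$, \begin{eqnarray*} \partial_x \left( u_t\, \partial_x u_t \right) = (\partial_x u_t)^2 + u_t\, \partial_x^2 u_t = w_t^2 + u_t\, \partial_x w_t, \qquad \mbox{so that} \qquad \partial_x \left( \partial_t u_t + u_t\, \partial_x u_t \right) = D_t w_t + w_t^2. \end{eqnarray*}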
Taking the formal $x$-derivative of \eqref{crit2}, we obtain \begin{eqnarray*} D_t w_t + w_t^2 = \partial_x \left( \frac{\int_\mathbb R \psi(x-y) u_{t-\tau}(y) \rho_{t-\tau}(y)\mathrm{d} y}{\int_\mathbb R \psi(x-y) \rho_{t-\tau}(y)\mathrm{d} y} \right) - w_t. \end{eqnarray*} Using Assumption \ref{ass:psi} with some $\ell \geq 1$, we have the bound \begin{eqnarray*} \|u_t\|_{\mathcal C([0,\infty); L^\infty(\Omega_t))} \leq \|v_t\|_{\mathcal C([0,\infty)\times\Omega_0)} \leq R_V \end{eqnarray*} derived in Lemma \ref{lem_spt}. This together with the assumption $|\psi'| \leq C|\psi|$ yields \begin{eqnarray*} \left| \partial_x \left( \frac{\int_{\mathbb R} \psi(x-y) u_{t-\tau}(y) \rho_{t-\tau}(y)\mathrm{d} y}{\int_{\mathbb R} \psi(x-y) \rho_{t-\tau}(y)\mathrm{d} y} \right) \right| \leq 2CR_V =: \overline C. \end{eqnarray*} Consequently, \begin{eqnarray} \label{ineq_d} |D_t w_t + w_t^2 + w_t | \leq \overline C, \end{eqnarray} where the constant $\overline C>0$ is independent of time. We can now distinguish the following two cases: \begin{itemize} \item \textbf{Subcritical case:} Assume that $4\overline C \leq 1$. Then it follows from \eqref{ineq_d} that \begin{eqnarray*} D_t w_t \geq - (w_t^2 + w_t + \overline C) = - \left( w_t - w^{1,+}\right)\left(w_t - w^{1,-} \right) \end{eqnarray*} with \begin{eqnarray*} w^{1,\pm} := \frac{-1 \pm \sqrt{1 - 4\overline C}}{2}. \end{eqnarray*} Consequently, if $w_0 \geq w^{1,-}$, then $w_t \geq w^{1,-}$ for all $t \geq 0$. \item \textbf{Supercritical case:} Again, using \eqref{ineq_d}, we have \begin{eqnarray*} D_t w_t \leq -(w_t^2 + w_t - \overline C) = - (w_t - w^{2,+})(w_t - w^{2,-}) \end{eqnarray*} with \begin{eqnarray*} w^{2,\pm} :=\frac{-1 \pm \sqrt{1 + 4\overline C}}{2}. \end{eqnarray*} Consequently, if $w_0 < w^{2,-}$, then $w_t \leq w^{2,-}$ for as long as the solution exists. Due to $w^{2,-} < w^{2,+}$, this further implies that \begin{eqnarray*} D_t w_t \leq -(w_t - w^{2,-})^2, \end{eqnarray*} and solving this differential inequality gives \begin{eqnarray*} w_t \leq \left( t + \frac{1}{w_0 - w^{2,-}} \right)^{-1} + w^{2,-}. \end{eqnarray*} Thus $w_t$ is only defined on a finite time interval and diverges to $-\infty$ before $t = (w^{2,-} - w_0)^{-1}$. \end{itemize} Collecting the above observations completes the proof of Theorem \ref{thm_cri}. \section*{Acknowledgments} YPC was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2017R1C1B2012918) and the Alexander von Humboldt Foundation through the Humboldt Research Fellowship for Postdoctoral Researchers. JH was supported by KAUST baseline funds and KAUST grant no. 1000000193. \begin{thebibliography}{10} \bibitem{CCP} \newblock{J. A. Carrillo, Y.-P. Choi and S. P\'erez}, \newblock{A review on attractive-repulsive hydrodynamics for consensus in collective behavior}, \newblock N. Bellomo, P. Degond, and E. Tadmor (Eds.), Active Particles Vol. I: Advances in Theory, Models, Applications, Series: Modelling and Simulation in Science and Technology, Birkh\"auser Basel, (2017), 259--298. \bibitem{CCTT} \newblock{J. A. Carrillo, Y.-P. Choi, E. Tadmor, and C. Tan}, \newblock{Critical thresholds in 1D Euler equations with nonlocal forces}, \newblock \emph{Math. Mod. Methods Appl. Sci.}, \textbf{26} (2016), 185--206. \bibitem{CCZ} \newblock{J. A. Carrillo, Y.-P. Choi and E.
Zatorska}, \newblock{On the pressureless damped Euler-Poisson equations with quadratic confinement: critical thresholds and large-time behavior}, \newblock \emph{Math. Mod. Methods Appl. Sci.}, \textbf{26} (2016), 2311--2340. \bibitem{CHL} \newblock{Y.-P. Choi, S.-Y. Ha and Z. Li}, \newblock{Emergent dynamics of the Cucker-Smale flocking model and its variants}, \newblock N. Bellomo, P. Degond, and E. Tadmor (Eds.), Active Particles Vol. I: Advances in Theory, Models, Applications, Series: Modelling and Simulation in Science and Technology, Birkh\"auser Basel, (2017), 299--331. \bibitem{Choi-Haskovec} \newblock{Y.-P. Choi and J. Haskovec}, \newblock{Cucker-Smale model with normalized communication weights and time delay}, \newblock \emph{Kinetic and Related Models}, \textbf{10} (2017), 1011--1033. \bibitem{CK16} \newblock{Y.-P. Choi and B. Kwon}, \newblock{The Cauchy problem for the pressureless Euler/isentropic Navier-Stokes equations}, \newblock \emph{J. Differ. Equat.}, \textbf{261} (2016), 654--711. \bibitem{CS1} \newblock {F. Cucker and S. Smale}, \newblock {Emergent behaviour in flocks}, \newblock \emph{IEEE T. on Automat. Contr.}, \textbf{52} (2007), 852--862. \bibitem{CS2} \newblock {F. Cucker and S. Smale}, \newblock {On the mathematics of emergence}, \newblock \emph{Jap. J. Math.}, \textbf{2} (2007), 197--227. \bibitem{DS} \newblock {D. Coutand and S. Shkoller}, \newblock {Well-posedness in smooth function spaces for the moving-boundary three-dimensional compressible Euler equations in physical vacuum}, \newblock \emph{Arch. Rational Mech. Anal.}, \textbf{206} (2012), 515--616. \bibitem{EHS} \newblock {R. Erban, J. Haskovec and Y. Sun}, \newblock {A Cucker-Smale model with noise and delay}, \newblock \emph{SIAM J. Appl. Math.}, \textbf{76} (2016), 1535--1557. \bibitem{HKK15} \newblock{S.-Y. Ha, M.-J. Kang, and B. Kwon}, \newblock{Emergent dynamics for the hydrodynamic Cucker-Smale system in a moving domain}, \newblock \emph{SIAM J. Math. Anal.}, \textbf{47} (2015), 3813--3831. \bibitem{MT} \newblock{S. Motsch and E. Tadmor}, \newblock{A new model for self-organized dynamics and its flocking behavior}, \newblock \emph{J. Stat. Phys.}, \textbf{144} (2011), 923--947. \bibitem{Smith} \newblock {H. Smith}, \newblock \emph{An Introduction to Delay Differential Equations with Applications to the Life Sciences}, \newblock Springer, New York Dordrecht Heidelberg London, 2011. \bibitem{TT} \newblock{E. Tadmor and C. Tan}, \newblock{Critical thresholds in flocking hydrodynamics with nonlocal alignment}, \newblock \emph{Philos. Trans. A Math. Phys. Engrg. Sci.}, \textbf{372} (2014), 20130401. \end{thebibliography} \end{document}
\begin{document} \baselineskip 15pt \maketitle \begin{abstract} In this paper, we generalize the partial fraction decomposition which is fundamental in the theory of multiple zeta values, and prove a relation between Tornheim's double zeta functions of three complex variables. As applications, we give new integral representations of several zeta functions, an extension of the parity result to the whole domain of convergence, concrete expressions of Tornheim's double zeta function at non-positive integers and some results on the behavior of a certain Witten zeta function at each integer. As an appendix, we show a functional equation for Euler's double zeta function. \end{abstract} \section{Introduction} Tornheim's double zeta function is defined as \[ \zeta(s,t;u)= \sum_{m,n=1}^\infty \frac1{m^s n^t (m+n)^u} \] for $(s,t,u)\in\mathbb{C}^3$ with $\Rep(s+u)>1$, $\Rep(t+u)>1$ and $\Rep(s+t+u)>2$. It is known, by Matsumoto \cite[Theorem 1]{Matsumoto02}, that $\zeta(s,t;u)$ can be meromorphically continued to the whole space $\mathbb{C}^3$, and its singularities are located on the subsets of $\mathbb{C}^3$ defined by one of the equations $s+u=1-l$, $t+u=1-l$ ($l=0,1,2,\dotsc$) and $s+t+u=2$. This function can be regarded as a generalization of some well-known zeta functions: the product of two Riemann zeta functions $\zeta(s)\zeta(t)=\zeta(s,t;0)$, the Euler double zeta function $\zeta(u,t)=\zeta(0,t;u)$ and the $SU(3)$-type Witten zeta function $\zeta_{SU(3)}(s)=2^s \zeta(s,s;s)$. Euler, Tornheim \cite{Tornheim50} and many others have given numerous relations between the values $\zeta(s,t;u)$ for triples $(s,t,u)$ of non-negative integers in the domain of convergence, but few relations between them as functions of complex variables have been found. As an exception, Tsumura \cite[Theorem 4.5]{Tsumura07} explicitly represented the function \begin{equation} \label{eq:def of Z} Z(s,t;u)=\zeta(s,t;u)+\cos(\pi t)\zeta(t,u;s) +\cos(\pi s)\zeta(u,s;t) \end{equation} in terms of the Riemann zeta function, when $s,t\in\mathbb{Z}_{\ge 0}$, $t\ge 2$ and $u\in\mathbb{C}$, except for singularities. Afterward Nakamura \cite[Theorem 1.2]{Nakamura06} gave a simpler version: for $s,t\in\mathbb{Z}_{\ge 1}$ and $u\in\mathbb{C}$ except for the singular points, \begin{align} Z(s,t;u) &= 2\sum_{h=0}^{[s/2]} \binom{s+t-2h-1}{s-2h} \zeta(2h)\zeta(s+t+u-2h) \nonumber\\ &\quad +2\sum_{k=0}^{[t/2]} \binom{s+t-2k-1}{t-2k} \zeta(2k)\zeta(s+t+u-2k), \label{eq:Nakamura's result} \end{align} where $[x]$ for $x\in\mathbb{R}$ denotes the greatest integer not exceeding $x$. This result is remarkable because it contains most of the known relations between the values $\zeta(s,t;u)$ for $s,t,u\in\mathbb{Z}_{\ge 1}$ and the Riemann zeta values (see \cite[\S 3]{Nakamura06}). The aim of this paper is to generalize it to a relation between Tornheim's double zeta functions of three complex variables. Our main result is \begin{Thm} The following relation holds on the whole space $\mathbb{C}^3$ except for the singular points of both sides: \[ Z(s,t;u)=A(s,t;u)+A(t,s;u), \] where, for $(s,t,u)\in\mathbb{C}^3$ with $s,-t, 1-t-u\ne 0,1,2,\dotsc$, \begin{equation} \label{thm-eq:definition of A} A(s,t;u) = \frac{\sin(\pi s)}{2\pi i} \int_L \cot\left(\frac{\pi(s-\eta)}2\right) \frac{\Gamma(t+\eta)\Gamma(-\eta)}{\Gamma(t)} \zeta(s-\eta)\zeta(t+u+\eta)d\eta.
\end{equation} Here, the contour $L$ is a line from $-i\infty$ to $i\infty$ indented in such a manner as to separate the poles at $\eta=s-2n,-t-n,1-t-u~(n=0,1,2,\dotsc)$ from the poles at $\eta=0,1,2,\dotsc$. \end{Thm} \begin{Rem} The singularities of $A(s,t;u)$ are located only on the subsets of $\mathbb{C}^3$ defined by the equations $t+u=1-l~(l=0,1,2,\dotsc)$. This can be easily seen by shifting the contour $L$ as follows. Let $K$ be a non-negative integer. If $\Rep(s)<K+1/2$, $-K-1/2<\Rep(t)$ and $-K+1/2<\Rep(t+u)$, then \begin{align} A(s,t;u) &= 2\sum_{k=0}^K \frac{(t)_k}{k!} \cos^2\left(\frac{\pi(s-k)}2\right) \zeta(s-k)\zeta(t+u+k) \nonumber\\ &\quad +\frac{\sin(\pi s)}{2\pi i} \int_{L_K}\cot\left(\frac{\pi (s-\eta)}2\right) \frac{\Gamma(t+\eta)\Gamma(-\eta)}{\Gamma(t)} \zeta(s-\eta)\zeta(t+u+\eta) d\eta, \label{eq:A-function shifted} \end{align} where $(t)_k=\Gamma(t+k)/\Gamma(t)$ and $L_K$ describes the vertical line from $K+1/2-i\infty$ to $K+1/2+i\infty$. Also, from this, it is clear that our functional relation is a generalization of (\ref{eq:Nakamura's result}) (see (\ref{lem-eq:A(a,t;u)})). \end{Rem} \begin{Rem} Since \[ \begin{pmatrix} Z(s,t;u)\\ Z(t,u;s)\\ Z(u,s;t) \end{pmatrix} =\begin{pmatrix} 1 & \cos(\pi t) & \cos(\pi s)\\ \cos(\pi t) & 1 & \cos(\pi u)\\ \cos(\pi s) & \cos(\pi u) & 1 \end{pmatrix} \begin{pmatrix} \zeta(s,t;u)\\ \zeta(t,u;s)\\ \zeta(u,s;t) \end{pmatrix}, \] we can write $\zeta(s,t;u)$ in terms of the $Z$-function as \begin{align} \Delta(s,t,u)\zeta(s,t;u) &=(1-\cos^2(\pi u))Z(s,t;u) \nonumber\\ &\quad +(\cos(\pi s)\cos(\pi u)-\cos(\pi t))Z(t,u;s) \nonumber\\ &\quad +(\cos(\pi t)\cos(\pi u)-\cos(\pi s))Z(u,s;t), \label{eq:representation of zeta by Z-function} \end{align} where $\Delta(s,t,u) =1-\cos^2(\pi s)-\cos^2(\pi t)-\cos^2(\pi u) +2\cos(\pi s)\cos(\pi t)\cos(\pi u)$, and so Theorem gives a new integral representation of $\zeta(s,t;u)$. Some special cases will be displayed in Proposition \ref{prop:representation of zetas}. \end{Rem} In this paper, to prove Theorem, we employ Li's method in \cite{Lie_arxiv} which gave a simple proof of (\ref{eq:Nakamura's result}). In \S 2, we will generalize some partial fraction decompositions used there to a usable form in our case. We will give a proof of Theorem in \S 3 and exhibit its applications in \S 4. In Appendix, we will show a functional equation for Euler's double zeta function. \section{Generalized partial fraction decomposition} The following partial fraction decomposition plays a fundamental role in the theory of multiple zeta values: for two independent variables $p,q$ and two positive integers $s,t$, \begin{equation} \label{eq:classical PFD} \frac1{p^sq^t} =\sum_{h=0}^{s-1} \frac{(t)_h}{h!}\frac1{p^{s-h}(p+q)^{t+h}} +\sum_{k=0}^{t-1} \frac{(s)_k}{k!}\frac1{q^{t-k}(p+q)^{s+k}}. \end{equation} In this section, we will formulate two partial fraction decompositions in the case of $s,t$ being complex numbers. \begin{Lem} \label{lem:GPFD1} Let $p,q$ be positive real numbers and let $s,t$ be complex numbers whose real parts are positive. If $s,t\ne 1,2,\dotsc$, then \begin{equation} \label{lem-eq1:GPFD1} \frac{\Gamma(s)\Gamma(t)}{p^s q^t} =I(s,t;p,r)+I(t,s;q,r), \end{equation} where $r=p+q$ and \begin{equation} \label{lem-eq2:GPFD1} I(s,t;p,r) =\frac1{2\pi i} \int_{L_{s,t}} \frac{\Gamma(1-s+\eta)\Gamma(-\eta)}{\Gamma(1-s)} \frac{\Gamma(s-\eta)}{p^{s-\eta}} \frac{\Gamma(t+\eta)}{r^{t+\eta}}d\eta. 
\end{equation} Here, the contour $L_{s,t}$ is a line from $-i\infty$ to $i\infty$ indented in such a manner as to separate the points at $\eta=s-1-m,-t-m~(m=0,1,2,\dotsc)$ from the points at $\eta=s+n,n~(n=0,1,2,\dotsc)$. \end{Lem} \begin{proof} From the usual integral representation of the gamma function, it follows that \[ I(s,t;p,r) =\frac1{2\pi i}\int_{L_{s,t}} \frac{\Gamma(1-s+\eta)\Gamma(-\eta)}{\Gamma(1-s)} \left(\iint_{(\mathbb{R}_{>0})^2} e^{-p\mu-r\nu} \mu^{s-\eta-1} \nu^{t+\eta-1} d\mu d\nu \right) d\eta. \] By a suitable choice of $L_{s,t}$, it can be shown that the order of the integrations can be interchanged. Hence, we see \[ I(s,t;p,r) = \iint_{(\mathbb{R}_{>0})^2} e^{-p\mu-r\nu} \mu^{s-1}\nu^{t-1} \left(\frac1{2\pi i}\int_{L_{s,t}} \frac{\Gamma(1-s+\eta)\Gamma(-\eta)}{\Gamma(1-s)} (\nu/\mu)^\eta d\eta\right)d\mu d\nu. \] Since the innermost integral is $(1+\nu/\mu)^{s-1}$ (see \cite[\S14.51, Corollary]{Whittaker-Watson}), we obtain \begin{align*} I(s,t;p,r) &= \iint_{(\mathbb{R}_{>0})^2} e^{-p\mu-r\nu} (\mu+\nu)^{s-1}\nu^{t-1}d\mu d\nu\\ &=\iint_{0<\nu<\mu} e^{-p\mu-q\nu} \mu^{s-1}\nu^{t-1} d\mu d\nu. \end{align*} We note that \[ I(t,s;q,r) =\iint_{0<\mu<\nu} e^{-p\mu-q\nu} \mu^{s-1}\nu^{t-1} d\mu d\nu, \] and so the right hand side of (\ref{lem-eq1:GPFD1}) is equal to \[ \iint_{(\mathbb{R}_{>0})^2} e^{-p\mu-q\nu} \mu^{s-1}\nu^{t-1} d\mu d\nu =\frac{\Gamma(s)\Gamma(t)}{p^s q^t}. \] This is the desired result. \end{proof} \begin{Lem} \label{lem:GPFD2} Let $p,q,s,t$ be as in Lemma $\ref{lem:GPFD1}$. If $p<q$ and $s,t\ne 1,2,\dotsc$, then \begin{equation} \label{lem-eq1:GPFD2} \frac{\cos(\pi s)\Gamma(s)\Gamma(t)}{p^s q^t} =J(s,t;p,q-p)+I(t,s;q,q-p), \end{equation} where \[ J(s,t;p,q-p) =\frac1{2\pi i} \int_{L_{s,t}} \frac{\Gamma(1-s+\eta)\Gamma(-\eta)}{\Gamma(1-s)} \frac{\cos(\pi(s-\eta))\Gamma(s-\eta)}{p^{s-\eta}} \frac{\Gamma(t+\eta)}{(q-p)^{t+\eta}}d\eta. \] \end{Lem} \begin{proof} For any $p,r\in\mathbb{C}^\times$, the integrand in (\ref{lem-eq2:GPFD1}) is \[ \ll {|\eta|}^{\Rep(t)-1} e^{-2\pi |\eta|} e^{\Imp(\eta)(\arg r -\arg p)} |p^{-s}||r^{-t}| \] as $\eta\rightarrow \pm i\infty$ on $L_{s,t}$, where the implied constant does not depend on $p,r,\eta$. This estimate ensures that, for any fixed $q\in\mathbb{R}_{>0}$, the right hand side of (\ref{lem-eq1:GPFD1}) is continued to $\mathbb{C}\setminus\{\pm i\mathbb{R}_{\ge0}\cup (-q\pm i\mathbb{R}_{\ge 0})\}$ as a holomorphic function in $p$, where the double-signs correspond, and hence, if $0<p<q$, then \[ \frac{e^{\pm \pi i s}\Gamma(s)\Gamma(t)}{p^s q^t} =\frac{\Gamma(s)\Gamma(t)}{(-p)^s q^t} =I(s,t;-p,q-p)+I(t,s;q,q-p) \] and \[ I(s,t;-p,q-p) =\frac1{2\pi i} \int_{L_{s,t}} \frac{\Gamma(1-s+\eta)\Gamma(-\eta)}{\Gamma(1-s)} \frac{e^{\pm \pi i(s-\eta)}\Gamma(s-\eta)}{p^{s-\eta}} \frac{\Gamma(t+\eta)}{(q-p)^{t+\eta}}d\eta. \] Thus, we obtain Lemma \ref{lem:GPFD2}. \end{proof} \section{Proof of Theorem} For simplicity of description, we suppose that $\Rep(s),\Rep(t),\Rep(u)>2$, $s,t\ne 3,4,5,\dotsc$ and the contour $L_{s,t}$ always satisfies the condition $-1/2\le \Rep(\eta)\le \Rep(s)-1/2$ for all $\eta\in L_{s,t}$. We first evaluate \[ \zeta(s,t;u) =\sum_{m,n=1}^\infty \frac1{m^s n^t (m+n)^u}. 
\] Applying Lemma \ref{lem:GPFD1} with $(p,q)=(m,n)$, we see \[ \zeta(s,t;u) =X(s,t;u)+X(t,s;u), \] where \[ X(s,t;u) =\frac1{2\pi i}\sum_{m,n=1}^\infty \int_{L_{s,t}} \frac{\Gamma(s,t;\eta)}{m^{s-\eta}(m+n)^{t+u+\eta}}d\eta \] and \[ \Gamma(s,t;\eta)= \frac{\Gamma(1-s+\eta)\Gamma(-\eta) \Gamma(s-\eta)\Gamma(t+\eta)} {\Gamma(1-s)\Gamma(s)\Gamma(t)}. \] From the condition of $L_{s,t}$, it follows that the order of summation and integration of $X(s,t;u)$ can be interchanged. As a result, we have \begin{align} X(s,t;u) &=\frac1{2\pi i} \int_{L_{s,t}} \Gamma(s,t;\eta)\zeta(s-\eta,0;t+u+\eta)d\eta \nonumber \\ &=\frac{\Gamma(s+t-1)}{\Gamma(s)\Gamma(t)} \zeta(1,0;s+t+u-1) \nonumber \\ &\quad +\frac1{2\pi i} \int_{L_{s-1,t}} \Gamma(s,t;\eta)\zeta(s-\eta,0;t+u+\eta)d\eta. \label{pf-eq:X(s,t;u)} \end{align} We next treat $\cos(\pi t)\zeta(t,u;s)+\cos(\pi s)\zeta(u,s;t)$. Set \[ a_{m,n}(s,t;u) = \frac{\cos(\pi t)}{m^t n^u (m+n)^s} +\frac{\cos(\pi s)}{n^u m^s (m+n)^t} \] for $m,n\in\mathbb{Z}_{\ge 1}$. Applying Lemma \ref{lem:GPFD2} to each term, we obtain \[ a_{m,n}(s,t;u) =b_{m,n}(s,t;u)+b_{m,n}(t,s;u), \] where \begin{align*} b_{m,n}(s,t;u) &= \frac{I(s,t;m+n,n)+J(s,t;m,n)}{\Gamma(s)\Gamma(t) n^u}\\ &=- \frac{\Gamma(s+t-1)}{\Gamma(s)\Gamma(t)} \frac1{n^{s+t+u-1}} \left(\frac1m-\frac1{m+n}\right)\\ &\quad +\frac1{2\pi i} \int_{L_{s-1,t}}\Gamma(s,t;\eta) \left( \frac1{n^{t+u+\eta}(m+n)^{s-\eta}} +\frac{\cos(\pi(s-\eta))} {m^{s-\eta}n^{t+u+\eta}} \right)d\eta. \end{align*} Put $Y(s,t;u)=\sum_{m,n=1}^\infty b_{m,n}(s,t;u)$. Then, it is easily seen that \[ \cos(\pi t)\zeta(t,u;s)+\cos(\pi s)\zeta(u,s;t) =Y(s,t;u)+Y(t,s;u), \] and that \begin{align} Y(s,t;u) &= -\frac{\Gamma(s+t-1)}{\Gamma(s)\Gamma(t)} \Big( \zeta(1,0;s+t+u-1)+\zeta(s+t+u) \Big) \nonumber\\ &\quad +\frac1{2\pi i} \int_{L_{s-1,t}} \Gamma(s,t;\eta) \Big\{\zeta(t+u+\eta,0;s-\eta) \nonumber\\ &\hspace{30mm} +\cos(\pi(s-\eta))\zeta(s-\eta)\zeta(t+u+\eta)\Big\} d\eta. \label{pf-eq:Y(s,t;u)} \end{align} Combining (\ref{pf-eq:X(s,t;u)}) and (\ref{pf-eq:Y(s,t;u)}), we have \begin{align} \lefteqn{X(s,t;u)+Y(s,t;u)} \quad& \nonumber\\ &=-\frac{\Gamma(s+t-1)}{\Gamma(s)\Gamma(t)} \zeta(s+t+u) \nonumber\\ &\quad +\frac1{2\pi i} \int_{L_{s-1,t}}\Gamma(s,t;\eta) \{1+\cos(\pi (s-\eta))\}\zeta(s-\eta)\zeta(t+u+\eta) d\eta \nonumber\\ &\quad -\frac1{2\pi i}\int_{L_{s-1,t}} \Gamma(s,t;\eta)d\eta\, \zeta(s+t+u) \label{pf-eq:X+Y} \end{align} because generally \begin{equation} \label{pf-eq:harmonic product formula} \zeta(s,0;t) +\zeta(t,0;s) =\zeta(s)\zeta(t) -\zeta(s+t). \end{equation} The second term on the right hand side of (\ref{pf-eq:X+Y}) becomes \[ \frac{\Gamma(s+t)}{s\Gamma(s)\Gamma(t)} \zeta(s+t+u) +A(s,t;u) \] by shifting the contour to $L_{s+1,t}$, and the third term is \begin{align*} &= \frac{\Gamma(s+t-1)}{\Gamma(s)\Gamma(t)} \zeta(s+t+u) -\frac1{2\pi i}\int_{L_{s,t}} \Gamma(s,t;\eta)d\eta\, \zeta(s+t+u)\\ &=\frac{\Gamma(s+t-1)}{\Gamma(s)\Gamma(t)} \zeta(s+t+u)- \frac{\Gamma(s+t)}{t\Gamma(s)\Gamma(t)} \zeta(s+t+u) \end{align*} by Barnes' lemma (see \cite[14.52]{Whittaker-Watson}). Hence, \[ X(s,t;u)+Y(s,t;u) =\left(\frac1s-\frac1t\right) \frac{\Gamma(s+t)}{\Gamma(s)\Gamma(t)}\zeta(s+t+u) +A(s,t;u). \] Thus, we have \begin{align*} Z(s,t;u) &=X(s,t;u)+X(t,s;u)+Y(s,t;u)+Y(t,s;u)\\ &=A(s,t;u)+A(t,s;u), \end{align*} when $\Rep(s),\Rep(t),\Rep(u)>2$ and $s,t\ne 3,4,5,\dotsc$. By the theory of analytic continuation, the proof of Theorem is completed. 
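For completeness, we also include a short verification of the harmonic product formula (\ref{pf-eq:harmonic product formula}) used above; the following rearrangement is standard, stated for $\Rep(s),\Rep(t)>1$, and the general case follows by analytic continuation: \[ \zeta(s,0;t)+\zeta(t,0;s) =\sum_{\substack{m,k\ge 1\\ k>m}}\frac1{m^s k^t} +\sum_{\substack{m,k\ge 1\\ k<m}}\frac1{m^s k^t} =\sum_{\substack{m,k\ge 1\\ m\ne k}}\frac1{m^s k^t} =\zeta(s)\zeta(t)-\zeta(s+t). \]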
\begin{Rem} We have used Li's method in this section, but it is possible to prove Theorem by Nakamura's original method. Indeed, all of his argument remains valid here except for the property of the Bernoulli polynomials \cite[(2.7)]{Nakamura06}, which can be proved by the partial fraction decomposition (\ref{eq:classical PFD}) in a way similar to Eisenstein's proof of the addition formulas for the trigonometric functions (see \cite[Chapter II]{Weil76} or \cite[\S 2.1]{Sczech92}), and so it can be reformulated into a form applicable to our case by (a slightly generalized) Lemma \ref{lem:GPFD1}. \end{Rem} \section{Application} In this section, we derive some new results from Theorem. Each of the following propositions can be proved independently of the others. However, the next lemma is useful for several of the applications, and so we state it first for ease of reference. \begin{Lem} \label{lem:evaluation of A} Let $a$ be an integer and $b,c$ be non-negative integers. Set \begin{equation} \label{lem-eq:definition of F} F(s,t;c) =\sum_{k=0}^c \binom{c}{k}\zeta(s-k)\zeta(t-c+k) \end{equation} for $s,t\in\mathbb{C}$. Then we have the following: $(1)$ For $t,u\in\mathbb{C}$ with $t+u\ne 1-l~(l=0,1,2,\dotsc)$, \begin{equation} \label{lem-eq:A(a,t;u)} A(a,t;u) =2\sum_{k=0}^{[a/2]} \binom{t+a-2k-1}{a-2k} \zeta(2k)\zeta(t+u+a-2k), \end{equation} where the value of any empty sum is defined to be $0$. $(2)$ For $s,u\in\mathbb{C}$ with $u\ne b+1-l~(l=0,1,2,\dotsc)$, \[ A(s,-b;u) = \sum_{k=0}^b \binom{b}{k} (\cos(\pi s)+(-1)^k) \zeta(s-k)\zeta(u-b+k). \] $(3)$ For any $s\in\mathbb{C}$, \[ \lim_{u\rightarrow -c} A(s,-b;u) =(\cos(\pi s)-(-1)^{b+c}) F(s,-c;b) +\delta_{c0}(-1)^{b+1}\zeta(s-b), \] where $\delta_{ij}$ denotes the Kronecker symbol. $(4)$ For any $s\in\mathbb{C}$, \begin{align*} \lefteqn{\lim_{t\rightarrow -b} A(s,t;-c)} \quad &\\ &=(\cos(\pi s)-(-1)^{b+c}) \left(F(s,-c;b) +\frac{(-1)^{c+1} b! c!}{(b+c+1)!} \zeta(s-b-c-1)\right)\\ &\quad +\delta_{c0}(-1)^{b+1}\zeta(s-b). \end{align*} \end{Lem} \begin{proof} (1),(2) The formulas follow immediately from (\ref{eq:A-function shifted}). (3) We apply (2) to get \begin{align*} \lim_{u\rightarrow -c}A(s,-b;u) &=(\cos(\pi s)-(-1)^{b+c}) \sum_{k=0}^{b-1} \binom{b}{k}\zeta(s-k)\zeta(-b-c+k)\\ &\quad +(\cos(\pi s)+(-1)^b)\zeta(s-b)\zeta(-c), \end{align*} where we have used the fact that $\zeta(-b-c+k)=0$ if $0\le k\le b-1$ and $k\equiv b+c \pmod 2$. Thus, by simple calculation, we obtain the result. (4) The result follows in a similar way to the above. \end{proof} We give integral representations of several zeta functions. \begin{Prop} \label{prop:representation of zetas} $(1)$ The Euler double zeta function $\zeta(s,t)$ has the following representation: \begin{align} \lefteqn{(\cos(\pi t)-\cos(\pi s))\zeta(s,t)} \quad & \nonumber\\ &=A(s,t;0)+A(t,s;0)-(1+\cos(\pi s))\zeta(s)\zeta(t) +\cos(\pi s)\zeta(s+t). \label{prop-eq:representation of Euler double zeta} \end{align} $(2)$ Let $n$ be a non-negative integer. Then, \begin{align} \lefteqn{(1+\cos(\pi s))\zeta(s)\zeta(s+2n)} \quad & \nonumber\\ &=A(s,s+2n;0)+A(s+2n,s;0) +\cos(\pi s)\zeta(2s+2n) \label{prop-eq:product of zeta 1} \end{align} and \begin{align*} \lefteqn{(1+\cos(\pi s))\zeta(s)\zeta(-s+2n)} \quad &\\ &=A(s,-s+2n;0)+A(-s+2n,s;0) +\cos(\pi s)\zeta(2n) +\delta_n(s), \end{align*} where \[ \delta_n(s)= \begin{cases} -\pi s \sin(\pi s)/12 & \text{if $n=0$},\\ -\pi \sin(\pi s)/(s-1) & \text{if $n=1$},\\ 0 & \text{otherwise}.
\end{cases} \] In particular, we obtain \begin{align*} \lefteqn{(1+\cos(\pi s))\zeta(s)^2-\cos(\pi s)\zeta(2s) =2A(s,s;0)} \quad &\\ &=\frac{2\sin(\pi s)}{2\pi i} \int_{L} \cot\left(\frac{\pi(s-\eta)}2\right) \frac{\Gamma(s+\eta)\Gamma(-\eta)}{\Gamma(s)} \zeta(s-\eta)\zeta(s+\eta)d\eta. \end{align*} $(3)$ The Witten zeta function of $SU(3)$ can be written as \begin{align} \lefteqn{2^{-s-1}(1+2\cos(\pi s))\zeta_{SU(3)}(s) =A(s,s;s)} \quad & \nonumber\\ &= \frac{\sin(\pi s)}{2\pi i} \int_{L} \cot\left(\frac{\pi(s-\eta)}2\right) \frac{\Gamma(s+\eta)\Gamma(-\eta)}{\Gamma(s)} \zeta(s-\eta)\zeta(2s+\eta)d\eta. \label{prop-eq:integral representation of Witten zeta} \end{align} \end{Prop} \begin{Rem} We can regard (\ref{prop-eq:product of zeta 1}) as a generalization of the formula \begin{align*} \lefteqn{\zeta(2l)\zeta(2m)-\frac12 \zeta(2l+2m)} \quad &\\ &=\sum_{k=0}^{\max\{l,m\}} \left\{\binom{2l+2m-2k-1}{2l-1}+\binom{2l+2m-2k-1}{2m-1}\right\} \zeta(2k)\zeta(2l+2m-2k) \end{align*} for $l,m\in\mathbb{Z}_{\ge 1}$. Indeed, taking $s=2l~(l=1,2,\dotsc)$ in (\ref{prop-eq:product of zeta 1}) and putting $m=l+n$, this follows from (\ref{lem-eq:A(a,t;u)}). \end{Rem} \begin{proof}[Proof of Proposition $\ref{prop:representation of zetas}$] (1) Substituting $u=0$ in (\ref{eq:def of Z}) and using (\ref{pf-eq:harmonic product formula}), we see \begin{align*} Z(s,t;0) &=\zeta(s)\zeta(t)+\cos(\pi t)\zeta(t,0;s) +\cos(\pi s)\zeta(0,s;t)\\ &=(\cos(\pi t)-\cos(\pi s))\zeta(t,0;s) +(1+\cos(\pi s))\zeta(s)\zeta(t)-\cos(\pi s)\zeta(s+t). \end{align*} Hence, the result is shown by Theorem. (2) Assume that $\Rep(s)>1$. Comparing the limits of the both sides of (\ref{prop-eq:representation of Euler double zeta}) as $t\rightarrow \pm s+2n$, we get \begin{align*} \lefteqn{(1+\cos(\pi s))\zeta(s)\zeta(\pm s+2n)} \quad &\\ &=A(s,\pm s+2n;0)+A(\pm s+2n,s;0) +\cos(\pi s)\zeta(s\pm s+2n)\\ &\quad -\lim_{z\rightarrow 0} \{\cos(\pi (z\pm s))-\cos(\pi s)\}\zeta(z\pm s+2n,0;s), \end{align*} where the double-signs correspond. It is clear that the limit becomes $0$ unless the double-signs are ``$-$'' and $n=0,1$. In the remaining cases, the last term of the right side is $\delta_n(s)$ because \[ \zeta(z-s+2n,0;s) =\frac1{s-1}\zeta(z+2n-1)+\frac{s}{12}\zeta(z+2n+1)+O(1) \] as $z\rightarrow 0$ (see \cite[p.425, (4.4)]{Matsumoto02}). (3) The result follows immediately from (\ref{eq:def of Z}) and Theorem. \end{proof} We next extend the parity result \cite[Theorem 2]{Huard-Williams-Zhang96} to the whole domain of convergence. \begin{Prop} \label{prop:parity result} Let $a,b,c$ be integers such that $a+b+c$ is odd. Assume that $a+c\ge 2$, $b+c\ge 2$ and $a+b+c\ge 3$. If $a+b\ge 2$, then \[ 2\zeta(a,b;c) =(-1)^a\{A(c,a;b)+A(a,c;b)\} +(-1)^b\{A(c,b;a)+A(b,c;a)\}, \] where every $A$-value is representable in the form $(\ref{lem-eq:A(a,t;u)})$. If $a+b\le 1$, then \begin{align*} 2\zeta(a,b;c) &=(-1)^a\{A^*(c,a;b)+A(a,c;b)\} +(-1)^b\{A^*(c,b;a)+A(b,c;a)\}\\ &\quad +\frac{(-1)^a 2}{(1-a-b)!} \left.\frac{d}{ds}(s+a)_{1-a-b}\right|_{s=0} \zeta(a+b+c-1). \end{align*} Here \[ A^*(c,a;b)= 2\sum_k \binom{a+c-2k-1}{c-2k}\zeta(2k)\zeta(a+b+c-2k), \] where the sum is taken over all integers $k\in[0,c/2]$ except for $k=(a+b+c-1)/2$. \end{Prop} \begin{proof} Taking the limits of the both sides of (\ref{eq:representation of zeta by Z-function}) as $u\rightarrow c$, $t\rightarrow b$ and $s\rightarrow a$ in order, we have \begin{align*} 2\zeta(a,b;c) &=(-1)^a A(a,c;b)+(-1)^b A(b,c;a)\\ &\quad +\lim_{s\rightarrow a} ((-1)^a A(c,s;b)+(-1)^b A(c,b;s)). 
\end{align*} It is easily seen that the limiting value equals $(-1)^a A(c,a;b)+(-1)^b A(c,b;a)$ if $a+b\ge 2$, and \begin{align*} &(-1)^a A^*(c,a;b)+(-1)^b A^*(c,b;a)\\ &+\frac{(-1)^a 2}{(1-a-b)!} \left.\frac{d}{ds}(s+a)_{1-a-b}\right|_{s=0} \zeta(a+b+c-1) \end{align*} if $a+b\le 1$. Thus, we obtain Proposition \ref{prop:parity result}. \end{proof} The following proposition suggests that $\zeta(s,t;u)$ can be represented as a sum of products of the Riemann zeta functions, if at least two of $s$, $t$ and $u$ are non-positive integers in the sense of the coordinate-wise limit. \begin{Prop} Let $a$, $b$ and $c$ be non-negative integers and let $s$, $t$ and $u$ be complex numbers. $(1)$ If $s,t\ne c+1-l$ $(l=0,1,2,\dotsc)$ and $s+t\ne c+2$, then $\zeta(s,t;-c)=F(s,t;c)$, where $F(s,t;c)$ is defined by $(\ref{lem-eq:definition of F})$. $(2)$ If $u\ne a+1-l,b+1-l,a+b+2$ $(l=0,1,2,\dotsc)$, then \begin{align*} \zeta(-a,-b;u) &= (-1)^{a+1}F(u,-a;b)+(-1)^{b+1}F(u,-b;a)\\ &\quad +\frac{a!b!}{(a+b+1)!}\zeta(u-a-b-1) -\delta_{a0}\zeta(u-b)-\delta_{b0}\zeta(u-a). \end{align*} $(3)$ For $s\in\mathbb{C}$ with $s\ne c+1-l,b+c+2$ $(l=0,1,2,\dotsc)$, \[ \lim_{u\rightarrow -c}\zeta(s,-b;u) =F(s,-b;c) +\frac{(-1)^{b+1}b!c!}{(b+c+1)!}\zeta(s-b-c-1). \] \end{Prop} \begin{proof} (1) This is trivial. (2) We take the limits of the both sides of (\ref{eq:representation of zeta by Z-function}) as $t\rightarrow -b$ and $s\rightarrow -a$ in order. Then, the result is a direct consequence of Lemma \ref{lem:evaluation of A}. (3) The result can be proved in a similar way to (2), but it follows easily from the representation \cite[(5.3)]{Matsumoto02} of $\zeta(s,t;u)$. \end{proof} \begin{Cor} Let $a$, $b$ and $c$ be non-negative integers. $(1)$ \[ \lim_{(s,t)\rightarrow (-a,-b)} \zeta(s,t;-c) =F(-a,-b;c). \] $(2)$ \[ \lim_{u\rightarrow -c} \zeta(-a,-b;u) =F(-a,-b;c) + \left(\frac{(-1)^{a+1}a!c!}{(a+c+1)!} +\frac{(-1)^{b+1}b!c!}{(b+c+1)!}\right) \zeta(-a-b-c-1). \] $(3)$ \[ \lim_{s\rightarrow -a}\lim_{u\rightarrow -c}\zeta(s,-b;u) =F(-a,-b;c) +\frac{(-1)^{b+1}b!c!}{(b+c+1)!}\zeta(-a-b-c-1). \] $(4)$ \[ \lim_{t\rightarrow -b}\lim_{u\rightarrow -c}\zeta(-a,t;u) =F(-a,-b;c) +\frac{(-1)^{a+1}a!c!}{(a+c+1)!}\zeta(-a-b-c-1). \] \end{Cor} \begin{Rem} Komori \cite{Komori08} studied Tornheim's double zeta values for coordinate-wise limits at non-positive integers and gave their explicit expressions in terms of generalized Bernoulli numbers. Our formulation seems to be more concrete than his. \end{Rem} To prove (2), we have to use the following lemma and the relation \[ (-1)^{a+b+c}F(-a,-b;c)-\delta_{a0}\zeta(-b-c)-\delta_{b0}\zeta(-a-c) -\delta_{a0}\delta_{b0}\delta_{c0} =F(-a,-b;c). \] \begin{Lem} Let $a$, $b$ and $c$ be non-negative integers. Then, \begin{align} \lefteqn{(-1)^{a+b} F(-a,-b;c) +(-1)^{b+c}F(-b,-c;a)+(-1)^{c+a}F(-c,-a;b)} \quad & \nonumber\\ & =\left(\frac{(-1)^c a!b!}{(a+b+1)!} +\frac{(-1)^a b!c!}{(b+c+1)!} +\frac{(-1)^b c!a!}{(c+a+1)!}\right)\zeta(-a-b-c-1) \nonumber\\ &\quad +\delta_{a0}\delta_{b0}\delta_{c0}. \label{lem-eq:convolution formula} \end{align} \end{Lem} This lemma is equivalent to Theorem 2 of Chu--Wang \cite{Chu-Wang10}. However, their formulation is quite different from ours, and so we now prove it for the reader's convenience. \begin{proof} For a non-negative integer $m$, we set \[ \tilde{P}_m(x) =\delta_{m0}+(-1)^m \frac{m!}{x^{m+1}} +2^{m+1}\sum_{k=0}^\infty (-1)^{m+k}\zeta(-m-k)\frac{(2x)^k}{k!}. 
\] We calculate the value \[ R=2^{-a-b-c-2}[x^{-1}] (\tilde{P}_a(x)-\delta_{a0}) (\tilde{P}_b(x)-\delta_{b0}) (\tilde{P}_c(x)-\delta_{c0}) \] in two ways, where $[x^{-1}]f(x)$ denotes the formal residue of a formal Laurent series $f(x)$. We first use the definition of $\tilde{P}_m(x)$ to obtain \begin{align*} R&= (-1)^{a+b} F(-a,-b;c) +(-1)^{b+c}F(-b,-c;a)+(-1)^{c+a}F(-c,-a;b)\\ &\quad -\left(\frac{(-1)^c a!b!}{(a+b+1)!} +\frac{(-1)^a b!c!}{(b+c+1)!} +\frac{(-1)^b c!a!}{(c+a+1)!}\right)\zeta(-a-b-c-1). \end{align*} We next apply Proposition 3.1 in \cite{Onodera} to get $R=\delta_{a0}\delta_{b0}\delta_{c0}$. Thus, we have (\ref{lem-eq:convolution formula}). \end{proof} We finally show the behavior of $\zeta_{SU(3)}(s)$ at each integer. \begin{Prop} \label{prop:Witten zeta values} Let $a$ be a positive integer. $(1)$ \cite[Theorem 3]{Huard-Williams-Zhang96} \[ \zeta_{SU(3)}(a) =\frac{2^{a+2}}{1+(-1)^a 2} \sum_{k=0}^{[a/2]} \binom{2a-2k-1}{a-1} \zeta(2k)\zeta(3a-k). \] $(2)$ $\zeta_{SU(3)}(0)=1/3$ and $\zeta_{SU(3)}'(0)=\log (2^{4/3}\pi)$. $(3)$ If $a$ is odd, then $\zeta_{SU(3)}(s)$ has a simple zero at $s=-a$, and \begin{equation} \label{prop-eq:SU(3)-zeta at odd} \zeta_{SU(3)}'(-a)= 2^{-a+2}\sum_{k=0}^{(a-1)/2} \binom{a}{2k} \zeta(-a-2k)\zeta'(-2a+2k) +\frac{2^{-a+1}(a!)^2}{(2a+1)!}\zeta'(-3a-1). \end{equation} In particular, $\sign(\zeta_{SU(3)}'(-a))=(-1)^{(a-1)/2}$. $(4)$ If $a$ is even, then $\zeta_{SU(3)}(s)$ has a zero of order two at $s=-a$, and \begin{equation} \label{prop-eq:SU(3)-zeta at even} \zeta_{SU(3)}''(-a) =2^{-a+2}\sum_{k=0}^{a/2} \binom{a}{2k} \zeta'(-a-2k)\zeta'(-2a+2k). \end{equation} In particular, $\sign(\zeta_{SU(3)}''(-a))=(-1)^{a/2}$. \end{Prop} \begin{Rem} The value of Witten's zeta function $\zeta_G(s)$ of each finite group $G$ at $s=-2$ coincides with the order of $G$. In this viewpoint, it is attractive to clarify the behavior of $\zeta_G(s)$ at $s=-2$ in the case of $G$ being an infinite compact topological group. In \cite{Kurokawa-Ochiai}, Kurokawa and Ochiai studied the values of Witten's zeta functions at negative integers, and proved that $\zeta_{SU(3)}(s)$ has a zero at each negative integer. Proposition \ref{prop:Witten zeta values} can be regarded as a refinement of their result. Moreover, as seen below, our proof reveals that a zero of $\zeta_{SU(3)}(s)$ at each negative integer comes from the gamma factors appearing in the left sides of (\ref{lem-eq1:GPFD1}) and (\ref{lem-eq1:GPFD2}). \end{Rem} \begin{proof}[Proof of Proposition $\ref{prop:Witten zeta values}$] We use here the integral representation (\ref{prop-eq:integral representation of Witten zeta}) of $\zeta_{SU(3)}(s)$. (1) The result is clear from (\ref{lem-eq:A(a,t;u)}). (2) By (\ref{eq:A-function shifted}), we see that, if $K$ is a non-negative integer and $-K/2+1/4<\Rep(s)<K+1/2$, then \begin{align*} \lefteqn{2^{-s-1}(1+2\cos(\pi s))\zeta_{SU(3)}(s)} \quad & \nonumber\\ &=2\sum_{k=0}^{K} \frac{(s)_k}{k!}\cos^2\left(\frac{\pi (s-k)}2\right) \zeta(s-k)\zeta(2s+k) +\frac{\sin(\pi s)}{\Gamma(s)} R_K(s), \label{pf-eq:Witten zeta} \end{align*} where $R_K(s)$ is a holomorphic function. We note that every term except for the term with $k=0$ has a zero of order at least two at $s=0$. Hence, the values at $s=0$ can be immediately calculated. (3) Set $K=2a+1$. If $a$ is odd, then the terms with $k=0,1,\dotsc,a$ satisfying $k\equiv a \pmod 2$ and the term with $k=2a+1$ have a simple zero at $s=-a$, and the others have a zero of order at least two. 
Hence, we can easily obtain the first part of the result. The last part follows from the functional equation of the Riemann zeta function. Indeed, we can show that the sign of each term on the right side of (\ref{prop-eq:SU(3)-zeta at odd}) coincides with $(-1)^{(a-1)/2}$. (4) Put $K=2a+1$ again. In the same way, we see that $\zeta_{SU(3)}(s)$ has a zero of order at least two at $s=-a$ if $a$ is even. In order to determine the multiplicity of the zero, we now show (\ref{prop-eq:SU(3)-zeta at even}). Assume that $0<\varepsilon<1/2$. Set \[ f(s,\eta) =\frac{\sin(\pi s)}{\Gamma(s)} \cot\left(\frac{\pi (s-\eta)}2\right) \Gamma(s+\eta)\Gamma(-\eta) \zeta(s-\eta)\zeta(2s+\eta). \] Then, by shifting the contour, we obtain the following expression of $\zeta_{SU(3)}(s)$ which is valid around $s=-a$: \[ 2^{-s-1}(1+2\cos(\pi s))\zeta_{SU(3)}(s) = -\sum_{k=0}^{a/2} U_k(s) +\sum_{l=0}^{a/2-1} V_l(s) +W(s) +I(s), \] where $ U_k(s) = \Res_{\eta=k} f(s,\eta)$, $V_l(s) = \Res_{\eta=-s-l} f(s,\eta)$, $W(s) =\Res_{\eta=1-2s}f(s,\eta)$ and \[ I(s)= \frac1{2\pi i}\int_{C_\varepsilon} f(s,\eta)d\eta \] whose contour $C_\varepsilon$ describes the union of $C_\varepsilon^{(1)}:a/2-i\infty \rightarrow a/2-i\varepsilon$, $C_\varepsilon^{(2)}:a/2+\varepsilon e^{i\theta} (\theta:-\pi/2 \rightarrow \pi/2)$ and $C_\varepsilon^{(3)}:a/2+i\varepsilon \rightarrow a/2+i\infty$. We here remark that the poles at $\eta=k,s-2m,-s-a/2-m~(k=0,1,\dotsc,a/2;m=0,1,2,\dotsc)$ lie on the left of the contour $C_\varepsilon$ and the poles at $\eta=1-2s,-s-l,a/2+n ~(l=0,1,\dotsc,a/2-1;n=1,2,\dotsc)$ lie on the right. Hence, \[ 2^{a-1} 3\,\zeta_{SU(3)}''(-a) = -\sum_{k=0}^{a/2} U_k''(-a) +\sum_{l=0}^{a/2-1} V_l''(-a) +W''(-a) +I''(-a). \] By simple calculation, we first see that, for $k=0,1,\dotsc,a/2$ and $l=0,1,\dotsc,a/2-1$, \begin{align*} U_k''(-a) &= \begin{cases} \displaystyle -8 \binom{a}{k} \zeta'(-a-k)\zeta'(-2a+k) & \text{if $k$ is even},\\ \displaystyle \pi^2 \binom{a}{k} \zeta(-a-k)\zeta(-2a+k) & \text{if $k$ is odd}, \end{cases} \\ V_l''(-a) &= \begin{cases} \displaystyle 4 \binom{a}{l} \zeta'(-a-l)\zeta'(-2a+l) & \text{if $l$ is even},\\ \displaystyle -2\pi^2 \binom{a}{l} \zeta(-a-l)\zeta(-2a+l) & \text{if $l$ is odd}, \end{cases} \end{align*} and \[ W''(-a)=\frac{3\pi^2}{2} \frac{(a!)^2}{(2a+1)!}\zeta(-3a-1). \] We next evaluate \[ I''(-a) =\frac1{2\pi i}\int_{C_{\varepsilon}}f''(-a,\eta)d\eta, \] where $f''$ means $(\partial/\partial s)^2 f$. Since the integrals on $C_\varepsilon^{(1)}$ and $C_\varepsilon^{(3)}$ cancel each other out, we obtain \begin{align*} I''(-a) &=\frac1{2\pi i}\int_{C_\varepsilon^{(2)}} \Res_{\eta=a/2}f''(-a,\eta) \frac{d\eta}{\eta-a/2}\\ &\quad +\frac1{2\pi i}\int_{C_\varepsilon^{(2)}} \left(f''(-a,\eta)- \Res_{\eta=a/2}f''(-a,\eta)\cdot \frac1{\eta-a/2} \right)d\eta. \end{align*} We note that the integrand in the second integral is holomorphic at $\eta=a/2$, and so the second integral tends to zero as $\varepsilon$ tends to zero. Since $I(s)$ is independent of the choice of $\varepsilon$, we get \[ I''(-a)=\frac12 \Res_{\eta=a/2} f''(-a,\eta) =D_1(a)+D_2(a), \] where \begin{align*} D_1(a) &= \begin{cases} \displaystyle -\frac{\pi^2}2\binom{a}{a/2}\zeta(-3a/2)^2 & \text{if $a\equiv 2 \pmod 4$},\\ 0 & \text{if $a\equiv 0 \pmod 4$}, \end{cases} \\ D_2(a) &= \begin{cases} 0 & \text{if $a\equiv 2 \pmod 4$},\\ \displaystyle -2 \binom{a}{a/2} \zeta'(-3a/2)^2 & \text{if $a\equiv 0 \pmod 4$}. 
\end{cases} \end{align*} Combining the above results, we have \begin{align*} - \lefteqn{\sum_{\substack{0\le k\le a/2\\ k:\text{odd}}} U_k''(-a) +\sum_{\substack{0\le l\le a/2-1\\ l:\text{odd}}} V_l''(-a) +W''(-a)+D_1(a)} \quad &\\ &= -\frac{3\pi^2}{2} \sum_{k=0}^a \binom{a}{k} \zeta(-a-k)\zeta(-2a+k) +\frac{3\pi^2}{2} \frac{(a!)^2}{(2a+1)!}\zeta(-3a-1)\\ &=0, \end{align*} where in the last step we have used (\ref{lem-eq:convolution formula}) with $a=b=c$. Moreover, we see \begin{align*} -\lefteqn{\sum_{\substack{0\le k\le a/2\\ k:\text{even}}} U_k''(-a) +\sum_{\substack{0\le l\le a/2-1\\ l:\text{even}}} V_l''(-a) +D_2(a)} \quad &\\ &= 6\sum_{k=0}^{a/2} \binom{a}{2k} \zeta'(-a-2k)\zeta'(-2a+2k). \end{align*} Thus, we obtain (\ref{prop-eq:SU(3)-zeta at even}). In the same way as (3) above, we get $\sign(\zeta_{SU(3)}''(-a))=(-1)^{a/2}$, which completes the proof of Proposition \ref{prop:Witten zeta values}. \end{proof} \appendix \section{A functional equation for Euler's double zeta function} The $A$-function (\ref{thm-eq:definition of A}) has not been found in previous papers on multiple zeta functions. However, as seen in the next proposition, $A(s,t;0)$ is related to the functional equation of $\zeta(s,t)=\zeta(t,0;s)$ which was obtained by Matsumoto \cite[Theorem 1]{Matsumoto04}. \begin{Prop} \label{prop:functional equation} Set \[ h(s,t)=\zeta(s,t)-\frac{\Gamma(1-t)}{\Gamma(s)} \Gamma(s+t-1)\zeta(s+t-1). \] Then, we have \begin{align} \frac{h(s,t)}{(2\pi)^{s+t-1}\Gamma(1-t)} &=\cos\left(\frac{\pi}2(s+t-1)\right) \frac{h(1-t,1-s)}{\Gamma(s)} \nonumber\\ &\quad +\sin\left(\frac{\pi}2(s+t-1)\right) \frac{\Gamma(1-s)}{\pi}A(1-s,1-t;0). \label{prop-eq:functional equation} \end{align} In particular, the second term on the right side of {\upshape (\ref{prop-eq:functional equation})} vanishes on the hyperplane $s+t=2k+1$ $(k\in\mathbb{Z}\setminus\{0\})$ $($cf.\ \cite[Theorem 2.2]{Komori-Matsumoto-Tsumura10}$)$. \end{Prop} \begin{Rem} Firstly, the function $g(u,v)$ in Matsumoto's paper coincides with $h(v,u)$. Secondly, it may seem that the singularities of $\zeta(s,t)$ are located on the hyperplanes $s=1-l$ and $s+t=2-l~(l=0,1,2,\dotsc)$. However, the singularities on $s=-l$ and $s+t=-1-2l~(l=0,1,2,\dotsc)$ are fake, namely, the singularities of $\zeta(s,t)$ are only located on the hyperplanes $s=1$, $s+t=1$ and $s+t=2-2l~(l=0,1,2,\dotsc)$. This can be confirmed, for instance, by (\ref{eq:A-function shifted}) and (\ref{prop-eq:representation of Euler double zeta}). Hence, the last part of the proposition is justified. \end{Rem} \begin{proof}[Proof of Proposition {\upshape \ref{prop:functional equation}}] We first recall the usual integral representation of $\zeta(s,t)$ (cf.\ \cite[(5.2)]{Matsumoto02}): \[ \zeta(s,t)= \frac1{2\pi i}\int_{(c)} \frac{\Gamma(s+\eta)\Gamma(-\eta)}{\Gamma(s)} \zeta(t-\eta)\zeta(s+\eta)d\eta \] for $s,t\in\mathbb{C}$ with $\Rep(s)>1$ and $\Rep(t)>1$, where $-\Rep(s)+1<c<0$ and the contour $(c)$ describes the line from $c-i\infty$ to $c+i\infty$. 
Since the residue of the integrand at $\eta=t-1$ is \[ -\frac{\Gamma(1-t)}{\Gamma(s)}\Gamma(s+t-1)\zeta(s+t-1) \] unless $t=1,2,3,\dotsc$, we shift the contour to obtain \[ h(s,t)= \frac1{2\pi i}\int_{C} \frac{\Gamma(s+\eta)\Gamma(-\eta)}{\Gamma(s)} \zeta(t-\eta)\zeta(s+\eta)d\eta \] for $s,t\in\mathbb{C}$ with $s\ne1-k$ and $t\ne k$ ($k=0,1,2,\dotsc$), where the contour $C$ is a line from $-i\infty$ to $i\infty$ indented in such a manner as to separate the points at $\eta=-s+1-l, t-l~(l=0,1,2,\dotsc)$ from the points at $\eta=0,1,2,\dotsc$. By the functional equation of the Riemann zeta function, the integrand is equal to $(2\pi)^{s+t-1}\Gamma(1-t)$ times \begin{align*} & \cos\left(\frac{\pi}2(s+t-1)\right) \frac{\Gamma(1-t+\eta)\Gamma(-\eta)}{\Gamma(s)\Gamma(1-t)} \zeta(1-s-\eta)\zeta(1-t+\eta) \\ & +\sin\left(\frac{\pi}2(s+t-1)\right) \frac{\Gamma(1-s)}{\pi} \sin(\pi (1-s))\\ &\quad \times \cot\left(\frac{\pi}2(1-s-\eta)\right) \frac{\Gamma(1-t+\eta)\Gamma(-\eta)}{\Gamma(1-t)} \zeta(1-s-\eta)\zeta(1-t+\eta). \end{align*} Thus, we obtain (\ref{prop-eq:functional equation}). \end{proof} We now compare our result with the result of Matsumoto to obtain a new representation of $A(s,t;0)$. For $(s,t)\in\mathbb{C}^2$ with $\Rep(s)<0$ and $\Rep(t)>1$, set \[ F_{\pm}(s,t) =\sum_{k=1}^\infty \sigma_{s+t-1}(k)\Psi(t,s+t;\pm 2\pi i k), \] where $\sigma_\nu(k)=\sum_{d|k}d^{\nu}$ and $\Psi(\alpha,\gamma;z)$ is the confluent hypergeometric function of the second kind. It is known that $F_{\pm}(s,t)$ can be continued meromorphically to the whole space $\mathbb{C}^2$. \begin{Cor} The function $A(s,t;0)$ can be represented in terms of the $F_{\pm}$-functions: \[ 2\Gamma(s)A(s,t;0) =(2\pi i)^{s+t} F_+(s,t)+(-2\pi i)^{s+t}F_-(s,t). \] \end{Cor} \begin{proof} Propositions 1 and 2 in \cite{Matsumoto04} show \[ \frac{h(s,t)}{(2\pi)^{s+t-1}\Gamma(1-t)} =e^{\pi i(s+t-1)/2}F_+(t,s)+e^{\pi i(1-s-t)/2}F_-(t,s) \] and \begin{equation} \label{pf-eq:functional equation of F} F_{\pm}(1-t,1-s)=(\pm 2\pi i)^{s+t-1}F_{\pm}(s,t), \end{equation} respectively. These suggest \[ \frac{h(1-t,1-s)}{\Gamma(s)} =F_+(t,s)+F_-(t,s) \] and so we see \begin{align*} \frac{h(s,t)}{(2\pi)^{s+t-1}\Gamma(1-t)} &=\cos\left(\frac{\pi}2(s+t-1)\right) \frac{h(1-t,1-s)}{\Gamma(s)}\\ &\quad +i\sin\left(\frac{\pi}2(s+t-1)\right) (F_+(t,s)-F_-(t,s)). \end{align*} By comparing this with (\ref{prop-eq:functional equation}), we have $\Gamma(1-s)A(1-s,1-t;0)=\pi i(F_+(t,s)-F_-(t,s))$. Thus, we use (\ref{pf-eq:functional equation of F}) to obtain the result. \end{proof} \end{document}
\begin{document} \title{A Bayesian Account of Quantum Histories}\author{Thomas Marlow\thanks{email: [email protected]}\\ \emph{School of Mathematical Sciences, University of Nottingham,}\\ \emph{UK, NG7 2RD}} \maketitle \begin{abstract} We investigate whether quantum history theories can be consistent with Bayesian reasoning and whether such an analysis helps clarify the interpretation of such theories. First, we summarise and extend recent work categorising two different approaches to formalising multi-time measurements in quantum theory. The standard approach consists of describing an ordered series of measurements in terms of history propositions with non-additive `probabilities'. The non-standard approach consists of defining multi-time measurements to consist of sets of exclusive and exhaustive history propositions and recovering the single-time exclusivity of results when discussing single-time history propositions. We analyse whether such history propositions can be consistent with Bayes' rule. We show that a certain class of histories is given a natural Bayesian interpretation, namely the linearly positive histories originally introduced by Goldstein and Page. Thus we argue that this gives a certain amount of interpretational clarity to the non-standard approach. We also attempt a justification of our analysis using Cox's axioms of probability theory. \end{abstract}
\textbf{Keywords}: Bayesian Probability, Consistent Histories, Linear Positivity \textbf{PACS}: 02.50.Cw, 02.50.Tt, 03.67.-a, 03.65.Ca.
\section*{Outline} The basic premise of this paper is rather simple. We propose to apply Bayesian probability rules to quantum history theories and see if we get any form of consistency. There are a few reasons why this is a pedagogically useful tack to take. Firstly, Bayesian probability is pedagogically useful in its own right as it provides a framework for thinking about probabilities that is rather natural in a human sense---it accommodates, in different situations, all uses of the term `probability' including probabilistic inference and relative frequencies \cite{JaynesBOOK}. Secondly, quantum history theories are specifically designed with the idea of applying such probabilities to closed systems, without necessarily discussing observers and their experiments; thus it is natural to interpret such probabilities in a Bayesian manner rather than necessarily discussing the relative frequencies of experiments. In fact Bayesian probability can accommodate almost all notions of relative frequency presently used in the literature \cite{JaynesBOOK}, whereas theories of relative frequency have to be designed for the problem at hand. Thirdly, even when discussing quantum histories instrumentally such a Bayesian interpretation might help to clarify the interpretation of Standard Quantum Theory (SQT) \cite{Mana04}. It will turn out that we can apply a very natural Bayesian interpretation to a certain class of quantum histories. So, although it may seem na\"{\i}ve at first, we will get out something quite profound: a natural interpretation of the probabilities of certain history propositions. In fact, as we will show, we can justify our analysis using Cox's axioms of probability theory \cite{CoxBOOK} to show that the standard notions of probability in the consistent histories programme aren't necessarily `good' notions of probability, but there are alternative notions.
Before we get stuck into our Bayesian analysis, let us briefly discuss why the foundational interpretation of probability really matters when interpreting such theories. Then we will introduce quantum history theories and try and analyse what consistency we can get through Bayesian reasoning. \section*{Quantum Probabilities} In Standard Quantum Theory (SQT) one usually invokes multi-time measurements as a succession of single-time measurements. If we use the von Neumann measurement formalism then the possible exclusive propositions at each time are represented by the projection operators associated with the eigenstates of a Hermitian operator. Thus a non-relativistic history is represented as a succession of such projection operators each labelled by a time. The probability of each such history is, in the standard formalism, given by the probability trace formula. For example, for a three-time succession ($t_1 < t_2 < t_3$) of von Neumann measurements $\{\hat{A}(t_1), \hat{B}(t_2), \hat{C}(t_3)\}$ given an initial state $\rho$, the probability of a history $\{\hat{a}_i(t_1), \hat{b}_j(t_2), \hat{c}_k(t_3)\}$ is given by: \begin{equation} p(a_i, b_j, c_k \vert \rho) = \mbox{tr}(\hat{c}_k(t_3) \hat{b}_j(t_2) \hat{a}_i(t_1) \rho \hat{a}_i(t_1) \hat{b}_j(t_2)). \label{Trace} \end{equation} In the above equation the results of each measurement are conveniently represented by Heisenberg picture projection operators. For example, the results of the von Neumann measurement $\hat{A}$ at time $t_1$ are represented by the set of Heisenberg picture projection operators $\hat{a}_i(t_1) = U^\dagger(t_1-t_0) \hat{a}_i U(t_1-t_0)$ where $\hat{a}_i$ are the relevant Schr\"odinger picture operators and $t_0$ is the fiducial time. Similarly for the other von Neumann measurements $\hat{B}$ and $\hat{C}$. This is the standard way that multi-time measurements are invoked in non-relativistic SQT. Classically, the additivity of propositions in Bayesian probability theory is a contextual property of propositions. This can be seen in the pedagogical example given recently by Mana \cite{Mana04}. Take an urn that contains some red balls and some wooden balls; the urn is shaken and an observer takes out a ball. We can ask for the probabilities of the following two propositions: ``the ball is red'' and ``the ball is wooden''. Only if it is the case that the balls cannot be both wooden and red then these two propositions are exclusive. Thus, we can see that propositions are not \emph{inherently} exclusive. Rather, classically at least, exclusivity is a contextual property of propositions as there are ways that these two propositions could be non-exclusive (say, some balls are both red and wooden). Therefore, since there is no mention of contexts in the standard analysis, one is not \emph{necessarily} discussing exclusive propositions when discussing the possible history propositions that arise through an ordered succession of von Neumann measurements. Or, one might, ambiguously, be implicitly invoking many possible contexts that need to be formally differentiated. 
The \emph{only} way, classically, we have to define whether two propositions $A$ and $B$ are exclusive is to equate exclusivity with the additivity of their probabilities: \begin{equation} p(A \cup B \vert I) = p(A \vert I) + p(B \vert I) \label{ADD} \end{equation} \noindent such that $A \cap B = \emptyset$ where `$\emptyset$' is the null proposition that is always false in standard Boolean logic---when $A \cap B = \emptyset$ we say that $A$ and $B$ are disjoint. So, if two propositions are both additive and disjoint we will simply call them `exclusive' with respect to context $I$. If this is not satisfied then propositions $A$ and $B$ are called `not-exclusive' with respect to context $I$. Classical probability theory and SQT therefore differ by how they treat `not-exclusive' propositions. Note that we use the term `exclusive', throughout this paper, in a pedagogically distinct way to how it is normally used in the quantum histories literature---where exclusive is synonymous with disjoint. We wish to differentiate `disjoint' and `exclusive' propositions because, classically, exclusive propositions must always be additive so we wish to reserve the word `exclusive' only for contextually additive propositions. This means that when we generalise we keep the standard notion of exclusivity and are forced to name any other tentative notion something else so as to avoid confusion. In standard Bayesian probability theory exclusive and disjoint are considered equivalent notions (the former being about probabilities and the latter about the propositions themselves) but when we get into problems with non-additivity we should differentiate these notions. Obviously, single-time von Neumann measurements consist of sets of `exclusive' propositions (both additive and disjoint). When we wish to differentiate our introduced notion of conventional probabilistic exclusivity from other presumed notions of exclusivity we will do so explicitly, otherwise we will simply use the term exclusive in the standard contextual probabilistic way we have introduced above. When we come to discuss quantum history theories it will turn out that exclusive propositions are also disjoint, but disjoint propositions aren't necessarily exclusive (since exclusivity is taken to be a contextual probabilistic property of propositions whereas disjointness is something that is defined on the proposition algebra). Here we use standard Bayesian notation such that all probabilities are defined with respect to a specific context $I$ for exactly the reason noted above: propositions $A$ and $B$ are only well-defined in a given context exactly because their meaning is contextual\footnote{We use the term `context' in the colloquial manner used by Bayesian theorists rather than in the technical sense of the Kochen-Specker theorem.}. By invoking contexts explicitly we hope to clarify the meaning of such statements. Since the probabilities given by (\mathbbm{R} \mbox{e }f{Trace}) are not necessarily additive then we are, in the standard interpretation, having to invoke a different kind of exclusivity to that invoked when requiring both (\mathbbm{R} \mbox{e }f{ADD}) and disjointness. 
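As a minimal illustration of this non-additivity (a standard single-qubit example, included here only for concreteness), take a qubit with trivial dynamics prepared in the state $\rho = \vert + \rangle \langle + \vert$, where $\vert \pm \rangle = (\vert 0 \rangle \pm \vert 1 \rangle)/\sqrt{2}$, and measure in the $\{\vert 0 \rangle, \vert 1 \rangle\}$ basis at $t_1$ and in the $\{\vert + \rangle, \vert - \rangle\}$ basis at $t_2$. The obvious two-time analogue of (\ref{Trace}) gives \begin{equation} p(0, + \vert \rho) = \mbox{tr}(\vert + \rangle \langle + \vert \, \vert 0 \rangle \langle 0 \vert \, \rho \, \vert 0 \rangle \langle 0 \vert) = \frac{1}{4}, \qquad p(1, + \vert \rho) = \frac{1}{4}, \end{equation} whereas replacing the projectors at $t_1$ by the identity (i.e.\ performing no intermediate measurement) assigns probability $\langle + \vert \rho \vert + \rangle = 1$ to the result $+$ at $t_2$. The two disjoint two-time propositions thus carry a total probability of $\frac{1}{2}$ rather than the probability $1$ of their coarse-graining, so additivity in the sense of (\ref{ADD}) fails for them in this situation.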
The standard von Neumann interpretation of exclusivity comes about because each single-time measurement consists of explicitly exclusive propositions and it is rather natural (although we argue that it is perhaps dubious) to \emph{presume} that successions of these single-time measurements give well-defined exclusive history propositions---even though the non-additivity of the probabilities of such history propositions suggests that they are not probabilistically `exclusive' as we have defined above. So, as Anastopoulos has argued \cite{Anast04}, there is a dichotomy for multi-time measurements that we must account for. We have a choice between the two following paradigms: \begin{enumerate} \item Postulate single-time exclusivity of results in the standard manner and presume some na\"{\i}ve kind of exclusivity for multi-time propositions that arise from a series of single-time measurements. This is the standard interpretation of von Neumann measurements. \label{1} \item Postulate the exclusivity of some history propositions, using the standard notion of exclusivity of probability theory as we have defined above, and get single-time exclusivity of results as a corollary by discussing single-time history propositions. \label{2} \end{enumerate} In what follows we shall refer to these two interpretations as \mathbbm{R} \mbox{e }f{1} and \mathbbm{R} \mbox{e }f{2} respectively. Anastopoulos argues, in \cite{Anast04}, that neither interpretation of multi-time measurements have yet been convincingly promoted. We are, of course, used to interpretation \mathbbm{R} \mbox{e }f{1} and not used to interpretation \mathbbm{R} \mbox{e }f{2}. If we use interpretation \mathbbm{R} \mbox{e }f{1} then, as Anastopoulos \cite{Anast04} shows, we are forced to admit a dependency of the probabilities (treated as relative frequencies) on the resolution of the apparatus we use, exactly because such `probabilities' are non-additive and thus aren't exclusive in the conventional sense (nor are they not-exclusive in the conventional sense). So, if we use finer-grained projection operators we get different probabilities out for given sample sets. It is interesting to investigate interpretation \mathbbm{R} \mbox{e }f{2} simply because it is not usually considered and cannot be rejected \emph{a priori}. It is the conflict implicit in the noted dichotomy which makes us so uncomfortable with multi-time measurements. In interpretation \mathbbm{R} \mbox{e }f{1}, \emph{any} ordered set of measurements, presuming that the relevant apparatus can be made, is well-defined---this suggests an amazing amount of freedom that nature gives to experimental physicists. \section*{A Pedagogical Account of Consistent Histories} There does exist a quantum formalism that implicitly uses interpretation \mathbbm{R} \mbox{e }f{2}; namely the Consistent Histories (CH) programme \cite{Grif84, Omnes88, GH90, Isham94}. Rarely, however, is interpretation \mathbbm{R} \mbox{e }f{2} explicitly used by consistent historians. Rather, CH is usually invoked in a non-instrumental fashion (some exceptions to this trend are \cite{Hartle04} and \cite{GriffithsBOOK}). Interpretation \mathbbm{R} \mbox{e }f{2} is also in opposition to the general claim by some consistent historians that CH solves the measurement problem. Interpretation \mathbbm{R} \mbox{e }f{2} is a way to \emph{re-define} measurement rather than solve the measurement problem \emph{per se}. Let us give a brief introduction to the CH programme; the basic setup of CH is as follows. 
One defines a set of homogeneous history propositions; following \cite{Isham94}, each homogeneous history proposition $\alpha$ consists of an ordered tensor product of time-labelled projection operators just like in SQT---for example: \begin{equation} \alpha := \hat{\alpha}_{t_n}(t_n) \otimes \hat{\alpha}_{t_{n-1}}(t_{n-1}) \otimes ...\hat{\alpha}_{t_2}(t_2) \otimes \hat{\alpha}_{t_1}(t_1) \end{equation} \noindent where each $\hat{\alpha}$ is a standard single-time projection operator. Here we use the Heisenberg picture. The ordered set of times over which an homogeneous history is defined is called its temporal support. We can then naturally define the class operator \cite{Isham94} for such a history to be: \begin{equation} C_{\alpha} := \hat{\alpha}_{t_n}(t_n) \hat{\alpha}_{t_{n-1}}(t_{n-1}) ...\hat{\alpha}_{t_2}(t_2) \hat{\alpha}_{t_1}(t_1) \end{equation} \noindent and the probability formula (\mathbbm{R} \mbox{e }f{Trace}) becomes: \begin{equation} p(\alpha \vert I) = \mbox{tr}(C_{\alpha} \rho C^{\dagger}_{\alpha}). \end{equation} It is natural to extend the definition of history propositions to include inhomogeneous history propositions \cite{Isham94}. Inhomogeneous history propositions are defined by combining homogeneous history propositions in novel, but rather natural, ways. We will not repeat such arguments here (see Isham's original work \cite{Isham94}) because it is sufficient simply to note the following. One can define `or' and `not' operations for homogeneous history propositions in a rather natural manner; such operations are denoted `$\vee$' and `$\neg$' respectively. These operations are not the standard notions of `or' and `not' in Boolean logic, but are defined naturally on the history algebra. The standard `and' operation `$\wedge$' takes homogeneous history propositions into homogeneous history propositions and behaves exactly like the Boolean `and' operation should. We can also naturally define a notion of disjointness; we denote such a relation `$\perp$'. Note that we have explicitly been calling these histories `propositions'; this is because, in analogy with Bayesian probability theory, we are going to treat them as propositions in the standard sense to see if we get any consistency via Bayesian reasoning. When two homogeneous history propositions $\alpha$ and $\beta$ are disjoint (such that they have the same temporal support) then the class operator for the history $\alpha \vee \beta$ is simply: \begin{equation} C_{\alpha \vee \beta} = C_{\alpha} + C_{\beta}. \end{equation} We define two history propositions to be `exclusive' if their probabilities are additive under this `$\vee$' operation and such that, in the \emph{same} context, the probability of both being the case is zero. A sufficient condition for two disjoint history propositions to have additive probabilities, and thus be exclusive propositions, is defined using what is called the decoherence functional $d$. For SQT the decoherence functional acting on two homogeneous history propositions $\alpha$ and $\beta$ is defined as follows: \begin{equation} d_{\rho, H}(\alpha, \beta) := \mbox{tr}(C_{\alpha} \rho C_{\beta}^\dagger). \label{dFUNC} \end{equation} There is an ambiguity in how we have defined homogeneous history propositions because we have used the Heisenberg picture in their definition, but obviously one could use Schr\"{o}dinger picture projection operators and absorb all the dynamics into the definition of the decoherence functional. 
So, the subscripts $\rho$ and $H$ refer to such a dependence of the decoherence functional on the initial state and the Hamiltonian. We will drop these subscripts from now on and such dependence is kept implicit. One can consider the initial state and dynamics constant throughtout the following discussion. Obviously, $d(\alpha, \alpha)$ has the same form as (\mathbbm{R} \mbox{e }f{Trace}). If we take two homogeneous history propositions $\alpha^i$ and $\alpha^j$ then their respective probabilities ($d(\alpha^i, \alpha^i)$ and $d(\alpha^j, \alpha^j)$ respectively) are additive if $d(\alpha^i, \alpha^j) = 0$---if this is the case then we will call these two history propositions `exclusive' with respect to a context $I$. We call the context `$I$' simply to give it a name and invoke a context explicitly---we reserve the right to change its name, or use a different context, later. $I$ obviously must specify $\rho$ and $H$, but it may also specify further information at present left unspecified. A set of such history propositions $\{\alpha^i: i=1,2,...,N\}$ is called `$d$-consistent' \cite{Isham94} when all such propositions are mutually `exclusive' and exhaustive with respect to context $I$. Single-time SQT is recovered by noting that von Neumann single-time measurements are $d$-consistent sets of single-time history propositions. For two disjoint histories we have that $d(\alpha^i \wedge \alpha^j, \alpha^i \wedge \alpha^j) = 0$. A set of disjoint histories that form a partition of unity such that $\sum_i d(\alpha^i, \alpha^i) = 1$ is simply called a `complete' set. If we use interpretation \mathbbm{R} \mbox{e }f{2} then it is clear that we could equate multi-time measurements with $d$-consistent sets. However, rather than call a $d$-consistent set a measurement (which might get quite confusing when discussing the distinction between interpretations \mathbbm{R} \mbox{e }f{1} and \mathbbm{R} \mbox{e }f{2}) we will call a $d$-consistent set a `null-counterfactual'. A null-counterfactual consists of an exclusive and exhaustive set of propositions. A counterfactual statement is a statement about what would have happened in a different context; a null-counterfactual statement is simply the trivial statement about what would happen if the same context was invoked. Obviously in quantum theory a null-counterfactual can have many different exclusive results because of its probabilistic nature \cite{Grif98}. So a null-counterfactual is almost like a definition of `context', but we do not wish to use the term `context' because of its more technical use in SQT and because, in what follows, we use the term in the more colloquial Bayesian manner. If such a von Neumann measurement is repeated using an identical setup then one of the possible propositions is, exclusively, the case; so, a standard single von Neumann measurement is a null-counterfactual. The same is considered true for null-counterfactuals consisting of more general history propositions. A series of von Neumann measurements does not necessarily define a null-counterfactual. Null-counterfactuals can also include inhomogeneous history propositions. Some history propositions cannot be defined in any $d$-consistent set; such history propositions are to be called non-$d$-realisable. A necessary and sufficient condition for a history proposition $\alpha^i$ to be $d$-realisable is thus: \begin{equation} d(\alpha^i, \alpha^i) + d(\neg \alpha^i, \neg \alpha^i) = 1. 
\end{equation} Although it is not yet clear which interpretation which out of \mathbbm{R} \mbox{e }f{1} or \mathbbm{R} \mbox{e }f{2} is physically correct, it is pedagogically interesting to investigate interpretation \mathbbm{R} \mbox{e }f{2} because it is not usually considered and cannot be rejected \emph{a priori} \cite{Anast04}. Adopting \mathbbm{R} \mbox{e }f{2} is tempting because of its clear and unambiguous definition of exclusivity and null-counterfactual statements. In interpretation \mathbbm{R} \mbox{e }f{1} one might run an experiment and a certain history proposition is realised; one may then ask a null-counterfactual question: ``what history propositions could be realised if you repeated the experiment in \emph{exactly the same manner}?'' and you are forced to \emph{presume} that any distinct history proposition that is realised upon a second run is exclusive to the one you first received even though it is not probabilistically exclusive in the standard sense. Using null-counterfactuals in interpretation \mathbbm{R} \mbox{e }f{2} one bypasses this problem (as such a definition of exclusivity is uncontroversial). Just as we define a single-time measurement to be some kind of context in which an exclusive set of single-time propositions can be realised, so it seems we might wish to define a multi-time measurement to be related to contexts in which an exclusive set of history propositions can be realised. Just as von Neumann measurements can be convexly mixed we might assume that more general null-counterfactuals can be mixed. If we use interpretation \mathbbm{R} \mbox{e }f{1} then it is clear that a succession of von Neumann measurements might not be defined by a $d$-consistent set of homogeneous history propositions, but each homogeneous history proposition might be $d$-realisable. In such a case then perhaps, one might na\"{\i}vely think, we can define such a multi-time measurement using interpretation \mathbbm{R} \mbox{e }f{2} by mixing null-counterfactuals. Note that in the CH interpretation of quantum systems the \emph{values} of probabilities of history propositions are independent of the $d$-consistent set they are taken to be part of. This is rather analogous to the Gleason non-contextuality of single-time SQT \cite{ILS94}. It is exactly this type of non-contextuality that has, in the history of SQT, confused the distinction between interpretations \mathbbm{R} \mbox{e }f{1} and \mathbbm{R} \mbox{e }f{2}. This is because the independence of the values of probabilities upon contexts doesn't necessarily mean that we can disregard the contextual element of their very definition. A tautology: probabilities with the same values are not necessarily the same probabilities---they might need to be distinguished. No two equals are the same. One can easily imagine a situation where an experimenter mixes a set of von Neumann measurements such that she chooses each such measurement with a given weight. In such a case it doesn't matter that the propositions realised within different measurements are not considered exclusive when taken together, they become exclusive only by the application of the mixing process. Similarly one might be able to give a good definition to a mixture of null-counterfactuals. 
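Before proceeding, the following minimal numerical sketch (a single-qubit illustration of our own, written in Python and assuming trivial dynamics; it is not part of the formal development) may help to fix the definitions above. It computes the decoherence functional for two two-time homogeneous history propositions arising from successive von Neumann measurements, and checks $d$-consistency and $d$-realisability directly; the class operator of a negation is taken to be $C_{\mathbf{1}} - C_{\alpha}$, the additive extension of the class operator to inhomogeneous history propositions.
\begin{verbatim}
import numpy as np

# A single qubit with trivial dynamics and initial state rho = |+><+|.
# Two-time homogeneous histories: a sigma_z projector at t1 followed by a
# sigma_x projector at t2.  Class operators are the time-ordered products of
# the projectors and d(alpha, beta) = tr(C_alpha rho C_beta^dagger).

ket0 = np.array([1.0, 0.0]).reshape(2, 1)
ket1 = np.array([0.0, 1.0]).reshape(2, 1)
ketp = (ket0 + ket1) / np.sqrt(2)              # |+>

def proj(v):
    return v @ v.conj().T

rho = proj(ketp)

def class_op(projectors):
    # projectors listed in time order t1, t2, ...; trivial dynamics assumed
    C = np.eye(2, dtype=complex)
    for P in projectors:
        C = P @ C
    return C

def d(Ca, Cb):
    return np.trace(Ca @ rho @ Cb.conj().T)

C_a = class_op([proj(ket0), proj(ketp)])       # alpha: z = 0 at t1, x = + at t2
C_b = class_op([proj(ket1), proj(ketp)])       # beta : z = 1 at t1, x = + at t2

print(d(C_a, C_a).real, d(C_b, C_b).real)      # 0.25 0.25
print(d(C_a, C_b))                             # (0.25+0j): nonzero, so alpha and
                                               # beta are not mutually d-consistent
print(d(C_a + C_b, C_a + C_b).real)            # 1.0, not 0.25 + 0.25

C_not_a = np.eye(2) - C_a                      # class operator of "not alpha"
print((d(C_a, C_a) + d(C_not_a, C_not_a)).real)   # 0.5, so alpha is not
                                                  # d-realisable for this rho
\end{verbatim}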
So, if we take a succession of von Neumann measurements and label the possible history propositions as $\{\alpha^i : i=1, 2..., N\}$ then this set need not be $d$-consistent but if all the $\alpha^i$ are $d$-realisable then the sets $\{\alpha^i, \neg \alpha^i\}$ will be $d$-consistent for each $i=1,2,...,N$. Let us denote $p(\alpha^i \vert I) = d(\alpha^i, \alpha^i)$ in order to emphasise the probabilistic interpretation of the decoherence functional. If we mix these null-counterfactuals we must assign weights $w_i$ to each such $d$-consistent set $\{\alpha^i, \neg \alpha^i\}$, just as we would if we were mixing a set of von Neumann measurements. Presuming that the context $I$ is the same for each element of the mixture (we reserve the right to change this assumption later but it is easy to assume that the proposition $\alpha^i \vee \neg \alpha^i$ is equivalent to the proposition $\alpha^j \vee \neg \alpha^j$ since they are normally considered equivalent tautologies, our mixture $M$ being a weighted set of different tautologies in this case) then the probability for any history $\alpha^i$ to be received given such a mixed set $M$ is given by the following: \begin{equation} p(\alpha^i \vert M) = w_i p(\alpha^i \vert I) + \sum_{j \neq i}^{N} w_j p(\alpha^i \wedge \alpha^j \vert I) + \sum_{j \neq i}^{N} w_j p(\alpha^i \wedge \neg \alpha^j \vert I). \label{yourmum} \end{equation} If we equate $p(\alpha^i \wedge \alpha^j \vert I)$ with $d(\alpha^i \wedge \alpha^j, \alpha^i \wedge \alpha^j)$ then we must note that $d(\alpha^i \wedge \alpha^j, \alpha^i \wedge \alpha^j)$ equals zero for all disjoint homogeneous history propositions $\alpha^i$ and $\alpha^j$ which are defined over the same temporal support. This is because, for all such history propositions, $\alpha^i \wedge \alpha^j \equiv \bf{0}$, where $\bf{0}$ is used to denote the null history proposition. Thus for all homogeneous history propositions so defined the first summation in (\mathbbm{R} \mbox{e }f{yourmum}) is equal to zero. The simplest case is when all the $d$-consistent sets $\{\alpha^i, \neg \alpha^i\}$ are \emph{a priori} equally likely; so lets try this and see what happens in the case where $w_i = \frac{1}{N}$ for all $i$---each $d$-consistent set will be the corresponding null-counterfactual with \emph{a priori} weight $\frac{1}{N}$. Note that we don't yet call such weights `probabilities'. In such a case we get: \begin{equation} p(\alpha^i \vert M) = \frac{1}{N}(p(\alpha^i \vert I) + \sum_{j \neq i}^{N} p(\alpha^i \wedge \neg \alpha^j \vert I)). \end{equation} Now we must ask what form $p(\alpha^i \wedge \neg \alpha^j \vert I)$ should take in terms of the decoherence functional. It is clear that, by intuition, the history proposition $\alpha^i \wedge \neg \alpha^j$ is equivalent to $\alpha^i$. The proposition that the history proposition $\alpha^i$ is the case \emph{and} the proposition that $\alpha^j$ isn't the case is just equivalent to the proposition that the history proposition $\alpha^i$ is the case. This can be shown explicitly in the History Projection Operator (HPO) form of CH \cite{Isham94}. In the HPO formalism homogeneous history propositions are represented by tensor products of the relevant single-time propositions. So for two two-time history propositions we have that $\alpha^i = \hat{\alpha}^{i}_{t_1} \otimes \hat{\alpha}^{i}_{t_2}$ and $\alpha^j = \hat{\alpha}^{j}_{t_1} \otimes \hat{\alpha}^{j}_{t_2}$. 
If we assume that these two history propositions are defined such that $\hat{\alpha}^{i}_{t_1} \perp \hat{\alpha}^{j}_{t_1}$ and $\hat{\alpha}^{i}_{t_2} \perp \hat{\alpha}^{j}_{t_2}$ then the proof that $\alpha_i \wedge \neg \alpha_j = \alpha_i$ goes as follows: \begin{eqnarray} \hat{\alpha}^{i}_{t_1} \otimes \hat{\alpha}^{i}_{t_2} \wedge \neg (\hat{\alpha}^{j}_{t_1} \otimes \hat{\alpha}^{j}_{t_2}) &:=& \hat{\alpha}^{i}_{t_1} \wedge \neg \hat{\alpha}^{j}_{t_1} \otimes \hat{\alpha}^{i}_{t_2} \wedge \neg \hat{\alpha}^{j}_{t_2} \nonumber \\ & & + \ \hat{\alpha}^{i}_{t_1} \wedge \neg \hat{\alpha}^{j}_{t_1} \otimes \hat{\alpha}^{i}_{t_2} \wedge \hat{\alpha}^{j}_{t_2} \nonumber \\ & & + \ \hat{\alpha}^{i}_{t_1} \wedge \hat{\alpha}^{j}_{t_1} \otimes \hat{\alpha}^{i}_{t_2} \wedge \neg \hat{\alpha}^{j}_{t_2} \label{And}\\ &=& \hat{\alpha}^{i}_{t_1} \otimes \hat{\alpha}^{j}_{t_2} + \hat{\alpha}^{i}_{t_1} \otimes \hat{\bf{0}} + \hat{\bf{0}} \otimes \hat{\alpha}^{j}_{t_2} \label{Nullhistory propositions}\\ &=& \hat{\alpha}^{i}_{t_1} \otimes \hat{\alpha}^{i}_{t_2} \label{Answer}. \end{eqnarray} Eq.(\mathbbm{R} \mbox{e }f{And}) represents the intuitive logical result that the history propostion $\alpha^i \wedge \neg \alpha^j$ can be true in three different ways, namely if any one of the three history propositions on the RHS of Eq.(\mathbbm{R} \mbox{e }f{And}) is true. To get from Eq.(\mathbbm{R} \mbox{e }f{Nullhistory propositions}) to Eq.(\mathbbm{R} \mbox{e }f{Answer}) we simply note that all history propositions which have a null result at any given time are deemed equivalent to the null history $\bf{0} = \hat{\bf{0}} \otimes \hat{\bf{0}}$ \cite{Isham94}. Thus, for homogeneous history propositions that are defined using exclusive and exhaustive single-time propositions, it is always the case that $\alpha^i \wedge \neg \alpha^j = \alpha^i$ and thus that: \begin{equation} d(\alpha^i \wedge \neg \alpha^j, \alpha^i \wedge \neg \alpha^j) = d(\alpha^i, \alpha^i). \end{equation} Thus it seems natural to equate $p(\alpha^i \wedge \neg \alpha^j \vert I)$ with $p(\alpha^i \vert I) = d(\alpha^i, \alpha^i)$ in this case. This gives us that the probability that history proposition $\alpha^i$ is the case, given that context $M$ is an equally weighted mixture of null-counterfactuals, is given by the rather trivial result: \begin{eqnarray} p(\alpha^i \vert M) &=& \frac{1}{N}(p(\alpha^i \vert I) + p(\alpha^i \vert I) \sum_{j \neq i}^{N} 1) \\ &=& \frac{1}{N}(p(\alpha^i \vert I) + (N-1)p(\alpha^i \vert I)) \\ &=& p(\alpha^i \vert I) = d(\alpha^i, \alpha^i). \label{MIXTURE} \end{eqnarray} Note that this result does not depend upon the fact that we chose to use equal weights. Any set of positive weights $w_i$, such that $\sum_i w_i = 1$, would work. This result should be taken with a pinch of salt, lots of implicit assumptions have been made in order to reach (\mathbbm{R} \mbox{e }f{MIXTURE})---we will investigate it less na\"{\i}vely in the next section once we have introduced a Bayesian account of such propositions. So, it is clear that some ordered sets of single-time von Neumann measurements might equally well be interpreted as a mixed set of null-counterfactuals---but, of course, not all ordered sets of von Neumann measurements could be interpreted in such a manner. Non-$d$-realisable propositions are propositions that can never be $d$-realised with respect to some other history propositions in context $I$. 
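For completeness, note that the independence of (\ref{MIXTURE}) from the choice of equal weights is immediate: for any positive weights $w_i$ with $\sum_i w_i = 1$ the same steps give \begin{equation} p(\alpha^i \vert M) = w_i \, p(\alpha^i \vert I) + \sum_{j \neq i}^{N} w_j \, p(\alpha^i \wedge \neg \alpha^j \vert I) = \Big( w_i + \sum_{j \neq i}^{N} w_j \Big) p(\alpha^i \vert I) = p(\alpha^i \vert I), \end{equation} using, as above, that $p(\alpha^i \wedge \alpha^j \vert I) = 0$ and $p(\alpha^i \wedge \neg \alpha^j \vert I) = p(\alpha^i \vert I)$ for such history propositions.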
So far we have not discussed the strict meaning of the context $I$; we have simply kept the context within the notation because of the tentative contextual meaning we apply to propositions. In \cite{Mana04} it was shown that doing such a thing can help clarify the meaning of probabilistic statements in SQT; we simply adopt the same principle here for quantum history theories. So, for the moment, one is asked just to accept the name `$I$' for the context whatever that context is taken to mean. We will, in part, rectify this gnomic situation later. So, strictly speaking, the above discussion is rather na\"{\i}ve and we have yet to check that the reasoning we have used is all consistent and unambiguous. For example, it is not clear that the context $I$ is well-defined globally throughout the mixing process. Or whether the mixing process is itself well-defined---especially since the weights are totally arbitrary. We shall examine this in the following sections. We will show that by using Bayesian reasoning such concepts do become consistent and less ambiguous.
\section*{Bayesian Histories} Bayes' rule is a rule that relates \emph{a priori} probability statements to \emph{a posteriori} probability statements. Say we have two propositions $A$ and $B$ and a general context $D$, which refers to the general setup of the problem (and remains constant through the analysis); then Bayes' rule is as follows: \begin{equation} p(A \vert B D) = \frac{p(B \vert A D)p(A \vert D)}{p(B \vert D)}. \label{BAYES} \end{equation} \noindent Bayes' rule is derived from the following rule: \begin{equation} p(A \cap B \vert D) = p(A \vert B D) p(B \vert D) = p(B \vert A D)p(A \vert D). \label{AND} \end{equation} We can try and use Bayes' rule (or equivalently (\ref{AND})) to analyse the reasoning we used above. If we take all history propositions then one might be tempted to try and apply Bayes' rule to them and see if we get any form of consistency. So, let us apply Bayes' rule in the following na\"{\i}ve way, simply using the history algebra `$\wedge$' instead of the standard Boolean `$\cap$' (we will justify the step using Cox's axioms of probability later): \begin{eqnarray} p(\alpha^i \wedge \neg \alpha^j \vert I) &=& p(\alpha^i \vert \neg \alpha^j I) p(\neg \alpha^j \vert I) \\ &=& p(\neg \alpha^j \vert \alpha^i I) p(\alpha^i \vert I). \end{eqnarray} By intuition one might like to assign that $p(\neg \alpha^j \vert \alpha^i I) := 1$ because if the proposition $\alpha^i$ is true then obviously the proposition $\neg \alpha^j$ is true. This then gives us that: \begin{equation} p(\alpha^i \wedge \neg \alpha^j \vert I) = p(\alpha^i \vert I). \label{gay} \end{equation} The above analysis is consistent as long as Bayes' rule is valid for such history propositions, so we need to work out if Bayes' rule is a valid way to manipulate the probabilities of history propositions. For this to be the case, all the above probabilities must be well-defined. We will justify our na\"{\i}ve application of Bayes' rule later, but for now let us continue along with this na\"{\i}ve analysis for a moment and see how Bayes' rule (or equivalently rule (\ref{AND})) applies to the other probability assignments we might like to make. \begin{eqnarray} p(\alpha^i \wedge \alpha^j \vert I) &=& p(\alpha^i \vert \alpha^j I) p(\alpha^j \vert I) = 0 \label{ONE} \\ &=& p(\alpha^j \vert \alpha^i I) p(\alpha^i \vert I) = 0.
\label{TWO} \end{eqnarray} The statement (\mathbbm{R} \mbox{e }f{ONE}) is intuitively the case for disjoint history propositions since if $\alpha^j$ is the case then $\alpha^i$ isn't the case, and similarly for the second decomposition (\mathbbm{R} \mbox{e }f{TWO}). Note that this doesn't presume that these two propositions are probabilistically exclusive, only that given one we never infer the other. \begin{eqnarray} p(\neg \alpha^i \wedge \alpha^j \vert I) &=& p(\neg \alpha^i \vert \alpha^j I) p(\alpha^j \vert I) \\ &=& p(\alpha^j \vert I). \label{gay2} \end{eqnarray} To get to (\mathbbm{R} \mbox{e }f{gay2}) we use exactly the same reasoning we used to get (\mathbbm{R} \mbox{e }f{gay}). And so we come to ask how we interpret $p(\neg \alpha^i \wedge \neg \alpha^j \vert I)$. One way to look at this probability is to decompose it as follows: \begin{eqnarray} p(\neg \alpha^i \wedge \neg \alpha^j \vert I) &=& p(\neg \alpha^j \vert \neg \alpha^i I) p(\neg \alpha^i \vert I) \label{DONE} \\ &=& (1 - p(\alpha^j \vert \neg \alpha^i I))p(\neg \alpha^i \vert I) \label{29}\\ &=& (1 - \frac{p(\alpha^j \vert I)}{p(\neg \alpha^i \vert I)}) p(\neg \alpha^i \vert I) \\ &=& p(\neg \alpha^i \vert I) - p(\alpha^j \vert I). \end{eqnarray} \noindent But, of course, instead of using the decomposition (\mathbbm{R} \mbox{e }f{DONE}) one could have used: \begin{eqnarray} p(\neg \alpha^i \wedge \neg \alpha^j \vert I) &=& p(\neg \alpha^i \vert \neg \alpha^j I) p(\neg \alpha^j \vert I) \label{DTWO}\\ &=& (1 - p(\alpha^i \vert \neg \alpha^j I))p(\neg \alpha^j \vert I) \label{33}\\ &=& (1 - \frac{p(\alpha^i \vert I)}{p(\neg \alpha^j \vert I)}) p(\neg \alpha^j \vert I) \\ &=& p(\neg \alpha^j \vert I) - p(\alpha^i \vert I). \end{eqnarray} In order for the two ways of decomposing $p(\neg \alpha^i \wedge \neg \alpha^j \vert I)$ to be consistent we require that $p(\neg \alpha^j \vert I) - p(\alpha^i \vert I) = p(\neg \alpha^i \vert I) - p(\alpha^j \vert I)$. A necessary and sufficient condition for the history propositions to satisfy this requirement is that: \begin{equation} p(\alpha^i \vert I) + p(\neg \alpha^i \vert I) = K \mbox{ for all } i, \label{QUASI} \end{equation} \noindent where $K$ is a positive constant. We call condition (\mathbbm{R} \mbox{e }f{QUASI}) quasi-realisability. When $K=1$ we call the probabilities realisable. Note that in assuming steps (\mathbbm{R} \mbox{e }f{29}) and (\mathbbm{R} \mbox{e }f{33}) are valid we must presume that the probabilities are realisable in an \emph{a posteriori} sense in that $p(\alpha^i \vert \neg \alpha^j I) + p(\neg \alpha^i \vert \neg \alpha^j I) = 1$ for all $i$. A set $\{\alpha^i : i=1,2,...,N\}$ that does \emph{not} satisfy (\mathbbm{R} \mbox{e }f{QUASI}) does not give equal decompositions (\mathbbm{R} \mbox{e }f{DONE}) and (\mathbbm{R} \mbox{e }f{DTWO}). Thus, in interpretation \mathbbm{R} \mbox{e }f{1} any complete set of homogeneous history propositions makes sense, but if we require consistency with Bayesian probability theory then we must at least discuss sets of history propositions which satisfy the stricter condition (\mathbbm{R} \mbox{e }f{QUASI}). So, if we identify that $p(\alpha^i \vert I) = d(\alpha^i, \alpha^i)$ and that $p(\neg \alpha^i \vert I) = d(\neg \alpha^i, \neg \alpha^i)$, then a sufficient condition for all the above to be consistent by both decompositions (\mathbbm{R} \mbox{e }f{DONE}) and (\mathbbm{R} \mbox{e }f{DTWO}) is that everything is $d$-realisable. 
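Explicitly, if (\ref{QUASI}) holds then $p(\neg \alpha^j \vert I) - p(\alpha^i \vert I) = K - p(\alpha^j \vert I) - p(\alpha^i \vert I)$, which is symmetric under the interchange of $i$ and $j$, so the two decompositions (\ref{DONE}) and (\ref{DTWO}) return the same value; conversely, equality of the two decompositions for every pair $i \neq j$ forces the sums $p(\alpha^i \vert I) + p(\neg \alpha^i \vert I)$ to take a common value $K$.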
If the history propositions weren't $d$-realisable then there is no \emph{a priori} reason why decompositions (\mathbbm{R} \mbox{e }f{DONE}) and (\mathbbm{R} \mbox{e }f{DTWO}) should match. However, are all these probabilities well-defined? All the probabilities that we identify with decoherence probabilities are obviously well-defined in the sense that they are bounded between 0 and 1. But we haven't yet identified whether the na\"{\i}ve conditional probabilities are all well-defined. For example, if we make the identification that $p(\neg \alpha^j \vert \alpha^i I) = 1$ and then, using Bayes' rule, we derive: \begin{equation} p(\alpha^i \vert \neg \alpha^j I) = \frac{p(\alpha^i \vert I)}{p(\neg \alpha^j \vert I)} = \frac{d(\alpha^i, \alpha^i)}{d(\neg \alpha^j, \neg \alpha^j)}. \end{equation} In order for the above to be bounded by 0 and 1 we require that: \begin{equation} 0 \leq \frac{d(\alpha^i, \alpha^i)}{d(\neg \alpha^j, \neg \alpha^j)} \leq 1. \label{CONDITION} \end{equation} If $\alpha^i$ is more probable than $\neg \alpha^j$ in the context $I$ then the above condition will not be satisfied. The next question we must ask is what types of history propositions do we require for this Bayesian analysis to be consistent? In terms of the HPO form of CH \cite{Isham94} we define that the the homogeneous history propositions $\{\alpha^i : i = 1,2,...,N\}$ defined using exclusive sets of single-time propositions are all mutually disjoint: $\alpha^i \perp \alpha^j$ for all $i,j$ such that $i \neq j$. In terms of the natural orthoalgebra of history propositions, this means that $\alpha^i \leq \neg \alpha^j$ for all $i,j$ such that $i \neq j$. Thus if the decoherence functional preserves the partial order defined on the history proposition space then condition (\mathbbm{R} \mbox{e }f{CONDITION}) is satisfied. Isham and Linden \cite{IL94} have argued that this need not be the case; there are examples where the following is not true: \begin{equation} \alpha \leq \beta \Rightarrow d(\alpha, \alpha) \leq d(\beta, \beta). \label{PRESERVE} \end{equation} They give a specific example which disobeys (\mathbbm{R} \mbox{e }f{PRESERVE}). The sum-over-paths formulation for SQT does obey (\mathbbm{R} \mbox{e }f{PRESERVE}) and it is thus not clear whether we should assume it in general history theories \cite{IL94}. In order to satisfy (\mathbbm{R} \mbox{e }f{CONDITION}), however, we \emph{must} use sets of history propositions such that the following is satisfied: \begin{equation} \alpha^i \leq \neg \alpha^j \Rightarrow d(\alpha^i, \alpha^i) \leq d(\neg \alpha^j, \neg \alpha^j) \mbox{ for all } i \neq j. \label{PRESERVE2} \end{equation} Presuming $\{\alpha^i: i=1,2,...,N\}$ consists of $d$-realisable homogeneous history propositions that are defined using exclusive and exhaustive single-time propositions then this Bayesian analysis is consistent as long as (\mathbbm{R} \mbox{e }f{PRESERVE2}) is satisfied. If this is the case then the probabilities $p(\alpha^i \vee \alpha^j \vert \alpha^k I)$ and $p(\alpha^i \vee \alpha^j \vert \neg \alpha^k I)$ are well-defined (in the sense of being bounded by 0 and 1) for all $i,j,k$. This can be seen just by invoking the conditional probabilities invoked above. \begin{eqnarray} p(\alpha^i \vee \alpha^j \vert \alpha^k I) := p(\alpha^i \vert \alpha^k I) + p(\alpha^j \vert \alpha^k I) - p(\alpha^i \wedge \alpha^j \vert \alpha^k I) \label{POS} \end{eqnarray} If $k \neq i$ and $k \neq j$ then the RHS of (\mathbbm{R} \mbox{e }f{POS}) is $0+0-0 = 0$. 
If $k=i \neq j$ then the RHS of (\mathbbm{R} \mbox{e }f{POS}) is $1+0-0 = 1$ and if $k=j \neq i$ then it is $0+1-0 = 1$. If $k=i=j$ then the RHS is $1+1-1 = 1$. This is all as we would expect by intuition. Similarly, \begin{eqnarray} p(\alpha^i \vee \alpha^j \vert \neg \alpha^k I) := p(\alpha^i \vert \neg \alpha^k I) + p(\alpha^j \vert \neg \alpha^k I) - p(\alpha^i \wedge \alpha^j \vert \neg \alpha^k I) \end{eqnarray} \noindent is well-defined by construction. We can also define `$\vee$' relations for the inhomogeneous negations. \begin{eqnarray} p(\alpha^i \vee \neg \alpha^j \vert \alpha^k I) &:=& p(\alpha^i \vert \alpha^k I) + p(\neg \alpha^j \vert \alpha^k I) \nonumber \\ & & - p(\alpha^i \wedge \neg \alpha^j \vert \alpha^k I). \\ p(\alpha^i \vee \neg \alpha^j \vert \neg \alpha^k I) &:=& p(\alpha^i \vert \neg \alpha^k I) + p(\neg \alpha^j \vert \neg \alpha^k I) \nonumber \\ & & - p(\alpha^i \wedge \neg \alpha^j \vert \neg \alpha^k I). \\ p(\neg \alpha^i \vee \neg \alpha^j \vert \alpha^k I) &:=& p(\neg \alpha^i \vert \alpha^k I) + p(\neg \alpha^j \vert \alpha^k I) \nonumber \\ & & - p(\neg \alpha^i \wedge \neg \alpha^j \vert \alpha^k I). \\ p(\neg \alpha^i \vee \neg \alpha^j \vert \neg \alpha^k I) &:=& p(\neg \alpha^i \vert \neg \alpha^k I) + p(\neg \alpha^j \vert \neg \alpha^k I) \nonumber \\ & & - p(\neg \alpha^i \wedge \neg \alpha^j \vert \neg \alpha^k I). \end{eqnarray} All the above, by construction, give answers consistent with classical probabilistic intuition as long as $d$-realisability and (\mathbbm{R} \mbox{e }f{PRESERVE2}) are satisfied. So, if we discuss exhaustive sets of $d$-realisable propositions such that: \begin{equation} p(\alpha^i \vert \neg \alpha^j I) = \frac{p(\alpha^i \vert I)}{p(\neg \alpha^j \vert I)} \leq 1 \mbox{ for all } i,j \mbox{ such that } i \neq j, \end{equation} \noindent then we can, by construction, get complete consistency with Bayesian reasoning. History propositions $\alpha^i$ and $\alpha^j$ within such a set are additive over all conditional probabilities even if they are not additive \emph{a priori} such that $p(\alpha^i \vee \alpha^j \vert I) \neq p(\alpha^i \vert I) + p(\alpha^j \vert I)$. But is there anything wrong with two propositions being additive in one context and not additive in another? Of course not. In the Bayesian framework, probabilities are \emph{always} defined contextually \cite{JaynesBOOK} and exclusivity is a contextual property of propositions. This is not to say that quantum probabilities definitely don't behave in ways that go against classical intuition, only that classical Bayesian probability theory might take us a little further than we may have thought in analysing quantum history propositions. This approach, which we call Bayesian Histories (BH), has a clear pedagogical basis and, as we shall argue below, may tentatively be experimentally distinguishable from SQT. So, using BH, we can define history propositions to be exclusive in certain contexts. But, of course, we can identify these contexts either as \emph{a priori} ones or \emph{a posteriori} ones in reference to Bayes' rule (\mathbbm{R} \mbox{e }f{BAYES}) depending upon what stage of the Bayesian updating process we are considering. \section*{A Pedagogical Account of Additivity} As we discussed above, and recently emphasised by Mana \cite{Mana04}, propositions have certain properties that are contextual. The exclusivity of propositions is a contextual property of propositions. 
Therefore if we have two propositions $A$ and $B$ it is not necessarily the case that they are exclusive in any given context (nor even defined in any given context). We have defined above that the exclusivity of two propositions arises when $p(A \cap B \vert D) = 0$ and $p(A \cup B \vert D) = p(A \vert D) + p(B \vert D)$ such that all probabilities are well-defined (this happily coincides with the standard Bayesian notion of exclusivity). In a similar way, consistent historians define `$d$-consistency'; although there is a subtle distinction between the two. In CH contexts are \emph{defined} to be situations in which $d$-consistency occurs, where-as in BH contexts are far more general. The exclusivity of propositions might be gained when going from \emph{a priori} probabilities to \emph{a posteriori} probabilities. So, for propositions $A$ and $B$ and prior-information $D$ it might be the case that: \begin{equation} p(A \cup B \vert D) \neq p(A \vert D) + p(B \vert D) \end{equation} \noindent even though when we update using further information $E$ it is the case that: \begin{equation} p(A \cup B \vert E D) = p(A \vert E D) + p(B \vert E D). \end{equation} \noindent This is a possibility we can imagine since exclusivity is a contextual property of propositions. One might be able to define contexts which give additive \emph{a posteriori} probabilities using BH, rather than restricting ourselves to additive \emph{a priori} probabilities (as one might put it when using CH). Using Bayes' rule we can also na\"{\i}vely derive the following rule: \begin{eqnarray} p(A \vert (D \cup E) F) &=& p(D \cup E \vert A F) \frac{p(A \vert F)}{p(D \cup E \vert F)} \\ &=& \frac{p(D \vert A F) p(A \vert F) + p(E \vert A F)p(A \vert F)}{p(D \cup E \vert F)} \label{mum}\\ &=& \frac{p(D \cap A \vert F) + p(E \cap A \vert F)}{p(D \cup E \vert F)}. \label{INVERSE} \end{eqnarray} \noindent We get to (\mathbbm{R} \mbox{e }f{mum}) as long $D$ and $E$ are additive on the \emph{a posteriori} context $A F$. Throughout the analysis that gave us (\mathbbm{R} \mbox{e }f{MIXTURE}) we assumed that the context $I$ is well-defined and globally applicable to each null-counterfactual. This is an assumption that need not be valid. For example, one could either make the association that $p(\alpha^i \vert C) = d(\alpha^i, \alpha^i)$---identifying the decoherence functional with \emph{a priori} probabilities---or one could associate the decoherence functional with \emph{a posteriori} probabilities: \begin{equation} p(\alpha^i \vert (\alpha^k \vee \neg \alpha^k) C) = p(\alpha^i \vert \mathbf{1}^k C) = d(\alpha^i ,\alpha^i). \end{equation} Now, if we associate the decoherence functional with \emph{a posteriori} probabilities then such probabilities are independent of the context $\mathbf{1}^k$ in which they are taken. This is a kind of non-contextuality. Even if the values are the same, however, they may still \emph{behave} differently---probabilities with the same value are not necessarily the same probabilities. Thus we should keep their notational dependence upon context even if their values are the same. When can one interpret each $\mathbf{1}^k$ as a null-counterfactual? By (\mathbbm{R} \mbox{e }f{INVERSE}) we have: \begin{eqnarray} p(\alpha^i \vert \mathbf{1}^k C) = \frac{p(\alpha^i \wedge \alpha^k \vert C) + p(\alpha^i \wedge \neg \alpha^k \vert C)}{p(\mathbf{1}^k \vert C)}. 
\end{eqnarray} So, if we associate the decoherence functional probabilities with \emph{a posteriori} probabilities rather than \emph{a priori} probabilities then we have that: \begin{equation} p(\alpha^i \vert \mathbf{1}^k C) = d(\alpha^i, \alpha^i) = \frac{p(\mathbf{1}^k \vert \alpha^i C)p(\alpha^i \vert C)}{p(\mathbf{1}^k \vert C)}. \end{equation} This means that $p(\alpha^i \vert \mathbf{1}^k C) \neq p(\alpha^i \vert C)$. So, even if probabilities don't depend upon the contexts $\mathbf{1}^k C$, probabilities still depend on whether such a context is known to be the case or not. \emph{A priori} probabilities are not the same as \emph{a posteriori} probabilities. It is rather natural to make the association that $I = \mathbf{1}^k C$ and hence why we must differentiate between $C$ and $I$ in the above presentation. We reserve the the name `$C$' for \emph{a priori} contexts. Thus all the na\"{\i}ve probability assignments given in context $I$ can be passed across to probability assignments in contexts $\mathbf{1}^k C$ for all $k$. Using (\mathbbm{R} \mbox{e }f{INVERSE}) we can discuss the probabilities assigned to an exhaustive set of contexts $\vee_k \mathbf{1}^k$: \begin{eqnarray} p(\alpha^i \vert (\vee_k \mathbf{1}^k) C) &=& \frac{(\sum_k p(\alpha^i \wedge \mathbf{1}^k \vert C))}{p(\vee_k \mathbf{1}^k \vert C)} \\ &=& p(\alpha^i \vert \mathbf{1}^k C) \sum_k p(\mathbf{1}^k \vert C) = d(\alpha^i, \alpha^i). \end{eqnarray} Therefore, a set of contexts $\{\mathbf{1}^k\}$ that are exhaustive on $C$ gives us the standard probabilities predicted by SQT. One might now ask how the \emph{a priori} probabilities behave. We presume that $p(\mathbf{1}^k \vert \alpha^i C) = 1$ for all $k$ since such conditional probability assignments are natural. Thus we have that ratios of \emph{a priori} probabilities and ratios of \emph{a posteriori} probabilities are equal, for example: \begin{equation} \frac{p(\alpha^i \vert C)}{p(\neg \alpha^k \vert C)} = \frac{p(\alpha^i \vert \mathbf{1}^k C)}{p(\neg \alpha^k \vert \mathbf{1}^k C)}. \label{RATIOS} \end{equation} In order for the probabilities $p(\alpha^i \wedge \neg \alpha^k \vert C)$ to be well-defined we thus require that such ratios are less than 1. This is thus equivalent to requiring (\mathbbm{R} \mbox{e }f{PRESERVE2}). In order for probabilities $p(\neg \alpha^i \wedge \neg \alpha^k \vert C)$ to be consistent with Bayes' rule we also require that the \emph{a priori} probabilities are quasi-realisable: \begin{equation} p(\alpha^i \vert C) + p(\neg \alpha^i \vert C) = L \mbox{ for all } i \label{QUASIC} \end{equation} \noindent where $L$ is a constant. Since we are assuming that the \emph{a posteriori} probabilities are independent of contexts we require that $p(\alpha^i \vert \mathbf{1}^k C) = p(\alpha^i \vert \mathbf{1}^i C)$. We thus have that $p(\mathbf{1}^i \vert C) = p(\mathbf{1}^k \vert C)$. This suggests that all contexts $\mathbf{1}^k$ are \emph{a priori} equally likely so: \begin{equation} p(\mathbf{1}^i \vert C) = L' \mbox{ for all } i \end{equation} \noindent where $L'$ is a constant. Comparing \emph{a posteriori} and \emph{a priori} probabilities we have that: \begin{eqnarray} p(\alpha^i \vert \mathbf{1}^k C) + p(\neg \alpha^i \vert \mathbf{1}^k C) = \frac{p(\alpha^i \vert C) + p(\neg \alpha^i \vert C)}{p(\mathbf{1}^k \vert C)} = \frac{L}{L'} = K \mbox{ for all } i. \end{eqnarray} This is thus completely consistent with our requirement that the \emph{a posteriori} probabilities must be quasi-realisable (\mathbbm{R} \mbox{e }f{QUASI}). 
Thus if $\frac{L}{L'} = K = 1$ then we have $d$-realisable history propositions for all $i$. If we have $K \neq 1$ then we have a quasi-$d$-realisable set of history propositions. If $L=L'$ then we have a very cogent interpretation: all the contexts that we invoke consist of $d$-consistent sets and are thus what we have called null-counterfactuals. So, if $L=L'$, we represent experiments using an equally weighted mixture of null-counterfactuals. When $K \neq 1$ we don't have a good interpretation so, for now, we reject such cases. We have a sound interpretation for \emph{a posteriori} contexts when $K=1$, but what does the \emph{a priori} context $C$ refer to? We don't interpret $C$ here except to say that if Bayesian probability is the correct probability to use then we must require that such \emph{a priori} contexts are consistent with Bayes' rule (\ref{BAYES}). $C$ is simply some context in which the \emph{a priori} probabilities are well-defined. $C$ is our knowledge about $\{\mathbf{1}^k\}$ and our knowledge about $\{\mathbf{1}^k\}$ is that we don't know which $\mathbf{1}^k$ happens, so we apply equal \emph{a priori} probabilities. The standard von Neumann collapse formulation predicts that all probabilities for multi-time measurements are well-defined, but in BH only those that give consistency with Bayesian reasoning are valid. Thus the collapse hypothesis is not deemed universally valid in BH---it is rather only a convenient hypothesis in certain situations. Let us look at a standard interference device: a Mach-Zehnder interferometer. In the standard interpretation there are two possible history propositions which end in detection by a given detector labelled $e$---these histories we call $\alpha^u$ and $\alpha^d$---and SQT predicts that each one happens with probabilities given by the decoherence functional: $d(\alpha^u, \alpha^u)$ and $d(\alpha^d, \alpha^d)$ respectively. We interpret $\alpha^u$ to be the history proposition that the particle takes the upper path and $\alpha^d$ as the proposition that it takes the lower path. Thus, in the standard interpretation, the probabilities given by the decoherence functional using these two propositions represent the situation where the path of the particle is measured. Interference suggests that: \begin{equation} d(\alpha^u \vee \alpha^d, \alpha^u \vee \alpha^d) \neq d(\alpha^d, \alpha^d) + d(\alpha^u, \alpha^u). \end{equation} \noindent This means that, in the standard interpretation, when you don't measure the path you predict a different probability at the detector from the one you would predict had you measured the path. One can loosely say then that in one `context' the histories are exclusive and in another they are not, but how do we formalise such notions? It is clear that in the space of history propositions it is \emph{not} the case that $\neg \alpha^u = \alpha^d$. We must be more subtle in our use of the negation operation. Using interpretation \ref{2} we look at this path detection experiment in a subtly distinct fashion. There are two possible null-counterfactuals $\mathbf{1}^u = \alpha^u \vee \neg \alpha^u$ and $\mathbf{1}^d = \alpha^d \vee \neg \alpha^d$ (let us presume explicitly that $\alpha^u$ and $\alpha^d$ are both $d$-realisable since we have a good interpretation for such propositions).
Using (\ref{INVERSE}) we make the association: \begin{eqnarray} p(\alpha^u \vert \mathbf{1}^d C) &=& \frac{p(\alpha^u \wedge \alpha^d \vert C) + p(\alpha^u \wedge \neg \alpha^d \vert C)}{p(\mathbf{1}^d \vert C)} \\ &=& \frac{0 + p(\alpha^u \vert \neg \alpha^d C)p(\neg \alpha^d \vert C)}{p(\mathbf{1}^d \vert C)} = \frac{p(\alpha^u \vert C)}{p(\mathbf{1}^d \vert C)}. \end{eqnarray} We can do this as long as the probabilities for $\alpha^d$ and $\neg \alpha^d$ are well-defined and additive on the \emph{a posteriori} context $\alpha^u C$. With similar provisos we can argue that: \begin{eqnarray} p(\alpha^u \vert \mathbf{1}^u C) &=& \frac{p(\alpha^u \wedge \alpha^u \vert C) + p(\alpha^u \wedge \neg \alpha^u \vert C)}{p(\mathbf{1}^u \vert C)} \\ &=& \frac{p(\alpha^u \vert C)}{p(\mathbf{1}^u \vert C)}. \end{eqnarray} When we do the experiment we have no \emph{a priori} reason to expect one null-counterfactual to occur over the other so we assign equal weights to each, $p(\mathbf{1}^u \vert C) = \frac{1}{2} = p(\mathbf{1}^d \vert C)$. Each null-counterfactual is deemed to be apt with these \emph{a priori} probabilities. So the Mach-Zehnder experiment can consist of an equally weighted mixed set $M$ of null-counterfactuals such that: \begin{eqnarray} p(\alpha^u \vert M) &=& \frac{p(\alpha^u \wedge \mathbf{1}^u \vert C) + p(\alpha^u \wedge \mathbf{1}^d \vert C)}{p(\mathbf{1}^u \vee \mathbf{1}^d \vert C)} \\ &=& d(\alpha^u, \alpha^u). \end{eqnarray} Thus we recover the SQT predictions for path detection as long as we use $d$-realisable history propositions which give a consistent Bayesian analysis. Otherwise we must use a different set of null-counterfactuals---the same set of null-counterfactuals can't give us the case when path detection doesn't occur. We could also try to define the probability $p(\neg \alpha^u \vert M)$ and in order to do so we would require that the probability $p(\neg \alpha^u \wedge \neg \alpha^d \vert C)$ is well-defined, and this requires quasi-realisability. So, for consistency of the reasoning we use we require at least quasi-realisability for both \emph{a priori} and \emph{a posteriori} probabilities---we require (\ref{QUASIC}) and (\ref{QUASI}) respectively. We have investigated the situation where the path lengths are equal but, of course, one can easily introduce phase shifters into the arms of the interferometer. Note that the dynamics is invoked in the very definition of the decoherence functional so phase shifters would be represented by a change in the evolution between two times from standard unitary evolution to one including a change in phase: $d \rightarrow d'$. Obviously this would have no effect for the path detection experiments but it would have an effect on non-path detection experiments such that $d'(\alpha^u \vee \alpha^d, \alpha^u \vee \alpha^d)$ would depend on a phase factor. Note $\alpha^u \vee \alpha^d = \alpha^e$ where $\alpha^e$ is the history proposition that the particle is detected by a click in detector $e$ without any path detection. All this discussion about null-counterfactuals is perhaps rather controversial; it is based mainly around notational issues. We have invoked them here, however, simply in an attempt to distinguish quasi-realisable and realisable histories.
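The interferometric bookkeeping above is easy to reproduce numerically. The following sketch is an illustration only, not part of the formalism developed here: it assumes a toy two-dimensional path space, a Hadamard-type second beam splitter, the identification of detector $e$ with one output port, and Heisenberg-picture class operators with $C_{\neg\alpha}=\mathbf{1}-C_{\alpha}$ (all of these are choices made for the illustration). It evaluates the decoherence functional, the non-additivity at the detector, and the linear positivity value $\mathrm{Re}\,\mbox{tr}(C_{\alpha}\rho)$ discussed in the next section.
\begin{verbatim}
import numpy as np

# Toy model (illustration only): 2-dim path space {u, d}, Hadamard-type
# second beam splitter, detector e identified with output port 0.
H   = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # beam splitter
P_u = np.diag([1.0, 0.0])       # projector: particle on the upper path
P_d = np.diag([0.0, 1.0])       # projector: particle on the lower path
P_e = np.diag([1.0, 0.0])       # projector: click in detector e
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # state after the first splitter
rho = np.outer(psi, psi.conj())

# Heisenberg-picture class operators C_alpha = P_e(t2) P_path(t1).
C_u = H.conj().T @ P_e @ H @ P_u
C_d = H.conj().T @ P_e @ H @ P_d

def d(Ca, Cb):
    # decoherence functional d(alpha, beta) = tr(C_a rho C_b^dagger)
    return np.trace(Ca @ rho @ Cb.conj().T)

print(d(C_u, C_u).real, d(C_d, C_d).real)  # 0.25 0.25 (path detected)
print(d(C_u + C_d, C_u + C_d).real)        # 1.0: interference, not 0.5
print(abs(d(C_u, C_d)))                    # 0.25: set is not d-consistent

# Linear positivity value Re tr(C_u rho) and, with C_{not u} = 1 - C_u,
# the identity Re d(u, not u) = Re tr(C_u rho) - d(u, u).
lp_u    = np.trace(C_u @ rho).real
C_not_u = np.eye(2) - C_u
print(np.isclose(d(C_u, C_not_u).real, lp_u - d(C_u, C_u).real))  # True
\end{verbatim}
With these toy conventions the two path-detection probabilities are $1/4$ each while the undetected-path probability at $e$ is $1$, reproducing the non-additivity discussed above.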
Even if one does not accept this null-counterfactual formalism we hope that you still take away with you the primary fact that consistency with Bayesian reasoning produces a consistency condition; decoherence probabilities must be at least quasi-realisable and must satisfy (\ref{PRESERVE2}), and $d$-realisable histories seem far less controversial than quasi-$d$-realisable ones. As to why we should use Bayesian reasoning in the first place, we shall get to that in a moment once we have discussed linear positivity. \section*{Quasi-realisability vs Linear Positivity} Having a rather natural Bayesian interpretation for complete sets of $d$-realisable history propositions, let us now discuss quasi-$d$-realisable history propositions that satisfy (\ref{QUASI}). These don't give a good interpretation so it is tempting just to reject them, but let us look a little closer at them. Non-$d$-realisable history propositions simply satisfy $d(\alpha, \neg \alpha) \neq 0$. We have that: \begin{equation} \mathrm{Re}\, d(\alpha, \neg \alpha) = \mathrm{Re}\, d^{LP}(\alpha) - d(\alpha, \alpha) \end{equation} \noindent where $d^{LP}(\alpha)$ is defined on homogeneous history propositions in a similar manner to the decoherence functional (\emph{c.f.} Eq.(\ref{dFUNC})): \begin{equation} d^{LP}(\alpha) := \mbox{tr}(C_{\alpha} \rho). \label{LPFUNC} \end{equation} As long as they are positive the $\mathrm{Re}\, d^{LP}(\alpha)$ behave like probabilities. In the literature they are called Linear Positive (LP) probabilities and were originally promoted by Goldstein and Page \cite{GP95} as a less restrictive alternative to CH probabilities. Therefore $d$-realisable history propositions have the property that LP probabilities and decoherence functional probabilities have the same value. Quasi-realisability enforces: \begin{eqnarray} p^{LP}(\alpha^i \vert \mathbf{1}^k C) + p^{LP}(\neg \alpha^i \vert \mathbf{1}^k C) = K'. \end{eqnarray} \noindent Note, however, that LP probabilities are always, by definition, exhaustive when defined on a partition of unity $\sum_i \alpha^i = \mathbf{1}$ so $K'=1$ for all LP probabilities. So now we have a choice: either we attempt to interpret quasi-$d$-realisable propositions or we extend our discussion to LP propositions. There are a couple of reasons why going the LP way is pedagogically interesting. Firstly, all LP probabilities are realisable---hence we don't need to worry about non-realisable probabilities cropping up and having to interpret them. We shall give another reason why we reject non-realisable propositions when we discuss Cox's axioms. Secondly, LP probabilities are \emph{explicitly} non-contextual; their interpretation doesn't depend upon what other history propositions they are invoked with. This makes the non-contextuality assumption a bit more explicit such that LP probabilities do not depend upon which null-counterfactuals they are defined with respect to. So we can, rather naturally, define: \begin{equation} p^{LP}(\alpha^i \vert \mathbf{1}^k C) = p^{LP}(\alpha^i \vert \mathbf{1}^i C) \mbox{ for all } k, \end{equation} \noindent \emph{i.e.} its value is independent of the context, labelled by $k$, we use. It still depends on the fact that we have a well-defined context; hence we keep the notational dependence upon context and don't remove it entirely.
Even if one represents non-contextuality by assuming the contexts are all equivalent and called, say, $I$, one still gets a consistency condition, namely quasi-realisability, that LP probabilities satisfy since they are realisable. This non-contextuality assumption is rather analogous to the Gleason non-contextuality (which can also be expressed in terms of null-counterfactuals); after all, what is non-contextuality if not an assumption that, if you don't know which null-counterfactual you are discussing, you give each possible null-counterfactual equal \emph{a priori} weighting? So, there may exist a theorem akin to Gleason's which shows our LP probability assignments to be uniquely defined by certain natural assumptions (although we would have to justify the LP set of history propositions before discussing such a theorem; we do not attempt such a thing here). So LP probabilities can then be interpreted in a way that is exactly analogous to the way we interpreted the complete sets of $d$-realisable history propositions. In order for the Bayesian probability assignments to be well-defined probabilities bounded by $0$ and $1$ we must, in analogy with (\ref{PRESERVE2}), require that all LP probabilities preserve the partial order on the history space for all LP history propositions: \begin{equation} \alpha^i \leq \neg \alpha^j \Rightarrow \mathrm{Re}\, d^{LP}(\alpha^i) \leq \mathrm{Re}\, d^{LP}(\neg \alpha^j) \mbox{ for all } i \neq j. \label{PRESERVE3} \end{equation} \noindent This is satisfied for all LP history propositions. We can define $K' = \frac{L''}{L'''}$ in an analogous way such that: \begin{equation} p^{LP}(\alpha^i \vert C) + p^{LP}(\neg \alpha^i \vert C) = L'' \mbox{ for all } i \end{equation} \noindent and \begin{equation} p^{LP}(\mathbf{1}^k \vert C) = L''' \mbox{ for all } k. \end{equation} For LP history propositions we have that $K'=1$ and thus that $L''=L'''$. Thus we interpret the \emph{a priori} context $C$ to be the knowledge that we have no knowledge about the contexts $\mathbf{1}^k$ and thus assign them equal \emph{a priori} probabilities. Thus we can, if we wish, extend BH to include all LP history propositions and not just $d$-realisable propositions (which are, of course, also LP). If BH is correct then it helps, in part, to `explain' interference because the probabilities invoked obey rules that are consistent with our classical intuition. If BH is incorrect---if non-LP history propositions remain well-defined and experimentally realisable---then we have a theory that obeys our classical intuition, to some extent at least, which SQT disobeys---this in itself would be a novel result. \section*{Why Bayes' Rule?} Having shown that there is a certain amount of consistency between Bayes' rule and the LP formalism, the following programme presents itself: perhaps we can derive the LP probabilities by taking the history algebra and applying something akin to Cox's axioms \cite{CoxBOOK} to this space. Cox's work derives probability theory over an underlying Boolean algebra using simple consistency conditions that a natural form of inductive reasoning should obey, so it seems natural just to try and apply a similar kind of reasoning with the history algebra. If such a proof is found and as long as the history algebra could then be justified by \emph{a priori} means---or by simple physically justified axioms---then one would be able to prove that the LP formalism is \emph{just} another kind of probability theory.
It is clear, however, that our na\"{\i}ve assumption of using Bayes' rule is justified by Cox's axioms. Cox's first axiom is that the probability of a statement conditional upon some hypothesis determines the probability of the negation of that same statement upon the same hypothesis. The second axiom, more relevant here, is that the probability that two statements are both true upon a given hypothesis is determined solely by the probability of one of the statements conditional upon the given hypothesis and the probability of the other statement conditional upon the hypothesis conjoined with the presumption that the first statement is true. In our notation this is written schematically as: \begin{equation} p(\alpha \wedge \beta \vert I) := F[p(\beta \vert I), p(\alpha \vert \beta I)] \label{Cox2} \end{equation} \noindent where $F$ is an arbitrary function to be determined that is sufficiently well-behaved for our purposes. The underlying algebra for history propositions is associative so the following statement is true: \begin{equation} \alpha \wedge (\beta \wedge \gamma) = (\alpha \wedge \beta) \wedge \gamma = \alpha \wedge \beta \wedge \gamma. \label{property} \end{equation} The above property (\ref{property}) forces $F$ not to be arbitrary and Cox \cite{CoxBOOK} proves that Bayes' rule is a consequence (Jaynes highlights a more general proof in \cite{JaynesBOOK}): \begin{equation} p(\alpha \vert \beta I) = \frac{p(\alpha \wedge \beta \vert I)}{p(\beta \vert I)}. \label{properbayes} \end{equation} Since the associativity of `$\wedge$' (\ref{property}) is valid for our quantum logics, Bayes' rule follows and the above work is justified to an extent. Note that for homogeneous histories defined over the same temporal support we have that $\alpha \wedge \beta = \beta \wedge \alpha$ so the histories equivalent of the multiplication rule (\ref{AND}) also follows in such cases. Thus, although the above analysis initially seems quite na\"{\i}ve, there is some truth to it---Bayes' rule, if nothing else, should be obeyed by any natural notion of probability by Cox's axioms. We have, however, yet to generalise Cox's other proofs to the HPO algebra of history propositions proper. This remains work in progress. It is clear that the decoherence probabilities (which are also the standard probability assignments invoked using von Neumann collapse), at least in the Hamiltonian formulation, need not always obey Bayes' rule---they can disobey (\ref{CONDITION}) for example and need not be realisable or quasi-realisable---so we have to restrict our attention to either $d$-realisable histories or LP ones (or use some other assignment). A na\"{\i}ve application of Cox's first axiom, \begin{equation} p(\neg \alpha \vert I) := G[p(\alpha \vert I)], \label{Cox1} \end{equation} \noindent suggests we should use a fixed value of $K$, which is why we have restricted our attention to realisable histories. Hence we should use realisable probabilities that obey Bayes' rule---we should use LP probabilities. This approach may not be considered wholly satisfactory because we are not giving probability assignments to all histories but only to the LP subset of the algebra. This is curiously analogous to the situation in Youssef's work \cite{Youssef01} where, in deriving a form of SQT as an `exotic' complex probability theory, he has to presume a subset where the standard real probabilities are manifested.
We have placed the term `exotic' in scare quotes because we prefer not to use the term. When invoking Bayesian reasoning there is no \emph{a priori} reason probabilities should be real numbers (also see \cite{Isham02}). We only need to presume they are real when notions of relative frequency are applicable. Hence we would rather call such theories just probability theories; there is nothing really `exotic' about them. Hence we leave open the possibility of deriving the whole of the histories formalism in such a manner. We investigate such a possibility in forthcoming work. \section*{Experimental Differentiation} So if we \emph{re-define} multi-time measurements to be equally weighted sets of null-counterfactuals (due to some principle of insufficient reason) we can get all Linearly Positive (LP) probabilities. One might wish to take this very seriously. There are two tacks that we can take with regard to BH. Firstly, we could choose to use BH to discuss closed quantum systems. LP probabilities were originally promoted in this manner \cite{GP95} because, as soon as we discuss closed quantum systems, using Eq.(\ref{dFUNC}) to assign probabilities to history propositions simply becomes a \emph{postulate}. Using the real part of Eq.(\ref{LPFUNC}) to assign candidate probabilities is another, equally valid, postulate. And, of course, any rule that is distinct from the von Neumann projection postulate must be investigated very carefully. Di\'{o}si \cite{Diosi94} has argued that the LP probabilities should not be used as probability assignments because they are not consistent with the statistical independence of subsystems. Implicit in this critique is the use of a relative frequency interpretation of probability but it is not clear that using a relative frequency interpretation for closed systems is wholly sound. The only other option we have (propensities being simply objectivised relative frequencies) is to use Bayesian probabilities; and if we do use Bayesian probabilities then LP probabilities are \emph{promoted} over decoherence probabilities as they have a very simple interpretation and obey Cox's axioms for the LP subset. Bayesian probability theory encompasses the use of relative frequencies in certain situations \cite{JaynesBOOK} so there is nothing necessarily untoward about this. Secondly, we could try and apply BH to actual experiments. Anastopoulos suggests, in \cite{Anast04}, that we should try and experimentally check that $d$-inconsistent sets do really make good statistical sense. With a similar emphasis it may also be prudent just to check whether non-LP history propositions do really make good statistical sense in quantum experiments. But, of course, what do we mean by ``good statistical sense''? If we assume that ``good statistical sense'' is equivalent to the statement ``is consistent with Bayes' rule'' then BH is promoted as a tautology. Otherwise one must use a form of statistics that is inconsistent with Bayes' rule and is also well-defined. So, it is clear that, at present, the only way we know how to get relative frequencies out of quantum history theories is by discussing $d$-consistent sets. Those sets that aren't $d$-consistent may not give well-defined notions of relative frequency \cite{Anast04}. This might be because such relative frequencies aren't convergent or converge to many different values \cite{Aerts02}.
Of course, if relative frequencies converge to many values then the most natural interpretation is to suggest that we are inadvertently or necessarily mixing contexts. Hence we should, as argued above, be very careful about the notation we use and include any contextual dependence in the very definition of the probabilities involved. Hartle \cite{Hartle04}, for example, has recently analysed the double slit experiment in reference to LP history propositions and shows that if the resolution of the screen is sufficiently high then the candidate probabilities predicted using the real part of Eq.(\ref{LPFUNC}) will not be well-defined. But a resolution coarser than a critical value will give well-defined LP probabilities. How seriously should we take this? Normally such probabilities are interpreted in closed systems but should we not just check whether these are compatible with the relative frequencies of actual experiments? We have argued that LP probabilities can be interpreted in a particularly Bayesian way; the next challenge for BH is thus to try and work out how such Bayesian probabilities are related (obviously such a relation might be non-trivial) to the relative frequencies of experiments---as this might provide a way to experimentally distinguish the two approaches. There are a variety of ways one can derive relative frequencies from Bayesian probabilities; for example one can invoke notions of exchangeability, independence or use maximum entropy methods \cite{JaynesBOOK}. Statistical independence in history theories has been studied by Di\'{o}si \cite{Diosi94} and discussed by Hartle \cite{Hartle04} but there may be other useful ways to invoke relative frequencies from Bayesian probabilities. To the present author, it is tempting to believe in BH simply for the cogency of the interpretation. It uses standard notions of Bayesian probability that are well understood and it pedagogically invokes the contextuality implicit in the propositional nature of history propositions. Of course, further investigation and statistical analysis of experiments are necessary to justify it over the standard interpretation. In the standard interpretation \emph{any} ordered set of single-time measurements is realisable regardless of problems of non-additivity (presuming the relevant apparatus can be made). In BH, only those multi-time measurements that give well-defined \emph{a posteriori} and \emph{a priori} probabilities are experimentally realisable with good statistics. Thus the standard interpretation and BH give distinct statistical predictions when interpreted instrumentally. But, of course, the instrumental validity of BH bears little relation to whether BH should be invoked when discussing closed quantum systems. In closed systems probability is implicitly used as a form of inference rather than as relative frequencies of experiments so one should naturally use Bayesian probability. We have shown that all LP probabilities are consistent with Bayesian reasoning, whereas not all probabilities of the form (\ref{Trace}) are (when using the natural space of history propositions). \section*{Entropy} By invoking contexts in which history propositions are exclusive and exhaustive we now have the opportunity to use standard Shannon entropy to quantify information. For example, if a set $\{\alpha^i : i=1,2,...,N_\alpha\}$ is exclusive and exhaustive on \emph{a priori} context $C$ then we can define the set's Shannon entropy in a simple way.
Here we denote probabilities with a small $p$ and probability distributions with a large $P$. The Shannon entropy is then simply given by: \begin{equation} H[P(\alpha^i \vert C)] := -K_H \sum_{i=1}^{N_\alpha} p(\alpha^i \vert C) \ln p(\alpha^i \vert C), \end{equation} \noindent where $K_H$ is a constant. We cannot define such an entropy for sets $\{\alpha^i\}$ that aren't exhaustive and exclusive on $C$. But we can define an entropy for them if we take an \emph{a posteriori} context $I$ in which $\{\alpha^i\}$ \emph{are} exclusive and exhaustive: \begin{equation} H[P(\alpha^i \vert I)] = -K_H \sum_{i=1}^{N_\alpha} p(\alpha^i \vert I) \ln p(\alpha^i \vert I). \end{equation} In \cite{Mana04}, Mana has cogently argued against improper use of such entropy concepts in SQT. Since we have used standard Bayesian probability and kept contextuality in check we can use Mana's pedagogical results in the histories domain as well. As such, if we have two sets of propositions $\{\alpha^i\}$ and $\{\beta^j\}$ that are exclusive and exhaustive in the \emph{same} context $I$---they are both `sets of alternatives' in $I$---then we can define the conditional entropy as follows: \begin{eqnarray} H[P(\beta^j \vert \alpha^i I)] &:=& \sum_{i=1}^{N_\alpha} p(\alpha^i \vert I)\, H[P(\beta^j \vert \alpha^i I)] \\ &=& -K_H \sum_{i=1}^{N_\alpha} p(\alpha^i \vert I) \sum_{j=1}^{N_\beta} p(\beta^j \vert \alpha^i I) \ln p(\beta^j \vert \alpha^i I). \end{eqnarray} An analogous definition is used for $H[P(\alpha^i \vert \beta^j I)]$. In such a case the following standard formulae should apply by mathematical necessity \cite{Mana04}: \begin{eqnarray} H[P(\alpha^i \wedge \beta^j \vert I)] &=& H[P(\alpha^i \vert I)] + H[P(\beta^j \vert \alpha^i I)] \\ &=& H[P(\beta^j \vert I)] + H[P(\alpha^i \vert \beta^j I)] \end{eqnarray} \begin{equation} H[P(\beta^j \vert I)] \geq H[P(\beta^j \vert \alpha^i I)]. \end{equation} These are the standard strong additivity and concavity properties of Shannon entropy. We can avoid any of the confusions highlighted by Mana \cite{Mana04} by using such standard definitions of Shannon entropy. If we interpret multi-time measurements as equally weighted mixed sets of null-counterfactuals then such entropy concepts allow us to compare $d$-consistent or LP sets entropically. This is not particularly useful if one interprets history propositions in the standard quantum cosmological manner but, of course, it may be very useful when interpreting quantum history propositions instrumentally. The reader is also referred, in earnest, to a Bayesian derivation of entropic concepts given recently by Caticha \cite{Catich03}. \section*{Future Research} Isham's seminal work on CH and topos theory \cite{Isham97} pre-empts the idea that $d$-inconsistent sets can be assigned a certain amount of meaning using a notion of $d$-accessibility which is related to our definition of $d$-realisability\footnote{Note that our use of the term `$d$-realisable' is not the same as its use in \cite{Isham97}.}. The present work can be considered a pedagogical account of such toposophic concepts in the domain of instrumental quantum theory, which shows that such a generalisation provides different statistical predictions. Isham also argued that it is pedagogically useful to discuss $d$-consistent Boolean algebras rather than $d$-consistent sets \emph{per se} because such objects are more akin to what we think of in classical probability theory.
We agree with this sentiment (although we didn't submit to it here because of the useful illustrative notion of null-counterfactual) and the above work can easily be framed as such: one can define a Boolean sub-algebra of history propositions $W = \{\alpha^i : i=1,2,\ldots,M\}$ to be $d$-consistent if \cite{Isham97}: \begin{equation} d(\alpha^i \wedge \alpha^j, \alpha^i \wedge \alpha^j) = d(\alpha^i, \alpha^j) \mbox{ for all } \alpha^i,\alpha^j \in W. \end{equation} \noindent Furthermore, in our notation, we can ask that: \begin{equation} p(\alpha^i \vee \alpha^j \vert I) = p(\alpha^i \vert I) + p(\alpha^j \vert I) - p(\alpha^i \wedge \alpha^j \vert I) \mbox{ for all } \alpha^i,\alpha^j \in W. \label{61} \end{equation} The extended definition of entropy for not-necessarily exclusive events given by Cox \cite{CoxBOOK} can also be applied to such propositions in a Boolean algebra as long as (\ref{61}) is satisfied---when such propositions are non-exclusive they would be non-exclusive in the same way that classical propositions can be non-exclusive. It might also be pedagogically useful to generalise single-time von Neumann null-counterfactuals to Boolean algebras proper. One can discuss more general contexts in which $d$-inconsistent Boolean algebras are consistent---in an \emph{a posteriori} sense---with rules (\ref{BAYES}) and (\ref{AND}). The present author is not yet sure exactly how such toposophic concepts are related to BH; this is left for further research. Nor is it clear how such concepts pass across to LP history propositions. Operational notions \cite{KrausBOOK, DaviesBOOK} such as Positive Operator Valued Measures (POVMs) provide a generalisation of von Neumann single-time measurements in the sense that each POVM defines a set of propositions that are apt in a measurement with certain probabilities. As such, it is easy to imagine an operational generalisation of the above work (see \cite{Rudolph96, Kent98}). In the POVM formalism single-time propositions are represented by effect operators that need not necessarily be orthogonal. When discussing such operational notions it is important to distinguish between `orthogonal' propositions and `exclusive' ones---POVMs can consist of non-orthogonal propositions but these propositions are interpreted to occur exclusively regardless of whether they are orthogonal or not. In SQT we can prepare a mixed state of non-orthogonal pure states such that each pure state occurs exclusively in the mixed state with a given weight; similarly POVMs can consist of exclusive non-orthogonal propositions. So, if we interpret the outcomes of POVMs to happen exclusively, a generalisation into the multi-time domain that is compatible with interpretation \ref{2} might be possible. Such a multi-time generalisation, however, would require a logic on the set of effect operators akin to the quantum logic of projection operators. Time evolution in the POVM formalism is more general than unitary evolution, which might add an extra complication. One is also tempted to apply such null-counterfactuals to Bell-like experiments. It is exactly a cogent notion of a null-counterfactual that is lacking in such analyses \cite{Shimon04}. By describing such experiments in terms of realisable sets of history propositions one might nullify any proofs of nonlocality.
We have shown above that there is no \emph{a priori} reason why all multi-time candidate propositions made out of single-time propositions should be well-defined consistently. Similarly, there is no \emph{a priori} reason why candidate null-counterfactual propositions about multiple spacelike separated spacetime regions must all be well-defined (as is implicitly assumed in \cite{Stapp03} and criticised by \cite{Shimon04}). By using interpretation \ref{2}, null-counterfactual statements might necessarily be statements about both spacelike separated regions and cannot be well-defined for individual small spacetime regions. This would, tentatively, be a way to argue against the EPR paper \cite{EPR} in a way akin to Bohr's response \cite{Bohr35}. It may also be a way to promote Bayesian probability over relative frequencies \cite{Marlow05b,Marlow05}. This is presently left for further research, as are relativistic generalisations of BH. \section*{Conclusions} We have shown that the two interpretations of multi-time measurements given by Anastopoulos \cite{Anast04} can be distinguished by how they treat non-realisable history propositions. If we assume that multi-time measurements consist of successions of single-time measurements then one gets non-additive (and thus non-exclusive) propositions---this is the standard interpretation of multi-time measurements. Alternatively, if we assume that multi-time measurements are made up of sets of exclusive and exhaustive history propositions (and recover single-time SQT when using single-time history propositions) then one promotes a more standard notion of probabilistic exclusivity. The latter interpretation seems cogent and it might be experimentally differentiated from the former by a statistical analysis of non-realisable propositions in experiments. If the probabilities of non-realisable propositions all remain well-defined then we must stick to the standard interpretation, but otherwise the latter novel interpretation would be promoted. Since the latter interpretation provides a certain amount of philosophical clarity over the former, it is worthwhile trying to experimentally distinguish the two. We justify our novel approach, in part, by invoking Cox's probability axioms on the history algebra and showing that Bayes' rule should be obeyed by any natural probability assignments. All preprints refer to the http://arxiv.org website. \end{document}
\begin{document} \mainmatter \title{A dynamic symbolic geometry environment based on the Gr\"obnerCover algorithm for the computation of geometric loci and envelopes} \titlerunning{A geometric environment: loci and envelopes} \author{Miguel A. Ab\'anades \and Francisco Botana \thanks{Both authors partially supported by the project MTM2011-25816-C02-(01,02) funded by the Spanish \textit{Ministerio de Econom\'ia y Competitividad} and the European Regional Development Fund (ERDF)\dots The final publication is available at http://link.springer.com.}} \authorrunning{Miguel A. Ab\'anades \and Francisco Botana} \institute{CES Felipe II, Universidad Complutense de Madrid\\ C/ Capit\'an 39, 28300 Aranjuez, Spain\\ [email protected]\\ Depto. de Matem\'atica Aplicada I, Universidad de Vigo\\ Campus A Xunqueira, 36005 Pontevedra, Spain\\ [email protected]} \maketitle \begin{abstract} An enhancement of the dynamic geometry system \mbox{GeoGebra} for the automatic symbolic computation of algebraic loci and envelopes is presented. Given a \mbox{GeoGebra} construction, the prototype, after rewriting the construction as a polynomial system in terms of variables and parameters, uses an implementation of the recent Gr\"obnerCover algorithm to obtain the algebraic description of the sought locus/envelope as a locally closed set. The prototype shows the applicability of these techniques in general purpose dynamic geometry systems. \keywords{Dynamic Geometry, Locus, Envelope, Gr\"obnerCover Algorithm, GeoGebra, Sage} \end{abstract} \section{Introduction} Most dynamic geometry systems (DGS) implement loci generation just from a graphic point of view, returning a locus as a set of points in the screen with no algebraic information. A simple algorithm based on elimination theory to obtain the equation of an algebraic plane curve from its description as a locus set was described in \cite{BotanaValcarceMatCom2003}. This new information expands the algebraic knowledge of the system, allowing further transformations of the construction elements, such as constructing a point on a locus, intersecting the locus with other elements, etc. The same consideration can be made with respect to the envelope of a family of curves. This algebraic approach is a significant improvement over the numeric-graphic method mentioned above. An implementation of the algorithm in a system embedding GeoGebra in the Sage notebook was described at CICM 2011 \cite{BotanaCICM2011}. In fact, the algorithm is already behind the \textit{LocusEquation} command in the beta version of the next version of the DGS GeoGebra \cite{GeoGebra} (see \url{http://wiki.geogebra.org/en/LocusEquation_Command}). It has also recently been implemented by the DGS JSXGraph \cite{jsxgraph} to determine the equation of a locus set using remote computations on a server \cite{JSXGraphADG2010}, an idea previously developed by the authors in \cite{LAD}. Unfortunately, this algorithm does not discriminate between regular and special components of a locus (following the definitions in \cite{SendraSendra2000})\footnote{A special component of a locus is basically a one-dimensional subset of the locus corresponding to a single position of the moving point.}. More concretely, the obtained algebraic set may contain extra components sometimes due to the fact that the method returns only Zariski closed sets (i.e. zero sets of polynomials) and sometimes due to degenerate positions in the construction (e.g. two vertices being coincidental for a triangle construction). 
There is little that can be done to solve these problems with the simple elimination approach. Concerning degeneration, there is no alternative except explicitly requesting information from the user about the positions producing special components. However, the recent Gr\"obnerCover algorithm \cite{MontesWibmer2010} has opened new possibilities for the automated processing of these problems. From the canonical decomposition of a polynomial system with parameters returned by the algorithm, and following a remark by Tom\'as Recio concerning the dimensions of the spaces of variables and parameters, a protocol has been established to distinguish between regular and special components of a locus set. For example, a circle, a variety of dimension 1, is declared to be a special component of a locus by the protocol if it corresponds to a point, a variety of dimension 0. This heuristic in the protocol improves the automatic determination of loci but does not fully resolve it. It is not difficult to find examples where this general rule does not suit the user's interests. This is a delicate issue because, in some situations, these special components are the relevant parts of the sought set (the study of bisector curves is a good source for such examples). As an illustration, let us consider the following problem included in \cite{Guzman2002} together with a remark about its difficult synthetic treatment: {\it Given a triangle $ABC$. Take a point $M$ on $BC$. Consider the orthogonal projections $N$ of $M$ onto $AC$, and $P$ onto $AB$ respectively. The lines $AM$ and $PN$ meet at $X$. What is the locus set of points $X$ when $M$ moves along the line $BC$?} When the vertices of the triangle $ABC$ are the points $(2,3)$, $(1,0)$ and $(0,1)$, the locus set is a conic from which a point has to be removed. That is, the locus set is not an algebraic variety but a locally closed set. Figure \ref{fig:Locus-as-a-constructible-set-1} shows the plotting of the conic in GeoGebra together with its precise algebraic description as provided by the prototype. \begin{figure} \caption{Locus as a constructible set.} \label{fig:Locus-as-a-constructible-set-1} \end{figure} If we consider this same construction with $A(0,0),B(1,0)$ and $C(0,1)$, a standard DGS will plot a straight line as locus, while ordinary elimination will give the true locus $2x+2y = 1$ plus two other lines, namely, the coordinate axes $x = 0$ and $y = 0$. These extra lines correspond to two degenerate positions for the mover: $M = B$ and $M = C$. Applying the criterion sketched above, the system identifies these two lines as special components and hence removes them from the final description. In tables \ref{table:outputCase1} and \ref{table:outputCase2} we find the parametric systems and outputs from the Gr\"obnerCover algorithm for the two considered instances respectively. 
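For readers who wish to reproduce the plain elimination step discussed above, the following sketch uses the sympy library as a stand-in for the Singular/Sage computation actually performed by the prototype (the prototype itself does not use sympy). It takes the parametric system for the instance $A(0,0)$, $B(1,0)$, $C(0,1)$ listed in Table \ref{table:outputCase2} below and eliminates the construction variables.
\begin{verbatim}
from sympy import symbols, groebner

# Parametric system for the instance A(0,0), B(1,0), C(0,1)
# (same polynomials as in the second table below: x1..x6 are auxiliary
#  construction variables, x and y are the coordinates of the point X).
x1, x2, x3, x4, x5, x6, x, y = symbols('x1 x2 x3 x4 x5 x6 x y')
polys = [-x1 - x2 + 1,
         x3,
         -x2 + x4,
         -x1 + x5,
         -x6,
         -x1*y + x2*x,
         -(x4 - y)*(x3 - x5) + (x4 - x6)*(x3 - x)]

# A lexicographic Groebner basis with x1 > ... > x6 > x > y eliminates
# the construction variables; the generators involving only x and y
# describe the Zariski closed set returned by plain elimination.
G = groebner(polys, x1, x2, x3, x4, x5, x6, x, y, order='lex')
locus = [g for g in G.exprs if g.free_symbols <= {x, y}]
print(locus)
\end{verbatim}
As expected from the discussion above, plain elimination describes the union of the line $2x+2y=1$ with the two coordinate axes; it is the Gr\"obnerCover-based protocol that allows the prototype to discard the two degenerate components.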
\begin{table} \begin{center} \begin{tabular}{|l|p{8cm} |} \hline Parametric System & $-x_1-x_2+1, 2x_1+2x_2-2x_3-2x_4, -2x_3+2x_4-2, x_1+3x_2-x_5-3x_6, -3x_5+x_6+3, -(x_4-y)(x_3-x_5)+(x_4-x_6)(x_3-x), -(y-3)(x_1-2)+(x-2)(x_2-3)$ \\ \hline \hline Basis segment 1 & $\{1\}$\\ \hline Segment 1 & $\mathbb{V}(0)\setminus \mathbb{V}(x^2-4xy+6x-y^2+8y-7)$ \\ \hline Basis segment 2 $^*$ & $ \{\{(5y-20)x_6+(-3x+6y+3),(x-2y+7)x_6+(-3y)\},\{(5y-20)x_5+(-x-3y+21),(x-7y+27)x_5+(4y-28)\},x_4-1,x_3,\{(y-4)x_2+(-x+2y+1),(x-2y+7)x_2+(-5y)\},\{(y-4)x_1+(x-3y+3),(x-y+3)x_1+(4y-4)\}\}$ \\ \hline Segment 2 & $\mathbb{V}(x^2-4xy+6x-y^2+8y-7)\setminus \mathbb{V}(y-4,x-1)$ \\ \hline Basis segment 3 & $\{1\}$ \\ \hline Segment 3 & $\mathbb{V}(y-4,x-1)\setminus \mathbb{V}(1)$ \\ \hline \hline Locus (after heuristic step) & $\mathbb{V}(x^2 - 4xy - y^2 + 6x + 8y - 7) \setminus \mathbb{V}(y - 4,x - 1)$ \\ \hline \end{tabular} \caption{Parametric system, Gr\"obnerCover output and returned locus for instance with $A(2,3)$, $B(1,0)$ and $C(0,1)$. \newline \footnotesize{$^*$ Includes regular functions (see \cite{MontesWibmer2010}).}} \label{table:outputCase1} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|l|p{8cm}|} \hline Parametric System & $-x_1-x_2+1,x_3, -x_2 + x_4, -x_1 + x_5, -x_6, -x_1 y + x_2 x, -(x_4 - y)(x_3 - x_5) + (x_4 - x_6)(x_3 - x)$ \\ \hline \hline Basis segment 1 & $\{1\}$ \\ \hline Segment 1 & $\mathbb{V}(0)\setminus (\mathbb{V}(2x+2y-1) \cup \mathbb{V}(x) \cup \mathbb{V}(y))$ \\ \hline Basis segment 2 & $\{x_6,(x+y)x_5-x,(x+y)x_4-y,x_3,(x+y)x_2-y,(x+y)x_1-x\}$ \\ \hline Segment 2 & $(\mathbb{V}(2x+2y-1)\setminus \mathbb{V}(1)) \cup (\mathbb{V}(x)\setminus \mathbb{V}(x,y)) \cup (\mathbb{V}(y)\setminus \mathbb{V}(x,y))$ \\ \hline Basis segment 3 & $\{x_6,x_4+x_5-1,x_3,x_2+x_5-1,x_1-x_5,x_5^2-x_5\}$ \\ \hline Segment 3 & $ \mathbb{V}(x,y)\setminus \mathbb{V}(1)$ \\ \hline \hline Locus (after heuristic step) & $\mathbb{V}(2x + 2y - 1)$ \\ \hline \end{tabular} \caption{Parametric system, Gr\"obnerCover output and returned locus for instance with $A(0,0)$, $B(1,0)$ and $C(0,1)$.} \label{table:outputCase2} \end{center} \end{table} \section{Prototype description} The system (accessible at \cite{LocusEnvelopeWeb}) consists of a web page with a \mbox{GeoGebra} applet where the user constructs a locus or a family of linear objects depending on a point. For any of these constructions (specified using a predetermined set of GeoGebra commands) the prototype provides the algebraic description of the locus/envelope by just pressing one button. Note that in its current state, the system does not provide the equation of the envelope, but the one of the discriminant line. A note stating this fact should be given if using the system for teaching purposes. The process is roughly as follows. First, the XML description of the GeoGebra construction is sent to a Server where an installation of a Sage Cell Server (\cite{SageCellServer}) is maintained by the authors. There, the construction follows an algebraization process, as specified by a Sage library \cite{BotanaICCSA2011}. The communication Sage-GeoGebra is made possible by the JavaScript GeoGebra functions that allow the data transmission to and from the applet. In particular, the XML description of any GeoGebra diagram can be obtained. The processing of the XML description of the diagram is made by some ad-hoc code by the authors that use Sage through the Sage cell server, a general service by Sage. 
More concretely, Singular, which is included in Sage and provides an implementation of the Gr\"obnerCover algorithm, is used. The Gr\"obner cover of the obtained parametric polynomial system is analyzed, and the accepted components of the locus/envelope are incorporated into the applet. Note that the goal is not to provide a final tool but a proof-of-concept prototype showing the feasibility of using sophisticated algorithms like Gr\"obnerCover to supplement the symbolic capabilities of existing dynamic geometry systems, as well as to show the advantage of connecting different systems by using web services. \end{document}
\begin{document} \maketitle \def\ddate {\sevenrm \ifcase\month\or January\or February\or March\or April\or May\or June\or July\or August\or September\or October\or November\or December\fi\! {\the\day}, \!{\sevenrm\the\year}} \renewcommand{\thefootnote}{} {{ \footnote{2010 \emph{Mathematics Subject Classification}: Primary: 60F15, 60G50; Secondary: 60F05.} \footnote{\emph{Key words and phrases}: independent random variables, lattice distributed, Bernoulli part, local limit theorem, effective remainder, random walk in random scenery. \par \sevenrm{[LLTR]5} \ddate{}} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \begin{abstract} We show that the Bernoulli part extraction method can be used to obtain approximate forms of the local limit theorem for sums of independent lattice valued random variables, with effective error term, that is with explicit parameters and universal constants. We also show that our estimates allow one to recover the local limit theorems of Gnedenko and Gamkrelidze. We further establish by this method a local limit theorem with effective remainder for random walks in random scenery. \end{abstract} \section{Introduction and Main Result.} The extraction method of the Bernoulli part of a (lattice valued) random variable was developed by McDonald in \cite{M},\cite{M1},\cite{MD} for proving local limit theorems in the presence of the central limit theorem. Twenty years before McDonald's work, Kolmogorov \cite{K} initiated a similar approach in the study of L\'evy's concentration function, and was the first to explore this direction. For details and clarifications, we refer to the recent paper by Aizenmann, Germinet, Klein and Warzel \cite{AGKW}, where this idea is also developed for general random variables and applications are given. That method allows one to transfer results which are available for systems of Bernoulli random variables to systems of arbitrary random variables. It is based on a probabilistic device, and has proved to be an efficient alternative to the characteristic functions method. Kolmogorov wrote to this effect in his 1958 paper \cite{K}, p.\,29: \lq\lq... \!\!{\it Il semble cependant que nous restons toujours dans une p\'eriode o\`u la comp\'etition de ces deux directions {\rm [characteristic functions or direct methods from the calculus of probability]} conduit aux r\'esultats les plus f\'econds ...\rq\rq}. We believe that Kolmogorov's comment is still topical. \vskip 2pt The main object of this article is to show that this approach can be used to obtain, in a rather simple way, approximate forms of the local limit theorem for sums of independent lattice valued random variables, with effective error term, that is with explicit parameters and universal constants. The approximate form we obtain is quite simple to express, and is thereby easy to handle. Further, it is precise enough to contain the local limit theorems of Gnedenko and Gamkrelidze (\ref{llt}). Before stating the main results, and in order to compare them with earlier work, it is necessary to recall and discuss some classical facts and briefly describe the background of this problem. Let $ \tilde X=\{X_n , n\ge 1\}$ be a sequence of independent, square integrable random variables taking values in a common lattice $\mathcal L(v_{ 0},D )$ defined by the sequence $v_{ k}=v_{ 0}+D k$, $k\in {\mathbb Z}$, where $v_{0} $ and $D >0$ are real numbers. Let \begin{equation}\label{not1}S_n=\sum_{j=1}^nX_j, \qq M_n=\sum_{j=1}^n{\mathbb E \,} X_j , \qq \Sigma_n=\sum_{j=1}^n{\rm Var}( X_j) .
\end{equation} Then $S_n$ takes values in the lattice $\mathcal L( v_{ 0}n,D )$. The sequence $\tilde X$ satisfies a local limit theorem if \begin{equation}\label{llt} \D_n:= \sup_{N=v_0n+Dk }\Big| \sqrt{\Sigma_n} {\mathbb P}\{S_n=N\}-{D\over \sqrt{ 2\pi } }e^{- {(N-M_n)^2\over 2 \Sigma_n} }\Big| = o(1). \end{equation} This is a fine limit theorem in Probability Theory, which also has deep connections with Number Theory, see for instance Freiman \cite{F} and Postnikov \cite{Po}. These two aspects of the same problem were much studied in past decades by the Russian school of probability. It seems, however, that some of these results are nowadays forgotten. \vskip 2pt Assume that $\tilde X$ is an i.i.d. sequence and let $\m={\mathbb E \,} X_1$, $\s^2={\rm Var}( X_1)$. If for instance $X_1$ takes only even values, it is clear that (\ref{llt}) cannot be fulfilled with $D=1$. In fact, (\ref{llt}) holds (with $M_n=n\m$, $\Sigma_n=n\s^2$) if and only if the span $D$ is maximal, i.e. there are no other real numbers $v'_{0} $ and $D' >D$ for which ${\mathbb P}\{X_1 \in\mathcal L(v'_0,D')\}=1$. This is Gnedenko's well-known generalization of the de Moivre-Laplace theorem. Notice that (\ref{llt}) is significant only in the bounded range of values \begin{equation}\label{lltrange} {|N-n\m|} \le \s \sqrt{2n\log \frac{D }{ \e_n }} ,\end{equation} where $\e_n\downarrow 0$ depends on the Landau symbol $o$. It is worth observing that (\ref{llt}) cannot be deduced from a central limit theorem with rate, even under stronger moment assumptions. Suppose for instance $D=1$, $X$ is centered and ${\mathbb E \,}|X|^3<\infty$. From the Berry-Esseen estimate it only follows that $$\Big|\s \sqrt{ n} {\mathbb P}\{S_n=k\}-\s \sqrt n\int_{\frac{k}{\s \sqrt n}}^{\frac{k+1}{\s \sqrt n}}e^{-t^2/2}\frac{\dd t}{\sqrt{2\pi}}\Big|\le C\frac{ {\mathbb E \,}|X|^3}{\s^2} .$$ However, the comparison term already has the right order for all integers $k$ such that $k+1\le \s \sqrt n$, since $$ \sup_{k+1\le \s \sqrt n}\big|\s \sqrt n\int_{\frac{k}{\s \sqrt n}}^{\frac{k+1}{\s \sqrt n}}e^{-t^2/2}\frac{\dd t}{\sqrt{2\pi}}-\frac{1}{\sqrt{2\pi}}e^{-\frac{k^2}{2\s^2 n}}\big|\le \frac{C}{\s \sqrt n} \to 0. $$ Hence (\ref{llt}) cannot follow from it. Gnedenko's theorem is optimal: Matskyavichyus \cite{Mat} showed that for any nonnegative function $\p(n)\to 0$ as $n\to \infty$, there is an i.i.d. sequence $\tilde X$ (with ${\mathbb E \,} X_1=0$, ${\mathbb E \,} X_1^2<\infty$, and the form of the characteristic function of $X_1$ given explicitly) such that for each $n\ge n_0$, $ \sqrt n \D_n\ge \p(n)$. Stronger integrability properties yield better remainder terms. \begin{theorem} \label{r} Let $F$ denote the distribution function of $X$.\vskip 1pt \noi {\rm (i) (\cite{IL} Theorem 4.5.3)} In order that the property \begin{equation} \label{alfa} \sup_{N=an+Dk}\Big| \sqrt n {\mathbb P}\{S_n=N\}-{D\over \sqrt{ 2\pi}\s}e^{- {(N-n\m )^2\over 2 n \s^2} }\Big| ={\mathcal O}\big(n^{-\alpha}\big) , \ 0<\a<1/2 , \end{equation} holds, it is necessary and sufficient that the following conditions be satisfied: \begin{eqnarray*} (1) \ D \ \hbox{is maximal}, \qq \qq\qq (2) \ \ \int_{|x|\ge u} x^2 F(dx) = \mathcal O(u^{-2\a})\quad \hbox{as $u\to \infty$.} \end{eqnarray*} {\rm (ii) (\cite{[P]} Theorem 6 p.197)} If ${\mathbb E \,} |X|^3<\infty$, then (\ref{alfa}) holds with $\a =1/2$. \end{theorem} The local limit theorem in the independent case is often studied by using various structural characteristics, which are interrelated.
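Before turning to these characteristics, the content of (\ref{llt}) is easy to visualise numerically. The following sketch is a mere illustration (the particular law and the values of $n$ are arbitrary choices and play no role in the proofs): it computes the exact law of $S_n$ by repeated convolution for $X_1$ uniform on $\{0,1,2\}$, a case where $D=1$ is maximal, and evaluates the supremum $\D_n$ in (\ref{llt}).
\begin{verbatim}
import numpy as np

# Exact law of S_n for X uniform on {0,1,2} (span D = 1 is maximal),
# compared with the Gaussian comparison term of the local limit theorem.
f = np.array([1.0, 1.0, 1.0]) / 3.0     # P{X=0}, P{X=1}, P{X=2}
mu, var = 1.0, 2.0 / 3.0                # E X, Var(X)

for n in (10, 100, 1000):
    pmf = np.array([1.0])
    for _ in range(n):
        pmf = np.convolve(pmf, f)       # law of S_n on {0,...,2n}
    N = np.arange(pmf.size)
    gauss = np.exp(-(N - n*mu)**2 / (2*n*var)) / np.sqrt(2*np.pi)
    print(n, np.max(np.abs(np.sqrt(n*var)*pmf - gauss)))   # Delta_n
\end{verbatim}
The printed suprema decrease as $n$ grows, in accordance with (\ref{llt}).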
There exists a substantial literature (unfortunately with no survey) and we only report a few results. The ``smoothness'' characteristic \begin{eqnarray}\label{delta} \d_X =\sum_{k\in {\mathbb Z}}\big|{\mathbb P}\{X=v_k\}-{\mathbb P}\{X=v_{k+1}\}\big|, \end{eqnarray} thoroughly investigated by Gamkrelidze, is connected to the characteristic function $\p_X(t)= {\mathbb E \,} e^{i t X}$ through the relation \begin{eqnarray}\label{delta1a} (1-e^{it})\p_X(t)&=&\sum_{m\in {\mathbb Z}} e^{itm}\big({\mathbb P}\{X=m\}-{\mathbb P}\{X=m-1\}\big). \end{eqnarray} Hence \begin{eqnarray}\label{delta1b} |\p_X(t)|&\le & \frac{\d_X}{2|\sin (t/2)| }\qq\quad (t\notin 2\pi{\mathbb Z}). \end{eqnarray} This is used in Gamkrelidze \cite{G} to prove the following well-known result: If a sequence $\tilde X$ of independent integer valued random variables verifies: \vskip 2pt (i) there exists an $n_0$ such that $\sup _k\ \d_{X_k^1+\ldots +X_k^{n_0}}\ <\ \sqrt 2$, (here $X_k^{ j}, 1\le j\le n_0$ are independent copies of $X_k$), \vskip 1pt (ii) the central limit theorem is applicable, \vskip 1pt (iii) ${\rm Var}(S_n)=\mathcal O(n)$, \vskip 2pt \noi then the local limit theorem is applicable in the strong form (i.e. remains true when changing or discarding a finite number of terms of $\tilde X$). Later Davis and McDonald \cite{MD} proved a variant of Gamkrelidze's result using the Bernoulli part extraction method. Let $X$ be a random variable such that ${\mathbb P} \{X \in {\mathcal L}(v_0,D) \}=1$, and let \begin{eqnarray}\label{vartheta} \t_X =\sum_{k\in {\mathbb Z}}{\mathbb P}\{X=v_k\}\wedge{\mathbb P}\{X=v_{k+1}\} , \end{eqnarray} where $a\wedge b=\min(a,b)$. Note (Section \ref{bercomp}) that necessarily $ \t_{X }<1$; moreover $ \d_X =2-2\t_X $ (Mukhin \cite{Mu}, p.700). This simple characteristic is used in that method and it is required that $\t_X>0$. More precisely: \begin{theorem} {\rm (\cite{MD}, Theorem 1.1)} Let $\{ X_j , j\ge 1\}$ be independent, integer valued random variables with partial sums $S_n= X_1+\ldots +X_n$ and let $f_j(k)= {\mathbb P}\{X_j=k\}$. Let also for each $j$ and $n$, $$q(f_j)= \sum_{k} [f_j(k)\wedge f_j(k+1)], \qq Q_n=\sum_{j=1}^n q(f_j). $$ Suppose that there are numbers $b_n>0$, $a_n$ such that $\lim_{n\to \infty}b_n= \infty$, $\limsup_{n\to \infty} {b_n^2}/{Q_n}<\infty$, and $$ \frac{S_n-a_n}{b_n} \ \ \buildrel{\mathcal L}\over{\Longrightarrow}\ \ \mathcal N(0,1).$$ Then $$ \lim_{n\to \infty} \sup_{k} \Big|b_n {\mathbb P}\{S_n=k\} -\frac{1}{\sqrt{2\pi}}e^{- \frac{(k-a_n)^2}{2b_n^2}}\Big|=0. $$ \end{theorem} \begin{remark} -- It may happen that $q(f_j)\equiv0$ and so $Q_n\equiv 0$. In the above (original) statement, it is thus implicitly assumed that $Q_n>0$, $Q_n\uparrow\infty$ and $q(f_j)>0$, which is equivalent to $f_j(k)\wedge f_j(k+1)>0$ for some $k\ge 0$. \noi -- It was recently shown in Weber \cite{W} that this method can also be used efficiently to prove the almost sure local limit theorem in the critical case, namely for sums of i.i.d. random variables with the minimal integrability assumption: square integrability. \end{remark} As mentioned before, we are mainly interested in local limit theorems with explicit constants in the remainder term. There are, generally speaking, far fewer related papers. Most of the local limit theorems with rate are usually stated with the Landau symbols $o$, $\mathcal O$, and so the implicit constants may depend on the sequence itself.
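The relation $\d_X=2-2\t_X$ quoted above follows from the identity $|a-b|=a+b-2(a\wedge b)$ together with $\sum_{k\in{\mathbb Z}}{\mathbb P}\{X=v_k\}=1$; it is also immediate to check numerically. The following minimal sketch does so for an arbitrarily chosen integer law (the particular law is only an illustration).
\begin{verbatim}
# delta_X = sum_k |f(k)-f(k+1)| and tau_X = sum_k min(f(k), f(k+1))
# for a finitely supported integer law (an arbitrary illustrative choice).
f = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}

ks = range(min(f) - 1, max(f) + 1)
delta = sum(abs(f.get(k, 0.0) - f.get(k + 1, 0.0)) for k in ks)
tau   = sum(min(f.get(k, 0.0), f.get(k + 1, 0.0)) for k in ks)
print(delta, tau, abs(delta - (2 - 2*tau)) < 1e-12)   # 0.8 0.6 True
\end{verbatim}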
Consider the characteristic $$ H(X ,d) = {\mathbb E \,} \langle X^*d\rangle^2,$$ where $\langle \a \rangle$ is the distance from $\a$ to the nearest integer and $X^*$ denotes a symmetrization of $X$. In Mukhin \cite{Mu} and \cite{Mu1}, the two-sided inequality \begin{eqnarray}\label{fih} 1-2\pi^2 H(X ,\frac{t }{2\pi}) \le |\p_X(t)|\le 1-4 H(X ,\frac{t }{2\pi}) , \end{eqnarray} is established. The following is the one-dimensional version of Theorem 5 in \cite{Mu}, which is stated without proof. \begin{theorem} Let $X_1,\ldots, X_n$ have zero mean and finite third moments. Let $$ H_n= \inf_{1/4\le d\le 1/2}\sum_{j=1}^n H(X_j ,d), \qq L_n= \frac{\sum_{j=1}^n{\mathbb E \,} |X_j|^3}{\big(\sum_{j=1}^n{\mathbb E \,} |X_j|^2\big)^{3/2}} .$$ Then $\D_n\le CL_n\, \big( {\Sigma_n }/{ H_n}\big)$. \end{theorem} The author further announced a manuscript devoted to the question of the estimates of the rate of convergence. We have, however, been unable to find any corresponding publication. For the i.i.d. case with a third moment condition, we also record Lemma 3 in Doney \cite{D}. \vskip 3 pt Before stating our main result, let us say a few words concerning the method we will use, which is quite elementary. \vskip 3 pt Recall that $S_n=X_1+\ldots +X_n$, where $X_j$ are independent random variables such that ${\mathbb P}\{X_j \in\mathcal L(v_{ 0},D )\}=1$; for the moment we do not assume any moment condition. We only suppose that \begin{equation}\label{basber} \t_{X_j}>0, \qq \quad j=1,\ldots, n. \end{equation} Anticipating Lemma \ref{dec} a little, we can write $ S_n\buildrel{\mathcal D}\over{=} W_n + DM_n $ where \begin{equation}\label{dec0} W_n =\sum_{j=1}^n V_j,\qq M_n=\sum_{j=1}^n \e_jL_j, \quad B_n=\sum_{j=1}^n \e_j . \end{equation} The random variables $ (V_j,\e_j),L_j$, $j=1,\ldots,n $ are mutually independent and $\e_j$, $ L_j $ are independent Bernoulli random variables with ${\mathbb P}\{L_j =0\}={\mathbb P}\{L_j=1\}=1/2$. As moreover $M_n\buildrel{\mathcal D}\over{ =}\sum_{j=1}^{B_n } L_j$, the following result will be relevant. \begin{lemma}{\rm (\cite{[P]}, Chapter 7, Theorem 13)} \label{lltber}Let $\mathcal B_n=\b_1+\ldots+\b_n$, $n=1,2,\ldots$ where $ \b_i $ are i.i.d. Bernoulli r.v.'s (${\mathbb P}\{\b_i=0\}={\mathbb P}\{\b_i=1\}=1/2$). There exists a numerical constant $C_0$ such that for all positive $n$ \begin{eqnarray*} \sup_{z}\, \Big| {\mathbb P}\big\{\mathcal B_n=z\big\} -\sqrt{\frac{2}{\pi n}} e^{-{ (2z-n)^2\over 2 n}}\Big| \le \frac{C_0}{n^{3/2}} . \end{eqnarray*} \end{lemma} \begin{remark} In fact a little more is true, namely that we have $ o ( {1}/{n^{3/2}} )$. And it is also possible to show the following estimate yielding a better error term in the presence of a different comparison term: There exists an absolute constant $C$ such that \begin{equation}\sup_{z}\Big| {\mathbb P}\{ \mathcal B_n=z \} -\sqrt{{ 2 \over \pi n}} \int_{\mathbb R} e^{-i{2z-n\over \sqrt n}v- { v^2 \over 2}- { v^4 \over 12} }\dd v \Big|\le C { \log^{7/2} n \over n^{ 5/2}}. \end{equation} \end{remark} Let ${\mathbb E \,}_{\!L}$, ${\mathbb P}_{\!L}$ (resp. ${\mathbb E \,}_{(V,\e)}$, ${\mathbb P}_{(V,\e)}$) stand for the integration symbols and probability symbols relative to the $\s$-algebra generated by the sequence $\{L_j , j=1, \ldots, n\}$ (resp. $\{(V_j,\e_j), j=1, \ldots, n\}$). \vskip 2 pt Assume from now on that the $X_j$'s are square integrable.
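Before continuing with the proof strategy, the extraction device behind (\ref{dec0}) can be illustrated by a short simulation. The precise construction is the object of Lemma \ref{dec} and is not reproduced here; the sketch below uses one commonly encountered version of the device, whose details (as well as the particular law and sample size) are assumptions of the sketch only: writing $\nu_k=f(v_k)\wedge f(v_{k+1})$ and $\t_X=\sum_k\nu_k$, one sets ${\mathbb P}\{V=v_k,\e=1\}=\nu_k$, ${\mathbb P}\{V=v_k,\e=0\}=f(v_k)-(\nu_{k-1}+\nu_k)/2\ge 0$, and puts $X\buildrel{\mathcal D}\over{=}V+D\e L$ with $L$ an independent Bernoulli$(1/2)$ variable.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# An integer lattice law (v_0 = 0, D = 1); an arbitrary illustrative choice.
f  = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}
nu = {k: min(f.get(k, 0.0), f.get(k + 1, 0.0)) for k in range(-1, 4)}
tau = sum(nu.values())                    # weight of the Bernoulli part

# Joint law of (V, eps):  P{V=k, eps=1} = nu_k,
#                         P{V=k, eps=0} = f(k) - (nu_{k-1} + nu_k)/2.
pairs, probs = [], []
for k in f:
    pairs.append((k, 1)); probs.append(nu[k])
    pairs.append((k, 0)); probs.append(f[k] - (nu[k - 1] + nu[k]) / 2)
pairs = np.array(pairs)

m = 200000
idx = rng.choice(len(pairs), size=m, p=probs)
V, eps = pairs[idx, 0], pairs[idx, 1]
L = rng.integers(0, 2, size=m)            # independent Bernoulli(1/2)
X = V + eps * L                           # D = 1, so X = V + D*eps*L

print('tau_X =', tau)
for k in f:                               # empirical law of X versus f
    print(k, f[k], float(np.mean(X == k)))
\end{verbatim}
The empirical frequencies reproduce $f$, as they must, since with this construction ${\mathbb P}\{X=v_k\}={\mathbb P}\{V=v_k,\e=0\}+\frac12{\mathbb P}\{V=v_k,\e=1\}+\frac12{\mathbb P}\{V=v_{k-1},\e=1\}=f(v_k)$.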
The estimation of $${\mathbb P} \{S_n =\kappa \} = {\mathbb E \,}_{(V,\e)} {\mathbb P}_{\!L} \big\{D M_n =\kappa-W_n \big\} $$ relies upon the conditional sum $S_n'= {\mathbb E \,}_L S_n= W_n + \frac{D}{2} B_n $, which verifies \begin{eqnarray*} {\mathbb E \,} S'_n= {\mathbb E \,} S_n, \qq {\mathbb E \,} (S'_n) ^2= {\mathbb E \,} S_n ^2- \frac{D^2\Theta_n }{4} . \end{eqnarray*} Set \begin{eqnarray*} H_n&=& \sup_{x\in {\mathbb R}} \big|{\mathbb P}_{(V,\e)} \big\{{S'_n-{\mathbb E \,}_{(V,\e)}S'_n \over \sqrt{{\rm Var}(S'_n)} }<x\big\} - {\mathbb P}\{g<x\} \big|\cr \rho_n(h)&=& {\mathbb P}\Big\{\big|\sum_{j=1}^n \e_j-\Theta_n\big|>h\Theta_n\Big\} , \end{eqnarray*} where $\e_1,\ldots,\e_n$ are independent random variables verifying ${\mathbb P}\{\e_j=1\}= 1-{\mathbb P}\{\e_j=0\}=\t_j$, $0<\t_j\le \t_{X_j}$, $j=1,\ldots, n$ and $\Theta_n=\sum_{j=1}^n \t_j$. As $S_n'= {\mathbb E \,}_L S_n$, suitable moment conditions permit to easily estimate $H_n$ by using Berry-Esseen estimates. And concentration inequalities (Lemma \ref{primo}) provide sharp estimates of $\rho_n(h)$. \vskip 3 pt We are now ready to state our main result. Let $ C_1= \max (4,C_0 )$. \begin{theorem} \label{ger1} For any $0<h<1$, $0<\t_j\le \t_{X_j}$, and all $\k\in \mathcal L( v_{ 0}n,D )$ \begin{eqnarray*} {\mathbb P}\{S_n =\k\} &\le & \Big( \frac{1+ h }{ 1-h}\Big) \, { D \over \sqrt{2 \pi {\rm Var}(S_n) } } e^{-\frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1 + h){\rm Var}(S_n) } } \cr & &\quad + {C_1 \over \sqrt{ (1-h)\Theta_n} } \big(H_n + \frac{1}{(1-h)\Theta_n} \big) + \rho_n(h) . \end{eqnarray*} And \begin{eqnarray*} \qq \quad {\mathbb P} \{S_n =\kappa \} &\ge & \Big(\frac{ { 1- h }}{ { 1 +h }}\Big) { D \over \sqrt{2\pi {\rm Var}(S_n) }} { e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1-h){\rm Var}(S_n) } }} - \cr & &\ {C_1 \over \sqrt{ (1-h)\Theta_n} } \big(H_n + \frac{1}{(1-h)\Theta_n} + 2\rho_n(h) \big) - \rho_n(h). \end{eqnarray*} \end{theorem} \begin{corollary} \label{ger2} Assume that $\frac{ \log \Theta_n }{\Theta_n}\le {1}/{14} $. Then, for all $\k\in \mathcal L( v_{ 0}n,D )$ such that $$\frac{(\k- {\mathbb E \,} S_n)^2}{ {\rm Var}(S_n) } \le \big({\frac { \Theta_n} {14 \log \Theta_n} }\big)^{1/2} ,$$ we have \begin{eqnarray*} \Big| {\mathbb P} \{S_n =\kappa \} -{ D e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } \over \sqrt{2\pi {\rm Var}(S_n) }} \Big| & \le & C_2\Big\{ D\big({ { \log \Theta_n } \over { {\rm Var}(S_n) \Theta_n} } \big)^{1/2} + { H_n + \Theta_n^{-1} \over \sqrt{ \Theta_n} } \Big\} . \end{eqnarray*} And $C_2=12(C_1+1)$. \end{corollary} \begin{remark}\label{123} Assume that $$\lim_{n\to\infty} \big(\frac{ {\rm Var}(S_n) }{ \Theta_n }\big)^{1/2} \big( H_n + \frac{ 1 }{ \Theta_n }\big) =0. $$ This is for instance satisfied if $${\rm (i)}\ \lim_{n\to\infty} {\rm Var}(S_n) =\infty, \qq {\rm (ii)}\ \lim_{n\to\infty} H_n =0, \qq {\rm (iii)}\ \limsup_{n\to\infty} \frac{ {\rm Var}(S_n) }{ \Theta_n } <\infty, $$ Then \begin{eqnarray*} \lim_{n\to\infty} \sup_{\frac{(\k- {\mathbb E \,} S_n)^2}{ {\rm Var}(S_n) } \le ({\frac { \Theta_n} {14 \log \Theta_n} } )^{1/2}}\Big| \sqrt{ {\rm Var}(S_n) }{\mathbb P} \{S_n =\kappa \} -{ D e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } \over \sqrt{2\pi }} \Big| & = & 0. 
\end{eqnarray*} Now if $\frac{(\k- {\mathbb E \,} S_n)^2}{ {\rm Var}(S_n) } > ({\frac { \Theta_n} {14 \log \Theta_n} } )^{1/2}$, then $$e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } }\ll \frac{ 2 {\rm Var}(S_n) }{(\k- {\mathbb E \,} S_n)^2} \le ({\frac {14 \log \Theta_n} { \Theta_n} } )^{1/2}.$$ And $${\mathbb P} \{S_n =\kappa \}\le {\mathbb P} \big\{\frac{|S_n- {\mathbb E \,} S_n|}{ \sqrt{{\rm Var}(S_n) } } \ge \frac{|\k- {\mathbb E \,} S_n|}{ \sqrt{{\rm Var}(S_n) } } \big\}\le ({\frac {14 \log \Theta_n} { \Theta_n} } )^{1/2}.$$ Hence $$\sqrt{ {\rm Var}(S_n) }{\mathbb P} \{S_n =\kappa \}\le ({\frac {14 {\rm Var}(S_n)\log \Theta_n} { \Theta_n} } )^{1/2} . $$ We deduce that if $$\lim_{n\to\infty} {\frac { {\rm Var}(S_n)\log \Theta_n} { \Theta_n} } =0, $$ then \begin{eqnarray*} \lim_{n\to\infty} \sup_{ \k\in \mathcal L( v_{ 0}n,D ) }\Big| \sqrt{ {\rm Var}(S_n) }{\mathbb P} \{S_n =\kappa \} -{ D e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } \over \sqrt{2\pi }} \Big| & = & 0. \end{eqnarray*} \end{remark} \vskip 6pt Let $\psi:{\mathbb R}\to {\mathbb R}^+$ be even, convex and such that $\frac {\psi(x)}{x^2}$ and $\frac{x^3}{\psi(x)}$ are non-decreasing on ${\mathbb R}^+$ and assume now \begin{equation}\label{did} {\mathbb E \,} \psi( X_j )<\infty . \end{equation} Put $$L_n=\frac{ \sum_{j=1}^n{\mathbb E \,} \psi (X_j) } { \psi (\sqrt { {\rm Var}(S_n )})} .$$ Then Corollary \ref{ger2} can be strengthened as follows. \begin{corollary} \label{ger3} Assume that $\frac{ \log \Theta_n }{\Theta_n}\le {1}/{14} $. Then, for all $\k\in \mathcal L( v_{ 0}n,D )$ such that $$\frac{(\k- {\mathbb E \,} S_n)^2}{ {\rm Var}(S_n) } \le \sqrt{\frac{7 \log \Theta_n} {2\Theta_n}},$$ we have \begin{eqnarray*} \Big| {\mathbb P} \{S_n =\kappa \} -{ D e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } \over \sqrt{2\pi {\rm Var}(S_n) }} \Big| & \le & C_3\Big\{ D\big({ { \log \Theta_n } \over { {\rm Var}(S_n) \Theta_n} } \big)^{1/2} + { L_n + \Theta_n^{-1} \over \sqrt{ \Theta_n} } \Big\} . \end{eqnarray*} And $C_3=\max (C_2, 2^{ 3/2}C_{{\rm E}}) $, $C_{{\rm E}}$ being an absolute constant arising from Esseen's inequality. \end{corollary} \section{Preliminaries.} \subsection{ \gsec Characteristics of a random variable.} Let $X$ be a random variable such that ${\mathbb P}\big(X \in {\mathcal L}(v_0,D)\big)=1$ and recall according to (\ref{vartheta}) that $$ \t_X =\sum_{k\in {\mathbb Z}}{\mathbb P}\{X=v_k\}\wedge{\mathbb P}\{X=v_{k+1}\} . $$ Then \begin{eqnarray}\label{vartheta1} 0\le \t_X<1 .\end{eqnarray} Indeed, let $k_0$ be some integer such that $f(k_0)>0$, where $f(k)={\mathbb P}\{X=v_k\}$. Then $$\sum_{k=k_0}^\infty f(k)\wedge f(k+1)\le \sum_{k=k_0}^\infty f(k+1)=\sum_{k=k_0+1}^\infty f(k ) , $$ and so $0\le \t_X\le \sum_{k< k_0 } f(k) +\sum_{k=k_0+1}^\infty f(k )<1$. \vskip 2 pt Now assume that $X$ has finite mean $\mu$ and finite variance $\sigma^2$. The following inequality, linking the parameters $\s,D,\t_X$, is implicit in our proof (see (\ref{prime})): \begin{eqnarray}\label{ssprime1} \s^2\ge \frac{ D^2 }{4} \t_X. \end{eqnarray} We begin by giving a proof valid for general lattice valued random variables. First, by Tchebycheff's inequality, $${{D^2}\over{4}}{\mathbb P}\big\{|X - \mu|\ge {{D}\over{2}}\big\}\le \sigma^2.$$ Now \begin{eqnarray*} & & {\mathbb P}\big\{|X - \mu|\ge \frac{D} {2} \big\}=\sum_{v_k\ge\mu+ {{D}\over{2}} }{\mathbb P}\{X=v_k\}+ \sum_{v_k\le \mu- {{D}\over{2}} }{\mathbb P}\{X=v_k\} \cr &\ge& \!\!\sum_{v_{k+1}\ge\mu+ {{D}\over{2}} }{\mathbb P}\{X=v_{k}\}\wedge {\mathbb P}\{X=v_{k+1}\} + \!\!\!
\sum_{v_k\le \mu- {{D}\over{2}} }{\mathbb P}\{X=v_k\}\wedge {\mathbb P}\{X=v_{k+1}\} \cr &=& \sum_{v_{k}\ge\mu- {{D}\over{2}} }{\mathbb P}\{X=v_{k}\}\wedge {\mathbb P}\{X=v_{k+1}\} +\!\!\! \sum_{v_k\le \mu- {{D}\over{2}} }{\mathbb P}\{X=v_k\}\wedge {\mathbb P}\{X=v_{k+1}\} \cr&\ge & \t_X. \end{eqnarray*} Hence inequality (\ref{ssprime1}). \begin{remark} In Lemma 2 of Mukhin \cite{Mu}, the following inequality is proved $$ \mathcal D(X,d):=\inf_{a\in {\mathbb R}}{\mathbb E \,} \langle (X-a)d\rangle^2 \ge \frac{ |d|^2 }{4} \t_X, $$ where $d$ is a real number, $|d|\le 1/2$ and $\langle \a \rangle$ is the distance from $\a$ to the nearest integer. Notice that $ \mathcal D(X,d)=0$ if and only if $X$ is lattice with span $1/d$. Let $\p_X(t)= {\mathbb E \,} e^{i t X}$. \end{remark} \subsection{ \gsec Bernoulli component of a random variable} \label{bercomp} Let $X$ be a random variable such that ${\mathbb P}\{X \in\mathcal L(v_0,D)\}=1$. It is not necessary to suppose here that the span $D$ is maximal. Put $$ f(k)= {\mathbb P}\{X= v_k\}, \qq k\in {\mathbb Z} .$$ We assume that \begin{equation}\label{basber1}\t_X>0. \end{equation} Notice that $ \t_X<1$. Indeed, let $k_0$ be some integer such that $f(k_0)>0$. Then $$\sum_{k=k_0}^\infty f(k)\wedge f(k+1)\le \sum_{k=k_0}^\infty f(k+1)=\sum_{k=k_0+1}^\infty f(k ) , $$ and so $ \t_X\le \sum_{k< k_0 } f(k) +\sum_{k=k_0+1}^\infty f(k )<1$. Let $0<\t\le\t_X$. One can associate with $\t$ and $X$ a sequence $ \{ \tau_k, k\in {\mathbb Z}\}$ of non-negative reals such that \begin{equation}\label{basber0} \tau_{k-1}+\tau_k\le 2f(k), \qq \qq\sum_{k\in {\mathbb Z}} \tau_k =\t. \end{equation} Just take $\tau_k= \frac{\t}{\t_X} \, (f(k)\wedge f(k+1)) $. Now define a pair of random variables $(V,\e)$ as follows: \begin{eqnarray}\label{ve} \qq\qq\begin{cases} {\mathbb P}\{ (V,\e)=( v_k,1)\}=\tau_k, \cr {\mathbb P}\{ (V,\e)=( v_k,0)\}=f(k) -{\tau_{k-1}+\tau_k\over 2} . \end{cases}\qq (\forall k\in {\mathbb Z}) \end{eqnarray} By (\ref{basber0}) this is well-defined, and the marginal laws verify \begin{eqnarray}\begin{cases}{\mathbb P}\{ V=v_k\} &= \ f(k)+ {\tau_{k }-\tau_{k-1}\over 2} , \cr {\mathbb P}\{ \e=1\} &= \ \t \ =\ 1-{\mathbb P}\{ \e=0\} . \end{cases}\end{eqnarray} Indeed, ${\mathbb P}\{ V=v_k\}= {\mathbb P}\{ (V,\e)=( v_k,1)\}+ {\mathbb P}\{ (V,\e)=( v_k,0)\}=f(k)+ {\tau_{k }-\tau_{k-1}\over 2} .$ Further ${\mathbb P}\{ \e=1\} =\sum_{k\in{\mathbb Z}} {\mathbb P}\{ (V,\e)=( v_k,1)\}=\sum_{k\in{\mathbb Z}} \tau_{k }=\t $. \vskip 3pt The whole approach is based on the lemma below, the proof of which is given for the sake of completeness. \begin{lemma} \label{bpr} Let $L$ be a Bernoulli random variable with ${\mathbb P}\{L=0\}={\mathbb P}\{L=1\}=1/2$, independent of $(V,\e)$, and put $Z= V+ \e DL$. We have $Z\buildrel{\mathcal D}\over{ =}X$. \end{lemma} \begin{proof} (\cite{MD},\cite{W}) Plainly, \begin{eqnarray*}{\mathbb P}\{Z=v_k\}&=&{\mathbb P}\big\{ V+\e DL=v_k, \e=1\}+ {\mathbb P}\big\{ V+\e DL=v_k, \e=0\} \cr &=&{{\mathbb P}\{ V=v_{k-1}, \e=1\}+{\mathbb P}\{ V=v_k, \e=1\}\over 2} +{\mathbb P}\{ V=v_k, \e=0\} \cr&=& {\tau_{k-1}+ \tau_{k }\over 2} +f(k)-{\tau_{k-1}+ \tau_{k }\over 2} = f(k). \end{eqnarray*} \end{proof}
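The following minimal Python sketch, given only for illustration, implements the construction (\ref{ve}) on the lattice ${\mathbb Z}$ (so $v_k=k$ and $D=1$) for an arbitrary example law, and verifies the conclusion of Lemma \ref{bpr} exactly at the level of the probability mass functions.
\begin{verbatim}
import numpy as np

# Example pmf of X on {0,...,4} (lattice Z, span D = 1).
f = np.array([0.10, 0.25, 0.30, 0.20, 0.15])
theta_X = np.minimum(f[:-1], f[1:]).sum()              # sum_k f(k) ^ f(k+1)
tau = 0.5 * theta_X                                    # any 0 < tau <= theta_X
tau_k = (tau / theta_X) * np.minimum(f[:-1], f[1:])    # tau_k, k = 0,...,3

# Joint law of (V, eps) as in (ve): P{V=k, eps=1} = tau_k,
# P{V=k, eps=0} = f(k) - (tau_{k-1} + tau_k)/2.
tau_pad = np.concatenate(([0.0], tau_k, [0.0]))        # tau_{-1} = tau_4 = 0
p_v1 = tau_pad[1:]                                     # P{V=k, eps=1}
p_v0 = f - (tau_pad[:-1] + tau_pad[1:]) / 2            # P{V=k, eps=0}

# Law of Z = V + eps*L with L Bernoulli(1/2) independent of (V, eps):
# P{Z=k} = P{V=k, eps=0} + (P{V=k, eps=1} + P{V=k-1, eps=1})/2.
p_v1_pad = np.concatenate(([0.0], p_v1))
p_Z = p_v0 + (p_v1 + p_v1_pad[:-1]) / 2
print(np.allclose(p_Z, f))                             # True: Z has the law of X
\end{verbatim}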
Now consider independent random variables $ X_j,j=1,\ldots,n$, each satisfying assumption (\ref{basber1}), and let $0<\t_i\le \t_{X_i}$, $i=1,\ldots, n$. Iterated applications of Lemma \ref{bpr} allow one to associate with them a sequence of independent vectors $ (V_j,\e_j, L_j) $, $j=1,\ldots,n$ such that \begin{eqnarray}\label{dec0} \big\{V_j+\e_jD L_j,j=1,\ldots,n\big\}&\buildrel{\mathcal D}\over{ =}&\big\{X_j, j=1,\ldots,n\big\} . \end{eqnarray} Further the sequences $\{(V_j,\e_j),j=1,\ldots,n\} $ and $\{L_j, j=1,\ldots,n\}$ are independent. For each $j=1,\ldots,n$, the law of $(V_j,\e_j)$ is defined according to (\ref{ve}) with $\t=\t_j$. And $\{L_j, j=1,\ldots,n\}$ is a sequence of independent Bernoulli random variables. Set \begin{equation}\label{dec} S_n =\sum_{j=1}^n X_j, \qq W_n =\sum_{j=1}^n V_j,\qq M_n=\sum_{j=1}^n \e_jL_j, \quad B_n=\sum_{j=1}^n \e_j . \end{equation} \begin{lemma} \label{lemd}We have the representation \begin{eqnarray*} \{S_k, 1\le k\le n\}&\buildrel{\mathcal D}\over{ =}& \{ W_k + DM_k, 1\le k\le n\} . \end{eqnarray*} And $M_n\buildrel{\mathcal D}\over{ =}\sum_{j=1}^{B_n } L_j$. \end{lemma} \section{Proof of Theorem \ref{ger1}} We denote again $X_j= V_j+D\e_jL_j$, $S_n= W_n + DM_n$, $j,\, n\ge 1$, which is justified by the previous representation. \vskip 2 pt \noi {\it Step 1.} ({\it First reduction}) Fix $0<h<1$ and let \begin{eqnarray}\label{dep0}A_n=\Big\{\big| B_n - \Theta_n\big|\le h\Theta_n \Big\}, \qq\qq \rho_n(h)= {\mathbb P}_{(V,\e)}(A_n^c) . \end{eqnarray} For $\k\in \mathcal L(v_0,D)$, \begin{eqnarray}\label{dep} {\mathbb P} \{S_n =\kappa \} &=& {\mathbb E \,}_{(V,\e)} {\mathbb P}_{\!L} \big\{D \sum_{j= 1}^n \e_jL_j =\kappa-W_n \big\} \cr &=&{\mathbb E \,}_{(V,\e)} \Big( \chi(A_n)+\chi(A_n^c)\Big) {\mathbb P}_{\!L} \big\{D \sum_{j= 1}^n \e_jL_j =\kappa-W_n \big\} . \end{eqnarray} Thus \begin{eqnarray}\label{dep0} \Big|{\mathbb P} \{S_n =\kappa \} - {\mathbb E \,}_{(V,\e)} \chi(A_n) {\mathbb P}_{\!L} \big\{D \sum_{j= 1}^n \e_jL_j =\kappa-W_n \big\}\Big|&\le & {\mathbb P}_{(V,\e)}(A_n^c)\cr &= & \rho_n(h) . \end{eqnarray} We have $\sum_{j= 1}^n \e_jL_j\buildrel{\mathcal D}\over{ =}\sum_{j=1}^{B_n } L_j$. In view of Lemma \ref{lltber}, \begin{eqnarray*} \sup_{z}\, \Big| {\mathbb P}_{\!L}\big\{\sum_{j=1}^{N } L_j=z \big\} -{2\over \sqrt{2\pi N}}e^{-{( z-(N/2))^2\over N/2}}\Big| \le {C_0\over N^{3/2}} . \end{eqnarray*} On $A_n$, we have $ (1-h )\Theta_n \le B_n \le (1+h )\Theta_n$. Therefore \begin{eqnarray} \label{dep2} \Big|{\mathbb E \,}_{(V,\e)} \chi(A_n) \Big\{ {\mathbb P}_{\!L} \big\{D \sum_{j= 1}^n \e_jL_j =\kappa-W_n \big\} - {2e^{-{(\kappa-W_n-D(B_n/2))^2\over D^2(B_n/2)}}\over \sqrt{2\pi B_n}} \Big\}\Big|\cr \le C_0\ {\mathbb E \,}_{(V,\e)} \chi(A_n)\cdot B_n^{-3/2}\le \frac{ C_0}{ (1-h )^{ 3/2}} \,\frac{ 1}{(\sum_{i=1}^n \t_i )^{ 3/2} } . \end{eqnarray} And by inserting this into (\ref{dep0}) \begin{eqnarray}\label{dep01} \Big|{\mathbb P} \{S_n =\kappa \} - {\mathbb E \,}_{(V,\e)} \chi(A_n) {2e^{-{(\kappa-W_n-D(B_n/2))^2\over D^2(B_n/2)}}\over \sqrt{2\pi B_n}} \Big| &\le & \frac{ C_0}{ (1-h )^{ 3/2}} \,\frac{ 1}{(\sum_{i=1}^n \t_i )^{ 3/2} } + \rho_n(h) . \end{eqnarray} \vskip 5 pt \noi {\it Step 2.} ({\it Second reduction}) Some elementary algebra is necessary in order to put the exponential term in a more appropriate form. Recall that $S_n= W_n + DM_n$. \begin{lemma}\label{mmprime} Let $S'_n=W_n+D( B_n/2)$. Then \begin{eqnarray*} {\mathbb E \,} S'_n= {\mathbb E \,} S_n, \qq {\mathbb E \,} (S'_n) ^2= {\mathbb E \,} S_n ^2- \frac{D^2\Theta_n }{4} .
\end{eqnarray*} \end{lemma} \begin{proof} At first ${\mathbb E \,} S_n={\mathbb E \,}_{(V,\e)}{\mathbb E \,}_{\!L}\big(W_n+ D\sum_{j=1}^{ n} \e_jL_j\big) = {\mathbb E \,}_{(V,\e)}\big(W_n+ D{ B_n\over 2}\big) ={\mathbb E \,} S'_n $. Further, by using independence, \begin{eqnarray*}{\mathbb E \,}_{(V,\e)} B_n^2&=& \sum_{1\le i,j\le n\atop i\not = j} {\mathbb E \,}_{(V,\e)} \e_i{\mathbb E \,}_{(V,\e)}\e_j +\sum_{1\le i \le n } {\mathbb E \,}_{(V,\e)}\e_i^2 \cr &=& \sum_{1\le i,j\le n\atop i\not = j} \t_{ i}\t_{j} +\sum_{i=1}^n \t_{ i} =\Big(\sum_{i=1}^n \t_{ i}\Big)^2-\sum_{i=1}^n \t_{ i}^2 +\sum_{i=1}^n \t_{ i} , \end{eqnarray*} and \begin{eqnarray*}{\mathbb E \,} \Big(\sum_{j=1}^n \e_j L_j\Big)^2&=& \sum_{1\le i,j\le n\atop i\not = j} {\mathbb E \,}_{(V,\e)}\e_i\e_j {\mathbb E \,}_L L_i{\mathbb E \,}_L L_j +\sum_{i=1}^n {\mathbb E \,}_{(V,\e)}\e_i^2 {\mathbb E \,}_L L_i^2 \cr &=& \frac{1}{4}\sum_{1\le i,j\le n\atop i\not = j} {\mathbb E \,}_{(V,\e)}\e_i\e_j +\frac{1}{2}\sum_{i=1}^n {\mathbb E \,}_{(V,\e)}\e_i^2 = \frac{1}{4}\sum_{1\le i,j\le n\atop i\not = j} \t_{ i}\t_{j} +\frac{1}{2}\sum_{i=1}^n \t_{ i} \cr \cr &=& \frac{1}{4}\Big\{\Big(\sum_{i=1}^n \t_{ i}\Big)^2-\sum_{i=1}^n \t_{ i}^2\Big\} +\frac{1}{2}\sum_{i=1}^n \t_{ i} . \end{eqnarray*} Now \begin{eqnarray*}{\mathbb E \,} S_n ^2 &=&{\mathbb E \,} \big( W_n+ D \sum_{i=1}^n \e_i L_i\big) ^2 \cr &=&{\mathbb E \,}_{(V,\e)} W_n^2 +2D {\mathbb E \,}_{(V,\e)}W_n\Big( \sum_{i=1}^n \e_i {\mathbb E \,}_L L_i\Big)+ D^2 {\mathbb E \,} \Big(\sum_{i=1}^n \e_i L_i\Big)^2 \cr &=& {\mathbb E \,}_{(V,\e)} \Big(W_n^2+2D W_n \big({ B_n\over 2} \big) \Big)+ \frac{D^2}{4}\Big\{\Big(\sum_{i=1}^n \t_{ i}\Big)^2-\sum_{i=1}^n \t_{ i}^2\Big\} +\frac{D^2}{2}\sum_{i=1}^n \t_{ i} . \end{eqnarray*} And \begin{eqnarray*} {\mathbb E \,} (S'_n) ^2&=& {\mathbb E \,}_{(V,\e)} \big(W_n^2+2DW_n \big({ B_n\over 2} \big) \big)+ \frac{D^2}{4} \Big\{\Big(\sum_{i=1}^n \t_{ i}\Big)^2-\sum_{i=1}^n \t_{ i}^2 +\sum_{i=1}^n \t_{ i} \Big\} \cr &=& {\mathbb E \,} S_n ^2- \frac{D^2}{4}\sum_{i=1}^n \t_{ i}. \end{eqnarray*} Hence Lemma \ref{mmprime} is established. \end{proof} We deduce \begin{equation} \label{prime} {\rm Var}(S'_n)={\rm Var}(S_n)- \frac{D^2}{4}\sum_{i=1}^n \t_{ i}= \sum_{i=1}^n\Big(\s_i^2-\frac{D ^2\t_{ i}}{4}\Big) \end{equation} Put \begin{equation*} T_n= {S'_n-{\mathbb E \,}_{(V,\e)}S'_n \over \sqrt{{\rm Var}(S'_n)} }. \end{equation*} As ${\mathbb E \,}_{(V,\e)}S'_n={\mathbb E \,} S_n$ we can write \begin{eqnarray*} {(\kappa-W_n- D(B_n/2))^2\over D^2(B_n/2)} &=& {(\kappa-{\mathbb E \,} S_n -\{S'_n-{\mathbb E \,}_{(V,\e)}S'_n\})^2\over D^2(B_n/2)} \cr & = &{ {\rm Var}(S'_n) \over D^2 (B_n/2) }\Big({\kappa-{\mathbb E \,} S_n \over \sqrt{{\rm Var}(S'_n)} } - T_n \Big)^2 \end{eqnarray*} And (\ref{dep01}) is more conveniently rewritten as \begin{eqnarray}\label{dep21} \Big|{\mathbb P} \{S_n =\kappa \} - \Upsilon_n\Big|&\le & \frac{ C_0}{ (1-h )^{ 3/2}} \,\frac{ 1}{(\sum_{i=1}^n \t_i )^{ 3/2} } + \rho_n(h), \end{eqnarray} where \begin{eqnarray}\label{dep210} \Upsilon_n= {\mathbb E \,}_{(V,\e)} \chi(A_n) {2e^{-{ {\rm Var}(S'_n) \over D^2 (B_n/2) }\big({\kappa-{\mathbb E \,} S_n \over \sqrt{{\rm Var}(S'_n)} } - T_n\big)^2}\over \sqrt{2\pi B_n}} . \cr & & \end{eqnarray} Set for $-1< u\le 1$, $$ Z_n(u)= {\mathbb E \,}_{(V,\e)} e^{-{2{\rm Var}(S'_n)\over D^2 (1 + u ) \Theta_n } \big({\kappa-{\mathbb E \,} S_n \over \sqrt{{\rm Var}(S'_n)} }- T_n\big)^2}. 
$$ Then \begin{eqnarray} \label{dep3} { 2Z_n(-h) - 2\rho_n(h) \over \sqrt{2\pi (1 +h )\Theta_n}}\ \le \ \Upsilon_n &\le &{ 2Z_n(h) \over \sqrt{2\pi (1 - h ) \Theta_n}} . \end{eqnarray} The second inequality is obvious, and the first follows from \begin{eqnarray*} \Upsilon_n &\ge & { 2 \over \sqrt{2\pi (1 +h ) \Theta_n}}{\mathbb E \,}_{(V,\e)} \chi(A_n) e^{-{2{\rm Var}(S'_n)\over D^2 (1 -h ) \Theta_n } \big({\kappa-{\mathbb E \,} S_n \over \sqrt{{\rm Var}(S'_n)} }- T_n\big)^2} \cr &\ge & { 2 \over \sqrt{2 \pi(1 +h ) \Theta_n}}\bigg\{{\mathbb E \,}_{(V,\e)} e^{-{2{\rm Var}(S'_n)\over D^2 (1 -h ) \Theta_n } \big({\kappa-{\mathbb E \,} S_n \over \sqrt{{\rm Var}(S'_n)} }- T_n\big)^2} -{\mathbb P}_{(V,\e)} (A_n^c)\bigg\} \cr &\ge & { 2 Z_n(-h) -2\rho_n(h) \over \sqrt{2\pi (1 +h )\Theta_n}}. \end{eqnarray*} \vskip 5 pt \noi {\it Step 3.} ({\it Exponential moment}) \begin{lemma}\label{tech} Let $Y $ be a centered random variable. For any positive reals $a$ and $b$ \begin{eqnarray*} \Big|{\mathbb E \,} e^{-a(b-Y)^2} - \frac{e^{- \frac{b^2}{2+ 1/a} }}{ \sqrt{1+2a}}\Big|&\le & 4\sup_{x\in {\mathbb R}} \big|{\mathbb P} \{Y<x \} - {\mathbb P}\{g<x\} \big| . \end{eqnarray*}\end{lemma} \begin{proof} By the transfer formula, \begin{eqnarray*} \Big|{\mathbb E \,} e^{-a(b-Y)^2}-{\mathbb E \,} e^{-a(b-g)^2}\Big| & =&\Big|\int_0^1 \Big( {\mathbb P} \big\{e^{-a(b-Y)^2}>x\big\}- {\mathbb P} \big\{e^{-a(b-g)^2}>x\big\} \Big)\dd x\Big| \cr \ (x=e^{-ay^2})\quad &=&2a \Big|\int_0^\infty \Big( {\mathbb P} \big\{|b-Y|<y\big\}-{\mathbb P} \big\{|b-g|<y\big\}\Big) ye^{-ay^2} \dd y\Big| \cr &\le & 4\sup_{x\in {\mathbb R}} \Big|{\mathbb P} \big\{Y<x\big\} - {\mathbb P}\{g<x\} \Big| . \end{eqnarray*} The claimed estimate follows from\begin{equation}\label{estexp}{\mathbb E \,} e^{-a(b-g)^2}= \frac{e^{- \frac{b^2}{2+ 1/a} }}{ \sqrt{1+2a}} .\end{equation} \end{proof} We apply Lemma \ref{tech} to estimate $Z_n(u)$. Here $ a= {2{\rm Var}(S'_n)\over D^2 (1+u ) \Theta_n}$, $b= \frac{\k- {\mathbb E \,} S_n}{\sqrt{{\rm Var}(S'_n)}} $. Since by (\ref{prime}), ${\rm Var}(S'_n)= {\rm Var}(S_n)- \frac{ D^2 \Theta_n}{4}$, we have \begin{eqnarray*} \frac{b^2}{2+ 1/a} &= &\frac{(\k- {\mathbb E \,} S_n)^2}{ {\rm Var}(S'_n)\big( 2+ \frac{D^2 (1+u )\Theta_n }{2{\rm Var}(S'_n)}\big)} = \frac{(\k- {\mathbb E \,} S_n)^2}{ 2{\rm Var}(S'_n) + \frac{D^2(1+u )\Theta_n }{2 } } \cr & = & \frac{(\k- {\mathbb E \,} S_n)^2}{ 2{\rm Var}(S_n)- \frac{ D^2 \Theta_n}{2} + \frac{D^2 (1+u )\Theta_n }{2 } }=\frac{(\k- {\mathbb E \,} S_n)^2}{ 2{\rm Var}(S_n) + \frac{D^2 u \Theta_n }{2 } } \cr & = & \frac{(\k- {\mathbb E \,} S_n)^2}{ 2{\rm Var}(S_n)(1 + \d(u)) }, \end{eqnarray*} where we have denoted $$\d(u)= \frac{D^2 \Theta_n u}{4 {\rm Var}(S_n) } .$$ Now \begin{eqnarray*}\frac{1}{ \sqrt{1+2a}}& =& \Big(\frac{1}{ {1+ {4{\rm Var}(S'_n)\over D^2 (1+u ) \Theta_n}}}\Big)^{1/2} =\frac{D}{ 2 }\Big(\frac{ { (1+ u )\Theta_n}}{ { {\rm Var}(S_n') +{D^2 (1+ u ) \Theta_n\over 4} }}\Big)^{1/2}\cr &=&\frac{D}{ 2 }\Big(\frac{ { (1+ u )\Theta_n}}{ { {\rm Var}(S_n) +{D^2 u \Theta_n \over 4} }}\Big)^{1/2} =\frac{D}{ 2 }\Big(\frac{ { \Theta_n (1+ u )}}{ {{\rm Var}(S_n) (1+ \d(u) ) }}\Big)^{1/2} .
\end{eqnarray*} This along with Lemma \ref{tech} provides the following bound, \begin{eqnarray} \label{fb} \Big| Z_n(u) - \frac{D}{ 2 }\Big(\frac{ { \Theta_n (1+ u )}}{ {{\rm Var}(S_n) (1+ \d(u) ) }}\Big)^{1/2} e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2{\rm Var}(S_n)(1 + \d(u)) } } \Big| &\le & 4H_n , \end{eqnarray} with $$ H_n=\sup_{x\in {\mathbb R}} \Big|{\mathbb P}_{(V,\e)} \big\{T_n<x\big\} - {\mathbb P}\{g<x\} \Big| .$$ Besides, it follows from (\ref{ssprime1}) that for $h\ge 0$, \begin{eqnarray} \label{fb1} 0\le \d(h) \le h . \end{eqnarray} \vskip 5 pt \noi {\it Step 4.}({\it Conclusion}) Consider the upper bound part. By reporting (\ref{fb}) into (\ref{dep3}) and using (\ref{fb1}), we get \begin{eqnarray*} \Upsilon_n &\le & { 8H_n \over \sqrt{2 \pi (1-h)\Theta_n} } + \Big( \frac{1+ h }{ 1-h}\Big) \, { D \over \sqrt{2 \pi {\rm Var}(S_n) } } e^{-\frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1 + h){\rm Var}(S_n) } } . \end{eqnarray*} And by combining with (\ref{dep21}), \begin{eqnarray} \label{dep5} {\mathbb P}\{S_n =\k\} &\le & \Big( \frac{1+ h }{ 1-h}\Big) \, { D \over \sqrt{2 \pi {\rm Var}(S_n) } } e^{-\frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1 + h){\rm Var}(S_n) } } \cr & &\quad + { 8H_n \over \sqrt{2 \pi (1-h)\Theta_n} } + \frac{ C_0}{ (1-h )^{ 3/2}} \,\frac{ 1}{ \Theta_n ^{ 3/2} } + \rho_n(h) . \end{eqnarray} Similarly, by using (\ref{dep3}), \begin{eqnarray} \label{dep6} \Upsilon_n &\ge & \Big(\frac{ { 1- h }}{ { 1 +h }}\Big) { D \over \sqrt{2\pi {\rm Var}(S_n) }} { e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1-h){\rm Var}(S_n) } }} - {8 H_n+ 2\rho_n(h) \over \sqrt{2\pi (1 +h )\Theta_n}} . \end{eqnarray} By combining with (\ref{dep21}), we obtain \begin{eqnarray}\label{dep5} {\mathbb P} \{S_n =\kappa \} &\ge & \Big(\frac{ { 1- h }}{ { 1 +h }}\Big) { D \over \sqrt{2\pi {\rm Var}(S_n) }} { e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1-h){\rm Var}(S_n) } }} - { 8 H_n+ 2\rho_n(h) \over \sqrt{2\pi (1 +h )\Theta_n}}\cr & &\ -\frac{ C_0}{ (1-h )^{ 3/2}} \,\frac{ 1}{\Theta_n^{ 3/2} } - \rho_n(h), \end{eqnarray} As $ \max ({ 8 /\sqrt{2 \pi } },C_0 )\le C_1 $, we deduce \begin{eqnarray} \label{f1} {\mathbb P}\{S_n =\k\} &\le & \Big( \frac{1+ h }{ 1-h}\Big) \, { D \over \sqrt{2 \pi {\rm Var}(S_n) } } e^{-\frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1 + h){\rm Var}(S_n) } } \cr & &\quad + {C_1 \over \sqrt{ (1-h)\Theta_n} } \big(H_n + \frac{1}{(1-h)\Theta_n} \big) + \rho_n(h) . \end{eqnarray} And \begin{eqnarray}\label{f2} {\mathbb P} \{S_n =\kappa \} &\ge & \Big(\frac{ { 1- h }}{ { 1 +h }}\Big) { D \over \sqrt{2\pi {\rm Var}(S_n) }} { e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1-h){\rm Var}(S_n) } }} - \cr & &\ {C_1 \over \sqrt{ (1-h)\Theta_n} } \big(H_n + \frac{1}{(1-h)\Theta_n} + 2\rho_n(h) \big) - \rho_n(h). \end{eqnarray} This achieves the proof. \section{\bf Proof of Corollary \ref{ger2}} In order to estimate $\rho_n(h)$ we use the following Lemma (\cite{di}, Theorem 2.3) \begin{lemma} \label{primo} Let $X_1, \dots, X_n$ be independent random variables, with $0 \le X_k \le 1$ for each $k$. Let $S_n = \sum_{k=1}^n X_k$ and $\mu = E[S_n]$. Then for any $\epsilon >0$, \begin{eqnarray*} (a) && {\mathbb P}\big\{S_n \ge (1+\epsilon)\mu\big\} \le e^{- \frac{\epsilon^2\mu}{2(1+ \epsilon/3) } } . \cr (b) & &{\mathbb P}\big\{S_n \le (1-\epsilon)\mu\big\}\le e^{- \frac{\epsilon^2\mu}{2}}. 
\end{eqnarray*} \end{lemma} By (a) and (b), and observing that $ e^{- \frac{\epsilon^2\mu}{2}}\le e^{- \frac{\epsilon^2\mu}{2(1+ \epsilon/3)}}$, we obtain \begin{eqnarray*} \rho_n(h) &=& {\mathbb P}\big\{\big|\sum_{k=1}^n \e_k- \Theta_n\big|> h \Theta_n\big\} ={\mathbb P}\big\{\sum_{k=1}^n \e_k>(1+ h) \Theta_n\big\}+ {\mathbb P}\big\{\sum_{k=1}^n \e_k<(1- h) \Theta_n\big\} \cr &\le & 2 e^{- \frac{h^2\Theta_n}{2(1+ h/3)}}. \end{eqnarray*} Let $ h_n=\sqrt{\frac{7 \log \Theta_n} {2\Theta_n}}$. By assumption $\frac{ \log \Theta_n }{\Theta_n}\le {1}/{14} $. Thus $h_n\le 1/2$ and so $\frac{h_n^2\Theta_n}{2(1+ h_n/3)}\ge {(3/2)\log \Theta_n} $. It follows that \begin{eqnarray*} \rho_n(h) &\le & 2 \Theta_n^{- 3/2}. \end{eqnarray*} Further \begin{eqnarray*} {C_1 \over \sqrt{ (1-h)\Theta_n} } \big(H_n + \frac{1}{(1-h)\Theta_n} \big) + \rho_n(h) &\le & 2^{ 1/2}C_1{ H_n \over \sqrt{ \Theta_n} } + {2^{ 3/2}C_1 +2 \over \Theta_n^{ 3/2} } . \end{eqnarray*} Let $C_2=2^{ 3/2 } 3(C_1+1) $. Therefore \begin{eqnarray*} {\mathbb P}\{S_n =\k\} &\le & { D ( 1+4h_n )\over \sqrt{2 \pi {\rm Var}(S_n) } } e^{-\frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1 + h_n){\rm Var}(S_n) }} + {C_2 \over \sqrt{ \Theta_n} }\big( H_n + {1 \over \Theta_n } \big) . \end{eqnarray*} Besides \begin{eqnarray*} & & {C_1 \over \sqrt{ (1-h)\Theta_n} } \big(H_n + \frac{1}{(1-h)\Theta_n} + 2\rho_n(h) \big) + \rho_n(h) \cr &\le & 2^{1/2}C_1 { H_n \over \sqrt{ \Theta_n} } + {2 (3.2^{1/2}C_1 +1) \over \Theta_n^{ 3/2} }\le {C_2 \over \sqrt{ \Theta_n} } \big( H_n + { 1 \over \Theta_n } \big). \end{eqnarray*} Consequently, \begin{eqnarray*} {\mathbb P} \{S_n =\kappa \} &\ge & { D(1-2h_n) \over \sqrt{2\pi {\rm Var}(S_n) }} { e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1-h_n){\rm Var}(S_n) } }} - { C_2 \over \sqrt{ \Theta_n} } \big(H_n + {1 \over \Theta_n }\big) . \end{eqnarray*} If $$ \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } \le \frac{ 1+h_n }{ h_n} , $$ then by using the inequalities $e^u\le 1+3u$ and $Xe^{-X}\le e^{-1}$ valid for $0\le u\le 1$, $X\ge 0$, we get \begin{eqnarray*} e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1-h_n){\rm Var}(S_n) } } &= & e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } e^{\frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) }\frac{ h_n}{ 1+h_n } } \cr&\le & e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } \Big\{ 1+ 3\frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) }\frac{ h_n}{ 1+h_n } \Big\} \cr&\le & e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } }+ \frac{ 3h_n}{ e(1+h_n ) } \le e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } }+ 2h_n . 
\end{eqnarray*} Hence, \begin{eqnarray*} { D ( 1+4h_n )\over \sqrt{2 \pi {\rm Var}(S_n) } } e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1-h_n){\rm Var}(S_n) } } &\le & { D ( 1+4h_n )\over \sqrt{2 \pi {\rm Var}(S_n) } } \big\{ e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } }+ 2h_n \big\} \cr&\le & { D e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } \over \sqrt{2 \pi {\rm Var}(S_n) } } +{ 4h_n D \over \sqrt{2 \pi {\rm Var}(S_n) } }+ { 4h_n D ( 1+2h_n )\over \sqrt{2 \pi {\rm Var}(S_n) } } \cr&\le & { D e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } \over \sqrt{2 \pi {\rm Var}(S_n) } } +{ 16h_n D \over \sqrt{2 \pi {\rm Var}(S_n) } } .\end{eqnarray*} Therefore, recalling that $ h_n=\sqrt{\frac{7 \log \Theta_n} {2\Theta_n}}$, \begin{eqnarray*} {\mathbb P}\{S_n =\k\} - { D e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } \over \sqrt{2 \pi {\rm Var}(S_n) } } &\le & { 16h_n D \over \sqrt{2 \pi {\rm Var}(S_n) } } + C_2\, { H_n + \Theta_n^{-1} \over \sqrt{ \Theta_n} } \cr &\le & C_3\Big\{ D\big({ { \log \Theta_n } \over { {\rm Var}(S_n) \Theta_n} } \big)^{1/2} + { H_n + \Theta_n^{-1} \over \sqrt{ \Theta_n} } \Big\} . \end{eqnarray*} since $ 8\sqrt{ 7 / \pi } \le C_2$. Similarly, if $$ \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } \le \frac{ 1 }{2 h_n} , $$ {\rm then \begin{eqnarray*} e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1-h_n){\rm Var}(S_n) } } &\ge & e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } e^{-h_n\frac{(\k- {\mathbb E \,} S_n)^2}{ {\rm Var}(S_n) } } \ge e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } \Big\{ 1- 3h_n\frac{(\k- {\mathbb E \,} S_n)^2}{ {\rm Var}(S_n) } \Big\} \cr&\ge & e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } }- \frac{ 6h_n}{ e } \ge e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } }- 3h_n , \end{eqnarray*} where we used the inequality $\frac{1}{1+3u}\ge 1-3u $. Hence, \begin{eqnarray*} { D(1-2h_n) \over \sqrt{2\pi {\rm Var}(S_n) }} { e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2(1-h_n){\rm Var}(S_n) } }} &\ge & { D(1-2h_n) \over \sqrt{2\pi {\rm Var}(S_n) }} \big\{ e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } }- 3h_n \big\} \cr &\ge & { D \over \sqrt{2\pi {\rm Var}(S_n) }} e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } -{5h_n D \over \sqrt{2\pi {\rm Var}(S_n) }} .\end{eqnarray*} Consequently, \begin{eqnarray*} {\mathbb P} \{S_n =\kappa \} -{ D \over \sqrt{2\pi {\rm Var}(S_n) }} e^{- \frac{(\k- {\mathbb E \,} S_n)^2}{ 2 {\rm Var}(S_n) } } &\ge & -{5h_n D \over \sqrt{2\pi {\rm Var}(S_n) }} - { C_2 \over \sqrt{ \Theta_n} } \big(H_n + {1 \over \Theta_n }\big) \cr &\ge &-C_3\Big\{ D\big({ { \log \Theta_n } \over { {\rm Var}(S_n) \Theta_n} } \big)^{1/2} + { H_n + \Theta_n^{-1} \over \sqrt{ \Theta_n} } \Big\} . \end{eqnarray*} \section{\bf Proof of Corollary \ref{ger3}} By using the generalization of Esseen's inequality given in \cite{[P]}, Theorem 5 p.112, we have \begin{eqnarray}\label{esseen}\sup_{x\in {\mathbb R}} \big|{\mathbb P} \{T_n<x \} - {\mathbb P}\{g<x\} \big|&\le & \frac{C_{{\rm E}} }{ \psi (\sqrt {{\rm Var}(S_n')})} \sum_{j=1}^n{\mathbb E \,} \psi (\overline \xi_j ) . \end{eqnarray} And the constant $C_{{\rm E}}$ is numerical. Let $\xi_j={\mathbb E \,}_{L} X_j= V_j + (D/2) \e_j$, $\overline \xi_j= \xi_j -{\mathbb E \,}_{(V,\e)}\xi_j$. By assumption $\psi(x)$ is convex and $\frac{x^3}{\psi(x)}$ is non-decreasing on ${\mathbb R}^+$. Thus $\psi(ax)\ge a^3\psi(x)$ for $0\le a\le 1$, $x\ge 0$. 
By Jensen's inequality (recall that $\psi$ is convex), $${\mathbb E \,} \psi(2\xi_j)={\mathbb E \,}_{(V,\e)}\psi(2 {\mathbb E \,}_{L} X_j) \le {\mathbb E \,} \psi(2X_j) .$$ Thus \begin{eqnarray*}{\mathbb E \,} \psi(\overline\xi_j )&\le &\frac{1}{2}\big({\mathbb E \,} \psi(2\xi_j ) + {\mathbb E \,} \psi( 2{\mathbb E \,}_{(V,\e)}\xi_j )\big) \le \frac{1}{2}\big({\mathbb E \,} \psi(2X_j) + {\mathbb E \,} \psi(2X_j)\big) \cr &\le & \frac{1}{2}\big(8{\mathbb E \,} \psi( X_j) + 8{\mathbb E \,} \psi( X_j)\big) =8{\mathbb E \,} \psi( X_j ). \end{eqnarray*} Inserting this into (\ref{esseen}) we get \begin{eqnarray}\label{esseen1}H_n &\le &2^{ 3/2}C_{{\rm E}}L_n , \end{eqnarray} recalling that $$L_n=\frac{ \sum_{j=1}^n{\mathbb E \,} \psi (X_j) } { \psi (\sqrt { {\rm Var}(S_n )})} .$$ The conclusion then follows directly from Corollary \ref{ger2}. \section{Gamkrelidze's Local limit theorem.} We indicate in this section how to recover Gamkrelidze's local limit theorem with an effective bound. To this end we restate Lemma 1 of \cite{MD} for the particular case of $S_n= \sum_{k=1}^n X_k$, where $X_k$ are integer--valued and independent. We prove it in greater detail than in the original paper. Let $(a_n)$ and $(b_n)$ be two sequences of real numbers, with $b_n >0$ for every $n$. We denote \begin{equation*}\label{CLT} \rho_n:= \sup_{p,q:p<q}\Big|{\mathbb P}\{p\le S_n \le q\} - \frac{1}{\sqrt{2\pi}}\int_{\frac{p-1-a_n}{ \sqrt{b_n}}}^{\frac{q-a_n}{ \sqrt{b_n} }}e^{\frac{-t^2}{2}}\, dt\Big|. \end{equation*} First we remark that \begin{eqnarray*}& &{\mathbb P}\{p\le S_n \le q\} - \frac{1}{\sqrt{2\pi}}\int_{\frac{p-1-a_n}{ \sqrt{b_n} }}^{\frac{q-a_n}{\sqrt{b_n} }}e^{\frac{-t^2}{2}}\, dt\cr &= &\sum_{h=p}^q\Big\{ {\mathbb P}\{S_n=h\}- \frac{1}{\sqrt{2\pi}} \int_{\frac{h-1-a_n}{\sqrt{b_n} }}^{\frac{h-a_n}{\sqrt {b_n} }}e^{\frac{-t^2}{2}}\, dt\Big\} =\sum_{h=p}^q d_{h,n}, \end{eqnarray*} where \begin{equation*}d_{h,n}:={\mathbb P}\{S_n=h\}- \frac{1}{\sqrt{2\pi}}\int_{\frac{h-1-a_n}{\sqrt {b_n} }}^{\frac{h-a_n}{ \sqrt {b_n} }}e^{-\frac{ t^2}{2}}\, dt.\end{equation*} \begin{proposition} Suppose that \begin{equation}\label{ipotesiMc} \sup_{n \in \mathbb{N}} \ b_ n\Big(\sup_{k \in \mathbb{Z}} \big|{\mathbb P}\{S_n=k+1\}- {\mathbb P}\{S_n=k\}\big|\Big) =M< \infty. \end{equation} Then there exists a constant $C$ depending on $M$ only such that \begin{equation}\label{primaparte} \sup_{k\in \mathbb{Z}}\sqrt{b_n}\big|d_{k,n}\big|\le C\sqrt{\rho_n}. \end{equation} As a consequence\begin{equation}\label{secondaparte} \sup_{k\in \mathbb{Z}}\Big |\sqrt { b_n}{\mathbb P}\{S_n=k\}-\frac{1}{\sqrt{2\pi}}e^{-\frac{(k-a_n)^2}{2b_n}}\Big |\le C\sqrt{\rho_n} + \frac{1}{\sqrt{2\pi e} \sqrt {b_n} }.\end{equation} \end{proposition} The value of $C$ is made explicit in the course of the proof.\begin{proof} Put \begin{equation*}\ell_{k,n}:= \frac{1}{\sqrt{2\pi}}\int_{\frac{k-1-a_n}{\sqrt{b_n}}}^{\frac{k-a_n}{\sqrt{b_n}}}e^{\frac{-t^2}{2}}\, dt,\end{equation*} and observe that\begin{align*} \big|\ell_{k+1,n} - \ell_{k,n}\big|=\frac{1}{\sqrt{2\pi}\sqrt{b_n}}\Big|e^{-\frac{\xi_k^2}{2}}-e^{-\frac{\eta_k^2}{2}}\Big|,\end{align*} with $\frac{k-1-a_n}{ \sqrt{b_n} }\le \xi_k \le \frac{k-a_n}{ \sqrt{b_n} } \le \eta_k \le \frac{k+1-a_n}{ \sqrt{b_n} } $; and by Lagrange's Theorem \begin{equation*} \Big|e^{-\frac{\xi_k^2}{2}}-e^{-\frac{\eta_k^2}{2}}\Big|=|\xi_k - \eta_k|\cdot \big|\theta_k e^{-\frac{\theta_k^2}{2}}\big|\le \frac{2}{\sqrt{e b_ n}}, \end{equation*} with $\xi_k\le \theta_k\le \eta_k$ and $ \sup_{z\in \mathbb{R}}\big|ze^{-\frac{z^2}{2}}\big|=e^{-1/2}$.
Hence \begin{equation}\label{stima} \big|\ell_{k+1,n} - \ell_{k,n}\big|\le \Big( \sqrt{\frac{2}{ e\pi}}\Big)\frac{1} {b_n}. \end{equation} Now we write \begin{eqnarray*} \label{prima} & & d_{k,n}={\mathbb P}\{S_n=k\}-\ell_{k,n} \cr &=&\big\{{\mathbb P}\{S_n=k\}- {\mathbb P}\{S_n=k+1\}\big\}+ \big\{\ell_{k+1,n} - \ell_{k,n}\big\} + d_{k+1,n} \cr &\le & \sup_{k \in \mathbb{Z}}\big|{\mathbb P}\{S_n=k+1\}- {\mathbb P}\{S_n=k\}\big|+\sup_{k \in \mathbb{Z}}\big|\ell_{k+1,n} - \ell_{k,n}\big|+ d_{k+1,n} \le \frac{R}{b_n}+d_{k+1,n}. \end{eqnarray*} where we denote $$R:=M+\sqrt{\frac{2}{ e\pi}} .$$ Similarly we also have \begin{eqnarray} \label{seconda} d_{k,n}&\le& \frac{R}{b_n}+ d_{k-1,n} . \end{eqnarray} Using induction we find \begin{equation*} d_{h,n}\le \frac{R(k-h)}{b_n}+d_{k,n} \qquad h<k \end{equation*} \begin{equation*} d_{k,n}\le \frac{R(k-h)}{b_n}+d_{h,n} \qquad h<k; \end{equation*} putting together we have found \begin{equation}\label{insieme} \big|d_{h,n}-d_{k,n} \big|\le \frac{R|k-h|}{b_n} \qquad \forall h,k. \end{equation} \noindent We show that for every $\delta>0 $, for every $n$ and for every $k$, \begin{equation*} 4R \rho_n< \delta^2 \Longrightarrow \sqrt{b_n}|d_{k,n}|<\delta, \end{equation*} thus proving \eqref{primaparte} with $C= 2 \sqrt R$. Assume the contrary, i.e. there exist $\delta>0$, an integer $k_0$ and a positive integer $n_0$ such that $$ 4R\rho_{n_0}< \delta^2, \quad \hbox{but }\quad \sqrt {b_{n_0}}\cdot\big|d_{k_0,n_0}\big|\ge \delta.$$ To fix ideas, assume that $\sqrt {b_{n_0}} \cdot d_{k_0,n_0}\ge \delta$ . Consider the set of integers \begin{align*} A&= \Big\{h \in \mathbb{Z}:\frac{R|k_0-h|}{b_{n_0}}\le \frac{\delta}{2 \sqrt{b_{n_0}}}\Big\}= \Big\{h \in \mathbb{Z}:k_0-\frac{\delta \sqrt{b_{n_0}}}{2R}\le h \le k_0+\frac{\delta \sqrt{b_{n_0}}}{2R}\Big\}. \end{align*} From \begin{align*} {\rm card}\Big([r-\alpha, r+\alpha]\cap\mathbb{Z}\Big)= 2\alpha +1 - 2\{\alpha\}\ge \alpha, \qquad \hbox{($\alpha \in \mathbb{R}^+$ and $r \in \mathbb{Z}$)} \end{align*} we get \begin{equation}\label{cardinalita} {\rm card}(A)\ge \frac{\delta \sqrt{b_{n_0}}}{2R}, \end{equation} and by \eqref{insieme}, for every $h \in A$ \begin{equation*} \frac{\delta }{\sqrt{b_{n_0}}}\le d_{k_0,n_0}\le \big| d_{k_0,n_0}-d_{h,n_0}\big|+ d_{h,n_0} \le \frac{R|k_0-h|}{b_{n_0}}+ d_{h,n_0}\le \frac{\delta}{2 \sqrt{b_{n_0}}}+ d_{h,n_0}, \end{equation*} which implies \begin{equation} \label{minorazione} d_{h,n_0}\ge \frac{\delta}{2 \sqrt{b_{n_0}}}.\end{equation} Hence, by \eqref{cardinalita} and \eqref{minorazione}, \begin{align*} & 4R\rho_{n_0}= 4R\cdot \sup_{p,q: p<q}\Big|\sum_{h=p}^qd_{h,n_0}\Big|\ge 4R\cdot \Big|\sum_{h=p_0}^{q_0}d_{h,n_0}\Big|=4R\cdot \Big(\sum_{h=p_0}^{q_0}d_{h,n_0}\Big)\\&= 4R\cdot\Big(\sum_{h\in A}d_{h,n_0}\Big) \ge 4R\cdot\frac{\delta}{2 \sqrt{b_{n_0}}}\cdot card(A) \ge 4R\cdot\frac{\delta}{2 \sqrt{b_{n_0}}}\cdot \frac{\delta \sqrt{b_{n_0}}}{2R}=\delta^2, \end{align*} a contradiction. This proves \eqref{primaparte}. 
In order to prove \eqref{secondaparte} we write (for a suitable $\xi_k \in (k-1,k)$) \begin{align*} &\Big|\sqrt {b_n}{\mathbb P}\{S_n=k)-\frac{1}{\sqrt{2\pi }}e^{-\frac{(k-a_n)^2}{2b_n}}\Big|\le \sqrt{b_ n} |d_{k,n}|+\Bigg|\frac{\sqrt{b_n}}{\sqrt{2\pi}}\int_{\frac{k-1-a_n}{ \sqrt{b_n}}}^{\frac{k-a_n}{ \sqrt{b_n}}}e^{\frac{-t^2}{2}}\, dt-\frac{1}{\sqrt{2\pi }}e^{-\frac{(k-a_n)^2}{2b_n}}\Bigg|\\&=\sqrt{b_n}|d_{k,n}|+\frac{1}{\sqrt{2\pi }}\Bigg|e^{-\frac{(\xi_k-a_n)^2}{2b_n}}-e^{-\frac{(k-a_n)^2}{2b_n}}\Bigg|\le \sqrt {b_n} |d_{k,n}|+\frac{1}{\sqrt{2\pi }}\cdot \frac{|\xi_k-k|}{ \sqrt {b_n}}\sup_{z\in \mathbb{R}}\big|ze^{-\frac{z^2}{2}}\big|\\&\le \sqrt {b_n} |d_{k,n}|+\frac{1}{\sqrt{2\pi e} \sqrt {b_n}}.\end{align*} \end{proof} Now we estimate $M$ in (\ref{ipotesiMc}) by using (\ref{dep01}) which we recall \begin{eqnarray*} & &\Big |{\mathbb P}\{S_n=k)-{\mathbb E \,}_{(V,\epsilon)}\Big[{\bf 1}_{A_n}\cdot \frac{2}{\sqrt{2\pi B_n}}e^{- \frac{(k-W_n -D\frac{B_n}{2})^2}{D^2 \frac{B_n}{2}}}\Big]\Big|\cr &\le & 2e^{- \frac{h^2 \Theta_n}{2(1+h/3))}}+ \frac{C}{(1-h)^{\frac{3}{2}}\Theta_n^{3/2}}. \end{eqnarray*} Thus \begin{eqnarray*} & & \Big|{\mathbb P}\{S_n=k)-{\mathbb P}\{S_n=k+1)\Big| \ \le \ 2e^{- \frac{h^2 \Theta_n}{2(1+h/3))}}+ \frac{2C}{(1-h)^{\frac{3}{2}}\Theta_n^{3/2}}+ \cr & &\quad {\mathbb E \,}_{(V,\epsilon)}\Big[{\bf 1}_{A_n}\cdot \frac{2}{\sqrt{2\pi B_n}}\Big\{e^{- \frac{(k+1-W_n -D\frac{B_n}{2})^2}{D^2 \frac{B_n}{2}}}-e^{- \frac{(k-W_n -D\frac{B_n}{2})^2}{D^2 \frac{B_n}{2}}}\Big\}\Big]\Big| , \end{eqnarray*} and recalling that on $A_n$ we have $(1-h)\Theta_n \le B_n \le (1+h)\Theta_n$ we obtain \begin{eqnarray*}& &\Big|{\mathbb E \,}_{(V,\epsilon)}\Big[{\bf 1}_{A_n}\cdot \frac{2}{\sqrt{2\pi B_n}}\Big\{e^{- \frac{(k+1-W_n -D\frac{B_n}{2})^2}{D^2 \frac{B_n}{2}}}-e^{- \frac{(k-W_n -D\frac{B_n}{2})^2}{D^2 \frac{B_n}{2}}}\Big\}\Big]\Big|\cr &\le & \Big|{\mathbb E \,}_{(V,\epsilon)}\Big[{\bf 1}_{A_n}\cdot \frac{2}{\sqrt{2\pi B_n}}\cdot \frac{\sqrt 2}{D \sqrt{e B_n}}\Big]\Big|\cr & \le & \frac{2}{\sqrt{\pi e}(1-h)\Theta_n}. \end{eqnarray*} We conclude that \begin{eqnarray} & &b_n\sup_{k \in \mathbb{Z}}\Big|{\mathbb P}\{S_n=k)-{\mathbb P}\{S_n=k+1)\Big|\cr &\le & 2b_ne^{- \frac{h^2 \Theta_n}{2(1+h/3))}}+ \frac{2Cb_n}{(1-h)^{\frac{3}{2}}\Theta_n^{3/2}}+\frac{2b_n}{\sqrt{\pi e}(1-h)\Theta_n}, \end{eqnarray} which is bounded if we assume that \begin{equation}\label{ipotesiM} \limsup_{n \in \mathbb{N}}\frac{b_n}{\Theta_n}< \infty. \end{equation} In particular, in the case $b_n= {\rm Var} (S_n)$, assumption \eqref{ipotesiM} is exactly assumption (iii) in Remark \ref{123}. For \eqref{ipotesiM} to hold, it suffices to assume that $$\inf_j \vartheta_{X_j} >0, \qquad \sup_j{\rm Var}(X_j) < \infty. $$ \begin{remark} Assume that we have an effective bound for $\rho_n$ (as it happens with the Berry--Esseen theorems); in such a case from \eqref{secondaparte} we automatically get an effective bound for \begin{equation*} \sup_{k\in \mathbb{Z}}\Bigg|\sqrt { b_n}{\mathbb P}\{S_n=k)-\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(k-a_n)^2}{2b_n}}\Bigg|.\end{equation*} \end{remark} \section{Application to Random Walks in Random Scenery.} Let $X=\{ X_j, j\ge 1\}$ be a sequence of i.i.d. square integrable random variables taking values in a lattice $\mathcal L(v_{ 0},D )$. Suppose we are given another sequence $U=\{ U_j, j\ge 1\}$ of integer--valued random variables, independent from $X$. 
We form the sequence of sums $$S=\{ S_n, n\ge 1\}\qq {\rm where } \qq S_n =\sum_{k=1}^n X_{U_k} .$$ This defines a random walk in random scenery, the walk being given by the sequence $U$ and the scenery by the sequence $X$. We establish an effective Local Limit Theorem for the sequence $S$. In a first step, we prove the analog of Theorem \ref{ger1} for the sequence $S$. Next we find a reasonable condition (see \eqref{ipotesir}) under which the Berry--Esseen estimate is applicable. This is due to the surprising fact that, under this condition, the intermediate conditioned sums in the Bernoulli part construction are sums of {\it i.i.d.} random variables. \subsection{Preliminary calculations.} By Lemma \ref{lemd}, $ \{X_j, 1\le j\le n\} \buildrel{\mathcal D}\over{=} \{V_j + D\e_jL_j, 1\le j\le n\} $ where the random variables $ (V_j,\e_j),L_j$, $j=1,\ldots,n $ are mutually independent and $\e_j$, $ L_j $ are independent Bernoulli random variables with ${\mathbb P}\{\e_j=1\}= 1-{\mathbb P}\{\e_j=0\}=\t_j $ and ${\mathbb P}\{L_j =0\}={\mathbb P}\{L_j=1\}=1/2$. We thus denote again $X_j= V_j+D\e_jL_j$, $1\le j\le n$. The Corollary below is thus straightforward. Put $$W_n= \sum_{k=1}^n V_{U_k}, \quad M_n= \sum_{k=1}^n\varepsilon_{U_k} L_{U_k}, \quad B_n= \sum_{k=1}^n\varepsilon_{U_k}.$$ \begin{corollary} For every $n\geqslant 1$ we have the representation $$\{S_k, \, 1 \leqslant k \leqslant n\} \mathop{=}^\mathcal{D}\{W_k+DM_k, \, 1 \leqslant k \leqslant n\}.$$ \end{corollary} \begin{remark}[Local time] We also have that $S_n= \sum_{j=1}^\infty X_j \nu_n(j) $, where $\nu_n(j)$ is the local time of the sequence $(U_j)$, i.e. \begin{eqnarray*}\nu_n(j)&=&\begin{cases}0 &\quad {\rm if} \ U_k\not=j , 1\le k\le n\cr \#\big\{k; 1\le k\le n: U_k=j\big\}& \quad {\rm otherwise}.\end{cases} \end{eqnarray*} And so $S_n= \sum_{j=1}^\infty (V_j + \varepsilon_j D L_j) \nu_n(j) \buildrel{\mathcal D}\over{=}\sum_{k=1}^n V_{U_k}+D\sum_{k=1}^n \varepsilon_{U_k}L_{U_k}$. However, we will not use properties of the local time, although this is the standard route for proving strong laws or local limit theorems. In this regard, our approach is new in the context of random scenery. We will still use the Bernoulli part extraction approach, developing further the algebra inherent in that construction, which in the setting of random scenery turns out to be richer than expected. \end{remark} In what follows, we write $V=\{V_j, j\ge 1\}$, $\varepsilon=\{\varepsilon_j, j\ge 1\}$, $L=\{L_j, j\ge 1\}$.
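As a purely numerical illustration of these objects (it plays no role in the proofs), the following Python sketch simulates $S_n=\sum_{k\le n}X_{U_k}$ for a strictly increasing walk $U$, so that all visited sites are distinct, and compares a few point probabilities of $S_n$ with the Gaussian term $D\,e^{-(\kappa-{\mathbb E \,} S_n)^2/(2{\rm Var}(S_n))}/\sqrt{2\pi {\rm Var}(S_n)}$; the law of $X$ and all parameters are arbitrary examples.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Scenery law: an arbitrary example on {0,1,2,3} (lattice of span D = 1).
vals, probs = np.array([0, 1, 2, 3]), np.array([0.2, 0.4, 0.3, 0.1])
mean_X = (vals * probs).sum()
var_X = ((vals - mean_X) ** 2 * probs).sum()

# For a strictly increasing walk U (e.g. U_j = Y_1 + ... + Y_j with Y_i >= 1),
# the visited sites are distinct, so S_n is a sum of n i.i.d. copies of X;
# we therefore simulate S_n directly.
n, n_sim = 200, 50_000
S = rng.choice(vals, size=(n_sim, n), p=probs).sum(axis=1)

ES, VS = n * mean_X, n * var_X
for kappa in (int(ES) - 15, int(ES), int(ES) + 15):
    emp = (S == kappa).mean()
    gauss = np.exp(-(kappa - ES) ** 2 / (2 * VS)) / np.sqrt(2 * np.pi * VS)
    print(kappa, emp, gauss)    # empirical point mass vs. Gaussian term
\end{verbatim}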
\begin{lemma} For every $k$, $\varepsilon_{U_k}$ is a Bernoulli random variable such that $${\mathbb P}\{\varepsilon_{U_k}=1\}= {\mathbb E \,} \vartheta_{U_k} .$$ Moreover, for $h \not =k$ we have $${\mathbb P}\{\varepsilon_{U_h}=1,\varepsilon_{U_k}=1\}={\mathbb E \,} \vartheta_{U_h}\vartheta_{U_k} +\sum_{r=1}^\infty(\vartheta_r - \vartheta^2_r){\mathbb P}\{U_h=r, U_k=r\}.$$ \end{lemma} \begin{proof} By independence of $U$ and $\varepsilon$ we have \begin{eqnarray*}& & {\mathbb P}\{\varepsilon_{U_k}=1\}= \sum_{r=1}^\infty {\mathbb P}\{\varepsilon_{U_k}=1,U_k=r \}=\sum_{r=1}^\infty {\mathbb P}\{\varepsilon_{r}=1,U_k=r \}\\&=&\sum_{r=1}^\infty {\mathbb P}\{\varepsilon_{r}=1\}{\mathbb P}\{U_k=r \}=\sum_{r=1}^\infty \vartheta_{r} {\mathbb P}\{U_k=r \}= {\mathbb E \,} \vartheta_{U_k} .\end{eqnarray*} And similarly, using also the independence of the variables $\{\varepsilon_j, j\ge 1\}$, \begin{align*}& {\mathbb P}\{\varepsilon_{U_h}=1,\varepsilon_{U_k}=1\}\\&= \sum_{r,s=1}^\infty {\mathbb P}\{\varepsilon_{U_h}=1,\varepsilon_{U_k}=1,U_h=r,U_k=s \}=\sum_{r,s=1}^\infty {\mathbb P}\{\varepsilon_{r}=1,\varepsilon_{s}=1, U_h=r,U_k=s \} \\&=\sum_{r =1}^\infty{\mathbb P}\{\varepsilon_{r}=1\}{\mathbb P}\{U_h=r,U_k=r \}+\sum_{r,s=1\atop r\neq s}^\infty {\mathbb P}\{\varepsilon_{r}=1\}{\mathbb P}\{\varepsilon_{s}=1\}{\mathbb P}\{U_h=r,U_k=s \}\\&=\sum_{r}\vartheta_r{\mathbb P}\{U_h=r,U_k=r \}+\sum_{r,s=1\atop r\neq s}^\infty \vartheta_{r} \vartheta_{s} {\mathbb P}\{ U_h=r,U_k=s\}\\&=\sum_{r=1}^\infty(\vartheta_r-\vartheta^2_r\}{\mathbb P}\{U_h=r,U_k=r \}+\sum_{r,s=1}^\infty \vartheta_{r} \vartheta_{s} {\mathbb P}\{ U_h=r,U_k=s\}\\&= \sum_{r=1}^\infty(\vartheta_r-\vartheta^2_r){\mathbb P}\{U_h=r,U_k=r \}+{\mathbb E \,} \vartheta_{U_h}\vartheta_{U_k} .\end{align*} \end{proof} Let $$S_n^\prime= W_n+D \frac{B_n}{2}, \qq \quad n=1,2,\ldots.$$ The two following Lemmas generalize Lemma 3.1 \begin{lemma} We have $$ {\mathbb E \,} S_n = {\mathbb E \,} {S_n^\prime} .$$ \end{lemma} \begin{proof} Just observe that \begin{eqnarray*} & &{\mathbb E \,} M_n = \sum_{k=1}^n {\mathbb E \,}\varepsilon_{U_k}L_{U_k} =\sum_{k=1}^n\sum_{r=1}^\infty {\mathbb E \,} \varepsilon_{r}L_{r}{\bf 1}_{\{U_k=r\}} \cr & &=\sum_{k=1}^n\sum_{r=1}^\infty {\mathbb E \,} \varepsilon_{r}{\mathbb E \,} L_{r}{\mathbb P}\{ U_k=r\} = \sum_{k=1}^n\sum_{r=1}^\infty \frac{\vartheta_r}{2}{\mathbb P}\{ U_k=r\} =\frac{1}{2}\sum_{k=1}^n{\mathbb E \,} \vartheta_{U_k} ={\mathbb E \,} \frac{B_n}{2} .\end{eqnarray*} \end{proof} \noindent Let $$\Theta_n =\sum_{j=1}^n{\mathbb E \,} \vartheta_{U_j} .$$ \begin{lemma}\label{lemmagenerale} We have $${\mathbb E \,} {S_n}^2 = {\mathbb E \,}({S_n^\prime})^2 +\frac{D^2\Theta_n }{4}+\frac{D^2}{4}\sum_{1\leqslant h,k \leqslant n\atop h \not= k}c_{h,k},$$ where $$c_{h,k}= \sum_{r=1}^\infty \Big(\frac{3\vartheta^2_r}{4}-\frac{\vartheta_r}{2}\Big) {\mathbb P}\{U_h=r, U_k=r\}.$$ \end{lemma} \begin{proof}First \begin{align}&\label{calcolaccio1} {\mathbb E \,} {S_n}^2 ={\mathbb E \,} \Big(W_n+ D \sum_{k=1}^n \varepsilon_{U_k}L_{U_k}\Big)^2 ={\mathbb E \,} W_n^2 +2D\, {\mathbb E \,}\Big[W_n\big(\sum_{k=1}^n \varepsilon_{U_k}L_{U_k}\big)\Big]+D^2{\mathbb E \,} \Big(\sum_{k=1}^n \varepsilon_{U_k}L_{U_k}\Big)^2 .\end{align} Now \begin{align}\label{calcolaccio2}&\nonumber {\mathbb E \,} W_n\big(\sum_{k=1}^n \varepsilon_{U_k}L_{U_k}\big) =\sum_{k=1}^n {\mathbb E \,} W_n\varepsilon_{U_k} L_{U_k} =\sum_{k=1}^n {\mathbb E \,}\Big\{ \big(\sum_{h=1}^nV_{U_h}\big)\varepsilon_{U_k} L_{U_k}\Big\} \\&=\sum_{h,k=1}^n {\mathbb E \,}\big[ 
V_{U_h}\varepsilon_{U_k} L_{U_k}\big] =\sum_{k=1}^n {\mathbb E \,}\big( V_{U_k}\varepsilon_{U_k} L_{U_k}\big) +\sum_{h\not=k=1}^n {\mathbb E \,}\big( V_{U_h}\varepsilon_{U_k} L_{U_k}\big).\end{align} By of $U$ with $(V,\varepsilon, L)$, and independence of $L$ with $(V, \varepsilon)$ we have \begin{align*}& {\mathbb E \,}\big( V_{U_k}\varepsilon_{U_k} L_{U_k}\big)=\sum_{r=1}^\infty{\mathbb E \,}\big( V_{U_k}\varepsilon_{U_k} L_{U_k}{\bf 1}_{\{U_k=r\}}\big) =\sum_{r=1}^\infty{\mathbb E \,}\big[ V_{r}\varepsilon_{r} L_{r}\big){\mathbb P}\{U_k=r\} \\& =\sum_{r=1}^\infty{\mathbb E \,}\big( V_{r}\varepsilon_{r}\big){\mathbb E \,}\big( L_{r}\big){\mathbb P}\{U_k=r\}=\frac{1}{2}\sum_{r=1}^\infty{\mathbb E \,}\big( V_{r}\varepsilon_{r} \big){\mathbb P}\{U_k=r\} =\frac{1}{2}\sum_{r=1}^\infty{\mathbb E \,}\big(V_{r}\varepsilon_{r}{\bf 1}_{\{U_k=r\}} \big)\\&= \frac{1}{2}{\mathbb E \,}\big( V_{U_k}\varepsilon_{U_k} \big).\end{align*} In a similar way we have also $$ {\mathbb E \,}\big( V_{U_h}\varepsilon_{U_k} L_{U_k}\big)= \frac{1}{2}{\mathbb E \,}\big( V_{U_h}\varepsilon_{U_k} \big), \qquad h \not =k.$$ Hence, continuing from \eqref{calcolaccio2}, we obtain \begin{align}& \label{calcolaccio3}\nonumber {\mathbb E \,}\Big(W_n\big(\sum_{k=1}^n \varepsilon_{U_k}L_{U_k}\big)\Big)=\frac{1}{2}\Big(\sum_{k=1}^n {\mathbb E \,}\big( V_{U_k}\varepsilon_{U_k} \big) +\sum_{1\leqslant h\not=k\leqslant n} {\mathbb E \,}\big( V_{U_h}\varepsilon_{U_k} \big)\Big)\\&\nonumber=\nonumber\frac{1}{2}\Big(\sum_{k=1}^n {\mathbb E \,}\big( V_{U_k}\varepsilon_{U_k} \big) +\sum_{k=1}^n \sum_{h\not=k} {\mathbb E \,}\big( V_{U_h}\varepsilon_{U_k} \big)\Big)=\frac{1}{2}{\mathbb E \,}\big(\sum_{k=1}^n V_{U_k}\varepsilon_{U_k}+\sum_{h\not=k}V_{U_h}\varepsilon_{U_k}\big) \\&=\frac{1}{2}{\mathbb E \,}\Big(\sum_{k=1}^n\varepsilon_{U_k}\big(V_{U_k}+ \sum_{h\not=k}V_{U_h}\big)\Big)=\frac{1}{2}{\mathbb E \,}\Big(\big(\sum_{k=1}^n\varepsilon_{U_k}\big)W_n\Big) ={\mathbb E \,}\big(\frac{B_n}{2} W_n\big).\end{align} Lastly, \begin{align}\label{calcolaccio4}& \nonumber {\mathbb E \,} \big(\sum_{k=1}^n \varepsilon_{U_k}L_{U_k}\big)^2 =\sum_{k=1}^n {\mathbb E \,}\big(\varepsilon^2_{U_k}L^2_{U_k}\big)+\sum_{1\leqslant h\not= k \leqslant n}{\mathbb E \,}\big(\varepsilon_{U_h}L_{U_h}\varepsilon_{U_k}L_{U_k}\big) \\&=\sum_{k=1}^n {\mathbb E \,}\big(\varepsilon_{U_k}L_{U_k}\big)+\sum_{1\leqslant h \not= k \leqslant n}{\mathbb E \,}\big(\varepsilon_{U_h}\varepsilon_{U_k}L_{U_h}L_{U_k}\big).\end{align} And \begin{align}\label{calcolaccio5}& {\mathbb E \,}\big(\varepsilon_{U_k}L_{U_k}\big)= \sum_{r=1}^\infty{\mathbb E \,}\big(\varepsilon_{U_k}L_{U_k}{\bf 1}_{\{U_k=r\}}\big)= \sum_{r=1}^\infty{\mathbb E \,}\big(\varepsilon_{r}L_{r}{\bf 1}_{\{U_k=r\}}\big)\\& \nonumber =\sum_{r=1}^\infty{\mathbb E \,}\big(\varepsilon _{r}\big){\mathbb E \,}\big(L_{r}\big){\mathbb P}\{U_k=r\}=\frac{1}{2} \sum_{r=1}^\infty{\mathbb E \,}\big(\vartheta _{r}\big) {\mathbb P}\{U_k=r\} =\frac{1}{2}\, {\mathbb E \,} \vartheta _{U_k} .\end{align} Similarly \begin{align}\label{calcolaccio6}& \nonumber {\mathbb E \,} \varepsilon_{U_h}\varepsilon_{U_k}L_{U_h}L_{U_k} = \sum_{r,s=1}^\infty{\mathbb E \,}\big(\varepsilon_{U_h}\varepsilon_{U_k}L_{U_h}L_{U_k}{\bf 1}_{\{U_h=r, U_k=s\}}\big)=\sum_{r,s=1}^\infty{\mathbb E \,}\big(\varepsilon_{r}\varepsilon_{s}L_{r}L_{s}{\bf 1}_{\{U_h=r, U_k=s\}}\big)\\&=\nonumber\sum_{r=1}^\infty{\mathbb E \,} \varepsilon_{r} {\mathbb E \,} L_{r} {\mathbb P}\{U_h=r, U_k=r\} +\sum_{r,s=1\atop r\neq s}^\infty{\mathbb E \,} \varepsilon_{r} {\mathbb E \,} 
\varepsilon_{s} {\mathbb E \,} L_{r} {\mathbb E \,} L_{s} {\mathbb P}\{U_h=r, U_k=s\} \\&\nonumber =\frac{1}{2}\sum_{r =1 }^\infty\vartheta_r {\mathbb P}\{U_h=r, U_k=r\}+\frac{1}{4}\sum_{r,s=1\atop r\neq s}^\infty\vartheta_{r}\vartheta_{s}{\mathbb P}\{U_h=r, U_k=s\}\\&=\nonumber\sum_{r =1 }^\infty\Big(\frac{\vartheta_r}{2}-\frac{\vartheta^2_r}{4}\Big) {\mathbb P}\{U_h=r, U_k=r\}+\frac{1}{4}\sum_{r,s=1}^\infty\vartheta_{r}\vartheta_{s}{\mathbb P}\{U_h=r, U_k=s\} \\&=\frac{1}{4}{\mathbb E \,} \vartheta_{U_h}\vartheta_{U_k} +\sum_{r=1}^\infty\Big(\frac{\vartheta_r}{2}-\frac{\vartheta^2_r}{4}\Big) {\mathbb P}\{U_h=r, U_k=r\}=\frac{1}{4}{\mathbb E \,} \vartheta_{U_h}\vartheta_{U_k} +a_{h,k},\end{align} where we set $$a_{h,k}=\sum_{r=1}^\infty\Big(\frac{\vartheta_r}{2}-\frac{\vartheta^2_r}{4}\Big) {\mathbb P}\{U_h=r, U_k=r\}.$$ Then, by inserting \eqref{calcolaccio5} and \eqref{calcolaccio6} into \eqref{calcolaccio4} we get \begin{align}\label{calcolaccio7}& \nonumber {\mathbb E \,} \Big(\sum_{k=1}^n \varepsilon_{U_k}L_{U_k}\Big)^2 =\frac{1}{2}\sum_{k=1}^n{\mathbb E \,}\vartheta _{U_k} +\sum_{1\leqslant h \not= k \leqslant n}\Big(\frac{1}{4}{\mathbb E \,} \vartheta_{U_h}\vartheta_{U_k} +a_{h,k}\Big) \\&=\nonumber\frac{\Theta_n}{2}+\frac{1}{4} \sum_{1\leqslant h , k \leqslant n}{\mathbb E \,} \vartheta_{U_h}\vartheta_{U_k} -\frac{1}{4} \sum_{k=1}^n{\mathbb E \,} \vartheta^2 _{U_k} +\sum_{1\leqslant h \not= k \leqslant n}a_{h,k}\\&=\frac{\Theta_n}{2}+\frac{1}{4}{\mathbb E \,} \Big\{\Big(\sum_{k=1}^n\vartheta _{U_k}\Big)^2- \sum_{k=1}^n\vartheta^2 _{U_k}\Big\}+\sum_{1\leqslant h \not= k \leqslant n}a_{h,k}.\end{align} Now, inserting \eqref{calcolaccio3} and \eqref{calcolaccio7} in \eqref{calcolaccio1} we find \begin{align*}& {\mathbb E \,} {S_n}^2 ={\mathbb E \,} W_n^2 +2D {\mathbb E \,}\Big(W_n \sum_{k=1}^n \varepsilon_{U_k}L_{U_k}\Big) +D^2{\mathbb E \,} \Big(\sum_{k=1}^n \varepsilon_{U_k}L_{U_k}\Big)^2 \\&={\mathbb E \,} W_n^2 +2D{\mathbb E \,}\Big(\frac{B_n}{2} W_n\Big)+\frac{D^2}{2}\Theta_n+\frac{D^2}{4}{\mathbb E \,}\Big\{ \Big(\sum_{k=1}^n\vartheta _{U_k}\Big)^2- \sum_{k=1}^n\vartheta^2 _{U_k}\Big\}+\frac{D^2}{4}\sum_{1\leqslant h \not= k \leqslant n}a_{h,k}. 
\end{align*} On the other hand \begin{align*}&{\mathbb E \,} W_n^2 +2D{\mathbb E \,}\Big(\frac{B_n}{2} W_n\Big)={\mathbb E \,} \Big(W_n+D\frac{B_n}{2}\Big)^2 -\frac{D^2}{4}{\mathbb E \,} B_n^2 = {\mathbb E \,} (S^\prime_n)^2 -\frac{D^2}{4}{\mathbb E \,} B_n^2.\end{align*} Hence \begin{align}&\label{calcolaccio8} {\mathbb E \,} {S_n}^2={\mathbb E \,}(S^\prime_n)^2 -\frac{D^2}{4}{\mathbb E \,} B_n^2+\frac{D^2}{2}\Theta_n+\frac{D^2}{4}{\mathbb E \,}\Big\{ \Big(\sum_{k=1}^n\vartheta _{U_k}\Big)^2- \sum_{k=1}^n\vartheta^2 _{U_k}\Big\}+\frac{D^2}{4}\sum_{1\leqslant h \not= k \leqslant n}a_{h,k}.\end{align} Now, in a similar way as we did for \eqref{calcolaccio7}, we find that \begin{align*}&{\mathbb E \,}\, B_n^2={\mathbb E \,}\Big( \sum_{k=1}^n\varepsilon_{U_k}\Big)^2=\Theta_n+{\mathbb E \,}\Big\{ \Big(\sum_{k=1}^n\vartheta _{U_k}\Big)^2- \sum_{k=1}^n\vartheta^2 _{U_k}\Big\}+\sum_{1\leqslant h \not= k \leqslant n}b_{h,k},\end{align*} where $$b_{h,k}=\sum_{r=1}^\infty\big(\vartheta_r-\vartheta^2_r\big)\, {\mathbb P}\{U_h=r, U_k=r\}.$$ and inserting into \eqref{calcolaccio8}, we obtain \begin{align*}&{\mathbb E \,} {S_n}^2 ={\mathbb E \,} (S^\prime_n)^2 -\frac{D^2}{4}\Big\{\Theta_n+{\mathbb E \,}\Big\{ \Big(\sum_{k=1}^n\vartheta _{U_k}\Big)^2- \sum_{k=1}^n\vartheta^2 _{U_k}\Big\}+\sum_{1\leqslant h \not= k \leqslant n}b_{h,k}\Big\}\\&+\frac{D^2}{2}\Theta_n+\frac{D^2}{4}{\mathbb E \,}\Big\{ \Big(\sum_{k=1}^n\vartheta _{U_k}\Big)^2- \sum_{k=1}^n\vartheta^2 _{U_k}\Big\}+\frac{D^2}{4}\sum_{1\leqslant h \not= k \leqslant n}a_{h,k}\\&={\mathbb E \,} (S^\prime_n)^2 +\frac{D^2\Theta_n}{4}+\frac{D^2}{4} \sum_{1\leqslant h \not= k \leqslant n}(a_{h,k}-b_{h,k}) ={\mathbb E \,} (S^\prime_n)^2 +\frac{D^2\Theta_n}{4}+\frac{D^2}{4}\sum_{1\leqslant h \not= k \leqslant n}c_{h,k},\end{align*} where $$c_{h,k}=a_{h,k}-b_{h,k}= \sum_{r=1}^\infty\Big(\frac{3\vartheta^2_r}{4}-\frac{\vartheta_r}{2}\Big) {\mathbb P}\{U_h=r, U_k=r\}.$$ \end{proof}\begin{remark}\label{ipotesi}(i) Assume that the variables $(U_j)$ verify \begin{equation}\label{ipotesir}{\mathbb P}\{U_h=r, U_k=r\}=0, \qquad \forall \, h \not =k \quad \hbox{and }\forall \, r.\end{equation} Then from Lemma \ref{lemmagenerale} we get $${\mathbb E \,} {S_n}^2 = {\mathbb E \,} ({S_n^\prime})^2 +\frac{D^2\Theta_n }{4}.$$ The above assumption holds for instance in the following important case: let the $U_j$ be the partial sums of a sequence of random variables $(Y_i)$ taking positive integer values $$U_j= \sum_{i=1}^j Y_i.$$ This is the case if $Y_i\equiv 1$ for every $i$, so that $U_j=j$ for every $j$. Hence our present discussion is a generalization of the previous one. (ii) Let the $U_j$ be the partial sums of a sequence of independent random variables $(Y_i)$. Then, for $h < k$, \begin{eqnarray*} {\mathbb P}\big\{U_h=r, U_k=r\big\}&=&{\mathbb P} \{ U_h=r\}{\mathbb P}\Big\{\sum_{i=h+1}^kY_i=0 \Big\}.\end{eqnarray*} Hence \begin{align*}&c_{h,k}= \rho_{h,k}\sum_{r=1}^\infty\Big(\frac{3\vartheta^2_r}{4}-\frac{\vartheta_r}{2}\Big) {\mathbb P}\{U_h=r\}={\mathbb P}\Big\{\sum_{i=h+1}^kY_i=0 \Big\}{\mathbb E \,}\Big(\frac{3\vartheta^2_{U_h}}{4}-\frac{\vartheta_{U_h}}{2}\Big).\end{align*} Notice that, if the random variables $(Y_i)$ are i.i.d, then \begin{align*}&{\mathbb P}\Big\{\sum_{i=h+1}^kY_i=0 \Big\}={\mathbb P}\Big\{\sum_{i=1}^{k-h}Y_i=0 \Big\}= \sigma_{k-h},\end{align*} where $\sigma_{n}={\mathbb P}\{U_n=0\}$. 
\end{remark}\subsection{The Local Limit Theorem with effective rate} In this section we keep all the notations of the preceding one; furthermore we set $$H_n= \sup_{x \in \mathbb{R}}\Big|{\mathbb P}\Big\{\frac{S^\prime_n- {\mathbb E \,} S^\prime_n}{\sqrt{{\rm Var}(S^\prime_n)}}<x\Big\}-\Phi(x)\Big|$$ and $$\rho_n(h)={\mathbb P}\Big\{\Big|\sum_{k=1}^n\varepsilon_{U_k}- \Theta_n\Big|> h\Theta_n \Big\},$$ where $\Phi$ is the distribution function of the standard Gaussian law. \vskip 3 pt The following Theorem generalizes Theorem \ref{ger1} to the case of a random scenery. Its proof is identical to that of Theorem \ref{ger1} (just replace $\vartheta_k$ with $\vartheta_{U_k}$ in each formula), so we omit it. \begin{theorem}\label{LLTRS} For any $0<h<1 $, $0<\vartheta_j \leqslant \vartheta_{X_j}$ and all $\kappa \in \mathcal{L}(v_0n, D)$\begin{align*}& {\mathbb P}\{S_n=\kappa\}\leqslant \Big(\frac{1+h}{1-h}\Big)\frac{D}{\sqrt{2 \pi {\rm Var}(S_n)}}e^{-\frac{(\kappa-{\mathbb E \,} S_n)^2}{2(1+h){\rm Var}(S_n)}}\\&+\frac{C_1}{\sqrt{(1-h)\Theta_n}}\Big(H_n+ \frac{1}{(1-h)\Theta_n}\Big)+\rho_n(h);\end{align*} \begin{align*}& {\mathbb P}\{S_n=\kappa\}\geqslant \Big(\frac{1-h}{1+h}\Big)\frac{D}{\sqrt{2 \pi {\rm Var}(S_n)}}e^{-\frac{(\kappa-{\mathbb E \,} S_n)^2}{2(1-h){\rm Var}(S_n)}}\\&-\frac{C_1}{\sqrt{(1-h)\Theta_n}}\Big(H_n+ \frac{1}{(1-h)\Theta_n}+2\rho_n(h)\Big)-\rho_n(h). \end{align*} \end{theorem} \subsection{Covariance structure of the sequence $\boldsymbol{\big\{V_{U_k}+\frac{D}{2}\varepsilon_{U_k},k\ge 1\big\}}$.} Denote $$Y_k=V_{U_k}+\frac{D}{2}\varepsilon_{U_k}.$$ We observe that $$S^\prime_n = W_n +\frac{D}{2}B_n = \sum_{k=1}^n\Big(V_{U_k}+\frac{D}{2}\varepsilon_{U_k}\Big)=\sum_{k=1}^nY_k$$ and that the quantity $H_n$ appearing in the statement of Theorem \ref{LLTRS} concerns precisely the sequence of partial sums $S^\prime_n$. The aim of the present section is to discuss suitable assumptions ensuring the independence of the variables $\{Y_k, k\ge 1\}$, thus enabling us to give an estimate of \lq\lq Berry--Esseen type\rq\rq\ for $H_n$. \vskip 2 pt Throughout this section we assume that the variables $\{U_j, j\ge 1\}$ verify condition \eqref{ipotesir}, which appeared in Remark \ref{ipotesi} (i), i.e. $$ {\mathbb P}\big\{U_h=r, U_k=r\big\}=0, \qquad\quad \forall \, h \not =k \quad \hbox{and}\quad \forall \, r\ge 1. $$ \begin{theorem} Let $\{X_n, n\ge 1\}$ be i.i.d. Assume moreover that, for every pair $(h,k)$ with $h \not =k$, the random variables $\vartheta_{U_h}$ and $\vartheta_{U_k}$ are uncorrelated. Then the sequence $\{Y_k, k\ge 1\}$ is i.i.d.\end{theorem} \begin{remark} The assumption of the above theorem is valid if either \begin{itemize} \item[(i)] $r \mapsto \vartheta_r$ is constant (for instance $\vartheta_r = \vartheta_{X_r}=\vartheta_{X} $ for every $r$), \vskip 1 pt \item[(ii)] $U_h$ and $U_k$ are independent (and trivially if $U_h= h$, for every $h$). \end{itemize} \end{remark} Let $\phi: \mathbb{R}\to \mathbb{R}$ be a measurable function and denote \begin{equation*}\label{definizione} \Delta \phi (t) = \phi\Big( t+\frac{D}{2}\Big)-\phi\big( t \big). \end{equation*} The above theorem is a straightforward consequence of the following proposition. \begin{proposition} Let the sequence $\{X_n, n\ge 1\}$ be i.i.d.
Then, for every pair $\phi,\psi $ of measurable functions $ \mathbb{R}\to \mathbb{R}$, $${\mathbb E \,}\big[\phi(Y_{h})\psi(Y_{k})\big]= {\mathbb E \,}\big[(\alpha_{\phi}+ \beta_{\phi}\vartheta_{U_{h}})(\alpha_{\psi}+ \beta_{\psi}\vartheta_{U_{k}})\big],\qquad h \not =k$$ where $$\alpha_\phi={\mathbb E \,}\phi(X_1)=\sum_{k=1}^\infty f(k)\phi(v_k) ,\qquad \beta_\phi = -\frac{1}{2}\sum_{k=1}^\infty \frac{f(k)\wedge f(k+1)}{\vartheta_{X}} \Delta^2 \phi(v_k) .$$$$\alpha_\psi={\mathbb E \,}\psi(X_1)=\sum_{k=1}^\infty f(k)\psi(v_k) ,\qquad \beta_\psi = -\frac{1}{2}\sum_{k=1}^\infty \frac{f(k)\wedge f(k+1)}{\vartheta_{X}} \Delta^2 \psi(v_k) .$$ In particular, for every pair $A$ and $B$ of Borel subsets of ${\mathbb R}$, \begin{equation*} {\mathbb P}\{Y_h \in A, Y_k \in B\}-{\mathbb P}\{Y_h \in A\}{\mathbb P}\{Y_k \in B\}= {\rm Cov}({\bf 1}_A(Y_h),{\bf 1}_B(Y_k))=\beta_A \beta_B {\rm Cov} (\vartheta_{U_{h}},\vartheta_{U_{k}}) \end{equation*} where $$ \beta_A =\beta_{{\bf 1}_A},\qquad \beta_B =\beta_{{\bf 1}_B}.$$ \end{proposition} \begin{proof} Since the $X_r$ are identically distributed, we shall drop the symbol $r$ in the definition of $f_r$; moreover (see Section 1, before \eqref{basber0}) $$\tau_k^{(r)}=\vartheta_r \frac{f(k)\wedge f(k+1)}{\vartheta_{X}}.$$ First, for every $r$,\begin{eqnarray*}&& {\mathbb E \,}\phi\Big( V_{r}+\frac{D}{2}\varepsilon_{r}\Big) \cr &=& \sum_{k=1}^\infty \phi\Big( v_k+\frac{D}{2}\Big){\mathbb P}\{V_{r}=v_k,\varepsilon_{r}=1\}+\sum_{k=1}^\infty \phi\big( v_k\big){\mathbb P}\{V_{r}=v_k,\varepsilon_{r}=0\} \cr &=&\sum_{k=1}^\infty \phi\Big( v_k+\frac{D}{2}\Big)\tau_k^{(r)}+\sum_{k=1}^\infty \phi\big( v_k\big)\Big(f(k) - \frac{\tau_{k-1}^{(r)}+\tau_k^{(r)}}{2}\Big) \cr &=&\sum_{k=1}^\infty \phi\Big( v_k+\frac{D}{2}\Big)\tau_k^{(r)}+\sum_{k=1}^\infty \phi\big( v_k\big)f(k)-\frac{1}{2}\sum_{k=1}^\infty \phi\big( v_k\big)\tau_{k-1}^{(r)}-\frac{1}{2}\sum_{k=1}^\infty \phi\big( v_k\big)\tau_{k}^{(r)} \cr &=& \sum_{k=1}^\infty \phi\Big( v_k+\frac{D}{2}\Big)\tau_k^{(r)}+\sum_{k=1}^\infty \phi\big( v_k\big)f(k)-\frac{1}{2}\sum_{k=1}^\infty \phi\big( v_{k-1}+D\big)\tau_{k-1}^{(r)}-\frac{1}{2}\sum_{k=1}^\infty \phi\big( v_k \big)\tau_{k}^{(r)} \cr &=& \sum_{k=1}^\infty \phi\Big( v_k +\frac{D}{2}\Big)\tau_k^{(r)}+\sum_{k=1}^\infty \phi\big( v_k \big)f(k)-\frac{1}{2}\sum_{k=1}^\infty \phi\big( v_{k} +D\big)\tau_{k}^{(r)}-\frac{1}{2}\sum_{k=1}^\infty \phi\big( v_k \big)\tau_{k}^{(r)}\cr &=& \sum_{k=1}^\infty\tau_{k}^{(r)}\Big\{\phi\big( v_k +\frac{D}{2}\big)-\frac{\phi\big( v_{k} +D\big)+\phi\big( v_{k} \big)}{2}\Big\}+\sum_{k=1}^\infty \phi\big( v_k \big)f(k) \cr &=&\sum_{k=1}^\infty \phi\big( v_k \big)f(k)-\frac{1}{2}\sum_{k=1}^\infty\tau_{k}^{(r)}\Delta^2\phi(v_k) \cr &=&\sum_{k=1}^\infty \phi\big( v_k \big)f(k)-\frac{\vartheta_r}{2}\sum_{k=1}^\infty \frac{f(k)\wedge f(k+1)}{\vartheta_{X}}\Delta^2\phi(v_k)=\alpha_\phi + \beta_\phi\vartheta_r.\end{eqnarray*} Similarly, \begin{align*}&{\mathbb E \,}\psi\Big( V_{s}+\frac{D}{2}\varepsilon_{s}\Big)=\alpha_\psi + \beta_\psi\vartheta_s.\end{align*} Hence, observing that for $r\not =s$ the random variables $V_r +\frac{D}{2}\varepsilon_r$ and $V_s +\frac{D}{2}\varepsilon_s$ are independent, we have\begin{align*}& {\mathbb E \,}\big[\phi(Y_{h})\psi(Y_{k})\big]=\sum_{r,s=1}^\infty{\mathbb E \,}\Big[\phi\big(V_r +\frac{D}{2}\varepsilon_r\big)\psi\big(V_s +\frac{D}{2}\varepsilon_s\big)\Big]{\mathbb P}\big\{U_h=r,U_k=s\big\}\\&=\sum_{r,s=1\atop r\not =s}^\infty{\mathbb E \,}\Big[\phi\big(V_r +\frac{D}{2}\varepsilon_r\big)\psi\big(V_s 
+\frac{D}{2}\varepsilon_s\big)\Big]{\mathbb P}\big\{U_h=r,U_k=s\big\}\\&=\sum_{r,s=1\atop r\not =s}^\infty{\mathbb E \,}\big[\phi\big(V_r +\frac{D}{2}\varepsilon_r\big)\big]{\mathbb E \,}\big[\psi\big(V_s +\frac{D}{2}\varepsilon_s\big)\big]{\mathbb P}\big\{U_h=r,U_k=s\big\}\\&=\sum_{r,s=1\atop r\not =s}^\infty (\alpha_\phi + \beta_\phi\vartheta_r)(\alpha_\psi + \beta_\psi\vartheta_r){\mathbb P}\big\{U_h=r,U_k=s\big\} \\&=\sum_{r,s=1}^\infty (\alpha_\phi + \beta_\phi\vartheta_r)(\alpha_\psi + \beta_\psi\vartheta_r){\mathbb P}\big\{U_h=r,U_k=s\big\}={\mathbb E \,}(\alpha_{\phi}+ \beta_{\phi}\vartheta_{U_{h}})(\alpha_{\psi}+ \beta_{\psi}\vartheta_{U_{k}}).\end{align*} \end{proof} \begin{remark} Let $A=[a,b]$ be a closed interval in $\mathbb{R}$. Let $$p= \max\{k: v_k < a\}, \qquad q = \max\{k: v_k \leqslant b\}.$$ It is easy to see that $$-\frac{1}{2}\Delta^2 \phi(v_p)= \begin{cases}+\frac{1}{2} &\hbox{\rm if }v_p +\frac{D}{2}\in A\\-\frac{1}{2}& \hbox{\rm if }v_p +\frac{D}{2}\not \in A; \end{cases} $$ similarly$$-\frac{1}{2}\Delta^2 \phi(v_q)= \begin{cases}+\frac{1}{2} &\hbox{\rm if }v_q +\frac{D}{2}\in A\\-\frac{1}{2}& \hbox{\rm if }v_q +\frac{D}{2}\not \in A. \end{cases} $$ It follows that $$\big|\beta_A\big| =\Big|-\frac{1}{2} \frac{f(p)\wedge f(p+1)}{\vartheta_{X}} \Delta^2 \phi(v_p)-\frac{1}{2} \frac{f(q)\wedge f(q+1)}{\vartheta_{X}} \Delta^2 \phi(v_q)\Big|\leqslant 1,$$ since $$ \frac{f(k)\wedge f(k+1)}{\vartheta_{X}}\leqslant 1, \qquad \forall\,\, k.$$ As a consequence we get\begin{equation*} \big|{\mathbb P}\{Y_h \in A, Y_k \in B\}-{\mathbb P}\{Y_h \in A\}{\mathbb P}\{Y_k \in B\}\big|\leqslant \Big|{\rm Cov} (\vartheta_{U_{h}},\vartheta_{U_{k}})\Big| \end{equation*} A similar argument yields the above inequality for any interval in ${\mathbb R}$ (open, or half--closed, or unbounded). \end{remark} \section{\bf Concluding Remarks and Open Problems.} We conclude with discussing two important questions concerning the approach used. The first concerns moderate deviations, and the second is related to weighted sums. \subsection{\gsec Moderate deviation local limit theorems.} In the i.i.d. case, the general form of the local limit theorem (\cite{IL}, Th. 4.2.1) states \begin{theorem}\label{th:G4} In order that for some choice of constants $a_n$ and $b_n$ $$\lim_{n \to \infty}\sup_{N \in \mathcal{L}(v_0n, D)}\Big|\frac{b_n}{\lambda}{\mathbb P}\{S_n=N\}-g\big( \frac{N-a_n}{b_n}\big)\Big|=0, $$ where $g$ is the density of some stable distribution $G$ with exponent $0< \alpha \leq 2$, it is necessary and sufficient that $$ {\rm (i)}\ \ \frac{S_n-a_n}{b_n} \buildrel{\mathcal D}\over{{\mathbb R}ightarrow} G \ \ \hbox{as $n \to \infty$} \qq\qq {\rm (ii)}\ \ \hbox{$D $ is maximal}.$$ \end{theorem} \noi This provides a useful estimate of ${\mathbb P}\{S_n=N\} $ for the values of $N$ such that ${|N | / b_n }$ is bounded, as already mentioned when $\a=2$ (with $b_n=\sqrt {\Sigma_n}$ using notation (\ref{not1})). When ${|N | / b_n }\to \infty$, it is known, at least when $0<\a<1$, that another estimate exists. More precisely, $${\mathbb P}\{S_n=N\} \sim n{\mathbb P}\{X=N\} \qq \hbox{as $n \to \infty $,} $$ uniformly in $n$ such that ${|N | / b_n }\to \infty$. We refer to Doney \cite{D} for large deviation local limit theorems. In the intermediate range of values where ${|N | / b_n }$ can be large but not too large with respect to $n$, it was known already three centuries ago that in the binomial case finer estimates are available for this range of values. 
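As a purely numerical illustration of this point (a sketch only, assuming Python with SciPy is available; it is not used anywhere in the arguments below), one may compare the exact binomial point probabilities with the Gaussian local approximation $\frac{1}{\sqrt{2\pi npq}}\,e^{-x^2/2}$, $x=(k-np)/\sqrt{npq}$, as $x$ moves through the moderate deviation range:
\begin{verbatim}
# Numerical sketch: exact binomial point probabilities versus the Gaussian
# local approximation, in the moderate deviation range k = np + x*sqrt(npq)
# with x up to the order n^{1/6}.  (Illustration only; n, p chosen ad hoc.)
import numpy as np
from scipy.stats import binom

n, p = 20000, 0.5
q = 1.0 - p
s = np.sqrt(n * p * q)

for x in [0.0, 1.0, 3.0, n ** (1.0 / 6.0)]:
    k = int(round(n * p + x * s))
    exact = binom.pmf(k, n, p)
    z = (k - n * p) / s                      # standardized point actually used
    approx = np.exp(-z ** 2 / 2.0) / np.sqrt(2.0 * np.pi * n * p * q)
    print(f"x ~ {x:6.2f}  k = {k:6d}  exact = {exact:.3e}  "
          f"gaussian = {approx:.3e}  ratio = {exact / approx:.4f}")
\end{verbatim}
The ratio stays close to $1$ near the center and drifts away as $x$ grows, which is precisely the regime quantified by the classical bound recalled next.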
\begin{lemma}\label{moivre} {\rm (De Moivre--Laplace, 1730)} Let $0<p<1$, $q=1-p$. Let $X$ be such that ${\mathbb P}\{X=1\}=p=1-{\mathbb P}\{X=0\} $. Let $X_1, X_2,\ldots$ be independent copies of $X$ and let $S_n=X_1+\ldots +X_n$. Let $0<\g<1$ and let $ \b \le \g\sqrt{ pq}\, n^{1/3} $. Then for all $k$ such that letting $ x= \frac{k-np}{\sqrt{ npq}} $, $|x|\le \b n^{1/6}$, we have \begin{eqnarray*} {\mathbb P}\{S_n=k\} &=& \frac{e^{- \frac{x^2}{ 2 }} }{\sqrt{2\pi npq}} \ e^E ,\end{eqnarray*} with $|E|\le \frac{ |x|^3}{\sqrt{ npq}}+ \frac{|x|^4 }{npq}+ \frac{|x|^3}{2(npq)^{\frac{3}{2}} } + \frac{1}{ 4n\min(p,q)(1 - \g )}$. \end{lemma} See Chow and Teicher \cite{CT}. Although the uniform estimate given in Lemma \ref{lltber} is optimal (it is derived from a fine local limit theorem with asymptotic expansion), it is for a moderate deviation like $x\sim n^{1/7}$, considerably less precise than the old one of De Moivre (case $p=q$). \vskip 2 pt {\gsec Problem I} Under which moment assumptions, De Moivre-Laplace's estimate extends to sums of independent random variables? \vskip 1 pt \noindent A partial answer can be given by means of the following result proved by Chen, Fang and Shao \cite{CFS}. \begin{theorem} Let $X_i$, $1 \leqslant i \leqslant n$ be a sequence of independent random variables with ${\mathbb E \,}[X_i]=0$. Put $S_n= \sum_{i=1}^n X_i$ and $B_n^2 =\sum_{i=1}^n {\mathbb E \,} X_i^2 $. Assume that there exist positive constant $c_1$, $c_2$ and $t_0$ such that $$B_n^2 \geqslant c_1^2 n,\qquad {\mathbb E \,} e^{t_0\sqrt{|X_i|}} \leqslant c_2 \qq \qq \hbox{for} \ 1 \leqslant i \leqslant n.$$ Then $$\Big|\frac{{\mathbb P}\{S_n/B_n\geqslant x\}}{1-{\mathbb P}hi(x)}-1 \Big|\le c_3\frac{(1+x^3)}{\sqrt n},$$ for $0 \leqslant x \leqslant (c_1t_0^2)^{1/3} n^{1/6}$, where $c_3$ depends on $c_2$ and $c_1t_0^2.$\end{theorem} \noindent Consider a sequence $X_i$ of integer--valued random variables and assume for simplicity that $ c_3=1$. Let $k$ be an integer. The above result gives \begin{eqnarray*} & & {\mathbb P}\{S_n=k\}= {\mathbb P}\big\{\frac{S_n}{B_n}\geqslant \frac{k}{B_n}\big\}-{\mathbb P}\big\{\frac{S_n}{B_n}\geqslant \frac{k+1}{B_n}\big\} \cr &=&\big(1+\frac{1+ (\frac{k}{B_n} )^3}{\sqrt n}\big) \big(1-{\mathbb P}hi\big(\frac{k}{B_n}\big)\big)- \big(1+ \frac{1+ (\frac{k+1}{B_n} )^3}{\sqrt n}\big)\big(\big(1-{\mathbb P}hi\big(\frac{k}{B_n}\big)\big)+{\mathbb P}hi\big(\frac{k}{B_n}\big)- {\mathbb P}hi\big(\frac{k+1}{B_n}\big)\big) \cr &=& \Big\{1+ \frac{1+ (\frac{k+1}{B_n} )^3}{\sqrt n}+\frac{1}{\sqrt n}\, \frac{\big(1-{\mathbb P}hi (\frac{k}{B_n} )\big)\big( (\frac{k}{B_n} )^3- (\frac{k+1}{B_n} )^3\big)}{{\mathbb P}hi (\frac{k+1}{B_n} )-{\mathbb P}hi (\frac{k}{B_n } )} \ \Big\} \big({\mathbb P}hi (\frac{k+1}{B_n } )-{\mathbb P}hi (\frac{k}{B_n} )\big). \end{eqnarray*} Now \begin{eqnarray*}{\mathbb P}hi\big( ({k+1})/{B_n}\big)-{\mathbb P}hi\big( {k}/{B_n}\big)&\approx& \frac{1}{\sqrt{2\pi}B_n} e^{-\frac{k^2}{2B_n^2}}\cr 1-{\mathbb P}hi\big( {k}/{B_n}\big)&\approx & \frac {B_n}{k}e^{-\frac {k^2}{2B_n^2}} \cr \big( {k}/{B_n}\big)^3-\big( {(k+1)}/{B_n}\big)^3&\approx &- {3k^2}/{B_n^3}. 
\end{eqnarray*} Substituting into the above expression, we find the approximation \begin{eqnarray*} {\mathbb P}\{S_n=k\}&\approx& \Big\{1 + \frac{k^3}{\sqrt n B_n^3}- \frac{B_n}{k \sqrt n} \cdot\frac{3k^2}{B_n^3}\Big\} \frac{1}{\sqrt{2\pi}B_n} e^{-\frac{k^2}{2B_n^2}}\cr &=&\Big\{1 + \frac{k^3}{\sqrt n B_n^3}- \frac{3k}{\sqrt n B_n^2}\Big\} \frac{1}{\sqrt{2\pi}B_n} e^{-\frac{k^2}{2B_n^2}} \ = \ e^E \frac{1}{\sqrt{2\pi}B_n} e^{-\frac{k^2}{2B_n^2}},\end{eqnarray*} with $E=\frac{k^3}{\sqrt n B_n^3}+ \frac{3k}{\sqrt n B_n^2}$. However, the assumption ${\mathbb E \,} e^{t_0\sqrt{|X_i|}} \le c_2$ is restrictive, since the constant $c_2$ can be quite large. Consider for instance the following remarkable example. \vskip 2 pt \noi {\emph{Probabilistic model of the partition function:}} We refer to Freiman-Pitman \cite{FP}. Let $\s$ be a real number. Fix some positive integer $n$, and let $1\le m\le n$. Let $X_m, \ldots , X_n$ be independent random variables defined by $$ {\mathbb P}\{X_j =0\}= \frac{1}{1+e^{-\s j}}, \qq\qq {\mathbb P}\{X_j =j\}= \frac{e^{-\s j}}{1+e^{-\s j}}.$$ The random variable $Y= X_m + \ldots + X_n$ can serve to model the partition function $q_m(n)$ counting the number of partitions of $n$ into distinct parts, each of which is at least $m$, namely the number of ways to express $n$ as $$ n= i_1+ \ldots +i_r, \qq \qq m\le i_1<\ldots <i_r\le n.$$ (By Euler's pentagonal theorem, $q_0(n)$ for instance appears as a coefficient in the expansion of $\prod_{k\le n} (1+e^{ik\theta})$.) Notice that we have the following formula (in which $\s$ only appears in the right-hand side) \begin{equation} \label{qmn} q_m(n) = e^{\s n} \int_0^1 \prod_{j=m}^n \big(1 + e^{-\s j} e^{2i\pi \a j}\big)e^{-2i\pi \a n} \dd \a . \end{equation} By using characteristic functions and the Fourier inversion formula, we deduce from (\ref{qmn}) that \begin{eqnarray} \label{link} q_m(n)& = & e^{\s n}\Big( \prod_{j=m}^n ( {1+e^{-\s j}})\Big){\mathbb P}\{Y=n\} . \end{eqnarray} Choosing $\s$ as the (unique) solution of the equation $ \sum_{j=m}^n \frac{j}{1+e^{\s j}}= n $ gives ${\mathbb P}\{Y=n\}= {\mathbb P}\{ \overline Y=0\}$ where $\overline Y=Y-{\mathbb E \,} Y$. But here we have ${\mathbb E \,} e^{t_0\sqrt{|X_j-{\mathbb E \,} X_j|}}\approx e^{t'_0 \sqrt j}$. Hence $c_2\approx e^{t'_0 \sqrt n}$, and so $c_3\gg \sqrt n$. Freiman and Pitman lacked a result of this kind and, in its place, directly estimated the integral in (\ref{qmn}) in a painstaking work. \subsection{\gsec Weighted i.i.d.\!\! sums} \label{bebi1} The requirement that the random variables take values in a common lattice is generally no longer satisfied when replacing $X_j$ by $w_j X_j$, where $ w_j,j=1,\ldots,n$ are real numbers. This occurs if $X_j= w_j\b_j$, where $\b_j$ is a Bernoulli random variable and the $w_j$ are distinct integers having greatest common divisor $d$. In this case, ${\mathbb P}\{X_j \in\mathcal L(0,w_j)\}=1$ for each $j$, but one cannot select a smaller {\it common} span (e.g. $D=d$) since condition (\ref{basber}) (see also (\ref{basber1})) would no longer be fulfilled. This example in turn covers important classes of independent random variables used as probabilistic models in arithmetic. See \cite{F},\cite{FP},\cite{Po}. However, the representation given in Lemma \ref{lemd} extends to weighted sums.
Set for $m=1,\ldots,n$, $$S_m =\sum_{j=1}^{m} w_j X_{j}, \qq W_m =\sum_{j=1}^m w_j V_j,\qq M_m=\sum_{j=1}^m w_j\e_jL_j, \qq B_m=\sum_{j=1}^m \e_j .$$ A direct consequence of (\ref{dec0}) is \begin{lemma} We have the representation $$ \{S_m, 1\le m\le n\}\buildrel{\mathcal D}\over{=} \{ W_m + DM_m, 1\le m\le n\} .$$ Moreover, conditionally on the $\s$-algebra generated by the sequence $\{(V_j,\e_j), j=1, \ldots, n\}$, $M_n$ is a weighted Bernoulli random walk. \end{lemma} {\gsec Problem II} Establish an approximate form of the local limit theorem for weighted i.i.d.\!\! sums. \end{document}
\begin{document} \title[On diversities and finite dimensional Banach spaces]{On diversities and finite dimensional Banach spaces} \author[B. Gonz\'alez Merino]{Bernardo Gonz\'alez Merino} \address{\'Area de Matem\'atica Aplicada, Departamento de Ingenier\'ia y Tecnolog\'ia de Computadores, Facultad de Inform\'atica, Universidad de Murcia, 30100-Murcia, Spain}\email{[email protected]} \thanks{2020 Mathematics Subject Classification. Primary 52A20; Secondary 52A21, 52A40.} \date{\today}\maketitle \begin{abstract} A diversity $\delta$ on $M$ is a function defined on the finite subsets of $M$ with values in $[0,\infty)$, with the properties that $\delta(X)=0$ if and only if $|X|\leq 1$ and $\delta(X\cup Y)\leq\delta(X\cup Z)+\delta(Z\cup Y)$, for all finite sets $X,Y,Z\subset M$ with $|Z|\geq 1$. Their importance relies, amongst others, on the fact that they generalize the notion of metric distance. Our main contribution is the characterization of Banach-embeddable diversities $\delta$ defined over $M$, $|M|=3$, i.e. when there exist points $p_i\in\mathbb R^n$, $i=1,2,3$, and a symmetric, convex, and compact set $C\subset\mathbb R^n$ such that $\delta(\{x_{i_1},\dots,x_{i_m}\})=R(\{p_{i_1},\dots,p_{i_m}\},C)$, where $R(X,C)$ denotes the circumradius of $X$ with respect to $C$. \end{abstract} \section{Introduction} For any set $X$, we say that $\delta:\mathcal P_F(X)\rightarrow[0,\infty)$ is a \emph{diversity} if, for every finite $A,B,C \subset X$, \begin{itemize} \item[(D1)] $\delta(A)=0$ if and only if $|A|\leq 1$, and \item[(D2)] if $B\neq\emptyset$ then $\delta(A\cup C)\leq\delta(A\cup B)+\delta(B\cup C)$, \end{itemize} where $\mathcal P_F(X)$ denotes the \emph{set of finite subsets} of $X$, and $|A|$ denotes the \emph{cardinality} of $A$. The importance of diversities relies on the fact that they are intimately connected to metric spaces. On the one hand, if $(X,\delta)$ is a diversity, then defining $d(a,b):=\delta(\{a,b\})$, for every $a,b\in X$, immediately generates a metric space $(X,d)$. On the other hand, if we are given a metric space $(X,d)$, then we can define different associated diversities by $\delta_1(A):=\max_{a,b\in A} d(a,b)$ or $\delta_2(A):=\sum_{a,b\in A}d(a,b)$. Diversities were first defined in \cite{BrTu12}. Many well-studied functionals defined over subsets of a given set are diversities: radii functionals (diameter, mean width, $\dots$), the length of a shortest Steiner tree connecting a set, the length of the shortest travelling salesman tour through a set, or the $L_1$ diversity in $\mathbb R^n$ (see \cite{BrTu12} and \cite{BHMT}). Diversities and their connection to other notions and theories have been studied in \cite{EsBo}, \cite{BNT}, \cite{WBT}. Our motivation to study diversities partly comes from their close connection to the circumradius functional. Remember that $\mathcal K^n$ (resp. $\mathcal K^n_0$) denotes the set of all $n$-dimensional compact, convex (resp. $0$-symmetric) sets, and that the circumradius $R(X,C)$ of $X\subset\mathbb R^n$ with respect to some $C\in\mathcal K^n$ is the smallest rescaling of $C$ that contains a translate of $X$. In \cite{BHMT} the authors observed that if $\delta(X):=R(X,C)$, for some $X\subset\mathbb R^n$ and $C\in\mathcal K^n$, then $\delta$ is a diversity over $\mathbb R^n$, and they called those diversities \emph{Minkowski diversities}. It is well known that Minkowski diversities are sublinear functionals (see for instance \cite{BHMT}, see also \cite{BoFe}).
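For concreteness, note that when $C$ is a polytope given by a facet description $C=\{y\in\mathbb R^n:\langle a_i,y\rangle\leq 1,\ i=1,\dots,m\}$, the value $R(X,C)$ of a Minkowski diversity on a finite set $X=\{x_1,\dots,x_k\}$ is the optimum of the linear program $\min\{\lambda:\langle a_i,x_j+c\rangle\leq\lambda \text{ for all } i,j\}$ in the variables $(c,\lambda)\in\mathbb R^n\times\mathbb R$. The following snippet (an illustrative sketch only, assuming Python with NumPy and SciPy is available; the particular hexagon and random data are chosen ad hoc and play no role in our proofs) evaluates such a diversity and spot-checks the triangle-type axiom (D2).
\begin{verbatim}
# Illustrative sketch: a Minkowski diversity delta(X) = R(X, C) computed by
# linear programming, for a polytope C = {y : A @ y <= 1}.  Here C is a
# 0-symmetric hexagon, so this is in fact a Banach diversity.
import numpy as np
from scipy.optimize import linprog

def circumradius(X, A):
    """R(X, C) for C = {y : A @ y <= 1}; X is a (k, n) array of points.
    Decision variables: the translation c (n entries) and lambda (last)."""
    k, n = X.shape
    m = A.shape[0]
    # constraints: <a_i, x_j> + <a_i, c> - lambda <= 0 for all i, j
    G = np.zeros((m * k, n + 1))
    h = np.zeros(m * k)
    for j, x in enumerate(X):
        G[j * m:(j + 1) * m, :n] = A
        G[j * m:(j + 1) * m, n] = -1.0
        h[j * m:(j + 1) * m] = -A @ x
    cost = np.zeros(n + 1)
    cost[n] = 1.0                      # minimize lambda
    res = linprog(cost, A_ub=G, b_ub=h,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.fun

# a 0-symmetric hexagon in the plane (three pairs of opposite facets)
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.5, 1.0],
              [-0.5, -1.0], [0.5, -1.0], [-0.5, 1.0]])

def delta(points):
    return circumradius(np.asarray(points, dtype=float), A)

rng = np.random.default_rng(0)
pts = rng.normal(size=(7, 2))
X_set, Y_set, Z_set = pts[:3], pts[3:5], pts[5:]
lhs = delta(np.vstack([X_set, Y_set]))
rhs = delta(np.vstack([X_set, Z_set])) + delta(np.vstack([Z_set, Y_set]))
print(f"delta(X u Y) = {lhs:.4f} <= {rhs:.4f} = delta(X u Z) + delta(Z u Y)")
\end{verbatim}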
The authors in \cite{BHMT} showed a fundamental characterization of Minkowski diversities in terms of some functional properties. They proved that if $\delta$ is a diversity defined over subsets of $\mathbb R^n$, then $\delta$ is a Minkowski diversity if and only if it holds \begin{equation}\label{thm:CharactMinkDivers} \begin{split} & \text{(a)}\,\,\,\delta\text{ is sublinear and} \\ & \text{(b)} \text{ for every }A,B\in\mathcal P_F(\mathbb R^n)\text{ there exist }a,b\in\mathbb R^n\text{ such that }\\ & \hspace{1cm} \delta((a+A)\cup(b+B)) \leq \max\{\delta(A),\delta(B)\}. \end{split} \end{equation} We introduce here a very natural notion. We say that a diversity $\delta$ is a \emph{Banach diversity} if there exists $C\in\mathcal K^n_0$ such that $\delta(X)=R(X,C)$ for every $X\subset\mathbb R^n$. Banach diversities naturally generalize the notion of norm over finite dimensional normed space within $\mathbb R$. An almost direct consequence of the result above in \eqref{thm:CharactMinkDivers} is the following characterization. \begin{theorem}\label{thm:CharactBanachDivers} Let $\delta$ be a diversity over $\mathbb R^n$. Then $\delta$ is a Banach diversity if and only if $\delta$ is a seminorm and for every finite $A,B\subset\mathbb R^n$, there exist $a,b\in\mathbb R^n$ such that \[ \delta((a+A)\cup(b+B)) \leq \max\{\delta(A),\delta(B)\}. \] \end{theorem} For any given $X$ finite, we say that a diversity $\delta:X\rightarrow[0,\infty)$ is \emph{Minkowski-embeddable} (resp. \emph{Banach-embeddable}) if there exist $p_1,\dots,p_{|X|}\in\mathbb R^n$ and $C\in\mathcal K^n$ (resp. $C\in\mathcal K^n_0$), for some $n\in\mathbb N$, such that \[ \delta(\{x_{i_1},\dots,x_{i_m}\}) = R(\{p_{i_1},\dots,p_{i_m}\},C), \] for every $1\leq i_1<\cdots<i_m\leq |X|$ and every $1\leq m\leq |X|$. Looking backwards, the study and classification of metrics over finite sets goes back at least to \cite{BaDr}, see also \cite{KMT}, \cite{StYu}. In \cite{BHMT} the authors proved that every diversity $\delta$ defined over sets $X$ of \emph{three} points is Minkowski embeddable. If we denote by $X=\{x_1,x_2,x_3\}$, $\delta_{i_1\dots i_m}:=\delta(\{x_{i_1},\dots,x_{i_m}\})$ for every $1\leq i_1<\cdots<i_m\leq 3$, then the possible values of $\delta_i$, $\delta_{ij}$, $\delta_{123}$ characterizing $\delta$ to be a diversity rewrites as the following set of inequalities \begin{equation}\label{eq:CharactMinkDiver3points} 0 = \delta_l < \delta_{ij} \leq \delta_{123} \leq \delta_{ij}+\delta_{jk}, \end{equation} for every $l\in\{i,j\}$, $1\leq i<j\leq 3$, and $\{i,j,k\}=\{1,2,3\}$, respectively (see \cite{BHMT} and \cite{BrKo13}). Our next and main result of the paper characterizes when a diversity defined over sets of three points is Banach-embeddable. \begin{theorem}\label{thm:mainresult} Let $\delta:\mathcal P_F(X)\rightarrow[0,\infty)$ be a diversity with $|X|=3$. Let us furthermore assume that $0\leq \delta_{13} \leq \delta_{12}$. Then, $\delta$ is Banach-embeddable for some $C\in\mathcal K^n_0$, $n\geq 2$, if and only if the following inequalities hold true: \begin{equation*} \begin{split} \delta_{12}-\delta_{13} & \leq \delta_{23} \\ \delta_{23} & \leq \delta_{12}+\delta_{13} \\ \delta_{ij} & \leq \delta_{123}\,\,\,\, 1\leq i<j\leq 3 \\ \sqrt{3}(2\delta_{12}\delta_{13}+2\delta_{12}\delta_{23}+2\delta_{13}\delta_{23}-\delta_{12}^2-\delta_{13}^2-\delta_{23}^2) \delta_{123} & \leq 8 \delta_{12}\delta_{13}\delta_{23} \end{split} \end{equation*} \end{theorem} The paper is organized as follows. 
In Section \ref{sec:definitions} we introduced basic notation and notions required during the rest of the paper. In Section \ref{sec:CharcBanachDiversities} we focus on proving the characterization of Banach diversities of Theorem \ref{thm:CharactBanachDivers}. Later in Section \ref{sec:Embedding2d} we show the main ingredients of Theorem \ref{thm:mainresult}, when considering embeddings within $\mathbb R^2$. In Section \ref{sec:embeddingnd}, we prove Theorem \ref{thm:mainresult}, by showing that diversities over three points that can be embedded onto $\mathbb R^n$, can be \emph{also} embedded onto $\mathbb R^2$. Finally in Section \ref{sec:4ormorepoints} we discuss the increasingly difficult conditions for a diversity to be Banach-embeddable, by exploring the particular example of four points. \section{Definitions and basic properties}\label{sec:definitions} Let $C$ be an $n$-dimensional \emph{convex body}, i.e. a convex and compact set in $\mathbb R^n$. We say that $C$ is \emph{symmetric} if $x+C=-C$, for some $x\in\mathbb R^n$, and if $x=0$, we furthermore say that $C$ is \emph{$0$-symmetric}. For every $K,C\in\mathcal K^n$, let $K+L=\{x+y:x\in K,\,y\in C\}$ be the \emph{Minkowski addition} of $K$ and $C$. What is more, for every $\lambda\in\mathbb R$ let $\lambda K=\{\lambda x:x\in K\}$, and $-K=(-1)K$. For every $x,y\in\mathbb R^n$, let $\langle x,y\rangle$ be the \emph{scalar product} of $x$ and $y$, and let $\|x\|:=\sqrt{\langle x,x\rangle}$ be the \emph{Euclidean norm} of $x$. For any given $X\subset\mathbb R^n$, we denote by $\mathrm{conv}(X)$, $\mathrm{lin}(X)$, and $\mathrm{aff}(X)$, the \emph{convex hull}, the \emph{linear hull}, and the \emph{affine hull} of $X$, respectively. Moreover, for every $x,y\in\mathbb R^n$, we denote by $[x,y]:=\mathrm{conv}(\{x,y\})$ the \emph{segment} of endpoints $x$ and $y$. For every $K\in\mathcal K^n$, let $\partial K$ be the \emph{boundary} of $K$. Moreover, for every $p\in\partial K$, let $N(K,p)=\{x\in\mathbb R^n:\langle x,y-p\rangle\leq 0,\,\forall y\in K\}$ be the \emph{outer normal cone} of $K$ at $p$. For further details on basic notions of convex bodies, we recommend \cite{Schn14}. Given $C\in\mathcal K^n$ and $X\subset\mathbb R^n$, let $R(X,C)$ be the \emph{circumradius} of $X$ with respect to $C$, i.e., the smallest $\lambda\geq 0$ such that $x+X \subset \lambda C$, for some $x\in\mathbb R^n$. The circumradius $R(\cdot,\cdot)$ is a monotonically increasing function on its first entry, whereas it is a monotonically decreasing function on its second entry, i.e. for every $X,Y\subset\mathbb R^n$ and $C_1,C_2\in\mathcal K^n$ with $X\subset Y$ and $C_2\subset C_1$, then $R(X,C_1)\leq R(Y,C_1) \leq R(Y,C_2)$. Moreover, it is homogeneous of degree $1$ (resp. $-1$) with respect to its first entry (resp. second entry), i.e. for every $X\subset\mathbb R^n$, $C\in\mathcal K^n$, $\lambda\geq0$, then $R(\lambda X,C)=R(X,\lambda^{-1}C)=\lambda R(X,C)$. The circumradius $R(\cdot,\cdot):\mathcal P_F(\mathbb R^n)\times \mathcal K^n\rightarrow [0,\infty)$ is a continuous functional with respect to the Hausdorff distance (see \cite{BoFe}, \cite{BrKo15} and the references therein), where $\mathcal P_F(X)$ denotes the set of finite subsets of $X$. If $X\subset C$ with $R(X,C)=1$, we then write that $X\subset^{opt}C$. 
It was proven in \cite{BrKo13} that in such case $K\subset^{opt}C$ if and only if \begin{equation}\label{eq:OptConta} \begin{split} \text{there exist } p_i\in X\cap\partial C,\, & u_i\in N(C,p_i),\,i=1,\dots,m,\,2\leq m\leq n+1, \text{ such that} \\ & 0\in\mathrm{conv}(\{u_1,\dots,u_m\}). \end{split} \end{equation} Let us define the \emph{$n$-dimensional volume} (or \emph{Lebesgue measure}) of $K\in\mathcal K^n$ by $\mathrm{vol}(K)$. Notice that if $K\subset C$, $K,C\in\mathcal K^n$, and $\mathrm{vol}(K)=\mathrm{vol}(C)$, then $K=C$ (see \cite{Schn14}). Diversities are motononically increasing with respect to set inclusion, i.e. if $\delta$ is a diversity, $A,B\subset X$, $A,B$ finite, with $A\subset B$, then $\delta(A)\leq\delta(B)$ (see \cite{BHMT}). A function $f:\mathcal P(\mathbb R^n)\rightarrow[0,\infty)$, where $\mathcal P(X)$ denotes the set of \emph{bounded subsets} of $X$, is \emph{sublinear} if for every $A,B\subset\mathbb R^n$ and $\lambda\geq 0$ then \begin{itemize} \item[(L1)] $f(A+B)\leq f(A)+f(B)$ and \item[(L2)] $f(\lambda A)=\lambda f(A)$. \end{itemize} Moreover, we say that $f$ is a \emph{seminorm} if $f$ is sublinear and for every $A\subset\mathbb R^n$ and $\lambda\leq 0$ fulfills \begin{itemize} \item[(L2')] $f(\lambda A)=-\lambda f(A)$. \end{itemize} \section{Characterization of Banach diversities}\label{sec:CharcBanachDiversities} Minkowski (and therefore Banach) diversities can be naturally extended from finite sets to convex sets. Since the circumradius (i.e. Minkowski diversities) is continuous with respect to the Hausdorff metric, we can naturally define \begin{equation}\label{eq:deltatilde} \tilde{\delta}(K):=\lim_{m\rightarrow\infty}\delta(X_m),\quad K\in\mathcal K^n, \end{equation} for some sequence $X_m\in\mathcal P_F(\mathbb R^n)$, such that $\mathrm{conv}(X_m)\rightarrow K$ in the Hausdorff metric when $m\rightarrow\infty$. Notice that the above result makes sense due to Proposition 6 (c) of \cite{BHMT}, where it is proven that $\delta(X)$ solely depends on the convex hull of $X$ for Minkowski diversities. \begin{proof}[Proof of Theorem \ref{thm:CharactBanachDivers}] We start with the \emph{only if} part. Since $\delta$ is a Banach diversity, then there exists $C\in\mathcal K^n_0$ such that $\delta(X)=R(X,C)$ for every finite set $X\subset\mathbb R^n$. By the characterization of Minkowski diversities in \eqref{thm:CharactMinkDivers}, since $\delta$ is a Minkowski diversity, then $\delta$ is sublinear and fulfills (b) in \eqref{thm:CharactMinkDivers}. Thus, it remains to show that $\delta(\lambda X)=-\lambda \delta(X)$ for every $\lambda<0$ and every finite $X\subset\mathbb R^n$. To do so, notice that $\delta(\lambda X)=R(\lambda X,C)$ holds if and only if $x+\lambda X\subset R(\lambda X,C)C$, for some $x\in\mathbb R^n$, which is equivalent to $-x-\lambda X\subset R(\lambda X,C)(-C)$, which by the $0$-symmetry of $C$ is equivalent to $-x-\lambda X\subset R(\lambda X,C)C$, and thus $R(-\lambda X,C) \leq R(\lambda X,C)$. The same ideas imply that the equality holds, i.e. $R(-\lambda X,C) = R(\lambda X,C)$, and thus, since $R$ is homogeneous of degree $1$ on its first entry (i.e. $\delta$ is sublinear) then $R(-\lambda X,C) = -\lambda R(X,C)$, as desired. We now show the \emph{if} part. 
Since $\delta$ is already a sublinear diversity fulfilling (b) in \eqref{thm:CharactMinkDivers}, then by the characterization of Minkowski diversities in \eqref{thm:CharactMinkDivers} there exists $C\in\mathcal K^n$ such that $\delta(X)=R(X,C)$ for every finite $X\subset\mathbb R^n$. It remains to show that $C$ is symmetric. To do so, remember that we can extend $\delta$ continuously onto $\tilde{\delta}$ defined over every convex and compact set, see \eqref{eq:deltatilde}. In particular, using the seminormal property we obtain \[ R(-C,C)=\tilde{\delta}(-C)=\tilde{\delta}(C)=R(C,C)=1, \] i.e., $x-C\subset C$, for some $x\in\mathbb R^n$. If this happens, since $\mathrm{vol}(x-C)=\mathrm{vol}(C)$ we necessarily have that $x-C=C$, i.e. $C$ is symmetric, which concludes the proof. \end{proof} \section{Embedding diversities over $X=\{x_1,x_2,x_3\}$ onto $\mathbb R^2$}\label{sec:Embedding2d} We start this section with the following basic statement of diameters over centrally symmetric convex bodies (see \cite{GrKl92} for a detailed discussion about this and similar properties). \begin{proposition}\label{prop:SegmentinC} Let $C\in\mathcal K^n_0$, $p_1,p_2\in\mathbb R^n$. Then \[ \frac{1}{2R(\{p_1,p_2\},C)}[p_1-p_2,p_2-p_1] \subset^{opt} C \] \end{proposition} \begin{proof} By definition of $R(\{p_1,p_2\},C)$, we have that \[ x+[p_1,p_2] \subset^{opt} R(\{p_1,p_2\},C)C, \] for some $x\in\mathbb R^n$. By the central symmetry of $C$, we would also have $-x-[p_1,p_2]\subset R(\{p_1,p_2\},C)C$, and using the convexity of $C$ we would conclude that \[ \begin{split} \frac12\left[p_1-p_2,p_2-p_1\right]=\left[\frac12(x+p_1)+\frac12(-x-p_2),\frac12(x+p_2)+\frac12(-x-p_1)\right] & \\ \subset R(\{p_1,p_2\},C)C. & \end{split} \] \end{proof} Next result characterizes the range of possible values of a Banach diversity evaluated over any two points (out of three). Even though the inequalities characterizing it are \emph{essentially} the same than for Minkowski diversities (see \eqref{eq:CharactMinkDiver3points}), in the case of Banach diversities we also learn below about certain configuration of boundary points of the set $C$, which will be crucial afterwards. \begin{theorem}[Characterization of $R_{ij}$]\label{thm:existenceij} Let $S=\mathrm{conv}(\{p_1,p_2,p_3\})$, where \begin{equation}\label{eq:equiltriangle} p_1=\left(-\frac{\sqrt{3}}{2},-\frac12\right),\quad p_2=\left(\frac{\sqrt{3}}{2},-\frac12\right),\quad\text{and}\quad p_3=(0,1). \end{equation} Let $C\in\mathcal K^2_0$ and $R_{ij}:=R(\{p_i,p_j\},C)$, $1\leq i<j\leq 3$. After reordering $p_1,p_2,p_3$, let us assume $0 < R_{13} \leq R_{12}$. Then \begin{equation}\label{eq:12_13_23} R_{12}-R_{13} \leq R_{23} \leq R_{12} + R_{13}. \end{equation} Conversely, if three scalars $R_{ij}$, $1\leq i<j\leq 3$, fulfill $0 < R_{13} \leq R_{12}$ and \eqref{eq:12_13_23}, then there exists $C\in\mathcal K^2_0$ such that \[ R(\{p_i,p_j\},C)=R_{ij}, \] for every $1\leq i<j\leq 3$. \end{theorem} \begin{proof} Let us start observing that \begin{equation}\label{eq:PointsBoundary} \pm \frac{\sqrt{3}}{2R_{12}}(1,0),\pm\frac{\sqrt{3}}{4R_{13}}(1,\sqrt{3}),\pm\frac{\sqrt{3}}{4R_{23}}(1,-\sqrt{3})\in\partial C \end{equation} (see Proposition \ref{prop:SegmentinC}). 
Note that if we select $\mu>0$ such that \begin{equation}\label{eq:muInSegment} \mu(1,-\sqrt{3}) \in \left[\frac{\sqrt{3}}{2R_{12}}(1,0),-\frac{\sqrt{3}}{4R_{13}}(1,\sqrt{3})\right], \end{equation} since $\frac{\sqrt{3}}{4R_{23}}(1,-\sqrt{3}) \in \partial C$ (see \eqref{eq:PointsBoundary}), it necessarily holds \begin{equation}\label{eq:muIneq} 2\mu = \left\|\mu(1,-\sqrt{3})\right\| \leq \left\|\frac{\sqrt{3}}{4R_{23}}(1,-\sqrt{3})\right\| = \frac{\sqrt{3}}{2R_{23}}. \end{equation} The line passing through $\frac{\sqrt{3}}{2R_{12}}(1,0)$ and $-\frac{\sqrt{3}}{4R_{13}}(1,\sqrt{3})$ has equations (via its outer normal vector) \[ \left\{(x,y)\in\mathbb R^2:\left<(x,y),\left(\frac{\sqrt{3}}{4R_{13}},-\frac{\sqrt{3}}{4R_{13}}-\frac{\sqrt{3}}{2R_{12}}\right)\right> = \frac{3\sqrt{3}}{8R_{12}R_{13}}\right\}. \] Thus condition \eqref{eq:muInSegment} becomes \[ \mu\left(\frac{3}{4R_{13}}+\frac{3}{4R_{13}}+\frac{3}{2R_{12}}\right) = \frac{3\sqrt{3}}{8R_{12}R_{13}}, \] i.e. $\mu=\frac{\sqrt{3}}{4(R_{12}+R_{13})}$, and thus \eqref{eq:muIneq} implies the right inequality in \eqref{eq:12_13_23}. Second, note that since $\frac{\sqrt{3}}{2R_{12}}(1,0) \in \partial C$, there exists a line $r$ supporting $C$ at $\frac{\sqrt{3}}{2R_{12}}(1,0)$. Moreover, since $0<R_{13}\leq R_{12}$, then $r$ intersects the ray $\lambda(1,-\sqrt{3})$, $\lambda>0$ (except in the limit case $R_{13} = R_{12}$ which we can solve doing the same computations). It is then clear that the largest $\lambda>0$ such that $\lambda(1,-\sqrt{3})$ belongs to such supporting line occurs when $r$ is the line containing both vertices $\frac{\sqrt{3}}{2R_{12}}(1,0)$ and $\frac{\sqrt{3}}{4R_{13}}(1,\sqrt{3})$. The equation of $r$ in the latter case is thus given by \[ \left\{(x,y)\in\mathbb R^2:\left<(x,y),\left(\frac{3}{4R_{13}},\frac{\sqrt{3}}{2R_{12}}-\frac{\sqrt{3}}{4R_{13}}\right)\right>=\frac{3\sqrt{3}}{8R_{12}R_{13}}\right\}. \] Therefore $\lambda(1,-\sqrt{3})\in r$ translates onto \[ \lambda\left(\frac{3}{4R_{13}}-\frac{3}{2R_{12}}+\frac{3}{4R_{13}}\right) = \frac{3\sqrt{3}}{8R_{12}R_{13}}, \] i.e. $\lambda=\frac{\sqrt{3}}{4(R_{12}-R_{13})}$. By the convexity of $C$, we must have that \[ \frac{\sqrt{3}}{4R_{23}}=\left\|\frac{\sqrt{3}}{4R_{23}}(1,-\sqrt{3})\right\| \leq \left\|\lambda(1,-\sqrt{3})\right\| = \frac{\sqrt{3}}{4(R_{12}-R_{13})} \] (otherwise $\frac{\sqrt{3}}{2R_{12}}(1,0) \notin \partial C$) from which we get the left inequality in \eqref{eq:12_13_23}. The above arguments show the entire statements in the theorem above: on the one hand, those inequalities have to hold; on the other hand, if those inequalities hold true, then we can define \[ C:=\mathrm{conv}\left(\pm \frac{\sqrt{3}}{2R_{12}}(1,0),\pm \frac{\sqrt{3}}{4R_{13}}(1,\sqrt{3}),\pm \frac{\sqrt{3}}{4R_{23}}(1,-\sqrt{3})\right), \] and the arguments above ensure the validity of the conditions in \eqref{eq:PointsBoundary}, as desired. \end{proof} \begin{remark} Notice that the argument in Theorem \ref{thm:existenceij} can be extended to $C\in\mathcal K^n_0$. In particular, on the one hand, if $\delta$ is Banach-embeddable such that $\delta_{ij}=R(\{p_i,p_j\},C)$, $\delta_{123}=R(S,C)$, and $H=\mathrm{lin}(S-p_1)$, we clearly have (due to Proposition \ref{prop:SegmentinC}) that $\delta_{ij}=R(\{p_i,p_j\},C_0)$, $1\leq i<j\leq 3$, where $C_0:=C\cap H$, which is a $2$-dimensional $0$-symmetric convex and compact set. Thus by Theorem \ref{thm:existenceij} we would obtain that the inequalities hold true. 
On the other hand, if the inequalities hold true, again by Theorem \ref{thm:existenceij} there exists $C\in\mathcal K^2_0$ such that $\delta_{ij}=R(\{p_i,p_j\},C)$, $1\leq i<j\leq 3$, as desired. \end{remark} \begin{theorem}[Characterization of $R_{123}$]\label{thm:existence123} Let $S=\mathrm{conv}(\{p_1,p_2,p_3\})$, where \[ p_1=\left(-\frac{\sqrt{3}}{2},-\frac12\right),\quad p_2=\left(\frac{\sqrt{3}}{2},-\frac12\right),\quad\text{and}\quad p_3=(0,1). \] Let $C\in\mathcal K^2_0$, $R_{ij}:=R(\{p_i,p_j\},C)$, $1\leq i<j\leq 3$, and $R_{123}:=R(S,C)$. After reordering $p_1,p_2,p_3$, let us assume $0 < R_{13} \leq R_{12}$. Then \begin{equation}\label{eq:delta123} \max\{R_{ij}\} \leq R_{123} \leq \frac{8 R_{12}R_{13}R_{23}}{\sqrt{3}(2R_{12}R_{13}+2R_{12}R_{23}+2R_{13}R_{23}-R_{12}^2-R_{13}^2-R_{23}^2)}. \end{equation} Conversely, if four scalars $R_{ij}$, $1\leq i<j\leq 3$, $R_{123}$ fulfill $0<R_{13}\leq R_{12}$, \eqref{eq:12_13_23} and \eqref{eq:delta123}, then there exists $C\in\mathcal K^2_0$ such that \[ R(\{p_i,p_j\},C)=R_{ij},\quad 1\leq i<j\leq 3,\quad \text{and} \quad R(S,C)=R_{123}. \] \end{theorem} \begin{proof} The fact that $R_{ij} \leq R_{123}$, $1\leq j<j\leq 3$, is a consequence of the monotonicity of $R(\cdot,C)$, and thus the left inequality in \eqref{eq:delta123} holds. In order to show the right inequality in \eqref{eq:delta123}, we start noting that there exists $x\in\mathbb R^2$ such that $x+S \subset^{opt} R_{123}C$. Let us denote by $a:=\frac{\sqrt{3}}{2R_{12}}$, $b:=\frac{\sqrt{3}}{2R_{13}}$ and $c:=\frac{\sqrt{3}}{2R_{23}}$. Since $C$ is convex, using \eqref{eq:PointsBoundary} we get that \[ C_0:=\mathrm{conv}\left(\pm a(1,0),\pm \frac{b}{2}(1,\sqrt{3}),\pm \frac{c}{2}(-1,\sqrt{3})\right) \subset C, \] and, due to the decreasing monotonicity in the second entry of $R(S,\cdot)$, we get $R_{123}=R(S,C) \leq R(S,C_0)$. Without loss of generality, we now replace $C$ by $C_0$. If we let $\lambda:=1/R_{123}$, the inclusion above $x+S \subset^{opt} R_{123}C$ boils down to the fact that the vertices of $\lambda x+\lambda S$ belong to the boundary of $C$. Introducing $x_0,y_0\in\mathbb R$ such that $(x_0,y_0)=\lambda x+\lambda p_3$, then \[ \lambda x+\lambda p_1=(x_0,y_0)+\lambda \left(-1,-\sqrt{3}\right)\quad\text{and}\quad \lambda x+\lambda p_2=(x_0,y_0)+\lambda \left(1,-\sqrt{3}\right), \] and thus $x+S \subset^{opt} R_{123}C$ reduces to \[ \begin{split} (x_0,y_0) & \in \left[b\left(\frac{1}{2},\frac{\sqrt{3}}{2}\right) , c\left(\frac{-1}{2},\frac{-\sqrt{3}}{2}\right)\right], \\ (x_0,y_0) + \lambda \left(1,-\sqrt{3}\right) & \in \left[a(1,0) , c\left(\frac{1}{2},\frac{-\sqrt{3}}{2}\right)\right], \\ (x_0,y_0) + \lambda \left(-1,-\sqrt{3}\right) & \in \left[a(-1,0), b\left(-\frac{1}{2},-\frac{\sqrt{3}}{2}\right)\right], \end{split} \] for some $x_0,y_0\in \mathbb R$ and $\lambda>0$. If we solve the corresponding linear system above, i.e. 
\[ \begin{split} (x_0,y_0) & = (1-t_1) b\left(\frac{1}{2},\frac{\sqrt{3}}{2}\right) + t_1 c\left(\frac{-1}{2},\frac{-\sqrt{3}}{2}\right), \\ (x_0,y_0) + \lambda \left(1,-\sqrt{3}\right) & = (1-t_2) a(1,0) + t_2 c\left(\frac{1}{2},\frac{-\sqrt{3}}{2}\right), \\ (x_0,y_0) + \lambda \left(-1,-\sqrt{3}\right) & = (1-t_3) a(-1,0) + t_3 b\left(-\frac{1}{2},-\frac{\sqrt{3}}{2}\right), \end{split} \] for some $t_i\in[0,1]$, $i=1,2,3$, tells us \[ \begin{split} \lambda & = \frac{2abc(a+b)-a^2b^2-c^2(a-b)^2}{4abc}, \\ x_0 & = \frac{(a-b)c^2-b^2(c-a)}{4bc}, \\ y_0 & = \frac{(2\sqrt{3}ab+\sqrt{3}b^2)c-\sqrt{3}ab^2-(\sqrt{3}a-\sqrt{3}b)c^2}{4bc}, \\ t_1 & = \frac{ab-(a-b)c}{2bc}, \\ t_2 & = \frac{ab+(a-b)c}{2ac}, \\ t_3 & = \frac{ab+(a-b)c}{2ab}. \end{split} \] In particular \[ R_{123} \leq R(S,C) = \frac{1}{\lambda}= \frac{4abc}{2abc(a+b)-a^2b^2-c^2(a-b)^2} \] gives already the right inequality in \eqref{eq:delta123}. However, in order to conclude the proof of the inequality, we need to do the minor checkings that \[ t_i \in [0,1],\quad i=1,2,3,\quad\text{and}\quad \lambda \geq 0, \] which we show in Proposition \ref{prop:checkings}. We now show the conversely. First, remember that the necessary and sufficient conditions such that $R_{ij}=R(\{p_i,p_j\},C)$, $1\leq i<j\leq 3$ is that \[ \pm\frac{\sqrt{3}}{2R_{12}}(1,0),\, \pm\frac{\sqrt{3}}{4R_{13}}(1,\sqrt{3}),\, \pm\frac{\sqrt{3}}{4R_{23}}(-1,\sqrt{3})\, \in\, \partial C \] (see \eqref{eq:PointsBoundary}). This holds if and only if we consider three pairs of parallel lines $r_{i,\pm}$, $i=1,2,3$, supporting $C_0:=\mathrm{conv}\left(\pm a(1,0),\pm \frac{b}{2}\left(1,\sqrt{3}\right),\pm \frac{c}{2}\left(-1,\sqrt{3}\right)\right)$ at each of its six vertices. In that case, let $C$ be the intersection containing the origin of the halfplanes determined by those six lines. Notice that, if we choose $r_{i,\pm}$ to be such that each coincides with one of the two edges it touches (say, for instance, in clockwise order), then $C=C_0$. In that case, we would have that \[ R(S,C)=\frac{8 R_{12}R_{13}R_{23}}{\sqrt{3}(2R_{12}R_{13}+2R_{12}R_{23}+2R_{13}R_{23}-R_{12}^2-R_{13}^2-R_{23}^2)} \] (i.e. it coincides with the right side in \eqref{eq:delta123}). Second, notice that if in the previous selection of lines, we replace a pair of parallel lines such that now they cover the other pair of adjacent edges of $C_0$, then we would have that $C$ becomes a parallelogram, containing two of the parallel edges of $C_0$. In that case, when $x+S\subset R(S,C)C$ for some $x\in\mathbb R^2$, it is clear that we find two vertices of $x+S$ touching two opposing parallel edges of $C$, say without loss of generality, that those edges are the ones containing the edges given by $\frac{\sqrt{3}}{2R_{12}}(1,0)$ and $\frac{\sqrt{3}}{4R_{13}}(1,\sqrt{3})$ (and $-\frac{\sqrt{3}}{2R_{12}}(1,0)$ and $-\frac{\sqrt{3}}{4R_{13}}(1,\sqrt{3})$). In that case, it is immediate that $R(S,C)$ coincides with both values $R_{12}$ and $R_{13}$, i.e. $\max\{R_{ij}\}=R(S,C)$. Finally, changing continuously from one $C$ to the other (simply moving continuously the pair of parallel edges transforming $C_0$ onto the parallelogram) and using the fact that $R(S,\cdot)$ is a continuous functional with respect to the Hausdorff metric, we would attain (by Bolzano Theorem) each possible value ranging between both extreme values in \eqref{eq:delta123}, thus concluding the proof of the theorem. 
\end{proof} \begin{proposition}\label{prop:checkings} Let $a,b,c \in\mathbb R$ be such that $0<a\leq b$, $\frac1a-\frac1b \leq \frac1c \leq \frac1a+\frac1b$. Then \begin{equation}\label{eq:4properties} \begin{split} t_1 & := \frac{ab-(a-b)c}{2bc} \in [0,1], \\ t_2 & := \frac{ab+(a-b)c}{2ac} \in [0,1], \\ t_3 & := \frac{ab+(a-b)c}{2ab} \in [0,1], \\ \lambda & :=\frac{2abc(a+b)-a^2b^2-c^2(a-b)^2}{4abc} \geq 0. \end{split} \end{equation} \end{proposition} \begin{proof} Notice that $ab-(a-b)c=ab+(b-a)c \geq 0$. Second, $\frac{ab+(b-a)c}{2bc} \leq 1$ is equivalent to $\frac1c \leq \frac1a+\frac1b$, which is true, and thus, the first statement in \eqref{eq:4properties} holds true. Notice that $ab+(a-b)c \geq 0$ is equivalent to $\frac1c \geq \frac1a-\frac1b$, which is true. Second, $\frac{ab+(a-b)c}{2ac} \leq 1$ is equivalent to $\frac1c\leq \frac1a+\frac1b$, which is also true, hence the second statement in \eqref{eq:4properties} holds true. Notice also that $ab+(a-b)c\geq 0$ is equivalent to $\frac1c\geq \frac1a-\frac1b$, which is true. Second, $\frac{ab+(a-b)c}{2ab} \leq 1$ is equivalent to $c(a-b)-ab \leq 0$, which holds true since $c,b-a,ab\geq 0$, and therefore the third statement in \eqref{eq:4properties} holds true. For the last statement, we recover its original values, i.e. $R_{12}=\frac{\sqrt{3}}{2a}$, $R_{13}=\frac{\sqrt{3}}{2b}$, $R_{23}=\frac{\sqrt{3}}{2c}$, with the inequalities $0<R_{13}\leq R_{12}$, $R_{12}-R_{13}\leq R_{23} \leq R_{12}+R_{13}$. We now observe that \[ \lambda = \frac{-\sqrt{3}\left(R_{12}^2+R_{13}^2+R_{23}^2-2R_{12}R_{13}-2R_{12}R_{23}-2R_{13}R_{23}\right)}{8R_{12}R_{13}R_{23}}, \] and thus the last statement is true is and only if \[ f(x,y,z):=2xy+2xz+2yz-x^2-y^2-z^2 \geq 0, \] subject to $0<y\leq x$ and $x-y \leq z \leq x+y$, where $x:=R_{12}$, $y:=R_{13}$ and $z:=R_{23}$. Notice that it is sufficient to show that the minimum of $f$ in its domain is $0$ (as long as this minimum \emph{exists}). Notice also that $f$ is a quadric over an unbounded domain. We use \emph{Schm\"udgen's Positivstellensatz} on the second hierarchy level (see \cite{Schm}, \cite{LaPu}) within the following terms \[ 2xy+2xz+2yz-x^2-y^2-z^2 = \sum _{1\leq i \leq j\leq 4} \lambda_{ij}g_{i}g_j, \] where $g_1=y$, $g_2=x-y$, $g_3=z-x+y$, $g_4=x+y-z$, $g_k\geq 0$, $k=1,\dots,4$, and $\lambda_{ij}\geq 0$, $1\leq i<j\leq 4$. Even though solutions in this case are a-priori not \emph{granted} (see \cite{Ste}, \cite{HLM22}), we obtain an undetermined compatible system: letting \[ \begin{split} f = \gamma_1g_1^2+\gamma_2g_2^2+\gamma_3g_2g_3+\gamma_4g_2g_4+\gamma_5g_3^2+\gamma_6g_4^2 +\gamma_7g_3g_4+\gamma_8g_1g_2+\gamma_9g_1g_3+\gamma_{10}g_1g_4, \end{split} \] then the system reduces to \[ \left(\begin{array}{cccccccccc|c} 1 & 0 & 0 & 0 & 0 & 0 & 4 & 0 & 1 & 1 & 4 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & \frac12 & 0 & 0 & 2 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & \frac{-1}2 & 0 & 0 & -2 \\ 0 & 0 & 0 & 0 & 1 & 0 & \frac{-1}2 & 0 & \frac14 & \frac{-1}4 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & \frac{-1}2 & 0 & \frac{-1}4 & \frac14 & -1 \\ \end{array}\right) \] We find by \emph{direct search} the non-negative solutions to this system $\gamma_1=\cdots=\gamma_6=\gamma_{10}=0$, $\gamma_7=1$, $\gamma_8=4$, $\gamma_9=2$, i.e. \[ 2xy+2xz+2yz-x^2-y^2-z^2 = (z-x+y)(x+y-z)+4y(x-y)+2y(z-x+y), \] which ensures the last statement in \eqref{eq:4properties}. 
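As an independent sanity check of the displayed decomposition, one may expand it symbolically; the following minimal sketch (assuming SymPy is available) confirms the identity:
\begin{verbatim}
# Symbolic check of the decomposition
#   2xy + 2xz + 2yz - x^2 - y^2 - z^2
#     = (z - x + y)(x + y - z) + 4y(x - y) + 2y(z - x + y),
# i.e. a combination of the nonnegative generators with nonnegative weights.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f = 2*x*y + 2*x*z + 2*y*z - x**2 - y**2 - z**2
decomposition = (z - x + y)*(x + y - z) + 4*y*(x - y) + 2*y*(z - x + y)
assert sp.expand(f - decomposition) == 0
print("identity verified")
\end{verbatim}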
\end{proof} \section{Embedding diversities over $X=\{x_1,x_2,x_3\}$ onto $\mathbb R^n$, $n\geq 3$}\label{sec:embeddingnd} The aim of this section is to show that embedding diversities over three points requires us to look at $C$ of dimension $2$, since bigger dimensions \emph{do not} enlarge the set of possible Banach-embeddings. \begin{theorem}\label{thm:higherDim} Let $S=\mathrm{conv}(\{p_1,p_2,p_3\})\subset\mathbb R^n$ be a triangle, and let $C\in\mathcal K^n_0$. Moreover, let $R_{ij}:=R(\{p_i,p_j\},C)$, $1\leq i<j\leq 3$, and $R_{123}:=R(S,C)$. If $0<R_{13} \leq R_{12}$, then it holds \eqref{eq:12_13_23} as well as \eqref{eq:delta123}. \end{theorem} \begin{proof} After a suitable translation of $S$, let us suppose that $S\subset^{opt}R_{123}C$. If we denote by $H=\mathrm{aff}(S)$, which is a $2$-dimensional affine subspace, by definition $S\subset^{opt}(R_{123}C)\cap H$. Let $L:=\mathrm{lin}(H)$. If $0\in H$, since $L=H$, then $S\subset^{opt}R_{123}C_0$, where $C_0:=C\cap H$ is a $2$-dimensional convex body in $\mathbb R^n$. Thus, we can apply Theorems \ref{thm:existenceij} and \ref{thm:existence123} and obtain the desired inequalities. If $0\notin H$, then $L$ is a $3$-dimensional linear subspace. Notice that in this case, $S\subset R_{123}C\cap H \subset R_{123}C$, and hence, $S\subset^{opt}R_{123}C\cap H$. Let $C_1:=C\cap H$, which is a $3$-dimensional $0$-symmetric convex and compact set. Notice that the proof in Theorem \ref{thm:existenceij} and the left hand side inequality in \eqref{eq:delta123} do not depend on the dimension of $C$. Thus, all those inequalities still hold true. It remains to show that it is still true the right hand side inequality in \eqref{eq:delta123}. Theorem \ref{thm:existenceij} ensures that $\pm \frac{1}{2R_{ij}}\frac{p_i-p_j}{\|p_i-p_j\|} \in \partial(C)$, see \eqref{eq:PointsBoundary}. Notice now that since $S\subset^{opt} R_{123}C$, using \eqref{eq:OptConta}, there exist $u_i\in N(R_{123}C,p_i)$, $i=1,2,3$, such that $0\in\mathrm{conv}(\{u_1,u_2,u_3\})$. In particular, $R_{123}C$ is contained in the intersection of the three halfspaces determined by those three halfplanes $\{x:\langle x,u_i\rangle = \langle p_i,u_i\rangle\}$. This last intersection is an infinite triangular prism. Moreover, notice that every section by a plane parallel to $L$ provides the same section up to translations. In particular, the section with $L-p_1$ (i.e. the plane parallel to $H$ containing the origin $0$) has the same section too. Thus, $S\subset^{opt} R_{123}(C\cap H)$ rewrites as $\frac{1}{R_{123}}S \subset^{opt} C\cap H$. From the observation before, we thus know that if $x+\mu S \subset C\cap(H-c)$ then $\mu\leq \frac{1}{R_{123}}$. Let $\lambda$ be the right hand side in \eqref{eq:delta123}. Assuming $R_{123}>\lambda$ leads to a contradiction, simply because we would have that if $x+\mu S \subset C\cap(H-c)$ then $\mu\leq \frac{1}{R_{123}}<1/\lambda$, which is false (see the proof of Theorem \ref{thm:existence123}, where we show that the smallest rescaling of $S$ such that $x+\mu S\subset^{opt} C$, for some $x$, is at least $1/\lambda$). Therefore, $R_{123}$ fulfills the right hand side inequality in \eqref{eq:delta123}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:mainresult}] We start proving the \emph{only if} part. 
Since $\delta$ is Banach-embeddable over $X=\{x_1,x_2,x_3\}$, then by definition there exists $C_0\in\mathcal K^n_0$ and points $q_1,q_2,q_3\in\mathbb R^n$ such that \[ \delta_{ij}=R(\{q_i,q_j\},C_0),\quad\text{and}\quad \delta_{123}=R(\{q_1,q_2,q_3\},C_0). \] By Theorem \ref{thm:higherDim}, we directly get that the inequalities hold true. We now show the \emph{if} part. Since the inequalities above hold, they by the \emph{if} part of Theorem \ref{thm:existence123} we directly ensure the existence of $C\in\mathcal K^2_0$ such that \[ \delta_{ij}=R(\{p_i,p_j\},C)\quad\text{and}\quad \delta_{123}=R(\{p_1,p_2,p_3\},C), \] where $p_1,p_2,p_3$ are the points of the equilateral triangle described in \eqref{eq:equiltriangle}. Hence, mapping each $x_i$ onto $p_i$, $i=1,2,3$ gives us the desired Banach-embedding (measured with respect to $C$). \end{proof} \begin{remark} Notice that the first three inequalities in Theorem \ref{thm:mainresult} are exactly the same conditions as in \eqref{eq:CharactMinkDiver3points}. However, the fourth condition is more restrictive than that above. For instance, consider $\delta$ such that $\delta_i=0$, $i=1,2,3$, $\delta_{12}=\delta_{13}=2$, $\delta_{23}=1$. While \eqref{eq:CharactMinkDiver3points} becomes \[ 2= \max\{2,2,1\} \leq \delta_{123} \leq \min \{3,3,4\} = 3, \] the third and fourth conditions in Theorem \ref{thm:mainresult} become \[ 2= \max\{2,2,1\} \leq \delta_{123} \leq \frac{4}{\sqrt{3}}, \] thus showing that Banach embeddable is strictly more restrictive than Minkowski embeddable. \end{remark} \section{Banach embeddings for $4$ or more points}\label{sec:4ormorepoints} In this section we only do some comments on Banach embeddings of four points. The formulas (and therefore the difficulty) explodes in the number of points considered in the embedding. Let $\{p_i\in\mathbb R^3:i=1,\dots,4\}$, $C\in\mathcal K^3_0$, and let $R_{i_1\cdots i_m}:=R(\{p_{i_1},\dots,p_{i_m}\},C)$, for every $1\leq i_1<\cdots<i_m\leq 4$, $1\leq m\leq 4$. It is then clear that we have that \begin{equation}\label{eq:ij4points} 0=R_l<R_{ij} \leq R_{ik}+R_{kj}, \end{equation} for every $l\in\{i,j\}$, $1\leq i<j\leq 4$, $k\in\{1,\dots,4\}\setminus\{i,j\}$ (see \cite[Theorem 4.1]{BrKo13}, see also \cite{BHMT}). Moreover, analogous ideas to the ones exhibited in Theorem \ref{thm:existenceij} would show that those inequalities are the best we can say regarding $R_{ij}$. Involving three points, we would clearly have that \begin{equation}\label{eq:ijk4points} R_{ij} \leq R_{abc} \leq \frac{8 R_{ab}R_{ac}R_{bc}}{\sqrt{3}(2R_{ab}R_{ac}+2R_{ab}R_{bc}+2R_{ac}R_{bc}-R_{ab}^2-R_{ac}^2-R_{bc}^2)}, \end{equation} for every $i<j$, $\{i,j\}\subset\{a,b,c\}$, $1\leq a<b<c\leq 4$ (see Theorem \ref{thm:existence123}). Computing if numbers $\delta_{i}$, $\delta_{xy}$, $\delta_{abc}$ fulfilling the equations above \emph{induce} the existence of a $C\in\mathcal K^3_0$ seems to be already a hard task. Furthermore, we would still need to derive inequalities for $R_{1234}$. We leave them here for the interested reader. The computations follow the same pattern that Theorem \ref{thm:existence123}. Let \[ p_1=(1,0,0),\quad p_2=(0,0,0),\quad p_3=(0,1,0),\quad\text{and}\quad p_4=(0,0,1), \] and $S:=\mathrm{conv}(\{p_i:i=1,\dots,4\})$. We immediately know that $\pm P_{ij}:=\pm \frac{1}{2R_{ij}}(p_i-p_j) \in \partial C$, for every $1\leq i<j\leq 4$ (see Theorem \ref{thm:existenceij}). 
Assuming that $x+S \subset R_{1234}C$, for some $x\in\mathbb R^3$, then $y+\frac{1}{R_{1234}}S\subset C$, for $y=\frac{x}{R_{1234}}$, and denoting by $(x_0,y_0,z_0):=y+(0,0,\frac{1}{R_{1234}})$, we implement the conditions \[ \begin{split} & (x_0,y_0,z_0) \in \mathrm{conv}(\{P_{14},P_{24},P_{34}\}), \\ & (x_0,y_0,z_0) + \lambda (1,0,-1) \in \mathrm{conv}(\{P_{12},P_{13},P_{14}\}), \\ & (x_0,y_0,z_0) + \lambda (0,1,-1) \in \mathrm{conv}(\{P_{13},P_{23},P_{34}\}), \\ & (x_0,y_0,z_0) + \lambda (0,0,-1) \in \mathrm{conv}(\{P_{12},P_{23},P_{24}\}), \\ \end{split} \] which is a compatible linear system of $12$ variables and $12$ equations, depending on the six parameters $R_{ij}$, $1\leq i<j\leq 4$: \[ \begin{split} & (x_0,y_0,z_0) = \frac{1-a-b}{2R_{14}}(-1,0,1)+\frac{a}{2R_{24}}(0,0,1)+\frac{b}{2R_{34}}(0,-1,1), \\ & (x_0,y_0,z_0) + \lambda (1,0,-1) = \frac{1-c-d}{2R_{12}}(1,0,0)+\frac{c}{2R_{13}}(1,-1,0)+\frac{d}{2R_{14}}(1,0,-1), \\ & (x_0,y_0,z_0) + \lambda (0,1,-1) = \frac{1-e-f}{2R_{13}}(-1,1,0)+\frac{e}{2R_{23}}(0,1,0)+\frac{f}{2R_{34}}(0,1,-1), \\ & (x_0,y_0,z_0) + \lambda (0,0,-1) = \frac{1-g-h}{2R_{12}}(-1,0,0)+\frac{g}{2R_{23}}(0,-1,0)+\frac{h}{2R_{24}}(0,0,-1). \\ \end{split} \] Notice that $\lambda=\frac{1}{R_{1234}}$, and that the \emph{only} remaining part of the proof to be proven would be the fact that the coefficients of the convex combinations take values in $[0,1]$ as well as $\lambda \geq 0$. However, this would be a very hard and technical proof, since for instance the value of the coefficient $a$ after simplifying it is \[ \begin{split} & a = \left[R_{12}R_{14}R_{24}R_{34}^2 - (R_{12}R_{13}^2 + (R_{12} + R_{13})R_{14}^2 - (2R_{12}R_{13} + R_{13}^2)R_{14})R_{23}R_{24} \right.\\ & + (R_{13}^2R_{14} - R_{13}R_{14}^2)R_{24}^2 - (R_{13}R_{14}R_{24}^2 - (R_{12}^2R_{13} + R_{12}R_{13}^2 - R_{13}R_{14}^2 + \\ & \left.R_{14}^2R_{23} - (R_{12}^2 + 3R_{12}R_{13} + R_{13}^2)R_{14})R_{24})R_{34})\right]/\left[R_{13}^2R_{14}R_{24}^2 + R_{12}^2R_{14}R_{34}^2 +\right. \\ & (R_{12}R_{13}R_{14} - (R_{12} + R_{13})R_{14}^2)R_{23}^2 - (R_{12}R_{13}^2 + R_{13}R_{14}^2 - (R_{12}R_{13}+R_{13}^2)R_{14})R_{23}R_{24} \\ & \left. - (2R_{12}R_{13}R_{14}R_{24}- (R_{12}^2R_{13} + R_{12}R_{14}^2 - (R_{12}^2 + R_{12}R_{13})R_{14})R_{23})R_{34}\right]. \end{split} \] All in all, we would conclude saying that \begin{equation}\label{eq:12344points} \begin{split} & R_{ijk} \leq R_{1234} \leq 2 \left[R_{13}^2R_{14}R_{24}^2 + R_{12}^2R_{14}R_{34}^2 + (R_{12}R_{13}R_{14} - (R_{12} + R_{13})R_{14}^2)R_{23}^2\right. \\ & - (R_{12}R_{13}^2 + R_{13}R_{14}^2 - (R_{12}R_{13} + R_{13}^2)R_{14})R_{23}R_{24} - (2R_{12}R_{13}R_{14}R_{24} - (R_{12}^2R_{13} \\ & \left. + R_{12}R_{14}^2 - (R_{12}^2 + R_{12}R_{13})R_{14})R_{23})R_{34}\right]/\left[2R_{13}R_{14}R_{24}^2+\right. \\ & (R_{12}R_{13}-(R_{12}+R_{13})R_{14}-R_{14}^2)R_{23}^2+ (R_{12}^2R_{13} - R_{12}R_{13}^2 - (R_{12} + R_{13})R_{14}^2 \\ & - (R_{12}^2 - 2R_{12}R_{13} - R_{13}^2)R_{14})R_{23} - (R_{12}^2R_{13}\\ & +R_{12}R_{13}^2-(R_{12}-R_{13})R_{14}^2-(R_{12}^2+R_{13}^2)R_{14}+(R_{12}R_{13}-(R_{12}+R_{13})R_{14}\\ & +R_{14}^2)R_{23})R_{24}+(R_{12}^2R_{13}+R_{12}R_{13}^2+(R_{12}-R_{13})R_{14}^2-2R_{12}R_{14}R_{24}+(R_{12}^2- \\ & \left.2R_{12}R_{13}-R_{13}^2)R_{14}-(R_{12}R_{13} - (R_{12} + R_{13})R_{14} - R_{14}^2)R_{23})R_{34}\right] \end{split} \end{equation} and furthermore, it is quite likely that the right conjecture would be the following. 
\begin{conjecture} Let $X$ be a set with $|X|=4$, and let $\delta:\mathcal P_F(X)\rightarrow[0,\infty)$ be a diversity. Then $\delta$ is Banach-embeddable if and only if \eqref{eq:ij4points}, \eqref{eq:ijk4points}, and \eqref{eq:12344points} hold true (when replacing each $R_{i_1\cdots i_m}$ by $\delta_{i_1\cdots i_m}$). \end{conjecture} \end{document}
\begin{document} \title{Scalar-Invariant Test for High-Dimensional Regression Coefficients} \begin{abstract} \baselineskip 18pt This article is concerned with simultaneous tests on linear regression coefficients in high-dimensional settings. When the dimensionality is larger than the sample size, the classic $F$-test is not applicable since the sample covariance matrix is not invertible. In order to overcome this issue, both Goeman, Finos and van Houwelingen (2011) and Zhong and Chen (2011) proposed test procedures that exclude the $({\bf X}^{'}{\bf X})^{-1}$ term of the $F$-statistic. However, neither of these two tests is invariant under the group of scalar transformations. In order to treat all variables in a `fair' way, we propose a new test statistic and establish its asymptotic normality under certain mild conditions. Simulation studies show that our test procedure performs very well in many cases. \noindent{\bf Keywords}: Asymptotic normality; High-dimensional data; Large $p$, small $n$; $U$-statistics; Scale-invariant. \end{abstract} \section{Introduction} In the past decades, high-dimensional data have been increasingly encountered in statistical applications from many areas, such as hyperspectral imagery, internet portals, microarray analysis and finance. A frequently encountered challenge in high-dimensional regression is the detection of relevant variables. Identifying significant sets of genes which are associated with certain clinical outcomes is very important in genomic studies, see Subramanian et al. (2005), Efron and Tibshirani (2007) and Newton et al. (2007). The main challenge of high-dimensional data is that the dimension $p$ is much larger than the sample size $n$. When this happens, many traditional statistical methods and theories may not work since they assume that $p$ remains fixed as $n$ increases. Recently, many efforts have been devoted to solving this problem. One approach is variable selection. Fan and Lv (2008) proposed the Sure Independence Screening (SIS) method based on correlation learning to reduce the dimensionality from high to a moderate scale that is below the sample size. Wang (2009) extended the classic Forward Regression method under an ultra-high dimensional setup. The other approach is hypothesis testing. To gain power and insight, it can be advantageous to look for influence not at the level of individual variables but rather at the level of clusters of variables. Thus, a simultaneous test on linear regression coefficients in high-dimensional settings is needed. Goeman, Finos and van Houwelingen (2011) formulated an Empirical Bayes test via a score test on the hyperparameter of a prior distribution assumed on the regression coefficients. Zhong and Chen (2011) modified the classic $F$-statistic and proposed a $U$-statistic to examine the validity of the full model and extended their test to a linear model augmented with a factorial design setting. However, neither of these two tests is scalar invariant. Intuitively speaking, their test power depends heavily on the underlying variance magnitudes since they do not use the information from the diagonal elements of the sample covariance, i.e., the variances of the individual variables. When all the components are (approximately) homogeneous, they are very powerful, whereas their superiority is highly affected if the component variances differ much.
In practice, different components may have completely different physical or biological readings and thus their scales are certainly not the same. Hence, it is desirable to develop scale-transformation-invariant test procedures which are able to integrate all the individual information in a relatively ``fair'' way. In practice, for confidentiality reasons, both the response and the predictors are usually first standardized to have zero mean and unit variance. When the dimension of the predictors is low, the test efficiency is not affected by this standardization. However, when the dimension of the predictors is ultra-high, a large bias would arise in the test procedure because the variance estimators are only root-$n$ consistent; see Feng et al. (2012) for the case of the high-dimensional two-sample Behrens--Fisher problem. Thus, if we standardize the predictors first, Zhong and Chen's (2011) test may no longer be reliable when the dimension $p$ is ultra-high. This motivates us to study when the asymptotic normality of their test statistic still holds after standardizing the predictors. Thus, in this article, we propose a novel test statistic which is scale-invariant and provide theoretical conditions under which its asymptotic normality still holds. Simulation studies show that our proposed test has reasonable sizes and good power. The remainder of the paper is organized as follows. In the next section, we propose our test statistic and establish its asymptotic normality. A simulation comparison is conducted in Section 3. All technical details are provided in the Appendix. \section{Test Statistics} In this article, we consider the following linear regression model \begin{align} E(Y_i|{\bf X}_i)=\alpha+{\bf X}^{'}_i{\bm\beta}, ~~\mathrm {var}(Y_i|{\bf X}_i)=\sigma^2 \end{align} for $i=1,\dots,n$, where ${\bf X}_1,\dots, {\bf X}_n$ are independent and identically distributed $p$-dimensional covariates, $Y_1,\dots,Y_n$ are independent responses, ${\bm\beta}$ is the vector of regression coefficients, and $\alpha$ is a nuisance intercept. To make ${\bm\beta}$ identifiable, we assume that $\boldsymbols=\mathrm {var}({\bf X}_i)$ and ${\bf R}=\mathrm {cor}({\bf X}_i)$ are positive definite. Our interest is in testing the high-dimensional hypothesis \begin{align}\label{test} H_0:{\bm\beta}={\bm\beta}_0~~\rm{vs}~~H_1:{\bm\beta}\not={\bm\beta}_0 \end{align} for a specific ${\bm\beta}_0 \in \mathbb{R}^p$. A classical method to deal with this problem is the famous $F$-test statistic \begin{align*} F_{n}=\frac{(\hat{{\bm\beta}}-{\bm\beta}_0)^{'}({\bf A}({\bf U}^{'}{\bf U})^{-1}{\bf A}^{'})^{-1}(\hat{{\bm\beta}}-{\bm\beta}_0)/p}{{\bf Y}^{'}({\bf I}_n-{\bf U}({\bf U}^{'}{\bf U})^{-1}{\bf U}^{'}){\bf Y}/(n-p-1)} \end{align*} where ${\bf U}=({\bf 1},{\bf X})^{'}$, ${\bf A}=(\boldsymbol 0, {\bf I}_p)$ and $\hat{{\bm\beta}}$ is the least squares estimator of ${\bm\beta}$. Its advantages include: it is invariant under linear transformations, its exact distribution is known under the null hypothesis, and it is powerful when the dimension of the data is sufficiently small compared with the sample size. However, Zhong and Chen (2011) showed that the power of the $F$-test is adversely impacted by an increasing dimension even when $p<n-1$, reflecting a reduced degree of freedom in estimating $\sigma^2$ when the dimensionality is close to the sample size. 
Moreover, the $F$-test statistic is undefined when the dimension of the data is greater than the within-sample degrees of freedom, since the sample covariance matrix is then not positive definite. In order to overcome this issue, Goeman, Finos and van Houwelingen (2011) proposed an Empirical Bayes test, which is formulated via a score test on the hyperparameter of a prior distribution assumed on the regression coefficients. Their test statistic is \begin{align} G_n=\frac{({\bf Y}-\hat{\alpha}-{\bf X}^{'}{\bm\beta}_0)^{'}{\bf X}{\bf X}^{'}({\bf Y}-\hat{\alpha}-{\bf X}^{'}{\bm\beta}_0)}{n({\bf Y}-\hat{\alpha}-{\bf X}^{'}{\bm\beta}_0)^{'}({\bf Y}-\hat{\alpha}-{\bf X}^{'}{\bm\beta}_0)} \end{align} where $\hat{\alpha}$ is the sample mean of $Y$. The key feature of their method is to use the Euclidean norm in place of the Mahalanobis norm, since retaining $({\bf X}^{'}{\bf X})^{-1}$ is no longer beneficial when $p$ is larger than $n$. However, the power of $G_n$ is adversely impacted by $\boldsymbol \mu$, the mean of ${\bf X}$, which is a nuisance parameter in the test of interest. Zhong and Chen (2011) considered the $U$-statistic \begin{align} Z_n=\frac{1}{4P_n^4}\sum^{*}({\bf X}_{i_1}-{\bf X}_{i_2})^{'}({\bf X}_{i_3}-{\bf X}_{i_4})(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4}) \end{align} where $\Delta_i=Y_i-{\bf X}_i^{'}{\bm\beta}_0$. Throughout this article, we use $\sum^{*}$ to denote summation over distinct indices. For example, in $Z_n$, the summation is over the set $\{i_1\not=i_2\not=i_3\not=i_4\}$, for all $i_1,i_2,i_3,i_4\in\{1,\dots,n\}$, and $P_n^m=\frac{n!}{(n-m)!}$. Obviously, $Z_n$ is not impacted by the nuisance parameters $\alpha$ and $\boldsymbol \mu$. They established the asymptotic normality of $Z_n$ under the diverging factor model (Bai and Saranadasa 1996). However, an obvious limitation of $G_n$ and $Z_n$ is that they are not invariant under scalar transformations. To this end, we standardize each component of $({\bf X}_{i_1}-{\bf X}_{i_2})^{'}({\bf X}_{i_3}-{\bf X}_{i_4})$ in $Z_n$ by the corresponding variance and propose a simple but effective test statistic, \begin{align} T_n=&\frac{1}{4P_n^4}\sum^{*}({\bf X}_{i_1}-{\bf X}_{i_2})^{'}{\bf D}_S^{-1}({\bf X}_{i_3}-{\bf X}_{i_4})(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4}) \end{align} where ${\bf D}_S$ is the diagonal matrix of the sample covariance matrix, that is, \[{\bf D}_S=\mathrm {diag}(\hat{\sigma}_1^2,\dots,\hat{\sigma}_p^2)\] where $\hat{\sigma}^2_{k}$ is the sample variance of $\{X_{ik}\}_{i=1}^n$, $k=1,\dots,p$. Obviously, $T_n$ is invariant to location shifts in both ${\bf X}_i$ and $Y_i$. Thus, we assume, without loss of generality, that $\alpha=0$ and $\boldsymbol\mu=\boldsymbol 0$ in the rest of the article. Moreover, $T_n$ is invariant under the group of scalar transformations, say, ${\bf X}_{i}\to {\bf C}{\bf X}_i$ for $i=1,\dots, n$, where ${\bf C}=\mathrm {diag}\{c_1,\dots,c_p\}$ and $c_1,\dots,c_p$ are non-zero constants. 
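To make the definition above concrete, the following Python sketch (ours, not part of the original methodology; the helper names \texttt{T\_n} and \texttt{beta0} are illustrative choices) evaluates $T_n$ by brute-force enumeration of all ordered quadruples of distinct indices. It mirrors the definition literally and is therefore only practical for very small $n$.
\begin{verbatim}
import numpy as np
from itertools import permutations

def T_n(X, Y, beta0):
    # X: (n, p) covariates, Y: (n,) responses, beta0: (p,) null value.
    # Direct O(n^4) evaluation of the scale-invariant U-statistic T_n;
    # intended only to illustrate the definition, not for large samples.
    n, p = X.shape
    delta = Y - X @ beta0                  # Delta_i = Y_i - X_i' beta_0
    d_inv = 1.0 / X.var(axis=0, ddof=1)    # inverse diagonal of D_S
    Xs = X * d_inv                         # rows rescaled by D_S^{-1}
    total = 0.0
    for i1, i2, i3, i4 in permutations(range(n), 4):
        total += ((X[i1] - X[i2]) @ (Xs[i3] - Xs[i4])
                  * (delta[i1] - delta[i2]) * (delta[i3] - delta[i4]))
    P_n4 = n * (n - 1) * (n - 2) * (n - 3)
    return total / (4.0 * P_n4)
\end{verbatim}
Dividing by $4P_n^4$ matches the normalization in the display above, since the loop visits each ordered quadruple of distinct indices exactly once.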
In order to establish the asymptotic normality of $T_n$, we assume, like Bai and Saranadasa (1996), the following diverging factor model: \begin{align*} {\bf X}_i={\bf \Gamma} {\bf z}_{i}+\boldsymbol \mu \end{align*} where ${\bf \Gamma}$ is a $p\times m$ matrix for some $m\ge p$ such that ${\bf \Gamma} {\bf \Gamma}^{'}=\boldsymbols$, and $\{{\bf z}_i\}_{i=1}^n$ are $m$-variate independent and identically distributed random vectors such that \begin{align} \label{chends} \begin{array}{c} E({\bf z}_i)=0, \ \mathrm {var}({\bf z}_i)={\bf I}_m,~E(z_{il}^{4})=3+\Delta,~ E(z_{il}^{8})=m_8\in (0,\infty),\\ E(z_{ik_1}^{\alpha_1}z_{ik_2}^{\alpha_2}\cdots z_{ik_q}^{\alpha_q})=E(z_{ik_1}^{\alpha_1})E(z_{ik_2}^{\alpha_2})\cdots E(z_{ik_q}^{\alpha_q}), \end{array} \end{align} whenever $\sum_{k=1}^q\alpha_k\leq 8$ and $k_1\neq k_2\neq\cdots\neq k_q$. Additionally, we need the following conditions to regulate the ``large $p$, small $n$'' setting: \begin{itemize} \item[(C1)] $p(n)\to \infty$ as $n\to \infty$; \item[(C2)] $\mathrm {tr}({\bf R}^4)=o(\mathrm {tr}^2({\bf R}^2))$; \item[(C3)] $\frac{p^2}{n^2 \mathrm {tr}({\bf R}^2)} \to 0$. \end{itemize} \remark 1 Conditions (C1) and (C2) are similar to condition (2.8) in Zhong and Chen (2011). Since the estimator $\hat{\sigma}_k^2$ is only root-$n$ consistent, a small bias term appears in the variance of $T_n$. Fortunately, this bias term is negligible when condition (C3) holds. To appreciate condition (C3), consider the simple case ${\bf R}={\bf I}_p$; then the condition becomes $p=o(n^2)$. When $p$ gets larger, such as $p=O(n^2)$, the bias term in the variance of $T_n$ is no longer negligible. Thus, a bias correction is needed to solve this problem; see Feng et al. (2012) for more information. In order to study the asymptotic power of our test, similar to Zhong and Chen (2011), we define the following local alternatives \begin{align}\label{alter} ({\bm\beta}-{\bm\beta}_0)^{'}\boldsymbols({\bm\beta}-{\bm\beta}_0)&=o(1)\nonumber\\ ({\bm\beta}-{\bm\beta}_0)^{'}\boldsymbols{\bf D}^{-1}\boldsymbols{\bf D}^{-1}\boldsymbols({\bm\beta}-{\bm\beta}_0)&=o(n^{-1}\mathrm {tr}({\bf R}^2)) \end{align} Note that the local alternatives (\ref{alter}) prescribe a smaller difference between ${\bm\beta}$ and ${\bm\beta}_0$. In the Appendix, we also consider, as in Zhong and Chen (2011), two different fixed alternatives which violate the first part of (\ref{alter}), and we demonstrate that our proposed test achieves at least $50\%$ power under these two fixed alternatives. The following theorem establishes the asymptotic normality of $T_n$ under the null hypothesis and under the local alternative (\ref{alter}). \begin{thm} Assume conditions (C1)--(C3) hold. Then, under either $H_0$ or the local alternative (\ref{alter}), as $n\to\infty$, \begin{align} \frac{n}{\sigma^2\sqrt{2\mathrm {tr}({\bf R}^2)}}\left(T_n-||{\bf D}^{-1/2}\boldsymbols({\bm\beta}-{\bm\beta}_0)||^2\right) \mathop{\rightarrow}\limits^{d} N(0,1) \end{align} \end{thm} To formulate a test procedure based on $T_n$, we need to estimate $\mathrm {tr}({\bf R}^2)$ and $\sigma^2$, which appear in the asymptotic variance. 
In order to reduce the computational burden, we propose the following ratio-consistent estimator of $\mathrm {tr}({\bf R}^2)$, \begin{align*} \widehat{\mathrm {tr}({\bf R}^2)}=\frac{1}{2P_n^4}\sum^{*} ({\bf X}_{i_1}-{\bf X}_{i_2})^{'}{\bf D}_S^{-1}({\bf X}_{i_3}-{\bf X}_{i_4})({\bf X}_{i_3}-{\bf X}_{i_2})^{'}{\bf D}_S^{-1}({\bf X}_{i_1}-{\bf X}_{i_4}) \end{align*} The estimator of $\sigma^2$ under $H_0$ is \begin{align*} \hat{\sigma}^2=\frac{1}{n-1}\sum_{i=1}^n (Y_i-{\bf X}_i^{'}{\bm\beta}_0-\bar{Y}+\bar{{\bf X}}^{'}{\bm\beta}_0)^2 \end{align*} \begin{pro} Suppose the conditions in Theorem 1 hold. Then, as $n,p\to \infty$, \begin{align*} \frac{\widehat{\mathrm {tr}({\bf R}^2)}}{\mathrm {tr}({\bf R}^2)}\mathop{\rightarrow}\limits^{p} 1 \end{align*} \end{pro} Applying Theorem 1 and Slutsky's theorem, the proposed test rejects $H_0$ at significance level $\alpha$ if \begin{align} nT_n\ge \sqrt{2\widehat{\mathrm {tr}({\bf R}^2)}}\hat{\sigma}^2z_{\alpha} \end{align} where $z_{\alpha}$ is the upper-$\alpha$ quantile of $N(0,1)$. Next, we discuss the power properties of the proposed test. According to Theorem 1, the power of our proposed test under the local alternative (\ref{alter}) is \begin{align*} \beta_{T_n}(||{\bm\beta}-{\bm\beta}_0||)=\Phi(-z_{\alpha}+\frac{n||{\bf D}^{-1/2}\boldsymbols({\bm\beta}-{\bm\beta}_0)||^2}{\sqrt{2\mathrm {tr}({\bf R}^2)}\sigma^2}) \end{align*} where $\Phi$ is the standard normal distribution function. In comparison, Zhong and Chen (2011) showed that the power of their proposed test is \begin{align*} \beta_{Z_n}(||{\bm\beta}-{\bm\beta}_0||)=\Phi(-z_{\alpha}+\frac{n||\boldsymbols({\bm\beta}-{\bm\beta}_0)||^2}{\sqrt{2\mathrm {tr}(\boldsymbols^2)}\sigma^2}) \end{align*} Note that it is difficult to compare the proposed test with Zhong and Chen's (2011) test under general settings. Thus, in order to get a rough picture of the asymptotic power comparison between these two tests, we consider the following representative cases: \begin{itemize} \item[(i)] The variances of all variables are equal to $\lambda$, so that $\boldsymbols=\lambda {\bf R}$. In this case, \begin{align*} \beta_{Z_n}(||{\bm\beta}-{\bm\beta}_0||)=\beta_{T_n}(||{\bm\beta}-{\bm\beta}_0||)=\Phi(-z_{\alpha}+\frac{n\lambda||{\bf R}({\bm\beta}-{\bm\beta}_0)||^2}{\sqrt{2\mathrm {tr}({\bf R}^2)}\sigma^2}) \end{align*} \item[(ii)] $\boldsymbols({\bm\beta}-{\bm\beta}_0)=\delta(1,1,\dots,1)^{'}$. In this case, \begin{align*} \beta_{T_n}(||{\bm\beta}-{\bm\beta}_0||)&=\Phi(-z_{\alpha}+\frac{n\mathrm {tr}({\bf D}^{-1})\delta^2}{\sqrt{2\mathrm {tr}({\bf R}^2)}\sigma^2})\\ \beta_{Z_n}(||{\bm\beta}-{\bm\beta}_0||)&=\Phi(-z_{\alpha}+\frac{np\delta^2}{\sqrt{2\mathrm {tr}(\boldsymbols^2)}\sigma^2}) \end{align*} According to the Cauchy inequality, \[\mathrm {tr}^2({\bf D}^{-1})\mathrm {tr}(\boldsymbols^2)\ge p^2 \mathrm {tr}({\bf R}^2)\] As a consequence, \begin{align*} \beta_{T_n}(||{\bm\beta}-{\bm\beta}_0||)\ge \beta_{Z_n}(||{\bm\beta}-{\bm\beta}_0||) \end{align*} When the variances of all the variables are equal, the two tests are equally powerful. Otherwise, the proposed test is preferable in this case. \item[(iii)] $\boldsymbols$ is a diagonal matrix, i.e., $\boldsymbols={\bf D}$. The variances of the first half of the components are $\sigma_1^2$ and the rest are all $\sigma_2^2$. Assume $\beta_i-\beta_{0i}=\delta$, $i=1,\dots,\lfloor\frac{p}{2}\rfloor$, and the others are all equal to zero. 
In this setting, \begin{align*} \beta_{T_n}(||{\bm\beta}-{\bm\beta}_0||)&=\Phi(-z_{\alpha}+\frac{n\sqrt{p}\sigma_1^2\delta^2}{2\sqrt{2}\sigma^2})\\ \beta_{Z_n}(||{\bm\beta}-{\bm\beta}_0||)&=\Phi(-z_{\alpha}+\frac{n\sqrt{p}\sigma_1^4\delta^2}{2\sqrt{\sigma_1^4+\sigma_2^4}\sigma^2}) \end{align*} Thus, the asymptotic relative efficiency (ARE) of the proposed test with respect to Zhong and Chen's (2011) test is $\sqrt{\sigma_1^4+\sigma_2^4}/(\sqrt{2}\sigma_1^2)$. It is clear that the proposed test is more powerful than Zhong and Chen's (2011) test if $\sigma_1^2<\sigma_2^2$ and vice versa. This ARE has a positive lower bound of $1/\sqrt{2}$ when $\sigma_1^2\gg\sigma_2^2$, whereas it can be arbitrarily large if $\sigma_1^2/\sigma_2^2$ is close to zero. \end{itemize} \section{Simulation} Here we report a simulation study designed to evaluate the performance of our proposed test (abbreviated as SF). For comparison purposes, we also conducted the test proposed by Zhong and Chen (2011) (abbreviated as ZC) and the Empirical Bayes test proposed by Goeman, Finos, and van Houwelingen (2011) (abbreviated as EB). We consider the following linear regression model, as in Zhong and Chen (2011): \begin{align} Y_i={\bf X}_i^{'}{\bm\beta}+\varepsilon_i \end{align} and the hypotheses to be tested are \begin{align} H_0:{\bm\beta}=\boldsymbol 0_{p\times 1}~~\rm{vs}~~H_1:{\bm\beta}\not=\boldsymbol 0_{p\times 1} \end{align} We consider two distributions for $\varepsilon_i$: one is $N(0,4)$, and the other is the centralized gamma distribution Gamma$(1,0.5)$. The covariates ${\bf X}_i=(X_{i1},\dots, X_{ip})$ are generated according to the following moving average model \begin{align*} X_{ij}=\rho_{1}Z_{ij}+\rho_{2}Z_{i(j+1)}+\dots+\rho_{T} Z_{i(j+T-1)}+\mu_{j} \end{align*} for $j=1,\dots,p$ and $T<p$. Here $\{Z_{ij}\}_{j=1}^{p+T-1}$ are i.i.d. random variables. We consider two scenarios for the innovations $Z_{ij}$: (Scenario I) all the $\{Z_{ij}\}$ are from $N(0,1)$; (Scenario II) the first half of the components of $\{Z_{ij}\}_{j=1}^{p+T-1}$ are from $N(0,1)$, and the remaining half are from the centralized Gamma$(4,1)$ distribution. The coefficients $\{\rho_{l}\}_{l=1}^{T}$ were generated independently from $U(0,1)$ and were kept fixed once generated throughout our simulations. The means $\{\mu_j\}_{j=1}^p$ are also fixed constants generated from $U(2,3)$. We chose $T=10$ and $T=20$ to generate different covariance structures for ${\bf X}_i$. Similar to Zhong and Chen (2011), we consider two configurations of the alternative hypothesis $H_1$. One is the ``nonsparse case'', which allocates the first half of the ${\bm\beta}$-components, of equal magnitude, to be nonzero. The other is the ``sparse case'', which has only the first five components nonzero, again of equal magnitude. In both cases, we fixed $||{\bm\beta}||^2$ at three levels: $0.03,0.06,0.09$. Here we only consider the case $p>n$ and chose $(n,p)=(30,100), (40,200), (50,400)$. 
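Purely as an illustration of this simulation design (the code is ours and not from the original study; the helper names \texttt{simulate\_X} and \texttt{sf\_test} are hypothetical), the Python sketch below generates one data set under Scenario~I and applies the rejection rule $nT_n\ge \sqrt{2\widehat{\mathrm{tr}({\bf R}^2)}}\,\hat{\sigma}^2 z_{\alpha}$. It assumes the \texttt{T\_n} function from the sketch in Section~2, and the brute-force $U$-statistics make it far too slow for a full-scale simulation study.
\begin{verbatim}
import numpy as np
from itertools import permutations
from scipy.stats import norm

def simulate_X(n, p, T, rho, mu, rng):
    # X_ij = rho_1 Z_ij + ... + rho_T Z_i(j+T-1) + mu_j, Scenario I innovations.
    Z = rng.standard_normal((n, p + T - 1))
    X = np.empty((n, p))
    for j in range(p):
        X[:, j] = Z[:, j:j + T] @ rho + mu[j]
    return X

def trR2_hat(X):
    # Naive O(n^4) form of the ratio-consistent estimator of tr(R^2).
    n = X.shape[0]
    d_inv = 1.0 / X.var(axis=0, ddof=1)
    total = 0.0
    for i1, i2, i3, i4 in permutations(range(n), 4):
        total += ((X[i1] - X[i2]) @ (d_inv * (X[i3] - X[i4]))
                  * ((X[i3] - X[i2]) @ (d_inv * (X[i1] - X[i4]))))
    return total / (2.0 * n * (n - 1) * (n - 2) * (n - 3))

def sf_test(X, Y, beta0, alpha=0.05):
    # Reject H0: beta = beta0 when n*T_n exceeds the estimated critical value.
    n = X.shape[0]
    resid = Y - X @ beta0
    sigma2 = np.sum((resid - resid.mean()) ** 2) / (n - 1)
    crit = np.sqrt(2.0 * trR2_hat(X)) * sigma2 * norm.ppf(1.0 - alpha)
    return n * T_n(X, Y, beta0) >= crit    # T_n as sketched in Section 2

rng = np.random.default_rng(0)
n, p, T = 30, 100, 10
rho = rng.uniform(0.0, 1.0, T)             # kept fixed once generated
mu = rng.uniform(2.0, 3.0, p)
X = simulate_X(n, p, T, rho, mu, rng)
Y = rng.normal(0.0, 2.0, n)                # under H0 with epsilon ~ N(0, 4)
print(sf_test(X, Y, np.zeros(p)))
\end{verbatim}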
Tables 1--2 and Tables 3--4 report the empirical sizes and powers with normally and centralized gamma distributed residuals, respectively. From Tables 1 to 4, we observe that the empirical sizes of all three tests are reasonable, and that the sizes become closer to the nominal level 0.05 as $n$ and $p$ get larger, which is similar to the results of Zhong and Chen (2011). Moreover, from Tables 1 and 3, we observe that when all the component variances are equal (Scenario I), our proposed SF test performs similarly to the ZC and EB tests. Even though we need to estimate the variance of each component, the proposed SF test does not lose much information from the samples when the dimension $p$ is of a smaller order than $n^2$. These findings are also consistent with the asymptotic intuition in Section 2. However, when the component variances are not equal (Scenario II), our proposed SF test is clearly much more powerful than the other two tests. This is mainly due to the fact that the ZC and EB tests are not scale-invariant. When the variances of the variables are not equal, the ZC and EB tests can hardly capture coefficient shifts in components with smaller variances and are then nearly powerless. Thus, it is not surprising that their performance is extremely poor in such cases. \begin{table}[ht] \centering \caption{Empirical size and power comparisons at 5\% significance for normal residual under Scenario I} \renewcommand{1}{1} \tabcolsep 9pt \begin{tabular}{ccccccccc}\hline & & \multicolumn{3}{c}{T=10}& & \multicolumn{3}{c}{T=20} \\ \cline{3-5} \cline{7-9} {\small $(n,p)$} & {\small $||{\bm\beta}||^2$} &\multicolumn{1}{c}{SF} & \multicolumn{1}{c}{ZC} & \multicolumn{1}{c}{EB} & & \multicolumn{1}{c}{SF} & \multicolumn{1}{c}{ZC} & \multicolumn{1}{c}{EB} \\\hline (a) nonsparse case \\ (30,100) & 0.00 & 0.06 & 0.06 & 0.05 & & 0.06 & 0.06 & 0.06 \\ & 0.03 & 0.27 & 0.31 & 0.27 & & 0.75 & 0.71 & 0.76 \\ & 0.06 & 0.53 & 0.54 & 0.46 & & 0.93 & 0.91 & 0.95 \\ & 0.09 & 0.69 & 0.70 & 0.64 & & 0.97 & 0.96 & 0.98 \\ (40,200) & 0.00 & 0.05 & 0.05 & 0.04 & & 0.05 & 0.05 & 0.05 \\ & 0.03 & 0.25 & 0.25 & 0.37 & & 0.78 & 0.77 & 0.82 \\ & 0.06 & 0.47 & 0.49 & 0.60 & & 0.95 & 0.95 & 0.96 \\ & 0.09 & 0.63 & 0.69 & 0.76 & & 0.98 & 0.97 & 1.00 \\ (50,400) & 0.00 & 0.05 & 0.05 & 0.06 & & 0.05 & 0.05 & 0.04 \\ & 0.03 & 0.22 & 0.22 & 0.14 & & 0.81 & 0.79 & 0.79 \\ & 0.06 & 0.45 & 0.42 & 0.33 & & 0.94 & 0.95 & 1.00 \\ & 0.09 & 0.61 & 0.60 & 0.43 & & 0.97 & 0.97 & 1.00 \\ (b) sparse case \\ (30,100) & 0.03 & 0.12 & 0.13 & 0.07 & & 0.15 & 0.15 & 0.16 \\ & 0.06 & 0.20 & 0.19 & 0.12 & & 0.24 & 0.24 & 0.26 \\ & 0.09 & 0.24 & 0.23 & 0.15 & & 0.31 & 0.30 & 0.35 \\ (40,200) & 0.03 & 0.11 & 0.12 & 0.14 & & 0.16 & 0.14 & 0.17 \\ & 0.06 & 0.18 & 0.18 & 0.22 & & 0.25 & 0.26 & 0.32 \\ & 0.09 & 0.24 & 0.24 & 0.32 & & 0.40 & 0.39 & 0.48 \\ (50,400) & 0.03 & 0.10 & 0.10 & 0.11 & & 0.12 & 0.12 & 0.15 \\ & 0.06 & 0.15 & 0.15 & 0.17 & & 0.25 & 0.25 & 0.22 \\ & 0.09 & 0.18 & 0.21 & 0.28 & & 0.34 & 0.33 & 0.33 \\\hline \end{tabular} \end{table} \begin{table}[ht] \centering \caption{Empirical size and power comparisons at 5\% significance for normal residual under Scenario II} \renewcommand{1}{1} \tabcolsep 9pt \begin{tabular}{ccccccccc}\hline & & \multicolumn{3}{c}{T=10}& & \multicolumn{3}{c}{T=20} \\ \cline{3-5} \cline{7-9} {\small $(n,p)$} & {\small $||{\bm\beta}||^2$} &\multicolumn{1}{c}{SF} & \multicolumn{1}{c}{ZC} & \multicolumn{1}{c}{EB} & & \multicolumn{1}{c}{SF} & \multicolumn{1}{c}{ZC} & \multicolumn{1}{c}{EB} \\\hline (a) nonsparse case \\ (30,100) & 0.00 & 0.06 & 0.06 & 0.06 & & 0.05 & 0.07 & 0.06 \\ & 0.03 & 0.29 & 0.10 & 0.12 & & 0.70 & 0.28 & 0.34 \\ & 0.06 & 0.48 & 0.17 & 0.18 & & 0.92 & 0.50 & 0.53 \\ & 0.09 & 0.63 & 0.23 & 0.27 & & 0.96 & 0.59 & 0.59 \\ (40,200) & 0.00 & 0.05 & 0.05 & 0.06 & & 0.05 & 0.05 & 0.04 \\ & 0.03 & 0.35 & 0.15 & 0.10 & & 0.77 & 0.31 & 0.32 \\ & 0.06 & 0.55 & 0.19 & 
0.12 & & 0.95 & 0.48 & 0.51 \\ & 0.09 & 0.65 & 0.23 & 0.14 & & 0.97 & 0.59 & 0.63 \\ (50,400) & 0.00 & 0.05 & 0.05 & 0.05 & & 0.06 & 0.06 & 0.05 \\ & 0.03 & 0.23 & 0.10 & 0.07 & & 0.80 & 0.34 & 0.33 \\ & 0.06 & 0.45 & 0.15 & 0.10 & & 0.95 & 0.48 & 0.49 \\ & 0.09 & 0.60 & 0.17 & 0.14 & & 0.99 & 0.55 & 0.60 \\ (b) sparse case \\ (30,100) & 0.03 & 0.09 & 0.06 & 0.05 & & 0.16 & 0.07 & 0.06 \\ & 0.06 & 0.18 & 0.07 & 0.07 & & 0.28 & 0.10 & 0.11 \\ & 0.09 & 0.23 & 0.09 & 0.07 & & 0.40 & 0.11 & 0.12 \\ (40,200) & 0.03 & 0.13 & 0.13 & 0.07 & & 0.16 & 0.09 & 0.08 \\ & 0.06 & 0.16 & 0.14 & 0.09 & & 0.28 & 0.11 & 0.10 \\ & 0.09 & 0.24 & 0.14 & 0.11 & & 0.35 & 0.13 & 0.12 \\ (50,400) & 0.03 & 0.07 & 0.05 & 0.07 & & 0.15 & 0.11 & 0.06 \\ & 0.06 & 0.14 & 0.05 & 0.09 & & 0.25 & 0.12 & 0.09 \\ & 0.09 & 0.18 & 0.06 & 0.09 & & 0.33 & 0.14 & 0.12 \\\hline \end{tabular} \end{table} \begin{table}[ht] \centering \caption{Empirical size and power comparisons at 5\% significance for centralized gamma residual under Scenario I} \renewcommand{1}{1} \tabcolsep 9pt \begin{tabular}{ccccccccc}\hline & & \multicolumn{3}{c}{T=10}& & \multicolumn{3}{c}{T=20} \\ \cline{3-5} \cline{7-9} {\small $(n,p)$} & {\small $||{\bm\beta}||^2$} &\multicolumn{1}{c}{SF} & \multicolumn{1}{c}{ZC} & \multicolumn{1}{c}{EB} & & \multicolumn{1}{c}{SF} & \multicolumn{1}{c}{ZC} & \multicolumn{1}{c}{EB} \\\hline (a) nonsparse case \\ (30,100) & 0.00 & 0.05 & 0.06 & 0.06 & & 0.06 & 0.06 & 0.06 \\ & 0.03 & 0.33 & 0.31 & 0.25 & & 0.79 & 0.76 & 0.84 \\ & 0.06 & 0.56 & 0.58 & 0.43 & & 0.90 & 0.88 & 0.95 \\ & 0.09 & 0.70 & 0.72 & 0.58 & & 0.96 & 0.95 & 0.98 \\ (40,200) & 0.00 & 0.05 & 0.06 & 0.06 & & 0.06 & 0.06 & 0.03 \\ & 0.03 & 0.29 & 0.31 & 0.30 & & 0.83 & 0.82 & 0.96 \\ & 0.06 & 0.48 & 0.48 & 0.53 & & 0.95 & 0.94 & 0.99 \\ & 0.09 & 0.65 & 0.64 & 0.71 & & 0.98 & 0.98 & 1.00 \\ (50,400) & 0.00 & 0.05 & 0.05 & 0.07 & & 0.05 & 0.05 & 0.06 \\ & 0.03 & 0.29 & 0.28 & 0.39 & & 0.80 & 0.80 & 0.85 \\ & 0.06 & 0.53 & 0.53 & 0.63 & & 0.95 & 0.94 & 0.98 \\ & 0.09 & 0.66 & 0.65 & 0.78 & & 0.98 & 0.98 & 0.99 \\ (b) sparse case \\ (30,100) & 0.03 & 0.11 & 0.14 & 0.15 & & 0.20 & 0.20 & 0.17 \\ & 0.06 & 0.18 & 0.19 & 0.22 & & 0.32 & 0.32 & 0.30 \\ & 0.09 & 0.25 & 0.27 & 0.32 & & 0.40 & 0.42 & 0.41 \\ (40,200) & 0.03 & 0.12 & 0.12 & 0.10 & & 0.21 & 0.21 & 0.26 \\ & 0.06 & 0.19 & 0.18 & 0.17 & & 0.32 & 0.31 & 0.46 \\ & 0.09 & 0.24 & 0.23 & 0.19 & & 0.39 & 0.39 & 0.57 \\ (50,400) & 0.03 & 0.11 & 0.10 & 0.10 & & 0.15 & 0.14 & 0.14 \\ & 0.06 & 0.15 & 0.15 & 0.20 & & 0.25 & 0.23 & 0.28 \\ & 0.09 & 0.21 & 0.20 & 0.24 & & 0.36 & 0.35 & 0.41 \\\hline \end{tabular} \end{table} \begin{table}[ht] \centering \caption{Empirical size and power comparisons at 5\% significance for centralized gamma residual under Scenario II} \renewcommand{1}{1} \tabcolsep 9pt \begin{tabular}{ccccccccc}\hline & & \multicolumn{3}{c}{T=10}& & \multicolumn{3}{c}{T=20} \\ \cline{3-5} \cline{7-9} {\small $(n,p)$} & {\small $||{\bm\beta}||^2$} &\multicolumn{1}{c}{SF} & \multicolumn{1}{c}{ZC} & \multicolumn{1}{c}{EB} & & \multicolumn{1}{c}{SF} & \multicolumn{1}{c}{ZC} & \multicolumn{1}{c}{EB} \\\hline (a) nonsparse case \\ (30,100) & 0.00 & 0.06 & 0.06 & 0.05 & & 0.06 & 0.06 & 0.05 \\ & 0.03 & 0.38 & 0.13 & 0.12 & & 0.76 & 0.38 & 0.35 \\ & 0.06 & 0.56 & 0.17 & 0.17 & & 0.93 & 0.57 & 0.55 \\ & 0.09 & 0.70 & 0.25 & 0.21 & & 0.98 & 0.66 & 0.66 \\ (40,200) & 0.00 & 0.06 & 0.05 & 0.05 & & 0.05 & 0.06 & 0.05 \\ & 0.03 & 0.30 & 0.11 & 0.12 & & 0.75 & 0.28 & 0.25 \\ & 0.06 & 0.49 & 0.18 & 0.18 & & 0.97 & 0.47 & 
0.38 \\ & 0.09 & 0.63 & 0.23 & 0.23 & & 0.99 & 0.61 & 0.50 \\ (50,400) & 0.00 & 0.06 & 0.06 & 0.07 & & 0.06 & 0.06 & 0.05 \\ & 0.03 & 0.30 & 0.12 & 0.10 & & 0.79 & 0.34 & 0.46 \\ & 0.06 & 0.49 & 0.16 & 0.14 & & 0.96 & 0.45 & 0.64 \\ & 0.09 & 0.62 & 0.19 & 0.17 & & 0.99 & 0.53 & 0.71 \\ (b) sparse case \\ (30,100) & 0.03 & 0.10 & 0.09 & 0.06 & & 0.19 & 0.09 & 0.08 \\ & 0.06 & 0.20 & 0.09 & 0.08 & & 0.30 & 0.10 & 0.10 \\ & 0.09 & 0.27 & 0.09 & 0.08 & & 0.40 & 0.14 & 0.12 \\ (40,200) & 0.03 & 0.12 & 0.10 & 0.06 & & 0.17 & 0.08 & 0.06 \\ & 0.06 & 0.20 & 0.10 & 0.08 & & 0.32 & 0.10 & 0.09 \\ & 0.09 & 0.25 & 0.11 & 0.09 & & 0.43 & 0.13 & 0.10 \\ (50,400) & 0.03 & 0.16 & 0.08 & 0.05 & & 0.15 & 0.11 & 0.07 \\ & 0.06 & 0.22 & 0.10 & 0.07 & & 0.25 & 0.10 & 0.09 \\ & 0.09 & 0.28 & 0.10 & 0.08 & & 0.37 & 0.12 & 0.11 \\\hline \end{tabular} \end{table} \section{Appendix} \subsection{Proof of Theorem 1} Define ${\bf D}$ as the diagonal matrix of the covariance matrix, that is, \begin{align*} {\bf D}=\mathrm {diag}(\sigma_1^2,\dots,\sigma_p^2). \end{align*} Thus, we can rewrite $T_n$ as follows \begin{align*} T_n=&\frac{1}{4P_n^4}\sum^{*}({\bf X}_{i_1}-{\bf X}_{i_2})^{'}{\bf D}^{-1}({\bf X}_{i_3}-{\bf X}_{i_4})(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})\\ &+\frac{1}{4P_n^4}\sum^{*}({\bf X}_{i_1}-{\bf X}_{i_2})^{'}({\bf D}_S^{-1}-{\bf D}^{-1})({\bf X}_{i_3}-{\bf X}_{i_4})(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})\\ \doteq &T_{n1}+T_{n2} \end{align*} Define \begin{align*} \phi(i_1,i_2,i_3,i_4)=\frac{1}{4}({\bf X}_{i_1}-{\bf X}_{i_2})^{'}{\bf D}^{-1}({\bf X}_{i_3}-{\bf X}_{i_4})(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4}) \end{align*} We then symmetrize $\phi$ by \begin{align*} h(W_i,W_j,W_k,W_l)=\frac{1}{3}\{\phi(i,j,k,l)+\phi(i,k,j,l)+\phi(i,l,j,k)\} \end{align*} where $W_i=({\bf X}_i^{'},\varepsilon_i)^{'}$ and $\varepsilon_i=Y_i-{\bf X}_i^{'}{\bm\beta}$. Thus \begin{align*} T_{n1}=\frac{1}{C_n^4}\sum_{C_{n,4}} h(W_i,W_j,W_k,W_l). \end{align*} Define $\delta_{{\bm\beta}}={\bm\beta}-{\bm\beta}_0$. 
After some tedious calculations, we can obtain that the projections of $h$ are, respectively, \begin{align*} h_1(W_1)=&\frac{1}{2}\delta_{{\bm\beta}}^{'}({\bf X}_1{\bf X}_1^{'}+\boldsymbols){\bf D}^{-1}\boldsymbols \delta_{{\bm\beta}}+\frac{1}{2}\varepsilon_1{\bf X}_1^{'}{\bf D}^{-1}\boldsymbols\delta_{{\bm\beta}}\\ h_2(W_1,W_2)=&\frac{1}{6}\Big\{\delta_{{\bm\beta}}^{'}({\bf X}_1-{\bf X}_2)({\bf X}_1-{\bf X}_2)^{'}{\bf D}^{-1}\boldsymbols\delta_{{\bm\beta}}+(\varepsilon_1-\varepsilon_2)({\bf X}_1-{\bf X}_2)^{'}{\bf D}^{-1}\boldsymbols \delta_{{\bm\beta}}\\ &+\left(\delta_{{\bm\beta}}^{'}({\bf X}_1{\bf X}_1^{'}+\boldsymbols)+\varepsilon_1{\bf X}_1^{'}\right){\bf D}^{-1} \left(({\bf X}_2{\bf X}_2^{'}+\boldsymbols)\delta_{{\bm\beta}}+\varepsilon_2{\bf X}_2\right)\Big\}\\ h_3(W_1,W_2,W_3)=&\frac{1}{12}\left(({\bf X}_1-{\bf X}_2)^{'}\delta_{{\bm\beta}}+(\varepsilon_1-\varepsilon_2)\right){\bf D}^{-1}({\bf X}_1-{\bf X}_2)^{'} \left(({\bf X}_3{\bf X}_3^{'}+\boldsymbols)\delta_{{\bm\beta}}+\varepsilon_3{\bf X}_3\right)\\ &+\frac{1}{12}\left(({\bf X}_1-{\bf X}_3)^{'}\delta_{{\bm\beta}}+(\varepsilon_1-\varepsilon_3)\right){\bf D}^{-1}({\bf X}_1-{\bf X}_3)^{'} \left(({\bf X}_2{\bf X}_2^{'}+\boldsymbols)\delta_{{\bm\beta}}+\varepsilon_2{\bf X}_2\right)\\ &+\frac{1}{12}\left(({\bf X}_2-{\bf X}_3)^{'}\delta_{{\bm\beta}}+(\varepsilon_2-\varepsilon_3)\right){\bf D}^{-1}({\bf X}_2-{\bf X}_3)^{'} \left(({\bf X}_1{\bf X}_1^{'}+\boldsymbols)\delta_{{\bm\beta}}+\varepsilon_1{\bf X}_1\right) \end{align*} Define $B_1=\delta_{{\bm\beta}}^{'}\boldsymbols \delta_{{\bm\beta}}$, $B_2=\delta_{{\bm\beta}}^{'}\boldsymbols{\bf D}^{-1}\boldsymbols \delta_{{\bm\beta}}$, $B_3=\delta_{{\bm\beta}}^{'}\boldsymbols {\bf D}^{-1}\boldsymbols {\bf D}^{-1}\boldsymbols \delta_{{\bm\beta}}$ and $A_0={\bf \Gamma}^{'}{\bf \Gamma}$, $A_1={\bf \Gamma}^{'}\delta_{{\bm\beta}}\delta_{{\bm\beta}}^{'}{\bf \Gamma}$, $A_2={\bf \Gamma}^{'}{\bf D}^{-1}\boldsymbols \delta_{{\bm\beta}}\delta_{{\bm\beta}}^{'}\boldsymbols {\bf D}^{-1}{\bf \Gamma}$, $A_3={\bf \Gamma}^{'}{\bf R}{\bf \Gamma}$. Then, \begin{align*} \mathrm {var}(h_1)=&\frac{1}{4}\left\{(B_1+\sigma^2)B_3+B_2^2+\Delta\mathrm {tr}(A_1\circ A_2)\right\}\\ \mathrm {var}(h_2)=&\frac{1}{36}\Big\{\sigma^4\mathrm {tr}({\bf R}^2)+21B_2^2+22B_1B_3+22\sigma^2B_3+B_1^2\mathrm {tr}({\bf R}^2)+2\sigma^2\mathrm {tr}({\bf R}^2)B_1\\ &+2\Delta(B_1+\sigma^2)\mathrm {tr}(A_1\circ A_3)+20\Delta\mathrm {tr}(A_1\circ A_2)+\Delta^2\mathrm {tr}[(A_0\mathrm {diag}(A_1))^2]\Big\}\\ \mathrm {var}(h)=&\frac{1}{24}\Big\{12\sigma^4\mathrm {tr}({\bf R}^2)+45B_2^2+65B_1B_3+40\sigma^2B_3+10B_1^2\mathrm {tr}({\bf R}^2)+24\sigma^2\mathrm {tr}({\bf R}^2)B_1\\ &+12\Delta(B_1+\sigma^2)\mathrm {tr}(A_1\circ A_3)+37\Delta\mathrm {tr}(A_1\circ A_2)+4\Delta^2\mathrm {tr}[(A_0\mathrm {diag}(A_1))^2]\Big\} \end{align*} Thus, $\mathrm {var}(h_2)$ and $\mathrm {var}(h)$ are of the same order. 
Next, taking the same procedure as Zhong and Chen (2011), under the condition (\ref{alter}), we can show that \begin{align*} T_{n1}=||{\bf D}^{-1/2}\boldsymbols({\bm\beta}-{\bm\beta}_0)||^2+\frac{2}{n(n-1)}\sum_{i<j}\varepsilon_i\varepsilon_j{\bf X}_i^{'}{\bf X}_j+o_p(\sqrt{\mathrm {var}(T_{n1})}) \end{align*} And then, similar to Zhong and Chen (2011), we can easily obtain that \begin{align} \frac{n}{\sigma^2\sqrt{2\mathrm {tr}({\bf R}^2)}}\left(T_{n1}-||{\bf D}^{-1/2}\boldsymbols({\bm\beta}-{\bm\beta}_0)||^2\right) \mathop{\rightarrow}\limits^{d} N(0,1) \end{align} by applying the martingale central limit theorem (Hall and Heyde 1980). In order to prove Theorem 1, we only need to show that $T_{n2}=o(\frac{1}{n}\sigma^2\sqrt{2\mathrm {tr}({\bf R}^2)})$. \begin{align*} T_{n2}=&\frac{1}{4P_n^4}\sum^*\sum_{k=1}^p(x_{i_1k}-x_{i_2k})(x_{i_3k}-x_{i_4k})(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})(\hat{\sigma}_k^{-2}-\sigma_k^{-2})\\ =&\frac{1}{4P_n^4}\sum^*\sum_{k=1}^p(x_{i_1k}-x_{i_2k})(x_{i_3k}-x_{i_4k})(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})(\sigma_k^{2}-\hat{\sigma}_k^{2})\sigma_k^{-4}\\ &+\frac{1}{4P_n^4}\sum^*\sum_{k=1}^p(x_{i_1k}-x_{i_2k})(x_{i_3k}-x_{i_4k})(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})(1-\hat{\sigma}_k^2\sigma_k^{-2})^2\hat{\sigma}_k^{-2}\\ \doteq & A_1+A_2 \end{align*} Firstly, we will show that $E(A_1^2)=o(\frac{1}{n^2}\sigma^4\mathrm {tr}({\bf R}^2))$. \begin{align*} &E(A_1^2)\\ =&\frac{1}{16(P_n^4)^2}\sum_{k=1}^p\sum_{l=1}^p E\Bigg\{ \sum^*_{i_1,i_2,i_3,i_4}(x_{i_1k}-x_{i_2k})(x_{i_3k}-x_{i_4k})(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})(\sigma_k^{2}-\hat{\sigma}_k^{2})\sigma_k^{-4}\\ &\times \sum^*_{i_5,i_6,i_7,i_8}(x_{i_5l}-x_{i_6l})(x_{i_7l}-x_{i_8l})(\Delta_{i_5}-\Delta_{i_6})(\Delta_{i_7}-\Delta_{i_8})(\sigma_l^{2}-\hat{\sigma}_l^{2})\sigma_l^{-4}\Bigg\}\\ =&\frac{1}{16(P_n^4)^2}\sum_{k=1}^p\sum_{l=1}^p E\Bigg\{ \sum^*_{i_1,i_2,i_3,i_4,i_5,i_6}(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})(\Delta_{i_1}-\Delta_{i_5})(\Delta_{i_3}-\Delta_{i_6})\\ &\times\sigma_l^{-4}\sigma_k^{-4} (x_{i_1k}-x_{i_2k})(x_{i_3k}-x_{i_4k})(x_{i_1l}-x_{i_5l})(x_{i_3l}-x_{i_6l})\\ &\times \left(\frac{1}{n}\sum_{i=1}^n(\sigma_k^2- x_{ik}^2)+\frac{2}{n(n-1)}\sum_{i\not=j}x_{ik}x_{jk}\right)\left(\frac{1}{n}\sum_{i=1}^n(\sigma_l^2- x_{il}^2)+\frac{2}{n(n-1)}\sum_{i\not=j}x_{il}x_{jl}\right)\Bigg\}\\ =&\frac{1}{16n^2(P_n^4)^2}\sum_{k=1}^p\sum_{l=1}^p E\Bigg\{ \sum^*_{i_1,i_2,i_3,i_4,i_5,i_6}\sum_{i=1}^n\sum_{j=1}^n(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})(\Delta_{i_1}-\Delta_{i_5})(\Delta_{i_3}-\Delta_{i_6})\\ &\times\sigma_l^{-4}\sigma_k^{-4} (x_{i_1k}-x_{i_2k})(x_{i_3k}-x_{i_4k})(x_{i_1l}-x_{i_5l})(x_{i_3l}-x_{i_6l})(\sigma_k^2- x_{ik}^2)(\sigma_l^2- x_{jl}^2)\Bigg\}\\ &+\frac{1}{16n^2(n-1)(P_n^4)^2}\sum_{k=1}^p\sum_{l=1}^p E\Bigg\{ \sum^*_{i_1,i_2,i_3,i_4,i_5,i_6}\sum_{i_7=1}^n\sum_{i_8\not=i_9}^n(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})(\Delta_{i_1}-\Delta_{i_5})\\ &\times(\Delta_{i_3}-\Delta_{i_6})\sigma_l^{-4}\sigma_k^{-4} 
(x_{i_1k}-x_{i_2k})(x_{i_3k}-x_{i_4k})(x_{i_1l}-x_{i_5l})(x_{i_3l}-x_{i_6l})(\sigma_k^2- x_{i_7k}^2)x_{i_8l}x_{i_9l}\Bigg\}\\ &+\frac{1}{16n^2(n-1)(P_n^4)^2}\sum_{k=1}^p\sum_{l=1}^p E\Bigg\{ \sum^*_{i_1,i_2,i_3,i_4,i_5,i_6}\sum_{i_7=1}^n\sum_{i_8\not=i_9}^n(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})(\Delta_{i_1}-\Delta_{i_5})\\ &\times(\Delta_{i_3}-\Delta_{i_6})\sigma_l^{-4}\sigma_k^{-4} (x_{i_1k}-x_{i_2k})(x_{i_3k}-x_{i_4k})(x_{i_1l}-x_{i_5l})(x_{i_3l}-x_{i_6l})(\sigma_l^2- x_{i_7l}^2)x_{i_8k}x_{i_9k}\Bigg\}\\ &+\frac{1}{16n^2(n-1)^2(P_n^4)^2}\sum_{k=1}^p\sum_{l=1}^p E\Bigg\{ \sum^*_{i_1,i_2,i_3,i_4,i_5,i_6}\sum_{i_7\not=i_8}^n\sum_{i_9\not=i_{10}}^n(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})(\Delta_{i_1}-\Delta_{i_5})\\ &\times(\Delta_{i_3}-\Delta_{i_6})\sigma_l^{-4}\sigma_k^{-4} (x_{i_1k}-x_{i_2k})(x_{i_3k}-x_{i_4k})(x_{i_1l}-x_{i_5l})(x_{i_3l}-x_{i_6l})x_{i_7k}x_{i_8k}x_{i_{9}l}x_{i_{10}l}\Bigg\}\\ \doteq & A_{11}+A_{12}+A_{13}+A_{14} \end{align*} After some tedious calculation, under condition (\ref{alter}), we can obtain that \begin{align*} A_{11}=&O(\frac{p^2}{n^4}) \left(\sigma_l^{-4}\sigma_k^{-4}(a_{kl}\sigma_k^2-E(x_{ik}^3x_{il}))(a_{kl}\sigma_l^2-E(x_{ik}x_{il}^3))\right)\\ &+O(\frac{p^2}{n^4})\left(\sigma_l^{-4}\sigma_k^{-4}a_{kl}(a_{kl}\sigma_k^2\sigma_l^2-\sigma_l^2E(x_{ik}^3x_{il})-\sigma_k^2E(x_{ik}x_{il}^3)+E(x_{ik}^3x_{il}^3))\right)+o(\frac{1}{n^2}\sigma^4\mathrm {tr}({\bf R}^2)) \end{align*} where $\boldsymbols=(a_{ij})_{i,j=1,\dots,p}$. Define ${\bf \Gamma}=(v_{ij})$; according to the multivariate model, we can show that \begin{align*} E(x_{ik}^3x_{il})=&E\left(\left(\sum_{i=1}^m v_{ki}z_i\right)^3\left(\sum_{j=1}^m v_{lj}z_j\right)\right)=E\left(\sum_{i=1}^m\sum_{j=1}^m\sum_{s=1}^m\sum_{t=1}^m v_{ki}v_{kj}v_{ks}v_{lt}z_iz_jz_sz_t\right)\\ =&(3+\Delta)\sum_{i=1}^m v_{ki}^3v_{li}+3\sum_{i\not=j}^m v_{ki}^2v_{kj}v_{lj}=\Delta\sum_{i=1}^m v_{ki}^3v_{li}+3\sigma_{k}^2a_{kl}\\ \le & \Delta \sqrt{\sum_{i=1}^m v_{ki}^4 \sum_{i=1}^m v_{ki}^2v_{li}^2}+3\sigma_{k}^2a_{kl} \le \Delta \sqrt{\left(\sum_{i=1}^m v_{ki}^2\right)^3\left(\sum_{i=1}^m v_{li}^2\right)}+3\sigma_{k}^2a_{kl}\\ =& \Delta \sigma_{k}^3\sigma_{l}+3\sigma_{k}^2a_{kl} \end{align*} Define $E(z_i^6)=\Psi<+\infty$, \begin{align*} E(x_{il}^3x_{ik}^3)=&E\left(\left(\sum_{i=1}^m v_{ki}z_i\right)^3\left(\sum_{j=1}^m v_{lj}z_j\right)^3\right)\\ =&E\left(\sum_{i=1}^m\sum_{j=1}^m\sum_{s=1}^m\sum_{t=1}^m\sum_{r=1}^m\sum_{w=1}^m v_{ki}v_{kj}v_{kr}v_{ls}v_{lt}v_{wl}z_iz_jz_rz_wz_sz_t\right)\\ =&\sum_{i=1}^m\sum_{j=1}^m\sum_{s=1}^m\sum_{t=1}^m\sum_{r=1}^m\sum_{w=1}^m v_{ki}v_{kj}v_{kr}v_{ls}v_{lt}v_{wl}E(z_iz_jz_rz_wz_sz_t)\\ =&\Psi\sum_{i=1}^m v_{ki}^3v_{li}^3 +(27+9\Delta)\sum_{i\not=j}^m v_{ki}^2v_{kj}v_{li}^2v_{lj}+(27+9\Delta)\sum_{i\not=j}^m v_{ki}^3v_{li}v_{lj}^2+9\sum_{i\not=j\not=s}^m v_{ki}^2v_{lj}^2v_{ks}v_{ls}\\ \le &\frac{\Psi}{2}\sum_{i=1}^m v_{ki}^2v_{li}^2(v_{ki}^2+v_{li}^2)+\frac{27+9\Delta}{2}\sum_{i=1}^m\sum_{j=1}^mv_{ki}^2v_{li}^2(v_{kj}^2+v_{lj}^2)\\ &+\frac{27+9\Delta}{2}\sum_{i=1}^m\sum_{j=1}^mv_{ki}^2v_{lj}^2(v_{ki}^2+v_{li}^2)+\frac{9}{2}\sum_{i=1}^m \sum_{j=1}^m\sum_{s=1}^m v_{ki}^2v_{lj}^2(v_{ks}^2+v_{ls}^2)\\ \le &\frac{1}{2}(\Psi+63+18\Delta)(\sigma_{k}^4\sigma_{l}^2+\sigma_{l}^4 \sigma_{k}^2) \end{align*} Thus, we obtain that $A_{11}=O(\frac{p^2}{n^4})+o(\frac{1}{n^2}\sigma^4\mathrm {tr}({\bf R}^2))=o(\frac{1}{n^2}\sigma^4\mathrm {tr}({\bf R}^2))$ by condition (C3). Taking the same procedure as for $A_{11}$, we can show that $A_{12}, A_{13}, A_{14}$ are all $o(\frac{1}{n^2}\sigma^4\mathrm {tr}({\bf R}^2))$. Hence, we obtain that $E(A_1^2)=o(\frac{1}{n^2}\sigma^4\mathrm {tr}({\bf R}^2))$. Next, we rewrite $A_2$ as follows, \begin{align*} A_2=&\sum_{k=1}^p\left(\frac{1}{4P_n^4}\sum^*(x_{i_1k}-x_{i_2k})(x_{i_3k}-x_{i_4k})(\Delta_{i_1}-\Delta_{i_2})(\Delta_{i_3}-\Delta_{i_4})\hat{\sigma}_k^{-2}\right)(1-\hat{\sigma}_k^2\sigma_k^{-2})^2\\ &\doteq \sum_{k=1}^p C_kD_k \end{align*} By the Cauchy inequality, we obtain that \begin{align*} E(A_2^2)=&E\left(\left(\sum_{k=1}^p C_kD_k\right)^2\right)\le E\left(\left(\sum_{k=1}^p C_k^2\right)\left(\sum_{k=1}^p D_k^2\right)\right)\\ &\le\sqrt{E\left(\left(\sum_{k=1}^p C_k^2\right)^2\right)E\left(\left(\sum_{k=1}^p D_k^2\right)^2\right)} \end{align*} Taking the same procedure as for $A_{11}$, we can show that $E(C_k^2C_l^2)=O(n^{-4})$ and $E(D_k^2D_l^2)=O(n^{-4})$. Thus, $E(A_2^2)=O(\frac{p^2}{n^{4}})=o(\frac{1}{n^2}\sigma^4\mathrm {tr}({\bf R}^2))$ by condition (C3). This completes the proof. \subsection{Proof of Proposition 1} Firstly, after some tedious calculation, we can rewrite $\widehat{\mathrm {tr}({\bf R}^2)}$ as follows, \begin{align*} &\frac{1}{2P_n^4}\sum^{*} ({\bf X}_{i_1}-{\bf X}_{i_2})^{'}{\bf D}_S^{-1}({\bf X}_{i_3}-{\bf X}_{i_4})({\bf X}_{i_3}-{\bf X}_{i_2})^{'}{\bf D}_S^{-1}({\bf X}_{i_1}-{\bf X}_{i_4})\\ =&\frac{1}{P_n^2}\sum^{*}({\bf X}_{i_1}^{'}{\bf D}_S^{-1}{\bf X}_{i_2})^2-\frac{2}{P_n^3}\sum^{*}{\bf X}_{i_1}^{'}{\bf D}_S^{-1}{\bf X}_{i_2}{\bf X}_{i_2}^{'}{\bf D}_S^{-1}{\bf X}_{i_3} +\frac{1}{P_n^4}\sum^{*}{\bf X}_{i_1}^{'}{\bf D}_S^{-1}{\bf X}_{i_2}{\bf X}_{i_3}^{'}{\bf D}_S^{-1}{\bf X}_{i_4} \end{align*} Taking the same procedure as in the proof of Theorem 1, we can show that \begin{align*} \frac{1}{P_n^2}\sum^{*}({\bf X}_{i_1}^{'}{\bf D}_S^{-1}{\bf X}_{i_2})^2&=\frac{1}{P_n^2}\sum^{*}({\bf X}_{i_1}^{'}{\bf D}^{-1}{\bf X}_{i_2})^2+o_p(\mathrm {tr}({\bf R}^2))\\ \frac{2}{P_n^3}\sum^{*}{\bf X}_{i_1}^{'}{\bf D}_S^{-1}{\bf X}_{i_2}{\bf X}_{i_2}^{'}{\bf D}_S^{-1}{\bf X}_{i_3}&=\frac{2}{P_n^3}\sum^{*}{\bf X}_{i_1}^{'}{\bf D}^{-1}{\bf X}_{i_2}{\bf X}_{i_2}^{'}{\bf D}^{-1}{\bf X}_{i_3}+o_p(\mathrm {tr}({\bf R}^2))\\ \frac{1}{P_n^4}\sum^{*}{\bf X}_{i_1}^{'}{\bf D}_S^{-1}{\bf X}_{i_2}{\bf X}_{i_3}^{'}{\bf D}_S^{-1}{\bf X}_{i_4}&=\frac{1}{P_n^4}\sum^{*}{\bf X}_{i_1}^{'}{\bf D}^{-1}{\bf X}_{i_2}{\bf X}_{i_3}^{'}{\bf D}^{-1}{\bf X}_{i_4}+o_p(\mathrm {tr}({\bf R}^2)) \end{align*} Thus, \begin{align*} \widehat{\mathrm {tr}({\bf R}^2)}=&\frac{1}{P_n^2}\sum^{*}({\bf X}_{i_1}^{'}{\bf D}^{-1}{\bf X}_{i_2})^2-\frac{2}{P_n^3}\sum^{*}{\bf X}_{i_1}^{'}{\bf D}^{-1}{\bf X}_{i_2}{\bf X}_{i_2}^{'}{\bf D}^{-1}{\bf X}_{i_3}\\ &+\frac{1}{P_n^4}\sum^{*}{\bf X}_{i_1}^{'}{\bf D}^{-1}{\bf X}_{i_2}{\bf X}_{i_3}^{'}{\bf D}^{-1}{\bf X}_{i_4}+o_p(\mathrm {tr}({\bf R}^2))\\ =&\frac{1}{P_n^2}\sum^{*}(\tilde{{\bf X}}_{i_1}^{'}\tilde{{\bf X}}_{i_2})^2-\frac{2}{P_n^3}\sum^{*}\tilde{{\bf X}}_{i_1}^{'}\tilde{{\bf X}}_{i_2}\tilde{{\bf X}}_{i_2}^{'}\tilde{{\bf X}}_{i_3} +\frac{1}{P_n^4}\sum^{*}\tilde{{\bf X}}_{i_1}^{'}\tilde{{\bf X}}_{i_2}\tilde{{\bf X}}_{i_3}^{'}\tilde{{\bf X}}_{i_4}+o_p(\mathrm {tr}({\bf R}^2)) \end{align*} where $\tilde{{\bf X}}_{i}={\bf D}^{-1/2}{\bf X}_{i}$. Then, according to Theorem 2 in Chen, Zhang and Zhong (2010), we can easily obtain the result. 
\subsection{Power Under Fixed Alternatives} In this part, similar to Zhong and Chen (2011), we consider two scenarios of fixed alternatives under \begin{align*} \delta_{{\bm\beta}}^{'}\boldsymbols\delta_{{\bm\beta}} ~ \text{is not}~ o(1) \end{align*} One is \begin{align}\label{f1} \delta_{{\bm\beta}}^{'}\boldsymbols {\bf D}^{-1}\boldsymbols{\bf D}^{-1}\boldsymbols\delta_{{\bm\beta}}=o\left(\frac{1}{n}\delta_{{\bm\beta}}^{'}\boldsymbols\delta_{{\bm\beta}}\mathrm {tr}({\bf R}^2)\right) \end{align} If $\delta_{{\bm\beta}}^{'}\boldsymbols\delta_{{\bm\beta}}$ is indeed bounded, (\ref{f1}) implies $\delta_{{\bm\beta}}^{'}\boldsymbols {\bf D}^{-1}\boldsymbols{\bf D}^{-1}\boldsymbols\delta_{{\bm\beta}}=o\left(\frac{1}{n}\mathrm {tr}({\bf R}^2)\right)$, which mimics the second part of (\ref{alter}). The other is \begin{align}\label{f2} \frac{1}{n}\delta_{{\bm\beta}}^{'}\boldsymbols\delta_{{\bm\beta}}\mathrm {tr}({\bf R}^2)=o\left(\delta_{{\bm\beta}}^{'}\boldsymbols {\bf D}^{-1}\boldsymbols{\bf D}^{-1}\boldsymbols\delta_{{\bm\beta}}\right) \end{align} If $\delta_{{\bm\beta}}^{'}\boldsymbols\delta_{{\bm\beta}}$ is indeed bounded, (\ref{f2}) implies $\frac{1}{n}\mathrm {tr}({\bf R}^2)=o\left(\delta_{{\bm\beta}}^{'}\boldsymbols {\bf D}^{-1}\boldsymbols{\bf D}^{-1}\boldsymbols\delta_{{\bm\beta}}\right)$, which means there is a larger discrepancy between ${\bm\beta}$ and ${\bm\beta}_0$. \begin{thm}\label{th2} Assume conditions (C1)--(C3) hold. Then \begin{itemize} \item[(i)] under the first fixed alternative (\ref{f1}), \begin{align*} \frac{n}{\sigma_{A_1}}(T_n-||{\bf D}^{-1/2}\boldsymbols ({\bm\beta}-{\bm\beta}_0)||^2)\mathop{\rightarrow}\limits^{d} N(0,1) \end{align*} where \begin{align*} \sigma_{A_1}^2=&2\sigma^4\mathrm {tr}({\bf R}^2)+2B_1^2\mathrm {tr}({\bf R}^2)+4\sigma^4\mathrm {tr}({\bf R}^2)B_1\\ &+4\Delta(B_1+\sigma^2)\mathrm {tr}(A_1\circ A_3)+2\Delta^2\mathrm {tr}[(A_0\mathrm {diag}(A_1))^2] \end{align*} \item[(ii)] under the second fixed alternative (\ref{f2}), \begin{align*} \frac{n}{\sigma_{A_2}}(T_n-||{\bf D}^{-1/2}\boldsymbols ({\bm\beta}-{\bm\beta}_0)||^2)\mathop{\rightarrow}\limits^{d} N(0,1) \end{align*} where \begin{align*} \sigma_{A_2}^2=(B_1+\sigma^2)B_3+B_2^2+\Delta\mathrm {tr}(A_1\circ A_2). \end{align*} \end{itemize} \end{thm} The proof of Theorem \ref{th2} is contained in a longer version of this article. The above theorem implies that the asymptotic power of the test under the first fixed alternative (\ref{f1}) is \begin{align*} \beta_{T_n}(||{\bm\beta}-{\bm\beta}_0||)\approx\Phi\left(-\frac{\sqrt{2\mathrm {tr}({\bf R}^2)}\sigma^2z_{\alpha}}{\sigma_{A_1}}+\frac{n||{\bf D}^{-1/2}\boldsymbols\delta_{{\bm\beta}}||^2}{\sigma_{A_1}}\right) \end{align*} Note that $\sigma_{A_1}^{-1}\sqrt{2\mathrm {tr}({\bf R}^2)}\sigma^2$ is always bounded because $B_1$ is not $o(1)$ and $\sigma_{A_1}^2>2B_1^2\mathrm {tr}({\bf R}^2)$. When $B_1 \to \infty$, the first term converges to $0$, and then our test attains at least $50\%$ power in this case. Furthermore, if $n\sigma_{A_1}^{-1}||{\bf D}^{-1/2}\boldsymbols\delta_{{\bm\beta}}||^2\to\infty$, the power will converge to $1$. 
The asymptotic power of the test under the second fixed alternative (\ref{f2}) is \begin{align*} \beta_{T_n}(||{\bm\beta}-{\bm\beta}_0||)\approx\Phi\left(-\frac{\sqrt{2\mathrm {tr}({\bf R}^2)}\sigma^2z_{\alpha}}{\sqrt{n-1}\sigma_{A_2}}+\frac{n||{\bf D}^{-1/2}\boldsymbols\delta_{{\bm\beta}}||^2}{\sigma_{A_2}}\right) \end{align*} Under the fixed alternative (\ref{f2}), $\frac{1}{n}\mathrm {tr}({\bf R}^2)=o(\sigma_{A_2}^2)$, which implies that the first term converges to $0$. Then our test attains at least $50\%$ power in this case. Similarly, if $n\sigma_{A_2}^{-1}||{\bf D}^{-1/2}\boldsymbols\delta_{{\bm\beta}}||^2\to \infty$, our test is consistent. \section*{\small References} {\footnotesize \baselineskip 10pt \begin{description} \item Anderson, T. W. (2003). {\it An Introduction to Multivariate Statistical Analysis.} Hoboken, NJ: Wiley. \item Bai, Z. and Saranadasa, H. (1996). Effect of high dimension: by an example of a two sample problem. {\it Statistica Sinica}, {\bf 6}, 311--329. \item Chen, S. X. and Qin, Y-L. (2010). A two-sample test for high-dimensional data with applications to gene-set testing. {\it The Annals of Statistics}, {\bf 38}, 808--835. \item Chen, S. X., Zhang, L. -X. and Zhong, P. -S. (2010). Tests for high-dimensional covariance matrices. {\it Journal of the American Statistical Association}, {\bf 105}, 810--815. \item Efron, B., and Tibshirani, R. (2007). On testing the significance of sets of genes, {\it The Annals of Applied Statistics}, {\bf 1}, 107--129. \item Fan, J., and Lv, J. (2008). Sure independence screening for ultrahigh dimensional feature space, {\it Journal of the Royal Statistical Society, Ser. B}, {\bf 70}, 849--911. \item Fan, J., Hall, P., and Yao, Q. (2007). To how many simultaneous hypothesis tests can normal, Student's $t$ or bootstrap calibrations be applied, {\it Journal of the American Statistical Association}, {\bf 102}, 1282--1288. \item Feng, L., Zou, C., Wang, Z. and Chen, B. (2013). Rank-based score tests for high-dimensional regression coefficients, {\it Electronic Journal of Statistics}, {\bf 7}, 2131--2149. \item Feng, L., Zou, C., Wang, Z. and Zhu, L. (2014). Two-sample Behrens-Fisher problem for high-dimensional data, {\it Statistica Sinica}, To appear. \item Goeman, J., Finos, L., and van Houwelingen, J. C. (2011). Testing against a high dimensional alternative in the generalized linear model: asymptotic type I error control, {\it Biometrika}, {\bf 98}, 381--390. \item Goeman, J., Van de Geer, S. A. and Van Houwelingen, J. C. (2006). Testing against a high-dimensional alternative. {\it Journal of the Royal Statistical Society, Ser. B}, {\bf 68}, 477--493. \item Hall, P., and Heyde, C. C. (1980), {\it Martingale Limit Theory and Its Application}, New York: Academic Press. \item Kosorok, M. R., and Ma, S. (2007). Marginal asymptotics for the ``Large p, Small n'' paradigm: with applications to microarray data, {\it The Annals of Statistics}, {\bf 35}, 1456--1486. \item Lee, A. J. (1990), {\it U-Statistics: Theory and Practice}, Marcel Dekker. \item Meinshausen, N. (2008). Hierarchical testing of variable importance, {\it Biometrika}, {\bf 95}, 265--278. \item Newton, M., Quintana, F., Den Boon, J., Sengupta, S., and Ahlquist, P. (2007), Random-Set Methods Identify Distinct Aspects of the Enrichment Signal in Gene-Set Analysis, {\it The Annals of Applied Statistics}, {\bf 1}, 85--106. \item Portnoy, S. (1984). 
Asymptotic behavior of the M-Estimators of $p$-regression parameters when $p^2/n$ is large: consistency, {\it The Annals of Statistics}, {\bf 12}, 1298--1309. \item Portnoy, S. (1985). Asymptotic behavior of the M-Estimators of $p$-regression parameters when $p^2/n$ is large: normal approximation, {\it The Annals of Statistics}, {\bf 13}, 1403--1417. \item Schott, J. R. (2005). Testing for complete independence in high dimensions. {\it Biometrika}, {\bf 92}, 951--956. \item Subramanian, A., Tamayo, P., Mootha, V. K., Mukherjee, S., Ebert, B. L., Gillette, M. A., Paulovich, A., Pomeroy, S. L., Golub, T. R., Lander, E. S., and Mesirov, J. P. (2005), Gene Set Enrichment Analysis: A Knowledge-Based Approach for Interpreting Genome-Wide Expression Profiles, {\it Proceedings of the National Academy of Sciences}, {\bf 102}, 15545--15550. \item Wang, H. (2009). Forward regression for ultra-high dimensional variable screening. {\it Journal of the American Statistical Association}, {\bf 104}, 1512--1524. \item Zhong, P. S. and Chen, S. X. (2011). Tests for high dimensional regression coefficients with factorial designs. {\it Journal of the American Statistical Association}, {\bf 106}, 260--274. \end{description}} \end{document}
\begin{document} \begin{abstract} In this paper, we introduce quotients of exact categories by percolating subcategories. This approach extends earlier localization theories by Cardenas and Schlichting for exact categories, allowing new examples. Given a percolating subcategory ${\cal A}$ of an exact category ${\cal E}$, the quotient ${\cal E}{/\mkern-6mu/} {\cal A}$ is constructed in two steps. In the first step, we associate a set $S_{\cal A} \subseteq \Mor({\cal E})$ to ${\cal A}$ and consider the localization ${\cal E}[S^{-1}_{\cal A}]$. In general, ${\cal E}[S_{\cal A}^{-1}]$ need not be an exact category, but will be a one-sided exact category. In the second step, we take the exact hull ${\cal E} {/\mkern-6mu/} {\cal A}$ of ${\cal E}[S_{\cal A}^{-1}]$. The composition ${\cal E} \to {\cal E}[S_{\cal A}^{-1}] \to {\cal E} {/\mkern-6mu/} {\cal A}$ satisfies the 2-universal property of a quotient in the 2-category of exact categories. We formulate our results in slightly greater generality, allowing us to start from a one-sided exact category. Additionally, we consider a type of percolating subcategory which guarantees that the morphisms of the set $S_{\cal A}$ are admissible. In related work, we show that these localizations induce Verdier localizations on the level of the bounded derived category. \end{abstract} \title{Localizations of (one-sided) exact categories} \tableofcontents \section{Introduction} \addtocontents{toc}{\protect\setcounter{tocdepth}{1}} Quotients of abelian and triangulated categories are ubiquitous in geometry, representation theory, and $K$-theory. In these settings, the quotients are obtained by localizing the abelian or triangulated category at an appropriate multiplicative set of morphisms. Quillen exact categories provide a more flexible framework to study homological and $K$-theoretic properties than abelian categories. For an exact category ${\cal C}$, quotient constructions similar to those mentioned before were given in \cite{Cardenas98, Schlichting04}. Here, the Serre subcategory (of an abelian category) or thick subcategory (of a triangulated category) is replaced by a subcategory that \emph{localizes} ${\cal C}$ (see \cite[4.0.35]{Cardenas98} or \cite[appendix A]{Levine85}; in our terminology, such subcategories are called \emph{two-sided admissibly percolating} subcategories) or by a left or right \emph{special filtering} subcategory (see \cite{Schlichting04}). In both cases, the quotient categories are obtained by localizing at a class of morphisms, and they satisfy the expected homological and $K$-theoretic properties. However, there are some natural examples that are not accommodated by these constructions. It was observed in \cite[example 4]{Braunling20} that, in the category $\LCA$ of locally compact abelian groups, neither the subcategory $\LCA_{\mathsf{C}}$ of compact abelian groups nor the subcategory $\LCA_{\mathsf{D}}$ of discrete abelian groups satisfies the s-filtering condition used in \cite{Schlichting04}, nor do these examples satisfy the conditions in \cite{Cardenas98}. Another example comes from the theory of glider representations \cite{CaenepeelVanOystaeyen19book}. Here, one can construct the category of glider representations as a localization of an exact category (namely the category of pregliders \cite{HenrardvanRoosmalen20a}), but this localization is not described by \cite{Cardenas98, Schlichting04}. Such examples indicate a need for a more general localization theory. In this paper, we extend the previous localization theories. 
We introduce a (left or right) percolating subcategory ${\cal A}$ of an exact category ${\cal C}$ and describe the quotient category ${\cal C} {/\mkern-6mu/} {\cal A}$ (we opt for the notation ${\cal C} {/\mkern-6mu/} {\cal A}$ for the quotient in the 2-category of exact categories; the notation ${\cal C} / {\cal A}$ will be used for the quotient in the 2-category of conflation categories, see below). When ${\cal C}$ is an abelian category, the notion of a left/right percolating subcategory reduces to the notion of a Serre subcategory. This generalizes the constructions of both \cite{Cardenas98, Schlichting04}. \begin{theorem}\label{theorem:IntroductionExact} Let ${\cal A}$ be a left or right percolating subcategory in an exact category ${\cal C}$. There is an exact functor $Q\colon {\cal C} \to {\cal C} {/\mkern-6mu/} {\cal A}$ between exact categories, satisfying the usual 2-universal property of a quotient. \end{theorem} To discuss this result, it will be useful to have the following definition. A \emph{conflation category} is an additive category ${\cal C}$ together with a class of distinguished kernel-cokernel pairs, called \emph{conflations}, closed under isomorphisms. A functor between conflation categories is called \emph{(conflation-)exact} if it preserves conflations. The exact category ${\cal C}$ in theorem \ref{theorem:IntroductionExact} is a conflation category. In contrast to the aforementioned quotient construction for abelian or triangulated categories (as well as the localizations of exact categories in \cite{Cardenas98, Schlichting04}), the quotient ${\cal C} {/\mkern-6mu/} {\cal A}$ in theorem \ref{theorem:IntroductionExact} is not necessarily a localization of ${\cal C}$ at a set of morphisms. Instead, our construction proceeds in two steps. In the first step, we invert a class of morphisms in ${\cal C}$ that need to become invertible under the (exact) quotient functor $Q$, namely inflations with cokernel in ${\cal A}$ and deflations with kernel in ${\cal A}$, as well as their compositions (we write $S_{\cal A}$ for this set; a morphism in $S_{\cal A}$ is called a weak isomorphism). When ${\cal A}$ is a left or right percolating subcategory, the set $S_{\cal A}$ of weak isomorphisms is a left or right multiplicative system, respectively, and we show that ${\cal C}[S_{\cal A}^{-1}]$ is the quotient ${\cal C} / {\cal A}$ in the 2-category of conflation categories. In general, the localization ${\cal C}[S_{\cal A}^{-1}]$ can fail to be an exact category. However, we show that ${\cal C}[S_{\cal A}^{-1}]$ is a one-sided exact category (see \S\ref{subsection:IntroductionExact}). By \cite{Rosenberg11}, a one-sided exact category admits a 2-universal embedding into its exact hull. The second step of the proof of theorem \ref{theorem:IntroductionExact} is to embed the localization ${\cal C}[S_{\cal A}^{-1}]$ into its exact hull; this is then the quotient ${\cal C} {/\mkern-6mu/} {\cal A}$ of ${\cal C}$ by ${\cal A}$ in the 2-category of exact categories. In this paper, we focus on the first step, namely understanding the category ${\cal C}[S_{\cal A}^{-1}].$ \subsection{Exact and one-sided exact categories}\label{subsection:IntroductionExact} By considering localizations of exact categories at percolating subcategories, we naturally arrive at the setting of one-sided exact categories. We start by establishing some terminology. 
A Quillen \emph{exact category} is a conflation category, satisfying some additional axioms, which we recall in detail in definition \ref{definition:RightExact}. Following \cite{Keller90}, we refer to the kernel morphism of a conflation as an \emph{inflation} and to the cokernel morphism as a \emph{deflation}. The original axioms of a Quillen exact category, as given in \cite{Quillen73}, can be partitioned into two dual sets: one set only referring to inflations, and one set only referring to deflations. In a \emph{one-sided exact category}, only one of these sets is required to hold (see \cite{BazzoniCrivei13,Rump11}). Thus, for a \emph{deflation-exact category}, we require a set of conflations such that the class of deflations contains all isomorphisms, and is stable under both composition and base change. One-sided exact categories still enjoy many useful homological properties (see \cite{BazzoniCrivei13}). Similar one-sided exact structures have occurred in several guises throughout the literature. Our main source of examples is based on left or right almost abelian categories (see \cite{Rump01}). The axioms of a one-sided exact category are closely related to those of a Grothendieck pretopology (see \cite{Rosenberg11}), to homological categories (see \cite{BorceuxBourn04}), and to categories with fibrations (or cofibrations) and Waldhausen categories (see \cite{Weibel13}). The latter allows for a $K$-space to be associated to a one-sided exact category. The main part of the proof of theorem \ref{theorem:IntroductionExact} is to understand the localization of a (one- or two-sided) exact category ${\cal C}$ with respect to the set of weak isomorphisms $S_{\cal A}$ associated to a percolating subcategory ${\cal A} \subseteq {\cal C}$. In the 2-category of conflation categories, the localization ${\cal C} \to {\cal C}[S_{\cal A}^{-1}]$ satisfies the 2-universal property of a quotient ${\cal C} \to {\cal C} / {\cal A}.$ This is expressed in the following theorem (see theorem \ref{theorem:Maintheorem} in the text, as well as propositions \ref{proposition:MinimalConditionsRMS}, \ref{proposition:InterpretationOfP4}, and \ref{proposition:QuotientInConflationCategories}). \begin{theorem}\label{theorem:IntroductionMain} Let ${\cal C}$ be a deflation-exact category and let ${\cal A} \subseteq {\cal C}$ be a deflation-percolating subcategory. There is an exact functor $Q\colon {\cal C} \to {\cal C} / {\cal A}$ between deflation-exact categories such that $Q({\cal A}) = 0$ and which is 2-universal with respect to this property. Furthermore, \begin{enumerate} \item the corresponding set of weak isomorphisms $S_{\cal A}$ is a right multiplicative system and the category ${\cal C} / {\cal A}$ is equivalent to the corresponding localization ${\cal C}[S^{-1}_{\cal A}]$, and \item $Q(f\colon X \to Y)$ is zero if and only if $f$ factors through an object of ${\cal A}$. \end{enumerate} \end{theorem} As $Q$ is given by a localization at a right multiplicative set, we know that it commutes with finite limits (but not necessarily with finite colimits). The main part of the proof of theorem \ref{theorem:IntroductionMain} is to establish that $Q$ maps a conflation in ${\cal C}$ to a kernel-cokernel pair in ${\cal C}[S^{-1}_{\cal A}].$ This is done in proposition \ref{proposition:CokernelsDescend}. 
\subsection{Admissibly percolating subcategories} Even though the set $S_{\cal A}$ of weak isomorphisms in theorem \ref{theorem:IntroductionMain} is a right multiplicative system, it might be difficult to determine whether a given morphism in ${\cal C}$ is a weak isomorphism. In many examples of interest, weak isomorphisms are \emph{admissible} or \emph{strict} (meaning that they admit a deflation-inflation factorization). We provide a class of percolating subcategories for which this is always the case, called \emph{admissibly percolating subcategories} or \emph{strictly percolating subcategories} (see definition \ref{Definition:AbelianPercolating}), and show that the associated weak isomorphisms are admissible morphisms. Moreover, admissibly percolating subcategories satisfy some additional desirable properties, such as the two-out-of-three property (proposition \ref{proposition:2OutOf3}) and saturation (proposition \ref{proposition:Saturation}) for the weak isomorphisms. The subcategories considered in \cite{Cardenas98} and the aforementioned subcategories $\LCA_{\mathsf{C}}$ and $\LCA_{\mathsf{D}}$ of the category of locally compact abelian groups $\LCA$ are examples of admissibly percolating subcategories. Our main results about admissibly percolating subcategories are summarized in the following theorem (combining theorem \ref{theorem:WeakIsomorphismsEqualAAInverseIsomorphisms} and propositions \ref{proposition:2OutOf3} and \ref{proposition:Saturation}). \begin{theorem} Let ${\cal C}$ be a deflation-exact category, and let ${\cal A} \subseteq {\cal C}$ be an admissibly deflation-percolating subcategory. \begin{enumerate} \item Every weak isomorphism is admissible. \item The set $S_{\cal A}$ of weak isomorphisms is saturated and satisfies the 2-out-of-3 property. \end{enumerate} \end{theorem} \subsection{Applications and recognition results} In \S\ref{section:Examples}, we provide several tools for recognizing percolating subcategories in various contexts. In particular, we obtain the following useful proposition. \begin{proposition}\label{proposition:IntroductionRecognitionToolI} Let ${\cal C}$ be a deflation quasi-abelian category. A full subcategory ${\cal A}$ of ${\cal C}$ is (strongly) deflation-percolating if and only if ${\cal A}$ is a Serre subcategory which is additionally closed under subobjects. \end{proposition} As an easy consequence of this proposition (and its dual), the subcategory $\LCAf\subset \LCA$ of finite abelian groups is a two-sided percolating subcategory. It follows that $\LCA/\LCAf$ is an exact category; in fact, proposition \ref{proposition:TwoSidedQuotientOfQuasiAbelianIsQuasiAbelian} shows that two-sided quotients of quasi-abelian categories are quasi-abelian, hence $\LCA/\LCAf$ is a quasi-abelian category. Torsion theories in exact categories give rise to percolating subcategories as well (see proposition \ref{proposition:TorsionfreeIsPercolating} and corollary \ref{corollary:RightConflationExact} in the text). \begin{theorem} Let $({\cal T}, {\cal F})$ be a cohereditary torsion theory in an exact category ${\cal C}$. \begin{enumerate} \item The torsion-free class ${\cal F}$ is a right filtering subcategory of ${\cal C}$.
\item If the functor ${\cal C} \to {\cal C}\colon C \mapsto C_F$ mapping an object of ${\cal C}$ to its torsion-free quotient is right conflation-exact (i.e.~for any conflation $X \rightarrowtail Y \twoheadrightarrow Z$, the map $Y_F \to Z_F$ is a deflation and is the cokernel of $X_F \to Y_F$, see definition \ref{definition:RightConflationExact}), then ${\cal F}$ is a deflation-percolating subcategory of ${\cal C}$. \item If the torsion-free functor $L\colon {\cal C}\to {\cal F}$ is an exact functor, then ${\cal F}$ is a special right filtering subcategory of ${\cal C}$. \end{enumerate} \end{theorem} As a consequence of this theorem, we show that the category $\mathsf{TAb}_{\mathsf{triv}}$ of topological abelian groups with the trivial (or indiscrete) topology is admissibly inflation-percolating in the category $\mathsf{TAb}$ of all topological abelian groups (see example \ref{example:IndiscreteAndHausdorff}). In a follow-up paper (see \cite{HenrardvanRoosmalen19b}), we consider the derived category of a one-sided exact category and show that the localization sequence ${\cal A} \to {\cal C} \to {\cal C} / {\cal A}$ given by a percolating subcategory in a one-sided exact category, yields a Verdier localization $\Db({\cal C}) \to \Db ({\cal C} / {\cal A})$ as in \cite{Schlichting04}. Moreover, we show that the natural embedding ${\cal C} \hookrightarrow \overline{{\cal C}}$ of a one-sided exact category into its exact hull lifts to a triangle equivalence $\Db({\cal C}) \to \Db(\overline{{\cal C}})$. \textbf{Acknowledgments.} We are grateful to Frederik Caenepeel and Freddy Van Oystaeyen for useful discussions and ideas leading to this paper. The authors are grateful to Sven Ake-Wegner, Oliver Braunling, and Sondre Kvamme for motivating us to extend our earlier results to the current generality. The second author is currently a postdoctoral researcher at FWO (12.M33.16N). \addtocontents{toc}{\protect\setcounter{tocdepth}{2}} \section{Preliminaries}\label{section:Preliminaries} Throughout this paper, we will assume that all categories are small. \subsection{Properties of pullbacks and pushouts} For easy reference, it will be convenient to collect some properties of pullbacks and pushouts. We start by recalling the pullback lemma. \begin{proposition}[Pullback lemma] Consider the following commutative diagram in any category: \[\xymatrix{ X\ar[r] \ar[d] & Y\ar[r] \ar[d] & Z \ar[d] \\ X' \ar[r] & Y' \ar[r] & Z' }\] Assume that the right square is a pullback. The left square is a pullback if and only if the outer rectangle is a pullback. \end{proposition} The following statement is \cite[proposition~I.13.2]{Mitchell65} together with its dual. \begin{proposition}\label{proposition:MitchellPullbackPushout}\label{proposition:MitchellPullback} Let ${\cal C}$ be any pointed category. \begin{enumerate} \item\label{enumerate:MitchellPullback} Consider a diagram \[ \xymatrix{ {X'} \ar[r]^{f'} & {Y'} \ar[d]^{h} & \\ {X} \ar[r]^{f} & {Y} \ar[r]^{g} & Z } \] where $f$ is the kernel of $g$. The left-hand side can be completed to a pullback square if and only if $f'$ is the kernel of $gh$. \item\label{enumerate:MitchellPushout} Consider a diagram \[ \xymatrix{ {X} \ar[r]^{f} & {Y} \ar[d]^{h} \ar[r]^{g} & Z \\ & {Y'} \ar[r]^{g'} & Z'} \] where $g$ is the cokernel of $f$. The right-hand side can be completed to a pushout square if and only if $g'$ is the cokernel of $hf$. 
\end{enumerate} \end{proposition} The following well-known proposition states that pullbacks preserve kernels and pushouts preserve cokernels (see \cite[lemma 1]{Rump11}). \begin{proposition}\label{proposition:PullbacksPreserveKernels}\label{proposition:PushoutsPreserveCokernels} Let ${\cal C}$ be any pointed category. Consider the following diagram in ${\cal C}$: \[\xymatrix{ A \ar[d]^{g} \ar[r]^{f} & B \ar[d]^{h} \\ C \ar[r]^{f'} & D }\] \begin{enumerate} \item Assume that the commutative square is a pullback. The morphism $f$ admits a kernel if and only if $f'$ admits a kernel. In this case, the composition $\ker(f) \to A \to C$ is the kernel of $f'.$ \item Assume that the commutative square is a pushout. The morphism $f$ admits a cokernel if and only if $f'$ admits a cokernel. In this case, the composition $B \to D \to \coker(f')$ is the cokernel of $f.$ \end{enumerate} \end{proposition} \subsection{Localizations and right calculus of fractions}\label{subsection:RMS} The material of this section is based on \cite{GabrielZisman67, KashiwaraSchapira06}. \begin{definition}\label{Definition:LocalizationWithRespectToMorphisms} Let ${\cal C}$ be any category and let $S \subseteq \Mor {\cal C}$ be any subset of morphisms of ${\cal C}$. The \emph{localization of ${\cal C}$ with respect to $S$} is a universal functor $Q\colon {\cal C} \to S^{-1} {\cal C}$ such that $Q(s)$ is invertible, for all $s \in S$. \end{definition} \begin{remark} By universality, we mean that any functor $F\colon {\cal C} \to {\cal D}$ such that every morphism in $S$ becomes invertible in ${\cal D}$ factors uniquely through $Q\colon {\cal C} \to S^{-1} {\cal C}$. Put differently, for every category ${\cal D}$, the functor $(Q \circ -)\colon \Fun(S^{-1}{\cal C}, {\cal D}) \to \Fun({\cal C}, {\cal D})$ induces an isomorphism between $\Fun(S^{-1}{\cal C}, {\cal D})$ and the full subcategory of $\Fun({\cal C}, {\cal D})$ consisting of those functors $F\colon {\cal C} \to {\cal D}$ which make every $s \in S$ invertible. \end{remark} \begin{remark} Since all the categories in this paper are small, localizations always exist. \end{remark} In this paper, we often consider localizations with respect to so-called right multiplicative systems. \begin{definition}\label{definition:RMS} Let ${\cal C}$ be a category and let $S$ be a set of arrows. Then $S$ is called a \emph{right multiplicative system} if it has the following properties: \begin{enumerate}[label=\textbf{RMS\arabic*},start=1] \item\label{RMS1} For every object $A$ of ${\cal C}$ the identity $1_A$ is contained in $S$. Composition of composable arrows in $S$ is again in $S$. \item\label{RMS2} Every solid diagram \[\xymatrix{ X \ar@{.>}[r]^{g} \ar@{.>}[d]_{t}^{\rotatebox{90}{$\sim$}}& Y\ar[d]_{s}^{\rotatebox{90}{$\sim$}}\\ Z\ar[r]_{f} & W }\] with $s\in S$ can be completed to a commutative square with $t\in S$. \item\label{RMS3} For every pair of morphisms $f,g\colon X\rightarrow Y$ and $s\in S$ with source $Y$ such that $s\circ f= s\circ g$ there exists a $t\in S$ with target $X$ such that $f\circ t =g\circ t$. \end{enumerate} Often arrows in $S$ will be endowed with $\sim$. \end{definition} For localizations with respect to a right multiplicative system, we have the following description of the localization. \begin{construction}\label{construction:Localization} Let ${\cal C}$ be a category and $S$ a right multiplicative system in ${\cal C}$. We define a category $S^{-1}{\cal C}$ as follows: \begin{enumerate} \item We set $\Ob(S^{-1}{\cal C})=\Ob({\cal C})$. 
\item Let $f_1\colon X_1\rightarrow Y, s_1\colon X_1\rightarrow X, f_2\colon X_2\rightarrow Y, s_2\colon X_2\rightarrow X$ be morphisms in ${\cal C}$ with $s_1,s_2\in S$. We call the pairs $(f_1,s_1), (f_2,s_2) \in (\Mor {\cal C}) \times S$ equivalent (denoted by $(f_1,s_1) \sim (f_2,s_2)$) if there exists a third pair $(f_3\colon X_3\rightarrow Y,s_3\colon X_3\rightarrow X) \in (\Mor {\cal C}) \times S$ and morphisms $u\colon X_3\rightarrow X_1, v\colon X_3\rightarrow X_2$ such that \[\xymatrix@!{ & X_1\ar[ld]_{s_1}^{}\ar[rd]^{f_1} & \\ X &X_3\ar[d]^{v}\ar[u]_{u}\ar[l]_{s_3}^{}\ar[r]_{f_3} & Y\\ & X_2 \ar[ul]^{s_2}_{}\ar[ur]_{f_2}& }\] is a commutative diagram. \item $\Hom_{S^{-1}{\cal C}}(X,Y)=\left\{(f,s)\mid f\in \Hom_{{\cal C}}(X',Y), s\colon X'\rightarrow X \mbox{ with } s\in S \right\} / \sim$ \item The composition of $(f\colon X'\rightarrow Y, s\colon X'\rightarrow X)$ and $(g\colon Y'\rightarrow Z, t\colon Y'\rightarrow Y)$ is given by $(g\circ h\colon X''\rightarrow Z,s\circ u\colon X''\rightarrow X)$ where $h$ and $u$ are chosen to fit in a commutative diagram \[\xymatrix{ X''\ar[r]^{h}\ar[d]_{u}^{\rotatebox{90}{$\sim$}} & Y'\ar[d]_{t}^{\rotatebox{90}{$\sim$}}\\ X'\ar[r]^{f} & Y }\] which exists by \ref{RMS2}. \end{enumerate} \end{construction} \begin{proposition}\label{proposition:BasicPropertiesOfLocalization} Let ${\cal C}$ be a category and $S$ a right multiplicative system in ${\cal C}$. \begin{enumerate} \item The assignment $X\mapsto X$ and $(f\colon X\rightarrow Y)\mapsto (f\colon X\rightarrow Y, 1_X\colon X\rightarrow X)$ defines a functor $Q\colon {\cal C}\rightarrow S^{-1}{\cal C}$ called the \emph{localization functor} and is a localization of ${\cal C}$ with respect to the set $S$ as in definition \ref{Definition:LocalizationWithRespectToMorphisms}. \item For any $s\in S$, the map $Q(s)$ is an isomorphism. \item The localization functor commutes with finite limits. \item If ${\cal C}$ is an additive category, then $S^{-1}{\cal C}$ is an additive category and the localization functor $Q$ is an additive functor. \end{enumerate} \end{proposition} \begin{remark}\label{remark:PreservationOfKernelsAndPullbacks} It follows that if ${\cal C}$ is an additive category and $S$ a right multiplicative system the functor $Q$ preserves kernels and pullbacks. \end{remark} \begin{definition} Let ${\cal C}$ be any category and let $S \subseteq \Mor {\cal C}$ be any subset. \begin{enumerate} \item We say that $S$ satisfies the \emph{2-out-of-3 property} if, for any two composable morphisms $f,g \in \Mor {\cal C}$, we have that if two of $f,g,fg$ are in $S$, then so is the third. \item Let $Q \colon {\cal C} \to S^{-1}{\cal C}$ be the localization of ${\cal C}$ with respect to $S$. We say that $S$ is \emph{saturated} if ${S} = \{f \in \Mor {\cal C} \mid \mbox{$Q(f)$ is invertible}\}.$ \end{enumerate} \end{definition} \section{Basic results on one-sided exact categories} We now recall the notion of a one-sided exact category as introduced by \cite{BazzoniCrivei13,Rosenberg11,Rump10}. In the remainder of the text we follow the conventions of Rosenberg \cite{Rosenberg11}, that is, one-sided exact categories containing all axioms referring to the deflation-side are called \emph{right exact categories}. This convention is opposite to the terminology used by \cite{BazzoniCrivei13}. In order to avoid further confusion, we prefer to use the terminology of \emph{deflation-exact categories} over right exact categories and dually \emph{inflation-exact} over left exact. 
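We record two standard examples for orientation before giving the formal definitions: every Quillen exact category, and in particular every abelian category equipped with all short exact sequences as conflations, is both deflation-exact and inflation-exact; moreover, any additive category endowed with only the split kernel-cokernel pairs as conflations is an exact category. The one-sided axioms below isolate the deflation part, respectively the inflation part, of such structures.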
\begin{definition}\label{Definition:ConflationCategory} Let ${\cal C}$ be an additive category. A sequence $A\xrightarrow{f} B\xrightarrow{g} C$ in ${\cal C}$ where $f = \ker g$ and $g = \coker f$ is called a \emph{kernel-cokernel pair}. \\ A \emph{conflation category} ${\cal C}$ is an additive category ${\cal C}$ together with a chosen class of kernel-cokernel pairs, closed under isomorphisms, called \emph{conflations}. A map that occurs as the kernel (or the cokernel) in a conflation is called an \emph{inflation} (or a \emph{deflation}). Inflations will often be denoted by $\rightarrowtail$ and deflations by $\twoheadrightarrow$. A map $f\colon X\rightarrow Y$ is called an \emph{admissible morphism} if it admits a deflation-inflation factorization, i.e. $f$ factors as $X\twoheadrightarrow Z\rightarrowtail Y$. The set of admissible morphisms in ${\cal C}$ is denoted by $\Adm({\cal C})$.\\ Let ${\cal C}$ and ${\cal D}$ be conflation categories. An additive functor $F\colon {\cal C}\rightarrow {\cal D}$ is called \emph{exact} or \emph{conflation-exact} if conflations in ${\cal C}$ are mapped to conflations in ${\cal D}$. \end{definition} \begin{definition}\label{definition:RightExact} A \emph{right exact category} or a \emph{deflation-exact category} ${\cal C}$ is a conflation category satisfying the following axioms: \begin{enumerate}[label=\textbf{R\arabic*},start=0] \item\label{R0} The identity morphism $1_0\colon 0\rightarrow 0$ is a deflation. \item\label{R1} The composition of two deflations is again a deflation. \item\label{R2} The pullback of a deflation along any morphism exists and is again a deflation, i.e. \[\xymatrix{ X\ar@{.>>}[d]\ar@{.>}[r] & Y \ar@{->>}[d]\\ Z\ar[r] & W }\] \end{enumerate} Dually, we call an additive category ${\cal C}$ \emph{left exact} or \emph{inflation-exact} if the opposite category ${\cal C}^{op}$ is right exact. Explicitly, an inflation-exact category is a conflation category such that the inflations satisfy the following axioms: \begin{enumerate}[label=\textbf{L\arabic*},start=0] \item\label{L0} The identity morphism $1_0\colon 0\rightarrow 0$ is an inflation. \item\label{L1} The composition of two inflations is again an inflation. \item\label{L2} The pushout of an inflation along any morphism exists and is again an inflation, i.e. \[\xymatrix{ X\ar@{>->}[d]\ar@{->}[r] & Y \ar@{>.>}[d]\\ Z\ar@{.>}[r] & W }\] \end{enumerate} \end{definition} \begin{definition}\label{definition:StrongRightExact} Let ${\cal C}$ be a conflation category. In addition to the properties listed in definition \ref{definition:RightExact}, we will also consider the following axioms: \begin{enumerate}[align=left] \myitem{\textbf{R0}$^\ast$}\label{R0*} For any $A\in \Ob({\cal C})$, $A\rightarrow 0$ is a deflation. \myitem{\textbf{R3}}\label{R3} \hspace{0.175cm}If $i\colon A\rightarrow B$ and $p\colon B\rightarrow C$ are morphisms in ${\cal C}$ such that $p$ has a kernel and $pi$ is a deflation, then $p$ is a deflation. \myitem{\textbf{L0}$^\ast$}\label{L0*} For any $A\in \Ob({\cal C})$, $0\rightarrow A$ is an inflation. \myitem{\textbf{L3}}\label{L3} \hspace{0.175cm}If $i\colon A\rightarrow B$ and $p\colon B\rightarrow C$ are morphisms in ${\cal C}$ such that $i$ has a cokernel and $pi$ is an inflation, then $i$ is an inflation. \end{enumerate} A right exact category satisfying \ref{R3} is called \emph{strongly right exact} or \emph{strongly deflation-exact}. Dually, a left exact category satisfying \ref{L3} is called \emph{strongly left exact} or \emph{strongly inflation-exact}. 
\end{definition} \begin{remark}\label{remark:DefExactAndQuillenObscureAxiom} \begin{enumerate} \item An exact category in the sense of Quillen (see \cite{Quillen73}) is a conflation category ${\cal C}$ satisfying axioms \ref{R0} through \ref{R3} and \ref{L0} through \ref{L3}. In \cite[appendix~A]{Keller90}, Keller shows that axioms \ref{R0}, \ref{R1}, \ref{R2}, and \ref{L2} suffice to define an exact category. \item Axioms \ref{R3} and \ref{L3} are sometimes referred to as Quillen's \emph{obscure axioms} (see \cite{Buhler10,ThomasonTrobaugh90}). \item In \cite{Rump11}, the notion of a one-sided exact category includes the obscure axiom. \end{enumerate} \end{remark} \begin{remark} {A deflation-exact category ${\cal C}$ satisfies axiom \ref{R0*} if and only if every split kernel-cokernel pair is a conflation.} \end{remark} \begin{lemma}\label{Lemma:BasicProperties} Let ${\cal C}$ be a deflation-exact category. Then: \begin{enumerate} \item Every isomorphism is a deflation. \item If ${\cal C}$ is strongly deflation-exact, then ${\cal C}$ satisfies \upshape{\ref{R0*}}. \item \itshape Every inflation is a monomorphism. An inflation which is an epimorphism is an isomorphism. \item Every deflation is an epimorphism. A deflation which is a monomorphism is an isomorphism. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Let $f\colon X \to Y$ be an isomorphism. One easily checks that \[\xymatrix{X \ar[r]^f\ar[d] & Y\ar[d]\\ 0 \ar[r] & 0}\] is a pullback diagram. By \ref{R0} and \ref{R2}, we know that $f\colon X \to Y$ is a deflation. \item Since $1_A\colon A\rightarrow A$ is the kernel of $p\colon A\rightarrow 0$ and the composition $0\xrightarrow{i} A \xrightarrow{p} 0$ is a deflation by \ref{R0}, it follows from axiom \ref{R3} that $p$ is a deflation. This establishes \ref{R0*}. \item Every inflation is a kernel and kernels are monic. If an inflation is an epimorphism, then the cokernel is zero. As an inflation is the kernel of its cokernel, we infer that the inflation is an isomorphism. \item Similar.\qedhere \end{enumerate} \end{proof} \begin{proposition}\label{proposition:PushoutIfCokernel}\label{proposition:WhenPushout}\label{proposition:WhenPullback}\label{proposition:PullbackPushout}\label{proposition:InducedSequenceLemma} Let ${\cal C}$ be a deflation-exact category. \begin{enumerate} \item For a commutative square \[\xymatrix{ A\ar@{>->}[r]^{i} \ar[d]^f & B \ar[d]^g\\ A' \ar@{>->}[r]^{i'} & B' }\] where the horizontal arrows are inflations, the following statements are equivalent: \begin{enumerate} \item\label{enumerate:CommutativeSquare1Pushout} the square is a pushout, \item\label{enumerate:CommutativeSquare1Bicartesian} the square is both a pushout and a pullback, \item\label{enumerate:CommutativeSquare1Extension} the square can be extended to a diagram \[\xymatrix{ A\ar@{>->}[r]^{i} \ar[d]^f & B\ar@{->>}[r] \ar[d]^g & C \ar@{=}[d] \\ A' \ar@{>->}[r]^{i'} & B' \ar@{->>}[r] & C }\] where the rows are conflations, \item\label{enumerate:CommutativeSquare1Conflation} the induced sequence $\xymatrix@1{A\ar[r]^-{\begin{psmallmatrix}f\\ i\end{psmallmatrix}} & A'\oplus B\ar[r]^-{\begin{psmallmatrix}-i' & g\end{psmallmatrix}} & B'}$ is a conflation.
\end{enumerate} \item\label{enumerate:SecondPart} For a commutative square \[\xymatrix{ B\ar@{->>}[r]^{p} \ar[d]^f & C \ar[d]^g\\ B' \ar@{->>}[r]^{p'} & C' }\] where the horizontal arrows are deflations, the following statements are equivalent: \begin{enumerate} \item\label{enumerate:CommutativeSquare2Pullback} the square is a pullback, \item\label{enumerate:CommutativeSquare2Bicartesian} the square is both a pushout and a pullback, \item\label{enumerate:CommutativeSquare2Extension} the square can be extended to a diagram \[\xymatrix{ A\ar@{>->}[r] \ar@{=}[d] & B \ar[d]^f \ar@{->>}[r]^{p} & C\ar[d]^g \\ A' \ar@{>->}[r] & B' \ar@{->>}[r]_{p'} & C'}\] where the rows are conflations. \end{enumerate} If ${\cal C}$ satisfies axiom \ref{R0*}, then the previous are equivalent to: \begin{enumerate}[resume] \item\label{enumerate:CommutativeSquare2Conflation} the induced sequence $\xymatrix@1{B\ar[r]^-{\begin{psmallmatrix}f\\ p\end{psmallmatrix}} & B'\oplus C\ar[r]^-{\begin{psmallmatrix}-p' & g\end{psmallmatrix}} & C'}$ is a conflation. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item The implication $\eqref{enumerate:CommutativeSquare1Pushout} \Rightarrow \eqref{enumerate:CommutativeSquare1Extension}$ is straightforward to prove. For the reverse implication, one can verify that the proof of \cite[proposition~2.12]{Buhler10} still holds. If \eqref{enumerate:CommutativeSquare1Extension} holds, then proposition \ref{proposition:MitchellPullbackPushout} shows that the given commutative square is a pullback. This shows that \eqref{enumerate:CommutativeSquare1Bicartesian} holds. The implication $\eqref{enumerate:CommutativeSquare1Bicartesian} \Rightarrow \eqref{enumerate:CommutativeSquare1Pushout}$ is trivial. Assume now that \eqref{enumerate:CommutativeSquare1Extension} holds. Consider the following commutative diagram: \[\xymatrix{ & A'\ar@{=}[r]\ar[d]^{\begin{psmallmatrix}-1_{A'}\\0\end{psmallmatrix}} & A'\ar@{>->}[d]^{i'}\\ A\ar[r]^-{\begin{psmallmatrix}f\\ i\end{psmallmatrix}}\ar@{=}[d] & A'\oplus B\ar[r]^-{\begin{psmallmatrix}-i' & g\end{psmallmatrix}}\ar[d]^{\begin{psmallmatrix}0&1_B\end{psmallmatrix}} & B'\ar@{->>}[d]^{p'}\\ A\ar@{>->}[r]^i & B\ar@{->>}[r]^p & C }\] One readily verifies that the lower right square is a pullback square. Axiom \ref{R2} implies that the middle row is a conflation as required. This establishes the implication $\eqref{enumerate:CommutativeSquare1Extension} \Rightarrow \eqref{enumerate:CommutativeSquare1Conflation}.$ The implication $\eqref{enumerate:CommutativeSquare1Conflation} \Rightarrow \eqref{enumerate:CommutativeSquare1Pushout}$ is trivial. \item The equivalence $\eqref{enumerate:CommutativeSquare2Pullback} \Leftrightarrow \eqref{enumerate:CommutativeSquare2Extension}$ is \cite[proposition~5.4]{BazzoniCrivei13}. The equivalence $\eqref{enumerate:CommutativeSquare2Pullback} \Leftrightarrow \eqref{enumerate:CommutativeSquare2Bicartesian}$ again follows from proposition \ref{proposition:MitchellPullbackPushout}. Assume now that ${\cal C}$ satisfies axiom \ref{R0*}. It is shown in \cite[proposition~5.7]{BazzoniCrivei13} that $\eqref{enumerate:CommutativeSquare2Pullback} \Leftrightarrow \eqref{enumerate:CommutativeSquare2Conflation}$.\qedhere \end{enumerate} \end{proof} \begin{remark} In proposition \ref{proposition:PullbackPushout}, the implication $\eqref{enumerate:CommutativeSquare2Conflation} \Rightarrow \eqref{enumerate:CommutativeSquare2Pullback}$ does not use axiom \ref{R0*}. 
In contrast, the converse $\eqref{enumerate:CommutativeSquare2Pullback} \Rightarrow \eqref{enumerate:CommutativeSquare2Conflation}$ needs axiom \ref{R0*}. To see this, let $C \in {\cal C}$ be any object. By lemma \ref{Lemma:BasicProperties}, the diagram \[\xymatrix{ C\ar@{=}[r] \ar[d] & C \ar[d]\\ 0 \ar@{=}[r] & 0 }\] satisfies the properties in \eqref{enumerate:CommutativeSquare2Pullback}. The implication $\eqref{enumerate:CommutativeSquare2Pullback} \Rightarrow \eqref{enumerate:CommutativeSquare2Conflation}$ would then yield that $C \to 0$ is a deflation; as $C$ was arbitrary, axiom \ref{R0*} holds. \end{remark} \begin{proposition}\label{proposition:FactorizationOfConflationMorphism} Let ${\cal C}$ be a deflation-exact category. Every morphism $(f,g,h)$ between conflations $X \stackrel{i}{\rightarrowtail} Y \stackrel{p}{\twoheadrightarrow} Z$ and $X' \stackrel{i'}{\rightarrowtail} Y' \stackrel{p'}{\twoheadrightarrow} Z'$ factors through some conflation $X' {\rightarrowtail} P {\twoheadrightarrow} Z$: \[\xymatrix{ X\ar@{>->}[r]^i\ar[d]^f & Y\ar@{->>}[r]^p\ar[d] & Z\ar@{=}[d] \\ X'\ar@{=}[d]\ar@{>->}[r] & P \ar@{->>}[r]\ar[d] & Z \ar[d]^h \\ X'\ar@{>->}[r]^{i'} & Y'\ar@{->>}[r]^{p'} & Z'}\] such that the upper-left and lower-right squares are both pullbacks and pushouts. \end{proposition} \begin{proof} The factorization property is \cite[proposition 5.2]{BazzoniCrivei13}. The statements about the pushouts and pullbacks follow from proposition \ref{proposition:PushoutIfCokernel}. \end{proof} \begin{proposition}\label{proposition:PullbackOfInflation} Let ${\cal C}$ be a deflation-exact category. The pullback of an inflation $f$ along a deflation is an inflation $f'$. \end{proposition} \begin{proof} Let $f\colon X \rightarrowtail Z$ be an inflation and $g\colon Y \twoheadrightarrow Z$ be a deflation. Consider the commutative diagram \[\xymatrix{ P \ar@{.>}[r]^{f'} \ar@{.>}[d]^{g'} & Y \ar@{->>}[d]^{g} \\ X \ar@{>->}[r]^{f} & Z \ar@{->>}[r]^-{h} & {\coker(f)} }\] where the square is a pullback diagram and the bottom row is a conflation. It follows from proposition \ref{proposition:MitchellPullbackPushout}(\ref{enumerate:MitchellPullback}) that $f'$ is the kernel of the composition $Y \twoheadrightarrow Z \twoheadrightarrow \coker(f)$, which is a deflation by axiom \ref{R1}; hence $f'$ is an inflation. \end{proof} The following proposition provides a sufficient condition for a subcategory of a deflation-exact category to be deflation-exact. \begin{proposition}\label{proposition:DeflationClosed} Let ${\cal C}$ be a deflation-exact category. Let ${\cal D} \subseteq {\cal C}$ be a full subcategory. The conflation structure of ${\cal C}$ induces a deflation-exact structure on ${\cal D}$ if for every deflation $f\colon Y\to Z$ in ${\cal C}$ with $Y,Z\in {\cal D}$, one has that $\ker(f)\in {\cal D}$. \begin{enumerate} \item If ${\cal C}$ satisfies axiom \ref{R0*}, then so does ${\cal D}$. \item If ${\cal C}$ satisfies axiom \ref{R3}, then so does ${\cal D}$. \end{enumerate} \end{proposition} \begin{proof} The only non-trivial part is to show that ${\cal D}$ inherits axiom \ref{R3} from ${\cal C}$. To that end, let $K_D\to Y\xrightarrow{p}Z$ be a sequence in ${\cal D}$ such that $K_D\to Y$ is the kernel of $p$ and let $i\colon X\to Y$ be a map in ${\cal D}$ such that $p\circ i$ is a deflation. Write $P$ for the pullback of $pi$ along $p$ in ${\cal C}$. By proposition \ref{proposition:PushoutIfCokernel}, we obtain a conflation $P\rightarrowtail Y\oplus X\twoheadrightarrow Z$. It follows that $P\in {\cal D}$.
As the kernel of $\begin{pmatrix}p & pi \end{pmatrix} \colon Y \oplus X \twoheadrightarrow Z$ in ${\cal D}$ is $K_D \oplus X$, one finds that $P\cong K_D\oplus X$. It follows from \cite[proposition~5.9]{BazzoniCrivei13} that $K_D\to Y\to Z$ is a conflation in ${\cal C}$. In particular $K_D$ is the kernel of $Y\to Z$ in ${\cal C}$. This completes the proof. \end{proof} \section{Percolating subcategories}\label{Section:PercolatingSubcategories} Let ${\cal C}$ be a one-sided exact category. In this section, we define the notion of a percolating subcategory of ${\cal C}$. To place this notion in context: if the category ${\cal C}$ is abelian, a subcategory ${\cal A} \subseteq {\cal C}$ is percolating if and only if it is a Serre subcategory; if ${\cal C}$ is an exact category, then the notion of a percolating subcategory is weaker than the notion of a right s-filtering subcategory in \cite{Schlichting04} (this will be verified in proposition \ref{proposition:RecoveredSchlichtingsFramework}). Starting from a percolating subcategory ${\cal A} \subseteq {\cal C}$, we define a set of weak isomorphisms $S_{\cal A}$. This set is a left or right multiplicative system (see proposition \ref{proposition:MinimalConditionsRMS}). We will proceed to establish some technical results which will help to understand the localization ${\cal C}[S_{\cal A}^{-1}],$ chief among them lemma \ref{lemma:LiftingConflations} and proposition \ref{proposition:InterpretationOfP4}. \subsection{Definitions and basic properties} We start by defining percolating subcategories. As this definition does not refer to the deflation-exact structure of ${\cal C}$, we formulate the definition for a more general conflation category. \begin{definition}\label{Definition:GeneralPercolatingSubcategory} Let ${\cal C}$ be a conflation category. A non-empty full subcategory ${\cal A}$ of ${\cal C}$ is called a \emph{right percolating subcategory} or a \emph{deflation-percolating subcategory} of ${\cal C}$ if the following axioms are satisfied: \begin{enumerate}[label=\textbf{P\arabic*},start=1] \item\label{P1} ${\cal A}$ is a \emph{Serre subcategory}, meaning: \[\mbox{ If } A'\rightarrowtail A \twoheadrightarrow A'' \mbox{ is a conflation in ${\cal C}$, then } A\in \Ob({\cal A}) \mbox{ if and only if } A',A''\in \Ob({\cal A}).\] \item\label{P2} For all morphisms $C\rightarrow A$ with $C \in \Ob({\cal C})$ and $A\in \Ob({\cal A})$, there exists a commutative diagram \[\xymatrix{ A'\ar[rd] & \\ C \ar@{->>}[u]\ar[r]& A\\ }\] with $A'\in \Ob({\cal A})$ and where $C \twoheadrightarrow A'$ is a deflation. \item\label{P3} For any composition $\xymatrix{X\ar@{>->}[r]^i & Y\ar[r]^t & T}$ which factors through ${\cal A}$, there exists a commutative diagram \[\xymatrix{ X\ar@{>->}[r]^i\ar@{->>}[d]^f & Y\ar@{->>}[d]^{f'}\ar@/^/[rdd]^t &\\ A\ar@{>->}[r]^{i'}\ar@/_/[rrd] & P\ar@{.>}[rd] &\\ && T }\] with $A \in \Ob({\cal A})$ and such that the square $XYAP$ is a pushout square. \item\label{P4} For all maps $X\stackrel{f}{\rightarrow} Y$ that factor through ${\cal A}$ and for all inflations $A\stackrel{i}{\rightarrowtail} X$ (with $A \in \Ob({\cal A})$) such that $f\circ i=0$, the induced map $\coker(i)\to Y$ factors through ${\cal A}$. \end{enumerate} By dualizing the above axioms one obtains a similar notion of a \emph{left percolating subcategory} or an \emph{inflation-percolating subcategory}.
\end{definition} \begin{remark} In axiom \ref{P3}, we start with a composition $t \circ i$ factoring through an object $B \in \Ob({\cal A}).$ We do not require any compatibility between this object $B \in \Ob({\cal A})$ and the object $A \in \Ob({\cal A})$ in the diagram occurring in the statement of axiom \ref{P3}. \end{remark} \begin{definition}\makeatletter \hyper@anchor{\@currentHref} \makeatother\label{definition:AdditionalPercolatingDefinitions} \begin{enumerate} \item Following the conventions of \cite{Schlichting04}, a non-empty full subcategory ${\cal A}$ of a conflation category ${\cal C}$ satisfying axioms \ref{P1} and \ref{P2} is called \emph{right filtering}. \item If ${\cal A}$ is a right filtering subcategory of ${\cal C}$ such that the map $A'\rightarrow A$ in axiom \ref{P2} can be chosen as a monic map, we will call ${\cal A}$ a \emph{strongly right filtering subcategory}. \item A right percolating subcategory which is also strongly right filtering will be called a \emph{strongly right percolating subcategory} or \emph{strongly deflation-percolating subcategory}. \end{enumerate} The notions of a \emph{left filtering}, \emph{strongly left filtering}, and \emph{strongly left percolating subcategory} are defined dually. \end{definition} \begin{remark}\label{remark:PercolatingRemarks} \begin{enumerate} \item Any deflation-exact category is a deflation-percolating subcategory of itself. \item If ${\cal C}$ is an exact category, then any subcategory ${\cal A}$ satisfying axiom \ref{P2} automatically satisfies axiom \ref{P3} (see \cite[proposition~2.15]{Buhler10}). \item As a deflation-percolating subcategory ${\cal A}$ of ${\cal C}$ is closed under extensions in ${\cal C}$ (by axiom \ref{P1}), the conflations of ${\cal C}$ induce a deflation-exact structure on ${\cal A}$. \end{enumerate} \end{remark} \subsection{Weak isomorphisms}\label{subsection:WeakIsomorphisms} Let $F\colon {\cal C} \to {\cal D}$ be an exact functor between conflation categories. Let ${\cal A} \subseteq {\cal C}$ be a full subcategory and assume that $F({\cal A}) = 0$. It is clear that, for any conflation $X\stackrel{f}{\rightarrowtail} Y \stackrel{g}{\twoheadrightarrow} Z$, we have that $X \in \Ob({\cal A})$ implies that $F(g)$ is an isomorphism. Likewise, $Z \in \Ob({\cal A})$ implies that $F(f)$ is an isomorphism. This observation motivates the following definition (the terminology is based on \cite{Cardenas98,Schlichting04}). \begin{definition}\label{Definition:WeakIsomorphisms} Let ${\cal C}$ be a conflation category and let ${\cal A}$ be a non-empty full subcategory of ${\cal C}$. \begin{enumerate} \item An inflation $f\colon X\rightarrowtail Y$ in ${\cal C}$ is called an \emph{${\cal A}^{-1}$-inflation} if its cokernel belongs to ${\cal A}$. \item A deflation $f\colon X\twoheadrightarrow Y$ in ${\cal C}$ is called an \emph{${\cal A}^{-1}$-deflation} if its kernel belongs to ${\cal A}$. \item A morphism $f\colon X\rightarrow Y$ is called a \emph{weak ${\cal A}^{-1}$-isomorphism} (or simply a \emph{weak isomorphism} if ${\cal A}$ is implied) if it is a finite composition of ${\cal A}^{-1}$-inflations and ${\cal A}^{-1}$-deflations. We often endow weak isomorphisms with ``$\sim$''. \end{enumerate} The set of weak isomorphisms is denoted by $S_{{\cal A}}$. Given a weak isomorphism $f$, the \emph{composition length} of $f$ is defined as the smallest natural number $n$ such that $f$ can be written as a composition of $n$ ${\cal A}^{-1}$-inflations or ${\cal A}^{-1}$-deflations.
\end{definition} The following proposition is a straightforward strengthening of \cite[lemma~1.13]{Schlichting04}. \begin{proposition}\label{proposition:MinimalConditionsRMS} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}$ be a right filtering subcategory. The set $S_{{\cal A}}$ of weak isomorphisms is a right multiplicative system. Moreover, every solid diagram \[\xymatrix{ X\ar@{.>}[r]^g\ar@{.>}[d]_{\rotatebox{90}{$\sim$}}^t & Y\ar[d]_{\rotatebox{90}{$\sim$}}^s\\ Z\ar[r]^f & W }\] with $s\in S_{{\cal A}}$ can be completed to a commutative square such that $t\in S_{{\cal A}}$ and the composition length of $t$ is at most the composition length of $s$.\\ If ${\cal A}$ is a strongly right filtering subcategory, then the square in axiom \ref{RMS2} can be chosen as a pullback-square. \end{proposition} \begin{proof} Axiom \ref{RMS1} is trivial. For axiom \ref{RMS2} (using notation as in definition \ref{definition:RMS}), we can easily reduce to the case where $s\colon Y \stackrel{\sim}{\rightarrow} W$ is either a deflation or an inflation. In the former case, we can complete the diagram by taking a pullback. In the latter case, we can use axiom \ref{P2} to factor the composition $Z \stackrel{f}{\rightarrow} W \to \coker s$ as $Z \stackrel{\alpha}{\twoheadrightarrow} A \stackrel{\beta}{\rightarrow} \coker s$; the morphism $t\colon \ker \alpha \stackrel{\sim}{\rightarrowtail} Z$ then completes the diagram. If ${\cal A} \subseteq {\cal C}$ is a strongly right filtering subcategory, then we can choose $\beta$ to be a monomorphism, and the square we obtained before is a pullback as well (this follows from proposition \ref{proposition:MitchellPullbackPushout}). For axiom \ref{RMS3}, consider a composition $X \stackrel{f}{\rightarrow} Y \stackrel{s}{\rightarrow} Z$ with $s \in S$. Assume that $s\circ f=0$. We need to show that there is a $t \in S$ such that the composition $W \stackrel{t}{\rightarrow}X \stackrel{f}{\rightarrow} Y$ is zero. Again, we can easily reduce to the case where $s$ is either an inflation or a deflation. If $s$ is an inflation, then $f = 0$ so that we can choose $t = 1_X$. If $s$ is a deflation, then $s \circ f = 0$ shows that $f$ factors as $X \stackrel{\alpha}{\rightarrow} \ker(s) \stackrel{\beta}{\rightarrow} Y$. As $\ker(s) \in {\cal A}$, it follows from axiom \ref{P2} that there is an inflation $t\colon W \stackrel{\sim}{\rightarrowtail} X$ such that $f \circ t = 0.$ \end{proof} \begin{proposition}\label{proposition:WeakIsoPullback} Let $f\colon X \twoheadrightarrow Y$ be a deflation in a deflation-exact category ${\cal C}$. For any weak isomorphism $s\colon Z \stackrel{\sim}{\rightarrow} Y$, the pullback of $s$ along $f$ exists and is a weak isomorphism. \end{proposition} \begin{proof} This follows from propositions \ref{proposition:PushoutIfCokernel} and \ref{proposition:PullbackOfInflation}, and the pullback lemma. \end{proof} \begin{lemma}\label{lemma:CompositionOfAAdeflations} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}\subseteq {\cal C}$ be a non-empty full subcategory. If ${\cal A}$ satisfies axiom \ref{P1}, the composition of two ${\cal A}^{-1}$-deflations is again an ${\cal A}^{-1}$-deflation. \end{lemma} \begin{proof} Let $U \stackrel{a}{\rightarrow} V \stackrel{b}{\rightarrow}W$ be ${\cal A}^{-1}$-deflations. Axiom \ref{R1} shows that $ba\colon U \to W$ is a deflation.
Propositions \ref{proposition:MitchellPullback} and \ref{proposition:WhenPullback} now yield the following commutative diagram: \[\xymatrix{ \ker(a')\ar@{>->}[d]^{k_a'}\ar@{=}[r] & \ker(a)\ar@{>->}[d]^{k_a} &\\ P\ar@{>->}[r]^{k_{ab}}\ar@{->>}[d]^{a'} & U\ar@{->>}[d]^a \ar@{->>}[r]^{ba} & W \ar@{=}[d] \\ \ker(b)\ar@{>->}[r]^{k_b} & V\ar@{->>}[r]^b & W\\ }\] where the rows and columns are conflations, and the lower-left square is a pullback. As $\ker(a),\allowbreak \ker(b) \in \Ob({\cal A})$, axiom \ref{P1} implies that $P \in \Ob({\cal A})$. Proposition \ref{proposition:MitchellPullbackPushout}\eqref{enumerate:MitchellPullback} implies that $P=\ker(ba)$. It follows that $ba$ is an ${\cal A}^{-1}$-deflation, as required. \end{proof} \subsection{The lifting lemma} The following crucial lemma allows one to lift a conflation $X \rightarrowtail Y \twoheadrightarrow Z$ over a weak isomorphism $Y' \stackrel{\sim}{\rightarrow} Y$. \begin{lemma}[Lifting lemma]\label{lemma:LiftingConflations} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}\subseteq {\cal C}$ be a deflation-percolating subcategory. Given a conflation $X\stackrel{i}{\rightarrowtail} Y \stackrel{p}{\twoheadrightarrow} Z$ and a weak isomorphism $s\colon Y''\stackrel{\sim}{\rightarrow} Y$, there exists a weak isomorphism $t\colon \overline{Y} \to Y$, factoring through $s$, such that there is a commutative diagram \[\xymatrix{ \overline{X}\ar@{>->}[r]^{\overline{i}}\ar[d]^{\rotatebox{90}{$\sim$}} & \overline{Y}\ar@{->>}[r]^{\overline{p}}\ar[d]^{\rotatebox{90}{$\sim$}}_t & \overline{Z}\ar[d]^{\rotatebox{90}{$\sim$}}\\ X\ar@{>->}[r]^i & Y\ar@{->>}[r]^p & Z }\] where the rows are conflations and the vertical maps are weak isomorphisms. \end{lemma} \begin{proof} We first consider two cases. \begin{enumerate} \item[Case I] Assume that $s$ factors as $\xymatrix{Y''\ar[r]^{\sim}_{s''} & Y'\ar@{->>}[r]^{\sim}_{s'} & Y}$. By axiom \ref{R2}, the pullback of $i$ along $s'$ exists. By propositions \ref{proposition:WhenPullback} and \ref{proposition:PullbackOfInflation}, we obtain the following commutative diagram: \[\xymatrix{ X'\ar@{>->}[r]\ar@{->>}[d]^{\rotatebox{90}{$\sim$}} & Y'\ar@{->>}[d]^{\rotatebox{90}{$\sim$}}_{s'}\ar@{->>}[r] & Z\ar@{=}[d]\\ X\ar@{>->}[r]^i & Y\ar@{->>}[r]^p & Z }\] Thus, taking the pullback along $s'$, one can lift the conflation $(i,p)$ over $s'$. \item[Case II] Assume that $s$ factors as $\xymatrix{Y''\ar[r]^{\sim}_{s''} & Y'\ar@{>->}[r]^{\sim}_{s'} & Y}$. Write $g\colon Y \twoheadrightarrow A'$ for the cokernel of $s'$. Applying axiom \ref{P3} to the composition $X\stackrel{i}{\rightarrowtail}Y\stackrel{g}{\twoheadrightarrow}A'$ yields a commutative diagram \[\xymatrix{ \overline{X}\ar@{=}[r]\ar@{>->}[d]^{\rotatebox{90}{$\sim$}} & \overline{X}\ar@{>->}[d] & \\ X\ar@{>->}[r]^i\ar@{->>}[d] & Y\ar@{->>}[r]^p\ar@{->>}[d] & Z\ar@{=}[d]\\ A\ar@{>->}[r] & P\ar@{->>}[r]^{\sim}\ar[d] & Z\\ & A' & }\] such that the lower-left square is bicartesian and the composition $Y\twoheadrightarrow P\to A'$ equals $g$. By axiom \ref{P2}, the map $P\to A'$ factors as $P\twoheadrightarrow B\to A'$ with $B\in \Ob({\cal A})$. Write $\overline{Z}\stackrel{\sim}{\rightarrowtail}P$ for the kernel of $P\twoheadrightarrow B$.
Taking the pullback of $\overline{Z}\stackrel{\sim}{\rightarrowtail}P$ along $Y\twoheadrightarrow P$ yields the following commutative diagram: \[\xymatrix{ \overline{X}\ar@{>->}[r]\ar@{=}[d] & \overline{Y}\ar@{->>}[r]\ar@{>->}[d]^{\rotatebox{90}{$\sim$}}_t & \overline{Z}\ar@{>->}[d]^{\rotatebox{90}{$\sim$}} \\ \overline{X}\ar@{>->}[r]\ar@{>->}[d]^{\rotatebox{90}{$\sim$}} & Y\ar@{->>}[r]\ar@{=}[d] & P\ar@{->>}[d]^{\rotatebox{90}{$\sim$}}\\ X\ar@{>->}[r]^i & Y\ar@{->>}[r]^p & Z }\] As the upper-right square is bicartesian, the map $t\colon\overline{Y}\stackrel{\sim}{\rightarrowtail}Y$ is indeed an ${\cal A}^{-1}$-inflation. One readily verifies that the composition $\overline{Y} \rightarrowtail Y \twoheadrightarrow P \twoheadrightarrow B \to A'$ is zero, and hence that $t\colon\overline{Y}\stackrel{\sim}{\rightarrowtail}Y$ factors through $s' = \ker (Y \twoheadrightarrow A')$ via a map $u\colon \overline{Y}\to Y'$. By proposition \ref{proposition:MinimalConditionsRMS}, we obtain a commutative square \[\xymatrix{ {\overline{Y}''}\ar@{..>}[r]^{\sim}_{t''}\ar@{..>}[d]_{u'} & \overline{Y}\ar[d]^u \ar@/^/[dr]^t\\ {Y''}\ar[r]^{\sim}_{s''} & Y' \ar@{>->}[r]_{s'}&Y}\] such that the length of $t''$ is bounded by the length of $s''$. Thus, we have lifted the conflation $X\rightarrowtail Y \twoheadrightarrow Z$ over $s'$ to a conflation $\overline{X}\rightarrowtail \overline{Y}\twoheadrightarrow \overline{Z}$ and we have replaced $s''$ by $t''$. \end{enumerate} The result follows by induction on the composition length of $s$. \end{proof} \subsection{Interpretation of axiom \ref{P4}} Thus far, we have only used axioms \ref{P1} through \ref{P3} of a deflation-percolating subcategory. The following proposition highlights the r{\^o}le of axiom \ref{P4}. \begin{proposition}\label{proposition:InterpretationOfP4} Let ${\cal A}\subseteq {\cal C}$ be a deflation-percolating subcategory of a deflation-exact category ${\cal C}$. Let $(f\colon X'\to Y,s\colon X'\to X)$ be a morphism in $S_{{\cal A}}^{-1}{\cal C}$. The following are equivalent: \begin{enumerate} \item\label{item:InterpretationOfP4A} $(f,s)=0$ in $S_{{\cal A}}^{-1}{\cal C}$, \item\label{item:InterpretationOfP4B} $f$ factors through ${\cal A}$ in ${\cal C}$, \item\label{item:InterpretationOfP4C} there exists an ${\cal A}^{-1}$-inflation $t$ such that $f\circ t=0$ in ${\cal C}$. \end{enumerate} \end{proposition} \begin{proof} Assume that \eqref{item:InterpretationOfP4A} holds. We will show that \eqref{item:InterpretationOfP4B} holds. Clearly, $f\circ s^{-1}=0$ in $S^{-1}_{\cal A}{\cal C}$ and $s$ is an isomorphism in $S_{{\cal A}}^{-1}{\cal C}$. It follows that $Q(f)=0$. By construction \ref{construction:Localization}, there is a weak isomorphism $u\colon M\to X'$ such that $f\circ u=0$ in ${\cal C}$. As the zero object belongs to ${\cal A}$, the composition $f\circ u$ factors through ${\cal A}$. If $u$ is an isomorphism in ${\cal C}$, then \eqref{item:InterpretationOfP4B} holds. Otherwise, the composition length of $u$ is at least one and one of the following cases holds. \begin{enumerate} \item[Case I] Assume that $u$ factors as $\xymatrix{M\ar@{->>}[r]^{\sim}_{u''} & M'\ar[r]^{\sim}_{u'} & X'}$. Note that $\ker(u'')\in \Ob({\cal A})$, hence axiom \ref{P4} implies that $f\circ u'$ factors through ${\cal A}$. \item[Case II] Assume that $u$ factors as $\xymatrix{M\ar@{>->}[r]^{\sim}_{u''} & M'\ar[r]^{\sim}_{u'} & X'}$.
As $f\circ u$ factors through ${\cal A}$, axiom \ref{P3} yields the following commutative diagram: \[\xymatrix{ M\ar@{>->}[r]^{\sim}_{u''}\ar@{->>}[d] & M'\ar[r]^{\sim}_{u'}\ar@{->>}[d] & X'\ar[d]^f\\ A\ar@{>->}[r]^{\sim} & Q\ar[r] & Y\\ }\] where the left square is a pushout square. By proposition \ref{proposition:PullbackPushout}, we know that $Q/A \cong M'/M \in \Ob({\cal A})$, so that, by axiom \ref{P1}, $Q\in \Ob({\cal A})$. It follows that $f\circ u'$ factors through ${\cal A}$. \end{enumerate} Iterating the previous two cases, we conclude that \eqref{item:InterpretationOfP4B} holds. Assume that \eqref{item:InterpretationOfP4B} holds. By axiom \ref{P2}, we may assume that $f\colon X' \to Y$ factors as $X' \twoheadrightarrow A \to Y$ with $A \in {\cal A}$. The kernel of the deflation $X' \twoheadrightarrow A$ is the desired ${\cal A}^{-1}$-inflation and, hence, \eqref{item:InterpretationOfP4C} holds. The implication $\eqref{item:InterpretationOfP4C}\Rightarrow\eqref{item:InterpretationOfP4A}$ is trivial. \end{proof} We now give a useful criterion to verify that axiom \ref{P4} holds. \begin{proposition}\label{proposition:P4Criterion} Let ${\cal C}$ be a deflation-exact category and let ${\cal A} \subseteq {\cal C}$ be a nonempty full subcategory. If every morphism $a\colon A \to B$ in ${\cal A}$ admits a cokernel in ${\cal C}$ (with $\coker a \in {\cal A}$), then ${\cal A}$ satisfies axiom \ref{P4}. \end{proposition} \begin{proof} Let $f\colon X\to Y$ be a map that factors as $X\stackrel{\rho}{\rightarrow}B\stackrel{h'}{\rightarrow}Y$ with $B\in \Ob({\cal A})$ and let $A\stackrel{i}{\rightarrowtail}X \stackrel{p}{\twoheadrightarrow} Q$ be a conflation with $A\in \Ob({\cal A})$ such that $fi=0$, as in the setup of axiom \ref{P4}. The cokernel property of $p$ induces a unique map $h\colon Q\to Y$ such that $f=hp$. We need to show that $h\colon Q \to Y$ factors through ${\cal A}.$ By assumption, the composition $\rho i\colon A \to B$ admits a cokernel $p'\colon B \to C$ with $C\in \Ob({\cal A})$. We obtain the following commutative diagram: \[\xymatrix{ A\ar@{>->}[r]^i\ar@{=}[d] & X\ar@{->>}[r]^p\ar[d]^{\rho} & Q\ar[r]^h\ar@{.>}[d]^{\rho'} & Y\ar@{=}[d]\\ A\ar[r]^{\rho i} & B\ar[r]^{p'}\ar@/_2pc/[rr]_{h'} & C\ar@{.>}[r]^{\exists ! u} & Y }\] Here, the map $\rho'$ is induced by the cokernel property of $p$. As $h'\rho i=fi=0$, the cokernel property of $p'$ induces a unique map $u\colon C\to Y$ such that $h'=up'$. It follows that $hp=f=h'\rho=up'\rho=u\rho'p$ and hence $h=u\rho'$ since $p$ is an epimorphism. We conclude that $h$ factors through ${\cal A}$ as required. \end{proof} \section{Quotients and localizations of one-sided exact categories}\label{Section:LocalizationOfOneSidedExactCategories} Throughout this section, let ${\cal C}$ denote a deflation-exact category and ${\cal A}$ a deflation-percolating subcategory. We write $S_{\cal A}$ for the corresponding set of weak isomorphisms (see definition \ref{Definition:WeakIsomorphisms}). The aim of this section is to show that $S_{{\cal A}}^{-1}{\cal C}$ has a canonical deflation-exact structure such that the localization functor $Q\colon {\cal C}\rightarrow S_{{\cal A}}^{-1}{\cal C}$ is exact. Moreover, we show that $S_{{\cal A}}^{-1}{\cal C}$ is universal in the sense of the following definition. \begin{definition}\label{definition:RightExactLocalization} Let ${\cal C}$ be a deflation-exact category and ${\cal A}$ a full deflation-exact subcategory.
We define the \emph{quotient} of ${\cal C}$ by ${\cal A}$ as a deflation-exact category ${\cal C}/{\cal A}$ together with an exact \emph{quotient functor} $Q\colon {\cal C}\rightarrow {\cal C}/{\cal A}$ satisfying the following universal property: for any exact functor $F\colon {\cal C}\rightarrow \mathcal{D}$ between deflation-exact categories such that $F(A)\cong 0$ for all $A\in\Ob({\cal A})$, there exists a unique exact functor $G\colon {\cal C}/{\cal A}\rightarrow \mathcal{D}$ such that the following diagram commutes: \[\xymatrix{ {\cal A}\ar[d]\ar[rd]^0 & \\ {\cal C}\ar[r]^F\ar[d]_Q & \mathcal{D}\\ {\cal C}/{\cal A}\ar@{.>}[ru]_{G} }\] \end{definition} \begin{remark}\label{remark:SAInverts} The next two observations motivate the definitions of axiom \ref{P1} and weak isomorphisms. \begin{enumerate} \item Let $A\rightarrowtail X\twoheadrightarrow Y$ be a conflation in ${\cal C}$ with $A\in \Ob({\cal A})$. Then $0\rightarrowtail Q(X)\twoheadrightarrow Q(Y)$ is a conflation in ${\cal C}/{\cal A}$. It follows that $Q(X)\twoheadrightarrow Q(Y)$ is invertible in ${\cal C}/{\cal A}$. Similarly, if $X\rightarrowtail Y\twoheadrightarrow A$ is a conflation in ${\cal C}$ with $A\in \Ob({\cal A})$, then $Q(X)\rightarrowtail Q(Y)$ is invertible. In particular, all weak isomorphisms become isomorphisms under $Q$. \item The kernel of any exact functor $F\colon {\cal C} \to {\cal D}$ is a Serre subcategory of ${\cal C}$, i.e.~it satisfies \ref{P1}. \end{enumerate} \end{remark} Let ${\cal A}$ be a deflation-percolating subcategory of a deflation-exact category ${\cal C}$. The main theorem (theorem \ref{theorem:Maintheorem} below) states that the localization functor $Q\colon{\cal C}\to S_{{\cal A}}^{-1}{\cal C}$ is a quotient functor. The proof consists of two major steps: in the first step, we endow $S_{\cal A}^{-1}{\cal C}$ with the structure of a conflation category such that $Q\colon {\cal C} \to S_{\cal A}^{-1}{\cal C}$ is exact; in the second step, we show that the conflation category $S_{\cal A}^{-1}{\cal C}$ is a deflation-exact category. \subsection{The \texorpdfstring{category $S_{\cal A}^{-1}{\cal C}$}{localized category} is a conflation category} The next proposition allows us to impose a conflation structure on $S_{\cal A}^{-1}{\cal C}$ for which $Q\colon {\cal C} \to S_{\cal A}^{-1} {\cal C}$ is exact (see definition \ref{definition:LocalizationConflation} below). \begin{proposition}\label{proposition:CokernelsDescend} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}$ be a deflation-percolating subcategory. The localization functor $Q\colon {\cal C} \to S_{\cal A}^{-1}{\cal C}$ maps conflations to kernel-cokernel pairs. \end{proposition} \begin{proof} Let $X \stackrel{i}{\rightarrowtail} Y \stackrel{p}{\twoheadrightarrow} Z$ be a conflation in ${\cal C}$. As $S_{{\cal A}}$ is a right multiplicative system (see proposition \ref{proposition:MinimalConditionsRMS}), we know that $Q(i)$ is the kernel of $Q(p)$. We only need to show that $Q(p)$ is the cokernel of $Q(i)$. For this, we consider the following diagram: \[\xymatrix{ & T& &\\ &Y''\ar[d]_{\rotatebox{90}{$\sim$}}^s\ar[u]_f&&\\ X\ar@{>->}[r]^i&Y\ar@{->>}[r]^p&Z }\] where the composition $(f,s)\circ(i,1)$ is zero in $S_{{\cal A}}^{-1}{\cal C}$. We will show that $(p,1)$ is the cokernel of $(i,1)$ by showing that, in $S_{\cal A}^{-1}{\cal C}$, the morphism $(f,s)$ factors uniquely through $(p,1)$.
Using the lifting lemma (lemma \ref{lemma:LiftingConflations}), we find the following diagram \[\xymatrix{ &T&\\ X'\ar@{>->}[r]^{i'}\ar[d]_{\rotatebox{90}{$\sim$}}& Y'\ar@{->>}[r]^{p'}\ar[d]_{\rotatebox{90}{$\sim$}}^{s'}\ar[u]_{f'}& Z'\ar[d]_{\rotatebox{90}{$\sim$}}\\ X\ar@{>->}[r]^i&Y\ar[r]^{p}&Z }\] where the rows are conflations and where $(f,s) = (f',s')$. As $(f,s)\circ (i,1)$ is zero in $S_{{\cal A}}^{-1}{\cal C}$, we infer that $Q(f'i') = 0$. By proposition \ref{proposition:InterpretationOfP4}, the composition $f'i'$ factors through ${\cal A}$. Applying axiom \ref{P3} to the composition $f'i'$ yields a commutative diagram \[\xymatrix{ \overline{X}\ar@{=}[r]\ar@{>->}[d]^{\iota}_{\rotatebox{90}{$\sim$}} & \overline{X}\ar@{>->}[d]^{\iota'} &\\ X'\ar@{>->}[r]^{i'}\ar@{->>}[d]^{\rho} & Y'\ar@{->>}[r]^{p'}\ar@{->>}[d]^{\rho'} & Z'\ar@{=}[d]\\ A\ar@{>->}[r]_j & \overline{Z}\ar@{->>}[r]_{k}^{\sim}\ar[d]^{h'} & Z'\\ & T & }\] such that the rows and columns are conflations, where $A\in \Ob({\cal A})$ and $h'\rho'=f'$. Thus we obtain the following commutative diagram: \[\xymatrix{ & T &\\ \overline{X}\ar@{>->}[r]^{\iota'}\ar@{>->}[d]^{\rotatebox{90}{$\sim$}}_{\iota} & Y'\ar@{->>}[r]^{\rho'}\ar@{=}[d]\ar[u]^{f'} & \overline{Z}\ar@{->>}[d]^{\rotatebox{90}{$\sim$}}_k\ar@{.>}@/_/[ul]_{m}\\ X'\ar@{>->}[r]^{i'} & Y'\ar@{->>}[r]^{p'} & Z' }\] As $f'\iota'=0$, there is an induced map $m\colon \overline{Z}\to T$ such that $m\rho'=f'$. It follows that $(f,s)$ factors through $(p,1_Y)$ in $S_{{\cal A}}^{-1}{\cal C}$ as required. It remains to show that such a factorization is unique. It suffices to show that $Q(p)$ is an epimorphism in $S_{{\cal A}}^{-1}{\cal C}$. So let $(g,t)$ be map such that $(g,t)\circ Q(p)=0$. By proposition \ref{proposition:WeakIsoPullback}, we find the following diagram \[\xymatrix{ X\ar@{>->}[r]^i\ar@{=}[d] &Y \ar@{->>}[r]^p & Z \\ X\ar@{>->}[r]^{i'} & Y' \ar@{..>}[u]_{\rotatebox{90}{$\sim$}}^{t'} \ar@{..>>}[r]^{p'}& Z'\ar[u]_{\rotatebox{90}{$\sim$}}^t \ar[r]^{g} & T }\] where the right square is a pullback and the vertical arrows are weak isomorphisms. Note that $Q(gp')=0$ and thus proposition \ref{proposition:InterpretationOfP4}.\eqref{item:InterpretationOfP4C} yields an ${\cal A}^{-1}$-inflation $k\colon K \stackrel{\sim}{\rightarrowtail}Y'$ such that $gp'k=0$ in ${\cal C}$. By the lifting lemma (lemma \ref{lemma:LiftingConflations}) we obtain a commutative diagram \[\xymatrix{ & \overline{K}\ar@{->>}[r]^{\overline{p}}\ar[d]^{\rotatebox{90}{$\sim$}}_{\overline{k}} & \overline{Z}\ar[d]^{\rotatebox{90}{$\sim$}}_s & \\ X\ar@{>->}[r]^{i'} & Y'\ar@{->>}[r]^{p'} & Z'\ar[r]^g & T }\] where $\overline{k}$ factors through $k$. It follows that the composition $gs\overline{p}=0$ in ${\cal C}$. As $\overline{p}$ is a deflation, $\overline{p}$ is epic and thus $gs=0$ in ${\cal C}$. It follows that $Q(g)\circ Q(s)=Q(g\circ s)=0$ and since $Q(s)$ is an isomorphism, we find $Q(g)=0$ as required. This completes the proof. \end{proof} \begin{definition}\label{definition:LocalizationConflation} Let ${\cal A}$ be a deflation-percolating subcategory of a deflation-exact category ${\cal C}$. 
We say that a sequence $X \to Y \to Z$ is a conflation in $S_{\cal A}^{-1}{\cal C}$ if it is isomorphic (in $S_{\cal A}^{-1}{\cal C}$) to the image of a conflation under the localization functor $Q\colon {\cal C} \to S_{\cal A}^{-1}{\cal C}$, i.e.~there is a conflation $\overline{X} \rightarrowtail \overline{Y} \twoheadrightarrow \overline{Z}$ in ${\cal C}$ and a commutative diagram \[\xymatrix{ Q(\overline{X}) \ar[r] \ar[d] & Q(\overline{Y}) \ar[r] \ar[d] & Q(\overline{Z}) \ar[d] \\ X \ar[r] & Y \ar[r] & Z }\] in $S_{\cal A}^{-1}{\cal C}$ where the vertical arrows are isomorphisms. \end{definition} \begin{remark} It follows from proposition \ref{proposition:CokernelsDescend} that definition \ref{definition:LocalizationConflation} endows $S_{{\cal A}}^{-1}{\cal C}$ with a conflation structure. With this choice, the localization functor $Q\colon {\cal C} \to S^{-1}_{\cal A} {\cal C}$ is conflation-exact. \end{remark} \begin{proposition}\label{proposition:QuotientInConflationCategories} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}$ be a deflation-percolating subcategory. When we endow ${\cal C}[S^{-1}_{\cal A}]$ with the conflation structure from definition \ref{definition:LocalizationConflation}, the localization functor $Q\colon {\cal C} \to {\cal C}[S^{-1}_{\cal A}]$ satisfies the universal property of a quotient ${\cal C} \to {\cal C} / {\cal A}$ in the category of (small) conflation categories, meaning that $Q$ is exact and every exact functor $F\colon {\cal C} \to {\cal D}$ (with ${\cal D}$ a conflation category) for which $F({\cal A}) = 0$ factors uniquely through $Q$. \end{proposition} \begin{proof} It follows easily from definition \ref{definition:LocalizationConflation} that $Q$ is exact. It remains to show that the localization functor $Q$ satisfies the universal property of the quotient ${\cal C}/{\cal A}$. For this, consider an exact functor $F\colon {\cal C}\rightarrow \mathcal{D}$, where $\mathcal{D}$ is a conflation category, such that $F(A)\cong 0$ for all $A\in \Ob({\cal A})$. By remark \ref{remark:SAInverts}, we know that $F(s)$ is an isomorphism for all ${\cal A}^{-1}$-inflations and all ${\cal A}^{-1}$-deflations $s$ in $S_{{\cal A}}$ (and hence for any $s \in S_{{\cal A}}$). By the universal property of $Q$, there exists a unique functor $G\colon S_{{\cal A}}^{-1}{\cal C}\rightarrow \mathcal{D}$ such that $F=G\circ Q$. It remains to show that $G$ is exact. It follows from definition \ref{definition:LocalizationConflation} that any conflation in $S_{{\cal A}}^{-1}{\cal C}$ lifts to a conflation in ${\cal C}$. Since $F$ is exact, this lift is mapped to a conflation in ${\cal D}$. Since $F=G\circ Q$, we know that $G$ maps conflations to conflations, i.e.~$G$ is exact. \end{proof} \subsection{The \texorpdfstring{category $S_{\cal A}^{-1}{\cal C}$}{localized category} is a deflation-exact category} We are now in a position to prove the main theorem, namely that the conflation category $S_{\cal A}^{-1}{\cal C}$ from definition \ref{definition:LocalizationConflation} is deflation-exact. \begin{theorem}\label{theorem:Maintheorem} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}$ be a deflation-percolating subcategory. The conflation category $S_{{\cal A}}^{-1}{\cal C}$ (see definition \ref{definition:LocalizationConflation}) is a deflation-exact category. Moreover, if ${\cal C}$ satisfies axiom \ref{R0*}, so does $S_{{\cal A}}^{-1}{\cal C}$.
\end{theorem} \begin{proof} It is easy to see that axiom \ref{R0} (respectively axiom \ref{R0*}) descends to $S_{{\cal A}}^{-1}{\cal C}$. We now check that $S_{{\cal A}}^{-1}{\cal C}$ satisfies axioms \ref{R1} and \ref{R2}. \begin{enumerate} \item[\ref{R1}] We consider two deflations $X\rightarrow Y$ and $Y\rightarrow Z$ in $S_{{\cal A}}^{-1}{\cal C}$. By definition \ref{definition:LocalizationConflation}, this means that there are deflations $\overline{X}\rightarrow \overline{Y}$ and $\overline{\overline{Y}}\rightarrow \overline{Z}$ and a diagram \[\xymatrix@!@C=0.5em@R=0.5em{ & & Z &\ar[l]\ar[r]^{\sim} &\overline{Z}\\ & & \ar[u]\ar[d]^{\rotatebox{90}{$\sim$}} & &\\ X &\ar[l]_{\sim}\ar[r] & Y &\ar[l]Y''\ar[r]^{\sim} & \overline{\overline{Y}}\ar@{->>}[uu]\\ \ar[u]^{\rotatebox{90}{$\sim$}}\ar[d] & & \ar[u]^{\rotatebox{90}{$\sim$}}Y'\ar[d] & P\ar@{.>}[l]\ar@{.>}[u]^{\rotatebox{90}{$\sim$}} & \\ \overline{X}\ar@{->>}[rr]& & \overline{Y} }\] which descends to a commutative diagram in $S_{{\cal A}}^{-1}{\cal C}$. Here, we chose the direction of the isomorphisms in $S_{{\cal A}}^{-1}{\cal C}$ in such a way as to get the particular arrangement of arrows in $S_{{\cal A}}$. The first step of the proof is to find a better representation in ${\cal C}$ of this composition of deflations in $S_{{\cal A}}^{-1}{\cal C}$. Using proposition \ref{proposition:MinimalConditionsRMS}, we obtain the dotted arrows by axiom \ref{RMS2}. Note that the induced map $P\rightarrow Y'$ descends to an isomorphism in $S_{{\cal A}}^{-1}{\cal C}$. It follows that we can represent the outer edge by the solid part of the following commutative diagram: \[\xymatrix@!{ & & \overline{Z}\\ R\ar@{.>}[d]\ar@{.>>}[r]& P \ar[d] \ar[r]^{\sim} & \overline{\overline{Y}}\ar@{->>}[u]\\ \overline{X} \ar@{->>}[r] & \overline{Y} &\\ }\]By axiom \ref{RMS1}, the composition $P\xrightarrow{\sim}Y''\xrightarrow{\sim} \overline{\overline{Y}}$ belongs to $S_{{\cal A}}$. Axiom \ref{R2} yields the pullback square $RP\overline{Y}\overline{X}$ in ${\cal C}$. As $P\rightarrow \overline{Y}$ descends to an isomorphism and $Q$ commutes with pullbacks (see remark \ref{remark:PreservationOfKernelsAndPullbacks}), the map $R\rightarrow \overline{X}$ descends to an isomorphism as well. It follows that the original composition of deflations in $S_{{\cal A}}^{-1}{\cal C}$ can be represented by the composition $R\twoheadrightarrow P\xrightarrow{\sim}\overline{\overline{Y}}\twoheadrightarrow \overline{Z}$. Write $K\rightarrowtail \overline{\overline{Y}}$ for the kernel of the deflation $\overline{\overline{Y}}\twoheadrightarrow \overline{Z}$. By lemma \ref{lemma:LiftingConflations}, we can lift the conflation over $P\stackrel{\sim}{\rightarrow} \overline{\overline{Y}}$ to $P'$ and obtain the following commutative diagram: \[\xymatrix{ & Z'\ar[r]^{\sim} & \overline{Z}\\ R'\ar@{.>>}[r]\ar@{.>}[d] & P'\ar@{->>}[u]\ar[rd]^{\rotatebox{135}{$\sim$}}\ar[d] & \\ R\ar@{->>}[r] & P\ar[r]^{\sim} & \overline{\overline{Y}}\ar@{->>}[uu] }\] The lower left square is a pullback square obtained by axiom \ref{R2} in ${\cal C}$. Note that the map $P'\to P$ descends to an isomorphism in $S_{{\cal A}}^{-1}{\cal C}$ and the map $R'\to R$ descends to an isomorphism as well (this follows from remark \ref{remark:PreservationOfKernelsAndPullbacks} and the fact that pullbacks of isomorphisms are isomorphisms). By axiom \ref{R1}, the composition $R'\twoheadrightarrow P' \twoheadrightarrow Z'$ is a deflation in ${\cal C}$.
Moreover, this composition descends to the composition $R\twoheadrightarrow P\xrightarrow{\sim}\overline{\overline{Y}}\twoheadrightarrow \overline{Z}$ up to isomorphism. This establishes that axiom \ref{R1} holds. \item[\ref{R2}] We now show that the pullback along a deflation exists and yields a deflation in $S_{{\cal A}}^{-1}{\cal C}$. For this, consider a co-span $X \twoheadrightarrow Y \leftarrow Z$ in $S_{\cal A}^{-1} {\cal C}$. The co-span can be represented by the following diagram in ${\cal C}$: \[\xymatrix@d@!{ && Z\\ && Z'\ar[u]^{\rotatebox{0}{$\sim$}}\ar[d]\\ X & X'\ar[l]^{\rotatebox{90}{$\sim$}}\ar[r] &Y & P\ar@{.>}[lu]_{\rotatebox{45}{$\sim$}} \ar@{.>}[ld]\\ X''\ar[u]^{\rotatebox{0}{$\sim$}}\ar[d] & & Y'\ar[u]^{\rotatebox{0}{$\sim$}}\ar[d]\\ \overline{X}\ar@{->>}[rr] & & \overline{Y}\\ }\] The dotted arrows are obtained by applying axiom \ref{RMS2} to $Y' \stackrel{\sim}{\twoheadrightarrow} Y \leftarrow Z'$. In this way, we obtain a co-span $\overline{X} \twoheadrightarrow \overline{Y} \leftarrow P$ in ${\cal C}$, which is isomorphic, in $S_{{\cal A}}^{-1}{\cal C}$, to the original co-span $X \twoheadrightarrow Y \leftarrow Z$. As $Q$ preserves pullbacks, we are done by axiom \ref{R2} in ${\cal C}$. \qedhere \end{enumerate} \end{proof} \subsection{Exact quotients of exact categories} The definition of the quotient in definition \ref{definition:RightExactLocalization} is taken in the category of conflation categories. Even if one starts with an exact category ${\cal C}$, the quotient category ${\cal C} / {\cal A} = {\cal C}[S_{\cal A}^{-1}]$ need only be deflation-exact. In this subsection, we show how one can obtain a quotient in the category of exact categories with respect to left or right percolating subcategories. Our approach is based on the following proposition (see \cite[proposition I.7.5]{Rosenberg11}). \begin{proposition} Let ${\cal C}$ be a deflation-exact category. There exists an exact category $\overline{{\cal C}}$ and a fully faithful exact functor $\gamma\colon{\cal C}\rightarrow \overline{{\cal C}}$ which is 2-universal in the following sense: for any exact category ${\cal D}$, the functor $-\circ \gamma\colon \Hom_{\text{ex}}(\overline{{\cal C}}, {\cal D}) \to \Hom_{\text{ex}}({\cal C}, {\cal D})$ is an equivalence. The category $\overline{{\cal C}}$ is called the \emph{exact hull} of ${\cal C}$. \end{proposition} For the benefit of the reader, we recall the construction given in \cite{Rosenberg11}. As ${\cal C}$ is deflation-exact, there is a Grothendieck pretopology where the covers are the deflations. The exact hull $\overline{{\cal C}}$ is obtained as the smallest fully exact subcategory of the category of sheaves on ${\cal C}$ containing the representable sheaves. The next corollary is an immediate application of the previous proposition and theorem \ref{theorem:Maintheorem}. \begin{corollary}\label{corollary:Rosenberg} Let ${\cal C}$ be an exact category and let ${\cal A}$ be a deflation-percolating subcategory of ${\cal C}$.
The composition of the exact quotient functor $Q\colon {\cal C}\rightarrow {\cal C}/{\cal A}$ and the embedding $\gamma\colon {\cal C}/{\cal A}\rightarrow {\cal C} {/\mkern-6mu/} {\cal A} \coloneqq \overline{{\cal C}/{\cal A}}$ is an exact functor between exact categories satisfying the following 2-universal property: for any exact category ${\cal D}$, the functor $- \circ (\gamma \circ Q)\colon \Hom_{\text{ex}}({\cal C} {/\mkern-6mu/} {\cal A}, {\cal D}) \to \Hom_{\text{ex}}({\cal C}, {\cal D})$ is a fully faithful functor whose essential image consists of those conflation-exact functors $F\colon {\cal C} \to {\cal D}$ for which $F({\cal A}) = 0$. \end{corollary} \subsection{On the role of axioms \ref{P1}-\ref{P4}} In this subsection, we comment on the role of each of the axioms \ref{P1}-\ref{P4}. As noted in \S\ref{subsection:WeakIsomorphisms}, the kernel of an exact functor is a Serre subcategory. For ${\cal A}$ to be the kernel of $Q$, axiom \ref{P1} needs to be satisfied. Axiom \ref{P2} is used to bound the composition length of the reflected weak isomorphism in axiom \ref{RMS2} (see proposition \ref{proposition:MinimalConditionsRMS}). In particular, the proof of the lifting lemma and proposition \ref{proposition:InterpretationOfP4} rely on this bound. In fact, axiom \ref{P2} is equivalent to requiring that axiom \ref{RMS2} reflects the structure of weak isomorphisms. Indeed, let $f\colon X\to A$ be any morphism (with $A \in {\cal A}$). Applying axiom \ref{RMS2}, we obtain the following commutative diagram: \[\xymatrix{ P\ar@{>->}[r]^{\sim}\ar[d] & X\ar@{->>}[r]\ar[d]^f & {X/P}\ar@{..>}[d]\\ 0\ar@{>->}[r]^{\sim} & A\ar@{->>}[r] & A }\] Requiring that $P \to X$ is an inflation (since $0 \rightarrowtail A$ is), we see that axiom \ref{P2} holds. In \S\ref{Subsection:GliderExample}, we construct an example of an exact category ${\cal E}$ and a deflation-percolating subcategory ${\cal A}$ such that ${\cal E}/{\cal A}$ is not inflation-exact. One can verify explicitly that ${\cal A}$ satisfies the duals of axioms \ref{P1}, \ref{P3} and \ref{P4} but not the dual of \ref{P2}. As the quotient ${\cal E}/{\cal A}$ is not inflation-exact, one sees that one cannot simply omit (the dual of) axiom \ref{P2}. Example \ref{Example:P3Requirement} below gives a deflation-exact category ${\cal C}$ and a full subcategory ${\cal A}$ of ${\cal C}$ satisfying axioms \ref{P1}, \ref{P2}, and \ref{P4}, but not satisfying axiom \ref{P3}. We show explicitly that ${\cal C}[S_{\cal A}^{-1}]$ fails to satisfy the conclusion of theorem \ref{theorem:Maintheorem}. This justifies the requirement of axiom \ref{P3}. Similarly, example \ref{Example:P4Requirement} below gives a deflation-exact category ${\cal C}$ and a full subcategory ${\cal A}$ of ${\cal C}$ satisfying axioms \ref{P1}, \ref{P2}, and \ref{P3}, but not satisfying axiom \ref{P4}. Again, we explicitly show that ${\cal C}[S_{\cal A}^{-1}]$ fails to satisfy the conclusion of theorem \ref{theorem:Maintheorem}. This justifies the requirement of axiom \ref{P4}. \begin{example}\label{Example:P3Requirement} Consider the quiver $A_4: 1\leftarrow 2\leftarrow 3\leftarrow 4$. Let $k$ be a field and write $\rep_k(A_4)$ for the category of finite-dimensional $k$-representations of $A_4$. We write $S_j$ for the simple representation associated to the vertex $j$, $P_j$ for its projective cover and $I_j$ for its injective envelope.
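For the reader's convenience, we record the standard explicit description of the indecomposable objects named in this example; this is only meant as an orientation and is not used in the arguments below. Writing dimension vectors as $(\dim V_1,\dim V_2,\dim V_3,\dim V_4)$ (a convention we introduce here), the indecomposable representations of $A_4$ are the interval modules, and one has \[ S_j = e_j, \qquad P_2=(1,1,0,0), \quad P_3=(1,1,1,0), \quad P_4=(1,1,1,1), \qquad \tau^{-1}P_2=(0,1,1,0), \qquad I_2=(0,1,1,1), \quad I_3=(0,0,1,1), \] where $e_j$ denotes the $j^{\text{th}}$ standard basis vector and $\tau$ denotes the Auslander-Reiten translate appearing in the quiver below.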
The Auslander-Reiten quiver of $\rep_k(A_4)$ is given by: \[\xymatrix@!C=3pt@R=3pt{ &&&P_4\ar[dr] &&&\\ &&P_3\ar[ur]\ar[dr] && I_2\ar[dr]&&\\ & P_2\ar[ur]\ar[dr] && \tau^{-1}P_2\ar[ur]\ar[dr] && I_3\ar[dr]&\\ S_1\ar[ur] && S_2\ar[ur] && S_3\ar[ur] && S_4 }\] where $\tau$ is the Auslander-Reiten translate. Let ${\cal C}=\text{add}\{S_1,P_2,P_3,P_4,S_2,S_3,I_2\}$ be the full additive subcategory of $\rep_k(A_4)$ generated by the objects $S_1,P_2,P_3,P_4,S_2,S_3$ and $I_2$ (thus, consisting of all objects which do not have direct summands isomorphic to $S_4, I_3,$ and $\tau^{-1} P_2$). We claim that ${\cal C}$, with the conflations given by those in $\rep_k(A_4)$, is a deflation-exact category. Let $\mathcal{D}$ be the full additive subcategory of $\rep_k(A_4)$ generated by ${\cal C}$ and $\tau^{-1}P_2$. As $\mathcal{D}$ is an extension-closed subcategory of an abelian category, it is exact. It now follows from proposition \ref{proposition:DeflationClosed} that ${\cal C}$ is deflation-exact. Let ${\cal A}$ be the full additive subcategory of ${\cal C}$ generated by $S_2$. Clearly, ${\cal A}$ is an abelian subcategory satisfying axioms \ref{P1} and \ref{P2}. In fact, axiom \ref{A2} (see definition \ref{Definition:AbelianPercolating} below) holds and thus axiom \ref{P4} holds by the proof of proposition \ref{proposition:AIsAbelian}. Furthermore, one can verify that ${\cal A}$ does not satisfy axiom \ref{P3} (indeed, \ref{P3} fails on the composition $P_2\rightarrowtail P_3\to I_2$). Consider the commutative diagram \[\xymatrix@!@C=0.5em@R=0.5em{ P_2\ar@{>->}[rr]^i\ar[rd] && P_3\ar@{->>}[rr]\ar[dd] && S_3\\ & S_2\ar[rd] &&&\\ &&I_2&& }\] in ${\cal C}$. If ${\cal C}/{\cal A}$ were a conflation category and the localization functor $Q\colon {\cal C}\rightarrow {\cal C}/{\cal A}$ were exact, then $Q(S_3)$ would be the cokernel of $Q(i)$, and the above diagram would yield an induced map $Q(S_3)\rightarrow Q(I_2)$ in ${\cal C}/{\cal A}$. On the other hand, one can verify explicitly that such a morphism cannot be obtained by localizing with respect to the weak isomorphisms. It follows that $P_2 \to P_3 \to S_3$ is not a kernel-cokernel pair in $S_{\cal A}^{-1}{\cal C}$. \end{example} \begin{example}\label{Example:P4Requirement} Consider the quiver $1\stackrel{\gamma}{\leftarrow}2\stackrel{\beta}{\leftarrow}3\stackrel{\alpha}{\leftarrow}4$ with relation $\gamma\beta\alpha=0$. The category ${\cal U}$ of finite-dimensional representations of this quiver can be represented by its Auslander-Reiten quiver: \[\xymatrix@!C=3pt@R=3pt{ &&P_3\ar[dr] && I_2\ar[dr]&&\\ & P_2\ar[ur]\ar[dr] && \tau^{-1}P_2\ar[ur]\ar[dr] && I_3\ar[dr]&\\ S_1\ar[ur] && S_2\ar[ur] && S_3\ar[ur] && S_4 }\] We consider on ${\cal U}$ the exact structure induced by all Auslander-Reiten sequences but $S_2\to \tau^{-1}P_2\to S_3$ (see \cite[corollary~3.10]{Enomoto2018} or \cite[theorem~5.7]{BrustleHassounLangfordRoy20} to see that this is a well-defined exact structure). Let ${\cal E}$ now be the full Karoubi subcategory of ${\cal U}$ given by those objects which do not have $S_1$ as a direct summand. As ${\cal E}$ is extension-closed in ${\cal U}$, we know that ${\cal E}$ is an exact category. As ${\cal E}$ satisfies axiom \ref{R3}, it is straightforward to see that $P_2\to P_3\to S_3$ is not a conflation in ${\cal E}$. Let ${\cal A}$ be the full additive subcategory of ${\cal E}$ generated by $P_2$ and $P_3$. As $P_2\to P_3\to S_3$ is not a conflation, it is easy to verify axiom \ref{P1}.
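For orientation, and as in example \ref{Example:P3Requirement}, we record an explicit description of the indecomposable objects occurring above; it is not needed for the verification of the remaining axioms. With dimension vectors written as $(\dim V_1,\dim V_2,\dim V_3,\dim V_4)$, \[ S_j = e_j, \qquad P_2=(1,1,0,0), \quad P_3=(1,1,1,0), \qquad \tau^{-1}P_2=(0,1,1,0), \qquad I_2=(0,1,1,1), \quad I_3=(0,0,1,1). \] Because of the relation $\gamma\beta\alpha=0$ there is no indecomposable representation with dimension vector $(1,1,1,1)$; moreover $P_4\cong I_2$ and $I_1\cong P_3$, which is why these objects do not appear with separate labels in the Auslander-Reiten quiver above.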
Axiom \ref{P2} is straightforward to verify and axiom \ref{P3} is automatic as ${\cal E}$ is exact. We now claim that ${\cal A}$ does not satisfy axiom \ref{P4}. Note that the deflation $S_2\oplus P_3\twoheadrightarrow \tau^{-1}P_2$ is an ${\cal A}^{-1}$-deflation (with kernel $P_2$), and that the split embedding $S_2\rightarrowtail S_2\oplus P_3$ is an ${\cal A}^{-1}$-inflation (with cokernel $P_3$). As the composition $S_2\stackrel{\sim}{\rightarrowtail}S_2\oplus P_3\stackrel{\sim}{\twoheadrightarrow}\tau^{-1}P_2 \to S_3$ is zero, the map $\tau^{-1}P_2\to S_3$ descends to zero in $S_{{\cal A}}^{-1}{\cal E}$. As the (irreducible) morphism $\tau^{-1}P_2\to S_3$ does not factor through ${\cal A}$, it follows from proposition \ref{proposition:InterpretationOfP4} that ${\cal A}$ does not satisfy axiom \ref{P4}. We now show that conflations in ${\cal E}$ need not descend to kernel-cokernel pairs in $S_{{\cal A}}^{-1}{\cal E}$. Consider the conflation $\tau^{-1}P_2\rightarrowtail I_2\twoheadrightarrow S_4$ in ${\cal E}$ and note that the composition $\tau^{-1}P_2\rightarrowtail I_2\to I_3$ descends to zero in $S_{{\cal A}}^{-1}{\cal E}$ (indeed, the map $\tau^{-1}P_2\to I_3$ factors through $\tau^{-1}P_2\to S_3$ which descends to zero). If $\tau^{-1}P_2\rightarrowtail I_2\twoheadrightarrow S_4$ were a kernel-cokernel pair in $S_{{\cal A}}^{-1}{\cal E}$, there would have to be a map $f\colon S_3\to I_3$ in $S_{{\cal A}}^{-1}{\cal E}$ such that $P_3\to I_3$ factors through $f$. Note that a map $f\colon S_3\to I_3$ in $S_{{\cal A}}^{-1}{\cal E}$ can be represented by a diagram $S_3 \stackrel{\sim}{\leftarrow}X\to I_3$ in ${\cal E}$. Writing $X\stackrel{\sim}{\rightarrow}S_3$ as a composition of ${\cal A}^{-1}$-inflations and ${\cal A}^{-1}$-deflations, one finds that $X\cong S_3\oplus P_2^{\oplus n_2}\oplus P_3^{\oplus n_3}$ (for some $n_2,n_3$). This shows that there are no non-zero maps $S_3\to I_3$ in $S_{{\cal A}}^{-1}{\cal E}$. Hence, $S_{{\cal A}}^{-1}{\cal E}$ is not a conflation category. \end{example} \section{Admissibly percolating subcategories}\label{section:AbelianPercolating} In the previous sections, we considered quotients of deflation-exact categories by deflation-percolating subcategories. In theorem \ref{theorem:Maintheorem}, we showed that such a quotient can be obtained by localization with respect to a right multiplicative system. In practice, it might be difficult to show that a given morphism $f$ in ${\cal C}$ is a weak isomorphism or that $Q(f)$ is invertible. In this section, we address these difficulties by considering admissibly percolating subcategories of one-sided exact categories. Admissibly (inflation- or deflation-) percolating subcategories satisfy stronger axioms than regular percolating subcategories. We show that every weak isomorphism is an admissible morphism with kernel and cokernel in ${\cal A}$ (see theorem \ref{theorem:WeakIsomorphismsEqualAAInverseIsomorphisms}). This last property motivates the terminology of an admissibly percolating subcategory. In addition, we show that the class of weak isomorphisms (with respect to an admissibly percolating subcategory) satisfies the $2$-out-of-$3$ property (proposition \ref{proposition:2OutOf3}) and is saturated (proposition \ref{proposition:Saturation}). \subsection{Basic definitions and results} We begin with the definition of an admissibly percolating subcategory.
\begin{definition}\label{Definition:AbelianPercolating} Let ${\cal C}$ be a conflation category and let ${\cal A}$ be a non-empty full subcategory of ${\cal C}$. We call ${\cal A}$ an \emph{admissibly deflation-percolating subcategory} or a \emph{strictly deflation-percolating subcategory} of ${\cal C}$ if the following three properties are satisfied: \begin{enumerate}[label=\textbf{A\arabic*},start=1] \item\label{A1} ${\cal A}$ is a Serre subcategory, meaning: \[\mbox{ If } A'\rightarrowtail A \twoheadrightarrow A'' \mbox{ is a conflation in ${\cal C}$, then } A\in \Ob({\cal A}) \mbox{ if and only if } A',A''\in \Ob({\cal A}).\] \item\label{A2} For all morphisms $C\rightarrow A$ with $C \in \Ob({\cal C})$ and $A\in \Ob({\cal A})$, there exists a commutative diagram \[\xymatrix{ A'\ar@{>->}[rd] & \\ C \ar@{->>}[u]\ar[r]& A\\ }\] with $A'\in \Ob({\cal A})$, and where $C \twoheadrightarrow A'$ is a deflation and $A'\rightarrowtail A$ is an inflation. \item\label{A3}If $a\colon C\rightarrowtail D$ is an inflation and $b\colon C\twoheadrightarrow A$ is a deflation with $A\in \Ob({\cal A})$, then the pushout of $a$ along $b$ exists and yields an inflation and a deflation, i.e. \[\xymatrix{ C \ar@{>->}[r]^{a}\ar@{->>}[d]^b & D\ar@{.>>}[d]\\ A\ar@{>.>}[r] & P }\] \end{enumerate} \end{definition} \begin{remark}\makeatletter \hyper@anchor{\@currentHref} \makeatother\label{remark:AbelianPercolatingAboutAxioms} \begin{enumerate} \item A conflation category with an admissibly deflation-percolating subcategory satisfies \ref{R0*}. Indeed, it follows from \ref{A1} that $0 \in {\cal A}$ and from \ref{A2} that any morphism $X \to 0$ is a deflation. \item It follows from proposition \ref{proposition:WhenPushout} that the pushout square in axiom \ref{A3} is a pullback square as well. \item Conditions \ref{A1} and \ref{A2} are also required by \cite[definition 4.0.35]{Cardenas98}. \item Given a deflation-exact category ${\cal C}$ and an admissibly deflation-percolating subcategory ${\cal A}$, axiom \ref{A2} implies that ${\cal A}$ is strongly right filtering. By proposition \ref{proposition:MinimalConditionsRMS}, pullbacks along weak isomorphisms exist and weak isomorphisms are stable under pullbacks. \item If ${\cal C}$ is an exact category, axiom \ref{A3} is automatically satisfied. See for example the dual of \cite[proposition~2.15]{Buhler10}. \end{enumerate} \end{remark} \begin{lemma}\label{Lemma:EpiToA} Let ${\cal A}$ be an admissibly deflation-percolating subcategory of a conflation category ${\cal C}$. Let $f\colon C \to A$ be a morphism in ${\cal C}$ with $A \in {\cal A}$. If $f$ is a monomorphism (epimorphism), then $f$ is an inflation (deflation). In particular, a morphism $X \to 0$ is a deflation. \end{lemma} \begin{proof} This is an immediate application of axioms \ref{A1} and \ref{A2}. \end{proof} \begin{proposition}\label{proposition:AIsAbelian} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}$ be an admissibly deflation-percolating subcategory. Then ${\cal A}$ is a (strongly) deflation-percolating subcategory of ${\cal C}$ and ${\cal A}$ is an abelian subcategory. \end{proposition} \begin{proof} Axioms \ref{P1} and \ref{P2} hold by axioms \ref{A1} and \ref{A2}. Axiom \ref{P3} follows from axioms \ref{A2} and \ref{A3} in a straightforward way. To show axiom \ref{P4}, it suffices to see that the conditions of proposition \ref{proposition:P4Criterion} hold.
For this, we note that axiom \ref{A2} shows that every morphism $f\colon A \to B$ in ${\cal A}$ is admissible and hence admits a cokernel $B \twoheadrightarrow \coker(f)$. By axiom \ref{A1}, we know that $\coker(f) \in {\cal A}$. We conclude that axiom \ref{P4} holds, and hence ${\cal A}$ is a (strongly) deflation-percolating subcategory of ${\cal C}$. It remains to show that ${\cal A}$ is abelian. Let $f\colon A\to B$ be a morphism in ${\cal A}$. By axiom \ref{A2}, $f$ is admissible. By axiom \ref{A1}, $\ker(f)$ and $\coker(f)$ belong to ${\cal A}$; it follows that ${\cal A}$ is abelian. \end{proof} \begin{remark}\makeatletter \hyper@anchor{\@currentHref} \makeatother \begin{enumerate} \item Let ${\cal C}$ be a deflation-exact category and let ${\cal A}$ be a deflation-percolating subcategory. We claim that, if all weak isomorphisms are admissible, then ${\cal A}$ satisfies axiom \ref{A2}. Let $A,B \in \Ob({\cal A})$. Any morphism $f\colon A \to B$ is the composition of $\begin{psmallmatrix} 1 \\ f \end{psmallmatrix}\colon A \stackrel{\sim}{\rightarrowtail} A \oplus B$ and $\begin{psmallmatrix} 0 & 1 \end{psmallmatrix}\colon A \oplus B \stackrel{\sim}{\twoheadrightarrow} B$, and thus a weak isomorphism. Hence, if the weak isomorphisms are admissible, then any map between objects of ${\cal A}$ is admissible. This observation, combined with axiom \ref{P2}, yields axiom \ref{A2}. \item As in the proof of proposition \ref{proposition:AIsAbelian}, axioms \ref{A1} and \ref{A2} imply axiom \ref{P4}. We will see in example \ref{Example:IsbellCategory} that axioms \ref{A1}, \ref{A2}, and \ref{P3} (and thus also \ref{P4}) alone are not sufficient for the set $S_{\cal A}$ to be admissible. This motivates strengthening axiom \ref{P3} to axiom \ref{A3} (see theorem \ref{theorem:WeakIsomorphismsEqualAAInverseIsomorphisms}). \end{enumerate} \end{remark} \begin{proposition}\label{proposition:PercolatingOfPercolating} Let ${\cal C}$ be a deflation-exact category and let ${\cal A} \subseteq {\cal C}$ be an admissibly percolating subcategory. If ${\cal B} \subseteq {\cal A}$ is a Serre subcategory, then ${\cal B} \subseteq {\cal C}$ is an admissibly percolating subcategory. \end{proposition} \begin{proof} It is easy to see that ${\cal B} \subseteq {\cal C}$ satisfies axioms \ref{A1} and \ref{A3}. Using that ${\cal A}$ is an abelian category (proposition \ref{proposition:AIsAbelian}) and that the embedding ${\cal A} \subseteq {\cal C}$ satisfies axiom \ref{A2}, it is easy to verify that ${\cal B} \subseteq {\cal C}$ satisfies axiom \ref{A2}. \end{proof} \subsection{Homological consequences of axiom \ref{A3}}\label{subsection:HomologicalConsequencesOfAxiomA3} Throughout this subsection, let ${\cal C}$ be a deflation-exact category and ${\cal A}$ a non-empty full subcategory of ${\cal C}$ satisfying axiom \ref{A3}. We show that the existence of such a subcategory yields a weak version of axiom \ref{R3} (see proposition \ref{proposition:WeakR3} below), and we show that a weak version of the $3\times 3$-lemma holds (see proposition \ref{proposition:WeakNineLemma} below). If ${\cal C}$ is a strongly deflation-exact category, these two properties are automatically satisfied (see \cite{BazzoniCrivei13}). \begin{proposition}\label{proposition:WeakR3} Let $g\colon Y\rightarrow Z$ be a map such that $g$ has a kernel belonging to ${\cal A}$, and suppose that there exists a deflation $f\colon X\twoheadrightarrow Y$ such that $gf$ is also a deflation. Then $g$ is a deflation.
\end{proposition} \begin{proof} Proposition \ref{proposition:MitchellPullback} yields the following commutative diagram \[\xymatrix{ \ker(gf)\ar@{.>}[d]^{f'} \ar@{>->}[r]^{k'} & X\ar@{->>}[r]\ar@{->>}[d]^f & Z\ar@{=}[d]\\ \ker(g)\ar[r]^k & Y\ar[r]^g & Z }\] where the left-hand square is a pullback. Axiom \ref{R2} implies that $f'$ is a deflation. Proposition \ref{proposition:WhenPullback} yields the existence of the following commutative diagram \[\xymatrix{ K\ar@{>->}[d]^{l}\ar@{=}[r] & K\ar@{>->}[d]^{l'}\\ \ker(gf)\ar@{->>}[d]^{f'} \ar@{>->}[r]^{k'} & X\ar@{->>}[d]^f \\ \ker(g)\ar[r]^k & Y }\] where the columns are conflations. By proposition \ref{proposition:MitchellPullbackPushout}\eqref{enumerate:MitchellPushout}, we know that the lower square is a pushout. Since $\ker(g)\in \Ob({\cal A})$, axiom \ref{A3} implies that $k\colon \ker(g)\rightarrowtail Y$ is an inflation. Proposition \ref{proposition:WhenPushout} implies that $g$ is the cokernel of the inflation $k$, and hence $g$ is a deflation. \end{proof} The next proposition is a version of the $3\times 3$-lemma as given in \cite[proposition 5.11]{BazzoniCrivei13}. \begin{proposition}\label{proposition:WeakNineLemma} Consider a commutative diagram \[\xymatrix{ X\ar@{>->}[r]\ar@{->>}[d] & Y\ar@{->>}[r]\ar@{->>}[d] & Z\ar@{->>}[d] \\ X'\ar@{>->}[r] & Y'\ar@{->>}[r] & Z' }\] where the rows are conflations and the vertical arrows are deflations. If $X'\in \Ob({\cal A})$, then the above diagram can be completed to a commutative diagram \[\xymatrix{ X''\ar@{>->}[r]\ar@{>->}[d] & Y''\ar@{>->}[d]\ar@{->>}[r] & Z''\ar@{>->}[d]\\ X\ar@{>->}[r]\ar@{->>}[d] & Y\ar@{->>}[r]\ar@{->>}[d] & Z\ar@{->>}[d] \\ X'\ar@{>->}[r] & Y'\ar@{->>}[r] & Z' }\] where the rows and the columns are conflations. Moreover, the upper left square is a pullback and the lower right square is a pushout. \end{proposition} \begin{proof} By proposition \ref{proposition:FactorizationOfConflationMorphism}, the diagram can be extended to a commutative diagram \[\xymatrix{ X\ar@{>->}[r]^i\ar@{->>}[d]_{f} \ar@{}[rd] | {A} & Y\ar@{->>}[r]^p\ar[d] \ar@{}[rd] | {B} & Z\ar@{=}[d]\\ X'\ar@{>->}[r]\ar@{=}[d] \ar@{}[rd] | {C} & P\ar@{->>}[r]\ar[d] \ar@{}[rd] | {D} & Z\ar@{->>}[d]^h\\ X'\ar@{>->}[r]_{i'} & Y'\ar@{->>}[r]_{p'} & Z' }\] such that the square {D} is a pullback and square {A} is both a pullback and a pushout. By axioms \ref{A3} and \ref{R2}, the maps $Y\rightarrow P$ and $P\rightarrow Y'$ are deflations. Applying proposition \ref{proposition:WhenPullback} yields the following commutative diagrams: \begin{center} \begin{tabular}{l p{.2\textwidth} r} $\xymatrix{ X''\ar@{=}[r]\ar@{>->}[d]\ar@{}[rd] | {E} & X'' \ar@{>->}[d]&\\ X\ar@{->>}[d]\ar@{}[rd] | {A}\ar@{>->}[r] & Y\ar@{->>}[d]\ar@{->>}[r]\ar@{}[rd] | {B} & Z\ar@{=}[d]\\ X'\ar@{>->}[r] & P\ar@{->>}[r] & Z }$ & & $\xymatrix{ & Z''\ar@{=}[r]\ar@{>->}[d]\ar@{}[rd] | {F} & Z''\ar@{>->}[d]\\ X'\ar@{=}[d]\ar@{>->}[r]\ar@{}[rd] | {C} & P\ar@{->>}[d]\ar@{->>}[r]\ar@{}[rd] | {D} & Z\ar@{->>}[d]\\ X'\ar@{>->}[r] & Y'\ar@{->>}[r] & Z' }$ \end{tabular} \end{center} where the rows and columns are conflations. Starting from the conflations $X'' \rightarrowtail Y \twoheadrightarrow P$ and $Z'' \rightarrowtail P \twoheadrightarrow Y'$, we construct the commutative diagram \[\xymatrix{ X''\ar@{=}[d]\ar@{.>}[r]\ar@{}[rd] | {G} & Y''\ar@{>->}[d]\ar@{.>}[r]\ar@{}[rd] | {H} & Z''\ar@{>->}[d] \\ X'' \ar@{>->}[r]& Y \ar@{->>}[d]\ar@{->>}[r]\ar@{}[rd] | {I}& P\ar@{->>}[d]\\ & Y'\ar@{=}[r] & Y' }\] where the rows and columns are conflations.
Here, the dotted morphism $\xymatrix@1{Y'' \ar@{..>}[r] & Z''}$ is chosen such that the square {H} is a pullback (see proposition \ref{proposition:MitchellPullback}; the chosen morphism is automatically a deflation by axiom \ref{R2}). The square {G} is given by proposition \ref{proposition:WhenPullback}. Putting the commutative squares together, we obtain the commutative diagram: \[\xymatrix{ X''\ar@{=}[d]\ar@{>.>}[r]\ar@{}[rd] | {G} & Y''\ar@{>->}[d]\ar@{.>>}[r]\ar@{}[rd] | {H} & Z''\ar@{>->}[d] \\ X'' \ar@{>->}[r] \ar@{>->}[d] \ar@{}[rd] | {E} & Y \ar@{=}[d]\ar@{->>}[r]\ar@{}[rd] | {B} & P\ar@{->>}[d]\\ X \ar@{>->}[r] & Y\ar@{->>}[r] & Z }\] where the right-most column composes to the morphism $Z'' \rightarrowtail Z$ by the square {F}. As $X'' \rightarrowtail X$, $Y'' \rightarrowtail Y$, and $Z'' \rightarrowtail Z$ have been constructed as kernels of $X \twoheadrightarrow X'$, $Y \twoheadrightarrow Y'$, and $Z \twoheadrightarrow Z'$, respectively, we obtain the $3 \times 3$-diagram in the statement of the proposition. Finally, consider the $3 \times 3$-diagram in the statement of the proposition. It follows from proposition \ref{proposition:MitchellPullbackPushout} that the upper left square is a pullback and the lower right square is a pushout. \end{proof} \subsection{Weak isomorphisms are admissible}\label{subsection:AbelianPercolatingAdmissible} Throughout this subsection, ${\cal C}$ denotes a deflation-exact category and ${\cal A}$ denotes an admissibly deflation-percolating subcategory. Consider the set $\widehat{S_{{\cal A}}} \coloneqq S_{{\cal A}}\cap \Adm({\cal C})$ of admissible weak isomorphisms. The aim of this subsection is to show that $\widehat{S_{{\cal A}}}=S_{{\cal A}}$. \begin{remark}\makeatletter \hyper@anchor{\@currentHref} \makeatother\label{remark:DefinitionAdmissibleWeakIso} \begin{enumerate} \item A morphism $f\colon X\rightarrow Y$ in ${\cal C}$ belongs to $\widehat{S_{{\cal A}}}$ if and only if $f$ is admissible and $\ker(f),\coker(f)\in {\cal A}$. \item For any admissible morphism $f$, one automatically has that $\coim(f)\cong \im(f)$ and $f$ factors as deflation-inflation through $\im(f)$. \item Admissible weak isomorphisms are called ${\cal A}^{-1}$-isomorphisms in \cite[definition~4.0.36]{Cardenas98}. \item Any morphism $\alpha\colon A\rightarrow B$ in an admissibly deflation-percolating subcategory ${\cal A}\subseteq {\cal C}$ belongs to $\widehat{S_{{\cal A}}}$. Indeed, this is an immediate corollary of proposition \ref{proposition:AIsAbelian}. \end{enumerate} \end{remark} We show two additional homological properties which are consequences of axiom \ref{A2}. The first is a strengthening of the lifting lemma (lemma \ref{lemma:LiftingConflations}). \begin{corollary}\label{corollary:WeakNineLemma} Let $X \rightarrowtail Y \twoheadrightarrow Z$ be a conflation and $f\colon Y \twoheadrightarrow B$ be a deflation. If $B \in \Ob({\cal A})$, then there is a commutative diagram \[\xymatrix{ X''\ar@{>->}[r]\ar@{>->}[d]^{\rotatebox{90}{$\sim$}} & Y''\ar@{>->}[d]^{\rotatebox{90}{$\sim$}}\ar@{->>}[r] & Z''\ar@{>->}[d]^{\rotatebox{90}{$\sim$}}\\ X\ar@{>->}[r]\ar@{->>}[d] & Y\ar@{->>}[r]\ar@{->>}[d]^{f} & Z\ar@{->>}[d] \\ A\ar@{>->}[r] & B\ar@{->>}[r] & C }\] where the rows and the columns are conflations, and where the bottom row lies in ${\cal A}$. Moreover, the upper left square is a pullback and the lower right square is a pushout.
\end{corollary} \begin{proof} It follows from \ref{A2} that the composition $X \to Y \to B$ factors as $X \twoheadrightarrow A \rightarrowtail B$ with $A \in \Ob({\cal A})$. This gives the following commutative diagram: \[\xymatrix{ X\ar@{>->}[r]\ar@{->>}[d] & Y\ar@{->>}[r]\ar@{->>}[d]^{f} & Z\ar@{.>>}[d] \\ A\ar@{>->}[r] & B\ar@{->>}[r] & C }\] with exact rows (the bottom row lies in ${\cal A}$ by \ref{A1}). The dotted arrow is induced by the universal property of the cokernel $Y\twoheadrightarrow Z$. One easily verifies that the dotted arrow is an epimorphism and, thus, by lemma \ref{Lemma:EpiToA}, a deflation. The statement now follows from proposition \ref{proposition:WeakNineLemma}. \end{proof} \begin{proposition}\label{proposition:PushoutOfAInverseAlongMapToA} Let $f\colon X\rightarrow Y$ belong to $\widehat{S_{{\cal A}}}$ and let $g\colon X\rightarrow A$ be any morphism. If $A \in \Ob({\cal A})$, then the pushout of $f$ along $g$ exists and the induced map belongs to $\widehat{S_{{\cal A}}}$. \end{proposition} \begin{proof} By definition, $f$ is an admissible map. By axiom \ref{A2}, $g$ is admissible as well; write $X \twoheadrightarrow A' \rightarrowtail A$ for the corresponding factorization of $g$. Since $A' \in {\cal A}$, corollary \ref{corollary:WeakNineLemma} yields a commutative diagram \[\xymatrix{ Y && \\ X'\ar@{>->}[u]\ar@{.>>}[r] & P& \\ X\ar@{->>}[r] \ar@{->>}[u]& A' \ar@{.>>}[u]\ar@{>->}[r]&A\\ \ker(f) \ar@{.>>}[r]\ar@{>->}[u]& A''\ar@{>.>}[u] }\] with $P \in {\cal A}$ and such that the square $X'PA'X$ is a pushout. Applying axiom \ref{A3} twice yields a commutative diagram \[\xymatrix{ Y\ar@{.>>}[r] & Q& \\ X'\ar@{>->}[u]\ar@{->>}[r]\ar@{>.>}[u] & P\ar@{>.>}[u]\ar@{>.>}[r]&R \\ X\ar@{->>}[r] \ar@{->>}[u]& A' \ar@{->>}[u]\ar@{>->}[r]&A\ar@{.>>}[u] }\] Since $Q,R,P\in \Ob({\cal A})$ and ${\cal A}$ is an abelian category by proposition \ref{proposition:AIsAbelian}, we can complete $Q,R,P$ to a pushout square. Hence we obtain the commutative diagram \[\xymatrix{ Y\ar@{->>}[r] & Q\ar@{>.>}[r]& S\\ X'\ar@{>->}[u]\ar@{->>}[r]\ar@{>->}[u] & P\ar@{>->}[u]\ar@{>->}[r]&R\ar@{>.>}[u] \\ X\ar@{->>}[r] \ar@{->>}[u]& A' \ar@{->>}[u]\ar@{>->}[r]&A\ar@{->>}[u]\\ }\] where all squares are pushout squares. It follows from the pushout lemma that the square $YSAX$ is a pushout square as well. Since the map $A\rightarrow S$ belongs to ${\cal A}$, remark \ref{remark:DefinitionAdmissibleWeakIso} yields that it is an admissible weak isomorphism. \end{proof} \begin{lemma}\label{lemma:CompositionOfAAInflations} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}$ be a full subcategory of ${\cal C}$. If ${\cal A}$ satisfies axioms \ref{A1} and \ref{A3}, the composition of ${\cal A}^{-1}$-inflations is again an ${\cal A}^{-1}$-inflation. \end{lemma} \begin{proof} Let $\xymatrix{U\ar@{>->}[r]^{\sim}_a & V}$ and $\xymatrix{V\ar@{>->}[r]^{\sim}_b & W}$ be two ${\cal A}^{-1}$-inflations. Consider the following commutative diagram: \[\xymatrix{ U\ar@{>->}[r]^{\sim}_a\ar@{=}[d] & V\ar@{->>}[r]\ar@{>->}[d]^{\rotatebox{90}{$\sim$}}_b &\coker(a)\ar@{>->}[d]\\ U\ar[r]_{ba} & W\ar@{->>}[r]\ar@{->>}[d] &P\ar@{->>}[d]\\ & \coker(b)\ar@{=}[r] & \coker(b) }\] The top right square is a pushout square which exists by axiom \ref{A3}. Note that the top right square is also a pullback; it follows that the composition $ba$ is the kernel of the deflation $W\twoheadrightarrow P$. Since $\coker(a),\coker(b)\in {\cal A}$, axiom \ref{A1} implies that $P\in {\cal A}$ as well. It follows that $ba$ is an ${\cal A}^{-1}$-inflation.
\end{proof} The next lemma is crucial in showing that $S_{{\cal A}}=\widehat{S_{{\cal A}}}$, i.e.~that the weak isomorphisms are automatically admissible. \begin{lemma}\label{lemma:SwitchAInflationsAndADeflations} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}\subseteq {\cal C}$ be an admissibly deflation-percolating subcategory. Let $a\colon U \stackrel{\sim}{\rightarrowtail} V$ and $b\colon V\stackrel{\sim}{\twoheadrightarrow} W$ be an ${\cal A}^{-1}$-inflation and ${\cal A}^{-1}$-deflation, respectively. The composition $b\circ a$ is an admissible weak isomorphism. \end{lemma} \begin{proof} Using corollary \ref{corollary:WeakNineLemma}, we find the commutative diagram \[\xymatrix{ \ker(c_ak_b)\ar@{>->}[d]^i\ar[r]^{k_b'}& U\ar@{->>}[r]^{c_b'} \ar@{>->}[d]^a & \ker(c_a')\ar@{>->}[d]^{k_a'}\\ \ker(b)\ar@{>->}[r]^{k_b}\ar@{->>}[d]^{p} & V\ar@{->>}[r]^{b}\ar@{->>}[d]^{c_a} & W\ar@{->>}[d]^{c_a'}\\ \im(c_ak_b)\ar@{>->}[r]^{i'}& \coker(a)\ar@{->>}[r]^{p'} &\coker(c_a k_b) }\] such that the rows and columns are conflations. By axiom \ref{A1}, the left column and lower row belong to ${\cal A}$. The upper-right square shows that $ba=k_a'c_b'$ and thus $ba$ is admissible. Clearly, $\ker(ba)=\ker(c_ak_b)$ and $\coker(ba)=\coker(c_a k_b)$, both belonging to ${\cal A}$. \end{proof} \begin{theorem}\label{theorem:WeakIsomorphismsEqualAAInverseIsomorphisms} Let ${\cal C}$ be a deflation-exact category. If ${\cal A}\subseteq {\cal C}$ is an admissibly deflation-percolating subcategory, then $S_{{\cal A}}=\widehat{S_{{\cal A}}}$; in particular, all weak isomorphisms are admissible. Moreover, $S_{{\cal A}}$ is a right multiplicative system such that the square in axiom \ref{RMS2} can be chosen to be a pullback square; in particular, one can take pullbacks along weak isomorphisms. \end{theorem} \begin{proof} The proof is a straightforward application of proposition \ref{proposition:MinimalConditionsRMS} and lemmas \ref{lemma:CompositionOfAAInflations} and \ref{lemma:SwitchAInflationsAndADeflations}. \end{proof} \subsection{The 2-out-of-3 property}\label{subsection:2oo3} Throughout this subsection, ${\cal C}$ is a deflation-exact category and ${\cal A}$ is an admissibly deflation-percolating subcategory. We now show that the right multiplicative system $S_{{\cal A}}$ of admissible weak isomorphisms satisfies the 2-out-of-3 property. We first establish some preliminary results. \begin{lemma}\label{Lemma:Bazzoni5.10}\label{Lemma:DeflationInflationFiveLemma} Consider a commutative diagram \[\xymatrix{ X\ar@{>->}[r]\ar[d]^f & Y\ar@{->>}[r]\ar[d]^g & Z\ar@{=}[d]\\ X'\ar@{>->}[r] & Y'\ar@{->>}[r] & Z }\] with exact rows. \begin{enumerate} \item If $f$ is an ${\cal A}^{-1}$-inflation, then $g$ is an ${\cal A}^{-1}$-inflation. \item If $f$ is an ${\cal A}^{-1}$-deflation, then $g$ is an ${\cal A}^{-1}$-deflation. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item From proposition \ref{proposition:PushoutIfCokernel}, we obtain the following commutative diagram with exact rows and columns (where the left square is a pushout): \[\xymatrix{ X\ar@{>->}[r]\ar@{>->}[d]^f_{\rotatebox{90}{$\sim$}} & Y\ar@{->>}[r]\ar[d]^g & Z\ar@{=}[d]\\ X'\ar@{>->}[r]\ar@{->>}[d] & Y'\ar@{->>}[r]\ar[d]^{c_g} & Z\\ \coker(f)\ar@{=}[r] & \coker(g) & }\] As $c_g\colon Y'\rightarrow \coker(g)$ is an epimorphism and $\coker(f)\in \Ob({\cal A})$, lemma \ref{Lemma:EpiToA} yields that $c_g$ is a deflation. Denote the kernel of $c_g$ by $K\rightarrowtail Y'$.
By corollary \ref{corollary:WeakNineLemma} we obtain a commutative diagram: \[\xymatrix{ X\ar@{>->}[r]\ar@{=}[d] & Y\ar@{->>}[r]\ar@{.>}[d] & Z\ar@{=}[d]\\ X\ar@{>->}[r]\ar@{>->}[d] & K\ar@{->>}[r]\ar@{>->}[d] & Z\ar@{=}[d]\\ X'\ar@{>->}[r]\ar@{->>}[d]& Y'\ar@{->>}[r]\ar@{->>}[d]& Z\ar@{->>}[d]\\ \coker(f)\ar@{=}[r] & \coker(g)\ar@{->>}[r] &0 }\] The dotted arrow is obtained by factoring $g$ through the kernel of its cokernel. By the short five lemma (\cite[lemma 5.3]{BazzoniCrivei13}), the induced map $Y\rightarrow K$ is an isomorphism. It follows that $g$ is an inflation. Since $\coker(g)\in\Ob({\cal A})$, we find that $g \in S_{{\cal A}}$. \item By proposition \ref{proposition:PushoutIfCokernel}, we know that the left square is a pushout and a pullback, and we obtain the following commutative diagram (where the columns are conflations) \[\xymatrix{ \ker(f)\ar@{>->}[d]^k\ar@{=}[r] & \ker(g)\ar[d]\\ X\ar@{>->}[r]^i\ar@{->>}[d]^f & Y\ar[d]^g \\ X'\ar@{>->}[r]^{i'} & Y' }\] with $\ker(f)\in \Ob({\cal A})$ and such that $ik$ is the kernel of $g$. By proposition \ref{proposition:PullbackPushout}, the map $\begin{psmallmatrix} i' & g \end{psmallmatrix}\colon X'\oplus Y\twoheadrightarrow Y'$ is a deflation. By \cite[lemma 5.1]{BazzoniCrivei13}, the map $\begin{psmallmatrix} f & 0\\0&1 \end{psmallmatrix}\colon X\oplus Y\rightarrow X'\oplus Y$ is a deflation as well. Axiom \ref{R1} yields that \[\begin{pmatrix} i' & g \end{pmatrix}\begin{pmatrix} f & 0\\0&1 \end{pmatrix}=\begin{pmatrix} i'f & g \end{pmatrix}=\begin{pmatrix} gi & g \end{pmatrix}=g\begin{pmatrix} i & 1 \end{pmatrix}\] is a deflation. As $\begin{psmallmatrix} i& 1 \end{psmallmatrix}\colon X \oplus Y \to Y$ is a retraction (and hence a deflation), proposition \ref{proposition:WeakR3} shows that $g$ is a deflation. \qedhere \end{enumerate} \end{proof} \begin{lemma}\label{lemma:CompositionOfAAInflationAndInflation} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}$ be a non-empty full subcategory satisfying axiom \ref{A3}. Consider two composable inflations $f\colon X\rightarrowtail Y$ and $g\colon Y\rightarrowtail Z$. If $f$ is an ${\cal A}^{-1}$-inflation, $g\circ f$ is an inflation. \end{lemma} \begin{proof} Let $q\colon Y\twoheadrightarrow \coker(f)$ be the cokernel of $f$. By axiom \ref{A3}, we obtain the following commutative diagram such that the lower-right square is bicartesian: \[\xymatrix{ X\ar@{=}[r]\ar@{>->}[d]^{\rotatebox{90}{$\sim$}}_f & X\ar[d]_{gf}\\ Y\ar@{>->}[r]\ar@{->>}[d] & Z\ar@{.>>}[d]\\ \coker(f)\ar@{>.>}[r] & P }\] By proposition \ref{proposition:MitchellPullback}, $g\circ f$ is the kernel of the deflation $Z\to P$ and thus $g\circ f$ is an inflation. \end{proof} \begin{proposition}\makeatletter \hyper@anchor{\@currentHref} \makeatother\label{proposition:DeflationInflationFiveLemma} \begin{enumerate} \item Consider the following commutative diagram in ${\cal C}$ \[\xymatrix{ X\ar@{>->}[r]\ar@{>->}[d]^f & Y\ar@{->>}[r]\ar[d]^g & Z\ar@{>->}[d]^h \\ X'\ar@{>->}[r] & Y'\ar@{->>}[r] & Z'}\] where the rows are conflations. \begin{enumerate} \item If $f$ is an ${\cal A}^{-1}$-inflation, then $g$ is an inflation. \item If additionally $h$ is an ${\cal A}^{-1}$-inflation, then $g$ is an ${\cal A}^{-1}$-inflation. \end{enumerate} \item Consider the following commutative diagram in ${\cal C}$ \[\xymatrix{ X\ar@{>->}[r]\ar@{->>}[d]^f & Y\ar@{->>}[r]\ar[d]^g & Z\ar@{->>}[d]^h \\ X'\ar@{>->}[r] & Y'\ar@{->>}[r] & Z'}\] where the rows are conflations. 
\begin{enumerate} \item If $f$ is an ${\cal A}^{-1}$-deflation, then $g$ is a deflation. \item If additionally $h$ is an ${\cal A}^{-1}$-deflation, then $g$ is an ${\cal A}^{-1}$-deflation. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} Following proposition \ref{proposition:FactorizationOfConflationMorphism}, we consider the following factorization of both diagrams in the statement of the proposition: \[\xymatrix{ X\ar@{>->}[r] \ar@{}[rd] | {A} \ar[d]_f & Y\ar@{->>}[r]\ar[d]^{g_1} \ar@{}[rd] | {B} & Z\ar@{=}[d] \\ X'\ar@{=}[d]\ar@{>->}[r] \ar@{}[rd] | {C} & P \ar@{->>}[r]\ar[d]^{g_2} \ar@{}[rd] | {D} & Z \ar[d]^h \\ X'\ar@{>->}[r] & Y'\ar@{->>}[r] & Z'}\] \begin{enumerate} \item If $f$ is an ${\cal A}^{-1}$-inflation, then lemma \ref{Lemma:DeflationInflationFiveLemma} yields that $g_1$ is an ${\cal A}^{-1}$-inflation. As $D$ is a pullback square and $h$ an inflation, proposition \ref{proposition:PullbackOfInflation} yields that $g_2$ is an inflation. It now follows from lemma \ref{lemma:CompositionOfAAInflationAndInflation} that $g=g_2\circ g_1$ is an inflation. Moreover, if $h$ is an ${\cal A}^{-1}$-inflation, its cokernel belongs to ${\cal A}$. As $D$ is also a pushout square, $\coker(g_2)=\coker(h)$. Hence $g_2$ is an ${\cal A}^{-1}$-inflation. By lemma \ref{lemma:CompositionOfAAInflations} we conclude that $g=g_2\circ g_1$ is an ${\cal A}^{-1}$-inflation. \item If $f$ is an ${\cal A}^{-1}$-deflation, lemma \ref{Lemma:DeflationInflationFiveLemma} yields that $g_1$ is an ${\cal A}^{-1}$-deflation. As $D$ is a pullback square and $h$ a deflation, axiom \ref{R2} yields that $g_2$ is a deflation. By axiom $\ref{R1}$, $g=g_2\circ g_1$ is a deflation. Moreover, if $h$ is an ${\cal A}^{-1}$-deflation, then $g_2$ is a deflation (as shown above) with $\ker(g_2) = \ker(h) \in {\cal A}$, i.e.~$g_2$ is an ${\cal A}^{-1}$-deflation. It then follows from theorem \ref{theorem:WeakIsomorphismsEqualAAInverseIsomorphisms} that $g = g_2 \circ g_1$ is an ${\cal A}^{-1}$-deflation.\qedhere \end{enumerate} \end{proof} \begin{proposition}\label{proposition:2OutOf3} The two-out-of-three property holds, i.e.~if $f$ and $g$ are composable morphisms and two of the three maps $f$, $g$, and $gf$ belong to $S_{{\cal A}}$, then so does the third. \end{proposition} \begin{proof} As we already showed that $S_{{\cal A}}$ is a right multiplicative set, we know that $f,g\in S_{{\cal A}}$ implies that $gf \in S_{\cal A}$. We will first show that $g,gf\in S_{{\cal A}}$ implies that $f \in S_{\cal A}$. In step 1 we prove this statement assuming that $g$ is an inflation. In step 2 we prove the statement assuming that $g$ is a deflation. These two steps suffice as we know that $g$ has a deflation-inflation factorization where both parts are morphisms in $S_{{\cal A}}$.\\ \begin{enumerate} \item[Step 1: ] We now show that if $g,gf\in S_{{\cal A}}$ and $g$ is an inflation, then $f\in S_{{\cal A}}$. As $g$ is a monomorphism, we find that $\ker(f)=\ker(gf)\in \Ob({\cal A})$. It follows that $\coim(gf)\cong\coim(f)$. Hence we obtain the diagram: \[\xymatrix{ & Y\ar@{>->}[rd]^g & \\ X\ar@{->>}[r]\ar[ru]^f\ar@{->>}[rd] & \coim(f)\ar[d]^{\cong} \ar@{.>}[u]& Z\\ & \coim(gf)\ar@{>->}[ru] & }\] Clearly the left-hand side of the diagram is commutative. Since $X\twoheadrightarrow \coim(f)$ is epic, the right side is commutative as well. Since $gf\in S_{{\cal A}}$ is admissible, we have that $\coim(gf)\cong \im(gf)$. Since the right-hand side commutes, the map $\im(gf)\rightarrowtail Z \twoheadrightarrow \coker(g)$ is zero.
By corollary \ref{corollary:WeakNineLemma}, we obtain a commutative diagram: \[\xymatrix{ \coim(f)\ar@{>->}[d]\ar[r]^{\cong} & \im(gf)\ar@{->>}[r] \ar@{>->}[d]& 0\ar@{>->}[d]\\ Y\ar@{>->}[r]^g\ar@{->>}[d] & Z\ar@{->>}[r]\ar@{->>}[d] & \coker(g)\ar@{=}[d]\\ K \ar@{>->}[r]& \coker(gf)\ar@{->>}[r] & \coker(g) }\] Using that $g$ is monic, one readily verifies that $\coim(f)\rightarrowtail Y$ coincides with the dotted arrow from the previous diagram. It follows that $f\colon X \twoheadrightarrow X/\ker(f) = \coim(f) \rightarrowtail Y$ is admissible and that $\coker(f)=K\in {\cal A}$. We conclude that $f\in S_{{\cal A}}$. \item[Step 2: ] We now show that if $g,gf\in S_{{\cal A}}$ and $g$ is a deflation, then $f\in S_{{\cal A}}$. Consider the commutative diagram: \[\xymatrix{ \ker(gf)\ar@{.>}[d]^{\phi} \ar@{>->}[r]& X\ar[d]^f\ar@{->>}[r] & \im(gf)\ar@{>->}[d]\\ \ker(g)\ar@{>->}[r] & Y\ar@{->>}[r]^g &Z }\] As $\phi$ is a map in ${\cal A}$ and ${\cal A}$ is abelian (see proposition \ref{proposition:AIsAbelian}), we know that $\phi$ factors as a deflation-inflation through its image $\im(\phi)$. Moreover, proposition \ref{proposition:MitchellPullback} implies that the left square is a pullback. As pullbacks preserve kernels, $\ker(f)=\ker(\phi)\in \Ob({\cal A})$. Axiom \ref{A3} yields the following commutative diagram: \[\xymatrix{ \ker(gf)\ar@{>->}[r]\ar@{->>}[d]^{\rotatebox{90}{$\sim$}} & X\ar@{->>}[r]\ar@{->>}[d] & \im(gf)\ar@{=}[d]\\ \im(\phi)\ar@{>->}[r]\ar@{>->}[d]^{\rotatebox{90}{$\sim$}}& P\ar@{->>}[r]\ar@{.>}[d] & \im(gf)\ar@{>->}[d]^{\rotatebox{90}{$\sim$}}\\ \ker(g)\ar@{>->}[r] & Y\ar@{->>}[r] & Z }\] Indeed, the upper-left square is a pushout square constructed by axiom \ref{A3} and the lower-right square commutes as $X\twoheadrightarrow P$ is epic. By proposition \ref{proposition:DeflationInflationFiveLemma} we conclude that $P\rightarrow Y$ is an ${\cal A}^{-1}$-inflation. It follows that $f\in S_{{\cal A}}$. This concludes step $2$. \end{enumerate} Next, we will show that if $f,gf\in S_{{\cal A}}$ then $g\in S_{{\cal A}}$. Since $f$ has a deflation-inflation factorization, it suffices to prove the statement separately assuming that $f$ is a deflation and assuming $f$ is an inflation. This will be done in steps 1' and 2'. \begin{enumerate} \item[Step 1': ] Assume that $f$ is a deflation. Then $\coker(gf)=\coker(g)\in \Ob({\cal A})$. Hence we get the diagram \[\xymatrix{ & Y\ar[rd]^g\ar@{.>}[d]^h & \\ X\ar@{->>}[ru]^f\ar@{->>}[rd] & \im(g)\ar@{>->}[r] & Z\\ & \im(gf)\ar@{>->}[ru]\ar[u]^{\cong} & }\] Using that $\im(g)\rightarrowtail Z$ is monic, we see that this diagram is commutative. As the composition $\ker(f)\rightarrow X\rightarrow \im(gf)$ is zero, one easily obtains the following commutative diagram: \[\xymatrix{ \ker(f)\ar@{>.>}[r]\ar@{=}[d] & \ker(gf)\ar@{->>}[r]\ar@{>->}[d] & C\ar@{.>}[d]\\ \ker(f)\ar@{>->}[r]\ar@{->>}[d] & X\ar@{->>}[r]^f\ar@{->>}[d] & Y\ar@{.>}[d]^{h'}\\ 0\ar@{>->}[r] & \im(gf)\ar@{=}[r] & \im(gf) }\] Here the induced map $\ker(f)\rightarrow \ker(gf)$ is a monomorphism between objects in ${\cal A}$. As ${\cal A}$ is abelian (see proposition \ref{proposition:AIsAbelian}), it has a cokernel $C \in {\cal A}$. By proposition \ref{proposition:MitchellPullbackPushout} the upper-right square is a pushout, so that by axiom \ref{A3} the map $C\rightarrow Y$ is an inflation and by proposition \ref{proposition:PushoutIfCokernel} the map $Y\rightarrow \im(gf)$ is a deflation. As $f$ is epic, one sees that $h=h'$.
It follows that $g\colon Y \twoheadrightarrow \im(gf) \rightarrowtail Z$ has a deflation-inflation factorization. Since $\ker(h)=C\in \Ob({\cal A})$ and $\ker(h)=\ker(g)$, we conclude that $g$ is an admissible weak isomorphism. \item[Step 2': ] Let $f$ be an inflation. We obtain a commutative diagram: \[\xymatrix{ X\ar@{>->}[r]^f\ar@{->>}[d] & Y\ar[d]^g\ar@{->>}[r] & \coker(f)\ar@{.>}[d]^{\phi}\\ \im(gf)\ar@{>->}[r] & Z\ar@{->>}[r]\ar@{.>>}[d] & \coker(gf)\ar@{->>}[d]\\ & \coker(\phi)\ar@{=}[r] & \coker(\phi) }\] Here we used that the induced map $\phi$ lies in ${\cal A}$ and hence has a cokernel $\coker(\phi) \in {\cal A}$ (as ${\cal A}$ is abelian, see proposition \ref{proposition:AIsAbelian}); the map $Z\twoheadrightarrow \coker(\phi)$ is a deflation by axiom \ref{R1}. The upper-right square is a pushout by proposition \ref{proposition:MitchellPullbackPushout}. It follows that $Z\twoheadrightarrow \coker(\phi)$ is the cokernel of $g$. As $\phi$ is a morphism in the abelian category ${\cal A}$, it admits a deflation-inflation factorization $\coker(f)\twoheadrightarrow \im(\phi)\rightarrowtail \coker(gf)$. Taking the pullback of $\im(\phi)\rightarrowtail \coker(gf)$ along $Z\twoheadrightarrow \coker(gf)$ we obtain the commutative diagram: \[\xymatrix{ X\ar@{>->}[r]^f\ar@{->>}[d]_{\rotatebox{90}{$\sim$}} & Y\ar@{->>}[r]\ar@{.>}[d] & \coker(f)\ar@{->>}[d]_{\rotatebox{90}{$\sim$}}\\ \im(gf)\ar@{>->}[r]\ar@{=}[d] & P\ar@{->>}[r]\ar[d] & \im(\phi)\ar@{>->}[d]\\ \im(gf)\ar@{>->}[r] & Z\ar@{->>}[r] & \coker(gf) }\] By proposition \ref{proposition:PullbackOfInflation}, $P\rightarrowtail Z$ is an inflation. By proposition \ref{proposition:DeflationInflationFiveLemma}, the map $Y\rightarrow P$ is a deflation whose kernel belongs to ${\cal A}$. It follows that $g$ is an admissible weak isomorphism. \qedhere \end{enumerate} \end{proof} \subsection{The saturation property} We now show that the right multiplicative set $S_{\cal A}$ of weak isomorphisms satisfies the saturation property. Our proof is based on the two-out-of-three property (proposition \ref{proposition:2OutOf3}). \begin{lemma}\label{lemma:IdempotentsInKernel} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}$ be an idempotent complete deflation-percolating subcategory. Let $e\colon X \to X$ be any idempotent in ${\cal C}$. If $Q(e) = 0$, then $X \cong \ker(e) \oplus A$ for some $A \in {\cal A}$. \end{lemma} \begin{proof} This follows from the first part of the proof of \cite[lemma 1.17.6]{Schlichting04} and proposition \ref{proposition:InterpretationOfP4}. \end{proof} \begin{proposition}\label{proposition:Saturation} Let ${\cal C}$ be a deflation-exact category and let ${\cal A}$ be an admissibly deflation-percolating subcategory. The set $S_{{\cal A}}$ of weak isomorphisms is saturated. \end{proposition} \begin{proof} Let $f\colon Y\rightarrow Z$ be a map that descends to an isomorphism in $S_{{\cal A}}^{-1}{\cal C}$. By definition, there exists a map $(g,s)\in \Mor(S_{{\cal A}}^{-1}{\cal C})$ such that $(f,1) \circ (g,s) \sim (1,1)$, i.e.~there exists a commutative diagram: \[\xymatrix{ & Z'\ar[ld]_{s}\ar[rd]^{fg}&\\ Z\ar@{=}[rd] & M\ar[l]_{\sim}\ar[r]^{\sim}\ar[d]_{\rotatebox{90}{$\sim$}}\ar[u]^h & Z\\ & Z\ar@{=}[ru] & }\] Since $fgh\in S_{{\cal A}}$, we can take the pullback of $f$ along $fgh$ (see theorem \ref{theorem:WeakIsomorphismsEqualAAInverseIsomorphisms}).
We obtain the following commutative diagram \[\xymatrix{ & Y\ar[rr]^f&& Z\\ &&&\\ &P\ar[uu]^{\beta}_{\rotatebox{90}{$\sim$}}\ar[rr]^{\alpha}&&M\ar[uu]^{\rotatebox{90}{$\sim$}}_{f(gh)}\ar[lluu]_{gh}\\ M\ar@/^2pc/[uuur]^{gh}\ar@/_2pc/[rrru]_{1_M}\ar@{.>}[ru]^{\gamma}&&& }\] where the square is a pullback and $\gamma\colon M \to P$ is induced by the pullback property. Clearly, $\alpha$ is a retraction and $\gamma$ is a section. Since $Q(f)$ is invertible in $S_{{\cal A}}^{-1} {\cal C}$ and the localization functor commutes with pullbacks (see proposition \ref{proposition:BasicPropertiesOfLocalization}), we know that $Q(\alpha)$ is invertible in $S_{{\cal A}}^{-1} {\cal C}$ and that $Q(\alpha)^{-1} = Q(\gamma)$. It follows that the kernel of $\alpha$ is zero in $S_{{\cal A}}^{-1}{\cal C}$. This implies that $Q(1_P - \gamma \circ \alpha) = 0$. Applying lemma \ref{lemma:IdempotentsInKernel} to the idempotent $1_P - \gamma \circ \alpha$ shows that $P \cong K \oplus A$ with $K = \ker(1_P - \gamma \circ \alpha)$ and $A \in {\cal A}$. We infer that $\ker(\gamma \circ \alpha) = \ker(\alpha) = A$. As ${\cal C}$ satisfies \ref{R0*} (see remark \ref{remark:AbelianPercolatingAboutAxioms}) and $\alpha$ is a retraction, we may infer that $\alpha$ is a deflation in ${\cal C}$. It follows that $\alpha\in S_{{\cal A}}$. From the 2-out-of-3 property, it follows that $gh \in S_{\cal A}$ and consequently that $f \in S_{{\cal A}}$. \end{proof} \section{Quillen's obscure axiom under localizations}\label{section:StabilityQuillen'sObscureAxiom} Let ${\cal A}$ be a deflation-percolating subcategory of a strongly deflation-exact category ${\cal C}$. It is shown in theorem \ref{theorem:Maintheorem} that the localization ${\cal C} / {\cal A} = S_{{\cal A}}^{-1}{\cal C}$ is again a deflation-exact category. As will be illustrated in example \ref{example:MissingR3} below, even when ${\cal A}$ is an admissibly deflation-percolating subcategory, the category ${\cal C} / {\cal A}$ does not need to inherit axiom \ref{R3} from ${\cal C}$. In this section, we show that if ${\cal C}$ is \emph{weakly idempotent complete} (thus, if every retraction $X \to Y$ admits a kernel) and satisfies axiom \ref{R3}, then the same holds for ${\cal C} / {\cal A}$ whenever ${\cal A}$ is a strongly deflation-percolating subcategory (see definition \ref{definition:AdditionalPercolatingDefinitions}). We start by reformulating the conditions on ${\cal C}$. \begin{proposition}\label{proposition:wicR3} Let ${\cal C}$ be a deflation-exact category. The following are equivalent: \begin{enumerate} \item ${\cal C}$ is weakly idempotent complete and satisfies axiom \ref{R3}, \item If $f\colon X \to Y$ and $g\colon Y \to Z$ are morphisms in ${\cal C}$ such that $gf$ is a deflation, then $g$ is a deflation. \end{enumerate} \end{proposition} \begin{proof} This follows easily from \cite[proposition 6.4]{BazzoniCrivei13}. \end{proof} \begin{theorem}\label{theorem:StabilityOfStrongCondition} Let ${\cal C}$ be a weakly idempotent complete strongly deflation-exact category (thus, satisfying the equivalent conditions in proposition \ref{proposition:wicR3}) and let ${\cal A}$ be a strongly deflation-percolating subcategory. The localization ${\cal C}[S_{\cal A}^{-1}]$ is also a weakly idempotent complete strongly deflation-exact category. \end{theorem} \begin{proof} By theorem \ref{theorem:Maintheorem}, we know that $S_{{\cal A}}^{-1}{\cal C}$ is a deflation-exact category. We now show that if $f\colon X\rightarrow Y$ and $g\colon Y\rightarrow Z$ are two maps in $S_{{\cal A}}^{-1}{\cal C}$ such that $gf$ is a deflation, then $g$ is a deflation.
Consider a commutative diagram \[\xymatrix{ X \ar@{=}[r]\ar[d]^f &X\ar@{->>}[d]\\ Y\ar[r]^g & Z }\] in $S_{{\cal A}}^{-1}{\cal C}$. The diagram lifts to a diagram \[\xymatrix@!@C=0.3em@R=0.3em{ X\ar@{=}[rr] & & X&\ar[l]X'''\ar[r]^{\sim}&\overline{X}\ar@{->>}[dd]\\ X' \ar[u]^{\rotatebox{90}{$\sim$}}\ar[d]& & X''\ar[u]^{\rotatebox{90}{$\sim$}}\ar[d]&&\\ Y & Y'\ar[l]_{\sim}\ar[r] & Z&\ar[l]_{\sim}Z'\ar[r]&\overline{Z} }\] in ${\cal C}$. We claim that we can choose a lift \[\xymatrix{ \widetilde{X} \ar[r]^{\sim}\ar[d]^{f'} &\overline{X}\ar@{->>}[d]^h\\ \widetilde{Y}\ar[r]^{g'} & \overline{Z} }\] in ${\cal C}$ such that this diagram descends to $gf$ under the localization functor $Q$. Indeed, applying axiom \ref{RMS2} four times we obtain the diagram \[\xymatrix@!@C=0.2em@R=0.2em{ X\ar@{=}[rr] & & X & X'''\ar[l]\ar[rr]^{\sim} & & \overline{X}\ar@{->>}[dddd]\\ & & & \ar@{.>}[u]^{\rotatebox{90}{$\sim$}}\ar@{.>}[ld] & &\\ X'\ar[uu]^{\rotatebox{90}{$\sim$}}\ar[dd]& \ar@{.>}[l]X_1\ar@{.>}[r]^{\sim} & X''\ar[uu]^{\rotatebox{90}{$\sim$}}\ar[dd] & &\ar@{.>}[lu]_{\rotatebox{135}{$\sim$}}\ar@{.>}[ld]X_2 &\\ && &\ar@{.>}[lu]_{\rotatebox{135}{$\sim$}}\ar@{.>}[d] & &\\ Y & Y'\ar[l]_{\sim}\ar[r] & Z & Z'\ar[l]_{\sim}\ar[rr] && \overline{Z} }\] which descends to a commutative diagram in $S_{{\cal A}}^{-1}{\cal C}$. Rearranging the diagram and applying axiom \ref{RMS2} twice we obtain the diagram: \[\xymatrix@!@C=0.2em@R=0.5em{ X'' &&&&X_2\ar[d]_{\rotatebox{90}{$\sim$}}\ar[llll]\\ X_1\ar[u]^{\rotatebox{90}{$\sim$}}\ar[d] & & X_3 \ar@{.>}[ll]\ar@{.>}[rru]^{\rotatebox{26.5}{$\sim$}}&&\overline{X}\ar@{->>}[d]\\ Y & Y'\ar[l]_{\sim}\ar[r] & Z & Z'\ar[l]_{\sim}\ar[r] & \overline{Z}\\ && \widetilde{Y}\ar@{.>}[lu]_{\rotatebox{135}{$\sim$}}\ar@{.>}[ru] && }\] Applying axiom \ref{RMS2} to $X_3\rightarrow Y$ along $\widetilde{Y}\xrightarrow{\sim} Y$ we obtain the desired lift: \[\xymatrix{ \widetilde{X} \ar[r]^{\sim}\ar[d]^{f'} &\overline{X}\ar@{->>}[d]^{h}\\ \widetilde{Y}\ar[r]^{g'} & \overline{Z} }\] This shows the claim. Hence, it suffices to show that, given a commutative diagram in ${\cal C}$: \[\xymatrix{ X\ar[r]^{\sim}_s\ar[d]^f & X'\ar@{->>}[d]^h\\ Y\ar[r]^g&Z }\] the morphism $g$ descends to a deflation in $S_{{\cal A}}^{-1}{\cal C}$. Write $k\colon K\rightarrowtail X'$ for the kernel of $h$. By the lifting lemma \ref{lemma:LiftingConflations}, there exists a commutative diagram \[\xymatrix{ \overline{K}\ar[rr]^{\sim}\ar@{>->}[d]_{\overline{k}} && K\ar@{>->}[d]^k\\ \overline{X}\ar@{->>}[d]_{\overline{h}}\ar[r]_t\ar@/^2pc/[rr]^{\sim} & X\ar[r]^{\sim}_s\ar[rd]_{gf} & X'\ar@{->>}[d]^{h}\\ \overline{Z}\ar[rr]^{\sim}_u && Z }\] Since ${\cal A}$ is strongly right filtering in ${\cal C}$, proposition \ref{proposition:MinimalConditionsRMS} yields that pullbacks along weak isomorphisms exist. Hence we obtain the following commutative diagram: \[\xymatrix{ \overline{X}\ar@/^2pc/[rrd]^t\ar@{.>}[rd]^{t'}\ar@/_2pc/@{->>}[rddd]_{\overline{h}} &&\\ & P_2\ar[r]^{\sim}_{u''}\ar[d]_{f'} & X\ar[d]^f\\ & P_1\ar[r]^{\sim}_{u'}\ar[d]_{g'} & Y\ar[d]^g\\ &\overline{Z}\ar[r]^{\sim}_{u} & Z }\] Here the two squares are pullback squares and thus the rectangle $P_2X\overline{Z}Z$ is a pullback by the pullback lemma. The map $t'$ is induced by the pullback property. Note that $g'f't'=\overline{h}$ is a deflation in ${\cal C}$. By axiom \ref{R3} and weak idempotent completeness, $g'$ is a deflation in ${\cal C}$. Clearly $Q(g')=Q(g)$ in $S_{{\cal A}}^{-1}{\cal C}$ and thus $g$ descends to a deflation as required. 
Note that the above property automatically implies that $S_{{\cal A}}^{-1}{\cal C}$ is weakly idempotent complete (see proposition \ref{proposition:wicR3}); this completes the proof.
\end{proof}
The following example illustrates that the condition of ${\cal C}$ being weakly idempotent complete cannot be dropped.
\begin{example}\label{example:MissingR3} Let $Q$ be the quiver $\xymatrix@1{c \ar[r]^\alpha & b\ar[r]^\beta & a}$ with relation $\beta\alpha = 0$. Let $k$ be a field and write $\rep_k Q$ for the category of finite-dimensional $k$-representations of $Q$. We write $S_a, S_b,$ and $S_c$ for the simple representations associated to the corresponding vertices, and $P_a, P_b,$ and $P_c$ for their projective covers (note that $P_a \cong S_a$). The Auslander-Reiten quiver of $\rep_k Q$ is given by \[\xymatrix{ &P_b \ar[rd]^{g} && P_c \ar[rd]^{l} \\ S_a \ar[ru]^{f} && S_b \ar[ru]^{h} && S_c}\] Let ${\cal C}$ be the full subcategory of $\rep_k Q$ given by all objects not isomorphic to $S_a^{\oplus n} \oplus S_b$ (for any $n \geq 0$). As ${\cal C}$ is an extension-closed full subcategory of an abelian category, it is endowed with a natural exact structure. Let ${\cal A} = \add\{S_a\}$ be the additive closure of $S_a$ in ${\cal C}$. We have that ${\cal A} \subseteq {\cal C}$ is an admissibly deflation-percolating subcategory. Following theorem \ref{theorem:Maintheorem}, we can consider the quotient $Q\colon {\cal C} \to {\cal C}/{\cal A}$. Note that, as an additive category, ${\cal C} / {\cal A}$ is generated by $Q(P_b), Q(P_c),$ and $Q(S_c)$. More specifically, the category ${\cal C} / {\cal A}$ is equivalent (as an additive category) to $\rep_k A_2$.
We claim that the quotient ${\cal C} / {\cal A}$ does not satisfy axiom \ref{R3}. Indeed, consider the sequences \[\mbox{$\xymatrix@1{Q(P_b) \ar[r]^{Q(hg)} & Q(P_c) \ar[r]^{Q(l)} & Q(S_c)}$ and $\xymatrix@1{Q(P_b) \ar@{=}[r] & Q(P_b) \ar[r]& 0}$}\] in ${\cal C}/{\cal A}$. One can verify that the first sequence is not a conflation (in particular, the map $Q(l)\colon Q(P_c) \to Q(S_c)$ is not a deflation). However, the direct sum of these two sequences is a conflation: \[\xymatrix{Q(P_b \oplus P_b) \ar@{>->}[rr]^{Q\begin{psmallmatrix} 1 & 0 \\ 0 & hg \end{psmallmatrix}} && Q(P_b \oplus P_c) \ar@{->>}[rr]^{Q \begin{psmallmatrix} 0 & l \end{psmallmatrix}} && Q(S_c)}\] (this uses that $Q(P_b \oplus P_b) \cong Q(P_b \oplus S_b)$). It follows from \cite[proposition 5.9]{BazzoniCrivei13} that ${\cal C} / {\cal A}$ does not satisfy axiom \ref{R3}.
Note that ${\cal C}$ satisfies axiom \ref{R3} (as ${\cal C}$ is exact) and that ${\cal A}$ is strongly (even admissibly) deflation-percolating in ${\cal C}$, but that ${\cal C}$ is not weakly idempotent complete (as the retraction $S_b \oplus P_b \to P_b$ has no kernel).
\end{example}
\section{Examples and applications}\label{section:Examples}
In this section, we give examples of the localizations studied in this paper. We start with a comparison with \cite{Cardenas98, Schlichting04}. Next, we show that a deflation-exact category with \ref{R0*} is a category with fibrations (thus, in particular, a coWaldhausen category). In this way, we have a natural definition of the $K$-theory of a deflation-exact category. We show that the quotient behaves as expected on the level of the Grothendieck groups. We then proceed by considering some more specific examples of percolating subcategories.
In \S\ref{subsection:Torsion}, we consider torsion theories in exact categories and give sufficient conditions for the torsion-free part to be a deflation-percolating subcategory or a right special filtering subcategory. In \S\ref{subsection:QuasiAbelian}, we consider the case of a one-sided quasi-abelian category (also called a one-sided almost abelian category) and show that the axioms of a percolating subcategory simplify in this setting. Finally, we consider two more explicit examples. The first example (\S\ref{subsection:LCA}) concerns the category $R-\LC$ of locally compact modules over a discrete ring $R$. It was shown in \cite{Braunling20} that the subcategory $R-\LC_{\mathsf{D}}$ of discrete modules is, in general, neither a left nor a right special filtering subcategory. We show that $R-\LC_{\mathsf{D}}$ is a deflation-percolating subcategory so that we can consider the quotient category $R-\LC / R-\LC_{\mathsf{D}}$. Furthermore, we show that the subcategory of totally disconnected locally compact abelian groups is a deflation-percolating subcategory. In the second example (\S\ref{Subsection:GliderExample}), we give an example coming from the theory of glider representations. Here, we show explicitly that the quotient category is not inflation-exact (and hence not exact).
\subsection{Comparison to localization theories of exact categories}
Localizations of exact categories have been considered with an eye on $K$-theoretic applications in \cite{Cardenas98, Schlichting04}. We now compare these notions with the notions introduced in this paper. The following figure provides an overview.
\begin{figure} \caption{Different types of subcategories of an exact category} \label{figure:ExactOverview} \end{figure}
\subsubsection{Cardenas' localization theory}
The localization theory of exact categories developed by Cardenas in \cite{Cardenas98} is recovered completely by the framework of localizations with respect to percolating subcategories. We recover the following result from \cite{Cardenas98}:
\begin{theorem} Let ${\cal C}$ be an exact category and let ${\cal A}$ be a full subcategory satisfying axioms \ref{A1}, \ref{A2} and the dual of \ref{A2}. There exists an exact category ${\cal C}/{\cal A}$ and an exact functor $Q\colon {\cal C}\rightarrow {\cal C}/{\cal A}$ satisfying the following universal property: for any exact category ${\cal D}$ and exact functor $F\colon {\cal C}\rightarrow {\cal D}$ such that $F(A)\cong 0$ for all $A\in {\cal A}$, there exists a unique exact functor $G\colon {\cal C}/{\cal A}\rightarrow {\cal D}$ such that $F=G\circ Q$.
The set $S_{\cal A}$ of weak equivalences is a (left and right) multiplicative set, and the quotient ${\cal C} / {\cal A}$ is equivalent to the localization ${\cal C}[S_{\cal A}^{-1}].$
\end{theorem}
\begin{proof} Since ${\cal C}$ is exact, the subcategory ${\cal A}$ automatically satisfies axiom \ref{A3}. Hence, ${\cal A}$ is both an admissibly inflation- and deflation-percolating subcategory. By theorem \ref{theorem:WeakIsomorphismsEqualAAInverseIsomorphisms} and its dual, the set $\widehat{S_{{\cal A}}}$ is a multiplicative system. By theorem \ref{theorem:Maintheorem} and its dual, as well as proposition \ref{proposition:QuotientInConflationCategories}, the category ${\cal C}[S_{{\cal A}}^{-1}]$ is both inflation-exact and deflation-exact and the canonical localization functor $Q\colon {\cal C}\rightarrow {\cal C}[S_{{\cal A}}^{-1}]$ is exact. Moreover, ${\cal C}[S_{{\cal A}}^{-1}]\cong {\cal C}/{\cal A}$ and $Q$ satisfies the desired universal property.
\end{proof} \subsubsection{Schlichting's localization theory}\label{subsubsection} We recall the notion of an s-filtering subcategory of an exact category introduced by Schlichting (see \cite[definition~1.5]{Schlichting04}). We use the reformulation given in \cite[definition~2.12 and proposition~A.2]{BraunlingGroechenigWolfson16}. \begin{definition}\label{definition:SpecialFiltering} Let ${\cal C}$ be an exact category and let ${\cal A}$ be a full subcategory. The subcategory ${\cal A}$ is called \emph{right special} if for every inflation $A\rightarrowtail X$ with $A\in {\cal A}$ there exists a commutative diagram \[\xymatrix{ A\ar@{>->}[r]\ar@{=}[d] & X\ar@{->>}[r]\ar[d] & Y\ar[d]\\ A\ar@{>->}[r] & B\ar@{->>}[r] & C }\] such that the rows are conflations in ${\cal C}$ and the lower row belongs to ${\cal A}$. Dually, ${\cal A}$ is called \emph{left special} if ${\cal A}^{op}$ is right special in ${\cal C}^{op}$.\\ The subcategory ${\cal A}$ is called \emph{right s-filtering} if it is both right filtering, i.e.~satisfies axioms \ref{P1} and \ref{P2}, and right special in ${\cal C}$. \end{definition} The following results about Schlichting's localization theory can be found in \cite[propositions~1.16 and 2.6]{Schlichting04}): \begin{theorem}\label{theorem:Schlichting'sMainTheorem} Let ${\cal C}$ be an exact category and let ${\cal A}$ be a right s-filtering subcategory. The localization functor $Q\colon {\cal C}\rightarrow S_{{\cal A}}^{-1}{\cal C}$ endows $S_{{\cal A}}^{-1}{\cal C}$ with the structure of an exact category. The functor $Q$ is universal among exact functors from ${\cal C}$ to exact categories that vanish on ${\cal A}$, i.e.~${\cal C}/{\cal A} \cong S_{{\cal A}}^{-1}{\cal C}$.\\ Moreover, if ${\cal A}$ is idempotent complete, the sequence \[\Db({\cal A})\rightarrow \Db({\cal C})\rightarrow \Db({\cal C}/{\cal A})\] is a Verdier localization sequence. \end{theorem} The localization theory developed in \cite{Schlichting04} is compatible with the localization theory with respect to percolating subcategories in the following sense. \begin{proposition}\label{proposition:RecoveredSchlichtingsFramework} Let ${\cal C}$ be an exact category and ${\cal A}\subseteq {\cal C}$ a full subcategory. If ${\cal A}$ is a right s-filtering subcategory, then ${\cal A}$ is a deflation-percolating subcategory. \end{proposition} \begin{proof} As ${\cal A}$ is a right filtering subcategory of ${\cal C}$, axioms \ref{P1} and \ref{P2} are satisfied. Since ${\cal C}$ is exact, axiom \ref{A3} is satisfied (and thus axiom \ref{P3} is satisfied as well). It remains to verify axiom \ref{P4}. To that end, consider a commutative diagram: \[\xymatrix{ A\ar@{>->}[rd]^i\ar@/^2pc/[rrrd]^0 &&&\\ & X\ar[rr]^f\ar@{->>}[rd]_p^{\rotatebox{135}{$\sim$}}\ar[dd]_{\rho}&& Y\\ && Q\ar@{.>}[ru]^h & \\ & B\ar@/_2pc/[rruu]_g&& }\] We need to show that $h$ factors through ${\cal A}$. By axiom \ref{P2}, we may, without loss of generality, assume that $\rho$ is a deflation with kernel $\iota\colon K\stackrel{\sim}{\rightarrowtail} X$. Since $p\circ \iota\in S_{{\cal A}}$, \cite[lemma~1.17.(3)]{Schlichting04} implies that there exists an ${\cal A}^{-1}$-inflation $t\colon U\stackrel{\sim}{\rightarrowtail}K$ such that $p\iota t$ is an ${\cal A}^{-1}$-inflation as well. Lemma \ref{lemma:CompositionOfAAInflations} shows that the composition $\iota \circ t$ is an ${\cal A}^{-1}$-inflation. 
As ${\cal C}$ is an exact category, the $3\times 3$-lemma (see \cite[corollary 3.6]{Buhler10}) yields the following commutative diagram: \[\xymatrix{ 0\ar@{>->}[r]\ar@{>->}[d]^{\rotatebox{90}{$\sim$}} & U\ar@{=}[r]\ar@{>->}[d]^{\rotatebox{90}{$\sim$}}_{\iota t} & U\ar@{>->}[d]^{\rotatebox{90}{$\sim$}} & \\ A\ar@{>->}[r]_i\ar@{=}[d] & X\ar@{->>}[r]_p^{\sim}\ar@{->>}[d]_{q'} & Q\ar@{->>}[d]_q\ar[r]_h & Y\ar@{=}[dd]\\ A\ar@{>->}[r]^{i'} & B'\ar@{->>}[r]^{p'}\ar@{.>}[d]_l& C\ar@{.>}[rd]^{\exists ! u} &\\ & B\ar[rr]^g && Y\\ }\] Here, $q'$ is the cokernel of $\iota t$ and $l$ is the induced map such that $lq'=\rho$. Note that $(gl)i'=glq'i=g\rho i=fi=0$, hence $gl$ factors through $p'$ via a unique induced map $u\colon C\to Y$ satisfying $up'=gl$. Moreover, as $p$ is epic and $hp=f=g\rho=q'lg=up'q'=(uq)p$, we conclude that $h=uq$. This shows that $h$ factors through $C\in \Ob({\cal A})$, as required. \end{proof} \begin{remark} Let ${\cal C}$ be an exact category and let ${\cal A}$ be a right s-filtering subcategory. By theorem \ref{theorem:Schlichting'sMainTheorem}, the category $S_{{\cal A}}^{-1}{\cal C}$ is an exact category. Using the above proposition and theorem \ref{theorem:Maintheorem}, we may only conclude that $S_{{\cal A}}^{-1}{\cal C}$ is a deflation-exact category. In section \ref{Subsection:GliderExample} we will show that the localization of an exact category with respect to a one-sided percolating subcategory need not be exact. \end{remark} \begin{remark} In subsequent work \cite{HenrardvanRoosmalen19b}, we show that, given a deflation-exact category ${\cal C}$ and a deflation-percolating subcategory ${\cal A}$, one still obtains a Verdier localization sequence \[\DAb({\cal C})\rightarrow \Db({\cal C})\rightarrow \Db({\cal C}/{\cal A}).\] Here $\DAb({\cal C})$ denotes the thick triangulated subcategory of $\Db({\cal C})$ generated by ${\cal A}$ (the derived category of a deflation-exact category is defined in \cite{BazzoniCrivei13}). \end{remark} \subsection{Waldhausen categories and the Grothendieck group} Given a deflation-exact category ${\cal C}$ and an admissibly deflation-percolating subcategory ${\cal A}$, one can encode the localization ${\cal C}/{\cal A}$ into a \emph{coWaldhausen category}. In this way, one can study the K-theory of ${\cal C}/{\cal A}$. In particular one obtains an immediate description of the Grothendieck group of ${\cal C}/{\cal A}$ (see proposition \ref{proposition:WaldhausenGrothendieck}). We refer the reader to \cite{Weibel13} for more details. \begin{definition} Let ${\cal C}$ be a category and let $\cofib({\cal C})$ be a set of morphisms in ${\cal C}$ called \emph{cofibrations} (indicated by arrows $\rightarrowtail$). The pair $({\cal C},\cofib({\cal C}))$ is called a \emph{category with cofibrations} if the following axioms are satisfied: \begin{enumerate}[label=\textbf{W\arabic*},start=0] \item\label{W0} Every isomorphism is a cofibration and cofibrations are closed under composition. \item\label{W1} The category ${\cal C}$ has a zero object $0$ and for each $X\in {\cal C}$ the unique map $0\rightarrowtail X$ is a cofibration. \item\label{W2} Pushouts along cofibrations exist and cofibrations are stable under pushouts. \end{enumerate} Axioms \ref{W1} and \ref{W2} yield the existence of cokernels of cofibrations, thus for every cofibration $X\rightarrowtail Y$ there is a canonical \emph{cofibration sequence} $X\rightarrowtail Y\twoheadrightarrow Z$. A \emph{category with fibrations} is defined dually. 
A fibration is depicted by $\twoheadrightarrow$ and the set of fibrations is denoted by $\fib({\cal C})$.
\end{definition}
\begin{remark}\label{remark:EquivalenceRightExactAndCategoryWithFibrations} An inflation-exact category with \ref{L0*} is a category with cofibrations and, dually, a deflation-exact category with \ref{R0*} is a category with fibrations. \end{remark}
\begin{definition} Let $({\cal C},\cofib({\cal C}))$ be a category with cofibrations and let $w{\cal C}$ be a set of morphisms in ${\cal C}$ called \emph{weak equivalences} (indicated by arrows endowed with $\sim$). The triple $({\cal C},\cofib({\cal C}),w{\cal C})$ is called a \emph{Waldhausen category} if $w{\cal C}$ contains all isomorphisms and is closed under composition and the following axiom (called the \emph{gluing axiom}) is satisfied:
\begin{enumerate}[label=\textbf{W\arabic*},start=3]
\item\label{W3} For any commutative diagram of the form \[\xymatrix{ Z\ar[d]_{\rotatebox{90}{$\sim$}} & X\ar[d]_{\rotatebox{90}{$\sim$}}\ar@{>->}[r]\ar[l] & Y\ar[d]_{\rotatebox{90}{$\sim$}}\\ Z'&X'\ar@{>->}[r]\ar[l]&Y' }\] the induced map $Z\cup_X Y\rightarrow Z'\cup_{X'}Y'$ between the pushouts is a weak equivalence.
\end{enumerate}
A \emph{coWaldhausen category} is defined dually.
\end{definition}
\begin{definition} Let $({\cal C},\cofib({\cal C}),w{\cal C})$ be a Waldhausen category. The Grothendieck group $K_0({\cal C})$ (often denoted as $K_0(w{\cal C})$) is defined as the free abelian group generated by the isomorphism classes of objects in ${\cal C}$ modulo the relations:
\begin{enumerate}
\item $[X]=[Y]$ if there is a weak equivalence $X\xrightarrow{\sim}Y$.
\item $[Z]=[X]+[Y]$ for every cofibration sequence $X\rightarrowtail Z\twoheadrightarrow Y$.
\end{enumerate}
The Grothendieck group of a coWaldhausen category is defined dually.
\end{definition}
\begin{proposition} Let ${\cal C}$ be a deflation-exact category satisfying axiom \ref{R0*} and let ${\cal A}$ be a deflation-percolating subcategory. Let $\fib({\cal C})$ be the set of deflations in ${\cal C}$ and let $w{\cal C}$ be the saturated closure of the set of weak isomorphisms with respect to the subcategory ${\cal A}$. The triple $({\cal C},\fib({\cal C}),w{\cal C})$ is a coWaldhausen category.
\end{proposition}
\begin{proof} By assumption the category ${\cal C}$ satisfies axiom \ref{R0*}. By remark \ref{remark:EquivalenceRightExactAndCategoryWithFibrations}, the pair $({\cal C},\fib({\cal C}))$ is a category with fibrations. We now show that $w{\cal C}$ satisfies the gluing axiom. Consider a commutative diagram: \[\xymatrix{ Z\ar[r]\ar[d]_{\rotatebox{90}{$\sim$}} & X\ar[d]_{\rotatebox{90}{$\sim$}} & \ar@{->>}[l]\ar[d]_{\rotatebox{90}{$\sim$}} Y\\ Z'\ar[r] & X'&\ar@{->>}[l] Y' }\] Here, arrows endowed with $\sim$ are weak equivalences. By axiom \ref{R0*} and the dual of \cite[proposition~5.7]{BazzoniCrivei13} we obtain a commutative diagram: \[\xymatrix{ Z\cap_{X}Y \ar@{>->}[r]\ar@{.>}[d]& Z\oplus Y\ar@{->>}[r]\ar[d]_{\rotatebox{90}{$\sim$}} & X\ar[d]_{\rotatebox{90}{$\sim$}}\\ Z'\cap_{X'}Y'\ar@{>->}[r] & Z'\oplus Y'\ar@{->>}[r] & X' }\] As the localization $Q\colon {\cal C} \to {\cal C} / {\cal A}$ commutes with kernels, the induced map $Z\cap_{X}Y\rightarrow Z'\cap_{X'}Y'$ descends to an isomorphism. It follows that the triple $({\cal C},\fib({\cal C}),w{\cal C})$ is a coWaldhausen category.
\end{proof} \begin{proposition}\label{proposition:WaldhausenGrothendieck} Let ${\cal C}$ be a deflation-exact category satisfying axiom \ref{R0*} and let ${\cal A}$ be a deflation-percolating subcategory. Let $\fib({\cal C})$ be the set of deflations in ${\cal C}$ and let $w{\cal C}$ be the saturated closure of the set of weak isomorphisms with respect to the subcategory ${\cal A}$. The quotient functor $Q\colon {\cal C} \to {\cal C} / {\cal A}$ induces an isomorphism $K_0(w{\cal C})\cong K_0({\cal C} / {\cal A})$, where $K_0({\cal C}/{\cal A})$ is defined in the usual manner (the weak equivalences on ${\cal C} / {\cal A}$ are the isomorphisms). \end{proposition} \begin{proof} By theorem \ref{theorem:Maintheorem}, the quotient category ${\cal C}/{\cal A}$ is a deflation-exact category. Note that $\Ob({\cal C})=\Ob({\cal C}/{\cal A})$. One readily verifies that the map $f\colon K_0(w{\cal C})\rightarrow K_0({\cal C}/{\cal A})$, which is the identity on objects, is a group morphism. Let $F({\cal C} / {\cal A})$ and $F(w{\cal C})$ be the free abelian groups generated by the objects of ${\cal C} / {\cal A}$ and ${\cal C}$, respectively. As $\Ob({\cal C} / {\cal A}) = \Ob({\cal C})$, we can consider the identity map $\bar{g}\colon F({\cal C}/{\cal A})\rightarrow F(w{\cal C})$. Let $X\rightarrowtail Y\twoheadrightarrow Z$ be a conflation in ${\cal C}/{\cal A}$. By definition \ref{definition:LocalizationConflation}, this means that there is a diagram \[\xymatrix{ \overline{X}\ar@{>->}[rr] && \overline{Y}\ar@{->>}[rr] && \overline{Z}\\ \ar[u]^{\rotatebox{90}{$\sim$}}\ar[d]_{\rotatebox{90}{$\sim$}}&&\ar[u]^{\rotatebox{90}{$\sim$}}\ar[d]_{\rotatebox{90}{$\sim$}}&&\ar[u]^{\rotatebox{90}{$\sim$}}\ar[d]_{\rotatebox{90}{$\sim$}}\\ X &\ar[l]_{\sim}\ar[r]& Y &\ar[l]_{\sim}\ar[r]& Z }\] in ${\cal C}$ which descends to a commutative diagram in ${\cal C}/{\cal A}$ and such that the vertical arrows descend to isomorphisms. Hence $[X]=[\overline{X}]$, $[Y]=[\overline{Y}]$ and $[Z]=[\overline{Z}]$ in $K_0(w{\cal C})$. As $\overline{X}\rightarrowtail \overline{Y}\twoheadrightarrow \overline{Z}$ is a conflation in ${\cal C}$, we obtain $[Y]=[X]+[Z]$ in $K_0(w{\cal C})$. It follows that $\bar{g}$ induces a group homomorphism $K_0({\cal C}/{\cal A})\rightarrow K_0(w{\cal C})$. Clearly, $f$ and $g$ are inverse to each other. We conclude that $K_0({\cal C}/{\cal A})\cong K_0(w{\cal C})$. \end{proof} \subsection{Torsion theories in one-sided exact categories}\label{subsection:Torsion} In \cite{BournGran06}, a definition of a torsion theory is given for general homological categories. We restrict ourselves to the context of deflation-exact categories and relate them to deflation-percolating subcategories. \begin{definition} Let ${\cal C}$ be a one-sided exact category. A torsion theory in ${\cal C}$ is a pair of full replete (=closed under isomorphisms) subcategories $({\cal T},{\cal F})$ such that \begin{enumerate} \item $\Hom(T,F)=0$ for all $T\in {\cal T}$ and all $F\in {\cal F}$, \item for any object $M\in {\cal C}$ there exists a conflation \[T \rightarrowtail M \twoheadrightarrow F\] in ${\cal C}$ with $T\in {\cal T}$ and $F\in {\cal F}$. \end{enumerate} We refer to the category ${\cal F}$ as the torsion-free subcategory and to ${\cal T}$ as the torsion category. A torsion theory $({\cal T},{\cal F})$ is called \emph{hereditary} if ${\cal T}$ is a Serre subcategory of ${\cal C}$ and \emph{cohereditary} if ${\cal F}$ is a Serre subcategory. 
\end{definition} \begin{lemma}\label{lemma:TorsionSequence} Let ${\cal C}$ be a one-sided exact category and let $({\cal T},{\cal F})$ be a torsion theory. For any object $M\in {\cal C}$, there is a conflation \[\xymatrix{M_T\ar@{>->}[r] & M\ar@{->>}[r] & M_F,}\] with $ M_T \in {\cal T}$ and $M_F\in {\cal F}$, which is unique up to isomorphism. \end{lemma} \begin{proof} The proof from \cite[lemma 4.2]{BournGran06} carries over to this setting. \end{proof} \begin{proposition}\label{proposition:TorsionPairAdjoints} Let ${\cal C}$ be a conflation category, let $({\cal T},{\cal F})$ be a torsion theory in ${\cal C}$. \begin{enumerate} \item The inclusion functor $i\colon {\cal T}\rightarrow {\cal C}$ has a right adjoint $R$ and the inclusion functor $j\colon {\cal F}\rightarrow {\cal C}$ has a left adjoint $L$. \item ${\cal T} = \{C \in \Ob({\cal C}) \mid \Hom(C, {\cal F}) = 0\},$ \item ${\cal F} = \{C \in \Ob({\cal C}) \mid \Hom({\cal T}, C) = 0\}.$ \end{enumerate} \end{proposition} \begin{proof} The right adjoint to the embedding $i\colon {\cal T} \to {\cal C}$ is given by $M \mapsto M_T$. The left adjoint to the embedding $j\colon {\cal F}\rightarrow {\cal C}$ is given $M \mapsto M_F$. It follows from the definition of a torsion theory that ${\cal T} \subseteq \{C \in \Ob({\cal C}) \mid \Hom(C, {\cal F}) = 0\}$. For the other inclusion, let $C \in \Ob({\cal C})$ such that $\Hom(C, {\cal F})=0$. As there is a deflation $C \twoheadrightarrow C_F$, we find $C_F = 0$ so that $C \cong C_T \in {\cal T}$, as required. The proof that ${\cal F} = \{C \in \Ob({\cal C}) \mid \Hom({\cal T}, C) = 0\}$ is similar. \end{proof} \begin{corollary}\label{corollary:TorsionTheories} Let ${\cal C}$ be a conflation category, let $({\cal T},{\cal F})$ be a torsion theory in ${\cal C}$. The subcategory ${\cal F}$ is closed under extensions and subobjects in ${\cal C}$. Dually, ${\cal T}$ is closed under extensions and (epimorphic) quotient objects in ${\cal C}$. \end{corollary} \begin{remark} Let ${\cal C}$ be a deflation-exact category. The conflation structure of ${\cal C}$ induces a conflation structure on the extension-closed subcategories ${\cal T}$ and ${\cal F}$. Moreover, both ${\cal T}$ and ${\cal F}$ are deflation-exact. \end{remark} The following proposition uses notation from proposition \ref{proposition:TorsionPairAdjoints}. \begin{proposition}\label{proposition:TorsionfreeIsPercolating} Let ${\cal C}$ be an exact category. Let $({\cal T}, {\cal F})$ be a cohereditary torsion pair in ${\cal C}$. \begin{enumerate} \item The subcategory ${\cal F} \subseteq {\cal C}$ is a right filtering subcategory. \item\label{enumerate:TorsionfreeIsPercolating} If for every inflation $f\colon A \rightarrowtail X$ (with $A \in {\cal F}$), the morphism $jL(f)\colon A \to X_F$ has a cokernel which lies in ${\cal F}$, then ${\cal F}$ is a deflation-percolating subcategory of ${\cal C}$. \item If $L\colon {\cal C} \to {\cal F}$ (or, equivalently, $jL\colon {\cal C} \to {\cal C}$) is a conflation-exact functor, then ${\cal F}$ is a right s-filtering subcategory. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item As $({\cal T}, {\cal F})$ is a cohereditary torsion pair in ${\cal C}$, we know that ${\cal F}$ satisfies axiom \ref{P1}. We show that axiom \ref{P2} holds. Let $f\colon X\rightarrow F$ be a morphism with $F\in {\cal F}$. 
By lemma \ref{lemma:TorsionSequence} we obtain the commutative diagram: \[\xymatrix{ X_T\ar@{>->}[d]\ar[rd]^0 & \\ X\ar[r]^{f}\ar@{->>}[d] & F\\ X_F\ar@{.>}[ru] & }\] The composition $X_T\rightarrowtail X\xrightarrow{f} F$ is zero since $\Hom({\cal T},{\cal F})=0$. It follows that $f$ factors through the deflation $X\twoheadrightarrow X_F$. This shows that axiom \ref{P2} is satisfied.
\item By the above, ${\cal F}$ is right filtering. As ${\cal C}$ is exact, axiom \ref{P3} is automatically satisfied. Axiom \ref{P4} follows immediately from proposition \ref{proposition:P4Criterion}.
\item We only need to verify that ${\cal F} \subseteq {\cal C}$ is right special. Let $f\colon A \rightarrowtail X$ be any inflation with $A \in {\cal F}$. As $L$ is conflation-exact, we know that the composition $jL(f)\colon A \rightarrowtail X \twoheadrightarrow X_F$ is an inflation. This establishes that ${\cal F} \subseteq {\cal C}$ is right special.\qedhere
\end{enumerate}
\end{proof}
\begin{definition}\label{definition:RightConflationExact} Let $F\colon {\cal C} \to {\cal D}$ be a functor between conflation categories. We say that $F$ is \emph{right conflation-exact} if, for any conflation $X \stackrel{f}{\rightarrowtail} Y \stackrel{g}{\twoheadrightarrow} Z$, we have that $F(g)\colon F(Y) \to F(Z)$ is a deflation and $F(g) = \coker F(f).$ A \emph{left conflation-exact functor} is defined dually.
\end{definition}
\begin{remark} The notions of a right `exact' functor \cite[\S1.3]{Rosenberg11} and a sequentially right exact functor \cite[definition 3.1]{PeschkeVanderLinden16} are special cases of right conflation-exact functors. \end{remark}
\begin{corollary}\label{corollary:RightConflationExact} Let ${\cal C}$ be an exact category. Let $({\cal T}, {\cal F})$ be a cohereditary torsion pair in ${\cal C}$. If $jL\colon {\cal C} \to {\cal C}$ is right conflation-exact, then ${\cal F}$ is a deflation-percolating subcategory of ${\cal C}$.
\end{corollary}
\begin{proof} Let $X \stackrel{f}{\rightarrowtail} Y \stackrel{g}{\twoheadrightarrow} Z$ be a conflation in ${\cal C}$. By assumption, $jL(f)$ has a cokernel (namely $jL(Z)$), which lies in ${\cal F}$ since ${\cal F}$ is a Serre subcategory of ${\cal C}$. The result follows from proposition \ref{proposition:TorsionfreeIsPercolating}.
\end{proof}
\begin{example}\label{example:IndiscreteAndHausdorff} Consider the quasi-abelian category $\mathsf{TAb}$ of topological abelian groups. Let ${\cal T}$ be the full subcategory of topological abelian groups with the indiscrete topology and let ${\cal F}$ be the full subcategory of Hausdorff abelian groups. Every topological abelian group $G$ fits into a conflation $\overline{\left\{e_G\right\}}\rightarrowtail G \twoheadrightarrow G/\overline{\left\{e_G\right\}}$ with $\overline{\left\{e_G\right\}}\in \Ob({\cal T})$ and $G/\overline{\left\{e_G\right\}}\in \Ob({\cal F})$. Moreover, $\Hom({\cal T},{\cal F})=0$, since a continuous homomorphism from an indiscrete group to a Hausdorff group is trivial. Hence $({\cal T},{\cal F})$ is a torsion theory. It is easy to check that $({\cal T}, {\cal F})$ is hereditary. In the notation of proposition \ref{proposition:TorsionPairAdjoints}, the functor $iR\colon \mathsf{TAb} \to \mathsf{TAb}$ is left conflation-exact. By the dual of corollary \ref{corollary:RightConflationExact}, ${\cal T}$ is an inflation-percolating subcategory of $\mathsf{TAb}$. Moreover, using that ${\cal T}$ is abelian and that $\mathsf{TAb}$ is an exact category, we find that the subcategory ${\cal T}$ is an admissibly inflation-percolating subcategory of $\mathsf{TAb}$.
Note that the natural functor $\Phi\colon {\cal F}\to \mathsf{TAb}/{\cal T}$ is essentially surjective and faithful. However, $\Phi$ is not full. Indeed, consider the conflation $\mathbb{Q}\rightarrowtail \mathbb{R}\twoheadrightarrow \mathbb{R}/\mathbb{Q}$ where $\mathbb{R}$ has the usual Euclidean topology. Note that $\mathbb{Q}$ and $\mathbb{R}$ are Hausdorff groups and $\mathbb{R}/\mathbb{Q}$ has the indiscrete topology. It follows that $\mathbb{Q}\cong \mathbb{R}$ in $\mathsf{TAb}/{\cal T}$. As there is no non-zero morphism $\mathbb{R}\to \mathbb{Q}$ in $\mathsf{TAb}$, $\Phi$ cannot be full.
\end{example}
\begin{example} Let ${\cal U} := \rep_k(A_4)$ be the category of finite-dimensional representations of the quiver $A_4$. The category $\rep_k(A_4)$ can be visualized by its Auslander-Reiten quiver: \[\xymatrix@!@C=0.5em@R=0.5em{ &&&P_4\ar[rd]&&&\\ &&P_3\ar[ru]\ar[rd] && I_2\ar[rd]&&\\ & P_2\ar[ru]\ar[rd] && X\ar[ru]\ar[rd] && I_3\ar[rd] &\\ S_1\ar[ru] && S_2\ar[ru] && S_3\ar[ru] && S_4 }\] Let ${\cal C}$ be the full additive subcategory of ${\cal U}$ generated by $S_1,P_2,P_3,P_4, S_2, X, I_2$ and $S_4$. Clearly, ${\cal C}$ is exact as it is an extension-closed subcategory of ${\cal U}$. Let ${\cal T}$ be the full additive subcategory of ${\cal C}$ generated by $S_1, P_4, I_2$ and $S_4$ and let ${\cal F}$ be the full additive subcategory of ${\cal C}$ generated by $S_2$ and $X$. One readily verifies that $({\cal T},{\cal F})$ is a cohereditary torsion pair in ${\cal C}$. The functor $jL\colon {\cal C} \to {\cal C}$ is right conflation-exact. By corollary \ref{corollary:RightConflationExact}, ${\cal F}$ is a deflation-percolating subcategory of ${\cal C}$. However, ${\cal F}$ is not right special in ${\cal C}$ (this can be seen by considering the inflation $X \rightarrowtail I_2$).
\end{example}
\begin{remark} In example \ref{Example:P4Requirement}, the subcategory ${\cal A}$ is a torsion-free class of a torsion theory $({\cal T}, {\cal A})$. This torsion theory is split in the sense that every object in ${\cal C}$ is a direct sum of an object in ${\cal A}$ and an object in ${\cal T}.$ However, the corresponding localization is not deflation-exact. This shows that the conditions on the functor $jL$ in proposition \ref{proposition:TorsionfreeIsPercolating}.\ref{enumerate:TorsionfreeIsPercolating} cannot be removed. Applying $jL$ to the conflation $P_2 \rightarrowtail P_3 \twoheadrightarrow S_3$ yields $P_2 \to P_3 \twoheadrightarrow 0.$ This shows that $jL$ does not commute with cokernels. \end{remark}
\subsection{(One-sided) quasi-abelian categories}\label{subsection:QuasiAbelian}
Many interesting examples of localizations with respect to percolating subcategories arise in the context of \emph{(one-sided) quasi-abelian} categories. We recall the following definition from \cite{Rump01}:
\begin{definition} An additive category ${\cal C}$ is called \emph{pre-abelian} if every morphism $f\colon A\rightarrow B$ in ${\cal C}$ has a kernel and a cokernel. A pre-abelian category ${\cal C}$ is called \emph{left quasi-abelian} if cokernels are stable under pullbacks and it is called \emph{right quasi-abelian} if kernels are stable under pushouts. A pre-abelian category is called \emph{quasi-abelian} if it is both left and right quasi-abelian.
\end{definition} \begin{remark}\makeatletter \hyper@anchor{\@currentHref} \makeatother\label{remark:QuasiAbelianremarks} \begin{enumerate} \item Left quasi-abelian categories have a natural strongly deflation-exact structure (all cokernels are deflations) and right quasi-abelian categories have natural strongly inflation-exact structure (all kernels are inflations), for this reason we will prefer the terminology of \emph{deflation quasi-abelian} and \emph{inflation quasi-abelian} categories over the left and right versions. We refer the reader to \cite[section~4]{BazzoniCrivei13} for useful results on one-sided quasi-abelian categories in the context of one-sided exact categories. Quasi-abelian categories inherit a natural exact structure (see also \cite{Schneiders99}). \item Left or right quasi-abelian categories are called \emph{left or right almost abelian categories} in \cite{Rump01}. \item Let $f\colon X \to Y$ be a morphism. In a preabelian category, a morphism $f\colon X \to Y$ admits a diagram \[\xymatrix{ {\ker f} \ar[r]& X \ar[rrr]^f \ar[dr] &&& Y \ar[r] & {\coker f} \\ && {\coim f} \ar[r]_{\widehat{f}} & {\im f} \ar[ru]}\] where $\coim f = \coker(\ker f)$ and $\im f = \ker(\coker f)$. In a deflation quasi-abelian category, the canonical map $\widehat{f}\colon \coim f \to \im f$ is a monomorphism; dually, in an inflation quasi-abelian category the map $\widehat{f}\colon \coim f \to \im f$ is an epimorphism (see \cite[proposition~1]{Rump01}). Hence, for a quasi-abelian category, the canonical morphism $\coim(f)\rightarrow \im(f)$ is both a monomorphism and an epimorphism (see also \cite[corollary~1.1.5]{Schneiders99}). \item By \cite[proposition~B.3]{BondalVandenBergh03}, every quasi-abelian category ${\cal C}$ can be realized as the torsion-free part of a cotilting torsion pair $({\cal T},{\cal C})$ in an abelian category. Dually, ${\cal C}$ can be realized as the torsion part of a tilting torsion pair $({\cal C},{\cal F})$ in an abelian category. \item Following \cite[theorem 4.17]{BrustleHassounTattar20} (extending \cite{HassounShahWegner20}), a pre-abelian exact category ${\cal C}$ is quasi-abelian if and only if it satisfies the admissible intersection property (in the sense of \cite[definition 4.3]{HassounRoy19}). \end{enumerate} \end{remark} The next lemma yields an easy characterization of axiom \ref{P2} for deflation quasi-abelian categories. \begin{lemma}\label{lemma:QuasiAbelianP2} Let ${\cal C}$ be a deflation quasi-abelian category and let ${\cal A}\subseteq {\cal C}$ be a full subcategory. The subcategory ${\cal A}$ satisfies axiom \ref{P2} if and only if ${\cal A}$ is closed under subobjects. \end{lemma} \begin{proof} Assume that ${\cal A}$ satisfies axiom \ref{P2}. Let $f\colon X\hookrightarrow A$ be a monomorphism such that $A\in \Ob({\cal A})$. By axiom \ref{P2}, $f$ factors as \[\xymatrix{X\ar@{->>}[r] & B\ar[r] & A}\] with $B\in \Ob({\cal A})$. Since $f$ is monic, the deflation $X\twoheadrightarrow B$ is an isomorphism. Hence $X\cong B\in {\cal A}$. Conversely, assume that ${\cal A}$ is closed under subobjects. Let $f\colon X\rightarrow A$ be a morphism in ${\cal C}$ with $A\in \Ob({\cal A})$. Since ${\cal C}$ is deflation quasi-abelian, $f$ factors as \[\xymatrix{X\ar@{->>}[r] & \coim(f)\ar[r] & A}\] As in remark \ref{remark:QuasiAbelianremarks}, the map $\coim(f)\rightarrow A$ is monic. Since ${\cal A}$ is closed under subobjects, $\coim(f)\in \Ob({\cal A})$. Hence, $f$ has the desired factorization and axiom \ref{P2} holds. 
\end{proof}
The next proposition yields an easy characterization of percolating subcategories in a (one-sided) quasi-abelian setting.
\begin{proposition}\label{proposition:PercolatingInQA} Let ${\cal C}$ be a deflation quasi-abelian category and let ${\cal A}\subseteq {\cal C}$ be a full subcategory. The subcategory ${\cal A}$ is deflation-filtering (i.e. axioms \ref{P1} and \ref{P2} are satisfied) if and only if ${\cal A}$ is strongly deflation-percolating.
\end{proposition}
\begin{proof} If ${\cal A}$ is strongly deflation-percolating, then ${\cal A}$ is deflation-filtering by definition. For the other implication, assume that ${\cal A}$ is deflation-filtering in ${\cal C}$. By definition, axioms \ref{P1} and \ref{P2} are satisfied.
We show that ${\cal A}$ is strongly deflation-filtering. Let $f\colon X\to A$ be a morphism in ${\cal C}$ with $A\in \Ob({\cal A})$. Note that $f=f_m\circ \widehat{f}\circ f_e$, where $f_e\colon X\twoheadrightarrow \coim(f)$ and $f_m\colon \im(f)\to A$ are the canonical maps from remark \ref{remark:QuasiAbelianremarks}, and that $f_m$ is monic as it is a kernel. By remark \ref{remark:QuasiAbelianremarks}, $\widehat{f}$ is monic and hence $f_m\circ \widehat{f}$ is monic. By lemma \ref{lemma:QuasiAbelianP2}, we conclude that $\coim(f)\in \Ob({\cal A})$ and hence ${\cal A}$ is strongly deflation-filtering.
We now show axiom \ref{P3}. Let $i\colon X\rightarrowtail Y$ be an inflation and $t\colon Y\to T$ a map such that $t\circ i$ factors through ${\cal A}$. Without loss of generality we may assume there is a deflation $f\colon X\twoheadrightarrow A$ with $A\in \Ob({\cal A})$ and a map $t'\colon A\to T$ such that $t\circ i=t'\circ f$. As ${\cal C}$ is pre-abelian, pushouts exist and we obtain the following commutative diagram: \[\xymatrix{ X\ar@{>->}[rr]^{i}\ar@{->>}[d]^f && Y\ar@{->>}[r]^p\ar@{->>}[d]^{f'}\ar@/^2pc/[rr]^t& Z\ar@{=}[d]& T\ar@{=}[d]\\ A\ar[rr]^{i'}\ar@{.>}[rd]^{\iota} &&P\ar@{->>}[r]^{p'}\ar@/_2pc/@{.>}[rr]_{t''} & Z & T\\ & K'\ar@{>->}[ru]_{\iota'} && }\] Here the square $XYAP$ is a pushout square, $p\colon Y\twoheadrightarrow Z$ is the cokernel of $i$, $f'$ is a deflation since it is the cokernel of the composition $i\circ \ker f$ (see proposition \ref{proposition:MitchellPullbackPushout}), $p'$ is the cokernel of $i'$ and $\iota'$ is the kernel of $p'$ (see proposition \ref{proposition:PushoutsPreserveCokernels}). The map $t''$ is induced by the pushout property and satisfies $t''i'=t'$. Since $f$ is epic and $p'i'f=p'f'i=pi=0$, we find $p'i'=0$ and hence $i'$ factors through $\iota'=\ker(p')$ via $\iota$. By proposition \ref{proposition:MitchellPullback}, the square $XYK'P$ is a pullback square and thus axiom \ref{R2} implies that the composition $X\stackrel{f}{\twoheadrightarrow} A \stackrel{\iota}{\rightarrow} K'$ is a deflation. As ${\cal C}$ is deflation quasi-abelian (and hence satisfies the equivalent conditions in proposition \ref{proposition:wicR3}), we find that $\iota$ is a deflation. Axiom \ref{P1} implies that $K'\in \Ob({\cal A})$. Furthermore, proposition \ref{proposition:PushoutIfCokernel} implies that the square $XYK'P$ is a pushout square as well. This shows that axiom \ref{P3} is satisfied.
Finally, it follows from proposition \ref{proposition:P4Criterion} that axiom \ref{P4} is satisfied.
\end{proof}
Combining lemma \ref{lemma:QuasiAbelianP2} and proposition \ref{proposition:PercolatingInQA}, we find the characterization as given in proposition \ref{proposition:IntroductionRecognitionToolI}.
\begin{proposition}\label{proposition:TwoSidedQuotientOfQuasiAbelianIsQuasiAbelian} Let ${\cal C}$ be a quasi-abelian category and let ${\cal A} \subseteq {\cal C}$ be a full subcategory.
If ${\cal A}$ is both inflation- and deflation-percolating in ${\cal C}$, then the category ${\cal C} / {\cal A}$ is quasi-abelian.
\end{proposition}
\begin{proof} As ${\cal A}$ is both inflation- and deflation-percolating, the set $S_{\cal A}$ of weak isomorphisms is a left and right multiplicative system. The localization $Q\colon {\cal C} \to {\cal C}[S^{-1}_{\cal A}] (={\cal C}/ {\cal A})$ commutes with finite limits and colimits and ${\cal C}[S^{-1}_{\cal A}]$ is exact. We need to show that every kernel-cokernel pair in ${\cal C}[S^{-1}_{\cal A}]$ is a conflation. Let $X\stackrel{f}{\to} Y \stackrel{g}{\rightarrow} Z$ be a kernel-cokernel pair in ${\cal C}[S^{-1}_{\cal A}]$. As $S_{\cal A}$ is a right multiplicative system, there is a roof $Y \stackrel{\sim}{\leftarrow} Y' \stackrel{g'}{\rightarrow} Z$ in ${\cal C}$ representing $g$. Since $Q$ commutes with finite limits, we know that $X \cong \ker(g')$ in ${\cal C}[S^{-1}_{\cal A}].$ As $\ker(g') \rightarrowtail Y'$ is an inflation in ${\cal C}$, the map $f\colon X \to Y$ is an inflation in ${\cal C}[S^{-1}_{\cal A}]$ as well. Hence, $X \to Y \to Z$ is a conflation in ${\cal C}[S^{-1}_{\cal A}].$
\end{proof}
\begin{example} Let $k$ be a field and let $\mathsf{TVS}$ be the category of topological vector spaces over $k.$ Each of the following subcategories satisfies the conditions of proposition \ref{proposition:PercolatingInQA} and is a strongly deflation-percolating subcategory of $\mathsf{TVS}$:
\begin{enumerate}
\item the subcategory of vector spaces with the discrete topology,
\item the subcategory of finite-dimensional vector spaces.
\end{enumerate}
\end{example}
\begin{example} Let $\LCAf\subset \LCA$ be the full subcategory of the locally compact (Hausdorff) abelian groups consisting of the finite abelian groups. Clearly $\LCAf\subset\LCA$ is a Serre subcategory which is closed under subobjects and quotients. Hence proposition \ref{proposition:IntroductionRecognitionToolI} (and its dual) imply that $\LCAf$ is both an inflation- and deflation-percolating subcategory of $\LCA$. Moreover, proposition \ref{proposition:TwoSidedQuotientOfQuasiAbelianIsQuasiAbelian} then implies that $\LCA/\LCAf$ is a quasi-abelian category. This result extends to locally compact $R$-modules (see \S\ref{subsection:LCA}). \end{example}
\begin{example}\label{Example:IsbellCategory} Let ${\cal I}$ be the Isbell category, that is, the full additive subcategory of $\Ab$ generated by the abelian groups containing no element of order $p^2$ for some fixed prime $p$. The Isbell category is a deflation quasi-abelian category which does not satisfy axioms \ref{L1}, \ref{L2} and \ref{L3} (see \cite[section~2]{Kelly69} and \cite[example~4.7]{BazzoniCrivei13}). Moreover, ${\cal I}$ is a reflective subcategory of $\Ab$, i.e.~the inclusion functor $\iota\colon {\cal I}\hookrightarrow \Ab$ has a left adjoint $L\colon \Ab\to {\cal I}$. The adjoint $L$ is determined by $L(G)=\{g\in G\mid p^2\nmid \text{ord}(g)\}$. Let ${\cal A}\subseteq {\cal I}$ be the full subcategory generated by the $p$-groups. Clearly ${\cal A}$ is a Serre subcategory of ${\cal I}$ which is closed under subobjects. Proposition \ref{proposition:IntroductionRecognitionToolI} implies that ${\cal A}$ is a strongly deflation-percolating subcategory of ${\cal I}$. Write ${\cal B}$ for the full subcategory of $\Ab$ generated by the $p$-groups. As ${\cal B}\subset\Ab$ is a Serre subcategory, the quotient $\Ab/{\cal B} = \Ab[S^{-1}_{\cal B}]$ is an abelian category.
Using the adjunction $L\dashv \iota$ and the universal properties of quotients, one readily verifies that $\Ab/{\cal B}$ and ${\cal I}/{\cal A}$ are equivalent. In particular ${\cal I}/{\cal A}$ is abelian. Note that lemma \ref{lemma:CompositionOfAAInflations} implies that ${\cal A}\subseteq {\cal I}$ does not satisfy axiom \ref{A3} as the inflation $\mathbb{Z}\stackrel{\cdot p}{\rightarrowtail}\mathbb{Z}$ is a weak isomorphism, but the composition $\mathbb{Z}\stackrel{\cdot p}{\rightarrowtail}\mathbb{Z}\stackrel{\cdot p}{\rightarrowtail}\mathbb{Z}$ is not an inflation in ${\cal I}$. In particular, this shows that weak isomorphisms need not be admissible.
\end{example}
\subsection{Locally compact modules}\label{subsection:LCA}
Let $\LCA$ be the category of locally compact (and Hausdorff) abelian groups. It is shown in \cite[proposition~1.2]{HoffmannSpitzweck07} that $\LCA$ is a quasi-abelian category. Let $R$ be a unital ring, endowed with the discrete topology. We write $R-\LC$ for the category of locally compact (and Hausdorff) $R$-modules. We furthermore write $R-\LC_{\mathsf{C}}$ or $R-\LC_{\mathsf{D}}$ for the full subcategories given by those $R$-modules whose topology is compact or discrete, respectively.
\begin{proposition}\label{proposition:LCAIsQuasiAbelian} Let $R$ be a unital ring.
\begin{enumerate}
\item The categories $R-\LC$ and $\LC-R$ are quasi-abelian.
\item There are quasi-inverse contravariant functors: \[\mbox{${\blb D}\colon R-\LC \to \LC-R$ and ${\blb D}'\colon \LC-R \to R-\LC$}\] which interchange compact and discrete $R$-modules.
\end{enumerate}
\end{proposition}
\begin{proof} The first part follows from \cite[proposition~1.2]{HoffmannSpitzweck07} (see also \cite[proposition~2.2]{Braunling20}). The contravariant functors in the second statement are induced by the standard Pontryagin duality $\LCA \to \LCA$ (see \cite[theorem 1]{Levin73} or \cite[theorem 2.3]{Braunling20}).
\end{proof}
It follows from \cite{HoffmannSpitzweck07} that the canonical exact structure on $R-\LC$ is described as follows: a morphism $f\colon X \to Y$ is an inflation if and only if it is a closed injection; a morphism $f\colon X \to Y$ is a deflation if and only if it is an open surjection.
\begin{proposition}\makeatletter \hyper@anchor{\@currentHref} \makeatother\label{proposition:PercolatingInLCA}
\begin{enumerate}
\item The category $R-\LC_{\mathsf{D}}$ is an admissibly deflation-percolating subcategory of $R-\LC$. The set $S_{R-\LC_{\mathsf{D}}}$ of admissible weak isomorphisms is saturated.
\item The category $R-\LC_{\mathsf{C}}$ is an admissibly inflation-percolating subcategory of $R-\LC$. The set $S_{R-\LC_{\mathsf{C}}}$ of admissible weak isomorphisms is saturated.
\end{enumerate}
\end{proposition}
\begin{proof} We first show that $R-\LC_{\mathsf{D}}$ satisfies axiom \ref{A1}. Let $A\stackrel{f}{\rightarrowtail}B\stackrel{g}{\twoheadrightarrow}C$ be a conflation in $R-\LC$. It is straightforward to show that if $B$ is discrete, then so are $A$ and $C$. Conversely, assume that $A$ and $C$ are discrete (we identify $A$ with its image in $B$). Since the singleton $\left\{0_B\right\}$ is open in $A$ and $A$ has the subspace topology of $B$, there exists an open $U\subseteq B$ such that $\left\{0_B\right\}=U\cap A$. Since the singleton $\left\{0_C\right\}$ is open in $C$, $g^{-1}(\left\{0_C\right\})=\ker(g)=A$ is open in $B$. Hence $\left\{0_B\right\}$ is open in $B$. It follows that $B$ has the discrete topology.
Axiom \ref{A2} follows from the observation that any map $f\colon X\rightarrow A$ with $A$ discrete induces an open surjective map $X\rightarrow \im(f)$ in $R-\LC$. Axiom \ref{A3} is automatic as $R-\LC$ is an exact category. It follows from proposition \ref{proposition:Saturation} that the set $S_{R-\LC_{\mathsf{D}}}$ is saturated and it follows from theorem \ref{theorem:WeakIsomorphismsEqualAAInverseIsomorphisms} that weak isomorphisms are admissible. Pontryagin duality implies the corresponding statements about $R-\LC_{\mathsf{C}}$.
\end{proof}
\begin{remark} \cite[example 4]{Braunling20} shows that $\LCA_{\mathsf{C}}$ is not left (or right) s-filtering in $\LCA$ in the sense of \cite{Schlichting04} (see definition \ref{definition:SpecialFiltering}). On the other hand, putting $R=\mathbb{Z}$, the previous proposition implies that the category $\LCA_{\mathsf{C}}$ is an admissibly inflation-percolating subcategory of $\LCA$. It follows that $\LCA/\LCA_{\mathsf{C}}$ can be described as a localization with respect to the saturated left multiplicative system given by the weak $\LCA_{\mathsf{C}}^{-1}$-isomorphisms and the localization carries a natural inflation-exact structure (see theorem \ref{theorem:Maintheorem}). \end{remark}
\begin{remark} The category $\LCA_{\mathsf{D}}$ is not an inflation-percolating subcategory of $\LCA$. Indeed, the dual of axiom \ref{A2} fails for the map $1_{\mathbb{R}}\colon (\mathbb{R},\tau_{\text{discrete}})\rightarrow (\mathbb{R},\tau_{\text{Euclidean}})$. Dually, the category $\LCA_{\mathsf{C}}$ is not a deflation-percolating subcategory of $\LCA$. \end{remark}
\begin{remark} It follows from proposition \ref{proposition:PercolatingOfPercolating} that Serre subcategories of $R-\LC_{\mathsf{D}}$ or $R-\LC_{\mathsf{C}}$ are themselves admissibly (deflation- or inflation-)percolating subcategories of $R-\LC$. \end{remark}
Following \cite{Braunling19, Braunling20}, we write $R-\LC_{{\blb R} \mathsf{C}}$ for the full subcategory of $R-\LC$ whose objects have a direct sum decomposition ${\blb R}^n \oplus C$ (as topological groups) where $C$ is compact. It is shown in \cite[corollary 9.4]{Braunling19} that $R-\LC_{{\blb R} {\mathsf{C}}}$ is an idempotent complete fully exact subcategory of $R-\LC$. We write $R-\LC_{{\blb R}}$ for those objects of $R-\LC$ which are isomorphic to ${\blb R}^n$ (with the standard topology). As an application of proposition \ref{proposition:TorsionfreeIsPercolating}, we show that $R-\LC_{\mathsf{C}}$ is left s-filtering in $R-\LC_{{\blb R} {\mathsf{C}}}$. In this way, we recover \cite[proposition~9.8]{Braunling19}.
\begin{proposition}
\begin{enumerate}
\item The pair $(R-\LC_{{\mathsf{C}}}, R-\LC_{{\blb R}})$ is a torsion pair in $R-\LC_{{\blb R} {\mathsf{C}}}$.
\item $R-\LC_{{\mathsf{C}}}$ is left s-filtering in $R-\LC_{{\blb R} {\mathsf{C}}}$.
\item $R-\LC_{{\blb R}}$ is right s-filtering in $R-\LC_{{\blb R} {\mathsf{C}}}$.
\end{enumerate}
\end{proposition}
\begin{proof} It is clear that $(R-\LC_{\mathsf{C}}, R-\LC_{{\blb R}})$ is a torsion pair in $R-\LC_{{\blb R} {\mathsf{C}}}$: the torsion part of an object ${\blb R}^n \oplus C$ is given by $t({\blb R}^n \oplus C) = C$ and the torsion-free part of an object is given by $f({\blb R}^n \oplus C) = {\blb R}^n$. As $R-\LC_{{\blb R} {\mathsf{C}}}$ is a fully exact subcategory of $R-\LC$ and $R-\LC_{\mathsf{C}}$ satisfies \ref{P1} in $R-\LC$, it follows that $R-\LC_{{\mathsf{C}}}$ satisfies \ref{P1} in $R-\LC_{{\blb R} {\mathsf{C}}}$.
Moreover, as ${\blb R}$ is injective in $\LCA$, it is clear that $R-\LC_{{\blb R}}$ is closed under extensions. Hence, we find that $R-\LC_{{\blb R}}$ also satisfies \ref{P1} in $R-\LC_{{\blb R} {\mathsf{C}}}$. Lastly, given any conflation ${\blb R}^{n_1} \oplus C_1 \rightarrowtail {\blb R}^{n_2} \oplus C_2 \twoheadrightarrow {\blb R}^{n_3} \oplus C_3$ in $R-\LC_{{\blb R} {\mathsf{C}}}$, we find, by applying the functor $L\colon R-\LC_{{\blb R} {\mathsf{C}}} \to R-\LC_{{\blb R}}$, the conflation ${\blb R}^{n_1}\rightarrowtail {\blb R}^{n_2}\twoheadrightarrow {\blb R}^{n_3}$. The $3 \times 3$-lemma shows that the torsion part $C_1 \rightarrowtail C_2 \twoheadrightarrow C_3$ is also a conflation. Hence both $L$ and the torsion functor are conflation-exact, and the remaining two statements follow from proposition \ref{proposition:TorsionfreeIsPercolating}.(3) and its dual.
\end{proof}
In the following proposition, we write $\LCAcon$ and $\LCAtd$ for the full subcategories of $\LCA$ given by the connected locally compact abelian groups and the totally disconnected locally compact abelian groups.
\begin{proposition} In the category $\LCA$, there is a cohereditary torsion pair $(\LCAcon, \LCAtd)$. In particular, the subcategory $\LCAtd$ of totally disconnected locally compact abelian groups is a strongly deflation-percolating subcategory of $\LCA$.
\end{proposition}
\begin{proof} For $G \in \LCA$, let $G_0$ be the connected component of the identity. It follows from \cite[Proposition III.4.6.14]{BourbakiTop} that $G_0$ is a (closed) subgroup of $G$ and from \cite[proposition I.11.5.9]{BourbakiTop} that the quotient $G/G_0$ is totally disconnected. As $\Hom(\LCAcon, \LCAtd)=0$, we see that $(\LCAcon, \LCAtd)$ is indeed a torsion pair. Furthermore, it follows from \cite[Corollary III.4.6.3]{BourbakiTop} that quotients of totally disconnected locally compact groups are totally disconnected so that $(\LCAcon, \LCAtd)$ is a cohereditary torsion pair (this uses corollary \ref{corollary:TorsionTheories}). As $\LCA$ is a quasi-abelian category, it follows from proposition \ref{proposition:TorsionfreeIsPercolating} (or proposition \ref{proposition:PercolatingInQA}) that $\LCAtd$ is a strongly deflation-percolating subcategory of $\LCA.$
\end{proof}
\begin{remark} As $\LCA_{\mathsf{D}} \subseteq \LCAtd$, we find that $\LCA_{\mathsf{D}}$ is an admissibly deflation-percolating subcategory of $\LCAtd$. It follows from proposition \ref{proposition:PercolatingInLCA} that the category $\LCA_{\mathsf{C}} \cap \LCAtd$ of compact totally disconnected groups is an admissibly inflation-percolating subcategory of $\LCAtd$. \end{remark}
\subsection{An example coming from filtered modules}\label{Subsection:GliderExample}
In this section, we consider an example of an exact category ${\cal E}$ and an admissibly deflation-percolating subcategory ${\cal A}$ such that the localization $S_{\cal A}^{-1}{\cal E}$ is deflation-exact but not inflation-exact. This shows that in general one cannot expect that localizing with respect to an (admissibly) percolating subcategory preserves two-sided exactness. This example is based on the theory of glider representations (see for example \cite{CaenepeelVanOystaeyen16,CaenepeelVanOystaeyen19book,CaenepeelVanOystaeyen18} or \cite{HenrardvanRoosmalen20a}).
Let $k$ be a field and let $R$ be the matrix ring \[R=\begin{pmatrix} k & 0 & 0\\ k[t]_{\leq 2} & k& 0\\ k[t] & k[t] & k[t] \end{pmatrix}.\] We write $E_{i,j}$ for the $3\times 3$-matrix defined by $(E_{i,j})_{k,l}=\delta_{i,k}\delta_{j,l}$ where $\delta_{i,k}$ is the Kronecker delta. We write $e_1,e_2,e_3$ for the primitive orthogonal idempotents, i.e. $e_i=E_{i,i}$. Let ${\cal M}$ be the abelian category of left $R$-modules.
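For concreteness, the indecomposable projective left $R$-modules can be identified with the columns of $R$, with $R$ acting by matrix multiplication:
\[ Re_1=\begin{pmatrix} k\\ k[t]_{\leq 2}\\ k[t]\end{pmatrix},\qquad Re_2=\begin{pmatrix} 0\\ k\\ k[t]\end{pmatrix},\qquad Re_3=\begin{pmatrix} 0\\ 0\\ k[t]\end{pmatrix}. \]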
Given an $R$-module $M$, we have that $M\cong e_1M\oplus e_2M\oplus e_3M$ as a $k$-vector space. Note that $e_3M$ is a $k[t]$-module. Let ${\cal E}$ be the full subcategory of ${\cal M}$ consisting of all left $R$-modules $M$ such that the maps
\begin{eqnarray*} \iota_1\colon e_1M\hookrightarrow e_2M &:& m\mapsto E_{2,1}m\\ \iota_2\colon e_2M\hookrightarrow e_3M &:& m\mapsto E_{3,2}m \end{eqnarray*}
are injective. For simplicity, we write an object of ${\cal E}$ as $e_1M\hookrightarrow e_2M\hookrightarrow e_3M$. One readily verifies that ${\cal E}$ is extension-closed in ${\cal M}$ and therefore inherits a natural exact structure. Using \cite[proposition~B.3]{BondalVandenBergh03} we see that ${\cal E}$ is in fact a quasi-abelian category. Indeed, one can verify that ${\cal E}$ is closed under subobjects and contains all projective $R$-modules. It follows that ${\cal E}$ arises as the torsion-free part of a cotilting torsion pair.
Let ${\cal A}$ be the full subcategory of ${\cal E}$ consisting of all $R$-modules $M$ such that $e_1M=0=e_2M$. Clearly, ${\cal A}$ is equivalent to the abelian category of $k[t]$-modules. Consider the map $\phi$ in ${\cal E}$ given by the following commutative diagram: \[\xymatrix{ 0\ar@{^{(}->}[r]\ar[d]&0\ar@{^{(}->}[r]\ar[d]&k\ar[d]^{1_k}\\ k\ar@{^{(}->}[r] & k\ar@{^{(}->}[r] & k }\] One readily verifies that $\ker(\phi)=0=\coker(\phi)$ in ${\cal E}$. It follows that if $\phi$ is admissible, it is an isomorphism. However, $\phi$ does not admit a right inverse. It follows that ${\cal A}$ is not inflation-percolating in ${\cal E}$. On the other hand, it is easy to see that ${\cal A}$ is admissibly deflation-percolating in ${\cal E}$. Hence, we can describe the localization ${\cal E}/{\cal A}$ using theorem \ref{theorem:Maintheorem}. Moreover, proposition \ref{proposition:Saturation} implies that the right multiplicative system of ${\cal A}^{-1}$-isomorphisms is saturated.
\begin{lemma}\label{Lemma:DesciptionIsoInQuotient} Let $M,N\in \Ob({\cal E}/{\cal A})$ such that $M\cong N$. Then $e_1M\cong e_1N$ and $e_2M\cong e_2N$ as $k$-vector spaces.
\end{lemma}
\begin{proof} Let $f\colon X\rightarrow Y$ be an ${\cal A}^{-1}$-isomorphism in ${\cal E}$ and write $f_i$ for the induced map $e_iX\rightarrow e_iY$. Since $\ker(f),\coker(f)\in {\cal A}$, we have that $\ker(f_j)=0=\coker(f_j)$ for $j=1$ or $2$. It follows that $f_1$ and $f_2$ are isomorphisms of $k$-vector spaces. Assume that $M\cong N$ in ${\cal E}/{\cal A}$ and let $(g\colon L \rightarrow N, s\colon L\rightarrow M)\in \Hom_{S_{{\cal A}}^{-1}{\cal E}}(M,N)$ be an isomorphism in ${\cal E}/{\cal A}$. Since $Q(g)$ is also an isomorphism in ${\cal E}/{\cal A}$ and $S_{{\cal A}}$ is saturated, $g$ is an ${\cal A}^{-1}$-isomorphism. It follows that $g_1,g_2,s_1$ and $s_2$ are isomorphisms and hence $e_1M\cong e_1N$ and $e_2M\cong e_2N$ as $k$-vector spaces.
\end{proof}
We now show that the localization ${\cal E}/{\cal A}$ is not inflation-exact by explicitly showing the failure of axiom \ref{L1}.
Consider the following commutative diagram in ${\cal E}$ \[\xymatrix{ tRe_2 \ar@{>->}[d]_{f} && 0\ar@{^{(}->}[r]\ar[d] & kt\ar@{^{(}->}[r]\ar[d]^{\begin{psmallmatrix}1\\0\end{psmallmatrix}} & tk[t]\ar[d]^{\begin{psmallmatrix}1\\0\end{psmallmatrix}}\\ tRe_2\oplus tRe_2\ar[d]_{g}^{\rotatebox{90}{$\sim$}} && 0 \ar@{^{(}->}[r]\ar[d] & kt \oplus kt \ar@{^{(}->}[r]\ar[d]^{\begin{psmallmatrix}1&0\\0&t\end{psmallmatrix}} & tk[t]\oplus tk[t]\ar[d]^{\begin{psmallmatrix}1&t\end{psmallmatrix}}\\ M \ar@{>->}[d]_{h}&& 0 \ar@{^{(}->}[r]\ar[d] & kt\oplus kt^2 \ar@{^{(}->}[r]\ar[d]^{\begin{psmallmatrix}0&0\\1&0\\0&1 \end{psmallmatrix}} & tk[t]\ar[d]_1\\ Re_1 && k \ar@{^{(}->}[r] & k\oplus kt \oplus kt^2 \ar@{^{(}->}[r] & k[t] }\] One can verify that $f\colon tRe_2\rightarrow tRe_2\oplus tRe_2$ is an inflation, $g$ is an ${\cal A}^{-1}$-isomorphism, and $h \colon M\rightarrow Re_1$ is an inflation. It follows that the composition $Re_2\xrightarrow{f} Re_2\oplus Re_2\xrightarrow{g} M$ descends to an inflation in ${\cal E}/{\cal A}$. The cokernel of $hgf$ in ${\cal E}/{\cal A}$ is given by $k\hookrightarrow k\hookrightarrow k$. A direct computation shows that $\ker(\coker(hgf))$ is given by $0\hookrightarrow kt\oplus kt^2\hookrightarrow tk[t]$. By lemma \ref{Lemma:DesciptionIsoInQuotient}, $\ker(\coker(hgf))\not\cong tRe_2$ in ${\cal E}/{\cal A}$. It follows that $hgf$ is not an inflation in ${\cal E}/{\cal A}$. This shows that axiom \ref{L1} is not satisfied. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \title[Global wellposedness for the 3D KPII equation]{Global well-posedness and scattering for small data for the 3-D Kadomtsev-Petviashvili-II equation} \author{Herbert Koch} \address{Mathematisches Institut \\ Unversit\"at Bonn \\ Endenicher Allee 60 \\ 53115 Bonn \\ Germany} \email{[email protected]} \thanks{H.K has been partially supported by the DFG through CRC 1060} \author{Junfeng Li} \address{Laboratory of Math and Complex Systems \\ Ministry of Education \\ School of Mathematical Sciences \\ Beijing Normal University \\ Beijing 100875 \\ P. R. China} \email{[email protected]} \thanks{J.L. has been partially supported by the Humboldt foundation, NSF of China (Grant No. 11171026), the Fundamental Research Funds for the Central Universities (NO. 2014KJJCA10).} \begin{abstract} We study global well-posedness for the Kadomtsev-Petviashvili II equation in three space dimensions with small initial data. The crucial points are new bilinear estimates and the definition of the function spaces. As by-product we obtain that all solutions to small initial data scatter as $t \to \pm \infty$. \end{abstract} \subjclass{35Q53, 37K40} \maketitle {\bf{ Keywords}}: Kadomtsev-Petviashvili II, Galilean transform, Bilinear estimate, nonlinear waves. \section{Introduction and main results} In this paper, we study the Cauchy problem for the 3-dimensional Kadomtsev-Petviashvili II (KP-II) equation \begin{align}\label{eq:1.1} \left\{ \begin{aligned} \partial_x\left(\partial_t u+\partial_x^3 u+\partial_x(u^2)\right)+\triangle_{y}u=& 0 \hspace{1.5cm} & (t,x,y)\in \mathbb{R} \times \mathbb{R}\times \mathbb{R}^2 \\ u(0,x,y)=& u_0(x,y) & (x,y)\in \mathbb{R}\times \mathbb{R}^2. \end{aligned} \right. \end{align} The Kadomtsev-Petviashvili (KP) equations describe nonlinear wave interactions of almost parallel waves. They come with at least four different flavors: The KP-II equation for which the line soliton is supposed to be stable, the KP-I equation with localized solitons, and the modified KP-I and KP-II equations with cubic nonlinearities. The KP-II equation is invariant under \begin{enumerate} \item Translations in $x,y$ and $t$. \item Scaling: $\lambda^2 u( \lambda x, \lambda^2 y, \lambda^3 t) $ is a solution if $u$ satisfies the KP-II equation \eqref{eq:1.1}. \item Galilean transform: Let $c \in \mathbb{R}^2$. Then $ u(t, x- c \cdot y -|c|^2 t, y +2ct ) $ is a solution if $u$ satisfies \eqref{eq:1.1}. On the Fourier side the transform is $\hat u( \tau- |c|^2 \xi-2c\cdot \eta, \xi, \eta +c\xi ) $ where $\tau$ is the Fourier variable of $t$, $\xi\in \mathbb{R}$ is the Fourier variable of $x$ and $\eta$ the one of $y$. \item Isometries of the $y$ plane. \item Simultaneous reflections of $x$, $t$ and $u$. \end{enumerate} The Galilean invariance is often a consequence of the rotational symmetry of full systems for which certain solutions are asymptotically described by a KP equation. The interest in the KP equations comes from the expectation that they describe waves in a certain asymptotic regime for a large class of problems, for which one does not even have to formulate a full model, similar to the role of the nonlinear Schr\"odinger equation in nonlinear optics. The Galilean symmetry group is noncompact, in contrast to the orthogonal group $O(n)$ and it seems that with this noncompactness the difficulty increases with the dimension, in contrast to what is true for many wave and Schr\"odinger equations. 
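For instance, the scaling invariance can be checked directly: setting (say) $v(x,y,t)=\lambda^2 u(\lambda x,\lambda^2 y,\lambda^3 t)$, every term of \eqref{eq:1.1} applied to $v$ picks up the same factor $\lambda^6$,
\[
\partial_x\big(\partial_t v+\partial_x^3 v+\partial_x(v^2)\big)+\triangle_{y}v
=\lambda^{6}\Big[\partial_x\big(\partial_t u+\partial_x^3 u+\partial_x(u^2)\big)+\triangle_{y}u\Big](\lambda x,\lambda^2 y,\lambda^3 t)=0,
\]
so $v$ is again a solution. The Galilean invariance can be verified by a similar direct computation.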
It would be interesting to see whether the stronger decay of the linear equation compared to the 2d problem can be used to prove global existence for small Schwartz functions. We search for spaces of initial data and solutions which reflect the symmetries. Given $\lambda \in \mathbb{R} \backslash \{0\}$, we define the Fourier projection $u_\lambda$ (we denote the Fourier transform by $\mathcal{F}$ resp. $\hat{ }$) by \begin{equation} \label{Fourierprojection} \hat u_\lambda (\tau, \xi,\eta) = \left\{ \begin{array}{rl} \hat u(\tau, \xi,\eta) & \text{ if } \lambda \le|\xi| < 2\lambda \\ 0 & \text{otherwise.} \end{array} \right. \end{equation} We will always choose $\lambda$ to be a power of $2$. For fixed $\lambda $, we partition the set $\{ (\xi,\eta) \in \mathbb{R}\times \mathbb{R}^2: \lambda\le |\xi| < 2\lambda\}$ into sets $\Gamma_{\lambda,k}$ for $ k \in \lambda\cdot\mathbb{Z}^2$ defined by \begin{equation} \label{eq:Gamma} \Gamma_{\lambda,k} = \left\{ (\xi,\eta): \lambda\le |\xi| < 2\lambda, \left| \frac{\eta}{\xi} - k \right|_{\infty} \le \frac{\lambda}{2} \right\} \end{equation} where $ | a |_{\infty} = \max \{ |a_1|,|a_2|\}$. This decomposition is shown below. \begin{center} \begin{tikzpicture}[xscale=4.2,yscale=.0625] \draw(0,0)--(2.2,0) node[anchor=south]{$\xi$}; \draw(0,-64)--(0,64) node[anchor=west]{$\eta$}; \draw(1,-32)--(2,-64)--(2,64)--(1,32)--(1,-32)--(1,0)--(2,0); \draw(0.5,-28)--(1,-56)--(1,56)--(0.5,28)--(0.5,-28)--(0.5,-12)--(1,-24)--(1,-8)--(.5,-4)--(.5,4)--(1,8)--(1,24)--(.5,12)--(.5,20)--(1,40)--(1,-40)--(.5,-20); { \draw(.25,-17)--(.5,-34)--(.5,34)--(.25,17)--(.25,-9)--(.5,-18)--(.5,-14)--(.25,-7)--(.25,-5)--(.5,-10)--(.5,-6)--(.25,-3)--(.25,-1)--(.5,-2)--(.5,2) --(.25,1)--(.25,3)--(.5,6)--(.5,10)--(.25,5)--(.25,-7)--(.5,-14)-- (.5,14)--(.25,7)--(.25,9)--(.5,18)--(.5,-26)--(.25,-13)--(.25,-11)--(.5,-22)--(.5,22)--(.25,11)--(.25,13)--(.5,26)--(.5,30)--(.25,15)--(.25,-19)--(.5,-38)--(.5,38)--(.25,19)--(.25,-15)--(.5,-30); \draw(.25,-19)--(.25,19);} { \clip (.124,-64) rectangle (.251,64); \draw (0,0)--(.5,43)--(.5,-43)--(0,0)--(.5,41)--(.5,-41)--(0,0)--(.5,39) --(.5,-39)--(0,0)--(.5,37)--(.5,-37)--(0,0)--(.5,35)--(.5,-35)--(0,0)--(.5,33) --(.5,-33)--(0,0)--(.5,31)--(.5,-31)--(0,0)--(.5,29)--(.5,-29)--(0,0)--(.5,27) --(.5,-27)--(0,0)--(.5,25)--(.5,-25)--(0,0)--(.5,23)--(.5,-23)--(0,0)--(.5,21) --(.5,-21)--(0,0)--(.5,19)--(.5,-19)--(0,0)--(.5,17)--(.5,-17)--(0,0)--(.5,15) --(.5,-15)--(0,0)--(.5,13)--(.5,-13)--(0,0)--(.5,11)--(.5,-11)--(0,0)--(.5,9) --(.5,-9)--(0,0)--(.5,7)--(.5,-7)--(0,0)--(.5,5)--(.5,-5)--(0,0)--(.5,3) --(.5,-3)--(0,0)--(.5,-1)--(.5,1)--(0,0); \draw (.125,10.75)--(.25,21.5)--(.25,-21.5)--(.125,-10.75)--(.125,10.75); } \end{tikzpicture} \end{center} For $1\leq q< \infty$, $1\leq p< \infty$, a tempered distribution $f$ is said to be in $l^ql^pL^2$ if it is in the closure of $C^\infty_0$ with respect to the norm \[\Vert f\Vert_{l^ql^pL^2}:=\left\{\sum_{\lambda\in 2^{\mathbb Z}}\lambda^{\frac{q}{2}}\left(\sum_{k\in\lambda\cdot\mathbb Z^2}\Vert f_{\Gamma_{\lambda,k}}\Vert^p_{L^2}\right)^\frac{q}{p}\right\}^{\frac{1}{q}}<\infty.\] The case $p,q=\infty$ require the standard modification. Here and in the sequel $f_{\Gamma_{\lambda,k}}$ denotes the Fourier projection. We base our construction of the solution space on the space $V^2_{KP}$ of functions of bounded 2 variation $V^2$ adapted to the three dimensional KP-II equation. This function space will be introduced in more detail in section \ref{bilinear}. 
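To illustrate the indexing in \eqref{eq:Gamma} with a concrete instance: the frequency point $(\xi,\eta)=(3,(7,-2))$ lies in the dyadic band $\lambda=2$, its slope is $\eta/\xi=(7/3,-2/3)$, and among the grid points $k\in 2\mathbb Z^2$ the choice $k=(2,0)$ satisfies
\[
\Big|\frac{\eta}{\xi}-k\Big|_\infty=\max\Big(\frac13,\frac23\Big)\le \frac{\lambda}{2}=1,
\]
so $(\xi,\eta)\in\Gamma_{2,(2,0)}$. Each $\Gamma_{\lambda,k}$ is thus a dyadic band in $\xi$ on which the slope $\eta/\xi$ is localized to a cube of side length $\lambda$; frequencies with small $|\xi|$ but large $|\eta|$ are captured through large indices $k$.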
The solution space is defined as \[ \|u\|_{l^q l^pV_{KP}^2}= \left(\sum_{\lambda\in 2^{\mathbb Z}} \Big(\lambda^{\frac12} \sum_{k\in\lambda\cdot\mathbb Z^2}\|u_{\Gamma_{\lambda,k}}\|^p_{V^2_{KP}(\Gamma_{\lambda,k})}\Big)^{\frac{q}{p}}\right)^{\frac1q}<+\infty. \] We also need the homogeneous Fourier restriction space $\dot{X}^{0,b}$ for $|b|\le 1$ which is defined by \[ \|u_1\|_{\dot{X}^{0,b}}=\Vert |\partial_t-\partial_x^3+ \partial_x^{-1} \Delta_y|^b u_1\Vert_{L^2}:=\||\tau-\omega(\xi,\eta)|^b\hat{u}_1\|_{L^2}<+\infty \] for tempered distributions supported in $[0,\infty) \times \mathbb R \times \mathbb R^2$. Here $\omega(\xi,\eta)=\xi^3-\frac{|\eta|^2}{\xi}$ is the dispersion function associated to the KP-II equation. We define \[ \|u\|_{l^q \dot{X}^{0,b}}= \Vert\lambda^2 u_{\lambda}(\lambda x,\lambda^2 y,\lambda^3 t)\Vert_{l^q_{\lambda}\dot{X}^{0,b}}=\left(\sum_{\lambda\in 2^{\mathbb{Z}}}\lambda^{(2-3b)q} \Vert u_\lambda\Vert^q_{\dot{X}^{0,b}}\right)^{\frac1q}.\] Here $l^p_\lambda$ denotes the $l^p$ norm with respect to the summation over $\lambda \in 2^{\mathbb{Z}}$. Finally we define the function space for the fixed point map by \[ \Vert u \Vert_X =\|u\|_{l^q l^p V^2_{KP} }+\|u\|_{l^q \dot{X}^{0,b}}<\infty. \] Since $\sup_{t} \Vert u(t) \Vert_{L^2} \le \Vert u \Vert_{V^2_{KP}}$ (see \cite{KoBi}) one has $ \sup_t \Vert u(t) \Vert_{l^q l^pL^2} \le \Vert u \Vert_X$. It will be clear from the construction that we obtain solutions $u \in C([0, \infty); l^ql^pL^2)$ for $1\leq q<\infty$, $1<p<2$. We are ready to state our main results. \begin{theorem}\label{wellposed} For $1\leq q<\infty$, $1<p<2$, there exists $\varepsilon>0$ such that if $u_0\in l^ql^pL^2$ satisfies \[\Vert u_0 \Vert_{l^ql^pL^2}\leq\varepsilon\] then there exists a unique global solution $w$ to \eqref{eq:1.1} \[w=S(t)u_0+u\] with $u\in X\subset C(\mathbb R, l^ql^pL^2).$ It satisfies \begin{equation}\label{eq:1.2} \Vert u \Vert_X \le c \Vert u_0 \Vert_{l^ql^pL^2}^2. \end{equation} Here $S(t)u_0$ is the solution to the homogeneous problem defined by the Fourier transform (see \eqref{eq:2.1} in Section \ref{Strichartz}). The flow map $$\Phi:B_{\varepsilon}\to X,\quad u_{0}\mapsto u \in X$$ is analytic. Here the symbol $B_{\varepsilon}$ denotes the ball of radius $\varepsilon$ in $l^ql^pL^2$. \end{theorem} Scattering is an immediate consequence. \begin{corollary}[Scattering]\label{scattering} Under the assumptions of Theorem \ref{wellposed}, for $u_0\in B_\varepsilon$ there exist $u_{\pm}\in l^ql^pL^2 $ such that \[ u(t)-S(t)u_{\pm} \rightarrow 0\text{\, in\, } l^ql^pL^2 \text{\,as\,} \, t\rightarrow\pm\infty. \] The wave operators are the inverses of the maps \[V_\pm : B_\varepsilon\ni u_0 \rightarrow u_{\pm} \in l^ql^pL^2. \] They are analytic diffeomorphisms onto their range if $\varepsilon $ is sufficiently small. \end{corollary} \begin{proof} It is an important property of the spaces $V^2_{KP}$ that for $ v \in V^2_{KP}$ the limit \[ \lim_{t\to \infty} S(-t) v(t) \] exists. If $v \in X$ then the limits $\lim_{t \to \infty} S(-t) v_{\Gamma_{\lambda,k}}$ exist. But then also \[ \lim_{t\to\infty} S(-t) v(t) \] exists in $l^ql^pL^2$. Since $ u_0 \to u(t) \in X$ is analytic, also the map $ u_0 \to \lim_{t\to \infty} S(-t) u(t) $ is analytic as a function of $u_0$. Its derivative at $u_0=0$ is the identity, and hence the map is invertible in a neighborhood of $u_0=0$. \end{proof} Theorem \ref{wellposed} is almost sharp.
For $2<p<\infty$, problem \eqref{eq:1.1} is ill-posed in the sense that the map $l^ql^pL^2 \ni u_0 \to u(t) \in l^ql^pL^2$ cannot be twice differentiable at $0$. \begin{theorem}\label{illposed} Let $1\leq q\leq\infty$, $2<p<\infty$. Suppose there exist $T >0$ and $\varepsilon>0$ such that \eqref{eq:1.1} admits a unique solution defined on the interval $ [-T,T] $ for initial data in the ball of radius $\varepsilon$ and center $0$ in $l^ql^pL^2$. Then the flow map \[F_t :u_0 \rightarrow u(t)\] for \eqref{eq:1.1} is not twice differentiable at $u_0=0$ as a map from $l^ql^pL^2$ to itself. \end{theorem} We complement the results by studying the relation of the new function spaces to test functions and distributions. \begin{theorem}\label{discription} For any $1\leq q\leq \infty$, we have \begin{enumerate} \item If $p<2$ then $l^q l^p L^2$ embeds continuously into the space of distributions. \item If $p\ge 2$ and $q>1$ there is a sequence of Schwartz functions $\phi_j$ converging to $0$ in $l^q l^p L^2 $, which does not converge in the sense of distributions. \item If $p\leq \frac43 $ and $ \phi$ is a Schwartz function in $l^q l^p L^2$, then for all $y \in \mathbb{R}^2$ we have \[ \int \phi(x,y) dx = 0. \] \item The Schwartz functions are contained in $l^q l^p L^2$ if $\frac43 <p<\infty$. \end{enumerate} \end{theorem} \begin{remark} For $l^1l^2L^2=L^2(\mathbb R^2;B^{\frac{1}{2}}_{2,1})$ and $l^2l^2L^2=\dot H^{\frac12,0}$ (see the definition \eqref{norms} below) we do not know whether the flow map is smooth or not. \end{remark} It is worthwhile to compare our results to the 2-D KP II initial data problem, which is much better understood. It has the same symmetries - up to obvious changes - as the three dimensional problem. A scaling critical and Galilean invariant space is $\dot{H}^{-\frac12,0}$ defined by the norm \begin{equation} \label{norms} \Vert u_0 \Vert_{\dot{H}^{s,\sigma }} = \Vert |\xi|^{s} <\eta>^\sigma \hat u_0\Vert_{L^2}. \end{equation} In \cite{Bourgain3}, Bourgain settled the global well-posedness of the two dimensional version of \eqref{eq:1.1} in $L^2(\mathbb{R}^2)$. The assertion was then extended by Takaoka and Tzvetkov \cite{TkTz} (see also Isaza and Mej\'ia \cite{IsMe}) from $L^2(\mathbb{R}^2)$ to $H^{s_1,s_2}$ with $s_1>-\frac{1}{3},\, s_2\geq 0$. In \cite{Takaoka}, Takaoka obtained local well-posedness for $s_1>-\frac{1}{2},\,s_2=0$ under an additional assumption on the low frequencies which was later removed by Hadac in \cite{Hadac1}. Hadac, Herr and the first author \cite{HaHeKo} studied the two dimensional KP-II equation in the critical case $s_1=-\frac12,s_2=0$. They obtained global well-posedness and a scattering result in the homogeneous Sobolev space $\dot{H}^{-1/2,0}(\mathbb R^{2})$ with small initial data. A local well-posedness result in $H^{-1/2,0}(\mathbb R^{2})$ was also obtained in \cite{HaHeKo}. Some recent results on the KP-II equation can be found in \cite{KleSa}. Much less is known for KP II in three space dimensions. Tzvetkov \cite{Tzvetkov} obtained local well-posedness in $ H^s(\mathbb{R}^3)$ with the additional condition $\partial_x^{-1}u\in H^{s}(\mathbb R^3)$ for $s>\frac32$. Here $H^s(\mathbb R^3)$ denotes the isotropic Sobolev space. Isaza, L\'opez and Mej\'ia \cite{IsLoMe} constructed unique local solutions in the Sobolev space $H^{s,r}(\mathbb R^3)$ defined by the norm \[ \|f\|_{H^{s,r}(\mathbb R^3)}:=\|<\xi>^s<\zeta>^r\hat{f}(\zeta)\|_{L^2_{\zeta}} \] for $s,r \in \mathbb R$.
Hadac \cite{Hadac2} in his Ph.D. thesis extended the local well-posedness result to almost all the subcritical cases. He obtained local well-posedness for \eqref{eq:1.1} in $Y_{s,r}(\mathbb R^3)$ for $s>\frac12$, $r>0$. To the best of our knowledge, our result is the first one for initial data in a scaling invariant space, and the first scattering result for the three dimensional problem. Also the bilinear estimates (Theorem \ref{th:2.1}) accounting for dispersion in $y$ seem to be new. In the 3-D setting, using the vertical direction (i.e. dispersion in the $y$ variable) is much more important than in the two dimensional problem. This can be seen from the Strichartz estimates in Theorem \ref{th:2.1} in Section \ref{bilinear}. In particular the bilinear $L^4$ estimate by itself seems not to suffice to close the iteration argument, and we need several nontrivial modifications. In particular we use bilinear estimates which give us a gain making use of the dispersion in the $y$ direction. We hope and think that these modifications and the constructions are of interest beyond this particular problem at hand. The 3D-KP II equation may be considered as a problem where the quadratic nonlinearity satisfies a null condition which exactly balances the bilinear estimates and the gain from high modulation, where we are not allowed to lose anything on the $L^2$ level. The outline of this paper is as follows. In Section \ref{strichartz} we prove the Strichartz estimates for the linear equations and a new crucial and fundamental bilinear estimate, Theorem \ref{th:2.1}. In Section \ref{sketch} we give the proofs of our main results. We first sketch an incorrect heuristic proof to show how far one gets using simple bilinear estimates and high modulation, for $q=1$ and $p=2$. A number of estimates are tight in this situation and we have not been able to close the argument for those function spaces. In the remainder of this section we sharpen the bilinear estimates and complete the proof of the main theorem. In Section \ref{ill} we complete the paper by proofs of Theorems \ref{illposed} and \ref{discription}. We use the standard notation $A\lesssim B$ to mean that there exists a constant $C>1$ such that $A\leq C B$. Constants $C$ may differ from line to line and depend on some obvious indices in the context but not on $A$ and $B$. $A\sim B$ means $\frac{1}{C} B\leq A\leq C B$. Similarly we write $A\ll B$ if $A\leq\frac{1}{C} B$ for a sufficiently large constant $C$. The $s$-dimensional Hausdorff measure is denoted by $\mathcal{H}^s$ and its restriction to a set $S$ by $\mathcal{H}^s_S$. \section{Strichartz estimates and bilinear refinements} \label{strichartz} \subsection{Strichartz estimate}\label{Strichartz} The linear equation \[ u_t + u_{xxx} +\partial_x^{-1} u_{yy} = 0 \] defines a unitary group $S(t)$ on $L^2$ by \begin{equation} \label{eq:2.1} \mathcal{F} (S(t) u_0) = e^{it (\xi^3-|\eta|^2/\xi)} \hat u_0. \end{equation} Given $u_0 $ the solution $u(t) = S(t) u_0$ satisfies the Strichartz estimates of the next lemma. We denote by $|D_x|^s$ the Fourier multiplier $|\xi|^s$, $\xi $ being as always the Fourier variable of $x$. \begin{lemma} \label{le:2.1}Suppose that $2\le p\le \infty$ and \begin{equation} \label{eq:2.2} \frac2p+\frac3q = \frac32. \end{equation} Then the following estimate holds for all $u_0 \in \mathcal{S}$ \[ \Vert u \Vert_{L^p_t L^q_x} \lesssim \Vert |D_x|^{\frac1{3p}} u_0 \Vert_{L^2}.
\] If $2\le q <\infty $ \begin{equation} \label{eq:2.3}\frac1p+\frac1q = \frac12 \end{equation} then \[ \Vert u \Vert_{L^p_t L^q_x} \lesssim \Vert |D_x|^{\frac2p} u_0 \Vert_{L^2}. \] \end{lemma} \begin{proof} We only sketch the proof. By a Littlewood Paley decomposition (see \eqref{Fourierprojection}) and H\"older's inequality the estimate follows from \[ \Vert u_{1} \Vert_{L^p_t L^q_x} \le c \Vert u_{1}(0) \Vert_{L^2} \] for Strichartz pairs $(p,q)$ which in turn is a consequence of the calculation of the complex Gaussian (as oscillatory integral) \[\frac1{ 2\pi} \int_{\mathbb R^2} e^{i y\cdot \eta - i t \eta^2/\xi+it \xi^3} d\eta = \frac{\xi}{4ti} e^{i\frac{\xi|y|^2}{4t}+ i t\xi^3 }. \] By stationary phase and the lemma of van der Corput we obtain \[ \left| \int \frac{\xi}{|\xi|} |\xi|^{1/2} e^{i(x+ \frac{|y|^2}{4t}) \xi + i t\xi^3} d\xi \right| \le C |t|^{-\frac12} \] which we write as \[ \Vert D_x^{\frac12} \mathcal{F}^{-1} e^{it (\xi^3-\eta^2/\xi)} \Vert_{sup} \le C |t|^{-\frac32}. \] By complex interpolation, the Hardy-Littlewood-Sobolev resp. weak Young inequality and a $T^*T$ argument \eqref{eq:2.2} follows. The endpoint $p=2$ and $q=6$ follows from \cite{KeelTao}. The estimate \[ \left| \int_{1\le |\xi|\le 2} \xi e^{i(x+ \frac{|y|^2}{4t}) \xi + i t\xi^3} d\xi \right| \le C \] is trivial. It leads to the second estimate \eqref{eq:2.3} by the same standard arguments. \end{proof} It is remarkable that there is so much flexibility in the choice of $p$ and $q$. This is true for the Schr\"odinger group, but there it comes from a trivial combination of (sharp) Strichartz estimates with Sobolev embedding. Here the situation is different due to the unbounded $y$ direction. \subsection{Bilinear estimates} There is an important special case of \eqref{eq:2.3}: \begin{equation} \label{eq:2.4} \Vert u \Vert_{L^4(\mathbb R^4)} \le c \Vert |D_x|^{\frac12} u_0 \Vert_{L^2(\mathbb R^3)}. \end{equation} The proof of the main theorem relies crucially on the following bilinear refinements. We denote by $u_{<\mu}$ the Fourier projection to all $\xi$ frequencies less in absolute value than $\mu$, by $u_{>\lambda}$ the Fourier projection to $\xi$ frequencies with absolute value $>\lambda$ and by $u_{\mu, \Gamma}$ the Fourier projection to \[ \Big\{ (\xi,\eta): \mu< |\xi| \le 2\mu, \frac{\eta}\xi \in \mu \Gamma \Big\}. \] Let $|\Gamma|$ denote the Lebesgue measure of $\Gamma$. With this notation the following variant or sharpening of the bilinear estimate is true. \begin{thm}\label{th:2.1} Let $0< \mu , \lambda$. Then \begin{equation}\label{th:2.1a} \Vert u_{<\mu} v_{>\lambda} \Vert_{L^2} \le c \mu \Vert u_0 \Vert_{L^2} \Vert v_0 \Vert_{L^2}, \end{equation} and, if $\mu \le \lambda$, if $\Gamma \subset \mathbb R^2$ is measurable, and if either \begin{itemize} \item $\mu \le \lambda/8$ or \item $\lambda/8 < \mu \le \lambda$ and $\Gamma \subset B_\lambda (0) $ and the support of the Fourier transform of $v_\lambda$ is disjoint from $\mathbb R \times \mathbb R \times B_{10 \lambda^2}(0)$ \end{itemize} then \begin{equation}\label{th:2.1b} \begin{split} \left\Vert \int_{\mathbb R\times \mathbb R^2} \Big(\lambda+ \left|\frac{\eta_1}{\xi_1} - \frac{\eta-\eta_1}{\xi-\xi_1} \right| \Big) \hat u_{\mu,\Gamma}(t,\xi_1,\eta_1) \hat v_\lambda(t,\xi-\xi_1,\eta-\eta_1) d\xi_1 d \eta_1 \right\Vert_{L^2} &\\ & \hspace{-7cm} \lesssim \mu |\Gamma|^{\frac12} \Vert u_{0,\mu,\Gamma} \Vert_{L^2} \Vert v_{0,\lambda} \Vert_{L^2}. 
\end{split} \end{equation} \end{thm} \begin{remark} Here as always $u_{0,\mu,\Gamma}$ denotes the Fourier projection of the initial data. \end{remark} \begin{remark} The condition for the second inequality is needed for a bound of a derivative from below at a single point in the argument in \eqref{lower} below. If $\mu \sim \lambda$, $ \Gamma = B_\lambda(0)$ and the Fourier support of $v_\lambda$ is contained in $\mathbb R \times \mathbb R \times B_{10 \lambda^2}(0)$ then there is no gain compared to the Strichartz estimate \eqref{eq:2.4}. \end{remark} \begin{proof} We consider solutions to the dispersive equation \begin{equation}\label{eq:phi} i \partial_t u + \phi(D) u = 0\end{equation} with $\phi(D)$ defined as Fourier multiplier with a smooth real function $\phi$. Then the Fourier transform of a solution with initial data $u_0$ is a complex measure supported on the characteristic set $\{(\tau,\xi): \tau= \phi(\xi)\}$. Here we denote all spatial Fourier variables by $\xi$. If $u$ is the solution to \eqref{eq:phi} with initial data $u_0$ then (essentially using a regularization and the coarea formula to make sense of the calculus of Dirac measures) \[ \hat u = \hat u_0(\xi) \delta_{\Phi} = \sqrt{2\pi} (1+2|\nabla \phi|^2)^{-1/2} \hat u_0(\xi) d\mathcal{H}^{d}|_{\Sigma} \] where $\Sigma = \{(\tau, \xi): \tau = \phi(\xi)\}$ is the characteristic set, and \[ \Vert \hat u \Vert_{L^2(\delta_\Phi)}= (2\pi)^{-1/2} \Vert u_0 \Vert_{L^2}. \] By the formula of Plancherel bilinear estimates for dispersive equations are equivalent to $L^2$ estimates of convolutions of such signed measures supported in such surfaces. By the Cauchy-Schwarz inequality and the theorem of Fubini, for non-negative bounded measurable functions $h$ and $l$, \[ \begin{split} \Vert fh*gl \Vert_{L^2(\mathbb R^d)}^2 \hspace{-1.5cm}& \\ = & \int_{\mathbb R^{d}} \left(\int_{\mathbb R^{d}} f(x)(h(x) l(z-x))^{1/2} g(z-x) (h(x) l(z-x))^{1/2} dx \right)^2 dz \\ \le & \int_{\mathbb R^{d}} \int_{\mathbb R^d} f^2(x)h(x) l(z-x) dx \int_{\mathbb R^d} g^2(y) h(z-y)l(y) dy \, dz \\ \le & \int_{\mathbb R^{2d}} \left[ \int_{\mathbb R^d} h(z-y) l(z-x) dz \right] f^2(x) h(x) g^2(y) l(y) dx dy. \end{split} \] Suppose that $U,V \subset \mathbb R^d$ are open, $\Phi_{1} \in C^1(U)$, $\Phi_2 \in C^1(V)$ and that the gradients $\nabla \Phi_i$ are nonzero where $\Phi_i$ vanishes. We define the Dirac measures $\delta_{\Phi_i}$ by approximation. The zero set of $\Phi_i$ is denoted by $\Sigma_i$. The calculation above yields \[ \Vert f\delta_{\Phi_1} * g\delta_{\Phi_2} \Vert_{L^2} \le C \Vert f \Vert_{L^2(\delta_{\Phi_1})} \Vert g \Vert_{L^2(\delta_{\Phi_2})} \] where \begin{equation}\label{eq:C} C^2 = \sup_{x\in \Sigma_1, y \in \Sigma_2} \int_{\mathbb R^d} \delta_{\Phi_2(z-x)} \delta_{\Phi_1(z-y)} dz \end{equation} which has again to be understood as limit through the approximation of the Dirac measures by smooth functions. By the coarea formula the integral can be rewritten. Let \[ \Sigma_{x,y} = \{ z \in \mathbb R^d : z-x \in \Sigma_2, z-y \in \Sigma_1 \} = (x+\Sigma_2) \cap (y+\Sigma_1).\] With \[ D= \left( \begin{matrix} d\Phi_1(z-x) \\ d\Phi_2(z-y) \end{matrix} \right) \] \[ J(z,x,y) = \left( \det( D^T D) \right)^{1/2}, \] we have \begin{equation}\label{eq:2.5} C^2 = \sup_{x,y} \int_{\Sigma_{x,y}} J(x,y,z) d\mathcal{H}^{d-2}(z). \end{equation} The case $ \Phi_i(\tau,\xi) = \tau-\phi(\xi)$, but with $\Phi_1$ defined on $\mathbb R \times A $ and $\Phi_2$ on $\mathbb R \times B$ is of particular interest. 
Integrating out $\tau$, \eqref{eq:C} simplifies to (with $n=d-1$) \begin{equation}\label{c2} C^2 = \sup_{\xi_1\in A,\xi_2\in B} \sup_{\tau} \int_{\mathbb R^n} \delta_{\phi(\xi-\xi_1)-\phi(\xi-\xi_2)-\tau} d\xi. \end{equation} The first case of interest is $U= \{(\tau,\xi,\eta): |\xi|\le \mu\}$, $V = \{(\tau,\xi,\eta): \lambda \le |\xi| \}$ and \[ \phi=\phi_1 = \phi_2 = \xi^3 -|\eta|^2/\xi. \] To obtain the bilinear estimate \eqref{th:2.1a} we have to estimate the integrals in \eqref{c2} by a constant times $\mu^2$. By the $L^4$ estimate \eqref{eq:2.4} we may assume that $\mu \le \lambda/2$ and estimate the quantity in \eqref{eq:C}: \begin{equation} \label{c2kp} C^2 = \sup_{\tau,\xi_1,\xi_2,\eta_1,\eta_2} \int_{\mathbb R^3,\xi-\xi_2 \in A, \xi-\xi_1\in B} \delta_{\phi(\xi-\xi_1,\eta-\eta_1)-\phi(\xi-\xi_2,\eta-\eta_2)-\tau} d\eta \, d\xi. \end{equation} The algebraic identity \begin{equation}\label{eq:2.6} \begin{split} \phi(\xi-\xi_1,\eta-\eta_1) - & \phi(\xi-\xi_2,\eta-\eta_2) +\phi(\xi_1-\xi_2,\eta_1-\eta_2 ) \\ & + 3(\xi_1-\xi_2)(\xi-\xi_1)(\xi-\xi_2) \\ = & (\xi_2-\xi_1) (\xi-\xi_1)(\xi-\xi_2) \left( \frac{\left|\frac{\eta-\eta_1}{\xi-\xi_1} - \frac{\eta-\eta_2}{\xi-\xi_2}\right|}{|\xi_1-\xi_2|} \right)^2 \end{split} \end{equation} can be verified by an easy calculation. In particular, if we fix $\xi$ then either the $\eta$ integral is over the empty set, a point, or it is an integral over a circle, in which case by \eqref{eq:2.6} (it suffices to consider the coefficient of the quadratic term since the integral is independent of the radius) \[ \int_{\mathbb R^2} \delta_{\phi(\xi-\xi_1,\eta-\eta_1)-\phi(\xi-\xi_2,\eta-\eta_2)-\tau} d\eta = \frac{2\pi |\xi-\xi_1||\xi-\xi_2|}{|\xi_2-\xi_1|} \] and we estimate the integral with respect to $\xi$ for $\mu \le \lambda/2$ \[ \frac{2\pi} {|\xi_2-\xi_1|} \int_{|\xi-\xi_2| \le \mu} |\xi-\xi_2| |\xi-\xi_1| d\xi \le 8 \pi \mu^2 . \] Together with the $L^4$ Strichartz estimate this implies estimate \eqref{th:2.1a}. We turn to the second part, \eqref{th:2.1b}, for which we repeat the calculus argument. Here we want to recover the stronger bilinear estimate for the KP equation where one gains a full derivative. Of course this can only be done by reducing the domain of the integration. The final integration then leads to the factor given by the measure $|\Gamma|$ of $\Gamma$. Let $ \Phi_i$ be as above. Instead of estimating the convolution itself we claim that \[ \begin{split} \left\Vert \int h(y ,x -y) f_1(y) f_2(x-y)\delta_{\Phi_1}(y) \delta_{\Phi_2}(x-y) dy \right\Vert_{L^2} & \\ & \hspace{-4cm} \le C \Vert f_1 \Vert_{L^2(\delta_{\Phi_1})} \Vert f_2 \Vert_{L^2(\delta_{\Phi_2})} \end{split} \] where \[ C^2 = \sup_{x \in \Sigma_1, y \in \Sigma_2} \int h^2(z-x, z-y) \delta_{\Phi_1(z-x)} \delta_{\Phi_2(z-y)} dz. \] This follows by the same calculation as above. We take up the bilinear estimate for the KP-II equation and estimate the integral in \eqref{c2kp} with the integration restricted to a suitable set. We fix $\tau$, $\xi_1$, $\xi_2$, $\eta_1$ and $\eta_2$. We seek an estimate which contains the measure of $\Gamma$ and apply the transformation formula and Fubini's theorem to take the integration with respect to $\Gamma$ as outer integration. This yields the desired estimate provided we get uniform bounds for the integral with respect to $\xi$ for $\frac{\eta-\eta_2}{\xi-\xi_2} = \rho \in \mathbb R^2$ fixed.
The Jacobian determinant of the map \[ (\xi, \eta) \to (\xi, \frac{\eta-\eta_2}{\xi-\xi_2} ) \] from $\mathbb R^3$ to $\mathbb R^3$ is $\frac{1}{|\xi-\xi_2|^2} $. We assume that one of the conditions of the second part of the theorem holds. Let $h= \lambda + \left|\frac{\eta_1}{\xi_1} - \frac{\eta_2}{\xi_2}\right| $ be the integrand to be studied. We recall that $\Gamma \subset \mathbb{R}^2$ and denote \[ B = \left\{(\xi,\eta): \mu/2 \le |\xi|\le \mu, \frac{\eta-\eta_2}{\xi-\xi_2} \in \mu \Gamma \right\}. \] Then \[ \begin{split} \int_B \Big(\lambda + \Big| \frac{\eta-\eta_1}{\xi-\xi_1} - \frac{\eta-\eta_2}{\xi-\xi_2}\Big|\Big)^2 \delta_{\phi(\xi-\xi_1,\eta-\eta_1)-\phi(\xi-\xi_2,\eta-\eta_2)} d\xi d\eta \hspace{-8cm} & \hspace{8cm} \\ = & \int_{\Gamma} \int \Big(\lambda + \Big| \frac{\eta-\eta_1}{\xi-\xi_1} - \frac{\eta-\eta_2}{\xi-\xi_2}\Big|\Big)^2 |\xi-\xi_2|^2 \delta_{g_\rho}(\xi) d\xi d\gamma \\ \le & C \mu^2|\Gamma| \end{split} \] where we calculated with \[ \begin{split} (\xi-\xi_1)^3 & - \frac{(\eta-\eta_1)^2}{\xi-\xi_1} - (\xi-\xi_2)^3 + \frac{(\eta-\eta_2)^2}{\xi-\xi_2} \\ = & (\xi-\xi_1)^3 - \frac{( \rho\cdot (\xi-\xi_2) + \eta_2-\eta_1)^2}{\xi-\xi_1} - (\xi-\xi_2)^3 + (\xi-\xi_2) |\rho|^2 \\ = :& g_\rho(\xi). \end{split} \] Clearly $g_\rho(\xi)=\tau$ if and only if \[ (\xi-\xi_1)^4 - (\rho\cdot (\xi-\xi_2) +\eta_2-\eta_1)^2-(\xi -\xi_2)^3(\xi-\xi_1) + (\xi-\xi_1)(\xi-\xi_2)|\rho|^2 = \tau (\xi-\xi_1) \] and hence there are at most $4$ values of $\xi$ where $g_\rho=\tau$. Moreover \begin{equation} \begin{split} \left|\frac{d}{d\xi} g_\rho(\xi)\right| = & \Big| 3(\xi-\xi_1)^2 - 2 \rho \frac{\rho (\xi-\xi_2)+\eta_2-\eta_1}{\xi-\xi_1}\\ & + \frac{( \rho(\xi-\xi_2) + \eta_2-\eta_1)^2}{(\xi-\xi_1)^2} - 3(\xi-\xi_2)^2 + |\rho|^2 \Big|\\ = &\left| 3 (\xi-\xi_1)^2 - 3 (\xi-\xi_2)^2 + \left|\frac{\eta-\eta_2}{\xi-\xi_2} - \frac{\eta-\eta_1}{\xi-\xi_1} \right|^2 \right| \\ \sim & \, \Big(\lambda + \Big| \frac{\eta-\eta_1}{\xi-\xi_1} - \frac{\eta-\eta_2}{\xi-\xi_2}\Big|\Big)^2 \end{split} \label{lower} \end{equation} since $g_\rho(\xi)=\tau$ at most at four points, and $|g_\rho'|$ satisfies the stated lower bound there. \end{proof} \subsection{Functions of bounded $p$ variation and their predual} Functions of bounded $p$ variation were introduced by N. Wiener \cite{Wiener}. The spaces $V^p$ of functions of bounded $p$-variation and their pre-dual spaces $U^p$ were defined by D. Tataru and the first author of this paper in \cite{KoTa}. $V^p_{KP}$ and $U^p_{KP}$ are defined by $S(t)V^p$ and $S(t)U^p$. Here $S(t)$ is the unitary group defined in \eqref{eq:2.1}. We refer the reader to \cite{HaHeKo} for the following statements and further properties about $U^p_{KP}$ and $V^p_{KP}$. Let $\frac1p+\frac1{p'}=1$, $1<p < \infty$. The duality pairing can formally be written as \[ B(u,v) = \int v (\partial_t+\partial_{xxx} - \partial_x^{-1} \Delta_y) \bar u dx dy dt, \] but a correct definition requires more care (see \cite{HaHeKo1}). The space $V^{p'}_{KP}$ is the dual of $U^p_{KP}$ with respect to this duality pairing. We denote by $V^p_{rc}$ the subspace of $V^p_{KP}$ of right continuous functions with limit $0$ at $-\infty$.
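For the reader's convenience we briefly recall, in the form used in \cite{KoTa,HaHeKo} (we refer to these references for the precise statements), the norms behind this notation. For $1\le p<\infty$ the $p$-variation norm of a function $v\colon\mathbb R\to L^2$ is
\[
\Vert v\Vert_{V^p}=\sup_{t_0<t_1<\dots<t_n}\Big(\sum_{j=1}^{n}\Vert v(t_j)-v(t_{j-1})\Vert_{L^2}^p\Big)^{\frac1p},
\]
and a $U^p$-atom is a step function $a=\sum_{j=1}^{n}\chi_{[t_{j-1},t_j)}\phi_{j-1}$ with $\sum_{j}\Vert\phi_{j-1}\Vert_{L^2}^p\le 1$; the space $U^p$ consists of sums $u=\sum_j c_j a_j$ of atoms with $\sum_j|c_j|<\infty$, normed by the infimum of $\sum_j|c_j|$ over all such representations. The adapted norms used here are then $\Vert u\Vert_{U^p_{KP}}=\Vert S(-t)u(t)\Vert_{U^p}$ and $\Vert v\Vert_{V^p_{KP}}=\Vert S(-t)v(t)\Vert_{V^p}$.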
The spaces $U^p$ have an atomic structure and the Strichartz estimates imply \begin{equation}\label{eq:2.7} \Vert u \Vert_{L^pL^q} \le c_1\Vert |D_x|^{\frac1{3p}} u \Vert_{U^p_{KP}} \end{equation} where $\frac2p + \frac3q = \frac32$, $2\le p,q \le \infty$ and \begin{equation}\label{eq:2.8} \Vert u \Vert_{L^pL^q} \le \Vert D^{\frac1p} u \Vert_{U^p_{KP}} \end{equation} where $\frac1p+\frac1q = \frac12$, $2<p\le \infty$. Moreover one has the inclusions \begin{equation}\label{eq:2.9} \Vert u \Vert_{U^p_{KP} } \le c \Vert u \Vert_{V^q_{KP} } \end{equation} whenever $q<p$ and $u \in V^q_{KP}$ is right continuous. Similarly we obtain from the bilinear estimates of Theorem \ref{th:2.1} under the same assumptions there, \begin{equation} \label{eq:2.10} \Vert u_\mu v_\lambda \Vert_{L^2} \le c \mu \Vert u_{\mu} \Vert_{U^2_{KP}} \Vert v_\lambda \Vert_{U^2_{KP}} \end{equation} and \begin{equation}\label{eq:2.11} \left\Vert \int_S \Big(\lambda+ \left|\frac{\eta_1}{\xi_1} - \frac{\eta_2}{\xi_2} \right| \Big) \hat u_{\mu,\Gamma} \hat v_\lambda \right\Vert_{L^2} \lesssim \mu |\Gamma|^{\frac12} \Vert u_{\mu,\Gamma} \Vert_{U^2_{KP}} \Vert v_{\lambda} \Vert_{U^2_{KP} }. \end{equation} The $V^2_{KP}$ spaces behave well with respect to further decompositions: \begin{equation} \Vert u_\lambda \Vert_{V^2_{KP}} \le \Vert u_\lambda \Vert_{l^2 V^2_{KP}}, \end{equation} see \cite{KoBi}. They allow the following decomposition \begin{lemma} \label{interpo} Suppose that $1<p<q<\infty$. There exists $\delta>0$ so that for any right continuous $v \in V^p_{KP}$ and $M>1$ there exists $u \in U^p_{KP}$ and $w \in U^q_{KP}$ such that \[ v = u + w \] \[ \Vert u \Vert_{U^p_{KP}} \le M , \qquad \Vert w \Vert_{U^q_{KP}} \le e^{-\delta M }. \] \end{lemma} From \eqref{eq:2.10}, the $L^4$ Strichartz estimates and logarithmic interpolation lemma \ref{interpo} (see again \cite{HaHeKo}), we obtain for any $0<\varepsilon\ll 1$, \begin{equation}\label{eq:2.12} \Vert u_\mu v_\lambda \Vert_{L^2} \le C(\varepsilon) \mu \Big(\frac{\lambda}{\mu}\Big)^{\varepsilon}\Vert u_{\mu} \Vert_{V^2_{KP}} \Vert v_\lambda \Vert_{V^2_{KP}}. \end{equation} Similarly the bilinear estimate \eqref{th:2.1b} implies bilinear estimates with respect to $U^2_{KP}$, and via logarithmic interpolation, estimate with respect to the $V^2_{KP}$ norm. Later we will make use of the spaces $U^1_{KP}\subset V^1_{KP}$ which carry identical norms, which, for functions given by $S(-t)u(t) = \int_{-\infty}^t f(s) ds $ is $\int_{\mathbb R} |f| dt$. We define \[ \begin{split} \Vert v \Vert_{V^1_{KP}}= & \Vert S(-t) v(t) \Vert_{BV} \\ = & \sup_{t_0 < t_2 <\cdots< t_n} \sum_{j=1}^n \Vert S(-t_i) v(t_i) - S(-t_{i-1}) v(t_{i-1}) \Vert_{L^2} \end{split} \] where we allow $t_n= \infty$ (recall the convention $v(\infty)=0$). We denote by $U^1_{KP}$ the Banach space of all right continuous functions with $\lim_{t\to -\infty} u(t) = 0 $ for which this norm is finite. It is not hard to see that \[ \Vert u \Vert_{U^1_{KP}}= \Vert S(-t) u(t) \Vert_{BV(\mathbb R,L^2)} \] Then $U^1_{KP} \subset U^2_{KP}$. We will use an improvement of the estimate for high modulation. Let $\Phi \in \mathcal{S}(\mathbb R)$ with $\hat \Phi = 1$ for $|\tau|\le 1$, $\hat \Phi=0$ for $|\tau| \ge 2$. Then, for $f$ with $f' \in L^1$ \[ \begin{split} \Vert f-\Phi*f \Vert_{L^1} = & \left\|\int (f(t)-f(s))\Phi(t-s) ds \right\|_{L^1} \\ \le & \int_{\mathbb R} |\Phi(\sigma)| \int |f(t)-f(t-\sigma)| dt d\sigma \\ = & \int_{\mathbb R} |\sigma| |\Phi(\sigma)| d\sigma \int |f'(t)| dt. 
\end{split} \] Rescaling and an approximation yield the high modulation estimate \begin{equation} \Vert u_\lambda^{>\Lambda} \Vert_{L^1_t L^2} \le c\Lambda^{-1} \Vert u_\lambda \Vert_{U^1_{KP}}. \end{equation} Here $u^{>\Lambda}$ resp.\ $u^{\leq\Lambda}$ denotes the Fourier projection to high resp.\ low modulation, i.e.\ to $$|\tau-\omega(\xi,\eta)|:=\Big|\tau-(\xi^3-\frac{|\eta|^2}{\xi})\Big| > \Lambda $$ resp. $\leq \Lambda$. By the definition of the Fourier restriction spaces \[ \Vert u^{>\Lambda} \Vert_{L^2} \le \Lambda^{-b} \Vert u^{>\Lambda} \Vert_{\dot{X}^{0,b}}, \qquad \Vert u^{>\Lambda} \Vert_{L^2} \le \Lambda^{-1/2} \Vert u^{>\Lambda} \Vert_{V^2_{KP}}, \] and similarly \[ \Vert u^{\sim \Lambda} \Vert_{U^2_{KP}} \le \Lambda^{1/2} \Vert u \Vert_{L^2}. \] See \cite{HaHeKo}. \subsection{A bilinear operator} \label{bilinear} The bilinear estimates of Theorem \ref{th:2.1} state some off-diagonal decay in the bilinear terms. This suggests decomposing waves into wave packets with corresponding Fourier support. We recall that we partition $\{\xi : \lambda\le |\xi| < 2\lambda\}\times \mathbb R^2$ into the sets $ \Gamma_{\lambda,k}$ of \eqref{eq:Gamma}. Theorem \ref{th:2.1} effectively diagonalizes the bilinear estimate in the sector determined by the large frequency. To capture this we define \[ \Gamma_{\lambda,k,L}= \left\{ (\xi_1,\eta_1) : \lambda \le \xi_1 \le 2\lambda, \Big|\frac{\eta_1}{\xi_1}-kL \Big|_\infty \le \frac{L\lambda}2 \right\} \] and $\Gamma_{\mu,k, L\lambda/\mu}$ is the set in frequency $|\xi| \sim \mu$ which corresponds to $\Gamma_{\lambda,k}$ in the bilinear estimate of Theorem \ref{th:2.1}. We define a smooth bilinear projection which is compatible with scaling and the Galilean symmetry. Here we again denote the Fourier transform in space time by $\mathcal{F}$ resp. $\hat {\, }$. Let $ \phi_1 \in C^\infty_0((-129,129)\times (-129,129))$ be identically $1$ in $(-128,128)\times (-128,128)$ and even. We define for $L = 2^k$ with $k \ge 1$ \[ \psi_L (s) = \phi_1( s/L) - \phi_1(2s/L) \] and \[ \rho_L(\xi_1,\eta_1, \xi_2,\eta_2) := \psi_L\left( \frac{\frac{\eta_1}{\xi_1}-\frac{\eta_2}{\xi_2}}{\xi_1+\xi_2} \right). \] For $L=1$, we make the modification \[ \rho_1(\xi_1,\eta_1,\xi_2,\eta_2):=\phi_1\left( \frac{\frac{\eta_1}{\xi_1}-\frac{\eta_2}{\xi_2}}{\xi_1+\xi_2} \right). \] \begin{definition} We define the bilinear operators by their Fourier transform \[ \! \mathcal{F}( T_L(v_\mu ,u_\lambda ))(\tau,\xi,\eta) = \int_{S} \rho_L(\xi_1, \eta_1,\xi_2,\eta_2) \hat v_\mu(\tau_1,\xi_1, \eta_1) \hat u_\lambda(\tau_2,\xi_2,\eta_2) d\mathcal{H}^4.\! \] Here $S=\{\xi=\xi_1+\xi_2,\eta=\eta_1+\eta_2,\tau=\tau_1+\tau_2\}$ and $d\mathcal{H}^4$ denotes the 4-dimensional Hausdorff measure on it. \end{definition} The product is the dyadic sum of these bilinear operators. The key properties of the bilinear projection are its symmetry and the bounds of Proposition \ref{pro:3.1} below. \begin{lemma}\label{TL} The following symmetry identity always holds. \begin{equation} \label{T:sym} \begin{split} \int u_\lambda T_L (v_\lambda ,w_\mu) dx\, dy\, dt = & \int v_\lambda T_L(u_\lambda, w_\mu) dx\, dy \, dt \\ = & \int w_\mu T_L(u_\lambda, v_\lambda) dx\, dy \, dt. \end{split} \end{equation} \end{lemma} \begin{proof} This follows from the algebraic calculation \[ \frac{\frac{\eta_1+\eta_2}{\xi_1+\xi_2} - \frac{\eta_1}{\xi_1} }{\xi_2} = \frac{\frac{\eta_2}{\xi_2}-\frac{\eta_1}{\xi_1}}{\xi_1+\xi_2}. \] \end{proof} The following bilinear estimates provide us with a crucial new tool.
Below the index $\{.\}_+$ denotes the positive part. \begin{proposition}\label{pro:3.1} Let $\varepsilon >0$, $1\le p ,q,r \le \infty$ with \[ \frac1r \le \frac 1p+\frac1q \] and $L \in 2^{k}, k=0,1,2\dots $. Then the following estimates hold \begin{equation} \label{eq:3.10} \left\Vert T_L (u_\mu ,v_\lambda)_\lambda \right\Vert_{l^r L^2} \le C \mu \Big(\frac{L\lambda}{\mu}\Big)^{1-\frac2p+\varepsilon} L^{(1-\frac2q)_++ (\frac2r-1)_+} \Vert u_\mu \Vert_{l^pV^2_{KP}} \Vert v_\lambda \Vert_{l^{q}V^2_{KP}} \end{equation} and \begin{equation} \label{eq:3.15}\begin{split} \Vert (T_L (u_\lambda ,v_\lambda))_\mu \Vert_{l^rL^2} & \\ & \hspace{-2cm} \le C \lambda \Big(\frac{L\lambda}{\mu}\Big)^{(\frac2r-1)_+} L^{(1-\frac{2}{p}) +(1-\frac2q)_++\varepsilon} \Vert u_\lambda \Vert_{l^{p}V^2_{KP}} \Vert v_\lambda \Vert_{l^{q}V^2_{KP}} . \end{split} \end{equation} \end{proposition} \begin{proof} We consider the case $\mu < \lambda/4$ for \eqref{eq:3.10} first. By rescaling we may assume that $\mu=1 < \lambda/4$. We decompose the bilinear term further, using that by the definition of $T_L$ there is only a contribution if \[ \left| \frac{\frac{\eta_1}{\xi_1} -\frac{\eta_2}{\xi_2}}{\xi_1+\xi_2} \right| \sim L. \] It is important that this relation is equivalent to \[ \left| \frac{\frac{\eta_1+\eta_2}{\xi_1+\xi_2} -\frac{\eta_2}{\xi_2}}{\xi_1} \right| \sim L. \] Since $1\le \lambda/4$ we have $|\xi_2| \sim |\xi_1+\xi_2 |\sim \lambda $ and both $\xi_2$ and $\xi_1+\xi_2$ have the same sign. For simplicity we assume that both are positive. Recall that $|\xi_1| \sim 1$. We begin with the case $L=1$ resp. \[ \left| \frac{\frac{\eta_1}{\xi_1} -\frac{\eta_2}{\xi_2}}{\xi_1+\xi_2} \right| \le 1. \] If $(\xi_2,\eta_2) \in \Gamma_{\lambda,k}$ then the $l^r$ summation in \eqref{eq:3.10} over $\Gamma_{\lambda,l}$ contributes only if $|k-l| \le C$. We simplify our lifes and restrict to $l=k$. The situation is similar if $L >1$. and we obtain the restriction that the indices are of distance $\sim$ $1$ and the slopes have distance $\sim L \lambda $. Hence, by the same abuse of notation as usual, and with the sets $\Gamma_{1,k,\lambda}$ defined at the beginning of this subsection \begin{equation}\label{eq:5.9} (T_{1}(u_1,v_\lambda))_{\Gamma_{\lambda, k}}= \left(T_1(u_{\Gamma_{1,k,\lambda}},v_{\Gamma_{\lambda,k}})\right)_{\Gamma_{\lambda, k}} . \end{equation} We search for an $L^2$ estimate and ignore the outer restriction to $\Gamma_{\lambda,k}$ in the notation. By the bilinear estimate we get \[ \Vert u_{\Gamma_{1,l}} v_{\Gamma_{\lambda ,k}} \Vert_{L^2} \le c \frac{1}{\lambda} \Vert u_{\Gamma_{1,l}} \Vert_{U^2_{KP}}\Vert v_{\Gamma_{\lambda,k}} \Vert_{U^2_{KP}}. \] There are $\sim \lambda^2$ such terms in $u_{\Gamma_{1,k,\lambda}}$ contributing to the sum and hence by H\"older's inequality applied to the finite sum \[ \Vert u_{\Gamma_{1,k,\lambda}}v_{\Gamma_{\lambda,k}}\Vert_{L^2} \le c \frac{1}{\lambda} \lambda^{2-\frac2p} \Vert u_{\Gamma_{1,k,\lambda}} \Vert_{l^p U^2_{KP}} \Vert v_{\Gamma_{\lambda,k}} \Vert_{U^2_{KP}}. 
\] The $L^4$ Strichartz estimate gives \[ \begin{split} \Vert u_{\Gamma_{1,k,\lambda}}v_{\Gamma_{\lambda,k}}\Vert_{L^2} \le & \sum_{\Gamma_{1,l}\subset\Gamma_{1,k,\lambda}} \Vert u_{\Gamma_{1,l}} v_{\Gamma_{\lambda,k}}\Vert_{L^2} \\ \le & c\sum_{\Gamma_{1,l}\subset\Gamma_{1,k,\lambda}}\lambda^{\frac12} \Vert u_{\Gamma_{1,l}} \Vert_{U^4_{KP}} \Vert v_{\Gamma_{\lambda,k}}\Vert_{U^4_{KP}} \\ \le & c \lambda^{\frac12} \lambda^{2-\frac2p} \Vert u_{\Gamma_{1,k,\lambda}} \Vert_{l^p U^4_{KP}} \Vert v_{\Gamma_{\lambda,k}}\Vert_{U^4_{KP}}. \end{split} \] where the summation is with respect to those $l$ for which $\Gamma_{1,l} \subset \Gamma_{1,k, \lambda}.$ With the logarithmic interpolation of Lemma \ref{interpo} we arrive at \[ \Vert u_{\Gamma_{1,k,\lambda}}v_{\Gamma_{\lambda,k}}\Vert_{L^2} \le c \lambda^{1-\frac2p+\varepsilon} \Vert u_{\Gamma_{1,k,\lambda}} \Vert_{l^pV^2_{KP}} \Vert v_{\Gamma_{\lambda,k}} \Vert_{V^2_{KP}}. \] The summation with respect to $k$ is trivial and we arrive at the first estimate \eqref{eq:3.10}, also for $L>1$, for which there are only the obvious modifications, up to an explanation why we may simply drop the operator $T_L$ once we restricted the support of the Fourier transforms of the factors. Bounded spatial Fourier multipliers define bounded operators on the function spaces $U^p_{KP}$ and $V^p_{KP}$. Our problem is that $T_L$ is a bilinear Fourier multiplier, and we have to reduce the estimates to estimates of Fourier multipliers acting on single functions. We recall that \[ \rho_L(\xi_1,\eta_1, \xi_2,\eta_2) := \psi_L\left( \frac{\frac{\eta_1}{\xi_1}-\frac{\eta_2}{\xi_2}}{\xi_1+\xi_2} \right) \] and we want to bound $T_L( u_{\Gamma_{\mu,k,L\lambda/\mu}} , u_{\Gamma_{\lambda,k',L}})$ which is zero unless $4\le |k-k'|_\infty \le 20$. Without loss of generality we consider $64\le k_1-k_1' \le 1000$. We apply a Galilee transform which reduces the problem to $k_1+k_1'=0$, $k_2=0$ and $|k_2'| \le 20$. More precisely we expand \begin{equation} u_{\Gamma_{\mu,k,L\lambda/\mu}} = \sum_{l\in A} u_{\Gamma_{\mu,l}} \end{equation} where $A$ is set of cardinality $(L\lambda/\mu)^2$. The function $\rho_L$ is a smooth function on $ \Gamma_{\mu,k,L\lambda/\mu} \times \Gamma_{\lambda,k',L}$. We choose a smooth extension supported in \[ \begin{split} (( -3\mu,-\frac43\mu) \cup (\frac43 \mu, 3\mu)) \cup \{ |\eta -kL\lambda \mu|_\infty \le L\lambda \mu \} & \\ & \hspace{-6cm} \times (( -3\lambda,-\frac43\lambda) \cup (\frac43 \lambda, 3\lambda)) \cup \{ |\eta -kL\lambda^2|_\infty \le L\lambda^2 \}, \end{split} \] which, by an abuse of notation, we call again $\rho_L$. Its derivative satisfies \[ \left| \partial_{\xi_1}^k \partial_{\eta_1}^\alpha \partial_{\xi_2}^l \partial_{\eta_1}^\beta \psi_1\left( \frac{\eta_1}{\xi_1} -\frac{\eta_2}{\xi_2} \right) \right| \le c \mu^{-k} \lambda^{-l} (L\mu \lambda)^{-|\alpha|} (L\lambda^2)^{-|\beta|}. \] We expand it into a fast converging Fourier series and we multiply it by a suitable smooth product cutoff function \[ \begin{split} \rho_{L} =& \sum_\alpha \rho_1(\xi_1/\mu) e^{2\pi i \alpha_1 \xi_1/\mu} \rho_2(\eta_1/L\lambda \mu) e^{2\pi i \eta_1 \alpha_2/(L\lambda \mu) } \\ & \times \rho_3(\xi_2/\lambda) e^{2\pi i \alpha_3 \xi_2/\lambda} \rho_4(\eta_2/(L\lambda^2) e^{2\pi i \eta_2/(L\lambda^2) } \\ =: &f^\alpha = \sum_{\alpha} a_\alpha f_1^\alpha(\xi_1) f_2^\alpha (\eta_1) f_3^\alpha (\xi_2) f_4^\alpha ( \eta_2) \end{split} \] with uniform bounded compactly supported functions $f^\alpha_j$ and summable coefficients $a^\alpha$. 
It suffices to bound the operator \[ \begin{split} T_{f^\alpha} (u_{\Gamma_{\mu,k,L\lambda/\mu}},v_{\Gamma_{\lambda,k',L}} ) = & M_{f_1^\alpha f_2^\alpha} u_{\Gamma_{\mu,k,L\lambda/\mu}}M_{f_3^\alpha f_4^\alpha} v_{\Gamma_{\lambda,k',L\lambda^2}} \\ = & \tilde u_{\Gamma_{\mu,k,L\lambda/\mu}} \tilde v_{\Gamma_{\lambda,k',L\lambda^2}}. \end{split} \] where $M_f$ denotes the Fourier multiplier. The bilinear estimate above, together with the observation that spatial Fourier multipliers define bounded operators on $U^p_{KP}$ and $V^p_{KP}$ completes the argument for the first estimate \eqref{eq:3.10} if $\mu \le \lambda/4$. If $\mu > \lambda/4 $ we decompose $v_\mu = v_{<\lambda/4} + \sum_{\lambda/4 \le \rho \le \mu}v_\rho$ and apply \eqref{eq:3.10} to the first term and \eqref{eq:3.15} (which we prove next) to the remaining terms. We turn to estimate \eqref{eq:3.15}. It suffices to prove the estimate for $\mu =1 \le \lambda $. We begin again with $L=1$. As above it suffices to consider a fixed number $k \in \mathbb{Z}^2$, which we even may assume to be zero. The summation with respect to $k$ poses no difficulties. The $L^4$ Strichartz estimate implies $ \Vert u_{\Gamma_{\lambda,k}}^2 \Vert_{L^2} \le c \lambda\Vert u_{\Gamma_{\lambda, k}} \Vert_{U^4_{KP}}^2$. By H\"older's inequality for sequences and orthogonality \[ \sum_{k} \Vert (u_{\Gamma_{\lambda,k}} u_{\Gamma_{\lambda,k}})_{\Gamma_{1,k,\lambda}} \Vert_{l^r(L^2)} \le c \lambda^{(\frac2r-1)_++1} \Vert u_{\Gamma_{\lambda,k}} \Vert_{l^pV^2_{KP}} \Vert u_{\Gamma_{\lambda,k}} \Vert_{l^pV^2_{KP}} . \] The condition $\frac1r= \frac1p+\frac1q$ suffices for that summation. This time there will be an important modification for large $L$. As above, if $k \ge 2$, by the bilinear estimate of Theorem \ref{th:2.1}, and its consequences for $U^2_{KP}$, \begin{equation} \Vert u_{\Gamma_{\lambda,0}} v_{\Gamma_{\lambda,k,L}} \Vert_{L^2} \le c \lambda L^{-1} \Vert u_{\Gamma_{\lambda,0}}\Vert_{U^2_{KP}} \Vert v_{\Gamma_{\lambda,k,L}} \Vert_{U^2_{KP}}. \end{equation} As above we have to sum over $L^2$ terms which gives \[ \Vert T_L (u_{\Gamma_{\lambda,k,L}}, v_{\Gamma_{\lambda,k',L}}) \Vert_{L^2} \le c L^{1-\frac2p + (1-\frac2q)_+ + \varepsilon} \lambda \Vert u_{\Gamma_{\lambda,k,L}} \Vert_{l^pV^2_{KP}} \Vert v_{\Gamma_{\lambda,k,L}} \Vert_{l^q V^2_{KP}}. \] We complete the proof with the same type of approximation and summation as above. \end{proof} \section{Proof of the main theorem}\label{sketch} \subsection{A simple proof with three flaws} We begin with sketching an incomplete proof, attempting to get an iteration argument work in a simpler and slightly larger space $X^0$ defined by the norm \[ \Vert u \Vert_{X^0} = \sup_{\lambda>0} \left( \lambda^{1/2} \Vert u_\lambda \Vert_{V^{2}_{KP}} + \lambda^{-1} \Vert u_\lambda \Vert_{\dot{X}^{0,1}}\right). \] This will almost work, and we will provide essential modifications which will complete the wellposedness argument. Existence via the contraction mapping principle follows from the two estimates \begin{equation}\label{eq:3.1}\lambda^{\frac12} \left\Vert \int_0^t S(t-s)\partial_x (uv)_\lambda ds \right\Vert_{V^2_{KP}} \le c \Vert u \Vert_{X^0} \Vert v \Vert_{X^0} \end{equation} and \begin{equation}\label{eq:3.2}\lambda^{-1} \left\Vert \int_0^t S(t-s) \partial_x (uv)_\lambda ds \right\Vert_{\dot{X}^{0,1}} \le c \Vert u \Vert_{X^0} \Vert v \Vert_{X^0}. 
\end{equation} It is useful to observe that \begin{equation}\label{eq:3.3}\lambda^{1/2} \Vert u_\lambda \Vert_{V^{2}_{KP}} + \lambda^{-1} \Vert u_\lambda \Vert_{\dot{X}^{0,1}} \sim \lambda^{1/2} \Vert u_\lambda^{\leq\lambda^3} \Vert_{V^2_{KP}} + \lambda^{-1} \Vert u_\lambda^{>\lambda^3} \Vert_{\dot{X}^{0,1}}. \end{equation} This implies \eqref{eq:3.3}. By scaling it suffices to consider \eqref{eq:3.1} and \eqref{eq:3.2} for $\lambda=1$, and duality reduces the two estimates to bounds for trilinear integrals \begin{equation}\label{eq:3.4} \int uvw_1dxdydt=\int_S\widehat{u}(\xi_1,\eta_1,\tau_1)\widehat{v}(\xi_2,\eta_2,\tau_2)\widehat{w_1}(\xi_3,\eta_3,\tau_3)d\mathcal{H}^8. \end{equation} for $w_1\in V^2_{KP}\cup L^2.$ Here $S$ denotes the subspace of dimension $8$ given by \[ \{\xi_1+\xi_2+\xi_3=0, \eta_1+\eta_2+\eta_3=0,\tau_1+\tau_2+\tau_3=0\} \] and $d\mathcal{H}^8$ denotes the $8$-dimensional Hausdorff measure on it. On this subspace \eqref{eq:2.6} becomes \begin{equation}\label{eq:3.5}\tau_1-\omega_1+\tau_2-\omega_2+\tau_3-\omega_3= -3\xi_1\xi_2\xi_3-\frac{\xi_1\xi_2}{\xi_3}\Big|\frac{\eta_1}{\xi_1} -\frac{\eta_2}{\xi_2}\Big|^2. \end{equation} It has the following important interpretation: If $\tau_i = \xi_i^3 - \eta_i^2/\xi_i$ for $i=1,2$ then $\Lambda \ge |\xi_1\xi_2(\xi_1+\xi_2)|$ where \[ \begin{split} \Lambda:= \left| (\tau_2-\tau_1) - (\xi_2-\xi_1)^3 + \frac{|\eta_2-\eta_1|^2}{\xi_2-\xi_1} \right| = & |\xi_1||\xi_2||\xi_1+\xi_2| \\ & + \frac{|\xi_1||\xi_2|}{|\xi_1+\xi_2|} \left| \frac{\eta_1}{\xi_1} - \frac{\eta_2}{\xi_2} \right|^2 \end{split}\] $\Lambda$ is a function of $\xi_i$ and $\eta_i$. We decompose $u,v$ into dyadic pieces according to the size of $\xi$'s and, by an abuse of notation we choose a version which is constant on the sets of consideration. We decompose $u_i = u_i^{>{\Lambda/3}} + u_i^{\le \Lambda/3} $. Then the trilinear integral vanishes unless at least one term has high modulation since $ \int u_1^{\le \Lambda/3} u_2^{\le \Lambda/3} u_3^{\le \Lambda/3} dx dy dt = 0$. The Strichartz estimates give for $\lambda\ge 1$ \begin{equation} \label{eq:3.6} \begin{split} \int u_\lambda v_\lambda w_1 dx\, dy\, dt \le & \Vert w_1 \Vert_{L^2} \Vert u_\lambda v_\lambda \Vert_{L^2} \\ \le & C \Vert w_1 \Vert_{L^2} \left(\lambda^{\frac12} \Vert u_\lambda \Vert_{V^2_{KP}}\right) \left( \lambda^{\frac12} \Vert v_\lambda \Vert_{V^2_{KP}} \right) \end{split} \end{equation} which yields by scaling and orthogonality of the Paley-Littlewood pieces \[ \left\Vert \partial_x \int_0^t S(t-s) u_\lambda v_\lambda ds \right\Vert_{\dot X^{0,1}} \le C \left(\lambda^{\frac12} \Vert u_\lambda \Vert_{V^2_{KP}}\right) \left( \lambda^{\frac12} \Vert v_\lambda \Vert_{V^2_{KP}} \right). \] By the bilinear estimate of Theorem \ref{th:2.1} - see also \eqref{eq:2.10} \begin{equation} \label{eq:3.7} \begin{split} \left| \int u^{>\lambda^2/3}_\lambda v_\lambda w_1 dx\, dy\, dt \right|\le & \Vert u^{>\lambda^2/3}_\lambda \Vert_{L^2} \Vert v_\lambda w_1 \Vert_{L^2} \\ \le & c \lambda^{-1} \Vert u^{>\lambda^3/3}_\lambda \Vert_{V^2_{KP}}\Vert v_\lambda \Vert_{U^2_{KP}} \Vert w_1 \Vert_{U^2_{KP}} \end{split} \end{equation} and hence \[ \begin{split} \left\Vert \partial_x \int_0^t S(t-s) (u_\lambda v_\lambda)_1 ds \right\Vert_{\dot X^{0,1}} +\left\Vert \partial_x \int_0^t S(t-s) (u_\lambda v_\lambda)_1 ds \right\Vert_{V^2_{KP}} \hspace{-3cm}& \\ \le & c \lambda^{\frac12} \Vert u_\lambda \Vert_{U^2_{KP}} \lambda^{\frac12} \Vert v_\lambda \Vert_{U^2_{KP}}. 
\end{split} \] For $\mu \le 1$ we estimate using the Strichartz estimate \eqref{eq:2.8} for $p=q=4$ and the embedding $V^2_{KP} \subset U^4_{KP}$ \begin{equation} \label{eq:3.8} \begin{split} \int u^{>\mu/3}_\mu v_1 w_1 dx\, dy\, dt \le & \Vert u^{>\mu/3}_\mu \Vert_{L^2} \Vert v_1 w_1 \Vert_{L^2} \\ \le & c(\mu^{-1} \Vert u_\mu \Vert_{\dot{X}^{0,1}}) \Vert v_1 \Vert_{V^2_{KP}} \Vert w_1 \Vert_{V^2_{KP}} \end{split} \end{equation} and the bilinear estimate \eqref{eq:2.10} to arrive at \begin{equation} \label{eq:3.9} \begin{split} \int u_\mu v^{>\mu/3}_1 w_1 dx\, dy\, dt \le & c \mu \Vert u_\mu \Vert_{U^2_{KP}} \mu^{-\frac12} \Vert v_1 \Vert_{V^2_{KP}} \Vert w_1 \Vert_{U^2_{KP}} \\ = & c (\mu^{1/2} \Vert u_\mu \Vert_{U^2_{KP}})\Vert v_1 \Vert_{V^2_{KP}} \Vert w_1 \Vert_{U^2_{KP}}, \end{split} \end{equation} thus \[ \begin{split} \left\Vert \partial_x \int_0^t S(t-s) (u_\mu v_1)_1 ds \right\Vert_{\dot X^{0,1}} +\left\Vert \partial_x \int_0^t S(t-s) (u_\mu v_1)_1 ds \right\Vert_{V^2_{KP}} \hspace{-6cm}& \\ \le & c\left( \mu^{\frac12} \Vert u_\mu \Vert_{U^2_{KP}} + \mu^{-1} \Vert u_\mu \Vert_{\dot X^{0,1}}\right) \Vert v_1 \Vert_{U^2_{KP}}. \end{split} \] To achieve \eqref{eq:3.1} and \eqref{eq:3.2}, there are three issues to resolve: \begin{enumerate} \item The summability with respect to $\lambda$ and $\mu$ requires improved estimates to obtain \eqref{eq:3.1} and \eqref{eq:3.2}. \item In \eqref{eq:3.7} and \eqref{eq:3.9}, we have to replace $U^2_{KP}$ by $V^2_{KP}$. \item The function $u=S(t) u_0$ for $t>0$ and $u=0$ for $t <0$ is not in $\dot{X}^{0,1}$. We need a variant of the estimates for solutions to the homogeneous initial value problem. \end{enumerate} Here as always we oversimplify things a bit: We have to consider more general frequency combinations, and we only know that the two highest frequencies have to be of comparable size, otherwise the trilinear integral vanishes, which as always we ignore since we want to keep the formulas simpler, and there is no new difficulty connected with that. \subsection{$l^p$ summation and bilinear estimate}\label{lp-summation} We begin to explain the modifications for the proof. We use $l^ql^p(V^2_{KP})$ with $1\leq q\leq\infty,1<p<2$ and replace $\dot{X}^{0,1}$ by $\dot{X}^{0,b}$ with some $b \in (\frac56,1)$ as discussed in the introduction. \begin{definition} Let $X$ be the space of all distributions for which \[ \Vert u \Vert_{X} := \left\Vert\lambda^{\frac12} \Vert u_\lambda \Vert_{l^p V^2_{KP}} + \lambda^{2-3b} \Vert u_\lambda \Vert_{ \dot X^{0,b}}\right\Vert_{l^q_\lambda} < \infty. \] \end{definition} We next formulate a bilinear estimate. \begin{proposition}[Bilinear estimates for the quadratic term] \label{pr:finbin} For $u,v\in X$, we have \begin{equation} \label{finbin} \left\Vert \int_{-\infty}^t S(t-s) \partial_x(uv) ds \right\Vert_{X} \le c \Vert u \Vert_X \Vert v \Vert_X. \end{equation} \end{proposition} In our proof we obtain a slightly stronger bilinear estimate. We will replace the $U^2_{KP}$ by $V^2_{KP}$ at several places. \begin{proof} Using a Littlewood-Paley decomposition, a duality argument and an expansion of \eqref{finbin} the estimate follows from the next four inequalities. 
The high $\times $ high to low type estimates are \begin{equation} \label{first} \begin{split} \int u_\lambda v_\lambda w_\mu dx\, dy\, dt & \le C \mu^{3b-3}\Big(\frac{\mu}{\lambda}\Big)^{2-2b -\varepsilon} \\ & \hspace{-1cm} \times \Big(\lambda^{\frac12} \Vert u_\lambda \Vert_{l^pV^2_{KP}} \Big) \Big( \lambda^{\frac12} \Vert v_\lambda \Vert_{l^pV^2_{KP}} \Big) \Vert w_\mu \Vert_{\dot X^{0,1-b}} \end{split} \end{equation} \begin{equation} \label{second} \begin{split} \int u_\lambda v_\lambda w_\mu dx\, dy\, dt & \le C \mu^{-\frac32}\Big(\frac{\mu}{\lambda}\Big)^{2-\frac2p-\varepsilon}\\ & \hspace{-1cm} \times\Big(\lambda^{\frac12} \Vert u_\lambda \Vert_{l^pV^2_{KP}} \Big) \Big( \lambda^{\frac12} \Vert v_\lambda \Vert_{l^pV^2_{KP}} \Big) \Vert w_\mu \Vert_{l^{p'}V^2_{KP}}. \end{split} \end{equation} which we complement by low $ \times $ high to high estnates \begin{equation} \label{third} \begin{split} \int u_\mu v_\lambda w_\lambda dx\, dy\, dt \le & c \lambda^{-\frac32}\Big(\frac{\mu}{\lambda}\Big)^{\min\{ \frac2p-1-\varepsilon, 2b-\frac53 \} } \\ &\hspace{-3cm} \times \left( \mu^{1/2} \Vert u_\mu \Vert_{l^pV^2_{KP}}+ \mu^{2-3b} \Vert u_\mu \Vert_{\dot{X}^{0,b}} \right) \Big(\lambda^{\frac12}\Vert v_\lambda \Vert_{l^pV^2_{KP}}\Big) \Vert w_\lambda \Vert_{l^{p'}V^2_{KP}} \end{split} \end{equation} \begin{equation} \label{fourth} \begin{split} \int u_\mu v_\lambda w_\lambda dx\, dy\, dt & \le c \lambda^{3b-3}\Big(\frac{\mu}{\lambda}\Big)^{\min\{b-\frac12-\varepsilon,3b-\frac{5}{2}\}} \\ & \hspace{-3cm}\times \left( \mu^{1/2} \Vert u_\mu \Vert_{l^pV^2_{KP}}+ \mu^{2-3b} \Vert u_\mu \Vert_{\dot{X}^{0,b}} \right) \Big(\lambda^{\frac12}\Vert v_\lambda \Vert_{l^pV^2_{KP}}\Big) \Vert w_\lambda \Vert_{\dot X^{0,1-b}} \end{split} \end{equation} for $\mu \le \lambda$. Proposition \ref{pr:finbin} and more precisely \eqref{finbin} follows by summing up the $\mu$ and $\lambda$, which is trivial. More precisely we would have to consider frequencies $\lambda_1 $ and $\lambda_2$ for the first estimates, but, since on the Fourier side the Fourier variables $\xi_1$ and $\xi_2$ have to add up to something of size $\sim \mu$ which we assume always less then $\lambda$, it suffices to consider neighboring dyadic intervals resp $\lambda_1 \sim \lambda_2$. To simplify the notation we restrict to $\lambda_1 = \lambda_2 = \lambda$ and we deal similarly with the other inequalities. We turn to the proof of the four main estimates \eqref{first}-\eqref{fourth}. For the [(high,high)$\to $ low] type estimates \eqref{first} and \eqref{second}, by rescaling, we assume that $\mu=1$. We decompose \[ \int u_\lambda v_\lambda w_1 dx\, dy\, dt = \sum_{L \in 2^{\mathbb{N}}} \int T_L(u_\lambda v_\lambda) w_1 dx \, dy \,dt \] where the sum runs over $L=2^{\mathbb Z_+}$. At least one of the terms has to have high modulation, i.e. modulation at least $\ge L^2 \lambda^2/3$. For simplicity we will ignore the denominator $3$. Now, if $L> 1$ - the difference for $L=1$ is only in notation - \begin{equation}\label{w-high} \begin{split} \left|\int T_L(u_\lambda, v_\lambda) w_1^{\ge L^2 \lambda^2} dx dy dt \right| \le \hspace{-3cm} &\hspace{3cm} \Vert T_L (u_\lambda, v_\lambda)_1 \Vert_{l^2L^2} \Vert w_1^{\ge L^2 \lambda^2} \Vert_{l^2 L^2} \\ \le & C \lambda \Vert u_\lambda \Vert_{l^2V^2_{KP}} \Vert v_\lambda \Vert_{l^2V^2_{KP}} (L\lambda)^{-2(1-b)} \Vert w_1 \Vert_{l^2\dot{X}^{0,1-b}}. 
\end{split} \end{equation} Since for $1<p<2$, \[\Vert w_1\Vert_{l^2 \dot{X}^{0,1-b}}\approx \Vert w_1 \Vert_{\dot{X}^{0,1-b}}, \Vert u_\lambda\Vert_{l^2V^2_{KP}}\le \Vert u_\lambda\Vert_{l^p V^2_{KP}}\] we obtain \[ \begin{split} \sum_{L} \left|\int T_L(u_\lambda v_\lambda) w_1^{\ge L^2 \lambda^2} dx dy dt \right| & \\ & \hspace{-4cm} \le c \lambda^{2b-2} \left(\lambda^{1/2} \Vert u_\lambda \Vert_{l^pV^2_{KP}} \right) \left( \lambda^{1/2} \Vert v_\lambda \Vert_{l^pV^2_{KP}} \right) \Vert w_1 \Vert_{\dot{X}^{0,1-b}}. \end{split} \] \eqref{w-high} can also be bounded, for $1<p<2$, by \[ \lambda (L\lambda)^{\frac2p-1} \Vert u_\lambda \Vert_{l^pV^2_{KP}} \Vert v_\lambda \Vert_{l^pV^2_{KP}} (L\lambda)^{-1} \Vert w_1 \Vert_{l^{p'}V^2_{KP}}. \] Here we used H\"older's inequality and then the high modulation estimate for $w$ and \eqref{eq:3.15} with $r=q=p$ for the product. We complete the proof of \eqref{first} for the case the $w$ has high modulation by \[ \begin{split} \sum_{L} \left|\int T_L(u_\lambda v_\lambda) w_1^{\ge L^2 \lambda^2} dx dy dt \right| & \\ & \hspace{-4cm} \le c \lambda^{\frac2p-2} \left(\lambda^{1/2} \Vert u_\lambda \Vert_{l^pV^2_{KP}} \right) \left( \lambda^{1/2} \Vert v_\lambda \Vert_{l^pV^2_{KP}} \right) \Vert w_1 \Vert_{l^{p'}V^2_{KP}}. \end{split} \] Next we use the symmetry property of Lemma \ref{TL} to deal with the case that $v$ has high modulation: \begin{equation}\label{w-low}\begin{split} \left|\int T_L(u_\lambda, v_\lambda^{\ge L^2\lambda^2})_1 w_1^{< L^2\lambda^2} dx dy dt \right| = \hspace{-3cm} & \hspace{3cm} \left|\int v_\lambda^{\ge L^2\lambda^2} T_L (u_\lambda, w_1^{L^2\lambda^2})_\lambda dx dy dt \right| \\ \le & \Vert T_L (u_\lambda, w_1^{<L^2\lambda^2})_\lambda \Vert_{l^2L^2} \Vert v_\lambda^{\ge L^2 \lambda^2} \Vert_{l^{2} L^2} \\ \le & C \lambda^{\frac2p-1+\varepsilon} L^\varepsilon \Vert u_\lambda \Vert_{l^pV^2_{KP}} \Vert w_1 \Vert_{l^{p'}V^2_{KP}} (L\lambda)^{-1} \Vert v_\lambda \Vert_{l^{p}V^2_{KP}}. \end{split} \end{equation} with the obvious modification if $L=1$. Here we used the high modulation estimate for $v_\lambda$ and \eqref{eq:3.10} with $r=2$, and $q=p'$. The summation with respect to $L$ gives \[ \begin{split} \sum_{L} \left|\int T_L(u_\lambda v_\lambda^{\ge L^2\lambda^2}) w_1^{< L^2\lambda^2} dx dy dt \right| & \\ & \hspace{-4cm} \le C \lambda^{\frac2p-3+\varepsilon} \Big( \lambda^{1/2} \Vert u_\lambda \Vert_{l^pV^2_{KP}}\Big) \Vert w_1 \Vert_{l^{p'}V^2_{KP}} \Big( \lambda^{1/2} \Vert v_\lambda \Vert_{l^pV^2_{KP}}\Big). \end{split} \] In the same way, we can bound \eqref{w-low} by \[\lambda^{\varepsilon} \Vert u_\lambda \Vert_{l^2V^2_{KP}} \Vert w_1^{<L^2\lambda^2} \Vert_{l^2V^2_{KP}} (L\lambda)^{-1} \Vert v_\lambda \Vert_{l^2V^2_{KP}}.\] Notice that\[ \Vert f^{\le L^2\lambda^2} \Vert_{V^2_{KP}} \lesssim L^{2b-1}\lambda^{2b-1} \Vert f \Vert_{\dot{X}^{0,1-b}}.\] \eqref{first} and \eqref{second} follows by a trivial summation over $L$. Now we turn to \eqref{third} and \eqref{fourth} and rescale to $\lambda=1$. 
We decompose the factors in the same fashion as above \[ \int u_\mu v_1 w_1 dx dy dt = \sum_L \int T_L(u_\mu v_1) w_1 dx dy dt .\] As above, using \eqref{eq:3.10} with $r=q=p=2$ \[\begin{split} \left| \int T_L (u_\mu v_1) w_1^{\ge \mu L^2} dx dy dt \right| \le & \Vert T_L(u_\mu v_1)_1 \Vert_{l^2 L^2} \Vert w_1^{\ge \mu L^2} \Vert_{l^2L^2} \\ & \hspace{-4.5cm} \le C \mu^{\frac12} (L/\mu)^{\varepsilon} (\mu L^2)^{b-1} (\mu^{1/2} \Vert u_\mu \Vert_{l^2V^2_{KP}}) \Vert v_1 \Vert_{l^2V^2_{KP}} \Vert w_1 \Vert_{l^2 \dot X^{0,1-b}} \end{split} \] resp. taking $r=p=q<2$, \[\begin{split} \left| \int T_L (u_\mu v_1) w_1^{\ge \mu L^2} dx dy dt \right| \le & \Vert T_L(u_\mu v_1)_1 \Vert_{l^p L^2} \Vert w_1^{\ge \mu L^2} \Vert_{l^{p'}L^2} \\ & \hspace{-4cm} \le C (L/\mu)^{1-\frac2p+\varepsilon} L^{-1} \mu^{1/2} \Vert u_\mu \Vert_{l^pV^2_{KP}} \Vert v_1 \Vert_{l^pV^2_{KP}} \Vert w_1 \Vert_{l^{p'} V^2_{KP}}. \end{split} \] The summation with respect to $L$ gives \[ \begin{split} \sum_L \left| \int T_L (u_\mu v_1) w_1^{\ge \mu L^2} dx dy dt \right| & \le C \mu^{b-\frac12-\varepsilon}\\ & \hspace{-3cm} \times\left( \mu^{1/2} \Vert u_\mu \Vert_{l^pV^2_{KP}}\right) \Vert v_1 \Vert_{l^p V^2_{KP}} \Vert w_1 \Vert_{\dot X^{0,1-b}}. \end{split} \] resp. \[ \begin{split} \sum_L \left| \int T_L (u_\mu v_1) w_1^{\ge \mu L^2} dx dy dt \right| & \le C \mu^{\frac2p-1-\varepsilon}\\ & \hspace{-3cm} \times\left( \mu^{1/2} \Vert u_\mu \Vert_{l^pV^2_{KP}}\right) \Vert v_1 \Vert_{l^p V^2_{KP}} \Vert w_1 \Vert_{l^{p'}V^2_{KP}}. \end{split} \] The same computation gives \[ \begin{split} \sum_L \left| \int T_L (u_\mu w_1) v_1^{\ge \mu L^2} dx dy dt \right| & \le C \mu^{\frac2p-\frac12-\varepsilon}\\ & \hspace{-4cm}\times\left( \mu^{1/2} \Vert u_\mu \Vert_{l^pV^2_{KP}}\right) \Vert v_1 \Vert_{l^pV^2_{KP}}\Vert w_1 \Vert_{l^{p'}V^2_{KP}} \end{split} \] resp. \[ \begin{split} \sum_L \left| \int T_L (u_\mu w_1^{\leq \mu L^2}) v_1^{\ge \mu L^2} dx dy dt \right| & \le C \mu^{b-\frac12-\varepsilon}\\ & \hspace{-4cm}\times\left( \mu^{1/2} \Vert u_\mu \Vert_{l^2V^2_{KP}}\right) \Vert v_1 \Vert_{l^2V^2_{KP}} \Vert w_1\Vert_{\dot X^{0,1-b}}. \end{split} \] Here we used \[ \Vert w^{<\mu L^2}_1 \Vert_{V^2_{KP}} \le C \mu^{b-\frac12}L^{2b-1} \Vert w_1 \Vert_{\dot X^{0,1-b}}. \] The last term with the high modulation on $u_\mu$ is different, and it is the most interesting: \[ \left| \int u_\mu^{\ge \mu L^2} T_L ( v_1, w_1) dx dy dt \right| \le \Vert T_L( v_1, w_1)_\mu \Vert_{l^{2} L^{2}_tL^{3/2}_{xy}} \Vert u_\mu^{\ge \mu L^2} \Vert_{l^{2}L^2_t L^3_{xy}} . \] We continue with the endpoint Strichartz estimate \[ \Vert u_\mu^{\ge \mu L^2} \Vert_{L^2_t L^3} \le \Vert u_\mu^{\ge \mu L^2} \Vert_{L^2}^{1/2} \Vert u_\mu^{\ge \mu L^2} \Vert_{L^2_tL^6}^{1/2} \le C (\mu L^2)^{\frac14-b} \mu^{\frac1{12}} \Vert u_\mu \Vert_{\dot X^{0,b}} \] for each part localized in $\eta$ and we achieve \[\begin{split} \left| \int u_\mu^{\ge \mu L^2} T_L ( v_1 w_1) dx dy dt \right| & \\ &\hspace{-4cm} \le C \mu^{2b -\frac53 } L^{\frac12-2b+\frac2p-1} \Vert v_1 \Vert_{l^p L^4_tL^3} \Vert w_1 \Vert_{l^{p'} L^4_tL^3} \mu^{2-3b} \Vert u_\mu \Vert_{\dot X^{0,b}}. \end{split} \] By Proposition \ref{pro:3.1}, we drop $T_L$ here. The exponent $(4,\ 3)$ is a Strichartz pair. The summation with respect to $L$ is trivial. 
It gives \[ \begin{split} \sum_L \left| \int u_\mu^{\ge \mu L^2} T_L ( v_1, w_1) dx dy dt \right| & \\ & \hspace{-3cm} \le c \mu^{2b-\frac5{3}} \left( \mu^{2-3b} \Vert u_\mu \Vert_{l^p\dot X^{0,b}}\right) \Vert v_1 \Vert_{l^p V^2_{KP}} \Vert w_1 \Vert_{l^{p'} V^2_{KP}}. \end{split} \] resp. \[ \begin{split} \sum_L \left| \int u_\mu^{\ge \mu L^2} T_L ( v_1, w_1^{\leq \mu L^2}) dx dy dt \right| & \\ & \hspace{-3cm} \le c \mu^{3b-\frac{5}{2}} \left( \mu^{2-3b} \Vert u_\mu \Vert_{\dot X^{0,b}}\right) \Vert v_1 \Vert_{V^2_{KP}} \Vert w_1 \Vert_{ \dot{X}^{0,1-b}}. \end{split} \] The summation with respect to $\mu$ requires $b > \frac56$, since precisely then the exponents $2b-\frac53$ and $3b-\frac52$ are positive, and we arrive at \eqref{third} and \eqref{fourth}. \end{proof} \subsection{The initial data, the proof of wellposedness}\label{initial data} It remains to estimate $S(t) u_0$ in terms of the initial data. Let $$\tilde{u}(t)=\chi_{[0,\infty)}S(t)u_0.$$ As we pointed out in issue iii), it is not in $\dot{X}^{0,b}$ for any $1/2<b\leq1$, thus it is not in $X$ unless it is trivial. Let \[ \Vert u \Vert_Y= \left\Vert\lambda^{1/2} \Vert u_\lambda \Vert_{l^p U^1_{KP}} \right\Vert_{l^q_{\lambda}}\] to shorten the notation. Then by construction \[ \Vert \tilde u \Vert_{Y} \le \Vert u_0 \Vert_{ l^ql^pL^2}. \] The two estimates of the following proposition will allow us to complete the proof. \begin{proposition} \label{initial} The following estimates hold. \begin{equation} \label{eq:3.23} \left\Vert \int_0^t S(t-s)\partial_x (uv) \right\Vert_X \le c \Vert u \Vert_Y \Vert v \Vert_{Y}, \end{equation} \begin{equation} \label{eq:3.24} \left\Vert \int_0^t S(t-s)\partial_x (uv) \right\Vert_X \le c \Vert u \Vert_{X} \Vert v \Vert_{Y} . \end{equation} \end{proposition} With these estimates at hand we complete the fixed point argument. By Duhamel's formula, to solve \eqref{eq:1.1} on $[0,\infty) $ is equivalent to solving \[w=\tilde u+\int_0^t S(t-s)\partial_x(w^2)(s)ds.\] We rewrite this equation in terms of the difference $u=w- \tilde u$ and define the map \begin{equation}\label{eq:3.26} \begin{split} \Phi(u):= & \int_0^t S(t-s)\partial_x((u+\tilde{u})^2)(s)ds \\ = & \int_0^t S(t-s) \partial_x\tilde{u}^2 ds + \int_{0}^t S(t-s) \partial_x (2 \tilde u u +u^2)ds \end{split} \end{equation} where we set $u(s)=0 $ for $s<0$. Set $r:=\min(\frac{1}{4C},3\varepsilon)$. Here $C$ is the largest constant among the constants from \eqref{finbin}, \eqref{eq:3.23} and \eqref{eq:3.24}. We define the closed ball of radius $r$ in $X$ \[ B_r:=\{u\in X ; \|u\|_{X}\leq r\}.\] We seek a unique fixed point of $\Phi$ in $B_r$. By the definition of $Y$, \[ \|\tilde u \|_{Y}\leq C \|u_0 \|_{l^ql^p L^2}\lesssim \varepsilon.\] By \eqref{finbin}, \eqref{eq:3.23} and \eqref{eq:3.24}, we have \begin{equation}\label{eq:3.27}\|\Phi(u)\|_{X}\lesssim \|u\|_{X}^2+2\|\tilde u \|_{Y}\|u\|_{X}+\|\tilde u \|_{Y}^2\leq r \end{equation} and \[ \begin{split} \|\Phi(u)-\Phi(v)\|_{X} \le & C\|u-v\|_{X}(\|u\|_{X}+\|v\|_{X} +\|\tilde{u}\|_{Y}) \\ \leq & \frac12\|u-v\|_{X}. \end{split}\] We apply the contraction mapping theorem to obtain existence of a unique fixed point. The linearization at the fixed point is invertible - it is a contraction by construction - and the map $\Phi$ is analytic. Hence the map from the initial data to the fixed point is analytic. The estimate $$\|u\|_X\leq C\|u_0\|^2_{l^ql^pL^2}$$ follows from \eqref{eq:3.27}. This completes the proof, up to proving Proposition \ref{initial}. 
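We remark, purely as an illustration of the role of the constant $C$ in the choice of $r$ (this adds nothing to the argument above), how the final inequality in \eqref{eq:3.27} is obtained: assuming, as we may for $\|u_0\|_{l^ql^pL^2}$ sufficiently small, that $\|\tilde u\|_{Y}\le r$, we have for $u\in B_r$ \[ \|\Phi(u)\|_{X}\le C\big(\|u\|_{X}+\|\tilde u\|_{Y}\big)^2\le C(2r)^2=4Cr^2\le r, \] where the last step is precisely the requirement $r\le \tfrac{1}{4C}$ built into the definition of $r$. 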
\subsection{The proof of Proposition \ref{initial}} \label{proof for initial data} By the same strategy as above we continue to assume $\mu \le 1 \le \lambda$. The estimates \eqref{first} and \eqref{second} are in terms of $l^pV^2_{KP}$ at frequency $\lambda$. It is a consequence of Minkowski's inequality that \[ \Vert u_\lambda \Vert_{l^pV^2_{KP}} \lesssim \Vert u_\lambda \Vert_{l^pU^2_{KP}}\lesssim \Vert u_\lambda \Vert_{l^p U^1_{KP}} .\] We can directly replace $l^pV^2_{KP}$ by $l^pU^1_{KP}$ in the estimates \eqref{first} and \eqref{second}. This completes the argument for the [(high,high)$\to $ low] case, for both estimates \eqref{eq:3.23} and \eqref{eq:3.24}. The next lemma provides the remaining [(low,high) $\to $ high] estimates. \begin{lemma}\label{U1U1} The following estimates hold, for $\mu\le 1$, \begin{equation} \label{v2kpii2} \begin{split} \left\| \int_0^t S(t-s) (u_\mu v_1 )_1ds \right\|_{l^p(U^2_{KP})} & \\ & \hspace{-4cm} \le C \mu^{\min(\frac{2}{p}-1-\varepsilon,b-\frac12)}\Big( \mu^{1/2} \Vert u_\mu \Vert_{l^pU^1_{KP}}\Big) \Vert v_1 \Vert_{l^pU^1_{KP}}, \end{split} \end{equation} \begin{equation} \label{itfirstl22} \left\|\int_0^t S(t-s)(u_\mu v_1 )_1 \right\|_{ \dot{X}^{0,b}} \le C \mu^{\min(\frac2p-1-\varepsilon,b-\frac12)} \Big( \mu^{\frac12} \Vert u_\mu \Vert_{l^pU^1_{KP}}\Big) \Vert v_1 \Vert_{l^pV^2_{KP}}. \end{equation} \begin{equation} \label{itfirstl23} \left\|\int_0^t S(t-s)(u_\mu v_1 )_1 \right\|_{ \dot{X}^{0,b}} \le C \mu^{\min(\frac2p-1-\varepsilon,b-\frac12)} \Big( \mu^{\frac12} \Vert u_\mu \Vert_{l^pV^2_{KP}}\Big) \Vert v_1 \Vert_{l^pU^1_{KP}}. \end{equation} \end{lemma} Together with the versions of \eqref{first} and \eqref{second} above these imply \eqref{eq:3.24} and then \eqref{eq:3.23} in Proposition \ref{initial} by an easy summation. \begin{proof} Again we use duality and decompose \[ \left|\int u_\mu v_1 w_1 dx\, dy\, dt\right| \le \sum_L \left|\int u_\mu T_L( v_1, w_1) dx\, dy\, dt\right|. \] At least one term has modulation $\geq \mu L^2$. Notice that \[\Vert u_\lambda\Vert_{l^p(V^2_{KP})}\lesssim \Vert u_\lambda\Vert_{l^p U^1_{KP}};\] hence the estimates in \eqref{third} and \eqref{fourth} apply, except in the case where $u_{\mu}$ has high modulation, \[ \int u_\mu^{>\mu L^2} T_L( v_1 w_1) dx\, dy\, dt. \] Let $L\ge 1$ and consider \[ \int u^{>\mu L^2}_{\Gamma_{\mu,k,L/\mu}} T_L (v_{\Gamma_{1,k',L}} ,w_{\Gamma_{1,k,L}}) dx\, dy\, dt \] with $16 \le |k-k'|\le 1000$ if $L>1$, resp.\ $|k-k'|\le 200$ if $L=1$. We decompose $u_{\Gamma_{\mu,k,\frac{L}{\mu}}}$ further: \[ \begin{split} \sum_{16\leq|k-k'|\leq100} \sum_{|k-l|\lesssim\frac{L}{\mu}}\left|\int u^{>\mu L^2}_{\Gamma_{\mu, l}} T_L (v_{\Gamma_{1,k,L}} , w_{\Gamma_{1,k,L}}) dx\, dy\, dt \right| \hspace{-7.5cm} & \\ \lesssim &\sum_{16\leq|k-k'|\leq100}\sum_{|k-l|\lesssim\frac{L}{\mu}} \Vert u^{>\mu L^2}_{\Gamma_{\mu, l}} \Vert_{L^1_tL^\infty} \Vert T_L (v_{\Gamma_{1,k',L}} , w_{\Gamma_{1,k,L}}) \Vert_{L^\infty_t L^1} \\ \lesssim & \sup_{k} \mu^{\frac52}\Big(\frac{L}{\mu}\Big)^{\frac{2}{p^\prime}} \Vert u_{\Gamma_{\mu,k,\frac{L}{\mu}}} \Vert_{l^p(L^1_tL^2)} \Vert v_1 \Vert_{l^p L^\infty_tL^2} L^{\frac{2}{p}-1}\Vert w_1 \Vert_{l^{p'}L^\infty_tL^2} \\ \lesssim & \mu^{\frac2p-1}L^{-1} \Big(\mu^{\frac12}\Vert u_\mu \Vert_{l^p U^1_{KP}}\Big) \Vert v_1 \Vert_{l^p V^2_{KP}} \Vert w_1 \Vert_{l^{p'}V^2_{KP}}. \end{split} \] Here we used that the size of the set $\Gamma_{\mu, l}$ is $\mu^5$. 
We estimate similarly to above \[ \begin{split} \sum_{16\leq|k-k'|\leq100}\left|\int u^{>\mu L^2}_{\Gamma_{\mu, k,L/ \mu}} T_L (v_{\Gamma_{1,k,L}} , w_{\Gamma_{1,k,L}}) dx\, dy\, dt \right| \hspace{-5.5cm} & \\ \lesssim & \mu^{b}L^{2b-2} \Vert u_\mu \Vert_{l^2U^1_{KP}} \Vert v_1 \Vert_{l^2 V^2_{KP}} \Vert w_1 \Vert_{\dot{X}^{0,1-b}}. \end{split} \] Here we applied Sobolev's resp.\ Bernstein's inequality on sets of Fourier size $\mu^3L^2$ and the high modulation factor $\mu L^2$. The summation with respect to $L$ is trivial since the exponent is negative. Finally \eqref{itfirstl23} is a direct consequence of \eqref{fourth}. \end{proof} \section{Ill-posedness and Function spaces}\label{ill} \subsection{Ill-posedness in $l^ql^pL^2$ for $p>2$} We prove ill-posedness (Theorem \ref{ill}) by contradiction. By scaling it suffices to consider $T=1$. Suppose that the flow map $u_0 \to u(1)$ defines a map from $l^ql^pL^2$ to itself which is continuously differentiable near $0$, and twice differentiable at $0$, for some $p>2$. For simplicity we choose $q=\infty$, but the proof works for all $q\in [1,\infty] $. Consider the Cauchy problem \begin{align}\label{eq:6.1} \left\{ \begin{aligned} &\partial_x\left(\partial_t u+\partial_x^3 u+\partial_x(u^2)\right)+\triangle_{y }u=0 \\ &u(0,x,y )=\gamma \phi(x,y )\ \ \gamma\in\mathbb R. \end{aligned} \right. \end{align} where $\phi\in l^\infty l^pL^2$ and $1<p<\infty$. Suppose that $u(\gamma,t,x,y )$ solves (\ref{eq:6.1}). By Duhamel's formula, we have $$u(\gamma,t,x,y )=\gamma S(t)\phi(x,y )+\int_0^t S(t-s)\partial_x(u(\gamma,x,y )^2)(s)ds. $$ Since the flow map is (twice) differentiable at $u_0=0$, $$\frac{\partial u}{\partial \gamma}(0,t,x,y )=S(t)\phi(x,y ):=u_1(t,x,y ),$$ $$\frac{\partial^2u}{\partial\gamma^2}(0,t,x,y )=-2\int_0^t S(t-s)\partial_x(u_1^2(s))ds:=u_2(t,x,y ).$$ Since we assume the flow map to be twice differentiable, \begin{equation}\label{eq:6.2} \|u_{2}(1,\cdot)\|_{l^\infty l^p L^2}\lesssim \|\phi\|^2_{l^\infty l^p L^2}. \end{equation} We construct a sequence of initial data for $u_1$ of norm $1$ so that the norm of $u_2(1)$ tends to infinity. This yields the desired contradiction. We define the initial data $\phi$ by its Fourier transform \[\begin{split}\hat{\phi}(\xi,\eta)=&\frac{1}{\mu^3\big(\frac{\lambda}{\mu}\big)^\frac2p}\chi_{[\frac{\mu}{2},\mu]}(\xi)\chi_{[\frac{\lambda\mu}{2},2\lambda\mu]^2}(\eta)+\frac{1}{\mu^{\frac32}\lambda^\frac32}\chi_{[\lambda+\frac{\mu}{2},\lambda+\mu]}(\xi)\chi_{[\frac{\lambda\mu}{2},2\lambda\mu]^2}(\eta)\\ :=&\hat{\phi}_1+\hat{\phi}_2.\end{split}\] Here dyadic numbers $\mu\ll1\ll \lambda$ will be chosen later. It is easy to check that $$\|\phi\|_{l^\infty l^p L^2}\approx \|\phi_1\|_{l^\infty l^p L^2}\approx\|\phi_2\|_{l^\infty l^p L^2}\approx 1.$$ Moreover $$u_1=S(t)\phi_1+S(t)\phi_2,$$ $$u_1^2=(S(t)\phi_1)^2+(S(t)\phi_2)^2+2(S(t)\phi_1S(t)\phi_2):=f_1+f_2+f_3.$$ The Fourier transforms of the three summands are supported on pairwise disjoint sets, and hence they are orthogonal. 
We then decompose $u_2$ into three orthogonal parts as $$u_2(1)=\int_0^1 S(1-s)(f_1+f_2+f_3)(s)ds:=F_1+F_2+F_3.$$ By (\ref{eq:6.2}), we have \begin{equation}\label{eq:6.3}\|F_3(1)\|_{l^\infty l^p L^2}\lesssim \|u_2(1,\cdot)\|_{l^\infty l^p L^2}\lesssim1.\end{equation} By Lemma 4 on page 376 of \cite{MoSaTz}, we have \[ \hat{F}_3(1,\xi,\eta)=2\frac{\xi e^{i\omega(\xi,\eta)}}{\mu^3\big(\frac{\lambda}{\mu}\big)^{\frac2p}\mu^\frac{3}{2}\lambda^\frac32} \int_A \frac{e^{iR(\xi,\xi_1,\eta,\eta_1)}-1}{R(\xi,\xi_1,\eta,\eta_1)}d\xi_1 d\eta_1. \] Here $R(\xi,\xi_1,\eta,\eta_1)$ denotes the resonance function \begin{equation} R(\xi,\xi_1,\eta,\eta_1)=-3\xi\xi_1(\xi-\xi_1)-\frac{\xi\xi_1}{\xi-\xi_1}\Big|\frac{\eta}{\xi} -\frac{\eta_1}{\xi_1}\Big|^2. \end{equation} In the set \[ \begin{split} A= & \Big\{ \xi_1,\eta_1: \xi_1\in[\frac{\mu} {2},\mu],\eta_1\in[\frac{\lambda\mu}{2},2\lambda\mu]^2,\\ & \xi-\xi_1\in[\lambda+\frac{\mu}{2},\lambda+\mu],\eta-\eta_1\in[\frac{\lambda\mu}{2},2\lambda\mu]^2\Big\} \end{split} \] the resonance function is bounded from below: $$|R(\xi,\xi_1,\eta,\eta_1)|\sim \lambda^2\mu.$$ We may choose $\mu$ and $\lambda$ such that $\mu\lambda^2=O(1)$, and obtain $$\frac{e^{iR(\xi,\xi_1,\eta,\eta_1)}-1}{R(\xi,\xi_1,\eta,\eta_1)}=1+O(1).$$ It follows that $$|\hat{F}_3(1,\xi,\eta)|\geq\frac{\lambda\lambda^2\mu^3}{\mu^3\big(\frac{\lambda}{\mu}\big)^{\frac2p}\mu^{\frac32}\lambda^{\frac32}}\chi_{[\lambda+\mu,\lambda+\frac{3\mu}{2}]}(\xi)\chi_{[\lambda\mu,2\lambda\mu]^2}(\eta).$$ Then \begin{equation}\label{eq:6.4}1\gtrsim\|F_3\|_{l^\infty l^p L^2}\gtrsim\frac{\lambda^3\mu^3}{\mu^3\big(\frac{\lambda}{\mu}\big)^\frac{2}{p}}\approx\frac{\lambda^3}{\lambda^{\frac{6}{p}}}.\end{equation} Here we used $\mu\lambda^2=O(1)$. Since $\lambda\gg1$, \eqref{eq:6.4} yields a contradiction unless $p\leq 2$. \subsection{The function spaces $l^ql^pL^2$} We prove Theorem \ref{discription}. By the embedding $l^ql^pL^2 \subset l^{\tilde q} l^{\tilde p} L^2$ for $\tilde q \ge q$ and $\tilde p \ge p$, it suffices to prove endpoint statements. (i) Let $f$ be a Schwartz function and fix $\lambda$. Trivially \[ \Big(\sum_{l\in \lambda\cdot\mathbb Z^2}\|f_{\Gamma_{\lambda,l}}\|_{L^2}^p\Big)^{\frac1p}= \Big(\sum_{M\geq \lambda^2}\sum_{|l|\sim\frac{M}{\lambda}}\|f_{\Gamma_{\lambda,l}}\|_{L^2}^p\Big)^{\frac1p} \] and for $\frac43 <p$ and $N>2$, we have $$\|f_{\Gamma_{\lambda,l}}\|_{L^2}\lesssim_{N} \frac{\lambda^{\frac32}}{(1+\lambda+M)^{N}},$$ thus \begin{equation} \label{schwartzdyadic} \Big(\sum_{l\in \lambda\cdot\mathbb Z^2}\|f_{\Gamma_{\lambda,l}}\|_{L^2}^p\Big)^{\frac1p}\lesssim_N\frac{\lambda^{\frac32-\frac2p}}{(1+\lambda)^N} \end{equation} and for $p>2$ \[\sum_{\lambda}\lambda^{-\frac12}\|f_{\lambda}\|_{l^p(L^2)}<\infty.\] By duality, $l^\infty l^pL^2$ ($p<2$) embeds into the space of distributions. A small modification shows that $l^1l^2L^2$ embeds into the space of distributions. (ii) It suffices to construct a sequence of Schwartz functions which converges in $l^pl^2L^2$ ($p>1$) but diverges as distributions. Since \[ l^pl^2L^2 = L^2(\mathbb R^2; \dot B^{\frac12}_{2,p} ) \] it suffices to construct a sequence of functions $\phi_\mu$ of one variable of norm $1$ in $\dot B^{\frac12}_{2,p}$ and a Schwartz function $\phi$ so that $\int \phi_\mu \phi dx \to \infty$. Here $\dot B^{\frac12}_{2,p}$ denotes the homogeneous Besov space. This is well known but we give an example for completeness. 
For $0<\lambda$ we choose a Schwartz function $f_\lambda$ with the property \[\hat{f}_{\lambda}(\xi)=\left\{\begin{array}{ll}\lambda^{-1},&\text{for } |\xi|\sim \lambda,\\ 0,&\text{\, otherwise.}\end{array}\right.\] For any fixed $\mu\ll 1$, we define \[\phi_\mu=\frac{1}{|\ln\mu|^{\frac1p}}\sum_{\mu^2\leq\lambda\leq\mu}f_\lambda.\] It is easy to see that $$\|\phi_{\mu}\|_{\dot B^{\frac12}_{2,p}} \sim 1. $$ However, if $\psi$ is a Schwartz function with Fourier transform supported in the ball $B(0,2)$ and $\hat{\psi}=1$ in the unit ball $B(0,1)$ then \[\langle\psi,\phi_\mu\rangle\sim\sum_{\mu^2\leq\lambda\leq \mu}|\ln\mu|^{-\frac1p} \sim |\ln \mu|^{1-\frac1p} .\] (iii) Suppose now that the Schwartz function $\phi$ is in $l^\infty l^pL^2$ for $p<\frac43$. We assume there exists $(0,\eta_0)\in \mathbb R^3$ such that $\hat{\phi}(0,\eta_0)\neq0$. By continuity, there exists $r,c>0$ such that \[ |\hat{\phi}(\xi,\eta)|>c,\,\,\text{for}\,\,(\xi,\eta)\in B:= B((0,\eta_0),r). \] Then \[ \begin{split} \sup_{\lambda}\lambda^\frac12\Big(\sum_{ l\in\lambda\cdot\mathbb Z^2}\|\phi_{\Gamma_{\lambda, l}}\|_{L^2}^p\Big)^{\frac1p}&\ge \sup_{\lambda\lesssim r} \lambda^\frac12\Big(\sum_l\|\phi_{\Gamma_{\lambda,l}\cap B}\|_{L^2}^p\Big)^{\frac1p}\\ &\sim\sup_{\lambda\lesssim r}r^{\frac2p}\lambda^{1-\frac{2}{p}}\|\phi_{\lambda\cap B}\|_{L^2}\sim \sup_{\lambda\lesssim r} cr^{\frac2p+1}\lambda^{\frac32-\frac2p}, \end{split} \] which is $\infty$ if $1<p<\frac43$. This is a contradiction and hence \[ 0=\hat \phi(0,\eta)= (2\pi)^{-\frac32} \int e^{-iy\eta} \phi(x,y) dx dy \] for all $\eta \in \mathbb R^2$. The conclusion for $l^q l^{\frac43}L^2$ follows in the same fashion. (iv) It follows from \eqref{schwartzdyadic} that Schwartz functions are contained in $l^\infty l^p L^2$ if $\frac43\leq p $ and in $l^ql^pL^2$ if $\frac43<p$ and $1\leq q<\infty$. \end{document}
\begin{document} \title[Computing fundamental domains]{Computing fundamental domains \\ for Fuchsian groups} \author[John {\sc Voight}]{{\sc John} Voight} \address{John {\sc Voight}\\ Department of Mathematics and Statistics\\ 16 Colchester Avenue\\ University of Vermont\\ Burlington, Vermont 05401-1455 \\ USA} \email{[email protected]} \urladdr{http://www.cems.uvm.edu/\~{}voight/} \maketitle \begin{resume} Nous exposons un algorithme pour calculer un domaine de Dirichlet pour un groupe fuchsien $\Gamma$ d'aire cofinie. Par cons\'equent, nous calculons les invariants de $\Gamma$, y compris une pr\'esentation finie explicite pour $\Gamma$. \end{resume} \begin{abstr} We exhibit an algorithm to compute a Dirichlet domain for a Fuchsian group $\Gamma$ with cofinite area. As a consequence, we compute the invariants of $\Gamma$, including an explicit finite presentation for $\Gamma$. \end{abstr} Let $\Gamma \subset \PSL_2(\mathbb R)$ be a \emph{Fuchsian group}, a discrete group of orientation-preserving isometries of the upper half-plane $\mathfrak{H}$ with hyperbolic metric $d$. A \emph{fundamental domain} for $\Gamma$ is a closed domain $D \subset \mathfrak{H}$ such that: \begin{enumroman} \item $\Gamma D = \mathfrak{H}$, and \item $g D^o \cap D^o = \emptyset$ for all $g \in \Gamma \setminus \{1\}$, where ${}^o$ denotes the interior. \end{enumroman} Assume further that $\Gamma$ has cofinite area, i.e., the coset space $X=\Gamma \backslash \mathfrak{H}$ has finite hyperbolic area $\mu(X)<\infty$; then it follows that $\Gamma$ is finitely generated. In this article, we exhibit an algorithm to compute a fundamental domain for $\Gamma$; we assume that $\Gamma$ is specified by a finite set of generators $G \subset \SL_2(K)$ with $K \hookrightarrow \mathbb R \cap \overline{\mathbb Q}$ a number field, and we call $\Gamma$ \emph{exact}. Suppose that $p \in \mathfrak{H}$ has trivial stabilizer $\Gamma_p=\{1\}$. Then the set \[ D(p)=\{z \in \mathfrak{H} : d(z,p) \leq d(g z,p) \text{ for all $g \in \Gamma$}\}, \] known as a \emph{Dirichlet domain}, is a hyperbolically convex fundamental domain for $\Gamma$. The boundary of $D(p)$ consists of finitely many geodesic segments or \emph{sides}. We specify $D(p)$ by a sequence of vertices, oriented counterclockwise around $p$. The domain $D(p)$ has a natural \emph{side pairing}: For each side $s$ of $D(p)$, there exists a unique side $s^*$ and $g \in \Gamma \setminus \{1\}$ such that $s^*=g s$, and the set of such $g$ comprises a set of generators for $\Gamma$. Our main theorem is as follows. \begin{thm*} There exists an algorithm which, given an exact Fuchsian group $\Gamma$ with cofinite area and a point $p \in \mathfrak{H}$ with $\Gamma_p=\{1\}$, returns the Dirichlet domain $D(p)$, a side pairing for $D(p)$, and a finite presentation for $\Gamma$ with a minimal set of generators. \end{thm*} This algorithm also provides a solution to the word problem for the computed presentation of $\Gamma$. Of particular and relevant interest is the class of \emph{arithmetic Fuchsian groups}, those groups commensurable with the group of units $\mathcal{O}_1^*$ of reduced norm $1$ in a maximal order $\mathcal{O}$ of a quaternion algebra $B$ defined over a totally real field and split at exactly one real place. Alsina-Bayer \cite{AB} and Kohel-Verrill \cite{KV} give several examples of fundamental domains for arithmetic Fuchsian groups with $F=\mathbb Q$. 
Our work generalizes that of Johansson \cite{Johansson}, who first made use of a Dirichlet domain for algorithmic purposes: he restricts to the case of arithmetic Fuchsian groups, and we improve on his methods in several respects (see the discussion preceding Algorithm \ref{algdomG} and the reduction algorithms in \S 4). The algorithm described in the above theorem has the following applications. The first is a noncommutative generalization of the problem of computing generators for the unit group of a number field. \begin{cor*} There exists an algorithm which, given an order $\mathcal{O} \subset B$ of a quaternion algebra $B$ defined over a totally real field and split at exactly one real place, returns a finite presentation for $\mathcal{O}_1^*$ with a minimal set of generators. \end{cor*} We may also use the presentation for $\Gamma$ to compute invariants. The group $\Gamma$ has finitely many orbits with nontrivial stabilizer, known as \emph{elliptic cycles} or \emph{parabolic cycles} according as the stabilizer is finite or infinite. The coset space $X=\Gamma \setminus \mathfrak{H}$ can be given the structure of a Riemann surface, and we say that $\Gamma$ has \emph{signature} $(g;m_1,\dots,m_t;s)$ if $X$ has genus $g$ and $\Gamma$ has exactly $t$ elliptic cycles of orders $m_1,\dots,m_t \in \mathbb Z_{\geq 2}$ and $s$ parabolic cycles. \begin{cor*} There exists an algorithm which, given $\Gamma$, returns the signature of $\Gamma$ and a set of representatives for the elliptic and parabolic cycles in $\Gamma$. \end{cor*} Finally, we mention a corollary which is useful for the evaluation of automorphic forms. \begin{cor*} There exists an algorithm which, given $\Gamma$ and $z,p \in \mathfrak{H}$ with $\Gamma_p=\{1\}$, returns a point $z' \in D(p)$ and $g \in \Gamma$ such that $z'=g(z)$. \end{cor*} The article is organized as follows. We begin by fixing notation and discussing the necessary background from the theory of Fuchsian groups (\S 1--2). We then treat arithmetic Fuchsian groups and give methods for enumerating ``small'' elements of the group $\mathcal{O}_1^*$, with $\mathcal{O} \subset B$ a quaternion order as above (\S 3). Next, we describe the basic algorithm to reduce an element $g \in \Gamma$ with respect to a finite set $G \subset \Gamma$ (\S 4). We then prove the main theorem (\S 5) and conclude by giving two examples (\S 6). The author would like to thank the Magma group at the University of Sydney for their hospitality, Steve Donnelly and David Kohel for their helpful input, and Stefan Lemurell for his careful reading of the paper. \section{Fuchsian groups} In this section, we present the relevant background from the theory of Fuchsian groups; suggested references include Katok \cite[Chapters 3--4]{Katok} and Beardon \cite[Chapter 9]{Beardon}. Throughout, we let $\Gamma \subset \PSL_2(\mathbb R)$ denote a Fuchsian group with cofinite area, which is finitely generated by a result of Siegel \cite[Theorem 4.1.1]{Katok}, \cite[\S 1]{Gelfand}. To simplify, we will identify a matrix $g \in \SL_2(\mathbb R)$ with its image in $\PSL_2(\mathbb R)$. Throughout this section, let $p \in \mathfrak{H}$ be a point with trivial stabilizer $\Gamma_p=\{1\}$. Almost all points $p$ satisfy this property: there exist only finitely many $p$ with $\Gamma_p \neq \{1\}$ in any compact subdomain of $\mathfrak{H}$, and in particular, the set of $p \in \mathfrak{H}$ with $\Gamma_p \neq \{1\}$ have area zero. In practice, with probability $1$ a ``random'' choice of $p$ will suffice. 
We define the \emph{Dirichlet domain} centered at $p$ to be \[ D(p)=\{z \in \mathfrak{H}:d(z,p) \leq d(g z,p) \text{ for all $g \in \Gamma$}\}. \] The set $D(p)$ is a fundamental domain for $\Gamma$, and is a \emph{hyperbolic polygon}. More generally, we define a \emph{generalized hyperbolic polygon} to be a closed, connected, and hyperbolically convex domain whose boundary consists of finitely many geodesic segments, called \emph{sides}, so that a hyperbolic polygon is a generalized hyperbolic polygon with finite area. Let $D \subset \mathfrak{H}$ be a hyperbolic polygon. Let $S=S(D)$ denote the set of sides of $D$, with the following convention: if $g \in \Gamma$ is an element of order $2$ which fixes a side $s$ of $D$, and $s$ contains the fixed point of $g$, we instead consider $s$ to be the union of two sides meeting at the fixed point of $g$. We define a labeled equivalence relation on $S$ by \[ P=\{(g,s,s^*) : s^*=g(s)\} \subset \Gamma \times (S \times S). \] We say that $P$ is a \emph{side pairing} for $D$ if $P$ induces a partition of $S$ into pairs, and we denote by $G(P)$ the projection of $P$ to $\Gamma$. \begin{prop} \langlebel{sidepair} The Dirichlet domain $D(p)$ has a side pairing $P$, and the set $G(P)$ generates $\Gamma$. Conversely, let $D \subset \mathfrak{H}$ be a hyperbolic polygon, and let $P$ be a side pairing for $D$. Then $D$ is a fundamental domain for the group generated by $G(P)$. \end{prop} \begin{proof} The first statement is well-known \cite[Theorem 9.3.3]{Beardon}, \cite[Theorem 3.5.4]{Katok}. For the second statement, we refer to Beardon \cite[Theorem 9.8.4]{Beardon} and the accompanying exercises: the condition that $\mu(D)<\infty$ ensures that any vertex which lies on the circle at infinity is fixed by a hyperbolic element \cite[\S 1]{Gelfand}. \end{proof} \begin{rmk} The second statement of Proposition \ref{sidepair} extends to a larger class of polygons (see \cite[\S 9.8]{Beardon}), and therefore conceivably our results extend to the class of finitely generated non-elementary Fuchsian groups of the first kind. For simplicity, we restrict to the case of groups with cofinite area. \end{rmk} We can define an analogous equivalence relation on the set of vertices of $D$, and we say that a vertex $v$ of $D$ is \emph{paired} if each side $s$ containing $v$ is paired to a side $s^*$ via an element $g \in G$ such that $gv$ is a vertex of $D$. We now consider the corresponding notions in the hyperbolic unit disc $\mathfrak{D}$, which will prove more convenient for algorithmic purposes. The maps \begin{equation} \langlebel{phimap} \setlength{\arraycolsep}{0.5ex} \begin{array}{rlcrl} \phi:\mathfrak{H} &\to \mathfrak{D} & \quad & \phi^{-1}:\mathfrak{D} &\to \mathfrak{H} \\ z &\mapsto \displaystyle{\frac{z-p}{z-\overline{p}}} & & w &\mapsto \displaystyle{\frac{\overline{p}w-p}{w-1}} \end{array} \end{equation} define a conformal equivalence between $\mathfrak{H}$ and $\mathfrak{D}$ with $p \mapsto \phi(p)=0$. Via the map $\phi$, the group $\Gamma$ acts on $\mathfrak{D}$ as \[ \Gamma^{\phi}=\phi\Gamma\phi^{-1} \subset \PSU(1,1)=\left\{\pm \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \PSL_2(\mathbb C) : a=\overline{d}, b=\overline{c}\right\}. \] We may analogously define a Dirichlet domain $D(q)$ for $q \in \mathfrak{D}$ with $\Gamma_q=\{1\}$, and we have $\phi(D(p))=D(0) \subset \mathfrak{D}$. To ease notation, we identify $\Gamma$ with $\Gamma^{\phi}$ by $g \mapsto g^{\phi}=\phi g\phi^{-1}$ when no confusion can result. 
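As an aside (not needed in what follows), the change of models (\ref{phimap}) and the conjugation $g \mapsto g^{\phi}$ are straightforward to realize numerically. The following Python sketch is only an illustration with our own (hypothetical) naming conventions, and it uses floating-point rather than the exact arithmetic stipulated below; it represents $\phi$ by the matrix $\begin{pmatrix} 1 & -p \\ 1 & -\overline{p} \end{pmatrix}$, rescaled to determinant $1$.
\begin{verbatim}
import cmath
import numpy as np

def cayley_matrix(p):
    # Matrix of phi(z) = (z - p)/(z - conj(p)), rescaled to determinant 1.
    M = np.array([[1.0, -p], [1.0, -p.conjugate()]], dtype=complex)
    return M / cmath.sqrt(np.linalg.det(M))

def phi(z, p):
    # phi: upper half-plane -> unit disc, with phi(p) = 0.
    return (z - p) / (z - p.conjugate())

def phi_inv(w, p):
    # phi^{-1}: unit disc -> upper half-plane.
    return (p.conjugate() * w - p) / (w - 1)

def to_su11(g, p):
    # g^phi = phi g phi^{-1}; lies in SU(1,1) when g is in SL_2(R).
    M = cayley_matrix(p)
    return M @ g @ np.linalg.inv(M)

p = 2.0 + 3.0j                            # a base point p in H
g = np.array([[2.0, 1.0], [3.0, 2.0]])    # an element of SL_2(R), det = 1
a, b, c, d = to_su11(g, p).ravel()
# g^phi satisfies the defining relations of SU(1,1) ...
assert abs(a - d.conjugate()) < 1e-12 and abs(b - c.conjugate()) < 1e-12
# ... and the two Moebius actions are compatible: g^phi(phi(z)) = phi(g z).
z = 1.5 + 0.7j
gz = (g[0, 0] * z + g[0, 1]) / (g[1, 0] * z + g[1, 1])
assert abs((a * phi(z, p) + b) / (c * phi(z, p) + d) - phi(gz, p)) < 1e-12
\end{verbatim}
The assertions simply check that $g^{\phi}$ satisfies the defining relations of $\SU(1,1)$ and that the actions on $\mathfrak{H}$ and $\mathfrak{D}$ correspond under $\phi$.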
Any matrix $g=\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \SU(1,1)$ acts on $\mathfrak{D}$, multiplying lengths by $|g'(z)|=|cz+d|^{-2}$, and therefore Euclidean lengths (and areas) are preserved if and only if $|cz+d|=1$. We define the \emph{isometric circle} of $g$ to be \[ I(g)=\{z \in \mathbb C: |cz+d|=1\}; \] if $c \neq 0$, then $I(g)$ is a circle with radius $1/|c|$ and center $-d/c$, and if $c=0$ then $I(g)=\mathbb C$. We denote by \[ \inter(I(g))=\{z \in \mathbb C: |cz+d|<1\}, \quad \ext(I(g))=\{z \in \mathbb C: |cz+d|>1\} \] the \emph{interior} and \emph{exterior} of $I(g)$, respectively. With these notations, we now find the following alternative description of the Dirichlet domain $D(0) \subset \mathfrak{D}$. \begin{prop} \langlebel{forddom} \ \begin{enumalph} \item The domain $D(0)$ is the closure in $\mathfrak{D}$ of \[ \bigcap_{g \in \Gamma \setminus \{1\}} \ext(I(g)). \] \item For any $g \in \SU(1,1)$, we have \[ d(z,0) \left.\begin{cases} < \\ = \\ > \end{cases} \hspace{-2.5ex} \right\} d(g z,0) \text{ according as } \begin{cases} z \in \ext(I(g)), \\ z \in I(g),\\ z \in \inter(I(g)). \end{cases} \] \end{enumalph} \end{prop} \begin{proof} See Katok \cite[Theorem 3.3.5]{Katok}; we note that if $g \in \Gamma$ and $c=0$, then $q=0$ is a fixed point of $g$, so by hypothesis $g=1$, and hence $\ext(I(g)) \neq \emptyset$ for all $g \neq 1$. In particular, since $\Gamma$ has cofinite area we note that the intersection in (a) is nonempty. \end{proof} \begin{cor} \langlebel{gginv} For any $g \in \SU(1,1)$, we have $g I(g) = I(g^{-1})$. \end{cor} \begin{proof} By Proposition \ref{forddom}(b), we have \[ w=gz \in I(g^{-1}) \Leftrightarrow d(g^{-1}w,0)=d(w,0) \Leftrightarrow d(z,0)=d(gz,0) \Leftrightarrow z \in I(g) \] and the result follows. \end{proof} \begin{rmk} \langlebel{HtoD} One can similarly define isometric circles $I(g)$ for $g \in \PSL_2(\mathbb R)$ acting on $\mathfrak{H}$. One warning is due, however: although $\phi^{-1}(D(0))=D(p) \subset \mathfrak{H}$ is again a Dirichlet domain, its sides need not be contained in isometric circles (as the map $\phi$ is a hyperbolic isometry, whereas isometric circles are defined by a Euclidean condition). Instead, we see easily that \[ \phi^{-1} I(g^{\phi})=\{z \in \mathfrak{H} : d(z,p)=d(g z,p)\}, \] i.e., the isometric circle $I(g^{\phi})$ corresponds in $\mathfrak{H}$ to the perpendicular bisector of the geodesic between $p$ and $g^{-1}(p)$. In particular, if $p=i$ then a somewhat lengthy calculation reveals that for $g=\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \SL_2(\mathbb R)$, this perpendicular bisector is the half-circle of radius $\displaystyle{\frac{\sqrt{a^2+b^2+c^2+d^2-2}}{|a^2+c^2-1|}}$ centered at $\displaystyle{-\frac{ab+cd}{a^2+c^2-1}} \in \mathbb R$. \end{rmk} The domain $D(0)$ is also known as a \emph{Ford domain}, since Proposition \ref{forddom} is originally attributed to Ford \cite[Theorem 7, \S 20]{Ford}. The heart of our algorithm (as provided in the main theorem) will be to algorithmically construct a Ford domain. \section{Algorithms for the upper half-plane and unit disc} We represent points $p \in \mathfrak{H},\mathfrak{D}$ using exact complex arithmetic: see Pour-El--Richards \cite{PER}, Weihrauch \cite{Weihrauch} for theoretical foundations (the subject of computable analysis) and e.g.\ Boehm \cite{Boehm}, Gowland-Lester \cite{GL} for a discussion of practical implementations. 
Alternatively, our algorithms can be interpreted using fixed and sufficiently large precision; even though one cannot predict in advance the precision required to guarantee correct output, it is likely that an error due to round-off will only very rarely occur in practice; see also Remark \ref{precissue}. The induced action on $\mathfrak{D}$ has $\Gamma \leftrightarrow \Gamma^{\phi} \subset \SU(1,1)$, represented as matrices with exact complex entries. A Fuchsian group $\Gamma$ is \emph{exact} if it has a finite set of generators $G \subset \SL_2(K)$ with $K \hookrightarrow \overline{\mathbb Q} \cap \mathbb R$ a number field; from now on, we assume that the group $\Gamma$ is exact. Even up to conjugation in $\PSL_2(\mathbb R)$, not every finitely generated Fuchsian group is exact; our methods conceivably extend to the case where the set of generators $G \subset \SL_2(\mathbb R)$ is specified with (exact) real entries, but we will not discuss this case any further. Algorithms for efficiently computing with algebraic number fields are well-known (see e.g.\ Cohen \cite{Cohen}). We now discuss some elementary methods for working with generalized hyperbolic polygons in $\mathfrak{D}$, which are defined analogously to those in $\mathfrak{H}$. Let $\overline{\mathfrak{D}}=\{z \in \mathbb C:|z| \leq 1\}$ denote the closure of $\mathfrak{D}$ and let $\partial \mathfrak{D}=\{z \in \mathbb C:|z|=1\}$ be the \emph{circle at infinity}. We represent a geodesic $L$ in $\mathfrak{D}$ in bits by four pieces of data: \begin{itemize} \item the center $c=\ctr(L) \in \mathbb C \cup \{\infty\}$, \item the radius $r=\rangled(L) \in \mathbb R \cup \{\infty\}$ of $L$, and \item the \emph{initial point} $z=\init(L) \in \overline{\mathfrak{D}}$ and the \emph{terminal point} $w \in \overline{\mathfrak{D}}$; \end{itemize} the initial and terminal points are normalized so that the path along $L$ follows a counterclockwise orientation. Although this data is redundant, it will be more efficient in practice to store all values rather than, say, to recompute $c$ and $r$ when needed. If $L_1,L_2 \subset \mathfrak{D}$ are geodesics which intersect at a point $v \in \mathfrak{D} \setminus \{0\}$, then we define $\angle(L_1,L_2)$ to be the counterclockwise-oriented angle at $v$ from the geodesic $L_1$ to $L_2$ for the wedge directed toward the origin, so that in particular we have $\angle(L_2,L_1)=-\angle(L_1,L_2)$. \begin{exm} \langlebel{angle} In the following figure, we depict a geodesic and the angle $\angle(L_1,L_2) \approx 3\pi/8$ between geodesics. 
\begin{center} \begin{pspicture}(3,-2.5)(11,2.5) \psclip{\psframe[linecolor=white](3,-2.5)(6,2.5)} \pscircle(0,0){5} \endpsclip \psclip{\psframe[linecolor=white](3,-2.5)(6,2.5)} \pscircle(5.5,0){2} \endpsclip \rput(5.7,0){$c$} \pscircle[fillstyle=solid,fillcolor=black](5.5,0){0.05} \rput(4.5,0.2){$r$} \psline(3.5,0)(5.5,0) \rput(2.9,-1){$z=\init(L)$} \pscircle[fillstyle=solid,fillcolor=black](3.78,-1){0.05} \rput(4,1.58){$w$} \pscircle[fillstyle=solid,fillcolor=black](4,1.31){0.05} \psclip{\psframe[linecolor=white](8,-2.5)(9.95,2.5)} \pscircle(10.5,-1.5){2.3} \endpsclip \psclip{\psframe[linecolor=white](8,-2.5)(10,2.5)} \pscircle(10.4,2.1){1.9} \endpsclip \psclip{\psframe[linecolor=white](8,-2.5)(11,2.5)} \pscircle(5,0){5} \endpsclip \psarcn{->}(9.7,0.55){0.7}{203}{160} \rput(8.1,2.3){$L_2$} \rput(8.1,0.6){$\angle(L_1,L_2)$} \rput(7.9,-1.75){$L_1$} \end{pspicture} \\ \textbf{Figure \ref{angle}}: Geodesics and angles \end{center} \end{exm} We leave it to the reader to show that one can compute using elementary formulae the following quantities: for geodesics $L_1,L_2$, the intersection $L_1 \cap L_2$ and (if nonzero) the angle $\angle(L_1,L_2)$, as well as the area of a hyperbolic polygon. \begin{defn} Let $G \subset \Gamma \setminus \{1\}$. The \emph{exterior domain} of $G$, denoted $E=\ext(G)$, is the closure in $\overline{\mathfrak{D}}$ of the set $\bigcap_{g \in G} \ext(I(g)) \cap \mathfrak{D}$. \end{defn} With this definition, Proposition \ref{forddom}(a) becomes simply the statement that $\ext(\Gamma \setminus \{1\})$ is the closure of $D(0)$. Let $G \subset \Gamma$ be a finite subset and let $E=\ext(G)$ be its exterior domain. Then $E$ is a generalized hyperbolic polygon whose sides are contained in isometric circles $I(g)$ with $g \in G$. A \emph{proper vertex} of $E$ is a point of intersection $v \in I(g) \cap I(g')$ between two sides (with $g \neq g' \in G$); a \emph{vertex at infinity} of $E$ is a point of intersection $v \in I(g) \cap \partial \mathfrak{D}$ between a side and the circle of infinity. A \emph{vertex} of $E$ is either a proper vertex or a vertex at infinity. \begin{defn} \langlebel{extGdomU} Let $E=\ext(G)$ be an exterior domain. A sequence $U=g_1,\dots,g_n$ is a \emph{normalized boundary} for $E$ if: \begin{enumroman} \item $E=\ext(U)$; \item $I(g_1),\dots,I(g_n)$ contain the counterclockwise consecutive sides of $D$; and \item the vertex $v \in E$ with minimal $\arg(v) \in (0,2\pi)$ is either a proper vertex with $v \in I(g_1) \cap I(g_2)$ or a vertex at infinity with $v \in I(g_1)$. \end{enumroman} \end{defn} It is clear that for each exterior domain $E$, there exists a unique normalized boundary $G$ for $E$: in (i) and (ii) we order exactly those $g_i$ for which $I(g_i)$ are sides of $E$ and in (iii) we choose a consistent place to start. \begin{exm} \langlebel{normbound} In the following figure, we exhibit a normalized boundary $G=\{g_1,g_2,g_3,g_4\}$; the vertices $v_1,v_2$ are on the circle at infinity whereas $v_3,v_4,v_5$ are proper. 
\begin{center} \begin{pspicture}(-3.5,-3.5)(3.5,3.5) \pscircle[fillstyle=solid,fillcolor=lightgray](0,0){3} \psclip{\pscircle(0,0){3}} \pscircle[fillstyle=solid,fillcolor=white](-4.2,-4.2){5.5} \endpsclip \psclip{\pscircle(0,0){3}} \pscircle[fillstyle=solid,fillcolor=white](-3.5,1){2} \endpsclip \psclip{\pscircle(0,0){3}} \pscircle[fillstyle=solid,fillcolor=white](-3.5,4.7){5} \endpsclip \psclip{\pscircle(0,0){3}} \pscircle[fillstyle=solid,fillcolor=white](2.3,-3.5){3} \endpsclip \psclip{\pscircle(0,0){3}} \pscircle[fillstyle=solid,fillcolor=white](3,-1){1.5} \endpsclip \psclip{\pscircle(0,0){3}} \pscircle(-4.2,-4.2){5.5} \endpsclip \psclip{\pscircle(0,0){3}} \pscircle(-3.5,1){2} \endpsclip \psclip{\pscircle(0,0){3}} \pscircle(-3.5,4.7){5} \endpsclip \psclip{\pscircle(0,0){3}} \pscircle(2.3,-3.5){3} \endpsclip \psclip{\pscircle(0,0){3}} \pscircle(3,-1){1.5} \endpsclip \psline{->}(-3.5,0)(3.5,0) \psline{->}(0,-3.5)(0,3.5) \rput(1.35,3){$v_2$} \pscircle[fillstyle=solid,fillcolor=black](1.1,2.785){0.05} \rput(-1.1,0.6){$v_3$} \pscircle[fillstyle=solid,fillcolor=black](-1.1,0.33){0.05} \rput(0.4,-1.6){$v_4$} \pscircle[fillstyle=solid,fillcolor=black](0.4,-1.2){0.05} \rput(1.8,-0.85){$v_5$} \pscircle[fillstyle=solid,fillcolor=black](1.57,-0.6){0.05} \rput(3.25,0.55){$v_1$} \pscircle[fillstyle=solid,fillcolor=black](2.94,0.48){0.05} \rput(0.4,2.4){$I(g_2)$} \rput(-0.8,-0.5){$I(g_3)$} \rput(0.8,-0.5){$I(g_4)$} \rput(2.5,-2.3){$I(g_1)$} \rput(1.5,1.2){$\ext(G)$} \end{pspicture} \\ \textbf{Figure \ref{normbound}}: Normalized boundary of a generalized hyperbolic polygon \end{center} \end{exm} We now detail an algorithm which computes a normalized boundary for a given exterior domain. \begin{alg} \langlebel{algdomG} Let $G \subset \Gamma$ be a finite subset. This algorithm returns the normalized boundary $U$ of the exterior domain $E=\ext(G)$. \begin{enumalg} \item Initialize $\theta := 0$, $U := \emptyset$, and $L := [0,1]$. \item Let \[ H := \{g \in G : \arg(I(g) \cap L) \geq \theta\}. \] \begin{enumalgalph} \item If $H = \emptyset$, let $g \in G$ be such that \[ \theta := \arg(\init(I(g))) \in \theta + [0,2\pi) \] is minimal. \item If $H \neq \emptyset$, let $g \in H$ be such that \[ \theta := \arg(I(g) \cap L) \in \theta + [0,2\pi) \] is minimal; if more than one such $g$ exists, let $g$ be the one that minimizes $\angle(L,I(g))$. \end{enumalgalph} Let $U := U \cup \{g\}$ and let $L := I(g) \cap \overline{\mathfrak{D}}$. \item If $U=\{g_1,\dots,g_n\}$ and $g_n=g_1$, return $U := \{g_1,\dots,g_{n-1}\}$. Otherwise, return to Step 2. \end{enumalg} \end{alg} \begin{proof}[Proof of correctness] By definition $\ext(U)$ is a generalized hyperbolic polygon. Suppose that $E \neq \ext(U)$. Then there exists $g \in G$ such that $L = I(g) \cap \ext(U)$ is not just a vertex of $\ext(U)$. Consider the initial point $z=\init(L)$: either $z$ lies on a side $I(g_i)$ of $\ext(U)$ or $z \in \partial \mathfrak{D}$. Suppose that $z \in I(g_i)$. Let $v_i$ be the initial vertex of the side $s_i \subset I(g_i)$. Then in the $i$th iteration of Step 2 of the algorithm we have $g \in H$, so the terminal vertex $v_{i+1}$ of $s_i$ is proper and we are in case (b). But by assumption we have $d(v_i,z) \leq d(v_i,v_{i+1})$ since $I(g_i)$ is a geodesic, and $\arg$ increases along $s_i$ with the distance, thus according to the stipulations of the algorithm we must have $z=v_{i+1}$. 
But then in order for the interior of $I(g)$ to intersect $\ext(U)$ nontrivially, we must have $\angle(L,I(g_i)) < \angle(I(g_{i+1}),I(g))$, a contradiction. \begin{center} \begin{pspicture}(-3.5,0)(3.5,3.5) \psarc[fillstyle=solid,fillcolor=lightgray](0,0){3}{0}{180} \psclip{\psarc(0,0){3}{0}{180}} \pscircle[fillstyle=solid,fillcolor=white](3.2,1){2} \endpsclip \psclip{\psarc(0,0){3}{0}{180}} \pscircle[fillstyle=solid,fillcolor=white](0,4.5){3} \endpsclip \psclip{\psarc(0,0){3}{0}{180}} \pscircle[fillstyle=solid,fillcolor=white](-3.2,1.5){2} \endpsclip \psarc(0,0){3}{0}{180} \psclip{\psarc(0,0){3}{0}{180}} \pscircle(3.2,1){2} \endpsclip \psclip{\psarc(0,0){3}{0}{180}} \pscircle(0,4.5){3} \endpsclip \psclip{\psarc(0,0){3}{0}{180}} \pscircle(-3.2,1.5){2} \endpsclip \psclip{\psarc(0,0){3}{0}{180}} \pscircle[linestyle=dashed](-2,3.5){2.5} \endpsclip \psline{->}(-3.5,0)(3.5,0) \rput(1.65,1.65){$v_i$} \pscircle[fillstyle=solid,fillcolor=black](1.45,1.9){0.05} \rput(-1.6,1.6){$v_{i+1}$} \pscircle[fillstyle=solid,fillcolor=black](-1.25,1.8){0.05} \rput(-0.45,1.8){$z$} \pscircle[fillstyle=solid,fillcolor=black](-0.45,1.55){0.05} \rput(0.4,1.2){$I(g_i)$} \rput(-2.1,2.8){$I(g_{i+1})$} \rput(0.5,3.25){$I(g)$} \end{pspicture} \\ \end{center} So suppose that $z \in \partial \mathfrak{D}$. Then there exists $i$ such that $z$ lies on the principal circle between the terminal point of $I(g_i)$ and the initial point of $I(g_{i+1})$. But then $\arg(\init(I(g))) < \arg(\init(I(g_{i+1}))$, contradicting (a). This proves that (i) holds in Definition \ref{extGdomU}. It is obvious that (ii) holds, and condition (iii) holds by initialization: if the vertex $v \in E$ with minimal $\arg(v) \in (0,2\pi)$ is a vertex at infinity then it is found in the first iteration of the algorithm in stage (a), and if it is a proper vertex then it is found in the second iteration in stage (b). \end{proof} A Ford domain $D(0)$ is specified in bits by a normalized boundary $G$ for $D(0)$. We can similarly specify a Dirichlet domain $D(p)$ by an analogously defined normalized boundary of perpendicular bisectors, as in Remark \ref{HtoD}; for many purposes, it will be sufficient to represent $D(p)$ by a sequence of vertices (ordered in a counterclockwise orientation around $p$). \begin{rmk} \langlebel{precissue} Although the intermediate computations as above are of a numerical sort, an algorithm to compute a Dirichlet domain accepts exact input and produces exact output. \end{rmk} \section{Element enumeration in arithmetic Fuchsian groups} In this section, we treat arithmetic Fuchsian groups, and in particular we exhibit methods for enumerating ``small'' elements of these groups. See Vigneras \cite{Vigneras} for background material and Voight \cite[Chapter 4]{Voight} for a discussion of algorithms for quaternion algebras. Let $F$ be a number field with $[F:\mathbb Q]=n$ and discriminant $d_F$. A \emph{quaternion algebra} $B$ over $F$ is an $F$-algebra with generators $\alpha,\beta \in B$ such that \[ \alpha^2=h, \quad \beta^2=k, \quad \beta\alpha=-\alpha\beta \] with $h,k \in F^*$; such an algebra is denoted $B=\quat{h,k}{F}$ and is specified in bits by $h,k \in F^*$. An element $\gamma \in B$ is represented by $\gamma = x+y\alpha+z\beta+w\alpha\beta$ with $x,y,z,w \in F$, and we define the \emph{reduced trace} and \emph{reduced norm} of $\gamma$ by $\trd(\gamma)=2x$ and $\nrd(\gamma)=x^2-hy^2-kz^2+hkw^2$, respectively. Let $B$ be a quaternion algebra over $F$ and let $\mathbb Z_F$ denote the ring of integers of $F$. 
An \emph{order} $\mathcal{O} \subset B$ is a finitely generated $\mathbb Z_F$-submodule with $F\mathcal{O}=B$ which is also subring; an order is \emph{maximal} if it is not properly contained in any other order. We represent an order by a \emph{pseudobasis} over $\mathbb Z_F$; see Cohen \cite[\S 1]{Cohen2} for methods of computing with finitely generated modules over Dedekind domains using pseudobases. A place $v$ of $F$ is \emph{split} or \emph{ramified} according as $B_v = B \otimes_F F_v \cong M_2(F_v)$ or not, where $F_v$ denotes the completion at $v$. The set $S$ of ramified places of $B$ is finite and of even cardinality, and the ideal $\mathfrak{d}=\prod_{v \in S, v \nmid \infty} \mathfrak{p}_v$ of $\mathbb Z_F$ is called the \emph{discriminant} of $B$. Now suppose that $F$ is a totally real field, and there is a unique split real place $v \not\in S$ corresponding to $\iota_\infty:B \hookrightarrow M_2(\mathbb R)$. Let $\mathcal{O} \subset B$ be an order and let $\mathcal{O}_1^*$ denote the group of units of reduced norm $1$ in $\mathcal{O}$. Then the group $\Gamma(\mathcal{O})=\iota_\infty(\mathcal{O}_1^*/\{\pm 1\}) \subset \PSL_2(\mathbb R)$ is a Fuchsian group \cite[\S\S 5.2--5.3]{Katok}. If $\mathcal{O}$ is maximal, we denote $\Gamma^B(1)=\Gamma(\mathcal{O})$. An \emph{arithmetic Fuchsian group} $\Gamma$ is a Fuchsian group commensurable with $\Gamma^B(1)$ for some choice of $B$. One can, for instance, recover the usual modular groups in this way, taking $F=\mathbb Q$, $\mathcal{O}=M_2(\mathbb Z) \subset M_2(\mathbb Q)=B$, and $\Gamma \subset \PSL_2(\mathbb Z)$ a subgroup of finite index. An arithmetic Fuchsian group $\Gamma$ has cofinite area; indeed, by a formula of Shimizu \cite[Appendix]{Shimizu}, the area $A=\mu(X)=\mu(\Gamma \backslash \mathfrak{H})$ is given by \begin{equation} \langlebel{shimizu} A = \frac{4}{(2\pi)^{2n}} d_F^{3/2} \zeta_F(2) \Phi(\mathfrak{d}) [\Gamma^B(1):\Gamma], \end{equation} where $\zeta_F(s)$ denotes the Dedekind zeta function of $F$, and \[ \Phi(\mathfrak{d})=\#(\mathbb Z_F/\mathfrak{d}\mathbb Z_F)^*=\N(\mathfrak{d})\prod_{\mathfrak{p} \mid \mathfrak{d}} \left(1-\frac{1}{\N(\mathfrak{p})} \right); \] here the hyperbolic area is normalized so that \[ \mu(\Omega)=\frac{1}{2\pi}\int\!\!\int_\Omega \frac{dx\,dy}{y^2} \] and hence an ideal triangle has area $1/2$. \begin{rmk} \langlebel{areacompute} The area $A$ is effectively computable from the formula (\ref{shimizu}). By the Riemann-Hurwitz formula, we have \begin{equation} \langlebel{RH} A = 2g-2+\sum_q e_q\left(1-\frac{1}{q}\right) + e_\infty \end{equation} where $e_q$ is the number of elliptic cycles of order $q \in \mathbb Z_{\geq 2}$ in $\Gamma$ and $e_\infty$ the number of parabolic cycles. In particular, $A \in \mathbb Q$; and since $e_q>0$ implies $F(\zeta_{2q}) \hookrightarrow B$, the denominator of $A$ is bounded by the least common multiple of all $q$ such that $[F(\zeta_{2q}):F]=2$ (which in particular requires that $F$ contains the totally real subfield $\mathbb Q(\zeta_{2q})^+$ of $\mathbb Q(\zeta_{2q})$). Therefore, it suffices to compute the usual Dirichlet series or Euler product expansion for $\zeta_F(2)$ with the required precision; see also Dokchitser \cite{Dokchitser}. \end{rmk} We now relate isometric circles to the arithmetic of $B$. Let $p \in \mathfrak{H}$ have $\Gamma_p=\{1\}$. 
A short calculation with the maps defined in (\ref{phimap}) shows that if $g=\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \SL_2(\mathbb R)$, then the isometric circle of $g^{\phi}=\phi g\phi^{-1} \in \SU(1,1)$ has radius \[ \rangled(I(g^{\phi}))=\displaystyle{\frac{2\impart(p)}{|f_g(p)|}}, \] where $f_g(t)=ct^2+(d-a)t-b$, a polynomial whose roots are the fixed points of $g$ in $\mathbb C$. We will abbreviate $\rangled(g)=\rangled(I(g^{\phi}))$. The map \begin{equation} \langlebel{invrad} \setlength{\arraycolsep}{0.5ex} \begin{array}{rl} \invrad:M_2(\mathbb R) &\to \mathbb R \\ g &\mapsto |f_g(p)|^2 +2y^2\det(g) \end{array} \end{equation} yields a quadratic form on $M_2(\mathbb R)$: explicitly, if $p=x+yi$, we have \begin{align*} \invrad\!\begin{pmatrix} a & b \\ c & d \end{pmatrix} &=\left(xa + b - (x^2-y^2) c - xd\right)^2 + y^2\left(a-2xc-d\right)^2 + 2y^2(ad-bc) \\ &= y^2(a-xc)^2+(xa+b-x^2c-xd)^2+y^4c^2+y^2(xc+d)^2, \end{align*} and hence the form $\invrad$ is positive definite and via $\iota_\infty$ induces a positive definite form $\invrad:B \to \mathbb R$. For $g \in B$, we note that $\det \iota_\infty(g)=v(\nrd(g))$, where $v$ is the unique split real place of $B$. Suppose that $p=i$. Then we have simply $\invrad\!\begin{pmatrix} a & b \\ c & d \end{pmatrix} = a^2+b^2+c^2+d^2$. Let $B=\quat{h,k}{F}$. Identify $F$ with its image $F \hookrightarrow \mathbb R$ under the unique split real place of $B$; without loss of generality, we may assume that $h>0$. We may therefore embed $\iota_\infty:B \hookrightarrow M_2(\mathbb R)$ by letting \begin{equation} \langlebel{embedmin} \alpha \mapsto \begin{pmatrix} \sqrt{h} & 0 \\ 0 & -\sqrt{h} \end{pmatrix}, \quad \beta \mapsto \begin{pmatrix} 0 & \sqrt{|k|} \\ \sgn(k)\sqrt{|k|} & 0 \end{pmatrix} \end{equation} where $\sgn$ denotes the sign. Therefore if $g=x+y\alpha+z\beta+w\alpha\beta \in B$, then we see directly that \[ \invrad(g)=2\left(x^2+hy^2+|k|z^2+h|k|w^2\right). \] For the ramified real places $v$ of $F$, corresponding to $B \hookrightarrow B \otimes_F \mathbb R \cong \mathbb H$, the reduced norm form $\nrd_v:B \to \mathbb R$ given by $g \mapsto v(\nrd(g))$ is positive definite. Putting these together, we find that the \emph{absolute reduced norm} \begin{align*} N:B &\to \mathbb R \\ g &\mapsto 2y^2 \textstyle{\sum_{v \in S, v \mid \infty}} \nrd_v(g) + \invrad(g) = |f_g(p)|^2 + 2y^2 \Tr_{F/\mathbb Q} \nrd(g) \end{align*} is positive definite and gives $\mathcal{O}$ the structure of a lattice of rank $4n$. The elements $g \in \mathcal{O}$ with small absolute reduced norm $N$ are those such that $|f_g(p)|$ and $\Tr_{F/\mathbb Q} \nrd(g)$ are both small---in particular, this will include the elements of $\mathcal{O}_1^*$ with small $\invrad$ (with respect to $p \in \mathfrak{H}$), which correspond to elements $g \in \Gamma$ whose isometric circle in $\mathfrak{D}$ (centered at $p$) has large radius. Since the Dirichlet domain $D(p)$ has only finitely many sides, those $g \in \Gamma$ whose radius $\rangled(g)$ is sufficiently small cannot contribute to the boundary of $D(p)$. Hence, one simple idea to construct $D(p)$ would be to enumerate all elements of $\mathcal{O}_1^*$ by increasing absolute reduced norm $N$ until the exterior domain of these elements has area equal to $\mu(\Gamma\backslash\mathfrak{H})$. This method shows that $D(p)$ is indeed computable, and may have been known to Klein; it is mentioned by Katok \cite{Katok2} when $F=\mathbb Q$ and sees further explication by Johansson \cite{Johansson}. 
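Again as an aside, these quantities are easy to experiment with numerically. The following Python sketch (with hypothetical names, floating-point arithmetic, and an arbitrary illustrative choice of $h$, $k$, and $p$) computes $\iota_\infty(g)$ via (\ref{embedmin}) for $g=x+y\alpha+z\beta+w\alpha\beta$, together with $\invrad(g)$ and the radius $\rangled(g)=2\impart(p)/|f_g(p)|$ of the isometric circle.
\begin{verbatim}
import numpy as np

def iota(x, y, z, w, h, k):
    # iota_oo(g) for g = x + y*alpha + z*beta + w*alpha*beta, via the
    # embedding alpha -> diag(sqrt(h), -sqrt(h)),
    #           beta  -> [[0, sqrt(|k|)], [sgn(k)*sqrt(|k|), 0]]   (h > 0).
    sh, sk = np.sqrt(h), np.sqrt(abs(k))
    alpha = np.array([[sh, 0.0], [0.0, -sh]])
    beta = np.array([[0.0, sk], [np.sign(k) * sk, 0.0]])
    return x * np.eye(2) + y * alpha + z * beta + w * (alpha @ beta)

def f_g(g, t):
    # f_g(t) = c t^2 + (d - a) t - b, whose roots are the fixed points of g.
    a, b, c, d = g.ravel()
    return c * t**2 + (d - a) * t - b

def invrad(g, p):
    # invrad(g) = |f_g(p)|^2 + 2 Im(p)^2 det(g).
    return abs(f_g(g, p))**2 + 2 * p.imag**2 * np.linalg.det(g)

def radius(g, p):
    # Radius 2 Im(p)/|f_g(p)| of the isometric circle of g^phi (det(g) = 1).
    return 2 * p.imag / abs(f_g(g, p))

h, k, p = 2.0, 3.0, 1j               # an illustrative choice; base point p = i
g = iota(1.0, 1.0, 1.0, 0.0, h, k)   # the element 1 + alpha + beta
a, b, c, d = g.ravel()
assert abs(invrad(g, p) - (a**2 + b**2 + c**2 + d**2)) < 1e-10   # p = i case
assert abs(invrad(g, p) - 2 * (1 + h + abs(k))) < 1e-10          # coordinate form
u = np.array([[2.0, 1.0], [3.0, 2.0]])   # det 1; f_u(i) = -4, so radius = 1/2
assert abs(radius(u, p) - 0.5) < 1e-12
\end{verbatim}
In Algorithm \ref{enumlll} below, the enumeration of elements with $N(g)\le C$ is carried out with respect to precisely this positive definite form.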
Using the above framework, we can immediately improve upon this method by enumerating such elements efficiently using lattice reduction, as follows. \begin{alg} \langlebel{enumlll} Let $\mathcal{O} \subset B$ be a quaternion order. This algorithm returns a Dirichlet domain for $\Gamma(\mathcal{O})$. \begin{enumalg} \item Compute $A=\mu(\Gamma(\mathcal{O})\backslash \mathfrak{H})$ by Remark \ref{areacompute}. \item Embed $\mathcal{O} \hookrightarrow \mathbb R^{4n}$ as a lattice using the absolute reduced norm form $N$, and choose $C \in \mathbb R_{>0}$. \item Using the Fincke-Pohst algorithm \cite{FinckePohst}, compute the set \[ G(C)=\left\{\iota_\infty(g/u): \pm g \in \mathcal{O},\ N(g) \leq C,\ \nrd(g)=u^2 \in \mathbb Z_F^{*2}\right\} \subset \Gamma. \] \item From Algorithm \ref{algdomG}, compute $E=\ext(G(C))$. If $\mu(E)=A<\infty$, then return $E$; otherwise, increase $C$ and return to Step 2. \end{enumalg} \end{alg} \begin{rmk} \langlebel{chooseC} In choosing $C$, we note that \[ \{g \in \mathcal{O}_1^*: \rangled(g) \geq R\}=\left\{g \in \mathcal{O}: N(g) \leq 2y^2\left(n+\frac{2}{R^2}\right)\right\} \cap \mathcal{O}_1^*; \] in practice, we would like to take $C$ large enough so that $G(C) \neq \emptyset$ but not too large. It is not immediately clear how to choose $C$ (and a strategy for its incrementation) optimally in general, unless one knows something about the radii of the sides of the Dirichlet domain. \end{rmk} Our final algorithm (Algorithm \ref{algDO}) significantly improves on Algorithm \ref{enumlll} by the use of a reduction algorithm, which we introduce in the next section. \section{Reduction algorithm} In this section, we introduce the reduction algorithm (Algorithm \ref{algred}) which forms the heart of the paper. This algorithm will allow us to find a normalized basis for the group $\Gamma$ (Algorithm \ref{algD}), yielding a fundamental domain. Throughout this section, let $G=\{g_1,\dots,g_t\} \subset \Gamma \setminus \{1\}$ be an (ordered) finite subset of a Fuchsian group $\Gamma$, and denote by $\langle G \rangle$ the group generated by $G$. For any $z \in \mathfrak{D}$, we have a map \begin{align*} \rho: \Gamma &\to \mathbb R_{\geq 0} \\ \gamma &\mapsto \rho(\gamma;z)=d(\gamma z, 0) \end{align*} where $d$ denotes hyperbolic distance. We abbreviate $\rho(\gamma;0)=\rho(\gamma)$. \begin{defn} Let $z \in \mathfrak{D}$. An element $\gamma \in \Gamma$ is \emph{$(G,z)$-reduced} if for all $g \in G$, we have $\rho(\gamma;z) \leq \rho(g \gamma;z)$, and $\gamma$ is \emph{$G$-reduced} if it is $(G,0)$-reduced. \end{defn} \begin{rmk} \langlebel{rmkgred} By Proposition \ref{forddom}, we note that $\gamma$ is $(G,z)$-reduced if and only if $\gamma z \in \ext(G)$. \end{rmk} We arrive at the following straightforward algorithm to perform $(G,z)$-reduction. \begin{alg} \langlebel{algred} Let $\gamma \in \Gamma$ and let $z \in \mathfrak{D}$. This algorithm returns elements $\overline{\gamma} \in \Gamma$ and $\delta \in \langle G \rangle$ such that $\overline{\gamma}$ is $(G,z)$-reduced and $\overline{\gamma}=\delta\gamma$. \begin{enumalg} \item Initialize $\overline{\gamma} := \gamma$ and $\delta := 1$. \item If $\rho(\overline{\gamma};z) \leq \rho(g \overline{\gamma};z)$ for all $g \in G$, return $\overline{\gamma}, \delta$. Otherwise, let $g \in G$ be the first element in $G$ such that \[ \rho(g \overline{\gamma};z) = \min_i \rho(g_i \overline{\gamma};z). \] Let $\overline{\gamma} := g \overline{\gamma}$ and $\delta := g \delta$, and return to Step $2$. 
\end{enumalg} \end{alg} We denote the output of the above algorithm $\overline{\gamma}=\reduc_G(\gamma;z)$ and abbreviate $\reduc_G(\gamma;0)=\reduc_G(\gamma)$. \begin{proof}[Proof of correctness] The output of the algorithm $\overline{\gamma}$ is by definition $G$-reduced. The algorithm terminates because if $\overline{\gamma}_1,\overline{\gamma}_2,\dots$ are the elements that arise in the iteration of Step 2, then $\rho(\overline{\gamma}_1;z)>\rho(\overline{\gamma}_2;z)>\dots$; however, the action of $\Gamma$ is discrete, so among the points $\{\overline{\gamma}_i(z)\}_i$, only finitely many are distinct. \end{proof} A priori, Step $2$ in Algorithm \ref{algred} depends on the ordering of the set $G$ and therefore the output $\overline{\gamma}$ will depend on this ordering. This is analogous to the situation of the reduction theory of polynomials, as follows. Let $k$ be a field, let $R=k[x_1,\dots,x_n]$ be the polynomial ring over $k$ in $n$ variables with a choice of term order, and let $G=g_1,\dots,g_t \in R$ be not all zero. Applying the generalized division algorithm, one can reduce a polynomial $f \in R$ with respect to $G$, and the result is unique (i.e., independent of the ordering of the $g_i$) for all $f$ if $G$ is a Gr\"obner basis of the ideal $I=\langle g_1, \dots, g_t \rangle$. Moreover, if $G$ is a Gr\"obner basis, then $f \in I$ if and only if the remainder on division of $f$ by $G$ is zero. (See e.g.\ Cox-Little-O'Shea \cite[Chapter 2]{CLO}.) We can prove analogous statements, replacing the ring $R$ by the group $\Gamma$, as follows. \begin{prop} \langlebel{normbas} Suppose that $\ext(G)$ is a fundamental domain for $\langle G \rangle$. Then for almost all $z \in \mathfrak{D}$, $\reduc_G(\gamma;z)$ as an element of $\Gamma$ is independent of the ordering of $G$ for all $\gamma \in \langle G \rangle$. Moreover, for all $\gamma \in \Gamma$, we have $\reduc_G (\gamma)=1$ if and only if $\gamma \in \langle G \rangle$. \end{prop} Here, ``almost all'' means for all $z$ outside of a set of measure zero: it suffices to take $z$ in the $\Gamma$-orbit of the interior of $\ext(G)$. \begin{proof} Suppose that $\ext(G)$ is a fundamental domain for $\langle G \rangle$. Let $z$ be in the $\Gamma$-orbit of $z_0 \in \inter(\ext(G))$, let $\gamma \in \langle G \rangle$, and let $\overline{\gamma}=\reduc_G(\gamma;z)$. Then by Remark \ref{rmkgred}, we have $\overline{\gamma}z \in \ext(G)$, and since $\ext(G)$ is a fundamental domain and $\Gamma z = \Gamma z_0$ with $z_0 \in \inter(\ext(G))$, we must have $\overline{\gamma}z=z_0$; in particular, $\overline{\gamma}$ is unique and independent of the ordering of $G$. The second statement follows similarly: we have that $0,\overline{\gamma}(0) \in \inter(\ext(G))$, so if $\overline{\gamma} \neq 1$ then $\gamma \not\in \langle G \rangle$. \end{proof} Inspired by the preceding proposition, we make the following definition. \begin{defn} A set $G$ is a \emph{basis for $\Gamma$} if $\ext(G)$ is a fundamental domain for $\langle G \rangle=\Gamma$. If $G$ is a basis that forms a normalized boundary for $\Gamma$, then we say that $G$ is a \emph{normalized basis}. 
\end{defn} \begin{rmk} \langlebel{wordprob} It follows from Proposition \ref{normbas} that if one can compute a normalized basis $G$ for $\Gamma$, then one also has a solution to the word problem: given any element $\gamma \in \Gamma$, we compute $\overline{\gamma}=\reduc_G \gamma$, which by Proposition \ref{normbas} must satisfy $\overline{\gamma}=1$, so we have explicitly written $\gamma$ as a word from $G$. \end{rmk} We construct a normalized basis for $\langle G \rangle$ as follows. \begin{alg} \langlebel{algD} Let $G \subset \Gamma$. This algorithm returns a normalized basis for $\langle G \rangle$. \begin{enumalg} \item Let $G := \{g_1,\dots,g_t,g_1^{-1},\dots,g_t^{-1}\}$. \item Compute the normalized boundary $U$ of $\ext(G)$ by Algorithm \ref{algdomG}. \item Let $G' := U$. For each $g \in G$, compute $\overline{g}=\reduc_{G \setminus \{g\}}(g)$ using Algorithm $\ref{algred}$. If $\overline{g} \neq 1$, set $G':=G' \cup \{\overline{g}\}$. \item Compute the normalized boundary $U'$ of $\ext(G')$. If $U'=U$, set $G := G'$ and proceed to Step 5; otherwise set $U := U'$ and return to Step 3. \item If all vertices of $E=\ext(U)$ are paired, return $U$. Otherwise, for each $g \in G$ with a vertex $v \in I(g)$ which is not paired, compute $\overline{g} := \reduc_G(g;v)$, where if $v$ is a vertex at infinity we replace $v$ by a nearby point in $I(g^{-1}) \setminus E \subset \mathfrak{D}$. Add the reductions $\overline{g}$ for each nonpaired vertex $v$ to $G$ and return to Step $2$. \end{enumalg} \end{alg} \begin{proof}[Proof of correctness] First, note that if $v$ is a vertex of $E=\ext(G)$, then by Corollary \ref{gginv}, $v$ is a paired vertex if and only if for every side $s \subset I(g)$ containing $v$, we have that $gv \in I(g^{-1})$ is a vertex of $E$. Next, we prove that if the algorithm terminates it does so correctly. We construct a side pairing as in \S 1. A side $s$ of $E$ pairs up with $gs \subset I(g^{-1})$ if and only if its vertices are paired, necessarily with the vertices of $I(g^{-1})$ by Corollary \ref{gginv}. Therefore if we terminate in Step 5, we have in fact paired all sides of $\ext(U)$ and by Theorem \ref{sidepair}, $\ext(U)$ is a Dirichlet domain and $U$ is a basis. Otherwise, by Step 5 we have $v \in s$ such that $gv \not \in \ext(G)$. We now compute $\overline{g}=\reduc_G(g; v)$, and refer to Proposition \ref{forddom}. Since $v \in I(g)$, we have $d(v,0)=d(gv,0)$, and since $gv \not\in \ext(G)$, we have $d(gv,0) > d(\overline{g} v, 0)$. Putting these together, we find that $v \in \inter(I(\overline{g}))$ and hence $\ext(G \cup \{\overline{g}\}) \subsetneq \ext(G)$. Consider now the limit of the sets $G_\infty=\lim G$ and $U_\infty=\lim U$ as we let the algorithm run forever. Accordingly, every vertex $v$ of $\ext(U_\infty)$ must be paired, otherwise it would be caught in some step of the algorithm. Therefore by the above, $U_\infty$ is a basis for $\langle G_\infty \rangle$. But at each step of the algorithm, the group $\langle G \rangle$ remains the same, even as $G$ changes: indeed, in Step 3, if $\overline{g}=1$ then $g \in \langle G \setminus \{g\}\rangle$. Therefore $\langle G_\infty \rangle = \langle G \rangle$, and since $\langle G \rangle$ is finitely generated we know that $U$ is finite, and hence the algorithm terminates after finitely many steps. \end{proof} We now extend this in the natural way to an arithmetic Fuchsian group $\Gamma(\mathcal{O})$. \begin{alg} \langlebel{algDO} Let $\mathcal{O}$ be a quaternion order.
This algorithm returns a basis $G$ for $\Gamma=\Gamma(\mathcal{O})$. \begin{enumalg} \item Choose $C \in \mathbb R_{>0}$, initialize $G := \emptyset$, and compute $A=\mu(\Gamma \backslash \mathfrak{H})$. \item Using Steps 1--2 in Algorithm \ref{enumlll}, compute the set $G(C) \subset \Gamma$. \item Call Algorithm \ref{algD} with input $G \cup G(C)$ and let $G$ be the output. If $\mu(\ext(G))=A<\infty$, then return $G$; otherwise, increase $C$ and return to Step 2. \end{enumalg} \end{alg} A fundamental domain for an arithmetic Fuchsian group $\Gamma \subset \Gamma(\mathcal{O})$ can easily be computed from this by first running Algorithm \ref{algDO} and then computing a coset decomposition of $\Gamma$ in $\Gamma(\mathcal{O})$; and for that reason, one may even restrict consideration to the case where $\mathcal{O}$ is maximal. \begin{rmk} In practice, in some cases we can improve Step 5 of Algorithm \ref{algD} for arithmetic Fuchsian groups as follows. For each nonpaired vertex $v$, we can consider those elements with small absolute reduced norm $N$ relative to $p \in \mathfrak{D}$ taken to be a point along the geodesic between $0$ and $v$: indeed, by continuity if $g \in \mathcal{O}_1^*$ has $v \in \inter(I(g))$, then $\rangled(g)$ increases as the center $p$ moves towards $v$ and thus $N(g)$ decreases, so using lattice reduction we are likely to find a small such $g$. \end{rmk} \section{Proof of the main theorem} We are now ready to prove the main theorem of this paper. \begin{thm} There exists an algorithm which, given a finitely generated Fuchsian group $\Gamma$ and a point $p \in \mathfrak{H}$ with $\Gamma_p=\{1\}$, returns the Dirichlet domain $D(p)$, a side pairing for $D(p)$, and a finite presentation for $\Gamma$ with a minimal set of generators. \end{thm} To prove the theorem, we need to show how the output of Algorithm \ref{algD} yields a finite presentation for $\Gamma$ with a minimal set of generators. Indeed, Algorithm \ref{algD} terminates only if it has computed a side pairing $P$ (which we may assume meets the convention in \S 1) for the Dirichlet domain $D$. Such a side pairing $P$ gives a set $G$ of generators for $\Gamma$ by Proposition \ref{sidepair}. We now consider the induced relation on the set of vertices. A \emph{cycle} of $D$ is a sequence $v_1,\dots,v_n=v_1$ which is the (ordered) intersection of the $\Gamma$-orbit of $v=v_1$ with $D$. To each cycle, we associate the word $g=g_ng_{n-1}\cdots g_2g_1$ where $g_i(v_i)=v_{i+1}$ and the indices are taken modulo $n$. We say that a cycle is a \emph{pairing cycle} if $g_i \in G$ for all $i$, and without further mention we shall assume from now on that a cycle is a pairing cycle. A cycle is \emph{minimal} if $v_i \neq v_j$ for all $i \neq j$. Every vertex $v$ of $D$ is contained in a unique minimal cycle (up to reversion and cyclic permutation). Indeed, by the uniqueness of the side pairing, a vertex $v \in I(g) \cap I(g')$ either has $v=gv=g'v$, in which case $v$ has nontrivial stabilizer and one has the singleton cycle $v$, or $v$ has trivial stabilizer and is paired with the distinct elements $gv \in I(g^{-1})$ and $g'v \in I(g'^{-1})$, each of which also has trivial stabilizer, and then continuing in this way one constructs a (unique minimal) cycle. This analysis gives rise to the following algorithm. \begin{alg} \langlebel{mincycles} Let $P$ be a side pairing for a Dirichlet domain $D$ for $\Gamma$. This algorithm returns a set of minimal cycles for $D$. 
\begin{enumalg} \item Initialize $V$ to be the set of vertices of $D$ and $M := \emptyset$. \item If $V = \emptyset$, terminate. Otherwise, choose $v \in V$ with $v=I(g) \cap I(g')$ for $g,g' \in G(P)$. If $gv=v$, add the cycle $v$ to $M$ and remove $v$ from $V$, and return to Step 2. Otherwise, let $i := 1$ and $v_1 := v$. \item Let $v_{i+1} := gv_i \in I(g^{-1}) \cap I(g')$. If $v_{i+1}=v_1$, add the cycle $v_1,\dots,v_i$ to $M$, remove these elements from $V$, and return to Step 2; otherwise, increment $i := i+1$, let $g := g'$ and return to Step 3. \end{enumalg} \end{alg} The relations associated to minimal cycles have the following important property. \begin{lem} Let $g \in G$ be a side-pairing element. Then $g$ appears at most once in any word associated to a minimal cycle. Moreover, $g$ and its inverse appears in exactly two such words. \end{lem} \begin{proof} By definition, a side-pairing element $g$ pairs a unique set of sides: in particular, $g$ pairs the vertices of one side $s$ with the vertices of another. Suppose that $g$ occurs twice in a word associated to a minimal cycle. Then by minimality, the vertices of $s$ are in the same $\Gamma$ orbit. But this implies that $g$ maps $I(g)$ to itself, so $g$ has order $2$ and therefore one of the vertices of $s$ is fixed by $g$, a contradiction. In a similar way, we see that $g$ and its inverse can appear in at most two words since each vertex belongs to exactly one minimal cycle. \end{proof} We have the following characterization of the minimal cycles. \begin{prop}{Beardon \cite[Theorem 9.4.5]{Beardon}} \langlebel{areazero} For all $p \in \mathfrak{H}$ outside of a set of area zero, the following statements hold: \begin{enumroman} \item Every elliptic cycle has length $1$; \item Every accidental cycle has length $3$; and \item Every parabolic cycle has length $1$. \end{enumroman} \end{prop} \begin{rmk} \langlebel{exceptp2} The exceptional set of $p$ is contained in the union \[ E_2=\bigcup_{f,g,h \in \Gamma} \{z : R(z) \in \mathbb R\} \] over all triples $f,g,h \in \Gamma$ such that \[ R(z)=\frac{(z-gz)(fz-hz)}{(z-fz)(gz-hz)} \] is not constant. It is easy to see that the set $E_2$ has area zero. \end{rmk} For the purposes of computing a minimal set of generators and relations, we may and do assume that $p$ does not lie in the exceptional set; indeed, a sufficiently general choice of $p$ will suffice, and so in practice the conditions of Proposition \ref{areazero} always hold. In particular, every elliptic cycle is represented by a minimal cycle (whose fixed point is a vertex of $D$). Now, to each cycle, associated to the word $g$, we further associate a relation in $\Gamma$ as follows. By definition, we have $g \in \Gamma_v$, and therefore we have one of three possibilities. If $\#\Gamma_v=1$, then we have the relation $g=1$; we call $g$ an \emph{accidental cycle}. If $1<\#\Gamma_v<\infty$, then we associate the relation $g^k=1$ where $k$ is the order of $g$, and we call $g$ an \emph{elliptic cycle}. Otherwise, if $\#\Gamma_v=\infty$, then we associate the empty relation, a \emph{parabolic cycle}. We note that the latter occurs if and only if $g$ has infinite order if and only if $\trd(g)=\pm 2$, so the relation $g$ is computable. We now appeal to the structure theory for Fuchsian groups with cofinite area \cite[\S 4.3]{Katok}. Suppose that $\Gamma$ has exactly $t$ elliptic cycles of orders $m_1,\dots,m_t \in \mathbb Z_{\geq 2}$ and $s$ parabolic cycles, and that $X=\Gamma \backslash \mathfrak{H}$ has genus $g$. 
We say then that $\Gamma$ has signature $(g;m_1,\dots,m_t;s)$. Moreover, $\Gamma$ is generated by elements \begin{equation} \langlebel{gens} \alpha_1,\dots,\alpha_g,\beta_1,\dots,\beta_g,\gamma_1,\dots,\gamma_t,\gamma_{t+1},\dots,\gamma_{t+s} \end{equation} subject to the relations \begin{equation} \langlebel{relats} \gamma_1^{m_1}=\dots=\gamma_t^{m_t}=[\alpha_1,\beta_1] \cdots [\alpha_g,\beta_g]\gamma_1\cdots \gamma_{t+s} =1, \end{equation} where $[\alpha,\beta]=\alpha\beta\alpha^{-1}\beta^{-1}$ is the commutator. (One obtains a minimal set of generators from this presentation by eliminating $\gamma_{t+s}$ whenever $t+s>0$.) From the set of generators coming from the side-pairing elements and the set of relations coming from the minimal cycles, we can build a minimal set of generators and relations by ``back substitution''. First, we prove a lemma. \begin{lem} \langlebel{freeproduct} Suppose $\Gamma \cong \Gamma_1 * \Gamma_2$ is a free product, and that $\gamma_i \in \Gamma_1$ or $\Gamma_2$ for $i=1,\dots,s+t$. Then either $\Gamma_1$ or $\Gamma_2$ is isomorphic to the free product of cyclic groups. \end{lem} \begin{proof} Let $\phi:\Gamma \xrightarrow{\sim} \Gamma_1 * \Gamma_2$ be an isomorphism. Passing to the quotient by the $\gamma_i$, for $i=1,\dots,t+s$, we may assume that $s=t=0$. But then the homology groups $H_i(\Gamma,\mathbb Z)$ (coming from group homology) coincide with the homology groups $H_i(Y,\mathbb Z)$ (coming from topology) where $Y$ is the orientable surface of genus $g$ \cite[\S II.4]{Brown}; in particular, we have $H_0(\Gamma,\mathbb Z)=\mathbb Z$. By the Mayer-Vietoris sequence \cite[Corollary II.7.7]{Brown}, we have \[ \mathbb Z \cong H_0(\Gamma,\mathbb Z) \cong H_0(\Gamma_1*\Gamma_2,\mathbb Z) \cong H_0(\Gamma_1,\mathbb Z) \oplus H_0(\Gamma_2,\mathbb Z) \] so say $H_0(\Gamma_2,\mathbb Z)=0$; but this immediately implies $\Gamma_2$ is trivial as well, and the result now follows. \end{proof} \begin{alg} \langlebel{minrelations} Let $P$ be a side pairing for $D$ and let $M$ be a set of minimal cycles for $D$. This algorithm returns a minimal set of generators and relations for $\Gamma$. \begin{enumalg} \item Let $H \subset G(P)$ be such that $g \in G$ implies either $g=g^{-1}$ or $g^{-1} \not \in G$. \item Let $R$ be the set of elliptic cycles in $M$ and let $A$ be the set of accidental cycles. Initialize $r$ to be an element of $A$ and remove $r$ from $A$. \item If $A = \emptyset$, add $r$ to $R$ and return the generators $H$ and the relations $R$. Otherwise, choose an element $g \in A$ such that $g$ and $r$ have an element $g_i \in H$ in common; then solve for $g_i$, substitute this expression in for $g_i$ in the relation $r$, and remove $g_i$ from $H$. Return to Step 3. \end{enumalg} \end{alg} \begin{proof}[Proof of correctness] If in Step $3$ there is always an element $g \in A$ such that $g$ and $r$ have an element in common, then the algorithm terminates correctly: in the notation of (\ref{gens}--\ref{relats}), there are exactly $t+1$ relations, and hence the set of generators must also be minimal. So suppose otherwise. Let $H_1$ be the set of $g \in H$ such that $g$ or $g^{-1}$ occurs in the relation $r$ and let $H_2=H \setminus H_1$. Let $\Gamma_1,\Gamma_2$ be the groups generated by $H_1,H_2$. Then by assumption, $\Gamma$ is the free product of $\Gamma_1$ and $\Gamma_2$. 
By Lemma \ref{freeproduct}, since the relation in $\Gamma_1$ is nontrivial, it follows that $\Gamma_2$ is the free product of finite cyclic groups, and hence cannot contain any accidental cycles, which is a contradiction. \end{proof} The minimal presentation resulting from Algorithm \ref{minrelations} is not necessarily of the form (\ref{gens})--(\ref{relats}); we refer to the methods of Imbert \cite{Imbert} for an alternative approach using fat graphs which computes such a canonical presentation. This completes the proof of the theorem and the accompanying corollaries in the introduction. \begin{rmk} If in the first corollary, one wants the structure of $\mathcal{O}^*$, we use the exact sequence \[ 1 \to \mathbb Z_F^{*2} \mathcal{O}_1^* \to \mathcal{O}^* \xrightarrow{\nrd} \mathbb Z_{F,+}^*/\mathbb Z_F^{*2} \to 1 \] where $\mathbb Z_{F,+}^*=\{u \in \mathbb Z_F^* : v(u)>0\text{ for all ramified places $v \mid \infty$}\}$. From the solution to the word problem, it then suffices to find elements $\gamma \in \mathcal{O}^*$ such that $\nrd(\gamma)=u$ generates the finite group $\mathbb Z_{F,+}^*/\mathbb Z_F^{*2}$, and these can be found using the methods of \S 3. \end{rmk} \section{Examples} \langlebel{examples} We have implemented a variant of the above algorithm in the computer system \textsf{Magma} \cite{Magma}. In this section, we provide two examples of the output of this algorithm. First, we consider the quaternion algebra $B=\quat{3,-1}{\mathbb Q}$ of discriminant $6$. A maximal order $\mathcal{O}$ is given by \[ \mathcal{O} = \mathbb Z \oplus \mathbb Z\alpha \oplus \mathbb Z\beta \oplus \mathbb Z\frac{1+\alpha+\beta+\alpha\beta}{2}. \] We consider the Eichler order contained in $\mathcal{O}$ of level $13$, given by \begin{align*} \mathcal{O}(13) &= \mathbb Z \oplus \mathbb Z\frac{3-5\alpha-5\beta+3\alpha\beta}{2} \oplus \mathbb Z(2-2\alpha-\beta+\alpha\beta) \\ & \qquad \oplus \mathbb Z\frac{13-13\alpha-13\beta+13\alpha\beta}{2}. \end{align*} We denote $\Gamma(\mathcal{O})=\Gamma_0^{6}(13)$. We embed $B \hookrightarrow M_2(\mathbb R)$ by the embedding (\ref{embedmin}), and take $p=9i/10 \in \mathfrak{H}$. By (\ref{shimizu}), we compute that the Fuchsian group $\Gamma_0^{6}(13)$ has coarea $14/3$. Step 2 in Algorithm \ref{algDO} finds the units $(1-\alpha-3\beta+\alpha\beta)/2,\alpha-2\beta,\dots$, and following the algorithm, reduction and further enumeration automatically yields the fundamental domain as in Figure \ref{examples}.1. (The methods in \textsf{Magma} for producing the postscript graphic are due to Helena Verrill \cite{Verrill}.) This domain already exhibits significant complexity: it has $38$ sides and hence $19$ side-pairing elements, which yields a set of $10$ minimal generators $\gamma_1,\dots,\gamma_{10}$ for $\Gamma_0^{6}(13)$, namely \begin{center} $12-7\alpha+4\beta+2\alpha\beta,\ (1-\alpha-33\beta-19\alpha\beta)/2,\ 2\alpha+16\beta+9\alpha\beta$, \\ $(37-19\alpha+9\beta+11\alpha\beta)/2,\ 2\alpha + 4\beta + \alpha\beta,\ (1-\alpha-3\beta+\alpha\beta)/2$, \\ $\alpha-2\beta,\ (1+7\alpha-15\beta-5\alpha\beta)/2,\ (1+7\alpha-45\beta-25\alpha\beta)/2,\ \alpha-14\beta-8\alpha\beta$, \end{center} subject to the relations \begin{eqnarray*} \gamma_3^2=\gamma_5^2=\gamma_7^2=\gamma_{10}^2=\gamma_2^3=\gamma_6^3=\gamma_8^3=\gamma_9^3=1 \\ \gamma_1^{-1}\gamma_4\gamma_5\gamma_6^{-1}\gamma_1\gamma_2^{-1}\gamma_3\gamma_4^{-1}\gamma_7\gamma_8^{-1}\gamma_9^{-1}\gamma_{10}^{-1}=1. 
\end{eqnarray*} We deduce that $\Gamma_0^{6}(13)$ has signature $(1;2,2,2,2,3,3,3,3;0)$, a fact which can be independently verified by well-known formulae \cite{AB}. Second, we consider the totally real number field $F$ generated by a root $t$ of the polynomial $x^7-x^6-6x^5+4x^4+10x^3-4x^2-4x+1$; it is the minimal septic totally real field, having discriminant $d_F=20134393=71 \cdot 283583$. We consider the quaternion algebra $B$ which is ramified at $6$ of the $7$ real places of $F$ and no finite place: explicitly, $B=\quat{h,k}{F}$ where $h=-t^6+6t^4+t^3-9t^2-3t+1$ and $k=-t^2+2t-1$, and in fact $h,k \in \mathbb Z_F^*$. We compute a maximal order $\mathcal{O}$ of $B$. Letting $\Gamma=\Gamma(\mathcal{O})$, we see that $\Gamma$ has coarea $5/2$. The output of Algorithm \ref{algDO} in this case is given in Figure \ref{examples}.2; we find that $\Gamma$ has signature $(0;2,2,2,2,2,3,3,3;0)$. We conclude by noting that it would be interesting to extend the methods in this paper to other arithmetic groups; this would allow the computation of unit groups for a wider range of quaternion algebras over number fields and would have further consequences for the algorithmic theory of Shimura varieties. \end{document}
\begin{document} \title{Learning the Structure for Structured Sparsity\thanks{This article corresponds to the version of \citep{SheBac15} \begin{abstract} Structured sparsity has recently emerged in statistics, machine learning and signal processing as a promising paradigm for learning in high-dimensional settings. All existing methods for learning under the assumption of structured sparsity rely on prior knowledge on how to weight (or how to penalize) individual subsets of variables during the subset selection process, which is not available in general. Inferring group weights from data is a key open research problem in structured sparsity. In this paper, we propose a Bayesian approach to the problem of group weight learning. We model the group weights as hyperparameters of heavy-tailed priors on groups of variables and derive an approximate inference scheme to infer these hyperparameters. We empirically show that we are able to recover the model hyperparameters when the data are generated from the model, and we demonstrate the utility of learning weights in synthetic and real denoising problems. \end{abstract} \section{Introduction} \label{sec:intro} High-dimensional prediction problems are more and more common in many application domains such as computational biology, signal processing, computer vision or natural language processing. To handle this high-dimensionality, one usually resorts to linear modeling and regularization with sparsity-inducing norms, such as the $\ell_1$ norm. This type of regularization results in \emph{sparse} models, meaning that the model is described by relatively few parameters. Besides making parameter learning consistent in high-dimensional settings, the sparsity assumption has the appealing property of yielding more interpretable models. As an example, consider the problem of explaining a particular phenotype of patients, e.g., the disease state, based on the genome sequence of each patient. Sparse linear approaches try to find a handful of genetic loci that govern the disease state, rather than a model involving the whole sequence. The $\ell_1$-regularized sparse linear models, such as the LASSO \citep{Tibshirani94} or basis pursuit \citep{chen}, are well studied by now, with a solid body of theoretical results, efficient algorithms and applications in diverse fields \citep[see, e.g.,][and references therein]{BuhGee11}. However, in practice, we often know that there is more \emph{structure} in the problem at hand, which cannot be captured by simple sparse modeling and $\ell_1$ regularization, and which, if exploited, can improve the estimation of parameters as well as the interpretability of the estimates \citep[see][and references therein]{Cevher2008,Huang2011,Bachetal12a}. In our example, we could expect the genetic loci that influence the disease to be part of a small number of connected patterns in a known gene-gene interaction network \citep{Rapetal07,Azencottetal13}. In other words, we could be looking for a small number of possibly overlapping subsets of variables such that each subset corresponds to a connected subgraph in a given gene network, and the combination of variables in each subset influences the phenotype. Given prior knowledge about the relevance of each considered group of variables, several methods exist for learning sparse models guided by this prior knowledge. 
These methods achieve different kinds of structured sparsity by regularization (penalization, weighting) with appropriate sparsity-inducing norms, that often correspond to convex relaxations of combinatorial penalties on the support (i.e., the set of indices of non-zero components) of the parameter vector. After the group LASSO \citep{YuaLin06}, a number of convex penalties have been proposed, generalizing the group LASSO penalty to the cases of overlapping groups \citep{ZhaYu09, JacOboVer09, JenAudBac11, Chenetal12}, including tree-structured groups \citep{KimXin10,Jenattonetal11}. See \citep{Bachetal12,Bachetal12a} for a more detailed review of sparsity-inducing norms. \begin{figure} \caption{The coefficient vector $w$ is covered by latent variables supported on subsets $A$, $B$ and $C$: $w = v_A+v_B+v_C$.} \label{fig:w} \end{figure} While most of these norms induce \emph{intersection-closed} sets of non-zero patterns, \citet{JacOboVer09} and \citet{OboBac12} introduce a different, latent formulation of sparsity-inducing norms that yields \emph{union-closed} sets of non-zero patterns, meaning that the parameter vector $w$ is represented as a sum of latent vectors $v_A$, identically zero at indices not in~ $A$ for a subset $A$ of indices. If several such sets of indices are considered, then the support of $w$ (i.e., the set of indices $i$ for which $w_i$ is non-zero) is included in the union of such sets (see Figure~\ref{fig:w} for illustration with three sets $A$, $B$ and $C$). In order to quantify the intuition above, \citet{OboBac12} consider the following function on the support ${\rm supp}(w)$ of $w$: \begin{equation} g({\rm supp}(w)) = \min_{\substack{\Acal' \subseteq \Acal,\\ \cup_{A\in\Acal'} A ={\rm supp}(w)}} \sum_{A\in\Acal'} f(A), \end{equation} that is, $g({\rm supp}(w))$ is the minimum-weight \emph{cover} of ${\rm supp}(w)$ with the subsets $A$ in the family $\Acal$. The weights $f(A)$ express our prior belief in the subset~$A$ being relevant: If a group $A$ is irrelevant, then $f(A)=\infty$. Using the function $g$ as a regularizer (essentially the approach of~\citet{Huang2011}) will encourage the support of the parameter vector $w$ to be a union of subsets $A \in \Acal$ with finite $f(A)$. Moreover, \citet{OboBac12} computed a convex relaxation of the function $g$ defined above, leading to the following norm~$\Omega(w)$ equal to: \begin{equation} \label{eq:norm_guillaume} \min_{v_A\in\RR^P} \!\sum_{A\in\Acal} \!\|v_A\|_2 f(A)^{1/2} {\rm \quad s.t.} \sum_{A\in\Acal} v_A\!=\!w. \end{equation} However, generally we do not have this prior knowledge about the relevance of individual groups: The problem of automatically choosing appropriate weights for groups of variables, $f(A)$, is an important open research problem in structured sparsity. Assuming that we have several learning problems with similar structure (the relevance of a given group is largely shared across individual problems), in this paper we propose a framework for learning group relevances from data. Note that learning the structure is naturally a multi-task problem, as it is impossible to estimate the prior on a vector of parameters if we only observe one particular instance of it. To come back to our example, we could assume that we have several phenotypes that can be explained by groups of loci whose relevance is largely shared across phenotypes. A recent approach to learning group relevances from data has been proposed by \citet{HerHer13}. 
However, this work only considers learning relevances of pairs of variables and does not make the link with sparsity-inducing norms. Let us also mention that probabilistic modeling for structured sparsity has also been explored by \citet{MarMur09} and \citet{MarSchMur09} in the context of learning Gaussian graphical models, and by \citet{Hanetal14} for multi-task learning with structure on tasks. We approach the problem using probabilistic modeling with a broad family of heavy-tailed priors and derive a variational inference scheme to learn the parameters of these priors. Our model follows the pattern of \emph{sparse Bayesian} models \citep[][among others]{Palmeretal06,SeeNic11}, that we take two steps further: First, we propose a more general formulation, suitable for structured sparsity with any family of groups; Second, we learn the prior parameters from data. We show that prior parameter estimation with classical variational inference does not always lead to reasonable estimates in these models, and find a way of regularizing that works well in practice. Moreover, we propose a greedy algorithm that makes this inference scalable to settings in which the number of groups to consider is large. In our experiments, we show that we are able to recover the model parameters when the data are generated from the model, and we demonstrate the utility of learning penalties in image denoising. \section{A Probabilistic Model for Structured Sparse Linear Regression} \label{sec:model} In this section, we formally describe our model and a suitable approximate inference scheme. \subsection{Model definition} We consider $K$ linear regression problems with design matrices $X^k \in \RR^{N^k\times P}$ and response vectors $y^k \in \RR^{N^k}$ for $k\in\{1,\ldots, K\}$. For each~$X^k$ and $y^k$, we assume the classical Gaussian linear model with i.i.d.~noise with variance~$\sigma^2$, that is, \begin{equation} \label{eq:distr_y} y^k \sim \Ncal(X^kw^k, \sigma^2 I). \end{equation} Let $V$ be the set of indices of variables $\{1,\ldots,P\}$. For a family $\Acal$ of subsets of $V$, we assume \begin{equation} \label{eq:w} \displaystyle w^k = \sum_{A\in \Acal} v_A^k, \end{equation} where, for each $k$, \begin{itemize} \item $\forall A\in\Acal, v_A^k$ is a vector in $\RR^P$ such that all its components with indices in $V\setminus A$ are zero (in other words, it is supported on $A$), \item $\{v_A^k\}_{ A \in \Acal}$ are jointly independent, and \item $\forall A\in\Acal, v_A^k$ has an isotropic density with inverse scale parameter~$f(A)$ \begin{equation} \label{eq:prior_v_A} p(v_A^k|f(A))=q_A(\|v_A^k\|_2 f(A)^{1/2})f(A)^{|A|/2}, \end{equation} where $q_A$ is a heavy-tailed distribution that only depends on $A$ through its cardinality, $|A|$. We specify~$q_A$ in Section \ref{sec:super-Gaussian}. \end{itemize} We regard the inverse scale parameter~$f(A)$ as a measure of relevance of the group of variables~$A$\footnote{Abusing notation, we will call ``group $A$'' the subset of variables indexed by elements of $A$ throughout the paper.}: If a group of variables is irrelevant, then~$f(A)$ should equal infinity. We are interested in priors~$q_A$ such that for each task indexed by~$k$ only a handful of~$v_A^k$ can be significantly away from~zero. 
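To make the generative assumptions above concrete, the following small sketch (included purely for illustration; the group family, the values of $f(A)$ and all dimensions are invented, and Student's $t$ is used for $q_A$ as later in Section~\ref{sec:specialcases}) draws multi-task data from the model \eqref{eq:distr_y}--\eqref{eq:prior_v_A}, using the Gaussian scale-mixture representation of the heavy-tailed prior.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative configuration (not taken from the paper's experiments):
P, K, N = 4, 100, 4                           # variables, tasks, samples per task
groups = [(0,), (1,), (2,), (3,), (0, 1, 2)]  # candidate family of groups A
f = {(0,): 2.0, (1,): 2.0, (2,): 50.0,
     (3,): 50.0, (0, 1, 2): 0.5}              # inverse scale parameters f(A)
a, sigma2 = 1.5, 1.0                          # Student's t shape, noise variance

def sample_v(A, fA):
    # s ~ InvGamma(a, 1), then v_A | s ~ N(0, (s / f(A)) I): a Gaussian scale
    # mixture yielding a heavy-tailed (Student's t) prior on v_A.
    s = 1.0 / rng.gamma(shape=a, scale=1.0)
    v = np.zeros(P)
    v[list(A)] = rng.normal(0.0, np.sqrt(s / fA), size=len(A))
    return v

tasks = []
for k in range(K):
    w = sum(sample_v(A, f[A]) for A in groups)       # w^k = sum_A v_A^k
    X = rng.normal(size=(N, P))                      # design X^k
    y = X @ w + rng.normal(0.0, np.sqrt(sigma2), N)  # y^k ~ N(X^k w^k, sigma^2 I)
    tasks.append((X, y, w))
\end{verbatim}
In this invented configuration only the first two singletons and the group of the first three variables have a small inverse scale, so they account for most of the signal in each task.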
Here it is important to stress the link between the expression of our isotropic prior \eqref{eq:prior_v_A} and the norm~$\Omega(w)$ \eqref{eq:norm_guillaume} from \citet{OboBac12}, introduced above: The log-likelihood of parameter vectors $\{w^k\}_{k=1,\ldots,K}$ with respect to $f$ will (up to a constant) be equal to the term $\sum_{A\in\Acal} \log q_A(\|v_A^k\|_2 f(A)^{1/2})$, which very closely resembles the norm \eqref{eq:norm_guillaume}. If~$q_A$ is the generalized Gaussian distribution (cf. Section \ref{sec:specialcases}), the two expressions match exactly. Thus, learning with our prior is a natural probabilistic counterpart of learning with the sparsity-inducing norm~\eqref{eq:norm_guillaume}. Given data $\{X^k, y^k\}_{k=1,\ldots,K}$ and such a model for the prior, our goal will be to infer the parameters~$f(A)$ by maximizing the likelihood with respect to~$f$, \begin{equation} \label{eq:ML_brute} \begin{aligned} \ p(y^1,\ldots,y^K |f) = \prod_{k=1}^K \int p(y^k|X^kw^k,\sigma^2I) \prod_{A\in \Acal}p(v_A^k|f(A)) dv_A^k,\\ \end{aligned} \end{equation} where the parameters~$v_A^k$ are marginalized. \subsection{Super-Gaussian priors} \label{sec:super-Gaussian} We assume that $q_A$ is a \emph{scale mixture of Gaussians}, i.e., \begin{equation*} q_A(u) = \int_0^{\infty} \Ncal(u|0,s) r_A(s) ds \end{equation*} for some mixing density $r_A(s)$. The main reason why we choose to work with the family of scale mixtures of zero-mean Gaussians is that it contains distributions that are heavy-tailed and therefore suitable for modeling sparsity; One such distribution is Student's $t$ which we use in our experiments. The inverse scale parameter of the distribution on $v_A^k$, $f(A)$, captures the relevance of the group~$A$: the smaller $f(A)$, the more relevant the group, that is, the larger the values $v_A^k$ is likely to take. Note that even if the group $A$ is relevant, not all $v_A^k, k=1,\ldots,K$ have to be large. In fact, if the parameters $v_A^k, k=1,\ldots,K$ are drawn from a heavy-tailed distribution with small $f(A)$, then only a fraction of them will be significantly away from zero. Moreover, as we show in Section \ref{sec:variational}, learning in such models is amenable to variational optimization with closed-form updates and leads to an approximate Gaussian posterior on~$v_A^k$. In general, the integral in~\eqref{eq:ML_brute} is intractable for Gaussian scale mixtures, therefore one has to resort to sampling or approximate inference to learn parameters in such models. The fact that $q_A$ is a Gaussian scale mixture implies that it is also \emph{super-Gaussian}, that is, the logarithm of $q_A(u)$ is convex in $u^2$ and non-increasing \citep{Palmeretal06}\footnote{Note that the converse is not true: complete monotonicity of the log-density is a necessary and sufficient condition for the existence of a Gaussian scale mixture representation~\cite[Section 3]{Palmeretal06}.}. It therefore admits a representation of the following form by convex conjugacy \begin{equation} \label{eq:q_superG} \log q_A(u) = \sup_{s \geq 0} -\frac{u^2}{2s} - \phi_A(s), \end{equation} where $\phi_A(s)$ is convex in $1/s$. Note that the expression inside the supremum in \eqref{eq:q_superG} has a unique maximizer. In this work we only consider~$q_A$ for which this maximizer has an analytical simple form. 
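As a concrete one-dimensional illustration (worked out here only for intuition, using the Student's $t$ parametrization given below in Section~\ref{sec:specialcases} with $|A|=1$ and $f(A)=1$), setting the derivative of the expression inside the supremum in \eqref{eq:q_superG} to zero gives the maximizer in closed form,
\[
s^\star = \frac{1+u^2/2}{a+1/2},
\]
and substituting $s^\star$ back recovers $\log q_A(u) = \log\frac{\Gamma(a+1/2)}{\Gamma(a)\sqrt{2\pi}} - \big(a+\tfrac{1}{2}\big)\log\big(1+\tfrac{u^2}{2}\big)$ exactly; this is the same expression as the update for $\zeta_A^k$ given in Section~\ref{sec:specialcases}, with $u^2$ playing the role of $\|v_A^k\|_2^2 + \tr \Sigma_{AA}^k$.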
From~\eqref{eq:prior_v_A} and~\eqref{eq:q_superG}, we get the following variational representation for $p(v_A^k|f(A))$: \begin{equation} \label{eq:var_repr_p_v_A} \begin{aligned} p(v_A^k|f(A))& = f(A)^{\frac{|A|}{2}} \!\! \sup_{\zeta_A^k \geq 0} \exp{\Big( -\frac{\|v_A^k\|_2^2f(A)}{2\zeta_A^k} - \phi_A(\zeta_A^k) \Big)}\\ & = f(A)^{\frac{|A|}{2}} \!\! \sup_{\zeta_A^k \geq 0} \! \Big[ \! \Ncal \! \Big(v_A^k \Big| 0,\frac{\zeta_A^k I}{f(A)} \Big) \!\Big(2\pi\frac{\zeta_A^k}{f(A)}\Big)^{\!\!\!\frac{|A|}{2}} \! \! \! e^{-\phi_A(\zeta_A^k)} \Big]. \end{aligned} \end{equation} For a particular choice of the prior $q_A$, we measure the relevance of the group of variables $A$ by the expectation of $\|v_A^k\|_2^2$ (which amounts to the sum of the variances of the individual components of~$v_A^k$), \begin{equation*} \EE\big[\|v_A^k\|_2^2\big] = \frac{\EE_{\|z\|_2\sim q_A} \big[\|z\|_2^2\big]}{f(A)}, \end{equation*} where $\EE_{\|z\|_2\sim q_A} \big[\|z\|_2^2\big]$ is the expectation of $\|z\|_2^2$ under the standardized distribution $q_A$ on $\|z\|_2$. In fact, as we have \begin{equation*} \EE\big[\|w^k\|_2^2 \big] = \sum_{A\in\Acal} \EE\big[\|v_A^k\|_2^2 \big] \end{equation*} given our independence assumption, the expected value of $\|v_A^k\|_2^2$ allows us to measure the contribution of the group $A$ with respect to $\EE\big[\|w^k\|_2^2 \big]$. We somewhat abusively call $\EE\big[\|w^k\|_2^2 \big]$ the \emph{signal variance} in our experiments, as opposed to $P\sigma^2$, the \emph{noise variance}. \begin{figure} \caption{The graphical representation of our model.} \label{fig:graphical_model} \end{figure} Figure~\ref{fig:graphical_model} represents the graphical model corresponding to our assumptions. Note that we have explicitly incorporated the variational parameter $\zeta_A^k$ into the graphical model: In fact, the same parameter can also be interpreted as the scale parameter of the Gaussian in the Gaussian scale mixture representation of $p(v_A^k|f(A))$ \citep{Palmeretal06}. \subsection{Inference} \label{sec:variational} Our model described above, namely the combination of the density of $y^k$ \eqref{eq:distr_y} and the variational representation of the prior density on~$v_A^k$~\eqref{eq:var_repr_p_v_A}, leads to the following variational bound on the marginal distribution of~$y^k$: \begin{equation*} \begin{aligned} \log &\,p(y^k|f)\\ & =\log \int p(y^k|X^kw^k,\sigma^2I) \prod_{A\in \Acal}p(v_A^k|f(A)) dv_A^k\\ &\geq \sup_{\substack{{\zeta_A^k \geq 0}\\{ A \in \Acal}}} \Big\{\log \Ncal(y^k|0, X^kM Z^kF^{-1} M^{\top}{X^k}^{\top} + \sigma^2I) \\[-.1cm] & + \sum_{A\in\Acal} \Big[ \frac{|A|}{2}\log f(A) \! + \! \frac{|A|}{2}\log\Big(2\pi\frac{\zeta_A^k}{f(A)}\Big) \!-\! \phi_A(\zeta_A^k) \Big]\Big\},\\ \end{aligned} \end{equation*} where $M$ is a matrix of dimension $P\times\sum_{A \in \Acal} |A|$ that ensures $w^k = Mv^k$ where $v^k$ is the concatenation of all elements indexed by elements of $A$ in $v_A^k, A \in \Acal$, and $F$ and $Z^k$ are square diagonal matrices of size $\sum_{A \in \Acal} |A|$ whose diagonals consist of $f(A)$ and $\zeta_A^k$ respectively, replicated $|A|$ times, for each $A\in \Acal$. Thus, as an approximation to minimizing the negative log-likelihood, we would like to minimize the following overall bound with respect to $f$ and~$\zeta_A^k$ for all $A \in \Acal$ and $k\in\{1,\ldots,K\}$: \begin{equation} \label{eq:LBgeneral_sum} \begin{aligned} -\sum_{k=1}^K \Big\{ &\!- \frac{1}{2} {y^k}^{\top} \!\Big(X^kM Z^kF^{-1} M^{\top}{X^k}^{\top}\!\! + \sigma^2I\Big)^{-1} \! y^k \! 
- \!\frac{1}{2} \log\det\Big(X^kM Z^kF^{-1} M^{\top}{X^k}^{\top}\!\! + \sigma^2I\Big) \\[-.1cm] & + \sum_{A\in\Acal}\frac{|A|}{2}\log f(A) + \frac{\sum_{A\in\Acal}|A|\!-\! N^k}{2}\log(2\pi) + \frac{1}{2}\log\det (Z^kF^{-1} ) -\sum_{A\in\Acal}\phi_A(\zeta_A^k) \Big\}.\\ \end{aligned} \end{equation} In its form given by \eqref{eq:LBgeneral_sum}, the bound is difficult to optimize. However, we recognize parts of it as minima of convex functions, which allows us to design an iterative algorithm with analytic updates, finding a local minimum (see the appendix for details). Our optimization problem becomes \begin{equation} \label{eq:obj} \begin{aligned} \inf_{\zeta^k\geq 0} \inf_{v^k} \inf_{\Sigma^k\succcurlyeq 0} \sum_{k=1}^K\Big\{ &\frac{1}{2\sigma^2} \|y^k-X^kMv^k\|^2_2 + \frac{1}{2} \sum_{A\in\Acal} \frac{f(A)}{\zeta_A^k} (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k) - \frac{1}{2} \log\det \Sigma^k\\[-.1cm] & + \frac{N^k}{2}\log(\sigma^2) + \frac{N^k}{2} \log (2\pi) + \frac{1}{2\sigma^2} \tr{M^{\top}{X^k}^{\top}X^kM\Sigma^k} -\frac{1}{2} \sum_{A\in\Acal}|A|\\ &+ \sum_{A\in\Acal} \Big[ - \frac{1}{2}|A|\log f(A) -\frac{|A|}{2} \log 2\pi + \phi_A(\zeta_A^k) \Big]\Big\},\\ \end{aligned} \end{equation} and the closed-form updates are \begin{equation} \label{eq:updates} \begin{aligned} \Sigma^k & = \sigma^2(M^{\top}{X^k}^{\top}X^kM + \sigma^2 F{Z^k}^{-1} )^{-1}\\ v^k & = (M^{\top}{X^k}^{\top}X^kM + \sigma^2 F{Z^k}^{-1})^{-1}M^{\top}{X^k}^{\top}y^k\\ \zeta_A^k & = \argmin_{z \geq 0} \phi_A(z) + \frac{1}{2} \frac{f(A)}{z} (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k)\\ \sigma^2& = \frac{\sum_{k=1}^K \!\! \big\{ \|y^k \!\!-\!\! X^kMv^k\|^2_2 \!+\! \tr M^{\top}{X^k}^{\top}X^kM\Sigma^k \big\}} {\sum_{k=1}^{K}N^k} \\ f(A) & = \frac{K|A|}{\sum_{k=1}^K \frac{1}{\zeta_A^k} (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k)},\\ \end{aligned} \end{equation} iterated until convergence. \begin{remark} Note that the only update that depends on the specific prior distribution is that for the variational parameter $\zeta_A^k$, all others apply to all super-Gaussian priors. \end{remark} \begin{remark} It can be shown that the updates \eqref{eq:updates} exactly correspond to the updates yielded by mean-field variational inference in the special case of Gaussian scale mixtures \citep{Palmeretal06}. However, the approach presented here is more general, as it also applies to super-Gaussian priors that are not Gaussian scale mixtures. \end{remark} \begin{remark} Using the matrix inversion lemma, the update for $\Sigma^k$ can be rewritten in such a way that we avoid the expensive inversion of a $\sum_{A\in\Acal}|A|\times \sum_{A\in\Acal}|A|$ matrix and we only have to invert a $P\times P$ or $N^k\times N^k$ matrix instead, which can even be diagonal in certain cases (see the appendix for details). When it is not diagonal, matrix inversions can be avoided by making an extra diagonal assumption on the covariance matrix of the Gaussian posteriors of all $v_A^k$. \end{remark} \begin{remark} While we do provide an update equation for $\sigma^2$ for completeness, in general it is customary to assume the noise level known, which we also do in all our experiments. \end{remark} \subsection{Special cases} \label{sec:specialcases} The family of super-Gaussian distributions includes Student's $t$ and generalized Gaussian distributions among many others. We here give the densities of these distributions, as well as the expressions for the quantities in our model and inference that depend on the particular prior on~$v_A^k$. 
\paragraph{Student's $t$:} The density of this distribution is given by \begin{equation} p(v_A^k|a, f(A)) =f(A)^{\frac{|A|}{2}} \frac{\Gamma( a + |A|/2)}{\Gamma(a)} \Big(\frac{1}{2\pi}\Big)^{\frac{|A|}{2}}\Big(1 + \frac{{\|v_A^k\|_2}^2f(A)}{2} \Big)^{-a-\frac{|A|}{2}}, \end{equation} where $a$ is a parameter governing the shape of the distribution. The smaller $a$, the heavier-tailed the distribution (for $a\le 1$, there is no finite variance). For this distribution, \begin{equation} \begin{aligned} \phi_A(\zeta_A^k) = & \frac{1}{\zeta_A^k}+ (a+1/2)\log(\zeta_A^k) + \frac{|A|}{2}\log(2\pi) - (a + |A|/2)+(a+|A|/2)\log(a+|A|/2) \\ & - \log(\Gamma(a+|A|/2))+ \log(\Gamma(a)),\\ \end{aligned} \end{equation} and, therefore, the update for $\zeta_A^k$ is written as \begin{equation} \zeta_A^k = \frac{1+\frac{1}{2} f(A) (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k)}{a+\frac{|A|}{2}} . \end{equation} The variance of a Student's $t$-distributed random variable, if $a>1$, is $\EE(v_A^k {v_A^k}^{\top})=\frac{1}{f(A)(a-1)}I, $ and therefore $\EE(\| v_A^k\|_2^2)=\frac{|A|}{f(A)(a-1)}. $ Student's $t$ has a natural representation as a Gaussian scale mixture with the inverse Gamma as the mixing distribution. All our experiments are carried out using Student's~$t$. \paragraph{Generalized Gaussian:} The density is given by \begin{equation} p(v_A^k|\gamma, f(A)) = f(A)^{\frac{|A|}{2}}\frac{\frac{\gamma}{2}\Gamma(\frac{|A|}{2})} {\pi^{\frac{|A|}{2}} \Gamma(\frac{|A|}{\gamma} )} e^{-\|v_A^k f(A)^{\frac{1}{2}}\|_2^\gamma } \end{equation} \citep{Pascaletal13}. Consequently, we have \begin{equation} \begin{aligned} \phi_A(\zeta_A^k) = & -\log \frac{\frac{\gamma}{2}\Gamma(\frac{|A|}{2})} {\pi^{\frac{|A|}{2}} \Gamma(\frac{|A|}{\gamma} )} + \frac{{\zeta_A^k}^\frac{\gamma}{2-\gamma}( \frac{1}{\gamma} - \frac{1}{2}) }{\gamma^{\frac{2}{\gamma-2}} },\\ \end{aligned} \end{equation} \begin{equation} \zeta_A^k = \Big(-\frac{\frac{1}{2} f(A) (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k)}{(1/\gamma -1/2)\gamma^{\frac {\gamma-2}{2}} } \frac{\gamma-2}{\gamma} \Big)^{\frac{2-\gamma}{2}}, \end{equation} and $\EE(\| v_A^k\|_2^2)=\frac{\Gamma(|A|/\gamma + 2/\gamma)}{f(A)\Gamma(|A|/\gamma)}$. \subsection{Learning with all groups} \label{sec:allgroups} While our model and the associated inference algorithm described earlier are valid for any set of groups~$\Acal$, including $\Acal=2^V$, the algorithm is impractical when $\Acal$ is large: Indeed, even if we only have 20 variables and 1000 tasks, learning with $\Acal = 2^{\{1,\ldots,20\}}$ implies that the number of variational parameters~$\zeta_A^k$ will exceed a billion. To avoid working with a prohibitively large number of groups at once, one can leverage an \textit{active set}-type heuristic that maintains a list of relevant groups and iteratively updates it. Algorithm~\ref{alg:greedy}, which we discuss in detail in the following, describes one way to do this. It requires setting the maximal allowed cardinality $T$ of $\Acal$, and the number $D$ of groups to be discarded in each active set update.
We start by learning with singletons only (steps 1 and 2); After ranking the groups in $\Acal$ according to their relevance measured by $\frac{f(A)}{|A|}$ into the sequence $(A_1,\ldots,A_{|\Acal|})$ (step 3), we determine the additional groups to be considered, $\Acal'$, by taking the first $T$ sets from the sequence $(A_1\cup A_2 ,\ldots, A_1 \cup A_{|\Acal|}, A_2\cup A_3,\ldots, A_2\cup A_{|\Acal|}, \ldots)$, ignoring groups that have been considered in the past and making sure we do not add the same group more than once; In steps 5-11 we repeatedly (a) learn with $\Acal\cup\Acal'$, (b) rank the groups, (c) update $\Acal$ and $\Acal'$. In step 8 we choose not to discard the singletons just to make sure that $\Acal$ always covers $\{1,\ldots,P\}$. The stopping criterion (step 5) may be that we have no more groups to consider (if $P$ is small enough), or that we have reached a predefined maximal number of iterations. \begin{algorithm} \caption{Active set procedure for the discovery of relevant groups}\label{alg:greedy} \begin{algorithmic}[1] \REQUIRE $T\in\NN$, $D\in\NN$ \STATE Let $\Acal =\{1,\ldots,P\}$ and $\Dcal=\emptyset$ \STATE $f \gets \text{variational}(\Acal)$ \STATE Rank all $A\in\Acal$ according to their relevance \STATE Determine $\Acal'$ \\ (make sure $\left\vert\Acal\cup\Acal'\right\vert\!\le\! T, \Acal'\cap(\Acal\cup\Dcal)\!=\!\emptyset$) \WHILE {stopping condition not met} \STATE $f \gets \text{variational}(\Acal \cup \Acal')$ \STATE Rank $A\!\in\Acal\! \cup \!\Acal'$ according to their relevance \STATE Add to $\Dcal$ the $D$ least relevant non-singletons in $\Acal\cup\Acal'$ \STATE $\Acal \gets \Acal\cup\Acal' \setminus\Dcal$ \STATE Determine $\Acal'$\\ (make sure $\left\vert\Acal\cup\Acal'\right\vert\!\le\! T, \Acal'\cap(\Acal\cup\Dcal)\!=\!\emptyset$) \ENDWHILE \end{algorithmic} \end{algorithm} \section{Approximation Quality and Regularization} \label{sec:reg} The goal of this section is to experimentally study the behavior of our approximate inference scheme in terms of estimation quality and to clarify how we can control it. As we empirically show below, the variational approximation scheme from Section \ref{sec:variational} tends to overestimate the variance of the prior distribution (i.e., underestimate the inverse scale parameter $f(A)$) when this variance is smaller than~$\sigma^2$, the noise variance. This is undesirable, as we would like $f(A)$ to tend to infinity for irrelevant groups of variables. To circumvent this problem, we use an improper hyperprior of the form $p(f(A)) \propto f(A)^\beta$ to encourage $f(A)$ to go to infinity when the variance of $p(v_A)$ is smaller than $\sigma^2$. Consequently, the regularization term $-K\beta\sum_{A\in\Acal}\log f(A)$ with $\beta>0$ is added to the objective function \eqref{eq:obj}, and the only update that changes is that for $f(A)$: \begin{equation} f(A) = \frac{K(\beta + \frac{|A|}{2})}{\frac{1}{2}\sum_{k=1}^K \frac{1}{\zeta_A^k} (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k)}. \end{equation} Thus, we substitute the approximate type-II maximum likelihood estimation of $f(A)$ by approximate (also ``type-II'') maximum a posteriori estimation. In Sections \ref{sec:exp_p_1} and \ref{sec:exp_p_2} we empirically study the effect of the parameter $\beta$ on the approximation quality. 
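For concreteness, the following sketch (ours; the numerical values, the initialization and the fixed number of sweeps are arbitrary choices for illustration) instantiates the updates \eqref{eq:updates}, with the regularized update for $f(A)$ above and the Student's $t$ expression for $\zeta_A^k$ from Section~\ref{sec:specialcases}, in the one-dimensional setting studied in Section~\ref{sec:exp_p_1} ($P=N^k=X^k=1$, a single singleton group, known $\sigma^2$ and $a$).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, a, f_true, sigma2, beta = 10000, 1.5, 0.2, 1.0, 0.05   # illustrative values

# Generate y^k = v^k + noise, with v^k drawn from the Student's t prior
# through its Gaussian scale-mixture representation (s ~ InvGamma(a, 1)).
s = 1.0 / rng.gamma(shape=a, scale=1.0, size=K)
v_true = rng.normal(0.0, np.sqrt(s / f_true))
y = v_true + rng.normal(0.0, np.sqrt(sigma2), size=K)

f, zeta = 1.0, np.ones(K)        # initial inverse scale and variational parameters
for _ in range(200):             # fixed number of sweeps, for simplicity
    Sigma = sigma2 / (1.0 + sigma2 * f / zeta)        # posterior variances Sigma^k
    v = y / (1.0 + sigma2 * f / zeta)                 # posterior means v^k
    m2 = v**2 + Sigma                                 # (v^k)^2 + tr Sigma^k
    zeta = (1.0 + 0.5 * f * m2) / (a + 0.5)           # Student's t update for zeta^k
    f = K * (beta + 0.5) / (0.5 * np.sum(m2 / zeta))  # regularized update for f(A)

print("estimated signal variance 1/((a-1) f):", 1.0 / ((a - 1.0) * f))
\end{verbatim}
With $\beta=0$ this reduces to the unregularized update of Section~\ref{sec:variational}; increasing $\beta$ pushes $\widehat{f}$ towards infinity when the signal variance is below the noise level, which is the behavior studied empirically in the next subsection.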
\subsection{Scale parameter inference with only one variable} \label{sec:exp_p_1} In this experiment, we evaluate the performance of the variational method described in Section \ref{sec:variational} in recovering the unknown scale parameter $f$ of the prior in the simplest, 1-dimensional case (note that in this subsection we omit the subscripts $A$ as $\Acal =\{\{1\}\}$). More specifically, our goal here is to answer the following questions: Given an i.i.d. sample drawn from a univariate Student's $t$ with shape and inverse scale parameters $a$ and $f$, corrupted by Gaussian noise, and supposing we know both the noise variance $\sigma^2$ and the shape parameter $a$, can we precisely estimate the inverse scale parameter $f$ using the variational method from Section \ref{sec:variational}? In the settings where we cannot, does regularization improve our estimates? \paragraph{Experimental setup.} We consider 10,000 tasks with one variable and one observation each ($P$, $N^k$ for all $k$, and $X^k$ for all $k$ equal to 1). Data are generated from the model with Student's $t$ prior on $v^k$ with parameters $a$ set to 1.5 and $f$ varying in the set $\mathcal{F}$ of 14 values between $0.02$ and $50$ taken roughly uniformly on the logarithmic scale, and Gaussian noise with variance $\sigma^2$ set to $1$. We compare the performance of the variational method with that of a grid search over $\mathcal{F}\cup\{10^5\}$, where we use the trapezoidal rule to numerically solve the intractable integral in \eqref{eq:ML_brute}. The grid search, feasible in this basic setting, provides the best available approximation to the regularized maximum likelihood solution. To reduce the effect of random fluctuations, we repeat all experiments 5 times with different random seeds and report averaged results. \paragraph{Results.} Figure~\ref{fig:reg_scale_p1} summarizes the results. For three values of the parameter $\beta$, we plot (on the logarithmic scale) the estimated against the true variance for the considered range of the parameter $f$ (recall that the variance of a Student's $t$-distributed random variable with parameters $a$ and $f$ equals $\frac{1}{(a-1)f}$). In all figures, we also plot the variance of the Gaussian noise $\sigma^2$. We observe that in the absence of regularization ($\beta=0$) and when the signal is not much stronger than noise, the variational method overestimates the signal variance while the grid search does not. As we add regularization, this effect gradually goes away and the signal variance estimate is set to 0 (i.e., the estimate of $f$, $\widehat{f}$, goes to infinity) if the true signal variance is smaller than a certain threshold. When the regularization is too strong ($\beta=0.25$), the estimated signal variance drops to 0 even when the signal is stronger than the noise, and the variance of the signal is heavily underestimated. With the right amount of regularization ($\beta=0.05$ in this case) we observe the desired behavior: The variational method recovers the signal when it is stronger than noise, and sets $\widehat{f}$ to infinity otherwise. In all cases, variational estimates are close to the maximum likelihood estimates obtained by the grid search when the signal is much stronger than the noise. \begin{figure} \caption{Recovery of the variance of the univariate Student's $t$ distribution with added Gaussian noise of known variance with grid search and the variational method, with different levels of regularization. 
The x and y axes represent the variance based on the true and on the estimated $f$ parameter values, respectively.} \label{fig:reg_scale_p1} \end{figure} \subsection{Structured sparsity with two variables} \label{sec:exp_p_2} In this section, we empirically study the most basic case of the group relevance learning problem. Suppose that in each task we only have 2 variables, and therefore 3 possible groups, $\Acal =\{\{1\},\{2\},\{1,2\}\}$. Let $X^k$ be the identity matrix in each task. In this basic setting, and supposing that the data come from the model, can our inference algorithm distinguish the case where the data $\{y^k\}_{k=1,\ldots,K}$ are generated by the group of variables $\{1,2\}$ from the opposite case, where the relevant groups are the two singletons $\{1\}$ and~$\{2\}$? These two settings differ in fact significantly in the case of a heavy-tailed prior on $v_A^k$: We have $w^k = v_{ \{1\} }^k + v_{ \{2\} }^k + v_{ \{1,2\} }^k$; If $\{1, 2\}$ is relevant and $\{1\}$ and $\{2\}$ are not, then $v_{ \{1\} }^k$ and $v_{ \{2\} }^k$ will have to be close to zero for all $k$, however, $v_{ \{1,2\} }^k$ will be significantly far from zero for some $k$. As the prior on $v_A^k$ only depends on $v_A$ through its norm, these $v_{ \{1,2\} }^k$ can be anywhere on the circle with radius $\|v_{ \{1,2\} }^k\|_2$ with the same probability and therefore $y^k$ can also be anywhere on the circle with radius $\|y^k\|_2$. In contrast, when $\{1, 2\}$ is irrelevant and $\{1\}$ and $\{2\}$ are relevant, the rare events of $v_{ \{1\} }$ and $v_{ \{2\} }$ both being significantly away from zero will not occur at the same time for most $k$, and therefore the $y^k$ with a large norm will tend to be concentrated along the axes. This behavior (using Student's $t$ prior with parameter $a=1.5$ on $v_A^k$) is illustrated in Figure~\ref{fig:singleton_pair_data}, where we have plotted the data $\{y^k\}_{k=1,\ldots,K}$ for $K=5,000$ in both settings. \begin{figure} \caption{On the left, the singletons are the relevant groups. On the right, the pair is the relevant group.} \label{fig:singleton_pair_data} \end{figure} \paragraph{Experimental setup.} We consider 5,000 tasks with $P$ and $N^k$ for all $k$ equal to 2, with the set of groups $\Acal =\{\{1\},\{2\},\{1,2\}\}$. The data are generated from the model with Student's $t$ prior on $v^k$ with parameters $a$ set to 1.5 and each $f(A)$ varying in a set of 14 values between $0.01$ and $25$ taken roughly uniformly on the logarithmic scale ($f(\{1\})$ and $f(\{2\})$ always equal each other), and Gaussian noise with variance $\sigma^2$ set to $1$. \begin{figure} \caption{A red (blue) square means that the estimate of the singleton (group) variance is larger than the estimate of the group (singleton) variance for the corresponding true singleton and pair variances indicated by the axes. A black square means that both singleton and pair variances are under $2\sigma^2$, the noise variance. Best seen in color.} \label{fig:singleton_pair_results} \end{figure} \paragraph{Results.} Figure~\ref{fig:singleton_pair_results} summarizes the results for three values of the regularization parameter $\beta$ ($\beta=0$ corresponds to the absence of regularization). We report when the estimated pair variance $\frac{2}{(a-1)\widehat{f}(\{1,2\})}$ dominates (blue) or is dominated (red) by the estimated singleton variance $\frac{1}{(a-1)\widehat{f}(\{1\})}+\frac{1}{(a-1)\widehat{f}(\{2\})}$, provided that one of them is larger than the noise variance, $2\sigma^2$. 
We see that when we do not regularize, the variational method explains everything with the singletons. As we add regularization, the pair explains more and more variance, however in such a way that the pair also explains the signal coming from singletons. Nonetheless, there is a regime ($\beta=0.03$) where a strong signal coming from both the singletons and the pair is identified correctly. If we regularize too strongly ($\beta=0.15$), the entire signal is explained by the pair, regardless of its source. \section{Experiments} \label{sec:exp} In our experiments we consider two different instances of the denoising problem and we empirically evaluate the performance of our approach in recovering both the signal and the structure. \subsection{Structured sparsity in the context of denoising} In this section, we study toy multi-task structured sparse denoising problems. Our goal is to answer the following questions: Given data $\{y^k\}_{k=1,\ldots,K}$, generated from the model, and assuming that we know the true shape parameter $a$ of the Student's $t$ and the noise variance~$\sigma^2$, (a) can we recover the structure (i.e., the relevant groups and their weights), and (b) if we use the correct structure, is our denoising more accurate than when using a different structure? \paragraph{Experimental setup.} To this end, we consider 10,000 tasks with $P$ and $N^k$ for all $k$ equal to 10, with the set of groups $\Acal =\{\{Q\}_{Q=1,\ldots,P}, \{1,\ldots,Q\}_{Q=2,\ldots,P}\}$. Each signal $w^k$ is generated using Student's $t$ with parameters $a$ set to $1.5$ and $f(A)$ set to 0.2 or to 200, depending on whether $A$ is considered relevant or irrelevant: In this fashion, the variance of the signal coming from relevant $A$ is $\frac{|A|}{(a-1)f(A)}=10\times|A|$ (respectively, $0.01\times|A|$ for irrelevant $A$). For each task~$k$, $y^k$ is a perturbed version of the signal $w^k$ with additive Gaussian noise of variance $\sigma^2I$. We consider three different ways of generating data: \begin{itemize} \item {\bf Singletons}: Here, only $\{1\},\ldots,\{5\}$ are relevant, all other groups in $\Acal$ are irrelevant. \item {\bf One group}: Only $\{1,2,3,4,5\}$ is relevant. \item {\bf Overlapping groups}: The groups $\{1\}$, $\{1,2\}$,$\ \ldots\ $, $\{1,2,3,4,5\}$ are relevant. \end{itemize} For the three cases, we choose $\sigma^2$ so that the total noise variance $P\sigma^2$ equals the total signal variance in each case. We consider four models of increasing complexity for inference: \begin{itemize} \item {\bf LASSO-like}: In this simplest model, we only use the singletons, $\Acal = \{\{1\},\ldots,\{P\}\}$, and moreover, we force $f(A)$ to be constant across $\Acal$; In order to do so, we change the update for $f(A)$ to $f(A) = \frac{K\sum_{A\in\Acal}(\beta + \frac{|A|}{2})}{\frac{1}{2}\sum_{k=1}^K\sum_{A\in \Acal} \frac{1}{\zeta_A^k} (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k)}.$ This mimics the behavior of the LASSO, as the prior (that we are learning here) is the same for each coefficient. \item {\bf Weighted LASSO-like}: The usual model with $\Acal \!=\! \{\{1\},\ldots,\{P\}\}$. \item {\bf Structured}: The usual model with $\Acal \!=\!\{\{Q\}_{Q=1,\ldots,P},\{1,\!\ldots,\! Q\}_{Q=2,\ldots,P}\}$. \item {\bf Structured (active set)}: The model where we also learn $\Acal$ using Algorithm~\ref{alg:greedy} (with parameters $T=4P$, $D=2P$, and 5 active set updates). \end{itemize} We examine each of the 12 combinations of data generation and learning models. 
In each case, we use half of the tasks to find the optimal $\beta$ in terms of the mean squared prediction error (i.e., the mean squared difference between the true and the learned signals $w^k$) from a predefined range of 7 values, and the other half to learn with this $\beta$ and evaluate the test error. \begin{table}[t] \begin{center} \begin{tabular}{|r|c| c |c|} \hline & Singletons & One group & Overlapping \\ \hline LASSO-like & 18.5$\pm$0.3 & 18.6$\pm$0.4 & 58.4$\pm$1.1\\ \hline W. LASSO-like & {\bf 14.5}$\pm$0.3 & 14.5$\pm$0.3 & {\bf 42.8}$\pm$0.9\\ \hline Structured & 14.8$\pm$0.3 & {\bf 13.8}$\pm$0.3 & 43.0$\pm$0.9 \\ \hline Structured(AS) & 14.6$\pm$0.3 & 14.0$\pm$0.3 & {\bf 42.8}$\pm$0.9 \\ \hline \end{tabular} \end{center} \caption{Squared error averaged over the tasks with $95\%$-confidence error bars for each combination of data generation and learning models. The usage of boldface indicates that the corresponding method significantly outperforms the others, as measured using a $t$-test at the level $0.05$.\label{tab:12comb}} \end{table} \paragraph{Results.} We begin by examining the performance of each of the four models in \emph{signal recovery}: In Table \ref{tab:12comb} we report the mean squared error on the 5,000 test tasks with $95\%$-confidence error bars. For all three regimes for data generation, the LASSO-like model performs far worse than the three others in recovery. This is due to the fact that this model learns the same prior for all variables, although not all variables have the same marginal variance. In the first and third data generation regimes W.LASSO performs slightly better than Structured in signal recovery, while Structured has an advantage when a single group is relevant. The performance of Structured(AS) is systematically close to, or on a par with that of the best-performing model. \begin{figure} \caption{For each group of variables on the y axis, the intensity of gray indicates the percentage of total explained variance per $\beta$. \label{fig:p10_exp_variance_23_3} \label{fig:p10_exp_variance_23_3} \end{figure} In terms of \emph{structure recovery}, for all three data generation regimes, we find one or more values of $\beta$ that lead to the recovery of the relevant groups by Structured and Structured(AS), with either the same or a slightly different $\beta$ value leading to the smallest error in signal recovery. Figure~\ref{fig:p10_exp_variance_23_3} illustrates the percentage of total explained signal variance by each group for the One group and Overlapping regimes and for the Structured model, for all considered regularization parameters: With no regularization, the model explains the signal with both the relevant group(s) and the singletons included in the relevant group(s), however with more and more regularization, the signal variance explained by smaller groups is taken over by larger ones. The groups containing elements from $\{6,\ldots,10\}$, not shown in the plot, explain no variance in no regularization regime, with the exception of the largest group $\{1,\ldots,10\}$ that explains the weak signal coming from the irrelevant groups (recall that we have non-zero signal variance $0.01\times|A|$ for the irrelevant groups $A$) in weak and moderate regularization regimes and takes over the whole signal variance when the regularization is too strong. 
In summary, the performance in denoising does not change drastically depending on the amount of regularization, unless it is too strong; however, a small amount of regularization is likely to better capture the structure than no regularization; if there is a strong group structure among the variables, regularization may also lead to better recovery. A formal criterion to set the value of the hyperparameter $\beta$ would be to maximize its likelihood, as is customary in Bayesian methods. \subsection{Image denoising with wavelets} In this section, we consider the image denoising problem using wavelets. The Haar wavelet basis for 2-dimensional images~\citep{mallat} can naturally be arranged in three rooted directed quad-trees, which can be connected to form one tree by attaching the three roots to an artificial parent node; the structured sparsity-inducing norms with non-zero groups that are paths from the root in this tree have shown improvements over the $\ell_1$ norm~\citep{Jenattonetal11}. Our goal is to find out whether, in this task, (a) a value of $\beta$ that leads to good recovery for a set of images is also close to optimal for another set of images of roughly the same size, at least when the noise level is unchanged (stability of the hyperparameter); (b) learning a non-uniform prior on singletons improves recovery with respect to using a uniform prior (importance of learning a non-uniform prior); (c) learning the group structure helps beyond learning a non-uniform prior on singletons (importance of learning group relevances). \begin{figure} \caption{The images used in our experiments (Barbara, Fingerprint, Lena, House).}\label{fig:images} \end{figure} \paragraph{Experimental setup.} In order to denoise a large grayscale image, we cut it into possibly overlapping patches of $32\times 32$ pixels, which compose the multiple 1024-dimensional signals that we denoise simultaneously by learning the appropriate (structured) prior. We use four well-known images (see Figure~\ref{fig:images}), Barbara, Fingerprint, Lena ($512\times 512$ pixels each), and House ($256\times 256$ pixels). Each signal $w^k$ is formed by the wavelet coefficients of one $32\times 32$ patch. For each of the $K=961$ tasks ($841$ for House) we form $y^k$ by adding Gaussian noise of variance $\sigma^2=400$ along each dimension. As in the previous section, we examine the performance of four instances of our model: the model with a uniform factorized sparse prior (LASSO-like), a non-uniform factorized sparse prior (W.LASSO-like), the structured norms on all descending (equivalently, ascending) paths in the rooted tree (Structured), and the structured norms on groups that we discover in the process of learning, with~2 active set updates (Structured(AS)). We consider a predefined range of 6 values for the regularization hyperparameter $\beta$, and 3 values ($0.5, 1.1, 1.5$) for the shape parameter~$a$ of Student's~$t$. We compare the behavior of our methods with that of existing algorithms based on sparsity-inducing norms, which are not designed to learn group weights from data. From the family of such approaches, we choose the ``Tree-$\ell_2$'' structured norm proposed by \citet{Jenattonetal11}, and the classical LASSO \citep{Tibshirani94} on the wavelet coefficients. (We would like to stress here that ``Tree-$\ell_2$'' does need group weights to be specified, but does not provide a systematic way to learn them.
They are usually set by introducing a group-weighting parameter $\alpha$ so that $\alpha^d$ is the weight of all groups at depth $d$ in the tree, and then optimizing $\alpha$ over a predefined range of values using cross-validation.) We run these methods on each set of small images with the regularization parameter $\lambda$ and the group-weighting parameter $\alpha$ (only for Tree-$\ell_2$) varying over predefined ranges of 75 and 7 values respectively, and report the smallest error. To train the LASSO and learn with the Tree-$\ell_2$ norm, we use the ``proximal'' toolbox of the software package SPAMS~\citep{Jenattonetal11}. \paragraph{Results.} Table \ref{tab:mse_images_32} shows the best performance in terms of the mean squared error of each method on each image (which corresponds to a set of $K$ small images). The values in parentheses for our proposed methods indicate the value of $\beta$ corresponding to the minimal error. The performance of our proposed methods with respect to the shape parameter $a$ is systematically slightly better for larger $a$, and all reported results correspond to $a=1.5$. According to these results, (a)~the performance of a given value of $\beta$ in signal recovery indeed seems to be stable across images (note that we have also observed that the performance on a given image is robust to small changes of the value of the hyperparameter); (b)~the fact that the LASSO and our LASSO-like model are systematically outperformed by models that weight each variable confirms the intuition that learning how to weight individual variables should boost the estimation quality; (c)~it seems that learning a prior on joint relevances of variables can lead to improved performance, as shown in the column corresponding to Fingerprint, although this is not always the case: on House and Lena, the performance of methods that learn group relevances is not significantly different from that of Tree-$\ell_2$, and in the case of Barbara they perform worse. Inspecting the relevances of different groups (paths in the wavelet tree) learned by Structured, we see that the groups explaining the bulk of the variance are overlapping groups of 2, 3, or 4 elements, mostly descending from the roots of the three quad-trees. In contrast, the relevant groups selected by Structured(AS) tend to consist of one to three roots of the three wavelet quad-trees and one or two wavelets of higher frequency, suggesting that paths in the wavelet tree may not always be the most natural groups in this problem. Finally, let us stress that while ``Tree-$\ell_2$'' is applicable in problems where variables can be structured in a tree given in advance, our proposed approach applies to any known or unknown group structure. The Matlab code used in our experiments is available at \url{http://cbio.ensmp.fr/~nshervashidze/code/LLSS}.
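To make the multi-task construction above concrete, the following sketch shows one way to build the noisy wavelet-coefficient tasks from a grayscale image. It is only an illustration: it assumes the PyWavelets package, the patch stride is a free parameter (a stride of 16 on a $512\times 512$ image gives the $31\times 31=961$ tasks mentioned above), and it is not the code used for our experiments.
\begin{verbatim}
# Illustrative sketch: turn an image into K tasks of 1024 Haar wavelet
# coefficients each, perturbed by Gaussian noise of variance 400.
import numpy as np
import pywt

def image_to_tasks(img, patch=32, stride=16, sigma2=400.0, seed=0):
    rng = np.random.default_rng(seed)
    clean, noisy = [], []
    for r in range(0, img.shape[0] - patch + 1, stride):
        for c in range(0, img.shape[1] - patch + 1, stride):
            block = img[r:r + patch, c:c + patch].astype(float)
            coeffs = pywt.wavedec2(block, "haar")     # Haar quad-tree coefficients
            w, _ = pywt.coeffs_to_array(coeffs)       # arrange as a patch-sized array
            w = w.ravel()                             # 1024-dimensional signal w^k
            clean.append(w)
            noisy.append(w + rng.normal(0.0, np.sqrt(sigma2), size=w.size))
    return np.array(clean), np.array(noisy)
\end{verbatim}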
\begin{table} \begin{center} \resizebox{0.98\linewidth}{!}{ \begin{tabular}{|r |c|c|c|c|} \hline & Barbara & House & Fingerprint & Lena \\ \hline LASSO-like & 179.0$\pm$4.6 (0.001) & 107.5$\pm$2.6 (0.001) & 247.5$\pm$1.7 (0.005) & 110.3$\pm$2.8 (0.001) \\ \hline W.LASSO-like & 163.3$\pm$5.1 (0) & 93.7$\pm$2.6 (0) & 195.0$\pm$1.8 (0.0001) & 89.5$\pm$3.2 (0) \\ \hline Structured & 164.8$\pm$5.3 (0) & 95.3$\pm$2.9 (0) & {\bf 193.6}$\pm$1.8 (0.0005) & 90.3$\pm$3.5 (0) \\ \hline Structured(AS) & 163.1$\pm$5.0 (0.0001) & 92.9$\pm$2.3 (0.0001) & 194.9$\pm$1.8 (0.001) & 89.5$\pm$2.8 (0.0001) \\ \hline \hline Tree-$\ell_2$&{\bf 155.3}$\pm$6.4 & 93.3$\pm$3.8 & 214.9$\pm$2.4 & 88.7$\pm$3.7 \\ \hline LASSO &176.7$\pm$6.4 & 102.1$\pm$3.6 & 250.0$\pm$2.2 & 106.6$\pm$3.9 \\ \hline \end{tabular} } \end{center} \caption{Squared error averaged over the tasks (patches) of each image, with $95\%$-confidence error bars, for each method and image. Boldface indicates that the corresponding method significantly outperforms the others, as measured using a $t$-test at the level $0.05$. (Each number is divided by 1000 for readability.)\label{tab:mse_images_32}} \end{table} \section{Conclusions and Future Work} \label{sec:concl} In this paper, we have proposed a flexible and general probabilistic model and an associated inference scheme for automatically learning the weights of possibly overlapping groups in the context of structured sparse multi-task linear regression. We have shown that the classical variational inference scheme is not well adapted for learning with this model, and have proposed a regularization method that closes this gap. This has allowed us to investigate the effect of learning group weights in denoising problems, leading to the conclusion that learning penalties can significantly improve prediction quality, as well as the interpretability of the models, in this context. We have furthermore devised an active-set procedure that makes the inference with our model scalable to settings with large~$P$ and a large number of potential groups in~$\Acal$. In our future work we may consider different likelihood models to handle settings different from linear regression, such as binary classification. Learning group relevances for classification is indeed crucial, e.g., in the context of genome-wide association studies with binary phenotypes in computational biology, or for image segmentation in computer vision. In the appendix we provide details on the derivation of the variational inference scheme for our model (briefly introduced in Section \ref{sec:variational}) and discuss efficient ways of implementing the closed-form updates~\eqref{eq:updates}. \appendix {\setlength{\parindent}{0pt} \setlength{\parskip}{2.2ex plus 0.5ex minus 0.2ex} \section{Variational inference for the super-Gaussian structured sparse prior} In this appendix, we derive step by step the variational updates given in Section 2.3. We first recall our model: We assume that $\forall A\in\Acal, v_A^k$ has an isotropic density with inverse scale parameter $f(A)$ \begin{equation*} \tag{\ref{eq:prior_v_A} revisited} p(v_A^k|f(A))=q_A(\|v_A^k\|_2 f(A)^{1/2})f(A)^{|A|/2}, \end{equation*} where $q_A$ is a \emph{super-Gaussian} distribution, that is, the logarithm of $q_A(u)$ is convex in $u^2$ and non-increasing \citep{Palmeretal06}.
It therefore admits a representation of the following form by convex conjugacy \begin{equation*} \tag{\ref{eq:q_superG} revisited} \log q_A(u) = \sup_{s \geq 0} -\frac{u^2}{2s} - \phi_A(s), \end{equation*} where $\phi_A(s)$ is convex in $1/s$. Note that the expression under the supremum in \eqref{eq:q_superG} has a unique maximizer. In this work we only consider~$q_A$ for which this maximizer has a simple analytical form. From~\eqref{eq:prior_v_A} and~\eqref{eq:q_superG}, we get the following variational representation for $p(v_A^k|f(A))$: \begin{equation*} \tag{\ref{eq:var_repr_p_v_A} revisited} \begin{aligned} p(v_A^k|f(A)) &= f(A)^{\frac{|A|}{2}} \sup_{\zeta_A^k \geq 0} e^{ -\frac{||v_A^k||^2f(A)}{2\zeta_A^k} - \phi_A(\zeta_A^k) }\\ & = f(A)^{\frac{|A|}{2}} \sup_{\zeta_A^k \geq 0} \Big[ \Ncal\Big(v_A^k| 0,\frac{\zeta_A^k}{f(A)} I\Big) \Big(2\pi\frac{\zeta_A^k}{f(A)}\Big)^{\frac{|A|}{2}} e^{-\phi_A(\zeta_A^k)} \Big]. \end{aligned} \end{equation*} Finally, as the $v_A^k$ are assumed independent, \begin{equation*} p(\{v_A^k\}_{A\in \Acal}|f) = \prod_{A\in\Acal} p(v_A^k| f(A)). \end{equation*} Our goal is to infer the set function $f$ from data by maximizing the type~II log-likelihood, $$\sum_{k=1}^K\log p(y^k|f).$$ We tackle the problem using variational inference and consider the following lower bound on $\log p(y^k|f)$ (obtained by combining the density of~$y^k$ \eqref{eq:distr_y}, the variational representation of the prior density on~$v_A^k$~\eqref{eq:var_repr_p_v_A}, and taking into account the independence of $\{v_A^k\}_{A\in\Acal}$), for sets $A \in \Acal \subseteq 2^V$ and for each regression task $k$, where we use the following notation: \begin{itemize} \item $v^k$ is the concatenation of the vectors $v_A^k$, $A \in \Acal$, \item $Z^k$ is a square diagonal matrix of dimension $\sum_{A \in \Acal} |A|$. Its diagonal consists of $\zeta_A^k$, replicated $|A|$ times, for each $A\in \Acal$. \item $F$ is a square diagonal matrix of dimension $\sum_{A \in \Acal} |A|$ (it does not depend on $k$, since $f$ is shared across tasks). Its diagonal consists of $f(A)$, replicated $|A|$ times, for each $A\in \Acal$. \item $M$ is a matrix of dimension $P\times\sum_{A \in \Acal} |A|$ that ensures $w^k = Mv^k$.
\end{itemize} \begin{equation*} \begin{aligned} \log & \ p(y^k|f)\\ &= \log \int_{\RR^P} p(y^k| \{v_A^k\}_{A\in \Acal})\prod_{A\in\Acal} p(v_A^k| f(A))dv_A^k\\ &= \log \int_{\RR^P} \Ncal(y^k|X^kMv^k,{\sigma^k}^2I) \prod_{A\in\Acal} \sup_{\zeta_A^k \geq 0} f(A)^{\frac{|A|}{2}}\Big[ \Ncal\Big(v_A^k| 0,\frac{\zeta_A^k}{f(A)} I\Big) \Big(2\pi\frac{\zeta_A^k}{f(A)}\Big)^{\frac{|A|}{2}} e^{-\phi_A(\zeta_A^k)} \Big] \prod_{A\in\Acal} dv_A^k\\ &= \log \int_{\RR^P} \sup_{\zeta_A^k \geq 0, A\in \Acal} \Ncal(y^k|X^kMv^k,{\sigma^k}^2I) \prod_{A\in\Acal} f(A)^{\frac{|A|}{2}} \Ncal\Big(v_A^k| 0,\frac{\zeta_A^k}{f(A)} I\Big) \Big(2\pi\frac{\zeta_A^k}{f(A)}\Big)^{\frac{|A|}{2}} e^{-\phi_A(\zeta_A^k)} \prod_{A\in\Acal} dv_A^k\\ &\geq \log \sup_{\zeta_A^k \geq 0, A\in \Acal} \int_{\RR^P} \Ncal(y^k|X^kMv^k,{\sigma^k}^2I) \prod_{A\in\Acal} f(A)^{\frac{|A|}{2}} \Ncal\Big(v_A^k| 0,\frac{\zeta_A^k}{f(A)} I\Big) \Big(2\pi\frac{\zeta_A^k}{f(A)}\Big)^{\frac{|A|}{2}} e^{-\phi_A(\zeta_A^k)} \prod_{A\in\Acal} dv_A^k\\ &= \sup_{\zeta_A^k \geq 0, A\in \Acal} \log\int_{\RR^P} \Ncal(y^k|X^kMv^k,{\sigma^k}^2I) \prod_{A\in\Acal} f(A)^{\frac{|A|}{2}} \Ncal\Big(v_A^k| 0,\frac{\zeta_A^k}{f(A)} I\Big) \Big(2\pi\frac{\zeta_A^k}{f(A)}\Big)^{\frac{|A|}{2}} e^{-\phi_A(\zeta_A^k)} \prod_{A\in\Acal} dv_A^k\\ &= \sup_{\zeta_A^k \geq 0, A \in \Acal} \Big\{\log \int_{\RR^P} \Ncal(y^k|X^kMv^k,{\sigma^k}^2I) \prod_{A\in\Acal} \Ncal\Big(v_A^k| 0,\frac{\zeta_A^k}{f(A)} I\Big) \prod_{A\in\Acal} dv_A^k\\ &\qquad \qquad \qquad + \sum_{A\in\Acal}\log \Big[ f(A)^{\frac{|A|}{2}}\Big(2\pi\frac{\zeta_A^k}{f(A)}\Big)^{\frac{|A|}{2}} e^{-\phi_A(\zeta_A^k)} \Big]\Big\}\\ &= \sup_{\zeta_A^k \geq 0, A \in \Acal} \Big\{\log \int_{\RR^P} \Ncal(y^k|X^kMv^k,{\sigma^k}^2I) \Ncal\Big(v^k| 0, Z^kF^{-1} \Big) dv^k \\ &\qquad \qquad \qquad+ \sum_{A\in\Acal} \Big[ \frac{|A|}{2}\log f(A) + \frac{|A|}{2}\log\Big(2\pi\frac{\zeta_A^k}{f(A)}\Big) -\phi_A(\zeta_A^k) \Big]\Big\}\\ &= \sup_{\zeta_A^k \geq 0, A \in \Acal} \Big\{\log \Ncal(y^k|0, X^kM Z^kF^{-1} M^{\top}{X^k}^{\top} + \sigma^2I) \\ &\qquad \qquad \qquad+ \sum_{A\in\Acal} \Big[ \frac{|A|}{2}\log f(A) + \frac{|A|}{2}\log\Bigl(2\pi\frac{\zeta_A^k}{f(A)}\Bigr) -\phi_A(\zeta_A^k) \Big]\Big\}\\ &= \sup_{\zeta_A^k \geq 0, A \in \Acal} \Bigl\{ - \frac{1}{2} {y^k}^{\top}\!\Bigl(X^kM Z^kF^{-1} M^{\top}{X^k}^{\top} \!\!\!+\! \sigma^2I\Bigr)^{-1}\! y^k - \frac{1}{2} \log\det\Big( X^kM Z^kF^{-1} M^{\top}{X^k}^{\top} \!\!\!+\! \sigma^2I\Bigr) \\ &\qquad \qquad \qquad -\frac{N}{2} \log(2\pi) +\sum_{A\in\Acal}\Bigl[ \frac{|A|}{2}\log f(A) + \frac{|A|}{2}\log\Bigl(2\pi\frac{\zeta_A^k}{f(A)}\Bigr) -\phi_A(\zeta_A^k) \Bigr]\Bigr\}.\\ \end{aligned} \end{equation*} Thus, we need to minimize the following overall bound with respect to $f$ and $\zeta_A^k$ for all $A \in \Acal$ and $k\in\{1,\ldots,K\}$: \begin{equation*} \tag{\ref{eq:LBgeneral_sum} revisited} \begin{aligned} -\sum_{k=1}^K \Big\{ &- \frac{1}{2} {y^k}^{\top}\Big(X^kM Z^kF^{-1} M^{\top}{X^k}^{\top} + \sigma^2I\Big)^{-1} y^k - \frac{1}{2} \log\det\Big(X^kM Z^kF^{-1} M^{\top}{X^k}^{\top} + \sigma^2I\Big) \\ & + \sum_{A\in\Acal}\frac{|A|}{2}\log f(A) + \frac{\sum_{A\in\Acal}|A|-N}{2}\log(2\pi) + \frac{1}{2}\log\det (Z^kF^{-1} ) -\sum_{A\in\Acal}\phi_A(\zeta_A^k) \Big\}.\\ \end{aligned} \end{equation*} In its form given by \eqref{eq:LBgeneral_sum}, the bound is difficult to optimize. However, we can recognize parts of it as minima of convex functions, which will allow us to design an iterative algorithm with analytic updates, finding a local minimum. 
In particular, it is not difficult to show that \begin{align*} \frac{1}{2} \log &\det \Big(X^kM Z^kF^{-1} M^{\top}{X^k}^{\top} + \sigma^2I\Big) -\frac{1}{2}\log\det (Z^kF^{-1} ) \tag{matrix determinant lemma}\\ & =\frac{1}{2} \log\det \Big( M^{\top}{X^k}^{\top} X^kM + \sigma^2 F{Z^k}^{-1} \Big) + \frac{N^k\!\!-\!\!\sum_{A\in\Acal}|A|}{2}\log(\sigma^2) \\ &= \inf_{\Lambda^k \succcurlyeq 0} \frac{1}{2} \tr\{ (M^{\top}\!{X^k}^{\top}\!X^kM + \sigma^2 F{Z^k}^{-1}) \Lambda^k \} \!-\! \frac{1}{2}\log \det\Lambda^k \!-\! \frac{1}{2}\!\!\sum_{A\in\Acal}\!|A|+ \frac{N^k\!\!-\!\!\sum_{A\in\Acal}|A|}{2}\log(\sigma^2),\\ \end{align*} which, with a change of variables $\Sigma^k = \sigma^2\Lambda^k$, is written as \begin{equation*} \begin{aligned} \inf_{\Sigma^k \succcurlyeq 0} \frac{1}{2\sigma^2} \tr\{ (M^{\top}{X^k}^{\top}X^kM)\Sigma^k\} +\frac{1}{2}\sum_{A\in\Acal} \frac{f(A)}{\zeta_A^k}\tr\Sigma_{AA}^k - \frac{1}{2}\log \det\Sigma^k - \frac{1}{2}\sum_{A\in\Acal}|A|+ \frac{N^k}{2}\log(\sigma^2),\\ \end{aligned} \end{equation*} and that \begin{equation*} \begin{aligned} \frac{1}{2} {y^k}^{\top}\Big( X^kM Z^kF^{-1} M^{\top}{X^k}^{\top} + \sigma^2I\Big)^{-1} y^k = \inf_{v^k} \frac{1}{2\sigma^2}||y^k-X^kMv^k ||^2_2 + \frac{{v^k}^{\top} F{Z^k}^{-1} v^k}{2}.\\ \end{aligned} \end{equation*} Thus, from \eqref{eq:LBgeneral_sum}, our optimization problem becomes \begin{equation*} \tag{\ref{eq:obj} revisited} \begin{aligned} \inf_{\zeta^k\geq 0} \inf_{v^k} \inf_{\Sigma^k\succcurlyeq 0} \sum_{k=1}^K\Big\{&\quad \frac{1}{2\sigma^2} ||y^k-X^kMv^k||^2_2 + \frac{1}{2} \sum_{A\in\Acal} \frac{f(A)}{\zeta_A^k} (||v_A^k||^2_2 + \tr \Sigma_{AA}^k) \\ &- \frac{1}{2} \log\det \Sigma^k + \frac{N^k}{2}\log(\sigma^2) + \frac{N^k}{2} \log (2\pi)+ \frac{1}{2\sigma^2} \tr{M^{\top}{X^k}^{\top}X^kM\Sigma^k} -\frac{1}{2} \sum_{A\in\Acal}|A|\\ &+ \sum_{A\in\Acal} \Big[ -\frac{|A|}{2} \log 2\pi + \phi_A(\zeta_A^k) - \frac{1}{2}|A|\log f(A) \Big]\Big\},\\ \end{aligned} \end{equation*} and the updates are \begin{equation} \label{eq:updates1} \begin{aligned} \Sigma^k & = \sigma^2(M^{\top}{X^k}^{\top}X^kM + \sigma^2 F{Z^k}^{-1} )^{-1}\\ v^k & = (M^{\top}{X^k}^{\top}X^kM + \sigma^2 F{Z^k}^{-1})^{-1}M^{\top}{X^k}^{\top}y^k\\ \zeta_A^k & = \argmin_{z \geq 0} \phi_A(z ) + \frac{1}{2} \frac{f(A)}{z} (||v_A^k||^2_2 + \tr \Sigma_{AA}^k)\\ \sigma^2& = \frac{\sum_{k=1}^K\big\{ ||y^k-X^kMv^k||^2_2 + \tr M^{\top}{X^k}^{\top}X^kM\Sigma^k \big\}} {\sum_{k=1}^{K}N^k}\\ f(A) & = \argmin_{x} \frac{x}{2}\sum_{k=1}^K \frac{1}{\zeta_A^k} (||v_A^k||^2_2 + \tr \Sigma_{AA}^k) - K\frac{1}{2}|A|\log x\\ & = \frac{K|A|}{\sum_{k=1}^K \frac{1}{\zeta_A^k} (||v_A^k||^2_2 + \tr \Sigma_{AA}^k)}. \\ \end{aligned} \end{equation} \subsection{Regularized version} As we empirically show in Section 3.1, the variational approximation scheme from above tends to overestimate the variance of the prior distribution (i.e., underestimate the inverse scale parameter $f(A)$) when this variance is smaller than $\sigma^2$, the noise variance. This is undesirable, as we would like $f(A)$ to go to infinity for irrelevant subsets of variables. To circumvent this problem, we use an improper prior of the form $$p(f(A)) \propto f(A)^\beta$$ to encourage $f(A)$ to go to infinity when the variance of $p(v_A)$ is smaller than $\sigma^2$.
Consequently, the term $-\beta\log f(A)$ is added to the objective function \eqref{eq:obj}, and the only update that changes is the update for $f(A)$: \begin{equation} \begin{aligned} f(A) & = \argmin_{x} \frac{x}{2}\sum_{k=1}^K \frac{1}{\zeta_A^k} (||v_A^k||^2_2 + \tr \Sigma_{AA}^k) - K(\frac{|A|}{2}+\beta)\log x\\ & = \frac{K(\frac{|A|}{2}+\beta )}{\frac{1}{2}\sum_{k=1}^K \frac{1}{\zeta_A^k} (||v_A^k||^2_2 + \tr \Sigma_{AA}^k)}.\\ \end{aligned} \end{equation} \subsection{Faster updates} The update equations \eqref{eq:updates1} involve the inversion of the $\sum_{A\in\Acal}|A|\times \sum_{A\in\Acal}|A|$ matrix $M^{\top}{X^k}^{\top}X^kM + \sigma^2 F{Z^k}^{-1}$. In fact, using the matrix inversion lemma, we can avoid performing this expensive inversion by rewriting the updates so that we only have to invert a $P\times P$ or an $N^k\times N^k$ matrix instead. Before we write down the modified updates, let us introduce some additional shorthand notation: \begin{itemize} \item $\xi^k \in \RR^P$ is the sum $\sum_{A\in\Acal} \frac{\zeta_A^k}{f(A)} 1_A$, where $1_A\in \RR^P$ denotes the indicator vector for the index set $A$; \item $\Xi^k \in \RR^{P\times P}$ is a square diagonal matrix with $\Xi^k_{ii}=\xi^k_i, i=1,\ldots,P$; put differently, $\Xi^k=MZ^kF^{-1}M^{\top}$. \item $H^k$ is a square diagonal matrix corresponding to $Z^kF^{-1}$. \end{itemize} \paragraph{$P\times P$ matrix inversion.} \begin{equation} \label{eq:updates_PP} \begin{aligned} \Sigma^k & = H^k - H^k M^{\top} {\Xi^k}^{-1}M H^k +\sigma^2 H^k M^{\top} {\Xi^k}^{-1} \big({X^k}^{\top} X^k + \sigma^2 {\Xi^k}^{-1}\big)^{-1}{\Xi^k}^{-1} M H^k \\ \tr\Sigma_{AA}^k & = |A|\frac{\zeta_A^k}{f(A)} - \frac{{\zeta_A^k}^2}{{f(A)}^2}\sum_{i\in A} \frac{1}{\xi^k_i} + \sigma^2 \frac{{\zeta_A^k}^2}{{f(A)}^2} \sum_{i\in A} \frac{1}{{\xi^k_i}^2} \big[\big({X^k}^{\top} X^k + \sigma^2 {\Xi^k}^{-1}\big)^{-1}\big]_{ii} \\ v^k & = H^k M^{\top} {\Xi^k}^{-1} \big({X^k}^{\top} X^k + \sigma^2 {\Xi^k}^{-1}\big)^{-1}{X^k}^{\top}y^k\\ \|v_A^k\|_2^2 & = \frac{{\zeta_A^k}^2}{{f(A)}^2} \sum_{i \in A} \frac{1}{{\xi^k_i}^2} \big[\big({X^k}^{\top} X^k + \sigma^2 {\Xi^k}^{-1}\big)^{-1} {X^k}^{\top}y^k\big]_i^2.\\ \end{aligned} \end{equation} \paragraph{$N^k\times N^k$ matrix inversion.} \begin{equation} \label{eq:updates_NN} \begin{aligned} \Sigma^k & = H^k - H^k M^{\top} {X^k}^{\top} \big( X^k \Xi^k {X^k}^{\top} + \sigma^2 I\big)^{-1} X^kM H^k \\ v^k & = H^k M^{\top}{X^k}^{\top} \big( X^k \Xi^k {X^k}^{\top} + \sigma^2 I\big)^{-1} y^k.\\ \end{aligned} \end{equation} \paragraph{Special case of no design (signal denoising).} When $X^k=I$, the computations become considerably simpler. Note that in this case $N^k=P$ and the matrix ${X^k}^{\top} X^k + \sigma^2 {\Xi^k}^{-1}$ is diagonal, so the cost of its inversion is $O(P)$ instead of $O(P^3)$. In fact, we do not even need to form the diagonal matrix, as we do not need to explicitly use $\Sigma^k$ and $v^k$ in the updates.
The updates can be rewritten as follows: \begin{equation} \label{eq:updates_nodesign} \begin{aligned} \tr\Sigma_{AA}^k & = |A|\frac{\zeta_A^k}{f(A)} - \frac{{\zeta_A^k}^2}{{f(A)}^2}\sum_{i\in A} \frac{1}{\xi^k_i} + \sigma^2 \frac{{\zeta_A^k}^2}{{f(A)}^2} \sum_{i\in A} \frac{1}{{\xi^k_i}^2}\frac{1}{(1+\sigma^2/\xi^k_i)} \\ \|v_A^k\|_2^2 & = \frac{{\zeta_A^k}^2}{{f(A)}^2} \sum_{i \in A} \frac{1}{{\xi^k_i}^2} \Big[\frac{y_i^k}{1+\sigma^2/\xi^k_i}\Big]^2\\ \zeta_A^k & = \argmin_{z \geq 0} \phi_A(z ) + \frac{1}{2} \frac{f(A)}{z} (||v_A^k||^2_2 + \tr \Sigma_{AA}^k)\\ f(A) & = \frac{K(\frac{|A|}{2}+\beta )}{\frac{1}{2}\sum_{k=1}^K \frac{1}{\zeta_A^k} (||v_A^k||^2_2 + \tr \Sigma_{AA}^k)}.\\ \end{aligned} \end{equation} (The updates for $f(A)$ and $\zeta_A^k$ remain unchanged.) In this case the computation of $w^k, k\in\{1,...,K\}$, and of the value of the objective are also simplified. To obtain $w^k= \sum_{A \in \Acal} v_A^k$, we compute each component of $v_A^k$ as \begin{equation*} \begin{aligned}\quad [v_A^k]_i = \frac{{\zeta_A^k}}{{f(A)}} \frac{1}{\xi^k_i} \frac{y_i^k}{1+\sigma^2/\xi^k_i} =\frac{{\zeta_A^k}}{{f(A)}} \frac{y_i^k}{\xi^k_i+\sigma^2} \mbox{ if } i\in A, 0 \mbox{ otherwise,}\\ \end{aligned} \end{equation*} and the objective as \begin{equation*} \begin{aligned} \inf_{\zeta^k\geq 0} \inf_{v^k} \inf_{\Sigma^k\succcurlyeq 0} \sum_{k=1}^K\Big\{&\quad \frac{1}{2\sigma^2} ||y^k-w^k||^2_2 + \frac{1}{2} \sum_{A\in\Acal} \frac{f(A)}{\zeta_A^k} (||v_A^k||^2_2 + \tr \Sigma_{AA}^k) \\ &+ \frac{1}{2} \sum_{i=1}^P \log(\sigma^2 + \xi^k_i) +\frac{1}{2} \sum_{A \in \Acal} |A| \log\frac{f(A)}{\zeta_A^k} \\ &+ \frac{P}{2} \log (2\pi) + \frac{1}{2} \sum_{i=1}^P \frac{\xi^k_i}{\xi^k_i + \sigma^2} -\frac{1}{2} \sum_{A\in\Acal}|A|\\ & + \sum_{A\in\Acal} \Big[ -\frac{|A|}{2} \log 2\pi + \phi_A(\zeta_A^k) - \frac{1}{2}|A|\log f(A) \Big]\Big\}.\\ \end{aligned} \end{equation*} Here we have used \begin{equation*} \begin{aligned} \log\det \Sigma^k & = \log\det [\sigma^2 (M^{\top} M + \sigma^2F{Z^k}^{-1})^{-1}] \\ & = \sum_{A\in \Acal}|A|\log\sigma^2 - \log\det (M^{\top} M + \sigma^2 F{Z^k}^{-1}) \\ & = \sum_{A\in \Acal}|A|\log\sigma^2 - \log[ {\sigma^2}^{\sum_{A\in\Acal}|A|-P} \det (\sigma^2I + M Z^kF^{-1}M^{\top}) \det (F{Z^k}^{-1})]\\ & =P\log\sigma^2 -\sum_{i=1}^P \log(\sigma^2 + \xi^k_i)-\sum_{A \in \Acal} |A| \log\frac{f(A)}{\zeta_A^k}\\ \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} \tr {M^{\top}M\Sigma^k} &= \tr {M\Sigma^kM^{\top}}\\ & =\tr{MH^kM^{\top}} -\tr{ MH^kM^{\top} (\Xi^k + \sigma^2I)^{-1} MH^kM^{\top} }\\ & =\tr{\Xi^k} -\tr{ \Xi^k (\Xi^k + \sigma^2I)^{-1} \Xi^k }\\ & =\sum_{i=1}^P\xi^k_i - \sum_{i=1}^P \frac{{\xi^k_i}^2}{\xi^k_i + \sigma^2}\\ &=\sum_{i=1}^P \frac{\xi^k_i \sigma^2}{\xi^k_i + \sigma^2}.\\ \end{aligned} \end{equation*} Note that if we update the variables in the same order as in \eqref{eq:updates_nodesign}, then $w^k$, $\log\det \Sigma^k$, and $\tr {M^{\top}M\Sigma^k}$ have to be computed before updating $\zeta_A^k$ and $f(A)$; this will ensure that $w^k$ and $||v_A^k||^2_2$, respectively $\tr \Sigma_{AA}^k$ and the two terms involving $\log\det \Sigma^k$ and $\tr {M^{\top}M\Sigma^k}$, are consistent, that is, they correspond to the same value of $v_A^k, A\in\Acal$, respectively $\Sigma^k$. } \end{document}
\begin{document} \title{Diffusions in Random Environment and Ballistic Behavior} \author{Tom Schmitz\\ Department of Mathematics\\ ETH Zurich\\ CH-8092 Zurich\\ Switzerland\\ email: [email protected] } \date{Revised version\\ June 20, 2005} \maketitle {\bf Abstract:} In this article we investigate the ballistic behavior of diffusions in random environment. We introduce conditions in the spirit of $(T)$ and $(T')$ of the discrete setting, cf. \cite{szn01}, \cite{szn02}, that imply, when $d \geq 2$, a law of large numbers with non-vanishing limiting velocity (which we refer to as 'ballistic behavior') and a central limit theorem with non-degenerate covariance matrix. As an application of our results, we consider the class of diffusions where the diffusion matrix is the identity, and give a concrete criterion on the drift term under which the diffusion in random environment exhibits ballistic behavior. This criterion provides examples of diffusions in random environment with ballistic behavior, beyond what was previously known.\\ {\bf R\'esum\'e:} On \'etudie dans cet article le comportement ballistique de diffusions en milieu al\'eatoire. On montre que certaines conditions $(T)$ et $(T')$, d'abord introduites dans le cadre discret, cf. \cite{szn01}, \cite{szn02}, entra\^inent en dimension sup\'erieure une loi des grands nombres avec une vitesse limite non nulle (ce qu'on appelle 'comportement ballistique'), et un th\'eor\`eme limite central avec une matrice de covariance non d\'eg\'en\'er\'ee. Pour illustrer ces r\'esultats, on consid\`ere la classe de diffusions o\`u la matrice de diffusion est l'identit\'e, et on donne un crit\`ere concret sur la d\'erive qui entra\^ine le comportement ballistique de la diffusion en milieu al\'eatoire. Ce crit\`ere fournit de nouveaux examples de diffusions en milieu al\'eatoire avec comportement ballistique. \section{Introduction} The method of ``the environment viewed from the particle'' has played a prominent role in the investigation of random motions in random environment, see for instance \cite{kip-Var}, \cite{kozlov}, \cite{molchanov}, \cite{olla94}, \cite{olla01}, \cite{papa}, \cite{ras}. In the continuous space-time setting, it applies successfully when one can construct, most often explicitly, an invariant measure for the process of the environment viewed from the particle, which is absolutely continuous with respect to the static measure of the random medium, see \cite{deMasi}, \cite{kom}, \cite{kom-krupa02}, \cite{kom-krupa}, \cite{kom-olla-01}, \cite{kom-olla-03}, \cite{landim-olla-yau}, \cite{oel}, \cite{olla94}, \cite{olla01}, \cite{papa}. However, the existence of such invariant measures is hard to prove in the general setting. The case of Brownian motion with a random drift which is either incompressible or the gradient of a stationary function, is tractable, see \cite{olla94}, \cite{olla01}. But many examples fall outside this framework, and only recent developments go beyond it, for they require new techniques, see \cite{kom}, \cite{kom-krupa02}, \cite{kom-krupa}, \cite{kom-olla-03}.\\ Progress has recently been made in the discrete setting for random walks in random environment in higher dimensions, in particular with the help of the renewal-type arguments introduced in Sznitman-Zerner \cite{szn-zer}, see \cite{bolt-szn-1}, \cite{bolt-szn}, \cite{bolt-szn-zeit}, \cite{com-zeit}, \cite{szn00}, \cite{szn01}, \cite{szn02}, \cite{szn03}, \cite{szn04}, \cite{zeit}. 
It is natural, but not straightforward, to try to transpose these results to the continuous space-time setting, and thus propose a new approach to multidimensional diffusions in random environment, when no invariant measure is a priori known. The first step in this direction was taken up in Shen \cite{shen}, where, in the spirit of Sznitman-Zerner \cite{szn-zer}, certain regeneration times providing a renewal structure are introduced. Then a sufficient condition for a 'ballistic' strong law of large numbers ('ballistic' means that the limiting velocity does not vanish, which we refer to as ballistic behavior) and a central limit theorem governing corrections to the law of large numbers, with non-degenerate covariance matrix, is given in terms of these regeneration times.\\ In this article we show that under condition ($T'$), see (\ref{eq:T'}) for the definition, when $d \geq 2$, the diffusion in random environment satisfies the aforementioned sufficient condition of Shen \cite{shen}. We formulate the rather geometric condition $(T')$ and are able to restate it equivalently in terms of the renewal structure of Shen \cite{shen}, see Theorem \ref{thm:Tgamma}. With $(T')$ we are then able to derive tail estimates on the first regeneration time which in particular imply the above mentioned sufficient condition of Shen \cite{shen}, see Theorem \ref{thm:tail-estimate}. In the discrete i.i.d. setting, condition ($T'$) was introduced in the work of Sznitman, see \cite{szn01} and \cite{szn02}, and some of our arguments are inspired by \cite{szn01} and \cite{szn02}. As an application of our methods, we give concrete examples. In particular, we recover and extend results of Komorowski and Krupa \cite{kom-krupa}.\\ Before describing our results in more details, let us recall the setting.\\ The {\it random environment} is described by a probability space $(\Omega,\mathcal{A},\mathbb{P})$. We assume that there exists a group $\{t_{x}:x \in \R{d}\}$ of transformations on $\Omega$, jointly measurable in $x, \omega$, which preserve the probability $\mathbb P$: \begin{equation} \label{eq:stationarity} t_x \mathbb P =\mathbb P \,. \end{equation} On $(\Omega,\mathcal{A},\mathbb{P})$ we consider bounded measurable functions $b(\cdot):\Omega \rightarrow \R{d}$ and $\sigma(\cdot):\Omega \rightarrow \R{d \times d}$, as well as two finite constants $\bar{b},~\bar{\sigma}>0$ such that for all $\omega \in \Omega$ \begin{eqnarray} \label{eq:b-sigma-bound} \left| b(\omega) \right| \leq \bar{b}, \quad \left| \sigma(\omega) \right| \leq \bar{\sigma}, \end{eqnarray} where $|\cdot|$ denotes the Euclidean norm for vectors resp. for square matrices. We write \begin{eqnarray*} b(x,\omega)=b(t_{x}(\omega)), \quad \sigma(x,\omega)=\sigma(t_{x}(\omega)). \end{eqnarray*} We further assume that $b(\cdot,\omega)$ and $\sigma(\cdot,\omega)$ are Lipschitz continuous, i.e. there is a constant $K>0$ such that for all $\omega \in \Omega,~x,y \in \R{d}$, \begin{eqnarray} \label{eq:Lipschitz} |b(x,\omega)-b(y,\omega)|+|\sigma(x,\omega)-\sigma(y,\omega)| \leq K|x-y|. \end{eqnarray} $\sigma \sigma^{t}(x,\omega)$ is uniformly elliptic, i.e. there is a constant $\nu > 0$ such that for all $\omega \in \Omega,~x,y \in \R{d}$, \begin{eqnarray} \label{eq:elliptic} \frac{1}{\nu}|y|^{2} \leq |\sigma^{t}(x,\omega)y|^{2} \leq \nu |y|^{2}, \end{eqnarray} where $\sigma^{t}$ denotes the transposed matrix of $\sigma$. 
For a Borel subset $F \subset \R{d}$, we define the $\sigma$-field generated by $b(x,\omega),~\sigma(x,\omega)$, for $x \in F$ by \begin{equation} \label{eq:sigma-field} \mathcal{H}_{F} \df \sigma\{b(x,\cdot), \sigma(x,\cdot):x \in F\}, \end{equation} and assume finite range dependence: there is an $R > 0$ such that for all Borel subsets $F, F' \subset \R{d}$ with $d(F,F') \df \inf\{|x-x'|: x \in F, x' \in F' \}>R$, \begin{equation} \label{eq:R-separation} \mathcal{H}_{F} \text{ and } \mathcal{H}_{F'} \text{ are $\mathbb P$-independent}. \end{equation} We denote by $(C(\R{}_{+},\R{d}),\mathcal{F},W)$ the canonical Wiener space, and with $(B_{t})_{t \geq 0}$ the $d$-dimensional Brownian motion (which is independent from $(\Omega,\mathcal{A},\mathbb{P})$). The diffusion process in the random environment $\omega$ is described by the family of laws $(P_{x,\omega})_{x \in \R{d}}$ (we call them the \emph{quenched} laws) on $(C(\R{}_{+},\R{d}),\mathcal{F})$ of the solution of the stochastic differential equation \begin{eqnarray} \label{eq:SDE} \begin{cases} dX_t=\sigma(X_t,\omega)dB_t+b(X_t,\omega)dt,\\ X_{0}=x, \quad x \in \R{d}, ~\omega \in \Omega. \end{cases} \end{eqnarray} The second order linear differential operator associated to the stochastic differential equation (\ref{eq:SDE}) is given by: \begin{equation} \label{eq:diff-operator} \mathcal{L_\omega} \df \frac{1}{2}\sum_{i,j=1}^d a_{ij}(x,\omega)\frac{\partial^2}{\partial x_i\partial x_j }+\sum_{j=1}^d b_j(x, \omega)\frac{\partial}{\partial x_j}\,. \end{equation} To restore some stationarity to the problem, it is convenient to introduce the \emph{annealed} laws $P_{x}$, which are defined as the semi-direct products: \begin{equation} \label{eq:annealed} P_{x} \df \mathbb{P} \times P_{x,\omega},~~ \mathrm{for}~ x \in \R{d}. \end{equation} Of course the Markov property is typically lost under the annealed laws. \\ Let us now explain the purpose of this work. The main object is to introduce sufficient conditions for ballistic behavior of the diffusion in random environment when $d \geq 2$. These conditions are expressed in terms of another condition $(T)_{\gamma}$ which is defined as follows. Consider, for $|l|=1$ a unit vector of $\R{d}$, $b, L>0$, the slabs \begin{equation*} U_{l,b,L} \df \{x \in \R{d}:-bL < x \cdot l <L\}. \end{equation*} We say that \emph{condition $(T)_{\gamma}$} holds relative to $l \in S^{d-1}$, in shorthand notation $(T)_\gamma |\,l$, if for all $l' \in S^{d-1}$ in a neighborhood of $l$, and for all $b>0$, \begin{equation} \label{eq:Tgamma} \limsup_{L \to \infty} L^{-\gamma} \log{P_{0}[X_{\T{}{\U{l'}}}\cdot l' < 0]}<0 ,\end{equation} where $\T{}{\U{l}}$ denotes the exit time of $X_\cdot$ out of the slab $U_{l,b,L}$, see (\ref{eq:exit-time}) for the definition.\\ The aforementioned sufficient conditions for ballistic behavior are then condition $(T)$ relative to the direction $l$, in shorthand notation $(T)|l$, which refers to the case where \begin{equation} \label{eq:T} (\ref{eq:Tgamma}) \textrm{ holds for } \gamma = 1\,, \end{equation} or the weaker condition $(T')$ relative to the direction $l$, in shorthand notation $(T')|l$, which refers to the case where \begin{equation} \label{eq:T'} (\ref{eq:Tgamma}) \textrm{ holds for all } \gamma \in (0,1)\,. \end{equation} Clearly $(T)$ implies $(T')$ which itself implies $(T)_{\gamma}$ for all $\gamma \in (0,1)$. We expect these conditions all to be equivalent, cf. Sznitman \cite{szn02}, \cite{szn04}, however this remains an open question. 
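Purely as a numerical illustration of the quantity appearing in (\ref{eq:Tgamma}), and not as an ingredient of any argument below, one may estimate the annealed exit probability $P_{0}[X_{T_{U_{l,b,L}}}\cdot l<0]$ by Monte Carlo: draw an environment, run an Euler scheme for (\ref{eq:SDE}) until the slab is left, and average over environments and Brownian increments. The toy drift used in the sketch below is smooth and bounded but, unlike the environments considered in this article, it is synthetic and does not satisfy finite range dependence; all names and parameters are purely illustrative.
\begin{verbatim}
# Toy Monte Carlo sketch (not from this article): estimate P_0[X_{T_U}.l < 0]
# for the slab U = {-bL < x.l < L}, with sigma = Id and a synthetic drift.
import numpy as np

def sample_drift(rng, d=2, modes=5, amp=0.5):
    ks = rng.normal(size=(modes, d))
    ph = rng.uniform(0.0, 2.0 * np.pi, size=modes)
    cs = rng.normal(size=(modes, d))
    def b(x):  # smooth, bounded, Lipschitz drift b(x, omega) with a small mean drift
        return amp * np.sum(cs * np.cos(x @ ks.T + ph)[:, None], axis=0) / modes \
               + np.array([0.2, 0.0])
    return b

def backtrack_prob(L=5.0, b_slab=1.0, dt=5e-2, n_env=100, n_path=4, seed=1):
    rng = np.random.default_rng(seed)
    l = np.array([1.0, 0.0])
    back, total = 0, 0
    for _ in range(n_env):                   # average over environments (law P)
        b = sample_drift(rng)
        for _ in range(n_path):              # average over Brownian paths (P_{0,omega})
            x = np.zeros(2)
            while -b_slab * L < x @ l < L:   # Euler scheme until the slab is left
                x = x + b(x) * dt + np.sqrt(dt) * rng.normal(size=2)
            back += int(x @ l < 0)
            total += 1
    return back / total                      # crude estimate of P_0[X_{T_U}.l < 0]

print(backtrack_prob())
\end{verbatim}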
The conditions $(T)$ and $(T')$ are not effective conditions which can be checked by direct inspection of the environment restricted to a bounded domain of $\R{d}$. In the discrete i.i.d. setting, Sznitman \cite{szn02} proved the equivalence between a certain effective criterion and condition $(T')$. With the help of the effective criterion he also proved that $(T)_\gamma$ and $(T')$ are equivalent for $\frac{1}{2}< \gamma <1$. We believe that a similar effective criterion holds in the continuous setting, and it is in the spirit of this belief that we formulate all our results in Section \ref{sec:condition-T} and \ref{sec:tail-estimate} in terms of condition $(T')$ resp. $(T)_\gamma$. Later, in Section \ref{sec:examples}, we verify the stronger condition ($T$) for a large class of examples.\\ In Theorem \ref{thm:Tgamma} we show that the definition of condition $(T)_\gamma|l$, see (\ref{eq:Tgamma}), which is of a rather geometric nature, has an equivalent formulation in terms of transience of the diffusion in direction $l$ and a stretched exponential control of the size of the trajectory up to the first regeneration time $\tau_{1}$ (see subsection \ref{subsec:regeneration} for the precise definition): \begin{align} & P_{0}-a.s. \lim_{t \to \infty} X_t \cdot l = \infty\,, \label{eq:transience-0} \\ &\text{and for some } \mu > 0,~~ \hat{E_{0}}[\,\exp\{\mu \sup_{0 \leq t \leq \tau_{1}}|X_{t}|^{\gamma}\}] < \infty\,. \label{eq:integrability} \end{align} Following Shen \cite{shen}, the successive regeneration times $\tau_{k}, k \geq 1$, are defined on an enlarged probability space which is obtained by adding some suitable auxiliary i.i.d.~Bernoulli variables, cf. subsection \ref{subsec:regeneration}. The quenched measure on the enlarged space, which couples the trajectories to the Bernoulli variables, is denoted by $\hat{P}_{x,\omega}$, and $\hat{P}_{x}$ refers to the annealed measure $\mathbb{P}\times \hat P_{x,\omega}$, cf. subsection \ref{subsec:coupling}. Loosely speaking, the first regeneration time $\tau_{1}$ is the first integer time where the diffusion process in random environment reaches a local maximum in a given direction $l \in S^{d-1}$, some auxiliary Bernoulli variable takes value one, and from then on the diffusion process never backtracks. \\ The strategy of the proof of the above mentioned equivalence statement is similar to that of the analogue statement in the discrete i.i.d. setting, see Sznitman \cite{szn02}. Nevertheless, changes appear in several places, due among others to the fact that the regeneration time $\tau_{1}$ is more complicated than in the discrete setting.\\ Theorem \ref{thm:Tgamma} is very useful because conditions (\ref{eq:Tgamma}) and (\ref{eq:integrability}) have different flavours. Condition (\ref{eq:integrability}) is especially useful when studying asymptotic properties of the diffusion process, whereas (\ref{eq:Tgamma}) is more adequate to construct examples.\\ Together with the crucial renewal property (see Theorem \ref{thm:renewal}) induced by the regeneration times $\tau_{k}$, $k \geq 1$, the formulation (\ref{eq:integrability}) is instrumental in showing that under $(T')$, and when $d \geq 2$, \begin{equation} \label{eq:tail} \limsup_{u \rightarrow \infty}~ (\log u)^{-\alpha} \log \hat{P}_{0}[\tau_{1} > u]<0, \quad \mathrm{for} \quad \alpha < 1+\frac{d-1}{d+1}, \end{equation} see Theorem \ref{thm:tail-estimate}. The proof again uses a strategy close to the proof in the discrete case, see Sznitman \cite{szn01}. 
We prove a seed estimate, see Lemma \ref{lemma:seed-estimate}, which is then propagated to the right scale by performing a renormalisation step, see Lemma \ref{lemma:renormalisation}. Interestingly enough, we do not require condition $(T')$ to prove the renormalisation lemma. \\ Under the assumption of (\ref{eq:transience-0}) and the finiteness of the first and second moments of $\tau_1$, Theorems 3.2 and 3.3 in Shen \cite{shen} imply that: \begin{equation} \label{eq:lln} P_{0}-\text{a.s.}, \quad \frac{X_{t}}{t} \rightarrow v, \quad v \neq 0, \text{ deterministic, with } v \cdot l >0\,, \end{equation} \begin{equation} \label{eq:clt} \begin{aligned} &\text{and under $P_{0}$, $B_{\cdot}^{s}=\tfrac{X_{s \cdot}-s\cdot v}{\sqrt{s}}$ converges in law on $C(\R{}_{+},\R{d})$, as $s \to \infty$, to a}\\ &\text{Brownian motion $B_{\cdot}$ with non-degenerate deterministic covariance matrix}. \end{aligned} \end{equation} Hence, when condition $(T')$ holds, and $d \geq 2$, Theorem \ref{thm:tail-estimate}, see also (\ref{eq:tail}), yields a ballistic law of large numbers and a central limit theorem governing corrections to the law of large numbers. Incidentally, let us mention that, as in the discrete setting, cf. Sznitman \cite{szn02}, \cite{szn04}, condition $(T')$ is a natural contender for the characterisation of ballistic diffusions in random environment when $d \geq 2$. However at present there are no rigorous results in that direction.\\ As an application of our methods, we provide a rich class of examples exhibiting ballistic behavior. We first consider the case where, for some $l \in S^{d-1}$ and all $\omega \in \Omega$, all $x \in \R{d}$, $b(x, \omega)\cdot l $ remains uniformly positive, and show in Proposition \ref{prop:non-nestling} that condition $(T)|l$ holds. Hence we recover and extend the main result of Komorowski and Krupa \cite{kom-krupa} (which only asserts (\ref{eq:lln}) when $\sigma = Id$).\\ Then we consider the case where $\sigma$ in (\ref{eq:SDE}) is the identity. We prove in Theorem \ref{thm:examples} that, when $d \geq 1$, there is a constant $c_e(\bar b, K, d, R)>0$ such that, for $l \in S^{d-1}$, \begin{equation} \label{eq:examples} \mathbb{E}[(b(0,\omega) \cdot l)_{+}]>c_e~ \mathbb{E}[(b(0,\omega) \cdot l)_{-}] \end{equation} implies condition $(T)|l$ (and hence condition $(T')|l$). Clearly, when $\sigma = Id$, the result of Proposition \ref{prop:non-nestling} is included in Theorem \ref{thm:examples}. Note that Theorem \ref{thm:examples} covers additional situations where $b(0,\omega) \cdot l$ changes sign in every unit direction $l$. This provides new examples of ballistic diffusions in random environment. More details are included in Remark \ref{rem:examples} at the end of Section \ref{sec:examples}.\\ To prove Theorem \ref{thm:examples}, we verify the geometric formulation (\ref{eq:T}) of condition ($T$). However it is a difficult task to compute the exit distribution of the diffusion out of large slabs under $P_{0}$, since the Markov property is lost under $P_{0}$. In the spirit of Kalikow \cite{kalikow}, we restore a Markovian character to the exit problem by virtue of Proposition \ref{prop:exit}. With the help of Proposition \ref{prop:exit}, we show that condition $(T)$ is implied by a certain condition ($K$), see (\ref{eq:K}), which has a similar flavor to Kalikow's condition in the discrete i.i.d. setting, see Sznitman and Zerner \cite{szn-zer}. The proof of Theorem \ref{thm:examples} is then carried out by checking condition ($K$).
These steps are similar in spirit to the strategy used in the discrete setting, cf. lecture 5 of \cite{bolt-szn}. However, difficulties arise in the continuous space-time framework.\\ Let us now describe the organisation of this article.\\ In Section \ref{sec:recall}, we recall the coupling construction which leads to the measures $\hat P_{x,\omega}$ resp. $\hat P_x$, cf. Proposition \ref{prop:coupling}. On this new probability space one constructs the regeneration times $\tau_{k}, ~k \geq 1$, which provide the crucial renewal structure, cf. Theorem \ref{thm:renewal}. These results have been obtained in Shen \cite{shen}; we recall them for the convenience of the reader.\\ In Section \ref{sec:condition-T}, we prove the equivalence of (\ref{eq:Tgamma}) and (\ref{eq:transience-0}), (\ref{eq:integrability}), see Theorem \ref{thm:Tgamma}. \\ In Section \ref{sec:tail-estimate}, we show (\ref{eq:tail}) under the assumption of condition $(T')$, see Theorem \ref{thm:tail-estimate}. Proposition \ref{prop:trap} highlights the importance of large deviation controls of the exit probability of large slabs. The renormalisation step is carried out in Lemma \ref{lemma:renormalisation}, and a seed estimate is provided in Lemma \ref{lemma:seed-estimate}.\\ In Section \ref{sec:examples}, we show that condition $(T)$ (in the geometric formulation (\ref{eq:Tgamma})) holds either under the assumption of the uniform positivity of $b(x,\omega)\cdot l$ for some unit vector $l$ and all $\omega \in \Omega$, all $x \in \R{d}$, or under the assumption of $\sigma=Id$ and (\ref{eq:examples}). \\ In the Appendix, we provide some results on continuous local martingales and Green functions, that we use throughout this article.\\ {\bf Convention on constants} Unless otherwise stated, constants only depend on the quantities $\nu, \bar{b},\bar{\sigma},K, R, d, \gamma$. In particular they are independent of the environment $\omega$. Generic positive constants are denoted by $c$. Dependence on additional parameters appears in the notation. For example, $c(p,L)$ means that the constant $c$ depends on $p$ and $L$ {\it and on} $\nu, \bar{b},\bar{\sigma},K, R, d, \gamma$. When constants or positive numbers are not numerated, their value may change from line to line.\\ {\bf Acknowledgement:} Let me thank my advisor Prof. A.-S. Sznitman for introducing me to the subject and for his advice during the completion of this work. I also want to thank Lian Shen for his help and numerous discussions. \section{The Regeneration Times and the Renewal Structure} \label{sec:recall} In this section, we recall the definition of the coupled measures $\hat{P}_{x,\omega}$ (resp. $\hat{P}_{x}$) and of the regeneration times $\tau_{k}$, $k \geq 1$, given in Shen \cite{shen}. We then cite the resulting renewal structure, see Theorem \ref{thm:renewal}. For the proofs or further details, we refer the reader to Shen \cite{shen}. \subsection{Notation} We introduce some additional notation. For $x \in \R{d}$, $d \ge 1$, we let $B_{r}(x)$ denote the open Euclidean ball with radius $r$ centered in $x$. For $U \subseteq \R{d}$, we denote with $\bar U$ its closure, with diam$(U) \df \sup\{|x-y|:x,y \in U\}$ its diameter, and, for measurable $U$, with $|U|$ its Lebesgue measure. A domain stands for a connected open subset of $\R{d}$. For $x \in \R{}$, we define $\lfloor x \rfloor \df \sup\{k \in \mathbb Z:k \leq x\}$ and $\lceil x \rceil \df \inf\{k \in \mathbb Z:k \geq x\}$. For a discrete set $A$, we denote with $\#A$ its cardinality. 
For an open set $U$ in $\R{d}$ and $u \in \R{}$ we define the $(\mathcal{F}_t)_{t \geq 0}$-stopping times (($\mathcal{F}_t)_{t \geq 0}$ denotes the canonical right-continuous filtration on $(C(\mathbb{R}_{+}, \R{d}), \mathcal{F}))$:\\ the exit time from $U$, \begin{equation} \label{eq:exit-time} T_U \df \inf{\{t \geq 0:X_{t} \notin U\}}, \end{equation} and the entrance times into the half-spaces $\{x \cdot l \geq u\}$ resp. $\{x \cdot l \leq u\}$, \begin{align} \label{eq:stopping-times} \begin{split} &\T{l}{u} \df \inf{\{t \geq 0:X_t \cdot l \geq u\}}, \\ &\widetilde{T}^{l}_{u} \df \inf{\{t \geq 0:X_t \cdot l \leq u\}}. \end{split} \end{align} We define as well the maximal value of the process $(X_{s}\cdot l)_{s \geq 0}$ till time t, \begin{equation} \label{eq:maximum} M(t) \df \sup{\{X_{s} \cdot l: 0 \leq s \leq t \}}, \end{equation} and the first return time of the process $(X_s \cdot l)_{s \geq 0}$ to the level $-R$ relative to the starting point, as well as its rounded value, \begin{equation} \label{eq:return} J \df \inf\{t \geq 0: (X_t-X_0)\cdot l \leq -R\}\, , \quad D \df \lceil J \rceil \, . \end{equation} \subsection{The coupled measures} \label{subsec:coupling} We need further notations. We let $l$ be a fixed unit vector, and \begin{equation} \label{eq:2-subsets} U^x\df B_{6R}(x+5Rl)\,,\quad B^x\df B_R(x+9Rl)\,. \end{equation} We denote by $\lambda_j$ the canonical coordinates on $\{0,1\}^\mathbb N$. Further, let $\mathcal S_m\df \sigma\{\lambda_0,\cdots,\lambda_m\}$\,, $m\in\mathbb N$, denote the canonical filtration on $\{0,1\}^\mathbb N$ generated by $(\lambda_m)_{m\in\mathbb N}$ and $\mathcal S \df \sigma\big\{\bigcup_m \mathcal S_m\big\}$ be the canonical $\sigma$-algebra. We also write for $t\geq 0$: \begin{equation} \label{eq:filtrationZ} \mathcal Z_t\df \mathcal F_t\otimes\mathcal S_{\lceil t\rceil}\,,\quad \mathcal Z \df \mathcal F\otimes\mathcal S=\sigma\Big\{\bigcup_{m\in\mathbb N}\mathcal Z_m\Big\}\,. \end{equation} We also consider the shift operators $\big\{\theta_m:m\in\mathbb N\big\}$, with $\theta_m: \left(C(\mathbb R_+,\R{d})\times \{0,1\}^\mathbb N, \mathcal Z\right)\to \left(C(\R{}_+,\R{d})\times \{0,1\}^\mathbb N,\mathcal Z\right)$, such that \begin{equation} \label{eq:shift} \theta_m\left(X_\cdot,\lambda_\cdot\right)=\left(X_{m+\cdot},\lambda_{m+\cdot}\right)\,. \end{equation} Then from Theorem 2.1 in Shen \cite{shen}, one has the following measures, coupling the diffusion in random environment with a sequence of Bernoulli variables: \begin{prop} \label{prop:coupling} There exists $p>0$, such that for every $\omega\in\Omega$ and $x\in\R{d}$, there exists a probability measure $\hat P_{x,\omega}$ on $\big(C(\R{}_+,\R{d})\times\{0,1\}^\mathbb N,\mathcal Z\big)$ depending measurably on $\omega$ and $x$, such that \begin{enumerate} \item Under $\hat P_{x,\omega}$, $(X_t)_{t\geq 0}$ is $P_{x,\omega}$-distributed, and the $\lambda_m$, $m\geq 0$, are i.i.d. Bernoulli variables with success probability $p$. \item For $m \geq 1$, $\lambda_m$ is independent of $\mathcal F_m\otimes\mathcal S_{m-1}$ under $\hat P_{x,\omega}$. Conditioned on $\mathcal Z_m$, $X_\cdot\circ\theta_m$ has the same law as $X_\cdot$ under $\hat P^{\lambda_m}_{X_m,\omega}$, where for $y \in \R{d}$, $\lambda \in \{0,1\}$, $\hat P^{\lambda}_{y,\omega}$ denotes the law $\hat P_{y,\omega}[\;\cdot\;|\lambda_0=\lambda]$. \item $\hat P^{1}_{x,\omega}$ almost surely, $X_{s}\in U^{x}$ for $s\in[0,1]$ (recall (\ref{eq:2-subsets})). 
\item Under $\hat P^{1}_{x,\omega}$, $X_{1}$ is uniformly distributed on $B^x$ (recall (\ref{eq:2-subsets})). \end{enumerate} \end{prop} We then introduce the new annealed measures on $\big(\Omega\times C(\R{}_+,\R{d})\times\{0,1\}^\mathbb N,\mathcal A\otimes\mathcal Z\big)$: \begin{equation} \label{eq:coupled-measures} \hat P_{x} \df \mathbb P\times \hat P_{x,\omega} \quad\text{and}\quad \hat E_x \df \mathbb E\times \hat E_{x,\omega}\,. \end{equation} \subsection{The Regeneration Times $\tau_k$ and the Renewal Structure} \label{subsec:regeneration} To define the first regeneration time $\tau_1$, we introduce a sequence of integer-valued $(\mathcal Z_t)_{t\geq 0}$-stopping times $N_k$, $k \ge 1$, such that, at these times, the Bernoulli variable takes the value one, and the process $(X_t \cdot l)_{t\geq 0}$ in essence reaches a new maximum. Proposition \ref{prop:coupling} now shows that {\it for every environment $\omega \in \Omega$}, the position of the diffusion at time $N_k+1$ is uniformly distributed on the ball $B^{X_{N_k}}$ under $\hat P_{0,\omega}$. We define $\tau_1$ as the first $N_k+1$ such that, after time $N_k+1$, the process $(X_t \cdot l)_{t \ge 0}$ never goes below the level $X_{N_k+1} \cdot l -R$. In essence, the distance between the positions $X_{\tau_1-1}$ and $X_{\tau_1}$ is large enough to obtain, in view of finite range dependence, independence of the parts of the trajectory $(X_t-X_0)_{t \le \tau_1-1}$ and $(X_{\tau_1+t}-X_{\tau_1})_{t \ge 0}$ under $\hat P_0$, so that the diffusion regenerates at time $\tau_1$ under $\hat P_0$. We define the regeneration times $\tau_k$, $k \ge 2$, in an iterative fashion, and we provide the renewal structure in Theorem \ref{thm:renewal}.\\ In fact, the precise definition of $\tau_{1}$ relies on several sequences of stopping times. First, for $a>0$, introduce the $(\mathcal F_t)_{t\geq 0}$-stopping times $V_k(a)$, $k\geq 0$, (recall $M(t)$ in (\ref{eq:maximum}) and $T_u$ in (\ref{eq:stopping-times})): \begin{equation} \label{eq:V(a)} V_0(a)\df T_{M(0)+a}\,, \, V_{k+1}(a)\df T_{M(\lceil V_k(a)\rceil)+R}\,. \end{equation} In view of the Markov property, see point 2 of Proposition \ref{prop:coupling}, we want the stopping times $N_k(a), k\geq 1$, to be integer-valued. Therefore we introduce in an intermediate step the (integer-valued) stopping times $\tilde N_k(a)$ where the process $X_t\cdot l$ essentially reaches a maximum: \begin{equation} \begin{cases} \displaystyle \tilde N_1(a)\df \inf\Big\{\lceil V_k(a)\rceil : k\geq 0, \sup_{s\in[V_k,\lceil V_k\rceil]}|l\cdot(X_s-X_{V_k})|< \tfrac{R}{2}\Big\}\,,\\ \tilde N_{k+1}(a)\df \tilde N_1(3R)\circ\theta_{\tilde N_k(a)}+\tilde N_k(a)\,,\ k\geq 1\,, \end{cases} \end{equation} (by convention we set $\tilde N_{k+1}=\infty$ if $\tilde N_k=\infty$). In the spirit of the comment at the beginning of this subsection, we define the $(\mathcal Z_t)_{t\geq 0}$-stopping time $N_1$ as \begin{equation} \label{eq:N(a)} N_1(a)\df \inf\left\{\tilde N_k(a): k\geq 1, \lambda_{\tilde N_k(a)}=1\right\},\,\,\,N_1 \df N_1(3R), \end{equation} as well as the $(\mathcal Z_t)_{t\geq 0}$-stopping times \begin{equation} \label{eq:time-S} \begin{cases} S_1\df N_1+1\,,\\ R_1\df S_1+D\circ\theta_{S_1}\,.
\end{cases} \end{equation} The $(\mathcal Z_t)_{t\geq 0}$-stopping times $N_{k+1}$, $S_{k+1}$ and $R_{k+1}$ are defined in an iterative fashion for $k \geq 1$: \begin{equation} \label{eq:N-S-R} \begin{cases} N_{k+1}\df R_{k}+N_1(a_k)\circ\theta_{R_k} \text{ with } a_k\df M(R_k)- X_{R_k} \cdot l+R\ge R\,,\\ S_{k+1}\df N_{k+1}+1\,,\\ R_{k+1}\df S_{k+1}+D\circ\theta_{S_{k+1}}\, \end{cases} \end{equation} (the shift $\theta_{R_k}$ is {\em not} applied to $a_k$ in the above definition).\\ Notice that for all $k \geq 1$, the $(\mathcal Z_t)_{t\geq 0}$-stopping times $N_k$, $S_k$ and $R_k$ are integer-valued, possibly equal to infinity, and we have $1\leq N_1\leq S_1\leq R_1\leq N_2\leq S_2\leq R_2\cdots\leq\infty$.\\ The first {\em regeneration time} $\tau_1$\, is defined, as in \cite{szn-zer}, by \begin{equation} \label{eq:regeneration-time} \tau_1\df \inf\{S_k : S_k<\infty,\, R_k=\infty\}\leq \infty\;. \end{equation} We define the sequence of random variables $\tau_k$, $k \geq 1$, iteratively on the event $\{\tau_1<\infty\}$, by viewing $\tau_k$ as a function of $(X_{\cdot}, \lambda_{\cdot})$: \begin{equation} \label{eq:tau_k} \tau_{k+1}\big(( X_\cdot,\lambda_\cdot)\big) \df \tau_1\big(( X_\cdot,\lambda_\cdot)\big)+\tau_k\big(( X_{\tau_1+\cdot}- X_{\tau_1}, \lambda_{\tau_1+\cdot})\big), \ k\geq 1, \end{equation} and set by convention $\tau_{k+1}=\infty$ on $\{\tau_k=\infty\}$. Observe that for each $k \geq 1$, $\tau_k$ is either infinite or a positive integer. By convention, we set $\tau_0=0$. The random variables $\tau_k$, $k \geq 0$, provide a renewal structure, see also Theorem 2.5 in Shen \cite{shen}, which will be crucial in the proof of Theorem \ref{thm:Tgamma}. \begin{thm}[Renewal Structure] \label{thm:renewal} Assume that $\hat P_0{}$-a.s., $\tau_1<\infty$. Then under the measure $\hat P_0$, the random variables $Z_k\df \left(X_{(\tau_k+\cdot)\wedge(\tau_{k+1}-1)}-X_{\tau_k};\, X_{\tau_{k+1}}-X_{\tau_k};\, \tau_{k+1}-\tau_k\right)$, $k\geq 0$, are independent. Furthermore, $Z_k$, $k\geq 1$, under $\hat P_0$, have the distribution of $Z_0=\left(X_{\cdot\wedge(\tau_{1}-1)}-X_0;\, X_{\tau_{1}}-X_0;\, \tau_{1}\right)$ under $\hat P_0[\;\cdot\;|D=\infty]$. \end{thm} The following Proposition is also established in \cite{shen} (see Lemma 2.3 and Proposition 2.7 therein): \begin{prop} \label{prop:equiv} $\hat P_0$-a.s. $\tau_1<\infty$ if and only if $P_0$-a.s. $\lim_{t\to \infty} X_t\cdot l=\infty$. Furthermore $\hat P_0$-a.s. $\tau_1<\infty$ implies $P_0[D=\infty]>0$ (recall the definition of $D$ in (\ref{eq:return})). \end{prop} \section{Equivalent Formulations of Condition $(T)_\gamma$} \label{sec:condition-T} In this section, we provide an equivalent formulation of the condition $(T)_\gamma |l$, cf. (\ref{eq:Tgamma}), in terms of a stretched exponential estimate on the size of the trajectory $X_{t},0 \leq t \leq \tau_{1}$. \begin{thm} \label{thm:Tgamma} Let $l \in S^{d-1}, 0< \gamma \leq 1$. One has the equivalence \begin{align} &\bullet (T)_{\gamma}|l \label{eq:geometric-Tgamma} \\ &\bullet P_{0}-a.s. \lim_{t \to \infty} X_{t} \cdot l = \infty\,,\text{and for some }\mu > 0,\, \hat{E_{0}}[\exp\{{\mu \sup_{0 \leq t \leq \tau_{1}}|X_{t}|^{\gamma}\}}] < \infty\,. \label{eq:integrability-2} \end{align} \end{thm} \subsection{The Proof of (\ref{eq:geometric-Tgamma}) $\Rightarrow$ (\ref{eq:integrability-2})} Let us first show that \begin{equation} \label{eq:transience-1} P_{0}-a.s.\, \lim_{t \to \infty} X_{t} \cdot l = \infty\,. 
\end{equation} We choose an orthonormal basis $(f_{i})_{1 \leq i \leq d}$ of $\R{d}$ with $f_{1}=l$. By definition of condition $(T)_\gamma|l$, there are unit vectors $l_{i,+},\, l_{i,-}$ in $\R{}f_{1}+ \R{}f_{i}$, $2 \leq i \leq d$, such that: \begin{equation*} l_{i,\pm} \cdot f_{1} > 0, \quad l_{i,+} \cdot f_{i} > 0, \quad l_{i,-} \cdot f_{i} < 0, \end{equation*} and, for $l'=l,\, l_{i,+},\, l_{i,-},\, 2 \leq i \leq d,\,b>0$, \begin{equation} \label{eq:Tgamma-l'} \limsup_{L \to \infty} L^{-\gamma} \log{P_{0}[X_{\T{}{\U{l'}}}\cdot l' < 0]}<0\,. \end{equation} Consider the open set $\mathcal{D} \df \{x \in \R{d},\, |x \cdot l|<1,\, x \cdot l_{i,\pm}>-1, \, 2 \leq i \leq d \}.$ $\mathcal{D}$ is a bounded set, hence we can find numbers $a_{i,\pm}>0 \, , \, 2 \leq i \leq d$, such that \[ \mathcal{D} \subseteq \{x \in \R{d}:\, x \cdot l_{i,\pm}< a_{i,\pm} \, , 2 \leq i \leq d \}\,. \] Since $(T)_{\gamma}$ holds relative to $l$ and $l_{i,\pm}$, $2 \leq i \leq d $, writing \begin{eqnarray*} P_{0} [T_{L\mathcal{D}} < T^{l}_{L}]\, \leq \,P_{0} [\tilde{T}^{l}_{-L} < T^{l}_{L}] ~+~ \sum_{i=2}^{d} P_{0} [\tilde{T}^{l_{i,+}}_{-L} < T^{l_{i,+}}_{La_{i,+}}] ~+~ \sum_{i=2}^{d} P_{0} [\tilde{T}^{l_{i,-}}_{-L} < T^{l_{i,-}}_{La_{i,-}}]\,, \end{eqnarray*} we find by (\ref{eq:Tgamma-l'}) that \begin{equation} \label{eq:exit-D} \limsup_{L \to \infty} L^{-\gamma} \log{P_{0}[T_{L\mathcal{D}} < T^{l}_{L}]} <0\,. \end{equation} Since $P_{0} [T^{l}_{L} = \infty] \leq P_{0} [T_{L\mathcal{D}} < T^{l}_{L}]$, and the left-hand side increases with $L$, (\ref{eq:exit-D}) implies that $P_{0}-a.s. \, \limsup_{t \to \infty} X_{t} \cdot l = \infty\,$. As a next step we observe that \begin{equation} \label{eq:transience} \limsup_{L \to \infty} L^{-\gamma} \log{P_{0}[\tilde{T}_{\frac{L}{2}}^{l} \circ \theta _{T_{L}^{l}} < T_{\frac{4L}{3}}^{l}\circ \theta _{T_{L}^{l}}]} <0\,. \end{equation} Indeed: \begin{equation} \label{eq:transience-3} P_{0}[\tilde{T}_{\frac{L}{2}}^{l}\circ \theta _{T_{L}^{l}} < T_{\frac{4L}{3}}^{l}\circ \theta _{T_{L}^{l}}] \leq P_{0} [\T{}{L\mathcal{D}} < \T{l}{L}] + P_{0}[\tilde{T}_{\frac{L}{2}}^{l}\circ \theta _{T_{L}^{l}} < T_{\frac{4L}{3}}^{l}\circ \theta _{T_{L}^{l}}, \T{}{L\mathcal{D}}= \T{l}{L}]\,, \end{equation} and by (\ref{eq:exit-D}) we only need to estimate the second term on the right-hand side of (\ref{eq:transience-3}). We define \begin{equation*} \partial_{+}\mathcal{D} \df \{x \in \partial \mathcal{D}:~x \cdot l =1\}, \end{equation*} and let $(B_{1}(x_{i}))_{i \in I}$, $x_{i} \in \partial_{+}L\mathcal{D}$, $I$ a finite set with cardinality growing polynomially in $L$, be a cover of $\partial_{+}L\mathcal{D}$ by unit balls, see above (\ref{eq:stopping-times}) for the notation. It follows from the strong Markov property and the stationarity of the measure $\mathbb{P}$ that \begin{equation} \label{eq:P} \begin{aligned} &P_{0}[\tilde{T}_{\frac{L}{2}}^{l}\circ \theta _{T_{L}^{l}} < T_{\frac{4L}{3}}^{l}\circ \theta_{T_{L}^{l}}, T_{L\mathcal{D}}= T^{l}_{L}] \leq \sum_{i \in I}\mathbb{E}\Big[ E_{0,\omega}[P_{X_{\T{l}{L}},\omega}[\tilde{T}_{\frac{L}{2}}^{l} < T_{\frac{4L}{3}}^{l}], X_{\T{l}{L}} \in B_1(x_i)]\Big]\\ \leq& \sum_{i \in I}\mathbb{E}\left[ \sup_{x \in B_1(x_i)}P_{x,\omega} [\tilde{T}_{\frac{L}{2}}^{l} <T_{\frac{4L}{3}}^{l}]\right] = \sum_{i \in I}\mathbb{E}\left[\sup_{x \in B_{1}(0)}P_{x,\omega} [\tilde{T}_{-\frac{L}{2}}^{l} <T_{\frac{L}{3}}^{l}]\right].
\end{aligned}
\end{equation}
For large enough $L$, it follows from the strong Markov property that for all $\omega \in \Omega$,
\begin{equation}
\label{eq:strong-Markov}
\text{the function } x \mapsto P_{x, \omega}[\tilde{T}_{-\frac{L}{2}}^{l} < T_{\frac{L}{3}}^{l}] \text{ is $\mathcal L_\omega$-harmonic on $B_3(0)$},
\end{equation}
see for instance \cite{kar-shr} p.364f. Harnack's inequality (see \cite{gil-tru} p.250) states that there is a constant $c_H>1$ such that for all $\mathcal L_\omega$-harmonic functions $u$ on $B_{3}(x)$, $x \in \R{d}$,
\begin{equation}
\label{eq:Harnack}
\sup_{y \in B_1(x)}u(y) \leq c_H \inf_{y \in B_1(x)}u(y)\,,
\end{equation}
which shows that
\begin{equation}
\label{eq:Harnack-1}
\mathbb{E}\left[\sup{_{x \in B_{1}(0)}P_{x,\omega}[\tilde{T}_{-\frac{L}{2}}^{l} < T_{\frac{L}{3}}^{l}]}\right] \leq c_H~ P_{0}[\tilde{T}_{-\frac{L}{2}}^{l} < T_{\frac{L}{3}}^{l}].
\end{equation}
Inserting (\ref{eq:Harnack-1}) in (\ref{eq:P}), we see that (\ref{eq:transience}) follows from (\ref{eq:geometric-Tgamma}). From an application of Borel-Cantelli's lemma we obtain that $P_{0}$-a.s., for all large integers $L$,
\[
T^{l}_{\frac{4L}{3}} < \tilde T_{\frac{L}{2}}^{l} \circ \theta _{T_{L}^{l}} + T_{L}^{l}.
\]
So on a set of full $P_{0}$-measure we can construct an integer-valued sequence $L_{k} \nearrow \infty$, with $L_{k+1}=[\frac{4}{3}L_{k}]$ and $T^{l}_{L_{k+1}} < \tilde{T}_{\frac{L_{k}}{2}}^{l} \circ \theta _{T_{L_{k}}^{l}} + T_{L_{k}}^{l}, \, k \geq 0.$ This shows (\ref{eq:transience-1}).\\
We now show that for some $\mu>0$
\begin{equation}
\label{assertion}
\hat{E_{0}}[\exp\{{\mu \sup_{0 \leq t \leq \tau_{1}}|X_{t}|^{\gamma}\}}] < \infty.
\end{equation}
The proof is divided into several propositions. In a first step, we study the integrability properties of the random variable (recall (\ref{eq:return}))
\begin{equation}
\label{eq:M}
M \df \sup{\{(X_{t}-X_{0}) \cdot l : 0 \leq t \leq J\}}\, ,
\end{equation}
i.e., $M$ is the maximal relative displacement of $X_{\cdot}$ in the direction $l$ before it goes an amount $R$ below its starting point. By virtue of Proposition \ref{prop:equiv} and (\ref{eq:transience-1}), we know that $P_{0}[D=\infty]=P_{0}[J=\infty]>0$ (recall (\ref{eq:return})). Hence we cannot expect $M$ to be finite. Nevertheless we have the following Proposition:
\begin{prop}
\label{prop:maximum}
There is $ \mu_1> 0$ such that
\[
E_{0}[\exp{\{\mu_{1}~M^\gamma\}},~J < \infty] \leq 1- \tfrac{P_{0}[J=\infty]}{2}.
\]
\end{prop}
\begin{proof}
Let $L_k = \left(\frac{4}{3}\right)^k$. By our previous result (\ref{eq:transience}), we see that there is $\mu>0$ such that for large integers $k$:
\begin{equation}
\label{eq:M-0}
P_{0}\left[L_k \leq M < L_{k+1},~ J < \infty\right] \leq P_{0}\left[\tilde{T}^{l}_{L_k/2} \circ \theta _{T_{L_k}^{l}} < T^{l}_{4L_k/3} \circ \theta_{T_{L_k}^{l}}\right] \leq \exp{\{-\mu L_k^\gamma\}}.
\end{equation}
Let $k_{0}$ be large enough such that $\sum_{k \geq k_{0}} \exp{\{-\tfrac{\mu}{2}L_k^\gamma\}} \leq \tfrac{P_{0}[J=\infty]}{4}.$ Further, let $\mu_{1} >0$ such that $0 < (\frac{4}{3})^\gamma \mu_{1} < \frac{\mu}{2}$.
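In other words, since $L_{k+1}=\tfrac{4}{3}L_k$, this choice of $\mu_1$ guarantees that, for every $k$,
\begin{equation*}
\exp\{\mu_1 L_{k+1}^{\gamma}\}\,\exp\{-\mu L_{k}^{\gamma}\}
= \exp\Big\{\big((\tfrac{4}{3})^{\gamma}\mu_1-\mu\big)L_{k}^{\gamma}\Big\}
\leq \exp\{-\tfrac{\mu}{2}L_{k}^{\gamma}\}\,,
\end{equation*}
which is the form in which (\ref{eq:M-0}) enters the estimate below.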
Then (\ref{eq:M-0}) shows that for $k_0$ large enough,
\begin{eqnarray*}
&&E_{0}[\exp{\{\mu_1~M^\gamma\}},~J < \infty] \\
&\leq& \exp{\{\mu_1L_{k_0}^\gamma\}}~ P_{0}[J < \infty] + \sum_{k \geq k_{0}} \exp{\{\mu_1 L_{k+1}^\gamma\}}~ P_{0}[L_k \leq M < L_{k+1},~ J < \infty] \\
&\leq& \exp{\{\mu_1L_{k_0}^\gamma\}}~(1-P_{0}[J = \infty])+\sum_{k \geq k_{0}} \exp{\{-\tfrac{\mu}{2}L_k^\gamma\}}\\
&\leq& \exp{\{\mu_1L_{k_0}^\gamma\}}~(1-P_{0}[J = \infty])+ \tfrac{P_{0}[J=\infty]}{4} \leq 1-\tfrac{P_{0}[J=\infty]}{2},
\end{eqnarray*}
provided $\mu_1 >0$ is chosen small enough in the last inequality.
\end{proof}
As a next step, we shall prove the integrability of $\exp{\{\mu~ (X_{\tau_{1}} \cdot l)^\gamma\}}$ under the extended annealed measure $\hat{P}_0$. Recall the $(\mathcal{Z}_{t})_{t \geq 0}$-stopping times $(V_{k}(a))_{k \geq 0}$, $(\tilde{N}_{k}(a))_{k \geq 0}$ and $N_{1}(a)$ defined in subsection \ref{subsec:regeneration}. As we will see in the proof of Proposition \ref{prop:Xtau1}, $\exp{\{\mu~((X_{N_{1}(a)}-X_{0}) \cdot l)^\gamma\}}$ will play a key role in studying the integrability of $\exp{\{\mu~(X_{\tau_{1}} \cdot l)^\gamma\}}$ under $\hat{P}_0$. Let us therefore start with the following Proposition, which only assumes that, $P_0$-a.s., $\lim_{t \to \infty}X_t \cdot l =\infty$, a fact we have established in (\ref{eq:transience-1}).
\begin{prop}
\label{prop:N1}
Assume that $\lim_{t \to \infty}X_t \cdot l =\infty$ $P_0$-a.s. Then, for each $\mu_2 > 0$ there is $\mu_3 > 0$, such that for $\mathbb{P}$-a.e. $\omega \in \Omega$:
\begin{equation}
\label{eq:N1}
\sup_{x,a \geq R}{\hat{E}_{x, \omega}}[\exp{\{\mu_3~(((X_{N_{1}(a)}-X_{0}) \cdot l)^\gamma-a^\gamma)\}}] \leq 1+\mu_2.
\end{equation}
\end{prop}
\begin{proof}
Define $A_l \df \{\lim_{t \to \infty}X_{t} \cdot l = \infty\}$. Observe that
\begin{equation}
\label{eq:t}
\text{for $\mathbb{P}$-a.e. $\omega$ and for every $x \in \R{d}$, $P_{x,\omega}[A_l]=1$}\,.
\end{equation}
Indeed, by the stationarity of the measure $\mathbb{P}$, $P_{y}[A_l]=1$ for all $y \in \R{d}$. Hence $\int{dy~P_{y}[A_l^{c}]}=0$, and by applying Fubini's Theorem it follows that there is a $\mathbb{P}$-null set $\Gamma \subset \Omega$, such that for all $\omega \notin \Gamma$ and $y$ outside a Lebesgue null set $\mathcal{N}(\omega) \subset \R{d}$, $P_{y,\omega}[A_l^{c}]=0$. Observe that for all $x \in \R{d}$ and $\omega \in \Omega$, $P_{x,\omega}[A_l]=P_{x,\omega}[A_l \circ \theta_{1}]$. It follows from the Markov property that for all $x \in \R{d}$ and $\omega \notin \Gamma$, $P_{x,\omega}[A_l \circ \theta_{1}]=\int_{\R{d}}P_{y,\omega}[A_l]\,p_\omega(1,x,y)\,dy=1$, where $p_\omega(s,x,y)$ is the transition density function under $P_{x,\omega}$ (that is, for every open subset $U$ of $\R{d}$, $P_{x,\omega}[X_s \in U]=\int_U p_\omega(s,x,y)\,dy$). The claim (\ref{eq:t}) now follows. When $P_{x,\omega}[A_l]=1$ for all $\omega \in \Omega$ and all $x \in \R{d}$, Proposition 4.8 in Shen \cite{shen} shows that (\ref{eq:N1}) holds for all $\omega \in \Omega$, when $\gamma =1$. By the same proof as given there, Proposition \ref{prop:N1} follows from (\ref{eq:t}) when $\gamma =1$.
When $0<\gamma<1$, using $\beta^\gamma-\alpha^\gamma \leq \beta -\alpha$ for $\beta \geq 1 \vee \alpha$, and (\ref{eq:N1}) with $\gamma =1$, we find $\mu_3 \in (0,1)$ such that
\begin{align*}
&\sup_{x,a \geq R}{\hat{E}_{x, \omega}}[\exp{\{\mu_3~(((X_{N_{1}(a)}-X_{0}) \cdot l)^\gamma-a^\gamma)\}}]\\
\leq & \sup_{x,a \geq R}{\hat{E}_{x, \omega}}[\exp{\{\mu_3~(((X_{N_{1}(a)}-X_{0}) \cdot l)^\gamma-a^\gamma)\}},(X_{N_{1}(a)}-X_{0}) \cdot l \geq 1 \vee a] + e \leq 4.
\end{align*}
By Jensen's inequality, if $n \geq 1$ is large enough, we find
\begin{align*}
\sup_{x,a \geq R}{\hat{E}_{x, \omega}}[\exp{\{\tfrac{\mu_3}{n}~ (((X_{N_{1}(a)}-X_{0}) \cdot l)^\gamma-a^\gamma)\}}] \leq 4^{\frac{1}{n}} \leq 1+\mu_2\,,
\end{align*}
which shows (\ref{eq:N1}).
\end{proof}
\begin{prop}
\label{prop:Xtau1}
There exists $\mu_4 > 0$ such that
\begin{equation}
\label{eq:tau-1}
\hat{E_{0}}[\exp\{{\mu_4 (X_{\tau_{1}} \cdot l)^{\gamma}\}}] < \infty.
\end{equation}
\end{prop}
\begin{proof}
Using that, $\hat P_0$-a.s., $X_{S_{k}} \cdot l \leq X_{N_{k}}\cdot l+10R$, $k \geq 1$ (see the remark following (\ref{eq:N-S-R})), we observe that
\begin{multline}
\label{eq:h-k}
\hat E_{0}[\exp\{\mu_4 (X_{\tau_{1}} \cdot l)^\gamma\}] = \sum_{k \geq 1} \hat E_{0}[\exp\{{\mu_4 (X_{S_{k}} \cdot l)^\gamma\}},~S_{k}<\infty, D \circ \theta_{S_{k}}=\infty] \\
\leq \exp(\mu_4(10R)^\gamma) \sum_{k \geq 1} \hat E_{0}[\exp\{{\mu_4 (X_{N_{k}} \cdot l)^\gamma\}}, ~N_{k}<\infty] \df \exp(\mu_4(10R)^\gamma)\sum_{k \geq 1} h_{k}.
\end{multline}
Observe that, for $k \geq 1$, see (\ref{eq:N-S-R}),
\begin{equation*}
l\cdot X_{N_{k+1}} = l\cdot X_{R_{k}}+l \cdot (X_{N_1(a_k)}-X_0)\circ\theta_{R_{k}}\,,
\end{equation*}
with $a_k=M(R_{k})-l\cdot X_{R_{k}}+R\in \mathcal Z_{R_k}$ (in fact for any $m\geq 1$, $a_k\cdot 1_{\{R_k=m\}}$ is $\mathcal F_m\otimes \mathcal S_{m-1}$-measurable, and $\lambda_m$ is independent of $\mathcal F_m\otimes\mathcal S_{m-1}$). We recall that the shift $\theta_{R_k}$ is {\em not} applied to $a_k$. Therefore, by the strong Markov property, cf. Proposition \ref{prop:coupling}, and by applying Proposition \ref{prop:N1} (notice that $a_k \geq R$, $k \geq 1$, see (\ref{eq:N-S-R})), we see that for all $\mu_2>0$, there is $\mu_4 \in (0,\mu_3)$ such that:
\begin{equation}
\label{eq:integral-1}
\begin{split}
h_{k+1} \leq & \mathbb E\left[\hat E_{0,\omega}\Big[\exp\big\{\mu_4(l\cdot X_{R_{k}})^\gamma\big\}, \, R_{k}<\infty,\,\hat E_{X_{R_{k}},\omega}\big[\exp\big\{\mu_4(l\cdot (X_{N_1(a_k)}-X_0))^\gamma\big\}\big]\Big]\right]\\
\leq & \mathbb E\left[\hat E_{0,\omega}\Big[\exp\big\{\mu_4(l\cdot X_{R_{k}})^\gamma\big\}, \, R_{k}<\infty,\,(1+\mu_2)\,e^{\mu_4a_k^\gamma}\Big]\right]\,.
\end{split}
\end{equation}
Observe that with $M$ from (\ref{eq:M}) and $Z_1$ as in Lemma \ref{lemma:bernstein} of the Appendix, the following inequalities hold, when $R_k$ is finite:
\begin{gather*}
a_k \leq Z_1\circ\theta_{J}\circ\theta_{S_k}+M\circ\theta_{S_{k}}+2R\,,\\
l\cdot X_{R_{k}} = l\cdot X_{S_k}+\underbrace{\big(l\cdot (X_D-X_0)\big)}_{\leq Z_1\circ\theta_{J}}\circ~\theta_{S_k}\,.
\end{gather*}
Insert them into the last term of (\ref{eq:integral-1}), apply the strong Markov property at time $S_k$, cf.
Proposition \ref{prop:coupling} (we use the same argument as above, that for $m\geq 1$, $\exp\{\mu_4(l\cdot X_{S_k})^\gamma\}\cdot 1_{\{S_k=m\}}$ is $\mathcal F_m\otimes\mathcal S_{m-1}$-measurable, and $\lambda_m$ is independent of $\mathcal F_m\otimes\mathcal S_{m-1}$), then use the strong Markov property for the process $(X_t)_{t\geq 0}$ at time $J$ on the event that it is finite, and obtain (observe that $M$ is $\mathcal F_{J}$-measurable)
\begin{align*}
& h_{k+1} \\
\leq& e^{\mu_4(2R)^\gamma}\, \mathbb E\left[\hat E_{0,\omega}\Big[e^{\mu_4(l\cdot X_{S_k})^\gamma}, S_k<\infty, (1+ \mu_2)\, \hat E_{X_{S_k},\omega}\big[\exp\big\{\mu_4(2Z_1^\gamma \circ\theta_J+M^\gamma)\big\}, J<\infty\big]\Big]\right]\\
\leq& e^{\mu_4(2R)^\gamma}\,\mathbb E\left[\hat E_{0,\omega}\Big[e^{\mu_4(l\cdot X_{S_k})^\gamma}, S_k<\infty, (1+ \mu_2)\, E_{X_{S_k},\omega}\Big[e^{\mu_4M^\gamma}\, E_{X_J,\omega}\big[e^{2\mu_4Z_1^\gamma}\big], J<\infty\Big]\Big]\right]\,.
\end{align*}
From Lemma \ref{lemma:bernstein} of the Appendix, we know that, for $\mu_4 \in(0,\delta)$, $\sup_{x,\omega} E_{x,\omega}[e^{2\mu_4Z_1^\gamma}]\leq 1+ \mu_2$. Further we use that, $\hat P_0$-a.s., $(X_{S_{k}}-X_{N_{k}})\cdot l \leq 10R$, $k \geq 1$, and that, after an application of the strong Markov property to the stopping time $N_k$, conditionally on $\mathcal Z_{N_k}$, $X_1$ is uniformly distributed on $B^{X_{N_k}}$ under $\hat P_{X_{N_k},\omega}^1$, see Proposition \ref{prop:coupling}. Let $\mu_5 \df \exp\{\mu_4((2R)^\gamma+(10R)^\gamma)\} (1+ \mu_2)^2$; then we obtain that the last expression is smaller than
\begin{align*}
&\mu_5\, \mathbb E\left[\hat E_{0,\omega}\Big[e^{\mu_4(l\cdot X_{N_k})^\gamma}, N_k<\infty, E_{X_{S_k},\omega}\Big[e^{\mu_4M^\gamma}\,,J<\infty\Big]\Big]\right]\\
=& \mu_5 \, \mathbb E\left[\hat E_{0,\omega}\Big[e^{\mu_4(l\cdot X_{N_k})^\gamma}, N_k<\infty, \hat E_{X_{N_k},\omega}^1 \Big[E_{X_{1},\omega}\Big[e^{\mu_4M^\gamma}\,, J<\infty \Big]\Big]\Big]\right]\\
=&\mu_5\frac{1}{|B_R|}\,\int \textrm{d}y \, \mathbb E\left[\hat E_{0,\omega}\Big[e^{\mu_4(l\cdot X_{N_k})^\gamma}, N_k<\infty, y \in B^{X_{N_k}}\Big] E_{y,\omega}\Big[e^{\mu_4M^\gamma}\,,J<\infty \Big]\right]\,.
\end{align*}
Since $\hat{E}_{0,\omega}[\exp{\{\mu_4 (X_{N_{k}} \cdot l)^\gamma\}}, ~N_{k}<\infty,~y \in B^{X_{N_{k}}}]$ is $\mathcal{H}_{\{x \cdot l \leq y \cdot l-4R\}}$-measurable (see point (3) in the addendum \cite{shen-add} to Shen \cite{shen}) and ${E}_{y,\omega}[\exp{\{\mu_4~M^\gamma\}},~J < \infty]$ is $\mathcal{H}_{\{x \cdot l \geq y \cdot l-R\}}$-measurable, as a result of finite range dependence, see (\ref{eq:R-separation}), the above random variables are $\mathbb{P}$-independent. Hence, using the stationarity of the measure $\mathbb{P}$ and Proposition \ref{prop:maximum}, we obtain that
\begin{align*}
h_{k+1} \leq & \mu_5 \hat E_0\Big[e^{\mu_4(l\cdot X_{N_k})^\gamma}, N_k<\infty\Big] \cdot E_0\Big[e^{\mu_4M^\gamma}\,,J<\infty\Big]\\
\leq& \mu_5 \Big(1-\tfrac{P_{0}[J=\infty]}{2}\Big)\hat E_0\Big[e^{\mu_4(l\cdot X_{N_k})^\gamma}, N_k<\infty\Big] \leq (1-\alpha) \hat E_0\Big[e^{\mu_4(l\cdot X_{N_k})^\gamma}, N_k<\infty\Big],
\end{align*}
for some $\alpha > 0$, provided $\mu_2 >0$ and $\mu_4 \in (0,\mu_1 \wedge \mu_3 \wedge \delta)$ are small enough such that $\mu_5(1-\tfrac{P_{0}[J=\infty]}{2})=e^{\mu_4((2R)^\gamma+(10R)^\gamma)}(1+ \mu_2)^2(1-\tfrac{P_{0}[J=\infty]}{2}) \leq 1-\alpha$.
It follows by induction that \[ h_{k+1} \leq (1-\alpha)^{k}\hat E_0\Big[e^{\mu_4(l\cdot X_{N_1})^\gamma}, N_1<\infty\Big] \] so that, by (\ref{eq:h-k}), and by virtue of Proposition \ref{prop:N1}, \[ \hat{E_{0}}\Big[e^{\mu_4 (X_{\tau_{1}} \cdot l)^\gamma}\Big] \leq e^{\mu_4(10R)^\gamma} \hat E_0\Big[e^{\mu_4(l\cdot X_{N_1})^\gamma}, N_1<\infty\Big]~\sum_{k \geq 0} (1-\alpha)^{k} < \infty\,, \] which is our claim (\ref{eq:tau-1}). \end{proof} The assertion (\ref{assertion}) now readily follows. Choose $r>0$ such that $ \mathcal{ \bar D} \subset B_{r}(0)$, and let $\tilde{L} \df \frac{L^{\frac{1}{\gamma}}}{r}$. Hence $\tilde{L}\mathcal{ \bar D} \subset B_{r\tilde{L}}(0)$, and by definition of the random variable $\tau_1$ in (\ref{eq:regeneration-time}), \begin{equation} \label{eq:sup} \begin{aligned} &\hat{P}_{0}\left[\sup_{0 \leq t \leq \tau_{1}}{|X_{t}|^{\gamma}} \geq L\right] \leq \hat{P}_{0}[T_{\tilde{L}\mathcal{D}} < \tau_{1}] \leq P_0[T_{\tilde{L}\mathcal{D}} < T^{l}_{\tilde{L}}] + \hat{P}_{0}[T_{\tilde{L}\mathcal{D}} = T^{l}_{\tilde{L}},~ T_{\tilde{L}\mathcal{D}} < \tau_{1} ]\\ \leq& P_0[T_{\tilde{L}\mathcal{D}} < T^{l}_{\tilde{L}}] + \hat{P}_{0}[X_{\tau_{1}} \cdot l \geq \tilde{L}-3R]. \end{aligned} \end{equation} Applying (\ref{eq:exit-D}) to the first term on the right-hand side of (\ref{eq:sup}), and applying Chebychev's inequality and Proposition \ref{prop:Xtau1} to the second term on the right-hand side, we find \begin{align} \label{eq:assertion-1} \limsup_{L \to \infty} L^{-1} \log \hat P_0[\sup_{0 \leq t \leq \tau_{1}}{|X_{t}|^{\gamma}} \geq L]<0. \end{align} Thus, for some $\mu>0$ small enough, \begin{equation*} \hat{E}_{0}[\exp{\{\mu \sup_{0 \leq t \leq \tau_{1}}|X_{t}|^{\gamma} \}}] = 1+\mu \int_{0}^{\infty}\exp{\{\mu L\}}\hat{P}_{0}[\sup_{0 \leq t \leq \tau_{1}}{|X_{t}|^{\gamma}} \geq L]~dL < \infty\,, \end{equation*} and (\ref{assertion}) follows from (\ref{eq:assertion-1}). \subsection{The Proof of (\ref{eq:integrability-2}) $\Rightarrow$ (\ref{eq:geometric-Tgamma})} By Proposition \ref{prop:equiv}, we know that $\lim_{t \to \infty} X_{t} \cdot l = \infty$ $P_{0}$-a.s. implies $\tau_{1} < \infty$ $\hat{P}_{0}$-a.s., and hence Theorem \ref{thm:renewal} holds. To verify condition $(T)_{\gamma}|l$, we first show that the diffusion has an asymptotic direction $\hat{v}$ under $\hat{P}_{0}$, with $\hat{v} \cdot l >0$, see Proposition \ref{prop:asymptotic-direction}. The claim (\ref{eq:geometric-Tgamma}) is implied by Lemma \ref{lemma:exit}, which is immediate for $d=1$, and which follows from a control on the oscillations of the diffusion orthogonal to $\hat{v}$ under $\hat{P}_{0}$, see Proposition \ref{prop:orthogonal}, when $d \geq 2$. \begin{prop} \label{prop:asymptotic-direction} It holds that \begin{equation} \label{eq:asymptotic-direction} P_0-\text{a.s.},\, \,\,\frac{X_{t}}{|X_{t}|} \underset{t \to \infty}{\longrightarrow} \hat{v} \df \frac{\hat{E}_{0}[X_{\tau_{1}}|D=\infty]}{|\hat{E}_{0}[X_{\tau_{1}}|D=\infty]|} \qquad \textrm{and} \qquad \hat{v} \cdot l >0. \end{equation} \end{prop} \begin{proof} By definition of $\tau_{1}$, $X_{\tau_{1}} \cdot l >0$ $\hat{P}_{0}$-a.s., so $\hat{v}$ is well defined and $\hat{v} \cdot l >0.$ By assumption, $\hat{E}_{0}[X_{\tau_{1}}|D=\infty]<\infty$. The strong law of large numbers applied to the i.i.d. random variables $X_{\tau_{k+1}}-X_{\tau_{k}}$, $k \geq 1$, (cf. 
Theorem \ref{thm:renewal}) yields \begin{equation} \label{eq:law-LN} \frac{1}{k}X_{\tau_{k}} \underset{k \to \infty}{\longrightarrow} \hat{E}_{0}[X_{\tau_{1}}|D=\infty] \qquad \hat{P}_{0}-\textrm{a.s.} \end{equation} For $t>0$, define $k(t)$ via \begin{equation} \label{eq:number-of-regenerations} \tau_{k(t)} \leq t < \tau_{k(t)+1}, \end{equation} i.e. $k(t)$ is the number of regenerations up to time t. Clearly $\hat P_0$-a.s. $k(t)\underset{t \to \infty}{\longrightarrow} \infty$. Write, for $k(t) \geq 1$, \begin{equation} \label{eq:remainder-term} \frac{X_{t}}{k(t)}=\frac{X_{\tau_{k(t)}}}{k(t)}+\frac{1}{k(t)} (X_{t}-X_{\tau_{k(t)}}). \end{equation} The modulus of the second term on the right-hand side can be bounded by \begin{equation} \sup_{s \geq 0}\frac{1}{k(t)}\, \big| X_{(\tau_{k(t)}+s) \wedge \tau_{k(t)+1}}-X_{\tau_{k(t)}}\big|. \end{equation} Since $\lambda_{\tau_{k}-1}=1$, $k \ge 1$, it follows from Proposition 2.1 that, $\hat P_0$-a.s., $X_u \in U^{X_{\tau_{k}-1}}$ for all $u \in [\tau_k-1,\tau_k]$, and we thus find that $\hat P_0$-a.s., \begin{equation} \label{path decomp} \tfrac{1}{k}\, \big| X_{(\tau_{k}+s) \wedge \tau_{k+1}}-X_{\tau_{k}}\big| \leq \tfrac{1}{k}\, \big|X_{(\tau_{k}+s) \wedge (\tau_{k+1}-1)}-X_{\tau_{k}}\big| + \tfrac{12R}{k}\,. \end{equation} For $k \geq 0$, let $Y_k \df \sup_{s \geq 0} |X_{(\tau_{k}+s) \wedge (\tau_{k+1}-1)}-X_{\tau_{k}}|$. From Theorem \ref{thm:renewal}, we know that the random variables $Y_{k},\, k \geq 1$, are i.i.d. random variables under $\hat{P}_{0}$ and are distributed under $\hat P_0$ as $Y_{0}$ under $\hat P_0[\cdot|D=\infty]$. Hence, applying Chebychev's inequality and Theorem \ref{thm:renewal}, we find by virtue of (\ref{eq:integrability-2}) that, for $\epsilon >0$, there is $\mu >0$ and $\alpha < \infty$ such that for $k \geq 1$, \begin{multline*} \hat{P}_{0}\Big[\tfrac{|Y_{k}|}{k}>\epsilon \Big] \leq \exp{\{-\mu (k\epsilon)^{\gamma}\}}\hat{E}_{0} [\exp{\{\mu|Y_{k}|^{\gamma}\}}]\\ =\exp{\{-\mu (k\epsilon)^{\gamma}\}}\hat{E}_{0} [\exp{\{\mu \sup_{s \geq 0}|X_{s \wedge (\tau_{1}-1)}|^{\gamma}\}}\, \big| D=\infty] \leq \alpha \, \exp{\{-\mu (k\epsilon)^{\gamma}\}}. \end{multline*} Applying Borel-Cantelli's lemma, we see that, $\hat{P}_{0}$-a.s., $\frac{1}{k}|Y_{k}|\underset{k \to \infty}{\longrightarrow} 0$, and hence, $\hat{P}_{0}$-a.s., $\frac{1}{k(t)}|Y_{k(t)}| \underset{t \to \infty}{\longrightarrow} 0$. The claim (\ref{eq:asymptotic-direction}) now follows from (\ref{eq:law-LN}), (\ref{eq:remainder-term})and from (\ref{path decomp}). \end{proof} Denote by $\Pi(\, \cdot \,)$ the orthogonal projection on the orthogonal complement of $\hat{v}$: \begin{equation} \label{eq:orth-proj} \Pi(w) \df w-(w \cdot \hat{v})\hat{v}\,, \end{equation} and let $L_{u}^{l} \df \sup \{t \geq 0: X_{t} \cdot l \leq u\}$ be the time of last visit of the half space $\{x \cdot l \leq u\}$ by $X_\cdot$. The next Proposition gives a control on the oscillations of the process orthogonal to $\hat{v}$, when $d \geq 2$. \begin{prop} \label{prop:orthogonal} ($d \geq 2$)\\ Assume (\ref{eq:integrability-2}). For $\rho \in (\frac{1}{2},1]$ and $\alpha>0$, \begin{equation} \label{orthogonal} \limsup_{u \to \infty} u^{-(2\rho-1)\wedge \gamma \rho} \log P_{0} \Big[\sup_{0 \leq t \leq L_{u}^l}|\Pi(X_{t})|>\alpha\, u^{\rho}\Big]<0. \end{equation} \end{prop} \begin{proof} Without loss of generality, we can replace $|\Pi(X_{t})|$ by $X_{t} \cdot w$, where $w \in \R{d}$ is such that $w \cdot \hat{v}=0$. Recall the definition of $k(t)$ in (\ref{eq:number-of-regenerations}). 
Notice that $\hat{P}_{0}$-a.s., for $k \geq 1$, $(X_{\tau_{k}}-X_{\tau_{k-1}})\cdot l \geq 21R/2$. Indeed, by Theorem \ref{thm:renewal}, it suffices to prove the statement for $k=1$. Recall that $\tau_0=0$, and observe that $(X_{V_k (3R)}-X_0)\cdot l \ge 3R$, all $k \ge 0$, and hence we find $(X_{\tilde N_1 (3R)}-X_0)\cdot l \ge 5R/2$. Consequently, since $X_{\tilde N_k (3R)} \cdot l \ge X_{\tilde N_1 (3R)}\cdot l$, we obtain $(X_{N_1 (3R)}-X_0)\cdot l \ge 5R/2$, and since $\lambda_{N_1 (3R)}=1$, we find from Proposition \ref{prop:coupling}, as well as from the definition of $\tau_1$ and the stopping times $S_k$, $k \ge 1$, that $(X_{\tau_1}-X_0)\cdot l \ge (X_{S_1}-X_0)\cdot l \ge (X_{N_1(3R)}-X_0)\cdot l +8R \ge 21R/2$. Since, for $0 \leq t \leq L^l_u$, $X_{\tau_{k(t)}} \cdot l < u+R$, it follows that $k(t) \leq \frac{u+R}{21R/2} \leq \frac{u}{R}$, for $u$ large enough. Let $X^\ast \df \sup_{t \leq \tau_1}|X_t-X_0|$. For $t \geq 0$ it holds $\hat P_0$-a.s. that
\begin{equation*}
X_t \cdot w=X_{\tau_{k(t)}}\cdot w+(X_t-X_{\tau_{k(t)}})\cdot w \leq X_{\tau_{k(t)}}\cdot w+X^\ast \circ \theta_{\tau_{k(t)}}.
\end{equation*}
It follows that
\begin{equation}
\label{eq:X*}
\begin{aligned}
& \hat P_0[\sup_{0 \leq t \leq L_{u}^{l}}X_t \cdot w > \alpha u^\rho] \leq \sum_{0 \leq k \leq \frac{u}{R}} \hat P_0[X_{\tau_k}\cdot w+X^\ast \circ \theta_{\tau_k}> \alpha u^\rho]\\
\leq & \sum_{0 \leq k \leq \frac{u}{R}} \hat P_0[X^\ast \circ \theta_{\tau_k}> \tfrac{\alpha}{3} u^\rho]+\sum_{1 \leq k \leq \frac{u}{R}}(\hat P_0[X_{\tau_1} \cdot w >\tfrac{\alpha}{3} u^\rho]+\hat P_0[(X_{\tau_k} - X_{\tau_1})\cdot w >\tfrac{\alpha}{3} u^\rho]).
\end{aligned}
\end{equation}
Applying first Chebychev's inequality, then Theorem \ref{thm:renewal} to the first term of the last line of (\ref{eq:X*}) (we use the same decomposition of the path as in (\ref{path decomp})), and with (\ref{eq:integrability-2}) applied to both the first and the second term, we find that there is $\lambda > 0$, such that for large $u$, (\ref{eq:X*}) is smaller than
\begin{align}
\label{eq:sum-1}
\exp\{-\lambda (\tfrac{\alpha}{3}u^{\rho})^\gamma\} + \sum_{1 \leq k \leq \frac{u}{R}} \hat P_0[(X_{\tau_k} - X_{\tau_1})\cdot w >\tfrac{\alpha}{3} u^\rho].
\end{align}
If $\gamma \in (0,1)$, the claim (\ref{orthogonal}) follows from Theorem \ref{thm:renewal} and from Theorem A.1 in the Appendix of Sznitman \cite{szn02}. If $\gamma=1$, then, as above, we first apply Chebychev's inequality and then Theorem \ref{thm:renewal} to (\ref{eq:sum-1}) and obtain that it is smaller than
\begin{equation*}
\exp\{-\lambda \tfrac{\alpha}{3}u^\rho\} (1+ \sum_{1 \leq k \leq \frac{u}{R}} \hat E_0 [\exp\{\lambda X_{\tau_1}\cdot w\}|D=\infty]^{k-1}) \leq \exp\{-\lambda \tfrac{\alpha}{3}u^\rho\}(1+\tfrac{u}{R} \exp\{\tfrac{u}{R}H(\lambda)\}),
\end{equation*}
provided we define, for $|\lambda|$ small,
\begin{equation*}
H(\lambda) \df \log \,\hat E_0 [\exp\{\lambda X_{\tau_1}\cdot w\}|D=\infty].
\end{equation*}
$H(\cdot)$ is a convex function, and, since $\hat E_0[X_{\tau_1}\cdot w\,|\,D=\infty]=0$, we see that $H(0)=0$, $H'(0)=0$, $H(\lambda) \geq 0$ for $\lambda \geq 0$, and $H(\lambda)=O(\lambda^2)$, as $\lambda \to 0$. If $\rho =1$, choose $\lambda >0$ small enough such that $H(\lambda)<\lambda \tfrac{\alpha}{3} R$, and (\ref{orthogonal}) holds. In the case $\rho \in (\tfrac{1}{2},1)$, we instead choose, for a sufficiently small $\nu >0$, $\lambda=\nu u^{\rho -1}$, and conclude in a similar fashion.
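Indeed, since $H(\lambda)=O(\lambda^{2})$ as $\lambda\to 0$, the choice $\lambda=\nu u^{\rho-1}$ turns the last bound into
\begin{equation*}
\exp\{-\tfrac{\nu\alpha}{3}\,u^{2\rho-1}\}\Big(1+\tfrac{u}{R}\,\exp\big\{c\,\nu^{2}\,u^{2\rho-1}\big\}\Big)\,,\qquad u \text{ large},
\end{equation*}
which is at most $\exp\{-c'\,u^{2\rho-1}\}$, for some constant $c'>0$, once $\nu>0$ is chosen small enough, in agreement with the exponent $(2\rho-1)\wedge\gamma\rho$ appearing in (\ref{orthogonal}).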
\end{proof}
Let $\hat{R}(\cdot)$ be a rotation of $\R{d}$ such that $\hat{R}(e_{1})=\hat v$. For $\epsilon > 0$, consider the cylinder in $\R{d}$:
\begin{equation}
\label{rotation}
C^{\epsilon, u} \df \hat{R}\left(\left(-\epsilon u,\frac{u}{\epsilon}\right) \times B^{d-1}_{\frac{\epsilon u}{2}}(0)\right),
\end{equation}
where, for $r>0$, $B^{d-1}_r(0)$ stands for the $(d-1)$-dimensional Euclidean ball with radius $r$ and center 0. ($C^{\epsilon, u}$ is understood as $\hat{R}\left((-\epsilon u,\frac{u}{\epsilon})\right)$ when $d=1$).\\
The next step is
\begin{lemma}
\label{lemma:exit}
Assume (\ref{eq:integrability-2}). For $\epsilon > 0$,
\begin{equation}
\label{eq:exit-1}
\limsup_{u \rightarrow \infty} u^{-\gamma} \log P_{0}[T_{C^{\epsilon, u}} < T^{\hat{v}}_{\frac{u}{\epsilon}}]<0.
\end{equation}
\end{lemma}
\begin{proof}
Let us first handle the case $d=1$. From Chebychev's inequality and (\ref{eq:integrability-2}), for large $u$, we find $\alpha >0$ such that
\begin{equation*}
P_0[\widetilde T^{\hat{v}}_{-u \epsilon}<T^{\hat{v}}_{\frac{u}{\epsilon}}] \leq P_0[\widetilde T^{\hat{v}}_{-u \epsilon} < \infty] \leq \hat P_0 [\sup_{0 \leq t \leq \tau_1}|X_t| \geq \epsilon u] \leq \exp\{-\alpha u^\gamma\}\,,
\end{equation*}
and (\ref{eq:exit-1}) follows. When $d \geq 2$, write
\begin{multline}
\label{eq:sum-2}
P_0[T_{C^{\epsilon, u}} <T^{\hat{v}}_{\frac{u}{\epsilon}}] \leq P_0[\widetilde T^{\hat{v}}_{-u \epsilon}<T^{\hat{v}}_{\frac{u}{\epsilon}},\, \sup \{|\Pi(X_t)|:t \leq \widetilde T^{\hat{v}}_{-u \epsilon}\} \leq \tfrac{\epsilon}{2}\,l \cdot \hat v \,u] +\\
P_0[\widetilde T^{\hat{v}}_{-u \epsilon}<T^{\hat{v}}_{\frac{u}{\epsilon}},\, \tfrac{\epsilon}{2}\,l \cdot \hat v \,u < \sup \{|\Pi(X_t)|:t \leq \widetilde T^{\hat{v}}_{-u \epsilon}\} \leq \tfrac{\epsilon}{2}u]+ P_0[T_{C^{\epsilon, u}} <\widetilde T^{\hat{v}}_{-u \epsilon} \wedge T^{\hat{v}}_{\frac{u}{\epsilon}}].
\end{multline}
Let us first estimate the probability of the leftmost event on the right-hand side of (\ref{eq:sum-2}). Observe that on this event,
\begin{equation*}
X_{\widetilde T^{\hat{v}}_{- \epsilon u}}\cdot l= X_{\widetilde T^{\hat{v}}_{- \epsilon u}}\cdot \hat v\,\hat v \cdot l +\Pi(X_{\widetilde T^{\hat{v}}_{- \epsilon u}})\cdot l \leq -\frac{\epsilon}{2}u \,\hat v \cdot l\,.
\end{equation*}
Hence, with the help of (\ref{eq:integrability-2}), for large $u$, we find $\alpha >0$ such that the probability of this event is smaller than
\begin{equation*}
P_0 [\widetilde T^{l}_{-\tfrac{\epsilon}{2}l \cdot \hat v \,u}<\infty] \leq \hat P_0 [\tau_1 > \widetilde T^{l}_{-\tfrac{\epsilon}{2}l \cdot \hat v \,u}] \leq \hat P_0 [\sup_{0 \leq t \leq \tau_1}|X_t| \geq \tfrac{\epsilon}{2}l \cdot \hat v \,u] \leq \exp\{-\alpha u^\gamma\}.
\end{equation*}
To bound the rightmost term of (\ref{eq:sum-2}), notice that $\{T_{C^{\epsilon, u}}<\widetilde T^{\hat{v}}_{-u \epsilon} \wedge T^{\hat{v}}_{\frac{u}{\epsilon}}\} \subseteq \{\sup_{0 \leq t \leq L^l _{(\epsilon/2 +1/\epsilon)u}}|\Pi(X_t)|\geq \tfrac{\epsilon u}{2}\}$, and then apply Proposition \ref{prop:orthogonal} with $\rho =1$. The bound for the middle term of (\ref{eq:sum-2}) equally follows from a direct application of Proposition \ref{prop:orthogonal} with $\rho =1$.
\end{proof}
Now (\ref{eq:geometric-Tgamma}) easily follows. Indeed, choose $\epsilon > 0$ such that $\epsilon < 2b \wedge \frac{\hat{v}\cdot l}{2}$.
The last estimate also holds for unit vectors $l'$ in a neighborhood of $l$, and, with the notation $\partial_+ C^{\epsilon, L}=\{x \in \partial \,C^{\epsilon, L}: x \cdot \hat v=L/\epsilon\}$ for the ``top part'' of the boundary of the cylinder and similarly $\partial_- C^{\epsilon, L}=\{x \in \partial \,C^{\epsilon, L}: x \cdot \hat v=-\epsilon L\}$ for the ``bottom part'' of the boundary, it follows that $\partial_+ C^{\epsilon, L}$ is contained in the complement of $U_{l',b,L}$, whereas $\partial_- C^{\epsilon, L}$ lies inside $U_{l',b,L}$. As a result, we find that for unit vectors $l'$ as above, \begin{equation} \label{cylinder} \limsup_{L \rightarrow \infty} L^{-\gamma} \log P_{0}\left[X_{T_{U_{l',b,L}}} \cdot l' < 0\right] \leq \limsup_{L \rightarrow \infty} L^{-\gamma} \log P_{0}\left[ T_{C^{\epsilon, L}} < T^{\hat{v}}_{\frac{L}{\epsilon}}\right] <0\,, \end{equation} which is our claim (\ref{eq:geometric-Tgamma}). \begin{rem} \label{direction T} In the same way as in (\ref{cylinder}), we see that, \begin{equation} \label{eq:T-half-plane} \text{if $(T)_{\gamma}|l_0$ holds for some $l_0 \in S^{d-1}$, then $(T)_{\gamma}|l$ holds iff $l \cdot \hat{v}>0$}\,. \end{equation} \end{rem} \section{Tail estimates on the first renewal time $\tau_{1}$} \label{sec:tail-estimate} The ballistic law of large numbers and the central limit theorem established in Shen \cite{shen} (see (\ref{eq:lln}) and (\ref{eq:clt})) respectively follow from $P_0$-a.s. $\lim_{t \to \infty} X_t \cdot l=\infty$, $\hat E_0[\tau_1] < \infty$ and from $P_0$-a.s. $\lim_{t \to \infty} X_t \cdot l=\infty$, $\hat E_0[\tau_1^2] < \infty$. In this section, we are going to derive tail estimates on $\tau_{1}$ under the assumption of condition $(T')$. These will ensure the finiteness of every moment of $\tau_{1}$ when $d \geq 2$, see (\ref{eq:tail-estimate}). The arguments in this section closely follow section 3 in Sznitman \cite{szn01}.\\ For a bounded domain $U$, and $f$ a bounded measurable function on $U$, introduce the semigroup corresponding to the diffusion killed when exiting $U$, see (\ref{eq:exit-time}) for notations, \begin{equation} \label{eq:semigroup} R^{U}_{t,\omega}f(x) \df E_{x,\omega}[f(X_{t}), T_{U}>t], \end{equation} and a threshold time related to the decay of the semigroup, \begin{equation} \label{treshhold time} t_{\omega}(U) \df \inf\left\{t \geq 0:\|R^{U}_{t,\omega}\|_{\infty,\infty}\leq \frac{1}{2}\right\} =\inf \left\{t \geq 0:\sup_{x \in U}P_{x,\omega}[T_U>t]\leq \frac{1}{2}\right\}. \end{equation} Consider further the successive returns of $X_{\cdot}$ to $B_{1}(x)$ and departures from $B_{2}(x)$, \begin{equation} \label{eq:excursion-1} R_{1}^{x} \df \inf \{s \geq 0: X_{s} \in B_{1}(x)\},~~ D_{1}^{x} \df \inf \{s \geq R_{1}^{x}: X_{s} \notin B_{2}(x)\}, \end{equation} and inductively, for $n \geq 0$, \begin{equation} \label{eq:excursion-2} R_{n+1}^{x} \df D_{n}^{x}+R_{1}^{x}\circ \theta_{D_{n}^{x}},~~ D_{n+1}^{x} \df R_{n+1}^{x}+D_{1}^{x}\circ \theta_{R_{n+1}^{x}}. \end{equation} \begin{lemma} \label{lemma:threshold-time} There is a constant $c$ such that for all bounded domains $U$ and $\omega \in \Omega$, one can find $x_{0}$ in $\frac{1}{\sqrt{d}}\mathbb{Z}^{d}$ within distance 1 of $U$ such that \begin{equation} \label{eq:threshold-time} \inf_{z \in \partial B_2 (x_0)}P_{z,\omega}[R_{1}^{x_{0}}>T_{U}] \leq \frac{c \, \text{diam}(U)^d}{t_{\omega}(U)}\,. 
\end{equation} \end{lemma} \begin{proof} Cover $U$ by unit balls centered in $\tfrac{1}{\sqrt{d}}\mathbb Z^d$, and let $(y_{i})_{i=1}^{N}$, $N \leq c\,\text{diam}(U)^d$, be an enumeration of the centers of these balls. Choose $\delta \leq t_\omega (U)/2$, then, by definition of $t_{\omega}(U)$, we can find an $x_{1}$ in $U$ such that $P_{x_{1},\omega}[T_{U}>t_{\omega}(U)-\delta]>\frac{1}{2}$. Hence $\frac{1}{4}t_{\omega}(U) \leq \frac{1}{2}(t_{\omega}(U)-\delta) \leq E_{x_{1},\omega}[T_{U}]$. Applying the strong Markov property to the stopping times $R_{j}^{y_i}$ and using the fact that $\sup_{\omega \in \Omega} \sup_{i,x \in \bar B_{1}(y_{i})} E_{x,\omega}\left[T_{B_{2}(y_{i})}\right]<\infty$, see for instance \cite{kar-shr} p.365, yields \begin{multline} \label{eq:return-1} \frac{1}{4}t_{\omega}(U) \leq E_{x_{1},\omega}[T_{U}] \leq \sum_{i=1}^{N} E_{x_{1},\omega}\left[\int_{0}^{T_{U}}\mathbf{1}_{B_{1}(y_{i})} (X_{s})\textrm{d}s\right]\\ \leq \sum_{i=1}^{N}\sum_{j=1}^{\infty}E_{x_{1},\omega}\left[R_{j}^{y_{i}}<T_{U}, E_{X_{R_{j}^{y_{i}}},\omega}\left[\int_{0}^{D_{1}}\mathbf{1}_{B_{1}(y_{i})} (X_{s})\textrm{d}s\right]\right] \leq \,c\,\sum_{i=1}^{N}\sum_{j=1}^{\infty}P_{x_{1},\omega}\left[R_{j}^{y_{i}}<T_{U}\right]. \end{multline} For $j \geq 2$, successive applications of the strong Markov property show that \begin{align*} P_{x_{1},\omega}[R_{j}^{y_{i}}<T_{U}]=& E_{x_1, \omega}[R_{j-1}^{y_{i}}<T_{U}, \,P_{X_{D^{y_i}_{j-1}},\omega}[R_1^{y_i}<T_U]]\\ \leq & \sup_{z \in \partial B_2(y_i)} P_{z,\omega} [R_1^{y_i}<T_U]\,P_{x_1,\omega} [R_{j-1}^{y_{i}}<T_{U}] \\ \leq &(\sup_{z \in \partial B_2(y_i)} P_{z,\omega} [R_1^{y_i}<T_U])^{j-1}P_{x_1,\omega}[R_{1}^{y_{i}}<T_{U}]\,. \end{align*} Using the last estimate, we see that the last expression in (\ref{eq:return-1}) is smaller than \begin{align*} c~\sum_{i=1}^{N} \frac{P_{x_{1},\omega} [R_{1}^{y_{i}}<T_{U}]} {\inf_{z \in \partial{B_{2}(y_{i})}}P_{z,\omega} [R_{1}^{y_{i}}>T_{U}]} \leq \frac{c\,\text{diam($U$)}^{d}}{\inf_{1 \leq i \leq N} \inf_{z \in \partial{B_{2}(y_{i})}}P_{z,\omega} [R_{1}^{y_{i}}>T_{U}]}\,. \end{align*} The claim (\ref{eq:threshold-time}) now follows. \end{proof} For $\beta \in (0,1]$ and $L>0$, we denote by $U_{\beta,L}$ the set \[ U_{\beta,L} \df \{x \in \R{d}: x \cdot l \in (-L^{\beta},L)\}. \] The next Proposition shows that the control of the tail of the variable $\tau_{1}$ can be obtained from the derivation of large-deviation-type estimates on the exit distribution of the diffusion out of $U_{\beta,L}$. \begin{prop} \label{prop:trap} Let $d \geq 2$, and assume that $(T')$ holds with respect to $l \in S^{d-1}$. If $\beta \in (0,1)$ is such that for any $\alpha>0$, \begin{equation} \label{eq:trap} \limsup_{L \rightarrow \infty} L^{-1}\log \mathbb{P}\left[P_{0,\omega}\left[ X_{T_{U_{\beta,L}}}\cdot l >0\right]\leq \exp\{-\alpha L^{\beta}\}\right]<0, \end{equation} then \begin{equation} \label{eq:tail-2} \limsup_{u \rightarrow \infty}\,(\log u)^{-\zeta}\log \hat P_{0}[\tau_{1}>u]<0 \end{equation} for any $\zeta<\frac{1}{\beta}$ (when $(T)$ holds, one can choose $\zeta=\frac{1}{\beta}$). \end{prop} \begin{proof} Let $R$ be a rotation of $\R{d}$ such that $R(e_{1})=l$. For $L>0$ write \begin{eqnarray*} C_{L}=R\left(\left(-L/2,L/2\right)^{d}\right) \quad \mathrm{and}\quad V_{x}=x+R\left((-1,3)\times (-1,1)^{d-1}\right). 
\end{eqnarray*} From the Support Theorem, see \cite{bass} p.25, we know that there is a constant $\kappa >0$ such that for all $x \in \R{d}$ and all $\omega \in \Omega$ \begin{equation} \label{kappa} \inf_{z \in B_{\frac{1}{2}}(x)}P_{z,\omega}[X_{1} \in B_{\frac{1}{2}}(x+2l), T_{V_{x}}>1]\geq \kappa >0. \end{equation} For $u>1$, denote $\Delta(u) \df \lfloor \frac{\log u}{6 \log(1/\kappa)}\rfloor \quad \mathrm{and} \quad L(u) \df \Delta(u)^{\frac{1}{\beta}}$. Let $\beta \in (0,1)$ and $\zeta < \frac{1}{\beta}$. Write \begin{equation} \label{eq:tail-tau-1} \begin{aligned} \hat{P}_{0}[\tau_{1}>u] \leq & \hat{P}_{0}[\tau_{1}>u,~T_{C_{L(u)}} \leq \tau_{1}] +P_{0}[T_{C_{L(u)}}>u]\\ \leq &\hat{P}_{0}[\sup_{0 \leq t \leq \tau_{1}}|X_t| \geq L(u)/2]+ P_{0}[T_{C_{L(u)}}>u]. \end{aligned} \end{equation} Using Chebychev's inequality and condition $(T)_\gamma|l$, $\gamma$ close to 1 such that $\frac{\gamma}{\beta}\geq \zeta$, we find that \begin{equation} \label{eq:first-term} \limsup_{u \rightarrow \infty}\, (\log u)^{-\zeta} \log \hat{P}_{0}[\sup_{0 \leq t \leq \tau_{1}}|X_t| \geq L(u)/2]<0. \end{equation} Hence, by means of (\ref{eq:tail-tau-1}), it suffices to show that \begin{equation} \label{eq:tail-1} \limsup_{u \rightarrow \infty}\left(\log u\right)^{-\frac{1}{\beta}} \log P_{0}[T_{C_{L(u)}}>u]<0. \end{equation} Recall the definition of $t_{\omega}(U)$ in (\ref{treshhold time}), and denote by $\mathcal{T}$ the event \begin{equation} \mathcal{T} \df \left\{\omega \in \Omega:t_{\omega}(C_{L(u)})>\frac{u}{(\log u)^{\frac{1}{\beta}}}\right\}. \end{equation} It follows from Lemma \ref{lemma:threshold-time} and the Markov property that for large $u$ \begin{align} \label{eq:tau} &P_{0}[T_{C_{L(u)}}>u] \leq \mathbb{E}\left[\mathcal{T}^{c}, P_{0,\omega}[T_{C_{L(u)}}>u]\right]+\mathbb{P}[\mathcal{T}]\nonumber \leq \\ & \left(\frac{1}{2}\right)^{\lfloor (\log u)^{\frac{1}{\beta}}\rfloor}+ \mathbb{P}\Big[\exists~ x_{2} \in C_{L(u)} \cap \tfrac{1}{\sqrt{d}}\mathbb{Z}^{d}; \inf_{z \in \partial B_2(x_2)}P_{z,\omega}[R_{1}^{x_{2}}>T_{C_{L(u)}}] \leq \frac{c L(u)^d (\log u)^{\frac{1}{\beta}}}{u}\Big]. \end{align} (Notice that, if $x_2$ would not belong to $C_{L(u)}$, then we would find from the Support Theorem, see \cite{bass} p.25, that for every $z \in \partial B_2(x_2)$, $P_{z,\omega}[R_{1}^{x_{2}}>T_{C_{L(u)}}]\geq c >0$, which contradicts the rightmost event in the last line for large $u$.) Choose $x=x_{2}+2 \Delta(u)l$. By the strong Markov property, we see that \begin{equation} \label{eq:escape-prob} \inf_{z \in \partial B_2(x_2)}P_{z,\omega}[R_{1}^{x_{2}}>T_{C_{L(u)}}] \geq \inf_{z \in \partial B_2(x_2)}P_{z,\omega}[R_{1}^{x_{2}}>R_{1}^{x}]~ \inf_{z \in \partial B_{1}(x)}P_{z,\omega}[R_{1}^{x_{2}}>T_{C_{L(u)}}]. \end{equation} Let $y \in \partial B_{2}(x_{2})$. One way to hit $B_{1}(x)$ before returning to $B_{1}(x_{2})$ when starting at $y$ is the following: we hit $B_{\frac{1}{2}}(x_2+2l)$ before hitting $B_{1}(x_{2})$ which happens with probability at least $\tilde \kappa$, where $\tilde \kappa$ is a positive constant, see the Support Theorem p.25 in \cite{bass}. Then we hit $B_{\frac{1}{2}}(x_{2}+4l)$ without exiting $V_{x_2+2l}$ which occurs with probability at least $\kappa$, see (\ref{kappa}). Then continue hitting $B_{\frac{1}{2}}(x_{2}+2(k+1)l)$ without exiting $V_{x_{2}+2kl}$, $1 \leq k \leq \Delta(u)-1$, until landing in $B_{1}(x)$. 
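Note that, by the definition of $\Delta(u)$,
\begin{equation*}
\kappa^{\Delta(u)} = \exp\big\{-\Delta(u)\log\tfrac{1}{\kappa}\big\} \geq \exp\big\{-\tfrac{1}{6}\log u\big\} = u^{-\frac{1}{6}}\,,
\end{equation*}
which accounts for the power of $u$ appearing in the next display.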
Hence \begin{equation} \label{kappa2} \inf_{z \in \partial B_2(x_2)}P_{z,\omega}[R_{1}^{x_{2}}>R_{1}^{x}] \geq \tilde{\kappa} \kappa^{\Delta(u)-1} \geq \tilde \kappa u^{-\frac{1}{6}}. \end{equation} Together with (\ref{eq:escape-prob}), this shows that for large $u$, on the event $\mathcal T$, see (\ref{eq:tau}), \begin{eqnarray} \label{eq:exit-before-return} \inf_{z \in \partial B_{1}(x)}P_{z,\omega}[R_{1}^{x_{2}}>T_{C_{L(u)}}]\leq \frac{1}{\tilde \kappa}u^{\frac{1}{6}} \inf_{z \in \partial B_2(x_2)}P_{z,\omega}[R_{1}^{x_{2}}>T_{C_{L(u)}}] \leq u^{-\frac{1}{2}}. \end{eqnarray} In particular, by a similar argument as given below (\ref{eq:tau}), we see that, for large $u$, $B_{3}(x) \subset C_{L(u)}$. By the same argument as in (\ref{eq:strong-Markov}), it follows that, for large $u$, $P_{\cdot,\omega}[R_{1}^{x_{2}}>T_{C_{L(u)}}]$ is $\mathcal L_\omega$-harmonic on $B_{3}(x)$, and (\ref{eq:Harnack}) shows that \begin{equation} \label{eq:harnack-1} P_{x,\omega}[R_{1}^{x_{2}}>T_{C_{L(u)}}] \leq c_H \inf_{z \in \partial B_{1}(x)} P_{z,\omega}[R_{1}^{x_{2}}>T_{C_{L(u)}}]\,. \end{equation} It follows from (\ref{eq:exit-before-return}) and (\ref{eq:harnack-1}) that for large $u$, \begin{eqnarray*} P_{x,\omega}[X_{T_{x+U_{\beta,L(u)}}}\cdot l>x \cdot l] \leq P_{x,\omega}[R_{1}^{x_{2}}>T_{C_{L(u)}}] \leq c_H u^{-\frac{1}{2}} \leq \exp\big(-cL(u)^{\beta}\big). \end{eqnarray*} Using translation invariance and (\ref{eq:tau}), we find \[ P_{0}[T_{C_{L(u)}}>u] \leq \left(\frac{1}{2}\right)^{\lfloor (\log u)^{\frac{1}{\beta}}\rfloor}+ c\, L(u)^{d}\,\mathbb{P}\Big[P_{0,\omega}[X_{T_{U_{\beta,L(u)}}}\cdot l>0]\leq \exp \big(-c L(u)^{\beta}\big)\Big], \] and (\ref{eq:tail-1}) follows from (\ref{eq:trap}). This proves (\ref{eq:tail-2}). \end{proof} We shall now derive upper bounds like (\ref{eq:trap}) under the assumption of condition $(T')$. By means of Proposition \ref{prop:trap}, we then obtain tail estimates on the first renewal time $\tau_1$. We first need some notation. For $\beta > 0$ and $L>0$, consider the lattice \begin{equation*} \mathcal{L}_{\beta,L}=L\mathbb{Z}\times((2d+1)L^{\beta}+2R)\mathbb{Z}^{d-1}, \end{equation*} and, for $w \in \R{d}$, we introduce the blocks \begin{equation} \label{blocks} \begin{aligned} &B_{1,\beta,L}(w)=\hat{R}(w+[0,L]\times[0,L^{\beta}]^{d-1}),\\ &B_{2,\beta,L}(w)=\hat{R}(w+(-dL^{\beta},L]\times(-dL^{\beta},(d+1)L^{\beta}) ^{d-1}), \end{aligned} \end{equation} where $\hat{R}$ is a rotation of $\R{d}$ such that $\hat{R}(e_1)=\hat v$, and $\hat v$ is the asymptotic direction of the annealed diffusion (that exists under $(T')$, see Proposition \ref{prop:asymptotic-direction}). We shall also consider the following subset of the boundary of $B_{2,\beta,L}(w)$, which is a subset of the 'top part' of the box, \[ \partial_{+}{B}_{2,\beta,L}(w)=\partial{B}_{2,\beta,L}(w) \cap \partial B_{1,\beta,L}(w), \quad w \in \R{d}, \] as well as the random variables \begin{equation} X_{\beta, L}(w)=-\log \inf_{x \in B_{1,\beta,L}(w)}P_{x,\omega}\left[ X_{T_{B_{2,\beta,L}(w)}} \in \partial_{+}{B}_{2,\beta,L}(w)\right]. \end{equation} To obtain an upper bound like (\ref{eq:trap}) under $(T')$, it is instrumental to produce a control on the tail of the random variable $X_{\beta,L}(w)$ for some $\beta \in (0,1)$ under $(T')$. Indeed, we devise an escape route for the diffusion through the ``right'' side of $U_{\beta,L}$ by piling up in the direction $\hat v$ a finite number of boxes of type $ B_{2,\beta,L}$. 
An atypical behavior of the exit distribution out of the slab $U_{\beta,L}$ under $P_{0,\omega}$ as in (\ref{eq:trap}) implies an atypical size for at least one of the $X_{\beta,L}(w)$ in one of the piled up boxes. Hence, to produce an upper bound like (\ref{eq:trap}), it suffices to show that, for large $L$, the probability that $X_{\beta,L}(w)$ is bigger than const $L^\beta$ decays exponentially with $L$ for some $\beta \in (0,1)$.\\ We prove in fact a stronger statement. Namely, we show that the above probability decays exponentially with $L^\zeta$, where $\zeta < f(\beta)=d(2 \beta -1)$, with $\beta$ restricted to the interval $(1/2,1)$, so that for suitable values of $\beta$ close to one, $\zeta$ can be chosen larger than one, since $d \ge 2$. By means of a renormalization-type argument, see Lemma \ref{lemma:renormalisation}, we reduce this task to showing a substantially weaker estimate. Indeed, it now suffices to prove for some $\beta_0$ slightly larger than $1/2$ that the probability that $X_{\beta_0,L}(w)$ is bigger than const $L^\beta$ decays exponentially with $L^{f_0(\beta)}$, where $f_0(\beta)=\beta+\beta_0 -1$, and $\beta \in (\beta_0,1)$. This ``seed-estimate'' is then provided in Lemma \ref{lemma:seed-estimate} under the assumption of condition $(T')$.\\ We begin with the renormalisation step. Surprisingly enough, we do not need to assume condition $(T')$, in which case the rotation $\hat{R}$ in (\ref{blocks}) is an arbitrary rotation of $\R{d}$. \begin{lemma}[Renormalisation step, $d \geq 2$] \label{lemma:renormalisation} Assume that $\beta_{0} \in (0,1)$ and $f_{0}$ is a positive function defined on $[\beta_{0},1)$, such that \begin{equation*} f_{0}(\beta)\geq f_{0}(\beta_{0})+\beta - \beta_{0}, \quad \beta \in [\beta_{0},1) \end{equation*} and, for $\beta \in [\beta_{0},1)$, $\zeta < f_{0}(\beta)$, \begin{equation} \lim_{\beta' \uparrow \beta}\limsup_{L \rightarrow \infty}L^{-\zeta} \sup_{w \in \R{d}} \log \mathbb{P}[X_{\beta_{0},L}(w)\geq L^{\beta'}]<0. \end{equation} Denote by $f(\cdot)$ the linear interpolation on $[\beta_{0},1]$ of the value $f_{0}(\beta_{0})$ at $\beta_{0}$ and the value $d$ at 1.Then, for $\beta \in [\beta_{0},1)$ and $\zeta < f(\beta)$, \begin{equation} \label{eq:renormalisation} \lim_{\beta' \uparrow \beta}\limsup_{L \rightarrow \infty}L^{-\zeta} \sup_{w \in \R{d}} \log \mathbb{P}[X_{\beta,L}(w)\geq L^{\beta'}]<0. \end{equation} \end{lemma} \begin{proof} We only give a sketch of the proof, since it is similar to the proof of Lemma 3.2 in \cite{szn01}. For $\chi \in (0,1)$ defined via $\beta \df \chi \beta_0 + 1-\chi$, we consider the set \begin{equation} \label{eq:Col} \text{Col} \df \{z \in \mathcal L_{\beta_0, L^\chi}, z \cdot e_1=0, z \cdot e_i \in [\frac{1}{4}L^\beta,\frac{3}{4}L^\beta],\, 2 \leq i \leq d\}\,. \end{equation} For $w \in \R{d}$, attach at every $w+z$, $z \in$ Col, a ``column of boxes'' $B_{1,\beta_0,L^\chi}(\cdot)$, made by piling up $\lfloor L^{1-\chi}\rfloor$ such boxes on top of each other. Each such column will provide a line of escape of the diffusion out of a box $B_{2,\beta,L}(w)$ through $\partial_+ B_{2,\beta,L}(w)$. Every $x \in B_{1,\beta,L}(w)$ is at most at distance $\sqrt{d}L^\beta$ from a box $B_{1,\beta_0,L^\chi}(\cdot)$ in one of the aforementioned columns. 
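Observe also that, since the spacing of the lattice $\mathcal L_{\beta_0,L^\chi}$ in the directions $e_2,\dots,e_d$ is $(2d+1)L^{\chi\beta_0}+2R$, while each coordinate $z\cdot e_i$, $2\le i\le d$, of a point of Col ranges over an interval of length $\tfrac{1}{2}L^\beta$, one has
\begin{equation*}
\#\text{Col}\;\sim\;c\,L^{(d-1)(\beta-\chi\beta_0)}\,,\qquad L\to\infty\,,
\end{equation*}
a count that is used at the end of the proof.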
From a similar argument as in (\ref{kappa2}), and from the strong Markov property, we see that for large $L$ and $c_1=\sqrt{d} \log\frac{1}{\kappa}$, with $\kappa$ from (\ref{kappa}), $J=\lfloor L^{1-\chi} \rfloor$, \begin{equation*} \{ X_{\beta,L}(w) \geq 3 c_1 L^\beta\} \subseteq \{\min_{z \in \text{Col}} \underbrace{\sum_{j=0}^{J} X_{\beta_0,L^\chi}(w+z +jL^\chi e_1)}_{\df Y(z)} \geq 2 c_1 L^\beta\}\,. \end{equation*} Using the independence of the variables $Y(z)$, $z \in$ Col, and Chebychev's inequality, we find that for $\lambda >0$, \begin{equation} \label{eq:indep-1} \mathbb P[X_{\beta,L}(w) \geq 3 c_1 L^\beta] \leq \underset{z \in \text{Col}}{\Pi} \left\{\exp\{-\lambda c_1 L^\beta\} \mathbb E [\exp\{\tfrac{\lambda}{2}Y(z)\}]\right\}\,. \end{equation} Observe that, for $z \in$ Col and large $L$, the variables $X_{\beta_0,L^\chi}(w+z +jL^\chi e_1)$ are independent when $j$ is restricted to the set of even or the set of odd integers. It thus follows from Cauchy-Schwarz's inequality that the right-hand side of (\ref{eq:indep-1}) is smaller than \begin{equation*} \underset{z \in \text{Col}}{\Pi}\left\{\exp\{-\lambda c_1 L^\beta\}\Pi_{j=0}^{J} \mathbb E[\exp\{\lambda X_{\beta_0,L^\chi}(w+z +jL^\chi e_1)\}]^{1/2} \right\}\,. \end{equation*} Since the random variables $X_{\beta_0,L^\chi}$ are non-negative, the quantity in the last line becomes larger when we omit the square roots, and an application of Fubini's Theorem yields that the last line can be bounded by \begin{equation} \label{eq:indep-2} \underset{z \in \text{Col}}{\Pi}\Big\{\exp\{-\lambda c_1 L^\beta\} \Big(\exp\{\tfrac{\lambda}{2} c_1 L^{\chi \beta_0}\}+ \int_{\tfrac{c_1}{2}L^{\chi \beta_0}}^{\infty}\lambda e^{\lambda u} \sup_{w' \in \R{d}}\mathbb P[X_{\beta_0,L^\chi}(w') \geq u]\textrm{d}u\Big)^{J+1}\Big\}\,. \end{equation} For $\lambda=L^{\alpha}$, $\alpha = \chi f_0(\beta_0)-\chi \beta_0-\varepsilon$ and $0<\varepsilon < \chi f_0(\beta_0)$, one can show that the integral in the rightmost term of (\ref{eq:indep-2}) tends to 0 as $L \to \infty$. Since $\lambda L^{\chi \beta_0}$ tends to $\infty$ with $L$, we find that, for large $L$, \begin{equation*} \sup_{w \in \R{d}} \mathbb P[X_{\beta,L}(w) \geq 3 c_1 L^\beta] \leq \exp\{-\tfrac{\lambda}{6}c_1 L^\beta \text{\#Col}\}\,. \end{equation*} Since \#Col$\sim c L^{(d-1)(\beta-\chi \beta_0)}$, as $L \to \infty$, we obtain that, for small $\varepsilon >0$, \begin{equation} \label{eq:escape} \limsup_{L \to \infty} L^{-(\chi f_0(\beta_0)+d(1-\chi)-\varepsilon)} \sup_{w \in \R{d}}\log \mathbb P[X_{\beta,L}(w)\geq 3 c_1 L^\beta]<0\,, \end{equation} which implies the claim. \end{proof} The next Lemma shows that, when $d \geq 2$, under condition $(T')$, the function $f_{0}(\beta)=\beta +\beta_{0}-1$, $\beta \in [\beta_{0},1)$, fulfills the assumption of Lemma \ref{lemma:renormalisation} when $\beta_{0} \in (\frac{1}{2},1)$. \begin{lemma}[Seed estimate, $d \geq 2$, under $(T')$] \label{lemma:seed-estimate} Assume that $\beta_{0} \in (\frac{1}{2},1)$. Then, for $\rho > 0$ and $\beta \in [\beta_{0},1)$, \begin{equation} \label{eq:seed-estimate} \limsup_{L \rightarrow \infty}L^{-(\beta +\beta_{0}-1)} \sup_{w \in \R{d}} \log \mathbb{P}[X_{\beta_{0},L}(w)\geq \rho L^{\beta}]<0. 
\end{equation} \end{lemma} \begin{proof} Choose $\eta \in (0,1)$ small and then introduce $\chi=\beta_{0}+1-\beta \in (\beta_{0},1]$, and, for large $L$ and $w \in \R{d}$ the boxes $\tilde{B}_{1}(w) \subset \tilde{B}_{2}(w)$, defined analogously as before, with $[0,L]\times[0,L^{\beta}]^{d-1}$ and $(-dL^{\beta},L]\times(-dL^{\beta},(d+1)L^{\beta}) ^{d-1})$ replaced by $[0,L_{0}]\times[0,L^{\beta_{0}}]^{d-1}$ and $(-dL^{\beta_0},L_{0}+3]\times(-\eta L^{\beta_{0}},(1+\eta )L^{\beta_{0}})^{d-1})$ respectively, with the notation \[ L_{0}=\frac{L-\eta L^{\beta_{0}}}{\lfloor L^{1-\chi}\rfloor}. \] Define also Top$\tilde{B}_{2}(w)=\partial{\tilde{B}}_{2}(w) \cap \{x: x \cdot \hat{v}=w \cdot \hat{v}+L_{0}+3\}$. Let $(B_1(z_i))_{i \in I}, z_i \in \tilde B_1 (w)$, $I$ a finite set growing polynomially with $L$, be a finite cover of $\tilde B_1 (w)$ by unit balls. For $L$ large, it holds that $B_3(z_i) \subset \tilde B_2(w), i \in I$, and by the same argument as in (\ref{eq:strong-Markov}), we see that $P_{\cdot,\omega}\big[ X_{T_{\tilde{B}_{2}(w)}} \in \mathrm{Top}~\tilde{B}_{2}(w)\big]$ is $\mathcal L_\omega$-harmonic on $B_3(z_i)$, so that (\ref{eq:Harnack}) implies that for all $i \in I$, \begin{align} \label{eq:harnack-2} P_{z_i,\omega}\left[ X_{T_{\tilde{B}_{2}(w)}}\in \mathrm{Top}~\tilde{B}_{2}(w) \right] \leq c_H \inf_{x \in B_{1}(z_i)}P_{x,\omega}\left[ X_{T_{\tilde{B}_{2}(w)}} \in \mathrm{Top}~\tilde{B}_{2}(w)\right]. \end{align} We say that $w$ is {\it good} when \[ \inf_{x \in \tilde{B}_{1}(w)}P_{x,\omega}\left[ X_{T_{\tilde{B}_{2}(w)}} \in \mathrm{Top}~\tilde{B}_{2}(w)\right]\geq \frac{1}{2\,c_H}, \] and {\it bad} otherwise. Hence, by (\ref{eq:harnack-2}), and using Chebychev's inequality and translation invariance, we obtain \begin{equation} \label{eq:bad} \begin{aligned} \mathbb{P}[w \text{ is bad}] \leq & \sum_{i \in I} \mathbb{P}\Big[\inf_{x \in B_{1}(z_i)}P_{x,\omega} \big[ X_{T_{\tilde{B}_{2}(w)}}\in \mathrm{Top}~\tilde{B}_{2}(w)\big] <\frac{1}{2\,c_H}\Big] \\ \leq &\sum_{i \in I}\mathbb{P}\left[P_{z_i,\omega}\left[ X_{T_{\tilde{B}_{2}(w)}} \in \mathrm{Top}~\tilde{B}_{2}(w)\right]<\frac{1}{2}\right] \\ \leq & 4|I|\Big(P_{0}[\sup_{0 \leq t \leq T^{\hat{v}}_{L_{0}+3}}|\Pi(X_{t})|\geq \eta L^{\beta_{0}}]+ P_{0}[\widetilde{T}^{\hat{v}}_{-dL^{\beta_{0}}}<\infty]\Big). \end{aligned} \end{equation} Notice that $L_0 \sim L^\chi$, so that, for large $L$, $T^{\hat{v}}_{L_{0}+3} \le T^{\hat{v}}_{2L^\chi+3} \le L^{\hat{v}}_{2L^\chi+3}$. Then, under condition $(T)_{\gamma}|l$, where $\gamma$ fulfills $\gamma \beta_{0} \geq 2\beta_{0}-\chi$, we find with the help of Proposition \ref{prop:orthogonal} applied (with $\rho=\beta_0/\chi \in (1/2,1)$ and $u=2L^\chi +3$) to the first term on the right-hand side of (\ref{eq:bad}) that \begin{equation} \label{eq:orthogonal} \limsup_{L \rightarrow \infty} L^{-(2\beta_{0}-\chi)}\log P_{0} [\sup_{0 \leq t \leq T^{\hat{v}}_{L_{0}+3}}|\Pi(X_{t})|\geq \eta L^{\beta_{0}}] <0, \end{equation} and, since $(T)_{\gamma}|\hat v$ holds, see (\ref{eq:T-half-plane}), we find with the help of Chebychev's inequality that there is $\mu >0$ such that \begin{equation} \label{eq:left} P_{0}[\tilde{T}^{\hat{v}}_{-dL^{\beta_{0}}}<\infty] \leq \hat{P}_{0} \Big[\sup_{0 \leq t \leq \tau_{1}}|X_{t}|\geq d L^{\beta_{0}}\Big] \leq \exp (-\mu L^{\gamma \beta_{0}})\,. 
\end{equation} According to our choice of $\gamma$, we obtain with (\ref{eq:bad}),(\ref{eq:orthogonal}) and (\ref{eq:left}) that \begin{equation} \label{eq:bad-0} \limsup_{L \rightarrow \infty} L^{-(2\beta_{0}-\chi)}\sup_{w \in \R{d}} \log \mathbb{P}[w \mathrm{~~is~~bad}]<0. \end{equation} When starting in $B_{1,\beta_0,L}(w) \cap \tilde B_1(w+j_0L_0 e_1)$, $0 \leq j_0 <\lfloor L^{1-\chi}\rfloor$, for large $L$, one way to exit $B_{2,\beta_0,L}(w)$ through $\partial_+ B_{2,\beta_0,L}(w)$ is to successively exit the boxes $\tilde B_2(w+jL_0e_1)$, $j_0 \leq j <\lfloor L^{1-\chi}\rfloor$, through Top $\tilde B_2(w+jL_0e_1)$, and move to the box $\tilde B_1(w+(j+1)L_0 e_1)$, which is at distance at most $\sqrt d \,\eta L^{\beta_0}$ from every point in Top $\tilde B_2(w+jL_0e_1)$, until landing in $\tilde B_1(w+\lfloor L^{1-\chi}\rfloor L_0 e_1) \cap B_{1,\beta_0,L}(w)$, and then exit $B_{2,\beta_0,L}(w)$ through $\partial_+ B_{2,\beta_0,L}(w)$, which is at distance at most $\eta L^{\beta_0}$ from every point in $\tilde B_1(w+\lfloor L^{1-\chi}\rfloor L_0 e_1) \cap B_{1,\beta_0,L}(w)$. When $w \in \R{d}$ and all $w+jL_0e_1$, $0 \leq j <\lfloor L^{1-\chi}\rfloor$, are good, then, for large $L$, it follows from the strong Markov property and from (\ref{kappa}) that for all $x \in B_{1,\beta_0,L}(w)$, \begin{equation} P_{x,\omega}[X_{T_{B_{2,\beta_0,L}(w)}} \in \partial_+ B_{2,\beta_0,L}(w)] \geq \Big( \frac{1}{2c_H}\kappa ^{\lceil \tfrac{1}{2}\sqrt d \, \eta L^{\beta_0}\rceil +1}\Big)^{L^{1-\chi}} \kappa^{\lceil \tfrac{\eta}{2}L^{\beta_0}\rceil +1} >\exp\{-\rho L^\beta\}\,, \end{equation} provided $\eta >0$ is chosen small enough such that $\tfrac{\eta}{2}(1+\sqrt d) \log\tfrac{1}{\kappa}<\tfrac{\rho}{2}$, where $\rho>0$ is as in (\ref{eq:seed-estimate}). Therefore, for large $L$, \begin{equation*} \sup_{w \in \R{d}} \mathbb P[X_{\beta_0, L}(w) \geq \rho L^\beta] \leq L^{1-\chi} \sup_{w \in \R{d}}\mathbb P[w \text{ is bad}]\,, \end{equation*} and the claim (\ref{eq:seed-estimate}) follows from (\ref{eq:bad-0}) together with the identity $2 \beta_0 - \chi= \beta_0 + \beta -1$. \end{proof} We can now state the main result. With the help of the Renormalisation Lemma \ref{lemma:renormalisation}, we propagate the seed estimate contained in Lemma \ref{lemma:seed-estimate} to the right scale, and by piling up a finite number of boxes of the type $B_{2,\beta,L}$ in the direction $\hat v$, we obtain an upper bound like (\ref{eq:trap}). Proposition \ref{prop:trap} then enables us to obtain tail estimates on $\tau_1$. \begin{thm}($d \geq 2$) \label{thm:tail-estimate} Assume that $(T')$ holds relative to $l$. Then, for $\beta \in (\frac{1}{2},1)$, \begin{equation} \label{eq:trap-1} \limsup_{L \rightarrow \infty} L^{-\zeta} \log \mathbb{P}\, [P_{0,\omega}[X_{T_{U_{\beta, L}}}\cdot l >0]\leq \exp \{-L^{\beta}\}]<0 \mathrm{~~for}~~ \zeta < d(2\beta -1), \end{equation} and \begin{equation} \label{eq:tail-estimate} \limsup_{u \rightarrow \infty}\,(\log u)^{-\alpha} \log \hat P_{0}[\tau_{1}>u]<0 \mathrm{~~for~~} \alpha<1+\frac{d-1}{d+1}\,. \end{equation} \end{thm} \begin{proof} Let $\beta$ and $\zeta$ be as in (\ref{eq:trap-1}), and choose $\beta_0 \in (\frac{1}{2},\beta)$ close to $\tfrac{1}{2}$, as well as $\beta' \in (\beta_0,\beta)$ such that, in the notation of Lemma \ref{lemma:renormalisation}, $f(\beta') > \zeta$. 
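Such a choice is indeed possible: with $f_0(\cdot)=\beta_0+\cdot-1$ from Lemma \ref{lemma:seed-estimate}, the interpolation $f$ of Lemma \ref{lemma:renormalisation} is given by
\begin{equation*}
f(\beta')=(2\beta_0-1)+\big(d-(2\beta_0-1)\big)\,\frac{\beta'-\beta_0}{1-\beta_0}\,,
\end{equation*}
which tends to $d(2\beta-1)>\zeta$ as $\beta_0\downarrow\tfrac{1}{2}$ and $\beta'\uparrow\beta$.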
By piling up $N$ boxes $B_{1,\beta',L}(jLe_1), B_{2,\beta',L}(jLe_1)$, $0 \leq j \leq N$, where $N$ is chosen as the smallest integer such that
\begin{equation*}
Nl\cdot \hat v >1\,,
\end{equation*}
we obtain from the strong Markov property that for large $L$,
\begin{center}
$P_{0,\omega}[X_{T_{U_{\beta, L}}}\cdot l >0] \geq \exp \left\{-\sum_{j=0}^{N}X_{\beta',L}(jLe_1)\right\}$ so that\\
$\mathbb{P}\,[P_{0,\omega}[X_{T_{U_{\beta, L}}}\cdot l >0]\leq \exp \{-L^{\beta}\}]\leq (N+1)\, \underset{w}{\sup}\, \mathbb{P}\left[X_{\beta',L}(w) \geq \frac{L^\beta}{N}\right]$\,.
\end{center}
(\ref{eq:trap-1}) now follows from (\ref{eq:renormalisation}) applied with $f_0(\cdot)=\beta_0+\cdot-1$, in view of Lemma \ref{lemma:seed-estimate}. For the proof of (\ref{eq:tail-estimate}), let $\alpha \in (1,2d/(d+1))$, and define $\beta = \alpha^{-1}$. Then, for any $\mu>0$,
\begin{equation*}
\limsup_{L \rightarrow \infty} L^{-1} \log \mathbb{P}\, [P_{0,\omega}[X_{T_{U_{\beta, L}}}\cdot l >0]\leq \exp \{-\mu L^{\beta}\}]<0\,,
\end{equation*}
as follows from (\ref{eq:trap-1}) applied to $\beta' \in (\tfrac{1}{2},\beta)$ such that $d(2 \beta' -1) >1$. The claim now follows from Proposition \ref{prop:trap}.
\end{proof}
\section{Examples of condition (T)}
\label{sec:examples}
We start with an easy example.
\begin{prop}
\label{prop:non-nestling}
($d \geq 1$) If for some $\delta >0$ and all $\omega \in \Omega$, all $x \in \R{d}$,
\begin{equation}
\label{eq:non-nestling-0}
b(x,\omega)\cdot l > \delta\,,
\end{equation}
then condition $(T)|l$ holds.
\end{prop}
\begin{proof}
Define for $u \in \R{}$, $s(u) \df \exp\{-\tfrac{\delta}{\nu}u\}$. It follows from (\ref{eq:b-sigma-bound}), (\ref{eq:non-nestling-0}) that $s(X_t \cdot l)$ is a supermartingale, and an application of Chebychev's inequality and of the stopping theorem yields that for all $\omega \in \Omega$
\begin{equation}
P_{0,\omega}[X_{T_{U_{l,b,L}}}\cdot l <0] \leq \frac{1}{s(-bL)}E_{0,\omega}[ s(X_{T_{U_{l,b,L}}}\cdot l)] \leq \exp\{-\tfrac{\delta b}{\nu}L\}\,.
\end{equation}
The set of unit vectors that satisfy (\ref{eq:non-nestling-0}) is open, and hence condition $(T)|l$ holds.
\end{proof}
Consequently, when $d \geq 2$, we recover and extend the main result of Komorowski and Krupa \cite{kom-krupa}, which provides a law of large numbers when $\sigma = Id$. Proposition \ref{prop:non-nestling} holds for a general diffusion matrix $\sigma$ that satisfies (\ref{eq:b-sigma-bound})-(\ref{eq:R-separation}), and we have in addition a central limit theorem, see (\ref{eq:lln}) and (\ref{eq:clt}).\\
We will now turn to a more involved situation. In the remainder of this section we assume that, cf. (\ref{eq:b-sigma-bound}), (\ref{eq:SDE}), (\ref{eq:diff-operator}),
\begin{equation}
\label{eq:sigma}
\sigma (\cdot) = Id \,.
\end{equation}
The next Theorem provides a rich class of examples of diffusions in random environment which fulfill condition $(T)$, and hence, when $d \geq 2$, a ballistic law of large numbers, and a central limit theorem with non-degenerate covariance matrix governing corrections to the law of large numbers, see (\ref{eq:lln}) and (\ref{eq:clt}).
\begin{thm}
\label{thm:examples}
($d \geq 1$) Assume (\ref{eq:stationarity})-(\ref{eq:R-separation}) and (\ref{eq:sigma}). There is a constant $c_e>0$, such that for $l \in S^{d-1}$,
\begin{equation}
\label{eq:criterion}
\mathbb{E}[(b(0,\omega) \cdot l)_{+}]>c_e~ \mathbb{E}[(b(0,\omega) \cdot l)_{-}]
\end{equation}
implies $(T)|l$, cf. (\ref{eq:T}).
\end{thm} Theorem \ref{thm:examples} is the main result of this section. Its analogue in the discrete i.i.d. setting can be found in \cite{bolt-szn} p.40. In contrast to Proposition \ref{prop:non-nestling}, it comprises situations where $b(0,\omega)\cdot l$ changes sign for every unit vector $l$, see also remark \ref{rem:examples} at the end of this section.\\ The proof of Theorem \ref{thm:examples} is inspired by the strategy used in the discrete i.i.d. setting, see \cite{bolt-szn} p.40. Following Kalikow's idea, for each bounded domain $U$, we introduce an auxiliary diffusion with characteristics independent of the environment, see (\ref{eq:b'}) and (\ref{eq:measure}). When starting at 0, this diffusion and the annealed diffusion have the same exit distribution from $U$, see Proposition \ref{prop:exit}. This restores some Markovian character to the question of controlling exit distributions of $X_\cdot$ under the annealed measure, and enables us to show that condition $(T)$ is implied by a certain condition $(K)$, see (\ref{eq:K}), which has a similar flavor to Kalikow's condition in the discrete i.i.d. setting, see \cite{szn-zer}. The proof of Theorem \ref{thm:examples} is then carried out by checking condition $(K)$.\\ Let us now define the auxiliary diffusion process mentioned above. Let $U$ be a bounded domain containing 0, and, for $x,y \in U$, $s>0$, denote with $p_{\omega, U}(s,x,y)$ the subtransition density for the quenched diffusion started in $x$ and killed when exiting $U$ ($p_{\omega, U}(s,x,y)$ can for instance be defined by means of Duhamel's formula, see equation (\ref{eq:inf-green}) in the appendix or \cite{stroock} page 331). We define the corresponding Green function through \begin{equation} \label{eq:green-function} g_U(x,y,\omega) \df \int_{0}^{\infty} p_{\omega, U}(s,x,y)ds\,. \end{equation} We now define the auxiliary drift term \begin{equation} \label{eq:b'} b'_U(x) \df \begin{cases} \frac{\mathbb{E}[g_{U}(0,x,\omega)b(x,\omega)]}{\mathbb{E}[g_{U}(0,x,\omega)]}\,, &\text{if $x \in U \smallsetminus \{0\}$}\,, \\ 0\,, & \text{if $x=0$ or $x \in U^c$\,.} \end{cases} \end{equation} The next lemma will be useful in the sequel. \begin{lemma} \label{lemma:b'} It holds that $|b'_U(x)|\leq \bar b$ (see (\ref{eq:b-sigma-bound}) for the notation), and $g_U(0,\cdot,\omega)$ and $b'_U(\cdot)$ are continuous in $U \smallsetminus \{0\}$. \end{lemma} \begin{proof} From (\ref{eq:b-sigma-bound}) we see that $|b'_U(x)|\leq \bar b$. Theorem 9, p.671 in \cite{aronson} and the subsequent remark state that the subtransition density $p_{\omega, U}(s,0,\cdot)$ is continuous in $U$. From (\ref{eq:PDE-upper}) in Proposition \ref{prop:PDE} and from similar computations as carried out between (\ref{eq:sup-green}) and (\ref{eq:sup-green-1}), and applying dominated convergence, we see that $g_{U}(0,\cdot,\omega)$ is continuous in $U \smallsetminus \{0\}$. Consequently, by continuity of $b(\cdot,\omega)$, see (\ref{eq:Lipschitz}), and an application of (\ref{eq:green-upper}) and dominated convergence, we see that $b'_U$ is continuous in $x \in U \smallsetminus \{0\}$. \end{proof} For $f \in C^{2}(\R{d})$, define \begin{equation} \label{eq:operator} \mathcal{L}'f(x) \df \frac{1}{2} \Delta f(x) + b'_U(x) \nabla f(x)\,, \end{equation} and denote with, cf. \cite{bass} p.146, \begin{equation} \label{eq:measure} \text{$P'_{x,U}$ the unique solution to the martingale problem for $\mathcal L'$ started at $x \in \R{d}$}.
\end{equation} We write $E'_{x,U}$ for the corresponding expectation, and we denote with $p'_U(s,x,y)$, $x,y \in U$, $s>0$, the corresponding subtransition density (which can be defined by means of Girsanov's theorem, see equation (4.1) in \cite{lyons}). Theorem 4.1 in \cite{lyons} states that estimate (\ref{eq:PDE-upper}) in Proposition \ref{prop:PDE} holds for $p'_U$. With the same arguments as given in the proof of statement (\ref{eq:green-upper}) in Corollary \ref{cor:green}, we see that the Green function \begin{equation} \label{eq:green-def} g'_U(x,y) \df \int_0^\infty p'_U(s,x,y)\textrm{d}s \end{equation} is well defined for $x,y \in U$, $x \neq y$, when $d \geq 2$, and for $x,y \in U$, when $d=1$. The first step is \begin{prop} \label{prop:exit} Let $U$ be a bounded $C^{\infty}$ domain containing 0. Then $X_{T_U}$ has the same law under $P'_{0,U}$ and $P_{0}$ (see (\ref{eq:annealed}) for the notation). \end{prop} \begin{proof} We drop the subscript $U$ in $P'_{0,U}$ and $E'_{0,U}$. By definition of the martingale problem, it holds for $f \in C^2(\R{d})$ that \begin{equation*} E'_0[f(X_{t \wedge T_{U}})]-f(0)=E'_0\left[\int_{0}^{t \wedge T_{U}} \mathcal{L}'f(X_{s})\textrm{d}s\right]\,. \end{equation*} In particular, for $f \in C^{2}(\bar U)$, it follows from $E'_0[T_U]<\infty$ and from dominated convergence that \begin{multline} \label{eq:forward1} E'_0[f(X_{T_U})]=f(0)+E'_0[\int_{0}^{T_U}\mathcal{L}'f(X_{s})\textrm{d}s]\\ =f(0)+\int_0^\infty E'_0[\mathcal{L}'f(X_{s}),\,s<T_U]\textrm{d}s =f(0)+\int_U g'_U(0,x)\mathcal{L}'f(x)\textrm{d}x\,. \end{multline} In the same way it follows that for $\omega \in \Omega$, \begin{equation} \label{eq:forward2} E_{0,\omega}[f(X_{T_U})]=f(0)+\int_U g_U(0,x,\omega)\mathcal{L}_\omega f(x)\textrm{d}x\,. \end{equation} Integrating (\ref{eq:forward2}) with respect to $\mathbb P$, the definition of $\mathcal L'$ (recall (\ref{eq:operator})) shows that \begin{equation} \label{eq:forward3} E_0[f(X_{T_U})]=f(0)+\int_U \mathbb E [g_U(0,x,\omega)]\mathcal{L}'f(x)\textrm{d}x\,. \end{equation} Combining (\ref{eq:forward1}) and (\ref{eq:forward3}), we obtain that for $f \in C^2(\bar U)$ \begin{equation} \label{eq:forward4} E_0[f(X_{T_U})]-E'_0[f(X_{T_U})]=\int_U (\mathbb E [g_U(0,x,\omega)]-g'_U(0,x) )\mathcal{L}' f(x)\textrm{d}x\,. \end{equation} Given $\phi \in C^\infty (\bar U)$, we will now find functions $u_n \in C^2(\bar U)$ such that \begin{equation} \label{eq:smooth} \lim_{n \to \infty}\mathcal L' u_n(x) =0 \text{ for a.e. } x \in U, \text{ and } u_n=\phi \text{ on the boundary } \partial U\,. \end{equation} Choose functions $b'_{U,n} \in C^\infty (\bar U)$, $n \geq 1$, which converge boundedly a.e. in $U$ to $b'_U$. For $\phi \in C^\infty (\bar U)$, consider the Dirichlet problem \begin{equation} \label{eq:dirichlet1} \frac{1}{2}\Delta u_n + b'_{U,n} \nabla u_n=0 \text{ in }U,\,u_n=\phi \text{ on } \partial U\,. \end{equation} Following Theorem 6.14 p.107 in \cite{gil-tru}, there is a unique solution $u_n$ in $C^2(\bar U)$. Fix $p>d$. The generalized problem \begin{equation} \label{eq:dirichlet2} \mathcal L' u=0 \text{ in }U\,,\,u-\phi \in W_0 ^{1,p}(U) \end{equation} has a unique solution $u$ in the Sobolev space $W^{2,p}(U)$, see \cite{gil-tru} p.241. Continuing our proof of (\ref{eq:smooth}), we will now show that \begin{equation} \label{eq:gradient-bounded} \sup_{n} \sup_{x \in U} |\nabla u_n (x)|<\infty\,.
\end{equation} Define $w_n \df u_n-u$, $n \geq 1$, and obtain by means of the Sobolev inequality, see \cite{gil-tru} p.158, that \begin{equation} \label{eq:sobolev} \sup_{x \in U}|\nabla u_n (x)| \leq \sup_{x \in U}|\nabla w_n (x)| + \sup_{x \in U}|\nabla u (x)| \leq c(p,U) (\|w_n\|_{W^{2,p}(U)}+\|u\|_{W^{2,p}(U)})\,. \end{equation} $w_n$, $n \geq 1$, lies in the Sobolev space $W_0^{1,p}(U)$ and solves (see (\ref{eq:dirichlet1}) and (\ref{eq:dirichlet2})) \begin{equation} \frac{1}{2}\Delta w_n + b'_{U,n} \nabla w_n=(b'_U-b'_{U,n})\nabla u \text{ in } U\,. \end{equation} Lemma 9.17 p.242 in \cite{gil-tru} and dominated convergence show that \begin{equation} \label{eq:w_n} \|w_n\|_{W^{2,p}(U)}\leq c(p,U) \|(b'_U-b'_{U,n}) \nabla u\|_{L^p(U)} \underset{n \to \infty} \longrightarrow 0\,. \end{equation} Combining (\ref{eq:sobolev}), (\ref{eq:w_n}) and (\ref{eq:dirichlet2}) yields (\ref{eq:gradient-bounded}). (\ref{eq:dirichlet1}) yields \begin{equation} \mathcal L' u_n=(b'_U-b'_{U,n})\nabla u_n \text{ in }U,\,\, u_n=\phi \text{ on } \partial U, \end{equation} which, together with (\ref{eq:gradient-bounded}), shows (\ref{eq:smooth}). Choosing $f=u_n$ in (\ref{eq:forward4}) and applying dominated convergence gives \begin{equation} E_0[\phi(X_{T_U})]=E'_0[\phi(X_{T_U})] \text{ for all } \phi \in C^\infty (\bar U)\,. \end{equation} Since every function in $C^\infty(\partial U)$ is the restriction of a function in $C^\infty(\bar U)$, see Lemma 6.37 p.137 in \cite {gil-tru}, the claim of the Proposition follows. \end{proof} We now introduce condition $(K)$, and show that it implies condition $(T)$. \begin{defn} \label{def:K} Let $l \in S^{d-1}$. We say that condition ($K)|l$ holds, if there is an $\epsilon >0$, such that for all bounded domains $U$ containing 0 \begin{equation} \label{eq:K} \inf_{x \in U \smallsetminus \{0\}, \text{ dist} (x,\partial U)>5R} b'_U (x)\cdot l > \epsilon \,, \end{equation} with the convention $\inf \varnothing = +\infty$. \end{defn} \begin{prop} \label{prop:K} $(K)|l \Rightarrow (T)|l$\, (recall (\ref{eq:T})). \end{prop} \begin{proof} The set of $l \in S^{d-1}$ for which (\ref{eq:K}) holds is open and hence our claim will follow if for such an $l$ we show that \begin{equation} \label{eq:T|l} \limsup_{L \to \infty}L^{-1}\log P_0[X_{T_{U_{l,b,L}}}\cdot l <0]<0\,. \end{equation} Denote with $\Pi_l(w) \df w-(w\cdot l) l$, $w \in \R{d}$, the projection on the orthogonal complement of $l$, and define \begin{equation} \label{eq:bounded-set} V_{l,b,L} \df \left\{x \in \R{d}: -bL < x \cdot l < L, |\Pi_l(x)| < L^{2} \right\}\,. \end{equation} In view of Proposition \ref{prop:exit}, we choose bounded $C^\infty$ domains $\tilde V_{l,b,L}$ such that \begin{equation} \label{eq:set-inclusion} V_{l,b,L} \subset \left\{x \in \R{d}: -bL < x \cdot l < L, |\Pi_l(x)| < L^{2} +5R \right\} \subset \tilde V_{l,b,L} \subset U_{l,b,L}\,. \end{equation} (When $d=1$, $\Pi_l(w) \equiv 0$, and we simply have that $U_{l,b,L}=V_{l,b,L}=\tilde V_{l,b,L}$.) Recall (\ref{eq:measure}). To prove (\ref{eq:T|l}), it will suffice to prove that \begin{equation} \label{eq:exit-bounded-set} \limsup_{L \to \infty}L^{-1}\log P'_{0,\tilde V_{l,b,L}}[X_{T_{V_{l,b,L}}} \cdot l <L]<0\,. \end{equation} Indeed, once this is proved, it follows from (\ref{eq:set-inclusion}) that \begin{equation} \label{eq:exit-bounded-set2} \limsup_{L \to \infty}L^{-1}\log P'_{0,\tilde V_{l,b,L}}[X_{T_{\tilde V_{l,b,L}}}\cdot l <L]<0\,. 
\end{equation} Hence, with Proposition \ref{prop:exit}, statement (\ref{eq:exit-bounded-set2}) holds with $ P'_{0,\tilde V_{l,b,L}}$ replaced by $P_0$, and, using (\ref{eq:set-inclusion}) once more, (\ref{eq:T|l}) follows.\\ We now prove (\ref{eq:exit-bounded-set}). By (\ref{eq:set-inclusion}) and (\ref{eq:K}), we see that for $x \in V_{l,b,L}$, \begin{equation} \label{eq:b'-bounds} b'_{\tilde V_{l,b,L}}(x)\cdot l \geq \begin{cases} \epsilon, &\text{ if } -bL+5R<x \cdot l <L-5R \text{ and } x \neq 0,\\ -\bar b, &\text{ else }. \end{cases} \end{equation} We thus consider the process $X_t \cdot l$. We introduce the function $u(\cdot)$ on $\R{}$, which is defined on $[-bL,L]$ through \begin{equation} u(r) \df \begin{cases} \alpha_1 e^{\alpha_2 \epsilon (bL-5R)}(\alpha_3-e^{4\bar b (r-(-bL+5R))}), &\text{if } r \in [-bL,-bL+5R]\,,\\ e^{-\alpha_2 \epsilon r}, &\text{if } r \in (-bL+5R,L-5R)\,,\\ \alpha_4 e^{-\alpha_2 \epsilon (L-5R)}(\alpha_5-e^{4\bar b (r-(L-5R))}), &\text{if } r \in [L-5R,L]\,, \end{cases} \end{equation} and which is extended boundedly and in a $C^2$ fashion outside $[-bL,L]$, and such that $u$ is twice differentiable in the points $-bL$ and $L$. The numbers $\alpha_i$, $1\leq i \leq 5$, are chosen positive and independent of $L$, via \begin{equation} \label{eq:coefficients} \alpha_5=1+e^{20\bar b R},\, \alpha_4=e^{-20\bar b R},\, \alpha_2 = \min (1, \frac{4 \bar b}{\epsilon}e^{-20\bar b R}),\, \alpha_1=\frac{\epsilon \alpha_2}{4 \bar b},\, \alpha_3=1+\frac{4 \bar b}{\epsilon \alpha_2}\,. \end{equation} Then, on $[-bL,L]$, $u$ is positive, continuous and decreasing. In addition, one has with the definition $j(r)=u'(r_+)-u'(r_-)$, \begin{equation} \label{eq:derivative} j(-bL+5R)=0, \text{ and } j(L-5R) \le 0\,. \end{equation} On $\R{d}$ we define the function $\tilde u(x)=u(x \cdot l)$, and for $\lambda$ real, we define on $\R{}_+ \times \R{}$ the function $v_\lambda(t,r) \df e^{\lambda t} u(r)$, and on $\R{}_+ \times \R{d}$ the function $\tilde v_\lambda(t,x) \df v_\lambda(t,x \cdot l)=e^{\lambda t} \tilde u(x)$. We will now find $\lambda_0$ positive such that \begin{equation} \label{eq:supermartingale} v_{\lambda_0}(t \wedge T_{V_{l,b,L}}, X_{t \wedge T_{V_{l,b,L}}}\cdot l) \text{ is a positive supermartingale under }P'_{0,\tilde V_{l,b,L}}. \end{equation} Corollary 4.8 p.317 in \cite{kar-shr}, combined with remark 4.3 p.173 therein, shows the existence of a $d$-dimensional Brownian motion $W_t$ defined on $(C(\R{}_+,\R{d}), \mathcal F,P'_{0,\tilde V_{l,b,L}})$, such that \begin{equation*} P'_{0,\tilde V_{l,b,L}}-\text{a.s.},\quad Y_t \df X_t \cdot l= W_t \cdot l+\int_0^t b'_{\tilde V_{l,b,L}}(X_s)\cdot l\,ds\,. \end{equation*} Writing $u$ as a linear combination of convex functions, we find from the generalised It\^o rule, see \cite{kar-shr} p.218, that \begin{align} \label{eq:ito} P'_{0,\tilde V_{l,b,L}}-\text{a.s.,}\quad u(Y_{t})\,=\,1+\int_0^{t}D^-u(Y_s)dY_s + \int_{-\infty}^{\infty}\Lambda_t(a)\mu(da), \end{align} where $D^-u$ is the left-hand derivative of $u$, $\Lambda(a)$ is the local time of $Y$ in $a$, and $\mu$ is the second derivative measure, i.e. $\mu([a,b))=D^-u(b)-D^-u(a)$, $a<b$ real. Notice that the first derivative of $u$ exists and is continuous outside $L-5R$, and the second derivative of $u$ exists (in particular) outside the Lebesgue zero set $A= \{-bL+5R,0,L-5R\}$. 
Hence we find by definition of the second derivative measure, and with the help of equation (7.3) p.218 in \cite{kar-shr} that $P'_{0,\tilde V_{l,b,L}}$-a.s., \begin{equation} \label{eq:sec-der} \begin{aligned} \int_{-\infty}^{\infty}\Lambda_t(a)\mu(da)=& \int_{-\infty}^{\infty} \Lambda_t(a){\bf 1}_{A^c}(a) u''(a)\,da +\Lambda_t(L-5R)\,j(L-5R)\\ =&\tfrac{1}{2} \int_0^t u''(Y_s){\bf 1}_{A^c}(Y_s)\,ds+\Lambda_t(L-5R)\,j(L-5R)\,. \end{aligned} \end{equation} Another application of equation (7.3) p.218 in \cite{kar-shr} shows that \begin{equation} \label{eq:N} P'_{0,\tilde V_{l,b,L}}-\text{a.s.,}\quad \int_0^t {\bf 1}_{A}(Y_s)\,ds=2\int_{-\infty}^\infty {\bf 1}_{A}(a)\Lambda_t(a)\,da=0\,. \end{equation} As a result, we find that $P'_{0,\tilde V_{l,b,L}}$-a.s., \begin{equation} \label{eq:first-der} \int_0^t D^-u(Y_s){\bf 1}_{A}(Y_s)dY_s =0\,. \end{equation} Combining (\ref{eq:sec-der}) and (\ref{eq:first-der}), and by definition of the operator $\mathcal L'$, see (\ref{eq:operator}), we can now rewrite (\ref{eq:ito}) as the $P'_{0,\tilde V_{l,b,L}}$-a.s. equalities \begin{equation*} \begin{aligned} u(Y_{t})=&1+\int_0^{t} u'(Y_s){\bf 1}_{A^c}(Y_s)dY_s +\tfrac{1}{2}\int_0^{t} u''(Y_s) {\bf 1}_{A^c}(Y_s)ds + \Lambda_t(L-5R)j(L-5R)\\ =&1+\int_0^{t}\mathcal L'\tilde u(X_s){\bf 1}_{A^c}(X_s \cdot l)\,ds + \Lambda_t(L-5R)\,j(L-5R) +M_t\,, \end{aligned} \end{equation*} where $M_t$ is a continuous martingale. In particular, $\tilde u(X_t) (=u(Y_t))$ is a continuous semimartingale, and applying It\^o's rule to the product $e^{\lambda t} \cdot \tilde u(X_t)=\tilde v_\lambda(t,X_t)$, and using (\ref{eq:N}) once again, we obtain that, $P'_{0,\tilde V_{l,b,L}}$-a.s., \begin{equation} \label{eq:ito-2} \begin{aligned} &\tilde v_{\lambda}(t,X_{t }) =\,1+\int_0^{t }\lambda e^{\lambda s} \tilde u(X_s) \,ds + \int_0^{t } e^{\lambda s}\, d\tilde u(X_s)\\ &=\,1+ \int_0^{t } \Big(\tfrac{\partial}{\partial s}+ \mathcal L'\Big)\tilde v_\lambda (s,X_s){\bf 1}_{A^c}(X_s \cdot l)\,ds +j(L-5R)\int_0^{t } e^{\lambda s}d\Lambda^{L-5R}_s +N_{t }\,, \end{aligned} \end{equation} where $N_{t }$ is a continuous martingale. We find through direct computation that for $ x \in V_{l,b,L}$, and a suitable $\psi (x) \geq 0$, using the notation $I_1=(-bL,-bL+5R)$, $I_2=(-bL+5R,L-5R)$, $I_3=(L-5R,L)$, \begin{equation*} \big[(\tfrac{\partial}{\partial s}+ \mathcal L')\tilde v_\lambda \big](s,x)\leq \psi (x)e^{\lambda s} \cdot \begin{cases} \lambda(e^{20\bar b R}\alpha_3-1)-4\bar b( 2 \bar b + b'_{\tilde V_{l,b,L}} (x) \cdot l)\,,&\text{ if } x \cdot l \in I_1\,,\\ \lambda+\alpha_2\epsilon(\frac{1}{2}\alpha_2\epsilon-b'_{\tilde V_{l,b,L}} (x) \cdot l)\,,&\text{ if }x \cdot l \in I_2\,,\\ \lambda(\alpha_5-1)-4\bar b( 2 \bar b + b'_{\tilde V_{l,b,L}} (x) \cdot l)\,,&\text{ if }x \cdot l \in I_3\,. \end{cases} \end{equation*} Hence, by (\ref{eq:b'-bounds}) and (\ref{eq:coefficients}), we can find $\lambda_0>0$ small such that for $x \in V_{l,b,L}$, $x \cdot l \notin A$, the right-hand side of the last expression is negative. Since $j(L-5R) \le 0$, see (\ref{eq:derivative}), we obtain from (\ref{eq:ito-2}) applied to the finite stopping time $t \wedge T_{V_{l,b,L}}$ that (\ref{eq:supermartingale}) holds.\\ We now derive the claim of the proposition from (\ref{eq:supermartingale}). 
When $d \geq 2$, the probability to exit $V_{l,b,L}$ neither from the ``right'' nor from the ``left'' can be bounded as follows: \begin{equation} \label{eq:bound} \begin{aligned} & P'_{0,\tilde V_{l,b,L}}[-bL<X_{T_{V_{l,b,L}}} \cdot l < L\,]\leq \\ & P'_{0,\tilde V_{l,b,L}}[-bL<X_{T_{V_{l,b,L}}} \cdot l < L,\,T_{V_{l,b,L}} > \tfrac{2\alpha_2 \epsilon}{\lambda_0}L\,] + P'_{0,\tilde V_{l,b,L}}[\,\sup |X_t|\geq L^2:t \leq \tfrac{2\alpha_2 \epsilon}{\lambda_0}L\,]\,. \end{aligned} \end{equation} By Chebychev's inequality and Fatou's lemma, we find that the first term on the right-hand side is smaller than \begin{equation} \label{eq:bound-0} \begin{aligned} &\frac{1}{v_{\lambda_0}(\tfrac{2\alpha_2 \epsilon}{\lambda_0}L,L)}E'_{0,\tilde V_{l,b,L}} [v_{\lambda_0}(T_{V_{l,b,L}}, X_{T_{V_{l,b,L}}}\cdot l)]\\ \le &c(\epsilon)e^{-\alpha_2 \epsilon L} \liminf_{t \to \infty}E'_{0,\tilde V_{l,b,L}} [v_{\lambda_0}(t \wedge T_{V_{l,b,L}}, X_{t \wedge T_{V_{l,b,L}}}\cdot l)]\\ \le & c(\epsilon)e^{-\alpha_2 \epsilon L}\, v_{\lambda_0}(0,0)=c(\epsilon)e^{-\alpha_2 \epsilon L}, \end{aligned} \end{equation} where, in the last inequality, we used (\ref{eq:supermartingale}). Applying (\ref{eq:max}) in Lemma \ref{lemma:bernstein} to the second term in the right-hand side of (\ref{eq:bound}), we obtain, together with (\ref{eq:bound-0}), that \begin{equation} \label{eq:bound-1} \limsup_{L \to \infty}L^{-1} \log P'_{0,\tilde V_{l,b,L}}[-bL<X_{T_{V_{l,b,L}}} \cdot l < L\,]<0. \end{equation} When $d \geq 1$, we bound the probability to exit $V_{l,b,L}$ from the left by a similar argument as in (\ref{eq:bound-0}), and find that \begin{equation} \label{eq:bound-2} P'_{0,\tilde V_{l,b,L}}[X_{T_{V_{l,b,L}}} \cdot l=-bL] \leq \frac{v_{\lambda_0}(0,0)}{v_{\lambda_0}(0,-bL)} \leq e^{-c(\epsilon)L}\,. \end{equation} (\ref{eq:bound-2}), together with (\ref{eq:bound-1}), when $d \geq 2$, show (\ref{eq:exit-bounded-set}), which implies condition $(T)|l$. \end{proof} Let us now turn to the \begin{proof}[{\bf Proof of Theorem \ref{thm:examples}}] It suffices to verify condition $(K)|l$, which implies condition $(T)|l$, see Proposition \ref{prop:K}. Let $U$ be a bounded domain containing 0, and assume that there is \begin{equation} \label{eq:dist} x \in U \smallsetminus \{0\} \text{ such that }\text{dist}(x,\partial U)>5R\,. \end{equation} (otherwise $(K)|l$ automatically holds). With $x$ as above, $\delta >0$, for $f$ a non-negative bounded measurable function on $U$, we write \begin{align*} f_\delta(\cdot) \df f(\cdot) \mathbf{1}_{B_{\delta}(x)}(\cdot)\,,\text{ and } b^{\pm}_\delta(\cdot,\omega) \df (b(\cdot,\omega)\cdot l)_{\pm} \mathbf{1}_{B_{\delta}(x)}(\cdot)\,. \end{align*} Lemma \ref{lemma:b'} shows that \begin{equation} \label{eq:b'-integral} b'_U(x)\cdot l=\lim_{\delta \to 0}\frac{1}{|B_\delta|}\int_{B_{\delta}(x)}b'_U(y)\cdot l~\textrm{d}y\,. 
\end{equation} If we choose $\delta < |x|/2$, it follows from (\ref{eq:green-lower}) and from (\ref{eq:green-upper}) in Corollary \ref{cor:green} that \begin{equation} \label{eq:inf-sup-green} 0<\inf_{y \in B_{\delta}(x)}\mathbb{E}[g_{U}(0,y,\omega)]\leq \sup_{y \in B_{\delta}(x)}\mathbb{E}[g_{U}(0,y,\omega)]<\infty\,, \end{equation} and we obtain by the definition of $b'_U$, see (\ref{eq:b'}), that \begin{equation} \label{eq:b'-bound} \begin{aligned} \frac{1}{|B_\delta|}\underset{B_{\delta}(x)}{\int}b'_U(y)\cdot l~\textrm{d}y \geq \frac{\mathbb{E}\left[\int g_{U}(0,y,\omega)b_{\delta}^{+}(y,\omega) \textrm{d}y \right]}{|B_\delta|\,\sup_{y \in B_\delta(x)}\mathbb{E}[g_{U}(0,y,\omega)]}- \frac{\mathbb{E}\left[\int g_{U}(0,y,\omega)b_{\delta}^{-}(y,\omega) \textrm{d}y \right]}{|B_\delta|\,\inf_{y \in B_\delta(x)}\mathbb{E}[g_{U}(0,y,\omega)]}\,. \end{aligned} \end{equation} Denote with $R_k$ and $D_k$, $k \geq 1$, the successive returns of $X_\cdot$ to $B_{2R}(x)$ and departures from $B_{4R}(x)$ defined similarly as in (\ref{eq:excursion-1}) and (\ref{eq:excursion-2}), with $B_1(x)$ and $B_2(x)$ replaced by $B_{2R}(x)$ and $B_{4R}(x)$ respectively. For $y$ in $U$, define the associated operators: \begin{equation*} Rf(y) \df E_{y,\omega}\left[f(X_{R_{1}}), R_{1} < T_{U} \right],\, Qf(y) \df E_{y,\omega}\left[f(X_{D_{1}})\right],\, Tf(y) \df E_{y,\omega}[\int_{0}^{D_{1}}f(X_{s})\textrm{d}s]. \end{equation*} If $\delta \leq 2R$, successive applications of the strong Markov property show that \begin{equation} \label{eq:Markov} \int_{U}g_{U}(0,y,\omega)f_{\delta}(y)\textrm{d}y = E_{0,\omega}[\int_{0}^{T_{U}}f_{\delta}(X_s)\textrm{d}s] = R(Id - QR)^{-1}Tf_{\delta}(0). \end{equation} In view of (\ref{eq:b'-bound}), it will be crucial to bound the above quantity from below and from above. In a first step, we derive bounds on the operators $R$ and $QR$. For $y \in U$, we have \begin{equation} \label{eq:R-bound} \inf_{z \in \partial{B}_{2R}(x)}f(z)P_{y,\omega}[R_{1}<T_{U}]\,\,\leq Rf(y)\, \, \leq \sup_{z \in \partial{B}_{2R}(x)}f(z)P_{y,\omega}[R_{1}<T_{U}], \end{equation} and hence, \begin{equation} \label{eq:QR-bound} \begin{aligned} &\sup_{y \in \partial B_{2R}(x)}QRf(y) \leq \sup_{z \in \partial{B}_{4R}(x)} P_{z,\omega}[R_{1}<T_{U}] \sup_{z \in \partial{B}_{2R}(x)}f(z),\\ &\inf_{y \in \partial B_{2R}(x)}QRf(y) \geq \inf_{z \in \partial{B}_{4R}(x)} P_{z,\omega}[R_{1}<T_{U}] \inf_{z \in \partial{B}_{2R}(x)}f(z)\,. \end{aligned} \end{equation} We first derive a lower bound for (\ref{eq:Markov}), see (\ref{eq:estimate-2}) below. Repeated applications of (\ref{eq:R-bound}) and (\ref{eq:QR-bound}) yield \begin{equation} \label{eq:estimate-1} \begin{aligned} & R(Id - QR)^{-1}Tf_\delta(0)\\ \geq & \, P_{0,\omega}[R_{1}<T_{U}]\sum_{j \geq 0}\left(\inf_{z \in \partial {B}_{4R}(x)}P_{z,\omega}[R_{1}<T_{U}]\right)^{j}\inf_{z \in \partial{B}_{2R}(x)}Tf_\delta(z)\\ \geq & \, \frac{ P_{0,\omega}[R_{1}<T_{U}]}{\sup_{z \in \partial B_{4R}(x)}P_{z,\omega} [R_{1}>T_{U}]}\inf_{z \in B_\delta (x)} f_\delta(z) \inf_{z \in \partial{B}_{2R}(x)}T \mathbf{1}_{B_{\delta}(x)}(z)\,. \end{aligned} \end{equation} If $\delta <2R$, we find by means of (\ref{eq:green-lower}) in Corollary \ref{cor:green} that \begin{equation} \label{eq:green-1} \inf_{z \in \partial{B}_{2R}(x)}T \mathbf{1}_{B_{\delta}(x)}(z) \geq \int_{B_{\delta}(x)} ~\inf_{z \in \partial{B}_{2R}(x)}g_{B_{4R}(x)}(z,y,\omega)\textrm{d}y \geq c\, |B_\delta|\,. 
\end{equation} Combining (\ref{eq:estimate-1}) and (\ref{eq:green-1}), and using (\ref{eq:Markov}), we see that \begin{equation} \label{eq:estimate-2} \int_{U}g_{U}(0,y,\omega)f_{\delta}(y)\textrm{d}y \geq c\,|B_\delta|\,\frac{ P_{0,\omega}[R_{1}<T_{U}]}{\sup_{z \in \partial B_{4R}(x)}P_{z,\omega}[R_{1}>T_{U}]}\inf_{z \in B_\delta (x)} f_\delta(z)\,. \end{equation} We will now derive an upper bound on (\ref{eq:Markov}), see (\ref{eq:estimate-3}). If $\delta < R$, we find by another use of Corollary \ref{cor:green} that \begin{equation} \sup_{z \in \partial{B}_{2R}(x), y \in B_\delta (x)}g_{B_{4R}(x)}(z,y,\omega) \leq c\,. \end{equation} Proceeding in a similar fashion as in (\ref{eq:estimate-1})-(\ref{eq:estimate-2}), we obtain the upper bound \begin{equation} \label{eq:estimate-3} \int_{U}g_{U}(0,y,\omega)f_{\delta}(y)\textrm{d}y \leq c\,|B_\delta |\,\frac{ P_{0,\omega}[R_{1}<T_{U}]}{\inf_{z \in \partial B_{4R}(x)}P_{z,\omega}[R_{1}>T_{U}]}\,\sup_{z \in B_{\delta}(x)}f_\delta(z)\,. \end{equation} We will now give a lower bound for the first term in the last line of (\ref{eq:b'-bound}). Applying (\ref{eq:Markov}) with $f_\delta=b^+_\delta$ and using (\ref{eq:estimate-2}), we see that \begin{equation} \label{eq:estimate-4} \mathbb{E}\left[\int g_{U}(0,y,\omega)b_{\delta}^{+}(y,\omega) \textrm{d}y \right] \geq c\,|B_\delta |\, \mathbb{E}\left[\frac{ P_{0,\omega}[R_{1}<T_{U}]}{\sup_{z \in \partial B_{4R}(x)}P_{z,\omega}[R_{1}>T_{U}]}\,\inf_{z \in B_\delta (x)} b_{\delta}^{+}(z,\omega)\right]\,. \end{equation} Observe that $P_{0,\omega}[R_{1}<T_U]=1$ if $0 \in B_{2R}(x)$. Hence $\frac{ P_{0,\omega}[R_{1}<T_{U}]}{\inf_{z \in \partial B_{4R}(x)}P_{z,\omega}[R_{1} >T_{U}]}$ is $\mathcal{H}_{B_{2R}^{c}(x)}$-measurable. Since $\inf_{z \in B_{\delta}(x)}b_{\delta}^{+}(z,\omega)$ is $\mathcal{H}_{B_{\delta}(x)}$-measurable, it follows for $\delta < R$ and from finite range dependence, see (\ref{eq:R-separation}), that these two random variables are $\mathbb{P}$-independent, and hence (\ref{eq:estimate-4}) equals \begin{equation} \label{eq:estimate-5} c\,|B_\delta |\, \mathbb{E}\left[\frac{ P_{0,\omega}[R_{1}<T_{U}]}{\sup_{z \in \partial B_{4R}(x)}P_{z,\omega}[R_{1}>T_{U}]}\right] \mathbb{E}[\inf_{z \in B_\delta (x)} b_{\delta}^{+}(z,\omega)]\,. \end{equation} The application of Harnack's inequality (see \cite{gil-tru} p.199) to the $\mathcal L_\omega$-harmonic function $P_{\cdot,\omega}[R_{1}>T_{U}]$ on $B_{5R}(x) \smallsetminus \bar{B}_{2R}(x)$ shows that \begin{equation*} \sup_{z \in \partial B_{4R}(x)}P_{z,\omega}[R_{1}>T_{U}] \leq c\, \inf_{z \in \partial B_{4R}(x)}P_{z,\omega}[R_{1}>T_{U}]\,. \end{equation*} Together with an application of (\ref{eq:estimate-3}) and (\ref{eq:Markov}) with $f_\delta=\mathbf 1_{B_\delta(x)}$, we obtain that (\ref{eq:estimate-5}) is bigger than \begin{multline} \label{eq:estimate-6} c\,\mathbb{E}\left[\int_{B_{\delta}(x)}g_{U}(0,y,\omega) \textrm{d}y\right]\,\mathbb{E}[\inf_{z \in B_{\delta}(x)}b_{\delta}^{+}(z,\omega)]\\ \geq c\,|B_\delta |\,\mathbb{E}[\inf_{y \in B_\delta (x)} g_{U}(0,y,\omega)]\,\mathbb{E}[\inf_{z \in B_{\delta}(x)}b_{\delta}^{+}(z,\omega)]\,. \end{multline} Finally, using (\ref{eq:estimate-4})-(\ref{eq:estimate-6}), we find that the first term in the right-hand side of (\ref{eq:b'-bound}) is bigger than \begin{equation} \label{eq:estimate-7} c_1\,\frac{\mathbb{E}[\inf_{y \in B_{\delta}(x)}g_{U}(0,y,\omega)]} {\mathbb{E}[\sup_{y \in B_{\delta}(x)}g_{U}(0,y,\omega)]} ~\mathbb{E}[\inf_{z \in B_{\delta}(x)}b_{\delta}^{+}(z,\omega)]\,. 
\end{equation} By similar computations as carried out between (\ref{eq:estimate-4}) and (\ref{eq:estimate-7}), we find as an upper bound for the second term in the right-hand side of (\ref{eq:b'-bound}) \begin{equation} \label{eq:estimate-8} c_2\, \frac{\mathbb{E}[\sup_{y \in B_{\delta}(x)}g_{U}(0,y,\omega)]} {\mathbb{E}[\inf_{y \in B_{\delta}(x)}g_{U}(0,y,\omega)]} ~\mathbb{E}[\sup_{z \in B_{\delta}(x)}b_{\delta}^{-}(z,\omega)]\,. \end{equation} The continuity of $g_U(0,\cdot,\omega)$ and of $b_\delta ^+ (\cdot, \omega)$ in $B_\delta (x)$, see lemma \ref{lemma:b'} and (\ref{eq:Lipschitz}), together with dominated convergence, and the translation invariance of the measure $\mathbb P$, show that \begin{equation} \label{eq:estimate-9} \lim_{\delta \to 0}\,c_1\,\frac{\mathbb{E}[\inf_{y \in B_{\delta}(x)}g_{U}(0,y,\omega)]} {\mathbb{E}[\sup_{y \in B_{\delta}(x)}g_{U}(0,y,\omega)]} ~\mathbb{E}[\inf_{z \in B_{\delta}(x)}b_{\delta}^{+}(z,\omega)] =\,c_1\,\mathbb{E}[(b(0,\omega)\cdot l)_+]\,, \end{equation} and a similar identity for the term in (\ref{eq:estimate-8}). Inserting (\ref{eq:estimate-7})-(\ref{eq:estimate-9}) in (\ref{eq:b'-bound}), and using (\ref{eq:b'-integral}), we finally obtain \begin{equation} \label{eq:estimate-10} b'_U(x)\cdot l \geq ~c_1~\mathbb{E}[(b(0,\omega)\cdot l)_{+}- \tfrac{c_2}{c_1}\,(b(0,\omega)\cdot l)_{-}]\,. \end{equation} Hence, if (\ref{eq:criterion}) holds with $c_e \df \tfrac{c_2}{c_1}$, we see that there is an $\epsilon >0$ such that for all $x$ as in (\ref{eq:dist}) \begin{equation} \label{eq:estimate-11} b'_U(x)\cdot l > \epsilon\,. \end{equation} We conclude that condition $(K)|l$ holds, see (\ref{eq:K}). By means of Proposition \ref{prop:K}, condition $(T)|l$ holds, and Theorem \ref{thm:examples} is proved. \end{proof} \begin{rem} \rm \label{rem:examples} With the help of Theorem \ref{thm:examples}, it is easy to obtain concrete examples of diffusions fulfilling condition $(T)$. For instance, when $(b(0,\omega)\cdot l)_- =0$, we find: \begin{equation} \label{eq:non-nestling} \begin{aligned} &\text{Condition $(T)$ holds when $d \geq 1$ and there is $l \in S^{d-1}$ and $\delta >0$,} \\ &\text{such that $b(0,\omega)\cdot l \geq 0$ for all $\omega \in \Omega$, and $p_\delta = \mathbb P[b(0,\omega)\cdot l \geq \delta]>0$}\,. \end{aligned} \end{equation} If there is $\delta >0$ such that $p_\delta=1$, this is in the spirit of the {\it non-nestling} case, which is in fact already covered by Proposition \ref{prop:non-nestling}, and else, of the {\it marginal nestling} case in the discrete setting, see Sznitman \cite{szn00}. Of course, Theorem \ref{thm:examples} also comprises more involved examples of condition $(T)$ where \\ $b(0,\omega)\cdot l$ takes both positive and negative values for every $l \in S^{d-1}$. Hence, when $d \geq 2$, Theorem \ref{thm:examples} provides examples of ballistic diffusions in random environment beyond previous knowledge. They correspond to the {\it plain nestling} case in \cite{szn00}. \end{rem} \section{Appendix} \label{sec:appendix} \small \subsection{Bernstein's Inequality} Recall the convention of the constants stated at the end of the Introduction. The following Lemma follows in essence from Bernstein's inequality (see \cite {rev-yor} page 153-154). 
\begin{lemma} \label{lemma:bernstein} On $\R{d}$ we consider measurable functions $a$, $b$, with values in the space of symmetric matrices and in $\R{d}$ respectively, that satisfy for suitable $\nu \ge 1$, and $\bar a >0$, $\bar b >0$, \begin{equation} \label{eq:1} \tfrac{1}{\nu}|y|^{2}\leq \sum_{i,j} a_{ij}(x)y_i y_j \leq \nu |y|^{2},\,\, \left|a(x) \right| \leq \bar a,\,\, \left|b(x) \right| \leq \bar b,\,\,x,y \in \R{d}\,. \end{equation} We denote with $\mathcal L$ the operator attached to $a$ and $b$, similarly as in (\ref{eq:diff-operator}), and we assume that $P_x$ solves the martingale problem for $\mathcal L$ started at $x$ in $\R{d}$. We denote with $E_x$ the corresponding expectation. Write $(X_t)_{t \ge 0}$ for the canonical process on $C([0,\infty),\R{d})$, and let $Z_t = \sup_{s \leq t}|X_s-X_0|$. Then, for every $\alpha >0$, there are two constants $c(\alpha)>0$ and $\tilde c(\alpha)>0$, such that for large $L$, \begin{equation} \label{eq:max} \sup_{x}P_x \big[Z_{\alpha L}\geq L^2\big] \leq \tilde c e^{-cL^3}\,. \end{equation} Further, for $\gamma \in (0,1]$ and for all $\alpha>0$, there exists a constant $\delta(\alpha)>0$ such that \begin{equation} \label{eq:Z} \sup_{x} E_{x}\big[e^{\delta Z_1^\gamma}\big]\leq 1+\alpha\,. \end{equation} \end{lemma} \begin{proof} We obtain from the martingale problem that $M_t=X_t-X_0-\int_0^t b\,(X_s)\,ds$ is a martingale. We compute the bracket $\langle M^i \rangle_t$ of the $i$-th component $M^i_t$ of $M_t$, $1 \le i \le d$, and find $\langle M^i \rangle_t=\int_0^t a_{ii}(X_s)\,ds$. (\ref{eq:1}) yields $\langle M^i \rangle_t \le \nu t$, and with the help of Bernstein's inequality (see \cite {rev-yor} page 153-154) and a further application of (\ref{eq:1}), it follows immediately that for large $L$, \begin{equation*} P_{x}\big[Z_{\alpha L}\geq L^2\big] \leq P_{x}\big[\sup_{s\leq \alpha L}|M_s|\geq (L^2-\alpha\bar b L) \big] \leq 2d e^{-\frac{L^3}{4\nu \alpha d}}\,, \end{equation*} which proves (\ref{eq:max}). Since $Z_1\leq \sup_{s\leq 1}|M_s|+\bar b$, we obtain for $0<\delta<1$ that \begin{align*} &E_{x}\big[e^{\delta Z_1^\gamma}\big]\leq e^{\delta \bar{b}^\gamma}\, E_{x}\big[\exp\{\delta (\sup_{s\leq 1}|M_s|)^\gamma\}\big]\\ =&e^{\delta\bar b^\gamma}\Big(1+ \delta\int^\infty_{0} \mathrm{d}v\; e^{\delta v}\underbrace{P_{x}\big[(\sup_{s\leq 1}|M_s|)^\gamma \geq v\big]}_{\leq 2d \exp\{-v^{\frac{2}{\gamma}}/(2d\nu)\}}\Big) \leq e^{\delta\bar b^\gamma}\big(1+\delta\, c \big)\,, \end{align*} which proves (\ref{eq:Z}). \end{proof} \subsection{Bounds on the Green function} The bounds on the transition density contained in the next Proposition will be crucial to derive bounds on the Green function. \begin{prop} \label{prop:PDE} Let $\mathcal L_\omega$ be as in (\ref{eq:diff-operator}), and let assumptions (\ref{eq:b-sigma-bound})-(\ref{eq:elliptic}) be in force. Then the linear parabolic equation of second order $\frac{\partial u}{\partial t}=\mathcal L_\omega u$ has a a unique fundamental solution $p_\omega(t,x,y)$, and there are positive constants $\alpha$, $\beta$, $a$ and $\tilde \alpha$ such that for $t \leq 1$ \begin{equation} \label{eq:PDE-upper} |p_\omega(t,x,y)| \leq \frac{\alpha}{t^{d/2}} \exp\big\{-\tfrac{\beta |x-y|^2}{t}\big\}\,, \end{equation} and such that for $|x-y|^2< a t$ and $t\in (0, 1]$ \begin{equation} \label{eq:PDE-lower} p_\omega(t,x,y)\geq \frac{\tilde \alpha}{t^{d/2}}\,. \end{equation} \end{prop} For the proof we refer the reader to \cite{illin}. 
The statements (4.16) and (4.75) therein correspond to (\ref{eq:PDE-upper}) and (\ref{eq:PDE-lower}). Recall the convention on the constants stated at the end of the Introduction. We obtain the following Corollary: \begin{cor} \label{cor:green} Assume (\ref{eq:b-sigma-bound}) and (\ref{eq:Lipschitz}), and let $U$ be a bounded domain. There is a positive constant $m(r,U)$ such that for all $\omega \in \Omega$, and for all $y, z \in U$ with dist$(y, \partial U)>r$, dist$(z, \partial U)>r$, \begin{equation} \label{eq:green-lower} g_{U}(y,z,\omega) \geq m\,. \end{equation} For $y \neq z$, define \begin{equation} \label{eq:function-h} h_y(z)= \begin{cases} |y-z|^{2-d} \,, &d \geq 3\,,\\ \log \frac{\text{diam}(U)}{|y-z|}\,,&d=2\,. \end{cases} \end{equation} There are positive constants $\alpha, c(U)$ such that for $y,z \in U$, and all $\omega \in \Omega$, \begin{equation} \label{eq:green-upper} g_U(y,z,\omega) \leq \begin{cases} \alpha h_y(z)+c, & \text{if $d \geq 2$ and $y \neq z$},\\ c, &\text{if $d=1$}\,. \end{cases} \end{equation} \end{cor} \begin{proof} Let $x \in U$ with dist$(x, \partial U)>r$. Choose $t_{0} \in (0,1]$ such that $\sqrt{at_0}\leq \frac{r}{2}$ and for all $t\leq t_0$, $\frac{\tilde \alpha}{t^{d/2}}\geq \frac{2\alpha}{t_0^{d/2}} \exp\{-\tfrac{\beta r^2}{4t_0}\}$ holds, and such that in addition the function $t\mapsto \frac{\alpha}{t^{d/2}} \exp\{-\tfrac{\beta r^2}{4t}\}$ is monotone increasing on $\{t:t\leq t_0\}$. Let $\rho =\min(\frac{r}{2},\sqrt{at_{0}/8})$ and $z_0 \in B_{\rho}(x)$. Hence $|X_{T_U}-z_0|>\tfrac{r}{2}$, and on the event $\{T_U<t\leq t_0\}$, the inequality $p_\omegaega(t-T_U, X_{T_U}, z)\leq \frac{\alpha}{t^{d/2}} \exp\big\{-\tfrac{\beta r^2}{4t}\big\}$ follows from (\ref{eq:PDE-upper}) and from the monotonicity mentioned above. Choose further $y_0 \in B_\rho(x)$, then $|y_0-z_0|<\sqrt{at_{0}/2}$, and hence, for $t \in (t_0/2,t_0)$, $|y_0-z_0|<\sqrt{at}$ holds. By Duhamel's formula, see \cite{stroock} page 331, and by (\ref{eq:PDE-lower}), the subtransition density $p_{\omega,U}(t,y,z)$ satisfies for $y_0,z_0 \in B_{\rho}(x)$ and $t \in (t_0/2,t_0)$ \begin{equation} \label{eq:inf-green} \begin{aligned} p_{\omega,U}(t,y_0,z_0)&=p_\omega(t,y_0,z_0)-E_{y_0,\omega}\big[T_U<t, p_\omega(t-T_U, X_{T_U}, z_0)\big]\\ &\geq \tfrac{\alpha}{t_0^{d/2}}\exp\{-\tfrac{\beta r^2}{4t_0}\}>0\,. \end{aligned} \end{equation} We will now prove (\ref{eq:green-lower}). Since $U$ is a bounded domain, it follows from a standard chaining argument using (\ref{eq:inf-green}) that there is a finite integer $K(U)>0$ such that for all $y,z \in U$ as above (\ref{eq:green-lower}), for all $t \in (Kt_0/2,Kt_0)$ and for all $\omega \in \Omega$, \begin{equation} p_{\omega,U}(t,y,z) \geq c(r,K)>0\,. \end{equation} Since \begin{equation*} g_{U}(y,z,\omega) \geq \int_{K\frac{t_0}{2}}^{K t_0}p_{\omega,U}(t,y,z)\textrm{d}t\,, \end{equation*} the claim (\ref{eq:green-lower}) follows. To prove the upper bound (\ref{eq:green-upper}), we write \begin{align} \label{eq:sup-green} g_{U}(y,z,\omega)=\int_{0}^{\infty}p_{\omega,U}(t, y ,z)\textrm{d}t \leq \int_{0}^{1}p_{\omega}(t, y ,z)\textrm{d}t + \sum_{k=2}^{\infty}\int_\frac{k}{2}^{\frac{k+1}{2}}p_{\omega,U}(t,y,z)\textrm{d}t\,. \end{align} With the help of (\ref{eq:PDE-upper}), we find positive constants $\alpha,\,c$ such that \begin{equation} \label{eq:h} \int_{0}^{1}p_{\omega}(t, y ,z)\textrm{d}t \leq \begin{cases} \alpha h_y(z)+c\,,& \text{ if $d \geq 2$, $y \neq z$},\\ c\,, &\text{ if $d=1$}\,. 
\end{cases} \end{equation} We obtain by a repeated use of the Chapman-Kolmogorov equation and by (\ref{eq:PDE-upper}), that for $k \geq 2$, \begin{multline*} \int_\frac{k}{2}^{\frac{k+1}{2}}p_{\omega,U}(t,y,z)\textrm{d}t \leq \int_U \textrm{d}v ~p_{\omega,U}(1/2,y,v) ~\sup_{v \in U}\int_\frac{ k-1}{2}^{\frac{k}{2}}p_{\omega,U}(t,v,z)\textrm{d}t\\ \stackrel{induction}{\leq} \left(\sup_{v \in U}P_{v,\omega}[T_{U}>\frac{1}{2}] \right)^{k-1} \sup_{v \in U} \int_{\frac{1}{2}}^{1}p_{\omega,U}(t,v,z)\textrm{d}t \leq c \left(\sup_{v \in U}P_{v,\omega}[T_{U}>\frac{1}{2}]\right)^{k-1}. \end{multline*} Hence, with the help of the Support Theorem of Stroock-Varadhan, see \cite{bass} p.25, or from a chaining argument using (\ref{eq:PDE-lower}), the sum on the right-hand side of (\ref{eq:sup-green}) will be smaller than \begin{equation} \label{eq:sup-green-1} \frac{c}{\inf_{v \in U}P_{v,\omega} [T_{U} \leq\frac{1}{2}]} \leq c(U) < \infty\,. \end{equation} Combining (\ref{eq:sup-green}), (\ref{eq:h}) and (\ref{eq:sup-green-1}) shows (\ref{eq:green-upper}). \end{proof} \end{document}
\begin{document} \title{A sequential approach for speed planning under jerk constraints} \begin{abstract} In this paper we discuss a sequential algorithm for the computation of a minimum-time speed profile over a given path, under velocity, acceleration and jerk constraints. Such a problem arises in industrial contexts such as automated warehouses, where LGVs need to perform assigned tasks as fast as possible in order to increase productivity. It can be reformulated as an optimization problem with a convex objective function, linear velocity and acceleration constraints, and non-convex jerk constraints, which thus represent the main source of difficulty. While existing non-linear programming (NLP) solvers can be employed for the solution of this problem, it turns out that the sequential line-search algorithm proposed in this paper improves on such solvers both in performance and in robustness. At each iteration a feasible direction, with respect to the current feasible solution, is computed, and a step along this direction is taken in order to compute the next iterate. The computation of the feasible direction is based on the solution of a linearized version of the problem, and the solution of the linearized problem, through an approach which strongly exploits its special structure, represents the main contribution of this work. The efficiency of the proposed approach with respect to existing NLP solvers is shown through several computational experiments. \end{abstract} \keywords{Speed planning, Optimization, Sequential Line-Search Method} \section{Introduction} An important problem in motion planning is the computation of the minimum-time motion of a car-like vehicle from a start configuration to a target one while avoiding collisions (obstacle avoidance) and satisfying kinematic, dynamic, and mechanical constraints (for instance, on velocities, accelerations and maximal steering angle). This problem can be approached in two ways: i) As a minimum-time trajectory planning problem, where both the path to be followed by the vehicle and the timing law on this path (i.e., the vehicle's velocity) are simultaneously designed. For instance, one could use the RRT algorithm (see~\cite{7505628}). ii) As a (geometric) path planning step followed by a minimum-time speed planning step on the planned path (see, for instance,~\cite{doi:10.1177/027836498600500304}). In this paper, following the second paradigm, we assume that the path that joins the initial and the final configuration is assigned, and we aim at finding the time-optimal speed law that satisfies some kinematic and dynamic constraints. The problem can be reformulated as an optimization one and it is quite relevant from the practical point of view. In particular, in automated warehouses the speed of LGVs needs to be planned under acceleration and jerk constraints. The solution algorithm should be: i) {\em fast}, since speed planning is performed continuously throughout the work-day, not only when an LGV receives a new task but also during the execution of the task itself, since conditions may change, e.g., if the LGV has to be halted for safety reasons; ii) {\em reliable}, i.e., it should return solutions of high quality, because a better speed profile saves time, and even a small percentage improvement, say 5\%, has a considerable impact on the productivity of the warehouse and, thus, yields a significant economic gain.
In our previous work~\cite{consolini2017scl}, we proposed an optimal time-complexity algorithm for finding the time-optimal speed law that satisfies constraints on maximum velocity and tangential and normal acceleration. In the subsequent work~\cite{CabConLoc2018coap1}, we included a bound on the derivative of the acceleration with respect to the arc-length. In this paper, we consider the presence of jerk constraints (constraints on the time derivative of the acceleration). The resulting optimization problem is a non-convex one and, for this reason, is significantly more complex than the ones we discussed in~\cite{consolini2017scl} and~\cite{CabConLoc2018coap1}. The main contribution of this work is the development of a line-search algorithm for this problem based on the sequential solution of convex problems. The proposed algorithm meets both the requirement of being fast and the requirement of being reliable. The former is met by heavily exploiting the special structure of the optimization problem, the latter by the theoretical guarantee that the returned solution is a first-order stationary point (in practice, a minimizer) of the optimization problem. \subsection*{Problem Statement} \label{sec:velplan} Here we introduce more formally the problem at hand. Let $\boldsymbol{\gamma}: [0,s_f] \to \mathbb{R}^2$ be a smooth function. The image set $\boldsymbol{\gamma}([0,s_f])$ is the path to be followed, $\boldsymbol{\gamma}(0)$ the initial configuration, and $\boldsymbol{\gamma}(s_f)$ the final one. Function $\boldsymbol{\gamma}$ has arc-length parameterization, that is, is such that \mbox{$(\forall \lambda \in [0,s_f])$, $\|\boldsymbol{\gamma}'(\lambda)\| =1$}. In this way, $s_f$ is the length of the path. We want to compute the speed law that minimizes the overall transfer time (i.e., the time needed to go from $\boldsymbol{\gamma}(0)$ to $\boldsymbol{\gamma}(s_f)$). To this end, let $\lambda: [0,t_f] \to [0,s_f]$ be a differentiable monotone increasing function that represents the vehicle's curvilinear abscissa as a function of time, and let $v: [0,s_f]\to [0,+\infty[$ be such that $(\forall t \in [0,t_f])\ \dot \lambda(t)=v(\lambda(t))$. In this way, $v(s)$ is the derivative of the vehicle curvilinear abscissa, which corresponds to the norm of its velocity vector at position $s$. The position of the vehicle as a function of time is given by ${\bf x}:[0,t_f] \to \mathbb{R}^2, \ {\bf x}(t)=\boldsymbol{\gamma}(\lambda(t))$. The velocity and acceleration are given, respectively, by $$ \begin{array}{ll} \dot {\bf x}(t)=\boldsymbol{\gamma}'(\lambda(t)) v(\lambda(t)),\\ \ddot {\bf x}(t)=a_T(t) \boldsymbol{\gamma}'(\lambda(t))+ a_N(t) \boldsymbol{\gamma}'^{\perp} (\lambda(t)),\, \end{array} $$ where $a_T(t)= v'(\lambda(t)) v(\lambda(t))$, $a_N(t)=k(\lambda(t)) v(\lambda(t))^2$ are, respectively, the tangential and normal components of the acceleration (i.e., the projections of the acceleration vector $\ddot {\bf x}$ on the tangent and the normal to the curve). Moreover, $\boldsymbol{\gamma}'^{\perp}(\lambda)$ is the vector normal to $\boldsymbol{\gamma}'(\lambda)$, the tangent of $\boldsymbol{\gamma}$ at $\lambda$. Here $k:[0,s_f] \to \mathbb{R}$ is the scalar curvature, defined as \mbox{$k(s)=\scalar{\boldsymbol{\gamma}''(s)}{\boldsymbol{\gamma}'(s)^\perp}$}. Note that $|k(s)|=\|\boldsymbol{\gamma}''(s)\|$. In the following, we assume that $k(s) \in \mathcal{C}^{1}([0,s_f],\mathbb{R})$.
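For illustration purposes, these kinematic relations can also be checked numerically on a sampled path. The following sketch (in Python with NumPy; the function name, the array layout and the finite-difference scheme are illustrative choices and are not part of the formulation) approximates $k$, $a_T$ and $a_N$ from samples of $\boldsymbol{\gamma}$ and $v$:
\begin{verbatim}
import numpy as np

def acceleration_components(gamma, v, s):
    # gamma: (n, 2) samples of the arc-length parameterized path gamma(s_i)
    # v:     (n,)   samples of the speed profile v(s_i)
    # s:     (n,)   arc-length abscissas s_i (uniformly spaced)
    d_gamma = np.gradient(gamma, s, axis=0)     # approximates gamma'(s) (unit tangent)
    dd_gamma = np.gradient(d_gamma, s, axis=0)  # approximates gamma''(s)
    # scalar curvature k(s) = <gamma''(s), gamma'(s)^perp>, with gamma'^perp
    # obtained by rotating the tangent by 90 degrees
    tangent_perp = np.column_stack((-d_gamma[:, 1], d_gamma[:, 0]))
    k = np.sum(dd_gamma * tangent_perp, axis=1)
    a_T = np.gradient(v, s) * v                 # tangential acceleration v'(s) v(s)
    a_N = k * v ** 2                            # normal acceleration k(s) v(s)^2
    return k, a_T, a_N
\end{verbatim}
The sign of $k$ returned by this sketch depends on the orientation chosen for $\boldsymbol{\gamma}'^{\perp}$; only $|k|$ matters in the constraints introduced below.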
The total maneuver time, for a given velocity profile \mbox{$v\in C^1([0,s_f],\mathbb{R})$}, is returned by the functional \begin{equation} \label{obj_fun_pr} {\cal F}: C^1([0,s_f],\mathbb{R})\rightarrow \mathbb{R}, \ \ \ {\cal F}(v)=\int_0^{s_f} v^{-1}(s) d s. \end{equation} In our previous work~\cite{consolini2017scl}, we considered the problem \begin{equation} \label{eqn_problem_pr} \min_{v \in {\cal V}} {\cal F}(v), \end{equation} where the feasible region ${\cal V}\subset C^1([0,s_f],\mathbb{R})$ is defined by the following set of constraints \begin{subequations} \label{eqn_problem_constraints} \begin{align} v(0)=0,\,v(s_f)=0, \label{inter_con_pr}\\ 0\leq v(s) \leq v_{\max},\ \ s \in ]0,s_f[, \label{con_speed_pr}\\ |2 v'(s)v(s)| \leq A, \ \ s \in [0,s_f], \label{con_at_pr}\\ |k(s)| v(s)^2 \leq A_N, \ \ s \in [0,s_f], \label{con_an_pr} \end{align} \end{subequations} ($v_{\max}$, $A$, $A_N$ are upper bounds for the velocity, the tangential acceleration, and the normal acceleration, respectively). Constraints~\eqref{inter_con_pr} are the initial and final interpolation conditions, while constraints~\eqref{con_speed_pr},~\eqref{con_at_pr},~\eqref{con_an_pr} limit the velocity and the tangential and normal components of the acceleration. In~\cite{consolini2017scl} we presented an algorithm, with linear-time computational complexity with respect to the number of variables, that provides an optimal solution of~\eqref{eqn_problem_pr} after spatial discretization. One limitation of this algorithm is that the obtained velocity profile is Lipschitz\footnote{A function $f:\mathbb{R} \to \mathbb{R}$ is \emph{Lipschitz} if there exists a real positive constant $L$ such that $(\forall x,y \in \mathbb{R})\ |f(x)-f(y)| \leq L |x-y|$. } but not differentiable, so that the vehicle's acceleration is discontinuous. With the aim of obtaining a smoother velocity profile, in the subsequent work~\cite{CabConLoc2018coap1}, we required that the velocity be differentiable and we imposed a Lipschitz condition (with constant $J$) on its derivative. In this way, after setting $w=v^2$, the feasible region of the problem ${\cal W}\subset C^1([0,s_f],\mathbb{R})$ is defined by the set of functions $w \in C^1([0,s_f],\mathbb{R})$ that satisfy the following set of constraints \begin{subequations} \label{eqn_problem_cont_constraints} \begin{align} w(0)=0,\,w(s_f)=0, \label{inter_con_cont}\\ 0\leq w(s) \leq v_{\max}^2,\ \ s \in ]0,s_f[, \label{con_speed_cont}\\ |w'(s)| \leq A,\ \ s \in [0,s_f], \label{con_at_cont}\\ |k(s)| w(s) \leq A_N, \ \ s \in [0,s_f], \label{con_an_cont}\\ |w'(s_1)- w'(s_2)| \leq J |s_1-s_2|,\ \ s_1,s_2 \in [0,s_f]. \label{con_j_cont} \end{align} \end{subequations} We thus end up with the problem \begin{equation} \label{eqn_problem_cont} \min_{w\in {\cal W}} G(w), \end{equation} where the objective function is \begin{equation} \label{obj_fun_cont} G: C^1([0,s_f],\mathbb{R})\rightarrow \mathbb{R}, \ \ \ G(w)=\int_0^{s_f} w^{-1/2}(s) d s. \end{equation} The objective function~\eqref{obj_fun_cont} and constraints \eqref{inter_con_cont}-\eqref{con_an_cont} correspond to the ones in Problem~\eqref{eqn_problem_pr} after the substitution $w=v^2$. Note that this change of variable is well known in the literature. It was first proposed in \cite{Pfeiffer87}, while in \cite{verscheure09} it is observed that Problem \eqref{eqn_problem_pr} becomes convex after this change of variable.
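To make this last observation explicit, note that for every $w>0$ \begin{equation*} \frac{d^{2}}{d w^{2}}\, w^{-1/2}=\frac{3}{4}\, w^{-5/2}>0\,, \end{equation*} so that the integrand of~\eqref{obj_fun_cont} is convex in $w$, while constraints \eqref{inter_con_cont}--\eqref{con_an_cont}, the images of \eqref{inter_con_pr}--\eqref{con_an_pr} under the substitution, are linear in $w$.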
The added set of constraints~\eqref{con_j_cont} is a Lipschitz condition on the derivative of the squared velocity $w$. It is used to enforce a smoother velocity profile by bounding the second derivative of the squared velocity with respect to the arc-length. Note that constraints~\eqref{eqn_problem_cont_constraints} are linear and that the objective function~\eqref{obj_fun_cont} is convex. In~\cite{CabConLoc2018coap1}, we proposed an algorithm for solving a finite dimensional approximation of Problem~\eqref{eqn_problem_cont_constraints}. The algorithm exploited the particular structure of the resulting convex finite dimensional problem. This paper extends the results of~\cite{CabConLoc2018coap1}. It considers a non-convex variation of Problem~\eqref{eqn_problem_cont_constraints}, in which constraint~\eqref{con_j_cont} is substituted with a constraint on the time derivative of the acceleration $|\dot a(t)| \leq J$, where $a(t)=\frac{d}{d t} v(\lambda(t))=v' (\lambda(t)) v (\lambda(t))=\frac{1}{2} w'(\lambda(t))$. Then, we set $$j_L(t) = \dot{a}(t) = \frac{1}{2} w^{\prime\prime}(\lambda(t))\sqrt{w(\lambda(t))}.$$ We name this quantity ``jerk''. Note that $j_L(t)$ is not the third time derivative of the position $\boldsymbol{\gamma}(\lambda(t))$ but is related to it. We will clarify this in Section~\ref{sec_higher_dim}. We use the subscript $L$ for ``longitudinal''. Then, we end up with the following minimum-time problem: \begin{problem}[Smooth minimum-time velocity planning problem: continuous version] \label{cap4:prob:continuos} \begin{align} & \min_{w \in C^2} {\displaystyle\int_0^{s_f} w(s)^{-1/2} \, ds} \nonumber\\ &w(0)=0, \quad w(s_{f})=0 , \nonumber \\ &0 \leq w(s) \leq \mu^+(s), & s \in [0,s_{f}], \nonumber\\ &\frac{1}{2}\left|w^{\prime}(s)\right| \leq A, & s\in [0,s_{f}], \label{cap4:acc_bound_cont}\\ &\frac{1}{2}\left|w^{\prime\prime}(s) \sqrt{w(s)}\right| \le J, & s\in[0,s_f],\label{cap4:jerk_bound_cont} \end{align} \end{problem} where $\mu^{+}$ is the squared velocity upper bound depending on the shape of the path, i.e., \[ \mu^{+}(s) = \min \left\{v_{\max}^2,\frac{A_N}{|k(s)|}\right\}, \] with $v_{\max}$, $A_N$ and $k$ being the maximum allowed velocity of the vehicle, the maximum normal acceleration and the curvature of the path, respectively. Parameters $A$ and $J$ are the bounds representing the limitations on the (tangential) acceleration and the jerk, respectively. For the sake of simplicity we consider symmetric and constant bounds in constraints~\eqref{cap4:acc_bound_cont} and~\eqref{cap4:jerk_bound_cont}. However, the following development could easily be extended to the non-symmetric and non-constant case. Note that the jerk constraint~\eqref{cap4:jerk_bound_cont} is non-convex. The continuous problem is discretized as follows. We subdivide the path into $n-1$ intervals of equal length, i.e., we evaluate function $w$ at the points $$ s_i=\frac{(i-1) s_f}{n-1},\ \ \ i=1,\ldots,n, $$ so that we have the following $n$-dimensional vector of variables $$ {\bf w}=(w_1, w_2,\ldots,w_n)=\left(w(s_1), w(s_2), \ldots, w(s_n)\right).
$$ Then, the finite dimensional version of the problem is: \begin{problem} [Smooth minimum-time velocity planning problem: discretized version] \label{cap4:prob_disc} \begin{align} & \qquad \min_{{\mathbf w} \in \mathbb{R}^n} \sum_{i=1}^{n-1} \frac{2h}{\sqrt{w_{i+1}} + \sqrt{w_{i}}} \label{cap4:obj:disc}\\ &0 \leq {\mathbf w} \leq {\mathbf u}, \label{cap4:con:bound}\\ & w_{i+1} - w_i \leq 2hA, & i=1, \dots ,n-1,\label{cap4:con:acc}\\ &w_{i} - w_{i+1} \leq 2hA, & i=1, \dots ,n-1, \,\label{cap4:con:dec} \\ &(w_{i-1} - 2w_i + w_{i+1})\sqrt{\frac{w_{i+1} + w_{i-1}}{2}}\le 2h^2J,& i = 2,\dots,n-1, \label{cap4:con:par}\\ &-(w_{i-1} - 2w_i + w_{i+1})\sqrt{\frac{w_{i+1} + w_{i-1}}{2}}\le 2h^2J,& i = 2,\dots,n-1, \label{cap4:con:nar} \end{align} \end{problem} where $h=\frac{s_f}{n-1}$ is the discretization step, $u_i=\mu^+(s_i)$, for $i=1,\ldots,n$, and, in particular, $u_1 = 0$ and $u_n = 0$, since we are assuming that the initial and final velocities are equal to 0. The objective function (\ref{cap4:obj:disc}) is an approximation of~\eqref{obj_fun_cont} given by the Riemann sum over the subintervals obtained by dividing each interval $[s_i,s_{i+1}]$, for $i=1,\ldots,n-1$, into two subintervals of equal size. Constraints (\ref{cap4:con:acc}) and (\ref{cap4:con:dec}) are obtained by approximating $w^{\prime}$ with a finite difference. Constraints~\eqref{cap4:con:par} and~\eqref{cap4:con:nar} are obtained by using a second-order central finite difference to approximate $w^{\prime\prime}$, while $w$ is approximated by the arithmetic mean of $w_{i-1}$ and $w_{i+1}$. Due to the jerk constraints~\eqref{cap4:con:par} and~\eqref{cap4:con:nar}, Problem~\ref{cap4:prob_disc} is a non-convex one and cannot be solved with the algorithm presented in~\cite{CabConLoc2018coap1}. \newline\newline\noindent For the sake of illustration, Figure~\ref{fig:minaripathjerk} shows the speed profiles computed with and without the jerk constraints for the instance associated with the path shown in Figure~\ref{cap1:fig:geometry_path}. It is straightforward to observe that the jerk-bounded velocity profile is smoother than the one obtained without the jerk limitation. It is also interesting to remark that in the interval where the maximum allowed velocity is smallest (the one between 40 and 60 m), the optimal solution falls below the maximum speed profile at the beginning and at the end of the interval. Indeed, due to the jerk constraints, it is worthwhile to reduce the speed in these positions in order to have larger velocities in the two intervals immediately preceding and immediately following it, respectively. \begin{figure}[!h] \centering \includegraphics[width=0.99\linewidth]{minari_path_jerk} \caption{The red line represents the maximum allowed velocity along the path as a function of the arc length. The blue line is the optimal speed profile without jerk constraints. The orange line is the optimal speed profile with jerk constraints. \label{fig:minaripathjerk}} \end{figure} \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{geometry_example_path} \caption{Path with parameters $s_{f} = 90$ m, maximum allowed velocity $v_{\max}=15$ ms$^{-1}$, $A = 1.5$ ms$^{-2}$, $A_N = 1$ ms$^{-2}$, $J=1$ ms$^{-3}$.} \label{cap1:fig:geometry_path} \end{figure} \subsection*{Main result} The main contribution of this paper is the development of a new solution algorithm for finding a local minimum of the non-convex Problem~\ref{cap4:prob_disc}.
As will be detailed in the next sections, we propose to solve Problem~\ref{cap4:prob_disc} by a line-search algorithm based on the sequential solution of convex problems. The algorithm will be an iterative one where, at each iteration, the following operations will be performed. \newline\newline\noindent {\bf Constraint linearization:} We will first define a convex problem by linearizing constraints~\eqref{cap4:con:par} and~\eqref{cap4:con:nar} through a first-order Taylor approximation around the current point ${\mathbf w}^{(k)}$. Differently from other sequential algorithms for non-linear programming (NLP) problems, we will keep the original convex objective function. The linearized problem will be introduced in Section \ref{sec:linesearch}. \newline\newline\noindent {\bf Computation of a feasible descent direction:} The convex problem (actually, a relaxation of such a problem) is solved in order to compute a feasible descent direction $\delta {\mathbf w}^{(k)}$. The main contribution of the paper lies in this part. The computation requires the minimization of a suitably defined objective function through a further iterative algorithm. At each iteration of this algorithm the following operations are performed: \begin{itemize} \item {\bf Objective function evaluation:} Such an evaluation requires the solution of a problem with the same objective function but subject to a subset of the constraints. The special structure of the resulting subproblem is heavily exploited in order to solve it efficiently. This is the topic of Section \ref{cap4:sec:accnar}. \newline\newline\noindent \item {\bf Computation of a descent step:} Some Lagrange multipliers of the subproblem define a subgradient for the objective function. This can be employed to define a linear programming (LP) problem which returns a descent step for the objective function. This is the topic of Section~\ref{cap4:sec:PAR}. \end{itemize} $\ $\newline\newline\noindent {\bf Line-search:} Finally, a standard line-search along the half-line ${\mathbf w}^{(k)}+\alpha \delta {\mathbf w}^{(k)}$, $\alpha\geq 0$, is performed. \newline\newline\noindent Sections \ref{sec:linesearch}--\ref{cap4:sec:PAR} will detail what we have outlined above. In Section \ref{sec_higher_dim} we will briefly discuss the speed planning problem for a curve in a generic configuration space and show that, also in this case, a speed profile obtained by solving Problem~\ref{cap4:prob:continuos} allows us to bound the velocity, the acceleration and the jerk of the obtained trajectory. Finally, in Section \ref{sec:compexp} we will present several computational experiments. \subsection*{Comparison with existing literature} Although many works consider the problem of minimum-time speed planning with acceleration constraints (see, for instance, \cite{7515141,Velenis2008,frego2017}), relatively few consider jerk constraints. Perhaps this is also due to the fact that the jerk constraint is non-convex, so that its presence significantly increases the complexity of the optimization task. One can use a general-purpose NLP solver (such as SNOPT or IPOPT) for finding a local solution of Problem~\ref{cap4:prob_disc}, but the required time is in general too large for the speed planning application. As outlined in the previous subsection, in this work we tackle this problem through an approach based on the solution of a sequence of convex subproblems. There are different approaches in the literature based on the sequential solution of convex subproblems.
In~\cite{hauser2014fast} it is first observed that the problem with acceleration constraints but no jerk constraints for robotic manipulators can be reformulated as a convex one with linear constraints, and it is solved by a sequence of LP problems obtained by linearizing the objective function at the current point, i.e., the objective function is replaced by its supporting hyperplane at the current point, and by introducing a trust region centered at the current point. In \cite{pham2018new,CLMNV19} it is further observed that this problem can be solved very efficiently through the solution of a sequence of 2D LP problems. In \cite{LippBoyd2014} an interior point barrier method is used to solve the same problem based on Newton’s method. Each Newton step requires the solution of a KKT system and an efficient way to solve such systems is proposed in that work. Moving to approaches also dealing with jerk constraints, we mention \cite{debrouwere2013time}. In this work it is observed that jerk constraints are non-convex but can be written as the difference of two convex functions. Based on this observation, the authors solve the problem by a sequence of convex subproblems obtained by linearizing at the current point the concave part of the jerk constraints and by adding a proximal term in the objective function which plays the same role as a trust region, preventing from taking too large steps. In \cite{Singh15} a slightly different objective function is considered. Rather than minimizing the travelling time along the given path, the integral of the squared difference between the maximum velocity profile and the computed velocity profile is minimized. After representing time varying control inputs as products of parametric exponential and a polynomial functions, the authors reformulate the problem in such a way that its objective function is convex quadratic, while non-convexity lies in difference-of-convex functions. The resulting problem is tackled through the solution of a sequence of convex subproblems obtained by linearizing the concave part of the non-convex constraints. In \cite{Palleschi19} the problem of speed planning for robotic manipulators with jerk constraints is reformulated in such a way that non-convexity lies in simple bilinear terms. Such bilinear terms are replaced by the corresponding convex and concave envelopes, obtaining the so called McCormick relaxation, which is the tightest possible convex relaxation of the non-convex problem. Other approaches dealing with jerk constraints do not rely on the solution of convex subproblems. For instance, in \cite{MacFarlane03} a concatenation of fifth-order polynomials is employed to provide smooth trajectories, which results in quadratic jerk profiles, while in \cite{Haschke08} cubic polynomials are employed, resulting in piecewise constant jerk profiles. The decision process involves the choice of the phase durations, i.e., of the intervals over which a given polynomial applies. A very recent and interesting approach to the problem with jerk constraints is~\cite{pham2017structure}. In this work an approach based on numerical integration is discussed. Numerical integration has been first applied under acceleration constraints in \cite{Bobrow85,Shin85}. In~\cite{pham2017structure} jerk constraints are taken into account. The algorithm detects a position $s$ along the trajectory where the jerk constraint is singular, that is, the jerk term disappears from one of the constraints. 
Then, it computes the speed profile up to $s$ by computing two maximum jerk profiles and then connecting them by a minimum jerk profile, found by a shooting method. In general, the overall solution is composed of a sequence of various maximum and minimum jerk profiles. This approach does not guarantee reaching a local minimum of the traversal time. Moreover, since Problem~\ref{eqn_problem_cont_constraints} has velocity and acceleration constraints, the jerk constraint is singular for all values of $s$, so that the algorithm presented in~\cite{pham2017structure} cannot be directly applied to Problem~\ref{eqn_problem_cont_constraints}. Some algorithms use heuristics to quickly find suboptimal solutions of acceptable quality. For instance, \cite{Villagra-et-al2012} proposes an algorithm that applies to curves composed of clothoids, circles and straight lines. The algorithm does not guarantee local optimality of the solution. Reference~\cite{RaiCGL2019jerk} presents a very efficient heuristic algorithm. Also this method does not guarantee global nor local optimality. Various works in literature consider jerk bounds in the speed optimization problem for robotic manipulators instead of mobile vehicles. This is a slightly different problem, but mathematically equivalent to Problem~\eqref{cap4:prob:continuos}. In particular, paper~\cite{DONG20071941} presents a method based on the solution of a large number of non-linear and non-convex subproblems. The resulting algorithm is slow, due to the large number of subproblems; moreover, the authors do not prove its convergence. Reference~\cite{ZHANG2012472} proposes a similar method that gives a continuous-time solution. Again, the method is computationally slow, since it is based on the numerical solution of a large number of differential equations; moreover, the paper does not contain a proof of convergence or of local optimality. Some other works replace the jerk constraint with \emph{pseudo-jerk}, that is the derivative of the acceleration with respect to arc-length, obtaining a constraint analogous to~\eqref{con_j_cont} and ending up with a convex optimization problem. For instance,~\cite{8569414} adds to the objective function a pseudo-jerk penalizing term. This approach is computationally convenient but substituting~\eqref{cap4:jerk_bound_cont} with~\eqref{con_j_cont} may be overly restrictive at low speeds. \subsection*{Statement of contribution} The method presented in this paper is a sequential convex one which aims at finding a local optimizer of Problem~\ref{cap4:prob_disc}. To be more precise, as usual with non-convex problems, only convergence to a stationary point can usually be proved. However, the fact that the sequence of generated feasible points is decreasing with respect to the objective function values usually guarantees that the stationary point is a local minimizer, except in rather pathological cases (see, e.g., \cite[Page 19]{Fletcher:00}). To our knowledge and as detailed in the following, this algorithm is more efficient than the ones existing in literature, since it leverages the special structure of the subproblems obtained as local approximations of Problem~\ref{cap4:prob_disc}. We discussed this class of problems in our previous work~\cite{ConLocLAu19Graph}. This structure allows computing very efficiently a feasible descent direction for the main line-search algorithm; it is one of the key elements that allows us to outperform generic NLP solvers. 
\section{A sequential algorithm based on constraint~linearization} \label{sec:linesearch} To account for the non-convexity of Problem~\ref{cap4:prob_disc}, we propose a line-search method based on the solution of a sequence of specially structured convex problems. Throughout the paper we will call this Algorithm SCA (Sequential Convex Algorithm); its flow chart is shown in Figure~\ref{fig:flowlinesearch}. It belongs to the class of Sequential Convex Programming algorithms, where at each iteration a convex subproblem is solved. In what follows we will denote by $\Omega$ the feasible region of Problem~\ref{cap4:prob_disc}. At each iteration $k$, we replace the current point ${\mathbf w}^{(k)}\in\Omega$ with a new point ${\mathbf w}^{(k)} + \alpha^{(k)}\delta wb^{(k)} \in\Omega$, where the step-size $\alpha^{(k)}\in[0,1]$ is obtained by a \emph{line search} along the descent direction $\delta wb^{(k)}$, which, in turn, is obtained through the solution of a convex problem. The constraints of the convex problem are linear approximations of \eqref{cap4:con:bound}-\eqref{cap4:con:nar} around ${\mathbf w}^{(k)}$, while the objective function is the original one. \begin{figure}[!h] \centering \includegraphics[width=\linewidth]{flowlinesearch.eps} \caption{Flow chart of Algorithm SCA. The dashed block corresponds to a call of the procedure \texttt{ComputeUpdate}, proposed to solve Problem~\ref{cap4:prob_lin}, which represents the main contribution of this paper. \label{fig:flowlinesearch}} \end{figure} Then, the problem we consider to compute the direction $\delta wb^{(k)}$ is the following (the superscript $k$ of ${\mathbf w}^{(k)}$ is omitted): \begin{problem} \label{cap4:prob_lin} \begin{align} & \qquad \min_{\delta wb \in \mathbb Real^n} \sum_{i=1}^{n-1} \frac{2h}{\sqrt{w_{i+1} +\delta w_{i+1}} + \sqrt{w_{i}+\delta w_{i}}} \label{cap4:eq:obj_lin}\\ & \mathbf{l_B} \leq \delta wb \leq \mathbf{u_B},\label{cap4:eq:bound_lin} \\ & \delta w_{i+1}-\delta w_{i} \le b_{\text{A}}i, & i=1,\dots,n-1,\label{cap4:eq:acc_lin}\\ &\delta w_{i}-\delta w_{i+1} \le \mathbf{d}di, & i=1,\dots,n-1,\label{cap4:eq:dec_lin}\\ &\delta w_{i} - \eta_i \delta w_{i-1} - \eta_i \delta w_{i+1} \le b_{\text{N}}i ,& i = 2,\dots,n-1,\label{cap4:eq:nar_lin}\\ & \eta_i \delta w_{i-1} + \eta_i \delta w_{i+1} -\delta w_{i} \le b_{\text{P}}i ,& i = 2,\dots,n-1,\label{cap4:eq:par_lin} \end{align} \end{problem} where $\mathbf{l_B} = -{\mathbf w}$ and $\mathbf{u_B} = {\mathbf u} - {\mathbf w}$ (recall that ${\mathbf u}$ has been introduced in (\ref{cap4:con:bound}) and its components have been defined immediately below Problem~\ref{cap4:prob_disc}), while the parameters $\boldsymbol{\eta}$, $\mathbf{b_{\text{A}}}$, $\mathbf{\mathbf{d}d}$, $\mathbf{b_{\text{N}}}$ and $\mathbf{b_{\text{P}}}$ depend on the point ${\mathbf w}$ around which the constraints~\eqref{cap4:con:bound}-\eqref{cap4:con:nar} are linearized. More precisely, we have: \begin{equation} \label{eq:paramdef} \begin{array}{l} b_{\text{A}}i = 2hA - w_{i+1} + w_{i} \\ [4pt] {\mathbf{d}di} = 2hA - w_{i} + w_{i+1} \\ [4pt] \eta_i = \frac{3(w_{i+1} + w_{i-1}) -2w_i}{4(w_{i+1}+w_{i-1})} \\ [4pt] b_{\text{P}}i = \frac{2\sqrt{2}h^2J - (w_{i-1} - 2w_i + w_{i+1})\sqrt{w_{i+1} + w_{i-1}}}{2\sqrt{w_{i+1}+w_{i-1}}} \\ [4pt] b_{\text{N}}i= \frac{2\sqrt{2}h^2J + (w_{i-1} - 2w_i + w_{i+1})\sqrt{w_{i+1} + w_{i-1}}}{2\sqrt{w_{i+1}+w_{i-1}}}. \end{array} \end{equation} These parameters are proved to be nonnegative in the proposition below.
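For illustration purposes only, the coefficients in (\ref{eq:paramdef}) can be evaluated from the current iterate with a few vectorized operations; a minimal Python sketch (our own naming, assuming $w_{i-1}+w_{i+1}>0$ for all $i$; the degenerate case is discussed later in this section) is the following.
\begin{verbatim}
import numpy as np

def linearization_parameters(w, h, A, J):
    # Coefficients of Problem (cap4:prob_lin) as in (eq:paramdef), computed from
    # the current iterate w; b_A, b_D correspond to i = 1,...,n-1 and
    # eta, b_P, b_N to i = 2,...,n-1 (0-based slicing below).
    b_A = 2.0 * h * A - w[1:] + w[:-1]
    b_D = 2.0 * h * A - w[:-1] + w[1:]
    s   = w[2:] + w[:-2]                         # w_{i+1} + w_{i-1}, assumed > 0
    eta = (3.0 * s - 2.0 * w[1:-1]) / (4.0 * s)
    d2  = w[:-2] - 2.0 * w[1:-1] + w[2:]         # w_{i-1} - 2 w_i + w_{i+1}
    b_P = (2.0 * np.sqrt(2.0) * h**2 * J - d2 * np.sqrt(s)) / (2.0 * np.sqrt(s))
    b_N = (2.0 * np.sqrt(2.0) * h**2 * J + d2 * np.sqrt(s)) / (2.0 * np.sqrt(s))
    return b_A, b_D, eta, b_P, b_N
\end{verbatim}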
\mathbf{1}gin{prop}\label{cap4:prop:positivity} All parameters $\boldsymbol{\eta}$, $\mathbf{b_{\text{A}}}$ $\mathbf{\mathbf{d}d}$, $\mathbf{b_{\text{N}}}$ and $\mathbf{b_{\text{P}}}$ are non negative for $h \rightarrow 0$. \end{prop} \mathbf{1}gin{proof} We have $b_{\text{A}}i = 2hA - w_{i+1} + w_{i} \ge 0 $ because of the feasibility of ${\mathbf w}$. Analogously we can prove that ${\mathbf{d}di} = 2hA - w_{i} + w_{i+1} \ge 0 $. Next, by continuity of $w$ we have that, for $h\to0$, $\eta_i \to \frac{1}{2}$, while by feasibility of ${\mathbf w}$ we have $b_{\text{P}}i ,b_{\text{N}}i \ge 0$. \end{proof} The proposed approach follows some standard ideas of sequential quadratic approaches employed in the literature about non-linearly constrained problems. But a quite relevant difference is that the true objective function~\eqref{cap4:obj:disc} is employed in the problem to compute the direction, rather than a quadratic approximation of such function. This choice comes from the fact that the objective function~\eqref{cap4:obj:disc} has some features (in particular, convexity and being decreasing), which, combined with the structure of the linearized constraints, allow for an efficient solution of Problem~\ref{cap4:prob_lin}. Problem~\ref{cap4:prob_lin} is a convex problem with a non-empty feasible region ($\delta wb = \mathbf{0}$ is always a feasible solution) and, consequently, can be solved by existing NLP solvers. However, such solvers tend to increase computing times since they need to be called many times within the iterative Algorithm SCA. The main contribution of this paper lies in the routine~\texttt{computeUpdate} (see dashed block in Figure~\ref{fig:flowlinesearch}), which is able to solve Problem~\ref{cap4:prob_lin} and efficiently returns a descent direction $\delta wb^{(k)}$. To be more precise, we will solve a \emph{relaxation} of Problem~\ref{cap4:prob_lin}. Such relaxation as well as the routine to solve it, will be detailed in Sections~\ref{cap4:sec:accnar} and \ref{cap4:sec:PAR}. In Section~\ref{cap4:sec:accnar} we present efficient approaches to solve some subproblems including proper subsets of the constraints. Then, in Section~\ref{cap4:sec:PAR} we address the solution of the relaxation of Problem~\ref{cap4:prob_lin}. \mathbf{1}gin{remark}\label{cap4:rem:step-feasible} It is possible to see that if one of the constraints~\eqref{cap4:con:par}-\eqref{cap4:con:nar} is active at ${\mathbf w}^{(k)}$, then along the direction $\delta wb^{(k)}$ computed through the solution of the linearized Problem \ref{cap4:prob_lin}, it holds that ${\mathbf w}^{(k)} + \alpha \delta wb^{(k)} \in\Omega \ $ for any sufficiently small $\alpha>0$. In other words, small perturbations of the current solution ${\mathbf w}^{(k)}$ along direction $\delta wb^{(k)}$ do not lead outside the feasible region $\Omega$. This fact is illustrated in Figure~\ref{fig:linearization}. Let us rewrite constraints~\eqref{cap4:con:par}-\eqref{cap4:con:nar} as follows: \mathbf{1}gin{equation}\label{cap4:eq:2dconstr} |(x-2y)\sqrt{x}|\le C, \end{equation} where $x = w_{i+1} + w_{i-1}$, $y = w_i$ and $C=2\sqrt{2}h^2 J$ is a constant. The feasible region associated to constraint~\eqref{cap4:eq:2dconstr} is reported in Figure~\ref{fig:linearization}. In particular, it is the region between the blue and the red curves. \mathbf{1}gin{figure}[!h] \centering \includegraphics[width=0.99\linewidth]{linearization} \caption{Constraints~\eqref{cap4:con:par}-\eqref{cap4:con:nar} and their linearization ($C=2\sqrt{2}h^2 J$). 
\label{fig:linearization}} \end{figure} Suppose that constraint $y\le \frac{x}{2} + \frac{C}{2\sqrt{x}}$ is active at ${\mathbf w}^{(k)}$ (the case when $y\ge \frac{x}{2} - \frac{C}{2\sqrt{x}}$ is active can be dealt with in a completely analogous way). If we linearize such constraint around ${\mathbf w}^{(k)}$, then we obtain a linear constraint (black line in Figure~\ref{fig:linearization}) which defines a region completely contained into the one defined by the non-linear constraint $y\le \frac{x}{2} + \frac{C}{2\sqrt{x}}$. Hence, for each direction~$\delta wb^{(k)}$ feasible with respect to the linearized constraint, we are always able to perform sufficiently small steps, without violating the original non-linear constraints, i.e., for $\alpha>0$ small enough, it holds that ${\mathbf w}^{(k)} + \alpha \delta wb^{(k)} \in\Omega$. \end{remark} A special case occurs when $w_{i-1}+w_{i+1}=0$ holds for some $i\in \{2,\ldots,n-1\}$. In that case, coefficients $\eta_i, b_{P_i}, b_{N_i}$ are not even defined. In fact, in this case we can omit the linearized constraints (\ref{cap4:eq:nar_lin})-(\ref{cap4:eq:par_lin}). Indeed, the corresponding non-linear constraints \eqref{cap4:con:par}-\eqref{cap4:con:nar} are not active at the current solution and, thus, along the computed direction a step with strictly positive length can always be taken without violating them. For all feasible solutions ${\mathbf w}$ such that this special case does not occur, i.e., such that $w_{i-1}+w_{i+1}>0$, $i=2,\ldots,n-1$, constraints \eqref{cap4:con:par} and \eqref{cap4:con:nar} can be rewritten as follows \mathbf{1}gin{eqnarray} w_{i-1} + w_{i+1}-2 w_i -2 \sqrt{2} h^2 J (w_{i+1} + w_{i-1})^{-\frac{1}{2}}\leq 0& \label{cap4:con:par1}\\ [6pt] 2 w_i-w_{i-1} - w_{i+1}-2 \sqrt{2} h^2 J (w_{i+1} + w_{i-1})^{-\frac{1}{2}}\leq 0. & \label{cap4:con:nar1} \end{eqnarray} Note that the functions on the left-hand side of these constraints are concave. Now we can define a variant of Problem \ref{cap4:prob_lin} where constraints \eqref{cap4:eq:nar_lin} and~\eqref{cap4:eq:par_lin} are replaced by the following linearizations of constraints \eqref{cap4:con:par1} and \eqref{cap4:con:nar1} \mathbf{1}gin{eqnarray} -\mathbf{1}ta_i \delta w_{i-1} - \mathbf{1}ta_i \delta w_{i+1} +\delta w_{i} \le b_{\text{N}}i & \label{cap4:con:linnar1} \\ [6pt] \theta_i \delta w_{i-1} + \theta_i \delta w_{i+1} -\delta w_{i} \le b_{\text{P}}i, & \label{cap4:con:linpar1} \end{eqnarray} where $b_{\text{P}}i$ and $b_{\text{N}}i$ are the same as in \eqref{cap4:eq:nar_lin} and~\eqref{cap4:eq:par_lin} (see (\ref{eq:paramdef})), while \mathbf{1}gin{equation} \label{eq:newcoeff} \mathbf{1}gin{array}{l} \theta_i=\frac{1}{2}+\frac{\sqrt{2}h^2 J}{2} (w_{i+1} + w_{i-1})^{-\frac{3}{2}} \\ [6pt] \mathbf{1}ta_i=\frac{1}{2}-\frac{\sqrt{2}h^2 J}{2} (w_{i+1} + w_{i-1})^{-\frac{3}{2}}. \end{array} \end{equation} The following proposition states that constraints \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1} are tighter than constraints \eqref{cap4:eq:nar_lin}-\eqref{cap4:eq:par_lin}. \mathbf{1}gin{prop} \label{prop:tighter} For all $i=2,\ldots,n-1$, it holds that $\mathbf{1}ta_i\leq \eta_i\leq \theta_i$. Equality $\eta_i=\theta_i$ holds if the corresponding non-linear constraint \eqref{cap4:con:par1} is active at the current point ${\mathbf w}$. Similarly, $\eta_i=\mathbf{1}ta_i$ holds if the corresponding non-linear constraint \eqref{cap4:con:nar1} is active at the current point ${\mathbf w}$. 
\end{prop} \mathbf{1}gin{proof} We only prove the results about $\theta_i$ and $\eta_i$. Those about $\mathbf{1}ta_i$ and $\eta_i$ are proved in a completely analogous way. By definition of $\eta_i$ and $\theta_i$, we need to prove that $$ \frac{3(w_{i+1} + w_{i-1}) -2w_i}{4(w_{i+1}+w_{i-1})}\leq \frac{1}{2}+\frac{\sqrt{2}h^2 J}{2} (w_{i+1} + w_{i-1})^{-\frac{3}{2}}. $$ After few simple computations, this inequality can be rewritten as $$ w_{i+1} + w_{i-1}-2w_i -2\sqrt{2}h^2 J(w_{i+1} + w_{i-1})^{-\frac{1}{2}}\leq 0, $$ which holds in view of feasibility of ${\mathbf w}$ and, moreover, holds as an equality if constraint \eqref{cap4:con:par1} is active at the current point ${\mathbf w}$, as we wanted to prove. \end{proof} In view of this result, by replacing constraints \eqref{cap4:eq:nar_lin}-\eqref{cap4:eq:par_lin} with \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1}, we reduce the search space of the new displacement $\delta wb$. On the other hand, the following proposition states that with constraints \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1} no line search is needed along the direction $\delta wb$, i.e., we can always choose the step length $\alpha=1$. \mathbf{1}gin{prop} If constraints \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1} are employed as a replacement of constraints \eqref{cap4:eq:nar_lin}-\eqref{cap4:eq:par_lin} in the definition of Problem \ref{cap4:prob_lin}, then for each feasible solution $\delta wb$ of this problem it holds that ${\mathbf w}+\delta wb\in \Omega$. \end{prop} \mathbf{1}gin{proof} For the sake of convenience, let us rewrite Problem \ref{cap4:prob_disc} in the following more compact form \mathbf{1}gin{equation} \label{eq:origcompact} \mathbf{1}gin{array}{ll} \min & f({\bf w}+\delta wb) \\ [6pt] & {\bf c}({\bf w}+\delta wb)\leq 0, \end{array} \end{equation} where the vector function ${\bf c}$ contains all constraints of Problem \ref{cap4:prob_disc} and the non-linear ones are given as in \eqref{cap4:con:par1}-\eqref{cap4:con:nar1} (recall that in that case vector ${\bf c}$ is a vector of concave functions). Then, Problem \ref{cap4:prob_lin} can be written as follows \mathbf{1}gin{equation} \label{eq:lincompact} \mathbf{1}gin{array}{ll} \min & f({\bf w}+\delta {\bf w}) \\ [6pt] & {\bf c}({\bf w}) +\nabla {\bf c}({\bf w}) \delta {\bf w} \leq 0. \end{array} \end{equation} Now, it is enough to observe that, by concavity $$ {\bf c}({\bf w}+\delta wb)\leq {\bf c}({\bf w}) +\nabla {\bf c}({\bf w}) \delta {\bf w}, $$ so that each feasible solution of \eqref{eq:lincompact} is also feasible for \eqref{eq:origcompact}. \end{proof} The above proposition states that the feasible region of Problem \ref{cap4:prob_lin}, when constraints \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1} are employed in its definition, is a subset of the feasible region $\Omega$ of the original Problem \ref{cap4:prob_disc}. As a final result of this section, we state the following theorem, which establishes convergence of Algorithm SCA to a stationary (KKT) point of Problem \ref{cap4:prob_disc}, if it runs for an infinite number of iterations and if constraints \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1} are always employed after a finite number of iterations in the definition of Problem \ref{cap4:prob_lin}. 
\mathbf{1}gin{thm} \label{thm:convergence} If Algorithm SCA is run for an infinite number of iterations and there exists some positive integer value $K$ such that for all iterations $k\geq K$, constraints \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1} are always employed in the definition of Problem \ref{cap4:prob_lin}, then the sequence of points $\{{\bf w}^{(k)}\}$ generated by the algorithm converges to a KKT point of Problem \ref{cap4:prob_disc}. \end{thm} \mathbf{1}gin{proof} See Appendix \ref{sec:appconvergence}. \end{proof} \mathbf{1}gin{remark} In Algorithm SCA at each iteration we solve to optimality Problem \ref{cap4:prob_lin}. This is indeed necessary in the final iterations to prove the convergence result stated in Theorem \ref{thm:convergence}. However, during the first iterations it is not necessary to solve the problem to optimality: finding a feasible descent direction is enough. This does not alter the theoretical properties of the algorithm and allows to reduce the computing times. \end{remark} In the rest of the paper we will refer to constraints~\eqref{cap4:eq:acc_lin}-\eqref{cap4:eq:dec_lin} as acceleration constraints, while constraints \eqref{cap4:eq:nar_lin}-\eqref{cap4:eq:par_lin} (or \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1}) will be called (linearized) Negative Acceleration Rate (NAR) and Positive Acceleration Rate (PAR) constraints, respectively. Also note that in the different subproblems discussed in the following sections we will always refer to the linearization with constraints \eqref{cap4:eq:nar_lin}-\eqref{cap4:eq:par_lin} and, thus, with parameters $\eta_i$, but the same results also hold for the linearization with constraints \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1} and, thus, with parameters $\theta_i$ and $\mathbf{1}ta_i$. \section{The subproblem with acceleration and NAR constraints}\label{cap4:sec:accnar} In this section we will propose an efficient method to solve Problem~\ref{cap4:prob_lin} when PAR constraints are removed. The solution of this subproblem will become part of an approach to solve a suitable relaxation of Problem~\ref{cap4:prob_lin} and, in fact, under very mild assumptions, to solve Problem~\ref{cap4:prob_lin} itself. This will be clarified in Section \ref{cap4:sec:PAR}. We will discuss: (i) the subproblem including only (\ref{cap4:eq:bound_lin}) and the acceleration constraints~\eqref{cap4:eq:acc_lin} and \eqref{cap4:eq:dec_lin}; (ii) the subproblem including only (\ref{cap4:eq:bound_lin}) and the NAR constraints~\eqref{cap4:eq:nar_lin}; (iii) the subproblem including all constraints~\eqref{cap4:eq:bound_lin}-\eqref{cap4:eq:nar_lin}. Throughout the section we will need the results stated in the following two propositions. Let us consider problems with the following form, where $N=\{1,\ldots,n\}$ and $M_j=\{1,\ldots,m_j\}$, $j\in N$, \mathbf{1}gin{equation} \label{eq:specstruct} \mathbf{1}gin{array}{lll} \min & g(x_1,\ldots,x_n) & \\ & x_j\leq a_{i,j} x_{j-1} + b_{i,j} x_{j+1} + c_{i,j} & i\in M_j,\ \ \ j\in N \\ & \ell_j\leq x_j \leq u_j & j\in N \end{array} \end{equation} with \mathbf{1}gin{itemize} \item $g$ a monotonic decreasing function; \item $a_{ij}, b_{ij}, c_{ij}\geq 0$, for $ i\in M_j$ and $j\in N$; \item $a_{i1}=0$ for $i\in M_1$; \item $b_{in}=0$ for $ i\in M_n$. \end{itemize} The following result is proved in~\cite{ConLocLAu19Graph}. Here we report the proof in order to make the paper self-contained. We denote by $P$ the feasible polytope of problem (\ref{eq:specstruct}) . 
Moreover, we denote by ${\bf z}$ the component-wise maximum of all feasible solutions in $P$, i.e., for each $j\in N$: $$ z_j=\max_{{\bf x}\in P} x_j $$ (note that the above maximum value is attained since $P$ is a polytope). \mathbf{1}gin{prop} \label{prop:specstruct} The unique optimal solution of (\ref{eq:specstruct}) is the component-wise maximum ${\bf z}$ of all its feasible solutions. \end{prop} \mathbf{1}gin{proof} If we are able to prove that the component-wise maximum ${\bf z}$ of all feasible solutions is itself a feasible solution, by monotonicity of $g$, it must also be the unique optimal solution. In order to prove that ${\bf z}$ is feasible, we proceed as follows. For $j\in N$, let ${\bf x}^{*j}$ be the optimal solution of $\max_{{\bf x}\in P} x_j$, so that $z_j=x_j^{*j}$. Since ${\bf x}^{*j}\in P$, then it must hold that $\ell_j\leq z_j\leq u_j$. Moreover, let us consider the generic constraint $$ x_j\leq a_{i,j} x_{j-1} + b_{i,j} x_{j+1} + c_{i,j} , $$ for $i\in M_j$. It holds that $$ \mathbf{1}gin{array}{ll} z_j & =x_j^{*j}\leq a_{i,j} x_{j-1}^{*j} + b_{i,j} x_{j+1}^{*j} + c_{i,j} \leq \\ &\leq a_{i,j} z_{j-1} + b_{i,j} z_{j+1}+ c_{i,j}, \end{array} $$ where the first inequality follows from feasibility of ${\bf x}^{*j}$, while the second follows from nonnegativity of $a_{ij}$ and $b_{ij}$ and the definition of ${\bf z}$. Since this holds for all $j\in N$, the result is proved. \end{proof} Now, consider the problem obtained from (\ref{eq:specstruct}) by removing some constraints, i.e., by taking $M'_j\subseteq M_j$ for each $j\in N$: \mathbf{1}gin{equation} \label{eq:specstruct1} \mathbf{1}gin{array}{lll} \min & g(x_1,\ldots,x_n) & \\ & x_j\leq a_{i,j} x_{j-1} + b_{i,j} x_{j+1} + c_{i,j} & i\in M'_j,\ \ \ j\in N \\ & \ell_j\leq x_j \leq u_j & j\in N, \end{array} \end{equation} Later on we will also need the result stated in the following proposition. \mathbf{1}gin{prop} \label{prop:specstruct1} The optimal solution $\boldsymbol{a}r{{\bf x}}^\star$ of problem (\ref{eq:specstruct1}) is an upper bound for the optimal solution ${\bf x}^\star$ of problem (\ref{eq:specstruct}), i.e., $\boldsymbol{a}r{{\bf x}}^\star\geq {\bf x}^\star$. \end{prop} \mathbf{1}gin{proof} It holds that ${\bf x}^\star$ is a feasible solution of problem (\ref{eq:specstruct1}), so that, in view of Proposition \ref{prop:specstruct}, $\boldsymbol{a}r{{\bf x}}^\star\geq {\bf x}^\star$ holds. \end{proof} \subsection{Acceleration constraints} The simplest case is the one where we only consider the acceleration constraints ~\eqref{cap4:eq:acc_lin} and~\eqref{cap4:eq:dec_lin}, besides constraints~\eqref{cap4:eq:bound_lin} with a generic upper bound vector ${\bf y}\ge {\bf 0}$. The problem to be solved is: \mathbf{1}gin{problem} \label{cap4:prob_lin_acc} \mathbf{1}gin{align*} \min_{\delta wb \in \mathbb Real^n}& \sum_{i=1}^{n-1} \frac{2h}{\sqrt{w_{i+1} +\delta w_{i+1}} + \sqrt{w_{i}+\delta w_{i}}}\\ & \mathbf{l_B} \leq \delta wb \leq \mathbf{y}, \\ & \delta w_{i+1}-\delta w_{i} \le b_{\text{A}}i, & i=1,\dots,n-1,\\ &\delta w_{i}-\delta w_{i+1} \le \mathbf{d}di, & i=1,\dots n-1. \end{align*} \end{problem} It can be seen that such problem belongs to the class of problems (\ref{eq:specstruct}). Therefore, in view of Proposition \ref{prop:specstruct}, the optimal solution of Problem \ref{cap4:prob_lin_acc} is the component-wise maximum of its feasible region. 
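This component-wise maximum can be computed by a forward sweep that enforces the acceleration constraints~\eqref{cap4:eq:acc_lin}, followed by a backward sweep that enforces the deceleration constraints~\eqref{cap4:eq:dec_lin}. A minimal Python sketch of such a sweep (our own transcription, with 0-based arrays and the bounds of~(\ref{eq:paramdef}); the formal procedure is Algorithm~\ref{cap4:alg:acc} below) is the following.
\begin{verbatim}
import numpy as np

def solve_acc(y, b_A, b_D):
    # Component-wise maximum of the feasible region of Problem (cap4:prob_lin_acc):
    # the forward sweep enforces dw[i+1] - dw[i] <= b_A[i], the backward sweep
    # enforces dw[i] - dw[i+1] <= b_D[i]; the upper bound y (with y[0] = y[-1] = 0)
    # is imposed in the forward sweep and never exceeded afterwards.
    n = len(y)
    dw = np.zeros(n)
    for i in range(n - 1):                        # forward sweep
        dw[i + 1] = min(dw[i] + b_A[i], y[i + 1])
    for i in range(n - 2, -1, -1):                # backward sweep
        dw[i] = min(dw[i], dw[i + 1] + b_D[i])
    return dw
\end{verbatim}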
Moreover, in~\cite{consolini2017scl} it has been proved that Algorithm~\ref{cap4:alg:acc}, based on a forward and a backward iteration and with $O(n)$ computational complexity, returns an optimal solution of Problem \ref{cap4:prob_lin_acc}. \begin{algorithm} \caption{Routine \texttt{SolveAcc} for the solution of the problem with acceleration constraints\label{cap4:alg:acc}} \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{Upper bound $\mathbf{y}$ } \Output{$\delta wb$} $\delta w_1 = 0$, $\delta w_n = 0$ \; \For{$i=1$ \KwTo $n-1$}{ $\delta w_{i+1} = \min \begin{Bmatrix}{\delta w_{i} + b_{\text{A}}i}, y_{i+1}\end{Bmatrix}$ } \For{$i=n-1$ \KwTo $1$}{ $\delta w_{i} = \min \begin{Bmatrix}{\delta w_{i+1} + \mathbf{d}di}, \delta w_{i}\end{Bmatrix}$ } \Return $\delta wb$ \end{algorithm} \subsection{NAR constraints} Now, we consider the problem only including NAR constraints~\eqref{cap4:eq:nar_lin} and constraints~\eqref{cap4:eq:bound_lin} with upper bound vector ${\bf y}$: \begin{problem}\label{cap4:prob:nar} \begin{align} & \qquad \min_{\delta wb \in \mathbb Real^n} \sum_{i=1}^{n-1} \frac{2h}{\sqrt{w_{i+1} +\delta w_{i+1}} + \sqrt{w_{i}+\delta w_{i}}} \nonumber \\ & \mathbf{0} \leq \delta wb \leq \mathbf{y},& \label{cap4:con:nar_bound} \\ &\delta w_{i} \le \eta_i (\delta w_{i-1} + \delta w_{i+1}) + b_{\text{N}}i , &i = 2,\dots,n-1,\label{cap4:con:nar_nar} \end{align} \end{problem} where $y_1=y_n=0$ because of the boundary conditions. This problem also belongs to the class of problems (\ref{eq:specstruct}), so that Proposition \ref{prop:specstruct} states that its optimal solution is the component-wise maximum of its feasible region. Problem \ref{cap4:prob:nar} can be solved by using the graph-based approach presented in~\cite{CabConLoc2018coap1,ConLocLAu19Graph}. However, reference~\cite{CabConLoc2018coap1} shows that, by exploiting the structure of a simpler version of the NAR constraints, it is possible to develop an algorithm more efficient than the graph-based one. Our purpose is to extend the results presented in reference~\cite{CabConLoc2018coap1} to a case with different and more challenging NAR constraints, in order to develop an efficient algorithm outperforming the graph-based one. \newline\newline\noindent Now, let us consider the restriction of Problem \ref{cap4:prob:nar} between two generic indices $s,t$ such that $1\leq s<t\leq n$, obtained by fixing $\delta w_s = y_s$ and $\delta w_t = y_t$ and by considering only the NAR and upper bound constraints at $s+1,\ldots,t-1$. Let $\delta wb^*$ be the optimal solution of the restriction. We first prove the following lemma. \begin{lem}\label{cap4:rem:max} The optimal solution $\delta wb^*$ of the restriction of Problem \ref{cap4:prob:nar} between two indices $s,t$, $1\leq s<t\leq n$, is such that for each $j\in \{s+1,\ldots,t-1\}$, either $\delta w_j^* \le y_j$ or $\delta w_j^* \le \eta_j(\delta w^*_{j+1} + \delta w^*_{j-1}) + b_{N_j}$ holds as an equality. \end{lem} \begin{proof} It is enough to observe that, in case both inequalities were strict for some $j$, then, in view of the monotonicity of the objective function, we could decrease the objective function value by increasing the value of $\delta w^*_j$, thus contradicting optimality of $\delta wb^*$.
\end{proof} Note that the above result also applies to the full Problem \ref{cap4:prob:nar}, which corresponds to the special case $s=1$, $t=n$ with $y_1=y_n=0$. In view of Lemma~\ref{cap4:rem:max} we have that there exists an index $j$, with $s< j \le t$, such that: (i) $\delta w^*_j = y_j$; (ii) the upper bound constraint is not active at $s+1,\ldots,j-1$; (iii) all NAR constraints $s+1,\dots,j-1$ are active. Then, $j$ is the lowest index in $\{s+1,\ldots, t-1\}$ where the upper bound constraint is active. If index $j$ were known, then the following observation would allow us to obtain the components of the optimal solution between $s$ and $j$. Let us first introduce the following definitions of matrix ${\bf A}$ and vector ${\bf q}$: \begin{equation} \label{eq:defA} {\bf A}= \begin{bmatrix} 1 & -\eta_{s+1} & 0 & \cdots & 0 \\ -\eta_{s+2} & 1 &-\eta_{s+2}& \ddots& \vdots\\ 0 & \ddots & \ddots& \ddots & 0 \\ \vdots& \ddots & -\eta_{j-2} & 1 & -\eta_{j-2} \\ 0 & \cdots& 0 & -\eta_{j-1} & 1 \end{bmatrix}\,, \end{equation} \begin{equation} \label{eq:defq} {\bf q} = \begin{bmatrix}{b_{\text{N}}}_{s+1} + \eta_{s+1}y_s \\{b_{\text{N}}}_{s+2}\\ \vdots\\ {b_{\text{N}}}_{j-2}\\ {b_{\text{N}}}_{j-1} + \eta_{j-1}y_j\end{bmatrix}. \end{equation} Note that ${\bf A}$ is the square submatrix of the NAR constraint matrix restricted to rows $s+1$ up to $j-1$ and to the related columns. \begin{obser} Let $\delta wb^*$ be the optimal solution of the restriction of Problem \ref{cap4:prob:nar} between $s$ and $t$ and let $s < j$. If constraints $\delta w^*_s\le y_s$, $\delta w^*_j\le y_j$, and $\delta w^*_i \le \eta_i(\delta w^*_{i+1} + \delta w^*_{i-1}) + b_{N_i}$, for $i = s+1,\dots,j-1$, are all active, then $\delta w^*_{s+1},\dots,\delta w^*_{j-1}$ are obtained by the solution of the following tridiagonal system $$ \begin{array}{ll} \delta w_s=y_s & \\ \delta w_{r}-\eta_{r}\delta w_{r+1}-\eta_{r}\delta w_{r-1}= {b_{\text{N}}}_{r} & r=s+1,\ldots,j-1 \\ \delta w_j=y_j, & \end{array} $$ or, equivalently, as \begin{equation} \label{eq:linsysaux} \begin{array}{ll} \delta w_{s+1}-\eta_{s+1}\delta w_{s+2}= {b_{\text{N}}}_{s+1} +\eta_{s+1} y_s & \\ \delta w_{r}-\eta_{r}\delta w_{r+1}-\eta_{r}\delta w_{r-1}= {b_{\text{N}}}_{r} & r=s+2,\ldots,j-2 \\ \delta w_{j-1}-\eta_{j-1}\delta w_{j-2}= {b_{\text{N}}}_{j-1} +\eta_{j-1} y_j. \end{array} \end{equation} In matrix form, the above tridiagonal linear system can be written as follows, where matrix ${\bf A}$ is defined in (\ref{eq:defA}), while vector ${\bf q}$ is defined in (\ref{eq:defq}): \begin{equation}\label{linear-system} {\bf A} \begin{bmatrix} \delta w^*_{s+1}\\ \vdots \\ \delta w^*_{j-1} \end{bmatrix} = {\bf q}. \end{equation} \end{obser} Tridiagonal systems \[ a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i, \ \ \ i=1,\ldots,m, \] with $a_1 = c_m = 0$, can be solved through the so-called Thomas algorithm~\cite{higham2002accuracy} (see Algorithm~\ref{cap4:algo:thomas}) with $O(m)$ operations.
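For reference, a direct Python transcription of this forward-elimination/back-substitution scheme (our own sketch, with 0-based arrays and no pivoting) might look as follows; the pseudocode we refer to in the rest of the paper is Algorithm~\ref{cap4:algo:thomas}.
\begin{verbatim}
import numpy as np

def thomas(a, b, c, d):
    # Solves a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i], i = 0,...,m-1,
    # with a[0] = c[m-1] = 0, in O(m) operations.
    m = len(d)
    b = np.array(b, dtype=float)
    d = np.array(d, dtype=float)
    for i in range(1, m):                      # forward elimination
        delta = a[i] / b[i - 1]
        b[i] -= delta * c[i - 1]
        d[i] -= delta * d[i - 1]
    x = np.empty(m)
    x[m - 1] = d[m - 1] / b[m - 1]             # back substitution
    for i in range(m - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x
\end{verbatim}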
\begin{algorithm}[!h] \caption{Thomas algorithm}\label{cap4:algo:thomas} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{$\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$, $\mathbf{d}$} \Output{$\boldsymbol{a}r{\mathbf x}$} Let $m$ be the dimension of ${\bf d}$\\ Set $\alpha_1 = \frac{d_1}{b_1}$ and $\psi_1 = \frac{c_1}{b_1}$\; \tcc{ {\em Forward phase}}\ \For{$i=2,\dots,m$} { Set $\delta_i = \frac{a_i}{b_{i-1}}$\; Set $b_i = b_i - \delta_i c_{i-1}$\; Set $d_i = d_i - \delta_id_{i-1}$\; Set $\alpha_i = \frac{d_i}{b_i} $\;\label{cap4:algo:thomas:alpha} Set $\psi_i = \frac{c_i}{b_i}$\;\label{cap4:algo:thomas:beta} } \tcc{ {\em Backward phase}}\ Set $\boldsymbol{a}r{x}_m = \alpha_m$\; \label{cap4:alg:thomas:back_start} \For {$i = m-1,\dots,1$} { Set $\boldsymbol{a}r{x}_i = \alpha_i - \psi_i \boldsymbol{a}r{x}_{i+1}$ \label{cap4:alg:thomas:back_end}\; } \end{algorithm} In order to detect the lowest index $j\in \{s+1,\ldots, t-1\}$ such that the upper bound constraint is active at $j$, we propose Algorithm~\ref{cap4:algo:solveNAR}, also called \texttt{SolveNAR} and described in what follows. We initially set $j=t$. Then, at each iteration we solve the linear system~\eqref{linear-system}. Let $\boldsymbol{a}r{\bf x} = (\boldsymbol{a}r{x}_{s+1},\dots,\boldsymbol{a}r{x}_{j-1})$ be its solution. We check whether it is feasible and optimal or not. Namely, if there exists $k\in \{s+1,\ldots,j-1\}$ such that either $\boldsymbol{a}r{x}_k< 0$ or $\boldsymbol{a}r{x}_k > y_k$, then $\boldsymbol{a}r{\bf x}$ is unfeasible and, consequently, we need to reduce $j$ by 1. If $\boldsymbol{a}r{x}_k= y_k$ for some $k\in\{s+1,\ldots,j-1\}$, then we also reduce $j$ by 1, since in any case $j$ is not the lowest index at which the upper bound constraint is active in the optimal solution. Finally, if $0\le \boldsymbol{a}r{x}_k< y_k$, for $k= s+1,\dots,j-1$, then we need to verify if ${\bf \boldsymbol{a}r{x}}$ is the best possible solution over the interval $\{s+1,\dots,j-1\}$. We will be able to check that after proving the following result. \begin{prop} \label{prop:optimalpiece} Let matrix ${\bf A}$ be defined as in (\ref{eq:defA}) and vector ${\bf q}$ be defined as in (\ref{eq:defq}). The optimal solution $\delta wb^*$ of the restriction of Problem \ref{cap4:prob:nar} between $s$ and $t$ satisfies \begin{equation} \label{eq:pieceopt} \delta w_s^*=y_s,\ \ \delta w_r^*=\boldsymbol{a}r{x}_r,\ \ r=s+1,\ldots,j-1,\ \ \delta w_j^*=y_j, \end{equation} if and only if the optimal value of the LP problem: \begin{equation}\label{cap4:prob:epsilon_red} \begin{array}{cl} \max_{\boldsymbol{\epsilon}} & \boldsymbol{1}^T\boldsymbol{\epsilon}\\ &{\bf A} \boldsymbol{\epsilon} \le {\bf 0}, \\ & \boldsymbol{\epsilon} \leq {\bf \boldsymbol{a}r y} - {\bf\boldsymbol{a}r {\mathbf x}}, \end{array} \end{equation} is equal to zero or, equivalently, if the following system admits a solution: \begin{equation}\label{cap4:KKT} {\bf A}^T \boldsymbol{\lambda} = {\bf 1}, \ \boldsymbol{\lambda} \ge {\bf 0}. \end{equation} \end{prop} \begin{proof} Let us first assume that $\delta wb^*$ does not fulfill (\ref{eq:pieceopt}). Then, in view of Lemma \ref{cap4:rem:max}, $j$ is not the lowest index such that the upper bound is active at the optimal solution and, consequently, $\delta w^*_k=y_k>\boldsymbol{a}r{x}_k$ for some $k\in \{s+1,\ldots,j-1\}$.
Such optimal solution must be feasible and, in particular, it must satisfy all NAR constraints between $s+1$ and $j-1$ and the upper bound constraints between $s+1$ and $j$, i.e.: $$ \mathbf{1}gin{array}{ll} \delta w^*_{s+1}-\eta_{s+1}\delta w^*_{s+2}\leq {b_{\text{N}}}_{s+1} +\eta_{s+1} y_s & \\ \delta w^*_{r}-\eta_{r}\delta w^*_{r+1}-\eta_{r}\delta w^*_{r-1}\leq {b_{\text{N}}}_{r} & r=s+2,\ldots,j-2 \\ \delta w^*_{j-1}-\eta_{j-1}\delta w^*_{j-2}-\eta_{j-1}\delta w^*_j \leq {b_{\text{N}}}_{j-1} & \\ \delta w^*_{r}\leq y_r & r=s+1,\ldots,j. \end{array} $$ In view of $\delta w^*_j\leq y_j$ and $\eta_{j-1}\geq 0$, $\delta wb^*$ also satisfies the following system of inequalities: $$ \mathbf{1}gin{array}{ll} \delta w^*_{s+1}-\eta_{s+1}\delta w^*_{s+2}\leq {b_{\text{N}}}_{s+1} +\eta_{s+1} y_s & \\ \delta w^*_{r}-\eta_{r}\delta w^*_{r+1}-\eta_{r}\delta w^*_{r-1}\leq {b_{\text{N}}}_{r} & r=s+2,\ldots,j-2 \\ \delta w^*_{j-1}-\eta_{j-1}\delta w^*_{j-2} \leq {b_{\text{N}}}_{j-1} +\eta_{j-1} y_j & \\ \delta w^*_{r}\leq y_r & r=s+1,\ldots,j-1. \end{array} $$ After making the change of variables $\delta w^*_r=\boldsymbol{a}r{x}_r+\epsilon_r$ for $r=s+1,\ldots,j-1$, and recalling that ${\boldsymbol{a}r {\bf x}}$ solves system (\ref{eq:linsysaux}), the system of inequalities can be further rewritten as: $$ \mathbf{1}gin{array}{ll} \epsilon_{s+1}-\eta_{s+1}\epsilon_{s+2}\leq 0 & \\ \epsilon_{r}-\eta_{r}\epsilon_{r+1}-\eta_{r}\epsilon_{r-1}\leq 0 & r=s+2,\ldots,j-2 \\ \epsilon_{j-1}-\eta_{j-1}\epsilon_{j-2} \leq 0 & \\ \epsilon_{r}\leq y_r - \boldsymbol{a}r{x}_{r}& r=s+1,\ldots,j-1. \end{array} $$ Finally, recalling the definition of matrix ${\bf A}$ and vector ${\bf q}$ given in (\ref{eq:defA}) and (\ref{eq:defq}), respectively, this can also be written in a more compact form as: $$ \mathbf{1}gin{array}{l} {\bf A}\boldsymbol{\epsilon}\leq {\bf 0} \\ \boldsymbol{\epsilon}\leq \boldsymbol{a}r{{\bf y}}-\boldsymbol{a}r{{\bf x}}. \end{array} $$ If $\delta w^*_k=y_k>\boldsymbol{a}r{x}_k$ for some $k\in \{s+1,\ldots,j-1\}$, then the system must admit a solution with $\epsilon_k>0$. This is equivalent to prove that problem (\ref{cap4:prob:epsilon_red}) has an optimal solution with at least one strictly positive component and the optimal value is strictly positive. Indeed, in view of the definition of matrix ${\bf A}$, problem (\ref{cap4:prob:epsilon_red}) has the structure of the problems discussed in Proposition \ref{prop:specstruct}. More precisely, to see that we need to remark that maximizing $\boldsymbol{1}^T\boldsymbol{\epsilon}$ is equivalent to minimizing the decreasing function $-\boldsymbol{1}^T\boldsymbol{\epsilon}$. Then, observing that $\boldsymbol{\epsilon}={\bf 0}$ is a feasible solution of problem (\ref{cap4:prob:epsilon_red}), by Proposition \ref{prop:specstruct} the optimal solution $\boldsymbol{\epsilon}^*$ must be a nonnegative vector, and since at least one component, namely component $k$, is strictly positive, then the optimal value must also be strictly positive. \newline\newline\noindent Conversely, let us assume that the optimal value is strictly positive and that $\boldsymbol{\epsilon}^*$ is an optimal solution with at least one strictly positive component. Then, there are two possible alternatives. Either the optimal solution $\delta wb^*$ of the restriction of Problem \ref{cap4:prob:nar} between $s$ and $t$ is such that $\delta w_j^*<y_j$, in which case (\ref{eq:pieceopt}) obviously does not hold, or $\delta w_j^*=y_j$. 
In the latter case, let us assume by contradiction that (\ref{eq:pieceopt}) holds. We observe that the solution defined as follows: $$ \begin{array}{ll} x'_s=y_s & \\ x'_r=\boldsymbol{a}r{x}_r+\epsilon^*_r=\delta w_r^*+ \epsilon^*_r& r=s+1,\ldots, j-1 \\ x'_j=y_j =\delta w_j^*& \\ x'_r=\delta w_r^* & r=j+1,\ldots,t, \end{array} $$ is feasible for the restriction of Problem \ref{cap4:prob:nar} between $s$ and $t$. Indeed, by feasibility of $\boldsymbol{\epsilon}^*$ in problem (\ref{cap4:prob:epsilon_red}) all upper bound and NAR constraints between $s$ and $j-1$ are fulfilled. Those between $j+1$ and $t$ are also fulfilled by the feasibility of $\delta wb^*$. Then, we only need to prove that the NAR constraint at $j$ is satisfied. By feasibility of $\delta wb^*$ and in view of $\epsilon_{j-1}^*,\eta_j\geq 0$, we have that: $$ \begin{array}{l} x'_j=\delta w_j^*\leq \eta_{j} \delta w_{j-1}^*+\eta_{j} \delta w_{j+1}^*+{b_{\text{N}}}_{j}\leq \\ \leq \eta_{j} (\delta w_{j-1}^*+\epsilon^*_{j-1})+\eta_{j} \delta w_{j+1}^*+{b_{\text{N}}}_{j}=\eta_{j} x'_{j-1}+\eta_{j} x'_{j+1}+{b_{\text{N}}}_{j}. \end{array} $$ Thus, ${\bf x}'$ is feasible and such that ${\bf x}'\geq \delta wb^*$ with at least one strict inequality (recall that at least one component of $\boldsymbol{\epsilon}^*$ is strictly positive), which contradicts the optimality of $\delta wb^*$ (recall that the optimal solution must be the component-wise maximum of all feasible solutions). \newline\newline\noindent In order to prove the last part, i.e., that problem (\ref{cap4:prob:epsilon_red}) has a positive optimal value if and only if (\ref{cap4:KKT}) admits no solution, we notice that the optimal value is positive if and only if the feasible point ${ \boldsymbol{\epsilon} = {\bf 0}}$ is not an optimal solution, or, equivalently, the null vector is not a KKT point. Since at ${\boldsymbol \epsilon = {\bf 0}}$ constraints $\boldsymbol{\epsilon} \le {\bf \boldsymbol{a}r y} - {\bf\boldsymbol{a}r {\mathbf x}}$ cannot be active, the KKT conditions for problem~\eqref{cap4:prob:epsilon_red} at this point are exactly those established in (\ref{cap4:KKT}), where vector $\boldsymbol{\lambda}$ is the vector of Lagrange multipliers for constraints ${\bf A} \boldsymbol{\epsilon} \le {\bf 0}$. This concludes the proof. \end{proof} Thus, if~\eqref{cap4:KKT} admits no solution, then (\ref{eq:pieceopt}) does not hold and, again, we need to reduce $j$ by 1. Otherwise, we can fix the optimal solution between $s$ and $j$ according to (\ref{eq:pieceopt}). After that, we recursively call the routine \texttt{SolveNAR} on the remaining subinterval $\{j,\dots,t\}$ in order to obtain the solution over the full interval. \begin{remark} In Algorithm~\ref{cap4:algo:solveNAR}, \texttt{isFeasible} is the routine used to verify whether, for $ k = s+1,\dots,j-1$, $0\le \boldsymbol{a}r{x}_k < y_k$ holds, while \texttt{isOptimal} is the procedure to check optimality of $\boldsymbol{a}r{ \bf x}$ over the interval $\{s+1,\dots,j-1\}$, i.e., that (\ref{eq:pieceopt}) holds. \end{remark} Now, we are ready to prove that Algorithm~\ref{cap4:algo:solveNAR} solves Problem \ref{cap4:prob:nar}. \begin{prop} The call \texttt{solveNAR(${\mathbf y}$,1,$n$)} of Algorithm~\ref{cap4:algo:solveNAR} returns the optimal solution of Problem \ref{cap4:prob:nar}. \end{prop} \begin{proof} After the call \texttt{solveNAR(${\mathbf y}$,1,$n$)}, we are able to identify the portion of the optimal solution between 1 and some index $j_1$, $1<j_1\leq n$.
If $j_1=n$, then we are done. Otherwise, we make the recursive call \texttt{solveNAR(${\mathbf y}$,$j_1$,$n$)}, which will enable to identify also the portion of the optimal solution between $j_1$ and some index $j_2$, $j_1<j_2\leq n$. If $j_2=n$, then we are done. Otherwise, we make the recursive call \texttt{solveNAR(${\mathbf y}$,$j_2$,$n$)}, and so on. After at most $n$ recursive calls, we are able to return the full optimal solution. \end{proof} \mathbf{1}gin{algorithm}[!ht] \caption{\texttt{SolveNAR}(${\bf y}, s, t$)}\label{cap4:algo:solveNAR} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{Upper bound $\mathbf{y}$ and two indices $s$ and $t$ with $1\le s<t \le n$} \Output{$\delta wb^*$ } Set $j = t$\; $\delta wb^* = {\bf y}$\; \While{$j \ge s+1$}{ Compute the solution $\boldsymbol{a}r{\bf x}$ of the linear system~\eqref{linear-system}\; \If{ \texttt{isFeasible}$(\boldsymbol{a}r{\bf x})$ and \texttt{isOptimal}$(\boldsymbol{a}r{\bf x})$}{ \textbf{Break}\; } \Else{ Set $j = j- 1$\; } } \For{$i = s+1,\dots,j-1$}{ Set $\delta w^*_i = \boldsymbol{a}r{x}_{i}$\;} \mathbb Return $\delta wb^* = \min \left\{\delta wb^*, \texttt{SolveNAR}(\delta wb^*,j,t) \right\} $\; \end{algorithm} \mathbf{1}gin{remark} Note that Algorithm~\ref{cap4:algo:solveNAR} involves solving a significant amount of linear systems, both to compute $\boldsymbol{a}r{\bf x}$ and to verify its optimality (see~\eqref{linear-system} and~\eqref{cap4:KKT}). In what follows we propose some implementation details which improve the performance of Algorithm~\ref{cap4:algo:solveNAR}. As previously remarked, each (tridiagonal) linear system~\eqref{linear-system} can be solved in at most $O(j-s)$ operations (here the number $m$ of equations is $t-s-1$), which can be upper bounded by $O(n)$. Moreover, we can directly check the feasibility of $\boldsymbol{a}r{\bf x}$ during the backward phase of the Thomas Algorithm (see lines~\ref{cap4:alg:thomas:back_start}-\ref{cap4:alg:thomas:back_end} of Algorithm~\ref{cap4:algo:thomas}), namely we declare unfeasibility as soon as $0\le \boldsymbol{a}r{x}_i\le y _i$ does not hold, without completing the backward propagation. We also observe that coefficients $\alpha_i$ and $\psi_i$, $i=2,\ldots,m$ do not change with $j$, so that the forward phase of the Thomas algorithm can be performed only once at the beginning of the procedure \texttt{solveNAR} for the whole interval $\{s,\dots,t\}$. Finally, Thomas algorithm can also be employed to solve the (tridiagonal) linear system~\eqref{cap4:KKT}, needed to verify optimality of $\boldsymbol{a}r{\bf x}$. It is also worthwhile to remark that $j$ can be reduced by more than one unit at each iteration. Indeed, let $m_{i,r}=\prod_{s=i}^{r-1} \psi_s$ for $i<r\leq j$. Then, it can be seen that $\boldsymbol{a}r{x}_i=q_r-m_{i,r} \boldsymbol{a}r{x}_r$, for some $q_r$ and each $r\in \{i+1,\ldots,j\}$. Now, let us assume that $\boldsymbol{a}r{x}_i> y_i$. In such case what we are currently doing is moving form $j$ to $j-1$ and compute a new solution $\boldsymbol{a}r{{\bf x}}^{{\tt new}}$. However, we are able to compute in advance the value $\boldsymbol{a}r{x}_i^{{\tt new}}$ without solving the full linear system. Indeed, we have $$ \boldsymbol{a}r{x}_i^{{\tt new}}=\boldsymbol{a}r{x}_i+m_{i,j-1} (\boldsymbol{a}r{x}_{j-1}-y_{j-1}), $$ and, in case $\boldsymbol{a}r{x}_i^{{\tt new}}> y_i$, we can further reduce to $j-2$ and repeat the same procedure. 
A similar approach can be employed when $\boldsymbol{a}r{x}_i<0$. \end{remark} The following proposition states the worst-case complexity of \texttt{solveNAR(${\mathbf y}$,1,$n$)}. \mathbf{1}gin{prop} Problem \ref{cap4:prob:nar} can be solved with $O(n^3)$ operations by running the procedure \texttt{SolveNAR}$({\bf y},1,n)$ and by using the Thomas algorithm for the solution of each linear system. \end{prop} \mathbf{1}gin{proof} In the worst case, at the first call we have $j_1=2$, since we need to go all the way from $j=n$ down to $j=2$. Since for each $j$ we need to solve a tridiagonal system, which requires at most $O(n)$ operations, the first call of \texttt{SolveNAR} requires $O(n^2)$ operations. This is similar for all successive calls, and since the number of recursive calls is at most $O(n)$, the overall effort is at most of $O(n^3)$ operations. \end{proof} In fact, what we observed is that the practical complexity of the algorithm is much better, namely $\Theta(n^2)$. \subsection{Acceleration and NAR constraints} Now we discuss the problem with acceleration and NAR constraints, with upper bound vector ${\bf y}$, i.e.: \mathbf{1}gin{problem} \label{cap4:prob_lin_accnar} \mathbf{1}gin{align*} & \qquad \min_{\delta wb \in \mathbb Real^n} \sum_{i=1}^{n-1} \frac{2h}{\sqrt{w_{i+1} +\delta w_{i+1}} + \sqrt{w_{i}+\delta w_{i}}} \\ & \mathbf{l_B} \leq \delta wb \leq \mathbf{y}, \\ & \delta w_{i+1}-\delta w_{i} \le b_{\text{A}}i, & i=1,\dots,n-1,\\ &\delta w_{i}-\delta w_{i+1} \le \mathbf{d}di, & i=1,\dots n-1,\\ &\delta w_{i} - \eta_i \delta w_{i-1} - \eta_i \delta w_{i+1} \le b_{\text{N}}i ,& i = 2,\dots,n-1. \end{align*} \end{problem} We first remark that Problem \ref{cap4:prob_lin_accnar} has the structure of problem (\ref{eq:specstruct}), so that by Proposition \ref{prop:specstruct}, its unique optimal solution is the component-wise maximum of its feasible region. As for Problem \ref{cap4:prob:nar}, we can solve Problem \ref{cap4:prob_lin_accnar} by using the graph-based approach proposed in reference~\cite{ConLocLAu19Graph}. However, reference~\cite{CabConLoc2018coap1} shows that, if we adopt a very efficient procedure to solve Problems \ref{cap4:prob_lin_acc} and \ref{cap4:prob:nar}, then it is worth splitting the full problem into two separated ones and use an iterative approach (see Algorithm~\ref{cap4:alg:accnar}). Indeed, Problems \ref{cap4:prob_lin_acc}-\ref{cap4:prob_lin_accnar} share the common property that their optimal solution is also the component-wise maximum of the corresponding feasible region. Moreover, according to Proposition \ref{prop:specstruct1}, the optimal solutions of Problems \ref{cap4:prob_lin_acc} and \ref{cap4:prob:nar} are valid upper bounds for the optimal solution (actually, also for any feasible solution) of the full Problem \ref{cap4:prob_lin_accnar}. In Algorithm~\ref{cap4:alg:accnar} we first call the procedure $\texttt{SolveACC}$ with input the upper bound vector ${\bf y}$. Then, the output of this procedure, which, according to what we have just stated, is an upper bound for the solution of the full Problem \ref{cap4:prob_lin_accnar}, satisfies $\delta wb_{\text{Acc}}\leq {\bf y}$ and becomes the input for a call of the procedure $ \texttt{SolveNAR}$. The output $\delta wb_{\text{NAR}}$ of this call will be again an upper bound for the solution of the full Problem \ref{cap4:prob_lin_accnar} and it will satisfy $\delta wb_{\text{NAR}}\leq \delta wb_{\text{ACC}}$. 
This output will become the input of a further call to the procedure $\texttt{SolveACC}$, and we proceed in this way until the distance between two consecutive output vectors falls below a prescribed tolerance value $\varepsilon$. The following proposition states that the sequence of output vectors generated by the alternate calls to the procedures $\texttt{SolveACC}$ and $ \texttt{SolveNAR}$ will converge to the optimal solution of the full Problem \ref{cap4:prob_lin_accnar}. \begin{prop} Algorithm~\ref{cap4:alg:accnar} converges to the optimal solution of Problem \ref{cap4:prob_lin_accnar} when $\varepsilon=0$ and stops after a finite number of iterations if $\varepsilon>0$. \end{prop} \begin{proof} We have observed that the sequence of alternate solutions of Problems \ref{cap4:prob_lin_acc} and \ref{cap4:prob:nar}, here denoted by $\{{\bf y}_t\}$, is: (i) a sequence of valid upper bounds for the optimal solution of Problem \ref{cap4:prob_lin_accnar}; (ii) component-wise monotonically non-increasing; (iii) component-wise bounded from below by the null vector. Thus, if $\varepsilon=0$, an infinite sequence is generated which converges to some point $\boldsymbol{a}r{{\bf y}}$, which is also an upper bound for the optimal solution of Problem \ref{cap4:prob_lin_accnar}; more precisely, by continuity it is also a feasible point of the problem and is thus the optimal solution of the problem. If $\varepsilon>0$, due to the convergence to some point $\boldsymbol{a}r{{\bf y}}$, at some finite iteration the exit condition of the while loop must be satisfied. \end{proof} \begin{algorithm}[h] \caption{ Algorithm \texttt{SolveACCNAR} for the solution of Problem \ref{cap4:prob_lin_accnar}.} \label{cap4:alg:accnar} \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{The upper bound $\mathbf{y}$ and the tolerance $\varepsilon$} \Output{The optimal solution $\delta wb^*$ and the optimal value $f^{*}$} $\delta wb_{\text{Acc}} = $ \texttt{SolveACC}$(\mathbf{y})$\; $\delta wb_{\text{NAR}}= $ \texttt{SolveNAR}$(\delta wb_{\text{Acc}},1,n)$\; \While{$\|\delta wb_{\text{NAR}} - \delta wb_{\text{Acc}} \| > \varepsilon$}{ $\delta wb_{\text{Acc}} = $ \texttt{SolveACC}$(\delta wb_{\text{NAR}})$\; $\delta wb_{\text{NAR}} = $ \texttt{SolveNAR}$(\delta wb_{\text{Acc}},1,n)$\; } $\delta wb^*=\delta wb_{\text{NAR}}$\; \Return $\delta wb^*$, \texttt{evaluateObj}$(\delta wb^*)$ \end{algorithm} \section{A descent method for the case of acceleration, PAR and NAR constraints}\label{cap4:sec:PAR} Unfortunately, PAR constraints~\eqref{cap4:eq:par_lin} do not satisfy the assumptions required by Proposition \ref{prop:specstruct} in order to guarantee that the component-wise maximum of the feasible region is the optimal solution of Problem~\ref{cap4:prob_lin}. However, in Section~\ref{cap4:sec:accnar} we have shown that Problem \ref{cap4:prob_lin_accnar}, i.e., Problem~\ref{cap4:prob_lin} without the PAR constraints, can be efficiently solved by Algorithm~\ref{cap4:alg:accnar}. Our purpose then is to separate the acceleration and NAR constraints from the PAR constraints. \begin{defn}\label{cap4:defn:optimvalfun} Let $f: \mathbb Real^n\rightarrow\mathbb Real$ be the objective function of Problem~\ref{cap4:prob_lin} and let $\mathcal{D}$ be the region defined by the acceleration and NAR constraints (the feasible region of Problem \ref{cap4:prob_lin_accnar}).
We define the function $F: \mathbb Real^n \rightarrow \mathbb Real$ as follows \[ F({\mathbf y}) = \min\left\{ f({\mathbf x})\, | \, {\mathbf x} \in \mathcal{D}, {\mathbf x} \le {\mathbf y} \right\}. \] Namely, $F$ is the optimal value function of Problem \ref{cap4:prob_lin_accnar} when the upper bound vector is ${\mathbf y}$. \end{defn} \mathbf{1}gin{prop} Function $F$ is a convex function. \end{prop} \mathbf{1}gin{proof} Since Problem \ref{cap4:prob_lin_accnar} is convex, then the optimal value function $F$ is convex (see Section 5.6.1 of~\cite{boyd2004convex}) . \end{proof} Now, let us introduce the following problem: \mathbf{1}gin{problem}\label{cap4:prob:par} \mathbf{1}gin{align} &\quad \min_{{\mathbf y} \in \mathbb Real^n} F({\mathbf y}) \label{cap4:con:par_obj}\\ \intertext{such that} & \eta_i( y_{i-1} + y_{i+1}) -y_{i} \le {b_{\text{P}}i} ,\quad i = 2,\dots,n-1, \label{cap4:con:par_par}\\ & \mathbf{l_B}\le {\mathbf y} \le \mathbf{u_B} \label{cap4:con:par_bound}. \end{align} \end{problem} Such problem is a relaxation of Problem~\ref{cap4:prob_lin}. Indeed, each feasible solution of Problem~\ref{cap4:prob_lin} is also feasible for Problem~\ref{cap4:prob:par} and the value of $F$ at such solution is equal to the value of the objective function of Problem~\ref{cap4:prob_lin} at the same solution. We will solve Problem~\ref{cap4:prob:par} rather than Problem~\ref{cap4:prob_lin} to compute the new displacement $\delta wb$. More precisely, if ${\mathbf y}^*$ is the optimal solution of Problem~\ref{cap4:prob:par}, then we will set \mathbf{1}gin{equation} \label{eq:computedir} \delta wb = \arg \min_{{\mathbf x} \in \mathcal{D}, {\mathbf x} \le {\mathbf y}^*} f({\mathbf x}). \end{equation} In the following proposition we prove that, under a very mild condition, the optimal solution of Problem \ref{cap4:prob:par} computed in (\ref{eq:computedir}) is feasible and, thus, optimal for Problem \ref{cap4:prob_lin}, so that, although we solve a relaxation of the latter problem, we return an optimal solution for it. \mathbf{1}gin{prop}\label{cap4:prop:salvezza} Let ${\mathbf w}^{(k)}$ be the current point. If \mathbf{1}gin{equation}\label{cap4:ass:prop_salvezza} \delta w_{j-1} + \delta w_{j+1}\le 2\left(w^{(k)}_{j-1} + w^{(k)}_{j+1} \right), \quad j=2,\ldots,n-1, \end{equation} where $\delta wb$ is computed through (\ref{eq:computedir}), then, $\delta wb$ is feasible for Problem \ref{cap4:prob_lin}, both if the non-linear constraints are linearized as in \eqref{cap4:eq:nar_lin}-\eqref{cap4:eq:par_lin} and if they are linearized as in \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1}. \end{prop} \mathbf{1}gin{proof} See Appendix \ref{sec:feasdir}. \end{proof} Note that assumption~\eqref{cap4:ass:prop_salvezza} is mild since we are basically requiring that no steps larger than twice as much as the current values can be taken. In order to fulfill it, one can impose restrictions on $\delta w_{j+1}$ and $\delta w_{j-1}$, like, e.g., $$\delta w_{j-1} \le w^{(k)}_{j-1} + \frac{w^{(k)}_{j-1} +w^{(k)}_{j+1} }{2},$$ and a similar restriction for $\delta w_{j+1}$, so that the assumption is satisfied. In fact, in the computational experiments we did not impose such restrictions unless a positive step-length along the computed direction $\delta wb$ could not be taken (which, however, never occurred in our experiments). \newline\newline\noindent Now, let us turn our attention towards the solution of Problem~\ref{cap4:prob:par}. In order to solve it, we propose a descent method. 
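Before going into details, we anticipate the overall structure of this descent method with a short Python-style sketch; all names are placeholders: \texttt{solve\_acc\_nar} stands for Algorithm~\ref{cap4:alg:accnar}, assumed here to return the value of $F$ together with the multipliers of the upper bound constraints, while \texttt{direction\_lp} stands for the trust-region LP introduced below, whose radius-update rules are also described below.
\begin{verbatim}
import numpy as np

def compute_update(w, solve_acc_nar, direction_lp,
                   sigma0=1.0, rho=2.0, tau=0.5, eps1=1e-6):
    # Descent method on F (sketch): evaluate F(y) through one call of SolveACCNAR,
    # use the multipliers nu of the upper bound constraints to build an LP that
    # returns a candidate step d within a trust region of radius sigma, accept the
    # step (and enlarge sigma) only if F decreases, otherwise shrink sigma;
    # stop when sigma falls below the tolerance eps1.
    y = np.zeros_like(w)
    F_y, nu = solve_acc_nar(y)
    sigma = sigma0
    while sigma > eps1:
        d = direction_lp(y, nu, sigma)         # trust-region LP (see below)
        F_new, nu_new = solve_acc_nar(y + d)
        if F_new < F_y:                        # descent step: accept and enlarge
            y, F_y, nu = y + d, F_new, nu_new
            sigma *= rho
        else:                                  # not a descent step: tighten
            sigma *= tau
    # The displacement delta_w of (eq:computedir) is then recovered by one more
    # SolveACCNAR call with upper bound y.
    return y
\end{verbatim}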
We can exploit the information provided by the dual optimal solution $\boldsymbol{\nu}\in\mathbb Real_{+}^n$ associated to the upper bound constraints of Problem \ref{cap4:prob_lin_accnar}. Indeed, from the sensitivity theory, we know that the dual solution is related to the gradient of the optimal value function $F$ (see Definition~\ref{cap4:defn:optimvalfun}) and provides information about how it changes its value for small perturbations of the upper bound values (for further details see Sections 5.6.2 and 5.6.5 in~\cite{boyd2004convex}). Let ${\mathbf y}^{(t)}$ be a feasible solution of Problem~\ref{cap4:prob:par} and $\boldsymbol{\nu}\in\mathbb Real_{+}^n$ be the Lagrange multipliers of the upper bound constraints of Problem \ref{cap4:prob_lin_accnar}, when the upper bound is ${\mathbf y}^{(t)}$. Let: $$\varphi_i = b_{\text{P}}i - \eta_i\left( y^{(t)}_{i-1} + y^{(t)}_{i+1} \right) + y^{(t)}_i, \quad i=2,\dots,n-1.$$ Then, a \emph{feasible descent direction} $\mathbf{d}^{(t)}$ can be obtained by solving the following LP problem: \mathbf{1}gin{problem}\label{cap4:prob:direction} \mathbf{1}gin{align} &\min_{\mathbf{d} \in \mathbb Real^n} -\boldsymbol{\nu}^{T}\mathbf{d} \label{cap4:con:obj_dir}\\ & \eta_i\left( d_{i-1} + d_{i+1} \right)- d_i \le \varphi_i, \, i = 2,\dots,n-1, \label{cap4:con:par_dir}\\ & \mathbf{l_B} \le {\mathbf y}^{(t)} + \mathbf{d} \le \mathbf{u_{B}} \label{cap4:con:box_dir}, \end{align} \end{problem} where the objective function~\eqref{cap4:con:obj_dir} imposes that $\mathbf{d}^{(t)}$ is a descent direction while constraints~\eqref{cap4:con:par_dir} and~\eqref{cap4:con:box_dir} guarantee feasibility with respect to Problem~\ref{cap4:prob:par}. Problem~\ref{cap4:prob:direction} is an LP problem and, consequently, it can easily be solved through a standard LP solver. In particular, we employed GUROBI \cite{gurobi}. Unfortunately, since the information provided by the dual optimal solution $\boldsymbol{\nu}$ is local and related to small perturbations of the upper bounds, it might happen that $F({\mathbf y}^{(t)} + \mathbf{d}^{(t)}) \ge F({\mathbf y}^{(t)})$. To overcome this issue we introduce a trust-region constraint in Problem~\ref{cap4:prob:direction}. So, let $\sigma^{(t)}\in\mathbb Real_{+}$ be the radius of the trust-region at iteration $t$, then we have: \mathbf{1}gin{problem}\label{cap4:prob:direction_tr} \mathbf{1}gin{align} &\min_{\mathbf{d} \in \mathbb Real^n} -\boldsymbol{\nu}^{T}\mathbf{d} \label{cap4:obj:d}\\ & \eta_i\left( d_{i-1} + d_{i+1} \right)- d_i \le \varphi_i, \, i = 2,\dots,n-1, \label{cap4:con:par_dir_tr}\\ &\mathbf{\boldsymbol{a}r{l}_{B}} \le \mathbf{d} \le \mathbf{\boldsymbol{a}r{u}_B} \label{cap4:con:box_dir_tr}. \end{align} \end{problem} where $\boldsymbol{a}r{l}_{B_i} = \max\{l_{B_i}-y^{(t)}_i,-\sigma^{(t)}\}$ and $\boldsymbol{a}r{u}_{B_i} = \min\{u_{B_i}-y_i^{(t)},\sigma^{(t)}\}$ for $i=1,\dots,n$. After each iteration of the descent algorithm, we change the radius $\sigma^{(t)}$ according to the following rules: \mathbf{1}gin{itemize} \item if $F({\mathbf y}^{(t)}+\mathbf{d}^{(t)}) \ge F({\mathbf y}^{(t)})$, then we set ${\mathbf y}^{(t+1)} = {\mathbf y}^{(t)}$ and we tight the trust-region by decreasing $\sigma^{(t)}$ by a factor $\tau\in(0,1)$; \item if~$F({\mathbf y}^{(t)}+\mathbf{d}^{(t)}) < F({\mathbf y}^{(t)})$, then we set ${\mathbf y}^{(t+1)} = {\mathbf y}^{(t)} + \mathbf{d}^{(t)}$ and enlarge the radius~$\sigma^{(t)}$ by a factor $\rho>1$. 
\end{itemize} The proposed descent algorithm is sketched in Figure~\ref{fig:flowcompute}, which reports the flow chart of the procedure~\texttt{ComputeUpdate} used in Algorithm SCA. \mathbf{1}gin{figure*}[!h] \centering \includegraphics[width=\textwidth]{flowcompute.eps} \caption{Flow chart of the routine \texttt{ComputeUpdate}. \label{fig:flowcompute}} \end{figure*} We initially set ${\mathbf y}^{(0)} = {\bf 0}$. At each iteration $t$ we evaluate the objective function $F({\mathbf y}^{(t)})$ by solving Problem \ref{cap4:prob_lin_accnar} with upper bound vector ${\mathbf y}^{(t)}$ through a call of the routine \texttt{solveACCNAR} (Algorithm~\ref{cap4:alg:accnar}). Then, we compute the Lagrange multipliers $\boldsymbol{\nu}^{(t)}$ associated to the upper bound constraints. After that, we compute a candidate descent direction $\mathbf{d}^{(t)}$ by solving Problem~\ref{cap4:prob:direction_tr}. If $\mathbf{d}^{(t)}$ is a descent step, then we set ${\mathbf y}^{(t+1)} = {\mathbf y}^{(t)} + \mathbf{d}^{(t)}$ and enlarge the radius of the trust region; otherwise we do not move to a new point, we tighten the trust region and we solve Problem~\ref{cap4:prob:direction_tr} again. The descent algorithm stops as soon as the radius of the trust region becomes smaller than a fixed tolerance $\varepsilon_1$. \mathbf{1}gin{remark} Note that we initially set ${\mathbf y}^{(0)} ={\bf 0}$. However, any feasible solution of Problem~\ref{cap4:prob:direction_tr} would do and, actually, starting with a good initial solution may enhance the performance of the algorithm. \end{remark} \mathbf{1}gin{remark} \label{rem:heuristic} Problem~\ref{cap4:prob:direction_tr} is an LP one and can be solved by any existing LP solver. However, a suboptimal solution to Problem~\ref{cap4:prob:direction_tr}, obtained by a heuristic approach, is also acceptable. Indeed, we observe that: i) an {\em optimal} descent direction is not strictly required; ii) a heuristic approach allows one to reduce the time needed to get a descent direction. In this paper we propose a possible heuristic, which will be described in Appendix~\ref{sec:heuristic}. However, we point out that a possible topic for future research is the development of further heuristic approaches. \end{remark} \section{Speed planning in general configuration spaces} \label{sec_higher_dim} In this section, we consider the speed planning problem for a curve in a generic configuration space. We show that, also in this case, a speed profile obtained by solving Problem~\ref{cap4:prob:continuos} allows us to bound the velocity, the acceleration and the jerk of the obtained trajectory. Let $\mathcal{Q}$ be a smooth manifold of dimension $p$ that represents a configuration space with $p$ degrees of freedom ($p$-DOF). For instance, the configuration space of a rigid body corresponds to $SE(3)$, the set of rigid transformations in $\mathbb Real^3$. Let $\|\cdot\|: T\mathcal{Q} \to \mathbb Real$ be a norm on $T\mathcal{Q}$, the tangent space of $\mathcal{Q}$. Let $\mathbf{g}amma: [0,s_f] \to \mathcal{Q}$ be a $C^3$ function, whose image set $\Gamma=\mathbf{g}amma([0,s_f])$ is the path to be followed and such that $\mathbf{g}amma$ has unit-length parameterization, that is $(\forall s \in [0,s_f]) \|\mathbf{g}amma'(s)\|=1$. In this way, $s_f$ is the length of $\Gamma$. In particular, $\mathbf{g}amma(0)$ and $\mathbf{g}amma(s_f)$ are the initial and final configurations. Define $t_{f}$ as the time when the configuration reaches the end of the path.
Let $\lambda : [0, t_f] \rightarrow [0, s_f]$ be a differentiable monotone increasing function such that $\gamma(\lambda(t))$ is the configuration at time $t$ and let $ v : [0, s_f] \rightarrow [0, +\infty]$ be such that $\left( \forall t \in [0,t_f]\right) \dot{\lambda}(t) = v(\lambda(t))$. Namely, $v(s)$ is the norm of the velocity of the configuration along $\Gamma$ at position $\mathbf{g}amma(s)$. We impose ($\forall s \in [0,s_{f}]$) $v(s) \ge 0$. For any $t \in [0,t_f]$, using chain rule, we obtain, setting $w=v^2$ and $\mathbf{q}(t) = \mathbf{g}amma(\lambda(t))$, \mathbf{1}gin{equation} \label{eq:rep} \mathbf{1}gin{array}{ll} \dot{\mathbf{q}}(t) = \mathbf{g}amma^{\prime}(\lambda(t))v(\lambda(t)),\\[3pt] \ddot{\mathbf{q}}(t) = \frac{1}{2} \mathbf{g}amma^{\prime}(\lambda(t))w^\prime(\lambda(t))+ \mathbf{g}amma^{\prime\prime}(\lambda(t))w(\lambda(t)),\\ \dddot{\mathbf{q}}(t) =\frac{3}{2}\mathbf{g}amma^{\prime \prime}(\lambda(t))w^\prime(\lambda(t))w(\lambda(t))^\frac{1}{2}+\\ \mathbf{g}amma^{\prime}(\lambda(t))w^{\prime \prime}(\lambda(t)) w(\lambda(t))^\frac{1}{2}+ \mathbf{g}amma^{\prime\prime \prime}(\lambda(t))w(\lambda(t))^\frac{3}{2}\,. \end{array} \end{equation} At this point, one could formulate a speed optimization problem, similar in structure to Problem~\ref{cap4:prob:continuos}, with constraints on $\|\dot{\mathbf{q}}(t) \|$, $\|\ddot{\mathbf{q}}(t)\|$, $\|\dddot{\mathbf{q}}(t)\|$. This leads to a different optimization problem that, although related to Problem~\ref{cap4:prob:continuos}, would need a different discussion and is outside the scope of this paper. However, in the following, we show that if the speed profile $w$ is chosen by solving Problem~\ref{cap4:prob:continuos}, then quantities $\|\dot{\mathbf{q}}(t) \|$, $\|\ddot{\mathbf{q}}(t)\|$, $\|\dddot{\mathbf{q}}(t)\|$ are bounded by terms depending on the parameters $\mu^+$, $A$, $J$ appearing in Problem~\ref{cap4:prob:continuos}. To this end, set $k(\lambda)=\|\mathbf{g}amma^{\prime\prime}(\lambda(t))\|$, $k_2(\lambda)=\|\mathbf{g}amma^{\prime\prime\prime}(\lambda(t))\|$, and, recalling that $\|\gamma'(\lambda(t))\|=1$, note that \[ \mathbf{1}gin{array}{ll} \|\dot{\mathbf{q}}(t) \| = v(\lambda(t))\\[3pt] \|\ddot{\mathbf{q}}(t)\| \leq \frac{1}{2} |w^\prime(\lambda(t))|+ k(\lambda(t)) w(\lambda(t)),\\ \|\dddot{\mathbf{q}}(t)\| \leq \frac{3}{2} k(\lambda(t)) |w^\prime(\lambda(t))| w(\lambda(t))^\frac{1}{2}+ |w^{\prime \prime}(\lambda(t))| w(\lambda(t))^\frac{1}{2}\\+ k_2(\lambda(t))w(\lambda(t))^\frac{3}{2}\,. \end{array} \] Hence, if $w$ is feasible for Problem~\ref{cap4:prob:continuos}, the following bounds hold \[ \mathbf{1}gin{array}{ll} \|\dot{\mathbf{q}}(t) \| \leq \sqrt{\mu^+(\lambda(t))}\\[3pt] \|\ddot{\mathbf{q}}(t)\| \leq A+ k(\lambda(t)) \mu^+(\lambda(t)),\\ \|\dddot{\mathbf{q}}(t)\| \leq 3 k(\lambda(t)) A \mu^+(\lambda(t))^\frac{1}{2}+ 2 J + k_2(\lambda(t))\mu^+(\lambda(t))^\frac{3}{2}\,. \end{array} \] Hence, a speed profile $w$ obtained as the solution of Problem~\ref{cap4:prob:continuos} implies that quantities $\|\dot{\mathbf{q}}(t) \|$, $\|\ddot{\mathbf{q}}(t) \|$, $\|\dddot{\mathbf{q}}(t) \|$ are bounded in a known way. If one wants to satisfy constraints $\|\dot{\mathbf{q}}(t) \|\leq \hat V$, $\|\ddot{\mathbf{q}}(t)\| \leq \hat A$, $\|\dddot{\mathbf{q}}(t)\| \leq \hat J$, it is possible to proceed in the following way. 
Set two constants $0<A<\hat A$ and $0<J<\hat J$ (for instance, $A=\frac{\hat A}{2}$ and $J=\frac{\hat J}{2}$) and define $\mu^+(\lambda) = \min\{\hat{V}^2, \frac{\hat A-A}{ k(\lambda)},\chi(\lambda)\}$, where $\chi(\lambda)$ is a positive quantity that satisfies the equation $3 k(\lambda) A \chi(\lambda) ^\frac{1}{2}+ J+ k_2(\lambda) \chi(\lambda) ^\frac{3}{2} = \hat J$. Then any $w$ obtained by solving Problem~\ref{cap4:prob:continuos} satisfies the required bounds. \section{Computational Experiments} \label{sec:compexp} In this section we present various computational experiments performed in order to evaluate the approaches proposed in Sections~\ref{cap4:sec:accnar} and~\ref{cap4:sec:PAR}. \newline\newline\noindent In particular, we compared solutions of Problem~\ref{cap4:prob_disc} computed by Algorithm SCA to solutions obtained with general purpose NLP solvers. Note that, with a single exception, we did not carry out a direct comparison with other methods specifically tailored to Problem~\ref{cap4:prob_disc} for the following reasons. \mathbf{1}gin{itemize} \item Some algorithms (such as \cite{Villagra-et-al2012}, \cite{RaiCGL2019jerk}) use heuristics to quickly find suboptimal solutions of acceptable quality, but do not achieve local optimality. Hence comparing their solution times with SCA would not be fair. However, in one of our experiments (Experiment 4), we made a comparison between the most recent heuristic proposed in \cite{RaiCGL2019jerk} and Algorithm SCA, both in terms of computing times and in terms of the quality of the returned solution. \item The method presented in~\cite{8569414} does not consider the (nonconvex) jerk constraint, but solves a convex problem whose objective function has a penalization term that includes pseudo-jerk. Due to this difference, a direct comparison with SCA is not possible. \item The method presented in~\cite{DONG20071941} is based on the numerical solution of a large number of non-linear and non-convex subproblems and is therefore structurally slower than SCA, whose main iteration is based on the efficient solution of the convex Problem~\ref{cap4:prob_lin}. \end{itemize} In the first two experiments we compare the computational time of IPOPT, a general purpose NLP solver~\cite{ipopt2006}, with that of Algorithm SCA over some randomly generated instances of Problem~\ref{cap4:prob_disc}. In particular, we tested two different versions of Algorithm SCA. The first version, called \emph{SCA-H} in what follows, employs the heuristic mentioned in Remark \ref{rem:heuristic} and described in Appendix~\ref{sec:heuristic}. Since the heuristic procedure may fail in some cases, we also need an LP solver as a fallback. In particular, in our experiments, we used GUROBI whenever the heuristic did not produce either a feasible solution to Problem~\ref{cap4:prob:direction_tr} or a descent direction. In the second version, called~\emph{SCA-G} in what follows, we always employed GUROBI to solve Problem~\ref{cap4:prob:direction_tr}. Concerning the choice of the NLP solver IPOPT, we remark that we chose it after a comparison with two further general purpose NLP solvers, SNOPT and MINOS, which, however, turned out to perform worse than IPOPT on this class of problems. \newline\newline\noindent In the third experiment we compare the performance of the two implemented versions of Algorithm SCA applied to two specific paths and study their behaviour as the number $n$ of discretized points increases.
\newline\newline\noindent In the fourth experiment, we compare the solutions returned by Algorithm SCA with those returned by the heuristic recently proposed in \cite{RaiCGL2019jerk}. \newline\newline\noindent Finally, in the fifth experiment, in order to illustrate the approach presented in Section~\ref{sec_higher_dim}, we consider a speed planning problem for a UAV vehicle. \newline\newline\noindent We remark that we have also made some experiments to compare the computational time of routine~\texttt{solveACCNAR} (Algorithm~\ref{cap4:alg:accnar}) with the graph-based approach proposed in~\cite{ConLocLAu19Graph} and with GUROBI for solving Problem \ref{cap4:prob_lin_accnar}. Note that, strictly speaking, Problem \ref{cap4:prob_lin_accnar} is not an LP one since its objective function is not linear. However, as discussed in~\cite{ConLocLAu19Graph}, its (monotonic non-increasing) objective function can be converted into a (monotonic non-increasing) linear function, thus making GUROBI a valid option to solve the problem. The computational experiments show that routine~\texttt{solveACCNAR} strongly outperforms both the graph-based approach and GUROBI. That was expected, since the graph-based approach and GUROBI are general purpose for a wide class of problems, while routine~\texttt{solveACCNAR} is tailored to the problem with acceleration and NAR constraints. \newline\newline\noindent Finally, we remark that rather than employing an NLP solver only once to solve the non-convex Problem~\ref{cap4:prob_disc}, we could have employed it to solve the convex Problem~\ref{cap4:prob_lin} arising at each iteration of the proposed method in place of the procedure \texttt{ComputeUpdate}, presented in this paper. However, the experiments revealed that in doing this the computing times become much larger even with respect to the single call to the NLP solver for solving the non-convex Problem~\ref{cap4:prob_disc}. This confirms that the problem-specific procedure \texttt{ComputeUpdate} is able to strongly outperform a general-purpose NLP solver when solving the convex Problem~\ref{cap4:prob_lin}. \newline\newline\noindent All tests have been performed on an IntelCore i7-8550U CPU at 1.8 GHz. Both for IPOPT and Algorithm SCA the null vector was chosen as a starting point. The parameters used within Algorithm SCA were $\varepsilon = 1e^{-8}$, $\varepsilon_1 = 1e^{-6}$ (tolerance parameters), $\rho = 4$ and $\tau = 0.25$ (trust-region update parameters). The initial trust region radius $\sigma^{(0)}$ was initialized to 1 in the first iteration $k=0$, but adaptively set equal to the size of the last update $\|{\mathbf w}^{(k)}-{\mathbf w}^{(k-1)}\|_{\infty}$ in all subsequent iterations (this adaptive choice allowed to reduce computing times by more than a half). We remark that Algorithm SCA has been implemented in MATLAB, so we expect better performance after a C/C++ implementation. \paragraph{Experiment 1} As a first experiment we compared the performance of Algorithm SCA with the NLP solver IPOPT. We made the experiments over a set of 50 different paths, each of which was discretized setting $n = 100$, $n = 500$ and $n = 1000$ sample points. The instances were generated by assuming that the traversed path was divided into seven intervals over which the curvature of the path was assumed to be constant. Thus, the $n$-dimensional upper bound vector ${\bf u}$ was generated as follows. First, we fixed $u_1 = u_n = 0$, i.e., the initial and final speed must be equal to 0. 
Next, we partitioned the set $\{2,\dots,n-1\}$ into seven subintervals $I_j, \ j \in \{1,\ldots,7\}$, which correspond to intervals with constant curvature. Then, for each subinterval we randomly generated a value $\tilde{u}_j\in(0,\tilde{u}]$, where $\tilde{u}$ is the maximum upper bound (which was set equal to 100 m$^2$s$^{-2}$). Finally, for each $j\in \{1,...,7\}$ we set $u_k = \tilde{u}_j$ $\forall k \in I_{j}$. The maximum acceleration parameter $A$ is set equal to $2.78$ ms$^{-2}$ and the maximum jerk $J$ to 0.5 ms$^{-3}$, while the path length is $s_f$ = 60 m. The values for $A$ and $J$ allow a comfortable motion for a ground transportation vehicle (see~\cite{hoberock1977survey}). The results are reported in Figure~\ref{cap:fig:ex2_result}, in which we show the minimum, maximum and average computational times. The results show that Algorithm \emph{SCA-H} is the fastest one, while \emph{SCA-G} is slightly faster than IPOPT at $n=100$ but clearly faster for a larger number of sample points $n$. In general, we observe that both \emph{SCA-H} and \emph{SCA-G} tend to outperform IPOPT as $n$ increases. As for the objective function values returned by the three algorithms, there are some differences due to numerical issues related to the choice of the tolerance parameters, but such differences are mild and never exceed 1\%. \mathbf{1}gin{figure} \centering \mathbf{1}gin{subfigure}[b]{\columnwidth} \includegraphics[width=0.6\columnwidth]{tratti_costanti_100} \caption{Samples $n = 100$} \label{fig:gull} \end{subfigure} ~ \centering \mathbf{1}gin{subfigure}[b]{\columnwidth} \includegraphics[width=0.6\columnwidth]{tratti_costanti_500} \caption{Samples $n = 500$} \label{fig:tiger} \end{subfigure} ~ \centering \mathbf{1}gin{subfigure}[b]{\columnwidth} \includegraphics[width=0.6\columnwidth]{tratti_costanti_1000} \caption{Samples $n = 1000$} \label{fig:mouse} \end{subfigure} \caption{Computational results for Experiment 1.\label{cap:fig:ex2_result}} \end{figure} \paragraph{Experiment 2} We compared again the performance of Algorithm SCA with the NLP solver IPOPT over different paths. Again, we made the experiments over a set of 50 different paths, each of which was discretized using $n = 100$, $n = 500$ and $n = 1000$ sample points. These new instances were randomly generated such that the traversed path was divided into up to five intervals over which the curvature could be zero, linear with respect to the arc-length or constant. We chose this kind of path since it is able to represent the curvature of a road trip (see \cite{FraSch:04}). An example of the generated curvature is shown in Figure~\ref{cap4:fig:curvature}. The maximum squared velocity along the path was fixed equal to 192.93 m$^2$s$^{-2}$ (corresponding to a maximum velocity of 50 km h$^{-1}$). The total length of the paths was fixed to $s_f = 1000$ m, while parameter $A$ was set equal to 0.25 ms$^{-2}$, $J$ to 0.025 ms$^{-3}$ and $A_N$ to 4.9 ms$^{-2}$. \mathbf{1}gin{figure}[!h] \centering \includegraphics[width=\columnwidth]{curvature_example} \caption{Example of a randomly generated curvature.\label{cap4:fig:curvature}} \label{fig:curvatureexample} \end{figure} The results are reported in Figure~\ref{cap4:fig:ex3_result}, in which we display the minimum, maximum and average computational times.
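For concreteness, the following sketch (in Python, added only for illustration) generates a random curvature profile of the kind just described: up to five intervals on which the curvature is zero, linear in the arc length, or constant. The cap \texttt{k\_max} on the curvature magnitude and the treatment of the break points are our own assumptions, since they are not specified above.
\begin{verbatim}
import numpy as np

def random_curvature(n, s_f=1000.0, k_max=0.05, max_pieces=5, seed=None):
    # Random piecewise curvature: each piece is zero, constant, or
    # linear with respect to the arc length (rough sketch).
    rng = np.random.default_rng(seed)
    s = np.linspace(0.0, s_f, n)
    n_pieces = int(rng.integers(1, max_pieces + 1))
    breaks = np.sort(np.concatenate(
        ([0.0, s_f], rng.uniform(0.0, s_f, n_pieces - 1))))
    k = np.zeros(n)
    for a, b in zip(breaks[:-1], breaks[1:]):
        mask = (s >= a) & (s <= b)
        kind = rng.choice(["zero", "constant", "linear"])
        if kind == "constant":
            k[mask] = rng.uniform(-k_max, k_max)
        elif kind == "linear":
            k[mask] = np.linspace(rng.uniform(-k_max, k_max),
                                  rng.uniform(-k_max, k_max), mask.sum())
        # "zero" pieces keep the initial value k = 0
    return s, k
\end{verbatim}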
\mathbf{1}gin{figure} \centering \mathbf{1}gin{subfigure}[b]{\columnwidth} \includegraphics[width=0.6\columnwidth]{tratti_lineari_100} \caption{Samples $n = 100$} \label{fig:gull} \end{subfigure} ~ \mathbf{1}gin{subfigure}[b]{\columnwidth} \includegraphics[width=0.6\columnwidth]{tratti_lineari_500} \caption{Samples $n = 500$} \label{fig:ex3_500} \end{subfigure} ~ \mathbf{1}gin{subfigure}[b]{\columnwidth} \includegraphics[width=0.6\columnwidth]{tratti_lineari_1000_1} \caption{Samples $n = 1000$} \label{fig:mouse} \end{subfigure} \caption{Computational results for Experiment 2.\label{cap4:fig:ex3_result}} \end{figure} With respect to the paths of Experiment 1, for those in Experiment 2 the superiority of both versions of Algorithm SCA with respect to IPOPT is even clearer, and also in this case the superiority becomes more and more evident as the number of sampled points $n$ increases. For these paths \emph{SCA-H} still performs quite well but it can not be claimed to be superior with respect to \emph{SCA-G}. Moreover, the solutions returned by \emph{SCA-H} are in some cases poorer (in terms of objective function values) with respect to those returned by \emph{SCA-G}, although the difference is still very mild and never exceeds 1\%. We can give two possible motivations: \mathbf{1}gin{itemize} \item The directions computed by the heuristic procedure are not necessarily good descent directions, so routine \texttt{computeUpdate} slowly converged to a solution. \item The heuristic procedure often failed and it was in any case necessary to call GUROBI. \end{itemize} For what concerns IPOPT, besides being slower, we should also remark that for $n=100$, it is sometimes unable to converge and returns poor solutions whose objective function values exceed by more than 100\% those returned by \emph{SCA-H} and \emph{SCA-G}. \paragraph{Experiment 3} In our third experiment we compared the performance of the two proposed approaches (\emph{SCA-H} and \emph{SCA-G}) over two possible driving scenarios as the number $n$ of samples increases. As a first example we considered a continuous curvature path composed of a line segment, a clothoid, a circle arc, a clothoid and a final line segment (see Figure~\ref{cap1:fig:geometry_path}). The minimum-time velocity planning on this path, whose total length is $s_{f} = 90$ m, is addressed with the following data. The maximum squared velocity is 225 m$^2$s$^{-2}$, the longitudinal acceleration limit is $A = 1.5$ ms$^{-2}$, the maximal normal acceleration is $A_N = 1$ ms$^{-2}$, while for the jerk constraints we set $J=1$ ms$^{-3}$. Next, we considered a path of length $s_f = 60$ m (see Figure~\ref{fig:sinpath}) whose curvature was defined according to the following function \[ k(s) = \frac{1}{5}\sin\left(\frac{s}{10}\right),\quad s\in[0,s_f], \] and parameter $A$, $A_N$ and $J$ were set equal to 1.39 ms$^{-2}$, 4.9 ms$^{-2}$ and 0.5 ms$^{-3}$, respectively. The computational results are reported in Figure~\ref{fig:minaripathtimes} and Figure~\ref{fig:jotatime} for values of $n$ that grows from 100 to 1000. \mathbf{1}gin{figure}[!h] \centering \includegraphics[width=0.8\columnwidth]{SinPath.eps} \caption{Experiment 3: second path.} \label{fig:sinpath} \end{figure} Figures~\ref{fig:minaripathtimes} and~\ref{fig:jotatime} show two opposite results which confirm what we have already observed about Experiments 1 and 2, namely, that the performance of \emph{SCA-H} and \emph{SCA-G} depend on the path. 
In particular, it seems that the heuristic performs in a poorer way when the number of points of the upper bound vector at which PAR constraints are violated (which will be called critical points in Section~\ref{sec:heuristic}), tends to be large, which is the case for the second instance. Note that, although not reported here, the computing times of IPOPT on these two paths are larger than those of \emph{SCA-H} and \emph{SCA-G}, and, as usual, the gap increases with $n$. Moreover, for the second path IPOPT was unable to converge for $n=100$ and returned a solution which differed by more than 35\% with respect to those returned by \emph{SCA-H} and \emph{SCA-G}. \mathbf{1}gin{figure}[!h] \centering \includegraphics[width=0.8\columnwidth]{minari_path_times_1} \caption{ Computational times as a function of the number $n$ of sample points for the two tested versions of Algorithm SCA over the path displayed in Figure~\ref{cap1:fig:geometry_path}} \label{fig:minaripathtimes} \end{figure} \mathbf{1}gin{figure}[!h] \centering \includegraphics[width=0.8\columnwidth]{jotatime_1} \caption{Computational times as a function of the number $n$ of sample points for the two tested versions of Algorithm SCA over the path displayed in Figure~\ref{fig:sinpath}} \label{fig:jotatime} \end{figure} \paragraph{Experiment 4} In this experiment we compared the performance of our approach with the heuristic procedure recently proposed in \cite{RaiCGL2019jerk}. We made different tests with the instances discussed in Experiments 1 and 2. Algorithms \emph{SCA-H} and \emph{SCA-G} have computing times comparable (actually, slightly better) with respect to that heuristic, and the quality of the final solutions is 5\%-10\% higher. Note that for a company a 5-10\% gain means the opportunity of completing 5-10\% more tasks during the day (taking into account that these algorithms are not only run once a day but are repeatedly run throughout the day, e.g., to plan the activities of LGVs in a depot), which is a considerable gain from an economic point of view. Rather than reporting detailed computational results, we believe that it is more instructive to discuss a single representative instance, taken from Experiment 1 with $n=100$, which reveals the qualitative difference between the solutions returned by Algorithm SCA and those returned by the heuristic. In this instance we set $A = 2.78$ ms$^{-2}$, while for the jerk constraints we set $J=2$ ms$^{-3}$. The total length of the path is $s_f = 60$ m. The maximum velocity profile is the piecewise constant black line in Figure \ref{fig:compareheur}. In the same figure we report in red the velocity profile returned by the heuristic and in blue the one returned by Algorithm SCA. The computing time for the heuristic is 45ms, while for Algorithm SCA is 39ms. The final objective function value (i.e., the travelling time along the given path) is 15.4s for the velocity profile returned by the heuristic, and 14.02s for the velocity profile returned by Algorithm SCA. From the qualitative point of view it can be observed in this instance (and similar observations hold for the other instances we tested) that the heuristic produces velocity profiles whose local minima coincide with those of the maximum velocity profile. For instance, in the interval between 10m and 20m we notice that the velocity profile returned by the heuristic coincides with the maximum velocity profile in that interval. 
Instead, the velocity profile generated by Algorithm SCA generates velocity profiles which fall below the local minima of the maximum velocity profile, but this way they are able to keep the velocity higher in the regions preceding and following the local minima of the maximum velocity profile. Again referring to the interval between 10m and 20m, we notice that the velocity profile computed by Algorithm SCA falls below the maximum velocity profile in that region and, thus, below the velocities returned by the heuristic, but this way velocities in the region before 10m and in the one after 20m are larger with respect to those computed by the heuristic. \mathbf{1}gin{figure}[!h] \centering \includegraphics[width=1\columnwidth]{CompareScaHeur} \caption{Velocity profile returned by the heuristic proposed in \cite{RaiCGL2019jerk} (red line) and by Algorithm SCA (blue line). The black line is the maximum velocity profile.} \label{fig:compareheur} \end{figure} \paragraph{Experiment 5} As a final experiment, to illustrate the approach presented in Section~\ref{sec_higher_dim}, we consider a speed planning problem for a UAV vehicle. The configuration space is $SE(3)$, the set of rigid transformations in $\mathbb Real^3$. A configuration of $SE(3)$ is represented by a couple $(R,p)$ ,with $R \in SO(3)$ (the set of rotations in $\mathbb Real^3$) and $p \in \mathbb Real^3$. The rigid transformation associated to couple $(R,p)$ is given by map $T: \mathbb Real^3 \to \mathbb Real^3$, $T(x)= R x+p$. Note that $R$ and $p$ are associated, respectively, to the vehicle rotation and translation. Let $(R',p')$ be an element of the tangent space of $SE(3)$ at $(R,p)$. Then, $R'$ can be written as $R'=R \Omega$, where $\Omega=\left[\mathbf{1}gin{array}{lll} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1\\ -\omega_2 & \omega_1 & 0 \end{array}\right]$ is a skew symmetric matrix and $\omega =[\omega_1,\omega_2,\omega_3]^T \in \mathbb Real^3$ is a vector that contains the angular velocities with respect to the vehicle frame. In our computations, we considered norm $\|(R',p')\|^2=\|\omega\|^2+ \|p'\|^2$. Namely, the squared norm of $(R',p')$ is the sum of the squared angular and translational velocities. Obviously, other choices are possible. We randomly defined a path in $SE(3)$ with the following procedure. We first picked independent random vectors $p_i \in\mathbb Real^3$, $i=0,\ldots,3$, in which each component of $p_i$ is chosen from a uniform distribution in interval $[0,1000]$. Then, we interpolated these points by defining a spline curve $\eta: [0,3] \to \mathbb Real^3$ of order $5$ such that $\eta(i)=p_i$, $i=0,\ldots,3$. We associate each vector $r \in \mathbb Real^3$ to a rotation matrix by the exponential map. That is, we define function $S:\mathbb Real^3 \to \mathbb Real^{3 \times 3}$ such that, if $r=[x,y,z]^T$, $S(x)=\left[\mathbf{1}gin{array}{lll} 0 &-z &y \\z &0&-x\\-y&x&0 \end{array}\right]$ and then set $M: \mathbb Real^3 \to SO(3)$, $M(x)=e^{S(x)}$. Set $e_1=[1,0,0]^T$, the unit vector aligned with the $x$-axis. We defined four vectors $r_i \in\mathbb Real^3$, $i=1\,\dots,4$, by solving equation $T(r_i) e_1=\frac{\eta'(i)}{\|\eta'(i)\|}$. In this way, for $i=1,\ldots,4$, the $x$-axis of the vehicle frame is aligned to the tangent of $\eta$ at $i$. Finally, we defined a second spline curve $\mu: [0,3] \to \mathbb Real^3$ of order $5$ that satisfies conditions $\mu(i)=r_i$, $i=0,\ldots,3$. 
Then, the reference path is given by $\gamma: [0,3] \to SO(3) \times \mathbb Real^3$, $\gamma(s)=(M(\mu(s)),\eta(s))$, after arc-length reparameterization. Figure~\ref{fig:path_UAV_example} presents a possible reference path obtained with this method. Note that this procedure is just a simple trick for determining a random path in $SE(3)$ in order to test the procedure presented in Section~\ref{sec_higher_dim}. In general, the determination of a reference path in $SE(3)$ for a UAV is a complex task that has to take into account multiple factors, such as the actual dynamic model of the vehicle and the actuator limits. However, addressing this problem is outside the scope of this work. Indeed, the random path $\gamma$ obtained with the method presented here may not be a valid reference for a UAV. Figures~\ref{fig:ex5_100}--\ref{fig: ex5_1000} show the computation times for algorithms \emph{SCA-H}, \emph{SCA-G} and IPOPT, for $n \in \{100,500,1000\}$. We applied the method presented in Section~\ref{sec_higher_dim} with $\hat V=50$, $\hat A=5$, $\hat J=1$, $A=\frac{\hat A}{2}$, $J=\frac{\hat J}{2}$. The results are qualitatively similar to previous experiments. \mathbf{1}gin{figure} \centering \includegraphics[width=\columnwidth]{aereo_TRO.eps} \caption{Example UAV reference path} \label{fig:path_UAV_example} \end{figure} \mathbf{1}gin{figure} \centering \mathbf{1}gin{subfigure}[b]{\columnwidth} \includegraphics[width=0.6\columnwidth]{ex5_100.eps} \caption{Samples $n = 100$} \label{fig:ex5_100} \end{subfigure} ~ \mathbf{1}gin{subfigure}[b]{\columnwidth} \includegraphics[width=0.6\columnwidth]{ex5_500.eps} \caption{Samples $n = 500$} \label{fig: ex5_500} \end{subfigure} ~ \mathbf{1}gin{subfigure}[b]{\columnwidth} \includegraphics[width=0.6\columnwidth]{ex5_1000.eps} \caption{Samples $n = 1000$} \label{fig: ex5_1000} \end{subfigure} \caption{Computational results for Experiment 5.\label{cap:fig:ex5_result}} \end{figure} \mathbf{1}gin{appendix} \section{Proof of Theorem \ref{thm:convergence}} \label{sec:appconvergence} In order to prove the theorem, we first need to prove some lemmas. \mathbf{1}gin{lem} \label{lem:convfun} The sequence $\{f({\bf w}^{(k)})\}$ of the function values at points generated by Algorithm SCA converges to a finite value. \end{lem} \mathbf{1}gin{proof} The sequence is nonincreasing and bounded from below, e.g., by the value $f({\bf u}_B)$, in view of the fact that the objective function $f$ is monotonic decreasing. Thus, it converges to a finite value. \end{proof} Next, we need the following result based on strict convexity of the objective function $f$. \mathbf{1}gin{lem} \label{lem:strconv} For each $\delta>0$ sufficiently small, it holds that \mathbf{1}gin{equation} \label{eq:auxstrconv} \mathbf{1}gin{array}{ll} \min & \left\{\max\{f({\bf x}), f({\bf y})\}-f\left(\frac{{\bf x}+{\bf y}}{2}\right)\ :\ \right. \\ [6pt] & \left.{\bf x}, {\bf y}\in \Omega, \ \|{\bf x}-{\bf y}\|\geq \delta\right\}\geq \varepsilon_{\delta}>0. \end{array} \end{equation} \end{lem} \mathbf{1}gin{proof} Due to strict convexity, it holds that $\forall {\bf x}\neq {\bf y}$, $$ \max\{f({\bf x}), f({\bf y})\}-f\left(\frac{{\bf x}+{\bf y}}{2}\right)>0. $$ Moreover, the function is continuous. Next, we observe that the region $$ \left\{{\bf x}, {\bf y}\in \Omega:\ \|{\bf x}-{\bf y}\|\geq \delta\right\}, $$ is a compact set. Thus, by the Weierstrass Theorem, the minimum in (\ref{eq:auxstrconv}) is attained and it must be strictly positive, as we wanted to prove.
\end{proof} Finally, we prove that the sequence of points generated by Algorithm SCA also converges to some point, which is feasible for Problem \ref{cap4:prob_disc}. \mathbf{1}gin{lem} \label{lem:seqpoint} It holds that $$ \|\delta {\bf w}^{(k)}\|\rightarrow 0. $$ \end{lem} \mathbf{1}gin{proof} Let us assume, by contradiction, that over some infinite subsequence with index set ${\cal K}$, it holds that $\|\delta {\bf w}^{(k)}\|\geq 2\rho>0$ for all $k\in {\cal K}$, i.e., \mathbf{1}gin{equation} \label{eq:newpoint} \|{\bf w}^{(k+1)}-{\bf w}^{(k)}\|\geq 2\rho>0, \end{equation} where ${\bf w}^{(k+1)}={\bf w}^{(k)}+\delta {\bf w}^{(k)}$. Over this subsequence it holds, by strict convexity, that \mathbf{1}gin{equation} \label{eq:strdecr} f({\bf w}^{(k+1)})\leq f({\bf w}^{(k)})-\xi\ \ \ \ \ \forall k\in {\cal K}, \end{equation} for some $\xi>0$. Indeed, it follows by optimality of ${\bf w}^{(k)}+\delta {\bf w}^{(k)}$ for Problem \ref{cap4:prob_lin} and convexity of $f$ that $$ f({\bf w}^{(k+1)})\leq f\left(\frac{{\bf w}^{(k+1)}+{\bf w}^{(k)}}{2}\right)\leq f({\bf w}^{(k)}), $$ so that $$ \max\left\{f({\bf w}^{(k)}), f({\bf w}^{(k+1)})\right\}=f({\bf w}^{(k)}). $$ Then, it follows from (\ref{eq:newpoint}) and Lemma \ref{lem:strconv} that we can choose $\xi=\varepsilon_{\rho}>0$. Thus, since (\ref{eq:strdecr}) holds infinitely often, we would have $f({\bf w}^{(k)})\rightarrow -\infty$, which, however, is not possible in view of Lemma \ref{lem:convfun}. \end{proof} As a consequence of Lemma \ref{lem:seqpoint} it also holds that \mathbf{1}gin{equation} \label{eq:convergpoint} {\bf w}^{(k)}\rightarrow \boldsymbol{a}r{{\bf w}}\in \Omega. \end{equation} Indeed, all points ${\bf w}^{(k)}$ belong to the compact feasible region $\Omega$, so that the sequence $\{{\bf w}^{(k)}\}$ admits accumulation points. However, due to Lemma \ref{lem:seqpoint}, the sequence cannot have distinct accumulation points. \newline\newline\noindent Now, let us consider the compact reformulation (\ref{eq:origcompact}) of Problem \ref{cap4:prob_disc} and the related linearization (\ref{eq:lincompact}), equivalent to Problem \ref{cap4:prob_lin} with the linearized constraints \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1}. Since the latter is a convex problem with linear constraints, its local minimizer $\delta {\bf w}^{(k)}$ (unique in view of strict convexity of the objective function) fulfills the following KKT conditions \mathbf{1}gin{equation} \label{eq:sysKKTlin} \mathbf{1}gin{array}{l} \nabla f({\bf w}^{(k)}+\delta {\bf w}^{(k)}) +\boldsymbol{\mu}_k^\top \nabla {\bf c}({\bf w}^{(k)})={\bf 0} \\ [6pt] {\bf c}({\bf w}^{(k)}) +\nabla {\bf c}({\bf w}^{(k)}) \delta {\bf w}^{(k)} \leq 0 \\ [6pt] \boldsymbol{\mu}_k^\top\left({\bf c}({\bf w}^{(k)}) +\nabla {\bf c}({\bf w}^{(k)}) \delta {\bf w}^{(k)} \right)=0 \\ [6pt] \boldsymbol{\mu}_k\geq {\bf 0}, \end{array} \end{equation} where $\boldsymbol{\mu}_k$ is the vector of Lagrange multipliers.
Now, by taking the limit of system (\ref{eq:sysKKTlin}), possibly over a subsequence, in order to guarantee convergence of the multiplier vectors $\boldsymbol{\mu}_k$ to a vector $\boldsymbol{a}r{\boldsymbol{\mu}}$, in view of Lemma \ref{lem:seqpoint} and of (\ref{eq:convergpoint}), we have that $$ \mathbf{1}gin{array}{l} \nabla f(\boldsymbol{a}r{{\bf w}}) +\boldsymbol{a}r{\boldsymbol{\mu}}^\top \nabla {\bf c}(\boldsymbol{a}r{{\bf w}})={\bf 0} \\ [6pt] {\bf c}(\boldsymbol{a}r{{\bf w}}) \leq 0 \\ [6pt] \boldsymbol{a}r{\boldsymbol{\mu}}^\top {\bf c}(\boldsymbol{a}r{{\bf w}})=0 \\ [6pt] \boldsymbol{a}r{\boldsymbol{\mu}}\geq{\bf 0}, \end{array} $$ or, equivalently, the limit point $\boldsymbol{a}r{{\bf w}}$ is a KKT point of Problem \ref{cap4:prob_disc}, as we wanted to prove. \section{Proof of Proposition \ref{cap4:prop:salvezza}} \label{sec:feasdir} First, we notice that if we prove the result for the tighter constraints \eqref{cap4:con:linnar1}-\eqref{cap4:con:linpar1}, then it must also hold for constraints \eqref{cap4:eq:nar_lin}-\eqref{cap4:eq:par_lin}. So we prove the result only for the former. By definition (\ref{eq:computedir}), $\delta wb$ satisfies the acceleration and NAR constraints, so that $$ \mathbf{1}gin{array}{l} \delta w_j\leq \delta w_{j+1}+ b_{D_j} \\ [8pt] \delta w_j\leq \delta w_{j-1}+ b_{A_j} \\ [8pt] \delta w_j \leq \mathbf{1}ta_j(\delta w_{j+1}+\delta w_{j-1})+ b_{N_j} \\ [8pt] \delta w_j\leq y^*_j. \end{array} $$ At least one of these constraints must be active, otherwise $\delta w_j$ could be increased, thus contradicting optimality. If the active constraint is $ \delta w_j \leq \mathbf{1}ta_j(\delta w_{j+1}+\delta w_{j-1})+ b_{N_j}$, then constraint \eqref{cap4:con:linpar1} can be rewritten as follows $$ (\theta_j-\mathbf{1}ta_j)(\delta w_{j+1}+\delta w_{j-1})\leq b_{P_j}+b_{N_j}. $$ By recalling the definitions of $\theta_j, \mathbf{1}ta_j, b_{P_j}$, and $b_{N_j}$, it can be seen that this is equivalent to (\ref{cap4:ass:prop_salvezza}) and, thus, the constraint is satisfied under the given assumption. If $\delta w_j = y^*_j$, then $$ \theta_j(\delta w_{j-1}+\delta w_{j+1})\leq\theta_j( y^*_{j-1}+y^*_{j+1})\leq y^*_j + b_{P_j} = \delta w_j + b_{P_j} , $$ where the second inequality follows from the fact that ${\bf y}^*$ satisfies the PAR constraints. Now, let $\delta w_j = \delta w_{j+1}+ b_{D_j}$ (the case when $\delta w_j\leq \delta w_{j-1}+ b_{A_j}$ is active can be dealt with in a completely analogous way). First we observe that $\delta w_j\geq \delta w_{j-1}- b_{D_{j-1}}$. Then, \mathbf{1}gin{equation*} 2\delta w_j \ge \delta w_{j+1} + \delta w_{j-1} + b_{D_j} - b_{D_{j-1}}. \end{equation*} In view of the definitions of $b_{D_j}$ and $ b_{D_{j-1}}$ this can also be written as \mathbf{1}gin{equation} \label{cap4:aux_ineq} 2\delta w_j \ge \delta w_{j+1} + \delta w_{j-1} + w^{(k)}_{j+1} -2 w^{(k)}_{j}+ w^{(k)}_{j-1}. \end{equation} Now we recall that $$ \mathbf{1}gin{array}{l} \theta_j = \frac{1}{2} + \frac{\Delta}{2\left(w^{(k)}_{j+1} + w^{(k)}_{j-1}\right)^{\frac{3}{2}}} \\ [6pt] b_{P_j}=\frac{\Delta}{\left(w^{(k)}_{j+1} + w^{(k)}_{j-1}\right)^{\frac{1}{2}}} - \frac{1}{2}\left(w^{(k)}_{j+1} -2 w^{(k)}_{j}+ w^{(k)}_{j-1}\right), \end{array} $$ where $\Delta= \sqrt{2}h^2J$. Then, \eqref{cap4:con:linpar1} can be rewritten as \[ 2\delta w_j \ge \delta w_{j+1} + \delta w_{j-1} + \frac{\Delta}{\left(w^{(k)}_{j+1} + w^{(k)}_{j-1}\right)^{\frac{3}{2}}} (\delta w_{j+1} + \delta w_{j-1})-2b_{P_j}. 
\] Now, taking into account~\eqref{cap4:aux_ineq}, such inequality certainly holds if $$ w^{(k)}_{j+1} -2 w^{(k)}_{j}+ w^{(k)}_{j-1}\geq \frac{\Delta}{\left(w^{(k)}_{j+1} + w^{(k)}_{j-1}\right)^{\frac{3}{2}}} (\delta w_{j+1} + \delta w_{j-1})-2b_{P_j}. $$ Recalling the definition of $b_{P_j}$, the above inequality can be rewritten as \[ \frac{2\Delta}{\sqrt{w^{(k)}_{j+1} + w^{(k)}_{j-1}}}\ge \frac{\Delta}{\left(w^{(k)}_{j+1} + w^{(k)}_{j-1}\right)^{\frac{3}{2}}} (\delta w_{j+1} + \delta w_{j-1}), \] and it holds if $2\left(w^{(k)}_{j+1} + w^{(k)}_{j-1}\right) \ge(\delta w_{j+1} + \delta w_{j-1}) $, as we wanted to prove. \section{A heuristic procedure for computing a suboptimal descent direction\label{cap4:subsec:heur}} \label{sec:heuristic} We first need to introduce a definition. \mathbf{1}gin{defn} Given a vector $\mathbf{d}\in\mathbb Real^N$ the set of critical points associated to such vector is \[ Q(\mathbf{d}) = \left\{ p \ : \ \eta_i (d_{p-1} + d_{p+1}) -d_{p} - \mathbf{1}ta_p > 0 \right\}, \] i.e., the set of points where constraints~(\ref{cap4:con:par_dir_tr}) are violated at $\mathbf{d}$. \end{defn} Now, the heuristic is detailed in Algorithm~\ref{cap4:alg:heur}. Its purpose is to sequentially remove all the critical points $p$ of the upper bound ${\mathbf{\boldsymbol{a}r{u}_B}}$ by activating a sequence of constraints~\eqref{cap4:con:par_dir_tr} in the neighbourhood of $p$ itself. We initially set $\mathbf{d} = \mathbf{\boldsymbol{a}r{u}_{B}}$ and compute the related set $Q(\mathbf{\boldsymbol{a}r{u}_{B}})$ of critical points. After that, we consider the most violated critical point $p\in Q(\mathbf{\boldsymbol{a}r{u}_{B}})$ and define $\Delta_p$ as its associated violation. Then, we define the \emph{propagation function} from $p$. To this aim we first define a function ${\bf z}: [0,1] \rightarrow \mathbb Real^n$ such that: \mathbf{1}gin{equation}\label{cap4:eq:propag} z_j(\alpha; \mathbf{d},p) = \mathbf{1}gin{cases} d_p & j=p\\ d_{p-1} - \alpha\Delta_p & j=p-1\\ d_{p+1} - (1-\alpha)\Delta_p & j=p+1\\ \eta_{j}^{-1}(\mathbf{1}ta_{j} + z_{j}(\alpha; \mathbf{d},p)) - z_{j+1}(\alpha; \mathbf{d},p), & j <p \\ \eta_{j}^{-1}(\mathbf{1}ta_{j} + z_{j}(\alpha; \mathbf{d},p)) - z_{j-1}(\alpha; \mathbf{d},p), & j >p. \\ \end{cases} \end{equation} Then, let $$ \mathbf{1}gin{array}{l} k_1=\max\{k<p\ :\ z_k(\alpha; \mathbf{d},p)\geq d_k\} \\ [8pt] k_2=\min\{k>p\ :\ z_k(\alpha; \mathbf{d},p)\geq d_k\}. \end{array} $$ We define the propagation function ${\bf x}(\cdot; \mathbf{d}, p): [0,1] \rightarrow \mathbb Real^n$ around $p$ as follows: \mathbf{1}gin{equation}\label{cap4:eq:propag} x_j(\alpha; \mathbf{d}; p) = \mathbf{1}gin{cases} z_j(\alpha; \mathbf{d},p) & j=k_1+1,\ldots,k_2-1\\ d_j & \mbox{otherwise.} \end{cases} \end{equation} Basically, ${\bf x}$ decreases the components of the current vector $\mathbf{d}$ around the critical point $p$ in order to remove the violations locally, without decreasing all other components. The choice of not decreasing the remaining components comes from the fact that the objective function \eqref{cap4:obj:d} is monotonic non-increasing. After having defined the propagation function, we search for the best $\alpha$ that, in addition to activating a sequence of constraints~\eqref{cap4:con:par_dir_tr} around $p$, gives the best possible solution with respect to the objective function~\eqref{cap4:obj:d}. 
Then, we consider the following problem: \mathbf{1}gin{equation}\label{cap4:prob:under-estimator} \alpha^*\in \arg \min_{\alpha\in[0,1]} \left\{ -\boldsymbol{\nu}^T{{\mathbf x}}(\alpha), | , {{\mathbf x}}(\alpha) \ge \mathbf{\boldsymbol{a}r{l}_B} \right\}. \end{equation} We employ the ternary search algorithm to efficiently find its optimal solution~\cite{cormen2009introduction}. Note that the ternary search needs to consider the lower bound constraints. Actually, this issue can be easily overcome by setting to $+\infty$ the value of the objective function if ${\bf x}(\alpha) \not\geq {\bf {\boldsymbol{a}r{l}}_{B}}$. If an optimal solution $\alpha^*$ exists, we set $\mathbf{d} = {\mathbf x}(\alpha^*)$, compute its set of critical points $Q(\mathbf{d})$, and repeat the above procedure until $Q$ is empty. If an optimal solution $\alpha^*$ does not exist, i.e., problem~\eqref{cap4:prob:under-estimator} is unfeasible, then we remove the critical point $p$ from $Q$ and repeat the above procedure by considering the next most violated critical point. \mathbf{1}gin{remark}\label{cap4:rem:heur} The procedure described in this section is a heuristic one since we do not have any proof of correctness and optimality. Moreover, it may happen that: \mathbf{1}gin{itemize} \item $\alpha^*$ does not exist for all critical points contained in the set $Q$, i.e., we are unable to remove all violations; \item the solution returned by the heuristic might not be a descent direction, i.e., $-\boldsymbol{\nu}^T\mathbf{d} \ge 0$. \end{itemize} For this reason, if one of these two cases occurs, we do not consider the result computed by the heuristic and solve Problem~\ref{cap4:prob:direction_tr} by using an LP solver. \end{remark} \mathbf{1}gin{algorithm}[!h] \caption{The heuristic procedure to compute a descent direction ${\mathbf{d}}$ for Problem~\ref{cap4:prob:direction_tr}\label{cap4:alg:heur}} \SetKwProg{function}{Function}{:}{} Set $\mathbf{d} = \mathbf{\boldsymbol{a}r{u}_B}$\; Let $U=Q({\mathbf{d}})$\; \While{$U \ne \emptyset$}{ Let $p$ be the most violated critical point in $U$\; Let ${\bf x}(\alpha; \mathbf{d}; p)$ be defined as in (\ref{cap4:eq:propag})\; \If{$ \exists \ \alpha^* \in \arg\min_{\alpha\in[0,1]} \left\{ -\boldsymbol{\nu}^T {\mathbf x}(\alpha; \mathbf{d}, p) \, | \, {\mathbf x}(\alpha; \mathbf{d},p) \ge \mathbf{\boldsymbol{a}r{l}_B} \right\}$} { Set $\mathbf{d}= {\mathbf x}(\alpha^*; \mathbf{d}, p)$\; Let $U=Q(\mathbf{d})$ } \Else{ $U = U \boldsymbol{a}ckslash \{p\}$ } } \mathbb Return $\mathbf{d}$ \end{algorithm} \end{appendix} \end{document}
\begin{document} \title{On commutative homogeneous vector bundles attached to nilmanifolds} \textsc{Abstract.} The notion of Gelfand pair $(G,K)$ can be generalized by considering homogeneous vector bundles over $G/K$ instead of the homogeneous space $G/K$ and matrix-valued functions instead of scalar-valued functions. This gives the definition of commutative homogeneous vector bundles. Being a Gelfand pair is a necessary condition for being a commutative homogeneous vector bundle. In the case when $G/K$ is a nilmanifold having square-integrable representations, a big family of commutative homogeneous vector bundles was determined in \cite{rocio2}. In this paper, we complete that classification. \pagestyle{headings} \pagenumbering{arabic} \section{Introduction} A homogeneous space $G/K$ is called commutative, and the pair $(G, K)$ is called a Gelfand pair, when $G$ is a locally compact group, $K$ is a compact subgroup of $G$, and the convolution algebra $L_0^1(G)$ is commutative. Here $L_0^1 (G)$ denotes the Banach algebra of $L^1$ functions on $G$ satisfying $f(kxk' ) = f(x)$ for $x \in G$ and $k, k' \in K$, where the product is the usual convolution $(f * h)(g) = \int_G f(x)h(x^{-1} g)dx$ on $G$ (where $dx$ denotes the Haar measure on $G$). By a nilmanifold we mean a differentiable manifold on which a nilpotent Lie group acts transitively. By a commutative nilmanifold we mean a commutative space $G/K$ such that $G$ is a Lie group and a closed nilpotent subgroup $N$ of $G$ acts transitively (cf. e.g. \cite{Wolf2}). In that notation, if $G/K$ is simply connected then $N$ acts simply transitively on $G/K$ and $G$ is the semidirect product $K\ltimes N$ (cf. [12, Theorem 4.2] or \cite[Theorem 13.1.6]{Wolf}). It is shown in \cite{BJR} that if $(K\ltimes N,K)$ is a Gelfand pair then $N$ must be abelian or two-step nilpotent. Those definitions are associated to the commutative property of the algebra $L^1_0(G)$ of scalar-valued functions. For the vector-valued case we have analogous definitions. Again, let $G$ be a Lie group and $K$ a compact subgroup of $G$. It is well known that all the homogeneous vector bundles over the homogeneous space $G/K$ are described by taking finite dimensional representations $(\tau, W_\tau)$ of $K$ (cf. \cite[Section 5.2]{Wal}). Indeed, if we consider the equivalence relation over $G\times W_\tau$ given by $(gk,w)\sim(g,\tau(k)w)$ for $g\in G$, $w\in W_\tau$ and $k\in K$, the space of equivalence classes $E_\tau$ has an structure of homogeneous vector bundle over $G/K$. Moreover, each homogeneous vector bundle over $G/K$ is isomorphic to $E_\tau$ for some representation $(\tau,W_\tau)$ of $K$ of finite dimension. The space of compactly supported smooth sections of the homogeneous vector bundle $E_\tau$ is naturally identified with the space of functions $u$ on $\mathcal{D}(G, W_\tau)$ such that $u(xk)=\tau(k)^{-1}u(x)$ for $k\in K$ and $x\in G$. 
It follows from the Schwartz kernel theorem that every linear operator $T$, continuous with respect to the standard topologies, mapping $\mathcal{D}$-sections of $E_\tau$ into $\mathcal{D}^\prime$-sections of $E_\tau$ and commuting with the action of $G$ on $E_\tau$, can be represented in a unique way as a convolution operator \begin{equation}\label{ope} \big(T(u)\big)(g)=u*F(g):=\int_G F(x^{-1}g)u(x) \ dx \qquad \forall g\in G, \end{equation} where $F$ is a distribution, $F\in \mathcal{D}^\prime(G,\mathrm{End}(W_\tau))$, satisfying \begin{equation}\label{F bi tau equiv} F(k_1xk_2)=\tau(k_2)^{-1}F(x)\tau(k_1)^{-1} \qquad \forall k_1,k_2\in K, \ x\in G. \end{equation} In particular, operators $T = T_F$ as in \eqref{ope} with $F$ a function in $L^1(G,\mathrm{End}(W_\tau))$ satisfying \eqref{F bi tau equiv} can be composed with each other and $$T_{F_2}\circ T_{F_1}=T_{F_1*F_2}.$$ We call $L^1_{\tau,\tau}(G,\mathrm{End}(W_\tau))$ the convolution algebra of integrable matrix-valued functions with property \eqref{F bi tau equiv}. Let $(\tau, W_\tau)$ be an irreducible unitary representation of $K$. Let $\widehat{K}$ denote the set of equivalence classes of irreducible unitary representations of the group $K$. The homogeneous vector bundle $E_\tau$ is called commutative, and the triple $(G,K,\tau)$ is also called commutative, when the algebra $L^1_{\tau,\tau}(G,\mathrm{End}(W_\tau))$ is commutative. In particular, $(G,K)$ is a Gelfand pair when $(G,K,\tau)$ is a commutative triple with $\tau$ the trivial representation of $K$; in this case $L^1_{\tau,\tau}(G,\mathrm{End}(W_\tau))=L^1_0(G)$. It is shown in \cite{Fulvio} that if $G/K$ is connected and if there exists $\tau \in\widehat{K}$ such that $(G, K,\tau)$ is a commutative triple then $(G,K)$ is a Gelfand pair. Therefore, in most cases, being a Gelfand pair is a necessary condition to give rise to commutative triples. When $\tau$ is a character of $K$ and the triple $(G,K,\tau)$ is commutative, these cases are also known as twisted Gelfand pairs with respect to the character $\tau$. Finally, we say that $(G,K)$ is a strong Gelfand pair if $(G,K,\tau)$ is commutative for every $\tau\in\widehat{K}$. In the case where $L^1_{\tau,\tau}(G,\mathrm{End}(W_\tau))$ is a commutative algebra, the spherical analysis consists of the computation of the continuous characters of such a convolution algebra, and this gives rise to a kind of simultaneous ``diagonalization'' of all the operators $T_F$ (cf. e.g. \cite{rocio, Koranyi}). From now on we concentrate on homogeneous vector bundles associated to nilmanifolds. The classification of Gelfand pairs $(K\ltimes N,K)$ with $N$ nilpotent was completed by E. Vinberg. The notable article \cite{Vinberg} exhibits all the Gelfand pairs $(K\ltimes N,K)$ that are irreducible and maximal. On the one hand, the irreducibility means that the center $\mathfrak{z}$ of the Lie algebra $\mathfrak{n}$ of $N$ must be $\mathfrak{z}=[\mathfrak{n},\mathfrak{n}]$ and $K$ acts irreducibly on $\mathfrak{n}/\mathfrak{z}$. On the other hand, the maximality implies that the pair $(K\ltimes N, K)$ does not have non-trivial central reductions.
This means that $\mathfrak{z}$ does not have $K$-invariant subspaces $\mathfrak{s}$ such that if $\tilde{\mathfrak{s}}$ denotes the orthogonal complement of $\mathfrak{s}$ on $\mathfrak{z}$ and $\tilde{N}$ is the simply connected Lie group with Lie algebra $\tilde{\mathfrak{n}}:=\tilde{\mathfrak{s}}\oplus \left(\mathfrak{n}/\mathfrak{z}\right)\simeq \mathfrak{n}/\mathfrak{s}$ then $(K\ltimes\tilde{N}, K)$ is a Gelfand pair (cf. \cite[Section 13.4A, p. 320]{Wolf}). Firstly, J. Lauret constructs a family of Gelfand pairs $(K\ltimes N, K)$ considering on $N$ a Riemannian structure (cf. \cite{Lauret}). Here $K\ltimes N$ is the group of isometries of $N$. When $N$ is particularly the Heisenberg group or the euclidean space $\mathbb{R}^n$, the commutative triples were determined in \cite{Fulvio}. For the corresponding (matrix) spherical analysis see that article and also \cite{rocio}. When additionally $N$ has square integrable representations, all the commutative triples that come from these Gelfand pairs were determined in the recent article \cite{rocio2}. This article has the aim of completing the classification of commutative homogeneous vector bundles associated to nilmanifolds, that is, commutative triples of the form $(K\ltimes N,K,\tau)$ with $N$ a nilpotent Lie group. We will analyse which commutative triples come from the Gelfand pairs in \cite{Vinberg} that are not included in the list given in \cite{Lauret}. \textsc{Acknowledgments.} We are immensely grateful to Jorge Lauret who gave impulse to this research. \section{Preliminaries} We recall that since we consider $N$ a two-step nilpotent Lie group, its Lie algebra splits, as a vector space, as $\mathfrak{n}=\mathfrak{z}\oplus V$, where $V$ is an orthogonal complement of the center $\mathfrak{z}$ and $[V,V]\subset \mathfrak{n}$. The group $N$ acts naturally on $\mathfrak{n}$ by the adjoint action $\mathrm{Ad}$. Also, $N$ acts on $\mathfrak{n}^*$, the real dual space of $\mathfrak{n}$, by the dual or contragredient representation of the adjoint representation $\mathrm{Ad}^*(n)\lambda:=\lambda\circ \mathrm{Ad}(n^{-1})$ for $n\in N$ and $\lambda\in\mathfrak{n}^*$. For each $\lambda\in\mathfrak{n}^*$, let $O(\lambda):=\{\mathrm{Ad}^*(n)\lambda \mid \ n\in N\}$ be its coadjoint orbit. From Kirillov's theory there is a correspondence between $\widehat{N}$ and the set of coadjoint orbits. Let $B_\lambda$ be the skew symmetric bilinear form on $\mathfrak{n}$ given by $$B_\lambda(X,Y):=\lambda([X,Y]) \qquad \forall X,Y\in \mathfrak{n}.$$ Let $\mathfrak{m}\subset\mathfrak{n}$ be a maximal isotropic subalgebra in the sense that $B_\lambda(X,Y)=0$ for all $X,Y\in \mathfrak{m}$ and let $M:=\exp(\mathfrak{m})$. Defining on $M$ the character $\chi_\lambda(\exp(Y)):=e^{i\lambda(Y)}$ for $Y \in\mathfrak{m}$, the irreducible representation $\rho_\lambda\in\widehat{N}$ associated to $O(\lambda)$ is the induced representation $\rho_\lambda:=\mathrm{Ind}_M^N(\chi_\lambda)$. Let $X_\lambda\in\mathfrak{z}$ be the representative of $\lambda_{|_\mathfrak{z}}$ (the restriction of $\lambda$ to $\mathfrak{z}$), that is, $\lambda(Y)=\langle Y,X_\lambda\rangle$ for all $Y\in \mathfrak{z}$. We can split $\mathfrak{z}=\mathbb{R}X_\lambda\oplus \mathfrak{z}_\lambda$, where $\mathfrak{z}_\lambda:=\mathrm{Ker}(\lambda_{|_{\mathfrak{z}}})$ is the orthogonal complement of $\mathbb{R}X_\lambda$ in $\mathfrak{z}$. Let $Z$ be the center of $N$. 
A representation $(\rho, H_\rho)\in\widehat{N}$ is said to be square integrable if its matrix coefficients $\langle u,\rho(x)v\rangle$, for $u,v\in H_\rho$, are square integrable functions on $N$ modulo $Z$. One can see that $\rho_\lambda\in \widehat{N}$ is a square integrable representation if and only if $B_\lambda$ is non-degenerate on $V$ or, equivalently, if and only if the orbits are maximal (cf. \cite{rocio2}). In this situation, consider the Heisenberg algebra $\mathfrak{n}_\lambda:=\mathbb{R}X_\lambda\oplus V$ with the bracket given by $$[X,Y]_{\mathfrak{n}_\lambda}:=B_\lambda(X,Y) X_\lambda.$$ Since the character $\chi_\lambda$ is trivial on $\mathfrak{z}_\lambda$, the representation $\rho_\lambda$ acts trivially on $\mathfrak{z}_\lambda$ and defines an irreducible unitary representation of the corresponding Heisenberg group $N_\lambda$. Now, let $P(\lambda)$ be the square root of the determinant of $(B_\lambda)_{|_{V\times V}}$, which is called the Pfaffian. This function $P$ depends only on $\lambda_{|_{\mathfrak{z}}}$ and so there is a homogeneous polynomial function (which we also denote $P$) on $\mathfrak{z}^*$ such that $P(\lambda)=P(\lambda_{|_{\mathfrak{z}}})$ (cf. \cite[p. 333]{Wolf}). According to \cite[Theorem 14.2.10]{Wolf} there is a correspondence between the coadjoint orbits $O(\lambda)$ with $P(\lambda_{|_{\mathfrak{z}}})\neq 0$ and the square integrable representations. In addition, let $K$ be a compact subgroup of automorphisms of $N$ and let $K_{{\lambda}}$ be the stabilizer of $X_\lambda$ with respect to the action of $K$ on $\mathfrak{n}$. Note that since we always assume $N$ simply connected, we make no distinction between automorphisms of $N$ and of $\mathfrak{n}$. It can be seen that $K_\lambda$ is a subgroup of the symplectic group $\mathrm{Sp}(V,(B_\lambda)_{|_{V\times V}})$. Moreover, since $K_\lambda$ is compact, we can assume that it is a subgroup of the unitary group $\mathrm{U}(m)\subset \mathrm{Sp}(V,(B_\lambda)_{|_{V\times V}})$. Let ${\omega}$ be the metaplectic representation of $K_\lambda$ associated to the Heisenberg group $N_\lambda$. That is, $$(\omega(k)(p))(z):=p(k^{-1}z)$$ for all $p$ in the space $\mathcal{P}(\mathbb{C}^m)$ of polynomials on $\mathbb{C}^m$, where $2m$ is the dimension of $V$. (For more details see \cite{rocio2}.) For the following theorem see \cite[Theorem 3]{rocio2} and \cite[Theorem 6.1]{Fulvio}. An important fact in the proof is that the classes in $\widehat{N}$ of square integrable representations have full Plancherel measure. \begin{theorem}\label{heisenberg} Let $N$ be a connected and simply connected real two-step nilpotent Lie group which has a square integrable representation. Let $K$ be a compact subgroup of orthogonal automorphisms of $N$ and let $(\tau,W_\tau)\in\widehat{K}$. Then $(K\ltimes N,K,\tau)$ is a commutative triple if and only if $(K_\lambda\ltimes N_\lambda, K_\lambda, \tau_{|_{K_{\lambda}}})$ is a commutative triple for every square integrable representation $\rho_\lambda\in \widehat{N}$, where $\tau_{|_{K_{\lambda}}}$ denotes the restriction of $\tau$ to $K_{\lambda}$. Also, $(K_\lambda\ltimes N_\lambda, K_\lambda, \tau_{|_{K_{\lambda}}})$ is a commutative triple if and only if ${\omega}\otimes (\tau_{|_{K_{{\lambda}}}})$ is multiplicity free. \end{theorem} The previous result will mark the way of our proofs: it allows us to reduce the analysis from general two-step nilpotent Lie groups to Heisenberg groups.
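For instance, in the simplest case of the three-dimensional Heisenberg algebra $\mathfrak{n}=\mathfrak{z}\oplus V$ with $\mathfrak{z}=\mathbb{R}Z$, $V=\mathrm{span}\{X,Y\}$ and $[X,Y]=Z$ (an elementary illustration included here for concreteness), a functional $\lambda\in\mathfrak{n}^*$ with $\lambda(Z)=t$ satisfies $B_\lambda(X,Y)=\lambda([X,Y])=t$. Hence $(B_\lambda)_{|_{V\times V}}$ is non-degenerate if and only if $t\neq 0$, and $P(\lambda)=|t|$. Accordingly, $\rho_\lambda$ is square integrable exactly when $t\neq 0$, while for $t=0$ the coadjoint orbit $O(\lambda)$ reduces to the single point $\lambda$ and $\rho_\lambda$ is a character.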
The condition that $N$ has square integrable representations is, however, essential. There are a few Gelfand pairs $(K\ltimes N,K)$ in the list of E. Vinberg such that the nilpotent group is not of the form given by J. Lauret. Specifically, they correspond to items 3, 5, 11, 12, 20 and 26 of table 3 of \cite{Vinberg}. In items 3 and 26 of \cite{Vinberg} the group $N$ does not have square integrable representations, so we exclude them from our study. In the following paragraphs we will develop the analysis of the triples derived from the remaining Gelfand pairs. \begin{itemize} \item[-] Case A: We will develop item 5 of table 3 of \cite{Vinberg}. Here we have $\mathfrak{n}=\left[\Lambda^2(\mathbb{C}^{2n})\oplus i\mathbb{R}\right]\oplus \mathbb{C}^{2n}$, where $\Lambda^2(\mathbb{C}^{2n})$ denotes the space of antisymmetric bilinear forms on $\mathbb{C}^{2n}$ over the complex field, and $K=\operatorname{U}(2n)$. \item[-] Case B: We will study item 12 of table 3 of \cite{Vinberg}. Denoting by $\mathbb{H}$ the quaternions, we have $\mathfrak{n}=\left[ H_0(\mathbb{H}^{n})\oplus Im(\mathbb{H})\right]\oplus \mathbb{H}^{n}$, where $H_0(\mathbb{H}^{n})$ denotes the space of Hermitian $n\times n$ matrices over $\mathbb{H}$ of trace zero and $Im(\mathbb{H})$ the imaginary quaternions, and $K=\mathbb{S}^1\times \operatorname{Sp}(n)$, where $\mathbb{S}^1$ is the one-dimensional torus and $\operatorname{Sp}(n)$ the symplectic group. The only difference between items 12 and 11 of table 3 of \cite{Vinberg} is that for item 11 the group $K$ is smaller. \item[-] Case C: We will analyze item 20 of table 3 of \cite{Vinberg}. Here we have $\mathfrak{n}= \mathbb{R}^{7}\oplus\mathbb{R}^8$ and $K=\operatorname{Spin}(7)$. \end{itemize} \section{Analysis of commutative and non-commutative triples}\label{section Vinberg} \subsection*{Case A} The objects that we will describe in this paragraph can be found in item 5 of table 3 in \cite{Vinberg} as well as in item 5 of table 13.4.1 in the book \cite{Wolf}. Consider the two-step nilpotent Lie algebra $$\mathfrak{n}=\mathfrak{z}\oplus V=\left[\Lambda^2(\mathbb{C}^{2n})\oplus i\mathbb{R}\right]\oplus \mathbb{C}^{2n}$$ where the Lie bracket is given by $$[(z,u),(w,v)]:=((u\wedge v,Im(u^*v)),0) \qquad \forall z, w \in\mathfrak{z}, \ u, v \in V,$$ where we denote by $u^t$ the row vector obtained from the column vector $u$ and by $u^*$ the conjugate row vector. The wedge product $u\wedge v$ can also be expressed as $uv^t-vu^t$. The unitary group $K=\operatorname{U}(2n)$ acts on $\mathfrak{z}$ by \begin{equation*} k\cdot(uv^t-vu^t,s):=(k(uv^t-vu^t)k^t,s) \qquad \forall k\in K \text{ and } \forall \ uv^t-vu^t\in \Lambda^2(\mathbb{C}^{2n}), \ s\in i\mathbb{R}. \end{equation*} We consider in particular a linear functional $\lambda$ in $\mathfrak{n}^*$ whose restriction to $\mathfrak{z}$ has as representative the element $X_\lambda:=(u\wedge v, 0)\in\mathfrak{z}$ with $u\wedge v$ non-degenerate. Let $\rho_\lambda$ denote the irreducible representation class associated to it. From the formula of the Pfaffian given in \cite[p. 339--340]{Wolf}, $P(\lambda)$ is a positive multiple of the determinant of the matrix $uv^t-vu^t$. By our choice, it is invertible. Therefore $\rho_\lambda$ is square integrable and we are allowed to apply Theorem \ref{heisenberg}. In this case, the subgroup $K_\lambda$ coincides with the symplectic group $\operatorname{Sp}(n)$. Let $\tau\in \widehat{K}$ be non-trivial.
Then there is a non-trivial $\eta\in\widehat{\operatorname{Sp}(n)}$ appearing in the restriction of $\tau$ to $K_\lambda$. Now we introduce some notation. Let $E_{i,j}$ denote the matrix whose $(i,j)$ entry is $1$ and whose remaining entries are $0$. Writing $H_i:=E_{i,i}-E_{n+i,n+i}$, we have that the Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{sp}(n)$ is the complex vector space generated by $\{ H_1,...,H_n \}$. Let $\{L_1,...,L_n\}$ be its dual basis in the dual space $\mathfrak{h}^*$, so that $\langle L_i,H\rangle=h_i$ for every $H=\sum_j h_jH_j\in\mathfrak{h}$. From the theorem of the highest weight, every irreducible representation of $\mathfrak{sp}(n)$ is in correspondence with a non-negative integer linear combination of the fundamental weights. Hence $\eta \in \widehat{\mathrm{Sp}(n)}$ can be parametrized in terms of the weights $\{L_i\}$ as $(\eta_1,...,\eta_n)$ where $\eta_i\in\mathbb{Z}_{\geq 0}$ $\forall i$ and $\eta_1\geq \eta_2\geq ... \geq \eta_n$. (For a reference see, for example, \cite{Fulton y Harris, Knapp}.) We will denote the representation $\eta\in\widehat{\mathrm{Sp}(n)}$ by $\eta_{(\eta_1,...,\eta_n)}$ to emphasize that the representation $\eta$ is in correspondence with the tuple $(\eta_1,...,\eta_n)$, which we call a partition. The metaplectic representation $(\omega, \mathcal{P}(\mathbb{C}^{2n}))$ of $\operatorname{Sp}(n)$ decomposes (using the notation given in the previous sections) as \begin{equation*} \omega=\bigoplus_{j\in\mathbb{Z}_{\geq 0}} \eta_{(j)}, \end{equation*} where $\eta_{(j)}$ corresponds to the partition $(j)$ of length one, for each non-negative integer $j$. The following fact is proved in \cite[Corollary 2]{rocio2} and will be useful to deduce our main results. \begin{lemma} \label{coro 1 sp} Let $\eta$ be an irreducible representation of $\mathrm{Sp}(n)$. Then $\eta$ appears in the decomposition into irreducible factors of $\eta\otimes\eta_{(2)}$. \end{lemma} By the above lemma, the representation $\eta$ appears in the decompositions of both $\eta\otimes\eta_{(0)}$ and $\eta\otimes\eta_{(2)}$. Hence it appears with multiplicity at least two in the decomposition into irreducible factors of $\omega\otimes\tau_{|_{K_{\lambda}}}$. Therefore, by Theorem \ref{heisenberg}, we have the following result. \begin{theorem}\label{th 1} The triple $(K\ltimes N, K, \tau)$, where $N$ is the simply connected nilpotent Lie group with Lie algebra $\left[\Lambda^2(\mathbb{C}^{2n})\oplus i\mathbb{R}\right]\oplus \mathbb{C}^{2n}$ and $K=\operatorname{U}(2n)$, is commutative if and only if $\tau$ is the trivial representation. \end{theorem} \subsection*{Case B} The objects that we will describe in this paragraph can be found in item 12 of table 3 in \cite{Vinberg} as well as in item 9 of table 13.4.1 in the book \cite{Wolf}. Consider the two-step nilpotent Lie algebra $$\mathfrak{n}=\mathfrak{z}\oplus V:=\left[ H_0(\mathbb{H}^{n})\oplus \textrm{Im\,}(\mathbb{H})\right]\oplus \mathbb{H}^{n}$$ with Lie bracket given by $$[(z,u),(w,v)]:=(((uiv^*-viu^*)_0,u^*v-v^*u),0)$$ for all $z, w \in\mathfrak{z}= H_0(\mathbb{H}^{n})\oplus \textrm{Im\,}(\mathbb{H})$ and $ u, v \in V=\mathbb{H}^{n}$, where, in general, if $A$ is a matrix, $(A)_0:=A-\frac{1}{2}tr(A)I$. Let $N$ be the simply connected nilpotent Lie group with Lie algebra $\mathfrak{n}$.
Every element $(e^{i\theta},k)\in K= \mathbb{S}^1\times \operatorname{Sp}(n)$ of the group of automorphisms acts on $\mathfrak{n}$ by \begin{equation*} (e^{i\theta},k)\cdot((A,q),v):=((kAk^*,e^{i\theta} q e^{-i\theta}),kv e^{-i\theta}),\end{equation*} for all $A\in H_0(\mathbb{H}^{n})$, $q\in Im(\mathbb{H})$ and $v\in\mathbb{H}^{n}$. Let $\tau=\eta\otimes\chi_r$ where $\eta\in\widehat{\operatorname{Sp}(n)}$ and $\chi_r$ is a character of $\mathbb{S}^1$. First we consider the case where the factor in $\widehat{\operatorname{Sp}(n)}$ is trivial, that is $\tau=\chi_r$. The convolution algebra $L_{\tau,\tau}^1(K\ltimes N,\mathrm{End}(W_\tau))$ is naturally identified with the space of $\mathrm{End}(W_\tau)$-valued integrable functions on $N$ such that $F(k\cdot x) =\tau(k)F(x)\tau(k)^{-1}$ for all $k\in K$ and $x\in N$ (cf. e.g. \cite{rocio,Fulvio}). In this situation, since $\tau$ is a character, it coincides with the convolution algebra of $K$-invariant integrable scalar functions on $N$, which is commutative since we have a Gelfand pair. Therefore we have commutative triples. Now let $\eta\in\widehat{\operatorname{Sp}(n)}$ be non-trivial. We consider the linear functional $\lambda$ with representative $X_\lambda:=(0, q)\in\mathfrak{z}$. Let $\rho_\lambda$ denote the irreducible representation class associated to it. From the formula of the Pfaffian given in \cite[p. 340]{Wolf}, $P(\lambda)$ is a positive multiple of $|q|^{2n}$. Then for all imaginary non-zero quaternions $q$, the associated representation $\rho_\lambda$ is square integrable. The group $K_\lambda$ is easily calculated: If the quaternion $q$ belongs to $i\mathbb{R}$, $K_\lambda$ coincides with $K$ (since $q$ commutes with every complex number); for the other cases $K_\lambda$ is $\operatorname{Sp}(n)$. We consider for example $q=j\in Im(\mathbb{H})$ in order to fix $K_\lambda=\operatorname{Sp}(n)$. Note that $\tau_{|_{{K_{\lambda}}}}=\eta$. The metaplectic representation $(\omega, \mathcal{P}(\mathbb{C}^{2n}))$ of $K_\lambda=\operatorname{Sp}(n)$ decomposes as \begin{equation*} \omega=\bigoplus_{j\in\mathbb{Z}_{\geq 0}} \eta_{(j)}. \end{equation*} Therefore $\omega\otimes\eta$ is not multiplicity free: $\eta$ appears in the factor $\eta\otimes\eta_{(0)}$ and, by Lemma \ref{coro 1 sp}, also in $\eta\otimes\eta_{(2)}$. Then, by Theorem \ref{heisenberg}, in these cases the triples are not commutative. \begin{theorem}\label{th 2} The triple $(K\ltimes N, K, \tau)$, where $N$ is the simply connected nilpotent Lie group with Lie algebra $\left[ H_0(\mathbb{H}^{n})\oplus Im(\mathbb{H})\right]\oplus \mathbb{H}^{n}$ and $K=\mathbb{S}^1\times \operatorname{Sp}(n)$, is commutative if and only if $\tau\in\widehat{\mathbb{S}^1}$. \end{theorem} To conclude this case we want to note that the analysis of item 11 of table 3 of \cite{Vinberg} is the same (or simpler) since the subgroup $K$ is only $\operatorname{Sp}(n)$. \subsection*{Case C} The objects that we will describe in this paragraph can be found in item 20 of table 3 in \cite{Vinberg} as well as in item 2 of table 13.4.1 in the book \cite{Wolf}. In this case we study an H-type group (\cite{Kaplan}).
Consider the two-step nilpotent Lie algebra $\mathfrak{n}=\mathfrak{z}\oplus V$ where $V$ is $\mathbb{R}^8$ or the octonions $\mathbb{O}$ and the center $\mathfrak{z}$ is $\mathbb{R}^7$ or the imaginary octonions $Im(\mathbb{O})$ with Lie bracket on $V$ characterized by: ${\langle[u, v], z \rangle}_\mathfrak{z}:={\langle J(z)u, v\rangle}_V$, where $(J,V)$ is a real representation of $\mathfrak{z}$ given by the product on the octonions $J(z)v:=zv$ for $z\in Im(\mathbb{O})$ and $v\in \mathbb{O}$. Let $K$ be the maximal connected group of orthogonal automorphisms of $N$. Precisely, $K=\operatorname{Spin}(7)$. (We mention that in general, for an H-type group, its group of automorphisms was determined by L. Saal in \cite{la linda}.) Let $\lambda$ be a linear functional with representative $X\in\mathfrak{z}$. From \cite[p. 340]{Wolf}, the Pfaffian $P(\lambda)$ is non-zero almost everywhere. In this case, $K_\lambda$ is isomorphic to the spin group $\operatorname{Spin}(6)$. Therefore we will work with the Lie algebra $\mathfrak{so}(6)$. Let $\tau$ be a non-trivial irreducible unitary representation of $\operatorname{Spin}(7)$. When we restrict $\tau$ to $\operatorname{Spin}(6)$ it decomposes as a sum of irreducible representations and we can pick one non-trivial factor. We will denote it by $\tilde{\eta}\in\widehat{\operatorname{Spin}(6)}$, identify it with its derived representation, and view it as a representation of $\mathfrak{so}(6)$. Apart from that, it is easy to derive the decomposition of the metaplectic representation since $\operatorname{Spin}(6)$ is isomorphic to $\operatorname{SU}(4)$ and it is well known that its action on $\mathcal{P}(\mathbb{C}^4)$ decomposes into $\oplus_{j\in\mathbb{Z}_{\geq 0}}\mathcal{P}_j(\mathbb{C}^4)$. It can be identified with \begin{equation}\label{omega} \omega=\bigoplus_{j\in\mathbb{Z}_{\geq 0}} \tilde{\eta}_{(j)}, \end{equation} where $\tilde{\eta}_{(j)}$ corresponds to the partition of length one $(j)$ of $\mathfrak{so}(6)$. Here we decided to use an analogous notation to that of the representations of $\mathfrak{sp}(n)$. As a consequence of the highest weight theorem, we can associate each irreducible representation $\sigma$ of $\mathfrak{so}(2m)$ with an $m$-tuple of integers $(\sigma_1,...,\sigma_m)$ satisfying the condition $\sigma_1\geq \sigma_2\geq...\geq\sigma_{m-1}\geq |\sigma_m|$ (cf. \cite{Fulton y Harris, Okada}). The following fact can be found in \cite[Theorem 3.2 and Remark 3.5]{Okada}. \begin{lemma}\label{Okada} Let $\tilde{\eta}$ be an arbitrary irreducible representation of $\mathfrak{so}(2m)$ associated to a sequence of $m$ integers $\tilde{\eta}_1\geq \tilde{\eta}_2\geq...\geq \tilde{\eta}_{m-1}\geq |\tilde{\eta}_m|$ and let $s$ be a non-negative integer. Then the multiplicity of an irreducible representation $\sigma$ (associated to the sequence of integers $\sigma_1\geq \sigma_2\geq...\geq \sigma_{m-1}\geq |\sigma_m|$) in the tensor product $\tilde{\eta}\otimes\tilde{\eta}_{(s)}$ is equal to the number of integer sequences $\varsigma$ satisfying: \begin{itemize} \item[$(i)$] $\varsigma_{1} \geq \varsigma_{2} \geq ... \geq \varsigma_{m-1} \geq\left|\varsigma_{m}\right|$ \item[$(ii)$] $\tilde{\eta}_{1}\geq\varsigma_{1} \geq \tilde{\eta}_{2}\geq\varsigma_{2} \geq ... \geq\varsigma_{m-1}\geq \tilde{\eta}_{m}\geq\varsigma_{m}$ and $\sigma_{1}\geq\varsigma_{1} \geq \sigma_{2}\geq\varsigma_{2} \geq ...
\geq\varsigma_{m-1}\geq \sigma_{m}\geq\varsigma_{m}$ \item[$(iii)$] $\sum_{i=1}^{m}\left(\tilde{\eta}_{i}-\varsigma_{i}\right)+\sum_{i=1}^{m}\left(\sigma_{i}-\varsigma_{i}\right)=s$ \item[$(iv)$] $\varsigma_m\in\{\tilde{\eta}_m,\sigma_m\}$ \end{itemize} \end{lemma} Now we will apply this lemma, with $m=3$ and a fixed non-trivial irreducible representation $\tilde{\eta}$ in the decomposition of $\tau_{|_{K_{\lambda}}}$, to deduce whether the tensor product $\omega\otimes\tau_{|_{K_{\lambda}}}$ is multiplicity free or not. The representation $\tilde{\eta}$ corresponds to $(\tilde{\eta}_1,\tilde{\eta}_2,\tilde{\eta}_3)$ with $\tilde{\eta}_i\in\mathbb{Z}$ for $i=1,2,3$ and $\tilde{\eta}_1\geq \tilde{\eta}_2\geq |\tilde{\eta}_3|$ and $\tilde{\eta}_1>0$ since it is non-trivial. \begin{itemize} \item If $\tilde{\eta}_1>\tilde{\eta}_3$ (in particular if $\tilde{\eta}_3\leq 0$), then $\tilde{\eta}$ appears in $\tilde{\eta}\otimes \tilde{\eta}_{(s)}$ for $s=2(\tilde{\eta}_1-\tilde{\eta}_3)>0$ since $\varsigma=(\tilde{\eta}_2,\tilde{\eta}_3,\tilde{\eta}_3)$ satisfies the conditions listed in Lemma \ref{Okada}. By (\ref{omega}) we have that $\tilde{\eta}$ appears at least twice in $\omega\otimes\tau_{|_{K_{\lambda}}}$. \item If $\tilde{\eta}_1=\tilde{\eta}_2=\tilde{\eta}_3=r$ for $r\in\mathbb{Z}_{>0}$, and if $\sigma=(\sigma_1,\sigma_2,\sigma_3)$ appears in $\tilde{\eta}\otimes \tilde{\eta}_{(s)}$ for some $s\in\mathbb{Z}_{\geq 0}$, then by Lemma \ref{Okada}, $\sigma_1=\sigma_2=r$. Also, with the notation in the Lemma, it must happen that $\varsigma$ is $(r,r,r)$ or $(r,r,\sigma_3)$. For the first case $\sigma_3-r=s$; since $r\geq\sigma_3$, the only possibility is $s=0$. For the second, $r-\sigma_3=s$. Therefore $s$ is completely determined by $\sigma$. Thus $\sigma=(r,r,\sigma_3)$ with $r\geq \sigma_3$ appears once in $\tilde{\eta}\otimes \oplus_{j\in\mathbb{Z}_{\geq 0}}\tilde{\eta}_{(j)}$. Consequently, if $\tilde{\eta}=(r,r,r)$ and $\sigma=(r,r,\sigma_3)$ with $r> \sigma_3$ both appear in $\tau_{|_{K_{\lambda}}}$, the tensor product $\omega\otimes\tau_{|_{K_{\lambda}}}$ is not multiplicity free. Note that in this situation $\sigma$ satisfies the conditions on $\tilde{\eta}$ in the first item. Therefore we have the following conclusion. \end{itemize} \begin{theorem}\label{th 3} The triple $(K\ltimes N, K, \tau)$, where $N$ is the simply connected nilpotent Lie group with Lie algebra $Im(\mathbb{O})\oplus\mathbb{O}$ and $K=\operatorname{Spin}(7)$, is commutative if and only if $\tau_{|_{\operatorname{Spin}(6)}}$ is associated to a partition of the form $(r,r,r)$ for $r\in\mathbb{Z}_{\geq 0}$. \end{theorem} \section{Conclusions} At this point we sum up the following. Theorem \ref{th 1} does not provide non-trivial commutative triples. Theorem \ref{th 2} gives rise to a commutative triple only if $\tau$ is a character of $\mathbb{S}^1$. These cases are twisted Gelfand pairs. Finally, Theorem \ref{th 3} gives a family of commutative triples. This family is similar to the case of the H-type group listed by J. Lauret in \cite{Lauret}. There the nilpotent Lie algebra $\mathfrak{n}$ is $Im(\mathbb{H})\oplus\mathbb{H}^n$ where $Im(\mathbb{H})$ is its center $\mathfrak{z}$ and the Lie bracket on $\mathbb{H}^n$ is given, analogously to case C, by the real representation of $\mathfrak{z}$, $J(z)(v):=(zv_1,...,zv_n)$ for $v=(v_1,...,v_n)\in\mathbb{H}^n$. Its (maximal connected) group of automorphisms is $K=\mathrm{SU}(2)\times \mathrm{Sp}(n)$.
By \cite[Proposition 1]{rocio2}, in that case we obtain commutative triples if and only if $\tau\in\widehat{\mathrm{SU}(2)}$ or $\tau\in \widehat{\mathrm{Sp}(n)}$ corresponding to a partition of the form $(a,a,...,a)$ of length at most $n$ for a non-negative integer $a$. In conclusion, denoting by $\mathrm{H}_n$ the Heisenberg group $\mathrm{H}_n=\mathbb{C}^n\times \mathbb{R}$, if we exclude from the analysis the Gelfand pairs of the form $(K\ltimes \mathrm{H}_n,K)$ where $K$ is a proper subgroup of $\mathrm{U}(n)$, all the commutative triples of the form $(K\ltimes N, K,\tau)$, where $N$ is a simply connected nilpotent Lie group having square integrable representations and $K$ is a compact subgroup of automorphisms of $N$, are the following: \begin{enumerate} \item $([\mathbb{S}^1\times \operatorname{Sp}(n)]\ltimes N, \mathbb{S}^1\times \operatorname{Sp}(n), \tau)$, where $N$ is the simply connected nilpotent Lie group with Lie algebra $\left[ H_0(\mathbb{H}^{n})\oplus Im(\mathbb{H})\right]\oplus \mathbb{H}^{n}$, is commutative for $\tau\in\widehat{\mathbb{S}^1}$. It corresponds to the preceding case B. It is a twisted Gelfand pair. \item $(\mathrm{SU}(n)\times \mathbb{S}^1, N(\mathfrak{su}(n),\mathbb{C}^n),\tau)$, for all $\tau\in\widehat{\mathbb{S}^1}$, where $n\geq 3$. It is a twisted Gelfand pair. \item $(\mathrm{SU}(n)\times \mathbb{S}^1, N(\mathfrak{u}(n),\mathbb{C}^n),\tau)$, for all $\tau\in\widehat{\mathbb{S}^1}$, where $n\geq 3$. It is a twisted Gelfand pair. \item $(\operatorname{Spin}(7)\ltimes N, \operatorname{Spin}(7), \tau)$, where $N$ is the simply connected nilpotent Lie group with Lie algebra $Im(\mathbb{O})\oplus\mathbb{O}$, is commutative when $\tau_{|_{\operatorname{Spin}(6)}}$ is associated to a constant partition $(r,r,r)$ for $r\in\mathbb{Z}_{\geq 0}$. $N$ is of Heisenberg type. It corresponds to the preceding case C. \item $\left(\mathrm{SU}(2)\times \mathrm{Sp}(n), N(\mathfrak{su}(2),(\mathbb{C}^2)^n),\tau\right)$, for all $\tau\in\widehat{\mathrm{SU}(2)}$ and for all $\tau\in\widehat{\mathrm{Sp}(n)}$ associated to a constant partition of length at most $n$, where $n\geq 1$. $N(\mathfrak{su}(2),(\mathbb{C}^2)^n)$ is of Heisenberg type. \item $(\mathrm{SU}(2)\times \mathrm{U}(k)\times \mathrm{Sp}(n), N(\mathfrak{u}(2),(\mathbb{C}^2)^k\oplus(\mathbb{C}^2)^n),\tau)$, for all $\tau\in \widehat{\mathrm{U}(k)}$, where $k\geq 1, n\geq 0$. \item $(G\times U, N(\mathfrak{g}, V), \tau)$, where $ \mathfrak{g}=\mathfrak{su}(m_1)\oplus...\oplus\mathfrak{su}(m_\beta)\oplus\mathfrak{su}(2)\oplus...\oplus\mathfrak{su}(2)\oplus\mathfrak{c}$ with $\mathfrak{c}$ an abelian component, $G=\mathrm{SU}(m_1)\times...\times \mathrm{SU}(m_\beta)\times \mathrm{SU}(2)\times...\times \mathrm{SU}(2)$, $ V= \mathbb{C}^{m_1}\oplus...\oplus \mathbb{C}^{m_\beta}\oplus\mathbb{C}^{2k_1+2n_1}\oplus...\oplus\mathbb{C}^{2k_\alpha+2n_\alpha}$ and $U=\mathbb{S}^1\times...\times \mathbb{S}^1\times \mathrm{U}(k_1)\times \mathrm{Sp}(n_1)\times...\times \mathrm{U}(k_\alpha)\times \mathrm{Sp}(n_\alpha)$, for all $\tau\in\widehat{\mathbb{S}^1}\otimes...\otimes \widehat{\mathbb{S}^1}\otimes \widehat{\mathrm{U}(k_1)}\otimes...\otimes \widehat{\mathrm{U}(k_\alpha)}$, where $m_j\geq 3$ for all $1\leq j\leq\beta$, $k_i\geq 1$, $n_i\geq 0$ for all $1\leq i\leq\alpha$. \item $(\mathrm{U}(n)\ltimes \mathrm{H}_n, \mathrm{U}(n),\tau)$ for all $\tau\in\widehat{\mathrm{U}(n)}$ (proved by Yakimova in \cite{Yakimova}). This is the unique strong Gelfand pair of this form.
\end{enumerate} In items 2, 3, 5, 6 and 7 the nilpotent Lie group $N=N(\mathfrak{g},V)$ is endowed with a left-invariant Riemannian metric determined by an inner product $\langle\cdot,\cdot\rangle$ on its Lie algebra $\mathfrak{n}$, described as follows. Let $(\pi, V)$ be a faithful real representation of a compact Lie algebra $\mathfrak{g}$. We consider inner products $\langle\cdot,\cdot\rangle_\mathfrak{g}$ on $\mathfrak{g}$ and $\langle\cdot,\cdot\rangle_V$ on $V$ such that $\langle\cdot,\cdot\rangle_\mathfrak{g}$ is $\mathrm{ad}(\mathfrak{g})$-invariant and $\langle\cdot,\cdot\rangle_V$ is $\pi(\mathfrak{g})$-invariant. Let $\mathfrak{n}:=\mathfrak{g}\oplus V$ be the two-step nilpotent Lie algebra with center $\mathfrak{g}$ and Lie bracket defined on $V$ by ${\langle[u, v], X \rangle}_\mathfrak{g}:={\langle\pi(X)u, v\rangle}_V$ for all $u, v \in V$, $X \in \mathfrak{g}$. These inner products define an inner product $\langle\cdot,\cdot\rangle$ on $\mathfrak{n}$ satisfying $\langle\cdot,\cdot\rangle_{|_{\mathfrak{g}\times \mathfrak{g}}}= \langle\cdot,\cdot\rangle_\mathfrak{g}$, $\langle\cdot,\cdot\rangle_{|_{V\times V}}= \langle\cdot,\cdot\rangle_V$ and $\langle\mathfrak{g},V\rangle=0$. Let $N(\mathfrak{g}, V)$ be the connected and simply connected two-step nilpotent Lie group with Lie algebra $\mathfrak{n}$. Specifically: in item 2, $\mathbb{C}^n$ denotes the standard representation of $\mathfrak{su}(n)$; in item 3, $\mathbb{C}^n$ denotes the standard representation of $\mathfrak{u}(n)$ regarded as a real representation; in item 5, $\mathbb{C}^2$ denotes the standard representation of $\mathfrak{su}(2)$ regarded as a real representation; in item 6, $(\mathbb{C}^2)^k\oplus(\mathbb{C}^2)^n$ is an orthogonal sum, the center of $\mathfrak{u}(2)$ acts non-trivially only on $(\mathbb{C}^2)^k$, $(\mathbb{C}^2)^n$ denotes the representation of $\mathfrak{su}(2)$ stated in item 5, $\mathfrak{u}(2)$ acts component-wise on $(\mathbb{C}^2)^k$ in the standard way regarded as a real representation, and $\tau\in \widehat{\mathrm{U}(k)}$; and in item 7, $\mathfrak{g}$ acts on $V$ as follows: For each $1\leq i\leq \beta+\alpha$, $\mathfrak{c}$ has a unique subspace $\mathfrak{c}_i$ acting non-trivially only on the $i$-th component of $V$, and the dimension of $\mathfrak{c}_i$ is $1$. For $1\leq i\leq \beta$, $\mathfrak{su}(m_i)\oplus \mathfrak{c}_i$ (which is isomorphic to $\mathfrak{u}(m_i)$) acts non-trivially only on $\mathbb{C}^{m_i}$ in the standard way. For $\beta< i\leq \beta+\alpha $, $\mathfrak{su}(2)\oplus\mathfrak{c}_i$ acts non-trivially only on $\mathbb{C}^{2k_i+2n_i}$ as in the above case. For more details see \cite{rocio2, Lauret, Lauret nil}. \end{document}
\begin{document} \begin{center} \Large\bfseries A note on insensitivity in stochastic networks \end{center} \begin{center} Stan Zachary \footnotetext{ {\it American Mathematical Society 1991 subject classifications.\/} Primary 60K20 {\it Key words and phrases.\/} insensitivity, stochastic network, partial balance } \end{center} \begin{center} \textit{Maxwell Institute for Mathematical Sciences, Heriot-Watt University\\ Edinburgh} \end{center} \begin{quotation}\small We give a simple and direct treatment of insensitivity in stochastic networks which is quite general and which provides probabilistic insight into the phenomenon. In the case of multi-class networks, the results generalise those of Bonald and Prouti\`{e}re (2002, 2003). \end{quotation} \section{Introduction} It is well-known that many stochastic networks---notably queueing and loss networks---have stationary distributions of their level of occupancy which depend on certain input distributions only through the means of the latter. This phenomenon of \emph{insensitivity} has been studied by various authors over an extended period of time, in varying degrees of generality and abstraction, and using a variety of techniques. In the present paper we revisit this topic to develop an insight of Pechinkin (1983, 1987) to give a very simple and direct treatment of insensitivity. In particular the approach avoids those based on brute-force calculations, the consideration of phase-type distributions (Schassberger, 1978, Whittle, 1985, Bonald and Prouti\`{e}re, 2002, 2003), or the use of quite complex machinery for handling generalised semi-Markov processes (Burman, 1981, Schassberger, 1986)---although such processes are implicit in the current approach. It further avoids assumptions about, for example, continuity of distributions, necessary for some of the above approaches, and also explicitly identifies the entire stationary distributions of the networks concerned, showing that, where insensitivity obtains, these stationary distributions have a particularly simple and natural form. Pechinkin used his insight, which involves what is in effect a coupling argument together with induction, to give probabilistic proofs of the insensitivity of a number of single-class loss systems with state-dependent arrival rates---results originally proved analytically by Sevastyanov (1957). He also indicated the wider applicability of the approach in the single-class case. In the present paper we give a substantial reformulation of the underlying idea, under more general conditions and showing that its most natural expression is in terms of balance equations. This considerably simplifies its application to single-class systems---notably the quite complex coupling constructions are no longer needed. It further makes possible the extension of the idea to the multi-class networks considered in Section~\ref{sec:multi-class-networks}. The main aim is to provide probabilistic insight, notably for multi-class networks. Indeed it is shown that insensitivity is simply a byproduct, under appropriate conditions, of probabilistic independence. We study networks in which individuals arrive at various \emph{classes} at rates which may depend on the state of the entire system, bringing \emph{workloads} which are independent and identically distributed within classes and which have finite means. 
Within each class workloads are reduced at rates which may again be state-dependent (when the rate is constant workloads may be identified with lifetimes in classes), and on completion of its workload an individual moves to a different class or leaves the system, with probabilities which may yet again be state-dependent. In order to obtain insensitivity we typically require that an individual joining a class is immediately served, i.e.\ has its workload reduced, at a rate which is the same as that of an individual immediately prior to leaving the class (where in each of these cases the number of individuals in each class of the system is the same)---more generally that the service discipline should define a network which is \emph{symmetric} in the sense of Kelly (1979). The most common example is that of processor-sharing networks, but other possibilities are well-known, for example, ``last-in-first-out preemptive resume'' networks. We shall concentrate on a very broad class of processor-sharing networks, introduced by Bonald and Prouti\`{e}re (2002) and including, for example, traditional loss networks and processor-sharing Whittle and Jackson networks as special cases. We shall also indicate the simple modifications required to deal with other possibilities. For the above class of processor-sharing networks, Bonald and Prouti\`{e}re used phase-type arguments to show that, under conditions which correspond to the satisfaction of the appropriate partial balance equations, the stationary distribution of the number of individuals in each class is insensitive to the workload distributions, subject to the means of the latter being fixed and to the distributions themselves being drawn from the broad class of Cox-type distributions (dense in the class of all distributions on $\mathbb{R}_+$). In the present paper we formally consider all workload distributions on $\mathbb{R}_+$ with finite means, and identify also the stationary residual workload distributions. However, as stated above our main aim is to give a direct and probabilistically natural treatment. It turns out (and is in many cases well-known) that, when the appropriate partial balance equations are satisfied for such a network, then the stationary distribution of the entire system, \emph{including the specification of residual workloads}, is such that departures from each class are exactly balanced by arrivals to that class---in a sense again to be made precise below. Indeed, for single-class systems, this is the essence of Pechinkin's insight. What is of interest is that the same idea extends to establish insensitivity for the very much more general networks considered here, and indeed appears also to establish insensitivity in more abstract settings such as that considered by Whittle (1985), though we do not formally consider this more abstract environment here. In order to fix ideas, it is convenient to consider first, in Section~\ref{sec:single-class-networks}, single-class networks. Here the extension of previous ideas is not too difficult. Nevertheless it is desirable to give a careful treatment of this case, avoiding notational complexity while preserving rigour, so as both to establish the underlying principle and also to set the scene for the multi-class networks which we consider in Section~\ref{sec:multi-class-networks}. \section{Single-class networks} \label{sec:single-class-networks} Consider an open system with a single class of individual (customer, call, or job).
Individuals arrive as a Poisson process with state-dependent rate~$\alpha(n)$, where $n$ is the number of individuals currently in the system. Arriving individuals have \emph{workloads} which are independent of each other and of the arrivals process with a common distribution~$\mu$ on $\mathbb{R}_+$ which we assume to have a finite mean~$m(\mu)$. While there are $n$ individuals in the system, their \emph{total} workload is reduced at a rate $\beta(n)\ge0$, where we assume $\beta(n)>0$ if and only if $n>0$; an individual departs the system when its workload is reduced to zero. By suitably redefining the rates~$\beta(n)$ if necessary, we may, and do, assume without loss of generality that the mean workload $m(\mu)=1$. We consider first the processor-sharing case. Here when there are $n>0$ individuals in the system, the workload of each is simultaneously reduced at a rate $\beta(n)/n$, and the set-up described above becomes a fairly general description of a single-class processor-sharing system. A special case is the simple Erlang loss system, in which, for some $\alpha,\beta>0$, we have $\alpha(n)=\alpha\textbf{I}(n<C)$ for some \emph{capacity}~$C\le\infty$ (where $\textbf{I}$ is the indicator function) and $\beta(n)=n\beta$ for all $n\ge0$. Here individuals are typically referred to as calls, and workloads correspond to call durations (since $\beta(n)/n$ is independent of $n$). A further special case is the $M/GI/m/\infty$ processor-sharing queue, in which, again for some $\alpha,\beta>0$, we have $\alpha(n)=\alpha$ for all $n$ and $\beta(n)=\min(n,m)\beta$ for all $n$ and some fixed $m$. We represent the system as a Markov process~$(X(t))_{t\ge0}$ by defining its state at any time~$t$ to be the number~$n$ of individuals then in the system together with their residual workloads at that time. (An alternative is to record, for each individual, the workload completed at time~$t$.) For given $n>0$ these workloads form an (unordered) set, and may be regarded as taking values in the quotient space~$S_n$ obtained from $\mathbb{R}_+^n$ by identifying points which may be obtained from each other under permutation of their coordinates. The $\sigma$-algebra $\mathcal{B}(S_n)$ on $S_n$ is similarly formed in the obvious manner from the Borel $\sigma$-algebra on $\mathbb{R}_+^n$. The state space~$S$ for the process~$(X(t))_{t\ge0}$ is then the union of the $S_n$, $n\ge0$, where the set $S_0$ is taken to consist of a single point, and its associated $\sigma$-algebra~$\mathcal{B}(S)$ consists of those sets which are countable unions of sets in the $\sigma$-algebras~$\mathcal{B}(S_n)$. The process~$(X(t))_{t\ge0}$ is thus an instance of a piecewise-deterministic Markov process (Davis, 1984, 1993). However, we avoid the need for most of the general machinery for handling such processes. We define the probability distribution~$\st{\mu}{}{}$ on $\mathbb{R}_+$ to be the stationary residual life distribution of the renewal process with inter-event distribution~$\mu$, that is, if $\mu$ has distribution function~$F$ then $\st{\mu}{}{}$ has distribution function~$G$ given by \begin{displaymath} G(x) = 1 - \int_x^\infty (1-F(y))\,dy \end{displaymath} (recall that $m(\mu)=1$). Note that the ``residual life'' here should be thought of as a residual workload rather than a time.
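Two standard special cases, recorded here only to fix ideas, may be helpful. If $\mu$ is the exponential distribution with mean~$1$, so that $F(x)=1-e^{-x}$, then $G(x)=1-\int_x^\infty e^{-y}\,dy=1-e^{-x}$, and hence $\st{\mu}{}{}=\mu$: for exponential workloads the stationary residual workload distribution coincides with the workload distribution itself, which is the familiar lack-of-memory property. If instead $\mu$ is concentrated on the single point~$1$, then $1-F(y)=\textbf{I}(y<1)$ and so $G(x)=\min(x,1)$, i.e.\ $\st{\mu}{}{}$ is the uniform distribution on $[0,1]$.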
For each $n\ge1$, define also the probability distribution~$\st{\mu}{n}{}$ on $S_n$ to be the product of $n$ copies of the distribution~$\st{\mu}{}{}$, again with the above identification of points in $\mathbb{R}_+^n$ (more formally, $\st{\mu}{n}{}(A)=\st{\mu}{}{n}(\theta^{-1}(A))$, $A \in \mathcal{B}(S_n)$, where $\st{\mu}{}{n}$ is the product of $n$ copies of the distribution~$\st{\mu}{}{}$ and $\theta$ is the projection from $\mathbb{R}_+^n$ into the quotient space~$S_n$). Thus $\st{\mu}{n}{}$ represents the joint distribution of the residual lives at any time in a set of $n$ independent stationary renewal processes each with inter-event distribution~$\mu$; we define also $\st{\mu}{0}{}$ to be the probability distribution concentrated on the single-point set~$S_0$. For each $n$, we also regard $\st{\mu}{n}{}$ as a distribution on $S$, assigning its total mass one to the set~$S_n$. Finally, for any distribution~$\pi$ on $\mathbb{Z}_+$, define the distribution $\st{\mu}{\pi}{}$ on $S$ by $\st{\mu}{\pi}{}=\sum_{n\in\mathbb{Z}_+}\pi(n)\st{\mu}{n}{}$. Thus $\st{\mu}{\pi}{}$ assigns probability~$\pi(n)$ to the event that there are $n$ individuals in the system, and, conditional on this event, assigns the distribution~$\st{\mu}{n}{}$ to their residual workloads. \begin{theorem}\label{thm:single} Suppose that the distribution~$\pi$ on $\mathbb{Z}_+$ is the solution of the balance equations \begin{equation} \label{eq:1} \pi(n+1)\beta(n+1) = \pi(n)\alpha(n), \qquad n\ge0, \end{equation} and that \begin{equation} \label{eq:2} \sum_{n\ge0}\pi(n)\alpha(n)<\infty. \end{equation} Then the distribution $\st{\mu}{\pi}{}$ on $S$ is stationary for the process~$(X(t))_{t\ge0}$, and in particular the distribution~$\pi$ is stationary for the associated number of individuals in the system. \end{theorem} \begin{rem} The condition~\eqref{eq:2} ensures that, under stationarity, individuals arrive at the system at a finite rate. \end{rem} \begin{proof}[Proof of Theorem~\ref{thm:single}] In order to exclude pathological behaviour in the argument below, we make the one additional assumption that the distribution~$\mu$ has no atom of probability at zero. This is without loss of generality: in the case that $\mu$ does have such an atom, the evolution of the system may clearly be equivalently described by redefining $\alpha$, $\beta$ and $\mu$ so as to remove it, and the result of the theorem is easily obtained via this reparametrisation. Analogously to the definition of $\st{\mu}{n}{}$, for each $n\ge1$, define the probability distribution~$\pst{\mu}{n}{}$ on $S_n$ to be the product of $n-1$ copies of the distribution~$\st{\mu}{}{}$ and a single copy of the distribution~$\mu$, yet again with the above identification of points in $\mathbb{R}_+^n$. (More formally, $\pst{\mu}{n}{}(A)=\pst{\mu}{}{(n)}(\theta^{-1}(A))$, $A\in\mathcal{B}(S_n)$, where $\pst{\mu}{}{(n)}=\st{\mu}{}{n-1}\times\mu$ and $\theta$ is again the projection from $\mathbb{R}_+^n$ into the quotient space~$S_n$.) We again regard $\pst{\mu}{n}{}$ as a distribution on $S$, assigning mass one to the set~$S_n$.
Consider now the modified process~$(\hat{X}(t))_{t\ge0}$ on $S$ describing the system in which the workload distribution is again $\mu$ and in which, when there are $n\ge1$ individuals in the system, individual workloads are again reduced at rate $\beta(n)/n$; however, for the modified system, (a)~an individual departing on completion of its workload is immediately replaced by another bringing an independent workload with distribution~$\mu$, (b)~external arrivals to the system are not accepted. Thus, for the modified system, the number of individuals remains constant, and conditional on this being $n$, the system behaves as a set of $n$ independent renewal processes, each of which has stationary residual workload distribution~$\st{\mu}{}{}$. Hence, for any distribution $\pi'$ on $\mathbb{Z}_+$, the distribution~$\st{\mu}{\pi'}{}$ on $S$ is stationary for the process~$(\hat{X}(t))_{t\ge0}$. Let $(P_t)_{t\ge0}$ and $(\hat{P}_t)_{t\ge0}$ be the semigroups of transition kernels associated respectively with the processes~$(X(t))_{t\ge0}$ and $(\hat{X}(t))_{t\ge0}$. For any $a>0$, let $\df{a}$ be the class of functions~$f$ on $S$ taking values in $[0,1]$ and satisfying the continuity condition \begin{equation} \label{eq:3} \left\lvert (P_t f(x) - f(x)) \right\rvert \le at \qquad\text{for all $x\in S$ and $t>0$}, \end{equation} where $P_tf(x)=\int_S P_t(x,dy)f(y)$. For any such $f$ and for any distribution~$\nu$ on $S$, define also $\nu f = \int_S f(x) \nu(dx)$ and, for any $t>0$, define $\nu{}P_tf=\nu(P_t{}f)$ (so that $\nu{P_t}f$ is the expectation of $f(X(t))$ when $(X(t))_{t\ge0}$ is given initial distribution~$\nu$); similarly define $\nu\hat{P_t}f$. Now compare the behaviour of the processes~$(X(t))_{t\ge0}$ and $(\hat{X}(t))_{t\ge0}$, each started with the distribution $\st{\mu}{\pi}{}$; so as to simplify the description below we couple these two processes so that they agree until the time of the first arrival or workload completion. We then have (see the further explanation below) that, with this common initial distribution, for any $a>0$, $f\in\df{a}$, and $h>0$, \begin{align} \st{\mu}{\pi}{} P_h f - \st{\mu}{\pi}{} \hat{P}_h f & = \mathbf{E} \bigl(f(X(h)) - f(\hat{X}(h))\bigr)\nonumber \\ & = h \sum_{n\ge0} \pi(n) \left[ \alpha(n)(\pst{\mu}{n+1}{}f - \st{\mu}{n}{}f) + \beta(n)(\st{\mu}{n-1}{}f - \pst{\mu}{n}{}f) \right] + o(h) \label{eq:4}\\ & = h \sum_{n\ge0} \left[ \pi(n) \alpha(n) - \pi(n+1) \beta(n+1) \right] (\pst{\mu}{n+1}{}f - \st{\mu}{n}{}f) + o(h) \nonumber\\ & = o(h) \label{eq:5} \end{align} as $h\to0$ (recall in \eqref{eq:4} that $\beta(0)=0$); further the above convergence as $h\to0$ is uniform over $f\in\df{a}$ in the sense that (\ref{eq:5}) may be written as \begin{equation} \label{eq:6} \sup_{f\in\df{a}} \lvert \st{\mu}{\pi}{} P_h f - \st{\mu}{\pi}{} \hat{P}_h f \rvert = o(h) \qquad\text{as $h\to0$} \end{equation} (again see below). To show~\eqref{eq:4} note first that, from the above coupling and for any $h>0$, we have $f(X(h))=f(\hat{X}(h))$ except where there is either at least one external arrival or at least one workload completion in $[0,h]$. 
It follows from the definition of $\st{\mu}{\pi}{}$ that, conditional on the number of individuals initially being $n$, the probability of an external arrival in $[0,h]$ is $\alpha(n)h+o(h)$ as $h\to0$, and that an arriving individual finding the distribution of the system to be $\st{\mu}{n}{}$ changes this to $\pst{\mu}{n+1}{}$ in the case of the process~$(X(t))_{t\ge0}$ and leaves it unchanged in the case of the process~$(\hat{X}(t))_{t\ge0}$. Similarly, again conditional on the number of individuals initially being $n$ (and recalling that $m(\mu)=1$), the probability of a workload completion in a time interval~$[0,h]$ is $\beta(n)h+o(h)$ as $h\to0$, and, under the distribution~$\st{\mu}{n}{}$ and conditional on such a completion taking place, the residual workload distribution becomes $\st{\mu}{n-1}{}$ in the case of the process~$(X(t))_{t\ge0}$ and $\pst{\mu}{n}{}$ in the case of the process~$(\hat{X}(t))_{t\ge0}$. Further it follows from the conditions~\eqref{eq:1} and \eqref{eq:2} that, under the initial distribution~$\st{\mu}{\pi}{}$, the probability of two or more arrivals or workload completions in $[0,h]$ is $o(h)$ as $h\to0$. That the relation~\eqref{eq:4} now holds as $h\to0$ with the uniformity over $f\in\df{a}$ required for \eqref{eq:6} follows easily from these results and from the definition of $\df{a}$. To see this note that, since $f\in\df{a}$ implies that $f$ takes values in $[0,1]$, the contribution to the error term in (\ref{eq:4}) resulting from the neglect of the possibility of two or more arrivals or workload completions in $[0,h]$ is uniformly $o(h)$ as $h\to0$ as required. Similarly the terms $\pst{\mu}{n+1}{}f-\st{\mu}{n}{}f$ and $\st{\mu}{n-1}{}f-\pst{\mu}{n}{}f$ in (\ref{eq:4}) are obtained by treating the precise time of the first arrival or workload completion within $[0,h]$ as if it were time $h$; (recalling that $\pst{\mu}{n+1}{}$, etc., are probability measures) it follows from \eqref{eq:3} that the consequent error in each of the above two terms is bounded by $2ah$, so that the further contribution to the error term in (\ref{eq:4}) is $O(h^2)$ as $h\to0$, again with uniformity over $f\in\df{a}$. The relations~\eqref{eq:5}, and hence (\ref{eq:6}), are now immediate from the balance equations~\eqref{eq:1}. Since the distribution~$\st{\mu}{\pi}{}$ is stationary for the process~$(\hat{X}(t))_{t\ge0}$, it now follows from~\eqref{eq:6} that, again for any $a>0$ and $h>0$, \begin{displaymath} \sup_{f\in\df{a}} \lvert \st{\mu}{\pi}{} P_h f - \st{\mu}{\pi}{} f \rvert = o(h) \qquad\text{as $h\to0$}. \end{displaymath} Further, it is straightforward that if $f\in\df{a}$, then also $P_tf\in\df{a}$ for any $t>0$. Standard manipulations using the semigroup structure of $(P_t)_{t\ge0}$, e.g.\ the consideration of increasingly refined partitions of the interval~$[0,t]$, now give that, for all $a>0$, $f\in\df{a}$, and $t\ge0$, \begin{equation}\label{eq:7} \st{\mu}{\pi}{} P_t f = \st{\mu}{\pi}{} f. \end{equation} Finally, we show that it follows from (\ref{eq:7}) that $\st{\mu}{\pi}{}P_t=\st{\mu}{\pi}{}$ for all $t>0$, so that $\st{\mu}{\pi}{}$ is stationary for $(X(t))_{t\ge0}$ as required.
It is sufficient to show that, for any $n\ge1$ and any set~$A\in\mathcal{B}(S_n)$ whose inverse image in $\mathbb{R}_+^n$ under the mapping $\theta$ defined above is a product of intervals in $\mathbb{R}_+$, we have \begin{equation} \label{eq:8} \st{\mu}{\pi}{} P_t \textbf{I}_A = \st{\mu}{\pi}{} \textbf{I}_A, \end{equation} where $\textbf{I}_A$ is the indicator function of the set~$A$. It follows from the piecewise deterministic form of the process~$(X(t))_{t\ge0}$ that we may choose a sequence of functions~$(f_k,\,k\ge1)$ such that, for each $k$, (i) $f_k\in\df{a}$ for some $a>0$ and (ii) $f_k$ and $\textbf{I}_A$ agree except on a set whose Lebesgue measure (under $\theta^{-1}$) in $\mathbb{R}_+^n$ tends to zero as $k\to\infty$. Since $\st{\mu}{\pi}{}$, and so also $\st{\mu}{\pi}{}P_t$, are non-atomic distributions, the result~(\ref{eq:8}) now follows by using (\ref{eq:7}) with $f=f_k$ and letting $k\to\infty$. \end{proof} \begin{rem}\label{rem:balance} Suppose that the equations~\eqref{eq:1} above are multiplied by the signed measure $(\pst{\mu}{n+1}{}-\st{\mu}{n}{})$ to give \begin{equation} \label{eq:9} \pi(n)\alpha(n)(\pst{\mu}{n+1}{} - \st{\mu}{n}{}) = \pi(n+1)\beta(n+1)(\pst{\mu}{n+1}{} - \st{\mu}{n}{}), \qquad n\ge0. \end{equation} These equations have an obvious interpretation as representing, under the distribution~$\st{\mu}{\pi}{}$ and for each $n\ge0$, a detailed balance of flux between $S_n$ and $S_{n+1}$, not just with regard to the total probability assigned to each of these spaces, but also with regard to the distribution of the residual workload sizes: the intuition underlying the derivation of \eqref{eq:5} above---which is also that of Pechinkin's coupling approach---is that, under $\st{\mu}{\pi}{}$, an arrival finding $n$ individuals in the system transforms the residual workload distribution from $\st{\mu}{n}{}$ to $\pst{\mu}{n+1}{}$, while a departure from the system when it contains $n+1$ individuals transforms the residual workload distribution from what would have been $\pst{\mu}{n+1}{}$, if the individual had remained in the system with a renewed workload, to the distribution $\st{\mu}{n}{}$. \end{rem} In the case where we do not have processor-sharing, i.e.\ in which it is no longer the case that at any time all workloads are being reduced at the same rate, it is necessary at any time to distinguish the individuals in the system. Thus each~$S_n$ above is replaced by $\mathbb{R}_+^n$ and the state space $S$ is replaced by $S^*=\bigcup_{n\ge0}\mathbb{R}_+^n$. We consider as an example the case of the single-server queue with ``last-in-first-out preemptive resume'' discipline, in which at any time all service effort is devoted to the last individual to arrive at the system. If at any time there are $n$ individuals in the system, we may index these by $i=1,\dots,n$ in the order of their arrival, and no individual changes index during its time in the system; as usual arrivals occur as a Poisson process with rate $\alpha(n)$, and the workload of individual~$n$ is now being reduced at rate~$\beta(n)$, while that of the remaining individuals is being reduced at rate~$0$. As previously, define the probability distribution~$\st{\mu}{}{}$ on $\mathbb{R}_+$ to be the stationary residual life distribution of the renewal process with inter-event distribution~$\mu$, and, for each $n\ge0$, let the distribution $\st{\mu}{n}{}$ on $\mathbb{R}_+^n$ be now the (ordered) product of $n$ copies of $\st{\mu}{}{}$.
For each $n\ge1$, let the distribution~$\pst{\mu}{n}{}$ on $\mathbb{R}_+^n$ be the (ordered) product of $n-1$ copies of the distribution~$\st{\mu}{}{}$ and a single copy of the distribution~$\mu$, with the latter assigned to the $n$th coordinate of $\mathbb{R}_+^n$. Finally, for any distribution~$\pi$ on $\mathbb{Z}_+$, define the distribution $\st{\mu}{\pi}{}$ on $S^*$ by $\st{\mu}{\pi}{}=\sum_{n\in\mathbb{Z}_+}\pi(n)\st{\mu}{n}{}$ as before. With these (re)definitions, both Theorem~\ref{thm:single} and its proof remain unchanged as stated. Again the underlying reason is as given in Remark~\ref{rem:balance} above: under the distribution~$\st{\mu}{\pi}{}$, and relative to the modified process considered in the proof of Theorem~\ref{thm:single}, an arrival finding $n$ individuals in the system transforms the residual workload distribution from $\st{\mu}{n}{}$ to $\pst{\mu}{n+1}{}$, while a departure from the system when it contains $n+1$ individuals transforms the residual workload distribution from $\pst{\mu}{n+1}{}$ to $\st{\mu}{n}{}$. Note that this balance does not obtain in the case of, for example, a ``first-in-first-out'' discipline, and here, as is again well known, we do not have the above insensitivity. \section{Multi-class networks} \label{sec:multi-class-networks} Consider now a multi-class network. We concentrate on the processor-sharing case---adaptations to other disciplines may be made as in the single-class case. Let $\mathcal{I}=\{1,\dots,N\}$ denote the set of classes, and let $\vec{n}=(n_i,\,i\in\mathcal{I})$ where $n_i$ is the number of individuals in class~$i$. An individual entering class~$i$ acquires a workload which has distribution~$\mu^i$ with nonzero finite mean~$m(\mu^i)$; we again assume without loss of generality that $m(\mu^i)=1$; the workload of each individual in class $i$ is reduced at a state-dependent rate~$\phi_i(\vec{n})/n_i$, where $\phi_i(\vec{n})>0$ if and only if $n_i>0$. Individuals arrive at each class~$i$ from outside the network as a Poisson process with state-dependent rate~$\phi_{0i}(\vec{n})$; on completion of its workload in any class~$i$ an individual moves to class~$j$ with state-dependent probability~$\phi_{ij}(\vec{n})/\phi_i(\vec{n})$ or leaves the network with probability~$\phi_{i0}(\vec{n})/\phi_i(\vec{n})$, where \begin{equation}\label{eq:10} \sum_{j\in\mathcal{I}}\phi_{ij}(\vec{n}) + \phi_{i0}(\vec{n}) = \phi_i(\vec{n}) \end{equation} (there are no problems in allowing the possibility $\phi_{ii}(\vec{n})>0$). The workloads, arrivals processes and routing decisions are all independent. As in the single-class case, we represent the system as a Markov process~$(X(t))_{t\ge0}$ by defining its state at any time to be the vector~$\vec{n}$ introduced above together with the residual workloads at that time of the set of individuals in each class. For given $\vec{n}$, these workloads take values in the space~$S_\vec{n}$ which is the ordered product of the spaces~$S_{n_i}$, $i\in\mathcal{I}$, where, as previously, each $S_{n_i}$ is formed from $\mathbb{R}_+^{n_i}$ by identifying points within the latter space which may be obtained from each other under permutation of their coordinates (and where again the set~$S_0$ contains a single point). The state space~$S$ for the system is the union of all the possible $S_\vec{n}$, and the spaces $S_\vec{n}$ and $S$ are endowed with the obvious $\sigma$-algebras~$\mathcal{B}(S_\vec{n})$ and $\mathcal{B}(S)$.
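Purely as an illustration of the notation, observe that the single-class system of Section~\ref{sec:single-class-networks} is the special case $N=1$ of the present set-up: taking $\phi_{01}(n)=\alpha(n)$, $\phi_1(n)=\phi_{10}(n)=\beta(n)$ and $\phi_{11}(n)=0$ for all $n\ge0$ recovers the model of that section, and the partial balance equations~\eqref{eq:11} of Theorem~\ref{thm:multi} below then reduce, for each of $i=0$ and $i=1$, to the balance equations~\eqref{eq:1}.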
Analogously to the single-class case, for each $i\in\mathcal{I}$, define the probability distribution~$\st{\mu}{}{i}$ on $\mathbb{R}_+$ to be the stationary residual life distribution of the renewal process with inter-event distribution~$\mu^i$ (as previously the residual life should be interpreted as a residual workload). For each $i\in\mathcal{I}$ and for each $n_i\ge1$, define as previously the distribution~$\st{\mu}{n_i}{i}$ on $S_{n_i}$ to be the product of $n_i$ copies of the distribution~$\st{\mu}{}{i}$ (again with the above identification of points in $\mathbb{R}_+^{n_i}$)---representing the joint distribution of the residual lives in a set of $n_i$ independent stationary renewal processes each with inter-event distribution~$\mu^i$; define also $\st{\mu}{0}{i}$ to be the probability distribution concentrated on the single-point set~$S_0$. For each $\vec{n}\in\mathbb{Z}_+^N$, define the distribution $\st{\vec{\mu}}{\vec{n}}{}$ on $S_\vec{n}$ to be the (ordered) product distribution which, for each $i\in\mathcal{I}$, assigns the distribution~$\st{\mu}{n_i}{i}$ to $S_{n_i}$. We again regard the distribution~$\st{\vec{\mu}}{\vec{n}}{}$ as a distribution on $S$, assigning its total mass one to the set~$S_\vec{n}$. For any positive distribution~$\pi$ on $\mathbb{Z}_+^N$, we define the distribution $\st{\vec{\mu}}{\pi}{}$ on $S$ by $\st{\vec{\mu}}{\pi}{}=\sum_{\vec{n}\in\mathbb{Z}_+^N}\pi(\vec{n})\st{\vec{\mu}}{\vec{n}}{}$. It is notationally convenient to expand the set $\mathcal{I}$ to $\mathcal{I}'=\{0\}\cup\mathcal{I}$, treating $0$ as an extra class feeding external arrivals to, and receiving departures from, the network. (However, the components of the state~$\vec{n}$ of the network remain indexed in the original set~$\mathcal{I}$.) For completeness we define $\phi_{00}(\vec{n})=0$ for all $\vec{n}$, and also $\phi_{ij}(\vec{n})=0$ for all $i\in\mathcal{I}$, $j\in\mathcal{I}'$, and $\vec{n}$ such that $n_i=0$ (so that \eqref{eq:10} above remains valid for such $\vec{n}$ also). For each $i\in\mathcal{I}$, let $\vec{e}_i$ be the $N$-dimensional vector whose $i$th component is $1$ and whose other components are $0$, and let $\vec{e}_0$ be the $N$-dimensional vector all of whose components are $0$. For each $\vec{n}$ and each $i,j\in\mathcal{I}'$ define the vector $\T{i}{j}\vec{n}=\vec{n}-\vec{e}_i+\vec{e}_j$; define also $T_i\vec{n}=\T{i}{0}\vec{n}=\vec{n}-\vec{e}_i$ and $T^j\vec{n}=\T{0}{j}\vec{n}=\vec{n}+\vec{e}_j$. \begin{theorem}\label{thm:multi} Suppose that the distribution~$\pi$ on $\mathbb{Z}_+^N$ satisfies the partial balance equations \begin{equation} \label{eq:11} \pi(\vec{n})\sum_{j\in\mathcal{I}'}\phi_{ij}(\vec{n}) = \sum_{j\in\mathcal{I}'}\pi(\T{i}{j}\vec{n})\phi_{ji}(\T{i}{j}\vec{n}), \qquad \vec{n}\in\mathbb{Z}_+^N, \quad i\in\mathcal{I}', \end{equation} where, for $\vec{n}$ and $i\in\mathcal{I}$ such that $n_i=0$, we interpret the right side of \eqref{eq:11} as zero (recall that when $n_i=0$ we have $\phi_{ij}(\vec{n})=0$ for all $j\in\mathcal{I}'$, so that \eqref{eq:11} is automatically satisfied in this case). Suppose also that \begin{equation} \label{eq:12} \sum_{\vec{n}\in\mathbb{Z}_+^N}\pi(\vec{n})\sum_{i\in\mathcal{I}}\phi_{0i}(\vec{n})<\infty. \end{equation} Then $\st{\vec{\mu}}{\pi}{}$ is stationary for the process~$(X(t))_{t\ge0}$, and in particular $\pi$ is stationary for the associated numbers of individuals in the system.
Conversely, if a distribution~$\pi$ on $\mathbb{Z}_+^N$ is stationary for the numbers of individuals in the system for all $\vec{\mu}=(\mu^i,\,i\in\mathcal{I})$ such that $m(\mu^i)=1$ for all $i\in\mathcal{I}$, then $\pi$ satisfies the equations~(\ref{eq:11}). \end{theorem} \begin{proof} Suppose first that $\pi$ satisfies the equations~\eqref{eq:11}. As in the proof of Theorem~\ref{thm:single}, we again assume without loss of generality that each distribution~$\mu^i$ has no atom of probability at zero. For each $\vec{n}$ and for each $i$ such that $n_i\ge1$, define also the residual workload distribution~$\pst{\vec{\mu}}{\vec{n}}{i}$ on $S_\vec{n}$ by $\pst{\vec{\mu}}{\vec{n}}{i} =\pst{\mu}{n_i}{i}\times\prod_{j\ne{}i}\st{\mu}{n_j}{j}$, where $\pst{\mu}{n_i}{i}$ is defined as in the proof of Theorem~\ref{thm:single}. Thus $\pst{\vec{\mu}}{\vec{n}}{i}$ corresponds to each individual in each class~$j$ having independently the stationary residual workload distribution~$\st{\mu}{}{j}$, except only that a single individual in class~$i$ is given the workload distribution~$\mu^i$. For each $\vec{n}$, define also $\pst{\vec{\mu}}{\vec{n}}{0}=\st{\vec{\mu}}{\vec{n}}{}$. Again as in the proof of Theorem~\ref{thm:single}, define the process~$(\hat{X}(t))_{t\ge0}$ on $S$ to be that appropriate to the modified system in which there are no arrivals, departures, or transfers between classes; rather each individual in each class~$i$, on completion of its workload, acquires a new independent workload with distribution~$\mu^i$. Thus the occupancy of the system remains constant; conditional on this being $\vec{n}$, individual workloads in any class~$i$ such that $n_i\ge1$ are again reduced at rate $\phi_i(\vec{n})/n_i$ and the system behaves as a set of independent renewal processes. Further, for any distribution $\pi'$ on $\mathbb{Z}_+^N$, the distribution~$\st{\vec{\mu}}{\pi'}{}$ on $S$ is stationary for $(\hat{X}(t))_{t\ge0}$. Again let $(P_t)_{t\ge0}$ and $(\hat{P}_t)_{t\ge0}$ be the semigroups of transition kernels associated respectively with the processes~$(X(t))_{t\ge0}$ and $(\hat{X}(t))_{t\ge0}$, and, for any $a>0$, let $\df{a}$ be the class of functions~$f$ on $S$ taking values in $[0,1]$ and satisfying the earlier continuity condition~\eqref{eq:3}.
Comparison of the behaviour of the processes~$(X(t))_{t\ge0}$ and $(\hat{X}(t))_{t\ge0}$, each started with the distribution $\st{\vec{\mu}}{\pi}{}$ and coupled as in the earlier proof until the time of the first external arrival or workload completion, now gives that, for any $a>0$, $f\in\df{a}$, and $h>0$, \begin{align} \st{\vec{\mu}}{\pi}{} P_h f - \st{\vec{\mu}}{\pi}{} \hat{P}_h f \hspace{-6em} & \nonumber \\[1ex] & = \mathbf{E} \bigl(f(X(h)) - f(\hat{X}(h))\bigr)\nonumber \\ & = h \sum_{\vec{n}\in\mathbb{Z}_+^N}\pi(\vec{n})\sum_{i\in\mathcal{I}'}\sum_{j\in\mathcal{I}'} \phi_{ij}(\vec{n}) \left( \pst{\vec{\mu}}{\T{i}{j}\vec{n}}{j}f - \pst{\vec{\mu}}{\vec{n}}{i}f \right) + o(h) \label{eq:13}\\ & = h \sum_{\vec{n}\in\mathbb{Z}_+^N} \Biggl( \sum_{i\in\mathcal{I}'}\sum_{j\in\mathcal{I}'}\pi(\vec{n}) \phi_{ji}(\vec{n}) \pst{\vec{\mu}}{\T{j}{i}\vec{n}}{i}f - \sum_{i\in\mathcal{I}'}\sum_{j\in\mathcal{I}'}\pi(\vec{n}) \phi_{ij}(\vec{n}) \pst{\vec{\mu}}{\vec{n}}{i}f \Biggr) + o(h) \nonumber \\ & = h \sum_{\vec{n}\in\mathbb{Z}_+^N} \Biggl( \sum_{i\in\mathcal{I}'}\sum_{j\in\mathcal{I}'}\pi(\T{i}{j}\vec{n}) \phi_{ji}(\T{i}{j}\vec{n}) \pst{\vec{\mu}}{\vec{n}}{i}f - \sum_{i\in\mathcal{I}'}\sum_{j\in\mathcal{I}'}\pi(\vec{n}) \phi_{ij}(\vec{n}) \pst{\vec{\mu}}{\vec{n}}{i}f \Biggr) + o(h) \nonumber \\ & = h \sum_{\vec{n}\in\mathbb{Z}_+^N}\sum_{i\in\mathcal{I}'} \Biggl( \sum_{j\in\mathcal{I}'}\pi(\T{i}{j}\vec{n}) \phi_{ji}(\T{i}{j}\vec{n}) - \sum_{j\in\mathcal{I}'}\pi(\vec{n}) \phi_{ij}(\vec{n}) \Biggr) \pst{\vec{\mu}}{\vec{n}}{i}f + o(h) \nonumber \\ & = o(h) \label{eq:14} \end{align} as $h\to0$, with uniformity of convergence over all $f\in\df{a}$, so that we may write \begin{equation} \label{eq:15} \sup_{f\in\df{a}} \lvert \st{\vec{\mu}}{\pi}{} P_h f - \st{\vec{\mu}}{\pi}{} \hat{P}_h f \rvert = o(h) \qquad\text{as $h\to0$} \end{equation} (recall again that $\phi_{ij}(\vec{n})=0$ whenever $n_i=0$, so that there is no difficulty with the lack of a formal definition of $\pst{\vec{\mu}}{\vec{n}}{i}$ in this case). The identity~(\ref{eq:15}) is simply the multi-class version of the identity~\eqref{eq:6} in the proof of Theorem~\ref{thm:single}, and is similarly obtained, albeit with a slightly more compact notation; in particular, under the common initial distribution~$\st{\vec{\mu}}{\pi}{}$ of the two processes, the probability that there are initially $\vec{n}$ individuals in the system and that a transition from $i$ to $j$ in the original system---where either $i$ or $j$ may be $0$---occurs in the time interval $[0,h]$ is $\pi(\vec{n})\phi_{ij}(\vec{n})h+o(h)$ as $h\to0$, and in this case the distribution of the process~$(X(t))_{t\ge0}$ becomes $\pst{\vec{\mu}}{\T{i}{j}\vec{n}}{j}$ while that of the process~$(\hat{X}(t))_{t\ge0}$ becomes $\pst{\vec{\mu}}{\vec{n}}{i}$; that \eqref{eq:13} now holds with the required uniformity of convergence follows, using also \eqref{eq:12}, as in the earlier proof; finally the results \eqref{eq:14}, and so also (\ref{eq:15}), follow from the partial balance equations~\eqref{eq:11}. Since the distribution~$\st{\vec{\mu}}{\pi}{}$ is stationary for the process~$(\hat{X}(t))_{t\ge0}$, it now follows from~\eqref{eq:15} that, again for any $a>0$ and $h>0$, \begin{displaymath} \sup_{f\in\df{a}} \lvert \st{\vec{\mu}}{\pi}{} P_h f - \st{\vec{\mu}}{\pi}{} f \rvert = o(h) \qquad\text{as $h\to0$}. \end{displaymath} That $\st{\vec{\mu}}{\pi}{}$ is now stationary for $(X(t))_{t\ge0}$ follows as in the proof of Theorem~\ref{thm:single}.
Now suppose that a distribution~$\pi$ on $\mathbb{Z}_+^N$ is stationary for the numbers of individuals in the system for all $\vec{\mu}=(\mu^i,\,i\in\textbf{I}I)$ with $m(\mu^i)=1$ for all $i\in\textbf{I}I$. A proof that $\pi$ then necessarily satisfies the partial balance equations~\eqref{eq:11} is given by Bonald and Prouti\`{e}re (2002, 2003). In summary, consider the case in which in every class the workload distribution is exponential with mean~$1$, and, for any fixed class~$i$, compare this with the case in which, for some $0<\lambda<1$, the workload distribution in class~$i$ is replaced by a mixture of two distributions, obtained by choosing with probability~$\lambda$ an exponential distribution with mean~$\lambda^{-1}$, and with probability~$1-\lambda$ the distribution concentrated on $0$. Both these models may be (re)formulated as simple Markov jump processes---in the latter case the transition rates into and out of the class~$i$ are reduced by a factor~$\lambda$. Since $\pi$ is stationary in both cases, comparison of the (full) balance equations for stationarity yields the partial balance equations~\eqref{eq:11}. \end{proof} \begin{example} \emph{Processor-sharing Whittle networks.} Suppose that for some $\nu\ge0$, some strictly positive function~$\Phi$ on $\mathbb{Z}_+^N$, and some stochastic matrix~$P=(p_{ij},\,i\in\textbf{I}I',\,j\in\textbf{I}I')$ such that $p_{00}=0$, we have, for each $\vec{n}$, \begin{alignat*}{2} \phi_{0j}(\vec{n}) & = \nu p_{0j}, & \qquad & j \in \textbf{I}I', \\ \phi_{ij}(\vec{n}) & = \frac{\Phi(T_i\vec{n})}{\Phi(\vec{n})}\, p_{ij}, && i \in \textbf{I}I, \quad j \in \textbf{I}I', \end{alignat*} where we again make the convention that $\Phi(T_i\vec{n})=0$ whenever $n_i=0$. Then it is readily checked that the partial balance equations~\eqref{eq:11} are satisfied by \begin{equation}\label{eq:16} \pi(\vec{n}) = a \Phi(\vec{n}) \prod_{i\in\textbf{I}I} \rho_i^{n_i}, \end{equation} for any $a>0$ and positive solution $\vec{\rho}=(\rho_i,\,i\in\textbf{I}I)$ of the equations \begin{align*} \nu & = \sum_{j\in\textbf{I}I}\rho_j p_{j0},\\ \rho_i & = \sum_{j\in\textbf{I}I}\rho_j p_{ji} + \nu p_{0i}, \qquad i\in\textbf{I}I. \end{align*} Thus in particular the stationary distribution~$\pi$ given by \eqref{eq:16} for the number of individuals of each type in the system is insensitive to the $\mu^i$ (recall our assumption $m(\mu^i)=1$ for all $i$). For the case where $P$ is irreducible and $\nu>0$, the above equations for $\vec{\rho}$ have a unique solution. Again when $P$ is irreducible and when $\nu=0$ (corresponding to a closed network) $\pi$ remains uniquely determined, up to a multiplicative constant, by \eqref{eq:16}. The case where $\Phi(\vec{n})=\prod_{i\in\textbf{I}I}\lambda_i^{n_i}$ for positive constants $(\lambda_i,\,i\in\textbf{I}I)$ characterises processor-sharing Jackson networks. Further discussion of Whittle networks is given by Serfozo (1999) and, for processor-sharing networks, by Bonald and Prouti\`{e}re (2002, 2003). \end{example} \begin{example} \emph{Networks with no internal transitions.} Suppose that $\phi_{ij}(\vec{n})=0$ for all $i,j\in\textbf{I}I$ and for all $\vec{n}\in\mathbb{Z}_+^N$, so that no transitions are possible between the classes in $\textbf{I}I$. The partial balance equations~\eqref{eq:11} then reduce to the detailed balance equations \begin{equation} \label{eq:17} \pi(\vec{n})\phi_{i0}(\vec{n}) = \pi(T_i\vec{n})\phi_{0i}(T_i\vec{n}), \qquad \vec{n}\in\mathbb{Z}_+^N, \quad n_i\ge1, \quad i\in\textbf{I}I. 
\end{equation} (In the case of a single class, these equations further reduce to the equations~\eqref{eq:1}.) An example is given by a traditional (uncontrolled) loss network---see, for example, Kelly (1986). This is naturally processor-sharing. Here workloads are identified with call durations and, for some set $\mathcal{A}\subset\mathbb{Z}_+^N$ such that $\vec{n}\in\mathcal{A}$ implies $T_i\vec{n}\in\mathcal{A}$ for all $\vec{n}$ and $i$ such that $n_i\ge1$ ($\mathcal{A}$ is typically defined by capacity constraints), we have \begin{align*} \phi_{0i}(\vec{n}) & = \nu_i\textbf{I}(T^i\vec{n}\in\mathcal{A})\\ \phi_{i0}(\vec{n}) & = \sigma_i n_i, \end{align*} for some vectors $(\nu_i,\,i\in\textbf{I}I)$ and $(\sigma_i,\,i\in\textbf{I}I)$ of strictly positive parameters. The equations~\eqref{eq:17} are then satisfied by \begin{equation} \label{eq:18} \pi(\vec{n}) = a \prod_{i\in\textbf{I}I} \frac{\kappa_i^{n_i}}{n_i!}, \end{equation} where $\kappa_i=\nu_i/\sigma_i$ for each $i$, and where $a$ is naturally chosen to be a normalising constant. As was originally shown by Burman \textit{et al} (1984), we therefore again have insensitivity of the occupancy distribution~$\pi$ of the network. The stationary distribution of the residual call durations is as identified by Theorem~\ref{thm:multi}. Other examples of processor-sharing networks with no internal transitions are given by those used to model connections in communications networks with simultaneous resource requirements and variable bandwidth requirements---see, for example, Bonald and Massouli\'e (2001) and de Veciana \textit{et al} (2001). Here it is far from automatic that the detailed balance equations~\eqref{eq:17} are satisfied. \end{example} \end{document}
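The insensitivity statements in the two examples above concern the occupancy distribution only; for exponential unit-mean workloads the occupancy process is an ordinary Markov jump process and the product form can be checked directly. The following Python sketch is a minimal illustration: the rates, the capacity constraint and all variable names are our own choices, not part of the analysis above. It builds the generator of a two-class loss network and verifies that $\pi(\vec{n})\propto\prod_i\kappa_i^{n_i}/n_i!$ with $\kappa_i=\nu_i/\sigma_i$ is stationary.
\begin{verbatim}
import numpy as np
from itertools import product
from math import factorial

# Illustrative two-class loss network: calls share a capacity C (assumed values).
nu    = [1.0, 2.0]     # arrival rates nu_i
sigma = [1.0, 0.5]     # per-call completion rates sigma_i
C     = 4              # capacity constraint defining the admissible set A

states = [n for n in product(range(C + 1), repeat=2) if sum(n) <= C]
index  = {n: k for k, n in enumerate(states)}

# Generator of the occupancy Markov chain (exponential call durations).
Q = np.zeros((len(states), len(states)))
for n in states:
    k = index[n]
    for i in range(2):
        up = (n[0] + (i == 0), n[1] + (i == 1))
        if sum(up) <= C:                   # arrival admitted only if the new state is in A
            Q[k, index[up]] += nu[i]
        if n[i] >= 1:                      # completion of one of the n_i class-i calls
            down = (n[0] - (i == 0), n[1] - (i == 1))
            Q[k, index[down]] += sigma[i] * n[i]
    Q[k, k] = -Q[k].sum()

# Candidate stationary distribution: pi(n) proportional to prod_i kappa_i^{n_i}/n_i!
kappa = [nu[i] / sigma[i] for i in range(2)]
pi = np.array([np.prod([kappa[i] ** n[i] / factorial(n[i]) for i in range(2)])
               for n in states])
pi /= pi.sum()

print("max |pi Q| =", np.abs(pi @ Q).max())   # numerically zero: pi is stationary
\end{verbatim}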
\begin{document} \title{Towards a Landau-Zener formula for an interacting Bose-Einstein condensate} \author{D. Witthaut, E. M. Graefe, and H. J. Korsch} \email{[email protected]} \affiliation{FB Physik, Technische Universit{\"a}t Kaiserslautern, D-67653 Kaiserslautern, Germany} \date{\today } \begin{abstract} We consider the Landau-Zener problem for a Bose-Einstein condensate in a linearly varying two-level system, for the full many-particle system as well as in the mean-field approximation. The many-particle problem can be solved approximately within an independent crossings approximation, which yields an explicit Landau-Zener formula. \end{abstract} \pacs{03.75.Lm, 03.65.-w, 73.40.Gk} \maketitle \section{Introduction} During the last years, a lot of work has been devoted to the nonlinear Landau-Zener problem, which describes a Bose-Einstein condensate (BEC) in a time-dependent two-state system in the mean-field approximation \cite{Wu00,Wu03,Liu03}. As in the celebrated original Landau-Zener scenario, the energy difference between the two levels is assumed to vary linearly in time. This situation arises, e.g., for a BEC in a double-well trap or for a BEC in an accelerated lattice around the edge of the Brillouin zone. A major question in such a situation is the following: Initially the two states are energetically well separated and the total population is in the lower state. Then the energy difference varies linearly in time, such that the two levels (anti-) cross. Finally the states are energetically well separated again, however they are just exchanged. What is the probability of a diabatic time evolution, i.e.\ how much of the initial population remains in the first (diabatic) state? In the mean-field approximation, the time evolution is given by the Gross-Pitaevskii equation \begin{equation} { \rm i } \frac{{ \rm d }}{{ \rm d } t} \left(\begin{array}{c} \psi_{1} \\ \psi_2 \end{array} \right) = \hat H(|\psi_1|^2,|\psi_2|^2,t) \left(\begin{array}{c} \psi_{1} \\ \psi_2 \end{array} \right), \end{equation} with the nonlinear Hamiltonian \begin{equation} \hat H(|\psi_1|^2,|\psi_2|^2,t) = \left(\begin{array}{c c} \epsilon + g |\psi_1|^2 & v \\ v & -\epsilon + g |\psi_2|^2 \end{array} \right) \label{eqn-ham-nonlin} \end{equation} and $\epsilon = \alpha t$. The state vector is normalized to unity, thus the effective nonlinearity is $g = \bar g N$, where $N$ is the number of particles in the condensate and $\bar g$ is the bare two-particle interaction constant. Throughout this paper we use scaled units such that $\hbar = 1$. The Landau-Zener transition probability is defined as \begin{equation} P_{\rm LZ}^{\rm mf} = \frac{|\psi_1(t \rightarrow + \infty)|^2}{ |\psi_1(t \rightarrow - \infty)|^2} \, . \label{eqn-plz-mf-def} \end{equation} The original linear problem can be solved analytically with different approaches \cite{Land32,Zene32,Majo32,Stue32}. This yields the celebrated Landau-Zener formula \begin{equation} P_{\rm LZ}^{\rm lin} = { \rm e }^{-\pi v^2/\alpha} \quad \mbox{for} \, g = 0 \label{eqn-plz-lin} \end{equation} for the probability of a diabatic time evolution. In the nonlinear case $g < 0$, things get quite complicated and the Landau-Zener probability is seriously altered. New nonlinear eigenstates emerge if the nonlinearity exceeds a critical value $|g| > g_c = 2v$.
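Before turning to the nonlinear level structure, we note that the mean-field transition probability (\ref{eqn-plz-mf-def}) is obtained by direct numerical propagation of the Gross-Pitaevskii equation above. A minimal Python sketch of such a propagation is given below; the parameter values and the finite integration window $[-T,T]$ are illustrative choices only.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed values for this sketch)
alpha, v, g, T = 0.05, 0.2, -1.0, 400.0

def rhs(t, y):
    # y stores the real and imaginary parts of (psi_1, psi_2)
    psi = y[:2] + 1j * y[2:]
    eps = alpha * t
    H = np.array([[ eps + g * abs(psi[0])**2, v],
                  [ v, -eps + g * abs(psi[1])**2]])
    dpsi = -1j * (H @ psi)
    return np.concatenate([dpsi.real, dpsi.imag])

# all population initially in the lower (first) diabatic state
y0 = np.array([1.0, 0.0, 0.0, 0.0])
sol = solve_ivp(rhs, [-T, T], y0, rtol=1e-8, atol=1e-10)
psi_end = sol.y[:2, -1] + 1j * sol.y[2:, -1]
print("P_LZ^mf ~", abs(psi_end[0])**2)   # Eq. (eqn-plz-mf-def), since |psi_1(-T)|^2 = 1
\end{verbatim}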
A loop develops at the top of the lowest level $\mu(\epsilon)$, while the total energy \begin{equation}gin{eqnarray} E^{\rm mf} &=& \epsilon (|\psi_1|^2 - |\psi_2|^2) + \frac{g}{2} (|\psi_1|^4 + |\psi_2|^4) \nonumber \\ && \quad + v (\psi_1^* \psi_2 + \psi_2^* \psi_1) \label{eqn-nonlin-etot} \end{eqnarray} shows a swallow's tail structure (cf. the left-hand side of Fig.~{ \rm e }f{fig-levels_mp_mf}). The system can evolve adiabatically along this level only up to the end of the loop - adiabaticity breaks down. Consequently, the Landau-Zener probability does not vanish even in the adiabatic limit $\alpha { \rm i }ghtarrow 0$ \cite{Wu00,Wu03}. For repulsive nonlinearities, $g > 0$ , the situation is just the other way round: The loop appears in the upper level, thus no adiabatic evolution is possible in the upper level. In this paper we consider only the lower level and thus the attractive case $g \le 0$. These considerations have let to a reformulation of the adiabatic theorem for nonlinear systems, based on the adiabatic theorem of classical mechanics \cite{Liu03}. Note also that the emergence of looped levels was previously studied for the quantum dimer \cite{Esse95}. Several approaches were made to derive a nonlinear Landau-Zener formula for this problem using methods from classical Hamiltonian mechanics \cite{Zoba00,Liu02}. For subcritical values of the nonlinearity $|g| < g_c$, standard methods of classical nonadiabatic corrections yield good results for the near-adiabatic case ($\alpha / v^2 \ll 1$). For the case of a rapid passage ($ v^2/\alpha \ll 1$) one finds a quantitative good approximation using classical perturbation theory with $v$ being the small parameter for the subcritical regime as well as for strong nonlinearities, as long as $g<0$. Furthermore for strong nonlinearities there is a simple formula which provides a good approximation for the tunneling probability for an intermediate range of the parameter $\alpha$. This approximation fails in the rapid limit as well as in the near adiabatic one. However, there is no valid approximation in the critical regime $|g| > g_c = 2v$ for $\alpha { \rm i }ghtarrow 0$ so far. Since one is interested in the quasiadiabatic dynamic in most applications this is an important\begin{equation}gin{scriptsize}\end{scriptsize} deficit. Here we present a different approach which yields good results especially in this region. \begin{equation}gin{figure}[t] \centering \includegraphics[width=8cm, angle=0]{levels_mpmf6} \caption{\label{fig-levels_mp_mf} Total energy ({ \rm e }f{eqn-nonlin-etot}) in the mean-field theory (left) and eigenenergies the many-particle Hamiltonian ({ \rm e }f{eqn-mp-hamiltonian}),(right) for $v = 0.2$, $g = -1$ and $N=20$ particles.} \end{figure} Going back to the roots of the problem, we consider the original many-particle problem of an interacting two-mode boson field instead of the mean-field theory. We consider the many-particle Hamiltonian of Bose-Hubbard type, \begin{equation}gin{eqnarray} \hat H(t) &=& \epsilon(t) (\hat n_1 - \hat n_2) + v(\hat a_1^\dagger \hat a_2 + \hat a_2^\dagger \hat a_1) \nonumber \\ && \quad + \frac{\bar g}{2}(\hat n_1(\hat n_1-1) + \hat n_2(\hat n_2-1)), \label{eqn-mp-hamiltonian} \end {eqnarray} where $\hat a_j$ and $\hat a_j^\dagger$ are the bosonic annihilation and creation operators in the $j$th well and $\hat n_j = \hat a_j^\dagger\hat a_j $ is the occupation number operator. 
The eigenvalues of the Hamiltonian ({ \rm e }f{eqn-mp-hamiltonian}) are shown in Fig.~{ \rm e }f{fig-levels_mp_mf} on the right-hand side in dependence of $\epsilon$ for $g=-1$, $v = 0.2$ and $N = 20$ particles. One recognizes the similarity to the mean-field results shown on the left-hand side. A series of avoided crossings with very small level distances is observed where the mean-field energy levels form the swallow's tail structure. For $t { \rm i }ghtarrow -\infty$, one has $\epsilon = \alpha t { \rm i }ghtarrow - \infty$ and the first term dominates the Hamiltonian. The ground state is $|\psi_0\rangle = (N!)^{-1/2} (\hat a_1^\dagger)^N |0\rangle$, where $N$ is the fixed number of particles. In the spirit of the Landau-Zener problem we take this as the initial state for $t { \rm i }ghtarrow -\infty$ and consider the question, how many particles remain in the first well for $t { \rm i }ghtarrow +\infty$, i.e. the effective Landau-Zener transition probability for the {\it population}, which is given by \begin{equation} P_{\rm LZ}^{\rm mp} = \frac{\langle \hat n_1(t { \rm i }ghtarrow + \infty)\rangle}{\langle \hat n_1(t { \rm i }ghtarrow - \infty)\rangle} \, . \label{eqn-plz-mp-def} \end{equation} The superscripts mp and mf are introduced to distinguish between the many-particle and the mean-field system. It will be shown that this many-particle Landau-Zener probability agrees well with the mean-field Landau-Zener probability ({ \rm e }f{eqn-plz-mf-def}). Furthermore this ''back-to-the-roots''-procedure reduces the problem to a {\rm linear} multi-level Landau-Zener scenario, which can be solved approximately in an independent crossing approximation. In this way we derive a Landau-Zener formula for an interacting BEC, which agrees well with numerical results especially in the strongly interacting regime $|g| > g_c = 2v$. \section{The many-particle Landau-Zener problem and the ICA} \label{sec-mp-ica} We now consider the many-particle Landau-Zener scenario ({ \rm e }f{eqn-mp-hamiltonian}) in detail, where the number $N$ of particles is fixed. We expand the Hamiltonian $H$ in the number-state basis $| k \rangle = [k!(N-k)!]^{-1/2} (\hat a_1^\dagger)^{k} (\hat a_2^\dagger)^{N-k} | 0 \rangle$. Then the Hamiltonian is given by the matrix $\langle \ell | H | k \rangle = H_ {\ell,k}$ for $\ell,k = 0, \ldots,N$ with the elements \begin{equation} H_{\ell,k} = h_\ell(t) \, \delta_{\ell,k} + v_\ell \, (\delta_{\ell,k-1} + \delta_{\ell-1,k}) \label{eqn-ham-matrix} \end{equation} and \[ h_\ell(t) = \epsilon(t) (2\ell-N) + \frac{\bar g}{2} ( 2\ell^2 -2\ell N + N^2 -N ) \] and the couplings $v_\ell = v \sqrt{(\ell+1) (N-\ell)}$ on the sub- and superdiagonal. In the Landau-Zener scenario, all diabatic (i.e. uncoupled) levels $h_\ell(t)$ vary linearly in time as $\epsilon(t) = \alpha t$, however with a different offset and slope $\alpha (2 \ell - N)$. \begin{equation}gin{figure}[t] \centering \includegraphics[width=7cm, angle=0]{levels_N=3b} \caption{\label{fig-levels_N=3} The S-matrix elements $|S_{\ell,N}|^2$ in the independent crossing approximation (ICA) for $N=3$ particles.} \end{figure} As stated above, we assume that initially all particles are in the first well, $|\psi (t { \rm i }ghtarrow - \infty) \rangle = | N \rangle$. Consequently one has $\langle \hat n_1(t { \rm i }ghtarrow - \infty)\rangle = N$ and in order to derive the Landau-Zener probability ({ \rm e }f{eqn-plz-mp-def}) we are left with the problem to calculate $\langle \hat n_1(t { \rm i }ghtarrow + \infty)\rangle$. 
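For numerical purposes the matrix (\ref{eqn-ham-matrix}) is easily constructed explicitly. The following Python sketch (the helper name \verb|H_matrix| and the parameter values are ours and purely illustrative) builds $H$ for a given $\epsilon$ and prints a few of the lowest adiabatic levels; scanning $\epsilon$ reproduces the right-hand side of Fig.~\ref{fig-levels_mp_mf} qualitatively.
\begin{verbatim}
import numpy as np

def H_matrix(eps, v, gbar, N):
    """Tridiagonal matrix (eqn-ham-matrix) in the number-state basis |k>, k = 0..N."""
    ell = np.arange(N + 1)
    h = eps * (2 * ell - N) + 0.5 * gbar * (2 * ell**2 - 2 * ell * N + N**2 - N)
    vl = v * np.sqrt((ell[:-1] + 1) * (N - ell[:-1]))   # couplings v_ell, ell = 0..N-1
    return np.diag(h) + np.diag(vl, 1) + np.diag(vl, -1)

# Illustrative parameters as in Fig. (fig-levels_mp_mf): v = 0.2, g = -1, N = 20
N, v, g = 20, 0.2, -1.0
gbar = g / N
for eps in np.linspace(-1.0, 1.0, 5):
    E = np.linalg.eigvalsh(H_matrix(eps, v, gbar, N))
    print(f"eps = {eps:+.2f}   lowest levels: {np.round(E[:3], 4)}")
\end{verbatim}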
Thus we are not interested in the details of the time evolution. We just need a few elements of the $S$-matrix, which is defined by \begin{equation} \label{eqn-S-matr-def} \langle k | \psi(t=+\infty) \rangle = \sum_\ell S_{k \ell} \langle \ell | \psi(t=-\infty) \rangle. \end{equation} With this definition and $\langle \ell | \hat n_1 | k \rangle = k \delta_{\ell,k}$, the Landau-Zener transition probability ({ \rm e }f{eqn-plz-mp-def}) is reduced to \begin{equation} P_{\rm LZ}^{\rm mp} = \frac{1}{N} \sum_{k=0}^{N} k |S_{k,N}|^2 \, , \end{equation} so that only the squared modulus of the $S$-matrix elements $|S_{k,N}|^2$ are of importance. The S-matrix elements $|S_{N,k}|^2$ are now evaluated in a modified independent crossings approximation (ICA, see appendix for details). One assumes that the system undergoes a series of single, independent transitions between just two levels. The probabilities of a diabatic resp. adiabatic transition at a single anti-crossing are given $p_{k,N} = \exp(-\pi w_{k,\ell}^2 / |b_{k,l}|)$ resp. $q_{k,\ell} = 1- p_{k,\ell}$ according to the Landau-Zener formula ({ \rm e }f{eqn-plz-lin}). Here, $w_{k,l}$ denotes the level spacing at the anti-crossing and $b_{k,\ell}$ is the difference of the slopes of the two diabatic levels. The relevant S-matrix elements are given by \begin{equation} |S_{k,N}|^2 = (1-p_{k,N}) \prod_{\ell = 0}^{k-1} p_{\ell,N} \, , \end{equation} with the definition $q_{NN} = 1 \Leftrightarrow p_{NN} = 0$. The calculation of the S-matrix elements by the ICA is illustrated in Fig.~{ \rm e }f{fig-levels_N=3} for the case $N=3$. \begin{equation}gin{figure}[t] \centering \includegraphics[width=7cm, angle=0]{plz_avar_res2} \caption{\label{fig-plz-ica-avar} \label{fig-plz-avar2} Landau-Zener tunneling probability in dependence of the parameter velocity $\alpha$ for $v=0.2$, $N=100$ particles and different values of the interaction constant $g$. Numerical data (mean-field $+$ and many-particle theory $\circ$) is compared with the ICA ({ \rm e }f{eqn-plz-ica}),(dashed line) and the resulting ICA-Landau-Zener formulae ({ \rm e }f{eqn-res-crit}) resp. ({ \rm e }f{eqn-res-subcrit}),(solid line).} \end{figure} The ICA-Landau-Zener transition probability is then given by \begin{equation}gin{eqnarray} P_{\rm LZ}^{\rm ICA} &=& \frac{1}{N} \sum_{k=0}^{N} k (1-p_{k,N}) \prod_{\ell = 0}^{k-1} p_{\ell,N} \nonumber \\ &=& \frac{1}{N} \sum_{k=0}^{N-1} \prod_{\ell=0}^{k} p_{\ell,N} \, . \label{eqn-plz-ica} \end{eqnarray} Note that the $p_{\ell,N}$ depend on the distance between the levels $\ell$ and $N$ at the anti-crossing. Thus they have to be evaluated at different times $t_{\ell,N}$. However, the crossing time is easily calculated by evaluating $h_N(t_{\ell,N}) = h_\ell(t_{\ell,N})$, where $h_\ell(t)$ are diabatic levels as defined above. This yields \begin{equation} t_{\ell,N} = -\frac{\bar g \ell}{2 \alpha} \, . \end{equation} At all crossing times $t_{\ell,N}$, the level spacings $w_{\ell,N}$ are calculated by diagonalizing the Hamiltonian matrix ({ \rm e }f{eqn-ham-matrix}). As $H$ is tridiagonal, this can be done very efficiently. Furthermore, the difference of the slopes is simply given by $b_{\ell,N} = 2 \alpha (N-\ell)$. To test this approach we compare the ICA-Landau-Zener formula ({ \rm e }f{eqn-plz-ica}) with the Landau-Zener probability ({ \rm e }f{eqn-plz-mp-def}) calculated by numerically integrating the many-particle Schr\"odinger equation as well as the mean-field transition probability ({ \rm e }f{eqn-plz-mf-def}). 
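In practice the evaluation of Eq.~(\ref{eqn-plz-ica}) amounts to a few lines of code. The Python sketch below re-uses the helper \verb|H_matrix| from the previous sketch; as a simple prescription -- an implementation choice of this sketch, not a statement of the analysis above -- the spacing $w_{\ell,N}$ is taken to be the gap between the two adiabatic eigenvalues closest to the common diabatic energy $h_N(t_{\ell,N}) = h_\ell(t_{\ell,N})$.
\begin{verbatim}
import numpy as np

def p_ell(ell, N, v, g, alpha):
    """Single-crossing diabatic probability p_{ell,N} entering the (modified) ICA."""
    gbar = g / N
    t = -gbar * ell / (2.0 * alpha)          # crossing time t_{ell,N}
    eps = alpha * t
    E = np.linalg.eigvalsh(H_matrix(eps, v, gbar, N))
    hN = eps * N + 0.5 * gbar * (N**2 - N)   # common diabatic energy h_N = h_ell
    i0, i1 = np.argsort(np.abs(E - hN))[:2]  # assumed prescription for the relevant gap
    w = abs(E[i0] - E[i1])
    b = 2.0 * alpha * (N - ell)              # difference of the diabatic slopes
    return np.exp(-np.pi * w**2 / b)

def P_LZ_ICA(N, v, g, alpha):
    """Eq. (eqn-plz-ica): P = (1/N) sum_{k=0}^{N-1} prod_{ell=0}^{k} p_{ell,N}."""
    p = np.array([p_ell(ell, N, v, g, alpha) for ell in range(N)])
    return np.cumprod(p).sum() / N

print(P_LZ_ICA(N=100, v=0.2, g=-1.0, alpha=0.05))   # illustrative parameters
\end{verbatim}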
The results are shown in Fig.~{ \rm e }f{fig-plz-ica-avar} in dependence of $\alpha$ for $v = 0.2$, $N=100$ and three different values of $g$. One observes a good agreement between the Landau-Zener formula ({ \rm e }f{eqn-plz-ica}),(dashed line) and the numerical results for large $g$. For small values of $g$ the ICA ({ \rm e }f{eqn-plz-ica}) overestimates the transition probability. These issues will be further discussed in section { \rm e }f{sec-lz-formula}. \section{Limiting cases} The linear limit $g { \rm i }ghtarrow 0$ is analytically solvable in both cases. The many-particle system ({ \rm e }f{eqn-ham-matrix}) reduces to the so-called bow-tie model, whose S-matrix was calculated in \cite{Demk01}. The mean-field dynamics reduces to the ordinary two-state model of Landau, Zener, Majorana and St\"uckelberg \cite{Land32,Zene32,Majo32,Stue32}. Not only the transition probability but also the whole dynamics is known exactly in terms of Weber functions \cite{Zene32}. \begin{equation}gin{figure}[t] \centering \includegraphics[width=7cm, angle=0]{plz_ica_adlimit} \caption{\label{fig-plz-ica-adlim} Landau-Zener transition probability $P_{\rm LZ}^{\rm ICA}(\alpha)$ in the adiabatic limit $\alpha { \rm i }ghtarrow 0$ for $v = 0.2$, $g=-1$ and different numbers of particles: $N=10$ ($-\cdot-$), $N=20$ ($--$) and $N=30$ (---).} \end{figure} In the zero-coupling limit $v { \rm i }ghtarrow 0$ the Hamiltonians become diagonal and the evolution is fully diabatic. The Landau-Zener transition probability tends to one. Most interesting is the adiabatic limit $\alpha { \rm i }ghtarrow 0$. As no subdiagonal element of the Hamiltonian matrix ({ \rm e }f{eqn-ham-matrix}) vanishes, all eigenvalues must be distinct (see, e.g. \cite{Wilk65}). They may become pathologically close, but they cannot be degenerate. This is in fact the case: The splitting of the lowest levels at the anti-crossings becomes really small for increasing $|\bar g|$. Thus all $w_{l,N}$ are non-zero and in the extreme adiabatic limit $\alpha { \rm i }ghtarrow 0$ the Landau-Zener probabilities $p_{\ell,N}$ must vanish. This seems to contradict the mean-field results (cf. Fig.~{ \rm e }f{fig-plz-ica-avar}), which predicts a non-zero Zener tunneling probability even in the adiabatic limit if $|g| > g_c$. However, the parameter regime, where the ICA predicts a vanishing Zener tunneling probability in contrast to the mean-field results, decreases rapidly with an increasing number of particles $N$. Figure { \rm e }f{fig-plz-ica-adlim} shows the Landau-Zener probability $P_{\rm LZ}^{\rm ICA}(\alpha)$ for very small $\alpha$, calculated within the ICA for different $N$, with $g = \bar g N = -1$ fixed. The truly adiabatic region, where $P_{\rm LZ}^{\rm ICA}(\alpha) \approx 0$ is negligibly small already for these quite modest numbers of $N$. The mean-field theory is valid for a BEC consisting of a {\it macroscopic} number of atoms. In order to compare to the mean-field results we thus have to consider the limit of a large number of particles, $N { \rm i }ghtarrow \infty$ with $g = \bar g N$ fixed. In this macroscopic limit, the contradiction vanishes. Furthermore, this limit will prove itself as extremely convenient for the evaluation of Eq.~({ \rm e }f{eqn-plz-ica}), since all sums can be replaced by integrals which can be solved explicitly (cf. section { \rm e }f{sec-lz-formula}). 
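The statement that the adiabatic levels may become pathologically close but never degenerate can be checked directly. Re-using \verb|H_matrix| from the sketch in Sec.~\ref{sec-mp-ica}, the following fragment (illustrative parameters) records the minimal spacing of adjacent levels over a sweep of $\epsilon$; it is strictly positive but is expected to shrink rapidly with $N$ at fixed $g=\bar g N$.
\begin{verbatim}
import numpy as np

v, g = 0.2, -1.0
for N in (10, 20, 30):
    gaps = []
    for eps in np.linspace(-0.5, 1.0, 301):        # covers the anti-crossing region
        E = np.linalg.eigvalsh(H_matrix(eps, v, g / N, N))
        gaps.append(np.diff(E).min())
    print(N, min(gaps))                            # positive, but rapidly decreasing
\end{verbatim}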
\section{The many-particle spectrum} \label{sec-spectrum} The only missing step towards an explicit Landau-Zener formula is the evaluation of the squared level spacings $w^2_{k,N}(t_{k,N})$. Thus one has to understand the spectrum of the Hamiltonian (\ref{eqn-mp-hamiltonian}). We start with a discussion of the spectrum for $\epsilon = 0$, which provides insight into the qualitative features that will guide us in the following. To keep the calculations simple, we introduce the operators \begin{eqnarray} J_x &=& \frac{1}{2} \left(a_2^\dagger a_2 - a_1^\dagger a_1 \right) \nonumber \\ J_y &=& \frac{{ \rm i }}{2} \left(a_2^\dagger a_1 - a_1^\dagger a_2 \right) \nonumber \\ J_z &=& \frac{1}{2} \left(a_1^\dagger a_2 + a_2^\dagger a_1 \right), \end{eqnarray} which form an angular momentum algebra with quantum number $j = N/2$ \cite{Milb97,Vard01b,Angl01}. The Hamiltonian (\ref{eqn-mp-hamiltonian}) then can be rewritten as \begin{equation} H = 2v J_z + \frac{g}{N} J_x^2 \end{equation} up to a constant term. \begin{figure}[t] \centering \includegraphics[width=7cm, angle=0]{spec_e0_N50} \caption{\label{fig-spec-e0} Spectrum of the many-particle Hamiltonian (\ref{eqn-mp-hamiltonian}) for $\epsilon = 0$, $v=0.2$, $N=50$ particles and $g=-0.1$ and $g=-2$, respectively.} \end{figure} In the subcritical case $|g| < 2|v|$, the interaction term can be treated as a small perturbation. The unperturbed eigenstates are the $J_z$-eigenstates $|j,m_z\rangle$ with $m_z = -j,-j+1,\ldots,j$. In second order this yields the levels \begin{equation} E_{m_z} = 2vm_z \left[ 1 - \frac{g}{2v} \frac{m_z}{2N} - \left(\frac{g}{2v}\right)^2 \frac{m_z^2}{4N^2} + \mathcal{O}(g^3) \right] \label{eqn-spec-pert1} \end{equation} up to a constant. This spectrum is illustrated in Fig.~\ref{fig-spec-e0} for $g=-0.1$, $v=0.2$ and $N=50$ particles. The eigenenergies are nearly equidistant, with a slight increase of the level spacing for higher energies. For $|g| > 2|v|$ and low energies, the interaction term $g J_x^2/N$ dominates the Hamiltonian. The eigenstates with quantum numbers $|j,\pm m_x\rangle$ are doubly degenerate with eigenenergy $E_{m_x} = g m_x^2/N$. The perturbation $2 v J_z$ removes this degeneracy only in the $2|m_x|$-th order. Thus the low energy eigenstates (corresponding to the high $|m_x|$ states) appear in nearly degenerate pairs. However this approach fails if the energy scale of the perturbation $2 v J_z$ becomes comparable to the unperturbed eigenenergy. Estimating the energy scale of the perturbation as $E_{\rm max}/2 = 2 v j/2 $, perturbation theory fails for $|g| m_x^2/N \apprle vN/2$. Instead, Bogoliubov theory provides the appropriate description for the high energy part of the spectrum. We are dealing with an attractive interaction $g<0$, so that the highest state in the mean-field approximation is the state with equal population in the two modes. So the standard Bogoliubov approach is valid for the highest state instead of the ground state. One finds that the high energy part of the spectrum is given by $E_n = E_N - \omega (N-n)$ with the Bogoliubov frequency \cite{Pita03} \begin{equation} \omega = [(2v)^2 -2vg]^{1/2}. \label{eqn-Bog-freq} \end{equation} To clarify this issue, the spectrum is plotted in Fig.~\ref{fig-spec-e0} for $g=-2$, $v=0.2$ and $N=50$ particles. One clearly sees the nearly degenerate pairs of eigenvalues for low energies and the approximately equal spacing of the high-energy eigenvalues.
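These qualitative features are easily reproduced numerically. The sketch below works in the $J_x$ eigenbasis $|j,m_x\rangle$, in which $J_x$ is diagonal and $J_z$ may be represented by a real symmetric tridiagonal matrix (a representation choice of this sketch that does not affect the spectrum), and diagonalizes $H = 2vJ_z + gJ_x^2/N$ at $\epsilon=0$ for parameters as in Fig.~\ref{fig-spec-e0}.
\begin{verbatim}
import numpy as np

N, v, g = 50, 0.2, -2.0          # illustrative parameters (as in fig-spec-e0 with g = -2)
j = N / 2.0
m = np.arange(-j, j + 1)
Jx = np.diag(m)
c = 0.5 * np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1))   # |<m_x+1| J_z |m_x>|
Jz = np.diag(c, 1) + np.diag(c, -1)

H = 2 * v * Jz + (g / N) * Jx @ Jx
E = np.sort(np.linalg.eigvalsh(H))
print("lowest levels :", np.round(E[:6], 6))       # nearly degenerate pairs for |g| > 2v
print("top spacings  :", np.round(np.diff(E)[-3:], 4))
print("Bogoliubov w  :", round(np.sqrt((2 * v)**2 - 2 * v * g), 4))
\end{verbatim}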
The distance of the two highest levels is given by the Bogoliubov frequency ({ \rm e }f{eqn-Bog-freq}). Now we come back to the squared level splittings $w_{k,N}^2(t_{k,N})$, beginning with the supercritical regime $|g| > 2v$. Figure { \rm e }f{fig-w2-v=0.2} shows an example of the squared level splitting for $v=0.2$, $N = 100$ particles and $g=-0.1$ resp. $g=-1$. Later, we consider the macroscopic limit $N { \rm i }ghtarrow \infty, \; \bar g { \rm i }ghtarrow 0$ with $g = \bar g N$ fixed. For this issue we plot the squared level splittings versus the rescaled index $x := \ell/N \in [0,1]$. With increasing $N$, the curve plotted in Fig.~{ \rm e }f{fig-w2-v=0.2} remains {\it the same}, only the actual points move closer together. Thus one obtains a continuous function $w^2(x)$ in the limit $N { \rm i }ghtarrow \infty$. As argued above for $\epsilon = 0$, the lower levels appear in approximately degenerate pairs. By the same arguments one concludes that this is also true for for the first level crossings. Thus, $w^2_{\ell,N}$ is effectively zero for $\ell < \ell_c$ resp. $x < x_c$. The critical index $x_c$ can be estimated as described above for $\epsilon = 0$. It is found that this estimate gives the correct results up to a numerical factor $a$ of order 1. Thus we conclude that \begin{equation} x_c \approx 1 - a \sqrt{2v/|g|}. \label{eqn-xc} \end{equation} A very good agreement of this formula to the numerical results was found for $a = 1.14$. \begin{equation}gin{figure}[t] \centering \includegraphics[width=7cm, angle=0]{w2_v=02_N=50b} \caption{\label{fig-w2-v=0.2} Squared level splitting $w^2(x)$ in dependence of the scaled index $x = \ell/N$ for $v =0.2$ and $g=-0.1$ resp. $g=-1$. Numerical results ($\circ$) are compared to the approximate formulae ({ \rm e }f{eqn-w2-supercrit}) resp. ({ \rm e }f{eqn-w2-subcrit}), (solid lines).} \end{figure} For $x > x_c$ the squared splittings increase approximately linear. In the high energy limit corresponding to $x { \rm i }ghtarrow 1$, the level splitting is given by the Bogoliubov frequency introduced above. In conclusion, the squared level spacing can be approximated by \begin{equation} w^2(x) \approx \omega^2 \frac{x-x_c}{1-x_c} H(x-x_c), \label{eqn-w2-supercrit} \end{equation} where $H(x-x_c)$ denotes Heaviside's step function. In the subcritical regime $|g| < 2v$, one can use the results from perturbation theory described above (cf. Eq.~({ \rm e }f{eqn-spec-pert1})). At time $t_{\ell,N} = -\bar g \ell/2\alpha$ one must evaluate the level splitting $E_{\ell-j+1} - E_{\ell-j}$ (note that the levels are labeled by $m_z = -j,-j+1, \ldots, j$ with $j = N/2$). Again we consider the limit $N { \rm i }ghtarrow \infty$ with $g = \bar g N$ fixed. After a little algebra one finds that the relevant level splitting is in linear order given by \begin{equation}gin{eqnarray} w(x) &=& \frac{16v^2+4g - 3g^2/4}{8v} + \left( \frac{3 g^2}{8v} - g { \rm i }ght) x \nonumber \\ &=:& w_0 + w_1 x \label{eqn-w2-subcrit} \end{eqnarray} in terms of the scaled index $x = \ell/N$. The approximate results for the squared level splitting $w^2(x)$ for $|g| > g_c$, Eq.~({ \rm e }f{eqn-w2-supercrit}), and for $|g| < g_c$, Eq.~({ \rm e }f{eqn-w2-subcrit}), are compared with the numerical results for $N=100$ particles in Fig.~{ \rm e }f{fig-w2-v=0.2}. One observes a good agreement. 
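The quality of the approximation (\ref{eqn-w2-supercrit}) can be checked along the lines of Fig.~\ref{fig-w2-v=0.2}. The fragment below re-uses \verb|H_matrix| and the same (assumed) gap prescription as in the ICA sketch of Sec.~\ref{sec-mp-ica} and prints $w^2(x)$ next to the right-hand side of Eq.~(\ref{eqn-w2-supercrit}); all parameter values are illustrative.
\begin{verbatim}
import numpy as np

N, v, g, alpha, a = 100, 0.2, -1.0, 1.0, 1.14
gbar = g / N
omega2 = (2 * v)**2 - 2 * v * g
xc = 1.0 - a * np.sqrt(2 * v / abs(g))
for ell in range(0, N, 10):
    eps = -gbar * ell / 2.0                        # eps(t_{ell,N}) = alpha * t_{ell,N}
    E = np.linalg.eigvalsh(H_matrix(eps, v, gbar, N))
    hN = eps * N + 0.5 * gbar * (N**2 - N)
    i0, i1 = np.argsort(np.abs(E - hN))[:2]
    w2 = (E[i0] - E[i1])**2
    x = ell / N
    approx = omega2 * (x - xc) / (1 - xc) if x > xc else 0.0
    print("x = %.2f   w^2 = %.4f   approximation: %.4f" % (x, w2, approx))
\end{verbatim}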
\section{Towards an explicit Landau-Zener formula} \label{sec-lz-formula} Using the formulae for the squared level splitting derived in the previous section, the ICA-Landau-Zener transition probability (\ref{eqn-plz-ica}) can now be evaluated explicitly. In the spirit of the macroscopic limit $N \rightarrow \infty$, the sums are replaced by integrals according to \begin{equation} \frac{1}{N} \sum_{\ell = 0}^{k} \rightarrow \int_0^{k/N} dx \, . \end{equation} The difference of the slopes $b_{\ell,N} = 2 \alpha (N-\ell)$, which enters the formula, is also rewritten in terms of the rescaled index $x = \ell/N$: \begin{equation} b_{\ell,N} \rightarrow 2 \alpha N (1-x) =: N \bar b(x). \label{eqn-barb} \end{equation} Thus one finds \begin{equation} P_{LZ} \approx \int_0^1 \exp\left[ -\pi \int_0^y \frac{w^2(x)}{\bar b(x)} dx \right] dy. \label{eqn-plz-intapp} \end{equation} In the supercritical regime $|g| > g_c$ we start by evaluating the integral over $x$ in Eq.~(\ref{eqn-plz-intapp}). Substituting $w^2(x)$ and $\bar b(x)$ from Eqs.~(\ref{eqn-w2-supercrit}) and (\ref{eqn-barb}) and carrying out the integral yields \begin{equation} \int_0^y \frac{w^2(x)}{\bar b (x)} dx = \frac{-\omega^2}{2 \alpha} \left[ \frac{y-x_c}{1-x_c} + \ln\left(\frac{1-y}{1-x_c}\right) \right] \end{equation} for $y > x_c$ and zero otherwise. The Landau-Zener transition probability (\ref{eqn-plz-intapp}) is then given by \begin{eqnarray} P_{\rm LZ} &\approx& x_c + \int_{x_c}^1 \left( \frac{1-y}{1-x_c} \right)^{ \frac{\pi \omega^2}{2 \alpha}} \exp \left[ \frac{\pi \omega^2}{2 \alpha} \frac{y-x_c}{1-x_c} \right] dy \nonumber \\ &=& x_c + \frac{(1-x_c) \, { \rm e }^u}{u^{u+1}} \, \gamma(u+1,u) \label{eqn-res-crit} \end{eqnarray} with the abbreviation $u = \pi \omega^2 / 2\alpha$ and $x_c$ defined in Eq.~(\ref{eqn-xc}). Here, $\gamma$ denotes the lower incomplete gamma function \cite{Abra72}. In the subcritical regime $|g| < g_c$, one finds by substituting Eqs.~(\ref{eqn-w2-subcrit}) and (\ref{eqn-barb}) into (\ref{eqn-plz-intapp}) that the Landau-Zener transition probability is given by \begin{eqnarray} && P_{LZ} \approx \int_0^1 \left( 1-y \right)^{\pi w_0 (w_0+2 w_1)/2\alpha} \exp\left[\pi w_0 w_1 y/\alpha \right] dy \nonumber \\ && \; = \frac{{ \rm e }^{c_1}}{c_1^{c_0+1}} \, \gamma \left(c_0+1, c_1\right) \label{eqn-res-subcrit} \end{eqnarray} with the abbreviations $c_0 = \pi (w_0^2 +2 w_0 w_1) /2 \alpha$ and $c_1 = \pi w_0 w_1/\alpha$. To keep the calculations feasible, we kept only terms linear in $x$ resp.\ $y$ in the exponent, consistent with Eq.~(\ref{eqn-res-crit}). To test the validity of our approach we compare the ICA-Landau-Zener formulae (\ref{eqn-res-crit}) and (\ref{eqn-res-subcrit}) to numerical results obtained by integrating the Schr\"odinger equation for the mean-field Hamiltonian (\ref{eqn-ham-nonlin}) as well as the many-particle Hamiltonian (\ref{eqn-mp-hamiltonian}). The Landau-Zener tunneling probability is plotted in Fig.~\ref{fig-plz-gvar2} in dependence of the interaction constant $g$ for $\alpha = 0.01$, and in Fig.~\ref{fig-plz-avar2} in dependence of the velocity parameter $\alpha$ for different values of $g$. One observes a good agreement of the ICA-Landau-Zener formula with the numerical results in the critical regime $|g| > g_c$. Especially the increase of the tunneling probability with increasing $|g|$ for small $\alpha$ is well described by our model.
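Equation (\ref{eqn-res-crit}) is straightforward to evaluate numerically; since $u$ becomes large for small $\alpha$, it is convenient to compute the prefactor in log space. The sketch below (illustrative parameters; in SciPy, \verb|gammainc| denotes the regularized lower incomplete gamma function) also cross-checks the closed form against the $y$-integral from which it was derived.
\begin{verbatim}
import numpy as np
from scipy.special import gammainc, gammaln
from scipy.integrate import quad

v, g, alpha, a = 0.2, -1.0, 0.05, 1.14
omega2 = (2 * v)**2 - 2 * v * g
xc = 1.0 - a * np.sqrt(2 * v / abs(g))
u = np.pi * omega2 / (2.0 * alpha)

# gamma(u+1, u) = gammainc(u+1, u) * Gamma(u+1); assemble e^u gamma(u+1,u)/u^(u+1) in logs
log_term = u + np.log(gammainc(u + 1, u)) + gammaln(u + 1) - (u + 1) * np.log(u)
P_formula = xc + (1.0 - xc) * np.exp(log_term)

def integrand(y):                      # integrand of Eq. (eqn-plz-intapp), supercritical case
    if y <= xc:
        return 1.0
    s = (y - xc) / (1.0 - xc)
    return (1.0 - s)**u * np.exp(u * s)

P_direct = quad(integrand, 0.0, 1.0, points=[xc], limit=200)[0]
print(P_formula, P_direct)             # the two values agree
\end{verbatim}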
This problem could not be solved with previous approaches \cite{Zoba00,Liu02}. The approximation gets worse for larger values of $\alpha$ because the assumption that the Zener transitions are well separated becomes doubtful for such a large parameter velocity. The ICA thus underestimates the tunneling probability. In the subcritical case $|g| < g_c$, the proposed ICA-Landau-Zener formula does not work as well. In fact the tunneling probability is overestimated for small $\alpha$ because the ICA itself is not a very good approximation in this case. The adiabatic levels do not show well separated avoided crossings, instead the levels splittings are nearly constant over a long interval of the parameter $\epsilon$. For larger values of $\alpha$ one faces the same problems as in the supercritical case and the tunneling probability is underestimated. Another ansatz, using e.g. perturbation theory with respect to the solution of the noninteracting problem \cite{Angl03} should be better suited to this problem. Note however, that the deviations are mainly due to the ICA itself and to the approximation of $w^2(x)$ made in this section. \begin{equation}gin{figure}[t] \centering \includegraphics[width=7cm, angle=0]{plz_gvar_res_a=001} \caption{\label{fig-plz-gvar2} Landau-Zener tunneling probability in dependence of the interaction constant $g$ for a parameter velocity $\alpha = 0.01$. Numerical data (mean-field $+$ and many-particle theory $\circ$) is compared to the ICA-Landau-Zener formula ({ \rm e }f{eqn-res-crit}),(solid line) for $v=0.2$.} \end{figure} \section{Conclusion and Outlook} In conclusion, we have derived a Landau-Zener formula for an interacting Bose-Einstein condensate from first principles. To this end we considered the original two-mode many-particle Landau-Zener scenario. It was shown that the resulting Landau-Zener formula agrees well with the numerical results calculated for the many-particle problem as well as within the mean-field approximation. In the future, it would be of interest to relate our calculations to the respective problem in the Heisenberg pictures. Here, complex eigenfrequencies may occur for the dynamics of the creation/annihilation operators, leading to spontaneous production of quasi-particles and hence a dynamical instability. For the non-interacting case, this problem has been solved analytically \cite{Angl03}. Another issue is the discussion of nonlinear Landau-Zener problems for more than two levels. First results for three level system were reported only recently \cite{05level3}. \section*{Appendix: The independent crossings approximation} Let us first briefly recall the dynamics of a two-level Landau-Zener system described by the Hamiltonian \begin{equation} \label{eqn-lin2lev} H_0(t)=\left(\begin{equation}gin{matrix} \begin{equation}ta_1 t+b_1 & v\\ v & \begin{equation}ta_2 t+b_2 \end{matrix}{ \rm i }ght). \end{equation} The diabatic and adiabatic energy curves are plotted in Fig.~{ \rm e }f{fig-P_LZlin}. The S-Matrix in the sense of Eq.~{ \rm e }f{eqn-S-matr-def} is given by \begin{equation} S=\left(\begin{equation}gin{matrix} p & q\\ q & p \end{matrix}{ \rm i }ght) \end{equation} with $p=\exp{\left(-\pi v^2/\vert \begin{equation}ta_1-\begin{equation}ta_2\vert { \rm i }ght)}$ and $q=\sqrt{1-p^2}$. 
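The two-level scattering matrix can be verified by direct numerical propagation. The sketch below (slopes, coupling and integration window are illustrative choices; the offsets are set to $b_1=b_2=0$) integrates the linear model (\ref{eqn-lin2lev}) and compares the final diabatic population with $p^2=\exp\left(-2\pi v^2/\vert\beta_1-\beta_2\vert\right)$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

beta1, beta2, v, T = 0.3, -0.2, 0.15, 200.0        # assumed values

def rhs(t, y):
    c = y[:2] + 1j * y[2:]
    H = np.array([[beta1 * t, v], [v, beta2 * t]])
    dc = -1j * (H @ c)
    return np.concatenate([dc.real, dc.imag])

sol = solve_ivp(rhs, [-T, T], np.array([1.0, 0.0, 0.0, 0.0]), rtol=1e-9, atol=1e-11)
c_end = sol.y[:2, -1] + 1j * sol.y[2:, -1]
print("numerical :", abs(c_end[0])**2)
print("p^2       :", np.exp(-2 * np.pi * v**2 / abs(beta1 - beta2)))
\end{verbatim}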
The probability of a diabatic passage is therefore given by the Landau-Zener-formula $P_{\rm LZ}=p^2= \exp{\left(-\frac{2\pi v^2}{\vert \begin{equation}ta_1-\begin{equation}ta_2 \vert}{ \rm i }ght)}$, and the adiabatic transition probability by $1-P_{\rm LZ}=q^2$. They depend only on the relative slope of the diabatic levels $\vert \begin{equation}ta_1-\begin{equation}ta_2\vert$ and the coupling $v$, which is equivalent to half of the gap between the adiabatic energy-levels at the avoided crossing. \begin{equation}gin{figure}[thb] \centering \includegraphics[width=7cm, angle=0]{P_LZlin2} \caption{\label{fig-P_LZlin} Diabatic (dash-dotted line) and adiabatic (solid line) energy levels of the two-level Landau-Zener model ({ \rm e }f{eqn-lin2lev}).} \end{figure} The simplicity of the solution of the two-level system and the observation that the transitions between two adiabatic levels in a multilevel Landau-Zener system takes place only in a very narrow region around the crossing of the two corresponding diabatic levels leads to a simple approximation. If all crossings are well separated they can be considered as independent of each other and each of them is described by the two-level-Landau-Zener model where the couplings between the relevant diabatic levels are given by the nondiagonal terms of the Hamiltonian. This approach is called the "independent crossing approximation" (ICA) in the literature. It is of great importance for the study of multilevel Landau-Zener dynamics because of a surprising feature: The ICA turns out to give the {\it exact} results for all known exactly solvable multi-level Landau-Zener scenarios \cite{Demk68,Demk01}. Furthermore it has been shown that the ICA always gives the correct results for the diagonal $S$-matrix elements with minimal and maximal slope \cite{Sini04,Shyt04}. Of course there are also examples where the ICA fails, as for example for the simple three level Hamiltonian \begin{equation} H(t) = \left(\begin{equation}gin{matrix} \alpha t+a & v & w\\ v & 0& 0\\ w & 0 & -\alpha t+ a \end{matrix}{ \rm i }ght). \label{eqn-ica-3niv} \end{equation} The adiabatic and diabatic levels are plotted in figure { \rm e }f{fig-3niv}. The diabatic transition probability for the third level $S_{33}$ is exactly given by the ICA. But if we look at the S-Matrix element $S_{32}$ we find that the ICA predicts it to be zero, because the coupling matrix element vanishes, $\langle 2\vert H\vert 3\rangle=0$, independent of $\alpha$ and $a$, which isn't true. The second and the third diabatic levels do not couple directly, but for not too large values of $a$ the indirect coupling via the second diabatic level can't be neglected. This coupling manifests itself as an avoided crossing between the two adiabatic levels, which turns into a real crossing only in the limits $a\to\infty$ and $a\to 0$. For finite values of $a$ the transition probability between the third and the second diabatic levels is small but nonzero. \begin{equation}gin{figure}[thb] \centering \includegraphics[width=7cm, angle=0]{3niv1} \caption{\label{fig-3niv} Diabatic (dash-dotted line) and adiabatic (solid line) energy levels of the three-level Landau-Zener model ({ \rm e }f{eqn-ica-3niv}) for $\alpha = 0.2$, $a=0.5$, $v=0.2$ and $w=0.3$.} \end{figure} To get a better approximation one should recall the two level system, where the coupling between two diabatic levels is equivalent to half of the level-splitting of the corresponding adiabatic levels. 
Therefore one can use a modified ICA where the couplings are not given by the nondiagonal elements of the Hamiltonian but half of the level splitting between the relevant adiabatic levels. This approximation doesn't inherit the benefit of providing the exact results in the special cases where the original ICA did, but provides a good approximation even in the cases where the ICA fails. Therefore it is better suited for our purposes. The performance of the approximation is limited by the fact that the single avoided crossings must be well separated so that the transition regimes do not overlap. In the present case this is improved with increasing nonlinearity. Note that to simplify matters this modified ICA is denoted as ICA throughout the paper. \begin{equation}gin{acknowledgments} Support from the Studienstiftung des deutschen Volkes and the Deutsche Forschungsgemeinschaft via the Graduiertenkolleg ''Nichtlineare Optik und Ultrakurzzeitphysik'' is gratefully acknowledged. \end{acknowledgments} \begin{equation}gin{thebibliography}{10} \bibitem{Wu00} Biao Wu and Qian Niu, Phys. Rev. A {\bf 61}, 023402 (2000). \bibitem{Wu03} Biao Wu and Qian Niu, New J. Phys. {\bf 5}, 104 (2003). \bibitem{Liu03} J.~Liu, B.~Wu, and Q.~Niu, Phys. Rev. Lett. {\bf 90}, 170404 (2003). \bibitem{Land32} L.~D. Landau, Phys. Z. Sowjet. {\bf 1}, 88 (1932). \bibitem{Zene32} C.~Zener, Proc. Roy. Soc. Lond. A {\bf 137}, 696 (1932). \bibitem{Majo32} E.~Majorana, Nuovo Cim. {\bf 9}, 43 (1932). \bibitem{Stue32} E.~C.~G. St{\"u}ckelberg, Helvetica Physica Acta {\bf 5}, 369 (1932). \bibitem{Esse95} B.~Esser and H.~Schanz, Z. Phys. B {\bf 96}, 553 (1995). \bibitem{Zoba00} O.~Zobay and B.~M. Garraway, Phys. Rev. A {\bf 61}, 033603 (2000). \bibitem{Liu02} J.~{Liu, D. Choi, B . Wu and Q. Niu}, Phys. Rev. A {\bf 66}, 023404 (2002). \bibitem{Demk01} Yu.{ N. Demkov and V. N. Ostrovsky}, J. Phys. B {\bf 34}, 2419 (2001). \bibitem{Wilk65} J.~H. Wilkinson, {\em The Algebraic Eigenvalue Problem} (Oxford University Press, Oxford, 1965). \bibitem{Milb97} G.~J. Milburn, J.~Corney, E.~M. Wright, and D.~F. Walls, Phys. Rev. A {\bf 55}, 4318 (1997). \bibitem{Vard01b} A.~Vardi and J.~R. Anglin, Phys. Rev. Lett. {\bf 86}, 568 (2001). \bibitem{Angl01} J.~R. Anglin and A.Vardi, Phys. Rev. A {\bf 64}, 013605 (2001). \bibitem{Pita03} L.~Pitaevskii and S.~Stringari, {\em Bose-Einstein Condensation} (Oxford University Press, Oxford, 2003). \bibitem{Abra72} M.~Abramowitz and I.~A. Stegun, {\em Handbook of Mathematical Functions} (Dover Publications, Inc., New York, 1972). \bibitem{Angl03} J.~R. Anglin, Phys. Rev. A {\bf 67}, 051601(R) (2003). \bibitem{05level3} E.~M. Graefe, H.~J. Korsch, and D.~Witthaut, Phys. Rev. A, in press (preprint: quant--ph/0507185) (2005). \bibitem{Demk68} Y.~N. Demkov and V.~I. Osherov, Sov. Phys. JETP {\bf 26}, 916 (1968). \bibitem{Sini04} N.~A. Sinitsyn, J. Phys. A {\bf 37}, 10691 (2004). \bibitem{Shyt04} A.~V. Shytov, Phys. Rev. A {\bf 70}, 052708 (2004). \end{thebibliography} \end{document}
\begin{document} \title{Randomness and differentiability in higher dimensions} \author[\mbox{Galicki and Turetsky}]{Alex Galicki and Daniel Turetsky} \address{A.~Galicki, Department of Computer Sciece, University of Auckland} \email{[email protected]} \address{D. Turetsky, Kurt G\"odel Research Center} \email{[email protected]} \begin{abstract} We present two theorems concerned with algorithmic randomness and differentiability of functions of several variables. Firstly, we prove an effective form of the Rademacher's Theorem: we show that computable randomness implies differentiability of computable Lipschitz functions of several variables. Secondly, we show that weak 2-randomness is equivalent to differentiability of computable a.e. differentiable functions of several variables. \end{abstract} \maketitle \section{Introduction} \subseteqsection{Introduction} The main subject of this paper lies at the interface of computable analysis (\cite{Weihrauch:00}) and algorithmic randomness (\cite{Nies:book}, \cite{Downey.Hirschfeldt:book}). Intuitively, a real number is random if it does not have any exceptional properties. This approach can be formalized via identifying exceptional properties with effective null sets. To different types of effective null sets correspond different notions of algorithmic randomness. One of the most fruitful areas of research concerned with interconnections between the two subjects is differentiability of effective functions. The main reason for this is that sufficiently well-behaved functions are almost everywhere differentiable. In this case the set of non-differentiability points of an effective function forms an effective null set and thus a test for algorithmic randomness. This makes it possible to characterize different randomness notions in terms of sets of points of differentiability of effective functions. Conversely, sets of points of differentiability for functions of particular classes can be characterised in terms of algorithmic randomness. The results of this kind are particularly compelling, since they show non-trivial connections between two seemingly distant areas of mathematics. In recent years, a number of results of that kind have been published (for example, see \cite{Brattka.Miller.ea:nd,Miyabe:12, Pathak.Rojas.ea:12, Freer.Kjos.ea:nd}). Most of them are concerned with functions of one variable. Relatively few results are known about effective functions of several variables. Our first result is concerned with Lipschitz functions, which are particularly well behaved and enjoy a lot of attentions from mathematicians since they appear naturally in various contexts. The following classical result is called Rademacher's Theorem (see Section 3.1 in \cite{Evans.Gariepy:92}), it states that Lipschitz functions are almost everywhere\ differentiable. \begin{theorem}[Rademacher, \cite{Rademacher:19}] Suppose $U$ is an open subset of $\mathbb{R}^n$ and \mbox{$f:U\to\mathbb{R}^m$} is a Lipschitz function. Then there exists a null set, such that $f$ is differentiable outside it. \end{theorem} \noindent We prove the following effective form of Rademacher's Theorem. \noindentewtheorem*{thm:rademacher}{Theorem \ref{t_rademacher_1}} \begin{thm:rademacher} Let $f:[0,1]^n \to\mathbb{R}$ be a computable Lipschitz function and let $z\in[0,1]^n $ be computably random. Then $f$ is differentiable at $z$. \end{thm:rademacher} The one dimensional variant of effective Rademacher's Theorem and its converse have been proven in \cite{Freer.Kjos.ea:nd}. 
\begin{theorem}[Theorem 4.2 in \cite{Freer.Kjos.ea:nd}]\langlebel{thm:lipshitz1} A real $z\in[0,1] $ is computably random $\iff$ each computable Lipschitz function $f:[0,1] \to\mathbb{R}$ is differentiable at $z$. \end{theorem} Theorem \ref{t_rademacher_1} generalizes the $\mathbb{R}ightarrow$ direction of the above result. The question whether the converse of the classical Rademacher's Theorem holds, that is whether every Lebesgue null-set is contained in a set of non-differentiability points of a Lipschitz function, has been answered very recently after several decades of work by classical analysts (see \cite{Alberti.Csornyei.Preiss:10} and \cite{Preiss.Speight:14}). The converse holds when $m\ge n$ and does not hold otherwise. To characterise differentiability sets of effective functions in terms of randomness in the usual way, those functions must be differentiable almost everywhere, for otherwise the sets of non-differentiability do not form null sets and cannot be interpreted as exceptional properties. This implies that the broadest possible class of functions in this context is the class of almost everywhere differentiable functions. For functions of one variable the following result is known: \begin{theorem}[Theorem 6.1 in \cite{Brattka.Miller.ea:nd} ]\langlebel{thm:w2r1} Let $z\in [0,1] .$ The following are equivalent: \begin{enumerate} \item $z$ is weakly 2-random, and \item all computable a.e.\ differentiable functions are differentiable at $z$. \end{enumerate} \end{theorem} \noindent Our final result is the the following generalization of that theorem. \noindentewtheorem*{thm:w2r}{Theorem \ref{w2r_theorem}} \begin{thm:w2r} Let $z\in[0,1] ^n$, then the following are equivalent: \begin{enumerate} \item $z\text{ is weakly $2$-random}$, \item all partial derivatives exist for all computable a.e.\ differentiable \mbox{$f:[0,1]^n \to\mathbb{R}$}, \item each computable a.e.\ differentiable function is differentiable at $z$. \end{enumerate} \end{thm:w2r} \subseteqsection{Structure of the paper} \noindent In the rest of this section we present relevant definitions and facts and introduce some useful notation. In Section 2 we prove an effective version of Rademacher's Theorem. We start the section by recalling some important facts about Lipschitz functions and then proceed with the proof of the main result. The section ends with a discussion of a relatively recent classical result of Maleva and Dor\'e \cite{Dore.Maleva:11} and some of its implications. In Section 3 we demonstrate that weak 2-randomness characterises differentiability points of computable a.e.\ differentiable functions. The last section discusses some open problems related to this article. \subseteqsection{Preliminaries} \subseteqsubsection{Measure} We work exclusively with the Lebesgue measure on $[0,1]^n $. Slightly abusing notation, we always denote it by $\langlembda$. \subseteqsubsection{Derivatives in higher dimensions} Let $f:[0,1]^n \to\mathbb{R}$ be a function and let $x\in[0,1]^n $. We say $f$ is \emph{differentiable} at $x$ if for some linear map $T$ the following holds $$\lim_{h\to 0}\mathcal{F}rac{f(x+h)-f(x)-T\cdot h}{\vectornorm h}=0.$$ Then, by definition, $f'(x)=T$. \noindent Let $\{e_i:1\le i\le n\}$ denote the standard basis for $\mathbb{R}R^n$. We denote \emph{partial derivatives} by $D_if(x)$, lower and upper partial derivatives by $\underline D_if(x)$ and $\overline D_if(x)$, respectively. Working with derivatives often means working with slopes. In our proofs we use the following notation. 
\noindent Fix coordinate~$i$. For $x \in [0,1]^n $, $h \in \mathbb{R}R$, define \[ \delta_f^i(x, h) = \mathcal{F}rac{f(x_1, \dots, x_i + h, \dots, x_n) - f(x_1, \dots, x_i, \dots, x_n)}h, \] \noindent and \[ \delta^{1..n}_f({x}, h) = \left[\begin{array}{ccc} \delta_f^1({x},h) & \dots & \delta_f^n({x},h) \end{array}\right]. \] \subseteqsubsection{Computable real functions} There are multiple ways of formalizing computability of real functions, most of which turned out to be equivalent. We will rely on the following definitions. A sequence $\seq q$ of elements of $\mathbb{R}^n$ is called a \emph{Cauchy name} if the coordinates of each $q_i$ are rational, and $\|q_k-q_n\|\le 2^{-n}$ for all $n,k$ with $k\ge n$. If $\lim_{n\to\infty} q_n=x$, then we say that $\seq q$ is a \emph{Cauchy name for} $x$. We say $x\in \mathbb{R}^n$ is \emph{computable} if there is a computable Cauchy name for $x$. \begin{definition} A function $f:[0,1]^n \to\mathbb{R}^m$ is \emph{computable} if: \begin{enumerate} \item $f(q)$ is computable (uniformly in $q$) where $q$ has all dyadic rational coordinates, and \item $f$ is \emph{effectively uniformly computable}, that is if there is a computable \mbox{$h:\mathbb{N}\to\mathbb{N}$} such that $\|x-y\|\le 2^{-h(i)}$ implies $\|f(x)-f(y)\|\le 2^{-i}$ for all $x,y\in[0,1]^n $ and all $i\in\mathbb{N}$. \end{enumerate} \end{definition} A more intuitive understanding of the above definition is that $f$\ is computable if there is an algorithm that, given a Cauchy name for $x$, computes a Cauchy name for $f(x)$. \subseteqsubsection{Algorithmic randomness}~ \noindent The most common method for defining a randomness notion is via effective null sets. The following two randomness notions are of direct interest to us and are defined in terms of avoidance of effective null sets. \begin{definition} Let $z\in[0,1]^n $. We say $z$ is \emph{weakly random} if there does not exist a $\arpi01$ null set that contains $z$. Similarly, we say $z$ is \emph{weakly 2-random} if there does not exist a $\arpi02$ null set that contains $z$. \end{definition} An alternative approach to formalizing randomness notions is via effective betting strategies. An infinite binary string can be thought of as random if no (effective) betting strategy can succeed by betting on bits of that string. Betting strategies are usually formalized as \emph{martingales} (see \cite{Nies:book}, Chapter 7). \begin{definition} We say a function $B:2^{<\omega}\to\mathbb{Q}_+$ is a \emph{martingale} if the following condition holds for all $\sigma\in2^{<\omega}:$ $$2B(\sigma)=B(\sigma0)+B(\sigma1).$$ \noindent $B(\sigma)$ can be interpreted as the value of capital after betting on bits of $\sigma$. We say $B$ \emph{succeeds} on $Z\in2^{\omega} $ if $\liminf_n B(Z{\upharpoonright}r n)=\infty$. \end{definition} \begin{definition}\langlebel{cr_definition} We say $Z\in2^{\omega} $ is \emph{computably random} if no computable martingale succeeds on $Z$. We say $z=(0.Z_1,\dots,0.Z_n)\in[0,1]^n $ is computably random if its binary expansion, that is $Z=Z_1\oplus\dots\oplus Z_n$, is computably random. Here $0.A$ denotes the real number whose binary expansion is $A\in2^{\omega} $. \end{definition} \noindent It is known that weak 2-randomness implies computable randomness and computable randomness implies weak randomness. \subseteqsubsection{Preservation of computable randomness} \begin{definition}(cf. 
7.1 in \cite{Rute:13}) We say that $\phi:[0,1]^n \to[0,1]^n $ is \emph{almost everywhere (a.e.)\ computable} if there exists a partial computable $F:\mathbb{N}^\omega\to\mathbb{N}^\omega$ and a $\arpi02$ subset $A\subseteqseteq[0,1]^n $ with $\lm A=1$ such that: \begin{enumerate} \item for all $x\in A$, given a Cauchy name of $x$, $F$ computes a Cauchy name for $\phi(x)$, and \item $x\in A$ iff for all $a,b$, which are Cauchy names for $x$, both $F(a)$ and $F(b)$ are Cauchy names for the same element. \end{enumerate} We say that $\phi:[0,1]^n \to[0,1]^n $ is an \emph{a.e.\ computable isomorphism} if there exists $\psi:[0,1]^n \to[0,1]^n $ such that $\phi\circ\psi=\text{id}$ and $\psi\circ\phi=\text{id}$ almost everywhere and both $\psi,\phi$ are measure preserving and a.e.\ computable. \end{definition} We are interested in the above notions for the following property of computable randomness proven by Rute. \begin{theorem}[Theorem 7.9 in \cite{Rute:13}]\langlebel{cr_preservation} Let $T$ be an a.e.\ computable isomorphism. Then for all $x\in[0,1]^n $, $x$ is computably random if and only if $T(x)$ is computably random. \end{theorem} \subseteqsubsection{Uniform relative computable randomness}~ \noindent\ Both the following definition and theorem are due to Miyabe and Rute (\cite{Miyabe.Rute:13}). \begin{definition} A total computable function $m:2^{\omega} \times2^{ < \omega}\to \mathbb{R}$ is a \emph{uniform computable martingale} if $m(Z,\cdot)$ is a martingale for every $Z\in2^{\omega} $. We say $A$ is \emph{computably random uniformly relative to }$B$ if there is no uniform computable martingale $m$ such that $m(B,\cdot)$ succeeds on $A$. \end{definition} \noindent Note that the above definition works for elements of $[0,1]^n $ as well. \begin{theorem}[Theorem 1.3 in \cite{Miyabe.Rute:13}]\langlebel{t_miyabe_rute1} $A\oplus B$ is computably random if and only if $A$ is computably random uniformly relative to $B$ and $B$ is computably random uniformly relative to $A$. \end{theorem} \section{Effective form of Rademacher's Theorem}\langlebel{lip_section} \noindent In this section we prove a theorem which can be seen as an effective version of Rademacher's. \begin{theorem}\langlebel{t_rademacher_1} Let $f:[0,1]^n \to\mathbb{R}$ be a computable Lipschitz function and let $z\in[0,1]^n $ be computably random. Then $f$ is differentiable at $z$. \end{theorem} \begin{remark} An immediate consequence of the above theorem is that computable randomness of $z\in[0,1]^n $ is sufficient for differentiability of computable Lipschitz functions form $[0,1]^n $ to $\mathbb{R}^m$ for any $n,m$. \end{remark} \noindent Lipschitz functions are particularly well-behaved and have a number of properties related to differentiability in general and to directional derivatives in particular. Some of those properties will be used by us in the proof of the above theorem and this is why we start this section by recalling some facts about Lipschitz functions and by establishing some useful notation before proceeding to the proof. \subseteqsection{Lipschitz functions}~ A function $f:\mathbb{R}^n\to\mathbb{R}^m$ is \emph{Lipschitz} if there exists $L\in\mathbb{R}^+$ such that \[\|f(x)-f(y) \|\le L\cdot\|x-y\|\text{ for all }x,y\in\mathbb{R}^n.\] \noindent The least such $L$ is called \emph{the Lipschitz constant} for $f$. We denote it by $\mathrm{\mathbf{Lip}} (f)$. ace{5pt} Let $K_n\subseteqset\mathbb{R}^n$ be defined as $K_n=\{(x_1,\dots,x_n)\in\mathbb{R}^n:x_i\ge 0 \text{ for all } i\le n\}$. 
We say $f:\mathbb{R}^n\to \mathbb{R}$ is \emph{$K_n$-increasing} if $f(x+k)\ge f(x)$ for all $k\in K_n$. $f$ is called \emph{$K_n$-monotone} if either $f$ or $-f$ is $K_n$-increasing. \begin{remark}\langlebel{decomposition_remark} \noindent Every Lipschitz function $f:\mathbb{R}^n\to\mathbb{R}$ is a sum of two $K_n$-monotone functions. To see this, let $ {m}=(\mathrm{\mathbf{Lip}} (f),\dots,\mathrm{\mathbf{Lip}} (f))\in\mathbb{R}^n$ and note that\\ $f=(f+\langlengle m, x\ranglengle)-\langlengle m, x\ranglengle$, and that both summands are $K_n$-monotone. \end{remark} \noindent \subseteqsection{Directional, G\^ateaux and Fr\'echet derivatives}~ \noindent In order to exploit some of the properties of Lipschitz functions, we need to present a more nuanced view of differentiability in higher dimensions. \noindent Let $f:\mathbb{R}^n\to\mathbb{R}$ be a function, we define the \emph{Dini-directional deri\-vatives} of $f$ at a point $x\in \mathbb{R}^n$ with respect to a direction $v\in\mathbb{R}^n$ as \begin{align*} \ddos fxv=\limsup_{t\downarrow 0}\mathcal{F}rac{f(x+tv)-f(x)}{t}\text{ and }\ddoi fxv=\liminf_{t\downarrow 0}\mathcal{F}rac{f(x+tv)-f(x)}{t}. \end{align*} \noindent When $\ddos fxv=\ddoi fxv$ is finite, we define \emph{one-sided directional derivative} by $$\ddo fxv=\lim_{t\downarrow 0}\mathcal{F}rac{f(x+tv)-f(x)}{t}.$$ \noindent The \emph{two-sided directional derivative} $\dd fxv$ is defined by $$\dd fxv=\lim_{t\to 0}\mathcal{F}rac{f(x+tv)-f(x)}{t}.$$ \noindent To work with directional slopes, we need the following notation. \noindent For $x \in [0,1]^n $, $v\in\mathbb{R}^n$, and $h \in \mathbb{R}R$, define \[ \delta_f^v(x, h) = \mathcal{F}rac{f(x+hv) - f(x)}h. \] ace{10pt} \noindent If all two-sided directional derivatives of $f$ at $x$ exist and the function $T$ given by $T(y)=\dd fxy$ is linear, then $f$ is said to be \emph{G\^ateaux-differentiable} at $x$. The linear map $T$ is called the \emph{G\^ateaux derivative} of $f$ at $x$. Furthermore, if $f$ is G\^ateaux-differentiable at $x$ and if $$\lim_{h\to 0}\mathcal{F}rac{f(x+h)-f(x)-T\cdot h}{\vectornorm h}=0,$$ then $f$ is said to be \emph{Fr\'echet differentiable} at $x$. \noindent Thus, Fr\'echet differentiability is equivalent to the usual differentiability. The following observation is crucial for the main proof and justifies presenting differentiability in this more elaborate way. \begin{remark}\langlebel{lip_remark} \noindent For Lipschitz functions on $\mathbb{R}^n$, G\^ateaux and Fr\'echet differentiability coincide (for example, see Observation 9.2.2 in \cite{Lindenstrauss.Preissr:12}). Furthermore, it is known (see \cite{Chabrillac.Crouzeix:87}) that for a $K_n$-monotone function $f$, both $\ddos fx\cdot$ and $\ddoi fx\cdot$ are continuous on the interior of $K_n\cup ~{-K_n}$. Thus, for a Lipschitz function $f$, both $\ddos fx\cdot$ and $\ddoi fx\cdot$ are continuous everywhere. The following property is a direct consequence of the above fact. Let $A$ be a dense subset of $\mathbb{R}^n$, let $f:[0,1]^n \to\mathbb{R}$ be a Lipschitz function and let $x\in[0,1]^n $, then: \begin{enumerate} \item[$(\star)$] if $v\mapsto \ddo fxv$ is defined and is linear on $A$, then $v\mapsto \ddo fxv$ is defined everywhere and is linear. \end{enumerate} \noindent This means that in order to show that a Lipschitz function $f$ is differentiable at~$x$, it is sufficient to show that $\dd fx\cdot$ is defined and linear on a dense subset of directions. 
\end{remark} \subseteqsection{Overview of the proof.}~ \noindent The proof consists of three distinct steps. (1) We show that all partial derivatives of $f$ at $z$ exist. Firstly, we prove an analogous result for $K_n-$monotone functions and then use the fact that every Lipschitz function is a sum of two $K_n-$monotone functions to prove the required result holds for Lipschitz functions. The result for $K_n-$monotone functions is a consequence of the following two facts: (i) a uniform relativization of the $\mathbb{R}ightarrow$ implication of Theorem \ref{t_nies1} and (ii) a form of Van Lambalgen's Theorem for computable randomness proven by Miyabe and Rute \cite{Miyabe.Rute:13}. (2) We use the above fact to show that all one-sided directional derivatives of $f$ at $z$ exist. Since, by Remark \ref{lip_remark}, we are only required to show this for a dense set of directions, we only consider computable directions $v$. Two observations play a crucial role in this step: (a) computable randomness is preserved by computable linear isometries, and (b) linear functions are Lipschitz. The above observations are used to define a computable Lipschitz function $g_v:[0,1]^n \to\mathbb{R}$ such that $D_1g_v(\hat z)=\dd fzv$ for some computably random $\hat z\in [0,1]^n $. By the result proven in the first step, $D_1g_v(\hat z)$ does exist. (3) Finally, we show that the function $T(u)=\ddo fzu$ is linear. Again, we consider only computable directions. We show that any point where directional derivative is not linear and the failure of linearity is witnessed by a computable direction, belongs to a $\arpi01$ null set. Since $z$ is computably random, this completes the proof. \noindent Showing the linearity of $\ddo fz\cdot$ is the final step of our proof, because for Lipschitz functions, G\^ ateaux differentiability implies (full)\ differentiability. \subseteqsection{Existence of partial derivatives} Firstly, we show that computable randomness is sufficient for all partial derivatives of computable $K_n-$monotone functions to exist. An analogous result for computable Lipschitz functions is a simple corollary of that. To achieve the required result, we combine a variation of Van Lambalgen's Theorem for computable randomness, proven by Miyabe and Rute (\cite{Miyabe.Rute:13}), with a variation of the $\mathbb{R}ightarrow$ implication of the following result. \begin{theorem}[Theorem 4.1 in \cite{Brattka.Miller.ea:nd}]\langlebel{t_nies1} A real $x$ is computably random $\iff$ $f'(x)$ exists for each computable nondecreasing function $f:[0,1] \to\mathbb{R}$. \end{theorem} \begin{lemma}\langlebel{uniform_relativization_monotone} Let $g:2^{\omega} \times [0,1] \to \mathbb{R}$ be a total computable function such that $g(X,\cdot)$ is monotone for all $X\in2^{\omega} $ and let $Z,Y\in2^{\omega} $. If $Z\oplus Y$ is computably random, then $g_Y'(z)$ exists, where $g_Y=g(Y,\cdot)$ and $z=0.Z$. \end{lemma} The proof of Lemma \ref{uniform_relativization_monotone} is a modification of the proof of Theorem 4 in \cite{nies:2014}. Theorem 4 in \cite{nies:2014} is a polynomial version of Theorem \ref{t_nies1}, but its proof is somewhat simpler and requires only a few modifications to yield the kind of uniform relativization needed for the proof of Lemma \ref{uniform_relativization_monotone}. We will describe the required changes without repeating the whole proof. \begin{proof} The proof is by contraposition. 
Let $g:2^{\omega} \times [0,1] \to \mathbb{R}$ be a total computable function such that $g(X,\cdot)$ is monotone for all $X\in2^{\omega} $. Let $z\in[0,1] $, let $Z$ be the binary expansion of $z$ and let $Y\in2^{\omega} $. Define $g_Y=g(Y,\cdot)$ and suppose $g_Y'(z)$ doesn't exist. We need to exhibit a uniform computable martingale $d$ such that $d(Y,\cdot)$ succeeds on $Z$. In the $\mathbb{R}ightarrow$ direction of the original proof, assuming $f'(x)$ does not exists (where $f:[0,1] \to\mathbb{R}$ is a polynomial time computable monotone function), Nies constructed a (polynomial time) computable martingale that succeeds on the binary expansion of $x$. The assumption that $f$ is polynomial time computable was used to show that the resulting martingale is polynomial time computable. If this assumption is relaxed so that $f$ is assumed to be computable, the resulting martingale ends up being computable, rather than polynomial time computable. We need to verify that a slightly modified proof works for demonstrating that there is a uniform computable martingale $d$ such that $d(Y,\cdot)$ succeeds on $Z$. \emph{Uniform relativization of the $\mathbb{R}ightarrow$ implication of Theorem 4 in \cite{nies:2014}.} Here we use the combined terminology from the original proof and the terminology required for our proof. For the $\mathbb{R}ightarrow$ direction of the proof of Theorem 4 in \cite{nies:2014}, Nies had to consider two cases: $\omegaidetilde D_2f(x)<\omegaidetilde D f(x)$ and $\utilde Df(x)<\utilde D_2 f(x)$. Nies constructed a pair of computable martingales, $L$ and $L'$, corresponding to the above mentioned cases, such that either $L$ succeeds on the binary expansion of $x$, or $L'$ succeeds on the binary expansion of $x-1/3.$ Both $L$ and $L'$ query the same martingale $M$ defined by $M(\sigma)=S_f([\sigma])$. Since $M:2^{\omega} \times2^{<\omega}\to\mathbb{R}$ defined by $$M(Y,\sigma)=S_{g(Y,\cdot)}([\sigma])$$ is a uniform computable martingale, it can be easily checked that constructions of $L$ and $L'$ can be naturally extended to define uniform computable martingales $\mathcal{L}$ and $\mathcal{L}'$ such that either \begin{enumerate} \item $\mathcal{L}(Y,\cdot)$ succeeds on $Z$ or \item $\mathcal{L}'(Y,\cdot)$ succeeds on the binary expansion of $z-1/3$ (without loss of gene\-rality we may assume that $z>1/3$). \end{enumerate} The first case implies that $Z$ is not computably random uniformly relative to $Y$ and thus $Z\oplus Y$ is not computably random. Note that $(x_1,x_2)\mapsto (x_1,x_2+1/3\mod 1)$ is an a.e.\ computable isomorphism. And since computable randomness is preserved by a.e.\ computable isomorphisms, the second case implies that $Z\oplus Y$ is not computably random. \end{proof} \begin{remark}The original proof relied on a different preservation property of computable randomness. It was using the fact that computable randomness is base invariant. We could not use the result about base invariance in our proof immediately (since we have now multiple coordinates instead of one), hence we chose to use another preservation property of computable randomness. \end{remark} \noindent \begin{lemma}\langlebel{l_cr_implies_partial_derivatives} Let $z\in[0,1]^n $ be computably random and let $f:[0,1]^n \to\mathbb{R}$ be a computable $K_n-$increasing function. Then all partial derivatives of $f$ at $z$ exist. \begin{proof} Fix $i\le n$. The proof is by contraposition: suppose $D_if(z)$ does not exist, we will show that $z$ is not computably random. 
Let $y=z-z_ie_i$ and let $Y$ be its binary expansion. Define $g:2^{\omega} \times[0,1] \to\mathbb{R}$ by $$g(X,h)=f(0.X+he_i)$$ and let $g_y=g(Y,\cdot)$. Then $g$ satisfies relevant assumptions of Lemma \ref{uniform_relativization_monotone} and $g_y'(z_i)=D_if(z)$. Furthermore, we know that $g_y'(z_i)$ does not exist. To show $z$ is not computably random, by Theorem \ref{t_miyabe_rute1}, it is sufficient to show that $Z_i$ is not computably random uniformly relative to $Y$ (as this implies $Z_i$ not being computably random uniformly relative to $\oplus_{j\noindenteq i} Z_j$). This follows from Lemma \ref{uniform_relativization_monotone}. \end{proof} \end{lemma} \begin{lemma}\langlebel{l_cr_implies_partial_derivatives_lipschitz} Let $z\in[0,1]^n $ be computably random and let $f:[0,1]^n \to\mathbb{R}$ be a computable Lipschitz function. Then all partial derivatives of $f$ at $z$ exist. \begin{proof} Similar to the Remark \ref{decomposition_remark}, let $M=\text{Lip}(f)$ and let $\bold m=(M,\dots,M)\in\mathbb{R}^n$, then $g(x)=f(x)+\langlengle \bold m, x\ranglengle$ is a $K_n-$increasing computable function. Thus all partial derivatives of $g$ at $z$ exist, and therefore all partial derivatives of $f$ at $z$ exist too. \end{proof} \end{lemma} \subseteqsection{Existence of directional derivatives}\langlebel{existence_directional_derivatives} We will use the previously proven fact about existence of partial derivatives of Lipschitz functions to show that, in fact, an analogous result holds for all one-sided directional derivatives. The main idea relies on two simple observations: \begin{enumerate} \item computable randomness is invariant under computable linear isometries, and \item linear functions are Lipschitz. \end{enumerate} ace{5pt} \noindent For any $u,v\in\mathbb{R}^n$ with $\|v\|=\|u\|=1$ and $u\noindenteq v$, fix (say, via the Gram-Schmidt process) two orthonormal bases $B_u,B_v$ of $\mathbb{R}^n$ with $v\in B_v$ and $u\in B_u$. Let \mbox{$\Theta_{u\to v}:\mathbb{R}^n\to\mathbb{R}^n$} denote a change of basis map (that takes $B_u$ to $B_v$) such that $\Theta_{u\to v}(u)=v$. This function is a linear isometry and it is computable when $u,v$ are computable. \noindent The image of the unit cube $[0,1]^n $ under functions of the form \mbox{$\Theta_{u\to v}:\mathbb{R}^n\to\mathbb{R}^n$} is not necessarily contained in $[0,1]^n $. To deal with this issue, we use the function $\mathcal P_1:\mathbb{R}^n\to[0,1]^n $ defined by $$\mathcal P_1(x_1,\dots,x_n)=(\min\{1,x_1\},\dots, \min\{1,x_n\}).$$ $\mathcal P_1$ is a computable Lipschitz function which coincides with the identity map on the unit $n$-cube. \noindent For any function $f:[0,1]^n \to\mathbb{R}$, let $\hat f=f\circ \mathcal P_1$, so that if $f$ is computable and a.e.\ differentiable, so is $\hat f$. Moreover, if $f$ is Lipschitz, so is $\hat f$. Note that $\hat f$ is defined on the whole $\mathbb{R}^n$. \begin{lemma}\langlebel{lemma_directional_derivative_trick} \noindent Let $f:\mathbb{R}^n\to \mathbb{R}$ be a function, let $u,v,w\in\mathbb{R}^n ,x\in[0,1]^n $ and let $\Theta=\Theta_{v\to u}$. Then $$\ddo fxu = \ddo gzv$$ where $g=f\circ (\Theta+w)$ and $z=\Theta^{-1}(x-w)$. \begin{proof}~ \noindent First, note that for any $t>0,$ \begin{align*} \mathcal{F}rac{g(z+tv)-g(z)}{t}=\mathcal{F}rac{f(\Theta(z+tv)+w)-f(\Theta(z)+w)}{t}=\mathcal{F}rac{f(x+tu)-f(x)}{t}. \end{align*} \noindent By taking the limits of both sides we get the required equality. 
\end{proof} \end{lemma} \begin{lemma}\label{lemma222} Let $f:[0,1]^n \to\mathbb{R}$ be computable Lipschitz and suppose $x\in[0,1]^n $ is computably random. Then $\ddo fxu$ exists for every $u\in\mathbb{R}^n$. \begin{proof} It is sufficient to show that $\ddo fxu$ exists for each computable $u$ with $\|u\|=1$. Let $u$ be computable and let $v=e_1$. By density we can find some computable $w\in\mathbb{R}^n$ such that $z=\Theta_{v\to u}^{-1}(x-w)$ is contained in $[0,1]^n $. We apply Lemma \ref{lemma_directional_derivative_trick} to $\hat f,v,u,w$ and $x$, so that $$\ddo fxu=\ddo{\hat f}xu=\ddo gzv$$ where $g$ is Lipschitz and computable and $z\in[0,1]^n $ is computably random (again, we use Theorem \ref{cr_preservation} here). The required result follows from the fact that $\ddo gzv=D_1g(z)$ and we know that $D_1g(z)$ exists. \end{proof} \end{lemma} \subsection{Linearity of directional derivatives}~ \noindent In the last step of the proof, we need to show that $\ddo fz\cdot$ is linear on computable elements (where $f$ is computable Lipschitz and $z$ is computably random). \noindent Let $f:[0,1]^n \to\mathbb{R}$ be a function. For $u\in\mathbb{R}^n$, define $$\mathcal{K}^f_u=\{z~|~ \ddo fzu \text{ exists} \}.$$ \noindent For $q\in\mathbb{Q}^{+}$ and $u,v\in \mathbb{R}^n$, define $\mathcal{L}_{u,v,q}^f$ to be the set of points where linearity of $\dd fz\cdot$ fails and the failure is witnessed by $u,v$ and $q$. More formally, let $$\mathcal{L}_{u,v,q}^f=\mathcal{K}^f_u\cap \mathcal{K}^f_v\cap \mathcal{K}^f_{u+v}\cap \{z~|~ |\ddo fz{u+v}- \ddo fzu- \ddo fzv|\ge q\}.$$ \begin{lemma}\label{lemma302} Let $f:[0,1]^n \to\mathbb{R}$ be a computable a.e.\ differentiable function. Let $v,u\in\mathbb{R}^n$ be computable. Let $z\in\mathcal{L}^f_{v,u,q}$ for some $q\in\mathbb{Q}^{+}$. Then there exists a $\arpi01$ null-set that contains $z$. \begin{proof} Since $\ddo fzv,\ddo fzu$ and $\ddo fz{v+u}$ exist, there is $p>0$ such that $\left| \delta^v_f(z,h)+\delta^u_f(z,h)-\delta^{v+u}_f(z,h) \right|\ge q$ for all $h$ with $0<h\le p$. Hence the set of all $x$ such that $$\forall h\, \left(0<h \le p \implies \left| \delta^v_f(x,h)+\delta^u_f(x,h)-\delta^{v+u}_f(x,h) \right|\ge q \right),$$ where $h$ ranges over the positive rationals, contains $z$. It is clearly a $\arpi01$ set and it is a null set, since its complement contains all points of differentiability of $f$ and $f$ is a.e.\ differentiable. \end{proof} \end{lemma} \noindent So far, we have shown that computable randomness implies the existence of all one-sided directional derivatives, and that already weak randomness is sufficient for linearity of directional derivatives. This implies that computable randomness is sufficient for G\^ateaux differentiability, and this completes the proof of Theorem \ref{t_rademacher_1}, since G\^ateaux differentiability implies differentiability of Lipschitz functions on $[0,1]^n $. \subsection{Compact universal null-sets}\label{dore_maleva} In the context of differentiability of Lipschitz functions, a subset $A$ of $\mathbb{R}^n$ is said to be \emph{universal} if every real-valued Lipschitz function on $\mathbb{R}^n$ is differentiable at some point of $A$. Since the early work of D.~Preiss~\cite{Preiss:90}, it is known that there exist universal $G_\delta$ null sets for $n\ge 2$. Relatively recently, Dor\'e and Maleva constructed a family of compact universal null sets (see \cite{Dore.Maleva:11}).
The crucial idea in their construction is that a Lipschitz function is differentiable at points where a directional derivative is maximal in some specific sense and that such points can be found on small line segments (see Lemmata 4.2 and 4.3 in \cite{Dore.Maleva:11}). Their sets contain lots of such line segments and this is the reason they are universal. The result implies (as will be shown shortly) the existence of a universal $\arpi01$ null set, and this has significant implications for tackling the question of which randomness notion is implied by Rademacher's Theorem. To characterise a randomness notion $X\subset\mathbb{R}^n$ via differentiability of computable real-valued Lipschitz functions, it is sufficient to prove two statements: \begin{enumerate} \item $z\in X$ implies differentiability of all computable real-valued Lipschitz functions at $z$, and \item differentiability of all computable real-valued Lipschitz functions at $z$ implies $z\in X$. \end{enumerate} The second type of statement is usually proven by explicitly constructing a computable function that is not differentiable at a given randomness test (for $X$). The existence of a universal $\arpi01$ null set shows that such an approach cannot succeed even in proving that differentiability of computable real-valued Lipschitz functions implies weak randomness. The construction in \cite{Dore.Maleva:11} is parameterized by two sequences. Below we verify that with suitable parameters this construction yields a $\arpi01$ null set. \noindent\emph{Construction by Dor\'e and Maleva.} Let $\seq N$ be a sequence of odd integers such that $N_1>1$, $\lim _i N_i= \infty$ and $\sum \frac{1}{N^2_i}=\infty$. Let $\seq p$ be a sequence of real numbers with $1\le p_i\le N_i$ and $\lim_i p_i/N_i=0$. Let $d_0=1$ and for all $i\ge 1$ let $d_i=\prod_{k\le i} N_k^{-1}$ and define a lattice in $\mathbb{R}^2$ $$C_i=\left(\frac{d_{i-1}}2,\frac{d_{i-1}}2\right)+\mathbb{Z}^2.$$ \noindent Finally, define $$W=\mathbb{R}^2\setminus\bigcup_{i\ge 1}\bigcup_{c\in C_i} \mathcal{B}_\infty(c, p_id_i/2),$$ where $\mathcal{B}_\infty(x,r)$ denotes an open ball in $\left(\mathbb{R}^2,\|\cdot\|_\infty\right)$. $W$ is a closed null set. Dor\'e and Maleva proved (Corollary 5.2 in \cite{Dore.Maleva:11}) that for any such $W$, any open neighbourhood of the set \mbox{$M=\mathbb{R}^{n-2}\times W$} contains a point of differentiability of every Lipschitz function $f:\mathbb{R}^n\to\mathbb{R}$. In particular, $[0,1]^n \cap M$ contains a point of differentiability of every Lipschitz $f:[0,1]^n \to\mathbb{R}$. It is easy to see that both $\seq N$ and $\seq p$ can be taken to be computable sequences and then $[0,1]^n \cap M$ is a $\arpi01$ null set. (For example, take $\seq N$ to be $3,3,3,5,5,5,5,5,7,7,7,7,7,7,7,\dots$ and let $p_i=4$ for all $i$). \section{Characterizing weak 2-randomness in terms of differentiability} This section is devoted to proving Theorem \ref{w2r_theorem}, which characterises weak $2$-randomness in terms of differentiability of computable functions of several variables. It is worth pointing out that while our result is a generalization of Theorem \ref{thm:w2r1}, it is ``stronger'' in the sense that we show equivalence of three conditions, rather than two. Recall that Theorem \ref{thm:w2r1} shows weak 2-randomness is equivalent to differentiability of computable a.e. differentiable functions.
Somewhat surprisingly, in higher dimensions, a seemingly weaker condition, the existence of all partial derivatives for all computable a.e. differentiable functions, is also equivalent to weak 2-randomness. We start the section with a fact of independent interest. \begin{lemma}\label{pi03_lemma} Let $f:[0,1]^n \to\mathbb{R}$ be a computable function. The set of points at which $f$ is differentiable is a $\arpi03$ set. \begin{proof} Recall that the definition of the derivative of a function of several variables involves a nested limit. The main idea of this proof is that the set of points of differentiability (for a given function), $D$, can be written as an intersection of two sets, each of which can be described with only one limit. Specifically, we will show that $D$ is the intersection of two $\arpi03$ sets, $A$ and $B$, where $A$ is the set of points where all partial derivatives of $f$ exist, and $B$ is the set consisting of those~$x$ satisfying \begin{align}\label{B_condition} \lim_{||{h}|| \to 0,b\to 0} \frac{f({x} + {h}) - f({x}) - \delta^{1..n}_f({x}, b)\cdot{h}}{||{h}||} = 0. \end{align} \begin{claim} $A$ is a $\arpi03$ set that contains all points of differentiability of $f$. \begin{proof} Fix coordinate~$i$. \noindent For~$q$ a rational, $\overline{D}_if(x) \geq q$ is equivalent to the formula \[ \forall p \, \forall \delta \, \exists h~ \left( (p<q\land \delta>0)\implies\left( |h| < \delta \land \delta_f^i(x,h) > p\right)\right). \] By density, we can take~$p$ and~$\delta$ to range over the rationals. By continuity of~$f$, we can take~$h$ to range over the rationals. Thus $\{ {x} \mid \overline{D}_if({x}) \geq q\}$ is a $\arpi02$ set uniformly in~$q$. Symmetrically, so is $\{x \mid \underline{D}_if(x) \leq q\}$. Then the set of $x$ such that $D_if(x)$ does not exist is precisely the set \[ \left\{ x : \forall q[\overline{D}_if(x) \geq q] \vee \forall q[\underline{D}_if(x) \leq q] \vee \exists q \exists p[\underline{D}_if(x) \leq q < p \leq \overline{D}_if({x})]\right\}. \] This is a $\arsigma03$ set and hence the set of points where at least one partial derivative does not exist is also $\arsigma03$. Thus $A$ is a $\arpi03$ set. The other part of the claim is trivial. \end{proof} \end{claim} \begin{claim} $B$ is a $\arpi03$ set that contains all points of differentiability of $f$. \begin{proof} By definition, $\lim_{|h| \to 0} \delta^{1..n}_f({x},h) = J_f({x})$, the Jacobian of~$f$ at~${x}$ (when this exists). \noindent The derivative of~$f$ exists at~${x}$ if $J_f({x})$ exists and \[ \lim_{||{h}|| \to 0} \frac{f({x} + {h}) - f({x}) - J_f({x})\cdot{h}}{||{h}||} = 0. \] \noindent To see that $B$ is a $\arpi03$ set, we can rewrite condition (\ref{B_condition}) in the following form: \[ \forall \varepsilon \exists \delta \forall h \forall b ~\left( \delta>0\right)\land\left[ \left( \varepsilon > 0 \land ||{h}|| < \delta\land |b| < \delta\right) \implies \frac{\left| f(x + {h}) - f({x}) - \delta^{1..n}_f({x}, b)\cdot{h}\right| }{||{h}||} \le \varepsilon\right]. \] Here $\varepsilon$, $\delta$ and~$b$ are rationals, and ${h}$ has rational coordinates. Suppose~${x}$ is a point at which~$f$ is differentiable. Fix $\varepsilon > 0$.
Let $\delta$ be sufficiently small that for all $h$ with $||{h}|| < \delta$, \[\frac{\left|f({x} + {h}) - f(x) - J_f({x})\cdot{h}\right|}{||{h}||} < \varepsilon/2, \] and also for all $b$ with $|b| < \delta$, \[ \vectornorm{ (\delta^{1..n}_f({x},b) - J_f({x}))^T } < \varepsilon/2. \] Here we treat $\delta^{1..n}_f({x},b) - J_f({x})$ as a row vector. Then for any $h$ and $b$ with $||{h}|| < \delta$ and $|b| < \delta$, \begin{eqnarray*} \frac{ \left| f(x + {h}) - f(x) - \delta^{1..n}_f(x, b)\cdot{h}\right| }{||{h}||} &=& \frac{\left|f({x} + {h}) - f({x}) - J_f({x}){h} + J_f({x}){h} - \delta^{1..n}_f({x}, b)\cdot{h}\right|}{||{h}||}\\ &\leq& \frac{\left|f({x} + {h}) - f({x}) - J_f({x})\cdot{h}\right|}{||{h}||} + \frac{\left|(J_f({x}) - \delta^{1..n}_f({x},b))\cdot{h}\right|}{||{h}||}\\ &<& \varepsilon/2 + \frac{\vectornorm{(J_f({x}) - \delta^{1..n}_f({x},b))^T}||{h}||}{||{h}||}\\ &<& \varepsilon/2 + \varepsilon/2 = \varepsilon. \end{eqnarray*} Thus~$B$ contains every point at which~$f$ is differentiable. \end{proof} \end{claim} \noindent Thus, $A\cap B$ contains all points of differentiability of $f$. Let us show that the converse inclusion holds. \begin{claim} $f$ is differentiable at all elements of $A\cap B$. \begin{proof} Let $x\in A\cap B$. Fix~$\varepsilon > 0$. Since $x\in B$, we can find~$\delta$ such that \[ \forall {h}\,\forall b\left[\left(|b| < \delta \land ||{h}|| < \delta \right) \implies \frac{\left|f(x + {h}) - f(x) - \delta^{1..n}_f(x,b)\cdot{h}\right|}{||{h}||} < \varepsilon/2\right], \] and since all partial derivatives of $f$ at $x$ exist, we can find some $b$ with $|b| < \delta$ such that \[ \vectornorm{ (\delta^{1..n}_f(x,b) - J_f(x))^T } < \varepsilon/2. \] Then, for any $h$ with $||{h}|| < \delta$, \begin{eqnarray*} \frac{ \left| f(x + {h}) - f(x) - J_f(x)\cdot{h}\right|}{||{h}||} &=& \frac{ \left| f(x + {h}) - f(x) - \delta^{1..n}_f(x,b)\cdot{h} + \delta^{1..n}_f(x,b)\cdot{h} - J_f(x)\cdot{h}\right|}{||{h}||}\\ &\leq& \frac{ \left| f(x + {h}) - f(x) - \delta^{1..n}_f(x,b)\cdot{h}\right|}{||{h}||} + \frac{ \left| (\delta^{1..n}_f(x,b) - J_f(x))\cdot{h} \right|}{||{h}||}\\ &<& \varepsilon/2 + \varepsilon/2 = \varepsilon. \end{eqnarray*} Thus~$f$ is differentiable at~$x$. \end{proof} \end{claim} \end{proof} \end{lemma} \begin{theorem}\label{w2r_theorem} Let $z\in[0,1]^n $. The following are equivalent: \begin{enumerate} \item $z$ is weakly $2$-random, \item all $D_if(z)$ exist for all computable a.e.\ differentiable $f:[0,1]^n \to\mathbb{R}$, \item each computable a.e.\ differentiable function is differentiable at $z$. \end{enumerate} \begin{proof}[Proof (1) $\Rightarrow$ (3)] Suppose~$z$ is weakly 2-random and~$f$ is an a.e.\ differentiable computable function. Since $z$ cannot be contained in any $\arsigma03$ set of measure 0, it must belong to all $\arpi03$ sets of full measure. In particular, by Lemma \ref{pi03_lemma}, $z$ belongs to the set of differentiability points of $f$. \end{proof} \begin{proof}[Proof (3) $\Rightarrow$ (2)] Trivial.
\end{proof} \begin{proof}[Proof (2) $\Rightarrow$ (1)] Suppose $z=(z_1,\dots,z_n)\in[0,1]^n $ is not weakly 2-random. We may assume that all coordinates of $z$ are weakly 2-random, otherwise the required conclusion follows from the one-dimensional case. For suppose some $z_j$ is not weakly 2-random. Then there is a computable a.e.\ differentiable function $g:[0,1] \to\mathbb{R}$ such that $g'(z_j)$ does not exist. Define $\gamma:[0,1]^n \to \mathbb{R}$ as $\gamma(x_1,\dots,x_j,\dots, x_n)=g(x_j)$. Then $\gamma$ is a computable a.e.\ differentiable function such that $\gamma'(z)$ does not exist. In what follows, we ignore those elements of $[0,1]^n $ that have at least one rational coordinate. Let $\seq G$ be a sequence of uniformly $\arsigma01$ subsets of $[0,1]^n $ such that $G_{i+1}\subseteq G_{i}$ for all $i$ and $G=\bigcap G_i$ is a null-set with $z\in G$. Since we ignore elements with dyadic coordinates, we may assume that every $G_i$ is an infinite union of basic dyadic $n-$cubes. Let $\dseq D m l$ be an effective double sequence of (open) basic dyadic $n-$cubes such that $G_m=\bigcup_i D_{m,i}$ for each $m$, and such that for all $j,k$ there is an $l$ with \mbox{$D_{j+1,k}\subseteq D_{j,l}$}. \vspace{5pt} \noindent\emph{General idea of the proof.} We will construct a computable double sequence $\dseq C m i$ of basic dyadic $n-$cubes with certain well-behaved properties. For every $n-$cube $C_{m,i}$ in the sequence we will define a \emph{tent function} $f_{m,i}$ which is 0 outside $C_{m,i}$ and whose graph forms a piecewise linear ``tent'' over $C_{m,i}$. See figure \ref{fig_tent} for an illustration of what a graph of a tent function on $[0,1]^2 $ might look like. $E_{m,i}$ is the subarea of $C_{m,i}$ where $\left|D_1f_{m,i}\right|\neq 1$. The tent functions are defined in such a way that $z$ belongs to only finitely many of the $E_{m,i}$. This is where our assumption that all coordinates of $z$ are weakly 2-random is used. See figure \ref{fig2}. Any point belonging to infinitely many $E_{m,i}$ must, by the pigeonhole principle, have at least one coordinate belonging to the (darker) corner areas (one-dimensional $E^k_{m,i}$ sets). Our tent functions are defined in such a way that those areas form $\arpi02$ null sets. Then $f:[0,1]^n \to\mathbb{R}$ will be defined as a sum of those $f_{m,i}$ for which we know the first partial derivative at $z$ is equal to $\pm1$. This is used to show that $D_1f(z)$ does not exist. The properties of $\dseq C m i$ ensure that $f$ is computable and a.e.\ differentiable. \begin{figure} \caption{Two-dimensional graph of a tent function $f_{m,i}$.}\label{fig_tent} \end{figure} \begin{figure} \caption{Two-dimensional projection of $C_{m,i}$.}\label{fig2} \end{figure} \vspace{5pt} \noindent\emph{Construction of the double sequence $\dseq C m i$.} Suppose $m=0$, or $m>0$ and we have already defined $\left(C_{m-1,j}\right)_{j\in \mathbb{N}}$. Define $\left(C_{m,j}\right)_{j\in \mathbb{N}}$ as follows. Let $N\in\mathbb{N}$ be the greatest number such that we have already defined $C_{m,i}$ for $i\le N$. When a new $n-$cube $D=D_{m,l}$ is enumerated into $G_m$, if $m>0,$ we wait until $D$ is contained in a union of $n-$cubes $\bigcup_{r\in F} C_{m-1,r}$, where $F$ is finite.
This is possible since $D$ is contained in a single cell of the form $D_{m-1,\_}$, that was handled in a previous stage. If $m=0$, let $\delta=\lm{D}$, otherwise let $$\delta=\min\{\lm{D},\min\{\lm{C_{m-1,r}}:r\in F\}\}.$$ If $N=0$, let $\ensuremath{\varepsilon}\xspaceilon=8^{-m}\delta$, otherwise let $\ensuremath{\varepsilon}\xspaceilon=\min\{8^{-m}\delta,\lm{C_{m,N-1}}\}$. Finally, partition $D$ into disjoint basic dyadic $n-$cubes $C_{m,i}$, $i=N+1,\dots,N'<~\infty$, with nonincreasing volume $\langlembda(C_{m,i})\le\ensuremath{\varepsilon}\xspaceilon^n$, so that when $m>0$, each of the cubes is contained in one of $C_{m-1,r}$ for some $r\in F$. The following claim summarizes all properties of $\dseq C m i$ relevant to our proof. \begin{claim} \noindent The double sequence $\dseq C m i$ is computable and it verifies the following properties: \begin{enumerate} \item[i)] $G_m=\begin{itemize}gcup_{i\in \mathbb{N}N} C_{m,i},$ \item[ii)]$C_{m,i}~\cap C_{m,k}=\emptyset\text{ and } \langlembda(C_{m,i})\ge\langlembda(C_{m,k})\text{ for all }i<k,$ \item[iii)] if $B=C_{m,i}$ for $m>0$, then there is an $n-$cube $A=C_{m-1,k}$ such that \begin{align}\langlebel{l17} B\subseteqseteq A\text{ and } \langlembda(B)\le 8^{-m}\langlembda(A), \end{align} \item[iv)] for all $m,k\in\mathbb{N}N$ $$D_{m,k}= \text{some finite union of n-cubes of the form } C_{m,i}$$ with \begin{align} C_{m,i}\subseteqseteq D_{m,k}\implies d_{m,i}\le 8^{-m}\lm{D_{m,k}} \end{align} where $d_{m,i}$ denotes the length of a side of $C_{m,i}$. \end{enumerate} \begin{proof} All of the listed properties are straightforward consequences of the construction of $\dseq C mi$. \end{proof} \end{claim} ace{5pt} \noindent \emph{Tent functions $f_{m,j}$.} Let $m,j\in\mathbb{N}N$. For all $i\in\mathbb{N}$ with $1\le i\le n$, define $a^i_{m,j},b^i_{m,j}$ so that $\left(a^i_{m,j},b^i_{m,j}\right)=~\pi_i(C_{m,j})$, where $\pi_i:\mathbb{R}^n\to\mathbb{R}$ denotes the projection onto the $i-$th coordinate. Let $\ensuremath{\varepsilon}\xspaceilon_{m,j}=\ensuremath{\varepsilon}\xspaceilon=2^{-m-j-1}\cdot d_{m,j}$ and define $b^i_{m,j}:[0,1] \to\mathbb{R}$ as \begin{align*} b^i_{m,j}(x) = \begin{cases} \mathcal{F}rac{x-a^i_{m,j}}{\ensuremath{\varepsilon}\xspaceilon} &\mbox{if } x\in [a^i_{m,j},a^i_{m,j}+\ensuremath{\varepsilon}\xspaceilon], \\ 1 & \mbox{if } x\in (a^i_{m,j}+\ensuremath{\varepsilon}\xspaceilon,b^i_{m,j}-\ensuremath{\varepsilon}\xspaceilon),\\ \mathcal{F}rac{b^i_{m,j}-x}{\ensuremath{\varepsilon}\xspaceilon} &\mbox{if } x\in [b^i_{m,j}-\ensuremath{\varepsilon}\xspaceilon,b^i_{m,j}],\\ 0&\mbox{otherwise}. \end{cases} \end{align*} Define $f_{m,j}:[0,1]^n \to\mathbb{R}$ as \begin{align*} f_{m,j}(x_1,x_2,\dots,x_n) = d\left([0,1] \setminus \left(a^1_{m,j},b^1_{m,j}\right),x_1\right) \cdot \preceqod_{n\ge i\ge 2} b^i_{m,j}(x_i), \end{align*} where $ d\left([0,1] \setminus \left(a^1_{m,j},b^1_{m,j}\right),x_1\right)$ denotes the distance from $x_1$ to $[0,1] \setminus \left(a^1_{m,j},b^1_{m,j}\right)$. Note that $f_{m,j}$ is a computable (uniformly in $m,j$) a.e.\ differentiable function. 
Lastly, define $E_{m,j}$ to be the subset of $C_{m,j}$ where $\left|D_1f_{m,j}\right|\noindenteq 1$ whenever $D_1f_{m,j}$ exists, that is $$E_{m,j}=C_{m,j} \setminus \left[\left(a^1_{m,j},b^1_{m,j}\right)\times \preceqod_{n\ge i\ge 2} \left(a^i_{m,j}+\ensuremath{\varepsilon}\xspaceilon, b^i_{m,j}-\ensuremath{\varepsilon}\xspaceilon\right) \right].$$ The idea behind such definition of $f_{m,j}$ functions is that $\ensuremath{\varepsilon}\xspaceilon_{m,j}$ goes to $0$ so quickly, that $\left|D_1f_{m,j}(z)\right|\noindenteq 1$ holds only for finitely many $m,j\in \mathbb{N}N$. \begin{claim} There exists $N\in\mathbb{N}N$ such that for all $i\in\mathbb{N}N$ and $m>N,$ if $z\in C_{m,i}$ then $\left|D_1f_{m,i}(z)\right|= 1$. \begin{proof} To prove this claim, we will use our assumption that all coordinates of $z$ are weakly $2-$random. Specifically, we will show that if a point belongs to an infinitely many $E_{m,i}$, then one of its coordinates belongs to a $\arpi02$ null set. \noindent For every $m,i,k\in\mathbb{N}N$ with $2\le k\le n$, let $$E^k_{m,i}=\left(a^k_{m,i},a^k_{m,i}+\ensuremath{\varepsilon}\xspaceilon_{m,i}\right)\cup \left(b^k_{m,i}-\ensuremath{\varepsilon}\xspaceilon_{m,i}, b^k_{m,i}\right).$$ Note the following property of those sets: if $z\in E_{m,i}$ then for some $k$, $z_k\in E^k_{m,i}$. For every $m,k\in\mathbb{N}N$ with $n\ge k\ge 2$, let $B_m^k=\begin{itemize}gcup_{i>m} \begin{itemize}gcup_j E^k_{i,j}$. Let's verify that every $B^k=\begin{itemize}gcap_i B_i^k$ is a $\arpi02$ null-set. Indeed, $\seq {B^k}$ is a uniformly computable sequence of $\arsigma 01$ sets with $\lm{B_m^k}\le \sum_{i>m}\sum_j 2^{-i-j}\cdot d_{i,j}\le 8^{-m}$ for all $m,k$. By the pigeonhole principe, if $z$ belongs to infinitely many $E_{m,j}$ (for infinitely many $m$), then for some $k$, $z_k$ belongs to infinitely many $E^k_{m,j}$. In that case $z_k\in B^k$ and we get a contradiction. Let $N$ be such that $z_k\noindentotin B_{N}^k$ for all $k$ and the required result follows. \end{proof} \end{claim} ace{5pt} \noindent \emph{Definition of the function $f$.} \noindent Let $$f_m=\sum_{i=0}^\infty 4^{m}f_{m,i}$$ and $$f=\sum_{i> N} f_m.$$ ace{5pt} \noindent \begin{claim} $f$ is computable. \begin{proof} Fix $m>0$. Note that every $f_{m,i}$ is bounded from above by $d_{m,i}/2$ and since all $C_{m,i}$ are disjoint, $f_m$ is bounded from above by $4^m8^{-m}/2=2^{-m-1}$ and it follows that $f$ is well defined everywhere. Firstly, let's show that $f(q)$ is computable uniformly in rational $q$. Given $m>0$, since $\lim_{i\to\infty}\lm{C_{m,i}}=0,$ we can find $i^*$ such that $${d_{k,i^*}}\le 8^{-m}/(m+1)\text{ for each $k\le m$}.$$ Since the $d_{k,i}$ is non-increasing in $i$ and $ {f_{k,i}}\le d_{k,i}/2$, we have $$4^k f_{k,i} (q)\le 2^{-m-1}/(m+1)\text{ for all $k\le m$ and $i\ge i^*$}.$$ Hence $$\sum_{k\le m}\sum_{i\ge i^*}4^k {f_{k,i}}(q)\le 2^{-m-1}.$$ Furthermore, $$\sum_{k>m}f_k(q)\le \sum_{k>m}2^{-k-1}=2^{-m-1}.$$ Therefore the approximation of $f(q)$ at stage $i^*$ based only on the $n-$cubes of the form $C_{k,i}$ for $k\le m$ and $i< i^*$ is within $2^{-m}$ of $f(q)$. \ \noindent Secondly, we need to verify that $f$ is effectively uniformly continuous. Suppose $\|x-y\|\le d_{m,1}$ for some $m$. Then for $k<m$, we have $|f_k(x)-f_k(y)|\le 4^{k}d_{m,1}/2.$ For $k\ge m$, we have $f_k(x),f_k(y)\le 2^{-k-1}$. Thus $$|f(x)-f(y)|\le d_{m,1}\sum_{k<m}4^k +\sum_{k\ge m}2^{-k}<2^{-m+2}.$$ \noindent Define $h(m)=\lfloor-\log_2d_{m,1}\rfloor+1$ so that $2^{-h(m)}\le d_{m,1}$. 
Note that $h$ is a computable order function. Then we get that $\|x-y\|\le 2^{-h(m)}$ implies $|f(x)-f(y)|\le 2^{-m+2}.$ \end{proof} \end{claim} \begin{claim}\label{claim133} $D_1f(z)$ does not exist. \begin{proof} For all $m>N,$ let $d_m=d_{m,i_m}$ where $i_m$ is such that $z\in C_{m,i_m}$. Note that for all $m>N$ we have either $\delta_{f_m}^1\left(z, \frac{{d_m}}{4}\right) = \pm 4^m$ or $\delta_{f_m}^1\left(z, -\frac{{d_m}}{4}\right)= \pm 4^m$. Without loss of generality we may assume that for infinitely many $m$, we have $\left| \delta_{f_m}^1\left(z, \frac{{d_m}}{4}\right)\right|= 4^m.$ Fix one such $m>N$. Note that for every $k\in\mathbb{N}$ with $N<k<m$ we have $\left|\delta_{f_k}^1\left(z, \frac{{d_m}}{4}\right)\right|= 4^k$. Suppose $k>m$. Then we have $$f_k(x)\le 4^k8^{-k}\frac{d_m}{2}=2^{-k-1}d_m$$ for all $x\in C_{m,i_m}\setminus E_{m,i_m} $ and thus we get $$\left|\delta_{f_k}^1\left(z, \frac{{d_m}}{4}\right)\right|\le \frac{2\cdot2^{-k-1} d_m}{\left\|\frac{{d_m}}{4} e_1\right\|}=2^{-k+2}.$$ \noindent Hence, for $m>N$ we have $$\left|\delta_{f}^1\left(z, \frac{{d_m}}{4}\right)\right|\ge \left(4^m-\sum_{N<k<m}4^k-\sum_{k>m}2^{-k+2}\right)\ge 4^{m-1}-4.$$ Therefore $D_1f(z)$ does not exist. \end{proof} \end{claim} \begin{claim} $f$ is differentiable almost everywhere. \begin{proof} Let $x\in[0,1]^n $. There are three possible cases: \begin{enumerate} \item $f_{m,j}'(x)$ does not exist for some $m,j$, \item $x$ belongs to the support of $f_{m,j}$ for infinitely many $m,j$, or \item $x$ belongs to the support of $f_{m,j}$ for only finitely many $m,j$ and all $f_{m,j}'(x)$ exist. Note that this implies differentiability of $f$ at $x$. \end{enumerate} The first case corresponds to a null-set, since every $f_{m,j}$ is a.e.\ differentiable. The second case corresponds to a null-set too, since it implies $x\in \bigcap_i G_i$. The last case implies differentiability of $f$ at $x$, and it must correspond to a set of full measure since the cases (1) and (2) are captured by null-sets. Thus $f$ is a.e.\ differentiable. \end{proof} \end{claim} \end{proof} \end{theorem} \section{Conclusion and future directions} Despite the obstacle described in Subsection~\ref{dore_maleva}, we conjecture that computable randomness, just like on the unit interval, characterises differentiability points of all computable real-valued Lipschitz functions. Proving the converse to the effective version of Rademacher's Theorem (that is, showing that differentiability of computable Lipschitz functions implies computable randomness) remains an open question of great interest. There are quite a few results in classical analysis about differentiability of functions of several variables that exhibit Lipschitz-like behaviour. Naturally, those results are related to Rademacher's Theorem. Studying effective versions of those will improve our understanding of the interplay between computable analysis and algorithmic randomness. Here we mention two such theorems that we feel are of particular importance: \noindent(1) Alexandrov's theorem (see 6.4 in \cite{Evans.Gariepy:92}) states that convex functions are twice differentiable almost everywhere. Convex functions and monotone functions are closely related: on the real line, a function is monotone if and only if it is a derivative of a convex function. For functions of several variables, the relation is a bit less straightforward (see \cite{Rockafellar:70}).
Recently it has been shown that both twice-differentiability of computable convex real-valued functions on $\mathbb{R}^n$ and differentiability of computable monotone functions on $\mathbb{R}^n$ correspond to computable randomness (see \cite{Galicki:15,Galicki:15:2}). \noindent(2) It is known that $K_n$-monotone functions of several variables are a.e.\ differentiable (see \cite{Chabrillac.Crouzeix:87}). Two of the three steps of our proof in Section \ref{lip_section} work for $K_n-$monotone functions. The one that does not work is the one in Subsection \ref{existence_directional_derivatives}. It is not known whether computable randomness implies differentiability of computable $K_n-$monotone functions, nor which randomness notion is induced by a.e.\ differentiability of computable $K_n-$monotone functions. On the other hand, our result concerning weak 2-randomness is sharp: weak 2-randomness does characterise the differentiability sets of computable a.e.\ differentiable functions of several variables. There are many other similar results in one dimension that characterise differentiability of effective functions in terms of algorithmic randomness. Generalizing those results to higher dimensions (and, perhaps, to more general spaces) will provide more insight into the interactions between computable analysis and algorithmic randomness. \bibliographystyle{plain} \end{document}
\begin{document} \title[A note on affinely regular polygons]{A note on affinely regular polygons} \author{Christian Huck} \address{Department of Mathematics and Statistics\\ The Open University\\ Walton Hall\\ Milton Keynes\\ MK7 6AA\\ United Kingdom} \email{[email protected]} \thanks{The author was supported by the German Research Council (Deutsche Forschungsgemeinschaft), within the CRC 701, and by EPSRC via Grant EP/D058465/1.} \begin{abstract} The affinely regular polygons in certain planar sets are characterized. It is also shown that the obtained results apply to cyclotomic model sets and, additionally, have consequences in the discrete tomography of these sets. \end{abstract} \maketitle \section{Introduction} Chrestenson~\cite{C} has shown that any (planar) regular polygon whose vertices are contained in $\mathbbm{Z}^{d}$ for some $d\geq 2$ must have $3,4$ or $6$ vertices. More generally, Gardner and Gritzmann~\cite{GG} have characterized the numbers of vertices of affinely regular \emph{lattice polygons}, i.e., images of non-degenerate regular polygons under a non-singular affine transformation of the plane, with all vertices contained in the square lattice $\mathbbm{Z}^2$ or, equivalently, in some arbitrary planar lattice $L$. It turned out that the affinely regular lattice polygons are precisely the affinely regular triangles, parallelograms and hexagons. As a first step beyond the case of planar lattices, this short text provides a generalization of this result to planar sets $\varLambda$ that are non-degenerate in some sense and satisfy a certain affinity condition on finite scales (Theorem~\ref{charac}). The obtained characterization can be expressed in terms of a simple inclusion of real field extensions of $\mathbbm{Q}$ and particularly applies to \emph{algebraic Delone sets}, thus including \emph{cyclotomic model sets}. These cyclotomic model sets range from periodic examples, given by the vertex sets of the square tiling and the triangular tiling, to aperiodic examples like the vertices of the Ammann-Beenker tiling, of the T\"ubingen triangle tiling and of the shield tiling, respectively. It turns out that, for cyclotomic model sets $\varLambda$, the numbers of vertices of affinely regular polygons in $\varLambda$ can be characterized by a simple divisibility condition (Corollary~\ref{th1mod}). In particular, the result on affinely regular lattice polygons is contained as a special case (Corollary~\ref{cor2}(a)). Additionally, it is shown that the obtained divisibility condition implies a weak estimate in the \emph{discrete tomography} of cyclotomic model sets (Corollary~\ref{corofin}). \section{Preliminaries and notation}\label{sec1} Natural numbers are always assumed to be positive, i.e., $\mathbbm{N}\,=\,\{1,2,3,\dots\}$, and we denote by $\mathcal{P}$ the set of rational primes. If $k,l\in \mathbbm{N}$, then $\operatorname{gcd}(k,l)$ and $\operatorname{lcm}(k,l)$ denote their greatest common divisor and least common multiple, respectively. The group of units of a given ring $R$ is denoted by $R^{\times}$. As usual, for a complex number $z\in\mathbbm{C}$, $\vert z\vert$ denotes the complex absolute value, i.e., $\vert z\vert=\sqrt{z\bar{z}}$, where $\bar{.}$ denotes complex conjugation. The unit circle in $\mathbbm{R}^{2}$ is denoted by $\mathbb{S}^{1}$, i.e., $\mathbb{S}^{1}=\{x\in\mathbbm{R}^2\,|\,\vert x \vert =1\}$. Moreover, the elements of $\mathbb{S}^{1}$ are also called {\em directions}.
For $r>0$ and $x\in\mathbbm{R}^{2}$, $B_{r}(x)$ denotes the open ball of radius $r$ about $x$. A subset $\varLambda$ of the plane is called {\em uniformly discrete} if there is a radius $r>0$ such that every ball $B_{r}(x)$ with $x\in\mathbbm{R}^{2}$ contains at most one point of $\varLambda$. Further, $\varLambda$ is called {\em relatively dense} if there is a radius $R>0$ such that every ball $B_{R}(x)$ with $x\in\mathbbm{R}^{2}$ contains at least one point of $\varLambda$. $\varLambda$ is called a {\em Delone set} (or {\em Delaunay set}) if it is both uniformly discrete and relatively dense. For a subset $S$ of the plane, we denote by $\operatorname{card}(S)$, $\mathcal{F}(S)$, $\operatorname{conv}(S)$ and $\mathbbm{1}_{S}$ the cardinality, set of finite subsets, convex hull and characteristic function of $S$, respectively. A direction $u\in\mathbb{S}^{1}$ is called an $S${\em-direction} if it is parallel to a non-zero element of the difference set $S-S:=\{s-s'\,|\,s,s'\in S\}$ of $S$. Further, a finite subset $C$ of $S$ is called a {\em convex subset of} $S$ if its convex hull contains no new points of $S$, i.e., if $C = \operatorname{conv}(C)\cap S$ holds. Moreover, the set of all convex subsets of $S$ is denoted by $\mathcal{C}(S)$. Recall that a {\em linear transformation} (resp., {\em affine transformation}) $\Psi\!:\, \mathbbm{R}^{2} \rightarrow \mathbbm{R}^{2}$ of the Euclidean plane is given by $z \mapsto Az$ (resp., $z \mapsto Az+t$), where $A$ is a real $2\times 2$ matrix and $t\in \mathbbm{R}^{2}$. In both cases, $\Psi$ is called {\em singular} when $\operatorname{det}(A)= 0$; otherwise, it is non-singular. A {\em homothety} $h\!:\, \mathbbm{R}^{2} \rightarrow \mathbbm{R}^{2}$ is given by $z \mapsto \lambda z + t$, where $\lambda \in \mathbbm{R}$ is positive and $t\in \mathbbm{R}^{2}$. A {\em convex polygon} is the convex hull of a finite set of points in $\mathbbm{R}^2$. For a subset $S \subset \mathbbm{R}^2$, a {\em polygon in} $S$ is a convex polygon with all vertices in $S$. A {\em regular polygon} is always assumed to be planar, non-degenerate and convex. An {\em affinely regular polygon} is a non-singular affine image of a regular polygon. In particular, it must have at least $3$ vertices. Let $U\subset \mathbb{S}^{1}$ be a finite set of directions. A non-degenerate convex polygon $P$ is called a $U${\em -polygon} if it has the property that whenever $v$ is a vertex of $P$ and $u\in U$, the line $\ell_{u}^{v}$ in the plane in direction $u$ which passes through $v$ also meets another vertex $v'$ of $P$. For a subset $\varLambda\subset\mathbbm{C}$, we denote by $\mathbbm{K}_{\varLambda}$ the intermediate field of $\mathbbm{C}/\mathbbm{Q}$ that is given by $$ \mathbbm{K}_{\varLambda}\,\,:=\,\,\mathbbm{Q}\left(\big(\varLambda-\varLambda\big)\cup\big(\overline{\varLambda-\varLambda}\big)\right)\,, $$ where $\varLambda-\varLambda$ denotes the difference set of $\varLambda$. Further, we set $ \mathbbm{k}_{\varLambda}:=\mathbbm{K}_{\varLambda}\cap\mathbbm{R}$, the maximal real subfield of $\mathbbm{K}_{\varLambda}$. \begin{rem} Note that $U$-polygons have an even number of vertices. Moreover, an affinely regular polygon with an even number of vertices is a $U$-polygon if and only if each direction of $U$ is parallel to one of its edges. \end{rem} For $n\in \mathbbm{N}$, we always let $\zeta_{n} := e^{2\pi i/n}$, as a specific choice for a primitive $n$th root of unity in $\mathbbm{C}$. Let $\mathbbm{Q}(\zeta_{n})$ be the corresponding cyclotomic field. 
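For instance, $\zeta_{3}=\tfrac{-1+\sqrt{-3}}{2}$ and $\zeta_{4}=i$, so that $\mathbbm{Q}(\zeta_{3})=\mathbbm{Q}(\sqrt{-3}\,)$ and $\mathbbm{Q}(\zeta_{4})=\mathbbm{Q}(i)$; upon identifying $\mathbbm{C}$ with $\mathbbm{R}^{2}$, the rings $\mathbbm{Z}[\zeta_{3}]$ and $\mathbbm{Z}[i]$ are the triangular and the square lattice, respectively, which will reappear below as the crystallographic cases among the cyclotomic model sets.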
It is well known that $\mathbbm{Q}(\zeta_{n}+\bar{\zeta}_{n})$ is the maximal real subfield of $\mathbbm{Q}(\zeta_{n})$; see~\cite{Wa}. Throughout this text, we shall use the notation $$\mathbbm{K}_{n}=\mathbbm{Q}(\zeta_{n}),\; \mathbbm{k}_{n}=\mathbbm{Q}(\zeta_{n}+\bar{\zeta}_{n}),\; \mathcal{O}_{n}=\mathbbm{Z}[\zeta_{n}],\; \thinspace\scriptstyle{\mathcal{O}}\displaystyle_{n} =\mathbbm{Z}[\zeta_{n}+\bar{\zeta}_{n}]\,.$$ Except for the one-dimensional cases $\mathbbm{K}_{1}=\mathbbm{K}_{2}=\mathbbm{Q}$, $\mathbbm{K}_{n}$ is an imaginary extension of $\mathbbm{Q}$. Further, $\phi$ will always denote Euler's phi-function, i.e., $$\phi(n) = \operatorname{card}\left(\big\{k \in \mathbbm{N}\, |\,1 \leq k \leq n \textnormal{ and } \operatorname{gcd}(k,n)=1\big\}\right)\,.$$ Occasionally, we identify $\mathbbm{C}$ with $\mathbbm{R}^{2}$. Primes $p\in\mathcal{P}$ for which the number $2p+1$ is prime as well are called Sophie Germain prime numbers. We denote by $\mathcal{P}_{\rm SG}$ the set of Sophie Germain prime numbers. They are the primes $p$ such that the equation $\phi(n)=2p$ has solutions. It is not known whether there are infinitely many Sophie Germain primes. The first few are \begin{eqnarray*}&&\{2,3,5,11,23,29,41,53,83,89,113,131,173,\\ &&\hphantom{\{}179,191,233,239,251, 281,293,359,419,\dots\}\,,\end{eqnarray*} see entry A005384 of~\cite{Sl} for further details. We need the following facts from the theory of cyclotomic fields. \begin{fact}[Gau\ss]\cite[Theorem 2.5]{Wa}\label{gau} $[\mathbbm{K}_{n} : \mathbbm{Q}] = \phi(n)$. The field extension $\mathbbm{K}_{n}/ \mathbbm{Q}$ is a Galois extension with Abelian Galois group $G(\mathbbm{K}_{n}/ \mathbbm{Q}) \simeq (\mathbbm{Z} / n\mathbbm{Z})^{\times}$, where $a\, (\textnormal{mod}\, n)$ corresponds to the automorphism given by\/ $\zeta_{n} \mapsto \zeta_{n}^{a}$. \end{fact} Since $\mathbbm{k}_n$ is the maximal real subfield of the $n$th cyclotomic field $\mathbbm{K}_{n}$, Fact~\rightf{gau} immediately gives the following result. \begin{coro}\label{cr5} If $n\geq 3$, one has $[\mathbbm{K}_{n} : \mathbbm{k}_{n}]= 2$. Thus, a $\mathbbm{k}_{n}$-basis of $\mathbbm{K}_{n}$ is given by $\{1,\zeta_n\}$. The field extension $\mathbbm{k}_{n} / \mathbbm{Q}$ is a Galois extension with Abelian Galois group $G(\mathbbm{k}_{n} / \mathbbm{Q})\simeq (\mathbbm{Z} / n\mathbbm{Z})^{\times}/\{\pm 1\, (\textnormal{mod}\, n)\}$ of order $[\mathbbm{k}_{n} : \mathbbm{Q}] = \phi(n)/2$. \end{coro} Consider an algebraic number field $\mathbbm{K}$, i.e., a finite extension of $\mathbbm{Q}$. A full $\mathbbm{Z}$-module $\mathcal{O}$ in $\mathbbm{K}$ (i.e., a free $\mathbbm{Z}$-module of rank $[\mathbbm{K}:\mathbbm{Q}]$) which contains the number $1$ and is a ring is called an \emph{order} of $\mathbbm{K}$. Note that every $\mathbbm{Z}$-basis of $\mathcal{O}$ is simultaneously a $\mathbbm{Q}$-basis of $\mathbbm{K}$, whence $\mathbbm{Q}\mathcal{O}=\mathbbm{K}$ in particular. It turns out that among the various orders of $\mathbbm{K}$ there is one \emph{maximal order} which contains all the other orders, namely the ring of integers $\mathcal{O}_{\mathbbm{K}}$ in $\mathbbm{K}$; see~\cite[Chapter 2, Section 2]{Bo}. For cyclotomic fields, one has the following well-known result. \begin{fact}\cite[Theorem 2.6 and Proposition 2.16]{Wa}\label{p1} For $n\in \mathbbm{N}$, one has: \begin{itemize} \item[(a)] $\mathcal{O}_{n}$ is the ring of cyclotomic integers in $\mathbbm{K}_{n}$, and hence its maximal order. 
\item[(b)] $\thinspace\scriptstyle{\mathcal{O}}\displaystyle_{n}$ is the ring of integers in $\mathbbm{k}_{n}$, and hence its maximal order. \end{itemize} \end{fact} \begin{lem}\label{cyclosec} If $m,n \in \mathbbm{N}$, then $\mathbbm{K}_{m} \cap \mathbbm{K}_{n} = \mathbbm{K}_{\operatorname{gcd}(m,n)}$. \end{lem} \begin{proof} The assertion follows from similar arguments as in the proof of the special case $(m,n)=1$; compare \cite[Ch. VI.3, Corollary 3.2]{La}. Here, one has to observe $\mathbbm{Q}(\zeta_{m},\zeta_{n})=\mathbbm{K}_{m}\mathbbm{K}_{n}= \mathbbm{K}_{\operatorname{lcm}(m,n)}$ and then to employ the identity \begin{equation}\label{eq2}\phi(m)\phi(n)=\phi(\operatorname{lcm}(m,n))\phi(\operatorname{gcd}(m,n))\end{equation} instead of merely using the multiplicativity of the arithmetic function $\phi$. \end{proof} \begin{lem}\label{incl} Let $m,n \in \mathbbm{N}$. The following statements are equivalent: \begin{itemize} \item[(i)] $\mathbbm{K}_{m} \subset \mathbbm{K}_{n}$. \item[(ii)] $m|n$, or $m\; \equiv \;2 \;(\operatorname{mod} 4)$ and $m|2n$. \end{itemize} \end{lem} \begin{proof} For direction (ii) $\mathbbm{R}ightarrow$ (i), the assertion is clear if $m|n$. Further, if $m\; \equiv \;2 \;(\operatorname{mod} 4)$, say $m=2o$ for a suitable odd number $o$, and $m|2n$, then $\mathbbm{K}_{o} \subset \mathbbm{K}_{n}$ (due to $o|n$). However, Fact~\rightf{gau} shows that the inclusion of fields $\mathbbm{K}_{o} \subset \mathbbm{K}_{2o}=\mathbbm{K}_{m}$ cannot be proper since we have, by means of the multiplicativity of $\phi$, the equation $\phi(m)=\phi(2o)=\phi(o)$. This gives $\mathbbm{K}_{m}\subset \mathbbm{K}_{n}$. For direction (i) $\mathbbm{R}ightarrow$ (ii), suppose $\mathbbm{K}_{m} \subset \mathbbm{K}_{n}$. Then, Lemma~\rightf{cyclosec} implies $\mathbbm{K}_{m} = \mathbbm{K}_{\operatorname{gcd}(m,n)}$, whence \begin{equation}\label{equation}\phi(m)=\phi(\operatorname{gcd}(m,n))\,\end{equation} by Fact~\rightf{gau} again. Using the multiplicativity of $\phi$ together with $\phi(p^{j})\,=\,p^{j-1}\,(p-1)$ for $p\in\mathcal{P}$ and $j\in\mathbbm{N}$, we see that, given the case $\operatorname{gcd}(m,n)< m$, Equation (\rightf{equation}) can only be fulfilled if $m\; \equiv \;2 \;(\operatorname{mod} 4)$ and $m|2n$. The remaining case $\operatorname{gcd}(m,n)= m$ is equivalent to the relation $m|n$. \end{proof} \begin{coro}\label{unique} Let $m,n \in \mathbbm{N}$. The following statements are equivalent: \begin{itemize} \item[(i)] $\mathbbm{K}_{m} = \mathbbm{K}_{n}$. \item[(ii)] $m=n$, or $m$ is odd and $n=2m$, or $n$ is odd and $m=2n$. \end{itemize} \end{coro} \begin{rem}\label{unrem} Corollary~\rightf{unique} implies that, for $m,n\; \not\equiv \;2 \;(\operatorname{mod} 4)$, one has the identity $\mathbbm{K}_{m} = \mathbbm{K}_{n}$ if and only if $m=n$. \end{rem} \begin{lem}\label{unique2} Let $m,n \in \mathbbm{N}$ with $m,n \geq 3$. Then, one has: \begin{itemize} \item[(a)] $\mathbbm{k}_{m}=\mathbbm{k}_{n} \;\Leftrightarrow\; \mathbbm{K}_{m}=\mathbbm{K}_{n} \mbox{ or\, } m,n \in \{3,4,6\}.$ \item[(b)] $\mathbbm{k}_{m}\subset\mathbbm{k}_{n} \;\Leftrightarrow\; \mathbbm{K}_{m} \subset \mathbbm{K}_{n} \mbox{ or\, } m\in \{3,4,6\}.$ \end{itemize} \end{lem} \begin{proof} For claim (a), let us suppose $\mathbbm{k}_{m}=\mathbbm{k}_{n}=:\mathbbm{k}$ first. Then, Fact~\rightf{gau} and Corollary~\rightf{cr5} imply that $[\mathbbm{K}_{m}:\mathbbm{k}]=[\mathbbm{K}_{n}:\mathbbm{k}]=2$. 
Note that $\mathbbm{K}_{m} \cap \mathbbm{K}_{n} = \mathbbm{K}_{\operatorname{gcd}(m,n)}$ is a cyclotomic field containing $\mathbbm{k}$. It follows that either $\mathbbm{K}_{m} \cap \mathbbm{K}_{n}=\mathbbm{K}_{\operatorname{gcd}(m,n)}=\mathbbm{K}_{m}=\mathbbm{K}_{n}$ or $\mathbbm{K}_{m} \cap \mathbbm{K}_{n}=\mathbbm{K}_{\operatorname{gcd}(m,n)}= \mathbbm{k}$ and hence $\mathbbm{k}_{m}=\mathbbm{k}_{n}= \mathbbm{k}= \mathbbm{Q}$, since the latter is the only real cyclotomic field. Now, this implies $m,n\in\{3,4,6\}$; see also Lemma~\ref{phin2p}(a) below. The other direction is obvious. Claim (b) follows immediately from part (a). \end{proof} \begin{lem}\label{phin2p} Consider $\phi$ on $\{n\in \mathbbm{N}\, |\, n\; \not\equiv \;2 \;(\operatorname{mod} 4)\}$. Then, one has: \begin{itemize} \item[(a)] $\phi(n)/2=1$ if and only if $n\in\{3,4\}$. \item[(b)] $\phi(n)/2 \in \mathcal{P}$ if and only if $n \in \mathcal{S}:=\{8,9,12\} \cup \{2p+1\, | \, p \in \mathcal{P}_{\rm SG}\}$. \end{itemize} \end{lem} \begin{proof} The equivalences follow from the multiplicativity of $\phi$ in conjunction with the identity $\phi(p^{j})\,=\,p^{j-1}\,(p-1)$ for $p\in\mathcal{P}$ and $j\in\mathbbm{N}$. \end{proof} \begin{rem} Let $n\; \not\equiv \;2 \;(\operatorname{mod} 4)$. By Corollary~\ref{cr5}, for $n\geq 3$, the field extension $\mathbbm{k}_{n} / \mathbbm{Q}$ is a Galois extension with Abelian Galois group $G(\mathbbm{k}_{n} / \mathbbm{Q})$ of order $\phi(n)/2$. Using Lemma~\ref{phin2p}, one sees that $G(\mathbbm{k}_{n} / \mathbbm{Q})$ is trivial if and only if $n\in\{1,3,4\}$, and simple if and only if $n\in\mathcal{S}$, with $\mathcal{S}$ as defined in Lemma~\ref{phin2p}(b). \end{rem} \section{The characterization}\label{sec2} The following notions will be of crucial importance. \begin{defi}\label{algdeldef} For a set $\varLambda\subset\mathbbm{R}^2$, we define the following properties: \begin{itemize} \item[(Alg)] $\left[\mathbbm{K}_{\varLambda}:\mathbbm{Q}\right]<\infty$. \item[(Aff)] For all $F\in\mathcal{F}(\mathbbm{K}_{\varLambda})$, there is a non-singular affine transformation $\Psi\!:\, \mathbbm{R}^{2} \rightarrow \mathbbm{R}^{2}$ such that $\Psi(F)\subset \varLambda$. \end{itemize} Moreover, $\varLambda$ is called \emph{degenerate} when $\mathbbm{K}_{\varLambda}\subset \mathbbm{R}$; otherwise, $\varLambda$ is non-degenerate. \end{defi} \begin{rem}\label{mrs} If $\varLambda\subset\mathbbm{R}^2$ satisfies property~(Alg), then one has $\left[\mathbbm{k}_{\varLambda}:\mathbbm{Q}\right]<\infty$, i.e., $\mathbbm{k}_{\varLambda}$ is a real algebraic number field. \end{rem} Before we turn to examples of planar sets $\varLambda$ having properties (Alg) and (Aff), let us prove the central result of this text, where we use arguments similar to the ones used by Gardner and Gritzmann in the proof of~\cite[Theorem 4.1]{GG}. \begin{theorem}\label{charac} Let $\varLambda\subset\mathbbm{R}^2$ be non-degenerate with property~(Aff). Further, let $m\in\mathbbm{N}$ with $m\geq 3$. The following statements are equivalent: \begin{itemize} \item[(i)] There is an affinely regular $m$-gon in $\varLambda$. \item[(ii)] $\mathbbm{k}_{m}\subset\mathbbm{k}_{\varLambda}$. \end{itemize} If $\varLambda$ additionally fulfils property~(Alg), then it only contains affinely regular $m$-gons for finitely many values of $m$. \end{theorem} \begin{proof} For (i) $\Rightarrow$ (ii), let $P$ be an affinely regular $m$-gon in $\varLambda$.
There is then a non-singular affine transformation $\Psi \!:\, \mathbbm{R}^2 \rightarrow \mathbbm{R}^2$ with $\Psi(R_{m}) = P$, where $R_{m}$ is the regular $m$-gon with vertices given in complex form by $1, \zeta_{m}, \dots, \zeta_{m}^{m - 1}$. If $m\in\{3,4,6\}$, condition (ii) holds trivially. Suppose $6\neq m\geq 5$. The pairs $\{1,\zeta_{m}\}$, $\{\zeta_{m}^{-1},\zeta_{m}^{2}\}$ lie on parallel lines and so do their images under $\Psi$. Therefore, $$\frac{\vert \zeta_{m}^{2} - \zeta_{m}^{-1} \vert}{\vert \zeta_{m} - 1 \vert} = \frac{\vert \Psi(\zeta_{m}^{2}) - \Psi(\zeta_{m}^{-1}) \vert}{\vert \Psi(\zeta_{m}) - \Psi(1) \vert}\,.$$ Moreover, since $\Psi(\zeta_{m}^{2}) - \Psi(\zeta_{m}^{-1})$ and $\Psi(\zeta_{m}) - \Psi(1)$ are elements of $\varLambda -\varLambda$ and since $\vert z\vert^2=z\bar{z}$ for $z\in\mathbbm{C}$, we get the relation $$(1 + \zeta_{m} + \bar{\zeta}_{m})^2 = (1 + \zeta_{m} + \zeta_{m}^{-1})^2 = \frac{\vert \zeta_{m}^{2} - \zeta_{m}^{-1} \vert^2}{\vert \zeta_{m} - 1 \vert^2} = \frac{\vert \Psi(\zeta_{m}^{2}) - \Psi(\zeta_{m}^{-1}) \vert^2}{\vert \Psi(\zeta_{m}) - \Psi(1) \vert^2} \in \mathbbm{k}_{\varLambda}\,.$$ The pairs $\{\zeta_{m}^{-1},\zeta_{m}\}$, $\{\zeta_{m}^{-2},\zeta_{m}^{2}\}$ also lie on parallel lines. An argument similar to that above yields $$ (\zeta_{m} + \bar{\zeta}_{m})^2 = (\zeta_{m} + \zeta_{m}^{-1})^2 = \frac{\vert \zeta_{m}^{2} - \zeta_{m}^{-2} \vert^2}{\vert \zeta_{m} - \zeta_{m}^{-1} \vert^2} \in \mathbbm{k}_{\varLambda}\,.$$ By subtracting these equations, one gets the relation $$2(\zeta_{m} + \bar{\zeta}_{m}) + 1 \in \mathbbm{k}_{\varLambda}\,,$$ whence $\zeta_{m} + \bar{\zeta}_{m} \in \mathbbm{k}_{\varLambda}$, the latter being equivalent to the inclusion of the fields $ \mathbbm{k}_{m} \subset \mathbbm{k}_{\varLambda}$. For (ii) $\mathbbm{R}ightarrow$ (i), let $R_{m}$ again be the regular $m$-gon as defined in step (i) $\mathbbm{R}ightarrow$ (ii). Since $m\geq 3$, the set $\{1,\zeta_{m}\}$ is an $\mathbbm{R}$-basis of $\mathbbm{C}$. Since $\varLambda$ is non-degenerate, there is an element $\tau\in\mathbbm{K}_{\varLambda}$ with non-zero imaginary part. Hence, one can define an $\mathbbm{R}$-linear map $L\! :\, \mathbbm{R}^2\rightarrow \mathbbm{R}^2$ as the linear extension of $1 \mapsto 1$ and $\zeta_{m} \mapsto \tau$. Since $\{1,\tau\}$ is an $\mathbbm{R}$-basis of $\mathbbm{C}$ as well, this map is non-singular. Since $\mathbbm{k}_{m}\subset\mathbbm{k}_{\varLambda}$ and since $\{1,\zeta_{m}\}$ is a $\mathbbm{k}_{m}$-basis of $\mathbbm{K}_{m}$ (cf. Corollary~\rightf{cr5}), the vertices of $L(R_{m})$, i.e., $L(1), L(\zeta_{m}), \dots, L(\zeta_{m}^{m - 1})$, lie in $\mathbbm{K}_{\varLambda}$, whence $L(R_{m})$ is a polygon in $\mathbbm{K}_{\varLambda}$. By property~(Aff), there is a non-singular affine transformation $\Psi\! :\, \mathbbm{R}^2 \rightarrow \mathbbm{R}^2$ such that $\Psi(L(R_{m}))$ is a polygon in $\varLambda$. Since compositions of non-singular affine transformations are non-singular affine transformations again, $\Psi(L(R_{m}))$ is an affinely regular $m$-gon in $\varLambda$. For the additional statement, note that, since $\varLambda$ has property~(Alg), one has $[\mathbbm{k}_{\varLambda}:\mathbbm{Q}]<\infty$ by Remark~\rightf{mrs}. Thus, $\mathbbm{k}_{\varLambda}/\mathbbm{Q}$ has only finitely many intermediate fields. The assertion now follows immediately from condition (ii) in conjunction with Corollary~\rightf{unique}, Remark~\rightf{unrem} and Lemma~\rightf{unique2}. 
\end{proof} Let $\mathbbm{L}$ be an imaginary algebraic number field with $\overline{\mathbbm{L}}=\mathbbm{L}$ and let $\mathcal{O}_{\mathbbm{L}}$ be the ring of integers in $\mathbbm{L}$. Then, every translate $\varLambda$ of $\mathbbm{L}$ or $\mathcal{O}_{\mathbbm{L}}$ is non-degenerate and satisfies the properties (Alg) and (Aff). To this end, we first show that in both cases one has $\mathbbm{K}_{\varLambda}=\mathbbm{L}$. If $\varLambda$ is a translate of $\mathbbm{L}$, this follows immediately from the calculation $$ \mathbbm{K}_{\varLambda}=\mathbbm{K}_{\mathbbm{L}}=\mathbbm{Q}(\mathbbm{L}\cup\overline{\mathbbm{L}})=\mathbbm{L}\,. $$ If $\varLambda$ is a translate of $\mathcal{O}_{\mathbbm{L}}$, one has to observe that $$ \mathbbm{K}_{\varLambda}=\mathbbm{K}_{\mathcal{O}_{\mathbbm{L}}}=\mathbbm{Q}(\mathcal{O}_{\mathbbm{L}}\cup\overline{\mathcal{O}_{\mathbbm{L}}})=\mathbbm{L}\,, $$ since $\overline{\mathbbm{L}}=\mathbbm{L}$ implies $\overline{\mathcal{O}_{\mathbbm{L}}}=\mathcal{O}_{\mathbbm{L}}$. In the first case, property (Aff) is evident, whereas, if $\varLambda$ is a translate of $\mathcal{O}_{\mathbbm{L}}$, property~(Aff) follows from the fact that there is always a $\mathbbm{Z}$-basis of $\mathcal{O}_{\mathbbm{L}}$ that is simultaneously a $\mathbbm{Q}$-basis of $\mathbbm{L}$. Thus, if $F\subset \mathbbm{L}$ is a finite set, then a suitable translate of $aF$ is contained in $\varLambda$, where $a$ is defined as the least common multiple of the denominators of the $\mathbbm{Q}$-coordinates of the elements of $F$ with respect to a $\mathbbm{Q}$-basis of $\mathbbm{L}$ that is simultaneously a $\mathbbm{Z}$-basis of $\mathcal{O}_{\mathbbm{L}}$. Hence, for these two examples, property (Aff) may be replaced by the stronger property \begin{itemize} \item[(Hom)] For all $F\in\mathcal{F}(\mathbbm{K}_{\varLambda})$, there is a homothety $h\!:\, \mathbbm{R}^{2} \rightarrow \mathbbm{R}^{2}$ such that $h(F)\subset \varLambda$\,. \end{itemize} Thus, we have obtained the following consequence of Theorem~\rightf{charac}. \begin{coro}\label{numcor} Let $\mathbbm{L}$ be an imaginary algebraic number field with $\overline{\mathbbm{L}}=\mathbbm{L}$ and let $\mathcal{O}_{\mathbbm{L}}$ be the ring of integers in $\mathbbm{L}$. Let $\varLambda$ be a translate of $\mathbbm{L}$ or a translate of $\mathcal{O}_{\mathbbm{L}}$. Further, let $m\in\mathbbm{N}$ with $m\geq 3$. Denoting the maximal real subfield of $\mathbbm{L}$ by $\mathbbm{l}$, the following statements are equivalent: \begin{itemize} \item[(i)] There is an affinely regular $m$-gon in $\varLambda$. \item[(ii)] $\mathbbm{k}_{m}\subset\mathbbm{l}$. \end{itemize} Further, $\varLambda$ only contains affinely regular $m$-gons for finitely many values of $m$. \end{coro} \begin{rem} In particular, Corollary~\rightf{numcor} applies to translates of imaginary cyclotomic fields and their rings of integers, with $\mathbbm{l}=\mathbbm{k}_{n}$ for a suitable $n\geq 3$; cf. Facts~\rightf{gau} and~\rightf{p1} and also compare the equivalences of Corollary~\rightf{th1mod} below. \end{rem} \section{Application to cyclotomic model sets} Remarkably, there are Delone subsets of the plane satisfying properties~(Alg) and (Hom). These sets were introduced as \emph{algebraic Delone sets} in~\cite[Definition 4.2]{H}. Note that algebraic Delone sets are always non-degenerate, since this is true for all relatively dense subsets of the plane. 
It was shown in~\cite[Proposition 4.15]{H} that the so-called \emph{cyclotomic model sets} $\varLambda$ are examples of algebraic Delone sets; cf. Section~4 of~\cite{H} and~\cite[Definition 4.6]{H} in particular for the definition of cyclotomic model sets. Any cyclotomic model set $\varLambda$ is contained in a translate of $\mathcal{O}_n$, where $n\geq 3$, in which case the $\mathbbm{Z}$-module $\mathcal{O}_n$ is called the \emph{underlying $\mathbbm{Z}$-module} of $\varLambda$. With the exception of the crystallographic cases of translates of the square lattice $\mathcal{O}_4$ and translates of the triangular lattice $\mathcal{O}_3$, cyclotomic model sets are aperiodic (i.e., they have no translational symmetries) and have long-range order; cf.~\cite[Remarks 4.9 and 4.10]{H}. Well-known examples of cyclotomic model sets with underlying $\mathbbm{Z}$-module $\mathcal{O}_n$ are the vertex sets of aperiodic tilings of the plane like the Ammann-Beenker tiling~\cite{am,bj,ga} ($n=8$), the T\"ubingen triangle tiling~\cite{bk1,bk2} ($n=5$) and the shield tiling~\cite{ga} ($n=12$); cf.~\cite[Example 4.11]{H} for a definition of the vertex set of the Ammann-Beenker tiling and see~Figure~\rightf{fig:ab2} and~\cite[Figure 1]{H} for illustrations. For further details and illustrations of the examples of cyclotomic model sets mentioned above, we refer the reader to~\cite[Section 1.2.3.2]{H2}. Clearly, any cyclotomic model set $\varLambda$ with underlying $\mathbbm{Z}$-module $\mathcal{O}_n$ satisfies \begin{equation}\label{eq1} \mathbbm{K}_{\varLambda}\subset\mathbbm{Q}(\mathcal{O}_n\cup\overline{\mathcal{O}_n})= \mathbbm{K}_{n}\,,\end{equation} whence $\mathbbm{k}_{\varLambda}\subset\mathbbm{k}_{n}$. It was shown in ~\cite[Lemma 4.14]{H} that cyclotomic model sets $\varLambda$ with underlying $\mathbbm{Z}$-module $\mathcal{O}_n$ even satisfy the following stronger version of property (Hom) above: \begin{itemize} \item[(\underline{Hom})] For all $F\in\mathcal{F}(\mathbbm{K}_n)$, there is a homothety $h\!:\, \mathbbm{R}^{2} \rightarrow \mathbbm{R}^{2}$ such that $h(F)\subset \varLambda$\,. \end{itemize} This property enables us to prove the following characterization. \begin{figure} \caption{A central patch of the eightfold symmetric Ammann-Beenker tiling of the plane with squares and rhombi, both having edge length $1$. Therein, an affinely regular $6$-gon is marked.} \label{fig:ab2} \end{figure} \begin{coro}\label{th1mod} Let $m,n\in \mathbbm{N}$ with $m,n\geq 3$. Further, let $\varLambda$ be a cyclotomic model set with underlying $\mathbbm{Z}$-module $\mathcal{O}_n$. The following statements are equivalent: \begin{itemize} \item[(i)] There is an affinely regular $m$-gon in $\varLambda$. \item[(ii)] $\mathbbm{k}_{m}\subset\mathbbm{k}_{n}$. \item[(iii)] $m \in \{3,4,6\}$, or $\mathbbm{K}_{m}\subset \mathbbm{K}_{n}$. \item[(iv)] $m \in \{3,4,6\}$, or $m|n$, or $m=2d$ with $d$ an odd divisor of $n$. \item[(v)] $m \in \{3,4,6\}$, or $\mathcal{O}_{m}\subset \mathcal{O}_{n}$. \item[(vi)] $\thinspace\scriptstyle{\mathcal{O}}\displaystyle_{m}\subset\thinspace\scriptstyle{\mathcal{O}}\displaystyle_{n}$. \end{itemize} \end{coro} \begin{proof} Direction (i) $\mathbbm{R}ightarrow$ (ii) is an immediate consequence of Theorem~\rightf{charac} in conjunction with Relation~\eqref{eq1}. For direction (ii) $\mathbbm{R}ightarrow$ (i), let $R_{m}$ again be the regular $m$-gon as defined in step (i) $\mathbbm{R}ightarrow$ (ii) of Theorem~\rightf{charac}. 
Since $m,n\geq 3$, the sets $\{1,\zeta_{m}\}$ and $\{1,\zeta_{n}\}$ are $\mathbbm{R}$-bases of $\mathbbm{C}$. Hence, one can define an $\mathbbm{R}$-linear map $L\! :\, \mathbbm{R}^2\rightarrow \mathbbm{R}^2$ as the linear extension of $1 \mapsto 1$ and $\zeta_{m} \mapsto \zeta_{n}$. Clearly, this map is non-singular. Since $\mathbbm{k}_{m}\subset\mathbbm{k}_{n}$ and since $\{1,\zeta_{m}\}$ is a $\mathbbm{k}_{m}$-basis of $\mathbbm{K}_{m}$ (cf. Corollary~\rightf{cr5}), the vertices of $L(R_{m})$, i.e., $L(1), L(\zeta_{m}), \dots, L(\zeta_{m}^{m - 1})$, lie in $\mathbbm{K}_{n}$, whence $L(R_{m})$ is a polygon in $\mathbbm{K}_{n}$. Because $\varLambda$ has property~(\underline{Hom}), there is a homothety $h\! :\, \mathbbm{R}^2 \rightarrow \mathbbm{R}^2$ such that $h(L(R_{m}))$ is a polygon in $\varLambda$. Since homotheties are non-singular affine transformations, $h(L(R_{m}))$ is an affinely regular $m$-gon in $\varLambda$. As an immediate consequence of Lemma~\rightf{unique2}(b), we get the equivalence (ii) $\Leftrightarrow$ (iii). Conditions (iii) and (iv) are equivalent by Lemma~\rightf{incl}. Finally, the equivalences (iii) $\Leftrightarrow$ (v) and (ii) $\Leftrightarrow$ (vi) follow immediately from Fact~\rightf{p1}. \end{proof} Although the equivalence (i) $\Leftrightarrow$ (iv) in Corollary~\rightf{th1mod} is fully satisfactory, the following consequence deals with the two cases where condition (ii) can be used more effectively. \begin{coro}\label{cor2} Let $m,n\in \mathbbm{N}$ with $m,n\geq 3$. Further, let $\varLambda$ be a cyclotomic model set with underlying $\mathbbm{Z}$-module $\mathcal{O}_n$. Consider $\phi$ on $\{n\in \mathbbm{N}\, |\, n\; \not\equiv \;2 \;(\operatorname{mod} 4)\}$. Then, one has: \begin{itemize} \item[(a)] If $n\in\{3,4\}$, there is an affinely regular $m$-gon in $\varLambda$ if and only if $m \in \{3,4,6\}$. \item[(b)] If $n\in\mathcal{S}$, there is an affinely regular $m$-gon in $\varLambda$ if and only if $$\left\{ \begin{array}{ll} m \in \{3,4,6,n\}, & \mbox{if $n=8$ or $n=12$,}\\ m \in \{3,4,6,n,2n\}, & \mbox{otherwise.} \end{array}\right. $$ \end{itemize} \end{coro} \begin{proof} By Lemma~\rightf{phin2p}(a), $n\in\{3,4\}$ is equivalent to $\phi(n)/2=1$; thus, condition (ii) of Corollary~\rightf{th1mod} specializes to $\mathbbm{k}_{m} = \mathbbm{Q}$, the latter being equivalent to $\phi(m)=2$, which means $m\in\{3,4,6\}$; cf. Corollary \rightf{cr5}. This proves part~(a). By Lemma~\rightf{phin2p}(b), $n\in\mathcal{S}$ is equivalent to $\phi(n)/2\in\mathcal{P}$. By Corollary \rightf{cr5}, this shows that $[\mathbbm{k}_{n}:\mathbbm{Q}]= \phi(n)/2 \in \mathcal{P}$. Hence, by condition (ii) of Corollary~\rightf{th1mod}, one either gets $\mathbbm{k}_{m}=\mathbbm{Q}$ or $\mathbbm{k}_{m}=\mathbbm{k}_{n}$. The former case implies $m\in\{3,4,6\}$ as in the proof of part (a), while, in the latter case, the assertion follows from Lemma~\rightf{unique2}(a) in conjunction with Corollary~\rightf{unique}. \end{proof}
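The number-theoretic criterion in condition (iv) of Corollary~\rightf{th1mod} is straightforward to evaluate. The following small Python routine is included only as an illustration and is not part of the mathematical development; it lists, for a given $n$, the values of $m$ (up to an arbitrary bound) for which a cyclotomic model set with underlying $\mathbbm{Z}$-module $\mathcal{O}_n$ contains an affinely regular $m$-gon, namely $m\in\{3,4,6\}$, or $m\mid n$, or $m=2d$ with $d$ an odd divisor of $n$.
\begin{verbatim}
def admits_affinely_regular_m_gon(m, n):
    # condition (iv) of Corollary th1mod, for m, n >= 3
    if m in (3, 4, 6) or n % m == 0:
        return True
    # m = 2d with d an odd divisor of n
    return m % 2 == 0 and (m // 2) % 2 == 1 and n % (m // 2) == 0

for n in (5, 8, 12):
    print(n, [m for m in range(3, 40) if admits_affinely_regular_m_gon(m, n)])
# 5 [3, 4, 5, 6, 10]
# 8 [3, 4, 6, 8]
# 12 [3, 4, 6, 12]
\end{verbatim}
The output agrees with Corollary~\rightf{cor2}(b), since $8,12\in\mathcal{S}$ and $5=2\cdot 2+1$ with $2\in\mathcal{P}_{\rm SG}$.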
\begin{ex} As mentioned above, the vertex set $\varLambda_{\rm AB}$ of the Ammann-Beenker tiling is a cyclotomic model set with underlying $\mathbbm{Z}$-module $\mathcal{O}_8$. Since $8\in\mathcal{S}$, Corollary~\rightf{cor2} now shows that there is an affinely regular $m$-gon in $\varLambda_{\rm AB}$ if and only if $m\in\{3,4,6,8\}$; see Figure~\rightf{fig:ab2} for an affinely regular $6$-gon in $\varLambda_{\rm AB}$. The other solutions are rather obvious; in particular, the patch shown also contains the regular $8$-gon $R_8$, given by the $8$th roots of unity. \end{ex} For further illustrations and explanations of the above results, we refer the reader to~\cite[Section 2.3.4.1]{H2} or~\cite[Section 5]{H0}. These references also provide a detailed description of the construction of affinely regular $m$-gons in cyclotomic model sets, given that they exist. \section{Application to discrete tomography of cyclotomic model sets} \emph{Discrete tomography} is concerned with the inverse problem of retrieving information about some finite object from information about its slices; cf.~\cite{G,GG,H0,H,H2} and also see the references therein. A typical example is the \emph{reconstruction} of a finite point set from its ({\em discrete parallel}\/) {\em $X$-rays} in a small number of directions. In the following, we restrict ourselves to the planar case. \begin{defi}\label{xray..} Let $F\in \mathcal{F}(\mathbbm{R}^{2})$, let $u\in \mathbb{S}^{1}$ be a direction and let $\mathcal{L}_{u}$ be the set of lines in direction $u$ in $\mathbbm{R}^{2}$. Then, the ({\em discrete parallel}\/) {\em X-ray} of $F$ {\em in direction} $u$ is the function $X_{u}F: \mathcal{L}_{u} \rightarrow \mathbbm{N}_{0}:=\mathbbm{N} \cup\{0\}$, defined by $$X_{u}F(\ell) := \operatorname{card}(F \cap \ell\,) =\sum_{x\in \ell} \mathbbm{1}_{F}(x)\,.$$ \end{defi} In~\cite{H}, we studied the problem of \emph{determining} convex subsets of algebraic Delone sets $\varLambda$ by $X$-rays. Solving this problem amounts to finding small sets $U$ of suitably prescribed $\varLambda$-directions with the property that different convex subsets of $\varLambda$ cannot have the same $X$-rays in the directions of $U$. More generally, one defines as follows. \begin{defi} Let $\mathcal{E}\subset \mathcal{F}(\mathbbm{R}^{2})$. Further, let $U\subset\mathbb{S}^{1}$ be a finite set of directions. We say that $\mathcal{E}$ is {\em determined} by the $X$-rays in the directions of $U$ if, for all $F,F' \in \mathcal{E}$, one has $$ (X_{u}F=X_{u}F'\;\,\forall u \in U) \; \Longrightarrow\; F=F'\,. $$ \end{defi} Let $\varLambda\subset\mathbbm{R}^2$ be a Delone set and let $U\subset \mathbb{S}^{1}$ be a set of two or more pairwise non-parallel $\varLambda$-directions. Suppose that there is a $U$-polygon $P$ in $\varLambda$. Partition the vertices of $P$ into two disjoint sets $V,V'$, where the elements of these sets alternate round the boundary of $P$. Since $P$ is a $U$-polygon, each line in the plane parallel to some $u\in U$ that contains a point in $V$ also contains a point in $V'$. In particular, one sees that $\operatorname{card}(V)=\operatorname{card}(V')$. Set $$ C:=(\varLambda\cap P)\setminus (V\cup V') $$ and, further, $F:=C\cup V$ and $F':=C\cup V'$. Then, $F$ and $F'$ are different convex subsets of $\varLambda$ with the same $X$-rays in the directions of $U$. We have just proven direction (i) $\mathbbm{R}ightarrow$ (ii) of the following equivalence, which particularly applies to cyclotomic model sets, since any cyclotomic model set is an algebraic Delone set by~\cite[Proposition 4.15]{H}. \begin{theorem}\cite[Theorem 6.3]{H}\label{characungen} Let $\varLambda$ be an algebraic Delone set and let $U\subset \mathbb{S}^{1}$ be a set of two or more pairwise non-parallel $\varLambda$-directions. The following statements are equivalent: \begin{itemize} \item[(i)] $\mathcal{C}(\varLambda)$ is determined by the $X$-rays in the directions of $U$.
\item[(ii)] There is no $U$-polygon in $\varLambda$. \end{itemize} \end{theorem} \begin{rem}\label{trivrem} Trivially, any affinely regular $m$-gon $P$ in $\varLambda$ with $m$ even is a $U$-polygon in $\varLambda$ with respect to any set $U\subset\mathbb{S}^1$ of pairwise non-parallel directions having the property that each element of $U$ is parallel to one of the edges of $P$. The set $U$ then consists only of $\varLambda$-directions and, moreover, satisfies $\operatorname{card}(U)\leq m/2$. \end{rem} By combining Corollary~\rightf{th1mod}, direction (i) $\mathbbm{R}ightarrow$ (ii) of Theorem~\rightf{characungen} and Remark~\rightf{trivrem}, one immediately obtains the following consequence. \begin{coro}\label{corofin} Let $n\geq 3$ and let $\varLambda$ be a cyclotomic model set with underlying $\mathbbm{Z}$-module $\mathcal{O}_n$. Suppose that there exists a natural number $k\in\mathbbm{N}$ such that, for any set $U$ of $k$ pairwise non-parallel $\varLambda$-directions, the set $\mathcal{C}(\varLambda)$ is determined by the $X$-rays in the directions of $U$. Then, one has $$ k>\,\max\left\{3,\tfrac{\operatorname{lcm}(n,2)}{2}\right\}\,. $$ \end{coro} \begin{rem} In the situation of Corollary~\rightf{corofin}, the question of existence of a suitable number $k\in\mathbbm{N}$ is a much more intricate problem. So far, it has only been answered affirmatively by Gardner and Gritzmann in the case of translates of the square lattice ($n=4$), whence corresponding results hold for all translates of planar lattices, in particular for translates of the triangular lattice ($n=3$); cf.~\cite[Theorem 5.7(ii) and (iii)]{GG}. More precisely, it is shown there that, for these cases, the number $k=7$ is the smallest among all possible values of $k$. It would be interesting to know if suitable numbers $k\in\mathbbm{N}$ exist for all cyclotomic model sets. \end{rem} Let us finally note the following relation between $U$-polygons and affinely regular polygons. The proof uses a beautiful theorem of Darboux~\cite{D} on second midpoint polygons; cf.~\cite{GM} or~\cite[Ch. 1]{G}. \begin{prop}\cite[Proposition 4.2]{GG}\label{uaffine} If $U\subset \mathbb{S}^{1}$ is a finite set of directions, there exists a $U$-polygon if and only if there is an affinely regular polygon such that each direction of $\,U$ is parallel to one of its edges. \end{prop} \begin{rem} A $U$-polygon need not itself be an affinely regular polygon, even if it is a $U$-polygon in a cyclotomic model set; cf.~\cite[Example 4.3]{GG} for the case of planar lattices and~\cite[Example 2.46]{H2} or~\cite[Example 1]{H0} for related examples in the case of aperiodic cyclotomic model sets. \end{rem} \end{document}
\begin{document} \null \begin{center} {\large {\bf Equivariant Khovanov homology associated with symmetric links}}\\ Nafaa Chbili \begin{footnotesize}\footnote{This work was partially done while I was visiting KAIST, supported by a fellowship from the project BK21. I would like to thank Professor K. Hyoung Ko for his kind hospitality.}\\ Department of Mathematics\\ Korea Advanced Institute of Science and Technology\\ Daejeon, 305-701, Korea\\ E-mail: [email protected] \end{footnotesize} \end{center} \begin{abstract} Let $\Delta$ be a trivial knot in the three-sphere. For every finite cyclic group $G$ of odd order, we construct a $G$-equivariant Khovanov homology with coefficients in the field $\F_{2}$. This homology is an invariant of links up to isotopy in $(S^{3},\Delta)$. Another interpretation is given using the categorification of the Kauffman bracket skein module of the solid torus. Our techniques apply in the case of graphs as well to define an equivariant version of the graph homology which categorifies the chromatic polynomial.\\ \emph{Key words.} Khovanov homology, group action, equivariant Jones polynomial, skein modules.\\ \emph{MSC.} 57M25.\\ \end{abstract} \subsection*{1- Introduction} In the late nineties, M. Khovanov \cite{Kh} introduced an invariant of isotopy classes of oriented links in the three-sphere, now widely known as the Khovanov homology. This invariant takes the form of bigraded homology groups whose polynomial Euler characteristic is the Jones polynomial. Namely, if $L$ is an oriented link and $H^{*,*}(L)$ is its Khovanov homology with integral coefficients, then the Jones polynomial of $L$ is given by the following formula $$V(L)(q)=\displaystyle\sum_{i,j}(-1)^iq^{j}\mbox{rank}H^{i,j}(L),$$ where $V(L)(q)$ is the augmented version of the Jones polynomial, equal to $(q+q^{-1})$ times the original Jones invariant \cite{Jo}. The original definition of Khovanov homology is complicated and overloaded with algebraic details. Viro \cite{Vi} suggests an elementary combinatorial approach to define the Khovanov homology. This approach has proved to be useful in several works. For instance, it was used in \cite{APS} to construct a homology theory for framed links in $I-$bundles over surfaces. This theory categorifies the Kauffman bracket skein module \cite{Pr1}.\\ Quantum invariants of links have proved to be a powerful tool in the study of the symmetry of links. For instance, the Jones and the HOMFLY polynomials satisfy certain necessary conditions which helped determine the symmetries of some links \cite{Mu,Tr,Pr2}. Our main goal in this paper is to investigate the behavior of the Khovanov homology of links with $\Z/p\Z-$symmetry.\\ Let $\Delta$ be a trivial knot in $S^3$ and let $L$ be a link in $S^3$ such that $L$ does not intersect $\Delta$. Let $\tilde L$ be the covering link of $L$ in the $p-$fold cyclic cover branched over $\Delta$. Obviously, the group $G=\Z/p\Z$ acts on $(S^3,\tilde L)$. Let $D$ be a diagram of the link $L$ and let $\tilde D$ be a symmetric diagram of $\tilde L$. We prove that the action of $G$ extends naturally to the Khovanov chain complex of $\tilde D$ with coefficients in $\F_2$. The homology of the quotient chain complex is called here the \emph{$G-$equivariant Khovanov homology} of $D$; we shall denote it by $H_G^{*,*}(D)$. Throughout this paper, two links in $(S^3, \Delta)$ are isotopic if they are related by an isotopy of $S^3$ keeping the knot $\Delta$ fixed.
If $E$ is a vector space on which the group $G$ acts, then we set $E^G$ to be the subspace of fixed points under this action.\\ \textbf{Theorem 1.} \emph{If the order of $G$ is odd, then the $G-$equivariant Khovanov homology $H_G^{*,*}$ is an invariant of ambient isotopy of oriented links in $(S^3,\Delta)$. In addition, $H_G^{*,*}(L)$ is isomorphic to the subspace of fixed points $H^{*,*}(\tilde L)^G$. } \\ The polynomial Euler characteristic of $H_G^{*,*}$ is an invariant of ambient isotopy of links in $(S^3,\Delta)$, which we call here the \emph{$G-$equivariant Jones polynomial:} $$V_{G}(L)(q)=\displaystyle\sum_{i,j}(-1)^iq^{j}\mbox{dim}H_G^{i,j}(L).$$ \textbf{Corollary 1.} \emph{If $V_{G}(L)\neq V(\tilde L)$, then the action of $G$ in homology is not trivial}.\\ In \cite{APS}, Asaeda, Przytycki and Sikora constructed a homology theory which categorifies the Kauffman bracket skein modules of $I-$bundles over surfaces. In the case of the solid torus, this homology is an invariant of framed links which associates to each framed link $L$ homology groups $H^{*,*,*}(L)$, where the three indices are integers. Let $L$ be a framed link in the solid torus $S^1 \times I \times I$ and let $D$ be a diagram of $L$ in the annulus. Let $\tilde L$ be the pre-image of $L$ in the $p-$fold cyclic cover of the solid torus. Let $\tilde D$ be a symmetric diagram of $\tilde L $ in the annulus. We prove that the finite cyclic group $G=\Z/p\Z$ acts on the chain complex $(C^{*,*,*}(\tilde D), d)$, where coefficients are taken in $\F_2$. Thus we construct a $G-$equivariant Khovanov homology $H_G^{*,*,*}(D)$ and we prove that this homology defines an invariant of framed links.\\ \textbf{Theorem 2.} \emph{If the order of $G$ is odd, then the $G-$equivariant Khovanov homology $H_G^{*,*,*}$ is an invariant of framed links in the solid torus.} \\ \textbf{Examples.} Computing the Khovanov homology of a link is not an easy task in general. The computation of the equivariant Khovanov homology is even more difficult. We give here some easy examples with $G=\Z/3\Z$.\\ If $L$ is a trivial knot such that $\Delta\cup L$ is a trivial link, then the only non-trivial homology spaces are $H_G^{0,3}(L)=H_G^{0,-3}(L)=\F_2$ and $H_G^{0,1}(L)=H_G^{0,-1}(L)=\F_2$. Since $H^{0,3}(\tilde L)=H^{0,-3}(\tilde L)=\F_2$ and $H^{0,1}(\tilde L)=H^{0,-1}(\tilde L)=(\F_2)^3$, the equivariant homology of $L$ is different from the Khovanov homology of $\tilde L$. The equivariant Jones polynomial of $L$ is different from the Jones polynomial of $\tilde L$, as we have $V_G(L)=q^3+q+q^{-1}+q^{-3}\neq V(\tilde L)=q^3+3q+3q^{-1}+q^{-3}$. In conclusion, the Khovanov homology of $L$, the Khovanov homology of $\tilde L$ and the $G-$equivariant Khovanov homology of $L$ are different.\\ Now, we consider the knot $L$ depicted by the picture below, where the linking number of $\Delta$ and $L$ is equal to 2. The covering link $\tilde L$ is the trefoil knot. Computations show that the $G-$equivariant Khovanov homology of $L$ is equal to the Khovanov homology of $\tilde L$; the non-trivial homology spaces are listed below: $H_G^{0,1}(L)=H_G^{0,3}(L)=H_G^{2,5}(L)=H_G^{2,7}(L)=H_G^{3,7}(L)=H_G^{3,9}(L)=\F_2$. The equivariant Jones polynomial of $L$ is $V_G(L)=-q^9+q^5+q^3+q$, which is equal to the Jones polynomial of $\tilde L$.\\ \begin{center} \includegraphics[width=3cm,height=1.5cm]{khovanov3.eps} \end{center} Here is an outline of our paper. In Section 2, we review the construction of Khovanov homology following \cite{Vi}. Section 3 discusses the Khovanov homology of symmetric links.
In Section 4, we review some basic properties of the transfer map in homology needed in the sequel. The proof of Theorem 1 is given in Section 5. Sections 6 and 7 discuss extensions of our construction to framed links in the solid torus and to graph homology. \subsection*{2- Khovanov homology} In this section, we review the definition of the Khovanov homology of links following the elementary combinatorial construction introduced by Viro \cite{Vi}. Note that coefficients will always be in $\F_2$ and will usually be dropped from the notation except when desired for stress.\\ Let $D$ be a link diagram with $n$ crossings. A \emph{Kauffman state} of $D$ is an assignment of a $+1$ marker or a $-1$ marker to each crossing of $D$. Given a Kauffman state $s$, the crossings of $D$ are smoothed according to the following convention \begin{center} \includegraphics[width=10cm]{zchbili-fig1.eps} \end{center} \begin{center} Figure 1 \end{center} to obtain a collection of circles $D_s$. Let $|s|$ be the number of circles in $D_s$ and let \begin{center}$\sigma(s)= \sharp \{\mbox{$+$1 markers}\}$ $-$ $\sharp \{\mbox{$-$1 markers}\}.$ \end{center} The augmented version of the Kauffman bracket of $D$ is the Laurent polynomial in the indeterminate $A$ given by the following formula: $$\prec D\succ (A)= \displaystyle\sum_{\rm{ states } \;s \;\rm{ of }\; D} (-A)^{\sigma(s)}(-A^2-A^{-2})^{|s|}. $$ An \emph{enhanced Kauffman state} $S$ of $D$ is a Kauffman state $s$ with an assignment of a $+$ or $-$ sign to each circle in $D_s$. We set $\tau(S)$ to be the algebraic sum of signs associated to the circles of $D_s$.\\ If $D$ is given an orientation, then we let $w(D)$ stand for the writhe of $D$. Now, we define: $$\begin{array}{lll} i(S)=&\displaystyle\frac{w(D)-\sigma(s)}{2}& \mbox{ and}\\ &&\\ j(S)=&\displaystyle\frac{3w(D)-\sigma(s)+2\tau(S)}{2}.& \end{array} $$ One may check easily that both $i(S)$ and $j(S)$ are integers. Let $i$ and $j$ be two integers. We define ${\cal S}_D^{i,j}$ to be the set of enhanced states $S$ of $D$ with $i(S)=i$ and $j(S)=j$. The Khovanov chain space $C^{i,j}(D)$ is defined to be the vector space over $\F_{2}$ having ${\cal S}_D^{i,j}$ as a basis.\\ It remains to define the differential. Assume that $v$ is a crossing of $D$; we define the partial differential $d_v$ as follows \begin{center} $\func{d_v^{i,j}}{C^{i,j}(D)}{C^{i+1,j}(D)}{S}{\displaystyle\sum_{\mbox{\tiny {All states S'}}}(S:S')_vS'}$ \end{center} where $(S:S')_v$ is \begin{itemize} \item 1 if $S$ and $S'$ differ only at the crossing $v$, where $S$ has a $+1$ marker, $S'$ has a $-1$ marker, all the common circles in $D_S$ and $D_{S'}$ have the same signs and in a neighborhood of $v$, $S$ and $S'$ are as in figure 2, \item $(S:S')_v$ is zero otherwise. \end{itemize} \begin{center} \includegraphics[width=10cm]{zchbili-fig2.eps} \end{center} \begin{center} Figure 2. \end{center} The differential $d$ is defined by: \begin{center} $\func{d^{i,j}}{C^{i,j}(D)}{C^{i+1,j}(D)}{S}{\displaystyle\sum_{\mbox{\tiny {$v$}}}d_v^{i,j}(S)}$ \end{center} The homology $H^{*,*}(D)$ of the chain complex $(C^{*,*}(D),d^{*,*})$ is called the Khovanov homology of $D$. This homology is preserved under Reidemeister moves. Hence, it is an invariant of ambient isotopy of links. If $L$ is an oriented link in $S^3$, then we denote its Khovanov homology by $H^{*,*}(L)$. As we have mentioned in the introduction, the Jones polynomial of $L$ is obtained as the polynomial Euler characteristic of $H^{*,*}(L)$.\\
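To make the above conventions concrete, we include the following short Python sketch; it is only an illustration for the reader and is not part of Viro's construction. It enumerates enhanced states from pre-computed circle counts and evaluates the polynomial Euler characteristic $\sum_{i,j}(-1)^iq^{j}\dim C^{i,j}(D)$. The input format (a dictionary sending each choice of markers to the number of circles $|s|$) and the Hopf link data are illustrative assumptions of ours.
\begin{verbatim}
from itertools import product
from collections import Counter

def graded_euler_characteristic(circle_count, writhe):
    # returns {j: coefficient of q^j} of sum_S (-1)^{i(S)} q^{j(S)},
    # where circle_count maps each tuple of +1/-1 markers to |s|
    poly = Counter()
    for markers, circles in circle_count.items():
        sigma = sum(markers)
        sign = -1 if ((writhe - sigma) // 2) % 2 else 1      # (-1)^{i(S)}
        for signs in product((1, -1), repeat=circles):       # signs on circles
            tau = sum(signs)
            j = (3 * writhe - sigma + 2 * tau) // 2
            poly[j] += sign
    return {j: c for j, c in poly.items() if c != 0}

# Standard two-crossing diagram of the positive Hopf link: equal markers
# give two circles, opposite markers give one circle; the writhe is 2.
hopf = {(1, 1): 2, (1, -1): 1, (-1, 1): 1, (-1, -1): 2}
print(sorted(graded_euler_characteristic(hopf, writhe=2).items()))
# [(0, 1), (2, 1), (4, 1), (6, 1)], i.e. 1 + q^2 + q^4 + q^6, which is the
# augmented Jones polynomial of the positive Hopf link.
\end{verbatim}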
\textbf{Framed Khovanov homology.} As is the case for the Jones polynomial, it is sometimes more convenient to work with framed links when studying Khovanov homology. Viro \cite{Vi} showed that one may define a Khovanov homology which categorifies the Kauffman bracket polynomial. Let $D$ be a non-oriented link diagram. With the notation introduced above, we set $p(S)=\tau(S)$ and $q(S)=\sigma(S)-2\tau(S)$. Let $C_{p,q}(D)$ be the vector space generated by all enhanced states with $p(S)=p$ and $q(S)=q$. We have a chain complex $(C_{*,*}(D),d)$, where $d: C_{p,q}(D)\longmapsto C_{p-1,q}(D)$ is defined as in the previous paragraph. If $D$ is oriented, then the Khovanov homology $H^{*,*}(D)$ is recovered by shifting the degrees in the homology of $C_{*,*}(D)$. The advantage of this framed version of Khovanov homology is that there is a short exact sequence which categorifies the Kauffman bracket skein relation \cite{Vi}. Let $D$, $D_{0}$ and $ D_{\infty}$ be three link diagrams which are identical except in a small disk where they look as in the following picture\\ \begin{center} \includegraphics[width=2cm,height=1.5cm]{kplus} \hspace{1cm} \includegraphics[width=2cm,height=1.5cm]{kzero} \hspace{1cm} \includegraphics[width=2cm,height=1.5cm]{kinfini} \end{center} \begin{center} {\sc Figure 3.} \end{center} The following short sequence is exact: $$0 \longmapsto C_{p,q}(\includegraphics[width=0.5cm,height=0.5cm]{kinfini}) \stackrel{\alpha}{\longmapsto} C_{p,q-1}(\includegraphics[width=0.5cm,height=0.5cm]{kplus})\stackrel{\beta}{\longmapsto} C_{p,q-2}(\includegraphics[width=0.5cm,height=0.5cm]{kzero}) \longmapsto 0 $$ where $\alpha$ is the chain map defined by: \includegraphics[width=0.5cm,height=0.5cm]{kinfini}$\;\longmapsto\;$\includegraphics[width=0.5cm,height=0.5cm]{negativemarker}\\ and $\beta$ is defined by the following correspondence \begin{center} \includegraphics[width=0.5cm,height=0.5cm]{negativemarker}$\;\longmapsto\;$ 0 \end{center} \begin{center} \includegraphics[width=0.5cm,height=0.5cm]{positivemarker}$\;\longmapsto\;$ \includegraphics[width=0.5cm,height=0.5cm]{kzero}. \end{center} \subsection*{3- Symmetric links and equivariant Khovanov homology} This section is concerned with the natural question of whether the Khovanov homology reflects the symmetry of links. In other words, does the invariance of a link under some finite group action on the three-sphere induce some group action on the Khovanov homology of the link?\\ A link $L$ in $S^3$ is said to be \emph{$p-$periodic} if and only if there exists an orientation-preserving diffeomorphism $\varphi$ of $S^3$ such that $\varphi$ is of order $p$, the set of fixed points of $\varphi$ is a knot disjoint from $L$ and $\varphi(L)=L$. By the positive solution of the Smith conjecture, we may assume without loss of generality that $\varphi$ is a rotation by the angle $2\pi/p$ around a trivial knot. Consequently, a $p-$periodic link admits a planar diagram which is invariant under a planar rotation by the same angle.\\ Let $\Delta$ be a trivial knot and let $L$ be a link in the three-sphere such that $L\cap\Delta=\emptyset$. Let $\tilde L$ be the covering link of $L$ in the $p-$fold cyclic cover branched over $\Delta$. Let $\tilde D$ be a diagram of $\tilde L$ which is invariant under a planar rotation. Such a diagram exists since the link $\tilde L$ is $p-$periodic. Let $D$ be the quotient diagram of $\tilde D$ under the action of the group $G=\Z/p\Z=\langle\varphi\rangle$.
The rest of this section is devoted to the study of the Khovanov chain complex $(C^{*,*}(\tilde D),d^{*,*})$.\\ One can see easily that the action of the rotation on the diagram $\tilde D$ extends naturally to an action of the cyclic group $G$ on the set of enhanced Kauffman states of $\tilde D$. In addition, for every enhanced state $S$ we have: $$ i(\varphi^{k}(S))=i(S) \; \mbox{and}\; j(\varphi^{k}(S))=j(S) \;\; \mbox{ for all $1\leq k \leq p$}. $$ Consequently, the group $G$ acts on the set ${\cal S}_{\tilde D}^{i,j}$. Since ${\cal S}_{\tilde D}^{i,j}$ is a basis for $C^{i,j}(\tilde D)$, this action extends naturally to an action of $G$ on $C^{*,*}(\tilde D)$. It remains now to check that the action of $G$ commutes with the differential.\\ \textbf{Lemma 3.1.} \emph{We have: $\varphi\circ d=d\circ \varphi$.}\\ \emph{Proof:} Let $S$ be an enhanced state. One can easily see that for every crossing $v$ in $\tilde D$ we have $(S:S')_v=(\varphi(S):\varphi(S'))_{\varphi(v)}$. Thus: $$\begin{array}{ll} \varphi(d_v (S))&=\varphi(\displaystyle\sum_{\mbox{\tiny {All states S'}}}(S:S')_vS') \\ &=\displaystyle\sum_{\mbox{\tiny {All states S'}}}(S:S')_v\varphi(S')\\ &= \displaystyle\sum_{\mbox{\tiny {All states S'}}}(\varphi(S):\varphi(S'))_{\varphi(v)}\varphi(S')\\ &= \displaystyle\sum_{\mbox{\tiny {All states T}}}(\varphi(S):T)_{\varphi(v)}T\\ &= d_{\varphi(v)} \circ \varphi(S). \end{array} $$ Finally, since $\varphi$ permutes the crossings of $\tilde D$, we get $\displaystyle\sum_{\mbox{\tiny {crossings v}}}\varphi(d_v^{i,j}(S))=\displaystyle\sum_{\mbox{\tiny {crossings v}}}d_v^{i,j}(\varphi(S))$, which means that $\varphi\circ d=d\circ \varphi$. This ends the proof of the lemma. \fin \\ Let $(\overline{C^{*,*}(\tilde D)},\overline d)$ be the quotient chain complex of $(C^{*,*}(\tilde D), d)$ by the action of $G$. The homology of the quotient chain complex $(\overline{C^{*,*}(\tilde D)},\overline d)$ is called the \emph{$G-$equivariant homology} of $D$. We denote this homology by $H_{G}^{*,*}(D)$.\\ \textbf{Remark 3.1.} Similarly, a framed version of equivariant Khovanov homology can be defined for non-oriented diagrams. We shall denote it here by $H_{*,*}^G $.\\ \textbf{Remark 3.2.} If we consider Khovanov homology with coefficients in $\Z$, as in the original definition \cite{Kh}, we still have an action of the group $G$ on the Khovanov chain groups, but that action does not commute with the differential. This is due to the signs that appear in the definition of the differential. Actually, this is the reason why we choose to work with coefficients in $\F_2$. \subsection*{4- The transfer in homology} In this section, we review some properties of the transfer map. Let $G=\langle\varphi\rangle$ be a finite cyclic group of order $p$ and let $(C^{*},d)$ be a chain complex with coefficients in some field $F$. Assume that $G$ acts on the chain complex $(C^{*},d)$ and set $(\overline {C^{*}},\overline d)$ to be the quotient chain complex. We denote by $\pi$ the canonical surjection with respect to the action of $G$. Let $t$ be the map from $(\overline {C^{*}},\overline d)$ to $(C^{*},d)$ defined by $t(\bar S)= S+\varphi(S)+\dots+\varphi^{p-1}(S)$. The map $t$ induces a map $t_*$ from the homology of $(\overline {C^{*}},\overline d)$ to the homology of $(C^{*},d)$. This map, called the transfer, has been useful in the study of homological properties of topological transformation groups.
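Before recalling the standard properties of the transfer, let us record a toy verification in Python; it is our own illustration and is not taken from \cite{Br}. For a permutation action of $G=\Z/p\Z$ on a basis, the composition $\pi\circ t$ multiplies every element of the quotient by $p$, and over $\F_2$ this is the identity precisely when $p$ is odd, which is the fact used repeatedly in Section 5. The differential plays no role in this elementary check, so it is omitted.
\begin{verbatim}
def pi_after_t(p, shift, chain):
    # Z/pZ acts on the basis {0,...,p*shift-1} by k -> (k + shift) mod p*shift,
    # so every orbit has exactly p elements and the orbit of k is labelled by
    # k mod shift.  `chain` is a set of orbit labels, i.e. an F_2-vector of the
    # quotient complex; we apply the transfer t and then the projection pi.
    N = p * shift
    lifted = {}
    for rep in chain:                        # t: sum over the G-orbit of rep
        for a in range(p):
            x = (rep + a * shift) % N
            lifted[x] = lifted.get(x, 0) ^ 1          # coefficients in F_2
    projected = {}
    for x, c in lifted.items():              # pi: project onto orbit labels
        if c:
            r = x % shift
            projected[r] = projected.get(r, 0) ^ 1
    return {r for r, c in projected.items() if c}

print(pi_after_t(3, shift=4, chain={0, 2}))   # {0, 2}: pi o t = id for odd p
print(pi_after_t(2, shift=4, chain={0, 2}))   # set(): pi o t = 0 for p = 2
\end{verbatim}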
The following properties are extracted from \cite{Br}.\\ \textbf{Theorem 4.1.} \emph{The composition $\pi_*t_*$ is the multiplication by $p$. It is an isomorphism if the field $F$ is of characteristic zero or prime to $p$.}\\ Obviously, the action of $G$ on the chain complex $(C^{*},d)$ induces an action of $G$ on the homology. We have:\\ \textbf{ Theorem 4.2.} If the field $F$ is of characteristic zero or prime to $p$, then:\\ $\begin{array}{ccccl} &\pi_*:& H((C^{*},d))^G & \longrightarrow & H((\overline {C^{*}},\overline d)) \end{array}$ is an isomorphism, as is\\ $\begin{array}{ccccl} &t_*:& H((\overline {C^{*}},\overline d)) & \longrightarrow & H((C^{*},d))^G. \end{array}$ \subsection*{5- Proof of Theorem 1} In this section, we shall prove that if the order of $G$ is odd, then the $G-$equivariant Khovanov homology does not change under Reidemeister moves $R1, R2$ and $R3$. Note that, as we consider isotopy in $(S^3,\Delta)$, we should consider only Reidemeister moves which are performed in a three-ball which does not intersect $\Delta$. \subsubsection*{5.1- Invariance under first Reidemeister move} Let $D$ and $D'$ be two link diagrams which are related by a Reidemeister move $R1$. Assume that $D$ is the diagram in the middle of figure 4 and that $D'$ is the right twisted diagram.\\ \begin{center} \includegraphics[width=3cm,height=1cm]{reidi} \end{center} \begin{center} Figure 4. \end{center} Let $\tilde D$ and $\tilde {D'}$ be the two covering diagrams. Obviously, these two diagrams differ by $p$ Reidemeister moves of type 1 performed along an orbit of the action of $G$.\\ Following Viro \cite{Vi}, if two diagrams differ by a Reidemeister move $R1$, there is a chain map $h_v$ ($v$ is the crossing which appears in $D'$ but not in $D$) between the two complexes which induces an isomorphism in homology. This map is defined by \begin{center} \includegraphics[width=4cm,height=1cm]{r1khovanov} \end{center} The two diagrams $\tilde D$ and $\tilde {D'}$ differ by $p$ Reidemeister moves of type R1. Let us label the crossings which appear in $\tilde {D'}$ and not in $\tilde D$ by $v,\varphi(v),\dots, \varphi^{p-1}(v)$. Now, let $h$ be the chain map $h_v\circ h_{\varphi(v)}\circ\dots\circ h_{\varphi^{p-1}(v)}$.\\ \textbf{Lemma 5.1.} \emph{The linear map $h: C^{*,*}(\tilde D) \longmapsto C^{*,*}(\tilde D') $ induces an isomorphism in homology. In addition, $h$ is $G-$equivariant.}\\ \emph{Proof.} The induced map $h_*$ is an isomorphism in homology because it is a composition of isomorphisms. It is $G-$equivariant due to the two elementary facts: $h_w\circ h_{w'} =h_{w'}\circ h_w$ and $\varphi \circ h_w=h_{\varphi(w)}\circ \varphi$. \fin \\ According to Lemma 5.1, the map $h$ induces a map $\overline h$ from $(\overline{C^{*,*}(\tilde D)},\overline d)$ to $(\overline{C^{*,*}(\tilde D')},\overline d)$. We are going to prove that this map induces an isomorphism in homology.
Note that we have a commutative diagram \\ \begin{center} $ {\begin{array}{ccc} C^{*,*}(\tilde D)&\stackrel{h}{\longrightarrow}& C^{*,*}(\tilde {D'})\\ \Big\downarrow\vcenter{ \rlap {$\pi$}}&& \Big\downarrow\vcenter{ \rlap {$\pi'$}}\\ \overline{C^{*,*}(\tilde D)}&\stackrel{\overline{h}}{\longrightarrow}&\overline{ C^{*,*}(\tilde {D'})} \end{array}}$ \end{center} which induces a commutative diagram in homology\\ \begin{center} ${\begin{array}{ccl} H^{*,*}(\tilde D)&\stackrel{h_{*}}{\longrightarrow}& H^{*,*}(\tilde {D'})\\ \vcenter{ \llap {${t_{*}}$}}{{\Big\uparrow}}\;\;\Big\downarrow\vcenter{ \rlap {$\pi_{*}$}}&& \Big\downarrow\vcenter{ \llap {$\pi'_{*}\;\;$}}\;\vcenter{ \rlap {$\;\;{t'_{*}}$}}{{\Big\uparrow}}\\ H^{*,*}_{G}(D)&\stackrel{\overline{h}_{*}} {\longrightarrow}&H_{G}^{*,*}(D') \end{array}}$ \end{center} where $t_*$ (respectively $t'_*$) stands for the transfer map corresponding to the action of $G$ on $\tilde D$ (respectively $\tilde {D'}$). Since we are working with coefficients in $\F_2$ and the order of $G$ is odd, then both $\pi_*t_*$ and $\pi'_*t'_*$ are isomorphisms. In addition, the commutative diagram implies that $h_*t_*=t'_*\overline h_*$. Using the fact that $h_*$ is an isomorphism we should be able to conclude that $\overline h_*$ is injective. A similar argument using the fact that $\pi_*$ and $\pi'_*$ are onto implies that $\overline h_*$ is surjective. Finally, the equivariant homologies of $D$ and $D'$ are isomorphic. \\ The invariance under the left twisted first Reidemeister move is proved in a similar way. \subsubsection*{5.2- Invariance under the second Reidemeister move} We will switch to framed links for a while. Let $D$ and $D'$ be two link diagrams related by a single second Reidemeister move and assume that $D'$ is the one that has more crossings, see figure 5. Let $\tilde D$ and $\tilde D'$ be the two covering diagrams. We shall prove that the equivariant homologies are isomorphic. Here we consider the framed version $H_{*,*}^G (D)$ and $H_{*,*}^G ( D')$. \begin{center} \includegraphics[width=4cm,height=1cm]{r2khovanov}\\ Figure 5. \end{center} Following \cite{APS}, we define two maps $\overline \alpha$ and $\overline \beta$ $$\func{\overline \beta}{C_{p,q-2}(\includegraphics[width=0.5cm,height=0.5cm]{kauffzero})}{C_{p,q-1}(\includegraphics[width=0.5cm,height=0.5cm]{kplus})}{ \includegraphics[width=0.5cm,height=0.5cm]{kauffzero}} {\includegraphics[width=0.5cm,height=0.5cm]{positivemarker}} $$ and $$\doublefunc{\overline \alpha}{C_{p,q-1}(\includegraphics[width=0.5cm,height=0.5cm]{kplus})}{C_{p,q}(\includegraphics[width=0.5cm,height=0.5cm]{kauffinfini})}{\includegraphics[width=0.5cm,height=0.5cm]{positivemarker}}{0} {\includegraphics[width=0.5cm,height=0.5cm]{negativemarker}}{\includegraphics[width=0.5cm,height=0.5cm]{kauffinfini}}$$ Now, we set: $\gamma=\overline \alpha d_v \overline \beta$ which is chain map from $C_{p,q}(\includegraphics[width=0.5cm,height=0.5cm]{kauffzero})\longmapsto C_{p-1,q+2}(\includegraphics[width=0.5cm,height=0.5cm]{kauffinfini})$. 
We define two maps $f$ and $g$ as follows: $$\func{f}{C_{p,q}(\includegraphics[width=0.5cm,height=0.5cm]{kauffzero})}{C_{p,q}(D')}{\includegraphics[width=1cm,height=1cm]{r2khovanov0}} {\includegraphics[width=1cm,height=1cm]{r2khovanov2}}$$ and $$\func{g}{C_{p,q}(\includegraphics[width=0.5cm,height=0.5cm]{kauffinfini})}{C_{p+1,q-2}(D')} {\includegraphics[width=1cm,height=1cm]{r2khovanov1}}{\includegraphics[width=1cm,height=1cm]{r2khovanov3}.}$$ Let $\rho$ be the chain map $\rho=f+g\circ\gamma: C_{p,q}(D)\longmapsto C_{p,q}(D')$.\\ \textbf{Theorem 5.2. \cite{APS}} \emph{ The map $\rho$ induces an isomorphism in homology.}\\ The diagrams $\tilde D$ and $\tilde D'$ differ by $p$ Reidemeister moves of type 2. To each move we associate a map $\rho$ as explained earlier. Let us label these maps by $\rho_v,\rho_{\varphi(v)},\dots,\rho_{\varphi^{p-1}(v)}$. By composing these maps we define a map $\Phi=\rho_v\circ\rho_{\varphi(v)}\circ\dots\circ\rho_{\varphi^{p-1}(v)}: C_{p,q}(\tilde D)\longmapsto C_{p,q}(\tilde {D'})$. It is easy to see that we have $\rho_w\circ \rho_{w'} =\rho_{w'}\circ \rho_w$ and $\varphi \circ \rho_w=\rho_{\varphi(w)}\circ \varphi$. Consequently, $\Phi$ is $G-$equivariant. Thus, it induces a map $\overline \Phi$ between the quotient chain complexes. Arguments similar to those used in the case of the invariance under first Reidemeister move should enable us to conclude that $\overline \Phi$ induces an isomorphism between the equivariant Khovanov homologies of $D$ and $D'$. \subsubsection*{5.3- Invariance under the third Reidemeister move} In this paragraph, we shall prove the invariance of our equivariant homology under the third Reidemeister move. Once again, we are going to work with the framed version. Let $D$ and $D'$ be two diagrams which differ by a single third Reidemeister move as in the following picture \begin{center} \includegraphics[width=3cm,height=1cm]{r3khovanov}\\ Figure 6. \end{center} Our proof is based both on the construction in \cite{APS} and the techniques we have developed in the previous paragraph. It could be helpful if the reader has a copy of \cite{APS} with him. Let us consider the following diagrams $D_{+}=$ \includegraphics[width=0.75cm,height=0.5cm]{r3khovanov4}, $D_{-}=$ \includegraphics[width=0.75cm,height=0.5cm]{r3khovanov6} and $D_{++-}=$ \includegraphics[width=0.75cm,height=0.5cm]{r3khovanov5}. We define diagrams $D'_{+}$, $D'_{-}$ and $D'_{++-}$ in the same way. Note that the signs in the subscripts refer to the marker associated to the considered crossing.\\ The diagrams $D_{++-}$ and $D_{+}$ differ by a single Reidemeister move of type 2. As we have explained in the previous paragraph there exists a map $\Phi: C_{*,*}(\tilde D_{++-} )\longmapsto C_{*,*}(\tilde D_{+} ) $ which is $G-$equivariant and induces an isomorphism in homology. Now, let $C'_{*,*}(\tilde {D_{+}})=\Phi(C_{*,*}(\tilde D_{++-}) )$ and consider the map $\tilde i: C'_{*,*}(\tilde {D_{+}})\longmapsto C_{*,*}(\tilde {D_{+}}) $. We set $\tilde \beta: C_{*,*}(\tilde D) \longmapsto C_{*,*}(\tilde D_+)$ to be the map define by: \begin{center} $\includegraphics[width=0.5cm,height=0.5cm]{positivemarker} \dots\includegraphics[width=0.5cm,height=0.5cm]{positivemarker} \longmapsto \includegraphics[width=0.5cm,height=0.5cm]{kauffzero}\dots \includegraphics[width=0.5cm,height=0.5cm]{kauffzero}$ \end{center} and zero otherwise. Let $C_{*,*}'(\tilde D)=\tilde \beta^{-1}(C'_{*,*}(\tilde D_+))$. 
We have the following \cite{APS}\\ \textbf{Lemma 5.3.} \emph{The maps $\tilde i: C'_{*,*}(\tilde D_+) \longmapsto C_{*,*}(\tilde D_+) $ (resp. $\tilde {i'}: C'_{*,*}(\tilde D'_+) \longmapsto C_{*,*}(\tilde D'_+) $ ) and $\tilde j: C'_{*,*}(\tilde D) \longmapsto C_{*,*}(\tilde D)$ (resp. $\tilde {j'}: C'_{*,*}(\tilde {D'}) \longmapsto C_{*,*}(\tilde {D'}) $) induce isomorphisms in homology. }\\ \emph{Proof.} The induced map $\tilde{i}_*$ is an isomorphism in homology because it is a composition of isomorphisms, see \cite[Proposition 11.10]{APS}. Same argument applies for $\tilde{j}_*$. \fin\\ Similarly to the case of the second Reidemeister move discussed earlier, by composition of the maps of type $\rho_{III}$ defined in \cite{APS} we should be able to construct a map $\tilde \Psi: C'_{*,*}(\tilde D) \longmapsto C'_{*,*}(\tilde D')$ which is $G-$equivariant and induces an isomorphism in homology. This map induces an isomorphism in homology $\overline \Psi$ between the homology of $\overline {C'_{*,*}(\tilde D)}$ and the homology of $\overline {C'_{*,*}(\tilde D')}$. Consequently, $\overline {\tilde {j'}_*}\circ\overline {\Psi_*} \circ \overline{\tilde {j}_*^{-1}}: H^G_{*,*}(D)\longmapsto H^G_{*,*}(D')$ is an isomorphism. This completes the proof of the invariance under the third Reidemeister move.\\ Finally we use Theorem 4.2 to prove that $H_G^{*,*}(L)$ is isomorphic to $H^{*,*}(\tilde L)^G$. This completes the proof of Theorem 1. \subsection*{6- Equivariant Khovanov homology for framed links in the solid torus} In this section, we show how our equivariant construction can be described in the context of the categorification of the Kauffman bracket skein module of the solid torus \cite{APS}. Everything here is done similarly to what we have discussed in the previous sections. Consequently, we are going to omit the details and describe things briefly. We first review the notion of skein modules.\\ Let $M$ be an oriented compact three-manifold. A framed link in $M$ is an embedding of a finite family of annuli into the interior of $M$. Let $\cal L$ be the set of all isotopy classes of framed links in $M$ including the empty link. Let $\Z[A^{\pm}]$$ {\cal L}$ be the free module generated by $\cal L$. The Kauffman bracket skein module of $M$, denoted here by $\cal K$$(M)$, is defined as the quotient of $\Z[A^{\pm}]$${\cal L}$ by the smallest submodule generated by all elements of the following form\\ 1) $L\cup \bigcirc + (A^2+A^{-2}) L $, where $L$ is any framed link in $M$, and $L \cup \bigcirc$ is the disjoint union of $L$ with a trivial component,\\ 2) $L_+ - AL_0-A^{-1}L_{\infty}$, where $L_+$, $L_0$ and $L_{\infty}$ are three links which are identical except in a small ball where they look like in figure 3.\\ The existence and the uniqueness of the Jones polynomial is equivalent to the fact that ${\cal{K}}(S^3)$ is isomorphic to $\Z[A^{\pm}]$ with the empty link as a basis. If $F$ is an oriented surface, then the skein module of $F\times I$ admits an algebra structure \cite{Bu}. In particular, the skein algebra of the solid torus $S^1 \times I \times I$ is isomorphic to the polynomial algebra $\Z[A^{\pm}][z]$, where $z$ is represented by a nontrivial curve in the annulus as in the following picture \null \begin{center} \begin{picture}(0,0) \put(0,0){\circle{10}} \put(0,0){\circle{20}} \put(12,0){$z$} \put(0,0){\circle{40}} \end{picture} \end{center} Let $L$ be a link in the solid torus. 
Let $D$ be a diagram of $L$ in the annulus and let $(C^{*,*,*}(D),d)$ be the chain complex of $D$ with coefficients in $\F_2$. The skein module of the solid torus has a basis made up of links of the form $z^n$ ($n$ parallel copies of $z$). Thus, we shall use $n$ as the third index instead of $z^n$, which is used in the original definition \cite{APS}. The homology of $(C^{*,*,*}(D),d)$ defines an invariant of framed links in the solid torus.\\ Let $\tilde L$ be the covering link of $L$ in the $p-$fold cyclic cover of the solid torus. Let $\tilde D$ be a symmetric diagram of $\tilde L$. Arguments similar to those used in Section 3 show that the action of the rotation on the diagram $\tilde D$ extends to an action of the finite cyclic group $G$ on the chain complex $(C^{*,*,*}(\tilde D),d)$. The homology $H^{*,*,*}_G$ of the quotient complex $(\overline {C^{*,*,*}(\tilde D)},\overline{d})$ is called the $G-$equivariant Khovanov homology of $D$. The proofs in the previous section extend straightforwardly to show that $H^{*,*,*}_G$ is an invariant of framed links in the solid torus. \subsection*{7- Equivariant graph homology } In this section, we explain how one may extend our equivariant link homology to graphs. Let us first fix notation and review some definitions. Throughout the rest of this paper, a graph is a 1-dimensional finite CW-complex. Let $\mathcal G$ be a graph with vertex set $V({\cal G})$ and edge set $E({\mathcal G})$. The chromatic polynomial of $\cal G$ is a one-variable polynomial $P({\mathcal G})\in \Z[\lambda]$ which, when evaluated at a positive integer $m$, gives the number of colorings of the vertices of $\mathcal G$ by a palette of $m$ colors satisfying the property that vertices which are connected by an edge have different colors. Now, we shall briefly review the definition of graph homology following \cite{HR}. We consider homology with coefficients in $\F_2$. Take a set of colors $\{1,x\}$ and define a product $\star$ as in $\Z[x]/(x^2)$. For each $s\subseteq E({\mathcal G})$, we set $[{\mathcal G}:s]$ to be the graph whose vertex set is $V({\cal G})$ and whose edge set is $s$. An \emph{enhanced state} of $\mathcal {G}$ is a pair $S=(s,c)$, where $s\subseteq E({\mathcal G})$ and $c$ is an assignment of $1$ or $x$ to each connected component of the spanning subgraph $[{\mathcal G}:s]$. If $S=(s,c)$ is an enhanced state, then we set $i(S)$ to be the number of edges in $s$ and $j(S)$ to be the number of $x$'s in $c$. Now, we define $C^{i,j}({\mathcal G})$ to be the vector space generated by all enhanced states of $\mathcal G$ with $i(S)=i$ and $j(S)=j$. The differential is defined by $$\func{d} {C^{i,j}({\mathcal G})} {C^{i+1,j}({\mathcal G})} {S} {\displaystyle\sum_{e \in E({\mathcal G})\setminus s}S_e,}$$ where $S_e$ is the enhanced state obtained from $S$ by adding the edge $e$ and adjusting the colors according to the product $\star$; see \cite[Page 1375]{HR} for more details. The homology of $(C^{*,*}({\mathcal G}),d)$ is an invariant of $\mathcal G$. The chromatic polynomial is the polynomial Euler characteristic of $H^{*,*}({\mathcal G})$ evaluated at $q=\lambda-1$.\\ Let $\tilde{\mathcal G}$ be a graph on which the finite cyclic group $G$ acts. The action of $G$ on $\tilde{\mathcal G}$ extends to an action on the set of enhanced states. Thus, the group $G$ acts on $C^{i,j}(\tilde {\mathcal G})$. The action of $G$ commutes with the differential for the same reason as in the proof of Lemma 3.1. We obtain a quotient complex whose homology is called the $G-$equivariant homology of $\tilde {\mathcal G}$.
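Let us close with a small numerical illustration, which is ours and not part of \cite{HR}. Summing $q^{j(S)}$ over the enhancements of a fixed $s$ gives $(1+q)^{k(s)}$, where $k(s)$ is the number of components of $[{\mathcal G}:s]$, so the polynomial Euler characteristic of the above complex equals $\sum_{s\subseteq E({\mathcal G})}(-1)^{|s|}(1+q)^{k(s)}$; with $\lambda=1+q$ this is the Whitney rank expansion of the chromatic polynomial. The following Python sketch checks this for the triangle graph by comparing the expansion with a brute-force count of proper colorings.
\begin{verbatim}
from itertools import combinations, product

def components(vertices, edges):
    # number of connected components, via a simple union-find
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return len({find(v) for v in vertices})

def state_sum(vertices, edges, lam):
    # sum over s of (-1)^{|s|} * lam^{k(s)}
    return sum((-1) ** r * lam ** components(vertices, s)
               for r in range(len(edges) + 1)
               for s in combinations(edges, r))

def proper_colorings(vertices, edges, lam):
    # brute-force count of proper colorings with lam colors
    return sum(all(c[u] != c[w] for u, w in edges)
               for c in (dict(zip(vertices, cs))
                         for cs in product(range(lam), repeat=len(vertices))))

V, E = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]            # the triangle graph
print([(state_sum(V, E, m), proper_colorings(V, E, m)) for m in range(1, 5)])
# [(0, 0), (0, 0), (6, 6), (24, 24)], both equal to m(m-1)(m-2)
\end{verbatim}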
\end{document} \begin{document} \maketitle \begin{abstract} In this paper, we give a duality theorem between the category of $\kappa$-additive complete atomic modal algebras and the category of $\kappa$-downward directed multi-relational Kripke frames, for any cardinal number $\kappa$. Multi-relational Kripke frames are not Kripke frames for multi-modal logic, but frames for monomodal logics in which the modal operator $\Diamond$ does not distribute over (possibly infinite) disjunction, in general. We first define homomorphisms of multi-relational Kripke frames, and then show the equivalence between the category of $\kappa$-downward directed multi-relational Kripke frames and the category of $\kappa$-complete neighborhood frames, from which the duality theorem follows. We also present another direct proof of this duality based on the technique given by Minari. \end{abstract} \section{Introduction} It is proved by Thomason \cite{thm75} that the category of all completely additive complete atomic modal algebras is dually equivalent to the category of all Kripke frames, where a modal algebra is said to be completely additive if the modal operator $\Diamond$ distributes over the join of every subset of the algebra. However, there are some modal logics which cannot be characterized by a class of completely additive modal algebras. For example, if we see the existential and universal quantifiers as infinite joins and meets, respectively, the Barcan formula $\forall x\Box\phi\supset\Box\forall x\phi$ corresponds to complete additivity, but there exist predicate modal logics in which it is not derivable. Moreover, there exists a propositional normal modal logic which is incomplete with respect to any class of completely additive complete modal algebras \cite{hld-ltk19}. Subsequently, Do\v{s}en \cite{dsn89} gives a broad range of duality theorems for categories of modal algebras and neighborhood frames, including duality between the category of complete atomic modal algebras and the category of $\omega$-complete neighborhood frames (which are called full filter frames in \cite{dsn89}) and that between the category of completely additive complete atomic modal algebras and the category of complete neighborhood frames (which are called full hyperfilter frames in \cite{dsn89}), as well as equivalence between the category of complete neighborhood frames and the category of Kripke frames. However, it should be remarked that the category of neighborhood frames is not a generalization of the category of Kripke frames, in the following sense: For any Kripke frame $F=\langle W,R\rangle$, we can define the ``underlying'' neighborhood frame $U(F)=\langle W,\mathcal{V}_{F} \rangle$, where $$ \mathcal{V}_{F}(x)=\{\{y\mid (x,y)\in R\}\}, $$ for any $x\in W$. However, as we will see in Theorem~\ref{nfr-mkf}, $U$ does not define the forgetful functor. In this paper, we give another duality theorem, relating the category of complete atomic modal algebras to the category of multi-relational Kripke frames. Multi-relational Kripke frames are not Kripke frames for multi-modal logic, but frames for monomodal logics in which the modal operator $\Diamond$ does not distribute over (possibly infinite) disjunction, in general.
For example, in deontic logic (see, e.g., \cite{gbl00,cld13}), \begin{equation}\label{mdldistland} (\Box p\land\Box q)\supset\Box(p\land q) \end{equation} should not be derivable, as the formula $(\Box\phi\land\Box\neg\phi)\supset\Box\psi$, which means that ``if there is any conflict of obligation, then everything is obligatory'' (\cite{gbl00}, p.114), can be deduced from it, and in the least infinitary modal logic, it is proved that the countable extension of (\ref{mdldistland}) is not derivable \cite{tnkcut,mnr16}. Consequently, these logics are Kripke incomplete, but it is proved that deontic logic $\mathrm{P}$ is complete with respect to the class of serial multi-relational Kripke frames \cite{gbl00}, and the least infinitary modal logic is complete with respect to the class of $\omega$-downward directed multi-relational Kripke frames \cite{mnr16}. In this paper, we first define homomorphisms of multi-relational Kripke frames so that the category of multi-relational Kripke frames becomes a generalization of the category of Kripke frames. Then we show that the category of $\kappa$-downward directed multi-relational Kripke frames is equivalent to the category of $\kappa$-complete neighborhood frames for every cardinal number $\kappa$, which is a generalization of Do\v{s}en's equivalence theorem between the category of Kripke frames and the category of complete neighborhood frames. From this equivalence, duality between the category of $\kappa$-additive complete atomic modal algebras and the category of $\kappa$-downward directed multi-relational Kripke frames follows. In addition, we give another proof for this duality for any regular cardinal $\kappa$. The basic technique of this proof is given by Minari \cite{mnr16}. He proved a completeness theorem for the least infinitary modal logic with respect to $\omega$-downward directed multi-relational Kripke frames by constructing a multi-relational Kripke frame such that each binary relation is given in the same way as the canonical frame of a finite fragment of the Lindenbaum algebra. We show that Minari's technique also works for homomorphisms and can be extended to any regular cardinal $\kappa$. \section{Preliminaries} In this section, we fix notation and recall definitions and basic results. For the details, see, e.g., \cite{blc-rjk-vnm01,gvn-hlm09}. Let $W$ be a non-empty set and $R$ a binary relation on $W$. For any $w_{1}$ and $w_{2}$ in $W$, we write $w_{1}\rel{R}w_{2}$ if $(w_{1},w_{2})\in R$. For any $X\subseteq W$, $\urel{R}X$ and $\drel{R}X$ denote the subsets of $W$ defined by $$ \urel{R}X=\{w\in W\mid \exists x\in X(x\rel{R}w)\},\ \ \drel{R}X=\{w\in W\mid \exists x\in X(w\rel{R}x)\}, $$ respectively. If $X$ is a singleton $\{w\}$, we write $\urel{R}w$ and $\drel{R}w$ for $\urel{R}X$ and $\drel{R}X$, respectively. If $R$ is a partial order $\leq$, we write $\mbox{$\uparrow$}$ and $\mbox{$\downarrow$}$ for $\urel{\leq}$ and $\drel{\leq}$, respectively. Let $f:A\rightarrow B$ be a mapping from a set $A$ to a set $B$. For any sets $X\subseteq A$ and $Y\subseteq B$, $f\left[X\right]$ and $f^{-1}\left[Y\right]$ denote the sets $$ f\left[X\right] = \{f(x)\mid x\in X\},\ \ f^{-1}\left[Y\right] = \{x\in A\mid f(x)\in Y\}, $$ respectively.
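The operators $\urel{R}$ and $\drel{R}$ are completely elementary; the following Python fragment, added here only as an illustration and with an ad hoc encoding of relations as sets of pairs, computes them for a finite relation.
\begin{verbatim}
def up_set(R, X):
    # {w : there is x in X with x R w}
    return {w for (x, w) in R if x in X}

def down_set(R, X):
    # {w : there is x in X with w R x}
    return {w for (w, x) in R if x in X}

# Example: W = {0, 1, 2, 3} with w R v iff v = w + 1.
R = {(0, 1), (1, 2), (2, 3)}
print(up_set(R, {1}), down_set(R, {1}))   # {2} {0}
\end{verbatim}
As recalled below in Thomason's duality, $\drel{R}X$ is precisely the value $\Diamond_{K}X$ of the modal operator on the powerset algebra of a Kripke frame $K=\langle W,R\rangle$.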
A mapping $f:A\rightarrow B$ is called a {\em homomorphism of complete Boolean algebras} if $f$ is a homomorphism of Boolean algebras which satisfies $$ f\left(\bigvee X\right)=\bigvee f\left[X\right],\ f\left(\bigwedge X\right)=\bigwedge f\left[X\right] $$ for any $X\subseteq A$. \end{definition} \begin{definition}\rm For any homomorphism $f:A\rightarrow B$ of complete Boolean algebras, $\radj{f}$ and $\ladj{f}$ denote mappings from $B$ to $A$ which are defined by $$ \radj{f}(b)=\bigvee f^{-1}\left[\mbox{$\downarrow$} b\right],\ \ \ladj{f}(b)=\bigwedge f^{-1}\left[\mbox{$\Upsilonarrow$} b\right], $$ for any $b\in B$, respectively. \end{definition} \begin{proposition} Let $f:A\rightarrow B$ be a homomorphism of complete Boolean algebras. For any $a\in A$ and $b\in B$, \begin{equation}\label{adjoint} f(a)\leq b \ \Leftrightarrow\ a\leq\radj{f}(b),\ \ b\leq f(a)\ \Leftrightarrow\ \ladj{f}(b)\leq a. \end{equation} That is, $\radj{f}$ and $\ladj{f}$ are right and left adjoints of $f$, respectively. \end{proposition} It follows from (\ref{adjoint}) that $\radj{f}$ and $\ladj{f}$ are order preserving mappings and \begin{equation}\label{monotone} f\circ\radj{f}, \ \ladj{f}\circ f\leq\mathrm{Id}_{B}, \ \ \ \mathrm{Id}_{A} \leq\radj{f}\circ f,\ f\circ\ladj{f}. \end{equation} \begin{definition}\rm Let $A$ be a Boolean algebra. A non-zero element $a\in A$ is called an {\em atom} if $0< x\leq a$ implies $x=a$. The set of all atoms of $A$ is denoted by $\mathcal{A}(A)$. A Boolean algebra $A$ is said to be {\em atomic} if every non-zero element $x\in A$ satisfies $$ x=\bigvee_{a\in\mathcal{A}(A),\ a\leq x}a. $$ We write $\mathbf{CABA}$ for the category whose objects are all complete and atomic Boolean algebras and arrows are all homomorphisms of complete Boolean algebras. \end{definition} \begin{proposition}\label{atom-property} Let $A$ be a Boolean algebra and $0\not=a\in A$. Then the following conditions are equivalent: \begin{enumerate} \item \label{con:atom} $a$ is an atom. \item \label{con:compjoinirr} For any $X\subseteq A$, if $\bigvee X\in A$ and $a\leq \bigvee X$ then $a\leq x$ for some $x\in X$. \item \label{con:joinirr} For any $x$ and $y$ in $A$, if $a\leq x\lor y$ then $a\leq x$ or $a\leq y$. \item \label{con:complete} For any $x\in A$, $a\leq x$ or $a\leq -x$. \end{enumerate} \end{proposition} \begin{proposition} Let $A$ and $B$ be complete atomic Boolean algebras and $f:A\rightarrow B$ a homomorphism of complete Boolean algebras. If $b\in\mathcal{A}(B)$, then $\ladj{f}(b)\in\mathcal{A}(A)$. \end{proposition} \begin{definition}\label{kripke}\rm A {\em Kripke frame} is a pair $\langle W,R\rangle$, where $W$ is a non-empty set and $R$ is a binary relation on $W$. Let $F_{1}=\langle W_{1},R_{1}\rangle$ and $F_{2}=\langle W_{2},R_{2}\rangle$ be Kripke frames. A {\em homomorphism $f:F_{1}\rightarrow F_{2}$ of Kripke frames} is a mapping from $W_{1}$ to $W_{2}$ which satisfies the following: \begin{enumerate} \item for any $v$ and $w$ in $W_{1}$, if $v\rel{R_{1}}w$ then $f(v)\rel{R_{2}}f(w)$; \item for any $w\in W_{1}$ and $u\in W_{2}$, if $f(w)\rel{R_{2}}u$ then there exists $v\in W_{1}$ such that $w\rel{R_{1}}v$ and $f(v)=u$. \end{enumerate} We write $\mathbf{KFr}$ for the category of all Kripke frames. 
\end{definition} \section{The category of complete atomic modal algebras} \begin{definition}\rm An algebra $\langle A;\lor,\land,-,\Diamond,0,1\rangle$ is called a {\em modal algebra} if its reduct $\langle A;\lor,\land,-,0,1\rangle$ is a Boolean algebra and $\Diamond$ is a unary operator which satisfies $ \Diamond 0=0 $ and $$ \Diamond x\lor \Diamond y=\Diamond(x\lor y) $$ for any $x$ and $y$ in $A$. A modal algebra $A$ is said to be {\em complete} or {\em atomic} if its Boolean reduct is complete or atomic, respectively. Let $A$ and $B$ be modal algebras. A mapping $f:A\rightarrow B$ is called a {\em homomorphism of modal algebras} if $f$ is a homomorphism of Boolean algebras which satisfies $$ f(\Diamond x)=\Diamond f(x) $$ for any $x\in A$. A homomorphism $f$ of modal algebras is called a {\em homomorphism of complete modal algebras} if it is a homomorphism of complete Boolean algebras. \end{definition} \begin{definition}\label{def:additive}\rm A complete modal algebra $A$ is said to be {\em completely additive} if \begin{equation}\label{eqbarcan} \bigvee_{x\in X}\Diamond x=\Diamond\bigvee X \end{equation} holds for any $X\subseteq A$. Let $\kappa$ be a cardinal number. A complete modal algebra $A$ is said to be {\em $\kappa$-additive} if the equation (\ref{eqbarcan}) holds for any $X\subseteq A$ such that $|X|<\kappa$. \end{definition} \begin{definition}\rm The objects of the category $\mathbf{CAMA}_{\infty}$ are all completely additive complete atomic modal algebras and the arrows of it are all homomorphisms of complete modal algebras between them. Let $\kappa$ be a cardinal number. The objects of the category $\mathbf{CAMA}_{\kappa}$ are all $\kappa$-additive complete atomic modal algebras and the arrows of it are all homomorphisms of complete modal algebras between them. \end{definition} \begin{theorem}{\rm (Thomason \cite{thm75})}. $\mathbf{CAMA}_{\infty}$ and $\mathbf{KFr}$ are dually equivalent. \end{theorem} \begin{proof} First, we define a functor $F:\mathbf{CAMA}_{\infty}\rightarrow\mathbf{KFr}$. For any object $A$ of $\mathbf{CAMA}_{\infty}$, define $F(A)$ by $$ F(A)=\langle \mathcal{A}(A),R\rangle, $$ where, $$ a\rel{R}b\ \Leftrightarrow\ a\leq \Diamond b $$ for any $a$ and $b$ in $\mathcal{A}(A)$, and for any arrow $f:A\rightarrow B$ of $\mathbf{CAMA}_{\infty}$, define $F(f):F(B)\rightarrow F(A)$ by $$ F(f)(b)=\ladj{f}(b) $$ for any $b\in\mathcal{A}(B)$. Next, we define a functor $G:\mathbf{KFr}\rightarrow\mathbf{CAMA}_{\infty}$. For any object $K=\langle W,R\rangle$ of $\mathbf{KFr}$, define $G(K)$ by $$ G(K)= \langle \mathcal{P}(W);\cup,\cap,W\setminus-,\Diamond_{K},\emptyset,W \rangle, $$ where $$ \Diamond_{K}X=\drel{R}X $$ for any $X\subseteq W$, and for any arrow $g$ from $K_{1}=\langle W_{1},R_{1}\rangle$ to $K_{2}=\langle W_{2},R_{2}\rangle$ of $\mathbf{KFr}$, define $G(g):G(K_{2})\rightarrow G(K_{1})$ by $$ G(g)(X)=g^{-1}[X] $$ for any $X\in\mathcal{P}(W_{2})$. Then $F:\mathbf{CAMA}_{\infty}\rightarrow\mathbf{KFr}$ and $G:\mathbf{KFr}\rightarrow\mathbf{CAMA}_{\infty}$ are well-defined contravariant functors and $$ \mathrm{Id}_{\mathbf{CAMA}_{\infty}}\cong G\circ F,\ \ \mathrm{Id}_{\mathbf{KFr}}\cong F\circ G. $$ \end{proof} \section{The category of neighborhood frames}\label{section:nfr} A {\em neighborhood frame} is a pair $\langle C, \mathcal{V}\rangle$, where $C$ is a non-empty set and $\mathcal{V}$ is a mapping from $C$ to $\mathcal{P}(\mathcal{P}(C))$. 
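At this point no conditions at all are imposed on the families $\mathcal{V}(c)$; for instance (a minimal example, mentioned only for illustration), the data $$ C=\{0,1\},\qquad \mathcal{V}(0)=\{\{1\},\{0,1\}\},\qquad \mathcal{V}(1)=\{\emptyset\} $$ define a neighborhood frame $\langle C,\mathcal{V}\rangle$.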
A neighborhood frame $\langle C, \mathcal{V}\rangle$ is said to {\em include the whole set} if for any $c\in C$, $C\in\mathcal{V}(c)$, and is said to be {\em upward closed} if for any $c\in C$, $X\in\mathcal{V}(c)$, and $Y\subseteq C$, if $X\subseteq Y$ then $Y\in\mathcal{V}(c)$. A neighborhood frame $\langle C, \mathcal{V}\rangle$ is said to be {\em complete} if it includes the whole set, is upward closed, and for any $c\in C$ and non-empty subset $S$ of $\mathcal{V}(c)$, \begin{equation}\label{completenfr} S\subseteq\mathcal{V}(c)\ \Rightarrow\ \bigcap S\in \mathcal{V}(c). \end{equation} Let $\kappa$ be a cardinal number. A neighborhood frame $\langle C, \mathcal{V}\rangle$ is said to be {\em $\kappa$-complete} if it includes the whole set, is upward closed, and (\ref{completenfr}) holds for any non-empty subset $S$ of $\mathcal{V}(c)$ such that $|S|<\kappa$. Let $Z_{1}=\langle C_{1},\mathcal{V}_{1}\rangle$ and $Z_{2}=\langle C_{2},\mathcal{V}_{2}\rangle$ be neighborhood frames. A mapping $f:C_{1}\rightarrow C_{2}$ is called a {\em homomorphism of neighborhood frames} from $Z_{1}$ to $Z_{2}$ if for any $c\in C_{1}$ and $X\subseteq C_{2}$, $$ f^{-1}[X]\in \mathcal{V}_{1}(c)\ \Leftrightarrow\ X\in \mathcal{V}_{2}(f(c)) $$ holds. We write $\mathbf{NFr}$ for the category of all neighborhood frames. We also write $\mathbf{NFr}_{\infty}$ and $\mathbf{NFr}_{\kappa}$ for its full subcategories of all complete neighborhood frames and all $\kappa$-complete neighborhood frames, respectively. The duality theorem between $\mathbf{NFr}_{\omega}$ and $\mathbf{CAMA}$ and that between $\mathbf{NFr}_{\infty}$ and $\mathbf{CAMA}_{\infty}$, which are given in Do\v{s}en \cite{dsn89}, can be generalized to any cardinal number $\kappa$, immediately: \begin{theorem}\label{camanfr} {\rm (Do\v{s}en \cite{dsn89})}. For any cardinal number $\kappa$, $\mathbf{CAMA}_{\kappa}$ and $\mathbf{NFr}_{\kappa}$ are dually equivalent. \end{theorem} \begin{proof} First, we define a functor $J:\mathbf{CAMA}_{\kappa}\rightarrow\mathbf{NFr}_{\kappa}$. For any object $A$ of $\mathbf{CAMA}_{\kappa}$, define $J(A)$ by $$ J(A)=\langle \mathcal{A}(A),\mathcal{V}\rangle, $$ where $$ \mathcal{V}(a)=\{\mathcal{A}(A)\cap\mbox{$\downarrow$} x\mid a\not\leq\Diamond-x\} $$ for any $a$, and for any arrow $f:A\rightarrow B$ of $\mathbf{CAMA}_{\kappa}$, define $J(f):J(B)\rightarrow J(A)$ by $$ J(f)(b)=\ladj{f}(b) $$ for any $b\in\mathcal{A}(B)$. Next, we define a functor $K:\mathbf{NFr}_{\kappa}\rightarrow\mathbf{CAMA}_{\kappa}$. For any object $Z=\langle C,\mathcal{V}\rangle$ of $\mathbf{NFr}_{\kappa}$, define $K(Z)$ by $$ K(Z)= \langle \mathcal{P}(C);\cup,\cap,C\setminus-,\Diamond_{Z},\emptyset,C \rangle, $$ where $$ \Diamond_{Z}X=\{c\in C\mid C\setminus X\not\in\mathcal{V}(c)\} $$ for any $X\subseteq C$, and for any arrow $g$ from $Z_{1}=\langle C_{1},\mathcal{V}_{1}\rangle$ to $Z_{2}=\langle C_{2},\mathcal{V}_{2}\rangle$ of $\mathbf{NFr}_{\kappa}$, define $K(g):K(Z_{2})\rightarrow K(Z_{1})$ by $$ K(g)(X)=g^{-1}[X] $$ for any $X\in\mathcal{P}(C_{2})$. 
Then $J:\mathbf{CAMA}_{\kappa}\rightarrow\mathbf{NFr}_{\kappa}$ and $K:\mathbf{NFr}_{\kappa}\rightarrow\mathbf{CAMA}_{\kappa}$ are well-defined contravariant functors and $$ \delta:\mathrm{Id}_{\mathbf{CAMA}_{\kappa}}\cong K\circ J,\ \ \ \gamma:\mathrm{Id}_{\mathbf{NFr}_{\kappa}}\cong J\circ K, $$ where the natural isomorphisms $\delta$ and $\gamma$ are defined by $$ \delta_{A}:x\mapsto \{a\in\mathcal{A}(A)\mid a\leq x\}, \ \ \gamma_{Z}:y\mapsto\{y\}, $$ for any object $A$ in $\mathbf{CAMA}_{\kappa}$ and any $Z$ in $\mathbf{NFr}_{\kappa}$. \end{proof} Do\v{s}en also proved the following equivalence of categories: \begin{theorem}\label{nfrkfr} {\rm (Do\v{s}en \cite{dsn89})}. $\mathbf{NFr}_{\infty}\cong\mathbf{KFr}$. \end{theorem} For any Kripke frame $F=\langle W, R\rangle$, we can define a neighborhood frame $U(F)$ by $U(F)=\langle W, \{\urel{R}x\mid x\in W\}\rangle$. However, as is shown in Theorem~\ref{nfr-mkf} there exists a Kripke frame $F$ such that $U(F)$ is not a complete neighborhood frame and there exists a homomorphism $f:F_{1}\rightarrow F_{2}$ of Kripke frames which is not a homomorphism of neighborhood frames from $U(F_{1})$ to $U(F_{2})$. In this sense, the neighborhood frames are not a generalization of the Kripke frames, although the two categories are equivalent. \section{The category of multi-relational Kripke frames} \begin{definition}\label{def:mkf}\rm A pair $\langle W,S\rangle$ is called a {\em multi-relational Kripke frame} if $W$ is a non-empty set and $S$ is a non-empty set of binary relations on $W$. A multi-relational Kripke frame $\langle W,S\rangle$ is said to be {\em completely downward directed} if for any $S'\subseteq S$, there exists $R\in S$ such that \begin{equation}\label{dd} R\subseteq \bigcap S'. \end{equation} Clearly, $\langle W,S\rangle$ is completely downward directed if and only if $\bigcap S \in S$. Let $\kappa$ be a cardinal number. A multi-relational Kripke frame $\langle W,S\rangle$ is said to be {\em $\kappa$-downward directed} if for any $S'\subseteq S$ such that $|S'|<\kappa$, there exists $R\in S$ which satisfies (\ref{dd}). Let $M_{1}=\langle W_{1},S_{1}\rangle$ and $M_{2}=\langle W_{2},S_{2}\rangle$ be multi-relational Kripke frames. A mapping $f:W_{1}\rightarrow W_{2}$ is called a {\em homomorphism of multi-relational Kripke frames} from $M_{1}$ to $M_{2}$ if it satisfies the following two conditions: \begin{enumerate} \item for any $x\in W_{1}$ and $R_{2}\in S_{2}$, there exists $R_{1}\in S_{1}$ such that for any $y\in W_{1}$, \begin{equation*} x\rel{R_{1}}y\ \Rightarrow\ f(x)\rel{R_{2}}f(y); \end{equation*} \item for any $x\in W_{1}$ and $R_{1}\in S_{1}$, there exists $R_{2}\in S_{2}$ such that for any $u\in W_{2}$, \begin{equation*} f(x)\rel{R_{2}}u \ \Rightarrow\ \mbox{$\exists y\in W_{1}$ such that $x\rel{R_{1}}y$ and $f(y)=u$}. \end{equation*} \end{enumerate} A homomorphism of multi-relational Kripke frames is an {\em isomorphism} if it is bijective. Indeed, if $f$ is an isomorphism, its inverse is also a homomorphism of multi-relational Kripke frames. \end{definition} \begin{definition} We write $\mathbf{MRKF}$ for the category of all multi-relational Kripke frames. We also write $\mathbf{MRKF}_{\infty}$ and $\mathbf{MRKF}_{\kappa}$ for its full subcategories of all completely downward directed multi-relational Kripke frames and all $\kappa$-downward directed multi-relational Kripke frames, respectively. 
\end{definition} The following theorem states that the multi-relational Kripke frames can be seen as a generalization of the Kripke frames: \begin{proposition}\label{kfrmkf} For any Kripke frame $F=\langle W,R\rangle$, define $M(F)$ by $ M(F)= \langle W,\{R\}\rangle, $ and for any homomorphism $f:F_{1}\rightarrow F_{2}$ of Kripke frames, define $M(f)$ by $f$. Then, $M$ is a well-defined functor and the image of $\mathbf{KFr}$ by $M$ is a full and faithful subcategory of $\mathbf{MRKF}_{\infty}$. \end{proposition} \begin{proof} Clear from the definition of the homomorphism of multi-relational Kripke frames. \end{proof} It is easy to prove the equivalence of $\mathbf{MRKF}_{\infty}$ and $\mathbf{KFr}$. Define a functor $L:\mathbf{MRKF}_{\infty}\rightarrow\mathbf{KFr}$ by $$ L:\langle W,S\rangle\mapsto \left\langle W,\bigcap S\right\rangle,\ \ L(f)=f. $$ Then it is easy to show that $L$ is a well-defined functor and both $L\circ M\cong\mathrm{Id}_{\mathbf{KFr}}$ and $M\circ L\cong\mathrm{Id}_{\mathbf{MRKF}_{\infty}}$ hold. \section{Equivalence between $\mathbf{MRKF}$ and $\mathbf{NFr}$}\label{section:enfr} For any multi-relational Kripke frame $M=\langle W,S\rangle$, we can define the "underlying" neighborhood frame $U(M)$ by $U(M)=\langle W,\mathcal{V}_{M}\rangle$, where $$ \mathcal{V}_{M}(x)=\{\urel{R} x\mid R\in S\}. $$ However, $U$ does not define the forgetful functor from $\mathbf{MRKF}$ to $\mathbf{NFr}$ nor that from $\mathbf{MRKF}_{\kappa}$ to $\mathbf{NFr}_{\kappa}$. In fact, we have the following: \begin{theorem}\label{nfr-mkf} \begin{enumerate} \item There exists an object $M$ of $\mathbf{MRKF}_{\kappa}$ such that $U(M)$ is not an object of $\mathbf{NFr}_{\kappa}$. Moreover, there exists such an object $M$ in $\mathbf{KFr}$ such that $U(M)$ is not an object of $\mathbf{NFr}_{\infty}$. \item There exists an arrow $f:M_{1}\rightarrow M_{2}$ of $\mathbf{MRKF}_{\kappa}$ such that $U(M_{1})$ and $U(M_{2})$ are objects of $\mathbf{NFr}$ but $f$ is not an arrow of $\mathbf{NFr}$. Moreover, there exists such an arrow $f$ in $\mathbf{KFr}$, either. \item There exists an arrow $f:U(M_{1})\rightarrow U(M_{2})$ of $\mathbf{NFr}$ such that $M_{1}$ and $M_{2}$ are objects of $\mathbf{MRKF}$ but $f:M_{1}\rightarrow M_{2}$ is not an arrow of $\mathbf{MRKF}$. \end{enumerate} \end{theorem} \begin{proof} \noindent (1): Let $M=\left\langle \{0\},\{\emptyset\}\right\rangle$. Then $M$ is an object of $\mathbf{MRKF}_{\kappa}$, but not that of $\mathbf{NFr}_{\kappa}$, since $\mathcal{V}_{M}(0)$ is not upward closed. If we identify a singleton $\{R\}$ of a relation with $R$, $M$ is a Kripke frame, either. \noindent (2): Let $M_{1}=\left\langle \{0\},\{\{(0,0)\}\}\right\rangle$ and $M_{2}=\left\langle \{0,1\},\{\{(0,0)\}\}\right\rangle$. Let $f:0\mapsto 0$. It is easy to see that $f\in\hom_{\mathbf{MRKF}_{\kappa}}(M_{1},M_{2})$. If we identify a singleton $\{R\}$ of a relation with $R$, $f$ is a homomorphism of Kripke frames, either. However, $f$ is not an arrow of $\mathbf{NFr}$ from $U(M_{1})$ to $U(M_{2})$, since $ f^{-1}\left[\{0,1\}\right] = \{0\}\in\mathcal{V}_{M_{1}}(0)$, but $\{0,1\}\not\in\mathcal{V}_{M_{2}}(0)$. \noindent (3): Let $ M_{1}=\left\langle \{0,1,2\}, \{R_{1},R_{2}\}\right\rangle$ and $M_{2}=\left\langle \{0,1\}, \{Q\}\right\rangle, $ where $$ R_{1}=\{(0,1)\},\ \ R_{2}=\{(0,0),(0,1),(0,2)\},\ \ Q=\{(0,0),(0,1)\}. 
$$ Then $$ \mathcal{V}_{M_{1}}(0)=\{\{1\},\{0,1,2\}\},\ \ \mathcal{V}_{M_{1}}(1)=\mathcal{V}_{M_{1}}(2)=\{\emptyset\} $$ and $$\mathcal{V}_{M_{2}}(0)=\{\{0,1\}\},\ \ \mathcal{V}_{M_{2}}(1)=\{\emptyset\}. $$ Define $f:\{0,1,2\}\rightarrow\{0,1\}$ by $f(0)=0$ and $f(1)=f(2)=1$. It is easy to see that $f\in \hom_{\mathbf{NFr}}(U(M_{1}),U(M_{2}))$. However, $f\not\in \hom_{\mathbf{MRKF}}(M_{1},M_{2})$, since $0\rel{Q}0$ but $0\not\rel{R_{1}}0$. \end{proof} If we identify $U(M)$ with $M$ and a singleton $\{R\}$ of a relation with $R$, Proposition~\ref{kfrmkf} and Theorem~\ref{nfr-mkf} can be summarized as follows: \begin{equation*} \begin{array}{ccc} \mathbf{NFr} & \mathop{{\not\supseteq}\atop{\not\subseteq}}\limits_{\mathrm{arrows}} & \mathbf{MRKF} \\ \mathop{\rotatebox[origin=c]{90}{$\subseteq$}}\limits_{\mathrm{\ }} & & \mathop{\rotatebox[origin=c]{90}{$\subseteq$}}\limits_{\mathrm{\ }} \\ \mathbf{NFr}_{\kappa} & \mathop{\subsetneqq}\limits_{\mathrm{objects}} & \mathbf{MRKF}_{\kappa} \\ \mathop{\rotatebox[origin=c]{90}{$\subseteq$}}\limits_{\mathrm{\ }} & & \mathop{\rotatebox[origin=c]{90}{$\subseteq$}}\limits_{\mathrm{\ }} \\ \mathbf{NFr}_{\infty} & \mathop{\subsetneqq}\limits_{\mathrm{objects}} & \mathbf{KFr} \end{array} \end{equation*} In the rest of this section, we show that $\mathbf{NFr}_{\kappa}$ and $\mathbf{MRKF}_{\kappa}$ are equivalent. First, we show the following lemmas: \begin{lemma}\label{mkftonfr} Let $\kappa$ be any cardinal number. For any $\kappa$-downward directed multi-relational Kripke frame $M=\langle W, S\rangle$, define a $\kappa$-complete neighborhood frame $N(M)$ by $ \langle W,\mathcal{V}_{M}\rangle $, where $\mathcal{V}_{M}(x)\subseteq\mathcal{P}(W)$ is defined by $$ \mathcal{V}_{M}(x)= \mbox{$\Upsilonarrow$}\{\urel{R}x\mid R\in S\} $$ for any $x\in W$, and for any homomorphism $f$ of multi-relational Kripke frames, define $N(f)$ by $f$. Then $N$ is a full functor from $\mathbf{MRKF}_{\kappa}$ to $\mathbf{NFr}_{\kappa}$. \end{lemma} \begin{proof} It is clear that $N(M)$ is an object of $\mathbf{NFr}_{\kappa}$. We show that for any $M_{1}=\langle W_{1},S_{1}\rangle$ and $M_{2}=\langle W_{2},S_{2}\rangle$, $$ \hom_{\mathbf{MRKF}_{\kappa}}(M_{1},M_{2})=\hom_{\mathbf{NFr}_{\kappa}}(N(M_{1}),N(M_{2})). $$ \noindent ($\subseteq$): Suppose $f\in\hom_{\mathbf{MRKF}_{\kappa}}(M_{1},M_{2})$. Take any $x\in W_{1}$ and $Y\subseteq W_{2}$. By definition of $\mathcal{V}_{M_{1}}(x)$ and $\mathcal{V}_{M_{2}}(f(x))$, \begin{eqnarray*} Y\in\mathcal{V}_{M_{2}}(f(x)) &\ \Leftrightarrow\ & \exists Q\in S_{2}\left(\urel{Q}f(x)\subseteq Y\right)\\ &\ \Rightarrow\ & \exists R\in S_{1}\left(f\left[\urel{R} x\right]\subseteq \urel{Q}f(x)\subseteq Y\right)\\ &\ \Rightarrow\ & \exists R\in S_{1}\left(\urel{R} x\subseteq f^{-1}\left[Y\right]\right)\\ &\ \Leftrightarrow\ & f^{-1}\left[Y\right]\in\mathcal{V}_{M_{1}}(x). \end{eqnarray*} Conversely, \begin{eqnarray*} f^{-1}\left[Y\right]\in\mathcal{V}_{M_{1}}(x) &\ \Leftrightarrow\ & \exists R\in S_{1}\left(\urel{R}x\subseteq f^{-1}\left[Y\right]\right)\\ &\ \Rightarrow\ & \exists Q\in S_{2}\left(\urel{Q}f(x)\subseteq f\left[\urel{R}x\right]\subseteq f\left[f^{-1}\left[Y\right]\right]\right)\\ &\ \Rightarrow\ & \exists Q\in S_{2}\left(\urel{Q}f(x)\subseteq Y\right)\\ &\ \Leftrightarrow\ & Y\in\mathcal{V}_{M_{2}}(f(x)). \end{eqnarray*} \noindent ($\supseteq$): Suppose $f\in\hom_{\mathbf{NFr}_{\kappa}}(N(M_{1}),N(M_{2}))$ and $x\in W_{1}$. First, take any $Q\in S_{2}$. 
Then $ f^{-1}\left[\urel{Q}f(x)\right]$ is in $\mathcal{V}_{M_{1}}(x)$, since $\urel{Q}f(x)\in \mathcal{V}_{M_{2}}(f(x))$. Hence, there exists $R\in S_{1}$ such that $$ \urel{R}x\subseteq f^{-1}\left[\urel{Q}f(x)\right]. $$ Then for any $y\in W_{1}$, if $x\rel{R} y$ then $f(x)\rel{Q} f(y)$. Next, take any $R\in S_{1}$. Since $\urel{R}x\in\mathcal{V}_{M_{1}}(x)$ and $\mathcal{V}_{M_{1}}(x)$ is upward closed, $$ \urel{R}x\subseteq f^{-1}\left[f\left[\urel{R}x\right]\right]\in\mathcal{V}_{M_{1}}(x). $$ Hence, $f\left[\urel{R}x\right]\in\mathcal{V}_{M_{2}}(f(x))$. Then there exists $Q\in S_{2}$ such that $\urel{Q}f(x)\subseteq f\left[\urel{R}x\right]$. Then for any $u\in W_{2}$ such that $f(x)\rel{Q}u$, there exists $y\in W_{1}$ such that $x\rel{R}y$ and $f(y)=u$. \end{proof} \begin{lemma}\label{nfrtomkf} Let $Z=\langle C,\mathcal{V}\rangle$ be any $\kappa$-complete neighborhood frame. We write $V_{Z}$ for the set $$ V_{Z}=\{v:C\rightarrow\mathcal{P}(C)\mid \forall x\in C(v(x)\in\mathcal{V}(x))\}, $$ and for any $v\in V_{Z}$, we write $R_{v}$ for a binary relation on $C$ defined by $$ R_{v}=\{(x,y)\mid x\in C,\ y\in v(x)\}. $$ Then $ H(Z)= \langle C,S_{Z}\rangle $ is a $\kappa$-downward directed multi-relational Kripke frame, where $ S_{Z}=\{R_{v}\mid v\in V_{Z}\}$. If we define $H(f)$ by $f$ for any homomorphism $f$ of neighborhood frames, then $H$ is a functor from $\mathbf{NFr}_{\kappa}$ to $\mathbf{MRKF}_{\kappa}$. \end{lemma} \begin{proof} We first show that $H(Z)$ is an object of $\mathbf{MRKF}_{\kappa}$. Since $Z$ includes the whole set, $V_{Z}\not=\emptyset$. Therefore, $S_{Z}\not=\emptyset$. Take any subset $\{R_{v_{i}}\mid i\in I\}$ of $S_{Z}$ with $|I|<\kappa$. As $Z$ is $\kappa$-complete, there exists $u\in V_{Z}$ such that $$ u(x)=\bigcap_{i\in I}v_{i}(x)\in\mathcal{V}(x) $$ for any $x\in C$. Then $R_{u}=\bigcap_{i\in I}R_{v_{i}}$. Next, we show that $$ \hom_{\mathbf{NFr}_{\kappa}}(Z_{1}, Z_{2})\subseteq \hom_{\mathbf{MRKF}_{\kappa}}(H(Z_{1}), H(Z_{2})) $$ for any $\kappa$-complete neighborhood frames $Z_{1}=\langle C_{1},\mathcal{V}_{1}\rangle$ and $Z_{2}=\langle C_{2},\mathcal{V}_{2}\rangle$. Suppose $f\in \hom_{\mathbf{NFr}_{\kappa}}(Z_{1}, Z_{2})$. First, take any $x\in C_{1}$ and $R_{v}\in S_{Z_{2}}$. Then $\urel{R_{v}}f(x)=v(f(x))\in\mathcal{V}_{2}(f(x))$. Hence, $f^{-1}\left[\urel{R_{v}}f(x)\right]\in \mathcal{V}_{1}(x)$. By definition of $V_{Z_{1}}$, there exists $u\in V_{Z_{1}}$ such that $$ \urel{R_{u}}x= u(x)=f^{-1}\left[\urel{R_{v}}f(x)\right]. $$ Hence, for any $y\in C_{1}$, $$ x\rel{R_{u}}y\ \Leftrightarrow\ y\in u(x)\ \Leftrightarrow\ f(x)\rel{R_{v}} f(y). $$ Next, take any $x\in C_{1}$ and $R_{u}\in S_{Z_{1}}$. Then $\urel{R_{u}}x=u(x)\in\mathcal{V}_{1}(x)$. Since $Z_{1}$ is upward closed, $f^{-1}\left[f\left[\urel{R_{u}}x\right]\right]\in\mathcal{V}_{1}(x)$. Therefore, $f\left[\urel{R_{u}}x\right]\in\mathcal{V}_{2}(f(x))$. By definition of $V_{Z_{2}}$, there exists $v\in V_{Z_{2}}$ such that $$ \urel{R_{v}}f(x)= v(f(x))=f\left[\urel{R_{u}}x\right]. $$ Hence, for any $z\in C_{2}$ such that $ f(x)\rel{R_{v}}z $, there exists $y\in C_{1}$ such that $x\rel{R_{u}}y$ and $f(y)=z$. \end{proof} Now, we prove that $\mathbf{MRKF}_{\kappa}$ and $\mathbf{NFr}_{\kappa}$ are equivalent, which is a generalization of Theorem~\ref{nfrkfr}: \begin{theorem}\label{mkfnfr} $N$ and $H$ form an equivalence between $\mathbf{MRKF}_{\kappa}$ and $\mathbf{NFr}_{\kappa}$, for every cardinal number $\kappa$. 
\end{theorem} \begin{proof} For any object $M=\langle W,S\rangle$ of $\mathbf{MRKF}_{\kappa}$, define a map $ \gamma_{M}:M\rightarrow H(N(M)) $ by $\gamma_{M}(x)=x$ for any $x\in W$, and for any object $Z=\langle C,\mathcal{V}\rangle$ of $\mathbf{NFr}_{\kappa}$, define a map $ \delta_{Z}:Z\rightarrow N(H(Z)) $ by $\delta_{Z}(c)=c$ for any $c\in C$. It is trivial that $ H(N(f))\circ\gamma_{M_{1}}=\gamma_{M_{2}}\circ f $ holds for any $f:M_{1}\rightarrow M_{2}$ and $ N(H(g))\circ\delta_{Z_{1}}=\delta_{Z_{2}}\circ g $ holds for any $g:Z_{1}\rightarrow Z_{2}$. First, we show that for any multi-relational Kripke frame $M=\langle W,S\rangle$, $\gamma_{M}$ is an isomorphism of multi-relational Kripke frames from $M$ to $H(N(M))$. We check the first condition of the homomorphisms of multi-relational Kripke frames: Take any $x\in W$ and $R_{v}\in S_{N(M)}$, where $v\in V_{N(M)}$. Then there exists $R\in S$ such that $ \urel{R}x\subseteq v(x) $. For any $y\in W$, $R$ satisfies that \begin{equation*}\label{naturaliso} x\rel{R}y \ \Rightarrow\ y\in v(x) \ \Leftrightarrow\ x\rel{R_{v}}y. \end{equation*} Then we check the second condition: Take any $x\in W$ and $R\in S$. As $\urel{R}x\in\mathcal{V}_{M}(x)$, there exists $v\in V_{N(M)}$ such that $\urel{R}x=v(x)$. Then $R_{v}\in S_{N(M)}$ satisfies that for any $y\in W$, $$ x\rel{R_{v}}y \ \Leftrightarrow\ y\in v(x) \ \Leftrightarrow\ x\rel{R}y. $$ As $\gamma_{M}$ is the identity mapping on $W$, $\gamma_{M}$ is an isomorphism of multi-relational Kripke frames. Next, we prove that for any neighborhood frame $Z=\langle C,\mathcal{V}\rangle$, $\delta_{Z}$ is an isomorphism of neighborhood frames from $Z$ to $N(H(Z))$. Since $Z$ is upward closed, \begin{equation}\label{upwardclosed} \mbox{$\Upsilonarrow$}\{\urel{R_{v}}x\mid v\in V_{Z}\} = \{\urel{R_{v}}x\mid v\in V_{Z}\} \end{equation} for any $x\in C$. Take any $x\in C$ and $X\subseteq C$. Then, \begin{align*} X\in\mathcal{V}(x) &\ \Leftrightarrow\ \exists v\in V_{Z}\left(v(x)= X\right)\\ &\ \Leftrightarrow\ \exists R_{v}\in S_{Z}\left(\urel{R_{v}}x= X\right)\\ &\ \Leftrightarrow\ X\in\mathcal{V}_{H(Z)}(x) &&(\text{by (\ref{upwardclosed})}). \end{align*} As $\delta_{Z}$ is the identity mapping on $C$, $\delta_{Z}$ is an isomorphism of neighborhood frames. \end{proof} By Theorem~\ref{camanfr} and Theorem~\ref{mkfnfr}, we have the following: \begin{theorem} For any cardinal number $\kappa$, $\mathbf{CAMA}_{\kappa}$ and $\mathbf{MRKF}_{\kappa}$ are dually equivalent. \end{theorem} By the same argument as Theorem~\ref{mkfnfr}, it follows that the category of all multi-relational Kripke frames is equivalent to the category of all upward closed neighborhood frames which include the whole set. These categories are dually equivalent to the category of algebras which is obtained by weakening the definition of the modal operator in $\mathbf{CAMA}$ to the following: $\Diamond 0=0$ and $\Diamond x\leq \Diamond y$ whenever $x\leq y$. \section{Functor from $\mathbf{CAMA}_{\kappa}$ to $\mathbf{MRKF}_{\kappa}$} In the rest of the paper, we give another direct proof of duality between $\mathbf{CAMA}_{\kappa}$ and $\mathbf{MRKF}_{\kappa}$ for every regular cardinal $\kappa$. First, we define a contravariant functor $F:\mathbf{CAMA}_{\kappa}\rightarrow\mathbf{MRKF}_{\kappa}$ for every regular cardinal $\kappa$. 
For any object $A$ of $\mathbf{CAMA}_{\kappa}$, a multi-relational Kripke frame $F(A)$ is defined by $$ F(A)=\langle\mathcal{A}(A),\{R(X)\mid X\subseteq A,\ |X|<\kappa\}\rangle, $$ where, for any $a$ and $b$ in $\mathcal{A}(A)$, $$ a\rel{R(X)}b \ \Leftrightarrow\ a\leq\bigwedge \Diamond\left[\mbox{$\Upsilonarrow$} b\cap X\right], $$ and for any arrow $f:A\rightarrow B$ of $\mathbf{CAMA}_{\kappa}$, the mapping $F(f):\mathcal{A}(B)\rightarrow\mathcal{A}(A)$ is defined by $$ F(f)(b)=\ladj{f}(b) $$ for any $b\in\mathcal{A}(B)$. Below, we show that $F$ is a well-defined contravariant functor. \begin{proposition} Let $\kappa$ be a regular cardinal. If $A$ is a $\kappa$-additive complete atomic modal algebra, $F(A)$ is a $\kappa$-downward directed multi-relational Kripke frame. \end{proposition} \begin{proof} It is clear that $F(A)$ is a multi-relational Kripke frame. We show that $F(A)$ is $\kappa$-downward directed. Suppose $X_{i}\subseteq A$ and $|X_{i}|<\kappa$ for any $i\in I$. If $|I|<\kappa$, then $$ |\bigcup_{i\in I}X_{i}|<\kappa, $$ since $\kappa$ is regular. Hence, $F(A)$ is $\kappa$-downward directed, because $$ R\left(\bigcup_{i\in I}X_{i}\right)\subseteq \bigcap_{i\in I} R(X_{i}). $$ \end{proof} \begin{definition}\rm Let $A$ be a $\kappa$-additive complete atomic modal algebra. For any $X\subseteq A$ and $a\in \mathcal{A}(A)$, $p(X,a)$ denotes an element of $A$ defined by $$ p(X,a)=\bigvee\Diamond^{-1}\left[\mbox{$\downarrow$}(-a)\right]\cap X. $$ \end{definition} \begin{lemma}\label{equivalence} Let $A$ be a $\kappa$-additive complete atomic modal algebra, $X$ a subset of $A$ such that $|X|<\kappa$, and $a\in\mathcal{A}(A)$. Then for any $a'\in\mathcal{A}(A)$, $$ a\rel{R(X)}a' \ \Leftrightarrow\ a'\not\leq p(X,a). $$ \end{lemma} \begin{proof} For any $a'\in\mathcal{A}(A)$, \begin{align*} a\rel{R(X)}a' &\ \Leftrightarrow\ a\leq \bigwedge\Diamond\left[\mbox{$\Upsilonarrow$} a'\cap X\right]\\ &\ \Leftrightarrow\ \forall x\in X ( a'\leq x\ \Rightarrow\ a\leq \Diamond x )\\ &\ \Leftrightarrow\ \forall x\in X ( a\not\leq \Diamond x \ \Rightarrow\ a'\not\leq x )\\ &\ \Leftrightarrow\ \forall x\in X ( a\leq -\Diamond x \ \Rightarrow\ a'\not\leq x) && \text{($a\in\mathcal{A}(A)$)} \\ &\ \Leftrightarrow\ \forall x\in X ( \Diamond x\leq -a \ \Rightarrow\ a'\not\leq x )\\ &\ \Leftrightarrow\ \forall x \left( x\in\Diamond^{-1}\left[\mbox{$\downarrow$}(-a)\right]\cap X \ \Rightarrow\ a'\not\leq x \right)\\ &\ \Leftrightarrow\ a' \not\leq \bigvee\Diamond^{-1}\left[\mbox{$\downarrow$}(-a)\right]\cap X && \text{($a'\in\mathcal{A}(A)$)} . \end{align*} \end{proof} \begin{lemma}\label{inv-equiv} Let $A$ and $B$ be $\kappa$-additive complete atomic modal algebras, $f:A\rightarrow B$ a homomorphism of complete modal algebras, $Y\subseteq B$ such that $|Y|<\kappa$, and $b\in\mathcal{A}(B)$. Suppose $X=\{\radj{f}(p(Y,b))\}$. Then for any $a\in\mathcal{A}(A)$, $$ \ladj{f}(b)\rel{R(X)}a \ \Leftrightarrow\ a\not\leq\radj{f}(p(Y,b)). $$ \end{lemma} \begin{proof} By Lemma \ref{equivalence}, all we have to prove is $$ \radj{f}(p(Y,b))=p(X,\ladj{f}(b)). $$ As $$ p(X,\ladj{f}(b)) = \bigvee\Diamond^{-1}\left[\mbox{$\downarrow$}(-\ladj{f}(b))\right]\cap \{\radj{f}(p(Y,b))\}, $$ it is enough to show $$ \radj{f}(p(Y,b))\in \Diamond^{-1}\left[\mbox{$\downarrow$}\left(-\ladj{f}(b)\right)\right]. 
$$ Since $B$ is $\kappa$-additive, \begin{align*} \Diamond f(\radj{f}(p(Y,b))) &\leq \Diamond p(Y,b) && \text{(by (\ref{monotone}))}\\ &= \Diamond\bigvee\Diamond^{-1}\left[\mbox{$\downarrow$}(-b)\right]\cap Y \\ &= \bigvee\Diamond\left[\Diamond^{-1}\left[\mbox{$\downarrow$}(-b)\right]\cap Y\right] && \text{($\kappa$-additivity)}\\ &\leq \bigvee\mbox{$\downarrow$}(-b)\\ &= -b. \end{align*} Hence $$ b\leq-\Diamond f(\radj{f}(p(Y,b)))=f(-\Diamond\radj{f}(p(Y,b))). $$ By (\ref{adjoint}), $$ \ladj{f}(b)\leq-\Diamond\radj{f}(p(Y,b)), $$ so \begin{equation*}\label{property} \Diamond\radj{f}(p(Y,b))\leq-\ladj{f}(b). \end{equation*} Hence, $$ \radj{f}(p(Y,b))\in \Diamond^{-1}\left[\mbox{$\downarrow$}\left(-\ladj{f}(b)\right)\right]. $$ \end{proof} \begin{proposition} Let $\kappa$ be a regular cardinal. For any $\kappa$-additive complete atomic modal algebras $A$ and $B$ and for any homomorphism $f:A\rightarrow B$ of complete modal algebras, $F(f):\mathcal{A}(B)\rightarrow\mathcal{A}(A)$ is a homomorphism of multi-relational Kripke frames from $F(B)$ to $F(A)$. \end{proposition} \begin{proof} Condition 1 of Definition \ref{def:mkf}: Take any $b_{1}\in\mathcal{A}(B)$ and any $X\subseteq A$ such that $|X|<\kappa$. Then $|f\left[X\right]|<\kappa$. Take any $b_{2}\in\mathcal{A}(B)$. We show that $$ b_{1}\rel{R(f\left[X\right])}b_{2} \ \Rightarrow\ \ladj{f}(b_{1})\rel{R(X)}\ladj{f}(b_{2}). $$ Suppose $b_{1}\rel{R(f\left[X\right])}b_{2}$. By definition of $R(f\left[X\right])$, \begin{equation*} b_{1} \leq \bigwedge\Diamond\left[\mbox{$\Upsilonarrow$} b_{2}\cap f\left[X\right]\right]. \end{equation*} Therefore, \begin{equation}\label{eq:assump} \ladj{f}(b_{1}) = \bigwedge_{x\in A,\ b_{1}\leq f(x)}x \leq \bigwedge \left\{ x\in A\mid \bigwedge\Diamond\left[\mbox{$\Upsilonarrow$} b_{2}\cap f\left[X\right]\right]\leq f(x) \right\}. \end{equation} On the other hand, \begin{equation}\label{eq:member} \Diamond\left[\mbox{$\Upsilonarrow$}\ladj{f}(b_{2})\cap X\right] \subseteq \left\{ x\in A\mid \bigwedge\Diamond\left[\mbox{$\Upsilonarrow$} b_{2}\cap f\left[X\right]\right]\leq f(x) \right\}, \end{equation} because, for any $z\in\Diamond\left[\mbox{$\Upsilonarrow$} \ladj{f}(b_{2})\cap X\right]$, there exists $u\in X$ such that $$ \ladj{f}(b_{2})\leq u,\ \Diamond u=z, $$ and this implies $b_{2}\leq f(u)$ and $f(u)\in f\left[X\right]$, and therefore, $$ \bigwedge\Diamond\left[\mbox{$\Upsilonarrow$} b_{2}\cap f\left[X\right]\right] \leq \Diamond f(u) = f(\Diamond u) = f(z). $$ By (\ref{eq:assump}) and (\ref{eq:member}), $$ \ladj{f}(b_{1})\leq \bigwedge\Diamond\left[\mbox{$\Upsilonarrow$} \ladj{f}(b_{2})\cap X\right]. $$ Hence, $$ \ladj{f}(b_{1}) \rel{R(X)} \ladj{f}(b_{2}). $$ \noindent Condition 2 of Definition \ref{def:mkf}: Take any $b\in\mathcal{A}(B)$ and any $Y\subseteq B$ such that $|Y|<\kappa$. Define $X\subseteq A$ by $$ X=\{\radj{f}(p(Y,b))\}. $$ Suppose $a\in\mathcal{A}(A)$ and $\ladj{f}(b)\rel{R(X)} a$. Then $a\not\leq \radj{f}(p(Y,b))$ by Lemma \ref{inv-equiv}. Hence, $f(a)\not\leq p(Y,b)$. Since $B$ is atomic, there exists $b'\in\mathcal{A}(B)$ such that $$ b'\leq f(a),\ \ b'\not\leq p(Y,b). $$ Then $\ladj{f}(b')\leq a$, and $b\rel{R(Y)}b'$ by Lemma \ref{equivalence}. Since $\ladj{f}(b')$ and $a$ are in $\mathcal{A}(A)$, $\ladj{f}(b')= a$. \end{proof} \section{Functor from $\mathbf{MRKF}_{\kappa}$ to $\mathbf{CAMA}_{\kappa}$} We define a contravariant functor $G:\mathbf{MRKF}_{\kappa}\rightarrow\mathbf{CAMA}_{\kappa}$ for every cardinal number $\kappa$. 
For any object $M=\langle W, S\rangle$ of $\mathbf{MRKF}_{\kappa}$, a complete atomic modal algebra $G(M)$ is defined by $$ G(M)= \langle \mathcal{P}(W);\cup,\cap,W\setminus-,\Diamond_{M},\emptyset,W \rangle, $$ where $\Diamond_{M}$ is defined by $$ \Diamond_{M}X=\bigcap_{R\in S}\drel{R}X $$ for any $X\subseteq W$, and for any multi-relational Kripke frames $M_{1}=\langle W_{1},S_{1}\rangle$, $M_{2}=\langle W_{2},S_{2}\rangle$, and any arrow $g:M_{1}\rightarrow M_{2}$ of $\mathbf{MRKF}_{\kappa}$, the mapping $G(g):\mathcal{P}(W_{2})\rightarrow\mathcal{P}(W_{1})$ is defined by $$ G(g)(X)=g^{-1}\left[X\right] $$ for any $X\subseteq W_{2}$. Below, we show that $G$ is a well-defined contravariant functor. \begin{proposition} Let $\kappa$ be a cardinal number. If $M=\langle W, S\rangle$ is a $\kappa$-downward directed multi-relational Kripke frame, $G(M)$ is a $\kappa$-additive complete atomic modal algebra. \end{proposition} \begin{proof} It is clear that $\langle \mathcal{P}(W);\cup,\cap,W\setminus-,\emptyset,W \rangle$ is a complete atomic Boolean algebra. Since $\drel{R}\emptyset=\emptyset$ for any $R\in S$, $$ \Diamond_{M}\emptyset=\bigcap_{R\in S}\drel{R}\emptyset=\emptyset. $$ Let $\{X_{i}\}_{i\in I}$ be a subset of $\mathcal{P}(W)$ such that $|I|<\kappa$. Since $\Diamond_{M}$ is order preserving, $$ \bigcup_{i\in I}\Diamond_{M}X_{i}\subseteq \Diamond_{M}\bigcup_{i\in I}X_{i}. $$ We show the converse. For any $w\in W$, \begin{align*} w\not\in \bigcup_{i\in I}\Diamond_{M}X_{i} &\ \Leftrightarrow\ w\not\in \bigcup_{i\in I}\bigcap_{R\in S}\drel{R}X_{i}\\ &\ \Leftrightarrow\ \forall{i\in I}\left( w\not\in \bigcap_{R\in S}\drel{R}X_{i} \right)\\ &\ \Leftrightarrow\ \forall{i\in I}\exists{R_{i}\in S} \forall x\in X_{i} \left( w\not\rel{R_{i}} x \right). \end{align*} Since $M$ is $\kappa$-downward directed, there exists $Q\in S$ such that $$ Q\subseteq \bigcap_{i\in I} R_{i}. $$ Then $$ \forall{i\in I} \forall x\in X_{i} \left( w\not\rel{Q} x \right). $$ Thus, $$ w\not\in \drel{Q} \bigcup_{i\in I} X_{i}. $$ Hence, $$ w\not\in \bigcap_{R\in S} \drel{R} \bigcup_{i\in I} X_{i} = \Diamond_{M} \bigcup_{i\in I} X_{i}. $$ \end{proof} \begin{proposition}\label{garrow} Let $\kappa$ be a cardinal number. For any $\kappa$-downward directed multi-relational Kripke frames $M_{1}=\langle W_{1},S_{1}\rangle$, $M_{2}=\langle W_{2},S_{2}\rangle$ and a homomorphism $g:M_{1}\rightarrow M_{2}$ of multi-relational Kripke frames, $G(g):\mathcal{P}(W_{2})\rightarrow\mathcal{P}(W_{1})$ is a homomorphism of complete modal algebras from $G(M_{2})$ to $G(M_{1})$. \end{proposition} \begin{proof} We only show that for any $U\subseteq W_{2}$, $$ \Diamond_{M_{1}}G(g)(U)=G(g)(\Diamond_{M_{2}}U). $$ All we have to prove is $$ \bigcap_{R\in S_{1}}\drel{R} g^{-1}\left[U\right] = g^{-1}\left[\bigcap_{Q\in S_{2}}\drel{Q}U\right]. $$ \noindent ($\subseteq$): Take any $x\in W_{1}$ and suppose $$ x\in \bigcap_{R\in S_{1}}\drel{R} g^{-1}\left[U\right]. $$ Then $$ \forall R\in S_{1} \exists w_{R}\in g^{-1}\left[U\right](x\rel{R}w_{R}). $$ Since $g$ is a homomorphism of multi-relational Kripke frames, for any $Q\in S_{2}$, there exists $R_{Q}\in S_{1}$ such that for any $y\in W_{1}$ \begin{equation*} x\rel{R_{Q}}y\ \Rightarrow\ g(x)\rel{Q}g(y). \end{equation*} Therefore, for any $Q\in S_{2}$, there exist $R_{Q}\in S_{1}$ and $w_{R_{Q}}\in g^{-1}\left[U\right]$ such that $$ g(x)\rel{Q}g(w_{R_{Q}}). $$ Hence, $$ g(x)\in\drel{Q}U. $$ Since $Q$ is arbitrary, $$ g(x)\in\bigcap_{Q\in S_{2}}\drel{Q}U. 
$$ Hence, $$ x\in g^{-1}\left[\bigcap_{Q\in S_{2}}\drel{Q}U\right]. $$ \noindent ($\supseteq$): Take any $x\in W_{1}$. Then \begin{align*} x\in g^{-1}\left[\bigcap_{Q\in S_{2}}\drel{Q}U\right] &\ \Leftrightarrow\ g(x)\in\bigcap_{Q\in S_{2}}\drel{Q}U\\ &\ \Leftrightarrow\ \forall Q\in S_{2}\exists u_{Q}\in U \left(g(x)\rel{Q}u_{Q}\right). \end{align*} Since $g$ is a homomorphism of multi-relational Kripke frames, for any $R\in S_{1}$, there exists $Q_{R}\in S_{2}$ such that for any $u\in W_{2}$ \begin{equation*} g(x)\rel{Q_{R}}u \ \Rightarrow\ \mbox{$\exists y\in W_{1}$ such that $x\rel{R}y$ and $g(y)=u$}. \end{equation*} Therefore, for any $R\in S_{1}$, there exist $Q_{R}\in S_{1}$, $u_{Q_{R}}\in U$, and $y\in W_{1}$ such that $$ x\rel{R}y,\ \ g(y)=u_{Q_{R}}\in U. $$ Hence, $$ x\in \drel{R} g^{-1}\left[U\right]. $$ Since $R$ is arbitrary, $$ x\in \bigcap_{R\in S_{1}}\drel{R} g^{-1}\left[U\right]. $$ \end{proof} \section{Duality between $\mathbf{CAMA}_{\kappa}$ and $\mathbf{MRKF}_{\kappa}$} In this section, we show that for any regular cardinal $\kappa$, $$ \mathrm{Id}_{\mathbf{CAMA}_{\kappa}}\cong G\circ F,\ \ \mathrm{Id}_{\mathbf{MRKF}_{\kappa}}\cong F\circ G. $$ \begin{proposition} Let $\kappa$ be a regular cardinal. For any object $A$ of $\mathbf{CAMA}_{\kappa}$, define a mapping $\tau_{A}:A\rightarrow G(F(A))$ by $$ \tau_{A}(x)=\{a\in\mathcal{A}(A)\mid a\leq x\} $$ for any $x\in A$. Then $\tau$ is a natural transformation from $\mathrm{Id}_{\mathbf{CAMA}_{\kappa}}$ to $G\circ F$. \end{proposition} \begin{proof} Let $f:A\rightarrow B$ be an arrow of $\mathbf{CAMA}_{\kappa}$. Then for any $x\in A$ and $b\in\mathcal{A}(B)$, \begin{align*} b\in G(F(f))\circ\tau_{A}(x) &\ \Leftrightarrow\ b\in (\ladj{f})^{-1}\left[\{a\in\mathcal{A}(A)\mid a\leq x\}\right]\\ &\ \Leftrightarrow\ \ladj{f}(b)\leq x\\ &\ \Leftrightarrow\ b\leq f(x)\\ &\ \Leftrightarrow\ b\in\tau_{B}\circ f(x). \end{align*} Hence, $$ G(F(f))\circ\tau_{A}=\tau_{B}\circ f. $$ \end{proof} \begin{theorem}\label{camatomkf} Let $\kappa$ be a regular cardinal. For any object $A$ of $\mathbf{CAMA}_{\kappa}$, $\tau_{A}:A\rightarrow G(F(A))$ is an isomorphism of complete modal algebras. \end{theorem} \begin{proof} It is clear that $\tau_{A}$ is an isomorphism of complete Boolean algebras. We show that $$ \tau_{A}(\Diamond x)=\Diamond_{F(A)}\tau_{A}(x) $$ for any $x\in A$. What we have to show is $$ \{a\in\mathcal{A}(A)\mid a\leq\Diamond x\} = \bigcap_{X\subseteq A,\ |X|<\kappa}\drel{R(X)}\{a\in\mathcal{A} (A)\mid a\leq x\}. $$ \noindent ($\subseteq$): Suppose $a\leq\Diamond x$. Take any $X\subseteq A$ such that $|X|<\kappa$. If $x\leq p(X,a)$, then \begin{align*} a &\leq \Diamond x\\ &\leq \Diamond p(X,a)\\ &= \Diamond \bigvee\Diamond^{-1}\left[\mbox{$\downarrow$}(-a)\right]\cap X\\ &= \bigvee\Diamond\left[\Diamond^{-1}\left[\mbox{$\downarrow$}(-a)\right]\cap X\right] && \text{($\kappa$-additivity)}\\ &\leq \bigvee \mbox{$\downarrow$}(-a)\\ &= -a, \end{align*} which contradicts to $a\in\mathcal{A}(A)$. Hence, $x\not\leq p(X,a)$. As $A$ is atomic, there exists $b\in\mathcal{A}(A)$ such that $b\leq x$ and $b\not\leq p(X,a)$. Then $a\rel{R(X)}b$ by Lemma \ref{equivalence}, and $$ a\in\drel{R(X)}\{b\in\mathcal{A}(A)\mid b\leq x\}. $$ As $X$ is taken arbitrarily, $$ a\in\bigcap_{X\subseteq A,\ |X|<\kappa}\drel{R(X)}\{b\in\mathcal{A}(A)\mid b\leq x\}. $$ \noindent ($\supseteq$): Suppose $a\not\leq\Diamond x$. 
Then for any $b\in\mathcal{A}(A)$ such that $b\leq x$, $$ a\not\leq \Diamond x=\bigwedge\Diamond \left[\mbox{$\Upsilonarrow$} b\cap \{x\}\right]. $$ Hence, $$ a\not\in\drel{R\left(\{x\}\right)}\{b\in\mathcal{A}(A)\mid b\leq x\}. $$ Thus, $$ a\not\in\bigcap_{X\subseteq A,\ |X|<\kappa}\drel{R(X)}\{b\in\mathcal{A}(A)\mid b\leq x\}. $$ \end{proof} \begin{proposition} Let $\kappa$ be a regular cardinal. For any object $M=\langle W,S\rangle$ of $\mathbf{MRKF}_{\kappa}$, define $\theta_{M}:M\rightarrow F(G(M))$ by $$ \theta_{M}(w)=\{w\} $$ for any $w\in W$. Then $\theta$ is a natural transformation from $\mathrm{Id}_{\mathbf{MRKF}_{\kappa}}$ to $F\circ G$. \end{proposition} \begin{proof} For any $M$, $\theta_{M}$ is well-defined as a mapping, since $$\mathcal{A}(G(M))=\{\{w\}\mid w\in W\}. $$ Let $M_{1}=\langle W_{1},S_{1}\rangle$ and $M_{2}=\langle W_{2},S_{2}\rangle$ be objects of $\mathbf{MRKF}_{\kappa}$, and $g:M_{1}\rightarrow M_{2}$ an arrow of $\mathbf{MRKF}_{\kappa}$. Then for any $w\in W_{1}$, \begin{align*} F(G(g))\circ\theta_{M_{1}}(w) &= G(g)_{\ast}(\{w\})\\ &= \bigcap \{X\subseteq W_{2}\mid w\in G(g)(X)\}\\ &= \bigcap \{X\subseteq W_{2}\mid w\in g^{-1}\left[X\right]\}\\ &= \bigcap \{X\subseteq W_{2}\mid g(w)\in X\}\\ &= \{g(w)\}\\ &= \theta_{M_{2}}\circ g(w). \end{align*} Hence, $$ F(G(g))\circ\theta_{M_{1}}=\theta_{M_{2}}\circ g. $$ \end{proof} \begin{theorem}\label{mkftocama} Let $\kappa$ be a regular cardinal. For any object $M=\langle W, S\rangle$ of $\mathbf{MRKF}_{\kappa}$, $\theta_{M}:M\rightarrow F(G(M))$ is an isomorphism of multi-relational Kripke frames. \end{theorem} \begin{proof} It is clear that $\theta_{M}$ is a set-theoretical bijection. We show that it is a homomorphism of multi-relational Kripke frames. By definition of $G$ and $F$, $$ F(G(M)) = \langle \left\{\{w\}\mid w\in W\right\}, \left\{R(U)\mid U\subseteq \mathcal{P}(W),\ |U|<\kappa\right\} \rangle, $$ where \begin{align*} \{w_{1}\}\rel{R(U)}\{w_{2}\} &\ \Leftrightarrow\ \{w_{1}\}\subseteq\bigcap\Diamond_{M}\left[\mbox{$\Upsilonarrow$}\{w_{2}\}\cap U\right]. \end{align*} By definition of $\Diamond_{M}$ in $G(M)$, \begin{align*} \{w_{1}\}\rel{R(U)}\{w_{2}\} &\ \Leftrightarrow\ \{w_{1}\}\subseteq \bigcap\left\{\bigcap_{R\in S}\drel{R} X \mid X\in\mbox{$\Upsilonarrow$}\{w_{2}\}\cap U\right\}\\ &\ \Leftrightarrow\ \forall X\in U \left(w_{2}\in X\ \Rightarrow\ w_{1}\in\bigcap_{R\in S}\drel{R} X \right) \\ &\ \Leftrightarrow\ \forall X\in U\left( w_{1}\not\in\bigcap_{R\in S}\drel{R} X \ \Rightarrow\ w_{2}\not\in X\right). \end{align*} \noindent Condition 1 of Definition \ref{def:mkf}: Take any $w\in W$ and any $U\in \mathcal{P}(W)$ such that $|U|<\kappa$. For any $X\in U$, if $w\not\in\bigcap_{R\in S}\drel{R} X$, then we can fix one $R_{X}\in S$ such that $w\not\in\drel{R_{X}}X$. Since $M$ is $\kappa$-downward directed, there exists $Q\in S$ such that $$ Q\subseteq \bigcap\left\{R_{X}\mid X\in U,\ w\not\in\bigcap_{R\in S}\drel{R} X\right\}. $$ We claim that for any $w'\in W$, $$ w\rel{Q}w'\ \Rightarrow\ \{w\}\rel{R(U)}\{w'\}. $$ Suppose $w\rel{Q}w'$. Take any $X\in U$ and suppose $w\not\in\bigcap_{R\in S}\drel{R} X$. Then $w\not\in\drel{R_{X}}X$. As $w\rel{R_{X}}w'$ by definition of $Q$, $w'\not\in X$. \noindent Condition 2 of Definition \ref{def:mkf}: Take any $w\in W$ and any $R\in S$. Let $$ U=\left\{W\setminus\urel{R}w\right\}. $$ Clearly, $$ w\not\in\drel{R}\left(W\setminus\urel{R}w\right). $$ Therefore, $$ w\not\in\bigcap_{Q\in S}\drel{Q}\left(W\setminus\urel{R}w\right). 
$$ Hence, for any $v\in W$, \begin{align*} \{w\}\rel{R(U)}\{v\} &\ \Leftrightarrow\ v\not\in W\setminus\urel{R}w\\ &\ \Leftrightarrow\ w\rel{R}v. \end{align*} \end{proof} \begin{theorem}\label{mkfcama} For any regular cardinal $\kappa$, $\mathbf{CAMA}_{\kappa}$ and $\mathbf{MRKF}_{\kappa}$ are dually equivalent. \end{theorem} \begin{proof} Theorem \ref{camatomkf} and Theorem \ref{mkftocama}. \end{proof} \begin{corollary}\label{mkfmap} Let $M_{1}=\langle W_{1},S_{1}\rangle$ and $M_{2}=\langle W_{2},S_{2}\rangle$ be multi-relational Kripke frames. A mapping $f:W_{1}\rightarrow W_{2}$ is a homomorphism of multi-relational Kripke frames from $M_{1}$ to $M_{2}$ if and only if the mapping $g:\mathcal{P}(W_{2})\rightarrow \mathcal{P}(W_{1})$ which is defined by $$ g:S\mapsto f^{-1}[S] $$ for any $S\subseteq W_{2}$ is a homomorphism of complete modal algebras from $G(M_{2})$ to $G(M_{1})$. \end{corollary} \begin{proof} We only show the if-part. Suppose that $g$ is a homomorphism of complete modal algebras. Then $F(g):FG(M_{1})\rightarrow FG(M_{2})$ is a homomorphism of multi-relational Kripke frames. Let $$ h=\theta_{M_{2}}^{-1}\circ F(g)\circ \theta_{M_{1}}. $$ By definition of $\theta$ and $\tau$, the composite of $G\theta$ and $\tau_{G}$ is the identity natural transformation on $G$. Hence, for any $S\subseteq W_{2}$, \begin{align*} h^{-1}[S] &= G(h)(S)\\ &= \left(G(\theta_{M_{1}})\circ GF(g)\circ G(\theta_{M_{2}}^{-1})\right)(S)\\ &= \left(\tau_{G(M_{1})}^{-1}\circ GF(g)\circ \tau_{G(M_{2})}\right)(S)\\ &= g(S)\\ &= f^{-1}[S]. \end{align*} Thus, $f=h$ is a homomorphism of multi-relational Kripke frames. $$ \xymatrix{ M_{1} \ar[d]^{\theta_{M_{1}}} \ar[r]^{h} & M_{2} \ar[d]^{\theta_{M_{2}}}\\ FG(M_{1})\ar[r]^{F(g)} & FG(M_{2}) } \hspace{20pt} \xymatrix{ G(M_{1}) & G(M_{2}) \ar[l]^{G(h)} \\ GFG(M_{1})\ar[u]_{G\theta_{M_{1}}}& GFG(M_{2})\ar[l]^{GF(g)} \ar[u]_{G\theta_{M_{2}}}\\ G(M_{1}) \ar[u]_{\tau_{G(M_{1})}} & G(M_{2})\ar[l]^{g} \ar[u]_{\tau_{G(M_{2})}} } $$ \end{proof} \section{Application} As an application of the duality theorem, we show that for any regular cardinals $\kappa$ and $\kappa'$ with $\kappa<\kappa'$, the inclusion functor from $\mathbf{CAMA}_{\kappa'}$ to $\mathbf{CAMA}_{\kappa}$ and that from $\mathbf{MRKF}_{\kappa'}$ to $\mathbf{MRKF}_{\kappa}$ are not essentially surjective, where a functor $F$ from a category $C$ to a category $D$ is said to be {\em essentially surjective}, if for any object $d$ of $D$, there exists an object $c$ of $C$ such that $F(c)$ is isomorphic to $d$. The following proposition is based on Fact 4.5 of \cite{mnr16}. \begin{proposition}\label{minari} Let $\kappa$ and $\kappa'$ be regular cardinals. If $\kappa<\kappa'$, there exists a complete atomic modal algebra $A$ which is $\kappa$-additive but not $\kappa'$-additive. \end{proposition} \begin{proof} Consider a multi-relational Kripke frame $M$ defined by $$ M= \left\langle \kappa\cup\{\infty\}, \{Q_{X}\mid X\subseteq\kappa,\ |X|<\kappa\} \right\rangle $$ where $$ Q_{X}=\{(\infty,\alpha)\mid \alpha\not\in X\}. $$ Suppose $|I|<\kappa$, and for any $i\in I$, suppose $X_{i}\subseteq \kappa$ and $|X_{i}|<\kappa$. Then $|\bigcup_{i\in I}X_{i}|<\kappa$ and $$ Q_{\bigcup_{i\in I}X_{i}} = \bigcap_{i\in I}Q_{X_{i}}. $$ Hence, $M$ is an object of $\mathbf{MRKF}_{\kappa}$. Therefore, by the duality theorem, $G(M)$ is an object of $\mathbf{CAMA}_{\kappa}$. We show that in $G(M)$, $$ \Diamond_{M}\bigvee_{i\in\kappa}\{i\} \not\leq \bigvee_{i\in\kappa}\Diamond_{M}\{i\}. 
$$ For any $X\subseteq\kappa$ such that $|X|<\kappa$, there exists $i\in \kappa$ such that $i\not\in X$. Hence, $$ \infty\in\bigcap_{X\subseteq\kappa,\ |X|<\kappa}\drel{Q_{X}} \bigcup_{i\in\kappa} \{i\}. $$ Thus, $$ \infty\in \Diamond_{M}\bigvee_{i\in\kappa}\{i\}. $$ \noindent On the other hand, for any $i\in\kappa$, $$ \infty\not\in \drel{Q_{\{i\}}}\{i\}. $$ Therefore, $$ \infty \not\in \bigcap_{X\subseteq\kappa,\ |X|<\kappa}\drel{Q_{X}} \{i\}. $$ Since $i$ was taken arbitrarily, $$ \infty \not\in \bigcup_{i\in\kappa} \bigcap_{X\subseteq\kappa,\ |X|<\kappa}\drel{Q_{X}} \{i\}. $$ Hence, $$ \infty\not\in \bigvee_{i\in\kappa}\Diamond_{M}\{i\}. $$ \end{proof} \begin{theorem} Let $\kappa$ and $\kappa'$ be regular cardinals such that $\kappa<\kappa'$. Then the inclusion functor from $\mathbf{CAMA}_{\kappa'}$ to $\mathbf{CAMA}_{\kappa}$ and that from $\mathbf{MRKF}_{\kappa'}$ to $\mathbf{MRKF}_{\kappa}$ are not essentially surjective. \end{theorem} \begin{proof} Let $M$ be the multi-relational Kripke frame defined in Proposition \ref{minari}. Then $G(M)$ is an object of $\mathbf{CAMA}_{\kappa}$, and no object of $\mathbf{CAMA}_{\kappa'}$ is isomorphic to $G(M)$, since $G(M)$ is not $\kappa'$-additive while every object of $\mathbf{CAMA}_{\kappa'}$ is, and $\kappa'$-additivity is preserved by isomorphisms of complete modal algebras. Hence, by Theorem \ref{mkfcama}, no objects of $\mathbf{MRKF}_{\kappa'}$ are isomorphic to $M$. \end{proof} \newcommand{\noop}[1]{} \end{document}
\begin{document} \def\Pcm#1{{\mathcal{#1}}} \def\eqref#1{(\ref{#1})} \def\er#1{eqn.\eqref{#1}} \title{ Three results on weak measurements.} \author{ N.D. Hari Dass } \email{[email protected] } \affiliation{TIFR-TCIS, Hyderabad 500075 } \maketitle \hrule {\noindent\bf Three recent results on weak measurements are presented. They are: i) repeated measurements on a single copy cannot provide any information about it and, further, in the limit of a very large number of such measurements, weak measurements have exactly the same characteristics as strong measurements; ii) the apparent non-invasiveness of weak measurements is \emph{illusory}, and they are no more advantageous than strong measurements even in the specific context of establishing Leggett-Garg inequalities, when errors are properly taken into account; and, finally, iii) weak value measurements are optimal, in the precise sense of Wootters and Fields, when the post-selected states are mutually unbiased with respect to the eigenstates of the observable whose weak values are being measured. The notion of weak value coordinates for state spaces is introduced and elaborated. } \hrule \noindent {\bf Keywords:} Projective, Weak, Repeated weak, Non-invasive, Optimal. \section{Quantum Measurements} The chief ingredients for a quantum measurement on a quantum system are i) an appropriate apparatus, with well defined pointer states $P_i$ (these, in the present folklore, are to be determined by suitable apparatus decoherence processes), and ii) an appropriate measurement interaction ${\cal M}$ between the system and the apparatus. The latter is determined by the observable of the system to be measured. A point to be emphasised is that the \emph{same} measurement interaction can be used both for the strong, projective measurements, as well as for the so called \emph{weak measurements}. For example, for qubit measurements, this can be taken to be (A, S stand for apparatus and system, respectively, and $P_i$ are the pointer states of the apparatus): \begin{eqnarray} \label{eq:measint} & &|P_i\rangle_A\otimes|\uparrow\rangle_S\xrightarrow{{\cal M}}\,|P_{i+1}\rangle_A\otimes|\uparrow\rangle_S\nonumber\\ & &|P_i\rangle_A\otimes|\downarrow\rangle_S\xrightarrow{{\cal M}}\,|P_{i-1}\rangle_A\otimes|\downarrow\rangle_S \end{eqnarray} This is symbolically depicted in Figure.(\ref{fig:measint0}), where the central line denotes the pointer state $P_i$, and those flanking it denote $P_{i\pm 1}$. \begin{figure} \caption{Measurement interaction of eqn.(\ref{eq:measint}).\label{fig:measint0}} \end{figure} \subsection{Projective measurements} We now discuss the so called projective or strong measurements. For this, the initial state of the apparatus is taken to be a \emph{single} pointer state, say, $P_0$. The same measurement interaction discussed above now reads: \begin{equation} |P_0\rangle_A\otimes|\pm\rangle_S\xrightarrow{{\cal M}}\,|P_\pm\rangle_A\otimes|\pm\rangle_S \end{equation} in an obvious relabelling of states. Henceforth we shall drop the $\otimes$. 
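With the tensor product suppressed as just stated, the interaction of eqn.(\ref{eq:measint}) can, for concreteness, be realised as a controlled shift of the pointer (a minimal sketch; the translation operator $T$ is introduced here purely for illustration, and the precise realisation is immaterial for what follows). Writing $T$ for the pointer translation operator, $T|P_i\rangle = |P_{i+1}\rangle$, which is unitary when the pointer label $i$ is allowed to run over all integers, one may take \begin{equation} {\cal M}\,=\,T\,|\uparrow\rangle\langle \uparrow|\,+\,T^{-1}\,|\downarrow\rangle\langle \downarrow|. \end{equation} Both the projective measurements discussed here and the weak measurements of the next subsection use this same ${\cal M}$; only the initial apparatus state differs.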
If the initial state of the system is taken to be: \begin{equation} \label{eq:sysini} |\psi\rangle = \alpha|\uparrow\rangle+\beta|\downarrow\rangle\quad\quad |\alpha|^2+|\beta|^2=1 \end{equation} and the initial state of the apparatus-system complex is taken to be $|P_0\rangle\,|\psi\rangle$, the \emph{post-measurement-interaction state} of the composite is given by \begin{equation} |P_0\rangle\,|\psi\rangle\xrightarrow{{\cal M}}\,|\Psi\rangle_{SA}=\alpha|P_+\rangle|\uparrow\rangle+\beta|P_-\rangle|\downarrow\rangle \end{equation} As is well known, this is an entangled state and does not correspond to the expected state after a definite measurement outcome. The current folklore is that \emph{environmental decoherence} reduces the density matrix $\rho_{SA}$ of this pure state to the mixed state which, by construction, is diagonal in the pointer-state basis: \begin{equation} \rho_{SA}\xrightarrow{decoh}\,|\alpha|^2|P_+\rangle\langle P_+|\,|\uparrow\rangle\langle \uparrow|+|\beta|^2\,|P_-\rangle\langle P_-||\downarrow\rangle\langle \downarrow| \end{equation} The system itself can be efficiently characterized by its \emph{reduced density matrix}: \begin{equation} \rho_{red} = |\alpha|^2|\uparrow\rangle\langle \uparrow|+|\beta|^2|\downarrow\rangle\langle \downarrow| \end{equation} The so called \emph{purity} of this mixed state, defined as $\mathrm{tr}\,\rho_{SA}^2$, is given by $ 1-2|\alpha|^2|\beta|^2$. This is generically far from a purity value of unity. It should be appreciated that decoherence, however, does not explain the measurement process on an event-by-event basis. With each outcome, the system is irretrievably altered. The pointer position $+1$ occurs with probability $|\alpha|^2$, while the outcome $-1$ occurs with probability $|\beta|^2$. The mean pointer position is $|\alpha|^2-|\beta|^2$. The variance is the standard uncertainty associated with the state $|\psi\rangle$, and the error in the result of $M$ measurements falls off as $\frac{1}{\sqrt{M}}$. \subsection{Weak measurements} Now we turn to the so called \emph{weak measurements}. To demystify the hopelessly large hype (and many wrong statements), we consider a highly idealised example which nevertheless contains the essential features of this very interesting new category of measurements introduced by Aharonov and his collaborators \cite{aharonovorig} (for a detailed exposition of many aspects of weak measurements see \cite{nori}). The initial state of the apparatus is now taken to be a very broad superposition of pointer states with equal weights and no relative phases: \begin{equation} |A\rangle = \frac{1}{\sqrt{N}}\,\sum_{i=1}^{i=N}\,|P_i\rangle \end{equation} In some of the current literature, even this very broad state is treated as a pointer state, with its centroid identified as the corresponding \emph{pointer position}. It is quite meaningless to take this position. Introducing the apparatus state \begin{equation} |{\bar A}\rangle = \frac{1}{\sqrt{N-2}}\,\sum_{i=2}^{i=N-1}\,|P_i\rangle \end{equation} one sees that the measurement interaction of eqn.(\ref{eq:measint}) leads in this case to \begin{equation} |A\rangle|\uparrow\rangle\,\rightarrow\: \{\sqrt{\frac{N-2}{N}}\,|{\bar A}\rangle+\frac{1}{\sqrt{N}}\, (|P_N\rangle+|P_{N+1}\rangle)\}|\uparrow\rangle \end{equation} \begin{equation} |A\rangle|\downarrow\rangle\,\rightarrow\: \{\sqrt{\frac{N-2}{N}}\,|{\bar A}\rangle+\frac{1}{\sqrt{N}}\, (|P_0\rangle+|P_{1}\rangle)\}|\downarrow\rangle \end{equation} This is depicted in Figure.(\ref{fig:measint}). 
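For concreteness (the value $N=4$ is used purely for illustration), one then has $|A\rangle=\frac{1}{2}(|P_1\rangle+|P_2\rangle+|P_3\rangle+|P_4\rangle)$ and $|{\bar A}\rangle=\frac{1}{\sqrt{2}}(|P_2\rangle+|P_3\rangle)$, so that \begin{equation} |A\rangle|\uparrow\rangle\,\rightarrow\,\frac{1}{2}(|P_2\rangle+|P_3\rangle+|P_4\rangle+|P_5\rangle)|\uparrow\rangle = \left\{\frac{1}{\sqrt{2}}\,|{\bar A}\rangle+\frac{1}{2}(|P_4\rangle+|P_5\rangle)\right\}|\uparrow\rangle, \end{equation} and similarly for $|A\rangle|\downarrow\rangle$ with the edge states $|P_0\rangle,|P_1\rangle$; only these edge pointer states carry any dependence on the system state.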
\begin{figure} \caption{A weak measurement} \label{fig:measint} \end{figure} If the initial state of the apparatus and system is taken to be $|A\rangle\otimes|\psi\rangle$, with $|\psi\rangle$ as given by eqn.(\ref{eq:sysini}), the \emph{post-measurement-interaction} composite state is now given by: \begin{eqnarray} \label{eq:postmeasweakexample} & &\sqrt{\frac{N-2}{N}}\,|{\bar A}\rangle|\psi\rangle+\frac{\alpha}{\sqrt{N}}\,(|P_N\rangle+|P_{N+1}\rangle)|\uparrow\rangle\nonumber\\ & &+ \frac{\beta}{\sqrt{N}}\,(|P_0\rangle+|P_1\rangle)|\downarrow\rangle \end{eqnarray} The \emph{post-decoherence} system-apparatus mixed state, which is by construction diagonal in $P_i$ (the incorrectness of treating the initial apparatus state $|A\rangle$ as a pointer state becomes evident here), is easily worked out to be: \begin{eqnarray} \label{eq:postdecohweakexample} & &\frac{1}{N}\,\sum_{i=2}^{i=N-1}|P_i\rangle\langle P_i||\psi\rangle\langle \psi|\nonumber\\ &+&\frac{|\alpha|^2}{N}(|P_N\rangle\langle P_N|+|P_{N+1}\rangle\langle P_{N+1}|)|\uparrow\rangle\langle \uparrow|\nonumber\\ &+&\frac{|\beta|^2}{N}(|P_0\rangle\langle P_0|+|P_{1}\rangle\langle P_{1}|)|\downarrow\rangle\langle \downarrow| \end{eqnarray} The post-measurement reduced density matrix of the system is obtained by tracing over the apparatus state-space: \begin{equation} \label{eq:redrhoweakexample} \rho^{weak}_{red} = |\psi\rangle\langle \psi|-\frac{2}{N}(\alpha\beta^*|\uparrow\rangle\langle \downarrow|+\alpha^*\beta|\downarrow\rangle \langle \uparrow|) \end{equation} The purity of this reduced density matrix is \begin{equation} \label{eq:weakpurity} {\cal P}_{weak} = 1-\frac{8}{N}\left(1-\frac{1}{N}\right)|\alpha|^2|\beta|^2 \end{equation} When $N >> 1$, this post-measurement purity can be arbitrarily close to the unit purity of the system state before measurement. In this sense, the weak measurements appear to be highly \emph{non-invasive}, but there is more to invasiveness than just this measure. A number of important properties attributed to weak measurements in general can be gleaned from this highly idealized example. From eqn.(\ref{eq:postmeasweakexample}), it follows that with probability $1-2/N$, the system is not changed at all (extreme weakness). It is also important to observe that this 'weakness' has nothing to do with the strength of the measurement interaction. Rather, it is completely controlled by $N$, the \emph{width} of the initial apparatus state. While for most measurement outcomes there is no change of the system, the \emph{information} obtained about the system by these outcomes is also zero. This follows from the fact that the probabilities for these outcomes have \emph{no} dependence on the initial state. On the other hand, the outcomes $i=N, N+1$ each occur with the very low probability $\frac{|\alpha|^2}{N}$ and likewise, $i=0,1$ each with probability $\frac{|\beta|^2}{N}$. For these outcomes, the system is irretrievably changed exactly as in projective measurements! These probabilities being dependent on the system state, these outcomes give full information! Let us now calculate the mean pointer position ${\bar i}$ and the associated variance. Elementary calculations give the mean to be $(N+1)/2$ before the measurement and $(N+1)/2+|\alpha|^2-|\beta|^2$ after it. Therefore the shift in the mean pointer position is exactly the expectation value of the observable, as in the projective measurements. The variance in the pointer positions is now dramatically different. 
Before measurements it is $(N^2-1)/12$ while after measurements, it is still essentially this, but shifted by a tiny system-dependent part: \begin{equation} \label{eq:weakexamplevariance} (\Delta i)^2_{pre} = \frac{N^2-1}{12}\quad\quad (\Delta i)^2_{post} = (\Delta i)^2_{pre}+(\Delta S)^2_{\psi} \end{equation} The results for the mean and variance are exactly the same as for the most generic weak measurements \cite{nori}. In this elementary example, the deviations of pointer outcomes can trivially be much larger than the eigenvalues of the observable in question. There is no big mystery that needs some special understanding. Another noteworthy feature is that since for pointer outcomes, the system is mostly not an eigenstate of the observable (in the example, this happens only when the outcomes are $i=0,1,N,N+1$), there is no \emph{value} of the observable associated with the value of the pointer outcome, unlike the case in projective measurements. Too much has been made of this starting from the title of the first paper on weak measurements \cite{aharonovorig}. Now we turn out to a standard treatment of weak measurements. The \emph{Pointer variable} is taken to be $p$ the momentum. The \emph{Pointer states} are taken to be the momentum eigenstates $|p\rangle$. In practice, these are taken to be narrow gaussian wave packets in momentum representation. As seen in our extreme example, the initial apparatus state for weak measurements should be a \emph{very broad superposition} of pointer states i.e \begin{equation} \label{eq:appstategen} |A\rangle = {\bar N}_p\,\int\,dp\,e^{-\frac{p^2}{2\Delta_p^2}}\,|p\rangle \quad\quad {\bar N}_p = (\pi\Delta_p^2)^{-1/4} \end{equation} with $\Delta_p >> 1$. The \emph{measurement interaction} is taken to be $e^{-iQA}$ where $A$ is the observable that is being measured, and Q the variable conjugate to momentum. As in the von Neumann model, this is taken to be impulsive, acting exactly at the time of measurement. For simplicity, we take the observable A to have the discrete, non-degenerate spectrum $a_i, |a_i\rangle$. The initial system state is taken to be: \begin{equation} \label{eq:syststategen} |\psi\rangle = \sum_i\,\alpha_i\,|a_i\rangle\quad\quad \sum_i\,|\alpha_i|^2=1 \end{equation} The \emph{post-measurement-interaction} state of system and apparatus is then given by: \begin{eqnarray} |\Psi\rangle_{SA,weak} &=& {\bar N}_p\,\sum_i\,\alpha_i\,\int\,dp\,e^{-\frac{p^2}{2\Delta_p^2}}|p+a_i\rangle|a_i\rangle\nonumber\\ &=&\,\int\,dp\,N(p,\{\alpha\})|p\rangle|\psi_p\rangle \end{eqnarray} Where \begin{eqnarray} N(p,\{\alpha\}) &=& {\bar N}_p\,\sqrt{\sum_i\,|\alpha_i|^2\,e^{-\frac{(p-a_i)^2}{\Delta_p^2}}}\nonumber\\ |\psi_p\rangle &=& \frac{{\bar N}_p}{N(p,\{\alpha\})}\,\sum_i\,\alpha_i\,e^{-\frac{(p-a_i)^2}{2\Delta_p^2}}|a_i\rangle \end{eqnarray} Hence, weak measurements can be viewed as the so called Positive Operator Valued Measurements(POVM) with measurement operators: \begin{equation} \label{eq:weakpovm} M_p = {\bar N}_p\,\sum\,e^{-\frac{(p-a_i)^2}{2\Delta_p^2}}|a_i\rangle\langle a_i| \end{equation} The \emph{post-decoherence} mixed state of the system and apparatus is easily calculated to be: \begin{equation} \rho^{post-decoh}_{SA} = \int\,dp\,|N(p,\{\alpha\})|^2\, |p\rangle\langle p||\psi_p\rangle\langle \psi_p| \end{equation} The probability distribution for the pointer outcomes is given by $|N(p,\{\alpha\}|^2$. 
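It may be worth noting explicitly, as a consistency check rather than an additional assumption, that the operators of eqn.(\ref{eq:weakpovm}) resolve the identity on the system space, so that $|N(p,\{\alpha\})|^2=\langle\psi|M_p^\dagger M_p|\psi\rangle$ is automatically a normalised probability density:
\begin{equation}
\int\,dp\,M_p^\dagger M_p = {\bar N}_p^2\,\sum_i\,\left(\int\,dp\,e^{-\frac{(p-a_i)^2}{\Delta_p^2}}\right)|a_i\rangle\langle a_i| = \sum_i\,|a_i\rangle\langle a_i|
\end{equation}
since ${\bar N}_p^2\,\sqrt{\pi\Delta_p^2}=1$. In particular $\int\,dp\,|N(p,\{\alpha\})|^2=1$.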
As the eigenvalues $a_i$ are bounded, this distribution, when $p|a_i| << \Delta_p^2$, is well approximated by \begin{equation} \label{eq:lowpapprox} |N(p,\{\alpha\}|^2\simeq\quad {\bar N}_p^2\,e^{-\frac{P^2}{\Delta_p^2}}+\ldots \end{equation} In this case \begin{equation} \label{eq:lowpstate} |\psi_p\rangle\simeq\,|\psi\rangle+\ldots \end{equation} where the dots represent small corrections. One once again observes the same features encountered in the example, namely, that for most of the outcomes the state changes very little(in the example, that change was zero while in the more realistic cases, as here, it is small). But precisely for those cases, the probability of outcome is either independent, or nearly independent, of the system state and no information can be obtained about the system state. Nevertheless, as in the example, the mean pointer position has full information about the state(provided a \emph{complete} set of weak measurements are performed). The average outcome and its variance can be calculated exactly: \begin{equation} \langle p \rangle = \sum_i\,|\alpha_i|^2\,a_i = \langle A \rangle_\psi\quad (\Delta p)^2 = \frac{\Delta_p^2}{2}+(\Delta A)^2_\psi \end{equation} These necessarily have to be \emph{ensemble} measurements. The errors in weak measurements are very large because $\Delta_p >> 1$. These are to be reduced \emph{statistically}. It is instructive to compute the \emph{reduced} density matrix of the system: \begin{equation} \label{eq:weakredrho} \rho^{red,weak}_{sys} = |\psi\rangle\langle \psi| -\frac{1}{4\Delta_p^2}\sum_{i,j}\,\alpha_i\alpha_j^*\,(a_i-a_j)^2|a_i\rangle\langle a_j| \end{equation} \section{Weak Measurements and Leggett-Garg Inequalities} We saw in eqn.(\ref{eq:weakredrho}) that the reduced density matrix after a weak measurement is \emph{practically} the same as the initial pure density matrix. In this sense, the weak measurements can be said to be \emph{non-invasive}. Non-invasive measurements have been emphasized in a variety of contexts. The most notable of these has been the \emph{Leggett-Garg} inequalities \cite{agarg,dhome,mahesh}. A typical experimental setup consists of four series of measurements on identical initial states. In each series, some quantity $Q(t)$ is measured at two instants of time. In the first, measurements are done at $t_1$ and $t_2$; in the second, at $t_2$ and $t_3$, in the third at $t_3$ and $t_4$, and finally in the fourth at $t_1$ and $t_4$. It is to be noted that $t_1 < t_2 < t_3 < t_4$. The first measurement in each series is required to be \emph{non-invasive}, as then the second measurement can be \emph{construed} to have also been made on the \emph{same state} as the initial one. Thus a total of 8 measurements of which 4 have to be non-invasive. The natural question is whether weak measurements can be used to achieve this? The answer to this hinges on the accuracy of measurements(errors) as well as the \emph{available resources}, the apparent non-invasiveness of weak measurements notwithstanding. An obvious resource to be considered is the \emph{ensemble size} of the initial state. Let this be M identical copies. If we consider using weak measurements to provide the required non-invasive measurements, it will be necessary to divide M into 4 equal subensembles of $M/4$ copies each, and use one for each series of measurements. The statistical error in the resulting weak measurements will be $\epsilon_w=\frac{\Delta_p}{\sqrt{2}}\frac{1}{\sqrt{M/4}}$. 
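This estimate follows directly from the variance computed above: a single weak measurement has pointer variance $\frac{\Delta_p^2}{2}+(\Delta A)^2_\psi\simeq\frac{\Delta_p^2}{2}$ when $\Delta_p >> 1$, so averaging over a subensemble of $M/4$ copies gives
\begin{equation}
\epsilon_w \simeq \sqrt{\frac{\Delta_p^2/2}{M/4}} = \frac{\Delta_p}{\sqrt{2}}\,\frac{1}{\sqrt{M/4}} = \frac{\sqrt{2}\,\Delta_p}{\sqrt{M}}
\end{equation}
for the mean of the pointer readings.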
It should be remembered that for the second measurement in each series, the state will not be exactly the same as the original state. Depending on $\Delta_p$, this could be an important factor to reckon with in practical implementations. Since the second measurement does not have to be non-invasive, it can even be done with strong measurements, which, for the same ensemble size would yield an error substantially lowered by a factor $\frac{\sqrt{2}(\Delta A)_\psi}{\Delta_p}$. The error analysis of LG-inequalities would be more complicated then. Let us estimate the ensemble size that would yield the same error $\epsilon_w$ but now done with strong measurements. The relation between statistical error and ensemble size for strong measurements is $\epsilon_s = \frac{(\Delta A)_\psi}{\sqrt{M_s}}$, where $M_s$ is the relevant ensemble size. Therefore the ensemble size for strong measurements with the same error as in the weak measurements is: \begin{equation} M_s = \frac{(\Delta A)_\psi^2}{\epsilon_w^2}=\frac{M}{2}\cdot\frac{(\Delta A)_\psi^2}{\Delta_p^2} \end{equation} The idea now is to divide the original resource into 8 equal subensembles and use each of them to perform the total of 8 measurements required. Altogether 8 strong measurements need to be done and the total ensemble size required is $M\cdot\frac{4(\Delta A)_\psi^2}{\Delta_p^2}$ Hence it follows that as long as $\frac{(\Delta A)_\psi}{\Delta_p} << 1/2$, the ensemble size required for strong version of checking LG inequalities is \emph{much smaller} than what was required for the weak version of the same! Furthermore, in the strong version the states used for all the 8 measurements are \emph{exactly identical}! In summary, if $\Delta_p$ is very large, one can test the LG-inequalities with much smaller resources using strong measurements. If $\Delta_p$ is not so large, the weak measurements are no longer non-invasive. Either way, there is no case for invoking weak measurements to test the LG-inequalities. Similar considerations for determination of so called trajectories will be taken up elsewhere. \section{Repeated Weak Measurements On a Single Copy} One of the most \emph{surprising} and \emph{shocking} facets of the Copenhagen view of quantum mechanics is what one may call the \emph{demise of the individual} (for a detailed exposition see \cite{ndhonto}). More precisely, that view predicated that no information can be obtained about the unknown state of a \emph{single copy}. This is a trivial consequence if one uses projective or strong measurements. This is so as the first measurement randomly results in an eigenstate and all subsequent measurements have no bearing on the original unknown state. Weak measurements offer a \emph{superficial hope} that it may be possible to determine the unknown state of a single copy. The basis for that hope is that each weak measurement, with high probability, very weakly alters the system state while giving some information about the original state. Consider the following \emph{schema} for repeated weak measurements on a single copy. (i) Perform a weak measurement of observable A on a single copy of an unknown state $|\psi\rangle$. Let the apparatus outcome be, say, $p_1$. Consequently, the system state at this stage is $|\psi_{p_1}\rangle$. (ii) Restore the apparatus to the same state before the first weak measurement. (iii) Perform weak measurement of A in the new system state $|\psi_{p_1}\rangle$. (iv) Repeat. 
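In terms of the measurement operators $M_p$ of eqn.(\ref{eq:weakpovm}), a single step of this schema is the usual generalised-measurement update: the outcome $p_1$ occurs with probability density $\langle\psi|M_{p_1}^\dagger M_{p_1}|\psi\rangle = |N(p_1,\{\alpha\})|^2$, and the system state is updated as
\begin{equation}
|\psi\rangle\,\rightarrow\,|\psi_{p_1}\rangle = \frac{M_{p_1}|\psi\rangle}{\|M_{p_1}|\psi\rangle\|}
\end{equation}
after which the apparatus is reset and the same step is repeated on $|\psi_{p_1}\rangle$.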
The crucial question is whether the statistics of outcomes $p_1,p_2,...,p_N$ have anything to say about the original unknown $|\psi\rangle$? The naive argument would be that since at each step one gathers some \emph{information} about the original unknown state, although very little, with sufficiently large repetitions one ought to gather enough information to determine the original state. The question will be answered in the negative here. The details can be found in \cite{ndhweakrepeat}. Alter and Yamamoto \cite{orly,orlybook} had in fact analysed a very similar problem in the context of repeated QND measurements long ago, but issues of degradation of the state as well connections to strong measurements were not considered by them. The probability $P^{(1)}(p_1)$ of the first outcome $p_1$ is given by: \begin{equation} P^{(1)}(p_1) = |N(p_1,\{\alpha\}|^2 \end{equation} The system state after this outcome is $|\psi_{p_1}\rangle$. It is useful to describe this state as one with \emph{changed values} of $\{\alpha\}$: \begin{equation} \alpha_i^{(1)} = \frac{{\bar N}_p}{N(p_1,\{\alpha\})}\:e^{-\frac{(p_1-a_i)^2}{2\Delta_p^2}} \end{equation} The probability $P(p_2)$ of the second outcome $p_2$ is, therefore: \begin{eqnarray} & &P^{(2)}(p_2) = |N(p_2,\{\alpha^{(1)}\}|^2\nonumber\\ &=&\frac{{\bar N}_p^4}{N(p_1,\{\alpha\})^2}\, \sum_i\,|\alpha_i|^2\,e^{-\frac{(p_1-a_i)^2}{\Delta_p^2}}\cdot e^{-\frac{(p_2-a_i)^2}{\Delta_p^2}} \end{eqnarray} But this is the \emph{conditional probability} $P(p_2|p_1)$ for obtaining $p_2$ \emph{given} that the first outcome was $p_1$. The \emph{joint probability} distribution $P(p_1,p_2)$ is given by Bayes theorem to be $P(p_1)\cdot P(p_2|p_1)$: \begin{equation} P(p_1,p_2) = {\bar N}_p^4\, \sum_i\,|\alpha_i|^2\,e^{-\frac{(p_1-a_i)^2}{\Delta_p^2}}\cdot e^{-\frac{(p_2-a_i)^2}{\Delta_p^2}} \end{equation} The state of the system when the outcomes are $p_1,p_2$ is: \begin{equation} |\psi(p_1,p_2)\rangle = \frac{\sum_i\,\prod_{j=1}^2\,e^{-\frac{(p_j-a_i)^2}{\Delta_p^2}}\alpha_i\,|a_i\rangle}{\sqrt{\sum_i\, \prod_j\,|\alpha_i|^2\,e^{-\frac{(p_j-a_i)^2}{\Delta_p^2}}}} \end{equation} These readily generalize to the case of M repeated measurements: \begin{eqnarray} P(p_1,p_2,\ldots,p_M) &=& ({\bar N}_P^2)^M\,\sum_i\,|\alpha_i|^2\,\prod_{j=1}^M\,e^{-\frac{(p_j-a_i)^2}{\Delta_p^2}}\nonumber\\ |\psi(p_1,p_2,..,p_M)\rangle &=& \frac{\sum_i\,\prod_{j=1}^M\,e^{-\frac{(p_j-a_i)^2}{\Delta_p^2}}\alpha_i\,|a_i\rangle}{\sqrt{\sum_i\, \prod_j\,|\alpha_i|^2\,e^{-\frac{(p_j-a_i)^2}{\Delta_p^2}}}} \end{eqnarray} These equations codify \emph{all} the information that can be obtained by repeated weak measurements on a single copy of an unknown state. The joint probability distribution is not \emph{factorisable} as the outcomes are \emph{not mutually independent}, but it is still of the so called \emph{separable} form. The average $y_M$ of the M outcomes is $\sum_i\,|\alpha_i|^2\,a_i = \langle A \rangle_\psi$! Does this mean we have obtained the same information in a weak measurement on a single copy what could only be obtained by ensemble measurements of the strong kind? It is necessary to look into the distribution function $P(y_M)$ for such an average. Recall that in ensemble measurements this takes the form (Central Limit Theorem): \begin{equation} P(y_M)_{ensemble} = {\tilde N}\,e^{-\frac{M(y_M-\mu)^2}{\Delta^2}} \end{equation} In ensemble measurements too, the sequence of outcomes in \emph{a particular realization} will be different, and \emph{unpredictable}. 
But the average obtained in any particular realisation converges to the true average as $M\rightarrow\,\infty$. Now it turns out that the story is entirely different for repeated weak measurements on a single copy! The distribution function $P(y_M)$: \begin{eqnarray} P(y_M)&=& \sqrt{\frac{M}{\pi\Delta_p^2}}\,\sum_i\,|\alpha_i|^2\,e^{-\frac{M(y_M-a_i)^2}{\Delta_p^2}}\nonumber\\ &\rightarrow& \sum_i\,|\alpha_i|^2\,\partialta(y_M-a_i) \end{eqnarray} The distribution of $y_M$ is no longer peaked at the true average with errors decreasing as $M^{-1/2}$. Instead, it is a weighted sum of distributions that increasingly peak around the eigenvalues as $\Delta_p$ increases. In the limiting case, averages over a particular realisation will be eigenvalues occurring with probability $|\alpha_i|^2$, exactly as in the case of strong measurements. Hence averages over any particular realisation do not give any information about the initial state. To substantiate this picture further, one can investigate the average value of the post-measurement system reduced density matrix: \begin{equation} \langle \rho^{red} \rangle = \rho - \sum_{i,j}\,\alpha_i\alpha_j^*\,(1-e^{-\frac{M(a_i-a_j)^2}{4\Delta_p^2}})|a_i\rangle\langle a_j| \end{equation} Therefore as M becomes larger and larger, there is significant change in the system state. In the limit $M\rightarrow\,\infty$, the \emph{off-diagonal} parts of the density matrix get completely quenched, as in \emph{decoherence}! In that limit, the density matrix becomes diagonal in the eigenstate(of A) basis: \begin{equation} \langle \rho^{red} \rangle\,\rightarrow\: \sum_i\,|\alpha_i|^2\,|a_i\rangle\langle a_i| \end{equation} This is exactly the post-measurement system state in the case of strong measurements. It should be noted that this decoherence in eigenstate basis has nothing to do with the environmental decoherence in the pointer state basis of the apparatus. It is entirely due to the large number of repeated weak measurements. Such an effect had also been noted by Gurvitz in 1997 \cite{gurwitz}. We can view the distance between the initial $\rho$ and the average post-measurement reduced density matrix $\langle \rho^{red} \rangle$, according to some reasonable distance measure, as a measure of the \emph{disturbance} caused by the repeated weak measurements on the single copy. For example, ${\cal D} = 1-tr\,\rho\,\langle \rho^{red}\rangle$ is one such distance measure. The statistical error $\epsilon = \frac{\Delta_p}{\sqrt{2M}}$. Then one gets the \emph{error-disturbance} relation: \begin{eqnarray} {\cal D}(\epsilon)&=& \sum_{i,j}\:|\alpha_i|^2|\alpha_j|^2(1-e^{-\frac{(a_i-a_j)^2}{8\epsilon^2}})\nonumber\\ &\rightarrow& \sum_i\,|\alpha_i|^2(1-|\alpha_i|^2) \end{eqnarray} Reducing errors can only be at the cost of increasing invasiveness! It should be noted that this error-disturbance relation bears no obvious relation to the ones being discussed by Ozawa \cite{ozawa}. \section{Weak value coordinates and optimal weak value measurements} This section is based on the works \cite{ndhsaicoord} and \cite{ndhsaioptimal}. Consider the projection operators ${\cal P}_\pm$ for the eigenstates $|\pm\rangle$ of, say, $S_z$. Let the preselected state be $|\psi =\alpha |+\rangle\,+\,\beta|-\rangle$, with $|\alpha|^2+|\beta|^2=1$. Let the post-selected state be $|b\rangle$. 
If $w_\pm$ are the weak values of ${\cal P}_\pm$, they are given by \begin{equation} w_\pm=\frac{\langle b|\pm\rangle\langle \pm|\psi\rangle}{\langle b|\psi\rangle} \quad\quad w_+\,+\,w_-=1 \end{equation} The idea of \emph{weak value tomography} ($b_\pm=\langle b|\pm\rangle$): \begin{equation} \alpha = \frac{\frac{w_+}{b_+}}{\sqrt{|\frac{w_+}{b_+}|^2+|\frac{w_-}{b_-}|^2}}\quad\quad \beta = \frac{\frac{w_-}{b_-}}{\sqrt{|\frac{w_+}{b_+}|^2+|\frac{w_-}{b_-}|^2}} \end{equation} Thus experimentally determining a single complex weak value ($w_+$ or $w_-$) suffices to determine the state. $w_+=\frac{1}{2}+w_z$ and $w_-=\frac{1}{2}-w_z$, where $w_z$ is the weak value of $S_z$. Thus it suffices to measure the weak value of a single observable to determine the state, as against conventional tomography which would require the \emph{expectation values} of \emph{two} independent observables and a \emph{sign}! At this stage, the fact that $Re\: w, Im\: w$ are \emph{unbounded} becomes crucial. It indicates that the real and imaginary parts of weak values provide a \emph{stereographic projection} of the \emph{Riemann sphere}. The \emph{metric} on the state space can be introduced through the line element \begin{equation} dl^2 = 2\,tr\,d\rho\,d\rho \end{equation} For example, if the pure state density matrix is parametrised as \begin{equation} \rho = \frac{I}{2} +\langle S_x \rangle\,\sigma_x +\langle S_y \rangle\,\sigma_y +\langle S_z \rangle\,\sigma_z \end{equation} with \begin{equation} \langle S_x \rangle^2+ \langle S_y \rangle^2+ \langle S_z \rangle^2 = \frac{1}{4} \end{equation} the line element becomes \begin{equation} dl^2 = 4\{(dS_x)^2+(dS_y)^2+(dS_z)^2\} \end{equation} This is just the metric on a sphere. The most general form of the line element is \begin{equation} dl^2 = g_{ww}\,dw^2\,+g_{{\bar w}{\bar w}}\,d{\bar w}^2\,+g_{w{\bar w}}\,dw\,d{\bar w} \end{equation} Explicit evaluation yields \begin{equation} g_{w{\bar w}}=\frac{4}{|b_+|^2|b_-|^2}\,\frac{1}{\sqrt{|\frac{w_+}{b_+}|^2+|\frac{w_-}{b_-}|^2}} \end{equation} with $g_{ww} = g_{{\bar w}{\bar w}}=0$. Therefore, the weak value coordinates have the nice feature that they are \emph{conformal}! In terms of $Re\:w_+=x,Im\:w_+=y$, the line element can be rewritten as \begin{equation} dl^2 = \frac{4|b_+|^2|b_-|^2\:(dx^2+dy^2)}{\{x^2+y^2+x(|b_-|^2-|b_+|^2)+\frac{1}{4}\}^2} \end{equation} The volume (area) element of the state space is then \begin{equation} dA = \frac{4|b_+|^2|b_-|^2\:dx\,dy}{\{x^2+y^2+x(|b_-|^2-|b_+|^2)+\frac{1}{4}\}^2} \end{equation} The total volume of $\rho$-space is correctly reproduced as \emph{$4\pi$} (the area of the unit sphere). Now another remarkable feature of weak measurements comes into play, namely that the measurement errors in both $x$ and $y$ are the \emph{same}, and are \emph{state-independent}. The common statistical error is $\Delta_s = \frac{\Delta_p}{\sqrt{2M}}$. This is in contrast to strong measurements. Following Wootters and Fields, the \emph{error volume} is \begin{equation} (\Delta A)_{err} = \frac{16\,\Delta_s^2\,|b_+|^2|b_-|^2\:dx\,dy}{\{x^2+y^2+x(|b_-|^2-|b_+|^2)+\frac{1}{4}\}^2} \end{equation} As noted by Wootters and Fields in the case of standard tomography, this is \emph{state-dependent}, and it is not possible to optimise it. We follow them and optimise the error volume \emph{averaged over state space}.
The state-averaged error volume can easily be worked out: \begin{equation} \langle (\Delta A)_{err} \rangle = \frac{16\,\Delta_s^2}{|b_+|^2|b_-|^2} \end{equation} Since $\Delta_s$ has no dependence on the post-selected state $|b\rangle$, it is straightforward to optimise this: one simply maximises $|b_+|^2|b_-|^2$ subject to $|b_+|^2+|b_-|^2=1$, and the solution is $|b_+|^2=|b_-|^2=\frac{1}{2}$. In other words, \emph{weak value measurements are optimal in the sense of minimizing the state-averaged error volume when the post-selected states are mutually unbiased (MUB) with respect to the eigenstates of the observable measured}. Extension to spin-1 and higher spin values is under investigation. \noindent{ACKNOWLEDGEMENTS: I thank Dipankar Home and T.S. Mahesh for discussions on non-invasive measurements. I also acknowledge support from DST, India, for the project IR/S2/PU-801/2008.} \end{document}
\betagin{document} \betagin{abstract} We study and obtain Slope inequalities for fibred irregular varieties of non-maximal Albanese dimension. We give a comparison theorem between Clifford-Severi and Slope inequalities for this type of fibration. We also obtain a set of Slope inequalities considering the geometry of the Albanese map and the associated eventual maps. \end{abstract} \title{Slope Inequalities for fibrations of non-maximal Albanese dimension} \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} We consider triplets $(X,L,a)$ where $X$ is an irregular, complex, projective variety of dimension $n$, $L$ is a line bundle on $X$ and $a:X\longrightarrow A$ is a non-trivial generating map to an abelian variety of dimension $q$. We will have a fibration $f: X \longrightarrow B$ onto a smooth curve. We will assume that the fibration is {\it irregular}, i.e. ${\rm dim} \, a(F)>0$, where $F$ is a general fibre of $f$. In this situation, several invariants associated to the triplet can be defined: the {\it continuous rank} $h^0_a(X,L)$, the {\it continuous positive degree} ${\rm deg}_a^+f_*L$ and the {\it eventual map} $\phi_L$. If $h^0_a(X,L)\neq 0$ we define the Clifford-Severi slope of $(X,L,a)$ as $\lambdambda (L,a)={\rm vol}(L)/h^0_a(X,L)$ and the Slope of $L$ with respect to $f$ as $s(f,L,a)={\rm vol}(L)/{\rm deg}_a^+f_*L$. When we consider a {\it good} class of triplets ${\mathcal F}$ we denote $\lambdambda_{\mathcal F}(n)$ and $s_{\mathcal F}(n)$ to be the minimum values of such slopes when $(X,L,a)\in {\mathcal F}$ or $(F,L_{|F},a_{|F})\in {\mathcal F}$, respectively, and $n={\rm dim} \,(X)$. When varieties in ${\mathcal F}$ are of maximal $a$-dimension, in \cite{B2} we prove that \betagin{equation}\lambdabel{main} {\lambdambda}_{\mathcal F}(n)\geq \, s_{\mathcal F}(n)\geq \,n\, {\lambdambda}_{\mathcal F}(n-1). \end{equation} We also characterize fibrations with minimal slope in the family. Observe that this provides a way to inductively give higher dimensional Clifford-Severi and Slope inequalities just by giving an inequality in low dimension. Moreover, we deduce a huge set of Slope inequalities just using all the existing Clifford-Severi inequalities for varieties of maximal $a$-dimension. In order to obtain these results, in \cite{B2} we develop a version of the method of Xiao adapted to the irregular setting, the so called {\it continuous Xiao's method}. The aim of this work is to obtain similar results for classes of varieties of non-maximal $a$-dimension. Under this assumption it is easy to see that $\lambdambda_{\mathcal F}(n)=s_{\mathcal F}(n)=0$, since in this case $h^0_a(X,L)>0$ does not imply bigness. We redefine ${\overline \lambdambda}_{\mathcal F}(n)$ and ${\overline s}_{\mathcal F}(n)$ when restricting to line bundles $L$ with {\it continuous moving part} $L_c$ big. Adapting the arguments given in \cite{B2}, our first result is an analogue of (\ref{main}) (see Proposition \ref{pardini}, Theorem \ref{thm1} and Remark \ref{resumen}), and allows us to obtain Slope inequalities for varieties of non-maximal $a$-dimension from Clifford-Severi inequalities for maximal $a$-dimension ones. More concretely, given a good class of triplets ${\mathcal F}$, we define the subclass ${\mathcal F}_p$ imposing the extra condition that $c(X)={\rm dim}(X)-{\rm dim}\,a(X)\leq p$. Then \noindent {\bf Theorem A}.
{\it \betagin{itemize} \item [(i)] ${\overline \lambdambda}_{{\mathcal F}_p}(n)\geq {\overline s}_{{\mathcal F}_p}(n)\geq (n-p)\,\lambdambda_{{\mathcal F}_0}(n-p-1).$ \small \item [(ii)] If equality ${\overline s}_{{\mathcal F}_p}(n)= (n-p)\,\lambdambda_{{\mathcal F}_0}(n-p-1)$ holds, then $f_*(L\otimes a^*(\alphapha))^+$ is semistable for general $\alphapha \in {\widehat A}$ and $F$ is covered by $(n-p-1)$-dimensional varieties $V$ of maximal $a_{|V}$-dimension such that $\lambdambda (R,a_{|V})=\lambdambda_{{\mathcal F}_0}(n-p-1)$, for some $R\in {\rm Pic} (V)$. \end{itemize} } The technique used here is again the continuous Xiao's method. The second part of the paper is devoted to obtain new Slope inequalities considering the geometry of $T$ and $G$, the connected components of the general fibres of $a$ and $\phi_L$, respectively. We refer to Section 2 and Remark \ref{beta} for definitions. Again we use continuous Xiao's method, adapting the arguments of \cite{B} and \cite{J}. Our main result is \noindent {\bf Theorem B.} {\it Let $f: X \longrightarrow B$ an irregular fibration with general fibre $F$ and $k={\rm dim} \, a(F)$. Then: \betagin{itemize} \item [(i)] $s(f,L,a)\geq {\rm vol}_{F|G}(L)\, (k+1)!$ \item [(ii)] If $(L_{|F})_c$ is $a_{|F}$-big, then $s(f,L,a)\geq \betata (L_{|T},n,k+1)\,(k+1)!$ \noindent In particular, if $F$ is not uniruled, then $s(f,L,a)\geq 2\,(k+1)!$ \item [(iii)] If equality holds in (i) or (ii), then $f_*(L\otimes a^*(\alphapha))^+$ is semistable for general $\alphapha \in {\widehat A}$. \end{itemize} } In Section 2 we survey all the techniques we will use, known results on this topic and the involved definitions. Section 3 is devoted to prove theorems A and B. \noindent {\underline{Notations and Conventions.}} We work over $\mathbb{C}$. Varieties are projective and smooth unless otherwise stated. We will use notation of divisor or line bundles indistinctly. Given a triplet $(X,L,a)$ we will write $L\otimes \alphapha$ instead of $L \otimes a^*(\alphapha)$, for $\alphapha \in {\widehat A}$. \noindent {\underline{Acknowledgements.}} The author thanks Lidia Stoppino and Rita Pardini for extremely useful discussions on this topic along the last years. \section{Preliminaries and technical results} For benefit of the reader, we collet here a series of preliminaries, definitions, known results and techniques we will use in next section. Main references for these are \cite{B2}, \cite{B}, \cite{BPS3} and \cite{J}. We consider triplets $(X,L,a)$ where $X$ is a smooth irregular variety of dimension $n$, $a:X \longrightarrow A$ is a nontrivial generating map to an abelian variety of dimension $q$ and $L$ is a line bundle on $X$ such that $h^0_a(X,L)={\rm min}\{h^0(X,L\otimes \alphapha)\,|\,\alphapha \in {\widehat A}\}\neq 0$. {\noindent {\underline {\it Multiplication maps}}}. We will offen consider situations {\it up to a multiplication map}, meaning that we will consider base changes via a multiplication map on $A$ by some $d$, which is \'etale of degree $d^{2q}$: \betagin{equation}\lambdabel{multiplicationmap} \xymatrix{ X^{(d)}\ar[d]_{a_d}\ar[r]_{\nu_d} &X\ar[d]^a\\ A\ar[r]_{\nu_d}&A} \end{equation} We will denote $L^{(d)}:=\nu_d^*(L)$. Continuous rank and volume are multiplicative through a multiplication map. \noindent {\underline {\it Continuous moving divisor $L_c$}}. 
Up to a blow-up, there is a decomposition $L=P+W$ such that, for any $d>>0$ and divisible and any general $\alphapha \in {\widetilde A}$, $P^{(d)}$ is base point free and is the moving divisor of $|L^{(d)}\otimes \alphapha|$ and $W^{(d)}$ is its fixed divisor. Following \cite{B}, $P$ and $W$ are called the {\it continuous moving divisor} and {\it continuous fixed divisor} of $L$, respectively. According to the notation of \cite{J}, we will set $L_c:=P$ for the continuous moving part. \noindent {\underline {\it Eventual map and eventual degree.}} Up to a blow-up, there is a factorization of the map $a$, $X \rightarrow X_L\rightarrow A$ such that the map $\phi_L: X\longrightarrow X_L$ verifies the following properties: \betagin{itemize} \item $L_c=\phi_L^*(R_L)$ for some line bundle $R_L$ on $X_L$ which induces a base point free generically finite morphism on $X_L$ (\cite{BPS3}). \item Up to a multiplication map, the linear system $|L_c^{(d)}\otimes \alphapha|$, for $\alphapha$ general, is base point free and induces the map $\phi_L^{(d)}: X^{(d)}\longrightarrow X_L^{(d)}$ (\cite{B}, \cite{BPS3}). \item Since the map $\phi_L$ factorizes $a$, it is generically finite provided $X$ is of maximal $a$-dimension. It is birational if ${\rm deg}\, a=1$. \item The map $\phi_L$ is generically finite provided $L_c$ is $a$-big (\cite{B}). \item A birational model of the map $\phi_L$ is given by the natural map $\rho: X \dashrightarrow \mathbb{P}_A(a_*L)$, where $X_L:=\rho (X)$ (\cite{J}). \item When the eventual map $\phi_L$ is generically finite, we define the {\it eventual degree} $L$ to be $e(L)={\rm deg} (\phi_L)$ (cf. \cite{BPS3}, Section 3). We can extend the definition to any $L$ just considering the degree of the finite part in the Stein factorization of $\phi_L$. \item $\kappa (L_c)={\rm dim}\phi_L(X) \geq {\rm dim} \, a(X)$ (\cite{J}). \end{itemize} \noindent {\underline {\it Good classes of triplets.}} Given a family ${\mathcal F}$ of triplets with $h^0_a(X,L)\neq 0$, we say that the family is {\it good} if it stable via the following four operations: \betagin{itemize} \item[(1)] If $(X,L,a)\in {\mathcal F}$, then $({\overline X},\sigmagma ^*L,a\circ \sigmagma)\in {\mathcal F}$, where $\sigmagma:{\overline X}\longrightarrow X$ is a birational morphism. \item[(2)] If $(X,L,a)\in {\mathcal F}$, then $(X^{(d)},L^{(d)},a_d)\in {\mathcal F}$. \item[(3)] If $(X,L,a)\in {\mathcal F}$, then $(X,L',a)\in {\mathcal F}$ for $L'\leq L$ such that $h^0_a(X,L')>0$. \item[(4)] If $(X,L,a)\in {\mathcal F}$, then $(M,L_{|M},a_{|M})\in {\mathcal F}$, for a general smooth $M$, moving in a base point free linear system on $X$. \end{itemize} \noindent {\underline {\it Clifford-Severi inequalities (maximal $a$-dimension)}}. Given a triplet $(X,L,a)$ with $h^0_a(X,L)>0$, we define its Clifford-Severi slope as $$\lambdambda (L,a)=\frac{{\rm vol}(L)}{h^0_a(X,L)}$$ \noindent which remains constant under multiplication maps. Given a good class ${\mathcal F}$ we define ${\lambdambda}_{\mathcal F}(n)$ to be the infimum of the Clifford-Severi slopes of triplets in ${\mathcal F}$, of dimension $n$. Clifford Severi-Inequalities for a given good class ${\mathcal F}$ are inequalities of type $$ {\lambdambda}_{\mathcal F}(n)\geq \lambdambda_n. $$ There are many known Clifford-Severi inequalities for different good classes of maximal $a$-dimension varieties: see Remark 4.7 in \cite{B2} for a (almost) complete list. 
For example: \betagin{itemize} \item Higher dimensional Severi inequality states that ${\lambdambda}_{\mathcal F}(n)\geq 2\,n!$ if ${\mathcal F}$ is defined by the property that $L$ is {\it numerically subcanonical}. \item For a general $L$ we have ${\lambdambda}_{\mathcal F}(n)\geq e(L)\,n!$ (\cite{BPS2}). \end{itemize} \noindent {\underline {\it Clifford-Severi inequalities (non-maximal $a$-dimension)}}. In the case of triplets of non-maximal $a$-dimension, situation is not so clear and depends heavily on conditions of bigness of $L$ or $L_c$ and the geometry of the fibre of the map $a$ or $\phi_L$. The case of irregular threefolds is well understood by results of Zhang (\cite{Z3}). Here you can find a (non complete) list of known results for arbitrary dimension $n$. We set $k={\rm dim} a(X)$, and $G$ and $T$ for a connected component of the general fibre of $\phi_L$ and $a$, respectively. In \cite{B}, Main Theorem and Remark 5.8, the author proves \betagin{itemize} \item If $L_c$ is $a$-big and $L$ is numerically $r$-subcanonical, then $\lambdambda(L,a)\geq \delta (r)\,k!$. \item If $L$ is nef and $a$-big then $\lambdambda(L,a)\geq (L_{|G})^{n-k}\,k!\geq \, k!$. \end{itemize} \noindent When $k=n-1$, Zhang gives a better bound (\cite{Z2}): \betagin{itemize} \item If $g$ is the genus of the curve $T$, then $\lambdambda (L,a)\geq 2\, \frac{g-1}{g+n-2}\, n!.$ \end{itemize} \noindent Finally Jiang (\cite{J}) gives a set of inequalities depending on the geometry of $G$ or $T$. The simplest ones are the following (see Proposition 3.6 and Theorem 3.1 in \cite{J} and Remark \ref{beta} for a more detailed result): \betagin{itemize} \item If $L$ is big, then $\lambdambda(L,a)\geq {\rm vol}_{X|G}(L)\,k!$. \item If $L_c$ is big and $T$ is not uniruled, then $\lambdambda(L,a)\geq 2 \, k!$. \end{itemize} \betagin{rem} The proof of Main Theorem (iii) in \cite{B} uses implicitly the volume of $L_{|G}$ (see Remark 5.8 in the cited paper). In Corollary B (iii), loc. cit., extending this inequality to $K_X$ in the singular setting, it is erroneously assumed that ${\rm vol}_{X|G}(K_X)\geq 1$, assuming that the minimal variety $X$ is Gorenstein. \end{rem} \noindent {\underline {\it Irregularly fibred triplets.}} Given a triplet $(X,L,a)$, we will say that it is {\it irregularly fibred} if moreover we have a fibration $f:X\longrightarrow B$ onto a smooth curve, such that ${\rm dim}\,a(F)>0$, where $F$ is a general smooth fibre of $f$. If $f$ is an irregular fibration as above, the family of vector bundles $\{f_*(L\otimes \alphapha)\}_{{\alphapha}\in {\widehat A}}$ has constant type of Harder-Narashimann filtration for $\alphapha \in U_0$, for some nonempty open set $U_0$. We set $\{(r_i,\mu_i)\}$ for theit ranks and slopes. For $\alphapha \in U_0$ we will write $f_*(L\otimes \alphapha)^+$ for the biggest nef subbundle of $f_*(L\otimes \alphapha)$ and we denote $${\rm deg}_a^+f_*L={\rm deg}f_*(L\otimes \alphapha)^+\geq 0.$$ We will define the (continuous) Slope of $L$ w.r.t. $f$ to be: $$ s(f,L,a)=\frac{{\rm vol}( L)}{{\rm deg}_a^+f_*L}\in (0+\infty]. $$ \noindent which is also constant under multiplication maps. \noindent {\underline {\it Slope inequalities.}} Given a good class ${\mathcal F}$, we will say that an irregular fibration $f$, with general fibre $F$, is of type ${\mathcal F}$ if $(F,L_{|F},a_{|F})\in {\mathcal F}$ (the triplet $(X,L,a)$ is not necessarily in ${\mathcal F}$). We define $s_{\mathcal F}(n)$ to be the infimum of slopes of fibrations $f$ of type ${\mathcal F}$, where $n={\rm dim} X$. 
Slope inequalities for the family ${\mathcal F}$ is a set of inequalities for any $n$: $$ s_{\mathcal F}(n) \geq \lambdambda_n. $$ In \cite{HZ} Hu and Zhang give slope inequalities for $L=K_f$ and $X$ of maximal Albanese dimension, by direct computation, giving properties of the limit cases. In \cite{B2}, Theorem 4.11, a broad generalization is given, for any $L$, establishing an equivalence between Clifford-Severi inequalities and Slope inequalities for a given good class ${\mathcal F}$ of varieties of maximal $a$-dimension. Moreover, this result allows to produce automatically a whole set of Clifford-Severi and Slope inequalities for any dimension, just given one inequality in low dimension, typically 1 or 2 (see Remark 4.12 and Corollary 1.1 in \cite{B2}). The main result can be stated as: $$ \lambdambda_{\mathcal F}(n)\geq s_{\mathcal F}(n)\geq n\,\lambdambda_{\mathcal F}(n-1). $$ \noindent {\underline {\it Continuous Xiao's method and derived inequalities.}} Take a general $\alphapha_0\in {\widehat A}$ and let $L_0=L\otimes \alphapha_0$. After a suitable blow-up and multiplication map there is a filtration by nef line bundles: $$T_1 \leq T_2 \leq ...\leq T_m \leq L_0$$ \noindent such that, for all $i$ $N_i:=T_i-\mu_iF$ is nef and, if $P_i:={N_i}_{|F}$, then \betagin{itemize} \item $P_i$ is a base point free linear system on $F$ such that $h^0(F,P_i)=h^0_{a_{|F}}(F,P_i)\geq r_i$. \item ${\rm deg}_a^+f_*(L)={\rm deg}({\mathcal E}_m^{\alphapha_0})=\sum_{i=1}^{m}r_i(\mu_i-\mu_{i+1})$ \end{itemize} \noindent By convention we can take, coherently, $(N_{m+1},\mu_{m+1})=(T_m,0)$ or, in case $L$ is nef, $(N_{m+1},\mu_{m+1})=(L,0)$. Fix $r\leq n$. Consider an ordered, increasing partition of the set $\{ \, 1,...,m \, \}$ given by subsets $I_s$, $s=1,...,r-1$ (some of the sets $I_s$ may be empty), with $I_{r-1}\neq \emptyset$. Define, decreasingly, for $s=1,...,r-1$ $$b_s=\{ \betagin{array}{cc} {\rm min} I_s & {\rm if} \,\, I_s \neq \emptyset \\ b_{s+1} & {\rm otherwise} \\ \end{array}$$ Then we have that, for any $Q_1,...,Q_{n-r}$ nef $\mathbb{Q}$-Cartier divisors the following inequality holds: \betagin{equation}\lambdabel{Xiaogeneral} Q_1...Q_{n-r}\left[N^r_{m+1}-(\sum _{s=1}^{r-1} (\prod_{k>s}P_{b_k})\sum_{i \in I_s}(\sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l})(\mu_i-\mu_{i+1}))\right]\geq 0. \end{equation} We will use the following particular cases. Taking $r=n$, and any partition: \betagin{equation}\lambdabel{xiaobuena} {\rm vol}(L)={\rm vol} (L_0)\geq N_{m+1}^n\geq \sum _{s=1}^{n-1} (\prod_{k>s}P_{b_k})\sum_{i \in I_s}(\sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l})(\mu_i-\mu_{i+1}). \end{equation} Taking any $r$, the trivial partition $I_{r-1}=I$ ($I_s=\emptyset$ for $s<r-1$) and $Q_i=N_{m+1}$: \betagin{equation}\lambdabel{xiaoalbanesenomaxima} {\rm vol}(L)={\rm vol} (L_0)\geq N_{m+1}^n\geq \sum_{i=1}^m P_m^{n-r} \left[P_{i+1}^{r-1}+P_{i+1}^{r-2}P_i+...+P_i^{r-1}\right](\mu_i-\mu_{i+1}). \end{equation} \section{Slope inequalities for non maximal $a$-dimension fibrations} In \cite{B2} we study the equivalence of Slope and Clifford-Severi inequalities for general irregular varieties and fibrations in a good class ${\mathcal F}$, which mostly applies to maximal $a$-dimension varieties. Our aim is to study whether this equivalence can be extended to classes of varieties of non-maximal $a$-dimension. To this aim, we need to make this condition stable by smooth sections (condition (3)), and the right condition to impose is on the codimension $c(X)=\dim X-\dim a(X)$, which we called condition $Q_p$, i.e. 
triplets such that $c(X)\leq p$. Parts of the main result in \cite{B2} given in Theorem 4.11 hold for families of non maximal $a$-dimension as well, but they are not interesting since in these cases $\lambdambda_{\mathcal F}(n)$ and $s_{\mathcal F}(n)$ vanish. The reason is that condition $h^0_a(X,L)>0$ does not imply bigness if the variety is not of maximal $a$-dimension. Indeed, if $a:X\longrightarrow A$ verifies that $k=\dim a(X)<n=\dim X$, take $L=a^*H$ for any $H$ very ample on $A$. Then clearly ${\rm vol}(L)=0$ and $h^0_a(X,L)>0$.The same phenomena occur if $X$ is fibred: we can construct examples with $\deg_a^+f_*L\neq 0$ and ${\rm vol}(L)=0$. So we need to impose extra hypotheses to obtain nontrivial inequalities. Natural conditions are $a$-bigness of $L$ or its continuous moving part $L_c$. There are several strategies (see \cite{B} and \cite{J}), according to whether $L$ or $L_c$ are $a$-big. Observe that bigness of $L_c$ implies bigness of $L$ but the viceversa does not hold (see Remarks 3.7 and 3.8 in \cite{B}). Observe also that in \cite{B} it is shown that $a$-bigness of $L_c$ implies bigness of $L$. In particular, for $L_c$, bigness and $a$-bigness are equivalent, provided $h^0_a(X,L)\neq 0$. \betagin{comment} The main point here is the concept of {\it eventual map} induced by a line bundle $L$ with $h^0_a(X,L)>0$. Up to a blow-up, there is a factorization of the map $a$, ${\overline X} \rightarrow X_L\rightarrow A$ such that the map $\phi_L: {\overline X}\longrightarrow X_L$ verifies the following properties: \betagin{itemize} \item $L_c=\phi_L^*(R_L)$ for some line bundle $R_L$ on $X_L$, which induces a base point free generically finite morphism on $X_L$ (\cite{BPS3}). \item Up to a multiplication map, the linear system $|L_c^{(d)}\otimes \alphapha|$, for $\alphapha$ general is base point free and induces the map $\phi_L^{(d)}: {\overline X}^{(d)}\longrightarrow X_L^{(d)}$ (\cite{B}, \cite{BPS3}). \item Since the map $\phi_L$ factorizes $a$, it is generically finite provided $X$ is of maximal $a$-dimension and it is birational if ${\rm deg} a=1$. In general, we have that ${\rm dim} X_L\geq {\rm dim} a(X)$. \item The map $\phi_L$ is generically finite provided $L_c$ is $a$-big (\cite{B}). \item A birational model of the map $\phi_L$ is given by the natural map $\rho: X \dashrightarrow \mathbb{P}_A(a_*L)$, where $X_L:=\rho (X)$ (\cite{J}). \end{itemize} \end{comment} If we restrict our good families ${\mathcal F}$ adding the condition of bigness of $L$ (or $L_c$), the resulting subfamily is not {\it good} since bigness is not stable by subbundles (and so condition (4) fails). Nevertheless, we can obtain some closely related Slope inequalities for fibrations of non maximal $a$-dimension with adapted arguments. Let us first introduce some extra notation. 
\betagin{defn} Given a class ${\mathcal F}$ of triplets we define: \betagin{itemize} \item [(i)] ${\mathcal F}_p=\{(X,L,a)\in {\mathcal F}\,|\, c(X)\leq p\,\}.$ \item[(ii)] ${\overline \lambdambda}_{{\mathcal F}_p}(n)={\rm inf}\{\lambdambda(L,a)\,|\, (X,L,a)\in {\mathcal F}_p,\,n=\dim X,\,\, L_c \, {\rm big}\}.$ \item[(ii')] ${\widehat \lambdambda}_{{\mathcal F}_p}(n)={\rm inf}\{\lambdambda(L,a)\,|\, (X,L,a)\in {\mathcal F}_p,\,n=\dim X,\,\, L \, {\rm big}\}.$ \item[(iii)] ${\overline s}_{{\mathcal F}_p}(n)={\rm inf}\{s(f,L,a)\,|\, f \,{\rm of}\,\,{\rm type}\,\, {\mathcal F}_p,\,n=\dim X, \,\,(L_{|F})_c \,{\rm big}\}.$ \item[(iii')] ${\widehat s}_{{\mathcal F}_p}(n)={\rm inf}\{s(f,L,a)\,|\, f \,{\rm of}\,\,{\rm type}\,\, {\mathcal F}_p,\,n=\dim X, \,\,(L_{|F}) \,{\rm big}\}.$ \end{itemize} \end{defn} \betagin{rem} \betagin{itemize} \item If the class ${\mathcal F}$ is good, so is ${\mathcal F}_p$. \item If we consider classes of maximal $a$-dimension as in \cite{B2}, then ${\mathcal F}={\mathcal F}_0$. In this case, since $h^0_a(X,L)\neq 0$ implies bigness, we have that $\lambdambda_{\mathcal F}(n)={\overline {\lambdambda}}_{{\mathcal F}_0}(n)={\widehat {\lambdambda}}_{{\mathcal F}_0}(n)$ and $s_{\mathcal F}(n)={\overline s}_{{\mathcal F}_0}(n)={\widehat s}_{{\mathcal F}_0}(n)$. \end{itemize} \end{rem} One of the two inequalities between Clifford-Severi and Slope inequalities given in Theorem 4.11 in \cite{B2} holds without change in this new setting: \betagin{prop}\lambdabel{pardini} Let ${\mathcal F}$ be a good class of triplets of irregular varieties. Then, for all $n$ and $p\leq n-2$ we have $$ {\overline \lambdambda}_{{\mathcal F}_p}(n)\geq {\overline s}_{{\mathcal F}_p}(n)\,\,\,\, {\rm and} \,\,\,\, {\widehat \lambdambda}_{{\mathcal F}_p}(n)\geq {\widehat s}_{{\mathcal F}_p}(n). $$ \end{prop} \betagin{proof} We refer to the proof of the maximal $a$-dimension case using Pardini's trick given in Theorem 4.11, (ii) of \cite{B2}. As pointed out in Remark 4.13 (loc.cit.), only properties (1), (2) and (3) of a good class are used, and bigness of $L$ or $L_c$ are maintained in all the process. The condition $p\leq n-2$ ensures that ${\rm dim} \, a(X)\geq 2$. In this case, the sections of $\nu_d^*(H)$ are irreducible and $f_d$ has connected fibres (and so it is an irregular fibration). The rest of the proof holds without changes. \end{proof} The reverse inequality is more subtle and does depend heavily on bigness properties of $L_c$ or $L$. When $L_c$ is $a$-big two possible strategies are possible. The first option, following \cite{B}, is by hyperplane section argument since the eventual map allows us to maintain the process inside the good class ${\mathcal F}$. This is the content of Theorem \ref{thm1} and Remark \ref{resumen}. The second option is to consider the geometry of $T$, the connected component of the general fibre of $a_{|F}$. This approach uses Theorem 3.1 in \cite{J}, adapted to the irregularly fibred case via a suitable use of the continuous Xiao's method. This is the content of Theorem \ref{thm2} (ii). In general $(L_{|F})_c$ may not be $a_{|F}$-big. In this case, we can also obtain a good lower estimation of $s(f,L,a)$, but we need to consider the geometry of $G$, a connected component of the general fibre of the eventual map $\phi_{L_{|F}}$. This approach adapts the argument in Proposition 3.6 in \cite{J} to the relative setting via continuous Xiao's method and is the content of Theorem \ref{thm2} (i). 
Observe that bounds in Theorem \ref{thm2} in general are sharper than those in Theorem \ref{thm1} when considering a single fibred triplet $(X,L,a)$ but strongly depend on properties not well behaved in a good class ${\mathcal F}$. As a by product of the use of Xiao's method, in theorems \ref{thm1} and \ref{thm2} we can give properties in the limit cases, being those in Theorem \ref{thm1} analogous to those obtained in the cases of maximal $a$-dimension varieties. \betagin{thm}\lambdabel{thm1} Let ${\mathcal F}$ be a good class of triplets. Let $(X,L,a)$ be a fibred triplet $f: X\longrightarrow B$ of type ${\mathcal F}$ such that $f$ is of $a$-dimension $k$ (i.e., $k={\rm dim}\,a(F)$). Assume that $L_c$ is $f$-big. Then: \betagin{itemize} \item [(i)] $s(f,L,a) \geq \,(k+1)\,{\lambdambda}_{{\mathcal F}_0}(k).$ \item[(ii)] If equality holds, then: \betagin{itemize} \item There is a family of varieties $V$ of dimension $k$ covering $F$, and line bundles $R\leq L_{|V}$ such that $(V,R,a_{|V})$ are of maximal $a_{|V}$-dimension and verify the Clifford-Severi equality ${\rm vol}(R)={\lambdambda}_{{\mathcal F}_0}(k)\,h^0_{a_{|V}}(V,R).$ \item $f_*(L\otimes \alphapha)^+$ is semistable for general $\alphapha \in {\widehat A}$. If, moreover, $L$ is nef, then $f_*(L\otimes \alphapha)$ is nef and semistable. \end{itemize} \end{itemize} \end{thm} \betagin{rem}\lambdabel{thm1simple} Statement (i) of the above theorem, combined with Clifford-Severi inequalities for maximal $a$-dimension varieties as given in Remark 4.9 in \cite{B2} gives a broad set of Slope inequalities. For example, in general we have $s(f,L,a) \geq (k+1)!$ and, if $L$ is numerically $r$-subcanonical, then $s(f,L,a) \geq \delta(r)(k+1)!$ \end{rem} \betagin{rem} \lambdabel{resumen} Theorem \ref{thm1} together with Proposition \ref{pardini} can be rephrased as, for $p\leq n-2$: $$ {\overline \lambdambda}_{{\mathcal F}_p}(n)\geq {\overline s}_{{\mathcal F}_p}(n)\geq (n-p)\,\lambdambda_{{\mathcal F}_0}(n-p-1). $$ \end{rem} \betagin{proof} \noindent (i) Let $(X,L,a)$ be a triplet with a fibration $f: X \longrightarrow B$ of type ${\mathcal F}$. Let us apply inequality (\ref{xiaoalbanesenomaxima}) with $r=k+1={\rm dim}\,a(F)+1$: \betagin{equation} \lambdabel{uno} {\rm vol}(L)={\rm vol} (L_0)\geq N_{m+1}^n\geq \sum_{i=1}^m P_m^{n-k-1}\left[P_{i+1}^{k}+P_{i+1}^{k-1}P_i+...+P_i^{k}\right](\mu_i-\mu_{i+1}). \end{equation} Since $L_c \leq T_m=N_{m+1}$ (see Lemma 3.1 in \cite{B2}), we have by hypothesis that $P_m={T_m}_{|F}$ is big and hence $ a_{|F}$-big. Hence, its eventual map is generically finite. Moreover, since it is continuously globally generated by construction, up to a multiplication map we can assume that the linear system $|P_m|$ is base point free. Take $V_1,...,V_{n-k-1}\in |P_m|$ general sections and let $V=V_1\cap ... \cap V_{n-k-1}$. Hence $V$ is a smooth variety of maximal $a_{|V}$-dimension $k$. Let $R_i={P_i}_{|V}$. By the properties of a good class, we have that the triplet $(V,R_i,a_{|V})\in {\mathcal F}_0$. Hence we have that \betagin{equation}\lambdabel{dos} R_i^{k}\geq {\lambdambda}_{{\mathcal F}_0}(k)\,h^0_{a_{|V}}(V,R_i). \end{equation} Observe that $$ P_m^{n-k-1}P_i^{k}=R_i^{k}. $$ Finally, we use that $P_i-P_m\leq 0$ and then $h^0_{a_{|F}} (F,P_i-P_m)=0$, and the same holds by cutting by successive $V_i$. Then we can conclude that \betagin{equation}\lambdabel{tres} h^0_{a_{|V}}(V,R_i)\geq h^0_{a_{|F}}(F,P_i)\geq r_i. 
\end{equation} \noindent Finally, observe that using general Clifford-Severi inequality for irregular varieties (Main Theorem in \cite{B}) \betagin{equation}\lambdabel{cuatro} \delta_i:=R_{i+1}^{k}+R_{i+1}^{k-1}R_i+...+R_i^{k}\geq R_{i+1}^{k}+k\,R_i^{k}\geq \lambdambda_{{\mathcal F}_0}(k)\,( r_{i+1}+kr_i)\geq (k+1)\,\lambdambda_{{\mathcal F}_0}(k)\,r_i. \end{equation} Then we can conclude \betagin{equation} \lambdabel{cinco} {\rm vol}(L)={\rm vol} (L_0)\geq N_{m+1}^n\geq \sum_{i=1}^m (k+1)\,\lambdambda_{{\mathcal F}_0}(k)\,r_i(\mu_i-\mu_{i+1})=(k+1)\,\lambdambda_{{\mathcal F}_0}(k)\,{\rm deg}_a^+f_*L. \end{equation} \noindent (ii) Assume that equality holds. Then we have equality in (\ref{dos}), (\ref{tres}), (\ref{cuatro}) and (\ref{cinco}), which imply, for all $i=1,...,m$: \betagin{itemize} \item $r_{i+1}=r_i$, \item $h^0_{a_{|V}}(V,R_i)= h^0_{a_{|F}}(F,P_i)= r_i$, \item $R_i^{k}={\lambdambda}_{\mathcal F}(k)_0\,h^0_{a_{|V}}(V,R_i)$, \item ${\rm vol} (L_0)=T_m^{n}$. \end{itemize} Hence we have that $m=1$ (so $f_*(L\otimes \alphapha)^+$ is semistable), $h^0_{a_{|V}}(V,R_m)= h^0_{a_{|F}}(F,P_m)= r_m$, and $(V,R_m)$ verifies the Clifford-Severi equality $R_m^{k}={\lambdambda}_{{\mathcal F}_0}(k)\,h^0_{a_{|V}}(V,R_m)$. Observe that if equality holds then $L$ is big since $s(f,L,a)>0$. We have that $L_0=T_m+Z_m$ and ${\rm vol} (L_0)=T_m^{n}$. If $L$ is moreover nef, then we have that $Z_m=0$ (cf. Theorem A in \cite{FKL}). Hence: $${\rm rank}f_*(L_0)^+=r_1=h^0_{a_{|F}}(F,P_1)=h^0_{a_{|F}}(F,{L_0}_{|F})={\rm rank}f_*(L_0)$$ \noindent and hence $f_*(L_0)$ is semistable and nef. \end{proof} \betagin{lem}\lambdabel{fujita} Let $X$ be a smooth, projective variety and $L$ a big line bundle on $X$. Let $\phi: X \longrightarrow Y$ be a fibred space, $G$ a general fibre of $\phi$ and $R\in {\rm Pic} (Y)$ a line bundle such that $L'=\phi^* (R)\leq L$. Then: $$ {\rm vol}_X (L)\geq {\rm vol}_{X|G}(L)\,{\rm vol}_{Y}(R). $$ \end{lem} \betagin{proof} We have that $L=\phi ^*(R)+Z$ with $Z\geq 0$. The result is obvious if $L$ is nef. We will reduce to this case via Fujita Approximation theorem. In the general case, assume $R$ is big, otherwise the result is trivial. Following Theorem 3.5 in \cite{BDPP}, there is an extension of the volume function given by the moving intersection numbers which is non decreasing in each factor, superadditive and coincides with the intersection product for nef line bundles. Let $e={\rm dim} (Y)$. For any birational compatible modifications of $X$ and $Y$ and any decompositions $L=W_1+E_1$ and $R=W_2+E_2$ such that $W_i$ are big and nef $\mathbb{Q}$-divisors and $E_i$ are effective $\mathbb{Q}$-divisors, we have: $$ {\rm vol}(L)=<L,....,L>\,\geq\, <W_1,...,W_1,\phi^*(W_2),...,\phi^*(W_2)>=W_1^{n-e}(\phi^*(W_2))^e=({W_1}_{|G})^{n-e}W_2^e $$ \noindent since the moving intersection numbers are nondecreasing and $W_i$ are nef. To conclude, we apply Fujita Approximation theorem for the volume and the relative volume (see, for example, Proposition 2.11 and Theorem 2.13 in \cite{elmnp}). For any $\varepsilonlon >0$, there are birational modifications of $X$ and $Y$ (that we can make compatible with the above hypotheses) and decompositions $L=W_1+E_1$ and $R=W_2+E_2$ as above and such that \betagin{itemize} \item ${\rm vol}_{X|G}(L) \geq {W_1}_{|G}^{n-e}\geq {\rm vol}_{X|G}(L)-\varepsilonlon$ and \item ${\rm vol}_Y(R)\geq W_2^e \geq {\rm vol}_Y(R)-\varepsilonlon$. 
\end{itemize} We apply the above inequality for any such decompositions and we conclude that $$ {\rm vol}_X (L) \geq {\rm vol}_{X|G}(L)\, {\rm vol}_Y (R). $$ \end{proof} \betagin{rem}\lambdabel{beta} Let $Z$ be a smooth, projective variety, and $L$ a line bundle on $Z$. In \cite{J}, two invariants to compute the positivity of $L$ are defined: $\delta(L)$ and $\delta_1(L)$, the second being the minimum of volumes of subline bundles of $L$ inducing generically finite maps, when restricted to general positive dimensional subvarieties $V$ covering $Z$. We clearly have that $\delta_1(L)\geq cov.gon(Y)$. For any $k\leq s\leq {\rm dim} (Z)$, define $$ \betata(L,s,k)={\rm min}\{\binom{s}{k}\delta (L),\delta_1(L)\}. $$ \noindent We have that $\betata(L,s,k)\geq 1$ and that $\betata(L,s,k)\geq 2$, provided $Y$ is not uniruled. In \cite{J} Theorem 3.1, the following Clifford-Severi inequality is proved. Consider a triplet $(X,L,a)$ of dimension $n$. Let $k={\rm dim} \, a(X)$ and $T$ be a connected component of the general fibre of $a$. Assume that $L_c$ is $a$-big. Then: $$ \lambdambda(L,a)\geq \betata(L_{|T},n,k)\,k! $$ \end{rem} \betagin{thm}\lambdabel{thm2} Let $(X,L,a)$ be a fibred triplet. Let $G$ be a connected component of the general fibre of the eventual map $\phi_{L_{|F}}$ and let $T$ be a connected component of the general fibre of $a_{|F}$. Then \betagin{itemize} \item [(i)] If $L_{|F}$ is $a_{|F}$-big, then $s(f,L,a)\geq {\rm vol}_{F|G}(L)\,(k+1)!$ \item [(ii)] If $(L_{|F})_c$ is $a_{|F}$-big, then $s(f,L,a)\geq \betata (L_{|T},n,k+1)\,(k+1)!$ \noindent In particular, if $F$ is not uniruled, then $s(f,L,a)\geq 2\,(k+1)!$ \item [(iii)] If equality holds in (i) or (ii), then $f_*(L\otimes \alphapha)^+$ is semistable for general $\alphapha \in {\widehat A}$. \end{itemize} \end{thm} \betagin{proof} (i) Since $(T_m)_{|F}=P_m$ is continuously globally generated and coincides by construction with $(L_{|F})_c$, up to a multiplication map and a birational modification, we can consider that $|P_m|$ induces the eventual map of $L_{|F}$ and $a_{|F}$ factorizes through this map. Up to a further birational modification and multiplication map, we can consider the relative map induced by the quotient $f_*(L_0)^+=f_*(T_m)^+={\mathcal E}_m\longrightarrow T_m$. The Stein factorization of such map gives a relative fibration $\phi_m: X\longrightarrow X_m$ over $B$, with general fibre $G$ and a factorization as $a={\overline a}\circ \phi_m$ for some ${\overline a}: X_m\longrightarrow A$. We can assume $X_m$ to be smooth. Then $T_m=\phi_m^*(L_m)$ for some line bundle $L_m$ on $X_m$ such that $g_*(L_m)^+=f_*(T_m)^+=f_*(L_0)^+={\mathcal E}^{\alphapha_0}_m$, where $g: X_m \longrightarrow B$ is the induced fibration over $B$. When restricted to a general fibre ${\overline F}$, this is just the Stein factorization of the eventual map $\phi_{L_{|F}}$. Then we apply Lemma \ref{fujita}: $$ {\rm vol}_X (L) \geq {\rm vol}_{X|G}(L)\, {\rm vol}_{X_m}(L_m). $$ Observe that ${L_m}$ restricted to the fibre of $g$ is ${\overline a}$-big. Hence, we can apply Remark \ref{thm1simple} to $(X_m,L_m,{\overline a})$ and obtain $$ {\rm vol}_{X_m}(L_m)\geq (k+1)!\,{\rm deg}_{\overline a}^+g_*(L_m)=(k+1)!\,{\rm deg}_a^+ f_*(L_0). $$ \noindent (ii) Consider the set of indexes $I=\{1,....,m\}$ and let us construct an increasing ordered partition as follows. For $s=1,...,n-1$, consider $I_s=\{i\in I \,|\, \kappa (P_i)=s\,\}$. Recall that $\kappa (P_i)={\rm dim} \, \phi_{P_i}(F)$. 
Since $(L_{|F})_c=P_m$ is $a_{|F}$-big then its eventual map is generically finite, and so $I_{n-1}\neq \emptyset$. On the other hand, since $a_{|F}$ factorizes through any eventual map, we have that ${\rm dim}\,\phi_i(F)\geq k$, and hence $I_s=\emptyset$ if $s<k$. Consider now Xiao's inequality (\ref{Xiaogeneral}): \betagin{equation}\lambdabel{xiaobuena} N^n_{m+1}\geq \sum _{s=1}^{n-1} (\prod_{k>s}P_{b_k})\sum_{i \in I_s}(\sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l})(\mu_i-\mu_{i+1}). \end{equation} Consider the Stein factorization of the eventual maps induced by $|P_i|$, $\phi_i: F \longrightarrow F_i$, and let $R_i\in {\rm Pic}(F_i)$ be such that $P_i=\phi_i^*(R_i)$. We can assume that all the $F_i$ are smooth and that $\phi_i$ factorizes through $\phi_j$ if $i<j$. Let $G_i$ the generic fibre of $\phi_i$. Observe that if $i,i'\in I_s$, then $G_i=G_{i'}$ and so we denote it by $G_s$. Observe that ${\rm dim} \, G_s=n-1-s.$ Since eventual maps factorizes $a_{|F}$, we also have maps $a_i: F_i\longrightarrow A$ such that $a_{|F}=a_i\circ \phi_i$. Then we have that, for $i\in I_s$: $$ \sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l}\geq (s+1)P_i^s\geq \left[(k+1)R_i^s\right]G_s. $$ Since $(F_i,R_i,a_i)$ is a triplet such that $(R_i)_c=R_i$ is big and of $a_i$-dimension $k$, we can apply general Clifford-Severi inequaliy to obtain $R_i^s\geq k! h^0_{a_i}(F_i,R_i)=k!h^0_{a_{|F}}(F,P_i)\geq k! \, r_i$. Summing up we have \betagin{equation}\lambdabel{paraigualdad} \sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l}\geq \left[k! \,(r_{i+1}+kr_i)\right]\,G_s\geq \left[(k+1)!\,r_i\right]\,G_s \end{equation} When $s\leq n-2$, we have that $(\prod_{k>s}P_{b_k})G_s$ is the volume of $P_{b_{n-1}}$ restricted to a curve in $T$. Since $|P_{b_{n-1}}|$ induces a generically finite map, we have that $(\prod_{k>s}P_{b_k})G_s\geq \delta_1(L_{|T})$. Hence, for any $i\in I_s$ we have $$ (\prod_{k>s}P_{b_k})(\sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l})(\mu_i-\mu_{i+1}))\geq (s+1)\delta_1(L_{|T})k!r_i \geq (k+1)!\delta_1(L_{|T})r_i. $$ When $s=n-1$, and $i\in I_{n-1}$ we can use Theorem 3.1 in \cite{J} to obtain $$ P_i^{n-1-l}P_{i+1}^l\geq P_i^{n-1}\geq \betata(L_{|T},n-1,k) k! r_i, $$ \noindent and so $$ (\sum_{l=0}^{n-1}P_i^{n-1-l}P_{i+1}^{l})(\mu_i-\mu_{i+1})\geq n\betata(L_{|T},n-1,k)\,k!\,r_i. $$ Now we take the minimal lower bound for $s\leq n-2$ and $s=n-1$ and observe that $$ {\rm min}\{n\,\betata (L_{|T},n-1,k)\,k!, (k+1)!\,\delta_1(L_{|T})=\betata (L_{|T},n,k+1)(k+1)! $$ We finally obtain $$ {\rm vol}(L)\geq N^n_{m+1}\geq \betata(L_{|T},n,k+1) (k+1)! \sum_{i=1}^mr_i(\mu_i-\mu_{i+1})=\betata(L_{|T},n,k+1)(k+1)!\,{\rm deg}_a^+f_*L. $$ \noindent (iii) If equality holds in (i), then equality holds for $s(g,L_m,{\overline a})$ and so Theorem \ref{thm1} applies. If equality holds in (ii), then equality holds in any step and in particular in formula (\ref{paraigualdad}). If $r_{i+1}=r_i$ for all $i$ then $m=1$ and $f_*(L_0)^+$ is semistable. \end{proof} \betagin{comment} \betagin{cor}\lambdabel{CSeventualdegree} Let $(X,L,a)$ be a triplet such that $k={\rm dim}\,a(X)\geq 2$. Let $G$ be a connected component of the general fibre of $a$. Then $$ \lambdambda (L,a)\geq {\rm vol}_{X|G}(L)\,e(L)\, k! $$ \end{cor} \betagin{proof}Up to a multiplication map by $d$ and a birational modification, consider a suitable fibration over $\mathbb{P}^1$ as given in the proof of Proposition \ref{pardini}. The fibre $F$ there has the save connected component $G$ of $a_{|F}$ and ${\rm dim}\,a(F)=k-1$. 
The slope of such a fibration satisfies the inequality given in Theorem \ref{thm2} (i) and tends to $\lambda (L,a)$ when $d\rightarrow +\infty$, as shown in the proof of Proposition \ref{pardini}. \end{proof} \end{comment} \begin{thebibliography}{ABCD} \bibitem{BS} M.A.~Barja, L.~Stoppino, {\em Stability conditions and positivity of invariants of fibrations}. Algebraic and complex geometry, 1--40, Springer Proc. Math. Stat., 71, Springer, Cham, 2014. \bibitem{B} M.A.~Barja, {\em Generalized Clifford Severi inequality and the volume of irregular varieties}. Duke Math. J. {\bf 164} (2015), no. 3, 541--568. \bibitem{B2} M.A.~Barja, {\em Slope inequalities for higher dimensional irregular fibrations}. arXiv:2012.06889 [math.AG]. \bibitem{BPS1} M.A.~Barja, R.~Pardini, L.~Stoppino, {\em Surfaces on the Severi line}, Journal de Math\'ematiques Pures et Appliqu\'ees, (2016), no. 5, 734--743. \bibitem{BPS2} M.A.~Barja, R.~Pardini, L.~Stoppino, {\em The eventual paracanonical map of a variety of maximal Albanese dimension}. Algebr. Geom. 6 (2019), no. 3, 302--311. \bibitem{BPS3} M.A.~Barja, R.~Pardini, L.~Stoppino, {\em Linear systems on irregular varieties}. Journal of the Institute of Mathematics of Jussieu, 1 (2019), 1--39. \bibitem{BPS4} M.A.~Barja, R.~Pardini, L.~Stoppino, {\em Higher dimensional Clifford-Severi equalities}. Communications in Contemporary Mathematics, (2019), doi.org/10.1142/S0219199719500792. \bibitem{BDPP} S.~Boucksom, J.-P.~Demailly, M.~Paun, T.~Peternell, {\em The pseudo-effective cone of a compact K\"ahler manifold and varieties of negative Kodaira dimension}, math.AG/0405285. \bibitem{elmnp} L.~Ein, R.~Lazarsfeld, M.~Popa, M.~Mustata, M.~Nakamaye, {\em Restricted volumes and base loci of linear series}, Amer. J. Math. {\bf 131} (2009), 607--651. \bibitem{FKL} M.~Fulger, J.~Koll\'ar, B.~Lehmann, {\em Volume and Hilbert functions of $\mathbb R$-divisors}, Michigan Math. J. {\bf 65} (2016), no. 2, 371--387. \bibitem{HZ} Y.~Hu, T.~Zhang, {\em Relative Severi inequality for fibrations of maximal Albanese dimension over curves}, arXiv:1905.08404 [math.AG]. \bibitem{J} Z.~Jiang, {\em On Severi type inequalities}, Math. Ann. (2020), doi.org/10.1007/s00208-020-02082-6. \bibitem{Z} T.~Zhang, {\em Severi inequality for varieties of maximal Albanese dimension}, Math. Annalen {\bf 359} (2014), no. 3, 1097--1114. \bibitem{Z3} T.~Zhang, {\em Geography of irregular Gorenstein 3-folds}, Canad. J. Math. 67 (2015), no. 3, 696--720. \bibitem{Z2} T.~Zhang, {\em Relative Clifford inequality for varieties fibred by curves}, Journal of Differential Geometry, to appear (arXiv:1706.06523). \end{thebibliography} \end{document}
\begin{document} \title{A 2-Base for Inverse Semigroups} \begin{center} {\em Dedicated to the memory of Bill McCune (1953-2011)} \end{center} \begin{abstract} An open problem in the theory of inverse semigroups was whether the variety of such semigroups, when viewed as algebras with a binary operation and a unary operation, is $2$-based, that is, has a base for its identities consisting of $2$ independent axioms. In this note, we announce the affirmative solution to this problem: the identities \[ \quad x(x'x) = x \qquad \quad x (x' (y (y' ((z u)' w')'))) = y (y' (x (x' ((w z) u)))) \] form a base for inverse semigroups where ${}'$ turns out to be the natural inverse operation. We recount here the history of the problem including our previous efforts to find a $2$-base using automated edduction and the method that finally worked. We describe our efforts to simplify the proof using \textsc{Prover9}, present the simplified proof itself and conclude with some open problems. A humanized proof will be submitted elsewhere. \end{abstract} \section{Introduction} \seclabel{intro} The notion of inverse in semigroup theory generalizes the corresponding notion in group theory. Given an element $a$ of a semigroup, an element $b$ is said to be an \emph{inverse} of $a$ if the equations $aba = a$ and $bab = b$ hold. A semigroup in which every element has an inverse is said to be \emph{regular}. A regular semigroup in which each element has a \emph{unique} inverse is called an \emph{inverse semigroup}. Arguably, groups, inverse semigroups and regular semigroups constitute the most important classes of semigroups. A standard reference for semigroup theory in general is \cite{Howie}, and another for inverse semigroups in particular is \cite{Petrich}. As idempotents shape the structure of regular semigroups to a large extent, it is no surprise that these classes can be defined in terms of idempotents, that is, \begin{itemize} \item a group is a regular semigroup with exactly one idempotent; \item an inverse semigroup is a regular semigroup in which the idempotents commute. \end{itemize} Unlike general regular semigroups, inverse semigroups share an important property with groups, namely that there is a natural unary operation $x\mapsto x'$ which assigns to each element its unique inverse. Thus inverse semigroups are frequently viewed as algebras of type $\langle 2,1\rangle$ with the binary operation $\cdot$ being the semigroup multiplication and the unary operation ${}'$ being the natural inversion. In the language of algebras of type $\langle 2,1\rangle$, a set of $n$ independent identities is an $n$-base for inverse semigroups, if those identities define the variety of inverse semigroups with the unary operation coinciding with the natural inversion. The best known equational characterization of the variety of inverse semigroups is the following, due to B. Schein (\cite[Theorem 1.4]{Schein}): \[ x(yz)=(xy)z \qquad (xy)'=y'x' \qquad x''=x \qquad xx'x=x \qquad (xx')(y'y)=(y'y)(xx')\,. \] This is not, strictly speaking, a base because the identity $(xy)'=y'x'$ is dependent on the others, as Schein himself noted. Removing that identity turns out to give a $4$-base for inverse semigroups. One can also replace $x''=x$ with $x'xx' = x'$ thus giving another independent 4-base \cite[Theorem 3.2]{AM}. A $5$-base also due to Schein is given by keeping by the first four of the above identities and replacing $xx'y'y=y'yxx'$ with $xx'x'x=x'xxx'$. 
So the next natural question was if there exists a $3$-base for inverse semigroups. Finding minimal axiom sets for classes of mathematical objects has, of course, a long and distinguished tradition in the area of automated deduction; see the bibliography in \cite{AM}. Not surprisingly, then, the affirmative solution to the $3$-base problem was found by the first-named author and Bill McCune using \textsc{Prover9} \cite{McCune}. That solution, which appeared in \cite{AM}, can be described as follows. \begin{theorem}\label{final} Consider the following identities in a binary operation and a unary operation: \begin{align*} (S_1) &\quad x(yz)=(xy)z & (S_4) &\quad x'xyy' = yy'x'x &\quad (S_7) &\quad (xy')z=x(z'y)' \\ (S_2) &\quad xx'x = x & (S_5) &\quad (x')' = x & (S_8) &\quad (xx')'x=x \\ (S_3) &\quad x'xx' = x' & (S_6) &\quad (xy)' = y'x' & (S_9) &\quad (xx')(yy')=(yy')(x'x)' \end{align*} As algebras $(S,\cdot,{}')$ of type $\langle 2,1\rangle$, where the unary operation coincides with the natural inversion, inverse semigroups can be defined by each of the following sets of axioms: \begin{enumerate} \item \emph{(}$S_1$\emph{)}, \emph{(}$S_2$\emph{)}, \emph{(}$S_4$\emph{)}--\emph{(}$S_6$\emph{)}; \item \emph{(}$S_1$\emph{)}--\emph{(}$S_4$\emph{)}; \item \emph{(}$S_1$\emph{)}, \emph{(}$S_2$\emph{)}, \emph{(}$S_4$\emph{)} and \emph{(}$S_5$\emph{)}; \item \emph{(}$S_1$\emph{)}, \emph{(}$S_3$\emph{)}, \emph{(}$S_4$\emph{)} and \emph{(}$S_5$\emph{)}; \item \emph{(}$S_7$\emph{)}--\emph{(}$S_9$\emph{)}. \end{enumerate} \end{theorem} The $3$-base ($S_7$)--($S_9$) was shortly followed by a much more elegant one \cite{AK} (again using automated deduction): \[ \quad x(yz'')=(xy)z \qquad \quad x=(xx')x \qquad \quad (xx')(y'y) = (y'y)(xx') \,. \] Since then we have discovered even more elegant $3$-bases, including some which have associativity as one of the axioms. These will appear together with human proofs in \cite{AKP}. As was proved in \cite{AM}, there does not exist a $1$-base for inverse semigroups, that is, any $k$-base for inverse semigroups requires $k\geq 2$. So until now the main open problem in this line of research has been to prove or disprove the existence of a $2$-base. \section{Early Attempts} A naive approach to the problem of trying to find a base for some algebra is to generate many identities which can serve as candidates, and then to loop through those candidates, testing them against both a theorem prover and a finite model builder. In order for the naive approach to have any chance of succeeding, one needs some \emph{a priori} idea of what properties a particular base must have. For instance, if there is a lower bound on the number of variables which are necessary or a lower bound on the length of the axioms (as measured by, say, symbol counting), then this can very helpful in reducing the size of the candidate pool. Inverse semigroups form a uniform (or regular) variety of algebras, that is, in each identity, the set of variables appearing on the left of the identity equals the set of variables appearing on the right of the identity. As is well known, any consequence of uniform identities is also uniform and hence any equational base must be uniform as well. In addition, it is easy to see that one of the identities must have the form $u=x$ (a variable equal to a word), and so $u$ must depend only on $x$. 
Putting together these criteria with a few others, we have the following \cite{AKP}: \begin{proposition} A $2$-base for inverse semigroups must have the form $\{u(x) = x, s = t\}$ where $u(x)$ is a term involving only the variable $x$. The identity $s=t$ involves at least three variables, all of which occur in both $s$ and $t$. The unary operation occurs in $u(x)$ and in at least one of $s$ or $t$. \end{proposition} In the end, this description was not as helpful as one might hope. In 2006, Bill McCune generated approximately half a million candidates for possible $2$-bases, but after looping through them as described above, none of them worked. In 2010, we asked Bill if any of those sets of candidates could imply just \[ \quad x(yz)=(xy)z \qquad \quad x=x(x'x) \qquad \quad (xx')(y'y) = (y'y)(xx')\,. \] A semigroup together with a unary operation ${}'$ satisfying just these identities will indeed be an inverse semigroup, but in general, the unary operation will not coincide with the natural inverse. As it turns out, the point was moot, because once again all candidates failed. We then tried another (unpublished) technique which has sometimes been helpful in finding smaller axiom sets for various structures. The idea is to take advantage of \textsc{Prover9}'s \emph{semantic guidance} feature. \textsc{Prover9} can take as input \emph{interpretations} which are models in the standard \textsc{Mace4} format. In the default given clause selection scheme for \textsc{Prover9} (assuming there are no hints), clauses are selected in the following order: $1$ oldest clause, $4$ lightest false clauses and $4$ lightest true clauses. Here ``false'' means false in all interpretations and ``true'' means true in at least one interpretation. The default interpretation (meaning that the interpretations list does not contain any models) is that false clauses are negative and true clauses are nonnegative. The idea behind using semantic guidance to reduce axiom sets goes as follows. Suppose we have a theory with, say, $3$ independent formulas $A$, $B$ and $C$. We use \textsc{Mace4} and \textsc{Prover9} as follows to carry out the following procedure and hence produce candidates that are more likely to be a 2-base: \begin{enumerate} \item \textsc{Mace4}: Find models in which $A$ and $B$ are true, but $C$ is false. \item \textsc{Mace4}: Find models in which $A$ and $C$ are true, but $B$ is false. \item \textsc{Prover9:} With all models in the interpretations list, generate consequences of $A$, $B$ and $C$ for a fixed amount of time (or fixed number of given clauses or any other criterion for stopping the job). \item If \textsc{Prover9} generated any clauses marked false, they will be false in \emph{all} interpretations. Collect all such clauses. \item For each false clause $D$, test if $\{A,D\}$ imply $B$ and $C$ using \textsc{Prover9} and \textsc{Mace4}. \end{enumerate} This procedure has been very useful to us in other situations, but it failed here. We used all the $3$-bases known to us (those mentioned above and others) and tried the procedure above with all of them, producing for each a $3G$ text file; but none of the candidates worked. After this experiment and others like it, the first two authors were temporarily convinced that no $2$-base exists for inverse semigroups (as stated in the end of \cite{AK}). This conviction was totally changed when we discovered $2$-bases for two proper varieties of inverse semigroups, namely commutative inverse semigroups and Clifford semigroups \cite{AKP}. 
This made the existence of a $2$-base for inverse semigroups much more plausible since it is unusual to have a non-$2$-based variety with $2$-based subvarieties. \section{A $2$-base for inverse semigroups and its proofs} As already noted, one identity in any $2$-base for inverse semigroups must have the form $u(x) = x$. Thus in our search for a $2$-base, we decided to fix this identity in the simplest way possible, namely $x(x'x) = x$ (or sometimes $(xx')x = x$). So what remained was to find a uniform identity which would imply associativity, the commutativity of idempotents and $x''=x$. After many tests using \textsc{Prover9}, we identified an identity that, together with $x(x'x)=x$, implies associativity and $x''=x$, but not commutativity of the idempotents: $(x y) z = ((y z)' x')'$. Then we used a trick typical in the world of cancellative semigroups: we \emph{glued} together this identity with an identity expressing commutativity of idempotents. Somewhat surprisingly, one variant of this approach worked and yielded the following $2$-base: \begin{align*} & x = x(x'x) \\ & x (x' (y (y' ((z u)' w')'))) = y (y' (x (x' ((w z) u))))\,. \end{align*} What is remarkable is that making even small changes in the second identity, such as switching the roles of $y$ and $y'$, will fail to yield a $2$-base. While the discovery of these identities required a mix of automated tools and human intuition, the verification that these identities form a $2$-base for inverse semigroups is straightforward for automated theorem provers. We initially checked it using \textsc{Prover9} and subsequently using \textsc{Waldmeister} \cite{Hillenbrand} and \textsc{E} \cite{Schulz}. The first proof found by any of these tools is quite lengthy and complex. We have been able to generate simpler proofs using \textsc{Prover9}'s hints feature \cite{Veroff}. The basic idea is to find an initial proof, use it to generate hints, run \textsc{Prover9} again to find a new proof and repeat. We do not have formal criteria for simplicity (\emph{cf.} \cite{TW}), but rather look for proofs which find a balance between length, depth, the number of variables used and so on. The underlying idea here follows a dictum oft repeated by McCune: ``It's all about the given clause." Changing which clauses will be selected as given in subsequent runs changes the search space, thus leading to the possibility that new, simpler proofs will be found. For example, in practice, when \textsc{Prover9} is fed hints from its first successful proof, it rarely reproduces that same proof in the second run. Typically, the proofs of key steps are found more quickly so that the whole second proof is shorter and perhaps simpler than the first proof. At the same time, in order to reduce the complexity of the proof further, we gradually reduce the maximum weight parameter to try to find a proof using shorter and shorter clauses. (Note that by default, hint matchers are exempt from \textsc{Prover9}'s various limits on kept clauses, so this particular strategy requires setting a flag.) We reduce the maximum clause weight until just before a proof can no longer be found. Of course, there is no reason to suspect that a proof which has the minimum possible maximum clause weight will necessarily be the simplest proof, so slight increases in the maximum weight from its minimum sometimes yield better results. 
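As a quick independent sanity check, quite apart from the theorem provers mentioned above, one can verify the two identities by brute force on a small concrete inverse semigroup. The Python fragment below does this for the group $\mathbb{Z}_3$ with $x' = -x \bmod 3$; every group is an inverse semigroup, so both identities must hold there. The choice of model and all names in the script are ours and purely illustrative: such a check can only weed out obviously wrong candidate identities, and proves nothing about the variety as a whole.
\begin{verbatim}
# Brute-force sanity check of the two 2-base identities on the group Z_3,
# viewed as an inverse semigroup (x' = -x mod 3).  Illustrative only.
from itertools import product

N = 3

def mul(a, b):
    return (a + b) % N      # binary operation of the test model

def inv(a):
    return (-a) % N         # unary operation x -> x'

def first_identity(x):
    # x(x'x) = x
    return mul(x, mul(inv(x), x)) == x

def second_identity(x, y, z, u, w):
    # x(x'(y(y'((zu)'w')'))) = y(y'(x(x'((wz)u))))
    inner_l = inv(mul(inv(mul(z, u)), inv(w)))     # ((zu)' w')'
    lhs = mul(x, mul(inv(x), mul(y, mul(inv(y), inner_l))))
    inner_r = mul(mul(w, z), u)                    # (wz)u
    rhs = mul(y, mul(inv(y), mul(x, mul(inv(x), inner_r))))
    return lhs == rhs

assert all(first_identity(x) for x in range(N))
assert all(second_identity(*t) for t in product(range(N), repeat=5))
print("both identities hold in Z_3")
\end{verbatim}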
There is a law of diminishing returns in this in that eventually a point is reached where the effort involved in further simplifying a proof goes beyond any additional minor simplifications that might occur. In our case, we eventually found a 92 step proof of $x'' = x$, a 29 step proof of associativity (assuming $x''=x$) and a 26 step proof of $xx'y'y=y'yxx'$ (assuming $x''=x$ and associativity). Here we are referring to \textsc{Prover9}'s own reporting of proof length which, in proofs with demodulations, counts only the primary inferences. If we count rewrites as inferences, the proof lengths are 274, 82 and 93, respectively. However, we find that proofs treating rewrites as secondary inferences are often easier to follow for a human reader than those which treat primary paramodulations and rewrites on an equal footing. Of course the three proofs, with or without demodulations, share many inferences in common. Interestingly, adjoining the three goals together into a single goal and getting a proof in one run leads to a more complicated proof. This is probably because when separated, the second proof takes early advantage of the additional assumption $x''=x$, and the third proof takes advantage of both that and associativity. In a subsequent publication \cite{AKP}, we will present humanized proofs that our pair of identities forms a $2$-base for inverse semigroups, as well as some interesting $3$-bases and some $2$-bases for certain subvarieties of the variety of inverse semigroup. For now, we present the three proofs, lightly edited. The justification steps list only the parent clauses, not how they are specifically used to make the inference. {\tiny{ \begin{verbatim} 1 x'' = x # label(goal). 2 x * (x' * x) = x. 3 x * (x' * (y * (y' * ((z * u)' * w')'))) = y * (y' * (x * (x' * ((w * z) * u)))). 4 c1'' != c1. [1]. 5 ((x * y)' * z')' * (((x * y)' * z')'' * (u * (u' * ((z * x) * y)))) = u * (u' * ((x * y)' * z')'). [2,3]. 6 x * (x' * (y * (y' * (((z' * z) * u)' * z')'))) = y * (y' * (x * (x' * (z * u)))). [2,3]. 7 ((x * y) * z) * (((x * y) * z)' * (u * (u' * ((y * z)' * x')'))) = u * (u' * ((x * y) * z)). [2,3]. 8 ((x * y)' * z')' * (((x * y)' * z')'' * ((z * x) * y)) = ((z * x) * y) * (((z * x) * y)' * ((x * y)' * z')'). [2,7]. 9 ((x * y)' * z')' * (((x * y)' * z')'' * (((z * x) * y) * (((z * x) * y)' * ((x * y)' * z')'))) = ((x * y)' * z')'. [2,5,8]. 10 ((x * y)' * z')' = (z * x) * y. [9,3,8,7,2]. 11 x * (x' * (y * (y' * (z * u)))) = y * (y' * (x * (x' * (z * u)))). [6,10,2]. 12 (x * y) * (y' * y) = (y' * x')'. [2,10]. 13 (((x * y) * z) * u')' = (u * (y * z)') * x'. [10,10]. 14 (x * (y * z)) * (z' * z) = ((z' * y')'' * x')'. [12,10]. 15 (x * ((y' * y) * z)') * y' = ((y * z) * x')'. [2,13]. 16 (x * (y' * y'')'') * y' = (y * x')'. [2,15,12]. 17 (x * y) * ((y' * y)' * (y' * y)) = (x * (y * y')) * y. [2,14,10,10]. 18 ((x' * x'')'' * x')' = x. [2,14,2]. 19 ((x' * x'')'' * (y * x)')' = (x' * y')' * (x' * x). [12,14]. 20 (((x' * x'')'' * x') * y) * z = ((y * z)' * x)'. [18,10]. 21 ((x * y) * ((x * y)' * x)) * y = x * y. [10,18,10]. 22 ((x * x')'' * x)' = (x' * x'')'' * x'. [18,18,18,18]. 23 (x'' * x''')'' * x'' = x. [18,22]. 24 (x * ((y * z) * ((y * z)' * y))) * z = (x * y) * z. [21,10,10]. 25 (((x * y)' * z') * (((z * x) * y) * (x * y)')) * z' = (x * y)' * z'. [10,21]. 26 (x * (y * ((z * u) * ((z * u)' * z)))) * u = (x * (y * z)) * u. [24,10,10]. 27 (x' * (x * x')) * x = x' * x. [17,2]. 28 (x * (y' * (y * y'))) * y = (x * y') * y. [27,10,10]. 29 (x' * (x' * (x * x'))')' = (x' * x'')'. 
[27,12,12]. 30 (x' * (y * (x' * (x * x')))')' = (x' * (y * x')')'. [28,12,12]. 31 (x * y') * (y' * (y * y'))' = (x * y') * y''. [29,10,10]. 32 x' * (x' * (x * x'))' = x' * x''. [29,23,29,29,23]. 33 (x * (y * z')) * (z' * (z * z'))' = (x * (y * z')) * z''. [31,10,10]. 34 x' * (y * (x' * (x * x')))' = x' * (y * x')'. [30,23,30,30,23]. 35 ((x * (y' * (y * y'))) * y'')' = ((x * y') * y'')'. [34,15,15]. 36 (x * (y' * (y * y'))) * y'' = (x * y') * y''. [35,23,35,35,23]. 37 ((x * (y * z)) * ((y * z)' * y)) * z = ((x * y) * z) * ((y * z)' * (y * z)). [10,19,10,10]. 38 (x * (y * (z * u'))) * (u' * (u * u'))' = (x * (y * (z * u'))) * u''. [33,10,10]. 39 x * (x' * (y * (y' * z))) = y * (y' * (x * (x' * z))). [2,11,2]. 40 x * (x' * (y * (y' * x))) = y * (y' * x). [2,39]. 41 ((x * y) * ((x * y)' * (z * (z' * x)))) * y = (z * (z' * x)) * y. [39,26]. 42 (x'' * x') * ((x'' * x')' * (x * x')) = x * x'. [2,40,2]. 43 (x * y) * (y' * (z * (z' * y))) = (x * z) * (z' * y). [40,10,10]. 44 (x * (x' * y)) * (y' * x) = y * (y' * x). [40,14,2,10,10]. 45 ((x * (x' * y)) * y') * x = (y * y') * x. [44,12,12,10,10]. 46 (x' * ((x * (x' * y)) * y')')' = (x' * (y * y')')'. [45,12,12]. 47 ((x * y) * z) * ((y * z)' * (y * z)) = (y' * x')' * z. [43,26,37,12]. 48 ((x * y') * (y * y')) * y = (y'' * x')' * y. [47,17]. 49 ((x * y)' * z)' * ((x * y)' * (x * y)) = (x' * z)' * y. [20,47,22,23]. 50 ((x * y') * y'') * (((y' * (y * y')) * y'')' * ((y' * (y * y')) * y'')) = ((x * y') * (y * y')) * y''. [36,47,10]. 51 (x' * ((y * x') * (x * x'))')' = (x' * (x'' * y')'')'. [48,12,12]. 52 x' * ((x * (x' * y)) * y')' = x' * (y * y')'. [46,23,46,46,23]. 53 (x'' * x''')'' * (((x'' * x''')' * x) * x''')' = (x'' * x''')'' * (x'' * x''')'. [23,52]. 54 x * (((x * y) * y')' * x)'' = x * (y * y')'. [20,52,22,23,22,23,22,23]. 55 x * ((y' * (y * y')) * y'')' = x * (y' * y'')'. [33,54,38,36,54]. 56 (x'' * y)' * (x' * (x * x'))' = (x'' * y)' * x''. [32,49,32,32,49]. 57 ((x' * x'') * (((x' * (x * x')) * x'')' * x')) * x'' = (x' * (x * x')) * x''. [55,22,10,36,10,23]. 58 x''' * (x' * (x * x'))' = x''' * x''. [2,56,2]. 59 (x' * (x * x'))' * ((x' * (x * x'))'' * x'') = x''. [58,40,2,58,2]. 60 (((x' * (x * x'))' * y) * (((x' * (x * x'))' * y)' * x'')) * y = x'' * y. [58,41,2,58,2]. 61 x' * ((y * x') * (x * x'))' = x' * (x'' * y')''. [51,23,51,51,23]. 62 ((x' * (x * x')) * x'')' = (x' * x'')'. [61,15,16]. 63 (x' * (x * x')) * x'' = x' * x''. [57,62,21]. 64 ((x * y') * (y * y')) * y'' = (y'' * x')' * y''. [50,63,63,47]. 65 ((x' * (x * x'))' * y)' * x'' = (x'' * y)' * x''. [63,49,63,63,49]. 66 (((x' * (x * x'))' * y) * ((x'' * y)' * x'')) * y = x'' * y. [60,65]. 67 (x' * (x * x'))' * y' = x'' * y'. [33,25,64,66]. 68 (x * y') * (y * y') = (y'' * x')'. [67,10]. 69 ((x' * (x * x'))' * (x' * x''))' * y' = (x' * (x * x'))'' * y'. [33,67,63]. 70 (x' * (x * x'))'' * x'' = x''' * x''. [67,65,33,63,67]. 71 (x' * (x * x'))' * (x''' * x'') = x''. [59,70]. 72 ((x' * (x * x'))' * y)' = (x'' * y)'. [68,20,22,23]. 73 (x' * (x * x'))'' * y' = x''' * y'. [69,72,67]. 74 (x' * (x * x'))' = x''. [73,2,58,71]. 75 x' * (x * x') = x'. [73,23,74,74,23]. 76 (x'' * y)' * (x'' * x') = (x'' * y)' * (x * x'). [75,49,75,75]. 77 x'' * x' = x * x'. [76,2,42]. 78 (x''' * y)' * x' = (x' * y)' * x'. [76,49,77,77,49]. 79 (x'' * x''')'' * (((x'' * x''')' * x) * x''')' = (x'' * x''') * (x'' * x''')'. [53,77]. 80 (x'' * x''')' = (x * x')'. [77,12,77,68,77]. 81 ((x * x') * (x * x')')' = (x * x')' * (x * x'). [77,19,80,77,80,77]. 82 (x * x')'' * (((x * x')' * x) * x''')' = (x'' * x''') * (x * x')'. 
[79,80,80,80]. 83 (x * x')'' * x'' = x. [23,80]. 84 (x'' * x''') * (x * x')' = (x * x') * (x * x')'. [83,52,82,80,77]. 85 ((x * x')' * (x * x'))' * (x * x')'' = x'' * x'''. [80,83,84,81,80]. 86 (x' * (x'' * x'''))' * x' = x'''' * x'. [2,78,77]. 87 x'' * x''' = x * x'. [81,83,85]. 88 x'''' * x' = x * x'. [86,87,75,77]. 89 x''' * (x * x') = x'. [87,40,75,88,87,75]. 90 x''' = x'. [87,75,89]. 91 x'' = x. [87,83,90,83]. 92 $F. [91,4]. ============================== end of proof ========================== 1 (x * y) * z = x * (y * z) # label(goal). 2 x * (x' * x) = x. 3 x * (x' * (y * (y' * ((z * u)' * w')'))) = y * (y' * (x * (x' * ((w * z) * u)))). 4 x'' = x. 5 (c1 * c2) * c3 != c1 * (c2 * c3). [1]. 6 x * (x' * (y * (y' * ((z * u) * (u' * u))))) = y * (y' * (x * (x' * (u' * z')'))). [2,3]. 7 ((x * y)' * z')' * (((x * y)' * z') * (u * (u' * ((z * x) * y)))) = u * (u' * ((x * y)' * z')'). [2,3,4]. 8 ((x * y) * z) * (((x * y) * z)' * (u * (u' * ((y * z)' * x')'))) = u * (u' * ((x * y) * z)). [2,3]. 9 x' * (x * x') = x'. [4,2]. 10 ((x * y) * (y' * y)) * (((x * y) * (y' * y))' * (z * (z' * (y' * x')'))) = z * (z' * ((x * y) * (y' * y))). [2,6]. 11 x * (x' * (y' * (y * ((z * u) * (u' * u))))) = y' * (y * (x * (x' * (u' * z')'))). [4,6,4]. 12 x' * (x * (y' * (y * ((z * u) * (u' * u))))) = y' * (y * (x' * (x * (u' * z')'))). [4,11,4]. 13 ((x * y)' * z')' * (((x * y)' * z') * ((z * x) * y)) = ((z * x) * y) * (((z * x) * y)' * ((x * y)' * z')'). [2,8,4]. 14 ((x * y)' * z')' * (((x * y)' * z') * (((z * x) * y) * (((z * x) * y)' * ((x * y)' * z')'))) = ((x * y)' * z')'. [2,7,4,13]. 15 ((x * y) * (y' * y)) * (((x * y) * (y' * y))' * (y' * x')') = (y' * x')' * ((y' * x') * ((x * y) * (y' * y))). [2,10,4]. 16 (x * y) * (y' * y) = (y' * x')'. [2,10,15,11,15,12,9,9]. 17 (x' * y)' * ((x' * y) * (z * (z' * (x' * y)'))) = z * (z' * (x' * y)'). [4,10,16,4,16,4,4,16,4]. 18 ((x * y) * z) * (((x * y) * z)' * ((y * z)' * x')') = ((y * z)' * x')'. [14,17]. 19 (x * y') * (y * y') = (y * x')'. [4,16,4]. 20 ((x * y)' * z')' = (z * x) * y. [18,8,18,2]. 21 ((x * y) * z)' = (y * z)' * x'. [20,4]. 22 ((x * y)' * z)' = (z' * x) * y. [4,20]. 23 ((x * y)' * x) * x' = (x * y)'. [2,21,21,4]. 24 (x' * (y * z)) * u = (y' * x)' * (z * u). [21,22,21,4]. 25 (x * (y * z))' = (y * z)' * x'. [23,22,4,21,24,4,21,19,4]. 26 (x * y)' = y' * x'. [2,25,2]. 27 (x' * y) * z = x' * (y * z). [22,26,26,26,4,4]. 28 (x * y) * z = x * (y * z). [4,27,4]. 29 $F. [28,5]. ============================== end of proof ========================== 1 (x * x') * (y' * y) = (y' * y) * (x * x') # label(goal). 2 x * (x' * x) = x. 3 x * (x' * (y * (y' * ((z * u)' * w')'))) = y * (y' * (x * (x' * ((w * z) * u)))). 4 x'' = x. 5 (x * y) * z = x * (y * z). 6 (c2' * c2) * (c1 * c1') != (c1 * c1') * (c2' * c2). [1]. 7 c2' * (c2 * (c1 * c1')) != c1 * (c1' * (c2' * c2)). [6,5,5]. 8 x * (x' * (y * (y' * ((z * u)' * w')'))) = y * (y' * (x * (x' * (w * (z * u))))). [3,5]. 9 x' * (x * x') = x'. [4,2]. 10 x * (x' * (x * y)) = x * y. [2,5,5]. 11 x * (x' * ((y * (z * u))' * x')') = x * ((z * u)' * y')'. [8,8,4,4,4,4,10,4,4,4,4,10,10]. 12 x' * (x * (x' * y)) = x' * y. [9,5,5]. 13 x * (x' * ((y * z)' * u')') = x * (x' * (u * (y * z))). [10,8,12]. 14 x * (x' * (y * (y' * (z * u)))) = y * (y' * (x * (x' * (z * u)))). [10,8,13,10]. 15 x * ((y * z)' * u')' = x * (u * (y * z)). [11,13,10]. 16 x * (y * z')' = x * (z * y'). [9,15,4,9]. 17 x * (y * z)' = x * (z' * y'). [4,16]. 18 x * ((y * z)' * u) = x * (z' * (y' * u)). [17,5,5,5]. 
19 x * (y * (y' * (x' * (z * (z' * (x * y)))))) = z * (z' * (x * y)). [2,14,18,5]. 20 x * (x' * (y * (y' * (x * z)))) = y * (y' * (x * z)). [10,14]. 21 x * (x' * (y' * (y * (x * z)))) = y' * (y * (x * z)). [4,20,4]. 22 x' * (x * (y * (y' * x'))) = y * (y' * x'). [9,20,4,9]. 23 x * (y * (z * (z' * (y' * y)))) = x * (y * (z * z')). [22,17,17,18,4,4,4,18,18,18,4,4]. 24 x * (y * (y' * (x' * x))) = x * (y * y'). [23,2,18,18,18,4,21,18,4,10,5,5,5,5,10,21,10]. 25 x' * (x * (y * y')) = y * (y' * (x' * x)). [19,12,4,24]. 26 $F. [25,7]. ============================== end of proof ========================== \end{verbatim}}} \section{Problems} We conclude with a couple of open problems which are suitable for investigation by means of automated deduction. \begin{prob} Does there exist a $2$-base with fewer than $5$ variables? \end{prob} \begin{prob} Taking the first identity $x(x'x)=x$ as fixed, is there a $2$-base with a shorter second identity? \end{prob} We have found $2$-bases for certain subvarieties of inverse semigroups, such as commutative inverse semigroups and Clifford semigroups; these will appear in \cite{AKP}. But there are other subvarieties, such as strict inverse semigroups \cite{Petrich} for which we have not found $2$-bases. \begin{prob} Find $2$-bases for other varieties of inverse semigroups. \end{prob} \begin{acknowledgment} The first author was partially supported by FCT and FEDER, Project POCTI-ISFL-1-143 of Centro de Algebra da Universidade de Lisboa, and by FCT and PIDDAC through the project PTDC/MAT/69514/2006. The third author was supported by a University of Manitoba research leave grant during 2012-13 and he thanks the University of Manitoba for sanctioning Research-Study Leave. \end{acknowledgment} \end{document}
\begin{document} \title{On the Expected Total Number of Infections for Virus Spread on a Finite Network} \begin{abstract} In this paper we consider a simple virus infection spread model on a finite population of $n$ agents connected by some neighborhood structure. Given a graph $G$ on $n$ vertices, we begin with some fixed number of initial infected vertices. At each discrete time step, an infected vertex tries to infect its neighbors with probability $\beta \in (0,1)$ independently of others and then it dies out. The process continues till all infected vertices die out. We focus on obtaining proper lower bounds on the expected number of ever infected vertices. We obtain a simple lower bound, using the \textit{breadth-first search} algorithm, and show that for a large class of graphs which can be classified as the ones which locally ``look like'' a tree in the sense of \emph{local weak convergence} \cite{AlSt04}, this lower bound gives a better approximation than some of the known approximations through matrix-method based upper bounds \cite{DrGaMa08}. \\ \noindent \emph{{\bf AMS 2000 Subject Classifications:}} Primary: 60K35, 05C80; secondary: 60J85, 90B15 \\ \noindent \emph{{\bf Keywords and phrases:}} Breadth-first search, local weak convergence, random $r$-regular graphs, susceptible infected removed model, virus spread. \end{abstract} \section{Introduction} \label{Sec:Intro} \subsection{Background and Motivation} \label{SubSec:Background} Often it is observed that the normal operation of a system which is organized as a network of individual machines or agents is threatened by the propagation of a harmful entity through the network. Such harmful entities are often termed \emph{viruses}. For example, the Internet, as a network, is threatened by computer viruses and worms, which are self-replicating pieces of code that propagate in a network of computers. These codes use a number of different methods to propagate; for example, an e-mail virus typically sends copies of itself to all addresses in the address book of the infected machine. Weaver et al. \cite{WePaStCu03} give a good survey of different techniques of propagation for computer viruses. In this paper we use a simple \emph{susceptible infected removed (SIR)} model which was studied by Draief, Ganesh and Massouli\'{e} in 2008 \cite{DrGaMa08}. In this model, each susceptible agent can be infected by its infected neighbors at a rate proportional to their number and, once infected, remains infected till it is removed after a unit time. While it is infected, it has the potential to infect its neighbors. In general, removal can correspond to a quarantining of a machine from the network or patching the machine. In this model, it is assumed that once a node is removed, it is ``out of the network''. That is, it can no longer be susceptible or infected. Such a model is justified, provided the epidemic spread happens at a much faster rate than the rate of patching of the susceptible machines. The study of mathematical models for epidemic spread has a long history in biological epidemiology and in the study of computer viruses. One of the first works in this area was by Kermack and McKendrick \cite{KerMc27}, where they established the first stochastic theory for epidemic spread. They also proved the existence of an epidemic threshold, which determines whether the epidemic will spread or die out.
As mentioned in \cite{DrGaMa08}, earlier work mainly focused on finding or approximating the \emph{law of large numbers} limit, where the stochastic behavior was approximated by its mean behavior, and hence mainly studied deterministic models. More recent works \cite{BaUt04, LeUt95} have focused on the stochastic nature of the models and have tried to derive the asymptotic distribution of the number of survivors, using a key concept called the \emph{basic reproductive number} $R_0$, which is defined as the expected number of secondary infectives caused by a single primary infective. This concept of basic reproductive number is well defined under the \emph{uniform mixing} assumption, that is, when any infective can infect any susceptible with equal probability, and hence the underlying network is given by a complete graph. For a general network, where the basic reproductive number may become vertex dependent, it is not clear how to use this concept effectively. As in \cite{DrGaMa08}, in this work we would like to study this model on a general network. \subsection{Model} \label{SubSec:Model} We consider a closed population of $n$ agents, connected by a network structure, given by an undirected graph $G = \left(V, E\right)$ with vertex set $V$, containing all the agents, and edge set $E$. A vertex can be in any of the three states, namely, \emph{susceptible (S)}, \emph{infected (I)} or \emph{removed (R)}. At the beginning, the initial set of infected vertices is assumed to be non-empty and all others are susceptible. The evolution of the epidemic is described by the following discrete time model: \begin{itemize} \item After a unit epoch of time, each infected vertex instantaneously tries to infect each susceptible neighbor with probability $\beta \in \left(0,1\right)$ independent of all others. \item Each infected vertex is removed from the network after a unit time. \end{itemize} Mathematically, at an integer multiple of unit time, say $t$, if a susceptible vertex $v$ has $I_v\left(t\right)$ neighbors who are infected, then the probability of $v$ being infected instantaneously is $1 - \left(1 - \beta\right)^{I_v\left(t\right)}$ and each susceptible vertex gets infected independently. Also, an infected vertex remains in the network only for a unit time; after that it tries to infect its susceptible neighbors and then it is immediately removed. As pointed out by \cite{DrGaMa08}, this is a simple model, falling in the class of models known as Reed-Frost models, where the infection period is deterministic and is the same for every vertex. It is worth noting that the evolution of the epidemic can be modeled as a Markov chain. It is interesting to note here that the model is essentially the same as the i.i.d. Bernoulli bond percolation model with parameter $\beta$ \cite{Gri99}. This is because the set of ever infected (or removed) vertices is the same as the union of the connected open components of i.i.d. bond percolation on $G$ containing all the initial infected vertices. We note, though, that for percolation it is customary to work with an infinite graph $G$. If $G$ is the complete graph $K_n$, then this model is fairly well studied in the literature and is known as the \emph{binomial random graph}, also known as the Erd\H{o}s--R\'{e}nyi random graph \cite{Jan, Bola01}. As in \cite{DrGaMa08}, our goal is to study the total number of vertices that eventually become infected (and hence removed) without specifying the underlying network.
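The discrete time dynamics just described is straightforward to simulate. The following Python sketch (the function names, the example graph and the parameter values are our own illustrative choices, not taken from \cite{DrGaMa08}) performs one Monte Carlo run of the Reed-Frost dynamics on a graph given as an adjacency-list dictionary and returns the number of ever infected vertices; averaging over many runs estimates the quantity studied below.
\begin{verbatim}
# One realization of the discrete-time SIR (Reed-Frost) dynamics; illustrative sketch.
import random

def simulate_sir(adj, initial_infected, beta, rng=random):
    removed = set()
    infected = set(initial_infected)
    while infected:
        newly_infected = set()
        for v in infected:
            for u in adj[v]:
                # u is attacked only if it is still susceptible
                if u not in infected and u not in removed and u not in newly_infected:
                    if rng.random() < beta:
                        newly_infected.add(u)
        removed |= infected      # infected vertices are removed after one unit of time
        infected = newly_infected
    return len(removed)          # number of ever infected (= removed) vertices

# Example: estimate the expected total number of infections on a 4-cycle,
# starting with one infected vertex.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
runs = 10000
print(sum(simulate_sir(cycle4, [0], beta=0.3) for _ in range(runs)) / runs)
\end{verbatim}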
In \cite{DrGaMa08}, the authors derived an explicit upper bound on the expected number of vertices ever infected, which depends on both the size of the network and the infection rate $\beta$. This bound also requires the assumption that $\beta$ is ``small''. Unfortunately, the work \cite{DrGaMa08} did not provide any indication of whether the derived upper bound is a good approximation of the quantity of interest. In this work we derive a simple lower bound on the expected number of vertices ever infected which works for every infection rate $0 < \beta < 1$. Our lower bound is based on the \emph{breadth-first search (BFS)} algorithm and hence is easily computable for any general finite network $G$. We also prove that, under certain assumptions on the qualitative behavior of the underlying graph, namely if $G$ ``\emph{locally looks like a tree}'' in the sense of the \emph{local weak convergence} of Aldous and Steele \cite{AlSt04}, then our lower bound is asymptotically exact for ``small'' $\beta$, thus providing a good approximation when the network is ``large''. As we will see later, for such graphs $G$, the range we cover for $\beta$ always includes the range in which the upper bound obtained in \cite{DrGaMa08} holds, and in all these cases the upper bound overestimates the expected total number of infections. \subsection{Outline} \label{SubSec:Outline} In the following section, we state and prove our main results. Section \ref{Sec:Example} gives several examples where our lower bound holds and gives the asymptotically correct answer. Finally, in Section \ref{Sec:Discussion} we summarize the merits of our work and indicate some of its limitations as well. \section{Main Results and Proofs} \label{Sec:Results} We will denote by $Y^{G, I}$ the total number of vertices ever infected when the epidemic runs on a network $G$ and the infection starts at the vertices in $I \subseteq V$. Note that $Y^{G, I}$ implicitly depends on the size of the network. In Subsection \ref{SubSec:Results-One} we present the results when the epidemic starts with only one infected vertex. We generalize these results for epidemics starting with more than one infection, which are presented in Subsection \ref{SubSec:Results-Many}. In both cases, our results rely on a specific search algorithm, known as \emph{breadth-first search (BFS)}. We briefly describe the algorithm here. \begin{quote} {\tt \begin{itemize} \item[Step-0] Input graph $G$ with a linear ordering of its vertices, say $V := \left\{v_0, v_1, v_2, \cdots, v_{n-1}\right\}$. Let $T \leftarrow \left\{v_0\right\}$ and $N \leftarrow \left\{ v_0 \right\}$. \item[Step-1] Set $N' \leftarrow \emptyset$ and write $N = \left\{v_{i_1}, v_{i_2}, \cdots, v_{i_r} \right\}$ for some $r \geq 1$ such that $i_1 < i_2 < \cdots < i_r$. \item[Step-2] For $l=1$ to $r$, find all neighbors $u$ of $v_{i_l}$ which are not in $T$, put \[ N' \leftarrow N' \cup \left\{u \,\Big\vert\, u \sim v_{i_l} \text{\ and\ } u \not\in T \,\right\} \] and update $T$ as \[ T \leftarrow T \cup \left\{u \,\Big\vert\, u \sim v_{i_l} \text{\ and\ } u \not\in T \,\right\} \,. \] \item[Step-3] Update $N \leftarrow N'$. \item[Step-4] Go to Step-1 unless the vertex set of $T$ is the same as that of $V$. \item[Step-5] Stop with output $T$ as the BFS spanning tree with root $v_0$. \end{itemize} } \end{quote} Note that the BFS spanning tree is not necessarily unique: it depends on the starting point $v_0$, which is typically called the root, and also on the ordering of the vertices in which the exploration of neighbors is done in {\tt Step-2}.
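As a companion to the description above, here is a short Python sketch (the function names and the example are ours) of the BFS exploration and of the quantity $\sum_{v \in V} \beta^{d_G\left(v, v_0\right)}$, which is identified below as the lower bound $\text{LB}^{G, \left\{v_0\right\}}$; since BFS preserves graph distances, it suffices to record the distance of every vertex from the root.
\begin{verbatim}
# BFS distances from v0 and the resulting lower bound sum_v beta^{d_G(v,v0)}.
# Only vertices in the connected component of v0 contribute.  Illustrative sketch.
from collections import deque

def bfs_distances(adj, v0):
    dist = {v0: 0}
    queue = deque([v0])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:        # u has not been reached yet
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist                      # graph distances = BFS-tree distances

def lower_bound(adj, v0, beta):
    return sum(beta ** d for d in bfs_distances(adj, v0).values())

# Example: a 4-cycle rooted at vertex 0.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(lower_bound(cycle4, 0, beta=0.3))   # 1 + 2*0.3 + 0.3**2 = 1.69
\end{verbatim}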
Also note that iff $G$ is a tree to start with then, BFS spanning tree is just itself. \subsection{Starting with Only One Infected Vertex} \label{SubSec:Results-One} Our first result gives a lower bound of the expected total number of vertices ever infected starting with exactly one infected vertex. \begin{Theorem} \label{Thm:LB} Let $G$ be an arbitrary finite graph and $v_{0} \in V$ be a fixed vertex of it. Let $T$ be a spanning tree of the connected component of $G$ containing the vertex $v_0$ and rooted at $v_0$. Let $Y^{T, \{v_0\}}$ be the total number of vertices ever infected when the epidemic runs only on $T$ and starting with exactly one infection at $v_0$. Then \begin{equation} {\bf E}\left[Y^{T, \left\{v_0\right\}}\right] \leq {\bf E}\left[Y^{G, \left\{v_0\right\}}\right] \,\,\, \text{for all} \,\,\, 0 < \beta < 1 \,. \label{Equ:LB-Tree} \end{equation} Moreover, if $\mbox{${\mathcal T}$}$ is a BFS spanning tree of the connected component of $v_0$ rooted at $v_0$, then \begin{equation} {\bf E}\left[Y^{T, \left\{v_0\right\}}\right] \leq {\bf E}\left[Y^{{\mathcal T}, \left\{v_0\right\}}\right] \leq {\bf E}\left[Y^{G, \left\{v_0\right\}}\right] \,\,\, \text{for all} \,\,\, 0 < \beta < 1 \,. \label{Equ:BFS-Tree} \end{equation} \end{Theorem} \begin{proof} Suppose $G = \left(V, E\right)$ where $V$ is the set of vertices and $E$ is the set of edges and let $H = \left(V, E'\right)$ where $E' \subseteq E$. So $H \subseteq G$, is a spanning sub-graph of $G$. Note that $v_0$ is a vertex in both $H$ and $G$. Let $\left(X_e\right)_{e \in E}$ be i.i.d. $\mbox{Bernoulli}\left(\beta\right)$ random variables indexed by the edges of the graph $G$. We consider the random graphs $G_{\beta} := \left(V_{\beta}, E_{\beta}\right)$ and $H_{\beta} := \left( V_{\beta}, E'_{\beta}\right)$ with the same vertex set $V_{\beta} = V$ and the random sets of edges $E_{\beta} := \left\{e \in E \,\big\vert\, X_e = 1 \,\right\}$ and $E'_{\beta} := \left\{e \in E' \,\big\vert\, X_e = 1 \,\right\}$. Note that $H_{\beta}$ is a spanning sub-graph of $G_{\beta}$. Let $C^{G, v_0}$ and $C^{H, v_0}$ be the connected components of the vertex $v_0$ in $G_{\beta}$ and $H_{\beta}$ respectively. From definition $C^{H, v_0} \subseteq C^{G, v_0}$. Now it follows from the definition of the infection spread model that $\left\vert C^{G, v_0} \right\vert \ed Y^{G, \left\{v_0\right\}}$ and $\left\vert C^{H, v_0} \right\vert \ed Y^{H, \left\{v_0\right\}}$. So to prove equation (\ref{Equ:LB-Tree}) observe that \[ {\bf E}\left[Y^{T, \left\{v_0\right\}}\right] = {\bf E}\left[ \left\vert C^{T, \left\{v_0\right\}} \right\vert \right] \leq {\bf E}\left[ \left\vert C^{G, \left\{v_0\right\}} \right\vert \right] = {\bf E}\left[Y^{G, \left\{v_0\right\}}\right] \,. \] For the second part, we note that if $T$ is a spanning tree of $G$ with root $v_0$, then $d_G\left(v, v_0\right) \leq $ $d_T\left(v, v_0\right)$ for all $v \in V$, where $d_G$ and $d_T$ are the graph distance functions on $G$ and $T$ respectively. Moreover, the BFS algorithm preserves the distances, so if $\mbox{${\mathcal T}$}$ is a BFS spanning tree with root $\left\{v_0\right\}$ then we must have \[d_G\left(v, v_0\right) = d_{{\mathcal T}}\left(v, v_0\right)\] for all $v \in V$. Thus $d_{{\mathcal T}}\left(v, v_0\right) \leq d_T\left(v, v_0\right)$ for all $v \in V$. Now from the model description, it follows that for any spanning tree $T$ with root $v_0$ we have \[ {\bf E}\left[Y^{T, \left\{v_0\right\}}\right] = \sum_{v \in V} \beta^{d_T\left(v, v_0\right)} \,. 
\] So we conclude that \[ {\bf E}\left[Y^{T, \left\{v_0\right\}}\right] = \sum_{v \in V} \beta^{d_T\left(v, v_0\right)} \leq \sum_{v \in V} \beta^{d_{{\mathcal T}}\left(v, v_0\right)} = {\bf E}\left[Y^{{\mathcal T}, \left\{v_0\right\}}\right] \,, \] as $0 < \beta < 1$. \end{proof} Let $\text{LB}^{G, \left\{v_0\right\}} := {\bf E}\left[Y^{{\mathcal T}, \left\{v_0\right\}}\right]$ be the lower bound obtained through BFS algorithm for a BFS spanning tree $\mbox{${\mathcal T}$}$ of $G$, rooted at $v_0$. Then from the proof of Theorem \ref{Thm:LB} we get that \begin{equation} \text{LB}^{G, \left\{v_0\right\}} = \sum_{v \in V} \beta^{d_{G}\left(v, v_0\right)} \,, \label{Equ:LB-Universal} \end{equation} which is free of the choice of the BFS spanning tree. Later, we will see that, this helps us to generalize the lower bound for epidemic starting with more than one infected vertex. We also note that $\text{LB}^{G, \left\{v_0\right\}}$ can be easily computed using the breadth-first search algorithm described earlier. Our next result shows that if we have a ``large'' finite graph $G$ on $n$ vertices and the epidemic starts with exactly one infected vertex $v_0$, such that any cycle containing $v_0$ is ``relatively large'', that is of order $\Omega\left(\log n\right)$, then the lower bound $\text{LB}^{G, \left\{v_0\right\}}$ given above, is asymptotically same as the exact quantity ${\bf E}\left[ Y^{G, \left\{ v_0 \right\}} \right]$. To state the result rigorously, we use the following graph theoretic notations. Given a graph $G$, a fixed vertex $v_0$ of $G$ and $d \geq 1$, let $V_d\left(G\right)$ be the set of vertices of $G$ which are at a \emph{graph distance} at most $d$ from $v_0$ in $G$. Let $N_d\left(G, v_0\right)$ be the induced sub-graph of $G$ on the vertices $V_d\left(G\right)$. \begin{Theorem} \label{Thm:Large-Local-Girth} Let $\left\{(G_n,v_0^n)\right\}_{n \geq 1}$ be a sequence of rooted connected graphs on $n$-vertices with roots $\left\{v_0^n\right\}_{n \geq 1}$ such that there exists a sequence $\alpha_n = \Omega\left(\log n\right)$ with $N_{\alpha_n}\left(G_n, v_0^n\right)$ is a tree for all $n \geq 1$. Then, there exists $0 < \beta_0 \leq 1$, such that for all $0 < \beta < \beta_{0}$ \begin{equation} \frac{{\bf E}\left[Y^{G_n, \left\{v_0^n\right\}}\right]}{\text{LB}^{G_n, \left\{v_0^n\right\}}} \longrightarrow 1 \,\,\, \text{as} \,\,\, n \rightarrow \infty \,. \label{Equ:LB-Exact-for-Large-Local-Girth} \end{equation} \end{Theorem} \begin{proof} Let $\mbox{${\mathcal T}$}_n$ be a BFS spanning tree rooted at $v_0^n$ of the graph $G_n$ and as defined earlier and let $\text{LB}^{G_n, \left\{v_0^n\right\}} = {\bf E}\left[Y^{{\mathcal T}_n, \left\{v_0^n\right\}}\right]$. Then \begin{eqnarray} \text{LB}^{G_n, \left\{v_0^n\right\}} & \leq & {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}}\right] \nonumber \\ & \leq & {\bf E}\left[Y^{N_{\alpha_n}\left(G_n, v_0^n\right), \left\{v_0^n\right\}}\right] + {\bf E}\left[Y^{N_{\alpha_n}\left(G_n, v_0^n\right), \left\{v_0^n\right\}}\right] \times \beta^{\alpha_n} \times n \nonumber \\ & \leq & \text{LB}^{G_n, \left\{v_0^n\right\}} + \text{LB}^{G_n, \left\{v_0^n\right\}} \times \beta^{\alpha_n} \times n \,,\label{Equ:Bound-1} \end{eqnarray} Note that the first term of the second inequality in \eqref{Equ:Bound-1} is the expected number of infected nodes within an $\alpha_{n}$ neighbourhood of the initial infective $v_0^n$. The second term there is an upper bound of the expected number of nodes which may become infected by these neighbourhood infectives. 
Consider the nodes which are on the boundary of $\alpha_{n}$ neighbourhood of $v_0^n$, that is the infected vertices in $G_{n}$ after $\alpha_{n}$ units of time starting with one infected at vertex $v_{0}^{n}$. Since we have assumed that $N_{\alpha_n}\left(G_n, v_0^n\right)$ is a tree, so these nodes have probability $\beta^{\alpha_{n}}$ to get infected after $\alpha_n$ units of time. But the number of nodes outside the neighbourhood is bounded by $n-{\bf E}\left[Y^{N_{\alpha_n}\left(G_n, v_0^n\right), \left\{v_0^n\right\}}\right] \leq n$. Therefore an upper bound for the expected number of nodes which may become infected by these neighbourhood infectives is \[{\bf E}\left[Y^{N_{\alpha_n}\left(G_n, v_0^n\right), \left\{v_0^n\right\}}\right] \times \beta^{\alpha_n} \times n \,.\] Also the last inequality follows from the fact that $N_{\alpha_n}\left(G_n, v_0^n\right)$ is a tree and hence is a subtree of $\mbox{${\mathcal T}$}_n$. This proves (\ref{Equ:LB-Exact-for-Large-Local-Girth}) since by assumption $\alpha_n = \Omega\left(\log n\right)$. \end{proof} Although the assumption in the above theorem, may seem to be very restrictive, it is satisfied in many examples including the $n$-cycle (see Subsection \ref{SubSec:Cycle}). The method of the proof on the other hand, helps us generalize the result for a large class of graphs including certain random graphs. Following Aldous and Steele \cite{AlSt04}, we say a sequence of rooted random or deterministic graphs $\left\{(G_n,v_0^n)\right\}_{n \geq 1}$ with roots $\left\{v_0^n\right\}_{n \geq 1}$ converges to a random or deterministic graph $\left(G_{\infty}, v_0^{\infty}\right)$ in the sense of \emph{local weak convergence (l.w.c)} and write $\left(G_n, v_0^n\right) \xrightarrow{l.w.c.} \left(G_{\infty}, v_0^{\infty}\right)$ if for any $d \geq 1$, \begin{equation} {\bf P}\left( N_d\left(G_n, v_0^n\right) \cong N_d\left(G_{\infty}, v_0^{\infty}\right) \right) \longrightarrow 1 \,\,\, \text{as} \,\, n \rightarrow \infty \,. \label{Equ:LWC} \end{equation} Note that for a sequence deterministic graphs, (\ref{Equ:LWC}) means that the event occurs for ``large"' enough $n$. \begin{Theorem} \label{Thm:Limit-for-Bounded-Degree-Graphs} Let $\left\{(G_n,v_0^n)\right\}_{n \geq 1}$ be a sequence of rooted connected deterministic or random graphs with deterministic or randomly chosen roots $\left\{v_0^n\right\}_{n \geq 1}$. Suppose that for each $G_n$ the maximum degree of a vertex is bounded by a fixed constant, namely $\Delta$. Suppose there is a rooted deterministic or random tree $\mbox{${\mathcal T}$}T$ with root $\phi$ such that \begin{equation} \left(G_n, v_0^n\right) \xrightarrow{l.w.c.} \left(\mbox{${\mathcal T}$}T, \phi\right) \,\,\, \text{as} \,\,\, n \rightarrow \infty \,. \label{Equ:LWC-Tree} \end{equation} Let $\text{LB}^{G_n, \left\{v_0^n\right\}} := {\bf E}\left[Y^{\mbox{${\mathcal T}$}_n, \left\{v_0^n\right\}}\right]$ where $\mbox{${\mathcal T}$}_n$ is a BFS spanning tree rooted at $v_0^n$ of the graph $G_n$. \\ Then for $\beta < \frac{1}{\Delta}$ \begin{equation} \left( {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}}\right] - \text{LB}^{G_n, \left\{v_0^n\right\}} \right) \longrightarrow 0 \,\,\, \text{as} \,\,\, n \rightarrow \infty \,. \label{Equ:Limit-Exact} \end{equation} Moreover for $\beta < \frac{1}{\Delta}$ we have \begin{equation} \lim_{n \rightarrow \infty} \text{LB}^{G_n, \left\{v_0^n\right\}} = \lim_{n \rightarrow \infty} {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}}\right] = {\bf E}\left[Y^{\mathscr{T}, \phi}\right] \,. 
\label{Equ:Limit-Answer} \end{equation} \end{Theorem} \begin{proof} Let $\mbox{${\mathcal T}$}_n$ be a BFS spanning tree rooted at $v_0^n$ of the graph $G_n$ and also as defined earlier let $\, \,\text{LB}^{G_n, \left\{v_0^n\right\}} = {\bf E}\left[Y^{{\mathcal T}_n, \left\{v_0^n\right\}}\right]$. Fix $d \geq 1$ and $E_n$ be the event $\left[ N_d\left(G_n, v_0^n\right) \cong N_d\left(\mbox{${\mathcal T}$}T, \rho \right) \right]$. Therefore from Theorem \ref{Thm:LB} \begin{equation} \text{LB}^{G_n, \left\{v_0^n\right\}} \leq {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}}\right] = {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}} \mathbf{1}_{E_n} \right] + {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}} \mathbf{1}_{E_n^c} \right] \,. \label{Equ:Basic-UB-LB} \end{equation} Now under our assumption, the degree of any vertex of $G_n$ is bounded by $\Delta$ and $\beta < \frac{1}{\Delta}$, so using Theorem 2.3 of \cite{DrGaMa08} we have \begin{equation} {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}} \mathbf{1}_{E_n^c} \right] \leq \frac{1}{1 - \beta \Delta} {\bf P}\left(E_n^c\right) \,. \label{Equ:UB-Second-Part} \end{equation} Further note that if $E_n$ occurs, $N_d\left(G_n, v_0^n\right)$ is a tree rooted at $v_0^n$ and thus on $E_n$, $N_d\left(G_n, v_0^n\right)$ is a sub-tree of $\mbox{${\mathcal T}$}_n$. So \[ Y^{N_d\left({\mathcal T}_n, v_0^n\right), \left\{v_0^n\right\}} \mathbf{1}_{E_n} \leq Y^{{\mathcal T}_n, \left\{v_0^n\right\}} \mathbf{1}_{E_n} \,.\] Hence we have \begin{eqnarray} {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}} \mathbf{1}_{E_n} \right] & \leq & {\bf E}\left[ Y^{N_d\left({\mathcal T}_n, v_0^n\right), \left\{v_0^n\right\}} \mathbf{1}_{E_n} \right] + \beta^d {\bf E}\left[ Y^{G_n, \partial_d^* N_d\left(G_n, v_0^n\right)} \right] \nonumber \\ & \leq & {\bf E}\left[ Y^{N_d\left({\mathcal T}_n, v_0^n\right), \left\{v_0^n\right\}} \mathbf{1}_{E_n} \right] + \beta^d \frac{1}{1 - \beta \Delta} {\bf E}\left[ \left\vert \partial_d^* N_d\left(G_n, v_0^n\right) \right\vert \right] \nonumber \\ & \leq & \text{LB}^{G_n, \left\{v_0^n\right\}} + \beta^d \frac{1}{1 - \beta \Delta} {\bf E}\left[Y^{G_n, \left\{ v_0^n \right\}}\right] \nonumber \\ & \leq & \text{LB}^{G_n, \left\{v_0^n\right\}} + \beta^d \frac{1}{\left( 1 - \beta \Delta \right)^2} \label{Equ:UB-First-Part} \,, \end{eqnarray} where $\partial_d^* N_d\left(G_n, v_0^n \right)$ denotes the infected vertices in $G_n$ after $d$ units of time starting with one infected vertex $v_0^n$. For the first inequality, note that on the event $E_n$ we have $N_d\left(G_n, v_0^n\right)$ is a tree and thus on $E_n$ each vertex in $\partial_d^* N_d\left(G_n, v_0^n \right)$ has exactly $\beta^d$ probability to get infected after $d$ units of time starting with one infected vertex at $v_0^n$. In the second and the last inequalities, we use Theorem 2.3 of \cite{DrGaMa08}. So finally combining (\ref{Equ:Basic-UB-LB}), (\ref{Equ:UB-First-Part}) and (\ref{Equ:UB-Second-Part}) we get that for $\beta < \frac{1}{\Delta}$ and for any $d \geq 1$ we have \begin{equation} \left( {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}}\right] - \text{LB}^{G_n, \left\{v_0^n\right\}} \right) \leq \beta^d \frac{1}{\left( 1 - \beta \Delta \right)^2} + \frac{1}{1 - \beta \Delta} {\bf P}\left(E_n^c\right) \,. 
\label{Equ:Difference-Exact-to-LB} \end{equation} Now under assumption (\ref{Equ:LWC-Tree}), we have $\mathop{\lim}\limits_{n \rightarrow \infty} {\bf P}\left(E_n^c\right) = 0$, so we conclude that for any $d \geq 1$ \begin{equation} \limsup_{n \rightarrow \infty} \left( {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}}\right] - \text{LB}^{G_n, \left\{v_0^n\right\}} \right) \leq \beta^d \frac{1}{\left( 1 - \beta \Delta \right)^2} \,. \end{equation} This proves (\ref{Equ:Limit-Exact}) by letting $d \rightarrow \infty$, since $\beta < \frac{1}{\Delta}$. Now, to prove (\ref{Equ:Limit-Answer}), we first observe that from (\ref{Equ:LWC-Tree}) the degree of any vertex of $\mathscr{T}$ is also bounded by $\Delta$. So using Theorem 2.3 of \cite{DrGaMa08} we have for $\beta < \frac{1}{\Delta}$ \[ {\bf E}\left[ Y^{N_d\left(\mathscr{T}, \rho\right), \left\{ \rho \right\}} \right] \leq \frac{1}{1 - \beta \Delta} \,. \] Moreover, from the definition, $Y^{N_d\left(\mathscr{T}, \rho\right), \left\{ \rho \right\}} \uparrow Y^{\mathscr{T}, \left\{\rho\right\}}$ as $d \rightarrow \infty$. So by the Monotone Convergence Theorem we have \begin{equation} \lim_{d \rightarrow \infty} {\bf E}\left[ Y^{N_d\left(\mathscr{T}, \rho\right), \left\{ \rho \right\}} \right] = {\bf E}\left[ Y^{\mbox{${\mathcal T}$}T, \left\{\rho\right\}} \right] \leq \frac{1}{1 - \beta \Delta} < \infty \,. \label{Equ:UB-on-Limit} \end{equation} Thus for fixed $\varepsilon > 0$ we can find $d \geq 1$ such that \begin{equation} \left\vert {\bf E}\left[ Y^{\mbox{${\mathcal T}$}T, \left\{\rho\right\}} \right] - {\bf E}\left[ Y^{N_d\left(\mathscr{T}, \rho\right), \left\{ \rho \right\}} \right] \right\vert < \varepsilon \label{Equ:Limit-d-eps-1} \end{equation} and \begin{equation} \beta^d \frac{1}{\left(1-\beta \Delta\right)^2} < \varepsilon \,. \label{Equ:Limit-d-eps-2} \end{equation} The last inequality holds as $\beta < \frac{1}{\Delta} < 1$. Further, as the degree of any vertex of $\mbox{${\mathcal T}$}T$ is bounded by $\Delta$, arguing similarly to the derivation of equation (\ref{Equ:UB-Second-Part}) we conclude \begin{equation} {\bf E}\left[ Y^{N_d\left(\mathscr{T}, \rho\right), \left\{ \rho \right\}} \right] - {\bf E}\left[ Y^{N_d\left(\mathscr{T}, \rho\right), \left\{ \rho \right\}} \mathbf{1}_{E_n} \right] = {\bf E}\left[ Y^{N_d\left(\mathscr{T}, \rho\right), \left\{ \rho \right\}} \mathbf{1}_{E_n^c} \right] \leq \frac{1}{1 - \beta \Delta} {\bf P}\left(E_n^c\right) \,. \label{Equ:TTT-d-to-whole} \end{equation} Also, arguing similarly to the derivation of equation (\ref{Equ:Difference-Exact-to-LB}) we get \begin{eqnarray} \left\vert {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}} \right] - {\bf E}\left[ Y^{N_d\left(G_n, v_0^n\right), \left\{ v_0^n \right\}} \mathbf{1}_{E_n} \right] \right\vert & \leq & \beta^d \frac{1}{\left( 1 - \beta \Delta \right)^2} + \frac{1}{1 - \beta \Delta} {\bf P}\left(E_n^c\right) \nonumber \\ & \leq & \varepsilon + \frac{1}{1 - \beta \Delta} {\bf P}\left(E_n^c\right)\,, \label{Equ:Difference-Exact-to-Approx} \end{eqnarray} where the last inequality follows from (\ref{Equ:Limit-d-eps-2}).
Finally, \begin{eqnarray*} \left\vert {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}} \right] - {\bf E}\left[ Y^{\mbox{${\mathcal T}$}T, \left\{\rho\right\}} \right] \right\vert & \leq & \left\vert {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}} \right] - {\bf E}\left[ Y^{N_d\left(G_n, v_0^n\right), \left\{ v_0^n \right\}} \mathbf{1}_{E_n} \right] \right\vert \\ & & + \left\vert {\bf E}\left[ Y^{N_d\left(G_n, v_0^n\right), \left\{ v_0^n \right\}} \mathbf{1}_{E_n} \right] - {\bf E}\left[ Y^{N_d\left(\mathscr{T}, \rho\right), \left\{ \rho \right\}} \right] \right\vert \\ & & + \left\vert {\bf E}\left[ Y^{N_d\left(\mathscr{T}, \rho\right), \left\{ \rho \right\}} \right] - {\bf E}\left[ Y^{\mbox{${\mathcal T}$}T, \left\{\rho\right\}} \right] \right\vert \\ & \leq & 2 \Varepsilonilon + \frac{2}{1 - \beta \Delta} {\bf P}\left(E_n^c\right) \,, \end{eqnarray*} where the last inequality follows from the equations (\ref{Equ:Limit-d-eps-1}), (\ref{Equ:Limit-d-eps-2}), (\ref{Equ:TTT-d-to-whole}) and (\ref{Equ:Difference-Exact-to-Approx}) and also observing the fact that ${\bf E}\left[ Y^{N_d\left(G_n, v_0^n\right), \left\{ v_0^n \right\}} \mathbf{1}_{E_n} \right] = {\bf E}\left[ Y^{N_d\left(\mathscr{T}, \rho\right), \left\{ \rho \right\}} \mathbf{1}_{E_n} \right]$. Now under our assumption (\ref{Equ:LWC-Tree}) we have ${\bf P}\left(E_n\right) \longrightarrow 1$. So we conclude that \begin{equation} \lim_{n \rightarrow \infty} {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}}\right] = {\bf E}\left[Y^{\mbox{${\mathcal T}$}T, \left\{\rho\right\}}\right] \,. \label{Equ:Limit-Answer-1} \end{equation} Thus using (\ref{Equ:Limit-Exact}), it follows that \[ \lim_{n \rightarrow \infty} \text{LB}^{G_n, \left\{v_0^n\right\}} = \lim_{n \rightarrow \infty} {\bf E}\left[Y^{G_n, \left\{v_0^n\right\}}\right] = {\bf E}\left[Y^{\mathscr{T}, \left\{\rho\right\}}\right] \,. \] This completes the proof. \end{proof} An immediate and interesting application of the above theorem is the following result which gives an explicit formula for the limit of epidemic spread on a randomly selected $r$-regular graph when the infection starts from an randomly chosen vertex. \begin{Theorem} \label{Thm:r-Reg-Graphs} Suppose $G_n$ is a graph selected uniformly at random from the set of all $r$-regular graphs on $n$ vertices where we assume $n r$ is an even number. Let $v_0^n$ be an uniformly selected vertex of $G_n$. Then for $\beta < \frac{1}{r}$ \begin{equation} \lim_{n \rightarrow \infty} {\bf E}\left[ Y^{G_n, \left\{v_0^n\right\}} \right] = \frac{1 + \beta}{1 - \left(r-1\right) \beta} \,. \label{Equ:r-Reg-Ans} \end{equation} \end{Theorem} We note that in this case, the upper bound given in \cite{DrGaMa08} is $\frac{1}{1 - r \beta}$ when $\beta < \frac{1}{r}$ which is strictly bigger than the exact answer given in (\ref{Equ:r-Reg-Ans}). \begin{proof} It is known \cite{Jan, AlSt04} that if $G_n$ is a graph selected uniformly at random from the set of all $r$-regular graphs on $n$ vertices, where $n r$ is even and $v_0^n$ be a randomly selected vertex of $G_n$ then \begin{equation} \left(G_n, v_0^n\right) \xrightarrow{l.w.c.} \left(\mbox{${\mathbb T}$}_r, \phi\right) \,, \label{Equ:LWC-r-Reg-Graphs} \end{equation} where $\mbox{${\mathbb T}$}_r$ is the infinite $r$-regular tree with root say $\phi$. The result then follows from Theorem \ref{Thm:Limit-for-Bounded-Degree-Graphs} and equation (\ref{Equ:r-Reg-Tree-Ans}). 
\end{proof} \subsection{Starting with More than One Infected Vertex} \label{SubSec:Results-Many} Now suppose instead of one infected vertex, we start with $k$ infected vertices given by $I := \left\{v_{0,1}, v_{0,2}, \cdots, v_{0,k} \right\}$. The following theorem gives a lower bound similar to that of Theorem \ref{Thm:LB}. \begin{Theorem} \label{Thm:LB-Many} Let $G$ be an arbitrary finite graph and $I := \left\{v_{0,j}\right\}_{j=1}^k$ be a fixed set of $k$ vertices. Let $T$ be a spanning forest of the connected components of $G$ containing the vertices in $I$, with exactly $k$ trees which are rooted at the vertices in $I$. Then \begin{equation} {\bf E}\left[Y^{T, I}\right] \leq {\bf E}\left[Y^{G, I}\right] \,\,\, \text{for all} \,\,\, 0 < \beta < 1 \,. \label{Equ:LB-Forest} \end{equation} Moreover, if $\mathcal{T}$ is a \emph{breadth-first-search spanning forest} of the connected components of $G$ containing the vertices in $I$, with exactly $k$ trees which are rooted at the vertices in $I$, then \begin{equation} {\bf E}\left[Y^{T, I}\right] \leq {\bf E}\left[Y^{{\mathcal T}, I}\right] \leq {\bf E}\left[Y^{G, I}\right] \,\,\, \text{for all} \,\,\, 0 < \beta < 1 \,. \label{Equ:BFS-Forest} \end{equation} \end{Theorem} Given a finite labeled graph $G$ and a fixed set of vertices $I = \left\{v_{0,j}\right\}_{j=1}^k$ of it, by a \emph{breadth-first-search spanning forest} of the connected components of $G$ containing the vertices in $I$, with exactly $k$ trees which are rooted at the vertices in $I$, we mean a spanning forest of $G$ with exactly $k$ connected components, rooted at the vertices $\left\{v_{0,1}, v_{0,2}, \cdots, v_{0,k}\right\}$, that is obtained through the \emph{breadth-first-search} algorithm, starting at some vertex $v \in I$ and assuming that all the vertices $\left\{v_{0,1}, v_{0,2}, \cdots, v_{0,k}\right\}$ are at the same level. Alternately, we can consider a new graph $G^*$ which is the same as $G$ except that it has one ``artificial'' vertex, say $v^*$, which is connected to the vertices $v_{0,1}, v_{0,2}, \cdots, v_{0,k}$ through $k$ ``artificial'' edges, and we perform the BFS algorithm on $G^*$ starting with the vertex $v^*$, to obtain a BFS spanning tree, say $\mathcal{T}^*$, of $G^*$ rooted at $v^*$. Then a \emph{breadth-first-search spanning forest} of $G$ with exactly $k$ trees which are rooted at the vertices $\left\{v_{0,1}, v_{0,2}, \cdots, v_{0,k}\right\}$ is given by the forest $\mathcal{T}^* \setminus \left\{ v^* \right\}$. This alternate description is quite useful in practice. Note that if $\left\{ \mathcal{T}_i \right\}_{1 \leq i \leq k}$ are the $k$ connected components, rooted respectively at $v_{0,1}, v_{0,2}, \cdots, v_{0,k}$, of $\mathcal{T}$, a breadth-first-search spanning forest of the connected components of $G$ containing the vertices in $I$, then the following identity holds for every $\beta \in \left(0,1\right):$ \begin{equation} {\bf E}\left[Y^{{\mathcal T}, I}\right] = \sum_{i=1}^k {\bf E}\left[Y^{{\mathcal T}_i, I}\right] = \frac{{\bf E}\left[Y^{{\mathcal T}^*, \left\{v^*\right\}}\right] - 1}{\beta} \,. \label{Equ:One-to-Many-Fundamental} \end{equation} Using the above identity, we can now generalize all the results of the previous section to epidemic spread starting with more than one infected vertex.
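Before doing so, we note that the artificial-vertex description above also suggests a direct way to evaluate the lower bound ${\bf E}\left[Y^{{\mathcal T}, I}\right]$ on a computer. The short Python sketch below is our own illustration (the function and variable names are not taken from any existing package): it performs a multi-source breadth-first search from the roots in $I$, which is equivalent to running the BFS algorithm on $G^*$ from $v^*$ and then deleting $v^*$, and sums $\beta^{\mathrm{depth}(v)}$ over all reached vertices. Under the independent per-edge transmission description of the epidemic, a vertex at depth $d$ of a tree in the forest is ever infected with probability $\beta^d$, so this sum is exactly ${\bf E}\left[Y^{{\mathcal T}, I}\right]$.
\begin{verbatim}
from collections import deque

def lb_epidemic(adj, roots, beta):
    """Lower bound E[Y^{T,I}]: multi-source BFS from the roots (equivalently,
    BFS on G* started at the artificial vertex v*), followed by summing
    beta**depth over all vertices reached.  `adj` maps a vertex to an iterable
    of its neighbours; `roots` is the set I of initially infected vertices."""
    depth = {r: 0 for r in roots}        # all roots sit at the same BFS level
    queue = deque(roots)
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in depth:           # first visit = an edge of the BFS forest
                depth[w] = depth[v] + 1
                queue.append(w)
    # on a tree, a vertex at depth d is ever infected with probability beta**d
    return sum(beta ** d for d in depth.values())

# sanity check on the cycle C_n with one root (cf. the cycle example below):
# LB = 1 + 2*(beta + beta**2 + ... + beta**((n-1)/2))
n, beta = 11, 0.3
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(lb_epidemic(cycle, [0], beta))
print(1 + 2 * sum(beta ** j for j in range(1, (n - 1) // 2 + 1)))
\end{verbatim}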
We write $\text{LB}^{G, I}$ for ${\bf E}\left[Y^{{\mathcal T}, I}\right]$, which is the lower bound of ${\bf E}\left[Y^{G, I}\right]$ when starting with the $k$ infected vertices given by $I$. Observe that from equation (\ref{Equ:One-to-Many-Fundamental}) we can write \begin{equation} \text{LB}^{G, I} = \sum_{i=1}^k {\bf E}\left[Y^{{\mathcal T}_i, I}\right]\,, \label{Equ:One-to-Many-Representation} \end{equation} where $\mathcal{T} = \mathop{\cup}\limits_{i=1}^k \mathcal{T}_i$ is as above. It is worth noting here that the lower bound $\text{LB}^{G,I}$ does not depend on the choice of $\mathcal{T}$, but the representation given in equation (\ref{Equ:One-to-Many-Representation}) uses a specific choice of $\mathcal{T}$. \begin{Theorem} \label{Thm:Large-Local-Girth-Many} Let $\left\{(G_n, I_n)\right\}_{n \geq 1}$ be a sequence of graphs where each $G_n$ has $k$ roots given by $I_n := \left\{v_{0,1}^n, v_{0,2}^n, \cdots, v_{0,k}^n \right\}$, such that there exists a sequence $\alpha_n = \Omega\left(\log n\right)$ for which $N_{\alpha_n}\left(G_n, I_n\right) := \mathop{\cup}\limits_{j=1}^k N_{\alpha_n}\left(G_n, v_{0,j}^n\right)$ is a forest with $k$ components. Then there exists $0 < \beta_0 \leq 1$, such that for all $0 < \beta < \beta_{0}$ \begin{equation} \frac{{\bf E}\left[Y^{G_n, I_n} \right]}{\text{LB}^{G_n,I_n}} \longrightarrow 1 \,\,\, \text{as} \,\,\, n \rightarrow \infty \,. \label{Equ:LB-Exact-for-Large-Local-Girth-Many} \end{equation} \end{Theorem} The proof of this result is similar to that of Theorem \ref{Thm:Large-Local-Girth} and follows from the identity (\ref{Equ:One-to-Many-Fundamental}). The details are thus omitted. Our next result is parallel to Theorem \ref{Thm:Limit-for-Bounded-Degree-Graphs}, and it needs a generalization of the concept of local weak convergence which was introduced by W\"{a}stlund \cite{Wa11}. We will say that a sequence of random or deterministic graphs $\left\{G_n\right\}_{n \geq 1}$ with $k$ roots given by $I_n := \left\{v_{0,1}^n, v_{0,2}^n, \cdots, v_{0,k}^n\right\}$, $n \geq 1$, converges to a random or deterministic graph $G_{\infty}$ with $k$ roots, say $I_{\infty} := \left\{v_{0,1}^{\infty}, v_{0,2}^{\infty}, \cdots, v_{0,k}^{\infty}\right\}$, in the sense of \emph{local weak convergence (l.w.c)}, and write $\left(G_n, I_n\right) \xrightarrow{l.w.c.} \left(G_{\infty}, I_{\infty}\right)$, if for any $d \geq 1$ \begin{equation} {\bf P}\left( N_d\left(G_n, v_{0,j}^n\right) \cong N_d\left(G_{\infty}, v_{0,j}^{\infty}\right) \,\,\, \text{for all} \,\,\, 1 \leq j \leq k\right) \longrightarrow 1 \,\,\, \text{as} \,\, n \rightarrow \infty \,. \label{Equ:LWC-Many} \end{equation} Note that for a sequence of deterministic graphs, (\ref{Equ:LWC-Many}) means that the event occurs for ``large'' enough $n$. \begin{Theorem} \label{Thm:Limit-for-Bounded-Degree-Graphs-Many} Let $\left(G_n\right)_{n \geq 1}$ be a sequence of deterministic or random graphs. Suppose each $G_n$ has $k$ deterministic or randomly chosen roots given by $I_n := \left\{v_{0,1}^n, v_{0,2}^n, \cdots, v_{0,k}^n \right\}$, and the maximum degree of each $G_n$ is bounded by $\Delta$. Suppose $\mathscr{T} := \mathop{\cup}\limits_{j=1}^k \mathscr{T}_j$ is a forest with $k$ rooted trees with roots $I_{\infty} := \left\{ \phi_1, \phi_2, \cdots, \phi_k \right\}$. We assume that \begin{equation} \left(G_n, I_n\right) \xrightarrow{l.w.c.} \left(\mathscr{T}, I_{\infty}\right) \,\,\, \text{as} \,\,\, n \rightarrow \infty \,.
\label{Equ:LWC-Tree-Many} \end{equation} Then for $\beta < \frac{1}{\Delta}$ \begin{equation} \left( {\bf E}\left[Y^{G_n, I_n}\right] - \text{LB}^{G_n,I_n} \right) \longrightarrow 0 \,, \label{Equ:Limit-Exact-Many} \end{equation} as $n \rightarrow \infty$. Moreover \begin{equation} \lim_{n \rightarrow \infty} \text{LB}^{G_n,I_n} = \lim_{n \rightarrow \infty} {\bf E}\left[Y^{G_n, I_n}\right] = {\bf E}\left[Y^{\mathscr{T}, I_{\infty}}\right] = \sum_{j=1}^k {\bf E}\left[ Y^{\mathscr{T}_j, \left\{\phi_j\right\}} \right] \,. \label{Equ:Limit-Answer-Many} \end{equation} \end{Theorem} \begin{proof} For each $n \geq 1$, as done above, we define a new rooted graph $G_n^*$ with an artificial vertex $v_n^*$ which is connected to the $k$ roots in $I_n$ of $G_n$ through $k$ artificial edges. Also we consider $\mathscr{T}^*$ defined similarly, with an artificial root $\phi^*$ connected to $\left\{ \phi_1, \phi_2, \cdots, \phi_k \right\}$. Then our assumption of local weak convergence (\ref{Equ:LWC-Tree-Many}) is equivalent to \begin{equation} \left( G_n^*, v_n^* \right) \xrightarrow{l.w.c.} \left(\mathscr{T}^*, \phi^*\right) \,. \end{equation} This together with the relation (\ref{Equ:One-to-Many-Fundamental}) and Theorem \ref{Thm:Limit-for-Bounded-Degree-Graphs} completes the proof. \end{proof} It is worth noting that in case $\left\{\mathscr{T}_j\right\}_{1 \leq j \leq k}$ are i.i.d. (if they are random) or isomorphic (if they are deterministic), then equation (\ref{Equ:Limit-Answer-Many}) can be reformulated as \begin{equation} \lim_{n \rightarrow \infty} \text{LB}^{G_n,I_n} = \lim_{n \rightarrow \infty} {\bf E}\left[Y^{G_n, I_n}\right] = {\bf E}\left[Y^{\mathscr{T}, I_{\infty}}\right] = k \, {\bf E}\left[ Y^{\mathscr{T}_1, \left\{\phi_1\right\}} \right] \,. \label{Equ:Limit-Answer-Many-2} \end{equation} As in the case of starting with one infected vertex, the following theorem is an immediate application of the above results. \begin{Theorem} \label{Thm:r-Reg-Graphs-Many} Suppose $G_n$ is a graph selected uniformly at random from the set of all $r$-regular graphs on $n$ vertices, where we assume $n r$ is an even number. Let $I_n := \left\{v_{0,1}^n, v_{0,2}^n, \cdots, v_{0,k}^n \right\}$ be $k$ uniformly and independently selected vertices of $G_n$. Then for $\beta < \frac{1}{r}$ \begin{equation} \lim_{n \rightarrow \infty} {\bf E}\left[ Y^{G_n, I_n} \right] = k \frac{1 + \beta}{1 - \left(r-1\right) \beta} \,. \label{Equ:r-Reg-Ans-Many} \end{equation} \end{Theorem} \begin{proof} Since the vertices in $I_n$ are selected uniformly at random, it follows from \cite{AlSt04} that \begin{equation} \left(G_n, I_n\right) \xrightarrow{l.w.c.} \left( \mathscr{T}_r, I_{\infty}\right) \,, \label{Equ:LWC-r-Reg-Graphs-Many} \end{equation} where $I_{\infty} := \left\{ \phi_1, \phi_2, \cdots, \phi_k \right\}$ and $\mathscr{T}_r$ is a forest consisting of $k$ infinite $r$-regular trees with roots in $I_{\infty}$. The result then follows from Theorems \ref{Thm:Limit-for-Bounded-Degree-Graphs-Many} and \ref{Thm:r-Reg-Graphs}. \end{proof} Once again we note that in this case, the upper bound $\frac{k}{1 - r \beta}$ given in \cite{DrGaMa08} for $\beta < \frac{1}{r}$ is strictly bigger than the exact answer given in (\ref{Equ:r-Reg-Ans-Many}), and the gap increases with $k$, the initial number of infections.
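The limit in Theorem \ref{Thm:r-Reg-Graphs-Many} is also easy to probe numerically. The following Python sketch is our own illustration: it assumes the \texttt{networkx} routine \texttt{random\_regular\_graph} for sampling a uniform $r$-regular graph, and it uses the independent per-edge transmission description of the epidemic, under which $Y^{G_n, I_n}$ is the number of vertices reachable from the roots through edges that transmit. It compares a Monte Carlo estimate of ${\bf E}\left[Y^{G_n, I_n}\right]$ with the limit in (\ref{Equ:r-Reg-Ans-Many}) and with the upper bound $\frac{k}{1 - r\beta}$ of \cite{DrGaMa08}.
\begin{verbatim}
import random
import networkx as nx   # used only to sample a uniform r-regular graph

def simulate_Y(G, roots, beta, rng):
    """One epidemic realisation: every edge transmits independently with
    probability beta; returns Y, the number of vertices ever infected."""
    infected, frontier = set(roots), list(roots)
    while frontier:
        v = frontier.pop()
        for w in G[v]:
            if w not in infected and rng.random() < beta:
                infected.add(w)
                frontier.append(w)
    return len(infected)

r, n, beta, k, trials = 3, 2000, 0.2, 2, 4000
rng = random.Random(0)
G = nx.random_regular_graph(r, n, seed=0)
est = sum(simulate_Y(G, rng.sample(range(n), k), beta, rng)
          for _ in range(trials)) / trials
print(est)                                    # Monte Carlo estimate of E[Y]
print(k * (1 + beta) / (1 - (r - 1) * beta))  # limit in the theorem: 4.0
print(k / (1 - r * beta))                     # upper bound of DrGaMa08: 5.0
\end{verbatim}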
\section{Examples} \label{Sec:Example} \subsection{Tree} \label{SubSec:Tree} If $G$ is a tree and the epidemic starts with only one infected vertex, say $\phi$, which we call the root, then from the construction of the lower bound it is clear that $\text{LB}^{G, \left\{\phi\right\}} = {\bf E}\left[Y^{G, \left\{\phi\right\}}\right]$. In certain cases one can find an explicit formula for this quantity. Two such examples are discussed below. \paragraph{Regular Tree} Consider a rooted $(r-1)$-ary tree ($r \geq 2$) with height $m$; denote it by $T\left(r,m\right)$. In $T\left(r,m\right)$ every internal vertex except the root $\phi$ has degree $r$. A vertex $v$ is said to be an internal vertex if it has a neighbor which is not on the unique path from $v$ to $\phi$. We assume that the degree of the root $\phi$ is $\left(r-1\right)$. Let $\mu_{m}:={\bf E}[Y^{T\left(r,m\right), \left\{\phi\right\}}]$. Note that the total number of vertices in $T\left(r, m\right)$ is $\frac{\left(r-1\right)^{m+1}-1}{r-2}$. Now, to calculate the exact value of $\mu_m$ we note that \begin{equation} \mu_{m}=1 + \left( r - 1 \right) \beta \mu_{m-1} \,, \end{equation} with $\mu_0 = 1$, which gives the formula \begin{equation} \mu_{m} = \frac{\left[\left( r - 1\right) \beta\right]^{m+1} - 1}{\left( r - 1\right) \beta - 1} \,. \label{r-tree} \end{equation} As $T\left(r,m\right)$ is a tree, the lower bound is exact; that is, $\text{LB}^{T\left(r,m\right), \left\{\phi\right\}} = \mu_m$. Now the upper bound from \cite{DrGaMa08} is $\frac{1}{1 - r \beta}$ for $\beta <\frac{1}{r}$. If $\beta < \frac{1}{r}$ then by Theorem \ref{Thm:Limit-for-Bounded-Degree-Graphs} we get \begin{equation} {\bf E}\left[ Y^{T\left(r\right), \left\{\phi\right\}} \right] = \lim_{m \rightarrow \infty} \mu_m = \frac{1}{1 - \left( r - 1 \right) \beta} \,, \label{limmu} \end{equation} where $T\left(r\right)$ is the rooted infinite $r$-regular tree, in which each vertex except the root $\phi$ has degree $r$ and the degree of the root is $\left(r - 1 \right)$. We observe a gap between the lower bound, which in this case agrees with $\mu_m$, and the upper bound obtained in \cite{DrGaMa08}. Now let $\mathbb{T}_r$ be the infinite $r$-regular tree where each vertex, including the root, has degree $r$. Such a tree can be viewed as a disjoint union of $r$ rooted infinite $r$-regular trees whose roots are joined to the root, say $\phi$, of $\mathbb{T}_r$. Thus from (\ref{limmu}) we get that for $\beta < \frac{1}{r}$ \begin{equation} \text{LB}^{{\mathbb T}_r, \left\{\phi\right\}} = {\bf E}\left[Y^{{\mathbb T}_r, \left\{\phi\right\}} \right] = 1 + \frac{r \beta}{1 - \left( r - 1 \right) \beta} = \frac{1 + \beta}{1 - \left( r - 1 \right) \beta} \,. \label{Equ:r-Reg-Tree-Ans} \end{equation} \paragraph{Galton-Watson Tree} Consider a Galton-Watson branching process starting with one individual. Let the mean of the offspring distribution be $c > 0$. We denote the random tree generated by this process as $\text{GW}\left(c\right)$ with root $\phi$. Once again, as discussed above, since $\text{GW}\left(c\right)$ is a tree, $\text{LB}^{\text{GW}\left(c\right), \left\{\phi\right\}} = {\bf E}\left[Y^{\text{GW}\left(c\right), \left\{\phi\right\}}\right]$. Now in this case, the epidemic process starting with only one infection at $\phi$ is itself a Galton-Watson branching process starting with one individual as the root, with the mean of the new progeny distribution being $\beta c$.
So, in particular, if $\beta < \frac{1}{c}$ then by standard branching process theory ${\bf E}\left[Y^{\text{GW}\left(c\right), \left\{\phi\right\}}\right] < \infty$ and equals $\frac{1}{1 - \beta c}$ \cite{AthNey04}. \subsection{Cycle} \label{SubSec:Cycle} A cycle graph is a graph that consists of a single cycle. We denote the cycle with $n$ vertices by $C_{n}$. For simplicity we assume $n$ is odd, and then from the BFS algorithm it is immediate that, starting with one infected individual, say at $v_0^n$, we have \begin{equation} \text{LB}^{C_n, \left\{v_0^n\right\}} = 1 + 2 \left( \beta + \beta^2 + \cdots +\beta^{\frac{n-1}{2}}\right) \,, \end{equation} which converges to $\frac{1+\beta}{1-\beta}$ as $n \rightarrow \infty$ for any $0 < \beta < 1$. Now it is clear from the definition that \begin{equation} \left(C_n, v_0^n\right) \xrightarrow{l.w.c.} \left(\mathbb{Z}, 0\right) \,. \end{equation} Thus using Theorem \ref{Thm:Limit-for-Bounded-Degree-Graphs} we conclude that if $\beta < \frac{1}{2}$ then \begin{equation} \lim_{n \rightarrow \infty} \text{LB}^{C_n, \left\{v_0^n\right\}} = \lim_{n \rightarrow \infty} {\bf E}\left[Y^{C_n, \left\{v_0^n\right\}}\right] = \frac{1+\beta}{1-\beta} \,. \label{Equ:Limit-Answer-Cycle} \end{equation} In fact this holds for any $0 < \beta < 1$. This is because for a cycle graph, the assumption in Theorem \ref{Thm:Large-Local-Girth} holds for $\alpha_n = n/3$. Thus from the proof of Theorem \ref{Thm:Large-Local-Girth} we conclude that equation (\ref{Equ:Limit-Answer-Cycle}) holds for any $0 < \beta < 1$. Now if the epidemic starts with $k$ initial infected vertices given by $I_n := \left\{ v_{0,1}^n, v_{0,2}^n, \cdots, v_{0,k}^n\right\}$ which are uniformly distributed, then it is easy to see that \begin{equation} \left(C_n, I_n\right) \xrightarrow{l.w.c.} \left(\mathbb{Z}_j, 0\right)_{1 \leq j \leq k} \,, \label{Equ:Cycle-LWC-Many} \end{equation} where $\mathbb{Z}_j$ is just a copy of $\mathbb{Z}$. Then by Theorem \ref{Thm:Limit-for-Bounded-Degree-Graphs-Many} we conclude that for $0 < \beta < \frac{1}{2}$, \begin{equation} \lim_{n \rightarrow \infty} \text{LB}^{C_n,I_n} = \lim_{n \rightarrow \infty} {\bf E}\left[Y^{C_n, I_n}\right] = k \, \frac{1+\beta}{1-\beta} \,. \label{Equ:Limit-Answer-Many-Cycle} \end{equation} As earlier, we can use Theorem \ref{Thm:Large-Local-Girth-Many} with $\alpha_n = O\left(n\right)$ to conclude that (\ref{Equ:Limit-Answer-Many-Cycle}) holds for all $0 < \beta < 1$. \subsection{Generalized Cycle} \label{SubSec:GCycle} Suppose in a cycle graph we choose $2 m$ vertices uniformly at random without replacement, pair them up, and connect each pair by an edge, where $m \geq 1$ is fixed. We call this graph a Generalized Cycle and denote it by $\text{GC}\left(n,m\right)$. Now consider the epidemic model on this graph with one initial infected site $v_0^n$. For large enough $n$, the probability of having at least one of the $m$ pairs inside a neighborhood of $v_0^n$ of radius $r$ is approximately \[ 1-\left(1-\frac{2r (2r+1)}{n(n-1)}\right)^{m} \,, \] which tends to zero as $n \rightarrow \infty$. Therefore, a fixed neighborhood of the root is a tree with high probability; in fact, it is isomorphic to a neighborhood of the integer line.
Hence by Theorem \ref{Thm:Limit-for-Bounded-Degree-Graphs} it follows that for $\beta < \frac{1}{3}$ \begin{equation} \lim_{n \rightarrow \infty} \text{LB}^{\text{GC}\left(n,m\right), \left\{v_0^n\right\}} = \lim_{n \rightarrow \infty} {\bf E}\left[Y^{\text{GC}\left(n,m\right), \left\{v_0^n\right\}}\right] = \frac{1+\beta}{1-\beta} \,. \label{Equ:Limit-Answer-GCycle} \end{equation} Similarly, if we start with $k$ initial infected sites, say $I_n := \left\{ v_{0,j}^n\right\}_{j=1}^k$, which are chosen uniformly at random, then it is easy to see that \begin{equation} \left(\text{GC}\left(n,m\right), I_n\right) \xrightarrow{l.w.c.} \left(\mathbb{Z}_j, 0\right)_{1 \leq j \leq k} \,, \label{Equ:GCycle-LWC-Many} \end{equation} where $\mathbb{Z}_j$ is just a copy of $\mathbb{Z}$. Thus by Theorem \ref{Thm:Limit-for-Bounded-Degree-Graphs-Many} we get \begin{equation} \lim_{n \rightarrow \infty} \text{LB}^{\text{GC}\left(n,m\right), I_n} = \lim_{n \rightarrow \infty} {\bf E}\left[Y^{\text{GC}\left(n,m\right), I_n}\right] = k \frac{1+\beta}{1-\beta} \,, \label{Equ:Limit-Answer-Many-GCycle} \end{equation} when $\beta < \frac{1}{3}$, because the maximum degree in $\text{GC}\left(n,m\right)$ is $3$. \subsection{Cube graph} The cube graph is the graph obtained from the vertices and edges of the $3$-dimensional unit cube. We denote it by $Q_3$. Suppose initially only the vertex $(0,0,0)$ is infected. Consider a BFS spanning tree $\mathcal{T}$ of $Q_3$ rooted at $(0,0,0)$. Since $Q_3$ has only $8$ vertices, $Y^{{\mathcal T}, \left\{(0,0,0)\right\}}$ takes values in $\left\{1,2,3,4,5,6,7,8\right\}$ and \begin{eqnarray*} \text{LB}^{{\mathcal T}, \left\{(0,0,0)\right\}} & = & {\bf E}\left[Y^{{\mathcal T}, \left\{(0,0,0)\right\}}\right] \\ & = & 1 + 3 \beta + 3\beta^{2} + \beta^{3} \\ & = & \left(1 + \beta \right)^3 \,. \end{eqnarray*} In general, the $d$-dimensional cube graph, say $Q_d$, is a $d$-regular graph with $n=2^d$ vertices. Following a calculation similar to the one above, one can show that for an epidemic starting at one vertex, the lower bound obtained in Theorem \ref{Thm:LB} for the expected total number of vertices ever infected is given by $(1+\beta)^{d}$. In this example, computation of the exact value of ${\bf E}\left[Y^{Q_d, \left\{ (0,\ldots,0) \right\}}\right]$ is difficult, but we note that there is a gap between the upper bound obtained in \cite{DrGaMa08}, namely $\frac{1}{1 - d \beta}$, which is valid only when $\beta < \frac{1}{d}$, and our lower bound. However, this is an example which does not fall under any of the theorems we discuss in this paper, and hence we are not sure whether the lower bound gives a good approximation. \section {Discussion} \label{Sec:Discussion} The goal of this study has been to get a better idea of the expected total number of vertices ever infected, with as few assumptions as possible on the underlying graph $G$. Our approach has been to find an appropriate lower bound for this expectation, although from a practical point of view approximation from above by an upper bound is the more conservative method. As shown in the examples given in Section \ref{Sec:Example}, the only known upper bounds, obtained in \cite{DrGaMa08}, often overestimate the exact quantity. Moreover, the upper bounds in \cite{DrGaMa08} hold only for ``small'' values of the parameter $\beta$.
For an arbitrary finite network, we have obtained a lower bound on the expectation of the number of vertices ever infected, valid for any value of the parameter $\beta$, which is computable through the breadth-first-search algorithm. Theorems \ref{Thm:Large-Local-Girth}, \ref{Thm:Limit-for-Bounded-Degree-Graphs}, \ref{Thm:Large-Local-Girth-Many} and \ref{Thm:Limit-for-Bounded-Degree-Graphs-Many} show that this lower bound is asymptotically exact for a large class of graphs when the value of $\beta$ is ``small''; this range always includes the values of $\beta$ for which the upper bounds in \cite{DrGaMa08} are defined. However, we would also like to mention here that even though the lower bound we present works for any infection parameter $0< \beta < 1$, if the underlying graph has many cycles, such as the complete graph $K_n$, then it does not necessarily give a good approximation. To see this, consider the complete graph $K_n$ and suppose that the epidemic starts at a fixed vertex $v_0$. Then the lower bound is $\text{LB}^{K_n, \left\{v_0\right\}} = 1 + \left(n-1\right) \beta$. Now, let $X_1$ be the number of newly infected vertices at time $t=1$. In this case it is easy to see that $X_1 \sim \text{Binomial}\left( n - 1,\beta \right)$. Let $u$ be one of the $n-1-X_1$ vertices which are not infected by time $t=1$. Since $K_n$ is the complete graph, the conditional probability, given $X_1$, that $u$ becomes infected at time $t=2$ is $1 - \left( 1 - \beta \right)^{X_1}$. Hence \begin{eqnarray*} {\bf E}\left[Y^{K_{n}, \left\{v_0\right\}}\right] & \geq & 1+\left(n-1\right)\beta + {\bf E}\left[\left(n-1-X_{1}\right) \left(1-\left(1-\beta\right)^{X_{1}}\right) \right] \\ & = & 1+\left(n-1\right)\beta + \left(n-1\right)- \left(n-1\right)\left(1-\beta^2\right)^{n-1} \\ & & \qquad \qquad - \left(n-1\right)\beta +\left(n-1\right)\beta \left(1-\beta\right)\left(1-\beta^2\right)^{n-2} \,. \end{eqnarray*} Therefore we get \begin{equation} \limsup_{n \rightarrow \infty} \frac{{\bf E}\left[Y^{K_{n}, \left\{v_0\right\}}\right] - \text{LB}^{K_n, \left\{v_0\right\}}} {\text{LB}^{K_n, \left\{v_0\right\}}} \geq \frac{1-\beta}{\beta}\,, \end{equation} where $\text{LB}^{K_n, \left\{v_0\right\}} :={\bf E}[Y^{{\mathcal T}_{n}, \left\{v_0\right\}}]$ and $\mathcal{T}_n$ is a breadth-first-search spanning tree of $K_n$ rooted at $v_0$. Here, it is worth mentioning that for the complete graph, if we start with one infected vertex then, as discussed in Section \ref{Sec:Intro}, the set of vertices ever infected is none other than the connected component of $v_0$ in an Erd\H{o}s--R\'{e}nyi random graph with parameters $n$ and $\beta$. Thus the asymptotic behavior of ${\bf E}\left[Y^{K_n, \left\{v_0\right\}}\right]$ is well understood in the literature \cite{Jan, Bola01}. \end{document}
\begin{document} \title{$d$-representability of simplicial complexes of fixed dimension\thanks{I obtained the main result of this note while I was working on my PhD thesis. Thus the contents of this contribution also appear, in a modified version, in my PhD thesis.}} \begin{abstract} Let ${\mathsf K}$ be a simplicial complex with vertex set $V = \{v_1, \dots, v_n\}$. The complex ${\mathsf K}$ is $d$-representable if there is a collection $\{C_1, \dots, C_n\}$ of convex sets in $\mathbb{R}^d$ such that a subcollection $\{C_{i_1}, \dots, C_{i_j}\}$ has a nonempty intersection if and only if $\{v_{i_1}, \dots, v_{i_j}\}$ is a face of ${\mathsf K}$. In 1967 Wegner proved that every simplicial complex of dimension $d$ is $(2d + 1)$-representable. He also suggested that his bound is the best possible, i.e., that there are $d$-dimensional simplicial complexes which are not $2d$-representable. However, he was not able to prove his suggestion. We prove that his suggestion was indeed right. Thus we add another piece to the puzzle of intersection patterns of convex sets in Euclidean space. \end{abstract} \section{Introduction} Let $\mathcal{C}$ be a collection of sets. The \emph{nerve} of $\mathcal{C}$ is a simplicial complex\footnote{We assume that the reader is familiar with simplicial complexes; otherwise we refer the reader to standard sources such as~\cite{hatcher01,munkres84,matousek03}.} with vertex set $\mathcal{C}$ and with faces of the form $\{C_1, \cdots, C_k\} \subseteq \mathcal{C}$ such that the intersection $C_1 \cap \cdots \cap C_k$ is nonempty. We say that a simplicial complex is \emph{$d$-representable} if it is isomorphic to the nerve of a finite collection of convex sets in $\mathbb{R}^d$. This notion is designed to capture possible `intersection patterns' of convex sets in $\mathbb{R}^d$. The study of intersection patterns of convex sets has been active since a theorem of Helly~\cite{helly23}. Let us also mention that $d$-representable simplicial complexes are very closely related to the well-studied intersection graphs of convex sets. An intersection graph only records which pairs of convex sets have a nonempty intersection; it does not keep track of multiple intersections. Thus $d$-representable complexes provide more detailed information about the intersection pattern. From another point of view, the need to understand intersection patterns of convex sets also appears, e.g., in manifold learning. The task might be to reconstruct the homotopy type of a manifold $M$ given by sample points $\{p_i\}$. The sample points can be enlarged to convex sets $\{C_i\}$, and under certain conditions $M$ is homotopy equivalent to $\bigcup C_i$. On the other hand, via the nerve theorem, $\bigcup C_i$ is homotopy equivalent to the nerve of $\{C_i\}$. See, e.g.,~\cite{attali-lieutier10} for more details. The reader is referred to~\cite{eckhoff85} or~\cite{tancer11surveyarxiv} for more background on intersection patterns of convex sets. One of the questions arising in this area is how the dimension of a complex affects $d$-representability. Wegner~\cite{wegner67} showed that a complex of dimension $d$ is always $(2d+1)$-representable. (This result was also independently found by Perel'man~\cite{perelman85}.) Wegner also suggested that the value $2d+1$ is the best possible, i.e., that there are $d$-dimensional simplicial complexes which are not $2d$-representable. (The question about the best possible value is also reproduced by Eckhoff~\cite{eckhoff85}, and the author is not aware of this question having been answered yet.)
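As a concrete computational illustration of the notion of a nerve (this example is ours and not part of the original exposition), the following Python sketch computes the nerve of a small family of axis-parallel boxes; boxes are convex, and a subfamily of boxes has a common point if and only if, in every coordinate, the maximum of the lower endpoints is at most the minimum of the upper endpoints. Run on three pairwise-intersecting intervals in $\mathbb{R}^1$, it also illustrates why, by Helly's theorem on the line, the boundary of a triangle is not $1$-representable.
\begin{verbatim}
from itertools import combinations

def nerve(boxes):
    """All faces of the nerve of a family of axis-parallel boxes.
    Each box is a pair (lo, hi) of d-tuples; a subfamily has a common
    point iff max(lo) <= min(hi) holds in every coordinate."""
    d = len(boxes[0][0])
    def meets(idx):
        return all(max(boxes[i][0][t] for i in idx)
                   <= min(boxes[i][1][t] for i in idx) for t in range(d))
    return [idx for r in range(1, len(boxes) + 1)
            for idx in combinations(range(len(boxes)), r) if meets(idx)]

# three pairwise-intersecting intervals in R^1: by Helly's theorem they also
# have a common point, so the nerve is the full triangle, never just its boundary
intervals = [((0.0,), (2.0,)), ((1.0,), (3.0,)), ((1.5,), (4.0,))]
print(nerve(intervals))
\end{verbatim}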
Wegner proved that the barycentric subdivision\footnote{In this case, every edge is subdivided into two edges by inserting a new vertex at its center.} of a nonplanar graph is not $2$-representable. He also suggested that the barycentric subdivision of a $d$-dimensional complex that does not embed into $\mathbb{R}^{2d}$ is not $2d$-representable; however, he was not able to prove his suggestion. In this short note we prove that the value $2d + 1$ is indeed the best possible. Let $\Delta_n$ denote the full simplex of dimension $n$ and let ${\mathsf K}^{(k)}$ denote the $k$-skeleton of a simplicial complex ${\mathsf K}$. We prove that the barycentric subdivision of $\Delta^{(d)}_{2d+2}$, and also the barycentric subdivision of many other complexes, is not $2d$-representable; see the precise statement below. \begin{theorem} \label{t:dimen} The barycentric subdivision of $\Delta^{(d)}_{2d+2}$ is not $2d$-representable. More generally, if ${\mathsf L}$ is a $d$-dimensional simplicial complex with non-vanishing Van Kampen obstruction, then the barycentric subdivision $\sd {\mathsf L}$ is not $2d$-representable. \end{theorem} \begin{remark} The \emph{Van Kampen obstruction} is a certain cohomological obstruction to embeddability of $d$-dimensional simplicial complexes into $\mathbb{R}^{2d}$. We are not going to define this obstruction precisely since we would need too many preliminaries. The interested reader is referred either to~\cite{melikhov09} for a survey or to~\cite[Appendix D]{matousek-tancer-wagner11} for an elementary exposition. Let us just mention some properties of the Van Kampen obstruction. If ${\mathsf K}$ is a $d$-dimensional simplicial complex which embeds into $\mathbb{R}^{2d}$, then its Van Kampen obstruction has to vanish. If $d \neq 2$, then also the converse is true, i.e., a $d$-dimensional simplicial complex with vanishing Van Kampen obstruction embeds into $\mathbb{R}^{2d}$. In case $d = 2$ there are, however, simplicial complexes with vanishing Van Kampen obstruction which do not embed into $\mathbb{R}^4$; see~\cite{freedman-krushkal-teichner94}. \end{remark} Regarding our proof method, let us first indicate Wegner's approach for the case $d=1$. Let $\mathsf{G}$ be a nonplanar graph (a graph is a $1$-dimensional simplicial complex). Assuming that $\sd \mathsf{G}$ is $2$-representable, Wegner constructs a piecewise linear embedding $g$ of the geometric realization $|\sd \mathsf{G}|$ into $\mathbb{R}^2$. This contradicts the fact that $\mathsf{G}$ is nonplanar. It seems hard to extend this construction in such a way that $g$ would be an embedding in higher dimensions. Our main observation is that it is not necessary to require that $g$ is an embedding in order to obtain a contradiction with an embeddability-type result. We only construct a $g$ such that disjoint simplices have disjoint images, which still contradicts the Van Kampen--Flores theorem and its extension to complexes with non-vanishing Van Kampen obstruction. \section{Barycentric subdivision} In order to set up notation, we recall the definition of the barycentric subdivision of a simplicial complex. From a geometric point of view, we put a new vertex at the barycenter of every geometric face of a simplicial complex ${\mathsf K}$. Then we form a new simplicial complex whose vertices are the barycenters and whose faces are the simplices formed in between these barycenters. It is perhaps more convenient to state the precise definition in the abstract setting.
Given a simplicial complex ${\mathsf K}$, the \emph{barycentric subdivision} of ${\mathsf K}$ is a simplicial complex $\sd {\mathsf K}$ whose set of vertices is the set ${\mathsf K} \setminus \{\emptyset\}$ and whose faces are collections $\{\alpha_1, \dots, \alpha_m\}$ of faces of ${\mathsf K}$ such that $$ \alpha_1 \supsetneq \alpha_2 \supsetneq \cdots \supsetneq \alpha_m \neq \emptyset. $$ The vertices of $\sd {\mathsf K}$ play the role of barycenters of the faces in ${\mathsf K} \setminus \{\emptyset\}$. The faces of $\sd {\mathsf K}$ are the simplices in between these barycenters. See Figure~\ref{f:baryc}. \begin{figure} \caption{Barycentric subdivision of a complex.} \label{f:baryc} \end{figure} The complexes ${\mathsf K}$ and $\sd {\mathsf K}$ have the same geometric realization, i.e., $| {\mathsf K} | = |\sd {\mathsf K}|$. \section{Proof} For the proof we will need two auxiliary results. \begin{theorem}[Van Kampen--Flores theorem; see, e.g.,~{\cite[Theorem 5.1.1]{matousek03}}] \label{t:vkf} Let ${\mathsf K} = \Delta^{(d)}_{2d+2}$. Then for any continuous map $f\colon |{\mathsf K}| \rightarrow \mathbb{R}^{2d}$ there are two disjoint $d$-dimensional simplices $\gamma$ and $\delta$ of ${\mathsf K}$ such that their images $f(|\gamma|)$ and $f(|\delta|)$ intersect. \end{theorem} We remark that the conclusion of the theorem remains true if ${\mathsf K}$ is replaced with any $d$-dimensional complex with non-zero Van Kampen obstruction (in particular, ${\mathsf K} = \Delta^{(d)}_{2d+2}$ itself has a non-zero Van Kampen obstruction). The fact that Theorem~\ref{t:vkf} extends to complexes with non-zero obstruction follows directly from one of the possible definitions of the Van Kampen obstruction (and is immediate for a reader familiar with this topic); see, e.g., the exposition in~\cite{freedman-krushkal-teichner94}.\footnote{There is a sign error in the definition of the Van Kampen obstruction in~\cite{freedman-krushkal-teichner94}, observed by Melikhov~\cite{melikhov09}; however, it does not affect our conclusion.} On the other hand, Theorem~\ref{t:vkf} for our specific ${\mathsf K}$ can be proved on a more elementary level using the Borsuk--Ulam theorem, and that is why we also emphasize this specific case. Let $\alpha$ and $\beta$ be faces of a simplicial complex ${\mathsf K}$. We say that $\alpha$ and $\beta$ are \emph{remote} if there is no edge $ab \in {\mathsf K}$ with $a \in \alpha, b \in \beta$. \begin{lemma} \label{l:remote} Let $\mathcal{F}$ be a collection of convex sets in $\mathbb{R}^m$ and let ${\mathsf K} := \mathsf{N}(\mathcal{F})$ be the nerve of $\mathcal{F}$. Then there is a linear map $g: |\sd {\mathsf K}| \rightarrow \mathbb{R}^m$ such that $g(|\sd \alpha|) \cap g(|\sd \beta|) = \emptyset$ for any remote $\alpha, \beta \in {\mathsf K}$. \end{lemma} \begin{figure} \caption{Mapping $\sd {\mathsf K}$ into $\mathbb{R}^m$.} \label{f:mapsd} \end{figure} \begin{proof} First we specify $g$ on the vertices of $\sd {\mathsf K}$, and then we extend it linearly to the whole of $\sd {\mathsf K}$. See Figure~\ref{f:mapsd}. A vertex of $\sd {\mathsf K}$ is a simplex of ${\mathsf K}$, i.e., a subcollection $\mathcal{F}'$ of $\mathcal{F}$ with a nonempty intersection. Let us pick a point $p(\mathcal{F}')$ inside $\bigcap \mathcal{F}'$. We set $g(\mathcal{F}') := p(\mathcal{F}')$ for $\mathcal{F}' \in {\mathsf K}$. As we already mentioned, we extend $g$ linearly to $\sd {\mathsf K}$. If $\alpha = \mathcal{F}' \in {\mathsf K}$, then $g(|\sd \alpha|) \subseteq \bigcup \mathcal{F}'$.
If $\alpha$ and $\beta$ are remote, then no member of $\alpha$ intersects any member of $\beta$, and hence $\bigcup \alpha$ and $\bigcup \beta$ are disjoint. Thus $g(|\sd \alpha|) \cap g(|\sd \beta|) = \emptyset$ for remote $\alpha, \beta \in {\mathsf K}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:dimen}] First we prove the specific case. Let ${\mathsf K} = \sd \Delta^{(d)}_{2d+2}$. For contradiction we assume that ${\mathsf K}$ is $2d$-representable. Let $\mathcal{F}$ be a $2d$-representation of it. (Without loss of generality ${\mathsf K} = \mathsf{N}(\mathcal{F})$.) According to Lemma~\ref{l:remote} there is a map $g\colon |\sd {\mathsf K}| \rightarrow \mathbb{R}^{2d}$ such that $g(|\sd \alpha|) \cap g(|\sd \beta|) = \emptyset$ for any remote $\alpha, \beta \in {\mathsf K}$. Since $\sd {\mathsf K} = \sd \sd \Delta^{(d)}_{2d+2}$, we have $|\Delta^{(d)}_{2d+2}| = |{\mathsf K}| = |\sd {\mathsf K}|$, and thus we can also apply $g$ to simplices of $\Delta^{(d)}_{2d+2}$. Let $\gamma$ and $\delta$ be disjoint simplices of $\Delta^{(d)}_{2d+2}$. Let $\alpha$ be a simplex of $\sd \gamma$ and $\beta$ a simplex of $\sd \delta$. Then $\alpha$ and $\beta$ are remote in ${\mathsf K}$. Thus $g(|\sd \alpha|) \cap g(|\sd \beta|) = \emptyset$. Consequently, $g(|\gamma|) \cap g(|\delta|) = \emptyset$ for any choice of $\gamma$ and $\delta$. However, this contradicts the Van Kampen--Flores theorem. The more general part of the theorem is obtained along the same lines when the generalized version of Theorem~\ref{t:vkf} mentioned above is used. \end{proof} \section*{Acknowledgment} I would like to thank Xavier Goaoc for pointing out to me that intersection patterns of convex sets relate to manifold learning. \end{document}
\begin{document} \title{Illusory Decoherence} \author{Sam Kennerly} \maketitle \noindent Email: [email protected]\\ Website: http://sites.google.com/site/samkennerly\\ Institution: Drexel University \begin{abstract} \noindent If a quantum experiment includes random processes, then the results of repeated measurements can appear consistent with irreversible decoherence even if the system's evolution prior to measurement was reversible and unitary. Two thought experiments are constructed as examples. \end{abstract} \ \noindent PACS 03.65.Yz, 89.70.Cf, 03.67.-a\\ Keywords: decoherence, entropy, quantum information, encryption, qubits \section{Introduction} \label{intro} Time evolution according to the Schr\"odinger equation (or equivalent formulations) is deterministic, unitary, and cannot alter von Neumann entropy. Despite this fact, quantum computing experiments routinely produce data which appear to show decoherence of pure states into mixed states. The fickle behavior of qubits lends yet more support to the widely used principle that physical systems tend irreversibly toward disorder. As summarized by Eddington: \begin{quote} The law that entropy always increases holds, I think, the supreme position among the laws of Nature... If your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.\cite{Eddington_1927} \end{quote} The Schr\"odinger equation has not yet collapsed in deepest humiliation. But its apparent conflict with the second law is not easily dismissed: how can a reversible theory produce irreversible evolution? This \emph{quantum Loschmidt paradox} (or \emph{reversibility paradox}) is essentially a modern reformulation of Loschmidt's criticism of Boltzmann's H-theorem.\footnote{The quantum Loschmidt paradox should not be confused with the cosmological time-reversal paradox, which may or may not be related. See e.g. \cite{Carroll_book} for a discussion of both.} If time evolution of quantum systems is unitary, then von Neumann entropy does not tend to increase. Does von Neumann entropy disobey the second law, or do states evolve in a non-unitary way? The following thought experiments are examples for which the answer is ``none of the above.'' In each experiment, a quantum Loschmidt paradox is created by careless use of the term ``entropy.'' The paradoxes are resolved by defining von Neumann entropy exclusively for \emph{statistical mixtures}, not for \emph{physical objects}. \section{Allyson's choice} \label{AC} Professor Bob intends to replicate a classic \emph{welcher-weg} experiment for his students. On each trial, Bob sends a single neutron through a Mach-Zehnder interferometer as shown in Figure \ref{fig:MZA}. Bob performs 1000 such trials, adjusts the mirrors so that the phase difference $\phi$ between the paths is increased to $\phi + \Delta$, then repeats this procedure many times. When Bob plots his detector counts as a function of $\phi$, he expects to see sinusoidal $\phi$ dependence due to de Broglie interference. As a practical joke, Bob's student Allyson subverts the experiment. Before each trial, she flips a fair coin and records the result.\footnote{To avoid the difficulty of flipping thousands of coins without attracting the suspicion of her advisor, suppose she automates this process with a quantum RNG instead of a coin.} If heads, she performs the experiment as planned. If tails, she covertly reverses the orientation of the second beamsplitter.
It is then very probable that Bob's plot of detector counts will show almost no $\phi$ dependence. Though it appears to Bob that quantum information has been destroyed, it has actually been encrypted. Using her coin-flip history as a password, Allyson can decrypt Bob's data to produce two plots, each of which will show the sinusoidal dependence predicted by quantum mechanics. In each of these examples, the neutron is assumed to evolve unitarily on each trial. De Broglie interferometry is used because it is a well-documented topic that is relatively easy to visualize. Neutrons are chosen to avoid questions of relativity and electrodynamics, but other uncharged, massive particles would be suitable as well.\footnote{Similar experiments with different interferometer configurations have been performed with C$_{60}$ ``buckyballs'' and even larger molecules.\cite{Buckyball}\cite{Large_molecules}} For realizations of such an experiment, see e.g.\ \cite{MZ_neutron}\cite{MZ_sodium}. \begin{figure} \caption{An idealized Mach-Zehnder interferometer. A single neutron is sent through splitter $S_1$. The resulting superposition of paths is reflected from mirrors, altered by splitter $S_2$, and sent to detectors $D_L, D_R$. Solid lines label path $|L\rangle$ and dotted lines label path $|R\rangle$.} \label{fig:MZA} \end{figure} \subsection{Bob's intended experiment} \label{AC_Bob} A neutron is sent through the Mach-Zehnder apparatus shown in Figure \ref{fig:MZA}. The beamsplitter $S_1$ sends the neutron into a superposition of paths $|L\rangle$ and $|R\rangle$. Mirrors send the paths to a second splitter $S_2$, then to detectors $D_L$ and $D_R$. If the splitters are of the lossless type described in \cite{Z_beamsplitter}, then each can be represented by a unitary matrix acting on a Hilbert space with basis $\{ |L\rangle, |R\rangle \}$: $$ \hat{S} = \left[\begin{array}{cc} r_{LL} & t_{LR} \\ t_{RL} & r_{RR} \end{array}\right] \qquad r_{RR} = r_{LL}^*, \quad t_{RL} = -t_{LR}^*, \quad \det[\hat{S}] = 1 $$ The parameters $r_{LL}, r_{RR}, t_{LR}, t_{RL}$ are (complex) reflection and transmission coefficients. Bob chooses splitters $S_1$ and $S_2$ as follows: $$ \hat{S}_1 = \frac{1}{\sqrt{2}} \left[\begin{array}{cc} 1 & 1 \\ -1 & 1 \end{array}\right] \qquad \hat{S}_2 = (\hat{S}_1)^{-1} = \frac{1}{\sqrt{2}} \left[\begin{array}{cc} 1 & -1 \\ 1 & 1 \end{array}\right] $$ Represent the phase shifts of the neutron's wavefunction along the two paths by an operator $\hat{\Phi}$. The combined action of the interferometer is then $\hat{H} \equiv \hat{S}_2\hat{\Phi}\hat{S}_1$.\footnote{The notation $\hat{H}$ is chosen to suggest ``heads,'' not ``Hamiltonian.''} $$ \hat{\Phi} = \left[\begin{array}{cc} e^{\imath \theta_L} & 0 \\ 0 & e^{\imath \theta_R} \end{array}\right] \quad \hat{H} = \hat{S}_2\hat{\Phi}\hat{S}_1 = e^{\imath \theta_L} \frac{1}{2} \left[\begin{array}{cc} 1 + e^{\imath \phi} & 1 - e^{\imath \phi} \\ 1 - e^{\imath \phi} & 1 + e^{\imath \phi} \end{array}\right] \quad \phi \equiv \theta_R - \theta_L $$ If the neutron is launched as shown in Figure \ref{fig:MZA}, then it reaches the detectors in state $|\Psi_H\rangle \equiv \hat{H}| L \rangle$.
Ignoring any unobservable overall phase, $|\Psi_H\rangle$ is: $$ |\Psi_H\rangle \equiv \hat{H}| L \rangle = \frac{1}{2} \left[\begin{array}{cc} 1 + e^{\imath \phi} & 1 - e^{\imath \phi} \\ 1 - e^{\imath \phi} & 1 + e^{\imath \phi} \end{array}\right] \left[\begin{array}{c} 1 \\ 0 \end{array}\right] = \frac{1}{2} \left[\begin{array}{c} 1 + e^{\imath \phi} \\ 1 - e^{\imath \phi} \end{array}\right] $$ The detection probabilities $P(D_L)$ and $P(D_R)$ are: \begin{center}$ P(D_L) = || \langle L | \Psi_H \rangle ||^2 = || \frac{1}{2}(1 + e^{\imath \phi})||^2 = \frac{1}{2}( 1 + \cos \phi ) $\end{center} \begin{center}$ P(D_R) = || \langle R | \Psi_H \rangle ||^2 = || \frac{1}{2}(1 - e^{\imath \phi}) ||^2 = \frac{1}{2} ( 1 - \cos \phi ) $\end{center} \subsection{Allyson's randomized experiment} \label{AC_Allyson} When Allyson's coin lands heads, the neutron state immediately prior to detection is $\hat{H}| L \rangle$. When it lands tails, she reverses the orientation of $S_2$ so that its matrix representation is $(\hat{S}_2)^T = \hat{S}_1$ and the action of the M-Z apparatus is $\hat{T} \equiv \hat{S}_1 \hat{\Phi} \hat{S}_1$. For these trials, the state vector immediately before detection is: $$ |\Psi_T\rangle \equiv \hat{T}|L \rangle = \frac{1}{2} \left[\begin{array}{cc} 1 - e^{\imath \phi} & 1 + e^{\imath \phi} \\ - (1 + e^{\imath \phi}) & -1 + e^{\imath \phi} \end{array}\right] \left[\begin{array}{c} 1 \\ 0 \end{array}\right] = \frac{1}{2} \left[\begin{array}{c} 1 - e^{\imath \phi} \\ - (1 + e^{\imath \phi}) \end{array}\right] $$ Given tails, the conditional probabilities $P(D_L | T)$ and $P(D_R | T)$ are: \begin{center}$ P( D_L | T) = || \langle L | \Psi_T \rangle ||^2 = || \frac{1}{2}(1 - e^{\imath \phi})||^2 = \frac{1}{2}( 1 - \cos \phi ) $\end{center} \begin{center}$ P( D_R | T) = || \langle R | \Psi_T \rangle ||^2 = || \frac{-1}{2}(1 + e^{\imath \phi})||^2 = \frac{1}{2}( 1 + \cos \phi ) $\end{center} Given heads, the conditional probabilities $P(D_L | H)$ and $P(D_R | H)$ are the same as in Bob's intended experiment. The unconditional probabilities are: $$ P(D_L) = P(D_L | H) P(H) + P(D_L | T) P(T) = \frac{1}{2} $$ and likewise for $P(D_R)$, which is also $\frac{1}{2}$ regardless of the value of $\phi$. Figure \ref{fig:Bob_fail} shows numerical simulations of the intended and randomized experiments. \begin{figure} \caption{Simulated data with 1000 trials for each value of $\phi$. Y-axis is detector counts; X-axis is $\phi$. Left: Bob's intended experiment. Right: Allyson's randomized experiment. (Imperfect fit to predicted probabilities is an artifact of finite sample size.)} \label{fig:Bob_fail} \end{figure} Allyson's shenanigans have concealed evidence of de Broglie interference, and Bob's experiment appears ruined. However, Allyson can unscramble the data if she has recorded her coin tosses. Suppose the first 8 coin toss results were $THHT$ $THTH$. Then Allyson represents the first 8 final states as a list: $$ \begin{array}{ccccccccc} |\Psi_T\rangle, & |\Psi_H\rangle, & |\Psi_H\rangle, & |\Psi_T\rangle & \quad & |\Psi_T\rangle, & |\Psi_H\rangle, & |\Psi_T\rangle, & |\Psi_H\rangle \\ \end{array} $$ By contrast, Bob's incorrect final-state list contains exclusively $|\Psi_H \rangle$ entries.
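The algebra of Sections \ref{AC_Bob} and \ref{AC_Allyson} is easy to check numerically. The short NumPy sketch below is our own illustration (the names are ours): it reproduces $P(D_L) = \frac{1}{2}(1 + \cos \phi)$ for Bob's intended setup and the flat value $\frac{1}{2}$ obtained after averaging over Allyson's fair coin, mirroring the simulated data in Figure \ref{fig:Bob_fail}.
\begin{verbatim}
import numpy as np

def detector_probs(phi, tails=False):
    """[P(D_L), P(D_R)] for one trial with phase difference phi;
    tails=True models Allyson reversing the second splitter."""
    S1 = np.array([[1, 1], [-1, 1]]) / np.sqrt(2)
    S2 = S1 if tails else np.array([[1, -1], [1, 1]]) / np.sqrt(2)
    Phi = np.diag([1.0, np.exp(1j * phi)])      # overall phase e^{i theta_L} dropped
    psi = S2 @ Phi @ S1 @ np.array([1.0, 0.0])  # neutron enters along |L>
    return np.abs(psi) ** 2

for phi in np.linspace(0.0, 2.0 * np.pi, 5):
    p_bob = detector_probs(phi)                 # Bob's intended experiment
    # Allyson: average over a fair coin -> flat 1/2 regardless of phi
    p_all = 0.5 * (detector_probs(phi) + detector_probs(phi, tails=True))
    print(f"phi={phi:4.2f}  intended P(D_L)={p_bob[0]:.3f}  "
          f"randomized P(D_L)={p_all[0]:.3f}")
\end{verbatim}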
\begin{table} \begin{center} \caption{Example final state list (first 8 trials only)\label{tab:state_lists}} \begin{tabular}{ccccccccc} \hline Bob's plaintext & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline Coin result & T & H & H & T & T & H & T & H \\ \hline Allyson's ciphertext & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 \\ \hline \end{tabular} \end{center} \end{table} Consider each physicist's list as a binary string with $|\Psi_H \rangle \sim 1$ and $|\Psi_T \rangle \sim 0$. Then Bob's list is a plaintext, Allyson's list is a ciphertext, and her coin history is a password. The encryption scheme is bitwise modular binary addition as shown in Table \ref{tab:state_lists}. Allyson's password is the same size as the plaintext, used only once, and chosen randomly with uniform probability. Her encryption scheme is thus a provably secure Vernam cipher.\cite{Shannon_49} Even if Bob discovers Allyson's subterfuge, he cannot decrypt the data unless he knows the results of her coin flips. Allyson can decrypt the data by using her coin-toss history to label each detection event H or T. She can then produce separate heads-only and tails-only plots as shown in Figure \ref{fig:Allyson_decrypt}. These plots show that evidence of the neutrons' de Broglie interference was reversibly \emph{encrypted}, not irreversibly \emph{destroyed}. \begin{figure} \caption{Decryption of simulated data from Figure \ref{fig:Bob_fail}.} \label{fig:Allyson_decrypt} \end{figure} \subsection{Mixed-state description of the experiment} \label{AC_mixed} Suppose Allyson recorded her coin history on a flash memory stick, which she has now misplaced. Unless she made backup copies or kept some other record of coin flips, the password is lost and neither Allyson nor Bob can decrypt the data. Decryption without the password is extremely unlikely, but a mixed-state representation of the neutrons' final state remains possible. Allyson and Bob know only that the final state on each trial might have been $|\Psi_H\rangle$ or $|\Psi_T\rangle$, each with probability $\frac{1}{2}$. The projection operators $\hat{\rho}_H, \hat{\rho}_T$ corresponding to these states are: $$ \hat{\rho}_H = |\Psi_H \rangle \langle \Psi_H | = \frac{1}{2} \left[\begin{array}{cc} 1 + \cos(\phi) & \imath \sin(\phi) \\ - \imath \sin(\phi) & 1 - \cos(\phi) \end{array}\right] $$ $$ \hat{\rho}_T = |\Psi_T \rangle \langle \Psi_T | = \frac{1}{2} \left[\begin{array}{cc} 1 - \cos(\phi) & \imath \sin(\phi) \\ - \imath \sin(\phi) & 1 + \cos(\phi) \end{array}\right] $$ Following von Neumann's prescription for ``when we do not even know what state is actually present,'' Allyson and Bob weight each of these projections by its probability and sum the results to form a mixed state $\bar{\rho}$.\cite{vN_foundations} $$ \bar{\rho} = \frac{1}{2}\hat{\rho}_H + \frac{1}{2}\hat{\rho}_T = \frac{1}{2} \left[\begin{array}{cc} 1 & \imath \sin(\phi) \\ - \imath \sin(\phi) & 1 \end{array}\right] $$ This $\bar{\rho}$ predicts detection probabilities of $\frac{1}{2}$ for all values of $\phi$, agreeing with Bob's observations. Its eigenvalues $\lambda_{\pm}$ and von Neumann entropy $S_{vN}$ are: $$ S_{vN} = - \sum \lambda_{\pm} \log_2(\lambda_{\pm}) \qquad \lambda_{\pm} = \frac{1}{2}\left[ 1 \pm \sin(\phi) \right] $$ This entropy varies smoothly from 1 bit to 0 bits, depending on the value of $\phi$ for a particular trial. When $\sin(\phi) = 1$, the neutron is equally likely to be detected by $D_L$ or $D_R$ regardless of Allyson's coin flip.
But when $\sin(\phi) = 0$, her subterfuge transforms what should have been a certain event into a 50/50 proposition. \section{Decoherence by 1000 small cuts} \label{1000_cuts} Suppose Bob repeats the experiment with Allyson's cooperation. Suppose also that this time, their control of the interferometer is imperfect in a specific way: the phase difference $\phi$ along the neutron paths varies erratically by an amount that is not negligible but is impractical to measure directly.\footnote{The physical source of imprecision in $\phi$ is left to readers' imaginations; perhaps it is seismic vibrations, flexibility of the beamsplitters' mounting brackets, or some other nuisance.} Bob now faces a subtler version of his previous difficulty. For each value of $\phi$, he assumes that performing 1000 trials will ensure the ratio of $D_L/D_R$ detections is close to its expectation value $||\langle L | \Psi \rangle ||^2 \ / \ || \langle R | \Psi \rangle ||^2$. In the idealized experiment, this expectation is identical over 1000 trials and Bob's assumption follows from the law of large numbers.\footnote{Bob must also assume that detection events for different trials are independent.} But imprecision in $\phi$ means $|\Psi\rangle$ is not identical for all 1000 trials, which invalidates Bob's reasoning. Errors in $\phi$ prevent either physicist from knowing $|\Psi\rangle$ exactly on each trial. If instead they represent $\phi$ as a random variable which is identically \emph{distributed} over 1000 trials, then they can describe the neutrons' final state as a statistical mixture. For simplicity, let $\phi$ be normally-distributed with mean $\mu$ and variance $\sigma^2$.\footnote{If $\phi$ is the sum of very many independent random variables with finite mean and variance, then this assumption is justified by the central limit theorem.} Assume also that $\sigma > 0$ is fixed, but the experimenters' control of $\mu$ is nearly perfect. For each choice of $\mu$, they define $\bar{\rho}(\mu)$ as a conditional expectation: $$ \bar{\rho}(\mu) \equiv E(\hat{\rho} | \mu ) = \frac{1}{\sigma \sqrt{2 \pi}} \int_{-\infty}^{\infty} \frac{1}{2} \left[\begin{array}{cc} 1 + \cos(\phi) & \imath \sin(\phi) \\ - \imath \sin(\phi) & 1 - \cos(\phi) \end{array}\right] e^{-\frac{1}{2}\left( \frac{\phi - \mu}{\sigma} \right)^2} \ d \phi $$ Each matrix element is then a convolution. The resulting mixture has $S_{vN} > 0$: $$ \bar{\rho}(\mu) = \frac{1}{2} \left( \left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right] + \left[\begin{array}{cc} \cos(\mu) & \imath \sin(\mu) \\ -\imath \sin(\mu) & -\cos(\mu) \end{array}\right] e^{-\frac{1}{2}\sigma^2} \right) $$ If $\sigma^2 \ll 1$, then $\bar{\rho}$ is nearly pure and a plot of detection counts versus $\mu$ is likely to closely match Bob's intentions. As $\sigma^2$ increases, the amplitude of $\mu$-dependence decreases exponentially. If $\sigma^2 \gg 1$, then $\bar{\rho}$ approaches the maximum-entropy mixed state and Bob's plot shows no evidence of de Broglie interference. In principle, a similar analysis can be applied to any controllable two-level quantum system. For example, the operators $\hat{S}_1, \hat{\Phi}, \hat{S}_2$ could represent transformations of a superconducting qubit state as performed in \cite{MZ_qubit} or \cite{NMR_control}. The same mathematical formalism can be used, though different physical sources of experimental errors may require different noise models, e.g. \cite{Bias_noise} or \cite{Dielectric_loss}. 
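The convolution above is also easy to verify numerically. The NumPy sketch below is our own check (it is not taken from the cited experiments): it averages the pure-state projectors over $\phi \sim \mathcal{N}(\mu, \sigma^2)$ by Monte Carlo, compares the result with the closed form for $\bar{\rho}(\mu)$, and evaluates the von Neumann entropy of $\bar{\rho}(\mu)$, which interpolates between $0$ bits and $1$ bit as $\sigma^2$ grows.
\begin{verbatim}
import numpy as np

def rho_pure(phi):
    """Projector |Psi_H><Psi_H| in the {|L>, |R>} basis, as in the text above."""
    return 0.5 * np.array([[1 + np.cos(phi),    1j * np.sin(phi)],
                           [-1j * np.sin(phi),  1 - np.cos(phi)]])

def von_neumann_entropy(rho):
    """S_vN in bits: -sum of lambda*log2(lambda) over the eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

mu, sigma = 0.7, 0.8
rng = np.random.default_rng(1)
# Monte Carlo average of pure-state projectors over phi ~ N(mu, sigma^2)
rho_mc = sum(rho_pure(p) for p in rng.normal(mu, sigma, 50_000)) / 50_000
# closed form: (1/2)(I + e^{-sigma^2/2} [[cos mu, i sin mu], [-i sin mu, -cos mu]])
damp = np.exp(-0.5 * sigma ** 2)
rho_cf = 0.5 * (np.eye(2) + damp * np.array([[np.cos(mu),        1j * np.sin(mu)],
                                             [-1j * np.sin(mu), -np.cos(mu)]]))
print(np.round(rho_mc, 3))
print(np.round(rho_cf, 3))
print(von_neumann_entropy(rho_cf))   # approaches 1 bit as sigma grows
\end{verbatim}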
The conclusion that noisy experiments can produce decoherence is unlikely to surprise many experimental physicists. However, it may be surprising that any credible prediction can be made from such a crudely simplified model. There was no attempt to describe the laboratory environment or any extra degrees of freedom for the neutron - only an assumption that on each trial, $\phi$ is a random variable which is i.i.d.\ normal with mean $\mu$ controlled by experimenters. \section{Interpretation and conclusions} \label{interpretation} For Allyson's randomized experiment, Bob's quantum Loschmidt paradox is: ``How did unitary evolution of a pure state produce data consistent with decoherence?'' Allyson's resolution is: ``Evolution was unitary during each trial but \emph{the trials were not identical}.'' The classical appearance of Bob's data is due to his erroneous assumption that the neutron evolved identically for each batch of 1000 trials. In the ``1000 cuts'' experiment, the paradox and resolution are the same: evolution was unitary, but the trials were not identical. In this case, encryption was performed by \emph{Bob's laboratory} rather than a mischievous graduate student, and the password is the precise history of errors in $\phi$. Without knowing the password, Allyson and Bob cannot exactly represent $|\Psi\rangle$ on each trial. At best, they can resort to a probabilistic representation in terms of a density matrix $\bar{\rho}$. This $\bar{\rho}$ evolves irreversibly with $\Delta S_{vN} > 0$ while the neutrons themselves evolve unitarily. \subsection{Where did the information go?} \label{missing_information} Shannon interpreted the quantity $- \sum p_n \log(p_n)$ as a measure of ``missing information.''\cite{Shannon_48} That interpretation can be taken quite literally in these examples. Because any mixture $\bar{\rho}$ is a convex combination of projection operators, it can be used to define a probability distribution of pure states. Given a mixture $\bar{\rho}$, diagonalize it and refer to its eigenvectors $\{ | \Psi_n \rangle \}$ as ``possible pure states.''\footnote{Diagonalization does not determine the overall phase of each eigenvector, but these phases are arbitrary and do not represent any physically observable quantity.} The corresponding eigenvalues $\{ \lambda_n \}$ then form a probability distribution in the usual sense: $\lambda_n \in [0,1]$ and $\sum \lambda_n = \textrm{Tr}[\bar{\rho}] = 1$. The von Neumann entropy of $\bar{\rho}$ is the Shannon entropy of its associated probability distribution. In Allyson's randomized experiment, the coin-toss history is needed to determine whether $|\Psi_H\rangle$ or $|\Psi_T\rangle$ occurred on each trial. That information was missing from Bob's record of the experiment, so Allyson could represent the neutrons with a sequence of pure states and Bob could not. When Allyson then misplaces her flash memory, it becomes ``missing information'' from her point of view as well. In the other example, the ``missing'' history of errors in $\phi$ is not misplaced or hidden; it was simply never recorded. In both cases information has not been irreversibly destroyed or removed from the universe. It is ``missing'' only in the sense that neither physicist has any practical means of recovering it. It should be emphasized that ``missing information'' here refers to data needed to specify a \emph{state vector}, not a \emph{measurement result}. 
Given a flawless record of the neutron's evolution, Allyson and Bob can predict a unique $|\Psi \rangle$ on any given trial. But even in an idealized noiseless experiment, quantum theory asserts that neither physicist can predict which detector will detect the neutron unless $\sin(\phi) = 0$. If so, then the information needed to predict specific measurement results is fundamentally inaccessible, not merely ``missing.''\footnote{Unorthodox theories (e.g. Bohmian mechanics or the stochastic-spacetime interpretation) may consider this information accessible in principle but missing from quantum theory.} \subsection{Entropy of mixtures versus entropy of objects} \label{entropy_of_what} In these experiments, it is unclear how to answer the question ``What is the entropy of the neutron?'' If Allyson knows the neutron's exact history and Bob does not, then the ``entropy of the neutron'' appears to be zero for Allyson and nonzero for Bob. By contrast, the question ``What is the von Neumann entropy of the mixture $\bar{\rho}$?'' has a unique answer. The distinction is semantic, but important: $S_{vN}$ is well-defined for \emph{statistical mixtures}, not for \emph{physical objects}. A classical analogy may be more intuitive. Suppose Bob shuffles a new deck of 52 cards while Allyson films with a high-speed camera. By replaying the shuffle in slow motion, she determines each card's new location and concludes that the entropy of the deck is zero. If Bob has not seen the video, then he concludes that the entropy of the deck is $\log(52!)$. This ambiguity can be avoided by defining Shannon entropy exclusively for probability distributions. Allyson has assigned a degenerate distribution to the set of all deck permutations, while Bob has assigned a uniform distribution.\footnote{A discrete distribution is \emph{degenerate} iff its support consists of exactly one value.} These distributions have well-defined Shannon entropies even if the deck of cards itself does not. Shuffling did not irreversibly alter the deck of cards -- it merely obscured Bob's knowledge of the cards' order. A similar distinction between entropy of \emph{objects} and entropy of \emph{experiments} was advocated by Jaynes in 1957: \begin{quote} It is possible to maintain the view that the system is at all times in some definite but unknown pure state, which changes because of definite but unknown external forces; the probabilities represent only our ignorance as to the true state. With such an interpretation the expression ``irreversible process'' represents a semantic confusion; it is not the physical process that is irreversible, but rather our ability to follow it.\cite{Jaynes_1957_II} \end{quote} Jaynes' statement was made in the context of semiclassical statistical mechanics, but it is also relevant here. By assumption, each trial produces a pure final state $|\Psi\rangle$. In these thought experiments, it is the physicists' ability to describe $| \Psi \rangle$ which evolves irreversibly, not the neutrons themselves. \subsection{Relation to Jaynes' subjective statistical mechanics} \label{subjectivity} The interpretation of $S_{vN}$ advocated here can be summarized as follows: \begin{quote} $S_{vN}$ is a measure of the missing information an experimenter needs in order to distinguish a pure state $|\Psi\rangle$ from a statistical mixture $\bar{\rho}$. 
\end{quote} According to this interpretation, $S_{vN}$ is ``anthropomorphic'' in the sense that it is a measure of a scientist's inability to precisely represent a physical system, not a natural property of the system itself. Jaynes made the stronger statement (which he attributed to Wigner) that \emph{all} entropy is anthropomorphic: \begin{quote} Entropy is an anthropomorphic concept, not only in the well-known statistical sense that it measures the extent of human ignorance as to the microstate. Even at the purely phenomenological level, entropy is an anthropomorphic concept. For it is a property, not of the physical system, but of the particular experiments you or I choose to perform on it.\cite{Jaynes_1965} \end{quote} The implications of this interpretation are still a topic of active research.\cite{FoundPhys_MaxEnt_2011} Anthropomorphic entropy appears to be a useful concept for describing experimental quantum decoherence. In particular, the quantum Loschmidt paradox is avoided by defining $S_{vN}$ exclusively for mixtures resulting from random models of imperfectly-controlled experiments. But if \emph{thermodynamic} entropy $S_T$ is also defined anthropomorphically, then one must be careful to avoid subjectivity paradoxes.\footnote{Jaynes' original papers on subjective statistical mechanics address this issue.\cite{Jaynes_1957_I}} In the randomized experiment, the value Bob calculates for $S_{vN}$ depends on whether he knows Allyson's coin-toss history. Thermodynamic quantities presumably do not depend on physicists' knowledge; precisely describing the state vector of a boiling pot of water does not prevent it from scalding one's finger. Whether $S_T$ should also be interpreted anthropomorphically is not directly addressed by the thought experiments described here. None of these examples invokes any thermodynamic laws or definitions, nor is there any assumption of equilibrium with an environment. Consequently the neutrons' thermodynamic entropies $S_T$ need not even be well-defined quantities -- and if they are, there is no reason to assume that $S_T$ is related to $S_{vN}$ in either example. It is thus hard to see how these thought experiments could support or refute any statements about thermodynamics. But while they neither support nor refute Jaynes' interpretation, they are consistent with it. Jaynes defined $S_T$ as a special case of $S_{vN}$: it is $S_{vN}$ of the mixture $\bar{\rho}_{max}$ which maximizes entropy for a given macrostate.\footnote{The usual method of MaxEnt quantum thermodynamics is: given a Hilbert space and a set of expectation values $\{ \langle F_i \rangle \}$, define an equilibrium mixture $\bar{\rho}_T$ as the density operator which maximizes $S_{vN} - \sum \lambda_i \langle F_i \rangle$. Here $\{ \langle F_i \rangle \}$ is called a \emph{macrostate} and $\{ \lambda_i \}$ are Lagrange multipliers. The von Neumann entropy of $\bar{\rho}_T$ is then identified with $S_T$ for that macrostate.\cite{Jaynes_1957_II}} For a given macrostate, $\bar{\rho}_{max}$ is not subjective and need not equal Bob's $\bar{\rho}$. The interpretation of $S_{vN}$ advocated here is consistent with Jaynes' view but is less ambitious. The purpose of these thought experiments is simply to show that unitary evolution can \emph{appear} to produce evolution of pure states into mixed states. \end{document}
\begin{document} \title{A lower bound for the amplitude of\\ traveling waves of suspension bridges} \author{Paschalis Karageorgis \and John Stalker} \address{School of Mathematics, Trinity College, Dublin 2, Ireland} \email{[email protected]} \email{[email protected]} \begin{abstract} We obtain a lower bound for the amplitude of nonzero homoclinic traveling wave solutions of the McKenna--Walter suspension bridge model. As a consequence of our lower bound, all nonzero homoclinic traveling waves become unbounded as their speed of propagation goes to zero (in accordance with numerical observations). \end{abstract} \maketitle We study traveling wave solutions of the McKenna--Walter suspension bridge model \begin{equation}\label{pde} u_{tt} + u_{xxxx} + f(u) = 0 \end{equation} introduced in \cite{mcwa}. Two of the most standard choices for the nonlinear term $f$ are \begin{equation}\label{nonl} f(u) = \max(u,-1), \qquad f(u) = e^u - 1. \end{equation} The piecewise linear choice was the original choice in \cite{mcwa} and stems from the fact that the cables of a suspension bridge will resist movement only in one direction. The exponential choice was introduced in \cite{chemck} as a smooth version of the piecewise linear one which behaves the same way as $u\to -\infty$ and also near $u=0$. The smooth version is more suitable when it comes to numerics and also seems more appealing to engineers \cite{ds}. Numerical results for \eqref{pde} go back to McKenna and Walter \cite{mcwa}, who studied traveling wave solutions. Those are solutions of the form $u=u(x-ct)$, so they satisfy the ODE \begin{equation}\label{ode} u'''' + c^2 u'' + f(u) = 0. \end{equation} The numerical results of \cite{mcwa} for the piecewise linear model and those of \cite{chemck, chmc} for the exponential one suggest that traveling waves become unbounded as $c\to 0$. A rigorous proof of this observation was given by Lazer and McKenna \cite{lamc}, but it only applied to the piecewise linear model and not the exponential one. In this paper, we give a simple proof that applies to any nonlinear term $f$ such that \begin{itemize} \item[(A1)] $f$ is locally Lipschitz continuous with $uf(u)>0$ for all $u\neq 0$, and \item[(A2)] $f$ is differentiable at the origin with $f'(0)>0$. \end{itemize} Clearly, these assumptions hold for both nonlinearities in \eqref{nonl}. To show that traveling waves become unbounded as their speed goes to zero, we actually prove a lower bound for their amplitude, which is of independent interest. In what follows, we shall only focus on homoclinic solutions, namely those which vanish at $\pm\infty$. \begin{theorem}\label{main} Assume (A1)-(A2) and that $0<c^4<4f'(0)$. If $u$ is a nonzero homoclinic solution of equation \eqref{ode}, then $||u||_\infty\geq L(f,c)$, where \begin{equation}\label{lb} L(f,c) = \sup \left\{ \delta >0: \frac{f(u)}{u} > \frac{c^4}{4} \:\,\text{whenever\, $0\neq |u| < \delta$} \right\}. \end{equation} \end{theorem} We remark that the lower bound $L(f,c)$ is well-defined because \begin{equation*} \lim_{u\to 0} \frac{f(u)}{u} = f'(0) > \frac{c^4}{4}. \end{equation*} Moreover, $L(f,c)\to\infty$ as $c\to 0$ by (A1), so nonzero homoclinic solutions of \eqref{ode} become unbounded as $c\to 0$. Our assumption that $c^4<4f'(0)$ is natural because the eigenvalues of the linearized problem become purely imaginary when $c^4\geq 4f'(0)$, so one does not expect homoclinic solutions in that case. The existence of homoclinic solutions of \eqref{ode} has been studied by several authors.
We refer the reader to \cite{chemck} for the piecewise linear case, \cite{kamc} for more general nonlinearities that grow polynomially and \cite{sw} for the exponential case. The authors of \cite{BFGK} studied the qualitative properties of solutions for nonlinearities that satisfy (A1). There is also an existence result by Levandosky \cite{lev1} when $f(u)= u-|u|^{p-1}u$, but this case is somewhat different because it does not satisfy (A1) and the solutions remain bounded as $c\to 0$. To prove Theorem \ref{main}, we shall need to use the following facts from \cite{BFGK}. We only give the proof of the last two parts and refer the reader to \cite[Proposition 11]{BFGK} for the first. \begin{lemma} Suppose $u$ is a nonzero homoclinic solution of equation \eqref{ode}, namely a solution of equation \eqref{ode} that vanishes at $\pm\infty$. \begin{itemize} \item[(a)] Assume $f$ is continuous with $f(0)=0$. Then $u',u'',u'''$ must also vanish at $\pm\infty$. \item[(b)] Assume (A1). Then $u(s)$ must change sign infinitely many times as $s\to\pm \infty$. \item[(c)] Assume (A1)-(A2) and that $0<c^4<4f'(0)$. Then $u\in H^2$. \end{itemize} \end{lemma} \begin{proof} To prove (b), we note that $u(s)$ is a solution of \eqref{ode} if and only if $u(-s)$ is. Thus, it suffices to show that $u(s)$ changes sign infinitely many times as $s\to \infty$. Suppose $u(s)$ is eventually non-negative, the other case being similar. Then $w(s)= u''(s) + c^2 u(s)$ satisfies \begin{align*} w''(s) = -f(u(s))\leq 0 \end{align*} for large enough $s$, so $w(s)$ is eventually concave. Since $w(s)$ goes to zero by part (a), this implies that $w(s)$ is eventually non-positive. Using this fact, we now get \begin{equation*} u''(s) = w(s) - c^2 u(s) \leq 0 \end{equation*} for large enough $s$, so $u(s)$ is eventually concave. As before, this implies $u(s)$ is eventually non-positive, so we must actually have $u\equiv 0$, a contradiction. To prove (c), we consider the function \begin{equation}\label{H} H(s) = u'(s)u''(s) -u(s)u'''(s) - c^2 u(s) u'(s). \end{equation} We note that $H(s)$ is bounded by part (a) and that a short computation gives \begin{equation}\label{H2} H(s_2) - H(s_1) = \int_{s_1}^{s_2} [u''(s)^2 - c^2 u'(s)^2 + u(s) f(u(s))] \,ds \end{equation} for all $s_1<s_2$. Now, fix some $0<\varepsilon< f'(0) -\frac{c^4}{4}$ and let $s_0\in{\mathbb R}$ be such that \begin{equation*} |s|\geq s_0 \quad\Longrightarrow\quad \frac{f(u(s))}{u(s)} \geq f'(0) -\varepsilon. \end{equation*} Recalling part (b), suppose $s_1,s_2$ are any two roots of $u(s)$ for which $|s_1|,|s_2|> |s_0|$. Then we may combine the last two equations to find that \begin{align} \label{Hin} H(s_2) - H(s_1) &\geq \int_{s_1}^{s_2} \bigl[ u''(s)^2 - c^2 u'(s)^2 + (f'(0) -\varepsilon) u(s)^2 \bigr] \,ds \\ &\geq -2\sqrt{f'(0)-\varepsilon} \int_{s_1}^{s_2} u''(s) u(s) \,ds - c^2 \int_{s_1}^{s_2} u'(s)^2 \:ds \notag \\ &= \alpha \int_{s_1}^{s_2} u'(s)^2 \,ds \notag, \end{align} where $\alpha = 2\sqrt{f'(0)-\varepsilon} - c^2>0$. Since $H(s)$ is bounded by above, this implies $u'\in L^2$. Using this fact and the inequality \eqref{Hin}, we conclude that $u\in H^2$. \end{proof} \begin{proof_of}{Theorem \ref{main}} We multiply equation \eqref{ode} by $u$ and integrate by parts to get \begin{equation*} \int_{-\infty}^\infty u''(s)^2 \,ds - c^2 \int_{-\infty}^\infty u'(s)^2 \,ds + \int_{-\infty}^\infty uf(u) \,ds = 0. 
\end{equation*} Using the Fourier transform and a trivial estimate, we conclude that \begin{equation}\label{useful} \int_{-\infty}^\infty uf(u) \,ds = c^2 \int_{-\infty}^\infty u'(s)^2 \,ds - \int_{-\infty}^\infty u''(s)^2 \,ds \leq \frac{c^4}{4} \int_{-\infty}^\infty u(s)^2 \,ds. \end{equation} Since $\lim_{u\to 0} \frac{f(u)}{u} = f'(0) > \frac{c^4}{4}$, we can always find some $\delta>0$ such that \begin{equation*} 0\neq |u| < \delta \quad\Longrightarrow\quad u f(u) > \frac{c^4 u^2}{4}. \end{equation*} Suppose $\delta>0$ is any number with this property. Given a nonzero homoclinic solution $u$ for which $||u||_\infty< \delta$, we can then use the last equation to get \begin{equation*} \int_{-\infty}^\infty uf(u) \,ds > \frac{c^4}{4} \int_{-\infty}^\infty u(s)^2 \,ds, \end{equation*} contrary to \eqref{useful}. This means that $||u||_\infty\geq \delta$ for each nonzero homoclinic solution and each such $\delta$, so the result follows. \end{proof_of} The following corollary is a trivial consequence of our estimate \eqref{useful}. This result refines \cite[Theorem 13i]{BFGK} which imposes the stronger assumption $\frac{f(u)}{u} \geq f'(0)$. \begin{corollary} Assume (A1)-(A2) and that $0<c^4<4f'(0)$. If $\frac{f(u)}{u} > \frac{c^4}{4}$ for all $u\neq 0$, then equation \eqref{ode} has no nonzero homoclinic solutions. \end{corollary} \end{document}
\begin{document} \title{Towards a Classification of Modular Compactifications of $\mathcal{M}_{g,n}$} \author{David Ishii Smyth} \maketitle \begin{abstract} The moduli space of smooth curves admits a beautiful compactification $\mathcal{M}_{g,n} \subset \overline{\mathcal{M}}_{g,n}$ by the moduli space of stable curves. In this paper, we undertake a systematic classification of alternate modular compactifications of $\mathcal{M}_{g,n}$. Let $\mathcal{U}_{g,n}$ be the (non-separated) moduli stack of all $n$-pointed reduced, connected, complete, one-dimensional schemes of arithmetic genus $g$. When $g=0$, $\mathcal{U}_{0,n}$ is irreducible and we classify all open proper substacks of $\mathcal{U}_{0,n}$. When $g \geq 1$, $\mathcal{U}_{g,n}$ may not be irreducible, but there is a unique irreducible component $\mathcal{V}_{g,n} \subset \mathcal{U}_{g,n}$ containing $\mathcal{M}_{g,n}$. We classify open proper substacks of $\mathcal{V}_{g,n}$ satisfying a certain stability condition. \end{abstract} \tableofcontents \pagebreak \section{Introduction} \emph{Notation: An $n$-pointed curve $(C, \{p_i\}_{i=1}^{n})$ is a reduced, connected, complete, one-dimensional scheme of finite-type over an algebraically closed field, together with a collection of $n$ points $p_1, \ldots, p_n \in C$. The marked points need not be smooth nor distinct. We say that a point on $C$ is \textit{distinguished} if it is marked or singular, and that a point on the normalization $\tilde{C}$ is \textit{distinguished} if it lies above a distinguished point of $C$. If $C$ is any curve and $Z \subset C$ is a proper subcurve, we call $Z^{c}:=\overline{C \backslash Z}$ \textit{the complement of $Z$}.}\\ \subsection{Statement of main result}\label{S:MainResult} One of the most beautiful and influential theorems in modern algebraic geometry is the construction of a modular compactification $\mathcal{M}_{g,n} \subset \overline{\mathcal{M}}_{g,n}$ for the moduli space of smooth curves \cite{DeligneMumford}. The key point in this construction is the identification of a suitable class of singular curves, namely \emph{Deligne-Mumford stable curves}, with the property that every incomplete one-parameter family of smooth curves has a unique limit contained in this class. While the class of stable curves gives a natural modular compactification of the space of smooth curves, it is not unique in this respect. There exist two alternate compactifications in the literature, the moduli space of pseudostable curves \cite{Schubert}, in which cusps arise, and the moduli space of weighted pointed curves, in which sections with small weight are allowed to collide \cite{Hassett1}. In light of these constructions, it is natural to ask \begin{problem} Can we classify all possible stability conditions for curves, i.e. classes of singular marked curves which are deformation-open and satisfy the property that any one-parameter family of smooth curves contains a unique limit contained in that class? \end{problem} Stable, pseudostable, and weighted stable curves all have the property that every rational component of the normalization has at least three distinguished points. In general, we say that an $n$-pointed curve with this property is \emph{prestable}. The main result of this paper classifies stability conditions on prestable curves, i.e. we give a simple combinatorial description of all deformation-open classes of prestable curves with the unique limit property.
Stability conditions on curves correspond to open proper substacks of the moduli stack of all curves. To make this precise, let $\mathcal{U}_{g,n}$ be the functor from schemes to groupoids defined by \begin{equation*} \mathcal{U}_{g,n}(T):= \left\{ \begin{matrix} \text{ Flat, proper, finitely-presented morphisms $\mathcal{C} \rightarrow T$, with $n$ sections}\\ \text{$\{ \sigma_i\}_{i=1}^{n}$, and connected, reduced, one-dimensional geometric fibers.}\\ \end{matrix} \right\} \end{equation*} Note that we always allow the total space $\mathcal{C}$ of a family to be an algebraic space. In Appendix B, it is shown that $\mathcal{U}_{g,n}$ is an algebraic stack, locally of finite-type over $\mathcal{S}pec \mathbb{Z}$. Let $\mathcal{M}_{g,n} \subset \mathcal{U}_{g,n}$ denote the open substack corresponding to families of smooth curves. Since $\mathcal{M}_{g,n}$ is irreducible, there is a unique irreducible component $\mathcal{V}_{g,n} \subset \mathcal{U}_{g,n}$ containing $\mathcal{M}_{g,n}$. The points of $\mathcal{V}_{g,n}$ correspond to smoothable curves, while the generic point of every extraneous component of $\mathcal{U}_{g,n}$ parametrizes a non-smoothable curve. Since we are interested in irreducible compactifications of $\mathcal{M}_{g,n}$, we work exclusively in $\mathcal{V}_{g,n}$. \begin{definition} A \emph{modular compactification of $\mathcal{M}_{g,n}$} is an open substack $\mathcal{X} \subset \mathcal{V}_{g,n}$, such that $\mathcal{X}$ is proper over $\mathcal{S}pec \mathbb{Z}.$ \end{definition} Since the definition of a modular compactification is topological, the set of modular compactifications of $\mathcal{M}_{g,n}$ does not depend on the stack structure of $\mathcal{V}_{g,n}$. In the absence of a functorial description for $\mathcal{V}_{g,n}$, we simply define the stack structure by taking the stack-theoretic image in $\mathcal{U}_{g,n}$ of the inclusion $\mathcal{V}_{g,n}^{0} \hookrightarrow \mathcal{U}_{g,n}$, where $\mathcal{V}_{g,n}^{0}$ is the interior of $\mathcal{V}_{g,n}$, i.e. $ \mathcal{V}_{g,n}^{0}:=\mathcal{V}_{g,n} \backslash (\mathcal{V}_{g,n} \cap \overline{\mathcal{U}_{g,n} \backslash \mathcal{V}_{g,n}}). $ The long-term goal of this project is to classify all modular compactifications of $\mathcal{M}_{g,n}$. This paper takes the first step by classifying all \emph{stable modular compactifications of $\mathcal{M}_{g,n}$}. If $(C,\{p_i\}_{i=1}^{n})$ is an $n$-pointed curve, we say that $(C,\{p_i\}_{i=1}^{n})$ is \emph{prestable} (resp. \emph{presemistable}) if every rational component of the normalization $\tilde{C}$ has at least three (resp. two) distinguished points. We then define the restricted class of stable (resp. semistable) modular compactifications as follows. \begin{definition} A modular compactification $\mathcal{X} \subset \mathcal{V}_{g,n}$ is \emph{stable} (resp. \emph{semistable}) if every geometric point $[C,\{p_i\}_{i=1}^{n}] \in \mathcal{X}$ is prestable (resp. presemistable). \end{definition} \begin{remark} It is by no means obvious that there should exist strictly semistable modular compactifications of $\mathcal{M}_{g,n}$. After all, if a nodal curve $(C, \{p_i\}_{i=1}^{n})$ contains a smooth rational subcurve with only two distinguished points, then $\text{Aut\,}(C, \{p_i\}_{i=1}^{n})$ is not proper; in particular, $(C,\{p_i\}_{i=1}^{n})$ cannot be contained in any proper substack of $\mathcal{U}_{g,n}$. 
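(To make the failure of properness explicit, note that the automorphisms of $(C, \{p_i\}_{i=1}^{n})$ which restrict to the identity on the complement of such a rational component form a subgroup isomorphic to $\text{Aut\,}(\mathbb{P}^{1},0,\infty) \simeq \mathbb{G}_{m}$, acting by $x \mapsto tx$ with $t \neq 0$; this one-parameter family of automorphisms admits no limit as $t \rightarrow 0$, so $\text{Aut\,}(C, \{p_i\}_{i=1}^{n})$ fails the valuative criterion of properness. The same mechanism reappears in the proof of Corollary \ref{C:GenusZeroStability}.)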
A similar argument (see Section \ref{SS:GenusZero}) shows that every modular compactification of $M_{0,n}$ is stable, so the methods of this paper give a classification of \emph{all} modular compactifications of $M_{0,n}$. By contrast, the author has constructed a sequence of strictly semistable modular compactifications of $\mathcal{M}_{1,n}$ \cite{Smyth1}. Thus, for $g \geq 1$, our classification of stable compactifications does not tell the whole story. \end{remark} We should observe that a stable modular compactification $\mathcal{X}$ necessarily has quasi-finite diagonal. In particular, $\mathcal{X}$ admits an irreducible coarse moduli space $X$, which gives a proper (though not necessarily projective) birational model of $M_{g,n}$ \cite{KeelMori}. \begin{proposition} If $\mathcal{X}$ is a stable modular compactification, then $\mathcal{X}$ has quasi-finite diagonal. \end{proposition} \begin{proof} We must show that if $(C,\{p_i\}_{i=1}^{n})$ is a prestable curve over an algebraically closed field $k$, the group scheme $\text{Aut\,}_{k}(C,\{p_i\}_{i=1}^{n})$ has finitely many $k$-points. Let $\{\tilde{q}_i\}_{i=1}^{m} \in \tilde{C}$ be the set of distinguished points of $\tilde{C}$. Any $k$-automorphism of $(C,\{p_i\}_{i=1}^{n})$ induces a $k$-automorphism of $\tilde{C}$ mapping the set $\{\tilde{q}_i\}_{i=1}^{m}$ to itself. Since each rational component of $\tilde{C}$ has at least three distinguished points, and each elliptic component has at least one distinguished point, the set of such automorphisms is finite. \end{proof} \begin{comment} We can give an equivalent definition of stable and semistable modular compactifications by considering the functor from schemes to groupoids \begin{equation*} \mathcal{U}_{g,n}^{red}(T):= \left\{ \begin{matrix} \text{Flat proper finitely-presented morphisms $X \rightarrow T$, with $n$ sections}\\ \text{$\{ \sigma_i\}_{i=1}^{n}$, and geometrically connected and reduced 1-dimensional fibers.}\\ \end{matrix} \right\} \end{equation*} $\mathcal{U}^{red}_{g,n}$ which is the moduli stack of reduced connected curves and it is immediate to check that $\mathcal{U}^{red}_{g,n} \subset \mathcal{U}_{g,n}$ is an open substack. As before, we define $\mathcal{V}_{g,n}^{red} =\mathcal{V}_{g,n} \cap \mathcal{U}^{red}_{g,n}$ to be the unique irreducible component containing $\mathcal{M}_{g,n}.$ Then we have \begin{lemma} Stable (resp. semistable) modular compactifications $\mathcal{X} \subset \mathcal{V}_{g,n}$ correspond naturally to open substacks $\mathcal{X} \subset \mathcal{V}_{g,n}^{red}$ which satisfy \item $\mathcal{M}_{g,n} \subset \mathcal{X}$ \item $\mathcal{X}$ is proper over $\mathcal{S}pec \mathbb{Z}.$ \end{lemma} \begin{proof} If $\mathcal{X}$ is a semistable modular compactification, then obviously $\mathcal{X}$ is open in $\mathcal{V}_{g,n}$ and $\mathcal{U}_{g,n}^{red}$, hence also in $\mathcal{V}_{g,n}^{red}$. It only remains to check that if $\mathcal{X}$ is proper over $\mathcal{S}pec \mathbb{Z}$, then $\mathcal{X} \supset \mathcal{M}_{g,n}$. This is proved in corollary ?? \end{proof} \end{comment} Now let us describe the combinatorial data that goes into the construction of a stable modular compactification. If $(C, \{p_i\}_{i=1}^{n})$ is a Deligne-Mumford stable curve, we may associate to $(C,\{p_i\}_{i=1}^{n})$ its \emph{dual graph} $G$. 
The vertices of $G$ correspond to the irreducible components of $C$, the edges correspond to the nodes of $C$, and each vertex is labeled by the arithmetic genus of the corresponding component as well as the marked points supported on that component. The dual graph encodes the topological type of $(C, \{p_i\}_{i=1}^{n})$, and for any fixed $g,n$, there are only finitely many isomorphism classes of dual graphs of $n$-pointed stable curves of genus $g$. \begin{figure} \caption{The specialization of dual graphs induced by a one-parameter specialization of stable curves. Note that $v_{1} \label{F:GraphSpecialization} \end{figure} We write $G \leadsto G'$ if there exists a stable curve $(\mathcal{C} \rightarrow \mathcal{D}elta, \{\sigma_{i}\}_{i=1}^{n})$ over the spectrum of a discrete valuation ring with algebraically closed residue field, such that the geometric generic fiber has dual graph $G$ and the special fiber has dual graph $G'$. If $v$ is a vertex of $G$, and we have $G \leadsto G'$, we say that $G \leadsto G'$ induces $v \leadsto v_1' \cup \ldots \cup v_k'$ to indicate that the limit of the irreducible component corresponding to $v$ is the union of the irreducible components corresponding to $v_1', \ldots, v_k'$ (See Figure \ref{F:GraphSpecialization}). More precisely, if $\mathcal{C} \rightarrow \mathcal{D}elta$ is a one-parameter family witnessing the specialization $G \leadsto G'$, then (possibly after a finite base-change) we may identify the irreducible components of the geometric generic fiber with the irreducible components of $\mathcal{C}$. In particular, $v$ corresponds to an irreducible component $\mathcal{C}_{1} \subset \mathcal{C}$, and the limit of $v$ is simply the collection of irreducible components in the special fiber of $\mathcal{C}_{1}$. Now we come to the key definition of this paper. \begin{definition}[Extremal assignment over $\overline{\mathcal{M}}_{g,n}$]\label{D:Assignment} Let $G_1, \ldots, G_N$ be an enumeration of dual graphs of $n$-pointed stable curves of genus $g$, up to isomorphism, and consider an assignment $$G_{i} \rightarrow \mathcal{Z}(G_i) \subset G_i, \text{ for each $i=1, \ldots, N$,}$$ where $\mathcal{Z}(G_i)$ is a subset of the vertices of $G_i$. We say that $\mathcal{Z}$ is an \emph{extremal assignment over $ \overline{\mathcal{M}}_{g,n}$} if it satisfies the following three axioms. \begin{itemize} \item[(1)] For any dual graph $G$, $\mathcal{Z}(G) \neq G$. \item[(2)] For any dual graph $G$, $\mathcal{Z}(G)$ is invariant under $\text{Aut\,}(G)$. \item[(3)] For every specialization $G \leadsto G'$, inducing $v \leadsto v_1' \cup \ldots \cup v_k'$, we have $$v \in \mathcal{Z}(G) \iff v_1', \ldots, v_k' \in \mathcal{Z}(G').$$ \end{itemize} \end{definition} \begin{remark} Axiom 2 in Definition \ref{D:Assignment} implies that an extremal assignment $\mathcal{Z}$ determines, for each stable curve $(C,\{p_i\}_{i=1}^{n})$, a certain subcurve $\mathcal{Z}(C) \subset C.$ Indeed, we may chose an isomorphism of the dual graph of $(C,\{p_i\}_{i=1}^{n})$ with $G_{i}$ for some $i$, and then define $\mathcal{Z}(C)$ to be the collection of irreducible components of $C$ corresponding to $\mathcal{Z}(G_i)$ under this isomorphism. By axiom 2, $\mathcal{Z}(C)$ does not depend on the choice of isomorphism. 
\end{remark} Given an extremal assignment $\mathcal{Z}$, we say that a curve is \emph{$\mathcal{Z}$-stable} if it can be obtained from a Deligne-Mumford stable curve $(C, \{p_i\}_{i=1}^{n})$ by replacing each connected component of $\mathcal{Z}(C) \subset C$ by an isolated curve singularity whose contribution to the arithmetic genus is the same as the subcurve it replaces. We make this precise in Definitions \ref{D:Genus} and \ref{D:Zstable} below. \begin{definition}[Genus of a curve singularity]\label{D:Genus} Let $p \in C$ be a point on a curve, and let $\pi: \tilde{C} \rightarrow C$ denote the normalization of $C$ at $p$. The $\delta$-invariant $\delta(p)$ and the number of branches $m(p)$ are defined by the following formulae: \begin{align*} \delta(p)&:=\text{dim\,}_{k} (\pi_*\mathscr{O}_{\tilde{C}}/\mathscr{O}_{C}),\\ m(p)&:=|\pi^{-1}(p)|, \end{align*} and we define the \emph{genus} $g(p)$ by \begin{align*} g(p)&:=\delta(p)-m(p)+1. \end{align*} We say that a singularity $p \in C$ has \emph{type} $(g,m)$ if $g(p)=g$ and $m(p)=m$. \end{definition} \begin{definition}[$\mathcal{Z}$-stability]\label{D:Zstable} A smoothable $n$-pointed curve $(C, \{p_i\}_{i=1}^{n})$ is \emph{$\mathcal{Z}$-stable} if there exists a stable curve $(C^{s}, \{p_i^s\}_{i=1}^{n})$ and a morphism $\phi: (C^{s}, \{p_i^s\}_{i=1}^{n}) \rightarrow (C, \{p_i\}_{i=1}^{n})$ satisfying \begin{enumerate} \item $\phi$ is surjective with connected fibers. \item $\phi$ maps $C^{s}-\mathcal{Z}(C^{s})$ isomorphically onto its image. \item If $Z_{1}, \ldots, Z_{k}$ are the connected components of $\mathcal{Z}(C^{s})$, then $p_i:=\phi(Z_i) \in C$ satisfies $g(p_i)=p_a(Z_i)$ and $m(p_i)=|Z_i \cap Z_i^c|$. \end{enumerate} \end{definition} For any extremal assignment $\mathcal{Z}$, we define $\overline{\mathcal{M}}_{g,n}(\mathcal{Z}) \subset \mathcal{V}_{g,n}$ to be the set of points corresponding to $\mathcal{Z}$-stable curves. The following theorem is our main result. \begin{theorem}[Classification of Stable Modular Compactifications]\label{T:Main} \begin{itemize} \item[] \item[(1)] If $\mathcal{Z}$ is an extremal assignment over $\overline{\mathcal{M}}_{g,n}$, then $\overline{\mathcal{M}}_{g,n}(\mathcal{Z}) \subset \mathcal{V}_{g,n}$ is a stable modular compactification of $\mathcal{M}_{g,n}$. \item[(2)] If $\mathcal{X} \subset \mathcal{V}_{g,n}$ is a stable modular compactification, then $\mathcal{X}=\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ for some extremal assignment $\mathcal{Z}$. \end{itemize} \end{theorem} \begin{proof} See Theorems \ref{T:Construction} and \ref{T:Classification}. \end{proof} Since the definition of an extremal assignment is purely combinatorial, one can (in principle) write down all extremal assignments over $\overline{\mathcal{M}}_{g,n}$ for any fixed $g$ and $n$. Thus, we obtain a complete classification of the collection of stable modular compactifications of $\mathcal{M}_{g,n}$. Before proceeding, let us consider some examples of extremal assignments, and describe the corresponding stability conditions. \begin{example}[Destabilizing elliptic tails]\label{E:FirstAssignments} Consider the assignment defined by $$ \mathcal{Z}(C, \{p_i\}_{i=1}^{n})=\{ Z \subset C \,|\, p_a(Z)=1, |Z \cap Z^{c}|=1, Z \text{ is unmarked }\}. $$ If we call a subcurve $Z \subset C$ satisfying $p_a(Z)=1$ and $|Z \cap Z^{c}|=1$ an \emph{elliptic tail}, we may say that the assignment $\mathcal{Z}$ is defined by picking out all unmarked elliptic tails of $(C, \{p_i\}_{i=1}^{n})$.
This defines an extremal assignment over $\overline{\mathcal{M}}_{g,n}$ provided that $g>2$ or $n>1$. The case $(g,n)=(2,0)$ is forbidden because an unmarked genus two curve may be the union of two elliptic tails. Assuming $(g,n) \neq (2,0)$, axioms 1 and 2 are obvious. For axiom 3, simply note that if $v \in G$ corresponds to an elliptic tail, then any specialization of dual graphs $G \leadsto G'$ necessarily induces a specialization $v \leadsto v'$, where $v'$ is also an elliptic tail. Thus, $v \in \mathcal{Z}(G) \iff v' \in \mathcal{Z}(G')$ as required. Now let us consider the associated $\mathcal{Z}$-stability condition. By definition, an $n$-pointed curve $(C, \{p_i\}_{i=1}^{n})$ is $\mathcal{Z}$-stable if there exists a map from a stable curve $(C^s, \{p_i^s\}_{i=1}^{n}) \rightarrow (C, \{p_i\}_{i=1}^{n})$ which is an isomorphism away from the locus of elliptic tails, and contracts each elliptic tail of $C^{s}$ to a singularity of type $(1,1)$. It is elementary to check that the unique curve singularity of type $(1,1)$ is a cusp $(y^2-x^3)$. Thus, an $n$-pointed curve $(C, \{p_i\}_{i=1}^{n})$ is \emph{$\mathcal{Z}$-stable} for this assignment iff it satisfies: \begin{itemize} \item[(1)]$C$ has only nodes and cusps as singularities. \item[(2)] The marked points $\{p_i\}_{i=1}^{n}$ are smooth and distinct. \item[(3)] Each rational component of $\tilde{C}$ has at least three distinguished points. \item[(4)] If $E \subset C$ is an unmarked arithmetic genus one subcurve, $|E \cap E^{c}| \geq 2$. \end{itemize} When $n=0$, this is precisely the definition of \emph{pseudostability} introduced in \cite{Schubert} and further studied in \cite{Hassett3}. \begin{comment} In these papers, it is shown that $$\overline{\mathcal{M}}_{g}(\mathcal{Z}) = \overline{\mathcal{M}}_{g}^{ps}= [\textbf{Hilb}_{4}^{s}\text{//}\text{PGL}(7g-7)],$$ where $\textbf{Hilb}^{4}_{s}$ is the stable locus in the Hilbert scheme of 4-canonically embedded curves. In addition, $\overline{\mathcal{M}}_{g}^{ps}$ is isomorphic to the log-canonical model of $\overline{\mathcal{M}}_{g}$ associated to the divisor $K_{\overline{\mathcal{M}}_{g}}+\alpha\mathcal{D}elta$ for $\alpha \in \mathbb{Q} \cap (7/10,9/11]$. \end{comment} \end{example} \begin{example} (Destabilizing rational tails) Consider the assignment defined by $$ \mathcal{Z}(C)=\{ Z \subset C \,|\, p_a(Z)=0, |Z \cap Z^{c}|=1, |\{p_i \in Z\}| \leq k \}. $$ If we call a subcurve $Z \subset C$ satisfying $p_a(Z)=0$ and $|Z \cap Z^{c}|=1$ a \emph{rational tail}, we may say that the assignment $\mathcal{Z}$ is defined by picking out all rational tails of $(C, \{p_i\}_{i=1}^{n})$ with $\leq k$ marked points. This defines an extremal assignment over $\overline{\mathcal{M}}_{g,n}$ provided that $g>0$ or $n>2k$. The case $g=0$ and $n \leq 2k$ is forbidden because such stable curves may be the union of two rational tails with $\leq k$ marked points. If $g>0$ or $n>2k$, axioms 1 and 2 are easily verified. Axiom 3 is also obvious, bearing in mind that we do not require a rational tail $Z \subset C$ to be irreducible. Now let us consider the associated $\mathcal{Z}$-stability condition. An $n$-pointed curve $(C, \{p_i\}_{i=1}^{n})$ is $\mathcal{Z}$-stable if there exists a map from a stable curve $(C^s, \{p_i^s\}_{i=1}^{n}) \rightarrow (C, \{p_i\}_{i=1}^{n})$ which is an isomorphism away from the locus of rational tails with $\leq k$ points, and contracts each such rational tail to a point of type (0,1) on $C$. 
It follows directly from the definition that the unique `singularity' of type (0,1) is a smooth point. Thus, an $n$-pointed curve $(C, \{p_i\}_{i=1}^{n})$ is $\mathcal{Z}$-stable for this assignment iff it satisfies: \begin{itemize} \item[(1)]$C$ has only nodes as singularities. \item[(2)] The marked points $\{p_i\}_{i=1}^{n}$ are smooth, and up to $k$ points may coincide. \item[(3)] Each rational component of $\tilde{C}$ has at least three distinguished points. \item[(4)] If $Z \subset C$ is a rational tail, then $|\{p_i: p_i \in Z\}| >k$. \end{itemize} This is equivalent to the definition of $\mathcal{A}$-stability introduced in \cite{Hassett1} with symmetric weights $\mathcal{A}=\{1/k, \ldots, 1/k\}$. \end{example} \begin{example}\label{E:CrazyAssignment} (Destabilizing all unmarked components) Consider the assignment defined by $$ \mathcal{Z}(C, \{p_i\}_{i=1}^{n})=\{ Z \subset C \,|\, Z\text{ is unmarked\! }\}. $$ As long as $n\geq1$, this assignment clearly satisfies axioms 1-3 of Definition \ref{D:Assignment}. The corresponding $\mathcal{Z}$-stable curves have all manner of exotic singularities. In fact, for any pair of integers $(h,m)$, there exists $g\gg 0$ such that $n$-pointed stable curves of genus $g$ contain unmarked subcurves $Z \subset C$ satisfying $p_{a}(Z)=h$ and $|Z \cap Z^{c}|=m$. It follows that every smoothable curve singularity of type $(h,m)$ appears on a $\mathcal{Z}$-stable curve for $g\gg 0$. The corresponding moduli spaces $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ have no counterpart in the existing literature.\\ \end{example} \subsection{Consequences of main result}\label{S:Consequences} In this section, we describe several significant consequences of Theorem \ref{T:Main}. First, we will show that the number of extremal assignments over $\overline{\mathcal{M}}_{g,n}$ is a rapidly increasing function of both $g$ and $n$ by explaining how $\pi$-nef line-bundles on the universal curve $\pi:\mathcal{C} \rightarrow \overline{\mathcal{M}}_{g,n}$ induce extremal assignments. We deduce the existence of many new stability conditions which have never been described in the literature. Next, we explain why $\mathcal{Z}$-stability nevertheless fails to give an entirely satisfactory theory of stability conditions for curves. We will see, for example, that there is no $\mathcal{Z}$-stability condition picking out only curves with nodes $(y^2=x^2)$, cusps $(y^2=x^3)$, and tacnodes $(y^2=x^4)$, and indicate how a systematic study of semistable compactifications might remedy this deficiency. Finally, we will show that $\mathcal{Z}$-stability \emph{does} give a satisfactory theory of stability conditions in the case $g=0$. We will see that every modular compactification of $M_{0,n}$ must be stable, so our result actually gives a complete classification of modular compactifications of $M_{0,n}$. \subsubsection{Extremal assignments from $\pi$-nef line-bundles}\label{SS:NefAssignments} Let $\overline{\mathcal{M}}_{g,n}$ denote the moduli stack of stable curves over an algebraically closed field of characteristic zero, and let $\pi:\mathcal{C} \rightarrow \overline{\mathcal{M}}_{g,n}$ denote the universal curve. The following lemma shows that numerically-nontrivial $\pi$-nef line-bundles on $\mathcal{C}$ induce extremal assignments. (In this context, to say that $\mathscr{L}$ is $\pi$-nef and numerically-nontrivial simply means that $\mathscr{L}$ has non-negative degree on every irreducible component of every fiber of $\pi$ and positive degree on the generic fiber.)
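As a first illustration, consider the line-bundle $\omega_{\pi}(\sigma_{1}+\ldots+\sigma_{n})$, where $\omega_{\pi}$ denotes the relative dualizing sheaf. For an irreducible component $Z$ of a stable curve $(C, \{p_i\}_{i=1}^{n})$, with normalization $\tilde{Z}$, one has $$\deg \omega_{C}(p_{1}+\ldots+p_{n})|_{Z} = 2g(\tilde{Z})-2+\#\{\text{distinguished points of } \tilde{Z}\},$$ and one checks that this quantity is positive on every irreducible component of every stable curve in the range $2g-2+n>0$. The assignment induced by the lemma below is therefore the empty assignment, and the associated compactification is $\overline{\mathcal{M}}_{g,n}$ itself; new compactifications can only arise from line-bundles which have degree zero on some components of some fibers.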
\begin{lemma}\label{L:NefAssignments} Let $\mathscr{L}$ be a $\pi$-nef, numerically non-trivial line-bundle on $\mathcal{C}$. Then $\mathscr{L}$ induces an extremal assignment by setting: $$ \mathcal{Z}(C,\{p_i\}_{i=1}^{n}):=\{Z \subset C \,|\, \deg(\mathscr{L}|_{Z})=0 \,\}, $$ for each stable curve $[C,\{p_i\}_{i=1}^{n}] \in \overline{\mathcal{M}}_{g,n}$. \end{lemma} \begin{proof} We must check that the assignment $\mathcal{Z}$ satisfies axioms 1-3 in Definition \ref{D:Assignment}. For axiom 1, observe that since $\mathscr{L}$ is $\pi$-nef and numerically non-trivial, $\mathscr{L}$ must have positive degree on some irreducible component of each geometric fiber of $\pi$. For axiom 2, recall that $\text{Pic}_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})$ is generated by line-bundles whose degree on any irreducible component of a fiber of $\pi$ depends only on the dual graph of the fiber \cite{AC}. For axiom 3, consider any specialization $G \leadsto G'$ induced by a one-parameter family of stable curves $(\mathcal{C}^s \rightarrow \Delta, \{\sigma_{i}\}_{i=1}^{n})$. We have a Cartesian diagram \[ \xymatrix{ \mathcal{C}^s \ar[d] \ar[r]^{f}&\mathcal{C} \ar[d]\\ \Delta \ar[r]& \overline{\mathcal{M}}_{g,n}, } \] and after a finite base-change, we may assume that the irreducible components of $\mathcal{C}^{s}$ are in bijective correspondence with the irreducible components of the geometric generic fiber, i.e. we have $\mathcal{C}^s \simeq \mathcal{C}_{1} \cup \ldots \cup \mathcal{C}_{k}$ with each $\mathcal{C}_{i} \rightarrow \Delta$ having smooth generic fiber. Now $f^*\mathscr{L}$ has degree zero on the generic fiber of $\mathcal{C}_{i} \rightarrow \Delta$ iff it has degree zero on every irreducible component of the special fiber. This is precisely the statement of axiom 3. \end{proof} Translated into the language of higher-dimensional geometry, this lemma says that every face of the relative cone of curves $\overline{N}_{1}^{+}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})$ gives rise to a stable modular compactification of $\mathcal{M}_{g,n}$. In Appendix A, we give an explicit definition of $\overline{N}_{1}^{+}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})$ as a closed polyhedral cone in $\text{Pic}_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})$, and describe the stability conditions corresponding to each extremal face in the cases $(g,n)=(2,0), (3,0), (2,1)$. In general, since $\overline{N}_{1}^{+}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})$ is a full polyhedral cone in a vector space of dimension $\rho(\mathcal{C} /\overline{\mathcal{M}}_{g,n})$, it is clear that the number of extremal faces of $\overline{N}_{1}^{+}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})$ (and hence the number of extremal assignments over $\overline{\mathcal{M}}_{g,n}$) is a rapidly increasing function of both $g$ and $n$. \subsubsection{Singularities arising in stable compactifications}\label{SS:StableGeometry} While Lemma \ref{L:NefAssignments} shows that there exist many stability conditions for curves, it does not provide much insight into the following natural question: Given a deformation-open class of curve singularities, is there a stability condition which picks out curves with precisely this class of singularities? We have already seen that the answer is yes if the class consists of nodes or nodes and cusps. In general, however, one cannot always expect an affirmative answer using only stability conditions on prestable curves.
Indeed, Corollaries \ref{C:1} and \ref{C:2} below show that the collection of stable modular compactifications of $\mathcal{M}_{g,n}$ is severely constrained by two features: the necessity of compactifyng the moduli of attaching data of a singularity (a local obstruction) and the presence of symmetry in dual graphs of stable curves (a global obstruction). Let us say that a given curve singularity \emph{arises in a modular compactification $\mathcal{X}$} if $\mathcal{X}$ contains a geometric point $[C, \{p_i\}_{i=1}^{n}] \in \mathcal{X}$ such that $C$ possesses this singularity. \begin{corollary}\label{C:1} Let $\mathcal{X}$ be a stable modular compactification of $\mathcal{M}_{g,n}$. If one singularity of type $(h,m)$ arises in $\mathcal{X}$, then every singularity of type $(h,m)$ arises in $\mathcal{X}$. \end{corollary} \begin{proof} We have $\mathcal{X}=\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ for some extremal assignment $\mathcal{Z}$. If a singularity of type $(h,m)$ appears on some $\mathcal{Z}$-stable curve, there exists a stable curve $(C^{s}, \{p_i^s\}_{i=1}^{n})$ and a connected component $Z \subset \mathcal{Z}(C^{s})$ such that $p_a(Z)=h$ and $|Z \cap Z^{c}|=m$. Since the definition of $\mathcal{Z}$-stability allows $Z$ to be replaced by any singularity of type $(h,m)$, it follows that \emph{all} singularities of type $(h,m)$ arise in $\mathcal{X}$. \begin{comment} To see that all singularities of type $(h',m') \leq (h,m)$ arise in $\mathcal{X}$, it suffices to exhibit a stable curve $D^{s}$ with a connected component $Z' \subset \mathcal{Z}(D^{s})$ such that $p_a(Z')=h'$ and $|Z' \cap (Z')^{c}|=m'.$ We produce $D^{s}$ in two steps: first specialize $C^{s}$ so that $Z$ splits off a subcurve $Z'$ with $p_a(Z')=h'$ and $|Z' \cap (Z')^{c}|=m',$ then smooth the nodes external to $Z'$. Applying axiom (3) of Definition \ref{D:Assignment} to this pair of specializations, we conclude that $\mathcal{Z}(D^{s})=Z'$. \end{comment} \end{proof} \begin{figure} \caption{Two methods for compactifying the $k^*$-moduli of attaching data of the tacnode. In a stable modular compactification, one must compactify by degenerating to a cusp with a transverse branch. In a semistable modular compactification, one may compactify by allowing the normalization to sprout additional rational components.} \label{F:AttachingData} \end{figure} This corollary precludes the existence of a stability condition on prestable curves picking out precisely nodes, cusps, and tacnodes. Indeed, one easily checks that the spatial singularity obtained by passing a smooth branch through the tangent plane of a cusp, i.e. $$\hat{\mathscr{O}}_{C,p} \simeq k[[x,y,z]]/((x,y) \cap (z,y^2-x^3)),$$ has the same genus (1) and number of branches (2) as the tacnode. Thus, any stability condition on prestable curves which allows tacnodes must allow this spatial singularity as well. The geometric phenomenon responsible for this implication is the existence of moduli of `attaching data' for a tacnode. Unlike nodes or cusps, the isomorphism class of a tacnodal curve $C$ is not uniquely determined by its pointed normalization $(\tilde{C},q_1,q_2)$; one must also specify an element $\lambda \in \text{Isom}(T_{q_1}\tilde{C}, T_{q_2}\tilde{C}) \simeq k^{*}$. As $\lambda \rightarrow 0$ or $\infty$, the tacnodal curve degenerates into a cusp with a transverse branch (see Figure \ref{F:AttachingData}). 
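It is instructive to verify the claimed invariants directly from Definition \ref{D:Genus} (over an algebraically closed field of characteristic zero, say), using the standard formula $\delta(p) = \delta_{1} + \delta_{2} + (B_{1} \cdot B_{2})$ for a singularity whose two branches $B_{1}, B_{2}$ have $\delta$-invariants $\delta_{1}, \delta_{2}$ and local intersection multiplicity $(B_{1} \cdot B_{2})$. For the tacnode, both branches are smooth and meet with intersection multiplicity two, so $$\delta(p)=0+0+2=2, \qquad m(p)=2, \qquad g(p)=\delta(p)-m(p)+1=1.$$ For the cusp with a transverse branch, the cuspidal branch has $\delta$-invariant one, the smooth branch has $\delta$-invariant zero, and the two branches meet with intersection multiplicity one, so again $\delta(p)=2$, $m(p)=2$, and $g(p)=1$. This confirms that both singularities have type $(1,2)$.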
Note, however, that in a semistable compactification, one may compactify moduli of attaching data by sprouting additional rational components (see Figure \ref{F:AttachingData}). Indeed, this alternate method of compactification is used in \cite{Smyth1} to construct strictly semistable modular compactifications of $\mathcal{M}_{1,n}$ for every deformation-open class of genus-one Gorenstein singularities. Since one cannot use stability conditions on prestable curves to pick out arbitrary deformation-open classes of singularities, let us consider the weaker question: Does every curve singularity appear on some stable modular compactification of $\mathcal{M}_{g,n}$ for suitable $g$ and $n$? Surprisingly, the answer is `yes' if $n = 1$, but `no' if $n=0$. In fact, the following corollary shows that a ramphoid cusp $(y^2=x^5)$ can never arise in a stable modular compactification of $\mathcal{M}_{g}$. \begin{corollary}\label{C:2} \begin{enumerate} \item[] \item Every smoothable curve singularity arises in some stable modular compactification of $\mathcal{M}_{g,1}$ for $g>>0$. \begin{comment} \item Every singularity of type $(h,m)$ arises in some stable modular compactification of $\mathcal{M}_{h,n}$ for $n>>0$. \end{comment} \item No singularity of genus $\geq 2$ arises in any stable modular compactification of $\mathcal{M}_{g}$. \end{enumerate} \end{corollary} \begin{proof} For (1), see Example \ref{E:CrazyAssignment}. For (2), it suffices to prove that an extremal assignment $\mathcal{Z}$ over $\overline{\mathcal{M}}_{g}$ can never pick out a genus two subcurve. If $C^{s}$ is an unmarked stable curve and $Z \subset \mathcal{Z}(C^{s})$ is a connected component of genus two, we obtain a contradiction as follows: first, specialize $C^{s}$ so that $Z$ splits off an elliptic bridge. Second, smooth all nodes external to the elliptic bridge. Finally, specialize to a ring of elliptic bridges (See Figure \ref{F:NoGenus2Tail}). Applying axiom 3 of Definition \ref{D:Assignment} to this sequence of specializations, we conclude that if $D^{s}$ is a ring of $g-1$ elliptic bridges, then $\mathcal{Z}(D^{s}) \subset D^{s}$ is non-empty. If $G$ is the dual graph of $D^{s}$, then $\text{Aut\,}(G)$ acts transitively on the vertices of $G$, so axiom 2 implies that $\mathcal{Z}(C^{s})=C^{s}$. But this contradicts axiom 1. We conclude that an extremal assignment over $\overline{\mathcal{M}}_{g}$ can never pick out a genus two subcurve. \end{proof} \begin{figure} \caption{Sequence of specializations showing that any extremal assignment which picks out a genus two tail must also pick out an elliptic bridge within a ring of elliptic bridges.} \label{F:NoGenus2Tail} \end{figure} \subsubsection{Modular Compactifications of $M_{0,n}$}\label{SS:GenusZero} In the previous section, we saw that $\mathcal{Z}$-stability does not give an entirely satisfactory theory of stability conditions for curves. In this section, we will see that $\mathcal{Z}$-stability \emph{does} give a satisfactory theory of stability conditions when $g=0$. In particular, we will see that every modular compactification of $M_{0,n}$ is automatically stable, so Theorem \ref{T:Main} actually classifies \emph{all} modular compactifications of $M_{0,n}$. The starting point of our analysis is the following classification of genus zero singularities. It turns out that any genus zero singularity with $m$ branches is analytically isomorphic to the union of $m$ coordinate axes in $\mathbb{A}^{m}$, and we call such singularities \emph{rational $m$-fold points}. 
\begin{definition}[Rational $m$-fold point] Let $C$ be a curve over an algebraically closed field $k$. We say that $p \in C$ is a \emph{rational $m$-fold point} if $$\hat{O}_{C,p} \simeq k[[x_1, \ldots, x_m]]/(x_ix_j: 1 \leq i<j \leq m). $$ \end{definition} \begin{lemma}\label{P:GenusZeroSingularities} \begin{itemize} \item[] \end{itemize} \begin{enumerate} \item If $p \in C$ is a singularity with genus zero and $m$ branches, then $p$ is a rational $m$-fold point. \item The rational $m$-fold point is smoothable. \end{enumerate} \end{lemma} \begin{proof} (1) is elementary. For (2), one can realize a smoothing of the rational $m$-fold point by taking a pencil of hyperplane sections of the cone over the rational normal curve of degree $m$. Both statements are proved in \cite{Stevens}. \end{proof} \begin{comment} \begin{corollary}\label{C:Treelike} Suppose that $C$ is a reduced curve of arithmetic genus zero. Then any singular point $p \in C$ is a rational $m$-fold point (for some integer $m$), and the normalization of $C$ at $p$ has $m$ connected components. \end{corollary} \begin{proof} Suppose $p \in C$ is a singular point with $m$ branches, and let $C_{1}, \ldots, C_{l}$ be the connected components of the normalization of $C$ at $p$. Then we have $$ 0=p_{a}(C)=\sum_{i=1}^{l}p_{a}(C_i)+\delta(p)-l+1. $$ Since $p_{a}(C_i) \geq 0$, $l \leq m$, and $\delta(p) \geq m-1$, we conclude that $l=m$ and $\delta(p)=m-1$. \end{proof} \end{comment} \begin{corollary}\label{C:GenusZeroSmoothability} Every reduced connected curve of arithmetic genus zero is smoothable, i.e. $\mathcal{U}_{0,n} = \mathcal{V}_{0,n}$. \end{corollary} \begin{proof} A complete reduced curve is smoothable iff its singularities are smoothable (I.6.10, \cite{Kollar}). The only singularities on a reduced curve of arithmetic genus zero curve are rational $m$-fold points, so all such curves are smoothable. \end{proof} Next, we study automorphisms of genus zero singular curves. If $(C, \{p_i\}_{i=1}^{n})$ is an $n$-pointed curve of arithmetic genus zero over an algebraically closed field $k$, it is convenient to define $$\text{Aut\,}^{0}_k(C, \{p_i\}_{i=1}^{n}) \subset \text{Aut\,}_k(C, \{p_i\}_{i=1}^{n})$$ to be the subgroup of the automorphisms which fix each component and each singular point of $C$. Then we have \begin{lemma}\label{L:Aut} Let $(C, \{p_i\}_{i=1}^{n})$ be an $n$-pointed curve of arithmetic genus zero, and let $\pi: \tilde{C} \rightarrow C$ be the normalization of $C$. Let $\{\tilde{p}_i\}_{i=1}^{n}$ be the points of $\tilde{C}$ lying above $\{p_i\}_{i=1}^{n}$, and let $\{\tilde{q}_i\}_{i=1}^{m}$ be the points lying above the singular locus of $C$, and consider $(\tilde{C}, \{\tilde{p}_i\}_{i=1}^{n}, \{\tilde{q}_i\}_{i=1}^{m})$ as an $n+m$ pointed curve. Then the natural map $$ \text{Aut\,}^0_k(C, \{p_i\}_{i=1}^{n}) \hookrightarrow \text{Aut\,}^0_k(\tilde{C}, \{\tilde{p}_i\}_{i=1}^{n}, \{\tilde{q}_i\}_{i=1}^{m}) $$ is an isomorphism. \end{lemma} \begin{proof} Clearly, an automorphism $\phi \in \text{Aut\,}^0_k(C, \{p_i\}_{i=1}^{n})$ induces an automorphism $\tilde{\phi} \in \text{Aut\,}^0_k(\tilde{C}, \{\tilde{p}_i\}_{i=1}^{n}, \{\tilde{q}_i\}_{i=1}^{m})$. Conversely, an automorphism $\tilde{\phi} \in \text{Aut\,}^0_k(\tilde{C}, \{\tilde{p}_i\}_{i=1}^{n}, \{\tilde{q}_i\}_{i=1}^{m})$ descends to an automorphism of $(C, \{p_i\}_{i=1}^{n})$ iff the natural map $$ \tilde{\phi}^*:\mathscr{O}_{\tilde{C}} \simeq \mathscr{O}_{\tilde{C}} $$ preserves the subsheaf of functions pulled-back from $C$, i.e. 
if $\phi^*(\pi^*\mathscr{O}_{C})=\pi^*\mathscr{O}_{C}$. Since the only singularities of $C$ are rational $m$-fold points, $\pi^*\mathscr{O}_{C} \subset \mathscr{O}_{\tilde{C}}$ is simply the $k$-subalgebra generated by all functions vanishing at $\{\tilde{q}_i\}_{i=1}^{m}$, and this is clearly preserved. \end{proof} \begin{corollary}\label{C:GenusZeroStability} Every modular compactification of $M_{0,n}$ is stable. \end{corollary} \begin{proof} Let $\mathcal{X}$ be a modular compactification of $M_{0,n}$, and let $[C,\{p_i\}_{i=1}^{n}] \in \mathcal{X}$ be a geometric point over an algebraically closed field $k$. Since $\mathcal{X}$ is proper over $\mathcal{S}pec \mathbb{Z}$, the automorphism group $\text{Aut\,}_{k}(C,\{p_i\}_{i=1}^{n})$ must be proper over $k$. If $\tilde{C}$ contains an irreducible component with one or two distinguished points, then Lemma \ref{L:Aut} implies $\text{Aut\,}^0_{k}(C,\{p_i\}_{i=1}^{n})$ contains a factor which is isomorphic to $\text{Aut\,}_k(\mathbb{P}^{1},\infty)$ or $\text{Aut\,}_k(\mathbb{P}^{1},0,\infty)$, neither of which is proper. We conclude that each irreducible component of $\tilde{C}$ must have at least three distinguished points. \end{proof} \begin{comment} \begin{corollary} Every modular compactification of $\overline{M}_{0,n}$ is an algebraic space. \end{corollary} \begin{proof} By \cite{LMB}, it suffices to show that the stablizers of every geometric point are trivial. But, by the previous corollary, each irreducible component of $\tilde{C}$ has at least three distinguished points, which implies $\text{Aut\,}(C)$ has no $k$-points. Need some combinatorics associated to dual graphs. \end{proof} \end{comment} In light of these remarks, we obtain the following corollary of our main result. \begin{theorem}\label{T:GenusZero} \begin{itemize} \item[] \item[(1)] If $\mathcal{X} \subset \mathcal{U}_{0,n}$ is any open proper substack, then $\mathcal{X}=\overline{\mathcal{M}}_{0,n}(\mathcal{Z})$ for some extremal assignment $\mathcal{Z}$. \item[(2)] $\mathcal{X}$ is an algebraic space. \end{itemize} \end{theorem} \begin{proof} (1) follows from Corollary \ref{C:GenusZeroStability}, Corollary \ref{C:GenusZeroSmoothability}, and Theorem \ref{T:Main}. For (2), it suffices to show that if $[C, \{p_i\}_{i=1}^{n}] \in \overline{\mathcal{M}}zero(\mathcal{Z})$ is any geometric point, then $\text{Aut\,}_{k}(C, \{p_i\}_{i=1}^{n})$ is trivial. Since every component of $\tilde{C}$ has at least three distinguished points, we have $\text{Aut\,}_{k}^{0}(C,\{p_i\}_{i=1}^{n})=\{0\}$, so we only need to see that every automorphism of a prestable genus zero curve fixes the irreducible components and singular points. This is an elementary combinatorial consequence of the fact that every component of $(C,\{p_i\}_{i=1}^{n})$ has at least three distinguished points. \end{proof} \subsection{Outline of proof}\label{S:Outline} In this section, we give a detailed outline of the proof of Theorem \ref{T:Main}, which occupies sections 2-4 of this paper. In Section \ref{S:Preliminaries}, we establish several fundamental lemmas, which are used repeatedly throughout. In Section \ref{S:ExtendingFamilies}, we prove that a birational map between two generically-smooth families of curves over a normal base is automatically Stein (Lemma \ref{L:Normality}). We also prove that, after an alteration of the base, one can birationally dominate any family of prestable curves by a family of stable curves (Lemma \ref{L:ExtendingCurves}). 
Taken together, these lemmas allow us to analyze deformations and specializations of prestable curves by studying the deformations and specializations of the stable curves lying over them. In Section \ref{S:BirationalMaps}, we define a \emph{contraction morphism of curves} to be a surjective morphism with connected fibers, which contracts subcurves of genus $g$ to singularities of genus $g$. The motivation for this definition is Lemma \ref{L:BirationalBaseChange}, which says that a birational contraction $\mathcal{C}_{1} \rightarrow \mathcal{C}_{2}$ between two irreducible families of generically smooth curves induces a contraction of curves on each geometric fiber. Finally, in Section \ref{S:ZStability}, we define the stability condition associated to an extremal assignment $\mathcal{Z}$: An $n$-pointed curve is \emph{$\mathcal{Z}$-stable} if there exists a stable curve $(C^s, \{p_i^s\}_{i=1}^{n})$ and a contraction $\phi:(C^s, \{p_i^s\}_{i=1}^{n}) \rightarrow (C, \{p_i\}_{i=1}^{n})$ with $\text{Exc\,}(\phi)=\mathcal{Z}(C^s)$. An important consequence of the axioms for an extremal assignment is that the existence of a single contraction $\phi:(C^s, \{p_i^s\}_{i=1}^{n}) \rightarrow (C, \{p_i\}_{i=1}^{n})$ with $\text{Exc\,}(\phi)=\mathcal{Z}(C^s)$ implies that $\text{Exc\,}(\phi)=\mathcal{Z}(C^s)$ for \emph{any} contraction from a stable curve (Corollary \ref{C:Independence}). In Section \ref{S:Openness}, we prove that the locus of $\mathcal{Z}$-stable curves is open in $\mathcal{V}_{g,n}$, the main component of the moduli stack of all curves. Given a generically-smooth family of curves $(\mathcal{C} \rightarrow T, \{\sigma_{i}\}_{i=1}^{n})$ over an irreducible base $T$, we must show that the set \[ S:=\{t \in T \, | \, (\mathcal{C}_{\overline{t}}, \{\sigma_i(\overline{t})\}_{i=1}^{n})\text{ is $\mathcal{Z}$-stable} \} \] is open in $T$. It is sufficient to prove that $i^{-1}(S)$ is open after any proper surjective base-change $i:\tilde{T} \rightarrow T$. Thus, using the results of Section 2.1, we may assume there exists a stable curve over $T$ birationally dominating $\mathcal{C}$, i.e. we have a diagram \[ \xymatrix{ \mathcal{C}^{s} \ar[rr]^{\phi} \ar[dr]^{\pi^s} && \mathcal{C} \ar[dl]_{\pi}\\ &T \ar@/^1pc/[lu]^{\{\sigma^s_i\}_{i=1}^{n}} \ar@/_1pc/[ru]_{\{\sigma_{i}\}_{i=1}^{n}}& } \] By Section 2.2, the fibers of $\phi$ are contractions of curves. Thus, the fiber $\pi^{-1}(t)$ is $\mathcal{Z}$-stable if and only if $\text{Exc\,}(\phi_t)=\mathcal{Z}(C^{s}_t)$. Thus, it suffices to prove that $$\{t \in T \, |\, \text{Exc\,}(\phi_{\overline{t}})=\mathcal{Z}(C_{\overline{t}})\}$$ is open in $T$. This is an immediate consequence of axiom 3 in the definition of an extremal assignment. In Section \ref{S:Properness}, we prove that $\mathcal{Z}$-stable curves satisfy the unique limit property. To prove that $\mathcal{Z}$-stable limits exist, we use the classical stable reduction theorem and Artin's criterion for the contractibility of 1-cycles on a surface. Given a family of smooth curves over the punctured disc $\mathcal{D}elta^*$, we may complete it to a stable curve $\mathcal{C}^{s} \rightarrow \mathcal{D}elta$. Using Artin's criterion, we construct a birational morphism $\phi:\mathcal{C}^{s} \rightarrow \mathcal{C}$ with $\text{Exc\,}(\phi)=\mathcal{Z}(C^{s})$, where $C^{s} \subset \mathcal{C}^{s}$ is the special fiber. The restriction of $\phi$ to the special fiber induces a contraction of curves $\phi_0:C^{s} \rightarrow C$ with $\text{Exc\,}(\phi_0)=\mathcal{Z}(C^{s})$. 
Thus, $C$ is the desired $\mathcal{Z}$-stable limit. To prove that $\mathcal{Z}$-stable limits are unique, we show that if $\mathcal{C}_{1} \rightarrow \mathcal{D}elta$ and $\mathcal{C}_{2} \rightarrow \mathcal{D}elta$ are two $\mathcal{Z}$-stable families with smooth isomorphic generic fiber, then there exists a stable curve $\mathcal{C}^{s} \rightarrow \mathcal{D}elta$ and birational maps \[ \xymatrix{ &\mathcal{C} \ar[dr]^{\phi_2} \ar[ld]_{\phi_1}&\\ \mathcal{C}_1&&\mathcal{C}_2\\ } \] Since $\phi_1$ and $\phi_2$ induce contraction morphisms on the special fiber, the hypothesis that $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ are $\mathcal{Z}$-stable implies that $\text{Exc\,}(\phi_1)=\mathcal{Z}(C^{s})$ and $\text{Exc\,}(\phi_2)=\mathcal{Z}(C^s)$. In particular, $\text{Exc\,}(\phi_1)=\text{Exc\,}(\phi_2)$. Since $\phi_1$ and $\phi_2$ are Stein morphisms, we conclude that the rational map $\mathcal{C}_1 \dashrightarrow \mathcal{C}_2$ extends to an isomorphism. Section \ref{S:Classification} is devoted to the proof that any stable modular compactification $\mathcal{X} \subset \mathcal{V}_{g,n}$ takes the form $\mathcal{X}=\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ for some extremal assignment $\mathcal{Z}$ over $\overline{\mathcal{M}}_{g,n}$. Given a stable modular compactification $\mathcal{X} \subset \overline{\mathcal{M}}_{g,n}(\mathcal{Z})$, Lemma \ref{L:Diagram} produces a diagram \[ \xymatrix{ \mathcal{C}^{s} \ar[dr]^{\pi^{s}} \ar[rr]^{\phi}&&\mathcal{C} \ar[dl]_{\pi}\\ &T \ar[dl]_{p} \ar[dr]^{q} \ar@/_1pc/[ru]_{\{\sigma_{i}\}_{i=1}^{n}} \ar@/^1pc/[lu]^{\{\sigma_i^{s} \}_{i=1}^{n}}&\\ \overline{\mathcal{M}}_{g,n}&\mathcal{U} \ar@{^{(}->}[r] \ar@{_{(}->}[l]&\mathcal{X}\\ } \] satisfying \begin{itemize} \item[(0)] $\mathcal{U} \subset \mathcal{M}_{g,n}$ is an open dense substack, \item[(1)] $T$ is a normal scheme, \item[(2)] $p$ and $q$ are representable proper dominant generically-\'{e}tale morphisms, \item[(3)] $\pi^s$ and $\pi$ are the families induced by $p$ and $q$ respectively, \item[(4)] $\phi$ is a birational morphism. \end{itemize} For any graph $G$, set $T_{G}:=\mathcal{M}_{G} \times_{\overline{\mathcal{M}}_{g,n}} T$, i.e. $T_{G}$ is the locally-closed subscheme over which the fibers of $\pi^{s}$ have dual graph isomorphic to $G$. In addition, for any $t \in T$, let $G_{t}$ denote the dual graph of the fiber $(\pi^{s})^{-1}(t)$. We wish to associate to $\mathcal{X}$ an extremal assignment $\mathcal{Z}$ by setting \[ \mathcal{Z}(G):=i(\text{Exc\,}(\phi_t)) \subset G, \] for some choice of $t \in T_{G}$ and some choice of isomorphism $i:G_{t} \simeq G$. The key point is to show that the subgraph $\mathcal{Z}(G) \subset G$ does not depend on these choices (Proposition \ref{P:ExceptionalLocus}). We then show that $\mathcal{Z}$ satisfies axioms 1-3 in Definition \ref{D:Assignment}. Axiom 1 is an immediate consequence of the fact that $\phi$ cannot contract an entire fiber of $\pi^s$. Axiom 2 is forced by the separatedness of $\mathcal{X}$. For axiom 3, consider a one-parameter family of stable curves $(\mathcal{C}^{s} \rightarrow \mathcal{D}elta, \{\sigma_{i}\}_{i=1}^{n}s)$ inducing a specialization of dual graphs $G \leadsto G'$. 
Since $T \rightarrow \overline{\mathcal{M}}_{g,n}$ is proper, we may lift the natural map $\mathcal{D}elta \rightarrow \overline{\mathcal{M}}_{g,n}$ to $T$, and consider the induced birational morphism of families over $\mathcal{D}elta$: \[ \xymatrix{ \mathcal{C}^{s} \ar[dr] \ar[rr]^{\phi} && \mathcal{C} \ar[dl]\\ &\mathcal{D}elta&\\ } \] After a finite base-change, we may assume that $\mathcal{C}^{s}=\mathcal{C}_{1} \cup \ldots \cup \mathcal{C}_{m}$, where each $\mathcal{C}_{i} \rightarrow \mathcal{D}elta$ is a flat family of curves with smooth generic fiber, and axiom 3 follows from the fact that $$(\mathcal{C}_{i})_{\bar{\eta}} \in \text{Exc\,}(\phi_{\overline{\eta}}) \iff (\mathcal{C}_{i})_{0} \in \text{Exc\,}(\phi_0).$$ Once we have established that $\mathcal{Z}$ is a well-defined extremal assignment, the fact that $\phi$ induces a contraction of curves over each geometric point $t \in T$ implies that each fiber $\pi^{-1}(t)$ is $\mathcal{Z}$-stable for this assignment. Since $T \rightarrow \mathcal{X}$ is surjective, we conclude that each geometric point of $\mathcal{X}$ corresponds to a $\mathcal{Z}$-stable curve. Thus, the open immersion $\mathcal{X} \hookrightarrow \mathcal{V}_{g,n}$ factors through $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$. The induced map $\mathcal{X} \hookrightarrow \overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ is proper and dominant, so $\mathcal{X}=\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ as desired. \subsection{Notation}\label{S:Notation} The following notation will be in force throughout: An \textit{$n$-pointed curve} consists of a pair $(C, \{p_i\}_{i=1}^{n})$, where $C$ is a reduced connected complete one-dimensional scheme of finite-type over an algebraically closed field, and $\{p_i\}_{i=1}^{n}$ is an ordered set of $n$ marked points of $C$. The marked points need not be smooth nor distinct. We say that a point on $C$ is \textit{distinguished} if it is marked or singular, and that a point on the normalization $\tilde{C}$ is \textit{distinguished} if it lies above a distinguished point of $C$. A curve $(C,\{p_i\}_{i=1}^{n})$ is \textit{prestable} (resp. \textit{presemistable}) if every rational component of $\tilde{C}$ has at least three (resp. two) distinguished points. A curve $(C, \{p_i\}_{i=1}^{n})$ is \emph{smooth} if $C$ is smooth and the points $\{p_i\}_{i=1}^{n}$ are distinct. A curve $(C,\{p_i\}_{i=1}^{n})$ is \textit{nodal} if the only singularities of $C$ are ordinary nodes and the marked points of $C$ are smooth and distinct. We say that $(C,\{p_i\}_{i=1}^{n})$ is \textit{stable} (resp. \textit{semistable}) if $(C,\{p_i\}_{i=1}^{n})$ is nodal and prestable (resp. presemistable). All these definitions extend to general bases in the usual way: Given a scheme $T$, an \textit{$n$-pointed curve over $T$} consists of a flat proper finitely-presented morphism $\pi: \mathcal{C} \rightarrow T$, together with a collection of sections $\{\sigma_{i}\}_{i=1}^{n}$, such that the geometric fibers are $n$-pointed curves. We say that a curve over $T$ is \textit{prestable}, \textit{presemistable}, \textit{nodal}, \textit{stable}, or \textit{semistable} if the corresponding conditions hold on geometric fibers. Families are typically denoted in script, while geometric fibers are denoted in regular font. Note that we always allow the total space of a family of curves to be an algebraic space. 
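The following elementary examples, immediate from the definitions above, illustrate how these notions differ: a curve consisting of two smooth curves of positive genus meeting in a single tacnode is prestable, since its normalization contains no rational components, but it is not nodal and hence not stable; an irreducible curve of arithmetic genus at least two whose only singularity is an ordinary node is both nodal and prestable, hence stable; and a smooth rational curve with exactly two distinct marked points is semistable but not stable, since its normalization is a rational curve with only two distinguished points.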
Whenever we consider a morphism $\phi:\mathcal{C}_{1} \rightarrow \mathcal{C}_{2}$ between two families of $n$-pointed curves, say $(\mathcal{C}_{1} \rightarrow T, \{\sigma_{i}\}_{i=1}^{n})$ and $(\mathcal{C}_{2} \rightarrow T, \{\tau_{i}\}_{i=1}^{n})$, we always assume that $\phi \circ \sigma_{i}=\tau_{i}$ for all $i$. $\mathcal{D}elta$ will always denote the spectrum of a discrete valuation ring $R$ with algebraically closed residue field $k$ and field of fractions $K$. We make constant use of the fact that if $x,y \in X$ are two points of any noetherian scheme (or Deligne-Mumford stack) with $y \in \overline{\{ x\}}$, then there exists a map $\mathcal{D}elta \rightarrow X$ sending $\eta \rightarrow x$, $0 \rightarrow y$. When we speak of a finite base-change $\mathcal{D}elta' \rightarrow \mathcal{D}elta$, we mean that $\mathcal{D}elta'$ is the spectrum of a discrete valuation ring $R' \supset R$ with field of fractions $K'$, where $K' \supset K$ is a finite separable extension. We use the notation \begin{align*} 0&:=\mathcal{S}pec k \rightarrow \mathcal{D}elta,\\ \eta&:=\mathcal{S}pec K \rightarrow \mathcal{D}elta,\\ \overline{\eta}&:=\mathcal{S}pec \overline{K} \rightarrow \mathcal{D}elta, \end{align*} for the closed point, generic point, and geometric generic point respectively. Thus, $C_0, \mathcal{C}_{\eta}, C_{\overline{\eta}}\,$ and $C'_0, \mathcal{C}'_{\eta}, C'_{\overline{\eta}}\,$ denote the special fiber, generic fiber, and geometric generic fibers of $\mathcal{C} \rightarrow \mathcal{D}elta$ and $\mathcal{C}' \rightarrow \mathcal{D}elta$ respectively. Sometimes we omit the subscript `0', and denote the special fibers of $\mathcal{C} \rightarrow \mathcal{D}elta$ or $\mathcal{C}' \rightarrow \mathcal{D}elta$ by $C$ or $C'$. Also, we let $\eta_{\mathcal{X}}$ denote the generic point of any irreducible stack or scheme $\mathcal{X}$. $\overline{\mathcal{M}}_{g,n}$ will denote the moduli stack (over $\mathcal{S}pec \mathbb{Z}$) of $n$-pointed stable curves of genus $g$. Recall that $\overline{\mathcal{M}}_{g,n}$ admits a stratification by topological type, i.e. $$ \overline{\mathcal{M}}_{g,n} = \coprod_{G} \mathcal{M}_{G}, $$ where the union runs over all isomorphism classes of dual graphs of $n$-pointed stable curves of genus $g$, and $\mathcal{M}_{G} \subset \overline{\mathcal{M}}_{g,n}$ is the locally-closed substack parametrizing stable curves whose dual graph is isomorphic to $G$. While the language of stacks is employed throughout, everything we do is essentially topological. We use little more than the definition of the Zariski topology for an algebraic stack, and the various valuative criteria for specialization and properness, for which we refer the reader to \cite{LMB}.\\ \textbf{Acknowledgements.} Joe Harris offered countless suggestions and insights throughout the course of this project, and it is a pleasure to acknowledge his great influence. I would also like to thank Jarod Alper, Maksym Fedorchuk, Jack Hall, Brendan Hassett, Sean Keel, Matthew Simpson, and Fred van der Wyck, for many helpful conversations, by turns mathematical, whimsical, or therapeutic. I am especially indebted to Fred van der Wyck, whose work on moduli of crimping data hovers in the background (carefully hidden!) of several parts of this paper. \section{Preliminaries on $\mathcal{Z}$-stability} \label{S:Preliminaries} \subsection{Extending families of prestable curves}\label{S:ExtendingFamilies} In this section, we present two key lemmas, which will be used repeatedly. 
Lemma \ref{L:Normality} says that a birational map between two generically-smooth families of curves over a normal base is automatically Stein. Lemma \ref{L:ExtendingCurves} says that, after an alteration of the base, one can dominate any family of prestable curves by a family of stable curves. (Recall that an alteration is proper, surjective, generically-\'{e}tale morphism.) Taken together, these two lemmas allow us to reduce questions about families of prestable curves to questions about stable curves. \begin{lemma}[Normality of generically-smooth families of curves]\label{L:Normality} \begin{enumerate} \item Suppose that $S$ is an irreducible, normal, noetherian scheme, and that $\mathcal{C} \rightarrow S$ is a curve over $S$ with smooth generic fiber. Then $\mathcal{C}$ is normal. \item Suppose that $S$ is an irreducible, normal, noetherian scheme, and that $\mathcal{C}_{1} \rightarrow S$ and $\mathcal{C}_{2} \rightarrow S$ are curves over $S$ with smooth generic fiber. If $\phi:\mathcal{C}_{1} \rightarrow \mathcal{C}_{2}$ is a birational morphism over $S$, then $\phi_{*}\mathscr{O}_{\mathcal{C}_1}=\mathscr{O}_{\mathcal{C}_2}$. \end{enumerate} \end{lemma} \begin{proof} For (1), first observe that since $\mathcal{C} \rightarrow S$ is smooth in the generic fiber and has isolated singularities in every fiber, $\mathcal{C}$ must be regular in codimension one. Furthermore, since $\mathcal{C} \rightarrow S$ is a flat morphism with both base and fibers satisfying Serre's condition $S_2$, $\mathcal{C}$ satisfies $S_{2}$ as well \cite[6.4.2]{EGAIV}. By Serre's criterion, $\mathcal{C}$ is normal. For (2), $\phi: \mathcal{C}_{1} \rightarrow \mathcal{C}_{2}$ is a proper birational morphism of normal noetherian algebraic spaces. Since a finite birational morphism of normal algebraic spaces is an isomorphism \cite[4.7]{Knutson}, $\phi$ is equal to its own Stein factorization, i.e. $\phi_{*}\mathscr{O}_{\mathcal{C}_1}=\mathscr{O}_{\mathcal{C}_2}$. \end{proof} \begin{lemma}[Extending prestable curves to stable curves]\label{L:ExtendingCurves} \begin{itemize} \item[(1)] Let $T$ be an integral noetherian scheme, and $(\mathcal{C} \rightarrow T, \{\sigma_{i}\}_{i=1}^{n})$ an $n$-pointed curve over $T$ with smooth generic fiber. There exists an alteration $\tilde{T} \rightarrow T$, and a diagram \[ \xymatrix{ \mathcal{C}^{s} \ar@{-->}[rr]^{\phi} \ar[dr] && \tilde{\mathcal{C}} \ar[dl]\\ &\tilde{T} \ar@/^1pc/[lu]^{\{\sigma^s_i\}_{i=1}^{n}} \ar@/_1pc/[ru]_{\{\tilde{\sigma}_i\}_{i=1}^{n}}& } \] where $(\mathcal{C}^{s} \rightarrow \tilde{T}, \{\sigma^s_i\}_{i=1}^{n})$ is a stable curve, $(\tilde{\mathcal{C}} \rightarrow \tilde{T},\{\tilde{\sigma}_i\}_{i=1}^{n})$ is the $n$-pointed curve induced by base-change, and $\phi$ is a birational map over $\tilde{T}$. \item[(2)] We may choose the alteration $\tilde{T} \rightarrow T$ so that $\tilde{T}$ is normal, and the open subset $S \subset \tilde{T}$ defined by $$ S:=\{t \in \tilde{T} \,| \, \text{$\phi$ is regular in a neighborhood of the fiber $\mathcal{C}_{t}^{s}$} \} $$ contains every geometric point $t \in \tilde{T}$ such that the fiber $(\tilde{\mathcal{C}}_{t}, \{\tilde{\sigma}_i(t)\}_{i=1}^{n})$ is prestable. \end{itemize} \end{lemma} \begin{proof} The moduli stack $\overline{\mathcal{M}}_{g,n}$ admits a finite generically-etale cover by a scheme, say $M \rightarrow \overline{\mathcal{M}}_{g,n}$ (\cite{DeJong1}, 2.24). 
Let $$ U:=\{t \in T \,| \, \text{$(\mathcal{C}_{t}, \{\sigma_i(t)\}_{i=1}^{n})$ is stable}\}, $$ and consider the Cartesian diagram \[ \xymatrix{ U \times_{\overline{\mathcal{M}}_{g,n}} M \ar[d] \ar[r]&M \ar[d]\\ U \ar[r]& \overline{\mathcal{M}}_{g,n}\\ } \] Let $\tilde{U}$ be any irreducible component of $U \times_{\overline{\mathcal{M}}_{g,n}} M$ dominating $U$, and define $\tilde{T}$ to be the closure of the image of $\tilde{U}$ in $T \times_{\mathcal{S}pec \mathbb{Z}}M.$ Then $\tilde{T} \rightarrow T$ is an alteration satisfying the conclusion of (1). Next, we claim that we may choose the alteration $\tilde{T} \rightarrow T$ so that there exists a diagram \[ \xymatrix{ &\mathcal{C}^{n} \ar[rd]^{\phi_2} \ar[ld]_{\phi_1}&\\ \mathcal{C}^{s} \ar@{-->}[rr]^{\phi}\ar[dr]^{\pi_1} &&\tilde{\mathcal{C}} \ar[dl]_{\pi_2}\\ &\tilde{T} \ar@/^1pc/[lu]^{\{\sigma^s_i\}_{i=1}^{n}} \ar@/_1pc/[ru]_{\{\tilde{\sigma}_i\}_{i=1}^{n}}& } \] satisfying \begin{enumerate} \item $\tilde{T}$ is a normal noetherian scheme, \item $(\mathcal{C}^{n} \rightarrow \tilde{T}, \{\tau_{i}\}_{i=1}^{n})$ is a nodal curve, \item $\phi_1$ and $\phi_2$ are regular birational maps over $\tilde{T}$. \end{enumerate} To see this, start by taking $\tilde{T} \rightarrow T$ as in (1). After blowing-up $\tilde{T}$ further, we may assume that there exists a flat projective morphism $X \rightarrow \tilde{T}$ of relative dimension one, admitting regular birational maps to both $\mathcal{C}^{s}$ and $\tilde{\mathcal{C}}$. (Apply Chow's lemma and the flattening results of \cite{RG} to the graph of $\phi$.) Let $Z \subset X$ denote the pure codimension-one subscheme obtained by taking the strict transform of the sections $\{\tilde{\sigma}_i\}_{i=1}^{n}$ on $X$. By a theorem of de Jong (\cite{DeJong2}, 2.4), we may alter $(X \rightarrow \tilde{T}, Z)$ to a nodal curve, i.e. there exists an alteration $\tilde{T}' \rightarrow \tilde{T}$ with $\tilde{T}'$ a normal noetherian scheme, a nodal curve $(\mathcal{C}^{n} \rightarrow \tilde{T}', \{\tau_{i}\}_{i=1}^{n})$, and a commutative diagram \[ \xymatrix{ \mathcal{C}^{n} \ar[d] \ar[r]& X \ar[d]\\ \tilde{T}' \ar[r]& \tilde{T}\\ } \] such that the induced map $(\mathcal{C}^{n}, \cup_{i=1}^{n}\tau_{i}) \rightarrow (X \times_{\tilde{T}} \tilde{T}', Z \times_{\tilde{T}} \tilde{T}')$ is an isomorphism over the generic point of $\tilde{T}'$. In particular, $\mathcal{C}^{n}$ admits regular birational maps to both $(\mathcal{C}^{s} \times_{\tilde{T}} \tilde{T}')$ and $\tilde{\mathcal{C}} \times_{\tilde{T}} \tilde{T}'$, so $\tilde{T}' \rightarrow T$ gives the desired alteration. Now fix an alteration $\tilde{T} \rightarrow T$ and a diagram satisfying (1)-(3) above. Since $\pi_1$ is proper, the set \[ S:=\{t \in \tilde{T} \,| \, \text{$\phi$ is regular in a neighborhood of the fiber $\mathcal{C}_{t}^{s}$} \} \] is open in $\tilde{T}$. We must show that if $t \in \tilde{T}$ is any point such that the fiber $(\tilde{\mathcal{C}}_{t}, \{\tilde{\sigma}_i(t)\}_{i=1}^{n})$ is prestable, then $t \in S$. By Lemma \ref{L:Normality}, we have $(\phi_1)_*\mathscr{O}_{\mathcal{C}^{n}}=\mathscr{O}_{\mathcal{C}^s}$ and $(\phi_2)_*\mathscr{O}_{\mathcal{C}^{n}}=\mathscr{O}_{\tilde{\mathcal{C}}}$. Thus, it suffices to show that if $E \subset \mathcal{C}^{n}_t$ is any irreducible component contracted by $\phi_1$, then $E$ is also contracted by $\phi_2$.
By the uniqueness of stable reduction, we have $$ \text{Exc\,}(\phi_{1})_{t}=\{ E \subset \mathcal{C}^{n}_t |\text{ $E$ is smooth rational with one or two distinguished points }\} $$ Thus, if $E \subset \text{Exc\,}(\phi_{1})_{t}$ is not contracted by $\phi_{2}$, its image is a rational component of $(\tilde{C}_{t}, \{\tilde{\sigma}_i(t)\}_{i=1}^{n})$ with fewer than three distinguished points. This is a contradiction, since $(\tilde{C}_{t}, \{\tilde{\sigma}_i(t)\}_{i=1}^{n})$ is prestable. \end{proof} \subsection{Contractions of curves}\label{S:BirationalMaps} \begin{comment} Suppose that $\mathcal{C}^{s} \rightarrow \mathcal{D}elta$ is a generically-smooth family of stable curves over a disc, and that $Z$ is an arbitrary subcurve of the special fiber. In this section, we explain how to produce birational contractions \[ \xymatrix{ \mathcal{C}^{s} \ar[rr]^{\phi} \ar[dr]&&\mathcal{C} \ar[dl]\\ &\mathcal{D}elta& } \] such that $\text{Exc\,}(\phi)=Z$ and $\phi$ replaces each connected component of $Z$ by an isolated curve singularity in the special fiber of $\mathcal{C}$. \end{comment} \begin{comment} \begin{example} It is not difficult to see that \begin{align*} g(p)=0, m(p)=1 &\iff p \in C\text{ is smooth},\\ g(p)=0, m(p)=2 &\iff p \in C\text{ is an ordinary node }(y^2-x^2),\\ g(p)=1, m(p)=1 &\iff p \in C\text{ is an ordinary cusp }(y^2-x^3).\ \end{align*} By contrast, there are two analytic isomorphism classes of singularities with $g(p)=1, m(p)=2$, namely the tacnode $(y^2-x^4)$ and the spatial singularity obtained as an ordinary cusp with a smooth transverse branch $(y^2-x^3,xz,yz)$. This assertion is proved in Proposition \ref{P:Classification}. \end{example} \end{comment} In Lemma \ref{L:BirationalBaseChange}, we will see that birational contractions between generically-smooth families of curves have the effect of replacing arithmetic genus $g$ subcurves by isolated singularities of genus $g$. This motivates the following definition. \begin{definition}[Contraction of curves]\label{D:BirationalMap} If $\phi:C \rightarrow D$ is a morphism of curves, let $\text{Exc\,}(\phi)$ denote the union of those irreducible components $E \subset C$ which are contracted to a point $\phi(E) \in D$. We say that $\phi$ is a \emph{contraction} if it satisfies \begin{itemize} \item[(1)] $\phi$ is surjective with connected fibers, \item[(2)] $\phi$ is an isomorphism on $C - \text{Exc\,}(\phi)$, \item[(3)] If $Z$ is any connected component of $\text{Exc\,}(\phi)$, then the point $p:=\phi(Z) \in D$ satisfies $g(p)=p_a(Z)$ and $m(p)=|Z \cap Z^{c}|$, where $g(p)$ and $m(p)$ are the genus and number of branches of $p$, as in Definition \ref{D:Genus}. \end{itemize} \end{definition} \begin{remark}\label{R:Normalize} If $C$ is a nodal curve and $C \rightarrow D$ is a contraction, then we have a decomposition $$ C=\tilde{D} \cup Z_1 \cup \ldots \cup Z_k,$$ where $Z_1, \ldots, Z_k$ are the connected components of $\text{Exc\,}(\phi)$, and $\tilde{D}$ is the normalization of $D$ at $\phi(Z_1), \ldots, \phi(Z_k) \in C$. This is immediate from the fact that if $C$ is nodal, the points of $\overline{C \backslash Z_i}$ lying above $\phi(Z_i) \in D$ are smooth. \end{remark} \begin{example}[Contraction morphisms contracting an elliptic bridge]\label{E:Birat} Let $$C=C_{1} \cup E \cup C_{2}$$ be a nodal curve with an elliptic bridge (see Figure \ref{F:BirationalMap}). 
Then there exist contraction morphisms contracting $E$ to a tacnode $(y^2-x^4)$ or to a planar cusp with a smooth transverse branch $(xz, yz,y^2-x^3)$, since both these singularities have two branches and genus one. By contrast, the map contracting $E$ to an ordinary node is not a contraction because the genus of an ordinary node is zero. In fact, the map contracting $E$ to a node is the Stein factorization of the given contractions. \end{example} \begin{figure} \caption{Two contractions of curves, each contracting an elliptic bridge.} \label{F:BirationalMap} \end{figure} \begin{proposition}[Existence of Contractions]\label{P:Contractions} Let $\mathcal{D}elta$ be the spectrum of a discrete valuation ring, and $\pi:\mathcal{C}^{n} \rightarrow \mathcal{D}elta$ a generically-smooth, nodal curve over $\mathcal{D}elta$. If $Z \subsetneq C^{n}$ is a proper subcurve of the special fiber, then there exists a diagram \[ \xymatrix{ \mathcal{C}^{n} \ar[rr]^{\phi} \ar[dr]&&\mathcal{C} \ar[dl]\\ &\mathcal{D}elta& } \] such that \begin{itemize} \item[(1)] $\phi$ is proper, birational, $\phi_*\mathscr{O}_{\mathcal{C}^{n}}=\mathscr{O}_{\mathcal{C}}$, and $\text{Exc\,}(\phi)=Z$. \item[(2)] $\mathcal{C} \rightarrow \mathcal{D}elta$ is a flat family of geometrically reduced connected curves. \item[(3)] The restriction of $\phi$ to the special fiber induces a contraction of curves. \end{itemize} \end{proposition} \begin{proof} We claim that it is sufficient to produce a birational morphism $\phi:\mathcal{C}^{n} \rightarrow \mathcal{C}$ such that $\text{Exc\,}(\phi)=Z$. Indeed, after taking the Stein factorization, we may assume that $\phi$ satisfies (1). By Lemma \ref{L:Normality}, $\mathcal{C}^{n}$ is normal, so $\mathcal{C}$ is as well. In particular, the special fiber $C$ is Cohen-Macaulay and therefore has no embedded points. Since each component of $C$ is the birational image of an irreducible component of $C^{n}$, no component of $C$ can be generically non-reduced. It follows that the special fiber is reduced and connected. In addition, $\mathcal{C} \rightarrow \mathcal{D}elta$ is flat since the generic point of $\mathcal{C}$ maps to the generic point of $\mathcal{D}elta$. This shows that $\mathcal{C} \rightarrow \mathcal{D}elta$ satisfies (2). Finally, condition (3) is a consequence of the more general statement proved in Lemma \ref{L:BirationalBaseChange} below. It remains to show that there exists a birational morphism $\phi:\mathcal{C}^{n} \rightarrow \mathcal{C}$ with $\text{Exc\,}(\phi)=Z$. There exists a minimal resolution of singularities $p: \tilde{\mathcal{C}}^{n} \rightarrow \mathcal{C}^{n}$ such that $ \tilde{\mathcal{C}}^{n}\rightarrow \mathcal{D}elta$ is still a nodal curve \cite{Lipman}, and it is sufficient to produce a birational contraction $\phi: \tilde{\mathcal{C}}^{n} \rightarrow \mathcal{C}$ with $\text{Exc\,}(\phi)=p^{-1}(Z)$. Thus, we may assume that the total space $\mathcal{C}^{n}$ is regular to begin with. Now $\mathcal{C}^{n}$ is a regular algebraic space over an excellent Dedekind ring, so Artin's criterion gives a necessary and sufficient condition for the existence of a contraction: if $Z_1, \ldots, Z_k$ are the irreducible components of $Z$, then the intersection matrix $||(Z_i.Z_j)||$ must be negative-definite \cite[6.17]{Artin}. In fact, it is easy to see that any proper subcurve of the special fiber must have negative-definite intersection matrix.
To see this, let $C_1, \ldots, C_m$ be the irreducible components of the special fiber, and $C=\sum_{i=1}^{m}C_i$ the class of the fiber. Let $Z:=a_1C_1+\ldots+a_kC_k$ be an arbitrary cycle supported on the special fiber. We will prove by induction on $k$ that $Z^{2} \leq 0$, with equality iff $Z$ is a multiple of the entire fiber. Let us consider first the case when $Z$ is effective. After reordering, we may assume that $a_1 \geq a_2 \geq \ldots \geq a_k>0$. Since $C^{2}=C.Z=0$, we have \begin{align*} Z^{2}&=(Z-a_kC)^2=\left( \sum_{i=1}^{k-1}(a_i-a_k)C_i -a_{k}\sum_{i=k+1}^{m}C_i \right)^2 \\ &=\left(\sum_{i=1}^{k-1}(a_i-a_k)C_i\right)^2-2a_{k}\left(\sum_{i=1}^{k-1}(a_i-a_k)C_i\right).\left(\sum_{i=k+1}^{m}C_i\right)+a_{k}^2\left(\sum_{i=k+1}^{m}C_i\right)^2\\ &=\left(\sum_{i=1}^{k-1}(a_i-a_k)C_i\right)^2-2a_{k}\left(\sum_{i=1}^{k-1}(a_i-a_k)C_i\right).\left(\sum_{i=k+1}^{m}C_i\right)-a_{k}^2\left(\sum_{i=1}^{k}C_i \right).\left(\sum_{i=k+1}^{m}C_i\right) \end{align*} The latter two terms are obviously non-positive and by induction the first term is non-positive as well. Furthermore, $Z^{2}=0$ iff each term is zero, which evidently forces $a_1=\ldots=a_{k-1}=a_k$ and $k=m$. If $Z$ is not effective, then we may write $Z=Z_{1}-Z_{2}$, where $Z_{1}$ and $Z_{2}$ are effective with no common components. Then we have $$ Z^2=Z_1^2-2Z_1.Z_2+Z_2^2 \leq 0, $$ with equality iff $Z_{1}$ and $Z_{2}$ are multiples of the entire fiber. This completes the proof. \end{proof} \begin{lemma}\label{L:BirationalBaseChange} Let $S$ be an irreducible, normal, noetherian scheme, and let $\pi_1:X \rightarrow S$ and $\pi_2:Y \rightarrow S$ be two curves over $S$. Suppose that $\pi_1$ is nodal, and that $\pi_1$ and $\pi_2$ are generically smooth. If we are given a birational morphism over $S$ \[ \xymatrix{ X \ar[rr]^{\phi} \ar[rd]_{\pi_1}&& Y \ar[dl]^{\pi_2}\\ &S& } \] then the induced map $\phi_{s}:X_{s} \rightarrow Y_{s}$ is a contraction, for each geometric point $s \in S$. \end{lemma} \begin{proof} By Lemma \ref{L:Normality}, we have $\phi_{*}\mathscr{O}_{X}=\mathscr{O}_{Y}$. Using this, we will show that $\phi_{s}$ satisfies conditions (1)-(3) of Definition \ref{D:BirationalMap}. By Zariski's main theorem, $\phi$ has geometrically connected fibers, so $\phi_{s}$ satisfies (1). Furthermore, $\phi$ is an isomorphism when restricted to the complement of the positive-dimensional fibers of $\phi$, so $\phi_{s}$ satisfies (2). It remains to verify that $\phi_{s}$ satisfies (3). Without loss of generality, we may assume that $Z:=\text{Exc\,}(\phi_{s})$ is connected, and we must show that $p:=\phi_{s}(Z) \in Y_{s}$ is a singularity of genus $p_a(Z)$. Since the number of branches of $p \in Y_{s}$ is, by definition, the number of points lying above $p$ in the normalization, we have $$m(p)=|\overline{X_{s} \backslash Z} \cap Z|.$$ To obtain $\delta(p)=p_a(Z)+m(p)-1$, note that \begin{align*} \delta&=\chi(X_{s},\mathscr{O}_{\overline{X_{s} \backslash Z} })-\chi(Y_{s},\mathscr{O}_{Y_{s}})\\ &=\chi(X_{s},\mathscr{O}_{\overline{X_{s} \backslash Z} })-\chi(X_{s},\mathscr{O}_{X_{s}})\\ &=-\chi(X_{s},I_{\overline{X_{s} \backslash Z}}). \end{align*} The first equality is just the definition of $\delta$ since $\overline{X_{s} \backslash Z}$ is the normalization of $Y_{s}$ at $p$. The second equality follows from the fact that $X_{s}$ and $Y_{s}$ occur in flat families with the same generic fiber, and the third equality is just the additivity of Euler characteristic on exact sequences.
Since $I_{\overline{X_{s} \backslash Z}}$ is supported on $Z$, we have $$\chi(X_{s},I_{\overline{X_{s} \backslash Z}})=\chi(Z,I_{\overline{X_{s} \backslash Z}}|_{Z})= \chi(Z,\mathscr{O}_{Z}(-Z \cap \overline{X_{s} \backslash Z}))=1-m(p)-p_a(Z),$$ which gives the desired equality. \end{proof} \subsection{$\mathcal{Z}$-stability}\label{S:ZStability} In this section, we define the stability condition associated to a fixed extremal assignment $\mathcal{Z}$. Using the definition of a contraction (Definition \ref{D:BirationalMap}), we can recast our original definition of $\mathcal{Z}$-stability (Definition \ref{D:Zstable}) as follows: \begin{definition}[$\mathcal{Z}$-stable curve]\label{D:ZStable} A smoothable $n$-pointed curve $(C,\{p_i\}_{i=1}^{n})$ is \emph{$\mathcal{Z}$-stable} if there exists a stable curve $(C^s,\{p_i^s\}_{i=1}^{n})$ and a contraction $\phi:(C^s, \{p_i^s\}_{i=1}^{n}) \rightarrow (C,\{p_i\}_{i=1}^{n})$ such that $\text{Exc\,}(\phi)=\mathcal{Z}(C^s)$. \end{definition} We will make frequent use of the following observation: If $(C, \{p_i\}_{i=1}^{n})$ is $\mathcal{Z}$-stable, and $\phi:(C^{s}, \{p_i^s\}_{i=1}^{n}) \rightarrow (C, \{p_i\}_{i=1}^{n})$ is \emph{any} contraction from a stable curve, then $\text{Exc\,}(\phi)=\mathcal{Z}(C^{s})$. (The definition of $\mathcal{Z}$-stability asserts the existence of a single contraction with this property.) In order to prove this, we need the following lemma which gives an explicit description of the set of stable curves admitting contractions to a fixed prestable curve. \begin{lemma}\label{L:Mapping} Let $(C,\{p_i\}_{i=1}^{n})$ be an $n$-pointed prestable curve, and let $z_1, \ldots, z_k \in C$ be the set of points which satisfy one of the following conditions: \begin{itemize} \item[(1)] $z_{i}$ is non-nodal singularity, \item[(2)] $z_i$ is a node, and at least one marked point is supported at $z_i$, \item[(3)] $z_i$ is a smooth point, and at least two marked points are supported at $z_i$. \end{itemize} As in Definition \ref{D:Genus}, set $m_{i}=m(z_i)$, $g_{i}=g(z_i)$, and let $l_{i}$ denote the number of marked points supported at $z_{i}$. There exists a map \[ g:=g_{(C,\{p_i\})}:\prod_{i=1}^{k} \overline{\mathcal{M}}_{g_i, m_i+l_i} \rightarrow \overline{\mathcal{M}}_{g,n} \] with the property that a stable curve $(C^{s}, \{p_i^s\}_{i=1}^{n})$ admits a contraction to $(C,\{p_i\}_{i=1}^{n})$ iff it lies in the image of $g$. \end{lemma} \begin{proof} In order to define $g$, let us relabel the marked points of $C$: \[ \{ p_i\}_{i=1}^{n}=\{ p_j \}_{j=1}^{r} \cup \{p_{1j}\}_{j=1}^{l_1} \cup \ldots \cup \{p_{kj} \}_{j=1}^{l_{k}}, \] where $\{p_{ij}\}_{j=i}^{l_i}$ is the set of marked points supported at $z_{i}$, and the points $\{p_{j}\}_{j=1}^{r}$ are distinct smooth points of $C$. Let $\tilde{C} \rightarrow C$ denote the normalization of $C$ at $\{ z_i\}_{i=1}^{k}$, and let $\{\tilde{q}_{ij}\}_{j=1}^{m_i}$ denote the set of points on $\tilde{C}$ lying above $z_i$. The assumption that $(C,\{p_i\}_{i=1}^{n})$ is prestable implies that each connected component of $(\tilde{C}, \{ p_{j}\}_{j=1}^{r}, \{\tilde{q}_{1j}\}_{j=1}^{m_1}, \ldots, \{\tilde{q}_{kj}\}_{j=1}^{m_k})$ is stable. Thus, we may define $g$ by sending \[ \coprod_{i=1}^{k} (Z_i, \{p_{ij}\}_{j=1}^{l_i}, \{q_{ij}\}_{j=1}^{m_i}) \rightarrow (\tilde{C} \cup Z_1 \cup \ldots \cup Z_k, \{ p_{j}\}_{j=1}^{r}, \{p_{ij}\}_{j=1}^{l_1}, \ldots, \{p_{ij} \}_{j=1}^{l_{k}} ), \] where $\tilde{C}$ and $\coprod_{i=1}^{k} Z_{i}$ are glued by identifying $\tilde{q}_{ij} \sim q_{ij}$. 
If $(C^{s}, \{p_i^s\}_{i=1}^{n})$ is in the image of $g$, then we can write \[(C^{s},\{p_i^s\}_{i=1}^{n}) = (\tilde{C} \cup Z_1 \cup \ldots \cup Z_k, \{ p_{j}\}_{j=1}^{r}, \{p_{1j}\}_{j=1}^{l_1}, \ldots, \{p_{kj} \}_{j=1}^{l_{k}} ), \] and we may define a contraction \[ (C^{s},\{p_i^s\}_{i=1}^{n}) \rightarrow (C,\{p_i\}_{i=1}^{n}) \] by collapsing $Z_1, \ldots, Z_k$ to $z_1, \ldots, z_k$, and mapping $\tilde{C}$ birationally onto $C$. Conversely, we claim that if $\phi:(C^{s}, \{p_i^s\}_{i=1}^{n}) \rightarrow (C,\{p_i\}_{i=1}^{n})$ is any contraction, then $(C^{s}, \{p_i^s\}_{i=1}^{n})$ is in the image of $g$. First, let us show that $\phi(\text{Exc\,}(\phi))=\{z_1, \ldots, z_k\}$. Since a stable curve has no points satisfying (1), (2), or (3), it is clear that $\{z_1, \ldots, z_k\} \subset \phi(\text{Exc\,}(\phi))$. Conversely, if $z \in C$ does not satisfy (1), (2), or (3), then $z$ is either an unmarked node, a marked smooth point, or an unmarked smooth point. In any case, since the genus of a node or a smooth point is zero, $\phi^{-1}(z)$ must be a reduced, connected, arithmetic genus zero curve with at most two distinguished points. But this is impossible, since $(C^{s}, \{p_i^s\}_{i=1}^{n})$ is stable. Now, if $\phi:(C^{s}, \{p_i^s\}_{i=1}^{n}) \rightarrow (C,\{p_i\}_{i=1}^{n})$ is any contraction, then we have (see Remark \ref{R:Normalize}) \[ C^{s}=\tilde{C} \cup Z_1 \cup \ldots \cup Z_k, \] where $\phi(Z_i)=z_i$, and $\tilde{C}$ is the normalization of $C$ at $z_1, \ldots, z_k$. If $z_i$ is a singularity with $m_i$ branches, then $Z_i$ meets $Z_i^{c}$ at $m_i$ points. If $z_{i}$ supports $l_{i}$ marked points, then $Z_{i}$ supports the same set of marked points. Thus, if we mark points of attachment in the usual way, we may consider $Z_i$ as an $(l_i+m_i)$-pointed stable curve of genus $g_i$, for each $i=1, \ldots, k$. This shows that $(C^s, \{p_i^s\}_{i=1}^{n})$ is in the image of $g$. \end{proof} \begin{corollary}\label{C:Independence} Let $(C,\{p_i\}_{i=1}^{n})$ be a $\mathcal{Z}$-stable curve, and suppose that \[ \phi:(C^{s}, \{p_i^s\}_{i=1}^{n}) \rightarrow (C,\{p_i\}_{i=1}^{n}) \] is any contraction from a stable curve $(C^{s}, \{p_i^s\}_{i=1}^{n})$. Then $\text{Exc\,}(\phi)=\mathcal{Z}(C^{s})$. \end{corollary} \begin{proof} By Lemma \ref{L:Mapping}, there exists a map $g:\prod_{i=1}^{k} \overline{\mathcal{M}}_{g_i, m_i+l_i} \rightarrow \overline{\mathcal{M}}_{g,n},$ such that $(C^{s}, \{p_i^s\}_{i=1}^{n})$ admits a contraction to $(C, \{p_i\}_{i=1}^{n})$ iff $(C^{s}, \{p_i^s\}_{i=1}^{n})$ lies in the image of $g$. The pull-back of the universal curve $\mathcal{C} \rightarrow \overline{\mathcal{M}}_{g,n}$ via $g$ decomposes as $$ (\tilde{C} \times \prod_{j=1}^{k}\overline{\mathcal{M}}_{g_j,m_j+l_j} ) \coprod \left(\coprod_{i=1}^{k}\mathcal{C}_i \right), $$ where $\mathcal{C}_{i}$ is the pull-back of the universal curve over $ \overline{\mathcal{M}}_{g_i,m_i+l_i}$ via the $i^{th}$ projection $\prod_{j=1}^{k}\overline{\mathcal{M}}_{g_j,m_j+l_j} \rightarrow \overline{\mathcal{M}}_{g_i,m_i+l_i}$. For any geometric point $x \in \prod_{i=1}^{k} \overline{\mathcal{M}}_{g_i, m_i+l_i}$, let $C^{s}_x$ denote the fiber of this pulled-back universal curve over $x$. Then $C^{s}_x$ decomposes as \[C^{s}_x= \tilde{C} \cup (\mathcal{C}_1)_{x} \cup \ldots \cup (\mathcal{C}_{k})_{x}, \] and there exists a contraction $\phi: C^{s}_x \rightarrow C$ with $\text{Exc\,}(\phi)=(\mathcal{C}_1)_{x} \cup \ldots \cup (\mathcal{C}_{k})_{x}$.
The hypothesis that $(C, \{p_i\}_{i=1}^{n})$ is $\mathcal{Z}$-stable implies that \emph{there exists} a geometric point $y \in \prod_{i=1}^{k} \overline{\mathcal{M}}_{g_i, m_i+l_i}$ such that $\mathcal{Z}(\mathcal{C}^{s}_{y})=\cup_{i=1}^{k}(\mathcal{C}_{i})_y$. To prove the corollary, we must show $\mathcal{Z}(\mathcal{C}^{s}_{x})=\cup_{i=1}^{k}(\mathcal{C}_{i})_x$ for \emph{every} geometric point $x \in \prod_{i=1}^{k} \overline{\mathcal{M}}_{g_i, m_i+l_i}.$ This follows easily from two applications of the one-parameter specialization property for extremal assignments: First, let $\zeta \in \prod_{i=1}^{k} \overline{\mathcal{M}}_{g_i, m_i+l_i}$ be the generic point, and consider a map $\mathcal{D}elta \rightarrow \prod_{i=1}^{k} \overline{\mathcal{M}}_{g_i, m_i+l_i}$ sending $\eta \rightarrow \zeta$, $0 \rightarrow y$. Applying Definition \ref{D:Assignment} (3) to the induced family over $\mathcal{D}elta$, we conclude $$\mathcal{Z}(\mathcal{C}^{s}_{y})=(\mathcal{C}_{1})_y \cup \ldots \cup (\mathcal{C}_{k})_{y} \implies \mathcal{Z}(\mathcal{C}^{s}_{\overline{\zeta}})=(\mathcal{C}_{1})_{\overline{\zeta}} \cup \ldots \cup (\mathcal{C}_{k})_{\overline{\zeta}}.$$ Next, let $x \in \prod_{i=1}^{k} \overline{\mathcal{M}}_{g_i, m_i+l_i}$ be an arbitrary geometric point, and consider a map $\mathcal{D}elta \rightarrow \prod_{i=1}^{k} \overline{\mathcal{M}}_{g_i, m_i+l_i}$ sending $\eta \rightarrow \zeta, 0 \rightarrow x$. Applying Definition \ref{D:Assignment} (3) to the induced family over $\mathcal{D}elta$, we see that $$\mathcal{Z}(\mathcal{C}^{s}_{\overline{\zeta}})=(\mathcal{C}_{1})_{\overline{\zeta}} \cup \ldots \cup (\mathcal{C}_{k})_{\overline{\zeta}} \implies \mathcal{Z}(\mathcal{C}^{s}_{x})=(\mathcal{C}_{1})_{x} \cup \ldots \cup (\mathcal{C}_{k})_{x}.$$
\end{proof} \section{Construction of $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$}\label{S:Construction} Throughout this section, we fix an extremal assignment $\mathcal{Z}$ over $\overline{\mathcal{M}}_{g,n}$. \begin{definition}[The moduli stack of $\mathcal{Z}$-stable curves] Let $\mathcal{C} \rightarrow \mathcal{V}_{g,n}$ be the universal curve over $\mathcal{V}_{g,n}$, the main component in the stack of all curves (Section \ref{S:MainResult}). We define $\overline{\mathcal{M}}_{g,n}(\mathcal{Z}) \subset \mathcal{V}_{g,n}$ as the collection of points $\mathcal{S}pec k \rightarrow \mathcal{V}_{g,n}$ such that the geometric fiber $\mathcal{C} \times_{\mathcal{V}_{g,n}} \overline{k}$ is $\mathcal{Z}$-stable. \end{definition} The first main theorem of this paper is \begin{theorem}\label{T:Construction} $\overline{\mathcal{M}}_{g,n}(\mathcal{Z}) \subset \mathcal{V}_{g,n}$ is a stable modular compactification of $\mathcal{M}_{g,n}$. \end{theorem} In Section \ref{S:Openness}, we will show that $\overline{\mathcal{M}}_{g,n}(\mathcal{Z}) \subset \mathcal{V}_{g,n}$ is Zariski-open. Thus, $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ inherits the structure of an algebraic stack, locally of finite-type over $\mathcal{S}pec \mathbb{Z}$. Since a $\mathcal{Z}$-stable curve has no more irreducible components than an $n$-pointed stable curve of arithmetic genus $g$, the moduli problem of $\mathcal{Z}$-stable curves is bounded (see Corollary \ref{C:Stackgne}). Thus, $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ inherits the structure of an algebraic stack of finite-type over $\mathcal{S}pec \mathbb{Z}$, and we may use the valuative criterion to check that $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ is proper over $\mathcal{S}pec \mathbb{Z}$. This is accomplished in Section \ref{S:Properness}. It follows that $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ is a modular compactification of $\mathcal{M}_{g,n}$. To see that $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ is a stable modular compactification, simply observe that any $\mathcal{Z}$-stable curve is obviously prestable, since it is obtained by contracting a stable curve. \subsection{$\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ is open in $\mathcal{V}_{g,n}$}\label{S:Openness} \begin{lemma}\label{L:Rigidity} Suppose we have a diagram \[ \xymatrix{ \mathcal{C}^{s} \ar[rr]^{\phi} \ar[dr]&&\mathcal{C} \ar[ld]\\ &T \ar@/^1pc/[lu]^{\{\sigma^s_i\}_{i=1}^{n}} \ar@/_1pc/[ru]_{\{\sigma_i\}_{i=1}^{n}} & } \] satisfying: \begin{itemize} \item[(1)] $T$ is a noetherian scheme.
\item[(2)] $(\mathcal{C}^{s} \rightarrow T, \{\sigma_{i}^{s}\}_{i=1}^{n})$ is a stable curve, and $(\mathcal{C} \rightarrow T, \{\sigma_{i}\}_{i=1}^{n})$ an arbitrary curve. \item[(3)] $\phi:\mathcal{C}^{s} \rightarrow \mathcal{C}$ is a birational morphism. \end{itemize} Then the set \[ S:=\{t \in T \, |\, \text{Exc\,}(\phi_{\overline{t}})=\mathcal{Z}(C^{s}_{\overline{t}})\} \] is open in $T$. \end{lemma} \begin{proof} Since $T$ is noetherian, it suffices to prove that $S$ is constructible and stable under generization. First, we show that $S$ is constructible. There exists a finite stratification of $T$ into locally-closed subschemes over which the dual graphs of the fibers of $\mathcal{C}^{s} \rightarrow T$ are constant, i.e. we have $ T=\coprod_{G}T_{G},$ where $T_{G}:=\mathcal{M}_{G} \times_{\overline{\mathcal{M}}_{g,n}} T.$ We will prove that $S$ is a finite union of the subschemes $T_{G}$ by showing that \[ \text{Exc\,}(\phi_{\overline{t}})=\mathcal{Z}(C^{s}_{\overline{t}}) \text{ for one point $t \in T_{G}$} \implies \text{Exc\,}(\phi_{\overline{t}})=\mathcal{Z}(C^{s}_{\overline{t}})\text{ for all points $t \in T_{G}$.} \] There exists a finite surjective map $\tilde{T}_{G} \rightarrow T_{G}$, such that $$ \mathcal{C}^{s} \times_{T} \tilde{T}_G=\mathcal{C}_{1}^{s} \cup \ldots \cup \mathcal{C}_{k}^{s}, $$ where each $\mathcal{C}_{i}^{s} \rightarrow \tilde{T}_{G}$ is a smooth proper curve, and it is enough to prove that \[ \text{Exc\,}(\tilde{\phi}_{\overline{t}})=\mathcal{Z}(C^{s}_{\overline{t}}) \text{ for one point $t \in \tilde{T}_{G}$} \implies \text{Exc\,}(\tilde{\phi}_{\overline{t}})=\mathcal{Z}(C^{s}_{\overline{t}})\text{ for all points $t \in \tilde{T}_{G}$,} \] where $\tilde{\phi}:\mathcal{C}^{s} \times_{T} \tilde{T}_G \rightarrow \mathcal{C} \times_{T} \tilde{T}_G$ is the birational morphism induced by $\phi$. Since each $\mathcal{C}_{i}^{s} \rightarrow \tilde{T}_{G}$ is smooth and proper, the rigidity lemma implies that $$ (\mathcal{C}^{s}_{i})_{\overline{t}} \subset \text{Exc\,}(\tilde{\phi}_{\overline{t}}) \text{ for one point $t \in \tilde{T}_{G}$} \implies (\mathcal{C}^{s}_{i})_{\overline{t}} \subset \text{Exc\,}(\tilde{\phi}_{\overline{t}}) \text{ for all points $t \in \tilde{T}_{G}$.} $$ On the other hand, since the dual graph of the fibers of $\mathcal{C}^{s} \times_T \tilde{T}_{G} \rightarrow \tilde{T}_G$ is constant, we have $$ (\mathcal{C}^{s}_{i})_{\overline{t}} \subset \mathcal{Z}(C^{s}_{\overline{t}}) \text{ for one point $t \in \tilde{T}_{G}$} \implies (\mathcal{C}^{s}_{i})_{\overline{t}} \subset \mathcal{Z}(C^{s}_{\overline{t}}) \text{ for all points $t \in \tilde{T}_{G}$.} $$ It follows that $$ \text{Exc\,}(\phi_{\overline{t}})=\mathcal{Z}(C^{s}_{\overline{t}}) \text{ for one point $t \in \tilde{T}_{G}$} \implies \text{Exc\,}(\phi_{\overline{t}})=\mathcal{Z}(C^{s}_{\overline{t}})\text{ for all points $t \in \tilde{T}_{G}$,} $$ as desired. Next, we show that $S$ is stable under generization. If $s, t \in T$ satisfy $s \in \overline{\{t\}}$, there exists a map $\mathcal{D}elta \rightarrow T$, sending $\eta \rightarrow t$, $0 \rightarrow s$, inducing a diagram \[ \xymatrix{ \mathcal{C}^{s} \ar[dr]\ar[rr]^{\phi} & &\mathcal{C} \ar[ld]\\ &\mathcal{D}elta&\\ } \] We wish to show that \[\text{Exc\,}(\phi_{0})=\mathcal{Z}(C^{s}_0) \implies \text{Exc\,}(\phi_{\overline{\eta}})=\mathcal{Z}(C^{s}_{\overline{\eta}}). \] As with step 1, this is an elementary application of the rigidity lemma.
After a finite base-change, we may assume that the irreducible components of $\mathcal{C}^{s}$, say $\mathcal{C}^{s}=\mathcal{C}^{s}_1 \cup \ldots \cup \mathcal{C}^{s}_{k}$, are in bijective correspondence with the irreducible components of $\mathcal{C}^{s}_{\overline{\eta}}$. By Definition \ref{D:Assignment} (3), we have \[(\mathcal{C}^{s}_i)_0 \subset \mathcal{Z}(\mathcal{C}^{s}_0) \iff (\mathcal{C}^{s}_i)_{\overline{\eta}} \subset \mathcal{Z}(\mathcal{C}^{s}_{\overline{\eta}}).\] On the other hand, since each $\mathcal{C}^{s}_{i} \rightarrow \mathcal{D}elta$ is flat and proper with irreducible generic fiber, the rigidity lemma implies that \[(\mathcal{C}^{s}_i)_0 \subset \text{Exc\,}(\phi_0) \iff (\mathcal{C}^{s}_i)_{\overline{\eta}} \subset \text{Exc\,}(\phi_{\overline{\eta}}).\] We conclude that $\text{Exc\,}(\phi_{0})=\mathcal{Z}(\mathcal{C}^{s}_0) \implies \text{Exc\,}(\phi_{\overline{\eta}})=\mathcal{Z}(\mathcal{C}^{s}_{\overline{\eta}})$, as desired. \end{proof} \begin{theorem} $\overline{\mathcal{M}}_{g,n}(\mathcal{Z}) \subset \mathcal{V}_{g,n}$ is an open substack. \end{theorem} \begin{proof} Since $\mathcal{V}_{g,n}$ is an algebraic stack, irreducible and locally of finite-type over $\mathcal{S}pec \mathbb{Z}$, there exists a smooth atlas $T \rightarrow \mathcal{V}_{g,n}$, where $T$ is a scheme, irreducible and locally of finite-type over $\mathcal{S}pec \mathbb{Z}$. If $(\mathcal{C} \rightarrow T, \{\sigma_{i}\}_{i=1}^{n})$ denotes the corresponding $n$-pointed curve over $T$, then the required statement is that \[ S:=\{t \in T \, | \, (\mathcal{C}_{\overline{t}}, \{\sigma_i(\overline{t})\}_{i=1}^{n})\text{ is $\mathcal{Z}$-stable} \} \] is open in $T$. Since this is local on $T$, we may assume that $T$ is irreducible and of finite-type over $\mathcal{S}pec \mathbb{Z}$. Note that if $p: \tilde{T} \rightarrow T$ is any proper surjective morphism of schemes, and $(\tilde{\mathcal{C}} \rightarrow \tilde{T}, \{\tilde{\sigma}_i\}_{i=1}^{n})$ is the family obtained by pull-back, then it is sufficient to show that \[ \tilde{S}:=\{t \in \tilde{T} \, | \, (\tilde{\mathcal{C}}_{\overline{t}}, \{\tilde{\sigma}_i(\overline{t})\}_{i=1}^{n})\text{ is $\mathcal{Z}$-stable} \} \] is open in $\tilde{T}$. By Lemma \ref{L:ExtendingCurves}, there exists an alteration $\tilde{T} \rightarrow T$, and a diagram \[ \xymatrix{ \mathcal{C}^{s} \ar@{-->}^{\phi}[rr] \ar[dr] && \tilde{\mathcal{C}} \ar[dl]\\ &\tilde{T} \ar@/^1pc/[lu]^{\{\sigma^s_i\}_{i=1}^{n}} \ar@/_1pc/[ru]_{\{\tilde{\sigma}_i\}_{i=1}^{n}}& } \] satisfying \begin{enumerate} \item $\tilde{T}$ is a normal noetherian scheme. \item $(\mathcal{C}^{s}, \{\sigma^s_i\}_{i=1}^{n})$ is a stable curve. \item $\phi$ is regular over the locus $\{t \in \tilde{T} \,|\, \text{$(\tilde{\mathcal{C}}_{t}, \{\tilde{\sigma}_i(t)\}_{i=1}^{n})$ is prestable}\}$. \end{enumerate} In particular, since every $\mathcal{Z}$-stable curve is prestable, $\tilde{S}$ is contained in the open set $$U:=\{ t \in \tilde{T} \,|\, \text{$\phi$ is regular in a neighborhood of the fiber $\mathcal{C}^{s}_t$}\}.$$ Thus, we may replace $\tilde{T}$ by $U$ and assume that $\phi$ is regular. By Lemma \ref{L:BirationalBaseChange}, the restriction of $\phi$ to each fiber is a contraction of curves. Thus, by Corollary \ref{C:Independence}, $$ \tilde{S}=\{t \in \tilde{T} \,\,|\,\, \text{Exc\,}(\phi_{t})=\mathcal{Z}(C^{s}_{t}) \}. $$ By Lemma \ref{L:Rigidity}, this set is open.
\end{proof} \subsection{$\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ is proper over $\mathcal{S}pec \mathbb{Z}$}\label{S:Properness} To show that $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ is proper, it suffices to verify the valuative criterion for discrete valuation rings with algebraically closed residue field, whose generic point maps into the open dense substack $\mathcal{M}_{g,n} \subset \overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ (\cite{LMB}, Chapter 7). \begin{theorem}[Valuative Criterion for Properness of $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$] Let $\mathcal{D}elta$ be the spectrum of a discrete valuation ring with algebraically closed residue field. \begin{itemize} \item[(1)] (Existence of $\mathcal{Z}$-stable limits) If $(\mathcal{C},\{\sigma_{i}\}_{i=1}^{n} )|_{\eta}$ is a smooth $n$-pointed curve over $\eta$, there exists a finite base-change $\mathcal{D}elta' \rightarrow \mathcal{D}elta$, and a $\mathcal{Z}$-stable curve $(\mathcal{C}' \rightarrow \mathcal{D}elta', \{\sigma_{i}\}_{i=1}^{n}prime)$, such that $$(\mathcal{C}', \{\sigma_{i}\}_{i=1}^{n}prime )|_{\eta'} \simeq (\mathcal{C},\{\sigma_{i}\}_{i=1}^{n})|_{\eta} \times_{\eta} \eta'.$$ \item[(2)] (Uniqueness of $\mathcal{Z}$-stable limits) Suppose that $(\mathcal{C} \rightarrow \mathcal{D}elta,\{\sigma_{i}\}_{i=1}^{n})$ and $(\mathcal{C}' \rightarrow \mathcal{D}elta, \{\sigma_{i}\}_{i=1}^{n}prime)$ are $\mathcal{Z}$-stable curves with smooth generic fiber. Then any isomorphism over the generic fiber $$(\mathcal{C},\{\sigma_{i}\}_{i=1}^{n})|_{\eta} \simeq (\mathcal{C}',\{\sigma_{i}\}_{i=1}^{n}prime)|_{\eta}$$ extends to an isomorphism over $\mathcal{D}elta$: $$(\mathcal{C},\{\sigma_{i}\}_{i=1}^{n}) \simeq (\mathcal{C}',\{\sigma_{i}\}_{i=1}^{n}prime).$$ \end{itemize} \end{theorem} \begin{proof} To prove existence of limits, start by applying the stable reduction theorem to $(\mathcal{C},\{\sigma_{i}\}_{i=1}^{n} )|_{\eta}$. There exists a finite base-change $\mathcal{D}elta' \rightarrow \mathcal{D}elta$, and a Deligne-Mumford stable curve $(\pi: \mathcal{C}^{s} \rightarrow \mathcal{D}elta',\{\sigma_{i}\}_{i=1}^{n}prime)$ such that $$ (\mathcal{C}^{s}, \{\sigma_{i}\}_{i=1}^{n}s)|_{\eta'} \simeq (\mathcal{C},\{\sigma_{i}\}_{i=1}^{n}) \times_{\eta} \eta'. $$ For notational simplicity, we will continue to denote our base by $\mathcal{D}elta$. By Proposition \ref{P:Contractions}, there exists a birational morphism over $\mathcal{D}elta$: \[ \xymatrix{ \mathcal{C}^{s} \ar[rr]^{\phi} \ar[dr]&&\mathcal{C} \ar[dl]\\ &\mathcal{D}elta& } \] such that \begin{enumerate} \item $(\mathcal{C} \rightarrow \mathcal{D}elta, \{\sigma_{i}\}_{i=1}^{n})$ is a flat family of $n$-pointed curves, \item $\phi$ is proper birational with $\text{Exc\,}(\phi)=\mathcal{Z}(C^{s}_0)$, \item $\phi_{0}:C^{s}_0 \rightarrow C_0$ is a contraction of curves. \end{enumerate} Properties (2) and (3) imply that the special fiber $(C_0, \{\sigma_i(0)\}_{i=1}^{n})$ is $\mathcal{Z}$-stable, so $(\mathcal{C} \rightarrow \mathcal{D}elta, \{\sigma_{i}\}_{i=1}^{n})$ is the desired $\mathcal{Z}$-stable family.\\ To prove uniqueness of limits, we must show that a rational isomorphism $$ (\mathcal{C},\{\sigma_{i}\}_{i=1}^{n})|_{\eta} \simeq (\mathcal{C}',\{\sigma_{i}\}_{i=1}^{n}prime)|_{\eta} $$ between two families of $\mathcal{Z}$-stable curves extends to an isomorphism over $\mathcal{D}elta$. It suffices to check that the rational map $\mathcal{C} \dashrightarrow \mathcal{C}'$ extends to an isomorphism after a finite base-change. 
Thus, applying semistable reduction to the graph of this rational map, we may assume there exists a nodal curve $(\mathcal{C}^{n} \rightarrow \mathcal{D}elta, \{\tau_{i}\}_{i=1}^{n})$ and a diagram \[ \xymatrix{ &(\mathcal{C}^{n}, \{\tau_{i}\}_{i=1}^{n}) \ar[rd]^{\phi'} \ar[ld]_{\phi} &\\ (\mathcal{C},\{\sigma_{i}\}_{i=1}^{n})& &(\mathcal{C}', \{\sigma_{i}\}_{i=1}^{n}prime) } \] where $\phi$ and $\phi'$ are proper birational morphisms over $\mathcal{D}elta$. In fact, we may further assume that $(\mathcal{C}^{n} \rightarrow \mathcal{D}elta, \{\tau_{i}\}_{i=1}^{n})$ is stable. Indeed, any rational component $E \subset C^{n}_{0}$ with only one or two distinguished points must be contracted by both $\phi$ and $\phi'$ since $C_0$ and $C_0'$ are both prestable. Thus, $\phi$ and $\phi'$ both factor through the stable reduction $\mathcal{C}^{n} \rightarrow \mathcal{C}^{s}$, and we may replace by $\mathcal{C}^{n}$ by $\mathcal{C}^{s}$. Now consider the restriction of $\phi$ and $\phi'$ to the special fiber. By Lemma \ref{L:BirationalBaseChange}, $\phi_{0}:C^{s}_0 \rightarrow C_0$ and $\phi_{0}':C^{s}_0 \rightarrow C_0'$ are both contractions of curves. Furthermore, since $C_{0}$ and $C_{0}'$ are both $\mathcal{Z}$-stable, Corollary \ref{C:Independence} implies that $\text{Exc\,}(\phi)=\text{Exc\,}(\phi')=\mathcal{Z}(C^{s}_0)$. Since $\mathcal{C}$ and $\mathcal{C}'$ are normal (Lemma \ref{L:Normality}), this implies $\mathcal{C} \simeq \mathcal{C}'$ as desired. \end{proof} \section{Classification of stable modular compactifications of $\mathcal{M}_{g,n}$}\label{S:Classification} In this section, we prove the following theorem. \begin{theorem}[Classification of Stable Modular Compactifications]\label{T:Classification} Suppose $\mathcal{X} \subset \mathcal{V}_{g,n}$ is a stable modular compactification of $\mathcal{M}_{g,n}$. Then there exists an extremal assignment $\mathcal{Z}$ over $\overline{\mathcal{M}}_{g,n}$, such that $\mathcal{X}=\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$. \end{theorem} The starting point for the proof of Theorem \ref{T:Classification} is the following lemma, which allows us to compare an arbitrary stable modular compactification to $\overline{\mathcal{M}}_{g,n}$ by regularizing the rational map between their respective universal curves. \begin{lemma}\label{L:Diagram} Suppose that $\mathcal{X} \subset \mathcal{V}_{g,n}$ is a stable modular compactification of $\mathcal{M}_{g,n}$. Then there exists a diagram \[ \xymatrix{ \mathcal{C}^{s} \ar[dr]^{\pi^{s}} \ar[rr]^{\phi}&&\mathcal{C} \ar[dl]_{\pi}\\ &T \ar[dl]_{p} \ar[dr]^{q} \ar@/_1pc/[ru]_{\{\sigma_{i}\}_{i=1}^{n}} \ar@/^1pc/[lu]^{\{\sigma_i^{s} \}_{i=1}^{n}}&\\ \overline{M} \ar[d]_{i}&U\ar[d] \ar@{^{(}->}[r] \ar@{_{(}->}[l]&X \ar[d]^{j}\\ \overline{\mathcal{M}}_{g,n}&\mathcal{U} \ar@{^{(}->}[r] \ar@{_{(}->}[l]&\mathcal{X}\\ } \] satisfying \begin{itemize} \item[(1)] $X$, $U$, $\overline{M}$, and $T$ are irreducible normal schemes, of finite-type over $\mathcal{S}pec \mathbb{Z}$. \item[(2)] $p$ and $q$ are proper birational morphisms, $i$ and $j$ are generically-\'{e}tale finite morphisms, $\pi^{s}:\mathcal{C}^{s} \rightarrow T$ and $\pi: \mathcal{C} \rightarrow T$ are the families induced by $i \circ p$ and $j \circ q$ respectively. \item [(3)] $\mathcal{U}:=\mathcal{M}_{g,n} \cap \mathcal{X}$ is an open dense substack of $\mathcal{X}$ and $\mathcal{M}_{g,n}$. The lower squares are Cartesian, and the unlabelled arrows are open immersions. 
\item[(4)] The rational map $\phi$, induced by the natural isomorphism between $\mathcal{C}^{s}$ and $\mathcal{C}$ over the generic point of $T$, is regular. \end{itemize} \end{lemma} \begin{proof} Since $\mathcal{X} \cup \overline{\mathcal{M}}_{g,n}$ is a (non-separated) algebraic stack of finite-type over $\mathcal{S}pec \mathbb{Z}$ with quasi-finite diagonal, there exists a representable, finite, generically-\'{e}tale, surjective morphism $S \rightarrow \mathcal{X} \cup \overline{\mathcal{M}}_{g,n}$, where $S$ is a scheme (\cite{EHKV}, Theorem 2.7). We may assume that $S$ is irreducible since $\mathcal{X} \cup \overline{\mathcal{M}}_{g,n}$ is. Define $X$, $\overline{M}$, and $U$ as the normalizations of the fiber products $S \times_{\mathcal{X} \cup \overline{\mathcal{M}}_{g,n}} \mathcal{X}$, $S \times_{\mathcal{X} \cup \overline{\mathcal{M}}_{g,n}} \overline{\mathcal{M}}_{g,n}$, and $S \times_{\mathcal{X} \cup \overline{\mathcal{M}}_{g,n}} \mathcal{U}$ respectively. Finally, define $T$ to be the normalization of the closure of the image of the diagonal immersion $U \hookrightarrow X \times_{\mathcal{S}pec \mathbb{Z}} \overline{M}$. This gives a diagram satisfying (1), (2), and (3), but not necessarily (4), i.e. the induced rational map $\phi: \mathcal{C}^{s} \dashrightarrow \mathcal{C}$ may not be regular. Since the geometric fibers of $(\mathcal{C} \rightarrow T, \{\sigma_{i}\}_{i=1}^{n})$ are prestable, Lemma \ref{L:ExtendingCurves} gives an alteration $T' \rightarrow T$ such that the rational map $\mathcal{C}^{s} \times_{T} T' \dashrightarrow \mathcal{C} \times_{T} T'$ is regular. Now define $X' \rightarrow X$ and $\overline{M}' \rightarrow \overline{M}$ to be the finite morphisms appearing in the Stein factorizations of $T' \rightarrow X$ and $T' \rightarrow \overline{M}$ respectively. Replacing $X$, $\overline{M}$, $T$ by $X'$, $\overline{M}'$, $T'$, and $U$ by $U \times_{\overline{M}} T'=U \times_{X} T'$ gives the desired diagram. \end{proof} For the remainder of this section, we fix a stable modular compactification $\mathcal{X}$, and a diagram as in Lemma \ref{L:Diagram}. We also use the following notation: For any geometric point $t \in T$, let $G_{t}$ be the dual graph of the fiber $(C^{s}_t, \{\sigma_i^{s}(t)\}_{i=1}^{n})$, so that $\text{Exc\,}(\phi_t)$ determines a subgraph $\text{Exc\,}(\phi_t) \subset G_t$. Also, we set $T_{G}:= \mathcal{M}_{G} \times_{\overline{\mathcal{M}}_{g,n}} T$, i.e. $T_{G} \subset T$ is the locally closed subscheme in $T$ over which the geometric fibers of $\pi^{s}$ have dual graph isomorphic to $G$. We wish to associate to $\mathcal{X}$ an extremal assignment $\mathcal{Z}$ by setting \[ \mathcal{Z}(G):=i(\text{Exc\,}(\phi_t)) \subset G, \] for some choice of $t \in T_{G}$ and some choice of isomorphism $i:G_{t} \simeq G$. The key point is to show that the subgraph $\mathcal{Z}(G) \subset G$ does not depend on these choices. More precisely, we will show: \begin{proposition}\label{P:ExceptionalLocus} \begin{itemize} \item[] \item[(a)] For any two geometric points $t_1, t_2 \in T_{G}$, there exists an isomorphism $i:G_{t_1} \simeq G_{t_2}$ such that $i(\text{Exc\,}(\phi_{t_1}))=\text{Exc\,}(\phi_{t_2}).$ \item[(b)] For any geometric point $t \in T_{G}$, and any automorphism $i:G_{t} \simeq G_{t}$, we have $i(Exc(\phi_t))=\text{Exc\,}(\phi_t)$, i.e. $\text{Exc\,}(\phi_t)$ is $\text{Aut\,}(G_t)$-invariant. \end{itemize} \end{proposition} Before proving this proposition, let us use it to prove Theorem \ref{T:Classification}. 
\begin{proof}[Proof of Theorem \ref{T:Classification}, assuming Proposition \ref{P:ExceptionalLocus}.] If $G$ is any dual graph of an $n$-pointed stable curve of genus $g$, we define a subgraph $\mathcal{Z}(G) \subset G$ by the recipe $$ \mathcal{Z}(G):=i(\text{Exc\,}(\phi_t)) \subset G, $$ for any choice of $t \in T_{G}$ and isomorphism $i:G_{t} \simeq G$. By Proposition \ref{P:ExceptionalLocus}, the subgraph of $\mathcal{Z}(G) \subset G$ does not depend on the choice of $t \in T_{G}$ or the choice of isomorphism $i:G_{t} \simeq G$. We claim that the assignment $G \rightarrow \mathcal{Z}(G)$ defines an extremal assignment over $\overline{\mathcal{M}}_{g,n}$, and that $\mathcal{X}=\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$. To prove that $\mathcal{Z}$ is an extremal assignment, we must show that it satisfies axioms 1-3 of Definition \ref{D:Assignment}. For axiom 1, suppose that $\mathcal{Z}(G) =G$ for some dual graph $G$. Then there exists a point $t \in T$ such that $\text{Exc\,}(\phi_t)=C^{s}_t$. Since $T$ is connected, and the locus $$\{ t \in T |\,\, \text{Exc\,}(\phi_t)=C^s_{t} \}$$ is both open and closed in $T$, it must follow that $\text{Exc\,}(\phi_t)=C^{s}_t$ \emph{for every} $t \in T$. But this is impossible since $\phi$ is an isomorphism over the generic point of $T$. We conclude that $\mathcal{Z}(G) \neq G$ for every dual graph $G$. Axiom 2 is immediate from Proposition \ref{P:ExceptionalLocus} (b). Finally, for axiom 3, suppose that $G \leadsto G'$ is an arbitrary specialization of dual graphs. Let $x \in \overline{\mathcal{M}}_{g,n}$ be the generic point of $\mathcal{M}_{G}$, let $y \in \overline{\mathcal{M}}_{g,n}$ be a closed point of $\mathcal{M}_{G'}$, and let $t \in T$ be any point lying over $x$. Since $y \in \overline{\{x\}}$, there exists a diagram \[ \xymatrix{ \eta \ar[d] \ar[r]^{\tilde{u}}&T \ar[d]\\ \mathcal{D}elta \ar[r]^{u}& \overline{\mathcal{M}}_{g,n} } \] satisfying $u(\eta)=x$, $u(0)=y$, $\tilde{u}(\eta)=t$. Since $T \rightarrow \overline{\mathcal{M}}_{g,n}$ is proper, this diagram fills in (possibly after a finite base-change) to give map $\tilde{u}:\mathcal{D}elta \rightarrow T$, and we may consider the induced morphism of families pulled back from $T$: \[ \xymatrix{ \mathcal{C}^{s} \ar[dr] \ar[rr]^{\phi} && \mathcal{C} \ar[dl]\\ &\mathcal{D}elta&\\ } \] Since our definition of $\mathcal{Z}$ does not depend on the choice of $t \in T$, we have $\mathcal{Z}(G)=\text{Exc\,}(\phi_{\overline{\eta}})$ and $\mathcal{Z}(G')=\text{Exc\,}(\phi_{0})$. After a finite base-change, we may assume that $\mathcal{C}^{s}=\mathcal{C}_{1} \cup \ldots \cup \mathcal{C}_{m}$, where each $\mathcal{C}_{i} \rightarrow \mathcal{D}elta$ is a flat family of curves with smooth generic fiber, and it suffices to show that $$(\mathcal{C}_{i})_{\bar{\eta}} \in \text{Exc\,}(\phi_{\overline{\eta}}) \iff (\mathcal{C}_{i})_{0} \in \text{Exc\,}(\phi_0).$$ But this is an immediate consequence of the rigidity lemma. This completes the proof that $\mathcal{Z}$ is an extremal assignment over $\overline{\mathcal{M}}_{g,n}$. Since $\mathcal{Z}$ is an extremal assignment, Theorem \ref{T:Construction} gives a stable modular compactification $\overline{\mathcal{M}}_{g,n}(\mathcal{Z}) \subset \mathcal{V}_{g,n}$, and we wish to show that $\mathcal{X}=\overline{\mathcal{M}}_{g,n}(\mathcal{Z}).$ By Lemma \ref{L:BirationalBaseChange}, the map $\phi_t:C^{s}_t \rightarrow C_t$ is a contraction for any geometric point $t \in T$. 
Since $\text{Exc\,}(\phi_t)=\mathcal{Z}(C^{s}_t)$ by the definition of $\mathcal{Z}$, every geometric fiber of $(\mathcal{C} \rightarrow T, \{\sigma_i\}_{i=1}^{n})$ is $\mathcal{Z}$-stable. Since $T \rightarrow \mathcal{X}$ is surjective, we conclude that every geometric point of $\mathcal{X}$ is contained in $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$. Since $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ and $\mathcal{X}$ are open in $\mathcal{V}_{g,n}$, the natural inclusion $\mathcal{X} \subset \overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ is an open immersion. Furthermore, this inclusion is proper over $\mathcal{S}pec \mathbb{Z}$, since both $\mathcal{X}$ and $\overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ are. A proper dominant morphism is surjective, so $\mathcal{X} = \overline{\mathcal{M}}_{g,n}(\mathcal{Z})$ as desired. \end{proof} It remains to prove Proposition \ref{P:ExceptionalLocus}. The major difficulty comes from the fact that the subscheme $T_{G} \subset T$ may have several connected components. The following lemma, used in conjunction with the valuative criterion for $\mathcal{X}$, will allow us to relate the exceptional loci of the contractions $\phi_{t}$, even as $t$ ranges over different connected components of $T_{G}$. \begin{lemma}\label{L:LiftableMap} Given $T \rightarrow \overline{\mathcal{M}}_{g,n}$ as in Lemma \ref{L:Diagram}, let $x \in \overline{\mathcal{M}}_{g,n}$ be any geometric point, and let $T_1, \ldots, T_k$ be the connected components of the fiber $T_{x}$. Then there exists a diagram \[ \xymatrix{ &&T \ar[d]\\ \mathcal{D}elta \ar[rr]^{\,\,\,\,u} \ar[urr]^{\{\tilde{u}_i\}_{i=1}^{k}}&& \overline{\mathcal{M}}_{g,n} } \] such that \begin{itemize} \item[1.] $u(\eta)=\eta_{\overline{\mathcal{M}}_{g,n}}$, $u(0)=x$, \item[2.] $\tilde{u}_i(\eta)=\eta_T$, $\tilde{u}_i(0) \in T_i$ for each $i=1, \ldots, k$, \end{itemize} where $\eta_{\overline{\mathcal{M}}_{g,n}}$ and $\eta_T$ are the generic points of $\overline{\mathcal{M}}_{g,n}$ and $T$ respectively. \end{lemma} \begin{proof} Let $\{x_{1}, \ldots, x_{k}\}$ be the fiber of $\overline{M} \rightarrow \overline{\mathcal{M}}_{g,n}$ over $x$. By Zariski's main theorem, the geometric fibers of $T \rightarrow \overline{M}$ are connected, so the connected components $T_{1}, \ldots, T_{k}$ are simply the fibers of $T \rightarrow \overline{M}$ over $x_1, \ldots, x_k$. Let $u:\mathcal{D}elta \rightarrow \overline{\mathcal{M}}_{g,n}$ be any map sending $\eta \rightarrow \eta_{\overline{\mathcal{M}}_{g,n}}$ and $0 \rightarrow x$. Let $S$ be any irreducible component of the fiber product $\overline{M} \times_{\overline{\mathcal{M}}_{g,n}} \mathcal{D}elta$ which dominates both $\overline{M}$ and $\mathcal{D}elta$. Since $S \rightarrow \mathcal{D}elta$ is generically \'{e}tale, we may assume, after a finite base-change, that the generic fiber is contained in a union of sections $\{\sigma_i\}_{i=1}^{d}$. Furthermore, since the generic fiber of $S \rightarrow \mathcal{D}elta$ is dense in $S$, each point of the special fiber $S_{0}=\{x_1, \ldots, x_k\}$ is equal to $\sigma_i(0)$ for some section $\sigma_i$. Reordering if necessary, we may assume that $\sigma_i(0)=x_i$ for $i=1, \ldots, k$. The sections $\{\sigma_i\}_{i=1}^{k}$ induce lifts of $u$ to $\overline{M}$, i.e. we have a diagram \[ \xymatrix{ &&\overline{M} \ar[d]\\ \mathcal{D}elta \ar[rr]^{\,\,\,\,u} \ar[urr]^{\{u_i\}_{i=1}^{k}}&& \overline{\mathcal{M}}_{g,n} } \] where $u_i(\eta)=\eta_{\overline{M}}$ and $u_i(0)=x_i$ for $i=1, \ldots, k$.
Since $T \rightarrow \overline{M}$ is proper and an isomorphism over the generic point, each map $u_{i}$ lifts uniquely to a map $\tilde{u}_i:\mathcal{D}elta \rightarrow T$ satisfying $\tilde{u}_i(\eta)=\eta_{T}$ and $\tilde{u}_i(0) \in T_i.$ \end{proof} \begin{itemize} \item[] \end{itemize} \begin{proof}[Proof of Proposition \ref{P:ExceptionalLocus}(a)] \begin{itemize} \item[] \end{itemize} We will prove the statement in two steps. First, we show that if a pair of points $t_1,t_2$ is contained in a single connected component of $T_{G}$, then there exists an isomorphism $i:G_{t_1} \simeq G_{t_2}$ such that $i(\text{Exc\,}(\phi_{t_1}))=\text{Exc\,}(\phi_{t_2})$. Second, we will show that if a pair of points $t_1, t_2 \in T$ is contained in the fiber $T_{x}$ over a geometric point $x \in \overline{\mathcal{M}}_{g,n}$, then there exists an isomorphism $i:G_{t_1} \simeq G_{t_2}$ such that $i(\text{Exc\,}(\phi_{t_1}))=\text{Exc\,}(\phi_{t_2})$. It is easy to see that these two statements suffice: Indeed, given two arbitrary geometric points $t_1, t_2 \in T_{G}$ mapping to $x_1, x_2 \in \overline{\mathcal{M}}_{g,n}$, let $T_{G}'$ be an irreducible component of $T_{G}$ dominating $\mathcal{M}_{G}$, and let $u_1, u_2 \in T_{G}'$ be two geometric points lying above $x_1$ and $x_2$ respectively. By the second claim, there exist isomorphisms \begin{align*} i_1:G_{t_1} \simeq G_{u_1}& \text{ satisfying } i_1(\text{Exc\,}(\phi_{t_1}))=\text{Exc\,}(\phi_{u_1}),\\ i_2:G_{t_2} \simeq G_{u_2}& \text{ satisfying } i_2(\text{Exc\,}(\phi_{t_2}))=\text{Exc\,}(\phi_{u_2}).\\ \intertext{By the first claim, there exists an isomorphism} i:G_{u_1} \simeq G_{u_2}& \text{ satisfying } i(\text{Exc\,}(\phi_{u_1}))=\text{Exc\,}(\phi_{u_2}). \end{align*} Thus, $j:=i_2^{-1} \circ i \circ i_1: G_{t_1} \simeq G_{t_2}$ satisfies $j(\text{Exc\,}(\phi_{t_1}))=\text{Exc\,}(\phi_{t_2})$, as desired.\\ To prove the first claim, let $S$ be any connected component of $T_{G}$ and consider the induced diagram \[ \xymatrix{ \mathcal{C}^{s} \ar[dr] \ar[rr]^{\phi}&& \mathcal{C} \ar[ld]\\ &S& } \] After a finite surjective base-change, we may assume that $$ \mathcal{C}^{s} \simeq \mathcal{C}^s_{1} \cup \ldots \cup \mathcal{C}^s_{m}, $$ where each $\mathcal{C}^s_{i} \rightarrow S$ is proper and smooth. Note that this isomorphism induces an obvious identification of dual graphs $i:G_{s_1} \simeq G_{s_2}$ for any two geometric points $s_1, s_2 \in S$. Furthermore, since one geometric fiber of $\mathcal{C}^{s}_{i} \rightarrow S$ is contracted by $\phi$ if and only if \emph{every} geometric fiber of $\mathcal{C}^{s}_{i} \rightarrow S$ is contracted by $\phi$, we have $i(\text{Exc\,}(\phi_{s_1}))=\text{Exc\,}(\phi_{s_2})$.\\ To prove the second claim, let $x \in \overline{\mathcal{M}}_{g,n}$ be any geometric point and let $T_1, \ldots, T_k$ be the connected components of $T_{x}$. Given the first claim, it suffices to prove that there exist points $t_{i} \in T_{i}$ for each $i=1, \ldots, k$, and isomorphisms $$ G_{t_1} \simeq G_{t_2} \simeq \cdots \simeq G_{t_k}, $$ identifying $\text{Exc\,}(\phi_{t_1}) \simeq \text{Exc\,}(\phi_{t_2}) \simeq \cdots \simeq \text{Exc\,}(\phi_{t_k})$. By Lemma \ref{L:LiftableMap}, there exists a commutative diagram \[ \xymatrix{ &&T \ar[d]\\ \mathcal{D}elta \ar[rr]^{\,\,\,\,u} \ar[urr]^{\{\tilde{u}_i\}_{i=1}^{k}}&& \overline{\mathcal{M}}_{g,n} } \] satisfying \begin{itemize} \item[1.] $u(\eta)=\eta_{\overline{\mathcal{M}}_{g,n}}$, $u(0)=x$. \item[2.]
$\tilde{u}_i(\eta)=\eta_T$, $\tilde{u}_i(0) \in T_i$ for each $i=1, \ldots, k$. \end{itemize} Set $t_{i}:=\tilde{u}_{i}(0)$ for each $i=1, \ldots, k$, and let \[ \xymatrix{ \mathcal{C}_{i}^{s} \ar[rr]^{\phi_i} \ar[dr]&&\mathcal{C}_{i} \ar[dl]\\ &\mathcal{D}elta& } \] be the diagram induced by the morphism $\tilde{u}_i:\mathcal{D}elta \rightarrow T$. Note that, since the compositions $\mathcal{D}elta \rightarrow T \rightarrow \overline{\mathcal{M}}_{g,n}$ are identical when restricted to the generic point $\eta \in \mathcal{D}elta$, we obtain a commutative diagram of isomorphisms over $\eta$: \[ \xymatrix{ (\mathcal{C}^{s}_1)_{\eta} \ar[r]^{\simeq} \ar[d]^{(\phi_1)_{\eta}} & (\mathcal{C}^{s}_2)_{\eta} \ar[r]^{\simeq} \ar[d]^{(\phi_2)_{\eta}} & \cdots \ar[r]^{\simeq} &(\mathcal{C}^{s}_k)_{\eta} \ar[d]^{(\phi_k)_{\eta}} \\ (\mathcal{C}_1)_{\eta} \ar[r]^{\simeq} & (\mathcal{C}_2)_{\eta} \ar[r]^{\simeq} & \cdots \ar[r]^{\simeq}&(\mathcal{C}_k)_{\eta}\\ } \] Since $\overline{\mathcal{M}}_{g,n}$ is proper over $\mathcal{S}pec \mathbb{Z}$, each isomorphism $(\mathcal{C}^{s}_{i})_{\eta} \simeq (\mathcal{C}^{s}_j)_{\eta}$ extends uniquely to an isomorphism $\mathcal{C}^{s}_{i} \simeq \mathcal{C}^{s}_{j}$. Similarly, since $\mathcal{X}$ is proper over $\mathcal{S}pec \mathbb{Z}$, each isomorphism $(\mathcal{C}_{i})_{\eta} \simeq (\mathcal{C}_j)_{\eta}$ extends uniquely to an isomorphism $\mathcal{C}_{i} \simeq \mathcal{C}_{j}$. Thus, we obtain a commutative diagram over $\mathcal{D}elta$: \[ \xymatrix{ \mathcal{C}^{s}_1 \ar[r]^{\simeq} \ar[d]^{\phi_1} & \mathcal{C}^{s}_2 \ar[r]^{\simeq} \ar[d]^{\phi_2} & \cdots \ar[r]^{\simeq} &\mathcal{C}^{s}_k \ar[d]^{\phi_k} \\ \mathcal{C}_1 \ar[r]^{\simeq} & \mathcal{C}_2 \ar[r]^{\simeq} & \cdots \ar[r]^{\simeq}& \mathcal{C}_k\\ } \] Restricting to the top row of isomorphisms to the special fiber, we obtain isomorphisms $$ C^{s}_{t_1} \simeq C^{s}_{t_2} \simeq \ldots \simeq C^{s}_{t_k}, $$ identifying $\text{Exc\,}(\phi_{t_1}) \simeq \ldots \simeq \text{Exc\,}(\phi_{t_k})$, as desired.\\ \end{proof} \begin{comment} \begin{figure} \caption{Exceptional dual graphs when $(g,n)=(2,0)$ and $(2,1)$.} \label{F:NonseparatedLimits} \end{figure} \end{comment} It remains to prove Proposition \ref{P:ExceptionalLocus} (b). From Proposition \ref{P:ExceptionalLocus} (a), it already follows that every curve parametrized by $\mathcal{X}$ is obtained by contracting some subcurve of a stable curve $Z(C^{s}) \subset C^{s}$, and that this subcurve depends only on the dual graph of $C^{s}$. We will show that the separatedness of $\mathcal{X}$ forces the subcurve $Z(C^{s})$ to be invariant under automorphisms of the dual graph. The idea is simple: if $i$ is an automorphism of the dual graph of $C^{s}$ such that $Z(C^{s}) \neq i(Z(C^{s}))$, then contracting $Z(C^{s})$ or $i(Z(C^{s}))$ in a one-parameter smoothing of $C^{s}$ gives two distinct limits in $\mathcal{X}$. To make this precise, we need two preliminary lemmas. \begin{lemma}\label{L:LastLemma} Suppose that $(C,\{p_i\}_{i=1}^{n})$ admits a contraction from a stable curve $f:(C^s, \{p_i^s\}_{i=1}^{n}) \rightarrow (C,\{p_i\}_{i=1}^{n})$. Suppose that there exists $t \in T$ and an isomorphism $i:(C^{s}_t, \{\sigma_{i}(t)\}_{i=1}^{n}) \simeq (C^s, \{p_i^s\}_{i=1}^{n})$ satisfying $i(\text{Exc\,}(\phi_t))=\text{Exc\,}(f)$. 
Then the same holds true for \emph{any} contraction from a stable curve to $(C,\{p_i\}_{i=1}^{n})$, i.e. given \emph{any} contraction $g:(D^{s}, \{p_i^s\}_{i=1}^{n}) \rightarrow (C,\{p_i\}_{i=1}^{n})$ with $(D^{s}, \{p_i^s\}_{i=1}^{n})$ stable, there exists $t \in T$ and an isomorphism $i:(C^{s}_t, \{\sigma_{i}(t)\}_{i=1}^{n}) \simeq (D^s, \{p_i^s\}_{i=1}^{n})$ satisfying $i(\text{Exc\,}(\phi_t))=\text{Exc\,}(g)$. \end{lemma} \begin{proof} By Lemma \ref{L:Mapping}, there exists a map \[ h:\prod_{i=1}^{k} \overline{\mathcal{M}}_{g_i, m_i+l_i} \rightarrow \overline{\mathcal{M}}_{g,n} \] with the property that a stable curve admits a contraction to $(C,\{p_i\}_{i=1}^{n})$ iff it lies in the image of $h$. Set $\overline{\mathcal{M}}_{C}:=\prod_{i=1}^{k} \overline{\mathcal{M}}_{g_i, m_i+l_i}$, let $T_{C}$ be an irreducible component of the fiber product $T \times_{\overline{\mathcal{M}}_{g,n}} \overline{\mathcal{M}}_{C}$ which dominates $\overline{\mathcal{M}}_{C}$, and consider the induced birational morphism of families over $T_{C}$: \[ \xymatrix{ \mathcal{C}^{s} \ar[rr]^{\phi} \ar[dr]&&\mathcal{C} \ar[dl]\\ &T_{C}&\\ } \] Since the family $\mathcal{C}^{s} \rightarrow T_{C}$ is pulled back from $\overline{\mathcal{M}}_{C}$, it decomposes as $$ \mathcal{C}^{s} \simeq (\tilde{C} \times T_{C}) \coprod \left( \coprod_{i=1}^{k}\mathcal{C}^s_i \right), $$ where $\mathcal{C}^{s}_i \rightarrow T_{C}$ is pulled back from the universal curve over $\overline{\mathcal{M}}_{g_i, m_i+l_i}$ via the $i^{th}$ projection. Our hypothesis implies that there exists a point $t \in T_{C}$ such that $\text{Exc\,}(\phi_{t})$ is precisely the union of fibers $\cup_{i=1}^{k}(\mathcal{C}^s_i)_{t}$. By the rigidity lemma, $\text{Exc\,}(\phi_t)=\cup_{i=1}^{k}(\mathcal{C}^s_i)_{t}$ for \emph{all} $t \in T_{C}$. Since $T_{C} \rightarrow \overline{\mathcal{M}}_{C}$ is surjective, this implies that if $g:(D^{s}, \{p_i^s\}_{i=1}^{n}) \rightarrow (C,\{p_i\}_{i=1}^{n})$ is any contraction from a stable curve to $(C,\{p_i\}_{i=1}^{n})$, there exists $t \in T_{C} \subset T$ and an isomorphism $i:C^{s}_t \simeq D^{s}$ such that $i(\text{Exc\,}(\phi_t))=\text{Exc\,}(g)$. \end{proof} \begin{lemma}\label{L:TwistingX} Let $f: (C^s, \{p_i^s\}_{i=1}^{n}) \rightarrow (D, \{p_i\}_{i=1}^{n})$ be a contraction from a stable curve $(C^s, \{p_i^s\}_{i=1}^{n})$ to a smoothable curve $(D, \{p_i\}_{i=1}^{n})$. Suppose that there exists $t \in T$ and an isomorphism $i:(C^{s}_t, \{\sigma_{i}(t)\}_{i=1}^{n}) \simeq (C^{s}, \{p_i^s\}_{i=1}^{n})$ satisfying $i(\text{Exc\,}(\phi_t))=\text{Exc\,}(f)$. Then $[D, \{p_i\}_{i=1}^{n}] \in \mathcal{X}$. \end{lemma} \begin{proof} Since $(D, \{p_i\}_{i=1}^{n})$ is smoothable, we may consider a generic smoothing $(\mathcal{D} \rightarrow \mathcal{D}elta, \{\sigma_{i}\}_{i=1}^{n})$. Since the fibers of $\mathcal{D} \rightarrow \mathcal{D}elta$ are prestable, we may apply Lemma \ref{L:ExtendingCurves} (2) to obtain (after finite base-change) a birational morphism: \[ \xymatrix{ \mathcal{D}^{s} \ar[rr]^{\psi} \ar[dr]&&\mathcal{D} \ar[dl]\\ &\mathcal{D}elta&\\ } \] where $(\mathcal{D}^{s} \rightarrow \mathcal{D}elta, \{\sigma_{i}^{s}\}_{i=1}^{n})$ is a stable curve. By Lemma \ref{L:BirationalBaseChange}, the restriction of $\psi$ to the special fiber is a contraction morphism $\psi_{0}:(D^s, \{p_i^s\}_{i=1}^{n}) \rightarrow (D, \{p_i\}_{i=1}^{n})$, though not necessarily the one given in the hypothesis of the lemma.
Nevertheless, by Lemma \ref{L:LastLemma}, there exists a geometric point $t \in T$ and an isomorphism $i:(C^{s}_t, \{\sigma_{i}(t)\}_{i=1}^{n}) \simeq (D^{s}, \{p_i^s\}_{i=1}^{n})$ satisfying $i(\text{Exc\,}(\phi_t))=\text{Exc\,}(\psi_0)$. The stable curve $(\mathcal{D}^{s} \rightarrow \mathcal{D}elta, \{\sigma_{i}^{s}\}_{i=1}^{n})$ induces a map $\mathcal{D}elta \rightarrow \overline{\mathcal{M}}_{g,n}$, and we let $T_{\mathcal{D}elta}$ denote an irreducible component of $T \times_{\overline{\mathcal{M}}_{g,n}}\mathcal{D}elta$ which dominates $\mathcal{D}elta$. By Proposition \ref{P:ExceptionalLocus} (a), we may assume that the given point $t \in T$ lies in $T_{\mathcal{D}elta}$, so the isomorphism $i:C^{s}_t \simeq D^{s}$ determines a lift of the closed point $\mathcal{S}pec k \hookrightarrow \mathcal{D}elta$ to $T_{\mathcal{D}elta}$, i.e. we have a diagram \[ \xymatrix{ &T_{\mathcal{D}elta} \ar[d] \ar[r]&T \ar[d]&\\ \mathcal{S}pec k \ar[r] \ar[ur]& \mathcal{D}elta \ar[r]& \overline{\mathcal{M}}_{g,n} \\ } \] Since the projection $T_{\mathcal{D}elta} \rightarrow \mathcal{D}elta$ is generically-\'{e}tale, we may assume (after finite base-change) that there exists a section $\mathcal{D}elta \rightarrow T_{\mathcal{D}elta}$ whose image contains the given $k$-point. This section induces a lifting $\mathcal{D}elta \rightarrow T$, and we may consider the birational morphism of families pulled back from $T$: \[ \xymatrix{ \mathcal{C}^{s} \ar[rr]^{\phi} \ar[dr]&&\mathcal{C} \ar[dl]\\ &\mathcal{D}elta&\\ } \] By construction, the natural isomorphism $\mathcal{C}^{s} \simeq \mathcal{D}^s$ identifies $\text{Exc\,}(\phi)$ with $\text{Exc\,}(\psi)$. Since $\mathcal{C}$ and $\mathcal{D}$ are normal, there is an induced isomorphism $\mathcal{C} \simeq \mathcal{D}$. Thus, the isomorphism class of the curve $(D,\{p_i\}_{i=1}^{n})$ appears as a fiber of the family $\mathcal{C} \rightarrow T$, i.e. $[D, \{p_i\}_{i=1}^{n}] \in \mathcal{X}$ as desired. \end{proof} Now we can prove Proposition \ref{P:ExceptionalLocus} (b). \begin{proof}[Proof of Proposition \ref{P:ExceptionalLocus}(b)] Suppose there exists a dual graph $G$ and a geometric point $t \in T_{G}$ such that $\text{Exc\,}(\phi_t)$ fails to be $\text{Aut\,}(G)$-invariant. By Proposition \ref{P:ExceptionalLocus}(a), \emph{every} geometric point $t \in T_{G}$ has the property that $\text{Exc\,}(\phi_t)$ fails to be $\text{Aut\,}(G)$-invariant. In particular, since $T_{G} \rightarrow \mathcal{M}_{G}$ is surjective, we may choose a geometric point $t \in T_{G}$ with the property that the induced map $$\text{Aut\,}(C^s_t, \{\sigma_{i}(t)\}_{i=1}^{n}) \rightarrow \text{Aut\,}(G)$$ is surjective. (Simply choose a curve $[C^{s}, \{p_i\}_{i=1}^{n}] \in \mathcal{M}_{G}$ with the property that all of its components have identical moduli, and take $t \in T_{G}$ to be a point lying over $[C^{s}, \{p_i\}_{i=1}^{n}]$.) Now we will derive a contradiction to the separatedness of $\mathcal{X}$. Let $(\mathcal{C}^{s} \rightarrow \mathcal{D}elta, \{\sigma_{i}\}_{i=1}^{n})$ be a smoothing of the curve $(C^s_t, \{\sigma_{i}(t)\}_{i=1}^{n})$. By our choice of $(C^s_t, \{\sigma_{i}(t)\}_{i=1}^{n})$, there exist two distinct subcurves of the special fiber $Z_1, Z_2 \subset C^{s}$ and isomorphisms $i_1, i_2:C^{s} \simeq C^s_{t}$ satisfying $i_1(Z_1)=\text{Exc\,}(\phi_t)$, $i_2(Z_2)=\text{Exc\,}(\phi_t)$.
By Proposition \ref{P:Contractions}, there exist birational contractions \[ \xymatrix{ &\mathcal{C}^{s} \ar[dr]^{\phi_2} \ar[dl]_{\phi_1}&\\ \mathcal{C}_1\ar[dr]&&\mathcal{C}_2 \ar[dl]\\ &\mathcal{D}elta&\\ } \] with $\text{Exc\,}(\phi_1)=Z_1$, $\text{Exc\,}(\phi_2)=Z_2$. Since the restrictions of $\phi_1$ and $\phi_2$ to the special fiber are contractions, Lemma \ref{L:TwistingX} implies that the special fibers of $\mathcal{C}_1$ and $\mathcal{C}_2$ both lie in $\mathcal{X}$. Since $\mathcal{X} \subset \mathcal{V}_{g,n}$ is open, the maps $\mathcal{D}elta \rightarrow \mathcal{V}_{g,n}$ induced by $\mathcal{C}_1$ and $\mathcal{C}_2$ both factor through $\mathcal{X}$. Since $Z_{1} \neq Z_{2}$, the rational morphism $\mathcal{C}_{1} \dashrightarrow \mathcal{C}_{2}$ does not extend to an isomorphism, and we conclude that $\mathcal{X}$ is not separated, a contradiction. \end{proof} \appendix \section{Stable modular compactifications of $\mathcal{M}_{2}, \mathcal{M}_{3}, \mathcal{M}_{2,1}$} In this appendix, we give an explicit definition of the relative nef cone $\text{N}ef$ and cone of curves $\overline{\text{N}}_{1}^{+}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})$ as rational closed convex polyhedral cones in $\mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})$ and $\mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})^{\vee}$. In the special cases $(g,n)=(2,0), (3,0), (2,1)$, we enumerate the extremal faces of $\overline{\text{N}}_{1}^{+}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})$ and describe the corresponding $\mathcal{Z}$-stability conditions, as guaranteed by Lemma \ref{L:NefAssignments}. To begin, let $\pi:\mathcal{C} \rightarrow \overline{\mathcal{M}}_{g,n}$ denote the universal curve over the moduli stack of stable curves over an algebraically closed field of characteristic zero. In this setting, the $\mathbb{Q}$-Picard group of $\overline{\mathcal{M}}_{g,n}$ is well-known: we have natural line-bundles $\lambda, \{\psi_i\}, \{ \delta_{i,S} \} \in \mathbb{P}ic(\overline{\mathcal{M}}_{g,n})$, where $\lambda=\text{det} (\pi_*\omega_{\mathcal{C}/\overline{\mathcal{M}}_{g,n}})$, $\psi_{i}:=\sigma_i^*\omega_{\mathcal{C}/\overline{\mathcal{M}}_{g,n}}$, and $\delta_{i,S}$ is the line-bundle corresponding to the reduced irreducible Cartier divisor $\mathcal{D}elta_{i,S} \subset \overline{\mathcal{M}}_{g,n}$. Of course, if $i=0$ (resp. $g$), then we must have $|S| \geq 2$ (resp. $|S| \leq n-2$). Since we have a natural identification $\mathcal{C} \simeq \overline{\mathcal{M}}_{g,n+1}$, we may define elements $\omega_{\pi}, \{\sigma_i\}, \{E_{i,S}\} \in \mathbb{P}ic(\mathcal{C})$ by the formulae: \begin{align*} \omega_{\pi}:&=\psi_{n+1},\\ \sigma_{i}:&=\delta_{0,\{i\} \cup \{n+1\}},&& i \in [1,n] \\ E_{i,S}:&=\delta_{i,S \cup \{n+1\}}.&& i \in [0,g], S \subset [1,n] \end{align*} One should think of $\omega_{\pi}$ as the relative dualizing sheaf of $\pi$, $\sigma_i$ as the line-bundle corresponding to the divisor $\sigma_{i}(\overline{\mathcal{M}}_{g,n}) \subset \mathcal{C}$, and $E_{i,S}$ as the line-bundle corresponding to the irreducible component of $\pi^{-1}(\mathcal{D}elta_{i,S})$ whose fibers over $\mathcal{D}elta_{i,S}$ are curves of genus $i$, marked by the points of $S$. Whenever we write $\{\sigma_i\}$, we consider the index $i$ to run between $1$ and $n$, and whenever we write $\{E_{i,S}\}$ we consider $(i,S)$ to run over a set of indices representing each irreducible component of the boundary of $\overline{\mathcal{M}}_{g,n}$ once, excluding $\mathcal{D}elta_{irr}$ and $\mathcal{D}elta_{g/2, \emptyset}$.
\begin{lemma}\label{L:RelativePic} The classes $\omega_{\pi}$, $\{\sigma_i\}$, and $\{E_{i,S}\}$ generate $\mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})$. Moreover, we have \begin{itemize} \item[1.] If $g \geq 2$, these classes freely generate, i.e. \[ \mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{g,n}) = \mathbb{Q}\{\omega_{\pi}, \{\sigma_i\}, \{E_{i,S}\} \} \] \item[2.] If $g=1$, then the classes $\{\sigma_i\}$ and $\{E_{i,S}\}$ freely generate, i.e. \[ \mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{1,n}) = \mathbb{Q}\{ \{\sigma_i\}, \{E_{i,S}\} \} \] \item[3.]If $g=0$, then the classes $\omega_{\pi}$ and $\{E_{i,S}\}$ freely generate, i.e. \[ \mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{0,n}) = \mathbb{Q}\{\omega_{\pi}, \{E_{i,S}\} \}. \] \end{itemize} \end{lemma} \begin{proof} This follows from the generators and relations for $\mathbb{P}ic(\overline{\mathcal{M}}_{g,n}) \otimes \mathbb{Q}$ described in \cite{AC}. \end{proof} Now let us recall how these generators intersect irreducible components of fibers of $\pi$ (see, for example, \cite{HarMor}). Let $(C^s,\{p_i^s\}_{i=1}^{n})$ be a fiber of the universal curve $\pi: \mathcal{C} \rightarrow \overline{\mathcal{M}}_{g,n}$, and let $G$ be the dual graph of $(C^s,\{p_i^s\}_{i=1}^{n})$. If $Z \subset C^{s}$ is an irreducible component, corresponding to the vertex $v \in G$, then we have \begin{align*} \omega_{\pi}.Z&=2g(v)-2+|v|\\ \sigma_{i}.Z&= \begin{cases} 1&\text{if $v$ is labelled by $p_i$,}\\ 0&\text{otherwise.}\\ \end{cases}\\ E_{i,S}.Z&= \begin{cases} 1&\text{if $v$ has an edge of type-$(i,S)$,}\\ -1&\text{if $v$ has an edge of type-$(i,S)^{c}$,}\\ 0&\text{otherwise},\\ \end{cases} \end{align*} where we say that \emph{$v$ has an edge of type-$(i,S)$} if $v$ meets an edge corresponding to a node that disconnects the curve into pieces of type $(i,S)$ and $(g-i,S^{c})$, and $v$ lies on the piece of type $(g-i,S^{c})$. Given a $\mathbb{Q}$-line bundle $$\mathscr{L}:=a\omega_{\pi}+\sum_{i}b_i\sigma_i+\sum_{i,S}c_{i,S}E_{i,S}, \text{ where } a, \{b_i\}, \{c_{i,S}\} \in \mathbb{Q},$$ $\mathscr{L}$ is nef iff, for every dual graph $G$ and every vertex $v \in G$, we have $$a(\omega_{\pi}.v)+\sum_{i}b_i(\sigma_i.v)+\sum_{i,S}c_{i,S}(E_{i,S}.v) \geq 0,$$ where $\omega_{\pi}.v$, $\sigma_i.v$, $E_{i,S}.v$ are defined by the expressions above. We then define the relative nef cone $\text{N}ef \subset \mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})$ to be the intersection of this finite collection of half-spaces. The fact that $\omega_{\pi}$ is positive on every stable curve implies that these half-spaces have non-empty intersection, hence determine a piecewise-linear closed convex cone. Of course, the \emph{relative cone of curves} is simply defined to be the dual cone $\overline{\text{N}}_{1}^{+}(\mathcal{C}/\overline{\mathcal{M}}_{g,n}):=\text{N}ef^{\vee} \subset \mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{g,n})^{\vee}$. Let us see how this works in practice by computing the relative cone of curves for $\overline{\mathcal{M}}_{2}$, $\overline{\mathcal{M}}_{3}$, and $\overline{\mathcal{M}}_{2,1}$, and describing the stability condition corresponding to each face: already, in these low-genus examples, one sees many new stability conditions that have no counterpart in the existing literature.
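Before turning to the examples, here is a minimal worked instance of this criterion, recorded only for orientation (we consider the unpointed case $n=0$ and switch on only the coefficient $a$). For $\mathscr{L}=a\omega_{\pi}$ and a vertex $v$ of the dual graph of a stable fiber, the criterion reads \[ a\,(\omega_{\pi}.v)\;=\;a\,\bigl(2g(v)-2+|v|\bigr)\;\geq\;0, \] and since every vertex of an unpointed stable curve satisfies $2g(v)-2+|v|>0$, this holds if and only if $a \geq 0$. When $g=2$, where $\omega_{\pi}$ spans the relative $\mathbb{Q}$-Picard group, this is consistent with Example \ref{E:M2} below.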
Throughout the following examples, we will make repeated use of the observation that to determine whether a line-bundle is $\pi$-nef, it is sufficient to intersect it against those fibers of $\pi$ which are maximally-degenerate, i.e. those which correspond to zero strata in $\overline{\mathcal{M}}_{g,n}$. \begin{figure} \caption{The zero-strata of $\overline{\mathcal{M}}_{2}$, $\overline{\mathcal{M}}_{3}$, and $\overline{\mathcal{M}}_{2,1}$.} \label{F:ZeroStrata} \end{figure} \begin{example}[$\mathcal{M}_{2}$]\label{E:M2} By Lemma \ref{L:RelativePic}, the relative $\mathbb{Q}$-Picard group of the universal curve $\mathcal{C} \rightarrow \overline{\mathcal{M}}_{2}$ is given by $$ \mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{2})=\mathbb{Q}\{\omega_{\pi}\}. $$ Thus, any numerically non-trivial $\pi$-nef line bundle on $\mathcal{C}$ is ample, and induces the trivial extremal assignment $\mathcal{Z}(G) = \emptyset$. In fact, it is easy to verify that there are no non-trivial extremal assignments over $\overline{\mathcal{M}}_{2}$ directly from the axioms. Let $G_1$, $G_2$ be dual graphs corresponding to the two zero-strata pictured in Figure \ref{F:ZeroStrata}. By axiom 2, any extremal assignment which picks out one vertex from $G_1$ (or $G_2$) must pick out both vertices, which contradicts axiom 1. We conclude that $\mathcal{Z}(G_1)=\mathcal{Z}(G_2)=\emptyset$. By axiom 3, $\mathcal{Z}$ must be the trivial extremal assignment. In sum, $\overline{\mathcal{M}}_{2}$ is the unique stable modular compactification of $\mathcal{M}_{2}$. \end{example} \begin{example}[$\mathcal{M}_{3}$]\label{E:M3} By Lemma \ref{L:RelativePic}, the relative $\mathbb{Q}$-Picard group of the universal curve $\mathcal{C} \rightarrow \overline{\mathcal{M}}_{3}$ is given by $$ \mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{3})=\mathbb{Q}\{\omega_{\pi}, E\}, $$ where $E:=E_{1}$ is the divisor of elliptic tails in the universal curve. Intersecting the divisor $a\omega_{\pi}+bE$ ($a,b \in \mathbb{Q}$) with the irreducible components of vital stratum (1) in Figure \ref{F:ZeroStrata}, we deduce the inequalities $a+3b \geq 0$ and $a-b \geq 0$. One easily checks that any divisor whose coefficients satisfy these two inequalities automatically satisfies the inequalities arising from the irreducible components in strata (2)-(4). Thus, the nef cone $\overline{\text{N}}^{1}_{+}(\mathcal{C}/\overline{\mathcal{M}}_{3}) \subset \mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{3})$ is defined by $$ \overline{\text{N}}^{1}_{+}(\mathcal{C}/\overline{\mathcal{M}}_{3})=\mathbb{Q}_{\geq 0}\{\omega_{\pi}(E), \omega_{\pi}(-E/3)\}. $$ Thus, the relative cone of curves has two extremal faces, namely $\omega_{\pi}(-E/3)^{\perp}$ and $\omega_{\pi}(E)^{\perp}$. One easily checks that the nef divisor $\omega_{\pi}(E)$ has degree zero on an irreducible component of a fiber of the universal curve $\mathcal{C} \rightarrow \overline{\mathcal{M}}_{3}$ iff this component is contained in the divisor $E$ (i.e. if it is an elliptic tail). Thus, the extremal assignment induced by this divisor coincides with Example \ref{E:FirstAssignments} (2), and the corresponding moduli space replaces elliptic tails by cusps. On the other hand, one easily checks that $\omega_{\pi}(-E/3)$ has degree zero on an irreducible component of a fiber of $\mathcal{C} \rightarrow \overline{\mathcal{M}}_{3}$ iff the fiber has the form $R \cup E_1 \cup E_2 \cup E_3$, where $R$ is a smooth rational curve attached to three distinct elliptic tails $E_1$, $E_2$, and $E_3$, and the component in question is $R$. Since the unique singularity of type $(0,3)$ is the rational triple point (i.e.
the union of the 3 coordinate axes in $\mathbb{A}^{3}$), such curves are replaced in $\overline{\mathcal{M}}_{3}(\mathcal{Z})$ by curves of the form $E_{1} \cup E_{2} \cup E_{3}$ where the three elliptic tails meet in a rational triple point. \begin{comment} Now let us argue that every non-trivial extremal assignment must be one of the two assignments induced by $\pi$-nef line-bundles, so that $\overline{\mathcal{M}}_{3}$, $\overline{\mathcal{M}}_{3}^{ps}$, and the space associated to $\omega_{\pi}(-E_1/3)$ give all possible stable modular compactifications of $\mathcal{M}_{3}$. It suffices to show that if $\mathcal{Z}$ is any extremal assignment, then the only components that $\mathcal{Z}$ can select from the strata (2)-(5) are the elliptic tails. Clearly an extremal assignment cannot pick out any components from zero-strata (2) or (4) since automorphisms of the dual graph act transitively on components in these strata. Now consider zero-stratum (5), and suppose there was an extremal assignment $\mathcal{Z}$ picking one of the two smooth rational components. Then by axiom 2, $\mathcal{Z}$ would necessarily pick out both smooth rational components. Now applying axiom 3 to the specializations pictured in Figure \ref{F:Genus3Specialization}, we conclude that $\mathcal{Z}$ must also pick out an elliptic tail. Of course, once $\mathcal{Z}$ picks out one elliptic tail in one curve, axiom 3 shows that $\mathcal{Z}$ picks out all elliptic tails in all curves. In particular, we conclude that $\mathcal{Z}$ picks out every irreducible component of graph (5), a contradiction. \begin{figure} \caption{A degeneration of one vital stratum, in which two internal nodes are smoothed to create an elliptic bridge, followed by a specialization to a rational spine together with an elliptic tail.} \label{F:Genus3Specialization} \end{figure} Finally consider zero stratum (3), and suppose there was an extremal assignment picking out the vertical rational spine. One can construct a pair of specializations showing that such an assignment would also pick out one of the smooth rational components in zero-stratum (5), which is impossible by the arguments of the previous paragraph. A similar argument shows that $\mathcal{Z}$ cannot pick out either of the remaining smooth rational components in graph (3). \end{comment} \end{example} \begin{figure} \caption{Faces of the relative cone of curves $\overline{\text{N}}_{1}^{+}(\mathcal{C}/\overline{\mathcal{M}}_{2,1})$.} \label{F:M_{2,1}Graphs} \end{figure} \begin{example}[$\mathcal{M}_{2,1}$]\label{E:M21} By Lemma \ref{L:RelativePic}, the relative $\mathbb{Q}$-Picard group of the universal curve $\mathcal{C} \rightarrow \overline{\mathcal{M}}_{2,1}$ is given by $$ \mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{2,1})=\mathbb{Q}\{\omega_{\pi}, \sigma, E\}, $$ where $\sigma:=\sigma_1$ is the universal section, and $E:=E_{1, \emptyset}$ is the divisor of unmarked elliptic tails in the universal curve. Intersecting the divisor $a\omega_{\pi}+b\sigma+cE$ ($a,b,c \in \mathbb{Q}$) with the three irreducible components of vital strata (1)-(3) in Figure \ref{F:ZeroStrata}, we deduce the following inequalities for the nef cone: \begin{eqnarray*} \text{\underline{Stratum 1}:}&\text{\underline{Stratum 2}:}&\text{\underline{Stratum 3}:}\\ b \geq 0 & a-b \geq 0 & b \geq 0\\ a+c \geq 0& b+2c \geq 0 & a \geq 0\\ a-c \geq 0 & b+2c \geq 0 & a \geq 0 \end{eqnarray*} One easily checks that this intersection of half-spaces is simply the polyhedral cone generated by the vectors $\{ (1,0,0), (1,0,1), (0,1,0), (1,2,-1) \}$.
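For the reader's convenience (this is only a change of notation): in the coordinates $(a,b,c)$ of $a\omega_{\pi}+b\sigma+cE$ used above, these four generators are \[ (1,0,0)=\omega_{\pi}, \qquad (1,0,1)=\omega_{\pi}(E), \qquad (0,1,0)=\sigma, \qquad (1,2,-1)=\omega_{\pi}(2\sigma-E), \] which is the form in which they appear below.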
Thus, the nef cone $\overline{\text{N}}^{1}_{+}(\mathcal{C}/\overline{\mathcal{M}}_{2,1}) \subset \mathbb{P}ic_{\mathbb{Q}}(\mathcal{C}/\overline{\mathcal{M}}_{2,1})$ is defined by $$ \overline{\text{N}}^{1}_{+}(\mathcal{C}/\overline{\mathcal{M}}_{2,1})=\mathbb{Q}_{\geq 0}\{\omega_{\pi},\, \omega_{\pi}(E),\, \sigma,\, \omega_{\pi}(2\sigma-E)\}. $$ It follows that the cone of curves $\overline{\text{N}}_{1}^{+}(\mathcal{C}/\overline{\mathcal{M}}_{2,1})$ has eight extremal faces: the codimension-one faces are given by $\omega_{\pi}^{\perp},\, \omega_{\pi}(E)^{\perp},\, \sigma^{\perp},\, \omega_{\pi}(2\sigma-E)^{\perp}$, while the codimension-two faces are given by $\omega_{\pi}^{\perp} \cap \omega_{\pi}(E)^{\perp},\, \omega_{\pi}(E)^{\perp} \cap \sigma^{\perp},\, \sigma^{\perp} \cap \omega_{\pi}(2\sigma-E)^{\perp},\, \omega_{\pi}(2\sigma-E)^{\perp} \cap \omega_{\pi}^{\perp}$. The irreducible components of the vital strata (1)-(3) contained in these faces are displayed in Figure \ref{F:M_{2,1}Graphs}. In addition, we have indicated the singular curves that arise in the alternate moduli functors associated to these faces: For example, associated to $\omega_{\pi}^{\perp}$, we see only nodal curves, but the marked point is allowed to pass through the node. Associated to $\omega_{\pi}(E)^{\perp}$, we see the same phenomenon as well as elliptic tails replaced by cusps. Associated to $\sigma^{\perp}$, we see genus-one bridges being replaced by the two isomorphism classes of singularities of type (1,2), namely tacnodes and a planar cusp with a smooth transverse branch. Finally, associated to $\omega_{\pi}(2\sigma-E)^{\perp}$, we see both an unmarked rational curve replaced by a rational triple point and a marked rational curve replaced by a marked node. \begin{comment} We should remark that for $\mathcal{Z}=\omega_{\pi}(2\sigma-E)^{\perp}$, the associated rational map $\phi: \overline{\mathcal{M}}_{2,1} \dashrightarrow \overline{\mathcal{M}}_{2,1}(\mathcal{Z})$ is not regular. This can be see as follows: Let $C^s=E \cup R$ be the reducible stable curve consisting of an elliptic bridge and a smooth rational bridge, with the marked point contained on $R$. Let $\mathcal{C}^s \rightarrow \mathcal{D}elta$ be a one-parameter smoothing of $C^s$, and let $\mathcal{C} \rightarrow \mathcal{D}elta$ be the associated $\mathcal{Z}$-stable model of the generic fiber, obtained by contracting $E$. If $\phi$ were regular, then the central fiber $C$ would be independent of the smoothing. One can check, however, that there exist smoothings in which contraction of the elliptic bridge produces a tacnode and others where it produces a cusp with transverse branch. Thus, $\phi$ cannot be regular. A straightforward albeit tedious combinatorial argument, similar to what was carried out in the previous example, shows that in fact that there every non-trivial extremal assignment over $\overline{\mathcal{M}}_{2,1}$ is one of the eight listed above. \end{comment} \end{example} \begin{comment} \section{Singularities of low genus}\label{S:Singularities} In Proposition \ref{P:Contractions}, we saw that one can always contract a subcurve of the special fiber of a one-parameter family of stable curves to obtain a new singular special fiber. In this section, we consider the problem of determining which singularity arises when contracting a specific subcurve in a specific one-parameter family. 
Since any smoothable isolated curve singularity of genus $g$ arises via the contraction of a genus $g$ subcurve in \emph{some} family of stable curves, we cannot hope to answer this question in complete generality. Nevertheless, by classifying curve singularities of genus zero and one (Proposition \ref{P:Classification}), we can at least understand what curve singularities arise by contracting rational and elliptic subcurves. This will enable an explicit description of the class of $\mathcal{Z}$-stable curves for many extremal assignments $\mathcal{Z}$. Let us provide some geometric intuition for the classification in Proposition \ref{P:Classification}. Throughout the following discussion, $C$ is a reduced connected curve over an algebraically closed field $k$. Recall that if $p \in C$ is a curve singularity, then $\delta(p)$ is the number of conditions for a function to descend from $\tilde{C}$ to $C$. Of course, if a singularity has $m(p)$ branches, there are $m(p)-1$ obviously necessary conditions for a function $f \in \mathscr{O}_{\tilde{C}}$ to descend: $f$ must have the same value at each point in $\pi^{-1}(p)$. Thus, the genus $g(p)=\delta(p)-m(p)+1$ is the number of conditions for a function to descend \emph{beyond the obvious ones}. From this description, it is clear that the genus is always non-negative integer. For each integer $m \geq 2$, there is a unique singularity with $m$ branches and genus zero, namely the union of the $m$ coordinate axes in $\mathbb{A}^{m}$, and we call these rational $m$-fold points. \begin{definition}[The rational $m$-fold point] We say that $p \in C$ is a \emph{rational $m$-fold point} if \begin{align*} \hat{O}_{C,p} &\simeq k[[x_1, \ldots, x_m]]/I_{m},\\ I_{m}&:=(x_ix_j: 1 \leq i<j \leq m). \end{align*} \end{definition} In the case of genus one, the situation is more complicated. It turns out that, for each integer $m \geq 1$, there is a unique Gorenstein curve singularity with $m$ branches and genus one. \begin{definition}[The Gorenstein elliptic $m$-fold point] We say that $p \in C$ is a \emph{Gorenstein elliptic $m$-fold point} if \begin{align*} \hat{O}_{C,p} \simeq & \begin{cases} k[[x,y]]/(y^2-x^3) & m=1\,\,\,\text{(ordinary cusp)}\\ k[[x,y]]/(y^2-yx^2) & m=2 \,\,\,\text{(ordinary tacnode)} \\ k[[x,y]]/(x^2y-xy^2) & m=3 \,\,\, \text{(planar triple-point)}\\ k[[x_1, \ldots, x_{m-1}]]/J_m & m \geq 4, \text{($m$ general lines through the origin in $\mathbb{A}^{m-1}$),}\\ \end{cases}\\ &\,\,\,\,\, J_{m}:= \left( x_{h}x_i-x_{h}x_j \, : \,\, i,j,h \in \{1, \ldots, m-1\} \text{ distinct} \right). \end{align*} \end{definition} It is easy to see, however, that this sequence cannot comprise all singularities of genus one. Given an arbitrary isolated curve singularity, adjoining a smooth branch transverse to the tangent space of the original singularity increases $\delta(p)$ and $m(p)$ by one, thereby leaving $g(p)$ unchanged. Thus, since an ordinary cusp has genus one, so does the spatial singularity obtained by taking a cusp and a smooth branch transverse to the tangent plane of the cusp. It turns out that all genus one singularities are generated by adding transverse branches to the Gorenstein singularities of genus one. 
\begin{figure} \caption{Isolated curve singularites in genus zero and one.} \label{F:Singularities} \end{figure} \begin{definition}[The general elliptic $m$-fold point] We say that $p \in C$ is an \emph{elliptic $m$-fold point} if there exists $k \leq m$ such that \begin{align*} \hat{O}_{C,p} \simeq & \begin{cases} k[[x,y, z_1, \ldots, z_{m-1}]]/(y^2-x^3) \cap I_{m-1} & m=1\,\,\,\text{(cusp + transverse branches)}\\ k[[x,y, z_1, \ldots, z_{m-2}]]/(y^2-yx^2) \cap I_{m-2} & m=2 \,\,\,\text{(tacnode + transverse branches)} \\ k[[x,y, z_1, \ldots, z_{m-3}]]/(x^2y-xy^2) \cap I_{m-3} & m=3 \,\,\, \text{(planar triple point + transverse branches)}\\ k[[x_1, \ldots, x_{k-1}, z_{1}, \ldots, z_{m-k}]]/J_k \cap I_{m-k} & m \geq 4, \text{(Gorenstein $k$-fold point + transverse branches)}\\ \end{cases}\\ &\,\,\,\,\,\,\,\, I_{m}:=\left(z_iz_j: i,j \in \{1, \ldots, m\} \text{ distinct} \right).\\ &\,\,\,\,\,\,\,\, J_{k}:= \left( x_{h}x_i-x_{h}x_j \, : \,\, i,j,h \in \{1, \ldots, k-1\} \text{ distinct} \right). \end{align*} \end{definition} We summarize this discussion in the following proposition. \begin{proposition}\label{P:Classification} Let $p \in C$ be an isolated curve singularity. \begin{itemize} \item[(1)] If $g(p)=0, m(p)=m$, then $p \in C$ is a rational $m$-fold point. \item[(2)] If $g(p)=1, m(p)=m$, the $p \in C$ is an elliptic $m$-fold point. \end{itemize} \end{proposition} Proposition \ref{P:Classification} is proved in \cite{Stevens} when $k=\mathbb{C}$. Here, we give a purely algebraic proof, applicable in any characteristic. Let us switch to ring-theoretic notation, and set \begin{align*} &R:=\hat{\mathscr{O}}_{C,p}.\\ &\tilde{R}:=\widetilde{R/P_1} \oplus \ldots \oplus \widetilde{R/P_{k(p)}}, \end{align*} where $P_1, \ldots, P_{m}$ are the minimal primes of $R$, and $\widetilde{R/P_i}$ denotes the integral closure of $R/P_i$. Note that $$ \tilde{R} \simeq k[[t_1]] \oplus \ldots \oplus k[[t_m]], $$ since each $\widetilde{R/P_i}$ is a complete, regular local ring of dimension one over $k$. Let $m_R$ be the maximal ideal of $R$, and let $m_{\tilde{R}}$ be the ideal $(t_1) \oplus \ldots \oplus (t_m)$. Since $R$ is reduced, we have an embedding $R \hookrightarrow \tilde{R},$ with $m_R=(m_{\tilde{R}} \cap R)$. Note that the $R$-module $\tilde{R}/R$ has a natural grading given by powers of $m_{\tilde{R}}$. Setting $$ (\tilde{R}/R)^{i}:=m_{\tilde{R}}^i/((m_{\tilde{R}}^i \cap R)+m_{\tilde{R}}^{i+1}), $$ we have the following trivial observations: \begin{itemize} \item[(1)] $\delta(p)=\sum_{i \geq 0} \text{dim\,}_{k} (\tilde{R}/R)^i$ \item[(2)] $g(p)=\sum_{i \geq 1} \text{dim\,}_{k} (\tilde{R}/R)^i$ \item[(3)] $(\tilde{R}/R)^i=(\tilde{R}/R)^j=0 \text{im\,}plies (\tilde{R}/R)^{i+j}=0$ for any $i,j \geq 1$. \end{itemize} With these preliminaries, the proof of Proposition \ref{P:Classification} is straightforward, albeit somewhat tedious. The idea is to find a basis for $m_{R}/m_{R}^2$ in terms of the local coordinates $t_1, \ldots, t_m$. \begin{proof}[Proof of Proposition \ref{P:Classification} (1)] If $g(p)=0$, then $(\tilde{R}/R)^i=0$ for all $i>0$, so $m_{R}=m_{\tilde{R}}$. Thus, we may define a local homomorphism of complete local rings \begin{align*} k[[x_1, \ldots, x_m]]&\rightarrow R \subset k[[t_1]] \oplus \ldots \oplus k[[t_m]]\\ x_i &\rightarrow (0, \ldots,0, t_i, 0, \ldots 0) \end{align*} This homomorphism is surjective since it is surjective on tangent spaces, and the kernel is precisely the ideal $ I_{m}=(x_ix_j, i<j). 
$\\ \end{proof} \begin{proof}[Proof of Proposition \ref{P:Classification} (2)] Since $g(p)=1$, observations (2) and (3) imply that \begin{align*} \text{dim\,}_{k} (\tilde{R}/R)^1&=1\\ \text{dim\,}_{k} (\tilde{R}/R)^i&=0 \text{ for all $i \geq 1$}. \end{align*} Put differently, this says that $ m_{R} \supset m_{\tilde{R}}^2, $ while $ m_{R}/m_{\tilde{R}}^2 \subset m_{\tilde{R}}/m_{\tilde{R}}^2 $ is a codimension-one subspace. By Gaussian elimination, we may choose $f_1, \ldots, f_{m-1} \in m_{R}$ such that \begin{equation*} \left( \begin{matrix} f_1\\ \vdots\\ \vdots\\ f_{m-1} \end{matrix} \right)\equiv \left( \begin{matrix} t_1& 0& \hdots & 0 & a_1t_{m-1}\\ 0&t_2& \ddots & \vdots & a_2t_{m-1} \\ \vdots& \ddots& \ddots& 0& \vdots \\ 0 & \hdots &0 & t_{m-2} & a_{m-1}t_{m-1} \end{matrix} \right) \begin{matrix} \mod m_{\tilde{R}}^2 \end{matrix} \end{equation*} for some $a_1, \ldots, a_{m-1} \in k.$ We claim that we may assume $a_1, \ldots, a_{m-1}=1$. To see this, first observe that if $a_i=0$ for any $i$, then the $i^{\text{th}}$ branch of $p$ is smooth and transverse to the remaining branches. Thus, the singularity is analytically isomorphic to the union of a smooth transverse branch and a genus one singularity with $m-1$ branches. By induction on the number of branches, we conclude that $p$ is an elliptic $m$-fold point. Thus, we may assume that $a_i \in k^{*}$ for each $i=1, \ldots, m$. Making a change of uniformizer $t_i'=a_i^{-1}t_i$ and replacing $f_i$ by $a_{i}^{-1}f_i$, we may assume that each $a_i=1$. At this point, the proof breaks into three cases: \begin{itemize} \item[I.] $(m \geq 3)$ We claim that $f_1, \ldots, f_{m-1}$ give a basis for $m_{R}/m_{R}^2$. Clearly, it is enough to show that $m_{R}^2=m_{\tilde{R}}^2$. Since $m_{R}^2 \supset m_{\tilde{R}}^4$, it is enough to show that $$m_{R}^2/m_{\tilde{R}}^4 \hookrightarrow m_{\tilde{R}}^2/m_{\tilde{R}}^4$$ is surjective. Using the matrix expressions for the $\{f_i\}$, one easily verifies that $f_1^{2}, \ldots, f_{m-1}^2, f_1f_2$ map to a basis of $m_{\tilde{R}}^2/m_{\tilde{R}}^3$, and $f_1^{3}, \ldots, f_{m-1}^3, f_1^2f_2$ map to a basis of $m_{\tilde{R}}^3/m_{\tilde{R}}^4$. Since $f_1, \ldots, f_{m-1}$ give a basis of $m_{R}/m_{R}^2$, we have a surjective hoomomorphism \begin{align*} k[[x_1, \ldots, x_{m-1}]] &\rightarrow R \subset k[[t_1]] \oplus \ldots \oplus k[[t_m]]\\ x_i &\rightarrow (0, \ldots,0, t_i, 0, \ldots 0,t_{m-1}), \end{align*} and the kernel is precisely $ I=(x_{h}(x_i-x_j) \text{ with } i,j,h \in \{1, \ldots, m-1\} \text{ distinct}). $\\ \item[II.] $(m=2)$ By the preceeding analysis, there exists $f_1 \in m_{R}$ such that $$f_1 \equiv (t_1\,\,\, t_2) \mod m_{R}^2.$$ Since $m_{R} \supset m_{\tilde{R}}^2$, we may choose $f_2 \in m_{R}$ such that $f_1^2, f_2$ map to a basis of $m_{\tilde{R}}^2/m_{\tilde{R}}^3$. After Gaussian elimination, we may assume that \begin{equation*} \left( \begin{matrix} f_1^2\\ f_{2} \end{matrix} \right)\equiv \left( \begin{matrix} t_1^2& t_2^2\\ 0 & t_2^2\\ \end{matrix} \right) \begin{matrix} \mod m_{\tilde{R}}^3 \end{matrix} \end{equation*} We claim that $f_1$ and $f_2$ form a basis for $m_{R}/m_{R}^2$. Since $f_1, f_2, f_1^2$ form a basis for $m_{R}/m_{\tilde{R}}^3$, it suffices to show that $m_{R}^2 \cap m_{\tilde{R}}^3=m_{\tilde{R}}^3$. Since $m_{R}^2 \supset m_{\tilde{R}}^4$, it is enough to show that $$ (m_{R}^2 \cap m_{\tilde{R}}^3 )/m_{\tilde{R}}^4 \hookrightarrow m_{\tilde{R}}^3/m_{\tilde{R}}^4 $$ is surjective. 
From the matrix expression for the $\{f_i\}$, one easily sees that $f_1^3$ and $f_1f_2$ give a basis of $m_{\tilde{R}}^3/m_{\tilde{R}}^4$. Since $f_1, f_2$ give a basis of $m_{R}/m_{R}^2$, we have a surjective homomorphism of complete local rings \begin{align*} k[[x,y]] &\rightarrow R \subset k[[t_1]] \oplus k[[t_2]]\\ x &\rightarrow (t_1,t_2),\\ y &\rightarrow (0,t_2^2), \end{align*} with kernel $y(y-x^2)$.\\ \item[III.] $(m=1)$ Since $m_{R}/m_{\tilde{R}}^2 \subset m_{\tilde{R}}/m_{\tilde{R}}^2$ is codimension-one, we have $m_{R}=m_{\tilde{R}}^2$. Thus, we may pick $f_1, f_2 \in m_{R}$ so that \begin{equation*} \left( \begin{matrix} f_1\\ f_{2} \end{matrix} \right)\equiv \left( \begin{matrix} t_1^2\\ t_1^3\\ \end{matrix} \right) \begin{matrix} \mod m_{\tilde{R}}^4. \end{matrix} \end{equation*} Since $m_{R}^2 = m_{\tilde{R}}^4$, $f_1$ and $f_2$ give a basis for $m_{R}/m_{R}^2$. Thus, the homomorphism \begin{align*} k[[x,y]]&\rightarrow R \subset k[[t_1]] \\ x &\rightarrow (t_1^2),\\ y &\rightarrow (t_1^3), \end{align*} is surjective, with kernel $y^2-x^3$. \end{itemize} \end{proof} \end{comment} \section{The moduli stack of (all) curves (by Jack Hall)}\label{S:Stack} The purpose of this appendix is to prove that the moduli stack of curves is algebraic. The statement is well-known to experts and appears in \cite[Proposition 2.3]{DHS}, but it seems worthwhile to give a self-contained proof which does not rely on the theory of Artin approximation. In addition, corollaries \ref{C:Stackgn} and \ref{C:Stackgne} are used in the main body of the paper and do not appear elsewhere in the literature. Throughout this appendix, we follow the notations and terminology of \cite{LMB}. Let $(\text{Aff}/S)$ denote the category of affine $S$-schemes. If $S = \mathcal{S}pec \mathbb{Z}$, we write simply $(\text{Aff})$. We define a category $\mathcal{U}$, fibered in groupoids over $(\text{Aff})$, whose objects are flat, proper, finitely-presented morphisms of relative dimension one $\mathcal{C} \to S$, where $\mathcal{C}$ is an algebraic space and $S$ is an affine scheme, and whose arrows are Cartesian diagrams: \[ \xymatrix{\mathcal{C}' \ar[r] \ar[d] & \mathcal{C} \ar[d] \\ S' \ar[r] & S} \] Clearly, $\mathcal{U}$ is a stack over $\mathcal{S}pec \mathbb{Z}$. We will prove \begin{theorem}\label{T:Stack} $\mathcal{U}$ is an algebraic stack, locally of finite type over $\mathcal{S}pec \mathbb{Z}$. \end{theorem} \begin{proof}[Proof of Theorem \ref{T:Stack}] To prove that $\mathcal{U}$ is algebraic and locally of finite type over $\mathbb{Z}$, we must show: \begin{enumerate} \item the diagonal $\mathcal{D}elta : \mathcal{U} \to \mathcal{U}\times \mathcal{U}$ is representable, quasi-compact, and separated. This is done in Section B.1. \item There is an algebraic space $U$, locally of finite type over $\mathcal{S}pec \mathbb{Z}$, together with a smooth, surjective 1-morphism $U \to \mathcal{U}$. This is done in Section B.2. \end{enumerate} \end{proof} Let $\mathcal{U}_{g,n}$ (resp. $\mathcal{U}_{g,n,d}$) be the stack of flat, proper, finitely presented morphisms $\mathcal{C} \to S$, together with $n$ sections, whose geometric fibers are reduced, connected curves of arithmetic genus $g$ (resp. with no more than $d$ irreducible components). The following corollaries of Theorem \ref{T:Stack} will be proved in Section B.3. \begin{corollary}\label{C:Stackgn} $\mathcal{U}_{g,n}$ is an algebraic stack, locally of finite type over $\mathcal{S}pec \mathbb{Z}$. 
\end{corollary} \begin{corollary}\label{C:Stackgne} $\mathcal{U}_{g,n,d}$ is an algebraic stack, of finite type over $\mathcal{S}pec\mathbb{Z}$. \end{corollary} Throughout this appendix, we make free use of the fact that $\mathcal{U}$ is limit-preserving. Since any ring can be written as an inductive limit of finitely-generated $\mathbb{Z}$-algebras, this allows one to check properties of $\mathcal{U}$ using test schemes which are of finite type over $\mathbb{Z}$ (in particular, noetherian). \begin{lemma} If $A = \varinjlim_{i}A_{i}$ is an inductive system of rings, then there is an equivalence of groupoids: $$ \mathcal{U}(\mathcal{S}pec A) \simeq \varprojlim_{i} \mathcal{U}(\mathcal{S}pec A_{i}). $$ \end{lemma} \begin{proof} By \cite[4.18.1]{LMB}, the category of finitely-presented algebraic spaces over affine base-schemes is a limit-preserving stack over $(\text{Aff})$. The fact that a proper, flat, relative-dimension one morphism over $\mathcal{S}pec A$ is induced from a proper, flat, relative-dimension one morphism over some $\mathcal{S}pec A_{i}$, then follows from \cite[IV.3.1]{Knutson}, \cite[11.2.6]{EGAIV}, and \cite[4.1.4]{EGAIV} respectively. \end{proof} \subsubsection*{B.1 Representability of the diagonal}\label{S:Diagonal} In this section, we prove that the diagonal morphism $\mathcal{D}elta : \mathcal{U} \to \mathcal{U} \times \mathcal{U}$ is representable, finitely presented and separated. Equivalently, we must show that if $\pi_1 : \mathcal{C}_1 \to S$, $\pi_2 : \mathcal{C}_2 \to S$ are two objects of $\mathcal{U}$, then the sheaf $\mathcal{I}\!som_S(\pi_1,\pi_2)$ is representable by an algebraic space, finitely presented and separated over $S$. Recall that if $\pi_1: \mathcal{C}_1 \rightarrow S$ and $\pi_2: \mathcal{C}_2 \rightarrow S$ are proper finitely-presented morphisms over an affine scheme $S$ (not necessarily curves), then $\mathscr{H}om_S(\pi_1,\pi_2)$ and $\mathcal{I}\!som_S(\pi_1,\pi_2)$ are the sheaves over $(\text{Aff}/S)$ whose sections over $T \rightarrow S$ are given by $T$-morphisms (resp. $T$-isomorphisms) $\mathcal{C}_{1} \times_{S} T \to \mathcal{C}_{2} \times_{S} T$. Also recall that if $\mathcal{C} \rightarrow S$ is proper and finitely-presented, the Hilbert functor $\mathscr{H}ilb_S(\mathcal{C})$ is the sheaf over $(\text{Aff}/S)$ whose sections over $T \rightarrow S$ are given by closed subschemes $\mathcal{Z} \subset \mathcal{C} \times_{S} T$, flat and finitely-presented over $T$. If $\mathcal{C} \rightarrow S$ is projective, then $\mathscr{H}ilb_{S}(\mathcal{C})$ is representable by an $S$-scheme of the form $\coprod_{p(t) \in \mathbb{Z}[t]}H_{p(t)}$, where $H_{p(t)}$ is a projective scheme over $S$ parametrizing families with Hilbert polynomial $p(t)$. We will show that objects $\mathcal{C} \to S$ of $\mathcal{U}$ are \'etale-locally projective (Lemma \ref{L:Fppfprojective}) and the representability of $\mathcal{I}\!som_{S}(\pi_1,\pi_2)$ will be deduced from the representability of the Hilbert scheme for finitely presented projective morphisms. \begin{lemma}\label{L:Fppfprojective} Let $\pi : C \to S$ be a proper, finitely presented morphism of algebraic spaces. If $s\in S$ is a closed point such that $\text{dim\,}_{\Bbbk(s)} C_s \leq 1$, then there is an \'etale neighbourhood $(U,u)$ of $(S,s)$ such that $C\times_S U \to U$ is projective.
\end{lemma} \begin{proof} The statement is local on $S$ for the \'etale topology, and by the standard limit methods we reduce immediately to the following situation: $S = \mathcal{S}pec R$, where $R$ is an excellent, strictly henselian local ring and $s \in S$ is the unique closed point. First, we assume that $C$ is a scheme. Take $C_s\to s$ to denote the special fiber of $C\to S$. Since $C_s$ is proper and of dimension 1 over a field, it is manifestly projective. Let $\mathscr{L}_s$ be an ample line bundle on $C_s$; then by \cite[Proposition 4.1]{SGA4.5}, we may lift it to a line bundle $\mathscr{L}$ on $C$. By \cite[Theoreme 4.7.1]{EGAIII}, one concludes that $\mathscr{L}$ is ample for $C\to S$ and thus, $C\to S$ is projective. Next, assume that $C$ is a reduced, normal algebraic space; then by \cite[Cor. 16.6.2]{LMB}, there is an isomorphism of algebraic spaces $[C'/G]\to C$, where $C'$ is a scheme and $G$ is a finite group acting freely on $C'$. By the above, $C'$ is projective, and since the quotient of a projective scheme by a free action of a finite group is projective, one concludes that $C$ is projective. Finally, take $C$ to be general; then the usual exponential sequence, coupled with basic facts about the \'etale site on $C$, implies that if $C''$ is the normalization of the reduction of $C$, the pullback $\mathbb{P}ic C \to \mathbb{P}ic C''$ is surjective. Since $S$ is excellent, $C'' \to S$ is proper and satisfies the hypotheses of the previous paragraph and is consequently projective. Lifting an ample line bundle on $C''$ to $C$, we conclude that this lift is ample ($C'' \to C$ is finite) and hence $C\to S$ is projective. \end{proof} \begin{corollary}\label{C:RepDiagonalStep}\label{C:RepDiagonal} If $\pi_1 : \mathcal{C}_1 \to S$, $\pi_2 : \mathcal{C}_2 \to S$ are objects of $\mathcal{U}$, then the sheaves $\mathscr{H}om_S(\pi_1,\pi_2)$ and $\mathcal{I}\!som_{S}(\pi_1,\pi_2)$ are both representable by finitely presented and separated algebraic $S$-spaces. \end{corollary} \begin{proof} By Lemma \ref{L:Fppfprojective}, there is an \'etale surjection $T \to S$ such that for $i=1$, $2$, the pullbacks, $\pi_{i,T} : \mathcal{C}_{i,T}\to T$, are projective, flat and finitely presented. The inclusions $\mathcal{I}\!som_{T}(\pi_{1,T},\pi_{2,T}) \subset \mathscr{H}om_{T}(\pi_{1,T},\pi_{2,T}) \subset \mathscr{H}ilb_{T}(\mathcal{C}_{1,T} \times_{T} \mathcal{C}_{2,T})$ are representable by finitely-presented open immersions.\footnote{$\mathcal{U}$ is limit preserving, so we may assume that $T$ is noetherian. The assertion for the second inclusion follows from the first. Indeed, this inclusion is given by the graph morphism and those families of closed subschemes of $\mathcal{C}_{1,T} \times_T \mathcal{C}_{2,T}$ which are graphs are precisely those families for which projection onto the first factor is an isomorphism. The first inclusion is covered by \cite[Prop. 4.6.7(ii)]{EGAIII}.} From the existence of the Hilbert scheme for finitely presented projective morphisms, we make the following two observations: \begin{enumerate} \item $\mathscr{H}om_S(\pi_1,\pi_2) \times_{S} T \simeq \mathscr{H}om_T(\pi_{1,T},\pi_{2,T})$ is representable by a separated and locally of finite type $S$-scheme. In particular, the morphism $\mathscr{H}om_T(\pi_{1,T},\pi_{2,T}) \to \mathscr{H}om_S(\pi_1,\pi_2)$ is \'etale and surjective. \item The map $\mathscr{H}om_T \times_{\mathscr{H}om_S } \mathscr{H}om_T \rightarrow \mathscr{H}om_T \times_{S} \mathscr{H}om_T$ is a quasicompact, closed immersion.
Indeed, this is simply the locus where two separated morphisms of schemes agree. \end{enumerate} Putting these together, one concludes (by the definition of an algebraic space) that $\mathscr{H}om_S(\pi_1,\pi_2)$ is representable by a separated and locally of finite type algebraic $S$-space. Since finitely presented open immersions are local for the \'etale topology, we deduce the corresponding result for $\mathcal{I}\!som_S(\pi_1,\pi_2)$. All that remains to check is that $\mathscr{H}om_T$ is quasicompact. Taking a Segre embedding of the product $\mathcal{C}_{1,T}\times_T \mathcal{C}_{2,T}$ into some fixed projective space $\mathbb{P}^M_T$, an application of \cite[Theorem B.7]{FGA_exp} allows one to conclude that graphs of morphisms $\mathcal{C}_1 \times_S T \rightarrow \mathcal{C}_2 \times_S T$ all have the same Hilbert polynomial and consequently, $\mathscr{H}om_T$ is quasi-projective and in particular, quasi-compact. \end{proof} \subsubsection*{B.2 Existence of a smooth cover}\label{S:Cover} In this section, we prove that $\mathcal{U}$ admits a smooth surjective cover by a scheme $U \rightarrow \mathcal{U}$. The key point is that if $C$ is a proper one-dimensional scheme over an algebraically closed field, then $C$ admits an embedding $C \hookrightarrow \mathbb{P}^n$ such that the natural map from embedded deformations to abstract deformations is smooth and surjective (Lemma \ref{L:DefSurjective}) at $[C]$. Thus, we may build our atlas as a disjoint union of open subschemes of Hilbert schemes. The key lemma is well-known in the local complete intersection case, but we need the statement for an arbitrary one-dimensional scheme, and this requires the cotangent complex. Throughout this section, $k$ denotes an algebraically closed ground field. If $X \subset Y$ are $k$-schemes, we let $\mathcal{D}ef_X$ and $\mathcal{D}ef_{X \subset Y}$ denote the functor of abstract and embedded deformations respectively. (For the basic definitions of deformation theory, the reader may consult \cite{Sernesi}). \begin{lemma}\label{L:DefSurjective} Suppose that we have an embedding $i:X \hookrightarrow Y$, where $X$ is a proper $k$-scheme and $Y$ is a smooth $k$-scheme. If $H^1(X,i^*T_{Y/k})=0$, then \emph{$\mathcal{D}ef_{X \subset Y} \rightarrow \mathcal{D}ef_{X}$} is formally smooth. \end{lemma} \begin{proof} We must show that for any small extension $A' \to A$ with square-zero kernel, the map \[ \mathcal{D}ef_{X \subset Y}(A') \to \mathcal{D}ef_{X}(A') \times_{\mathcal{D}ef_X(A)} \mathcal{D}ef_{X \subset Y}(A) \] is surjective. Consider a diagram \[ \xymatrix{ X_{A} \ar@{^{(}->}[r] \ar[d] & Y_{A}:=Y \times_{\mathcal{S}pec k} \mathcal{S}pec A \ar[d] \\ X_{A'} \ar@{-->}[r] & Y_{A'}:=Y \times_{\mathcal{S}pec k} \mathcal{S}pec A'\\ } \] where $[X_{A'}] \in \mathcal{D}ef_{X}(A')$ and $[X_{A} \hookrightarrow Y_{A}] \in \mathcal{D}ef_{X \subset Y}(A)$ each restrict to $[X_{A}] \in \mathcal{D}ef_{X}(A)$. We must show that there exists a map $X_{A'} \rightarrow Y_{A'}$ (any such map is automatically a closed embedding). Let us abuse notation by writing $A$ (resp. \!$A'$) for $\mathcal{S}pec A$ (resp. \!$\mathcal{S}pec A'$). 
The triple $X_A \xrightarrow{f} Y_{A'} \to A'$ gives a distinguished triangle of complexes \cite[II.2.1.2]{Illusie}: \[ \xymatrix{f^*L_{Y_{A'}/A'} \ar[r] & L_{X_A/A'} \ar[r] & L_{X_A/Y_{A'}}}, \] and the long exact sequence associated to $\text{\rm Hom}_{X_A}(-,\mathscr{O}_{X_A})$ gives: \[ \xymatrix{\text{Ext\,}^1_{X_A}(L_{X_A/Y_{A'}}, \mathscr{O}_{X_A})\ar[r] & \text{Ext\,}^1_{X_A}(L_{X_A/A'},\mathscr{O}_{X_A}) \ar[r] & \text{Ext\,}^1_{X_A}(f^*L_{Y_{A'}/A'},\mathscr{O}_{X_A})}. \] We claim that $\text{Ext\,}^1_{X_A}(f^*L_{Y_{A'}/A'},\mathscr{O}_{X_A})=0$. Since $Y_{A'} \to A'$ is smooth, $L_{Y_{A'}/A'} \cong \Omega_{Y_{A'}/A'}$ and $ \text{Ext\,}^1_{X_A}(f^*L_{Y_{A'}/A'},\mathscr{O}_{X_A}) \cong H^1(X_A,T_{Y_{A}/A}\mid_{X_A}) $ \cite[III.3]{Illusie}. Since $T_{Y_{A}/A}$ is flat over $\mathcal{S}pec A$ and $H^1(X,i^*T_{Y/k}) = 0$, we have $H^1(X_A,T_{Y_{A}/A}\mid_{X_A})=0$ by \cite[Exercise III.11.8]{Hartshorne}. By \cite[III.1.2]{Illusie}, $\text{Ext\,}^1_{X_A}(L_{X_A/Y_{A'}}, \mathscr{O}_{X_A}) \simeq \text{Exal\,}_{\mathscr{O}_{Y_{A'}}}(\mathscr{O}_{X_A},\mathscr{O}_{X_A})$ and $\text{Ext\,}^1_{X_A}(L_{X_A/A'},\mathscr{O}_{X_A}) \simeq \text{Exal\,}_{\mathscr{O}_{A'}}(\mathscr{O}_{X_A},\mathscr{O}_{X_A})$, so we obtain a surjection: \[ \xymatrix{\text{Exal\,}_{\mathscr{O}_{Y_{A'}}}(\mathscr{O}_{X_A},\mathscr{O}_{X_A}) \ar[r] & \text{Exal\,}_{\mathscr{O}_{A'}}(\mathscr{O}_{X_A},\mathscr{O}_{X_A}) \ar[r] & 0 }. \] Thus, there is an $\mathscr{O}_{Y_{A'}}$-extension of $\mathscr{O}_{X_A}$ by $\mathscr{O}_{X_A}$ mapping to $[X_{A'}] \in \text{Exal\,}_{\mathscr{O}_{A'}}(\mathscr{O}_{X_A},\mathscr{O}_{X_A}).$ In particular, there is a map of sheaves $\mathscr{O}_{Y_{A'}} \to \mathscr{O}_{X_{A'}}$ and consequently a morphism $X_{A'} \to Y_{A'}$ extending $X_{A} \to Y_{A'}$. \end{proof} \begin{corollary} Suppose that $X$ is a projective $k$-scheme with $h^2(X,\mathscr{O}_{X})=0$. If $X \hookrightarrow \mathbb{P}^n$ is any embedding such that $h^1(X,\mathscr{O}_{X}(1))=0$, then \emph{$\mathcal{D}ef_{X \subset \mathbb{P}^{n}} \rightarrow \mathcal{D}ef_{X}$} is formally smooth. \end{corollary} \begin{proof} By Lemma \ref{L:DefSurjective}, it is sufficient to show that $h^1(X,i^*T_{\mathbb{P}^n})=0$. We have an exact sequence $$ 0 \rightarrow \mathscr{O}_{X} \rightarrow \mathscr{O}_{X}(1)^{\oplus(n+1)} \rightarrow i^*T_{\mathbb{P}^{n}} \rightarrow 0, $$ and taking cohomology gives $$ H^1(X, \mathscr{O}_{X}(1))^{\oplus(n+1)} \rightarrow H^1(X, i^*T_{\mathbb{P}^{n}}) \rightarrow H^2(X, \mathscr{O}_{X}). $$ Since $h^1(X,\mathscr{O}_{X}(1))=h^2(X,\mathscr{O}_{X})=0$, we conclude $h^1(X, i^*T_{\mathbb{P}^n})=0$ as desired. \end{proof} \begin{corollary} There exists a scheme $U$, locally of finite-type over $\mathcal{S}pec \mathbb{Z}$, and a smooth surjective cover $U \rightarrow \mathcal{U}$. \end{corollary} \begin{proof} Given a geometric point $x:=[C] \in \mathcal{U}$, consider any embedding $C \hookrightarrow \mathbb{P}^{n}$ such that $h^1(C, i^*T_{\mathbb{P}^n})=0$. By cohomology and base-change \cite[Theorem 12.11]{Hartshorne}, there exists a Zariski-open affine neighborhood $$ [C] \in U_{x} \subset Hilb(\mathbb{P}^{n}) $$ such that the restriction of the universal embedding $\mathcal{C} \hookrightarrow \mathbb{P}^{n} \times Hilb(\mathbb{P}^{n})$ to $U_{x}$ satisfies $h^1(C_y, i^*T_{\mathbb{P}^n_y})=0$ for every $y \in U_{x}$. By Lemma \ref{L:DefSurjective}, the induced map $U_x \rightarrow \mathcal{U}$ is formally smooth.
Also, since the diagonal $\mathcal{U} \rightarrow \mathcal{U} \times \mathcal{U}$ is representable and of finite presentation (Corollary \ref{C:RepDiagonal}), the map $U_{x} \rightarrow \mathcal{U}$ is representable and finitely-presented. Since $\mathcal{U}$ is limit-preserving, it suffices to check the smoothness of $U_{x} \rightarrow \mathcal{U}$ on noetherian test schemes, and we conclude that $U_{x} \rightarrow \mathcal{U}$ is smooth by \cite[Theorem 17.14.2]{EGAIV}. Now let $Hilb_{g,d,n}$ be the Hilbert scheme (over $\mathcal{S}pec \mathbb{Z}$) of curves of genus $g$ and degree $d$ in $\mathbb{P}^{n}$, and let $U \subset \coprod_{g,d,n}Hilb_{g,d,n}$ be the open subscheme parametrizing curves $[C \subset \mathbb{P}^{n}]$ with $h^1(C, i^*T_{\mathbb{P}^n})=0$. Then $U$ is locally of finite type over $\mathcal{S}pec \mathbb{Z}$, and the arguments of the preceding paragraph show that $$ U \rightarrow \mathcal{U} $$ is smooth and surjective. \end{proof} \subsubsection*{B.3. Proofs of corollaries} \begin{proof}[Proof of Corollary \ref{C:Stackgn}] If $\mathcal{C} \rightarrow S$ is an object of $\mathcal{U}$, the set $$ \{s \in S \,|\, C_{\overline{s}}\mbox{ is reduced, connected, of arithmetic genus $g$}\} $$ is open in $S$ by \cite[12.2.1-12.2.3]{EGAIV}. It follows that $\mathcal{U}_{g}$, the stack whose objects are flat proper finitely-presented morphisms of relative dimension one with geometrically connected reduced fibers of arithmetic genus $g$, is an open substack of $\mathcal{U}$. Now observe that the morphism $\mathcal{U}_{g,1} \rightarrow \mathcal{U}_{g}$ obtained by forgetting a section is representable, finitely-presented, and proper. Indeed, if $S \rightarrow \mathcal{U}_{g}$ is any morphism from an affine scheme, corresponding to a family $\mathcal{C} \rightarrow S$, then the fiber product $S \times_{\mathcal{U}_{g}} \mathcal{U}_{g,1}$ is naturally isomorphic to $\mathscr{H}om_{S}(S,\mathcal{C})$, which is represented by $\mathcal{C}$ itself. It follows that $\mathcal{U}_{g,1}$ is an algebraic stack, locally of finite type over $\mathcal{S}pec \mathbb{Z}$. Since \[ \mathcal{U}_{g,n} \simeq \underbrace{\mathcal{U}_{g,1} \times_{\mathcal{U}_{g}} \cdots \times_{\mathcal{U}_{g}} \mathcal{U}_{g,1}}_n, \] we conclude that $\mathcal{U}_{g,n}$ is an algebraic stack, locally of finite-type over $\mathcal{S}pec \mathbb{Z}$. \end{proof} For the second corollary, we need the following boundedness lemma. \begin{lemma}(with Smyth, Vakil, and van der Wyck)\label{L:Boundedness} There exist integers $N_{g,e}$ and $D_{g,e}$ depending only on $g$ and $e$ such that any reduced curve of arithmetic genus $g$ with no more than $e$ irreducible components admits a degree $d$ embedding into $\mathbb{P}^{n}$ with $d \leq D_{g,e}$ and $n \leq N_{g,e}.$ \end{lemma} \begin{proof} First, we show there exists an integer $D_{g,e}$ such that any reduced curve of arithmetic genus $g$ with no more than $e$ irreducible components admits a degree $d \leq D_{g,e}$ embedding into some projective space. Given such a curve $C$ with normalization $\pi: \tilde{C} \rightarrow C$, let $Z \subset C$ be an effective Cartier divisor whose support meets the smooth locus of every irreducible component of $C$. Since $C$ has no more than $e$ irreducible components, we may assume that $\deg Z \leq e.$ Let $\mathscr{L}:=\mathscr{O}(Z)$. It suffices to exhibit an integer $m:=m(g,e)$, depending only on $g$ and $e$, such that $\mathscr{L}^{m}$ is very ample on $C$. Indeed, we may take $D_{g,e}=me$.
To show that $\mathscr{L}^{m}$ separates points and tangent vectors, it is sufficient to show that, for any $p \in C$, $$ H^1(C,\mathscr{L}^{m} \otimes m_{p})=H^1(C,\mathscr{L}^{m} \otimes m_{p}^2)=0. $$ Clearly, the latter vanishing implies the former. Given $p \in C$, let $\pi^{-1}(p)=p_1+\ldots+p_r$ and let $\delta(p)$ denote the $\delta$-invariant of $p$. We have an exact sequence $$ \xymatrix{0 \ar[r] & \pi_*\mathscr{O}_{\tilde{C}}(-2\delta(p)(p_1+\ldots+p_r)) \ar[r] & m_{p}^2 \ar[r] & \mathscr{E}\ar[r] & 0}, $$ where $\mathscr{E}$ is a coherent sheaf supported at $p$. Twisting by $\mathscr{L}^{m}$ and taking cohomology, we obtain $$ \xymatrix{H^1(C, \mathscr{L}^{m} \otimes \pi_*\mathscr{O}_{\tilde{C}}(-2\delta(p)(p_1+\ldots+p_r))) \ar[r] & H^1(C, \mathscr{L}^{m} \otimes m_{p}^2 ) \ar[r] & 0}. $$ By the projection formula, we have $$H^1(C, \mathscr{L}^{m} \otimes \pi_*\mathscr{O}_{\tilde{C}}(-2\delta(p)(p_1+\ldots+p_r)))=H^1(\tilde{C}, (\pi^*\mathscr{L})^m(-2\delta(p)(p_1+\ldots+p_r))),$$ which vanishes as soon as $m>2g-2+2\delta(p)r$. Since $\delta(p) \leq g+e-1$ and $r \leq \delta(p)+1$, we may take $m(g,e):=2g-2+2(g+e)(g+e-1)$. Next, we must bound the dimension of the ambient projective space. Given a degree $d$ embedding $C \hookrightarrow \mathbb{P}^{n}$, let $\text{Sec\,}(C)$ and $\text{Tan\,}(C)$ denote the secant and tangent varieties of $C$ respectively. As long as $n>\max\{\dim \text{Sec\,}(C), \dim \text{Tan\,}(C)\}$, we may obtain a lower-dimensional embedding by projection. The dimension of $\text{Sec\,}(C)$ is at most 3, and the dimension of $\text{Tan\,}(C)$ is bounded by the maximum embedding dimension of any singular point on $C$. Thus, it suffices to exhibit an integer $N:=N(g,e)$ such that any singular point $p \in C$ has embedding dimension at most $N$. Suppose first that $p \in C$ is a unibranch singularity, so $\mathscr{O}_{C,p}$ is a finitely-generated subalgebra of the power series ring $k[[t]]$. Choose a set of generators for $\mathscr{O}_{C,p}$, say $f_1, \ldots, f_m$, and let $N = \min_{i} \{ \deg f_i \}$. Clearly we may choose our generators such that the residues of $\{ \deg f_i \}$ modulo $N$ are all distinct. In particular, we have $m \leq N.$ Since $N \leq \delta(p) \leq g+e-1$, we have the bound $N(g,e)=g+e-1$ for the unibranch case. If $p \in C$ has $r$ branches, then we have $\mathscr{O}_{C,p} \subset k[[t_1]]\oplus \ldots \oplus k[[t_r]]$. The restriction of $\mathscr{O}_{C,p}$ to each branch is generated by at most $g+e-1$ elements, and there are at most $\delta(p)+1 \leq g+e$ branches, so altogether $\mathscr{O}_{C,p}$ must be generated by at most $(g+e)(g+e-1)$ elements. Thus, we may take $N(g,e)=(g+e)(g+e-1)$. \end{proof} \begin{proof}[Proof of Corollary \ref{C:Stackgne}] The stack $\mathcal{U}_{g,n,d}$ is an open substack of $\mathcal{U}_{g,n}$ by \cite[12.2.1(xi)]{EGAIV}. It only remains to see that we can cover $\mathcal{U}_{g,n,d}$ by a scheme of finite-type over $\mathcal{S}pec \mathbb{Z}$. Since the Hilbert scheme of curves in $\mathbb{P}^{n}$ of arithmetic genus $g$ and degree $d$ is of finite-type, this follows immediately from Lemma \ref{L:Boundedness}. \end{proof} \begin{comment} \section{Singularities of low genus}\label{S:Singularities} In this appendix, we classify all curve singularities of genus zero and genus one. This will enable an explicit description of the class of $\mathcal{Z}$-stable curves for many extremal assignments $\mathcal{Z}$, including all those introduced in the examples of Appendix C.
Let us provide some geometric intuition for the classification in Proposition \ref{P:Classification}. Throughout the following discussion, $C$ is a reduced connected curve over an algebraically closed field $k$. Recall that if $p \in C$ is a curve singularity, then $\delta(p)$ is the number of conditions for a function to descend from $\tilde{C}$ to $C$. Of course, if a singularity has $m(p)$ branches, there are $m(p)-1$ obviously necessary conditions for a function $f \in \mathscr{O}_{\tilde{C}}$ to descend: $f$ must have the same value at each point in $\pi^{-1}(p)$. Thus, the genus $g(p)=\delta(p)-m(p)+1$ is the number of conditions for a function to descend \emph{beyond the obvious ones}. From this description, it is clear that the genus is always non-negative integer. For each integer $m \geq 2$, there is a unique singularity with $m$ branches and genus zero, namely the union of the $m$ coordinate axes in $\mathbb{A}^{m}$, and we call these rational $m$-fold points. \begin{definition}[The rational $m$-fold point] We say that $p \in C$ is a \emph{rational $m$-fold point} if \begin{align*} \hat{O}_{C,p} &\simeq k[[x_1, \ldots, x_m]]/I_{m},\\ I_{m}&:=(x_ix_j: 1 \leq i<j \leq m). \end{align*} \end{definition} In the case of genus one, the situation is more complicated. It turns out that, for each integer $m \geq 1$, there is a unique Gorenstein curve singularity with $m$ branches and genus one. \begin{definition}[The Gorenstein elliptic $m$-fold point] We say that $p \in C$ is a \emph{Gorenstein elliptic $m$-fold point} if \begin{align*} \hat{O}_{C,p} \simeq & \begin{cases} k[[x,y]]/(y^2-x^3) & m=1\,\,\,\text{(ordinary cusp)}\\ k[[x,y]]/(y^2-yx^2) & m=2 \,\,\,\text{(ordinary tacnode)} \\ k[[x,y]]/(x^2y-xy^2) & m=3 \,\,\, \text{(planar triple-point)}\\ k[[x_1, \ldots, x_{m-1}]]/J_m & m \geq 4, \text{($m$ general lines through the origin in $\mathbb{A}^{m-1}$),}\\ \end{cases}\\ &\,\,\,\,\, J_{m}:= \left( x_{h}x_i-x_{h}x_j \, : \,\, i,j,h \in \{1, \ldots, m-1\} \text{ distinct} \right). \end{align*} \end{definition} It is easy to see, however, that this sequence cannot comprise all singularities of genus one. Given an arbitrary isolated curve singularity, adjoining a smooth branch transverse to the tangent space of the original singularity increases $\delta(p)$ and $m(p)$ by one, thereby leaving $g(p)$ unchanged. Thus, since an ordinary cusp has genus one, so does the spatial singularity obtained by taking a cusp and a smooth branch transverse to the tangent plane of the cusp. It turns out that all genus one singularities are generated by adding transverse branches to the Gorenstein singularities of genus one. 
\begin{figure} \caption{Isolated curve singularites in genus zero and one.} \label{F:Singularities} \end{figure} \begin{definition}[The general elliptic $m$-fold point] We say that $p \in C$ is an \emph{elliptic $m$-fold point} if there exists $k \leq m$ such that \begin{align*} \hat{O}_{C,p} \simeq & \begin{cases} k[[x,y, z_1, \ldots, z_{m-1}]]/(y^2-x^3) \cap I_{m-1} & m=1\,\,\,\text{(cusp + transverse branches)}\\ k[[x,y, z_1, \ldots, z_{m-2}]]/(y^2-yx^2) \cap I_{m-2} & m=2 \,\,\,\text{(tacnode + transverse branches)} \\ k[[x,y, z_1, \ldots, z_{m-3}]]/(x^2y-xy^2) \cap I_{m-3} & m=3 \,\,\, \text{(planar triple point + transverse branches)}\\ k[[x_1, \ldots, x_{k-1}, z_{1}, \ldots, z_{m-k}]]/J_k \cap I_{m-k} & m \geq 4, \text{(Gorenstein $k$-fold point + transverse branches)}\\ \end{cases}\\ &\,\,\,\,\,\,\,\, I_{m}:=\left(z_iz_j: i,j \in \{1, \ldots, m\} \text{ distinct} \right).\\ &\,\,\,\,\,\,\,\, J_{k}:= \left( x_{h}x_i-x_{h}x_j \, : \,\, i,j,h \in \{1, \ldots, k-1\} \text{ distinct} \right). \end{align*} \end{definition} We summarize this discussion in the following proposition. \begin{proposition}\label{P:Classification} Let $p \in C$ be an isolated curve singularity. \begin{itemize} \item[(1)] If $g(p)=0, m(p)=m$, then $p \in C$ is a rational $m$-fold point. \item[(2)] If $g(p)=1, m(p)=m$, the $p \in C$ is an elliptic $m$-fold point. \end{itemize} \end{proposition} Proposition \ref{P:Classification} is proved in \cite{Stevens} when $k=\mathbb{C}$. Here, we give a purely algebraic proof, applicable in any characteristic. Let us switch to ring-theoretic notation, and set \begin{align*} &R:=\hat{\mathscr{O}}_{C,p}.\\ &\tilde{R}:=\widetilde{R/P_1} \oplus \ldots \oplus \widetilde{R/P_{k(p)}}, \end{align*} where $P_1, \ldots, P_{m}$ are the minimal primes of $R$, and $\widetilde{R/P_i}$ denotes the integral closure of $R/P_i$. Note that $$ \tilde{R} \simeq k[[t_1]] \oplus \ldots \oplus k[[t_m]], $$ since each $\widetilde{R/P_i}$ is a complete, regular local ring of dimension one over $k$. Let $m_R$ be the maximal ideal of $R$, and let $m_{\tilde{R}}$ be the ideal $(t_1) \oplus \ldots \oplus (t_m)$. Since $R$ is reduced, we have an embedding $R \hookrightarrow \tilde{R},$ with $m_R=(m_{\tilde{R}} \cap R)$. Note that the $R$-module $\tilde{R}/R$ has a natural grading given by powers of $m_{\tilde{R}}$. Setting $$ (\tilde{R}/R)^{i}:=m_{\tilde{R}}^i/((m_{\tilde{R}}^i \cap R)+m_{\tilde{R}}^{i+1}), $$ we have the following trivial observations: \begin{itemize} \item[(1)] $\delta(p)=\sum_{i \geq 0} \text{dim\,}_{k} (\tilde{R}/R)^i$ \item[(2)] $g(p)=\sum_{i \geq 1} \text{dim\,}_{k} (\tilde{R}/R)^i$ \item[(3)] $(\tilde{R}/R)^i=(\tilde{R}/R)^j=0 \text{im\,}plies (\tilde{R}/R)^{i+j}=0$ for any $i,j \geq 1$. \end{itemize} With these preliminaries, the proof of Proposition \ref{P:Classification} is straightforward, albeit somewhat tedious. The idea is to find a basis for $m_{R}/m_{R}^2$ in terms of the local coordinates $t_1, \ldots, t_m$. \begin{proof}[Proof of Proposition \ref{P:Classification} (1)] If $g(p)=0$, then $(\tilde{R}/R)^i=0$ for all $i>0$, so $m_{R}=m_{\tilde{R}}$. Thus, we may define a local homomorphism of complete local rings \begin{align*} k[[x_1, \ldots, x_m]]&\rightarrow R \subset k[[t_1]] \oplus \ldots \oplus k[[t_m]]\\ x_i &\rightarrow (0, \ldots,0, t_i, 0, \ldots 0) \end{align*} This homomorphism is surjective since it is surjective on tangent spaces, and the kernel is precisely the ideal $ I_{m}=(x_ix_j, i<j). 
$\\ \end{proof} \begin{proof}[Proof of Proposition \ref{P:Classification} (2)] Since $g(p)=1$, observations (2) and (3) imply that \begin{align*} \text{dim\,}_{k} (\tilde{R}/R)^1&=1\\ \text{dim\,}_{k} (\tilde{R}/R)^i&=0 \text{ for all $i \geq 1$}. \end{align*} Put differently, this says that $ m_{R} \supset m_{\tilde{R}}^2, $ while $ m_{R}/m_{\tilde{R}}^2 \subset m_{\tilde{R}}/m_{\tilde{R}}^2 $ is a codimension-one subspace. By Gaussian elimination, we may choose $f_1, \ldots, f_{m-1} \in m_{R}$ such that \begin{equation*} \left( \begin{matrix} f_1\\ \vdots\\ \vdots\\ f_{m-1} \end{matrix} \right)\equiv \left( \begin{matrix} t_1& 0& \hdots & 0 & a_1t_{m-1}\\ 0&t_2& \ddots & \vdots & a_2t_{m-1} \\ \vdots& \ddots& \ddots& 0& \vdots \\ 0 & \hdots &0 & t_{m-2} & a_{m-1}t_{m-1} \end{matrix} \right) \begin{matrix} \mod m_{\tilde{R}}^2 \end{matrix} \end{equation*} for some $a_1, \ldots, a_{m-1} \in k.$ We claim that we may assume $a_1, \ldots, a_{m-1}=1$. To see this, first observe that if $a_i=0$ for any $i$, then the $i^{\text{th}}$ branch of $p$ is smooth and transverse to the remaining branches. Thus, the singularity is analytically isomorphic to the union of a smooth transverse branch and a genus one singularity with $m-1$ branches. By induction on the number of branches, we conclude that $p$ is an elliptic $m$-fold point. Thus, we may assume that $a_i \in k^{*}$ for each $i=1, \ldots, m$. Making a change of uniformizer $t_i'=a_i^{-1}t_i$ and replacing $f_i$ by $a_{i}^{-1}f_i$, we may assume that each $a_i=1$. At this point, the proof breaks into three cases: \begin{itemize} \item[I.] $(m \geq 3)$ We claim that $f_1, \ldots, f_{m-1}$ give a basis for $m_{R}/m_{R}^2$. Clearly, it is enough to show that $m_{R}^2=m_{\tilde{R}}^2$. Since $m_{R}^2 \supset m_{\tilde{R}}^4$, it is enough to show that $$m_{R}^2/m_{\tilde{R}}^4 \hookrightarrow m_{\tilde{R}}^2/m_{\tilde{R}}^4$$ is surjective. Using the matrix expressions for the $\{f_i\}$, one easily verifies that $f_1^{2}, \ldots, f_{m-1}^2, f_1f_2$ map to a basis of $m_{\tilde{R}}^2/m_{\tilde{R}}^3$, and $f_1^{3}, \ldots, f_{m-1}^3, f_1^2f_2$ map to a basis of $m_{\tilde{R}}^3/m_{\tilde{R}}^4$. Since $f_1, \ldots, f_{m-1}$ give a basis of $m_{R}/m_{R}^2$, we have a surjective hoomomorphism \begin{align*} k[[x_1, \ldots, x_{m-1}]] &\rightarrow R \subset k[[t_1]] \oplus \ldots \oplus k[[t_m]]\\ x_i &\rightarrow (0, \ldots,0, t_i, 0, \ldots 0,t_{m-1}), \end{align*} and the kernel is precisely $ I=(x_{h}(x_i-x_j) \text{ with } i,j,h \in \{1, \ldots, m-1\} \text{ distinct}). $\\ \item[II.] $(m=2)$ By the preceeding analysis, there exists $f_1 \in m_{R}$ such that $$f_1 \equiv (t_1\,\,\, t_2) \mod m_{R}^2.$$ Since $m_{R} \supset m_{\tilde{R}}^2$, we may choose $f_2 \in m_{R}$ such that $f_1^2, f_2$ map to a basis of $m_{\tilde{R}}^2/m_{\tilde{R}}^3$. After Gaussian elimination, we may assume that \begin{equation*} \left( \begin{matrix} f_1^2\\ f_{2} \end{matrix} \right)\equiv \left( \begin{matrix} t_1^2& t_2^2\\ 0 & t_2^2\\ \end{matrix} \right) \begin{matrix} \mod m_{\tilde{R}}^3 \end{matrix} \end{equation*} We claim that $f_1$ and $f_2$ form a basis for $m_{R}/m_{R}^2$. Since $f_1, f_2, f_1^2$ form a basis for $m_{R}/m_{\tilde{R}}^3$, it suffices to show that $m_{R}^2 \cap m_{\tilde{R}}^3=m_{\tilde{R}}^3$. Since $m_{R}^2 \supset m_{\tilde{R}}^4$, it is enough to show that $$ (m_{R}^2 \cap m_{\tilde{R}}^3 )/m_{\tilde{R}}^4 \hookrightarrow m_{\tilde{R}}^3/m_{\tilde{R}}^4 $$ is surjective. 
From the matrix expression for the $\{f_i\}$, one easily sees that $f_1^3$ and $f_1f_2$ give a basis of $m_{\tilde{R}}^3/m_{\tilde{R}}^4$. Since $f_1, f_2$ give a basis of $m_{R}/m_{R}^2$, we have a surjective homomorphism of complete local rings \begin{align*} k[[x,y]] &\rightarrow R \subset k[[t_1]] \oplus k[[t_2]]\\ x &\rightarrow (t_1,t_2),\\ y &\rightarrow (0,t_2^2), \end{align*} with kernel $y(y-x^2)$.\\ \item[III.] $(m=1)$ Since $m_{R}/m_{\tilde{R}}^2 \subset m_{\tilde{R}}/m_{\tilde{R}}^2$ is codimension-one, we have $m_{R}=m_{\tilde{R}}^2$. Thus, we may pick $f_1, f_2 \in m_{R}$ so that \begin{equation*} \left( \begin{matrix} f_1\\ f_{2} \end{matrix} \right)\equiv \left( \begin{matrix} t_1^2\\ t_1^3\\ \end{matrix} \right) \begin{matrix} \mod m_{\tilde{R}}^4. \end{matrix} \end{equation*} Since $m_{R}^2 = m_{\tilde{R}}^4$, $f_1$ and $f_2$ give a basis for $m_{R}/m_{R}^2$. Thus, the homomorphism \begin{align*} k[[x,y]]&\rightarrow R \subset k[[t_1]] \\ x &\rightarrow (t_1^2),\\ y &\rightarrow (t_1^3), \end{align*} is surjective, with kernel $y^2-x^3$. \end{itemize} \end{proof} \end{comment} \end{document}
\begin{document} \begin{abstract} We initiate a systematic study of the convolution operation on Keisler measures, generalizing the work of Newelski in the case of types. Adapting results of Glicksberg, we show that the supports of definable and finitely satisfiable (or just definable, assuming NIP) measures are nice semigroups, and classify idempotent measures in stable groups as invariant measures on type-definable subgroups. We establish left-continuity of the convolution map in NIP theories, and use it to show that the convolution semigroup on finitely satisfiable measures is isomorphic to a particular Ellis semigroup in this context. \end{abstract} \title{Definable convolution and idempotent Keisler measures} \section{Introduction} Various notions and ideas from topological dynamics were introduced into the model-theoretic study of definable group actions by Newelski \cite{N1,N2}. A fundamental observation is that certain spaces of types over a definable group carry a natural algebraic structure of a (left-continuous) semigroup, with respect to the ``independent product'' of types. In a rather wide context, this operation can be extended from types to general \emph{Keisler measures} on a definable group (i.e.~finitely additive probability measures on the Boolean algebra of definable subsets), where it corresponds to \emph{convolution} of measures. We first recall the classical setting. When $G$ is a locally compact topological group, the space of regular Borel probability measures on $G$ is equipped with the convolution product: if $\mu$ and $\nu$ are bounded measures on $G$, then their product is the measure $\mu*\nu$ on $G$ defined via \begin{equation*} \mu * \nu(A) = \int_{y \in G} \int_{x \in G} \chi_{A}(x\cdot y) d\mu(x) d\nu(y), \end{equation*} for an arbitrary Borel set $A \subseteq G$ (where $\chi_A$ is the characteristic function of $A$). A measure $\mu$ is \emph{idempotent} if $\mu * \mu = \mu$. A classical theorem of Wendel \cite{Wendel} shows that if $G$ is a compact topological group and $\mu$ is a regular Borel probability measure on $G$, then $\mu$ is idempotent if and only if the support of $\mu$ is a compact subgroup of $G$, and the restriction of $\mu$ to this subgroup is the (bi-invariant) Haar measure. Wendel's result was extended to locally compact abelian groups by Rudin \cite{Rudin} and Cohen \cite{Cohen}, and this line of research continued into the study of the structure of idempotent measures on (semi-)topological semigroups, in particular in the work of Glicksberg \cite{Glicksberg1, Glicksberg2} and Pym \cite{Pym1,Pym2}. In this paper we consider the counterpart of these developments in the definable category, i.e.~for definable groups and Keisler measures on them. In particular, we aim to address the following questions. \begin{enumerate} \item[(Q1)] Under what conditions can the convolution product of two global Keisler measures be defined? \item[(Q2)] What algebraic structures arise from idempotency of a Keisler measure? \item[(Q3)] Is there a connection between the convolution semigroups of Keisler measures and Ellis semigroups? \end{enumerate} We begin by reviewing some (mostly standard) material on Keisler measures in Section \ref{sec: prelims on Keisl meas}: we recall various classes of measures (invariant, Borel-definable, finitely satisfiable, finitely approximable, smooth), summarize the relationship between them (in general, as well as in NIP and stable theories) and discuss supports of measures.
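For orientation, we record two elementary instances of the classical convolution recalled above; this worked computation is included only as an illustration and is not used elsewhere. For point masses the double integral collapses:
\begin{equation*}
\delta_a * \delta_b (A) = \int_{y \in G}\int_{x \in G} \chi_A(x \cdot y)\, d\delta_a(x)\, d\delta_b(y) = \chi_A(a \cdot b),
\end{equation*}
so $\delta_a * \delta_b = \delta_{a \cdot b}$. For the normalized counting (Haar) measure $\mu_H := \frac{1}{|H|}\sum_{h \in H}\delta_h$ on a finite subgroup $H \leq G$,
\begin{equation*}
\mu_H * \mu_H(A) = \frac{1}{|H|^2}\sum_{h,h' \in H}\chi_A(h \cdot h') = \frac{1}{|H|^2}\sum_{g \in H}|H| \cdot \chi_A(g) = \mu_H(A),
\end{equation*}
since every $g \in H$ admits exactly $|H|$ factorizations $g = h \cdot h'$ with $h, h' \in H$. This is the simplest instance of the correspondence in Wendel's theorem between idempotent measures and Haar measures on compact subgroups.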
In particular, in Proposition \ref{convfs} we give a topological characterization of the space of measures finitely satisfiable over a small model $M$, and in Lemma \ref{lem: sup on inv types} we make a couple of observations on \emph{invariantly supported} measures (i.e.~global measures such that all types in their support are (automorphism-)invariant over a fixed small model). In Section \ref{sec: ext prod of measures} we extend the usual product $\otimes$ of Borel-definable measures to a slightly larger context. Namely, when defining $\mu \otimes \nu$, we only require the level functions of the measure $\mu$ to be Borel \emph{restricted to the support of $\nu$} (Definition \ref{Borel}). It is equivalent to the standard definition when $\mu$ is Borel-definable, but allows one to evaluate the product of an arbitrary invariant measure $\mu$ with an arbitrary type $p$ for example (and this extends the usual independent product of invariant types, see Proposition \ref{prop: gen prod ext type prod}). In relation to (Q1), in Section \ref{sec: def of conv} we define the convolution operation on \emph{$*$-Borel pairs} of Keisler measures in terms of this generalized product of measures (Definition \ref{def: conv}) and observe some of its basic properties, in particular that it extends the independent product of arbitrary invariant types in a group (Proposition \ref{Newelski}). In Section \ref{sec: idemp}, we begin investigating idempotent Keisler measures. In Proposition \ref{wow} we observe that every invariant measure on a type-definable subgroup is idempotent (the extended $\otimes$-product is needed for this to hold without any definability assumptions on the invariant measure). Mirroring the classical situation in Wendel's theorem, the expectation is that in tame contexts all idempotent measures should arise in this way. In the case of a definably amenable NIP group, invariant measures were classified in \cite{CS}. We observe in Proposition \ref{prop: bdd index def am NIP} that a type-definable subgroup of bounded index of a definably amenable NIP group is still definably amenable (and the analysis from \cite{CS} extends to it). We also point out that, as a consequence of Wendel's theorem, idempotent measures finitely supported on realized types correspond to finite subgroups (Proposition \ref{prop: finite realized support}); and that in an abelian NIP group, the class of idempotent generically stable measures is closed under convolution (Proposition \ref{prop: commute preserves idemp}). In Section \ref{sec: supports}, we study the supports of idempotent Keisler measures (question (Q2) above). In the proof of Wendel's theorem (as well as Glicksberg's proof in the abelian semitopological semigroup case \cite{Glicksberg2}), an idempotent regular Borel measure $\mu$ is associated to a closed subgroup given by its support. In particular, $S(\mu)$ is a closed group and $\mu|_{S(\mu)}$ is its associated (bi-invariant) Haar measure. In the general model-theoretic context the situation is not as nice (see Examples \ref{exa: 1} and \ref{circle}). However, adapting some of Glicksberg's work to our context, we show that if $\mu$ is definable, invariantly supported and idempotent, then $\left(S(\mu),* \right)$ (with respect to the usual independent product of invariant types) is a compact, left-continuous semigroup with no closed two-sided ideals (Corollary \ref{qcont} and Theorem \ref{two}). 
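To make the expected analogy with the classical setting concrete, consider the simplest definable example (this sketch is included only as an illustration and is not needed in the sequel; the precise statement is Proposition \ref{prop: finite realized support}): let $H = \{h_1, \ldots, h_n\}$ be a finite subgroup of a sufficiently saturated model $\mathcal{G}$ of a theory expanding a group (notation as in Section \ref{sec: def of conv}), and let $\mu_H := \frac{1}{n}\sum_{i=1}^{n}\delta_{h_i}$, identifying each $h_i$ with its realized type over $\mathcal{G}$. Realized types are definable, so the convolution of Section \ref{sec: def of conv} applies, and for every formula $\varphi(x)$ over $\mathcal{G}$
\begin{equation*}
\mu_H * \mu_H(\varphi(x)) = \frac{1}{n}\sum_{j=1}^{n}\mu_H\big(\varphi(x \cdot h_j)\big) = \frac{1}{n^2}\left|\left\{(i,j) : \models \varphi(h_i \cdot h_j)\right\}\right| = \frac{1}{n}\left|\left\{g \in H : \models \varphi(g)\right\}\right| = \mu_H(\varphi(x)),
\end{equation*}
so $\mu_H$ is idempotent, in direct analogy with the classical computation for Haar measure on a finite subgroup.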
This assumption is satisfied when $\mu$ is a dfs measure in an arbitrary theory, or when $\mu$ is an arbitrary definable measure in an NIP theory. We also deduce that if $S(\mu)$ has no proper closed left ideals, then $\mu$ is ``generically'' invariant restricting to its support (Corollary \ref{cor: meas on sup is invariant}). It follows that in abelian stable groups, the supports of the idempotent measures are precisely the closed subgroups of the convolution semigroup on the space of types (Corollary \ref{cor: supp stab ab}); which leads to a quick description of idempotent measures in strongly minimal groups (Example \ref{exa: str min}). In Section \ref{sec: stable groups} we classify idempotent measures on a stable group, demonstrating that they are precisely the invariant measures on its type-definable subgroups. More precisely, every idempotent measure is the unique invariant Keisler measure on its own (type-definable) stabilizer. Our proof relies on the results of the previous section and a variant of Hrushovski's group chunk theorem due to Newelski \cite{N3}. Concerning question (Q3), it was observed by Newelski \cite{N1} that the convolution semigroup $(S_{x}(\mathcal{G},G),*)$ on the space of global types finitely satisfiable in a small model $G \prec \mathcal{G}$ is isomorphic to the enveloping \emph{Ellis semigroup} $E(S_{x}(\mathcal{G},G),G)$ of the action of $G$ on this space of types. Ellis semigroups for definable group actions in the context of NIP theories were previously considered in e.g.~\cite{MR3245144, CS}, to which we refer for a general discussion. In Section \ref{sec: Ellis calc}, under the NIP assumption, we obtain an analogous description for the convolution semigroup $(\mathfrak{M}_{x}(\mathcal{G},G),*)$ on the space of global Keisler measures finitely satisfiable in a small model. Namely, in Theorem \ref{thm: Ellis grp iso} we show that it is isomorphic to the Ellis semigroup $E(\mathfrak{M}_{x}(\mathcal{G},G),\operatorname{conv}(G))$ of the action of $\operatorname{conv}(G)$, the convex hull of $G$ in the space of global measures finitely satisfiable on $G$, on this space of measures (see Remark \ref{rem : without conv} on why the convex hull is necessary). Our proof relies in particular on left-continuity of convolution of invariant measures in NIP theories established in Section \ref{sec: left-cont of conv} using approximation arguments with smooth measures. \section{Preliminaries on Keisler measures}\label{sec: prelims on Keisl meas} \subsection{Basic facts about Keisler measures} For the majority of this article, we focus on global Keisler measures and their relationship to small elementary submodels. In this section we recall some of the material from \cite{Keisl, NIP1, NIP2, NIP3, NIP5, Gannon}, and refer to e.g.~\cite[Chapter 7]{Guide} for a more detailed introduction to the subject, or \cite{StBourb, CheSurv} for a survey. Given $r_1, r_2 \in \mathbb{R}$ and $\varepsilon \in \mathbb{R}_{>0}$, we write $r_1 \approx_{\varepsilon} r_2 $ if $|r_1 - r_2| < \varepsilon$. Let $T$ be a first order theory in a language $\mathcal{L}$ and assume that $\mathcal{U}$ is a sufficiently saturated model of $T$ (we make no assumption on $T$ unless explicitly stated otherwise). In this section, we write $x,y,z, \ldots$ to denote arbitrary finite tuples of variables. 
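Before the formal definitions, we also recall one standard example of a Keisler measure for orientation (again, this is included only as an illustration): in the theory $\mathrm{RCF}$ of real closed fields, working over the model $\mathbb{R}$, every definable subset of $\mathbb{R}$ in one variable is a finite union of points and intervals, hence Lebesgue measurable, so
\begin{equation*}
\mu(\varphi(x)) := \lambda\big(\varphi(\mathbb{R}) \cap [0,1]\big), \qquad \varphi(x) \text{ a formula with parameters in } \mathbb{R},
\end{equation*}
with $\lambda$ the Lebesgue measure, is a finitely additive probability measure on the definable subsets of $\mathbb{R}$, i.e.~a Keisler measure over $\mathbb{R}$. Note that it is not a finite convex combination of types, since it takes infinitely many distinct values on pairwise disjoint definable sets.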
If $x$ is a tuple of variables and $A \subseteq \mathcal{U}$, then $\mathcal{L}_{x}(A)$ is the collection of formulas with free variables in $x$ and parameters from $A$, up to logical equivalence (which we identify with the corresponding Boolean algebra of definable subsets of $\mathcal{U}^x$). We write $\mathcal{L}_{x}$ for $\mathcal{L}_{x}(\emptyset)$. Given a partitioned formula $\varphi(x;y)$, we let $\varphi^*(y;x) := \varphi(x;y)$ be the partitioned formula with the roles of $x$ and $y$ reversed. As usual, $S_x(A)$ denotes the space of types over $A$, and if $A \subseteq B \subseteq \mathcal{U}$ then $S_x(B,A)$ (respectively, $S^{\mathrm{inv}}_x(B,A)$) denotes the closed set of types in $S_x(B)$ that are finitely satisfiable in $A$ (respectively, invariant over $A$). For any set $A \subseteq \mathcal{U}$, a \emph{Keisler measure} over $A$ in variables $x$ is a finitely additive probability measure on $\mathcal{L}_{x}(A)$. We denote the space of Keisler measures over $A$ (in variables $x$) as $\mathfrak{M}_{x}(A)$. Every element of $\mathfrak{M}_{x}(A)$ is in unique correspondence with a regular Borel probability measure on the space $S_{x}(A)$, and we will routinely use this correspondence. If $M_0 \preceq M \preceq \mathcal{U}$ are small models, then there is an obvious restriction map $r_{0}$ from $\mathfrak{M}_{x}(M)$ to $\mathfrak{M}_{x}(M_0)$ and we denote $r_{0}(\mu)$ simply as $\mu|_{M_0}$. Conversely, every $\mu \in \mathfrak{M}_{x}(M_0)$ admits an extension to some $\mu' \in \mathfrak{M}_{x}(M)$ (not necessarily a unique one). The space $\mathfrak{M}_{x}(A)$ is a compact Hausdorff space with the topology induced from $[0,1]^{\mathcal{L}_{x}(A)}$. This is the coarsest topology on the set $\mathfrak{M}_{x}(A)$ such that for any continuous function $f:S_{x}(A) \to \mathbb{R}$, the map $\mu \mapsto \int f d\mu$ is continuous. If $M_0 \preceq M$, then under this topology, the restriction map $r_0$ is continuous. We identify every $p \in S_x(A)$ with the corresponding Dirac measure $\delta_p \in \mathfrak{M}_{x}(A)$, and under this identification $S_x(A)$ is a closed subset of $\mathfrak{M}_{x}(A)$. We recall several important properties of global measures that will make an appearance in this article. \begin{definition}\label{def: props of measures} Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ be a global Keisler measure. \begin{enumerate} \item $\mu$ is \emph{invariant} if there is a small model $M \prec \mathcal{U}$ such that for any partitioned $\mathcal{L}(M)$-formula $\varphi(x;y)$ and any $b,b'\in \mathcal{U}^y$, if $b \equiv_M b'$ then $\mu(\varphi(x;b))=\mu(\varphi(x;b'))$. In this case, we say $\mu$ is \emph{$M$-invariant}. We let $\mathfrak{M}^{\mathrm{inv}}_x(\mathcal{U}, M)$ denote the closed set of all $M$-invariant measures in $\mathfrak{M}_x(\mathcal{U})$. \item Assume that $\mu$ is $M$-invariant and $\varphi(x;y)$ is a partitioned $\mathcal{L}(M)$-formula. We define the map $F_{\mu,M}^{\varphi}:S_{y}(M) \to [0,1]$ by $F_{\mu,M}^{\varphi}(q)=\mu(\varphi(x;b))$, where $b\models q$ (this is well-defined by $M$-invariance of $\mu$). \noindent We will often write $F_{\mu}^{\varphi}$ instead of $F_{\mu,M}^{\varphi}$ when the base model $M$ is clear from the context. \item $\mu$ is \emph{Borel-definable} (respectively, \emph{definable}) if there is $M\prec\mathcal{U}$ such that $\mu$ is $M$-invariant and for any partitioned $\mathcal{L}(M)$-formula $\varphi(x;y)$, the map $F_{\mu, M}^{\varphi}$ is Borel-measurable (respectively, continuous).
In this case, we say that $\mu$ is \emph{Borel-definable over $M$} (respectively, \emph{definable over $M$}). \item $\mu$ is \emph{finitely satisfiable in $M\prec \mathcal{U}$} if for any $\mathcal{L}_x(\mathcal{U})$-formula $\varphi(x)$, if $\mu(\varphi(x))>0$ then $\mathcal{U}\models\varphi(a)$ for some $a\in M^x$. We let $\mathfrak{M}_{x}(\mathcal{U},M)$ denote the closed set of measures in $\mathfrak{M}_{x}(\mathcal{U})$ which are finitely satisfiable in $M$. \item $\mu$ is \emph{dfs} if there is $M\prec\mathcal{U}$ such that $\mu$ is both definable over $M$ and finitely satisfiable in $M$. Similarly, if this is the case, we say that $\mu$ is \emph{dfs over $M$}. \item Given $\overline{a} \in (\mathcal{U}^{x})^{<\omega}$, with $\overline{a} = (a_1,...,a_n)$, the associated \emph{average measure} $\operatorname{Av}_{\overline{a}} \in \mathfrak{M}_{x}(\mathcal{U})$ is defined by \begin{equation*} \operatorname{Av}_{\overline{a}}(\varphi(x)) := \frac{|\{i: \mathcal{U} \models \varphi(a_i)\}| }{n} \end{equation*} for any $\varphi(x) \in \mathcal{L}_{x}(\mathcal{U})$. \item $\mu$ is \emph{finitely approximated} if there is $M\prec\mathcal{U}$ such that for any $\mathcal{L}(M)$-formula $\varphi(x;y)$ and any $\varepsilon \in \mathbb{R}_{>0}$, there exist $n \in \mathbb{N}_{\geq 1}$ and $\bar{a}\in (M^x)^n$ such that for any $b\in\mathcal{U}^{y}$, $\mu(\varphi(x;b)) \approx_{\varepsilon} \operatorname{Av}_{\bar{a}}(\varphi(x;b))$. In this case, we call $\bar{a}$ a \emph{$(\varphi,\varepsilon)$-approximation for $\mu$}, and we say $\mu$ is \emph{finitely approximated in $M$}. \item $\mu$ is \emph{smooth} if there exists a small model $M \prec \mathcal{U}$ such that for any $N$ with $M \preceq N \preceq \mathcal{U}$, there exists a unique measure $\mu' \in \mathfrak{M}_{x}(N)$ such that $\mu'|_M = \mu|_{M}$. In this case, we say that $\mu$ is \emph{smooth over $M$}. \end{enumerate} \end{definition} These properties are related as follows. \begin{fact}\label{fac: props of measures relation} \begin{enumerate} \item In any theory $T$, given $\mu \in \mathfrak{M}_{x}(\mathcal{U})$, over any given $M \prec \mathcal{U}$: \begin{enumerate} \item $\mu$ is smooth $\mathbb{R}ightarrow$ $\mu$ is finitely approximated \cite[Corollary 2.6]{NIP3}; \item $\mu$ is finitely approximated $\mathbb{R}ightarrow$ $\mu$ is dfs (e.g.~see \cite[Proposition 4.12]{Gannon}); \item $\mu$ is definable $\mathbb{R}ightarrow$ $\mu$ is Borel-definable; \item if $\mu$ is either Borel-definable or finitely satisfiable, then $\mu$ is invariant. \end{enumerate} \item Assuming $T$ is NIP, given $\mu \in \mathfrak{M}_{x}(\mathcal{U})$, over any $M \prec \mathcal{U}$ we have additionally: \begin{enumerate} \item $\mu$ is invariant $\mathbb{R}ightarrow$ $\mu$ is Borel-definable (\cite[Corollary 4.9]{NIP2}, or \cite[Proposition 7.19]{Guide}); \item $\mu$ is dfs $\mathbb{R}ightarrow$ $\mu$ is finitely approximated \cite[Theorem 3.2]{NIP3}. 
\end{enumerate} \item Assuming $T$ is stable, given any $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ we have moreover: \begin{enumerate} \item $\mu$ is finitely approximated (see e.g.~\cite[Lemma 4.3]{NIP5} for a direct proof); \item for every $\mathcal{L}$-formula $\varphi(x;y)$, there exist types $(p_i)_{i \in \omega}$ in $S_x(\mathcal{U})$ and $(r_i)_{i\in \omega}, r_i \in [0,1]$ such that $\sum r_i = 1$, and taking $\mu' := \sum r_i \cdot p_i$ we have $\mu(\varphi(x;b)) = \mu'(\varphi(x;b))$ for all $b \in \mathcal{U}^y$ \cite[Lemma 1.7]{Keisl}; \item If $T$ is $\omega$-stable, then there exist $(p_i)_{i \in \omega}$ in $S_x(\mathcal{U})$ and $(r_i)_{i\in \omega}, r_i \in [0,1]$ such that $\sum r_i = 1$ and $\mu = \sum r_i \cdot p_i$ (same as the proof of \cite[Lemma 1.7]{Keisl}, using boundedness of the global rank). \end{enumerate} \end{enumerate} \end{fact} We have the following characterization of definability (see e.g.~\cite[Proposition 4.4]{Gannon}). \begin{fact}\label{fac: chars of def meas} The following are equivalent for $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $M \preceq \mathcal{U}$. \begin{enumerate} \item The measure $\mu$ is definable over $M$. \item For any partitioned $\mathcal{L}(M)$-formula $\varphi(x;y)$ and any $\varepsilon > 0$, there exist formulas $\Phi_{1}(y),...,\Phi_{n}(y)$ such that each $\Phi_{i}(y) \in \mathcal{L}_{y}(M)$, the collection $\{\Phi_{i}(\mathcal{U}): i \leq n\}$ forms a partition of $\mathcal{U}^{y}$, and if $\models \Phi_{i}(c) \wedge \Phi_{i}(c')$, then $|\mu(\varphi(x,c)) - \mu(\varphi(x,c'))| < \varepsilon$. \item For every partitioned formula $\varphi(x;y) \in \mathcal{L}(M)$ and every $n \in \mathbb{N}_{\geq 1}$ there exist some $\mathcal{L}_y(M)$-formulas $\Phi^{\varphi,\frac{1}{n}}_i(y)$ with $i \in I_n := \{0, \frac{1}{n}, \frac{2}{n}, \ldots, \frac{n-1}{n}, 1\}$ such that: \begin{enumerate} \item the collection $\{ \Phi^{\varphi, \frac{1}{n}}_i(\mathcal{U}) : i \in I_n\}$ forms a covering of $\mathcal{U}_y$ (but not necessarily a partition); \item For every $i \in I_n$ and $b \in \mathcal{U}_y$, if $\mathcal{U} \models \Phi^{\varphi,\frac{1}{n}}_i(b)$ then $|\mu(\varphi(x,b)) - i| < \frac{1}{n}$. \end{enumerate} \end{enumerate} \end{fact} This easily implies the following. \begin{fact}\label{fac: unique ext of def meas} If $M \preceq N \prec \mathcal{U}$ and $\mu \in \mathfrak{M}_x(N)$ is definable over $M$, then there exists a unique extension $\mu' \in \mathfrak{M}_x(\mathcal{U})$ of $\mu$ which is definable over $N$, denoted $\mu|_{\mathcal{U}}$ (it is then automatically definable over $M$ and given by the same definition schema $\Phi$ for $\mu$ as in Fact \ref{fac: chars of def meas}). \end{fact} In an NIP theory, every measure over a small model can be extended to a smooth measure over a slightly larger elementary extension (\cite[Theorem 3.16]{Keisl}, or \cite[Proposition 7.9]{Guide}). \begin{fact}\label{fac: NIP ext to smooth} Let $T$ be an NIP theory. Let $M \prec \mathcal{U}$ and $\mu \in \mathfrak{M}_{x}(M)$. Then $\mu$ admits a smooth extension. I.e., there exist some $\nu \in \mathfrak{M}_{x}(\mathcal{U})$ and some small $M \preceq N \prec \mathcal{U}$ such that $\nu$ is smooth over $N$ and $\nu|_{M} = \mu$. \end{fact} \begin{definition} Given a Keisler measure $\mu \in \mathfrak{M}_x(A)$, the \emph{support of $\mu$} is \begin{equation*} S(\mu) = \{ p \in S_{x}(A): \mu(\varphi(x)) > 0 \text{ for any } \varphi(x) \in p\}. 
\end{equation*} Types in $S(\mu)$ are sometimes called \emph{weakly random} with respect to $\mu$ in the literature. \end{definition} We recall some properties of supports, with proofs for the sake of completeness. \begin{proposition}\label{prop: supp 1} Let $\mu \in \mathfrak{M}_{x}(A)$. \begin{enumerate} \item For any $\varphi(x) \in \mathcal{L}_{x}(A)$ such that $\mu(\varphi(x)) > 0$, there exists some $q \in S(\mu)$ such that $\varphi(x) \in q$. In particular, $S(\mu) \neq \emptyset$. \item $S(\mu)$ is a closed subset of $S_{x}(A)$ and $\mu(S(\mu)) = 1$ (and $S(\mu)$ is the smallest set of types under inclusion with this property). \end{enumerate} \end{proposition} \begin{proof} (1) Without loss of generality, $\varphi(x) \equiv x=x$. Otherwise, we reiterate the proof with the normalization of $\mu$ to the definable set $\varphi(x)$, i.e.~considering the Keisler measure $\mu'$ defined by $\mu'(\psi(x)) := \frac{\mu(\psi(x) \land \varphi(x))}{\mu(\varphi(x))}$ for all $\psi(x)$. Assume that $S(\mu) = \emptyset$; then for every type $p \in S_{x}(A)$, there exists some $\varphi_{p}(x) \in p$ such that $\mu(\varphi_{p}(x)) = 0$. Then, $\mu(\neg \varphi_{p}(x)) =1$ for every $p \in S_{x}(A)$, hence for any $n$ and $p_1,...,p_n \in S_{x}(A)$, we have $\bigcap_{i=1}^{n} \neg \varphi_{p_i}(x) \neq \emptyset$. Then $K = \bigcap_{p \in S_{x}(A)} \neg \varphi_{p}(x) \neq \emptyset$ by compactness of $S_x(A)$. But if $q \in K$, then in particular $\neg \varphi_{q}(x) \in q$ --- a contradiction. (2) Assume that $p \not \in S(\mu)$. Then, there exists a formula $\varphi_{p}(x)$ such that $\varphi_{p}(x) \in p$ and $\mu(\varphi_{p}(x)) =0$. Then, \begin{equation*} S_x(A) \backslash S(\mu) = \bigcup_{p \not \in S(\mu)} \varphi_{p}(x). \end{equation*} Therefore, $S(\mu)$ is closed. Assume that $\mu(S_{x}(A) \backslash S(\mu)) > 0$. By regularity of $\mu$, there exists a clopen $C \subseteq S_{x}(A) \backslash S(\mu)$ with positive measure. But by (1) we must have $C \cap S(\mu) \neq \emptyset$, a contradiction. \end{proof} \begin{proposition}\label{coheir} Let $A \subseteq B \subseteq \mathcal{U}$ and $\mu \in \mathfrak{M}_x(B)$ be arbitrary. Let $r: S_{x}(B) \to S_{x}(A), q \mapsto q|_{A}$ be the restriction map. Then: \begin{enumerate} \item $r(S(\mu)) = S(\mu|_{A})$; \item the measure $\mu|_{A}$ is the pushforward of $\mu$ along $r$, i.e.~$r^{*}(\mu) = \mu|_{A}$. \end{enumerate} \end{proposition} \begin{proof} (1) The map $r$ is a continuous surjection between compact Hausdorff spaces. By Proposition \ref{prop: supp 1}(2), $r(S(\mu))$ is compact (hence, closed), as the continuous image of a compact set. Clearly $r(S(\mu)) \subseteq S(\mu|_A)$, and as $r(S(\mu))$ is closed it suffices to show that $r(S(\mu))$ is a dense subset of $S(\mu|_A)$. Indeed, assume that $\varphi(x) \in \mathcal{L}_{x}(A)$ and $\varphi(x) \cap S(\mu|_{A}) \neq \emptyset$. Then $\mu|_{A}(\varphi(x)) > 0$, hence $\mu(\varphi(x)) > 0$, and by Proposition \ref{prop: supp 1}(1) there exists some $q \in S(\mu)$ with $\varphi(x) \in q$. Hence $\varphi(x) \in r(q)$, and so $r(S(\mu)) \cap \varphi(x) \neq \emptyset$. And (2) is clear. \end{proof} \begin{definition}\label{def: inv sup meas} We say that $\mu \in \mathfrak{M}_x(\mathcal{U})$ is \emph{invariantly supported} if there exists some small model $M \prec \mathcal{U}$ such that every type $p \in S(\mu)$ is $M$-invariant. \end{definition} \begin{lemma} \label{lem: sup on inv types} Let $\mu \in \mathfrak{M}_x(\mathcal{U})$.
\begin{enumerate} \item If $\mu$ is finitely satisfiable, then $\mu$ is invariantly supported. \item If $\mu $ is invariantly supported, then $\mu$ is invariant. \item If $T$ is NIP, then $\mu$ is invariant if and only if it is invariantly supported. \item In some theory, there exist a definable measure $\mu \in \mathfrak{M}_x(\mathcal{U})$ and $p \in S(\mu)$ such that $p$ is not invariant (over any small set). \end{enumerate} \end{lemma} \begin{proof} (1) Clearly if $\mu$ is finitely satisfiable in a small model $M \prec \mathcal{U}$, then every $p \in S(\mu)$ is also finitely satisfiable in $M$. (2) Let $M \prec \mathcal{U}$ be a small model such that every $p \in S(\mu)$ is invariant over $M$. If $\mu$ is not invariant over $M$, then there exist some $\varphi(x,y) \in \mathcal{L}_{xy}$ and some $b \equiv_M b'$ in $\mathcal{U}^y$ such that $\mu(\varphi(x,b)) \neq \mu(\varphi(x,b'))$. But then $\mu(\varphi(x,b) \triangle \varphi(x,b')) > 0$, hence $\varphi(x,b) \triangle \varphi(x,b') \in p$ for some $p \in S(\mu)$ by Proposition \ref{prop: supp 1} --- contradicting $M$-invariance of $p$. (3) $(\Leftarrow)$ holds by (2). For $(\mathbb{R}ightarrow)$, we note that if $\mu$ is invariant over $M \prec \mathcal{U}$, then every global type $p \in S(\mu)$ does not divide over $M$ (given $\varphi(x,b) \in p$ and an $M$-indiscernible sequence $(b_i)_{i \in \omega}$ in $\mathcal{U}^y$ such that $b_i \equiv_M b$, we have that $\mu(\varphi(x,b_i)) = \mu(\varphi(x,b)) =: \varepsilon > 0$ for all $i$; but then $\mu(\bigwedge_{i < k} \varphi(x,b_i)) > 0$ for every $k\in \omega$, so in particular $\bigwedge_{i < k} \varphi(x,b_i) \neq \emptyset$, by a standard probability lemma, see e.g. \cite[Lemma 7.5]{Guide}), hence $p$ is invariant over $M$ by \cite[Proposition 2.1(ii)]{NIP2}. (4) Let $T$ be the theory of the random graph, in a language with a single binary relation. We let $\mu \left(\bigwedge_{i<k} E(x,b_i)^{t_i} \right) = \frac{1}{2^k}$ for every $k \in \omega$, pairwise distinct $b_i \in \mathcal{U}$ and $t_i \in \{0,1 \}$, and $\mu(x=b) = 0$ for every $b \in \mathcal{U}$. By quantifier elimination, this determines a measure $\mu \in \mathfrak{M}_x(\mathcal{U})$. This $\mu$ is clearly definable over $\emptyset$, and the support of $\mu$ consists of all non-realized types in $S_x(\mathcal{U})$. However, it is easy to construct by transfinite induction a non-realized type $p \in S_x(\mathcal{U})$ which is not invariant over any small $M \prec \mathcal{U}$. \end{proof} The space of measures $\mathfrak{M}_x(\mathcal{U})$ can be naturally viewed as a closed convex subset of a real topological vector space (of all bounded real-valued measures). Given $M \prec \mathcal{U}$, we identify $M^x$ with the set $\{\delta_a : a \in M^x \} \subseteq \mathfrak{M}_x(\mathcal{U})$, and let $\operatorname{conv}(M^x)$ denote the convex hull. We have the following topological characterization of finite satisfiability for measures. \begin{proposition}\label{convfs} Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and let $M \prec \mathcal{U}$ be a small model. Then $\mu$ is finitely satisfiable in $M$ if and only if $\mu$ is in the closure of $\operatorname{conv}(M^{x})$ (viewed as a subset of $\mathfrak{M}_{x}(\mathcal{U}))$. \end{proposition} \begin{proof} Assume $\mu$ is finitely satisfiable in $M$. Let $U$ be a basic open subset of $\mathfrak{M}_{x}(\mathcal{U})$ containing $\mu$.
Say \begin{equation*} U = \bigcap_{i=1}^n \left\{ \mu' \in \mathfrak{M}_{x}(\mathcal{U}): r_i < \mu'(\varphi_{i}(x)) < s_i\right\} \end{equation*} for some $n \in \mathbb{N}$, $\varphi_1(x),...,\varphi_n(x) \in \mathcal{L}_x(\mathcal{U})$ and $r_{1},...,r_{n}, s_1,...,s_n \in [0,1]$. The collection $\{\varphi_1(x),...,\varphi_n(x)\}$ generates a finite Boolean subalgebra of $\mathcal{L}_x(\mathcal{U})$. Let $\theta_1(x),...,\theta_m(x)$ be its atoms, and let $\Theta := \{\theta_j(x): \mu(\theta_j(x)) > 0\}$. As $\mu$ is finitely satisfiable in $M$, for each $\theta_j(x) \in \Theta$, there exists some $a_j \in M^{x}$ such that $\models \theta_j(a_j)$. Let $\nu := \sum_{\theta_j \in \Theta} \mu(\theta_j(x))\delta_{a_j} \in \mathfrak{M}_{x}(\mathcal{U})$. Then we have $\mu(\varphi_i(x)) = \nu(\varphi_i(x))$ for all $1 \leq i \leq n$ (note that $a_j \models \theta_i \iff i=j$), so $\nu \in U \cap \operatorname{conv}(M^{x})$. Hence $\mu \in \operatorname{cl}(\operatorname{conv}(M^{x}))$. Conversely, suppose $\mu \in \operatorname{cl}(\operatorname{conv}(M^{x}))$ and let $\psi(x) \in \mathcal{L}_{x}(\mathcal{U})$ be such that $\mu(\psi(x)) > 0$. Consider the open set $U_{\psi} := \{\nu \in \mathfrak{M}_{x}(\mathcal{U}): 0 < \nu(\psi(x))\}$ containing $\mu$. Since $\mu$ is in the closure of $\operatorname{conv}(M^{x})$, there exists some $\mu_{\psi}=\sum_{i=1}^{n} r_i \delta_{a_i} \in U_{\psi}$ with $a_i \in M^{x}$ for all $i$. But then $ \mathcal{U} \models \psi(a_i)$ for at least one $i$. \end{proof} \section{Definable convolution and idempotent measures} \subsection{Extended product of measures}\label{sec: ext prod of measures} We begin by defining a slight generalization of the product of measures that encompasses both the usual independent product of Borel-definable measures and the standard Morley product of invariant types (without any definability assumptions), and also allows one to take products of $G$-invariant measures in arbitrary theories. This is accomplished by slightly tweaking the domain of the integral in the usual definition of the $\otimes$-product. \begin{definition}\label{Borel} Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$, $\nu \in \mathfrak{M}_{y}(\mathcal{U})$, and $\varphi(x,y,\overline{c}) \in \mathcal{L}_{xy}(\mathcal{U})$. We say that the triple $(\mu,\nu,\varphi)$ is \emph{Borel} if there exists some $N \prec \mathcal{U}$ such that: \begin{enumerate} \item $\overline{c} \subseteq N$; \item for any $q \in S(\nu|_{N})$ and $d,d' \in \mathcal{U}^{y}$ with $d,d'\models q$, we have that $\mu(\varphi(x,d,\overline{c})) = \mu(\varphi(x,d',\overline{c}))$; \item the map $F_{\mu,N}^{\varphi}: S(\nu|_{N}) \to [0,1]$ is Borel, where $F_{\mu,N}^{\varphi}(q) = \mu(\varphi(x,d,\overline{c}))$ for some/any $d \models q$. \end{enumerate} We say that the ordered pair $(\mu,\nu)$ is \emph{Borel} if $(\mu,\nu,\varphi)$ is Borel for any $\varphi(x,y,\overline{c}) \in \mathcal{L}_{xy}(\mathcal{U})$. \end{definition} \begin{definition}\label{def: gen prod} Assume that $(\mu,\nu)$ is Borel.
Then we define the product measure $\mu \tilde{\otimes} \nu \in \mathfrak{M}_{xy}(\mathcal{U})$ as follows: given an arbitrary formula $\varphi(x,y,\bar{c}) \in \mathcal{L}_{xy}(\mathcal{U})$, let $N$ be any small elementary submodel of $\mathcal{U}$ witnessing that $(\mu,\nu,\varphi)$ is Borel (as in Definition \ref{Borel}); we define \begin{equation*} \mu \tilde{\otimes} \nu(\varphi(x,y,\overline{c})) := \int_{S(\nu|_{N})} F_{\mu,N}^{\varphi} d\nu_{N}, \end{equation*} with the notation from Definition \ref{Borel}, where $\nu_{N}$ is the restriction of the regular Borel measure $\nu|_N$ to the compact set $S(\nu|_{N})$. \end{definition} We check that $\tilde \otimes$ is well-defined. \begin{proposition}\label{prop: int over diff mod} Assume that $(\mu,\nu,\varphi)$ is Borel. Then the value of $\mu \tilde{\otimes} \nu(\varphi(x;y,\overline{c}))$ in Definition \ref{def: gen prod} does not depend on the choice of $N$ (as in Definition \ref{Borel}). \end{proposition} \begin{proof} This proof is essentially the same as for $\otimes$ (see e.g.~\cite[Proposition 7.19]{Guide}). Assume that $(\mu,\nu,\varphi)$ is Borel with respect to both $M$ and $N$. We may assume that $M \subseteq N$ (taking a common extension). Let $r: S(\nu|_{N}) \to S(\nu|_{M})$ be the restriction map; then $F_{\mu,M}^{\varphi} \circ r = F_{\mu,N}^{\varphi} $, and by Proposition \ref{coheir} the pushforward of the measure $\nu_{N}$, namely $r^{*}(\nu_{N})$, is equal to $\nu_{M}$. Hence we have: \begin{equation*} \int_{S(\nu|_{M})} F_{\mu,M}^{\varphi} d(\nu_{M}) = \int_{S(\nu|_{M})} F_{\mu,M}^{\varphi} d r^*(\nu_{N}) = \int_{S(\nu|_{N})} \left( F_{\mu,M}^{\varphi} \circ r \right) d \nu_{N} \end{equation*} \begin{equation*} = \int_{S(\nu|_{N})} F_{\mu,N}^{\varphi} d{\nu_{N}}. \end{equation*} \end{proof} We recall the independent product of invariant types (see e.g.~\cite[Section 2.2]{Guide}). \begin{fact}\label{fac: product of inv types} \begin{enumerate} \item Assume $M \prec \mathcal{U}$ is a small submodel, $p \in S_x^{\mathrm{inv}}(\mathcal{U},M)$ and $\mathcal{U}' \succ \mathcal{U}$. There exists a unique type $p' \in S_x^{\mathrm{inv}}(\mathcal{U}',M)$ extending $p$. Then for any $A \subseteq \mathcal{U}'$, we write $p|_A$ to denote $p'|_A$. \item Assume that $p \in S_{x}(\mathcal{U}), q \in S_{y}(\mathcal{U})$ and $p$ is invariant. Then $p \otimes q := \operatorname{tp}(a,b/\mathcal{U}) \in S_{xy}(\mathcal{U})$ for some/any $b \models q$ and $a \models p|_{\mathcal{U}b}$ (in some $\mathcal{U}' \succ \mathcal{U}$; this is well-defined by (1)). \item If $p,q \in S_x(\mathcal{U}, M)$ (respectively, $p,q \in S^{\mathrm{inv}}_x(\mathcal{U}, M)$), then $p \otimes q \in S_{xy}(\mathcal{U}, M)$ (respectively, $p \otimes q \in S^{\mathrm{inv}}_{xy}(\mathcal{U}, M)$). \end{enumerate} \end{fact} The product $\tilde{\otimes}$ extends both the independent product on invariant types and the product of Borel-definable probability measures in arbitrary theories. \begin{proposition}\label{prop: gen prod ext type prod} \begin{enumerate} \item Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\nu \in \mathfrak{M}_{y}(\mathcal{U})$. Assume that $\mu$ is Borel-definable. Then, $\mu \otimes \nu = \mu \tilde{\otimes} \nu$. \item If $\mu \in \mathfrak{M}_x(\mathcal{U})$ is invariant and $q\in S_x(\mathcal{U})$ is arbitrary, then $(\mu,\delta_q)$ is Borel and $\mu \tilde{\otimes} \delta_q$ is well-defined. \item Let $p \in S_{x}(\mathcal{U})$ and $q \in S_{y}(\mathcal{U})$, where $p$ is invariant.
Then $\delta_{p \otimes q} = \delta_p \tilde{\otimes} \delta_q$, where $p \otimes q$ is the free product (see Fact \ref{fac: product of inv types}). \end{enumerate} \end{proposition} \begin{proof} (1) Assume that $\mu$ is Borel-definable over $M \prec \mathcal{U}$. Let $\varphi(x,y,\bar{c})$ be an arbitrary formula in $\mathcal{L}_{xy}(\mathcal{U})$, and let $M' \prec \mathcal{U}$ witness that $(\mu, \nu, \varphi)$ is Borel. Taking $N \prec \mathcal{U}$ to be an elementary extension of both $M$ and $M'$, we have both that $\mu$ is Borel-definable over $N$ and that $N$ witnesses that $(\mu, \nu, \varphi)$ is Borel. It is then easy to see that $\int_{S_{y}(N)}F^{\varphi}_\mu d (\nu|_N) = \int_{S(\nu|_N)}F^{\varphi}_\mu d \nu_N$ as long as the integral on the left hand side is well-defined --- which is the case by Borel-definability of $\mu$. (2) Let $\varphi(x,y) \in \mathcal{L}(\mathcal{U})$, and let $N \prec \mathcal{U}$ be a small model containing all the parameters of $\varphi$ and such that $\mu$ is invariant over $N$. Note that the map $F^\varphi_{\mu}: S_y(N) \to [0,1]$ need not be Borel measurable. However, $S(\delta_q|_N)$ is a single point since $q$ is a type, hence $F^\varphi_{\mu} \restriction_{S(\delta_q|_N)}$ is trivially Borel. (3) By (2), $(\delta_p, \delta_q)$ is Borel. Let $N \prec \mathcal{U}$ witness this, and let $b \in \mathcal{U}^y, b \models q|_N$. Then \begin{equation*} \delta_{p} \tilde{\otimes}\delta_{q}(\varphi(x;y))=\int_{S(\delta_{q}|_{N})} F_{\delta_{p}}^{\varphi} d(\delta_q)_{N} = F_{\delta_{p}}^{\varphi}(q|_{N}) = \begin{cases} 1 & \text{if } \varphi(x, b) \in p,\\ 0 & \text{if } \neg \varphi(x,b) \in p. \end{cases} \end{equation*} That is, $\delta_{p} \tilde{\otimes}\delta_{q}(\varphi(x;y)) = 1$ if and only if $\varphi(x,y) \in \operatorname{tp}(a,b/N)$ for some/any $b\models q|_N$ and $a \models p|_{Nb}$. \end{proof} \begin{remark}\label{rem: tilde otimes on types} If $p \in S_x(\mathcal{U}), q \in S_y(\mathcal{U})$, we say that $(p,q)$ is Borel if $(\delta_{p}, \delta_{q})$ is Borel. In this case the proof of Proposition \ref{prop: gen prod ext type prod}(3) shows that there exists some $r \in S_{xy}(\mathcal{U})$ such that $\delta_{r} = \delta_p \tilde{\otimes} \delta_q$; we will denote it by $r := p \tilde{\otimes} q$ --- by Proposition \ref{prop: gen prod ext type prod} this extends the $\otimes$ operation on invariant types to a larger class of types. \end{remark} From now on we will simply write $\otimes$ instead of $\tilde{\otimes}$ to denote this extended operation (on types and measures) when there is no ambiguity involved. \begin{definition} We say that $\mu \in \mathfrak{M}_x(\mathcal{U})$ and $\nu \in \mathfrak{M}_y(\mathcal{U})$ ($\otimes$-)\emph{commute} if both $(\mu,\nu)$ and $(\nu,\mu)$ are Borel, and $\mu \otimes \nu = \nu \otimes \mu$. \end{definition} We recall some facts about the $\otimes$ operation and commuting measures. \begin{fact}\label{fac: meas commute} Assume that $\mu \in \mathfrak{M}_{x}(\mathcal{U}), \nu \in \mathfrak{M}_{y}(\mathcal{U}), \lambda \in \mathfrak{M}_{z}(\mathcal{U})$ and $M \prec \mathcal{U}$. \begin{enumerate} \item \cite[Theorem 2.5]{NIP3}\label{fac: smooth commute} Assume that $\mu$ is Borel-definable over $M$ and $\nu$ is smooth over $M$. Then, for any $\varphi(x;y) \in \mathcal{L}_{xy}(M)$, we have \begin{equation*} \int_{S_{y}(M)} F_{\mu}^{\varphi} d(\nu|_{M}) = \int_{S_{x}(M)} F_{\nu}^{\varphi^*}d(\mu|_M). \end{equation*} In particular, $\mu \otimes \nu = \nu \otimes \mu$.
\item (\cite[Proposition 2.13]{NIP5} or \cite[Proposition 2.10]{CG}) If $\mu$ and $\nu$ are finitely approximated over $M$, then $\mu \otimes \nu = \nu \otimes \mu$. \item \cite[Proposition 7.30]{Guide} If $T$ is NIP, $\mu$ is dfs over $M$ and $\nu$ is invariant over $M$, then $\mu \otimes \nu = \nu \otimes \mu$. \item \cite[Corollary 1.3]{GanCon2} If $\mu$ and $\nu$ are smooth over $M$, then $\mu \otimes \nu$ is also smooth over $M$. \item If $T$ is NIP and $\mu,\nu$ are invariant, then $\mu \otimes (\nu \otimes \lambda) = (\mu \otimes \nu) \otimes \lambda$. (See \cite{GanCon2}, Theorem 2.2 and the introduction there.) \end{enumerate} \end{fact} \subsection{Definable convolution}\label{sec: def of conv} Throughout this section, we let $T$ be a first order $\mathcal{L}$-theory expanding a group. We let $\mathcal{G}$ be a sufficiently saturated model of $T$, and $G$ denotes a small elementary submodel. We use letters $x,y$ to denote \emph{singleton} variables, i.e.~ of the sort on which the group is defined. For any formula $\varphi(x,\overline{c}) \in \mathcal{L}_{x}(\mathcal{G})$, we let $\varphi'(x,y,\overline{c}) = \varphi(x \cdot y,\overline{c})$. \begin{definition}\label{def: conv} Let $\mu,\nu \in \mathfrak{M}_{x}(\mathcal{G})$, and let $\nu_{y}$ denote the measure in $\mathfrak{M}_{y}(\mathcal{G})$ such that for any $\varphi(y) \in \mathcal{L}_{y}(\mathcal{G})$, $\nu_{y}(\varphi(y)) = \nu(\varphi(x))$. \begin{enumerate} \item We say that \emph{$(\mu, \nu)$ is $*$-Borel} if for every formula $\varphi(x,\overline{c}) \in \mathcal{L}_{x}(\mathcal{G})$, the triple $(\mu,\nu_{y},\varphi')$ is Borel. We say that \emph{$\mu$ is $*$-Borel} if the pair $(\mu,\mu)$ is $*$-Borel. \item If $(\mu,\nu)$ is $*$-Borel, then we define the \textit{(definable) convolution product of $\mu$ with $\nu$} as follows: \begin{equation*} \mu * \nu (\varphi(x,\overline{c})) = \mu \tilde{\otimes} \nu_{y} (\varphi'(x,y, \overline{c})) = \int_{S(\nu_{y}|_{G})} F_{\mu}^{\varphi'} d\nu_G(y), \end{equation*} where $G$ is some/any small submodel of $\mathcal{G}$ witnessing that $(\mu,\nu_{y},\varphi')$ is Borel and $\nu_G(y)$ is the Borel measure $\nu_y$ restricted to $S(\nu_y|_G)$ (as in Definition \ref{def: gen prod}). We will routinely write this product simply as $\int F_{\mu}^{\varphi'}d\nu$ when there is no possibility of confusion. \end{enumerate} \end{definition} Note that we are integrating over translates with respect to the \emph{right action of $\mathcal{G}$}, and in general throughout the article, when speaking about $\mathcal{G}$-invariance and related notion, we will typically consider the action of $\mathcal{G}$ \emph{on the right}. This choice is made to make sure that this definition correctly extends Newelski's product of types (Proposition \ref{Newelski}), but of course all of our results hold with respect to left actions modulo obvious modifications. First we check that the convolution operation indeed defines a measure. \begin{fact} Let $\mu, \nu \in \mathfrak{M}_{x}(\mathcal{G})$. If $(\mu,\nu)$ is $*$-Borel, then $\mu * \nu$ is a Keisler measure. \end{fact} \begin{proof} Clearly $\mu * \nu ( x = x) =1$ and $\mu * \nu(\neg \varphi(x)) =1 - \mu * \nu(\varphi(x))$. Assume that $\psi_1(x), \psi_2(x) \in \mathcal{L}_x(\mathcal{U})$ satisfy $\psi_1(x) \wedge \psi_2(x) = \emptyset$. Let $\theta(x;y) := \psi_1(x \cdot y) \vee \psi_2(x \cdot y)$, and let $G \prec \mathcal{G}$ witness that both $(\mu,\nu_y,\psi'_1)$ and $(\mu,\nu_y,\psi'_2)$ are Borel. 
Then for any $q \in S(\nu|_G)$ and $b \models q$ we have $F_{\mu}^{\theta}(q) = \mu(\theta(x;b)) = \mu(\psi_1(x\cdot b) \vee \psi_2(x \cdot b))$. As $\psi_1(x) \wedge \psi_2(x) = \emptyset$ implies $\psi_1(x \cdot b) \land \psi_2(x \cdot b) = \emptyset$, we have \begin{equation*} F_{\mu}^{\theta}(q) = \mu(\psi_1(x\cdot b)) + \mu(\psi_2(x \cdot b)) = F_{\mu}^{\psi'_1}(q) + F_{\mu}^{\psi'_2}(q). \end{equation*} Then \begin{equation*} (\mu * \nu)(\psi_1(x) \vee \psi_2(x)) = \int_{S(\nu|_{G})} F^{\theta}_\mu d\nu_{G} = \int_{S(\nu|_{G})} \left( F^{\psi'_1}_\mu + F^{\psi'_2}_\mu \right) d\nu_{G} \end{equation*} \begin{equation*} = \int_{S(\nu|_{G})} F^{\psi'_1}_\mu d\nu_{G} + \int_{S(\nu|_{G})}F^{\psi'_2}_\mu d\nu_{G} = (\mu*\nu)(\psi_1(x)) + (\mu*\nu)(\psi_2(x)). \end{equation*} \end{proof} This notion of convolution extends the notion of the product of invariant types extensively studied by Newelski \cite{N1, N2} and others from the point of view of topological dynamics. The following is easy using Fact \ref{fac: product of inv types}. \begin{fact}\label{NTM} Let $G \prec \mathcal{G}$ be a small model. Given $p,q \in S_{x}^{\mathrm{inv}}(\mathcal{G},G)$, we define $p * q := \operatorname{tp}(a \cdot b/\mathcal{G}) \in S_{x}^{\mathrm{inv}}(\mathcal{G},G)$, for some/any $(a,b) \models p \otimes q$ in a larger monster model. Then $\left(S^{\mathrm{inv}}_{x}(\mathcal{G},G),* \right)$ is a semigroup, with multiplication continuous in the left coordinate: for each $q \in S_{x}^{\mathrm{inv}}(\mathcal{G},G)$, the map $-* q: S_{x}^{\mathrm{inv}}(\mathcal{G},G) \to S_{x}^{\mathrm{inv}}(\mathcal{G},G)$ is continuous. And $(S_{x}(\mathcal{G},G),*)$ is a closed sub-semigroup. \end{fact} \begin{proposition}\label{Newelski} Let $\delta:S_{x}^{\mathrm{inv}}(\mathcal{G},G) \to \mathfrak{M}_{x}^{\mathrm{inv}}(\mathcal{G},G)$ be the map $\delta(p) = \delta_{p}$. Then $\delta$ is a topological embedding, and $\delta_{p * q} = \delta_{p} * \delta_{q}$ for any $p,q \in S_x(\mathcal{G},G)$. \end{proposition} \begin{proof} Clearly $\delta$ is a topological embedding. Now let $\varphi(x) \in \mathcal{L}_{x}(\mathcal{G})$ be arbitrary; then by Proposition \ref{prop: gen prod ext type prod}(3) we have \begin{equation*} \delta_{p * q}(\varphi(x)) = \delta_{p_{x} \otimes q_{y}}(\varphi(x\cdot y)) = \delta_{p_{x}} \tilde{\otimes} \delta_{q_{y}}(\varphi(x \cdot y)) = \delta_{p} * \delta_{q}(\varphi(x)). \end{equation*} \end{proof} \begin{remark}\label{rem: ast on types} As in Remark \ref{rem: tilde otimes on types}, given $p,q \in S_x(\mathcal{G})$ we say that $(p,q)$ is $\ast$-Borel if $(\delta_p, \delta_q)$ is $\ast$-Borel, and in this case denote by $p \ast q$ the type $r \in S_x(\mathcal{G})$ such that $\delta_r = \delta_p \ast \delta_q$ --- by Proposition \ref{Newelski} this extends the operation on invariant types from Fact \ref{NTM}. \end{remark} The next proposition follows by straightforward computations. \begin{proposition}\label{prop: calc conv} Let $\mu, \mu_1, \ldots, \mu_n, \nu_1, \ldots, \nu_m \in \mathfrak{M}_x(\mathcal{G})$ be arbitrary, and assume that the pairs $(\mu_i, \nu_j)$ are $*$-Borel for all $1 \leq i \leq n, 1\leq j \leq m$. Let $a, b, a_1, \ldots, a_n \in \mathcal{G}$ and $r_1, \ldots, r_n, s_1, \ldots, s_m \in \mathbb{R}_{\geq 0}$ be such that $\sum_{i=1}^{n} r_i = \sum_{j=1}^{m} s_j = 1$.
Then: \begin{enumerate} \item $\mu * \delta_e = \delta_{e} * \mu = \mu$, \item $\delta_{a} * \delta_{b} = \delta_{ab}$, \item $(\delta_{a} * \mu )(\varphi(x)) = \mu(\varphi(a \cdot x))$ for any $\varphi(x) \in \mathcal{L}_x(\mathcal{U})$, \item $\left(\sum_{i=1}^n r_i \cdot \mu_i \right) * \left(\sum_{j=1}^m s_j \cdot\nu_j \right) = \sum_{i,j = 1}^{n,m}r_i\cdot s_j \cdot (\mu_i * \nu_j)$, \item $\left( \left(\sum_{i =1}^n r_i \cdot \delta_{a_i} \right) * \mu \right)(\varphi(x)) = \sum_{i=1}^n r_i \cdot \mu \left(\varphi(a_i \cdot x) \right)$ for any $\varphi(x) \in \mathcal{L}_x(\mathcal{U})$. \end{enumerate} \end{proposition} Finally, we observe that the following properties of measures are preserved under convolution. \begin{proposition}\label{prop: pros pres under conv} Let $\mu,\nu \in \mathfrak{M}_{x}(\mathcal{G})$ be such that $(\mu,\nu)$ is $*$-Borel.\begin{enumerate} \item If $\mu,\nu$ are definable over $G \prec \mathcal{G}$, then $\mu * \nu$ is definable over $G$. \item If $\mu,\nu$ are finitely satisfiable over $G \prec \mathcal{G}$, then $\mu * \nu$ is finitely satisfiable over $G$. \item If $\mu,\nu$ are finitely approximated over $G \prec \mathcal{G}$, then $\mu*\nu$ is finitely approximated over $G$. \item If $\mu(x =b) =0$ for every $b$ in $\mathcal{G}$, then $\mu * \nu( x = b) = 0$ for every $b \in \mathcal{G}$. \end{enumerate} \end{proposition} \begin{proof} Claims (1), (2) and (3) are slight variations on the preservation of the corresponding properties with respect to $\otimes$ (see e.g.~\cite[Lemma 1.6]{NIP3} or \cite[Proposition 2.6]{CG} for (1) and (2), and \cite[Proposition 2.13]{NIP5} or \cite[Proposition 2.10]{CG} for (3)). (4) Let $b \in \mathcal{G}$ be arbitrary, let $\varphi(x) \in \mathcal{L}_x(\mathcal{U})$ be the formula ``$x=b$'' and let $G \prec \mathcal{G}$ witness that $\left(\mu, \nu_y, \varphi' \right)$ is Borel. Then \begin{equation*} \mu * \nu ( x =b ) = \mu \tilde{\otimes} \nu_{y} ( x \cdot y = b) = \int_{S(\nu|_{G})} F^{\varphi'}_{\mu} d\nu_{G}(y). \end{equation*} And for $q \in S(\nu|_{G})$, $F_{\mu}^{\varphi'}(q) = \mu(x \cdot c = b)$ for some/any $c \models q$ in $\mathcal{G}$. Then, $\mu(x \cdot c = b) = \mu(x =b c^{-1}) = 0$ by assumption. Therefore, $\int F^{\varphi'}_{\mu} d\nu_{G} = \int 0 d\nu_{G} = 0$. \end{proof} \subsection{Idempotent measures}\label{sec: idemp} We continue working in a theory expanding a group, and begin with some standard definitions. \begin{definition} Let $\mu \in \mathfrak{M}_x(\mathcal{G})$. \begin{enumerate} \item We say that $\mu$ is \emph{idempotent} if $\mu$ is $*$-Borel and $\mu*\mu = \mu$. \item We say that $\mu$ is \emph{right-invariant} if for any formula $\varphi(x) \in \mathcal{L}_x(\mathcal{G})$ and any $a \in \mathcal{G}$, we have $\mu(\varphi(x)) = \mu(\varphi(x\cdot a))$. \end{enumerate} \end{definition} \begin{definition} Let $\mathcal{H}$ be a type-definable subgroup of $\mathcal{G}$, where $H(x)$ is the partial type defining the domain of $\mathcal{H}$ (which we associate with the closed set of types implying $H$). Then $\mathcal{H}$ is \emph{definably amenable} if there exists a measure $\mu \in \mathfrak{M}_x(\mathcal{G})$ such that $\mu(H(x)) = 1$, and for any $\varphi(x) \in \mathcal{L}_x(\mathcal{G})$ and $a \in \mathcal{H}$ we have $\mu(\varphi(x)) = \mu(\varphi(x \cdot a))$. In this case, we call $\mu$ \emph{right $\mathcal{H}$-invariant}. 
\end{definition} \begin{remark} For NIP groups, existence of a right-invariant measure on $\mathcal{H}$ is equivalent to the existence of a left invariant measure on $\mathcal{H}$ (as well as a bi-invariant measure, see \cite[Lemma 6.2]{CS}). \end{remark} \begin{proposition}\label{wow} Let $\mathcal{H}$ be a type-definable, definably amenable subgroup of $\mathcal{G}$, defined by a partial type $H(x)$. Suppose that $\mu \in \mathfrak{M}_{x}(\mathcal{G})$ is right $\mathcal{H}$-invariant. Then $\mu$ is idempotent. Moreover, if $\nu$ is another measure such that $\nu(H(x)) = 1$, then $(\mu,\nu)$ is $*$-Borel and $\mu * \nu = \mu$. \end{proposition} \begin{proof} We show that for any measure $\nu \in \mathfrak{M}_{x}(\mathcal{G})$ such that $\nu(H(x)) = 1$, $(\mu,\nu)$ is $*$-Borel and $\mu * \nu = \mu$. For ease of notation, we will identify $\nu$ with $\nu_{y}$. Fix a formula $\varphi(x) \in \mathcal{L}_{x}(\mathcal{G})$. Let $G$ be a small elementary submodel of $\mathcal{G}$ containing the parameters of $H(x)$ and $\varphi(x)$. Fix some $q \in S(\nu|_{G}) \subseteq S_{y}(G)$, then $q \vdash H(y)$. If not, then $q \in S_{y}(G) \backslash H(y)$. Since $H(y)$ is closed, $S_{y}(G) \backslash H(y)$ is open, hence $S_{y}(G) \backslash H(y) = \bigcup_{i \in I} \psi_i(y)$ for some index set $I$ and $\psi_i \in \mathcal{L}_y(G)$. Then $\psi_i(y) \in q$ for some $i$ and since $q \in S(\nu|_{G})$, we know that $\nu(\psi_i(y)) > 0$. But this is a contradiction since $\nu(H(y)) = 1$ and $\psi_i(y)$ is disjoint from $H(y)$. Therefore, if $b \in \mathcal{G}$ and $b \models q$, then $b \in \mathcal{H}$. Now, the function $F_{\mu,G}^{\varphi'}$ is constant on $S(\nu|_{G})$ since $F_{\mu,G}^{\varphi'}(q) = \mu(\varphi(x \cdot b)) = \mu(\varphi(x))$ by right $\mathcal{H}$-invariance of $\mu$, hence $(\mu,\nu)$ is $*$-Borel. And $\mu * \nu = \mu$ as \begin{equation*} \mu * \nu(\varphi(x)) = \int_{S(\nu|_{G})} F_{\mu,G}^{\varphi'} d\nu_{G} = \int_{S(\nu|_{G})} \mu(\varphi(x)) d\nu_{G} = \mu(\varphi(x)). \end{equation*} In particular, $(\mu,\mu)$ is Borel and $\mu * \mu = \mu$. \end{proof} The expectation is that in tame situations, all idempotent measures are of this form for some type-definable subgroup. We will show that this is indeed the case when $\mathcal{G}$ is a stable group in Section \ref{sec: stable groups}, but for now we discuss some examples in which idempotent measures arise. If $\mathcal{G}$ is a definably amenable group, and $\mathcal{H}$ is a type-definable subgroup of finite index (hence definable), then $\mathcal{H}$ is definably amenable (if $\mu$ is a right-invariant measure on $\mathcal{G}$, then $\mu_{\mathcal{H}}(\varphi(x)) := [\mathcal{G} : \mathcal{H}] \cdot \mu(\varphi(x) \cap H(x))$ gives a right-invariant measure on $\mathcal{H}$). This generalizes to type-definable subgroups of bounded index when $\mathcal{G}$ is NIP. \begin{proposition}\label{prop: bdd index def am NIP} Assume that $\mathcal{G}$ is definably amenable and NIP, and let $\mathcal{H}$ be a type-definable subgroup of $\mathcal{G}$ of bounded index. Then $\mathcal{H}$ is also definably amenable. \end{proposition} Proposition \ref{prop: bdd index def am NIP} follows from a slightly generalized construction of the $\mathcal{G}$-invariant measures $\mu_p$ from \cite{NIP1, NIP2, CS} in NIP groups. 
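Before turning to the proof, let us record a toy instance of Proposition \ref{wow} (included only as an illustration, computed directly from the rules of Proposition \ref{prop: calc conv}): if $\mathcal{H} = \{e, a\}$ is a subgroup of $\mathcal{G}$ of order $2$ and $\mu := \frac{1}{2}\delta_{e} + \frac{1}{2}\delta_{a}$, then
\begin{equation*}
\mu * \mu = \frac{1}{4}\left( \delta_{e} * \delta_{e} + \delta_{e} * \delta_{a} + \delta_{a} * \delta_{e} + \delta_{a} * \delta_{a} \right) = \frac{1}{4}\left( \delta_{e} + \delta_{a} + \delta_{a} + \delta_{a^{2}} \right) = \frac{1}{2}\delta_{e} + \frac{1}{2}\delta_{a} = \mu,
\end{equation*}
using that $a^{2} = e$, that $\delta_{g} * \delta_{h} = \delta_{g h}$, and that $*$ is bilinear on convex combinations.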
We will use some properties of the absolute type-definable connected component $\mathcal{G}^{00}$, the intersection of all type-definable subgroups of $\mathcal{G}$ of bounded index, and refer to the aforementioned texts for further details. To be compatible with our set up for convolutions, we work with \emph{$\mathcal{G}$ acting on the right}. By NIP, $\mathcal{G}^{00}$ is a normal subgroup type-definable over $\emptyset$, and let $G \prec \mathcal{G}$ be an arbitrary small model. As usual, $\pi: \mathcal{G} \to \mathcal{G}/\mathcal{G}^{00}$ is the surjective group homomorphism with $\pi(g)$ depending only on $\operatorname{tp}(g/G)$. Then $\mathcal{G}/\mathcal{G}^{00}$ is a compact Hausdorff topological group with respect to the logic topology, i.e.~a subset $X$ of $\mathcal{G}/\mathcal{G}^{00}$ is closed if and only if $\pi^{-1}(X)$ is type-definable, if and only if $\pi^{-1}(X)$ is type-definable over $G$. The induced map $S_x(G) \to \mathcal{G}/\mathcal{G}^{00}$ is continuous. With respect to this topology, closed subgroups of $\mathcal{G}/\mathcal{G}^{00}$ are in a bijective correspondence with type-definable subgroups of $\mathcal{G}$ of bounded index (equivalently, containing $\mathcal{G}^{00}$). Namely, if $K$ is a closed subgroup of $\mathcal{G}/\mathcal{G}^{00}$, then $\mathcal{H} := \pi^{-1}(K)$ is a type-definable set containing $\mathcal{G}^{00} = \pi^{-1}(e_K)$, and is a group since $\pi$ is a group homomorphism (and vice versa). Also, if $\mathcal{H} \subseteq \mathcal{G}$ is type-definable, then $K := \pi(\mathcal{H}) \subseteq \mathcal{G}/\mathcal{G}^{00}$ is a closed subgroup (as $\pi:S_x(G) \to \mathcal{G}/\mathcal{G}^{00}$ is a closed map). Recall that a global type $p \in S_x(\mathcal{G})$ is \emph{strongly $f$-generic} over $G$ if $p \cdot g$ is $G$-invariant for every $g \in \mathcal{G}$. If $\mathcal{G}$ is definably amenable and $G$ is an arbitrary small model, then there exists a type $p$ strongly $f$-generic over $G$ (see \cite{NIP2}). Moreover, as every right translate of a strong $f$-generic over $G$ is again a strong $f$-generic over $G$, we may assume $p(x) \vdash \mathcal{G}^{00}(x)$. \begin{proof}[Proof of Proposition \ref{prop: bdd index def am NIP}] Let $K := \pi(\mathcal{H})$, then $\pi^{-1}(K) = \mathcal{H}$ (by the fourth isomorphism theorem for groups), hence $K$ is a closed subgroup of $\mathcal{G}/\mathcal{G}^{00}$. Denote by $\nu$ the right-invariant Haar measure on Borel subsets of $K$ normalized by $\nu(K) = 1$. Let $p \in S_x(\mathcal{G})$ be a strong $f$-generic over $G$ with $p(x) \vdash \mathcal{G}^{00}(x)$, so in particular $p \vdash \mathcal{H}$ and $p \cdot g = p$ for every $g \in \mathcal{G}^{00}$. For a formula $\varphi(x)\in \mathcal{L}_x(\mathcal{G})$, let $$A_{\varphi,p} := \left\{ \bar{g} \in K : \varphi(x) \in p \cdot \bar{g} \right\}.$$ Then $A_{\varphi,p}$ is a Borel subset of $K$ (as $A_{\varphi,p} = K \cap \left\{ \bar{g} \in \mathcal{G}/\mathcal{G}^{00}: \varphi(x) \in p \cdot \bar{g} \right\}$, and the latter set is Borel by \cite[Proposition 5.6]{NIP2}). We define $$\mu_{p,\nu}(\varphi(x)) := \nu(A_{\varphi, p}).$$ Then we have the following. \begin{itemize} \item $\mu_{p,\nu}$ is a Keisler measure with $\mu_{p,\nu}(H) = 1$. It is easy to check that $\mu_{p,\nu}$ is a measure. 
And by regularity, $\mu_{p,\nu}(H) = \inf\{ \mu_{p,\nu}(\psi(x)) : H (x) \vdash \psi(x), \psi(x) \in \mathcal{L}_x(\mathcal{G}) \}$, and as $p \vdash H \vdash \psi$ for all such $\psi$, we have that $A_{\psi,p} = K$ by definition, hence $\mu_{p,\nu}(\psi) = 1$. \item $\mu_{p,\nu}$ is right $\mathcal{H}$-invariant (as $\mu_{p,\nu}(\varphi(x \cdot g)) = \nu(A_{\varphi(x \cdot g),p}) = \nu(A_{\varphi,p} \cdot \pi(g)) = \nu(A_{\varphi,p}) = \mu_{p,\nu}(\varphi(x))$ by right $K$-invariance of $\nu$, as $\pi(g) \in K$). \end{itemize} Hence $\mathcal{H}$ is definably amenable, witnessed by $\mu_{p,\nu}$. \end{proof} \begin{question} Is Proposition \ref{prop: bdd index def am NIP} true without the NIP assumption? \end{question} Classification of measures supported on finite subsets of $\mathcal{G}$ follows from Wendel's theorem. \begin{proposition} \label{prop: finite realized support} If $\mu$ is a measure on $\mathcal{G}$ whose support is a finite collection of realized types, then $\mu$ is idempotent if and only if $\mu = \frac{1}{|\mathcal{H}|}\sum_{a \in \mathcal{H}} \delta_{a}$ for some finite subgroup $\mathcal{H}$ of $\mathcal{G}$. \end{proposition} \begin{proof} $(\Leftarrow)$ is by Proposition \ref{wow}. $(\Rightarrow)$ Assume that $S(\mu) = \{a_1,...,a_n\} = A \subseteq \mathcal{G}$. As $\mu$ is idempotent, $A$ is closed under multiplication (if not, there exist $i,j$ such that $c := a_i \cdot a_j \in \mathcal{G} \setminus A$; then $\mu(x = c) = 0$, but $\mu * \mu(x = c) >0$). As any finite subset of a group closed under products is a subgroup, $A$ is a (finite, hence compact) group, and $\mu|_{A}$ is an idempotent measure on $A$. Therefore, by \cite[Theorem 1]{Wendel}, $\mu|_{A}$ is the unique Haar measure on the subgroup $S(\mu|_A)$ of $A$. But as $S(\mu) = A$, we conclude that $\mu = \frac{1}{n} \sum_{a \in A} \delta_{a}$. \end{proof} Finally, we observe a sufficient condition for idempotence to be preserved under convolution in the NIP context. \begin{proposition}\label{prop: commute preserves idemp} If $\mathcal{G}$ is NIP and abelian, and both $\mu,\nu$ are idempotent and dfs, then $\mu * \nu$ is idempotent and dfs. \end{proposition} \begin{proof} Fix a formula $\varphi(x) \in \mathcal{L}_x(\mathcal{G})$ and assume that $G \prec \mathcal{G}$ witnesses that both $(\mu,\nu_{y}, \varphi')$ and $(\nu,\mu_{y}, \varphi')$ are Borel, and that both $\mu$ and $\nu$ are dfs over $G$ (taking a common extension of the models witnessing each of these properties separately). By Proposition \ref{prop: pros pres under conv}, $\mu \ast \nu$ is dfs over $G$. By Fact \ref{fac: meas commute}(3), $\mu$ and $\nu$ commute, so we have \begin{equation*} \mu * \nu (\varphi(x)) = \mu_{x} \otimes \nu_{y} (\varphi(x \cdot y)) = \nu_{y} \otimes \mu_{x}(\varphi(x \cdot y)). \end{equation*} By change of variables and abelianity of $\mathcal{G}$, we can conclude \begin{equation*} = \nu_{x} \otimes \mu_{y}(\varphi(y \cdot x)) = \nu_{x} \otimes \mu_{y}(\varphi(x \cdot y)) = \nu * \mu(\varphi(x)). \end{equation*} Now, let $\lambda := \mu * \nu$. Using associativity of $*$ in the NIP context (see Proposition \ref{NIP:measure}), \begin{equation*} \lambda * \lambda = \mu * \nu * \mu * \nu = \mu * \mu * \nu * \nu = \mu * \nu = \lambda.
\end{equation*} \end{proof} \section{Supports of idempotent measures}\label{sec: supports} In this section, we will show (in an arbitrary theory) that if $\mu$ is definable, invariantly supported (see Definition \ref{def: inv sup meas}) and idempotent, then $(S(\mu),*)$ is a compact, left-continuous semigroup with no proper closed two-sided ideals. The assumption ``definable and invariantly supported'' is satisfied when $\mu$ is a dfs measure in an arbitrary theory (by Lemma \ref{lem: sup on inv types}(1)), and when $\mu$ is an arbitrary definable measure in an NIP theory (by Lemma \ref{lem: sup on inv types}(3)). We begin by considering two examples, which illustrate in particular that the support of an idempotent dfs Keisler measure need not be a group in general. \begin{example}\label{exa: 1} Let $T = T_{\operatorname{doag}}$ be the complete theory of a divisible ordered abelian group in the language $\{+,<,0,1\}$. Let $\mathcal{G}$ be a monster model of $T$ and consider $G := \mathbb{Q}$ as an elementary substructure in the natural way. Let $p_{\infty}$ be the unique global type finitely satisfiable in $G$ and extending $\{x > a: a \in \mathbb{Q}\}$. Let $p_{-\infty}$ be the unique global type finitely satisfiable in $G$ and extending $\{ x <a :a \in \mathbb{Q}\}$. Let $\mu := \frac{1}{2}\delta_{p_{- \infty}} + \frac{1}{2}\delta_{p_{\infty}}$; we claim that $\mu, \delta_{p_{\infty}},\delta_{p_{-\infty}} \in \mathfrak{M}_x(\mathcal{G})$ are idempotent. By Proposition \ref{prop: pros pres under conv}, the product $\delta_{\alpha} * \delta_{\beta}$ for $\alpha,\beta \in \{p_{\infty},p_{-\infty}\}$ is finitely satisfiable in $\mathbb{Q}$. Then, using Proposition \ref{prop: calc conv}, it is not hard to verify the following calculation: \begin{equation*} \mu * \mu = \Big(\frac{1}{2}\delta_{p_{- \infty}} + \frac{1}{2}\delta_{p_{\infty}}\Big) * \Big(\frac{1}{2}\delta_{p_{- \infty}} + \frac{1}{2}\delta_{p_{\infty}}\Big) \end{equation*} \begin{equation*} =\frac{1}{4} \Big(\delta_{p_{-\infty}} * \delta_{p_{-\infty}} \Big) + \frac{1}{4} \Big(\delta_{p_{- \infty}} * \delta_{p_{\infty}}\Big) + \frac{1}{4} \Big(\delta_{p_{\infty}} * \delta_{p_{-\infty}}\Big) + \frac{1}{4}\Big(\delta_{p_{\infty}} * \delta_{p_{\infty}}\Big) \end{equation*} \begin{equation*} = \frac{1}{4}\delta_{p_{- \infty}} + \frac{1}{4}\delta_{p_{\infty}} + \frac{1}{4} \delta_{p_{- \infty}} + \frac{1}{4} \delta_{p_{\infty}} = \frac{1}{2}\delta_{p_{-\infty}} + \frac{1}{2}\delta_{p_{\infty}} = \mu. \end{equation*} We observe that while $(S(\delta_{p_{\infty}}),*)$ and $(S(\delta_{p_{-\infty}}),*)$ are groups (with a single element), $(S(\mu),*)$ is not a group since it does not contain an identity element. \end{example} \begin{example}\label{circle} Let $G = (S^{1},\cdot,C(x,y,z))$ be the standard circle group over $\mathbb{R}$, with $C$ the cyclic clockwise ordering. Let $T_{O}$ be the corresponding theory. Let $\mu$ be the Keisler measure on this structure which corresponds to the restriction of the Haar measure on $S^{1}$. Let $\mathcal{G}$ be a monster model of $T_{O}$ such that $S^{1} \prec \mathcal{G}$. Then $\mu$ is smooth over $S^{1}$ and admits a unique global extension $\tilde{\mu}$. We remark that $\tilde{\mu}$ is right invariant, hence idempotent (Proposition \ref{wow}). Let $\textrm{st}:S_{x}(\mathcal{G}) \to S^{1}$ be the standard part map. Assume that $p \in S(\tilde{\mu})$ and $\textrm{st}(p) = a$.
Then $\varphi_\varepsilon(x) := C(a-\varepsilon, x, a + \varepsilon) \notin p$ for every infinitesimal $\varepsilon \in \mathcal{G}$ ($x \neq a \in p$ as $\mu(x=a) = 0$, and if $\varphi_\varepsilon(x) \in p$, then $\tilde{\mu}(\varphi_\varepsilon(x) \land x \neq a) > 0$, but $\varphi_\varepsilon (G) = \{ a \}$ --- contradicting finite satisfiability of $\tilde{\mu}$ in $G$). As the types are determined by the cuts in the circular order, it follows that for every $a \in S^1$ there are exactly two types $a_{+}(x),a_{-}(x) \in S(\tilde{\mu})$ determined by whether $C(a+\varepsilon, x, b)$ holds for every infinitesimal $\varepsilon$ and $b \in G$, or $C(b, x, a-\varepsilon)$ holds for every infinitesimal $\varepsilon$ and $b \in G$, respectively. It follows that $(S(\tilde{\mu}),*) \cong S^{1} \times \{+,-\}$ with multiplication defined by: \begin{equation*} a_{\delta} * b_{\gamma} = (a \cdot b)_{\delta} \end{equation*} for all $a,b \in S^1$ and $\delta,\gamma \in \{+,-\}$. Again, $(S(\mu),*)$ is not a group. \end{example} Next we establish various properties of $(S(\mu),*)$ when $\mu$ is a global idempotent measure which is definable and invariantly supported. Given $S_1, S_2 \subseteq S_x(\mathcal{G})$, we write $S_1 * S_2 := \{ p_1 * p_2 \in S_x(\mathcal{G}) : p_i \in S_i \}$ (under the assumption that all such products are defined, i.e.~assuming $(p_1, p_2)$ is $\ast$-Borel for all $p_i \in S_i$ --- see Remark \ref{rem: ast on types}). The assumption of being invariantly supported in the lemmas below is only needed to ensure that $S(\mu)*S(\mu)$ is defined (Fact \ref{NTM}). \begin{proposition}\label{semigroup} Let $\mu,\nu \in \mathfrak{M}_{x}(\mathcal{G})$. Assume that $\mu$ is definable, and both $\mu$ and $\nu$ are invariantly supported. Then: \begin{enumerate} \item $S(\mu) * S(\nu) \subseteq S(\mu * \nu)$; \item $S(\mu) * S(\nu)$ is a dense subset of $S(\mu * \nu)$. \end{enumerate} \end{proposition} \begin{proof} (1) Assume that $p \in S(\mu), q \in S(\nu)$, and let $\varphi(x) \in p *q$. Choose $G \prec \mathcal{G}$ such that $\mu$ is definable over $G$, $p,q$ are finitely satisfiable in $G$, and $G$ contains all the parameters from $\varphi$. We need to show that $\mu * \nu(\varphi(x)) > 0$. Now, \begin{equation*} \mu * \nu(\varphi(x)) = \int_{S(\nu|_{G})} F_{\mu, G}^{\varphi'}d\nu_{G} \end{equation*} Since $\mu$ is definable, the map $F_{\mu,G}^{\varphi'}:S(\nu|_{G}) \to [0,1]$ is continuous. Therefore, it suffices to find some $r \in S(\nu|_{G})$ such that $F_{\mu,G}^{\varphi'}(r) > 0$. Consider $r := q|_{G}$. Then, $F_{\mu,G}^{\varphi'}(q|_{G}) = \mu(\varphi(x\cdot b))$, where $b \models q|_{G}$. Then, $\varphi(x \cdot b) \in p$ and since $p \in S(\mu)$, we have that $\mu(\varphi(x\cdot b)) > 0$. Hence, $F_{\mu}^{\varphi'}(q|_{G}) > 0$ and so $\mu * \nu(\varphi(x)) > 0$. (2) By (1), we already know that $S(\mu) * S(\nu) \subseteq S( \mu * \nu)$. Fix some $r \in S(\mu * \nu)$ and a formula $\varphi(x) \in r$. We need to find $p \in S(\mu)$ and $q \in S(\nu)$ such that $\varphi(x) \in p*q$. Choose $G$ such that $\mu$ is definable over $G$, all types in $S(\mu), S(\nu)$ are invariant over $G$, and $G$ contains the parameters of $\varphi(x)$. Since $\varphi(x) \in r$ and $r$ is in the support of $\mu * \nu$, we know that $\mu * \nu (\varphi(x)) > 0$. Therefore, $\int_{S(\nu|_G)} F_{\mu, G}^{\varphi'} d(\nu_{G})>0$, and so there exists some $t \in S(\nu|_{G})$ such that $F_{\mu, G}^{\varphi'}(t) > 0$. If $c \models t$, then $\mu(\varphi(x \cdot c)) > 0$. 
So, by Proposition \ref{prop: supp 1}(1), there exists $p \in S(\mu)$ such that $\varphi(x \cdot c) \in p$. By Proposition \ref{coheir}, we let $q \in S(\nu)$ be such that $q|_{G} = t$. By construction, we then observe that $\varphi(x) \in p * q$. \end{proof} \begin{corollary}\label{qcont} Assume that $\mu$ is definable, invariantly supported and idempotent. Then $\left(S(\mu),* \right)$ is a compact Hausdorff (with the subspace topology) semigroup which is left-continuous, i.e.~the map $-*q: S(\mu) \to S(\mu)$ is continuous for each $q \in S(\mu)$. \end{corollary} \begin{proof} By Proposition \ref{prop: supp 1}(2), $S(\mu)$ is a compact Hausdorff space. By Proposition \ref{semigroup}, $S(\mu) * S(\mu) \subseteq S(\mu * \mu) = S(\mu)$. Now, choose some $G \prec \mathcal{G}$ such that $\mu$ is definable over $G$, and all types in $S(\mu)$ are invariant over $G$. Then $(S(\mu),*)$ is a sub-semigroup of $(S_{x}^{\mathrm{inv}}(\mathcal{G},G),*)$ and $*$ is left-continuous by Fact \ref{NTM}. \end{proof} We now define some global functions which mimic the map $y \mapsto \int f(x \cdot y)d\mu(x)$. \begin{definition} Let $\mu \in \mathfrak{M}_x(\mathcal{G})$ be definable, and fix $\varphi(x) \in \mathcal{L}_{x}(\mathcal{G})$. We then define the global function $D_{\mu}^{\varphi'}:S_{y}(\mathcal{G}) \to [0,1]$ via $p \mapsto \mu(\varphi(x\cdot c))$, for some/any $c \models p|_{G}$ and small $G \prec \mathcal{G}$ containing the parameters of $\varphi(x)$ and such that $\mu$ is definable over $G$. \end{definition} Note that for any formula $\varphi(x) \in \mathcal{L}_{x}(\mathcal{G})$, the map $D^{\varphi'}_{\mu}$ is continuous: $D_{\mu}^{\varphi'} = F_{\mu,G}^{\varphi'}\circ r$, where $r: S_y(\mathcal{G}) \to S_y(G)$ is the restriction map, and $F_{\mu,G}^{\varphi'}$ is continuous by definability of $\mu$. The next two results are adapted from Glicksberg's work on semi-topological semigroups to the general model-theoretic context. In particular, see \cite{Glicksberg1,Glicksberg2}. \begin{proposition} \label{prop: max attained val} Let $\mu \in \mathfrak{M}_{x}(\mathcal{G})$ be definable, invariantly supported and idempotent, and $\varphi(x) \in \mathcal{L}_{x}(\mathcal{G})$ arbitrary. Assume that $D_{\mu}^{\varphi'}|_{S(\mu)}$ attains a maximum at $q \in S(\mu)$ (exists as this is a continuous function on a compact set). Then for any $p \in S(\mu)$, we have that $D_{\mu}^{\varphi'}(q) = D_{\mu}^{\varphi'}(p * q)$. \end{proposition} \begin{proof} Fix a small model $G_{0} \prec \mathcal{G}$ such that $\mu$ is definable over $G_{0}$, and $G_0$ contains the parameters of $\varphi(x)$. Let $b \models q|_{G_{0}}$ and let $\theta(x;y) := \varphi((x\cdot y) \cdot b)$. Now fix a larger submodel $G \prec \mathcal{G}$ such that $G_0b \subset G$. Let $\delta : = \mu(\varphi(x\cdot b))$. Observe that then for any $t \in S(\mu|_{G})$, $a \models t$, and $\tilde{t} \in S(\mu)$ such that $\tilde{t}|_{G} = t$, we have $F_{\mu, G}^{\theta}(t) = \mu(\varphi((x \cdot a) \cdot b)) = \mu \left(\varphi(x \cdot (a \cdot b)) \right) = D_{\mu}^{\varphi'}(\tilde{t}*q) \leq D_{\mu}^{\varphi'}(q)= \delta$ (by the assumption on $q$). We conclude that for any $t \in S(\mu|_{G})$, $F_{\mu,G}^{\theta}(t) \leq \delta$. On the other hand, \begin{equation*} \delta = D_{\mu}^{\varphi'}(q) = \mu(\varphi(x \cdot b)) = \mu * \mu (\varphi(x \cdot b)) = \mu_{x} \tilde{\otimes} \mu_{y}(\theta(x;y)) \end{equation*} \begin{equation*} = \int_{S(\mu|_G)} F_{\mu,G}^{\theta} d\mu_G.
\end{equation*} Therefore, $F_{\mu}^{\theta} = \delta$ almost everywhere (with respect to $\mu_{G}$). Since both maps are continuous, they are equal on $S(\mu|_{G})$. Finally, for any $p \in S(\mu)$ and $a \models p$, we have: \begin{equation*} D_{\mu}^{\varphi'}(q)=\delta = F_{\mu,G}^{\theta}(p|_{G}) = \mu(\varphi((x \cdot a) \cdot b)) = \mu(\varphi(x \cdot (a \cdot b))) = D_{\mu}^{\varphi'}(p * q), \end{equation*} as wanted. \end{proof} \begin{theorem}\label{two} Let $\mu \in \mathfrak{M}_{x}(\mathcal{G})$ be definable, invariantly supported and idempotent. Let $I \subset S(\mu)$ be a closed two-sided ideal. Then, $I = S(\mu)$. \end{theorem} \begin{proof} If $I$ is dense in $S(\mu)$, then $I = S(\mu)$. So we may assume that $I$ is not dense in $S(\mu)$. Therefore, there exists some $\varphi(x) \in \mathcal{L}_{x}(\mathcal{G})$ such that $\varphi(x) \cap S(\mu) \neq \emptyset$ and $\varphi(x) \cap I = \emptyset$. Let $G \prec \mathcal{G}$ be a small model containing the parameters of $\varphi$, and such that $\mu$ is definable and invariantly supported over $G$. \begin{cla} There exists some $q \in S(\mu)$ such that $D_{\mu}^{\varphi'}(q)>0$. \end{cla} \begin{proof} Assume not. Let $p,q \in S(\mu)$ be arbitrary. Let $b \models q|_G, a \models p|_{Gb}$. Then $\mu(\varphi(x \cdot b)) = D^{\varphi'}_{\mu}(q) = 0 $ by assumption, hence $\models \neg \varphi(a \cdot b)$ as $p \in S(\mu)$, so $\varphi(x) \notin p * q$. Consider now the continuous characteristic function $\chi_{\varphi}:S(\mu) \to \{0,1\}$. By Proposition \ref{semigroup}(2) and the previous paragraph, $\chi_{\varphi}$ vanishes on a dense subset $S(\mu) * S(\mu)$ of $S(\mu)$, hence $\chi_{\varphi}$ vanishes on $S(\mu)$. But this contradicts the choice of $\varphi$. \end{proof} So there exists some $q \in S(\mu)$ such that $D_{\mu}^{\varphi'}(q) > 0$. Then, since $D_{\mu}^{\varphi'}$ is continuous, it attains a maximum $\delta >0$ on some $r \in S(\mu)$. \begin{cla} For any $h \in I$, we have $D_{\mu}^{\varphi'}(h) = 0$. \end{cla} \begin{proof} Let $h \in I$. Then $D_{\mu}^{\varphi'}(h) = \mu(\varphi(x \cdot b))$, where $b \models h|_{G}$. Then \begin{equation*}\mu(\varphi(x \cdot b)) = \mu \left(\{p \in S(\mu): \varphi(x \cdot b ) \in p\} \right) = \mu \left(\{ p \in S(\mu) : \varphi(x) \in p*h\} \right). \end{equation*} As $I$ is a left ideal, we have $S(\mu) * h \subseteq I$. By assumption, $\varphi(x) \cap I = \emptyset$, and so we have $\{p \in S(\mu) : \varphi(x) \in p * h\} = \emptyset$. Therefore, $D_{\mu}^{\varphi'}(h) = 0$. \end{proof} Finally, since $I$ is a right ideal, we have that $h*r \in I$. Therefore, using Proposition \ref{prop: max attained val} and the claim, \begin{equation*} 0 < D_{\mu}^{\varphi'}(r) = D_{\mu}^{\varphi'}(h * r) = 0. \end{equation*} Therefore, we obtain a contradiction. \end{proof} \begin{corollary} Assume that $|S(\mu)| >1$, i.e.~$\mu$ is not a type. Then $S(\mu)$ contains no zero elements, i.e.~there is no element $p \in S(\mu)$ such that for any $q$ in $S(\mu)$, $p * q = q*p = p$. \end{corollary} \begin{proof} If $p$ is a zero-element, then $\{p\}$ is a closed two sided ideal. \end{proof} We make some further observations on the structure of the semigroup $S(\mu)$ under the additional assumptions on the idempotent measure $\mu$. We recall the following structural theorem of Ellis (with the roles of multiplication on the left and on the right exchanged everywhere). 
\begin{fact}\label{Ellis}\cite[Proposition 4.2]{Ellis} Assume that $(S,\cdot)$ is a compact Hausdorff semigroup which is left-continuous (i.e.~such that for any $a \in S$, the map $- \cdot a:S \to S$ is continuous). Then, there exists a minimal left ideal $I$ (which is automatically closed). We let $J(I) = \{i \in I: i^{2} = i\}$ be the set of idempotents in $I$. \begin{enumerate} \item $J(I)$ is non-empty. \item For every $p \in I$ and $i \in J(I)$, we have that $p \cdot i = p$. \item $I = \bigcup \{i \cdot I: i \in J(I)\}$, where the union is over disjoint sets, and each set $i \cdot I$ is a group with identity $i$. \item $I \cdot q$ is a minimal left ideal for all $q \in S$. \end{enumerate} \end{fact} \noindent Assume that $\mu \in \mathfrak{M}_x(\mathcal{G})$ is definable, invariantly supported and idempotent. Then $(S(\mu), *)$ is a semigroup satisfying the assumption of Fact \ref{Ellis} by Corollary \ref{qcont}. \begin{definition} We let $I_{\mu}$ denote a minimal (closed) left ideal of $(S(\mu), *)$ (it exits by Fact \ref{Ellis}). We say that $\mu$ is \emph{minimal} if $I_{\mu} = S(\mu)$. \end{definition} In particular, if $\mu$ is minimal, then $S(\mu)$ is a disjoint union of subgroups. \begin{example} For example, the measure $\tilde{\mu}$ considered in Example \ref{circle} is minimal. \end{example} \begin{proposition} \label{prop: D is const on supp} Assume that $\mu \in \mathfrak{M}_x(\mathcal{G})$ is definable, invariantly supported, idempotent and minimal (i.e. $I_\mu = S(\mu)$). Let $\varphi(x) \in \mathcal{L}_{x}(\mathcal{G})$ be any formula. Then for any $p,q \in S(\mu)$, we have that $D_{\mu}^{\varphi'}(p) = D_{\mu}^{\varphi'}(q)$. \end{proposition} \begin{proof} By Fact \ref{Ellis}, $S(\mu)= \bigcup \{i * S(\mu): i \in J(I_\mu)\}$. By continuity, $D_{\mu}^{\varphi'}$ attains a maximum at some $p \in S(\mu)$. Let now $q \in S(\mu) = I_\mu$ be arbitrary. Then $q \in i *I_\mu$ for some $i \in J(I_\mu)$. Also $i * p \in i * I_\mu$ as $I_\mu = S(\mu)$. As $i*I_\mu$ is a group by Fact \ref{Ellis}(3), there exists some $r \in i * I_\mu$ such that $r * (i * p) = q$. But then, applying Proposition \ref{prop: max attained val}, we have $$D_{\mu}^{\varphi'}(p) = D_{\mu}^{\varphi'}((r * i) * p) = D_{\mu}^{\varphi'}(r * (i * p)) = D_{\mu}^{\varphi'}(q).$$ As $q \in S(\mu)$ was arbitrary, this shows the proposition. \end{proof} \begin{proposition} Assume that $\mu \in \mathfrak{M}_x(\mathcal{G})$ is definable, invariantly supported, idempotent and minimal. Then for every $\varphi(x) \in \mathcal{L}_{x}(\mathcal{G})$, $\mu(\varphi(x)) = D_{\mu}^{\varphi'}(p)$ for any $p \in S(\mu)$. \end{proposition} \begin{proof} Assume not. By Proposition \ref{prop: D is const on supp} and replacing $\varphi(x)$ by $\neg \varphi(x)$ if necessary, we may assume that $\mu(\varphi(x)) > D_{\mu}^{\varphi'}(i)$, where $i$ is an idempotent in $S(\mu)$. Then $\mu \left(\varphi(x) \wedge \neg \varphi(x \cdot b) \right) > 0$, where $b \models i|_{G}$ and $G \prec \mathcal{G}$ is chosen as usual. Hence there exists $q \in S(\mu)$ such that $\varphi(x) \wedge \neg \varphi(x \cdot b) \in q$. Then $\varphi(x) \in q$, and $\neg \varphi(x) \in q * i$. However, $q * i = q$ by Fact \ref{Ellis}(2), and so we have $\varphi(x), \neg \varphi(x) \in q$ --- a contradiction. \end{proof} \noindent A direct translation of the previous proposition then says that minimal idempotent measures are ``generically'' right-invariant on their supports. 
\begin{corollary} \label{cor: meas on sup is invariant}Assume that $\mu \in \mathfrak{M}_x(\mathcal{G})$ is definable, invariantly supported, idempotent and minimal. Let $\varphi(x;\overline{b}) \in \mathcal{L}_{x}(\mathcal{G})$. Then, for any $a \in \mathcal{G}$ such that $tp(a/G\overline{b}) \in S(\mu|_{G\overline{b}})$, we have $\mu(\varphi(x)) = \mu(\varphi(x \cdot a))$. \end{corollary} \noindent Finally, we record a corollary for the case when the group $\mathcal{G}$ is stable and abelian. \begin{remark} $I_{\mu} = S(\mu)$ if and only if for every $p,q$ in the $S(\mu)$ there exists $r \in S(\mu)$ such that $r * q =p$. \end{remark} \noindent The following corollary is a direct consequence of Glicksberg's theorem for semi-topological semigroups \cite{Glicksberg2} (note that unless the group is stable and abelian, we only have continuity of $*$ on the left, so we were not in the context of Glicksberg's theorem in the earlier considerations). \begin{corollary}\label{cor: supp stab ab} If $\mathcal{G}$ is stable, abelian and $\mu \in \mathfrak{M}_x(\mathcal{G})$ is idempotent, then $S(\mu)$ is an abelian compact Hausdorff topological group. \end{corollary} \begin{proof} Note that $\mu$ is automatically dfs by Fact \ref{fac: props of measures relation}(3), hence the results of this section apply to it. We see that $(S(\mu), *)$ is commutative, as in Proposition \ref{prop: commute preserves idemp}. Then $*$ is both left and right-continuous. Hence $I_\mu = S(\mu)$ by Theorem \ref{two}. But this is equivalent to: for every $p,q \in S(\mu)$ there exists $r \in S(\mu)$ such that $r * q =p$. By commutativity of $*$ and Fact \ref{Ellis}, this implies that $S(\mu)$ is a group. Finally, by a classical theorem of Ellis \cite{ellis1957locally}, separate continuity of multiplication implies joint continuity for (locally) compact groups. \end{proof} \noindent Using this corollary, we can quickly describe idempotent measures in strongly minimal groups. \begin{example}\label{exa: str min} Let $\mathcal{G}$ be a strongly minimal group. Then the idempotent measures are precisely of the following form: \begin{enumerate} \item Haar measures on finite subgroups of $\mathcal{G}$; \item $\delta_{p}$, where $p$ is the unique non-algebraic type in $S_{x}(\mathcal{G})$. \end{enumerate} \end{example} \begin{proof} Assume that $\mathcal{G}$ is strongly minimal, then it is abelian, and let $\mu$ be an idempotent measure. As $\mathcal{G}$ is in particular $\omega$-stable, by Fact \ref{fac: props of measures relation}(3c) $\mu = \sum_{i \in \omega} r_i \cdot p_i$ for some $p_i \in S_x(\mathcal{G})$ and some $r_i \in \mathbb{R}_{\geq 0}$ with $\sum_{i \in \omega} r_i = 1$. By strong minimality, let $p \in S_x(\mathcal{G})$ be the unique non-algebraic type. Then clearly $S(\mu) = \operatorname{cl}(\{p_i : i \in \omega \}) \subseteq \{p_i : i \in \omega \} \cup \{p\}$, in particular $S(\mu)$ is countable. By Corollary \ref{cor: supp stab ab}, $S(\mu)$ is a compact group, and every countable compact group must be finite (using the existence of finite Haar measure). If $p \notin S(\mu)$, then $S(\mu)$ is a finite subgroup of $\mathcal{G}$, and we are in the first case by Proposition \ref{prop: finite realized support}. Assume that $p \in S(\mu)$. Note that $p$ is clearly right $\mathcal{G}$-invariant, hence $p \ast q = p$ for any $q \in S(\mu)$ by Proposition \ref{wow}. As $(S(\mu), \ast)$ is a group, this implies $S(\mu) = \{ p \}$. \end{proof} This example is generalized to arbitrary stable groups in the next section. 
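Before moving on, here is a concrete illustration of Example \ref{exa: str min} (included only as an illustration): let $K$ be a monster model of $\mathrm{ACF}_0$ and let $\mathcal{G} := (K^{\times}, \cdot)$ with the structure induced from the field, a strongly minimal group. Its finite subgroups are exactly the cyclic groups of $n$-th roots of unity, so the idempotent Keisler measures on $\mathcal{G}$ are precisely the normalized counting measures $\frac{1}{n}\sum_{\zeta^{n} = 1} \delta_{\zeta}$ for $n \geq 1$, together with $\delta_{p}$ for the unique non-algebraic type $p \in S_{x}(\mathcal{G})$.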
\section{Idempotent measures in stable groups}\label{sec: stable groups} In this section we classify idempotent measures on a stable group, demonstrating that they are precisely the invariant measures on its type-definable subgroups. Our proof relies on the results of the previous section and a variant of Hrushovski's group chunk theorem due to Newelski \cite{N3}. We will assume some familiarity with the theory of stable groups (see \cite{Poi} or \cite{WagSt} for a general reference). As before, $\mathcal{G}$ is a monster model for a theory extending a group. \subsection{Stabilizers of definable measures} \begin{definition} Given a measure $\mu \in \mathfrak{M}_x(\mathcal{G})$, we consider the following (left) \emph{stabilizer group} associated to it: $$\operatorname{Stab}(\mu) := \{g \in \mathcal{G} : g \cdot \mu = \mu \} $$ $$=\{ g \in \mathcal{G} : \mu(\varphi(x)) = \mu(\varphi(g \cdot x)) \textrm{ for all } \varphi(x) \in \mathcal{L}(\mathcal{G}) \}.$$ \end{definition} Below we use the characterization of definability of a measure from Fact \ref{fac: chars of def meas}(3), and we follow the notation there. \begin{definition} Assume that $\mu_x \in \mathfrak{M}_x(\mathcal{G})$ is definable over a small model $G \prec \mathcal{G}$.\begin{enumerate} \item Fix a formula $\varphi(x;y) \in \mathcal{L}$ and $n \in \mathbb{N}_{>0}$. We write $\varphi'(x;y,z)$ to denote the formula $\varphi(z \cdot x;y)$, and given $i \in I_n$ we write $$\Phi^{\varphi', \frac{1}{n}}_{\geq i}(y,z) := \bigvee_{j \in I_n, j \geq i} \Phi^{\varphi', \frac{1}{n}}_{j}(y,z).$$ \item Consider the following formula with parameters in $G$ (where $e$ is the identity of $\mathcal{G}$): $$\operatorname{Stab}_\mu^{\varphi, \frac{1}{n}}(z) := $$ $$\forall y \bigwedge_{i \in I_n, i \geq \frac{3}{n}} \left( \left( \Phi^{\varphi',\frac{1}{n}}_{\geq i}(y,e) \rightarrow \Phi^{\varphi', \frac{1}{n}}_{\geq (i-\frac{2}{n})} (y,z) \right) \land \left(\Phi^{\varphi',\frac{1}{n}}_{\geq i}(y,z) \rightarrow \Phi^{\varphi', \frac{1}{n}}_{\geq (i-\frac{2}{n})} (y,e) \right) \right).$$ \item We define the following partial type over $G$: $$\operatorname{Stab}_\mu(z) := \bigwedge_{\varphi(x,y) \in \mathcal{L}, n \in \mathbb{N}_{>0}} \operatorname{Stab}^{\varphi, \frac{1}{n}}_{\mu}(z).$$ \end{enumerate} \end{definition} \begin{proposition}\label{prop: stab type def} Let $\mu \in \mathfrak{M}_x(\mathcal{G})$ be definable. Then $\operatorname{Stab}(\mu) = \operatorname{Stab}_\mu(\mathcal{G})$. \end{proposition} \begin{proof} Assume $g \notin \operatorname{Stab}(\mu)$. Then there exist some $\varphi(x,y) \in \mathcal{L}$ and $b \in \mathcal{G}^y$ such that taking $r := \mu(\varphi(x,b)) = \mu(\varphi'(x;b,e))$ and $s := \mu(\varphi(g \cdot x,b)) = \mu(\varphi'(x,b,g)) $ we have $r \neq s$. Say $r > s$ (the case $r < s$ is similar). We choose $n \in \mathbb{N}_{>0}$ large enough so that $|r-s| \geq \frac{4}{n}$ (so in particular $r \geq \frac{4}{n}$). As $\{ \Phi^{\varphi',\frac{1}{n}}_i(\mathcal{G}) : i \in I_n \}$ is a covering of $\mathcal{G}^{yz}$ by Fact \ref{fac: chars of def meas}(3a), there is some $i \in I_n$ such that $\models \Phi^{\varphi',\frac{1}{n}}_i(b,e)$, so in particular $\models \Phi^{\varphi',\frac{1}{n}}_{\geq i}(b,e)$. Hence $|r - i| < \frac{1}{n}$ by Fact \ref{fac: chars of def meas}(3b) (hence $i \geq \frac{3}{n}$). 
If $\models \Phi^{\varphi', \frac{1}{n}}_{\geq (i-\frac{2}{n})} (b,g)$, then by Fact \ref{fac: chars of def meas}(3b) again we must have $\mu(\varphi'(x;b,g)) > i - \frac{2}{n} - \frac{1}{n}$, so $s > i - \frac{3}{n}$, and $r - s < \frac{4}{n}$, contradicting the choice of $n$. Hence $g \not \models \operatorname{Stab}^{\varphi, \frac{1}{n}}_\mu(z)$. Assume $g \in \operatorname{Stab}(\mu)$, and let $\varphi(x,y), b \in \mathcal{G}^y, n \geq 1$ and $i\geq \frac{3}{n}$ in $I_n$ be arbitrary. Assume that $\models \Phi^{\varphi',\frac{1}{n}}_{\geq i}(b,e)$ holds, then by Fact \ref{fac: chars of def meas}(3b) we have $\mu(\varphi'(x;b,e)) > i - \frac{1}{n}$. If $\models \neg \Phi^{\varphi', \frac{1}{n}}_{\geq (i-\frac{2}{n})} (b,g)$, as $\{ \Phi^{\varphi',\frac{1}{n}}_i(\mathcal{G}) : i \in I_n \}$ is a covering, we must have $\models \Phi^{\varphi',\frac{1}{n}}_j(b,g)$ for some $j < i - \frac{2}{n}$ in $I_n$. But then $\mu(\varphi'(x;b,g)) < j + \frac{1}{n}$ by Fact \ref{fac: chars of def meas}(3b) again. Hence $\mu(\varphi'(x;b,g)) < i - \frac{1}{n} < \mu(\varphi'(x;b,e))$, contradicting $g \in \operatorname{Stab}(\mu)$. Similarly, we get that $\models \Phi^{\varphi',\frac{1}{n}}_{\geq i}(b,g)$ implies $\models \Phi^{\varphi', \frac{1}{n}}_{\geq (i-\frac{2}{n})} (b,e)$, hence $g \models \operatorname{Stab}^{\varphi, \frac{1}{n}}_{\mu}(z)$ as wanted. \end{proof} \subsection{Stable groups and group chunks}\label{sec: stab grp review} As before, $T$ is a theory extending a group in a language $\mathcal{L}$, and we let $\mathcal{G}$ be a monster model of $T$. In this section we review some results on stable groups that will be needed for our purpose. \begin{fact}(see e.g.~\cite[Fact 1.8]{Pillay} + \cite{CS})\label{fac: top dyn in stable} Let $\mathcal{G}$ be a stable group and $G \prec \mathcal{G}$ a small model. Let $\mathcal{H}$ be a subgroup of $\mathcal{G}$ type-definable over $G$ (by a partial type $H(x)$ over $G$). Let $S_H(\mathcal{G}) := \{ p \in S_x(\mathcal{G}) : p(x) \vdash H(x) \}$ be the set of types over $\mathcal{G}$ concentrated on $\mathcal{H}$. Then the following hold. \begin{enumerate} \item For $p,q \in S_H(\mathcal{G})$, we have that $p \ast q$ (well-defined as all types are definable by stability) is equal to $\operatorname{tp}(a \cdot b / \mathcal{G})$, where $a \models p, b \models q$ in a bigger model of $T$ and $a \mathop{\mathpalette\Ind{}}_{\mathcal{G}} b$ (in the sense of forking independence). \item The semigroup $(S_H(\mathcal{G}), \ast)$ has a unique minimal closed left ideal $I$ (also the unique minimal closed right ideal) which is already a subgroup of $(S_H(\mathcal{G}),*)$. \item $I$ is precisely the generic types of $\mathcal{H}$, and with its induced topology $I$ is a compact topological group (isomorphic to $\mathcal{H}/\mathcal{H}^{0}$). \item $\mathcal{H}$ admits a unique left invariant Keisler measure $\mu$ (which is also the unique right invariant Keisler measure) with $S(\mu) = I$. Viewing $\mu$ as a regular Borel measure on $S_H(\mathcal{G})$ and restricting to the closed set $I$, $\mu \restriction_{S(\mu)}$ coincides with the Haar measure on $I$. \end{enumerate} \end{fact} In what follows, we let $\widehat{\mathcal{G}} \succ \mathcal{G}$ be a larger monster model of $T$. We will be following the notation from \cite{N3}. 
\begin{definition} \begin{enumerate} \item Throughout this section, $\Delta$ will denote a finite \emph{invariant set of formulas}, i.e.~a finite set of formulas of the form $\varphi(u \cdot x \cdot v, \bar{y}) \in \mathcal{L}$ (so a right or a left translate of an instance of $\varphi$ is again an instance of $\varphi$). \item We write $R_\Delta$ to denote Shelah's $\Delta$-rank; note that it is invariant under two-sided translation since $\Delta$ is. \item For $P \subseteq S_x(\mathcal{G})$, we let $\operatorname{cl}(P)$ denote the topological closure of $P$ in $S_x(\mathcal{G})$, and $\ast P$ denote the closure of $P$ under $\ast$. \item For $P \subseteq S_x(\mathcal{G})$, let $\operatorname{gen}(P)$ denote the set of $r \in \operatorname{cl}(\ast P)$ such that there is no $q \in \operatorname{cl}(\ast P)$ with $R_{\Delta}(r) \leq R_{\Delta}(q)$ for all $\Delta$ and $R_{\Delta}(r) < R_{\Delta}(q)$ for some $\Delta$. \item For $P \subseteq S_x(\mathcal{G})$, let $\langle P \rangle$ denote the smallest $\mathcal{G}$-type-definable subgroup of $\widehat{\mathcal{G}}$ containing $P(\widehat{\mathcal{G}})$, where $P(\widehat{\mathcal{G}}) = \{b \in \widehat{\mathcal{G}} : b \models p \textrm{ for some } p \in P \}$. \end{enumerate} \end{definition} The following two facts are stated in \cite{N3} for strong types over $\emptyset$. Our statements here for types over $\mathcal{G}$ follow by applying them in the stable theory $T_{\mathcal{G}}$ with all of the elements of $\mathcal{G}$ named by constants (for which $\widehat{\mathcal{G}}$ is still a monster model). \begin{fact}\label{fac: gen in stable} \begin{enumerate} \item \cite[Fact 2.1]{N3} If $P \subseteq S_x(\mathcal{G})$ is non-empty, then $\operatorname{gen}(P)$ is a non-empty closed subset of $S_x(\mathcal{G})$. \item \cite[Lemma 2.2]{N3} $R_{\Delta}(p \ast q) \geq R_{\Delta}(p), R_{\Delta}(q)$ for any $p,q \in S_x(\mathcal{G})$ and $\Delta$ (this follows by the symmetry of forking, invariance of $R_{\Delta}$ under two-sided translations, and the fact that forking is characterized by a drop in rank). \end{enumerate} \end{fact} \begin{fact}\label{fac: group chunk}\cite[Theorem 2.2]{N3} Assume $T$ is stable. Let $P \subseteq S_x(\mathcal{G})$ be a non-empty set of types. Then $$\langle P \rangle = \left\{ a \in \widehat{\mathcal{G}} : \operatorname{tp}(a/\mathcal{G}) \ast \operatorname{gen}(P) = \operatorname{gen}(P) \textrm{ setwise} \right\}$$ is a $\mathcal{G}$-type-definable subgroup of $\widehat{\mathcal{G}}$ and $\operatorname{gen}(P)$ is precisely the set of generic types of $\langle P \rangle$ over $\mathcal{G}$. \end{fact} \subsection{Classification of idempotent measures} We are ready to prove the main result of this section. \begin{theorem} Let $T$ be stable, let $\mathcal{G}$ be a monster model of $T$, and let $\mu \in \mathfrak{M}_x(\mathcal{G})$ be a global Keisler measure (in particular, $\mu$ is dfs by Fact \ref{fac: props of measures relation}(3a)). Then the following are equivalent: \begin{enumerate} \item $\mu$ is idempotent; \item $\mu$ is the unique right-invariant (and also the unique left-invariant) measure on the type-definable subgroup $\operatorname{Stab}(\mu)$ of $\mathcal{G}$. \end{enumerate} \end{theorem} \begin{proof} (2) implies (1) by Proposition \ref{wow}, and we show that (1) implies (2). Let $\mu \in \mathfrak{M}_x(\mathcal{G})$ be an idempotent measure. By Fact \ref{fac: props of measures relation}(3a), $\mu$ is definable over some small model $G \prec \mathcal{G}$; in particular, $\operatorname{Stab}(\mu) = \operatorname{Stab}_{\mu}(\mathcal{G})$ is type-definable over $G$ by Proposition \ref{prop: stab type def}.
By Corollary \ref{qcont}, $S(\mu)$ is a closed subset of $S_x(\mathcal{G})$ and is closed under $\ast$, hence $\operatorname{cl}(\ast S(\mu)) = S(\mu)$ and $\operatorname{gen}(S(\mu)) \subseteq S(\mu)$. We claim that $\operatorname{gen}(S(\mu))$ is a two-sided ideal in $(S(\mu), \ast)$. Indeed, let $r \in \operatorname{gen}(S(\mu))$ and $q \in S(\mu)$ be arbitrary. If $r \ast q$ is not in $\operatorname{gen}(S(\mu))$, then there exists some $p \in S(\mu)$ with $R_{\Delta}(p) \geq R_{\Delta}(r \ast q) \geq R_{\Delta}(r)$ for all $\Delta$ and some inequality strict (by Fact \ref{fac: gen in stable}(2)), contradicting $r \in \operatorname{gen}(S(\mu))$. But also if $q \ast r$ is not in $\operatorname{gen}(S(\mu))$, then there exists some $p \in S(\mu)$ with $R_{\Delta}(p) \geq R_{\Delta} (q \ast r) \geq R_{\Delta}(r)$ and some inequality strict, again by Fact \ref{fac: gen in stable}(2), contradicting $r \in \operatorname{gen}(S(\mu))$. As $\operatorname{gen}(S(\mu))$ is moreover closed (by Fact \ref{fac: gen in stable}(1)), we get $\operatorname{gen}(S(\mu)) = S(\mu)$ by Theorem \ref{two}. We now fix a larger monster model $\widehat{\mathcal{G}} \succ \mathcal{G}$ as above (and view $\mathcal{G}$ as a small elementary submodel of it). Then, by Fact \ref{fac: group chunk}, we have that $$\widehat{\mathcal{H}} := \langle S(\mu)\rangle = \left\{ a \in \widehat{\mathcal{G}} : \operatorname{tp}(a/\mathcal{G}) \ast S(\mu) = S(\mu) \textrm{ setwise} \right\}$$ is a $\mathcal{G}$-type-definable subgroup of $\widehat{\mathcal{G}}$ and $S(\mu) = \operatorname{gen}(S(\mu))$ is precisely the set of generic types of $\widehat{\mathcal{H}}$ over $\mathcal{G}$. Note that the definition of $\widehat{\mathcal{H}}$ a priori uses all of the parameters in $\mathcal{G}$. We will show that it can be defined over a subset of $\mathcal{G}$ that is small with respect to $\mathcal{G}$, and that it is equal to the stabilizer of $\mu$. Let $H(x)$ be a partial type over $\mathcal{G}$ defining $\widehat{\mathcal{H}}$, i.e.~$H(\widehat{\mathcal{G}}) = \widehat{\mathcal{H}}$. Given $p \in S_x(\mathcal{G})$, we let $\widehat{p} \in S_x(\widehat{\mathcal{G}})$ be its unique $\mathcal{G}$-definable extension, and given $\nu \in \mathfrak{M}_x(\mathcal{G})$ we let $\widehat{\nu} \in \mathfrak{M}_x(\widehat{\mathcal{G}})$ be the unique $\mathcal{G}$-definable extension of $\nu$ (by Fact \ref{fac: unique ext of def meas}). We have the following sequence of observations. \begin{enumerate} \item $p \ast q = r \iff \widehat{p} \ast \widehat{q} = \widehat{r}$ for any $p,q,r \in S_x(\mathcal{G})$. \item The same holds for measures, in particular $\widehat{\mu}$ is an idempotent of $\left(\mathfrak{M}_x(\widehat{\mathcal{G}}), \ast \right)$. Indeed, assume $\mu, \nu \in \mathfrak{M}_x(\mathcal{G})$ are definable over some small $G' \prec \mathcal{G}$. Then $\widehat{\mu} * \widehat{\nu}$ is definable over $G'$ (by Proposition \ref{prop: pros pres under conv}) and extends $\mu * \nu$, hence $\widehat{\mu} * \widehat{\nu} = \widehat{\mu * \nu}$ by uniqueness of definable extensions (Fact \ref{fac: unique ext of def meas}). \item $\operatorname{Stab}_\mu(\widehat{\mathcal{G}}) = \operatorname{Stab}(\widehat{\mu})$ (by Proposition \ref{prop: stab type def} and definability of the measure). \item $S(\widehat{\mu}) = \{\widehat{p} : p \in S(\mu) \}$. \noindent Indeed, suppose $p \in S(\mu)$ but $\widehat{p} \notin S(\widehat{\mu})$; then there is some $\varphi(x,b) \in \widehat{p}$ such that $\widehat{\mu}(\varphi(x,b)) = 0$.
In particular, $\models d_p \varphi(b)$, where $d_p \varphi(y) \in \mathcal{L}_y(c)$ for some finite tuple $c \subseteq \mathcal{G}$ is a $\varphi$-definition for $p$. By $|G|^+$-saturation of $\mathcal{G}$, we can find some $b' \in \mathcal{G}^{y}$ with $b' \equiv_{Gc} b$. By definability (and hence invariance) of $\widehat{\mu}$ over $G$, we have $\models d_p \varphi(b')$ and $ \mu(\varphi(x,b')) = \widehat{\mu}(\varphi(x,b')) = 0$. So $\varphi(x,b') \in p$, contradicting $p \in S(\mu)$. \noindent Conversely, suppose $r \in S(\widehat{\mu})$. As $\widehat{\mu}$ is definable over $G$, in particular it is non-forking over $G$, hence every type in its support is non-forking over $G$. In particular $r$ is non-forking over $G$, hence definable over $G$ by stability, so $r = \widehat{(r | _{\mathcal{G}})}$ and $r | _{\mathcal{G}}$ is clearly in $S(\mu)$. \item The generics of $H(x)$ over $\widehat{\mathcal{G}}$ are precisely $\{\widehat{p} : p \mbox{ is a generic of } H \mbox{ over } \mathcal{G}\}$. By stability, every generic $r$ of $H(x)$ over $\widehat{\mathcal{G}}$ does not fork over $\mathcal{G}$, so it is definable over $\mathcal{G}$ and $r|_{\mathcal{G}}$ is a generic of $H(x)$ over $\mathcal{G}$, hence $r = \widehat{(r|_{\mathcal{G}})}$. Conversely, a definable (non-forking) extension of a generic type is generic. \item Hence, by (4) and (5), $S(\widehat{\mu})$ is precisely the set of the generics of $H(x)$ over $\widehat{\mathcal{G}}$, in particular $(S(\widehat{\mu}), \ast)$ is a topological group by Fact \ref{fac: top dyn in stable}(3). \item Viewed as a regular Borel measure on $S_x(\widehat{\mathcal{G}})$, $\widehat{\mu}$ restricted to $(S(\widehat{\mu}), \ast)$ is right-$\ast$-invariant. \noindent Indeed, $\left( S(\widehat{\mu}), * \right)$ is a group by (6), so for any $p \in S(\widehat{\mu})$, $p^{-1} \in S(\widehat{\mu})$ is well-defined. By regularity of the measure, it suffices to check $*$-invariance for clopen subsets. Let $\varphi(x,\bar{b}) \in \mathcal{L}_x(\widehat{\mathcal{G}})$. Then for any $p \in S(\widehat{\mu})$ we have $$\widehat{\mu}( \langle \varphi(x,\bar{b}) \rangle * p) = \widehat{\mu}\left(\{ q * p : \varphi(x,\bar{b}) \in q \} \right)$$ $$ = \widehat{\mu} \left( \{q: \varphi(x, \bar{b}) \in q * p^{-1} \} \right) = \widehat{\mu}(\varphi(x \cdot c, \bar{b})),$$ where $c \models p^{-1}|_{G \bar{b}}$. And by Corollary \ref{cor: meas on sup is invariant}, $\widehat{\mu}(\varphi(x \cdot c,\bar{b})) = \widehat{\mu}(\varphi(x,\bar{b}))$. \item By Fact \ref{fac: top dyn in stable}(4) for $\widehat{\mathcal{H}}$, there is a unique right-$\widehat{\mathcal{H}}$-invariant Keisler measure $\nu \in \mathfrak{M}_x(\widehat{\mathcal{G}})$ such that $\nu(H(x)) = 1$, $S(\nu)$ is the set of generics of $H(x)$ over $\widehat{\mathcal{G}}$, and $\nu \restriction_{S(\nu)}$ (viewed as a Borel measure) is the Haar measure on the compact topological group $(S(\nu), \ast)$. \item Thus $S(\widehat{\mu}) = S(\nu)$ using (6), and as both $\widehat{\mu}$ and $\nu$ are right $*$-invariant when restricted to $S(\nu)$, by uniqueness of the Haar measure we have $\widehat{\mu} \restriction_{S(\widehat{\mu})} = \nu \restriction_{S(\nu)}$, hence $\widehat{\mu} = \nu$ (as for any formula $\varphi(x) \in \mathcal{L}_x(\widehat{\mathcal{G}})$ we have $\widehat{\mu}(\varphi(x)) = \widehat{\mu}(\varphi(x) \cap S(\widehat{\mu}))$ and the same for $\nu$, by Proposition \ref{prop: supp 1}(2)).
\item By (8) we have $\widehat{\mathcal{H}} \subseteq \operatorname{Stab}(\nu) $ (the stabilizer in $\widehat{\mathcal{G}}$), and in fact $\widehat{\mathcal{H}} = \operatorname{Stab}(\nu)$ (as any two cosets of $\widehat{\mathcal{H}}$ in $\operatorname{Stab}(\nu)$ would give two disjoint sets of $\nu$-measure $1$). Using (8) (and (3)) it follows that $S(\widehat\mu) \subseteq \widehat{\mathcal{H}} = \operatorname{Stab}(\nu) = \operatorname{Stab}(\widehat{\mu}) = \operatorname{Stab}_{\mu}(\widehat{\mathcal{G}})$, so $\widehat \mu$ is a left- (and also right, by stability) invariant measure on the $G$-type-definable group $\operatorname{Stab}_\mu(\widehat{\mathcal{G}})$. Hence $\mu$ is a right-invariant measure on the $G$-type-definable group $\operatorname{Stab}_\mu(\mathcal{G})$. \end{enumerate} \end{proof} \begin{remark} It was pointed out by the referee that type-definability of $\widehat{\mathcal{H}} $ over a \emph{small subset of $\mathcal{G}$} is immediate from Hrushovski's theorem that in a stable theory $T$, any type-definable group is given by an intersection of at most $|T|$-many definable groups (see e.g.~\cite[Lemma 6.18]{pillay1996geometric}). However, showing that $\widehat{\mathcal{H}}$ is the stabilizer appears to require some additional argument along the lines presented above. \end{remark} \begin{remark} Some of these results can be generalized for idempotent measures in NIP groups, and we hope to address it in future work. \end{remark} \section{Describing the convolution semigroup on finitely satisfiable measures as an Ellis semigroup}\label{sec: Ellis calc} \subsection{Dynamics} We begin this section by recalling the construction of the Ellis semigroup. Let $X$ be a compact Hausdorff space and $S$ be a semigroup acting on $X$ by homeomorphisms. In particular, there is a map $\pi: S \times X \to X$ such that for each $s \in S$, the map $\pi_{s}:X \to X, x \mapsto \pi(s,x)$ is a homeomorphism. Let $X^X$ be the space of functions from $X$ to $X$ equipped with the product topology. Then, $\{\pi_{s}: s \in S\}$ is naturally a subset of $X^X$. Finally, the \emph{Ellis semigroup of the action $(X,S, \pi)$} is $\left(\operatorname{cl} \left(\left\{\pi_{s} : s \in S\right\} \right), \circ \right)$, where we take the closure of $\{ \pi_{s}: s \in S\}$ in $X^X$. When the action map $\pi$ is clear, we will denote this semigroup as $E(X,S)$. Let now $T$ be a first order theory expanding a group, $\mathcal{G}$ a saturated model of $T$, and $G \prec \mathcal{G}$ a small elementary substructure. Recall that $S_x(\mathcal{G},G)$ denotes the set of global types finitely satisfiable in $G$. There is a natural action of $G$ on $S_{x}(\mathcal{G},G)$ via $g \cdot p = \{\varphi(x): \varphi(g^{-1} \cdot x) \in p\}$. \begin{fact}[Newelski \cite{N1}]\label{fac: New isom} There exists a semigroup isomorphism (which is also a homeomorphism of compact spaces) $E(S_{x}(\mathcal{G},G),G) \cong (S_{x}(\mathcal{G},G),*)$. \end{fact} In this section, we provide an analogous description for the convolution semigroup on finitely satisfiable measures in NIP theories. Recall that $\mathfrak{M}_x(\mathcal{G},G) \subseteq \mathfrak{M}_x(\mathcal{G})$ is the collection of global measures finitely satisfiable in $G$, and this space of measures is naturally identified with a closed convex subset of a real topological vector space (of all bounded real-valued measures on $S_x(\mathcal{G})$). 
We identify $G$ with the set $\{\delta_g : g \in G \} \subseteq \mathfrak{M}_x(\mathcal{G},G)$, and let $\operatorname{conv}(G)$ denote the convex hull of $G$. There is a natural semigroup action of $\operatorname{conv}(G)$ on $\mathfrak{M}_{x}(\mathcal{G},G)$: for any $\sum_{i=1}^{n} r_i \delta_{g_i} \in \operatorname{conv}(G)$ (with $g_i \in G$ and $r_i \in \mathbb{R}_{\geq 0}, \sum_{i=1}^n r_i = 1$), $\mu \in \mathfrak{M}_x(\mathcal{G},G)$ and $\varphi(x) \in \mathcal{L}_{x}(\mathcal{G})$, we define $\left( \sum_{i=1}^{n} r_i \delta_{g_i} \right) \cdot \mu \in \mathfrak{M}_x(\mathcal{G},G)$ by \begin{equation*} \left( \left(\sum_{i=1}^{n} r_i \delta_{g_i} \right) \cdot \mu \right) (\varphi(x)) = \sum_{i=1}^{n} r_i \mu( \varphi( g_i \cdot x)). \end{equation*} For the rest of this section, we will denote elements of $\operatorname{conv}(G)$ as $k$, the semigroup action described above as $\pi : \operatorname{conv}(G) \times \mathfrak{M}_x(\mathcal{G},G) \to \mathfrak{M}_x(\mathcal{G},G)$, and the map $\mu \mapsto \pi(k,\mu)$ as $\pi_{k}$. It is not difficult to see that for every $k \in \operatorname{conv}(G)$, the map $\pi_{k}$ is continuous. Therefore, we can consider the Ellis semigroup of this semigroup action, namely $E \left(\mathfrak{M}_{x}(\mathcal{G},G),\operatorname{conv}(G) \right)$. We will show that if $T$ is NIP, then this Ellis semigroup is isomorphic to the convolution semigroup of global measures finitely satisfiable in $G$, i.e.~$(\mathfrak{M}_{x}(\mathcal{G},G),*)$ (Theorem \ref{thm: Ellis grp iso}). We demonstrate that that these two semigroups are isomorphic by considering the map $\rho:\mathfrak{M}_{x}(\mathcal{G},G) \to \mathfrak{M}_{x}(\mathcal{G},G)^{\mathfrak{M}_{x}(\mathcal{G},G)}$ defined by $\rho(\nu) = \rho_{\nu} := \nu * -$, and proving that the image of $\rho$ is precisely the Ellis semigroup. Before continuing, we check that $\rho$ is well-defined, and that $\mathfrak{M}_{x}(\mathcal{G},G)$ is a semigroup. \begin{proposition}\label{NIP:measure} Let $T$ be NIP and assume that $\mu \in \mathfrak{M}_{x}(\mathcal{G},G)$. Then: \begin{enumerate} \item $\mu$ is Borel-definable over $G$; \item for any $\nu \in \mathfrak{M}_{x}(\mathcal{G},G)$, $\mu * \nu \in \mathfrak{M}_{x}(\mathcal{G},G)$; \item the operation $*$ on $\mathfrak{M}_{x}(\mathcal{G},G)$ is associative, hence $(\mathfrak{M}_{x}(\mathcal{G},G),*)$ is a semigroup. \end{enumerate} \end{proposition} \begin{proof} $(1)$ follows from Fact \ref{fac: props of measures relation}(2a) while $(2)$ follows from Proposition \ref{prop: pros pres under conv}(2). $(3)$ We show that the operation $*$ is associative on $\mathfrak{M}_{x}^{^{\text{-}1}a}(\mathcal{\mathcal{G}},G)$ (note that it is closed under $*$, as under the NIP assumption the $\otimes$-product of two invariant measures is invariant, see e.g. \cite[Section 7.4]{Guide}). The proof is similar, but not identical, to the proof that $\otimes$ is associative on invariant measures in NIP theories \cite{GanCon2}. Fix a formula $\varphi(x) \in \mathcal{L}_{x}(\mathcal{U})$. Let $\theta(x,y;z) := \varphi(x \cdot y \cdot z)$ and $\rho(x;y,z) := \varphi(x \cdot y \cdot z)$ (where $y,z$ are variables of the same sort as $x$). Assume that $\mu, \nu, \lambda \in \mathfrak{M}_{x}^{^{\text{-}1}a}(\mathcal{\mathcal{G}},G)$ --- all Borel-definable over $G$ by Fact \ref{fac: props of measures relation}(2a). Without loss of generality, we may assume that $G$ contains all of the parameters from $\varphi$. 
Let $\widehat{\lambda}$ be a smooth extension of $\lambda|_{G}$ such that $\widehat{\lambda}$ is smooth over some small $H_1 \succeq G$. Let $\widehat{\nu}$ be a smooth extension of $\nu|_{H_1}$ such that $\widehat{\nu}$ is smooth over some small model $H_2$ such that $G \prec H_1 \prec H_2$. We are following the notation of Section \ref{sec: def of conv}, in particular $\varphi'(x;z) = \varphi(x \cdot z)$. We have: \begin{enumerate}[(a)] \item $F_{(\mu * \nu)_x,G}^{\varphi'(x;z)}(r) = F_{\mu_x \otimes \nu_y,G}^{\theta(x,y;z)}(r)$ for all $r \in S_z(G)$ --- as for any $b \in \mathcal{U}^{z}$ realizing $r$ we have $F_{(\mu * \nu)_x,G}^{\varphi'}(r) = (\mu * \nu)_x(\varphi(x \cdot b)) = \mu_x \otimes \nu_y (\varphi(x \cdot y \cdot b)) = F_{\mu \otimes \nu}^{\theta}(r)$. \item $F_{\widehat{\nu}_y\otimes\widehat{\lambda}_z,H_2}^{\rho^{*}(y,z;x)}(p)=F_{(\widehat{\nu}*\widehat{\lambda})_z, H_2}^{(\varphi')^{*}(z;x)}(p)$ for all $p \in S_x(H_2)$ --- similar to (a); \item $(\widehat{\nu} \otimes \widehat{\lambda})|_{G}=(\nu \otimes \lambda)|_{G}$, hence $(\widehat{\nu}*\widehat{\lambda})|_{G}=(\nu*\lambda)|_{G}$ --- by the Claim in the proof of \cite[Theorem 2.2]{GanCon2}. \end{enumerate} Note that $\mu$ is invariant over both $G$ and $H_2$, hence for any measure $\gamma \in \mathfrak{M}_{yz}(\mathcal{\mathcal{G}})$ and formula $\psi(x;y,z) \in \mathcal{L}(G)$ we have (as in Proposition \ref{prop: int over diff mod}): \begin{enumerate} \item[(d)] $ \int_{S_{yz}(G)}F_{\mu,G}^{\psi}d(\gamma|_{G}) = \int_{S_{yz}(H_2)}F_{\mu,H_2}^{\psi}d(\gamma|_{H_2})$. \end{enumerate} Finally, we also have: \begin{enumerate} \item[(e)] $\widehat{\nu} \otimes \widehat{\lambda}$ is smooth over $H_2$ (by Fact \ref{fac: meas commute}(4)); \item[(f)] $\widehat{\nu} * \widehat{\lambda}$ is dfs over $H_2$ (by Proposition \ref{prop: pros pres under conv}(1) and (2)); \end{enumerate} Using these observations we calculate: \begin{gather*} [(\mu*\nu)*\lambda](\varphi(x))=[(\mu*\nu)_x\otimes\lambda_z](\varphi(x\cdot z))=\int_{S_{z}(G)}F_{\mu*\nu,G}^{\varphi'(x;z)}d(\lambda_z|_{G})\\ \overset{\textrm{(a)}}{=}\int_{S_{z}(G)}F_{\mu_x\otimes\nu_y,G}^{\theta(x,y;z)}d(\lambda_z|_{G})\\=[(\mu_x\otimes\nu_y)\otimes\lambda_z](\theta(x,y;z))=[(\mu_x \otimes \nu_y)\otimes\lambda_z](\varphi(x\cdot y\cdot z))\\ =[\mu_x\otimes(\nu_y\otimes\lambda_z)](\varphi(x\cdot y\cdot z))\textrm{ (by Fact \ref{fac: meas commute}(5))}\\ =\int_{S_{yz}(G)}F_{\mu_x,G}^{\rho(x;y,z)}d \left((\nu_y\otimes\lambda_z)|_{G} \right) \overset{\textrm{(c)}}{=} \int_{S_{yz}(G)}F_{\mu_x,G}^{\rho(x;y,z)}d \left((\widehat{\nu}_y\otimes \widehat{\lambda}_z)|_{G} \right)\\ =\int_{S_{yz}(H_{2})}F_{\mu_x,H_2}^{\rho(x;y,z)}d \left((\widehat{\nu}_y\otimes\widehat{\lambda}_z)|_{H_2} \right) \\ \textrm{(by (d) with }\gamma_{y,z} := \widehat{\nu}_y \otimes \widehat{\lambda}_z \textrm{ and } \psi(x;y,z) :=\rho(x;y,z) \textrm{)} \\ =\int_{S_{x}(H_{2})}F_{\widehat{\nu}_y\otimes\widehat{\lambda}_z, H_2}^{\rho^{*}(y,z;x)}d(\mu_x|_{H_2}) \textrm{ (by (e) and Fact \ref{fac: meas commute}(1))}\\ \overset{\textrm{(b)}}{=}\int_{S_{x}(H_{2})}F_{(\widehat{\nu}*\widehat{\lambda})_z,H_2}^{(\varphi')^{*}(z;x)}d(\mu_x|_{H_2})\\ =\int_{S_{z}(H_{2})}F_{\mu_x,H_2}^{\varphi'(x;z)}d \left((\widehat{\nu}*\widehat{\lambda})_z|_{H_2} \right) \textrm{(by (f) and Fact \ref{fac: meas commute}(3))}\\ =\int_{S_{z}(G)}F_{\mu_x,G}^{\varphi'(x;z)}d \left((\widehat{\nu}*\widehat{\lambda})_z|_{G} \right) \textrm{ (by (d) with } \gamma_z:=(\widehat{\nu} * \widehat{\lambda} )_z\textrm{ and } \psi(x;z):=\varphi'(x;z) 
\textrm{)} \\ \overset{\textrm{(c)}}{=}\int_{S_{z}(G)}F_{\mu_x, G}^{\varphi'(x;z)}d \left((\nu*\lambda)_z|_{G} \right) \\ =\mu*(\nu*\lambda)(\varphi(x)). \end{gather*} \end{proof} Hence the map $\rho: \mathfrak{M}_{x}(\mathcal{G},G) \to \mathfrak{M}_{x}(\mathcal{G},G)^{\mathfrak{M}_{x}(\mathcal{G},G)}$ is well-defined. In the next subsection we show that it is also left-continuous. \subsection{Left-continuity of convolution}\label{sec: left-cont of conv} We begin with a general continuity result in arbitrary NIP theories. Let $T$ be an NIP theory, $\mathcal{U}$ a monster model of $T$, and $M$ a small elementary substructure of $\mathcal{U}$. \begin{proposition}[T NIP]\label{continuity} Let $M \prec \mathcal{U}$ and let $\mathfrak{M}_{x}^{^{\text{-}1}a}(\mathcal{U},M)$ be the closed set of global $M$-invariant measures (Definition \ref{def: props of measures}). If $\nu \in \mathfrak{M}_{y}(\mathcal{U})$ and $\varphi(x;y)$ is any partitioned $\mathcal{L}_{xy}(\mathcal{U})$ formula, then the map $-\otimes\nu(\varphi(x;y)):\mathfrak{M}^{^{\text{-}1}a}_{x}(\mathcal{U},M)\to[0,1]$ is continuous. \end{proposition} \begin{proof} Choose $N_{0} \prec \mathcal{U}$ small and such that $M \preceq N_{0}$, and $N_{0}$ contains the parameters of $\varphi$. Then, choose a small $N \prec \mathcal{U}$ such that $N_{0} \preceq N$ and there exists some $\widehat{\nu}\in\mathfrak{M}_{y}(\mathcal{U})$ such that $\widehat{\nu}|_{N_{0}}=\nu|_{N_{0}}$ and $\widehat{\nu}$ is smooth over $N$ (by Fact \ref{fac: NIP ext to smooth}). Fix $\varepsilon \in \mathbb{R}_{>0}$, by Fact \ref{fac: props of measures relation}(1a) let $\overline{b} = (b_1, \ldots, b_n)$ be some $(\varphi^{*},\varepsilon)$-approximation for $\widehat{\nu}$ over $N$ (where $\varphi^*(y;x) = \varphi(x;y)$ and $\overline{b}$ is some element in $(N^{y})^{<\omega}$, see Definition \ref{def: props of measures}(7)). Note that every $\mu\in\mathfrak{M}^{^{\text{-}1}a}_{x}(\mathcal{U},M)$ is invariant over both $N_{0}$ and $N$. Then we have (the last equality holds as in the proof of Proposition \ref{prop: int over diff mod}): \begin{equation*} \mu\otimes\nu(\varphi(x;y))=\int_{S_{y}(N_{0})}F_{\mu,N_{0}}^{\varphi}d(\nu|_{N_{0}})=\int_{S_{y}(N_{0})}F_{\mu,N_{0}}^{\varphi}d(\widehat{\nu}|_{N_{0}})=\int_{S_{y}(N)}F_{\mu,N}^{\varphi}d(\widehat{\nu}|_{N}). \end{equation*} As $\widehat{\nu}$ is smooth over $N$, by Fact \ref{fac: meas commute}(1) we have \begin{equation*} \int_{S_{y}(N)}F_{\mu,N}^{\varphi}d(\widehat{\nu}|_{N}) = \int_{S_{x}(N)}F_{\widehat{\nu},N}^{\varphi^{*}}d(\mu|_{N}). \end{equation*} Note that $F^{\varphi^*}_{\operatorname{Av}_ {\overline{b}}, N} (p) = \frac{1}{n}\sum_{i=1}^{n} \chi_{\{ r \in S_x(N) : \varphi(x,b_i) \in r \}}(p)$ for every $p \in S_x(N)$, where $\chi$ is the characteristic function. Now, using that $\bar{b} \subseteq N$ is a $(\varphi^{*},\varepsilon)$-approximation for $\widehat{\nu}$, we have the following (note that we identify $\varphi(x,b_i)$ with the set of types satisfying it over $N$ in the first step, and over $\mathcal{U}$ in the second step). 
$$ \int_{S_{x}(N)}F_{\widehat{\nu},N}^{\varphi^{*}}d(\mu|_{N}) \approx_{\varepsilon}\int_{S_{x}(N)}F_{\operatorname{Av}_{\overline{b}}, N}^{\varphi^{*}}d(\mu|_{N})$$ $$ = \int_{S_x(N)} \left( \frac{1}{n} \sum_{i=1}^{n} \chi_{\varphi(x,b_i)} \right) d(\mu|_N) $$ $$ = \frac{1}{n}\sum_{i=1}^{n} \left( \int_{S_x(N)} \chi_{\varphi(x,b_i)} d(\mu|_N) \right) = \frac{1}{n}\sum_{i=1}^{n} \mu|_N(\varphi(x,b_i))$$ $$ =\frac{1}{n}\sum_{i=1}^{n}\int_{S_{x}(\mathcal{U})}\chi_{\varphi(x,b_i)} d\mu. $$ Clearly, each map $\int \chi_{\varphi(x,b_{i})}:\mathfrak{M}_{x}(\mathcal{U})\to[0,1]$ is continuous by the definition of the topology on the space of measures. Therefore, each map $\int\chi_{\varphi(x,b_{i})}:\mathfrak{M}^{^{\text{-}1}a}_{x}(\mathcal{U},M)\to[0,1]$ is continuous, hence their sum is continuous as well. Since the choice of $\overline{b}$ is independent of the choice of $\mu$, we have \begin{equation*} \sup_{\mu\in\mathfrak{M}^{^{\text{-}1}a}_{x}(\mathcal{U},M)} \left \lvert \mu\otimes\nu(\varphi(x;y))-\frac{1}{n}\sum_{i=1}^{n}\int_{S_{x}(\mathcal{U})}\chi_{\varphi(x,b_{i})}d\mu \right \rvert<\varepsilon. \end{equation*} Therefore, the map $-\otimes\nu(\varphi(x;y))$ is a uniform limit of continuous functions and hence continuous. \end{proof} Now, we apply this to our group theoretic context. Let again $T$ be an NIP theory expanding a group, $\mathcal{G}$ a monster model of $T$, and $G \prec \mathcal{G}$ a small model. \begin{proposition}\label{prop: conv is left cont} Let $\nu \in \mathfrak{M}_{x}(\mathcal{G},G)$. Then the map $-*\nu:\mathfrak{M}_{x}(\mathcal{G},G) \to \mathfrak{M}_{x}(\mathcal{G},G)$ is continuous. \end{proposition} \begin{proof} Let $U$ be a basic open subset of $\mathfrak{M}_{x}(\mathcal{G},G)$. That is, there exist formulas $\varphi_{1}(x),...,\varphi_{n}(x)$ in $\mathcal{L}_{x}(\mathcal{G})$ and real numbers $r_1,...,r_n,s_1,...,s_n \in [0,1]$ such that \begin{equation*} U = \bigcap_{i=1}^{n} \{\mu \in \mathfrak{M}_{x}(\mathcal{G},G): r_i < \mu(\varphi_{i}(x)) < s_i\}. \end{equation*} Then we have \begin{equation*} \Big(-*\nu\Big)^{-1}(U) = \bigcap_{i=1}^{n}\{\mu \in \mathfrak{M}_{x}(\mathcal{G},G): r_i < \mu * \nu(\varphi_{i}(x)) < s_i\} \end{equation*} \begin{equation*} = \bigcap_{i=1}^{n}\{\mu \in \mathfrak{M}_{x}(\mathcal{G},G): r_i < \mu_{x} \otimes \nu_{y}(\varphi_{i}(x \cdot y)) < s_{i}\} \end{equation*} \begin{equation*} = \bigcap_{i=1}^{n}\big(- \otimes \nu_{y} \left(\varphi_{i}(x\cdot y)\right) \big)^{-1} \big( (r_i, s_i) \big). \end{equation*} Therefore, by continuity of the map $-\otimes\nu(\varphi(x \cdot y))$ (Proposition \ref{continuity}), the preimage of $U$ under $-* \nu$ is a finite intersection of open sets, and therefore open. \end{proof} \subsection{The isomorphism} In this subsection we show that the map $\rho:\mathfrak{M}_{x}(\mathcal{G},G) \to E(\mathfrak{M}_{x}(\mathcal{G},G), \operatorname{conv}(G))$ given by $\rho(\nu) = \rho_{\nu} = \nu * -$ is an isomorphism. We begin by recalling the topology on $\mathfrak{M}_{x}(\mathcal{G},G)^{\mathfrak{M}_{x}(\mathcal{G},G)}$.
\begin{remark}\label{rem: openset} The topology on $\mathfrak{M}_{x}(\mathcal{G},G)^{\mathfrak{M}_{x}(\mathcal{G},G)}$ is generated by the basic open sets of the form \begin{equation*} U = \bigcap_{i=1}^n \{f: \mathfrak{M}_{x}(\mathcal{G},G) \to \mathfrak{M}_{x}(\mathcal{G},G) \mid r_i < f(\nu_i)(\psi_i(x)) < s_i\}, \end{equation*} with $n \in \mathbb{N}$, $r_i, s_i \in \mathbb{R}$, $\psi_i(x) \in \mathcal{L}_{x}(\mathcal{G})$, and $\nu_i \in \mathfrak{M}_{x}(\mathcal{G},G)$ (with possible repetitions of $\nu_i$'s and $\psi_i$'s). \end{remark} \begin{lemma}\label{lem: rho inj} The map $\rho$ is injective. \end{lemma} \begin{proof} Note that for every $\nu \in \mathfrak{M}_{x}(\mathcal{G},G)$, $\rho_{\nu}(\delta_{e}) = \nu$, where $e$ is the identity of $\mathcal{G}$. \end{proof} \begin{lemma}\label{lem: image of rho} If $\mu \in \mathfrak{M}_{x}(\mathcal{G},G)$, then $\rho_{\mu} \in \operatorname{cl} \big(\{\pi_{k}: k \in \operatorname{conv}(G)\} \big)$. So $$\rho\left(\mathfrak{M}_{x}(\mathcal{G},G) \right) \subseteq E(\mathfrak{M}_{x}(\mathcal{G},G), \operatorname{conv}(G)).$$ \end{lemma} \begin{proof} Let $U$ be an open subset of $\mathfrak{M}_{x}(\mathcal{G},G)^{\mathfrak{M}_{x}(\mathcal{G},G)}$ containing $\rho_{\mu}$. It is a union of basic open sets (see Remark \ref{rem: openset}), hence we can choose some $n \in \mathbb{N}$, a sufficiently small $\varepsilon > 0$ and some $\psi_{1}(x),...,\psi_{n}(x) \in \mathcal{L}_{x}(\mathcal{U})$ and $\nu_1,...,\nu_n \in \mathfrak{M}_{x}(\mathcal{G},G)$ such that \begin{equation*} B_{\varepsilon} := \bigcap_{i=1}^n \{f : |f(\nu_i)(\psi_i(x)) - \rho_{\mu}(\nu_{i})(\psi_i(x))| < \varepsilon\} \subseteq U. \end{equation*} Let $H_0 \prec \mathcal{G}$ be a small model containing $G$ and the parameters of $\psi_1, \ldots, \psi_n$. By Fact \ref{fac: unique ext of def meas}, we can choose a small model $H \prec \mathcal{G}$ and measures $\widehat{\nu}_i \in \mathfrak{M}_x(\mathcal{G})$ such that: \begin{itemize} \item $G \preceq H_0 \preceq H \prec\mathcal{G}$; \item $\widehat{\nu}_i|_{H_0} = \nu_i|_{H_0}$, for all $1 \leq i \leq n$; \item $\widehat{\nu}_i$ is smooth over $H$, for all $1 \leq i \leq n$. \end{itemize} Take some $0 < \varepsilon_{0} < \frac{\varepsilon}{3}$. Recall from Section \ref{sec: def of conv} that $\psi'(x;y) = \psi(x \cdot y) \in \mathcal{L}_{xy}(H_0)$. By Fact \ref{fac: props of measures relation}(1a), let $\overline{b}_{i} = (b_{i,j} : 1 \leq j \leq m_i) \in H^{<\omega}$ be a $(\left(\psi'_i \right)^*,\varepsilon_{0})$-approximation for $\widehat{\nu}_{i}$. Then, using that $\mu$ is invariant over both $H_0$ and $H$ and $\widehat{\nu}_i$ is smooth over $H$ as in Proposition \ref{continuity}, for every $1 \leq i \leq n$ we have: \begin{equation*} \rho_{\mu}(\nu_{i})(\psi_{i}(x))=\mu*\nu_{i}(\psi_{i}(x))=\mu\otimes\nu_{i}(\psi_{i}(x\cdot y)) \end{equation*} \begin{equation*} =\int_{S_{y}(H_0)}F_{\mu,H_0}^{\psi'_{i}}d(\nu_{i}|_{H_0})=\int_{S_{y}(H_0)}F_{\mu,H_0}^{\psi'_{i}}d(\widehat{\nu}_{i}|_{H_0}) \end{equation*} \begin{equation*} =\int_{S_{y}(H)}F_{\mu,H}^{\psi'_{i}}d(\widehat{\nu}_{i}|_{H})=\int_{S_{x}(H)}F_{\widehat{\nu}_{i},H}^{\left(\psi'_{i} \right)^{*}}d(\mu|_{H}) \end{equation*} \begin{equation*} \approx_{\varepsilon_{0}}\int_{S_{x}(H)}F_{\operatorname{Av}_{\overline{b}_{i}}, H}^{\left(\psi'_{i}\right)^{*}}d(\mu|_{H})=\frac{1}{m_i}\sum_{j=1}^{m_i}\mu(\psi_{i}(x\cdot b_{i,j})). \end{equation*} Let $\Psi= \{\psi_{i}(x \cdot b_{i,j}) : 1 \leq i \leq n, 1 \leq j \leq m_i\}$. 
Since $\mu$ is finitely satisfiable in $G$, we can find some $k_{\mu} \in \operatorname{conv}(G)$ such that $k_{\mu}(\theta(x)) \approx_{\varepsilon_0} \mu(\theta(x))$ for each $\theta(x) \in \Psi$ (see the proof of Proposition \ref{convfs}). We claim that then $\pi_{k_{\mu}}$ is in $B_{\varepsilon}$. This follows directly from running the equations above in reverse: for each $1 \leq i \leq n$ we have (using that $k_\mu$ is obviously invariant over $G$, hence also over $H_0$) \begin{equation*} \frac{1}{m_i}\sum_{j=1}^{m_i}\mu(\psi_{i}(x\cdot b_{i,j})) \approx_{\varepsilon_0} \frac{1}{m_i}\sum_{j=1}^{m_i}k_{\mu}(\psi_{i}(x\cdot b_{i,j})) \end{equation*} \begin{equation*} = \int_{S_{x}(H)}F_{\operatorname{Av}_{\overline{b}_i}, H}^{\left(\psi'_{i} \right)^{*}}d(k_{\mu}|_{H}) \approx_{\varepsilon_{0}} \int_{S_{x}(H)}F_{\widehat{\nu}_{i}, H}^{\left(\psi'_{i} \right)^*}d \left(k_{\mu}|_{H} \right) \end{equation*} \begin{equation*} =\int_{S_{y}(H)}F_{k_{\mu}, H}^{\psi'_{i}}d(\widehat{\nu}_{i}|_{H})=\int_{S_{y}(H_0)}F_{k_{\mu}, H_0}^{\psi'_{i}}d(\widehat{\nu}_{i}|_{H_0}) \end{equation*} \begin{equation*} =k_{\mu} \otimes \nu_i(\psi_i(x \cdot y)) = \pi_{k_{\mu}}(\nu_{i})(\psi_i(x)). \end{equation*} Hence $\rho_{\mu}(\nu_{i})(\psi_{i}(x)) \approx_{3\varepsilon_{0}}\pi_{k_{\mu}}(\nu_{i})(\psi_i(x))$ for each $1 \leq i \leq n$, so $\pi_{k_\mu} \in B_{\varepsilon} \subseteq U$ and we are finished. \end{proof} \begin{lemma}\label{lem: surj} $\rho\left(\mathfrak{M}_{x}(\mathcal{G},G) \right) = E(\mathfrak{M}_{x}(\mathcal{G},G), \operatorname{conv}(G))$. \end{lemma} \begin{proof} Let $f \in E\left(\mathfrak{M}_{x}(\mathcal{G},G),\operatorname{conv}(G) \right)$ be arbitrary. Then $f \in \operatorname{cl}\left(\{\pi_{k}: k \in \operatorname{conv}(G)\} \right)$, and so there exists a net $(k_{i})_{i \in I}$ with $k_i \in \operatorname{conv}(G)$ such that $\lim_{i \in I} \pi_{k_i} = f$. Then, using Remark \ref{rem: openset}, for every $\psi(x) \in \mathcal{L}_{x}(\mathcal{G})$ and $\nu \in \mathfrak{M}_{x}(\mathcal{G},G)$ we have \begin{equation*} \lim_{i \in I} \pi_{k_i}(\nu)(\psi(x)) = f(\nu)(\psi(x)). \end{equation*} Consider $\delta_{e}$, where $e \in \mathcal{G}$ is the identity. Let $\mu_{f} := f(\delta_{e}) \in \mathfrak{M}_{x}(\mathcal{G},G)$. We claim that the net $(k_i)_{i \in I}$ converges to $\mu_{f}$ in $\mathfrak{M}_{x}(\mathcal{G},G)$. Indeed, for any $\psi(x) \in \mathcal{L}_{x}(\mathcal{G})$ we have \begin{equation*} \lim_{i \in I} {k_i}(\psi(x))=\lim_{i \in I} \pi_{k_i}(\delta_{e})(\psi(x)) = f(\delta_{e})(\psi(x)) = \mu_f(\psi(x)). \end{equation*} Next, we claim that for any $\nu \in \mathfrak{M}_{x}(\mathcal{G},G)$, we have that $f(\nu) = \rho_{\mu_f}(\nu)$. Indeed, first we have \begin{equation*} f(\nu) = \lim_{i \in I}\pi_{k_i}(\nu) = \lim_{i \in I}[\pi_{k_i} \circ \rho_{\nu}](\delta_{e}) = \lim_{i \in I} \rho_{k_i * \nu}(\delta_{e}) = \lim_{i \in I} [k_i * \nu]. \end{equation*} The map $-*\nu: \mathfrak{M}_{x}(\mathcal{G},G) \to \mathfrak{M}_{x}(\mathcal{G},G)$ is continuous by Proposition \ref{prop: conv is left cont}, hence it commutes with net limits. Therefore, \begin{equation*} \lim_{i \in I} [k_i * \nu] = [\lim_{i \in I} k_i] * \nu = \mu_{f} * \nu = \rho_{\mu_{f}}(\nu). \end{equation*} We conclude that $f = \rho_{\mu_f} = \mu_f*-$. \end{proof} \begin{lemma}\label{lem: rho inv cont} The map $\rho^{-1}: E(\mathfrak{M}_{x}(\mathcal{G},G),\operatorname{conv}(G)) \to \mathfrak{M}_{x}(\mathcal{G},G)$ is a continuous bijection. 
\end{lemma} \begin{proof} The map $\rho^{-1}$ is a well-defined bijection by Lemmas \ref{lem: rho inj} and \ref{lem: surj}. Let $U$ be a basic open subset of $\mathfrak{M}_{x}(\mathcal{G},G)$, say \begin{equation*}U = \bigcap_{i=1}^{n} \{\mu \in \mathfrak{M}_{x}(\mathcal{G},G): r_i < \mu(\varphi_{i}(x)) <s_i\} \end{equation*} for some $n \in \mathbb{N}$, $\varphi_{i}(x) \in \mathcal{L}_{x}(\mathcal{U})$ and $r_i,s_i \in [0,1]$. Then, \begin{equation*} \left(\rho^{-1} \right)^{-1}(U) = \bigcap_{i=1}^{n}\{ f \in E(\mathfrak{M}_{x}(\mathcal{G},G),\operatorname{conv}(G)) : r_i<f(\delta_{e})(\varphi_{i}(x)) < s_i\}. \end{equation*} This is a restriction of a basic open subset (see Remark \ref{rem: openset}) to $E(\mathfrak{M}_{x}(\mathcal{G},G),\operatorname{conv}(G))$, hence open in the subspace topology. \end{proof} \begin{theorem}\label{thm: Ellis grp iso} The map $\rho: (\mathfrak{M}_{x}(\mathcal{G},G),*) \to E(\mathfrak{M}_{x}(\mathcal{G},G),\operatorname{conv}(G))$ is a homeomorphism which respects the semigroup operation, and therefore an isomorphism. \end{theorem} \begin{proof} The map $\rho$ is a homeomorphism since, by Lemma \ref{lem: rho inv cont}, $\rho^{-1}$ is a continuous bijection between compact Hausdorff spaces. And note that $\rho(\mu * \nu)(\lambda) = (\mu * \nu) * \lambda = \mu * (\nu * \lambda) = \rho_{\mu} ( \nu * \lambda) = \rho_{\mu} \circ \rho_{\nu}(\lambda)$, hence $\rho(\mu * \nu) = \rho_{\mu} \circ \rho_{\nu}$. \end{proof} \begin{remark}\label{rem : without conv} On the other hand, if $T$ is NIP, then $$E(\mathfrak{M}_{x}(\mathcal{G},G),G) \cong E(S_{x}(\mathcal{G},G),G),$$ \noindent and so $\cong (S_{x}(\mathcal{G},G),*)$ by Fact \ref{fac: New isom}. For a countable $G \prec \mathcal{G}$, this is an immediate consequence of the corresponding observation in the context of tame metrizable dynamical systems (see e.g.~\cite[Theorem 1.5]{glasner2006tame}); and for an arbitrary small $G \prec \mathcal{G}$, an approximation argument with smooth measures (as in Lemma \ref{lem: image of rho}) can be adapted. As typically $(\mathfrak{M}_{x}(\mathcal{G},G),*) \not \cong (S_{x}(\mathcal{G},G),*)$, we see that it was crucial to consider the action of $\operatorname{conv}(G)$ rather than $G$ in our characterization of $(\mathfrak{M}_{x}(\mathcal{G},G),*)$ as an Ellis semigroup. \end{remark} \end{document}
\begin{document} This paper has been withdrawn by the author due to a crucial error related to the singularities of the isotropy action. \end{document}
\begin{document} \begin{abstract} We construct two explicit Leray-Hirsch isomorphisms for torus equivariant oriented cohomology of flag varieties and give several applications. One isomorphism is geometric, based on Bott-Samelson classes. The other is algebraic, based on the description of the torus equivariant oriented cohomology of a flag variety as the dual of a formal affine Demazure algebra. \end{abstract} \maketitle \allowdisplaybreaks \section{Introduction} \subsection{}\label{ssec:1.1} The Leray-Hirsch Theorem is a fundamental result from algebraic topology. Suppose $\pi\colon E\to B$ is a fibre bundle with fibre $F$ and fibre embedding $i\colon F\to E$. Then $\pi$ and $i$ induce maps \[ \pi^*\colon H^*(B)\to H^*(E) \quad\text{and}\quad i^*\colon H^*(E)\to H^*(F) \] in singular cohomology. Given any homomorphism $j\colon H^*(F)\to H^*(E)$, we can define a \emph{Leray-Hirsch homomorphism} \begin{equation*} \label{eq:0} \varphi_j\colon H^*(B)\otimes H^*(F) \to H^*(E)\quad\text{by}\quad \varphi_j(b \otimes f) = \pi^*(b) \smile j(f) . \end{equation*} Under reasonable hypotheses, which include that the map $i^*$ is surjective and that the map $j$ is a right inverse (or section) of $i^*$, the conclusion of the Leray-Hirsch Theorem is that the Leray-Hirsch homomorphism $\varphi_j$ is an isomorphism. \subsection{}\label{ssec:1.2} Examples with applications in Schubert calculus and the representation theory of groups of Lie type are the fibrations that arise as projections from a complete flag variety to a partial flag variety. Suppose $G$ is a connected, reductive, complex algebraic group, $B$ is a Borel subgroup of $G$, and $P$ is a parabolic subgroup that contains $B$. Then the natural projection $\pi\colon G/B \to G/P$ is a fibration with fibre $P/B$. Suppose $T\subseteq B$ is a maximal torus and $L$ is the Levi factor of $P$ that contains $T$. Then $P/B\cong L/(B\cap L)$ and the Leray-Hirsch Theorem provides an explicit isomorphism \begin{equation*} \label{eq:3} H^*(G/P)\otimes H^*(L/(B\cap L))\cong H^*(G/B). \end{equation*} This isomorphism shows that cohomologically, the flag variety $G/B$ is the product of the partial flag variety $G/P$ and the smaller flag variety $L/(B\cap L)$, and is a topological incarnation of the factorization of the group algebra of $W$, \[ \BBQ W^L \otimes \BBQ W_L \cong \BBQ W, \] where $W$ is the Weyl group of $(G,T)$, $W_L$ is the Weyl group of $(L,T)$, and $W^L$ is the set of minimal length left coset representatives of $W_L$ in $W$. \subsection{}\label{ssec:1.4} In this paper, we construct two distinct Leray-Hirsch homomorphisms, \begin{equation*} \label{eq:lhiso} \BBh_T(G/P)\otimes_{\BBh_T(\pt)}\BBh_T(P/B) \xrightarrow{\ \cong\ } \BBh_T(G/B), \end{equation*} where $\BBh(\cdot)$ is an oriented cohomology theory in the sense of Levine and Morel and $\BBh_{\star}(\cdot)$ is an associated equivariant, oriented cohomology theory, and prove that they are isomorphisms. The first, which we call the \emph{geometric Leray-Hirsch homomorphism,} is an extension of the geometric construction in ordinary cohomology and depends on the choice of resolutions of the Schubert varieties in $G/B$. The second, which we call the \emph{algebraic Leray-Hirsch homomorphism,} is algebraically much more natural, but at this level of generality, a direct connection with the underlying geometry of $G/B$ is not known.
\subsection{}\label{ssec:1.5} In singular cohomology, the fundamental classes of the Schubert varieties $B_w= \overline{B\cdot wB}$, for $w \in W$, form a basis of the homology of $G/B$. Let $\{\, \xi^w\mid w\in W\,\}$ be the dual basis of $H^*(G/B)$, with respect to the usual scalar pairing between homology and cohomology. By definition, $\xi^w\in H^{2\ell(w)}(G/B)$, where $\ell(w)$ is the length of $w$. Replacing $G/B$ by $L/(B\cap L)$ and using the natural isomorphism $L/(B\cap L) \cong P/B$ gives a basis $\{\, \xi^v_L\mid v\in W_L\,\}$ of $H^*(P/B)$. Starting with the $B$-orbits in $G/P$, the same construction gives a basis $\{\, \xi^w_P\mid w\in W^L\,\}$ of $H^*(G/P)$. It is shown in \cite{bernsteingelfandgelfand:schubert} that $\pi^*\colon H^*(G/P)\to H^*(G/B)$ is injective. In addition, one can check that $i^*\colon H^*(G/B)\to H^*(P/B)$ is surjective with \[ i^*(\xi^w) = \begin{cases} \xi_{L}^w &w\in W_L \\ 0 &w\notin W_L. \end{cases} \] Define \[ j\colon H^*(P/B)\to H^*(G/B)\quad \text{by}\quad j(\xi_{L}^v) = \xi^v \] for $v\in W_L$. Then $j$ is a right inverse of $i^*$ and it is straightforward to show that the mapping \[ H^*(G/P) \otimes H^*(P/B) \to H^*(G/B) \quad\text{given by}\quad \xi^w_P \otimes \xi^v_L \mapsto \pi^*(\xi^w_P) \smile j(\xi^v_L) \] is an isomorphism. More generally, one can replace cohomology by $T$-equivariant cohomology, where $T$ is a maximal torus in $B$. Drellich and Tymoczko \cite{drellichtymoczko:module} have described a choice of a right inverse of $i^*$ using GKM theory, and constructed an explicit Leray-Hirsch isomorphism in $T$-equivariant cohomology. They also show that the isomorphism they construct in equivariant cohomology induces an isomorphism in cohomology. The geometric Leray-Hirsch homomorphism constructed below is an extension of the preceding construction using Bott-Samelson classes in equivariant oriented cohomology instead of Schubert classes in singular cohomology. \subsection{}\label{ssec:1.a} In equivariant $K$-theory, the group $K^T(G/B)$ has four distinguished bases, namely the basis given by the isomorphism classes of the structure sheaves of the Schubert varieties and its dual basis, and the basis given by the isomorphism classes of the structure sheaves of the opposite Schubert varieties and its dual basis. For the construction of the geometric Leray-Hirsch homomorphism for a general equivariant cohomology theory, the choice of a Bott-Samelson class is a viable substitute for the class of the structure sheaf of the corresponding Schubert variety (or the corresponding Schubert class in singular cohomology). Results of Graham and Kumar \cite{grahamkumar:positivity} show that in equivariant $K$-theory, the basis given by the structure sheaves of the opposite Schubert varieties is arguably better adapted to the projection $\pi\colon G/B \to G/P$ than the basis given by the structure sheaves of the Schubert varieties. The algebraic Leray-Hirsch homomorphism is based on an algebraically constructed basis in the dual of the associated formal affine Demazure algebra that in equivariant $K$-theory maps to the basis of structure sheaves of opposite Schubert varieties. The authors do not know of a geometric construction of this basis that is valid for a general equivariant oriented cohomology theory. \subsection{}\label{ssec:1.8} The rest of this paper is organized as follows.
In \S2 we review the notions of oriented and equivariant oriented cohomology theories for smooth, complex varieties from \cite{levinemorel:algebraic} and \cite{calmeszainoullinezhong:equivariant}; and construct the geometric Leray-Hirsch homomorphism for the $T$-equivariant fibration $\pi\colon G/B\to G/P$. In \S3 we review the algebraic description of $\BBh_T(G/B)$, $\BBh_T(G/P)$, and $\pi^*$ developed in \cite{calmeszainoullinezhong:equivariant}, \cite{calmeszainoullinezhong:coproduct}, and \cite{calmeszainoullinezhong:push}; define the algebraic Leray-Hirsch homomorphism for $\pi$; and state the main theorem, namely that the geometric and algebraic Leray-Hirsch homomorphisms are isomorphisms. The main theorem is proved in \S4, and in \S5 we give some applications of the main theorem. \subsection{}\label{ssec:1.9} Throughout this paper, for convenience we consider only complex algebraic groups and complex varieties. The results and proofs are unchanged if the complex field is replaced by any algebraically closed field with characteristic zero. Similarly, we assume that the reductive group $G$ has simply connected derived group and we fix a maximal torus $T$. It follows from \cite[Lem.~11.1]{calmeszainoullinezhong:equivariant} that passing to a reductive quotient, say $\widetilde G$, of $G$, and a maximal torus $\widetilde T$ of $\widetilde G$, does not change the cohomology rings. \section{The Leray-Hirsch homomorphism} \label{sec:lhh} In this section we fix notation and review the concepts from equivariant (algebraic) oriented cohomology theories needed to give the precise formulation of the geometric Leray-Hirsch homomorphism. \subsection{(Algebraic) oriented cohomology theories and formal group laws \cite[\S1.1, \S1.2] {levinemorel:algebraic}} \label{ssec:oct} Let $\SmC$ denote the category of smooth, quasi-projective, complex, algebraic varieties. An algebraic oriented cohomology theory, or simply an \emph{oriented cohomology theory,} in the sense of Levine and Morel, is a contravariant functor, $\BBh(\cdot)$, from $\SmC$ to the category of commutative, graded rings with identity together with a natural push-forward map $f_*\colon \BBh(X) \to \BBh(Y)$ for each projective morphism $f\colon X\to Y$ in $\SmC$. The functor $\BBh$ and the push-forward maps are assumed to satisfy a natural list of axioms. Let $\BBh(\cdot)$ be an oriented cohomology theory and set $R=\BBh(\pt)$, where $\pt$ denotes the one point variety. It follows from the axioms that one can define the notion of $\BBh$-Chern classes of vector bundles. If $E$ is a non-zero vector bundle over a smooth variety $X$, then $c_1(E)\in \BBh(X)$ denotes the first Chern class of $E$. With this notation, there is a formal power series $F\in R[[t,u]]$ such that $c_1(L\otimes M)= F(c_1(L), c_1(M))$ for line bundles $L$ and $M$ on $X$. The formal power series $F$ has the form \begin{equation} \label{eq:21} F(t,u)= \sum_{i,j} r_{i,j} t^i u^j = t+u-tu\cdot G(t,u), \quad\text{where}\quad G(t,u) = -r_{1,1} -r_{2,1}t -r_{1,2}u - \dotsm. \end{equation} More precisely, $F$ is a commutative formal group law of rank one, or more simply a formal group law, that is \[ F(t,0)= F(0,t)=t, \quad F(t,u)= F(u,t),\quad \text{and}\quad F(F(t,u),v)= F(t, F(u,v)), \] and is called the formal group law of $\BBh$. \subsection{Examples}\label{ssec:fglex} The fundamental example of an oriented cohomology theory is given by the functor of Chow rings, denoted by $\Chow^*$. Clearly $\Chow^*(\pt) \cong \BBZ$.
If $X$ is a smooth, quasi-projective variety and $L$ and $M$ are line bundles on $X$, then $c_1(L\otimes M) = c_1(L)+ c_1(M)$, so the formal group law of $\Chow^*(\cdot)$ is the \emph{additive formal group law:} \[ F_{add}(t,u)= t+u. \] An oriented cohomology theory whose formal group law is $F_{add}$ is called \emph{ordinary}. Another basic example of an ordinary oriented cohomology theory is the even part of de Rham cohomology. If $R$ is any ring with identity and $\kappa \in R\setminus \{0\}$, then \[ F(t,u)= t+u-\kappa tu \] is a formal group law, called a \emph{multiplicative formal group law.} For example, the functor $K(\cdot)$, where $K(X)$ is the Grothendieck group of vector bundles on $X$, is an oriented cohomology theory whose formal group law is multiplicative with $\kappa=1$, namely $F(t,u)= t+u -tu$. A universal example of an oriented cohomology theory is the algebraic cobordism functor $\Omega^*(\cdot)$ constructed by Levine and Morel. The ring $\Omega^*(\pt)$ is isomorphic to the Lazard ring, $\BBL= \BBZ[a_{i,j}\mid i,j\in \BBN]/J$, where the $a_{i,j}$ are indeterminates and $J$ is the ideal generated by the relations among the variables $a_{i,j}$ imposed by the requirement that if $f_{i,j}= a_{i,j}+ J$, then $F(t,u) = \sum_{i,j} f_{i,j} t^i u^j$ defines a formal group law in $\BBL$. Moreover, the formal group law of $\Omega^*(\pt)$ is the \emph{universal formal group law} on $\BBL$: \[ F_{univ}(t,u) = \sum_{i,j} f_{i,j} t^i u^j. \] The oriented cohomology theory $\Omega^*(\cdot)$ has the universal property that if $\BBh(\cdot)$ is any oriented cohomology theory, then there is a unique natural transformation $\Omega^*(\cdot) \to \BBh(\cdot)$ that commutes with push-forwards of projective morphisms. If $R$ is a commutative graded ring with identity and $F$ is a formal group law on $R$, let $\BBh(\cdot)$ be the composition of $\Omega^*(\cdot)$ with the base change functor $(\cdot) \otimes_{\BBL} R$, where $R$ is an $\BBL$-algebra via the unique ring homomorphism $\psi\colon \BBL\to R$ such that $F(t,u)= \sum_{i,j} \psi(f_{i,j}) t^i u^j$. Then $\BBh(\cdot)$ is an oriented cohomology theory with $\BBh(\pt)= \BBL \otimes_{\BBL}R \cong R$ and formal group law $F$. \subsection{Equivariant oriented cohomology theories \cite[\S2] {calmeszainoullinezhong:equivariant}} Let $\GrC$ be the subcategory of $\SmC$ consisting of linear algebraic groups and group homomorphisms. Define $\GrSmC$ to be the category with objects all pairs $(G,X)$, where $G\in \GrC$, $X\in \SmC$, and $X$ is a $G$-variety. Given $(G,X)$ and $(G',X')$, a morphism in $\GrSmC$ is a pair $(\varphi,f)$, where $\varphi\colon G\to G'$ is a morphism in $\GrC$, $f\colon X\to X'$ is a morphism in $\SmC$, and the obvious compatibility condition intertwining the $G$-action on $X$ and the $G'$-action on $X'$ holds, namely $f(g\cdot x)= \varphi(g)\cdot f(x)$ for $(g,x)\in G\times X$ (or equivalently, the corresponding diagram commutes). An \emph{equivariant oriented cohomology theory} is an additive, contravariant functor from the category $\GrSmC$ to the category of commutative rings with identity endowed with push-forward maps for equivariant projective morphisms that has certain properties and that satisfies a collection of axioms \cite[\S2] {calmeszainoullinezhong:equivariant}. Suppose $\BBh_\star(\cdot)$ is an equivariant oriented cohomology theory. On objects, the functor $\BBh_\star(\cdot)$ carries $(G, X)$ to the commutative ring $\BBh_G(X)$.
\begin{itemize} \item The subcategory of $\GrSmC$ with objects whose first component is equal to a fixed group $G$ and morphisms $(\id_G, f)$ is canonically isomorphic to the category of $G$-varieties. The restriction of $\BBh_\star(\cdot)$ to this subcategory defines a contravariant functor from the category of $G$-varieties to the category of commutative rings with identity. On objects, $X\mapsto \BBh_G(X)$, and if $f\colon X\to Y$ is a $G$-equivariant morphism of $G$-varieties, then $\BBh_{\id_G}(f)$ is denoted simply by $f^*$, so $f^*\colon \BBh_G(Y) \to \BBh_G(X)$ is a ring homomorphism. If $f$ is projective, then the push-forward map is denoted simply by $f_*$, so $f_*\colon \BBh_G(X) \to \BBh_G(Y)$. In general, $f_*$ is not a ring homomorphism. \item When $G=\{e\}$ is the trivial group, the subcategory in the previous bullet point is a full subcategory of $\GrSmC$ that is canonically isomorphic to $\SmC$. One of the axioms is that the restriction of $\BBh_\star(\cdot)$ to this subcategory is an oriented cohomology theory in the sense of Levine and Morel. The restriction of $\BBh_\star(\cdot)$ to $\{e\}$ is usually denoted simply by $\BBh$, so $\BBh(X) = \BBh_{\{e\}}(X)$ for a smooth quasi-projective variety $X$. \end{itemize} \subsection{Group theory notation}\label{ssec:gp} As in the introduction, $G$ is a connected, reductive, complex algebraic group, \[ T\subseteq B \subseteq P\subseteq G \] are a fixed maximal torus, Borel subgroup, and parabolic subgroup of $G$, respectively; $\pi\colon G/B\to G/P$ is the projection; and $i\colon P/B \to G/B$ is the inclusion. For convenience, we assume that the derived group of $G$ is simply connected. Let $\Lambda$ denote the character group of $T$. Let $L$ be the Levi factor of $P$ that contains $T$. Then $B\cap L$ is a Borel subgroup of $L$, the natural projection $L/(B\cap L) \to P/B$ is an isomorphism, and we may identify $P/B$ with the flag variety of $L$. Let $\Phi$ be the root system of $(G,T)$, so $\Phi$ is a subset of $\Lambda$. Let $\Phi^+$ be the set of positive roots defined as the set of weights of $T$ on the cotangent space $T^*_B (G/B)$. Let $\Phi_L$ be the subset of roots corresponding to the root system of $(L,T)$, and let $\Phi_L^+= \Phi_L\cap \Phi^+$. Let $W$ be the Weyl group of $(G,T)$ and let $W_L$ be the Weyl group of $(L,T)$. We always identify $W_L$ with a subgroup of $W$. The positive system $\Phi^+$ determines a base of $\Phi$ that in turn determines both a length function, $\ell$, on $W$ and the Bruhat order, $\leq$, on $W$. Let $W^L$ be the set of minimal length left coset representatives of $W_L$ in $W$. Then multiplication in $W$ defines a bijection $W^L \times W_L\leftrightarrow W$. For a simple reflection $s=s_\alpha\in W$, let $P_s$ be the parabolic subgroup of $G$ that contains $B$ and such that $T$ acts on $T^*_B (P_s/B)$ as $\alpha$ and let $L_s$ be the Levi factor in $P_s$ that contains $T$. Set $W_s=W_{L_s}$ and $W^s=W^{L_s}$. Then $W_s= \{e,s\}$ and for $z\in W$, $z\in W^s$ if and only if $zs>z$. Fix linear orders on $W^L$ and $W_L$ that extend the Bruhat order. Then the lexicographic order on $W$ given by the factorization $W= W^LW_L$, say $\preceq$, is a linear order on $W$ that extends the Bruhat order. In this linear order on $W$ we have \[ \text{$wv \preceq w'v'$ \quad if and only if \quad $w \prec w'$, or $w=w'$ and $v \preceq v'$.} \] When we consider square matrices indexed by $W_L$ or $W$ it will always be with respect to this linear order.
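To fix ideas, here is a small example; it is included only for orientation and is not used in what follows. Take $G=SL_3$, with simple reflections $s_1$ and $s_2$, and let $P$ be the standard parabolic subgroup containing $B$ with $W_L=\{e,s_2\}$ (so $P=P_{s_2}$ and $L=L_{s_2}$ in the notation above). The cosets $wW_L$ are $\{e,s_2\}$, $\{s_1,s_1s_2\}$, and $\{s_2s_1,s_2s_1s_2\}$, so $W^L=\{e, s_1, s_2s_1\}$, and every element of $W$ factors uniquely as $wv$ with $w\in W^L$ and $v\in W_L$; for instance $s_2s_1s_2=(s_2s_1)\cdot s_2$. Choosing the linear orders $e\preceq s_1\preceq s_2s_1$ on $W^L$ and $e\preceq s_2$ on $W_L$, the resulting lexicographic order on $W$ is \[ e \prec s_2 \prec s_1 \prec s_1s_2 \prec s_2s_1 \prec s_2s_1s_2, \] which extends the Bruhat order. Geometrically, $G/P$ is a projective plane, $P/B$ is a projective line, and $\pi\colon G/B\to G/P$ is a fibration with fibre $P/B$, matching $|W^L|=3$ and $|W_L|=2$.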
\subsection{Notation for tuples (of simple reflections)} Suppose $I=(s_1, \dots, s_p)$ and $J=(s_1', \dots, s_q')$ are two tuples of simple reflections in $W$. \begin{itemize} \item If $s$ is any simple reflection in $W$, then write $s\in I$ if $s=s_i$ for some $1\leq i\leq p$. \item Write $I\sqsubseteq J$ if $I$ is a subsequence of $J$. \item $I\sqcup J$ denotes the concatenation of $I$ and $J$. \item If $J$ is a subsequence of $I$, then $I\setminus J$ is the subsequence of $I$ obtained by removing the entries in $J$. \item If $I$ and $J$ are subsequences of a sequence $K$, then $I\sqcap J$ is the subsequence of $K$ formed by the entries of $K$ that are in positions occupied by both $I$ and $J$. \end{itemize} \subsection{Standing assumptions}\label{ssec:asm} A reduced expression for an element $w\in W$ is a sequence $I=(s_1, \dots, s_p)$ of simple reflections such that $w=s_1\dots s_p$ and $\ell(w)=p$. In the rest of this paper we fix a set of reduced expressions for the elements of $W$, $\{\, I_w\mid w\in W\,\}$, and assume that the choice of reduced expressions is \emph{$L$-compatible,} that is, if $w\in W^L$ and $v\in W_L$, then $I_{wv}= I_w \sqcup I_v$. For the rest of this paper $\BBh_\star(\cdot)$ denotes an equivariant oriented cohomology theory, \[ R=\BBh(\pt)\quad \text{and}\quad S=\BBh_T(\pt). \] We assume that either $2$ is not a zero divisor in $R$ or that the derived group of $G$ does not contain an irreducible factor of type $C_l$. With this assumption we can freely use the results in \cite{calmeszainoullinezhong:coproduct} and \cite{calmeszainoullinezhong:push}. In addition, with the notation in \cref{ssec:S}, let $S_+$ be the image in $S$ of the ideal in $R[[\Lambda]]$ generated by $\{\, x_\lambda\mid \lambda \in \Lambda\,\}$ and suppose that $S$ is separated and complete in the $S_+$-adic topology. In the terminology of \cite{calmeszainoullinezhong:equivariant}, the assumption is that $\BBh_\star(\cdot)$ is ``Chern-complete over the point for $T$.'' It is observed in \cite[Rmk.~2.3]{calmeszainoullinezhong:equivariant} that if $S$ is separated and not complete, then extending scalars to the completion of $S$ yields an equivariant oriented cohomology theory with the same underlying oriented cohomology theory. \subsection{Bott-Samelson resolutions and classes \cite[\S7,8]{calmeszainoullinezhong:equivariant}}\label{ssec:bs} For a tuple $I=(s_1, \dots, s_p)$ of simple reflections such that $s_p\notin W_L$, define \[ Z^P_I = P_{s_1} \times^B \dotsm \times^B P_{s_{p-1}} \times^B (P_{s_p}P /P) \] and let \[ q^P_I\colon Z^P_I\to G/P \] be the projection given by multiplication of the factors. Notice that $P_{s_p}P/P \cong P_{s_p}/B$ because $s_p\notin W_L$. It follows that the natural projection from $Z^B_{I}$ to $Z^P_{I}$ is an isomorphism and that the diagram \[ \xymatrix{ Z^B_{I} \ar[r]_{\cong} \ar[d]^{q^B_{I}} & Z^P_{I} \ar[d]^{q^P_{I}} \\ G/B \ar[r]^{\pi} &G/P} \] commutes. Define the \emph{Bott-Samelson class} \[ \xi^P_I = \pi_* (q^B_{I})_*(1)\in \BBh_T(G/P), \] where $1$ denotes the identity in $\BBh_T(Z^B_{I})$. If $P=B$, then there is no condition on $I$ and we set $\xi_I= \xi_I^B$. If $s_p\notin W_L$, then $\xi_I$ and $\xi^P_I$ are both defined and $\pi_*(\xi_I)= \xi^P_I$. Now suppose $w\in W^L$ and $I=I_w$. Then $s_p\notin W_L$ and $q^P_{I_w}\colon Z^P_{I_w} \to \overline{BwP/P}$ is a resolution of singularities. Define $\xi^P_w = \xi^P_{I_w}$, and when $P=B$ define $\xi_w=\xi^B_w$.
By definition, \[ \pi_{*}(\xi_w)= \xi^P_w, \] and by \cite[Prop.~8.9]{calmeszainoullinezhong:equivariant}, $\{\, \xi^P_w\mid w\in W^L\,\}$ is an $S$-basis of $\BBh_T(G/P)$. A special case is $\xi_\emptyset = \xi_e= i^e_*(1)$, where $i^e\colon \{B\} \hookrightarrow G/B$ is the inclusion. Notice that in general $\xi_e \ne 1$ because $ i^e_*$ is not necessarily a ring homomorphism. \subsection{Duality \cite[\S9]{calmeszainoullinezhong:equivariant}}\label{ssec:dual} For a smooth, projective $T$-variety, $X$, let $a_X\colon X\to \pt$ be the canonical morphism. Then $a_X$ is a projective morphism and so $(a_X)_*\colon \BBh_T(X) \to \BBh_T(\pt)=S$ is defined. Define a pairing \[ \langle \,\cdot\,,\,\cdot\, \rangle_X\colon \BBh_T(X) \otimes_S \BBh_T(X) \to S \quad\text{by}\quad \langle \xi, \xi'\rangle_X = (a_X)_*(\xi \xi'). \] We frequently write $\langle \xi, \xi'\rangle$ instead of $\langle \xi, \xi'\rangle_X$ when the ambient space $X$ is clear from context. We call this pairing the \emph{scalar pairing} because it reduces to the scalar product pairing between singular cohomology and singular homology given by evaluation. By \cite[Thm.~9.3]{calmeszainoullinezhong:equivariant}, the scalar pairing $\langle \,\cdot\,,\,\cdot\, \rangle_{G/P}$ is non-degenerate for every $P$. Let $\{\, \xi^{z}\mid z\in W\,\}$ be the $S$-basis of $\BBh_T(G/B)$ dual to $\{\, \xi_{z}\mid z\in W\,\}$ and let $\{\, \xi_P^{w}\mid w\in W^L\,\}$ be the $S$-basis of $\BBh_T(G/P)$ dual to $\{\, \xi^P_{w}\mid w\in W^L\,\}$. Then \[ \langle \xi_z, \xi^{z'}\rangle_{G/B} = \delta_{z,z'} \text{\ for $z,z'\in W$} \quad\text{and}\quad \langle \xi_w^P, \xi^{w'}_P \rangle_{G/P} = \delta_{w,w'} \text{\ for $w,w'\in W^L$.} \] \subsection{The flag variety of $L$ and $\BBh_T(P/B)$}\label{ssec:fvL} Consider the composition \[ \xymatrix{L/(B\cap L) \ar[r]_-{\cong} &P/B \ar@{^(->}[r]^-{i} & G/B.} \] For $v\in W_L$ we have smooth variety $Z^{B\cap L}_{I_v}$ and the projection $q^{B\cap L}_v\colon Z^{B\cap L}_{I_v} \to L/(B\cap L)$. There is a canonical isomorphism $Z^{B\cap L}_{I_v} \cong Z^{B}_{I_v}$ and the diagram \[ \xymatrix{ Z^{B\cap L}_{I_v} \ar[r]_{\cong} \ar[d]^{q^{B\cap L}_{I_v}}& Z^B_{I_v} \ar[d] \ar[dr]^{q^B_{I_v}} & \\ L/(B\cap L) \ar[r]_-{\cong} &P/B \ar@{^(->}[r]^-{i} & G/B} \] commutes. Let $\xi^L_v$ be the Bott-Samelson class in $\BBh_T(P/B)$ determined by $q^{B\cap L}_{I_v}$, that is, the image in $\BBh_T(P/B)$ of the identity in $\BBh_T(Z^{B\cap L}_{I_v})$. Then $\{\, \xi^L_v\mid v\in W_L\,\}$ is an $S$-basis of $\BBh_T(P/B)$ and by the naturality of push-forward maps $i_*(\xi^L_v) = \xi_v$. Let $\{\, \xi_L^v\mid v\in W_L\,\}$ be the dual basis of $\BBh_T(P/B)$ with respect to the scalar pairing. If $z\in W$, then it follows by duality and the projection formula that \[ i^*(\xi^z) = \sum_{v\in W_L} \langle i^*(\xi^z), \xi^L_v \rangle_{P/B} \, \xi_L^v = \sum_{v\in W_L} \langle \xi^z, i_{*}( \xi^L_v) \rangle_{G/B} \,\xi_L^v = \sum_{v\in W_L} \langle \xi^z, \xi_v \rangle_{G/B} \,\xi_L^v . \] Therefore, $i^*\colon \BBh_T(G/B) \to \BBh_T(P/B)$ is the projection given by \begin{equation*} \label{eq:7} i^*(\xi^z)= \begin{cases} \xi^v_L&\text{if $z=v\in W_L$,}\\ 0&\text{if $z\notin W_L$.} \end{cases} \end{equation*} \subsection{The geometric Leray-Hirsch homomorphism}\label{ssec:glhh} Define \[ j_g\colon \BBh_T(P/B) \to \BBh_T(G/B) \quad\text{by $j_g( \xi^v_L) = \xi^v$ and $S$-linearity.} \] Then $j_g$ is a right inverse of the projection $i^*$. 
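To see this, note that by the displayed formula for $i^*$ in \cref{ssec:fvL}, $i^*(j_g(\xi^v_L)) = i^*(\xi^v)= \xi^v_L$ for every $v\in W_L$; since $j_g$ and $i^*$ are $S$-linear and $\{\, \xi^v_L\mid v\in W_L\,\}$ is an $S$-basis of $\BBh_T(P/B)$, it follows that $i^*\circ j_g$ is the identity on $\BBh_T(P/B)$.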
The \emph{geometric Leray-Hirsch homomorphism} is the composition $\varphi_g =\operatorname{mult} \circ (\pi^*\otimes j_g)$, where $\operatorname{mult}$ is the multiplication map: \begin{equation*} \label{eq:glhi} \vcenter{\vbox{ \xymatrix{ \BBh_T(G/P) \otimes_{S} \BBh_T(P/B) \ar[rr]^-{\pi^* \otimes j_g} \ar@/^30pt/[rrrr]^{\varphi_g} && \BBh_T(G/B) \otimes_{S} \BBh_T(G/B) \ar[rr]^-{\operatorname{mult}} && \BBh_T(G/B) .} }} \end{equation*} For the $S$-basis $\{\, \xi_P^w \otimes \xi_L^v\mid (w,v)\in W^L\times W_L\,\}$ of $\BBh_T(G/P) \otimes_{S} \BBh_T(P/B)$ we have \begin{equation} \label{eq:19} \varphi_g(\xi_P^w \otimes \xi_L^v)= \pi^*(\xi_P^w) j_g( \xi_L^v) =\zeta^w \xi^v, \end{equation} where $\zeta^w=\pi^*(\xi_P^w)$ for $w\in W^L$. \section{The Leray-Hirsch isomorphism} \label{sec:lhi} In this section we review the algebraic description of the $T$-equivariant oriented cohomology theory of partial flag varieties, developed by Calm\`es, Lenart, Zainoulline, Zhong, and others, which extends the description of $T$-equivariant cohomology and $K$-theory due to Kostant and Kumar \cite{kostantkumar:nil}, \cite{kostantkumar:equivariant}, define the algebraic Leray-Hirsch homomorphism, and state the main theorem. \subsection{Formal group algebras and the $T$-equivariant oriented cohomology of a point \cite[\S3] {calmeszainoullinezhong:equivariant} \cite[\S2] {calmespetrovzainoulline:invariants} }\label{ssec:S} The study of $T$-equivariant oriented cohomology begins with the algebraic description of $S=\BBh_T(\pt)$ in \cite[Thm~3.3] {calmeszainoullinezhong:equivariant}. Let $R[[\Lambda]]$ be the ring of formal power series in a set of variables $x_\lambda$, indexed by $\Lambda$ and considered as a topological ring with the adic topology defined by the ideal generated by $\{\, x_\lambda\mid \lambda\in \Lambda\,\}$. Define the \emph{formal group algebra of $\Lambda$} to be the quotient \[ R[[\Lambda]]_F= R[[\Lambda]]/\CJ_F, \] where $\CJ_F$ is the closure of the ideal generated by $\{x_0\} \cup \{\, x_{\lambda+\mu}-F(x_\lambda, x_\mu)\mid \lambda, \mu\in \Lambda\,\}$. Denote the image of $x_\lambda$ in $R[[\Lambda]]_F$ also by $x_\lambda$. Then the rule that maps $x_\lambda$ to the first equivariant Chern class of the line bundle on $G/B$ such that $T$ acts on the fibre over $B$ by the character $\lambda$ defines an isomorphism $S\xrightarrow{\,\sim\,} R[[\Lambda]]_F$. From now on we identify $S$ with $R[[\Lambda]]_F$ using this isomorphism. The $W$-action on $\Lambda$ induces an $R$-linear action of $W$ on $S$ with $z\cdot x_\lambda= x_{z(\lambda)}$ for $z\in W$ and $\lambda\in \Lambda$. \subsection{}\label{ssec:S+} In the proof of the main theorem it is important to know that $S$ has the structure of a ring of formal power series and to identify the augmentation ideal. Recall from \cref{ssec:asm} that $S_+$ is the ideal in $S$ generated by $\{\,x_\lambda \mid \lambda\in \Lambda\,\}$. It is shown in \cite[Cor.~2.13] {calmespetrovzainoulline:invariants} that if $\{\omega_1,\dots ,\omega_n\}$ is a basis of $\Lambda$, then there is an isomorphism of topological $R$-algebras \[ \tau\colon S= R[[\Lambda]]_F\xrightarrow{\,\sim\,} R[[x_{\omega_1}, \dots ,x_{\omega_n}]]\quad\text{with}\quad \tau(x_{\omega_i})= x_{\omega_i} \text{ for $1\leq i\leq n$.} \] Thus, $\tau$ carries $S_+$ isomorphically onto $R[[x_{\omega_1}, \dots ,x_{\omega_n}]]_+$. Therefore, every element $q\in S$ can be written uniquely as $q=r\cdot 1+q_+$, where $r\in R$ and $q_+\in S_+$, and $q$ is a unit in $S$ if and only if $r$ is a unit in $R$.
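For orientation, we record what the defining relations of $R[[\Lambda]]_F$ give in the two basic examples of \cref{ssec:fglex}; these identities are standard and are stated only as an illustration. For the additive formal group law, $x_{\lambda+\mu}=x_\lambda+x_\mu$ and $x_{-\lambda}=-x_\lambda$ for all $\lambda,\mu\in \Lambda$. For the multiplicative formal group law $F(t,u)=t+u-tu$ of $K$-theory, \[ x_{\lambda+\mu}=x_\lambda+x_\mu-x_\lambda x_\mu, \quad\text{or equivalently}\quad 1-x_{\lambda+\mu}=(1-x_\lambda)(1-x_\mu), \] and, since $x_0=0$, $x_{-\lambda}=-x_\lambda(1-x_\lambda)^{-1}$, where $1-x_\lambda$ is a unit in $S$ because $x_\lambda\in S_+$.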
\subsection{}\label{ssec:ualpha} For example, suppose $\beta\in \Phi$. Then \begin{itemize} \item $F(x_\beta, x_{-\beta})=x_{\beta-\beta} = 0$ by the defining relations in $S$ and \item $F(x_\beta, x_{-\beta})=x_\beta+ x_{-\beta} - x_\beta x_{-\beta} G(x_\beta, x_{-\beta})$ because $F$ is a formal group law (see \cref{ssec:oct}\cref{eq:21}). \end{itemize} Define \begin{equation*} \label{eq:22} \kappa_\beta = G(x_{\beta}, x_{-\beta}) = x_\beta\inverse + x_{-\beta}\inverse \quad\text{and}\quad u_\beta = x_\beta / x_{-\beta} = -1+\kappa_\beta x_\beta. \end{equation*} Then $\kappa_\beta$ and $u_\beta$ are both elements in $S$ and $u_\beta\in -1+S_+$, so $u_\beta$ is a unit in $S$. \subsection{Formal affine Demazure algebras and their duals (\cite{calmeszainoullinezhong:equivariant}, \cite{calmeszainoullinezhong:coproduct}, \cite{calmeszainoullinezhong:push})} \label{ssec:alg} The algebraic description of the $T$-equivariant oriented cohomology of partial flag varieties is encapsulated in the commutative diagram of $S$-algebras and $S$-algebra homomorphisms: \begin{equation} \label{eq:fda} \vcenter{ \vbox{ \xymatrix{ \BBh_T(G/B) \ar[d]_{\Theta}^{\cong} \ar[r]^-{\pi_*} & \BBh_T(G/P) \ar[d]_{\Theta_P}^{\cong} \ar[r]^-{\pi^*} & \BBh_T(G/B) \ar[d]_{\Theta}^{\cong} \\ \bfD^* \ar@{->>}[r]^-{Y_P\bullet(\cdot)} \ar@{^(->}[d] & (\bfD^*)^{W_L} \ar@{^(->}[d] \ar@{^(->}[r]& \bfD^* \ar@{^(->}[d]\\ Q_W^* \ar@{->>}[r]^-{Y_P\bullet(\cdot)} & (Q_W^*)^{W_L} \ar@{^(->}[r] & Q_W^* ,}}} \end{equation} where the notation is as follows. \begin{enumerate} \item $Q$ is the localization of $S$ at $\{\, x_\beta\mid \beta\in \Phi \,\}$. Define $x_L$ in $S$ (and $Q$) by \[ x_L= \prod_{\beta\in \Phi_L^+} x_{-\beta}. \] For example, if $P=G$, then $x_G= \prod_{\beta\in \Phi^+} x_{-\beta}$. \label{it:alg1} \item $Q_W$, the \emph{twisted group algebra,} is the free $Q$-module with basis $\{\, \delta_z\mid z\in W\,\}$. $Q_W$ is also an $R$-algebra with multiplication determined by the rule \[ (q\delta_z)\cdot (q'\delta_{z'})=qz(q')\delta_{zz'}, \quad\text{for $q,q'\in Q$ and $z, z'\in W$.} \] Define $Y_P\in Q_W$ by \[ Y_P= \sum_{v\in W_L} \delta_v x_L\inverse. \] If $s=s_\alpha$ is a simple reflection define $Y_s=Y_{P_s}$, so \[ Y_s= (\delta_e +\delta_s) x_{-\alpha}\inverse = x_{-\alpha}\inverse \delta_e +x_{\alpha}\inverse \delta_s. \] If $I=(s_1, \dots, s_p)$ is a sequence of simple reflections in $W$ define $Y_I=Y_{s_1} \dotsm Y_{s_p}$. For $z\in W$ set $Y_z=Y_{I_z}$. Notice that $Y_{wv} =Y_wY_v$ if $w\in W^L$ and $v\in W_L$. It is not hard to check that $\{\, Y_z\mid z\in W\,\}$ is a $Q$-basis of $Q_W$. Taking $P=P_s$ in \cref{eq:fda} one sees that the elements $Y_s$ encode the push-pull operators in equivariant oriented cohomology. These elements are called \emph{push-pull elements.} In another direction, the Demazure operators on the coordinate ring of the Lie algebra of $T$ are encoded by the \emph{Demazure elements}, which are defined by \[ X_s= x_\alpha\inverse (\delta_e - \delta_s)= x_\alpha\inverse \delta_e -x_\alpha\inverse \delta_s \] for a simple reflection $s=s_\alpha \in W$. If $I=(s_1, \dots, s_p)$ is a sequence of simple reflections in $W$, define $X_I=X_{s_1} \dotsm X_{s_p}$, and for $z\in W$ set $X_z=X_{I_z}$. As with the $Y$'s, $X_{wv} =X_wX_v$ if $w\in W^L$ and $v\in W_L$, and $\{\, X_z\mid z\in W\,\}$ is a $Q$-basis of $Q_W$. \label{it:alg2} \item $Q_W^* = \Hom_Q(Q_W,Q)$ is the $Q$-dual of $Q_W$. 
Let $\{\, Y_z^*\mid z\in W\,\}$ and $\{\, X_z^*\mid z\in W\,\}$ be the bases of $Q_W^*$ dual to the bases $\{\, Y_z\mid z\in W\,\}$ and $\{\, X_z\mid z\in W\,\}$ of $Q_W$, respectively. The $\bullet$-action of $Q_W$ on $Q_W^*$ is defined by \[ (h\bullet f) (h')=f(h'h), \quad h, h'\in Q_W, f\in Q_W^*. \] The map $Y_P\bullet(\cdot)$ in \cref{eq:fda} denotes the left $\bullet$-action of $Y_P$ on $Q_W^*$, namely $f\mapsto Y_P\bullet f$ for $f\in Q_W^*$. Restricting the $\bullet$-action to the basis $\{\,\delta_z\mid z\in W\,\}$ of $Q_W$ defines an action of $W$ on $Q_W^*$ by $Q$-algebra automorphisms. The space of $W_L$-invariants for this action is denoted by $(Q_W^*)^{W_L}$. By \cite[Lem.~6.5]{calmeszainoullinezhong:push}, $Y_P\bullet Q_W^* = (Q_W^*)^{W_L}$. \label{it:alg3} \item The \emph{formal affine Demazure algebra of the based root system determined by $T\subseteq B\subseteq G$,} denoted by $\bfD$, is the $R$-subalgebra of $Q_W$ generated by $S$ and $\{\, X_s \mid \text{$s\in W$ is a simple reflection}\,\}$. Since $\delta_e=1$ and $\delta_s= 1-x_\alpha X_s$, it is easy to see that $\{\, \delta_z\mid z\in W\,\} \subseteq \bfD$, and using the results in \cite[\S6]{HMSZ:formal} it is straightforward to check that $\bfD$ is a free $S$-module with bases $\{\, X_z\mid z\in W\,\}$ and $\{\, Y_z\mid z\in W\,\}$. \label{it:alg4} \item $\bfD^* = \Hom_S (\bfD, S)$ is the $S$-dual of $\bfD$. We consider $\bfD^*$ as a commutative $S$-algebra with the product induced from the coproduct on $\bfD$ defined in \cite[\S8, \S11] {calmeszainoullinezhong:coproduct}. Since $\bfD$ is a free $S$-submodule of $Q_W$ that contains the $Q$-basis $\{\,\delta_z\mid z\in W\,\}$ of $Q_W$, it follows that every $S$-module homomorphism $\bfD \to S$ extends uniquely to a $Q$-module homomorphism $Q_W\to Q$. We identify $\bfD^*$ with an $S$-submodule of $Q_W^*$ using this correspondence. It is not hard to see that $\bfD^*$ is a $W$-stable subset of $Q_W^*$ (with respect to the $\bullet$-action of $W$). Moreover, by \cite[Lem.~11.7] {calmeszainoullinezhong:push}, $Y_P\bullet \bfD^* = (\bfD^*)^{W_L}$. \label{it:alg5} \item The inclusion of $T$-fixed points $(G/B)^T\hookrightarrow G/B$ induces an $S$-algebra homomorphism $\BBh_T(G/B) \to \BBh_T( (G/B)^T)$. Because $(G/B)^T=\{\, zB\mid z\in W\,\}$, we see that $\BBh_T ((G/B)^T)$ is isomorphic to $\bigoplus_{z\in W} \BBh_T(zB)$, and so may be identified with the $S$-algebra of functions $W\to S$, with pointwise operations. Similarly, using the basis $\{\,\delta_z\mid z\in W\,\}$ of $Q_W$ we identify $Q_W^*$ with the $Q$-algebra of functions $W\to S$. Composing the map $\BBh_T(G/B) \to \BBh_T( (G/B)^T)$ with the identification $\BBh_T( (G/B)^T)\xrightarrow{\ \cong\ } S_W^*$ gives an $S$-algebra homomorphism $\BBh_T(G/B) \to Q_W^*$. By \cite[\S10]{calmeszainoullinezhong:push} this mapping is injective with image equal to $\bfD^*$. The $S$-algebra isomorphism $\Theta$ is obtained from $S$-algebra homomorphism $\BBh_T(G/B) \to Q_W^*$ by restricting the codomain to $\bfD^*$. The $S$-algebra homomorphism $\Theta_P$ is defined similarly and $\Theta=\Theta_B$. \item For a sequence, $I$, of simple reflections, let $I\rev$ denote the reverse of $I$. Also, let $f_e\in Q_W^*$ denote the $Q$-linear homomorphism that maps $\delta_e$ to $1$ and $\delta_z$ to $0$ for $z\ne e$. 
It follows from \cite[Lem.~8.8]{calmeszainoullinezhong:equivariant} that \[ \Theta(\xi_z) = Y_{I_z\rev} \bullet x_G f_e \quad\text{and}\quad \Theta_P(\xi^P_w) = Y_PY_{I_w\rev} \bullet x_G f_e \] for $z\in W$ and $w\in W^L$. Thus, $\{\, Y_{I_z\rev}\bullet x_G f_e\mid z\in W\,\}$ is an $S$-basis of $\bfD^*$ and $\{\, Y_P Y_{I_w\rev}\bullet x_G f_e\mid w\in W^L\,\}$ is an $S$-basis of $(\bfD^*)^{W_L}$. Finally, by \cite[Lem.~14.3] {calmeszainoullinezhong:push} and \cite[Lem.~11.6] {calmeszainoullinezhong:push}, the sets $\{\, Y_P X_{I_w\rev}\bullet x_G f_e\mid w\in W^L\,\}$ and $\{\, X_w^*\mid w\in W^L\,\}$ are $S$-basis of $(\bfD^*)^{W_L}$. On the other hand, if $w\in W^L$, then in general $Y_w^*\notin (\bfD^*)^{W_L}$. \end{enumerate} \subsection{Duality \cite{calmeszainoullinezhong:push}} \label{ssec:algdual1} An application of the diagram \cref{ssec:alg} \cref{eq:fda} is the algebraic description of the scalar pairings on $\BBh_T(G/B)$ and $\BBh_T(G/P)$. By transport of structure via the isomorphisms $\Theta$ and $\Theta_P$, the non-degenerate $S$-bilinear forms $\langle\, \cdot\,, \, \cdot\, \rangle_{G/B}$ and $\langle\, \cdot\,, \, \cdot\, \rangle_{G/P}$ determine non-degenerate $S$-bilinear forms $\langle\, \cdot\,, \, \cdot\, \rangle_{\bfD^*}$ and $\langle\, \cdot\,, \, \cdot\, \rangle_{(\bfD^*)^{W_L}}$ on $\bfD^*$ and $(\bfD^*)^{W_L}$, respectively. Normally the subscripts are included in the notation only in order to emphasize the ambient space. Because $G/G = \pt$, $W_G=W$, and $(\bfD^*)^{W} \cong S$, the mapping \[ (a_{G/B})_* \colon \BBh_T(G/B) \to \BBh_T(\pt)=S \] is given algebraically by the mapping \[ Y_G\bullet(\cdot) \colon \bfD^*\to (\bfD^*)^{W} \cong S. \] It follows that \[ \langle f,f'\,\rangle_{\bfD^*} =Y_G\bullet (ff') \quad\text{for}\quad f,f'\in \bfD^*. \] By \cite[Thm.~15.5] {calmeszainoullinezhong:push}, $\{\, Y_z^*\mid z\in W\,\}$ is the $S$-basis of $\bfD^*$ that is dual to the basis $\{\, Y_{I_z\rev}\bullet x_G f_e\mid z\in W\,\}$. Therefore \begin{equation} \label{eq:20} \Theta(\xi^z)= Y_z^*\quad \text{for $z\in W$.} \end{equation} For $G/P$ and $(\bfD^*)^{W_L}$, the projection $a_{G/B}\colon G/B\to G/G=\pt$ factors as $a_{G/B}= a_{G/P} \circ \pi$, the element $Y_G\in Q_W$ factors as $Y_{G,L}Y_P$, where $Y_{G,L}= \sum_{w\in W^L} \delta_w x_G\inverse x_L$, and there is a commutative diagram \begin{equation*} \vcenter{\vbox{ \xymatrix{\BBh_T(G/B) \ar[d]_{\Theta}^{\cong} \ar@{->>}[r]^-{\pi_*} & \BBh_T(G/P) \ar[d]_{\Theta_P}^{\cong} \ar@{->>}[r]^-{a_*} & \BBh_T(\pt) \ar[d]_{\Theta_G}^{\cong}\\ \bfD^* \ar@{->>}[r]^-{Y_P\bullet(\cdot)} & (\bfD^*)^{W_L} \ar@{->>}[r]^-{Y_{G,L} \bullet(\cdot)}& \bfD^* .} }} \end{equation*} It follows that \[ \langle f,f'\,\rangle_{(\bfD^*)^{W_L}} =Y_{G,L}\bullet (ff') \quad\text{for}\quad f,f'\in (\bfD^*)^{W_L}. \] By \cite[Thm.~15.6] {calmeszainoullinezhong:push}, the bases $\{\, Y_P X_{I_w\rev}\bullet x_G f_e\mid w\in W^L\,\}$ and $\{\, X_w^*\mid w\in W^L\,\}$ are dual bases of $(\bfD^{W_L})^*$. Define \begin{equation} \label{eq:8} Z_w^* = \Theta_P( \xi_P^w) = \Theta(\zeta^w) \quad \text{for $w\in W^L$.} \end{equation} Then $\{\, Z_w^*\mid w\in W^L\,\}$ is the $S$-basis of $(\bfD^* )^{W_L}$ dual to the basis $\{\, Y_P Y_{I_w\rev}\bullet x_G f_e\,\}$. \subsection{} \label{ssec:algdual2} By the projection formula in equivariant oriented cohomology, the mappings $\pi_*$ and $\pi^*$ are adjoint with respect to the scalar products on $\BBh_T(G/B)$ and $\BBh_T(G/P)$. 
It then follows that $\pi^* \pi_* \colon \BBh_T(G/B) \to \BBh_T(G/B)$ is self-adjoint. By transport of structure via $\Theta$ and $\Theta_P$ in \cref{ssec:alg}\cref{eq:fda} we see that \begin{itemize} \item the mapping $Y_P\bullet(\cdot) \colon \bfD^* \to (\bfD^*)^{W_L}$ and the inclusion $(\bfD^*)^{W_L} \hookrightarrow \bfD^*$ are adjoint: \[ \langle \tilde f, f\rangle_{\bfD^*} = \langle \tilde f, Y_P\bullet f \rangle_{(\bfD^*)^{W_L}}, \quad\text{for $\tilde f\in (\bfD^*)^{W_L}$ and $f\in \bfD^*$,} \] and \item the mapping $Y_P\bullet(\cdot) \colon \bfD^* \to \bfD^*$ is self-adjoint with respect to the pairing $\langle\, \cdot\,, \, \cdot\, \rangle_{\bfD^*}$: \[ \langle Y_P\bullet f, f' \rangle_{\bfD^*}= \langle f, Y_P\bullet f'\rangle_{\bfD^*}, \quad\text{for $f,f'\in \bfD^*$}. \] \end{itemize} \subsection{Executive Summary} To help streamline the notation and save some bullets, for $z\in W$ define $Y_z^{\times}\in \bfD^*$ by \[ Y_z^{\times}= Y_{I_z\rev} \bullet x_Gf_e \quad\text{and}\quad X_z^{\times}= X_{I_z\rev} \bullet x_Gf_e. \] \begin{itemize} \item The $S$-module $\BBh_T(G/B)$ has Bott-Samelson and dual Bott-Samelson bases, $\{\, \xi_z\mid z\in W\,\}$ and $\{\, \xi^z\mid z\in W\,\}$, respectively; the mapping $\Theta\colon \BBh_T(G/B)\to \bfD^*$ is an $S$-algebra isomorphism with \[ \Theta(\xi_z)= Y_z^{\times}, \quad\text{and}\quad \Theta(\xi^z)= Y_z^*; \] and $\bfD^*$ has pairs of dual bases \begin{itemize} \item $\{\, Y_{z}^\times\mid z\in W\,\}$ and $\{\, Y_z^*\mid z\in W\,\}$, and \item $\{\, X_{z}^\times\mid z\in W\,\}$ and $\{\, X_z^*\mid z\in W\,\}$. \end{itemize} \item The description of $\BBh_T(G/P)$ is similar, but less complete. The $S$-module $\BBh_T(G/P)$ has Bott-Samelson and dual Bott-Samelson bases, $\{\, \xi^P_w\mid w\in W^L\,\}$ and $\{\, \xi_P^w\mid w\in W^L\,\}$, respectively; the mapping $\Theta_P\colon \BBh_T(G/P)\to (\bfD^*)^{W_L}$ is an $S$-algebra isomorphism with \[ \Theta_P(\xi^P_w)= Y_P \bullet Y_w^{\times}, \quad\text{and}\quad \Theta_P(\xi_P^w)= Z_w^*; \] and $(\bfD^*)^{W_L}$ has pairs of dual bases \begin{itemize} \item $\{\, Y_P \bullet Y_w^{\times}\mid w\in W^L\,\}$ and $\{\, Z_w^*\mid w\in W^L\,\}$, and \item $\{\, Y_P\bullet X_{w}^\times\mid w\in W^L\,\}$ and $\{\, Y_P \bullet X_w^*\mid w\in W^L\,\}$. \end{itemize} The expansion of $Z_w^*$ in the $Y_z^*$-basis is not well understood in general. Some partial information is given in \cref{cor:zw}. \item The algebraic descriptions of $\BBh_T(G/B)$ and $\BBh_T(G/P)$ are compatible by the commutativity of the upper left square in \cref{ssec:alg}\cref{eq:fda}: $\big(Y_P\bullet (\cdot) \big) \circ \Theta = \Theta_P \circ \pi_*$. \end{itemize} Notice that it follows from the preceding constructions that the $R$-algebra $\BBh_T(\pt)$ and the $\BBh_T(\pt)$-algebra structure of $\BBh_T(G/P)$ depend only on the ring $R$, the formal group law $F$, and the root system $\Phi$, but not on the oriented cohomology theory $\BBh$. \subsection{The formal affine Demazure algebra $\bfD_L$ and $\BBh_T(P/B)$} \label{ssec:DL} Given $R$, note that $S$ depends only on $T$, whereas $Q$, $\bfD$, and $\bfD^*$ depend on the based root system determined by the inclusions $T\subseteq B \subseteq G$. Let $\bfD_L$ be the formal affine Demazure algebra of the based root system determined by $T\subseteq B\cap L \subseteq L$. We identify $\bfD_L$ with the $S$-subalgebra of $\bfD$ with $S$-module bases $\{\, Y_v\mid v\in W_L\,\}$ and $\{\, X_v\mid v\in W_L\,\}$.
When $Y_v$, respectively $X_v$, is considered as an element of $\bfD_L$ it is denoted by $Y_{v,L}$, respectively $X_{v,L}$. Let $i_a\colon \bfD_L\to \bfD$ be the inclusion. Then $i_a(Y_{v,L})= Y_v$ and $i_a(X_{v,L})= X_v$. Hence, taking $S$-duals, $D_L^*$ may be identified with the $S$ submodule of $\bfD^*$ with bases $\{\, Y_v^*\mid v\in W_L\,\}$ and $\{\, X_v^*\mid v\in W_L\,\}$, and the dual map $i_a^*\colon \bfD^* \to \bfD_L^*$ is the projection given by \[ i_a^*(Y_z^*)=\begin{cases} Y_{v, L}^* &\text{if $z=v\in W_L$,} \\ 0 &\text{if $z\notin W_L$,} \end{cases} \quad\text{and}\quad i_a^*(X_z^*)=\begin{cases} X_{v, L}^* &\text{if $z=v\in W_L$,} \\ 0 &\text{if $z\notin W_L$.} \end{cases} \] It is not difficult to check that there is a commutative diagram \begin{equation*} \vcenter{\vbox{ \xymatrix{\BBh_T(G/B) \ar[d]_\Theta^{\cong} \ar[r]^-{i^*} & \BBh_T(P/B) \ar[d]_{\Theta_L}^{\cong} \\ \bfD^* \ar[r]^-{i_a^*} & \bfD_L^* ,} }} \end{equation*} where $\Theta_L \colon \BBh_T(P/B)\to \bfD_L^*$ is the analog of $\Theta$ for the flag variety $P/B \cong L/(B\cap L)$. \subsection{The algebraic Leray-Hirsch homomorphism} \label{ssec:alhh} Define \[ j_a\colon D_L^*\to D^* \quad\text{by} \quad j_a( Y_{v, L}^*)= Y_v^* \quad\text{for $v\in W_L$}, \] so $j_a$ is a right inverse of the dual map $i_a^*$. Abusing notation slightly, also define \[ j_a\colon \BBh_T(P/B) \to \BBh_T(G/B) \quad\text{to be the composition $\Theta\inverse \circ j_a\circ \Theta_L$.} \] Then $j_a$ is a right inverse of the surjection $i^*$. The \emph{algebraic Leray-Hirsch homomorphism} is the composition $\varphi_a =\operatorname{mult} \circ (\pi^*\otimes j_a)$. The constructions in \cref{ssec:alg} and \cref{ssec:DL} piece together to give a commutative diagram \begin{equation*} \label{eq:alhi} \vcenter{\vbox{ \xymatrix{ \BBh_T(G/P) \otimes_{S} \BBh_T(P/B) \ar[rr]^-{\pi^* \otimes j_a} \ar@/^30pt/[rrrr]^{\varphi_a} \ar[d]_{\Theta_P \otimes \Theta_L}^{\cong} && \BBh_T(G/B) \otimes_{S} \BBh_T(G/B) \ar[rr]^-{\operatorname{mult}} \ar[d]_{\Theta\otimes \Theta}^{\cong} && \BBh_T(G/B) \ar[d]_{\Theta}^{\cong} \\ (\bfD^*)^{W_L} \otimes_{S} \bfD_L^* \ar@{^(->}[rr]^{k\otimes j_a}&& \bfD^* \otimes_{S} \bfD^* \ar[rr]^-{\operatorname{mult}} && \bfD^* ,} }} \end{equation*} where $k\colon (\bfD^*)^{W_L} \to D^*$ is the inclusion. For the $S$-basis $\{\, X_w^* \otimes X_{v,L}^* \mid w\in W^L, \, v\in W_L\,\}$ of $(\bfD^*)^{W_L}\otimes \bfD_L^*$, the composition of $\Theta \circ \varphi_a \circ (\Theta_P\otimes \Theta_L)\inverse$ is given by \begin{equation} \label{eq:18} \Theta \circ \varphi_a \circ (\Theta_P\otimes \Theta_L)\inverse (X_w^* \otimes X_{v,L}^* )= X_w^* j_a(X_{v,L}^*) = X_w^* X_v^*. \end{equation} It follows from \cref{ssec:glhh}\cref{eq:19}, \cref{ssec:algdual1}\cref{eq:8}, and \cref{ssec:algdual1}\cref{eq:20} that \[ \Theta \circ \varphi_g \circ (\Theta_P\otimes \Theta_L)\inverse (Z_w^* \otimes Y_{v,L}^* )= Z_w^* j_a(Y_{v,L}^*) = Z_w^* Y_v^* . \] Thus, $\varphi_a \ne \varphi_g$. The $S$-module homomorphism $\varphi_a$ is purely algebraic in the sense that no geometric interpretation of the classes $\Theta\inverse(X_z^*)$ is known. We can now state the main theorem. \begin{theorem} \label{thm:main} Suppose $\BBh_\star(\cdot)$ is an equivariant oriented cohomology theory that satisfies the assumptions in \cref{ssec:asm} and that $T\subseteq B\subseteq P\subseteq G$ are complex algebraic groups as in \cref{ssec:gp}. Then the Leray-Hirsch homomorphisms $\varphi_g$ and $\varphi_a$ are isomorphisms. 
\end{theorem} \section{Proof of \cref{thm:main}} \label{sec:pf} In this section we prove that the Leray-Hirsch homomorphisms $\varphi_g$ and $\varphi_a$ are isomorphisms. Both proofs follow the same basic argument outlined in the next subsection, but the details are more delicate for the geometric Leray-Hirsch homomorphism. We give full details in this case and leave much of the simpler case of the algebraic Leray-Hirsch homomorphism to the reader. \subsection{Outline of the proof}\label{ssec:ol} Considering the geometric Leray-Hirsch homomorphism, define $\psi_g= \Theta\circ \varphi_g \circ (\Theta_P \otimes \Theta_L)\inverse$. Then the diagram \[ \xymatrix{ \BBh_T(G/P) \otimes_{S} \BBh_T(P/B) \ar[rr]^-{\varphi_g} \ar[d]_{\Theta_P \otimes \Theta_L}^{\cong} && \BBh_T(G/B) \ar[d]_{\Theta}^{\cong} \\ (\bfD^*)^{W_L} \otimes_{S} \bfD_L^* \ar[rr]^-{\psi_g}&& \bfD^* } \] commutes. Therefore, to prove that $\varphi_g$ is an isomorphism, it is sufficient to show that $\psi_g$ is an isomorphism. Because the domain and codomain are free $S$-modules of the same rank, to show that $\psi_g$ is an isomorphism it is enough to show that it is surjective. It follows from \cref{ssec:glhh}\cref{eq:19}, \cref{ssec:algdual1}\cref{eq:20}, and \cref{ssec:algdual1}\cref{eq:8} that \[ \psi_g(Z_w^* \otimes Y_{v,L}^*) = Z_w^* Y_v^*, \] and so to show that $\psi_g$ is surjective, it is enough to show that $\{\, Z_w^* Y_v^*\mid w\in W^L,\, v\in W_L\,\}$ spans $\bfD^*$. Let $C$ denote the $S$-valued, $|W| \times |W|$-matrix with entries given by the coefficients of the elements $Z_w^* Y_v^*$, for $w\in W^L$ and $v\in W_L$, in terms of the basis $\{\, Y_z^*\mid z\in W\,\}$. To show that $\{\, Z_w^* Y_v^*\mid w\in W^L,\, v\in W_L\,\}$ spans $\bfD^*$, it is enough to show that the matrix $C$ is invertible. In turn, the matrix $C$ is invertible if its determinant is a unit in $S$. Using the identification of $S$ with a ring of formal power series given by the isomorphism $\tau$ in \cref{ssec:S+}, it suffices to show (1) that $C$ is upper triangular mod $S_+$, in which case every term in the usual expansion of $\det C$ as a sum of products will lie in $S_+$, with the possible exception of the product of the diagonal entries, and (2) that the diagonal entries all lie in $1+S_+$. Therefore, $\det C \in 1+S_+$, and so is a unit in $S$. After some preliminary results, assertions (1) and (2) are proved in \cref{pro:u} and \cref{pro:d}, respectively. An example of the matrix $C$ is given in \cref{ssec:exa1}. \subsection{Example -- type $A_2$} \label{ssec:exa1} Suppose $G=\operatorname{SL}_3(\BBC)$, with simple reflections $s=s_\alpha$ and $t=s_\beta$, and that $P=P_s$. Since the rank of $W$ is equal to $2$, the only element in $W$ that does not have a unique reduced expression is $sts=tst$. We take $I_{sts} =(s,t,s)$. The linear order on $W$ described in \cref{ssec:gp} is \[ e \prec s\prec t\prec ts \prec st \prec sts. \] Recall the elements $u_\gamma= x_\gamma x_{-\gamma}\inverse$ (in $-1+S_+$) defined in \cref{ssec:ualpha}. The coefficients in the expansion of $Z_w^*Y_v^*$ in the basis $\{\, Y_z^*\mid z\in W\,\}$, and hence the entries of the matrix $C$, are given in \cref{tab:1}. In the table, the entries marked $*$ do not contribute to $\det C$. \begin{table}[htb!]
\caption{Entries of the matrix $C$} \centering \renewcommand\arraystretch{1.3} \begin{tabular}{>{$}c<{$}||>{$}c<{$} >{$}c<{$}|>{$}c<{$} >{$}c<{$} |>{$}c<{$} >{$}c<{$}} &Y^*_{e}&Y^*_{s}&Y^*_{t}&Y^*_{ts}&Y^*_{st} &Y^*_{w_0} \\ \hline\hline Z^*_{e} Y^*_e&1&0&*&*&*&* \\ Z^*_{e} Y^*_{s}&0&1&*&*&*&*\\ \hline Z^*_{t} Y^*_{e} &0&0&-u_\beta&0&*&*\\ Z^*_{t} Y^*_{s}&0&0&0&-u_\beta&*&*\\ \hline Z^*_{st} Y^*_{e}&0&0&0&0&u_\alpha u_{\alpha+\beta}&0 \\ Z^*_{st} Y^*_{s}&0 &0&0&0&-x_\alpha u_{\alpha+\beta} &- u_{\alpha+\beta} \end{tabular} \label{tab:1} \end{table} The argument in \cref{pro:d} shows that (in the notation used in the proof) $c_{w,v}^{w,v} +S_+ = (-1)^{\ell(wv)} u_{\gamma_1} \dotsm u_{\gamma_{p+q}} +S_+$. This is consistent with the diagonal entries in the table, but it is not obvious. For example consider the diagonal $(t,s)$-entry, $c^{t,s}_{t,s}$. It is straightforward to show that \[ c^{t,s}_{t,s}= u_\beta u_{\alpha+\beta} \kappa_{\alpha+\beta}- u_\beta x_{\alpha+\beta} = -u_\beta, \quad \text{and so} \quad c^{t,s}_{t,s} +S_+= -u_\beta +S_+= u_\beta u_{\alpha+\beta} +S_+ , \] as predicted by \cref{pro:d}. Also, in this example, each diagonal block of $C$ is lower triangular and is in fact a diagonal matrix modulo $S_+$. Examples in type $A_3$ show that in general the diagonal blocks of $C$ are neither lower triangular nor diagonal modulo $S_+$. Rather, it is shown in \cref{pro:u} that the diagonal blocks of $C$ are upper triangular modulo $S_+$. \subsection{Base change matrices} \label{ssec:bc} We begin with a lemma that quantifies the non-linearity, in $S$, of the mapping that carries $Y_I$ to $Y_I^\times = Y_{I\rev} \bullet x_G f_e$, for a sequence, $I$, of simple reflections. It follows from the definitions that when $Y_z$ is expanded in terms of the basis $\{\,\delta_z\mid z\in W\,\}$ of $Q_W$, the coefficient of $\delta_y$ is equal to zero unless $y\leq z$, and that the coefficient of $\delta_z$ is a unit in $Q$. Let $\{\, f_z\mid z\in W\,\}$ be the basis of $Q_W^*$ that is dual to the basis $\{\, \delta_z\mid z\in W\,\}$. For $y,z\in W$ define $a_{z,y}\in Q$ and $b_{z,y}\in S$ by \begin{equation} \label{eq:25} Y_z= \sum_{y\leq z} a_{z,y} \delta_y \quad \text{and}\quad \delta_z= \sum_{y\leq z} b_{z,y} Y_y . \end{equation} Then by duality we have \[ Y_x^*= \sum_{z\in W} b_{z,x} f_z \quad \text{and} \quad f_x= \sum_{z\in W} a_{z,x} Y_z^*. \] It is shown in \cite[Lem.~7.3] {calmeszainoullinezhong:push} that \begin{equation} \label{eq:14} Y_z^{\times}= \sum_{y\leq z} a_{z,y} y(x_G) f_y,\quad\text{and so} \quad f_z= \sum_{y\leq z} z(x_G)\inverse b_{z,y} \cdot Y_y^{\times}. \end{equation} \begin{lemma}\label{lem:multqs} Suppose $q\in Q$ and $z\in W$. Then \[ qY_{I_z\rev} \bullet x_G f_e = \sum_{\substack{w\\ w\leq z}} \Big( \sum_{\substack{y\\ w\leq y \leq z}} y(q) \cdot a_{z,y} \cdot b_{y,w} \Big) \,Y_w^{\times} . \] \end{lemma} \begin{proof} It is shown in \cite[Cor.~7.2] {calmeszainoullinezhong:push} that $Y_{I_z\rev} = \sum_{y, y\inverse \leq z} y(a_{z,y\inverse} x_G\inverse) \cdot x_G \cdot \delta_{y}$ and it is straightforward to check that $(p\delta_y)\bullet (qf_z)= q\cdot zy\inverse(p) \cdot f_{zy\inverse}$ for $p,q\in Q$ and $y,z\in W$. Thus, \[ qY_{I_z\rev} \bullet x_G f_e = \sum_{\substack{y\\ y\inverse \leq z}} \big(q \cdot y(a_{z,y\inverse} x_G\inverse) \cdot x_G \cdot \delta_{y} \big)\bullet x_G f_e= \sum_{\substack{y\\ y \leq z}} \big(y(q x_G) \cdot a_{z,y} \big) f_{y} . 
\] Then using \cref{ssec:bc}\cref{eq:14} to replace $f_y$ by $\sum_{w\leq y} y(x_G)\inverse b_{y,w} Y_w^\times$ and simplifying gives the result. \end{proof} \subsection{}\label{ssec:eww} The next few lemmas lead to a description of the expansion of $Z_w^*$ with respect to the basis $\{\, Y_z^*\mid z\in W\,\}$ of $\bfD^*$. If $w\in W^L$ and $v\in W_L$, then $Y_P\bullet Y_{wv}^{\times} \in (\bfD^*)^{W_L}$. Define $e_{wv, w'}\in S$ by \begin{equation} \label{eq:10} Y_P\bullet Y_{wv}^{\times}= \sum_{w'\in W^L} e_{wv, w'} \big(Y_P\bullet Y_{w'}^{\times} \big). \end{equation} \begin{lemma}\label{lem:yPyI} Suppose that $I= (s_1, \dots, s_p)$ with $s_1, \dots, s_p \in W_L$. Then \[ Y_P Y_{I\rev}= Y_P\big(Y_I\cdot 1\big). \] \end{lemma} \begin{proof} The proof is by induction on $p$. Suppose $s=s_\alpha$ is a simple reflection in $W$. If $I=\emptyset$, then $Y_PY_I=Y_P=Y_P (Y_\emptyset \cdot 1)$, and if $I=(s)$, then $Y_P Y_s= Y_P \kappa_\alpha= Y_P (Y_s\cdot 1)$, where $\kappa_\alpha=x_{-\alpha}\inverse +x_{\alpha}\inverse$ (see \cref{ssec:ualpha}). More generally, an easy computation shows that if $q\in Q$, then \[ qY_s= Y_ss(q) + \Delta_s(q) \quad \text{and}\quad \kappa_\alpha s(q) +\Delta_s(q) = Y_s q . \] Now by induction, \begin{align*} Y_P Y_{I\rev} &= Y_P \big(Y_{s_2}\dotsm Y_{s_p}\cdot 1 \big) Y_{s_1} \\ &= Y_P \Big(Y_{s_1} s_1\big( Y_{s_2}\dotsm Y_{s_p}\cdot 1 \big) +\Delta_{s_1} \big(Y_{s_2}\dotsm Y_{s_p}\cdot 1 \big) \Big)\\ &= Y_P \Big(\kappa_{s_1} s_1\big( Y_{s_2}\dotsm Y_{s_p}\cdot 1 \big) +\Delta_{s_1} \big(Y_{s_2}\dotsm Y_{s_p}\cdot 1 \big) \Big) = Y_P \big(Y_I\cdot 1\big). \end{align*} \end{proof} \begin{lemma}\label{lem:ypywv} Suppose $w\in W^L$ and $v\in W_L$. Then \[ e_{wv,w'}=0 \text{ unless $w\geq w'$} \quad\text{and}\quad e_{wv, w}= w(Y_v\cdot 1). \] Thus, \[ Y_P\bullet Y_{wv}^{\times} = w(Y_v\cdot 1) \big(Y_P \bullet Y_w^\times \big)+ \sum_{\substack{w'\in W^L\\ w'< w}} e_{wv, w'} \big(Y_P\bullet Y_{w'}^{\times} \big). \] \end{lemma} \begin{proof} The conclusions of the lemma follow from the definition of $Y_{wv}^\times$, \cref{lem:yPyI}, and \cref{lem:multqs}: \begin{multline*} Y_P\bullet Y_{wv}^\times= Y_P \bullet Y_{I_{wv}\rev} \bullet x_G f_e =Y_P\bullet (Y_{v}\cdot1) Y_{I_w\rev} \bullet x_G f_e \\ = w(Y_v\cdot 1) \big(Y_P \bullet Y_w^\times \big)+ \sum_{\substack{z\in W\\z<w}} \Big( \sum_{\substack{y\in W\\ z\leq y\leq w}} y(Y_v\cdot 1) a_{w,y}b_{y,z} \Big) \big(Y_P \bullet Y_z^\times \big) . \end{multline*} \end{proof} The next corollary follows formally from \cref{lem:ypywv}, the fact that $Y_P\bullet(\cdot)$ is adjoint to the inclusion of $(\bfD^*)^{W_L}$ in $\bfD^*$, and duality, because $\{\, Z_w^* \mid w\in W^L\,\}$ and $\{\, Y_P\bullet Y_w^\times \mid w\in W^L\,\}$ are dual bases of $(\bfD^*)^{W_L}$. \begin{corollary}\label{cor:zw} Suppose $w\in W^L$. Then \[ Z_w^* = \sum_{v\in W_L} w(Y_v\cdot 1) Y_{wv}^* + \sum_{\substack{w'v'\in W\\ w'>w}} e_{w'v', w} Y_{w'v'}^*. \] \end{corollary} \subsection{Structure constants and the matrix $C$} \label{ssec:scme} Define structure constants $p_{u,v}^w\in S$ by \[ Y_u^* Y_v^* = \sum_w p_{u,v}^w Y_w^*. \] An explicit formula for $p_{u,v}^w$, to be described below, is given in \cite[Thm.~4.1] {goldinzhong:structure}. One consequence of the formula is that if $p_{u,v}^w \ne 0$, then $u\leq w$ and $v\leq w$. 
Using \cref{ssec:eww}\cref{eq:10} and duality we can write \[ Z_w^*Y_v^*= \Big(\sum_{w'\in W^L} \sum_{v'\in W_L} e_{w'v', w} \,Y_{w'v'}^* \Big) Y_v^* = \sum_{w''\in W^L} \sum_{v''\in W_L} \Big( \sum_{w'\in W^L} \sum_{v'\in W_L} e_{w'v', w} p_{w'v', v}^{w''v''} \Big) Y_{w''v''}^*. \] Define $c^{w'',v''}_{w,v}$ to be the coefficient of $Y_{w''v''}^*$ in $Z_w^* Y_v^*$, so \[ c^{w'',v''}_{w,v} = \sum_{w'\in W^L} \sum_{v'\in W_L} e_{w'v', w} p_{w'v', v}^{w''v''}. \] Using the linear order on $W$ from \cref{ssec:gp}, define $C$ to be the $|W| \times |W|$ matrix whose $(wv, w''v'')$ entry is $c^{w'',v''}_{w,v}$. Similarly, for $w, w''\in W^L$, let $C^{w,w''}$ be the $|W_L| \times |W_L|$ matrix with entries in $S$ whose $(v, v'')$ entry is $c^{w'',v''}_{w,v}$. By definition, $C$ is a block matrix with blocks $C^{w,w''}$ for $w, w''\in W^L$. By \cref{lem:ypywv}, $e_{w'v', w}=0$ unless $w'\geq w$, and, as noted above, $p_{w'v',v}^{w''v''} =0$ unless $w''v''\geq w'v'$, so $e_{w'v', w} p_{w'v', v}^{w''v''}=0$ unless $w''\geq w'\geq w$. Therefore $c^{w'',v''}_{w,v}=0$ unless $w''\geq w$, and so $C^{w,w''}=0$ unless $w'' \geq w$. Hence $C$ is an upper triangular block matrix with diagonal blocks $C^{w,w}$. It follows from \cref{cor:zw} that the $(v, v'')$-entry of $C^{w,w}$ is \begin{equation*} \label{eq:16} c^{w,v''}_{w,v} = \sum_{v'\in W_L} w(Y_{v'}\cdot 1)\, p_{wv', v}^{wv''} . \end{equation*} \subsection{Explicit formulas for $p^{wv''}_{wv',v}$ and $c^{w,v''}_{w,v}$} \label{ssec:sc} Suppose $w\in W^L$ and $v,v',v''\in W_L$. Let $I_w=(s_1, \dots, s_p)$ and let $I_{v''}=(s_{p+1}, \dots, s_{p+q})$. For $1\leq j\leq p+q$ set $s_j=s_{\alpha_j}$. By assumption, $I_{wv''}= I_w \sqcup I_{v''}$ is the concatenation of $I_w$ and $I_{v''}$. \begin{itemize} \item For subsequences $E$ and $F$ of $I_{wv''}$ and $1\leq j\leq p+q$, define $B_j^{E,F}\in \bfD$ by \begin{equation} \label{eq:11} B_{j}^{E,F}= \begin{cases} x_{\alpha_j} \delta_{s_j}&\text{if $s_j\in E\sqcap F$,}\\ -u_{\alpha_j}\delta_{s_j}&\text{if $s_j\in (E\setminus F) \sqcup (F\setminus E)$,}\\ x_{-\alpha_j} \inverse + \big(x_{\alpha_j} x_{-\alpha_j}^{-2} \big) \delta_{s_j}&\text{if $s_j\notin E\sqcup F$.} \end{cases} \end{equation} \item For a sequence, $I$, of simple reflections and $z\in W$, define $b_{I,z}$ to be the coefficient of $Y_z$ in the expansion of $Y_I$, so $Y_I=\sum_{z\in W} b_{I,z}Y_z$. \end{itemize} With this notation, Goldin and Zhong \cite[Thm.~4.1] {goldinzhong:structure} prove that \[ p_{wv',v}^{wv''} = \sum_{E,F \sqsubseteq I_{wv''}} \big(B_{1}^{E,F} \dotsm B_{p+q}^{E,F} \cdot 1 \big) b_{E,wv'} b_{F,v}. \] Therefore, \begin{equation} \label{eq:41} c^{w,v''}_{w,v} = \sum_{v'\in W_L} \sum_{E,F \sqsubseteq I_{wv''}} w\big( Y_{v'}\cdot 1 \big)\big( B^{E,F}_1 \dotsm B^{E,F}_{p+q} \cdot 1 \big) b_{E,wv'} b_{F, v}. \end{equation} \subsection{} Recall that $W$ is the group generated by the simple reflections subject to the braid relations and the relations $s^2=e$ for each simple reflection $s$. Let $\Wtilde$ be the semigroup generated by the simple reflections in $W$ subject to the braid relations and the relations $s^2=s$ for each simple reflection $s$. As sets $W=\Wtilde$, but obviously the semigroup operation, say $*$, in $\Wtilde$ is not equal to the group operation in $W$. For a sequence $I=(s_1, \dots, s_p)$ of simple reflections in $W$ define \[ \wtilde(I) =s_1*\dotsm *s_p\in \Wtilde \] and consider $\wtilde(I)$ as an element in $W$.
For example, if $s_1\ne s_2$ are simple reflections, then $\wtilde(s_1, s_2, s_2)= s_1s_2$, and if $I$ is a reduced expression of $z\in W$, then $\wtilde(I) =z$. In general, there is a subsequence $\widetilde I=(s_{i_1}, \dots, s_{i_k}) \sqsubseteq I$ such that $\wtilde(I)=s_{i_1} \dotsm s_{i_k}$. \begin{lemma}\label{lem:ulev} Suppose $y, z\in W$ and $I$ is a sequence of simple reflections. \begin{enumerate} \item If $b_{I,y} \ne 0$, then $y\leq \wtilde(I)$. \label{it:u1} \item If $I\sqsubseteq I_z$ and $b_{I,y} \ne 0$, then $y\leq z$. \label{it:u2} \end{enumerate} \end{lemma} \begin{proof} The first assertion is proved in \cite[Lem.~3.2]{goldinzhong:structure}. The second assertion follows by first writing $Y_I$ as a $Q$-linear combination of $\{\, \delta_u\mid u\in W\,\}$, then writing each $\delta_u$ as a linear combination of $\{\, Y_t\mid t\in W\,\}$, and then observing that \begin{itemize} \item if $\delta_u$ appears in the expansion of $Y_I$, then $u\leq z$, by the subword property of the Bruhat order, and \item if $Y_y$ appears in the expansion of $\delta_u$, then $y\leq u$, by \cref{ssec:bc}\cref{eq:25}. \end{itemize} \end{proof} \begin{proposition}\label{pro:u} For $w\in W^L$, the matrix $C^{w,w}$ is upper triangular modulo $S_+$. \end{proposition} \begin{proof} Suppose $w\in W^L$, $v,v''\in W_L$, and consider the formula for $c^{w,v''}_{w,v}$ in \cref{ssec:sc}\cref{eq:41}. We first show that if $v\not \leq v''$, then $c^{w,v''}_{w,v} \in S_+$. Indeed, if $v'\in W_L$, $E,F\sqsubseteq I_{wv''}$, and \[ w\big( Y_{v'}\cdot 1 \big) \big( B^{E,F}_1 \dotsm B^{E,F}_{p+q} \cdot 1 \big) b_{E,wv'} b_{F, v} \ne 0, \] then $b_{E,wv'} \ne 0$ and $b_{F, v}\ne 0$. If $I_w\not\sqsubseteq E$ and $b_{E, w'v'} \ne 0$, then it must be that $w'<w$ because $w\in W^L$ and $v\in W_L$. Therefore, $I_w\subseteq E$. By \cref{lem:ulev}\cref{it:u1}, $v\leq \wtilde (F)$. If in addition, $I_w \sqcap F=\emptyset$, then $F\sqsubseteq I_{v''}$ and so it follows from \cref{lem:ulev}\cref{it:u2} that $v\leq v''$. By assumption $v\not \leq v''$, so it must be that $I_w \sqcap F\ne \emptyset$. But then it follows from \cref{ssec:sc}\cref{eq:11} that $\big( B^{E,F}_1 \dotsm B^{E,F}_{p+q} \cdot 1 \big) \in S_+$. Therefore, $\big( B^{E,F}_1 \dotsm B^{E,F}_{p+q} \cdot 1 \big) \in S_+$ for all $E$ and $F$, and so $c^{w,v''}_{w,v} \in S_+$. The contrapositive of the assertion in the preceding paragraph is that if $c^{w,v''}_{w,v} \notin S_+$, then $v\leq v''$. Therefore, if $c^{w,v''}_{w,v} \notin S_+$, then $v'' \not \prec v$ in the linear order on $W_L$. Equivalently, if $v''\prec v$, then $c^{w,v''}_{w,v} \in S_+$, and so every entry of $C^{w,w}$ below the diagonal lies in $S_+$. \end{proof} \begin{proposition}\label{pro:d} Suppose $w\in W^L$ and $v\in W_L$. Then $c^{w,v}_{w,v} \in 1+ S_+$. \end{proposition} \begin{proof} By \cref{ssec:sc}\cref{eq:41}, \[ c^{w,v}_{w,v} = \sum_{v'\in W_L} \sum_{E,F \sqsubseteq I_{wv}} w\big( Y_{v'}\cdot 1 \big)\big( B^{E,F}_1 \dotsm B^{E,F}_{p+q} \cdot 1 \big) b_{E,wv'} b_{F, v}. \] Define \[ \gamma_1= \alpha_1, \quad \gamma_2= s_1(\alpha_2), \quad \dots\quad \gamma_{p+q}= s_1\dotsm s_{p+q-1} (\alpha_{p+q}). \] We show that $w\big( Y_{v'}\cdot 1 \big) \big( B^{E,F}_1 \dotsm B^{E,F}_{p+q} \cdot 1 \big) b_{E,wv'} b_{F, v} \in S_+$, unless $v'=e$, $E=I_w$, and $F=I_v$, and that $w\big( Y_{e}\cdot 1 \big) \big( B^{I_w,I_v}_1 \dotsm B^{I_w,I_v}_{p+q} \cdot 1 \big) b_{I_w,w} b_{I_v, v} = (-1)^{\ell(wv)} u_{\gamma_1} \dotsm u_{\gamma_{p+q}}$. 
Consequently, \[ c^{w,v}_{w,v}\in (-1)^{\ell(wv)} u_{\gamma_1} \dotsm u_{\gamma_{p+q}} +S_+ \subseteq 1+ S_+ , \] because each $u_{\gamma_j}\in -1+S_+$. To simplify the notation a little bit, define \[ S(v',E,F)= w\big( Y_{v'}\cdot 1 \big) \big( B^{E,F}_1 \dotsm B^{E,F}_{p+q} \cdot 1 \big) b_{E,wv'} b_{F, v} \] for $v'\in W_L$ and $E, F \sqsubseteq I_{wv}$. Suppose that $S(v',E,F) \ne 0$. \begin{enumerate} \item If $F\sqcap I_w \ne \emptyset$, then it follows from \cref{ssec:sc}\cref{eq:11} that $B^{E,F}_1 \dotsm B^{E,F}_{p+q} \cdot 1 \in S_+$, and so $S(v', E, F)\in S_+$. \label{it:1} \item If $F\sqcap I_w=\emptyset$, then $F\sqsubseteq I_v$, and so $|F| \leq |I_v|$. On the other hand, $b_{F, v} \ne 0$, and so $v\leq \wtilde(F)$ by \cref{lem:ulev}\cref{it:u1}. But then $\ell(v)\leq \ell(\wtilde(F)) \leq |F|$, and so $|I_v|\leq |F|$. Thus, $F=I_v$. \label{it:2} \item \label{it:3} If $F=I_v$ and $E\sqcap I_v \ne \emptyset$, then it follows from \cref{ssec:sc}\cref{eq:11} that $B^{E,I_v}_1 \dotsm B^{E,I_v}_{p+q} \cdot 1 \in S_+$, and so $S(v',E, I_v)\in S_+$. \item Finally, if $F=I_v$ and $E\sqcap I_v =\emptyset$, then $E\sqsubseteq I_w$. Since $b_{E, wv'}\ne0$, \cref{lem:ulev}\cref{it:u2} gives $wv'\leq w$, and because $\ell(wv')=\ell(w)+\ell(v')$ this forces $v'=e$. Then $b_{E,w}\ne 0$, so $w\leq \wtilde(E)$ by \cref{lem:ulev}\cref{it:u1}; since $\ell(\wtilde(E))\leq |E|\leq |I_w|=\ell(w)$, this forces $|E|=|I_w|$, and hence $E=I_w$. \label{it:4} \end{enumerate} It follows from \cref{it:1}, \cref{it:2}, \cref{it:3}, and \cref{it:4} that $S(v',E,F)\in S_+$, unless $F=I_v$, $E=I_w$, and $v'=e$. Finally, it follows from \cref{ssec:sc}\cref{eq:11} that $B^{I_w,I_v}_1 \dotsm B^{I_w,I_v}_{p+q} \cdot 1 = (-1)^{\ell(wv)} u_{\gamma_1} \dotsm u_{\gamma_{p+q}}$, and clearly $\big( Y_{e}\cdot 1 \big) b_{I_w,w} b_{I_v, v} =1$, so $S(e,I_w,I_v) = (-1)^{\ell(wv)} u_{\gamma_1} \dotsm u_{\gamma_{p+q}}$, as claimed. \end{proof} \subsection{The algebraic Leray-Hirsch isomorphism} The proof that the algebraic Leray-Hirsch homomorphism is an isomorphism follows the same general strategy as the preceding proof, but with the following modifications and simplifications. First, following the argument in \cref{ssec:ol}, and using \cref{ssec:alhh}\cref{eq:18} in place of \cref{ssec:glhh}\cref{eq:19}, one sees that it is sufficient to show that the set $\{\, X_w^* X_v^* \mid w\in W^L, v\in W_L\,\}$ is an $S$-basis of $\bfD^*$. Next, define structure constants $p_{u,v}^w\in S$ by \[ X_u^* X_v^* = \sum_{w\in W} p_{u,v}^w X_w^* \] and define $C$ to be the $|W| \times |W|$ matrix whose $(wv, w''v'')$ entry is $p^{w'',v''}_{w,v}$. For $w, w''\in W^L$, $C^{w,w''}$ is the $|W_L| \times |W_L|$ matrix with entries in $S$ whose $(v, v'')$ entry is $p^{w'',v''}_{w,v}$. It is shown in \cite[Thm.~4.1] {goldinzhong:structure} that if $p^{w'',v''}_{w,v}\ne 0$, then $w\leq w''$. Thus, $C$ is an upper triangular block matrix with diagonal blocks $C^{w,w}$. Finally, the structure constants $p^{w,v''}_{w,v}$ can be computed using the explicit formula in \cite[Thm.~4.1] {goldinzhong:structure}. Arguments similar to those in the proofs of \cref{pro:u} and \cref{pro:d} then show that each $C^{w,w}$ is upper triangular modulo $S_+$ and that the diagonal entries $p^{w,v}_{w,v}$ of $C^{w,w}$ lie in $1+S_+$. Further details are left to the reader. \subsection{Example -- type $A_2$} \label{ssec:exa2} Continuing the example in \cref{ssec:exa1}, the entries of the matrix $C$ for the algebraic Leray-Hirsch homomorphism $\varphi_a$ are given in \cref{tab:2}. \begin{table}[htb!]
\caption{Entries of the matrix $C$ (for $\varphi_a$)} \centering \renewcommand\arraystretch{1.3} \begin{tabular}{>{$}c<{$}||>{$}c<{$} >{$}c<{$}|>{$}c<{$} >{$}c<{$} |>{$}c<{$} >{$}c<{$}} &X^*_{e}&X^*_{s}&X^*_{t}&X^*_{ts}&X^*_{st} &X^*_{w_0} \\ \hline\hline X^*_{e} X^*_e&1&0&0&0&0&0 \\ X^*_{e} X^*_{s}&0&1&0&0&0&0\\ \hline X^*_{t} X^*_{e} &0&0&1&0&0&0\\ X^*_{t} X^*_{s}&0&0&0&1&1&-\kappa_\alpha\\ \hline X^*_{st} X^*_{e}&0&0&0&0&1&0 \\ X^*_{st} X^*_{s}&0&0&0&0&x_\alpha&-u_\alpha \end{tabular} \label{tab:2} \end{table} Again, each diagonal block is lower triangular and is a diagonal matrix modulo $S_+$, but higher rank examples show that in general the diagonal blocks of $C$ are neither lower triangular nor diagonal modulo $S_+$. \section{Applications} Throughout this section $\varphi\colon \BBh_T(G/P) \otimes_{\BBh_T(\pt)} \BBh_T(P/B) \to \BBh_T(G/B)$ is a Leray-Hirsch homomorphism. \subsection{Leray-Hirsch isomorphisms in (non-equivariant) oriented cohomology} Let $H$ be a subgroup of $T$. Applying the base change functor $\BBh_H(\pt) \otimes_{\BBh_T(\pt)} (\cdot)$ to $\varphi$, and composing with the natural isomorphism \begin{multline*} \BBh_H(\pt)\otimes_{\BBh_T(\pt)} \Big( \BBh_T(G/P) \otimes_{\BBh_T(\pt)} \BBh_T(P/B)\Big) \\ \cong \Big(\BBh_H(\pt)\otimes_{\BBh_T(\pt)} \BBh_T(G/P) \Big) \otimes_{\BBh_H(\pt)} \Big(\BBh_H(\pt)\otimes_{\BBh_T(\pt)} \BBh_T(P/B) \Big) , \end{multline*} we obtain an $\BBh_H(\pt)$-algebra homomorphism \begin{equation*} \label{eq:1} \Big(\BBh_H(\pt)\otimes_{\BBh_T(\pt)} \BBh_T(G/P) \Big) \otimes_{\BBh_H(\pt)} \Big(\BBh(\pt)\otimes_{\BBh_T(\pt)} \BBh_T(P/B) \Big) \rightarrow \BBh_H(\pt)\otimes_{\BBh_T(\pt)} \BBh_T(G/B). \end{equation*} It is shown in \cite[Lem.~11.1] {calmeszainoullinezhong:equivariant} that the natural homomorphism \[ \BBh_H(\pt)\otimes_{\BBh_T(\pt)}\BBh_T(G/P) \to \BBh_H(G/P) \] induced by the restriction functor $\BBh_T(\cdot) \to \BBh_H(\cdot)$ is an isomorphism. Therefore, the base change functor $\BBh_H(\pt) \otimes_{\BBh_T(\pt)} (\cdot)$ applied to the Leray-Hirsch homomorphism $\varphi$ (in $T$-equivariant cohomology) induces a Leray-Hirsch homomorphism \[ \varphi_H\colon \BBh_H(G/P) \otimes_{\BBh_H(\pt)} \BBh_H(P/B) \xrightarrow{} \BBh_H(G/B) \] in $H$-equivariant cohomology, and if $\varphi$ is an isomorphism, then so is $\varphi_H$. Taking $H$ to be the trivial subgroup of $T$, a Leray-Hirsch isomorphism in $T$-equivariant cohomology induces a Leray-Hirsch isomorphism \[ \BBh(G/P) \otimes_{\BBh(\pt)} \BBh(P/B) \xrightarrow{\,\cong \,} \BBh(G/B) \] in oriented cohomology. \subsection{Modules and characters} The projection $\pi\colon G\to G/P$ induces an $S$-algebra homomorphism $\pi^*\colon \BBh_T(G/P)\to \BBh_T(G/B)$, which may be identified with the inclusion of $S$-algebras $(\bfD^*)^{W_L} \subseteq \bfD^*$. Thus, the following corollary is an immediate consequence of \cref{thm:main}. \begin{corollary} The ring $\bfD^*$ is a free $(\bfD^*)^{W_L}$-module with basis $\{\, X_v^*\mid v\in W_L \,\}$ and the ring $\BBh_T(G/B)$ is a free $\BBh_T(G/P)$-module with basis $\{\, \xi_L^v \mid v\in W_L\, \}$. \end{corollary} \subsection{} Now recall the $\bullet$-action of $W$ on $\bfD^*$ from \cref{ssec:alg}\cref{it:alg3} and \cref{ssec:alg}\cref{it:alg5}. By transport of structure via the isomorphism $\Theta$, the $W$ action on $\bfD^*$ determines an action of $W$ on $\BBh_T(G/B)$ by $S$-algebra automorphisms. If $y,z\in W$, then $y(f_z)= \delta_y \bullet f_z= f_{zy\inverse}$. 
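Indeed, unwinding the definition of the $\bullet$-action, for $x\in W$ we have \[ (\delta_y\bullet f_z)(\delta_x)= f_z(\delta_x\delta_y)= f_z(\delta_{xy}), \] which is non-zero precisely when $x=zy\inverse$.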
It follows that the $\bullet$-action of $W$ on $Q_W^*$ induces the regular representation. Because $W$ acts on $\bfD^*$ as $S$-algebra automorphisms, if $w\in W^L$ and $v, v'\in W_L$, then $X_w^*\in (\bfD^*)^{W_L}$ and so $v'( X_w^* X_v^*) = X_w^* \cdot v'(X_v^*)$. Thus, the following corollary, which generalizes \cite[Thm.~4.7]{drellichtymoczko:module}, is an immediate consequence of \cref{thm:main}. \begin{corollary} Let $\chi_L$ be the character of the $\bullet$-action of $W_L$ on $\BBh_T(G/B)$, and let $\chi$ be the character of the $\bullet$-action of $W_L$ on $\BBh_T(P/B)$, then $\chi_L=|W^L|\, \chi$. \end{corollary} \subsection{Alternate formulations: The Borel model and the $\bullet$ and $\odot$ $W$-actions} Finally, we give an alternate formulation of the Leray-Hirsch isomorphisms using the so-called Borel model of $\BBh_T(G/B)$ and the $\bullet$- and $\odot$-actions of $W$ on $\BBh_T(G/B)$. In equivariant $K$-theory there are two maps from $K_T(\pt)$, which may be identified with the representation ring of $T$, to $K_T(G/B)$. One is induced from the canonical map $a_{G/B}\colon G/B \to \pt$ and the other, called the \emph{characteristic map}, and denoted by $c$, is given by the rule that maps a character $\lambda$ of $T$ to the isomorphism class of the $T$-equivariant line bundle on $G/B$ on which $T$ acts on the fibre over $B$ as $\lambda$. These two maps induce an isomorphism $K_T(\pt) \otimes_{K_T(\pt)^W} K_T(\pt) \xrightarrow{\,\cong\,} K_T(G/B)$ given by $p\otimes q\mapsto a_{G/B}^*(p) \cdot c(q)$ From now on, in addition to the standing assumptions in \cref{ssec:asm} we also assume that the torsion primes of $\Phi$ are invertible in $R$. A generalization of the characteristic map, \[ \ch_g\colon \BBh_T(\pt) \to \BBh_T(G/B), \] is defined in \cite[\S10] {calmeszainoullinezhong:equivariant} for any equivariant oriented cohomology theory and also called the \emph{characteristic map}. Set \[ \ch_a= \Theta\circ \ch_g, \quad\text{so}\quad \ch_a\colon S\to \bfD^*. \] It is shown in \cite[Lem.~10.1] {calmeszainoullinezhong:equivariant} that the composition $\ch_a(p)= p\bullet 1$. One checks that $\ch_a(p)= \sum_{w\in W} w(p) f_w$ and that $\ch_a$ is $W$-equivariant with respect to the natural action of $W$ on $S$ and the $\bullet$-action of $W$ on $\bfD^*$. It is shown in \cite[Thm.~11.4] {calmeszainoullinezhong:coproduct} that the map \[ \rho\colon S\otimes_{S^W}S \to \bfD^*\quad \text{with} \quad \rho(p\otimes q)= p \cdot\ch_a(q) \] is an $S$-algebra isomorphism, where $S$-module structure on $S\otimes_{S^W}S$ is given by the action of $S$ on the left-hand factor. This is the \emph{Borel model} of $\bfD^*$. \subsection{}\label{ssec:eq2} The group $W$ acts independently on each factor of the $S$-algebra $S\otimes_{S^W}S$. It is shown in \cite[Lem~3.7] {lenartzainoullinezhong:parabolic} that the isomorphism $\rho$ intertwines the action of $W$ on the right-hand factor with the $\bullet$-action of $W$ on $\bfD^*$, and the action of $W$ on the left-hand factor with an action of $W$ on $\bfD^*$, which is denoted by $\odot$. In other words, for $z\in W$, the diagrams \[ \vcenter{\vbox{\xymatrix{S\otimes _{S^W}S\ar[r]^-\rho \ar[d]_-{\id\otimes z} & \bfD^*\ar[d]^-{z\bullet(\cdot)}\\ S\otimes_{S^W}S\ar[r]^-\rho & \bfD^*}}} \quad\text{and}\quad \vcenter{\vbox{\xymatrix{S\otimes _{S^W}S\ar[r]^-\rho \ar[d]_-{z\otimes \id} & \bfD^*\ar[d]^-{z\odot (\cdot)}\\ S\otimes_{S^W}S\ar[r]^-\rho & \bfD^*}}} \] commute. 
By transport of structure via the isomorphism $\Theta$, the $\bullet$- and $\odot$-actions of $W$ on $\bfD^*$ define commuting actions of $W$ on $\BBh_T(G/B)$, also denoted by $\bullet$ and $\odot$, respectively. It is shown in \cite[\S9] {calmeszainoullinezhong:equivariant} that the $\bullet$-action of $W$ on $\BBh_T(G/B)$ coincides with the action arising from the right action of $W$ on $G/T$. Roughly speaking, the $\odot$-action of $W$ on $\BBh_T(G/B)$ arises from the action of $W$ on the Picard group of $G/B$. In order to distinguish the $W$-actions, let $(\bfD^*)^{(W_L, \bullet)}$ and $(\bfD^*)^{(W_L, \odot)}$ denote the $W_L$-invariants with respect to the $\bullet$-action and the $\odot$-action, respectively. Similar notation will be used for $\BBh_T(G/B)$. For example, \begin{equation} \label{eq:2} \BBh_T(G/P) \cong (\bfD^*)^{(W_L, \bullet)} \cong \BBh_T(G/B)^{(W_L, \bullet)} , \end{equation} and a Leray-Hirsch homomorphism is a map \[ \varphi\colon \BBh_T(G/B)^{(W_L, \bullet)} \otimes_{\BBh_T(\pt)} \BBh_T(P/B)\to \BBh_T(G/B). \] We can use the $\odot$ action to make the domain of $\varphi$ uniformly symmetric with respect to taking $W_L$-invariants. \subsection{}\label{ssec:eq4} Replacing $W$ by $W_L$, there is a characteristic map $\ch_a^L\colon S\to \bfD^*_L$ and an $S$-algebra isomorphism $\rho^L\colon S\otimes_{S^{W_L}}S \to \bfD^*_L$. Consider the chain of isomorphisms \[ S\cong S^{W_L}\otimes _{S^{W_L}}S\cong (S\otimes_{S^{W_L}}S)^{(W_L, \odot)}\xrightarrow[\cong]{\rho^L} (\bfD^*_L)^{(W_L, \odot)} . \] It is straightforward to check that the composition is given by the characteristic map for $W_L$, namely $p\mapsto \rho^L(1\otimes p)=\ch_a^L(p) \in \bfD^*_L$. Thus, identifying $L/(L\cap B)$ with $P/B$ as above, there are isomorphisms \begin{equation} \label{eq:4} \BBh_T(P/B)^{(W_L, \odot)} \cong (\bfD^*_L)^{(W_L, \odot)} \cong S . \end{equation} \begin{corollary} If $j\colon \BBh_T(P/B) \to \BBh_T(G/B)$ is a right inverse to $i^*\colon \BBh_T(G/B) \to \BBh_T(P/B)$ and $\varphi$ is the resulting Leray-Hirsch homomorphism, then $\varphi$ may be identified with the homomorphism \[ \BBh_T(G/B)^{(W_L, \bullet)}\otimes_{\BBh_T(\pt)^{W_L}} \BBh_T(P/B)^{(W_L, \odot)} \to \BBh_T(G/B) \] given by the composition of the multiplication map in $\BBh_T(G/B)$ with $\id\otimes j$. In particular, the Leray-Hirsch isomorphisms $\varphi_g$ and $\varphi_a$ each induce an isomorphism \[ \BBh_T(G/B)^{(W_L, \bullet)}\otimes_{\BBh_T(\pt)^{W_L}} \BBh_T(P/B)^{(W_L, \odot)} \xrightarrow{\, \cong\,} \BBh_T(G/B). \] \end{corollary} \begin{proof} It is straightforward to check that if $k\colon (D^*_L)^{(W_L, \odot)} \to D^*_L$ denotes the inclusion, then \[ \id\otimes k\colon (D^*)^{(W_L, \bullet)} \otimes_{S^{W_L}} (D^*_L)^{(W_L, \odot)} \to (D^*)^{(W_L, \bullet)} \otimes_S D_L^* \] is an isomorphism and the diagram \begin{equation} \label{eq:5} \vcenter{\vbox{ \xymatrix{(D^*)^{(W_L, \bullet)} \otimes_{S^{W_L}} (D^*_L)^{(W_L, \odot)} \ar[rr]^-{\id\otimes k}_-{\cong} \ar[dr]_{\mult \circ (\id \otimes j)} && (D^*)^{W_L} \otimes_{S} D^*_L \ar[dl]^{\varphi} \\ &D^*&}}} \end{equation} commutes. The proof follows from \cref{eq:5}, using \cref{ssec:eq2}\cref{eq:2}, \cref{ssec:eq4}\cref{eq:4}, and the maps $\Theta$, $\Theta_L$, and $\Theta_P$. \end{proof} \end{document}
\begin{document} \title{Global Bifurcation Diagram for the Kerner-Konh\"auser Traffic Flow Model} \textbf{Keywords:} Continuous traffic flow. Traveling waves. Bautin bifurcation. Degenerate Takens--Bogdanov bifurcation. \begin{abstract} We study traveling wave solutions of the Kerner--Konh\"auser PDE for traffic flow. By a standard change of variables, the problem is reduced to a dynamical system in the plane with three parameters. In a previous paper \cite{Ca1} it was shown that, under general hypotheses on the fundamental diagram, the dynamical system has a surface of critical points showing either a fold or a cusp catastrophe when projected onto the two-dimensional plane of parameters $q_g$--$v_g$. In either case a one-parameter family of Bogdanov--Takens (BT) bifurcations takes place, and therefore local families of Hopf and homoclinic bifurcations arise from each BT point. Here we prove the existence of a degenerate Bogdanov--Takens bifurcation (DBT), which in turn implies the existence of Generalized Hopf or Bautin bifurcations (GH). We describe numerically the global bifurcation curves continued from the local ones inside a cuspidal region of the parameter space. In particular, we compute the first Lyapunov coefficient and compare it with the GH bifurcation curve. We present some families of stable limit cycles which, taken as initial conditions for the PDE, lead to stable traveling waves. \end{abstract} \section{Introduction} Macroscopic traffic models are posed in analogy with continuous one-dimensional compressible flow. Second-order models consist of a system of two coupled equations involving the density $\rho(x,t)$ and the average velocity $V(x,t)$. In the Kerner--Konh\"auser model these variables are related through the continuity and momentum equations \begin{eqnarray} &&\frac{\partial \rho}{\partial t}+\frac{\partial \rho V}{\partial x}=0, \label{continuity}\\ && \rho\left(\pder{V}{t}+V\pder{V}{x}\right)= -\pder{P}{x}+\frac{\rho (V_e(\rho)-V)}{\tau}. \label{balance} \end{eqnarray} Here, in analogy with compressible fluids, the rate of change of momentum in (\ref{balance}) is due to a decreasing gradient of the ``pressure'' $P$. The bulk forces are modeled as a tendency to acquire a safe velocity $V_e(\rho)$. The constant $\tau$ is a relaxation time. The model can be closed by a constitutive equation of the form $$ P=\rho\Theta-\eta\pder{V}{x}, $$ where $\Theta(x,t)$ is the traffic ``variance'' and $\eta$ is the analogue of the viscosity. Here and in what follows we take $\Theta(x,t)=\Theta_0$ and $\eta=\eta_0$ to be positive constants. See \cite{KK0} for details. The fundamental diagram is the relationship $V=V_e(\rho)$ between the average velocity and the traffic density. Although empirical data show that even the existence of such a functional relationship can be questioned~\cite{KSS}, we take the point of view that it yields a first approximation, obtained by assuming homogeneous solutions in which the density and the average velocity remain constant and are related through the fundamental diagram. Next in complexity are traveling wave solutions. Under the change of variables $\xi=x+V_g t$, system (\ref{continuity})--(\ref{balance}) is transformed into a system of ordinary differential equations. In the process of integrating the continuity equation (\ref{continuity}) there appears a constant $Q_g$ with the dimensions of a flux. In this paper $\Theta_0$, $Q_g$, and $V_g$ are considered the main parameters of the study.
The first has a dynamical character, being the proportionality factor between density and pressure; $-V_g$ is the velocity of the traveling wave; and $Q_g$ is the net flux measured by an observer moving with the same velocity as the wave \cite{Saa-Ve}. The main motivation for this research is to analyze whether bounded solutions of the dynamical system can give valuable information about the system of PDEs (\ref{continuity})--(\ref{balance}) for different boundary conditions: periodic for a finite domain, or bounded for an infinite domain. Other authors, such as Lee, Lee and Kim \cite{Lee}, have worked with the dynamical system, relating its solutions in a qualitative way to solutions of the PDE. As far as we know, this is the first time in this context that the machinery of dynamical systems is applied in order to carry out a rigorous analysis of the global bifurcation diagram and to establish a relation between what is observed in the dynamical system and the solutions of the PDE. We have shown in a previous work \cite{Ca1} that, under general properties of the fundamental diagram, a one-parameter curve of Takens--Bogdanov (BT) bifurcations exists, associated with a fold of the projection of the surface of critical points onto the two--dimensional space of parameters $Q_g$--$V_g$. The family of BT points can be parametrized by the value of $\Theta_0$. For a fixed value of $\Theta_0$ the versal unfolding of the BT point contains codimension--one local curves of Hopf and homoclinic bifurcations in the $Q_g$--$V_g$ plane. In this article we consider the dynamical system for a particular fundamental diagram due to Kerner and Konh\"auser: \begin{equation}\label{FD} V_e(\rho)=V_{max}\left(\frac{1}{1+\exp{[(\frac{\rho}{\rho_{max}}-0.25)/0.06]} }-3.72 \times 10^{-6}\right). \end{equation} We compute explicitly the bifurcation set and show that, for a proper choice of $\Theta_0$, there exists a cuspidal curve in the parameter space $Q_g$--$V_g$ corresponding to BT bifurcations. The main result refers to the cuspidal point of the bifurcation curve. We show that this is in fact a degenerate Takens--Bogdanov point (DBT), whose bifurcation diagram corresponds to the saddle case in the classification of Dumortier et al.~\cite{Du}. We also prove that a local curve of GH bifurcations originates at the DBT point and that a bifurcation of two limit cycles (one stable and the other unstable) can occur in our model \textit{for the same values of the parameters}. We also compute the first Lyapunov coefficient $\ell_1$ and describe the set of GH points as the zero set $\ell_1=0.$ This defines a curve that separates the limit cycles bifurcating from the Hopf curves into stable and unstable ones. We systematically use Kuznetsov and Govaerts' \textit{Matcont} in order to perform the global numerical continuation of the Hopf bifurcation and limit cycle curves, which gives the global picture of bifurcations. We take as initial conditions for system (\ref{continuity})--(\ref{balance}) two limit cycles generated by Matcont, one in the stable region and the other in the unstable one, and we show that they give rise to two traveling waves, which can be stable or unstable. The rest of the paper is organized as follows: in Section~2 we introduce the dynamical system and the surface of critical points, where we give conditions for non-hyperbolic critical points to be Hopf or Takens--Bogdanov points.
In Section~3 we present all the theoretical results, including the calculation of the first Lyapunov coefficient, which allows us to determine analytically the curve of Bautin points, or Generalized Hopf points, and hence the stability region of the limit cycles associated with Hopf points. We also show the existence of a degenerate Takens--Bogdanov bifurcation. In Section~4 we present the dynamical consequences of the global bifurcation diagram obtained in the previous sections. This includes families of homoclinic and heteroclinic solutions. In Section~5 we study in detail families of limit cycles which represent periodic traveling waves of the PDE in a bounded domain. Finally, conclusions are given in Section~6. At the end of the article we include the proofs of some of the theoretical results. \section{The dynamical system and the surface of critical points}\label{TRsection} We look for traveling wave solutions of (\ref{continuity})--(\ref{balance}). In order to obtain them we apply to these equations the change of variables $\xi=x+V_g t.$ The first equation is transformed into a quadrature which can be solved immediately: \begin{equation}\label{flux} \rho (V+V_g)=Q_g. \end{equation} Following \cite{Saa-Ve} we introduce dimensionless variables \begin{equation}\label{adim} z=\rho_{max}\xi,\quad v=\frac{V}{V_{max}},\quad v_g=\frac{V_g}{V_{max}},\quad q_g=\frac{Q_g}{\rho_{max}V_{max}},\quad r= \frac{\rho}{\rho_{max}}. \end{equation} Then (\ref{flux}) becomes \begin{equation}\label{flux2} r=\frac{q_g}{v+v_g}, \end{equation} and we observe that in the fundamental diagram (\ref{FD}), $V_e$ depends only on the ratio $r$. By abuse of notation we also write $V_e(\rho)$ as $V_e(r)$. Also let \begin{equation}\label{adim2} \tilde{v_e}(r)=\frac{V_e(r)}{V_{max}},\quad \theta_0=\frac{\Theta_0}{V_{max}^2},\quad \lambda=\frac{V_{max}}{\eta_0},\quad \mu=\frac{1}{\rho_{max}\eta_0\tau}. \end{equation} In what follows we will denote by $v_e(v)$ the composition of $\tilde{v_e}$ with $r$ given by (\ref{flux2}), and whenever we want to make explicit the dependence on the parameters we write \begin{equation}\label{ve} v_e(q_g,v_g,v)=\tilde{v_e}\left(\frac{q_g}{v+v_g}\right). \end{equation} Also, for simplicity of notation, we will use the shorthand $$v_e'(v)=\pder{v_e(q_g,v_g,v)}{v}. $$ Observe that $v_g$ and $v$ appear symmetrically in (\ref{ve}); therefore $$ \pder{v_e(q_g,v_g,v)}{v_g}=\pder{v_e(q_g,v_g,v)}{v}=v_e'(v). $$ Substitution of (\ref{flux}) into the momentum equation (\ref{balance}) yields the following dynamical system \begin{eqnarray}\label{KKode} \frac{dv}{dz} &=& y, \nonumber\\ \frac{dy}{dz} &=& \lambda q_g \left[1-\frac{\theta_0}{(v+v_g)^2}\right] y-\mu q_g \left(\frac{v_e(v)-v}{v+v_g}\right). \end{eqnarray} Here and in what follows, we will take the parameter values $\lambda$, $\mu$ as given by the model, and we will analyze the dynamical behavior with respect to the parameters $\theta_0$, $v_g$, $q_g.$ \begin{proposition} Let $V_e(\rho)$ be given by (\ref{FD}). Then there exist parameter values of $q_g$ and $v_g$ such that the dynamical system has up to three critical points. \end{proposition} This proposition was proved in \cite{Ca1}. Figure~\ref{sigmoide} shows the corresponding graph for the Kerner--Konh\"auser fundamental diagram in the case of three critical points. \begin{figure} \caption{Kerner-Konh\"auser fundamental diagram $v_e(v)$ showing up to three intersections with the identity (dashed line): $v_e(v_c)=v_c$. Distinct situations are illustrated by graphs in different colors.
Red: $v_e'(v_c)-1=v_e''(v_c)=0$. Brown and blue: $v_e'(v_c)-1=0$. Orange: three intersections, the middle one with $v_e'(v_c)>1$, while the others satisfy $v_e'(v_c)<1.$} \label{sigmoide} \end{figure} The linear part of (\ref{KKode}) at $v_c$ is $$ A_0=\left( \begin{array}{ll} 0 & 1 \\ -\frac{\mu q_g \left(v_e'(v_c)-1\right)}{v_c+v_g} & \lambda q_g \left(1-\frac{\theta_0}{(v_c+v_g)^2}\right) \end{array} \right)\equiv \left( \begin{array}{ll} 0 & 1 \\ c & b \end{array} \right). $$ The characteristic polynomial $l^2-bl -c=0$ yields the eigenvalues \begin{equation}\label{eigenvalues} l_{1,2}=\frac{b\pm \sqrt{b^2+4c}}{2}. \end{equation} The stability of the critical points is given in the following proposition \cite{Ca1}. \begin{proposition}\label{stability} Let $(v_c,0)$ be a critical point of system (\ref{KKode}). Then: \begin{itemize} \item If $v_e'(v_c)<1$ then $c>0$ and the roots $l_{1,2}$ are real with opposite signs. Thus the critical point is a saddle. \item If $v_e'(v_c)>1$ then $c<0$ and either the roots $l_{1,2}$ are real with the same sign as $b$ and the critical point is a node, or $l_{1,2}$ are complex conjugate with real part $b/2$ and the critical point is a focus. Thus the sign of $b$ determines the stability of the critical point: if $b<0$ it is stable, if $b>0$ it is unstable. \item If $v_e'(v_c)=1$ then $c=0$ and one eigenvalue becomes zero. If in addition $b=0$, then zero is an eigenvalue of multiplicity two. \end{itemize} \end{proposition} Whenever there are three critical points, two of them, $v_c^1<v_c^2$, are saddles, and one, $v_c$, is a stable or unstable focus or node depending on the parameter values $(q_g,v_g)$, with $v_c^1<v_c<v_c^2$. In this case the condition $v_e'(v_c)>1$ must be satisfied. For the Kerner--Konh\"auser fundamental diagram (\ref{FD}) the set of critical points is given by the surface \begin{equation}\label{critical_surface} \{(q_g,v_g,v_c)\mid v_e(v_c)-v_c=0\}, \end{equation} which is depicted in Figure~\ref{cusp}. \begin{figure} \caption{Left: Surface of critical points. Right: The singular locus of the projection $\gamma$. The upper part $\gamma^+$ is shown in blue, the lower part $\gamma^-$ in red.} \label{cusp} \end{figure} For simplicity, the surface of critical points is represented in $(q_g,v_g,x)$ coordinates, where $x=v_g+v_c$, and we restrict to $x>0$. Geometrically, for a given point in the parameter plane $(q_g,v_g)$, the coordinates $v_g+v_c$ of the critical points are obtained as the intersections of the surface with the line parallel to the $x$--axis passing through the point. There is a curve in the three-dimensional space $(v_g,q_g,v_c)$ where the surface of critical points folds back. It is the set of points where the projection $(v_g,q_g,v_c)\stackrel{\pi}{\to} (v_g,q_g)$ restricted to the surface fails to be a local diffeomorphism. Analytically, this set is a curve given by the two equations \begin{eqnarray*} \tilde{\gamma}=\{ (q_g,v_g,v_c)\mid v_e(v_c)-v_c= 0,\quad v_e'(v_c)-1=0\}. \end{eqnarray*} This curve and its projection $\gamma=\pi(\tilde{\gamma})$ in the parameter space $q_g$--$v_g$ are shown in Figure~\ref{cusp}. For $(q_g,v_g)\in\gamma$ the graph of $v_e(v)$ is tangent to the identity at $v_c$, which is then a saddle--node. If in addition $\theta_0 =(v_c+v_g)^2$, then the critical point is a Takens--Bogdanov bifurcation point whenever the non--degeneracy conditions \begin{equation}\label{ND} v_e''(v_c)\neq 0,\qquad\mbox{and}\qquad \frac{\partial^2 v_e(v_c)}{\partial q_g\partial v}\neq 0 \end{equation} are satisfied.
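Concretely, on $\gamma$ the condition $v_e'(v_c)=1$ gives $c=0$, and the additional condition $\theta_0=(v_c+v_g)^2$ gives $b=0$, so that the linear part reduces to the nilpotent matrix $$ A_0=\left( \begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array} \right), $$ with a double zero eigenvalue; this is the linear setting of a Takens--Bogdanov point, and the conditions (\ref{ND}) play the role of the usual non--degeneracy requirements in its versal unfolding.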
The complement of $\tilde{\gamma}$ in the surface has two components: the folded part corresponds to critical points such that $v_e(v_c)=v_c$ and $v_e'(v_c)>1$. This follows from the sigmoidal shape of the curve $v_e(v)$, shown in Figure~\ref{sigmoide}, see \cite{Ca}. The second component contains the saddle points, associated to the same values of the parameters $(q_g,v_g)$, where $v_e'(v_c)<1.$ The cusp point $K$ of the curve $\gamma$ is defined by the conditions \begin{equation}\label{degenerate_BT} v_e(v_c)-v_c=0,\qquad v_e'(v_c)=1,\qquad v_e''(v_c)=0,\quad v_e'''(v_c)\neq0 \end{equation} and divides $\gamma$ into two components. We will call $\gamma^+$ the upper, and $\gamma^-$ the lower part of $\gamma$, according to Figure~\ref{cusp}. It will be shown in detail in Section 3.2 that this cusp point gives rise to a degenerate Takens--Bogdanov (DTB) bifurcation. Here we just mention that for the Kerner--Konh\"auser fundamental diagram (\ref{FD}) there exists a unique point $(q_g^*,v_g^*,v_c^*)$ satisfying (\ref{degenerate_BT}) with $v_e'''(v_c^*)<0$, therefore $K=(q_g^*,v_g^*)$. Numerical values are given in Section 5.1. \section{Global bifurcations inside the cusp} In this paper we will be interested in the cuspidal region $\Delta$ with boundary $\gamma=\partial\Delta$, which is the projection of the patch of the surface that folds back: \begin{equation}\label{fold_patch} \mathcal{F}=\{(q_g,v_g,v_c)\mid v_e(v_c)-v_c=0,\quad v_e'(v_c)>1\}. \end{equation} \begin{proposition} Let $\pi_{\mathcal{F}}$ be the restriction of the projection $(q_g,v_g,v_c)\mapsto (q_g,v_g)$ to $\mathcal{F}$. Then $\pi_{\mathcal{F}}\colon\mathcal{F}\to\Delta $ is a diffeomorphism. \end{proposition} \begin{proof} Let $p^{(0)}=(q^{(0)}_g,v_g^{(0)})\in\Delta$. By the implicit function theorem applied to $v_e(q_g,v_g,v_c)-v_c=0$, if $v_e'(v_c)>1$ there exists a smooth function $\kappa_{p_0}$, defined in a neighborhood $\mathcal{N}_{p_0}$ of $p^{(0)}$, such that $v_e(q_g,v_g,\kappa_{p_0}(q_g,v_g))-\kappa_{p_0}(q_g,v_g)=0$, for $(q_g,v_g)\in \mathcal{N}_{p_0}$. Obviously $\Delta=\bigcup_{p\in\Delta}\mathcal{N}_{p}$. Let the map $k\colon\Delta\to\mathcal{F}$ be defined by $k(q_g,v_g)=(q_g,v_g,\kappa_{p_0}(q_g,v_g))$ if $(q_g,v_g)\in\mathcal{N}_{p_0}$. We will see that $k$ is well defined. For this, suppose $(q_g,v_g)\in \mathcal{N}_{p_1}\cap \mathcal{N}_{p_2}$. By contradiction, suppose $\kappa_{p_1}(q_g,v_g)\neq \kappa_{p_2}(q_g,v_g)$. Then $v_e(q_g,v_g,\kappa_{p_i}(q_g,v_g))=\kappa_{p_i}(q_g,v_g),\quad i=1,2$, and by the mean value theorem \begin{eqnarray*} \kappa_{p_1}(q_g,v_g)-\kappa_{p_2}(q_g,v_g)&=& v_e(q_g,v_g,\kappa_{p_1}(q_g,v_g))-v_e(q_g,v_g,\kappa_{p_2}(q_g,v_g))\\ &=&v_e'(q_g,v_g,\tilde{v}_c)\left(\kappa_{p_1}(q_g,v_g)-\kappa_{p_2}(q_g,v_g)\right) \end{eqnarray*} for some $\tilde{v}_c$ between $\kappa_{p_1}(q_g,v_g)$ and $\kappa_{p_2}(q_g,v_g)$. Thus \begin{eqnarray*} |\kappa_{p_1}(q_g,v_g)-\kappa_{p_2}(q_g,v_g)|&=& |v_e'(q_g,v_g,\tilde{v}_c)| |\kappa_{p_1}(q_g,v_g)-\kappa_{p_2}(q_g,v_g)|\\ &>&|\kappa_{p_1}(q_g,v_g)-\kappa_{p_2}(q_g,v_g)|, \end{eqnarray*} which is a contradiction. Hence $k$ is well defined; it is smooth, since each $\kappa_{p}$ is, and by construction it is the inverse of $\pi_{\mathcal{F}}$. This completes the proof. \end{proof} For future reference we compute by implicit differentiation \begin{equation}\label{partial_vg} \pder{v_c}{v_g}=-\frac{v_e'(v_c)}{v_e'(v_c)-1}. \end{equation} In the following section we present the global picture of bifurcations appearing in system (\ref{KKode}) inside the cuspidal region $\Delta$. We will describe the global Hopf curves emerging from Takens--Bogdanov points, and the families of limit cycles which originate at Hopf points.
We also compute the first Lyapunov coefficient which determines their stability (see Proposition~\ref{FirstLyapunov}). When the first Lyapunov coefficient is zero we get a curve of Bautin bifurcations (see Section 3.1). We also show that the cuspidal point is a degenerate Takens--Bogdanov point whose bifurcation diagram corresponds to the saddle case studied by Dumortier et al.~\cite{Du}. This proves rigorously the existence of Bautin bifurcations. \subsection{Bautin bifurcation} The generalized Hopf or Bautin bifurcation has codimension two. Its normal form is given in \cite[p. 311]{Kuz} and its bifurcation diagram is shown in Figure~\ref{Bautin}. For our purposes it will be enough to recall that necessary conditions can be stated in terms of the eigenvalues $l_{1,2}=\mu(\alpha)\pm i\omega(\alpha)$ depending on the vector of parameters $\alpha\in\mathbb{R}^2$, namely \begin{equation}\label{Bautin-cond} \mu(0)=0,\quad \ell_1(0)=0. \end{equation} Additional non--degeneracy conditions involving the second Lyapunov coefficient $\ell_2(0)$, and the regularity of the map $\alpha\mapsto (\mu(\alpha),\ell_1(\alpha))$, are shown to be sufficient. We will prove the existence of this kind of bifurcation indirectly, by proving that in fact a codimension three bifurcation, a degenerate Takens--Bogdanov, occurs, associated to the cusp point $K$ of the curve $\gamma$. See Theorem~\ref{DTB}, whose proof is given in Appendix~B. Thus the existence of Bautin bifurcations will follow from the normal form already mentioned~\cite{Du}. In this way we will not need to verify explicitly the non--degeneracy conditions. The bifurcation diagram for a Bautin bifurcation is shown in Figure~\ref{Bautin}. It contains two branches of subcritical ($H_{+}$) and supercritical ($H_{-}$) Hopf bifurcations and a single branch of saddle--node bifurcations of cycles, LPC (standing for limit point of cycles), where two hyperbolic cycles, one stable and one unstable, coalesce into a single saddle--node cycle. \begin{figure} \caption{Bifurcation diagram for Bautin bifurcation} \label{Bautin} \end{figure} Besides the fact that the vanishing of the first Lyapunov coefficient $\ell_1$ determines a Bautin bifurcation, its sign also determines the stability of a limit cycle emerging from a Hopf bifurcation. The explicit form of $\ell_1$ is stated in Proposition~\ref{FirstLyapunov}, and it will be of great importance in the numerical study of limit cycles presented in Section \ref{numerical}. Given $q_g,$ $v_g,$ denote by $l_{1,2}(q_g,v_g)=\mu(q_g,v_g)\pm\omega(q_g,v_g) i$ the eigenvalues (\ref{eigenvalues}) of the linearization. Let $(v_c,0)$ be a critical point of (\ref{KKode}) such that $v_e(v_c)=v_c$, $v_e'(v_c)>1$ and choose $\theta_0=(v_c+v_g)^2$, then $b=0$ and the eigenvalues are purely imaginary $$ l_{1,2}=\pm i\omega_0 $$ with $$ \omega_0^2=\frac{\mu q_g(v_e'(v_c)-1)}{(v_c+v_g)}. $$ \begin{proposition}\label{FirstLyapunov} Let $(v_c,0)$ be a critical point such that $v_e'(v_c)>1$, and $\theta_0=(v_c+v_g)^2$, then the first Lyapunov coefficient is given by the expression \begin{equation}\label{ell1} \ell_1(q_g,v_g)=-\frac{\lambda\mu q_g^2}{2\omega_0^3(v_c+v_g)^2}\left( \frac{v_e'(v_c)-1}{v_c+v_g}+v_e''(v_c)\right). \end{equation} \end{proposition} The proof is a straightforward computation and is presented in the Appendix~A. \begin{proposition} There exists a smooth function $v_g=h(q_g)$ defined for $0<q_g<q_g^*$ such that $\ell_1(q_g,h(q_g))=0$ and\, $\lim_{q_g\to q_g^*}h(q_g)=v_g^*$.
In other words, $\ell_1(q_g,v_g)=0$ is the graph of a function that divides $\Delta$ and has limit point at $K=(q_g^*,v_g^*)$, the cusp point of the curve $\gamma$. \end{proposition} \begin{proof} Observe that from definition (\ref{ve}) it follows that \begin{equation}\label{derivs} \pder{v_e(v_c)}{v_g}=\left(1+\pder{v_c}{v_g}\right)v_e'(v_c),\quad \pder{v_e'(v_c)}{v_g}= \left(1+\pder{v_c}{v_g}\right)v_e''(v_c), \end{equation} and so forth. From the expression for $\ell_1$ in (\ref{ell1}) we compute \begin{eqnarray*} \lefteqn{\left.\pder{\ell_1(q_g,v_g)}{v_g}\right|_{\ell_1=0}}\\ &=& -A\left( \frac{\pder{v_e'(v_c)}{v_g}}{v_c+v_g}+(v_e'(v_c)-1) \left(-\frac{1}{(v_c+v_g)^2}-\frac{1}{(v_c+v_g)^2}\pder{v_c}{v_g}\right) +\pder{v_e''(v_c)}{v_g} \right)\\ &=& -A\left( \frac{\pder{v_e'(v_c)}{v_g}}{v_c+v_g}- \frac{(v_e'(v_c)-1)}{(v_c+v_g)^2} \left( 1+\pder{v_c}{v_g}\right) +\pder{v_e''(v_c)}{v_g} \right) \end{eqnarray*} where $$ A= \frac{\lambda\mu q_g^2}{2\omega_0^3(v_c+v_g)^2} $$ is a positive quantity. Using (\ref{derivs}) we get \begin{eqnarray} \lefteqn{\left.\pder{\ell_1(q_g,v_g)}{v_g}\right|_{\ell_1=0}}\nonumber \\ &=&-A\left( \frac{v_e''(v_c)}{v_c+v_g}-\frac{v_e'(v_c)-1}{(v_c+v_g)^2}+v_e'''(v_c) \right)\left( 1+\pder{v_c}{v_g}\right)\nonumber\\ &=& A\left( \frac{v_e''(v_c)}{v_c+v_g}-\frac{v_e'(v_c)-1}{(v_c+v_g)^2}+v_e'''(v_c) \right)\left( \frac{1}{v_e'(v_c)-1}\right),\label{three-dogs} \end{eqnarray} where we have used (\ref{partial_vg}). We now analyze the sign of each term in the second factor: for the first term, observe that along $\ell_1=0$, $$ v_e''(v_c)=-\frac{v_e'(v_c)-1}{v_c+v_g}<0. $$ The second term is negative since $v_e'(v_c)-1>0$. For the third term, recall that for fixed values of $q_g$, $v_g$, $v_e(v)$ is sigmoidal \cite{Ca}; therefore, its graph is monotone increasing and the concavity changes from convex to concave passing through a unique point of inflection. Then the second derivative passes from $v_e''>0$ to $v_e''<0$, so that $v_e''(v)$ is decreasing there. In particular, $v_e'''(v_c)<0$. Therefore, the second factor in (\ref{three-dogs}) is negative. Since $A$ and $\frac{1}{v_e'(v_c)-1}$ are positive, while the second factor is negative, we conclude that \begin{equation}\label{dl1} \pder{\ell_1(q_g,v_g)}{v_g}<0 \end{equation} whenever $\ell_1(q_g,v_g)=0$. Define the Lagrangian $$ L(q_g,v_g)=-\int_{v_g^0}^{v_g} \ell_1(q_g,s)\,ds $$ and the associated Legendre transform $$ \mathcal{L}(q_g,v_g)=(q_g,p),\quad\mbox{where}\quad p=\pder{L}{v_g}(q_g,v_g), $$ then it is immediate that $\mathcal{L}$ is injective: if $\mathcal{L}(q_g,v_g)=\mathcal{L}(q_g',v_g')$ then $q_g=q_g'$ and $$ \pder{L}{v_g}(q_g,v_g)=\pder{L}{v_g}(q_g,v_g'), $$ that is $\ell_1(q_g,v_g)=\ell_1(q_g,v_g')$; by monotonicity this implies $v_g=v_g'$. The Jacobian determinant of $\mathcal{L}$ is given by $$ \left| \begin{array}{cc} 1 & 0 \\ \frac{\partial^2 L}{\partial q_g\partial v_g} & \frac{\partial^2 L}{\partial v_g^2} \end{array}\right| = \frac{\partial^2 L}{\partial v_g^2}=-\pder{\ell_1}{v_g}(q_g,v_g) >0, $$ from (\ref{dl1}). Thus $\mathcal{L}$ is a global diffeomorphism onto its image. Let the inverse mapping be denoted as $$ (q_g,v_g)= (q_g,\mathcal{H}(q_g,p)), $$ then, by definition, $$ p=\pder{L}{v_g}(q_g,\mathcal{H}(q_g,p))=-\ell_1(q_g,\mathcal{H}(q_g,p)); $$ setting $p=0$ we get $$ 0=\ell_1(q_g,\mathcal{H}(q_g,0)). $$ This completes the proof by setting $v_g=h(q_g)=\mathcal{H}(q_g,0).$ \end{proof} \noindent We call $L_1$ the curve defined by $\ell_1(q_g,v_g)=0$. From the last proposition it follows that the cuspidal region $\Delta$ is divided in two components by the graph of $L_1$.
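Expression (\ref{ell1}) is straightforward to evaluate numerically. The following minimal sketch shows one way to organize such an evaluation (it is not the computation used for the figures of this paper): it again assumes the standard Kerner--Konh\"auser form of $\tilde{v}_e$, approximates $v_e'$ and $v_e''$ by central differences, and is only meaningful for points $(q_g,v_g)$ inside the cuspidal region $\Delta$, where a critical point with $v_e'(v_c)>1$ exists. The two evaluation points are the Hopf points used later in Sections~\ref{section-familyA} and~\ref{section-familyB}.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def ve_tilde(r):   # assumed standard Kerner--Konhauser form
    return 1.0 / (1.0 + np.exp((r - 0.25) / 0.06)) - 3.72e-6

def ve(v, qg, vg):
    return ve_tilde(qg / (v + vg))

def ell1(qg, vg, lam=0.2, mu=1.0/700.0, dv=1e-5):
    # find the critical point with v_e'(v_c) > 1 (assumes (qg, vg) lies in Delta)
    f = lambda v: ve(v, qg, vg) - v
    grid = np.linspace(1e-3, 1.0, 800)
    roots = [brentq(f, a, b) for a, b in zip(grid[:-1], grid[1:])
             if f(a) * f(b) < 0.0]
    d1 = lambda v: (ve(v + dv, qg, vg) - ve(v - dv, qg, vg)) / (2.0 * dv)
    vc = next(v for v in roots if d1(v) > 1.0)
    d2 = (ve(vc + dv, qg, vg) - 2.0 * ve(vc, qg, vg) + ve(vc - dv, qg, vg)) / dv**2
    w0 = np.sqrt(mu * qg * (d1(vc) - 1.0) / (vc + vg))        # omega_0
    bracket = (d1(vc) - 1.0) / (vc + vg) + d2
    return -lam * mu * qg**2 / (2.0 * w0**3 * (vc + vg)**2) * bracket

# Sign of ell_1 at the Hopf points of families A and B (Sections 5.4 and 5.5).
print(ell1(0.164212226, 0.335569670))
print(ell1(0.133886021, 0.204071932))
\end{verbatim}
The sign returned by this evaluation is what decides on which side of $L_1$ a given Hopf point lies.
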
We are now able to determine the regions where $\ell_1>0$ and $\ell_1<0$. \begin{proposition}\label{ell1_positive} The first Lyapunov coefficient $\ell_1(q_g,v_g)$ is positive in the lower cuspidal region $\Delta^{-}$, and it is negative in the upper cuspidal region $\Delta^{+}$. \end{proposition} \begin{proof} From the proof of the previous Proposition it follows that $$ \pder{\ell_1(q_g,v_g)}{v_g}<0 $$ along $L_1$. Take a point $(q_g,v_g)\in L_1$, so that $\ell_1(q_g,v_g)=0$. Since $\ell_1(q_g,\cdot)$ is decreasing at this point, it follows that $\ell_1(q_g,v_g+\delta)<0$ for small $\delta>0$, and $(q_g,v_g+\delta)\in\Delta^+$. Since $\ell_1$ does not vanish on $\Delta^+$, which is connected, continuity gives $\ell_1(q_g,v_g)<0$ for all $(q_g,v_g)\in\Delta^+$. In the same way, $\ell_1(q_g,v_g-\delta)>0$ for small $\delta>0$, and hence $\ell_1$ is positive in $\Delta^{-}$. \end{proof} In Figure~\ref{diagram1} the regions $\Delta^{\pm}$ are delimited by the corresponding curves $\gamma^{\pm}$ and $L_1$. In Figure~\ref{diagram1-left}, the dashed curve interpolates a number of points computed numerically with Matcont, where $\ell_1=0$ (see Section~\ref{numerical}). In Figure~\ref{diagram1-right}, the same set of points and the curve $L_1$, as given by expression (\ref{ell1}), are plotted, showing a remarkable fit. \subsection{Degenerate Takens-Bogdanov bifurcation} Among the codimension three bifurcations that have been studied, the degenerate Takens--Bogdanov bifurcation is the one relevant to this paper. The monograph of Dumortier, Roussarie, Sotomayor \& \.Zo{\l}\k{a}dek \cite{Du} is the main reference for our work. Our presentation follows closely \cite{Kuz-I}. Whenever a system of the form $x'=f(x,\alpha)$, $x,\alpha\in\mathbb{R}^2$, with $f(0,0)=0$, is such that $A=f_x(0,0)$ has a double zero eigenvalue with non--semisimple Jordan form, the ODE is formally smoothly equivalent to \begin{eqnarray}\label{smooth-equivalent} \dot{w}_0 &=& w_1,\\ \dot{w}_1 &=& \sum_{k\geq2}\left(a_k w_0^k + b_k w_0^{k-1}w_1\right). \end{eqnarray} In the non--degenerate case $a_2b_2\neq0$, the universal unfolding is the well known Takens--Bogdanov system. When $a_2=0$ but $a_3b_2\neq0$, the system is smoothly orbitally equivalent to \begin{eqnarray}\label{orbitally-equivalent} \dot{w}_0 &=& w_1,\\ \dot{w}_1 &=& a_3 w_0^3+b_2w_0w_1+ b_3' w_0^2 w_1 + O(||(w_0,w_1)||^5). \end{eqnarray} There appear three inequivalent cases: \begin{itemize} \item When $a_3>0$, it is called the saddle case. \item When $a_3<0$, $b_2^2+8a_3<0$ and $b_3'\neq0$, it is called the focus case. \item When $a_3<0$ and $b_2^2+8a_3>0$, it is called the elliptic case. \end{itemize} According to \cite{Kuz-I}, in all cases a universal unfolding is given by \begin{eqnarray} \dot{\xi}_0 &=& \xi_1, \nonumber\\ \dot{\xi}_1 &=& \beta_1+\beta_2\xi_0+\beta_3\xi_1 + a_3\xi_0^3+b_2\xi_0 \xi_1 +b_3' \xi_0^2\xi_1. \label{UniversalUnfoldingDBT} \end{eqnarray} An equivalent bifurcation diagram, after a re-scaling, is presented in \cite{Du}. \begin{theorem}\label{DTB} Let $v_c$ be a critical point of (\ref{KKode}) that satisfies $v_e(v_c)=v_c,$ $v_e'(v_c)=1,$ $v_e''(v_c)=0$ but $v_e'''(v_c)<0.$ If $\theta_0=(v_c +v_g)^2$ is chosen, then this point corresponds to a degenerate Takens--Bogdanov point whose bifurcation diagram is the saddle case. \end{theorem} The proof is given in Appendix~B. \noindent The bifurcation diagram of the universal unfolding (\ref{UniversalUnfoldingDBT}) is given in Dumortier et al.~\cite{Du}. A sketch is shown in Figure~\ref{lips}, keeping their notation. \begin{figure} \caption{Sketch of the degenerate Takens--Bogdanov bifurcation diagram.
Notation is: BT: Takens--Bogdanov; H: Hopf; GH: Bautin; P: homoclinic; PLC: saddle--node cycle; TSC: two saddle connections; SC: saddle connection; SNC: saddle--node connection. The subindices mean s: superior, i: inferior, l: left, r: right, and describe the position of the bifurcation in the phase plane.} \label{lips} \end{figure} The description is as follows: within the lips--shaped region there exist three critical points, two saddles and an interior focus or node. In the outer part of the lips, there exists exactly one saddle. The two regions are separated by a closed curve formed by BT points (if we choose the value of the parameter $\theta_0=(v_c+v_g)^2$) or otherwise by saddle--nodes. If we start with the (left) Takens--Bogdanov point BTl in the left part of the curve, there are two branches, of homoclinic and Hopf bifurcations, emanating from it, according to the Takens--Bogdanov theorem \cite{Kuz}. The homoclinic bifurcation curve P, the dotted line in blue, continues up to a point TSC, and terminates in a second BTr point, in the right part of the curve. The BTl point arises when the left saddle in phase space coalesces with the focus/node. At the BTr point the right saddle coalesces with the focus/node. TSC is also a point of intersection of two curves bifurcating from saddle--node connection points, named (superior left) SNCsl and (inferior right) SNCir, which intersect precisely at TSC. They continue separately, ending up at two different saddle--node connection points named (superior right) SNCsr and (inferior left) SNCil, respectively. The curve joining the points SNCsl and SNCsr is named SCs; the curve connecting the points SNCir and SNCil is named SCi. SCs and SCi are curves of saddle--saddle connections, connecting either two saddle points or a saddle point and a saddle--node in phase space by a regular orbit. The Hopf curve bifurcating from the BTl point continues up to a Bautin point named GH (generalized Hopf), and then continues as a Hopf curve that ends at the same BT point as the previously described homoclinic bifurcation curves. There is a curve segment connecting the TSC point and the GH point, marked as a red dotted curve and denoted by PLC. This curve consists of saddle--node cycles; it is the same as the local curve LPC in the local diagram of the Bautin bifurcation in Figure~\ref{Bautin}. When we cross PLC from the exterior of the triangular region GH-t-TSC, a saddle--node cycle appears and bifurcates into two limit cycles --one unstable and the other stable-- just as in the local diagram of the Bautin bifurcation in Figure~\ref{Bautin}. \section{Dynamical consequences in the PDE} In order to obtain a particular solution of system (\ref{continuity}), (\ref{balance}), initial and boundary conditions must be given. Let $f(x)$, $g(x)$ be smooth functions such that $V(x,0)=f(x)$ and $\rho(x,0)=g(x).$ We discuss two types of boundary conditions: (a) periodic in a finite road and (b) bounded in an unbounded road.
More precisely for type (a), for $0<x<L$ we consider the boundary conditions \begin{equation}\label{periodic} V(0,t)=V(L,t),\quad \rho(0,t)=\rho(L,t)\quad\mbox{for all $t>0$.} \end{equation} For type (b), we consider the boundary conditions \begin{equation}\label{unbounded} V(x,t)\quad\mbox{and}\quad \rho(x,t) \quad\mbox{remain bounded as $x\to\pm\infty$ for all $t>0.$} \end{equation} Of course type (b) boundary conditions can only be approximated numerically by a sufficient long finite road, but they are interesting to discuss for theoretical purposes. The solutions of interest, arising from the dynamical system (\ref{KKode}), can be classified according to the Poincar\'e--Bendixon theorem as: \begin{enumerate} \item Critical points. \item Limit cycles. \item Cycles of critical points and homoclinic orbits. \begin{enumerate} \item Homoclinic connections. \item Heteroclinic connections. \end{enumerate} \end{enumerate} \subsection{Critical points} Critical points, $v_e'(v_c)=v_c$, give rise to homogeneous solutions for both types of boundary conditions (a) and (b). Under the change of variables (\ref{adim}), critical points are given by a pair of values $(\rho_0, V_0)$ in the graph of the fundamental diagram: $V_0=V_e(\rho_0)$. The linear stability is given according to Proposition~\ref{stability} . In \cite{Saa-Ve}, type (a) boundary conditions were considered. It was shown, numerically, that if the homogeneous solution is linearly unstable in the PDE then it evolves, under a small perturbation, into a traveling wave solution. This observed behavior can be partially explained by the dynamical system (\ref{KKode}) as follows: consider an unstable critical point of the focus type with parameters $(q_g,v_g)$ within the cuspidal region $\Delta$, surrounded by a stable limit cycle. This scenario takes place whenever a Hopf bifurcation with negative Lyapunov coefficient takes place. Then by a small perturbation of the initial condition near the critical point, solutions evolve along the unstable spiral towards the stable cycle. \subsection{Homoclinic and heteroclinic connections} Homoclinic solutions are associated to saddle points, located to the left or right in the $v$ direction of phase space $v$--$y$ of system (\ref{KKode}). This kind of solutions correspond to one--bump traveling wave solutions with the same horizontal asymptotes as $\xi\to\pm\infty$ (see Figure~\ref{clinics} left). The family of homoclinic orbits described in Section~\ref{homoclinics} are accumulation points of limit cycles. If it is accumulated by unstable cycles, then the homoclinic presents a ``two sided" stability behavior: it is stable for initial conditions within the annular region defined by the unstable limit cycle and the homoclinic, but it is unstable for initial conditions outside the limit cycle. This poses the possibility that an unstable traveling wave would evolve towards a one--bump traveling wave in the PDE by a proper small perturbation. If the homoclinic is accumulated by stable limit cycles, then it is always unstable. Heteroclinic orbits are interpreted similarly, and give rise to traveling fronts as shown in Figure~\ref{clinics}. \subsection{Double saddle connection} This is a codimension three phenomenon. As explained in the bifurcation diagram of Figure~\ref{lips}, for a fixed value of $\theta_0$, a double saddle connection is determined as the intersection of two lines of saddle-node (homoclinic) connections. 
In the PDE there coexist, for the same value of the parameters, two front traveling waves as shown in Figure~\ref{clinics}. \begin{figure} \caption{One-bump and coexisting front traveling wave solutions, corresponding to a homoclinic (left) and a double saddle connection (right).} \label{clinics} \end{figure} \subsection{Heteroclinic connection between two limit cycles} This type of solution arises within the triangular region of parameters GH-t-TSC shown in Figure~\ref{lips}, where two limit cycles coexist, one unstable and the other stable, and the annular region in between contains a doubly asymptotic spiral. An orbit of this type corresponds to an oscillating traveling solution of increasing amplitude, as shown in Figure~\ref{twocycles}. \begin{figure} \caption{A heteroclinic orbit connecting two limit cycles gives rise to a traveling solution of increasing amplitude.} \label{twocycles} \end{figure} \section{Periodic boundary conditions}\label{numerical} For periodic boundary conditions in a bounded road of length $L$, only periodic solutions of (\ref{KKode}) that satisfy the condition \begin{equation}\label{resonance} L\rho_{max}= mT, \end{equation} for some positive integer $m,$ where $T$ is the period of the limit cycle, give rise to traveling wave solutions, see \cite{Ca1}. If $T$ is the minimal period, then we call $L_0=T/\rho_{max}$ the minimal road length. Considering a limit cycle of minimal period $T$ as a cycle of period $mT$ then yields a traveling wave solution in a road of length $mL_0$. In this way one can obtain multiple-bump traveling waves in the PDE. The following result characterizes the shape of traveling wave solutions of minimal period in a road of minimal length. \begin{proposition} Let $T$ be the minimal period of a limit cycle and consider a road of minimal length $L_0$. Then the corresponding traveling wave solution has exactly one minimum and one maximum. \end{proposition} \begin{proof} According to (\ref{KKode}) a limit cycle crosses the $v$--axis transversally exactly twice; these crossings are the minimum and the maximum of $v(z)$. \end{proof} This result says that traveling wave solutions of minimal period in a road of minimal length are one--bump traveling waves. In the rest of the section we compute the global bifurcation diagram inside the cuspidal region in the parameter space $q_g$--$v_g$. We use Matcont to extend numerically the local bifurcation curves given by the Takens--Bogdanov theorem, namely the Hopf and homoclinic curves. We present in detail the continuation of limit cycles from Hopf points, which gives rise to periodic orbits of fixed period that correspond to traveling wave solutions of the PDE. For each BT point we found a GH bifurcation when continuing the Hopf curves; these points constitute a complete family of Bautin bifurcations, given by the condition $\ell_1=0$, which is verified numerically. The Bautin bifurcations found in this study are consistent with the global bifurcation diagram presented in \cite{Du}, and in fact are justified by Theorem \ref{DTB}. For the Kerner--Konh\"auser fundamental diagram we use the following parameter values: $$ \rho_{max}= 140\, veh/km,\quad V_{max}= 120\, km/h,\quad \tau=30\ s,\quad \eta_0=600\ km/h, $$ $$ \lambda=\frac{1}{5}=0.2,\qquad \mu=\frac{1}{700}=0.00142857.
$$ \subsection{Cusp point} With these values one can show that there exists a unique critical point that satisfies the hypotheses $v_e(v_c)=v_c$, $v_e'(v_c)=1$, $v_e''(v_c)=0$ of Theorem \ref{DTB}, given by $$ q_g=0.316762381,\quad v_g= 0.752937578,\quad v_c = 0.300464598,\quad \theta_0=1.109656146, $$ and $v_e'''(v_c)=-11.317691591012832<0.$ \subsection{The Hopf curves}\label{Hopf_curves} In Figure \ref{diagram1}, we show the continuation of Hopf curves from several BT points taken on the lower branch of the cuspidal curve. Along each continuation, a GH point is found, and we show with a dotted line the interpolated curve passing through these points. When the curve $\ell_1(q_g,v_g)=0$ is plotted on top of them, a remarkable fit is observed. According to Proposition~\ref{ell1_positive}, the Hopf points that are located below the GH--curve ($\Delta^{-}$) have a positive Lyapunov coefficient, while those located above ($\Delta^{+}$) have a negative coefficient; therefore limit cycles which bifurcate from Hopf points in this region are stable. \begin{figure} \caption{Left: numerical continuation of Hopf curves from BT points. Right: Bautin points and the curve $\ell_1(q_g,v_g)=0.$ } \label{diagram1-left} \label{diagram1-right} \label{diagram1} \end{figure} \subsection{Limit cycles}\label{homoclinics} Recall that the cuspidal region is partitioned into two components $\Delta^{\pm}$; the upper component $\Delta^{+}$ is bounded by the curve $\gamma^{+}$ of BT points and by the curve $L_1$ of GH points, where the first Lyapunov coefficient vanishes, leading to Bautin bifurcations. The following analysis is performed on a particular BT point in the lower part of the cuspidal curve. A similar analysis can be done with the other BT points. Starting with this particular BT point, we get a curve of Hopf points passing through a GH point. By further continuation, we end up with a BT point on the upper part of the cuspidal curve, as shown in the left graph of Figure~\ref{diagram1}. Next we take a Hopf point on one side of the GH point and perform the continuation of limit cycles holding the period fixed. Examples of families of cycles of fixed period in parameter space $q_g$--$v_g$, for a fixed value of $\theta_0$, are shown in Sections~\ref{section-familyA} and~\ref{section-familyB}. As the initial Hopf point is taken closer to the initial BT point, the period increases; in this way we obtain a nested family of curves of cycles of increasing period. These families tend towards a limiting curve which is precisely the homoclinic curve of bifurcations emerging from the initial BT point. \begin{figure} \caption{(a) Two families of limit cycles of increasing period emerging from the line of Hopf points. These families accumulate towards the line of homoclinics. LPC is a turning point with respect to the parameter $q_g$. (b) The two particular families: Family A (in green) of long period and Family B (light blue) of short period. These families are presented in Sections~\ref{section-familyA} and~\ref{section-familyB}.} \label{(a)} \end{figure} According to Proposition~\ref{ell1_positive}, limit cycles bifurcating from Hopf points located in the upper part of the cuspidal region are stable, while those in the lower part are unstable. A natural question is whether stable limit cycles correspond to stable traveling wave solutions of the PDE.
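To make the link with the PDE concrete, the following minimal sketch illustrates the bookkeeping involved: it is not the MatCont computation used for the results below, it again assumes the standard Kerner--Konh\"auser form of $\tilde{v}_e$, and whether the trajectory actually settles on a limit cycle depends on the sign of $\ell_1$ at the chosen point and is not guaranteed. It integrates (\ref{KKode}) near the family~B Hopf point of Section~\ref{section-familyB} (with $\theta_0$ taken slightly below the Hopf value $0.16$, so that the focus is unstable), estimates the period $T$ from successive upward crossings of $y=0$, obtains the minimal road length $L_0=T/\rho_{max}$ of (\ref{resonance}), and converts the dimensionless profile into density and velocity data through (\ref{flux2}) and (\ref{adim}).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def ve_tilde(r):   # assumed standard Kerner--Konhauser form
    return 1.0 / (1.0 + np.exp((r - 0.25) / 0.06)) - 3.72e-6

def rhs(z, w, qg, vg, theta0, lam=0.2, mu=1.0/700.0):
    v, y = w
    ve = ve_tilde(qg / (v + vg))
    return [y, lam*qg*(1.0 - theta0/(v + vg)**2)*y - mu*qg*(ve - v)/(v + vg)]

# Family B Hopf-point data (Section 5.5); theta0 slightly below the Hopf value
# 0.16 so the focus is unstable and, if ell_1 < 0 there, a small stable cycle
# surrounds it.
qg, vg, theta0 = 0.133886021, 0.204071932, 0.159
rho_max, V_max = 140.0, 120.0                 # veh/km, km/h

sol = solve_ivp(rhs, (0.0, 4.0e4), [0.21, 0.0], args=(qg, vg, theta0),
                max_step=2.0, dense_output=True, rtol=1e-8, atol=1e-10)
y, z = sol.y[1], sol.t
ups = np.where((y[:-1] < 0.0) & (y[1:] >= 0.0))[0]
T = z[ups[-1]] - z[ups[-2]]                   # approximate (minimal) period
L0 = T / rho_max                              # minimal road length, Eq. (resonance)

# Dimensionless profiles over one period, converted back to physical units.
zz = np.linspace(z[ups[-2]], z[ups[-1]], 400)
v_prof = sol.sol(zz)[0]
r_prof = qg / (v_prof + vg)                   # density from Eq. (flux2)
x_km, V_kmh, rho_vkm = zz / rho_max, V_max * v_prof, rho_max * r_prof
print(T, L0)
\end{verbatim}
Profiles obtained in this way can be used as initial data for a periodic solver of (\ref{continuity})--(\ref{balance}) on a road of length $L_0$; the limit cycles used in the experiments reported below were obtained instead by MatCont continuation, as explained above.
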
In order to explore this issue, we take two limit cycles generated as explained above, one in the stable region, the other in the unstable region, as initial conditions for the PDE problem with periodic boundary conditions satisfying the condition (\ref{resonance}) with $m=1$, namely one--bump traveling waves. We first illustrate the case of an unstable limit cycle which gives place to an unstable traveling wave in Figure \ref{stable_unstable_solitions}. Here, the solution evolves towards a traveling wave, after a transient period. \begin{figure} \caption{Unstable traveling wave solution from an unstable limit cycle at $t=0$ min. (black continuous graph) and at $t=50$ (dashed graph), $80$ (red continuous graph) min., when the final shape is fully developed (right).} \label{stable_unstable_solitions} \end{figure} Further examples of stable limit cycles are presented in the form of families in the following sections. \subsection{Family A of long period orbits}\label{section-familyA} For this family we take initially the Hopf point $$ q_g=0.164212226,\quad v_g=0.335569670,\quad v_c=0.064430330\quad \theta_0= 0.16 $$ and continue into a family of stable limit cycles with period $T=1469.90$. The value of the period correspond to a circuit of length $L=10.49928571$~ km. The shape of some typical members of this family are shown in Figure \ref{evolve} left column. We took the velocity and density profiles as initial conditions for the system of PDEs (\ref{continuity}--\ref{balance}) and solved it numerically. In Figures \ref{(ev-fam3a)}, \ref{(ev-fam3b)}, \ref{(ev-fam3c)} we show the temporal evolution for the first 50 minutes, of some member of the family, when a steady state solution of the PDE has fully developed. \begin{figure} \caption{ Family A of long periodic cycles (left column) and family B of short periodic cycles (right column) in phase space (first row). Temporal evolution in the PDE for some members of the family~A, for 10, 20, 30, 40, 50 min, and for some members of the family~B, for 10, 20, 30, 40, 50, 60 min. are shown in the following rows in the Figure.\label{evolve} \label{(fam3)} \label{(fam5)} \label{(ev-fam3a)} \label{(ev-fam5a)} \label{(ev-fam3b)} \label{(ev-fam5b)} \label{(ev-fam3c)} \label{(ev-fam5c)} \label{evolve} \end{figure} \subsection{Family B of short period orbits}\label{section-familyB} Family B of short period orbits are shown in Figure~\ref{evolve}. It was computed by continuing to a stable limit cycle a Hopf point near the GH point in the stable part of the diagram. The values of the parameters of the generating Hopf point of the family are: $$ q_g=0.133886021,\quad v_g=0.204071932,\quad v_c=0.195928068,\quad \theta_0=0.16. $$ The period corresponds to a length of $2L=1.875802158$~km; its characteristics are shown in Figure~\ref{evolve}. \subsection{Stability of traveling waves} The first Lyapunov coefficient determines the stability of limit cycles emerging from a Hopf bifurcation. According to Proposition~\ref{ell1_positive}, the stability region of limit cycles in parameter space $q_g$--$v_g$ is the upper part $\Delta^+$ in Figure~\ref{diagram1}. The relationship between Lyapunov stability of a limit cycle and the corresponding traveling wave solution is a delicate issue. Since the dynamical system (\ref{KKode}) is planar, from the Jordan closed curve theorem, a limit cycle defines a bounded region and an unbounded region in phase space. 
So for example, in the case of an unbounded road with bounded boundary conditions, a limit cycle may be stable from the bounded region and unstable from the outside part (as is the case of a saddle--node limit cycle), so one cannot assure the existence of a \emph{bounded solution} that is not completely contained in a neighborhood of the limit cycle. As another example, with the same kind of boundary conditions, if a limit cycle is unstable (from the bounded and unbounded regions), the corresponding traveling wave is unstable: this follows from the Poincar\'e-Bendixon theorem that guarantees the existence of a bounded solution inside the limit cycle, and from the very definition of Lyapunov instability of the limit cycle. For the case of periodic boundary conditions. Neither instability of a limit cycle implies instability of the traveling wave, since for example an unstable limit cycle may contain in the bounded region an heteroclinic orbit connecting to a critical point, and by definition this heteroclinic does not satisfy periodic boundary conditions. The two examples of families presented in the previous sections point out to the conjecture that stable limit cycles correspond to stable traveling waves. Limit cycles of family A have a long period, and so are close to a homoclinic orbit, therefore they spend a long time close to a critical point. This gives the family its sharp characteristic shown in Figure~\ref{(fam3)}. In particular our approximation of the limit cycle in MatCont reveals not to be precise enough to simulate the exact shape of the traveling wave, and therefore a short transient occurs before the complete profile develops. This is becomes evident for several member of the family in Figures \ref{(ev-fam3a)}, \ref{(ev-fam3b)} and \ref{(ev-fam3c)} . For family B, having short period, the numerical approximation to the limit cycle with MatCont is good enough, as the initial profile at time $t=0$ is very similar to the fully developed profile. This behavior is shown in Figures~\ref{(ev-fam5a)}, \ref{(ev-fam5b)} and \ref{(ev-fam5c)}. \subsection{Multiple bump traveling waves} Multiple bump traveling waves are obtained by considering values of $m>1$. Thus for a limit cycle of minimal period $T$, there is one bump traveling wave in a road of length $L=T/\rho_{max}$ and a two bump traveling wave in a road of length $2L.$ In Figure~\ref{2bump} we show a two--bump traveling wave obtained by the condition (\ref{resonance}) with $m=2$. \begin{figure} \caption{A two-bump traveling wave. } \label{2bump} \end{figure} \section{Conclusions}\label{REMsection} In this paper, we study traveling waves for the system of PDE (\ref{continuity}, \ref{balance}) for the Kerner--Konh\"auser fundamental diagram by the usual reduction to a system of ODE. We study the surface of critical points, and we analyzed thoroughly the cuspidal region in the parameter space $q_g$--$v_g$. We find, analytically and numerically, a complex map of Hopf, Takens--Bogdanov, Bautin, homoclinics and heteroclinic bifurcations curves. This scenario is organized around a degenerate Takens Bogdanov point of bifurcation, according to the bifurcation diagram (\ref{lips}) due to Dumortier et al \cite{Du}. Even though, there is a considerable simplification in the solution space, the dynamical system reveals the complexity of the space of solutions, which make us expect more complexity in the case of the PDE. 
Dynamical structures of the ODE system can tell us relevant things about the existence of periodic solutions in bounded domains or bounded solutions in unbounded domains. In particular, limit cycles can be related to periodic solutions. Homoclinic and heteroclinic trajectories describe traveling waves that tend to a homogeneous solution as $\xi\to\pm\infty.$ The numerical results obtained in this work suggest that stable limit cycles yield stable traveling waves and vice versa. The non--linear stability is a more complicated issue which needs further study. \appendix \section{Proof of Proposition \ref{FirstLyapunov}} We expand the dynamical system (\ref{KKode}) around the critical point in order to compute the first Lyapunov coefficient. Let $w_1=v-v_c$ and $w_2=y,$ then system (\ref{KKode}) is written as \begin{eqnarray}\label{w12} w_1'&=&f_1(w_1,w_2)=w_2, \\ w_2'&=& f_2(w_1,w_2) = \lambda q_g \left(1-\frac{\theta h^2}{(1+hw_1)^2}\right) w_2 -\mu q_g \frac{(v_e-w_1-v_c)}{(w_1+v_c +v_g)}, \end{eqnarray} with $h=\frac{1}{v_c+v_g}$. By the Hopf theorem, choosing $\theta$ as the reference parameter: if $\theta=\theta_0=(v_c+v_g)^2$ then $b(\theta_0)=0$ and the critical point $(v_c,0)$ has purely imaginary eigenvalues $l_{1,2}=\pm i\omega_0$. Moreover, $$b'(\theta_0)=-\frac{\lambda q_g}{(v_c+v_g)^2}< 0,$$ thus a limit cycle bifurcates from the critical point. Its stability relies on the sign of the first Lyapunov coefficient, which we will explicitly calculate. Expanding in Taylor series around $(0,0)$, we write system (\ref{KKode}) in the form $$ \vec w'=A \vec w + \frac{1}{2}B(\vec w,\vec w) + \frac{1}{6}C(\vec w,\vec w,\vec w)+ \dots, $$ where the bilinear and trilinear forms are defined, with $\xi=(\xi_1,\xi_2)$, $\eta=(\eta_1,\eta_2)$, $\zeta=(\zeta_1,\zeta_2)$, as \begin{equation}\label{B} B(\xi,\eta)=\left( \begin{array}{c} \frac{\partial^2 f_1}{\partial w_1^2}\xi_1\eta_1 +\frac{\partial^2 f_1}{\partial w_1\partial w_2} (\xi_1\eta_2+\eta_1\xi_2)+ \frac{\partial^2 f_1}{\partial w_2^2}\xi_2\eta_2 \\[5pt] \frac{\partial^2 f_2}{\partial w_1^2}\xi_1\eta_1 +\frac{\partial^2 f_2}{\partial w_1\partial w_2} (\xi_1\eta_2+\eta_1\xi_2) + \frac{\partial^2 f_2}{\partial w_2^2}\xi_2\eta_2 \end{array} \right) \end{equation} and $$ C(\xi,\eta,\zeta) = $$ \begin{equation}\scriptsize \left( \begin{array}{c} \frac{\partial^3 f_1}{\partial w_1^3}\xi_1\eta_1\zeta_1+ \frac{\partial^3 f_1}{\partial w_1^2 \partial w_2}( \xi_1\eta_1\zeta_2+\xi_2\eta_1\zeta_1+\xi_1\eta_2\zeta_1)+ \frac{\partial^3 f_1}{\partial w_1 \partial w_2^2} (\xi_1\eta_2\zeta_2+\xi_2\eta_2\zeta_1+\xi_2\eta_1\zeta_2)+ \frac{\partial^3 f_1}{\partial w_2^3} \xi_2\eta_2\zeta_2\\[5pt] \frac{\partial^3 f_2}{\partial w_1^3}\xi_1\eta_1\zeta_1+ \frac{\partial^3 f_2}{\partial w_1^2 \partial w_2}( \xi_1\eta_1\zeta_2+\xi_2\eta_1\zeta_1+\xi_1\eta_2\zeta_1)+ \frac{\partial^3 f_2}{\partial w_1 \partial w_2^2} (\xi_1\eta_2\zeta_2+\xi_2\eta_2\zeta_1+\xi_2\eta_1\zeta_2)+ \frac{\partial^3 f_2}{\partial w_2^3} \xi_2\eta_2\zeta_2 \end{array} \right) \end{equation} According to (\ref{w12}), $$ \frac{\partial f_1}{\partial w_1}=0\quad\mbox{and}\quad \frac{\partial f_1}{\partial w_2}=1, $$ and all the other higher order derivatives of $f_1$ are zero; thus the first components of $B$ and $C$ are zero.
For the second components, we calculate the following partial derivatives \begin{eqnarray*} \frac{\partial f_2}{\partial w_1}&=& \frac{2 \lambda q_g h^3 \theta}{(1+h w_1)^3}w_2-\frac{\mu q_g(v_e'-1)}{(w_1+v_c +v_g)}+ \frac{\mu q_g(v_e-w_1-v_c)}{(w_1+v_c+v_g)^2},\\ \frac{\partial f_2}{\partial w_2}&=& \lambda q_g \left(1-\frac{\theta h^2 }{(1+hw_1)^2}\right). \\ \end{eqnarray*} The derivatives of second order are: \begin{eqnarray*} \frac{\partial^2 f_2}{\partial w_1^2}&=& \frac{-6\lambda q_g h^4 \theta}{(1+h w_1)^4}w_2+\\ && \frac{\mu q_g}{(w_1+v_c+v_g)}\left[\frac{ 2(v_e'-1)}{(w_1+v_c +v_g)}-v_e''- \frac{2(v_e-w_1-v_c)}{(w_1+v_c +v_g)^2}\right], \\ \frac{\partial^2 f_2}{\partial w_2 \partial w_1}&=&\frac{2 \lambda q_g h^3 \theta}{(1+h w_1)^3},\\ \frac{\partial^2 f_2}{\partial w_2^2}&=&0, \end{eqnarray*} when these derivatives are evaluated at $w_1=w_2=0$ yields \begin{eqnarray*} \frac{\partial^2 f_2}{\partial w_1^2} &=& \frac{(2\omega_0^2-\mu q_g v_e'')}{(v_c +v_g)}.\\ \frac{\partial^2 f_2}{\partial w_2 \partial w_1}&=& 2 \lambda q_g h ,\\ \frac{\partial^2 f_2}{\partial w_2^2}&=&0. \end{eqnarray*} For the third order partial derivatives we get \begin{eqnarray*} \frac{\partial^3 f_2}{\partial w_1^3}&=& \frac{24 \lambda q_g h^5 \theta}{(1+h w_1)^5}w_2+\frac{\mu q_g}{(w_1+v_c+v_g)}\cdot\\ && \left[\frac{ 3v_e''}{(w_1+v_c +v_g)}-v_e'''-\frac{ 6(v_e'-1)}{(w_1+v_c +v_g)^2} +\frac{6(v_e-w_1-v_c)}{(w_1+v_c+v_g)^3}\right], \end{eqnarray*} and \begin{eqnarray*} \frac{\partial^3 f_2}{\partial w_2 \partial w_1^2}&=&-\frac{6\lambda q_g h^4\theta}{(1+hw_1)^4},\\ \frac{\partial^3 f_2}{\partial w_2^2 \partial w_1}&=&\frac{\partial^3 f_2}{\partial w_2^3}=0. \end{eqnarray*} when they are evaluated in $w_1=w_2=0,$ we obtain \begin{eqnarray*} \frac{\partial^3 f_2}{\partial w_1^3}&=& \frac{1}{(v_c+v_g)}\left[\frac{ 3\mu q_g v_e''}{(v_c +v_g)}-\mu q_gv_e'''-\frac{ 6\omega_0^2}{(v_c +v_g)} \right]\\ \frac{\partial^3 f_2}{\partial w_2 \partial w_1^2} &=& -6\lambda q_g h^4\theta. \end{eqnarray*} Then $B$ and $C$ are equal to $$B(\xi,\eta)= \left( \begin{array}{l} 0 \\ \frac{(2 \omega_0^2-\mu q_g v_e''(v_c))}{(v_c+v_g)}\xi_1 \eta_1+2\lambda q_g h (\xi_1 \eta_2+\eta_1\xi_2 ) \end{array} \right) $$ and\goodbreak $$C(\vec \xi, \vec \eta,\vec \zeta)=$$ $$\scriptsize \left( \begin{array}{l} 0 \\ \frac{1}{(v_c+v_g)}\left[\frac{3 \mu q_gv_e''(v_c)}{(v_c+v_g)}-\mu q_g v_e'''(v_c)-\frac{6\omega_0^2}{v_c+v_g}\right]\xi_1 \eta_1 \zeta_1 -6\lambda q_g h^4\theta( \xi_1\eta_1\zeta_2+\xi_2\eta_1\zeta_1+\xi_1\eta_2\zeta_1) \end{array} \right). $$ To calculate the first Lyapunov coefficient we have first to calculate vectors $\vec q$ and $\vec p$ such that $A \vec q=\omega_0 i \vec q$ and $A^T \vec p=-\omega_0 i \vec p,$ respectively, and they satisfy $\langle \vec p, \vec q \rangle=1$. We take $\vec q^T=(1, \omega_0 i)$ and $ \vec p^T=\frac{1}{2}(1,\frac{i}{\omega_0})$. Now we have to calculate $g_{20}=\langle \vec p, B(\vec q, \vec q) \rangle,$ $g_{11}=\langle \vec p, B(\vec q, \vec{\overline{q}} ) \rangle$ and $g_{21}=\langle \vec p, C(\vec q, \vec q, \vec{\overline{q}} )\rangle$ in order to evaluate \begin{equation}\label{l1} \ell_1=\frac{1}{2 \omega_0^2} Re(i g_{20} g_{11}+\omega_0 g_{21}), \end{equation} which is the first Lyapunov coefficient. 
Now, \begin{eqnarray*} g_{20}&=& 2 \lambda q_g h-\frac{(2\omega_0^2-\mu q_g v_e''(v_c)) i}{2\omega_0(v_c+v_g)}, \end{eqnarray*} \begin{eqnarray*} g_{11}&=& -\frac{(2\omega_0^2-\mu q_g v_e''(v_c)) i}{2\omega_0(v_c+v_g)}, \end{eqnarray*} \begin{eqnarray*} g_{21}&=& -\frac{i}{2\omega_0(v_c+v_g)}\left[\frac{3\mu q_g v_e''(v_c)}{(v_c+v_g)}-\mu q_g v_e'''(v_c) -\frac{6\omega_0^2}{(v_c+v_g)}\right]-3\lambda q_g h^4\theta. \end{eqnarray*} Thus \begin{eqnarray*} ig_{20}g_{11}&=& \frac{ \lambda q_g h(2\omega_0^2-\mu q_g v_e''(v_c))}{\omega_0(v_c+v_g)} -\frac{(2\omega_0^2-\mu q_g v_e''(v_c))^2}{4\omega_0^2(v_c+v_g)^2}\, i. \end{eqnarray*} Substituting these values in (\ref{l1}) we obtain \begin{eqnarray*} \ell_1(\theta_0) &=&-\frac{\lambda\mu q_g^2 h}{2\omega_0^3(v_c+v_g)}\left( \frac{v_e'(v_c)-1}{v_c+v_g}+v_e''(v_c)\right). \end{eqnarray*} \section{Proof of Theorem \ref{DTB}} Expanding in Taylor series $c(w_1)=\lambda q_g (1-\frac{\theta h^2}{(1+hw_1)^2})$ and $f(w_1)=\mu q_g \frac{v_e-w_1-v_c}{w_1+v_c +v_g}=L(w_1)(v_e-w_1-v_c)$, where $v_e=v_e(v_c+w_1)$ and $L(w_1)=\frac{\mu q_g}{w_1+v_c+v_g}$, around $w_1=0$ we obtain: $$(1+hw_1)^{-2} =1-2h w_1+3 h^2 w_1^2-4 h^3 w_1^3+5h^4 w_1^4-\dots $$ and $$ c(w_1)=\lambda q_g (1-\theta h^2(1-2h w_1+3 h^2 w_1^2-4 h^3 w_1^3+5h^4 w_1^4-\dots )). $$ If we choose $\theta_0=(v_c+v_g)^2$ then $\theta_0 h^2=1$ and $$ c(w_1)=\lambda q_g ( 2h w_1-3 h^2 w_1^2+4 h^3 w_1^3-5h^4 w_1^4+\dots ). $$ Then $$c(w_1)w_2= \lambda q_g ( 2h w_1 w_2-3 h^2 w_1^2 w_2+\dots)=b_2 w_1 w_2+ b_3w_1^2 w_2+\dots,$$ where $$ b_2=2\lambda q_g h,\quad b_3= -3\lambda q_g h^2. $$ On the other hand, \begin{equation} f(w_1)=f(0)+f'(0) w_1+ \frac{1}{2} f''(0) w_1^2 + \frac{1}{6}f'''(0) w_1^3+\dots \end{equation} with \begin{eqnarray*} f'(w_1)&=&L'(w_1) (v_e-w_1-v_c)+L(w_1)(v_e'-1), \\ f''(w_1)&=&L''(w_1) (v_e-w_1-v_c)+2L'(w_1)(v_e'-1)+L(w_1) v_e'', \\ f'''(w_1)&=&L'''(w_1)(v_e-w_1-v_c)+3L''(w_1)(v_e'-1)+3L'(w_1)v_e''+L(w_1) v_e'''. \\ \end{eqnarray*} Evaluating these derivatives at $w_1=0$ and using the hypotheses $v_e(v_c)=v_c$, $v_e'(v_c)=1$, $v_e''(v_c)=0$, we obtain $$ f(w_1)= \frac{1}{6}f'''(0) w_1^3 +\dots, $$ so that, taking into account the minus sign in (\ref{w12}), $$ -f(w_1)=a_3 w_1^3+a_4 w_1^4+\dots $$ Given that $a_2=0$ and $a_3 b_2 \neq 0$ we can write system (\ref{w12}) in the normal form (\ref{orbitally-equivalent}) as \begin{eqnarray*} \dot{w}_0&=&w_1, \\ \dot{w}_1&=& a_3 w_0^3 + b_2 w_0 w_1 + b'_3 w_0^2 w_1+O(\|(w_0,w_1)\|^5), \\ \end{eqnarray*} where $a_3=\frac{-\mu q_g v_e'''(v_c)}{6(v_c+v_g)},$ and $b'_3=b_3-\frac{3b_2 a_4}{5a_3}.$ By hypothesis $v_e'''(v_c)<0,$ therefore $a_3>0,$ and we are in the saddle case. \end{document}
\begin{document} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{lemme}[thm]{Lemma} \newtheorem{conj}[thm]{Conjecture} \newtheorem*{theoetoile}{Theorem} \newtheorem*{conjetoile}{Conjecture} \newtheorem*{theoetoilefr}{Théorème} \newtheorem*{propetoilefr}{Proposition} \theoremstyle{definition} \newtheorem{defi}[thm]{Definition} \newtheorem{example}[thm]{Example} \newtheorem{examples}[thm]{Examples} \newtheorem{question}[thm]{Question} \newtheorem{Rem}[thm]{Remark} \newtheorem{Notation}[thm]{Notation} \numberwithin{equation}{section} \newcommand{\On}[1]{\mathcal{O}_{#1}} \newcommand{\En}[1]{\mathcal{E}_{#1}} \newcommand{\Fn}[1]{\mathcal{F}_{#1}} \newcommand{\tFn}[1]{\mathcal{\tilde{F}}_{#1}} \newcommand{\hum}[1]{hom_{\mathcal{A}}({#1})} \newcommand{\hcl}[2]{#1_0 \lbrack #1_1|#1_2|\ldots|#1_{#2} \rbrack} \newcommand{\hclp}[3]{#1_0 \lbrack #1_1|#1_2|\ldots|#3|\ldots|#1_{#2} \rbrack} \newcommand{\mathsf{Mod}}{\mathsf{Mod}} \newcommand{\mathsf{D}}{\mathsf{D}} \newcommand{D_{\mathbb{C}}}{D_{\mathbb{C}}} \newcommand{\mathsf{D}^{b}_{dg,\mathbb{R}-\mathsf{C}}(\mathbb{C}_X)}{\mathsf{D}^{b}_{dg,\mathbb{R}-\mathsf{C}}(\mathbb{C}_X)} \newcommand{[\mspace{-1.5 mu} [}{[\mspace{-1.5 mu} [} \newcommand{] \mspace{-1.5 mu} ]}{] \mspace{-1.5 mu} ]} \newcommand{\mathcal{K}u}[2]{\mathfrak{K}_{#1,#2}} \newcommand{\iKu}[2]{\mathfrak{K^{-1}}_{#1,#2}} \newcommand{B^{e}}{B^{e}} \newcommand{\op}[1]{#1^{\opp}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\Ab}[1]{#1/\lbrack #1 , #1 \rbrack} \newcommand{\mathbb{D}}{\mathbb{D}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\omega}{\omega} \newcommand{\mathcal{K}}{\mathcal{K}} \newcommand{\mathcal{H}\mathcal{H}}{\mathcal{H}\mathcal{H}} \newcommand{\env}[1]{{\vphantom{#1}}^{e}{#1}} \newcommand{{}^eA}{{}^eA} \newcommand{{}^eB}{{}^eB} \newcommand{{}^eC}{{}^eC} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\mathcal{R}}{\mathcal{R}} \newcommand{\mathcal{L}}{\mathcal{L}} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\mathcal{M}}{\mathcal{M}} \newcommand{\mathcal{N}}{\mathcal{N}} \newcommand{\mathcal{K}}{\mathcal{K}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\Hn^0_{\textrm{per}}}{\Hn^0_{\textrm{per}}} \newcommand{\Der_{\mathrm{perf}}}{\mathsf{D}_{\mathrm{perf}}} \newcommand{\textrm{Y}}{\textrm{Y}} \newcommand{\mathrm{gqcoh}}{\mathrm{gqcoh}} \newcommand{\mathrm{coh}}{\mathrm{coh}} \newcommand{\mathrm{cc}}{\mathrm{cc}} \newcommand{\mathrm{qcc}}{\mathrm{qcc}} \newcommand{\mathrm{qcoh}}{\mathrm{qcoh}} \newcommand{\obplus}[1][i \in I]{\underset{#1}{\overline{\bigoplus}}} \newcommand{\mathop{\otimes}\limits^{\rm L}}{\mathop{\otimes}\limits^{\rm L}} \newcommand{\textnormal{pt}}{\textnormal{pt}} \newcommand{\A}[1][X]{\mathcal{A}_{{#1}}} \newcommand{\dA}[1][X]{\mathcal{C}_{X_{#1}}} \newcommand{\conv}[1][]{\mathop{\circ}\limits_{#1}} \newcommand{\sconv}[1][]{\mathop{\ast}\limits_{#1}} \newcommand{\reim}[1]{\textnormal{R}{#1}_!} \newcommand{\roim}[1]{\textnormal{R}{#1}_\ast} \newcommand{\ldetens}{\overset{\mathnormal{L}}{\underline{\boxtimes}}} \newcommand{\bigr)}{\bigr)} \newcommand{\bigl(}{\bigl(} \newcommand{\mathscr{C}}{\mathscr{C}} \newcommand{\mathbf{1}}{\mathbf{1}} \newcommand{\underline{\boxtimes}}{\underline{\boxtimes}} \newcommand{\mathop{\underline{\otimes}}\limits^{\rm L}}{\mathop{\underline{\otimes}}\limits^{\rm L}} \newcommand{\mathrm{L}p}{\mathrm{L}p} 
\newcommand{\mathcal{L}te}{\mathop{\overline{\otimes}}\limits^{\rm L}} \author{Fran\c{c}ois Petit} \address{Max Planck Institute for Mathematics, Vivatsgasse 7, 53111 Bonn, Germany} \email{[email protected]} \title[Fourier-Mukai transform in the quantized setting]{Fourier-Mukai transform in the quantized setting} \begin{abstract} We prove that a coherent DQ-kernel induces an equivalence between the derived categories of DQ-modules with coherent cohomology if and only if the graded commutative kernel associated to it induces an equivalence between the derived categories of coherent sheaves. \end{abstract} \maketitle \section{Introduction} Fourier-Mukai transform has been extensively studied in algebraic geometry and is still an active area of research (see \cite{Barto} and \cite{Huy}). In the past years, several works have extended to the framework of deformation quantization of complex varieties some important aspects of the theory of integral transforms. In \cite{KS3}, Kashiwara and Schapira have developed the necessary formalism to study integral transforms in the framework of DQ-modules and some classical results have been extended to the quantized setting. In particular, in \cite{PantenBassa}, Ben-Bassat, Block and Pantev have quantized the Poincaré bundle and shown it induces an equivalence between certain derived categories of coherent DQ-modules. Our paper grew out of an attempt to understand which properties the integral transforms associated to the quantization of a coherent kernel would enjoy. The main result of this paper is Theorem \ref{finalmukai} which states that a coherent DQ-kernel induces an equivalence between the derived categories of DQ-modules with coherent cohomology if and only if the graded commutative kernel associated to it induces an equivalence between the derived categories of coherent sheaves. Whereas the second part of the proof relies on technique of cohomological completion, the first part builds upon the results of \cite{dgaff}. Indeed, as explained in section \ref{adjs} there is a pair of adjoint functors between the categories of qcc objects and the derived category of quasi-coherent sheaves. Both of these functors preserve compact generators. Then, roughly speaking, to show that a certain property of the quantized integral transform implies a similar properties at the commutative level it is sufficient to check that the category of objects satisfying this properties is thick and that this property hold at the quantized level for a compact generator of the triangulated category of qcc objects. This paper is organized as follow. In the second section we review some material about DQ-modules, cohomological completeness, compactly generated categories, thick subcategories and qcc modules. In the third section, we study integral transforms in the quantized setting. We start by extending the framework of convolutions of kernels of \cite{KS3} to the case of qcc objects and prove that an integral transform of qcc objects preserving compact objects has a coherent kernel (Thm. \ref{thm:kercoh}). Then, we concentrate our attention to the case of integral transforms with coherent kernel. We start by extending to DQ-modules some classical adjunction results and then establish the main theorem of this paper. Finally, in an appendix we show that the cohomological dimension of a certain functor is finite. 
\begin{flushleft} \textbf{Aknowledgement}: I would like to thank Oren Ben-Bassat, Andrei C\v{a}ld\v{a}raru, Carlo Rossi, Pierre Schapira, Nicol\`o Sibilla, Geordie Williamson for many useful discussions and Damien Calaque and Michel Vaquié for their careful reading of early version of the manuscript and numerous suggestions which have allowed substantial improvements. \end{flushleft} \section{Some recollections on DQ-modules}\label{adjs} \subsection{DQ-modules} We refer the reader to \cite{KS3} for an in-depth study of DQ-modules. Let us briefly fix some notations. Let $(X,\mathcal{O}_X)$ be a smooth complex algebraic variety endowed with DQ-algebroid $\mathcal{A}_X$. It is possible to define a quotient algebroid stack $\mathcal{A}_X / \hbar \mathcal{A}_X$. It comes with a canonical morphism of algebroid stack $\mathcal{A}_X \to \mathcal{A}_X / \hbar \mathcal{A}_X$. On a smooth complex algebraic variety the stack $\mathcal{A}_X / \hbar \mathcal{A}_X$ is equivalent to the algebroid stack associated to $\mathcal{O}_X$. Thus, there is a natural morphism $\mathcal{A}_X \to \mathcal{O}_X$ of $\mathbb{C}$-algebroid stacks which induces a functor \begin{equation*} \iota_g:\Mod(\mathcal{O}_X) \to \Mod(\mathcal{A}_X). \end{equation*} The functor $\iota_g$ is exact and fully faithful and induces a functor \begin{equation*} \iota_g:\mathsf{D}(\mathcal{O}_X) \to \mathsf{D}(\mathcal{A}_X). \end{equation*} \begin{defi} We denote by $\gr_\hbar: \mathsf{D}(\mathcal{A}_X) \to \mathsf{D}(\mathcal{O}_X)$ the left derived functor of the right exact functor $\Mod(\mathcal{A}_X) \to \Mod(\mathcal{O}_X)$ given by $\mathcal{M} \mapsto \mathcal{M} / \hbar \mathcal{M}\simeq \mathcal{O}_X \otimes_{\mathcal{A}_X} \mathcal{M}$. For $\mathcal{M} \in \mathsf{D}(\mathcal{A}_X)$ we call $\gr_\hbar(\mathcal{M})$ the graded module associated to $\mathcal{M}$. We have \begin{equation*} \gr_{\hbar} \mathcal{M} \simeq \mathcal{O}_X \mathop{\otimes}\limits^{\rm L}_{\mathcal{A}_X} \mathcal{M}. \end{equation*} \end{defi} Finally, we have the following proposition. \begin{prop}\label{double_adj} The functor $\gr_\hbar$ and $\iota_g$ define pairs of adjoint functors $(\gr_\hbar,\iota_g)$ and $(\iota_g,\gr_\hbar[-1])$. \end{prop} \subsection{Cohomologically complete modules} We briefly present the notion of cohomologically complete module and state the few results that we need. Again, we refer the reader to \cite[§1.5]{KS3} for a detailed study of this notion. We denote by $\mathbb{C}^\hbar$ the ring of formal power series with coefficient in $\mathbb{C}$. Let $\mathcal{R}$ be a $\mathbb{C}^\hbar$-algebroid stack without $\hbar$-torsion. We set $\mathcal{R}_0=\mathcal{R}/ \hbar \mathcal{R}$ and $\mathcal{R}^{loc}=\mathbb{C}^{\hbar,loc} \otimes_{\mathbb{C}^{\hbar}} \mathcal{R}$ where $\mathbb{C}^{\hbar,loc}$ is the field of formal Laurent's series. \begin{defi} An object $\mathcal{M} \in \mathsf{D}(\mathcal{R})$ is cohomologically complete if $\fRHom_{\mathcal{R}}(\mathcal{R}^{loc}, \mathcal{M}) \simeq 0$. We write $\mathsf{D}_{\mathrm{cc}}(\mathcal{R})$ for the full subcategory of $\mathsf{D}(\mathcal{R})$ whose objects are the cohomologically complete modules. \end{defi} The category $\mathsf{D}_{\mathrm{cc}}(\mathcal{R})$ is a triangulated subcategory of $\mathsf{D}(\mathcal{R})$. \begin{prop}[{\cite[Cor. 1.5.9]{KS3}}]\label{ccgr} Let $\mathcal{M} \in \mathsf{D}_{\mathrm{cc}}(\mathcal{R})$. If $\gr_{\hbar} \mathcal{M} \simeq 0$, then $\mathcal{M} \simeq 0$. 
\end{prop} \begin{prop}\label{isogr} Let $f:\mathcal{M} \to \mathcal{N}$ be a morphism of $\mathsf{D}_{\mathrm{cc}}(\mathcal{R})$. If $\gr_{\hbar}(f)$ is an isomorphism then $f$ is an isomorphism. \end{prop} By Proposition 1.5.6 of \cite{KS3}, for any object $\mathcal{M}$ of $\mathsf{D}(\mathcal{R})$, the object $\fRHom_{\mathcal{R}}((\mathcal{R}^{loc}/\mathcal{R})[-1],\mathcal{M})$ belongs to $\mathsf{D}_{cc}(\mathcal{R})$. \begin{defi}\label{def:cohocomp} We denote by $(\cdot)^{\mathrm{cc}}$\index{cc@$(\cdot)^{\mathrm{cc}}$} the functor \begin{equation*} \fRHom_{\mathcal{R}}((\mathcal{R}^{loc}/\mathcal{R})[-1],\cdot): \mathsf{D}(\mathcal{R}) \to \mathsf{D}(\mathcal{R}). \end{equation*} We call this functor the functor of cohomological completion. \end{defi} The name of functor of cohomological completion is also justified by the fact that $(\cdot)^{\mathrm{cc}} \circ (\cdot)^{\mathrm{cc}} \simeq (\cdot)^{cc}$. There is a natural transformation \begin{equation} \label{morcc} cc:\id \to (\cdot)^{cc}. \end{equation} It enjoys the following property. \begin{prop}[{\cite[Prop. 3.8]{dgaff}}]\label{cciso} The morphism of functors \begin{equation*} \gr_\hbar(cc):\gr_\hbar \circ \id \to\gr_\hbar \circ (\cdot)^{cc} \end{equation*} is an isomorphism in $\mathsf{D}(\mathcal{R}_0)$. \end{prop} \subsection{Compactly generated categories and thick subcategories} In this subsection, we review a few facts about compactly generated categories and thick subcategories. These facts play an essential role in the proof of Theorems \ref{thm:kercoh} and \ref{finalmukai}. A classical reference is \cite{Nee_book}. We also refer to \cite[§ 2]{BVdB}. \begin{defi} Let $\mathcal{T}$ be a triangulated category. Let $\mathfrak{G}=\lbrace G_i \rbrace_{i \in I}$ be a set of objects of $\mathcal{T}$. One says that $\mathfrak{G}$ generates $\mathcal{T}$ if the following condition is satisfied. \begin{center} If $F \in \mathcal{T}$ is such that for every $G_i \in \mathfrak{G}$ and $n \in \mathbb{Z}$ $\Hom_{\mathcal{T}}(G_i[n],F)=0$ then $F \simeq 0$. \end{center} \end{defi} \begin{defi} Assume that $\mathcal{T}$ is a cocomplete triangulated category. \begin{enumerate} [(i)] \item An object $L$ in $\mathcal{T}$ is compact if the functor $\Hom_{\mathcal{T}}(L, \cdot)$ commutes with coproducts. We write $\mathcal{T}^{c}$ for the full subcategory of $\mathcal{T}$ whose objects are the compact objects. \item The category $\mathcal{T}$ is compactly generated if it is generated by a set of compact objects. \end{enumerate} \end{defi} \begin{defi} \begin{enumerate}[(i)] \item A full subcategory of a triangulated category is thick if it is closed under isomorphisms and contains all direct summands of its objects. \item The thick envelop $\langle \mathcal{S} \rangle$ of a set of objects $\mathcal{S}$ of a triangulated category $\mathcal{T}$ is the smallest thick triangulated subcategory of $\mathcal{T}$ containing $\mathcal{S}$. \item One says that $\mathcal{S}$ classicaly generates $\mathcal{T}$ if its thick envelop is equal to $\mathcal{T}$. \end{enumerate} \end{defi} \begin{thm}[{\cite{Nee_comp} and \cite{Rav}}] \label{Rav_Nee} Let $\mathcal{T}$ be compactly generated triangulated category. Then a set of compact objects $\mathcal{S}$ of $\mathcal{T}$ classically generates $\mathcal{T}^{c}$ if and only if it generates $\mathcal{T}$. \end{thm} The next result is probably well known. We include a proof for the sake of completeness. 
\begin{prop} \label{thickiso} Let $F, \; G: \mathcal{T} \to \mathcal{S}$ be two functors of triangulated categories and $\alpha:F \Rightarrow G$ a natural transformation between them. Then the full subcategory $\mathcal{T}_\alpha$ of $\mathcal{T}$ whose objects are the $X$ such that $\alpha_X:F(X) \to G(X)$ is an isomorphism is a thick subcategory of $\mathcal{T}$. \end{prop} \begin{proof} The category $\mathcal{T}_\alpha$ is triangulated and is closed under isomorphism. Let $X$ be an object of $\mathcal{T}_\alpha$ and $Y$ and $Z$ two objects of $\mathcal{T}$ such that $X \simeq Y \oplus Z$. By definition of the direct sum there is a map $i_Y: Y \to Y \oplus Z$ and a map $p_Y: Y \oplus Z \to Y$ such that $p_Y \circ i_Y= \id_Y$. Since $\alpha$ is a natural transformation we have the following commutative diagram. \begin{equation*} \xymatrix{ F(Y) \ar[r]^-{\alpha_Y} \ar[d]_-{F(i_Y)} & G(Y) \ar[d]^-{G(i_Y)}\\ F(Y \oplus Z) \ar[r]^-{\sim}_-{\alpha_{Y \oplus Z}} \ar[d]_-{F(p_{Y})} & G(Y \oplus Z) \ar[d]^-{G(p_{Y})}\\ F(Y) \ar[r]^-{\alpha_Y} & G(Y). } \end{equation*} It follows that $F(p_Y) \circ \alpha_{Y \oplus Z}^{-1} \circ G(i_Y)$ is the inverse of $\alpha_Y$. Thus, $Y$ belongs to $\mathcal{T}_\alpha$. It follows that $\mathcal{T}_\alpha$ is a thick subcategory of $\mathcal{T}$. \end{proof} \subsection{Qcc modules} We review some facts about qcc modules. They may be considered as a substitute to quasi-coherent sheaves in the quantized setting. For a more detailed study one refers to \cite{dgaff}. In this subsection, $(X, \mathcal{O}_X)$ is a smooth complex algebraic variety endowed with a DQ-algebroid $\mathcal{A}_X$. We denote by $\mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X)$ the derived category of sheaves with quasi-coherent cohomology and by $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{O}_X)$ (resp. $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_X)$) the derived category of bounded complexes of $\mathcal{O}_X$-modules (resp. $\mathcal{A}_X$-modules) with coherent cohomology. \begin{defi} An object $\mathcal{M} \in \mathsf{D}(\mathcal{A}_X)$ is $\mathrm{qcc}$ if it is cohomologically complete and $\gr_\hbar \mathcal{M} \in \mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X)$. The full subcategory of $\mathsf{D}(\mathcal{A}_X)$ formed by $\mathrm{qcc}$ modules is denoted by $\mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_X)$\index{deriveqcc@$\mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_X)$}. \end{defi} One easily shows that the category $\mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_X)$ is a triangulated subcategory of $\mathsf{D}(\mathcal{A}_X)$. \begin{prop}[{\cite[Cor. 3.14]{dgaff}}] If $\mathcal{N} \in \mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X)$, then $\iota_g(\mathcal{N}) \in \mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_X)$. \end{prop} The functors $\gr_\hbar$ and $\iota_g$ induce the following functors. \begin{equation}\label{map:qcccoh} \xymatrix{ \mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_X) \ar@<.4ex>[r]^-{\gr_\hbar} & \mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X) \ar@<.4ex>[l]^-{\iota_g}.} \end{equation} We have the following proposition. \begin{prop}\label{prop:compres} Let $X$ be a smooth complex algebraic variety endowed with a DQ-algebroid $\mathcal{A}_X$. The functors $\iota_g:\mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X) \to \mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_X)$ and $\gr_\hbar: \mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_X) \to \mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X)$ preserve compact generators. \end{prop} \begin{proof} \begin{enumerate}[(i)] \item We refer to \cite[Cor. 3.15]{dgaff} for the case of the functor $\iota_g$. 
\item Let us prove the claim for the functor $\gr_\hbar$. Let $\mathcal{G}$ be a compact generator of $\mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_X)$ and let $\mathcal{M} \in \mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X)$ be such that $\RHom_{\mathcal{O}_X}(\gr_\hbar \mathcal{G},\mathcal{M})\simeq 0$. Then, we have \begin{align*} \RHom_{\mathcal{O}_X}(\gr_\hbar \mathcal{G},\mathcal{M}) &\simeq \RHom_{\mathcal{A}_X}( \mathcal{G}, \iota_g \mathcal{M})\\ &\simeq 0. \end{align*} It follows that $\iota_g(\mathcal{M}) \simeq 0$. Hence, for every $i \in \mathbb{Z}, \, \Hn^i(\iota_g \mathcal{M}) \simeq 0$. Since $\iota_g:\Mod(\mathcal{O}_X) \to \Mod(\mathcal{A}_X)$ is fully faithful and exact, we have $\Hn^i(\mathcal{M})\simeq 0$. It follows that $\mathcal{M} \simeq 0$. Moreover, $\gr_\hbar \mathcal{G}$ is coherent and, on a smooth algebraic variety, coherent complexes are compact objects of $\mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X)$. \end{enumerate} \end{proof} \begin{Rem} We know by \cite{BVdB} that for a complex algebraic variety the category $\mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X)$ is compactly generated by a single compact object, i.e.,\ by a perfect complex. As shown in \cite{dgaff}, this implies in particular that $\mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_X)$ is compactly generated by a single compact object. \end{Rem} \begin{cor}\label{compcohdq} Let $X$ be a smooth complex algebraic variety endowed with a DQ-algebroid. \begin{enumerate}[(i)] \item If $\mathcal{G}$ is a compact generator of $\mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X)$, then $\gr_\hbar \iota_g \mathcal{G}$ is still a compact generator of $\mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X)$. \item One has $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{O}_X)= \langle \gr_\hbar \iota_g(\mathcal{G}) \rangle $. \end{enumerate} \end{cor} \begin{proof} \begin{enumerate}[(i)] \item This follows immediately from Proposition \ref{prop:compres}. \item On a smooth complex algebraic variety the category of compact objects of $\mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_X)$ is equivalent to $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{O}_X)$. Hence the result follows from Theorem \ref{Rav_Nee}. \end{enumerate} \end{proof} Finally, let us recall the following result from \cite{dgaff}. \begin{thm}\label{qccompact} An object $\mathcal{M}$ of $\mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_X)$ is compact if and only if $ \mathcal{M} \in \mathsf{D}^{b}_{\mathrm{coh}}(\mathcal{A}_X)$ and $\mathcal{A}_X^{loc} \otimes_{\mathcal{A}_X} \mathcal{M}=0$. \end{thm} \section{Fourier-Mukai functors in the quantized setting} The aim of this section is to study integral transforms in the framework of DQ-modules. In the first subsection, we review some results from \cite{KS3} concerning the convolution of DQ-kernels. In the second one, we adapt to qcc modules the framework for integral transforms developed in \cite{KS3}. We prove that an integral transform between categories of qcc modules which preserves compact objects has a coherent kernel. In the last subsection, we focus our attention on integral transforms of coherent DQ-modules on smooth projective varieties. We first extend some classical adjunction results and finally prove that a coherent DQ-kernel induces an equivalence between the derived categories of DQ-modules with coherent cohomology if and only if the graded commutative kernel associated to it induces an equivalence between the derived categories of coherent sheaves. Throughout this section, we use the following notations. \begin{Notation}\label{not:www1} \begin{enumerate}[(i)] \item If $X$ is a smooth complex variety endowed with a DQ-algebroid $\mathcal{A}_X$, we denote by $X^a$ the same variety endowed with the opposite DQ-algebroid $\mathcal{A}_X^{op}$ and we write $\mathcal{A}_{X^a}$ for this algebroid. 
\item Consider a product of smooth complex varieties $X_1\times X_2 \times X_3$, we write it $X_{123}$. We denote by $p_i$ the $i$-th projection and by $p_{ij}$ the $(i,j)$-th projection ({\em e.g.,} $p_{13}$ is the projection from $X_1\times X_1^a\times X_2$ to $X_1\times X_2$). \item We write $\mathcal{A}_i$ and $\mathcal{A}_{ij^a}$ instead of $\mathcal{A}_{X_i}$ and $\mathcal{A}_{X_i\times X_j^a}$ and similarly with other products. \item We set $\mathbb{D}^{\prime}_{\mathcal{A}_X}=\fRHom_{\mathcal{A}_X}(\cdot,\mathcal{A}_X): \mathsf{D}(\mathcal{A}_X)^{\opp} \to \mathsf{D}(\mathcal{A}_{X^a})$. \end{enumerate} \end{Notation} \subsection{Convolution of DQ-kernel} We review some results, from \cite{KS3}, concerning the convolution of DQ-kernels. \subsubsection{Tensor product and convolution of DQ-kernels} The tensor product of DQ-modules is given by \begin{defi}[{\cite[Def. 3.1.3]{KS3}}]\label{def:DQtens} Let $\mathcal{K}_i \in \mathsf{D}(\mathcal{A}_{i(i+1)^a})$ $(i=1, 2)$. We set \begin{equation*} \mathcal{K}_1 \mathop{\underline{\otimes}}\limits^{\rm L}_{\mathcal{A}_2} \mathcal{K}_2 = p_{12}^{-1} \mathcal{K}_1 \mathop{\otimes}\limits^{\rm L}_{p_{12}^{-1}\mathcal{A}_{12^a}} \mathcal{A}_{123} \mathop{\otimes}\limits^{\rm L}_{p_{23^a}^{-1} \mathcal{A}_{23}} p_{23}^{-1} \mathcal{K}_2 \in \mathsf{D}(p_{13}^{-1}\mathcal{A}_{13^a}). \end{equation*} \end{defi} The composition of kernels is given by \begin{defi}\label{def:DQconv} Let $\mathcal{K}_i \in \mathsf{D}(\mathcal{A}_{i(i+1)^a})$ $(i=1, 2)$. We set \begin{align*} \mathcal{K}_1 \underset{2}{\ast} \mathcal{K}_2 =& \dR p_{13 \ast} (\mathcal{K}_1 \mathop{\underline{\otimes}}\limits^{\rm L}_{\mathcal{A}_2} \mathcal{K}_2) \in \mathsf{D}(\mathcal{A}_{13^a})\\ \mathcal{K}_1 \underset{2}{\circ} \mathcal{K}_2 =& \dR p_{13!} (\mathcal{K}_1 \mathop{\underline{\otimes}}\limits^{\rm L}_{\mathcal{A}_2} \mathcal{K}_2) \in \mathsf{D}(\mathcal{A}_{13^a}). \end{align*} \end{defi} \subsubsection{Finiteness and duality for DQ-modules} The following result is a special case of Theorem 3.2.1 of \cite{KS3}. \begin{thm} \label{coherence} Let $X_i$ $(i=1, \; 2, \; 3)$ be a smooth complex variety. For $i=1 \, ,2$, consider the product $X_i \times X_{i+1}$ and $\mathcal{K}_i \in \mathsf{D}_{\mathrm{coh}}^b(\mathcal{A}_{i (i+1)^a})$. Assume that $X_2$ is proper. Then the object $\mathcal{K}_1 \underset{2}{\circ} \mathcal{K}_2$ belongs to $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_{13^a})$. \end{thm} Let $X_i$ $(i=1, \, 2, \, 3)$ be a smooth projective complex variety endowed with the Zariski topology and let $\mathcal{A}_i$ be a DQ-algebroid on $X_i$. We recall some duality results for DQ-modules from \cite[Chap. 3]{KS3}. First, we need the following result. \begin{prop} [{\cite[p. 93]{KS3}}] Let $\mathcal{K}_i \in \mathsf{D}^b(\mathcal{A}_{i(i+1)^a})$ $(i=1 \, ,2)$ and let $\mathcal{L}$ be a bi-invertible $\mathcal{A}_2 \otimes \mathcal{A}_{2^a}$-module. Then, there is a natural isomorphism \begin{equation*} (\mathcal{K}_1 \underset{2}{\circ} \mathcal{L})\underset{2}{\circ} \mathcal{K}_2 \simeq \mathcal{K}_1 \underset{2}{\circ} (\mathcal{L}\underset{2}{\circ} \mathcal{K}_2). \end{equation*} \end{prop} We denote by $\omega_i$ the dualizing complexe for $\mathcal{A}_i$. It is a bi-invertible $(\mathcal{A}_i \otimes \mathcal{A}_{i^a})$-module. 
Since the category of bi-invertible $(\mathcal{A}_i \otimes \mathcal{A}_{i^a})$-modules is equivalent to the category of coherent $\mathcal{A}_{ii^a}$-modules simple along the diagonal, we will regard $\omega_i$ as an $\mathcal{A}_{ii^a}$-module supported by the diagonal and we will still denote it by $\omega_i$. \begin{thm}[{\cite[Theorem 3.3.3]{KS3}}] \label{dualdqserre} Let $\mathcal{K}_i \in \mathsf{D}_{\mathrm{coh}}^b(\mathcal{A}_{i (i+1)^a})$ $(i=1, \, 2)$. There is a natural isomorphism in $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_{1^a3})$ \begin{equation*} (\mathbb{D}^{\prime}_{\mathcal{A}_{12^a}}\mathcal{K}_1) \underset{2^a}{\circ} \omega_{2^a} \underset{2^a}{\circ} (\mathbb{D}^{\prime}_{\mathcal{A}_{23^a}}\mathcal{K}_2) \stackrel{\sim}{\to} \mathbb{D}^{\prime}_{\mathcal{A}_{13^a}}(\mathcal{K}_1 \underset{2}{\circ} \mathcal{K}_2). \end{equation*} \end{thm} \subsection{Integral transforms for qcc modules} In this section, we adapt to qcc objects the framework of convolutions of kernels of \cite{KS3}. In view of Definitions \ref{def:DQtens} and \ref{def:DQconv}, it is easy, using the functor of cohomological completion (see Definition \ref{def:cohocomp}), to define a tensor product and a composition for cohomologically complete modules. \begin{defi} Let $\mathcal{K}_i \in \mathsf{D}_{\mathrm{cc}}(\mathcal{A}_{i(i+1)^a})$ $(i=1, 2)$. We set \begin{align*} \mathcal{K}_1 \mathcal{L}te_{\mathcal{A}_2} \mathcal{K}_2 =& (\mathcal{K}_1 \mathop{\underline{\otimes}}\limits^{\rm L}_{\mathcal{A}_2} \mathcal{K}_2)^{\mathrm{cc}} \in \mathsf{D}_{cc}(p_{13}^{-1}\mathcal{A}_{13^a}),\\ \mathcal{K}_1 \underset{2}{\overline{\ast}} \mathcal{K}_2 =& \dR p_{13 \ast} (\mathcal{K}_1 \mathcal{L}te_{\mathcal{A}_2} \mathcal{K}_2) \in \mathsf{D}_{cc}(\mathcal{A}_{13^a}),\\ \mathcal{K}_1 \underset{2}{\overline{\circ}} \mathcal{K}_2 =& \dR p_{13!} (\mathcal{K}_1 \mathcal{L}te_{\mathcal{A}_2} \mathcal{K}_2) \in \mathsf{D}_{cc}(\mathcal{A}_{13^a}). \end{align*} \end{defi} \begin{Rem} If $\mathcal{K}_1 \in \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_{12^a})$ and $\mathcal{K}_2 \in \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_{23^a})$, then \begin{align*} \mathcal{K}_1 \mathcal{L}te_{\mathcal{A}_2} \mathcal{K}_2 & \simeq \mathcal{K}_1 \mathop{\underline{\otimes}}\limits^{\rm L}_{\mathcal{A}_2} \mathcal{K}_2,\\ \mathcal{K}_1 \underset{2}{\overline{\ast}} \mathcal{K}_2 & \simeq \mathcal{K}_1 \underset{2}{\ast} \mathcal{K}_2,\\ \mathcal{K}_1 \underset{2}{\overline{\circ}} \mathcal{K}_2 & \simeq \mathcal{K}_1 \underset{2}{\circ} \mathcal{K}_2. \end{align*} \end{Rem} \begin{lemme}\label{lemme:inversioncc} Let $\mathcal{K}_i \in \mathsf{D}_{\mathrm{cc}}(\mathcal{A}_{i(i+1)^a})$ $(i=1, 2)$. \begin{align*} \mathcal{K}_1 \underset{2}{\overline{\ast}} \mathcal{K}_2 \simeq & (\mathcal{K}_1 \underset{2}{\ast} \mathcal{K}_2)^{\mathrm{cc}},\\ \mathcal{K}_1 \underset{2}{\overline{\circ}} \mathcal{K}_2 \simeq & (\mathcal{K}_1 \underset{2}{\circ} \mathcal{K}_2)^{\mathrm{cc}}. \end{align*} \end{lemme} \begin{proof} Using morphism (\ref{morcc}), we get a map \begin{equation*} \mathcal{K}_1 \mathop{\underline{\otimes}}\limits^{\rm L}_{\mathcal{A}_2} \mathcal{K}_2 \to \mathcal{K}_1 \mathcal{L}te_{\mathcal{A}_2} \mathcal{K}_2. \end{equation*} It induces a morphism \begin{equation*} (\dR p_{13 \ast} (\mathcal{K}_1 \mathop{\underline{\otimes}}\limits^{\rm L}_{\mathcal{A}_2} \mathcal{K}_2))^{cc} \to (\dR p_{13 \ast} (\mathcal{K}_1 \mathcal{L}te_{\mathcal{A}_2} \mathcal{K}_2))^{cc}. 
\end{equation*} By Proposition 1.5.12 of \cite{KS3} the direct image of a cohomologically complete module is cohomologically complete. Then, \begin{equation*} (\dR p_{13 \ast} (\mathcal{K}_1 \mathcal{L}te_{\mathcal{A}_2} \mathcal{K}_2))^{cc} \simeq \dR p_{13 \ast} (\mathcal{K}_1 \mathcal{L}te_{\mathcal{A}_2} \mathcal{K}_2). \end{equation*} This gives us a map \begin{equation}\label{mor:comcc} (\mathcal{K}_1 \underset{2}{\ast} \mathcal{K}_2)^{cc} \to \mathcal{K}_1 \underset{2}{\overline{\ast}} \mathcal{K}_2. \end{equation} Using the fact that the functor $\gr_\hbar$ commutes with direct image and Proposition \ref{cciso}, we get the following commutative diagram. \begin{equation*} \xymatrix{\gr_\hbar ((\mathcal{K}_1 \underset{2}{\ast} \mathcal{K}_2)^{cc}) \ar[r]& \gr_\hbar( \mathcal{K}_1 \underset{2}{\overline{\ast}} \mathcal{K}_2) \\ \gr_\hbar (\mathcal{K}_1 \underset{2}{\ast} \mathcal{K}_2) \ar[u]^-{\gr_\hbar(cc)}_-{\omegar} \ar[r] & \gr_\hbar( \mathcal{K}_1 \underset{2}{\overline{\ast}} \mathcal{K}_2) \ar@{=}[u]\\ \dR p_{13 \ast} \gr_\hbar (\mathcal{K}_1 \mathop{\underline{\otimes}}\limits^{\rm L}_{\mathcal{A}_2} \mathcal{K}_2) \ar[u]_-{\omegar} \ar[r]^-{\sim}_-{\gr_\hbar(cc)} & (\dR p_{13 \ast} \gr_\hbar (\mathcal{K}_1 \mathcal{L}te_{\mathcal{A}_2} \mathcal{K}_2)) \ar[u]^-{\omegar}. \\ } \end{equation*} It follows that the morphism $\gr_\hbar ((\mathcal{K}_1 \underset{2}{\ast} \mathcal{K}_2)^{cc}) \to \gr_\hbar( \mathcal{K}_1 \underset{2}{\overline{\ast}} \mathcal{K}_2)$ is an isomorphism. Applying Proposition \ref{isogr}, we obtain that the morphism (\ref{mor:comcc}) is an isomorphism. The second formula is proved similarly. \end{proof} From now on all the varieties considered are smooth complex algebraic varieties endowed with the Zariski topology \begin{cor} Let $\mathcal{K}_i \in \mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_{i(i+1)^a})$ $(i=1, 2)$. The kernel $ \mathcal{K}_1 \underset{2}{\overline{\ast}} \mathcal{K}_2$ is an object of $\mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_{13^a})$. \end{cor} \begin{proof} This follows from Lemma \ref{lemme:inversioncc} and \cite[Prop 3.1.4]{KS3} which says that the functor $\gr_\hbar$ commutes with the compostion of DQ-kernel (see Definition \ref{def:DQconv}). \end{proof} Let $\mathcal{K} \in \mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_{12^a})$. The above corollary implies that the functor (\ref{MukaiDQqcc}) is well-defined. \begin{equation} \label{MukaiDQqcc} \scalebox{0.96}{$\Phi_\mathcal{K}: \mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_2) \to \mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_1), \quad \mathcal{M} \mapsto \mathcal{K} \underset{2}{\overline{\ast}} \mathcal{M} = Rp_{1\ast}(\mathcal{K} \mathcal{L}te_{p_2^{-1}\mathcal{A}_2} p_2^{-1} \mathcal{M}).$} \end{equation} Before proving Theorem \ref{thm:kercoh}, we need to establish the following result. \begin{prop}\label{prop:boundingcomplex} Let $\mathcal{M} \in \mathsf{D}_{cc}(\mathcal{A}_X)$. If $\gr_\hbar \mathcal{M} \in \mathsf{D}^b(\mathcal{O}_X)$ then $\mathcal{M} \in \mathsf{D}^b_{cc}(\mathcal{A}_X)$. \end{prop} \begin{proof} Let $\mathcal{M} \in \mathsf{D}_{cc}(\mathcal{A}_X)$ such that $\gr_\hbar \mathcal{M} \in \mathsf{D}^b(\mathcal{O}_X)$. It follows immediately from \cite[Prop. 1.5.8]{KS3}, that $\mathcal{M} \in \mathsf{D}^{+}(\mathcal{A}_X)$. Then to establish that $\mathcal{M} \in \mathsf{D}^{b}(\mathcal{A}_X)$, it is sufficient to prove that there exists a number $q$ such that $\tau^{\geq q} \mathcal{M} \in \mathsf{D}^{b}(\mathcal{A}_X)$. 
For that purpose, we essentially follow the proof of Proposition 1.5.8 of \cite{KS3}. Since $\gr_\hbar \mathcal{M} \in \mathsf{D}^b(\mathcal{O}_X)$, there exists $p \in \mathbb{Z}$ such that for every $i \geq p$, $\Hn^i(\gr_\hbar \mathcal{M})=0$. We deduce from the exact sequence $\Hn^i(\gr_\hbar \mathcal{M}) \to \Hn^{i+1}(\mathcal{M}) \stackrel{\hbar}{\to} \Hn^{i+1}(\mathcal{M}) \to \Hn^{i+1}(\gr_\hbar \mathcal{M})$ that $\Hn^i(\mathcal{M}) \stackrel{\hbar}{\to} \Hn^i(\mathcal{M})$ is an isomorphism for $i>p$. Thus, $\tau^{\geq p+1} \mathcal{M} \in \mathsf{D}(\mathcal{A}_X^{loc})$, which means that $\fRHom_{\mathcal{A}_X}(\mathcal{A}_X^{loc},\tau^{\geq p+1} \mathcal{M})\simeq \tau^{\geq p+1} \mathcal{M}$. Applying $\fRHom_{\mathcal{A}_X}(\mathcal{A}_X^{loc}, \cdot)$ to the distinguished triangle \begin{equation*} \scalebox{0.9}{$\tau^{\leq p} \mathcal{M} \to \mathcal{M} \to \tau^{\geq p+1} \mathcal{M} \stackrel{+1}{\to}$}, \end{equation*} we get the distinguished triangle \begin{equation*} \scalebox{0.9}{$\fRHom_{\mathcal{A}_X}(\mathcal{A}_X^{loc},\tau^{\leq p} \mathcal{M}) \to \fRHom_{\mathcal{A}_X}(\mathcal{A}_X^{loc}, \mathcal{M}) \to \fRHom_{\mathcal{A}_X}(\mathcal{A}_X^{loc},\tau^{\geq p+1} \mathcal{M}) \stackrel{+1}{\to}.$} \end{equation*} The module $\mathcal{M}$ is cohomologically complete. Hence, $\fRHom_{\mathcal{A}_X}(\mathcal{A}_X^{loc},\mathcal{M})\simeq 0$. It follows that \begin{equation*} \tau^{\geq p+1}\mathcal{M} \simeq \fRHom_{\mathcal{A}_X}(\mathcal{A}_X^{loc},\tau^{\leq p} \mathcal{M})[1]. \end{equation*} Corollary \ref{cor:wayout} implies that $\fRHom_{\mathcal{A}_X}(\mathcal{A}_X^{loc},\tau^{\leq p} \mathcal{M})[1] \in \mathsf{D}^{-}(\mathcal{A}_X)$. Then $\tau^{\geq p+1}\mathcal{M} \in \mathsf{D}^{+}(\mathcal{A}_X) \cap \mathsf{D}^{-}(\mathcal{A}_X)=\mathsf{D}^{b}(\mathcal{A}_X)$. Thus, $\mathcal{M} \in \mathsf{D}^{b}(\mathcal{A}_X)$. \end{proof} We now restrict our attention to the case of smooth proper algebraic varieties. The next result is inspired by \cite[Thm. 8.15]{mordg}. Recall that the objects of $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_X)$ are not necessarily compact in $\mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_X)$ (see Theorem \ref{qccompact}). \begin{thm}\label{thm:kercoh} Let $X_1$ (resp. $X_2$) be a smooth complex algebraic variety endowed with a DQ-algebroid $\mathcal{A}_1$ (resp. $\mathcal{A}_2$). Let $\mathcal{K} \in \mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_{12^a})$. Assume that the functor $\Phi_\mathcal{K}: \mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_2) \to \mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_1)$ preserves compact objects. Then, $\mathcal{K}$ belongs to $\mathsf{D}^{b}_{\mathrm{coh}}(\mathcal{A}_{12^a})$. \end{thm} \begin{proof} The kernel $\gr_\hbar\mathcal{K}$ induces an integral transform \begin{equation*} \Phi_{\gr_\hbar \mathcal{K} } :\mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_2) \to \mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_1). \end{equation*} Let $\mathcal{G}$ be a compact generator of $\mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_2)$. Then, by Proposition \ref{prop:compres}, $\iota_g(\mathcal{G})$ is a compact generator of $\mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_2)$. By hypothesis, $\Phi_\mathcal{K} (\iota_g(\mathcal{G}))$ is a compact object of $\mathsf{D}_{\mathrm{qcc}}(\mathcal{A}_1)$; by Theorem \ref{qccompact}, it therefore belongs to $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_1)$. Since $\gr_\hbar$ commutes with the composition of kernels, $\gr_\hbar(\Phi_\mathcal{K}(\iota_g(\mathcal{G}))) \simeq \Phi_{\gr_\hbar \mathcal{K}}(\gr_\hbar \iota_g(\mathcal{G}))$. It follows that the object $\Phi_{\gr_\hbar \mathcal{K}} (\gr_\hbar \iota_g(\mathcal{G}))$ belongs to $\mathsf{D}_{\mathrm{coh}}^b(\mathcal{O}_1)$ and thus is a compact object of $\mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_1)$. 
Let $\mathcal{T}$ be the full subcategory of $\mathsf{D}_{\mathrm{coh}}^b(\mathcal{O}_2)$ such that $\Ob(\mathcal{T})=\lbrace \mathcal{M} \in \mathsf{D}^b_{\mathrm{coh}}(\mathcal{O}_2) | \Phi_{\gr_\hbar \mathcal{K}} (\mathcal{M}) \in \mathsf{D}_{\mathrm{coh}}^b(\mathcal{O}_1)\rbrace$. The category $\mathcal{T}$ is a thick subcategory of $\mathsf{D}_{\mathrm{coh}}^b(\mathcal{O}_2)$ containing $\gr_\hbar \iota_g (\mathcal{G})$. By Corollary \ref{compcohdq}, $\mathsf{D}_{\mathrm{coh}}^{b}(\mathcal{O}_2)$ is the thick envelope of $\gr_\hbar \iota_g(\mathcal{G})$. Thus, $\mathcal{T}=\mathsf{D}^b_{\mathrm{coh}}(\mathcal{O}_2)$. It follows that the image of an object of $\mathsf{D}_{\mathrm{coh}}^{b}(\mathcal{O}_2)$ by $\Phi_{\gr_\hbar \mathcal{K} }$ is an object of $\mathsf{D}_{\mathrm{coh}}^b(\mathcal{O}_1)$. Applying Theorem 8.15 of \cite{mordg}, we get that $\gr_\hbar \mathcal{K}$ is an object of $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{O}_{12})$. Applying Proposition \ref{prop:boundingcomplex}, we get that $\mathcal{K} \in \mathsf{D}^b(\mathcal{A}_{12^a})$. Now, Theorem 1.6.4 of \cite{KS3} implies that $\mathcal{K} \in \mathsf{D}^{b}_{\mathrm{coh}}(\mathcal{A}_{12^a})$. \end{proof} \subsection{Integral transforms of coherent DQ-modules} In this subsection, we study integral transforms of coherent DQ-modules. Recall that all the varieties considered are smooth complex projective varieties endowed with the Zariski topology. Let $\mathcal{K} \in \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_{12^a})$. Theorem \ref{coherence} implies that the functor (\ref{MukaiDQ}) is well-defined. \begin{equation} \label{MukaiDQ} \scalebox{0.96}{$\Phi_\mathcal{K}: \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_2) \to \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_1),\; \mathcal{M} \mapsto \mathcal{K} \underset{2}{\circ} \mathcal{M} = Rp_{1\ast}(\mathcal{K} \mathop{\otimes}\limits^{\rm L}_{p_2^{-1}\mathcal{A}_2} p_2^{-1} \mathcal{M}).$} \end{equation} \begin{prop} Let $\mathcal{K}_1 \in \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_{12^a})$ and $\mathcal{K}_2 \in \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_{23^a})$. The composition \begin{align*} \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_3)\stackrel{\Phi_{\mathcal{K}_2}}{\rightarrow}\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_2) \stackrel{\Phi_{\mathcal{K}_1}}{\rightarrow}\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_1) \end{align*} is isomorphic to $\Phi_{\mathcal{K}_1 \underset{2}{\circ} \mathcal{K}_2}:\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_3) \to \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_1)$. \end{prop} \begin{proof} This is a direct consequence of Proposition 3.2.4 of \cite{KS3}. \end{proof} We extend to DQ-modules some classical adjunction results. They are usually established using Grothendieck duality, which does not seem to be available here. Our proof relies on Theorem \ref{dualdqserre}. \begin{defi} For any object $\mathcal{K} \in \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_{12^a})$, we set \begin{align*} \mathcal{K}_R= \mathbb{D}^{\prime}_{\mathcal{A}_{12^a}}(\mathcal{K}) \underset{2^a}{\circ} \omega_{2^a} && \mathcal{K}_L= \omega_{1^a} \underset{1^a}{\circ} \mathbb{D}^{\prime}_{\mathcal{A}_{12^a}}(\mathcal{K}) \end{align*} which are objects of $\mathsf{D}_{\mathrm{coh}}^b(\mathcal{A}_{1^a2})$. 
\end{defi} \begin{prop}\label{adjoint_mukai} Let $\Phi_\mathcal{K}:\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_2) \to \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_1)$ be the Fourier-Mukai functor associated to $\mathcal{K}$ and $\Phi_{\mathcal{K}_R}:\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_1) \to \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_2)$ (resp. $\Phi_{\mathcal{K}_L}:\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_1) \to \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_2)$) the Fourier-Mukai functor associated to $\mathcal{K}_R$ (resp. $\mathcal{K}_L$). Then $\Phi_{\mathcal{K}_R}$ (resp. $\Phi_{\mathcal{K}_L}$) is right (resp. left) adjoint to $\Phi_\mathcal{K}$. \end{prop} \begin{proof} We have \begin{equation*} \RHom_{\mathcal{A}_1}(\mathcal{K} \underset{2}{\circ} \mathcal{M}, \mathcal{N}) \simeq \Rg(X_1,\fRHom_{\mathcal{A}_1}(\mathcal{K} \underset{2}{\circ} \mathcal{M}, \mathcal{N})). \end{equation*} Applying Theorem \ref{dualdqserre} and the projection formula, we get \begin{align*} \fRHom_{\mathcal{A}_1}(\mathcal{K} \underset{2}{\circ} \mathcal{M}, \mathcal{N}) & \simeq \fRHom_{\mathcal{A}_1}(\mathcal{K} \underset{2}{\circ} \mathcal{M}, \mathcal{A}_1) \mathop{\otimes}\limits^{\rm L}_{\mathcal{A}_1} \mathcal{N}\\ &\simeq (\mathbb{D}^{\prime}_{\mathcal{A}_{12^a}}(\mathcal{K})\underset{2^a}{\circ} \omega_{2^a} \underset{2^a}{\circ} \mathbb{D}^{\prime}_{\mathcal{A}_2}(\mathcal{M})) \mathop{\otimes}\limits^{\rm L}_{\mathcal{A}_1} \mathcal{N}\\ &\simeq (\mathcal{K}_R \underset{2^a}{\circ} \mathbb{D}^{\prime}_{\mathcal{A}_2}(\mathcal{M})) \mathop{\otimes}\limits^{\rm L}_{\mathcal{A}_1} \mathcal{N}\\ &\simeq \dR p_{1 \ast} ( \mathcal{K}_R \mathop{\otimes}\limits^{\rm L}_{p_2^{-1} \mathcal{A}_{2^a}} p_2^{-1} \mathbb{D}^{\prime}_{\mathcal{A}_2}(\mathcal{M})) \mathop{\otimes}\limits^{\rm L}_{\mathcal{A}_1} \mathcal{N}\\ &\simeq \dR p_{1 \ast} ( \mathcal{K}_R \mathop{\otimes}\limits^{\rm L}_{p_2^{-1} \mathcal{A}_{2^a}} p_2^{-1} \mathbb{D}^{\prime}_{\mathcal{A}_2}(\mathcal{M}) \mathop{\otimes}\limits^{\rm L}_{p_1^{-1}\mathcal{A}_1} p_1^{-1} \mathcal{N}). \end{align*} Taking the global section and applying again the projection formula, we get \begin{align*} \Rg(X_1,\fRHom_{\mathcal{A}_1}(\mathcal{K} \underset{2}{\circ} \mathcal{M}, \mathcal{N})) \simeq &\\ & \hspace{-2.8cm}\Rg(X_1, \dR p_{1 \ast} ( \mathcal{K}_R \mathop{\otimes}\limits^{\rm L}_{p_2^{-1} \mathcal{A}_{2^a}} p_2^{-1} \mathbb{D}^{\prime}_{\mathcal{A}_2}(\mathcal{M})) \mathop{\otimes}\limits^{\rm L}_{p_1^{-1}\mathcal{A}_1} p_1^{-1} \mathcal{N})\\ &\hspace{-2.8cm} \simeq \Rg(X_1 \times X_2,(\mathcal{K}_R \mathop{\otimes}\limits^{\rm L}_{p_2^{-1} \mathcal{A}_{2^a}} p_2^{-1} \mathbb{D}^{\prime}_{\mathcal{A}_2}(\mathcal{M})) \mathop{\otimes}\limits^{\rm L}_{p_1^{-1}\mathcal{A}_1} p_1^{-1} \mathcal{N})\\ & \hspace{-2.8cm} \simeq \Rg(X_2, \dR p_{2 \ast} ( (\mathcal{K}_R \mathop{\otimes}\limits^{\rm L}_{p_2^{-1} \mathcal{A}_{2^a}} p_2^{-1} \mathbb{D}^{\prime}_{\mathcal{A}_2}(\mathcal{M})) \mathop{\otimes}\limits^{\rm L}_{p_1^{-1}\mathcal{A}_1} p_1^{-1} \mathcal{N}))\\ &\hspace{-2.8cm} \simeq \Rg(X_2,\mathbb{D}^{\prime}_{\mathcal{A}_2}(\mathcal{M}) \mathop{\otimes}\limits^{\rm L}_{\mathcal{A}_2} (\mathcal{K}_R \underset{1}{\circ} \mathcal{N}))\\ & \hspace{-2.8cm} \simeq \RHom_{\mathcal{A}_2}(\mathcal{M},\mathcal{K}_R \underset{1}{\circ} \mathcal{N}). \end{align*} Thus, $\RHom_{\mathcal{A}_1}(\mathcal{K} \underset{2}{\circ} \mathcal{M}, \mathcal{N})\simeq \RHom_{\mathcal{A}_2}(\mathcal{M},\mathcal{K}_R \underset{1}{\circ} \mathcal{N})$ which proves the claim. 
The proof is similar for $\mathcal{K}_L$. \end{proof} Finally, we have the following theorem. \begin{thm}\label{finalmukai} Let $X_1$ (resp. $X_2$) be a smooth complex projective variety endowed with a DQ-algebroid $\mathcal{A}_1$ (resp. $\mathcal{A}_2$). Let $\mathcal{K} \in \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_{12^a})$. The following conditions are equivalent: \begin{enumerate}[(i)] \item The functor $\Phi_\mathcal{K}: \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_2) \to \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_1)$ is fully faithful (resp. an equivalence of triangulated categories). \item The functor $\Phi_{\gr_\hbar \mathcal{K}}:\mathsf{D}^b_{\mathrm{coh}}(\mathcal{O}_2) \to \mathsf{D}^b_{\mathrm{coh}}(\mathcal{O}_1)$ is fully faithful (resp. an equivalence of triangulated categories). \end{enumerate} \end{thm} \begin{proof} We recall the following fact. Let $F$ and $G$ be two functors and assume that $F$ is right adjoint to $G$. Then, there are two natural morphisms \begin{align} G \circ F \to \id \label{eq:1f}\\ \id \to F \circ G \label{eq:2f}. \end{align} The morphism (\ref{eq:1f}) (resp. (\ref{eq:2f})) is an isomorphism if and only if $F$ (resp. $G$) is fully faithful. The morphisms (\ref{eq:1f}) and (\ref{eq:2f}) are isomorphisms if and only if $F$ and $G$ are equivalences. \begin{enumerate} \item $(i) \Rightarrow (ii)$. Proposition \ref{adjoint_mukai} is also true for $\mathcal{O}$-modules since the proof works in the commutative case without any changes. Moreover, the functor $\gr_\hbar$ commutes with the composition of kernels. Hence, we have $\gr_\hbar (\mathcal{K}_R) \simeq (\gr_\hbar \mathcal{K})_R$. Therefore, the functor $\Phi_{\gr_\hbar \mathcal{K}_R}$ is a right adjoint of the functor $\Phi_{\gr_\hbar \mathcal{K}}$. Thus, there are morphisms of functors \begin{align} \Phi_{\gr_\hbar \mathcal{K}} \circ \Phi_{\gr_\hbar \mathcal{K}_R} \to \id, \label{isoeqi1}\\ \id \to \Phi_{\gr_\hbar \mathcal{K}_R} \circ \Phi_{\gr_\hbar \mathcal{K}}. \label{isoeqi2} \end{align} Set $\Phi_{\mathcal{L}}=\Phi_{\gr_\hbar \mathcal{K}_R} \circ \Phi_{\gr_\hbar \mathcal{K}}$. Let $\mathcal{T}_2$ be the full subcategory of $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{O}_2)$ whose objects are the $\mathcal{M} \in \mathsf{D}^b_{\mathrm{coh}}(\mathcal{O}_2)$ such that \begin{equation*} \mathcal{M} \to \Phi_{\mathcal{L}}(\mathcal{M}) \end{equation*} is an isomorphism. It follows from Proposition \ref{thickiso} that $\mathcal{T}_2$ is a thick subcategory of $\mathsf{D}_{\mathrm{coh}}^b(\mathcal{O}_2)$. Let $\mathcal{G}$ be a compact generator of $\mathsf{D}_{\mathrm{qcoh}}(\mathcal{O}_2)$. By Corollary \ref{compcohdq}, $\mathsf{D}_{\mathrm{coh}}^b(\mathcal{O}_2)=\langle \mathcal{G} \rangle$. Since $\Phi_{\mathcal{K}}$ is fully faithful, we have the isomorphism \begin{equation*} \iota_g(\mathcal{G}) \stackrel{\sim}{\to} \Phi_{\mathcal{K}_R} \circ \Phi_{\mathcal{K}}(\iota_g(\mathcal{G})). \end{equation*} Applying the functor $\gr_\hbar$, we get that $\gr_\hbar \iota_g(\mathcal{G})$ belongs to $\mathcal{T}_2$, and by Corollary \ref{compcohdq}, $\gr_\hbar \iota_g(\mathcal{G})$ is a classical generator of $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{O}_2)$. Hence, $\mathcal{T}_2=\mathsf{D}_{\mathrm{coh}}^b(\mathcal{O}_2)$. Thus, the morphism (\ref{isoeqi2}) is an isomorphism of functors. A similar argument shows that if $\Phi_{\mathcal{K}}$ is an equivalence, then the morphism (\ref{isoeqi1}) is also an isomorphism, which proves the claim. \item $(ii) \Rightarrow (i)$. 
Since $\Phi_\mathcal{K}$ and $\Phi_{\mathcal{K}_R}$ are adjoint functors, we have natural morphisms of functors \begin{align*} \Phi_{\mathcal{K}} \circ \Phi_{\mathcal{K}_R} \to \id, \\ \id \to \Phi_{\mathcal{K}_R} \circ \Phi_{\mathcal{K}}. \end{align*} If $\mathcal{M} \in \mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_2)$, then we have \begin{equation} \label{buttraite} \mathcal{M} \to \Phi_{\mathcal{K}_R} \circ \Phi_{\mathcal{K}}(\mathcal{M}). \end{equation} Applying the functor $\gr_\hbar$, we get \begin{equation}\label{equicom} \gr_\hbar \mathcal{M} \to \Phi_{\gr_\hbar \mathcal{K}_R} \circ \Phi_{\gr_\hbar \mathcal{K}}(\gr_\hbar \mathcal{M}). \end{equation} If $\Phi_{\gr_\hbar \mathcal{K}}$ is fully faithful, then the morphism (\ref{equicom}) is an isomorphism. The objects $\Phi_{\mathcal{K}_R} \circ \Phi_{\mathcal{K}}(\mathcal{M})$ and $\mathcal{M}$ are cohomologically complete since they belong to $\mathsf{D}^b_{\mathrm{coh}}(\mathcal{A}_2)$. Thus, by Proposition \ref{isogr}, the morphism (\ref{buttraite}) is an isomorphism, that is to say \begin{equation*} \id \stackrel{\sim}{\to} \Phi_{\mathcal{K}_R} \circ \Phi_{\mathcal{K}}. \end{equation*} It follows that $\Phi_\mathcal{K}$ is fully faithful. Similarly, one shows that if $\Phi_{\gr_\hbar \mathcal{K}}$ is an equivalence, then in addition \begin{equation*} \Phi_{\mathcal{K}} \circ \Phi_{\mathcal{K}_R} \stackrel{\sim}{\to} \id. \end{equation*} It follows that $\Phi_{\mathcal{K}}$ is an equivalence. \end{enumerate} \end{proof} \begin{Rem} The implication $(ii) \Rightarrow (i)$ of Theorem \ref{finalmukai} and Proposition \ref{adjoint_mukai} still hold if one replaces smooth projective varieties by compact complex manifolds. This result implies immediately that the quantization of the Poincaré bundle constructed in \cite{PantenBassa} induces an equivalence. \end{Rem} \section{Appendix} In this appendix we show that the cohomological dimension of the functor $\fRHom_{\mathbb{C}^\hbar_X}(\mathbb{C}^{\hbar,loc}_X, \cdot)$ is finite. We refer to \cite{categories_and_sheaves} for a detailed account of pro-objects. Recall that to an abelian category $\mathcal{C}$ one associates the abelian category $\Pro(\mathcal{C})$ of its pro-objects. Then, there is a natural fully faithful functor $i_\mathcal{C}: \mathcal{C} \to \Pro(\mathcal{C})$. The functor $i_\mathcal{C}$ is exact. For any small filtrant category $I$ the functor $``\varprojlim":\Fct(I^{\opp}, \mathcal{C}) \to \Pro(\mathcal{C})$ is exact. If $\mathcal{C}$ admits small projective limits, the functor $i_\mathcal{C}$ admits a right adjoint denoted $\pi$. \begin{equation*} \pi:\Pro(\mathcal{C}) \to \mathcal{C} , \; "\varprojlim_i" X_i \mapsto \varprojlim_i X_i. \end{equation*} If $\mathcal{C}$ is a Grothendieck category, then $\pi$ has a right derived functor $\dR \pi:\mathsf{D}(\Pro(\mathcal{C})) \to \mathsf{D}(\mathcal{C})$. Let us recall Lemma 1.5.11 of \cite{KS3}. \begin{lemme}\label{lemme:procomp} Let $\mathcal{M} \in \mathsf{D}(\mathbb{C}^\hbar_X)$. Then, we have \begin{equation} \dR \pi (( `` \varprojlim_{n}" \mathbb{C}^\hbar_X \hbar^n) \mathop{\otimes}\limits^{\rm L}_{\mathbb{C}^\hbar_X} \mathcal{M}) \simeq \fRHom_{\mathbb{C}^\hbar_X}(\mathbb{C}_X^{\hbar,loc}, \mathcal{M}). \end{equation} \end{lemme} \begin{prop} The functor $\fRHom_{\mathbb{C}^\hbar_X}(\mathbb{C}^{\hbar,loc}_X , \cdot)$ has finite cohomological dimension. \end{prop} \begin{proof} Let $\mathcal{M} \in \Mod(\mathbb{C}^\hbar_X)$. 
By Lemma \ref{lemme:procomp} we have \begin{equation} \fRHom_{\mathbb{C}^\hbar_X}(\mathbb{C}_X^{\hbar,loc}, \mathcal{M}) \simeq \dR \pi (( `` \varprojlim_{n}" \mathbb{C}^\hbar_X \hbar^n) \mathop{\otimes}\limits^{\rm L}_{\mathbb{C}^\hbar_X} \mathcal{M}). \end{equation} Then by Proposition 6.1.9 of \cite{categories_and_sheaves} adapted to the case of pro-objects, we have \begin{equation*} \dR \pi (( `` \varprojlim_{n}" \mathbb{C}^\hbar_X \hbar^n) \mathop{\otimes}\limits^{\rm L}_{\mathbb{C}^\hbar_X} \mathcal{M})\simeq \dR \pi ( `` \varprojlim_{n}" (\mathbb{C}^\hbar_X \hbar^n \mathop{\otimes}\limits^{\rm L}_{\mathbb{C}^\hbar_X} \mathcal{M})). \end{equation*} It follows from Corollary 13.3.16 from \cite{categories_and_sheaves} that \begin{equation*} \forall i>1, \; \dR^i \pi ( `` \varprojlim_{n}" (\mathbb{C}^\hbar_X \hbar^n \mathop{\otimes}\limits^{\rm L}_{\mathbb{C}^\hbar_X} \mathcal{M})) \simeq 0 \end{equation*} which proves the claim. \end{proof} \begin{prop} The functor $\fRHom_{\mathbb{C}^\hbar_X}(\mathbb{C}_X^{\hbar,loc},\cdot):\mathsf{D}(\mathbb{C}^\hbar_X) \to \mathsf{D}(\mathbb{C}^\hbar_X)$ is such that $\fRHom_{\mathbb{C}^\hbar_X}(\mathbb{C}_X^{\hbar,loc},\mathsf{D}^{-}(\mathbb{C}^\hbar_X))\subset \mathsf{D}^{-}(\mathbb{C}^\hbar_X)$. \end{prop} \begin{proof} This follows immediately from Example 1 of \cite[Ch. I §7.]{resdu}. \end{proof} \begin{cor}\label{cor:wayout} Let $X$ be a smooth complex (algebraic or analytic) variety endowed with a DQ-algebroid $\mathcal{A}_X$. The functor $\fRHom_{\mathcal{A}_X}(\mathcal{A}_X^{loc}, \cdot)$ is such that $\fRHom_{\mathcal{A}_X}(\mathcal{A}_X^{loc}, \mathsf{D}^{-}(\mathcal{A}_X))\subset \mathsf{D}^{-}(\mathcal{A}_X)$. \end{cor} \end{document}
\begin{document} \title{A Schur-Weyl type duality for twisted weak modules over a vertex algebra} \begin{abstract} Let $V$ be a vertex algebra of countable dimension, $G$ a subgroup of ${\mathcal A}ut V$ of finite order, $V^{G}$ the fixed point subalgebra of $V$ under the action of $G$, and ${\mathscr S}$ a finite $G$-stable set of inequivalent irreducible twisted weak $V$-modules associated with possibly different automorphisms in $G$. We show a Schur--Weyl type duality for the actions of ${\mathscr A}_{\alpha}(G,{\mathscr S})$ and $V^G$ on the direct sum of twisted weak $V$-modules in ${\mathscr S}$, where ${\mathscr A}_{\alpha}(G,{\mathscr S})$ is a finite dimensional semisimple associative algebra associated with $G,{\mathscr S}$, and a $2$-cocycle $\alpha$ naturally determined by the $G$-action on ${\mathscr S}$. It follows as a natural consequence of the result that for any $g\in G$ every irreducible $g$-twisted weak $V$-module is a completely reducible weak $V^G$-module. \end{abstract} \bigskip \noindent{\it Mathematics Subject Classification.} 17B69 \noindent{\it Key Words.} vertex algebras, Schur--Weyl type duality, weak modules. \section{\label{section:introduction}Introduction} Let $V$ be a vertex algebra (cf. \cite{B1986, LL}) and $G$ a subgroup of ${\mathcal A}ut V$ of finite order. We denote by $V^{G}$ the fixed point subalgebra of $V$ under the action of $G$: $V^{G}=\{a\in V\ |\ g(a)=a\mbox{ for all }g\in G\}$. The fixed point subalgebras play an important role in the study of vertex algebras, particularly in the construction of interesting examples, with the moonshine vertex algebra $V^{\natural}$ as a representative example \cite{B1986,FLM}. One of the main problems with $V^G$ is describing the $V^G$-modules in terms of $V$ and $G$. In line with the various ideas proposed in \cite{DVVV1989}, for a simple vertex operator algebra $V$, Dong and Mason initiated a systematic study of representations of $V^G$ and showed a Schur-Weyl type duality for $(G,V^{G})$ when $G$ is a solvable group \cite{DM1997}. Here, it is worth mentioning that as one consequence of the Schur-Weyl type duality, a Galois correspondence in simple vertex operator algebras was established in \cite{DM1997} when $G$ is either abelian or dihedral, and in \cite{HMT1999} in the general case. The Schur-Weyl type duality was extended in \cite{DM1996, DY2002, MT2004, Yamskphd} using the Zhu algebra in \cite{Z1996} and its generalizations in \cite{DLM1998t,DLM1998tn,DLM1998v, MT2004}. In connection with this problem, twisted $V$-modules have been studied systematically (cf. \cite{DLM1998t,FLM,Lepowsky1985,Li1996}) for the following reasons: for $g\in G$, every $g$-twisted $V$-module becomes a $V^G$-module and, moreover, it is conjectured that under some conditions on $V$, every irreducible $V^G$-module is contained in some irreducible $g$-twisted $V$-module for some $g\in G$ (cf. \cite{DVVV1989}). Let ${\mathscr S}$ be a finite $G$-stable set (see the explanation just after \eqref{eqn:actionYmodulecdotsigma} for the definition) of inequivalent irreducible twisted $V$-modules associated with possibly different automorphisms in $G$. A finite dimensional semisimple associative ${\mathbb C}$-algebra ${\mathscr A}_{\alpha}(G,{\mathscr S})$ (see \eqref{eq:algebraAalphaGS} for the definition) associated to $G$, ${\mathscr S}$, and a $2$-cocycle $\alpha$ naturally determined by the $G$-action on ${\mathscr S}$ was constructed in \cite{DY2002}. 
The algebra ${\mathscr A}_{\alpha}(G,{\mathscr S})$ acts on the direct sum ${\mathscr M}=\oplus_{M\in{\mathscr S}}M$ and the actions of ${\mathscr A}_{\alpha}(G,{\mathscr S})$ and $V^G$ commute with each other. The Schur-Weyl type duality above is about a decomposition of ${\mathscr M}$ under the action of ${\mathscr A}_{\alpha}(G,{\mathscr S})\otimes_{{\mathbb C}}V^G$. In \cite{DM1997, DY2002, Yamskphd} they studied the case that ${\mathscr S}$ consists of $g$-twisted $V$-modules associated with a single automorphism $g\in G$, where we note that in this case, ${\mathscr M}$ is also a $g$-twisted $V$-module. If this is not the case, then ${\mathscr M}$ is not a twisted $V$-module in general since the definition of a twisted $V$-module depends on each automorphism of $V$ and hence a direct sum of twisted $V$-modules associated with different automorphisms is not always a twisted $V$-module. This fact makes it difficult to investigate ${\mathscr M}$. To overcome the difficulty, Miyamoto and the author introduced the $G$-twisted Zhu algebra which allows us to study twisted $V$-modules in a unified way and we showed a Schur-Weyl type duality for any finite $G$-stable set ${\mathscr S}$ of inequivalent irreducible twisted $V$-modules in \cite[Theorem 2]{MT2004}. The purpose of this paper is to generalize the Schur-Weyl type duality in \cite[Theorem 2]{MT2004} to twisted weak modules for a vertex algebra. The settings are all the same as above if we replace the terms \lq\lq vertex operator algebra\rq\rq\ and \lq\lq(twisted) $V$-module\rq\rq\ by \lq\lq vertex algebra\rq\rq\ and \lq\lq (twisted) weak $V$-module\rq\rq,\ respectively. Here we mention that the weak modules for $V_{L}^{+}$, a fixed point subalgebra of the lattice vertex algebra $V_{L}$ where $L$ is a non-degenerate even lattice, are classified in \cite{Tanabe2021-1} and every weak $V_{L}^{+}$-module is a submodule of some twisted weak $V_{L}^{+}$-module. The same result is shown for the weak modules with some conditions for $M(1)^{+}$, a fixed point subalgebra of the Heisenberg vertex operator algebra $M(1)$, in \cite{Tanabe2017}. These examples lead us to expect a Schur-Weyl type duality holds for twisted weak $V$-modules. For an automorphism $g \in {\mathcal A}ut V$ of finite order, we denote by ${\mathscr T}_{g}$ the set of all irreducible $g$-twisted weak $V$-modules and we define ${\mathscr T}_{G}=\cup_{g\in G}{\mathscr T}_{g}$. Let ${\mathscr S}$ be a finite $G$-stable subset of ${\mathscr T}_{G}$ consisting of inequivalent irreducible twisted weak $V$-modules and let ${\mathscr M}=\oplus_{M\in{\mathscr S}}M$. Under the assumption that $V$ is a simple vertex algebra of countable dimension, a Schur-Weyl type duality for $({\mathscr A}_{\alpha}(G,{\mathscr S}),V^G)$ was studied in \cite{ALPY2019, ALPY2022, DRY2023}. There it is assumed that ${\mathscr S}$ consists of $g$-twisted weak $V$-modules associated with a single automorphism $g\in G$. If this is not the case, as in the case of twisted $V$-modules, ${\mathscr M}$ is not a twisted weak $V$-module in general. Moreover, we can not apply the $G$-twisted Zhu algebra to twisted weak $V$-modules since we do not assume any grading in the definition of a twisted weak $V$-module. 
To overcome the difficulty, instead of the $G$-twisted Zhu algebra, we shall use weak $(V,T)$-modules (see Definition \ref{definition:def-vt} for the definition) defined for any positive integer $T$, introduced by the author in \cite{Tanabe2015} as a generalization of the notion of a twisted weak module so that it is closed under taking \textcolor{black}{direct sums}. For any $g\in {\mathcal A}ut V$ of finite order and any $T$ that is a multiple of the order of $g$, every $g$-twisted weak $V$-module becomes a weak $(V,T)$-module. The notion of a weak $(V,T)$-module allows us to study twisted weak $V$-modules in a unified way (Lemma \ref{lemma:twisted-(V,T)}). We state the main result of this paper, which is a Schur-Weyl type duality for $({\mathscr A}_{\alpha}(G,{\mathscr S}),V^G)$ in the general case: \begin{theorem} \label{theorem:main} Let $V$ be a simple vertex algebra of countable dimension, $G$ a subgroup of ${\mathcal A}ut V$ of finite order, ${\mathscr S}$ a finite $G$-stable set of inequivalent weak $(V,T)$-modules, and ${\mathscr M}=\oplus_{M\in{\mathscr S}}M$. Under the action of ${\mathscr A}_{\alpha}(G,{\mathscr S})\otimes_{{\mathbb C}}\textcolor{black}{V^G}$, the space ${\mathscr M}$ decomposes into a direct sum \begin{align} {\mathscr M}&=\bigoplus\limits_{j\in J, \lambda\in\Lambda_{j}}W_{\lambda}^{j}\otimes_{{\mathbb C}}M_{\lambda}^{j}. \end{align} Moreover \begin{enumerate} \item For any $j\in J$ and $\lambda\in\Lambda_j$, $M^{j}_{\lambda}$ is non-zero and is an irreducible weak $(V^{G},T)$-module. \item For $j_1,j_2\in J$ and $\lambda_1\in\Lambda_{j_1},\lambda_2\in\Lambda_{j_2}$, $M^{j_1}_{\lambda_1}$ and $M^{j_2}_{\lambda_2}$ are isomorphic as weak $(V^G,T)$-modules if and only if $(j_1,\lambda_1)=(j_2,\lambda_2)$. \end{enumerate} \end{theorem} \noindent{}In the theorem, $\{W^{j}_{\lambda}\ |\ j\in J, \lambda\in\Lambda_{j}\}$ is a complete set of representatives of isomorphism classes of irreducible ${\mathscr A}_{\alpha}(G,{\mathscr S})$-modules and $M^{j}_{\lambda}$ is the multiplicity space of $W^{j}_{\lambda}$ in ${\mathscr M}$ for $j\in J$ and $\lambda\in \Lambda_{j}$, on which $V^G$ acts naturally (see \eqref{eq:Mlambda=HomC}, \eqref{eq:definitionWjlambda}, and the explanation just before \eqref{eq:GjGmoduelj} for the precise definitions of $W^{j}_{\lambda}$ and $M^{j}_{\lambda}$). We note that if ${\mathscr S}$ is a $G$-stable subset of ${\mathscr T}_{G}$ consisting of inequivalent irreducible twisted weak $V$-modules, then ${\mathscr M}=\oplus_{M\in{\mathscr S}}M$ becomes a weak $V^G$-module. Hence, in this case, from the definition of a weak $(V,T)$-module (Definition \ref{definition:def-vt}) we can replace the term \lq\lq weak $(V^G,T)$-module\rq\rq\ by \lq\lq weak $V^G$-module\rq\rq\ in Theorem \ref{theorem:main}. Setting $T=|G|$, we have the following result as a special case of Theorem \ref{theorem:main}: \begin{corollary} \label{corollary:stabletwisted} Let $V$ be a simple vertex algebra of countable dimension, $G$ a subgroup of ${\mathcal A}ut V$ of finite order, ${\mathscr S}$ a finite $G$-stable subset of ${\mathscr T}_{G}$ consisting of inequivalent irreducible twisted weak $V$-modules, and ${\mathscr M}=\oplus_{M\in{\mathscr S}}M$. Under the action of ${\mathscr A}_{\alpha}(G,{\mathscr S})\otimes_{{\mathbb C}}\textcolor{black}{V^G}$, the space ${\mathscr M}$ decomposes into a direct sum \begin{align} {\mathscr M}&=\bigoplus\limits_{j\in J, \lambda\in\Lambda_{j}}W_{\lambda}^{j}\otimes_{{\mathbb C}}M_{\lambda}^{j}. 
\end{align} Moreover \begin{enumerate} \item For any $j\in J$ and $\lambda\in\Lambda_{j}$, $M^{j}_{\lambda}$ is non-zero and is an irreducible weak $V^{G}$-module. \item For $j_1,j_2\in J$ and $\lambda_1\in\Lambda_{j_1},\lambda_2\in\Lambda_{j_2}$, $M^{j_1}_{\lambda_1}$ and $M^{j_2}_{\lambda_2}$ are isomorphic as weak $V^G$-modules if and only if $(j_1,\lambda_1)=(j_2,\lambda_2)$. \end{enumerate} In particular, for any $g\in G$, every irreducible $g$-twisted weak $V$-module is a completely reducible weak $V^G$-module. \end{corollary} We briefly introduce the basic idea. The key result is Lemma \ref{lemma:associative-action} below, which is a kind of \lq\lq associativity\rq\rq\ of the action of $V$ on a weak $(V,T)$-module $M$ in the following sense: for $a,b\in V$, $u\in M$, and $m,n\in(1/T){\mathbb Z}$, \begin{align} a_{m}b_{n}u&\in \Span_{{\mathbb C}}\{(a_ib)_{m+n-i}u\in M\ |\ i\in{\mathbb Z}\}. \end{align} The result is well-known when $M$ is a (twisted) weak $V$-module (cf. \cite[Proposition 4.5.7]{LL}). However, in the general case, the result is far from trivial due to the complexity of the structure of a weak $(V,T)$-module. In the proof, we use the methods developed in \cite{MT2004} and \cite{Tanabe2015}. Once Lemma \ref{lemma:associative-action} is established, the argument in \cite[Section 7]{DRY2023} works well even for weak $(V,T)$-modules, which leads to Theorem \ref{theorem:main}. The organization of the paper is as follows. \textcolor{black}{In Section \ref{section:preliminary}} we recall some basic properties of weak $(V,T)$-modules and show some results. In Section \ref{section:proof}, after recalling some properties of the algebra ${\mathscr A}_{\alpha}(G,{\mathscr S})$, we show the main theorem. \section{\label{section:preliminary}Preliminary} Throughout this paper, ${\mathbb Z}$ denotes the set of all integers, $T$ is a fixed positive integer, and $(V,Y,{\mathbf 1})$ is a vertex algebra. Recall that $V$ is the underlying vector space, $Y(\mbox{ },x)$ is a linear map from $V\otimes_{{\mathbb C}}V$ to $V\db{x}$, and ${\mathbf 1}$ is the vacuum vector. For $g\in{\mathcal A}ut V$, $|g|$ denotes the order of $g$. For $g\in{\mathcal A}ut V$ of finite order, a positive integer $n$ that is a multiple of $|g|$, and $r=0,1,\dots,n-1$, we define $V^{(g,r)}=\{a\in V\ |\ ga=e^{-2\pi\sqrt{-1}r/n}a\}$. For $a\in V$, $a^{(g,r)}$ denotes the $r$th component of $a$ in the decomposition $V=\oplus_{r=0}^{n-1}V^{(g,r)}$, that is, $a=\sum_{r=0}^{n-1}a^{(g,r)}$, $a^{(g,r)}\in V^{(g,r)}$. For $i,j\in{\mathbb Z}$, we define \begin{align} {\mathbb Z}_{\leq i}&=\{k\in{\mathbb Z}\ |\ k\leq i\},\nonumber\\ {\mathbb Z}_{\geq i}&=\{k\in{\mathbb Z}\ |\ k\geq i\},\nonumber\\ {\mathbb C}[z,z^{-1}]_{\leq i}&=\Span_{{\mathbb C}}\{z^k\ |\ k\leq i\},\mbox{ and}\nonumber\\ {\mathbb C}[z,z^{-1}]_{i,j}&=\Span_{{\mathbb C}}\{z^k\ |\ i\leq k\leq j\}. \end{align} \subsection{\label{section:subspace} Subspaces of ${\mathbb C}[z,z^{-1}]$ } For $\gamma,q\in{\mathbb Z}$ and $\alpha\in{\mathbb Q}$, we denote by $O(\gamma,\alpha,q;z)$ the subspace of ${\mathbb C}[z,z^{-1}]$ spanned by \begin{align} \label{eq:res-x-q} {\mathbb R}es_{x}\big((1+x)^{\alpha+1}x^{q-j}\sum_{i\in{\mathbb Z}_{\leq \gamma}}z^ix^{-i-1}\big) &= \sum\limits_{i=0}^{\gamma-q+j}\binom{\alpha+1}{i}z^{i+q-j},\quad j=0,1,\dots \end{align} and $z^i, i\in{\mathbb Z}_{\geq \gamma+1}$, which will be used in Lemmas \ref{lemma:vanish-Oz} and \ref{lemma:associative-action}. 
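For concreteness, let us record the first two spanning elements in \eqref{eq:res-x-q} in a special case; this is a direct computation included only as an illustration. If $\gamma=q+1$, then the elements of \eqref{eq:res-x-q} corresponding to $j=0$ and $j=1$ are \begin{align*} z^{q}+(\alpha+1)z^{q+1}\quad\mbox{and}\quad z^{q-1}+(\alpha+1)z^{q}+\binom{\alpha+1}{2}z^{q+1}, \end{align*} respectively, and $O(q+1,\alpha,q;z)$ is spanned by the elements \eqref{eq:res-x-q} for $j=0,1,\dots$ together with $z^{i}$, $i\in{\mathbb Z}_{\geq q+2}$.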
The subspace $\zhuOzero{\gamma}{\alpha}{q}{z}$ in this paper is written as $\zhuOzero{\gamma}{\alpha+1}{q}{z}$ in \cite{Tanabe2015}. We note that \begin{align} \label{eq:osubset} O(\gamma,\alpha,q;z)\subset O(\gamma,\alpha,q+1;z)\subset\cdots. \end{align} The following result holds as a special case of \cite[Lemma 3.2]{Tanabe2015} and its proof. \begin{lemma}\label{lemma:iso-oplus} For $\gamma,q\in{\mathbb Z}$ and $\alpha\in {\mathbb Q}$, the diagonal map ${\mathbb C}[z,z^{-1}]\ni f\mapsto (f,\dots,f)\in {\mathbb C}[z,z^{-1}]^{\oplus T}$ induces an isomorphism \begin{align} \label{eq:C[wzwz-1]/bigcap} {\mathbb C}[z,z^{-1}]/\bigcap_{s=0}^{T-1}O(\gamma,\alpha+\frac{s}{T},q;z) &\overset{\cong}{\rightarrow} \bigoplus_{s=0}^{T-1}{\mathbb C}[z,z^{-1}]/O(\gamma,\alpha+\frac{s}{T},q;z) \end{align} as vector spaces. Moreover, each representative element of the quotient space on the left-hand side of \eqref{eq:C[wzwz-1]/bigcap} can be taken uniquely from ${\mathbb C}[z,z^{-1}]_{\gamma+1-T(\gamma-q),\gamma}$. \end{lemma} For $r=0,\dots,T-1$ and $i\in{\mathbb Z}_{\leq \gamma}$, by Lemma \ref{lemma:iso-oplus} there exists a unique $\unitmu{r}(\gamma,\alpha,q,i;z)\in {\mathbb C}[z,z^{-1}]_{\gamma+1-T(\gamma-q),\gamma}$ such that \begin{align} \label{eq:unit-s} \unitmu{r}(\gamma, \alpha,q,i;z) &\equiv \delta_{r,s}z^i\pmod{ \zhuOzero{\gamma}{\alpha+\frac{s}{T}}{q}{z}},\qquad s=0,1,\dots,T-1. \end{align} We also define $\unitmu{r}(\gamma,\alpha,q,i;z)=0\mbox{ for $i\in{\mathbb Z}_{\geq \gamma+1}$}$ for convenience. \subsection{$(V,T)$-modules \label{section:vt} } In this subsection, we recall the definition of a weak $(V,T)$-module and its properties from \cite{Tanabe2015}. We also show some results about weak $(V,T)$-modules. For a vector space $M$ over ${\mathbb C}$, we define three linear injective maps \begin{align*} \iota_{x,y} : & M[[x^{1/T},y^{1/T}]][x^{-1/T},y^{-1/T},(x-y)^{-1}]\rightarrow M\db{x^{1/T}}\db{y^{1/T}},\\ \iota_{y,x} : &M[[x^{1/T},y^{1/T}]][x^{-1/T},y^{-1/T},(x-y)^{-1}]\rightarrow M\db{y^{1/T}}\db{x^{1/T}},\\ \iota_{y,x-y} : & M[[x^{1/T},y^{1/T}]][x^{-1/T},y^{-1/T},(x-y)^{-1}]\rightarrow M\db{y^{1/T}}\db{x-y} \end{align*} by \begin{align*} \iota_{x,y} f&=\sum_{j,k,l}u_{j,k,l}\sum_{i=0}^{\infty}\binom{l}{i}(-1)^{i}x^{j+l-i}y^{k+i},\\ \iota_{y,x} f&=\sum_{j,k,l}u_{j,k,l}\sum_{i=0}^{\infty}\binom{l}{i}(-1)^{l-i}y^{k+l-i}x^{j+i},\\ \iota_{y,x-y} f&=\sum_{j,k,l}u_{j,k,l}\sum_{i=0}^{\infty}\binom{j}{i}y^{k+j-i}(x-y)^{l+i} \end{align*} for $f=\sum_{j,k,l}u_{j,k,l} x^jy^k(x-y)^l\in M[[x^{1/T},y^{1/T}]][x^{-1/T},y^{-1/T},(x-y)^{-1}], u_{j,k,l}\in M$. Now we write down the definition of a {\it weak $(V,T)$-module}, which is called a $(V,T)$-module in \cite[Definition 2.1]{Tanabe2015}. \begin{definition}\label{definition:def-vt} A {\it weak $(V,T)$-module} $M$ is a vector space over ${\mathbb C}$ equipped with a linear map \begin{align} \label{eq:inter-form} Y_{M}(\ , x) : V\otimes_{{\mathbb C}}M&\rightarrow M\db{x}\nonumber\\ a\otimes u&\mapsto Y_{M}(a, x)u=\sum_{n\in(1/T){\mathbb Z}}a_{n}u x^{-n-1} \end{align} such that the following conditions are satisfied: \begin{enumerate} \item $Y_{M}({\bf 1},x)=\id_{M}$. \item For $a,b\in V$ and $u\in M$, there exists $\glb{a,b,u}{x,y}\in M[[x^{1/T},y^{1/T}]][x^{-1/T},y^{-1/T},(x-y)^{-1}]$ such that \begin{align*} \iota_{x,y}\glb{a,b,u}{x,y}&=Y_{M}(a,x)Y_{M}(b,y)u,\\ \iota_{y,x}\glb{a,b,u}{x,y}&=Y_{M}(b,y)Y_{M}(a,x)u,\quad\mbox{and }\\ \iota_{y,x-y}\glb{a,b,u}{x,y}&=Y_{M}(Y(a,x-y)b,y)u. 
\end{align*} \end{enumerate} \end{definition} For a weak $(V,T)$-module $M$, a subspace $N$ of $M$ is called a {\it weak $(V,T)$-submodule} of $M$ if $(N,Y_M|_{N})$ is a weak $(V,T)$-module, where $Y_M|_{N}$ is the restriction of $Y_{M}$ to $N$. A non-zero weak $(V,T)$-module $M$ is called {\it irreducible} or {\it simple} if there is no submodule of $M$ except $0$ and $M$ itself. For weak $(V,T)$-modules $M$ and $N$, a linear map $f : M\rightarrow N$ is called a {\it homomorphism} if $f(a_iu)=a_if(u)$ for all $a\in V$, $u\in M$, and $i\in (1/T){\mathbb Z}$. For weak $(V,T)$-modules $M$ and $N$, we say that $M$ is \textcolor{black}{\it isomorphic} to $N$ as weak $(V,T)$-modules, which we write as $M\cong N$, if there exist homomorphisms $f : M\rightarrow N$ and $g : N\rightarrow M$ such that $f\circ g=\id_{N}$ and $g\circ f=\id_{M}$. For a submodule $N$ of a weak $(V,T)$-module $M$, the quotient space $M/N$ is a weak $(V,T)$-module. For a set of weak $(V,T)$-modules $\{M_{i}\}_{i\in I}$, the direct sum $\oplus_{i\in I}M_i$ is a weak $(V,T)$-module. Let $M$ be a weak $(V,T)$-module. For $a\in V$ and $s=0,1,\dots,T-1$, we define \begin{align} \label{eq:Ys} Y^{s}_{M}(a,x)&=\sum_{\begin{subarray}{c}i\in s/T+{\mathbb Z}\end{subarray}} a_{i}x^{-i-1}. \end{align} Clearly, $\sum_{s=0}^{T-1}Y^{s}_{M}(a,x)=Y_{M}(a,x)$ holds. For $a,b\in V$, $u\in M$, and $s=0,1,\dots,T-1$, writing \begin{align} Y_{M}(a,b,u | x_1, x_2)&=\sum_{i,j\in(1/T){\mathbb Z}}\sum_{k\in{\mathbb Z}}v_{ijk}x_1^{i}x_2^{j}(x_1-x_2)^{k},\ v_{ijk}\in M, \end{align} we define \begin{align} Y^{(s)}_{M}(a,b | x_2, x_1-x_2)(u)&=\iota_{x_2,x_1-x_2}\big(\sum_{i\in -s/T+{\mathbb Z}}\sum_{j\in (1/T){\mathbb Z}}\sum_{k\in{\mathbb Z}}v_{ijk}x_1^{i}x_2^{j}(x_1-x_2)^{k}\big). \end{align} By definition, \begin{align} \label{eq:sum-Y} \sum_{s=0}^{T-1}\ty{s}{M}{a,b}{x_2,x_0}(u)&=Y_{M}(Y(a,x_0)b,x_2)u. \end{align} For $q\in{\mathbb Z}$, $M\db{x_2^{1/T}}\db{x_0}_{\geq q}$ denotes the set of all elements in $M\db{x_2^{1/T}}\db{x_0}$ of the form $\sum_{\begin{subarray}{c}i\in(1/T){\mathbb Z}\\j\in{\mathbb Z}_{\geq q}\end{subarray}} u_{ij}x_2^{i}x_0^{j},\ u_{ij}\in M$. By \cite[Remark 2.6]{Tanabe2015}, if $Y_{M}(Y(a,x_0)b,x_2)u\in M\db{x_2^{1/T}}\db{x_0}_{\geq q}$, then so is $Y^{(s)}_{M}(a,b | x_2, x_0)(u)$ for any $s=0,1,\dots,T-1$. We write the expansion of $\ty{s}{M}{a,b}{x_2,x_0}$ as \begin{align} \label{eq:tysMabwx2wx0} \ty{s}{M}{a,b}{x_2,x_0}&=\sum_{i\in(1/T){\mathbb Z}}\sum_{j\in{\mathbb Z}}\cty{s}{M}{a,b}{i,j}x_2^{-i-1}x_0^{-j-1},\ \cty{s}{M}{a,b}{i,j}\in \End_{{\mathbb C}}M. \end{align} We recall the following result from \cite[(2.16)]{Tanabe2015}, which is essentially the same as the Borcherds identity: for $a,b\in V$, $u\in M$, and $j,k\in(1/T){\mathbb Z}, l\in{\mathbb Z}$, \begin{align} \label{eq:Borcherds-Coeff} \sum_{i=0}^{\infty}\binom{j}{i}\cty{s}{M}{a,b}{j+k-i,l+i}(u) &= \sum_{i=0}^{\infty}\binom{l}{i}(-1)^{i}(a_{l+j-i}b_{k+i}+(-1)^{l+1}b_{l+k-i}a_{j+i})u. \end{align} \textcolor{black}{We recall} the relationship between the notion of a weak $(V,T)$-module and that of a (twisted) weak $V$-module discussed in \cite{Tanabe2015}. Before that, we caution that if $V$ is a vertex operator algebra, then the notion of a module for $V$ viewed as a vertex algebra is different from the notion of a module for $V$ viewed as a vertex operator algebra (cf. \cite[Definitions 4.1.1 and 4.1.6]{LL}). 
To avoid confusion, throughout this paper, we refer to a module for a vertex algebra defined in \cite[Definition 4.1.1]{LL} as a {\it weak module} (cf. \cite[Definition 2.1]{Tanabe2021-1}). The reason why we use the terminology \lq\lq weak module\rq\rq\ is that when $V$ is a vertex operator algebra, a module for $V$ viewed as a vertex algebra is called a weak $V$-module (cf. \cite[Definition 2.3]{ABD2004}, \cite[p.150]{DLM1997}, and \cite[p.157]{Li1996}). We also apply this rule for twisted modules and twisted weak modules. By \cite[Lemma 2.4]{Tanabe2015}, we find that the following definitions of a (twisted) weak $V$-module are the same as the standard ones (\cite[Definition 2.1]{Tanabe2021-1} and \cite[Definition 2.6]{Li1996t}): \begin{definition}\label{definition:def-weak-twisted-weak} \begin{enumerate} \item A weak $(V,1)$-module is called a {\it weak $V$-module}. \item Let $g\in {\mathcal A}ut V$ be of finite order. A weak $(V,|g|)$-module $M$ is called a {\it $g$-twisted weak $V$-module} if $Y_{M}(a,x)=Y^{s}_{M}(a,x)$ for all $s=0,1,\dots,|g|-1$ and $a\in V^{(g,s)}$. \end{enumerate} \end{definition} The following result implies that the notion of a weak $(V,T)$-module allows us to study twisted weak $V$-modules in a unified way. \begin{lemma} \label{lemma:twisted-(V,T)} Let $V$ be a simple vertex algebra, let $g,h\in{\mathcal A}ut V$ be of finite order, and let $T$ be a common multiple of $|g|$ and $|h|$. Let $M$ be a $g$-twisted weak $V$-module and $N$ an $h$-twisted weak $V$-module. Then, $M \cong N$ as weak $(V,T)$-modules if and only if $g=h$ and $M\cong N$ as $g$-twisted weak modules. \end{lemma} \begin{proof} Assume $M\cong N$ as weak $(V,T)$-modules. \textcolor{black}{We} note that $V^{(g,rT/|g|)}\neq 0$ for all $r=0,1,\dots,|g|-1$ since $V$ is simple. For any $r=0,1,\dots,|g|-1$ and $0\neq a\in V^{(g,rT/|g|)}$, since $V$ is simple, $0\neq Y_{M}(a,x)=Y_{M}^{rT/|g|}(a,x)$ and hence $Y_{N}(a,x)=Y^{rT/|g|}_{N}(a,x)$. Thus, $g=h$. The rest of the proof is clear. \end{proof} For a vertex algebra $V$, we take the tensor algebra $B(V)=T(V\otimes_{{\mathbb C}} {\mathbb C}[\zeta^{1/T},\zeta^{-1/T}])$, where $\zeta^{1/T}$ is a variable. For a weak $(V,T)$-module $M$, we regard $M$ as a $B(V)$-module in the following way: for $a\in V, i\in(1/T){\mathbb Z}$, and $u\in M$, we define \begin{align} (a\otimes \zeta^{i})\cdot u&=a_{i}u. \end{align} We note that if $V$ is of countable dimension, then so is $B(V)$. The following result clearly holds: \begin{lemma} \label{lemma:tensor} Let $V$ be a vertex algebra and let $M$, $N$ be weak $(V,T)$-modules. Then \begin{enumerate} \item $M$ is an irreducible weak $(V,T)$-module if and only if $M$ is an irreducible weak $B(V)$-module. \item $M \cong N$ as weak $(V,T)$-modules if and only if $M \cong N$ as $B(V)$-modules. \item (Schur's lemma) If $V$ is of countable dimension and $M$ is irreducible, then $\End_{V}M=\End_{B(V)}M\cong {\mathbb C}$. \end{enumerate} \end{lemma} Let $M$ be a weak $(V,T)$-module. For $a\in V$ and $u\in M$, we define $\epsilon(a,u)\in(1/T){\mathbb Z}\cup\{-\infty\}$ by \begin{align} \label{eqn:max-vanish0} a_{\epsilon(a,u)}u&\neq 0\mbox{ and }a_{i}u =0\mbox{ for all }i>\epsilon(a,u) \end{align} if $Y_{M}(a,x)u\neq 0$ and $\epsilon(a,u)=-\infty$ if $Y_{M}(a,x)u= 0$. For the next results, we prepare the following symbols. Let $M$ be a weak $(V,T)$-module, $a,b\in V$, $u\in M$, $m,n\in(1/T){\mathbb Z}$, and $\alpha,\beta,\gamma \in{\mathbb Z}$ such that $\alpha\geq \epsilon(a,u)$, $\beta\geq \epsilon(b,u)$, and $\gamma\geq\epsilon(a,b)$. 
We write $m=l_1+r_1/T$ and $n=l_2+r_2/T$ with $l_1,l_2\in{\mathbb Z}$ and $0\leq r_1,r_2<T$. For $s=0,1,\dots,T-1$, we define a linear map $Z^{(s)}_{m,n}(a,b,u;-) : {\mathbb C}[z,z^{-1}]\rightarrow M$ by \begin{align} \label{eq:zmnm} Z^{(s)}_{m,n}(a,b,u;z^i)&=\cty{s}{M}{a,b}{m+n-i,i}(u),\ i\in{\mathbb Z}. \end{align} Here $\cty{s}{M}{a,b}{m+n-i,i}(u)$ is defined in \eqref{eq:tysMabwx2wx0}. \begin{lemma} \label{lemma:vanish-Oz} For $s=0,1,\dots,T-1$ and $p(z)\in\zhuOzero{\gamma}{\alpha+s/T}{l_1-\alpha+l_2-\beta-2}{z}$, $Z^{(s)}_{m,n}(a,b,u;p(z))=0$. \end{lemma} \begin{proof} If $i\geq \gamma+1$, then $Z^{(s)}_{m,n}(a,b,u;z^i)=0$ by \eqref{eq:zmnm} and the explanation just after \eqref{eq:sum-Y}. By \eqref{eq:res-x-q}, \eqref{eq:Borcherds-Coeff}, \eqref{eqn:max-vanish0}, and \eqref{eq:zmnm}, for any $j=0,1,\dots$, \begin{align} &Z^{(s)}_{m,n}(a,b,u;\sum_{i=0}^{\gamma} \binom{\alpha+s/T+1}{i}z^{i+l_1-\alpha+l_2-\beta-2-j})\nonumber\\ &=\sum_{i=0}^{\gamma}\binom{\alpha+s/T+1}{i}\cty{s}{M}{a,b}{m+n-i-l_1+\alpha-l_2+\beta+2+j,i+l_1-\alpha+l_2-\beta-2-j}(u) \nonumber\\ &=\sum_{i=0}^{\infty}\binom{\alpha+s/T+1}{i}\nonumber\\ &\quad{}\times\cty{s}{M}{a,b}{(\alpha+s/T+1)+(\beta+(r_1+r_2-s)/T+1+j)-i,l_1-\alpha+l_2-\beta-2-j+i}(u) \nonumber\\ &=\sum_{i=0}^{\infty}\binom{l_1-\alpha+l_2-\beta-2-j}{i}(-1)^{i}\big(a_{s/T+l_1+l_2-\beta-j-1-i}b_{\beta+(r_1+r_2-s)/T+1+j+i}\nonumber\\ &\qquad{} +(-1)^{l_1-\alpha+l_2-\beta-j-1}b_{(r_1+r_2-s)/T+l_1-\alpha+l_2-1-i}a_{\alpha+s/T+1+i}\big)u \nonumber\\ &=0. \end{align} Here we have used $-1<(r_1+r_2-s)/T$. The proof is complete. \end{proof} The following is the key result of this paper. \begin{lemma} \label{lemma:associative-action} Let $M$ be a weak $(V,T)$-module. For $a,b\in V, u\in M$, and $m,n\in(1/T){\mathbb Z}$, \begin{align} a_{m}b_{n}u&\in \Span_{{\mathbb C}}\{(a_ib)_{m+n-i}u\in M\ |\ i\in{\mathbb Z}\}. \end{align} \end{lemma} \begin{proof} Let $\alpha,\beta,\gamma \in{\mathbb Z}$ such that $\alpha\geq \epsilon(a,u)$, $\beta\geq \epsilon(b,u)$, and $\gamma\geq\epsilon(a,b)$. We write $m=l_1+r_1/T$ and $n=l_2+r_2/T$ with $l_1,l_2\in{\mathbb Z}$ and $0\leq r_1,r_2<T$. We take $\mu,\nu\in (1/T){\mathbb Z}$ so that $\epsilon(a,u)\leq \mu, \epsilon(b,u)\leq \nu,$ and $m\equiv \mu, n\equiv \nu\pmod{{\mathbb Z}}$. We define a Laurent polynomial $\Phi_{m, n}(z)\in{\mathbb C}[z,z^{-1}]$ by \begin{align} \label{eq:def-mul-x} &\Phi_{m, n}(z)\nonumber\\ &=\sum_{j=0}^{\nu-n} \binom{m-\mu-1}{j} {\mathbb R}es_{x}(1+x)^{\mu+1}x^{m-\mu-1-j}\sum_{i\in{\mathbb Z}_{\leq\gamma}} \unitmu{r_1}(\gamma,\alpha,l_1-\alpha+l_2-\beta,i;z)x^{-i-1} \nonumber\\ &=\sum_{j=0}^{\nu-n} \binom{m-\mu-1}{j}\sum_{i=0}^{\gamma-(m-\mu-1-j)}\binom{\mu+1}{i} \unitmu{r_1}(\gamma,\alpha,l_1-\alpha+l_2-\beta,i+m-\mu-1-j;z) \nonumber\\ &\in{\mathbb C}[z,z^{-1}]_{\gamma+1-T(\gamma-l_1-l_2+\alpha+\beta),\gamma}. \end{align} For any $i\in{\mathbb Z}$ and $0\leq s\leq T-1$ with $s\neq r_1$, by \eqref{eq:unit-s}, \begin{align} \unitmu{r_1}(\gamma,\alpha,l_1-\alpha+l_2-\beta,i;z)&\equiv 0\pmod{\zhuOzero{\gamma}{\alpha+\dfrac{s}{T}}{l_1-\alpha+l_2-\beta}{z}} \end{align} and hence by Lemma \ref{lemma:vanish-Oz}, $Z^{(s)}_{m,n}(a,b,u;\Phi_{m,n}(z))=0$.
By Lemma \ref{lemma:vanish-Oz}, we also have \begin{align} \Phi_{m, n}(z) &\equiv\sum_{j=0}^{\nu-n} \binom{m-\mu-1}{j}\sum_{i=0}^{\gamma-(m-\mu-1-j)}\binom{\mu+1}{i} z^{i+m-\mu-1-j} \pmod{\zhuOzero{\gamma}{\alpha+\dfrac{r_1}{T}}{l_1-\alpha+l_2-\beta}{z}} \end{align} and hence by \eqref{eq:Borcherds-Coeff}, \begin{align} &Z^{(r_1)}_{m,n}(a,b,u;\Phi_{m,n}(z))\nonumber\\ &= \sum_{j=0}^{\nu-n} \binom{m-\mu-1}{j}\sum_{i=0}^{\gamma-(m-\mu-1-j)}\binom{\mu+1}{i}\cty{r_1}{M}{a,b}{\mu+1+n+j-i,i+m-\mu-1-j}(u)\nonumber\\ &=\sum_{j=0}^{\nu-n} \binom{m-\mu-1}{j}\sum_{i=0}^{\infty}\binom{m-\mu-1-j}{i}(-1)^{i} \big(a_{m-j-i}b_{n+j+i}u +(-1)^{m-\mu-j}b_{m+n-\mu-1-i}a_{\mu+1+i} u\big)\nonumber\\ &=\sum_{k=0}^{\nu-n}\sum_{i=0}^{k}\binom{m-\mu-1}{k-i}\binom{m-\mu-1-k+i}{i}(-1)^{i} a_{m-k}b_{n+k}u\nonumber\\ &=\sum_{k=0}^{\nu-n}\binom{k-1}{k}a_{m-k}b_{n+k}u\nonumber\\ &=a_{m}b_{n}u. \end{align} Here, in the third equality the terms involving $a_{\mu+1+i}u$ vanish since $\mu\geq\epsilon(a,u)$, and after setting $k=i+j$ the terms with $k>\nu-n$ vanish since $\nu\geq\epsilon(b,u)$; the fourth equality follows from the identity $\sum_{i=0}^{k}(-1)^{i}\binom{x}{k-i}\binom{x-k+i}{i}=\binom{k-1}{k}$, valid for any $x$, and the last one from $\binom{k-1}{k}=\delta_{k,0}$. Writing $\Phi_{m,n}(z)=\sum_{i\in{\mathbb Z}}\lambda_{i}z^i$ with $\lambda_i\in{\mathbb C}$, by \eqref{eq:sum-Y} we have \begin{align} \label{eq:proof-lemma-hom} \sum_{i\in{\mathbb Z}}\lambda_{i}(a_ib)_{m+n-i}u &=\sum_{s=0}^{T-1}\sum_{i\in{\mathbb Z}}\lambda_{i}\cty{s}{M}{a,b}{m+n-i,i}(u)\nonumber\\ &=\sum_{s=0}^{T-1}Z^{(s)}_{m,n}(a,b,u;\Phi_{m,n}(z))\nonumber\\ &=Z^{(r_1)}_{m,n}(a,b,u;\Phi_{m,n}(z))\nonumber\\ &=a_{m}b_{n}u. \end{align} Since $\Phi_{m,n}(z)$ has only finitely many nonzero coefficients $\lambda_{i}$, this shows $a_{m}b_{n}u\in\Span_{{\mathbb C}}\{(a_ib)_{m+n-i}u\ |\ i\in{\mathbb Z}\}$. The proof is complete. \end{proof} \begin{remark} \label{remark:(awlb)indau} The following result, which is complementary to Lemma \ref{lemma:associative-action}, follows from \cite[Lemma 2.8]{Tanabe2015}: for a weak $(V,T)$-module $M$, $a,b\in V$, $u\in M$, and $m\in(1/T){\mathbb Z}$, $l\in {\mathbb Z}$, \begin{align} (a_{l}b)_{m}u&\in \Span_{{\mathbb C}}\{a_{l+m-i}b_iu\in M\ |\ i\in(1/T){\mathbb Z}\}. \end{align} Hence a standard argument (cf. \cite[Corollary 4.5.15]{LL}) shows the following result: let $V$ be a simple vertex algebra, $M$ a weak $(V,T)$-module, and $a\in V$, $u\in M$. If $Y_{M}(a,x)u=0$, then $a=0$ or $u=0$. \end{remark} \section{The algebra ${\mathscr A}_{\alpha}(G,{\mathscr S})$ and Proof of Theorem \ref{theorem:main}\label{section:proof}} Throughout this section, $V$ is a simple vertex algebra of countable dimension and $G$ is a subgroup of ${\mathcal A}ut V$ of finite order $T$. In this section, after recalling some properties of the algebra ${\mathscr A}_{\alpha}(G,{\mathscr S})$ introduced in \cite{DY2002}, we give a proof of Theorem \ref{theorem:main}. \subsection{The algebra ${\mathscr A}_{\alpha}(G,{\mathscr S})$ and its modules} For a weak $(V,T)$-module $(M,Y_M)$ and $\sigma\in {\mathcal A}ut V$ we define a weak $(V,T)$-module $(M\cdot \sigma,Y_{M\cdot \sigma})$ by $M\cdot \sigma=M$ as a vector space and \begin{align} \label{eqn:actionYmodulecdotsigma} Y_{M\cdot \sigma}(a,x)&=Y_{M}(\sigma a,x) \mbox{ for }a\in V. \end{align} Note that if $M$ is irreducible, then so is $M\cdot \sigma$. A set ${\mathscr S}$ of inequivalent irreducible weak $(V,T)$-modules is called {\it $G$-stable} if for any $M\in {\mathscr S}$ and $\sigma \in G$ there exists $N\in {\mathscr S}$ such that $M\cdot \sigma \cong N$ as weak $(V,T)$-modules. For a finite right $G$-set ${\mathscr X}$, a finite-dimensional semisimple associative ${\mathbb C}$-algebra ${\mathscr A}_{\alpha}(G,{\mathscr X})$ was constructed in \cite{DY2002}.
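As an elementary observation, recorded here only for the reader's convenience, \eqref{eqn:actionYmodulecdotsigma} immediately gives $M\cdot 1=M$ and, for $\sigma,\tau\in{\mathcal A}ut V$ and $a\in V$, \begin{align*} Y_{(M\cdot \sigma)\cdot \tau}(a,x)=Y_{M\cdot \sigma}(\tau a,x)=Y_{M}(\sigma\tau a,x)=Y_{M\cdot (\sigma\tau)}(a,x), \end{align*} so that $(M\cdot \sigma)\cdot \tau=M\cdot (\sigma\tau)$. Moreover, any isomorphism $N\cong N'$ of weak $(V,T)$-modules is, via the same linear map, an isomorphism $N\cdot\sigma\cong N'\cdot\sigma$. Hence $\sigma\mapsto M\cdot\sigma$ induces a right action of $G$ on the isomorphism classes of irreducible weak $(V,T)$-modules; this is the action implicitly used below when we speak of orbits and stabilizers.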
For a finite $G$-stable set ${\mathscr S}$ of inequivalent irreducible weak $(V,T)$-modules, we recall the definition of ${\mathscr A}_{\alpha}(G,{\mathscr S})$ and its basic properties, following \cite{DY2002}. Until the end of this section, we assume that ${\mathscr S}$ is a finite $G$-stable set of inequivalent irreducible weak $(V,T)$-modules. Let $M\in{\mathscr S}$ and $\sigma \in G$. Then there exists $N\in{\mathscr S}$ such that $N\cdot \sigma \cong M$ as weak $(V,T)$-modules. That is, there is an isomorphism $\phi(\sigma , M):M\rightarrow N$ of vector spaces such that \begin{align} \phi(\sigma ,M)Y_{M}(a,x)&=Y_{N}(\sigma a,x)\phi(\sigma ,M) \end{align} for all $a\in V$. For $\sigma ,\tau\in G$, by the irreducibility of $M$, there exists $\alpha_{M}(\sigma ,\tau)\in{\mathbb C}$ such that \begin{align} \phi(\sigma ,M \cdot \tau^{-1})\phi(\tau,M)=\alpha_{M}(\sigma ,\tau)\phi(\sigma \tau,M). \end{align} Moreover, for $\sigma ,\tau,\rho\in G$ and $M\in{\mathscr S}$ we have \begin{align} \alpha_{M\cdot \rho^{-1}}(\sigma,\tau)\alpha_{M}(\sigma \tau,\rho)&= \alpha_{M}(\sigma ,\tau\rho)\alpha_{M}(\tau,\rho). \end{align} We define a vector space ${\mathbb C}{\mathscr S}=\oplus_{M\in{\mathscr S}}{\mathbb C} e(M)$ with a basis $e(M)$ for $M\in{\mathscr S}$. The vector space ${\mathbb C}{\mathscr S}$ is an associative algebra with multiplication given by $e(M)e(N)=\delta_{M,N}e(N)$. We define a subset ${\mathscr U}({\mathbb C} {\mathscr S})=\{\sum_{M\in{\mathscr S}}\lambda_{M}e(M)\ |\ \lambda_{M}\in{\mathbb C}^{\times}\}$ of ${\mathbb C}{\mathscr S}$ where ${\mathbb C}^{\times}={\mathbb C}\setminus\{0\}$. The set ${\mathscr U}({\mathbb C}{\mathscr S})$ becomes a right $G$-set by the action \begin{align} \sum_{M\in{\mathscr S}}\lambda_{M}e(M)\cdot \sigma &=\sum_{M\in{\mathscr S}}\lambda_{M}e(M\cdot \sigma ) \end{align} for $\sigma \in G$. For $\alpha(\sigma ,\tau)=\sum_{M\in{\mathscr S}}\alpha_{M}(\sigma ,\tau)e(M)$ we have \begin{align} \textcolor{black}{(\alpha(\sigma ,\tau)\cdot \rho)}\alpha(\sigma \tau,\rho)&=\alpha(\sigma ,\tau \rho)\alpha(\tau,\rho) \end{align} for all $\sigma ,\tau,\rho\in G$ and hence $\alpha : G\times G\rightarrow {\mathscr U}({\mathbb C} {\mathscr S})$ is a $2$-cocycle. The vector space ${\mathscr A}_{\alpha}(G,{\mathscr S})={\mathbb C}[G]\otimes_{{\mathbb C}}{\mathbb C} {\mathscr S}$ is an associative ${\mathbb C}$-algebra with multiplication given by \begin{align} \label{eq:algebraAalphaGS} (\sigma\otimes e(M))\cdot (\tau\otimes e(N))&=\alpha_{N}(\sigma,\tau)\sigma\tau\otimes e(M\cdot \tau)e(N). \end{align} We define an action of ${\mathscr A}_{\alpha}(G,{\mathscr S})$ on ${\mathscr M}=\oplus_{M\in {\mathscr S}}M$ as follows: for $M,N\in {\mathscr S}$, $u\in N$, and $\sigma\in G$, we define \begin{align} (\sigma\otimes e(M))\cdot u&=\delta_{M,N}\phi(\sigma,M)u. \end{align} Note that the actions of ${\mathscr A}_{\alpha}(G,{\mathscr S})$ and $V^G$ on ${\mathscr M}$ commute with each other. For $M\in {\mathscr S}$, we define \begin{align} G_{M}&=\{\sigma\in G\ |\ M\cdot \sigma\cong M\mbox{ as weak $(V,T)$-modules}\}. \end{align} The restriction $\alpha_{M}|_{G_{M}\times G_{M}}$, which we again denote by $\alpha_{M}$, is clearly a $2$-cocycle of $G_{M}$. Let ${\mathscr O}_{M}$ be the orbit of $M$ under the action of $G$ and let $G=\cup_{j=1}^{k}G_{M}g_{j}$ be a right coset decomposition with $g_1=1$. Then ${\mathscr O}_{M}=\{M\cdot g_j\ |\ j=1,\dots,k\}$ and $G_{M\cdot g_j}=g_j^{-1}G_{M}g_j$.
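For completeness, we indicate one way to check the last equality. By the composition rule $(M\cdot\sigma)\cdot\tau=M\cdot(\sigma\tau)$ noted above, and since applying $\cdot g_j^{\pm1}$ preserves isomorphisms of weak $(V,T)$-modules, we have, for $\sigma\in G$, \begin{align*} \sigma\in G_{M\cdot g_j} \Longleftrightarrow M\cdot (g_j\sigma)\cong M\cdot g_j \Longleftrightarrow M\cdot (g_j\sigma g_j^{-1})\cong M \Longleftrightarrow \sigma\in g_j^{-1}G_{M}g_j. \end{align*}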
We define several subspaces of ${\mathscr A}_{\alpha}(G,{\mathscr S})$ by \begin{align} S(M)&=\Span_{{\mathbb C}}\{\sigma\otimes e(M)\ |\ \sigma\in G_{M}\},\nonumber\\ D(M)&=\Span_{{\mathbb C}}\{\sigma\otimes e(M)\ |\ \sigma\in G\},\nonumber\\ D({\mathscr O}_{M})&=\Span_{{\mathbb C}}\{\sigma\otimes e(M\cdot g_j)\ |\ \sigma\in G, j=1,\dots,k\}. \end{align} The subalgebra $S(M)$ of ${\mathscr A}_{\alpha}(G,{\mathscr S})$ is isomorphic to ${\mathbb C}^{\alpha_{M}}[G_{M}]$, the twisted group algebra with $2$-cocycle $\alpha_{M}$. Decompose ${\mathscr S}$ into a disjoint union of orbits: \begin{align} {\mathscr S} = \bigcup_{j\in J}{\mathscr O}_{j}. \end{align} Let $M^{j}$ be a representative element of ${\mathscr O}_j$. Then ${\mathscr O}_{j}={\mathscr O}_{M^{j}}=\{M^{j}\cdot\sigma\ |\ \sigma\in G\}$ and ${\mathscr A}_{\alpha}(G,{\mathscr S})=\oplus_{j\in J}D(\textcolor{black}{{\mathscr O}_{M^{j}}})$. For $M\in{\mathscr S}$, let $\Lambda_{G_{M}, \alpha_{M}}$ be the set of all irreducible characters $\lambda$ of ${\mathbb C}^{\alpha_{M}}[G_{M}]$. We denote the corresponding irreducible module by $W(M)_{\lambda}$. Note that $M$ is a semisimple ${\mathbb C}^{\alpha_{M}}[G_{M}]$-module. Let $M^{\lambda}$ be the sum of all irreducible ${\mathbb C}^{\alpha_{M}}[G_{M}]$-submodules of $M$ isomorphic to $W(M)_{\lambda}$. Then \begin{align} M&=\bigoplus_{\lambda\in\Lambda_{G_{M}, \alpha_{M}}}M^{\lambda}. \end{align} Moreover, $M^{\lambda}=W(M)_{\lambda}\otimes M_{\lambda}$, where \begin{align} \label{eq:Mlambda=HomC} M_{\lambda}=\mathcal Hom_{{\mathbb C}^{\alpha_{M}}[G_{M}]}(W(M)_{\lambda},M) \end{align} is the multiplicity space of $W(M)_{\lambda}$ in $M$. We can realize $M_{\lambda}$ as a subspace of $M$ in the following way: let $w\in W(M)_{\lambda}$ be a fixed nonzero vector. Then we can identify $\mathcal Hom_{{\mathbb C}^{\alpha_{M}}[G_{M}]}(W(M)_{\lambda},M)$ with the subspace \begin{align} \{f(w)\ |\ f\in \mathcal Hom_{{\mathbb C}^{\alpha_{M}}[G_{M}]}(W(M)_{\lambda},M)\} \end{align} of $M^{\lambda}$. Note that the actions of ${\mathbb C}^{\alpha_{M}}[G_{M}]$ and $V^{G_{M}}$ on $M$ commute with each other. So $M^{\lambda}$ and $M_{\lambda}$ are weak $(V^{G_{M}},T)$-modules. Furthermore, $M^{\lambda}$ and $M_{\lambda}$ are weak $(V^G,T)$-modules. The same argument as in \cite[Corollary 6.5 and Theorem 6.7]{DRY2023} shows that $M_{\lambda}\neq 0$ and that $M_{\lambda}$ is an irreducible weak $(V^{G_{M}},T)$-module for any $\lambda\in \Lambda_{G_{M}, \alpha_{M}}$. Let ${\mathscr S}=\cup_{j\in J}{\mathscr O}_{j}$ be an orbit decomposition and fix $M^{j}\in {\mathscr O}_j$ for each $j\in J$. For convenience, we set \begin{align} \label{eq:GjGmoduelj} G_{j}&=G_{M^{j}},\quad \Lambda_{j}=\Lambda_{G_{M^{j}}, \alpha_{M^{j}}}, \quad\mbox{ and }\quad W_{j,\lambda}=W(M^{j})_{\lambda} \end{align} for $j\in J$ and $\lambda\in \Lambda_{j}$. We have a decomposition \begin{align} M^{j}&=\bigoplus_{\lambda\in\Lambda_{j}}W_{j,\lambda}\otimes M^{j}_{\lambda} \end{align} as a ${\mathbb C}^{\alpha_{M^{j}}}[G_{M^{j}}]$-module. For $j\in J$ and $\lambda\in\Lambda_{j}$ we set \begin{align} \label{eq:definitionWjlambda} W_{\lambda}^{j}= \Ind^{D(M^{j})}_{S(M^{j})}W_{j,\lambda}. \end{align} It is shown in \cite[Theorem 3.6]{DY2002} that the algebra ${\mathscr A}_{\alpha}(G,{\mathscr S})$ is semisimple and that $\{W^{j}_{\lambda}\ |\ j\in J, \lambda\in\Lambda_{j}\}$ is a complete set of representatives of isomorphism classes of irreducible ${\mathscr A}_{\alpha}(G,{\mathscr S})$-modules.
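To orient the reader, consider the simplest special case (included here purely for illustration) in which ${\mathscr S}=\{M\}$ consists of a single module with $M\cdot\sigma\cong M$ for every $\sigma\in G$. Then $G_{M}=G$ and \begin{align*} S(M)=D(M)={\mathscr A}_{\alpha}(G,{\mathscr S})\cong{\mathbb C}^{\alpha_{M}}[G], \end{align*} the induction in \eqref{eq:definitionWjlambda} is trivial, so $W^{j}_{\lambda}\cong W_{j,\lambda}=W(M)_{\lambda}$, and the classification above reduces to the classification of irreducible modules over the twisted group algebra ${\mathbb C}^{\alpha_{M}}[G]$.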
It follows from \cite[Propositions 6.6 and 6.7]{DY2002} that under the action of ${\mathscr A}_{\alpha}(G,{\mathscr S})\otimes_{{\mathbb C}} \textcolor{black}{V^{G}}$, the space ${\mathscr M}$ decomposes into \begin{align} \label{eq:Mdecomposition} {\mathscr M}&=\bigoplus_{j\in J}\bigoplus_{\lambda\in\Lambda_{j}}W_{\lambda}^{j}\otimes M^{j}_{\lambda}. \end{align} \subsection{Proof of Theorem \ref{theorem:main}} We note that \cite[Theorem 4.7]{DRY2023} holds even for weak $(V,T)$-modules. Thanks to Lemmas \ref{lemma:tensor} and \ref{lemma:associative-action}, \eqref{eq:Mdecomposition}, and \cite[Corollary 3.3 and Theorem 4.7]{DRY2023}, the same argument as in \cite[Theorem 7.4]{DRY2023} proves Theorem \ref{theorem:main}. \input{weakVG.bbl} \end{document}